RPi resurrection – Pt. III – Plex Media Server

The third good use one could put the Pi to, and use thoroughly, is as a media server. And that’s where Plex comes into the picture! I know, it’s nothing new — since probably the inception of the RPi, there have been numerous apps and OSs that have done the same: XBMC ports, Kodi, and the like. But I have had mixed experiences with them — beyond the initial “aha!”, the experience wasn’t what one could call “delightful!” in the long run. I think the biggest hassle for me was loading the media, to start with. This was followed by other aspects, like account management, supported formats (or the lack thereof), and whatnot.

Plex interface on Chrome mobile browser

However, Plex seems to have upped the game several notches. Or maybe the people at Plex know how to impress this Netflix-addicted population — the ones who want to stream on any device, have both app- and browser-based streaming, continue from where they left off, load media directly, share their (in-house) media server with friends/family, restrict content per account, etc. (I am sure you see what I did there.)

Again, I will abstain from listing down the installation steps for Plex — there are numerous websites that have those.

Loading the media just requires following a specific and simple directory format. A spare hard disk which could auto-mount could be attached to the RPi for it. Of course, the advantage of using portable media is that one could attach it to any other media source, and directly modify the media to be made available via Plex. Or, for the geeky ones — a cron job could be written to rsync the media over ssh to this Pi.
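For the geeky option, a sketch of the rsync-over-ssh idea might look like this. The host, user, and paths below are placeholders; adjust them to your own setup:

```shell
# Write a small sync script (paths and host are placeholders):
cat > /tmp/sync-media.sh <<'EOF'
#!/bin/sh
# Mirror the local media library to the Pi; --delete removes remote files
# that no longer exist locally, keeping the Plex library in sync.
rsync -avz --delete /home/me/Media/ pi@raspberrypi.local:/mnt/media/
EOF
chmod +x /tmp/sync-media.sh

# Run it nightly at 2am by appending to the current user's crontab:
#   ( crontab -l 2>/dev/null; echo "0 2 * * * /tmp/sync-media.sh" ) | crontab -
```

With ssh keys set up between the two machines, the job runs unattended.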

There are, however, restrictions on the “free lunch”. If one wants to use the Plex client app, one has to pay a nominal fee. This, of course, enables a host of features not available otherwise.
The “free” option, of course, is the browser-based client, which can do everything that the app can do, albeit at slightly less convenience. I am not complaining, though. 😉

RPi resurrection – Pt. II – NFS

The second good use you could put your Pi to, more so if you have unused external HDDs lying around, is to make an NFS out of it! I’ll spare mirroring the details here; there are many good references on how to go about setting up a Samba server. For example, this one.

Since my laptop’s storage is limited, it often started complaining as soon as any space-consuming operation began. At that point, I had to make some hard life choices! :) You know, of the “shall I keep the big file or zap it?” kind.

On top of that — I am not sure about others — I have realised that OSX has made working with an external HDD as painful as possible! A FAT32-formatted HDD takes forever to be recognised! To top it all, there’s that eternal irk of having to “safely remove the drive”. I mean, c’mon. Windows has done it — how long will OSX take??

Anyway, so those were the reasons. But, I guess, the basic reason was: because I wanted to. ūüėÄ

Once your NFS is on the network, on a Mac it’s just a few more steps to make your new storage available and ready to use!
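For the command-line inclined, mounting the share is quick; a sketch, assuming the Pi exports a directory over NFS (the IP and export path below are placeholders for your own setup):

```shell
PI_IP="192.168.1.20"          # placeholder: your Pi's address
EXPORT_PATH="/mnt/storage"    # placeholder: the exported directory
MOUNT_POINT="$HOME/pi-storage"

# Create a local mount point:
mkdir -p "$MOUNT_POINT"

# On macOS, the resvport option is needed because most NFS servers expect
# requests from a reserved (privileged) port:
#   sudo mount -t nfs -o resvport "$PI_IP:$EXPORT_PATH" "$MOUNT_POINT"
```

If the share is exported via Samba instead, Finder’s “Connect to Server” (smb://…) does the same job.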

The HDD did need to be reformatted, as the original FAT32 format did not go down well with Raspberry Pi OS (erstwhile called Raspbian Buster) — the auto-mount used to fail. I formatted it as FAT to keep it OS-neutral; there are trade-offs, but the benefits outweighed them!

So, yes, that’s it.

RPi resurrection – Pt. I – Pi-hole

A couple of months ago, I got to know about the Pi-hole project. It’s an ad-blocking server that you can configure at the network level. That is, it can be set as the DNS in your home router.

Of course, the benefit of network-wide ad-blocking is that it does its job on all of your home devices. If, however, your router does not allow configuring a DNS, you’d have to configure the DNS on a per-device level. While it may sound painful, trust me, it’s worth it!

Thanks to online advertising, reading even a simple news article has become painful. While a lot of people use ad-block plugins, these plugins are limited to browsers. How do you deal with ads on devices where one does not use a browser — e.g. while playing games? That’s where network-level ad-blocking gets the upper hand!

Okay, I just realised that I haven’t talked about why “RPi” is there in this post’s title. The thing is, I came across a post on Pi-hole while looking for a better (read: any) use for a legacy Pi 2 — which was lying about mostly unused, thanks to being trumped by the newer Pis that I got later.

It seems being a Pi-hole server is one of the best uses I could’ve put it to! The admin console is a rich and responsive UI, which allows you to further tweak the Pi-hole server as per your needs, for example: explicitly allowing/denying any ad server, blocking specific keywords, etc.

Pi-hole Admin Console

Depending upon your privacy requirements, there are also options to not log at all, or to mask/anonymise the data that is logged.

Anyway, as Apache Indian would have put it: ‘nuff said! Do go ahead and try out this amazing project, and may you bask in the glory of an ad-free world! And oh, btw, one doesn’t really need a Raspberry Pi for Pi-hole — you can potentially install it on anything, and there’s a Docker image as well!
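A minimal sketch of the Docker route, assuming the official pihole/pihole image (the timezone and admin password below are placeholders), saved as a script you can run on any Docker host:

```shell
# Write the run command to a script (TZ and WEBPASSWORD are placeholders):
cat > /tmp/run-pihole.sh <<'EOF'
#!/bin/sh
# Expose DNS (53) and the admin console (80) from the container:
docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80 \
  -e TZ="Europe/London" \
  -e WEBPASSWORD="changeme" \
  pihole/pihole
EOF
chmod +x /tmp/run-pihole.sh
```

Point your router (or individual devices) at the host’s IP as DNS, and you’re done.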

“Apology-based computing”

I came across this phrase some time back, and was instantly intrigued by it. So, like any good samaritan, let me share what I could make of it, for the larger good of humankind! While I present my assessment, I’ll also highlight aspects that make it viable in most computing contexts. We will also delve into how this phrase, at some level, touches upon aspects of modern-day computing like eventual consistency, the trade-off between performance and correctness, and Amdahl’s law.

First, let’s understand what the phrase means.

Let me make a claim: we come across apology-based computing in our day-to-day digital goings-on — be it shopping on e-commerce websites, chatting on our phone apps, or general browsing.

So what is it?!

It merely points to the fact that in this age of highly distributed systems1, in a majority of situations, it’s ok for messages to be delayed, go undelivered, or, just go awry! Note that messages here imply any sort of communication between two or more systems1.

If this sounds a bit overbearing and makes you cringe, you might want to recall situations in which you had to “refresh the page” in order to see an update, or those times when your text or chat messages did not get delivered and a cute little (!) sign appeared beside them, or (for the technically inclined) when you had to explicitly invalidate a cache so that updated data would be reflected quickly. Also, recall the fact that you were sort of okay with it, and in a majority of such scenarios did not complain.

And, that’s precisely it!

Over time, as systems have grown and apps have proliferated into almost all aspects of our lives, we have become more and more OK when, every once in a while, the systems do not behave as expected. This is not just philosophical — we humans did not become more patient overnight! Rather, over the years, something interesting has happened: our response as intermediate or ultimate users of these systems has evolved. So much so, that a seemingly bureaucratic statement,

It’s easier to ask for forgiveness than to get permission.


has found a benign presence in computing, and forgiveness or apology aspects have become the order of the day!

Why did this happen, you ask? Because there’s no other option!

Something of this sort is what David Ungar has deliberated upon and demonstrated in his iconic talk titled “Everything You Know (about Parallel Programming) Is Wrong!” (see embedded video).

In this talk, in the light of the above quote, David Ungar highlights how the bias in computing is leaning (or should lean) more towards something he refers to as “end-to-end nondeterminism”, or “race-and-repair”, rather than correctness.

Correctness, or determinism, comes at a cost. Despite one’s best efforts, we are limited by Amdahl’s law within the confines of a single system, and by aspects like the CAP theorem and other distributed-system vagaries the moment a process (transaction) crosses the boundaries of one system.

So, what do we do?

Well, distributed systems engineers are well-aware of the phenomenon that,

Failure is a norm rather than an exception.

which, if you come to think of it, is a paradigm shift from the conventional thinking, where we, as programmers or system architects, were told to treat failure as catastrophic! However, treating failures as the norm in modern-day computing leads us to something very practical — something we call “designing for failure”.
That is to say, we need to build systems with better resiliency, quick failure detection, and fault tolerance, and, with CAP in perspective, be willing to compromise on correctness in favour of availability by being eventually consistent! That pretty much takes care of most scenarios. (Of course, we’re not talking about mission-critical systems or transactions, where correctness and/or availability, whatever the cost, is indispensable!)
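As a toy illustration of “designing for failure”: instead of treating the first failure as fatal, retry the flaky operation with exponential backoff, and only “apologise” upstream once the retries are exhausted. (This is a sketch; real systems would add jitter, timeouts, circuit breakers, and so on.)

```java
import java.util.function.Supplier;

class Retry {
    // Retry op up to maxAttempts times, doubling the wait between attempts.
    static <T> T withBackoff(Supplier<T> op, int maxAttempts) {
        long delayMs = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) throw e; // give up; "apologise" upstream
                try {
                    Thread.sleep(delayMs);           // back off: 100ms, 200ms, 400ms...
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                delayMs *= 2;
            }
        }
    }
}
```

A caller that tolerates a couple of transient failures this way never surfaces them to the user at all.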

So, going back to the aforementioned scenarios: what the systems are doing by making us “refresh” the browser window, or by making us log back in, is deferring to the correctness part of the application. The other option, of course, would have been a 503 or some such, which leads to far more painful memories!

1 Systems = processes


Although, maybe, I am obligated to be loyal to AWS for various reasons (my work is AWS-centric, plus the fact that it facilitated my foray into the cloud), over the short span of time that I have been in this field, I have become increasingly impressed with Microsoft. Their documentation on aspects like cloud design patterns, microservices, etc., is simply impeccable — to say the least!

Moreover, since I am a huge fan of MOOC websites like Coursera, edX, etc., not very long ago I came across a very nice course on edX, offered by Microsoft, called Architecting Distributed Cloud Applications, which, again, I loved!

Apart from cloud-specific courses, they have a lot of other, general CS courses on offer, which are also pretty good.

Overall, I think MS has come of age under Mr. Nadella, and is delivering how it should. Kudos to the teams who keep such useful artefacts updated, as well as open to the world.

After all, the age of silos is long gone, and collaboration is the key!

Hosting a WordPress blog on AWS (for free!)

Much of what I write is a compilation of what I found on the Internet, esp. Pēteris Ņikiforovs’s post. I am indebted to him, as I heavily benefitted from his post while trying to get WP working. However, since that article is a bit dated and cannot be used verbatim, I decided to jot this post down.

It is assumed that one would be having the domain name handy (e.g. myblogsite.com or foobar.io), before proceeding with the installation. In this post, let’s go ahead with foobar.io.

So, foobar.io will be hosted on WordPress, with a MySQL database, and PHP served via nginx. To enable HTTPS access, we’d use a certificate generated via Let’s Encrypt.

On AWS, a t2.micro EC2 instance (free-tier eligible) would suffice for this set-up. Note that this instance is free only for 1 year, at 750 hours a month, so it would be a good idea to:

  1. Not run any other (EC2) instance in the same AWS account
  2. Create an AMI, once the installation is over, so that we can port our installation to a new AWS account after the 1yr period, and not have to do all this over again

Alright, so here we go!

I. Log in to the instance, and start by creating environment variables, which will be used throughout the installation. (Lines to modify are highlighted (optional), except for line #1 (must-do!))

WP_DOMAIN="foobar.io" ## *** Change this domain *** 
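The later steps reference a few more variables; the original listing is truncated here, but a plausible reconstruction (names taken from the commands that follow, values as placeholders) would be:

```shell
# Reconstructed from the variable names used in later steps -- values are
# placeholders; pick your own paths and passwords.
WP_PATH="/var/www/$WP_DOMAIN"
WP_DB_NAME="wordpress"
WP_DB_USERNAME="wordpress"
WP_DB_PASSWORD="$(openssl rand -hex 12)"
MYSQL_ROOT_PASSWORD="$(openssl rand -hex 12)"
WP_ADMIN_USERNAME="admin"
WP_USERNAME_PASSWORD="$(openssl rand -hex 12)"
```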

II. Ensure that the environment is set (assuming that the above variables are put in a file called setEnv.sh)

source ./setEnv.sh

III. Install the software, viz. nginx, MySQL, and PHP (but first remove the default Apache server, and upgrade the packages)

sudo apt remove apache2 # we don't want the default server
sudo apt update && sudo apt upgrade -y # let's stay up to date
echo "mysql-server-5.7 mysql-server/root_password password $MYSQL_ROOT_PASSWORD" | sudo debconf-set-selections
echo "mysql-server-5.7 mysql-server/root_password_again password $MYSQL_ROOT_PASSWORD" | sudo debconf-set-selections
sudo apt install -y nginx php php-fpm php-mysql php-curl php-gd mysql-server

IV. Configure MySQL

# create the WordPress database and user (statements reconstructed; adjust as needed)
mysql -u root -p$MYSQL_ROOT_PASSWORD <<EOF
CREATE DATABASE $WP_DB_NAME;
CREATE USER '$WP_DB_USERNAME'@'localhost' IDENTIFIED BY '$WP_DB_PASSWORD';
GRANT ALL PRIVILEGES ON $WP_DB_NAME.* TO '$WP_DB_USERNAME'@'localhost';
FLUSH PRIVILEGES;
EOF

V. Configure nginx

sudo mkdir -p $WP_PATH/public $WP_PATH/logs
sudo tee /etc/nginx/sites-available/$WP_DOMAIN <<EOF
server {
  listen 80;
  server_name $WP_DOMAIN www.$WP_DOMAIN;
  root $WP_PATH/public;
  index index.php;
  access_log $WP_PATH/logs/access.log;
  error_log $WP_PATH/logs/error.log;
  location / {
    try_files \$uri \$uri/ /index.php?\$args;
  }
  location ~ \.php\$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
  }
}
EOF

sudo ln -s /etc/nginx/sites-available/$WP_DOMAIN /etc/nginx/sites-enabled/$WP_DOMAIN
sudo systemctl restart nginx # if this fails, check the nginx logs before continuing

VI. Install Let’s Encrypt, and auto-install certs. (Note that in the DNS config, the domain should already be pointing to the current server’s public IP, as Let’s Encrypt will perform domain validation.)

cd ~
sudo apt install letsencrypt # we'll use it later, not right now
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
sudo ./letsencrypt-auto --nginx # respond to the prompts after this

VII. Finally, install WordPress

sudo rm -rf $WP_PATH/public/ # !!!
sudo mkdir -p $WP_PATH/public/
sudo chown -R $USER $WP_PATH/public/
cd $WP_PATH/public/

wget https://wordpress.org/latest.tar.gz
tar xf latest.tar.gz --strip-components=1
rm latest.tar.gz

mv wp-config-sample.php wp-config.php
sed -i s/database_name_here/$WP_DB_NAME/ wp-config.php
sed -i s/username_here/$WP_DB_USERNAME/ wp-config.php
sed -i s/password_here/$WP_DB_PASSWORD/ wp-config.php
echo "define('FS_METHOD', 'direct');" >> wp-config.php
sudo chown -R www-data:www-data $WP_PATH/public/

VIII. At this point, we should be able to browse to foobar.io and continue with the WP installation wizard, using the credentials we configured in the first step ($WP_ADMIN_USERNAME, $WP_USERNAME_PASSWORD).

IX. Since Let’s Encrypt certs expire after a given period (90 days), we might want to automate the renewal by creating a cron job

sudo tee /etc/cron.daily/letsencrypt <<EOF
#!/bin/sh
letsencrypt renew --agree-tos && systemctl restart nginx
EOF
sudo chmod +x /etc/cron.daily/letsencrypt

That’s it! One should be all set now!


If, for any reason, the WP password needs to be reset, the following can be run against the MySQL database (WordPress accepts the MD5 value and re-hashes it on the next login)

UPDATE `wp_users` SET `user_pass`= MD5('yourpassword') WHERE `user_login`='yourusername';

Streaming (only) audio using an old AppleTV (and a few nuances, thereof)

Oh, the ever-unsettled human!

In this age of wireless everything, I chose to stay ‘wired’ for a long time, especially when it came to music. Reason: although I don’t (can’t) claim to be an audiophile, I do appreciate high-fidelity (hi-fi) music. Hi-fi audio is soothing even at high amplitudes, and I think good tracks deserve a listening, and not just a hearing! In other words, I am not a .mp3 guy, but more of a .wav (or .flac, if you please) person. Uncompressed/lossless audio rules!

My audio rig is a simple (non-wireless) amp and a pair of monitors, and it pretty much serves my purpose.

The main issue, however, was that the amp is about 9′ (9 ft.) away from my music source(s). This meant I had to make do with a 10′ 3.5mm-to-RCA audio cable to stream audio from the laptop, phone, etc.

This, of course, worked like a charm in terms of music quality — any loss in fidelity was too minor to be noticed — but this arrangement wasn’t very safe. I had to be careful, for myself and more so for others, to avoid tripping over the lengthy cable that went almost diagonally across the room. (Honestly, I was more concerned about what a human, tripping over it, would take along, since the cable was attached to one of the many precious sources at any given point in time. One can easily infer that the priority wasn’t on saving the human in such a scenario.)

Anyway, a wire-free setup was, if not indispensable, a good-to-have. I looked into a few options, the cheapest and most common being a Bluetooth audio receiver. There are many available on Amazon, but from my previous experiences with Bluetooth receivers, I realized that one would have to compromise on sound quality. Now, there might be fancy receivers as well, but I did not want to spend a lot.

The rise of the Phoenix…

I did, however, have an AppleTV 2 which was gathering dust, mostly because over time smarter devices/options had replaced whatever little purpose it originally used to serve. The only use left for it was extending/mirroring the Mac screen wirelessly, which isn’t really a jaw-dropping feature!

Point being, I was keen on making use of this mostly useless AppleTV in the eventual wire-free setup. From previous experience, and a bit of Googling, it came down to two options:

  1. Use the HDMI output option of AppleTV and use the audio port of the target device
  2. Use AppleTV’s optical audio port, somehow.

The first option wasn’t viable because even though there are devices capable of extracting the audio from an HDMI source — for example, display devices like monitors, televisions, or projectors — their sound processing is, as far as I can tell, very rudimentary. So, again, there’s a compromise on sound quality involved. I also found a bunch of cheap HDMI audio “extractors”, but they did not look very different from the Bluetooth receivers I talked about earlier.

The second option seemed to be far more popular. If only the amp in question (or one’s AV receiver) had an optical audio in, I’d have been all set. But it didn’t, and hence, I wasn’t.

Fiio D30K

The process thus needed a “bridge” step — a bridging gadget was required to accept this optical audio from the AppleTV, and then, somehow, magically, let me hook up the amp.
Enter: Fiio D30K! This nifty little thing does exactly that. In other words, if the amp or AV receiver has a simple RCA/3.5mm input, this optical-to-whatever converter does the trick of accepting the optical audio* and providing the converted audio as RCA/3.5mm out.
There are plenty of similar devices available on Amazon, but one might want to get a decent one. I have been happy with Fiio products over the years, so I went with this one.

Anyway, that pretty much completes the setup! Once this was in place, the audio could then be streamed to my audio rig, wirelessly! Yaay!

Just when you think you’re all set…

There was a hitch! It was in the form of flaky audio, especially when the audio file was large. I correctly suspected that it was because there was now too much traffic on the Wi-Fi network (streaming audio + regular Internet use), keeping in perspective the basic (Netgear WGR614) router that I had.

It was time to separate the concerns.

Fortunately, I had a spare Wi-Fi router lying around, on which I set up another network dedicated to audio streaming. This, however, posed two related challenges:

  • How to stream audio from the Internet (as Internet access and audio streaming were now on two different WLANs)
  • How to connect the laptop (one of the main sources) to more than one network.

The resolution was anyone’s guess: one of the networks had to be wired!

I chose to make the streaming network wired, as the router was right next to my desk. On my laptop, this enabled me to stay on the (Internet) Wi-Fi and the streaming network at the same time.
As indicated in the picture, I needed to specify that this (audio-streaming Ethernet) is NOT the network to route Internet requests through, and hence I did not specify a DNS for it. [Note: the IP address specification as ‘manual’ doesn’t have anything to do with the setup — it’s just there for sentimental reasons :).]

Stirred, but not shaken…

A “few minor” issues still remain, but I am happy with the overall setup now. These “few minor” issues are:

  • There’s often a few moments of audio lag when playing streaming video — but that’s not related to this arrangement; I remember observing this lag even in a normal setup
  • Streaming from Android (or other non-iOS devices) requires special software/apps.

[* The Fiio D30K accepts coaxial input as well]

The ‘L’ in SOLID

Uncle Bob‘s aptly coined SOLID design principles form the basis of a robust software application. Today, I want to talk about one of those principles, the Liskov Substitution Principle (LSP), because it’s easy to deviate from, and a few conscious design choices can prevent us from doing so.

In the simplest terms, LSP suggests that:

Any change that makes a subtype not substitutable for its supertype should be avoided.

Suppose we have a class hierarchy like so:

At first glance, the relationships here seem fine, but if we carry out an IS-A test, the issue becomes obvious: a Tea isn’t necessarily a CaffeinatedDrink (for instance: there’s decaf!).

Thus, this design violates LSP, because it asserts that all Teas are CaffeinatedDrinks. Now, a naïve approach would be to retrofit this design to allow for decaf teas as well — by adding a flag or suchlike — but that would be clumsy!

There are several ways to deal with this anomaly, and the decision can be based on the stage of development we’re at, along with other factors. So, let’s continue with our example and see how it can be dealt with:

  1. We know for sure that we’d need to pull Tea out of this hierarchy. Though Coffee looks more justified there, we can pull that out as well, to keep things crisp (and also because someone told you about ‘Decaf Coffee’ as well!). A better option, thus, seems to be:
      • For the common behaviour of Teas and Coffees, introduce a Drink type
      • Both Tea and Coffee can then be subtypes of Drink
      • Caffeinated can just be an interface which is implemented as needed

    Upon this change, we don’t cringe anymore to say that Tea IS-A Drink, with Caffeinated behaviour, whereas a DecafTea differs from it. Another perspective: Coffee is substitutable for both Drink and Caffeinated, but DecafTea is substitutable ONLY for a Drink.

  2. Another approach is to follow Effective Java [Bloch, 2017, Item 18]: favor composition over inheritance. With this, Drink becomes a member of Tea and Coffee, and the Caffeinated (interface) is implemented by all but, say, DecafTea.

Here, we do away with the class hierarchy and directly use the concrete instances of individual drinks. However, we still keep the Caffeinated behaviour separate, and again can safely say that Tea/Coffee IS-A Caffeinated drink. Moreover, we also get a more robust design by disallowing (class-based) inheritance.
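Option 1 above can be sketched in code like so (the class names are illustrative, not from the original diagram):

```java
// Caffeinated is a capability, not a position in the hierarchy:
interface Caffeinated {
    int caffeineMg();
}

// Common behaviour of all drinks lives here:
abstract class Drink {
    abstract String name();
}

// Coffee IS-A Drink and happens to be Caffeinated:
class Coffee extends Drink implements Caffeinated {
    String name() { return "Coffee"; }
    public int caffeineMg() { return 95; }
}

// DecafTea IS-A Drink, but makes no Caffeinated promise:
class DecafTea extends Drink {
    String name() { return "Decaf Tea"; }
}
```

Under option 2, Drink would instead become a (final) member field of each concrete drink, with only the caffeinated ones implementing Caffeinated.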

So how do we ensure we come up with an LSP-compliant design? Well, there are a few simple things that can be borne in mind while working on class associations:

  • Intuition: Does it sound right? [Example: Should StudentEnrollment really extend Student, when all it wants is to access some Student properties?]
  • Concatenation test: Do the parent and child types sound right upon concatenation? [Example: While Flyer+Bird may sound correct, Flyer+Chicken may not. So does Flyer need to be a class type or an interface type?] And, finally and most importantly,
  • IS-A test: Does the IS-A condition hold good?

Smarter ValueObjects & an (even more) elegant Builder

Value Objects (VOs) are prevalent and needed in traditional Java programming. They’re almost everywhere — to hold information within a process, for message-passing, and in various other areas.

Apart from having getters and setters for the properties, on several occasions there’s a requirement for these VOs to implement equals() and hashCode(). Developers usually hand-write these methods or use modern IDE templates to generate them. This works fine initially, until there’s a need to update the VOs with one or more additional properties.

With an update, the baggage that comes with new properties includes:

  • a new set of getters and setters,
  • updates to equals() and hashCode(), and
  • an update to toString(), if needed

This is, of course, cumbersome, error-prone, and the simple VO soon starts looking like an airplane cockpit!

Google’s AutoValue framework is a smart approach to addressing this issue. With just a couple of annotations, almost all of the “junk” is done away with, and the class becomes smarter — for any future property update, the accessors, as well as equals()*, hashCode()**, and toString(), are all handled automagically!

The VO then just looks like a basic set of properties of the given type, like so:

import com.google.auto.value.AutoValue;

@AutoValue
abstract class CartItem {
    abstract int itemCode();

    abstract int quantity();

    abstract int price();

    static CartItem create(int itemCode, int quantity, int price) {
        return new AutoValue_CartItem(itemCode, quantity, price);
    }
}

Note the default presence of a static factory method create(), as suggested in Effective Java [Bloch, 2017], Item 2.

The use of this annotated VO would be no different from a typical one. For instance, the CartItem defined above would have a simple invocation like this:

public void create() throws Exception {
    CartItem item1 = CartItem.create(10, 33, 12);
    CartItem item2 = CartItem.create(10, 33, 12);

    assertEquals(item1, item2); // this would be true
}

Apart from the default support for static factories, AutoValue also supports Builder classes within the VOs. Armed with this knowledge, let’s take another jab at the example in my previous post on Builders.
We continue with the same Cake example and add the required annotations and modifiers. The updated version of the class would then be:

import com.google.auto.value.AutoValue;

@AutoValue
abstract class Cake {
    // Required params
    abstract int flour();
    abstract int bakingPowder();

    // Optional params
    abstract int eggs();
    abstract int sugar();
    abstract int oil();

    static Maker builder(int flourCups, int bkngPwdr) {
        // return builder instance with defaults for non-required fields
        return new AutoValue_Cake.Builder().flour(flourCups).bakingPowder(bkngPwdr).eggs(0).sugar(0).oil(0);
    }

    @AutoValue.Builder
    abstract static class Maker {
        abstract Maker flour(int flourCups);
        abstract Maker bakingPowder(int bkngPwdr);
        abstract Maker eggs(int eggCount);
        abstract Maker sugar(int sugarMg);
        abstract Maker oil(int oilOz);

        abstract Cake build();
    }
}

Observe that:

  • the member Builder class (named Maker here) just needs to be marked with the @AutoValue.Builder annotation, and the framework takes care of everything else
  • in the parent class, we could also have had a no-arg builder() method, but we specifically want to have only one way of building this class — with the required params
  • as shown above, the optional parameters should be set to their default values, since we want the flexibility of choosing only the relevant optional params. [With non-primitive members, @Nullable can be used.]

Just to complete the discussion, here is an example of the ease with which this new builder can be invoked:

public void makeCakes() {

    // Build a cake without oil
    Cake cakeNoOil = Cake.builder(2, 3).sugar(2).eggs(2).build();

    // Check that it has 0 oil
    assertEquals(0, cakeNoOil.oil()); // default

    // Make a cake with oil
    Cake cakeWOil = Cake.builder(2, 3).sugar(2).oil(1).eggs(2).build();

    // Obviously, the two cakes are different
    assertNotEquals(cakeNoOil, cakeWOil); // valid

    // Another cake that's the same as the cake w/ oil
    Cake anotherCakeWOil = Cake.builder(2, 3).sugar(2).oil(1).eggs(2).build();

    assertEquals(cakeWOil, anotherCakeWOil); // valid
}

There are many other fine-grained things that can be done while using AutoValue, like specifying getters for specific properties or customizing toString(), etc.

It’s impressive how AutoValue facilitates writing static factory methods and builders quickly — taking the headache out of defining and updating VOs.

[Full implementation of the abovementioned example is here.]

Further reading:

  1. AutoValue with Builders
  2. Project Lombok also addresses the VO update issue, along with other things

* Effective Java [Bloch, 2017], Item 10
** Effective Java [Bloch, 2017], Item 11

Books: Java


When I was writing the last post, I realized how much I used to be in awe of Effective Java [Bloch, 2017]. It was a book that covered what no other did. It wasn’t just coding — there are plenty of books where one could learn “Your first Java Program” and beyond, and throw an air punch. Nor was it about language syntax and semantics — Java: The Complete Reference [Schildt, 2017] fitted the bill there* — or OOP (every other Java book started with OOP concepts). Rather, Effective Java covered the basics of writing elegant Java code, and as a by-product, also underlined how easy it was to be swayed by ‘convention’. I wouldn’t recommend it as a Java learner’s first book. But it should very well be one’s second Java book, and the one that she/he keeps revisiting throughout a programming career.

Ever since I picked it up again, it’s become tough to keep aside. With each of its topics, I realize how much I have drifted away, over time, from the delight of writing good code, and how much I still need to learn.
Go read it now if you haven’t had a chance yet. What’s more, the much-awaited 3/E, which covers Java 7, 8, and 9, is out now!

While on this topic, let me talk about another one of my favourite books on Java — Thinking in Java [Eckel, 1998].


This is the book I considered a Java developer’s Bible at one point in time. Since there are no new editions after the 4/E, the syntactical parts might be a bit obsolete now. But still, in my opinion, it’s the best book for getting one’s Java and OOP fundamentals in place.

* I have always found Java: The Complete Reference a bit too elaborate for my liking — most of it is about language syntax and semantics. All of that might have been useful in the early days of the Internet, when it wasn’t that easy to look things up online. But I doubt that’s needed now.