Streaming (only) audio using an old AppleTV (and a few nuances thereof)

Oh, the ever-unsettled human!

In this age of wireless everything, I chose to stay ‘wired’ for a long time, especially when it came to music. Reason: although I don’t (can’t) claim to be an audiophile, I do appreciate high-fidelity (hi-fi) music. Hi-fi audio is soothing even at high amplitudes, and I think good tracks deserve a listening, and not just a hearing! In other words, I am not a .mp3 guy, but more of a .wav (or .flac, if you please) person. Uncompressed/lossless audio rules!

My audio rig is a simple (non-wireless) amp and a pair of monitors, and it pretty much serves my purpose.

The main issue, however, was — the amp is about 9′ (9 ft.) away from my music source(s). Which means that I had to make do with a 10′ 3.5mm to RCA audio cable, to stream audio from the laptop, phone, etc.

This, of course, worked like a charm in terms of music quality — any loss in fidelity was too minor to be noticed — but the arrangement wasn’t very safe. I had to be careful myself, and more so warn others, to avoid tripping over the lengthy cable that ran almost diagonally across the room. (Honestly, I was more concerned about what a human, tripping over it, would take along, since the cable was attached to one of the many precious sources at any given point in time. One can easily infer that the priority wasn’t on saving the human in such a scenario.)

Anyway, a wire-free setup was, if not indispensable, at least good to have. I looked into a few options, the cheapest and most common being a Bluetooth audio receiver. There are many available on Amazon, but from my previous experience with Bluetooth receivers, I knew one would have to compromise on sound quality. There might be fancier receivers out there as well, but I did not want to spend a lot.

The rise of the Phoenix…

I did, however, have an AppleTV 2 which was gathering dust, mostly because over time smarter devices/options had replaced whatever little purpose it originally used to serve. The only use left for it was extending/mirroring the Mac screen wirelessly, which isn’t really a jaw-dropping feature!

Point being, I was keen on making use of this mostly useless AppleTV in the eventual wire-free setup. From previous experience, and a bit of Googling, it came down to two options:

  1. Use the AppleTV’s HDMI output, and then use the audio-out port of the target (display) device
  2. Use AppleTV’s optical audio port, somehow.

The first option wasn’t viable: even though there are devices capable of extracting audio from an HDMI source — display devices like monitors, televisions, or projectors — their sound processing is, as far as I can tell, very rudimentary. So, again, there’s a compromise on sound quality involved. I also found a bunch of cheap HDMI audio “extractors” — but they did not look very different from the Bluetooth receivers I talked about earlier.

The second option seemed to be far more popular. If only the amp in question (or one’s AV receiver) had an optical audio input — I’d have been all set. But it didn’t, and hence, I wasn’t.

Fiio D30K

The setup thus needed a “bridge” — a gadget that would accept this optical audio from the AppleTV and then, somehow, magically, let me hook up the amp.
Enter: Fiio D30K! This nifty little thing does exactly that. In other words, if the amp or AV receiver has a simple RCA/3.5mm input, this optical-to-analogue converter does the trick of accepting the optical audio*, and providing the converted audio as RCA/3.5mm out.
There are plenty of similar devices available on Amazon, but one might want to get a decent one. I have been happy with Fiio products over the years, so I went with this one.

Anyway, that pretty much completes the setup! Once this was in place, the audio could then be streamed to my audio rig, wirelessly! Yaay!

Just when you think you’re all set…

There was a hitch! It came in the form of flaky audio, especially when the audio file was large. I correctly suspected that this was because there was now too much traffic on the Wifi network (streaming audio + regular Internet use), given the basic (Netgear WGR614) router that I had.

It was time to separate the concerns.

Fortunately, I had a spare Wifi router lying around, on which I set up another network dedicated to audio streaming. This, however, posed two related challenges:

  • How to stream audio from the Internet (as Internet access and audio streaming were now on two different WLANs)
  • How to connect the laptop (one of the main sources) to more than one network at the same time.

The resolution was anyone’s guess: one of the networks had to be wired!

I chose to make the streaming network wired, as the router was right next to my desk. On the laptop, this let me stay on the (Internet) Wifi and the streaming network at the same time.
As indicated in the picture, I needed to specify that this (audio-streaming Ethernet) network is NOT the one to route Internet requests through, and hence I did not specify a DNS server for it. [Note: the IP address being set to ‘manual’ doesn’t have anything to do with the setup — it’s just there for sentimental reasons :).]

Stirred, but not shaken…

A “few minor” issues still remain, but I am happy with the overall setup now. These “few minor” issues are:

  • There’s often a few moments of audio lag when playing streaming video — but that’s not related to this arrangement; I remember observing this lag even with the wired setup
  • Streaming from Android (or other non-iOS devices) requires special software/apps.

[* The Fiio D30K accepts coaxial input as well]

The ‘L’ in SOLID

Uncle Bob‘s SOLID design principles form the basis of a robust software application. Today, I want to talk about one of those principles, the Liskov Substitution Principle (LSP), because it’s easy to deviate from, and a few conscious design choices can prevent us from doing so.

In the simplest terms, LSP suggests that:

Any change that makes a subtype not substitutable for its supertype should be avoided.

Suppose we have a class hierarchy like so:
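(The original post shows the hierarchy as a class diagram; here is a rough Java sketch of the same idea, with purely illustrative members:)

abstract class CaffeinatedDrink {
    abstract int caffeineMg();
}

class Coffee extends CaffeinatedDrink {
    @Override int caffeineMg() { return 95; }    // illustrative value
}

class Tea extends CaffeinatedDrink {
    @Override int caffeineMg() { return 26; }    // ...but what about decaf tea?
}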

At first glance, the relationships here seem fine, but if we carry out an IS-A test, the issue becomes obvious: Tea isn’t necessarily a CaffeinatedDrink (for instance, there’s decaf!).

Thus, this design violates LSP, because it indicates that all Teas are Caffeinated Drinks. Now, a naïve approach would be to try to retrofit this design, to allow for decaf teas as well — by adding a flag or suchlike — but that would be clumsy!

There are several ways to deal with this anomaly, and the decision can be based on the stage of development we’re at, along with other factors. So, let’s continue with our example and see how it can be dealt with:

  1. We know for sure that we’d need to pull Tea out of this hierarchy. Though Coffee looks more justified there, we can pull that out as well, to keep things crisp (and also because someone told us about ‘Decaf Coffee’!).
      A better option, thus, seems to be:

    • For common behavior of Teas & Coffees, introduce a Drink type
    • Both Tea and Coffee can then be subtypes of Drink
    • Caffeinated can just be an interface which is implemented as needed

    With this change, we no longer cringe to say that Tea IS-A Drink with Caffeinated behaviour, whereas a DecafTea differs from it. Another perspective: Coffee is substitutable both for Drink and for Caffeinated, but DecafTea is substitutable ONLY for a Drink. (This option is sketched in code after the list below.)

  2. Another approach is to follow Effective Java [Bloch, 2017, Item 18]: Favor composition over inheritance. With this, Drink becomes a member of Tea and Coffee, and Caffeinated (interface) is implemented by all but, say, DecafTea. (Also sketched after this list.)


Here, we do away with the class hierarchy, and directly use the concrete instances of individual drinks. However, we do keep the Caffeinated behaviour separated, and again, can safely say that Tea/Coffee IS-A Caffeinated drink. Moreover, we also get a more robust design by disallowing (class-based) inheritance.
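For concreteness, here’s a minimal sketch of what option 1 could look like — a Drink supertype with Caffeinated as a separate capability (member names like volumeMl() are illustrative):

abstract class Drink {
    abstract int volumeMl();
}

interface Caffeinated {
    int caffeineMg();
}

class Coffee extends Drink implements Caffeinated {
    @Override int volumeMl() { return 250; }
    @Override public int caffeineMg() { return 95; }
}

class Tea extends Drink implements Caffeinated {
    @Override int volumeMl() { return 250; }
    @Override public int caffeineMg() { return 26; }
}

class DecafTea extends Drink {                    // substitutable for Drink, but not for Caffeinated
    @Override int volumeMl() { return 250; }
}

And a corresponding sketch of the composition-based option 2, where Drink becomes a member:

final class Drink {
    private final int volumeMl;
    Drink(int volumeMl) { this.volumeMl = volumeMl; }
    int volumeMl() { return volumeMl; }
}

interface Caffeinated {
    int caffeineMg();
}

final class Coffee implements Caffeinated {
    private final Drink drink = new Drink(250);   // HAS-A Drink
    @Override public int caffeineMg() { return 95; }
    int volumeMl() { return drink.volumeMl(); }   // forwarding method
}

final class DecafTea {
    private final Drink drink = new Drink(250);   // HAS-A Drink, no Caffeinated behaviour
    int volumeMl() { return drink.volumeMl(); }
}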

How do we ensure we come up with an LSP-compliant design? Well, there are a few simple things that can be borne in mind while working on class associations:

  • Intuition: Does it sound right? [Example: Should StudentEnrollment really extend Student, when all it wants is access to some Student properties? A sketch follows this list.]
  • Concatenation test: Do the Parent and Child types sound right upon concatenation? [Example: While Flyer+Bird may sound correct, a Flyer+Chicken may not. So does ‘Flyer‘ need to be a class type or an interface type?] And finally, most importantly,
  • IS-A test: Is the IS-A condition holding good?
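As a quick sketch of the first point above (class members are hypothetical), composition keeps StudentEnrollment out of Student’s hierarchy while still giving it the data it needs:

final class Student {
    private final String id;
    private final String name;
    Student(String id, String name) { this.id = id; this.name = name; }
    String id()   { return id; }
    String name() { return name; }
}

final class StudentEnrollment {
    private final Student student;       // HAS-A, not IS-A
    private final String courseCode;
    StudentEnrollment(Student student, String courseCode) {
        this.student = student;
        this.courseCode = courseCode;
    }
    String summary() { return student.name() + " enrolled in " + courseCode; }
}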

Smarter ValueObjects & an (even more) elegant Builder

Value Objects (VOs) are prevalent and needed in traditional Java programming. They’re almost everywhere — to hold information within a process, for message-passing, and in various other areas.

Apart from having getters and setters for the properties, on several occasions, there’s a requirement for these VOs to implement equals() and hashCode(). Developers usually hand-write these methods or use the modern IDE templates to generate them. This works fine initially — until there’s a need to update the VOs with one or more additional properties.

With an update, the baggage that comes with new properties includes:

  • a new set of getters and setters,
  • updates required to equals(), hashCode(), and,
  • update required to toString(), if needed

This is, of course, cumbersome, error-prone, and the simple VO soon starts looking like an airplane cockpit!

Google’s AutoValue framework is a smart approach to address this issue. With just a couple of annotations, almost all of the “junk” is done away with, and the class becomes smarter — on any future property update, the accessors, as well as equals()*, hashCode()**, and toString(), are all handled automagically!

The VO then just looks like a basic set of properties of the given type, like so:

import com.google.auto.value.AutoValue;

@AutoValue
abstract class CartItem {
    abstract int itemCode();

    abstract int quantity();

    abstract int price();

    // Hand-written static factory; AutoValue generates the AutoValue_CartItem implementation
    static CartItem create(int itemCode, int quantity, int price) {
        return new AutoValue_CartItem(itemCode, quantity, price);
    }
}

Note the static factory method create(), as suggested in Effective Java [Bloch, 2017], Item 1.

The use of this annotated VO would be no different from a typical one. For instance, the CartItem defined above would have a simple invocation like this:

@Test
public void create() throws Exception {
    CartItem item1 = CartItem.create(10, 33, 12);
    CartItem item2 = CartItem.create(10, 33, 12);

    assertEquals(item1, item2); // passes: value-based equals() generated by AutoValue
}

Apart from supporting static factories, AutoValue also supports Builder classes within the VOs. Armed with this knowledge, let’s take another jab at the example in my previous post on Builders.
We continue with the same Cake example and add the required annotations and modifiers. The updated version of the class would then be:

import com.google.auto.value.AutoValue;

@AutoValue
abstract class Cake {
    // Required params
    abstract int flour();
    abstract int bakingPowder();

    // Optional params
    abstract int eggs();
    abstract int sugar();
    abstract int oil();

    static Maker builder(int flourCups, int bkngPwdr) {
        // return a builder instance with defaults for the non-required fields
        return new AutoValue_Cake.Builder()
                .flour(flourCups)
                .bakingPowder(bkngPwdr)
                .eggs(0)
                .sugar(0)
                .oil(0);
    }

    @AutoValue.Builder
    abstract static class Maker {
        abstract Maker flour(int flourCups);
        abstract Maker bakingPowder(int bkngPwdr);
        abstract Maker eggs(int eggCount);
        abstract Maker sugar(int sugarMg);     
        abstract Maker oil(int oilOz);

        abstract Cake build();
    }
}

Observe that:

  • the member Builder class (named Maker here) just needs to be marked with @AutoValue.Builder annotation, and the framework takes care of everything else
  • in the parent class, we could also have had a no-arg builder() method but we specifically want to have only one way of building this class — with the required params
  • as shown above, the optional parameters should be set to their default values since we want the flexibility of choosing only the relevant optional params. [With non-primitive members, @Nullable can be used — see the sketch after this list.]
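For the non-primitive case, a minimal sketch — using a hypothetical Pastry type with an optional frosting member (any @Nullable annotation should do):

import com.google.auto.value.AutoValue;
import javax.annotation.Nullable;

@AutoValue
abstract class Pastry {
    abstract int flourCups();

    @Nullable
    abstract String frosting();    // optional non-primitive member; may be left unset

    static Maker builder(int flourCups) {
        return new AutoValue_Pastry.Builder().flourCups(flourCups);
    }

    @AutoValue.Builder
    abstract static class Maker {
        abstract Maker flourCups(int flourCups);
        abstract Maker frosting(@Nullable String frosting);
        abstract Pastry build();
    }
}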

Just to complete the discussion, here is an example of the ease with which this new builder can be invoked:

@Test
public void makeCakes() {

    // Build a cake without oil
    Cake cakeNoOil = Cake.builder(2, 3).sugar(2).eggs(2).build();

    assertNotNull(cakeNoOil);

    // Check that it has 0 oil
    assertEquals(0, cakeNoOil.oil()); // default

    // Make cake with oil
    Cake cakeWOil = Cake.builder(2, 3).sugar(2).oil(1).eggs(2).build();

    // Obviously, both the cakes are different
    assertNotEquals(cakeNoOil, cakeWOil); // valid

    // Another cake that's same as cake w/ oil
    Cake anotherCakeWOil = Cake.builder(2, 3).sugar(2).oil(1)
            .eggs(2).build();

    assertEquals(cakeWOil, anotherCakeWOil); // valid
}

There are many other fine-grained things that can be done while using AutoValue, like specifying getters for specific properties or customizing toString(), etc.
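For instance, as I understand it, providing a concrete method in the annotated class is enough for AutoValue to skip generating that one. A sketch, extending the earlier CartItem (the lineTotal() accessor and the custom toString() are illustrative):

import com.google.auto.value.AutoValue;

@AutoValue
abstract class CartItem {
    abstract int itemCode();
    abstract int quantity();
    abstract int price();

    // A derived, hand-written getter living alongside the generated accessors
    int lineTotal() {
        return quantity() * price();
    }

    // A concrete toString(); AutoValue should leave this one alone
    @Override
    public final String toString() {
        return "CartItem#" + itemCode() + " x " + quantity();
    }

    static CartItem create(int itemCode, int quantity, int price) {
        return new AutoValue_CartItem(itemCode, quantity, price);
    }
}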

It’s impressive how AutoValue facilitates writing static factory methods and builders quickly — taking the headache out of defining and updating VOs.

[Full implementation of the abovementioned example is here.]

Further reading:

  1. AutoValue with Builders
  2. Project Lombok also addresses the VO update issue, along with other things

* Effective Java [Bloch, 2017], Item 10
** Effective Java [Bloch, 2017], Item 11

Books: Java

EffectiveJava-Bloch

When I was writing the last post, I realized how much I used to be in awe of Effective Java [Bloch, 2017]. It was a book that covered what no other did. It wasn’t just about coding — there are plenty of books where one could learn “Your first Java Program” and beyond, and throw an air punch. Nor was it about language syntax and semantics — Java Complete Reference [Schildt, 2017] fit the bill there* — or about OOP (every other Java book started with OOP concepts). Rather, Effective Java covered the basics of writing elegant Java code, and as a by-product, also underlined how easy it was to be swayed by ‘convention’. I wouldn’t recommend it as a Java learner’s first book. But it should very well be one’s second Java book, and the one to keep revisiting throughout a programming career.

Ever since I picked it up again, it’s become tough to keep it aside. With each of its topics, I realize how much I have drifted, over time, from the delight of writing good code, and how much I still need to learn.
Go read it now if you haven’t had a chance yet. What’s more, the much-awaited 3/E, which covers Java 7, 8 and 9, is out now!

While on this topic, let me talk about another one of my favourite books on Java — Thinking in Java [Eckel, 1998].

ThinkingInJava-Eckel

This is the book I considered a Java developer’s Bible at one point in time. Since there are no new editions after the 4/E, the syntactical parts might be a bit dated now. But still, in my opinion, it’s the best book to get one’s (Java & OOP) fundamentals in place.

* I have always found Java Complete Reference a bit too elaborate for my liking. Most of it is about language syntax and semantics. That might have been useful in the early days of the Internet, when it wasn’t easy to look things up online, but I doubt it’s needed now.

The elegance of Builder pattern

Paraphrasing Josh Bloch in Effective Java [Bloch, 2017, Item 2]:

While creating objects, in cases where the number of optional parameters is considerable, say 4 or more, one might think of static factory methods [Bloch, 2017, Item 1] as a solution — but they’re more suitable for a small set of parameters. When there are several optional params, static factories become cumbersome, since it’s hard to imagine and cater to all possible parameter combinations. Another approach proposed in such cases is the JavaBeans pattern, but it has its own shortcomings — notably, the object can sit in an inconsistent state partway through its construction.

Therefore, we usually go with multiple (telescoping) constructors for such requirements. For example:

public Cake(int oilTbsp, int flourMg){
  this(oilTbsp, flourMg, 0);
}

public Cake(int oilTbsp, int flourMg, int eggCount){
  this(oilTbsp, flourMg, eggCount, 0);
}

public Cake(int oilTbsp, int flourMg, int eggCount, int bakingPowderMg){
  // the "widest" constructor finally assigns the fields
  this.oilTbsp = oilTbsp;
  this.flourMg = flourMg;
  this.eggCount = eggCount;
  this.bakingPowderMg = bakingPowderMg;
}

//...

Such implementations, although purpose-serving, are a bit contrived in that the class client needs to tally the parameters accurately. With a large parameter list, this quickly becomes unwieldy.

A variation of the Builder pattern [Gamma, 1995] is what Bloch suggests for such cases. In it, the builder class is a static member of the class it builds, for example:

public class Cake{
  //...
  private Cake(Builder builder){
    //...
  }

  public static class Builder{
    //...
  }
}

Since the original constructor is hidden, the client first obtains a Builder instance — passing all the required params to its constructor or static factory. The client then calls setters on the returned builder object to set the optional parameters of interest. Finally, the client calls the build() method to generate an immutable object.
Since the builder setter methods return the builder itself, the invocations can be chained, like so:

// Set only the parameters of interest
Cake cake = new Cake.Builder(350, 45).egg(2).sugar(240).cocoa(35)...build();

As is apparent, this is intuitive as well as concise.

A builder can be further enhanced by enabling it to build more than one object, based on parameters. One has to be cautious, however, to disallow building an object in an inconsistent state. This can be ensured by validating the passed parameters as early as possible and throwing a suitable exception, as sketched below.
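A sketch of what that could look like inside the nested Builder of the Cake skeleton above (only a couple of fields shown; the exception choices are mine):

public static class Builder {
    private final int flourMg;           // required
    private int eggCount;                // optional
    private int sugarMg;                 // optional

    public Builder(int flourMg) { this.flourMg = flourMg; }

    public Builder egg(int eggCount)  { this.eggCount = eggCount; return this; }
    public Builder sugar(int sugarMg) { this.sugarMg = sugarMg;   return this; }

    public Cake build() {
        // Fail fast: never hand out a Cake in an inconsistent state
        if (flourMg <= 0) {
            throw new IllegalStateException("A cake needs flour, got: " + flourMg);
        }
        if (eggCount < 0 || sugarMg < 0) {
            throw new IllegalArgumentException("Ingredient amounts cannot be negative");
        }
        return new Cake(this);
    }
}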

Builders can also be used to automate certain tasks and fill in fields themselves — for example, auto-incrementing an object ID (see the sketch below).
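A sketch of that idea, with a hypothetical Order type whose builder assigns an auto-incremented id in build():

import java.util.concurrent.atomic.AtomicLong;

public class Order {
    private static final AtomicLong SEQUENCE = new AtomicLong();

    private final long id;
    private final String item;

    private Order(Builder builder) {
        this.id = builder.id;
        this.item = builder.item;
    }

    public long id() { return id; }

    public static class Builder {
        private long id;
        private String item;

        public Builder item(String item) { this.item = item; return this; }

        public Order build() {
            this.id = SEQUENCE.incrementAndGet();   // the builder fills this in; clients never supply it
            return new Order(this);
        }
    }
}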

As Josh Bloch advises, we should be using Builders as often as possible, especially in cases where the number of parameters is significant. They’re a simple and elegant alternative to telescoping constructors or JavaBeans.

[Full implementation of the Cake builder example is here.]

Tying snips.ai, Strava & Google Speech Engine

So, this happened a couple months ago, and I had lots of fun doing it (watch the video):

A detailed post would follow. (And yes, as mentioned in the video description, kindly ignore the choice of LED colours :)).

A Spark learning.

About a month back, I’d done something I was not very proud of — a piece of code that I was not very happy about — and I had decided to get back to it when time permitted.

The scenario was something close to the typical Word Count problem, in which the task required counting words at specific indices, and then printing the unified count per word. Of course, there were many other things to consider in the final solution since a streaming context was being dealt with — but those are outside the purview of what I’m trying to highlight here.

An abridged problem statement could be put as:

Given a comma-separated stream of lines, pick the words at indices j and k from each line, and print the cumulative count of all the words at jth and kth positions.
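For illustration (values and indices purely hypothetical): with j = 1 and k = 3 (0-indexed), the lines

apple,orange,banana,mango
apple,grape,banana,mango
kiwi,orange,pear,mango

would yield the cumulative counts orange → 2, grape → 1, mango → 3.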

So the ‘crude’ solution to isolate all the words was:

  1. Implement a PairFunction<String, String, Integer>, to get all the words at a given index
  2. Use the above function to get all occurrences of words at index j, and get a Pair (Word -> Count) RDD
  3. Use the same function to get all occurrences of words at index k, and get another Pair (Word -> Count) RDD
  4. Use the RDD union() operator to combine the above two RDDs, and get a unified RDD
  5. Do other operations (reduceByKey, etc. …)

As is apparent, anyone would cringe at this approach, especially due to the two passes (#2, #3) over the entire data set — even though it gets the work done!
So I decided to revisit this piece, armed with additional knowledge of what Spark offers.
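For reference, a rough reconstruction of that two-pass version (variable names are illustrative; lines is the incoming JavaDStream<String>, as in the snippet further below):

// Pass 1: words at index j
JavaPairDStream<String, Integer> wordsAtJ = lines.mapToPair(
        s -> new Tuple2<>(s.split(",")[Constants.J_INDEX], 1));

// Pass 2: words at index k — a second full pass over the same lines
JavaPairDStream<String, Integer> wordsAtK = lines.mapToPair(
        s -> new Tuple2<>(s.split(",")[Constants.K_INDEX], 1));

// Combine the two streams, then count
JavaPairDStream<String, Integer> unified = wordsAtJ.union(wordsAtK)
        .reduceByKey((a, b) -> a + b);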

One useful tool is the flatMap operation that Spark’s Java 8 API offers. By Spark’s definition:

flatMap is a DStream operation that creates a new DStream by generating multiple new records from each record in the source DStream

Given our requirement, this was exactly what was needed — create two records (one each for the jth and kth index words) for each incoming line. This benefits us in that we get the final (unified) RDD in just a single pass over the incoming stream of lines.

I went ahead with a flatMapToPair implementation, like so:

// Single pass: emit one (word, 1) pair for each of the two indices, per incoming line
JavaPairDStream<String, Integer> unified = lines.flatMapToPair(s -> {
        String[] a = s.split(",");
        List<Tuple2<String, Integer>> apFreq = new ArrayList<>();
        apFreq.add(new Tuple2<>(a[Constants.J_INDEX], 1));
        apFreq.add(new Tuple2<>(a[Constants.K_INDEX], 1));
        return apFreq.iterator();
});

To further validate the benefits, I ran some tests* with datasets ranging from 1M to 100M records, and the benefits of the flatMap approach became more and more pronounced as the data grew bigger.

Following were the observations.

[Chart: average stage time for the union vs. flatMap approaches at the 1M, 10M, and 100M record marks]

As we can see, whilst the difference is ~2s for 1 million records, it almost doubles by the time we reach 10M, and more than doubles around the 100M mark.
It’s therefore obvious that production systems (e.g. a real-time analytics solution), where the data volume is much higher, need to be careful about the choice of each operation (transformation, filtering, or any other action), as these govern the inter-stage as well as the overall throughput of a Spark application.

 


* Test conditions:
– Performed on a 3-Node (m4.large) Spark cluster on AWS, using Spark 2.2.0 on Hadoop 2.7
– Considers only the time spent on a particular stage (union or flatMap), available via Spark UI
– Each reading is an average of time taken in 3 separate runs

A ‘Kafka > Storm > Kafka’ Topology gotcha

If you’re trying to make a Kafka Storm topology work, and are getting baffled by your recipient topic not receiving any damn thing, here’s the secret:

  • The default org.apache.storm.kafka.bolt.KafkaBolt implementation expects the outgoing message under a single, well-known field name from the upstream (Bolt/Spout)
  • If you’re tying your KafkaBolt directly to a KafkaSpout, you’ve got to use the spout’s internal field name: str
  • If, however, you have an upstream Bolt doing some filtering, make sure that you tie the name of your ONLY output field (the value) to the KafkaBolt

Let me break it down a little more, for the greater good.

Consider a very basic Storm topology where we read raw messages from a Kafka topic (say, raw_records), enrich/cleanse them (in a Bolt), and publish these enriched/filtered records on another Kafka topic (say, filtered_records).

Given that the final publisher (the guy that talks to filtered_records) is a KafkaBolt, it needs a way to find out which tuple field the outgoing value is available in. And that field name is what you need to specify, based on the upstream bolt or spout.

So, the declared output field of the upstream Bolt would be something like:

@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    outputFieldsDeclarer.declare(new Fields(new String[]{"output"}));
}

Note the output field named “output“ — this is the field the KafkaBolt will read the message value from.

Now, in the KafkaBolt, the only thing to take care of is using this field name in the configuration, like so:

KafkaBolt bolt = (new KafkaBolt()).withProducerProperties(newProps(BROKER_URL,
        OUTPUT_TOPIC))
        .withTopicSelector(new DefaultTopicSelector(OUTPUT_TOPIC))
        .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("key",
                "output"));

The default message field name is “message“ (and the default key field name is “key“), so you could as well use the no-arg constructor of FieldNameBasedTupleToKafkaMapper by naming your upstream output field “message“.
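For context, a minimal sketch of how the pieces might be wired together (the spout/bolt names and the FilterBolt class are hypothetical; bolt is the KafkaBolt configured above):

TopologyBuilder builder = new TopologyBuilder();

// KafkaSpout reading from raw_records (spout configuration elided)
builder.setSpout("raw-spout", kafkaSpout);

// Our filtering/enriching bolt, which declares the "output" field
builder.setBolt("filter-bolt", new FilterBolt())
        .shuffleGrouping("raw-spout");

// The KafkaBolt configured above, publishing to filtered_records
builder.setBolt("kafka-bolt", bolt)
        .shuffleGrouping("filter-bolt");

StormTopology topology = builder.createTopology();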

If, however, you have a scenario where you want to pass both the key and the value from the upstream, for example:

@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    outputFieldsDeclarer.declare(new Fields(new String[]{"word","count"}));
}

Note that we’ve specified the key field here as “word“ and the message (value) field as “count“.

Then obviously, we need to use this (modified) key name downstream, like so:

KafkaBolt bolt = (new KafkaBolt()).withProducerProperties(newProps(BROKER_URL,
        OUTPUT_TOPIC))
        .withTopicSelector(new DefaultTopicSelector(OUTPUT_TOPIC))
        .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word",
                "count"));

Update (2017-08-23): Added the scenario where a modified key name can be used.