Streaming (only) audio using an old AppleTV (and a few nuances, thereof)

Oh, the ever-unsettled human!

In this age of wireless everything, I chose to stay ‘wired’ for a long time, especially when it came to music. Reason: although I don’t (can’t) claim to be an audiophile, I do appreciate hi-fidelity (hi-fi) music. Hi-fi audio is soothing even at high amplitudes, and I think good tracks deserve a listening, and not just a hearing! In other words, I am not a .mp3 guy, but more of a .wav (or .flac, if you please) person. Uncompressed/lossless audio rules!

My audio rig is a simple (non-wireless) amp and a pair of monitors, and it pretty much serves my purpose.

The main issue, however, was — the amp is about 9′ (9 ft.) away from my music source(s). Which means that I had to make do with a 10′ 3.5mm to RCA audio cable, to stream audio from the laptop, phone, etc.

This, of course, worked like a charm in terms of music quality — any loss in fidelity was too minor to notice — but the arrangement wasn’t very safe. I had to be careful myself, and more so for others, to avoid tripping over the lengthy cable that ran almost diagonally across the room. (Honestly, I was more concerned about what a tripping human would take along, since the cable was attached to one of the many precious sources at any given point in time. One can easily infer that the priority wasn’t on saving the human in such a scenario.)

Anyway, a wire-free setup was, if not indispensable, at least good to have. I looked into a few options, the cheapest and most common being a Bluetooth audio receiver. There are many available on Amazon, but from my previous experience with Bluetooth receivers, I realized that one would have to compromise on sound quality. Now, there might be fancier receivers as well, but I did not want to spend a lot.

The rise of the Phoenix…

I did, however, have an AppleTV 2 which was gathering dust, mostly because over time smarter devices/options had replaced whatever little purpose it originally used to serve. The only use left for it was extending/mirroring the Mac screen wirelessly, which isn’t really a jaw-dropping feature!

Point being, I was keen on putting this mostly useless AppleTV to work in the eventual wire-free setup. From previous experience, and a bit of Googling, it came down to two options:

  1. Use the HDMI output of the AppleTV and tap the audio at the target device.
  2. Use the AppleTV’s optical audio port, somehow.

The first option wasn’t viable because even if there are devices capable of extracting the audio from an HDMI source — display devices like monitors, televisions, or projectors — their sound processing is, as far as I can tell, very rudimentary. So, again, there’s a compromise on sound quality involved. I also found a bunch of cheap HDMI audio “extractors” — but they did not look very different from the Bluetooth receivers I talked about earlier.

The second option seemed to be far more popular. If only the amp in question (or one’s AV Receiver) had an optical audio in — I’d have been all set. But there wasn’t, and hence, I wasn’t.

Fiio D30K

The process thus needed a “bridge” step — a gadget was required to accept this optical audio from the AppleTV, and then somehow, magically, let me hook up the amp.
Enter: Fiio D30K! This nifty little thing does exactly that. In other words, if the amp or AV Receiver has a simple RCA/3.5mm input, this optical-to-whatever converter device would do the trick of accepting the optical audio*, and providing the converted audio as RCA/3.5mm out.
There are plenty of similar devices available on Amazon, but one might want to get a decent one. I have been happy with Fiio products over the years, so I went with this one.

Anyway, that pretty much completes the setup! Once this was in place, the audio could then be streamed to my audio rig, wirelessly! Yaay!

Just when you think you’re all set…

There was a hitch! It came in the form of flaky audio, especially when the audio file was large. I correctly suspected that this was because there was now too much traffic on the Wifi network (streaming audio + regular Internet use), keeping in perspective the basic router (a Netgear WGR614) that I had.

It was time to separate the concerns.

Fortunately, I had a spare Wifi router lying around, on which I set up another network dedicated to audio streaming. This, however, posed two related challenges:

  • How to stream audio from the Internet (as Internet access and audio streaming were now on two different WLANs)
  • How to connect the laptop (one of the main sources) to more than one network at a time.

The resolution was anyone’s guess: one of the networks had to be wired!

I chose to make the streaming network wired, as the router was right next to my desk. On my laptop, this let me stay on the (Internet) Wifi and the streaming network at the same time.
As indicated in the picture, I needed to specify that this (audio streaming) ethernet is NOT the network to route Internet requests through, and hence I did not specify a DNS server for it. [Note: setting the IP address to ‘manual’ doesn’t have anything to do with the setup — it’s just there for sentimental reasons :).]

Stirred, but not shaken…

A “few minor” issues still remain, but I am happy with the overall setup now. These “few minor” issues are:

  • There’s often a few moments of audio lag when playing streaming video — but that’s not really related to this arrangement; I remember observing this lag even in the wired setup.
  • Streaming from Android (or other non-iOS devices) requires special software/apps.

[* The Fiio D30K accepts coaxial input as well]

A ‘Kafka > Storm > Kafka’ Topology gotcha

If you’re trying to make a Kafka Storm topology work, and are getting baffled by your recipient topic not receiving any damn thing, here’s the secret:

  • The default org.apache.storm.kafka.bolt.KafkaBolt implementation expects only a single output field from the upstream (Bolt/Spout)
  • If you’re tying your KafkaBolt directly to a KafkaSpout, you’ve got to use the internal field name: str
  • However, if you have an upstream Bolt doing some filtering, then make sure that you tie the name of your ONLY output field (the value) to the KafkaBolt

Let me break it down a little bit more for the larger good.

Consider a very basic Storm topology where we read raw messages from a Kafka Topic (say, raw_records), enrich/cleanse them (in a Bolt), and publish these enriched/filtered records on another Kafka Topic (say, filtered_records).

Given that the final publisher (the guy that talks to filtered_records) is a KafkaBolt, it needs a way to find out which tuple field holds the values to publish. And that field name is what you need to wire up from the upstream bolt or spout.
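To make the field-name plumbing concrete, here’s a small plain-Java sketch — no Storm dependency; the Tuple is simulated with a Map, and the class below is a simplified stand-in, not Storm’s actual FieldNameBasedTupleToKafkaMapper — showing that the mapper is configured with field *names* only, and looks the actual values up in each tuple:

```java
import java.util.Map;

// Simplified stand-in for Storm's FieldNameBasedTupleToKafkaMapper:
// it is configured with field names; the values come out of each tuple.
class FieldNameMapper {
    private final String keyField;
    private final String messageField;

    FieldNameMapper(String keyField, String messageField) {
        this.keyField = keyField;
        this.messageField = messageField;
    }

    Object key(Map<String, Object> tuple)     { return tuple.get(keyField); }
    Object message(Map<String, Object> tuple) { return tuple.get(messageField); }
}

public class MapperSketch {
    public static void main(String[] args) {
        // The upstream bolt declared a single output field, "output";
        // the tuple carries the filtered record under that name.
        Map<String, Object> tuple = Map.of("output", "some filtered record");

        FieldNameMapper mapper = new FieldNameMapper("key", "output");
        System.out.println(mapper.message(tuple)); // prints "some filtered record"
        System.out.println(mapper.key(tuple));     // no "key" field in the tuple -> null
    }
}
```

If the names don’t line up — the mapper asks for a field the upstream never declared — the value simply isn’t found, which is exactly the “recipient topic not receiving any damn thing” symptom.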

So, the declared output field of the upstream Bolt would be something like:

@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    outputFieldsDeclarer.declare(new Fields("output"));
}

Note the output field named “output“.

Now, in KafkaBolt the only thing to take care of is referencing this field name in the configuration, like so:

KafkaBolt bolt = new KafkaBolt()
        .withProducerProperties(newProps(BROKER_URL, OUTPUT_TOPIC))
        .withTopicSelector(new DefaultTopicSelector(OUTPUT_TOPIC))
        .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("key", "output"));

The default message field name is “message“, so you could just as well use the no-arg constructor of FieldNameBasedTupleToKafkaMapper by naming the upstream output field “message“.

If, however, you have a scenario where you’d want to pass both the key and the value from the upstream, for example:

@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    outputFieldsDeclarer.declare(new Fields("word", "count"));
}

Note that we’ve specified the key field here as “word“.

Then obviously, we need to use this (modified) key name downstream, like so:

KafkaBolt bolt = new KafkaBolt()
        .withProducerProperties(newProps(BROKER_URL, OUTPUT_TOPIC))
        .withTopicSelector(new DefaultTopicSelector(OUTPUT_TOPIC))
        .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word", "count"));

Update (2017-08-23): Added the scenario where a modified key name can be used.