This is a rehash of my first-prize-winning entry from the 2007 BarcampBoston2 programming contest.

The original version was a Ruby on Rails application. The ASCII-Tux-to-HTML rendering happened in Ruby. The audio was generated with midilib and written to a MIDI file at a known location on disk, which was then played through an <object> tag.

The ASCII Tux was generated with an image-to-ASCII converter and then cleaned up a bit by hand (exactly how is lost; maybe with jp2a).

The original version did not render plain ASCII characters. Instead, it used the Flickr search API to find a set of images matching a user-provided search term, and data from the image URLs was distilled into a stream of notes for the MIDI file.

This all worked great during the programming contest demo, but the MIDI file never felt right. All user searches would read from and write to the same MIDI file, meaning Alice might hear Bob's audio. We MUST make sure that every user receives the proper random audio!

This version has no server-side component; it is just an HTML page and a bit of JavaScript. It uses the Web Audio API to generate audio directly in the browser.
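As a rough illustration (not the project's actual code), in-browser note generation with the Web Audio API can be done by scheduling one short oscillator per note. The pentatonic scale and square wave here are assumptions for the sketch:

```javascript
// Convert a MIDI note number to a frequency in Hz (A4 = MIDI 69 = 440 Hz).
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Pick a random sequence of notes; the scale choice is arbitrary.
function randomNotes(count) {
  const scale = [60, 62, 64, 67, 69]; // C major pentatonic, one octave
  const notes = [];
  for (let i = 0; i < count; i++) {
    notes.push(scale[Math.floor(Math.random() * scale.length)]);
  }
  return notes;
}

// Schedule playback in the browser; AudioContext exists only there.
function playNotes(notes, noteLength = 0.25) {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  notes.forEach((note, i) => {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.type = "square";
    osc.frequency.value = midiToFreq(note);
    osc.connect(gain);
    gain.connect(ctx.destination);
    const start = ctx.currentTime + i * noteLength;
    osc.start(start);
    osc.stop(start + noteLength);
  });
}
```

A page would call something like `playNotes(randomNotes(16))` in response to a user gesture, since browsers generally refuse to start an AudioContext without one.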

It's not as pretty without all the Flickr images, but at least we have colored ASCII text, right?
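One way to get colored ASCII text, sketched here as a hypothetical helper rather than the page's actual code, is to wrap each non-space character in a <span> with an inline color. The palette is an assumption:

```javascript
// Arbitrary palette for the sketch.
const COLORS = ["#0f0", "#ff0", "#fff"];

// Wrap each visible character of the ASCII art in a colored <span>,
// leaving spaces and newlines untouched so the layout survives.
function colorize(asciiArt) {
  return asciiArt
    .split("\n")
    .map(line =>
      [...line]
        .map(ch => {
          if (ch === " ") return " ";
          const color = COLORS[Math.floor(Math.random() * COLORS.length)];
          return `<span style="color:${color}">${ch}</span>`;
        })
        .join("")
    )
    .join("\n");
}
```

The result can be dropped into a `<pre>` element so the monospace grid is preserved.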

Certain browsers may not play the audio. Sorry. You're really missing out.