The original version was a Ruby on Rails application. The ASCII Tux-to-HTML rendering happened in Ruby, and the audio was written with midilib to a MIDI file at a known location on disk, which was then played back.
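The app itself used the midilib gem for this step; as a rough sketch of what actually lands in that file on disk, here is a minimal hand-rolled single-track MIDI writer using only the Ruby standard library (the note list and tick resolution are illustrative, not the app's real values):

```ruby
# Encode an integer as a MIDI variable-length quantity (7 bits per byte,
# high bit set on all but the last byte).
def variable_length(n)
  bytes = [n & 0x7F]
  bytes.unshift((n & 0x7F) | 0x80) while (n >>= 7) > 0
  bytes.pack('C*')
end

# Write a format-0 MIDI file playing each note for one quarter note.
def write_midi(path, notes, ticks_per_quarter = 96)
  events = +''.b
  notes.each do |note|
    events << variable_length(0) << [0x90, note, 100].pack('C*')              # note on, channel 0
    events << variable_length(ticks_per_quarter) << [0x80, note, 0].pack('C*') # note off
  end
  events << variable_length(0) << [0xFF, 0x2F, 0x00].pack('C*')               # end-of-track meta event
  header = ['MThd', 6, 0, 1, ticks_per_quarter].pack('a4Nnnn')                # format 0, 1 track
  track  = ['MTrk', events.bytesize].pack('a4N') + events
  File.binwrite(path, header + track)
end
```

midilib hides all of this byte-level packing behind `MIDI::Sequence` and `MIDI::Track` objects, which is why it was the natural choice in the Rails app.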
The original version did not render ASCII characters. Instead, it used the Flickr search API to find a set of images matching a user-provided search term, and data from the image URLs was distilled into a stream of notes for the MIDI file.
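The exact distillation scheme isn't documented here, so this is a hypothetical sketch of the idea: hash each image URL and map the digest bytes onto MIDI note numbers in a playable range.

```ruby
require 'digest'

# Hypothetical URL-to-notes mapping: hash the URL, then fold each digest
# byte into the MIDI note range 36..84 (roughly C2..C6).
def url_to_notes(url, count = 8)
  Digest::MD5.digest(url).bytes.first(count).map { |b| 36 + (b % 49) }
end
```

Because the mapping is deterministic, the same search results always produce the same melody, while different images produce different ones.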
This all worked great during the programming contest demo, but the MIDI file handling never felt right: every user search read from and wrote to the same MIDI file on disk, so Alice might end up playing Bob's audio. We must make sure that every user receives their own randomly generated audio!
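One straightforward fix for that race (a hypothetical naming scheme, not necessarily what the app shipped) is to give each request its own file instead of one shared path:

```ruby
require 'securerandom'
require 'tmpdir'

# Generate a unique per-request MIDI path so concurrent users can never
# read each other's audio from a shared file.
def midi_path_for_request(dir = Dir.tmpdir)
  File.join(dir, "audio-#{SecureRandom.uuid}.mid")
end
```

The trade-off is cleanup: per-request files need to be deleted after playback, whereas the single shared file never grew.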
It's not as pretty without all the Flickr images, but at least we have colored ASCII text, right?