Corollary thoughts on auditory AR/ambient stuff

  • Many years ago I read about a dude who’d converted Unix server logs into a real-time auditory environment — specifically, a rain forest. Server load controlled the level of the rain, CGI calls were bird chirps, potential malicious attacks were the cough of a jaguar, etc. Sadly, I can’t find any info on this anymore. Anybody know anything?
  • A simpler, easy-to-implement application: assign musical DNA traits to individual aspects of your collected data stream, and your Pandora/Last.fm/MOG/whatever retrieves music accordingly. For example: whenever I receive new mail, play “excited” music. Or whenever I sell an item on Etsy, play Iggy Pop’s “Success”. It’s not matching music to your mood; it’s matching it to your information landscape. (There’s a rough sketch of this after the list.)
  • Once I turn on Stikki’s “local only” feature, I’m thinking of seeding the world with microcompositions: music only available when the user is in a specific location. Like “Soundtrack to the corner of Maryland and Harmon”. (There’s a sketch of the location check after the list, too.)
  • Somebody ought to pay me to think about this stuff. Everybody email Joi and tell him to hire me at the Media Lab. 😉
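
Just to show how little machinery the data-stream-to-music idea needs, here’s a rough Python sketch. Nothing here touches a real Pandora/Last.fm/MOG API; the event names, the rules table, and soundtrack_for are all made-up placeholders for whatever your actual feeds and music service would be.

    # Map events from your "information landscape" to either a mood tag
    # (for a recommendation service to interpret) or a specific track.
    EVENT_RULES = {
        "new_mail":   {"mood": "excited"},              # whenever new mail arrives
        "etsy_sale":  {"track": "Iggy Pop - Success"},  # whenever an item sells on Etsy
        "build_fail": {"mood": "ominous"},              # any other feed you care to wire in
    }

    def soundtrack_for(event):
        """Return what to ask the music service for when a data-stream event fires."""
        rule = EVENT_RULES.get(event)
        if rule is None:
            return "keep playing whatever's on"
        if "track" in rule:
            return "queue specific track: " + rule["track"]
        return "ask the service for something '" + rule["mood"] + "'"

    if __name__ == "__main__":
        for event in ("new_mail", "etsy_sale", "slow_tuesday"):
            print(event, "->", soundtrack_for(event))

Swap those return strings for whatever your music service actually accepts and that’s most of it.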
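
And a sketch of the location gate for the microcompositions, separate from however Stikki actually handles its “local only” mode: each piece gets pinned to a lat/lon and a radius, and it only unlocks when the listener is inside that circle. The coordinates and radius are rough guesses, purely for illustration.

    from math import radians, sin, cos, asin, sqrt

    MICROCOMPOSITIONS = [
        # (title, latitude, longitude, radius in meters) -- all approximate
        ("Soundtrack to the corner of Maryland and Harmon", 36.1069, -115.1365, 75),
    ]

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in meters (haversine)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6371000 * 2 * asin(sqrt(a))

    def available_here(lat, lon):
        """Return the microcompositions unlocked at the listener's current position."""
        return [title for title, plat, plon, radius in MICROCOMPOSITIONS
                if distance_m(lat, lon, plat, plon) <= radius]

    if __name__ == "__main__":
        print(available_here(36.1070, -115.1366))  # standing near the corner
        print(available_here(36.1250, -115.1500))  # a mile or two away: nothing

Haversine is overkill for a 75-meter circle, but it stays honest if a composition ever covers a whole neighborhood instead of one corner.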
