Statistical Analysis of Massive Data Streams: Proceedings of a Workshop
Transcript of Presentation
MR. HANSEN: [Speech in progress]. That involved artists like Rauschenberg and even Andy Warhol. The idea was to pair mostly engineers and artists together to see what useful forms of artistic expression might come out. In a very self-conscious way, I think, the approach was to revive this tradition with the arts, and hence this arts and multimedia program was born.
It was actually an interesting event. The idea, as I said, was to very self-consciously pair artists and researchers together, and this will actually get to streaming data in a moment, I promise. So, what happened was, they organized a two-day workshop where 20 or so media artists from New York City and 20 or so invited researchers from the labs met in the boardroom of Lucent, and each got 10 minutes to describe what they do. I had 10 minutes to kind of twitch and talk about what I do, and the artists, who had some very beautiful slides and a very big vocabulary, got some time to talk about what they do. Then we were supposed to pair up somehow, find somebody, and put a proposal together, and they would fund three residencies.
Ben and I put together perhaps the simplest thing given our backgrounds, him being a sound artist and me being a statistician. We put together a proposal on data sonification, which is a process by which data is rendered in sound, for the purpose of understanding some of its characteristics that may not be immediately obvious in the visual realm. So, instead of visualizing a data set, you might play a data set and get something out of it. This is sort of an old idea, and it seems like everything I have done, John Chambers has done many, many years ago. So, I have kind of given up on trying to be unique or novel in any way.
He was working with perhaps the father of electronic music, Max Mathews, at Bell Labs. This was back in 1974. He developed something that Bell Labs at the time gave the title MAVIS, the Multidimensional Audiovisual Interactive Sensifier. The idea was that you would take a data set, a matrix, and you would map the first column, say, to the pitch, the second column to the timbre, the third column to the volume. Then there would be some order to the data somehow and you would just play it. John said you got a series of squeaks and then a squawk, perhaps, if there was an outlier, and that was as far as it went. He said it wasn’t particularly interesting to listen to, but maybe there was something that could be done to kind of smoke out some characteristics in the data. Actually, when Ben and I were talking, we thought that this kind of mapping might be able to tell apart underground bomb blasts and earthquakes. Apparently, this problem was motivated by Suki, who was involved in the Soviet test ban discussions. At least, I am getting this all now from Bill.
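For readers of the transcript, the column-to-sound mapping described here is simple enough to sketch in code. The following Python is a hypothetical re-creation of the idea, not MAVIS itself; the pitch range, tone length, and rescaling choices are my own assumptions, and timbre is omitted for brevity.

```python
# A minimal sketch of MAVIS-style sonification (a hypothetical
# re-creation, not John Chambers's original program): each row of a
# data matrix becomes one short tone, with column 0 mapped to pitch
# and column 1 mapped to volume.
import numpy as np

SAMPLE_RATE = 8000      # samples per second (assumed)
TONE_SECONDS = 0.25     # duration of each data point's tone (assumed)

def sonify(data: np.ndarray) -> np.ndarray:
    """Render an (n, 2) matrix as a mono waveform.

    Column 0 is scaled to a pitch between 220 Hz and 880 Hz;
    column 1 is scaled to an amplitude between 0.2 and 1.0.
    """
    def unit(x):
        # Rescale a column to [0, 1]; a constant column maps to 0.
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    pitches = 220.0 + 660.0 * unit(data[:, 0])   # Hz
    volumes = 0.2 + 0.8 * unit(data[:, 1])       # amplitude

    t = np.arange(int(SAMPLE_RATE * TONE_SECONDS)) / SAMPLE_RATE
    tones = [v * np.sin(2 * np.pi * f * t) for f, v in zip(pitches, volumes)]
    return np.concatenate(tones)

# A data set with an outlier in column 0 yields the "squawk" John
# described: one tone much higher in pitch than its neighbors.
wave = sonify(np.array([[1.0, 0.5], [1.1, 0.6], [9.0, 1.0], [1.2, 0.5]]))
```

The waveform could then be written to a file or a sound device with any audio library; the point is only that the mapping itself is a few lines of arithmetic, and structure in the data (an outlier, a trend, periodicity) carries over directly into pitch and loudness.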
I thought I could give you an example of what some of this early sonification sounds like. A friend of mine at the GMD has developed a program on earthquake sonification, and here is what the Kobe quake sounds like, if you speed it up 2,200 times.