Music plays a vital role in every culture on Earth, contributing to the quality of life for billions of people. From the very beginning, engineering and technology have strongly influenced the development of music and musical instruments. The power and potential of this relationship is exemplified in the work of great multidisciplinary thinkers, such as Leonardo da Vinci, Benjamin Franklin, and Alexander Graham Bell, whose innovations were inspired by their passions for both fields.
In the past decade, there has been a revolution in music—including the aggregation of enormous digital music libraries, the use of portable digital devices to listen to music, and the growing ease of digital distribution. At the same time, advanced tools for creating, manipulating, and interacting with music have become widely accessible. These changes are reshaping the music industry, which has moved far beyond the sale of recordings into personalized music search and retrieval, fan-generated remixes and “mash-ups,” and interactive video games.
The rapid proliferation of digital music has also given rise to an explosion of music-related information, and the new field of music information retrieval has focused on methods for managing this data. In the future, listening to music will be transformed by systems that can locate pieces, drawn from a practically unlimited pool of available music and fine-tuned to satisfy the mood and preferences of the listener at that moment.
To reconcile quantitative signal content with the complex and subjective perceptions and aesthetic preferences of listeners, music information retrieval requires unprecedented collaboration among experts in signal processing, machine learning, data management, psychology, sociology, and musicology. In the first presentation in this session, Brian Whitman (The Echo Nest) describes advances in the field that combine audio features with myriad music-related data sources to derive metrics for complex judgments, such as similarities among artists and personalized music recommendations.
The next speaker, Douglas Repetto (Columbia University), is the founder of DorkBot, a collection of local groups using technology in non-mainstream ways, usually classified in the category of “art,” for want of a better name. He reviews how contemporary composers explore the limits of technology in their art, as well as the wider experiences of people in the “maker” community who practice what is clearly engineering, but outside of traditional engineering institutions.
Engineering advances have transformed the creative palette available to composers and musicians. Sounds that cannot be produced by physical instruments can be generated electronically, and modern laptop computers have sufficient processing power to perform complex syntheses and audio transformations. Applications of these technologies in collaborative live performance have been pioneered by the Princeton Laptop Orchestra (PLOrk), co-founded by Dan Trueman (Princeton University). In his presentation, he details the technologies that have been developed and used by PLOrk and the orchestra’s ongoing efforts to use music technology to engage and energize undergraduate and K–12 students.
The relationship between music and mathematics has fascinated people for many centuries. From the ancient Greeks who considered music a purely mathematical discipline to the serialist composers of the 20th century who relied on numeric combinations to drive compositional choices, countless attempts have been made to derive and define a formal relationship between the two fields. Elaine Chew (University of Southern California) describes her use of mathematics to analyze and understand music and how she incorporates mathematical representations into visualizations for live performance.