To create auditory immersion, everything that contributes to the perception has to feel natural. Applying a certain randomness to when and where a sound occurs makes the experience more realistic.
This project includes the software and tech stack for an audio exhibition system that guides visitors through different scenes. Each scene incorporates a set of sounds, played on different speakers positioned around the visitor. The software generates the soundscape in real time, determining when and where the sounds are played.
This approach aims to keep the visitor's attention in the room instead of directing it towards a screen, as is often the case with home theater systems.
When the program is started, it loads a configuration file in which the different audio channels, scenes, and their sounds are defined. (A detailed documentation of how to use the underlying compiler can be found here.)
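The exact syntax of the configuration file is documented with the compiler linked above; the following is only an illustrative sketch (in Python) of the kind of information such a file carries. The field names and structure here are assumptions made for illustration, not the real format.

```python
# Illustrative only: the real configuration format is defined by the project's
# own compiler and is not reproduced here. This sketch just shows the kind of
# information such a file would carry.
EXAMPLE_CONFIG = {
    "channels": [1, 2, 3, 4, 5, 6],                    # the six speaker outputs
    "scenes": {
        "forest": {
            "ambience": ["leaves_rustling.wav"],        # played on all speakers
            "special": [                                # played on a single speaker
                {"file": "bird_chirp.wav", "min_pause": 4.0, "max_pause": 15.0},
            ],
        },
        "river": {
            "ambience": ["river_flow.wav"],
            "special": [
                {"file": "water_splash.wav", "min_pause": 8.0, "max_pause": 30.0},
            ],
        },
    },
}
```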
It then generates events, which are sent to a digital audio center written in Max MSP. The events come in two kinds: ambience events, which play on all speakers, and special events, which are played by only one speaker. The audio center processes these events into playable soundtracks and passes them to the connected speakers.
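As a minimal sketch of how such an event loop could look, the snippet below reuses the EXAMPLE_CONFIG structure from the sketch above. The send_to_audio_center stub and the event fields are assumptions for illustration; how the real system hands events to the Max MSP patch is not shown here.

```python
# Minimal sketch of how ambience and special events could be generated and
# handed to the Max MSP audio center. send_to_audio_center is a hypothetical
# stand-in for the actual transport, not the project's real API.
import random
import time

def send_to_audio_center(event: dict) -> None:
    print("event ->", event)  # placeholder for the real connection to Max MSP

def play_scene(scene: dict, channels: list[int], duration: float) -> None:
    # Ambience sounds are started once and play on all speakers.
    for file in scene["ambience"]:
        send_to_audio_center({"type": "ambience", "file": file})
    # Special events fire after random pauses on a randomly chosen speaker.
    end = time.monotonic() + duration
    while time.monotonic() < end:
        sound = random.choice(scene["special"])
        time.sleep(random.uniform(sound["min_pause"], sound["max_pause"]))
        send_to_audio_center({
            "type": "special",
            "file": sound["file"],
            "channel": random.choice(channels),
        })

play_scene(EXAMPLE_CONFIG["scenes"]["forest"], EXAMPLE_CONFIG["channels"], duration=60.0)
```

The random pauses and the random speaker choice are what create the effect described above: no two runs of a scene sound exactly the same.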
To show how the system works, an example use case was built. It consists of a forest scene with chirping birds and rustling leaves, as well as a river scene with different sound peaks.
The output devices are six speakers and one subwoofer, which reproduces the low frequencies of the river sounds. The speakers are placed evenly around the visitor, forming a sweet spot in the middle of the room.
In the end, the system was set up together with the other students' projects, which addressed the visitors' other senses.
To get a glimpse of the whole experience, you can watch the video below.
Visual footage made by Michael Kluge!