Note: This project is still heavily in the development phase and will not compile easily. Once it is in a half-decent state, executables and modification instructions for others will be posted here :)
Final Year University Project:
- An Exploration into the Links Between Emergent Behaviour and Granular Synthesis, Assessing their Learning Potential in a Multimodal, Interactive Environment
Audio and visual stimuli are fundamental parts of the human learning process; this project investigates how they can be applied to interactive synthesis and pattern formation. The overarching aim of the project was to develop a learning tool that teaches a user about both granular synthesis and emergent behaviour. It also aims to explore some of the links that can be drawn between them, using interactivity and multimodal feedback to enhance the learning experience. Multimodal feedback on the emergent behaviour is provided by spatialised audio from a granular synthesiser, three-dimensional visualisation, and the ability to adjust parameters of both the emergence algorithms and the granular synthesis in real time.
Emergent behaviour is the formation of patterns and behaviours from the interaction of individual parts in a self-organising system. Flocking, swarming and schooling are natural examples of this, governed by largely similar mechanisms. These systems are decentralised: each entity acts autonomously on the information it can perceive around it, following a small set of simple 'rules'.
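The classic formulation of those rules is Reynolds' three boids steering behaviours (separation, alignment, cohesion). A minimal sketch of one decentralised update step in 3-D is below; the function name, weights and perception radius are illustrative assumptions, not this project's actual parameters.

```python
# Minimal boids-style flocking sketch: each agent reacts only to
# neighbours within its perception radius (fully decentralised).
import numpy as np

def flock_step(pos, vel, dt=0.1, perception=2.0,
               w_sep=1.5, w_align=1.0, w_coh=1.0, max_speed=1.0):
    """One update of the three steering rules for N agents in 3-D.

    pos, vel: (N, 3) arrays of positions and velocities.
    """
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < perception)
        if not mask.any():
            continue
        # Separation: steer away from close neighbours (inverse-square falloff).
        sep = -(offsets[mask] / dist[mask, None] ** 2).sum(axis=0)
        # Alignment: match the mean heading of neighbours.
        align = vel[mask].mean(axis=0) - vel[i]
        # Cohesion: steer toward the neighbours' centre of mass.
        coh = pos[mask].mean(axis=0) - pos[i]
        new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
        # Clamp speed so the flock stays stable.
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return pos + dt * new_vel, new_vel
```

No agent has global knowledge; the flock-level pattern emerges purely from these local interactions, which is the property the project's visualisation and sonification build on.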
Granular synthesis is based around snippets, or grains, of audio that, when combined, act as building blocks for larger sound objects. A single grain effectively captures two perceptual dimensions: it is defined in the time domain by its length, amplitude envelope and starting point, and in the frequency domain by the source material of the waveform itself, either synthetically generated or extracted from existing audio. When many grains are combined into a cloud-like texture, the variations of all these parameters interact to change the spectrum of the overall sound. Controlling this process often requires the manipulation of significant amounts of data, as each grain can have independent parameters, making the output of a many-agent emergent system an ideal driving tool. The audio output is then encoded using ambisonics, a method of mathematically encoding the characteristics of an audio object in multi-dimensional space, allowing each grain to have its own position in the sound field, as a sound-producing agent would in the real world.
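Those time-domain parameters (start point, length, amplitude envelope) can be sketched in a few lines; the function names and the choice of a Hann envelope here are illustrative assumptions, not this project's actual synthesis code.

```python
# Sketch of a single grain and an overlap-added grain cloud.
import numpy as np

def make_grain(source, start, length, amp=1.0):
    """Cut `length` samples from `source` at `start` and apply a Hann
    amplitude envelope -- the grain's time-domain definition."""
    snippet = source[start:start + length]
    env = np.hanning(len(snippet))  # fades smoothly to zero at both ends
    return amp * snippet * env

def grain_cloud(source, onsets, starts, lengths, out_len):
    """Overlap-add many grains into one buffer; varying the per-grain
    onset/start/length changes the spectrum of the summed texture."""
    out = np.zeros(out_len)
    for onset, start, length in zip(onsets, starts, lengths):
        g = make_grain(source, start, length)
        end = min(onset + len(g), out_len)
        out[onset:end] += g[:end - onset]
    return out
```

Because every grain carries its own parameter set, a cloud of a few hundred grains already involves thousands of values per second, which is why driving them from a many-agent system is attractive.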
Therefore, one of the core links the project explores is the equivalence between a single grain of audio and an entity in an emergent system, and how emergent behaviour can influence control parameters such as grain spatial position and length. Granular synthesis and emergent behaviour have been paired together in other systems; extrapolating this into three dimensions may provide a more immersive and useful learning experience.
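The grain-as-agent mapping can be sketched as a first-order ambisonic (traditional B-format W/X/Y/Z) encode driven by an agent's position; the function below is a hypothetical illustration of the idea, not the project's actual encoder, and assumes the conventional 1/sqrt(2) weight on the omnidirectional W channel.

```python
# Sketch: place a mono grain in the sound field at an agent's position
# using first-order ambisonic (B-format) encoding gains.
import numpy as np

def encode_first_order(signal, position):
    """Encode a mono grain at a 3-D position into W/X/Y/Z channels.

    position: (x, y, z); only its direction matters for the gains.
    W is the omnidirectional component (conventional 1/sqrt(2) weight);
    X/Y/Z are the figure-of-eight directional components.
    """
    direction = np.asarray(position, dtype=float)
    x, y, z = direction / np.linalg.norm(direction)
    gains = np.array([1.0 / np.sqrt(2.0), x, y, z])
    return gains[:, None] * signal[None, :]  # shape (4, n_samples)
```

Updating `position` from the flocking simulation each frame gives every grain its own moving location in the sound field, mirroring how a sound-producing agent would move in the real world.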