Interactive Music

There are numerous techniques used in composition for video games, of varying complexity. Some attempt to mirror player actions, whereas others serve as a bed of sound with no intentional relation to player autonomy.

Non-Interactive Forms and Educated Guesses

The music in Halo: Combat Evolved during the level ‘The Maw’ is non-interactive:

Although this is the case, the music still connects with the player during this suspenseful part of the game, helping to induce certain emotions. There is also the use of a stinger at 00:05:00 to aid this, yet it does not sync with the music.

Pneuma (2015) by Deco Digital:

Although the music plays in a looping fashion with little melodic content, it does transition to new pieces of music when entering new areas. While walking up the stairs at the beginning of the game's prologue, it could be argued that an educated guess was made to play an ascending harp scale at the moment the composer felt the player would be just about reaching the top, near the door. After playing this through myself and intentionally taking longer to arrive at that point, the harp sequence still starts, and it continues even if the player stands still. To make this interactive, the developer could have triggered a note for every set number of stairs the player climbs, ending in a crescendo on arrival at the top, much as it does when the user in this play-through reaches the button, or in my play-through, where it happened part way up the stairs.
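The stair-triggered idea could be sketched as follows. This is my own illustration, not Deco Digital's implementation; the scale, step spacing and velocity curve are all assumptions.

```python
# Hypothetical sketch: fire one harp note per block of stairs climbed,
# with velocity rising toward the top to form a crescendo.
HARP_SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # ascending C major, MIDI notes

def note_for_step(step: int, total_steps: int, steps_per_note: int = 4):
    """Return (midi_note, velocity) when a note should fire, else None."""
    if step % steps_per_note != 0:
        return None  # only trigger every steps_per_note stairs
    index = min(step // steps_per_note, len(HARP_SCALE) - 1)
    velocity = min(int(60 + 67 * (step / total_steps)), 127)  # crescendo
    return HARP_SCALE[index], velocity
```

The game engine would call `note_for_step` each time the player moves up a stair; because the trigger is tied to player movement rather than a timer, stopping part way up pauses the phrase instead of letting it run on.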

Transitional Forms

The transition from one piece of music to another is usually handled with a basic fade-in/fade-out system, as can be heard here:

This could have been done more effectively with the use of parallel forms or horizontal re-sequencing. This would have given the composition greater flow and had less impact on user immersion.
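As a minimal sketch of the smoother alternative, an equal-power crossfade keeps the combined loudness roughly constant while one cue hands over to the next. The function below is illustrative, not taken from the game's middleware:

```python
import math

def crossfade_gains(t: float):
    """Equal-power crossfade: t in [0, 1], 0 = all cue A, 1 = all cue B.

    Returns (gain_a, gain_b); the squared gains always sum to 1, so the
    perceived level stays steady through the transition.
    """
    t = max(0.0, min(1.0, t))
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)
```

Horizontal re-sequencing would go further, waiting for a musically sensible boundary (a bar line or phrase end) before starting the crossfade.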

Parallel Forms and Ornamental Forms

A basic example of parallel layering (vertical re-orchestration) can be seen here:

The game Kameleon (2015) by Kajero demonstrates this as well as the use of musical stingers (ornamental form):

The music changes the deeper the player goes into the water: different instrument loops are added to and subtracted from the initial bed of music. This is mirrored if the player returns to the surface.
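A simple way to sketch this behaviour is to make each loop's gain a pure function of depth, so the layering mirrors itself automatically on the way back up. The layer names and depth thresholds below are my own illustrative assumptions, not Kajero's values.

```python
# Vertical re-orchestration sketch: loops fade in as the player descends.
LAYERS = [
    ("pad",     -5.0),  # the initial bed of music: fully in at the surface
    ("harp",    10.0),  # enters 10 m below the surface
    ("strings", 25.0),
    ("choir",   40.0),
]

def layer_gains(depth: float, fade_range: float = 5.0):
    """Map player depth in metres to a 0..1 gain for each loop."""
    return {name: max(0.0, min(1.0, (depth - start) / fade_range))
            for name, start in LAYERS}
```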


Procedural Audio

Farnell (2007) describes procedural audio as,

“non-linear, often synthetic sound, created in real time according to a set of programmatic rules and live input.” 

This means sound is generally created synthetically whenever the player initiates a set of conditions within the game world, for example by swinging a sword. This could be done with multiple recorded assets, but that approach uses valuable memory, and the recordings themselves add cost.
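To illustrate the rule-plus-live-input idea (this is a hypothetical sketch, not any engine's actual API), a synthesised sword swing might draw fresh parameters on every trigger instead of picking from a bank of recordings:

```python
import random

def swing_params(rng: random.Random):
    """Return randomised synthesis parameters for one sword swing."""
    return {
        "pitch_ratio": rng.uniform(0.9, 1.1),    # +/-10% pitch variation
        "decay_s":     rng.uniform(0.15, 0.30),  # amplitude envelope decay
        "noise_mix":   rng.uniform(0.3, 0.6),    # level of the 'whoosh' noise
    }
```

Each swing then costs a few bytes of parameter state rather than a stored recording, which is exactly the memory trade at stake here.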

With the constraint of memory ever present in games, how can we further relieve its impact on audio variation? Procedural audio is one such way. Fournel (2010) tells us that procedural audio is used:

  • due to memory constraints or other technology limitations
  • when there is too much content to create
  • when we need variations of the same asset
  • when the asset changes depending on the game context

Procedural audio for sound effects is real-time sound synthesis; one example of this is SoundSeed, Audiokinetic's plug-in for Wwise.

Audiokinetic claim that by using SoundSeed developers never have to compromise on variety. The SoundSeed Impact modeller and plug-in demonstrated in the video above can take just one recorded source file, analyse its content and then create variations of the sound at run time. It is also possible to create sound from the ground up in SoundSeed, as in other similarly purposed products; this gives the ability to create original, unheard sounds if desired, though it is ultimately more time-consuming and technically difficult.

As well as impact and resonant sounds, SoundSeed can also create wind sounds, which would otherwise have a large memory footprint. Fournel (2010) explains that good candidates for procedural audio are sounds that are:

  • Repetitive (e.g. footstep, impacts)
  • Large memory footprint (e.g. wind, ocean waves)
  • Require a lot of control (e.g. car engine, creature vocalizations)
  • Highly dependent on the game physics (e.g. rolling ball, sounds driven by motion controller)
  • Just too many of them to be designed (vast universe, user-defined content…)
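Wind is a good first example because it can be convincing with very simple means. The sketch below is my own minimal illustration of the idea (white noise through a one-pole low-pass filter whose cutoff drifts slowly to suggest gusts); all parameter values are assumptions.

```python
import random

def wind_samples(n: int, seed: int = 0):
    """Generate n procedural 'wind' samples in [-1, 1] from a seed alone."""
    rng = random.Random(seed)
    out, y, cutoff = [], 0.0, 0.05
    for _ in range(n):
        cutoff += rng.uniform(-0.001, 0.001)  # slow random drift = gusting
        cutoff = max(0.01, min(0.2, cutoff))  # keep the filter stable
        noise = rng.uniform(-1.0, 1.0)
        y += cutoff * (noise - y)             # one-pole low-pass filter
        out.append(y)
    return out
```

A minute of stereo wind at 44.1 kHz would occupy around 10 MB as 16-bit PCM; generated this way it costs only the code and a seed.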

If we look again at Paul’s Pure Data patch from last week’s blog, he demonstrates wave modelling here:

He demonstrates building sound from the ground up, and because the sound is synthesised it can be made to sound different every time it is played, as he demonstrates for the game Sim Cell.

At present, procedural audio does not provide an all-in-one solution. Fournel (2010) explains that it is excellent at recreating sounds like wind, water and impacts, but for other sounds there is a lack of sound designers with the technical knowledge to implement them. Hopefully, in the coming years, development teams will be better trained in this area, as Fournel (2010) suggests.


Farnell, A (2007) An Introduction to Procedural Audio and its Application in Computer Games. [Online] Available from: [Accessed 25 October 2015].

Fournel, N (2010) Procedural Audio for Video Games: Are We There Yet? GDC Vault [Online] Available from: [Accessed 25 October 2015].

LostChocolateLab (2010) Audio Implementation Greats #8: Procedural Audio Now. [Online] Available from: [Accessed 25 October 2015].

Non-repetitive Design

Non-repetitive game audio is said to be complementary to its visual counterpart. Whitmore (2003) states,

“In a sense, linear music is to pre-rendered animation as adaptive music is to real-time 3D graphics. What did games gain from game-rendered art assets? The ability to view objects from any side or distance, and the flexibility to create a truly interactive game environment. These graphical advances give gamers a more immersive and controllable environment, and adaptive music offers similar benefits.”

A game-rendered musical score allows player action to determine which musical components are called upon, making the music more integral to the visual changes. Although this process gives a greater audio dynamic, separating the audio assets into a stem-style format increases memory usage.

This kind of adaptive audio can be seen here:

Some developers take the early principles of game audio and use synthetic sound. Whereas the 8-bit audio of earlier game releases could still be repetitive, as heard here:

modern synthesised game audio can have a seemingly infinite palette of variation.

Paul (2015) demonstrates in Pure Data the possibilities of programming a bespoke audio tool. The randomised sequencer means that the music will never play the same way twice when revisiting the same moment in the game. For further variation, game cues inform the system to change its timbral output to complement visual changes. Paul’s system generates not only the musical score but also the sound design; text boxes, for example, play randomised additive-synth one-shots that change slightly every time they are called upon.
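The randomised-sequencer idea can be sketched in a few lines. This is my own illustration in the spirit of the approach, not Paul's Pure Data patch; the note pool, step count and rest probability are all assumptions.

```python
import random

SCALE = [57, 60, 62, 64, 67]  # illustrative note pool (MIDI numbers)

def generate_bar(rng: random.Random, steps: int = 8, rest_chance: float = 0.25):
    """Return one bar: a MIDI note per step, or None for a rest."""
    return [None if rng.random() < rest_chance else rng.choice(SCALE)
            for _ in range(steps)]
```

Because every bar is drawn fresh, revisiting the same game moment yields a different melody, and a game cue could swap `SCALE` or `rest_chance` to shift the mood alongside visual changes.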

His work on Sim Cell (2013) by Strange Loop Games was intended to use this system, but due to processor constraints he was forced to render various takes of the system’s output and place them into the game in a more typical way.

Paul’s system can be seen working here:

It is notable that both memory and processor constraints have played their part in limiting the amount of audible variety available in current-generation game development. The end user, however, is generally unaware of this; developers still manage to work within these limitations to produce a rich sonic environment.

One way developers work within these constraints is through choice of musical genre. According to Collins (2008), minimal ambient music paired with ambient sound design in puzzle games avoids the need for numerous compositions, or the repetition of melodic music over a lengthy period of time. Monument Valley (2014) by USTWO makes use of this technique:

The ambient sonic scape could also have an effect on a player’s perception of time passing while playing; in this sense, the audio is playing a temporal role. Noseworthy & Finlay (2009) state that individuals estimate elapsed time more accurately when music is present alongside ambient sound, and less accurately when only ambient sound is present. When paired with puzzle games, the use of ambient music instead of melodic music could therefore be seen as intentional temporal audio: it would not fully prevent time perception, as music is still played, but it would reduce the user’s ability to do so somewhat. This diminished ability to accurately perceive time passing would benefit the game, keeping the player’s attention for longer.


Collins, K (2008) An Introduction to the History, Theory and Practice of Video Game Music and Sound Design. The MIT Press. London: England.

Noseworthy, T. & Finlay, K. (2009) A Comparison of Ambient Casino Sound and Music: Effects on Dissociation and on Perceptions of Elapsed Time While Playing Slot Machines. Journal of Gambling Studies.

Paul, L (2015) Advanced Topics in Video Game Audio. Video Game Audio. [Online] Available from: [Accessed 11 October 2015].

Whitmore, G (2003) Design With Music In Mind: A Guide to Adaptive Audio for Game Designers. Gamasutra. [Online] Available from: [Accessed 11 October 2015].

Game Audio Functions

This week looks at the functions audio provides in a video game.

The game I have chosen to look at is Halo 2 (2004).

Firstly, the cutscene narrative sets the mood, preparing the player to investigate a hostile area (instruction).

Gameplay starts at 00:01:16; at this point we hear flames coming from the crashed Pelican (environment). This is processed with a low-pass filter to suggest the player is disorientated or regaining consciousness (sensory immersion).

At 00:01:25 the player encounters the first hostile target. After being shot, the enemy screams out and hits the floor dead (feedback).

At 00:01:32 the sound of an enemy drop ship can be heard; this is a notification sound that prompts the player to prepare for enemy reinforcements. It also orientates the player, as the object’s sound gives clues to its direction. Something similar can be heard at 00:04:15: the player’s shields are low, which results in a beep; the player takes a moment behind a box until the recharging sound begins, at which point they continue to move. Around the same time, feedback is given in the form of vocal pain: the player has allowed themselves to be shot to a dangerously low health level, so is punished with audible pain from their character.

At 00:01:44, and again at 00:02:14, the level’s ambient sound plays distant gunfire (sensory immersion), enhancing the sense of physical presence in the game (Ermi & Mayra, 2005).


Hi, I’m Matt Hellewell. This blog is intended for use for the duration of my time at Leeds Beckett University while completing the MSc Sound and Music for Interactive Games.

I previously completed a BA (Hons) in Creative Music Technology at Doncaster College. My interest in game sound arose during this time, especially in my final year, when I looked more deeply into the temporal effects of sound, and flow.

I’ve had an interest in playing games from an early age; at the moment I play a lot of Destiny (Bungie). This was also the game I used for my dissertation.

My interest in Sound Design also came about during my time at Doncaster. An example of my work can be heard here:

Musically I enjoy electronic, classical, sound track and ambient pieces.

So far I have never written music or adaptive music for games, but I have created sound effects and sound design. As yet I know very little about audio implementation in games, other than dabbling in Wwise, but I hope over the next few weeks this will change!