The Interface Between Music Composition and Game Design


>>David Plylar: Good
afternoon everybody. Thank you so much
for being here. My name is David Plylar with the
music division at the Library of Congress, and this
is our final event of our augmented realities
videogame music mini-fest. We’ve had a great
weekend, and we’re so happy that so many people were
able to come to these events. Today, we’re going to
finish out with composer and author Winifred
Phillips who will be speaking about her music and also
just the whole process of composing videogame
music and how that works. Our bookshop also has her book on composing game music and that whole intersection; it will be on sale in the back, and she’ll also be signing afterwards for those who are interested. So please join me in welcoming Winifred Phillips. [ Applause ]>>Winifred Phillips:
Hi everyone. Thanks for attending my lecture. I’m honored to have
been invited to speak at the Library of Congress. As David mentioned, I’ll be
taking questions at the end of this lecture and
if anyone would like to speak further
the Library of Congress shop
has kindly arranged for a book signing that’s going to be taking place
right over there. So please feel free
to join me then. Is everybody ready? Let’s get started. As David mentioned, my
name is Winifred Phillips, and I’m a composer of
music for videogames. You might also know me
if you’ve read my book, “A Composer’s Guide to
Game Music” published by the Massachusetts
Institute of Technology Press. I’ve composed music for
lots of games on lots of different gaming platforms
ranging from phone games and simple hand-helds
all the way up to state-of-the-art
virtual reality systems. My best-known work includes
music for games in five of the biggest franchises: Assassin’s Creed, Total War, Little Big Planet, The Sims, and God of War. I’m here to talk with
you about the interface between music composition
and videogame design. Now this is the pinnacle
of sophistication in terms of music implementation
in games. So let’s take a look
at an example. In the video you
are about to see, notice how the music changes
as the main character moves from the road at the bottom
of the screen all the way up to the top where he
crosses a rushing river. [ Music ] So that was Frogger from
Konami, circa 1981, music by Takio [inaudible]. Let’s take a look at the interactive music system. The music begins with a moderato tempo musical idea that accompanies our little amphibian while he’s crossing the road, and then it switches to a more energetic motif where [inaudible]
across the river. Now it’s simple but it was a
major innovation in its time, and the basic interactive music
technique in Frogger is still in use in modern games
today including one of my own projects, Assassin’s
Creed Liberation developed by Ubisoft and just remastered
and released for PCs, the PlayStation 4,
and the Xbox One. Let’s take a quick look at
what going from the road to the river sounds like in
Assassin’s Creed Liberation. [ Music ] So we’ve looked at a dynamic
game music system applied to vastly different
musical scores. With Frogger, we’ve seen
a [inaudible] tune score from the early days of
videogaming and then we’ve gone to the high-production values
of this orchestral game music that I composed for
Assassin’s Creed Liberation which will be performed
as a part of the Assassin’s Creed
Symphony World Tour. Just to give you an idea of
how far game music has come, the Assassin’s Creed Symphony
will be having its premiere performance at the Dolby Theatre in LA in June of this year with an 80-piece
orchestra and choir. And yet the design construction for this music has remained remarkably similar to the early days of game design. It’s a simple music system
and it’s made up of components that are assigned
to specific tasks, a track for exploring
the environment, and another for rowing
down the river. It’s the triggering
of these components that makes the music interactive
and the triggers depend on the actions of the player. If the player just decides to turn around and keep on exploring, then the exploration music will continue and the rowing music may never begin. It’s all up to the player. That’s what interactive music is about. Of course, this was a simple example.
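To make that trigger idea concrete, here is a minimal sketch of the kind of logic a simple two-track system like this might use. It is not the actual Frogger or Assassin's Creed Liberation code; the function names, the assumed audio engine object, and the crossfade call are my own assumptions for illustration.

```python
# Hypothetical sketch of a two-track interactive music system: one component
# for exploring, one for rowing, switched by a trigger tied to player state.

class SimpleMusicSystem:
    def __init__(self, audio_engine):
        self.audio = audio_engine      # assumed engine exposing a crossfade call
        self.current_track = None

    def on_player_state_changed(self, state):
        # The trigger: the player's current activity selects the component.
        track = "exploration_theme" if state == "exploring" else "rowing_theme"
        if track != self.current_track:
            # Crossfade so the switch feels musical rather than abrupt.
            self.audio.crossfade_to(track, duration_seconds=2.0)
            self.current_track = track
```

In a real game the states and triggers would come from the engine's own events; the point is simply that the music never changes until the player does something. From here things get a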
lot more interesting. As composers working in
videogames, we spend a lot of time concerned with a
few important questions. How do we structure this music? How many components? How many triggers? How do we mix it, and how
can it best serve the action of the game? Over the course of
this talk I’m going to be giving you an
overview of the main forms of musical interactivity
that have shaped the scores of modern games and
vintage ones. I’ll be showing you examples
from some of my own projects where I can share with
you what I’ve learned about interactive music while
on the job, and we’re also going to be looking at some
examples from some other games that have touched upon
interactive music as well so that we can get a broader overview of the topic. Along the way we’ll be
discussing the functionality of each dynamic game
music model and the way in which the player
gets to interact with the musical content. But first, let’s get a preview of what we’re going
to be covering today. Interactive music
is currently split into two defined categories. The recorded music approach
entails the use of audio files like WAV, OGG, MP3, and
other similar file formats. This category is
currently sub-divided into two specific techniques,
horizontal resequencing and vertical layering. And then there is
the music data method in which the music isn’t fixed
into an audio file format at all but instead it’s saved
as performance data in the MIDI protocol or, less often, the MOD file format, or it’s worked into a generative music system. I’ll be spending the most time today on the recorded music approach, since that’s used most heavily in modern games, but I’m also going to be doing an overview of the music data methods as well. Now let’s take a step back
and look at another simple but effective way of
making music interactive by tasking the player with following the music to its source. Let’s listen to a track I
composed for the videogame God of War from Sony Interactive
Entertainment and watch one of the ways in which
the player can interact with the music in
a very direct way.>>There is safe passage through
the deadly sands but only those who hear and follow the
siren’s song will discover it. You must find the
siren’s great host. Only they can guide you
to Chronos, the titan. [ Music ]>>Winifred Phillips: So in
that gameplay scenario from God of War, the player
follows the music, which adjusts its spatial position and becomes either louder or fainter depending upon the player’s proximity to the source. Kind of like a sophisticated version of Marco Polo.
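As a rough illustration of that proximity effect, here is a small sketch of distance-based attenuation for a positional music source. The falloff curve and parameter names are assumptions for the example, not the actual God of War implementation.

```python
import math

# Hypothetical sketch: a music emitter placed in the world, with volume
# falling off as the player moves away from it (a "Marco Polo" effect).

def music_volume(player_pos, source_pos, min_dist=5.0, max_dist=60.0):
    dx, dy = source_pos[0] - player_pos[0], source_pos[1] - player_pos[1]
    distance = math.hypot(dx, dy)
    if distance <= min_dist:
        return 1.0                      # full volume when close to the source
    if distance >= max_dist:
        return 0.0                      # inaudible beyond the maximum range
    # Linear falloff between the two radii; a real engine might use a curve.
    return 1.0 - (distance - min_dist) / (max_dist - min_dist)

print(music_volume((0, 0), (30, 0)))    # partway between the radii: roughly 0.55
```

A spatialized source like this would also feed a panning calculation, but the volume falloff alone is enough to let the player track the music by ear. Let’s take a look at another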
example of this technique from one of my more
recent projects, the Dragon Front virtual
reality game developed by High Voltage Software
and available now for the Oculus Rift and
the Samsung Gear VR. Dragon Front is a traditional
high-fantasy adventure in virtual reality. Each game session is
a self-contained match on a playing field loaded with
fantasy monsters and machines. Now with all this in mind,
the music for Dragon Front had to convey a suitably
bold and dramatic style and this was especially
important for the game’s epic main theme. So I composed a big, victorious anthem that played in full, encompassing stereo as the opening logo
sequence starts up. Now the theme music was
designed to follow the player into the hub area,
but bombastic music in the hub area could
have been distracting. So at this point the music
moves from a direct channel to the player and it takes up
a position in the environment as though it were issuing
from the speakers somewhere. Let’s see how that worked. [ Music ] This becomes more fun in
VR when players realize that they can actually turn
around and locate the source of the music based on its
position in the environment. Watch how players can look around in the Dragon Front
hub area, locate the source of the music and actually
turn it off if they want to. [ Music ] [ Music ] Putting music into the
environment and allowing players to interact with it, that
can be very powerful. Let’s look at another example. Bebylon Battle Royale
due to be released on multiple virtual
reality platforms by the developers Kite & Lightning. Bebylon is an outrageous and
wacky bumper car style game with a really unique
visual aesthetic. The whole premise of Bebylon
hinges on immortal babies in over-the-top arena contests
in a futuristic setting. Music during gameplay is
represented by a group of in-game baby musicians. So the music originates from
that source, and we’re able to look over at the band and
see the babies rocking out and playing their
instruments in the VR world. Here is a video showing one of the game’s baby
bands performing one of the tracks I composed
for Bebylon Battle Royale.>>We are the grass
knuckle heads. [ Music ] So as you can see from the baby
bands of Bebylon, it’s possible to have a game in which
all of the music originates from a visible in-game source. In this case, an
in-game rock band. Now what I’ve noticed is that
players really enjoy this sort of direct, tangible relationship with the music of a game. And the fun factor
increases as the potential for player interactions with
the musical score becomes more complex. As gamers, we like to feel
as though we’re participating in the way in which the musical
score is composed and presented. We like the illusion that we’re
contributing to the creation of a piece of music and we can
actually trace that enjoyment of interactive music all the
way back to the 18th century. Mozart and Haydn may have
been the first composers to create interactive
pieces of music. They did this with
a very clever game. It consisted of a
large assortment of individual measures
of music all numbered. Players rolled dice
and depending on the numbers they rolled, the
measures of music would be put into a particular order. Now the final result was a piece
of finished music that seemed to be completely original to
whoever had rolled the dice. These musical dice games
were enormously popular. Even in the 18th century people
loved the sensation of power over a piece of music, of
being able to interact with it, to change it into something new. Now this brings us
to the first model of musical interactivity we’re
going to be talking about today. It’s called horizontal
resequencing. Now in a lot of ways,
horizontal resequencing is a lot like that musical dice game
that we were just talking about. The music for a horizontal
resequencing model is recorded into an audio file format
and then it’s constructed in segments or chunks so that it
can be arranged and rearranged into different orders. Essentially it can
be resequenced. Now we call it horizontal
resequencing because we’re picturing time
as a horizontal phenomenon. Flowing from left to right kind of like the notes
on a staff of music. When we switch the order of musical chunks we’re
altering their position in time which we perceive
to be horizontal. So now let’s examine a fairly
straightforward horizontal resequencing model as it was
used in one of my projects, the Speed Racer videogame from
Warner Brothers Interactive. In Speed Racer, players get
to drive futuristic cars at over 350 miles per hour down Hot Wheels-style tracks that twist into loop-de-loops and impossible curves. Every now and then
players enter something that is called a
zone mode during which they’re suddenly
invincible and they’re traveling much,
much faster than normal. The game used horizontal
resequencing to enable music interactivity
during the zone mode. Let’s look at the
component music files and some music production
software so that we can see how those
interactive chunks fit together. Every piece of racing music
in the game had to be able to transition into zone mode. You can see here the main piece
of racing music crossfaded into the zone mode track which
was always 15 seconds long. Once the 15 seconds had elapsed, the main racing music would
pick up again seamlessly. Let’s listen to the
music first by itself. [ Music ] Now let’s see how that
worked in the game. [ Music ] You can see that when the player
exits the main racing gameplay and enters the zone mode, the
music smoothly transitions into the zone mode track
from the racing music. The two pieces of music
were designed to be synced so that their tempos and
their beats would align well throughout those transitions. Now how can these sorts of
transitions be accomplished? One common way is in the
placing of digital markers within the music files
dropping them in locations that would make for
good switching points from one interactive
music file to another. We might want to
drop a digital marker on every single beat making
it possible to switch anywhere in a piece of music
as long as the tempos and the measures
were well-aligned, or we might drop markers at
the start of every measure so that the music doesn’t switch until the beginning
of the next measure. We might even choose specific moments for the dropping of those digital markers, and the music wouldn’t switch until those designated points had been reached.
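Here is a rough sketch of how marker-based switching like this might be handled. The marker spacing, the scheduler structure, and the crossfade call are illustrative assumptions rather than the actual Speed Racer implementation.

```python
# Hypothetical sketch: compute measure-aligned marker times for a music file,
# then defer any requested switch until playback reaches the next marker.

def marker_times(tempo_bpm, beats_per_measure, total_measures):
    """Return a marker at the start of every measure, in seconds."""
    seconds_per_measure = (60.0 / tempo_bpm) * beats_per_measure
    return [i * seconds_per_measure for i in range(total_measures)]

class TransitionScheduler:
    def __init__(self, markers):
        self.markers = markers
        self.pending_track = None

    def request_switch(self, track):
        self.pending_track = track     # the gameplay trigger arrives here

    def update(self, playback_time, audio):
        # Only switch when playback is at (or very near) a marker point.
        if self.pending_track and any(abs(playback_time - m) < 0.05 for m in self.markers):
            audio.crossfade_to(self.pending_track, duration_seconds=0.5)
            self.pending_track = None
```

Dropping a marker on every beat instead of every measure just means generating a denser list of times; the deferral logic stays the same. Now the Speed Racer example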
included two chunks of music for dynamic switching, but this is a fairly streamlined example of horizontal resequencing. Let’s take a look at an
example of this method that has more files involved. This is Tron 2.0, a
futuristic game set in the world of the Jeff Bridges film
about living computer programs in a digitized virtual world. One of the most interesting
aspects of this videogame is its
music system developed by Buena Vista Interactive
and composer Nathan Grigg. In Tron 2.0, every music
composition consists of a group of short chunks most
lasting from 10 to 15 seconds and each encompassing about
two to four measures of music. The segments could be triggered
in different orders or sequences, and each segment included an ambient reverb tail that would overlap nicely with whatever musical content would follow it.
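To picture that kind of chunk-based system, here is a small sketch of a resequencer that shuffles short segments and overlaps each one's reverb tail with the next chunk. The timing values and chunk names are assumptions for illustration, not details of the Tron 2.0 engine.

```python
import random

# Hypothetical sketch: queue short music chunks in a shuffled order, starting
# each new chunk slightly early so the previous reverb tail overlaps it.

def schedule_chunks(chunks, tail_overlap=1.5):
    """chunks: list of (name, duration_seconds). Returns (start_time, name) pairs."""
    order = chunks[:]
    random.shuffle(order)                      # resequence the musical ideas
    timeline, clock = [], 0.0
    for name, duration in order:
        timeline.append((clock, name))
        clock += duration - tail_overlap       # overlap the ambient tail of each chunk
    return timeline

demo = schedule_chunks([("chunk_a", 12.0), ("chunk_b", 10.0), ("chunk_c", 15.0)])
for start, name in demo:
    print(f"{start:5.1f}s  {name}")
```

Because every chunk is written to begin cleanly and to end with a reverb tail, any ordering the scheduler produces can still sound continuous. Let’s take a look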
at how that worked. [ Music ] One of the advantages of formatting game music
this way is the ability to alter the order in which
musical events take place. Let’s listen to those chunks
now in a different order. [ Music ] Rearranging the order of musical
content makes music feel less predictable which in turn
makes it seem less repetitious. Repetition fatigue is a
big problem in the field of videogame music
and it stems largely from the nature of
games themselves. Games are much longer
experiences than those we might
have in other forms of entertainment media like
film and television programs. Players could spend a lot of
time in any game location. So the music associated with
that location has the potential to be repeated indefinitely. With a horizontal resequencing
model like the one employed in Tron 2.0, the music can
fluidly juggle its ideas into different sequential
arrangements. And this has the potential to keep the music feeling
fresh for much longer. Let’s see how that system
works during actual gameplay in Tron 2.0. [ Music ] When we’re preparing music
for an interactive system such as this horizontal
resequencing model, we have to pay close attention
to the volume levels of all of the interactive components
of the system making sure that the overall perceived
volume remains consistent and that the transitions
feel smooth and natural. Any sudden jumps in aural intensity can jar the player and alert the player to
the artificial nature of the music system
so that’s going to be something we
need to avoid. Let’s look at another situation in which horizontal resequencing
is used, and this is one of the trickier situations
for a game audio professional. Can a licensed track be
adapted into this system? Now clearly if a composer
is creating custom music for horizontal resequencing then
the resulting composition is going to be structured
in an ideal way. So is it possible to make a
traditional linear music track work within a horizontal
resequencing model? In the game Lumines
from Q Entertainment, the music system
integrated licensed music into an interactive framework that employed horizontal
resequencing. Let’s see how the audio
team made that work. In Lumines, players
manipulate falling blocks. The mechanic is somewhat
reminiscent of Tetris. Line up the blocks to
form solid colors and they disappear, awarding
the player with a point bonus. Now the audio team at Q Entertainment
made the music respond to those point bonuses in
a really interesting way. Let’s take as an example the
track Shining by Mondo Grosso. It’s an electronica track
and it’s built around lots of repeating patterns which
actually makes it pretty ideal for this interactive
music system. In the game the music would
often loop a short section of the track repeatedly until the player had
progressed sufficiently to trigger the music
to move forward. We’re about to listen
to the track here. Let’s pay particular
attention to the lyrics. The first phrase is world of
silence creep in sightless time. And the second phrase is
[inaudible] of fatness, sleep in flightless mind. So let’s hear how that sounds. [ Music ] Now during gameplay, the player
would be manipulating those falling blocks and the music
system would be playing this song and then suddenly it would
start looping a short section of the song over and
over and over again until the player had
progressed sufficiently to trigger the music
to move forward. Now remember that first
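Here is a sketch of what that loop-until-progress logic might look like. The section boundaries, the progress threshold, and the seek call are assumptions for the sake of the example; Q Entertainment's actual system is its own implementation.

```python
# Hypothetical sketch: hold the music inside a short looping section of a
# licensed track until the player earns enough progress to release it.

class LoopUntilProgress:
    def __init__(self, loop_start, loop_end, progress_needed):
        self.loop_start = loop_start          # seconds into the track
        self.loop_end = loop_end
        self.progress_needed = progress_needed
        self.progress = 0

    def on_point_bonus(self, points):
        self.progress += points               # cleared blocks feed the trigger

    def update(self, playback_time, audio):
        if playback_time >= self.loop_end:
            if self.progress >= self.progress_needed:
                return                        # let the track continue forward
            audio.seek(self.loop_start)       # otherwise, repeat the phrase again
```

When the threshold is finally crossed, the song simply plays on past the loop point, which is why moving forward feels like a reward. Now remember that first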
phrase world of silence, creep in sightless time. That’s the phrase
that’s about to repeat. [ Music ] Now let’s see that interactive music system in action in the game. [ Music ] Using a licensed track, this horizontal resequencing
music system gave players the opportunity to feel a
burst of accomplishment when the music moved forward and
progressed into something new. So now we’ve discussed
the first model of music interactivity
we’re going to cover today: horizontal resequencing. So let’s move on to the second, vertical layering. What is it? Well, let’s start with a
comparison from the world of audio engineering: stem mixing. A stem is a recording of a musical performance that embodies some isolated part of the overall composition, let’s say the lead vocal in a pop track or the French horn section in an orchestral piece. By recording these elements separately, we get to have enhanced control over those elements when we’re mixing the final recording. Now taken to the extreme, we might separately stem every
single instrument in an ensemble or conversely we might group
the instruments into subsets that represent a certain
percentage of the whole. The vertical layering
technique shares a lot in common with the stem mixing process. In vertical layering a music
composition is recorded into an audio file format but the whole performance isn’t
captured in a single audio file. Instead it’s sectioned out
into separate audio recordings that together embody
the whole composition, and we call it vertical because
we’re picturing the simultaneous components stacked on top of each other in a vertical tower, a little like the notes of a chord on a staff of music. The music is structured so that
the game’s audio engine can essentially act like a virtual
mixing engineer adjusting the volume levels of every single
stem, turning some on, some off. Now this is great because then the audio team can create triggering points in the game where layers get adjusted, activated, or deactivated according to what’s happening during the course of play.
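Here is a compact sketch of that "virtual mixing engineer" idea, with the layer names, the fade call, and the gameplay events invented for the example rather than taken from any particular game's audio engine.

```python
# Hypothetical sketch: a vertical layering mixer. Each layer plays in sync;
# game events simply fade individual layers in or out.

class LayerMixer:
    def __init__(self, audio, layer_names):
        self.audio = audio
        self.levels = {name: 0.0 for name in layer_names}    # all layers start silent

    def set_layer(self, name, level, fade_seconds=1.0):
        self.levels[name] = level
        self.audio.fade_layer(name, level, fade_seconds)     # assumed engine call

# Example trigger points wired to gameplay events:
def on_action_started(mixer):
    mixer.set_layer("drums", 1.0)
    mixer.set_layer("strings", 0.8)

def on_action_ended(mixer):
    mixer.set_layer("drums", 0.0, fade_seconds=3.0)
```

All of the layers keep running in lockstep underneath; only their volumes change, which is what keeps the result sounding like one continuous composition. Because this music is subject to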
these kinds of radical changes, it has to be structured so that
its isolated components are satisfying to listen
to by themselves. Now that’s the main difference that separates vertical
layering from stem mixing. When we stem in the traditional
recording environment, we’re intending to create
a final track in which all of the stems are
playing together, but that’s not necessarily the
case with vertical layering. Let’s take a look at an example. When I was hired to join
the music composition team for Little Big Planet
two, the developers at Media Molecule had already
created a pretty complex six-layer system for
vertical layering. So it’s a great example
to show off the power of the vertical layering
approach. Let’s listen to a brief excerpt
of one of the tracks I composed for Little Big Planet two. This is Victoria’s lab and
here all six layers are playing at once. [ Music ] Well you can hear there’s
a lot going on here. So let’s break it down. Six layers each with
distinct content. There is the drum layer. [ Music ] There is a layer
with a small ensemble of cute instruments
including Calliope, accordion, and beat-boxing. [ Music ] There’s a layer with a small
women’s ensemble singing in counterpoint. [ Music ] There’s a layer for an
orchestral string section. [ Music ] There’s a layer for a Soprano
diva and a gothic organ. [ Music ] And finally, there’s a layer for
rock guitars and [inaudible]. [ Music ] What makes this track
so flexible for interactive implementation
is the broad range of contrasts between the content
in each layer, and when you play them
separately they each project a distinct emotional atmosphere. But that changes as
the game engine begins to manipulate their
volume levels activating and deactivating the
layers and combining them into different subsets
according to what’s going on during gameplay
at any given time. Let’s listen to this
track in action. Here are a few excerpts
of gameplay from the Victoria’s lab level
of Little Big Planet two. [ Music ] With vertical layers, the
music can grow more complex or it can simplify down
into its most basic elements and still remain consistently
true to a single musical idea. Now we’re aware that we’re
listening to the same track but it seems to be
magically morphing in sync with our actions. Here’s another example. The videogame Portal
Two from Valve. Composer Mike Morasky
used vertical layers to sync the music with
the player’s actions. For instance, a low
and slow music layer for when the player
is navigating a series of platforms. [ Music ] And a high fast layer for
whenever the player jumps. [ Music ] Now here’s some gameplay
from Portal Two. Notice how the slow
first layer is joined by the fast second layer
whenever the player jumps. [ Music ] There are two approaches to the
construction and implementation of a vertical layering system,
additive and interchange. In an additive model of
vertical layering the layers are structured so that even if the game engine plays every
single layer simultaneously and at full volume, the
final result is still going to be pleasing to listen to. Now the Victoria’s lab example from Little Big Planet
Two was an example of the additive model, but it
was a fairly complex design. So let’s take a look at something that’s
a little bit simpler. When I was brought on to compose
the music for The Maw videogame, the developers at Twisted Pixel
Games wanted an interactive music system and they
asked me to design it. So I put together an additive
vertical layering system with three layers. Now since The Maw was a game about a one-eyed purple alien, I
kept the music broadly comedic. For instance, in the Luffer
Lands level there is a layer for slapstick situations. [ Music ] There is a layer for just
wandering around and exploring. [ Music ] And finally, there’s a layer for
when the Maw gets superpowers and starts firing lasers
right out of his eyes. [ Music ] Since this is an
additive composition, all three layers needed
to be complementary when they were played together. So let’s watch some
gameplay excerpts to see how this worked
during the Maw videogame. [ Music ] Now the opposite of
an additive model in vertical layering is a
composition in which not all of the layers are
designed to coexist. Instead they replace each other
like interchangeable parts; hence the name the
interchange method. Interchange construction is
not as commonplace as additive because it’s not as flexible. If some of the layers
can’t be played together that does reduce the number of interactive combination
possibilities. However, we do encounter the
interchange model occasionally and in one of my own projects,
Spore Hero from Electronic Arts, I created an interchange dynamic
composition in vertical layers. The Spore Hero videogame is
a science fiction adventure and it begins with a menu system
including three gameplay modes. The main menu, the battle mode,
and the spore [inaudible]. Now for this menu system, I
composed an interactive track in vertical layers according
to the interchange model. Here is the main menu layer. [ Music ] Now here is the battle
mode layer. [ Music ] And finally, here is
the [inaudible] layer. [ Music ] The interactive music
system was designed so that the layers could play
together creating a consistent atmosphere that avoided
choppiness. For instance, when
going from the main menu to the battle mode, the music
of the main menu would continue to play softly underneath the
battle mode layer; however, the [inaudible] layer
never coexisted with the music of the main menu. Those two layers were interchangeable, but they were never simultaneous.
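As a sketch of how an interchange rule set like this might be expressed, here is a small compatibility table driving the layer changes. The mode names (including "creator" for the mode that was inaudible a moment ago) and the audio calls are stand-ins; this is not the actual Spore Hero implementation.

```python
# Hypothetical sketch: an interchange model in which some layers may overlap
# during a transition while others must replace each other outright.

COMPATIBLE = {
    frozenset({"main_menu", "battle"}): True,    # menu layer ducks under battle music
    frozenset({"main_menu", "creator"}): False,  # these two layers never coexist
    frozenset({"battle", "creator"}): False,
}

def change_mode(audio, current_layer, new_layer):
    if COMPATIBLE.get(frozenset({current_layer, new_layer}), False):
        audio.set_layer_volume(current_layer, 0.3)   # keep the old layer playing softly
        audio.set_layer_volume(new_layer, 1.0)
    else:
        audio.fade_out(current_layer, seconds=1.0)   # interchange: swap, never stack
        audio.fade_in(new_layer, seconds=1.0)
```

Whether layers duck underneath each other or swap outright is the whole practical difference between the additive and interchange approaches. Let’s see how that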
worked in the game. [ Music ] So now that we’ve
explored vertical layering, let’s take a look at an issue
that can be pretty problematic for a game audio pro. Can a linear non-interactive
track be adapted into a vertical layering
music system? Well we know that custom
music works but what if the track wasn’t
written to be interactive? What would it take to
make such a track work within an interactive system
such as vertical layering? We can actually look
to Hollywood for some inspiration here. Sometimes when a
music supervisor on a film selects a pop track for licensing, the supervisor will ask the artist to supply the music
in its original stems. This is so that a custom mix
can be created that’s going to sit well within
the final film. This is also a good strategy
for a game audio designer trying to inject a little bit of vertical interactivity
into a linear track. The developer Queasy
Games used this approach to integrate linear tracks into
a vertical layering music system for their Sound Shapes game. So let’s see how the
audio team made that work. First, they connected with the artist Beck, obtaining three unreleased tracks from him, which they then used as interactive music for
three stages of their game. Here’s a short sample
of one of those tracks. This one is called Cities. [ Music ] The audio team connected
specific layers in the music with specific stages so that
when the player progressed from one area of the game to
the next, associated layers of the music would
activate or deactivate. So let’s take a quick look
at how that functioned. [ Music ] So players of the Sound
Shapes game got the chance to feel the burst
of accomplishment as they progressed and uncovered
different aspects of the music by virtue of its vertical
layering music system. Now we’ve covered the
two big methods that fall under the recorded music approach: horizontal resequencing and vertical layering. Most games that aspire to incorporate interactive
music usually opt for one of these two methods or
even a combination of both. For instance, in Spore
Hero I created tracks in both the horizontal
resequencing and vertical layering models. However, there is
a second category that we haven’t covered just yet
and it has both a long history in game design and an
interesting future. Music data. When we’re talking about
music data we’re specifically addressing music that
hasn’t been fixed into a recorded audio format,
hasn’t been rendered into a file like WAV, OGG, MP3 and so on. Instead, the music exists as a
file containing a digital record of a musical performance and an accompanying sound
bank consisting of instruments that will be triggered
by that performance data. Now after hearing that
description most composers and game audio pros are going
to instantly think of MIDI, otherwise known as Musical
Instrument Digital Interface. It’s a data format in heavy use in music production
and composition. It’s been a mainstay
in recording studios since the early 80s, but to understand
MIDI better let’s look at a comparison from
the year 1917. [ Music ] Igor Stravinsky is well
known for his masterworks for orchestra such as The Rite of Spring and The Firebird, but he also liked the idea of
creating music for an instrument that didn’t require
musicians at all. Instead, preserving a record of
a musical performance as data. We were just listening to a player piano perform
Stravinsky’s Étude pour Pianola. Notice the piano roll; the perforations embody the
instructions for note events in the musical composition. Today our modern MIDI software
applications take their visual look directly from piano rolls. The note events are
recorded as data and they trigger musical
instruments or sound banks to emit their sound according to the instructions
they’re receiving. Now, here’s where
things get interesting. If the music exists as
a set of instructions and a digital sound
library, then it’s possible for a videogame’s programming to manipulate those instructions directly, fiddling with every single instrument and manipulating their performances in lots of different ways.
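To make the idea of performance data concrete, here is a tiny sketch of note events represented as data that a game could modify on the fly, for example transposing a melody or handing it to a different instrument. The event structure is deliberately simplified for illustration and is not any particular engine's MIDI implementation.

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: note events stored as data, like a simplified MIDI track.

@dataclass(frozen=True)
class NoteEvent:
    start_beat: float
    pitch: int          # MIDI note number, e.g. 60 = middle C
    velocity: int
    instrument: str

melody = [
    NoteEvent(0.0, 67, 100, "flute"),
    NoteEvent(1.0, 69, 100, "flute"),
    NoteEvent(2.0, 71, 110, "flute"),
]

def transpose(events, semitones):
    return [replace(e, pitch=e.pitch + semitones) for e in events]

def reassign(events, instrument):
    return [replace(e, instrument=instrument) for e in events]

# A game could shift the theme down a fourth and hand it to the strings
# when the mood darkens, without touching any audio files.
darker = reassign(transpose(melody, -5), "strings")
```

Because the score is just data plus a sound bank, changes like these cost almost nothing at runtime. Videogames were using MIDI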
extensively in the early 90s, and one of the most
iconic examples of this is an early PC game
called Monkey Island Two, LeChuck’s Revenge. Developed by LucasArts, the game was released in 1991, and it featured the LucasArts-patented music system known as iMUSE, or Interactive Music Streaming Engine, which was used by the music composition team at LucasArts. The iMUSE system could add
melodies or subtract them, change which instruments
were playing which parts, adjust the tempo and the key
or even change on a dime from one track to another
using seamless transitions. Let’s take a look at this
system in action as it switches from a general track for
outdoor exploration to the music for the inside of a bar. [ Music ] This videogame soundtrack is now
over two decades old; and yes, it certainly does show its age. But the music system was
revolutionary and it led to a lot of the interactive
music innovations that shaped the scores
of modern games. Ideas such as vertical layering and horizontal resequencing
were first tried using MIDI, and the format itself hasn’t
completely faded away although its use has certainly
dwindled over time. In the early 2000s, platforms such as the Nintendo DS
made heavy use of MIDI and that led me to my
first MIDI project, Shrek the Third for
the Nintendo DS. Activision had already hired
me to compose the music for the console version of the
game, so when they also asked me to create an all
new musical score for the Nintendo DS version I
got the chance to learn a lot about creating a
MIDI game score. For Shrek the Third DS, I
was allotted only 1.5 megs of memory for the game’s score. That was both for the MIDI data and the accompanying
sound library. That’s a very small
memory budget. So it wasn’t going
to be possible to create a lushly
atmospheric score with that. Instead I concentrated
on a quirky mix of retro synth elements
that could cope with the low-resolution sound
files and the restrictions of the Nintendo DS hardware. For each track I
composed a backing groove and three interchangeable
melodies that would correspond with whatever character happened
to be active at the time: either Shrek, Puss in Boots, or Arthur.
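A layout like that might be wired up roughly as follows; the file names and the sequencer calls are assumptions for illustration, not the actual Shrek the Third DS data.

```python
# Hypothetical sketch: one persistent backing groove plus a melody track that
# is swapped to match whichever character is currently active.

CHARACTER_MELODIES = {
    "shrek": "melody_shrek.mid",
    "puss_in_boots": "melody_puss.mid",
    "arthur": "melody_arthur.mid",
}

def on_character_changed(sequencer, character):
    sequencer.keep_playing("backing_groove.mid")                    # groove never stops
    sequencer.swap_track("melody", CHARACTER_MELODIES[character])   # melody follows the hero
```

Because only a small melody sequence changes, a swap like this costs almost no memory, which matters with a 1.5 megabyte budget. So here’s an example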
of that system. [ Music ] Since the release
of the Nintendo 3DS, MIDI scores have largely been
supplanted by audio scores for Nintendo’s handhelds, but MIDI hasn’t entirely
fled the scene. For instance some major
Nintendo console games in the modern era still
feature MIDI scores and this includes
games from some of Nintendo’s biggest
franchises. Here’s an interactive MIDI-based score from the music team of The Legend of Zelda: Twilight Princess. [ Music ] Also in recent years, some
games have included tools that allow players to create
music for their own levels. These games typically feature MIDI sequencers, which are MIDI-based applications that record note events as data and then play them back. Players use these sequencers to create music for their own levels, and the original composers from the games’ music composition teams would also use these same sequencers to create music for the games. Usually characterized by
an electronic sound palette, this music can take advantage of the extensive adaptive
possibilities that are inherent in the music data format. Let’s see an example of this
sequencer-based game music. This is from Sound Shapes. The music is Extraterrestrium
by the artist known as I Am Robot and Proud. Notice how the music
reacts to the gameplay. [ Music ] Now let’s take a quick
look at another example of this sequencer
based interactive music data composition. This time from Little
Big Planet Two. This is Wood Ringer
by the artist Byan. Watch how the music system
triggers the bassline to begin just as the player
is swinging across the chasm. [ Music ] Now let’s talk about
another music data format that was tremendously
popular in the early days of game development
and it’s still in use in modern games today. In early videogames, the MOD, or module, format was
popular for a lot of reasons. It has a lot of similarities
to MIDI. Both file formats consist
of music data designed to trigger a sound bank. However, the MOD format allows the sound bank to be compressed into extremely small file sizes, smaller than are possible with traditional MIDI. Plus, the sound bank can be incorporated directly into the MOD file and doesn’t have to exist separately. And finally, MOD files can
store their music data as chunks of numbered musical patterns
and these can be reorganized and shuffled around as
the audio team sees fit. One of the earliest
examples of this can be found in the simple music
interactivity of the videogame Pinball
Fantasies created by [inaudible] and the team at Digital
Illusions. Let’s watch a little gameplay
from Pinball Fantasies. Notice how the music
changes quickly in response to the Pinball’s
trajectory and speed. [ Music ] The Mod format is
still being used in modern videogame
scores particularly in the casual games sector. A good example of this
is Bejeweled three from Popcap Games. Composers Alexander Brand and then Peter Hodgebug used the
mod format to incorporate layers and tempo changes into the music
they composed for this game. In the game’s lightning mode, the tempo of the music accelerated very slowly over time.
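A gradual tempo ramp like that is easy to express when the score is pattern data rather than recorded audio. Here is a small sketch; the rate and the cap are invented numbers, not the values used in Bejeweled 3.

```python
# Hypothetical sketch: nudge the playback tempo upward a little each second
# during a timed mode, as a MOD/tracker-style score allows.

def tempo_at(elapsed_seconds, base_bpm=120.0, bpm_per_second=0.25, max_bpm=160.0):
    return min(base_bpm + bpm_per_second * elapsed_seconds, max_bpm)

for t in (0, 30, 60, 120):
    print(t, tempo_at(t))   # 120.0, 127.5, 135.0, 150.0
```

Because the notes are re-triggered from data, speeding up the clock doesn’t change the pitch the way speeding up an audio file would. So let’s look at a video that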
shows a few slices of gameplay so you can see how
the tempo changed. [ Music ] So now we move on to the final
category in our exploration of music data in games. Let’s return for a moment to those musical dice games
we were talking about before. You remember, the rules
allowed for the creation of the interactive music from
a set of musical fragments that were shuffled
according to the roll of dice. We’ve discussed a bunch of interactive music
techniques today and most of them could be compared
to those musical fragments from that 18th century
dice game. But generative music
is different. It actually shares more
in common with the dice. Generative music is founded on
the concept of indeterminacy. The insertion of chance into
the way in which music unfolds, and the ability to randomize a track’s musical components in order to keep the listening
experience constantly fresh and new. And to better understand
this idea, let’s start by examining one of the very earliest
examples of generative music. In 1985, composer Peter Langston
invented a system he called riffology for the
opening title screen of the videogame Ballblazer from Activision. Now riffology is a
nice demonstration of the core principles of
generative music in action. Mathematical probability
resides at the very heart of the riffology system, a
set of conditional statements that sort of resemble
your classic if-then. So let me give you a theoretical example of such an if-then statement. Let’s say we have a piece of
music and it’s in G major. The melody just sounded in
F sharp, so what comes next? Well, that depends on how the
probability rules have been defined in the system. So let’s say if the
melody has just sounded in F sharp, then a G natural will follow it 80% of the time, the next note will be a D natural 15% of the time, and an E natural will
only sound 5% of the time, and there you have the core
idea behind generative music. And also the reason why it’s
sometimes called algorithmic composition, since it’s
essentially a procedure for the calculation
of musical variables. For Peter Langston’s riffology,
the system would choose from a set of 32 melody fragments, and it would calculate which fragment was going to come next depending upon the conditional statements that Peter Langston had built into his system.
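Here is a minimal sketch of that kind of probability rule, using the 80/15/5 example from a moment ago. The rule table is only illustrative; Peter Langston's riffology weighed whole melody fragments rather than single notes.

```python
import random

# Hypothetical sketch: choose the next note from a weighted probability rule,
# the core "if-then" idea behind a generative score.

NEXT_NOTE_RULES = {
    "F#": [("G", 0.80), ("D", 0.15), ("E", 0.05)],   # after F#, G is most likely
}

def next_note(previous):
    choices = NEXT_NOTE_RULES[previous]
    notes = [note for note, _ in choices]
    weights = [weight for _, weight in choices]
    return random.choices(notes, weights=weights, k=1)[0]

print([next_note("F#") for _ in range(10)])   # mostly G, occasionally D or E
```

Swap the single notes for 32 melody fragments and you have the rough shape of the riffology system. Now in this way, the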
melody would spin out very unpredictably. So let’s hear how that sounded
during the opening title screen of the videogame Ball Blazer. Notice the foreground melody
in the upper register carried by a high-pitched tone. [ Music ] Well, that fast-paced melody in the high-pitched tone wasn’t particularly striking, but it was unpredictable, and that’s really the point. Generative music doesn’t offer
us a promise of memorable tunes but it does create melodies
that avoid repeating themselves, and that addresses the issue of repetition fatigue that we were talking about earlier. Now where generative music
becomes interactive is in the involvement of the
player in changing the variables that the algorithm uses to
calculate musical probability. So let’s look at
a more modern game that includes a generative
score. In one of the levels of
the Electroplankton game from Nintendo, the player is
tasked with changing the angle of the leaves, altering the
trajectory of little red fruits that emit bell-like tones as
they bounce off the foliage. Let’s see how the music
team made that system work. [ Music ] Electroplankton. Now as you could see, the
result is a little reminiscent of windchimes which are
essentially generative musical instruments themselves since
their tones are generated by probability factors
and chance. But let’s take another
look at a modern game that incorporated a generative score. In the Spore videogame from
EA Maxis, players start as single-celled organisms in a
process of simulated evolution. In this gameplay video you’ll
hear that the music team at EA Maxis created
a foreground melody with a randomized quality
a little reminiscent of Ballblazer, and you’ll also
notice how the music grows in complexity as the
player progresses. [ Music ] So we started with Frogger, a perky amphibian, and we’re concluding with Spore,
another lively aquatic creature. So essentially we’ve ended
right where we began. You’ll remember we began this talk with the goal of exploring the interface between music composition
and videogame design. We’ve looked at a lot
of different systems. We’ve explored their
structure and their uses. I’ve shared some of my personal
experience as a composer of interactive music
for videogames and we’ve also checked out a lot of other interactive
game music systems from both the past
and the present. We’ve looked at the recorded
music approach checking out both horizontal resequencing
and vertical layering, and we’ve also explored the
music data method expressed via either MIDI, MOD, or by virtue
of a generative music system. Now all of these various forms of interactive music
don’t exist in isolation. Traditional linear
music also has its place and there are times in which
linear music really is the only viable solution particularly
during narrative story-driven moments of a videogame. Linear and interactive
music can work together. Since the 18th century, people
have enjoyed playing games that give them the
illusion of control over the structure
of a piece of music. Players like the feeling
of direct interaction with a musical composition. So when game composers are
constructing an interactive music system, our
goal is to increase that sensation of power and fun. To make the music feel reactive
to the player’s every move. Now when it’s all said and done,
our first responsibility is to meet the music
needs of the game and entertain the people
who are playing it. We strive towards musical
excellence using whatever tools we have at our disposal. Now I hope you’ve enjoyed
learning about the interface between music composition
and videogame design. Thanks very much. [ Applause ] Thank you. Well now we’ve come to
the Q and A portion, so if anyone has any
questions please remember to share your name and
speak into the microphone. It’s going to be coming around. Also, if anyone would like
to speak further after this, the Library of Congress
shop has arranged for a book signing that’s going
to take place right over there so please feel free
to join me then. So if anyone has
a first question?>>So I was wondering, I
know copyright’s a big thing. But like you were saying
that you usually have to take from other games
even in the past. So I was wondering how
do you approach that to where you can change that so that you can avoid having
the copyright issues at least from your experience?>>Winifred Phillips: Oh, well,
copyright is a very interesting and important aspect
of our work. I mean a lot of games are
structured specifically around the idea of
using licensed music, and I’m sure you can think of a lot of Electronic Arts sports games in which music is introduced, and for the first time you find a band that you really love by playing one of
those sports games. That’s been a really big
avenue for young artists to make their start
and it’s been great for the music community
at large. But that’s something
that especially a company like Electronic Arts is really serious about: obtaining the rights and compensating artists in a way that makes sense and is
good for their careers. So the copyright is something
that is very important. No game developer wants to find out that they’ve used a piece of music and then discover that they haven’t secured the rights appropriately, so that they can’t move forward with it. Particularly if you’ve
fallen in love with a track and you’ve incorporated it into
your game and oh God forbid, you’ve actually structured
your gameplay around it and then you find
out you can’t use it. So yes, copyright is super
important, and it’s great that the Library of Congress has
served the artistic community for so long in making sure
that artists are protected and that this kind of
creativity can continue in a way that satisfies both
our audiences and also the artists
making the music. Thank you.>>Hi, my name is Ed. What challenges do you face
or conventions do you follow when you are mixing down say
like a dynamic piece of music for a retail CD release
or promotional material? How do you make that stay
interesting to a listener if it’s not as dynamic
as it once was?>>Winifred Phillips: Ed,
that’s a fantastic question. Thank you so much. You know, that’s a
really interesting process because a lot of the times
when you’re composing music for a game, you’re
essentially composing a lot of different bits and pieces. And you know that during
gameplay they’re going to be triggered by the
progress of the player. So it’s essentially a flexible
fluid story that might happen. But you have all of these
bits and pieces and afterwards when I’m creating a
soundtrack and I’ve done that several times now for
various games, I try to think about the most impactful course
that the player might have taken through that level, through
that piece of music and all of the bits and pieces that
are associated with it, and then I will construct in
my music production software, kind of like an ideal course. An ideal way to go
through it, and I’ll mix it so that it’s a memory of the
experience of playing that game. A lot of the people who buy
these soundtracks are people who have played the games and
they want to own the music because they want to
relive their experience, and I take that seriously when I’m doing this kind of work creating a soundtrack. I want to create music that’s
going to light up their memory and give them that kind
of warm and happy feeling of having played that game. So that’s what I’m thinking
about when I’m pulling all of the interactive
elements together. I want to create a sort of
ideal listening experience. You’re welcome. Does anyone have
any other questions?>>Hi, my name is
Vincent Vasana. My colleagues are — we’re
from the Peabody Institute in Baltimore studying film
and videogame scoring. So a couple of questions
if you don’t mind. A lot of us, I’m sure, have had
the question, how did you kind of get into the industry, kind
of break into the industry? What was your start? How did you go about all that?>>Winifred Phillips: That’s
actually a really interesting question, Vince. My experience was
a little unique. Before I got into the
videogame industry I was working on a series called Radio Tales
for National Public Radio. So that was very different. It was like dramatizations of classic stories from literature and mythology for the radio
and that was a lot of fun. So that was my first job as
a musician, as a composer. I was creating over 100
episodes of that series but I’d always been a gamer. I’d been a gamer ever
since I was a kid, and it was a really
important thing for me. I remember the day when
I was playing Tomb Raider and it just — the
lightbulb went off in my head about the idea of actually
creating music for games. And so, I kind of got obsessed
with it and I started working on trying to pursue it. Finding out who the people
are at game studios and trying to reach out to them,
letting them know the kind of work I had done before. I had no game credits. Of course that’s the thing. It was an uphill battle, but my timing actually was
very good in one instance. I reached out to
a music supervisor at Sony Interactive
Entertainment right when he was trying to build
a music team for God of War. And I had a skill I could offer because I’m also a
classically trained vocalist. So he was looking for someone who could create choral
tracks for God of War. Any of you who have played it, you know that those
big choral moments are such an important part of
the God of War experience. So that was something that
I could offer; and so, he asked me to meet him at E3 and we had our first
meeting there. And at the same time when I was
going to E3 to meet with him, I also met with a whole
bunch of other people because I thought oh if
I’m going to be going to E3, is everyone familiar? Electronic Entertainment Expo. It’s a big convention in Los
Angeles that’s held every year for the videogame industry. So I thought if I’m going to this big convention
I’d better also try to see if I can make some more work
happen; and so, I was meeting with a lot of developers. And I also met with
the developers of the game called Charlie and the Chocolate
Factory at the same time. It was a tie-in to
the Tim Burton film. And so, after that meeting
ended and I came back, I did some demo music for both
projects for both God of War and for Charlie and
the Chocolate Factory, and I was hired for
both at the same time. Yes. So I mean you can think
that’s a hugely divergent thing. One is like light and whimsical
and full of imagination, and the other one is dark and
brooding and full of angst. But it was kind of cool for
me though because then I got to start my career with
two very different tracks. And from then on, that’s basically been the
way my career has been. I have swung back and forth
from let’s say kind of a grand and dark orchestral
project like Dragon Front or Assassin’s Creed,
and then I’ll swing back to something light and airy
like Spore Hero or The Maw. And I’ve gotten to
do that ever since. It’s been actually a lot of fun. But my experience
has been unique. I can’t think of a
lot of other composers who started out like that. It’s partly just how
fortunate I was in my timing. And to answer your question, Vince, you really do need to just keep with it and
to study the industry, to try to understand what’s
happening, to go to conventions and conferences like the
Game Developers Conference, the Electronic Entertainment
Expo, and to just try to make yourself known
and to reach out, and eventually your
timing is going to be good and you’re going to
make that connection with somebody who’s looking
for what you can provide, and then you can just kick
off your career from there.>>I have one more
question if you don’t mind.>>Winifred Phillips: Sure.>>How often do you
use virtual libraries in comparison to
a real orchestra? Do you use virtual
libraries more for mockups or do you use them more for scoring a game or something like that?>>Winifred Phillips: Yes. That’s a great question. Orchestral sample libraries
are something I use fairly frequently and it depends
very much on each project. Now there have been projects
where I used them just in the mockup stage
at the beginning and then the project has gone on
to record with a live orchestra. So that’s always fun. But then there are
other projects where the budget is just not
going to accommodate that, and one of the things
that’s important to me as an artist is the option
of being able to work with both large studios
and also indie teams that don’t have the
same kind of budgets. It allows me to do a
wide range of projects. They’re exciting and creative. You get to work with
young developers who are really hungry, and they
have pie in the sky dreams. But then you can also
work with studios where there’s a large
infrastructure and a lot of resources. So I swing back and
forth with that too. I’ve worked with both the
orchestral sample libraries and the live musicians. I find that both are difficult disciplines, but very different in terms of their challenges. To make an orchestral sample library sound satisfying and realistic requires minute attention to detail and a really good understanding
of how sample libraries work and how live musicians play
so that you can approximate that sound in a way that’s going to feel satisfying
for listeners. And on the other hand
when you’re dealing with a live orchestra,
you really want to be able to take advantage of the
strengths of that medium and appeal to the expressiveness
that a live orchestra or live soloists can bring.>>Thank you.>>Winifred Phillips:
You’re welcome. Oh, do we have any
other questions? Oh, over here.>>As someone who’s
played musical instruments but has never composed, if I
were to try to make an indie game or something and I wanted
to make my own music, what kind of recommendations
would you have?>>Winifred Phillips: You know, that is actually something
that’s happened quite a bit, and I’ve seen development teams with the main developer
also creating the music, and there have been some
really interesting games that have been created that way. One of the things that’s cool is that if you are a musician creating a game, you’re kind of creating the game with a sense of how the music fits into the mechanics of the game, since you’re doing both. I think that’s actually
really neat. So that’s something that I’ve
seen done, but I do think that if you haven’t composed
before it might make sense to try to start doing some
of that first before you try to bring those two
elements together. I mean they’re very different
disciplines and you want to have a basic skillset
that you’re comfortable with before you start taking
something that’s hard in and of itself, music
composition and then adding into it something else that’s
hard by itself, game design. You don’t want to
get overwhelmed. So the opportunity to just
create music on its own or if you are familiar with
any student teams or teams that are involved in game jams, oh, for anyone who isn’t familiar, a
game jam is one of these events in which all of these game
developers come together to create games on the fly really quickly, which is actually
kind of a lot of fun because it becomes this very
festive kind of party atmosphere and they all create these very,
very quickly, and it’s a way to be very creative and solve
problems right on the spot. But it’s also a fantastic
opportunity to jump into a team right away as the
composer and get an opportunity to just think about that part
of the game, to create the music within the structure of a team. And I think you’d learn
a lot about what goes into music composition for
games by doing that first. Before you know it, you’ve put up like a thousand-ton weight on yourself by doing both at the same time.>>Thanks.>>Winifred Phillips:
You’re welcome.>>Question. My name is Ira. I’m interested to know, given that gaming consoles and other platforms have grown so much in terms of their capacity. What limits do you feel
are placed now or still on your budget and what’s
your response then in terms of strategies that
you’ve [inaudible] to increase your expressiveness and what resources are
available while still remaining within the resources allowed?>>Winifred Phillips:
Great question. Yes, that’s something that
definitely becomes an issue from project to project. You know I honestly can’t think
of a single project I’ve worked on where budget doesn’t
become a factor even when the budgets are very large because there’s always
the opportunity to just have your ambitions
explode beyond the boundaries that any game budget
can accommodate. So we’re always thinking
about the logistics of it but since I work on a lot of
different kinds of projects, I work with big teams and small
teams, I’m constantly coming across the idea that the
music budget isn’t going to accommodate a more
ambitious musical score. So at that point you sort
— you triage the situation. You look at it and you say, “How
can I maximize the potential of a smaller amount of music
so that it can provide coverage for a game and not become too
repetitive, too annoying?” You don’t want your music
to become a negative. You always want it to be
something people love. So there are lots
of ways to do that. Sometimes the music
needs to be broken apart. When I talked about
vertical layering, the idea of music broken apart
into its individual layers and then they can
be used separately. So that makes one piece
of music more flexible to cover a larger
amount of time. It can morph and change
and become more adaptive to what’s going on and that
way you can use that same piece of music for longer and it
can still be satisfying. It doesn’t feel repetitive. On the other hand, there is also
the quite valid consideration of when music should
settle back into silence to give the player room
to absorb the moment when it should come in. Spotting a game and
trying to determine where the music should
be is also a way in which a small budget can
accommodate the music resources. If you strategically place
music in those positions where it’s going to have
maximum impact, where it’s going to be meaningful, then
you’ve used your music smartly, and you can still have a
satisfying musical experience in a game without
needing an enormous budget in order to accommodate it. So those are two approaches that can address the problem, but it’s a continuing problem. Every game development
studio wrestles with it at one time or another. I mean, we’ve all got
the hope and the yearning to do something really
special with music, and a lot of the times we can. And sometimes the restrictions
can make us be creative in ways we couldn’t
have predicted. And then that is a way to
grow as a development team or as an artist, as a composer. So these things, these
challenges can serve to make us grow and
become better. So if anyone has any
further questions or if not, then I guess that we might wrap
up here and thank you so much for coming to this talk. I really have appreciated it. You’ve been really kind to
me and thank you very much. [ Applause ]