I have started working on the audio engine for WD. Developing an audio engine is completely new to me and, even though I studied it a bit during my training as a game developer, it is not as straightforward as some may think.
As you may know, there are two types of audio in a video game: sound effects (SFX) and background music (BGM). The distinction is not as shallow as it seems, because the two usually have different requirements to do their work properly.
SFX:

1.1. They must be played instantly (if you delay an effect the whole “illusion” is broken: think of hearing an explosion several moments after its animation has started);
1.2. They are usually short;
1.3. They are usually positional, that is, they change in relation to the position of the character hearing them.

BGM:

2.1. There is no need to start them instantly;
2.2. They are usually long and their memory footprint is large;
2.3. They are not positional.
These differences are reflected in the way several libraries and engines manage them. At the code level there are usually two different objects for the two types, and this is also true for the I/O library used by WD, SFML: sf::Sound (backed by an sf::SoundBuffer) for effects, and sf::Music for music.
SFXs are simple objects that are loaded entirely in memory at the start of a level and played when the time is right. No processing is needed: their content is sent immediately, raw, to the sound card.
BGMs, on the other hand, are “streamed” on demand. When required, a buffer is filled with data coming from the source file, and subsequent “chunks” are loaded in memory one at a time to give the impression of continuous playback. You can picture it as having two slices of the music: one that is playing and one that is always one step ahead, loading the next chunk.
To switch from one BGM to another you could use a fast crossfade, but Daniele wanted something different: to move seamlessly from one “phrase” (or movement) of the music to another and loop them together. He already described the concept in an earlier post. This increased the complexity of the system but, most importantly, rendered point 2.1 of the above list moot.
My current solution is to treat BGMs as SFXs inside the audio engine: everything is preloaded in memory, and high-precision counters are kept to move around the music seamlessly. My first tests seem promising.
In particular, I have kept track of memory usage, and the increase in footprint seems quite manageable. As with several other technologies in game development, I think indie developers should worry a lot less about optimization than big studios do, and I am trying to live up to my own advice. If we were targeting low-end mobile platforms things would be different, but today loading twenty or even one hundred MB of sound data in memory is no longer an insurmountable constraint if you target the PC as the distribution platform, and possibly even previous-gen consoles, depending on the type of game you are developing.
Thanks for reading.