
LOST CHOCOLATE LAB

0 A.D. Audio Development
In March of 2005 I signed on as Sound Designer for 0 A.D. ("zero ay-dee"), a historical RTS currently in production at WildFireGames.
Saturday, January 07, 2006
Programming Ambient Sound
During one of the recent programmer meetings, the ideas for how to handle playing ambient sounds were discussed, and what follows is a bit of my response to the thinking that went on during that meeting.

While I've been digesting this fantastic programming discussion regarding audio...I realized that it is much like hearing someone speak a foreign language: very beautiful sounding, but at times hard to understand!

I do think I've grasped most of the concepts that were tossed around, and hope to summarize and show examples related to the questions and proposed audio implementation scheme.

The main issues covered:
1a. Identify Visible Entities
1b. Decide what to play
(C++ to JavaScript (JS))


This seems right to my feeble mind...check to see what's "on-screen" and pass the list to JS to decide what to play. Outside my scope to fully understand, but I get the concept.
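Just to picture it, here's a tiny sketch of the kind of thing I imagine the C++ side doing; the Entity struct and the "isOnScreen"/"notifyScript" hooks are placeholders of mine, not the actual engine code:

```cpp
#include <functional>
#include <vector>

// Sketch only: collect the IDs of entities inside the camera frustum and
// hand the list to the JavaScript side, which decides what to play.
struct Entity { int id; float x, y, z; };

void UpdateAmbientAudio(const std::vector<Entity>& entities,
                        const std::function<bool(const Entity&)>& isOnScreen,
                        const std::function<void(const std::vector<int>&)>& notifyScript)
{
    std::vector<int> visible;
    for (const Entity& e : entities)
        if (isOnScreen(e))          // stands in for the real frustum test
            visible.push_back(e.id);

    notifyScript(visible);          // JS picks what ambient sounds to trigger
}
```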

2. ambientGroup: a soundGroup of several sounds; one is picked at random to be played (see below).
Implement the soundGroup functionality, which involves implementing the sound XML loader.

See: 0 A.D. XML Scheme

Seems good; hopefully no surprises with this one, and it all seems to still be in order.
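Purely to illustrate (the XML in the comment is my guess at what a group definition could look like, not the actual schema, and the loader itself is omitted):

```cpp
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Sketch of a soundGroup: several files, one picked at random each time
// the group is triggered. A group definition might look something like:
//
//   <SoundGroup name="forest_ambient">
//     <Sound file="birds_01.ogg"/>
//     <Sound file="birds_02.ogg"/>
//     <Sound file="wind_trees.ogg"/>
//   </SoundGroup>
//
struct SoundGroup
{
    std::string name;
    std::vector<std::string> files;   // filled in by the sound XML loader
};

// Assumes the group has at least one sound in it.
const std::string& PickRandomSound(const SoundGroup& group, std::mt19937& rng)
{
    std::uniform_int_distribution<std::size_t> pick(0, group.files.size() - 1);
    return group.files[pick(rng)];
}
```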

3. Proximity to listener / Weight
Full relative positional volume: the global sound setting is applied to all volumes, and each source's level is scaled by its proximity to the camera (the listener).

Good Good
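To show what I mean in the simplest possible terms, a linear falloff sketch; the cutoff distance and the math are placeholders, not the engine's actual attenuation model:

```cpp
#include <algorithm>
#include <cmath>

// Sketch: scale a source's gain by how far it is from the camera/listener,
// then apply the global volume setting. maxDistance is an invented cutoff
// beyond which the source is silent.
float ComputeGain(float srcX, float srcY, float srcZ,
                  float camX, float camY, float camZ,
                  float globalVolume, float maxDistance)
{
    float dx = srcX - camX;
    float dy = srcY - camY;
    float dz = srcZ - camZ;
    float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

    // Full volume up close, fading to silence at maxDistance.
    float falloff = 1.0f - std::min(distance / maxDistance, 1.0f);
    return std::clamp(falloff * globalVolume, 0.0f, 1.0f);
}
```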

4a. Listener (camera) position.

Ok, there was quite a bit of debate on listener position (as I believe it's called in OpenAL) and defining the 3D distance cutoff. Heavy lifting like patches and frustums and other stuff that goes right over my head. ;)
So, let me try to get the concept out in plain English, and then we can work our way back to a solution/definition of these things as it relates to programming.

My understanding:
OpenAL determines your speaker configuration and decides where to place things in space positionally, either in surround (3D) or by "mixing down" to stereo.

It was wildly debated how to position the listener and how to define "screen center": using the bit of terrain at screen center, strange listener positions (like looking up at the sky)... and I hope the following example of how I think it should work might simplify it.

(Oh, and that blue thing in the middle is the listener's head, facing forward.)

Vertical Listener Orientation:


Horizontal Listener Orientation:


Thoughts?
Mine:
This is a pretty extreme viewing angle, great for cinematics (and hardcore Alfred Hitchcock-style RTS players)... should be really interesting sounding!
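For what it's worth, placing the listener in OpenAL is pretty compact; here's a rough sketch of a camera-as-listener setup, where the 45-degree downward tilt is just an example value, not a decision:

```cpp
#include <AL/al.h>
#include <cmath>

// Sketch: put the OpenAL listener at the camera position and tilt it to
// look down at the terrain, roughly like the RTS camera in the mock-ups.
void UpdateListenerFromCamera(float camX, float camY, float camZ)
{
    alListener3f(AL_POSITION, camX, camY, camZ);

    const float c = std::cos(45.0f * 3.14159265f / 180.0f);
    const float s = std::sin(45.0f * 3.14159265f / 180.0f);

    // First three values: the "at" direction the listener faces
    // (forward along -Z and pitched down); last three: the "up" vector.
    ALfloat orientation[6] = {
        0.0f, -s, -c,
        0.0f,  c, -s
    };
    alListenerfv(AL_ORIENTATION, orientation);
}
```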

4b. Intensity (Importance)
Another hot one; essentially, how to keep the world from blaring all sounds at you at all times.
Various ideas to control the level of chaos were raised, including giving everything an "importance" that would be sorted out by C++ code (I think)... Without going too much into what went on at the meeting, let me throw out what had been discussed/defined previously and see if that clarifies how it could be handled.

In the post Audio Implementation: Intensity & Severity, the discussions and resulting conceptual screenshot mock-ups go like this:
Intensity thus far defined as:
If the number of sources on screen is less than 4, then play intensity 1 (mono)
If the number of sources on screen is 4-9, then play intensity 2 (stereo)
If the number of sources on screen is 10-20, then play intensity 3 (stereo)
If the number of sources on screen is 21-40, then play intensity 4 (stereo)
And so on...
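That table maps straight onto a tiny helper; here's a sketch using just the thresholds above (the function name is mine):

```cpp
// Sketch: map the count of same-type sources on screen to the intensity
// level to play, following the thresholds listed above.
int IntensityForSourceCount(int count)
{
    if (count < 4)   return 1;   // mono
    if (count <= 9)  return 2;   // stereo
    if (count <= 20) return 3;   // stereo
    if (count <= 40) return 4;   // stereo
    return 5;                    // "and so on..."
}
```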

Priority thus far has been defined as:
100 = emergency use only (undefined overhead)
90 = required (voiceover, "you are under attack" warning)
80 = rather important (battle sounds, death, destruction)
70 = rather important (building)
60 = not so important (resources)
50 = fluff (ambient - birds, etc.)

Allowing for priority ranges between values.

So, if C++ identifies the visible entities on-screen, what should happen next is (a rough sketch follows this list):
Calculate the number of same entities & determine what "intensity" sound should be played (C++ or JS?)
Calculate the priority of the entity/audio_group to help decide the "importance" of the sound (in relation to the rest of the sounds being played on screen, with regard to the max number of channels available) (C++ or JS?)
Play the sound specified in the XML for the entity or audio_group (JS)
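Here's how I picture that flow fitting together, strictly as a sketch; the struct, the channel cap, and where the work lives (C++ vs. JS) are all still open questions:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the selection step: each candidate carries one of the priority
// values listed above, and only as many as there are channels survive,
// highest priority first.
struct CandidateSound
{
    int priority;    // e.g. 90 = required, 50 = ambient fluff
    int intensity;   // which intensity variant to play (see helper above)
    int entityType;  // which entity/audio_group it belongs to
};

std::vector<CandidateSound> ChooseSoundsToPlay(std::vector<CandidateSound> candidates,
                                               std::size_t maxChannels)
{
    // Most important sounds first.
    std::sort(candidates.begin(), candidates.end(),
              [](const CandidateSound& a, const CandidateSound& b)
              { return a.priority > b.priority; });

    // Drop whatever doesn't fit in the available channels.
    if (candidates.size() > maxChannels)
        candidates.resize(maxChannels);
    return candidates;
}
```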


re: Building Ambiences...

If you are max zoomed out, there are likely to be over 30(?) of the same building on screen (in some instances); for this, a single special (looping) sound that represents 21-40 of that building will play.

(Unsure how it will sound, but the likelihood that the engine would play a single loop for each "type" of entity (and/or its action) is pretty good; then we would default to the priority scheme and possibly limit the number of channels at max zoom to the 80-100 priority range.) {Does that sound too difficult?}

Limiting what sounds should play & when is a concern at all zoom levels; you don't really want to hear the foresters when you're at war. Hopefully the priority scheme will enable us to make those decisions (a rough sketch of the idea is below). Let me know if this is unclear.
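For instance, the zoom-dependent cutoff could be as simple as something like this; the zoom ranges and threshold values are invented purely to show the shape of the idea:

```cpp
// Sketch: choose a minimum priority based on how far the camera is zoomed
// out. zoomLevel runs from 0.0 (fully zoomed in) to 1.0 (max zoom out).
int MinimumPriorityForZoom(float zoomLevel)
{
    if (zoomLevel > 0.9f) return 80;  // max zoom: battle/required sounds only
    if (zoomLevel > 0.5f) return 60;  // mid zoom: drop the ambient fluff
    return 0;                         // close up: everything is allowed
}

// A sound gets through only if its priority clears the current zoom's cutoff.
bool ShouldPlay(int priority, float zoomLevel)
{
    return priority >= MinimumPriorityForZoom(zoomLevel);
}
```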

Hope this helps clarify, and not confuse.

Again, my thanks go to the fellas who took up the torch and hashed out the difficult task of pulling the programming thinking together for the audio department. Dreaming it big, still a long way to go... but such brilliant minds, it is fascinating to watch them work.

Until next time, that's the stuff!
Damian
LCL