- Simple cross-platform game engine - Introduction
- Universal Box2D debug draw for OpenGL ES 1.x and OpenGL ES 2.0
- Loading images under Windows
- Load images under Android with NDK and JNI
- Using JNI_OnLoad() in Android NDK development
Today I will write some notes on how to build a simple mixer for streaming audio. I will use OpenSL ES and pure C/C++ code (no Java needed). Building a simple sound mixer for Android covers two areas:
- working with OpenSL ES - its objects, interfaces and so on,
- creating some logic to fill buffers with data and send them to the output.
Initializing OpenSL ES takes quite a lot of code. OpenSL ES objects are first created, but no resources are allocated yet. According to the OpenSL ES 1.1 Specification (see it at the khronos.org site), an object is: "an abstraction of a set of resources, assigned for a well-defined set of tasks, and the state of these resources." To allocate the resources, the object must be Realized.
To access the features an object offers, you have to acquire an interface. Interfaces are defined as: "an abstraction of a set of related features that a certain object provides."
First we have to create the Engine object, which is the entry point into the OpenSL ES API:
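A minimal sketch of this call (the `mEngineObj` member name is my assumption) might look like:

```cpp
// mEngineObj is an assumed member of type SLObjectItf.
const SLInterfaceID ids[] = { SL_IID_ENGINE };
const SLboolean req[]     = { SL_BOOLEAN_TRUE };

// Create the Engine object; it is returned in the first parameter.
SLresult res = slCreateEngine(&mEngineObj, 0, NULL, 1, ids, req);
if (res != SL_RESULT_SUCCESS) {
    // handle the error
}
```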
With slCreateEngine we create the Engine object, which is returned in the first parameter. The next two parameters specify optional features. The last three parameters refer to the const values you can see in the code listing: the number of requested interfaces, which interfaces are requested, and whether each of them is required or optional. We request only one interface (SL_IID_ENGINE).
At this point no resources are allocated yet; we have to Realize the object. The second parameter says whether realization should be asynchronous. We want synchronous realization.
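A sketch of the call (reusing the assumed `mEngineObj` member):

```cpp
// SL_BOOLEAN_FALSE = synchronous: the call blocks until resources are allocated.
res = (*mEngineObj)->Realize(mEngineObj, SL_BOOLEAN_FALSE);
```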
Now we can cache the interfaces. Here we have only one, and we will store it in the mEngine variable (this time the last parameter is an output).
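Assuming an `mEngine` member of type `SLEngineItf`, the call could look like:

```cpp
// Fetch the engine interface we requested at creation time.
res = (*mEngineObj)->GetInterface(mEngineObj, SL_IID_ENGINE, &mEngine);
```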
Next we create the output mix - the object at the end of the chain that sends our data to the hardware device. Its creation follows the same logic as for the Engine, but this time we request zero interfaces.
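A sketch, with `mOutputMixObj` as an assumed `SLObjectItf` member:

```cpp
// Create the output mix with zero requested interfaces, then realize it.
res = (*mEngine)->CreateOutputMix(mEngine, &mOutputMixObj, 0, NULL, NULL);
res = (*mOutputMixObj)->Realize(mOutputMixObj, SL_BOOLEAN_FALSE);
```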
Now we are going to build the sound player that will be responsible for keeping the queue filled with sound data. It will be attached to the Engine object and will send its output to the output mix we just created.
In the following routine we also encounter pieces of the second area - the mixer logic. When we meet them I will mention them briefly and skip the details, as the mixer logic will be explained in the second part of this article.
The initial part is related to mixer logic - it marks all sound channels of the mixer as unused.
First we define a data locator - it says where the data we want to play comes from.
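A sketch of the locator (note that on Android the AndroidSimpleBufferQueue variant is also commonly used; the plain buffer queue shown here is from the core specification):

```cpp
// The data comes from memory buffers managed as a queue of two.
SLDataLocator_BufferQueue dataLocatorIn;
dataLocatorIn.locatorType = SL_DATALOCATOR_BUFFERQUEUE;
dataLocatorIn.numBuffers  = 2;   // double buffering
```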
We say that the data will be in memory buffers and that we have two of them. If we wanted to play some MP3 music stored in a file, we would use SL_DATALOCATOR_ANDROIDFD with different additional parameters.
Then we define the format of the data that will be stored in memory buffers:
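A sketch of the format description (mono output is my assumption; note samplesPerSec is expressed in milliHz, which the SL_SAMPLINGRATE_* constants already account for):

```cpp
SLDataFormat_PCM dataFormat;
dataFormat.formatType    = SL_DATAFORMAT_PCM;
dataFormat.numChannels   = 1;                           // mono (assumed)
dataFormat.samplesPerSec = SL_SAMPLINGRATE_11_025;      // 11025 Hz
dataFormat.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16; // 16-bit samples
dataFormat.containerSize = 16;
dataFormat.channelMask   = SL_SPEAKER_FRONT_CENTER;
dataFormat.endianness    = SL_BYTEORDER_LITTLEENDIAN;
```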
The parameters are self-explanatory. We will create buffers with raw PCM data. Our playback rate will be 11025 Hz and the samples will be 16-bit little endian.
Now we can combine the location of the data with its format into an SLDataSource object that describes the input data:
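The combination is just two pointers:

```cpp
// Input description: memory buffer queue + raw PCM format.
SLDataSource dataSource;
dataSource.pLocator = &dataLocatorIn;
dataSource.pFormat  = &dataFormat;
```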
We have finished describing the input, so now we describe the output. We will send the data to the output mix we created during initialization in the start() method:
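A sketch of the sink, again assuming the `mOutputMixObj` member:

```cpp
// Output description: route the data to the output mix.
SLDataLocator_OutputMix dataLocatorOut;
dataLocatorOut.locatorType = SL_DATALOCATOR_OUTPUTMIX;
dataLocatorOut.outputMix   = mOutputMixObj;

SLDataSink dataSink;
dataSink.pLocator = &dataLocatorOut;
dataSink.pFormat  = NULL;   // format is ignored for an output mix sink
```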
Now it is time to create the sound player object. The object will be attached to the Engine, its input data will be as described (raw 16-bit PCM stored in memory buffers), and it will output to the data sink that forwards it to the output mix. We follow the usual OpenSL ES logic again - create the object, realize it (to allocate resources), and get its interfaces. Notice that this time we request three interfaces. SL_IID_PLAY will allow us to start, stop and pause the playback. SL_IID_BUFFERQUEUE will allow us to control the queue with buffers (we have two of them). The last interface will allow us to control the volume:
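A sketch of the creation, with `mPlayerObj` as an assumed `SLObjectItf` member (SL_IID_VOLUME is my assumption for the volume interface):

```cpp
const SLInterfaceID ids[] = { SL_IID_PLAY, SL_IID_BUFFERQUEUE, SL_IID_VOLUME };
const SLboolean req[]     = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };

// Attach the player to the engine, with our data source and data sink.
res = (*mEngine)->CreateAudioPlayer(mEngine, &mPlayerObj, &dataSource, &dataSink,
                                    3, ids, req);
res = (*mPlayerObj)->Realize(mPlayerObj, SL_BOOLEAN_FALSE);
```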
The object is created and realized - now get all three interfaces:
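Assuming members `mPlayer` (SLPlayItf), `mPlayerQueue` (SLBufferQueueItf) and `mPlayerVolume` (SLVolumeItf):

```cpp
// Cache the three interfaces we requested at creation time.
res = (*mPlayerObj)->GetInterface(mPlayerObj, SL_IID_PLAY, &mPlayer);
res = (*mPlayerObj)->GetInterface(mPlayerObj, SL_IID_BUFFERQUEUE, &mPlayerQueue);
res = (*mPlayerObj)->GetInterface(mPlayerObj, SL_IID_VOLUME, &mPlayerVolume);
```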
At this point we have initialized the OpenSL ES Engine, created the audio player, and we can start sending data (in the defined format) to it. We said we have two memory buffers. We can fill them with data and enqueue them, but how do we know that playback has finished and we should send the next data? We can register a callback routine through the buffer queue interface. When playback of a buffer in the queue finishes, our custom routine (soundPlayerCallback) gets called and we can prepare and send the next buffer.
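The registration could be sketched like this (passing `this` as context is my assumption, so the static callback can reach the mixer instance):

```cpp
// Called whenever a queued buffer finishes playing.
res = (*mPlayerQueue)->RegisterCallback(mPlayerQueue, soundPlayerCallback, this);
```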
If we had only one buffer, the audio might get choppy, as there would be gaps in the queue. So at the very beginning we clear both buffers (fill them with silence) and enqueue both of them. When playback of the first finishes, our callback gets called and we can fill the first buffer with new data. While we are doing so, there is still data in the second buffer, which is playing. The following snippet is more related to the mixer logic that will be described in the second part. But shortly - there are two buffers and one pointer that flips between them.
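The flipping itself is plain C/C++ and can be sketched independently of OpenSL ES (the names `mBuffers`, `mActiveBuffer` and the buffer size are my assumptions):

```cpp
#include <cstdint>
#include <cstring>

const int BUFFER_SIZE = 1024;              // samples per buffer (assumed)
static int16_t mBuffers[2][BUFFER_SIZE];   // the two PCM buffers
static int mActiveBuffer = 0;              // index of the buffer we fill next

// Fill both buffers with silence (16-bit PCM silence is all zeros).
void clearSoundBuffers() {
    std::memset(mBuffers[0], 0, sizeof mBuffers[0]);
    std::memset(mBuffers[1], 0, sizeof mBuffers[1]);
}

// Flip the active buffer index between 0 and 1.
void swapSoundBuffers() {
    mActiveBuffer = 1 - mActiveBuffer;
}
```

After clearing, both silent buffers would be handed to OpenSL ES with `(*mPlayerQueue)->Enqueue(...)`, one call per buffer.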
I was wondering whether the data is copied into the queue upon sending, in which case I could have only one buffer. But it seems it is not safe, as the Specification reads: "The buffers that are queued in a player object are used in place and are not required to be copied by the device, although this may be implementation-dependent. The application developer should be aware that modifying the content of a buffer after it has been queued is undefined and can cause audio corruption."
Finally we can finish our long routine and start playing:
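Starting the playback is one call on the play interface:

```cpp
// Start playback; the buffer queue callback will keep the data flowing.
res = (*mPlayer)->SetPlayState(mPlayer, SL_PLAYSTATE_PLAYING);
```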
The callback routine is as simple as this:
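A sketch, assuming the mixer lives in a class I will call `SoundMixer` here:

```cpp
// Static callback invoked by OpenSL ES when a queued buffer finishes playing.
static void soundPlayerCallback(SLBufferQueueItf queueItf, void *context) {
    ((SoundMixer*) context)->sendBuffer();   // context is the 'this' we registered
}
```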
and the sendBuffer() routine is the last piece of the mosaic. The routines called from it - prepareSoundBuffer() and swapSoundBuffers() - are related to the mixer logic and do not touch OpenSL ES.
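It could be sketched like this (the `SoundMixer` class name and buffer members are my assumptions):

```cpp
void SoundMixer::sendBuffer() {
    prepareSoundBuffer();   // mixer logic: fill the free buffer (part two)
    (*mPlayerQueue)->Enqueue(mPlayerQueue, mBuffers[mActiveBuffer],
                             sizeof mBuffers[mActiveBuffer]);
    swapSoundBuffers();     // mixer logic: flip the active buffer index (part two)
}
```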
So far we have described the initialization, so it is time to show the routines that stop the playback and clean up. First, cleaning up the sound player:
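A sketch of the player teardown (member names as assumed above):

```cpp
if (mPlayer != NULL) {
    (*mPlayer)->SetPlayState(mPlayer, SL_PLAYSTATE_STOPPED);
}
if (mPlayerObj != NULL) {
    (*mPlayerObj)->Destroy(mPlayerObj);   // also invalidates its interfaces
    mPlayerObj    = NULL;
    mPlayer       = NULL;
    mPlayerQueue  = NULL;
    mPlayerVolume = NULL;
}
```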
... and then cleaning up the Engine and the sound output:
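The remaining objects are destroyed in reverse order of creation:

```cpp
if (mOutputMixObj != NULL) {
    (*mOutputMixObj)->Destroy(mOutputMixObj);
    mOutputMixObj = NULL;
}
if (mEngineObj != NULL) {
    (*mEngineObj)->Destroy(mEngineObj);
    mEngineObj = NULL;
    mEngine    = NULL;
}
```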
So, for now we have:
- initialized the OpenSL ES Engine and sound output,
- created an AudioPlayer with defined input and output,
- registered a callback that will notify us when new data is needed.
So far we hear only silence. In the next part I will describe how to fill the buffers with data and how to mix channels to produce some sound.