Symbian Developer Library

SYMBIAN OS V6.1 EDITION FOR C++




Audio Streaming overview


Purpose

The Audio Streaming API is the interface for streaming sampled audio data to the media server.

Streamed audio data is received incrementally by the media server client. Using the Audio Streaming API, the client does not have to wait until the entire sound clip has been downloaded before sending it to the server: data fragments can be sent as they are received.

The user of the API should maintain the data fragments in a queue before sending them to the server. If the user attempts to send data faster than the server can receive it, the excess fragments are held in another, client-side queue (invisible to the user), whose elements are references to the buffers passed to it. The server notifies the client through a callback each time it has received a data fragment, indicating that the fragment can be deleted.

The client is also notified when the stream has been opened and is available for use (opening takes place asynchronously), and when all the data has been sent to the output device, that is, when playback is complete.
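These notifications are delivered through the MMdaAudioOutputStreamCallback mixin, which the client must implement. In outline:

// Observer mixin implemented by the client of the stream
class MMdaAudioOutputStreamCallback
    {
public:
    // The stream has been opened and can now be used (or aError reports why not)
    virtual void MaoscOpenComplete(TInt aError) = 0;
    // The server has received aBuffer; the client may now delete it
    virtual void MaoscBufferCopied(TInt aError, const TDesC8& aBuffer) = 0;
    // All data has been sent to the output device; playback is complete
    virtual void MaoscPlayComplete(TInt aError) = 0;
    };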

To ensure that the audio data is played smoothly, without pauses between data fragments, the server uses two buffers: one to receive data from the client, the other to send data to the output device. When the data in the output buffer has been sent to the device, the two buffers are swapped. The use of these buffers is not exposed to the user of the API.

This API can be used to stream audio data only, and the streamed data is contained in descriptors. Client applications must ensure that the data is in 16 bit PCM format, as this is the only format supported. The API does not support mixing. A priority mechanism controls access to the sound device when more than one client wants to use it.



Architectural relationships

The Audio Streaming API is related to the following APIs:

Audio Sample Player. If the audio data is held locally in a file or descriptor then the Audio Sample Player API should be used in preference to the Audio Streaming API.

Descriptors. The binary audio data written to the stream is stored in 8 bit descriptors.
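For example, a buffer of 16 bit PCM samples can be presented to the stream as an 8 bit descriptor without copying. The fragment below is illustrative only; the sample array stands in for real PCM data:

// Present 16 bit PCM samples through an 8 bit descriptor (illustrative)
const TInt KSampleCount = 1024;
TInt16 samples[KSampleCount];               // filled with PCM data elsewhere
TPtrC8 fragment(reinterpret_cast<const TUint8*>(samples),
                KSampleCount * sizeof(TInt16));
// fragment can now be written to the stream with WriteL()

Note that the data must remain valid until the server confirms that it has been copied.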



Usage

Using the interface involves opening the stream, setting its audio properties, writing data to it, and closing it. The interface is implemented by the CMdaAudioOutputStream class.
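A typical client bundles the stream, its settings, and the queue of pending fragments into a single engine class that implements the callback mixin. The following declaration is a sketch only; the class and member names (CAudioStreamEngine, StartL(), QueueFragmentL()) are illustrative and not part of the API:

#include <e32base.h>
#include <MdaAudioOutputStream.h>    // CMdaAudioOutputStream
#include <Mda\Common\Audio.h>        // TMdaAudioDataSettings

// Hypothetical client engine owning the stream and the queue of
// fragments that have been written but not yet copied by the server
class CAudioStreamEngine : public CBase, public MMdaAudioOutputStreamCallback
    {
public:
    static CAudioStreamEngine* NewL();
    ~CAudioStreamEngine();

    void StartL();                        // opens the stream (asynchronous)
    void QueueFragmentL(HBufC8* aData);   // takes ownership and writes it

    // from MMdaAudioOutputStreamCallback
    void MaoscOpenComplete(TInt aError);
    void MaoscBufferCopied(TInt aError, const TDesC8& aBuffer);
    void MaoscPlayComplete(TInt aError);

private:
    void ConstructL();

private:
    CMdaAudioOutputStream* iStream;       // owned
    TMdaAudioDataSettings iSettings;      // passed to Open()
    RPointerArray<HBufC8> iQueue;         // fragments awaiting MaoscBufferCopied()
    };

The member functions sketched in the rest of this section belong to this class.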

The stream must be opened before it can be used. The volume and the stream's audio properties must be set, either when the stream is opened or after it has been opened; only the volume can be changed while data is being sent to the stream.
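Because opening is asynchronous, one common approach is to set the properties in the open-complete callback, sketched here for the hypothetical engine class above. Callbacks must not leave, so the leaving call is trapped:

// Set properties once the open completes; only SetVolume() is safe
// to call again while data is being streamed
void CAudioStreamEngine::MaoscOpenComplete(TInt aError)
    {
    if (aError == KErrNone)
        {
        TRAPD(err, iStream->SetAudioPropertiesL(
            TMdaAudioDataSettings::ESampleRate8000Hz,
            TMdaAudioDataSettings::EChannelsMono));
        if (err == KErrNone)
            {
            iStream->SetVolume(iStream->MaxVolume() / 2);
            }
        }
    }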

The package passed to the Open() function must be of type TMdaAudioDataSettings. Parameters such as the sample rate and the number of channels must be specified as enum values, for example TMdaAudioDataSettings::ESampleRate8000Hz rather than 8000.
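A sketch of opening the stream with the properties supplied in the settings package (the alternative to calling SetAudioPropertiesL() after opening); the engine class and function names remain illustrative:

void CAudioStreamEngine::ConstructL()
    {
    iStream = CMdaAudioOutputStream::NewL(*this);   // *this receives the callbacks
    }

void CAudioStreamEngine::StartL()
    {
    iSettings.Query();                  // initialise the package to defaults
    iSettings.iSampleRate = TMdaAudioDataSettings::ESampleRate8000Hz;
    iSettings.iChannels   = TMdaAudioDataSettings::EChannelsMono;
    iStream->Open(&iSettings);          // asynchronous; completion is signalled
                                        // by MaoscOpenComplete()
    }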

The client can then call WriteL() to begin streaming data to the server. Subsequent calls to WriteL() can be made as soon as data is ready to be sent; there is no need to wait for the callback function MMdaAudioOutputStreamCallback::MaoscBufferCopied() to indicate that the previous data fragment has been received by the server. That callback can instead be used to delete the fragment that has been copied.
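For the hypothetical engine class, the feeding and clean-up sides might look like this; the queue keeps each fragment alive until the server confirms the copy:

// Write each fragment as soon as it is ready; earlier writes need not
// have completed
void CAudioStreamEngine::QueueFragmentL(HBufC8* aData)
    {
    User::LeaveIfError(iQueue.Append(aData));   // take ownership
    iStream->WriteL(*aData);
    }

// Fragments are consumed in order, so the confirmed buffer is the oldest
void CAudioStreamEngine::MaoscBufferCopied(TInt aError, const TDesC8& /*aBuffer*/)
    {
    if (aError == KErrNone && iQueue.Count() > 0)
        {
        delete iQueue[0];
        iQueue.Remove(0);
        }
    }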

The user of the API faces a trade-off between using small and large buffers to store the audio data. Smaller buffers increase the chance of an underrun (where the sound device finishes playing before the next buffer of sound data has been sent to it), but reduce the initial delay before sound begins to play.
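As a rough guide, the amount of audio a fragment holds can be worked out from the stream's properties. The figures below assume 8kHz mono 16 bit PCM and an arbitrary 4KB fragment:

// 8000 samples/s * 1 channel * 2 bytes/sample = 16000 bytes per second
const TInt KBytesPerSecond = 8000 * 1 * 2;
const TInt KFragmentSize   = 4096;   // bytes; chosen by the client
const TInt KFragmentMs     = (KFragmentSize * 1000) / KBytesPerSecond;   // 256ms

Each 4KB fragment therefore adds roughly a quarter of a second of delay before playback starts, but also gives the client a quarter of a second to deliver the next fragment.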



See also

Audio Sample Player overview

Descriptors overview