
odin_audio_read_data

Reads audio data from the specified OdinMediaStreamHandle. This will return audio data in the format specified when calling odin_startup_ex, or 48 kHz interleaved by default (i.e. if you used odin_startup to initialize the SDK).

See our minimal example for a demonstration of how to use this function.

Declaration
OdinReturnCode odin_audio_read_data(OdinMediaStreamHandle stream,
                                    float *out_buffer,
                                    size_t out_buffer_len);

Parameters

stream

Handle of the media stream from which to read audio data. See OdinMediaStreamHandle.

Declaration
OdinMediaStreamHandle stream;

out_buffer

Pointer to a buffer where the audio data will be stored.

Declaration
float *out_buffer;

out_buffer_len

The length of the buffer in samples. The buffer must be large enough to store the requested number of samples.

Declaration
size_t out_buffer_len;

Returns

A return code indicating success or failure.

Return Type
OdinReturnCode

Discussion / Example

48 kHz means the audio is sampled 48,000 times per second, so one second of audio contains 48,000 interleaved float samples (with values between -1 and 1). The recommended approach is to read samples every 20 milliseconds, which works out to 50 reads per second (1 second is 1000 milliseconds, and 1000 / 20 = 50). Each read therefore yields 48,000 / 50 = 960 samples.

Here is an example from our NodeJS SDK:

Let's get started with some types. For each MediaAdded event (i.e. a user added a new media stream), we will store the media stream handle, the media stream id, and the peer id in a map.

Media types
struct Media {
    OdinMediaStreamHandle Handle;
    uint16_t Id;
    uint64_t PeerId;
};

std::map<uint16_t, Media> _mediaStreams;

We also have a structure to store the audio samples, with a method that converts the float samples (-1 to 1) to 16-bit signed integer PCM. Note that despite its name, ConvertFloat32ToFloat16 produces 16-bit integers, not half-precision floats.

AudioSamples structure
/**
 * Data structure to store audio samples; used to copy the AudioDataReceived event data
 */
struct AudioSamples {
    short Data[960];
    float OriginalData[960];
    size_t Len;
    uint64_t PeerId;
    uint16_t MediaId;

    void ConvertFloat32ToFloat16(const float *in_buffer, size_t in_buffer_len, short *out_buffer) {
        for (size_t i = 0; i < in_buffer_len; i++) {
            float f = in_buffer[i];
            // Clamp to the [-1, 1] range
            if (f > 1.0f) {
                f = 1.0f;
            } else if (f < -1.0f) {
                f = -1.0f;
            }
            // Scale to a 16-bit signed integer
            out_buffer[i] = (int16_t)(f * 32767.0f);
        }
    }

    void SetSamples(const float *samples, size_t len)
    {
        ConvertFloat32ToFloat16(samples, len, Data);
        memcpy(OriginalData, samples, len * sizeof(float));
        Len = len;
    }

    ~AudioSamples()
    {
    }
};

While the NodeJS addon is running and connected to the room, we read audio data from each media stream (i.e. from each user in the room with an active microphone) and send it to the NodeJS layer. We use a buffer of 960 samples to store the audio data.

Read audio data from media streams
while (this->_running)
{
    // Read audio samples from each user's media stream connected to the room
    for (auto it = _mediaStreams.begin(); it != _mediaStreams.end(); it++)
    {
        OdinReturnCode rc = odin_audio_read_data((OdinMediaStreamHandle)it->second.Handle, this->_audioSamplesBuffer, 960);

        if (odin_is_error(rc)) {
            printf("Odin NodeJS Addon: Failed to read audio data\n");
            break;
        }

        AudioSamples *samples = new AudioSamples();
        samples->SetSamples(this->_audioSamplesBuffer, 960);
        samples->PeerId = it->second.PeerId;
        samples->MediaId = it->second.Id;

        // Perform a blocking call (i.e. send the samples to the NodeJS / JavaScript layer)
        napi_status status = _audioDataReceivedEventListener.BlockingCall(samples, callback);
        if (status != napi_ok)
        {
            // Handle error
            printf("Odin NodeJS Addon: Failed to call audio data received callback\n");
            break;
        }
    }

    // Wait for 20 ms before reading the next audio data
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
}