Overview

We offer ODIN, a high-quality cross-platform immersive voice SDK that includes state-of-the-art noise suppression technologies.

Our Web SDK includes fallback code for WebRTC and is compatible with most browsers. While it can also be used in NodeJS, it may not be as performant as our native NodeJS SDK, which is designed for advanced use cases and bots.

warning

The NodeJS SDK is currently in beta. This means that the API is not final and may change from version to version, and it may not be ready for large-scale production use. If you encounter any issues, please let us know on our Discord server.

Although we strive to keep the API as similar as possible between our SDKs, there are some differences. Since wrapping objects in JavaScript and TypeScript is easier than in C/C++, we only use objects where necessary in our native SDK. For instance, the Web SDK provides an OdinPeer object that encapsulates various methods and properties. However, for NodeJS use cases, you usually only need a peerId and mediaId, so we decided not to overcomplicate things. Of course, you can always create your own wrapper if needed.
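
For example, a minimal sketch of such a wrapper could keep a registry of connected peers. This is only an illustration: it assumes a room object obtained as in the event example further below, and it assumes that the PeerJoined and PeerLeft events expose a peerId property alongside the userData payload.

Tracking peers (sketch)
// Minimal peer registry - assumes `room` from the event example below and that
// PeerJoined/PeerLeft events carry a `peerId` property (illustrative assumption)
const peers = new Map();

room.addEventListener('PeerJoined', (event) => {
  const userData = JSON.parse(new TextDecoder().decode(event.userData));
  peers.set(event.peerId, userData);
});

room.addEventListener('PeerLeft', (event) => {
  peers.delete(event.peerId);
});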

tip

The main difference between the two SDKs is that the NodeJS version allows you to access raw audio data from the ODIN server, which enables you to record or stream audio to the room. While the Web SDK has similar capabilities, it relies on Web Audio, which is not available in NodeJS. Rather than creating a complex polyfill, we found it easier and faster to wrap the ODIN native SDK into NodeJS bindings.

By providing two separate libraries, we can optimize each one for its respective platform and use cases.

Source Code

The NodeJS SDK is developed in C/C++ using Node API and is open source. Check out the source in our public NPM repository.

Installation

Install the NodeJS SDK via npm in your project like this:

npm install --save @4players/odin-nodejs

Interoperability

The ODIN NodeJS SDK is fully compatible with our other client SDKs, so you can easily communicate between your NodeJS script and other clients, even Unity or Unreal Engine clients.

However, it's important to understand that you cannot combine the Web SDK and the NodeJS SDK in the same NodeJS project: your package.json file can contain either @4players/odin-nodejs or @4players/odin, but not both.

Platform support

As the ODIN NodeJS SDK is a native module, it requires native binaries to be built for your platform. We currently provide prebuilt binaries for these platforms:

  • Windows (x64)
  • macOS (x64 and ARM)
  • Linux (x64)

If you deploy your application to a different platform, npm install will try to build the library from source. This requires a working C/C++ compiler and the necessary build tools to be installed on your system. You can find more info on that topic in the node-gyp documentation.

Event Handling

The ODIN server will automatically notify you about relevant updates, and the ODIN NodeJS SDK uses the JavaScript EventTarget API to dispatch custom events based on the OdinEvents interface. Use any of the provided addEventListener methods to set up a function that will be called whenever the specified event is delivered to the target.

These events are available:

OdinEvents

Use them as shown in this example:

Listening for events
// Create an ODIN client instance and create a room using our access key
const odinClient = new OdinClient();
const room = odinClient.createRoom("__YOUR_ACCESS_KEY__", roomName, userName);

// Listen for PeerJoined events and print the user data of the joined peer
room.addEventListener('PeerJoined', (event) => {
  console.log("Received PeerJoined event", event);
  console.log(JSON.parse(new TextDecoder().decode(event.userData)));
});

// Listen for PeerLeft events and log the event of the peer that left
room.addEventListener('PeerLeft', (event) => {
  console.log("Received PeerLeft event", event);
});

Using Media objects

ODIN works with the concept of media objects that are attached to a room. A media is either remote (i.e., fed by the microphone of someone else's device) or local (i.e., linked to your own microphone or another input device).

After joining a room, you are initially just a listener: you don't send anything to the room (i.e., you are muted). To transmit audio, create a media linked to your local input device and add it to the room, as shown in the sketch below.
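
A minimal sketch of this flow, assuming OdinClient is the export of @4players/odin-nodejs and using the createAudioStream call from the sending example further below (room and user names are illustrative):

Joining a room and adding a local media (sketch)
const { OdinClient } = require('@4players/odin-nodejs'); // assumption: named export

// Create the room - at this point the bot only listens and sends nothing (it is muted)
const odinClient = new OdinClient();
const room = odinClient.createRoom("__YOUR_ACCESS_KEY__", "my-room", "my-bot");

// Create a local media (48 kHz, stereo) attached to the room - only now can audio be sent
const media = room.createAudioStream(48000, 2);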

Receiving audio data

Once the script has joined a room, it receives events that you can add handlers for. To get access to audio data, add an event handler for AudioDataReceived. The event provides the audio samples both as 16-bit integers (samples16) and as 32-bit floats in the range -1 to 1 (samples32):

Receiving audio data
// Add an event handler for audio data received events
room.addEventListener('AudioDataReceived', (data) => {
  // Get a typed array view of the 32-bit float sample buffer - use it, for example, to visualize audio
  const floatSamples = new Float32Array(data.samples32.buffer);
  console.log(floatSamples);

  // The same samples are also available as 16-bit integers
  const intSamples = new Int16Array(data.samples16.buffer);
  console.log(intSamples);
});
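
For instance, to get a rough volume indicator you can compute the RMS of the float samples in each chunk (a minimal sketch reusing the event shape from the example above):

Computing chunk loudness (sketch)
// Compute a rough per-chunk loudness (RMS) from the 32-bit float samples
room.addEventListener('AudioDataReceived', (data) => {
  const samples = new Float32Array(data.samples32.buffer);
  let sum = 0;
  for (const sample of samples) {
    sum += sample * sample;
  }
  const rms = Math.sqrt(sum / samples.length);
  console.log(`Chunk loudness (RMS): ${rms.toFixed(4)}`);
});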

Sending audio data

If you want your bot to send audio into the room (e.g., text-to-speech output, music, or other sounds), you need to attach a media to that room:

Creating a media object
// Note: `fs` is Node's built-in file system module; `decode` and `AudioBufferStream` are
// assumed to come from third-party audio packages (e.g. an MP3 decoder such as 'audio-decode'
// and a helper that splits an AudioBuffer into fixed-size chunks).
const fs = require('fs');

// Send music to the room
const sendMusic = async (media) => {
  // Prepare our MP3 decoder and load the sample file
  const audioBuffer = await decode(fs.readFileSync('./santa.mp3'));

  // ODIN requires 20ms chunks of audio data (i.e., 50 chunks per second). We calculate the chunk
  // length by dividing the sample rate of the file by 50. At 48kHz, that is 960 samples per chunk.
  const chunkLength = audioBuffer.sampleRate / 50;

  // Create a stream that matches the settings of the file
  const audioBufferStream = new AudioBufferStream({
    channels: audioBuffer.numberOfChannels,
    sampleRate: audioBuffer.sampleRate,
    float: true,
    bitDepth: 32,
    chunkLength: chunkLength
  });

  // Create a queue to store the chunks of audio data
  const queue = [];

  // Whenever the stream has data, add it to the queue
  audioBufferStream.on('data', (data) => {
    const floats = new Float32Array(new Uint8Array(data).buffer);
    queue.push(floats);
  });

  // Start a timer to send audio data at regular intervals
  const interval = setInterval(() => {
    if (queue.length > 0) {
      const chunk = queue.shift();
      media.sendAudioData(chunk);
    } else {
      // If there's no more data to send, stop the timer
      clearInterval(interval);
      audioBufferStream.end();
      console.log("Audio finished");
    }
  }, 20); // Send a chunk every 20ms

  audioBufferStream.write(audioBuffer);
}

// Create a media stream in the room - it returns an OdinMedia instance that we can use to send data to ODIN
const media = room.createAudioStream(48000, 2);

// Start the stream and send the music to ODIN
sendMusic(media).then(() => {
  console.log("Finished sending song");
});

Examples

We have prepared a couple of examples to get you started with the ODIN NodeJS SDK. You can find them in the SDK's tests folder; after installing the NodeJS SDK, check out node_modules/@4players/odin-nodejs/tests. You'll find one example of how to record audio and send it to OpenAI's Whisper for transcription, another sample that shows how to send data into the room, and examples of sending text messages when users join the room.