Integrating ODIN Voice Chat in Unreal using C++
Although the Unreal Engine plugin comes with full Blueprint support to make it as easy as possible to use ODIN in your game, you can also integrate the plugin entirely in C++. Please make sure you have a basic understanding of how ODIN works, as this helps a lot in understanding the next steps. Additionally, this guide assumes that you have basic knowledge of the Unreal Engine, its Editor, and the C++ API.
This guide highlights the key steps to get started with ODIN. For a more detailed implementation, please refer to our Unreal Sample Project. You can copy the C++ classes from the sample project into your own project to get started quickly!
Basic Process
As outlined in the introduction, every user connected to the same ODIN room (identified by a string of your choice) can exchange both data and voice. An ODIN room is created automatically by the ODIN server as soon as the first user joins and is removed once the last user leaves.
To join a room, a room token is required. This token grants access to an ODIN room and can be generated directly within the client. While this approach is sufficient for testing and development, it is not recommended for production because it exposes your ODIN Access Keys to the client. In production environments, the room token should be created on a secured server. To support this, we provide ready-to-use packages for JavaScript (via npm) and PHP (via Composer). In addition, we offer a complete server implementation that can be deployed as a cloud function on AWS or Google Cloud.
Once the room has been joined, users can exchange data such as text chat messages or other real-time information. If voice communication is required, an outgoing voice data stream (e.g. a microphone connected to a UOdinEncoder) must be added to the room. This enables every participant to communicate with one another. More advanced techniques, such as 3D audio, allow users to update their spatial positions at regular intervals. The server then ensures that only nearby users receive the voice stream, reducing both bandwidth consumption and CPU usage. Details on these techniques will be discussed later.
Summary of the Basic Steps
- Obtain an access key.
- Create a room token using the access key and a room identifier (a string of your choice).
- Join the room with the generated room token.
- Set up an Odin Encoder to connect a microphone to the room.
Implementing with C++
To integrate ODIN into an existing (or new) Unreal Engine project, you will work with the C++ classes provided in the Odin and OdinLibrary modules of the Odin Plugin. After installation, these modules must be added to your project’s build file, located at:
Source/<YourProject>/<YourProject>.Build.cs
In addition, you should include dependencies on Unreal Engine's AudioCapture and AudioCaptureCore modules, as these provide the functionality required to capture microphone input from the user. Your PublicDependencyModuleNames entry should therefore look as follows:
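For example, assuming a project named YourProject, the relevant lines in YourProject.Build.cs might look like this (keep any modules your project already depends on):

```csharp
// YourProject.Build.cs - add the ODIN plugin and audio capture modules.
PublicDependencyModuleNames.AddRange(new string[]
{
    "Core", "CoreUObject", "Engine", "InputCore",
    "Odin", "OdinLibrary",             // ODIN plugin modules
    "AudioCapture", "AudioCaptureCore" // microphone capture support
});
```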
Overview
Below is the full class we are about to create. In the sample, it derives from UActorComponent, but you can place the code wherever it best fits your architecture. A UActorComponent is often a good choice because it can be attached to any AActor to add functionality in a modular way.
Header File:
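The following is a condensed sketch rather than a drop-in file: the handler parameter lists are placeholders, since the exact delegate signatures are declared in the plugin's OdinRoom.h and can differ between plugin versions.

```cpp
// OdinClientComponent.h - condensed sketch of the component built in this guide.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "OdinClientComponent.generated.h"

class UOdinRoom;
class UOdinTokenGenerator;
class UOdinAudioCapture;
class UOdinEncoder;

UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class YOURPROJECT_API UOdinClientComponent : public UActorComponent
{
    GENERATED_BODY()

protected:
    virtual void BeginPlay() override;

    // Bound to the room's dynamic multicast delegates; copy the exact
    // parameter lists from the delegate declarations in OdinRoom.h.
    UFUNCTION()
    void OnRoomJoinedHandler(/* parameters of OnRoomJoinedBP */);

    UFUNCTION()
    void OnPeerJoinedHandler(/* parameters of OnRoomPeerJoinedBP */);

    // UPROPERTY() keeps these objects from being garbage collected.
    UPROPERTY()
    UOdinRoom* Room = nullptr;

    UPROPERTY()
    UOdinTokenGenerator* TokenGenerator = nullptr;

    UPROPERTY()
    UOdinAudioCapture* AudioCapture = nullptr;

    UPROPERTY()
    UOdinEncoder* Encoder = nullptr;

    UPROPERTY(EditAnywhere, Category = "Odin")
    FString RoomName = TEXT("MyRoom");

    UPROPERTY(EditAnywhere, Category = "Odin")
    FString UserName = TEXT("Player");

    FString RoomToken;
};
```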
Source File:
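And a matching skeleton of the source file; we will fill in the numbered steps one by one over the course of this guide:

```cpp
// OdinClientComponent.cpp - skeleton of the flow built in this guide; the
// individual steps are implemented in the sections below.
#include "OdinClientComponent.h"

void UOdinClientComponent::BeginPlay()
{
    Super::BeginPlay();

    // 1. Generate a room token (client-side for testing only).
    // 2. Construct the UOdinRoom and bind OnRoomJoinedBP / OnRoomPeerJoinedBP.
    // 3. Call ConnectRoom with the gateway URL and the room token.
    // 4. Create a UOdinAudioCapture and UOdinEncoder and start capturing.
}
```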
You can attach this component to any actor, but the local Player Controller is typically the most appropriate location: it exists exactly once per client and is owned by that client, which aligns well with per-user voice operations. Ensure the hosting actor is present on every client; for example, the GameMode only exists on the server and is therefore not suitable.
In the following sections, we will build this component step by step and explain the reasoning behind each part.
Creating the Component
First, create the component class in C++. The simplest path is via the Unreal Editor:
- Open your project in the Unreal Editor.
- Navigate to Tools → New C++ Class….
- Select Actor Component as the parent class (this guide assumes UActorComponent, but choose whatever best fits your project).
- Name the class (for example OdinClientComponent) and set it to Public.
- Click Create.
The IDE (e.g., Visual Studio) will open and the project files should regenerate automatically. If they do not, locate your .uproject file in Explorer/Finder, right-click it, and select Generate Visual Studio project files….
Once the class opens in your IDE, you can begin implementing the logic. Ensure your .build.cs already includes the required ODIN and audio modules as described above. This avoids "missing symbol" issues when you start adding ODIN types and audio capture code.
Creating an Access Key
Before you can connect to ODIN, you need to create an access key. An access key authenticates your requests to the ODIN server and contains your subscription-specific information, such as the maximum number of concurrent users allowed in a room and other configuration settings. A free access key allows up to 25 users to join the same room. For larger capacities or production use, you will need to subscribe to one of our paid tiers. See the pricing page for details.
For an in-depth explanation of access keys, refer to the Understanding Access Keys guide.
Creating a Demo Key
For now, you can generate a demo access key suitable for up to 25 concurrent users using the widget below:
Click Create Access Key and store the key securely. You will need it later when joining a room and exchanging data or voice streams. A demo access key can later be upgraded to a full access key.
Creating a Room Token
For this example, we will create the room token directly on the client, inside the component’s BeginPlay() function. In most real-world use cases you may not want players to automatically join voice chat as soon as the game starts. Typically you would trigger this with a specific gameplay event or user action. However, for testing purposes, BeginPlay() provides a convenient entry point.
To begin, define a Token Generator and a String as instance variables in your header file. In the source file, you will also need to provide a UOdinJsonObject pointer, which will be filled by the token generator with an extended authentication JSON. For now, we'll only make use of the raw JWT RoomToken.
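For example (a header excerpt; RoomName and UserName are placeholder values):

```cpp
// OdinClientComponent.h (excerpt)
class UOdinTokenGenerator;

UPROPERTY()
UOdinTokenGenerator* TokenGenerator = nullptr;

UPROPERTY(EditAnywhere, Category = "Odin")
FString RoomName = TEXT("MyRoom");

UPROPERTY(EditAnywhere, Category = "Odin")
FString UserName = TEXT("Player");

FString RoomToken;
```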
Make sure to also include the relevant ODIN header files in order to access the required functionality, then call the GenerateRoomToken function.
And initialize properties in the BeginPlay() function:
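A minimal sketch, assuming UOdinTokenGenerator::ConstructTokenGenerator and a GenerateRoomToken overload that also fills a UOdinJsonObject; the exact signatures are in the plugin's token generator header and may differ between versions:

```cpp
// OdinClientComponent.cpp (excerpt)
#include "OdinTokenGenerator.h" // adjust include paths to your plugin version
#include "OdinJsonObject.h"

void UOdinClientComponent::BeginPlay()
{
    Super::BeginPlay();

    // Testing only: in production, fetch the token from a secure backend.
    TokenGenerator = UOdinTokenGenerator::ConstructTokenGenerator(this, TEXT("<YOUR_ACCESS_KEY>"));

    // The generator can fill a UOdinJsonObject with extended authentication
    // data; for now we only use the raw JWT room token.
    UOdinJsonObject* AuthJson = nullptr;
    RoomToken = TokenGenerator->GenerateRoomToken(RoomName, UserName, AuthJson);
}
```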
In production, the room token should always be generated on a secure backend (e.g., a cloud function) and delivered to the client on demand. For testing, generating a token directly in the client is acceptable, but do not commit your access key to a public repository.
Both RoomName and UserName serve as placeholders. In practice, you will want to integrate these with your game's logic for assigning users to rooms and identities. For testing, it is enough that all clients use the same room name and generate the room token with the same access key in order for them to connect to the same ODIN room.
Similarly, replace <YOUR_ACCESS_KEY> with either your free access key or your own logic for securely reading the key from a local configuration file.
Configuring Room Access
To join a room in v2 of the ODIN Voice Plugin, we construct the UOdinRoom and call ConnectRoom. Note that we no longer pass APM settings to ConstructRoom - these are handled by the Audio Pipeline, which we will configure later on.
In your component's BeginPlay() function, simply construct a UOdinRoom and keep a pointer to it so you can manage it later. The setup should look like this:
Header:
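A single pointer is enough here; the UPROPERTY() keeps the room from being garbage collected:

```cpp
// OdinClientComponent.h (excerpt)
UPROPERTY()
UOdinRoom* Room = nullptr;
```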
Source:
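In v2, ConstructRoom only needs a world context object, since the APM settings have moved to the Audio Pipeline:

```cpp
// OdinClientComponent.cpp (excerpt, in BeginPlay after token generation)
Room = UOdinRoom::ConstructRoom(this);
```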
Event Flow
Once your client is connected to the ODIN server, several events will be triggered that enable you to set up your scene and connect audio output to player objects. Understanding and handling these events correctly is essential for a functioning voice integration.
Check out the Event Flow section of our Manual for more information.
These events form the backbone of how ODIN synchronizes users and audio streams in real time. By correctly handling these callbacks, you ensure that your voice chat remains stable, synchronized, and responsive as players join, speak, or leave.
Adding a Peer Joined Event
To handle ODIN events, create functions that you bind to the corresponding event delegates. In this guide we will connect the two core events used during initial setup: OnRoomPeerJoinedBP and OnRoomJoinedBP.
Because these are dynamic multicast delegates (usable in Blueprints), the callback functions must be declared with the UFUNCTION() macro.
Header:
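The parameter lists below are placeholders; copy the exact signatures from the delegate declarations in the plugin's OdinRoom.h:

```cpp
// OdinClientComponent.h (excerpt)
UFUNCTION()
void OnRoomJoinedHandler(/* parameters of OnRoomJoinedBP */);

UFUNCTION()
void OnPeerJoinedHandler(/* parameters of OnRoomPeerJoinedBP */);
```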
We bind these handlers early, before calling ConnectRoom, so that no events are missed.
Source:
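Since these are dynamic multicast delegates, we can bind the handlers with AddDynamic:

```cpp
// OdinClientComponent.cpp (excerpt, in BeginPlay) - bind before ConnectRoom.
Room->OnRoomJoinedBP.AddDynamic(this, &UOdinClientComponent::OnRoomJoinedHandler);
Room->OnRoomPeerJoinedBP.AddDynamic(this, &UOdinClientComponent::OnPeerJoinedHandler);
```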
Next, implement the functions. For this walkthrough we start with OnPeerJoinedHandler. To receive the incoming audio of the connecting peer, we create a UOdinDecoder instance using UOdinDecoder::ConstructDecoder(this, 48000, true). The 48000 is the sample rate in Hz that we want the decoder to produce. The boolean selects the number of output channels: true for stereo, false for mono. Then we connect the decoder to the peer that just joined by calling UOdinFunctionLibrary::RegisterDecoder(Decoder, OdinRoom, PeerData.peer_id).
A decoder processes audio datagrams received from remote peers. To play back the resulting audio, you need an audio component that converts the stream into audible output: the Odin Synth Component. Use UOdinSynthComponent::SetDecoder to connect the synth to the UOdinDecoder object. This establishes the playback path. After assigning the decoder, make sure to activate the component so audio is actually produced.
For full 3D voice chat, the recommended approach is to add an Odin Synth Component to each player character or pawn and position it near the character's head. At runtime, you can retrieve the correct instance with GetComponentByClass() based on which peer the UOdinDecoder belongs to. This ensures accurate spatialization and attenuation. You will need to keep track of which Synth Component is connected to which peer, e.g. by using a TMap.
To keep this guide focused, we will ignore 3D spatialization for now and provide 2D output only. In that case, you can call AActor::AddComponentByClass() at runtime to create and attach an Odin Synth Component to the locally controlled player character. This avoids the need to resolve which character corresponds to the ODIN peer and simplifies initial setup.
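Putting the last few paragraphs together, a 2D-only handler could look like the sketch below. FOdinPeerData is a stand-in for the payload your plugin version's delegate actually provides (we only use its peer_id field), and the component is assumed to live on the local Player Controller:

```cpp
// Sketch of a 2D-only peer joined handler; adjust the parameter type to the
// real delegate signature from OdinRoom.h.
void UOdinClientComponent::OnPeerJoinedHandler(FOdinPeerData PeerData)
{
    // Create a decoder for this peer: 48000 Hz sample rate, stereo output.
    UOdinDecoder* Decoder = UOdinDecoder::ConstructDecoder(this, 48000, true);
    UOdinFunctionLibrary::RegisterDecoder(Decoder, Room, PeerData.peer_id);

    // 2D playback: attach an Odin Synth Component to the locally controlled
    // pawn instead of resolving the remote peer's character.
    APlayerController* PC = Cast<APlayerController>(GetOwner());
    APawn* Pawn = PC ? PC->GetPawn() : nullptr;
    if (Pawn)
    {
        UOdinSynthComponent* Synth = Cast<UOdinSynthComponent>(
            Pawn->AddComponentByClass(UOdinSynthComponent::StaticClass(),
                                      false, FTransform::Identity, false));
        if (Synth)
        {
            Synth->SetDecoder(Decoder);
            Synth->Activate(); // without this, no audio is produced
        }
    }
}
```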
Always set up and bind your event handlers before joining a room. This ensures you receive OnRoomPeerJoinedBP and OnRoomJoinedBP events for users who are already present when your client connects.
That's it. With this setup, every user connected to the same room will be heard at full volume through the local player’s position.
In a real 3D game you would not route all audio through the local player. Instead, you would map the ODIN Peer ID to your Unreal Player's Unique Net ID and assign each UOdinDecoder output to the correct player character. This way, Unreal's audio engine automatically applies spatialization and attenuation, i.e. it reduces the volume as players move farther away from the listener.
This approach is presented later on in the guide.
Joining a Room
Now we have everything in place to join a room. A room token for an Odin room with id RoomName has been created, and the room settings for our client have been configured. The final step is to connect both and initiate the join.
In this step, we simply use the Room pointer together with the default ODIN gateway URL and the room token. With this, the client will attempt to join the room.
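For example (the URL shown is the default European gateway; ConnectRoom may offer additional overloads with callbacks, so check OdinRoom.h):

```cpp
// OdinClientComponent.cpp (excerpt, in BeginPlay after binding the handlers)
Room->ConnectRoom(TEXT("https://gateway.odin.4players.io"), RoomToken);
```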
The default gateway used here is located in Europe. If you'd like to connect to a gateway in another region, please take a look at our available gateways list.
Adding Microphone Input
Now that we have successfully joined a room, we need to add our own microphone input so that other users in the room can hear us. This is done by creating a UOdinAudioCapture object via UOdinFunctionLibrary::CreateOdinAudioCapture. We then create a new UOdinEncoder object by calling UOdinFunctionLibrary::CreateOdinEncoderFromGenerator, providing the target ODIN room and the audio capture object as input parameters. Finally, we call UOdinAudioCapture::StartCapturingAudio to start capturing audio from the device and pushing it to the connected ODIN room.
Since capturing requires keeping the capture object alive, we keep the pointers to the UOdinAudioCapture and UOdinEncoder objects in UPROPERTY()-marked instance variables. This also allows us to start and stop microphone input whenever needed.
Header:
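Keeping both objects in UPROPERTY() members prevents them from being garbage collected while capturing:

```cpp
// OdinClientComponent.h (excerpt)
UPROPERTY()
UOdinAudioCapture* AudioCapture = nullptr;

UPROPERTY()
UOdinEncoder* Encoder = nullptr;
```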
Source:
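A sketch, assuming the parameter order described above (room first, then capture object); verify against the declaration in the plugin's function library header:

```cpp
// OdinClientComponent.cpp (excerpt) - e.g. after the room has been joined.
AudioCapture = UOdinFunctionLibrary::CreateOdinAudioCapture(this);
Encoder = UOdinFunctionLibrary::CreateOdinEncoderFromGenerator(Room, AudioCapture);
AudioCapture->StartCapturingAudio();
```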
Configuring Audio Processing (APM)
In v2, Audio Processing Module (APM) settings like Noise Suppression and Voice Activity Detection are configured on the UOdinPipeline of the Encoder. This allows you to set different effects for each room or scenario.
The APM configuration code can be applied at any step after the UOdinEncoder creation. Simply retrieve the encoder's audio pipeline, initialize the APM and/or VAD effect structures and insert the effects.
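As a sketch, with illustrative placeholder names (GetPipeline, FOdinApmConfig, FOdinVadConfig, and the insert calls are stand-ins; look up the real types and functions in the plugin headers):

```cpp
// Sketch: configure APM and VAD on the encoder's audio pipeline.
// All type and member names below are illustrative placeholders.
UOdinPipeline* Pipeline = Encoder->GetPipeline();

FOdinApmConfig ApmConfig;           // e.g. noise suppression, echo cancellation
ApmConfig.bNoiseSuppression = true;

FOdinVadConfig VadConfig;           // voice activity detection
VadConfig.bEnabled = true;

Pipeline->InsertApmEffect(ApmConfig);
Pipeline->InsertVadEffect(VadConfig);
```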