Integrating ODIN Voice Chat in Unreal using C++
Although the Unreal Engine plugin comes with full Blueprint support to make it as easy as possible to use ODIN in your game, it is just as possible to implement the plugin using C++. Please make sure you have a basic understanding of how ODIN works, as this helps a lot in understanding the next steps. Additionally, this guide assumes that you have basic knowledge of the Unreal Engine, its Editor, and the C++ API.
This guide highlights the key steps to get started with ODIN. For a more detailed implementation, please refer to our Unreal Sample project. Copy the C++ classes from the Unreal Sample Project into your own project to get started quickly!
Basic Process
As outlined in the introduction, every user connected to the same ODIN room (identified by a string of your choice) can exchange both data and voice. An ODIN room is created automatically by the ODIN server as soon as the first user joins and is removed once the last user leaves.
To join a room, a room token is required. This token grants access to an ODIN room and can be generated directly within the client. While this approach is sufficient for testing and development, it is not recommended for production because it exposes your ODIN Access Keys to the client. In production environments, the room token should be created on a secured server. To support this, we provide ready-to-use packages for JavaScript (via npm) and PHP (via Composer). In addition, we offer a complete server implementation that can be deployed as a cloud function on AWS or Google Cloud.
Once the room has been joined, users can exchange data such as text chat messages or other real-time information. If voice communication is required, an outgoing voice data stream (e.g. a microphone connected to a UOdinEncoder) must be added to the room. This enables every participant to communicate with one another. More advanced techniques, such as 3D audio, allow users to update their spatial positions at regular intervals. The server then ensures that only nearby users hear the voice stream, reducing both bandwidth consumption and CPU usage. Details on these techniques will be discussed later.
Summary of the Basic Steps
- Obtain an access key.
- Create a room token using the access key and a specific room identifier (string identifier).
- Join the room with the generated room token.
- Set up an Odin Encoder to connect a microphone to the room.
Implementing with C++
To integrate ODIN into an existing (or new) Unreal Engine project, you will work with the C++ classes provided in the Odin and OdinLibrary modules of the Odin Plugin. After installation, these modules must be added to your project’s build file, located at:
Source/<YourProject>/<YourProject>.build.cs
In addition, you should include dependencies on Unreal Engine's AudioCapture and AudioCaptureCore modules, as these provide the functionality required to capture microphone input from the user. Your PublicDependencyModuleNames entry should therefore look as follows:
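As a sketch, the dependency list could look like the following. The module names besides Odin, OdinLibrary, AudioCapture, and AudioCaptureCore are the usual Unreal template defaults and may differ in your project:

```csharp
// Source/<YourProject>/<YourProject>.build.cs (sketch)
PublicDependencyModuleNames.AddRange(new string[]
{
    // Unreal defaults (yours may differ)
    "Core", "CoreUObject", "Engine", "InputCore",
    // ODIN plugin modules
    "Odin", "OdinLibrary",
    // Required for microphone capture
    "AudioCapture", "AudioCaptureCore"
});
```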
Overview
Below is the full class we are about to create. In the sample, it derives from UActorComponent, but you can place the code wherever it best fits your architecture. A UActorComponent is often a good choice because it can be attached to any AActor to add functionality in a modular way.
Header File:
Source File:
You can attach this component to any actor, but the local Player Controller is typically the most appropriate location: it exists exactly once per client and is owned by that client, which aligns well with per-user voice operations. Ensure the hosting actor is present on every client; for example, the GameMode only exists on the server and is therefore not suitable.
In the following sections, we will build this component step by step and explain the reasoning behind each part.
Creating the Component
First, create the component class in C++. The simplest path is via the Unreal Editor:
- Open your project in the Unreal Editor.
- Navigate to Tools → New C++ Class….
- Select Actor Component as the parent class (this guide assumes UActorComponent, but choose whatever best fits your project).
- Name the class (for example OdinClientComponent) and set it to Public.
- Click Create.
The IDE (e.g., Visual Studio) will open and the project files should regenerate automatically. If they do not, right-click your .uproject file in Explorer/Finder and select Generate Visual Studio project files….
Once the class opens in your IDE, you can begin implementing the logic. Ensure your .build.cs already includes the required ODIN and audio modules as described above. This avoids "missing symbol" issues when you start adding ODIN types and audio capture code.
Creating an Access Key
Before you can connect to ODIN, you need to create an access key. An access key authenticates your requests to the ODIN server and contains your subscription-specific information, such as the maximum number of concurrent users allowed in a room and other configuration settings. A free access key allows up to 25 users to join the same room. For larger capacities or production use, you will need to subscribe to one of our paid tiers. See the pricing page for details.
For an in-depth explanation of access keys, refer to the Understanding Access Keys guide.
Creating a Demo Key
For now, you can generate a demo access key suitable for up to 25 concurrent users using the widget below:
Click Create Access Key and store the key securely. You will need it later when joining a room and exchanging data or voice streams. A demo access key can later be upgraded to a full-access key.
Creating a Room Token
For this example, we will create the room token directly on the client, inside the component's BeginPlay() function. In most real-world use cases, you will not want players to automatically join voice chat as soon as the game starts; typically, you would trigger this with a specific gameplay event or user action. However, for testing purposes, BeginPlay() provides a convenient entry point.
To begin, define a Token Generator and a String as instance variables in your header file. In the source file, you will also need to provide a UOdinJsonObject pointer, which will be filled by the token generator with an extended authentication JSON. For now, we'll only make use of the raw JWT RoomToken.
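In the header, this could look like the following sketch. The member names are illustrative choices, and UOdinTokenGenerator is assumed to be the token generator class shipped with the plugin:

```cpp
// OdinClientComponent.h (sketch; member names are illustrative)
UPROPERTY()
UOdinTokenGenerator* TokenGenerator;

// Raw JWT room token
UPROPERTY()
FString RoomToken;

// Extended authentication JSON, filled by the token generator
UPROPERTY()
UOdinJsonObject* AuthJson;
```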
Make sure to also include the relevant ODIN header files in order to access the required functionality, then call the GenerateRoomToken function.
And initialize properties in the BeginPlay() function:
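A sketch of that BeginPlay() setup is shown below. The exact GenerateRoomToken parameter list is an assumption, so verify it against the plugin headers; the member names match the ones described above:

```cpp
// OdinClientComponent.cpp (sketch)
void UOdinClientComponent::BeginPlay()
{
    Super::BeginPlay();

    // For testing only - never ship your access key inside the client!
    const FString AccessKey = TEXT("<YOUR_ACCESS_KEY>");
    const FString RoomName  = TEXT("MyRoom");
    const FString UserName  = TEXT("MyUser");

    TokenGenerator = UOdinTokenGenerator::ConstructTokenGenerator(this, AccessKey);
    // Fills AuthJson with the extended authentication JSON and returns the raw JWT.
    RoomToken = TokenGenerator->GenerateRoomToken(RoomName, UserName, AuthJson);
}
```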
In production, the room token should always be generated on a secure backend (e.g., a cloud function) and delivered to the client on demand. For testing, generating a token directly in the client is acceptable, but do not commit your access key to a public repository.
Both RoomName and UserName serve as placeholders. In practice, you will want to integrate these with your game's logic for assigning users to rooms and identities. For testing, it is enough that all clients use the same room name and generate the room token with the same access key in order for them to connect to the same ODIN room.
Similarly, replace <YOUR_ACCESS_KEY> with either your free access key or your own logic for securely reading the key from a local configuration file.
Configure the Room Access
To join a room in v2 of the ODIN Voice Plugin, we construct the UOdinRoom and call ConnectRoom. Note that we no longer pass APM settings to ConstructRoom - these are handled by the Audio Pipeline, which we will configure later on.
In your component's BeginPlay() function, simply construct a UOdinRoom and keep a pointer to it so you can manage it later. The setup should look like this:
Header:
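A minimal header sketch, assuming the member is simply called Room:

```cpp
// OdinClientComponent.h (sketch)
UPROPERTY()
UOdinRoom* Room;
```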
Source:
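In the source, the construction could look like this; the exact ConstructRoom parameter list may differ between plugin versions, so verify it against the plugin's OdinRoom header:

```cpp
// OdinClientComponent.cpp (sketch) - inside BeginPlay()
// v2: no APM settings here; APM is configured later on the encoder's pipeline.
Room = UOdinRoom::ConstructRoom(this);
```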
Event Flow
Once your client is connected to the ODIN server, several events will be triggered that enable you to set up your scene and connect audio output to player objects. Understanding and handling these events correctly is essential for a functioning voice integration.
Check out the Event Flow section of our Manual for more information.
These events form the backbone of how ODIN synchronizes users and audio streams in real time. By correctly handling these callbacks, you ensure that your voice chat remains stable, synchronized, and responsive as players join, speak, or leave.
Adding a Peer Joined Event
To handle ODIN events, create functions that you bind to the corresponding event delegates. In this guide we will connect the two core events used during initial setup: OnRoomPeerJoinedBP and OnRoomJoinedBP.
Because these are dynamic multicast delegates (usable in Blueprints), the callback functions must be declared with the UFUNCTION() macro.
Header:
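A sketch of the declarations is below. The parameter lists are placeholders; dynamic delegate bindings must match the plugin's delegate signatures exactly, so copy the real ones from the UOdinRoom declaration:

```cpp
// OdinClientComponent.h (sketch; parameter lists are placeholders)
UFUNCTION()
void OnRoomJoinedHandler(int64 OwnPeerId, const TArray<uint8>& RoomUserData);

UFUNCTION()
void OnPeerJoinedHandler(FOdinPeerJoinedData PeerData);
```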
We bind these handlers early, before calling ConnectRoom, so that no events are missed.
Source:
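Binding could then look like this, assuming a UOdinRoom* member named Room and handler names as sketched for the header:

```cpp
// OdinClientComponent.cpp (sketch) - after ConstructRoom, before ConnectRoom
Room->OnRoomJoinedBP.AddDynamic(this, &UOdinClientComponent::OnRoomJoinedHandler);
Room->OnRoomPeerJoinedBP.AddDynamic(this, &UOdinClientComponent::OnPeerJoinedHandler);
```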
Next, implement the functions. For this walkthrough we start with OnPeerJoinedHandler. To receive the incoming audio of the connecting peer, we create a UOdinDecoder instance using UOdinDecoder::ConstructDecoder(this, 48000, true). The 48000 is the sample rate in Hz that we want the decoder to produce, and the boolean controls the number of audio channels in the decoder's output: true for stereo, false for mono. Then we connect the decoder to the peer that just joined by calling UOdinFunctionLibrary::RegisterDecoder(Decoder, OdinRoom, PeerData.peer_id).
A decoder processes audio datagrams received from remote peers. To play back the resulting audio, you need an audio component that converts the stream into audible output: the Odin Synth Component. Use UOdinSynthComponent::SetDecoder to connect the synth to the UOdinDecoder object. This establishes the playback path. After assigning, make sure to activate the component so audio is actually produced.
For full 3D voice chat, the recommended approach is to add an Odin Synth Component to each player character or pawn and position it near the character's head. At runtime, you can retrieve the correct instance with GetComponentByClass() based on which peer the UOdinDecoder belongs to. This ensures accurate spatialization and attenuation. You will need to keep track of which Synth Component is connected to which peer, e.g. by using a TMap.
To keep this guide focused, we will ignore 3D spatialization for now and provide 2D output only. In that case, you can call AActor::AddComponentByClass() at runtime to create and attach an Odin Synth Component to the locally controlled player character. This avoids the need to resolve which character corresponds to the ODIN peer and simplifies initial setup.
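Putting these steps together, a 2D-only handler could look like the following sketch. The FOdinPeerJoinedData parameter type is a placeholder for the plugin's actual delegate payload:

```cpp
// OdinClientComponent.cpp (sketch)
void UOdinClientComponent::OnPeerJoinedHandler(FOdinPeerJoinedData PeerData)
{
    // Decoder for the remote peer: 48000 Hz sample rate, stereo output (true).
    UOdinDecoder* Decoder = UOdinDecoder::ConstructDecoder(this, 48000, true);
    UOdinFunctionLibrary::RegisterDecoder(Decoder, Room, PeerData.peer_id);

    // 2D playback: attach an Odin Synth Component to the locally controlled pawn.
    APlayerController* PC = GetWorld()->GetFirstPlayerController();
    if (APawn* Pawn = PC ? PC->GetPawn() : nullptr)
    {
        UOdinSynthComponent* Synth = Cast<UOdinSynthComponent>(Pawn->AddComponentByClass(
            UOdinSynthComponent::StaticClass(), false, FTransform::Identity, false));
        Synth->SetDecoder(Decoder);
        Synth->Activate();
    }
}
```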
Always set up and bind your event handlers before joining a room. This ensures you receive OnRoomPeerJoinedBP and OnRoomJoinedBP events for users who are already present when your client connects.
That's it. With this setup, every user connected to the same room will be heard at full volume through the local player’s position.
In a real 3D game you would not route all audio through the local player. Instead, you would map the ODIN Peer ID to your Unreal Player's Unique Net ID and assign each UOdinDecoder output to the correct player character. This way, Unreal's audio engine automatically applies spatialization and attenuation, e.g. by reducing the volume as players move farther away from the listener.
This approach is presented later on in the guide.
Joining a Room
Now we have everything in place to join a room. A room token for an Odin room with id RoomName has been created, and the room settings for our client have been configured. The final step is to connect both and initiate the join.
In this step, we simply use the Room pointer together with the default ODIN gateway URL and the room token. With this, the client will attempt to join the room.
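Assuming the Room and RoomToken members from the previous steps, the call could look like this sketch. The gateway URL is left as a placeholder; use the default EU gateway from the plugin documentation or one from the gateway list:

```cpp
// OdinClientComponent.cpp (sketch) - at the end of BeginPlay()
Room->ConnectRoom(TEXT("<DEFAULT_GATEWAY_URL>"), RoomToken);
```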
The default gateway used here is located in Europe. If you'd like to connect to a gateway in another region, please take a look at our available gateways list.
Adding Microphone input
Now that we have successfully joined a room, we need to add our own microphone input, so that other users in the room can hear us. This is done by creating a UOdinAudioCapture object by calling UOdinFunctionLibrary::CreateOdinAudioCapture. Then we create a new UOdinEncoder object with a call to UOdinFunctionLibrary::CreateOdinEncoderFromGenerator, providing the target Odin room and the audio capture object as input parameters. Finally, we simply call UOdinAudioCapture::StartCapturingAudio to start capturing audio and pushing it to the connected Odin room.
Since capturing requires keeping the capture object alive, we keep the pointers to the UOdinAudioCapture and UOdinEncoder objects in UPROPERTY() marked instance variables. This also allows us to start and stop microphone input whenever needed.
Header:
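A header sketch for the two members:

```cpp
// OdinClientComponent.h (sketch)
UPROPERTY()
UOdinAudioCapture* AudioCapture;

UPROPERTY()
UOdinEncoder* Encoder;
```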
Source:
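The source side could look like this; the exact parameter order of CreateOdinEncoderFromGenerator is an assumption based on the description above, so verify it against the plugin headers:

```cpp
// OdinClientComponent.cpp (sketch) - e.g. after joining the room
AudioCapture = UOdinFunctionLibrary::CreateOdinAudioCapture(this);
Encoder = UOdinFunctionLibrary::CreateOdinEncoderFromGenerator(this, Room, AudioCapture);
AudioCapture->StartCapturingAudio();
```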
Configuring Audio Processing (APM)
In V2, Audio Processing Module (APM) settings like Noise Suppression and Voice Activity Detection are configured on the UOdinPipeline of the Encoder. This allows you to set different effects for each room or scenario.
The APM configuration code can be applied at any step after the UOdinEncoder creation. Simply retrieve the encoder's audio pipeline, initialize the APM and/or VAD effect structures and insert the effects.
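As an illustration only: the pipeline accessor, struct names, and insert calls below are hypothetical placeholders for the pattern described above; look up the actual APM and VAD types and functions in the plugin's pipeline headers:

```cpp
// Sketch with hypothetical names - replace with the plugin's actual API.
UOdinPipeline* Pipeline = Encoder->GetPipeline();  // hypothetical accessor

FOdinApmEffectConfig Apm;                          // hypothetical struct
Apm.bNoiseSuppression = true;

FOdinVadEffectConfig Vad;                          // hypothetical struct
Vad.bVoiceActivityDetection = true;

Pipeline->InsertApmEffect(Apm);                    // hypothetical calls
Pipeline->InsertVadEffect(Vad);
```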
Enabling 3D Audio
So far, we have enabled voice chat in the application, but most likely you will also want to make use of Unreal's 3D audio engine. Enabling this is straightforward: you can assign Attenuation Settings directly to your Odin Synth Component. However, there is another important consideration: making sure that the Odin Synth Components are positioned correctly in the scene.
The simplest approach is to attach Odin Synth Components to the Pawns that represent the respective players. To do this reliably, you need to track which Odin Peer is associated with which player. Let's take a closer look at how to achieve that.
The exact implementation depends on your networking and replication system. In most cases you will be using Unreal Engine's native networking, so we will assume that setup here. If you are using a different system, you will need to adapt the approach to fit your networking solution.
Creating the Necessary Classes
The goal of the next steps is to map each Odin Peer Id to the corresponding Character in the scene so that every UOdinDecoder can be assigned to the correct Actor. To achieve this we need to create two additional classes alongside the existing UOdinClientComponent.
First, we create a new class derived from Unreal's ACharacter. Each ACharacter exists on all clients, and there is one instance for every connected player (including a possible ListenServer). This makes it the ideal place to replicate a player's ID from the server to all clients. In this guide, the class is called AOdinCharacter. Don't forget to update your Game Mode to reference this class so it will be used in your project. Alternatively, you can derive your current Default Player Character class from AOdinCharacter to retain existing functionality. In the sample project, the Blueprint BP_ThirdPersonCharacter was simply reparented to AOdinCharacter for this purpose.
Next, we create a class derived from UGameInstance to hold the mapping. A Game Instance object exists on each client but is never replicated, making it suitable for storing and maintaining per-client data. In this guide, the derived class is called UOdinGameInstance.
After creating these two classes and regenerating the Visual Studio project files, you can begin implementing the logic to connect Odin peers with their corresponding characters.
Creating the Player Character Maps
To assign the correct Odin Synth Component to the appropriate player-controlled character, we need to track a unique identifier for each player, their characters, and their Odin Peer IDs.
In the UOdinGameInstance class, create two maps: one mapping the unique player ID to the player character (using an FGuid), and another mapping the Odin Peer ID to the player character (using an int64). The first map keeps track of which player character belongs to which player. When another Odin peer joins the voice chat room, you can then use this mapping to determine which peer ID corresponds to which player character.
Your header file should look similar to this:
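A sketch of the game instance header, using the map names referenced later in this guide (includes and the module API macro are omitted for brevity):

```cpp
// OdinGameInstance.h (sketch)
UCLASS()
class UOdinGameInstance : public UGameInstance
{
    GENERATED_BODY()

public:
    // Unique player id -> player character
    UPROPERTY()
    TMap<FGuid, ACharacter*> PlayerCharacters;

    // ODIN peer id -> player character
    UPROPERTY()
    TMap<int64, ACharacter*> OdinPlayerCharacters;
};
```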
Adjusting the Join Room Routine
Next, we will move the routine that joins an ODIN room from the BeginPlay() function of UOdinClientComponent into its own function. The reason is simple: the client must wait until it knows its own PlayerId, because that value needs to be sent as User Data in the Join Room call.
Move the code from BeginPlay() into a dedicated function and declare that function in the header as well. This function will take an FGuid which we pass as User Data.
Header:
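A minimal declaration sketch for the dedicated join function:

```cpp
// OdinClientComponent.h (sketch)
void ConnectToOdin(FGuid PlayerId);
```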
To pass the PlayerId as User Data, use the UOdinJsonObject helper class from the Odin SDK. Create a UOdinJsonObject, add a String Field with the key PlayerId, and set the value to the GUID converted to an FString. Then, take the AuthJson object, which we previously initialized with a call to GenerateRoomToken, and add the user data JSON object as an object field with the key user_data. This initializes the client's Peer User Data with the required PlayerId.
We then use the UOdinJsonObject::EncodeJson function to retrieve the authentication JSON as an FString, which replaces the RoomToken during the UOdinRoom::ConnectRoom call.
Source:
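A sketch of the implementation; the UOdinJsonObject function names are assumptions modeled on its Blueprint nodes, so verify the exact C++ names in the SDK. The gateway URL stays a placeholder:

```cpp
// OdinClientComponent.cpp (sketch)
void UOdinClientComponent::ConnectToOdin(FGuid PlayerId)
{
    // Build the user data JSON: { "PlayerId": "<guid>" }
    UOdinJsonObject* UserData = UOdinJsonObject::ConstructJsonObject(this);
    UserData->SetStringField(TEXT("PlayerId"), PlayerId.ToString());

    // AuthJson was filled by GenerateRoomToken and already contains the "token" field.
    AuthJson->SetObjectField(TEXT("user_data"), UserData);

    // Pass the encoded authentication JSON instead of the raw room token.
    Room->ConnectRoom(TEXT("<DEFAULT_GATEWAY_URL>"), AuthJson->EncodeJson());
}
```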
The AuthJson object was initialized automatically by the call to GenerateRoomToken. In production, you will not be able to use this function, so you will need to set up an authentication JSON yourself. Luckily this is a simple process - the AuthJson object initially only contains a string field with the key token and the value set to the generated room token. To set up your own authentication JSON object, create a new UOdinJsonObject and set the token string field appropriately. You can then continue with setting up the user data as described above.
Propagating an Identifier
If you have not done so earlier, now is the time to attach the component to your Player Controller. For this sample we assume you have added UOdinClientComponent to your game's default PlayerController. You can do this either in C++ within your PlayerController class, or via the Blueprint Editor if your default controller is a Blueprint.
With that in place, we can propagate a per-player identifier for the current game session. You can use GUIDs or any existing unique player identifiers you already maintain (for example from a login flow). First, declare the corresponding replicated variable in C++ and configure the class for replication. We want each client to react as soon as PlayerId is replicated, so it can be added to the local map of player characters and player IDs. The header will look like this in the end:
Header:
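A header sketch for AOdinCharacter, using standard Unreal replication markup:

```cpp
// OdinCharacter.h (sketch)
UPROPERTY(ReplicatedUsing = OnRep_PlayerId)
FGuid PlayerId;

UFUNCTION()
void OnRep_PlayerId();

virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override;
```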
Next, implement GetLifetimeReplicatedProps() to enable replication of PlayerId.
Source:
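This part is standard Unreal replication boilerplate:

```cpp
// OdinCharacter.cpp
#include "Net/UnrealNetwork.h"

void AOdinCharacter::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);
    DOREPLIFETIME(AOdinCharacter, PlayerId);
}
```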
You can assign the ID at any appropriate point during startup. A good entry point is the BeginPlay function of the Character class. On the server, set the replicated variable so it propagates to all clients. Also add the newly created PlayerId to the Game Instance's PlayerCharacters map.
If this character is locally controlled, start the ODIN room join routine on the UOdinClientComponent.
Finally, implement OnRep_PlayerId(), which is invoked on each client once the PlayerId arrives. Here you perform the same bookkeeping as above, without creating a new GUID. Since the client now knows its PlayerId, it can start the ConnectToOdin() routine.
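A sketch of both functions, assuming the UOdinClientComponent is attached to the Player Controller and a UOdinGameInstance with a PlayerCharacters map as described above:

```cpp
// OdinCharacter.cpp (sketch)
void AOdinCharacter::BeginPlay()
{
    Super::BeginPlay();

    if (HasAuthority())
    {
        PlayerId = FGuid::NewGuid();  // replicates to all clients
        OnRep_PlayerId();             // OnRep is not invoked automatically on the server
    }
}

void AOdinCharacter::OnRep_PlayerId()
{
    // Bookkeeping: register this character under its PlayerId.
    if (UOdinGameInstance* GameInstance = Cast<UOdinGameInstance>(GetGameInstance()))
    {
        GameInstance->PlayerCharacters.Add(PlayerId, this);
    }

    // The local client now knows its PlayerId and can join the ODIN room.
    if (IsLocallyControlled())
    {
        APlayerController* PC = Cast<APlayerController>(GetController());
        UOdinClientComponent* OdinClient =
            PC ? PC->FindComponentByClass<UOdinClientComponent>() : nullptr;
        if (OdinClient)
        {
            OdinClient->ConnectToOdin(PlayerId);
        }
    }
}
```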
Handling the Peer Joined Event with an Identifier
With player identifiers in place, each client will receive an event when a player joins the ODIN room. In this handler, extract the player identifier from the provided user data. Once you resolve which character belongs to the newly joined peer, add that character to the new map using the passed Peer Id.
With the identifier mapping in place, you can now attach the Odin Synth Component to the correct player character. Do not attach the component to the local player by default. Instead, resolve the target character using the Peer Id and your Game Instance's Odin Player Character map, then attach and activate the synth component on that character.
The finished OnPeerJoinedHandler() will look like this:
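A sketch of the finished handler; the PeerData parameter type and the user-data decoding helper are assumptions, so align them with the plugin's actual delegate signature and JSON utilities:

```cpp
// OdinClientComponent.cpp (sketch)
void UOdinClientComponent::OnPeerJoinedHandler(FOdinPeerJoinedData PeerData)
{
    // Extract the replicated PlayerId from the peer's user data JSON.
    UOdinJsonObject* UserData =
        UOdinJsonObject::ConstructJsonObjectFromBytes(this, PeerData.user_data);  // hypothetical helper
    FGuid RemotePlayerId;
    FGuid::Parse(UserData->GetStringField(TEXT("PlayerId")), RemotePlayerId);

    // Resolve the character that belongs to this peer and remember the mapping.
    UOdinGameInstance* GameInstance = Cast<UOdinGameInstance>(GetWorld()->GetGameInstance());
    ACharacter* TargetCharacter = GameInstance->PlayerCharacters.FindRef(RemotePlayerId);
    if (!TargetCharacter)
    {
        return;  // character not replicated yet; production code should retry later
    }
    GameInstance->OdinPlayerCharacters.Add(PeerData.peer_id, TargetCharacter);

    // Decoder for the remote peer: 48 kHz, stereo.
    UOdinDecoder* Decoder = UOdinDecoder::ConstructDecoder(this, 48000, true);
    UOdinFunctionLibrary::RegisterDecoder(Decoder, Room, PeerData.peer_id);

    // Attach the synth to the peer's own character for correct 3D spatialization.
    UOdinSynthComponent* Synth = Cast<UOdinSynthComponent>(TargetCharacter->AddComponentByClass(
        UOdinSynthComponent::StaticClass(), false, FTransform::Identity, false));
    Synth->SetDecoder(Decoder);
    Synth->Activate();
}
```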
Conclusion
This is all you need to add Odin Synth Components to the correct player characters. From here, you can adjust the attenuation settings to whatever best fits your 3D world. The most straightforward approach is simply assigning an attenuation settings asset created in the Unreal Editor to your Odin Synth Component.
Because Odin itself is agnostic of the underlying audio engine, you are free to use the built-in Unreal Audio Engine or any third-party solution such as Steam Audio, Microsoft Audio, FMOD, or Wwise. All of these work by configuring your project and applying the appropriate attenuation settings to the Odin Synth Component.