React

This guide walks through integrating the ODIN Voice Web SDK into a React application. It covers every step from plugin initialization to cleanup, using idiomatic React patterns (hooks, refs, state). The code below is based on the tested example in the ODIN Web SDK repository.

info

This example uses @4players/odin-tokens to generate tokens client-side for simplicity. In production, tokens should be generated on your backend server to protect your access key. See Getting Started for details.
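
In production you would replace the TokenGenerator call with a request to your own server. A minimal sketch, assuming a hypothetical /api/odin-token endpoint that signs tokens with your access key server-side:

async function fetchToken(roomId: string, userId: string): Promise<string> {
  const response = await fetch('/api/odin-token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ roomId, userId }),
  });
  if (!response.ok) throw new Error('Failed to fetch ODIN token');
  const { token } = await response.json();
  return token;
}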

Dependencies

npm install @4players/odin @4players/odin-tokens

Full Example

App.tsx
import { useState, useEffect, useRef } from 'react';
import { Room, DeviceManager, AudioInput, setOutputDevice } from '@4players/odin';
import { TokenGenerator } from '@4players/odin-tokens';

// Type for tracking peer information in component state
interface PeerInfo {
  id: number;
  userId: string;
  isLocal: boolean;
}

function App() {
  // ---------------------------------------------------------------------------
  // State
  // ---------------------------------------------------------------------------

  const [isConnected, setIsConnected] = useState(false);
  const [isConnecting, setIsConnecting] = useState(false);
  const [peers, setPeers] = useState<PeerInfo[]>([]);
  // Track which peers are currently speaking via their peer ID
  const [speakingPeers, setSpeakingPeers] = useState<Set<number>>(new Set());
  const [isMuted, setIsMuted] = useState(false);
  const [error, setError] = useState<string | null>(null);

  // Refs persist across renders without triggering re-renders — ideal for SDK
  // objects that live outside the React lifecycle
  const roomRef = useRef<Room | null>(null);
  const audioInputRef = useRef<AudioInput | null>(null);

  // ---------------------------------------------------------------------------
  // Cleanup on unmount — close audio and leave room
  // ---------------------------------------------------------------------------

  useEffect(() => {
    return () => {
      audioInputRef.current?.close();
      roomRef.current?.leave();
    };
  }, []);

  // ---------------------------------------------------------------------------
  // Join a voice room
  // ---------------------------------------------------------------------------

  const joinRoom = async () => {
    setError(null);
    setIsConnecting(true);

    try {
      // Generate a token (in production, fetch this from your backend)
      const generator = new TokenGenerator('YOUR_ACCESS_KEY');
      const token = await generator.createToken('default', 'react-user');

      // Create a new Room instance for this session
      const room = new Room();
      roomRef.current = room;

      // ----- Register event handlers BEFORE joining -----

      // Called once the room has been successfully joined
      room.onJoined = () => {
        setIsConnected(true);
        setIsConnecting(false);
      };

      // Called when we leave or get disconnected from the room
      room.onLeft = () => {
        setIsConnected(false);
        setIsConnecting(false);
        setPeers([]);
      };

      // Called whenever a peer joins — including ourselves and peers already present
      room.onPeerJoined = (payload) => {
        // Compare with room.ownPeerId to identify the local peer
        const isLocal = payload.peer.id === room.ownPeerId;

        setPeers((prev) => [
          ...prev.filter((p) => p.id !== payload.peer.id),
          { id: payload.peer.id, userId: payload.peer.userId, isLocal },
        ]);

        // Attach a per-peer audio activity handler to track who is speaking
        payload.peer.onAudioActivity = ({ media }) => {
          setSpeakingPeers((prev) => {
            const next = new Set(prev);
            if (media.isActive) {
              next.add(payload.peer.id);
            } else {
              next.delete(payload.peer.id);
            }
            return next;
          });
        };
      };

      // Called when a peer leaves the room
      room.onPeerLeft = (payload) => {
        setPeers((prev) => prev.filter((p) => p.id !== payload.peer.id));
        setSpeakingPeers((prev) => {
          const next = new Set(prev);
          next.delete(payload.peer.id);
          return next;
        });
      };

      // Set the output device — required to hear other peers
      await setOutputDevice({});

      // Join the room with the token
      await room.join(token, { gateway: 'https://gateway.odin.4players.io' });

      // Create a microphone input. Defaults: system mic with echo cancellation,
      // noise suppression, and automatic gain control enabled.
      const audioInput = await DeviceManager.createAudioInput();
      audioInputRef.current = audioInput;

      // Attach the microphone to the room — audio transmission starts here
      await room.addAudioInput(audioInput);
    } catch (e) {
      setError(e instanceof Error ? e.message : 'Failed to join room');
      setIsConnecting(false);
    }
  };

  // ---------------------------------------------------------------------------
  // Leave the room
  // ---------------------------------------------------------------------------

  const leaveRoom = () => {
    if (audioInputRef.current) {
      // Remove the audio input from the room (stops the encoder)
      roomRef.current?.removeAudioInput(audioInputRef.current);
      // Close the AudioInput (releases the microphone)
      audioInputRef.current.close();
      audioInputRef.current = null;
    }
    if (roomRef.current) {
      // Disconnect from the room
      roomRef.current.leave();
      roomRef.current = null;
    }
    // Reset all state
    setIsConnected(false);
    setPeers([]);
    setSpeakingPeers(new Set());
    setIsMuted(false);
  };

  // ---------------------------------------------------------------------------
  // Mute / Unmute toggle
  // ---------------------------------------------------------------------------

  const toggleMute = async () => {
    if (!audioInputRef.current) return;

    if (isMuted) {
      // Unmute: restore volume first, then re-add to room to resume encoding
      await audioInputRef.current.setVolume(1);
      await roomRef.current?.addAudioInput(audioInputRef.current);
    } else {
      // Mute: remove from room to stop the encoder (saves CPU),
      // then use 'muted' to stop the MediaStream so the browser's
      // recording indicator disappears
      roomRef.current?.removeAudioInput(audioInputRef.current);
      await audioInputRef.current.setVolume('muted');
    }
    setIsMuted(!isMuted);
  };

  // ---------------------------------------------------------------------------
  // Render
  // ---------------------------------------------------------------------------

  return (
    <div>
      {error && <p>{error}</p>}

      {!isConnected ? (
        <button onClick={joinRoom} disabled={isConnecting}>
          {isConnecting ? 'Connecting...' : 'Join Voice Chat'}
        </button>
      ) : (
        <div>
          <button onClick={toggleMute}>{isMuted ? 'Unmute' : 'Mute'}</button>
          <button onClick={leaveRoom}>Leave</button>

          <h3>Peers ({peers.length})</h3>
          <ul>
            {peers.map((peer) => (
              <li key={peer.id}>
                {peer.isLocal ? 'You' : peer.userId}
                {speakingPeers.has(peer.id) && ' (speaking)'}
              </li>
            ))}
          </ul>
        </div>
      )}
    </div>
  );
}

export default App;

Step-by-Step Breakdown

Plugin

The audio plugin is registered automatically by the SDK when needed; no manual initialization is required. See Customize the Plugin if you need to adjust its configuration.

warning

Modern browsers require a user gesture (click/tap) before an AudioContext can be started. Ensure joinRoom is called from a button click handler.

Event Handlers

All event handlers must be registered before calling room.join(). The example registers four handlers:

Handler             Purpose
room.onJoined       Update connection state when successfully joined
room.onLeft         Reset state when disconnected
room.onPeerJoined   Track peers; attach per-peer onAudioActivity for speaking detection
room.onPeerLeft     Remove peer from state
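
Stripped of the React state updates, the registration-before-join pattern is a condensed excerpt of the joinRoom flow above, with handler bodies elided:

const room = new Room();
room.onJoined = () => { /* mark the session as connected */ };
room.onLeft = () => { /* reset local state */ };
room.onPeerJoined = (payload) => { /* track payload.peer */ };
room.onPeerLeft = (payload) => { /* forget payload.peer */ };

// Only join once every handler is in place
await room.join(token, { gateway: 'https://gateway.odin.4players.io' });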

The per-peer onAudioActivity handler (see PeerEvents) receives { media }, where media.isActive indicates whether the peer is currently speaking.
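
Outside of React, the same event can be consumed with plain logging; this sketch assumes only the APIs shown in the example above:

room.onPeerJoined = (payload) => {
  payload.peer.onAudioActivity = ({ media }) => {
    // media.isActive flips to true while the peer is transmitting voice
    console.log(`Peer ${payload.peer.id} is ${media.isActive ? 'speaking' : 'silent'}`);
  };
};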

Joining and Audio

Two critical calls must happen in order, as shown in the excerpt after this list:

  1. setOutputDevice({}) — must be called before you can hear other peers
  2. room.addAudioInput(audioInput) — attaches the microphone and starts audio transmission
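
In context, the order inside joinRoom is (excerpted from the full example; the gateway URL is the public default used above):

// 1. Set the output device (required to hear other peers)
await setOutputDevice({});

// Connect to the room with the token
await room.join(token, { gateway: 'https://gateway.odin.4players.io' });

// 2. Create the microphone input and attach it; audio transmission starts here
const audioInput = await DeviceManager.createAudioInput();
await room.addAudioInput(audioInput);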

Muting

The example uses the full mute approach for maximum resource savings. See Muting & Volume Control for all available approaches.
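
Condensed from toggleMute above, the full mute and the matching unmute are:

// Mute: stop the encoder, then stop the MediaStream so the browser's
// recording indicator disappears
roomRef.current?.removeAudioInput(audioInputRef.current);
await audioInputRef.current.setVolume('muted');

// Unmute: restore the volume, then re-attach the input to resume encoding
await audioInputRef.current.setVolume(1);
await roomRef.current?.addAudioInput(audioInputRef.current);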

Cleanup

The useEffect cleanup function ensures audio is stopped and the room is left when the component unmounts, preventing resource leaks.
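
For reference, the cleanup effect from the example is:

useEffect(() => {
  return () => {
    // Runs on unmount: release the microphone and disconnect from the room
    audioInputRef.current?.close();
    roomRef.current?.leave();
  };
}, []);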