This guide walks through integrating the ODIN Voice Web SDK into an Angular application. It uses modern Angular patterns — standalone components, signals for reactive state, and OnPush change detection. The code below is based on the tested example in the ODIN Web SDK repository.
This example uses @4players/odin-tokens to generate tokens client-side for simplicity. In production, tokens should be generated on your backend server to protect your access key. See Getting Started for details.
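As a sketch of the production setup, the client would request a token from your own backend instead of embedding the access key. The `/api/odin-token` endpoint and the JSON response shape below are illustrative assumptions, not part of the ODIN API:

```typescript
// Hypothetical helper: fetch a room token from your own backend so the
// access key never ships to the browser. The endpoint path and response
// shape ({ token: string }) are assumptions for this sketch.
async function fetchOdinToken(roomId: string, userId: string): Promise<string> {
  const response = await fetch("/api/odin-token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ roomId, userId }),
  });
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`);
  }
  const { token } = await response.json();
  return token;
}
```

The returned string can then be passed to `room.join(token, ...)` exactly like the client-generated token in the example below.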
Dependencies
npm install @4players/odin @4players/odin-tokens
Full Example
app.component.ts
import {
  Component,
  OnDestroy,
  signal,
  ChangeDetectionStrategy,
} from "@angular/core";
import { FormsModule } from "@angular/forms";
import {
  Room,
  DeviceManager,
  AudioInput,
  setOutputDevice,
} from "@4players/odin";
import { TokenGenerator } from "@4players/odin-tokens";

interface PeerInfo {
  id: number;
  userId: string;
  isLocal: boolean;
}

@Component({
  selector: "app-root",
  standalone: true,
  imports: [FormsModule],
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    @if (error()) {
      <p>{{ error() }}</p>
    }
    @if (!isConnected()) {
      <button (click)="joinRoom()" [disabled]="isConnecting()">
        {{ isConnecting() ? "Connecting..." : "Join Voice Chat" }}
      </button>
    } @else {
      <button (click)="toggleMute()">
        {{ isMuted() ? "Unmute" : "Mute" }}
      </button>
      <button (click)="leaveRoom()">Leave</button>
      <h3>Peers ({{ peers().length }})</h3>
      <ul>
        @for (peer of peers(); track peer.id) {
          <li>
            {{ peer.isLocal ? "You" : peer.userId }}
            @if (speakingPeers().has(peer.id)) {
              (speaking)
            }
          </li>
        }
      </ul>
    }
  `,
})
export class AppComponent implements OnDestroy {
  isConnected = signal(false);
  isConnecting = signal(false);
  peers = signal<PeerInfo[]>([]);
  speakingPeers = signal<Set<number>>(new Set());
  isMuted = signal(false);
  error = signal<string | null>(null);

  private room: Room | null = null;
  private audioInput: AudioInput | null = null;

  ngOnDestroy() {
    this.leaveRoom();
  }

  async joinRoom() {
    this.error.set(null);
    this.isConnecting.set(true);
    try {
      const generator = new TokenGenerator("YOUR_ACCESS_KEY");
      const token = await generator.createToken("default", "angular-user");
      this.room = new Room();

      // Register all event handlers before calling join()
      this.room.onJoined = () => {
        this.isConnected.set(true);
        this.isConnecting.set(false);
      };
      this.room.onLeft = () => {
        this.isConnected.set(false);
        this.isConnecting.set(false);
        this.peers.set([]);
      };
      this.room.onPeerJoined = (payload) => {
        const isLocal = payload.peer.id === this.room!.ownPeerId;
        this.peers.update((prev) => [
          ...prev.filter((p) => p.id !== payload.peer.id),
          { id: payload.peer.id, userId: payload.peer.userId, isLocal },
        ]);
        payload.peer.onAudioActivity = ({ media }) => {
          this.speakingPeers.update((prev) => {
            const next = new Set(prev);
            if (media.isActive) {
              next.add(payload.peer.id);
            } else {
              next.delete(payload.peer.id);
            }
            return next;
          });
        };
      };
      this.room.onPeerLeft = (payload) => {
        this.peers.update((prev) =>
          prev.filter((p) => p.id !== payload.peer.id),
        );
        this.speakingPeers.update((prev) => {
          const next = new Set(prev);
          next.delete(payload.peer.id);
          return next;
        });
      };

      // Required to hear other peers
      await setOutputDevice({});
      await this.room.join(token, {
        gateway: "https://gateway.odin.4players.io",
      });
      this.audioInput = await DeviceManager.createAudioInput();
      await this.room.addAudioInput(this.audioInput);
    } catch (e) {
      this.error.set(e instanceof Error ? e.message : "Failed to join room");
      this.isConnecting.set(false);
    }
  }

  leaveRoom() {
    if (this.audioInput) {
      this.room?.removeAudioInput(this.audioInput);
      this.audioInput.close();
      this.audioInput = null;
    }
    if (this.room) {
      this.room.leave();
      this.room = null;
    }
    this.isConnected.set(false);
    this.peers.set([]);
    this.speakingPeers.set(new Set());
    this.isMuted.set(false);
  }

  async toggleMute() {
    if (!this.audioInput) return;
    if (this.isMuted()) {
      await this.audioInput.setVolume(1);
      await this.room?.addAudioInput(this.audioInput);
    } else {
      // Full mute: detach the input entirely while muted
      this.room?.removeAudioInput(this.audioInput);
      await this.audioInput.setVolume("muted");
    }
    this.isMuted.update((v) => !v);
  }
}
Step-by-Step Breakdown
Plugin
The audio plugin is registered automatically by the SDK when needed -- no manual initialization is required. See Customize the Plugin if you need to adjust its behavior.
Modern browsers require a user gesture (click/tap) before an AudioContext can be started. Ensure joinRoom is called from a button click handler (e.g. (click)="joinRoom()").
Reactive State with Signals
Angular signals provide fine-grained reactivity compatible with OnPush change detection. SDK objects (Room, AudioInput) are stored as private class properties since they are not rendered directly — only their derived state (connection status, peers, mute state) is exposed as signals.
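Because OnPush change detection compares by reference, collection signals must be updated immutably: build a new Array or Set rather than mutating the one already stored in the signal. Stripped of the Angular and ODIN types, the Set pattern used with speakingPeers.update() looks like this:

```typescript
// Immutable Set update, as used with speakingPeers.update(): copy the
// previous Set, modify the copy, and return it so the signal holds a new
// reference and OnPush components re-render.
function setSpeaking(
  prev: Set<number>,
  peerId: number,
  isActive: boolean,
): Set<number> {
  const next = new Set(prev);
  if (isActive) {
    next.add(peerId);
  } else {
    next.delete(peerId);
  }
  return next;
}
```

Returning `prev` after an in-place `prev.add(...)` would leave the reference unchanged, so the template would not update.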
Event Handlers
All event handlers must be registered before calling room.join(). The example registers four handlers:
| Handler | Purpose |
|---|---|
| room.onJoined | Update connection signals when the room is successfully joined |
| room.onLeft | Reset signals when disconnected |
| room.onPeerJoined | Track peers via signal.update(); attach the per-peer onAudioActivity handler |
| room.onPeerLeft | Remove the peer from the peers and speakingPeers signals |
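Why registration must precede join() can be illustrated with a minimal stand-in (this FakeRoom is not the ODIN Room class): events that fire during the join handshake are silently dropped if the handler is assigned afterwards.

```typescript
// Minimal stand-in for an event-emitting room (illustrative only, not the
// ODIN SDK). join() fires onJoined immediately, mimicking events emitted
// during the join handshake.
class FakeRoom {
  onJoined: (() => void) | null = null;
  join(): void {
    // A handler assigned after this call never sees the event.
    this.onJoined?.();
  }
}

let sawEvent = false;
const fakeRoom = new FakeRoom();
fakeRoom.onJoined = () => {
  sawEvent = true; // fires, because it was registered before join()
};
fakeRoom.join();
```

Assigning the handler after `fakeRoom.join()` would leave `sawEvent` false, which is exactly why the example wires up all four handlers before awaiting `room.join(token, ...)`.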
The per-peer onAudioActivity handler (part of PeerEvents) receives { media }, where media.isActive indicates whether the peer is currently speaking.
Joining and Audio
Two critical calls must happen in order:
1. setOutputDevice({}) -- must be called to hear other peers
2. room.addAudioInput(audioInput) -- attaches the microphone and starts audio transmission
Muting
The example uses the full mute approach for maximum resource savings. See Muting & Volume Control for all available approaches.
Cleanup
ngOnDestroy calls leaveRoom() to ensure the microphone is released and the room connection is closed when the component is destroyed.