Every other guide teaches one primitive. This one assembles them. By the end you'll have a working meeting app — multiple participants, camera + mic, a participant grid, screen share, in-call chat, recording, and connection-quality handling — built only with the public SDKs.
The code here mirrors LevelChat's own first-party meeting app. It is not a
toy: the same joinLive → publishCamera/Mic → wire participant + track events → leave shape
runs the real product. Where the raw @levelchat/web SDK needs boilerplate, we show it once so
you understand what's happening, then point at the @levelchat/web-react hooks that collapse it.
What you'll build, step by step:
- Project setup — Next.js + @levelchat/web
- A server-side token endpoint
- Connect: new LevelChat() → joinLive() → a LiveStream
- Publish camera + mic with device selection
- Render a live participant grid from track events
- Screen share
- In-call chat
- Recording (host-only)
- Connection quality + reconnection
- Clean teardown
- The less-boilerplate path with @levelchat/web-react
1. Project setup
Start from a fresh Next.js app (App Router) and add the SDK:
npx create-next-app@latest my-meeting-app --typescript --app
cd my-meeting-app
npm install @levelchat/web @levelchat/web-react

@levelchat/web is the framework-agnostic core — ~80 kB gzipped, zero runtime deps.
@levelchat/web-react is optional; we use the raw core for steps 3–10 so nothing is hidden,
then switch to the React bindings in step 11.
2. Mint a token on your server
A room token is a short-lived JWT scoped to one (room, user, role). It is minted on your
server with your project API key (lc_pk_*) — never in the browser, where the key would leak.
Issue a POST /v1/auth/tokens/room to LevelChat. This is the same request the
Quickstart uses; a dedicated @levelchat/node SDK is on the roadmap, but until then
plain fetch is all you need:
export async function POST(req: Request) {
const userId = await yourAuth(req); // your existing auth
const { searchParams } = new URL(req.url);
const roomId = searchParams.get('room')!;
const displayName = searchParams.get('name') ?? 'Guest';
const res = await fetch(`${process.env.LEVELCHAT_API_URL}/v1/auth/tokens/room`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${process.env.LEVELCHAT_API_KEY}`, // lc_pk_xxx.yyy
},
body: JSON.stringify({
roomId,
userId,
identity: userId,
displayName,
roomType: 'meeting', // 'meeting' | 'live' | 'webinar' | '1to1'
role: 'publisher', // every meeting participant publishes
caps: ['publish:camera', 'publish:mic', 'publish:screen', 'subscribe:all', 'chat:send'],
ttlSeconds: 600, // 10 minutes
}),
});
if (!res.ok) return new Response('token mint failed', { status: 500 });
const { token, expires_at } = await res.json();
return Response.json({ token, expires_at });
}

The response is { token, expires_at }. The client SDK reads the room id straight out of the
JWT and auto-routes to the nearest region — you don't pass a signaling URL unless you self-host.
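While wiring this up, it can help to peek inside the token you minted. A JWT payload is just base64url-encoded JSON, so a debug-only decode (no signature verification) takes a few lines. The exact claim names are internal to LevelChat, so treat this strictly as a debugging aid:

```typescript
// Debug helper: decode a JWT's payload WITHOUT verifying the signature.
// Assumes Node (Buffer); never use this as an authorization check.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('not a JWT');
  // base64url -> base64, then decode the JSON payload
  const b64 = parts[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}
```

Logging the decoded payload during development makes it obvious when a token was minted for the wrong room or with the wrong capabilities.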
Recording note. If a participant should be able to start a recording, add
record to their token's caps. The capability is enforced server-side, so a UI-only "host" flag is not enough — see step 8.
3. Connect
The whole client lifecycle is: construct a LevelChat, call joinLive(), and you get back a
LiveStream. LiveStream wraps the underlying Room and exposes the role-aware helpers most
apps need (live.publishCamera(), live.publishMic(), live.publishScreen(), live.leave()),
while live.room gives you the full low-level event surface.
import { LevelChat, type LiveStream } from '@levelchat/web';
async function connect(roomId: string, displayName: string): Promise<LiveStream> {
// 1. Mint the token from YOUR server (step 2).
const { token } = await fetch(
`/api/lc-token?room=${roomId}&name=${encodeURIComponent(displayName)}`,
).then((r) => r.json());
// 2. Construct the client. `logLevel: 'warn'` is the production default.
const client = new LevelChat({ logLevel: 'warn' });
// 3. Join. `role: 'broadcaster'` gives every participant the publish
// surface — in a meeting, everyone is a broadcaster. `roomType`
// is a topology hint; the server still enforces room policy.
const live = await client.joinLive({
token,
role: 'broadcaster',
roomType: 'meeting',
});
// `live.room.ready` resolves once the initial join-ack has been
// processed — await it before reading `live.room.participants`.
await live.room.ready;
return live;
}

After joinLive resolves you are in the room. live.room.localId is your participant id, and
live.room.participants is a ReadonlyMap of everyone already present (including you).
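For example, a lobby header that lists everyone else already in the room is a single filter over that map. The ParticipantLike shape below is a minimal stand-in for the SDK's participant object, not its full type:

```typescript
// Minimal stand-in for the fields this sketch needs.
interface ParticipantLike {
  id: string;
  displayName?: string;
}

// Everyone in the participants map except ourselves.
function remoteRoster(
  participants: ReadonlyMap<string, ParticipantLike>,
  localId: string | null,
): ParticipantLike[] {
  return [...participants.values()].filter((p) => p.id !== localId);
}
```

Called as `remoteRoster(live.room.participants, live.room.localId)`, this gives you the initial roster before any participant events have fired.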
4. Publish camera + mic
publishCamera() and publishMic() acquire the device, start sending, and return a TrackView
whose .mediaStreamTrack you bind to a <video> (or <audio>) element for the local preview.
Both accept a deviceId so you can honor a device picker. Enumerate devices with
navigator.mediaDevices.enumerateDevices() (the browser API) before joining, or use the SDK's
client-independent DeviceManager — but the raw browser call is enough for most apps.
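If you do use the raw browser call, the result only needs grouping by kind before it can drive a picker. A minimal sketch (the kind values are the standard MediaDeviceInfo ones; the grouping is our own):

```typescript
// Just the MediaDeviceInfo fields a picker needs.
interface DeviceInfoLike {
  deviceId: string;
  kind: 'videoinput' | 'audioinput' | 'audiooutput';
  label: string;
}

// Split enumerateDevices() output into picker-friendly lists.
function groupDevices(devices: DeviceInfoLike[]) {
  return {
    cameras: devices.filter((d) => d.kind === 'videoinput'),
    mics: devices.filter((d) => d.kind === 'audioinput'),
    speakers: devices.filter((d) => d.kind === 'audiooutput'),
  };
}

// In the browser:
// const { cameras, mics } = groupDevices(await navigator.mediaDevices.enumerateDevices());
```

Note that labels are empty until the user has granted a media permission, which is one reason pickers are often shown after a first getUserMedia prompt.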
async function publishLocalMedia(
live: LiveStream,
opts: { cameraDeviceId?: string; micDeviceId?: string; camOn: boolean; micOn: boolean },
): Promise<MediaStream | null> {
const tracks: MediaStreamTrack[] = [];
if (opts.camOn) {
const cam = await live.publishCamera({
resolution: '720p',
...(opts.cameraDeviceId ? { deviceId: opts.cameraDeviceId } : {}),
// Cap the top simulcast layer's bitrate; the SFU picks the
// layer to forward per-subscriber.
encodings: [{ rid: 'h', maxBitrate: 1_500_000 }],
});
if (cam.mediaStreamTrack) tracks.push(cam.mediaStreamTrack);
}
if (opts.micOn) {
const mic = await live.publishMic(
opts.micDeviceId ? { deviceId: opts.micDeviceId } : undefined,
);
if (mic.mediaStreamTrack) tracks.push(mic.mediaStreamTrack);
}
// Bundle the two tracks into one MediaStream for the local preview.
return tracks.length > 0 ? new MediaStream(tracks) : null;
}

Bind the returned stream to a muted, mirrored <video> for the self-view:

<video ref={localVideoRef} autoPlay playsInline muted style={{ transform: 'scaleX(-1)' }} />

if (localVideoRef.current && localStream) {
  localVideoRef.current.srcObject = localStream;
}

To mute without dropping the publication, flip the track's enabled flag — this stops
outgoing media at the WebRTC layer without renegotiating the connection:
function toggleMic(localStream: MediaStream, muted: boolean) {
localStream.getAudioTracks().forEach((t) => (t.enabled = !muted));
}
function toggleCam(localStream: MediaStream, muted: boolean) {
localStream.getVideoTracks().forEach((t) => (t.enabled = !muted));
}

If publishCamera throws (permission denied, no device), don't fail the join — the user can
still see and hear everyone else. Catch the error, keep the room connected, and surface a toast
from your UI layer, not from the connection code:
import { LevelChatError } from '@levelchat/web';
try {
await live.publishCamera({ resolution: '720p' });
} catch (err) {
if (err instanceof LevelChatError && err.code === 'media/permission-denied') {
showToast('Camera blocked — you can still see and hear everyone.');
} else {
throw err; // unexpected — let it surface
}
}

Switching devices mid-call follows the same publish path. To change camera, unpublish the
current track and publish a new one with the chosen deviceId — room.devices enumerates
what's available and emits a change event when hardware is plugged or unplugged:
async function switchCamera(live: LiveStream, deviceId: string, currentTrackId: string) {
  await live.room.stopPublishing(currentTrackId);
  return live.publishCamera({ deviceId, resolution: '720p' });
}

5. Render a participant grid
This is the part developers most need shown. The SDK does not hand you <video> elements — it
hands you events. You listen, you maintain a map of participant → MediaStream, and you
render. Four events drive the grid:
- participant-joined / participant-left — roster changes
- track-subscribed / track-unsubscribed — a remote track became (un)available
When track-subscribed fires, you get a TrackView with .mediaStreamTrack and
.participantId. Append that track to a MediaStream you keep per participant, then bind the
stream to that participant's <video>. Camera and mic land on the same stream; screen-share
tracks (source: 'screen-video' | 'screen-audio') you route to a separate stream.
interface RemotePeer {
id: string;
displayName: string;
stream: MediaStream; // camera + mic
screen: MediaStream | null; // screen share, when present
}
function wireParticipantGrid(
live: LiveStream,
localId: string | null,
onChange: (peers: RemotePeer[]) => void,
) {
const peers = new Map<string, RemotePeer>();
const emit = () => onChange([...peers.values()]);
// Seed with whoever is already in the room (minus ourselves).
for (const p of live.room.participants.values()) {
if (p.id === localId) continue;
peers.set(p.id, {
id: p.id,
displayName: p.displayName ?? p.identity ?? 'Participant',
stream: new MediaStream(),
screen: null,
});
}
emit();
live.room.on('participant-joined', (p) => {
if (p.id === localId || peers.has(p.id)) return;
peers.set(p.id, {
id: p.id,
displayName: p.displayName ?? p.identity ?? 'Participant',
stream: new MediaStream(),
screen: null,
});
emit();
});
live.room.on('participant-left', (p) => {
peers.delete(p.id);
emit();
});
live.room.on('track-subscribed', (track) => {
if (!track.mediaStreamTrack || !track.participantId) return;
const peer = peers.get(track.participantId);
if (!peer) return;
const isScreen = track.source === 'screen-video' || track.source === 'screen-audio';
if (isScreen) {
peer.screen ??= new MediaStream();
peer.screen.addTrack(track.mediaStreamTrack);
} else {
peer.stream.addTrack(track.mediaStreamTrack);
}
emit();
});
live.room.on('track-unsubscribed', (track) => {
  if (!track.participantId) return;
  const peer = peers.get(track.participantId);
  if (!peer) return;
  if (track.source === 'screen-video' || track.source === 'screen-audio') {
    peer.screen = null;
  } else if (track.mediaStreamTrack) {
    // Camera/mic: drop the ended track from the peer's stream.
    peer.stream.removeTrack(track.mediaStreamTrack);
  }
  emit();
});
}

Then render each peer's stream into a <video>. In React, the binding goes in an effect so it
re-runs when the stream reference changes:
import { useEffect, useRef } from 'react';

function PeerTile({ peer }: { peer: RemotePeer }) {
const videoRef = useRef<HTMLVideoElement>(null);
useEffect(() => {
if (videoRef.current) videoRef.current.srcObject = peer.stream;
}, [peer.stream]);
return (
<div className="tile">
<video ref={videoRef} autoPlay playsInline />
<span className="name">{peer.displayName}</span>
</div>
);
}

That's the whole grid: a Map kept in sync by four event handlers, rendered as <video>
elements. Everything else — speaker bias, pagination, pinned tiles — is layout on top of this.
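As one small piece of that layout layer, sizing the grid for N tiles usually starts from a near-square arrangement. This is our own sketch, not an SDK helper:

```typescript
// Near-square grid: columns = ceil(sqrt(n)), rows fill from there.
function gridDims(tileCount: number): { cols: number; rows: number } {
  if (tileCount <= 0) return { cols: 0, rows: 0 };
  const cols = Math.ceil(Math.sqrt(tileCount));
  const rows = Math.ceil(tileCount / cols);
  return { cols, rows };
}
```

Feed the result into a CSS grid (e.g. `grid-template-columns: repeat(${cols}, 1fr)`) and recompute whenever the peers array from wireParticipantGrid changes length.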
6. Screen share
live.publishScreen() opens the browser's screen-picker and publishes the chosen surface. It
returns an array of TrackView — one for the video, optionally one for system audio.
frameRate: 15 is the right default for slides and demos:
async function startScreenShare(live: LiveStream): Promise<MediaStream> {
const tracks = await live.publishScreen({ frameRate: 15 });
const stream = new MediaStream();
const trackIds: string[] = [];
for (const t of tracks) {
if (t.mediaStreamTrack) stream.addTrack(t.mediaStreamTrack);
trackIds.push(t.id);
// The browser's own "Stop sharing" pill ends the track directly —
// listen for it so your UI collapses in sync.
t.mediaStreamTrack?.addEventListener('ended', () => stopScreenShare(live, trackIds), {
once: true,
});
}
return stream;
}
async function stopScreenShare(live: LiveStream, trackIds: string[]) {
for (const id of trackIds) {
await live.room.stopPublishing(id);
}
}

Remote participants receive the screen track through the same track-subscribed event from
step 5 — your handler already routes source: 'screen-video' to the peer's screen stream.
7. In-call chat
Chat rides the room's data channel. Send with live.room.sendChat({ text }); receive by
listening for the chat-message event. The event payload is a ChatMessageView —
{ id, from, to?, text, at, encrypted }. Note sendChat does not echo your own message
back, so insert it locally when you send:
function wireChat(live: LiveStream, localId: string | null, onMessage: (m: ChatEntry) => void) {
live.room.on('chat-message', (msg) => {
const owner = live.room.participants.get(msg.from);
onMessage({
id: msg.id,
authorId: msg.from,
authorName: owner?.displayName ?? owner?.identity ?? msg.from.slice(0, 8),
body: msg.text,
at: msg.at,
mine: msg.from === localId,
});
});
}
function sendChat(live: LiveStream, text: string) {
if (!text.trim()) return;
live.room.sendChat({ text });
// Optimistically render our own message — the server won't echo it.
}
interface ChatEntry {
id: string;
authorId: string;
authorName: string;
body: string;
at: string;
mine: boolean;
}

8. Recording
Recording is server-side: the SFU captures the call and a worker compresses it. The host
triggers it with live.room.record(). The canonical option is compose — 'tracks' (one file
per participant per kind), 'grid' (evenly-tiled), or 'speaker' (active-speaker switcher):
async function startRecording(live: LiveStream) {
const recording = await live.room.record({ compose: 'speaker' });
// recording.id — surface it in your UI / store it for later.
return recording.id;
}
async function stopRecording(live: LiveStream) {
await live.room.stopRecording();
}

room.record() with no arguments defaults to recording everyone, with the 'tracks' compose mode and mp4_av1 output.
record() vs startRecording(). The canonical method is room.record({ compose }). room.startRecording({ layout }) is the pre-1.0 alias — still supported, layout maps onto compose — so older code that predates the rename keeps working. New code should use room.record(). See the Recordings guide.
Recording is gated by the record capability on the token (step 2). A UI-only "host" boolean
won't do — the SFU rejects the request if the cap is absent. Mint host tokens with record in
their caps.
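In the token route from step 2, that host check is a one-line branch when you build caps. A sketch reusing the capability strings from step 2 (the isHost flag is your app's own authorization logic, not an SDK concept):

```typescript
// Base capabilities every meeting participant gets (from step 2).
const BASE_CAPS = [
  'publish:camera',
  'publish:mic',
  'publish:screen',
  'subscribe:all',
  'chat:send',
];

// Hosts additionally get `record`, which the SFU enforces server-side.
function capsFor(isHost: boolean): string[] {
  return isHost ? [...BASE_CAPS, 'record'] : [...BASE_CAPS];
}
```

Then the token body becomes `caps: capsFor(await isHostOf(roomId, userId))`, with the host lookup living in your own data layer.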
9. Connection quality + reconnection
The SDK auto-reconnects on transient network loss (exponential backoff, ICE restart on resume). You don't have to do anything for recovery — but you should show it. Two events:
live.room.on('connection-state', (state) => {
// 'connecting' | 'connected' | 'reconnecting' | 'disconnected' | 'failed'
if (state === 'reconnecting') showReconnectingBanner();
if (state === 'connected') hideReconnectingBanner();
});
live.room.on('connection-quality', (participantId, quality) => {
// `quality.label` ∈ 'excellent' | 'good' | 'fair' | 'poor' | 'disconnected'
// For the local participant, `participantId` is 'local'.
if (participantId === 'local' || participantId === live.room.localId) {
updateLocalQualityIndicator(quality);
}
});

Render a small network indicator per tile from connection-quality, and a top-of-call banner
from connection-state. That's the whole resilience UI.
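If you want a concrete shape for that per-tile indicator, a plain label-to-bars mapping is enough. The label set is the one from the connection-quality comment above; the four-bar scale is our own UI choice:

```typescript
// The quality labels the SDK emits (per the connection-quality event).
type QualityLabel = 'excellent' | 'good' | 'fair' | 'poor' | 'disconnected';

// Our own UI scale: 0-4 filled bars per label.
const QUALITY_BARS: Record<QualityLabel, number> = {
  excellent: 4,
  good: 3,
  fair: 2,
  poor: 1,
  disconnected: 0,
};

function qualityBars(label: QualityLabel): number {
  return QUALITY_BARS[label];
}
```

Inside the connection-quality handler, store `qualityBars(quality.label)` per participant id and let each tile render that many filled bars.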
10. Clean teardown
live.leave() closes the signaling channel, tears down every peer connection, and stops your
local publications. Always call it — on a "Leave" button, and on unmount:
async function leave(live: LiveStream, localStream: MediaStream | null) {
await live.leave();
// `leave()` stops published tracks, but stop the preview stream's
// tracks too so the camera light goes off immediately.
localStream?.getTracks().forEach((t) => t.stop());
}

In React, wire it into a cleanup effect:
useEffect(() => {
return () => {
void live?.leave();
localStream?.getTracks().forEach((t) => t.stop());
};
}, [live, localStream]);

11. The less-boilerplate path: @levelchat/web-react
Steps 3–10 used the raw core so nothing was hidden. For a React app, @levelchat/web-react
collapses the connection lifecycle, the participant tracking, and the rendering into hooks and
components. Same WebRTC underneath — the React layer is purely ergonomic.
<LevelChatProvider> joins on mount and tears down on unmount. useParticipants() is the
roster, useLocalParticipant() gives you the publish helpers, useChat() is the message list,
and <ParticipantGrid> / <VideoTile> / <Chat> are the pre-styled surfaces:
'use client';
import { useEffect, useState } from 'react';
import {
LevelChatProvider,
ParticipantGrid,
Chat,
useLocalParticipant,
useParticipants,
useChat,
useRoom,
} from '@levelchat/web-react';
export default function RoomPage({ params }: { params: { id: string } }) {
const [token, setToken] = useState<string>();
useEffect(() => {
fetch(`/api/lc-token?room=${params.id}&name=Alice`)
.then((r) => r.json())
.then((d) => setToken(d.token));
}, [params.id]);
if (!token) return <p>Connecting…</p>;
// `autoJoin` takes the same options as `client.joinLive` — pass the
// token your server minted. The provider joins on mount, leaves on unmount.
return (
<LevelChatProvider autoJoin={{ token }}>
<ParticipantGrid />
<Controls />
<RoomChat />
</LevelChatProvider>
);
}
function Controls() {
// `useLocalParticipant` returns the publish helpers, scoped to the
// local participant. They no-op until the provider has joined.
const { publishCamera, publishMic, publishScreen } = useLocalParticipant();
return (
<div className="controls">
<button onClick={() => publishCamera({ resolution: '720p' })}>Camera</button>
<button onClick={() => publishMic()}>Mic</button>
<button onClick={() => publishScreen({ frameRate: 15 })}>Share screen</button>
</div>
);
}
function RoomChat() {
const room = useRoom();
const messages = useChat(); // append-only ChatMessageView[]
return <Chat messages={messages} onSend={(text) => room?.sendChat({ text })} localName="Alice" />;
}

Recording, screen share, and connection events all work the same way through useRoom() —
useRoom() returns the same Room object you wired by hand in steps 5–10, so
room.record(...), room.on('connection-state', ...) etc. are all still available when you
need to drop below the components.
What you built
A complete meeting app: server-minted tokens, a joinLive connection, published camera + mic
with device selection and mute, a live participant grid driven by track events, screen share,
in-call chat, host-only recording, connection-quality UI, and clean teardown — the exact shape
of LevelChat's own production Meet client, built only with the public SDKs.
Where to go deeper:
- How LevelChat works — the mental model: mesh vs SFU, the token model, the join lifecycle, and the full event surface.
- Rooms — room topology, roomType, lifecycle, and configuration.
- Recordings — compose modes, output formats, artifact storage.
- Live streaming — the 1-to-N broadcast topology and viewer scale.
- Web SDK reference — the full @levelchat/web API, error codes, E2EE, bundle.
- Components — every primitive in @levelchat/react-components with props.
- Webhooks — react to room.* and recording.* events server-side.