How LevelChat works
This page is the mental model you need before you read any SDK reference. It
describes what the client SDK actually does — sourced from the real
@levelchat/web implementation — so a developer (or a coding agent) can reason
about the system instead of pattern-matching snippets.
If you only read one section, read The lifecycle.
The objects
LevelChat has four objects. Every SDK — web, iOS, Android, React Native, Flutter — exposes the same four, with platform-idiomatic names.
| Object | What it is |
|---|---|
| Client | LevelChat — the entry point you construct once. Holds config (region, log level, telemetry). Hands back Room instances. |
| Room | One call. The unit of presence. You get a Room back from joinRoom() / joinLive(). It owns the connection, the participants, the events. |
| Participant | A person (or bot) in the room. There is one local participant — you — and zero or more remote participants. |
| Track | One stream of media: a camera, a microphone, or a screen share. Participants publish tracks; other participants subscribe to them. |
A Room is not the same thing as the server-side room resource you create
with the REST API. The REST resource (POST /v1/rooms) is the durable record —
its lifecycle (created → active → ended → archived) is covered in the
Rooms guide. The SDK Room object is your client-side
connection to that resource: it exists from joinRoom() until leave().
The token model
The SDK never holds your API key. The trust boundary is the room token — a short-lived JWT your server mints and hands to the client.
```
your server ──(API key)──▶ LevelChat token endpoint ──▶ room JWT (≤ 1 hour)
                                                              │
                  your client ◀───────────────────────────────┘
                       │
                       └──(room JWT)──▶ joinRoom({ token }) ──▶ signaling WebSocket upgrade
```

- The API key is server-only. It is `lc_pk_xxx.yyy` and it bypasses every quota and permission check — it must never reach a browser or app bundle. `client.issueToken()` exists as a dev convenience and throws unless an `apiKey` is set; in production you mint tokens on your backend.
- The room JWT is scoped. It encodes the room id, the participant identity, and a capability list (`publish:camera`, `subscribe:all`, `chat:send`, …). The signaling server enforces those capabilities authoritatively — a token without `publish:*` cannot publish even if the client tries.
- The JWT is short-lived. Default TTL is 600 seconds; the server caps it at 3600. The token only authorises the WebSocket upgrade — once connected, the session is independent of the token. A leaked token expires fast.
- The SDK reads the room id out of the JWT. It decodes (does not verify — verification is server-side only) the `room` / `aud` claim to know which signaling endpoint to dial. You never pass a room id to `joinRoom()` separately.
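The claim-reading step needs no SDK at all — a room JWT is a standard three-part base64url token, and reading a claim is just decoding the middle segment without any signature check, mirroring what the client does. A minimal sketch; the `room`-then-`aud` preference follows the text above and is otherwise an assumption:

```typescript
// Decode a JWT payload WITHOUT verifying it — client-side claim reading only.
// Signature verification stays on the server, as noted above.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const segments = token.split('.');
  if (segments.length !== 3) throw new Error('not a JWT');
  // The payload is the middle segment, base64url-encoded JSON.
  const json = Buffer.from(segments[1], 'base64url').toString('utf8');
  return JSON.parse(json);
}

// Read the room id the way described above: the `room` claim, falling
// back to `aud` (which claim wins is an assumption for illustration).
function roomIdFromToken(token: string): string {
  const claims = decodeJwtPayload(token);
  const room = (claims.room ?? claims.aud) as string | undefined;
  if (!room) throw new Error('token has no room/aud claim');
  return room;
}
```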
See License vs subscription for how tokens relate to your commercial plan, and the Rooms guide for the full capability list.
The lifecycle
Every LevelChat session is the same four steps, in the same order:
```
token ──▶ join ──▶ publish ──▶ subscribe
 (1)      (2)       (3)          (4)
```

1. Token

Your server mints a room JWT (see the token model) and returns it to the client. The client does no auth of its own.
2. Join
```js
const lc = new LevelChat({ logLevel: 'info' });
const room = await lc.joinRoom({ token });
```

`joinRoom()` resolves the signaling endpoint (from the JWT's region, or a `signalingUrl` override), opens a signaling WebSocket, and sends a hello frame. The server replies with the current participant roster. `await room.ready` resolves once that roster is populated — await it before reading `room.participants` if you need an accurate first snapshot.
`joinLive()` is the same flow with a role (`viewer` or `broadcaster`): a viewer never acquires camera/mic and the SDK rejects `publish*` calls client-side, mirroring what the token's capabilities allow server-side.
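The client-side gate can be pictured as a plain capability check — a sketch of the idea, not the SDK's internals; the capability strings follow the token-model section, and the `publish:*` wildcard is inferred from it:

```typescript
type Role = 'viewer' | 'broadcaster';

// Decide client-side whether a publish call should even reach getUserMedia.
// Viewers are rejected outright; broadcasters still need the capability,
// which the signaling server also enforces authoritatively.
function canPublish(
  role: Role,
  capabilities: string[],
  kind: 'camera' | 'mic' | 'screen'
): boolean {
  if (role === 'viewer') return false; // rejected before any media acquisition
  return capabilities.some((c) => c === `publish:${kind}` || c === 'publish:*');
}
```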
3. Publish
```js
const camera = await room.publishCamera({ resolution: '720p' });
await room.publishMic();
```

Publishing calls `getUserMedia` (the browser's permission prompt fires here), creates a local Track, and adds it to the connection. Camera publishes use simulcast by default — the SDK encodes three quality layers (low / mid / high) so each subscriber can receive the layer their bandwidth supports. `publishScreen()` returns an array of tracks (the captured surface, plus system audio if `withAudio` is set).
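The layer-per-subscriber idea is easy to sketch. What the SDK fixes is the three-layer low / mid / high ladder; the bitrate thresholds below are made-up illustration values, and the real selection is adaptive rather than a static lookup:

```typescript
type Layer = 'low' | 'mid' | 'high';

// Pick the highest simulcast layer an estimated downlink can carry.
// Thresholds are illustrative only — NOT SDK constants.
function pickLayer(downlinkKbps: number): Layer {
  if (downlinkKbps >= 1500) return 'high'; // roughly the 720p layer
  if (downlinkKbps >= 500) return 'mid';
  return 'low';
}
```

Because all three layers are always encoded by the publisher, a constrained subscriber degrades alone instead of dragging the whole room down.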
4. Subscribe
You do not poll for remote media. The SDK emits a track-subscribed event
when a remote track is ready to render:
```js
room.on('track-subscribed', (track) => {
  if (!track.mediaStreamTrack) return;
  const el = document.createElement('video');
  el.srcObject = new MediaStream([track.mediaStreamTrack]);
  el.autoplay = true;
  el.playsInline = true;
  document.body.appendChild(el);
});
```

By default the SDK subscribes to every remote track automatically. Use `room.subscribe(participantId, kind)` / `room.unsubscribe(...)` for manual control, and `room.setPreferredQuality('low' | 'high' | 'auto')` to bias which simulcast layer you receive.
When you are done, `await room.leave()` tears down every peer connection, stops local tracks, and closes signaling.
Mesh vs SFU — where media flows
This is the part most developers get wrong about LevelChat, so it is worth being precise.
LevelChat rooms run as a peer-to-peer mesh by default. When you join, the
SDK holds one RTCPeerConnection per remote participant. Your media flows
directly to each peer; there is no media server in the path. This keeps
latency lowest and is the right topology for small calls (1:1 and small
meetings).
- The mesh is built lazily: a peer connection is created when a participant joins (or when their first offer arrives).
- Glare is avoided deterministically — for each pair, the participant whose id sorts lower lexicographically is the offerer. Both sides reach the same conclusion with no coordination round-trip. This is the W3C perfect-negotiation pattern.
- Mesh cost is `O(n²)` connections across the room and `O(n)` upstreams per participant — fine for a handful of peers, not for a webinar.
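Both the glare rule and the mesh-cost arithmetic are small enough to state as code — a sketch of the logic described above, with hypothetical participant ids:

```typescript
// Deterministic offerer selection: for each pair, the lexicographically
// lower id makes the offer. Both peers compute this independently, so no
// coordination round-trip is needed.
function isOfferer(localId: string, remoteId: string): boolean {
  return localId < remoteId;
}

// Mesh cost: n participants -> (n - 1) upstreams each,
// and n * (n - 1) / 2 peer connections across the room.
function meshConnections(n: number): { perParticipant: number; total: number } {
  return { perParticipant: n - 1, total: (n * (n - 1)) / 2 };
}
```

For a 4-person call that is 6 peer connections in total; for a 50-person webinar it would be 1,225 — which is why large rooms go through the SFU instead.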
The SFU path engages when a room needs a media server. The SDK lazily
constructs an SFU client and publishes the local tracks to a mediasoup router.
Today this is triggered by room.record() (the recording service subscribes
to the router via a server-side transport) and is the path used for
large/broadcast rooms. When the SFU is engaged, mesh peer connections continue
to operate in parallel — both paths share the same local publications.
The roomType hint you pass to joinLive() ('1to1', 'meeting', 'live',
'webinar') tells the SDK and server which topology the room is optimised for.
It does not gate capabilities at the client — the signaling server is the
authority on room policy.
What the signaling channel does
The signaling WebSocket is the control plane. It carries:
- the `hello` / roster handshake on join,
- SDP offers/answers and ICE candidates for the mesh,
- subscribe / unsubscribe / preferred-layer requests,
- chat messages and reactions (relayed, not peer-to-peer),
- recording start/stop RPCs,
- participant join/leave and active-speaker notifications.
It never carries media. Media always flows over the WebRTC peer connections (mesh) or through the SFU. If the signaling socket drops, the SDK reconnects with exponential backoff (250 ms → 8 s, ±25% jitter) and restarts ICE on the existing peer connections — media can survive a signaling blip.
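The reconnect schedule described above (250 ms doubling to an 8 s cap, ±25 % jitter) can be sketched as a single function. The constants come from the text; the exact curve inside the SDK may differ in detail:

```typescript
// Exponential backoff with jitter: 250ms * 2^attempt, capped at 8s, ±25%.
// `rand` is injectable so the jitter can be pinned in tests.
function reconnectDelayMs(attempt: number, rand: () => number = Math.random): number {
  const base = Math.min(250 * 2 ** attempt, 8000);
  const jitter = (rand() * 2 - 1) * 0.25; // uniform in [-0.25, +0.25]
  return base * (1 + jitter);
}
```

Jitter matters here: if every client in a room retried on the same schedule after a signaling outage, they would all hit the server at the same instant.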
The event surface
A Room is an event emitter. You drive your UI from its events rather than
polling. The full typed event map (RoomEvents) is documented in the
Web SDK reference; the events you will use most:
| Event | Fires when |
|---|---|
| `participant-joined` | A remote participant joins. |
| `participant-left` | A remote participant leaves (with an optional reason). |
| `track-published` | A remote participant publishes a track (before you subscribe). |
| `track-subscribed` | A remote track is ready to render — attach its `mediaStreamTrack`. |
| `track-unsubscribed` | A remote track went away — detach it. |
| `connection-state` | The connection state changed: `connecting` / `connected` / `reconnecting` / `disconnected` / `failed`. |
| `active-speaker` | The dominant speaker changed. |
| `connection-quality` | A 5-band quality score (excellent … poor) transitioned. |
| `chat-message` | A chat message arrived. |
| `reaction` | A reaction arrived. |
| `error` | An error reached the top of the SDK stack. |
Every platform SDK publishes the same event names (adapted to the platform's idiom — see the SDK parity matrix).
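Driving UI from events rather than polling usually reduces to a small reducer over room state. A sketch with a plain `Map` standing in for your UI state — the event names match the table above, but the payload shapes here are assumptions:

```typescript
interface RemoteParticipant {
  id: string;
  name?: string;
}

// Minimal roster state driven purely by the two membership events.
const roster = new Map<string, RemoteParticipant>();

function onParticipantJoined(p: RemoteParticipant): void {
  roster.set(p.id, p);
}

function onParticipantLeft(p: { id: string; reason?: string }): void {
  roster.delete(p.id);
}

// Wiring, assuming a connected `room`:
// room.on('participant-joined', onParticipantJoined);
// room.on('participant-left', onParticipantLeft);
```

Because the server sends the full roster in the join handshake and deltas afterwards, this reducer plus `room.participants` at `room.ready` is enough to keep a participant list accurate.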
What the SDK handles for you
You do not write code for any of the following — they are inside the SDK:
- ICE / NAT traversal. A public STUN server ships by default; pass `iceServers` to add TURN for symmetric-NAT clients.
- Glare avoidance. Deterministic offerer selection per peer pair.
- Reconnection. Exponential backoff with jitter on signaling loss, plus ICE restart on resume.
- Simulcast. Three layers encoded automatically; the SFU or peer picks the layer per subscriber.
- Codec negotiation. The SDK probes `RTCRtpSender` capabilities and prefers AV1 ▸ VP9 ▸ H.264 unless you pin a codec.
- Quality scoring. A weighted RTT/jitter/loss scorer emits `connection-quality` transitions.
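The codec-preference step above amounts to sorting whatever the browser reports (via the standard `RTCRtpSender.getCapabilities` API) against a fixed preference list. A sketch — the AV1 ▸ VP9 ▸ H.264 order comes from the list above; the MIME-type strings are the standard WebRTC ones:

```typescript
// The SDK's stated preference order, expressed as WebRTC MIME types.
const PREFERENCE = ['video/AV1', 'video/VP9', 'video/H264'];

// Order browser-reported codecs by preference. Codecs not in the list
// (e.g. video/VP8) sort after all preferred ones, keeping relative order
// (Array.prototype.sort is stable).
function orderCodecs(mimeTypes: string[]): string[] {
  const rank = (m: string) => {
    const i = PREFERENCE.indexOf(m);
    return i === -1 ? PREFERENCE.length : i;
  };
  return [...mimeTypes].sort((a, b) => rank(a) - rank(b));
}
```

In a browser, the resulting order would be applied with `RTCRtpTransceiver.setCodecPreferences()`; "pinning" a codec is just shrinking `PREFERENCE` to one entry.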
Where to go next
- Quickstart — the token → join → publish flow as runnable code.
- Build a meeting app — the full end-to-end tutorial.
- Web SDK reference — every class, method, event, and error code.
- Rooms guide — the server-side room resource and its lifecycle.
- License vs subscription — how tokens map to your plan.