
TreeDN: Multicast Content Delivery

Blockcast implements a TreeDN architecture as defined in RFC 9706 — a tree-based Content Delivery Network designed for live streaming to mass audiences. TreeDN combines Source-Specific Multicast (SSM) with Automatic Multicast Tunneling (AMT) overlays to deliver content efficiently across both multicast-enabled and unicast-only networks.

No player modification required. Standard DASH/HLS and MoQ players connect to the nearest edge node over standard HTTP or QUIC — the multicast-to-unicast conversion happens entirely within the network. Existing encoder pipelines, DRM workflows, and player SDKs remain untouched.

Hardware-accelerated scale. Each Juniper MX hardware AMT relay supports up to 500,000 concurrent tunnels at line-rate throughput up to 1 Tbps. With a target deployment of 10,000+ ISP-hosted routers, the aggregate relay fleet can serve billions of concurrent multicast viewers without requiring new infrastructure — leveraging routers already deployed in carrier networks.

Bandwidth economics. In traditional unicast CDN delivery, each concurrent viewer requires a dedicated stream — 1 million viewers at 5 Mbps consumes 5 Tbps of backbone bandwidth. With TreeDN multicast, a single stream is replicated at each relay hop, reducing backbone bandwidth by orders of magnitude. At scale, multicast delivery approaches a fixed cost per stream regardless of audience size, compared to the linear cost per viewer of unicast CDN.
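As a back-of-the-envelope comparison of the two models, the sketch below recomputes the figures from this paragraph; the number of replication points in the multicast case is an assumed, illustrative value, not a measured one.

```go
package main

import "fmt"

func main() {
	const (
		viewers     = 1_000_000 // concurrent viewers
		bitrateMbps = 5.0       // Mbps per viewer stream
		treeEdges   = 1_000     // assumed replication points in the multicast tree (illustrative)
	)

	// Unicast CDN: one dedicated stream per viewer crosses the backbone.
	unicastTbps := viewers * bitrateMbps / 1_000_000 // Mbps -> Tbps

	// TreeDN multicast: one copy of the stream per tree edge, independent of audience size.
	multicastGbps := treeEdges * bitrateMbps / 1_000 // Mbps -> Gbps

	fmt.Printf("unicast backbone:   %.1f Tbps\n", unicastTbps)   // 5.0 Tbps, matching the text
	fmt.Printf("multicast backbone: %.1f Gbps\n", multicastGbps) // 5.0 Gbps under the assumed tree size
}
```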

By adopting TreeDN, Blockcast enables Replication-as-a-Service (RaaS) across a tiered deployment model:

  • Tier 1 — Carrier-grade: ISP-deployed Juniper MX hardware relays with SLA-backed anycast infrastructure.

  • Tier 2 — Professionally operated: Datacenter-hosted RELAY nodes running software AMT (amtr) and edge caches.

  • Tier 3 — Community edge: DePIN-incentivized BEACON nodes extending coverage to the deep network edge — homes, venues, and mobile devices. These are leaf nodes only; they do not carry transit traffic or serve repairs.


Layered Architecture

The Blockcast TreeDN architecture is organized into three planes. Each plane has distinct responsibilities and communicates with adjacent planes through well-defined interfaces.


RFC 9706 Terminology Mapping

RFC 9706 Concept | Blockcast Component | Description
Content/Multicast Source | CAST Node | FFmpeg tee muxer outputs CMAF HTTP to MAHP Sender, MMTP/QUIC to MoQ Relay, and MMTP SSM to multicast. Orchestrated by the cast binary with RRULE scheduling and service303 health reporting.
Native Multicast Router | PIM-SSM Infrastructure | Forwards multicast within enabled network segments
AMT Relay | RELAY Node (AMT function) | Tunnels multicast to unicast-only networks (hardware or software)
AMT Gateway | BEACON Node | Receives tunneled multicast at the deep edge
Native Receiver | BEACON Node (native mode) | Directly joins SSM groups on multicast-enabled networks
RaaS Provider | Tiered Operators | Tier 1: ISP carrier-grade (Juniper MX); Tier 2: datacenter RELAY operators; Tier 3: DePIN community BEACON nodes


Control Plane

The control plane orchestrates the entire TreeDN delivery network — managing node lifecycle, routing decisions, and multi-CDN interconnection.

CDN Controller

The CDN Controller extends Magma orc8r to manage the distributed network of CAST, RELAY, and BEACON gateways. It provides:

  • Gateway registration and management: Nodes bootstrap with mTLS certificates and register as managed gateways, each running blockcastd as the local orchestrator.

  • Dynamic service management: Services (cache, multicast sender, AMT relay, traffic router, MoQ relay) are enabled or disabled per-node via cloud-managed configuration (mconfig) without requiring restarts.

  • Cloud services: Three specialized controllers — CDN (network orchestration), Beacon (peer discovery and coordination), and Capacity Map (resource allocation and load balancing).

Traffic Router

The Traffic Router directs client requests to the optimal edge node using three complementary routing modes:

DNS Routing (standard CDN mode)

Returns A/AAAA records for the nearest healthy RELAY or BEACON cache. The client resolves a hostname (e.g., cdn.example.com) and receives the IP of the best edge node based on geolocation, consistent hashing, and Health Protocol state. This is the primary routing mode for all standard CDN traffic.
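A minimal sketch of the consistent-hashing step of that selection, assuming hypothetical edge hostnames; the geolocation and health filtering the Traffic Router also applies are omitted.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// pickEdge maps a request key (for example, client subnet plus content path) onto a
// ring of healthy edge nodes, so cache assignments stay stable as nodes come and go.
func pickEdge(key string, nodes []string) string {
	type point struct {
		hash uint32
		node string
	}
	var ring []point
	for _, n := range nodes {
		// A few virtual points per node smooth the distribution around the ring.
		for v := 0; v < 4; v++ {
			ring = append(ring, point{crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", n, v))), n})
		}
	}
	sort.Slice(ring, func(i, j int) bool { return ring[i].hash < ring[j].hash })

	h := crc32.ChecksumIEEE([]byte(key))
	for _, p := range ring {
		if p.hash >= h {
			return p.node
		}
	}
	return ring[0].node // wrap around the ring
}

func main() {
	edges := []string{"relay-ams1.example.net", "relay-fra1.example.net", "beacon-home7.example.net"}
	fmt.Println(pickEdge("203.0.113.0/24|/live/channel1", edges))
}
```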

HTTP Routing (redirect mode)

Returns HTTP 302 redirects to the best edge node. This mode enables operation on unprivileged ports (above 1023) on restrictive platforms like Android or ChromeOS, making personal BEACON caches at home practical — the redirect URL includes the custom port. HTTP routing is also used for CDNi request delegation between upstream and downstream CDNs (RRI interface), with content provider signatures per RFC 9246 and RFC 9421.
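A minimal sketch of redirect-mode routing, assuming a placeholder selection function and a hypothetical home BEACON listening on an unprivileged port.

```go
package main

import (
	"log"
	"net/http"
)

// bestEdge stands in for Traffic Router's real selection logic
// (geolocation, consistent hashing, Health Protocol state).
func bestEdge(r *http.Request) string {
	return "beacon-home7.example.net:8443" // hypothetical home BEACON on an unprivileged port
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		target := "https://" + bestEdge(r) + r.URL.RequestURI()
		// 302 Found: the player retries the same path against the chosen edge node,
		// using the custom port carried in the redirect URL.
		http.Redirect(w, r, target, http.StatusFound)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```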

DRIAD Routing (RFC 8777 — multicast relay discovery)

Used exclusively by MAHP/MoQ receivers (on RELAYs and BEACONs) to discover the topologically nearest AMT relay for multicast stream acquisition. DRIAD works by reversing the multicast source IP and querying DNS (e.g., 10.95.25.69.amt.in-addr.arpa). This routing mode operates independently of HTTP/DNS content routing — it resolves the multicast data path, not the client request path.

Additional capabilities:

  • CDNi delegation: Request routing can be delegated via the CDN Interconnection Request Routing Interface, with content provider signatures per RFC 9246 and RFC 9421.

  • Multicast source resolution: Enables MAHP gateways in BEACONs and RELAYs to resolve the address of the multicast source or the fallback AMT relay using coverage information and client addressing.

Traffic Monitor

Continuously polls the health and performance of all network nodes:

  • Stat polling: Collects cache throughput, error rates, and delivery service statistics.

  • Health polling: Heartbeat checks to detect node availability in real time.

  • AMT relay monitoring: Queries the unified amt-astats service (/_astats endpoint), which aggregates health and tunnel state from both hardware (Juniper MX) and software (Linux amtr) relays.

  • State feed: Provides the combined health state to Traffic Router for routing decisions.

CDNi Interfaces

Blockcast implements the SVTA Open Caching API v2.1 for multi-CDN interconnection:

Interface | Purpose
FCI (Footprint & Capabilities) | Advertises network reach by ASN, country code, or H3 geospatial index
CI (Configuration) | Receives content provider configuration: origins, ACLs, cache rules, capacity limits
RRI (Request Routing) | Delegates DNS/HTTP routing decisions between upstream and downstream CDNs
Trigger | Content prepositioning, cache invalidation, and status tracking
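To make the FCI direction concrete, here is a sketch of a footprint-and-capabilities advertisement. The struct layout and field names follow the general CDNi FCI object model and are assumptions for illustration, not the SVTA OC-API v2.1 schema; the ASN and country codes are examples.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Footprint and Capability are illustrative shapes only; the authoritative
// schema is defined by the SVTA Open Caching API, not reproduced here.
type Footprint struct {
	Type  string   `json:"footprint-type"` // e.g. "asn", "countrycode", or an H3 index type
	Value []string `json:"footprint-value"`
}

type Capability struct {
	Type       string      `json:"capability-type"`
	Value      interface{} `json:"capability-value"`
	Footprints []Footprint `json:"footprints"`
}

func main() {
	adv := []Capability{{
		Type:  "FCI.DeliveryProtocol",
		Value: []string{"https", "moq"},
		Footprints: []Footprint{
			{Type: "asn", Value: []string{"as64500"}},          // documentation ASN
			{Type: "countrycode", Value: []string{"de", "nl"}}, // example coverage
		},
	}}
	out, _ := json.MarshalIndent(adv, "", "  ")
	fmt.Println(string(out))
}
```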


Data Plane: Multicast Backbone

The data plane implements the TreeDN multicast tree — from content ingestion at CAST nodes through relay distribution to deep-edge BEACON receivers.

CAST Sender (Content Source)

CAST nodes are the root of the TreeDN multicast tree. They ingest content from upstream CDNs or origin servers and deliver it downstream through three parallel output paths.

FFmpeg Encoder (tee output)

A single FFmpeg process encodes source video (H.264/H.265 + AAC/Opus) and uses a tee muxer to output to three destinations simultaneously:

  1. CMAF HTTP → local MAHP Sender (Caddy) for ROUTE/FLUTE encoding and delivery to MAHP Receivers

  2. MMTP/QUIC → downstream MoQ Relay via WebTransport (moq_mmt muxer, IETF MoQ draft)

  3. MMTP SSM → SSM multicast groups (232.x.x.x) with RaptorQ FEC (k=32, r=8) for distribution via AMT Relay to players with built-in AMT gateway (IWA browser extension, mobile/TV SDK)

All three outputs carry the same content from the same FFmpeg process, ensuring consistency across delivery paths. Native multicast is preferred where available; AMT tunneling provides the highest fan-out efficiency for direct-to-player delivery at scale.

MAHP Sender — ROUTE/FLUTE (high-throughput broadcast)

A Caddy-based sender that receives CMAF HTTP segments (DASH/HLS) from the co-located FFmpeg and re-encodes them as ROUTE/FLUTE (RFC 9223) for delivery to downstream MAHP Receivers:

  • ROUTE encoding: Encodes CMAF segments as ROUTE Transport Objects with auto-incremented TOIs, FEC protection (RaptorQ), and TESLA authentication.

  • Unicast push: Pushes ROUTE objects to downstream MAHP Receivers over HTTP.

  • File delivery: Pre-positioned content delivery via File Delivery Table (FDT) metadata.

  • Signaling: Lower Layer Signaling (LLS) on 224.0.23.60:4937 for ATSC 3.0 service discovery, and Service Level Signaling (SLS) for manifests on TSI 0.

Supports ATSC 3.0, DVB-MABR, and 3GPP MBMS standards.

MoQ path (low-latency streaming)

FFmpeg publishes directly to the downstream MoQ Relay using the moq_mmt muxer — wrapping fMP4 fragments in MMTP packets and delivering them over WebTransport/QUIC (IETF MoQ draft). There is no intermediate MoQ Server on the CAST node; FFmpeg acts as the MoQ publisher. The MoQ Relay (hang-mmt-fec) receives the MMTP/QUIC stream and forwards it to downstream subscribers and child relays. This path achieves sub-500 ms glass-to-glass latency.

  • Dual-stack relay: A Rust implementation supporting both moqtail-00 (Draft-14 MoQ) and moqlite-00 protocols.

  • Track registry: Cross-protocol forwarding with FEC metadata translation.

Cast Orchestrator (publisher lifecycle)

The cast binary orchestrates publisher processes on each CAST node. It discovers delivery service configuration from Traffic Ops (origins, relay endpoints, multicast groups, FEC parameters) and manages the full publisher lifecycle — including scheduled events via RRULE recurrence rules (RFC 5545), automatic failover with configurable watchdog restarts, and per-occurrence health reporting to the CDN Controller via service303.

RELAY Node (Mid-Tier)

RELAY nodes form the middle tier of the TreeDN tree. RELAYs can be chained hierarchically — a parent RELAY feeds child RELAYs, which in turn feed BEACONs — forming a multi-level distribution tree that scales coverage without overloading any single node.

Each RELAY exposes an ALPN ingress that routes incoming client connections to the appropriate backend:

  • HTTP ingress (ALPN h2, http/1.1): Unmodified DASH/HLS players connect here and are served from the encrypted cache. The cache is populated by the MAHP Receiver (see below), so the player is completely unaware of the multicast transport.

  • MoQ ingress (ALPN moq-00): Unmodified MoQ players connect here and are served by the MoQ Relay, which is a pass-through relay — it forwards MoQ tracks directly without caching.

MAHP Receiver + Repair Server

The MAHP Receiver is the multicast-to-HTTP bridge on each RELAY. It:

  • Receives multicast streams via IGMPv3 SSM join, applies FEC decoding, and pushes reconstructed HTTP objects (DASH/HLS segments) into the local encrypted cache.

  • Operates a FEC repair server: Downstream RELAYs and BEACONs can request missing FEC symbols via unicast HTTP (a client-side request sketch follows this list). This is a key distinction — RELAYs serve repairs, BEACONs do not.

  • Retransmits multicast to downstream nodes within its coverage area.
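A minimal client-side sketch of that repair request, as seen from a downstream node. The URL layout (session, transport object, symbol) is a hypothetical convention for illustration; only the repair port (8081) is taken from the interface summary later on this page.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchRepairSymbol asks an upstream RELAY's repair server for one missing
// FEC symbol over unicast HTTP. The path scheme is a placeholder.
func fetchRepairSymbol(relay string, session, toi, esi int) ([]byte, error) {
	url := fmt.Sprintf("http://%s:8081/repair/%d/%d/%d", relay, session, toi, esi)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("repair server returned %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	sym, err := fetchRepairSymbol("relay-fra1.example.net", 1, 42, 7)
	if err != nil {
		fmt.Println("repair failed, waiting for the next FEC block:", err)
		return
	}
	fmt.Printf("recovered a %d-byte repair symbol\n", len(sym))
}
```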

MoQ Relay

The MoQ Relay forwards MoQ tracks from CAST or parent RELAYs to downstream subscribers. It does not use the cache — MoQ is a real-time pass-through protocol with no store-and-forward semantics.

Additional RELAY capabilities:

  • Encrypted cache storage: All cached objects are encrypted at rest using per-delivery-service keys derived from CDN Controller configuration. Node operators cannot read cached content even with physical access. Standard DRM (Widevine, PlayReady, FairPlay) is preserved end-to-end — the cache stores already-encrypted DRM segments, and cache-layer encryption adds a second envelope. Key rotation is managed by the CDN Controller and distributed via SyncRPC.

  • Bridges multicast islands: Acquires streams from CAST nodes or parent RELAYs via AMT tunneling when native multicast peering is unavailable, then re-multicasts locally.

  • Serves unicast fallback: When a player connects via HTTP or MoQ, the RELAY serves content regardless of whether it was acquired via multicast or unicast origin fetch.

Graceful degradation and failover

RELAYs implement hitless failover to protect viewer experience during disruptions:

  • Multicast → unicast fallback: If multicast reception fails (relay down, network path loss), the MAHP Receiver automatically falls back to unicast HTTP origin fetch within the same session. The player sees no interruption — the cache continues serving from whichever source is active.

  • Relay failover: Traffic Router continuously monitors RELAY health. If a RELAY becomes unhealthy, new client requests are routed to the next-best RELAY via DNS TTL expiry or HTTP 302 re-routing. Active sessions drain gracefully.

  • BEACON resilience: If a BEACON's upstream RELAY fails, the BEACON performs a new DRIAD lookup and re-joins via an alternate relay. Cached content continues serving during the transition.

  • MoQ path independence: The MoQ relay path (QUIC) operates independently of the MAHP/multicast path. If multicast fails, MoQ viewers are unaffected, and vice versa. The HA simulcast at CAST ensures both paths receive content independently.

AMT Relay

AMT Relays (RFC 7450) are the border devices that bridge multicast traffic from the SSM-enabled backbone to unicast-only networks. Blockcast supports two deployment models:

Hardware: Juniper MX Routers

Juniper MX routers are the primary relay infrastructure for Blockcast — deployed inside ISP networks where they perform AMT relay functions in router silicon at line rate. Because ISPs already operate MX routers for peering and transit, Blockcast leverages existing infrastructure with a configuration change rather than new hardware deployment.

Capability | Specification
Concurrent AMT tunnels | Up to 500,000 per router
Throughput | Up to 1 Tbps line-rate forwarding
AMT implementation | Native in Junos firmware (no software overlay)
Telemetry export | gNMI (port 32767) for health and tunnel state
Accounting export | jFlow/IPFIX (port 4739) for per-flow CDNi billing
Relay discovery | Anycast addressing — multiple routers share a prefix for geographic load balancing
Target deployment | 10,000+ routers across Tier 1 and Tier 2 ISPs

ISP deployment model: ISPs enable AMT relay on existing MX routers and announce an anycast prefix for relay discovery. Blockcast's amt-astats monitoring agent connects via gNMI and jFlow to integrate the hardware relay into the control plane — no ISP-side software deployment required. The ISP benefits from reduced peering traffic (multicast replaces N unicast streams with 1) and optional RaaS revenue sharing.

Software: Linux Kernel AMT (amtr)

For operators without Juniper hardware, Blockcast provides a containerized software relay:

  • Linux kernel AMT module (net/ipv4/amt.c, Linux 5.12+) with full RFC 7450 state machine.

  • eBPF tunnel tracking: XDP hooks for kernel-level per-gateway packet inspection with ring buffer events.

  • Socket-based fallback tracker parsing AMT packet types in userspace.

Software relays are suitable for datacenter and cloud deployments where hardware relay access is unavailable. Performance scales with host CPU and NIC capabilities — a single 32-core server with 25GbE can support approximately 50,000 concurrent tunnels.

Unified Monitoring: amt-astats

Both relay types are monitored through a single amt-astats service that provides a unified interface for the control plane:

  • /_astats HTTP (port 8080): Single endpoint queried by Traffic Monitor — aggregates health and tunnel state from both hardware and software relays.

  • gNMI Client: Connects to Juniper MX hardware relays for telemetry and tunnel state.

  • jFlow Collector: Receives IPFIX flows from Juniper MX for CDNi accounting and billing.

  • eBPF Tracker: Inspects kernel AMT module traffic on software relays via XDP hooks.

  • Service303 gRPC (port 9191): Orc8r health monitoring and metrics export.

  • CDNi Log Export: Streams tunnel lifecycle events to CDN Controller via gRPC (cdni_log.proto).
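To illustrate what a unified endpoint like this can look like, here is a sketch of an /_astats handler that merges hardware and software relay state into one response. The JSON field names and sample values are assumptions for illustration; the real schema belongs to the amt-astats service.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// relayStats is an assumed response shape; in the real service these values
// come from the gNMI client (hardware) or the eBPF/XDP tracker (software).
type relayStats struct {
	Relay          string `json:"relay"`
	RelayType      string `json:"relay_type"` // "juniper-mx" or "amtr"
	Healthy        bool   `json:"healthy"`
	Tunnels        int    `json:"tunnels"`
	ThroughputMbps int    `json:"throughput_mbps"`
}

func main() {
	http.HandleFunc("/_astats", func(w http.ResponseWriter, r *http.Request) {
		stats := []relayStats{
			{Relay: "mx-ams1.example.net", RelayType: "juniper-mx", Healthy: true, Tunnels: 120_000, ThroughputMbps: 400_000},
			{Relay: "amtr-fra1.example.net", RelayType: "amtr", Healthy: true, Tunnels: 18_500, ThroughputMbps: 9_200},
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(stats)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```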


DRIAD Relay Discovery (RFC 8777)

As described in the Traffic Router's DRIAD routing mode, receivers must discover the topologically closest AMT relay before joining a multicast stream. The full DRIAD procedure:

  1. Reverse the source IP octets: 69.25.95.10 → 10.95.25.69

  2. Build DNS query: 10.95.25.69.amt.in-addr.arpa

  3. Resolve via DNS (or DNS-over-HTTPS in browsers): Returns A/AAAA records pointing to the nearest AMT relay.

  4. Select optimal relay: Measure RTT to candidates, prefer lowest-latency relay with available capacity.

  5. Cache result: 5-minute TTL to avoid repeated lookups.

DRIAD supports anycast relay addresses, enabling natural load balancing across relay deployments. When multiple relays share the same anycast prefix, DNS returns the topologically nearest.
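A sketch of steps 1 through 3, using the source address and query zone from the example above; RTT probing and the result cache are left out, and the lookup will only resolve inside a network that actually serves this zone.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// driadName builds the reverse-octet query name described above,
// e.g. source 69.25.95.10 -> "10.95.25.69.amt.in-addr.arpa".
func driadName(sourceIPv4 string) string {
	o := strings.Split(sourceIPv4, ".")
	return fmt.Sprintf("%s.%s.%s.%s.amt.in-addr.arpa", o[3], o[2], o[1], o[0])
}

func main() {
	name := driadName("69.25.95.10")
	fmt.Println("query:", name)

	// Resolve A/AAAA records for the relay name; with anycast relays the answer
	// is the topologically nearest instance.
	addrs, err := net.LookupIP(name)
	if err != nil {
		fmt.Println("no relay found:", err)
		return
	}
	for _, a := range addrs {
		fmt.Println("candidate AMT relay:", a)
	}
}
```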

Forward Error Correction & Authentication

All multicast streams are protected against packet loss and tampering:

FEC (Forward Error Correction)

Scheme | Use Case | Recovery
RaptorQ | Optimal for live video; rateless fountain code | >30% packet loss without retransmission
Reed-Solomon | Precise recovery for file delivery; MDS code | Exact symbol-count recovery
Cross-Session RaptorQ | Combines symbols across sessions for interleaved FEC | Enhanced protection for bursty loss

Authentication

  • TESLA (Timed Efficient Stream Loss-tolerant Authentication): Time-delayed key disclosure for ROUTE/FLUTE streams. Configurable keychain length, disclosure delay, and crypto algorithms (RSA, ECDSA).

  • ALTA (Asymmetric Loss-Tolerant Authentication): Ed25519 per-packet verification for MoQ-MMT streams. Integrated into the PIM multicast gateway.
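The sketch below shows only the per-packet Ed25519 verify primitive that ALTA builds on, using Go's standard library; the actual ALTA packet framing, key distribution, and integration with the PIM gateway are not shown.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// The key pair stands in for the stream publisher's signing key.
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	packet := []byte("one MMTP packet payload") // placeholder payload
	sig := ed25519.Sign(priv, packet)           // attached to each packet by the sender

	// Receiver side: every packet verifies independently, so losing one packet
	// never blocks authentication of the next (loss tolerance).
	if ed25519.Verify(pub, packet, sig) {
		fmt.Println("packet authenticated")
	} else {
		fmt.Println("packet rejected")
	}
}
```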


Client Layer: Receivers & Playback

The client layer represents the leaf nodes of the TreeDN tree — where content reaches end users.

The end-user client is an unmodified player. Standard HTTP players (DASH/HLS) and MoQ players connect to the nearest BEACON or RELAY over standard protocols — they are completely unaware of the multicast transport underneath. The multicast-to-unicast conversion happens entirely within the BEACON or RELAY.

Each BEACON runs two receiver services matched to the two sender protocols:

  • MAHP Receiver (ROUTE/FLUTE): Receives multicast, applies FEC decoding, and pushes reconstructed DASH/HLS segments into an encrypted local cache. Unmodified HTTP players are served from this cache.

  • MoQ Relay (MMTP): Pass-through relay for MoQ tracks over QUIC. Does not use the cache — MoQ is real-time with no store-and-forward semantics. Unmodified MoQ players connect directly.

Unlike RELAYs, BEACONs do not operate a FEC repair server — they are leaf nodes that only consume repairs from upstream RELAYs.

For users who want to extend multicast reception to the client device itself, optional IWA (browser) and Mobile SDK extensions provide a direct AMT/multicast data path — but this is an enhancement, not a requirement.

MAHP BEACON (ROUTE/FLUTE Receiver)

A Caddy-based multicast receiver that acts as a transparent reverse proxy, combining multicast reception with HTTP serving:

  • Multicast reception: Joins SSM groups via IGMPv3, receives ROUTE Transport Objects, applies FEC decoding.

  • HTTP cache proxy: Serves cached content to local clients as standard HTTP/DASH/HLS responses — unmodified players connect directly and are unaware of the multicast transport.

  • Encrypted cache storage: All cached objects are encrypted at rest, matching the RELAY encrypted cache model.

  • Unicast repair client: Requests missing FEC symbols from an upstream RELAY's repair server via HTTP GET when multicast packets are lost. Unlike RELAYs, BEACONs do not operate a repair server — they only consume repairs.

  • TESLA verification: Authenticates each packet using time-delayed key disclosure.

Deployment: Packaged as a container image, the MAHP BEACON runs on:

  • ISP deep-edge modems (prpl Foundation or compatible container runtimes)

  • Smart TVs and set-top boxes

  • Mobile phones (Android/iOS)

  • Home servers and mini-PCs

Configuration is pushed from the CDN Controller via blockcastd and can be updated without restarts.

MoQ BEACON (Browser & Mobile Receiver)

The MoQ BEACON serves unmodified MoQ players over standard QUIC/WebTransport. For clients that want direct multicast reception, the PIM Multicast Gateway provides optional extensions:

  • Chrome IWA (Isolated Web App): Optional in-browser gateway with Direct Sockets API access. Manages multicast subscriptions, AMT tunnel lifecycle, and DRIAD relay discovery — enabling direct multicast data path without a local BEACON.

  • Chrome Extension: Message router bridging the webpage and IWA, handling tab lifecycle and origin-based security.

  • WASM Modules: Rust-compiled AMT protocol (RFC 7450 state machine, IGMP/MLD report generation) and ALTA authentication (Ed25519 verification).

  • Mobile SDK: Optional native AMT gateway for Android and iOS applications — same direct multicast capability as the IWA, for native apps.

Transport Modes (when IWA or Mobile SDK is active):

Mode | Behavior | Latency | Throughput
Native | Direct Sockets API → OS multicast join | ~1 ms | 10 Gbps+
AMT Tunnel | DRIAD discovery → UDP tunnel via WASM | ~2 ms | 5 Gbps+
Auto (default) | Try native → fall back to tunnel if no packets in N seconds | Adaptive | Adaptive

Without the IWA or Mobile SDK, the client simply connects to the nearest BEACON or RELAY as a standard MoQ/HTTP player.

For browsers without IWA support (Firefox, Safari), the MoQ BEACON can bridge to non-IWA clients via:

  • mDNS advertisement (_moq-gateway._tcp.local): Local network IWA discovery

  • WebTransport bridge: Forwards multicast to secondary browsers over QUIC

  • Standalone Node.js gateway: Docker-deployable for headless environments


MMT-MoQ: Multicast Delivery Paths

MPEG Media Transport (MMT, ISO 23008-1) is the unifying container format across all delivery paths. MMT is already the native container for ATSC 3.0 broadcast (Americas, South Korea), ARIB STD-B60 (Japan), and 5G Multicast-Broadcast Services (3GPP MBS). By defining how MMT maps onto MoQ objects (draft-ramadan-moq-mmt), the same MMTP packets — identical bytes — can flow over any transport without transcoding.

This means content ingested once at a CAST node reaches every device class through its optimal delivery path:

Delivery Path | Physical Medium | Receiver | Requirements
ATSC 3.0 broadcast | Over-the-air RF spectrum | TV with ATSC 3.0 tuner | Tuner hardware (Samsung, LG, Sony 2020+)
5G MBS | Cellular broadcast (3GPP) | 5G phone/tablet | 5G modem + MBS middleware
PIM-SSM + IGMPv3 | ISP IP multicast | IWA browser, native app | Multicast-enabled ISP, UDP socket access
AMT unicast tunnel | UDP encapsulation (RFC 7450) | IWA browser, native app | Any network with UDP, DRIAD relay discovery
MoQ/QUIC unicast | WebTransport | Any browser, any app | None (universal fallback)

Over the Air: ATSC 3.0 (Broadcast TV)

ATSC 3.0 televisions receive MMTP streams natively over RF tuner hardware. These devices perform S-TSID parsing for FEC parameters and multicast group information, RaptorQ FEC decoding, and MMTP depacketization into MPU media segments — all in dedicated silicon.

MoQ integration happens at the gateway level, not on the TV itself. The MMT-MoQ draft defines bidirectional S-TSID ↔ MoQ Catalog conversion, so the same content is discoverable by both broadcast receivers (via S-TSID/SLT bootstrap signaling) and MoQ subscribers (via catalog JSON). An ATSC 3.0 broadcast station's S-TSID becomes a MoQ catalog, enabling a single content pipeline to serve both broadcast and IP-delivered audiences simultaneously.

Over the Air: 5G Multicast-Broadcast Services

3GPP MBS (TS 22.246, Release 17) defines multicast-broadcast at the 5G modem level. 5G MBS can carry the same MMTP streams as ATSC 3.0, with the modem handling delivery and the CAS (Conditional Access System) stack handling decryption independently.

CAS muting enables simultaneous broadcast: the same encrypted MMTP stream serves ATSC 3.0 tuners over RF and 5G MBS devices over cellular. Each receiver's CAS stack decrypts independently — the content provider encrypts once, and both physical layers deliver the same protected bytes.

On Android, the receive path is provided by the Qualcomm LTE Broadcast SDK or the open-source 5G-MAG middleware. The modem delivers MMTP packets directly to the application layer — same bytes, different physical medium. Combined with the DePIN token incentive model, 5G MBS receivers can earn rewards simply by tuning in to multicast streams, contributing to network offload.

Over ISP: PIM-SSM + IGMPv3

When an ISP has enabled PIM-SSM on their routers, clients can receive multicast natively — the most bandwidth-efficient delivery path after broadcast.

  1. The MoQ catalog includes SSM (S,G) endpoint addresses in its multicast extension field.

  2. A native client or IWA browser app issues an IGMPv3 join for the source-specific group (e.g., (10.0.0.1, 232.1.1.10)).

  3. PIM-SSM builds a shortest-path tree from the source through the ISP's routers to the subscriber's edge.

  4. MMTP packets replicate in router silicon at each hop — not in software on x86 servers.

  5. The client receives the identical MMTP packets that the CAST node produced.

SSM group allocation maps each ABR quality tier to a separate multicast group, so quality switches are IGMPv3 leave/join operations. FEC repair symbols are carried on dedicated groups. For browsers, this requires IWA with DirectSocket API (UDPSocket.joinGroup()). For native apps, Android provides MulticastSocket with MulticastLock, and iOS provides NWConnectionGroup (iOS 14+).
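A minimal sketch of the native join using the (S,G) pair from the example above. The UDP port and interface name are assumptions; the golang.org/x/net/ipv4 package issues the IGMPv3 source-specific join on the receiver's behalf.

```go
package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	// Listen on the group's UDP port; the (S,G) pair comes from the MoQ catalog.
	conn, err := net.ListenPacket("udp4", "0.0.0.0:5000") // port is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ifi, err := net.InterfaceByName("eth0") // receiving interface, deployment-specific
	if err != nil {
		log.Fatal(err)
	}

	p := ipv4.NewPacketConn(conn)
	group := &net.UDPAddr{IP: net.ParseIP("232.1.1.10")}
	source := &net.UDPAddr{IP: net.ParseIP("10.0.0.1")}

	// Source-specific join: the kernel emits an IGMPv3 membership report for (S,G).
	if err := p.JoinSourceSpecificGroup(ifi, group, source); err != nil {
		log.Fatal(err)
	}
	defer p.LeaveSourceSpecificGroup(ifi, group, source)

	buf := make([]byte, 1500)
	for {
		n, _, err := conn.ReadFrom(buf) // MMTP packets, identical to what the CAST node sent
		if err != nil {
			log.Fatal(err)
		}
		_ = buf[:n] // hand the packet to the MMTP depacketizer / FEC decoder here
	}
}
```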

Hardware-Offloaded AMT Unicast Tunnel

AMT (RFC 7450) is the pragmatic fallback when PIM-SSM is not available on the subscriber's ISP. AMT encapsulates multicast packets inside a UDP unicast tunnel:

  1. The client discovers the nearest AMT relay via DRIAD (RFC 8777) — a DNS reverse lookup on the multicast source IP.

  2. The client establishes an AMT tunnel (UDP port 2268) to the relay.

  3. The relay performs an IGMPv3 join on behalf of the client on its multicast-enabled upstream interface.

  4. MMTP packets arrive at the relay via native multicast, get encapsulated in UDP, and tunnel to the client.

The critical advantage is hardware offload: thousands of ISP edge routers (Juniper MX) already have AMT relay capability built into their routing ASICs. A single router performs the packet replication function that previously demanded dedicated CDN server complexes — up to 500,000 concurrent tunnels at line-rate throughput per device. The marginal cost is a configuration change, not new hardware.

This is the path shown in the architecture diagram: FFmpeg → MMTP SSM → AMT Relay → AMT tunnel → IWA/SDK Player.

Transport Hierarchy (Graceful Degradation)

Clients attempt transports in preference order, starting with the highest-bandwidth path and falling back gracefully:

Native clients (TV, mobile SDK):

Priority | Transport | Availability
1 | Native SSM (IGMPv3/MLDv2) | Multicast-enabled networks
2 | AMT over UDP (RFC 7450) | Any network with UDP
3 | MoQ over QUIC | Universal
4 | HTTP DASH/HLS | Universal fallback

Browser clients:

Priority | Transport | Availability
1 | SSM via IWA DirectSocket | IWA-enabled Chrome
2 | AMT via IWA DirectSocket | IWA-enabled Chrome
3 | MoQ over WebTransport | Chrome 97+, Firefox 115+
4 | HTTP DASH via MSE/fetch | Universal fallback

Every client starts at Priority 3 or 4 (unicast), receives the MoQ catalog which includes the multicast endpoint discovery extension, then promotes itself to the highest available path. The catalog itself is the discovery mechanism — no separate signaling infrastructure is needed.

For general consumer web applications, MoQ/WebTransport (Priority 3) is effectively the highest available transport. IWA DirectSocket (Priorities 1–2) is available to enterprise-managed Chrome, DePIN participants with token-incentivized IWA installation, and technical users who side-load the IWA.
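A sketch of that promotion logic: walk the priority list top-down and settle on the first transport that actually delivers packets. The probe functions here are placeholders standing in for real packet-arrival checks.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// transport is a stand-in for one delivery path (native SSM, AMT, MoQ, HTTP).
type transport struct {
	name  string
	probe func(timeout time.Duration) error // succeeds once media packets arrive
}

// promote returns the highest-priority path that produces packets within the
// probe window; the last entry is the universal fallback and wins otherwise.
func promote(paths []transport) string {
	for _, t := range paths {
		if err := t.probe(3 * time.Second); err == nil {
			return t.name
		}
	}
	return paths[len(paths)-1].name
}

func main() {
	noPackets := func(time.Duration) error { return errors.New("no packets") }
	ok := func(time.Duration) error { return nil }

	paths := []transport{
		{"native SSM (IGMPv3)", noPackets},
		{"AMT tunnel (UDP 2268)", noPackets},
		{"MoQ over QUIC", ok},
		{"HTTP DASH/HLS", ok},
	}
	fmt.Println("selected transport:", promote(paths))
}
```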

IWA Home Gateway

An IWA node running on a wired-connected device (Chromebox, desktop, mini-PC) can serve as a local multicast anchor, bridging the gap between ISP multicast and household wireless devices:

The IWA receives SSM or AMT multicast via DirectSocket over wired Ethernet — providing the stable, high-throughput link needed for sustained 4K/8K reception. It then operates as a local MoQ relay, redistributing content over WebTransport to wireless clients on the same network.

This provides several advantages:

  • Wired reception avoids Wi-Fi jitter and supports higher bitrates than wireless multicast (most consumer APs do not relay multicast).

  • A single multicast stream serves the entire household regardless of the number of local viewers.

  • Wireless clients use standard MoQ/WebTransport with no IWA requirement, no special permissions, and full browser compatibility.

  • Local FEC repair reduces retransmission round trips to the origin.


Multicast Publishing Flow

This flow shows how content enters the TreeDN tree — from a content provider configuring a stream through to the first multicast transmission.


Relay & Coverage Flow

This flow shows how RELAY nodes extend multicast coverage — acquiring streams from CAST nodes, bridging via AMT, and serving downstream nodes. RELAYs can be chained hierarchically (CAST → parent RELAY → child RELAY → BEACON). Each RELAY operates a FEC repair server; BEACONs only request repairs.


Playback Flow

This flow shows the end-to-end path from a viewer requesting content through to multicast-accelerated playback on their device. The end user runs an unmodified HTTP or MoQ player — the BEACON or RELAY handles all multicast complexity transparently.


Key Interfaces Summary

Interface | Protocol | Port | Between | Purpose
SyncRPC | gRPC (mTLS) | | Controller ↔ Gateway | Configuration push, service dispatch
Service303 | gRPC | 9191 | Gateway → Controller | Health, metrics, lifecycle
Cast TO Discovery | HTTP REST | 443 | CAST → Traffic Ops | Publisher config pull: DS, transports, sessions, RRULE
Cast State | gRPC (service303) | 9191 | CAST → Controller | Per-occurrence publisher health (cast_publisher type)
Cast State Query | HTTP REST | 443 | TrafficPortal → TO | GET /servers/{id}/cast_state, GET /sessions/{id}/cast_state
CDNi FCI/CI/RRI | HTTP REST | 443 | uCDN ↔ Blockcast | Multi-CDN interconnection
AStats | HTTP | 8080 | amt-astats → Traffic Monitor | Unified tunnel stats, health (HW + SW)
gNMI | gRPC | 32767 | amt-astats → Juniper MX | Hardware relay telemetry (via gNMI client)
jFlow/IPFIX | UDP | 4739 | Juniper MX → amt-astats | Per-flow accounting (via jFlow collector)
AMT | UDP | 2268 | Relay ↔ Gateway | Multicast tunneling (RFC 7450)
DRIAD | DNS | 53 | Gateway → DNS | Relay discovery (RFC 8777)
SSM Multicast | UDP | varies | CAST → Relay → BEACON | Native multicast delivery
LLS | UDP | 4937 | CAST → network | ATSC 3.0 service discovery
HTTP Ingress | TLS (ALPN) | 443 | Player → RELAY/BEACON | Unmodified DASH/HLS player access
MoQ Ingress | QUIC (ALPN) | 443 | Player → RELAY/BEACON | Unmodified MoQ player access
MoQ | QUIC | 443 | CAST/Relay ↔ Relay | Low-latency media relay (pass-through)
FEC Repair | HTTP | 8081 | BEACON → RELAY | Unicast repair symbol requests


Standards & References

Standard | Role in Blockcast
RFC 9706 (TreeDN) | Architecture blueprint for tree-based multicast CDN
RFC 7450 (AMT) | Tunnels multicast over unicast-only networks
RFC 8777 (DRIAD) | DNS-based AMT relay discovery
RFC 9223 (ROUTE) | Real-time object delivery over multicast
IGMPv3 (RFC 3376) | IPv4 multicast group membership
MLDv2 (RFC 3810) | IPv6 multicast group membership
RFC 9246 / RFC 9421 | CDNi request routing with content provider signatures
SVTA OC-API v2.1 | Open Caching API for CDN interconnection
MMT (ISO 23008-1) | MPEG Media Transport — universal container for broadcast, multicast, and unicast delivery
ATSC 3.0 / DVB-MABR / 3GPP MBMS | Broadcast standards supported by ROUTE/FLUTE and MMTP delivery
3GPP MBS (TS 22.246) | 5G Multicast-Broadcast Services for cellular multicast delivery
draft-ramadan-moq-mmt | MMT packaging for MoQ — maps MMTP packets to MoQ objects
MoQ multicast extension (IETF draft) | Multicast delivery and catalog endpoint discovery for MoQ
MoQ FEC (IETF draft) | Forward Error Correction for MoQ (RaptorQ, LDPC, Reed-Solomon)
