TreeDN: Multicast Content Delivery
Blockcast implements a TreeDN architecture as defined in RFC 9706 — a tree-based Content Delivery Network designed for live streaming to mass audiences. TreeDN combines Source-Specific Multicast (SSM) with Automatic Multicast Tunneling (AMT) overlays to deliver content efficiently across both multicast-enabled and unicast-only networks.
No player modification required. Standard DASH/HLS and MoQ players connect to the nearest edge node over standard HTTP or QUIC — the multicast-to-unicast conversion happens entirely within the network. Existing encoder pipelines, DRM workflows, and player SDKs remain untouched.
Hardware-accelerated scale. Each Juniper MX hardware AMT relay supports up to 500,000 concurrent tunnels at line rate, with throughput up to 1 Tbps. With a target deployment of 10,000+ ISP-hosted routers, the aggregate relay fleet can serve billions of concurrent multicast viewers without requiring new infrastructure — leveraging routers already deployed in carrier networks.
Bandwidth economics. In traditional unicast CDN delivery, each concurrent viewer requires a dedicated stream — 1 million viewers at 5 Mbps consumes 5 Tbps of backbone bandwidth. With TreeDN multicast, a single stream is replicated at each relay hop, reducing backbone bandwidth by orders of magnitude. At scale, multicast delivery approaches a fixed cost per stream regardless of audience size, compared to the linear cost per viewer of unicast CDN.
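The bandwidth comparison above is simple arithmetic; the sketch below just restates it in code. The unicast side uses the figures from the text (1 million viewers at 5 Mbps); the multicast hop count is an illustrative placeholder, not a measured topology.

```python
# Back-of-the-envelope comparison of unicast vs. multicast backbone
# bandwidth, using the numbers from the text above (not a network model).

def unicast_backbone_bps(viewers: int, bitrate_bps: float) -> float:
    """Unicast CDN: one dedicated stream per concurrent viewer."""
    return viewers * bitrate_bps

def multicast_backbone_bps(relay_hops: int, bitrate_bps: float) -> float:
    """Multicast: one stream copy per relay hop, regardless of audience size."""
    return relay_hops * bitrate_bps

viewers = 1_000_000
bitrate = 5_000_000  # 5 Mbps

print(unicast_backbone_bps(viewers, bitrate) / 1e12)  # 5.0 (Tbps)
print(multicast_backbone_bps(100, bitrate) / 1e9)     # 0.5 (Gbps, for 100 hypothetical backbone hops)
```

The unicast cost grows linearly with viewers, while the multicast cost depends only on the number of replication points — the "fixed cost per stream" claim above.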
By adopting TreeDN, Blockcast enables Replication-as-a-Service (RaaS) across a tiered deployment model:
Tier 1 — Carrier-grade: ISP-deployed Juniper MX hardware relays with SLA-backed anycast infrastructure.
Tier 2 — Professionally operated: Datacenter-hosted RELAY nodes running software AMT (amtr) and edge caches.
Tier 3 — Community edge: DePIN-incentivized BEACON nodes extending coverage to the deep network edge — homes, venues, and mobile devices. These are leaf nodes only; they do not carry transit traffic or serve repairs.
Dual Delivery Protocols
TreeDN delivers content through two complementary protocols, each optimized for different use cases: MAHP for standard DASH/HLS delivery over HTTP, and MoQ for low-latency delivery over QUIC.
Both protocols are fed simultaneously from a single FFmpeg process on each CAST node, ensuring content consistency across delivery paths. Players connect to whichever protocol their platform supports — the network handles everything transparently.
For details on how multicast reaches unicast-only networks, see AMT Relays & Multicast Backbone. For the unified MMT container format and broadcast integration, see MoQ & MMT: Streaming Delivery Paths.
Layered Architecture
The Blockcast TreeDN architecture is organized into three planes. Each plane has distinct responsibilities and communicates with adjacent planes through well-defined interfaces.
RFC 9706 Terminology Mapping
| RFC 9706 Role | Blockcast Component | Function |
| --- | --- | --- |
| Content/Multicast Source | CAST Node | FFmpeg tee muxer outputs CMAF HTTP to MAHP Sender, MMTP/QUIC to MoQ Relay, and MMTP SSM to multicast. Orchestrated by the cast binary with RRULE scheduling and service303 health reporting. |
| Native Multicast Router | PIM-SSM Infrastructure | Forwards multicast within enabled network segments |
| AMT Relay | RELAY Node (AMT function) | Tunnels multicast to unicast-only networks (hardware or software) |
| AMT Gateway | BEACON Node | Receives tunneled multicast at the deep edge |
| Native Receiver | BEACON Node (native mode) | Directly joins SSM groups on multicast-enabled networks |
| RaaS Provider | Tiered Operators | Tier 1: ISP carrier-grade (Juniper MX); Tier 2: datacenter RELAY operators; Tier 3: DePIN community BEACON nodes |
Control Plane
The control plane orchestrates the entire TreeDN delivery network — managing node lifecycle, routing decisions, and multi-CDN interconnection.
CDN Controller
The CDN Controller extends Magma orc8r to manage the distributed network of CAST, RELAY, and BEACON gateways. It provides:
Gateway registration and management: Nodes bootstrap with mTLS certificates and register as managed gateways, each running blockcastd as the local orchestrator.
Dynamic service management: Services (cache, multicast sender, AMT relay, traffic router, MoQ relay) are enabled or disabled per-node via cloud-managed configuration (mconfig) without requiring restarts.
Cloud services: Three specialized controllers — CDN (network orchestration), Beacon (peer discovery and coordination), and Capacity Map (resource allocation and load balancing).
Traffic Router
The Traffic Router directs client requests to the optimal edge node using three complementary routing modes:
DNS Routing (standard CDN mode)
Returns A/AAAA records for the nearest healthy RELAY or BEACON cache. The client resolves a hostname (e.g., cdn.example.com) and receives the IP of the best edge node based on geolocation, consistent hashing, and Health Protocol state. This is the primary routing mode for all standard CDN traffic.
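The node-selection step can be sketched with rendezvous (highest-random-weight) hashing, one common form of consistent hashing. This is an illustrative sketch, not Traffic Router's actual algorithm, and the node names are placeholders; it shows the key property the text relies on — removing an unhealthy node only remaps the keys that were on that node.

```python
import hashlib

def pick_edge(content_key: str, healthy_nodes: list[str]) -> str:
    """Rendezvous hashing: every node scores the key; the highest score wins.
    When a node disappears, only its keys move to other nodes."""
    def score(node: str) -> int:
        digest = hashlib.sha256(f"{node}|{content_key}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(healthy_nodes, key=score)

nodes = ["relay-a.example.net", "relay-b.example.net", "relay-c.example.net"]
edge = pick_edge("/live/channel1/seg42.m4s", nodes)
# The same key always resolves to the same healthy node:
assert edge == pick_edge("/live/channel1/seg42.m4s", nodes)
```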
HTTP Routing (redirect mode)
Returns HTTP 302 redirects to the best edge node. This mode enables operation on unprivileged ports (>1024) on restrictive platforms such as Android or ChromeOS, making personal BEACON caches at home practical — the redirect URL includes the custom port. HTTP routing is also used for CDNi request delegation between upstream and downstream CDNs (RRI interface).
DRIAD Routing (RFC 8777 — multicast relay discovery)
Used exclusively by MAHP/MoQ receivers (on RELAYs and BEACONs) to discover the topologically nearest AMT relay for multicast stream acquisition. DRIAD reverses the multicast source IP and queries DNS for an AMTRELAY record in the source's reverse zone (e.g., for source 69.25.95.10, the query name is 10.95.25.69.in-addr.arpa). This routing mode operates independently of HTTP/DNS content routing — it resolves the multicast data path, not the client request path.
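Constructing the DRIAD query name is mechanical: reverse the source address into its standard reverse-DNS zone, as RFC 8777 specifies. Python's stdlib exposes exactly this transformation; the source IP below is the one used as an example above.

```python
import ipaddress

def driad_query_name(source_ip: str) -> str:
    """Build the DNS name queried for an AMTRELAY record (RFC 8777):
    the source address reversed into its in-addr.arpa / ip6.arpa zone."""
    return ipaddress.ip_address(source_ip).reverse_pointer

print(driad_query_name("69.25.95.10"))  # 10.95.25.69.in-addr.arpa
```

An actual gateway would then resolve that name for the AMTRELAY resource record type to obtain the relay address; the resolution step itself is omitted here.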
Additional capabilities:
CDNi delegation: Request routing can be delegated via the CDN Interconnection Request Routing Interface, with content provider signatures per RFC 9246 and RFC 9421.
Multicast source resolution: Enables MAHP gateways in BEACONs and RELAYs to resolve the address of the multicast source or the fallback AMT relay using coverage information and client addressing.
Traffic Monitor
Continuously polls the health and performance of all network nodes:
Stat polling: Collects cache throughput, error rates, and delivery service statistics.
Health polling: Heartbeat checks to detect node availability in real time.
AMT relay monitoring: Queries the unified amt-astats service (/_astats endpoint), which aggregates health and tunnel state from both hardware (Juniper MX) and software (Linux amtr) relays.
State feed: Provides the combined health state to Traffic Router for routing decisions.
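The kind of aggregation such a unified endpoint performs can be sketched as below. The field names and record shapes are illustrative assumptions, not the actual amt-astats schema; the point is only that hardware (gNMI-sourced) and software (amtr-sourced) relay records merge into one health view for Traffic Monitor.

```python
# Hedged sketch: merge per-relay tunnel state from hardware and software
# sources into a single stats document. Field names are assumptions.
def aggregate_astats(hw: list[dict], sw: list[dict]) -> dict:
    relays = hw + sw
    return {
        "relays": relays,
        "total_tunnels": sum(r["tunnels"] for r in relays),
        "healthy": all(r["up"] for r in relays),
    }

stats = aggregate_astats(
    hw=[{"id": "mx-1", "kind": "hardware", "tunnels": 120_000, "up": True}],
    sw=[{"id": "amtr-7", "kind": "software", "tunnels": 3_500, "up": True}],
)
assert stats["total_tunnels"] == 123_500 and stats["healthy"]
```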
CDNi Interfaces
Blockcast implements the SVTA Open Caching API v2.1 for multi-CDN interconnection:
| Interface | Purpose |
| --- | --- |
| FCI (Footprint & Capabilities) | Advertises network reach by ASN, country code, or H3 geospatial index |
| CI (Configuration) | Receives content provider configuration: origins, ACLs, cache rules, capacity limits |
| RRI (Request Routing) | Delegates DNS/HTTP routing decisions between upstream and downstream CDNs |
| Trigger | Content prepositioning, cache invalidation, and status tracking |
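As a rough illustration of an FCI-style footprint advertisement, the structure below is modeled on CDNi footprint objects (RFC 8008); it is not the exact SVTA OC-API v2.1 payload, and all capability and footprint values are placeholders.

```python
import json

# Illustrative footprint/capability advertisement (shape only, values assumed)
footprint_advertisement = {
    "capabilities": [{
        "capability-type": "FCI.DeliveryProtocol",
        "capability-value": ["https", "moq"],  # placeholder protocol list
        "footprints": [
            {"footprint-type": "asn", "footprint-value": ["as64496"]},
            {"footprint-type": "countrycode", "footprint-value": ["us", "de"]},
        ],
    }]
}
print(json.dumps(footprint_advertisement, indent=2))
```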
CAST Sender (Content Source)
CAST nodes are the root of the TreeDN multicast tree. They ingest content from upstream CDNs or origin servers and deliver it downstream through three parallel output paths from a single FFmpeg process.
FFmpeg Encoder (tee output)
A single FFmpeg process encodes source video (H.264/H.265 + AAC/Opus) and uses a tee muxer to output to three destinations simultaneously:
CMAF HTTP → local MAHP Sender (Caddy) for ROUTE/FLUTE encoding and delivery to MAHP Receivers
MMTP/QUIC → downstream MoQ Relay via WebTransport (moq_mmt muxer, IETF MoQ draft)
MMTP SSM → SSM multicast groups (232.x.x.x) with RaptorQ FEC (k=32, r=8) for distribution via AMT Relay to players with built-in AMT gateway (IWA browser extension, mobile/TV SDK)
All three outputs carry the same content from the same FFmpeg process, ensuring consistency across delivery paths. Native multicast is preferred where available; AMT tunneling provides the highest fan-out efficiency for direct-to-player delivery at scale.
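The single-process fan-out can be sketched with FFmpeg's stock tee muxer syntax. This is an illustrative command assembly, not Blockcast's actual configuration: all URLs, addresses, and options are placeholders, and only two legs are shown because the MMTP/MoQ legs rely on Blockcast's own muxers (e.g., moq_mmt) rather than muxers that ship with stock FFmpeg.

```python
# Sketch: one encode, multiple outputs via FFmpeg's tee muxer.
# "|" separates tee slaves; "[f=...]" sets each slave's format.
tee_outputs = "|".join([
    # CMAF/DASH segments to a local MAHP Sender (placeholder URL)
    "[f=dash]http://127.0.0.1:8090/live/stream.mpd",
    # Transport stream to an SSM multicast group (placeholder address)
    "[f=mpegts]udp://232.1.1.1:5000?ttl=16",
])

ffmpeg_cmd = [
    "ffmpeg", "-re", "-i", "source.mp4",
    "-c:v", "libx264", "-c:a", "aac",
    "-map", "0:v", "-map", "0:a",
    "-f", "tee", tee_outputs,
]
print(" ".join(ffmpeg_cmd))
```

Because every slave is fed by the same encoder instance, the outputs are bit-identical at the media level — the consistency property the paragraph above describes.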
Cast Orchestrator (publisher lifecycle)
The cast binary orchestrates publisher processes on each CAST node. It discovers delivery service configuration from Traffic Ops (origins, relay endpoints, multicast groups, FEC parameters) and manages the full publisher lifecycle — including scheduled events via RRULE recurrence rules (RFC 5545), automatic failover with configurable watchdog restarts, and per-occurrence health reporting to the CDN Controller via service303.
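To make the RRULE-driven scheduling concrete, here is a stdlib-only sketch that expands one trivial rule form (FREQ=WEEKLY;COUNT=n) into per-occurrence start times. The cast binary's actual parser is not shown, and a full RFC 5545 grammar would need a real library (e.g., python-dateutil's rrule module).

```python
from datetime import datetime, timedelta

def weekly_occurrences(dtstart: datetime, count: int) -> list[datetime]:
    """Expand FREQ=WEEKLY;COUNT=<count> from a DTSTART (illustrative only)."""
    return [dtstart + timedelta(weeks=i) for i in range(count)]

start = datetime(2025, 1, 6, 20, 0)  # a Monday 20:00 event
occurrences = weekly_occurrences(start, 4)
assert all(d.weekday() == start.weekday() for d in occurrences)
```

Each expanded occurrence is what the orchestrator would schedule, watch, and report on individually via service303.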
Graceful Degradation & Failover
RELAYs implement hitless failover to protect viewer experience during disruptions:
Multicast → unicast fallback: If multicast reception fails (relay down, network path loss), the MAHP Receiver automatically falls back to unicast HTTP origin fetch within the same session. The player sees no interruption — the cache continues serving from whichever source is active.
Relay failover: Traffic Router continuously monitors RELAY health. If a RELAY becomes unhealthy, new client requests are routed to the next-best RELAY via DNS TTL expiry or HTTP 302 re-routing. Active sessions drain gracefully.
BEACON resilience: If a BEACON's upstream RELAY fails, the BEACON performs a new DRIAD lookup and re-joins via an alternate relay. Cached content continues serving during the transition.
MoQ path independence: The MoQ relay path (QUIC) operates independently of the MAHP/multicast path. If multicast fails, MoQ viewers are unaffected, and vice versa. The HA simulcast at CAST ensures both paths receive content independently.
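The fallback order described above can be sketched as a simple source-selection function. The state names are illustrative, not Blockcast's actual internals: prefer multicast, re-join via an alternate relay when one is reachable (the DRIAD re-lookup case), and otherwise fall back to unicast origin fetch within the same session.

```python
# Hedged sketch of fill-source selection during failover; names are assumed.
def pick_fill_source(multicast_healthy: bool, alt_relay_reachable: bool) -> str:
    if multicast_healthy:
        return "multicast"           # native SSM or AMT tunnel
    if alt_relay_reachable:
        return "alternate-relay"     # re-join via DRIAD re-lookup
    return "unicast-origin"          # HTTP fallback, same session

assert pick_fill_source(True, True) == "multicast"
assert pick_fill_source(False, False) == "unicast-origin"
```

The player never sees the switch: the cache keeps serving segments from whichever source is currently active.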
Key Interfaces Summary
| Interface | Protocol | Port | Direction | Purpose |
| --- | --- | --- | --- | --- |
| SyncRPC | gRPC (mTLS) | — | Controller ↔ Gateway | Configuration push, service dispatch |
| Service303 | gRPC | 9191 | Gateway → Controller | Health, metrics, lifecycle |
| Cast TO Discovery | HTTP REST | 443 | CAST → Traffic Ops | Publisher config pull: DS, transports, sessions, RRULE |
| Cast State | gRPC (service303) | 9191 | CAST → Controller | Per-occurrence publisher health (cast_publisher type) |
| Cast State Query | HTTP REST | 443 | TrafficPortal → TO | GET /servers/{id}/cast_state, GET /sessions/{id}/cast_state |
| CDNi FCI/CI/RRI | HTTP REST | 443 | uCDN ↔ Blockcast | Multi-CDN interconnection |
| AStats | HTTP | 8080 | amt-astats → Traffic Monitor | Unified tunnel stats, health (HW + SW) |
| gNMI | gRPC | 32767 | amt-astats → Juniper MX | Hardware relay telemetry (via gNMI client) |
| jFlow/IPFIX | UDP | 4739 | Juniper MX → amt-astats | Per-flow accounting (via jFlow collector) |
| AMT | UDP | 2268 | Relay ↔ Gateway | Multicast tunneling (RFC 7450) |
| DRIAD | DNS | 53 | Gateway → DNS | Relay discovery (RFC 8777) |
| SSM Multicast | UDP | varies | CAST → Relay → BEACON | Native multicast delivery |
| LLS | UDP | 4937 | CAST → network | ATSC 3.0 service discovery |
| HTTP Ingress | TLS (ALPN) | 443 | Player → RELAY/BEACON | Unmodified DASH/HLS player access |
| MoQ Ingress | QUIC (ALPN) | 443 | Player → RELAY/BEACON | Unmodified MoQ player access |
| MoQ | QUIC | 443 | CAST/Relay ↔ Relay | Low-latency media relay (pass-through) |
| FEC Repair | HTTP | 8081 | BEACON → RELAY | Unicast repair symbol requests |
Standards & References
| Standard | Role |
| --- | --- |
| RFC 9706 (TreeDN) | Architecture blueprint for tree-based multicast CDN |
| RFC 7450 (AMT) | Tunnels multicast over unicast-only networks |
| RFC 8777 (DRIAD) | DNS-based AMT relay discovery |
| RFC 9223 (ROUTE) | Real-time object delivery over multicast |
| RFC 3376 (IGMPv3) | IPv4 multicast group membership |
| RFC 3810 (MLDv2) | IPv6 multicast group membership |
| RFC 9246 / RFC 9421 | CDNi request routing with content provider signatures |
| SVTA OC-API v2.1 | Open Caching API for CDN interconnection |
| ISO 23008-1 (MMT) | MPEG Media Transport — universal container for broadcast, multicast, and unicast delivery |
| ATSC 3.0 / DVB-MABR / 3GPP MBMS | Broadcast standards supported by ROUTE/FLUTE and MMTP delivery |
| 3GPP TS 22.246 (5G MBS) | 5G Multicast-Broadcast Services for cellular multicast delivery |
| — | MMT packaging for MoQ — maps MMTP packets to MoQ objects |
| — | Multicast delivery and catalog endpoint discovery for MoQ |
| — | Forward Error Correction for MoQ (RaptorQ, LDPC, Reed-Solomon) |