furcate

Capability · Orchestration

Run fleets in the hundreds of thousands.

Furcate composes Kubernetes-class control planes for the edge — KubeEdge (100k+ nodes / 1M+ pods documented), OpenYurt for offline-capable edge, K3s for the lowest resource budgets, Akri for K8s-native leaf-device discovery — under one fleet operations UI, with provenance for every dispatch.

01

KubeEdge — 100k+ edge nodes per cluster

A CNCF incubating project, with a published performance test scaling to 100,000 concurrent edge nodes managing more than 1,000,000 active pods. The default when the customer's edge fleet is large enough to need cloud-native ops. Furcate runs as a KubeEdge component, so it slots into existing K8s investments.

02

OpenYurt — edge-native K8s with offline-first

Brings native edge-computing capabilities to Kubernetes — edge nodes run K8s without continuous cloud connectivity. The strongest choice when the fleet operates in intermittent or disconnected environments (mining, maritime, remote energy, defence).

03

K3s — lightweight K8s at the lowest resource budget

Lowest resource consumption among lightweight K8s distributions in 2026 benchmarks. A full K8s experience in a significantly reduced memory footprint. The default when devices are individually small but the fleet is large.

04

Akri — K8s-native leaf-device discovery

Built on the K8s Device Plugins framework. Discovers small edge devices via ONVIF / udev / OPC UA handlers, creates a K8s service per discovered device, and keeps that service available when nodes lose connectivity or fail. The way leaf devices (cameras, sensors, PLCs) become first-class K8s objects without custom integration.
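The device-to-service pattern above can be sketched in plain Python. This is a conceptual sketch of the mapping, not the Akri API; the `LeafDevice` type and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LeafDevice:
    uid: str        # stable identifier from the discovery handler
    protocol: str   # "onvif" | "udev" | "opcua"
    node: str       # node that currently exposes the device

def services_for(devices):
    """One logical service per discovered device, keyed by device uid."""
    return {d.uid: f"svc-{d.protocol}-{d.uid}" for d in devices}

def reschedule(devices, failed_node, healthy_node):
    """When a node fails, re-home its devices so their services stay up."""
    return [
        LeafDevice(d.uid, d.protocol, healthy_node) if d.node == failed_node else d
        for d in devices
    ]
```

The point of the pattern: workloads address the per-device service name, never the node, so a node failure is absorbed by rescheduling rather than reconfiguration.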

05

OTA campaigns — canary + blue-green + TPM-attested

Phased rollouts (1% → 10% → 50% → 100%) with TPM 2.0 attestation gating between phases. Blue-green for atomic version swaps. Mesh-aware self-healing during partial outages. Every campaign runs through ProcessSim's simulator first, so the dispatch guard catches misconfigurations before they touch a live device.
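The phase-gating logic reads like this minimal Python sketch. The `attest` and `deploy` callables are hypothetical hooks standing in for TPM attestation and the actual update dispatch; nothing here is Furcate's real API.

```python
PHASES = [0.01, 0.10, 0.50, 1.00]  # 1% -> 10% -> 50% -> 100%

def run_campaign(fleet, attest, deploy):
    """Phased rollout: the next phase starts only if every device
    updated so far passes attestation (the gate between phases)."""
    done = set()
    for fraction in PHASES:
        target = max(1, int(len(fleet) * fraction))
        wave = [d for d in fleet if d not in done][: target - len(done)]
        for device in wave:
            deploy(device)
            done.add(device)
        if not all(attest(d) for d in done):
            # Gate failed: halt before the blast radius grows.
            return ("halted", sorted(done))
    return ("complete", sorted(done))
```

A single failed attestation at the 1% phase halts the campaign with one device touched, which is the whole argument for canarying.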

06

Federated learning at production scale

NVIDIA FLARE + Flower + ExecuTorch. FLARE's hierarchical FL architecture scales to thousands of edge devices in real production deployments — Eli Lilly TuneLab (Rhino Federated Computing), Taiwan MOHW national healthcare FL, Tri-Labs (Sandia / LANL / LLNL) federated AI pilot. Raw data never leaves the device; only model deltas are aggregated.
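"Only model deltas are aggregated" can be made concrete with a weighted federated-averaging sketch in plain Python (models as flat lists of floats). This illustrates the aggregation step generically; it is not FLARE's or Flower's API.

```python
def aggregate_deltas(deltas, weights=None):
    """Weighted average of per-device model deltas. Only the deltas each
    device computed locally ever reach the aggregator -- never raw data."""
    if weights is None:
        weights = [1.0] * len(deltas)
    total = sum(weights)
    dim = len(deltas[0])
    return [sum(w * d[i] for w, d in zip(weights, deltas)) / total
            for i in range(dim)]

def apply_round(global_model, deltas, weights=None):
    """One federated round: the global model moves by the aggregated delta."""
    avg = aggregate_deltas(deltas, weights)
    return [g + a for g, a in zip(global_model, avg)]
```

Weights typically reflect per-device sample counts, so devices with more local data pull the global model harder.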

07

Dapr + EdgeX Foundry — distributed runtime + IoT

Dapr (CNCF) provides workflow, pub/sub, state, secrets, bindings, actors, distributed lock, and cryptography as platform primitives — the State of Dapr 2026 report cites a 20–40% developer-productivity uplift. EdgeX Foundry provides vendor-neutral device-data ingestion, normalisation, and analysis. The two compose cleanly: Dapr handles application semantics, EdgeX handles device-protocol normalisation.
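The layering can be sketched in plain Python: a normalisation step in the EdgeX role feeding an in-process bus in the Dapr pub/sub role. Field names and the `Bus` class are illustrative stand-ins, not the EdgeX event schema or the Dapr SDK.

```python
import json

def normalise(raw):
    """EdgeX-role step (sketch): map a vendor-specific reading onto a
    uniform envelope so downstream code never sees vendor quirks."""
    return {
        "device": raw["id"],
        "resource": raw.get("channel", "default"),
        "value": float(raw["val"]),
    }

class Bus:
    """Minimal in-process stand-in for a Dapr pub/sub component."""
    def __init__(self):
        self.topics = {}

    def publish(self, topic, event):
        # Serialise at the boundary, as a real broker would.
        self.topics.setdefault(topic, []).append(json.dumps(event))

    def drain(self, topic):
        return [json.loads(e) for e in self.topics.pop(topic, [])]
```

The design point: application code subscribes to normalised envelopes on the bus and stays oblivious to which of ONVIF, udev, or OPC UA produced the reading.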

08

Auditable autonomy

Every model swap, every OTA campaign, every federated round, every agent dispatch is logged with the full input window, the model that produced it, the simulation run that validated it, and the policy that approved it. Cryptographic chaining of the event log. Replay viewer for operators; structured event log for auditors. Designed for IEC 62443, NIS2, GDPR, HIPAA, FIPS, NIST, CMMC regimes.
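The cryptographic chaining of the event log is standard hash-chaining, sketched below with stdlib `hashlib` (a generic illustration of the technique, not Furcate's log format).

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_event(log, event):
    """Each entry commits to the previous entry's hash, so tampering
    with any event breaks every subsequent link in the chain."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every link; any edit anywhere makes this return False."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

An auditor only needs the log itself to verify integrity end to end; the replay viewer for operators is a presentation layer over the same chained entries.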

Stack in play

Open foundations composed at this layer.

KubeEdge

K8s for edge · 100k+ nodes

Published production-scale results; CNCF project; the default for large fleets.

OpenYurt

offline-capable edge K8s

Strongest offline / intermittent connectivity story.

K3s

lightweight K8s

Lowest resource footprint among lightweight K8s.

Akri

K8s leaf-device discovery

ONVIF / udev / OPC UA → K8s services with HA.

Dapr

distributed app runtime

Workflow, pub/sub, state, actors as primitives.

NVIDIA FLARE + Flower

federated learning

Production FL at thousands of devices; raw data stays on device.

EdgeX Foundry

vendor-neutral edge IoT

Modular device-data ingest + normalisation.