furcate

Platform

Four capabilities, one continuous fabric.

The platform is structured around four capabilities, each building on the last: compute hosts the runtime; the runtime is reachable over the network; orchestration coordinates those runtimes across thousands of devices; and the sovereignty boundary wraps the whole stack, customer-controlled, end to end.

01

Runtime

Edge AI execution — silicon to model.

TensorRT Edge-LLM for Jetson / DRIVE LLMs and VLMs (FP8, NVFP4, and INT4 quantization; EAGLE-3 speculative decoding). LiteRT and ONNX Runtime as the cross-platform default. ExecuTorch for PyTorch on microcontrollers. OpenVINO for Intel-tuned inference. WasmEdge / Wasmtime for sandboxed serverless edge — 1-5 ms cold starts vs 100 ms-1 s+ for containers.
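To make the quantization step concrete, here is a minimal sketch of symmetric INT8 post-training quantization, the kind of weight compression runtimes in this layer apply on-device. It is pure illustrative Python, not the API of any runtime named above; real engines quantize whole tensors with calibrated scales.

```python
def quantize_int8(weights):
    """Map float weights to int8 range [-127, 127] with a per-tensor scale.

    Assumes at least one nonzero weight; real quantizers handle the
    all-zero case and may use per-channel scales instead.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.89]
q, scale = quantize_int8(weights)      # q = [42, -127, 5, 89]
recovered = dequantize(q, scale)
# Rounding bounds the per-weight error at half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The same idea, at 4 bits with a [-7, 7] range, is what makes INT4 LLM weights small enough for edge memory budgets, at the cost of a coarser error bound.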

How runtime works

02

Network

Sovereign mesh — devices speak on your terms.

Matter / Thread for residential and SMB. LoRaWAN for outdoor and industrial. Private 5G + Time-Sensitive Networking (TSN) for deterministic-latency factory floors. Eclipse Hono for protocol abstraction. TPM 2.0 + TEE for hardware root of trust on every device. Customer data leaves the trust boundary only with explicit, logged consent.
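The consent rule above can be sketched as a small egress gate: data crosses the trust boundary only when an explicit grant exists, and every decision, allowed or not, lands in an append-only audit log. The names here (ConsentLedger, egress_allowed) are hypothetical illustrations, not part of Hono or any listed project.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    grants: set = field(default_factory=set)       # (device_id, purpose) pairs
    audit_log: list = field(default_factory=list)  # append-only decision trail

    def grant(self, device_id: str, purpose: str) -> None:
        """Record explicit customer consent for one device and purpose."""
        self.grants.add((device_id, purpose))

    def egress_allowed(self, device_id: str, purpose: str) -> bool:
        """Check consent; log the decision either way."""
        allowed = (device_id, purpose) in self.grants
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "device": device_id,
            "purpose": purpose,
            "allowed": allowed,
        })
        return allowed

ledger = ConsentLedger()
ledger.grant("sensor-42", "cloud-backup")
ok = ledger.egress_allowed("sensor-42", "cloud-backup")    # granted
blocked = ledger.egress_allowed("sensor-42", "analytics")  # denied, still logged
```

Logging denials as well as grants is the point: the audit trail proves not just what left the boundary, but what was asked for and refused.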

How networking works

03

Orchestration

Fleet management at scale, with provenance.

KubeEdge (100k+ nodes, 1M+ pods documented) and OpenYurt for cloud-native edge. K3s for the lowest-resource environments. Akri for K8s-native leaf-device discovery. Dapr as the distributed-application runtime. OTA campaigns with canary, blue-green, and TPM-attested rollout. Federated learning rounds via NVIDIA FLARE + Flower + ExecuTorch.
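The canary stage of an OTA campaign reduces to a simple two-phase loop: update a small slice of the fleet, then promote to the rest only if the canary success rate clears a threshold. This is an illustrative sketch, not the KubeEdge or Dapr API; real campaigns add TPM attestation, wave scheduling, and rollback.

```python
def canary_rollout(fleet, apply_update, canary_fraction=0.05, threshold=0.95):
    """Return (updated, untouched) device lists for a two-phase rollout."""
    n_canary = max(1, int(len(fleet) * canary_fraction))
    canary, rest = fleet[:n_canary], fleet[n_canary:]

    # Phase 1: update only the canary slice and measure success.
    results = {dev: apply_update(dev) for dev in canary}
    success_rate = sum(results.values()) / len(canary)

    if success_rate < threshold:
        # Halt: the canary failed, so the bulk of the fleet is never touched.
        return [d for d, ok in results.items() if ok], rest

    # Phase 2: promote the update to the remaining devices.
    for dev in rest:
        results[dev] = apply_update(dev)
    return [d for d, ok in results.items() if ok], []

fleet = [f"dev-{i}" for i in range(100)]
updated, halted = canary_rollout(fleet, apply_update=lambda dev: True)
```

Blue-green works the same way at coarser granularity: the "canary" is an entire parallel environment, and promotion is a traffic switch rather than a per-device loop.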

How orchestration works

04

Compute

Hardware support — consumer to industrial.

NVIDIA Jetson Orin Nano Super (67 TOPS, $249, 7-25 W). Hailo-10H AI HAT+ 2 on Pi 5 (40 TOPS INT4 / 20 TOPS INT8, 2.5 W, runs LLMs at 10 tok/s for $130). Coral M.2 (4 TOPS, 2 W). ESP32-P4, RP2350, Arduino, ARM Cortex-A/M, x86. Reference designs and OEM-ready firmware images. Custom silicon supported.
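The figures above imply very different efficiency points, which a quick calculation makes explicit. Numbers are the vendor-rated specs quoted in this section (Jetson's 7-25 W envelope taken at its 25 W maximum); note that INT4 and INT8 TOPS are not directly comparable across chips.

```python
# TOPS per watt from the vendor-quoted figures above (illustrative only).
boards = {
    "Jetson Orin Nano Super": {"tops": 67, "watts": 25},   # 67 TOPS, 7-25 W
    "Hailo-10H":              {"tops": 40, "watts": 2.5},  # 40 TOPS INT4, 2.5 W
    "Coral M.2":              {"tops": 4,  "watts": 2},    # 4 TOPS, 2 W
}

tops_per_watt = {
    name: spec["tops"] / spec["watts"] for name, spec in boards.items()
}
# Hailo-10H leads on raw TOPS/W at this operating point; Jetson trades
# efficiency for a much larger model and memory envelope.
```

This is why the platform spans the range rather than standardizing on one board: peak throughput, power budget, and unit cost pull in different directions per deployment.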

How compute works