furcate

Solutions · Research & academia

Reproducible edge-AI experiments.

Scientific edge-AI work needs a runtime that's open, an orchestrator that's auditable, and a federated-learning framework that already ships in production. Furcate composes NVIDIA FLARE + Flower + ExecuTorch + KubeEdge + WasmEdge + the TimesFM bench — all open, all cited, all reproducible — for university, national lab, and corporate research teams.

Open foundations

Apache 2.0 / OSI-approved throughout

Reproducible

lineage from checkpoint to deployment

Federated

FLARE + Flower + ExecuTorch native

Cited

every model, every benchmark, every claim

Use cases

What the platform does for research teams.

Federated learning at sensor-network scale

Run NVIDIA FLARE FL rounds across hundreds to thousands of edge devices without raw data leaving any node. The pattern Eli Lilly TuneLab, Taiwan MOHW healthcare, and Tri-Labs (Sandia / LANL / LLNL) use in production — adopt it for your dataset without re-implementing the FL protocol.
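The aggregation pattern behind those FL rounds can be sketched in plain NumPy. This is not the NVIDIA FLARE or Flower API, just a minimal, self-contained illustration of FedAvg: each client trains on its private shard, and only updated weights (never raw data) reach the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's round: linear-model SGD on its private shard.
    Raw (X, y) never leave this function; only the weights do."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, shards):
    """Server step: average the clients' locally trained weights,
    weighted by shard size (the FedAvg aggregation rule)."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in shards]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# three "devices", each holding a private data shard
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, shards)
print(np.round(w, 2))  # converges toward true_w
```

FLARE and Flower wrap this same loop in secure transport, client lifecycle management, and pluggable aggregators; the sketch only shows why raw data never needs to move.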

On-device fine-tuning

ExecuTorch + FLARE handle PyTorch fine-tuning on mobile and microcontroller-class devices. The compute / battery / privacy trade-offs are open research questions; Furcate gives you a reproducible substrate to study them.

Cross-embodiment robotics policies

OpenVLA (Apache 2.0) trained on the Open X-Embodiment dataset of 1M+ trajectories. Octo (93M params) for fast-inference cobots. RT-2 / π₀ as benchmark targets. Furcate orchestrates these on Jetson / Hailo / Coral hardware so your group can study cross-embodiment transfer empirically.

Reproducible deployment pipelines

From foundation-model checkpoint to edge-deployed binary, every step (quantisation, packaging, OTA enrolment, attestation) is logged. A paper that says 'we ran INT4-quantised Phi-4 on a Pi 5 + Hailo' is replayable on your reviewer's machine because the byte-for-byte lineage is kept.
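The idea behind byte-for-byte lineage can be illustrated with a content-hash sketch. The `lineage_record` helper below is hypothetical, not Furcate's API; it only shows how hashing each step's input artefacts, alongside the parameters that produced it, makes a pipeline step verifiable on replay.

```python
import hashlib, json

def sha256_bytes(data: bytes) -> str:
    """Content hash of an artefact: same bytes, same digest, on any machine."""
    return hashlib.sha256(data).hexdigest()

def lineage_record(step: str, inputs: dict, params: dict) -> dict:
    """Hypothetical lineage entry: a pipeline step records the exact hashes
    of its input artefacts plus the parameters it ran with."""
    return {
        "step": step,
        "inputs": {name: sha256_bytes(blob) for name, blob in inputs.items()},
        "params": params,
    }

checkpoint = b"\x00fake-checkpoint-bytes"  # stand-in for a real model file
rec = lineage_record(
    "quantise",
    inputs={"checkpoint": checkpoint},
    params={"scheme": "INT4", "group_size": 128},
)
# a reviewer replaying the step verifies the hash matches before trusting it
assert rec["inputs"]["checkpoint"] == sha256_bytes(checkpoint)
print(json.dumps(rec, indent=2))
```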

TimesFM / Chronos / Moirai zero-shot benchmarks

The same time-series foundation-model bench used in Coast / Grid / Industry — drop your dataset onto a Furcate node and benchmark zero-shot performance across the open TSFMs without re-implementing the inference path.
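A zero-shot bench reduces to one loop: same context window, same horizon, one score per model, no per-model fine-tuning. The sketch below uses naive stand-in forecasters rather than TimesFM / Chronos / Moirai (which need their own inference stacks); only the shape of the comparison is shown.

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error over the forecast horizon."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

# Stub "models": real TSFMs forecast from context alone; these naive
# baselines stand in for the inference path.
def naive_last(context, horizon):
    return [context[-1]] * horizon

def naive_mean(context, horizon):
    m = sum(context) / len(context)
    return [m] * horizon

def benchmark(models, context, target):
    """Zero-shot bench loop: identical context and horizon for every model,
    one error score each, no fine-tuning anywhere."""
    horizon = len(target)
    return {name: mae(target, fn(context, horizon)) for name, fn in models.items()}

series = [math.sin(i / 5) for i in range(120)]
context, target = series[:100], series[100:]
scores = benchmark({"naive_last": naive_last, "naive_mean": naive_mean},
                   context, target)
print(scores)
```

Swapping a stub for a real TSFM means replacing one forecasting callable; the loop, metric, and dataset split stay fixed, which is what makes the scores comparable.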

Air-gapped research deployments

Sensitive research data (clinical, defence, dual-use) often can't leave a regulated environment. Furcate runs air-gapped on Microsoft Azure Local, HPE Private Cloud AI, or your own bare-metal cluster. The research happens inside your perimeter; the substrate is open and auditable.

How a deployment runs

Replicable end-to-end.

  1. Pick the model bench (TimesFM / OpenVLA / Qwen3-VL / etc.) and the hardware target (Jetson / Pi + Hailo / Coral / ESP32). Furcate ships a working reproducer for each pairing.

  2. Pull your dataset onto a Furcate node. The platform handles ingestion, normalisation, and partitioning across the sensor / device network.

  3. Run experiments — federated rounds, fine-tunes, A/B model comparisons. Lineage is automatic; reproducer scripts are emitted for the paper appendix.

  4. Publish with full lineage — checkpoint hashes, dataset partition seeds, hardware envelope, runtime versions. A reviewer with a Pi 5 + Hailo can replicate at their desk.
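Publishing "dataset partition seeds" works because shard assignment is deterministic: the same inputs and seed always rebuild the same shards. A minimal sketch, with a hypothetical `partition` helper rather than Furcate's actual implementation:

```python
import random

def partition(sample_ids, n_clients, seed):
    """Deterministic shard assignment: identical (sample_ids, n_clients,
    seed) always yields identical shards, so a reviewer can rebuild the
    exact partition named in a paper's lineage record."""
    rng = random.Random(seed)
    ids = list(sample_ids)
    rng.shuffle(ids)
    return [ids[i::n_clients] for i in range(n_clients)]

shards_a = partition(range(10), n_clients=3, seed=42)
shards_b = partition(range(10), n_clients=3, seed=42)
assert shards_a == shards_b                          # same seed, same shards
assert sorted(sum(shards_a, [])) == list(range(10))  # every sample placed once
print(shards_a)
```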

Stack active in this configuration

  • NVIDIA FLARE
  • Flower
  • ExecuTorch
  • OpenVLA
  • Octo
  • TimesFM 2.5
  • Chronos
  • Moirai
  • WasmEdge