In Development

controla — Local-First Self-Improving Inference OS

Personal
Static routing in local inference stacks assigns every request to the same backend regardless of task type, VRAM headroom, or load history: no task-aware dispatch, no hardware-state awareness, no learning from prior outcomes. Modality coverage is fragmented across disjoint infrastructure with no unified API surface. Learning state is in-memory only, so a process restart erases every routing insight accumulated. And with no policy validation mechanism, routing changes go live blind, with no regression gate against historical performance data.
⚠️ controla is a local-first inference OS: 19 backends across 7 modalities (text gen, STT, TTS, image generation, embeddings, vision, reasoning) under one OpenAI-compatible API. Every request is classified, scored against 6 dimensions, scheduled, dispatched, and observed. Routing policy learns from execution telemetry: contextual EWMA weights per `(backend, task_type, complexity)` persist across restarts via Redis. Not a gateway. Not a proxy. A self-improving control plane that treats inference as a managed system workload.
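The learning loop above can be sketched in a few lines. This is a minimal illustration, not controla's actual implementation: the class name, Redis key schema, smoothing factor, and the choice of latency as the observed signal are all assumptions.

```python
ALPHA = 0.2  # hypothetical smoothing factor; controla's real value is not stated


class RoutingWeights:
    """Contextual EWMA weights keyed by (backend, task_type, complexity).

    Weights live in Redis, so accumulated routing insight survives a
    process restart instead of being erased with in-memory state.
    """

    def __init__(self, redis_client):
        self.redis = redis_client  # any client with get/set

    def _key(self, backend: str, task_type: str, complexity: str) -> str:
        return f"routing:ewma:{backend}:{task_type}:{complexity}"

    def update(self, backend: str, task_type: str, complexity: str,
               observed_latency_ms: float) -> float:
        key = self._key(backend, task_type, complexity)
        prev = self.redis.get(key)
        if prev is None:
            ewma = observed_latency_ms  # first observation seeds the weight
        else:
            # standard EWMA: new observation blended with the running value
            ewma = ALPHA * observed_latency_ms + (1 - ALPHA) * float(prev)
        self.redis.set(key, ewma)  # persisted: survives restarts
        return ewma
```

Because each weight is scoped to a `(backend, task_type, complexity)` triple, a backend that is fast for short transcriptions but slow for long-form reasoning is learned as two different things rather than one averaged blur.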
⚙️ FeatureExtractor classifies task type across 10 categories and 3 complexity levels before dispatch. ScoringEngine evaluates every candidate across capability, performance, resource, load, reliability, and context; VRAM-aware routing applies −15 if the model cannot fit and +1.5 when it is already loaded. Redis-backed priority queue with per-user fairness enforcement, deadline-aware dispatch via the `x-latency-budget` header, and starvation prevention. ExecutionPlanner decomposes high-complexity reasoning into typed step chains. 329 passing tests.
🛡️ The scoring engine is stateless and deterministic (same inputs, same score); all learning lives above it in a versioned RoutingPolicy layer. ReplayEngine validates every policy candidate against historical data before promotion, gated on p95 latency regression and failure rate delta. ε-greedy exploration runs under hard guardrails: capability-matched and VRAM-safe backends only, within configured latency ceilings, on designated traffic buckets.
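The promotion gate can be sketched as two threshold checks over replayed traffic. The thresholds, dict field names, and nearest-rank p95 are assumptions for illustration; the source specifies only that candidates are gated on p95 latency regression and failure rate delta.

```python
# Hypothetical gate thresholds (not controla's actual configuration):
P95_REGRESSION_LIMIT = 1.05  # candidate p95 may be at most 5% worse
FAILURE_DELTA_LIMIT = 0.005  # failure rate may rise by at most 0.5 pp


def p95(latencies_ms: list[float]) -> float:
    """Nearest-rank 95th percentile over the replayed request set."""
    ordered = sorted(latencies_ms)
    idx = max(0, int(0.95 * len(ordered)) - 1)
    return ordered[idx]


def promote(baseline: dict, candidate: dict) -> bool:
    """Replay baseline and candidate policies over the same historical
    traffic, then promote only if neither gate regresses."""
    lat_ok = (p95(candidate["latencies"])
              <= P95_REGRESSION_LIMIT * p95(baseline["latencies"]))
    fail_ok = (candidate["failure_rate"] - baseline["failure_rate"]
               <= FAILURE_DELTA_LIMIT)
    return lat_ok and fail_ok
```

Replaying both policies over the same historical traffic is what makes the comparison fair: live A/B traffic would expose the two policies to different request mixes.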
🚀 19 backends across 7 modalities: text gen (vLLM · Ollama · TensorRT-LLM · NVIDIA NIM · ExLlamaV2 · LocalAI · AirLLM), STT (faster-whisper · Parakeet · Voxtral · WhisperX), TTS (Kokoro · Fish Audio), image generation (ComfyUI · Automatic1111 · InvokeAI), embeddings (Infinity · TEI), vision (Koboldcpp). Learning state persists across restarts via Redis. Routing accuracy compounds with usage.
19 backends · 7 modalities · closed learning loop · routing accuracy compounds with every deployment
Local-First Inference OS · 19 Backends · 7 Modalities · Closed Learning Loop (EWMA) · Redis-Backed Priority Scheduler · VRAM-Aware Routing · Policy Versioning · Replay Validation
View on GitHub