Chi
Systems code and application code.
Same language, same syntax.
Why Chi?
C++-like performance
System mode (.xs) — Write in this mode for full manual memory management with no GC overhead. Zero-cost abstractions, direct hardware control. Compiles to an optimized binary via LLVM.
Go-like ergonomics
Application mode (.x) — Write in this mode and stop worrying about memory. Escape analysis and garbage collection handle it for you.
Memory safety
Compiler-enforced borrow checking for system-level code. The compiler catches dangling references, use-after-free, and other memory safety issues at compile time.
First-class async/await
Write concurrent code like in TypeScript. Async functions, await, Promise.all are built into the language — not bolted on.
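Chi's syntax isn't shown on this page, so here is the TypeScript pattern it borrows, written in plain TypeScript: async functions, await, and Promise.all fanning out several requests concurrently. The fetchPart helper is a stand-in for real I/O.

```typescript
// Simulated I/O call: resolves with its label after a short delay.
function fetchPart(label: string, ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(label), ms));
}

async function loadAll(): Promise<string[]> {
  // Promise.all starts all three requests at once and resolves
  // when every one of them has finished, preserving input order.
  return Promise.all([
    fetchPart("config", 30),
    fetchPart("assets", 10),
    fetchPart("user", 20),
  ]);
}

loadAll().then((parts) => console.log(parts.join(",")));
// prints "config,assets,user"
```

Note that Promise.all resolves in the order the promises were passed in, not the order they complete, which is why "assets" (the fastest call) still appears second.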
Opening up new possibilities
Entire domains have been held back by the two-language problem, forced to split between a high-performance language and a high-level one with heavy FFI overhead in between. Chi eliminates that boundary.
Desktop applications
Building desktop apps means choosing between native performance with a difficult language, or developer-friendly tools that ship an entire browser engine just to render a UI. Either way, the result is bloated binaries, high memory usage, or a painful development experience.
With Chi: Write the UI and business logic with garbage collection, and take manual memory control where rendering and platform integration demand it — single binary, no embedded browser, no IPC.
Game engines
Game engines are typically written in C++ for performance, but gameplay code needs a higher-level scripting language. This creates a constant boundary between the two — crossing it costs performance, adds complexity, and limits what gameplay programmers can do.
With Chi: Gameplay programmers get GC and ergonomic iteration. Engine internals like physics and rendering get zero-cost abstractions. Same codebase, no binding layer.
Mobile applications
Cross-platform frameworks like React Native pay a heavy cost crossing the bridge between JavaScript and native code on every interaction. Going fully native means writing the app twice, or sharing logic through C++ with painful FFI bindings on each platform.
With Chi: One codebase compiles to a native binary per platform. UI code gets garbage collection, performance-critical paths get direct hardware access — no bridge, no runtime, no duplication.
AI/ML runtime
Python is where the models are built, but it's painful to distribute and serve. Projects like llama.cpp exist because shipping a C++ binary is so much simpler — but then you're iterating on model logic in C++, which is its own kind of painful.
With Chi: Ship a single binary with no runtime dependencies. Iterate on model logic with high-level ergonomics, or interface directly with the hardware when needed, in the same language.