IX-on-capOS Hosting Research
Research note on using IX as a package corpus and content-addressed build model for a more mature capOS system. It explains what IX provides, why it is useful for capOS, and how to extract the most value from it without importing CPython/POSIX assumptions as an architectural dependency.
What IX Is
IX is a source-based package/build system. It describes packages as templates, expands those templates into build descriptors and shell scripts, fetches and verifies source inputs, executes dependency-ordered builds, stores outputs in a content-addressed store, and publishes usable package environments through realm mappings.
For capOS, IX should be treated as three separable assets:
- a package corpus with thousands of package definitions and accumulated build knowledge;
- a content-addressed build/store model that already fits reproducible artifact management;
- a compact Python control plane that can be adapted once authority-bearing operations move behind capOS services.
IX should not be treated as a requirement to reproduce Unix inside capOS. Its current implementation uses CPython, Jinja2, subprocesses, shell tools, filesystem paths, symlinks, hardlinks, signals, and process groups because it runs on Unix-like hosts today. Those are implementation assumptions, not the part worth preserving unchanged.
Why IX Is Useful for capOS
capOS needs a credible path from isolated demos to a useful userspace closure. IX is useful because it supplies a package/build corpus and model that can exercise the exact system boundaries capOS needs to grow:
- process spawning with explicit argv, env, cwd, stdio, and exit status;
- fetch, archive extraction, and content verification as auditable services;
- Store and Namespace capabilities instead of ambient global filesystem authority;
- build sandboxing with explicit input, scratch, output, network, and resource policies;
- static-tool bootstrapping before a full dynamic POSIX environment exists;
- differential testing against the existing host IX implementation.
The main value is leverage. IX can give capOS real package metadata, real build scripts, and real toolchain pressure without making CPython or a broad POSIX personality the first required userspace milestone.
Best Way to Get the Most from IX
The optimal strategy is to preserve IX’s package corpus and build semantics while replacing the Unix-shaped execution boundary with capability-native services.
The high-value path is:
- Run upstream IX on the host first to build and validate early capOS artifacts.
- Use CPython/Jinja2 on the host as a reference oracle, not as the in-system foundation.
- Render IX templates through a Rust `ix-template` component that implements the subset IX actually uses.
- Run the adapted IX planner/control plane on native MicroPython once capOS has enough runtime support.
- Move fetch, extract, build, Store commit, Namespace publish, and process lifecycle into typed capOS services.
This gets most of IX’s value: package knowledge, reproducible build structure, and a practical self-hosting path. It avoids the lowest-value part: spending early capOS effort on a large CPython/POSIX compatibility layer just to preserve upstream implementation details.
Position
CPython is not an architectural prerequisite for IX-on-capOS.
It is a compatibility shortcut for running upstream IX with minimal changes. For a clean capOS-native integration, the better design is:
- keep IX’s package corpus and content-addressed build model;
- adapt IX’s Python control-plane code instead of preserving every CPython and POSIX assumption;
- run the adapted control plane on a native MicroPython port;
- move build execution, fetching, archive extraction, store mutation, and sandboxing into typed capOS services;
- render IX templates through a Rust template service or tightly scoped IX template engine, not full Jinja2 on MicroPython;
- keep CPython on the host as a differential test oracle and bootstrap tool, not as a required foundation layer for capOS.
MicroPython is a credible sweet spot only with that boundary. It is not a credible sweet spot if the requirement is “make upstream Jinja2, subprocess, fcntl, process groups, and Unix filesystem behavior all work inside MicroPython.”
Sources Inspected
- Upstream IX repository: https://github.com/pg83/ix
- IX package guide: `PKGS.md`
- IX core: `core/`
- IX templates: `pkgs/die/`
- Bundled IX template deps: `deps/jinja-3.1.6/`, `deps/markupsafe-3.0.3/`
- MicroPython library docs: https://docs.micropython.org/en/latest/library/index.html
- MicroPython CPython-difference docs: https://docs.micropython.org/en/latest/genrst/
- MicroPython porting docs: https://docs.micropython.org/en/latest/develop/index.html
- Jinja docs: https://jinja.palletsprojects.com/en/latest/intro/
- MiniJinja docs: https://docs.rs/minijinja/latest/minijinja/
Upstream IX Shape
IX is a source-based, content-addressed package/build system. Package
definitions are Jinja templates under pkgs/, mostly named ix.sh, and the
template hierarchy under pkgs/die/ expands those package descriptions into
JSON descriptors and shell build scripts.
The inspected clone has:
- 3788 package `ix.sh` files;
- 66 files under `pkgs/die`;
- a template chain centered on `base.json`, `ix.json`, `script.json`, `sh0.sh`, `sh1.sh`, `sh2.sh`, `sh.sh`, `base.sh`, `std/ix.sh`, and language/build-system templates for C, Rust, Go, Python, CMake, Meson, Ninja, WAF, GN, Kconfig, and shell-only generated packages.
The IX template surface is broad but not arbitrary Jinja. In the package tree surveyed, the Jinja tags used were:
| Tag | Count |
|---|---|
| `block` | 14358 |
| `endblock` | 14360 |
| `extends` | 3808 |
| `if` / `endif` | 451 / 451 |
| `include` | 344 |
| `else` | 123 |
| `set` / `endset` | 52 / 52 |
| `for` / `endfor` | 49 / 49 |
| `elif` | 23 |
No `macro`, `import`, `from`, `with`, `filter`, `raw`, or `call` tags were found in the inspected tree. That matters: IX’s template needs are probably a finite subset around inheritance, blocks, `self.block()`, `super()`, includes, conditionals, loops, assignments, expressions, and custom filters.
IX’s own Jinja wrapper is small. `core/j2.py` defines:
- a custom loader with `//` root handling;
- include inlining;
- filters such as `b64e`, `b64d`, `jd`, `jl`, `group_by`, `basename`, `dirname`, `ser`, `des`, `lines`, `eval`, `defined`, `field`, `pad`, `add`, `preproc`, `parse_urls`, `parse_list`, `list_to_json`, and `fjoin`.
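A few of those filters are small enough to sketch. The implementations below are illustrative guesses at minimal behavior to show the scale of the filter layer; they are not code from `core/j2.py`, and the upstream semantics may differ.

```python
import base64
import json
import posixpath

# Illustrative minimal versions of a few filter names from core/j2.py.
# Upstream behavior is not guaranteed to match.

def b64e(s: str) -> str:
    """Base64-encode a string (assumed UTF-8)."""
    return base64.b64encode(s.encode()).decode()

def b64d(s: str) -> str:
    """Inverse of b64e."""
    return base64.b64decode(s.encode()).decode()

def jd(obj) -> str:
    """Serialize a value to JSON with stable key order."""
    return json.dumps(obj, sort_keys=True)

def basename(p: str) -> str:
    """POSIX-style basename, independent of the host platform."""
    return posixpath.basename(p)

def lines(s: str) -> list:
    """Split text into non-empty stripped lines."""
    return [l for l in (x.strip() for x in s.splitlines()) if l]
```

Porting this layer is mostly mechanical; the risk concentrates in the loader and inheritance semantics, not the filters.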
That makes the template layer replaceable. The risk is not “Jinja is impossible.” The risk is “full upstream Jinja2 drags in a CPython-shaped runtime just to implement a template subset IX mostly uses in a disciplined way.”
Current IX Runtime Surface
The IX Python core uses ordinary host-scripting features:
- `os`, `os.path`, `json`, `hashlib`, `base64`, `random`, `string`, `functools`, `itertools`, `platform`, `getpass`;
- `shutil.which`, `shutil.rmtree`, `shutil.move`;
- `subprocess.run`, `check_call`, `check_output`;
- `os.execvpe`, `os.kill`, `os.setpgrp`, `signal.signal`;
- `fcntl.fcntl` to reset stdout flags;
- `asyncio` for graph scheduling;
- `multiprocessing.cpu_count`;
- `contextvars` fallback support for `asyncio.to_thread`;
- `tarfile`, `zipfile`;
- `ssl`, `urllib3`, usually only to suppress certificate warnings while fetchers are shell-driven;
- `os.symlink`, `os.link`, `os.rename`, `os.makedirs`, `open`, and file tests.
`core/execute.py` is the important boundary. It schedules a DAG, prepares output directories, calls shell commands with environment variables and stdin, checks output touch files, and kills the process group on failure.
`core/cmd_misc.py` and `core/shell_cmd.py` cover fetch, extraction, hash checking, archive unpacking, and hardlinking fetched inputs.
`core/realm.py` maps build outputs into realm names using symlinks and metadata under `/ix/realm`.
`core/ops.py` selects an execution mode. Today the modes are `local`, `system`, `fake`, and `molot`. A capOS executor mode is the correct integration point.
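Since `core/ops.py` already dispatches on a mode name, the integration point can be sketched as an extra branch. The upstream mode names come from the note above; `CapOSExecutor`, the `sandbox.run` signature, and the node fields are hypothetical, not upstream code.

```python
# Sketch of a "capos" execution mode alongside the upstream ones.
# The executor classes and sandbox.run signature are illustrative.

class LocalExecutor:
    """Placeholder for the upstream subprocess-based modes."""
    def run(self, node):
        raise NotImplementedError("host-side execution not modeled here")

class CapOSExecutor:
    """Routes build steps to a capOS BuildSandbox capability (hypothetical)."""
    def __init__(self, sandbox=None):
        self.sandbox = sandbox

    def run(self, node):
        # node.command / node.inputs / node.output are assumed planner fields.
        return self.sandbox.run(node.command, node.inputs, node.output)

def select_executor(mode: str, sandbox=None):
    if mode == "capos":
        return CapOSExecutor(sandbox)
    if mode in ("local", "system", "fake", "molot"):
        return LocalExecutor()
    raise ValueError(f"unknown execution mode: {mode}")
```

The key property is that adding the mode does not disturb the upstream dispatch, so a fork can track upstream with a small diff.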
CPython Path
CPython is the obvious route for upstream compatibility:
- upstream Jinja2 is designed for modern Python and uses normal CPython-style standard library facilities;
- IX’s current Python code assumes `subprocess`, `asyncio`, `fcntl`, `shutil`, archive modules, and process semantics;
- CPython plus `libcapos-posix` would let a large fraction of that code run with limited changes.
That does not make CPython the right product dependency for IX-on-capOS. CPython pulls in a large libc/POSIX surface and encourages preserving Unix process and filesystem assumptions that capOS should make explicit through capabilities.
CPython should be used in two places:
- Host-side bootstrap and reference evaluation.
- Optional compatibility mode once `libcapos-posix` is mature.
It should not be the required path for a clean IX-capOS integration.
If CPython is needed later, capOS has two routes:
- Native CPython through musl plus `libcapos-posix`.
- CPython compiled to WASI and run through a native WASI runtime.
The native POSIX route is the only route that makes sense for IX-style build
workloads. It needs fd tables, path lookup, read/write/close/lseek, directory
iteration, rename/unlink/mkdir, time, memory mapping, posix_spawn, pipes,
exit status, and eventually sockets. That is the same compatibility work
needed for shell tools and build systems, so it should arrive as part of the
general userspace-compatibility track, not as an IX-specific dependency.
The WASI route is useful for sandboxed or compute-heavy Python, but it is a poor fit for IX package builds because IX fundamentally drives external tools, filesystem trees, fetchers, and process lifecycles. WASI CPython can be useful as a script sandbox, not as the main IX appliance runtime.
MicroPython Path
MicroPython is attractive because capOS needs an embeddable system scripting runtime before it needs a full desktop Python environment.
The upstream docs frame MicroPython as a Python implementation with a smaller,
configurable library set. The latest library docs list micro versions of
modules relevant to IX, including `asyncio`, `gzip`, `hashlib`, `json`, `os`, `platform`, `random`, `re`, `select`, `socket`, `ssl`, `struct`, `sys`, `time`, `zlib`, and `_thread`, while warning that most standard modules are subsets and that port builds may include only part of the documented surface.
That is a good fit for capOS. It means a capOS port can expose a deliberately chosen OS surface instead of pretending to be Linux.
MicroPython should host:
- package graph traversal;
- package metadata parsing;
- target/config normalization;
- dependency expansion;
- high-level policy;
- command graph generation;
- calls into capOS-native services.
MicroPython should not own:
- generic subprocess emulation;
- shell execution internals;
- process groups or Unix signals;
- TLS/network fetching;
- archive formats beyond small helper cases;
- hardlink/symlink implementation;
- content store mutation;
- build sandboxing;
- parallel job scheduling if that wants kernel-visible resource control.
Those belong in capOS services.
Native MicroPython Port Shape
A capOS MicroPython port should be a new MicroPython platform port, not the Unix port with a large compatibility shim underneath.
The port should provide:
- VM startup through
capos-rt; - heap allocation from a fixed initial heap first, then
VirtualMemorywhen growth is available; - stdin/stdout/stderr backed by granted stream or Console capabilities;
- module import from a read-only Namespace plus frozen modules;
- a small VFS adapter over Store/Namespace for scripts and package metadata;
- native C/Rust extension modules for capOS capabilities;
- deterministic error mapping from capability exceptions to Python exceptions.
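The last bullet, deterministic error mapping, is best expressed as a fixed table rather than ad-hoc translation at each call site. A sketch, where the capability error codes and the `CapError` type are hypothetical:

```python
# Sketch of deterministic capability-error mapping for the port.
# The error-code strings and CapError shape are assumptions.

class CapError(Exception):
    """Raw error surfaced by a capability call (hypothetical)."""
    def __init__(self, code: str):
        super().__init__(code)
        self.code = code

# One fixed table; every native module routes errors through it.
_ERROR_MAP = {
    "notFound": FileNotFoundError,
    "denied": PermissionError,
    "wouldBlock": BlockingIOError,
    "invalid": ValueError,
}

def raise_mapped(code: str):
    """Translate a capability error code into a stable Python exception."""
    raise _ERROR_MAP.get(code, OSError)(code)
```

A single table keeps Python-visible behavior stable across native modules, which matters for frozen planner code.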
The initial built-in surface should be deliberately small:
- `sys` with argv/path/modules;
- `os` path and file operations backed by a granted namespace;
- `time` backed by a clock capability;
- `hashlib`, `json`, `binascii`/`base64`, `random`, `struct`;
- optional `asyncio` if the planner keeps Python-level concurrency;
- no general-purpose `subprocess` until the service boundary proves it is necessary.
For IX, the MicroPython port should ship frozen planner modules and native bindings to `ix-template`, `BuildCoordinator`, `Store`, `Namespace`, `Fetcher`, and `Archive`. That keeps the trusted scripting surface small and avoids import-time dependency drift.
Jinja2 and MicroPython
Full Jinja2 compatibility on MicroPython remains unproven and is probably not the optimal target.
Current Jinja docs say Jinja supports Python 3.10 and newer, depends on
MarkupSafe, and compiles templates to optimized Python code. The bundled IX
Jinja tree imports modules such as `typing`, `weakref`, `importlib`, `contextlib`, `inspect`, `ast`, `types`, `collections`, `itertools`, `io`, and MarkupSafe. Some of these can be ported or stubbed, but that is a CPython
compatibility project, not a small MicroPython extension.
The better path is to treat IX’s template language as an input format and render it with a capOS-native component.
Recommended template strategy:
- Build an `ix-template` Rust component using MiniJinja or a smaller IX-specific template subset.
- Register IX’s custom filters from `core/j2.py`.
- Implement IX’s loader semantics: `//` package-root paths, relative includes, and cached sources.
- Reject unsupported Jinja constructs with deterministic errors.
- Keep CPython/Jinja2 as a host-side oracle for differential testing until the capOS renderer matches the package corpus.
MiniJinja is a practical candidate because it is Rust-native, based on Jinja2
syntax/behavior, supports custom filters and dynamic objects, and has feature
flags for trimming unused template features. IX needs multi-template support
because it uses `extends`, `include`, and `block`.
If MiniJinja compatibility is insufficient, the fallback is not CPython by
default. The fallback is an IX-template subset evaluator that implements the
constructs actually used by `pkgs/`.
Optimal Architecture
The clean design is an IX-capOS build appliance, not a Unix personality layer that happens to run IX.
```mermaid
flowchart TD
  CLI[ix CLI or build request] --> Planner[ix planner on MicroPython]
  Planner --> Template[ix-template renderer]
  Planner --> Graph[normalized build graph]
  Template --> Graph
  Graph --> Coordinator[capOS BuildCoordinator service]
  Coordinator --> Fetcher[Fetcher service]
  Coordinator --> Extractor[Archive service]
  Coordinator --> Store[Store service]
  Coordinator --> Sandbox[BuildSandbox service]
  Fetcher --> Store
  Extractor --> Store
  Sandbox --> Proc[ProcessSpawner]
  Sandbox --> Scratch[writable scratch namespace]
  Sandbox --> Inputs[read-only input namespaces]
  Proc --> Tools[sh, make, cc, cargo, go, coreutils]
  Sandbox --> Output[write-once output namespace]
  Output --> Store
  Store --> Realm[Namespace snapshot / realm publish]
```
The planner remains small and scriptable. The authority-bearing work happens in services:
- `BuildCoordinator`: owns graph execution and job state.
- `Store`: content-addressed objects and output commits.
- `Namespace`: names, realms, snapshots, and package environments.
- `Fetcher`: network-capable source acquisition with explicit TLS and cache policy.
- `Archive`: deterministic extraction and path-safety checks.
- `BuildSandbox`: constructs per-build capability sets.
- `ProcessSpawner`: starts shell/tools with controlled argv, env, cwd, stdio, and granted capabilities.
- `Toolchain` packages: statically linked tools built externally first, then eventually by IX itself.
The adapted IX planner should call service APIs instead of shelling out for operations that are native capOS concepts.
Control-Plane Boundary
MicroPython should see a narrow, high-level API. It should not synthesize Unix from first principles.
Example shape:
```python
import ixcapos
import ixtemplate

pkg = ixcapos.load_package("bin/minised")
desc = ixtemplate.render_package(pkg.name, pkg.context)
graph = ixcapos.plan(desc, target="x86_64-unknown-capos")
result = ixcapos.build(graph)
ixcapos.publish_realm("dev", result.outputs)
```
The Python layer can still look like IX. The implementation behind it should be capability-native.
Service API Sketch
The exact schema should follow the project schema style, but this is the shape of the boundary:
```capnp
interface BuildCoordinator {
  plan @0 (package :Text, target :Text, options :BuildOptions)
      -> (graph :BuildGraph);
  build @1 (graph :BuildGraph) -> (result :BuildResult);
  publish @2 (realm :Text, outputs :List(OutputRef))
      -> (namespace :Namespace);
}

interface BuildSandbox {
  run @0 (command :Command, inputs :List(Namespace),
          scratch :Namespace, output :Namespace, policy :SandboxPolicy)
      -> (status :ExitStatus, log :BlobRef);
}

interface Fetcher {
  fetch @0 (url :Text, sha256 :Data, policy :FetchPolicy)
      -> (blob :BlobRef);
}

interface Archive {
  extract @0 (archive :BlobRef, policy :ExtractPolicy)
      -> (tree :Namespace);
}
```
Important policy fields:
- network allowed or denied;
- wall-clock and CPU budgets;
- maximum output bytes;
- allowed executable namespaces;
- allowed output path policy;
- whether timestamps are normalized;
- whether symlinks are preserved, rejected, or translated;
- whether hardlinks become store references or copied files.
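One way to make those fields concrete is a frozen policy record. The field names and defaults below are illustrative only; the real schema would follow the project's interface-definition style, not a Python dataclass.

```python
from dataclasses import dataclass

# Illustrative encoding of the sandbox policy fields listed above.
# Names and defaults are assumptions, not the project schema.

@dataclass(frozen=True)
class SandboxPolicy:
    network: bool = False                # network allowed or denied
    wall_clock_secs: int = 3600          # wall-clock budget
    cpu_secs: int = 3600                 # CPU budget
    max_output_bytes: int = 1 << 32      # maximum output bytes
    normalize_timestamps: bool = True    # reproducibility knob
    symlinks: str = "translate"          # "preserve" | "reject" | "translate"
    hardlinks: str = "store-ref"         # "store-ref" | "copy"

    def __post_init__(self):
        # Reject malformed policies at construction, not mid-build.
        if self.symlinks not in ("preserve", "reject", "translate"):
            raise ValueError("bad symlink policy")
        if self.hardlinks not in ("store-ref", "copy"):
            raise ValueError("bad hardlink policy")
```

Making the record immutable and validated up front means a policy can be hashed into the build's identity.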
Store and Realm Mapping
IX’s `/ix/store` maps well to capOS Store.
IX’s realms should not be literal symlink trees in capOS. They should be named Namespace snapshots:
| IX concept | capOS mapping |
|---|---|
| `/ix/store/<uid>-name` | Store object/tree with stable content hash and metadata |
| build output dir | write-once output namespace |
| build temp dir | scratch namespace with cleanup policy |
| realm | named Namespace snapshot |
| symlink from realm to output | Namespace binding or bind manifest |
| hardlinked source cache | Store reference or copy-on-write blob binding |
| `touch` output sentinel | build-result metadata, optionally synthetic file for compatibility |
This preserves IX’s reproducibility model without importing global Unix authority.
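The realm row can be illustrated with a toy table. `RealmTable` is not a proposed capOS API; it only shows the shape of "named immutable snapshot" replacing a mutable symlink tree.

```python
# Toy model of realm publish as a named namespace snapshot.
# A real implementation would live behind the Namespace service.

class RealmTable:
    """Maps realm names to immutable {package-path: store-ref} snapshots."""

    def __init__(self):
        self._realms = {}

    def publish(self, realm: str, bindings: dict) -> dict:
        # Freeze the bindings at publish time; later mutation of the
        # caller's dict must not change the published realm.
        snapshot = dict(bindings)
        self._realms[realm] = snapshot
        return snapshot

    def resolve(self, realm: str, path: str) -> str:
        """Look up the store reference bound at a path in a realm."""
        return self._realms[realm][path]
```

Because a realm is a value rather than a directory of symlinks, two realms can share bindings without aliasing writes.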
Process and Filesystem Requirements
A mature capOS needs these primitives before IX builds can run natively:
- `ProcessSpawner` and `ProcessHandle`;
- argv/env/cwd/stdin/stdout/stderr passing;
- exit status;
- pipes or stream capabilities;
- fd-table support in the POSIX layer for ported tools;
- read-only input namespaces;
- writable scratch namespaces;
- write-once output namespaces;
- directory listing, create, rename, unlink, and metadata;
- symlink translation or explicit rejection policy;
- hardlink translation or store-reference fallback;
- monotonic time;
- resource limits;
- cancellation.
For package builds, the tool surface is larger than IX’s Python surface:
- `sh`;
- `find`, `sed`, `grep`, `awk`, `sort`, `xargs`, `install`, `cp`, `mv`, `rm`, `ln`, `chmod`, `touch`, `cat`;
- `tar`, `gzip`, `xz`, `zstd`, `zip`, `unzip`;
- `make`, `cmake`, `ninja`, `meson`, `pkg-config`;
- C compiler/linker/archive tools;
- `cargo` and Rust toolchains;
- Go toolchain;
- Python only for packages that build with Python.
IX’s static-linking bias helps because the early tool closure can be imported as statically linked binaries.
What to Patch Out of IX
For a clean capOS fit, patch or replace these upstream assumptions:
| Upstream assumption | capOS replacement |
|---|---|
| `subprocess.run` everywhere | `BuildSandbox.run()` or `ProcessSpawner` |
| process groups and SIGKILL | `ProcessHandle.killTree()` or sandbox cancellation |
| `fcntl` stdout flag reset | remove or make no-op |
| `chrt`, `nice` | scheduler/resource policy on sandbox |
| `sudo`, `su`, `chown` | no permission-bit authority; use capability grants |
| `unshare`, tmpfs, jail | `BuildSandbox` with explicit caps |
| `/ix/store` global path | Store capability plus namespace mount view |
| `/ix/realm` symlink tree | Namespace snapshot/publish |
| hardlinks for fetched files | Store refs or copy fallback |
| `curl`/`wget` subprocess fetch | Fetcher service |
| Python `tarfile`/`zipfile` | Archive service |
| `asyncio` executor | BuildCoordinator scheduler |
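As a concrete example of the first row, the patch direction in the adapted planner looks like this. The sandbox binding and its `run` signature are assumptions mirroring the service sketch earlier; `FakeSandbox` exists only to make the snippet executable.

```python
# Sketch of replacing upstream subprocess.run call sites with a
# sandbox capability call. The signature is an assumption.

class BuildFailure(Exception):
    pass

def run_build_step(sandbox, command, inputs, scratch, output, policy):
    """Replacement for an upstream subprocess.run call site: the sandbox
    returns a typed exit status and a log reference instead of raising
    host-process errors."""
    status, log = sandbox.run(command, inputs, scratch, output, policy)
    if status != 0:
        raise BuildFailure(f"step failed with status {status}")
    return log

class FakeSandbox:
    """Stand-in used here only so the sketch runs without capOS."""
    def run(self, command, inputs, scratch, output, policy):
        return 0, b"log"
```

The planner sees only typed results; process groups, signals, and cleanup stay inside the sandbox service.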
This is more invasive than a “light patch”, but it is cleaner. The IX package corpus and target/build knowledge are preserved; Unix process plumbing is not.
MicroPython Port Scope
The MicroPython port should be sized around IX planner needs plus general system scripting:
Native modules:
- `capos`: bootstrap capabilities, typed capability calls, errors.
- `ixcapos`: package graph and build-service client bindings.
- `ixtemplate`: template render calls if the renderer is an embedded Rust/C component.
- `ixstore`: Store and Namespace helpers.
Python/micro-library requirements:
- `json`;
- `hashlib`;
- `base64` or `binascii`;
- an `os.path` subset;
- `random`;
- `time`;
- a small `shutil` subset for path operations if old IX code remains;
- a small `asyncio` only if planner concurrency remains in Python.
Avoid implementing:
- general `subprocess`;
- general `fcntl`;
- full `signal`;
- full `multiprocessing`;
- full `tarfile`;
- full `zipfile`;
- full `ssl`/`urllib3`;
- full Jinja2.
Those are symptoms of preserving the wrong boundary.
CPython Still Has a Role
CPython remains useful even if it is not a capOS prerequisite:
- run upstream IX on the development host;
- compare rendered descriptors from CPython/Jinja2 against `ix-template`;
- generate fixtures for the capOS renderer;
- bootstrap the first static tool closure;
- serve as a later optional POSIX compatibility demo.
Differential testing should be explicit:
```mermaid
flowchart LR
  Pkg[IX package] --> Cpy[Host CPython + Jinja2]
  Pkg --> Cap[capOS ix-template]
  Cpy --> A[descriptor A]
  Cap --> B[descriptor B]
  A --> Diff[normalized diff]
  B --> Diff
  Diff --> Corpus[compatibility corpus]
```
This makes CPython a test oracle, not a trusted runtime dependency inside capOS.
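The normalized-diff step reduces to canonical serialization plus comparison. In the sketch below the renderer callables stand in for the two engines, and the canonicalization rule (sorted keys, compact separators) is an assumption about what counts as an incidental difference.

```python
import json

# Sketch of the "normalized diff" step from the flow above.

def normalize(descriptor: dict) -> str:
    """Canonical JSON so incidental key order and whitespace don't diff."""
    return json.dumps(descriptor, sort_keys=True, separators=(",", ":"))

def compare(render_a, render_b, package: str) -> dict:
    """Render one package through both engines and record the verdict."""
    a = normalize(render_a(package))
    b = normalize(render_b(package))
    return {"package": package, "match": a == b, "a": a, "b": b}
```

Run over the whole package tree, the mismatching entries become the compatibility corpus.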
Staged Plan
Stage A: Host IX builds capOS artifacts
Run IX on a Linux host first. Add a `capos` target and recipes for static capOS ELFs. This validates package metadata, target triples, linker flags, and static closure assumptions before capOS hosts any of it.
Outputs:
- `x86_64-unknown-capos` target model in IX;
- recipes for `libcapos`, `capos-rt`, shell/coreutils candidates, MicroPython, and archive/fetch helpers;
- static artifacts imported into the boot image or Store.
Stage B: Template compatibility harness
Build `ix-template` on the host. Render a package corpus through CPython/Jinja2 and through `ix-template`. Normalize JSON/script output and record divergences.
Outputs:
- supported IX template subset;
- custom filter implementation;
- fixture corpus;
- list of unsupported packages or constructs.
Stage C: Native MicroPython port
Port MicroPython to capOS as a normal native userspace program using `capos-rt` and a small libc/POSIX subset only where needed.
Outputs:
- REPL or script runner;
- frozen IX planner modules;
- native `capos`, `ixcapos`, and `ixtemplate` modules;
- no promise of full CPython compatibility.
Stage D: BuildCoordinator and sandboxed execution
Implement capOS-native build services and run simple package builds using externally supplied static tools.
Outputs:
- build graph execution;
- per-build scratch/output namespaces;
- deterministic logs and output commits;
- cancellation and resource policies.
Stage E: IX package corpus migration
Patch IX templates for capOS target semantics. Start with simple C/static packages, then Rust, then Go.
Outputs:
- C/static package subset;
- regular Rust package support once regular Rust runtime/toolchain work is ready;
- Go package support when `GOOS=capos` or imported Go toolchain support is credible;
- WASI packages as a separate target family where useful.
Stage F: Self-hosting
Run the IX-capOS appliance inside capOS to rebuild a meaningful part of its own userspace closure.
Outputs:
- build the MicroPython IX planner inside capOS;
- build core shell/coreutils/archive tools inside capOS;
- build `libcapos` and selected static service binaries;
- eventually build Rust and Go runtime/toolchain pieces.
Why This Is Better Than “CPython First”
The CPython-first route optimizes for running upstream IX quickly. The MicroPython-plus-services route optimizes for capOS’s actual design:
- capability authority stays typed and explicit;
- build isolation is native instead of Linux namespace emulation;
- Store/Namespace are first-class rather than hidden behind `/ix`;
- fetch/archive/build operations are auditable services;
- the scripting runtime remains small;
- the system does not need full CPython before it can have a package manager;
- CPython can still be added later through the POSIX layer without blocking IX-capOS.
The tradeoff is that IX-capOS becomes a real port/fork at the control-plane boundary. That is acceptable for a clean capability-native fit.
Risks
Template compatibility is the main technical risk. IX uses a restricted-looking
Jinja subset, but exact `self.block()`, `super()`, whitespace, expression, and
undefined-value behavior must match closely enough for package hashes to remain
stable. This needs corpus testing, not confidence.
Build-script compatibility is the largest scope risk. Even if IX planning is native, the package corpus still executes conventional build systems. capOS must provide enough shell, coreutils, archive, compiler, and filesystem behavior for those tools.
Toolchain bootstrapping is a long dependency chain. The first useful IX-capOS system will import statically linked tools from a host. Native self-hosting is late-stage work.
Store semantics need care around directories, symlinks, hardlinks, mtimes, and executable bits. These details affect build reproducibility and package compatibility.
MicroPython must not grow into a bad CPython clone. If many missing modules are implemented only to satisfy upstream IX assumptions, the design boundary has failed.
Recommendation
Adopt IX as a package corpus and build model, not as a CPython/POSIX program to preserve unchanged.
The optimal capOS-native solution is:
- Host-side upstream IX remains available for bootstrap and oracle tests.
- `ix-template` in Rust renders the actual IX template subset.
- Native MicroPython runs the adapted IX planner/control plane.
- capOS services execute all authority-bearing operations: fetch, extract, build sandbox, Store commit, Namespace publish, and process lifecycle.
- CPython is deferred to general POSIX compatibility and optional tooling.
This makes MicroPython the sweet spot for the in-system IX control plane while avoiding the trap of turning MicroPython into CPython.