Proposal: Repository Composition
How capOS should be split across repositories so that the public capability-OS claim, the kernel review queue, and the security/release cadence stay recognizable as the project grows beyond a single private workspace.
Purpose
capOS currently lives in a single private repository that mixes the kernel, the userspace runtime, the native shell, generic capability/IPC/ring demos, the Aurelian Frontier game, an academic whitepaper draft, the public docs site sources, and proposals for protocol stacks, language runtimes, GPU support, cloud images, and other future tracks.
That packing is acceptable while the project is private and agent-driven: one workspace, one review loop, one history. It is not the right shape once capOS becomes public. A single-repository public capOS would conflate unrelated scopes, drag unrelated tracks through one review queue, attach the OS security posture to product-shaped surfaces, and force unrelated release cadences to share one tag stream.
This proposal defines:
- what the public capOS core repository should defend (the scope rule);
- what should ship in sibling repositories that depend on capOS;
- the criteria for when a track is ready to split;
- the cross-repository mechanics that keep splits honest.
It generalizes the “Repository Hygiene Gates” of
docs/proposals/public-release-boundaries-proposal.md. The adventure split
and the curated git-history rewrite remain release gates in that proposal;
this proposal explains why those gates exist and how the same rule applies
to other tracks over time.
Non-Goals
- This proposal does not require splitting any track on a deadline beyond the explicit release gates already named in docs/proposals/public-release-boundaries-proposal.md. It defines a rule, not a calendar.
- It does not redesign the capability model, schema, or kernel/runtime boundary. Those are owned by the relevant subsystem proposals.
- It does not propose a multi-organization governance model. capOS may remain a single-maintainer or small-team project across multiple repositories.
- It does not propose mirroring sibling repositories back into capOS. Once a track has split, capOS does not re-vendor it.
- It does not promise a public chat or coordination forum for cross-repo work; that follows the launch phases in the public-release proposal.
Scope Rule For The Core Repository
The capOS core repository defends a narrow, recognizable claim. A track belongs in the core repository when at least one of the following is true:
- removing it would weaken a capability-OS invariant the kernel or runtime currently enforces;
- removing it would delete a proof the documented review process relies on;
- it is part of the minimum surface required to boot capOS in QEMU and exercise the documented capability/IPC/ring/scheduling/security invariants.
A track does not belong in the core repository when its primary purpose is product, protocol, or language-runtime work that happens to run on capOS, even when it currently shares a workspace with the kernel.
In practice the core repository should contain:
- the schema definitions and the generated bindings the kernel and runtime rely on (schema/, capos-abi/, capos-lib/, capos-config/);
- the kernel itself, including arch-specific code under kernel/src/arch/;
- the userspace runtime contract that consumes the schema (capos-rt/, init/, shell/);
- the manifest and code-generation tooling needed to boot and build capOS (tools/mkmanifest/, tools/capnp-build/);
- demos that exist only to exercise core capability/IPC/ring/scheduling/trust-boundary invariants, not application-shaped product surfaces;
- the security boundary, verification workflow, trusted-build-input, panic-surface inventory, and authority-accounting/transfer design documents;
- the core proposals describing the OS itself: capability model, IPC, error handling, scheduling, SMP, networking architecture (high level), storage and naming (high level), service architecture, security and verification, formal MAC/MIC, live upgrade design, threading, key-management abstractions, user identity and policy abstractions;
- docs/changelog.md, docs/roadmap.md, WORKPLAN.md, and REVIEW.md/REVIEW_FINDINGS.md for the narrowed core scope;
- the documentation site sources that describe the core scope. The deployment of those sources can be a sibling concern (see “Public Website And Hosted Demos” below); the sources themselves stay with the OS they describe.
Tracks That Should Eventually Move Out
The following tracks already exist or are planned, and each one is or will become a candidate for a sibling repository. Each carries its own scope statement, security posture, maintainer load profile, and release cadence that should not be merged into capOS’s core scope statement.
The list is descriptive, not a queue. A track moves only when the split criteria below are satisfied for that track.
Adventure Ecosystem
Server, client, NPC processes, content generator, content blobs,
adventure-named manifests/run targets, adventure proposals/backlog/demo
docs, contributor-quest mechanics. The split is already a release gate in
docs/proposals/public-release-boundaries-proposal.md. The dedicated
sibling is capos-adventure. Any capOS invariant currently exercised
only by an adventure demo needs a non-adventure equivalent in capOS, or
moves into capos-adventure together with its proof.
Whitepaper And Academic Publication
papers/schema-as-abi/ is a Typst project, and docs/paper/plan.md,
docs/paper/outline.md, and docs/paper/evidence-gaps.md are paper
planning documents. Academic publication has its own review cycle,
publication venue, citation cadence, and corrections process that should
not share the OS’s tag stream. A capos-paper repository can cite capOS
by tag or commit, track evidence-gap closure, and run paper-specific
build/CI without expanding the OS repository’s review surface.
docs/changelog.md and proof-evidence narrative remain in capOS so the
paper has a stable reference target.
Public Website And Hosted Demos
The public landing page, marketing-shaped copy, hosted-demo deployment scripts (Cloudflare Pages glue, container images, CI for the public site, hosted WebShellGateway and adventure-demo deployment) are operational concerns with public-traffic implications. They should not share a release cadence with kernel changes, and their incident response must not pull on kernel review capacity.
mdBook content describing the OS itself stays in capOS. The deployment
of that content as a public site can move to a sibling repository (for
example capos-site) that depends on the capOS docs sources by tag or
commit. The hosted public WebShellGateway or adventure-demo deployment
follows Phase D of the public-release proposal and lives outside capOS.
Userspace Network Stack And NIC Drivers
The current QEMU smoke path keeps smoltcp, virtio-net, the line
discipline, and the Telnet IAC filter inside kernel/. Once the
userspace driver authority gate (docs/dma-isolation-design.md) lands
and the userspace TCP/IP stack and NIC drivers leave the kernel
(docs/proposals/networking-proposal.md Phase C), the resulting
userspace components are large enough and carry enough independent
attack surface to live in capos-net. capOS keeps the kernel-side
DMA/MMIO/interrupt authority gates and the schema/ABI of the network
capabilities; the implementation of the stack is a downstream consumer.
Production Remote-Access Services
The host-local Telnet demo is research evidence for the
TerminalSession / SessionManager / AuthorityBroker /
RestrictedShellLauncher boundary; it stays in capOS. The host-local
SSH Shell Gateway research demo similarly stays as long as it is a
host-local research artifact under
docs/proposals/ssh-shell-proposal.md.
The production successors – a real OpenSSH-protocol gateway with
production host-key management, persistent authorized-key/account
storage, channel policy, audit, and remote-traffic threat model, and any
production WebShellGateway with browser-side session UI and public
moderation policy – are product-shaped services. They belong in
dedicated repositories (for example capos-ssh-gateway,
capos-web-shell) once they outgrow the host-local research surface.
Protocol Stacks Built On Key-Management Primitives
TLS/X.509, OIDC/OAuth2, ACME, OCSP, CT log handling, DPoP, workload
identity federation, and similar large protocol surfaces described in
docs/proposals/certificates-and-tls-proposal.md and
docs/proposals/oidc-and-oauth2-proposal.md should ship as sibling
repositories (for example capos-tls, capos-oidc) consuming the capOS
key-management primitives. Their CVE response, dependency surface, and
review queue should not be merged into the OS core’s. capOS keeps the
abstract SymmetricKey, PrivateKey, KeySource, KeyVault, and
audit primitives from
docs/proposals/cryptography-and-key-management-proposal.md; the
protocol stacks are downstream consumers.
Language Runtimes And Toolchain Ports
Go (GOOS=capos), libc / libcapos, WASI, Lua, and any future language
runtime port belong in dedicated repositories (capos-go,
capos-libc, capos-wasi, capos-lua, …). Language-runtime
releases follow upstream language cadence, and porting work should not
block kernel review. The capOS userspace ABI documented in
capos-rt/, capos-abi/, and the schema is the contract these ports
target.
GPU And CUDA Capability Integration
The GPU capability work in docs/proposals/gpu-capability-proposal.md
brings a large external driver and toolkit dependency surface, vendor
runtime distribution constraints, and hardware-specific testing needs.
When implementation begins it belongs in a dedicated capos-gpu
repository. capOS keeps the abstract device-authority gate and the
relevant capability schema; vendor-specific glue and toolkit packaging
is downstream.
LLM And Agent Runtime
The agent shell tool runner, model bindings, on-ISO local-model
packaging, and provider-specific glue from
docs/proposals/llm-and-agent-proposal.md and
docs/proposals/realtime-voice-agent-shell-proposal.md carry
independent supply-chain, content-policy, and operational concerns.
Provider TOS, model weight redistribution, and content-safety reviews
do not belong on the kernel review queue.
The shell capability and authority model – including how the agent
shell’s per-tool consent/step-up/forbidden modes consume broker-issued
capabilities – stays in capOS. The agent runner itself, the model
bindings, the on-ISO local-model packaging, and the provider glue ship
in a dedicated repository when implementation begins (for example
capos-agent-shell).
Cloud Images And Instance Bootstrap
Cloud VM image building, AWS/GCP/Azure packaging, NVMe and cloud-NIC
integrations, and the cloud-metadata bootstrap from
docs/proposals/cloud-deployment-proposal.md and
docs/proposals/cloud-metadata-proposal.md are operational
image-building concerns with cloud-vendor dependency exposure. They
should live in a capos-cloud-images repository that consumes capOS
releases as inputs.
Volume Encryption And KMS Integration
The encryption-at-rest work from
docs/proposals/volume-encryption-proposal.md will pull in cloud KMS
clients, key-rotation policy, and cryptographic dependency exposure
that should ship in a dedicated capos-volume-crypto (or similarly
named) repository. The abstract key-management contracts and the
storage-side authority gates remain in capOS.
Hosted Demo Tooling, Logs, And Operational Glue
Anything that is part of operating a public capOS deployment – session-quota policy, browser-side WebShellGateway UI, public landing copy, hosted log/metric pipelines, abuse-mitigation glue, public moderation tooling – is operational rather than OS work. It should live with the relevant sibling (for example public website or WebShellGateway service repositories) rather than inside capOS.
Tracks That Stay In The Core Repository
These tracks are intrinsic to the OS claim and should not be considered split candidates:
- the kernel, including arch-specific code under kernel/src/arch/;
- the schema definitions and generated bindings;
- the userspace runtime (capos-rt), init, the native shell, and the manifest tools needed to boot capOS;
- demos that exercise core capability/IPC/ring/scheduling invariants: capset-bootstrap, console-paths, ring-corruption, ring-reserved-opcodes, ring-nop, ring-fairness, endpoint-roundtrip, ipc-server, ipc-client, terminal-session, terminal-stranger, tls-smoke (the TLS userspace runtime smoke, not the protocol stack), virtual-memory, timer-smoke, timer-flood, ipc-zerocopy-demo, and any future demo that exists only to exercise a core capability invariant;
- the chat demo as a generic IPC and service-object example may stay, but only in a form that defends a capability-OS invariant. Game-shaped chat features (named NPC actors, contributor-quest framing, adventure-tied identity flows) follow the adventure split;
- the security boundary, verification workflow, trusted-build-input, panic-surface inventory, authority-accounting/transfer design, and DMA-isolation design documents;
- the core proposals listed in the “Scope Rule” section above;
- docs/changelog.md, docs/roadmap.md, WORKPLAN.md, and REVIEW.md/REVIEW_FINDINGS.md for the narrowed core scope;
- docs/research/, because each research note grounds a current capability-OS design decision; research notes that grow into full proposals follow the relevant subsystem.
When To Split A Track
A track should not be split prematurely. While a track lives only in proposal documents or a small experimental crate, the friction of a sibling repository (separate CI, separate review setup, separate license and security policy, cross-repo version pinning) outweighs the benefit.
The right time to split is when all of the following are true for the track:
- Independent product or protocol shape. The track has a recognizable purpose that is not “exercise a capOS invariant”. For example, a TLS stack, a Go port, a hosted public demo, or a game.
- Non-trivial implementation surface. The track draws review attention away from kernel review or carries an independent dependency surface large enough to need its own dependency-policy/audit posture.
- Defensible cross-repo dependency direction. The sibling can build against a tagged or pinned capOS reference without modifying capOS internals; the inverse direction (capOS depending on the sibling for a core invariant proof) is not required.
- Independent release cadence is desirable. The track wants its own tag stream, security advisory channel, or upstream synchronization schedule.
When any of these is missing, the track stays in the core repository or remains a proposal until it is ready.
A useful counter-test: would a public reader looking at the core capOS README, security policy, and release notes be misled by the presence of this track? If yes, that is a sign the scope statement is being stretched and the track is overdue to split. If a reader would not notice, the benefit of splitting is small.
Cross-Repository Mechanics
When a sibling repository is created, the following mechanics apply.
GitHub Organization Placement
The capOS core repository currently lives under a personal GitHub account. Once one or more siblings exist, hosting them all under the same personal account conflates personal projects with the capOS project, makes maintainer-set changes harder, and gives a confusing public landing surface for readers looking for the project.
The intended landing place for capOS and its siblings is a dedicated
GitHub organization, cap-os-dev. Concretely:
- the curated public-import history defined by the history-rewrite gate in docs/proposals/public-release-boundaries-proposal.md is published as a fresh cap-os-dev/capos repository when the organization is used. A GitHub repository transfer or fork from the current private capOS repository is not the intended mechanism, because it would carry the existing private uncurated history, branches, refs, and intermediate agent-loop state into the public organization. The current private repository may continue to exist as an internal mirror after publication, but it is not the same repository as the public one;
- siblings are created under cap-os-dev/<sibling> rather than under any individual maintainer’s account; for example cap-os-dev/capos-adventure, cap-os-dev/capos-paper, cap-os-dev/capos-site, cap-os-dev/capos-net, cap-os-dev/capos-ssh-gateway, cap-os-dev/capos-web-shell, cap-os-dev/capos-tls, cap-os-dev/capos-oidc, cap-os-dev/capos-go, cap-os-dev/capos-libc, cap-os-dev/capos-wasi, cap-os-dev/capos-lua, cap-os-dev/capos-gpu, cap-os-dev/capos-agent-shell, cap-os-dev/capos-cloud-images, cap-os-dev/capos-volume-crypto;
- repository names listed in this proposal and in docs/proposals/public-release-boundaries-proposal.md are intent names, not reservations. Final naming happens at the moment a sibling is actually created and may collapse, rename, or skip entries based on what the project actually needs.
Using a dedicated organization also makes the public-release maintainer boundaries easier to enforce: organization-level security policy, issue-template defaults, branch-protection settings, and team membership apply consistently across capOS and its siblings without per-repository drift.
The org adoption is not a blocker for the public-release hygiene gates:
the adventure split and history rewrite from
docs/proposals/public-release-boundaries-proposal.md are the
release-blocking gates, and they can land regardless of whether the
public-import history is first published under cap-os-dev/capos or
temporarily under another account. cap-os-dev is, however, the
recommended public landing surface, and once it is used, public-facing
materials should point at the organization rather than at any
individual maintainer’s account.
Dependency Direction
- The sibling depends on capOS by tag, commit, or other pinned reference; it does not depend on capOS by path-dependency into a private workspace.
- capOS does not depend on the sibling for any core invariant or proof. capOS may declare an optional release artifact from a sibling (for example a packaged adventure demo image) when an end-to-end story requires it, but the artifact must be a declared release input, not a path link.
- When a sibling demonstrates a capOS invariant by running on it, the sibling records the capOS reference (tag or commit) it was tested against, and the sibling carries the proof, not capOS.
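To make the dependency direction concrete, a sibling crate could pin capOS by a tagged git reference in its manifest rather than by a path dependency. This is a hypothetical sketch: the organization URL, crate name, and tag below are illustrative intent names from this proposal, not published artifacts.

```toml
# Sketch of a sibling's Cargo.toml dependency section (names and tag
# are illustrative). The sibling builds against a tagged capOS
# reference, never a path into a private workspace.
[dependencies]
capos-abi = { git = "https://github.com/cap-os-dev/capos", tag = "v0.4.0" }
```

Pinning by `tag` (or `rev`) keeps the dependency direction auditable: the sibling records exactly which capOS revision it targets, and capOS never needs to know the sibling exists.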
Per-Repository Hygiene
- Each sibling repository owns its own license, CONTRIBUTING.md, SECURITY.md, issue/PR templates, and review-capacity statement, even when the initial maintainer set overlaps with capOS.
- Each sibling repository owns its own scope statement and public claim list. Public capOS claims do not extend over sibling content; sibling claims do not extend over capOS.
- Generated artifacts, content blobs, and large binaries belong with the sibling that owns the source they describe, never with capOS unless capOS itself produced them.
Documentation Location Rule
- Documentation about a sibling lives in the sibling. capOS may keep a short pointer in docs/proposals/index.md, the README, or a release-notes section so readers can find the sibling, but it does not duplicate sibling-internal proposals, backlog, or roadmap state.
- Cross-repo planning that is privately coordinated must still respect the public-release rule that “if a task is public, its active status lives in one place”; capOS does not maintain a public mirror of sibling task state.
Security Coordination
- During a transition phase, security reports affecting capOS and a sibling are coordinated through the capOS SECURITY.md contact, with downstream sibling SECURITY.md files pointing back to that contact until the sibling has its own staffed response.
- Once a sibling has a staffed security response, its SECURITY.md becomes authoritative for sibling-only issues, and only cross-cutting reports require coordination.
- Neither capOS nor a sibling promises a security-fix SLA at the research-software stage; the capOS security statement language remains the baseline.
Release And Tagging
- Each sibling owns its own release cadence and tag stream.
- A sibling release that requires a specific capOS revision pins it explicitly in the sibling’s release notes.
- capOS releases do not promise sibling availability or compatibility beyond “the schema and userspace ABI used by sibling X at tag Y are what capOS at tag Z provides”.
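The pinning rule above can be sketched as the one-line record a sibling release script might emit into its release notes. Everything here is an illustrative assumption: the sibling name, both tags, and the wording are hypothetical, not a defined format.

```shell
# Hypothetical sketch: emit the pin line a sibling release records,
# naming the exact capOS revision it was built and tested against.
# All names and tags below are illustrative.
capos_tag="v0.4.0"     # the capOS tag this sibling release pins
sibling_tag="v0.1.0"   # the sibling's own release tag
printf 'capos-net %s: built and tested against capOS %s\n' \
  "$sibling_tag" "$capos_tag"
```

The point is not the script but the invariant it encodes: every sibling release names one specific capOS reference, so compatibility claims are checkable after the fact.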
History At Split Time
- A split should not silently remove evidence. Before a sibling becomes the authoritative location for a track, the relevant proofs, demos, and documentation must be present and reviewed in the sibling.
- The capOS history rewrite specified in docs/proposals/public-release-boundaries-proposal.md does not need to preserve the pre-split track history inside capOS. The sibling’s history begins at split time with whatever curated initial state the sibling chooses to publish.
- The capOS docs/changelog.md continues to record completed capability-OS milestones; sibling milestones are recorded in the sibling.
Migration Approach
The split is gradual and gated by readiness, not by a release calendar beyond the explicit public-release prerequisites.
The intended order is:
- Adventure ecosystem – gated by the public-release adventure-split gate. This is the first concrete instance of the rule and produces a reusable pattern (cross-repo dependency direction, sibling hygiene, documentation pointers) for later splits.
- Whitepaper / academic publication – when the paper is ready to accept public review, or when its evidence-gap log starts to drive review cycles independent of the kernel review queue.
- Public website and hosted-demo deployment – when a hosted demo becomes a real operational milestone (Phase D of the public-release proposal) rather than a research artifact.
- Userspace network stack and NIC drivers – after the userspace driver authority gate lands and the in-kernel networking surface shrinks to the kernel-side authority gates.
- Production remote-access services, protocol stacks, language runtimes, GPU, LLM/agent, cloud images, volume encryption – as their implementations begin and meet the split criteria.
Splits earlier in this list set the precedent for splits later in the list. If the adventure split is messy, later splits should learn from it before being attempted.
Anti-Goals
- Do not split the kernel. The kernel is one repository. Architecture layers (kernel/src/arch/<arch>/) stay inside capOS; aarch64 and other ports stay in-tree. The split rule is about distinguishing the OS from applications, protocols, and language runtimes, not about cutting the kernel into micro-repos.
- Do not split userspace runtime internals. capos-rt, init, and the native shell stay together because they share the userspace ABI contract.
- Do not vendor sibling repositories back into capOS. Once a track has split, capOS does not re-import it as a path or vendored copy. Cross-repo coordination uses tags and pinned references, not vendoring.
- Do not split for marketing reasons alone. The split criteria are about protecting review capacity, security posture, and the public scope statement. Splitting only to project a larger ecosystem surface area without staffed maintenance is not allowed.
- Do not block on a perfect split plan. A track that meets the split criteria can be moved with the minimum mechanics described above. Cross-repo mechanics will improve incrementally; waiting for an ideal model before any split is its own failure mode.
Open Questions
- Where should the chat demo end up after the adventure split? It is partly generic IPC scaffolding and partly application-shaped (chat rooms, message history). The current intent is that a generic capability-IPC chat surface stays in capOS as a service-object proof, while game-shaped chat features follow adventure. The exact line is not yet drawn.
- How should docs/research/ be treated long term? Each note grounds a current design decision, so it stays in capOS. If research notes proliferate after public release, a curated docs/research/index.md may be enough to keep them navigable without splitting them out.
- Should the mdBook docs sources and the docs site deployment be in the same repository or split? The current intent is that the sources stay in capOS while the deployment can move to a sibling. Whether that split is worth doing before a hosted demo exists is open.
- How should cross-repo CI evidence be presented when a paper or a service repository wants to cite a capOS proof run? A simple “tested against capOS commit X” record is the baseline; richer attestation can be added later if the project needs it.
- When is the right moment to publish a sibling’s first release? Sibling-internal readiness criteria belong in the sibling; capOS does not gate sibling releases beyond the cross-repo mechanics described here.
Design Grounding
Grounding files for this proposal:
- README.md
- WORKPLAN.md
- REVIEW.md
- REVIEW_FINDINGS.md
- docs/roadmap.md
- docs/changelog.md
- docs/proposals/public-release-boundaries-proposal.md
- docs/proposals/aurelian-frontier-proposal.md
- docs/proposals/contributor-quest-mechanics-proposal.md
- docs/proposals/networking-proposal.md
- docs/proposals/ssh-shell-proposal.md
- docs/proposals/shell-proposal.md
- docs/proposals/boot-to-shell-proposal.md
- docs/proposals/cloud-deployment-proposal.md
- docs/proposals/cloud-metadata-proposal.md
- docs/proposals/cryptography-and-key-management-proposal.md
- docs/proposals/certificates-and-tls-proposal.md
- docs/proposals/oidc-and-oauth2-proposal.md
- docs/proposals/llm-and-agent-proposal.md
- docs/proposals/realtime-voice-agent-shell-proposal.md
- docs/proposals/gpu-capability-proposal.md
- docs/proposals/go-runtime-proposal.md
- docs/proposals/userspace-binaries-proposal.md
- docs/proposals/volume-encryption-proposal.md
- docs/proposals/storage-and-naming-proposal.md
- docs/proposals/security-and-verification-proposal.md
- docs/proposals/mdbook-docs-site-proposal.md
- docs/security/trust-boundaries.md
- docs/security/verification-workflow.md
- docs/dma-isolation-design.md
- docs/trusted-build-inputs.md
No docs/research/ report is directly applicable. This proposal is
project-composition policy layered on existing capOS architecture, not a
new OS architecture or runtime design.