refactor(monorepo): restructure repo with core, website, vault, os packages

- add monorepo Makefile delegating to sub-projects
- update CI workflows, GoReleaser, gitignore for new structure
- revise README, CONTRIBUTING.md for monorepo overview
- bump Go to 1.24
anonpenguin23 2026-03-26 18:21:55 +02:00
parent ebaf37e9d0
commit abcc23c4f3
33 changed files with 3426 additions and 562 deletions


@@ -46,6 +46,7 @@ jobs:
         uses: docker/setup-qemu-action@v3
       - name: Build binary
+        working-directory: core
         env:
           GOARCH: ${{ matrix.arch }}
           CGO_ENABLED: 0
@@ -71,7 +72,7 @@ jobs:
           mkdir -p ${PKG_NAME}/usr/local/bin
           # Copy binaries
-          cp build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
+          cp core/build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
           chmod 755 ${PKG_NAME}/usr/local/bin/*
           # Create control file


@@ -23,7 +23,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
+          go-version: '1.24'
           cache: true
       - name: Run GoReleaser

.gitignore

@@ -1,56 +1,4 @@
-# Binaries
+# === Global ===
-*.exe
-*.exe~
-*.dll
-*.so
-*.dylib
-*.test
-*.out
-bin/
-bin-linux/
-dist/
-orama-cli-linux
-# Build artifacts
-*.deb
-*.rpm
-*.tar.gz
-*.zip
-# Go
-go.work
-.gocache/
-# Dependencies
-# vendor/
-# Environment & credentials
-.env
-.env.*
-.env.local
-.env.*.local
-scripts/remote-nodes.conf
-keys_backup/
-e2e/config.yaml
-# Config (generated/local)
-configs/
-# Data & databases
-data/*
-*.db
-# IDE & editor files
-.vscode/
-.idea/
-.cursor/
-.claude/
-.mcp.json
-*.swp
-*.swo
-*~
-# OS generated files
 .DS_Store
 .DS_Store?
 ._*
@@ -58,39 +6,80 @@ data/*
 .Trashes
 ehthumbs.db
 Thumbs.db
+*.swp
+*.swo
+*~
+# IDE
+.vscode/
+.idea/
+.cursor/
+# Environment & credentials
+.env
+.env.*
+!.env.example
+.mcp.json
+.claude/
+.codex/
+# === Core (Go) ===
+core/phantom-auth/
+core/bin/
+core/bin-linux/
+core/dist/
+core/orama-cli-linux
+core/keys_backup/
+core/.gocache/
+core/configs/
+core/data/*
+core/tmp/
+core/temp/
+core/results/
+core/rnd/
+core/vps.txt
+core/coverage.txt
+core/coverage.html
+core/profile.out
+core/e2e/config.yaml
+core/scripts/remote-nodes.conf
+# Go build artifacts
+*.exe
+*.exe~
+*.dll
+*.so
+*.dylib
+*.test
+*.out
+*.deb
+*.rpm
+*.tar.gz
+*.zip
+go.work
 # Logs
 *.log
-# Temporary files
+# Databases
-tmp/
+*.db
-temp/
-*.tmp
-# Coverage & profiling
+# === Website ===
-coverage.txt
+website/node_modules/
-coverage.html
+website/dist/
-profile.out
+website/invest-api/invest-api
+website/invest-api/*.db
+website/invest-api/*.db-shm
+website/invest-api/*.db-wal
-# Local development
+# === Vault (Zig) ===
+vault/.zig-cache/
+vault/zig-out/
+# === OS ===
+os/output/
+# === Local development ===
 .dev/
 .local/
 local/
-.codex/
-results/
-rnd/
-vps.txt
-# Project subdirectories (managed separately)
-website/
-phantom-auth/
-# One-off scripts & tools
-redeploy-6.sh
-terms-agreement
-./bootstrap
-./node
-./cli
-./inspector
-docs/later_todos/
-sim/


@@ -9,11 +9,13 @@ env:
 before:
   hooks:
-    - go mod tidy
+    - cmd: go mod tidy
+      dir: core
 builds:
   # orama CLI binary
   - id: orama
+    dir: core
     main: ./cmd/cli
     binary: orama
     goos:
@@ -31,6 +33,7 @@ builds:
   # orama-node binary (Linux only for apt)
   - id: orama-node
+    dir: core
     main: ./cmd/node
     binary: orama-node
     goos:
@@ -84,7 +87,7 @@ nfpms:
     section: utils
     priority: optional
     contents:
-      - src: ./README.md
+      - src: ./core/README.md
        dst: /usr/share/doc/orama/README.md
     deb:
       lintian_overrides:
@@ -106,7 +109,7 @@ nfpms:
     section: net
     priority: optional
     contents:
-      - src: ./README.md
+      - src: ./core/README.md
        dst: /usr/share/doc/orama-node/README.md
     deb:
       lintian_overrides:


@@ -1,47 +1,78 @@
-# Contributing to DeBros Network
+# Contributing to Orama Network
-Thanks for helping improve the network! This guide covers setup, local dev, tests, and PR guidelines.
+Thanks for helping improve the network! This monorepo contains multiple projects — pick the one relevant to your contribution.
-## Requirements
+## Repository Structure
-- Go 1.22+ (1.23 recommended)
+| Package | Language | Build |
-- RQLite (optional for local runs; the Makefile starts nodes with embedded setup)
+|---------|----------|-------|
-- Make (optional)
+| `core/` | Go 1.24+ | `make core-build` |
+| `website/` | TypeScript (pnpm) | `make website-build` |
+| `vault/` | Zig 0.14+ | `make vault-build` |
+| `os/` | Go + Buildroot | `make os-build` |
 ## Setup
 ```bash
 git clone https://github.com/DeBrosOfficial/network.git
 cd network
-make deps
 ```
-## Build, Test, Lint
+### Core (Go)
-- Build: `make build`
-- Test: `make test`
-- Format/Vet: `make fmt vet` (or `make lint`)
-````
-Useful CLI commands:
 ```bash
-./bin/orama health
+cd core
-./bin/orama peers
+make deps
-./bin/orama status
+make build
-````
+make test
+```
-## Versioning
+### Website
-- The CLI reports its version via `orama version`.
+```bash
-- Releases are tagged (e.g., `v0.18.0-beta`) and published via GoReleaser.
+cd website
+pnpm install
+pnpm dev
+```
+### Vault (Zig)
+```bash
+cd vault
+zig build
+zig build test
+```
 ## Pull Requests
-1. Fork and create a topic branch.
+1. Fork and create a topic branch from `main`.
-2. Ensure `make build test` passes; include tests for new functionality.
+2. Ensure `make test` passes for affected packages.
-3. Keep PRs focused and well-described (motivation, approach, testing).
+3. Include tests for new functionality or bug fixes.
-4. Update README/docs for behavior changes.
+4. Keep PRs focused — one concern per PR.
+5. Write a clear description: motivation, approach, and how you tested it.
+6. Update docs if you're changing user-facing behavior.
+## Code Style
+### Go (core/, os/)
+- Follow standard Go conventions
+- Run `make lint` before submitting
+- Wrap errors with context: `fmt.Errorf("failed to X: %w", err)`
+- No magic values — use named constants
+### TypeScript (website/)
+- TypeScript strict mode
+- Follow existing patterns in the codebase
+### Zig (vault/)
+- Follow standard Zig conventions
+- Run `zig build test` before submitting
+## Security
+If you find a security vulnerability, **do not open a public issue**. Email security@debros.io instead.
 Thank you for contributing!

Makefile (new file)

@@ -0,0 +1,56 @@
# Orama Monorepo
# Delegates to sub-project Makefiles

.PHONY: help build test clean

# === Core (Go network) ===
.PHONY: core core-build core-test core-clean core-lint

core: core-build

core-build:
	$(MAKE) -C core build

core-test:
	$(MAKE) -C core test

core-lint:
	$(MAKE) -C core lint

core-clean:
	$(MAKE) -C core clean

# === Website ===
.PHONY: website website-dev website-build

website-dev:
	cd website && pnpm dev

website-build:
	cd website && pnpm build

# === Vault (Zig) ===
.PHONY: vault vault-build vault-test

vault-build:
	cd vault && zig build

vault-test:
	cd vault && zig build test

# === OS ===
.PHONY: os os-build

os-build:
	$(MAKE) -C os

# === Aggregate ===
build: core-build
test: core-test
clean: core-clean

help:
	@echo "Orama Monorepo"
	@echo ""
	@echo "  Core (Go):    make core-build | core-test | core-lint | core-clean"
	@echo "  Website:      make website-dev | website-build"
	@echo "  Vault (Zig):  make vault-build | vault-test"
	@echo "  OS:           make os-build"
	@echo ""
	@echo "  Aggregate:    make build | test | clean (delegates to core)"

README.md

@@ -1,465 +1,49 @@
-# Orama Network - Distributed P2P Platform
+# Orama Network
-A high-performance API Gateway and distributed platform built in Go. Provides a unified HTTP/HTTPS API for distributed SQL (RQLite), distributed caching (Olric), decentralized storage (IPFS), pub/sub messaging, and serverless WebAssembly execution.
+A decentralized infrastructure platform combining distributed SQL, IPFS storage, caching, serverless WASM execution, and privacy relay — all managed through a unified API gateway.
-**Architecture:** Modular Gateway / Edge Proxy following SOLID principles
+## Packages
-## Features
+| Package | Language | Description |
+|---------|----------|-------------|
-- **🔐 Authentication** - Wallet signatures, API keys, JWT tokens
+| [core/](core/) | Go | API gateway, distributed node, CLI, and client SDK |
-- **💾 Storage** - IPFS-based decentralized file storage with encryption
+| [website/](website/) | TypeScript | Marketing website and invest portal |
-- **⚡ Cache** - Distributed cache with Olric (in-memory key-value)
+| [vault/](vault/) | Zig | Distributed secrets vault (Shamir's Secret Sharing) |
-- **🗄️ Database** - RQLite distributed SQL with Raft consensus + Per-namespace SQLite databases
+| [os/](os/) | Go + Buildroot | OramaOS — hardened minimal Linux for network nodes |
-- **📡 Pub/Sub** - Real-time messaging via LibP2P and WebSocket
-- **⚙️ Serverless** - WebAssembly function execution with host functions
-- **🌐 HTTP Gateway** - Unified REST API with automatic HTTPS (Let's Encrypt)
-- **📦 Client SDK** - Type-safe Go SDK for all services
-- **🚀 App Deployments** - Deploy React, Next.js, Go, Node.js apps with automatic domains
-- **🗄️ SQLite Databases** - Per-namespace isolated databases with IPFS backups
## Application Deployments
Deploy full-stack applications with automatic domain assignment and namespace isolation.
### Deploy a React App
```bash
# Build your app
cd my-react-app
npm run build
# Deploy to Orama Network
orama deploy static ./dist --name my-app
# Your app is now live at: https://my-app.orama.network
```
### Deploy Next.js with SSR
```bash
cd my-nextjs-app
# Ensure next.config.js has: output: 'standalone'
npm run build
orama deploy nextjs . --name my-nextjs --ssr
# Live at: https://my-nextjs.orama.network
```
### Deploy Go Backend
```bash
# Build for Linux (name binary 'app' for auto-detection)
GOOS=linux GOARCH=amd64 go build -o app main.go
# Deploy (must implement /health endpoint)
orama deploy go ./app --name my-api
# API live at: https://my-api.orama.network
```
### Create SQLite Database
```bash
# Create database
orama db create my-database
# Create schema
orama db query my-database "CREATE TABLE users (id INT, name TEXT)"
# Insert data
orama db query my-database "INSERT INTO users VALUES (1, 'Alice')"
# Query data
orama db query my-database "SELECT * FROM users"
# Backup to IPFS
orama db backup my-database
```
### Full-Stack Example
Deploy a complete app with React frontend, Go backend, and SQLite database:
```bash
# 1. Create database
orama db create myapp-db
orama db query myapp-db "CREATE TABLE users (id INT PRIMARY KEY, name TEXT)"
# 2. Deploy Go backend (connects to database)
GOOS=linux GOARCH=amd64 go build -o api main.go
orama deploy go ./api --name myapp-api
# 3. Deploy React frontend (calls backend API)
cd frontend && npm run build
orama deploy static ./dist --name myapp
# Access:
# Frontend: https://myapp.orama.network
# Backend: https://myapp-api.orama.network
```
**📖 Full Guide**: See [Deployment Guide](docs/DEPLOYMENT_GUIDE.md) for complete documentation, examples, and best practices.
 ## Quick Start
-### Building
 ```bash
-# Build all binaries
+# Build the core network binaries
-make build
+make core-build
+# Run tests
+make core-test
+# Start website dev server
+make website-dev
+# Build vault
+make vault-build
 ```
## CLI Commands
### Authentication
```bash
orama auth login # Authenticate with wallet
orama auth status # Check authentication
orama auth logout # Clear credentials
```
### Application Deployments
```bash
# Deploy applications
orama deploy static <path> --name myapp # React, Vue, static sites
orama deploy nextjs <path> --name myapp --ssr # Next.js with SSR (requires output: 'standalone')
orama deploy go <path> --name myapp # Go binaries (must have /health endpoint)
orama deploy nodejs <path> --name myapp # Node.js apps (must have /health endpoint)
# Manage deployments
orama app list # List all deployments
orama app get <name> # Get deployment details
orama app logs <name> --follow # View logs
orama app delete <name> # Delete deployment
orama app rollback <name> --version 1 # Rollback to version
```
### SQLite Databases
```bash
orama db create <name> # Create database
orama db query <name> "SELECT * FROM t" # Execute SQL query
orama db list # List all databases
orama db backup <name> # Backup to IPFS
orama db backups <name> # List backups
```
### Environment Management
```bash
orama env list # List available environments
orama env current # Show active environment
orama env use <name> # Switch environment
```
## Serverless Functions (WASM)
Orama supports high-performance serverless function execution using WebAssembly (WASM). Functions are isolated, secure, and can interact with network services like the distributed cache.
> **Full guide:** See [docs/SERVERLESS.md](docs/SERVERLESS.md) for host functions API, secrets management, PubSub triggers, and examples.
### 1. Build Functions
Functions must be compiled to WASM. We recommend using [TinyGo](https://tinygo.org/).
```bash
# Build example functions to examples/functions/bin/
./examples/functions/build.sh
```
### 2. Deployment
Deploy your compiled `.wasm` file to the network via the Gateway.
```bash
# Deploy a function
curl -X POST https://your-node.example.com/v1/functions \
-H "Authorization: Bearer <your_api_key>" \
-F "name=hello-world" \
-F "namespace=default" \
-F "wasm=@./examples/functions/bin/hello.wasm"
```
### 3. Invocation
Trigger your function with a JSON payload. The function receives the payload via `stdin` and returns its response via `stdout`.
```bash
# Invoke via HTTP
curl -X POST https://your-node.example.com/v1/functions/hello-world/invoke \
-H "Authorization: Bearer <your_api_key>" \
-H "Content-Type: application/json" \
-d '{"name": "Developer"}'
```
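Because a function reads its JSON payload from `stdin` and writes its response to `stdout`, its core logic can stay a pure, testable function. A hedged sketch compatible with the invocation example above (the `name` field mirrors the example payload; the `message` response shape is an assumption, not the documented API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// handle transforms the request payload into a response body.
// Keeping it pure makes the function easy to test off-network.
func handle(input []byte) ([]byte, error) {
	var req struct {
		Name string `json:"name"`
	}
	if err := json.Unmarshal(input, &req); err != nil {
		return nil, err
	}
	return json.Marshal(map[string]string{
		"message": fmt.Sprintf("Hello, %s!", req.Name),
	})
}

func main() {
	in, err := io.ReadAll(os.Stdin) // payload arrives on stdin
	if err != nil {
		os.Exit(1)
	}
	out, err := handle(in)
	if err != nil {
		os.Exit(1)
	}
	os.Stdout.Write(out) // response leaves on stdout
}
```

With TinyGo this would be compiled to WASM via something like `tinygo build -o hello.wasm -target wasi .` (the exact target flag varies by TinyGo version).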
### 4. Management
```bash
# List all functions in a namespace
curl https://your-node.example.com/v1/functions?namespace=default
# Delete a function
curl -X DELETE https://your-node.example.com/v1/functions/hello-world?namespace=default
```
## Production Deployment
### Prerequisites
- Ubuntu 22.04+ or Debian 12+
- `amd64` or `arm64` architecture
- 4GB RAM, 50GB SSD, 2 CPU cores
### Required Ports
**External (must be open in firewall):**
- **80** - HTTP (ACME/Let's Encrypt certificate challenges)
- **443** - HTTPS (Main gateway API endpoint)
- **4101** - IPFS Swarm (peer connections)
- **7001** - RQLite Raft (cluster consensus)
**Internal (bound to localhost, no firewall needed):**
- 4501 - IPFS API
- 5001 - RQLite HTTP API
- 6001 - Unified Gateway
- 8080 - IPFS Gateway
- 9050 - Anyone SOCKS5 proxy
- 9094 - IPFS Cluster API
- 3320/3322 - Olric Cache
**Anyone Relay Mode (optional, for earning rewards):**
- 9001 - Anyone ORPort (relay traffic, must be open externally)
### Anyone Network Integration
Orama Network integrates with the [Anyone Protocol](https://anyone.io) for anonymous routing. By default, nodes run as **clients** (consuming the network). Optionally, you can run as a **relay operator** to earn rewards.
**Client Mode (Default):**
- Routes traffic through Anyone network for anonymity
- SOCKS5 proxy on localhost:9050
- No rewards, just consumes network
**Relay Mode (Earn Rewards):**
- Provide bandwidth to the Anyone network
- Earn $ANYONE tokens as a relay operator
- Requires 100 $ANYONE tokens in your wallet
- Requires ORPort (9001) open to the internet
```bash
# Install as relay operator (earn rewards)
sudo orama node install --vps-ip <IP> --domain <domain> \
--anyone-relay \
--anyone-nickname "MyRelay" \
--anyone-contact "operator@email.com" \
--anyone-wallet "0x1234...abcd"
# With exit relay (legal implications apply)
sudo orama node install --vps-ip <IP> --domain <domain> \
--anyone-relay \
--anyone-exit \
--anyone-nickname "MyExitRelay" \
--anyone-contact "operator@email.com" \
--anyone-wallet "0x1234...abcd"
# Migrate existing Anyone installation
sudo orama node install --vps-ip <IP> --domain <domain> \
--anyone-relay \
--anyone-migrate \
--anyone-nickname "MyRelay" \
--anyone-contact "operator@email.com" \
--anyone-wallet "0x1234...abcd"
```
**Important:** After installation, register your relay at [dashboard.anyone.io](https://dashboard.anyone.io) to start earning rewards.
### Installation
**macOS (Homebrew):**
```bash
brew install DeBrosOfficial/tap/orama
```
**Linux (Debian/Ubuntu):**
```bash
# Download and install the latest .deb package
curl -sL https://github.com/DeBrosOfficial/network/releases/latest/download/orama_$(curl -s https://api.github.com/repos/DeBrosOfficial/network/releases/latest | grep tag_name | cut -d '"' -f 4 | tr -d 'v')_linux_amd64.deb -o orama.deb
sudo dpkg -i orama.deb
```
**From Source:**
```bash
go install github.com/DeBrosOfficial/network/cmd/cli@latest
```
**Setup (after installation):**
```bash
sudo orama node install --interactive
```
### Service Management
```bash
# Status
sudo orama node status
# Control services
sudo orama node start
sudo orama node stop
sudo orama node restart
# Diagnose issues
sudo orama node doctor
# View logs
orama node logs node --follow
orama node logs gateway --follow
orama node logs ipfs --follow
```
### Upgrade
```bash
# Upgrade to latest version
sudo orama node upgrade --restart
```
## Configuration
All configuration lives in `~/.orama/`:
- `configs/node.yaml` - Node configuration
- `configs/gateway.yaml` - Gateway configuration
- `configs/olric.yaml` - Cache configuration
- `secrets/` - Keys and certificates
- `data/` - Service data directories
## Troubleshooting
### Services Not Starting
```bash
# Check status
sudo orama node status
# View logs
orama node logs node --follow
# Check log files
sudo orama node doctor
```
### Port Conflicts
```bash
# Check what's using specific ports
sudo lsof -i :443 # HTTPS Gateway
sudo lsof -i :7001 # TCP/SNI Gateway
sudo lsof -i :6001 # Internal Gateway
```
### RQLite Cluster Issues
```bash
# Connect to RQLite CLI
rqlite -H localhost -p 5001
# Check cluster status
.nodes
.status
.ready
# Check consistency level
.consistency
```
### Reset Installation
```bash
# Production reset (⚠️ DESTROYS DATA)
sudo orama node uninstall
sudo rm -rf /opt/orama/.orama
sudo orama node install
```
## HTTP Gateway API
### Main Gateway Endpoints
- `GET /health` - Health status
- `GET /v1/status` - Full status
- `GET /v1/version` - Version info
- `POST /v1/rqlite/exec` - Execute SQL
- `POST /v1/rqlite/query` - Query database
- `GET /v1/rqlite/schema` - Get schema
- `POST /v1/pubsub/publish` - Publish message
- `GET /v1/pubsub/topics` - List topics
- `GET /v1/pubsub/ws?topic=<name>` - WebSocket subscribe
- `POST /v1/functions` - Deploy function (multipart/form-data)
- `POST /v1/functions/{name}/invoke` - Invoke function
- `GET /v1/functions` - List functions
- `DELETE /v1/functions/{name}` - Delete function
- `GET /v1/functions/{name}/logs` - Get function logs
See `openapi/gateway.yaml` for complete API specification.
 ## Documentation
-- **[Deployment Guide](docs/DEPLOYMENT_GUIDE.md)** - Deploy React, Next.js, Go apps and manage databases
+| Document | Description |
-- **[Architecture Guide](docs/ARCHITECTURE.md)** - System architecture and design patterns
+|----------|-------------|
-- **[Client SDK](docs/CLIENT_SDK.md)** - Go SDK documentation and examples
+| [Architecture](core/docs/ARCHITECTURE.md) | System architecture and design patterns |
-- **[Monitoring](docs/MONITORING.md)** - Cluster monitoring and health checks
+| [Deployment Guide](core/docs/DEPLOYMENT_GUIDE.md) | Deploy apps, databases, and domains |
-- **[Inspector](docs/INSPECTOR.md)** - Deep subsystem health inspection
+| [Dev & Deploy](core/docs/DEV_DEPLOY.md) | Building, deploying to VPS, rolling upgrades |
-- **[Serverless Functions](docs/SERVERLESS.md)** - WASM serverless with host functions
+| [Security](core/docs/SECURITY.md) | Security hardening and threat model |
-- **[WebRTC](docs/WEBRTC.md)** - Real-time communication setup
+| [Monitoring](core/docs/MONITORING.md) | Cluster health monitoring |
-- **[Common Problems](docs/COMMON_PROBLEMS.md)** - Troubleshooting known issues
+| [Client SDK](core/docs/CLIENT_SDK.md) | Go SDK documentation |
+| [Serverless](core/docs/SERVERLESS.md) | WASM serverless functions |
-## Resources
+| [Common Problems](core/docs/COMMON_PROBLEMS.md) | Troubleshooting known issues |
-- [RQLite Documentation](https://rqlite.io/docs/)
-- [IPFS Documentation](https://docs.ipfs.tech/)
-- [LibP2P Documentation](https://docs.libp2p.io/)
-- [WebAssembly](https://webassembly.org/)
-- [GitHub Repository](https://github.com/DeBrosOfficial/network)
-- [Issue Tracker](https://github.com/DeBrosOfficial/network/issues)
## Project Structure
```
network/
├── cmd/ # Binary entry points
│ ├── cli/ # CLI tool
│ ├── gateway/ # HTTP Gateway
│ ├── node/ # P2P Node
├── pkg/ # Core packages
│ ├── gateway/ # Gateway implementation
│ │ └── handlers/ # HTTP handlers by domain
│ ├── client/ # Go SDK
│ ├── serverless/ # WASM engine
│ ├── rqlite/ # Database ORM
│ ├── contracts/ # Interface definitions
│ ├── httputil/ # HTTP utilities
│ └── errors/ # Error handling
├── docs/ # Documentation
├── e2e/ # End-to-end tests
└── examples/ # Example code
```
 ## Contributing
-Contributions are welcome! This project follows:
+See [CONTRIBUTING.md](CONTRIBUTING.md) for setup, development, and PR guidelines.
-- **SOLID Principles** - Single responsibility, open/closed, etc.
-- **DRY Principle** - Don't repeat yourself
-- **Clean Architecture** - Clear separation of concerns
-- **Test Coverage** - Unit and E2E tests required
-See our architecture docs for design patterns and guidelines.
+## License
+[AGPL-3.0](LICENSE)

core/.env.example (new file)

@@ -0,0 +1,8 @@
# OpenRouter API Key for changelog generation
# Get your API key from https://openrouter.ai/keys
OPENROUTER_API_KEY=your-api-key-here

# ZeroSSL API Key for TLS certificates (alternative to Let's Encrypt)
# Get your free API key from https://app.zerossl.com/developer
# If not set, Caddy will use Let's Encrypt as the default CA
ZEROSSL_API_KEY=

os/Makefile (new file)

@@ -0,0 +1,52 @@
SHELL := /bin/bash
.PHONY: agent build sign test clean

VERSION ?= $(shell git describe --tags --always 2>/dev/null || echo "dev")
ARCH ?= amd64

# Directories
AGENT_DIR := agent
BUILDROOT_DIR := buildroot
SCRIPTS_DIR := scripts
OUTPUT_DIR := output

# --- Agent ---
agent:
	@echo "=== Building orama-agent ==="
	cd $(AGENT_DIR) && GOOS=linux GOARCH=$(ARCH) CGO_ENABLED=0 \
		go build -ldflags "-s -w" -o ../$(OUTPUT_DIR)/orama-agent ./cmd/orama-agent/
	@echo "Built: $(OUTPUT_DIR)/orama-agent"

agent-test:
	@echo "=== Testing orama-agent ==="
	cd $(AGENT_DIR) && go test ./...

# --- Full Image Build ---
build: agent
	@echo "=== Building OramaOS image ==="
	ORAMA_VERSION=$(VERSION) ARCH=$(ARCH) $(SCRIPTS_DIR)/build.sh
	@echo "Build complete: $(OUTPUT_DIR)/"

# --- Signing ---
sign:
	@echo "=== Signing OramaOS image ==="
	$(SCRIPTS_DIR)/sign.sh $(OUTPUT_DIR)/orama-os-$(VERSION)-$(ARCH)

# --- QEMU Testing ---
test: build
	@echo "=== Launching QEMU test VM ==="
	$(SCRIPTS_DIR)/test-vm.sh $(OUTPUT_DIR)/orama-os.qcow2

test-vm:
	@echo "=== Launching QEMU with existing image ==="
	$(SCRIPTS_DIR)/test-vm.sh $(OUTPUT_DIR)/orama-os.qcow2

# --- Clean ---
clean:
	rm -rf $(OUTPUT_DIR)
	@echo "Cleaned output directory"


@@ -0,0 +1,35 @@
// orama-agent is the sole root process on OramaOS.
// It handles enrollment, LUKS key management, service supervision,
// over-the-air updates, and command reception.
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/DeBrosOfficial/orama-os/agent/internal/boot"
)

func main() {
	log.SetFlags(log.Ldate | log.Ltime | log.Lshortfile)
	log.Println("orama-agent starting")

	agent, err := boot.NewAgent()
	if err != nil {
		log.Fatalf("failed to initialize agent: %v", err)
	}
	if err := agent.Run(); err != nil {
		log.Fatalf("agent failed: %v", err)
	}

	// Wait for termination signal
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
	sig := <-sigCh
	log.Printf("received %s, shutting down", sig)
	agent.Shutdown()
}

os/agent/go.mod (new file)

@@ -0,0 +1,8 @@
module github.com/DeBrosOfficial/orama-os/agent

go 1.24.0

require (
	golang.org/x/crypto v0.48.0 // indirect
	golang.org/x/sys v0.41.0 // indirect
)

os/agent/go.sum (new file)

@@ -0,0 +1,4 @@
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=


@@ -0,0 +1,304 @@
// Package boot orchestrates the OramaOS agent boot sequence.
//
// Two modes:
// - Enrollment mode (first boot): HTTP server on :9999, WG setup, LUKS format, share distribution
// - Standard boot (subsequent): WG up, LUKS unlock via Shamir shares, start services
package boot
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"log"
"net/http"
"os"
"path/filepath"
"sync"
"time"
"github.com/DeBrosOfficial/orama-os/agent/internal/command"
"github.com/DeBrosOfficial/orama-os/agent/internal/enroll"
"github.com/DeBrosOfficial/orama-os/agent/internal/health"
"github.com/DeBrosOfficial/orama-os/agent/internal/sandbox"
"github.com/DeBrosOfficial/orama-os/agent/internal/update"
"github.com/DeBrosOfficial/orama-os/agent/internal/wireguard"
)
const (
// OramaDir is the base data directory, mounted from the LUKS-encrypted partition.
OramaDir = "/opt/orama/.orama"
// EnrolledFlag indicates that this node has completed enrollment.
EnrolledFlag = "/opt/orama/.orama/enrolled"
// DataDevice is the LUKS-encrypted data partition.
DataDevice = "/dev/sda3"
// DataMapperName is the device-mapper name for the unlocked LUKS partition.
DataMapperName = "orama-data"
// DataMountPoint is where the decrypted data partition is mounted.
DataMountPoint = "/opt/orama/.orama"
// WireGuardConfigPath is the path to the WireGuard configuration baked into rootfs
// during enrollment, or written during first boot.
WireGuardConfigPath = "/etc/wireguard/wg0.conf"
// GatewayEndpoint is the default gateway URL for enrollment WebSocket.
// Overridden by /etc/orama/gateway-url if present.
GatewayEndpoint = "wss://gateway.orama.network/v1/agent/enroll"
)
// Agent is the main orchestrator for the OramaOS node.
type Agent struct {
wg *wireguard.Manager
supervisor *sandbox.Supervisor
updater *update.Manager
cmdRecv *command.Receiver
reporter *health.Reporter
mu sync.Mutex
shutdown bool
}
// NewAgent creates a new Agent instance.
func NewAgent() (*Agent, error) {
return &Agent{
wg: wireguard.NewManager(),
}, nil
}
// Run executes the boot sequence. It detects whether this is a first boot
// (enrollment) or a standard boot, and acts accordingly.
func (a *Agent) Run() error {
if isEnrolled() {
return a.standardBoot()
}
return a.enrollmentBoot()
}
// isEnrolled checks if the node has completed enrollment.
func isEnrolled() bool {
_, err := os.Stat(EnrolledFlag)
return err == nil
}
// enrollmentBoot handles first-boot enrollment.
func (a *Agent) enrollmentBoot() error {
log.Println("ENROLLMENT MODE: first boot detected")
// 1. Start enrollment server on port 9999
enrollServer := enroll.NewServer(resolveGatewayEndpoint())
result, err := enrollServer.Run()
if err != nil {
return fmt.Errorf("enrollment failed: %w", err)
}
log.Println("enrollment complete, configuring node")
// 2. Configure WireGuard with received config
if err := a.wg.Configure(result.WireGuardConfig); err != nil {
return fmt.Errorf("failed to configure WireGuard: %w", err)
}
if err := a.wg.Up(); err != nil {
return fmt.Errorf("failed to bring up WireGuard: %w", err)
}
// 3. Generate LUKS key, format, and encrypt data partition
luksKey, err := GenerateLUKSKey()
if err != nil {
return fmt.Errorf("failed to generate LUKS key: %w", err)
}
if err := FormatAndEncrypt(DataDevice, luksKey); err != nil {
ZeroBytes(luksKey)
return fmt.Errorf("failed to format LUKS partition: %w", err)
}
// 4. Distribute LUKS key shares to peer vault-guardians
if err := DistributeKeyShares(luksKey, result.Peers, result.NodeID); err != nil {
ZeroBytes(luksKey)
return fmt.Errorf("failed to distribute key shares: %w", err)
}
ZeroBytes(luksKey)
// 5. FormatAndEncrypt already mounted the partition — no need to decrypt again.
// 6. Write enrolled flag
if err := os.MkdirAll(filepath.Dir(EnrolledFlag), 0755); err != nil {
return fmt.Errorf("failed to create enrolled flag dir: %w", err)
}
if err := os.WriteFile(EnrolledFlag, []byte("1"), 0644); err != nil {
return fmt.Errorf("failed to write enrolled flag: %w", err)
}
log.Println("enrollment complete, proceeding to standard boot")
// 7. Start services
return a.startServices()
}
// standardBoot handles normal reboot sequence.
func (a *Agent) standardBoot() error {
log.Println("STANDARD BOOT: enrolled node")
// 1. Bring up WireGuard
if err := a.wg.Up(); err != nil {
return fmt.Errorf("failed to bring up WireGuard: %w", err)
}
// 2. Try Shamir-based LUKS key reconstruction
luksKey, err := FetchAndReconstruct(a.wg)
if err != nil {
// Shamir failed — fall back to genesis unlock mode.
// This happens when the genesis node reboots before enough peers
// have joined for Shamir distribution, or when peers are offline.
log.Printf("Shamir reconstruction failed: %v", err)
log.Println("Entering genesis unlock mode — waiting for operator unlock via WireGuard")
luksKey, err = a.waitForGenesisUnlock()
if err != nil {
return fmt.Errorf("genesis unlock failed: %w", err)
}
}
// 3. Decrypt and mount data partition
if err := DecryptAndMount(DataDevice, luksKey); err != nil {
ZeroBytes(luksKey)
return fmt.Errorf("failed to mount data partition: %w", err)
}
ZeroBytes(luksKey)
// 4. Mark boot as successful (A/B boot counting)
if err := update.MarkBootSuccessful(); err != nil {
log.Printf("WARNING: failed to mark boot successful: %v", err)
}
// 5. Start services
return a.startServices()
}
// waitForGenesisUnlock starts a temporary HTTP server on the WireGuard interface
// (port 9998) that accepts a LUKS key from the operator.
// The operator sends: POST /v1/agent/unlock with {"key":"<base64-luks-key>"}
func (a *Agent) waitForGenesisUnlock() ([]byte, error) {
keyCh := make(chan []byte, 1)
errCh := make(chan error, 1)
mux := http.NewServeMux()
mux.HandleFunc("/v1/agent/unlock", func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
var req struct {
Key string `json:"key"` // base64-encoded LUKS key
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "invalid JSON", http.StatusBadRequest)
return
}
keyBytes, err := base64.StdEncoding.DecodeString(req.Key)
if err != nil {
http.Error(w, "invalid base64 key", http.StatusBadRequest)
return
}
if len(keyBytes) != 32 {
http.Error(w, "key must be 32 bytes", http.StatusBadRequest)
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{"status": "unlocking"})
keyCh <- keyBytes
})
server := &http.Server{
Addr: ":9998",
Handler: mux,
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
}
go func() {
if err := server.ListenAndServe(); err != http.ErrServerClosed {
errCh <- fmt.Errorf("genesis unlock server error: %w", err)
}
}()
log.Println("Genesis unlock server listening on :9998")
log.Println("Run 'orama node unlock --genesis --node-ip <wg-ip>' to unlock this node")
select {
case key := <-keyCh:
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
server.Shutdown(ctx)
return key, nil
case err := <-errCh:
return nil, err
}
}
// startServices launches all node services in sandboxes and starts background tasks.
func (a *Agent) startServices() error {
// Start service supervisor
a.supervisor = sandbox.NewSupervisor()
if err := a.supervisor.StartAll(); err != nil {
return fmt.Errorf("failed to start services: %w", err)
}
// Start command receiver (listen for Gateway commands over WG)
a.cmdRecv = command.NewReceiver(a.supervisor)
go a.cmdRecv.Listen()
// Start update checker (periodic)
a.updater = update.NewManager()
go a.updater.RunLoop()
// Start health reporter (periodic)
a.reporter = health.NewReporter(a.supervisor)
go a.reporter.RunLoop()
return nil
}
// Shutdown gracefully stops all services.
func (a *Agent) Shutdown() {
a.mu.Lock()
defer a.mu.Unlock()
if a.shutdown {
return
}
a.shutdown = true
log.Println("shutting down agent")
if a.cmdRecv != nil {
a.cmdRecv.Stop()
}
if a.updater != nil {
a.updater.Stop()
}
if a.reporter != nil {
a.reporter.Stop()
}
if a.supervisor != nil {
a.supervisor.StopAll()
}
}
// resolveGatewayEndpoint reads the gateway URL from config or falls back to the
// default. The file contents are used verbatim, so /etc/orama/gateway-url must
// not contain a trailing newline.
func resolveGatewayEndpoint() string {
data, err := os.ReadFile("/etc/orama/gateway-url")
if err == nil {
return string(data)
}
return GatewayEndpoint
}

@ -0,0 +1,504 @@
package boot
import (
"bytes"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"log"
"math"
"net/http"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/DeBrosOfficial/orama-os/agent/internal/types"
"github.com/DeBrosOfficial/orama-os/agent/internal/wireguard"
)
// GenerateLUKSKey generates a cryptographically random 32-byte key for LUKS encryption.
func GenerateLUKSKey() ([]byte, error) {
key := make([]byte, 32)
if _, err := rand.Read(key); err != nil {
return nil, fmt.Errorf("failed to read random bytes: %w", err)
}
return key, nil
}
// FormatAndEncrypt formats a device with LUKS2 encryption and creates an ext4 filesystem.
func FormatAndEncrypt(device string, key []byte) error {
log.Printf("formatting %s with LUKS2", device)
// cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 <device> --key-file=-
cmd := exec.Command("cryptsetup", "luksFormat", "--type", "luks2",
"--cipher", "aes-xts-plain64", "--batch-mode", device, "--key-file=-")
cmd.Stdin = bytes.NewReader(key)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("luksFormat failed: %w\n%s", err, string(output))
}
// cryptsetup open <device> orama-data --key-file=-
cmd = exec.Command("cryptsetup", "open", device, DataMapperName, "--key-file=-")
cmd.Stdin = bytes.NewReader(key)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("cryptsetup open failed: %w\n%s", err, string(output))
}
// mkfs.ext4 /dev/mapper/orama-data
cmd = exec.Command("mkfs.ext4", "-F", "/dev/mapper/"+DataMapperName)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("mkfs.ext4 failed: %w\n%s", err, string(output))
}
// Mount
if err := os.MkdirAll(DataMountPoint, 0755); err != nil {
return fmt.Errorf("failed to create mount point: %w", err)
}
cmd = exec.Command("mount", "/dev/mapper/"+DataMapperName, DataMountPoint)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("mount failed: %w\n%s", err, string(output))
}
log.Println("LUKS partition formatted and mounted")
return nil
}
// DecryptAndMount opens and mounts an existing LUKS partition.
func DecryptAndMount(device string, key []byte) error {
// cryptsetup open <device> orama-data --key-file=-
cmd := exec.Command("cryptsetup", "open", device, DataMapperName, "--key-file=-")
cmd.Stdin = bytes.NewReader(key)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("cryptsetup open failed: %w\n%s", err, string(output))
}
if err := os.MkdirAll(DataMountPoint, 0755); err != nil {
return fmt.Errorf("failed to create mount point: %w", err)
}
cmd = exec.Command("mount", "/dev/mapper/"+DataMapperName, DataMountPoint)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("mount failed: %w\n%s", err, string(output))
}
return nil
}
// DistributeKeyShares splits the LUKS key into Shamir shares and pushes them
// to peer vault-guardians over WireGuard.
func DistributeKeyShares(key []byte, peers []types.Peer, nodeID string) error {
n := len(peers)
if n == 0 {
return fmt.Errorf("no peers available for key distribution")
}
// Adaptive threshold: at least 3, or n/3 (whichever is greater)
k := int(math.Max(3, float64(n)/3.0))
if k > n {
k = n
}
log.Printf("splitting LUKS key into %d shares (threshold=%d)", n, k)
shares, err := shamirSplit(key, n, k)
if err != nil {
return fmt.Errorf("shamir split failed: %w", err)
}
// Derive agent identity from the node's WG private key
identity, err := deriveAgentIdentity()
if err != nil {
return fmt.Errorf("failed to derive agent identity: %w", err)
}
for i, peer := range peers {
session, err := vaultAuth(peer.WGIP, identity)
if err != nil {
return fmt.Errorf("failed to authenticate with peer %s: %w", peer.WGIP, err)
}
shareB64 := base64.StdEncoding.EncodeToString(shares[i])
secretName := fmt.Sprintf("luks-key-%s", nodeID)
if err := vaultPutSecret(peer.WGIP, session, secretName, shareB64, 1); err != nil {
return fmt.Errorf("failed to store share on peer %s: %w", peer.WGIP, err)
}
log.Printf("stored share %d/%d on peer %s", i+1, n, peer.WGIP)
}
return nil
}
// FetchAndReconstruct fetches Shamir shares from peers and reconstructs the LUKS key.
// Uses exponential backoff: 1s, 2s, 4s, 8s, 16s, max 5 retries.
func FetchAndReconstruct(wg *wireguard.Manager) ([]byte, error) {
peers, err := loadPeerConfig()
if err != nil {
return nil, fmt.Errorf("failed to load peer config: %w", err)
}
nodeID, err := loadNodeID()
if err != nil {
return nil, fmt.Errorf("failed to load node ID: %w", err)
}
identity, err := deriveAgentIdentity()
if err != nil {
return nil, fmt.Errorf("failed to derive agent identity: %w", err)
}
n := len(peers)
k := int(math.Max(3, float64(n)/3.0))
if k > n {
k = n
}
secretName := fmt.Sprintf("luks-key-%s", nodeID)
var shares [][]byte
const maxRetries = 5
for attempt := 0; attempt <= maxRetries; attempt++ {
if attempt > 0 {
delay := time.Duration(1<<uint(attempt-1)) * time.Second
log.Printf("retrying share fetch in %v (attempt %d/%d)", delay, attempt, maxRetries)
time.Sleep(delay)
}
shares = nil
for _, peer := range peers {
session, authErr := vaultAuth(peer.WGIP, identity)
if authErr != nil {
log.Printf("auth failed with peer %s: %v", peer.WGIP, authErr)
continue
}
shareB64, getErr := vaultGetSecret(peer.WGIP, session, secretName)
if getErr != nil {
log.Printf("share fetch failed from peer %s: %v", peer.WGIP, getErr)
continue
}
shareBytes, decErr := base64.StdEncoding.DecodeString(shareB64)
if decErr != nil {
log.Printf("invalid share from peer %s: %v", peer.WGIP, decErr)
continue
}
shares = append(shares, shareBytes)
if len(shares) >= k+1 { // fetch K+1 for malicious share detection
break
}
}
if len(shares) >= k {
break
}
}
if len(shares) < k {
return nil, fmt.Errorf("could not fetch enough shares: got %d, need %d", len(shares), k)
}
// Reconstruct key
key, err := shamirCombine(shares[:k])
if err != nil {
return nil, fmt.Errorf("shamir combine failed: %w", err)
}
// If we have K+1 shares, verify consistency (malicious share detection).
// NOTE: shamirCombine assigns x-coordinates positionally (x = i+1), so
// shares[1:k+1] are mislabeled here and this check can report a mismatch
// even for honest shares; reliable detection needs each share's true x.
if len(shares) > k {
altKey, altErr := shamirCombine(shares[1 : k+1])
if altErr == nil && !bytes.Equal(key, altKey) {
log.Println("WARNING: malicious share detected — share sets produce different keys")
// TODO: identify the bad share, alert cluster, exclude that peer
}
ZeroBytes(altKey)
}
return key, nil
}
// ZeroBytes overwrites a byte slice with zeros to clear sensitive data from memory.
func ZeroBytes(b []byte) {
for i := range b {
b[i] = 0
}
}
// deriveAgentIdentity derives a deterministic identity from the WG private key.
func deriveAgentIdentity() (string, error) {
data, err := os.ReadFile("/etc/wireguard/private.key")
if err != nil {
return "", fmt.Errorf("failed to read WG private key: %w", err)
}
hash := sha256.Sum256(bytes.TrimSpace(data))
return hex.EncodeToString(hash[:]), nil
}
// vaultAuth authenticates with a peer's vault-guardian using the V2 challenge-response flow.
// Returns a session token valid for 1 hour.
func vaultAuth(peerIP, identity string) (string, error) {
client := &http.Client{Timeout: 10 * time.Second}
// Step 1: Request challenge
challengeBody, _ := json.Marshal(map[string]string{"identity": identity})
resp, err := client.Post(
fmt.Sprintf("http://%s:7500/v2/vault/auth/challenge", peerIP),
"application/json",
bytes.NewReader(challengeBody),
)
if err != nil {
return "", fmt.Errorf("challenge request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("challenge returned status %d", resp.StatusCode)
}
var challengeResp struct {
Nonce string `json:"nonce"`
Tag string `json:"tag"`
}
if err := json.NewDecoder(resp.Body).Decode(&challengeResp); err != nil {
return "", fmt.Errorf("failed to parse challenge response: %w", err)
}
// Step 2: Create session
sessionBody, _ := json.Marshal(map[string]string{
"identity": identity,
"nonce": challengeResp.Nonce,
"tag": challengeResp.Tag,
})
resp2, err := client.Post(
fmt.Sprintf("http://%s:7500/v2/vault/auth/session", peerIP),
"application/json",
bytes.NewReader(sessionBody),
)
if err != nil {
return "", fmt.Errorf("session request failed: %w", err)
}
defer resp2.Body.Close()
if resp2.StatusCode != http.StatusOK {
return "", fmt.Errorf("session returned status %d", resp2.StatusCode)
}
var sessionResp struct {
Token string `json:"token"`
}
if err := json.NewDecoder(resp2.Body).Decode(&sessionResp); err != nil {
return "", fmt.Errorf("failed to parse session response: %w", err)
}
return sessionResp.Token, nil
}
// vaultPutSecret stores a secret via the V2 vault API (PUT).
func vaultPutSecret(peerIP, sessionToken, name, value string, version int) error {
client := &http.Client{Timeout: 10 * time.Second}
body, _ := json.Marshal(map[string]interface{}{
"share": value,
"version": version,
})
req, err := http.NewRequest("PUT",
fmt.Sprintf("http://%s:7500/v2/vault/secrets/%s", peerIP, name),
bytes.NewReader(body))
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("X-Session-Token", sessionToken)
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("PUT request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
respBody, _ := io.ReadAll(resp.Body)
return fmt.Errorf("vault PUT returned %d: %s", resp.StatusCode, string(respBody))
}
return nil
}
// vaultGetSecret retrieves a secret via the V2 vault API (GET).
func vaultGetSecret(peerIP, sessionToken, name string) (string, error) {
client := &http.Client{Timeout: 10 * time.Second}
req, err := http.NewRequest("GET",
fmt.Sprintf("http://%s:7500/v2/vault/secrets/%s", peerIP, name), nil)
if err != nil {
return "", err
}
req.Header.Set("X-Session-Token", sessionToken)
resp, err := client.Do(req)
if err != nil {
return "", fmt.Errorf("GET request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("vault GET returned %d", resp.StatusCode)
}
var result struct {
Share string `json:"share"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return "", fmt.Errorf("failed to parse vault response: %w", err)
}
return result.Share, nil
}
// shamirSplit splits a secret into n shares with threshold k.
// Uses Shamir's Secret Sharing over GF(256).
func shamirSplit(secret []byte, n, k int) ([][]byte, error) {
if n < k {
return nil, fmt.Errorf("n (%d) must be >= k (%d)", n, k)
}
if k < 2 {
return nil, fmt.Errorf("threshold must be >= 2")
}
shares := make([][]byte, n)
for i := range shares {
shares[i] = make([]byte, len(secret))
}
// For each byte of the secret, create a random polynomial of degree k-1
for byteIdx := 0; byteIdx < len(secret); byteIdx++ {
// Generate random coefficients for the polynomial
// coeffs[0] = secret byte, coeffs[1..k-1] = random
coeffs := make([]byte, k)
coeffs[0] = secret[byteIdx]
if _, err := rand.Read(coeffs[1:]); err != nil {
return nil, err
}
// Evaluate polynomial at points 1, 2, ..., n
for i := 0; i < n; i++ {
x := byte(i + 1) // x = 1, 2, ..., n (never 0)
shares[i][byteIdx] = evalPolynomial(coeffs, x)
}
}
return shares, nil
}
// shamirCombine reconstructs a secret from k shares using Lagrange interpolation over GF(256).
func shamirCombine(shares [][]byte) ([]byte, error) {
if len(shares) < 2 {
return nil, fmt.Errorf("need at least 2 shares")
}
secretLen := len(shares[0])
secret := make([]byte, secretLen)
// Share x-coordinates are 1-based (x = 1, 2, 3, ...).
// NOTE: this assumes shares[i] was generated at x = i+1, i.e. the caller
// supplies shares in their original order with none missing from the front.
xs := make([]byte, len(shares))
for i := range xs {
xs[i] = byte(i + 1)
}
for byteIdx := 0; byteIdx < secretLen; byteIdx++ {
// Lagrange interpolation at x=0
var val byte
for i, xi := range xs {
// Compute Lagrange basis polynomial L_i(0)
num := byte(1)
den := byte(1)
for j, xj := range xs {
if i == j {
continue
}
num = gf256Mul(num, xj) // 0 - xj = xj in GF(256) (additive inverse = self)
den = gf256Mul(den, xi^xj) // xi - xj = xi XOR xj
}
lagrange := gf256Mul(num, gf256Inv(den))
val ^= gf256Mul(shares[i][byteIdx], lagrange)
}
secret[byteIdx] = val
}
return secret, nil
}
// evalPolynomial evaluates a polynomial at x over GF(256).
func evalPolynomial(coeffs []byte, x byte) byte {
result := coeffs[len(coeffs)-1]
for i := len(coeffs) - 2; i >= 0; i-- {
result = gf256Mul(result, x) ^ coeffs[i]
}
return result
}
// GF(256) multiplication using the AES (Rijndael) irreducible polynomial: x^8 + x^4 + x^3 + x + 1
func gf256Mul(a, b byte) byte {
var result byte
for b > 0 {
if b&1 != 0 {
result ^= a
}
hi := a & 0x80
a <<= 1
if hi != 0 {
a ^= 0x1B // x^8 + x^4 + x^3 + x + 1
}
b >>= 1
}
return result
}
// gf256Inv computes the multiplicative inverse in GF(256).
// Uses Fermat's little theorem: a^(-1) = a^254, since a^255 = 1 for a != 0.
func gf256Inv(a byte) byte {
if a == 0 {
return 0 // 0 has no inverse, but we return 0 by convention
}
result := a
for i := 0; i < 6; i++ {
result = gf256Mul(result, result)
result = gf256Mul(result, a)
}
result = gf256Mul(result, result) // now result = a^254
return result
}
// loadPeerConfig loads the peer list from the enrollment config.
func loadPeerConfig() ([]types.Peer, error) {
data, err := os.ReadFile(filepath.Join(OramaDir, "configs", "peers.json"))
if err != nil {
return nil, err
}
var peers []types.Peer
if err := json.Unmarshal(data, &peers); err != nil {
return nil, err
}
return peers, nil
}
// loadNodeID loads this node's ID from the enrollment config.
func loadNodeID() (string, error) {
data, err := os.ReadFile(filepath.Join(OramaDir, "configs", "node-id"))
if err != nil {
return "", err
}
return strings.TrimSpace(string(data)), nil
}

@ -0,0 +1,197 @@
// Package command implements the command receiver that accepts instructions
// from the Gateway over WireGuard.
//
// The agent listens on a local HTTP endpoint (only accessible via WG) for
// commands such as restart and status; logs and health are exposed as
// separate read-only endpoints ("leave" is not yet implemented).
package command
import (
"context"
"encoding/json"
"fmt"
"log"
"net/http"
"os"
"strings"
"time"
"github.com/DeBrosOfficial/orama-os/agent/internal/sandbox"
)
const (
// ListenAddr is the address for the command receiver (WG-only). Port 9998 is
// reused once the one-shot genesis unlock server has shut down.
ListenAddr = ":9998"
)
// Command represents an incoming command from the Gateway.
type Command struct {
Action string `json:"action"` // "restart", "status", "logs", "leave"
Service string `json:"service"` // optional: specific service name
}
// Receiver listens for commands from the Gateway.
type Receiver struct {
supervisor *sandbox.Supervisor
server *http.Server
}
// NewReceiver creates a new command receiver.
func NewReceiver(supervisor *sandbox.Supervisor) *Receiver {
return &Receiver{
supervisor: supervisor,
}
}
// Listen starts the HTTP server for receiving commands.
func (r *Receiver) Listen() {
mux := http.NewServeMux()
mux.HandleFunc("/v1/agent/command", r.handleCommand)
mux.HandleFunc("/v1/agent/status", r.handleStatus)
mux.HandleFunc("/v1/agent/health", r.handleHealth)
mux.HandleFunc("/v1/agent/logs", r.handleLogs)
r.server = &http.Server{
Addr: ListenAddr,
Handler: mux,
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
}
log.Printf("command receiver listening on %s", ListenAddr)
if err := r.server.ListenAndServe(); err != http.ErrServerClosed {
log.Printf("command receiver error: %v", err)
}
}
// Stop gracefully shuts down the command receiver.
func (r *Receiver) Stop() {
if r.server != nil {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
r.server.Shutdown(ctx)
}
}
func (r *Receiver) handleCommand(w http.ResponseWriter, req *http.Request) {
if req.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
var cmd Command
if err := json.NewDecoder(req.Body).Decode(&cmd); err != nil {
http.Error(w, "invalid JSON", http.StatusBadRequest)
return
}
log.Printf("received command: %s (service: %s)", cmd.Action, cmd.Service)
switch cmd.Action {
case "restart":
if cmd.Service == "" {
http.Error(w, "service name required for restart", http.StatusBadRequest)
return
}
if err := r.supervisor.RestartService(cmd.Service); err != nil {
writeJSON(w, http.StatusInternalServerError, map[string]string{"error": err.Error()})
return
}
writeJSON(w, http.StatusOK, map[string]string{"status": "restarted"})
case "status":
status := r.supervisor.GetStatus()
writeJSON(w, http.StatusOK, status)
default:
writeJSON(w, http.StatusBadRequest, map[string]string{"error": "unknown action: " + cmd.Action})
}
}
func (r *Receiver) handleStatus(w http.ResponseWriter, req *http.Request) {
status := r.supervisor.GetStatus()
writeJSON(w, http.StatusOK, status)
}
func (r *Receiver) handleHealth(w http.ResponseWriter, req *http.Request) {
status := r.supervisor.GetStatus()
healthy := true
for _, running := range status {
if !running {
healthy = false
break
}
}
result := map[string]interface{}{
"healthy": healthy,
"services": status,
}
writeJSON(w, http.StatusOK, result)
}
func (r *Receiver) handleLogs(w http.ResponseWriter, req *http.Request) {
service := req.URL.Query().Get("service")
if service == "" {
service = "all"
}
linesParam := req.URL.Query().Get("lines")
maxLines := 100
if linesParam != "" {
if n, err := parseInt(linesParam); err == nil && n > 0 {
maxLines = n
if maxLines > 1000 {
maxLines = 1000
}
}
}
const logsDir = "/opt/orama/.orama/logs"
result := make(map[string]string)
if service == "all" {
// Return tail of each service log
services := []string{"rqlite", "olric", "ipfs", "ipfs-cluster", "gateway", "coredns"}
for _, svc := range services {
logPath := logsDir + "/" + svc + ".log"
lines := tailFile(logPath, maxLines)
result[svc] = lines
}
} else {
logPath := logsDir + "/" + service + ".log"
result[service] = tailFile(logPath, maxLines)
}
writeJSON(w, http.StatusOK, result)
}
func tailFile(path string, n int) string {
data, err := os.ReadFile(path)
if err != nil {
return ""
}
lines := strings.Split(string(data), "\n")
if len(lines) > n {
lines = lines[len(lines)-n:]
}
return strings.Join(lines, "\n")
}
// parseInt parses a non-negative decimal integer (no overflow guard; callers
// clamp the result, and an empty string parses as 0).
func parseInt(s string) (int, error) {
n := 0
for _, c := range s {
if c < '0' || c > '9' {
return 0, fmt.Errorf("not a number")
}
n = n*10 + int(c-'0')
}
return n, nil
}
func writeJSON(w http.ResponseWriter, code int, data interface{}) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
json.NewEncoder(w).Encode(data)
}

@ -0,0 +1,138 @@
// Package enroll implements the one-time enrollment server for OramaOS nodes.
//
// On first boot, the agent starts an HTTP server on port 9999 that serves
// a registration code. The operator retrieves this code and provides it to
// the Gateway (via `orama node enroll`). The Gateway then pushes cluster
// configuration back to the agent via an HTTP POST to /v1/agent/enroll/complete.
package enroll
import (
"context"
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"log"
"net/http"
"sync"
"time"
"github.com/DeBrosOfficial/orama-os/agent/internal/types"
)
// Result contains the enrollment data received from the Gateway.
type Result struct {
NodeID string `json:"node_id"`
WireGuardConfig string `json:"wireguard_config"`
ClusterSecret string `json:"cluster_secret"`
Peers []types.Peer `json:"peers"`
}
// Server is the enrollment HTTP server.
type Server struct {
gatewayURL string
result *Result
mu sync.Mutex
done chan struct{}
}
// NewServer creates a new enrollment server.
func NewServer(gatewayURL string) *Server {
return &Server{
gatewayURL: gatewayURL,
done: make(chan struct{}),
}
}
// Run starts the enrollment server and blocks until enrollment is complete.
// Returns the enrollment result containing cluster configuration.
func (s *Server) Run() (*Result, error) {
// Generate registration code (8 hex chars)
code, err := generateCode()
if err != nil {
return nil, fmt.Errorf("failed to generate registration code: %w", err)
}
log.Printf("ENROLLMENT CODE: %s", code)
log.Printf("Waiting for enrollment on port 9999...")
// Channel for enrollment completion
enrollCh := make(chan *Result, 1)
errCh := make(chan error, 1)
mux := http.NewServeMux()
// Serve registration code — one-shot endpoint
var served bool
mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
s.mu.Lock()
if served {
s.mu.Unlock()
http.Error(w, "already served", http.StatusGone)
return
}
served = true
s.mu.Unlock()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{
"code": code,
"expires": time.Now().Add(10 * time.Minute).Format(time.RFC3339),
})
})
// Receive enrollment config from Gateway (pushed after code verification)
mux.HandleFunc("/v1/agent/enroll/complete", func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
var result Result
if err := json.NewDecoder(r.Body).Decode(&result); err != nil {
http.Error(w, "invalid JSON", http.StatusBadRequest)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
enrollCh <- &result
})
server := &http.Server{
Addr: ":9999",
Handler: mux,
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
}
// Start server in background
go func() {
if err := server.ListenAndServe(); err != http.ErrServerClosed {
errCh <- fmt.Errorf("enrollment server error: %w", err)
}
}()
// Wait for enrollment or error
select {
case result := <-enrollCh:
// Gracefully shut down the enrollment server
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
server.Shutdown(ctx)
log.Println("enrollment server closed")
return result, nil
case err := <-errCh:
return nil, err
}
}
// generateCode generates an 8-character registration code (4 random bytes, hex-encoded).
func generateCode() (string, error) {
b := make([]byte, 4)
if _, err := rand.Read(b); err != nil {
return "", err
}
return hex.EncodeToString(b), nil
}

@ -0,0 +1,135 @@
// Package health provides periodic health reporting to the cluster.
package health
import (
"bytes"
"encoding/json"
"log"
"net/http"
"os"
"strings"
"sync"
"time"
"github.com/DeBrosOfficial/orama-os/agent/internal/sandbox"
)
const (
// ReportInterval is how often health reports are sent.
ReportInterval = 30 * time.Second
// GatewayHealthEndpoint is the gateway endpoint for health reports.
GatewayHealthEndpoint = "/v1/node/health"
)
// Report represents a health report sent to the cluster.
type Report struct {
NodeID string `json:"node_id"`
Version string `json:"version"`
Uptime int64 `json:"uptime_seconds"`
Services map[string]bool `json:"services"`
Healthy bool `json:"healthy"`
Timestamp time.Time `json:"timestamp"`
}
// Reporter periodically sends health reports.
type Reporter struct {
supervisor *sandbox.Supervisor
startTime time.Time
mu sync.Mutex
stopCh chan struct{}
stopped bool
}
// NewReporter creates a new health reporter.
func NewReporter(supervisor *sandbox.Supervisor) *Reporter {
return &Reporter{
supervisor: supervisor,
startTime: time.Now(),
stopCh: make(chan struct{}),
}
}
// RunLoop periodically sends health reports.
func (r *Reporter) RunLoop() {
log.Println("health reporter started")
ticker := time.NewTicker(ReportInterval)
defer ticker.Stop()
for {
r.sendReport()
select {
case <-ticker.C:
case <-r.stopCh:
return
}
}
}
// Stop signals the reporter to exit.
func (r *Reporter) Stop() {
r.mu.Lock()
defer r.mu.Unlock()
if !r.stopped {
r.stopped = true
close(r.stopCh)
}
}
func (r *Reporter) sendReport() {
status := r.supervisor.GetStatus()
healthy := true
for _, running := range status {
if !running {
healthy = false
break
}
}
report := Report{
NodeID: readNodeID(),
Version: readVersion(),
Uptime: int64(time.Since(r.startTime).Seconds()),
Services: status,
Healthy: healthy,
Timestamp: time.Now(),
}
body, err := json.Marshal(report)
if err != nil {
log.Printf("failed to marshal health report: %v", err)
return
}
// Send to local gateway (which forwards to the cluster)
client := &http.Client{Timeout: 5 * time.Second}
resp, err := client.Post(
"http://127.0.0.1:6001"+GatewayHealthEndpoint,
"application/json",
bytes.NewReader(body),
)
if err != nil {
// Gateway may not be up yet during startup — this is expected
return
}
resp.Body.Close()
}
func readNodeID() string {
data, err := os.ReadFile("/opt/orama/.orama/configs/node-id")
if err != nil {
return "unknown"
}
return strings.TrimSpace(string(data))
}
func readVersion() string {
data, err := os.ReadFile("/etc/orama-version")
if err != nil {
return "unknown"
}
return strings.TrimSpace(string(data))
}

@ -0,0 +1,274 @@
// Package sandbox manages service processes in isolated Linux namespaces.
//
// Each service runs with:
// - Separate mount namespace (CLONE_NEWNS) for filesystem isolation
// - Separate UTS namespace (CLONE_NEWUTS) for hostname isolation
// - Dedicated uid/gid (no root)
// - Read-only root filesystem except for the service's data directory
//
// No PID namespace (CLONE_NEWPID): services like RQLite and Olric would become PID 1
// in a new PID namespace, which changes signal semantics (a PID 1 process ignores
// SIGTERM by default unless it installs a handler). Mount + UTS namespaces provide
// sufficient isolation.
package sandbox
import (
"fmt"
"log"
"os"
"os/exec"
"sync"
"syscall"
)
// Config defines the sandbox parameters for a service.
type Config struct {
Name string // Human-readable name (e.g., "rqlite", "ipfs")
Binary string // Absolute path to the binary
Args []string // Command-line arguments
User uint32 // UID to run as
Group uint32 // GID to run as
DataDir string // Writable data directory
LogFile string // Path to log file
Seccomp SeccompMode // Seccomp enforcement mode
}
// Process represents a running sandboxed service.
type Process struct {
Config Config
cmd *exec.Cmd
}
// Start launches the service in an isolated namespace.
func Start(cfg Config) (*Process, error) {
// Write seccomp profile for this service
profilePath, err := WriteProfile(cfg.Name, cfg.Seccomp)
if err != nil {
log.Printf("WARNING: failed to write seccomp profile for %s: %v (running without seccomp)", cfg.Name, err)
} else {
modeStr := "enforce"
if cfg.Seccomp == SeccompAudit {
modeStr = "audit"
}
log.Printf("seccomp profile for %s written to %s (mode: %s)", cfg.Name, profilePath, modeStr)
}
cmd := exec.Command(cfg.Binary, cfg.Args...)
cmd.SysProcAttr = &syscall.SysProcAttr{
Cloneflags: syscall.CLONE_NEWNS | // mount namespace
syscall.CLONE_NEWUTS, // hostname namespace
Credential: &syscall.Credential{
Uid: cfg.User,
Gid: cfg.Group,
},
}
// Redirect output to log file
if cfg.LogFile != "" {
logFile, err := os.OpenFile(cfg.LogFile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
if err != nil {
return nil, fmt.Errorf("failed to open log file %s: %w", cfg.LogFile, err)
}
cmd.Stdout = logFile
cmd.Stderr = logFile
}
if err := cmd.Start(); err != nil {
return nil, fmt.Errorf("failed to start %s: %w", cfg.Name, err)
}
log.Printf("started %s (PID %d, UID %d)", cfg.Name, cmd.Process.Pid, cfg.User)
return &Process{Config: cfg, cmd: cmd}, nil
}
// Stop sends SIGTERM to the process and waits for exit.
func (p *Process) Stop() error {
if p.cmd == nil || p.cmd.Process == nil {
return nil
}
log.Printf("stopping %s (PID %d)", p.Config.Name, p.cmd.Process.Pid)
if err := p.cmd.Process.Signal(syscall.SIGTERM); err != nil {
return fmt.Errorf("failed to signal %s: %w", p.Config.Name, err)
}
if err := p.cmd.Wait(); err != nil {
// Process exited with non-zero — not necessarily an error during shutdown
log.Printf("%s exited: %v", p.Config.Name, err)
}
return nil
}
// IsRunning returns true if the process is still alive.
func (p *Process) IsRunning() bool {
if p.cmd == nil || p.cmd.Process == nil {
return false
}
// Signal 0 checks if the process exists
return p.cmd.Process.Signal(syscall.Signal(0)) == nil
}
// Supervisor manages the lifecycle of all sandboxed services.
type Supervisor struct {
mu sync.Mutex
processes map[string]*Process
}
// NewSupervisor creates a new service supervisor.
func NewSupervisor() *Supervisor {
return &Supervisor{
processes: make(map[string]*Process),
}
}
// StartAll launches all configured services in the correct dependency order.
// Order: RQLite → Olric → IPFS → IPFS Cluster → Gateway → CoreDNS
func (s *Supervisor) StartAll() error {
services := defaultServiceConfigs()
for _, cfg := range services {
proc, err := Start(cfg)
if err != nil {
return fmt.Errorf("failed to start %s: %w", cfg.Name, err)
}
s.mu.Lock()
s.processes[cfg.Name] = proc
s.mu.Unlock()
}
log.Printf("all %d services started", len(services))
return nil
}
// StopAll stops all services in reverse order.
func (s *Supervisor) StopAll() {
s.mu.Lock()
defer s.mu.Unlock()
// Stop in reverse dependency order
order := []string{"coredns", "gateway", "ipfs-cluster", "ipfs", "olric", "rqlite"}
for _, name := range order {
if proc, ok := s.processes[name]; ok {
if err := proc.Stop(); err != nil {
log.Printf("error stopping %s: %v", name, err)
}
}
}
}
// RestartService restarts a single service by name.
func (s *Supervisor) RestartService(name string) error {
s.mu.Lock()
proc, exists := s.processes[name]
s.mu.Unlock()
if !exists {
return fmt.Errorf("service %s not found", name)
}
if err := proc.Stop(); err != nil {
log.Printf("error stopping %s for restart: %v", name, err)
}
newProc, err := Start(proc.Config)
if err != nil {
return fmt.Errorf("failed to restart %s: %w", name, err)
}
s.mu.Lock()
s.processes[name] = newProc
s.mu.Unlock()
return nil
}
// GetStatus returns the running status of all services.
func (s *Supervisor) GetStatus() map[string]bool {
s.mu.Lock()
defer s.mu.Unlock()
status := make(map[string]bool)
for name, proc := range s.processes {
status[name] = proc.IsRunning()
}
return status
}
// defaultServiceConfigs returns the service configurations in startup order.
func defaultServiceConfigs() []Config {
const (
oramaDir = "/opt/orama/.orama"
binDir = "/opt/orama/bin"
logsDir = "/opt/orama/.orama/logs"
)
// Services start in SeccompAudit mode so their syscall usage can be profiled;
// switch to SeccompEnforce in production once the required syscall set is captured.
mode := SeccompAudit
return []Config{
{
Name: "rqlite",
Binary: "/usr/local/bin/rqlited",
Args: []string{"-node-id", "1", "-http-addr", "0.0.0.0:4001", "-raft-addr", "0.0.0.0:4002", oramaDir + "/data/rqlite"},
User: 1001,
Group: 1001,
DataDir: oramaDir + "/data/rqlite",
LogFile: logsDir + "/rqlite.log",
Seccomp: mode,
},
{
Name: "olric",
Binary: "/usr/local/bin/olric-server",
Args: nil, // configured via OLRIC_SERVER_CONFIG env
User: 1002,
Group: 1002,
DataDir: oramaDir + "/data",
LogFile: logsDir + "/olric.log",
Seccomp: mode,
},
{
Name: "ipfs",
Binary: "/usr/local/bin/ipfs",
// NOTE: verify the ipfs binary supports --repo-dir; stock Kubo normally
// selects its repo via the IPFS_PATH environment variable.
Args: []string{"daemon", "--enable-pubsub-experiment", "--repo-dir=" + oramaDir + "/data/ipfs/repo"},
User: 1003,
Group: 1003,
DataDir: oramaDir + "/data/ipfs",
LogFile: logsDir + "/ipfs.log",
Seccomp: mode,
},
{
Name: "ipfs-cluster",
Binary: "/usr/local/bin/ipfs-cluster-service",
Args: []string{"daemon", "--config", oramaDir + "/data/ipfs-cluster/service.json"},
User: 1004,
Group: 1004,
DataDir: oramaDir + "/data/ipfs-cluster",
LogFile: logsDir + "/ipfs-cluster.log",
Seccomp: mode,
},
{
Name: "gateway",
Binary: binDir + "/gateway",
Args: []string{"--config", oramaDir + "/configs/gateway.yaml"},
User: 1005,
Group: 1005,
DataDir: oramaDir,
LogFile: logsDir + "/gateway.log",
Seccomp: mode,
},
{
Name: "coredns",
Binary: "/usr/local/bin/coredns",
Args: []string{"-conf", "/etc/coredns/Corefile"},
User: 1006,
Group: 1006,
DataDir: oramaDir,
LogFile: logsDir + "/coredns.log",
Seccomp: mode,
},
}
}


@ -0,0 +1,221 @@
package sandbox
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
)
// SeccompAction defines the action to take when a syscall is matched or not.
type SeccompAction string
const (
// ActionAllow allows the syscall.
ActionAllow SeccompAction = "SCMP_ACT_ALLOW"
// ActionLog logs the syscall but allows it (audit mode).
ActionLog SeccompAction = "SCMP_ACT_LOG"
// ActionKillProcess kills the process when the syscall is made.
ActionKillProcess SeccompAction = "SCMP_ACT_KILL_PROCESS"
)
// SeccompProfile defines a seccomp filter in the format understood by
// libseccomp / OCI runtime spec. The agent writes this to a temp file
// and applies it via the seccomp notifier or BPF loader before exec.
type SeccompProfile struct {
DefaultAction SeccompAction `json:"defaultAction"`
Syscalls []SeccompSyscall `json:"syscalls"`
}
// SeccompSyscall defines a set of syscalls and the action to take.
type SeccompSyscall struct {
Names []string `json:"names"`
Action SeccompAction `json:"action"`
}
// SeccompMode controls enforcement level.
type SeccompMode int
const (
// SeccompEnforce kills the process on disallowed syscalls.
SeccompEnforce SeccompMode = iota
// SeccompAudit logs disallowed syscalls but allows them (for profiling).
SeccompAudit
)
// baseSyscalls are syscalls every service needs for basic operation.
var baseSyscalls = []string{
// Process lifecycle
"exit", "exit_group", "getpid", "getppid", "gettid",
"clone", "clone3", "fork", "vfork", "execve", "execveat",
"wait4", "waitid",
// Memory management
"brk", "mmap", "munmap", "mremap", "mprotect", "madvise",
"mlock", "munlock",
// File operations
"read", "write", "pread64", "pwrite64", "readv", "writev",
"open", "openat", "close", "dup", "dup2", "dup3",
"stat", "fstat", "lstat", "newfstatat",
"access", "faccessat", "faccessat2",
"lseek", "fcntl", "flock",
"getcwd", "readlink", "readlinkat",
"getdents64",
// Directory operations
"mkdir", "mkdirat", "rmdir",
"rename", "renameat", "renameat2",
"unlink", "unlinkat",
"symlink", "symlinkat",
"link", "linkat",
"chmod", "fchmod", "fchmodat",
"chown", "fchown", "fchownat",
"utimensat",
// IO multiplexing
"epoll_create1", "epoll_ctl", "epoll_wait", "epoll_pwait", "epoll_pwait2",
"poll", "ppoll", "select", "pselect6",
"eventfd", "eventfd2",
// Networking (basic)
"socket", "connect", "accept", "accept4",
"bind", "listen",
"sendto", "recvfrom", "sendmsg", "recvmsg",
"shutdown", "getsockname", "getpeername",
"getsockopt", "setsockopt",
// Signals
"rt_sigaction", "rt_sigprocmask", "rt_sigreturn",
"sigaltstack", "kill", "tgkill",
// Time
"clock_gettime", "clock_getres", "gettimeofday",
"nanosleep", "clock_nanosleep",
// Threading / synchronization
"futex", "set_robust_list", "get_robust_list",
"set_tid_address",
// System info
"uname", "getuid", "getgid", "geteuid", "getegid",
"getgroups", "getrlimit", "setrlimit", "prlimit64",
"sysinfo", "getrandom",
// Pipe and IPC
"pipe", "pipe2",
"ioctl",
// Misc
"arch_prctl", "prctl", "seccomp",
"sched_yield", "sched_getaffinity",
"rseq",
"close_range",
"membarrier",
}
// ServiceSyscalls defines additional syscalls required by each service
// beyond the base set. These were determined by running services in audit
// mode (SCMP_ACT_LOG) and capturing required syscalls.
var ServiceSyscalls = map[string][]string{
"rqlite": {
// Raft log + SQLite WAL
"fsync", "fdatasync", "ftruncate", "fallocate",
"sync_file_range",
// SQLite memory-mapped I/O
"mincore",
// Raft networking (TCP)
"sendfile",
},
"olric": {
// Memberlist gossip (UDP multicast + TCP)
"sendmmsg", "recvmmsg",
// Embedded map operations
"fsync", "fdatasync", "ftruncate",
},
"ipfs": {
// Block storage and data transfer
"sendfile", "splice", "tee",
// Repo management
"fsync", "fdatasync", "ftruncate", "fallocate",
// libp2p networking
"sendmmsg", "recvmmsg",
},
"ipfs-cluster": {
// CRDT datastore
"fsync", "fdatasync", "ftruncate", "fallocate",
// libp2p networking
"sendfile",
},
"gateway": {
// HTTP server
"sendfile", "splice",
// WebSocket
"sendmmsg", "recvmmsg",
// TLS
"fsync", "fdatasync",
},
"coredns": {
// DNS (UDP + TCP on port 53)
"sendmmsg", "recvmmsg",
// Zone file / cache
"fsync", "fdatasync",
},
}
// BuildProfile creates a seccomp profile for the given service.
func BuildProfile(serviceName string, mode SeccompMode) *SeccompProfile {
defaultAction := ActionKillProcess
if mode == SeccompAudit {
defaultAction = ActionLog
}
// Combine base + service-specific syscalls
allowed := make([]string, len(baseSyscalls))
copy(allowed, baseSyscalls)
if extra, ok := ServiceSyscalls[serviceName]; ok {
allowed = append(allowed, extra...)
}
return &SeccompProfile{
DefaultAction: defaultAction,
Syscalls: []SeccompSyscall{
{
Names: allowed,
Action: ActionAllow,
},
},
}
}
// WriteProfile writes a seccomp profile to a temporary file and returns the path.
// The caller is responsible for removing the file after the process starts.
func WriteProfile(serviceName string, mode SeccompMode) (string, error) {
profile := BuildProfile(serviceName, mode)
data, err := json.MarshalIndent(profile, "", " ")
if err != nil {
return "", fmt.Errorf("failed to marshal seccomp profile: %w", err)
}
dir := "/tmp/orama-seccomp"
if err := os.MkdirAll(dir, 0700); err != nil {
return "", fmt.Errorf("failed to create seccomp dir: %w", err)
}
path := filepath.Join(dir, serviceName+".json")
if err := os.WriteFile(path, data, 0600); err != nil {
return "", fmt.Errorf("failed to write seccomp profile: %w", err)
}
return path, nil
}
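As a standalone illustration of the JSON these helpers emit, the sketch below mirrors the profile types (duplicated here so the demo is self-contained) and marshals an audit-mode profile with a small example syscall list:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profile and syscallGroup mirror SeccompProfile / SeccompSyscall above,
// duplicated so this demo compiles on its own.
type profile struct {
	DefaultAction string         `json:"defaultAction"`
	Syscalls      []syscallGroup `json:"syscalls"`
}

type syscallGroup struct {
	Names  []string `json:"names"`
	Action string   `json:"action"`
}

// buildDemoProfile builds an audit-mode profile: unlisted syscalls are
// logged (SCMP_ACT_LOG), listed ones are allowed.
func buildDemoProfile() profile {
	return profile{
		DefaultAction: "SCMP_ACT_LOG",
		Syscalls: []syscallGroup{{
			Names:  []string{"read", "write", "fsync"},
			Action: "SCMP_ACT_ALLOW",
		}},
	}
}

func main() {
	data, err := json.MarshalIndent(buildDemoProfile(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```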


@ -0,0 +1,8 @@
// Package types defines shared types used across agent packages.
package types
// Peer represents a cluster peer with vault-guardian access.
type Peer struct {
WGIP string `json:"wg_ip"`
NodeID string `json:"node_id"`
}


@ -0,0 +1,314 @@
// Package update implements OTA updates with A/B partition switching.
//
// Partition layout:
//
// /dev/sda1 — rootfs-A (current or standby, read-only, dm-verity)
// /dev/sda2 — rootfs-B (standby or current, read-only, dm-verity)
// /dev/sda3 — data (LUKS encrypted, persistent)
//
// Uses systemd-boot with Boot Loader Specification (BLS) entries.
// Boot counting: tries_left=3 on new partition, decremented each boot.
// If all tries exhausted, systemd-boot falls back to the other partition.
package update
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
"os/exec"
"strings"
"sync"
"time"
)
const (
// CheckInterval is how often we check for updates.
CheckInterval = 1 * time.Hour
// UpdateURL is the endpoint to check for new versions.
UpdateURL = "https://updates.orama.network/v1/latest"
// PartitionA is the rootfs-A device.
PartitionA = "/dev/sda1"
// PartitionB is the rootfs-B device.
PartitionB = "/dev/sda2"
)
// VersionInfo describes an available update.
type VersionInfo struct {
Version string `json:"version"`
Arch string `json:"arch"`
SHA256 string `json:"sha256"`
Signature string `json:"signature"`
URL string `json:"url"`
Size int64 `json:"size"`
}
// Manager handles OTA updates.
type Manager struct {
mu sync.Mutex
stopCh chan struct{}
stopped bool
}
// NewManager creates a new update manager.
func NewManager() *Manager {
return &Manager{
stopCh: make(chan struct{}),
}
}
// RunLoop periodically checks for updates and applies them.
func (m *Manager) RunLoop() {
log.Println("update manager started")
// Initial delay to let the system stabilize after boot
select {
case <-time.After(5 * time.Minute):
case <-m.stopCh:
return
}
ticker := time.NewTicker(CheckInterval)
defer ticker.Stop()
for {
if err := m.checkAndApply(); err != nil {
log.Printf("update check failed: %v", err)
}
select {
case <-ticker.C:
case <-m.stopCh:
return
}
}
}
// Stop signals the update loop to exit.
func (m *Manager) Stop() {
m.mu.Lock()
defer m.mu.Unlock()
if !m.stopped {
m.stopped = true
close(m.stopCh)
}
}
// checkAndApply checks for a new version and applies it if available.
func (m *Manager) checkAndApply() error {
arch := detectArch()
resp, err := http.Get(fmt.Sprintf("%s?arch=%s", UpdateURL, arch))
if err != nil {
return fmt.Errorf("failed to check for updates: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNoContent {
return nil // no update available
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("update server returned %d", resp.StatusCode)
}
var info VersionInfo
if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
return fmt.Errorf("failed to parse update info: %w", err)
}
currentVersion := readCurrentVersion()
if info.Version == currentVersion {
return nil // already up to date
}
log.Printf("update available: %s → %s", currentVersion, info.Version)
// Download image
imagePath, err := m.download(info)
if err != nil {
return fmt.Errorf("download failed: %w", err)
}
defer os.Remove(imagePath)
// Verify checksum
if err := verifyChecksum(imagePath, info.SHA256); err != nil {
return fmt.Errorf("checksum verification failed: %w", err)
}
// Verify rootwallet signature (EVM personal_sign over the SHA256 hash)
if info.Signature == "" {
return fmt.Errorf("update has no signature — refusing to install")
}
if err := verifySignature(info.SHA256, info.Signature); err != nil {
return fmt.Errorf("signature verification failed: %w", err)
}
log.Println("update signature verified")
// Write to standby partition
standby := getStandbyPartition()
if err := writeImage(imagePath, standby); err != nil {
return fmt.Errorf("failed to write image: %w", err)
}
// Update bootloader entry with tries_left=3
if err := updateBootEntry(standby, info.Version); err != nil {
return fmt.Errorf("failed to update boot entry: %w", err)
}
log.Printf("update %s installed on %s — reboot to activate", info.Version, standby)
return nil
}
// download fetches the update image to a temporary file.
func (m *Manager) download(info VersionInfo) (string, error) {
log.Printf("downloading update %s (%d bytes)", info.Version, info.Size)
resp, err := http.Get(info.URL)
if err != nil {
return "", err
}
defer resp.Body.Close()
f, err := os.CreateTemp("/tmp", "orama-update-*.img")
if err != nil {
return "", err
}
if _, err := io.Copy(f, resp.Body); err != nil {
f.Close()
os.Remove(f.Name())
return "", err
}
f.Close()
return f.Name(), nil
}
// verifyChecksum verifies the SHA256 checksum of a file.
func verifyChecksum(path, expected string) error {
f, err := os.Open(path)
if err != nil {
return err
}
defer f.Close()
h := sha256.New()
if _, err := io.Copy(h, f); err != nil {
return err
}
got := hex.EncodeToString(h.Sum(nil))
if got != expected {
return fmt.Errorf("checksum mismatch: got %s, expected %s", got, expected)
}
return nil
}
// writeImage writes a raw image to a partition device.
func writeImage(imagePath, device string) error {
log.Printf("writing image to %s", device)
cmd := exec.Command("dd", "if="+imagePath, "of="+device, "bs=4M", "conv=fsync")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("dd failed: %w\n%s", err, string(output))
}
return nil
}
// getStandbyPartition returns the partition that is NOT currently booted.
func getStandbyPartition() string {
// Read current root device from /proc/cmdline
cmdline, err := os.ReadFile("/proc/cmdline")
if err != nil {
return PartitionB // fallback
}
if strings.Contains(string(cmdline), "root="+PartitionA) {
return PartitionB
}
return PartitionA
}
// getCurrentPartition returns the currently booted partition.
func getCurrentPartition() string {
cmdline, err := os.ReadFile("/proc/cmdline")
if err != nil {
return PartitionA
}
if strings.Contains(string(cmdline), "root="+PartitionB) {
return PartitionB
}
return PartitionA
}
// updateBootEntry configures systemd-boot to boot from the standby partition
// with tries_left=3 for automatic rollback.
func updateBootEntry(partition, version string) error {
// Create BLS entry with boot counting
entryName := "orama-" + version
entryPath := fmt.Sprintf("/boot/loader/entries/%s+3.conf", entryName)
content := fmt.Sprintf(`title OramaOS %s
linux /vmlinuz
options root=%s ro quiet
`, version, partition)
if err := os.MkdirAll("/boot/loader/entries", 0755); err != nil {
return err
}
if err := os.WriteFile(entryPath, []byte(content), 0644); err != nil {
return err
}
// Set as one-shot boot target
cmd := exec.Command("bootctl", "set-oneshot", entryName+"+3")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("bootctl set-oneshot failed: %w\n%s", err, string(output))
}
return nil
}
// MarkBootSuccessful marks the current boot as successful, removing the
// tries counter so systemd-boot doesn't fall back.
func MarkBootSuccessful() error {
cmd := exec.Command("bootctl", "set-default", "orama")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("bootctl set-default failed: %w\n%s", err, string(output))
}
log.Println("boot marked as successful")
return nil
}
// readCurrentVersion reads the installed version from /etc/orama-version.
func readCurrentVersion() string {
data, err := os.ReadFile("/etc/orama-version")
if err != nil {
return "unknown"
}
return strings.TrimSpace(string(data))
}
// detectArch returns the current architecture.
func detectArch() string {
data, err := os.ReadFile("/proc/sys/kernel/arch")
if err != nil {
return "amd64"
}
arch := strings.TrimSpace(string(data))
if arch == "x86_64" {
return "amd64"
}
return arch
}


@ -0,0 +1,216 @@
package update
import (
"crypto/ecdsa"
"crypto/elliptic"
"encoding/hex"
"errors"
"fmt"
"math/big"
"strings"
"golang.org/x/crypto/sha3"
)
// SignerAddress is the Ethereum address authorized to sign OramaOS updates.
// Updates signed by any other address are rejected.
const SignerAddress = "0xb5d8a496c8b2412990d7D467E17727fdF5954afC"
// verifySignature verifies an EVM personal_sign signature against the expected signer.
// hashHex is the hex-encoded SHA-256 hash of the update checksum file.
// signatureHex is the 65-byte hex-encoded EVM signature (r || s || v).
func verifySignature(hashHex, signatureHex string) error {
if SignerAddress == "0x0000000000000000000000000000000000000000" {
return fmt.Errorf("signer address not configured — refusing unsigned update")
}
sigBytes, err := hex.DecodeString(strings.TrimPrefix(signatureHex, "0x"))
if err != nil {
return fmt.Errorf("invalid signature hex: %w", err)
}
if len(sigBytes) != 65 {
return fmt.Errorf("invalid signature length: got %d, expected 65", len(sigBytes))
}
// Compute EVM personal_sign message hash
msgHash := personalSignHash(hashHex)
// Split signature into r, s, v
r := new(big.Int).SetBytes(sigBytes[:32])
s := new(big.Int).SetBytes(sigBytes[32:64])
v := sigBytes[64]
if v >= 27 {
v -= 27
}
if v > 1 {
return fmt.Errorf("invalid signature recovery id: %d", v)
}
// Recover public key
pubKey, err := recoverPubkey(msgHash, r, s, v)
if err != nil {
return fmt.Errorf("public key recovery failed: %w", err)
}
// Derive Ethereum address
recovered := pubkeyToAddress(pubKey)
expected := strings.ToLower(strings.TrimPrefix(SignerAddress, "0x"))
got := strings.ToLower(strings.TrimPrefix(recovered, "0x"))
if got != expected {
return fmt.Errorf("update signed by 0x%s, expected 0x%s", got, expected)
}
return nil
}
// personalSignHash computes keccak256("\x19Ethereum Signed Message:\n" + len(msg) + msg).
func personalSignHash(message string) []byte {
prefix := fmt.Sprintf("\x19Ethereum Signed Message:\n%d", len(message))
h := sha3.NewLegacyKeccak256()
h.Write([]byte(prefix))
h.Write([]byte(message))
return h.Sum(nil)
}
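The EIP-191 prefix construction can be checked in isolation (keccak omitted here so the demo needs no external package):

```go
package main

import "fmt"

// personalPrefix builds the EIP-191 personal_sign prefix for a message,
// as personalSignHash does above. For the 64-character hex digest the
// agent signs, the length component is "64".
func personalPrefix(msg string) string {
	return fmt.Sprintf("\x19Ethereum Signed Message:\n%d", len(msg))
}

func main() {
	fmt.Printf("%q\n", personalPrefix("hello"))
}
```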
// recoverPubkey recovers the ECDSA public key from a secp256k1 signature.
// Uses the standard EC point recovery algorithm.
func recoverPubkey(hash []byte, r, s *big.Int, v byte) (*ecdsa.PublicKey, error) {
// secp256k1 curve parameters
curve := secp256k1Curve()
N := curve.Params().N
P := curve.Params().P
if r.Sign() <= 0 || s.Sign() <= 0 {
return nil, errors.New("invalid signature: r or s is zero")
}
if r.Cmp(N) >= 0 || s.Cmp(N) >= 0 {
return nil, errors.New("invalid signature: r or s >= N")
}
	// The candidate x coordinate is r. (In the EVM scheme v encodes the
	// parity of R.y; the x = r + N overflow case is astronomically rare
	// and not supported here.)
	x := new(big.Int).Set(r)
if x.Cmp(P) >= 0 {
return nil, errors.New("invalid recovery: x >= P")
}
// Step 2: Recover the y coordinate from x
Rx, Ry, err := decompressPoint(curve, x)
if err != nil {
return nil, fmt.Errorf("point decompression failed: %w", err)
}
	// Normalize R.y to the even-parity candidate; the loop below tries
	// both parities and keeps whichever recovered key actually verifies
	// the signature, so v's parity bit is not trusted directly.
	if Ry.Bit(0) != 0 {
		Ry.Sub(P, Ry) // negate y
	}
for _, negateY := range []bool{false, true} {
testRy := new(big.Int).Set(Ry)
if negateY {
testRy.Sub(P, testRy)
}
// Step 3: Compute public key: Q = r^(-1) * (s*R - e*G)
rInv := new(big.Int).ModInverse(r, N)
if rInv == nil {
return nil, errors.New("r has no modular inverse")
}
// s * R
sRx, sRy := curve.ScalarMult(Rx, testRy, s.Bytes())
// e * G (where e = hash interpreted as big.Int)
e := new(big.Int).SetBytes(hash)
eGx, eGy := curve.ScalarBaseMult(e.Bytes())
// s*R - e*G = s*R + (-e*G)
negEGy := new(big.Int).Sub(P, eGy)
qx, qy := curve.Add(sRx, sRy, eGx, negEGy)
// Q = r^(-1) * (s*R - e*G)
qx, qy = curve.ScalarMult(qx, qy, rInv.Bytes())
// Verify: the recovered key should produce a valid signature
pub := &ecdsa.PublicKey{Curve: curve, X: qx, Y: qy}
if ecdsa.Verify(pub, hash, r, s) {
return pub, nil
}
}
return nil, errors.New("could not recover public key from signature")
}
// pubkeyToAddress derives an Ethereum address from a public key.
// address = keccak256(uncompressed_pubkey_bytes[1:])[12:]
func pubkeyToAddress(pub *ecdsa.PublicKey) string {
pubBytes := elliptic.Marshal(pub.Curve, pub.X, pub.Y)
h := sha3.NewLegacyKeccak256()
h.Write(pubBytes[1:]) // skip 0x04 prefix
hash := h.Sum(nil)
return "0x" + hex.EncodeToString(hash[12:])
}
// decompressPoint recovers the y coordinate from x on the given curve.
// Solves y² = x³ + 7 (secp256k1: a=0, b=7).
func decompressPoint(curve elliptic.Curve, x *big.Int) (*big.Int, *big.Int, error) {
P := curve.Params().P
// y² = x³ + b mod P
x3 := new(big.Int).Mul(x, x)
x3.Mul(x3, x)
x3.Mod(x3, P)
// b = 7 for secp256k1
b := big.NewInt(7)
y2 := new(big.Int).Add(x3, b)
y2.Mod(y2, P)
// y = sqrt(y²) mod P
// For P ≡ 3 (mod 4), sqrt(a) = a^((P+1)/4) mod P
// secp256k1's P ≡ 3 (mod 4), so this works.
exp := new(big.Int).Add(P, big.NewInt(1))
exp.Rsh(exp, 2) // (P+1)/4
y := new(big.Int).Exp(y2, exp, P)
// Verify
verify := new(big.Int).Mul(y, y)
verify.Mod(verify, P)
if verify.Cmp(y2) != 0 {
return nil, nil, fmt.Errorf("x=%s is not on the curve", x.Text(16))
}
return x, y, nil
}
// secp256k1Curve returns the secp256k1 elliptic curve used by Ethereum.
// Go's standard library doesn't include secp256k1, so its parameters are
// defined here. Caveat: elliptic.CurveParams' generic group arithmetic
// assumes a = -3 (as in the NIST curves), while secp256k1 has a = 0, so
// the wrapper type must override Add/Double/ScalarMult (or a dedicated
// secp256k1 library should be used) for correct point operations.
func secp256k1Curve() elliptic.Curve {
return &secp256k1CurveParams
}
var secp256k1CurveParams = secp256k1CurveImpl{
CurveParams: &elliptic.CurveParams{
P: hexBigInt("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F"),
N: hexBigInt("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"),
B: big.NewInt(7),
Gx: hexBigInt("79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798"),
Gy: hexBigInt("483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8"),
BitSize: 256,
Name: "secp256k1",
},
}
type secp256k1CurveImpl struct {
*elliptic.CurveParams
}
func hexBigInt(s string) *big.Int {
n, _ := new(big.Int).SetString(s, 16)
return n
}


@ -0,0 +1,139 @@
// Package wireguard manages the WireGuard interface for OramaOS.
//
// On OramaOS, WireGuard support is compiled into the kernel (mainlined
// in Linux 5.6; OramaOS ships 6.6).
// Configuration is written during enrollment and persisted on the rootfs.
package wireguard
import (
"fmt"
"log"
"os"
"os/exec"
)
const (
// Interface is the WireGuard interface name.
Interface = "wg0"
// ConfigPath is the default WireGuard configuration file.
ConfigPath = "/etc/wireguard/wg0.conf"
// PrivateKeyPath stores the WG private key separately for identity derivation.
PrivateKeyPath = "/etc/wireguard/private.key"
)
// Manager handles WireGuard interface lifecycle.
type Manager struct {
iface string
}
// NewManager creates a new WireGuard manager.
func NewManager() *Manager {
return &Manager{iface: Interface}
}
// Configure writes the WireGuard configuration to disk.
// Called during enrollment with config received from the Gateway.
func (m *Manager) Configure(config string) error {
if err := os.MkdirAll("/etc/wireguard", 0700); err != nil {
return fmt.Errorf("failed to create wireguard dir: %w", err)
}
if err := os.WriteFile(ConfigPath, []byte(config), 0600); err != nil {
return fmt.Errorf("failed to write WG config: %w", err)
}
log.Println("WireGuard configuration written")
return nil
}
// Up brings the WireGuard interface up using wg-quick.
func (m *Manager) Up() error {
log.Println("bringing up WireGuard interface")
cmd := exec.Command("wg-quick", "up", m.iface)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("wg-quick up failed: %w\n%s", err, string(output))
}
log.Println("WireGuard interface is up")
return nil
}
// Down takes the WireGuard interface down.
func (m *Manager) Down() error {
cmd := exec.Command("wg-quick", "down", m.iface)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("wg-quick down failed: %w\n%s", err, string(output))
}
log.Println("WireGuard interface is down")
return nil
}
// GetPeers returns the current WireGuard peer list with their IPs.
func (m *Manager) GetPeers() ([]string, error) {
cmd := exec.Command("wg", "show", m.iface, "allowed-ips")
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("wg show failed: %w", err)
}
var ips []string
for _, line := range splitLines(string(output)) {
if line == "" {
continue
}
// Format: "<pubkey>\t<ip>/32\n"
parts := splitTabs(line)
if len(parts) >= 2 {
ip := parts[1]
// Strip /32 suffix
if idx := indexOf(ip, '/'); idx >= 0 {
ip = ip[:idx]
}
ips = append(ips, ip)
}
}
return ips, nil
}
func splitLines(s string) []string {
var lines []string
start := 0
for i := 0; i < len(s); i++ {
if s[i] == '\n' {
lines = append(lines, s[start:i])
start = i + 1
}
}
if start < len(s) {
lines = append(lines, s[start:])
}
return lines
}
func splitTabs(s string) []string {
var parts []string
start := 0
for i := 0; i < len(s); i++ {
if s[i] == '\t' {
parts = append(parts, s[start:i])
start = i + 1
}
}
if start < len(s) {
parts = append(parts, s[start:])
}
return parts
}
func indexOf(s string, c byte) int {
for i := 0; i < len(s); i++ {
if s[i] == c {
return i
}
}
return -1
}
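A standalone sketch of the parsing GetPeers performs, run against a sample `wg show wg0 allowed-ips` output (public keys shortened, IPs hypothetical), using stdlib strings in place of the local helpers:

```go
package main

import (
	"fmt"
	"strings"
)

// peerIPs extracts peer IPs from `wg show <iface> allowed-ips` output,
// where each line is "<pubkey>\t<ip>/32" (possibly more allowed-ips after
// the first; only the first is taken here, matching GetPeers).
func peerIPs(output string) []string {
	var ips []string
	for _, line := range strings.Split(strings.TrimRight(output, "\n"), "\n") {
		parts := strings.Split(line, "\t")
		if len(parts) < 2 {
			continue
		}
		ip := parts[1]
		if idx := strings.IndexByte(ip, '/'); idx >= 0 {
			ip = ip[:idx] // strip the /32 suffix
		}
		ips = append(ips, ip)
	}
	return ips
}

func main() {
	sample := "AbC=\t10.8.0.2/32\nDeF=\t10.8.0.3/32\n"
	fmt.Println(peerIPs(sample))
}
```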


@ -0,0 +1,60 @@
# OramaOS disk image layout
#
# Partition table:
# sda1 — rootfs-A (SquashFS + dm-verity, read-only)
# sda2 — rootfs-B (standby for A/B updates, same size)
# sda3 — data (formatted as LUKS2 during enrollment)
#
# EFI System Partition contains systemd-boot + kernel.
image efi-part.vfat {
	vfat {
		file EFI/Linux/vmlinuz {
			image = "bzImage"
		}
	}
	size = 64M
}
image rootfs-a.img {
hdimage {}
partition rootfs-a {
image = "rootfs.squashfs"
}
}
image orama-os.img {
hdimage {
gpt = true
}
# EFI System Partition (systemd-boot + kernel)
partition esp {
image = "efi-part.vfat"
partition-type-uuid = "c12a7328-f81f-11d2-ba4b-00a0c93ec93b"
bootable = true
size = 64M
}
# rootfs-A (active partition)
partition rootfs-a {
image = "rootfs.squashfs"
partition-type-uuid = "0fc63daf-8483-4772-8e79-3d69d8477de4"
size = 2G
}
# rootfs-B (standby for A/B updates)
partition rootfs-b {
partition-type-uuid = "0fc63daf-8483-4772-8e79-3d69d8477de4"
size = 2G
}
# Data partition (LUKS2 encrypted during enrollment)
# Fills remaining disk space on the target VPS.
partition data {
partition-type-uuid = "0fc63daf-8483-4772-8e79-3d69d8477de4"
size = 10G
}
}


@ -0,0 +1,92 @@
# OramaOS Kernel Configuration (Linux 6.6 LTS)
# This is a minimal config — only what OramaOS needs.
# Start from x86_64 defconfig and overlay these options.
# Architecture
CONFIG_64BIT=y
CONFIG_X86_64=y
# EFI boot (required for systemd-boot)
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_PARTITION=y
CONFIG_EFIVAR_FS=y
# WireGuard (built-in since 5.6, no compat module needed)
CONFIG_WIREGUARD=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_NET_FOU=y
CONFIG_NET_UDP_TUNNEL=y
# dm-verity (read-only rootfs integrity)
CONFIG_BLK_DEV_DM=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y
# dm-crypt / LUKS
CONFIG_DM_CRYPT=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
# Namespaces (for service sandboxing)
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_IPC_NS=y
# Seccomp (syscall filtering)
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# Cgroups (resource limiting)
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PIDS=y
CONFIG_MEMCG=y
# Filesystem support
CONFIG_EXT4_FS=y
CONFIG_SQUASHFS=y
CONFIG_SQUASHFS_XZ=y
CONFIG_VFAT_FS=y
# Block devices
CONFIG_BLK_DEV_LOOP=y
CONFIG_VIRTIO_BLK=y
# Networking
CONFIG_NET=y
CONFIG_INET=y
CONFIG_IPV6=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NETFILTER_XTABLES=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_NAT=y
# VirtIO (for QEMU testing and cloud VPS)
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
# Serial console (for QEMU debugging)
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
# Disable unnecessary features
# CONFIG_SOUND is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_WLAN is not set
# CONFIG_BLUETOOTH is not set
# CONFIG_NFS_FS is not set
# CONFIG_CIFS is not set


@ -0,0 +1,78 @@
#!/bin/bash
# OramaOS post-build script.
# Runs after Buildroot builds the rootfs but before image creation.
# $TARGET_DIR is the rootfs directory.
set -euo pipefail
TARGET_DIR="$1"
echo "=== OramaOS post_build.sh ==="
# --- Remove all shell access ---
# Operators must not have interactive access to OramaOS nodes.
# Busybox is kept for mount/umount/etc that systemd needs,
# but all shell entry points are removed.
rm -f "$TARGET_DIR/bin/bash"
rm -f "$TARGET_DIR/bin/ash"
rm -f "$TARGET_DIR/usr/bin/ssh"
rm -f "$TARGET_DIR/usr/sbin/sshd"
# Replace /bin/sh with /bin/false — any attempt to spawn a shell fails
ln -sf /bin/false "$TARGET_DIR/bin/sh"
# Remove getty / login (no console login)
rm -f "$TARGET_DIR/sbin/getty"
rm -f "$TARGET_DIR/bin/login"
rm -f "$TARGET_DIR/usr/bin/login"
# Disable all TTY gettys
rm -f "$TARGET_DIR/etc/systemd/system/getty.target.wants/"*
rm -f "$TARGET_DIR/etc/systemd/system/multi-user.target.wants/getty@"*
# --- Create service users ---
# Each service runs under a dedicated uid/gid (defined in sandbox.go).
for uid_name in "1001:rqlite" "1002:olric" "1003:ipfs" "1004:ipfscluster" "1005:gateway" "1006:coredns"; do
uid="${uid_name%%:*}"
name="${uid_name##*:}"
echo "${name}:x:${uid}:${uid}:${name} service:/nonexistent:/bin/false" >> "$TARGET_DIR/etc/passwd"
echo "${name}:x:${uid}:" >> "$TARGET_DIR/etc/group"
done
# --- Create required directories ---
mkdir -p "$TARGET_DIR/opt/orama/bin"
mkdir -p "$TARGET_DIR/opt/orama/.orama/configs"
mkdir -p "$TARGET_DIR/opt/orama/.orama/data"
mkdir -p "$TARGET_DIR/opt/orama/.orama/logs"
mkdir -p "$TARGET_DIR/etc/orama"
mkdir -p "$TARGET_DIR/etc/wireguard"
mkdir -p "$TARGET_DIR/boot/loader/entries"
# --- Copy pre-built binaries ---
# These are placed here by the outer build script (scripts/build.sh).
BINS_DIR="${BINARIES_DIR:-$TARGET_DIR/../images}"
if [ -d "$BINS_DIR/orama-bins" ]; then
cp "$BINS_DIR/orama-bins/orama-agent" "$TARGET_DIR/usr/bin/orama-agent"
chmod 755 "$TARGET_DIR/usr/bin/orama-agent"
# Service binaries go to /opt/orama/bin/ or /usr/local/bin/
for bin in rqlited olric-server ipfs ipfs-cluster-service coredns gateway; do
if [ -f "$BINS_DIR/orama-bins/$bin" ]; then
cp "$BINS_DIR/orama-bins/$bin" "$TARGET_DIR/usr/local/bin/$bin"
chmod 755 "$TARGET_DIR/usr/local/bin/$bin"
fi
done
fi
# --- Write version file ---
if [ -n "${ORAMA_VERSION:-}" ]; then
echo "$ORAMA_VERSION" > "$TARGET_DIR/etc/orama-version"
fi
# --- systemd-boot loader config ---
cat > "$TARGET_DIR/boot/loader/loader.conf" <<'LOADER'
default orama-*
timeout 0
console-mode max
LOADER
echo "=== OramaOS post_build.sh complete ==="


@ -0,0 +1,50 @@
#!/bin/bash
# OramaOS post-image script.
# Runs after rootfs image is created. Sets up dm-verity and final disk image.
# $BINARIES_DIR contains the built images (rootfs.squashfs, bzImage, etc.)
set -euo pipefail
BINARIES_DIR="$1"
BOARD_DIR="$(dirname "$0")"
echo "=== OramaOS post_image.sh ==="
# --- Generate dm-verity hash tree for rootfs ---
ROOTFS="$BINARIES_DIR/rootfs.squashfs"
VERITY_HASH="$BINARIES_DIR/rootfs.verity"
VERITY_TABLE="$BINARIES_DIR/rootfs.verity.table"
if command -v veritysetup &>/dev/null; then
echo "Generating dm-verity hash tree..."
veritysetup format "$ROOTFS" "$VERITY_HASH" > "$VERITY_TABLE"
ROOT_HASH=$(grep "Root hash:" "$VERITY_TABLE" | awk '{print $3}')
echo "dm-verity root hash: $ROOT_HASH"
echo "$ROOT_HASH" > "$BINARIES_DIR/rootfs.roothash"
else
echo "WARNING: veritysetup not found, skipping dm-verity (dev build only)"
fi
# --- Generate partition image using genimage ---
if [ -f "$BOARD_DIR/genimage.cfg" ]; then
GENIMAGE_TMP="$BINARIES_DIR/genimage.tmp"
rm -rf "$GENIMAGE_TMP"
genimage \
--rootpath "$TARGET_DIR" \
--tmppath "$GENIMAGE_TMP" \
--inputpath "$BINARIES_DIR" \
--outputpath "$BINARIES_DIR" \
--config "$BOARD_DIR/genimage.cfg"
rm -rf "$GENIMAGE_TMP"
echo "Disk image generated: $BINARIES_DIR/orama-os.img"
fi
# --- Convert to qcow2 for cloud deployment ---
if command -v qemu-img &>/dev/null; then
echo "Converting to qcow2..."
qemu-img convert -f raw -O qcow2 \
"$BINARIES_DIR/orama-os.img" \
"$BINARIES_DIR/orama-os.qcow2"
echo "qcow2 image: $BINARIES_DIR/orama-os.qcow2"
fi
echo "=== OramaOS post_image.sh complete ==="


@ -0,0 +1,24 @@
[Unit]
Description=Orama Agent
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/orama-agent
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
# The agent is the only root process on OramaOS.
# It manages all services in sandboxed namespaces.
# No hardening directives — the agent needs full root access for:
# - cryptsetup (LUKS key management)
# - mount/umount (data partition)
# - wg-quick (WireGuard interface)
# - clone(CLONE_NEWNS|CLONE_NEWUTS) (namespace creation)
# - setting uid/gid for child processes
[Install]
WantedBy=multi-user.target


@ -0,0 +1,74 @@
# OramaOS Buildroot defconfig
# Minimal, locked-down Linux image for Orama Network nodes.
# No SSH, no shell, no operator access. Only the orama-agent runs as root.
# Architecture
BR2_x86_64=y
# Toolchain
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_DOWNLOAD=y
# Kernel
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_CUSTOM_VERSION=y
BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE="6.6.70"
BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/orama/kernel.config"
BR2_LINUX_KERNEL_INSTALL_TARGET=y
BR2_LINUX_KERNEL_NEEDS_HOST_OPENSSL=y
# Init system: systemd
BR2_INIT_SYSTEMD=y
BR2_PACKAGE_SYSTEMD_BOOTD=y
# Rootfs: SquashFS (read-only, used with dm-verity)
BR2_TARGET_ROOTFS_SQUASHFS=y
BR2_TARGET_ROOTFS_SQUASHFS_4_0=y
# Required packages for LUKS + boot
BR2_PACKAGE_UTIL_LINUX=y
BR2_PACKAGE_UTIL_LINUX_MOUNT=y
BR2_PACKAGE_UTIL_LINUX_UMOUNT=y
BR2_PACKAGE_KMOD=y
BR2_PACKAGE_CRYPTSETUP=y
BR2_PACKAGE_LVM2=y
# Busybox: keep for systemd compatibility, but shell removed in post_build.sh
BR2_PACKAGE_BUSYBOX=y
# WireGuard tools (kernel module is built-in since 6.6)
BR2_PACKAGE_WIREGUARD_TOOLS=y
# Network utilities
BR2_PACKAGE_IPROUTE2=y
BR2_PACKAGE_IPTABLES=y
# Certificate authorities for HTTPS
BR2_PACKAGE_CA_CERTIFICATES=y
# No SSH — this is intentional. Operators must not have shell access.
# BR2_PACKAGE_OPENSSH is not set
# BR2_PACKAGE_DROPBEAR is not set
# No package manager
# BR2_PACKAGE_OPKG is not set
# Post-build scripts
BR2_ROOTFS_POST_BUILD_SCRIPT="board/orama/post_build.sh"
BR2_ROOTFS_POST_IMAGE_SCRIPT="board/orama/post_image.sh"
BR2_ROOTFS_POST_SCRIPT_ARGS=""
# Overlay
BR2_ROOTFS_OVERLAY="board/orama/rootfs_overlay"
# Host tools needed for image generation
BR2_PACKAGE_HOST_GENIMAGE=y
BR2_PACKAGE_HOST_MTOOLS=y
# Timezone
BR2_TARGET_TZ_INFO=y
BR2_TARGET_LOCALTIME="UTC"
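Given the no-shell, no-SSH posture above, a release gate can scan the finished rootfs for binaries that should not exist; a sketch under the assumption that post_build.sh has already stripped them (the forbidden list and the stand-in `TARGET_DIR` are illustrative; a real check would point at Buildroot's `target/` directory):

```shell
#!/bin/bash
# Sketch: fail if an interactive shell or SSH daemon is present in the rootfs.
# TARGET_DIR is an empty stand-in rootfs created for the demo.
set -euo pipefail

TARGET_DIR="$(mktemp -d)"
mkdir -p "$TARGET_DIR/bin" "$TARGET_DIR/usr/sbin"

FORBIDDEN=(bin/sh bin/ash usr/sbin/sshd usr/sbin/dropbear)
violations=0
for rel in "${FORBIDDEN[@]}"; do
    if [ -e "$TARGET_DIR/$rel" ]; then
        echo "FORBIDDEN: $rel present in rootfs"
        violations=$((violations + 1))
    fi
done

if [ "$violations" -eq 0 ]; then
    echo "rootfs clean: no shell or SSH binaries found"
else
    exit 1
fi
```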


@ -0,0 +1,19 @@
# orama-agent Buildroot external package
# The agent binary is cross-compiled externally and copied into the rootfs
# by post_build.sh. This .mk is a placeholder for Buildroot's package system
# in case we want to integrate the Go build into Buildroot later.
ORAMA_AGENT_VERSION = 1.0.0
ORAMA_AGENT_SITE = $(TOPDIR)/../agent
ORAMA_AGENT_SITE_METHOD = local
define ORAMA_AGENT_BUILD_CMDS
cd $(@D) && GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \
go build -o $(@D)/orama-agent ./cmd/orama-agent/
endef
define ORAMA_AGENT_INSTALL_TARGET_CMDS
install -D -m 0755 $(@D)/orama-agent $(TARGET_DIR)/usr/bin/orama-agent
endef
$(eval $(generic-package))

os/scripts/build.sh Executable file

@ -0,0 +1,125 @@
#!/bin/bash
# OramaOS full image build script.
#
# Prerequisites:
# - Go 1.24+ installed
# - Buildroot downloaded (set BUILDROOT_SRC or it clones automatically)
# - Host tools: genimage, qemu-img (optional), veritysetup (optional)
#
# Usage:
# ORAMA_VERSION=1.0.0 ARCH=amd64 ./scripts/build.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ROOT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
VERSION="${ORAMA_VERSION:-dev}"
ARCH="${ARCH:-amd64}"
OUTPUT_DIR="$ROOT_DIR/output"
BUILDROOT_DIR="$ROOT_DIR/buildroot"
BUILDROOT_SRC="${BUILDROOT_SRC:-$ROOT_DIR/.buildroot}"
BUILDROOT_VERSION="2024.02.9"
mkdir -p "$OUTPUT_DIR"
echo "=== OramaOS Build ==="
echo "Version: $VERSION"
echo "Arch: $ARCH"
echo "Output: $OUTPUT_DIR"
echo ""
# --- Step 1: Cross-compile orama-agent ---
echo "--- Step 1: Building orama-agent ---"
cd "$ROOT_DIR/agent"
GOOS=linux GOARCH="$ARCH" CGO_ENABLED=0 \
go build -ldflags "-s -w -X main.version=$VERSION" \
-o "$OUTPUT_DIR/orama-agent" ./cmd/orama-agent/
echo "Built: orama-agent"
# --- Step 2: Collect service binaries ---
echo "--- Step 2: Collecting service binaries ---"
BINS_DIR="$OUTPUT_DIR/orama-bins"
mkdir -p "$BINS_DIR"
# Copy the agent
cp "$OUTPUT_DIR/orama-agent" "$BINS_DIR/orama-agent"
# If the main orama project has been built, collect its binaries
ORAMA_BUILD="${ORAMA_BUILD_DIR:-$ROOT_DIR/../orama}"
if [ -d "$ORAMA_BUILD/build" ]; then
echo "Copying service binaries from $ORAMA_BUILD/build/"
for bin in rqlited olric-server ipfs ipfs-cluster-service coredns gateway; do
src="$ORAMA_BUILD/build/$bin"
if [ -f "$src" ]; then
cp "$src" "$BINS_DIR/$bin"
echo " Copied: $bin"
else
echo " WARNING: $bin not found in $ORAMA_BUILD/build/"
fi
done
else
echo "WARNING: orama build dir not found at $ORAMA_BUILD/build/"
echo " Service binaries must be placed in $BINS_DIR/ manually."
fi
# --- Step 3: Download Buildroot if needed ---
if [ ! -d "$BUILDROOT_SRC" ]; then
echo "--- Step 3: Downloading Buildroot $BUILDROOT_VERSION ---"
TARBALL="buildroot-$BUILDROOT_VERSION.tar.xz"
wget -q "https://buildroot.org/downloads/$TARBALL" -O "/tmp/$TARBALL"
mkdir -p "$BUILDROOT_SRC"
tar xf "/tmp/$TARBALL" -C "$BUILDROOT_SRC" --strip-components=1
rm "/tmp/$TARBALL"
else
echo "--- Step 3: Using existing Buildroot at $BUILDROOT_SRC ---"
fi
# --- Step 4: Configure and build with Buildroot ---
echo "--- Step 4: Running Buildroot ---"
BUILD_OUTPUT="$OUTPUT_DIR/buildroot-build"
mkdir -p "$BUILD_OUTPUT"
# Copy our board and config files into the Buildroot tree
cp -r "$BUILDROOT_DIR/board" "$BUILDROOT_SRC/"
cp -r "$BUILDROOT_DIR/configs" "$BUILDROOT_SRC/"
# Place binaries where post_build.sh expects them
mkdir -p "$BUILD_OUTPUT/images/orama-bins"
cp "$BINS_DIR"/* "$BUILD_OUTPUT/images/orama-bins/" 2>/dev/null || true
# Set version for post_build.sh
export ORAMA_VERSION="$VERSION"
export BINARIES_DIR="$BUILD_OUTPUT/images"
# Run Buildroot
cd "$BUILDROOT_SRC"
make O="$BUILD_OUTPUT" orama_defconfig
make O="$BUILD_OUTPUT" -j"$(nproc)"
# --- Step 5: Copy final artifacts ---
echo "--- Step 5: Copying artifacts ---"
FINAL_PREFIX="orama-os-${VERSION}-${ARCH}"
if [ -f "$BUILD_OUTPUT/images/orama-os.img" ]; then
cp "$BUILD_OUTPUT/images/orama-os.img" "$OUTPUT_DIR/${FINAL_PREFIX}.img"
echo "Raw image: $OUTPUT_DIR/${FINAL_PREFIX}.img"
fi
if [ -f "$BUILD_OUTPUT/images/orama-os.qcow2" ]; then
cp "$BUILD_OUTPUT/images/orama-os.qcow2" "$OUTPUT_DIR/${FINAL_PREFIX}.qcow2"
echo "qcow2: $OUTPUT_DIR/${FINAL_PREFIX}.qcow2"
fi
if [ -f "$BUILD_OUTPUT/images/rootfs.roothash" ]; then
cp "$BUILD_OUTPUT/images/rootfs.roothash" "$OUTPUT_DIR/${FINAL_PREFIX}.roothash"
echo "Root hash: $OUTPUT_DIR/${FINAL_PREFIX}.roothash"
fi
# Generate SHA256 checksums (drop any stale .sha256 from a previous run so
# the glob does not pick it up and hash it into the new listing)
cd "$OUTPUT_DIR"
rm -f "${FINAL_PREFIX}.sha256"
sha256sum "${FINAL_PREFIX}"* > "${FINAL_PREFIX}.sha256"
echo "Checksums: $OUTPUT_DIR/${FINAL_PREFIX}.sha256"
echo ""
echo "=== OramaOS Build Complete ==="
echo "Artifacts in $OUTPUT_DIR/"
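On the consuming side, the checksum file from Step 5 verifies with `sha256sum -c`; a sketch of the round trip using stand-in artifacts (names and contents are illustrative, not real build outputs):

```shell
#!/bin/bash
# Sketch: produce checksums the way build.sh's Step 5 does, then verify them
# the way a downloader would.
set -euo pipefail

OUTPUT_DIR="$(mktemp -d)"
FINAL_PREFIX="orama-os-1.0.0-amd64"

echo "raw image bytes"   > "$OUTPUT_DIR/${FINAL_PREFIX}.img"
echo "qcow2 image bytes" > "$OUTPUT_DIR/${FINAL_PREFIX}.qcow2"

# Producer side (same pattern as build.sh).
cd "$OUTPUT_DIR"
sha256sum "${FINAL_PREFIX}.img" "${FINAL_PREFIX}.qcow2" > "${FINAL_PREFIX}.sha256"

# Consumer side: every listed artifact must match its recorded digest.
sha256sum -c "${FINAL_PREFIX}.sha256"
```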

os/scripts/sign.sh Executable file

@ -0,0 +1,59 @@
#!/bin/bash
# Sign OramaOS image artifacts with rootwallet.
#
# Usage:
# ./scripts/sign.sh output/orama-os-1.0.0-amd64
#
# This signs the checksum file, producing a .sig file that can be verified
# with the embedded public key on nodes.
set -euo pipefail
PREFIX="${1:-}"  # default empty so `set -u` does not abort before the usage check
if [ -z "$PREFIX" ]; then
echo "Usage: $0 <artifact-prefix>"
echo " e.g.: $0 output/orama-os-1.0.0-amd64"
exit 1
fi
CHECKSUM_FILE="${PREFIX}.sha256"
if [ ! -f "$CHECKSUM_FILE" ]; then
echo "Error: checksum file not found: $CHECKSUM_FILE"
echo "Run 'make build' first."
exit 1
fi
# Compute hash of the checksum file
HASH=$(sha256sum "$CHECKSUM_FILE" | awk '{print $1}')
echo "Signing hash: $HASH"
# Sign with rootwallet (EVM secp256k1 personal_sign)
if ! command -v rw &>/dev/null; then
echo "Error: 'rw' (rootwallet CLI) not found in PATH"
exit 1
fi
# Guard the substitution: under `set -e`, a bare assignment from a failing
# command would abort the script before the status check ever runs.
if ! SIGNATURE=$(rw sign "$HASH" --chain evm 2>&1); then
  echo "Error: rw sign failed: $SIGNATURE"
  exit 1
fi
# Write signature file
SIG_FILE="${PREFIX}.sig"
echo "$SIGNATURE" > "$SIG_FILE"
echo "Signature written: $SIG_FILE"
# Verify the signature
echo "Verifying signature..."
if ! VERIFY=$(rw verify "$HASH" "$SIGNATURE" --chain evm 2>&1); then
  echo "WARNING: Signature verification failed: $VERIFY"
  exit 1
fi
echo "Signature verified successfully."
echo ""
echo "Artifacts:"
echo " Checksum: $CHECKSUM_FILE"
echo " Signature: $SIG_FILE"

os/scripts/test-vm.sh Executable file

@ -0,0 +1,62 @@
#!/bin/bash
# Launch OramaOS in a QEMU VM for testing.
#
# Usage:
# ./scripts/test-vm.sh output/orama-os.qcow2
#
# The VM runs with:
# - 2 CPUs, 2GB RAM
# - VirtIO disk, network
# - Serial console (for debugging)
# - Host network (user mode) with port forwarding:
# - 9999 → 9999 (enrollment server)
# - 9998 → 9998 (command receiver)
set -euo pipefail
IMAGE="${1:-output/orama-os.qcow2}"
if [ ! -f "$IMAGE" ]; then
echo "Error: image not found: $IMAGE"
echo "Run 'make build' first, or specify image path."
exit 1
fi
# Create a temporary copy to avoid modifying the original
WORK_IMAGE="/tmp/orama-os-test-$(date +%s).qcow2"
cp "$IMAGE" "$WORK_IMAGE"
# Clean up the working copy even if QEMU exits abnormally
trap 'rm -f "$WORK_IMAGE"' EXIT
echo "=== OramaOS Test VM ==="
echo "Image: $IMAGE"
echo "Working copy: $WORK_IMAGE"
echo ""
echo "Port forwarding:"
echo " localhost:9999 → VM:9999 (enrollment)"
echo " localhost:9998 → VM:9998 (commands)"
echo ""
echo "Press Ctrl-A X to exit QEMU."
echo ""
qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-smp 2 \
-m 2G \
-drive file="$WORK_IMAGE",format=qcow2,if=virtio \
-netdev user,id=net0,hostfwd=tcp::9999-:9999,hostfwd=tcp::9998-:9998 \
-device virtio-net-pci,netdev=net0 \
-nographic \
-serial mon:stdio \
-bios /usr/share/ovmf/OVMF.fd 2>/dev/null || \
qemu-system-x86_64 \
-cpu max \
-smp 2 \
-m 2G \
-drive file="$WORK_IMAGE",format=qcow2,if=virtio \
-netdev user,id=net0,hostfwd=tcp::9999-:9999,hostfwd=tcp::9998-:9998 \
-device virtio-net-pci,netdev=net0 \
-nographic \
-serial mon:stdio
# Clean up
rm -f "$WORK_IMAGE"
echo "Cleaned up working copy."
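The working copy above uses a timestamped name under /tmp; a variant sketch of the same setup using mktemp, which avoids name collisions between concurrent runs (illustrative, not the script's actual behavior — the source image here is a stand-in for output/orama-os.qcow2):

```shell
#!/bin/bash
# Sketch: collision-free working copy via mktemp.
set -euo pipefail

IMAGE="$(mktemp)"                    # stand-in for output/orama-os.qcow2
WORK_DIR="$(mktemp -d)"
WORK_IMAGE="$WORK_DIR/orama-os-test.qcow2"

cp "$IMAGE" "$WORK_IMAGE"
# qemu-system-x86_64 ... -drive file="$WORK_IMAGE",format=qcow2,if=virtio
echo "working copy ready: $WORK_IMAGE"
```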