Mirror of https://github.com/DeBrosOfficial/orama.git, synced 2026-03-27 09:24:12 +00:00
Compare commits: `v0.112.6-n`...`main` (36 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | 82c477266d |  |
|  | 169be97026 |  |
|  | 4b7c342c77 |  |
|  | 7d5ccc0678 |  |
|  | 1ca779880b |  |
|  | 3b779cd5a0 |  |
|  | b94fd1efcd |  |
|  | abcc23c4f3 |  |
|  | ebaf37e9d0 |  |
|  | 7c165b9579 |  |
|  | c536e45d0f |  |
|  | 655bd92178 |  |
|  | 211c0275d3 |  |
|  | 5456d57aeb |  |
|  | 8ea4499052 |  |
|  | 6657c90e36 |  |
|  | 0764ac287e |  |
|  | c4fd1878a7 |  |
|  | 3d70f92ed5 |  |
|  | fa826f0d00 |  |
|  | 733b059681 |  |
|  | 78d876e71b |  |
|  | 6468019136 |  |
|  | e2b6f7d721 |  |
|  | fd87eec476 |  |
|  | a0468461ab |  |
|  | 2f5718146a |  |
|  | f26676db2c |  |
|  | fade8f89ed |  |
|  | ed4e490463 |  |
|  | 6898f47e2e |  |
|  | f0d2621199 |  |
|  | c6998b6ac2 |  |
|  | 45a8285ae8 |  |
|  | 80e26f33fb |  |
|  | ade6241357 |  |
**.github/ISSUE_TEMPLATE/bug_report.yml** (vendored, new file, 91 lines)

```yaml
name: Bug Report
description: Report a bug in Orama Network
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for reporting a bug! Please fill out the sections below.

        **Security issues:** If this is a security vulnerability, do NOT open an issue. Email security@orama.io instead.

  - type: input
    id: version
    attributes:
      label: Orama version
      description: "Run `orama version` to find this"
      placeholder: "v0.18.0-beta"
    validations:
      required: true

  - type: dropdown
    id: component
    attributes:
      label: Component
      options:
        - Gateway / API
        - CLI (orama command)
        - WireGuard / Networking
        - RQLite / Storage
        - Olric / Caching
        - IPFS / Pinning
        - CoreDNS
        - OramaOS
        - Other
    validations:
      required: true

  - type: textarea
    id: description
    attributes:
      label: Description
      description: A clear description of the bug
    validations:
      required: true

  - type: textarea
    id: steps
    attributes:
      label: Steps to reproduce
      description: Minimal steps to reproduce the behavior
      placeholder: |
        1. Run `orama ...`
        2. See error
    validations:
      required: true

  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
      description: What you expected to happen
    validations:
      required: true

  - type: textarea
    id: actual
    attributes:
      label: Actual behavior
      description: What actually happened (include error messages and logs if any)
    validations:
      required: true

  - type: textarea
    id: environment
    attributes:
      label: Environment
      description: OS, Go version, deployment environment, etc.
      placeholder: |
        - OS: Ubuntu 22.04
        - Go: 1.23
        - Environment: sandbox
    validations:
      required: false

  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Logs, screenshots, monitor reports, or anything else that might help
    validations:
      required: false
```
**.github/ISSUE_TEMPLATE/feature_request.yml** (vendored, new file, 49 lines)

```yaml
name: Feature Request
description: Suggest a new feature or improvement
labels: ["enhancement"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for the suggestion! Please describe what you'd like to see.

  - type: dropdown
    id: component
    attributes:
      label: Component
      options:
        - Gateway / API
        - CLI (orama command)
        - WireGuard / Networking
        - RQLite / Storage
        - Olric / Caching
        - IPFS / Pinning
        - CoreDNS
        - OramaOS
        - Other
    validations:
      required: true

  - type: textarea
    id: problem
    attributes:
      label: Problem
      description: What problem does this solve? Why do you need it?
    validations:
      required: true

  - type: textarea
    id: solution
    attributes:
      label: Proposed solution
      description: How do you think this should work?
    validations:
      required: true

  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives considered
      description: Any workarounds or alternative approaches you've thought of
    validations:
      required: false
```
**.github/PULL_REQUEST_TEMPLATE.md** (vendored, new file, 31 lines)

```markdown
## Summary

<!-- What does this PR do? Keep it to 1-3 bullet points. -->

## Motivation

<!-- Why is this change needed? Link to an issue if applicable. -->

## Test plan

<!-- How did you verify this works? -->

- [ ] `make test` passes
- [ ] Tested on sandbox/staging environment

## Distributed system impact

<!-- Does this change affect any of the following? If yes, explain. -->

- [ ] Raft quorum / RQLite
- [ ] WireGuard mesh / networking
- [ ] Olric gossip / caching
- [ ] Service startup ordering
- [ ] Rolling upgrade compatibility

## Checklist

- [ ] Tests added for new functionality or bug fix
- [ ] No debug code (`fmt.Println`, `log.Println`) left behind
- [ ] Docs updated (if user-facing behavior changed)
- [ ] Errors wrapped with context (`fmt.Errorf("...: %w", err)`)
```
**.github/workflows/publish-sdk.yml** (vendored, new file, 80 lines)

```yaml
name: Publish SDK to npm

on:
  workflow_dispatch:
    inputs:
      version:
        description: "Version to publish (e.g., 1.0.0). Leave empty to use package.json version."
        required: false
      dry-run:
        description: "Dry run (don't actually publish)"
        type: boolean
        default: false

permissions:
  contents: write

jobs:
  publish:
    name: Build & Publish @debros/orama
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: sdk

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          registry-url: "https://registry.npmjs.org"

      - name: Install pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 9

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Bump version
        if: inputs.version != ''
        run: npm version ${{ inputs.version }} --no-git-tag-version

      - name: Typecheck
        run: pnpm typecheck

      - name: Build
        run: pnpm build

      - name: Run unit tests
        run: pnpm vitest run tests/unit

      - name: Publish (dry run)
        if: inputs.dry-run == true
        run: npm publish --access public --dry-run
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Publish
        if: inputs.dry-run == false
        run: npm publish --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Get published version
        if: inputs.dry-run == false
        id: version
        run: echo "version=$(node -p "require('./package.json').version")" >> $GITHUB_OUTPUT

      - name: Create git tag
        if: inputs.dry-run == false
        working-directory: .
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git tag "sdk/v${{ steps.version.outputs.version }}"
          git push origin "sdk/v${{ steps.version.outputs.version }}"
```
**.github/workflows/release-apt.yml** (vendored, 6 lines changed)

```diff
@@ -28,7 +28,8 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v5
         with:
-          go-version: "1.23"
+          go-version: "1.24"
+          cache-dependency-path: core/go.sum

       - name: Get version
         id: version
@@ -46,6 +47,7 @@ jobs:
         uses: docker/setup-qemu-action@v3

       - name: Build binary
+        working-directory: core
         env:
           GOARCH: ${{ matrix.arch }}
           CGO_ENABLED: 0
@@ -71,7 +73,7 @@ jobs:
           mkdir -p ${PKG_NAME}/usr/local/bin

           # Copy binaries
-          cp build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
+          cp core/build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
           chmod 755 ${PKG_NAME}/usr/local/bin/*

           # Create control file
```
**.github/workflows/release.yaml** (vendored, 4 lines changed)

```diff
@@ -23,8 +23,8 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
-          cache: true
+          go-version: '1.24'
+          cache-dependency-path: core/go.sum

       - name: Run GoReleaser
         uses: goreleaser/goreleaser-action@v5
```
**.gitignore** (vendored, 154 lines changed)

```diff
@@ -1,56 +1,4 @@
-# Binaries
-*.exe
-*.exe~
-*.dll
-*.so
-*.dylib
-*.test
-*.out
-bin/
-bin-linux/
-dist/
-orama-cli-linux
-
-# Build artifacts
-*.deb
-*.rpm
-*.tar.gz
-*.zip
-
-# Go
-go.work
-.gocache/
-
-# Dependencies
-# vendor/
-
-# Environment & credentials
-.env
-.env.*
-.env.local
-.env.*.local
-scripts/remote-nodes.conf
-keys_backup/
-e2e/config.yaml
-
-# Config (generated/local)
-configs/
-
-# Data & databases
-data/*
-*.db
-
-# IDE & editor files
-.vscode/
-.idea/
-.cursor/
-.claude/
-.mcp.json
-*.swp
-*.swo
-*~
-
-# OS generated files
+# === Global ===
 .DS_Store
 .DS_Store?
 ._*
@@ -58,39 +6,85 @@
 .Trashes
 ehthumbs.db
 Thumbs.db
+*.swp
+*.swo
+*~
+
+# IDE
+.vscode/
+.idea/
+.cursor/
+
+# Environment & credentials
+.env
+.env.*
+!.env.example
+.mcp.json
+.claude/
+.codex/
+
+# === Core (Go) ===
+core/phantom-auth/
+core/bin/
+core/bin-linux/
+core/dist/
+core/orama-cli-linux
+core/keys_backup/
+core/.gocache/
+core/configs/
+core/data/*
+core/tmp/
+core/temp/
+core/results/
+core/rnd/
+core/vps.txt
+core/coverage.txt
+core/coverage.html
+core/profile.out
+core/e2e/config.yaml
+core/scripts/remote-nodes.conf
+
+# Go build artifacts
+*.exe
+*.exe~
+*.dll
+*.so
+*.dylib
+*.test
+*.out
+*.deb
+*.rpm
+*.tar.gz
+*.zip
+go.work
+
 # Logs
 *.log
 
-# Temporary files
-tmp/
-temp/
-*.tmp
-
-# Coverage & profiling
-coverage.txt
-coverage.html
-profile.out
-
-# Local development
+# Databases
+*.db
+
+# === Website ===
+website/node_modules/
+website/dist/
+website/invest-api/invest-api
+website/invest-api/*.db
+website/invest-api/*.db-shm
+website/invest-api/*.db-wal
+
+# === SDK (TypeScript) ===
+sdk/node_modules/
+sdk/dist/
+sdk/coverage/
+
+# === Vault (Zig) ===
+vault/.zig-cache/
+vault/zig-out/
+
+# === OS ===
+os/output/
+
+# === Local development ===
 .dev/
 .local/
 local/
-.codex/
-results/
-rnd/
-vps.txt
-
-# Project subdirectories (managed separately)
-website/
-phantom-auth/
-
-# One-off scripts & tools
-redeploy-6.sh
-terms-agreement
-./bootstrap
-./node
-./cli
-./inspector
-docs/later_todos/
-sim/
```
```diff
@@ -9,11 +9,13 @@ env:

 before:
   hooks:
-    - go mod tidy
+    - cmd: go mod tidy
+      dir: core

 builds:
   # orama CLI binary
   - id: orama
+    dir: core
     main: ./cmd/cli
     binary: orama
     goos:
@@ -31,6 +33,7 @@ builds:

   # orama-node binary (Linux only for apt)
   - id: orama-node
+    dir: core
     main: ./cmd/node
     binary: orama-node
     goos:
@@ -84,7 +87,7 @@ nfpms:
     section: utils
     priority: optional
     contents:
-      - src: ./README.md
+      - src: ./core/README.md
        dst: /usr/share/doc/orama/README.md
    deb:
      lintian_overrides:
@@ -106,7 +109,7 @@ nfpms:
    section: net
    priority: optional
    contents:
-      - src: ./README.md
+      - src: ./core/README.md
        dst: /usr/share/doc/orama-node/README.md
    deb:
      lintian_overrides:
```
````diff
@@ -1,47 +1,78 @@
-# Contributing to DeBros Network
+# Contributing to Orama Network
 
-Thanks for helping improve the network! This guide covers setup, local dev, tests, and PR guidelines.
+Thanks for helping improve the network! This monorepo contains multiple projects — pick the one relevant to your contribution.
 
-## Requirements
+## Repository Structure
 
-- Go 1.22+ (1.23 recommended)
-- RQLite (optional for local runs; the Makefile starts nodes with embedded setup)
-- Make (optional)
+| Package | Language | Build |
+|---------|----------|-------|
+| `core/` | Go 1.24+ | `make core-build` |
+| `website/` | TypeScript (pnpm) | `make website-build` |
+| `vault/` | Zig 0.14+ | `make vault-build` |
+| `os/` | Go + Buildroot | `make os-build` |
 
 ## Setup
 
 ```bash
 git clone https://github.com/DeBrosOfficial/network.git
 cd network
-make deps
 ```
 
-## Build, Test, Lint
+### Core (Go)
 
-- Build: `make build`
-- Test: `make test`
-- Format/Vet: `make fmt vet` (or `make lint`)
-
-Useful CLI commands:
-
 ```bash
-./bin/orama health
-./bin/orama peers
-./bin/orama status
+cd core
+make deps
+make build
+make test
 ```
 
-## Versioning
+### Website
 
-- The CLI reports its version via `orama version`.
-- Releases are tagged (e.g., `v0.18.0-beta`) and published via GoReleaser.
+```bash
+cd website
+pnpm install
+pnpm dev
+```
+
+### Vault (Zig)
+
+```bash
+cd vault
+zig build
+zig build test
+```
 
 ## Pull Requests
 
-1. Fork and create a topic branch.
-2. Ensure `make build test` passes; include tests for new functionality.
-3. Keep PRs focused and well-described (motivation, approach, testing).
-4. Update README/docs for behavior changes.
+1. Fork and create a topic branch from `main`.
+2. Ensure `make test` passes for affected packages.
+3. Include tests for new functionality or bug fixes.
+4. Keep PRs focused — one concern per PR.
+5. Write a clear description: motivation, approach, and how you tested it.
+6. Update docs if you're changing user-facing behavior.
+
+## Code Style
+
+### Go (core/, os/)
+
+- Follow standard Go conventions
+- Run `make lint` before submitting
+- Wrap errors with context: `fmt.Errorf("failed to X: %w", err)`
+- No magic values — use named constants
+
+### TypeScript (website/)
+
+- TypeScript strict mode
+- Follow existing patterns in the codebase
+
+### Zig (vault/)
+
+- Follow standard Zig conventions
+- Run `zig build test` before submitting
+
+## Security
+
+If you find a security vulnerability, **do not open a public issue**. Email security@debros.io instead.
 
 Thank you for contributing!
````
**Makefile** (214 lines changed)

```diff
@@ -1,186 +1,66 @@
-TEST?=./...
-
-.PHONY: test
-test:
-	@echo Running tests...
-	go test -v $(TEST)
-
-# Gateway-focused E2E tests assume gateway and nodes are already running
-# Auto-discovers configuration from ~/.orama and queries database for API key
-# No environment variables required
-.PHONY: test-e2e test-e2e-deployments test-e2e-fullstack test-e2e-https test-e2e-quick test-e2e-prod test-e2e-shared test-e2e-cluster test-e2e-integration test-e2e-production
-
-# Production E2E tests - includes production-only tests
-test-e2e-prod:
-	@if [ -z "$$ORAMA_GATEWAY_URL" ]; then \
-		echo "❌ ORAMA_GATEWAY_URL not set"; \
-		echo "Usage: ORAMA_GATEWAY_URL=https://dbrs.space make test-e2e-prod"; \
-		exit 1; \
-	fi
-	@echo "Running E2E tests (including production-only) against $$ORAMA_GATEWAY_URL..."
-	go test -v -tags "e2e production" -timeout 30m ./e2e/...
-
-# Generic e2e target
-test-e2e:
-	@echo "Running comprehensive E2E tests..."
-	@echo "Auto-discovering configuration from ~/.orama..."
-	go test -v -tags e2e -timeout 30m ./e2e/...
-
-test-e2e-deployments:
-	@echo "Running deployment E2E tests..."
-	go test -v -tags e2e -timeout 15m ./e2e/deployments/...
-
-test-e2e-fullstack:
-	@echo "Running fullstack E2E tests..."
-	go test -v -tags e2e -timeout 20m -run "TestFullStack" ./e2e/...
-
-test-e2e-https:
-	@echo "Running HTTPS/external access E2E tests..."
-	go test -v -tags e2e -timeout 10m -run "TestHTTPS" ./e2e/...
-
-test-e2e-shared:
-	@echo "Running shared E2E tests..."
-	go test -v -tags e2e -timeout 10m ./e2e/shared/...
-
-test-e2e-cluster:
-	@echo "Running cluster E2E tests..."
-	go test -v -tags e2e -timeout 15m ./e2e/cluster/...
-
-test-e2e-integration:
-	@echo "Running integration E2E tests..."
-	go test -v -tags e2e -timeout 20m ./e2e/integration/...
-
-test-e2e-production:
-	@echo "Running production-only E2E tests..."
-	go test -v -tags "e2e production" -timeout 15m ./e2e/production/...
-
-test-e2e-quick:
-	@echo "Running quick E2E smoke tests..."
-	go test -v -tags e2e -timeout 5m -run "TestStatic|TestHealth" ./e2e/...
-
-# Network - Distributed P2P Database System
-# Makefile for development and build tasks
-
-.PHONY: build clean test deps tidy fmt vet lint install-hooks upload-devnet upload-testnet redeploy-devnet redeploy-testnet release health
-
-VERSION := 0.112.6
-COMMIT ?= $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
-DATE ?= $(shell date -u +%Y-%m-%dT%H:%M:%SZ)
-LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.commit=$(COMMIT)' -X 'main.date=$(DATE)'
-LDFLAGS_LINUX := -s -w $(LDFLAGS)
-
-# Build targets
-build: deps
-	@echo "Building network executables (version=$(VERSION))..."
-	@mkdir -p bin
-	go build -ldflags "$(LDFLAGS)" -o bin/identity ./cmd/identity
-	go build -ldflags "$(LDFLAGS)" -o bin/orama-node ./cmd/node
-	go build -ldflags "$(LDFLAGS)" -o bin/orama ./cmd/cli/
-	# Inject gateway build metadata via pkg path variables
-	go build -ldflags "$(LDFLAGS) -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildVersion=$(VERSION)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildCommit=$(COMMIT)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildTime=$(DATE)'" -o bin/gateway ./cmd/gateway
-	go build -ldflags "$(LDFLAGS)" -o bin/sfu ./cmd/sfu
-	go build -ldflags "$(LDFLAGS)" -o bin/turn ./cmd/turn
-	@echo "Build complete! Run ./bin/orama version"
-
-# Cross-compile CLI for Linux (only binary needed locally; VPS builds everything else from source)
-build-linux: deps
-	@echo "Cross-compiling CLI for linux/amd64 (version=$(VERSION))..."
-	@mkdir -p bin-linux
-	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS_LINUX)" -trimpath -o bin-linux/orama ./cmd/cli/
-	@echo "✓ CLI built at bin-linux/orama"
-	@echo ""
-	@echo "Next steps:"
-	@echo "  ./scripts/generate-source-archive.sh"
-	@echo "  ./bin/orama install --vps-ip <ip> --nameserver --domain ..."
-
-# Install git hooks
-install-hooks:
-	@echo "Installing git hooks..."
-	@bash scripts/install-hooks.sh
-
-# Clean build artifacts
-clean:
-	@echo "Cleaning build artifacts..."
-	rm -rf bin/
-	rm -rf data/
-	@echo "Clean complete!"
-
-# Upload source to devnet using fanout (upload to 1 node, parallel distribute to rest)
-upload-devnet:
-	@bash scripts/upload-source-fanout.sh --env devnet
-
-# Upload source to testnet using fanout
-upload-testnet:
-	@bash scripts/upload-source-fanout.sh --env testnet
-
-# Deploy to devnet (build + rolling upgrade all nodes)
-redeploy-devnet:
-	@bash scripts/redeploy.sh --devnet
-
-# Deploy to devnet without rebuilding
-redeploy-devnet-quick:
-	@bash scripts/redeploy.sh --devnet --no-build
-
-# Deploy to testnet (build + rolling upgrade all nodes)
-redeploy-testnet:
-	@bash scripts/redeploy.sh --testnet
-
-# Deploy to testnet without rebuilding
-redeploy-testnet-quick:
-	@bash scripts/redeploy.sh --testnet --no-build
-
-# Interactive release workflow (tag + push)
-release:
-	@bash scripts/release.sh
-
-# Check health of all nodes in an environment
-# Usage: make health ENV=devnet
-health:
-	@if [ -z "$(ENV)" ]; then \
-		echo "Usage: make health ENV=devnet|testnet"; \
-		exit 1; \
-	fi
-	@while IFS='|' read -r env host pass role key; do \
-		[ -z "$$env" ] && continue; \
-		case "$$env" in \#*) continue;; esac; \
-		env="$$(echo "$$env" | xargs)"; \
-		[ "$$env" != "$(ENV)" ] && continue; \
-		role="$$(echo "$$role" | xargs)"; \
-		bash scripts/check-node-health.sh "$$host" "$$pass" "$$host ($$role)"; \
-	done < scripts/remote-nodes.conf
-
-# Help
-help:
-	@echo "Available targets:"
-	@echo "  build - Build all executables"
-	@echo "  clean - Clean build artifacts"
-	@echo "  test  - Run unit tests"
-	@echo ""
-	@echo "E2E Testing:"
-	@echo "  make test-e2e-prod        - Run all E2E tests incl. production-only (needs ORAMA_GATEWAY_URL)"
-	@echo "  make test-e2e-shared      - Run shared E2E tests (cache, storage, pubsub, auth)"
-	@echo "  make test-e2e-cluster     - Run cluster E2E tests (libp2p, olric, rqlite, namespace)"
-	@echo "  make test-e2e-integration - Run integration E2E tests (fullstack, persistence, concurrency)"
-	@echo "  make test-e2e-deployments - Run deployment E2E tests"
-	@echo "  make test-e2e-production  - Run production-only E2E tests (DNS, HTTPS, cross-node)"
-	@echo "  make test-e2e-quick       - Quick smoke tests (static deploys, health checks)"
-	@echo "  make test-e2e             - Generic E2E tests (auto-discovers config)"
-	@echo ""
-	@echo "  Example:"
-	@echo "    ORAMA_GATEWAY_URL=https://orama-devnet.network make test-e2e-prod"
-	@echo ""
-	@echo "Deployment:"
-	@echo "  make redeploy-devnet       - Build + rolling deploy to all devnet nodes"
-	@echo "  make redeploy-devnet-quick - Deploy to devnet without rebuilding"
-	@echo "  make redeploy-testnet      - Build + rolling deploy to all testnet nodes"
-	@echo "  make redeploy-testnet-quick- Deploy to testnet without rebuilding"
-	@echo "  make health ENV=devnet     - Check health of all nodes in an environment"
-	@echo "  make release               - Interactive release workflow (tag + push)"
-	@echo ""
-	@echo "Maintenance:"
-	@echo "  deps - Download dependencies"
-	@echo "  tidy - Tidy dependencies"
-	@echo "  fmt  - Format code"
-	@echo "  vet  - Vet code"
-	@echo "  lint - Lint code (fmt + vet)"
-	@echo "  help - Show this help"
+# Orama Monorepo
+# Delegates to sub-project Makefiles
+
+.PHONY: help build test clean
+
+# === Core (Go network) ===
+.PHONY: core core-build core-test core-clean core-lint
+core: core-build
+
+core-build:
+	$(MAKE) -C core build
+
+core-test:
+	$(MAKE) -C core test
+
+core-lint:
+	$(MAKE) -C core lint
+
+core-clean:
+	$(MAKE) -C core clean
+
+# === Website ===
+.PHONY: website website-dev website-build
+website-dev:
+	cd website && pnpm dev
+
+website-build:
+	cd website && pnpm build
+
+# === SDK (TypeScript) ===
+.PHONY: sdk sdk-build sdk-test
+sdk: sdk-build
+
+sdk-build:
+	cd sdk && pnpm install && pnpm build
+
+sdk-test:
+	cd sdk && pnpm test
+
+# === Vault (Zig) ===
+.PHONY: vault vault-build vault-test
+vault-build:
+	cd vault && zig build
+
+vault-test:
+	cd vault && zig build test
+
+# === OS ===
+.PHONY: os os-build
+os-build:
+	$(MAKE) -C os
+
+# === Aggregate ===
+build: core-build
+test: core-test
+clean: core-clean
+
+help:
+	@echo "Orama Monorepo"
+	@echo ""
+	@echo "  Core (Go):   make core-build | core-test | core-lint | core-clean"
+	@echo "  Website:     make website-dev | website-build"
+	@echo "  Vault (Zig): make vault-build | vault-test"
+	@echo "  OS:          make os-build"
+	@echo ""
+	@echo "  Aggregate:   make build | test | clean (delegates to core)"
```
483
README.md
483
README.md
@ -1,463 +1,50 @@
-# Orama Network - Distributed P2P Platform
+# Orama Network

-A high-performance API Gateway and distributed platform built in Go. Provides a unified HTTP/HTTPS API for distributed SQL (RQLite), distributed caching (Olric), decentralized storage (IPFS), pub/sub messaging, and serverless WebAssembly execution.
+A decentralized infrastructure platform combining distributed SQL, IPFS storage, caching, serverless WASM execution, and privacy relay — all managed through a unified API gateway.

-**Architecture:** Modular Gateway / Edge Proxy following SOLID principles
-
-## Features
-
-- **🔐 Authentication** - Wallet signatures, API keys, JWT tokens
-- **💾 Storage** - IPFS-based decentralized file storage with encryption
-- **⚡ Cache** - Distributed cache with Olric (in-memory key-value)
-- **🗄️ Database** - RQLite distributed SQL with Raft consensus + Per-namespace SQLite databases
-- **📡 Pub/Sub** - Real-time messaging via LibP2P and WebSocket
-- **⚙️ Serverless** - WebAssembly function execution with host functions
-- **🌐 HTTP Gateway** - Unified REST API with automatic HTTPS (Let's Encrypt)
-- **📦 Client SDK** - Type-safe Go SDK for all services
-- **🚀 App Deployments** - Deploy React, Next.js, Go, Node.js apps with automatic domains
-- **🗄️ SQLite Databases** - Per-namespace isolated databases with IPFS backups
+## Packages
+
+| Package | Language | Description |
+|---------|----------|-------------|
+| [core/](core/) | Go | API gateway, distributed node, CLI, and client SDK |
+| [sdk/](sdk/) | TypeScript | `@debros/orama` — JavaScript/TypeScript SDK ([npm](https://www.npmjs.com/package/@debros/orama)) |
+| [website/](website/) | TypeScript | Marketing website and invest portal |
+| [vault/](vault/) | Zig | Distributed secrets vault (Shamir's Secret Sharing) |
+| [os/](os/) | Go + Buildroot | OramaOS — hardened minimal Linux for network nodes |

-## Application Deployments
-
-Deploy full-stack applications with automatic domain assignment and namespace isolation.
-
-### Deploy a React App
-
-```bash
-# Build your app
-cd my-react-app
-npm run build
-
-# Deploy to Orama Network
-orama deploy static ./dist --name my-app
-
-# Your app is now live at: https://my-app.orama.network
-```
-
-### Deploy Next.js with SSR
-
-```bash
-cd my-nextjs-app
-
-# Ensure next.config.js has: output: 'standalone'
-npm run build
-orama deploy nextjs . --name my-nextjs --ssr
-
-# Live at: https://my-nextjs.orama.network
-```
-
-### Deploy Go Backend
-
-```bash
-# Build for Linux (name binary 'app' for auto-detection)
-GOOS=linux GOARCH=amd64 go build -o app main.go
-
-# Deploy (must implement /health endpoint)
-orama deploy go ./app --name my-api
-
-# API live at: https://my-api.orama.network
-```
-
-### Create SQLite Database
-
-```bash
-# Create database
-orama db create my-database
-
-# Create schema
-orama db query my-database "CREATE TABLE users (id INT, name TEXT)"
-
-# Insert data
-orama db query my-database "INSERT INTO users VALUES (1, 'Alice')"
-
-# Query data
-orama db query my-database "SELECT * FROM users"
-
-# Backup to IPFS
-orama db backup my-database
-```
-
-### Full-Stack Example
-
-Deploy a complete app with React frontend, Go backend, and SQLite database:
-
-```bash
-# 1. Create database
-orama db create myapp-db
-orama db query myapp-db "CREATE TABLE users (id INT PRIMARY KEY, name TEXT)"
-
-# 2. Deploy Go backend (connects to database)
-GOOS=linux GOARCH=amd64 go build -o api main.go
-orama deploy go ./api --name myapp-api
-
-# 3. Deploy React frontend (calls backend API)
-cd frontend && npm run build
-orama deploy static ./dist --name myapp
-
-# Access:
-# Frontend: https://myapp.orama.network
-# Backend: https://myapp-api.orama.network
-```
-
-**📖 Full Guide**: See [Deployment Guide](docs/DEPLOYMENT_GUIDE.md) for complete documentation, examples, and best practices.
-
 ## Quick Start

-### Building
-
 ```bash
-# Build all binaries
-make build
+# Build the core network binaries
+make core-build
+
+# Run tests
+make core-test
+
+# Start website dev server
+make website-dev
+
+# Build vault
+make vault-build
 ```

-## CLI Commands
-
-### Authentication
-
-```bash
-orama auth login # Authenticate with wallet
-orama auth status # Check authentication
-orama auth logout # Clear credentials
-```
-
-### Application Deployments
-
-```bash
-# Deploy applications
-orama deploy static <path> --name myapp # React, Vue, static sites
-orama deploy nextjs <path> --name myapp --ssr # Next.js with SSR (requires output: 'standalone')
-orama deploy go <path> --name myapp # Go binaries (must have /health endpoint)
-orama deploy nodejs <path> --name myapp # Node.js apps (must have /health endpoint)
-
-# Manage deployments
-orama app list # List all deployments
-orama app get <name> # Get deployment details
-orama app logs <name> --follow # View logs
-orama app delete <name> # Delete deployment
-orama app rollback <name> --version 1 # Rollback to version
-```
-
-### SQLite Databases
-
-```bash
-orama db create <name> # Create database
-orama db query <name> "SELECT * FROM t" # Execute SQL query
-orama db list # List all databases
-orama db backup <name> # Backup to IPFS
-orama db backups <name> # List backups
-```
-
-### Environment Management
-
-```bash
-orama env list # List available environments
-orama env current # Show active environment
-orama env use <name> # Switch environment
-```
-
-## Serverless Functions (WASM)
-
-Orama supports high-performance serverless function execution using WebAssembly (WASM). Functions are isolated, secure, and can interact with network services like the distributed cache.
-
-> **Full guide:** See [docs/SERVERLESS.md](docs/SERVERLESS.md) for host functions API, secrets management, PubSub triggers, and examples.
-
-### 1. Build Functions
-
-Functions must be compiled to WASM. We recommend using [TinyGo](https://tinygo.org/).
-
-```bash
-# Build example functions to examples/functions/bin/
-./examples/functions/build.sh
-```
-
-### 2. Deployment
-
-Deploy your compiled `.wasm` file to the network via the Gateway.
-
-```bash
-# Deploy a function
-curl -X POST https://your-node.example.com/v1/functions \
-  -H "Authorization: Bearer <your_api_key>" \
-  -F "name=hello-world" \
-  -F "namespace=default" \
-  -F "wasm=@./examples/functions/bin/hello.wasm"
-```
-
-### 3. Invocation
-
-Trigger your function with a JSON payload. The function receives the payload via `stdin` and returns its response via `stdout`.
-
-```bash
-# Invoke via HTTP
-curl -X POST https://your-node.example.com/v1/functions/hello-world/invoke \
-  -H "Authorization: Bearer <your_api_key>" \
-  -H "Content-Type: application/json" \
-  -d '{"name": "Developer"}'
-```
-
-### 4. Management
-
-```bash
-# List all functions in a namespace
-curl https://your-node.example.com/v1/functions?namespace=default
-
-# Delete a function
-curl -X DELETE https://your-node.example.com/v1/functions/hello-world?namespace=default
-```
-
-## Production Deployment
-
-### Prerequisites
-
-- Ubuntu 22.04+ or Debian 12+
-- `amd64` or `arm64` architecture
-- 4GB RAM, 50GB SSD, 2 CPU cores
-
-### Required Ports
-
-**External (must be open in firewall):**
-
-- **80** - HTTP (ACME/Let's Encrypt certificate challenges)
-- **443** - HTTPS (Main gateway API endpoint)
-- **4101** - IPFS Swarm (peer connections)
-- **7001** - RQLite Raft (cluster consensus)
-
-**Internal (bound to localhost, no firewall needed):**
-
-- 4501 - IPFS API
-- 5001 - RQLite HTTP API
-- 6001 - Unified Gateway
-- 8080 - IPFS Gateway
-- 9050 - Anyone SOCKS5 proxy
-- 9094 - IPFS Cluster API
-- 3320/3322 - Olric Cache
-
-**Anyone Relay Mode (optional, for earning rewards):**
-
-- 9001 - Anyone ORPort (relay traffic, must be open externally)
-
-### Anyone Network Integration
-
-Orama Network integrates with the [Anyone Protocol](https://anyone.io) for anonymous routing. By default, nodes run as **clients** (consuming the network). Optionally, you can run as a **relay operator** to earn rewards.
-
-**Client Mode (Default):**
-- Routes traffic through Anyone network for anonymity
-- SOCKS5 proxy on localhost:9050
-- No rewards, just consumes network
-
-**Relay Mode (Earn Rewards):**
-- Provide bandwidth to the Anyone network
-- Earn $ANYONE tokens as a relay operator
-- Requires 100 $ANYONE tokens in your wallet
-- Requires ORPort (9001) open to the internet
-
-```bash
-# Install as relay operator (earn rewards)
-sudo orama node install --vps-ip <IP> --domain <domain> \
-  --anyone-relay \
-  --anyone-nickname "MyRelay" \
-  --anyone-contact "operator@email.com" \
-  --anyone-wallet "0x1234...abcd"
-
-# With exit relay (legal implications apply)
-sudo orama node install --vps-ip <IP> --domain <domain> \
-  --anyone-relay \
-  --anyone-exit \
-  --anyone-nickname "MyExitRelay" \
-  --anyone-contact "operator@email.com" \
-  --anyone-wallet "0x1234...abcd"
-
-# Migrate existing Anyone installation
-sudo orama node install --vps-ip <IP> --domain <domain> \
-  --anyone-relay \
-  --anyone-migrate \
-  --anyone-nickname "MyRelay" \
-  --anyone-contact "operator@email.com" \
-  --anyone-wallet "0x1234...abcd"
-```
-
-**Important:** After installation, register your relay at [dashboard.anyone.io](https://dashboard.anyone.io) to start earning rewards.
-
-### Installation
-
-**macOS (Homebrew):**
-
-```bash
-brew install DeBrosOfficial/tap/orama
-```
-
-**Linux (Debian/Ubuntu):**
-
-```bash
-# Download and install the latest .deb package
-curl -sL https://github.com/DeBrosOfficial/network/releases/latest/download/orama_$(curl -s https://api.github.com/repos/DeBrosOfficial/network/releases/latest | grep tag_name | cut -d '"' -f 4 | tr -d 'v')_linux_amd64.deb -o orama.deb
-sudo dpkg -i orama.deb
-```
-
-**From Source:**
-
-```bash
-go install github.com/DeBrosOfficial/network/cmd/cli@latest
-```
-
-**Setup (after installation):**
-
-```bash
-sudo orama node install --interactive
-```
-
-### Service Management
-
-```bash
-# Status
-sudo orama node status
-
-# Control services
-sudo orama node start
-sudo orama node stop
-sudo orama node restart
-
-# Diagnose issues
-sudo orama node doctor
-
-# View logs
-orama node logs node --follow
-orama node logs gateway --follow
-orama node logs ipfs --follow
-```
-
-### Upgrade
-
-```bash
-# Upgrade to latest version
-sudo orama node upgrade --restart
-```
-
-## Configuration
-
-All configuration lives in `~/.orama/`:
-
-- `configs/node.yaml` - Node configuration
-- `configs/gateway.yaml` - Gateway configuration
-- `configs/olric.yaml` - Cache configuration
-- `secrets/` - Keys and certificates
-- `data/` - Service data directories
-
-## Troubleshooting
-
-### Services Not Starting
-
-```bash
-# Check status
-systemctl status orama-node
-
-# View logs
-journalctl -u orama-node -f
-
-# Check log files
-tail -f /opt/orama/.orama/logs/node.log
-```
-
-### Port Conflicts
-
-```bash
-# Check what's using specific ports
-sudo lsof -i :443 # HTTPS Gateway
-sudo lsof -i :7001 # TCP/SNI Gateway
-sudo lsof -i :6001 # Internal Gateway
-```
-
-### RQLite Cluster Issues
-
-```bash
-# Connect to RQLite CLI
-rqlite -H localhost -p 5001
-
-# Check cluster status
-.nodes
-.status
-.ready
-
-# Check consistency level
-.consistency
-```
-
-### Reset Installation
-
-```bash
-# Production reset (⚠️ DESTROYS DATA)
-sudo orama node uninstall
-sudo rm -rf /opt/orama/.orama
-sudo orama node install
-```
-
-## HTTP Gateway API
-
-### Main Gateway Endpoints
-
-- `GET /health` - Health status
-- `GET /v1/status` - Full status
-- `GET /v1/version` - Version info
-- `POST /v1/rqlite/exec` - Execute SQL
-- `POST /v1/rqlite/query` - Query database
-- `GET /v1/rqlite/schema` - Get schema
-- `POST /v1/pubsub/publish` - Publish message
-- `GET /v1/pubsub/topics` - List topics
-- `GET /v1/pubsub/ws?topic=<name>` - WebSocket subscribe
-- `POST /v1/functions` - Deploy function (multipart/form-data)
-- `POST /v1/functions/{name}/invoke` - Invoke function
-- `GET /v1/functions` - List functions
-- `DELETE /v1/functions/{name}` - Delete function
-- `GET /v1/functions/{name}/logs` - Get function logs
-
-See `openapi/gateway.yaml` for complete API specification.
-
 ## Documentation

-- **[Deployment Guide](docs/DEPLOYMENT_GUIDE.md)** - Deploy React, Next.js, Go apps and manage databases
-- **[Architecture Guide](docs/ARCHITECTURE.md)** - System architecture and design patterns
-- **[Client SDK](docs/CLIENT_SDK.md)** - Go SDK documentation and examples
-- **[Gateway API](docs/GATEWAY_API.md)** - Complete HTTP API reference
-- **[Security Deployment](docs/SECURITY_DEPLOYMENT_GUIDE.md)** - Production security hardening
-- **[Testing Plan](docs/TESTING_PLAN.md)** - Comprehensive testing strategy and implementation
-
-## Resources
-
-- [RQLite Documentation](https://rqlite.io/docs/)
-- [IPFS Documentation](https://docs.ipfs.tech/)
-- [LibP2P Documentation](https://docs.libp2p.io/)
-- [WebAssembly](https://webassembly.org/)
-- [GitHub Repository](https://github.com/DeBrosOfficial/network)
-- [Issue Tracker](https://github.com/DeBrosOfficial/network/issues)
-
-## Project Structure
-
-```
-network/
-├── cmd/ # Binary entry points
-│   ├── cli/ # CLI tool
-│   ├── gateway/ # HTTP Gateway
-│   ├── node/ # P2P Node
-├── pkg/ # Core packages
-│   ├── gateway/ # Gateway implementation
-│   │   └── handlers/ # HTTP handlers by domain
-│   ├── client/ # Go SDK
-│   ├── serverless/ # WASM engine
-│   ├── rqlite/ # Database ORM
-│   ├── contracts/ # Interface definitions
-│   ├── httputil/ # HTTP utilities
-│   └── errors/ # Error handling
-├── docs/ # Documentation
-├── e2e/ # End-to-end tests
-└── examples/ # Example code
-```
+| Document | Description |
+|----------|-------------|
+| [Architecture](core/docs/ARCHITECTURE.md) | System architecture and design patterns |
+| [Deployment Guide](core/docs/DEPLOYMENT_GUIDE.md) | Deploy apps, databases, and domains |
+| [Dev & Deploy](core/docs/DEV_DEPLOY.md) | Building, deploying to VPS, rolling upgrades |
+| [Security](core/docs/SECURITY.md) | Security hardening and threat model |
+| [Monitoring](core/docs/MONITORING.md) | Cluster health monitoring |
+| [Client SDK](core/docs/CLIENT_SDK.md) | Go SDK documentation |
+| [Serverless](core/docs/SERVERLESS.md) | WASM serverless functions |
+| [Common Problems](core/docs/COMMON_PROBLEMS.md) | Troubleshooting known issues |

 ## Contributing

-Contributions are welcome! This project follows:
-
-- **SOLID Principles** - Single responsibility, open/closed, etc.
-- **DRY Principle** - Don't repeat yourself
-- **Clean Architecture** - Clear separation of concerns
-- **Test Coverage** - Unit and E2E tests required
-
-See our architecture docs for design patterns and guidelines.
+See [CONTRIBUTING.md](CONTRIBUTING.md) for setup, development, and PR guidelines.
+
+## License
+
+[AGPL-3.0](LICENSE)
core/.env.example (new file)
@@ -0,0 +1,8 @@
+# OpenRouter API Key for changelog generation
+# Get your API key from https://openrouter.ai/keys
+OPENROUTER_API_KEY=your-api-key-here
+
+# ZeroSSL API Key for TLS certificates (alternative to Let's Encrypt)
+# Get your free API key from https://app.zerossl.com/developer
+# If not set, Caddy will use Let's Encrypt as the default CA
+ZEROSSL_API_KEY=
@@ -8,7 +8,7 @@ NOCOLOR='\033[0m'

 # Run tests before push
 echo -e "\n${CYAN}Running tests...${NOCOLOR}"
-go test ./... # Runs all tests in your repo
+cd "$(git rev-parse --show-toplevel)/core" && go test ./...
 status=$?
 if [ $status -ne 0 ]; then
   echo -e "${RED}Push aborted: some tests failed.${NOCOLOR}"
core/Makefile (new file)
@@ -0,0 +1,181 @@
+TEST?=./...
+
+.PHONY: test
+test:
+	@echo Running tests...
+	go test -v $(TEST)
+
+# Gateway-focused E2E tests assume gateway and nodes are already running
+# Auto-discovers configuration from ~/.orama and queries database for API key
+# No environment variables required
+.PHONY: test-e2e test-e2e-deployments test-e2e-fullstack test-e2e-https test-e2e-quick test-e2e-prod test-e2e-shared test-e2e-cluster test-e2e-integration test-e2e-production
+
+# Production E2E tests - includes production-only tests
+test-e2e-prod:
+	@if [ -z "$$ORAMA_GATEWAY_URL" ]; then \
+		echo "❌ ORAMA_GATEWAY_URL not set"; \
+		echo "Usage: ORAMA_GATEWAY_URL=https://dbrs.space make test-e2e-prod"; \
+		exit 1; \
+	fi
+	@echo "Running E2E tests (including production-only) against $$ORAMA_GATEWAY_URL..."
+	go test -v -tags "e2e production" -timeout 30m ./e2e/...
+
+# Generic e2e target
+test-e2e:
+	@echo "Running comprehensive E2E tests..."
+	@echo "Auto-discovering configuration from ~/.orama..."
+	go test -v -tags e2e -timeout 30m ./e2e/...
+
+test-e2e-deployments:
+	@echo "Running deployment E2E tests..."
+	go test -v -tags e2e -timeout 15m ./e2e/deployments/...
+
+test-e2e-fullstack:
+	@echo "Running fullstack E2E tests..."
+	go test -v -tags e2e -timeout 20m -run "TestFullStack" ./e2e/...
+
+test-e2e-https:
+	@echo "Running HTTPS/external access E2E tests..."
+	go test -v -tags e2e -timeout 10m -run "TestHTTPS" ./e2e/...
+
+test-e2e-shared:
+	@echo "Running shared E2E tests..."
+	go test -v -tags e2e -timeout 10m ./e2e/shared/...
+
+test-e2e-cluster:
+	@echo "Running cluster E2E tests..."
+	go test -v -tags e2e -timeout 15m ./e2e/cluster/...
+
+test-e2e-integration:
+	@echo "Running integration E2E tests..."
+	go test -v -tags e2e -timeout 20m ./e2e/integration/...
+
+test-e2e-production:
+	@echo "Running production-only E2E tests..."
+	go test -v -tags "e2e production" -timeout 15m ./e2e/production/...
+
+test-e2e-quick:
+	@echo "Running quick E2E smoke tests..."
+	go test -v -tags e2e -timeout 5m -run "TestStatic|TestHealth" ./e2e/...
+
+# Network - Distributed P2P Database System
+# Makefile for development and build tasks
+
+.PHONY: build clean test deps tidy fmt vet lint install-hooks push-devnet push-testnet rollout-devnet rollout-testnet release
+
+VERSION := 0.120.0
+COMMIT ?= $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
+DATE ?= $(shell date -u +%Y-%m-%dT%H:%M:%SZ)
+LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.commit=$(COMMIT)' -X 'main.date=$(DATE)'
+LDFLAGS_LINUX := -s -w $(LDFLAGS)
+
+# Build targets
+build: deps
+	@echo "Building network executables (version=$(VERSION))..."
+	@mkdir -p bin
+	go build -ldflags "$(LDFLAGS)" -o bin/identity ./cmd/identity
+	go build -ldflags "$(LDFLAGS)" -o bin/orama-node ./cmd/node
+	go build -ldflags "$(LDFLAGS)" -o bin/orama ./cmd/cli/
+	# Inject gateway build metadata via pkg path variables
+	go build -ldflags "$(LDFLAGS) -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildVersion=$(VERSION)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildCommit=$(COMMIT)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildTime=$(DATE)'" -o bin/gateway ./cmd/gateway
+	go build -ldflags "$(LDFLAGS)" -o bin/sfu ./cmd/sfu
+	go build -ldflags "$(LDFLAGS)" -o bin/turn ./cmd/turn
+	@echo "Build complete! Run ./bin/orama version"
+
+# Cross-compile CLI for Linux (only binary needed locally; VPS builds everything else from source)
+build-linux: deps
+	@echo "Cross-compiling CLI for linux/amd64 (version=$(VERSION))..."
+	@mkdir -p bin-linux
+	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS_LINUX)" -trimpath -o bin-linux/orama ./cmd/cli/
+	@echo "✓ CLI built at bin-linux/orama"
+	@echo ""
+	@echo "Prefer 'make build-archive' for full pre-built binary archive."
+
+# Build pre-compiled binary archive for deployment (all binaries + deps)
+build-archive: deps
+	@echo "Building binary archive (version=$(VERSION))..."
+	go build -ldflags "$(LDFLAGS)" -o bin/orama ./cmd/cli/
+	./bin/orama build --output /tmp/orama-$(VERSION)-linux-amd64.tar.gz
+
+# Install git hooks
+install-hooks:
+	@echo "Installing git hooks..."
+	@bash scripts/install-hooks.sh
+
+# Install orama CLI to ~/.local/bin and configure PATH
+install: build
+	@bash scripts/install.sh
+
+# Clean build artifacts
+clean:
+	@echo "Cleaning build artifacts..."
+	rm -rf bin/
+	rm -rf data/
+	@echo "Clean complete!"
+
+# Push binary archive to devnet nodes (fanout distribution)
+push-devnet:
+	./bin/orama node push --env devnet
+
+# Push binary archive to testnet nodes (fanout distribution)
+push-testnet:
+	./bin/orama node push --env testnet
+
+# Full rollout to devnet (build + push + rolling upgrade)
+rollout-devnet:
+	./bin/orama node rollout --env devnet --yes
+
+# Full rollout to testnet (build + push + rolling upgrade)
+rollout-testnet:
+	./bin/orama node rollout --env testnet --yes
+
+# Interactive release workflow (tag + push)
+release:
+	@bash scripts/release.sh
+
+# Check health of all nodes in an environment
+# Usage: make health ENV=devnet
+health:
+	@if [ -z "$(ENV)" ]; then \
+		echo "Usage: make health ENV=devnet|testnet"; \
+		exit 1; \
+	fi
+	./bin/orama monitor report --env $(ENV)
+
+# Help
+help:
+	@echo "Available targets:"
+	@echo " build - Build all executables"
+	@echo " install - Build and install 'orama' CLI to ~/.local/bin"
+	@echo " clean - Clean build artifacts"
+	@echo " test - Run unit tests"
+	@echo ""
+	@echo "E2E Testing:"
+	@echo " make test-e2e-prod - Run all E2E tests incl. production-only (needs ORAMA_GATEWAY_URL)"
+	@echo " make test-e2e-shared - Run shared E2E tests (cache, storage, pubsub, auth)"
+	@echo " make test-e2e-cluster - Run cluster E2E tests (libp2p, olric, rqlite, namespace)"
+	@echo " make test-e2e-integration - Run integration E2E tests (fullstack, persistence, concurrency)"
+	@echo " make test-e2e-deployments - Run deployment E2E tests"
+	@echo " make test-e2e-production - Run production-only E2E tests (DNS, HTTPS, cross-node)"
+	@echo " make test-e2e-quick - Quick smoke tests (static deploys, health checks)"
+	@echo " make test-e2e - Generic E2E tests (auto-discovers config)"
+	@echo ""
+	@echo " Example:"
+	@echo " ORAMA_GATEWAY_URL=https://orama-devnet.network make test-e2e-prod"
+	@echo ""
+	@echo "Deployment:"
+	@echo " make build-archive - Build pre-compiled binary archive for deployment"
+	@echo " make push-devnet - Push binary archive to devnet nodes"
+	@echo " make push-testnet - Push binary archive to testnet nodes"
+	@echo " make rollout-devnet - Full rollout: build + push + rolling upgrade (devnet)"
+	@echo " make rollout-testnet - Full rollout: build + push + rolling upgrade (testnet)"
+	@echo " make health ENV=devnet - Check health of all nodes in an environment"
+	@echo " make release - Interactive release workflow (tag + push)"
+	@echo ""
+	@echo "Maintenance:"
+	@echo " deps - Download dependencies"
+	@echo " tidy - Tidy dependencies"
+	@echo " fmt - Format code"
+	@echo " vet - Vet code"
+	@echo " lint - Lint code (fmt + vet)"
+	@echo " help - Show this help"
@@ -9,6 +9,7 @@ import (
 	// Command groups
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/app"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/authcmd"
+	"github.com/DeBrosOfficial/network/pkg/cli/cmd/buildcmd"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/dbcmd"
 	deploycmd "github.com/DeBrosOfficial/network/pkg/cli/cmd/deploy"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/envcmd"
@@ -17,6 +18,7 @@ import (
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/monitorcmd"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/namespacecmd"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/node"
+	"github.com/DeBrosOfficial/network/pkg/cli/cmd/sandboxcmd"
 )

 // version metadata populated via -ldflags at build time
@@ -83,6 +85,12 @@ and interacting with the Orama distributed network.`,
 	// Serverless function commands
 	rootCmd.AddCommand(functioncmd.Cmd)

+	// Build command (cross-compile binary archive)
+	rootCmd.AddCommand(buildcmd.Cmd)
+
+	// Sandbox command (ephemeral Hetzner Cloud clusters)
+	rootCmd.AddCommand(sandboxcmd.Cmd)
+
 	return rootCmd
 }
@ -357,11 +357,36 @@ Function Invocation:
|
|||||||
|
|
||||||
All inter-node communication is encrypted via a WireGuard VPN mesh:
|
All inter-node communication is encrypted via a WireGuard VPN mesh:
|
||||||
|
|
||||||
- **WireGuard IPs:** Each node gets a private IP (10.0.0.x) used for all cluster traffic
|
- **WireGuard IPs:** Each node gets a private IP (10.0.0.x/24) used for all cluster traffic
|
||||||
- **UFW Firewall:** Only public ports are exposed: 22 (SSH), 53 (DNS, nameservers only), 80/443 (HTTP/HTTPS), 51820 (WireGuard UDP)
|
- **UFW Firewall:** Only public ports are exposed: 22 (SSH), 53 (DNS, nameservers only), 80/443 (HTTP/HTTPS), 51820 (WireGuard UDP)
|
||||||
|
- **IPv6 disabled:** System-wide via sysctl to prevent bypass of IPv4 firewall rules
|
||||||
- **Internal services** (RQLite 5001/7001, IPFS 4001/4501, Olric 3320/3322, Gateway 6001) are only accessible via WireGuard or localhost
|
- **Internal services** (RQLite 5001/7001, IPFS 4001/4501, Olric 3320/3322, Gateway 6001) are only accessible via WireGuard or localhost
|
||||||
- **Invite tokens:** Single-use, time-limited tokens for secure node joining. No shared secrets on the CLI
|
- **Invite tokens:** Single-use, time-limited tokens for secure node joining. No shared secrets on the CLI
|
||||||
- **Join flow:** New nodes authenticate via HTTPS (443), establish WireGuard tunnel, then join all services over the encrypted mesh
|
- **Join flow:** New nodes authenticate via HTTPS (443) with TOFU certificate pinning, establish WireGuard tunnel, then join all services over the encrypted mesh
|
||||||
|
|
||||||
|
### Service Authentication
|
||||||
|
|
||||||
|
- **RQLite:** HTTP basic auth on all queries/executions — credentials generated at genesis, distributed via join response
|
||||||
|
- **Olric:** Memberlist gossip encrypted with a shared 32-byte key
|
||||||
|
- **IPFS Cluster:** TrustedPeers restricted to known cluster peer IDs (not `*`)
|
||||||
|
- **Internal endpoints:** `/v1/internal/wg/peers` and `/v1/internal/wg/peer/remove` require cluster secret
|
||||||
|
- **Vault:** V1 push/pull endpoints require session token authentication when guardian is configured
|
||||||
|
- **WebSockets:** Origin header validated against the node's configured domain
|
||||||
|
|
||||||
|
### Token & Key Security

- **Refresh tokens:** Stored as SHA-256 hashes (never plaintext)
- **API keys:** Stored as HMAC-SHA256 hashes with a server-side secret
- **TURN secrets:** Encrypted at rest with AES-256-GCM (key derived from cluster secret)
- **Binary signing:** Build archives signed with rootwallet EVM signature, verified on install
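The hashed-at-rest scheme above can be reproduced on the command line — a plain SHA-256 for refresh tokens, and HMAC-SHA256 with a server secret for API keys (token and secret values here are placeholders):

```shell
# SHA-256 of a refresh token (64 hex chars; only this digest is stored)
printf 'example-refresh-token' | sha256sum | cut -d' ' -f1

# HMAC-SHA256 of an API key with a server-side secret
printf 'example-api-key' | openssl dgst -sha256 -hmac 'server-secret' -r | cut -d' ' -f1
```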
### Process Isolation

- **Dedicated user:** All services run as `orama` user (not root)
- **systemd hardening:** `ProtectSystem=strict`, `NoNewPrivileges=yes`, `PrivateDevices=yes`, etc.
- **Capabilities:** Caddy and CoreDNS get `CAP_NET_BIND_SERVICE` for privileged ports

See [SECURITY.md](SECURITY.md) for the full security hardening reference.

### TLS/HTTPS

WebRTC uses a separate port allocation system from core namespace services; see [docs/WEBRTC.md](WEBRTC.md) for full details including client integration, API reference, and debugging.

## OramaOS

For mainnet, devnet, and testnet environments, nodes run **OramaOS** — a custom minimal Linux image built with Buildroot.

**Key properties:**

- No SSH, no shell — operators cannot access the filesystem
- LUKS full-disk encryption with Shamir key distribution across peers
- Read-only rootfs (SquashFS + dm-verity)
- A/B partition updates with cryptographic signature verification
- Service sandboxing via Linux namespaces + seccomp
- Single root process: the **orama-agent**

**The orama-agent manages:**

- Boot sequence and LUKS key reconstruction
- WireGuard tunnel setup
- Service lifecycle in sandboxed namespaces
- Command reception from Gateway over WireGuard (port 9998)
- OS updates (download, verify, A/B swap, reboot with rollback)

**Node enrollment:** OramaOS nodes join via `orama node enroll` instead of `orama node install`. The enrollment flow uses a registration code + invite token + wallet verification.

See [ORAMAOS_DEPLOYMENT.md](ORAMAOS_DEPLOYMENT.md) for the full deployment guide.

Sandbox clusters remain on Ubuntu for development convenience.

## Future Enhancements

1. **GraphQL Support** - GraphQL gateway alongside REST

---

How to completely remove all Orama Network state from a VPS so it can be reinstalled fresh.

> **OramaOS nodes:** This guide applies to Ubuntu-based nodes only. OramaOS has no SSH or shell access. To remove an OramaOS node: use `POST /v1/node/leave` via the Gateway API for graceful departure, or reflash the OramaOS image via your VPS provider's dashboard for a factory reset. See [ORAMAOS_DEPLOYMENT.md](ORAMAOS_DEPLOYMENT.md) for details.

## Quick Clean (Copy-Paste)

Run this as root or with sudo on the target VPS:

```bash
wg set wg0 peer <NodeA-pubkey> remove
wg set wg0 peer <NodeA-pubkey> endpoint <NodeA-public-ip>:51820 allowed-ips <NodeA-wg-ip>/32 persistent-keepalive 25
```

Then restart services: `sudo orama node restart`

You can find peer public keys with `wg show wg0`.

```bash
cat /opt/orama/.orama/data/namespaces/<name>/configs/olric-*.yaml
```

If `bindAddr` is `0.0.0.0`, the node will try to bind to IPv6 on dual-stack hosts, breaking memberlist gossip.

**Fix:** Edit the YAML to use the node's WireGuard IP (run `ip addr show wg0` to find it), then restart: `sudo orama node restart`

This was fixed in code (BindAddr validation in `SpawnOlric`), so new namespaces won't have this issue.

```
olric_servers:
- "10.0.0.Z:10002"
```

Then: `sudo orama node restart`

This was fixed in code, so new namespaces get the correct config.

## 3. Namespace not restoring after restart (missing cluster-state.json)

**Symptom:** After `orama node restart`, the namespace services don't come back because `RestoreLocalClustersFromDisk` has no state file.

**Check:**
This was fixed in code — `ProvisionCluster` now saves state to all nodes (incl

## 4. Namespace gateway processes not restarting after upgrade

**Symptom:** After `orama upgrade --restart` or `orama node restart`, namespace gateway/olric/rqlite services don't start.

**Cause:** `orama node stop` disables systemd template services (`orama-namespace-gateway@<name>.service`). They have `PartOf=orama-node.service`, but that only propagates restart to **enabled** services.

**Fix:** Re-enable the services before restarting:

```bash
systemctl enable orama-namespace-rqlite@<name>.service
systemctl enable orama-namespace-olric@<name>.service
systemctl enable orama-namespace-gateway@<name>.service
sudo orama node restart
```

This was fixed in code — the upgrade orchestrator now re-enables `@` services before restarting.
ssh -n user@host 'command'

---

## 6. RQLite returns 401 Unauthorized

**Symptom:** RQLite queries fail with HTTP 401 after security hardening.

**Cause:** RQLite now requires basic auth. The client isn't sending credentials.

**Fix:** Ensure the RQLite client is configured with the credentials from `/opt/orama/.orama/secrets/rqlite-auth.json`. The central RQLite client wrapper (`pkg/rqlite/client.go`) handles this automatically. If using a standalone client (e.g., CoreDNS plugin), ensure it's also configured.

---

## 7. Olric cluster split after upgrade

**Symptom:** Olric nodes can't gossip after enabling memberlist encryption.

**Cause:** Olric memberlist encryption is all-or-nothing. Nodes with encryption can't communicate with nodes without it.

**Fix:** All nodes must be restarted simultaneously when enabling Olric encryption. The cache will be lost (it rebuilds from DB). This is expected — Olric is a cache, not persistent storage.

---

## 8. OramaOS: LUKS unlock fails

**Symptom:** OramaOS node can't reconstruct its LUKS key after reboot.

**Cause:** Not enough peer vault-guardians are online to meet the Shamir threshold (K = max(3, N/3)).

**Fix:** Ensure enough cluster nodes are online and reachable over WireGuard. The agent retries with exponential backoff. For genesis nodes before 5+ peers exist, use:

```bash
orama node unlock --genesis --node-ip <wg-ip>
```
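The threshold formula can be checked with a quick shell sketch (integer division is assumed here; whether the real agent rounds N/3 up or down is not stated in this doc):

```shell
# K = max(3, N/3) — minimum guardians that must be reachable
threshold() {
  n=$1
  k=$((n / 3))            # integer division (assumed floor)
  [ "$k" -lt 3 ] && k=3
  echo "$k"
}

threshold 5    # → 3
threshold 12   # → 4
```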

---

## 9. OramaOS: Enrollment timeout

**Symptom:** `orama node enroll` hangs or times out.

**Cause:** The OramaOS node's port 9999 isn't reachable, or the Gateway can't reach the node's WebSocket.

**Fix:** Check that port 9999 is open in your VPS provider's external firewall (Hetzner firewall, AWS security groups, etc.). OramaOS opens it internally, but provider-level firewalls must be configured separately.

---

## 10. Binary signature verification fails

**Symptom:** `orama node install` rejects the binary archive with a signature error.

**Cause:** The archive was tampered with, or the manifest.sig file is missing/corrupted.

**Fix:** Rebuild the archive with `orama build` and re-sign with `make sign` (in the orama-os repo). Ensure you're using the rootwallet that matches the embedded signer address.

---

## General Debugging Tips

- **Always use `sudo orama node restart`** instead of raw `systemctl` commands
- **Namespace data lives at:** `/opt/orama/.orama/data/namespaces/<name>/`
- **Check service logs:** `journalctl -u orama-namespace-olric@<name>.service --no-pager -n 50`
- **Check WireGuard:** `wg show wg0` — look for recent handshakes and transfer bytes
- **Check gateway health:** `curl http://localhost:<port>/v1/health` from the node itself
- **Node IPs:** Check `scripts/remote-nodes.conf` for credentials, `wg show wg0` for WG IPs
- **OramaOS nodes:** No SSH access — use Gateway API endpoints (`/v1/node/status`, `/v1/node/logs`) for diagnostics
make test

## Deploying to VPS

All binaries are pre-compiled locally and shipped as a binary archive. Zero compilation on the VPS.

### Deploy Workflow

```bash
# One-command: build + push + rolling upgrade
orama node rollout --env testnet

# Or step by step:

# 1. Build binary archive (cross-compiles all binaries for linux/amd64)
orama build
# Creates: /tmp/orama-<version>-linux-amd64.tar.gz

# 2. Push archive to all nodes (fanout via hub node)
orama node push --env testnet

# 3. Rolling upgrade (one node at a time, followers first, leader last)
orama node upgrade --env testnet
```

### Fresh Node Install

```bash
# Build the archive first (if not already built)
orama build

# Install on a new VPS (auto-uploads binary archive, zero compilation)
orama node install --vps-ip <ip> --nameserver --domain <domain> --base-domain <domain>
```

The installer auto-detects the binary archive at `/opt/orama/manifest.json` and copies pre-built binaries instead of compiling from source.

### Upgrading a Multi-Node Cluster (CRITICAL)

**NEVER restart all nodes simultaneously.** RQLite uses Raft consensus and requires a majority (quorum) to function.
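The quorum requirement is why one-node-at-a-time restarts matter: with N voting nodes, floor(N/2) + 1 must stay up. A small sketch of the standard Raft majority rule:

```shell
# Raft majority: floor(N/2) + 1 nodes must be up for the cluster to function
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # → 2 (tolerates 1 node down)
quorum 5   # → 3 (tolerates 2 nodes down)
quorum 6   # → 4 (tolerates 2 nodes down)
```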

#### Safe Upgrade Procedure

```bash
# Full rollout (build + push + rolling upgrade, one command)
orama node rollout --env testnet

# Or with more control:
orama node push --env testnet                     # Push archive to all nodes
orama node upgrade --env testnet                  # Rolling upgrade (auto-detects leader)
orama node upgrade --env testnet --node 1.2.3.4   # Single node only
orama node upgrade --env testnet --delay 60       # 60s between nodes
```

The rolling upgrade automatically:

1. Upgrades **follower** nodes first
2. Upgrades the **leader** last
3. Waits a configurable delay between nodes (default: 30s)

After each node, verify health:

```bash
orama monitor report --env testnet
```

#### What NOT to Do

If nodes get stuck in "Candidate" state or show "leader not found" errors:

```bash
# Recover the Raft cluster (specify the node with highest commit index as leader)
orama node recover-raft --env testnet --leader 1.2.3.4
```

This will:

1. Stop orama-node on ALL nodes
2. Backup + delete raft/ on non-leader nodes
3. Start the leader, wait for Leader state
4. Start remaining nodes in batches
5. Verify cluster health

### Cleaning Nodes for Reinstallation

```bash
# Wipe all data and services (preserves Anyone relay keys)
orama node clean --env testnet --force

# Also remove shared binaries (rqlited, ipfs, caddy, etc.)
orama node clean --env testnet --nuclear --force

# Single node only
orama node clean --env testnet --node 1.2.3.4 --force
```

### Push Options

```bash
orama node push --env devnet                  # Fanout via hub (default, fastest)
orama node push --env testnet --node 1.2.3.4  # Single node
orama node push --env testnet --direct        # Sequential, no fanout
```

### CLI Flags Reference


| Flag | Description |
|------|-------------|
| `--restart` | Restart all services after upgrade (local mode) |
| `--env <env>` | Target environment for remote rolling upgrade |
| `--node <ip>` | Upgrade a single node only |
| `--delay <seconds>` | Delay between nodes during rolling upgrade (default: 30) |
| `--anyone-relay` | Enable Anyone relay (same flags as install) |
| `--anyone-bandwidth <pct>` | Limit relay to N% of VPS bandwidth (default: 30, 0=unlimited) |
| `--anyone-accounting <GB>` | Monthly data cap for relay in GB (0=unlimited) |

#### `orama build`

| Flag | Description |
|------|-------------|
| `--arch <arch>` | Target architecture (default: amd64) |
| `--output <path>` | Output archive path |
| `--verbose` | Verbose build output |

#### `orama node push`

| Flag | Description |
|------|-------------|
| `--env <env>` | Target environment (required) |
| `--node <ip>` | Push to a single node only |
| `--direct` | Sequential upload (no hub fanout) |

#### `orama node rollout`

| Flag | Description |
|------|-------------|
| `--env <env>` | Target environment (required) |
| `--no-build` | Skip the build step |
| `--yes` | Skip confirmation |
| `--delay <seconds>` | Delay between nodes (default: 30) |

#### `orama node clean`

| Flag | Description |
|------|-------------|
| `--env <env>` | Target environment (required) |
| `--node <ip>` | Clean a single node only |
| `--nuclear` | Also remove shared binaries |
| `--force` | Skip confirmation (DESTRUCTIVE) |

#### `orama node recover-raft`

| Flag | Description |
|------|-------------|
| `--env <env>` | Target environment (required) |
| `--leader <ip>` | Leader node IP — highest commit index (required) |
| `--force` | Skip confirmation (DESTRUCTIVE) |

#### `orama node` (Service Management)

Use these commands to manage services on production nodes:
is properly configured, always use the HTTPS domain URL.
UFW from external access. The join request goes through Caddy on port 80 (HTTP) or 443 (HTTPS), which proxies to the gateway internally.

## OramaOS Enrollment

For OramaOS nodes (mainnet, devnet, testnet), use the enrollment flow instead of `orama node install`:

```bash
# 1. Flash OramaOS image to VPS (via provider dashboard)

# 2. Generate invite token on existing cluster node
orama node invite --expiry 24h

# 3. Enroll the OramaOS node
orama node enroll --node-ip <vps-public-ip> --token <invite-token> --gateway <gateway-url>

# 4. For genesis node reboots (before 5+ peers exist)
orama node unlock --genesis --node-ip <wg-ip>
```

OramaOS nodes have no SSH access. All management happens through the Gateway API:

```bash
# Status, logs, commands — all via Gateway proxy
curl "https://gateway.example.com/v1/node/status?node_id=<id>"
curl "https://gateway.example.com/v1/node/logs?node_id=<id>&service=gateway"
```

See [ORAMAOS_DEPLOYMENT.md](ORAMAOS_DEPLOYMENT.md) for the full guide.

**Note:** `orama node clean` does not work on OramaOS nodes (no SSH). Use `orama node leave` for graceful departure, or reflash the image for a factory reset.

## Pre-Install Checklist (Ubuntu Only)

Before running `orama node install` on a VPS, ensure:

The inspector reads node definitions from a pipe-delimited config file (default:

### Format

```
# environment|user@host|role
devnet|ubuntu@1.2.3.4|node
devnet|ubuntu@5.6.7.8|nameserver-ns1
```

| Field | Description |
|-------|-------------|
| `environment` | Cluster name (`devnet`, `testnet`) |
| `user@host` | SSH credentials |
| `role` | `node` or `nameserver-ns1`, `nameserver-ns2`, etc. |

SSH keys are resolved from rootwallet (`rw vault ssh get <host>/<user> --priv`).

Blank lines and lines starting with `#` are ignored.
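A node list in this format is easy to consume from shell — for example, selecting the hosts for one environment with awk (the sample rows below are made up):

```shell
# Print user@host for every devnet entry, skipping comments and blank lines
awk -F'|' '/^[^#]/ && NF >= 3 && $1 == "devnet" { print $2 }' <<'EOF'
# environment|user@host|role
devnet|ubuntu@1.2.3.4|node
testnet|ubuntu@9.9.9.9|node
devnet|ubuntu@5.6.7.8|nameserver-ns1
EOF
# → ubuntu@1.2.3.4
# → ubuntu@5.6.7.8
```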
# OramaOS Deployment Guide
|
||||||
|
|
||||||
|
OramaOS is a custom minimal Linux image built with Buildroot. It replaces the standard Ubuntu-based node deployment for mainnet, devnet, and testnet environments. Sandbox clusters remain on Ubuntu for development convenience.
|
||||||
|
|
||||||
|
## What is OramaOS?
|
||||||
|
|
||||||
|
OramaOS is a locked-down operating system designed specifically for Orama node operators. Key properties:
|
||||||
|
|
||||||
|
- **No SSH, no shell** — operators cannot access the filesystem or run commands on the machine
|
||||||
|
- **LUKS full-disk encryption** — the data partition is encrypted; the key is split via Shamir's Secret Sharing across peer nodes
|
||||||
|
- **Read-only rootfs** — the OS image uses SquashFS with dm-verity integrity verification
|
||||||
|
- **A/B partition updates** — signed OS images are applied atomically with automatic rollback on failure
|
||||||
|
- **Service sandboxing** — each service runs in its own Linux namespace with seccomp syscall filtering
|
||||||
|
- **Signed binaries** — all updates are cryptographically signed with the Orama rootwallet
|
||||||
|
|
||||||
|
## Architecture
|
||||||
|
|
||||||
|
```
|
||||||
|
Partition Layout:
|
||||||
|
/dev/sda1 — ESP (EFI System Partition, systemd-boot)
|
||||||
|
/dev/sda2 — rootfs-A (SquashFS, read-only, dm-verity)
|
||||||
|
/dev/sda3 — rootfs-B (standby, for A/B updates)
|
||||||
|
/dev/sda4 — data (LUKS2 encrypted, ext4)
|
||||||
|
|
||||||
|
Boot Flow:
|
||||||
|
systemd-boot → dm-verity rootfs → orama-agent → WireGuard → services
|
||||||
|
```
|
||||||
|
|
||||||
|
The **orama-agent** is the only root process. It manages:
|
||||||
|
- Boot sequence and LUKS key reconstruction
|
||||||
|
- WireGuard tunnel setup
|
||||||
|
- Service lifecycle (start, stop, restart in sandboxed namespaces)
|
||||||
|
- Command reception from the Gateway over WireGuard
|
||||||
|
- OS updates (download, verify signature, A/B swap, reboot)
|
||||||
|
|
||||||
|
## Enrollment Flow
|
||||||
|
|
||||||
|
OramaOS nodes join the cluster through an enrollment process (different from the Ubuntu `orama node install` flow):
|
||||||
|
|
||||||
|
### Step 1: Flash OramaOS to VPS
|
||||||
|
|
||||||
|
Download the OramaOS image and flash it to your VPS:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Download image (URL provided upon acceptance)
|
||||||
|
wget https://releases.orama.network/oramaos-v1.0.0-amd64.qcow2
|
||||||
|
|
||||||
|
# Flash to VPS (provider-specific — Hetzner, Vultr, etc.)
|
||||||
|
# Most providers support uploading custom images via their dashboard
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: First Boot — Enrollment Mode
|
||||||
|
|
||||||
|
On first boot, the agent:
|
||||||
|
1. Generates a random 8-character registration code
|
||||||
|
2. Starts a temporary HTTP server on port 9999
|
||||||
|
3. Opens an outbound WebSocket to the Gateway
|
||||||
|
4. Waits for enrollment to complete
|
||||||
|
|
||||||
|
The registration code is displayed on the VPS console (if available) and served at `http://<vps-ip>:9999/`.
|
||||||
|
|
||||||
|
### Step 3: Run Enrollment from CLI
|
||||||
|
|
||||||
|
On your local machine (where you have the `orama` CLI and rootwallet):
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Generate an invite token on any existing cluster node
|
||||||
|
orama node invite --expiry 24h
|
||||||
|
|
||||||
|
# Enroll the OramaOS node
|
||||||
|
orama node enroll --node-ip <vps-public-ip> --token <invite-token> --gateway <gateway-url>
|
||||||
|
```
|
||||||
|
|
||||||
|
The enrollment command:
|
||||||
|
1. Fetches the registration code from the node (port 9999)
|
||||||
|
2. Sends the code + invite token to the Gateway
|
||||||
|
3. Gateway validates everything, assigns a WireGuard IP, and pushes config to the node
|
||||||
|
4. Node configures WireGuard, formats the LUKS-encrypted data partition
|
||||||
|
5. LUKS key is split via Shamir and distributed to peer vault-guardians
|
||||||
|
6. Services start in sandboxed namespaces
|
||||||
|
7. Port 9999 closes permanently
|
||||||
|
|
||||||
|
### Step 4: Verify
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Check the node is online and healthy
|
||||||
|
orama monitor report --env <env>
|
||||||
|
```
|
||||||
|
|
||||||
|
## Genesis Node

The first OramaOS node in a cluster is the **genesis node**. It has a special boot path because there are no peers yet for Shamir key distribution:

1. Genesis generates a LUKS key and encrypts the data partition
2. The LUKS key is encrypted with a rootwallet-derived key and stored on the unencrypted rootfs
3. On reboot (before enough peers exist), the operator must manually unlock:

```bash
orama node unlock --genesis --node-ip <wg-ip>
```

This command:

1. Fetches the encrypted genesis key from the node
2. Decrypts it using the rootwallet (`rw decrypt`)
3. Sends the decrypted LUKS key to the agent over WireGuard

Once 5+ peers have joined, the genesis node distributes Shamir shares to peers, deletes the local encrypted key, and transitions to normal Shamir-based unlock. After this transition, `orama node unlock` is no longer needed.

## Normal Reboot (Shamir Unlock)

When an enrolled OramaOS node reboots:

1. Agent starts, brings up WireGuard
2. Contacts peer vault-guardians over WireGuard
3. Fetches K Shamir shares (K = threshold, typically `max(3, N/3)`)
4. Reconstructs the LUKS key via Lagrange interpolation over GF(256)
5. Decrypts and mounts the data partition
6. Starts all services
7. Zeros the key from memory

If not enough peers are available, the agent enters a degraded "waiting for peers" state and retries with exponential backoff (1s, 2s, 4s, 8s, 16s; max 5 retries per cycle).

## Node Management

Since OramaOS has no SSH, all management happens through the Gateway API:

```bash
# Check node status
curl "https://gateway.example.com/v1/node/status?node_id=<id>"

# Send a command (e.g., restart a service)
curl -X POST "https://gateway.example.com/v1/node/command?node_id=<id>" \
  -H "Content-Type: application/json" \
  -d '{"action":"restart","service":"rqlite"}'

# View logs
curl "https://gateway.example.com/v1/node/logs?node_id=<id>&service=gateway&lines=100"

# Graceful node departure
curl -X POST "https://gateway.example.com/v1/node/leave" \
  -H "Content-Type: application/json" \
  -d '{"node_id":"<id>"}'
```

The Gateway proxies these requests to the agent over WireGuard (port 9998). The agent is never directly accessible from the public internet.

## OS Updates

OramaOS uses an A/B partition scheme for atomic, rollback-safe updates:

1. Agent periodically checks for new versions
2. Downloads the signed image (P2P over WireGuard between nodes)
3. Verifies the rootwallet EVM signature against the embedded public key
4. Writes to the standby partition (if running from A, writes to B)
5. Sets systemd-boot to boot from B with `tries_left=3`
6. Reboots
7. If B boots successfully (agent starts, WireGuard connects, services healthy), marks B as "good"
8. If B fails 3 times, systemd-boot automatically falls back to A

No operator intervention is needed for updates. Failed updates are automatically rolled back.

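The slot-selection and boot-count rules above can be sketched as follows. The `BootState` struct and function names are assumptions for illustration, not the agent's real types; the actual counting is done by systemd-boot itself.

```go
package main

import "fmt"

// BootState models the systemd-boot counters described above.
type BootState struct {
	Active    string // partition currently booted: "A" or "B"
	TriesLeft int    // boot attempts remaining for the new slot
}

// standby returns the partition an update should be written to.
func standby(active string) string {
	if active == "A" {
		return "B"
	}
	return "A"
}

// afterBoot applies the rule: a healthy boot marks the slot good;
// exhausting tries_left falls back to the previous slot.
func afterBoot(s BootState, healthy bool) BootState {
	if healthy {
		s.TriesLeft = -1 // marked "good": boot counting disabled
		return s
	}
	s.TriesLeft--
	if s.TriesLeft <= 0 {
		s.Active = standby(s.Active) // automatic rollback
	}
	return s
}

func main() {
	s := BootState{Active: "B", TriesLeft: 3}
	for i := 0; i < 3; i++ {
		s = afterBoot(s, false) // three failed boots of the new slot
	}
	fmt.Println(s.Active) // fell back to "A"
}
```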
## Service Sandboxing

Each service on OramaOS runs in an isolated environment:

- **Mount namespace** — each service only sees its own data directory as writable; everything else is read-only
- **UTS namespace** — isolated hostname
- **Dedicated UID/GID** — each service runs as a different user (not root)
- **Seccomp filtering** — per-service syscall allowlist (initially in audit mode, then enforce mode)

Services and their sandbox profiles:

| Service | Writable Path | Extra Syscalls |
|---------|--------------|----------------|
| RQLite | `/opt/orama/.orama/data/rqlite` | fsync, fdatasync (Raft + SQLite WAL) |
| Olric | `/opt/orama/.orama/data/olric` | sendmmsg, recvmmsg (gossip) |
| IPFS | `/opt/orama/.orama/data/ipfs` | sendfile, splice (data transfer) |
| Gateway | `/opt/orama/.orama/data/gateway` | sendfile, splice (HTTP) |
| CoreDNS | `/opt/orama/.orama/data/coredns` | sendmmsg, recvmmsg (DNS) |

## OramaOS vs Ubuntu Deployment

| Feature | Ubuntu | OramaOS |
|---------|--------|---------|
| SSH access | Yes | No |
| Shell access | Yes | No |
| Disk encryption | No | LUKS2 (Shamir) |
| OS updates | Manual (`orama node upgrade`) | Automatic (signed, A/B) |
| Service isolation | systemd only | Namespaces + seccomp |
| Rootfs integrity | None | dm-verity |
| Binary signing | Optional | Required |
| Operator data access | Full | None |
| Environments | All (including sandbox) | Mainnet, devnet, testnet |

## Cleaning / Factory Reset

OramaOS nodes cannot be cleaned with the standard `orama node clean` command (there is no SSH access). Instead:

- **Graceful departure:** `orama node leave` via the Gateway API — stops services, redistributes Shamir shares, removes the WireGuard peer
- **Factory reset:** reflash the OramaOS image on the VPS via the hosting provider's dashboard
- **Data is unrecoverable:** since the LUKS key is distributed across peers, reflashing destroys all data permanently

## Troubleshooting

### Node stuck in enrollment mode

The node boots but enrollment never completes.

**Check:** Can you reach `http://<vps-ip>:9999/` from your machine? If not, the VPS firewall may be blocking port 9999.

**Fix:** Ensure port 9999 is open in the VPS provider's firewall. OramaOS opens it automatically via its internal firewall, but external provider firewalls (Hetzner, AWS security groups) must be configured separately.

### LUKS unlock fails (not enough peers)

After a reboot, the node can't reconstruct its LUKS key.

**Check:** How many peer nodes are online? The node needs at least K peers (the threshold) to be reachable over WireGuard.

**Fix:** Ensure enough cluster nodes are online. If this is the genesis node and fewer than 5 peers exist, use:

```bash
orama node unlock --genesis --node-ip <wg-ip>
```

### Update failed, node rolled back

The node applied an update but reverted to the previous version.

**Check:** The agent logs will show why the new partition failed to boot (accessible via `GET /v1/node/logs?service=agent`).

**Common causes:** A corrupted download (signature verification should catch this), a hardware issue, or an incompatible configuration.

### Services not starting after reboot

The node rebooted and LUKS unlocked, but services are unhealthy.

**Check:** `GET /v1/node/status` — which services are down?

**Fix:** Try restarting the specific service via `POST /v1/node/command` with `{"action":"restart","service":"<name>"}`. If the issue persists, check the service logs.
---

**New file:** `core/docs/SANDBOX.md` (208 lines)

# Sandbox: Ephemeral Hetzner Cloud Clusters

Spin up temporary 5-node Orama clusters on Hetzner Cloud for development and testing. Total cost: ~€0.04/hour.

## Quick Start

```bash
# One-time setup (API key, domain, floating IPs, SSH key)
orama sandbox setup

# Create a cluster (~5 minutes)
orama sandbox create --name my-feature

# Check health
orama sandbox status

# SSH into a node
orama sandbox ssh 1

# Deploy code changes
orama sandbox rollout

# Tear it down
orama sandbox destroy
```

## Prerequisites

### 1. Hetzner Cloud Account

Create a project at [console.hetzner.cloud](https://console.hetzner.cloud) and generate an API token with read/write permissions under **Security > API Tokens**.

### 2. Domain with Glue Records

You need a domain (or subdomain) that points to Hetzner Floating IPs. The `orama sandbox setup` wizard will guide you through this.

**Example:** Using `sbx.dbrs.space`

At your domain registrar:

1. Create glue records (Personal DNS Servers):
   - `ns1.sbx.dbrs.space` → `<floating-ip-1>`
   - `ns2.sbx.dbrs.space` → `<floating-ip-2>`
2. Set custom nameservers for `sbx.dbrs.space`:
   - `ns1.sbx.dbrs.space`
   - `ns2.sbx.dbrs.space`

DNS propagation can take up to 48 hours.

### 3. Binary Archive

Build the binary archive before creating a cluster:

```bash
orama build
```

This creates `/tmp/orama-<version>-linux-amd64.tar.gz` with all pre-compiled binaries.

## Setup

Run the interactive setup wizard:

```bash
orama sandbox setup
```

This will:

1. Prompt for your Hetzner API token and validate it
2. Ask for your sandbox domain
3. Create or reuse 2 Hetzner Floating IPs (~€0.005/hr each)
4. Create a firewall with sandbox rules
5. Create a rootwallet SSH entry (`sandbox/root`) if it doesn't exist
6. Upload the wallet-derived public key to Hetzner
7. Display DNS configuration instructions

Config is saved to `~/.orama/sandbox.yaml`.

## Commands

### `orama sandbox create [--name <name>]`

Creates a new 5-node cluster. If `--name` is omitted, a random name is generated (e.g., "swift-falcon").

**Cluster layout:**

- Nodes 1-2: Nameservers (CoreDNS + Caddy + all services)
- Nodes 3-5: Regular nodes (all services except CoreDNS)

**Phases:**

1. Provision 5 CX22 servers on Hetzner (parallel, ~90s)
2. Assign floating IPs to nameserver nodes (~10s)
3. Upload the binary archive to all nodes (parallel, ~60s)
4. Install the genesis node + generate invite tokens (~120s)
5. Join the remaining 4 nodes (serial, with health checks, ~180s)
6. Verify cluster health (~15s)

**One sandbox at a time.** Since the floating IPs are shared, only one sandbox can own the nameservers. Destroy the active sandbox before creating a new one.

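The parallel phases above (provisioning, archive upload) follow a fan-out/collect pattern that can be sketched with a `sync.WaitGroup`; `provision` here is a hypothetical stand-in for the Hetzner "create server" API call, not the CLI's real code.

```go
package main

import (
	"fmt"
	"sync"
)

// provision is a hypothetical stand-in for the Hetzner create-server call.
func provision(name string) (string, error) {
	return name + ".example", nil // would return the new server's IP
}

// provisionAll creates all servers concurrently and collects the results.
func provisionAll(names []string) (map[string]string, error) {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		ips  = make(map[string]string)
		errs []error
	)
	for _, name := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			ip, err := provision(name)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				errs = append(errs, err)
				return
			}
			ips[name] = ip
		}(name)
	}
	wg.Wait()
	if len(errs) > 0 {
		return nil, errs[0]
	}
	return ips, nil
}

func main() {
	names := make([]string, 5)
	for i := range names {
		names[i] = fmt.Sprintf("sbx-swift-falcon-%d", i+1)
	}
	ips, _ := provisionAll(names)
	fmt.Println(len(ips)) // 5
}
```

The join phase, by contrast, is serial: each node must pass health checks before the next one joins, so Raft quorum is never put at risk.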
### `orama sandbox destroy [--name <name>] [--force]`

Tears down a cluster:

1. Unassigns the floating IPs
2. Deletes all 5 servers (parallel)
3. Removes the state file

Use `--force` to skip confirmation.

### `orama sandbox list`

Lists all sandboxes with their status. Also checks Hetzner for orphaned servers that don't have a corresponding state file.

### `orama sandbox status [--name <name>]`

Shows per-node health, including:

- Service status (active/inactive)
- RQLite role (Leader/Follower)
- Cluster summary (commit index, voter count)

### `orama sandbox rollout [--name <name>]`

Deploys code changes:

1. Uses the latest binary archive from `/tmp/` (run `orama build` first)
2. Pushes it to all nodes
3. Performs a rolling upgrade: followers first, leader last, 15s between nodes

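The followers-first, leader-last ordering can be expressed as a small sort; the `Node` type here is an assumption for illustration. Upgrading the leader last means at most one Raft leader election during the rollout.

```go
package main

import "fmt"

// Node is a minimal stand-in for a sandbox cluster member.
type Node struct {
	Name   string
	Leader bool
}

// rolloutOrder returns nodes with followers first and any leader last,
// matching the upgrade order described above.
func rolloutOrder(nodes []Node) []Node {
	ordered := make([]Node, 0, len(nodes))
	var leaders []Node
	for _, n := range nodes {
		if n.Leader {
			leaders = append(leaders, n)
		} else {
			ordered = append(ordered, n)
		}
	}
	return append(ordered, leaders...)
}

func main() {
	nodes := []Node{{"sbx-1", true}, {"sbx-2", false}, {"sbx-3", false}}
	for _, n := range rolloutOrder(nodes) {
		fmt.Println(n.Name) // sbx-2, sbx-3, then the leader sbx-1
	}
}
```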
### `orama sandbox ssh <node-number>`

Opens an interactive SSH session to a sandbox node (1-5).

```bash
orama sandbox ssh 1   # SSH into node 1 (genesis/ns1)
orama sandbox ssh 3   # SSH into node 3 (regular node)
```

## Architecture

### Floating IPs

Hetzner Floating IPs are persistent IPv4 addresses that can be reassigned between servers. They solve the DNS chicken-and-egg problem:

- Glue records at the registrar point to 2 Floating IPs (configured once)
- Each new sandbox assigns the Floating IPs to its nameserver nodes
- DNS works instantly — no propagation delay between clusters

### SSH Authentication

Sandbox uses a rootwallet-derived SSH key (the `sandbox/root` vault entry), the same mechanism as production. The wallet must be unlocked (`rw unlock`) before running sandbox commands that use SSH. The public key is uploaded to Hetzner during setup and injected into every server at creation time.

### Server Naming

Servers are named `sbx-<name>-<N>` (e.g., `sbx-swift-falcon-1` through `sbx-swift-falcon-5`).

### State Files

Sandbox state is stored at `~/.orama/sandboxes/<name>.yaml`. This tracks server IDs, IPs, roles, and cluster status.

## Cost

| Resource | Cost | Qty | Total |
|----------|------|-----|-------|
| CX22 (2 vCPU, 4 GB) | €0.006/hr | 5 | €0.03/hr |
| Floating IPv4 | €0.005/hr | 2 | €0.01/hr |
| **Total** | | | **~€0.04/hr** |

Servers are billed per hour. Floating IPs are billed for as long as they exist (even when unassigned). Destroy the sandbox when not in use to save on server costs.

## Troubleshooting

### "sandbox not configured"

Run `orama sandbox setup` first.

### "no binary archive found"

Run `orama build` to create the binary archive.

### "sandbox X is already active"

Only one sandbox can be active at a time. Destroy it first:

```bash
orama sandbox destroy --name <name>
```

### Server creation fails

Check that:

- The Hetzner API token is valid and has read/write permissions
- You haven't hit Hetzner's server limit (default: 10 per project)
- The selected location has CX22 capacity

### Genesis install fails

SSH into the node to debug:

```bash
orama sandbox ssh 1
journalctl -u orama-node -f
```

The sandbox will be left in the "error" state. You can destroy and recreate it.

### DNS not resolving

1. Verify the glue records are configured at your registrar
2. Check propagation: `dig NS sbx.dbrs.space @8.8.8.8`
3. Propagation can take 24-48 hours for new domains

### Orphaned servers

If `orama sandbox list` shows orphaned servers, delete them manually at [console.hetzner.cloud](https://console.hetzner.cloud). Sandbox servers are labeled `orama-sandbox=<name>` for easy identification.
---

**New file:** `core/docs/SECURITY.md` (194 lines)

# Security Hardening

This document describes all security measures applied to the Orama Network, covering both Phase 1 (service hardening on existing Ubuntu nodes) and Phase 2 (the OramaOS locked-down image).

## Phase 1: Service Hardening

These measures apply to all nodes (Ubuntu and OramaOS).

### Network Isolation

**CIDR Validation (Step 1.1)**

- The WireGuard subnet is restricted to `10.0.0.0/24` across all components: firewall rules, rate limiter, auth module, and WireGuard PostUp/PostDown iptables rules
- Prevents other tenants on shared VPS providers from bypassing the firewall via overlapping `10.x.x.x` ranges

**IPv6 Disabled (Step 1.2)**

- IPv6 is disabled system-wide via sysctl: `net.ipv6.conf.all.disable_ipv6=1`
- Prevents services bound to `0.0.0.0` from being reachable via IPv6 (which had no firewall rules)

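The subnet check can be sketched with the standard library's `net/netip`; this is an illustration of the validation rule, not the project's actual validator.

```go
package main

import (
	"fmt"
	"net/netip"
)

// wgSubnet is the only range accepted as WireGuard peer traffic.
var wgSubnet = netip.MustParsePrefix("10.0.0.0/24")

// isWGPeer reports whether addr falls inside the trusted WireGuard subnet.
// A broad 10.0.0.0/8 check would wrongly trust other tenants' ranges.
func isWGPeer(addr string) bool {
	ip, err := netip.ParseAddr(addr)
	if err != nil {
		return false
	}
	return wgSubnet.Contains(ip)
}

func main() {
	fmt.Println(isWGPeer("10.0.0.42"))   // true
	fmt.Println(isWGPeer("10.9.8.7"))    // false: 10.x but outside the /24
	fmt.Println(isWGPeer("192.168.1.1")) // false
}
```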
### Authentication

**Internal Endpoint Auth (Step 1.3)**

- `/v1/internal/wg/peers` and `/v1/internal/wg/peer/remove` now require cluster secret validation
- Peer removal additionally validates that the request originates from a WireGuard subnet IP

**RQLite Authentication (Step 1.7)**

- RQLite runs with the `-auth` flag pointing to a credentials file
- All RQLite HTTP requests include `Authorization: Basic <base64>` headers
- Credentials are generated at cluster genesis and distributed to joining nodes via the join response
- Both the central RQLite client wrapper and the standalone CoreDNS RQLite client send auth

**Olric Gossip Encryption (Step 1.8)**

- The Olric memberlist uses a 32-byte encryption key for all gossip traffic
- Key generated at genesis, distributed via the join response
- Prevents rogue nodes from joining the gossip ring and poisoning caches
- Note: encryption is all-or-nothing (a coordinated restart is required when enabling it)

**IPFS Cluster TrustedPeers (Step 1.9)**

- IPFS Cluster `TrustedPeers` is populated with actual cluster peer IDs (was `["*"]`)
- New peers are added to TrustedPeers on all existing nodes during join
- Prevents unauthorized peers from controlling IPFS pinning

**Vault V1 Auth Enforcement (Step 1.14)**

- V1 push/pull endpoints require a valid session token when vault-guardian is configured
- Previously, auth was optional for backward compatibility — any WireGuard peer could read or overwrite Shamir shares

### Token & Key Storage

**Refresh Token Hashing (Step 1.5)**

- Refresh tokens are stored as SHA-256 hashes in RQLite (never plaintext)
- On lookup: hash the incoming token, query by hash
- On revocation: hash before revoking (both single-token and by-subject)
- Existing tokens are invalidated on upgrade (users re-authenticate)

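The hash-on-write, hash-on-lookup pattern is simple to illustrate; a minimal sketch, not the production code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashToken returns the hex SHA-256 digest stored in place of the raw
// refresh token. Lookup and revocation hash the incoming token the same way,
// so the plaintext never needs to be stored.
func hashToken(token string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:])
}

func main() {
	stored := hashToken("rt_example_token") // what lands in RQLite
	// On lookup: hash the presented token and compare digests.
	fmt.Println(hashToken("rt_example_token") == stored) // true
	fmt.Println(hashToken("rt_wrong_token") == stored)   // false
}
```

A database leak then exposes only digests; an attacker cannot replay them as tokens.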
**API Key Hashing (Step 1.6)**

- API keys are stored as HMAC-SHA256 hashes using a server-side secret
- The HMAC secret is generated at cluster genesis and stored in `~/.orama/secrets/api-key-hmac-secret`
- On lookup: compute the HMAC, query by hash — fast enough for every request (unlike bcrypt)
- The in-memory cache uses the raw key as its cache key (never persisted)
- During a rolling upgrade: dual lookup (HMAC first, then raw as fallback) until all nodes are upgraded

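The keyed hash and the rolling-upgrade dual lookup can be sketched as follows; the `store` map is a stand-in for the RQLite table, and the function names are assumptions.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashAPIKey keys the digest with a server-side secret, so a leaked
// database alone is not enough to precompute valid lookups.
func hashAPIKey(secret []byte, key string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(key))
	return hex.EncodeToString(mac.Sum(nil))
}

// lookup models the rolling-upgrade dual lookup: try the HMAC form first,
// then fall back to the raw key until every node has been upgraded.
func lookup(store map[string]bool, secret []byte, key string) bool {
	if store[hashAPIKey(secret, key)] {
		return true
	}
	return store[key] // legacy plaintext row, removed after the upgrade
}

func main() {
	secret := []byte("api-key-hmac-secret")
	store := map[string]bool{
		hashAPIKey(secret, "ok_key_1"): true, // upgraded row
		"legacy_raw":                   true, // pre-upgrade row
	}
	fmt.Println(lookup(store, secret, "ok_key_1"))   // true (HMAC path)
	fmt.Println(lookup(store, secret, "legacy_raw")) // true (fallback path)
	fmt.Println(lookup(store, secret, "missing"))    // false
}
```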
**TURN Secret Encryption (Step 1.15)**

- TURN shared secrets are encrypted at rest in RQLite using AES-256-GCM
- The encryption key is derived via HKDF from the cluster secret with the purpose string `"turn-encryption"`

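The at-rest encryption can be sketched with the standard library. Note the key derivation here is a simplified SHA-256 stand-in: the real code uses HKDF with the `"turn-encryption"` purpose string, which this only approximates.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// deriveKey is a simplified stand-in for the HKDF derivation described
// above: 32 bytes bound to the cluster secret and a purpose string.
func deriveKey(clusterSecret []byte) []byte {
	sum := sha256.Sum256(append(clusterSecret, []byte("turn-encryption")...))
	return sum[:]
}

// seal encrypts a TURN secret with AES-256-GCM, prepending the nonce.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal, authenticating the ciphertext in the process.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := deriveKey([]byte("cluster-secret"))
	sealed, _ := seal(key, []byte("turn-shared-secret"))
	plain, err := open(key, sealed)
	fmt.Println(err == nil, string(plain)) // true turn-shared-secret
}
```

GCM authenticates as well as encrypts, so a tampered row fails to decrypt instead of yielding garbage.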
### TLS & Transport

**InsecureSkipVerify Fix (Step 1.10)**

- During node join, TLS verification uses TOFU (Trust On First Use)
- The invite token output includes the CA certificate fingerprint (SHA-256)
- The joining node verifies that the server certificate fingerprint matches before proceeding
- After join: the CA certificate is stored locally for future connections

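The fingerprint pinning step can be sketched as follows. In the real flow the DER bytes would come from the TLS handshake's peer certificate; the placeholder bytes and function names here are illustrative.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// fingerprint returns the SHA-256 digest of a certificate's DER bytes,
// the value printed alongside the invite token.
func fingerprint(certDER []byte) string {
	sum := sha256.Sum256(certDER)
	return hex.EncodeToString(sum[:])
}

// pinned reports whether the presented certificate matches the fingerprint
// carried with the invite token (trust on first use).
func pinned(expected string, certDER []byte) bool {
	return fingerprint(certDER) == expected
}

func main() {
	der := []byte{0x30, 0x82, 0x01, 0x0a} // placeholder DER bytes
	fp := fingerprint(der)                // shipped with the invite token
	fmt.Println(pinned(fp, der))                // true: proceed with join
	fmt.Println(pinned(fp, []byte{0xde, 0xad})) // false: abort the join
}
```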
**WebSocket Origin Validation (Step 1.4)**

- All WebSocket upgraders validate the `Origin` header against the node's configured domain
- Non-browser clients (no Origin header) are still allowed
- Prevents cross-site WebSocket hijacking attacks

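The policy above can be sketched as a small check function (an illustration of the rule, not the upgraders' actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// allowOrigin implements the policy: browsers must present an Origin whose
// host matches the node's domain; non-browser clients send no Origin header
// and are allowed through.
func allowOrigin(origin, nodeDomain string) bool {
	if origin == "" {
		return true // non-browser client
	}
	u, err := url.Parse(origin)
	if err != nil {
		return false
	}
	return u.Hostname() == nodeDomain
}

func main() {
	fmt.Println(allowOrigin("https://node1.example.com", "node1.example.com")) // true
	fmt.Println(allowOrigin("https://evil.example", "node1.example.com"))      // false
	fmt.Println(allowOrigin("", "node1.example.com"))                          // true
}
```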
### Process Isolation

**Dedicated User (Step 1.11)**

- All services run as the `orama` user (not root)
- Caddy and CoreDNS get `AmbientCapabilities=CAP_NET_BIND_SERVICE` for ports 80/443 and 53
- WireGuard stays as root (kernel netlink requires it)
- vault-guardian already had proper hardening

**systemd Hardening (Step 1.12)**

- All service units include:

  ```ini
  ProtectSystem=strict
  ProtectHome=yes
  NoNewPrivileges=yes
  PrivateDevices=yes
  ProtectKernelTunables=yes
  ProtectKernelModules=yes
  RestrictNamespaces=yes
  ReadWritePaths=/opt/orama/.orama
  ```

- Applied to both the template files (`pkg/environments/templates/`) and the hardcoded unit generators (`pkg/environments/production/services.go`)

### Supply Chain

**Binary Signing (Step 1.13)**

- Build archives include `manifest.sig` — a rootwallet EVM signature of the manifest hash
- During install, the signature is verified against the embedded Orama public key
- Unsigned or tampered archives are rejected

## Phase 2: OramaOS

These measures apply only to OramaOS nodes (mainnet, devnet, testnet).

### Immutable OS

- **Read-only rootfs** — SquashFS with dm-verity integrity verification
- **No shell** — `/bin/sh` symlinked to `/bin/false`; no bash/ash/ssh
- **No SSH** — OpenSSH not included in the image
- **Minimal packages** — only what's needed for systemd, cryptsetup, and the agent

### Full-Disk Encryption

- **LUKS2** with AES-XTS-Plain64 on the data partition
- **Shamir's Secret Sharing** over GF(256) — the LUKS key is split across peer vault-guardians
- **Adaptive threshold** — K = max(3, N/3), where N is the number of peers
- **Key zeroing** — the LUKS key is wiped from memory immediately after use
- **Malicious share detection** — fetch K+1 shares when possible and verify consistency

### Service Sandboxing

Each service runs in isolated Linux namespaces:

- **CLONE_NEWNS** — mount namespace (filesystem isolation)
- **CLONE_NEWUTS** — hostname namespace
- **Dedicated UID/GID** — each service has its own user
- **Seccomp filtering** — per-service syscall allowlist

Note: CLONE_NEWPID is intentionally omitted — it makes each service PID 1 in its namespace, which changes signal semantics (SIGTERM is ignored by default for PID 1).

### Signed Updates

- A/B partition scheme with systemd-boot and boot counting (`tries_left=3`)
- All updates signed with a rootwallet EVM signature (secp256k1 + keccak256)
- Signer address: `0xb5d8a496c8b2412990d7D467E17727fdF5954afC`
- P2P distribution over WireGuard between nodes
- Automatic rollback after 3 consecutive boot failures

### Zero Operator Access

- Operators cannot read data on the machine (LUKS encrypted, no shell)
- Management only through the Gateway API → agent over WireGuard
- All commands are logged and auditable
- No root access, no console access, no filesystem access

## Rollout Strategy

### Phase 1 Batches

```
Batch 1 (zero-risk, no restart):
- CIDR fix
- IPv6 disable
- Internal endpoint auth
- WebSocket origin check

Batch 2 (medium-risk, restart needed):
- Hash refresh tokens
- Hash API keys
- Binary signing
- Vault V1 auth enforcement
- TURN secret encryption

Batch 3 (high-risk, coordinated rollout):
- RQLite auth (followers first, leader last)
- Olric encryption (simultaneous restart)
- IPFS Cluster TrustedPeers

Batch 4 (infrastructure changes):
- InsecureSkipVerify fix
- Dedicated user
- systemd hardening
```

### Phase 2

1. Build and test the OramaOS image in QEMU
2. Deploy to the sandbox cluster alongside Ubuntu nodes
3. Verify interop and stability
4. Gradual migration: testnet → devnet → mainnet (one node at a time, maintaining Raft quorum)

## Verification

All changes are verified on the sandbox cluster before production deployment:

- `make test` — all unit tests pass
- `orama monitor report --env sandbox` — full cluster health
- Manual endpoint testing (e.g., curl without auth → 401)
- Security-specific checks (IPv6 listeners, RQLite auth, binary signatures)

---

`go.mod` (reconstructed from the compare view): the pion WebRTC packages are promoted from indirect to direct dependencies.

```diff
@@ -20,6 +20,10 @@ require (
 	github.com/miekg/dns v1.1.70
 	github.com/multiformats/go-multiaddr v0.16.0
 	github.com/olric-data/olric v0.7.0
+	github.com/pion/interceptor v0.1.40
+	github.com/pion/rtcp v1.2.15
+	github.com/pion/turn/v4 v4.0.2
+	github.com/pion/webrtc/v4 v4.1.2
 	github.com/rqlite/gorqlite v0.0.0-20250609141355-ac86a4a1c9a8
 	github.com/spf13/cobra v1.10.2
 	github.com/stretchr/testify v1.11.1
@@ -123,11 +127,9 @@ require (
 	github.com/pion/dtls/v2 v2.2.12 // indirect
 	github.com/pion/dtls/v3 v3.0.6 // indirect
 	github.com/pion/ice/v4 v4.0.10 // indirect
-	github.com/pion/interceptor v0.1.40 // indirect
 	github.com/pion/logging v0.2.3 // indirect
 	github.com/pion/mdns/v2 v2.0.7 // indirect
 	github.com/pion/randutil v0.1.0 // indirect
-	github.com/pion/rtcp v1.2.15 // indirect
 	github.com/pion/rtp v1.8.19 // indirect
 	github.com/pion/sctp v1.8.39 // indirect
 	github.com/pion/sdp/v3 v3.0.13 // indirect
@@ -136,8 +138,6 @@ require (
 	github.com/pion/stun/v3 v3.0.0 // indirect
 	github.com/pion/transport/v2 v2.2.10 // indirect
 	github.com/pion/transport/v3 v3.0.7 // indirect
-	github.com/pion/turn/v4 v4.0.2 // indirect
-	github.com/pion/webrtc/v4 v4.1.2 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/prometheus/client_golang v1.23.0 // indirect
```
Some files were not shown because too many files have changed in this diff.