Mirror of https://github.com/DeBrosOfficial/orama.git (synced 2026-03-27 09:24:12 +00:00)
Compare commits
No commits in common. "main" and "v0.112.6-nightly" have entirely different histories.
@@ -8,7 +8,7 @@ NOCOLOR='\033[0m'

 # Run tests before push
 echo -e "\n${CYAN}Running tests...${NOCOLOR}"
-cd "$(git rev-parse --show-toplevel)/core" && go test ./...
+go test ./... # Runs all tests in your repo
 status=$?
 if [ $status -ne 0 ]; then
   echo -e "${RED}Push aborted: some tests failed.${NOCOLOR}"
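The hook hunk above aborts the push when the test run exits non-zero. A minimal runnable sketch of that abort logic, with `run_tests` standing in for `go test ./...` and the color codes omitted:

```shell
# run_tests is a stand-in for `go test ./...` in the real hook.
run_tests() { true; }

run_tests
status=$?
if [ $status -ne 0 ]; then
  echo "Push aborted: some tests failed."
  exit 1
fi
echo "Tests passed; push continues."
```

Because git pre-push hooks abort on any non-zero exit, checking `$?` immediately after the test command is all the gating the hook needs.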
91  .github/ISSUE_TEMPLATE/bug_report.yml  (vendored)
@@ -1,91 +0,0 @@
-name: Bug Report
-description: Report a bug in Orama Network
-labels: ["bug"]
-body:
-  - type: markdown
-    attributes:
-      value: |
-        Thanks for reporting a bug! Please fill out the sections below.
-
-        **Security issues:** If this is a security vulnerability, do NOT open an issue. Email security@orama.io instead.
-
-  - type: input
-    id: version
-    attributes:
-      label: Orama version
-      description: "Run `orama version` to find this"
-      placeholder: "v0.18.0-beta"
-    validations:
-      required: true
-
-  - type: dropdown
-    id: component
-    attributes:
-      label: Component
-      options:
-        - Gateway / API
-        - CLI (orama command)
-        - WireGuard / Networking
-        - RQLite / Storage
-        - Olric / Caching
-        - IPFS / Pinning
-        - CoreDNS
-        - OramaOS
-        - Other
-    validations:
-      required: true
-
-  - type: textarea
-    id: description
-    attributes:
-      label: Description
-      description: A clear description of the bug
-    validations:
-      required: true
-
-  - type: textarea
-    id: steps
-    attributes:
-      label: Steps to reproduce
-      description: Minimal steps to reproduce the behavior
-      placeholder: |
-        1. Run `orama ...`
-        2. See error
-    validations:
-      required: true
-
-  - type: textarea
-    id: expected
-    attributes:
-      label: Expected behavior
-      description: What you expected to happen
-    validations:
-      required: true
-
-  - type: textarea
-    id: actual
-    attributes:
-      label: Actual behavior
-      description: What actually happened (include error messages and logs if any)
-    validations:
-      required: true
-
-  - type: textarea
-    id: environment
-    attributes:
-      label: Environment
-      description: OS, Go version, deployment environment, etc.
-      placeholder: |
-        - OS: Ubuntu 22.04
-        - Go: 1.23
-        - Environment: sandbox
-    validations:
-      required: false
-
-  - type: textarea
-    id: context
-    attributes:
-      label: Additional context
-      description: Logs, screenshots, monitor reports, or anything else that might help
-    validations:
-      required: false
49  .github/ISSUE_TEMPLATE/feature_request.yml  (vendored)
@@ -1,49 +0,0 @@
-name: Feature Request
-description: Suggest a new feature or improvement
-labels: ["enhancement"]
-body:
-  - type: markdown
-    attributes:
-      value: |
-        Thanks for the suggestion! Please describe what you'd like to see.
-
-  - type: dropdown
-    id: component
-    attributes:
-      label: Component
-      options:
-        - Gateway / API
-        - CLI (orama command)
-        - WireGuard / Networking
-        - RQLite / Storage
-        - Olric / Caching
-        - IPFS / Pinning
-        - CoreDNS
-        - OramaOS
-        - Other
-    validations:
-      required: true
-
-  - type: textarea
-    id: problem
-    attributes:
-      label: Problem
-      description: What problem does this solve? Why do you need it?
-    validations:
-      required: true
-
-  - type: textarea
-    id: solution
-    attributes:
-      label: Proposed solution
-      description: How do you think this should work?
-    validations:
-      required: true
-
-  - type: textarea
-    id: alternatives
-    attributes:
-      label: Alternatives considered
-      description: Any workarounds or alternative approaches you've thought of
-    validations:
-      required: false
31  .github/PULL_REQUEST_TEMPLATE.md  (vendored)
@@ -1,31 +0,0 @@
-## Summary
-
-<!-- What does this PR do? Keep it to 1-3 bullet points. -->
-
-## Motivation
-
-<!-- Why is this change needed? Link to an issue if applicable. -->
-
-## Test plan
-
-<!-- How did you verify this works? -->
-
-- [ ] `make test` passes
-- [ ] Tested on sandbox/staging environment
-
-## Distributed system impact
-
-<!-- Does this change affect any of the following? If yes, explain. -->
-
-- [ ] Raft quorum / RQLite
-- [ ] WireGuard mesh / networking
-- [ ] Olric gossip / caching
-- [ ] Service startup ordering
-- [ ] Rolling upgrade compatibility
-
-## Checklist
-
-- [ ] Tests added for new functionality or bug fix
-- [ ] No debug code (`fmt.Println`, `log.Println`) left behind
-- [ ] Docs updated (if user-facing behavior changed)
-- [ ] Errors wrapped with context (`fmt.Errorf("...: %w", err)`)
80  .github/workflows/publish-sdk.yml  (vendored)
@@ -1,80 +0,0 @@
-name: Publish SDK to npm
-
-on:
-  workflow_dispatch:
-    inputs:
-      version:
-        description: "Version to publish (e.g., 1.0.0). Leave empty to use package.json version."
-        required: false
-      dry-run:
-        description: "Dry run (don't actually publish)"
-        type: boolean
-        default: false
-
-permissions:
-  contents: write
-
-jobs:
-  publish:
-    name: Build & Publish @debros/orama
-    runs-on: ubuntu-latest
-    defaults:
-      run:
-        working-directory: sdk
-
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v4
-
-      - name: Set up Node.js
-        uses: actions/setup-node@v4
-        with:
-          node-version: "20"
-          registry-url: "https://registry.npmjs.org"
-
-      - name: Install pnpm
-        uses: pnpm/action-setup@v4
-        with:
-          version: 9
-
-      - name: Install dependencies
-        run: pnpm install --frozen-lockfile
-
-      - name: Bump version
-        if: inputs.version != ''
-        run: npm version ${{ inputs.version }} --no-git-tag-version
-
-      - name: Typecheck
-        run: pnpm typecheck
-
-      - name: Build
-        run: pnpm build
-
-      - name: Run unit tests
-        run: pnpm vitest run tests/unit
-
-      - name: Publish (dry run)
-        if: inputs.dry-run == true
-        run: npm publish --access public --dry-run
-        env:
-          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
-
-      - name: Publish
-        if: inputs.dry-run == false
-        run: npm publish --access public
-        env:
-          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
-
-      - name: Get published version
-        if: inputs.dry-run == false
-        id: version
-        run: echo "version=$(node -p "require('./package.json').version")" >> $GITHUB_OUTPUT
-
-      - name: Create git tag
-        if: inputs.dry-run == false
-        working-directory: .
-        run: |
-          git config user.name "github-actions[bot]"
-          git config user.email "github-actions[bot]@users.noreply.github.com"
-          git tag "sdk/v${{ steps.version.outputs.version }}"
-          git push origin "sdk/v${{ steps.version.outputs.version }}"
6  .github/workflows/release-apt.yml  (vendored)
@@ -28,8 +28,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v5
         with:
-          go-version: "1.24"
-          cache-dependency-path: core/go.sum
+          go-version: "1.23"

       - name: Get version
         id: version
@@ -47,7 +46,6 @@ jobs:
         uses: docker/setup-qemu-action@v3

       - name: Build binary
-        working-directory: core
         env:
           GOARCH: ${{ matrix.arch }}
           CGO_ENABLED: 0
@@ -73,7 +71,7 @@ jobs:
           mkdir -p ${PKG_NAME}/usr/local/bin

           # Copy binaries
-          cp core/build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
+          cp build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
           chmod 755 ${PKG_NAME}/usr/local/bin/*

           # Create control file
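The last hunk above stages binaries into a Debian package tree before the control file is written. A runnable sketch of that staging step, with the `PKG_NAME` value and the dummy binary made up for illustration:

```shell
# Stage a .deb layout like the workflow does; PKG_NAME value is illustrative.
stage=$(mktemp -d)
PKG_NAME="$stage/orama_0.112.6_amd64"
mkdir -p "$PKG_NAME/usr/local/bin"

# Stand-in for: cp build/usr/local/bin/* ${PKG_NAME}/usr/local/bin/
printf '#!/bin/sh\necho orama\n' > "$PKG_NAME/usr/local/bin/orama"
chmod 755 "$PKG_NAME"/usr/local/bin/*

ls "$PKG_NAME/usr/local/bin"
```

The diff's change from `core/build/...` to `build/...` only moves where the staged binaries are copied from; the package tree itself is unchanged.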
4  .github/workflows/release.yaml  (vendored)
@@ -23,8 +23,8 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.24'
-          cache-dependency-path: core/go.sum
+          go-version: '1.21'
+          cache: true

       - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v5
144  .gitignore  (vendored)
@@ -1,50 +1,4 @@
-# === Global ===
+# Binaries
-.DS_Store
-.DS_Store?
-._*
-.Spotlight-V100
-.Trashes
-ehthumbs.db
-Thumbs.db
-*.swp
-*.swo
-*~
-
-# IDE
-.vscode/
-.idea/
-.cursor/
-
-# Environment & credentials
-.env
-.env.*
-!.env.example
-.mcp.json
-.claude/
-.codex/
-
-# === Core (Go) ===
-core/phantom-auth/
-core/bin/
-core/bin-linux/
-core/dist/
-core/orama-cli-linux
-core/keys_backup/
-core/.gocache/
-core/configs/
-core/data/*
-core/tmp/
-core/temp/
-core/results/
-core/rnd/
-core/vps.txt
-core/coverage.txt
-core/coverage.html
-core/profile.out
-core/e2e/config.yaml
-core/scripts/remote-nodes.conf
-
-# Go build artifacts
 *.exe
 *.exe~
 *.dll
@@ -52,39 +6,91 @@ core/scripts/remote-nodes.conf
 *.dylib
 *.test
 *.out
+bin/
+bin-linux/
+dist/
+orama-cli-linux

+# Build artifacts
 *.deb
 *.rpm
 *.tar.gz
 *.zip

+# Go
 go.work
+.gocache/

+# Dependencies
+# vendor/

+# Environment & credentials
+.env
+.env.*
+.env.local
+.env.*.local
+scripts/remote-nodes.conf
+keys_backup/
+e2e/config.yaml

+# Config (generated/local)
+configs/

+# Data & databases
+data/*
+*.db

+# IDE & editor files
+.vscode/
+.idea/
+.cursor/
+.claude/
+.mcp.json
+*.swp
+*.swo
+*~

+# OS generated files
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+Thumbs.db

 # Logs
 *.log

-# Databases
-*.db
+# Temporary files
+tmp/
+temp/
+*.tmp

-# === Website ===
-website/node_modules/
-website/dist/
-website/invest-api/invest-api
-website/invest-api/*.db
-website/invest-api/*.db-shm
-website/invest-api/*.db-wal
+# Coverage & profiling
+coverage.txt
+coverage.html
+profile.out

-# === SDK (TypeScript) ===
-sdk/node_modules/
-sdk/dist/
-sdk/coverage/
-
-# === Vault (Zig) ===
-vault/.zig-cache/
-vault/zig-out/
-
-# === OS ===
-os/output/
-
-# === Local development ===
+# Local development
 .dev/
 .local/
 local/
+.codex/
+results/
+rnd/
+vps.txt

+# Project subdirectories (managed separately)
+website/
+phantom-auth/

+# One-off scripts & tools
+redeploy-6.sh
+terms-agreement
+./bootstrap
+./node
+./cli
+./inspector
+docs/later_todos/
+sim/
@@ -9,13 +9,11 @@ env:

 before:
   hooks:
-    - cmd: go mod tidy
-      dir: core
+    - go mod tidy

 builds:
   # orama CLI binary
   - id: orama
-    dir: core
     main: ./cmd/cli
     binary: orama
     goos:
@@ -33,7 +31,6 @@ builds:

   # orama-node binary (Linux only for apt)
   - id: orama-node
-    dir: core
     main: ./cmd/node
     binary: orama-node
     goos:
@@ -87,7 +84,7 @@ nfpms:
     section: utils
     priority: optional
     contents:
-      - src: ./core/README.md
+      - src: ./README.md
        dst: /usr/share/doc/orama/README.md
     deb:
       lintian_overrides:
@@ -109,7 +106,7 @@ nfpms:
     section: net
     priority: optional
     contents:
-      - src: ./core/README.md
+      - src: ./README.md
        dst: /usr/share/doc/orama-node/README.md
     deb:
       lintian_overrides:
@@ -1,78 +1,47 @@
-# Contributing to Orama Network
+# Contributing to DeBros Network

-Thanks for helping improve the network! This monorepo contains multiple projects — pick the one relevant to your contribution.
+Thanks for helping improve the network! This guide covers setup, local dev, tests, and PR guidelines.

-## Repository Structure
+## Requirements

-| Package | Language | Build |
-|---------|----------|-------|
-| `core/` | Go 1.24+ | `make core-build` |
-| `website/` | TypeScript (pnpm) | `make website-build` |
-| `vault/` | Zig 0.14+ | `make vault-build` |
-| `os/` | Go + Buildroot | `make os-build` |
+- Go 1.22+ (1.23 recommended)
+- RQLite (optional for local runs; the Makefile starts nodes with embedded setup)
+- Make (optional)

 ## Setup

 ```bash
 git clone https://github.com/DeBrosOfficial/network.git
 cd network
-```
-
-### Core (Go)
-
-```bash
-cd core
 make deps
-make build
-make test
 ```

-### Website
+## Build, Test, Lint

+- Build: `make build`
+- Test: `make test`
+- Format/Vet: `make fmt vet` (or `make lint`)

+````

+Useful CLI commands:

 ```bash
-cd website
-pnpm install
-pnpm dev
-```
+./bin/orama health
+./bin/orama peers
+./bin/orama status
+````

-### Vault (Zig)
+## Versioning

-```bash
-cd vault
-zig build
-zig build test
-```
+- The CLI reports its version via `orama version`.
+- Releases are tagged (e.g., `v0.18.0-beta`) and published via GoReleaser.

 ## Pull Requests

-1. Fork and create a topic branch from `main`.
-2. Ensure `make test` passes for affected packages.
-3. Include tests for new functionality or bug fixes.
-4. Keep PRs focused — one concern per PR.
-5. Write a clear description: motivation, approach, and how you tested it.
-6. Update docs if you're changing user-facing behavior.
+1. Fork and create a topic branch.
+2. Ensure `make build test` passes; include tests for new functionality.
+3. Keep PRs focused and well-described (motivation, approach, testing).
+4. Update README/docs for behavior changes.

-## Code Style
-
-### Go (core/, os/)
-
-- Follow standard Go conventions
-- Run `make lint` before submitting
-- Wrap errors with context: `fmt.Errorf("failed to X: %w", err)`
-- No magic values — use named constants
-
-### TypeScript (website/)
-
-- TypeScript strict mode
-- Follow existing patterns in the codebase
-
-### Zig (vault/)
-
-- Follow standard Zig conventions
-- Run `zig build test` before submitting
-
-## Security
-
-If you find a security vulnerability, **do not open a public issue**. Email security@debros.io instead.

 Thank you for contributing!
214  Makefile
@@ -1,66 +1,186 @@
-# Orama Monorepo
-# Delegates to sub-project Makefiles
+TEST?=./...

-.PHONY: help build test clean
+.PHONY: test
+test:
+	@echo Running tests...
+	go test -v $(TEST)

-# === Core (Go network) ===
-.PHONY: core core-build core-test core-clean core-lint
-core: core-build
+# Gateway-focused E2E tests assume gateway and nodes are already running
+# Auto-discovers configuration from ~/.orama and queries database for API key
+# No environment variables required
+.PHONY: test-e2e test-e2e-deployments test-e2e-fullstack test-e2e-https test-e2e-quick test-e2e-prod test-e2e-shared test-e2e-cluster test-e2e-integration test-e2e-production

-core-build:
-	$(MAKE) -C core build
+# Production E2E tests - includes production-only tests
+test-e2e-prod:
+	@if [ -z "$$ORAMA_GATEWAY_URL" ]; then \
+		echo "❌ ORAMA_GATEWAY_URL not set"; \
+		echo "Usage: ORAMA_GATEWAY_URL=https://dbrs.space make test-e2e-prod"; \
+		exit 1; \
+	fi
+	@echo "Running E2E tests (including production-only) against $$ORAMA_GATEWAY_URL..."
+	go test -v -tags "e2e production" -timeout 30m ./e2e/...

-core-test:
-	$(MAKE) -C core test
+# Generic e2e target
+test-e2e:
+	@echo "Running comprehensive E2E tests..."
+	@echo "Auto-discovering configuration from ~/.orama..."
+	go test -v -tags e2e -timeout 30m ./e2e/...

-core-lint:
-	$(MAKE) -C core lint
+test-e2e-deployments:
+	@echo "Running deployment E2E tests..."
+	go test -v -tags e2e -timeout 15m ./e2e/deployments/...

-core-clean:
-	$(MAKE) -C core clean
+test-e2e-fullstack:
+	@echo "Running fullstack E2E tests..."
+	go test -v -tags e2e -timeout 20m -run "TestFullStack" ./e2e/...

-# === Website ===
-.PHONY: website website-dev website-build
-website-dev:
-	cd website && pnpm dev
+test-e2e-https:
+	@echo "Running HTTPS/external access E2E tests..."
+	go test -v -tags e2e -timeout 10m -run "TestHTTPS" ./e2e/...

-website-build:
-	cd website && pnpm build
+test-e2e-shared:
+	@echo "Running shared E2E tests..."
+	go test -v -tags e2e -timeout 10m ./e2e/shared/...

-# === SDK (TypeScript) ===
-.PHONY: sdk sdk-build sdk-test
-sdk: sdk-build
+test-e2e-cluster:
+	@echo "Running cluster E2E tests..."
+	go test -v -tags e2e -timeout 15m ./e2e/cluster/...

-sdk-build:
-	cd sdk && pnpm install && pnpm build
+test-e2e-integration:
+	@echo "Running integration E2E tests..."
+	go test -v -tags e2e -timeout 20m ./e2e/integration/...

-sdk-test:
-	cd sdk && pnpm test
+test-e2e-production:
+	@echo "Running production-only E2E tests..."
+	go test -v -tags "e2e production" -timeout 15m ./e2e/production/...

-# === Vault (Zig) ===
-.PHONY: vault vault-build vault-test
-vault-build:
-	cd vault && zig build
+test-e2e-quick:
+	@echo "Running quick E2E smoke tests..."
+	go test -v -tags e2e -timeout 5m -run "TestStatic|TestHealth" ./e2e/...

-vault-test:
-	cd vault && zig build test
+# Network - Distributed P2P Database System
+# Makefile for development and build tasks

-# === OS ===
-.PHONY: os os-build
-os-build:
-	$(MAKE) -C os
+.PHONY: build clean test deps tidy fmt vet lint install-hooks upload-devnet upload-testnet redeploy-devnet redeploy-testnet release health

-# === Aggregate ===
-build: core-build
-test: core-test
-clean: core-clean
+VERSION := 0.112.6
+COMMIT ?= $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
+DATE ?= $(shell date -u +%Y-%m-%dT%H:%M:%SZ)
+LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.commit=$(COMMIT)' -X 'main.date=$(DATE)'
+LDFLAGS_LINUX := -s -w $(LDFLAGS)

+# Build targets
+build: deps
+	@echo "Building network executables (version=$(VERSION))..."
+	@mkdir -p bin
+	go build -ldflags "$(LDFLAGS)" -o bin/identity ./cmd/identity
+	go build -ldflags "$(LDFLAGS)" -o bin/orama-node ./cmd/node
+	go build -ldflags "$(LDFLAGS)" -o bin/orama ./cmd/cli/
+	# Inject gateway build metadata via pkg path variables
+	go build -ldflags "$(LDFLAGS) -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildVersion=$(VERSION)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildCommit=$(COMMIT)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildTime=$(DATE)'" -o bin/gateway ./cmd/gateway
+	go build -ldflags "$(LDFLAGS)" -o bin/sfu ./cmd/sfu
+	go build -ldflags "$(LDFLAGS)" -o bin/turn ./cmd/turn
+	@echo "Build complete! Run ./bin/orama version"

+# Cross-compile CLI for Linux (only binary needed locally; VPS builds everything else from source)
+build-linux: deps
+	@echo "Cross-compiling CLI for linux/amd64 (version=$(VERSION))..."
+	@mkdir -p bin-linux
+	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS_LINUX)" -trimpath -o bin-linux/orama ./cmd/cli/
+	@echo "✓ CLI built at bin-linux/orama"
+	@echo ""
+	@echo "Next steps:"
+	@echo "  ./scripts/generate-source-archive.sh"
+	@echo "  ./bin/orama install --vps-ip <ip> --nameserver --domain ..."

+# Install git hooks
+install-hooks:
+	@echo "Installing git hooks..."
+	@bash scripts/install-hooks.sh

+# Clean build artifacts
+clean:
+	@echo "Cleaning build artifacts..."
+	rm -rf bin/
+	rm -rf data/
+	@echo "Clean complete!"

+# Upload source to devnet using fanout (upload to 1 node, parallel distribute to rest)
+upload-devnet:
+	@bash scripts/upload-source-fanout.sh --env devnet

+# Upload source to testnet using fanout
+upload-testnet:
+	@bash scripts/upload-source-fanout.sh --env testnet

+# Deploy to devnet (build + rolling upgrade all nodes)
+redeploy-devnet:
+	@bash scripts/redeploy.sh --devnet

+# Deploy to devnet without rebuilding
+redeploy-devnet-quick:
+	@bash scripts/redeploy.sh --devnet --no-build

+# Deploy to testnet (build + rolling upgrade all nodes)
+redeploy-testnet:
+	@bash scripts/redeploy.sh --testnet

+# Deploy to testnet without rebuilding
+redeploy-testnet-quick:
+	@bash scripts/redeploy.sh --testnet --no-build

+# Interactive release workflow (tag + push)
+release:
+	@bash scripts/release.sh

+# Check health of all nodes in an environment
+# Usage: make health ENV=devnet
+health:
+	@if [ -z "$(ENV)" ]; then \
+		echo "Usage: make health ENV=devnet|testnet"; \
+		exit 1; \
+	fi
+	@while IFS='|' read -r env host pass role key; do \
+		[ -z "$$env" ] && continue; \
+		case "$$env" in \#*) continue;; esac; \
+		env="$$(echo "$$env" | xargs)"; \
+		[ "$$env" != "$(ENV)" ] && continue; \
+		role="$$(echo "$$role" | xargs)"; \
+		bash scripts/check-node-health.sh "$$host" "$$pass" "$$host ($$role)"; \
+	done < scripts/remote-nodes.conf

+# Help
 help:
-	@echo "Orama Monorepo"
+	@echo "Available targets:"
+	@echo "  build  - Build all executables"
+	@echo "  clean  - Clean build artifacts"
+	@echo "  test   - Run unit tests"
 	@echo ""
-	@echo "  Core (Go):   make core-build | core-test | core-lint | core-clean"
-	@echo "  Website:     make website-dev | website-build"
-	@echo "  Vault (Zig): make vault-build | vault-test"
-	@echo "  OS:          make os-build"
+	@echo "E2E Testing:"
+	@echo "  make test-e2e-prod         - Run all E2E tests incl. production-only (needs ORAMA_GATEWAY_URL)"
+	@echo "  make test-e2e-shared       - Run shared E2E tests (cache, storage, pubsub, auth)"
+	@echo "  make test-e2e-cluster      - Run cluster E2E tests (libp2p, olric, rqlite, namespace)"
+	@echo "  make test-e2e-integration  - Run integration E2E tests (fullstack, persistence, concurrency)"
+	@echo "  make test-e2e-deployments  - Run deployment E2E tests"
+	@echo "  make test-e2e-production   - Run production-only E2E tests (DNS, HTTPS, cross-node)"
+	@echo "  make test-e2e-quick        - Quick smoke tests (static deploys, health checks)"
+	@echo "  make test-e2e              - Generic E2E tests (auto-discovers config)"
 	@echo ""
-	@echo "  Aggregate:   make build | test | clean (delegates to core)"
+	@echo "  Example:"
+	@echo "    ORAMA_GATEWAY_URL=https://orama-devnet.network make test-e2e-prod"
+	@echo ""
+	@echo "Deployment:"
+	@echo "  make redeploy-devnet       - Build + rolling deploy to all devnet nodes"
+	@echo "  make redeploy-devnet-quick - Deploy to devnet without rebuilding"
+	@echo "  make redeploy-testnet      - Build + rolling deploy to all testnet nodes"
+	@echo "  make redeploy-testnet-quick- Deploy to testnet without rebuilding"
+	@echo "  make health ENV=devnet     - Check health of all nodes in an environment"
+	@echo "  make release               - Interactive release workflow (tag + push)"
+	@echo ""
+	@echo "Maintenance:"
+	@echo "  deps  - Download dependencies"
+	@echo "  tidy  - Tidy dependencies"
+	@echo "  fmt   - Format code"
+	@echo "  vet   - Vet code"
+	@echo "  lint  - Lint code (fmt + vet)"
+	@echo "  help  - Show this help"
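The `health` target added in the Makefile hunk above loops over `scripts/remote-nodes.conf`. A runnable sketch of that parsing loop; the `env|host|password|role|api-key` line format is inferred from the recipe, and all values here are made up:

```shell
# Build a throwaway conf file in the inferred pipe-separated format.
conf=$(mktemp)
printf '%s\n' \
  '# env|host|password|role|api-key' \
  'devnet | 10.0.0.1 | pw1 | bootstrap | key1' \
  'testnet| 10.0.0.2 | pw2 | node      | key2' > "$conf"

ENV=devnet
checked=""
while IFS='|' read -r env host pass role key; do
  [ -z "$env" ] && continue               # skip blank lines
  case "$env" in '#'*) continue ;; esac   # skip comment lines
  env="$(echo "$env" | xargs)"            # trim whitespace
  [ "$env" != "$ENV" ] && continue        # keep only the requested environment
  host="$(echo "$host" | xargs)"
  role="$(echo "$role" | xargs)"
  # the real target calls: bash scripts/check-node-health.sh "$host" "$pass" "$host ($role)"
  checked="$checked$host ($role)"
done < "$conf"
echo "$checked"
```

With `ENV=devnet` only the devnet line survives the filters, so the loop would invoke the health-check script once for `10.0.0.1 (bootstrap)`.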
483
README.md
483
README.md
@ -1,50 +1,463 @@
|
|||||||
# Orama Network
|
# Orama Network - Distributed P2P Platform
|
||||||
|
|
||||||
A decentralized infrastructure platform combining distributed SQL, IPFS storage, caching, serverless WASM execution, and privacy relay — all managed through a unified API gateway.
|
A high-performance API Gateway and distributed platform built in Go. Provides a unified HTTP/HTTPS API for distributed SQL (RQLite), distributed caching (Olric), decentralized storage (IPFS), pub/sub messaging, and serverless WebAssembly execution.
|
||||||
|
|
||||||
## Packages
|
**Architecture:** Modular Gateway / Edge Proxy following SOLID principles
|
||||||
|
|
||||||
| Package | Language | Description |
|
## Features
|
||||||
|---------|----------|-------------|
|
|
||||||
| [core/](core/) | Go | API gateway, distributed node, CLI, and client SDK |
|
- **🔐 Authentication** - Wallet signatures, API keys, JWT tokens
|
||||||
| [sdk/](sdk/) | TypeScript | `@debros/orama` — JavaScript/TypeScript SDK ([npm](https://www.npmjs.com/package/@debros/orama)) |
|
- **💾 Storage** - IPFS-based decentralized file storage with encryption
|
||||||
| [website/](website/) | TypeScript | Marketing website and invest portal |
|
- **⚡ Cache** - Distributed cache with Olric (in-memory key-value)
|
||||||
| [vault/](vault/) | Zig | Distributed secrets vault (Shamir's Secret Sharing) |
|
- **🗄️ Database** - RQLite distributed SQL with Raft consensus + Per-namespace SQLite databases
|
||||||
| [os/](os/) | Go + Buildroot | OramaOS — hardened minimal Linux for network nodes |
|
- **📡 Pub/Sub** - Real-time messaging via LibP2P and WebSocket
|
||||||
|
- **⚙️ Serverless** - WebAssembly function execution with host functions
|
||||||
|
- **🌐 HTTP Gateway** - Unified REST API with automatic HTTPS (Let's Encrypt)
|
||||||
|
- **📦 Client SDK** - Type-safe Go SDK for all services
|
||||||
|
- **🚀 App Deployments** - Deploy React, Next.js, Go, Node.js apps with automatic domains
|
||||||
|
- **🗄️ SQLite Databases** - Per-namespace isolated databases with IPFS backups
|
||||||
|
|
||||||
|
## Application Deployments

Deploy full-stack applications with automatic domain assignment and namespace isolation.

### Deploy a React App

```bash
# Build your app
cd my-react-app
npm run build

# Deploy to Orama Network
orama deploy static ./dist --name my-app

# Your app is now live at: https://my-app.orama.network
```

### Deploy Next.js with SSR

```bash
cd my-nextjs-app

# Ensure next.config.js has: output: 'standalone'
npm run build
orama deploy nextjs . --name my-nextjs --ssr

# Live at: https://my-nextjs.orama.network
```

### Deploy Go Backend

```bash
# Build for Linux (name the binary 'app' for auto-detection)
GOOS=linux GOARCH=amd64 go build -o app main.go

# Deploy (must implement a /health endpoint)
orama deploy go ./app --name my-api

# API live at: https://my-api.orama.network
```

### Create SQLite Database

```bash
# Create database
orama db create my-database

# Create schema
orama db query my-database "CREATE TABLE users (id INT, name TEXT)"

# Insert data
orama db query my-database "INSERT INTO users VALUES (1, 'Alice')"

# Query data
orama db query my-database "SELECT * FROM users"

# Backup to IPFS
orama db backup my-database
```

### Full-Stack Example

Deploy a complete app with a React frontend, Go backend, and SQLite database:

```bash
# 1. Create database
orama db create myapp-db
orama db query myapp-db "CREATE TABLE users (id INT PRIMARY KEY, name TEXT)"

# 2. Deploy Go backend (connects to database)
GOOS=linux GOARCH=amd64 go build -o api main.go
orama deploy go ./api --name myapp-api

# 3. Deploy React frontend (calls backend API)
cd frontend && npm run build
orama deploy static ./dist --name myapp

# Access:
# Frontend: https://myapp.orama.network
# Backend:  https://myapp-api.orama.network
```

**📖 Full Guide**: See [Deployment Guide](docs/DEPLOYMENT_GUIDE.md) for complete documentation, examples, and best practices.

## Quick Start

### Building

```bash
# Build the core network binaries
make core-build

# Run tests
make core-test

# Start website dev server
make website-dev

# Build vault
make vault-build
```

## CLI Commands

### Authentication

```bash
orama auth login   # Authenticate with wallet
orama auth status  # Check authentication
orama auth logout  # Clear credentials
```

### Application Deployments

```bash
# Deploy applications
orama deploy static <path> --name myapp        # React, Vue, static sites
orama deploy nextjs <path> --name myapp --ssr  # Next.js with SSR (requires output: 'standalone')
orama deploy go <path> --name myapp            # Go binaries (must have /health endpoint)
orama deploy nodejs <path> --name myapp        # Node.js apps (must have /health endpoint)

# Manage deployments
orama app list                          # List all deployments
orama app get <name>                    # Get deployment details
orama app logs <name> --follow          # View logs
orama app delete <name>                 # Delete deployment
orama app rollback <name> --version 1   # Rollback to version
```

### SQLite Databases

```bash
orama db create <name>                    # Create database
orama db query <name> "SELECT * FROM t"   # Execute SQL query
orama db list                             # List all databases
orama db backup <name>                    # Backup to IPFS
orama db backups <name>                   # List backups
```

### Environment Management

```bash
orama env list        # List available environments
orama env current     # Show active environment
orama env use <name>  # Switch environment
```

## Serverless Functions (WASM)

Orama supports high-performance serverless function execution using WebAssembly (WASM). Functions are isolated, secure, and can interact with network services like the distributed cache.

> **Full guide:** See [docs/SERVERLESS.md](docs/SERVERLESS.md) for the host functions API, secrets management, PubSub triggers, and examples.

### 1. Build Functions

Functions must be compiled to WASM. We recommend using [TinyGo](https://tinygo.org/).

```bash
# Build example functions to examples/functions/bin/
./examples/functions/build.sh
```

### 2. Deployment

Deploy your compiled `.wasm` file to the network via the Gateway.

```bash
# Deploy a function
curl -X POST https://your-node.example.com/v1/functions \
  -H "Authorization: Bearer <your_api_key>" \
  -F "name=hello-world" \
  -F "namespace=default" \
  -F "wasm=@./examples/functions/bin/hello.wasm"
```

### 3. Invocation

Trigger your function with a JSON payload. The function receives the payload via `stdin` and returns its response via `stdout`.

```bash
# Invoke via HTTP
curl -X POST https://your-node.example.com/v1/functions/hello-world/invoke \
  -H "Authorization: Bearer <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{"name": "Developer"}'
```

### 4. Management

```bash
# List all functions in a namespace
curl "https://your-node.example.com/v1/functions?namespace=default"

# Delete a function
curl -X DELETE "https://your-node.example.com/v1/functions/hello-world?namespace=default"
```

## Production Deployment

### Prerequisites

- Ubuntu 22.04+ or Debian 12+
- `amd64` or `arm64` architecture
- 4GB RAM, 50GB SSD, 2 CPU cores

### Required Ports

**External (must be open in firewall):**

- **80** - HTTP (ACME/Let's Encrypt certificate challenges)
- **443** - HTTPS (main gateway API endpoint)
- **4101** - IPFS Swarm (peer connections)
- **7001** - RQLite Raft (cluster consensus)

**Internal (bound to localhost, no firewall needed):**

- 4501 - IPFS API
- 5001 - RQLite HTTP API
- 6001 - Unified Gateway
- 8080 - IPFS Gateway
- 9050 - Anyone SOCKS5 proxy
- 9094 - IPFS Cluster API
- 3320/3322 - Olric Cache

**Anyone Relay Mode (optional, for earning rewards):**

- 9001 - Anyone ORPort (relay traffic, must be open externally)

### Anyone Network Integration

Orama Network integrates with the [Anyone Protocol](https://anyone.io) for anonymous routing. By default, nodes run as **clients** (consuming the network). Optionally, you can run as a **relay operator** to earn rewards.

**Client Mode (Default):**

- Routes traffic through the Anyone network for anonymity
- SOCKS5 proxy on localhost:9050
- No rewards; only consumes the network

**Relay Mode (Earn Rewards):**

- Provide bandwidth to the Anyone network
- Earn $ANYONE tokens as a relay operator
- Requires 100 $ANYONE tokens in your wallet
- Requires ORPort (9001) open to the internet

```bash
# Install as relay operator (earn rewards)
sudo orama node install --vps-ip <IP> --domain <domain> \
  --anyone-relay \
  --anyone-nickname "MyRelay" \
  --anyone-contact "operator@email.com" \
  --anyone-wallet "0x1234...abcd"

# With exit relay (legal implications apply)
sudo orama node install --vps-ip <IP> --domain <domain> \
  --anyone-relay \
  --anyone-exit \
  --anyone-nickname "MyExitRelay" \
  --anyone-contact "operator@email.com" \
  --anyone-wallet "0x1234...abcd"

# Migrate existing Anyone installation
sudo orama node install --vps-ip <IP> --domain <domain> \
  --anyone-relay \
  --anyone-migrate \
  --anyone-nickname "MyRelay" \
  --anyone-contact "operator@email.com" \
  --anyone-wallet "0x1234...abcd"
```

**Important:** After installation, register your relay at [dashboard.anyone.io](https://dashboard.anyone.io) to start earning rewards.

### Installation

**macOS (Homebrew):**

```bash
brew install DeBrosOfficial/tap/orama
```

**Linux (Debian/Ubuntu):**

```bash
# Download and install the latest .deb package
curl -sL https://github.com/DeBrosOfficial/network/releases/latest/download/orama_$(curl -s https://api.github.com/repos/DeBrosOfficial/network/releases/latest | grep tag_name | cut -d '"' -f 4 | tr -d 'v')_linux_amd64.deb -o orama.deb
sudo dpkg -i orama.deb
```

**From Source:**

```bash
go install github.com/DeBrosOfficial/network/cmd/cli@latest
```

**Setup (after installation):**

```bash
sudo orama node install --interactive
```

### Service Management

```bash
# Status
sudo orama node status

# Control services
sudo orama node start
sudo orama node stop
sudo orama node restart

# Diagnose issues
sudo orama node doctor

# View logs
orama node logs node --follow
orama node logs gateway --follow
orama node logs ipfs --follow
```

### Upgrade

```bash
# Upgrade to the latest version
sudo orama node upgrade --restart
```

## Configuration

All configuration lives in `~/.orama/`:

- `configs/node.yaml` - Node configuration
- `configs/gateway.yaml` - Gateway configuration
- `configs/olric.yaml` - Cache configuration
- `secrets/` - Keys and certificates
- `data/` - Service data directories

## Troubleshooting

### Services Not Starting

```bash
# Check status
systemctl status orama-node

# View logs
journalctl -u orama-node -f

# Check log files
tail -f /opt/orama/.orama/logs/node.log
```

### Port Conflicts

```bash
# Check what's using specific ports
sudo lsof -i :443   # HTTPS Gateway
sudo lsof -i :7001  # RQLite Raft
sudo lsof -i :6001  # Internal Gateway
```

### RQLite Cluster Issues

```bash
# Connect to the RQLite CLI
rqlite -H localhost -p 5001

# Check cluster status
.nodes
.status
.ready

# Check consistency level
.consistency
```

### Reset Installation

```bash
# Production reset (⚠️ DESTROYS DATA)
sudo orama node uninstall
sudo rm -rf /opt/orama/.orama
sudo orama node install
```

## HTTP Gateway API

### Main Gateway Endpoints

- `GET /health` - Health status
- `GET /v1/status` - Full status
- `GET /v1/version` - Version info
- `POST /v1/rqlite/exec` - Execute SQL
- `POST /v1/rqlite/query` - Query database
- `GET /v1/rqlite/schema` - Get schema
- `POST /v1/pubsub/publish` - Publish message
- `GET /v1/pubsub/topics` - List topics
- `GET /v1/pubsub/ws?topic=<name>` - WebSocket subscribe
- `POST /v1/functions` - Deploy function (multipart/form-data)
- `POST /v1/functions/{name}/invoke` - Invoke function
- `GET /v1/functions` - List functions
- `DELETE /v1/functions/{name}` - Delete function
- `GET /v1/functions/{name}/logs` - Get function logs

See `openapi/gateway.yaml` for the complete API specification.

## Documentation

| Document | Description |
|----------|-------------|
| [Architecture](core/docs/ARCHITECTURE.md) | System architecture and design patterns |
| [Deployment Guide](core/docs/DEPLOYMENT_GUIDE.md) | Deploy apps, databases, and domains |
| [Dev & Deploy](core/docs/DEV_DEPLOY.md) | Building, deploying to VPS, rolling upgrades |
| [Security](core/docs/SECURITY.md) | Security hardening and threat model |
| [Monitoring](core/docs/MONITORING.md) | Cluster health monitoring |
| [Client SDK](core/docs/CLIENT_SDK.md) | Go SDK documentation |
| [Serverless](core/docs/SERVERLESS.md) | WASM serverless functions |
| [Common Problems](core/docs/COMMON_PROBLEMS.md) | Troubleshooting known issues |

## Resources

- [RQLite Documentation](https://rqlite.io/docs/)
- [IPFS Documentation](https://docs.ipfs.tech/)
- [LibP2P Documentation](https://docs.libp2p.io/)
- [WebAssembly](https://webassembly.org/)
- [GitHub Repository](https://github.com/DeBrosOfficial/network)
- [Issue Tracker](https://github.com/DeBrosOfficial/network/issues)

## Project Structure

```
network/
├── cmd/                 # Binary entry points
│   ├── cli/             # CLI tool
│   ├── gateway/         # HTTP Gateway
│   └── node/            # P2P Node
├── pkg/                 # Core packages
│   ├── gateway/         # Gateway implementation
│   │   └── handlers/    # HTTP handlers by domain
│   ├── client/          # Go SDK
│   ├── serverless/      # WASM engine
│   ├── rqlite/          # Database ORM
│   ├── contracts/       # Interface definitions
│   ├── httputil/        # HTTP utilities
│   └── errors/          # Error handling
├── docs/                # Documentation
├── e2e/                 # End-to-end tests
└── examples/            # Example code
```

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for setup, development, and PR guidelines.

## License

[AGPL-3.0](LICENSE)

```diff
@@ -9,7 +9,6 @@ import (
 	// Command groups
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/app"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/authcmd"
-	"github.com/DeBrosOfficial/network/pkg/cli/cmd/buildcmd"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/dbcmd"
 	deploycmd "github.com/DeBrosOfficial/network/pkg/cli/cmd/deploy"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/envcmd"
@@ -18,7 +17,6 @@ import (
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/monitorcmd"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/namespacecmd"
 	"github.com/DeBrosOfficial/network/pkg/cli/cmd/node"
-	"github.com/DeBrosOfficial/network/pkg/cli/cmd/sandboxcmd"
 )
 
 // version metadata populated via -ldflags at build time
@@ -85,12 +83,6 @@ and interacting with the Orama distributed network.`,
 	// Serverless function commands
 	rootCmd.AddCommand(functioncmd.Cmd)
 
-	// Build command (cross-compile binary archive)
-	rootCmd.AddCommand(buildcmd.Cmd)
-
-	// Sandbox command (ephemeral Hetzner Cloud clusters)
-	rootCmd.AddCommand(sandboxcmd.Cmd)
-
 	return rootCmd
 }
 
```

```diff
@@ -1,8 +0,0 @@
-# OpenRouter API Key for changelog generation
-# Get your API key from https://openrouter.ai/keys
-OPENROUTER_API_KEY=your-api-key-here
-
-# ZeroSSL API Key for TLS certificates (alternative to Let's Encrypt)
-# Get your free API key from https://app.zerossl.com/developer
-# If not set, Caddy will use Let's Encrypt as the default CA
-ZEROSSL_API_KEY=
```

**core/Makefile** (deleted, 181 lines):

```makefile
TEST?=./...

.PHONY: test
test:
	@echo Running tests...
	go test -v $(TEST)

# Gateway-focused E2E tests assume gateway and nodes are already running
# Auto-discovers configuration from ~/.orama and queries database for API key
# No environment variables required
.PHONY: test-e2e test-e2e-deployments test-e2e-fullstack test-e2e-https test-e2e-quick test-e2e-prod test-e2e-shared test-e2e-cluster test-e2e-integration test-e2e-production

# Production E2E tests - includes production-only tests
test-e2e-prod:
	@if [ -z "$$ORAMA_GATEWAY_URL" ]; then \
		echo "❌ ORAMA_GATEWAY_URL not set"; \
		echo "Usage: ORAMA_GATEWAY_URL=https://dbrs.space make test-e2e-prod"; \
		exit 1; \
	fi
	@echo "Running E2E tests (including production-only) against $$ORAMA_GATEWAY_URL..."
	go test -v -tags "e2e production" -timeout 30m ./e2e/...

# Generic e2e target
test-e2e:
	@echo "Running comprehensive E2E tests..."
	@echo "Auto-discovering configuration from ~/.orama..."
	go test -v -tags e2e -timeout 30m ./e2e/...

test-e2e-deployments:
	@echo "Running deployment E2E tests..."
	go test -v -tags e2e -timeout 15m ./e2e/deployments/...

test-e2e-fullstack:
	@echo "Running fullstack E2E tests..."
	go test -v -tags e2e -timeout 20m -run "TestFullStack" ./e2e/...

test-e2e-https:
	@echo "Running HTTPS/external access E2E tests..."
	go test -v -tags e2e -timeout 10m -run "TestHTTPS" ./e2e/...

test-e2e-shared:
	@echo "Running shared E2E tests..."
	go test -v -tags e2e -timeout 10m ./e2e/shared/...

test-e2e-cluster:
	@echo "Running cluster E2E tests..."
	go test -v -tags e2e -timeout 15m ./e2e/cluster/...

test-e2e-integration:
	@echo "Running integration E2E tests..."
	go test -v -tags e2e -timeout 20m ./e2e/integration/...

test-e2e-production:
	@echo "Running production-only E2E tests..."
	go test -v -tags "e2e production" -timeout 15m ./e2e/production/...

test-e2e-quick:
	@echo "Running quick E2E smoke tests..."
	go test -v -tags e2e -timeout 5m -run "TestStatic|TestHealth" ./e2e/...

# Network - Distributed P2P Database System
# Makefile for development and build tasks

.PHONY: build clean test deps tidy fmt vet lint install-hooks push-devnet push-testnet rollout-devnet rollout-testnet release

VERSION := 0.120.0
COMMIT ?= $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
DATE ?= $(shell date -u +%Y-%m-%dT%H:%M:%SZ)
LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.commit=$(COMMIT)' -X 'main.date=$(DATE)'
LDFLAGS_LINUX := -s -w $(LDFLAGS)

# Build targets
build: deps
	@echo "Building network executables (version=$(VERSION))..."
	@mkdir -p bin
	go build -ldflags "$(LDFLAGS)" -o bin/identity ./cmd/identity
	go build -ldflags "$(LDFLAGS)" -o bin/orama-node ./cmd/node
	go build -ldflags "$(LDFLAGS)" -o bin/orama ./cmd/cli/
	# Inject gateway build metadata via pkg path variables
	go build -ldflags "$(LDFLAGS) -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildVersion=$(VERSION)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildCommit=$(COMMIT)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildTime=$(DATE)'" -o bin/gateway ./cmd/gateway
	go build -ldflags "$(LDFLAGS)" -o bin/sfu ./cmd/sfu
	go build -ldflags "$(LDFLAGS)" -o bin/turn ./cmd/turn
	@echo "Build complete! Run ./bin/orama version"

# Cross-compile CLI for Linux (only binary needed locally; VPS builds everything else from source)
build-linux: deps
	@echo "Cross-compiling CLI for linux/amd64 (version=$(VERSION))..."
	@mkdir -p bin-linux
	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS_LINUX)" -trimpath -o bin-linux/orama ./cmd/cli/
	@echo "✓ CLI built at bin-linux/orama"
	@echo ""
	@echo "Prefer 'make build-archive' for full pre-built binary archive."

# Build pre-compiled binary archive for deployment (all binaries + deps)
build-archive: deps
	@echo "Building binary archive (version=$(VERSION))..."
	go build -ldflags "$(LDFLAGS)" -o bin/orama ./cmd/cli/
	./bin/orama build --output /tmp/orama-$(VERSION)-linux-amd64.tar.gz

# Install git hooks
install-hooks:
	@echo "Installing git hooks..."
	@bash scripts/install-hooks.sh

# Install orama CLI to ~/.local/bin and configure PATH
install: build
	@bash scripts/install.sh

# Clean build artifacts
clean:
	@echo "Cleaning build artifacts..."
	rm -rf bin/
	rm -rf data/
	@echo "Clean complete!"

# Push binary archive to devnet nodes (fanout distribution)
push-devnet:
	./bin/orama node push --env devnet

# Push binary archive to testnet nodes (fanout distribution)
push-testnet:
	./bin/orama node push --env testnet

# Full rollout to devnet (build + push + rolling upgrade)
rollout-devnet:
	./bin/orama node rollout --env devnet --yes

# Full rollout to testnet (build + push + rolling upgrade)
rollout-testnet:
	./bin/orama node rollout --env testnet --yes

# Interactive release workflow (tag + push)
release:
	@bash scripts/release.sh

# Check health of all nodes in an environment
# Usage: make health ENV=devnet
health:
	@if [ -z "$(ENV)" ]; then \
		echo "Usage: make health ENV=devnet|testnet"; \
		exit 1; \
	fi
	./bin/orama monitor report --env $(ENV)

# Help
help:
	@echo "Available targets:"
	@echo "  build          - Build all executables"
	@echo "  install        - Build and install 'orama' CLI to ~/.local/bin"
	@echo "  clean          - Clean build artifacts"
	@echo "  test           - Run unit tests"
	@echo ""
	@echo "E2E Testing:"
	@echo "  make test-e2e-prod        - Run all E2E tests incl. production-only (needs ORAMA_GATEWAY_URL)"
	@echo "  make test-e2e-shared      - Run shared E2E tests (cache, storage, pubsub, auth)"
	@echo "  make test-e2e-cluster     - Run cluster E2E tests (libp2p, olric, rqlite, namespace)"
	@echo "  make test-e2e-integration - Run integration E2E tests (fullstack, persistence, concurrency)"
	@echo "  make test-e2e-deployments - Run deployment E2E tests"
	@echo "  make test-e2e-production  - Run production-only E2E tests (DNS, HTTPS, cross-node)"
	@echo "  make test-e2e-quick       - Quick smoke tests (static deploys, health checks)"
	@echo "  make test-e2e             - Generic E2E tests (auto-discovers config)"
	@echo ""
	@echo "  Example:"
	@echo "    ORAMA_GATEWAY_URL=https://orama-devnet.network make test-e2e-prod"
	@echo ""
	@echo "Deployment:"
	@echo "  make build-archive    - Build pre-compiled binary archive for deployment"
	@echo "  make push-devnet      - Push binary archive to devnet nodes"
	@echo "  make push-testnet     - Push binary archive to testnet nodes"
	@echo "  make rollout-devnet   - Full rollout: build + push + rolling upgrade (devnet)"
	@echo "  make rollout-testnet  - Full rollout: build + push + rolling upgrade (testnet)"
	@echo "  make health ENV=devnet - Check health of all nodes in an environment"
	@echo "  make release          - Interactive release workflow (tag + push)"
	@echo ""
	@echo "Maintenance:"
	@echo "  deps  - Download dependencies"
	@echo "  tidy  - Tidy dependencies"
	@echo "  fmt   - Format code"
	@echo "  vet   - Vet code"
	@echo "  lint  - Lint code (fmt + vet)"
	@echo "  help  - Show this help"
```

# OramaOS Deployment Guide

OramaOS is a custom minimal Linux image built with Buildroot. It replaces the standard Ubuntu-based node deployment for mainnet, devnet, and testnet environments. Sandbox clusters remain on Ubuntu for development convenience.

## What is OramaOS?

OramaOS is a locked-down operating system designed specifically for Orama node operators. Key properties:

- **No SSH, no shell** — operators cannot access the filesystem or run commands on the machine
- **LUKS full-disk encryption** — the data partition is encrypted; the key is split via Shamir's Secret Sharing across peer nodes
- **Read-only rootfs** — the OS image uses SquashFS with dm-verity integrity verification
- **A/B partition updates** — signed OS images are applied atomically with automatic rollback on failure
- **Service sandboxing** — each service runs in its own Linux namespace with seccomp syscall filtering
- **Signed binaries** — all updates are cryptographically signed with the Orama rootwallet

## Architecture

```
Partition Layout:
  /dev/sda1 — ESP (EFI System Partition, systemd-boot)
  /dev/sda2 — rootfs-A (SquashFS, read-only, dm-verity)
  /dev/sda3 — rootfs-B (standby, for A/B updates)
  /dev/sda4 — data (LUKS2 encrypted, ext4)

Boot Flow:
  systemd-boot → dm-verity rootfs → orama-agent → WireGuard → services
```

The **orama-agent** is the only root process. It manages:

- Boot sequence and LUKS key reconstruction
- WireGuard tunnel setup
- Service lifecycle (start, stop, restart in sandboxed namespaces)
- Command reception from the Gateway over WireGuard
- OS updates (download, verify signature, A/B swap, reboot)

## Enrollment Flow

OramaOS nodes join the cluster through an enrollment process (different from the Ubuntu `orama node install` flow):

### Step 1: Flash OramaOS to VPS

Download the OramaOS image and flash it to your VPS:

```bash
# Download image (URL provided upon acceptance)
wget https://releases.orama.network/oramaos-v1.0.0-amd64.qcow2

# Flash to VPS (provider-specific — Hetzner, Vultr, etc.)
# Most providers support uploading custom images via their dashboard
```

### Step 2: First Boot — Enrollment Mode

On first boot, the agent:

1. Generates a random 8-character registration code
2. Starts a temporary HTTP server on port 9999
3. Opens an outbound WebSocket to the Gateway
4. Waits for enrollment to complete

The registration code is displayed on the VPS console (if available) and served at `http://<vps-ip>:9999/`.

### Step 3: Run Enrollment from CLI

On your local machine (where you have the `orama` CLI and rootwallet):

```bash
# Generate an invite token on any existing cluster node
orama node invite --expiry 24h

# Enroll the OramaOS node
orama node enroll --node-ip <vps-public-ip> --token <invite-token> --gateway <gateway-url>
```

The enrollment command:

1. Fetches the registration code from the node (port 9999)
2. Sends the code + invite token to the Gateway
3. Gateway validates everything, assigns a WireGuard IP, and pushes config to the node
4. Node configures WireGuard and formats the LUKS-encrypted data partition
5. LUKS key is split via Shamir and distributed to peer vault-guardians
6. Services start in sandboxed namespaces
7. Port 9999 closes permanently


### Step 4: Verify

```bash
# Check the node is online and healthy
orama monitor report --env <env>
```

## Genesis Node

The first OramaOS node in a cluster is the **genesis node**. It has a special boot path because there are no peers yet for Shamir key distribution:

1. Genesis generates a LUKS key and encrypts the data partition
2. The LUKS key is encrypted with a rootwallet-derived key and stored on the unencrypted rootfs
3. On reboot (before enough peers exist), the operator must manually unlock:

```bash
orama node unlock --genesis --node-ip <wg-ip>
```

This command:

1. Fetches the encrypted genesis key from the node
2. Decrypts it using the rootwallet (`rw decrypt`)
3. Sends the decrypted LUKS key to the agent over WireGuard

Once 5+ peers have joined, the genesis node distributes Shamir shares to peers, deletes the local encrypted key, and transitions to normal Shamir-based unlock. After this transition, `orama node unlock` is no longer needed.

## Normal Reboot (Shamir Unlock)

When an enrolled OramaOS node reboots:

1. Agent starts, brings up WireGuard
2. Contacts peer vault-guardians over WireGuard
3. Fetches K Shamir shares (K = threshold, typically `max(3, N/3)`)
4. Reconstructs LUKS key via Lagrange interpolation over GF(256)
5. Decrypts and mounts data partition
6. Starts all services
7. Zeros key from memory

If not enough peers are available, the agent enters a degraded "waiting for peers" state and retries with exponential backoff (1s, 2s, 4s, 8s, 16s, max 5 retries per cycle).
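
The Lagrange-over-GF(256) reconstruction in step 4 can be sketched in a few lines of Go. This is an illustration of the math only: the AES reduction polynomial (0x11b) is assumed for the field, and the agent's real share format is not shown here. Each secret byte is recovered independently by evaluating the interpolation polynomial at x = 0; in GF(2^8), subtraction is XOR.

```go
package main

import "fmt"

// gfMul multiplies two elements of GF(2^8) using the AES reduction
// polynomial x^8 + x^4 + x^3 + x + 1 (0x1b).
func gfMul(a, b byte) byte {
	var p byte
	for b > 0 {
		if b&1 == 1 {
			p ^= a
		}
		carry := a & 0x80
		a <<= 1
		if carry != 0 {
			a ^= 0x1b
		}
		b >>= 1
	}
	return p
}

// gfInv returns the multiplicative inverse: a^254 = a^-1 in GF(2^8),
// since the multiplicative group has order 255.
func gfInv(a byte) byte {
	r := byte(1)
	for i := 0; i < 254; i++ {
		r = gfMul(r, a)
	}
	return r
}

// recoverSecret evaluates the Lagrange polynomial at x=0 for one secret
// byte, given K shares as (x, y) points with distinct nonzero x values.
func recoverSecret(xs, ys []byte) byte {
	var secret byte
	for i := range xs {
		num, den := byte(1), byte(1)
		for j := range xs {
			if i == j {
				continue
			}
			num = gfMul(num, xs[j])       // (0 - x_j) == x_j in GF(2^8)
			den = gfMul(den, xs[i]^xs[j]) // (x_i - x_j) == x_i XOR x_j
		}
		secret ^= gfMul(ys[i], gfMul(num, gfInv(den)))
	}
	return secret
}

func main() {
	// Toy 2-of-N shares of the secret byte 0x53 under f(x) = 0x53 ⊕ 0x07·x.
	xs, ys := []byte{1, 2}, []byte{0x54, 0x5d}
	fmt.Printf("recovered: %#x\n", recoverSecret(xs, ys)) // → recovered: 0x53
}
```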

## Node Management

Since OramaOS has no SSH, all management happens through the Gateway API:

```bash
# Check node status
curl "https://gateway.example.com/v1/node/status?node_id=<id>"

# Send a command (e.g., restart a service)
curl -X POST "https://gateway.example.com/v1/node/command?node_id=<id>" \
  -H "Content-Type: application/json" \
  -d '{"action":"restart","service":"rqlite"}'

# View logs
curl "https://gateway.example.com/v1/node/logs?node_id=<id>&service=gateway&lines=100"

# Graceful node departure
curl -X POST "https://gateway.example.com/v1/node/leave" \
  -H "Content-Type: application/json" \
  -d '{"node_id":"<id>"}'
```

The Gateway proxies these requests to the agent over WireGuard (port 9998). The agent is never directly accessible from the public internet.

## OS Updates

OramaOS uses an A/B partition scheme for atomic, rollback-safe updates:

1. Agent periodically checks for new versions
2. Downloads the signed image (P2P over WireGuard between nodes)
3. Verifies the rootwallet EVM signature against the embedded public key
4. Writes to the standby partition (if running from A, writes to B)
5. Sets systemd-boot to boot from B with `tries_left=3`
6. Reboots
7. If B boots successfully (agent starts, WG connects, services healthy): marks B as "good"
8. If B fails 3 times: systemd-boot automatically falls back to A

No operator intervention is needed for updates. Failed updates are automatically rolled back.

## Service Sandboxing

Each service on OramaOS runs in an isolated environment:

- **Mount namespace** — each service only sees its own data directory as writable; everything else is read-only
- **UTS namespace** — isolated hostname
- **Dedicated UID/GID** — each service runs as a different user (not root)
- **Seccomp filtering** — per-service syscall allowlist (initially in audit mode, then enforce mode)

Services and their sandbox profiles:

| Service | Writable Path | Extra Syscalls |
|---------|--------------|----------------|
| RQLite | `/opt/orama/.orama/data/rqlite` | fsync, fdatasync (Raft + SQLite WAL) |
| Olric | `/opt/orama/.orama/data/olric` | sendmmsg, recvmmsg (gossip) |
| IPFS | `/opt/orama/.orama/data/ipfs` | sendfile, splice (data transfer) |
| Gateway | `/opt/orama/.orama/data/gateway` | sendfile, splice (HTTP) |
| CoreDNS | `/opt/orama/.orama/data/coredns` | sendmmsg, recvmmsg (DNS) |

## OramaOS vs Ubuntu Deployment

| Feature | Ubuntu | OramaOS |
|---------|--------|---------|
| SSH access | Yes | No |
| Shell access | Yes | No |
| Disk encryption | No | LUKS2 (Shamir) |
| OS updates | Manual (`orama node upgrade`) | Automatic (signed, A/B) |
| Service isolation | systemd only | Namespaces + seccomp |
| Rootfs integrity | None | dm-verity |
| Binary signing | Optional | Required |
| Operator data access | Full | None |
| Environments | All (including sandbox) | Mainnet, devnet, testnet |

## Cleaning / Factory Reset

OramaOS nodes cannot be cleaned with the standard `orama node clean` command (no SSH access). Instead:

- **Graceful departure:** `orama node leave` via the Gateway API — stops services, redistributes Shamir shares, removes WG peer
- **Factory reset:** Reflash the OramaOS image on the VPS via the hosting provider's dashboard
- **Data is unrecoverable:** Since the LUKS key is distributed across peers, reflashing destroys all data permanently

## Troubleshooting

### Node stuck in enrollment mode

The node boots but enrollment never completes.

**Check:** Can you reach `http://<vps-ip>:9999/` from your machine? If not, the VPS firewall may be blocking port 9999.

**Fix:** Ensure port 9999 is open in the VPS provider's firewall. OramaOS opens it automatically via its internal firewall, but external provider firewalls (Hetzner, AWS security groups) must be configured separately.

### LUKS unlock fails (not enough peers)

After reboot, the node can't reconstruct its LUKS key.

**Check:** How many peer nodes are online? The node needs at least K peers (threshold) to be reachable over WireGuard.

**Fix:** Ensure enough cluster nodes are online. If this is the genesis node and fewer than 5 peers exist, use:

```bash
orama node unlock --genesis --node-ip <wg-ip>
```

### Update failed, node rolled back

The node applied an update but reverted to the previous version.

**Check:** The agent logs will show why the new partition failed to boot (accessible via `GET /v1/node/logs?service=agent`).

**Common causes:** Corrupted download (signature verification should catch this), hardware issue, or incompatible configuration.

### Services not starting after reboot

The node rebooted and LUKS unlocked, but services are unhealthy.

**Check:** `GET /v1/node/status` — which services are down?

**Fix:** Try restarting the specific service via `POST /v1/node/command` with `{"action":"restart","service":"<name>"}`. If the issue persists, check service logs.
@ -1,208 +0,0 @@
# Sandbox: Ephemeral Hetzner Cloud Clusters

Spin up temporary 5-node Orama clusters on Hetzner Cloud for development and testing. Total cost: ~€0.04/hour.

## Quick Start

```bash
# One-time setup (API key, domain, floating IPs, SSH key)
orama sandbox setup

# Create a cluster (~5 minutes)
orama sandbox create --name my-feature

# Check health
orama sandbox status

# SSH into a node
orama sandbox ssh 1

# Deploy code changes
orama sandbox rollout

# Tear it down
orama sandbox destroy
```

## Prerequisites

### 1. Hetzner Cloud Account

Create a project at [console.hetzner.cloud](https://console.hetzner.cloud) and generate an API token with read/write permissions under **Security > API Tokens**.

### 2. Domain with Glue Records

You need a domain (or subdomain) that points to Hetzner Floating IPs. The `orama sandbox setup` wizard will guide you through this.

**Example:** Using `sbx.dbrs.space`

At your domain registrar:

1. Create glue records (Personal DNS Servers):
   - `ns1.sbx.dbrs.space` → `<floating-ip-1>`
   - `ns2.sbx.dbrs.space` → `<floating-ip-2>`
2. Set custom nameservers for `sbx.dbrs.space`:
   - `ns1.sbx.dbrs.space`
   - `ns2.sbx.dbrs.space`

DNS propagation can take up to 48 hours.

### 3. Binary Archive

Build the binary archive before creating a cluster:

```bash
orama build
```

This creates `/tmp/orama-<version>-linux-amd64.tar.gz` with all pre-compiled binaries.

## Setup

Run the interactive setup wizard:

```bash
orama sandbox setup
```

This will:

1. Prompt for your Hetzner API token and validate it
2. Ask for your sandbox domain
3. Create or reuse 2 Hetzner Floating IPs (~€0.005/hr each)
4. Create a firewall with sandbox rules
5. Create a rootwallet SSH entry (`sandbox/root`) if it doesn't exist
6. Upload the wallet-derived public key to Hetzner
7. Display DNS configuration instructions

Config is saved to `~/.orama/sandbox.yaml`.

## Commands

### `orama sandbox create [--name <name>]`

Creates a new 5-node cluster. If `--name` is omitted, a random name is generated (e.g., "swift-falcon").

**Cluster layout:**

- Nodes 1-2: Nameservers (CoreDNS + Caddy + all services)
- Nodes 3-5: Regular nodes (all services except CoreDNS)

**Phases:**

1. Provision 5 CX22 servers on Hetzner (parallel, ~90s)
2. Assign floating IPs to nameserver nodes (~10s)
3. Upload binary archive to all nodes (parallel, ~60s)
4. Install genesis node + generate invite tokens (~120s)
5. Join remaining 4 nodes (serial with health checks, ~180s)
6. Verify cluster health (~15s)

**One sandbox at a time.** Since the floating IPs are shared, only one sandbox can own the nameservers. Destroy the active sandbox before creating a new one.

### `orama sandbox destroy [--name <name>] [--force]`

Tears down a cluster:

1. Unassigns floating IPs
2. Deletes all 5 servers (parallel)
3. Removes state file

Use `--force` to skip confirmation.

### `orama sandbox list`

Lists all sandboxes with their status. Also checks Hetzner for orphaned servers that don't have a corresponding state file.

### `orama sandbox status [--name <name>]`

Shows per-node health including:

- Service status (active/inactive)
- RQLite role (Leader/Follower)
- Cluster summary (commit index, voter count)

### `orama sandbox rollout [--name <name>]`

Deploys code changes:

1. Uses the latest binary archive from `/tmp/` (run `orama build` first)
2. Pushes to all nodes
3. Rolling upgrade: followers first, leader last, 15s between nodes

### `orama sandbox ssh <node-number>`

Opens an interactive SSH session to a sandbox node (1-5).

```bash
orama sandbox ssh 1   # SSH into node 1 (genesis/ns1)
orama sandbox ssh 3   # SSH into node 3 (regular node)
```

## Architecture

### Floating IPs

Hetzner Floating IPs are persistent IPv4 addresses that can be reassigned between servers. They solve the DNS chicken-and-egg problem:

- Glue records at the registrar point to 2 Floating IPs (configured once)
- Each new sandbox assigns the Floating IPs to its nameserver nodes
- DNS works instantly — no propagation delay between clusters

### SSH Authentication

Sandbox uses a rootwallet-derived SSH key (`sandbox/root` vault entry), the same mechanism as production. The wallet must be unlocked (`rw unlock`) before running sandbox commands that use SSH. The public key is uploaded to Hetzner during setup and injected into every server at creation time.

### Server Naming

Servers: `sbx-<name>-<N>` (e.g., `sbx-swift-falcon-1` through `sbx-swift-falcon-5`)

### State Files

Sandbox state is stored at `~/.orama/sandboxes/<name>.yaml`. This tracks server IDs, IPs, roles, and cluster status.
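
A state file might look like the following. This is a hypothetical sketch built only from the fields named above — the actual schema is not documented here:

```yaml
# ~/.orama/sandboxes/swift-falcon.yaml (illustrative only)
name: swift-falcon
status: running
servers:
  - id: 12345678        # Hetzner server ID
    ip: 203.0.113.10
    role: nameserver    # nodes 1-2
  - id: 12345679
    ip: 203.0.113.12
    role: regular       # nodes 3-5
```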

## Cost

| Resource | Cost | Qty | Total |
|----------|------|-----|-------|
| CX22 (2 vCPU, 4GB) | €0.006/hr | 5 | €0.03/hr |
| Floating IPv4 | €0.005/hr | 2 | €0.01/hr |
| **Total** | | | **~€0.04/hr** |

Servers are billed per hour. Floating IPs are billed for as long as they exist (even unassigned). Destroy the sandbox when not in use to save on server costs.

## Troubleshooting

### "sandbox not configured"

Run `orama sandbox setup` first.

### "no binary archive found"

Run `orama build` to create the binary archive.

### "sandbox X is already active"

Only one sandbox can be active at a time. Destroy it first:

```bash
orama sandbox destroy --name <name>
```

### Server creation fails

Check:

- Hetzner API token is valid and has read/write permissions
- You haven't hit Hetzner's server limit (default: 10 per project)
- The selected location has CX22 capacity

### Genesis install fails

SSH into the node to debug:

```bash
orama sandbox ssh 1
journalctl -u orama-node -f
```

The sandbox will be left in "error" state. You can destroy and recreate it.

### DNS not resolving

1. Verify glue records are configured at your registrar
2. Check propagation: `dig NS sbx.dbrs.space @8.8.8.8`
3. Propagation can take 24-48 hours for new domains

### Orphaned servers

If `orama sandbox list` shows orphaned servers, delete them manually at [console.hetzner.cloud](https://console.hetzner.cloud). Sandbox servers are labeled `orama-sandbox=<name>` for easy identification.
@ -1,194 +0,0 @@
# Security Hardening

This document describes all security measures applied to the Orama Network, covering both Phase 1 (service hardening on existing Ubuntu nodes) and Phase 2 (OramaOS locked-down image).

## Phase 1: Service Hardening

These measures apply to all nodes (Ubuntu and OramaOS).

### Network Isolation

**CIDR Validation (Step 1.1)**

- WireGuard subnet restricted to `10.0.0.0/24` across all components: firewall rules, rate limiter, auth module, and WireGuard PostUp/PostDown iptables rules
- Prevents other tenants on shared VPS providers from bypassing the firewall via overlapping `10.x.x.x` ranges

**IPv6 Disabled (Step 1.2)**

- IPv6 disabled system-wide via sysctl: `net.ipv6.conf.all.disable_ipv6=1`
- Prevents services bound to `0.0.0.0` from being reachable via IPv6 (which had no firewall rules)

### Authentication

**Internal Endpoint Auth (Step 1.3)**

- `/v1/internal/wg/peers` and `/v1/internal/wg/peer/remove` now require cluster secret validation
- Peer removal additionally validates the request originates from a WireGuard subnet IP

**RQLite Authentication (Step 1.7)**

- RQLite runs with the `-auth` flag pointing to a credentials file
- All RQLite HTTP requests include `Authorization: Basic <base64>` headers
- Credentials generated at cluster genesis, distributed to joining nodes via the join response
- Both the central RQLite client wrapper and the standalone CoreDNS RQLite client send auth

**Olric Gossip Encryption (Step 1.8)**

- Olric memberlist uses a 32-byte encryption key for all gossip traffic
- Key generated at genesis, distributed via the join response
- Prevents rogue nodes from joining the gossip ring and poisoning caches
- Note: encryption is all-or-nothing (coordinated restart required when enabling)

**IPFS Cluster TrustedPeers (Step 1.9)**

- IPFS Cluster `TrustedPeers` populated with actual cluster peer IDs (was `["*"]`)
- New peers added to TrustedPeers on all existing nodes during join
- Prevents unauthorized peers from controlling IPFS pinning

**Vault V1 Auth Enforcement (Step 1.14)**

- V1 push/pull endpoints require a valid session token when vault-guardian is configured
- Previously, auth was optional for backward compatibility — any WG peer could read/overwrite Shamir shares

### Token & Key Storage

**Refresh Token Hashing (Step 1.5)**

- Refresh tokens stored as SHA-256 hashes in RQLite (never plaintext)
- On lookup: hash the incoming token, query by hash
- On revocation: hash before revoking (both single-token and by-subject)
- Existing tokens invalidated on upgrade (users re-authenticate)
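
The hash-then-query scheme above amounts to the following minimal sketch. The in-memory map stands in for the RQLite table, and the function and type names are illustrative, not the gateway's actual API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashToken derives the storage key for a refresh token. Only this
// hash is persisted; the raw token never touches the database.
func hashToken(raw string) string {
	sum := sha256.Sum256([]byte(raw))
	return hex.EncodeToString(sum[:])
}

// tokenStore stands in for the RQLite-backed table: token hash -> valid.
type tokenStore map[string]bool

func (s tokenStore) save(raw string)       { s[hashToken(raw)] = true }
func (s tokenStore) valid(raw string) bool { return s[hashToken(raw)] }
func (s tokenStore) revoke(raw string)     { delete(s, hashToken(raw)) }

func main() {
	store := tokenStore{}
	store.save("rt_abc123")
	fmt.Println(store.valid("rt_abc123")) // true: lookup hashes the incoming token
	store.revoke("rt_abc123")             // revocation also hashes first
	fmt.Println(store.valid("rt_abc123")) // false after revocation
}
```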

**API Key Hashing (Step 1.6)**

- API keys stored as HMAC-SHA256 hashes using a server-side secret
- HMAC secret generated at cluster genesis, stored in `~/.orama/secrets/api-key-hmac-secret`
- On lookup: compute HMAC, query by hash — fast enough for every request (unlike bcrypt)
- In-memory cache uses raw key as cache key (never persisted)
- During rolling upgrade: dual lookup (HMAC first, then raw as fallback) until all nodes upgraded
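
The HMAC computation is a one-liner over the standard library (a sketch; the secret below is a placeholder, not the genesis-generated one). Unlike a plain SHA-256, the keyed hash is useless to an attacker who dumps the database but lacks the server-side secret:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashAPIKey computes the storage key for an API key using a server-side
// HMAC secret. Cheap enough to run per request, unlike bcrypt.
func hashAPIKey(secret, rawKey []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(rawKey)
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	secret := []byte("example-hmac-secret") // real secret is generated at genesis
	fmt.Println(hashAPIKey(secret, []byte("ok_live_example")))
}
```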

**TURN Secret Encryption (Step 1.15)**

- TURN shared secrets encrypted at rest in RQLite using AES-256-GCM
- Encryption key derived via HKDF from the cluster secret with purpose string `"turn-encryption"`
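
The derive-then-seal flow looks roughly like this. It is a sketch, not the production code: the HKDF here is a hand-rolled single-block HKDF-SHA256 (RFC 5869 extract plus one expand round, enough for one 32-byte AES-256 key) to stay within the standard library, and the nonce-prepended ciphertext layout is an assumption:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// deriveKey is a single-block HKDF-SHA256. The purpose string keeps keys
// derived from the same cluster secret domain-separated.
func deriveKey(clusterSecret []byte, purpose string) []byte {
	// Extract: PRK = HMAC(salt, secret); zero-filled salt per RFC 5869.
	ext := hmac.New(sha256.New, make([]byte, sha256.Size))
	ext.Write(clusterSecret)
	prk := ext.Sum(nil)
	// Expand: T(1) = HMAC(PRK, info || 0x01) — 32 bytes, one block suffices.
	exp := hmac.New(sha256.New, prk)
	exp.Write([]byte(purpose))
	exp.Write([]byte{0x01})
	return exp.Sum(nil)
}

// seal encrypts a TURN secret with AES-256-GCM, prepending the nonce.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal: split off the nonce, then authenticate and decrypt.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	return gcm.Open(nil, sealed[:n], sealed[n:], nil)
}

func main() {
	key := deriveKey([]byte("cluster-secret"), "turn-encryption")
	sealed, _ := seal(key, []byte("turn-shared-secret"))
	plain, _ := open(key, sealed)
	fmt.Println(string(plain))
}
```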

### TLS & Transport

**InsecureSkipVerify Fix (Step 1.10)**

- During node join, TLS verification uses TOFU (Trust On First Use)
- Invite token output includes the CA certificate fingerprint (SHA-256)
- Joining node verifies the server cert fingerprint matches before proceeding
- After join: CA cert stored locally for future connections

**WebSocket Origin Validation (Step 1.4)**

- All WebSocket upgraders validate the `Origin` header against the node's configured domain
- Non-browser clients (no Origin header) are still allowed
- Prevents cross-site WebSocket hijacking attacks

### Process Isolation

**Dedicated User (Step 1.11)**

- All services run as the `orama` user (not root)
- Caddy and CoreDNS get `AmbientCapabilities=CAP_NET_BIND_SERVICE` for ports 80/443 and 53
- WireGuard stays as root (kernel netlink requires it)
- vault-guardian already had proper hardening

**systemd Hardening (Step 1.12)**

All service units include:

```ini
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
RestrictNamespaces=yes
ReadWritePaths=/opt/orama/.orama
```

Applied to both the template files (`pkg/environments/templates/`) and the hardcoded unit generators (`pkg/environments/production/services.go`).

### Supply Chain

**Binary Signing (Step 1.13)**

- Build archives include `manifest.sig` — a rootwallet EVM signature of the manifest hash
- During install, the signature is verified against the embedded Orama public key
- Unsigned or tampered archives are rejected

## Phase 2: OramaOS

These measures apply only to OramaOS nodes (mainnet, devnet, testnet).

### Immutable OS

- **Read-only rootfs** — SquashFS with dm-verity integrity verification
- **No shell** — `/bin/sh` symlinked to `/bin/false`, no bash/ash/ssh
- **No SSH** — OpenSSH not included in the image
- **Minimal packages** — only what's needed for systemd, cryptsetup, and the agent

### Full-Disk Encryption

- **LUKS2** with AES-XTS-Plain64 on the data partition
- **Shamir's Secret Sharing** over GF(256) — LUKS key split across peer vault-guardians
- **Adaptive threshold** — K = max(3, N/3) where N is the number of peers
- **Key zeroing** — LUKS key wiped from memory immediately after use
- **Malicious share detection** — fetch K+1 shares when possible, verify consistency

### Service Sandboxing

Each service runs in isolated Linux namespaces:

- **CLONE_NEWNS** — mount namespace (filesystem isolation)
- **CLONE_NEWUTS** — hostname namespace
- **Dedicated UID/GID** — each service has its own user
- **Seccomp filtering** — per-service syscall allowlist

Note: CLONE_NEWPID is intentionally omitted — it makes services PID 1 in their namespace, which changes signal semantics (SIGTERM is ignored by default for PID 1).

### Signed Updates

- A/B partition scheme with systemd-boot and boot counting (`tries_left=3`)
- All updates signed with a rootwallet EVM signature (secp256k1 + keccak256)
- Signer address: `0xb5d8a496c8b2412990d7D467E17727fdF5954afC`
- P2P distribution over WireGuard between nodes
- Automatic rollback on 3 consecutive boot failures

### Zero Operator Access

- Operators cannot read data on the machine (LUKS encrypted, no shell)
- Management only through Gateway API → agent over WireGuard
- All commands are logged and auditable
- No root access, no console access, no file system access

## Rollout Strategy

### Phase 1 Batches

```
Batch 1 (zero-risk, no restart):
- CIDR fix
- IPv6 disable
- Internal endpoint auth
- WebSocket origin check

Batch 2 (medium-risk, restart needed):
- Hash refresh tokens
- Hash API keys
- Binary signing
- Vault V1 auth enforcement
- TURN secret encryption

Batch 3 (high-risk, coordinated rollout):
- RQLite auth (followers first, leader last)
- Olric encryption (simultaneous restart)
- IPFS Cluster TrustedPeers

Batch 4 (infrastructure changes):
- InsecureSkipVerify fix
- Dedicated user
- systemd hardening
```

### Phase 2

1. Build and test the OramaOS image in QEMU
2. Deploy to the sandbox cluster alongside Ubuntu nodes
3. Verify interop and stability
4. Gradual migration: testnet → devnet → mainnet (one node at a time, maintaining Raft quorum)

## Verification

All changes verified on the sandbox cluster before production deployment:

- `make test` — all unit tests pass
- `orama monitor report --env sandbox` — full cluster health
- Manual endpoint testing (e.g., curl without auth → 401)
- Security-specific checks (IPv6 listeners, RQLite auth, binary signatures)
@ -1,4 +0,0 @@
-- Invalidate all existing refresh tokens.
-- Tokens were stored in plaintext; the application now stores SHA-256 hashes.
-- Users will need to re-authenticate (tokens have 30-day expiry anyway).
UPDATE refresh_tokens SET revoked_at = datetime('now') WHERE revoked_at IS NULL;
@ -1,318 +0,0 @@
package build
|
|
||||||
|
|
||||||
import (
|
|
||||||
"archive/tar"
|
|
||||||
"compress/gzip"
|
|
||||||
"crypto/sha256"
|
|
||||||
"encoding/hex"
|
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"net/http"
|
|
||||||
"os"
|
|
||||||
"os/exec"
|
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
|
||||||
"time"
|
|
||||||
)
|
|
||||||
|
|
||||||
// Manifest describes the contents of a binary archive.
|
|
||||||
type Manifest struct {
|
|
||||||
Version string `json:"version"`
|
|
||||||
Commit string `json:"commit"`
|
|
||||||
Date string `json:"date"`
|
|
||||||
Arch string `json:"arch"`
|
|
||||||
Checksums map[string]string `json:"checksums"` // filename -> sha256
|
|
||||||
}
|
|
||||||
|
|
||||||
// generateManifest creates the manifest with SHA256 checksums of all binaries.
|
|
||||||
func (b *Builder) generateManifest() (*Manifest, error) {
|
|
||||||
m := &Manifest{
|
|
||||||
Version: b.version,
|
|
||||||
Commit: b.commit,
|
|
||||||
Date: b.date,
|
|
||||||
Arch: b.flags.Arch,
|
|
||||||
Checksums: make(map[string]string),
|
|
||||||
}
|
|
||||||
|
|
||||||
entries, err := os.ReadDir(b.binDir)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, entry := range entries {
|
|
||||||
if entry.IsDir() {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
path := filepath.Join(b.binDir, entry.Name())
|
|
||||||
hash, err := sha256File(path)
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("failed to hash %s: %w", entry.Name(), err)
|
|
||||||
}
|
|
||||||
m.Checksums[entry.Name()] = hash
|
|
||||||
}
|
|
||||||
|
|
||||||
return m, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// createArchive creates the tar.gz archive from the build directory.
|
|
||||||
func (b *Builder) createArchive(outputPath string, manifest *Manifest) error {
|
|
||||||
fmt.Printf("\nCreating archive: %s\n", outputPath)
|
|
||||||
|
|
||||||
// Write manifest.json to tmpDir
|
|
||||||
manifestData, err := json.MarshalIndent(manifest, "", " ")
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if err := os.WriteFile(filepath.Join(b.tmpDir, "manifest.json"), manifestData, 0644); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create output file
|
|
||||||
f, err := os.Create(outputPath)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer f.Close()
|
|
||||||
|
|
||||||
gw := gzip.NewWriter(f)
|
|
||||||
defer gw.Close()
|
|
||||||
|
|
||||||
tw := tar.NewWriter(gw)
|
|
||||||
defer tw.Close()
|
|
||||||
|
|
||||||
// Add bin/ directory
|
|
||||||
if err := addDirToTar(tw, b.binDir, "bin"); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add systemd/ directory
|
|
||||||
systemdDir := filepath.Join(b.tmpDir, "systemd")
|
|
||||||
if _, err := os.Stat(systemdDir); err == nil {
|
|
||||||
if err := addDirToTar(tw, systemdDir, "systemd"); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add packages/ directory if it exists
|
|
||||||
packagesDir := filepath.Join(b.tmpDir, "packages")
|
|
||||||
if _, err := os.Stat(packagesDir); err == nil {
|
|
||||||
if err := addDirToTar(tw, packagesDir, "packages"); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add manifest.json
|
|
||||||
if err := addFileToTar(tw, filepath.Join(b.tmpDir, "manifest.json"), "manifest.json"); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add manifest.sig if it exists (created by --sign)
|
|
||||||
sigPath := filepath.Join(b.tmpDir, "manifest.sig")
|
|
||||||
if _, err := os.Stat(sigPath); err == nil {
|
|
||||||
if err := addFileToTar(tw, sigPath, "manifest.sig"); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Print summary
|
|
||||||
fmt.Printf(" bin/: %d binaries\n", len(manifest.Checksums))
|
|
||||||
fmt.Printf(" systemd/: namespace templates\n")
|
|
||||||
fmt.Printf(" manifest: v%s (%s) linux/%s\n", manifest.Version, manifest.Commit, manifest.Arch)
|
|
||||||
|
|
||||||
info, err := f.Stat()
|
|
||||||
if err == nil {
|
|
||||||
fmt.Printf(" size: %s\n", formatBytes(info.Size()))
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}

// signManifest signs the manifest hash using rootwallet CLI.
// Produces manifest.sig containing the hex-encoded EVM signature.
func (b *Builder) signManifest(manifest *Manifest) error {
	fmt.Printf("\nSigning manifest with rootwallet...\n")

	// Serialize manifest deterministically (compact JSON, sorted keys via json.Marshal)
	manifestData, err := json.Marshal(manifest)
	if err != nil {
		return fmt.Errorf("failed to marshal manifest: %w", err)
	}

	// Hash the manifest JSON
	hash := sha256.Sum256(manifestData)
	hashHex := hex.EncodeToString(hash[:])

	// Call rw sign <hash> --chain evm
	cmd := exec.Command("rw", "sign", hashHex, "--chain", "evm")
	var stdout, stderr strings.Builder
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("rw sign failed: %w\n%s", err, stderr.String())
	}

	signature := strings.TrimSpace(stdout.String())
	if signature == "" {
		return fmt.Errorf("rw sign produced empty signature")
	}

	// Write signature file
	sigPath := filepath.Join(b.tmpDir, "manifest.sig")
	if err := os.WriteFile(sigPath, []byte(signature), 0644); err != nil {
		return fmt.Errorf("failed to write manifest.sig: %w", err)
	}

	fmt.Printf(" Manifest signed (SHA256: %s...)\n", hashHex[:16])
	return nil
}

// addDirToTar adds all files in a directory to the tar archive under the given prefix.
func addDirToTar(tw *tar.Writer, srcDir, prefix string) error {
	return filepath.Walk(srcDir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}

		// Calculate relative path
		relPath, err := filepath.Rel(srcDir, path)
		if err != nil {
			return err
		}
		tarPath := filepath.Join(prefix, relPath)

		if info.IsDir() {
			header := &tar.Header{
				Name:     tarPath + "/",
				Mode:     0755,
				Typeflag: tar.TypeDir,
			}
			return tw.WriteHeader(header)
		}

		return addFileToTar(tw, path, tarPath)
	})
}

// addFileToTar adds a single file to the tar archive.
func addFileToTar(tw *tar.Writer, srcPath, tarPath string) error {
	f, err := os.Open(srcPath)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}

	header := &tar.Header{
		Name: tarPath,
		Size: info.Size(),
		Mode: int64(info.Mode()),
	}

	if err := tw.WriteHeader(header); err != nil {
		return err
	}

	_, err = io.Copy(tw, f)
	return err
}

// sha256File computes the SHA256 hash of a file.
func sha256File(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

// downloadFile downloads a URL to a local file path.
func downloadFile(url, destPath string) error {
	client := &http.Client{Timeout: 5 * time.Minute}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("failed to download %s: %w", url, err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("download %s returned status %d", url, resp.StatusCode)
	}

	f, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, resp.Body)
	return err
}

// extractFileFromTarball extracts a single file from a tar.gz archive.
func extractFileFromTarball(tarPath, targetFile, destPath string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	gr, err := gzip.NewReader(f)
	if err != nil {
		return err
	}
	defer gr.Close()

	tr := tar.NewReader(gr)
	for {
		header, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}

		// Match the target file (strip leading ./ if present)
		name := strings.TrimPrefix(header.Name, "./")
		if name == targetFile {
			out, err := os.OpenFile(destPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
			if err != nil {
				return err
			}
			defer out.Close()

			if _, err := io.Copy(out, tr); err != nil {
				return err
			}
			return nil
		}
	}

	return fmt.Errorf("file %s not found in archive %s", targetFile, tarPath)
}

// formatBytes formats bytes into a human-readable string.
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
@ -1,829 +0,0 @@
package build

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/constants"
)

// oramaBinary defines a binary to cross-compile from the project source.
type oramaBinary struct {
	Name    string // output binary name
	Package string // Go package path relative to project root
	// Extra ldflags beyond the standard ones
	ExtraLDFlags string
}

// Builder orchestrates the entire build process.
type Builder struct {
	flags      *Flags
	projectDir string
	tmpDir     string
	binDir     string
	version    string
	commit     string
	date       string
}

// NewBuilder creates a new Builder.
func NewBuilder(flags *Flags) *Builder {
	return &Builder{flags: flags}
}

// Build runs the full build pipeline.
func (b *Builder) Build() error {
	start := time.Now()

	// Find project root
	projectDir, err := findProjectRoot()
	if err != nil {
		return err
	}
	b.projectDir = projectDir

	// Read version from Makefile or use "dev"
	b.version = b.readVersion()
	b.commit = b.readCommit()
	b.date = time.Now().UTC().Format("2006-01-02T15:04:05Z")

	// Create temp build directory
	b.tmpDir, err = os.MkdirTemp("", "orama-build-*")
	if err != nil {
		return fmt.Errorf("failed to create temp dir: %w", err)
	}
	defer os.RemoveAll(b.tmpDir)

	b.binDir = filepath.Join(b.tmpDir, "bin")
	if err := os.MkdirAll(b.binDir, 0755); err != nil {
		return fmt.Errorf("failed to create bin dir: %w", err)
	}

	fmt.Printf("Building orama %s for linux/%s\n", b.version, b.flags.Arch)
	fmt.Printf("Project: %s\n\n", b.projectDir)

	// Step 1: Cross-compile Orama binaries
	if err := b.buildOramaBinaries(); err != nil {
		return fmt.Errorf("failed to build orama binaries: %w", err)
	}

	// Step 2: Cross-compile Vault Guardian (Zig)
	if err := b.buildVaultGuardian(); err != nil {
		return fmt.Errorf("failed to build vault-guardian: %w", err)
	}

	// Step 3: Cross-compile Olric
	if err := b.buildOlric(); err != nil {
		return fmt.Errorf("failed to build olric: %w", err)
	}

	// Step 4: Cross-compile IPFS Cluster
	if err := b.buildIPFSCluster(); err != nil {
		return fmt.Errorf("failed to build ipfs-cluster: %w", err)
	}

	// Step 5: Build CoreDNS with RQLite plugin
	if err := b.buildCoreDNS(); err != nil {
		return fmt.Errorf("failed to build coredns: %w", err)
	}

	// Step 6: Build Caddy with Orama DNS module
	if err := b.buildCaddy(); err != nil {
		return fmt.Errorf("failed to build caddy: %w", err)
	}

	// Step 7: Download pre-built IPFS Kubo
	if err := b.downloadIPFS(); err != nil {
		return fmt.Errorf("failed to download ipfs: %w", err)
	}

	// Step 8: Download pre-built RQLite
	if err := b.downloadRQLite(); err != nil {
		return fmt.Errorf("failed to download rqlite: %w", err)
	}

	// Step 9: Copy systemd templates
	if err := b.copySystemdTemplates(); err != nil {
		return fmt.Errorf("failed to copy systemd templates: %w", err)
	}

	// Step 10: Generate manifest
	manifest, err := b.generateManifest()
	if err != nil {
		return fmt.Errorf("failed to generate manifest: %w", err)
	}

	// Step 11: Sign manifest (optional)
	if b.flags.Sign {
		if err := b.signManifest(manifest); err != nil {
			return fmt.Errorf("failed to sign manifest: %w", err)
		}
	}

	// Step 12: Create archive
	outputPath := b.flags.Output
	if outputPath == "" {
		outputPath = fmt.Sprintf("/tmp/orama-%s-linux-%s.tar.gz", b.version, b.flags.Arch)
	}

	if err := b.createArchive(outputPath, manifest); err != nil {
		return fmt.Errorf("failed to create archive: %w", err)
	}

	elapsed := time.Since(start).Round(time.Second)
	fmt.Printf("\nBuild complete in %s\n", elapsed)
	fmt.Printf("Archive: %s\n", outputPath)

	return nil
}

func (b *Builder) buildOramaBinaries() error {
	fmt.Println("[1/8] Cross-compiling Orama binaries...")

	ldflags := fmt.Sprintf("-s -w -X 'main.version=%s' -X 'main.commit=%s' -X 'main.date=%s'",
		b.version, b.commit, b.date)

	gatewayLDFlags := fmt.Sprintf("%s -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildVersion=%s' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildCommit=%s' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildTime=%s'",
		ldflags, b.version, b.commit, b.date)

	binaries := []oramaBinary{
		{Name: "orama", Package: "./cmd/cli/"},
		{Name: "orama-node", Package: "./cmd/node/"},
		{Name: "gateway", Package: "./cmd/gateway/", ExtraLDFlags: gatewayLDFlags},
		{Name: "identity", Package: "./cmd/identity/"},
		{Name: "sfu", Package: "./cmd/sfu/"},
		{Name: "turn", Package: "./cmd/turn/"},
	}

	for _, bin := range binaries {
		flags := ldflags
		if bin.ExtraLDFlags != "" {
			flags = bin.ExtraLDFlags
		}

		output := filepath.Join(b.binDir, bin.Name)
		cmd := exec.Command("go", "build",
			"-ldflags", flags,
			"-trimpath",
			"-o", output,
			bin.Package)
		cmd.Dir = b.projectDir
		cmd.Env = b.crossEnv()
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		if b.flags.Verbose {
			fmt.Printf(" go build -o %s %s\n", bin.Name, bin.Package)
		}

		if err := cmd.Run(); err != nil {
			return fmt.Errorf("failed to build %s: %w", bin.Name, err)
		}
		fmt.Printf(" ✓ %s\n", bin.Name)
	}

	return nil
}

func (b *Builder) buildVaultGuardian() error {
	fmt.Println("[2/8] Cross-compiling Vault Guardian (Zig)...")

	// Ensure zig is available
	if _, err := exec.LookPath("zig"); err != nil {
		return fmt.Errorf("zig not found in PATH — install from https://ziglang.org/download/")
	}

	// Vault source is sibling to orama project
	vaultDir := filepath.Join(b.projectDir, "..", "orama-vault")
	if _, err := os.Stat(filepath.Join(vaultDir, "build.zig")); err != nil {
		return fmt.Errorf("vault source not found at %s — expected orama-vault as sibling directory: %w", vaultDir, err)
	}

	// Map Go arch to Zig target triple
	var zigTarget string
	switch b.flags.Arch {
	case "amd64":
		zigTarget = "x86_64-linux-musl"
	case "arm64":
		zigTarget = "aarch64-linux-musl"
	default:
		return fmt.Errorf("unsupported architecture for vault: %s", b.flags.Arch)
	}

	if b.flags.Verbose {
		fmt.Printf(" zig build -Dtarget=%s -Doptimize=ReleaseSafe\n", zigTarget)
	}

	cmd := exec.Command("zig", "build",
		fmt.Sprintf("-Dtarget=%s", zigTarget),
		"-Doptimize=ReleaseSafe")
	cmd.Dir = vaultDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("zig build failed: %w", err)
	}

	// Copy output binary to build bin dir
	src := filepath.Join(vaultDir, "zig-out", "bin", "vault-guardian")
	dst := filepath.Join(b.binDir, "vault-guardian")
	if err := copyFile(src, dst); err != nil {
		return fmt.Errorf("failed to copy vault-guardian binary: %w", err)
	}

	fmt.Println(" ✓ vault-guardian")
	return nil
}

// copyFile copies a file from src to dst, preserving executable permissions.
func copyFile(src, dst string) error {
	srcFile, err := os.Open(src)
	if err != nil {
		return err
	}
	defer srcFile.Close()

	dstFile, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
	if err != nil {
		return err
	}
	defer dstFile.Close()

	if _, err := srcFile.WriteTo(dstFile); err != nil {
		return err
	}
	return nil
}

func (b *Builder) buildOlric() error {
	fmt.Printf("[3/8] Cross-compiling Olric %s...\n", constants.OlricVersion)

	// go install doesn't support cross-compilation with GOBIN set,
	// so we create a temporary module and use go build -o instead.
	tmpDir, err := os.MkdirTemp("", "olric-build-*")
	if err != nil {
		return fmt.Errorf("create temp dir: %w", err)
	}
	defer os.RemoveAll(tmpDir)

	modInit := exec.Command("go", "mod", "init", "olric-build")
	modInit.Dir = tmpDir
	modInit.Stderr = os.Stderr
	if err := modInit.Run(); err != nil {
		return fmt.Errorf("go mod init: %w", err)
	}

	modGet := exec.Command("go", "get",
		fmt.Sprintf("github.com/olric-data/olric/cmd/olric-server@%s", constants.OlricVersion))
	modGet.Dir = tmpDir
	modGet.Env = append(os.Environ(),
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")
	modGet.Stderr = os.Stderr
	if err := modGet.Run(); err != nil {
		return fmt.Errorf("go get olric: %w", err)
	}

	cmd := exec.Command("go", "build",
		"-ldflags", "-s -w",
		"-trimpath",
		"-o", filepath.Join(b.binDir, "olric-server"),
		"github.com/olric-data/olric/cmd/olric-server")
	cmd.Dir = tmpDir
	cmd.Env = append(b.crossEnv(),
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		return err
	}
	fmt.Println(" ✓ olric-server")
	return nil
}

func (b *Builder) buildIPFSCluster() error {
	fmt.Printf("[4/8] Cross-compiling IPFS Cluster %s...\n", constants.IPFSClusterVersion)

	tmpDir, err := os.MkdirTemp("", "ipfs-cluster-build-*")
	if err != nil {
		return fmt.Errorf("create temp dir: %w", err)
	}
	defer os.RemoveAll(tmpDir)

	modInit := exec.Command("go", "mod", "init", "ipfs-cluster-build")
	modInit.Dir = tmpDir
	modInit.Stderr = os.Stderr
	if err := modInit.Run(); err != nil {
		return fmt.Errorf("go mod init: %w", err)
	}

	modGet := exec.Command("go", "get",
		fmt.Sprintf("github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-service@%s", constants.IPFSClusterVersion))
	modGet.Dir = tmpDir
	modGet.Env = append(os.Environ(),
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")
	modGet.Stderr = os.Stderr
	if err := modGet.Run(); err != nil {
		return fmt.Errorf("go get ipfs-cluster: %w", err)
	}

	cmd := exec.Command("go", "build",
		"-ldflags", "-s -w",
		"-trimpath",
		"-o", filepath.Join(b.binDir, "ipfs-cluster-service"),
		"github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-service")
	cmd.Dir = tmpDir
	cmd.Env = append(b.crossEnv(),
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		return err
	}
	fmt.Println(" ✓ ipfs-cluster-service")
	return nil
}

func (b *Builder) buildCoreDNS() error {
	fmt.Printf("[5/8] Building CoreDNS %s with RQLite plugin...\n", constants.CoreDNSVersion)

	buildDir := filepath.Join(b.tmpDir, "coredns-build")

	// Clone CoreDNS
	fmt.Println(" Cloning CoreDNS...")
	cmd := exec.Command("git", "clone", "--depth", "1",
		"--branch", "v"+constants.CoreDNSVersion,
		"https://github.com/coredns/coredns.git", buildDir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("failed to clone coredns: %w", err)
	}

	// Copy RQLite plugin from local source
	pluginSrc := filepath.Join(b.projectDir, "pkg", "coredns", "rqlite")
	pluginDst := filepath.Join(buildDir, "plugin", "rqlite")
	if err := os.MkdirAll(pluginDst, 0755); err != nil {
		return err
	}

	entries, err := os.ReadDir(pluginSrc)
	if err != nil {
		return fmt.Errorf("failed to read rqlite plugin source at %s: %w", pluginSrc, err)
	}
	for _, entry := range entries {
		if entry.IsDir() || filepath.Ext(entry.Name()) != ".go" {
			continue
		}
		data, err := os.ReadFile(filepath.Join(pluginSrc, entry.Name()))
		if err != nil {
			return err
		}
		if err := os.WriteFile(filepath.Join(pluginDst, entry.Name()), data, 0644); err != nil {
			return err
		}
	}

	// Write plugin.cfg (same as build-linux-coredns.sh)
	pluginCfg := `metadata:metadata
cancel:cancel
tls:tls
reload:reload
nsid:nsid
bufsize:bufsize
root:root
bind:bind
debug:debug
trace:trace
ready:ready
health:health
pprof:pprof
prometheus:metrics
errors:errors
log:log
dnstap:dnstap
local:local
dns64:dns64
acl:acl
any:any
chaos:chaos
loadbalance:loadbalance
cache:cache
rewrite:rewrite
header:header
dnssec:dnssec
autopath:autopath
minimal:minimal
template:template
transfer:transfer
hosts:hosts
file:file
auto:auto
secondary:secondary
loop:loop
forward:forward
grpc:grpc
erratic:erratic
whoami:whoami
on:github.com/coredns/caddy/onevent
sign:sign
view:view
rqlite:rqlite
`
	if err := os.WriteFile(filepath.Join(buildDir, "plugin.cfg"), []byte(pluginCfg), 0644); err != nil {
		return err
	}

	// Add dependencies
	fmt.Println(" Adding dependencies...")
	goPath := os.Getenv("PATH")
	baseEnv := append(os.Environ(),
		"PATH="+goPath,
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")

	for _, dep := range []string{"github.com/miekg/dns@latest", "go.uber.org/zap@latest"} {
		cmd := exec.Command("go", "get", dep)
		cmd.Dir = buildDir
		cmd.Env = baseEnv
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("failed to get %s: %w", dep, err)
		}
	}

	cmd = exec.Command("go", "mod", "tidy")
	cmd.Dir = buildDir
	cmd.Env = baseEnv
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("go mod tidy failed: %w", err)
	}

	// Generate plugin code
	fmt.Println(" Generating plugin code...")
	cmd = exec.Command("go", "generate")
	cmd.Dir = buildDir
	cmd.Env = baseEnv
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("go generate failed: %w", err)
	}

	// Cross-compile
	fmt.Println(" Building binary...")
	cmd = exec.Command("go", "build",
		"-ldflags", "-s -w",
		"-trimpath",
		"-o", filepath.Join(b.binDir, "coredns"))
	cmd.Dir = buildDir
	cmd.Env = append(baseEnv,
		"GOOS=linux",
		fmt.Sprintf("GOARCH=%s", b.flags.Arch),
		"CGO_ENABLED=0")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("build failed: %w", err)
	}

	fmt.Println(" ✓ coredns")
	return nil
}

func (b *Builder) buildCaddy() error {
	fmt.Printf("[6/8] Building Caddy %s with Orama DNS module...\n", constants.CaddyVersion)

	// Ensure xcaddy is available
	if _, err := exec.LookPath("xcaddy"); err != nil {
		return fmt.Errorf("xcaddy not found in PATH — install with: go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest")
	}

	moduleDir := filepath.Join(b.tmpDir, "caddy-dns-orama")
	if err := os.MkdirAll(moduleDir, 0755); err != nil {
		return err
	}

	// Write go.mod
	goMod := fmt.Sprintf(`module github.com/DeBrosOfficial/caddy-dns-orama

go 1.22

require (
	github.com/caddyserver/caddy/v2 v2.%s
	github.com/libdns/libdns v1.1.0
)
`, constants.CaddyVersion[2:])
	if err := os.WriteFile(filepath.Join(moduleDir, "go.mod"), []byte(goMod), 0644); err != nil {
		return err
	}

	// Write provider.go — read from the caddy installer's generated code
	// We inline the same provider code used by the VPS-side caddy installer
	providerCode := generateCaddyProviderCode()
	if err := os.WriteFile(filepath.Join(moduleDir, "provider.go"), []byte(providerCode), 0644); err != nil {
		return err
	}

	// go mod tidy
	cmd := exec.Command("go", "mod", "tidy")
	cmd.Dir = moduleDir
	cmd.Env = append(os.Environ(),
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("go mod tidy failed: %w", err)
	}

	// Build with xcaddy
	fmt.Println(" Building binary...")
	cmd = exec.Command("xcaddy", "build",
		"v"+constants.CaddyVersion,
		"--with", "github.com/DeBrosOfficial/caddy-dns-orama="+moduleDir,
		"--output", filepath.Join(b.binDir, "caddy"))
	cmd.Env = append(os.Environ(),
		"GOOS=linux",
		fmt.Sprintf("GOARCH=%s", b.flags.Arch),
		"GOPROXY=https://proxy.golang.org|direct",
		"GONOSUMDB=*")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("xcaddy build failed: %w", err)
	}

	fmt.Println(" ✓ caddy")
	return nil
}

func (b *Builder) downloadIPFS() error {
	fmt.Printf("[7/8] Downloading IPFS Kubo %s...\n", constants.IPFSKuboVersion)

	arch := b.flags.Arch
	tarball := fmt.Sprintf("kubo_%s_linux-%s.tar.gz", constants.IPFSKuboVersion, arch)
	url := fmt.Sprintf("https://dist.ipfs.tech/kubo/%s/%s", constants.IPFSKuboVersion, tarball)
	tarPath := filepath.Join(b.tmpDir, tarball)

	if err := downloadFile(url, tarPath); err != nil {
		return err
	}

	// Extract ipfs binary from kubo/ipfs
	if err := extractFileFromTarball(tarPath, "kubo/ipfs", filepath.Join(b.binDir, "ipfs")); err != nil {
		return err
	}

	fmt.Println(" ✓ ipfs")
	return nil
}

func (b *Builder) downloadRQLite() error {
	fmt.Printf("[8/8] Downloading RQLite %s...\n", constants.RQLiteVersion)

	arch := b.flags.Arch
	tarball := fmt.Sprintf("rqlite-v%s-linux-%s.tar.gz", constants.RQLiteVersion, arch)
	url := fmt.Sprintf("https://github.com/rqlite/rqlite/releases/download/v%s/%s", constants.RQLiteVersion, tarball)
	tarPath := filepath.Join(b.tmpDir, tarball)

	if err := downloadFile(url, tarPath); err != nil {
		return err
	}

	// Extract rqlited binary
	extractDir := fmt.Sprintf("rqlite-v%s-linux-%s", constants.RQLiteVersion, arch)
	if err := extractFileFromTarball(tarPath, extractDir+"/rqlited", filepath.Join(b.binDir, "rqlited")); err != nil {
		return err
	}

	fmt.Println(" ✓ rqlited")
	return nil
}

func (b *Builder) copySystemdTemplates() error {
	systemdSrc := filepath.Join(b.projectDir, "systemd")
	systemdDst := filepath.Join(b.tmpDir, "systemd")
	if err := os.MkdirAll(systemdDst, 0755); err != nil {
		return err
	}

	entries, err := os.ReadDir(systemdSrc)
	if err != nil {
		return fmt.Errorf("failed to read systemd dir: %w", err)
	}

	for _, entry := range entries {
		if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".service") {
			continue
		}
		data, err := os.ReadFile(filepath.Join(systemdSrc, entry.Name()))
		if err != nil {
			return err
		}
		if err := os.WriteFile(filepath.Join(systemdDst, entry.Name()), data, 0644); err != nil {
			return err
		}
	}

	return nil
}

// crossEnv returns the environment for cross-compilation.
func (b *Builder) crossEnv() []string {
	return append(os.Environ(),
		"GOOS=linux",
		fmt.Sprintf("GOARCH=%s", b.flags.Arch),
		"CGO_ENABLED=0")
}

func (b *Builder) readVersion() string {
	// Try to read from Makefile
	data, err := os.ReadFile(filepath.Join(b.projectDir, "Makefile"))
	if err != nil {
		return "dev"
	}
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "VERSION") {
			parts := strings.SplitN(line, ":=", 2)
			if len(parts) == 2 {
				return strings.TrimSpace(parts[1])
			}
		}
	}
	return "dev"
}

func (b *Builder) readCommit() string {
	cmd := exec.Command("git", "rev-parse", "--short", "HEAD")
	cmd.Dir = b.projectDir
	out, err := cmd.Output()
	if err != nil {
		return "unknown"
	}
	return strings.TrimSpace(string(out))
}

// generateCaddyProviderCode returns the Caddy DNS provider Go source.
// This is the same code used by the VPS-side caddy installer.
func generateCaddyProviderCode() string {
	return `// Package orama implements a DNS provider for Caddy that uses the Orama Network
// gateway's internal ACME API for DNS-01 challenge validation.
package orama

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"

	"github.com/caddyserver/caddy/v2"
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
	"github.com/libdns/libdns"
)

func init() {
	caddy.RegisterModule(Provider{})
}

// Provider wraps the Orama DNS provider for Caddy.
type Provider struct {
	// Endpoint is the URL of the Orama gateway's ACME API
	// Default: http://localhost:6001/v1/internal/acme
	Endpoint string ` + "`json:\"endpoint,omitempty\"`" + `
}

// CaddyModule returns the Caddy module information.
func (Provider) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "dns.providers.orama",
		New: func() caddy.Module { return new(Provider) },
	}
}

// Provision sets up the module.
func (p *Provider) Provision(ctx caddy.Context) error {
	if p.Endpoint == "" {
		p.Endpoint = "http://localhost:6001/v1/internal/acme"
	}
	return nil
}

// UnmarshalCaddyfile parses the Caddyfile configuration.
func (p *Provider) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {
	for d.Next() {
		for d.NextBlock(0) {
			switch d.Val() {
			case "endpoint":
				if !d.NextArg() {
					return d.ArgErr()
				}
				p.Endpoint = d.Val()
			default:
				return d.Errf("unrecognized option: %s", d.Val())
			}
		}
	}
	return nil
}

// AppendRecords adds records to the zone.
func (p *Provider) AppendRecords(ctx context.Context, zone string, records []libdns.Record) ([]libdns.Record, error) {
	var added []libdns.Record
	for _, rec := range records {
		rr := rec.RR()
		if rr.Type != "TXT" {
			continue
		}
		fqdn := rr.Name + "." + zone
		payload := map[string]string{"fqdn": fqdn, "value": rr.Data}
		body, err := json.Marshal(payload)
		if err != nil {
			return added, fmt.Errorf("failed to marshal request: %w", err)
		}
		req, err := http.NewRequestWithContext(ctx, "POST", p.Endpoint+"/present", bytes.NewReader(body))
		if err != nil {
			return added, fmt.Errorf("failed to create request: %w", err)
		}
		req.Header.Set("Content-Type", "application/json")
		client := &http.Client{Timeout: 30 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			return added, fmt.Errorf("failed to present challenge: %w", err)
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return added, fmt.Errorf("present failed with status %d", resp.StatusCode)
		}
added = append(added, rec)
|
|
||||||
}
|
|
||||||
return added, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// DeleteRecords removes records from the zone.
|
|
||||||
func (p *Provider) DeleteRecords(ctx context.Context, zone string, records []libdns.Record) ([]libdns.Record, error) {
|
|
||||||
var deleted []libdns.Record
|
|
||||||
for _, rec := range records {
|
|
||||||
rr := rec.RR()
|
|
||||||
if rr.Type != "TXT" {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
fqdn := rr.Name + "." + zone
|
|
||||||
payload := map[string]string{"fqdn": fqdn, "value": rr.Data}
|
|
||||||
body, err := json.Marshal(payload)
|
|
||||||
if err != nil {
|
|
||||||
return deleted, fmt.Errorf("failed to marshal request: %w", err)
|
|
||||||
}
|
|
||||||
req, err := http.NewRequestWithContext(ctx, "POST", p.Endpoint+"/cleanup", bytes.NewReader(body))
|
|
||||||
if err != nil {
|
|
||||||
return deleted, fmt.Errorf("failed to create request: %w", err)
|
|
||||||
}
|
|
||||||
req.Header.Set("Content-Type", "application/json")
|
|
||||||
client := &http.Client{Timeout: 30 * time.Second}
|
|
||||||
resp, err := client.Do(req)
|
|
||||||
if err != nil {
|
|
||||||
return deleted, fmt.Errorf("failed to cleanup challenge: %w", err)
|
|
||||||
}
|
|
||||||
resp.Body.Close()
|
|
||||||
if resp.StatusCode != http.StatusOK {
|
|
||||||
return deleted, fmt.Errorf("cleanup failed with status %d", resp.StatusCode)
|
|
||||||
}
|
|
||||||
deleted = append(deleted, rec)
|
|
||||||
}
|
|
||||||
return deleted, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetRecords returns the records in the zone. Not used for ACME.
|
|
||||||
func (p *Provider) GetRecords(ctx context.Context, zone string) ([]libdns.Record, error) {
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// SetRecords sets the records in the zone. Not used for ACME.
|
|
||||||
func (p *Provider) SetRecords(ctx context.Context, zone string, records []libdns.Record) ([]libdns.Record, error) {
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Interface guards
|
|
||||||
var (
|
|
||||||
_ caddy.Module = (*Provider)(nil)
|
|
||||||
_ caddy.Provisioner = (*Provider)(nil)
|
|
||||||
_ caddyfile.Unmarshaler = (*Provider)(nil)
|
|
||||||
_ libdns.RecordAppender = (*Provider)(nil)
|
|
||||||
_ libdns.RecordDeleter = (*Provider)(nil)
|
|
||||||
_ libdns.RecordGetter = (*Provider)(nil)
|
|
||||||
_ libdns.RecordSetter = (*Provider)(nil)
|
|
||||||
)
|
|
||||||
`
|
|
||||||
}
|
|
||||||
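For reference, wiring this provider into a site's `tls` directive would look roughly like the following Caddyfile fragment. This is an illustrative sketch: the domain is a placeholder, and only the `endpoint` default (`http://localhost:6001/v1/internal/acme`) comes from the source above.

```Caddyfile
example.orama.network {
	tls {
		dns orama {
			endpoint http://localhost:6001/v1/internal/acme
		}
	}
}
```

Because `Provision` falls back to the same default, the inner `endpoint` line can usually be omitted entirely.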
@ -1,82 +0,0 @@
package build

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// Flags represents build command flags.
type Flags struct {
	Arch    string
	Output  string
	Verbose bool
	Sign    bool // Sign the archive manifest with rootwallet
}

// Handle is the entry point for the build command.
func Handle(args []string) {
	flags, err := parseFlags(args)
	if err != nil {
		if err == flag.ErrHelp {
			return
		}
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	b := NewBuilder(flags)
	if err := b.Build(); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

func parseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("build", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}

	fs.StringVar(&flags.Arch, "arch", "amd64", "Target architecture (amd64, arm64)")
	fs.StringVar(&flags.Output, "output", "", "Output archive path (default: /tmp/orama-<version>-linux-<arch>.tar.gz)")
	fs.BoolVar(&flags.Verbose, "verbose", false, "Verbose output")
	fs.BoolVar(&flags.Sign, "sign", false, "Sign the manifest with rootwallet (requires rw in PATH)")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	return flags, nil
}

// findProjectRoot walks up from the current directory looking for go.mod.
func findProjectRoot() (string, error) {
	dir, err := os.Getwd()
	if err != nil {
		return "", err
	}

	for {
		if _, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil {
			// Verify it's the network project
			if _, err := os.Stat(filepath.Join(dir, "cmd", "cli")); err == nil {
				return dir, nil
			}
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			break
		}
		dir = parent
	}

	return "", fmt.Errorf("could not find project root (no go.mod with cmd/cli found)")
}

// detectHostArch returns the host architecture in Go naming convention.
func detectHostArch() string {
	return runtime.GOARCH
}
@ -1,24 +0,0 @@
package buildcmd

import (
	"github.com/DeBrosOfficial/network/pkg/cli/build"
	"github.com/spf13/cobra"
)

// Cmd is the top-level build command.
var Cmd = &cobra.Command{
	Use:   "build",
	Short: "Build pre-compiled binary archive for deployment",
	Long: `Cross-compile all Orama binaries and dependencies for Linux,
then package them into a deployment archive. The archive includes:
  - Orama binaries (CLI, node, gateway, identity, SFU, TURN)
  - Olric, IPFS Kubo, IPFS Cluster, RQLite, CoreDNS, Caddy
  - Systemd namespace templates
  - manifest.json with checksums

The resulting archive can be pushed to nodes with 'orama node push'.`,
	Run: func(cmd *cobra.Command, args []string) {
		build.Handle(args)
	},
	DisableFlagParsing: true,
}
@ -1,25 +0,0 @@
package node

import (
	"github.com/DeBrosOfficial/network/pkg/cli/production/clean"
	"github.com/spf13/cobra"
)

var cleanCmd = &cobra.Command{
	Use:   "clean",
	Short: "Clean (wipe) remote nodes for reinstallation",
	Long: `Remove all Orama data, services, and configuration from remote nodes.
Anyone relay keys at /var/lib/anon/ are preserved.

This is a DESTRUCTIVE operation. Use --force to skip confirmation.

Examples:
  orama node clean --env testnet                 # Clean all testnet nodes
  orama node clean --env testnet --node 1.2.3.4  # Clean specific node
  orama node clean --env testnet --nuclear       # Also remove shared binaries
  orama node clean --env testnet --force         # Skip confirmation`,
	Run: func(cmd *cobra.Command, args []string) {
		clean.Handle(args)
	},
	DisableFlagParsing: true,
}
@ -1,26 +0,0 @@
package node

import (
	"github.com/DeBrosOfficial/network/pkg/cli/production/enroll"
	"github.com/spf13/cobra"
)

var enrollCmd = &cobra.Command{
	Use:   "enroll",
	Short: "Enroll an OramaOS node into the cluster",
	Long: `Enroll a freshly booted OramaOS node into the cluster.

The OramaOS node displays a registration code on port 9999. Provide this code
along with an invite token to complete enrollment. The Gateway pushes cluster
configuration (WireGuard, secrets, peer list) to the node.

Usage:
  orama node enroll --node-ip <ip> --code <code> --token <invite-token> --env <environment>

The node must be reachable over the public internet on port 9999 (enrollment only).
After enrollment, port 9999 is permanently closed and all communication goes over WireGuard.`,
	Run: func(cmd *cobra.Command, args []string) {
		enroll.Handle(args)
	},
	DisableFlagParsing: true,
}
@ -1,24 +0,0 @@
package node

import (
	"github.com/DeBrosOfficial/network/pkg/cli/production/push"
	"github.com/spf13/cobra"
)

var pushCmd = &cobra.Command{
	Use:   "push",
	Short: "Push binary archive to remote nodes",
	Long: `Upload a pre-built binary archive to remote nodes.

By default, uses fanout distribution: uploads to one hub node,
then distributes to all others via server-to-server SCP.

Examples:
  orama node push --env devnet                  # Fanout to all devnet nodes
  orama node push --env testnet --node 1.2.3.4  # Single node
  orama node push --env testnet --direct        # Sequential upload to each node`,
	Run: func(cmd *cobra.Command, args []string) {
		push.Handle(args)
	},
	DisableFlagParsing: true,
}
@ -1,31 +0,0 @@
package node

import (
	"github.com/DeBrosOfficial/network/pkg/cli/production/recover"
	"github.com/spf13/cobra"
)

var recoverRaftCmd = &cobra.Command{
	Use:   "recover-raft",
	Short: "Recover RQLite cluster from split-brain",
	Long: `Recover the RQLite Raft cluster from split-brain failure.

Strategy:
  1. Stop orama-node on ALL nodes simultaneously
  2. Backup and delete raft/ on non-leader nodes
  3. Start leader node, wait for Leader state
  4. Start remaining nodes in batches
  5. Verify cluster health

The --leader flag must point to the node with the highest commit index.

This is a DESTRUCTIVE operation. Use --force to skip confirmation.

Examples:
  orama node recover-raft --env testnet --leader 1.2.3.4
  orama node recover-raft --env devnet --leader 1.2.3.4 --force`,
	Run: func(cmd *cobra.Command, args []string) {
		recover.Handle(args)
	},
	DisableFlagParsing: true,
}
@ -1,22 +0,0 @@
package node

import (
	"github.com/DeBrosOfficial/network/pkg/cli/production/rollout"
	"github.com/spf13/cobra"
)

var rolloutCmd = &cobra.Command{
	Use:   "rollout",
	Short: "Build, push, and rolling upgrade all nodes in an environment",
	Long: `Full deployment pipeline: build binary archive locally, push to all nodes,
then perform a rolling upgrade (one node at a time).

Examples:
  orama node rollout --env testnet            # Full: build + push + rolling upgrade
  orama node rollout --env testnet --no-build # Skip build, use existing archive
  orama node rollout --env testnet --yes      # Skip confirmation`,
	Run: func(cmd *cobra.Command, args []string) {
		rollout.Handle(args)
	},
	DisableFlagParsing: true,
}
||||||
@ -1,26 +0,0 @@
|
|||||||
package node
|
|
||||||
|
|
||||||
import (
|
|
||||||
"github.com/DeBrosOfficial/network/pkg/cli/production/unlock"
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
)
|
|
||||||
|
|
||||||
var unlockCmd = &cobra.Command{
|
|
||||||
Use: "unlock",
|
|
||||||
Short: "Unlock an OramaOS genesis node",
|
|
||||||
Long: `Manually unlock a genesis OramaOS node that cannot reconstruct its LUKS key
|
|
||||||
via Shamir shares (not enough peers online).
|
|
||||||
|
|
||||||
This is only needed for the genesis node before enough peers have joined for
|
|
||||||
Shamir-based unlock. Once 5+ peers exist, the genesis node transitions to
|
|
||||||
normal Shamir unlock and this command is no longer needed.
|
|
||||||
|
|
||||||
Usage:
|
|
||||||
orama node unlock --genesis --node-ip <wg-ip>
|
|
||||||
|
|
||||||
The node must be reachable over WireGuard on port 9998.`,
|
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
|
||||||
unlock.Handle(args)
|
|
||||||
},
|
|
||||||
DisableFlagParsing: true,
|
|
||||||
}
|
|
||||||
@ -1,140 +0,0 @@
package sandboxcmd

import (
	"fmt"
	"os"

	"github.com/DeBrosOfficial/network/pkg/cli/sandbox"
	"github.com/spf13/cobra"
)

// Cmd is the root command for sandbox operations.
var Cmd = &cobra.Command{
	Use:   "sandbox",
	Short: "Manage ephemeral Hetzner Cloud clusters for testing",
	Long: `Spin up temporary 5-node Orama clusters on Hetzner Cloud for development and testing.

Setup (one-time):
  orama sandbox setup

Usage:
  orama sandbox create [--name <name>]   Create a new 5-node cluster
  orama sandbox destroy [--name <name>]  Tear down a cluster
  orama sandbox list                     List active sandboxes
  orama sandbox status [--name <name>]   Show cluster health
  orama sandbox rollout [--name <name>]  Build + push + rolling upgrade
  orama sandbox ssh <node-number>        SSH into a sandbox node (1-5)
  orama sandbox reset                    Delete all infra and config to start fresh`,
}

var setupCmd = &cobra.Command{
	Use:   "setup",
	Short: "Interactive setup: Hetzner API key, domain, floating IPs, SSH key",
	RunE: func(cmd *cobra.Command, args []string) error {
		return sandbox.Setup()
	},
}

var createCmd = &cobra.Command{
	Use:   "create",
	Short: "Create a new 5-node sandbox cluster (~5 min)",
	RunE: func(cmd *cobra.Command, args []string) error {
		name, _ := cmd.Flags().GetString("name")
		return sandbox.Create(name)
	},
}

var destroyCmd = &cobra.Command{
	Use:   "destroy",
	Short: "Destroy a sandbox cluster and release resources",
	RunE: func(cmd *cobra.Command, args []string) error {
		name, _ := cmd.Flags().GetString("name")
		force, _ := cmd.Flags().GetBool("force")
		return sandbox.Destroy(name, force)
	},
}

var listCmd = &cobra.Command{
	Use:   "list",
	Short: "List active sandbox clusters",
	RunE: func(cmd *cobra.Command, args []string) error {
		return sandbox.List()
	},
}

var statusCmd = &cobra.Command{
	Use:   "status",
	Short: "Show cluster health report",
	RunE: func(cmd *cobra.Command, args []string) error {
		name, _ := cmd.Flags().GetString("name")
		return sandbox.Status(name)
	},
}

var rolloutCmd = &cobra.Command{
	Use:   "rollout",
	Short: "Build + push + rolling upgrade to sandbox cluster",
	RunE: func(cmd *cobra.Command, args []string) error {
		name, _ := cmd.Flags().GetString("name")
		anyoneClient, _ := cmd.Flags().GetBool("anyone-client")
		return sandbox.Rollout(name, sandbox.RolloutFlags{
			AnyoneClient: anyoneClient,
		})
	},
}

var resetCmd = &cobra.Command{
	Use:   "reset",
	Short: "Delete all sandbox infrastructure and config to start fresh",
	Long: `Deletes floating IPs, firewall, and SSH key from Hetzner Cloud,
then removes the local config (~/.orama/sandbox.yaml) and SSH keys.

Use this when you need to switch datacenter locations (floating IPs are
location-bound) or to completely start over with sandbox setup.`,
	RunE: func(cmd *cobra.Command, args []string) error {
		return sandbox.Reset()
	},
}

var sshCmd = &cobra.Command{
	Use:   "ssh <node-number>",
	Short: "SSH into a sandbox node (1-5)",
	Args:  cobra.ExactArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		name, _ := cmd.Flags().GetString("name")
		var nodeNum int
		if _, err := fmt.Sscanf(args[0], "%d", &nodeNum); err != nil {
			fmt.Fprintf(os.Stderr, "Invalid node number: %s (expected 1-5)\n", args[0])
			os.Exit(1)
		}
		return sandbox.SSHInto(name, nodeNum)
	},
}

func init() {
	// create flags
	createCmd.Flags().String("name", "", "Sandbox name (random if not specified)")

	// destroy flags
	destroyCmd.Flags().String("name", "", "Sandbox name (uses active if not specified)")
	destroyCmd.Flags().Bool("force", false, "Skip confirmation")

	// status flags
	statusCmd.Flags().String("name", "", "Sandbox name (uses active if not specified)")

	// rollout flags
	rolloutCmd.Flags().String("name", "", "Sandbox name (uses active if not specified)")
	rolloutCmd.Flags().Bool("anyone-client", false, "Enable Anyone client (SOCKS5 proxy) on all nodes")

	// ssh flags
	sshCmd.Flags().String("name", "", "Sandbox name (uses active if not specified)")

	Cmd.AddCommand(setupCmd)
	Cmd.AddCommand(createCmd)
	Cmd.AddCommand(destroyCmd)
	Cmd.AddCommand(listCmd)
	Cmd.AddCommand(statusCmd)
	Cmd.AddCommand(rolloutCmd)
	Cmd.AddCommand(sshCmd)
	Cmd.AddCommand(resetCmd)
}
@ -1,189 +0,0 @@
package clean

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// Flags holds clean command flags.
type Flags struct {
	Env     string // Target environment
	Node    string // Single node IP
	Nuclear bool   // Also remove shared binaries
	Force   bool   // Skip confirmation
}

// Handle is the entry point for the clean command.
func Handle(args []string) {
	flags, err := parseFlags(args)
	if err != nil {
		if err == flag.ErrHelp {
			return
		}
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	if err := execute(flags); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

func parseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("clean", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}
	fs.StringVar(&flags.Env, "env", "", "Target environment (devnet, testnet) [required]")
	fs.StringVar(&flags.Node, "node", "", "Clean a single node IP only")
	fs.BoolVar(&flags.Nuclear, "nuclear", false, "Also remove shared binaries (rqlited, ipfs, caddy, etc.)")
	fs.BoolVar(&flags.Force, "force", false, "Skip confirmation (DESTRUCTIVE)")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	if flags.Env == "" {
		return nil, fmt.Errorf("--env is required\nUsage: orama node clean --env <devnet|testnet> --force")
	}

	return flags, nil
}

func execute(flags *Flags) error {
	nodes, err := remotessh.LoadEnvNodes(flags.Env)
	if err != nil {
		return err
	}

	cleanup, err := remotessh.PrepareNodeKeys(nodes)
	if err != nil {
		return err
	}
	defer cleanup()

	if flags.Node != "" {
		nodes = remotessh.FilterByIP(nodes, flags.Node)
		if len(nodes) == 0 {
			return fmt.Errorf("node %s not found in %s environment", flags.Node, flags.Env)
		}
	}

	fmt.Printf("Clean %s: %d node(s)\n", flags.Env, len(nodes))
	if flags.Nuclear {
		fmt.Printf("  Mode: NUCLEAR (removes binaries too)\n")
	}
	for _, n := range nodes {
		fmt.Printf("  - %s (%s)\n", n.Host, n.Role)
	}
	fmt.Println()

	// Confirm unless --force
	if !flags.Force {
		fmt.Printf("This will DESTROY all data on these nodes. Anyone relay keys are preserved.\n")
		fmt.Printf("Type 'yes' to confirm: ")
		reader := bufio.NewReader(os.Stdin)
		input, _ := reader.ReadString('\n')
		if strings.TrimSpace(input) != "yes" {
			fmt.Println("Aborted.")
			return nil
		}
		fmt.Println()
	}

	// Clean each node
	var failed []string
	for i, node := range nodes {
		fmt.Printf("[%d/%d] Cleaning %s...\n", i+1, len(nodes), node.Host)
		if err := cleanNode(node, flags.Nuclear); err != nil {
			fmt.Fprintf(os.Stderr, "  ✗ %s: %v\n", node.Host, err)
			failed = append(failed, node.Host)
			continue
		}
		fmt.Printf("  ✓ %s cleaned\n\n", node.Host)
	}

	if len(failed) > 0 {
		return fmt.Errorf("clean failed on %d node(s): %s", len(failed), strings.Join(failed, ", "))
	}

	fmt.Printf("✓ Clean complete (%d nodes)\n", len(nodes))
	fmt.Printf("  Anyone relay keys preserved at /var/lib/anon/\n")
	fmt.Printf("  To reinstall: orama node install --vps-ip <ip> ...\n")
	return nil
}

func cleanNode(node inspector.Node, nuclear bool) error {
	sudo := remotessh.SudoPrefix(node)

	nuclearFlag := ""
	if nuclear {
		nuclearFlag = "NUCLEAR=1"
	}

	// The cleanup script runs on the remote node
	script := fmt.Sprintf(`%sbash -c '
%s

# Stop services
for svc in caddy coredns orama-node orama-gateway orama-ipfs-cluster orama-ipfs orama-olric orama-anyone-relay orama-anyone-client; do
  systemctl stop "$svc" 2>/dev/null
  systemctl disable "$svc" 2>/dev/null
done

# Kill stragglers
pkill -9 -f "orama-node" 2>/dev/null || true
pkill -9 -f "olric-server" 2>/dev/null || true
pkill -9 -f "ipfs" 2>/dev/null || true

# Remove systemd units
rm -f /etc/systemd/system/orama-*.service
rm -f /etc/systemd/system/coredns.service
rm -f /etc/systemd/system/caddy.service
systemctl daemon-reload 2>/dev/null

# Tear down WireGuard
ip link delete wg0 2>/dev/null || true
rm -f /etc/wireguard/wg0.conf

# Reset firewall
ufw --force reset 2>/dev/null || true
ufw default deny incoming 2>/dev/null || true
ufw default allow outgoing 2>/dev/null || true
ufw allow 22/tcp 2>/dev/null || true
ufw --force enable 2>/dev/null || true

# Remove data
rm -rf /opt/orama

# Clean configs
rm -rf /etc/coredns
rm -rf /etc/caddy
rm -f /tmp/orama-*.sh /tmp/network-source.tar.gz /tmp/orama-*.tar.gz

# Nuclear: remove binaries
if [ -n "$NUCLEAR" ]; then
  rm -f /usr/local/bin/orama /usr/local/bin/orama-node /usr/local/bin/gateway
  rm -f /usr/local/bin/identity /usr/local/bin/sfu /usr/local/bin/turn
  rm -f /usr/local/bin/olric-server /usr/local/bin/ipfs /usr/local/bin/ipfs-cluster-service
  rm -f /usr/local/bin/rqlited /usr/local/bin/coredns
  rm -f /usr/bin/caddy
fi

# Verify Anyone keys preserved
if [ -d /var/lib/anon ]; then
  echo "  Anyone relay keys preserved at /var/lib/anon/"
fi

echo "  Node cleaned successfully"
'`, sudo, nuclearFlag)

	return remotessh.RunSSHStreaming(node, script)
}
@ -1,123 +0,0 @@
// Package enroll implements the OramaOS node enrollment command.
//
// Flow:
//  1. Operator fetches registration code from the OramaOS node (port 9999)
//  2. Operator provides code + invite token to Gateway
//  3. Gateway validates, generates cluster config, pushes to node
//  4. Node configures WireGuard, encrypts data partition, starts services
package enroll

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Handle processes the enroll command.
func Handle(args []string) {
	flags, err := ParseFlags(args)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	// Step 1: Obtain the registration code from the OramaOS node
	var code string
	if flags.Code != "" {
		// Code provided directly — skip fetch
		code = flags.Code
	} else {
		fmt.Printf("Fetching registration code from %s:9999...\n", flags.NodeIP)
		fetchedCode, err := fetchRegistrationCode(flags.NodeIP)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Error: could not reach OramaOS node: %v\n", err)
			fmt.Fprintf(os.Stderr, "Make sure the node is booted and port 9999 is reachable.\n")
			os.Exit(1)
		}
		code = fetchedCode
	}

	fmt.Printf("Registration code: %s\n", code)

	// Step 2: Send enrollment request to the Gateway
	fmt.Printf("Sending enrollment to Gateway at %s...\n", flags.GatewayURL)

	if err := enrollWithGateway(flags.GatewayURL, flags.Token, code, flags.NodeIP); err != nil {
		fmt.Fprintf(os.Stderr, "Error: enrollment failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Printf("Node %s enrolled successfully.\n", flags.NodeIP)
	fmt.Printf("The node is now configuring WireGuard and encrypting its data partition.\n")
	fmt.Printf("This may take a few minutes. Check status with: orama node status --env %s\n", flags.Env)
}

// fetchRegistrationCode retrieves the one-time registration code from the OramaOS node.
func fetchRegistrationCode(nodeIP string) (string, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:9999/", nodeIP))
	if err != nil {
		return "", fmt.Errorf("GET failed: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusGone {
		return "", fmt.Errorf("registration code already served (node may be partially enrolled)")
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %d", resp.StatusCode)
	}

	var result struct {
		Code    string `json:"code"`
		Expires string `json:"expires"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return "", fmt.Errorf("invalid response: %w", err)
	}

	return result.Code, nil
}

// enrollWithGateway sends the enrollment request to the Gateway, which validates
// the code and token, then pushes cluster configuration to the OramaOS node.
func enrollWithGateway(gatewayURL, token, code, nodeIP string) error {
	body, _ := json.Marshal(map[string]string{
		"code":    code,
		"token":   token,
		"node_ip": nodeIP,
	})

	req, err := http.NewRequest("POST", gatewayURL+"/v1/node/enroll", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+token)

	client := &http.Client{Timeout: 60 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		return fmt.Errorf("request failed: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusUnauthorized {
		return fmt.Errorf("invalid or expired invite token")
	}
	if resp.StatusCode == http.StatusBadRequest {
		respBody, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("bad request: %s", string(respBody))
	}
	if resp.StatusCode != http.StatusOK {
		respBody, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("gateway returned %d: %s", resp.StatusCode, string(respBody))
	}

	return nil
}
@ -1,46 +0,0 @@
|
|||||||
package enroll

import (
	"flag"
	"fmt"
	"os"
)

// Flags holds the parsed command-line flags for the enroll command.
type Flags struct {
	NodeIP     string // Public IP of the OramaOS node
	Code       string // Registration code (optional — fetched automatically if not provided)
	Token      string // Invite token for cluster joining
	GatewayURL string // Gateway HTTPS URL
	Env        string // Environment name (for display only)
}

// ParseFlags parses the enroll command flags.
func ParseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("enroll", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}

	fs.StringVar(&flags.NodeIP, "node-ip", "", "Public IP of the OramaOS node (required)")
	fs.StringVar(&flags.Code, "code", "", "Registration code from the node (auto-fetched if not provided)")
	fs.StringVar(&flags.Token, "token", "", "Invite token for cluster joining (required)")
	fs.StringVar(&flags.GatewayURL, "gateway", "", "Gateway URL (required, e.g. https://gateway.example.com)")
	fs.StringVar(&flags.Env, "env", "production", "Environment name")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	if flags.NodeIP == "" {
		return nil, fmt.Errorf("--node-ip is required")
	}
	if flags.Token == "" {
		return nil, fmt.Errorf("--token is required")
	}
	if flags.GatewayURL == "" {
		return nil, fmt.Errorf("--gateway is required")
	}

	return flags, nil
}
@ -1,261 +0,0 @@
package push

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// Flags holds push command flags.
type Flags struct {
	Env    string // Target environment (devnet, testnet)
	Node   string // Single node IP (optional)
	Direct bool   // Sequential upload to each node (no fanout)
}

// Handle is the entry point for the push command.
func Handle(args []string) {
	flags, err := parseFlags(args)
	if err != nil {
		if err == flag.ErrHelp {
			return
		}
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	if err := execute(flags); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

func parseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("push", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}
	fs.StringVar(&flags.Env, "env", "", "Target environment (devnet, testnet) [required]")
	fs.StringVar(&flags.Node, "node", "", "Push to a single node IP only")
	fs.BoolVar(&flags.Direct, "direct", false, "Upload directly to each node (no hub fanout)")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	if flags.Env == "" {
		return nil, fmt.Errorf("--env is required\nUsage: orama node push --env <devnet|testnet>")
	}

	return flags, nil
}

func execute(flags *Flags) error {
	// Find archive
	archivePath := findNewestArchive()
	if archivePath == "" {
		return fmt.Errorf("no binary archive found in /tmp/ (run `orama build` first)")
	}

	info, _ := os.Stat(archivePath)
	fmt.Printf("Archive: %s (%s)\n", filepath.Base(archivePath), formatBytes(info.Size()))

	// Resolve nodes
	nodes, err := remotessh.LoadEnvNodes(flags.Env)
	if err != nil {
		return err
	}

	// Prepare wallet-derived SSH keys
	cleanup, err := remotessh.PrepareNodeKeys(nodes)
	if err != nil {
		return err
	}
	defer cleanup()

	// Filter to single node if specified
	if flags.Node != "" {
		nodes = remotessh.FilterByIP(nodes, flags.Node)
		if len(nodes) == 0 {
			return fmt.Errorf("node %s not found in %s environment", flags.Node, flags.Env)
		}
	}

	fmt.Printf("Environment: %s (%d nodes)\n\n", flags.Env, len(nodes))

	if flags.Direct || len(nodes) == 1 {
		return pushDirect(archivePath, nodes)
	}

	// Load keys into ssh-agent for fanout forwarding
	if err := remotessh.LoadAgentKeys(nodes); err != nil {
		return fmt.Errorf("load agent keys for fanout: %w", err)
	}

	return pushFanout(archivePath, nodes)
}

// pushDirect uploads the archive to each node sequentially.
func pushDirect(archivePath string, nodes []inspector.Node) error {
	remotePath := "/tmp/" + filepath.Base(archivePath)

	for i, node := range nodes {
		fmt.Printf("[%d/%d] Pushing to %s...\n", i+1, len(nodes), node.Host)

		if err := remotessh.UploadFile(node, archivePath, remotePath); err != nil {
			return fmt.Errorf("upload to %s failed: %w", node.Host, err)
		}

		if err := extractOnNode(node, remotePath); err != nil {
			return fmt.Errorf("extract on %s failed: %w", node.Host, err)
		}

		fmt.Printf("  ✓ %s done\n\n", node.Host)
	}

	fmt.Printf("✓ Push complete (%d nodes)\n", len(nodes))
	return nil
}

// pushFanout uploads to a hub node, then fans out to all others via agent forwarding.
func pushFanout(archivePath string, nodes []inspector.Node) error {
	hub := remotessh.PickHubNode(nodes)
	remotePath := "/tmp/" + filepath.Base(archivePath)

	// Step 1: Upload to hub
	fmt.Printf("[hub] Uploading to %s...\n", hub.Host)
	if err := remotessh.UploadFile(hub, archivePath, remotePath); err != nil {
		return fmt.Errorf("upload to hub %s failed: %w", hub.Host, err)
	}

	if err := extractOnNode(hub, remotePath); err != nil {
		return fmt.Errorf("extract on hub %s failed: %w", hub.Host, err)
	}
	fmt.Printf("  ✓ hub %s done\n\n", hub.Host)

	// Step 2: Fan out from hub to remaining nodes in parallel (via agent forwarding)
	remaining := make([]inspector.Node, 0, len(nodes)-1)
	for _, n := range nodes {
		if n.Host != hub.Host {
			remaining = append(remaining, n)
		}
	}

	if len(remaining) == 0 {
		fmt.Printf("✓ Push complete (1 node)\n")
		return nil
	}

	fmt.Printf("[fanout] Distributing from %s to %d nodes...\n", hub.Host, len(remaining))

	var wg sync.WaitGroup
	errors := make([]error, len(remaining))

	for i, target := range remaining {
		wg.Add(1)
		go func(idx int, target inspector.Node) {
			defer wg.Done()

			// SCP from hub to target (agent forwarding serves the key)
			scpCmd := fmt.Sprintf("scp -o StrictHostKeyChecking=accept-new -o ConnectTimeout=10 %s %s@%s:%s",
				remotePath, target.User, target.Host, remotePath)

			if err := remotessh.RunSSHStreaming(hub, scpCmd, remotessh.WithAgentForward()); err != nil {
				errors[idx] = fmt.Errorf("fanout to %s failed: %w", target.Host, err)
				return
			}

			if err := extractOnNodeVia(hub, target, remotePath); err != nil {
				errors[idx] = fmt.Errorf("extract on %s failed: %w", target.Host, err)
				return
			}

			fmt.Printf("  ✓ %s done\n", target.Host)
		}(i, target)
	}

	wg.Wait()

	// Check for errors
	var failed []string
	for i, err := range errors {
		if err != nil {
			fmt.Fprintf(os.Stderr, "  ✗ %s: %v\n", remaining[i].Host, err)
			failed = append(failed, remaining[i].Host)
		}
	}

	if len(failed) > 0 {
		return fmt.Errorf("push failed on %d node(s): %s", len(failed), strings.Join(failed, ", "))
	}

	fmt.Printf("\n✓ Push complete (%d nodes)\n", len(nodes))
	return nil
}

// extractOnNode extracts the archive on a remote node.
func extractOnNode(node inspector.Node, remotePath string) error {
	sudo := remotessh.SudoPrefix(node)
	cmd := fmt.Sprintf("%smkdir -p /opt/orama && %star xzf %s -C /opt/orama && %srm -f %s",
		sudo, sudo, remotePath, sudo, remotePath)
	return remotessh.RunSSHStreaming(node, cmd)
}

// extractOnNodeVia extracts the archive on a target node by SSHing through the hub.
// Uses agent forwarding so the hub can authenticate to the target.
func extractOnNodeVia(hub, target inspector.Node, remotePath string) error {
	sudo := remotessh.SudoPrefix(target)
	extractCmd := fmt.Sprintf("%smkdir -p /opt/orama && %star xzf %s -C /opt/orama && %srm -f %s",
		sudo, sudo, remotePath, sudo, remotePath)

	// SSH from hub to target to extract (agent forwarding serves the key)
	sshCmd := fmt.Sprintf("ssh -o StrictHostKeyChecking=accept-new -o ConnectTimeout=10 %s@%s '%s'",
		target.User, target.Host, extractCmd)

	return remotessh.RunSSHStreaming(hub, sshCmd, remotessh.WithAgentForward())
}

// findNewestArchive finds the newest binary archive in /tmp/.
func findNewestArchive() string {
	entries, err := os.ReadDir("/tmp")
	if err != nil {
		return ""
	}

	var best string
	var bestMod int64
	for _, entry := range entries {
		name := entry.Name()
		if strings.HasPrefix(name, "orama-") && strings.Contains(name, "-linux-") && strings.HasSuffix(name, ".tar.gz") {
			info, err := entry.Info()
			if err != nil {
				continue
			}
			if info.ModTime().Unix() > bestMod {
				best = filepath.Join("/tmp", name)
				bestMod = info.ModTime().Unix()
			}
		}
	}

	return best
}

func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
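The `formatBytes` helper above repeatedly divides by 1024 and picks the unit letter from `"KMGTPE"` by exponent. A standalone sketch (the `main` wrapper is added here for illustration and is not part of the package) shows the rounding behavior:

```go
package main

import "fmt"

// formatBytes mirrors the package helper: repeated division by 1024,
// unit letter chosen from "KMGTPE" by the final exponent.
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	for _, b := range []int64{512, 1536, 1 << 20, 3 << 30} {
		fmt.Println(b, "->", formatBytes(b))
		// 512 B, 1.5 KB, 1.0 MB, 3.0 GB
	}
}
```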
@ -1,312 +0,0 @@
package recover

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// Flags holds recover-raft command flags.
type Flags struct {
	Env    string // Target environment
	Leader string // Leader node IP (highest commit index)
	Force  bool   // Skip confirmation
}

const (
	raftDir   = "/opt/orama/.orama/data/rqlite/raft"
	backupDir = "/tmp/rqlite-raft-backup"
)

// Handle is the entry point for the recover-raft command.
func Handle(args []string) {
	flags, err := parseFlags(args)
	if err != nil {
		if err == flag.ErrHelp {
			return
		}
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	if err := execute(flags); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

func parseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("recover-raft", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}
	fs.StringVar(&flags.Env, "env", "", "Target environment (devnet, testnet) [required]")
	fs.StringVar(&flags.Leader, "leader", "", "Leader node IP (node with highest commit index) [required]")
	fs.BoolVar(&flags.Force, "force", false, "Skip confirmation (DESTRUCTIVE)")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	if flags.Env == "" {
		return nil, fmt.Errorf("--env is required\nUsage: orama node recover-raft --env <devnet|testnet> --leader <ip>")
	}
	if flags.Leader == "" {
		return nil, fmt.Errorf("--leader is required\nUsage: orama node recover-raft --env <devnet|testnet> --leader <ip>")
	}

	return flags, nil
}

func execute(flags *Flags) error {
	nodes, err := remotessh.LoadEnvNodes(flags.Env)
	if err != nil {
		return err
	}

	cleanup, err := remotessh.PrepareNodeKeys(nodes)
	if err != nil {
		return err
	}
	defer cleanup()

	// Find leader node
	leaderNodes := remotessh.FilterByIP(nodes, flags.Leader)
	if len(leaderNodes) == 0 {
		return fmt.Errorf("leader %s not found in %s environment", flags.Leader, flags.Env)
	}
	leader := leaderNodes[0]

	// Separate leader from followers
	var followers []inspector.Node
	for _, n := range nodes {
		if n.Host != leader.Host {
			followers = append(followers, n)
		}
	}

	// Print plan
	fmt.Printf("Recover Raft: %s (%d nodes)\n", flags.Env, len(nodes))
	fmt.Printf("  Leader candidate: %s (%s) — raft/ data preserved\n", leader.Host, leader.Role)
	for _, n := range followers {
		fmt.Printf("  - %s (%s) — raft/ will be deleted\n", n.Host, n.Role)
	}
	fmt.Println()

	// Confirm unless --force
	if !flags.Force {
		fmt.Printf("⚠️  THIS WILL:\n")
		fmt.Printf("  1. Stop orama-node on ALL %d nodes\n", len(nodes))
		fmt.Printf("  2. DELETE raft/ data on %d nodes (backup to %s)\n", len(followers), backupDir)
		fmt.Printf("  3. Keep raft/ data ONLY on %s (leader candidate)\n", leader.Host)
		fmt.Printf("  4. Restart all nodes to reform the cluster\n")
		fmt.Printf("\nType 'yes' to confirm: ")
		reader := bufio.NewReader(os.Stdin)
		input, _ := reader.ReadString('\n')
		if strings.TrimSpace(input) != "yes" {
			fmt.Println("Aborted.")
			return nil
		}
		fmt.Println()
	}

	// Phase 1: Stop orama-node on ALL nodes
	if err := phase1StopAll(nodes); err != nil {
		return fmt.Errorf("phase 1 (stop all): %w", err)
	}

	// Phase 2: Backup and delete raft/ on non-leader nodes
	if err := phase2ClearFollowers(followers); err != nil {
		return fmt.Errorf("phase 2 (clear followers): %w", err)
	}
	fmt.Printf("  Leader node %s raft/ data preserved.\n\n", leader.Host)

	// Phase 3: Start leader node and wait for Leader state
	if err := phase3StartLeader(leader); err != nil {
		return fmt.Errorf("phase 3 (start leader): %w", err)
	}

	// Phase 4: Start remaining nodes in batches
	if err := phase4StartFollowers(followers); err != nil {
		return fmt.Errorf("phase 4 (start followers): %w", err)
	}

	// Phase 5: Verify cluster health
	phase5Verify(nodes, leader)

	return nil
}

func phase1StopAll(nodes []inspector.Node) error {
	fmt.Printf("== Phase 1: Stopping orama-node on all %d nodes ==\n", len(nodes))

	var failed []inspector.Node
	for _, node := range nodes {
		sudo := remotessh.SudoPrefix(node)
		fmt.Printf("  Stopping %s ... ", node.Host)

		cmd := fmt.Sprintf("%ssystemctl stop orama-node 2>&1 && echo STOPPED", sudo)
		if err := remotessh.RunSSHStreaming(node, cmd); err != nil {
			fmt.Printf("FAILED\n")
			failed = append(failed, node)
			continue
		}
		fmt.Println()
	}

	// Kill stragglers
	if len(failed) > 0 {
		fmt.Printf("\n⚠️  %d nodes failed to stop. Attempting kill...\n", len(failed))
		for _, node := range failed {
			sudo := remotessh.SudoPrefix(node)
			cmd := fmt.Sprintf("%skillall -9 orama-node rqlited 2>/dev/null; echo KILLED", sudo)
			_ = remotessh.RunSSHStreaming(node, cmd)
		}
	}

	fmt.Printf("\nWaiting 5s for processes to fully stop...\n")
	time.Sleep(5 * time.Second)
	fmt.Println()

	return nil
}

func phase2ClearFollowers(followers []inspector.Node) error {
	fmt.Printf("== Phase 2: Clearing raft state on %d non-leader nodes ==\n", len(followers))

	for _, node := range followers {
		sudo := remotessh.SudoPrefix(node)
		fmt.Printf("  Clearing %s ... ", node.Host)

		script := fmt.Sprintf(`%sbash -c '
rm -rf %s
if [ -d %s ]; then
  cp -r %s %s 2>/dev/null || true
  rm -rf %s
  echo "CLEARED (backup at %s)"
else
  echo "NO_RAFT_DIR (nothing to clear)"
fi
'`, sudo, backupDir, raftDir, raftDir, backupDir, raftDir, backupDir)

		if err := remotessh.RunSSHStreaming(node, script); err != nil {
			fmt.Printf("FAILED: %v\n", err)
			continue
		}
		fmt.Println()
	}

	return nil
}

func phase3StartLeader(leader inspector.Node) error {
	fmt.Printf("== Phase 3: Starting leader node (%s) ==\n", leader.Host)

	sudo := remotessh.SudoPrefix(leader)
	startCmd := fmt.Sprintf("%ssystemctl start orama-node", sudo)
	if err := remotessh.RunSSHStreaming(leader, startCmd); err != nil {
		return fmt.Errorf("failed to start leader node %s: %w", leader.Host, err)
	}

	fmt.Printf("  Waiting for leader to become Leader...\n")
	maxWait := 120
	elapsed := 0

	for elapsed < maxWait {
		// Check raft state via RQLite status endpoint
		checkCmd := `curl -s --max-time 3 http://localhost:5001/status 2>/dev/null | python3 -c "
import sys,json
try:
    d=json.load(sys.stdin)
    print(d.get('store',{}).get('raft',{}).get('state',''))
except:
    print('')
" 2>/dev/null || echo ""`

		// We can't easily capture output from RunSSHStreaming, so we use a simple approach:
		// check via a combined command that prints a marker.
		stateCheckCmd := fmt.Sprintf(`state=$(%s); echo "RAFT_STATE=$state"`, checkCmd)
		// Since RunSSHStreaming prints to stdout, we poll and let the user see the state.
		fmt.Printf("  ... polling (%ds / %ds)\n", elapsed, maxWait)

		// Try to check state - the output goes to stdout via streaming
		_ = remotessh.RunSSHStreaming(leader, stateCheckCmd)

		time.Sleep(5 * time.Second)
		elapsed += 5
	}

	fmt.Printf("  Leader start complete. Check output above for state.\n\n")
	return nil
}

func phase4StartFollowers(followers []inspector.Node) error {
	fmt.Printf("== Phase 4: Starting %d remaining nodes ==\n", len(followers))

	batchSize := 3
	for i, node := range followers {
		sudo := remotessh.SudoPrefix(node)
		fmt.Printf("  Starting %s ... ", node.Host)

		cmd := fmt.Sprintf("%ssystemctl start orama-node && echo STARTED", sudo)
		if err := remotessh.RunSSHStreaming(node, cmd); err != nil {
			fmt.Printf("FAILED: %v\n", err)
			continue
		}
		fmt.Println()

		// Batch delay for cluster stability
		if (i+1)%batchSize == 0 && i+1 < len(followers) {
			fmt.Printf("  (waiting 15s between batches for cluster stability)\n")
			time.Sleep(15 * time.Second)
		}
	}

	fmt.Println()
	return nil
}

func phase5Verify(nodes []inspector.Node, leader inspector.Node) {
	fmt.Printf("== Phase 5: Waiting for cluster to stabilize ==\n")

	// Wait in 30s increments
	for _, s := range []int{30, 60, 90, 120} {
		time.Sleep(30 * time.Second)
		fmt.Printf("  ... %ds\n", s)
	}

	fmt.Printf("\n== Cluster status ==\n")
	for _, node := range nodes {
		marker := ""
		if node.Host == leader.Host {
			marker = " ← LEADER"
		}

		checkCmd := `curl -s --max-time 5 http://localhost:5001/status 2>/dev/null | python3 -c "
import sys,json
try:
    d=json.load(sys.stdin)
    r=d.get('store',{}).get('raft',{})
    n=d.get('store',{}).get('num_nodes','?')
    print(f'state={r.get(\"state\",\"?\")} commit={r.get(\"commit_index\",\"?\")} leader={r.get(\"leader\",{}).get(\"node_id\",\"?\")} nodes={n}')
except:
    print('NO_RESPONSE')
" 2>/dev/null || echo "SSH_FAILED"`

		fmt.Printf("  %s%s: ", node.Host, marker)
		_ = remotessh.RunSSHStreaming(node, checkCmd)
		fmt.Println()
	}

	fmt.Printf("\n== Recovery complete ==\n\n")
	fmt.Printf("Next steps:\n")
	fmt.Printf("  1. Run 'orama monitor report --env <env>' to verify full cluster health\n")
	fmt.Printf("  2. If some nodes show Candidate state, give them more time (up to 5 min)\n")
	fmt.Printf("  3. If nodes fail to join, check /opt/orama/.orama/logs/rqlite-node.log on the node\n")
}
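The recovery phases above shell out to a python3 one-liner to dig `store.raft.state` out of RQLite's `/status` JSON. The same extraction can be sketched client-side in Go with nested type assertions; this is illustrative only (the JSON shape is assumed from the one-liner, and `statusJSON` is a made-up sample), not part of the package:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// raftState digs store.raft.state out of an RQLite /status payload,
// returning "" when any level of the nesting is missing or malformed.
func raftState(payload []byte) string {
	var doc map[string]interface{}
	if err := json.Unmarshal(payload, &doc); err != nil {
		return ""
	}
	store, _ := doc["store"].(map[string]interface{})
	raft, _ := store["raft"].(map[string]interface{})
	state, _ := raft["state"].(string)
	return state
}

func main() {
	statusJSON := []byte(`{"store":{"raft":{"state":"Leader","commit_index":42}}}`)
	fmt.Println(raftState(statusJSON)) // Leader
}
```

Reads from a nil map yield the zero value in Go, so the chained assertions degrade safely to `""` without extra nil checks.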
@ -1,102 +0,0 @@
package rollout

import (
	"flag"
	"fmt"
	"os"
	"time"

	"github.com/DeBrosOfficial/network/pkg/cli/build"
	"github.com/DeBrosOfficial/network/pkg/cli/production/push"
	"github.com/DeBrosOfficial/network/pkg/cli/production/upgrade"
)

// Flags holds rollout command flags.
type Flags struct {
	Env     string // Target environment (devnet, testnet)
	NoBuild bool   // Skip the build step
	Yes     bool   // Skip confirmation
	Delay   int    // Delay in seconds between nodes
}

// Handle is the entry point for the rollout command.
func Handle(args []string) {
	flags, err := parseFlags(args)
	if err != nil {
		if err == flag.ErrHelp {
			return
		}
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	if err := execute(flags); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

func parseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("rollout", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}
	fs.StringVar(&flags.Env, "env", "", "Target environment (devnet, testnet) [required]")
	fs.BoolVar(&flags.NoBuild, "no-build", false, "Skip build step (use existing archive)")
	fs.BoolVar(&flags.Yes, "yes", false, "Skip confirmation")
	fs.IntVar(&flags.Delay, "delay", 30, "Delay in seconds between nodes during rolling upgrade")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	if flags.Env == "" {
		return nil, fmt.Errorf("--env is required\nUsage: orama node rollout --env <devnet|testnet>")
	}

	return flags, nil
}

func execute(flags *Flags) error {
	start := time.Now()

	fmt.Printf("Rollout to %s\n", flags.Env)
	fmt.Printf("  Build: %s\n", boolStr(!flags.NoBuild, "yes", "skip"))
	fmt.Printf("  Delay: %ds between nodes\n\n", flags.Delay)

	// Step 1: Build
	if !flags.NoBuild {
		fmt.Printf("Step 1/3: Building binary archive...\n\n")
		buildFlags := &build.Flags{
			Arch: "amd64",
		}
		builder := build.NewBuilder(buildFlags)
		if err := builder.Build(); err != nil {
			return fmt.Errorf("build failed: %w", err)
		}
		fmt.Println()
	} else {
		fmt.Printf("Step 1/3: Build skipped (--no-build)\n\n")
	}

	// Step 2: Push
	fmt.Printf("Step 2/3: Pushing to all %s nodes...\n\n", flags.Env)
	push.Handle([]string{"--env", flags.Env})

	fmt.Println()

	// Step 3: Rolling upgrade
	fmt.Printf("Step 3/3: Rolling upgrade across %s...\n\n", flags.Env)
	upgrade.Handle([]string{"--env", flags.Env, "--delay", fmt.Sprintf("%d", flags.Delay)})

	elapsed := time.Since(start).Round(time.Second)
	fmt.Printf("\nRollout complete in %s\n", elapsed)
	return nil
}

func boolStr(b bool, trueStr, falseStr string) string {
	if b {
		return trueStr
	}
	return falseStr
}
@ -1,166 +0,0 @@
// Package unlock implements the genesis node unlock command.
//
// When the genesis OramaOS node reboots before enough peers exist for
// Shamir-based LUKS key reconstruction, the operator must manually provide
// the LUKS key. This command reads the encrypted genesis key from the
// node's rootfs, decrypts it with the rootwallet, and sends it to the agent.
package unlock

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"flag"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
	"time"
)

// Flags holds parsed command-line flags.
type Flags struct {
	NodeIP  string // WireGuard IP of the OramaOS node
	Genesis bool   // Must be set to confirm genesis unlock
	KeyFile string // Path to the encrypted genesis key file (optional override)
}

// Handle processes the unlock command.
func Handle(args []string) {
	flags, err := parseFlags(args)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	if !flags.Genesis {
		fmt.Fprintf(os.Stderr, "Error: --genesis flag is required to confirm genesis unlock\n")
		os.Exit(1)
	}

	// Step 1: Read the encrypted genesis key from the node
	fmt.Printf("Fetching encrypted genesis key from %s...\n", flags.NodeIP)
	encKey, err := fetchGenesisKey(flags.NodeIP)
	if err != nil && flags.KeyFile == "" {
		fmt.Fprintf(os.Stderr, "Error: could not fetch genesis key from node: %v\n", err)
		fmt.Fprintf(os.Stderr, "You can provide the key file directly with --key-file\n")
		os.Exit(1)
	}

	if flags.KeyFile != "" {
		data, readErr := os.ReadFile(flags.KeyFile)
		if readErr != nil {
			fmt.Fprintf(os.Stderr, "Error: could not read key file: %v\n", readErr)
			os.Exit(1)
		}
		encKey = strings.TrimSpace(string(data))
	}

	// Step 2: Decrypt with rootwallet
	fmt.Println("Decrypting genesis key with rootwallet...")
	luksKey, err := decryptGenesisKey(encKey)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error: decryption failed: %v\n", err)
		os.Exit(1)
	}

	// Step 3: Send LUKS key to the agent over WireGuard
	fmt.Printf("Sending LUKS key to agent at %s:9998...\n", flags.NodeIP)
	if err := sendUnlockKey(flags.NodeIP, luksKey); err != nil {
		fmt.Fprintf(os.Stderr, "Error: unlock failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Println("Genesis node unlocked successfully.")
	fmt.Println("The node is decrypting and mounting its data partition.")
}

func parseFlags(args []string) (*Flags, error) {
	fs := flag.NewFlagSet("unlock", flag.ContinueOnError)
	fs.SetOutput(os.Stderr)

	flags := &Flags{}
	fs.StringVar(&flags.NodeIP, "node-ip", "", "WireGuard IP of the OramaOS node (required)")
	fs.BoolVar(&flags.Genesis, "genesis", false, "Confirm genesis node unlock")
	fs.StringVar(&flags.KeyFile, "key-file", "", "Path to encrypted genesis key file (optional)")

	if err := fs.Parse(args); err != nil {
		return nil, err
	}

	if flags.NodeIP == "" {
		return nil, fmt.Errorf("--node-ip is required")
	}

	return flags, nil
}

// fetchGenesisKey retrieves the encrypted genesis key from the node.
// The agent serves it at GET /v1/agent/genesis-key (only during genesis unlock mode).
func fetchGenesisKey(nodeIP string) (string, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:9998/v1/agent/genesis-key", nodeIP))
	if err != nil {
		return "", fmt.Errorf("request failed: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return "", fmt.Errorf("status %d: %s", resp.StatusCode, string(body))
	}

	var result struct {
		EncryptedKey string `json:"encrypted_key"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return "", fmt.Errorf("invalid response: %w", err)
	}

	return result.EncryptedKey, nil
}

// decryptGenesisKey decrypts the AES-256-GCM encrypted LUKS key using rootwallet.
// The key was encrypted with: AES-256-GCM(luksKey, HKDF(rootwalletKey, "genesis-luks"))
// For now, we use `rw decrypt` if available, or a local HKDF+AES-GCM implementation.
func decryptGenesisKey(encryptedKey string) ([]byte, error) {
	// Try rw decrypt first
	cmd := exec.Command("rw", "decrypt", encryptedKey, "--purpose", "genesis-luks", "--chain", "evm")
	output, err := cmd.Output()
	if err == nil {
		decoded, decErr := base64.StdEncoding.DecodeString(strings.TrimSpace(string(output)))
		if decErr != nil {
			return nil, fmt.Errorf("failed to decode decrypted key: %w", decErr)
		}
		return decoded, nil
	}

	return nil, fmt.Errorf("rw decrypt failed: %w (is rootwallet installed and initialized?)", err)
}

// sendUnlockKey sends the decrypted LUKS key to the agent's unlock endpoint.
func sendUnlockKey(nodeIP string, luksKey []byte) error {
	body, _ := json.Marshal(map[string]string{
		"key": base64.StdEncoding.EncodeToString(luksKey),
	})

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Post(
|
|
||||||
fmt.Sprintf("http://%s:9998/v1/agent/unlock", nodeIP),
|
|
||||||
"application/json",
|
|
||||||
bytes.NewReader(body),
|
|
||||||
)
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("request failed: %w", err)
|
|
||||||
}
|
|
||||||
defer resp.Body.Close()
|
|
||||||
|
|
||||||
if resp.StatusCode != http.StatusOK {
|
|
||||||
respBody, _ := io.ReadAll(resp.Body)
|
|
||||||
return fmt.Errorf("status %d: %s", resp.StatusCode, string(respBody))
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
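The unlock request built in `sendUnlockKey` is just the raw LUKS key, base64-encoded and wrapped in a one-field JSON object. A minimal standalone sketch of that payload construction (`buildUnlockBody` is an illustrative helper name, not part of the package above):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// buildUnlockBody mirrors the body built in sendUnlockKey:
// base64-encode the raw key bytes, then wrap them in {"key": ...}.
func buildUnlockBody(luksKey []byte) ([]byte, error) {
	return json.Marshal(map[string]string{
		"key": base64.StdEncoding.EncodeToString(luksKey),
	})
}

func main() {
	body, err := buildUnlockBody([]byte("secret"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"key":"c2VjcmV0"}
}
```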
@@ -1,75 +0,0 @@
package upgrade

import (
	"fmt"
	"time"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// RemoteUpgrader handles rolling upgrades across remote nodes.
type RemoteUpgrader struct {
	flags *Flags
}

// NewRemoteUpgrader creates a new remote upgrader.
func NewRemoteUpgrader(flags *Flags) *RemoteUpgrader {
	return &RemoteUpgrader{flags: flags}
}

// Execute runs the remote rolling upgrade.
func (r *RemoteUpgrader) Execute() error {
	nodes, err := remotessh.LoadEnvNodes(r.flags.Env)
	if err != nil {
		return err
	}

	cleanup, err := remotessh.PrepareNodeKeys(nodes)
	if err != nil {
		return err
	}
	defer cleanup()

	// Filter to single node if specified
	if r.flags.NodeFilter != "" {
		nodes = remotessh.FilterByIP(nodes, r.flags.NodeFilter)
		if len(nodes) == 0 {
			return fmt.Errorf("node %s not found in %s environment", r.flags.NodeFilter, r.flags.Env)
		}
	}

	fmt.Printf("Rolling upgrade: %s (%d nodes, %ds delay)\n\n", r.flags.Env, len(nodes), r.flags.Delay)

	// Print execution plan
	for i, node := range nodes {
		fmt.Printf("  %d. %s (%s)\n", i+1, node.Host, node.Role)
	}
	fmt.Println()

	for i, node := range nodes {
		fmt.Printf("[%d/%d] Upgrading %s (%s)...\n", i+1, len(nodes), node.Host, node.Role)

		if err := r.upgradeNode(node); err != nil {
			return fmt.Errorf("upgrade failed on %s: %w\nStopping rollout — remaining nodes not upgraded", node.Host, err)
		}

		fmt.Printf("  ✓ %s upgraded\n", node.Host)

		// Wait between nodes (except after the last one)
		if i < len(nodes)-1 && r.flags.Delay > 0 {
			fmt.Printf("  Waiting %ds before next node...\n\n", r.flags.Delay)
			time.Sleep(time.Duration(r.flags.Delay) * time.Second)
		}
	}

	fmt.Printf("\n✓ Rolling upgrade complete (%d nodes)\n", len(nodes))
	return nil
}

// upgradeNode runs `orama node upgrade --restart` on a single remote node.
func (r *RemoteUpgrader) upgradeNode(node inspector.Node) error {
	sudo := remotessh.SudoPrefix(node)
	cmd := fmt.Sprintf("%sorama node upgrade --restart", sudo)
	return remotessh.RunSSHStreaming(node, cmd)
}
@@ -1,69 +0,0 @@
package remotessh

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// FindNodesConf searches for the nodes.conf file
// in common locations relative to the current directory or project root.
func FindNodesConf() string {
	candidates := []string{
		"scripts/nodes.conf",
		"../scripts/nodes.conf",
		"network/scripts/nodes.conf",
	}

	// Also check from home dir
	home, _ := os.UserHomeDir()
	if home != "" {
		candidates = append(candidates, filepath.Join(home, ".orama", "nodes.conf"))
	}

	for _, c := range candidates {
		if _, err := os.Stat(c); err == nil {
			return c
		}
	}
	return ""
}

// LoadEnvNodes loads all nodes for a given environment from nodes.conf.
// SSHKey fields are NOT set — caller must call PrepareNodeKeys() after this.
func LoadEnvNodes(env string) ([]inspector.Node, error) {
	confPath := FindNodesConf()
	if confPath == "" {
		return nil, fmt.Errorf("nodes.conf not found (checked scripts/, ../scripts/, network/scripts/)")
	}

	nodes, err := inspector.LoadNodes(confPath)
	if err != nil {
		return nil, fmt.Errorf("failed to load %s: %w", confPath, err)
	}

	filtered := inspector.FilterByEnv(nodes, env)
	if len(filtered) == 0 {
		return nil, fmt.Errorf("no nodes found for environment %q in %s", env, confPath)
	}

	return filtered, nil
}

// PickHubNode selects the first node as the hub for fanout distribution.
func PickHubNode(nodes []inspector.Node) inspector.Node {
	return nodes[0]
}

// FilterByIP returns nodes matching the given IP address.
func FilterByIP(nodes []inspector.Node, ip string) []inspector.Node {
	var filtered []inspector.Node
	for _, n := range nodes {
		if n.Host == ip {
			filtered = append(filtered, n)
		}
	}
	return filtered
}
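FilterByIP above is a plain linear scan that keeps every node whose Host matches, preserving order. A self-contained sketch of the same behavior (the local `node` type is a stand-in for `inspector.Node`, which is not shown in this diff):

```go
package main

import "fmt"

// node is a stand-in for inspector.Node; only Host and Role matter here.
type node struct {
	Host string
	Role string
}

// filterByIP returns the nodes whose Host equals ip, in original order.
func filterByIP(nodes []node, ip string) []node {
	var out []node
	for _, n := range nodes {
		if n.Host == ip {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []node{
		{"10.0.0.1", "hub"},
		{"10.0.0.2", "worker"},
		{"10.0.0.1", "worker"},
	}
	fmt.Println(len(filterByIP(nodes, "10.0.0.1"))) // 2
}
```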
@@ -1,104 +0,0 @@
package remotessh

import (
	"fmt"
	"os"
	"os/exec"

	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// SSHOption configures SSH command behavior.
type SSHOption func(*sshOptions)

type sshOptions struct {
	agentForward   bool
	noHostKeyCheck bool
}

// WithAgentForward enables SSH agent forwarding (-A flag).
// Used by push fanout so the hub can reach targets via the forwarded agent.
func WithAgentForward() SSHOption {
	return func(o *sshOptions) { o.agentForward = true }
}

// WithNoHostKeyCheck disables host key verification and uses /dev/null as known_hosts.
// Use for ephemeral servers (sandbox) where IPs are frequently recycled.
func WithNoHostKeyCheck() SSHOption {
	return func(o *sshOptions) { o.noHostKeyCheck = true }
}

// UploadFile copies a local file to a remote host via SCP.
// Requires node.SSHKey to be set (via PrepareNodeKeys).
func UploadFile(node inspector.Node, localPath, remotePath string, opts ...SSHOption) error {
	if node.SSHKey == "" {
		return fmt.Errorf("no SSH key for %s (call PrepareNodeKeys first)", node.Name())
	}

	var cfg sshOptions
	for _, o := range opts {
		o(&cfg)
	}

	dest := fmt.Sprintf("%s@%s:%s", node.User, node.Host, remotePath)

	args := []string{"-o", "ConnectTimeout=10", "-i", node.SSHKey}
	if cfg.noHostKeyCheck {
		args = append([]string{"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"}, args...)
	} else {
		args = append([]string{"-o", "StrictHostKeyChecking=accept-new"}, args...)
	}
	args = append(args, localPath, dest)

	cmd := exec.Command("scp", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("SCP to %s failed: %w", node.Host, err)
	}
	return nil
}

// RunSSHStreaming executes a command on a remote host via SSH,
// streaming stdout/stderr to the local terminal in real-time.
// Requires node.SSHKey to be set (via PrepareNodeKeys).
func RunSSHStreaming(node inspector.Node, command string, opts ...SSHOption) error {
	if node.SSHKey == "" {
		return fmt.Errorf("no SSH key for %s (call PrepareNodeKeys first)", node.Name())
	}

	var cfg sshOptions
	for _, o := range opts {
		o(&cfg)
	}

	args := []string{"-o", "ConnectTimeout=10", "-i", node.SSHKey}
	if cfg.noHostKeyCheck {
		args = append([]string{"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"}, args...)
	} else {
		args = append([]string{"-o", "StrictHostKeyChecking=accept-new"}, args...)
	}
	if cfg.agentForward {
		args = append(args, "-A")
	}
	args = append(args, fmt.Sprintf("%s@%s", node.User, node.Host), command)

	cmd := exec.Command("ssh", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Stdin = os.Stdin

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("SSH to %s failed: %w", node.Host, err)
	}
	return nil
}

// SudoPrefix returns "sudo " for non-root users, empty for root.
func SudoPrefix(node inspector.Node) string {
	if node.User == "root" {
		return ""
	}
	return "sudo "
}
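Both UploadFile and RunSSHStreaming assemble their argument list the same way: base args first, then host-key options prepended depending on whether strict checking is disabled. A sketch of that assembly in isolation (`sshArgs` is an illustrative name, not the package's API):

```go
package main

import "fmt"

// sshArgs mirrors the arg assembly in UploadFile/RunSSHStreaming:
// start with the base connect-timeout and identity-file args, then
// prepend the host-key options for the chosen checking mode.
func sshArgs(keyPath string, noHostKeyCheck bool) []string {
	args := []string{"-o", "ConnectTimeout=10", "-i", keyPath}
	if noHostKeyCheck {
		args = append([]string{"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"}, args...)
	} else {
		args = append([]string{"-o", "StrictHostKeyChecking=accept-new"}, args...)
	}
	return args
}

func main() {
	fmt.Println(sshArgs("/tmp/id", true)[1])  // StrictHostKeyChecking=no
	fmt.Println(sshArgs("/tmp/id", false)[1]) // StrictHostKeyChecking=accept-new
}
```

`accept-new` trusts a host on first contact but still rejects changed keys, which is why the `/dev/null` known_hosts variant is reserved for recycled sandbox IPs.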
@@ -1,216 +0,0 @@
package remotessh

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"github.com/DeBrosOfficial/network/pkg/inspector"
	"github.com/DeBrosOfficial/network/pkg/rwagent"
)

// vaultClient is the interface used by wallet functions to talk to the agent.
// Defaults to the real rwagent.Client; tests replace it with a mock.
type vaultClient interface {
	GetSSHKey(ctx context.Context, host, username, format string) (*rwagent.VaultSSHData, error)
	CreateSSHEntry(ctx context.Context, host, username string) (*rwagent.VaultSSHData, error)
}

// newClient creates the default vaultClient. Package-level var for test injection.
var newClient func() vaultClient = func() vaultClient {
	return rwagent.New(os.Getenv("RW_AGENT_SOCK"))
}

// wrapAgentError wraps rwagent errors with user-friendly messages.
// When the agent is locked, it also triggers the RootWallet desktop app
// to show the unlock dialog via deep link (best-effort, fire-and-forget).
func wrapAgentError(err error, action string) error {
	if rwagent.IsNotRunning(err) {
		return fmt.Errorf("%s: rootwallet agent is not running — start with: rw agent start && rw agent unlock", action)
	}
	if rwagent.IsLocked(err) {
		return fmt.Errorf("%s: rootwallet agent is locked — unlock timed out after waiting. Unlock via the RootWallet app or run: rw agent unlock", action)
	}
	if rwagent.IsApprovalDenied(err) {
		return fmt.Errorf("%s: rootwallet access denied — approve this app in the RootWallet desktop app", action)
	}
	return fmt.Errorf("%s: %w", action, err)
}

// PrepareNodeKeys resolves wallet-derived SSH keys for all nodes.
// Retrieves private keys from the rootwallet agent daemon, writes PEMs to
// temp files, and sets node.SSHKey for each node.
//
// The nodes slice is modified in place — each node.SSHKey is set to
// the path of the temporary key file.
//
// Returns a cleanup function that zero-overwrites and removes all temp files.
// Caller must defer cleanup().
func PrepareNodeKeys(nodes []inspector.Node) (cleanup func(), err error) {
	client := newClient()
	ctx := context.Background()

	// Create temp dir for all keys
	tmpDir, err := os.MkdirTemp("", "orama-ssh-")
	if err != nil {
		return nil, fmt.Errorf("create temp dir: %w", err)
	}

	// Track resolved keys by host/user to avoid duplicate agent calls
	keyPaths := make(map[string]string) // "host/user" → temp file path
	var allKeyPaths []string

	for i := range nodes {
		var key string
		if nodes[i].VaultTarget != "" {
			key = nodes[i].VaultTarget
		} else {
			key = nodes[i].Host + "/" + nodes[i].User
		}
		if existing, ok := keyPaths[key]; ok {
			nodes[i].SSHKey = existing
			continue
		}

		host, user := parseVaultTarget(key)
		data, err := client.GetSSHKey(ctx, host, user, "priv")
		if err != nil {
			cleanupKeys(tmpDir, allKeyPaths)
			return nil, wrapAgentError(err, fmt.Sprintf("resolve key for %s", nodes[i].Name()))
		}

		if !strings.Contains(data.PrivateKey, "BEGIN OPENSSH PRIVATE KEY") {
			cleanupKeys(tmpDir, allKeyPaths)
			return nil, fmt.Errorf("agent returned invalid key for %s", nodes[i].Name())
		}

		// Write PEM to temp file with restrictive perms
		keyFile := filepath.Join(tmpDir, fmt.Sprintf("id_%d", i))
		if err := os.WriteFile(keyFile, []byte(data.PrivateKey), 0600); err != nil {
			cleanupKeys(tmpDir, allKeyPaths)
			return nil, fmt.Errorf("write key for %s: %w", nodes[i].Name(), err)
		}

		keyPaths[key] = keyFile
		allKeyPaths = append(allKeyPaths, keyFile)
		nodes[i].SSHKey = keyFile
	}

	cleanup = func() {
		cleanupKeys(tmpDir, allKeyPaths)
	}
	return cleanup, nil
}

// LoadAgentKeys loads SSH keys for the given nodes into the system ssh-agent.
// Used by push fanout to enable agent forwarding.
// Retrieves private keys from the rootwallet agent and pipes them to ssh-add.
func LoadAgentKeys(nodes []inspector.Node) error {
	client := newClient()
	ctx := context.Background()

	// Deduplicate host/user pairs
	seen := make(map[string]bool)
	var targets []string
	for _, n := range nodes {
		var key string
		if n.VaultTarget != "" {
			key = n.VaultTarget
		} else {
			key = n.Host + "/" + n.User
		}
		if seen[key] {
			continue
		}
		seen[key] = true
		targets = append(targets, key)
	}

	if len(targets) == 0 {
		return nil
	}

	for _, target := range targets {
		host, user := parseVaultTarget(target)
		data, err := client.GetSSHKey(ctx, host, user, "priv")
		if err != nil {
			return wrapAgentError(err, fmt.Sprintf("get key for %s", target))
		}

		// Pipe private key to ssh-add via stdin
		cmd := exec.Command("ssh-add", "-")
		cmd.Stdin = strings.NewReader(data.PrivateKey)
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("ssh-add failed for %s: %w", target, err)
		}
	}

	return nil
}

// EnsureVaultEntry creates a wallet SSH entry if it doesn't already exist.
// Checks the rootwallet agent for an existing entry, creates one if not found.
func EnsureVaultEntry(vaultTarget string) error {
	client := newClient()
	ctx := context.Background()

	host, user := parseVaultTarget(vaultTarget)

	// Check if entry already exists
	_, err := client.GetSSHKey(ctx, host, user, "pub")
	if err == nil {
		return nil // entry exists
	}

	// If not found, create it
	if rwagent.IsNotFound(err) {
		_, createErr := client.CreateSSHEntry(ctx, host, user)
		if createErr != nil {
			return wrapAgentError(createErr, fmt.Sprintf("create vault entry %s", vaultTarget))
		}
		return nil
	}

	return wrapAgentError(err, fmt.Sprintf("check vault entry %s", vaultTarget))
}

// ResolveVaultPublicKey returns the OpenSSH public key string for a vault entry.
func ResolveVaultPublicKey(vaultTarget string) (string, error) {
	client := newClient()
	ctx := context.Background()

	host, user := parseVaultTarget(vaultTarget)
	data, err := client.GetSSHKey(ctx, host, user, "pub")
	if err != nil {
		return "", wrapAgentError(err, fmt.Sprintf("get public key for %s", vaultTarget))
	}

	pubKey := strings.TrimSpace(data.PublicKey)
	if !strings.HasPrefix(pubKey, "ssh-") {
		return "", fmt.Errorf("agent returned invalid public key for %s", vaultTarget)
	}
	return pubKey, nil
}

// parseVaultTarget splits a "host/user" vault target string into host and user.
func parseVaultTarget(target string) (host, user string) {
	idx := strings.Index(target, "/")
	if idx < 0 {
		return target, ""
	}
	return target[:idx], target[idx+1:]
}

// cleanupKeys zero-overwrites and removes all key files, then removes the temp dir.
func cleanupKeys(tmpDir string, keyPaths []string) {
	zeros := make([]byte, 512)
	for _, p := range keyPaths {
		_ = os.WriteFile(p, zeros, 0600) // zero-overwrite
		_ = os.Remove(p)
	}
	_ = os.Remove(tmpDir)
}
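Both PrepareNodeKeys and LoadAgentKeys deduplicate agent calls with the same key: an explicit VaultTarget wins, otherwise "host/user" is used. A standalone sketch of that dedup logic (the local `node` type and `vaultKey` helper are illustrative stand-ins, not the package's API):

```go
package main

import "fmt"

// node is a stand-in for inspector.Node with just the fields used here.
type node struct {
	Host, User, VaultTarget string
}

// vaultKey mirrors the dedup key in PrepareNodeKeys/LoadAgentKeys:
// prefer an explicit VaultTarget, else fall back to "host/user".
func vaultKey(n node) string {
	if n.VaultTarget != "" {
		return n.VaultTarget
	}
	return n.Host + "/" + n.User
}

// uniqueTargets counts distinct vault keys, preserving first-seen order.
func uniqueTargets(nodes []node) []string {
	seen := make(map[string]bool)
	var targets []string
	for _, n := range nodes {
		k := vaultKey(n)
		if seen[k] {
			continue
		}
		seen[k] = true
		targets = append(targets, k)
	}
	return targets
}

func main() {
	nodes := []node{
		{Host: "10.0.0.1", User: "root"},
		{Host: "10.0.0.1", User: "root"}, // duplicate host/user
		{Host: "10.0.0.2", User: "root", VaultTarget: "sandbox/admin"},
	}
	fmt.Println(len(uniqueTargets(nodes))) // 2
}
```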
@@ -1,376 +0,0 @@
package remotessh

import (
	"context"
	"errors"
	"os"
	"strings"
	"testing"

	"github.com/DeBrosOfficial/network/pkg/inspector"
	"github.com/DeBrosOfficial/network/pkg/rwagent"
)

const testPrivateKey = "-----BEGIN OPENSSH PRIVATE KEY-----\nfake-key-data\n-----END OPENSSH PRIVATE KEY-----"

// mockClient implements vaultClient for testing.
type mockClient struct {
	getSSHKey      func(ctx context.Context, host, username, format string) (*rwagent.VaultSSHData, error)
	createSSHEntry func(ctx context.Context, host, username string) (*rwagent.VaultSSHData, error)
}

func (m *mockClient) GetSSHKey(ctx context.Context, host, username, format string) (*rwagent.VaultSSHData, error) {
	return m.getSSHKey(ctx, host, username, format)
}

func (m *mockClient) CreateSSHEntry(ctx context.Context, host, username string) (*rwagent.VaultSSHData, error) {
	return m.createSSHEntry(ctx, host, username)
}

// withMockClient replaces newClient for the duration of a test.
func withMockClient(t *testing.T, mock *mockClient) {
	t.Helper()
	orig := newClient
	newClient = func() vaultClient { return mock }
	t.Cleanup(func() { newClient = orig })
}

func TestParseVaultTarget(t *testing.T) {
	tests := []struct {
		target   string
		wantHost string
		wantUser string
	}{
		{"sandbox/root", "sandbox", "root"},
		{"192.168.1.1/ubuntu", "192.168.1.1", "ubuntu"},
		{"my-host/my-user", "my-host", "my-user"},
		{"noslash", "noslash", ""},
		{"a/b/c", "a", "b/c"},
		{"", "", ""},
	}

	for _, tt := range tests {
		t.Run(tt.target, func(t *testing.T) {
			host, user := parseVaultTarget(tt.target)
			if host != tt.wantHost {
				t.Errorf("parseVaultTarget(%q) host = %q, want %q", tt.target, host, tt.wantHost)
			}
			if user != tt.wantUser {
				t.Errorf("parseVaultTarget(%q) user = %q, want %q", tt.target, user, tt.wantUser)
			}
		})
	}
}

func TestWrapAgentError_notRunning(t *testing.T) {
	err := wrapAgentError(rwagent.ErrAgentNotRunning, "test action")
	if !strings.Contains(err.Error(), "not running") {
		t.Errorf("expected 'not running' message, got: %s", err)
	}
	if !strings.Contains(err.Error(), "rw agent start") {
		t.Errorf("expected actionable hint, got: %s", err)
	}
}

func TestWrapAgentError_locked(t *testing.T) {
	agentErr := &rwagent.AgentError{Code: "AGENT_LOCKED", Message: "agent is locked"}
	err := wrapAgentError(agentErr, "test action")
	if !strings.Contains(err.Error(), "locked") {
		t.Errorf("expected 'locked' message, got: %s", err)
	}
	if !strings.Contains(err.Error(), "rw agent unlock") {
		t.Errorf("expected actionable hint, got: %s", err)
	}
}

func TestWrapAgentError_generic(t *testing.T) {
	err := wrapAgentError(errors.New("some error"), "test action")
	if !strings.Contains(err.Error(), "test action") {
		t.Errorf("expected action context, got: %s", err)
	}
	if !strings.Contains(err.Error(), "some error") {
		t.Errorf("expected wrapped error, got: %s", err)
	}
}

func TestPrepareNodeKeys_success(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, host, username, format string) (*rwagent.VaultSSHData, error) {
			return &rwagent.VaultSSHData{PrivateKey: testPrivateKey}, nil
		},
	}
	withMockClient(t, mock)

	nodes := []inspector.Node{
		{Host: "10.0.0.1", User: "root"},
		{Host: "10.0.0.2", User: "root"},
	}

	cleanup, err := PrepareNodeKeys(nodes)
	if err != nil {
		t.Fatalf("PrepareNodeKeys() error = %v", err)
	}
	defer cleanup()

	for i, n := range nodes {
		if n.SSHKey == "" {
			t.Errorf("node[%d].SSHKey is empty", i)
			continue
		}
		data, err := os.ReadFile(n.SSHKey)
		if err != nil {
			t.Errorf("node[%d] key file unreadable: %v", i, err)
			continue
		}
		if !strings.Contains(string(data), "BEGIN OPENSSH PRIVATE KEY") {
			t.Errorf("node[%d] key file has wrong content", i)
		}
	}
}

func TestPrepareNodeKeys_deduplication(t *testing.T) {
	callCount := 0
	mock := &mockClient{
		getSSHKey: func(_ context.Context, host, username, format string) (*rwagent.VaultSSHData, error) {
			callCount++
			return &rwagent.VaultSSHData{PrivateKey: testPrivateKey}, nil
		},
	}
	withMockClient(t, mock)

	nodes := []inspector.Node{
		{Host: "10.0.0.1", User: "root"},
		{Host: "10.0.0.1", User: "root"}, // same host/user
	}

	cleanup, err := PrepareNodeKeys(nodes)
	if err != nil {
		t.Fatalf("PrepareNodeKeys() error = %v", err)
	}
	defer cleanup()

	if callCount != 1 {
		t.Errorf("expected 1 agent call (dedup), got %d", callCount)
	}
	if nodes[0].SSHKey != nodes[1].SSHKey {
		t.Error("expected same key path for deduplicated nodes")
	}
}

func TestPrepareNodeKeys_vaultTarget(t *testing.T) {
	var capturedHost, capturedUser string
	mock := &mockClient{
		getSSHKey: func(_ context.Context, host, username, format string) (*rwagent.VaultSSHData, error) {
			capturedHost = host
			capturedUser = username
			return &rwagent.VaultSSHData{PrivateKey: testPrivateKey}, nil
		},
	}
	withMockClient(t, mock)

	nodes := []inspector.Node{
		{Host: "10.0.0.1", User: "root", VaultTarget: "sandbox/admin"},
	}

	cleanup, err := PrepareNodeKeys(nodes)
	if err != nil {
		t.Fatalf("PrepareNodeKeys() error = %v", err)
	}
	defer cleanup()

	if capturedHost != "sandbox" || capturedUser != "admin" {
		t.Errorf("expected host=sandbox user=admin, got host=%s user=%s", capturedHost, capturedUser)
	}
}

func TestPrepareNodeKeys_agentNotRunning(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			return nil, rwagent.ErrAgentNotRunning
		},
	}
	withMockClient(t, mock)

	nodes := []inspector.Node{{Host: "10.0.0.1", User: "root"}}
	_, err := PrepareNodeKeys(nodes)
	if err == nil {
		t.Fatal("expected error")
	}
	if !strings.Contains(err.Error(), "not running") {
		t.Errorf("expected 'not running' error, got: %s", err)
	}
}

func TestPrepareNodeKeys_invalidKey(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			return &rwagent.VaultSSHData{PrivateKey: "garbage"}, nil
		},
	}
	withMockClient(t, mock)

	nodes := []inspector.Node{{Host: "10.0.0.1", User: "root"}}
	_, err := PrepareNodeKeys(nodes)
	if err == nil {
		t.Fatal("expected error for invalid key")
	}
	if !strings.Contains(err.Error(), "invalid key") {
		t.Errorf("expected 'invalid key' error, got: %s", err)
	}
}

func TestPrepareNodeKeys_cleanupOnError(t *testing.T) {
	callNum := 0
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			callNum++
			if callNum == 2 {
				return nil, &rwagent.AgentError{Code: "AGENT_LOCKED", Message: "locked"}
			}
			return &rwagent.VaultSSHData{PrivateKey: testPrivateKey}, nil
		},
	}
	withMockClient(t, mock)

	nodes := []inspector.Node{
		{Host: "10.0.0.1", User: "root"},
		{Host: "10.0.0.2", User: "root"},
	}

	_, err := PrepareNodeKeys(nodes)
	if err == nil {
		t.Fatal("expected error")
	}

	// First node's temp file should have been cleaned up
	if nodes[0].SSHKey != "" {
		if _, statErr := os.Stat(nodes[0].SSHKey); statErr == nil {
			t.Error("expected temp key file to be cleaned up on error")
		}
	}
}

func TestPrepareNodeKeys_emptyNodes(t *testing.T) {
	mock := &mockClient{}
	withMockClient(t, mock)

	cleanup, err := PrepareNodeKeys(nil)
	if err != nil {
		t.Fatalf("expected no error for empty nodes, got: %v", err)
	}
	cleanup() // should not panic
}

func TestEnsureVaultEntry_exists(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			return &rwagent.VaultSSHData{PublicKey: "ssh-ed25519 AAAA..."}, nil
		},
	}
	withMockClient(t, mock)

	if err := EnsureVaultEntry("sandbox/root"); err != nil {
		t.Fatalf("EnsureVaultEntry() error = %v", err)
	}
}

func TestEnsureVaultEntry_creates(t *testing.T) {
	created := false
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			return nil, &rwagent.AgentError{Code: "NOT_FOUND", Message: "not found"}
		},
		createSSHEntry: func(_ context.Context, host, username string) (*rwagent.VaultSSHData, error) {
			created = true
			if host != "sandbox" || username != "root" {
				t.Errorf("unexpected create args: %s/%s", host, username)
			}
			return &rwagent.VaultSSHData{PublicKey: "ssh-ed25519 AAAA..."}, nil
		},
	}
	withMockClient(t, mock)

	if err := EnsureVaultEntry("sandbox/root"); err != nil {
		t.Fatalf("EnsureVaultEntry() error = %v", err)
	}
	if !created {
		t.Error("expected CreateSSHEntry to be called")
	}
}

func TestEnsureVaultEntry_locked(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			return nil, &rwagent.AgentError{Code: "AGENT_LOCKED", Message: "locked"}
		},
	}
	withMockClient(t, mock)

	err := EnsureVaultEntry("sandbox/root")
	if err == nil {
		t.Fatal("expected error")
	}
	if !strings.Contains(err.Error(), "locked") {
		t.Errorf("expected locked error, got: %s", err)
	}
}

func TestResolveVaultPublicKey_success(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, format string) (*rwagent.VaultSSHData, error) {
			if format != "pub" {
				t.Errorf("expected format=pub, got %s", format)
			}
			return &rwagent.VaultSSHData{PublicKey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA..."}, nil
		},
	}
	withMockClient(t, mock)

	key, err := ResolveVaultPublicKey("sandbox/root")
	if err != nil {
		t.Fatalf("ResolveVaultPublicKey() error = %v", err)
	}
	if !strings.HasPrefix(key, "ssh-") {
		t.Errorf("expected ssh- prefix, got: %s", key)
	}
}

func TestResolveVaultPublicKey_invalidFormat(t *testing.T) {
	mock := &mockClient{
		getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
			return &rwagent.VaultSSHData{PublicKey: "not-a-valid-key"}, nil
		},
	}
	withMockClient(t, mock)

	_, err := ResolveVaultPublicKey("sandbox/root")
	if err == nil {
		t.Fatal("expected error for invalid public key")
|
|
||||||
}
|
|
||||||
if !strings.Contains(err.Error(), "invalid public key") {
|
|
||||||
t.Errorf("expected 'invalid public key' error, got: %s", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestResolveVaultPublicKey_notFound(t *testing.T) {
|
|
||||||
mock := &mockClient{
|
|
||||||
getSSHKey: func(_ context.Context, _, _, _ string) (*rwagent.VaultSSHData, error) {
|
|
||||||
return nil, &rwagent.AgentError{Code: "NOT_FOUND", Message: "not found"}
|
|
||||||
},
|
|
||||||
}
|
|
||||||
withMockClient(t, mock)
|
|
||||||
|
|
||||||
_, err := ResolveVaultPublicKey("sandbox/root")
|
|
||||||
if err == nil {
|
|
||||||
t.Fatal("expected error")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestLoadAgentKeys_emptyNodes(t *testing.T) {
|
|
||||||
mock := &mockClient{}
|
|
||||||
withMockClient(t, mock)
|
|
||||||
|
|
||||||
if err := LoadAgentKeys(nil); err != nil {
|
|
||||||
t.Fatalf("expected no error for empty nodes, got: %v", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@ -1,133 +0,0 @@
package sandbox

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/yaml.v3"
)

// Config holds sandbox configuration, stored at ~/.orama/sandbox.yaml.
type Config struct {
	HetznerAPIToken string       `yaml:"hetzner_api_token"`
	Domain          string       `yaml:"domain"`
	Location        string       `yaml:"location"`    // Hetzner datacenter (default: nbg1)
	ServerType      string       `yaml:"server_type"` // Hetzner server type (default: cx23)
	FloatingIPs     []FloatIP    `yaml:"floating_ips"`
	SSHKey          SSHKeyConfig `yaml:"ssh_key"`
	FirewallID      int64        `yaml:"firewall_id,omitempty"` // Hetzner firewall resource ID
}

// FloatIP holds a Hetzner floating IP reference.
type FloatIP struct {
	ID int64  `yaml:"id"`
	IP string `yaml:"ip"`
}

// SSHKeyConfig holds the wallet vault target and Hetzner resource ID.
type SSHKeyConfig struct {
	HetznerID   int64  `yaml:"hetzner_id"`
	VaultTarget string `yaml:"vault_target"` // e.g. "sandbox/root"
}

// configDir returns ~/.orama/, creating it if needed.
func configDir() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", fmt.Errorf("get home directory: %w", err)
	}
	dir := filepath.Join(home, ".orama")
	if err := os.MkdirAll(dir, 0700); err != nil {
		return "", fmt.Errorf("create config directory: %w", err)
	}
	return dir, nil
}

// configPath returns the full path to ~/.orama/sandbox.yaml.
func configPath() (string, error) {
	dir, err := configDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(dir, "sandbox.yaml"), nil
}

// LoadConfig reads the sandbox config from ~/.orama/sandbox.yaml.
// Returns an error if the file doesn't exist (user must run setup first).
func LoadConfig() (*Config, error) {
	path, err := configPath()
	if err != nil {
		return nil, err
	}

	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, fmt.Errorf("sandbox not configured, run: orama sandbox setup")
		}
		return nil, fmt.Errorf("read config: %w", err)
	}

	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("parse config %s: %w", path, err)
	}

	if err := cfg.validate(); err != nil {
		return nil, fmt.Errorf("invalid config: %w", err)
	}

	cfg.Defaults()

	return &cfg, nil
}

// SaveConfig writes the sandbox config to ~/.orama/sandbox.yaml.
func SaveConfig(cfg *Config) error {
	path, err := configPath()
	if err != nil {
		return err
	}

	data, err := yaml.Marshal(cfg)
	if err != nil {
		return fmt.Errorf("marshal config: %w", err)
	}

	if err := os.WriteFile(path, data, 0600); err != nil {
		return fmt.Errorf("write config: %w", err)
	}

	return nil
}

// validate checks that required fields are present.
func (c *Config) validate() error {
	if c.HetznerAPIToken == "" {
		return fmt.Errorf("hetzner_api_token is required")
	}
	if c.Domain == "" {
		return fmt.Errorf("domain is required")
	}
	if len(c.FloatingIPs) < 2 {
		return fmt.Errorf("2 floating IPs required, got %d", len(c.FloatingIPs))
	}
	if c.SSHKey.VaultTarget == "" {
		return fmt.Errorf("ssh_key.vault_target is required (run: orama sandbox setup)")
	}
	return nil
}

// Defaults fills in default values for optional fields.
func (c *Config) Defaults() {
	if c.Location == "" {
		c.Location = "nbg1"
	}
	if c.ServerType == "" {
		c.ServerType = "cx23"
	}
	if c.SSHKey.VaultTarget == "" {
		c.SSHKey.VaultTarget = "sandbox/root"
	}
}
@ -1,53 +0,0 @@
package sandbox

import "testing"

func TestConfig_Validate_EmptyVaultTarget(t *testing.T) {
	cfg := &Config{
		HetznerAPIToken: "test-token",
		Domain:          "test.example.com",
		FloatingIPs:     []FloatIP{{ID: 1, IP: "1.1.1.1"}, {ID: 2, IP: "2.2.2.2"}},
		SSHKey:          SSHKeyConfig{HetznerID: 1, VaultTarget: ""},
	}
	if err := cfg.validate(); err == nil {
		t.Error("validate() should reject empty VaultTarget")
	}
}

func TestConfig_Validate_WithVaultTarget(t *testing.T) {
	cfg := &Config{
		HetznerAPIToken: "test-token",
		Domain:          "test.example.com",
		FloatingIPs:     []FloatIP{{ID: 1, IP: "1.1.1.1"}, {ID: 2, IP: "2.2.2.2"}},
		SSHKey:          SSHKeyConfig{HetznerID: 1, VaultTarget: "sandbox/root"},
	}
	if err := cfg.validate(); err != nil {
		t.Errorf("validate() unexpected error: %v", err)
	}
}

func TestConfig_Defaults_SetsVaultTarget(t *testing.T) {
	cfg := &Config{}
	cfg.Defaults()

	if cfg.SSHKey.VaultTarget != "sandbox/root" {
		t.Errorf("Defaults() VaultTarget = %q, want sandbox/root", cfg.SSHKey.VaultTarget)
	}
	if cfg.Location != "nbg1" {
		t.Errorf("Defaults() Location = %q, want nbg1", cfg.Location)
	}
	if cfg.ServerType != "cx23" {
		t.Errorf("Defaults() ServerType = %q, want cx23", cfg.ServerType)
	}
}

func TestConfig_Defaults_PreservesExistingVaultTarget(t *testing.T) {
	cfg := &Config{
		SSHKey: SSHKeyConfig{VaultTarget: "custom/user"},
	}
	cfg.Defaults()

	if cfg.SSHKey.VaultTarget != "custom/user" {
		t.Errorf("Defaults() should preserve existing VaultTarget, got %q", cfg.SSHKey.VaultTarget)
	}
}
@ -1,649 +0,0 @@
package sandbox

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
	"github.com/DeBrosOfficial/network/pkg/rwagent"
)

// Create orchestrates the creation of a new sandbox cluster.
func Create(name string) error {
	cfg, err := LoadConfig()
	if err != nil {
		return err
	}

	// --- Preflight: validate everything BEFORE spending money ---
	fmt.Println("Preflight checks:")

	// 1. Check for existing active sandbox
	active, err := FindActiveSandbox()
	if err != nil {
		return err
	}
	if active != nil {
		return fmt.Errorf("sandbox %q is already active (status: %s)\nDestroy it first: orama sandbox destroy --name %s",
			active.Name, active.Status, active.Name)
	}
	fmt.Println(" [ok] No active sandbox")

	// 2. Check rootwallet agent is running and unlocked before the slow SSH key call
	if err := checkAgentReady(); err != nil {
		return err
	}
	fmt.Println(" [ok] Rootwallet agent running and unlocked")

	// 3. Resolve SSH key (may trigger approval prompt in RootWallet app)
	fmt.Print(" [..] Resolving SSH key from vault...")
	sshKeyPath, cleanup, err := resolveVaultKeyOnce(cfg.SSHKey.VaultTarget)
	if err != nil {
		fmt.Println(" FAILED")
		return fmt.Errorf("prepare SSH key: %w", err)
	}
	defer cleanup()
	fmt.Println(" ok")

	// 4. Check binary archive — auto-build if missing
	archivePath := findNewestArchive()
	if archivePath == "" {
		fmt.Println(" [--] No binary archive found, building...")
		if err := autoBuildArchive(); err != nil {
			return fmt.Errorf("auto-build archive: %w", err)
		}
		archivePath = findNewestArchive()
		if archivePath == "" {
			return fmt.Errorf("build succeeded but no archive found in /tmp/")
		}
	}
	info, err := os.Stat(archivePath)
	if err != nil {
		return fmt.Errorf("stat archive %s: %w", archivePath, err)
	}
	fmt.Printf(" [ok] Binary archive: %s (%s)\n", filepath.Base(archivePath), formatBytes(info.Size()))

	// 5. Verify Hetzner API token works
	client := NewHetznerClient(cfg.HetznerAPIToken)
	if err := client.ValidateToken(); err != nil {
		return fmt.Errorf("hetzner API: %w\n Check your token in ~/.orama/sandbox.yaml", err)
	}
	fmt.Println(" [ok] Hetzner API token valid")

	fmt.Println()

	// --- All preflight checks passed, proceed ---

	// Generate name if not provided
	if name == "" {
		name = GenerateName()
	}

	fmt.Printf("Creating sandbox %q (%s, %d nodes)\n\n", name, cfg.Domain, 5)

	state := &SandboxState{
		Name:      name,
		CreatedAt: time.Now().UTC(),
		Domain:    cfg.Domain,
		Status:    StatusCreating,
	}

	// Phase 1: Provision servers
	fmt.Println("Phase 1: Provisioning servers...")
	if err := phase1ProvisionServers(client, cfg, state); err != nil {
		cleanupFailedCreate(client, state)
		return fmt.Errorf("provision servers: %w", err)
	}
	if err := SaveState(state); err != nil {
		fmt.Fprintf(os.Stderr, "Warning: save state after provisioning: %v\n", err)
	}

	// Phase 2: Assign floating IPs
	fmt.Println("\nPhase 2: Assigning floating IPs...")
	if err := phase2AssignFloatingIPs(client, cfg, state, sshKeyPath); err != nil {
		return fmt.Errorf("assign floating IPs: %w", err)
	}
	if err := SaveState(state); err != nil {
		fmt.Fprintf(os.Stderr, "Warning: save state after floating IPs: %v\n", err)
	}

	// Phase 3: Upload binary archive
	fmt.Println("\nPhase 3: Uploading binary archive...")
	if err := phase3UploadArchive(state, sshKeyPath, archivePath); err != nil {
		return fmt.Errorf("upload archive: %w", err)
	}

	// Phase 4: Install genesis node
	fmt.Println("\nPhase 4: Installing genesis node...")
	if err := phase4InstallGenesis(cfg, state, sshKeyPath); err != nil {
		state.Status = StatusError
		_ = SaveState(state)
		return fmt.Errorf("install genesis: %w", err)
	}

	// Phase 5: Join remaining nodes
	fmt.Println("\nPhase 5: Joining remaining nodes...")
	if err := phase5JoinNodes(cfg, state, sshKeyPath); err != nil {
		state.Status = StatusError
		_ = SaveState(state)
		return fmt.Errorf("join nodes: %w", err)
	}

	// Phase 6: Verify cluster
	fmt.Println("\nPhase 6: Verifying cluster...")
	phase6Verify(cfg, state, sshKeyPath)

	state.Status = StatusRunning
	if err := SaveState(state); err != nil {
		return fmt.Errorf("save final state: %w", err)
	}

	printCreateSummary(cfg, state)
	return nil
}
// checkAgentReady verifies the rootwallet agent is running, unlocked, and
// that the desktop app is connected (required for first-time app approval).
func checkAgentReady() error {
	client := rwagent.New(os.Getenv("RW_AGENT_SOCK"))
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	status, err := client.Status(ctx)
	if err != nil {
		if rwagent.IsNotRunning(err) {
			return fmt.Errorf("rootwallet agent is not running\n\n Start it with:\n rw agent start && rw agent unlock")
		}
		return fmt.Errorf("rootwallet agent: %w", err)
	}

	return validateAgentStatus(status)
}

// validateAgentStatus checks that the agent status indicates readiness.
// Separated from checkAgentReady for testability.
func validateAgentStatus(status *rwagent.StatusResponse) error {
	if status.Locked {
		return fmt.Errorf("rootwallet agent is locked\n\n Unlock it with:\n rw agent unlock")
	}

	if status.ConnectedApps == 0 {
		fmt.Println(" [!!] RootWallet desktop app is not open")
		fmt.Println("      First-time use requires the desktop app to approve access.")
		fmt.Println("      Open the RootWallet app, then re-run this command.")
		return fmt.Errorf("RootWallet desktop app required for approval — open it and retry")
	}

	return nil
}

// resolveVaultKeyOnce resolves a wallet SSH key to a temp file.
// Returns the key path, cleanup function, and any error.
func resolveVaultKeyOnce(vaultTarget string) (string, func(), error) {
	node := inspector.Node{User: "root", Host: "resolve-only", VaultTarget: vaultTarget}
	nodes := []inspector.Node{node}
	cleanup, err := remotessh.PrepareNodeKeys(nodes)
	if err != nil {
		return "", func() {}, err
	}
	return nodes[0].SSHKey, cleanup, nil
}
// phase1ProvisionServers creates 5 Hetzner servers in parallel.
func phase1ProvisionServers(client *HetznerClient, cfg *Config, state *SandboxState) error {
	type serverResult struct {
		index  int
		server *HetznerServer
		err    error
	}

	results := make(chan serverResult, 5)

	for i := 0; i < 5; i++ {
		go func(idx int) {
			role := "node"
			if idx < 2 {
				role = "nameserver"
			}

			serverName := fmt.Sprintf("sbx-%s-%d", state.Name, idx+1)
			labels := map[string]string{
				"orama-sandbox":      state.Name,
				"orama-sandbox-role": role,
			}

			req := CreateServerRequest{
				Name:       serverName,
				ServerType: cfg.ServerType,
				Image:      "ubuntu-24.04",
				Location:   cfg.Location,
				SSHKeys:    []int64{cfg.SSHKey.HetznerID},
				Labels:     labels,
			}
			if cfg.FirewallID > 0 {
				req.Firewalls = []struct {
					Firewall int64 `json:"firewall"`
				}{{Firewall: cfg.FirewallID}}
			}

			srv, err := client.CreateServer(req)
			results <- serverResult{index: idx, server: srv, err: err}
		}(i)
	}

	servers := make([]ServerState, 5)
	var firstErr error
	for i := 0; i < 5; i++ {
		r := <-results
		if r.err != nil {
			if firstErr == nil {
				firstErr = fmt.Errorf("server %d: %w", r.index+1, r.err)
			}
			continue
		}
		fmt.Printf(" Created %s (ID: %d, initializing...)\n", r.server.Name, r.server.ID)
		role := "node"
		if r.index < 2 {
			role = "nameserver"
		}
		servers[r.index] = ServerState{
			ID:   r.server.ID,
			Name: r.server.Name,
			Role: role,
		}
	}
	state.Servers = servers // populate before returning so cleanup can delete created servers
	if firstErr != nil {
		return firstErr
	}

	// Wait for all servers to reach "running"
	fmt.Print(" Waiting for servers to boot...")
	for i := range servers {
		srv, err := client.WaitForServer(servers[i].ID, 3*time.Minute)
		if err != nil {
			return fmt.Errorf("wait for %s: %w", servers[i].Name, err)
		}
		servers[i].IP = srv.PublicNet.IPv4.IP
		fmt.Print(".")
	}
	fmt.Println(" OK")

	// Assign floating IPs to nameserver entries
	if len(cfg.FloatingIPs) >= 2 {
		servers[0].FloatingIP = cfg.FloatingIPs[0].IP
		servers[1].FloatingIP = cfg.FloatingIPs[1].IP
	}

	state.Servers = servers

	for _, srv := range servers {
		fmt.Printf(" %s: %s (%s)\n", srv.Name, srv.IP, srv.Role)
	}

	return nil
}
// phase2AssignFloatingIPs assigns floating IPs and configures loopback.
func phase2AssignFloatingIPs(client *HetznerClient, cfg *Config, state *SandboxState, sshKeyPath string) error {
	for i := 0; i < 2 && i < len(cfg.FloatingIPs) && i < len(state.Servers); i++ {
		fip := cfg.FloatingIPs[i]
		srv := state.Servers[i]

		// Unassign if currently assigned elsewhere (ignore "not assigned" errors)
		fmt.Printf(" Assigning %s to %s...\n", fip.IP, srv.Name)
		if err := client.UnassignFloatingIP(fip.ID); err != nil {
			// Log but continue — may fail if not currently assigned, which is fine
			fmt.Printf(" Note: unassign %s: %v (continuing)\n", fip.IP, err)
		}

		if err := client.AssignFloatingIP(fip.ID, srv.ID); err != nil {
			return fmt.Errorf("assign %s to %s: %w", fip.IP, srv.Name, err)
		}

		// Configure floating IP on the server's loopback interface
		// Hetzner floating IPs require this: ip addr add <floating_ip>/32 dev lo
		node := inspector.Node{
			User:   "root",
			Host:   srv.IP,
			SSHKey: sshKeyPath,
		}

		// Wait for SSH to be ready on freshly booted servers
		if err := waitForSSH(node, 5*time.Minute); err != nil {
			return fmt.Errorf("SSH not ready on %s: %w", srv.Name, err)
		}

		cmd := fmt.Sprintf("ip addr add %s/32 dev lo 2>/dev/null || true", fip.IP)
		if err := remotessh.RunSSHStreaming(node, cmd, remotessh.WithNoHostKeyCheck()); err != nil {
			return fmt.Errorf("configure loopback on %s: %w", srv.Name, err)
		}
	}

	return nil
}

// waitForSSH polls until SSH is responsive on the node.
func waitForSSH(node inspector.Node, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err := runSSHOutput(node, "echo ok")
		if err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timeout after %s", timeout)
}
// autoBuildArchive runs `make build-archive` from the project root.
func autoBuildArchive() error {
	// Find project root by looking for go.mod
	dir, err := findProjectRoot()
	if err != nil {
		return fmt.Errorf("find project root: %w", err)
	}

	cmd := exec.Command("make", "build-archive")
	cmd.Dir = dir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("make build-archive failed: %w", err)
	}
	return nil
}

// findProjectRoot walks up from the current working directory to find go.mod.
func findProjectRoot() (string, error) {
	dir, err := os.Getwd()
	if err != nil {
		return "", err
	}
	for {
		if _, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil {
			return dir, nil
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return "", fmt.Errorf("could not find go.mod in any parent directory")
		}
		dir = parent
	}
}
// phase3UploadArchive uploads the binary archive to the genesis node, then fans out
// to the remaining nodes server-to-server (much faster than uploading from local machine).
func phase3UploadArchive(state *SandboxState, sshKeyPath, archivePath string) error {
	fmt.Printf(" Archive: %s\n", filepath.Base(archivePath))

	if err := fanoutArchive(state.Servers, sshKeyPath, archivePath); err != nil {
		return err
	}

	fmt.Println(" All nodes ready")
	return nil
}

// phase4InstallGenesis installs the genesis node.
func phase4InstallGenesis(cfg *Config, state *SandboxState, sshKeyPath string) error {
	genesis := state.GenesisServer()
	node := inspector.Node{User: "root", Host: genesis.IP, SSHKey: sshKeyPath}

	// Install genesis
	installCmd := fmt.Sprintf("/opt/orama/bin/orama node install --vps-ip %s --domain %s --base-domain %s --nameserver --anyone-client --skip-checks",
		genesis.IP, cfg.Domain, cfg.Domain)
	fmt.Printf(" Installing on %s (%s)...\n", genesis.Name, genesis.IP)
	if err := remotessh.RunSSHStreaming(node, installCmd, remotessh.WithNoHostKeyCheck()); err != nil {
		return fmt.Errorf("install genesis: %w", err)
	}

	// Wait for RQLite leader
	fmt.Print(" Waiting for RQLite leader...")
	if err := waitForRQLiteHealth(node, 3*time.Minute); err != nil {
		return fmt.Errorf("genesis health: %w", err)
	}
	fmt.Println(" OK")

	return nil
}
// phase5JoinNodes joins the remaining 4 nodes to the cluster (serial).
// Generates invite tokens just-in-time to avoid expiry during long installs.
func phase5JoinNodes(cfg *Config, state *SandboxState, sshKeyPath string) error {
	genesis := state.GenesisServer()
	genesisNode := inspector.Node{User: "root", Host: genesis.IP, SSHKey: sshKeyPath}

	for i := 1; i < len(state.Servers); i++ {
		srv := state.Servers[i]
		node := inspector.Node{User: "root", Host: srv.IP, SSHKey: sshKeyPath}

		// Generate token just before use to avoid expiry
		token, err := generateInviteToken(genesisNode)
		if err != nil {
			return fmt.Errorf("generate invite token for %s: %w", srv.Name, err)
		}

		var installCmd string
		if srv.Role == "nameserver" {
			installCmd = fmt.Sprintf("/opt/orama/bin/orama node install --join http://%s --token %s --vps-ip %s --domain %s --base-domain %s --nameserver --anyone-client --skip-checks",
				genesis.IP, token, srv.IP, cfg.Domain, cfg.Domain)
		} else {
			installCmd = fmt.Sprintf("/opt/orama/bin/orama node install --join http://%s --token %s --vps-ip %s --base-domain %s --anyone-client --skip-checks",
				genesis.IP, token, srv.IP, cfg.Domain)
		}

		fmt.Printf(" [%d/%d] Joining %s (%s, %s)...\n", i, len(state.Servers)-1, srv.Name, srv.IP, srv.Role)
		if err := remotessh.RunSSHStreaming(node, installCmd, remotessh.WithNoHostKeyCheck()); err != nil {
			return fmt.Errorf("join %s: %w", srv.Name, err)
		}

		// Wait for node health before proceeding
		fmt.Printf(" Waiting for %s health...", srv.Name)
		if err := waitForRQLiteHealth(node, 3*time.Minute); err != nil {
			fmt.Printf(" WARN: %v\n", err)
		} else {
			fmt.Println(" OK")
		}
	}

	return nil
}
// phase6Verify runs a basic cluster health check.
func phase6Verify(cfg *Config, state *SandboxState, sshKeyPath string) {
	genesis := state.GenesisServer()
	node := inspector.Node{User: "root", Host: genesis.IP, SSHKey: sshKeyPath}

	// Check RQLite cluster
	out, err := runSSHOutput(node, "curl -s http://localhost:5001/status | grep -o '\"state\":\"[^\"]*\"' | head -1")
	if err == nil {
		fmt.Printf(" RQLite: %s\n", strings.TrimSpace(out))
	}

	// Check DNS (if floating IPs configured, only with safe domain names)
	if len(cfg.FloatingIPs) > 0 && isSafeDNSName(cfg.Domain) {
		out, err = runSSHOutput(node, fmt.Sprintf("dig +short @%s test.%s 2>/dev/null || echo 'DNS not responding'",
			cfg.FloatingIPs[0].IP, cfg.Domain))
		if err == nil {
			fmt.Printf(" DNS: %s\n", strings.TrimSpace(out))
		}
	}
}

// waitForRQLiteHealth polls RQLite until it reports Leader or Follower state.
func waitForRQLiteHealth(node inspector.Node, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := runSSHOutput(node, "curl -sf http://localhost:5001/status 2>/dev/null | grep -o '\"state\":\"[^\"]*\"'")
		if err == nil {
			result := strings.TrimSpace(out)
			if strings.Contains(result, "Leader") || strings.Contains(result, "Follower") {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timeout waiting for RQLite health after %s", timeout)
}
// generateInviteToken runs `orama node invite` on the node and parses the token.
func generateInviteToken(node inspector.Node) (string, error) {
	out, err := runSSHOutput(node, "/opt/orama/bin/orama node invite --expiry 1h 2>&1")
	if err != nil {
		return "", fmt.Errorf("invite command failed: %w", err)
	}

	// Parse token from output — the invite command outputs:
	//   "sudo orama install --join https://... --token <64-char-hex> --vps-ip ..."
	// Look for the --token flag value first
	fields := strings.Fields(out)
	for i, field := range fields {
		if field == "--token" && i+1 < len(fields) {
			candidate := fields[i+1]
			if len(candidate) == 64 && isHex(candidate) {
				return candidate, nil
			}
		}
	}

	// Fallback: look for any standalone 64-char hex string
	for _, word := range fields {
		if len(word) == 64 && isHex(word) {
			return word, nil
		}
	}

	return "", fmt.Errorf("could not parse token from invite output:\n%s", out)
}

// isSafeDNSName returns true if the string is safe to use in shell commands.
func isSafeDNSName(s string) bool {
	for _, c := range s {
		if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '.' || c == '-') {
			return false
		}
	}
	return len(s) > 0
}

// isHex returns true if s contains only hex characters.
func isHex(s string) bool {
	for _, c := range s {
		if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
			return false
		}
	}
	return true
}
// runSSHOutput runs a command via SSH and returns stdout as a string.
|
|
||||||
// Uses StrictHostKeyChecking=no because sandbox IPs are frequently recycled.
|
|
||||||
func runSSHOutput(node inspector.Node, command string) (string, error) {
|
|
||||||
args := []string{
|
|
||||||
"ssh", "-n",
|
|
||||||
"-o", "StrictHostKeyChecking=no",
|
|
||||||
"-o", "UserKnownHostsFile=/dev/null",
|
|
||||||
"-o", "ConnectTimeout=10",
|
|
||||||
"-o", "BatchMode=yes",
|
|
||||||
"-i", node.SSHKey,
|
|
||||||
fmt.Sprintf("%s@%s", node.User, node.Host),
|
|
||||||
command,
|
|
||||||
}
|
|
||||||
|
|
||||||
out, err := execCommand(args[0], args[1:]...)
|
|
||||||
return string(out), err
|
|
||||||
}
|
|
||||||
|
|
||||||
// execCommand runs a command and returns its output.
|
|
||||||
func execCommand(name string, args ...string) ([]byte, error) {
|
|
||||||
return exec.Command(name, args...).Output()
|
|
||||||
}
|
|
||||||
|
|
||||||
// findNewestArchive finds the newest binary archive in /tmp/.
|
|
||||||
func findNewestArchive() string {
|
|
||||||
entries, err := os.ReadDir("/tmp")
|
|
||||||
if err != nil {
|
|
||||||
return ""
|
|
||||||
}
|
|
||||||
|
|
||||||
var best string
|
|
||||||
var bestMod int64
|
|
||||||
for _, entry := range entries {
|
|
||||||
name := entry.Name()
|
|
||||||
if strings.HasPrefix(name, "orama-") && strings.Contains(name, "-linux-") && strings.HasSuffix(name, ".tar.gz") {
|
|
||||||
info, err := entry.Info()
|
|
||||||
if err != nil {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if info.ModTime().Unix() > bestMod {
|
|
||||||
best = filepath.Join("/tmp", name)
|
|
||||||
bestMod = info.ModTime().Unix()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return best
|
|
||||||
}
|
|
||||||
|
|
||||||
// formatBytes formats a byte count as human-readable.
|
|
||||||
func formatBytes(b int64) string {
|
|
||||||
const unit = 1024
|
|
||||||
if b < unit {
|
|
||||||
return fmt.Sprintf("%d B", b)
|
|
||||||
}
|
|
||||||
div, exp := int64(unit), 0
|
|
||||||
for n := b / unit; n >= unit; n /= unit {
|
|
||||||
div *= unit
|
|
||||||
exp++
|
|
||||||
}
|
|
||||||
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
|
|
||||||
}
|
|
||||||
|
|
||||||
// printCreateSummary prints the cluster summary after creation.
|
|
||||||
func printCreateSummary(cfg *Config, state *SandboxState) {
|
|
||||||
fmt.Printf("\nSandbox %q ready (%d nodes)\n", state.Name, len(state.Servers))
|
|
||||||
fmt.Println()
|
|
||||||
|
|
||||||
fmt.Println("Nameservers:")
|
|
||||||
for _, srv := range state.NameserverNodes() {
|
|
||||||
floating := ""
|
|
||||||
if srv.FloatingIP != "" {
|
|
||||||
floating = fmt.Sprintf(" (floating: %s)", srv.FloatingIP)
|
|
||||||
}
|
|
||||||
fmt.Printf(" %s: %s%s\n", srv.Name, srv.IP, floating)
|
|
||||||
}
|
|
||||||
|
|
||||||
fmt.Println("Nodes:")
|
|
||||||
for _, srv := range state.RegularNodes() {
|
|
||||||
fmt.Printf(" %s: %s\n", srv.Name, srv.IP)
|
|
||||||
}
|
|
||||||
|
|
||||||
fmt.Println()
|
|
||||||
fmt.Printf("Domain: %s\n", cfg.Domain)
|
|
||||||
fmt.Printf("Gateway: https://%s\n", cfg.Domain)
|
|
||||||
fmt.Println()
|
|
||||||
fmt.Println("SSH: orama sandbox ssh 1")
|
|
||||||
fmt.Println("Destroy: orama sandbox destroy")
|
|
||||||
}
|
|
||||||
|
|
||||||
// cleanupFailedCreate deletes any servers that were created during a failed provision.
|
|
||||||
func cleanupFailedCreate(client *HetznerClient, state *SandboxState) {
|
|
||||||
if len(state.Servers) == 0 {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
fmt.Println("\nCleaning up failed creation...")
|
|
||||||
for _, srv := range state.Servers {
|
|
||||||
if srv.ID > 0 {
|
|
||||||
client.DeleteServer(srv.ID)
|
|
||||||
fmt.Printf(" Deleted %s\n", srv.Name)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
DeleteState(state.Name)
|
|
||||||
}
|
|
||||||
@@ -1,158 +0,0 @@
package sandbox

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"github.com/DeBrosOfficial/network/pkg/rwagent"
)

func TestFindProjectRoot_FromSubDir(t *testing.T) {
	// Create a temp dir with go.mod (resolve symlinks for macOS /private/var)
	root, _ := filepath.EvalSymlinks(t.TempDir())
	if err := os.WriteFile(filepath.Join(root, "go.mod"), []byte("module test"), 0644); err != nil {
		t.Fatal(err)
	}

	// Create a nested subdir
	sub := filepath.Join(root, "pkg", "foo")
	if err := os.MkdirAll(sub, 0755); err != nil {
		t.Fatal(err)
	}

	// Change to subdir and find root
	orig, _ := os.Getwd()
	defer os.Chdir(orig)
	os.Chdir(sub)

	got, err := findProjectRoot()
	if err != nil {
		t.Fatalf("findProjectRoot() error: %v", err)
	}
	if got != root {
		t.Errorf("findProjectRoot() = %q, want %q", got, root)
	}
}

func TestFindProjectRoot_NoGoMod(t *testing.T) {
	// Create a temp dir without go.mod
	dir := t.TempDir()

	orig, _ := os.Getwd()
	defer os.Chdir(orig)
	os.Chdir(dir)

	_, err := findProjectRoot()
	if err == nil {
		t.Error("findProjectRoot() should error when no go.mod exists")
	}
}

func TestFindNewestArchive_NoArchives(t *testing.T) {
	// findNewestArchive scans /tmp — just verify it returns "" when
	// no matching files exist (this is the normal case in CI).
	// We can't fully control /tmp, but we can verify the function doesn't crash.
	result := findNewestArchive()
	// Result is either "" or a valid path — both are acceptable
	if result != "" {
		if _, err := os.Stat(result); err != nil {
			t.Errorf("findNewestArchive() returned non-existent path: %s", result)
		}
	}
}

func TestIsSafeDNSName(t *testing.T) {
	tests := []struct {
		input string
		want  bool
	}{
		{"example.com", true},
		{"test-cluster.orama.network", true},
		{"a", true},
		{"", false},
		{"test;rm -rf /", false},
		{"test$(whoami)", false},
		{"test space", false},
		{"test_underscore", false},
		{"UPPER.case.OK", true},
		{"123.456", true},
	}
	for _, tt := range tests {
		got := isSafeDNSName(tt.input)
		if got != tt.want {
			t.Errorf("isSafeDNSName(%q) = %v, want %v", tt.input, got, tt.want)
		}
	}
}

func TestIsHex(t *testing.T) {
	tests := []struct {
		input string
		want  bool
	}{
		{"abcdef0123456789", true},
		{"ABCDEF", true},
		{"0", true},
		{"", true}, // vacuous truth, but guarded by len check in caller
		{"xyz", false},
		{"abcg", false},
		{"abc def", false},
	}
	for _, tt := range tests {
		got := isHex(tt.input)
		if got != tt.want {
			t.Errorf("isHex(%q) = %v, want %v", tt.input, got, tt.want)
		}
	}
}

func TestValidateAgentStatus_Locked(t *testing.T) {
	status := &rwagent.StatusResponse{Locked: true, ConnectedApps: 1}
	err := validateAgentStatus(status)
	if err == nil {
		t.Fatal("expected error for locked agent")
	}
	if !strings.Contains(err.Error(), "locked") {
		t.Errorf("error should mention locked, got: %v", err)
	}
}

func TestValidateAgentStatus_NoDesktopApp(t *testing.T) {
	status := &rwagent.StatusResponse{Locked: false, ConnectedApps: 0}
	err := validateAgentStatus(status)
	if err == nil {
		t.Fatal("expected error when no desktop app connected")
	}
	if !strings.Contains(err.Error(), "desktop app") {
		t.Errorf("error should mention desktop app, got: %v", err)
	}
}

func TestValidateAgentStatus_Ready(t *testing.T) {
	status := &rwagent.StatusResponse{Locked: false, ConnectedApps: 1}
	if err := validateAgentStatus(status); err != nil {
		t.Errorf("expected no error for ready agent, got: %v", err)
	}
}

func TestFormatBytes(t *testing.T) {
	tests := []struct {
		input int64
		want  string
	}{
		{0, "0 B"},
		{500, "500 B"},
		{1024, "1.0 KB"},
		{1536, "1.5 KB"},
		{1048576, "1.0 MB"},
		{1073741824, "1.0 GB"},
	}
	for _, tt := range tests {
		got := formatBytes(tt.input)
		if got != tt.want {
			t.Errorf("formatBytes(%d) = %q, want %q", tt.input, got, tt.want)
		}
	}
}
@@ -1,122 +0,0 @@
package sandbox

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"sync"
)

// Destroy tears down a sandbox cluster.
func Destroy(name string, force bool) error {
	cfg, err := LoadConfig()
	if err != nil {
		return err
	}

	// Resolve sandbox name
	state, err := resolveSandbox(name)
	if err != nil {
		return err
	}

	// Confirm destruction
	if !force {
		reader := bufio.NewReader(os.Stdin)
		fmt.Printf("Destroy sandbox %q? This deletes %d servers. [y/N]: ", state.Name, len(state.Servers))
		choice, _ := reader.ReadString('\n')
		choice = strings.TrimSpace(strings.ToLower(choice))
		if choice != "y" && choice != "yes" {
			fmt.Println("Aborted.")
			return nil
		}
	}

	state.Status = StatusDestroying
	SaveState(state) // best-effort status update

	client := NewHetznerClient(cfg.HetznerAPIToken)

	// Step 1: Unassign floating IPs from nameserver nodes
	fmt.Println("Unassigning floating IPs...")
	for _, srv := range state.NameserverNodes() {
		if srv.FloatingIP == "" {
			continue
		}
		// Find the floating IP ID from config
		for _, fip := range cfg.FloatingIPs {
			if fip.IP == srv.FloatingIP {
				if err := client.UnassignFloatingIP(fip.ID); err != nil {
					fmt.Fprintf(os.Stderr, " Warning: could not unassign floating IP %s: %v\n", fip.IP, err)
				} else {
					fmt.Printf(" Unassigned %s from %s\n", fip.IP, srv.Name)
				}
				break
			}
		}
	}

	// Step 2: Delete all servers in parallel
	fmt.Printf("Deleting %d servers...\n", len(state.Servers))
	var wg sync.WaitGroup
	var mu sync.Mutex
	var failed []string

	for _, srv := range state.Servers {
		wg.Add(1)
		go func(srv ServerState) {
			defer wg.Done()
			if err := client.DeleteServer(srv.ID); err != nil {
				// Treat 404 as already deleted (idempotent)
				if strings.Contains(err.Error(), "404") || strings.Contains(err.Error(), "not found") {
					fmt.Printf(" %s (ID %d): already deleted\n", srv.Name, srv.ID)
				} else {
					mu.Lock()
					failed = append(failed, fmt.Sprintf("%s (ID %d): %v", srv.Name, srv.ID, err))
					mu.Unlock()
					fmt.Fprintf(os.Stderr, " Warning: failed to delete %s: %v\n", srv.Name, err)
				}
			} else {
				fmt.Printf(" Deleted %s (ID %d)\n", srv.Name, srv.ID)
			}
		}(srv)
	}
	wg.Wait()

	if len(failed) > 0 {
		fmt.Fprintf(os.Stderr, "\nFailed to delete %d server(s):\n", len(failed))
		for _, f := range failed {
			fmt.Fprintf(os.Stderr, " %s\n", f)
		}
		fmt.Fprintf(os.Stderr, "\nManual cleanup: delete servers at https://console.hetzner.cloud\n")
		state.Status = StatusError
		SaveState(state)
		return fmt.Errorf("failed to delete %d server(s)", len(failed))
	}

	// Step 3: Remove state file
	if err := DeleteState(state.Name); err != nil {
		return fmt.Errorf("delete state: %w", err)
	}

	fmt.Printf("\nSandbox %q destroyed (%d servers deleted)\n", state.Name, len(state.Servers))
	return nil
}

// resolveSandbox finds a sandbox by name or returns the active one.
func resolveSandbox(name string) (*SandboxState, error) {
	if name != "" {
		return LoadState(name)
	}

	// Find the active sandbox
	active, err := FindActiveSandbox()
	if err != nil {
		return nil, err
	}
	if active == nil {
		return nil, fmt.Errorf("no active sandbox found, specify --name")
	}
	return active, nil
}
@@ -1,84 +0,0 @@
package sandbox

import (
	"fmt"
	"path/filepath"
	"sync"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// fanoutArchive uploads a binary archive to the first server, then fans out
// server-to-server in parallel to all remaining servers. This is much faster
// than uploading from the local machine to each node individually.
// After distribution, the archive is extracted on all nodes.
func fanoutArchive(servers []ServerState, sshKeyPath, archivePath string) error {
	remotePath := "/tmp/" + filepath.Base(archivePath)
	extractCmd := fmt.Sprintf("mkdir -p /opt/orama && tar xzf %s -C /opt/orama && rm -f %s",
		remotePath, remotePath)

	// Step 1: Upload from local machine to first node
	first := servers[0]
	firstNode := inspector.Node{User: "root", Host: first.IP, SSHKey: sshKeyPath}

	fmt.Printf(" Uploading to %s...\n", first.Name)
	if err := remotessh.UploadFile(firstNode, archivePath, remotePath, remotessh.WithNoHostKeyCheck()); err != nil {
		return fmt.Errorf("upload to %s: %w", first.Name, err)
	}

	// Step 2: Fan out from first node to remaining nodes in parallel (server-to-server)
	if len(servers) > 1 {
		fmt.Printf(" Fanning out from %s to %d nodes...\n", first.Name, len(servers)-1)

		// Temporarily upload SSH key for server-to-server SCP
		remoteKeyPath := "/tmp/.sandbox_key"
		if err := remotessh.UploadFile(firstNode, sshKeyPath, remoteKeyPath, remotessh.WithNoHostKeyCheck()); err != nil {
			return fmt.Errorf("upload SSH key to %s: %w", first.Name, err)
		}
		defer remotessh.RunSSHStreaming(firstNode, fmt.Sprintf("rm -f %s", remoteKeyPath), remotessh.WithNoHostKeyCheck())

		if err := remotessh.RunSSHStreaming(firstNode, fmt.Sprintf("chmod 600 %s", remoteKeyPath), remotessh.WithNoHostKeyCheck()); err != nil {
			return fmt.Errorf("chmod SSH key on %s: %w", first.Name, err)
		}

		var wg sync.WaitGroup
		errs := make([]error, len(servers))

		for i := 1; i < len(servers); i++ {
			wg.Add(1)
			go func(idx int, srv ServerState) {
				defer wg.Done()
				// SCP from first node to target
				scpCmd := fmt.Sprintf("scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i %s %s root@%s:%s",
					remoteKeyPath, remotePath, srv.IP, remotePath)
				if err := remotessh.RunSSHStreaming(firstNode, scpCmd, remotessh.WithNoHostKeyCheck()); err != nil {
					errs[idx] = fmt.Errorf("fanout to %s: %w", srv.Name, err)
					return
				}
				// Extract on target
				targetNode := inspector.Node{User: "root", Host: srv.IP, SSHKey: sshKeyPath}
				if err := remotessh.RunSSHStreaming(targetNode, extractCmd, remotessh.WithNoHostKeyCheck()); err != nil {
					errs[idx] = fmt.Errorf("extract on %s: %w", srv.Name, err)
					return
				}
				fmt.Printf(" Distributed to %s\n", srv.Name)
			}(i, servers[i])
		}
		wg.Wait()

		for _, err := range errs {
			if err != nil {
				return err
			}
		}
	}

	// Step 3: Extract on first node
	fmt.Printf(" Extracting on %s...\n", first.Name)
	if err := remotessh.RunSSHStreaming(firstNode, extractCmd, remotessh.WithNoHostKeyCheck()); err != nil {
		return fmt.Errorf("extract on %s: %w", first.Name, err)
	}

	return nil
}
@@ -1,538 +0,0 @@
package sandbox

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strconv"
	"time"
)

const hetznerBaseURL = "https://api.hetzner.cloud/v1"

// HetznerClient is a minimal Hetzner Cloud API client.
type HetznerClient struct {
	token      string
	httpClient *http.Client
}

// NewHetznerClient creates a new Hetzner API client.
func NewHetznerClient(token string) *HetznerClient {
	return &HetznerClient{
		token: token,
		httpClient: &http.Client{
			Timeout: 30 * time.Second,
		},
	}
}

// --- Request helpers ---

func (c *HetznerClient) doRequest(method, path string, body interface{}) ([]byte, int, error) {
	var bodyReader io.Reader
	if body != nil {
		data, err := json.Marshal(body)
		if err != nil {
			return nil, 0, fmt.Errorf("marshal request body: %w", err)
		}
		bodyReader = bytes.NewReader(data)
	}

	req, err := http.NewRequest(method, hetznerBaseURL+path, bodyReader)
	if err != nil {
		return nil, 0, fmt.Errorf("create request: %w", err)
	}

	req.Header.Set("Authorization", "Bearer "+c.token)
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}

	resp, err := c.httpClient.Do(req)
	if err != nil {
		return nil, 0, fmt.Errorf("request %s %s: %w", method, path, err)
	}
	defer resp.Body.Close()

	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp.StatusCode, fmt.Errorf("read response: %w", err)
	}

	return respBody, resp.StatusCode, nil
}

func (c *HetznerClient) get(path string) ([]byte, error) {
	body, status, err := c.doRequest("GET", path, nil)
	if err != nil {
		return nil, err
	}
	if status < 200 || status >= 300 {
		return nil, parseHetznerError(body, status)
	}
	return body, nil
}

func (c *HetznerClient) post(path string, payload interface{}) ([]byte, error) {
	body, status, err := c.doRequest("POST", path, payload)
	if err != nil {
		return nil, err
	}
	if status < 200 || status >= 300 {
		return nil, parseHetznerError(body, status)
	}
	return body, nil
}

func (c *HetznerClient) delete(path string) error {
	_, status, err := c.doRequest("DELETE", path, nil)
	if err != nil {
		return err
	}
	if status < 200 || status >= 300 {
		return fmt.Errorf("delete %s: HTTP %d", path, status)
	}
	return nil
}

// --- API types ---

// HetznerServer represents a Hetzner Cloud server.
type HetznerServer struct {
	ID         int64             `json:"id"`
	Name       string            `json:"name"`
	Status     string            `json:"status"` // initializing, running, off, ...
	PublicNet  HetznerPublicNet  `json:"public_net"`
	Labels     map[string]string `json:"labels"`
	ServerType struct {
		Name string `json:"name"`
	} `json:"server_type"`
}

// HetznerPublicNet holds public networking info for a server.
type HetznerPublicNet struct {
	IPv4 struct {
		IP string `json:"ip"`
	} `json:"ipv4"`
}

// HetznerFloatingIP represents a Hetzner floating IP.
type HetznerFloatingIP struct {
	ID           int64             `json:"id"`
	IP           string            `json:"ip"`
	Server       *int64            `json:"server"` // nil if unassigned
	Labels       map[string]string `json:"labels"`
	Description  string            `json:"description"`
	HomeLocation struct {
		Name string `json:"name"`
	} `json:"home_location"`
}

// HetznerSSHKey represents a Hetzner SSH key.
type HetznerSSHKey struct {
	ID          int64  `json:"id"`
	Name        string `json:"name"`
	Fingerprint string `json:"fingerprint"`
	PublicKey   string `json:"public_key"`
}

// HetznerFirewall represents a Hetzner firewall.
type HetznerFirewall struct {
	ID     int64             `json:"id"`
	Name   string            `json:"name"`
	Rules  []HetznerFWRule   `json:"rules"`
	Labels map[string]string `json:"labels"`
}

// HetznerFWRule represents a firewall rule.
type HetznerFWRule struct {
	Direction   string   `json:"direction"`
	Protocol    string   `json:"protocol"`
	Port        string   `json:"port"`
	SourceIPs   []string `json:"source_ips"`
	Description string   `json:"description,omitempty"`
}

// HetznerError represents an API error response.
type HetznerError struct {
	Error struct {
		Code    string `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

func parseHetznerError(body []byte, status int) error {
	var he HetznerError
	if err := json.Unmarshal(body, &he); err == nil && he.Error.Message != "" {
		return fmt.Errorf("hetzner API error (HTTP %d): %s — %s", status, he.Error.Code, he.Error.Message)
	}
	return fmt.Errorf("hetzner API error: HTTP %d", status)
}

// --- Server operations ---

// CreateServerRequest holds parameters for server creation.
type CreateServerRequest struct {
	Name       string            `json:"name"`
	ServerType string            `json:"server_type"`
	Image      string            `json:"image"`
	Location   string            `json:"location"`
	SSHKeys    []int64           `json:"ssh_keys"`
	Labels     map[string]string `json:"labels"`
	Firewalls  []struct {
		Firewall int64 `json:"firewall"`
	} `json:"firewalls,omitempty"`
}

// CreateServer creates a new server and returns it.
func (c *HetznerClient) CreateServer(req CreateServerRequest) (*HetznerServer, error) {
	body, err := c.post("/servers", req)
	if err != nil {
		return nil, fmt.Errorf("create server %q: %w", req.Name, err)
	}

	var resp struct {
		Server HetznerServer `json:"server"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse create server response: %w", err)
	}

	return &resp.Server, nil
}

// GetServer retrieves a server by ID.
func (c *HetznerClient) GetServer(id int64) (*HetznerServer, error) {
	body, err := c.get("/servers/" + strconv.FormatInt(id, 10))
	if err != nil {
		return nil, fmt.Errorf("get server %d: %w", id, err)
	}

	var resp struct {
		Server HetznerServer `json:"server"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse server response: %w", err)
	}

	return &resp.Server, nil
}

// DeleteServer deletes a server by ID.
func (c *HetznerClient) DeleteServer(id int64) error {
	return c.delete("/servers/" + strconv.FormatInt(id, 10))
}
// ListServersByLabel lists servers filtered by a label selector.
func (c *HetznerClient) ListServersByLabel(selector string) ([]HetznerServer, error) {
	body, err := c.get("/servers?label_selector=" + selector)
	if err != nil {
		return nil, fmt.Errorf("list servers: %w", err)
	}

	var resp struct {
		Servers []HetznerServer `json:"servers"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse servers response: %w", err)
	}

	return resp.Servers, nil
}

// WaitForServer polls until the server reaches "running" status.
func (c *HetznerClient) WaitForServer(id int64, timeout time.Duration) (*HetznerServer, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		srv, err := c.GetServer(id)
		if err != nil {
			return nil, err
		}
		if srv.Status == "running" {
			return srv, nil
		}
		time.Sleep(3 * time.Second)
	}
	return nil, fmt.Errorf("server %d did not reach running state within %s", id, timeout)
}

// --- Floating IP operations ---

// CreateFloatingIP creates a new floating IP.
func (c *HetznerClient) CreateFloatingIP(location, description string, labels map[string]string) (*HetznerFloatingIP, error) {
	payload := map[string]interface{}{
		"type":          "ipv4",
		"home_location": location,
		"description":   description,
		"labels":        labels,
	}

	body, err := c.post("/floating_ips", payload)
	if err != nil {
		return nil, fmt.Errorf("create floating IP: %w", err)
	}

	var resp struct {
		FloatingIP HetznerFloatingIP `json:"floating_ip"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse floating IP response: %w", err)
	}

	return &resp.FloatingIP, nil
}

// ListFloatingIPsByLabel lists floating IPs filtered by label.
func (c *HetznerClient) ListFloatingIPsByLabel(selector string) ([]HetznerFloatingIP, error) {
	body, err := c.get("/floating_ips?label_selector=" + selector)
	if err != nil {
		return nil, fmt.Errorf("list floating IPs: %w", err)
	}

	var resp struct {
		FloatingIPs []HetznerFloatingIP `json:"floating_ips"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse floating IPs response: %w", err)
	}

	return resp.FloatingIPs, nil
}

// AssignFloatingIP assigns a floating IP to a server.
func (c *HetznerClient) AssignFloatingIP(floatingIPID, serverID int64) error {
	payload := map[string]int64{"server": serverID}
	_, err := c.post("/floating_ips/"+strconv.FormatInt(floatingIPID, 10)+"/actions/assign", payload)
	if err != nil {
		return fmt.Errorf("assign floating IP %d to server %d: %w", floatingIPID, serverID, err)
	}
	return nil
}

// UnassignFloatingIP removes a floating IP assignment.
func (c *HetznerClient) UnassignFloatingIP(floatingIPID int64) error {
	_, err := c.post("/floating_ips/"+strconv.FormatInt(floatingIPID, 10)+"/actions/unassign", struct{}{})
	if err != nil {
		return fmt.Errorf("unassign floating IP %d: %w", floatingIPID, err)
	}
	return nil
}

// --- SSH Key operations ---

// UploadSSHKey uploads a public key to Hetzner.
func (c *HetznerClient) UploadSSHKey(name, publicKey string) (*HetznerSSHKey, error) {
	payload := map[string]string{
		"name":       name,
		"public_key": publicKey,
	}

	body, err := c.post("/ssh_keys", payload)
	if err != nil {
		return nil, fmt.Errorf("upload SSH key: %w", err)
	}

	var resp struct {
		SSHKey HetznerSSHKey `json:"ssh_key"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse SSH key response: %w", err)
	}

	return &resp.SSHKey, nil
}

// ListSSHKeysByFingerprint finds SSH keys matching a fingerprint.
func (c *HetznerClient) ListSSHKeysByFingerprint(fingerprint string) ([]HetznerSSHKey, error) {
	path := "/ssh_keys"
	if fingerprint != "" {
		path += "?fingerprint=" + fingerprint
	}
	body, err := c.get(path)
	if err != nil {
		return nil, fmt.Errorf("list SSH keys: %w", err)
	}

	var resp struct {
		SSHKeys []HetznerSSHKey `json:"ssh_keys"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse SSH keys response: %w", err)
	}

	return resp.SSHKeys, nil
}

// GetSSHKey retrieves an SSH key by ID.
func (c *HetznerClient) GetSSHKey(id int64) (*HetznerSSHKey, error) {
	body, err := c.get("/ssh_keys/" + strconv.FormatInt(id, 10))
	if err != nil {
		return nil, fmt.Errorf("get SSH key %d: %w", id, err)
	}

	var resp struct {
		SSHKey HetznerSSHKey `json:"ssh_key"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse SSH key response: %w", err)
	}

	return &resp.SSHKey, nil
}

// --- Firewall operations ---

// CreateFirewall creates a firewall with the given rules.
func (c *HetznerClient) CreateFirewall(name string, rules []HetznerFWRule, labels map[string]string) (*HetznerFirewall, error) {
	payload := map[string]interface{}{
		"name":   name,
		"rules":  rules,
		"labels": labels,
	}

	body, err := c.post("/firewalls", payload)
	if err != nil {
		return nil, fmt.Errorf("create firewall: %w", err)
	}

	var resp struct {
		Firewall HetznerFirewall `json:"firewall"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse firewall response: %w", err)
	}

	return &resp.Firewall, nil
}

// ListFirewallsByLabel lists firewalls filtered by label.
func (c *HetznerClient) ListFirewallsByLabel(selector string) ([]HetznerFirewall, error) {
	body, err := c.get("/firewalls?label_selector=" + selector)
	if err != nil {
		return nil, fmt.Errorf("list firewalls: %w", err)
	}

	var resp struct {
		Firewalls []HetznerFirewall `json:"firewalls"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, fmt.Errorf("parse firewalls response: %w", err)
	}

	return resp.Firewalls, nil
}

// DeleteFirewall deletes a firewall by ID.
func (c *HetznerClient) DeleteFirewall(id int64) error {
	return c.delete("/firewalls/" + strconv.FormatInt(id, 10))
}

// DeleteFloatingIP deletes a floating IP by ID.
func (c *HetznerClient) DeleteFloatingIP(id int64) error {
	return c.delete("/floating_ips/" + strconv.FormatInt(id, 10))
}

// DeleteSSHKey deletes an SSH key by ID.
func (c *HetznerClient) DeleteSSHKey(id int64) error {
	return c.delete("/ssh_keys/" + strconv.FormatInt(id, 10))
}

// --- Location & Server Type operations ---
|
|
||||||
|
|
||||||
// HetznerLocation represents a Hetzner datacenter location.
|
|
||||||
type HetznerLocation struct {
|
|
||||||
ID int64 `json:"id"`
|
|
||||||
Name string `json:"name"` // e.g., "fsn1", "nbg1", "hel1"
|
|
||||||
Description string `json:"description"` // e.g., "Falkenstein DC Park 1"
|
|
||||||
City string `json:"city"`
|
|
||||||
Country string `json:"country"` // ISO 3166-1 alpha-2
|
|
||||||
}
|
|
||||||
|
|
||||||
// HetznerServerType represents a Hetzner server type with pricing.
|
|
||||||
type HetznerServerType struct {
|
|
||||||
ID int64 `json:"id"`
|
|
||||||
Name string `json:"name"` // e.g., "cx22", "cx23"
|
|
||||||
Description string `json:"description"` // e.g., "CX23"
|
|
||||||
Cores int `json:"cores"`
|
|
||||||
Memory float64 `json:"memory"` // GB
|
|
||||||
Disk int `json:"disk"` // GB
|
|
||||||
Architecture string `json:"architecture"`
|
|
||||||
Deprecation *struct {
|
|
||||||
Announced string `json:"announced"`
|
|
||||||
UnavailableAfter string `json:"unavailable_after"`
|
|
||||||
} `json:"deprecation"` // nil = not deprecated
|
|
||||||
Prices []struct {
|
|
||||||
Location string `json:"location"`
|
|
||||||
Hourly struct {
|
|
||||||
Gross string `json:"gross"`
|
|
||||||
} `json:"price_hourly"`
|
|
||||||
Monthly struct {
|
|
||||||
Gross string `json:"gross"`
|
|
||||||
} `json:"price_monthly"`
|
|
||||||
} `json:"prices"`
|
|
||||||
}
|
|
||||||
|
|
||||||
// ListLocations returns all available Hetzner datacenter locations.
|
|
||||||
func (c *HetznerClient) ListLocations() ([]HetznerLocation, error) {
|
|
||||||
body, err := c.get("/locations")
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("list locations: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
var resp struct {
|
|
||||||
Locations []HetznerLocation `json:"locations"`
|
|
||||||
}
|
|
||||||
if err := json.Unmarshal(body, &resp); err != nil {
|
|
||||||
return nil, fmt.Errorf("parse locations response: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
return resp.Locations, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// ListServerTypes returns all available server types.
|
|
||||||
func (c *HetznerClient) ListServerTypes() ([]HetznerServerType, error) {
|
|
||||||
body, err := c.get("/server_types?per_page=50")
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("list server types: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
var resp struct {
|
|
||||||
ServerTypes []HetznerServerType `json:"server_types"`
|
|
||||||
}
|
|
||||||
if err := json.Unmarshal(body, &resp); err != nil {
|
|
||||||
return nil, fmt.Errorf("parse server types response: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
return resp.ServerTypes, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// --- Validation ---
|
|
||||||
|
|
||||||
// ValidateToken checks if the API token is valid by making a simple request.
|
|
||||||
func (c *HetznerClient) ValidateToken() error {
|
|
||||||
_, err := c.get("/servers?per_page=1")
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("invalid Hetzner API token: %w", err)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// --- Sandbox firewall rules ---
|
|
||||||
|
|
||||||
// SandboxFirewallRules returns the standard firewall rules for sandbox nodes.
|
|
||||||
func SandboxFirewallRules() []HetznerFWRule {
|
|
||||||
allIPv4 := []string{"0.0.0.0/0"}
|
|
||||||
allIPv6 := []string{"::/0"}
|
|
||||||
allIPs := append(allIPv4, allIPv6...)
|
|
||||||
|
|
||||||
return []HetznerFWRule{
|
|
||||||
{Direction: "in", Protocol: "tcp", Port: "22", SourceIPs: allIPs, Description: "SSH"},
|
|
||||||
{Direction: "in", Protocol: "tcp", Port: "53", SourceIPs: allIPs, Description: "DNS TCP"},
|
|
||||||
{Direction: "in", Protocol: "udp", Port: "53", SourceIPs: allIPs, Description: "DNS UDP"},
|
|
||||||
{Direction: "in", Protocol: "tcp", Port: "80", SourceIPs: allIPs, Description: "HTTP"},
|
|
||||||
{Direction: "in", Protocol: "tcp", Port: "443", SourceIPs: allIPs, Description: "HTTPS"},
|
|
||||||
{Direction: "in", Protocol: "udp", Port: "51820", SourceIPs: allIPs, Description: "WireGuard"},
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@ -1,303 +0,0 @@
package sandbox

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"
)

func TestValidateToken_Success(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") != "Bearer test-token" {
			t.Errorf("unexpected auth header: %s", r.Header.Get("Authorization"))
		}
		w.WriteHeader(200)
		json.NewEncoder(w).Encode(map[string]interface{}{"servers": []interface{}{}})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	if err := client.ValidateToken(); err != nil {
		t.Errorf("ValidateToken() error = %v, want nil", err)
	}
}

func TestValidateToken_InvalidToken(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(401)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"error": map[string]string{
				"code":    "unauthorized",
				"message": "unable to authenticate",
			},
		})
	}))
	defer srv.Close()

	client := newTestClient(srv, "bad-token")
	if err := client.ValidateToken(); err == nil {
		t.Error("ValidateToken() expected error for invalid token")
	}
}

func TestCreateServer(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != "POST" || r.URL.Path != "/v1/servers" {
			t.Errorf("unexpected request: %s %s", r.Method, r.URL.Path)
		}

		var req CreateServerRequest
		json.NewDecoder(r.Body).Decode(&req)

		if req.Name != "sbx-test-1" {
			t.Errorf("unexpected server name: %s", req.Name)
		}
		if req.ServerType != "cx22" {
			t.Errorf("unexpected server type: %s", req.ServerType)
		}

		w.WriteHeader(201)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"server": map[string]interface{}{
				"id":     12345,
				"name":   req.Name,
				"status": "initializing",
				"public_net": map[string]interface{}{
					"ipv4": map[string]string{"ip": "1.2.3.4"},
				},
				"labels":      req.Labels,
				"server_type": map[string]string{"name": "cx22"},
			},
		})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	server, err := client.CreateServer(CreateServerRequest{
		Name:       "sbx-test-1",
		ServerType: "cx22",
		Image:      "ubuntu-24.04",
		Location:   "fsn1",
		SSHKeys:    []int64{1},
		Labels:     map[string]string{"orama-sandbox": "test"},
	})

	if err != nil {
		t.Fatalf("CreateServer() error = %v", err)
	}
	if server.ID != 12345 {
		t.Errorf("server ID = %d, want 12345", server.ID)
	}
	if server.Name != "sbx-test-1" {
		t.Errorf("server name = %s, want sbx-test-1", server.Name)
	}
	if server.PublicNet.IPv4.IP != "1.2.3.4" {
		t.Errorf("server IP = %s, want 1.2.3.4", server.PublicNet.IPv4.IP)
	}
}

func TestDeleteServer(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != "DELETE" || r.URL.Path != "/v1/servers/12345" {
			t.Errorf("unexpected request: %s %s", r.Method, r.URL.Path)
		}
		w.WriteHeader(200)
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	if err := client.DeleteServer(12345); err != nil {
		t.Errorf("DeleteServer() error = %v", err)
	}
}

func TestListServersByLabel(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Query().Get("label_selector") != "orama-sandbox=test" {
			t.Errorf("unexpected label_selector: %s", r.URL.Query().Get("label_selector"))
		}
		w.WriteHeader(200)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"servers": []map[string]interface{}{
				{"id": 1, "name": "sbx-test-1", "status": "running", "public_net": map[string]interface{}{"ipv4": map[string]string{"ip": "1.1.1.1"}}, "server_type": map[string]string{"name": "cx22"}},
				{"id": 2, "name": "sbx-test-2", "status": "running", "public_net": map[string]interface{}{"ipv4": map[string]string{"ip": "2.2.2.2"}}, "server_type": map[string]string{"name": "cx22"}},
			},
		})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	servers, err := client.ListServersByLabel("orama-sandbox=test")
	if err != nil {
		t.Fatalf("ListServersByLabel() error = %v", err)
	}
	if len(servers) != 2 {
		t.Errorf("got %d servers, want 2", len(servers))
	}
}

func TestWaitForServer_AlreadyRunning(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(200)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"server": map[string]interface{}{
				"id":     1,
				"name":   "test",
				"status": "running",
				"public_net": map[string]interface{}{
					"ipv4": map[string]string{"ip": "1.1.1.1"},
				},
				"server_type": map[string]string{"name": "cx22"},
			},
		})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	server, err := client.WaitForServer(1, 5*time.Second)
	if err != nil {
		t.Fatalf("WaitForServer() error = %v", err)
	}
	if server.Status != "running" {
		t.Errorf("server status = %s, want running", server.Status)
	}
}

func TestAssignFloatingIP(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != "POST" || r.URL.Path != "/v1/floating_ips/100/actions/assign" {
			t.Errorf("unexpected request: %s %s", r.Method, r.URL.Path)
		}

		var body map[string]int64
		json.NewDecoder(r.Body).Decode(&body)
		if body["server"] != 200 {
			t.Errorf("unexpected server ID: %d", body["server"])
		}

		w.WriteHeader(200)
		json.NewEncoder(w).Encode(map[string]interface{}{"action": map[string]interface{}{"id": 1, "status": "running"}})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	if err := client.AssignFloatingIP(100, 200); err != nil {
		t.Errorf("AssignFloatingIP() error = %v", err)
	}
}

func TestUploadSSHKey(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != "POST" || r.URL.Path != "/v1/ssh_keys" {
			t.Errorf("unexpected request: %s %s", r.Method, r.URL.Path)
		}
		w.WriteHeader(201)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"ssh_key": map[string]interface{}{
				"id":          42,
				"name":        "orama-sandbox",
				"fingerprint": "aa:bb:cc:dd",
				"public_key":  "ssh-ed25519 AAAA...",
			},
		})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	key, err := client.UploadSSHKey("orama-sandbox", "ssh-ed25519 AAAA...")
	if err != nil {
		t.Fatalf("UploadSSHKey() error = %v", err)
	}
	if key.ID != 42 {
		t.Errorf("key ID = %d, want 42", key.ID)
	}
}

func TestCreateFirewall(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != "POST" || r.URL.Path != "/v1/firewalls" {
			t.Errorf("unexpected request: %s %s", r.Method, r.URL.Path)
		}
		w.WriteHeader(201)
		json.NewEncoder(w).Encode(map[string]interface{}{
			"firewall": map[string]interface{}{
				"id":   99,
				"name": "orama-sandbox",
			},
		})
	}))
	defer srv.Close()

	client := newTestClient(srv, "test-token")
	fw, err := client.CreateFirewall("orama-sandbox", SandboxFirewallRules(), map[string]string{"orama-sandbox": "infra"})
	if err != nil {
		t.Fatalf("CreateFirewall() error = %v", err)
	}
	if fw.ID != 99 {
		t.Errorf("firewall ID = %d, want 99", fw.ID)
	}
}

func TestSandboxFirewallRules(t *testing.T) {
	rules := SandboxFirewallRules()
	if len(rules) != 6 {
		t.Errorf("got %d rules, want 6", len(rules))
	}

	expectedPorts := map[string]bool{"22": false, "53": false, "80": false, "443": false, "51820": false}
	for _, r := range rules {
		expectedPorts[r.Port] = true
		if r.Direction != "in" {
			t.Errorf("rule %s direction = %s, want in", r.Port, r.Direction)
		}
	}
	for port, seen := range expectedPorts {
		if !seen {
			t.Errorf("missing firewall rule for port %s", port)
		}
	}
}

func TestParseHetznerError(t *testing.T) {
	body := `{"error":{"code":"uniqueness_error","message":"server name already used"}}`
	err := parseHetznerError([]byte(body), 409)
	if err == nil {
		t.Fatal("expected error")
	}
	expected := "hetzner API error (HTTP 409): uniqueness_error — server name already used"
	if err.Error() != expected {
		t.Errorf("error = %q, want %q", err.Error(), expected)
	}
}

// newTestClient creates a HetznerClient pointing at a test server.
func newTestClient(ts *httptest.Server, token string) *HetznerClient {
	client := NewHetznerClient(token)
	// Override the base URL by using a custom transport
	client.httpClient = ts.Client()
	// We need to override the base URL — wrap the transport
	origTransport := client.httpClient.Transport
	client.httpClient.Transport = &testTransport{
		base:    origTransport,
		testURL: ts.URL,
	}
	return client
}

// testTransport rewrites requests to point at the test server.
type testTransport struct {
	base    http.RoundTripper
	testURL string
}

func (t *testTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Rewrite the URL to point at the test server
	req.URL.Scheme = "http"
	req.URL.Host = t.testURL[len("http://"):]
	if t.base != nil {
		return t.base.RoundTrip(req)
	}
	return http.DefaultTransport.RoundTrip(req)
}
@ -1,26 +0,0 @@
package sandbox

import (
	"math/rand"
)

var adjectives = []string{
	"swift", "bright", "calm", "dark", "eager",
	"fair", "gold", "hazy", "iron", "jade",
	"keen", "lush", "mild", "neat", "opal",
	"pure", "raw", "sage", "teal", "warm",
}

var nouns = []string{
	"falcon", "beacon", "cedar", "delta", "ember",
	"frost", "grove", "haven", "ivory", "jewel",
	"knot", "latch", "maple", "nexus", "orbit",
	"prism", "reef", "spark", "tide", "vault",
}

// GenerateName produces a random adjective-noun name like "swift-falcon".
func GenerateName() string {
	adj := adjectives[rand.Intn(len(adjectives))]
	noun := nouns[rand.Intn(len(nouns))]
	return adj + "-" + noun
}
@ -1,119 +0,0 @@
package sandbox

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Reset tears down all sandbox infrastructure (floating IPs, firewall, SSH key)
// and removes the config file so the user can rerun setup from scratch.
// This is useful when switching datacenter locations (floating IPs are location-bound).
func Reset() error {
	fmt.Println("Sandbox Reset")
	fmt.Println("=============")
	fmt.Println()

	cfg, err := LoadConfig()
	if err != nil {
		// Config doesn't exist — just clean up any local files
		fmt.Println("No sandbox config found. Cleaning up local files...")
		return resetLocalFiles()
	}

	// Check for active sandboxes — refuse to reset if clusters are still running
	active, _ := FindActiveSandbox()
	if active != nil {
		return fmt.Errorf("active sandbox %q exists — run 'orama sandbox destroy' first", active.Name)
	}

	// Show what will be deleted
	fmt.Println("This will delete the following Hetzner resources:")
	for i, fip := range cfg.FloatingIPs {
		fmt.Printf("  Floating IP %d: %s (ID: %d)\n", i+1, fip.IP, fip.ID)
	}
	if cfg.FirewallID != 0 {
		fmt.Printf("  Firewall ID: %d\n", cfg.FirewallID)
	}
	if cfg.SSHKey.HetznerID != 0 {
		fmt.Printf("  SSH Key ID: %d\n", cfg.SSHKey.HetznerID)
	}
	fmt.Println()
	fmt.Println("Local files to remove:")
	fmt.Println("  ~/.orama/sandbox.yaml")
	fmt.Println()

	reader := bufio.NewReader(os.Stdin)
	fmt.Print("Delete all sandbox resources? [y/N]: ")
	choice, _ := reader.ReadString('\n')
	choice = strings.TrimSpace(strings.ToLower(choice))
	if choice != "y" && choice != "yes" {
		fmt.Println("Aborted.")
		return nil
	}

	client := NewHetznerClient(cfg.HetznerAPIToken)

	// Step 1: Delete floating IPs
	fmt.Println()
	fmt.Println("Deleting floating IPs...")
	for _, fip := range cfg.FloatingIPs {
		if err := client.DeleteFloatingIP(fip.ID); err != nil {
			fmt.Fprintf(os.Stderr, "  Warning: could not delete floating IP %s (ID %d): %v\n", fip.IP, fip.ID, err)
		} else {
			fmt.Printf("  Deleted %s (ID %d)\n", fip.IP, fip.ID)
		}
	}

	// Step 2: Delete firewall
	if cfg.FirewallID != 0 {
		fmt.Println("Deleting firewall...")
		if err := client.DeleteFirewall(cfg.FirewallID); err != nil {
			fmt.Fprintf(os.Stderr, "  Warning: could not delete firewall (ID %d): %v\n", cfg.FirewallID, err)
		} else {
			fmt.Printf("  Deleted firewall (ID %d)\n", cfg.FirewallID)
		}
	}

	// Step 3: Delete SSH key from Hetzner
	if cfg.SSHKey.HetznerID != 0 {
		fmt.Println("Deleting SSH key from Hetzner...")
		if err := client.DeleteSSHKey(cfg.SSHKey.HetznerID); err != nil {
			fmt.Fprintf(os.Stderr, "  Warning: could not delete SSH key (ID %d): %v\n", cfg.SSHKey.HetznerID, err)
		} else {
			fmt.Printf("  Deleted SSH key (ID %d)\n", cfg.SSHKey.HetznerID)
		}
	}

	// Step 4: Remove local files
	if err := resetLocalFiles(); err != nil {
		return err
	}

	fmt.Println()
	fmt.Println("Reset complete. All sandbox resources deleted.")
	fmt.Println()
	fmt.Println("Next: orama sandbox setup")
	return nil
}

// resetLocalFiles removes the sandbox config file.
func resetLocalFiles() error {
	dir, err := configDir()
	if err != nil {
		return err
	}

	configFile := dir + "/sandbox.yaml"
	fmt.Println("Removing local files...")
	if err := os.Remove(configFile); err != nil {
		if !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "  Warning: could not remove %s: %v\n", configFile, err)
		}
	} else {
		fmt.Printf("  Removed %s\n", configFile)
	}

	return nil
}
@ -1,162 +0,0 @@
package sandbox

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// RolloutFlags holds optional flags passed through to `orama node upgrade`.
type RolloutFlags struct {
	AnyoneClient bool
}

// Rollout builds, pushes, and performs a rolling upgrade on a sandbox cluster.
func Rollout(name string, flags RolloutFlags) error {
	cfg, err := LoadConfig()
	if err != nil {
		return err
	}

	state, err := resolveSandbox(name)
	if err != nil {
		return err
	}

	sshKeyPath, cleanup, err := resolveVaultKeyOnce(cfg.SSHKey.VaultTarget)
	if err != nil {
		return fmt.Errorf("prepare SSH key: %w", err)
	}
	defer cleanup()

	fmt.Printf("Rolling out to sandbox %q (%d nodes)\n\n", state.Name, len(state.Servers))

	// Step 1: Find or require binary archive
	archivePath := findNewestArchive()
	if archivePath == "" {
		return fmt.Errorf("no binary archive found in /tmp/ (run `orama build` first)")
	}

	info, _ := os.Stat(archivePath)
	fmt.Printf("Archive: %s (%s)\n\n", filepath.Base(archivePath), formatBytes(info.Size()))

	// Build extra flags string for upgrade command
	extraFlags := flags.upgradeFlags()

	// Step 2: Push archive to all nodes (upload to first, fan out server-to-server)
	fmt.Println("Pushing archive to all nodes...")
	if err := fanoutArchive(state.Servers, sshKeyPath, archivePath); err != nil {
		return err
	}

	// Step 3: Rolling upgrade — followers first, leader last
	fmt.Println("\nRolling upgrade (followers first, leader last)...")

	// Find the leader
	leaderIdx := findLeaderIndex(state, sshKeyPath)
	if leaderIdx < 0 {
		fmt.Fprintf(os.Stderr, "  Warning: could not detect RQLite leader, upgrading in order\n")
	}

	// Upgrade non-leaders first
	for i, srv := range state.Servers {
		if i == leaderIdx {
			continue // skip leader, do it last
		}
		if err := upgradeNode(srv, sshKeyPath, i+1, len(state.Servers), extraFlags); err != nil {
			return err
		}
		// Wait between nodes
		if i < len(state.Servers)-1 {
			fmt.Printf("  Waiting 15s before next node...\n")
			time.Sleep(15 * time.Second)
		}
	}

	// Upgrade leader last
	if leaderIdx >= 0 {
		srv := state.Servers[leaderIdx]
		if err := upgradeNode(srv, sshKeyPath, len(state.Servers), len(state.Servers), extraFlags); err != nil {
			return err
		}
	}

	fmt.Printf("\nRollout complete for sandbox %q\n", state.Name)
	return nil
}

// upgradeFlags builds the extra CLI flags string for `orama node upgrade`.
func (f RolloutFlags) upgradeFlags() string {
	var parts []string
	if f.AnyoneClient {
		parts = append(parts, "--anyone-client")
	}
	return strings.Join(parts, " ")
}

// findLeaderIndex returns the index of the RQLite leader node, or -1 if unknown.
func findLeaderIndex(state *SandboxState, sshKeyPath string) int {
	for i, srv := range state.Servers {
		node := inspector.Node{User: "root", Host: srv.IP, SSHKey: sshKeyPath}
		out, err := runSSHOutput(node, "curl -sf http://localhost:5001/status 2>/dev/null | grep -o '\"state\":\"[^\"]*\"'")
		if err == nil && contains(out, "Leader") {
			return i
		}
	}
	return -1
}

// upgradeNode performs `orama node upgrade --restart` on a single node.
// It pre-replaces the orama CLI binary before running the upgrade command
// to avoid ETXTBSY ("text file busy") errors when the old binary doesn't
// have the os.Remove fix in copyBinary().
func upgradeNode(srv ServerState, sshKeyPath string, current, total int, extraFlags string) error {
	node := inspector.Node{User: "root", Host: srv.IP, SSHKey: sshKeyPath}

	fmt.Printf("  [%d/%d] Upgrading %s (%s)...\n", current, total, srv.Name, srv.IP)

	// Pre-replace the orama CLI so the upgrade runs the NEW binary (with ETXTBSY fix).
	// rm unlinks the old inode (kernel keeps it alive for the running process),
	// cp creates a fresh inode at the same path.
	preReplace := "rm -f /usr/local/bin/orama && cp /opt/orama/bin/orama /usr/local/bin/orama"
	if err := remotessh.RunSSHStreaming(node, preReplace, remotessh.WithNoHostKeyCheck()); err != nil {
		return fmt.Errorf("pre-replace orama binary on %s: %w", srv.Name, err)
	}

	upgradeCmd := "orama node upgrade --restart"
	if extraFlags != "" {
		upgradeCmd += " " + extraFlags
	}
	if err := remotessh.RunSSHStreaming(node, upgradeCmd, remotessh.WithNoHostKeyCheck()); err != nil {
		return fmt.Errorf("upgrade %s: %w", srv.Name, err)
	}

	// Wait for health
	fmt.Printf("  Checking health...")
	if err := waitForRQLiteHealth(node, 2*time.Minute); err != nil {
		fmt.Printf(" WARN: %v\n", err)
	} else {
		fmt.Println(" OK")
	}

	return nil
}

// contains checks if s contains substr.
func contains(s, substr string) bool {
	return len(s) >= len(substr) && findSubstring(s, substr)
}

func findSubstring(s, substr string) bool {
	for i := 0; i <= len(s)-len(substr); i++ {
		if s[i:i+len(substr)] == substr {
			return true
		}
	}
	return false
}
@@ -1,582 +0,0 @@
package sandbox

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"sort"
	"strconv"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/cli/remotessh"
)

// Setup runs the interactive sandbox setup wizard.
func Setup() error {
	fmt.Println("Orama Sandbox Setup")
	fmt.Println("====================")
	fmt.Println()

	reader := bufio.NewReader(os.Stdin)

	// Step 1: Hetzner API token
	fmt.Print("Hetzner Cloud API token: ")
	token, err := reader.ReadString('\n')
	if err != nil {
		return fmt.Errorf("read token: %w", err)
	}
	token = strings.TrimSpace(token)
	if token == "" {
		return fmt.Errorf("API token is required")
	}

	fmt.Print("  Validating token... ")
	client := NewHetznerClient(token)
	if err := client.ValidateToken(); err != nil {
		fmt.Println("FAILED")
		return fmt.Errorf("invalid token: %w", err)
	}
	fmt.Println("OK")
	fmt.Println()

	// Step 2: Domain
	fmt.Print("Sandbox domain (e.g., sbx.dbrs.space): ")
	domain, err := reader.ReadString('\n')
	if err != nil {
		return fmt.Errorf("read domain: %w", err)
	}
	domain = strings.TrimSpace(domain)
	if domain == "" {
		return fmt.Errorf("domain is required")
	}

	cfg := &Config{
		HetznerAPIToken: token,
		Domain:          domain,
	}

	// Step 3: Location selection
	fmt.Println()
	location, err := selectLocation(client, reader)
	if err != nil {
		return err
	}
	cfg.Location = location

	// Step 4: Server type selection
	fmt.Println()
	serverType, err := selectServerType(client, reader, location)
	if err != nil {
		return err
	}
	cfg.ServerType = serverType

	// Step 5: Floating IPs
	fmt.Println()
	fmt.Println("Checking floating IPs...")
	floatingIPs, err := setupFloatingIPs(client, cfg.Location)
	if err != nil {
		return err
	}
	cfg.FloatingIPs = floatingIPs

	// Step 6: Firewall
	fmt.Println()
	fmt.Println("Checking firewall...")
	fwID, err := setupFirewall(client)
	if err != nil {
		return err
	}
	cfg.FirewallID = fwID

	// Step 7: SSH key
	fmt.Println()
	fmt.Println("Setting up SSH key...")
	sshKeyConfig, err := setupSSHKey(client)
	if err != nil {
		return err
	}
	cfg.SSHKey = sshKeyConfig

	// Step 8: Display DNS instructions
	fmt.Println()
	fmt.Println("DNS Configuration")
	fmt.Println("-----------------")
	fmt.Println("Configure the following at your domain registrar:")
	fmt.Println()
	fmt.Printf("  1. Add glue records (Personal DNS Servers):\n")
	fmt.Printf("     ns1.%s -> %s\n", domain, cfg.FloatingIPs[0].IP)
	fmt.Printf("     ns2.%s -> %s\n", domain, cfg.FloatingIPs[1].IP)
	fmt.Println()
	fmt.Printf("  2. Set custom nameservers for %s:\n", domain)
	fmt.Printf("     ns1.%s\n", domain)
	fmt.Printf("     ns2.%s\n", domain)
	fmt.Println()

	// Step 9: Verify DNS (optional)
	fmt.Print("Verify DNS now? [y/N]: ")
	verifyChoice, _ := reader.ReadString('\n')
	verifyChoice = strings.TrimSpace(strings.ToLower(verifyChoice))
	if verifyChoice == "y" || verifyChoice == "yes" {
		verifyDNS(domain, cfg.FloatingIPs, reader)
	}

	// Save config
	if err := SaveConfig(cfg); err != nil {
		return fmt.Errorf("save config: %w", err)
	}

	fmt.Println()
	fmt.Println("Setup complete! Config saved to ~/.orama/sandbox.yaml")
	fmt.Println()
	fmt.Println("Next: orama sandbox create")
	return nil
}

// selectLocation fetches available Hetzner locations and lets the user pick one.
func selectLocation(client *HetznerClient, reader *bufio.Reader) (string, error) {
	fmt.Println("Fetching available locations...")
	locations, err := client.ListLocations()
	if err != nil {
		return "", fmt.Errorf("list locations: %w", err)
	}

	sort.Slice(locations, func(i, j int) bool {
		return locations[i].Name < locations[j].Name
	})

	defaultLoc := "nbg1"
	fmt.Println("  Available datacenter locations:")
	for i, loc := range locations {
		def := ""
		if loc.Name == defaultLoc {
			def = " (default)"
		}
		fmt.Printf("    %d) %s — %s, %s%s\n", i+1, loc.Name, loc.City, loc.Country, def)
	}

	fmt.Printf("\n  Select location [%s]: ", defaultLoc)
	choice, _ := reader.ReadString('\n')
	choice = strings.TrimSpace(choice)

	if choice == "" {
		fmt.Printf("  Using %s\n", defaultLoc)
		return defaultLoc, nil
	}

	// Try as number first
	if num, err := strconv.Atoi(choice); err == nil && num >= 1 && num <= len(locations) {
		loc := locations[num-1].Name
		fmt.Printf("  Using %s\n", loc)
		return loc, nil
	}

	// Try as location name
	for _, loc := range locations {
		if strings.EqualFold(loc.Name, choice) {
			fmt.Printf("  Using %s\n", loc.Name)
			return loc.Name, nil
		}
	}

	return "", fmt.Errorf("unknown location %q", choice)
}

// selectServerType fetches available server types for a location and lets the user pick one.
func selectServerType(client *HetznerClient, reader *bufio.Reader, location string) (string, error) {
	fmt.Println("Fetching available server types...")
	serverTypes, err := client.ListServerTypes()
	if err != nil {
		return "", fmt.Errorf("list server types: %w", err)
	}

	// Filter to x86 shared-vCPU types available at the selected location, skip deprecated
	type option struct {
		name    string
		cores   int
		memory  float64
		disk    int
		hourly  string
		monthly string
	}

	var options []option
	for _, st := range serverTypes {
		if st.Architecture != "x86" {
			continue
		}
		if st.Deprecation != nil {
			continue
		}
		// Only show shared-vCPU types (cx/cpx prefixes) — skip dedicated (ccx/cx5x)
		if !strings.HasPrefix(st.Name, "cx") && !strings.HasPrefix(st.Name, "cpx") {
			continue
		}

		// Find pricing for the selected location
		hourly, monthly := "", ""
		for _, p := range st.Prices {
			if p.Location == location {
				hourly = p.Hourly.Gross
				monthly = p.Monthly.Gross
				break
			}
		}
		if hourly == "" {
			continue // Not available in this location
		}

		options = append(options, option{
			name:    st.Name,
			cores:   st.Cores,
			memory:  st.Memory,
			disk:    st.Disk,
			hourly:  hourly,
			monthly: monthly,
		})
	}

	if len(options) == 0 {
		return "", fmt.Errorf("no server types available in %s", location)
	}

	// Sort by hourly price (cheapest first)
	sort.Slice(options, func(i, j int) bool {
		pi, _ := strconv.ParseFloat(options[i].hourly, 64)
		pj, _ := strconv.ParseFloat(options[j].hourly, 64)
		return pi < pj
	})

	defaultType := options[0].name // cheapest
	fmt.Printf("  Available server types in %s:\n", location)
	for i, opt := range options {
		def := ""
		if opt.name == defaultType {
			def = " (default)"
		}
		fmt.Printf("    %d) %-8s %d vCPU / %4.0f GB RAM / %3d GB disk — €%s/hr (€%s/mo)%s\n",
			i+1, opt.name, opt.cores, opt.memory, opt.disk, formatPrice(opt.hourly), formatPrice(opt.monthly), def)
	}

	fmt.Printf("\n  Select server type [%s]: ", defaultType)
	choice, _ := reader.ReadString('\n')
	choice = strings.TrimSpace(choice)

	if choice == "" {
		fmt.Printf("  Using %s (×5 nodes ≈ €%s/hr)\n", defaultType, multiplyPrice(options[0].hourly, 5))
		return defaultType, nil
	}

	// Try as number
	if num, err := strconv.Atoi(choice); err == nil && num >= 1 && num <= len(options) {
		opt := options[num-1]
		fmt.Printf("  Using %s (×5 nodes ≈ €%s/hr)\n", opt.name, multiplyPrice(opt.hourly, 5))
		return opt.name, nil
	}

	// Try as name
	for _, opt := range options {
		if strings.EqualFold(opt.name, choice) {
			fmt.Printf("  Using %s (×5 nodes ≈ €%s/hr)\n", opt.name, multiplyPrice(opt.hourly, 5))
			return opt.name, nil
		}
	}

	return "", fmt.Errorf("unknown server type %q", choice)
}

// formatPrice trims trailing zeros from a price string like "0.0063000000000000" → "0.0063".
func formatPrice(price string) string {
	f, err := strconv.ParseFloat(price, 64)
	if err != nil {
		return price
	}
	// Use enough precision then trim trailing zeros
	s := fmt.Sprintf("%.4f", f)
	s = strings.TrimRight(s, "0")
	s = strings.TrimRight(s, ".")
	return s
}

// multiplyPrice multiplies a price string by n and returns it formatted.
func multiplyPrice(price string, n int) string {
	f, err := strconv.ParseFloat(price, 64)
	if err != nil {
		return "?"
	}
	return formatPrice(fmt.Sprintf("%.10f", f*float64(n)))
}

// setupFloatingIPs checks for existing floating IPs or creates new ones.
func setupFloatingIPs(client *HetznerClient, location string) ([]FloatIP, error) {
	existing, err := client.ListFloatingIPsByLabel("orama-sandbox-dns=true")
	if err != nil {
		return nil, fmt.Errorf("list floating IPs: %w", err)
	}

	if len(existing) >= 2 {
		fmt.Printf("  Found %d existing floating IPs:\n", len(existing))
		result := make([]FloatIP, 2)
		for i := 0; i < 2; i++ {
			fmt.Printf("    ns%d: %s (ID: %d)\n", i+1, existing[i].IP, existing[i].ID)
			result[i] = FloatIP{ID: existing[i].ID, IP: existing[i].IP}
		}
		return result, nil
	}

	// Need to create missing floating IPs
	needed := 2 - len(existing)
	fmt.Printf("  Need to create %d floating IP(s)...\n", needed)

	reader := bufio.NewReader(os.Stdin)
	fmt.Printf("  Create %d floating IP(s) in %s? (~$0.005/hr each) [Y/n]: ", needed, location)
	choice, _ := reader.ReadString('\n')
	choice = strings.TrimSpace(strings.ToLower(choice))
	if choice == "n" || choice == "no" {
		return nil, fmt.Errorf("floating IPs required, aborting setup")
	}

	result := make([]FloatIP, 0, 2)
	for _, fip := range existing {
		result = append(result, FloatIP{ID: fip.ID, IP: fip.IP})
	}

	for i := len(existing); i < 2; i++ {
		desc := fmt.Sprintf("orama-sandbox-ns%d", i+1)
		labels := map[string]string{"orama-sandbox-dns": "true"}
		fip, err := client.CreateFloatingIP(location, desc, labels)
		if err != nil {
			return nil, fmt.Errorf("create floating IP %d: %w", i+1, err)
		}
		fmt.Printf("    Created ns%d: %s (ID: %d)\n", i+1, fip.IP, fip.ID)
		result = append(result, FloatIP{ID: fip.ID, IP: fip.IP})
	}

	return result, nil
}

// setupFirewall ensures a sandbox firewall exists.
func setupFirewall(client *HetznerClient) (int64, error) {
	existing, err := client.ListFirewallsByLabel("orama-sandbox=infra")
	if err != nil {
		return 0, fmt.Errorf("list firewalls: %w", err)
	}

	if len(existing) > 0 {
		fmt.Printf("  Found existing firewall: %s (ID: %d)\n", existing[0].Name, existing[0].ID)
		return existing[0].ID, nil
	}

	fmt.Print("  Creating sandbox firewall... ")
	fw, err := client.CreateFirewall(
		"orama-sandbox",
		SandboxFirewallRules(),
		map[string]string{"orama-sandbox": "infra"},
	)
	if err != nil {
		fmt.Println("FAILED")
		return 0, fmt.Errorf("create firewall: %w", err)
	}
	fmt.Printf("OK (ID: %d)\n", fw.ID)
	return fw.ID, nil
}

// setupSSHKey ensures a wallet SSH entry exists and uploads its public key to Hetzner.
func setupSSHKey(client *HetznerClient) (SSHKeyConfig, error) {
	const vaultTarget = "sandbox/root"

	// Ensure wallet entry exists (creates if missing)
	fmt.Print("  Ensuring wallet SSH entry... ")
	if err := remotessh.EnsureVaultEntry(vaultTarget); err != nil {
		fmt.Println("FAILED")
		return SSHKeyConfig{}, fmt.Errorf("ensure vault entry: %w", err)
	}
	fmt.Println("OK")

	// Get public key from wallet
	fmt.Print("  Resolving public key from wallet... ")
	pubStr, err := remotessh.ResolveVaultPublicKey(vaultTarget)
	if err != nil {
		fmt.Println("FAILED")
		return SSHKeyConfig{}, fmt.Errorf("resolve public key: %w", err)
	}
	fmt.Println("OK")

	// Upload to Hetzner (will fail with a uniqueness error if already exists)
	fmt.Print("  Uploading to Hetzner... ")
	key, err := client.UploadSSHKey("orama-sandbox", pubStr)
	if err != nil {
		// Key may already exist on Hetzner — check if it matches the current vault key
		existing, listErr := client.ListSSHKeysByFingerprint("")
		if listErr == nil {
			for _, k := range existing {
				if sshKeyDataEqual(k.PublicKey, pubStr) {
					// Key data matches — safe to reuse regardless of name
					fmt.Printf("already exists (ID: %d)\n", k.ID)
					return SSHKeyConfig{
						HetznerID:   k.ID,
						VaultTarget: vaultTarget,
					}, nil
				}
				if k.Name == "orama-sandbox" {
					// Name matches but key data differs — vault key was rotated.
					// Delete the stale Hetzner key so we can re-upload the current one.
					fmt.Print("stale key detected, replacing... ")
					if delErr := client.DeleteSSHKey(k.ID); delErr != nil {
						fmt.Println("FAILED")
						return SSHKeyConfig{}, fmt.Errorf("delete stale SSH key (ID %d): %w", k.ID, delErr)
					}
					// Re-upload with current vault key
					newKey, uploadErr := client.UploadSSHKey("orama-sandbox", pubStr)
					if uploadErr != nil {
						fmt.Println("FAILED")
						return SSHKeyConfig{}, fmt.Errorf("re-upload SSH key: %w", uploadErr)
					}
					fmt.Printf("OK (ID: %d)\n", newKey.ID)
					return SSHKeyConfig{
						HetznerID:   newKey.ID,
						VaultTarget: vaultTarget,
					}, nil
				}
			}
		}

		fmt.Println("FAILED")
		return SSHKeyConfig{}, fmt.Errorf("upload SSH key: %w", err)
	}
	fmt.Printf("OK (ID: %d)\n", key.ID)

	return SSHKeyConfig{
		HetznerID:   key.ID,
		VaultTarget: vaultTarget,
	}, nil
}

// sshKeyDataEqual compares two SSH public key strings by their key type and
// data, ignoring the optional comment field.
func sshKeyDataEqual(a, b string) bool {
	partsA := strings.Fields(strings.TrimSpace(a))
	partsB := strings.Fields(strings.TrimSpace(b))
	if len(partsA) < 2 || len(partsB) < 2 {
		return false
	}
	return partsA[0] == partsB[0] && partsA[1] == partsB[1]
}

// verifyDNS checks if glue records for the sandbox domain are configured.
//
// There's a chicken-and-egg problem: NS records can't fully resolve until
// CoreDNS is running on the floating IPs (which requires a sandbox cluster).
// So instead of resolving NS → A records, we check for glue records at the
// TLD level, which proves the registrar configuration is correct.
func verifyDNS(domain string, floatingIPs []FloatIP, reader *bufio.Reader) {
	expectedIPs := make(map[string]bool)
	for _, fip := range floatingIPs {
		expectedIPs[fip.IP] = true
	}

	// Find the TLD nameserver to query for glue records
	findTLDServer := func() string {
		// For "dbrs.space", the TLD is "space." — ask the root for its NS
		parts := strings.Split(domain, ".")
		if len(parts) < 2 {
			return ""
		}
		tld := parts[len(parts)-1]
		out, err := exec.Command("dig", "+short", "NS", tld+".", "@8.8.8.8").Output()
		if err != nil {
			return ""
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) > 0 && lines[0] != "" {
			return strings.TrimSpace(lines[0])
		}
		return ""
	}

	check := func() (glueFound bool, foundIPs []string) {
		tldNS := findTLDServer()
		if tldNS == "" {
			return false, nil
		}

		// Query the TLD nameserver for NS + glue of our domain.
		// dig NS domain @tld-server will include glue in the ADDITIONAL section.
		out, err := exec.Command("dig", "NS", domain, "@"+tldNS, "+norecurse", "+additional").Output()
		if err != nil {
			return false, nil
		}

		output := string(out)
		remaining := make(map[string]bool)
		for k, v := range expectedIPs {
			remaining[k] = v
		}

		// Look for our floating IPs in the ADDITIONAL section (glue records)
		// or anywhere in the response
		for _, fip := range floatingIPs {
			if strings.Contains(output, fip.IP) {
				foundIPs = append(foundIPs, fip.IP)
				delete(remaining, fip.IP)
			}
		}

		return len(remaining) == 0, foundIPs
	}

	fmt.Printf("  Checking glue records for %s at TLD nameserver...\n", domain)
	matched, foundIPs := check()

	if matched {
		fmt.Println("  ✓ Glue records configured correctly:")
		for i, ip := range foundIPs {
			fmt.Printf("    ns%d.%s → %s\n", i+1, domain, ip)
		}
		fmt.Println()
		fmt.Println("  Note: Full DNS resolution will work once a sandbox is running")
		fmt.Println("  (CoreDNS on the floating IPs needs to be up to answer queries).")
		return
	}

	if len(foundIPs) > 0 {
		fmt.Println("  ⚠ Partial glue records found:")
		for _, ip := range foundIPs {
			fmt.Printf("    %s\n", ip)
		}
		fmt.Println("  Missing floating IPs in glue:")
		// Compute the missing set here: expectedIPs contains every floating IP,
		// so subtract the ones check() actually found before printing.
		missing := make(map[string]bool)
		for ip := range expectedIPs {
			missing[ip] = true
		}
		for _, ip := range foundIPs {
			delete(missing, ip)
		}
		for _, fip := range floatingIPs {
			if missing[fip.IP] {
				fmt.Printf("    %s\n", fip.IP)
			}
		}
	} else {
		fmt.Println("  ✗ No glue records found yet.")
		fmt.Println("  Make sure you configured at your registrar:")
		fmt.Printf("    ns1.%s → %s\n", domain, floatingIPs[0].IP)
		fmt.Printf("    ns2.%s → %s\n", domain, floatingIPs[1].IP)
	}

	fmt.Println()
	fmt.Print("  Wait for glue propagation? (polls every 30s, Ctrl+C to stop) [y/N]: ")
	choice, _ := reader.ReadString('\n')
	choice = strings.TrimSpace(strings.ToLower(choice))
	if choice != "y" && choice != "yes" {
		fmt.Println("  Skipping. You can create the sandbox now — DNS will work once glue propagates.")
		return
	}

	fmt.Println("  Waiting for glue record propagation...")
	for i := 1; ; i++ {
		time.Sleep(30 * time.Second)
		matched, _ = check()
		if matched {
			fmt.Printf("\n  ✓ Glue records propagated after %d checks\n", i)
			fmt.Println("  You can now create a sandbox: orama sandbox create")
			return
		}
		fmt.Printf("  [%d] Not yet... checking again in 30s\n", i)
	}
}
@@ -1,82 +0,0 @@
package sandbox

import "testing"

func TestSSHKeyDataEqual(t *testing.T) {
	tests := []struct {
		name     string
		a        string
		b        string
		expected bool
	}{
		{
			name:     "identical keys",
			a:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest comment1",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest comment1",
			expected: true,
		},
		{
			name:     "same key different comments",
			a:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest vault",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest user@host",
			expected: true,
		},
		{
			name:     "same key one without comment",
			a:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest vault",
			expected: true,
		},
		{
			name:     "different key data",
			a:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoldkey vault",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBnewkey vault",
			expected: false,
		},
		{
			name:     "different key types",
			a:        "ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB vault",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest vault",
			expected: false,
		},
		{
			name:     "empty string a",
			a:        "",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest vault",
			expected: false,
		},
		{
			name:     "empty string b",
			a:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest vault",
			b:        "",
			expected: false,
		},
		{
			name:     "both empty",
			a:        "",
			b:        "",
			expected: false,
		},
		{
			name:     "single field only",
			a:        "ssh-ed25519",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest",
			expected: false,
		},
		{
			name:     "whitespace trimming",
			a:        "  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest vault  ",
			b:        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtest",
			expected: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := sshKeyDataEqual(tt.a, tt.b)
			if got != tt.expected {
				t.Errorf("sshKeyDataEqual(%q, %q) = %v, want %v", tt.a, tt.b, got, tt.expected)
			}
		})
	}
}
@@ -1,66 +0,0 @@
package sandbox

import (
	"fmt"
	"os"
	"os/exec"
)

// SSHInto opens an interactive SSH session to a sandbox node.
func SSHInto(name string, nodeNum int) error {
	cfg, err := LoadConfig()
	if err != nil {
		return err
	}

	state, err := resolveSandbox(name)
	if err != nil {
		return err
	}

	if nodeNum < 1 || nodeNum > len(state.Servers) {
		return fmt.Errorf("node number must be between 1 and %d", len(state.Servers))
	}

	srv := state.Servers[nodeNum-1]

	sshKeyPath, cleanup, err := resolveVaultKeyOnce(cfg.SSHKey.VaultTarget)
	if err != nil {
		return fmt.Errorf("prepare SSH key: %w", err)
	}

	fmt.Printf("Connecting to %s (%s, %s)...\n", srv.Name, srv.IP, srv.Role)

	// Find ssh binary
	sshBin, err := findSSHBinary()
	if err != nil {
		cleanup()
		return err
	}

	// Run SSH as a child process so cleanup runs after the session ends
	cmd := exec.Command(sshBin,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", sshKeyPath,
		fmt.Sprintf("root@%s", srv.IP),
	)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err = cmd.Run()
	cleanup()
	return err
}

// findSSHBinary locates the ssh binary at a set of common install paths.
func findSSHBinary() (string, error) {
	paths := []string{"/usr/bin/ssh", "/usr/local/bin/ssh", "/opt/homebrew/bin/ssh"}
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("ssh binary not found")
}
@@ -1,211 +0,0 @@
package sandbox

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/inspector"
	"gopkg.in/yaml.v3"
)

// SandboxStatus represents the lifecycle state of a sandbox.
type SandboxStatus string

const (
	StatusCreating   SandboxStatus = "creating"
	StatusRunning    SandboxStatus = "running"
	StatusDestroying SandboxStatus = "destroying"
	StatusError      SandboxStatus = "error"
)

// SandboxState holds the full state of an active sandbox cluster.
type SandboxState struct {
	Name      string        `yaml:"name"`
	CreatedAt time.Time     `yaml:"created_at"`
	Domain    string        `yaml:"domain"`
	Status    SandboxStatus `yaml:"status"`
	Servers   []ServerState `yaml:"servers"`
}

// ServerState holds the state of a single server in the sandbox.
type ServerState struct {
	ID         int64  `yaml:"id"`                    // Hetzner server ID
	Name       string `yaml:"name"`                  // e.g., sbx-feature-webrtc-1
	IP         string `yaml:"ip"`                    // Public IPv4
	Role       string `yaml:"role"`                  // "nameserver" or "node"
	FloatingIP string `yaml:"floating_ip,omitempty"` // Only for nameserver nodes
	WgIP       string `yaml:"wg_ip,omitempty"`       // WireGuard IP (populated after install)
}

// sandboxesDir returns ~/.orama/sandboxes/, creating it if needed.
func sandboxesDir() (string, error) {
	dir, err := configDir()
	if err != nil {
		return "", err
	}
	sbxDir := filepath.Join(dir, "sandboxes")
	if err := os.MkdirAll(sbxDir, 0700); err != nil {
		return "", fmt.Errorf("create sandboxes directory: %w", err)
	}
	return sbxDir, nil
}

// statePath returns the path for a sandbox's state file.
func statePath(name string) (string, error) {
	dir, err := sandboxesDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(dir, name+".yaml"), nil
}

// SaveState persists the sandbox state to disk.
func SaveState(state *SandboxState) error {
	path, err := statePath(state.Name)
	if err != nil {
		return err
	}

	data, err := yaml.Marshal(state)
	if err != nil {
		return fmt.Errorf("marshal state: %w", err)
	}

	if err := os.WriteFile(path, data, 0600); err != nil {
		return fmt.Errorf("write state: %w", err)
	}

	return nil
}

// LoadState reads a sandbox state from disk.
func LoadState(name string) (*SandboxState, error) {
	path, err := statePath(name)
	if err != nil {
		return nil, err
	}

	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, fmt.Errorf("sandbox %q not found", name)
		}
		return nil, fmt.Errorf("read state: %w", err)
	}

	var state SandboxState
	if err := yaml.Unmarshal(data, &state); err != nil {
		return nil, fmt.Errorf("parse state: %w", err)
	}

	return &state, nil
}

// DeleteState removes the sandbox state file.
func DeleteState(name string) error {
	path, err := statePath(name)
	if err != nil {
		return err
	}

	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("delete state: %w", err)
	}

	return nil
}

// ListStates returns all sandbox states from disk.
func ListStates() ([]*SandboxState, error) {
	dir, err := sandboxesDir()
	if err != nil {
		return nil, err
	}

	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, fmt.Errorf("read sandboxes directory: %w", err)
	}

	var states []*SandboxState
	for _, entry := range entries {
		if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".yaml") {
			continue
		}
		name := strings.TrimSuffix(entry.Name(), ".yaml")
		state, err := LoadState(name)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Warning: could not load sandbox %q: %v\n", name, err)
			continue
		}
		states = append(states, state)
	}

	return states, nil
}

// FindActiveSandbox returns the first sandbox in running or creating state.
// Returns nil if no active sandbox exists.
func FindActiveSandbox() (*SandboxState, error) {
	states, err := ListStates()
	if err != nil {
		return nil, err
	}

	for _, s := range states {
		if s.Status == StatusRunning || s.Status == StatusCreating {
			return s, nil
		}
	}

	return nil, nil
}

// ToNodes converts sandbox servers to inspector.Node structs for SSH operations.
// Sets VaultTarget on each node so PrepareNodeKeys resolves from the wallet.
func (s *SandboxState) ToNodes(vaultTarget string) []inspector.Node {
	nodes := make([]inspector.Node, len(s.Servers))
	for i, srv := range s.Servers {
		nodes[i] = inspector.Node{
			Environment: "sandbox",
			User:        "root",
			Host:        srv.IP,
			Role:        srv.Role,
			VaultTarget: vaultTarget,
		}
	}
	return nodes
}

// NameserverNodes returns only the nameserver nodes.
func (s *SandboxState) NameserverNodes() []ServerState {
	var ns []ServerState
	for _, srv := range s.Servers {
		if srv.Role == "nameserver" {
			ns = append(ns, srv)
		}
	}
	return ns
}

// RegularNodes returns only the non-nameserver nodes.
func (s *SandboxState) RegularNodes() []ServerState {
	var nodes []ServerState
	for _, srv := range s.Servers {
		if srv.Role == "node" {
			nodes = append(nodes, srv)
		}
	}
	return nodes
}

// GenesisServer returns the first server (genesis node).
func (s *SandboxState) GenesisServer() ServerState {
	if len(s.Servers) == 0 {
		return ServerState{}
	}
	return s.Servers[0]
}
@ -1,217 +0,0 @@
package sandbox

import (
	"os"
	"path/filepath"
	"testing"
	"time"
)

func TestSaveAndLoadState(t *testing.T) {
	// Use temp dir for test
	tmpDir := t.TempDir()
	origHome := os.Getenv("HOME")
	os.Setenv("HOME", tmpDir)
	defer os.Setenv("HOME", origHome)

	state := &SandboxState{
		Name:      "test-sandbox",
		CreatedAt: time.Date(2026, 2, 25, 10, 0, 0, 0, time.UTC),
		Domain:    "test.example.com",
		Status:    StatusRunning,
		Servers: []ServerState{
			{ID: 1, Name: "sbx-test-1", IP: "1.1.1.1", Role: "nameserver", FloatingIP: "10.0.0.1", WgIP: "10.0.0.1"},
			{ID: 2, Name: "sbx-test-2", IP: "2.2.2.2", Role: "nameserver", FloatingIP: "10.0.0.2", WgIP: "10.0.0.2"},
			{ID: 3, Name: "sbx-test-3", IP: "3.3.3.3", Role: "node", WgIP: "10.0.0.3"},
			{ID: 4, Name: "sbx-test-4", IP: "4.4.4.4", Role: "node", WgIP: "10.0.0.4"},
			{ID: 5, Name: "sbx-test-5", IP: "5.5.5.5", Role: "node", WgIP: "10.0.0.5"},
		},
	}

	if err := SaveState(state); err != nil {
		t.Fatalf("SaveState() error = %v", err)
	}

	// Verify file exists
	expected := filepath.Join(tmpDir, ".orama", "sandboxes", "test-sandbox.yaml")
	if _, err := os.Stat(expected); err != nil {
		t.Fatalf("state file not created at %s: %v", expected, err)
	}

	// Load back
	loaded, err := LoadState("test-sandbox")
	if err != nil {
		t.Fatalf("LoadState() error = %v", err)
	}

	if loaded.Name != "test-sandbox" {
		t.Errorf("name = %s, want test-sandbox", loaded.Name)
	}
	if loaded.Domain != "test.example.com" {
		t.Errorf("domain = %s, want test.example.com", loaded.Domain)
	}
	if loaded.Status != StatusRunning {
		t.Errorf("status = %s, want running", loaded.Status)
	}
	if len(loaded.Servers) != 5 {
		t.Errorf("servers = %d, want 5", len(loaded.Servers))
	}
}

func TestLoadState_NotFound(t *testing.T) {
	tmpDir := t.TempDir()
	origHome := os.Getenv("HOME")
	os.Setenv("HOME", tmpDir)
	defer os.Setenv("HOME", origHome)

	_, err := LoadState("nonexistent")
	if err == nil {
		t.Error("LoadState() expected error for nonexistent sandbox")
	}
}

func TestDeleteState(t *testing.T) {
	tmpDir := t.TempDir()
	origHome := os.Getenv("HOME")
	os.Setenv("HOME", tmpDir)
	defer os.Setenv("HOME", origHome)

	state := &SandboxState{
		Name:   "to-delete",
		Status: StatusRunning,
	}
	if err := SaveState(state); err != nil {
		t.Fatalf("SaveState() error = %v", err)
	}

	if err := DeleteState("to-delete"); err != nil {
		t.Fatalf("DeleteState() error = %v", err)
	}

	_, err := LoadState("to-delete")
	if err == nil {
		t.Error("LoadState() should fail after DeleteState()")
	}
}

func TestListStates(t *testing.T) {
	tmpDir := t.TempDir()
	origHome := os.Getenv("HOME")
	os.Setenv("HOME", tmpDir)
	defer os.Setenv("HOME", origHome)

	// Create 2 sandboxes
	for _, name := range []string{"sandbox-a", "sandbox-b"} {
		if err := SaveState(&SandboxState{Name: name, Status: StatusRunning}); err != nil {
			t.Fatalf("SaveState(%s) error = %v", name, err)
		}
	}

	states, err := ListStates()
	if err != nil {
		t.Fatalf("ListStates() error = %v", err)
	}
	if len(states) != 2 {
		t.Errorf("ListStates() returned %d, want 2", len(states))
	}
}

func TestFindActiveSandbox(t *testing.T) {
	tmpDir := t.TempDir()
	origHome := os.Getenv("HOME")
	os.Setenv("HOME", tmpDir)
	defer os.Setenv("HOME", origHome)

	// No sandboxes
	active, err := FindActiveSandbox()
	if err != nil {
		t.Fatalf("FindActiveSandbox() error = %v", err)
	}
	if active != nil {
		t.Error("expected nil when no sandboxes exist")
	}

	// Add one running sandbox
	if err := SaveState(&SandboxState{Name: "active-one", Status: StatusRunning}); err != nil {
		t.Fatal(err)
	}
	if err := SaveState(&SandboxState{Name: "errored-one", Status: StatusError}); err != nil {
		t.Fatal(err)
	}

	active, err = FindActiveSandbox()
	if err != nil {
		t.Fatalf("FindActiveSandbox() error = %v", err)
	}
	if active == nil || active.Name != "active-one" {
		t.Errorf("FindActiveSandbox() = %v, want active-one", active)
	}
}

func TestToNodes(t *testing.T) {
	state := &SandboxState{
		Servers: []ServerState{
			{IP: "1.1.1.1", Role: "nameserver"},
			{IP: "2.2.2.2", Role: "node"},
		},
	}

	nodes := state.ToNodes("sandbox/root")
	if len(nodes) != 2 {
		t.Fatalf("ToNodes() returned %d nodes, want 2", len(nodes))
	}
	if nodes[0].Host != "1.1.1.1" {
		t.Errorf("node[0].Host = %s, want 1.1.1.1", nodes[0].Host)
	}
	if nodes[0].User != "root" {
		t.Errorf("node[0].User = %s, want root", nodes[0].User)
	}
	if nodes[0].VaultTarget != "sandbox/root" {
		t.Errorf("node[0].VaultTarget = %s, want sandbox/root", nodes[0].VaultTarget)
	}
	if nodes[0].SSHKey != "" {
		t.Errorf("node[0].SSHKey = %s, want empty (set by PrepareNodeKeys)", nodes[0].SSHKey)
	}
	if nodes[0].Environment != "sandbox" {
		t.Errorf("node[0].Environment = %s, want sandbox", nodes[0].Environment)
	}
}

func TestNameserverAndRegularNodes(t *testing.T) {
	state := &SandboxState{
		Servers: []ServerState{
			{Role: "nameserver"},
			{Role: "nameserver"},
			{Role: "node"},
			{Role: "node"},
			{Role: "node"},
		},
	}

	ns := state.NameserverNodes()
	if len(ns) != 2 {
		t.Errorf("NameserverNodes() = %d, want 2", len(ns))
	}

	regular := state.RegularNodes()
	if len(regular) != 3 {
		t.Errorf("RegularNodes() = %d, want 3", len(regular))
	}
}

func TestGenesisServer(t *testing.T) {
	state := &SandboxState{
		Servers: []ServerState{
			{Name: "first"},
			{Name: "second"},
		},
	}
	if state.GenesisServer().Name != "first" {
		t.Errorf("GenesisServer().Name = %s, want first", state.GenesisServer().Name)
	}

	empty := &SandboxState{}
	if empty.GenesisServer().Name != "" {
		t.Error("GenesisServer() on empty state should return zero value")
	}
}
@ -1,165 +0,0 @@
package sandbox

import (
	"encoding/json"
	"fmt"
	"strings"

	"github.com/DeBrosOfficial/network/pkg/inspector"
)

// List prints all sandbox clusters.
func List() error {
	states, err := ListStates()
	if err != nil {
		return err
	}

	if len(states) == 0 {
		fmt.Println("No sandboxes found.")
		fmt.Println("Create one: orama sandbox create")
		return nil
	}

	fmt.Printf("%-20s %-10s %-5s %-25s %s\n", "NAME", "STATUS", "NODES", "CREATED", "DOMAIN")
	for _, s := range states {
		fmt.Printf("%-20s %-10s %-5d %-25s %s\n",
			s.Name, s.Status, len(s.Servers), s.CreatedAt.Format("2006-01-02 15:04"), s.Domain)
	}

	// Check for orphaned servers on Hetzner
	cfg, err := LoadConfig()
	if err != nil {
		return nil // Config not set up, skip orphan check
	}

	client := NewHetznerClient(cfg.HetznerAPIToken)
	hetznerServers, err := client.ListServersByLabel("orama-sandbox")
	if err != nil {
		return nil // API error, skip orphan check
	}

	// Build set of known server IDs
	known := make(map[int64]bool)
	for _, s := range states {
		for _, srv := range s.Servers {
			known[srv.ID] = true
		}
	}

	var orphans []string
	for _, srv := range hetznerServers {
		if !known[srv.ID] {
			orphans = append(orphans, fmt.Sprintf("%s (ID: %d, IP: %s)", srv.Name, srv.ID, srv.PublicNet.IPv4.IP))
		}
	}

	if len(orphans) > 0 {
		fmt.Printf("\nWarning: %d orphaned server(s) on Hetzner (no state file):\n", len(orphans))
		for _, o := range orphans {
			fmt.Printf("  %s\n", o)
		}
		fmt.Println("Delete manually at https://console.hetzner.cloud")
	}

	return nil
}

// Status prints the health report for a sandbox cluster.
func Status(name string) error {
	cfg, err := LoadConfig()
	if err != nil {
		return err
	}

	state, err := resolveSandbox(name)
	if err != nil {
		return err
	}

	sshKeyPath, cleanup, err := resolveVaultKeyOnce(cfg.SSHKey.VaultTarget)
	if err != nil {
		return fmt.Errorf("prepare SSH key: %w", err)
	}
	defer cleanup()

	fmt.Printf("Sandbox: %s (status: %s)\n\n", state.Name, state.Status)

	for _, srv := range state.Servers {
		node := inspector.Node{User: "root", Host: srv.IP, SSHKey: sshKeyPath}

		fmt.Printf("%s (%s) — %s\n", srv.Name, srv.IP, srv.Role)

		// Get node report
		out, err := runSSHOutput(node, "orama node report --json 2>/dev/null")
		if err != nil {
			fmt.Printf("  Status: UNREACHABLE (%v)\n", err)
			fmt.Println()
			continue
		}

		printNodeReport(out)
		fmt.Println()
	}

	// Cluster summary
	fmt.Println("Cluster Summary")
	fmt.Println("---------------")
	genesis := state.GenesisServer()
	genesisNode := inspector.Node{User: "root", Host: genesis.IP, SSHKey: sshKeyPath}

	out, err := runSSHOutput(genesisNode, "curl -sf http://localhost:5001/status 2>/dev/null")
	if err != nil {
		fmt.Println("  RQLite: UNREACHABLE")
	} else {
		var status map[string]interface{}
		if err := json.Unmarshal([]byte(out), &status); err == nil {
			if store, ok := status["store"].(map[string]interface{}); ok {
				if raft, ok := store["raft"].(map[string]interface{}); ok {
					fmt.Printf("  RQLite state: %v\n", raft["state"])
					fmt.Printf("  Commit index: %v\n", raft["commit_index"])
					if nodes, ok := raft["nodes"].([]interface{}); ok {
						fmt.Printf("  Nodes: %d\n", len(nodes))
					}
				}
			}
		}
	}

	return nil
}

// printNodeReport parses and prints a node report JSON.
func printNodeReport(jsonStr string) {
	var report map[string]interface{}
	if err := json.Unmarshal([]byte(jsonStr), &report); err != nil {
		fmt.Printf("  Report: (parse error)\n")
		return
	}

	// Print key fields
	if services, ok := report["services"].(map[string]interface{}); ok {
		var active, inactive []string
		for name, info := range services {
			if svc, ok := info.(map[string]interface{}); ok {
				if state, ok := svc["active"].(bool); ok && state {
					active = append(active, name)
				} else {
					inactive = append(inactive, name)
				}
			}
		}
		if len(active) > 0 {
			fmt.Printf("  Active: %s\n", strings.Join(active, ", "))
		}
		if len(inactive) > 0 {
			fmt.Printf("  Inactive: %s\n", strings.Join(inactive, ", "))
		}
	}

	if rqlite, ok := report["rqlite"].(map[string]interface{}); ok {
		if state, ok := rqlite["state"].(string); ok {
			fmt.Printf("  RQLite: %s\n", state)
		}
	}
}
@ -1,82 +0,0 @@
package client

import (
	"fmt"
	"testing"

	"github.com/rqlite/gorqlite"
)

// simulateGorqlitePanic mimics what gorqlite does when WriteParameterized
// returns an empty slice: accessing [0] panics.
func simulateGorqlitePanic() (gorqlite.WriteResult, error) {
	var empty []gorqlite.WriteResult
	return empty[0], fmt.Errorf("leader not found") // panics
}

func TestSafeWriteOne_recoversPanic(t *testing.T) {
	// We can't easily create a real gorqlite.Connection that panics,
	// but we can verify our recovery wrapper works by testing the
	// recovery pattern directly.
	var recovered bool
	func() {
		defer func() {
			if r := recover(); r != nil {
				recovered = true
			}
		}()
		simulateGorqlitePanic()
	}()

	if !recovered {
		t.Fatal("expected simulateGorqlitePanic to panic, but it didn't")
	}
}

func TestSafeWriteOne_nilConnection(t *testing.T) {
	// safeWriteOne with nil connection should recover from panic, not crash.
	_, err := safeWriteOne(nil, gorqlite.ParameterizedStatement{
		Query:     "INSERT INTO test (a) VALUES (?)",
		Arguments: []interface{}{"x"},
	})
	if err == nil {
		t.Fatal("expected error from nil connection, got nil")
	}
}

func TestSafeWriteOneRaw_nilConnection(t *testing.T) {
	// safeWriteOneRaw with nil connection should recover from panic, not crash.
	_, err := safeWriteOneRaw(nil, "INSERT INTO test (a) VALUES ('x')")
	if err == nil {
		t.Fatal("expected error from nil connection, got nil")
	}
}

func TestIsWriteOperation(t *testing.T) {
	d := &DatabaseClientImpl{}

	tests := []struct {
		sql     string
		isWrite bool
	}{
		{"INSERT INTO foo VALUES (1)", true},
		{"  INSERT INTO foo VALUES (1)", true},
		{"UPDATE foo SET a = 1", true},
		{"DELETE FROM foo", true},
		{"CREATE TABLE foo (a TEXT)", true},
		{"DROP TABLE foo", true},
		{"ALTER TABLE foo ADD COLUMN b TEXT", true},
		{"SELECT * FROM foo", false},
		{"  SELECT * FROM foo", false},
		{"EXPLAIN SELECT * FROM foo", false},
	}

	for _, tt := range tests {
		t.Run(tt.sql, func(t *testing.T) {
			got := d.isWriteOperation(tt.sql)
			if got != tt.isWrite {
				t.Errorf("isWriteOperation(%q) = %v, want %v", tt.sql, got, tt.isWrite)
			}
		})
	}
}
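The `TestIsWriteOperation` table pins down the expected behavior: leading whitespace is ignored, and `INSERT`/`UPDATE`/`DELETE`/`CREATE`/`DROP`/`ALTER` count as writes while `SELECT` and `EXPLAIN` do not. A minimal free-standing sketch consistent with that table (the real `DatabaseClientImpl` method may recognize more verbs or differ in detail):

```go
package main

import (
	"fmt"
	"strings"
)

// isWriteOperation reports whether a SQL statement mutates state.
// Sketch only: trims leading whitespace, uppercases, and checks the
// leading verb against the write keywords exercised by the test table.
func isWriteOperation(sql string) bool {
	stmt := strings.ToUpper(strings.TrimSpace(sql))
	for _, verb := range []string{"INSERT", "UPDATE", "DELETE", "CREATE", "DROP", "ALTER"} {
		if strings.HasPrefix(stmt, verb) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isWriteOperation("  INSERT INTO foo VALUES (1)")) // true
	fmt.Println(isWriteOperation("EXPLAIN SELECT * FROM foo"))    // false
}
```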
@ -1,13 +0,0 @@
package constants

// External dependency versions used across the network.
// Single source of truth — all installer files and build scripts import from here.
const (
	GoVersion          = "1.24.6"
	OlricVersion       = "v0.7.0"
	IPFSKuboVersion    = "v0.38.2"
	IPFSClusterVersion = "v1.1.2"
	RQLiteVersion      = "8.43.0"
	CoreDNSVersion     = "1.12.0"
	CaddyVersion       = "2.10.2"
)
@ -1,151 +0,0 @@
package installers

import (
	"io"
	"strings"
	"testing"
)

// newTestCoreDNSInstaller creates a CoreDNSInstaller suitable for unit tests.
// It uses a non-existent oramaHome so generateCorefile won't find a password file
// and will produce output without auth credentials.
func newTestCoreDNSInstaller() *CoreDNSInstaller {
	return &CoreDNSInstaller{
		BaseInstaller: NewBaseInstaller("amd64", io.Discard),
		version:       "1.11.1",
		oramaHome:     "/nonexistent",
	}
}

func TestGenerateCorefile_ContainsBindLocalhost(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	corefile := ci.generateCorefile("dbrs.space", "http://localhost:5001")

	if !strings.Contains(corefile, "bind 127.0.0.1") {
		t.Fatal("Corefile forward block must contain 'bind 127.0.0.1' to prevent open resolver")
	}
}

func TestGenerateCorefile_ForwardBlockIsLocalhostOnly(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	corefile := ci.generateCorefile("dbrs.space", "http://localhost:5001")

	// The bind directive must appear inside the catch-all (.) block,
	// not inside the authoritative domain block.
	// Find the ". {" block and verify bind is inside it.
	dotBlockIdx := strings.Index(corefile, ". {")
	if dotBlockIdx == -1 {
		t.Fatal("Corefile must contain a catch-all '. {' server block")
	}

	dotBlock := corefile[dotBlockIdx:]
	closingIdx := strings.Index(dotBlock, "}")
	if closingIdx == -1 {
		t.Fatal("Catch-all block has no closing brace")
	}
	dotBlock = dotBlock[:closingIdx]

	if !strings.Contains(dotBlock, "bind 127.0.0.1") {
		t.Error("bind 127.0.0.1 must be inside the catch-all (.) block, not the domain block")
	}

	if !strings.Contains(dotBlock, "forward .") {
		t.Error("forward directive must be inside the catch-all (.) block")
	}
}

func TestGenerateCorefile_AuthoritativeBlockNoBindRestriction(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	corefile := ci.generateCorefile("dbrs.space", "http://localhost:5001")

	// The authoritative domain block should NOT have a bind directive
	// (it must listen on all interfaces to serve external DNS queries).
	domainBlockStart := strings.Index(corefile, "dbrs.space {")
	if domainBlockStart == -1 {
		t.Fatal("Corefile must contain 'dbrs.space {' server block")
	}

	// Extract the domain block (up to the first closing brace)
	domainBlock := corefile[domainBlockStart:]
	closingIdx := strings.Index(domainBlock, "}")
	if closingIdx == -1 {
		t.Fatal("Domain block has no closing brace")
	}
	domainBlock = domainBlock[:closingIdx]

	if strings.Contains(domainBlock, "bind ") {
		t.Error("Authoritative domain block must not have a bind directive — it must listen on all interfaces")
	}
}

func TestGenerateCorefile_ContainsDomainZone(t *testing.T) {
	ci := newTestCoreDNSInstaller()

	tests := []struct {
		domain string
	}{
		{"dbrs.space"},
		{"orama.network"},
		{"example.com"},
	}

	for _, tt := range tests {
		t.Run(tt.domain, func(t *testing.T) {
			corefile := ci.generateCorefile(tt.domain, "http://localhost:5001")

			if !strings.Contains(corefile, tt.domain+" {") {
				t.Errorf("Corefile must contain server block for domain %q", tt.domain)
			}

			if !strings.Contains(corefile, "rqlite {") {
				t.Error("Corefile must contain rqlite plugin block")
			}
		})
	}
}

func TestGenerateCorefile_ContainsRQLiteDSN(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	dsn := "http://10.0.0.1:5001"
	corefile := ci.generateCorefile("dbrs.space", dsn)

	if !strings.Contains(corefile, "dsn "+dsn) {
		t.Errorf("Corefile must contain RQLite DSN %q", dsn)
	}
}

func TestGenerateCorefile_NoAuthBlockWithoutCredentials(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	corefile := ci.generateCorefile("dbrs.space", "http://localhost:5001")

	if strings.Contains(corefile, "username") || strings.Contains(corefile, "password") {
		t.Error("Corefile must not contain auth credentials when secrets file is absent")
	}
}

func TestGeneratePluginConfig_ContainsBindPlugin(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	cfg := ci.generatePluginConfig()

	if !strings.Contains(cfg, "bind:bind") {
		t.Error("Plugin config must include the bind plugin (required for localhost-only forwarding)")
	}
}

func TestGeneratePluginConfig_ContainsACLPlugin(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	cfg := ci.generatePluginConfig()

	if !strings.Contains(cfg, "acl:acl") {
		t.Error("Plugin config must include the acl plugin")
	}
}

func TestGeneratePluginConfig_ContainsRQLitePlugin(t *testing.T) {
	ci := newTestCoreDNSInstaller()
	cfg := ci.generatePluginConfig()

	if !strings.Contains(cfg, "rqlite:rqlite") {
		t.Error("Plugin config must include the rqlite plugin")
	}
}
@ -1,325 +0,0 @@
|
|||||||
package production
|
|
||||||
|
|
||||||
import (
|
|
||||||
"crypto/sha256"
|
|
||||||
"encoding/hex"
|
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"os"
|
|
||||||
"os/exec"
|
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
|
||||||
|
|
||||||
ethcrypto "github.com/ethereum/go-ethereum/crypto"
|
|
||||||
)
|
|
||||||
|
|
||||||
// PreBuiltManifest describes the contents of a pre-built binary archive.
|
|
||||||
type PreBuiltManifest struct {
|
|
||||||
Version string `json:"version"`
|
|
||||||
Commit string `json:"commit"`
|
|
||||||
Date string `json:"date"`
|
|
||||||
Arch string `json:"arch"`
|
|
||||||
Checksums map[string]string `json:"checksums"` // filename -> sha256
|
|
||||||
}
|
|
||||||
|
|
||||||
// HasPreBuiltArchive checks if a pre-built binary archive has been extracted
|
|
||||||
// at /opt/orama/ by looking for the manifest.json file.
|
|
||||||
func HasPreBuiltArchive() bool {
|
|
||||||
_, err := os.Stat(OramaManifest)
|
|
||||||
return err == nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// LoadPreBuiltManifest loads and parses the pre-built manifest.
|
|
||||||
func LoadPreBuiltManifest() (*PreBuiltManifest, error) {
|
|
||||||
data, err := os.ReadFile(OramaManifest)
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("failed to read manifest: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
var manifest PreBuiltManifest
|
|
||||||
if err := json.Unmarshal(data, &manifest); err != nil {
|
|
||||||
return nil, fmt.Errorf("failed to parse manifest: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
return &manifest, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// OramaSignerAddress is the Ethereum address authorized to sign build archives.
|
|
||||||
// Archives signed by any other address are rejected during install.
|
|
||||||
// This is the DeBros deploy wallet — update if the signing key rotates.
|
|
||||||
const OramaSignerAddress = "0xb5d8a496c8b2412990d7D467E17727fdF5954afC"
|
|
||||||
|
|
||||||
// VerifyArchiveSignature verifies that the pre-built archive was signed by the
|
|
||||||
// authorized Orama signer. Returns nil if the signature is valid, or if no
|
|
||||||
// signature file exists (unsigned archives are allowed but logged as a warning).
|
|
||||||
func VerifyArchiveSignature(manifest *PreBuiltManifest) error {
|
|
||||||
sigData, err := os.ReadFile(OramaManifestSig)
|
|
||||||
if os.IsNotExist(err) {
|
|
||||||
return nil // unsigned archive — caller decides whether to proceed
|
|
||||||
}
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("failed to read manifest.sig: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Reproduce the same hash used during signing: SHA256 of compact JSON
|
|
||||||
manifestJSON, err := json.Marshal(manifest)
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("failed to marshal manifest: %w", err)
|
|
||||||
}
|
|
||||||
manifestHash := sha256.Sum256(manifestJSON)
|
|
||||||
hashHex := hex.EncodeToString(manifestHash[:])
|
|
||||||
|
|
||||||
// EVM personal_sign: keccak256("\x19Ethereum Signed Message:\n" + len + message)
|
|
||||||
msg := []byte(hashHex)
|
|
||||||
prefix := []byte("\x19Ethereum Signed Message:\n" + fmt.Sprintf("%d", len(msg)))
|
|
||||||
ethHash := ethcrypto.Keccak256(prefix, msg)
|
|
||||||
|
|
||||||
// Decode signature
|
|
||||||
sigHex := strings.TrimSpace(string(sigData))
|
|
||||||
if strings.HasPrefix(sigHex, "0x") || strings.HasPrefix(sigHex, "0X") {
|
|
||||||
sigHex = sigHex[2:]
|
|
||||||
}
|
|
||||||
sig, err := hex.DecodeString(sigHex)
|
|
||||||
if err != nil || len(sig) != 65 {
|
|
||||||
return fmt.Errorf("invalid signature format in manifest.sig")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Normalize recovery ID
|
|
||||||
if sig[64] >= 27 {
|
|
||||||
sig[64] -= 27
|
|
||||||
}
|
|
||||||
|
|
||||||
// Recover public key from signature
|
|
||||||
pub, err := ethcrypto.SigToPub(ethHash, sig)
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("signature recovery failed: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
recovered := ethcrypto.PubkeyToAddress(*pub).Hex()
|
|
||||||
expected := strings.ToLower(OramaSignerAddress)
|
|
||||||
got := strings.ToLower(recovered)
|
|
||||||
|
|
||||||
if got != expected {
|
|
||||||
return fmt.Errorf("archive signed by %s, expected %s — refusing to install", recovered, OramaSignerAddress)
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// IsArchiveSigned returns true if a manifest.sig file exists alongside the manifest.
|
|
||||||
func IsArchiveSigned() bool {
|
|
||||||
_, err := os.Stat(OramaManifestSig)
|
|
||||||
return err == nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// installFromPreBuilt installs all binaries from a pre-built archive.
|
|
||||||
// The archive must already be extracted at /opt/orama/ with:
|
|
||||||
// - /opt/orama/bin/ — all pre-compiled binaries
|
|
||||||
// - /opt/orama/systemd/ — namespace service templates
|
|
||||||
// - /opt/orama/packages/ — optional .deb packages
|
|
||||||
// - /opt/orama/manifest.json — archive metadata
|
|
||||||
func (ps *ProductionSetup) installFromPreBuilt(manifest *PreBuiltManifest) error {
|
|
||||||
ps.logf(" Using pre-built binary archive v%s (%s) linux/%s", manifest.Version, manifest.Commit, manifest.Arch)
|
|
||||||
|
|
||||||
// Verify archive signature if present
|
|
||||||
if IsArchiveSigned() {
|
|
||||||
if err := VerifyArchiveSignature(manifest); err != nil {
|
|
||||||
return fmt.Errorf("archive signature verification failed: %w", err)
|
|
||||||
}
|
|
||||||
ps.logf(" ✓ Archive signature verified")
|
|
||||||
} else {
|
|
||||||
ps.logf(" ⚠️ Archive is unsigned — consider using 'orama build --sign'")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Install minimal system dependencies (no build tools needed)
|
|
||||||
if err := ps.installMinimalSystemDeps(); err != nil {
|
|
||||||
ps.logf(" ⚠️ System dependencies warning: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Copy binaries to runtime locations
|
|
||||||
if err := ps.deployPreBuiltBinaries(manifest); err != nil {
|
|
||||||
return fmt.Errorf("failed to deploy pre-built binaries: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Set capabilities on binaries that need to bind privileged ports
|
|
||||||
if err := ps.setCapabilities(); err != nil {
|
|
||||||
return fmt.Errorf("failed to set capabilities: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Disable systemd-resolved stub listener for nameserver nodes
|
|
||||||
// (needed even in pre-built mode so CoreDNS can bind port 53)
|
|
||||||
if ps.isNameserver {
|
|
||||||
if err := ps.disableResolvedStub(); err != nil {
|
|
||||||
ps.logf(" ⚠️ Failed to disable systemd-resolved stub: %v", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Install Anyone relay from .deb package if available
|
|
||||||
if ps.IsAnyoneRelay() || ps.IsAnyoneClient() {
|
|
||||||
if err := ps.installAnyonFromPreBuilt(); err != nil {
|
|
||||||
ps.logf(" ⚠️ Anyone install warning: %v", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
ps.logf(" ✓ All pre-built binaries installed")
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// installMinimalSystemDeps installs only runtime dependencies (no build tools).
|
|
||||||
func (ps *ProductionSetup) installMinimalSystemDeps() error {
|
|
||||||
ps.logf(" Installing minimal system dependencies...")
|
|
||||||
|
|
||||||
cmd := exec.Command("apt-get", "update")
|
|
||||||
if err := cmd.Run(); err != nil {
|
|
||||||
ps.logf(" Warning: apt update failed")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Only install runtime deps — no build-essential, make, nodejs, npm needed
|
|
||||||
cmd = exec.Command("apt-get", "install", "-y", "curl", "wget", "unzip")
|
|
||||||
if err := cmd.Run(); err != nil {
|
|
||||||
return fmt.Errorf("failed to install minimal dependencies: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
ps.logf(" ✓ Minimal system dependencies installed (no build tools needed)")
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// deployPreBuiltBinaries copies pre-built binaries to their runtime locations.
func (ps *ProductionSetup) deployPreBuiltBinaries(manifest *PreBuiltManifest) error {
	ps.logf(" Deploying pre-built binaries...")

	// Binary → destination mapping
	// Most go to /usr/local/bin/, caddy goes to /usr/bin/
	type binaryDest struct {
		name string
		dest string
	}

	binaries := []binaryDest{
		{name: "orama", dest: "/usr/local/bin/orama"},
		{name: "orama-node", dest: "/usr/local/bin/orama-node"},
		{name: "gateway", dest: "/usr/local/bin/gateway"},
		{name: "identity", dest: "/usr/local/bin/identity"},
		{name: "sfu", dest: "/usr/local/bin/sfu"},
		{name: "turn", dest: "/usr/local/bin/turn"},
		{name: "olric-server", dest: "/usr/local/bin/olric-server"},
		{name: "ipfs", dest: "/usr/local/bin/ipfs"},
		{name: "ipfs-cluster-service", dest: "/usr/local/bin/ipfs-cluster-service"},
		{name: "rqlited", dest: "/usr/local/bin/rqlited"},
		{name: "coredns", dest: "/usr/local/bin/coredns"},
		{name: "caddy", dest: "/usr/bin/caddy"},
	}
	// Note: vault-guardian stays at /opt/orama/bin/ (from archive extraction)
	// and is referenced by absolute path in the systemd service — no copy needed.

	for _, bin := range binaries {
		srcPath := filepath.Join(OramaArchiveBin, bin.name)

		// Skip optional binaries (e.g., coredns on non-nameserver nodes)
		if _, ok := manifest.Checksums[bin.name]; !ok {
			continue
		}

		if _, err := os.Stat(srcPath); os.IsNotExist(err) {
			ps.logf(" ⚠️ Binary %s not found in archive, skipping", bin.name)
			continue
		}

		if err := copyBinary(srcPath, bin.dest); err != nil {
			return fmt.Errorf("failed to copy %s: %w", bin.name, err)
		}
		ps.logf(" ✓ %s → %s", bin.name, bin.dest)
	}

	return nil
}

// setCapabilities sets cap_net_bind_service on binaries that need to bind privileged ports.
// Both the /opt/orama/bin/ originals (used by systemd) and /usr/local/bin/ copies need caps.
func (ps *ProductionSetup) setCapabilities() error {
	caps := []string{
		filepath.Join(OramaArchiveBin, "orama-node"), // systemd uses this path
		"/usr/local/bin/orama-node",                  // PATH copy
		"/usr/bin/caddy",                             // caddy's standard location
	}
	for _, binary := range caps {
		if _, err := os.Stat(binary); os.IsNotExist(err) {
			continue
		}
		cmd := exec.Command("setcap", "cap_net_bind_service=+ep", binary)
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("setcap failed on %s: %w (node won't be able to bind port 443)", binary, err)
		}
		ps.logf(" ✓ setcap on %s", binary)
	}
	return nil
}

// disableResolvedStub disables systemd-resolved's stub listener so CoreDNS can bind port 53.
func (ps *ProductionSetup) disableResolvedStub() error {
	// Delegate to the coredns installer's method
	return ps.binaryInstaller.coredns.DisableResolvedStubListener()
}

// installAnyonFromPreBuilt installs the Anyone relay .deb from the packages dir,
// falling back to apt install if the .deb is not bundled.
func (ps *ProductionSetup) installAnyonFromPreBuilt() error {
	debPath := filepath.Join(OramaPackagesDir, "anon.deb")
	if _, err := os.Stat(debPath); err == nil {
		ps.logf(" Installing Anyone from bundled .deb...")
		cmd := exec.Command("dpkg", "-i", debPath)
		if err := cmd.Run(); err != nil {
			ps.logf(" ⚠️ dpkg -i failed, falling back to apt...")
			cmd = exec.Command("apt-get", "install", "-y", "anon")
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("failed to install anon: %w", err)
			}
		}
		ps.logf(" ✓ Anyone installed from .deb")
		return nil
	}

	// No .deb bundled — fall back to apt (the existing path in source mode)
	ps.logf(" Installing Anyone via apt (not bundled in archive)...")
	cmd := exec.Command("apt-get", "install", "-y", "anon")
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("failed to install anon via apt: %w", err)
	}
	ps.logf(" ✓ Anyone installed via apt")
	return nil
}

// copyBinary copies a file from src to dest, preserving executable permissions.
// It removes the destination first to avoid ETXTBSY ("text file busy") errors
// when overwriting a binary that is currently running.
func copyBinary(src, dest string) error {
	// Ensure parent directory exists
	if err := os.MkdirAll(filepath.Dir(dest), 0755); err != nil {
		return err
	}

	// Remove the old binary first. On Linux, if the binary is running,
	// rm unlinks the filename while the kernel keeps the inode alive for
	// the running process. Writing a new file at the same path creates a
	// fresh inode — no ETXTBSY conflict.
	_ = os.Remove(dest)

	srcFile, err := os.Open(src)
	if err != nil {
		return err
	}
	defer srcFile.Close()

	destFile, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
	if err != nil {
		return err
	}
	defer destFile.Close()

	if _, err := io.Copy(destFile, srcFile); err != nil {
		return err
	}

	return nil
}
@@ -1,24 +0,0 @@
package auth

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// sha256Hex returns the lowercase hex-encoded SHA-256 hash of the input string.
// Used to hash refresh tokens before storage — deterministic so we can hash on
// insert and hash on lookup without storing the raw token.
func sha256Hex(s string) string {
	h := sha256.Sum256([]byte(s))
	return hex.EncodeToString(h[:])
}

// HmacSHA256Hex computes HMAC-SHA256 of data with the given secret key and
// returns the result as a lowercase hex string. Used for API key hashing —
// fast and deterministic, allowing direct DB lookup by hash.
func HmacSHA256Hex(data, secret string) string {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(data))
	return hex.EncodeToString(mac.Sum(nil))
}
@@ -1,435 +0,0 @@
// Package enroll implements the OramaOS node enrollment endpoint.
//
// Flow:
// 1. Operator's CLI sends POST /v1/node/enroll with code + token + node_ip
// 2. Gateway validates invite token (single-use)
// 3. Gateway assigns WG IP, registers peer, reads secrets
// 4. Gateway pushes cluster config to OramaOS node at node_ip:9999
// 5. OramaOS node configures WG, encrypts data partition, starts services
package enroll

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
	"time"

	"github.com/DeBrosOfficial/network/pkg/rqlite"
	"go.uber.org/zap"
)

// EnrollRequest is the request from the CLI.
type EnrollRequest struct {
	Code   string `json:"code"`
	Token  string `json:"token"`
	NodeIP string `json:"node_ip"`
}

// EnrollResponse is the configuration pushed to the OramaOS node.
type EnrollResponse struct {
	NodeID          string     `json:"node_id"`
	WireGuardConfig string     `json:"wireguard_config"`
	ClusterSecret   string     `json:"cluster_secret"`
	Peers           []PeerInfo `json:"peers"`
}

// PeerInfo describes a cluster peer for LUKS key distribution.
type PeerInfo struct {
	WGIP   string `json:"wg_ip"`
	NodeID string `json:"node_id"`
}

// Handler handles OramaOS node enrollment.
type Handler struct {
	logger       *zap.Logger
	rqliteClient rqlite.Client
	oramaDir     string
}

// NewHandler creates a new enrollment handler.
func NewHandler(logger *zap.Logger, rqliteClient rqlite.Client, oramaDir string) *Handler {
	return &Handler{
		logger:       logger,
		rqliteClient: rqliteClient,
		oramaDir:     oramaDir,
	}
}

// HandleEnroll handles POST /v1/node/enroll.
func (h *Handler) HandleEnroll(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}

	r.Body = http.MaxBytesReader(w, r.Body, 1<<20)
	var req EnrollRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}

	if req.Code == "" || req.Token == "" || req.NodeIP == "" {
		http.Error(w, "code, token, and node_ip are required", http.StatusBadRequest)
		return
	}

	ctx := r.Context()

	// 1. Validate invite token (single-use, same as join handler)
	if err := h.consumeToken(ctx, req.Token, req.NodeIP); err != nil {
		h.logger.Warn("enroll token validation failed", zap.Error(err))
		http.Error(w, "unauthorized: invalid or expired token", http.StatusUnauthorized)
		return
	}

	// 2. Verify registration code against the OramaOS node
	if err := h.verifyCode(req.NodeIP, req.Code); err != nil {
		h.logger.Warn("registration code verification failed", zap.Error(err))
		http.Error(w, "code verification failed: "+err.Error(), http.StatusBadRequest)
		return
	}

	// 3. Generate WG keypair for the OramaOS node
	wgPrivKey, wgPubKey, err := generateWGKeypair()
	if err != nil {
		h.logger.Error("failed to generate WG keypair", zap.Error(err))
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}

	// 4. Assign WG IP
	wgIP, err := h.assignWGIP(ctx)
	if err != nil {
		h.logger.Error("failed to assign WG IP", zap.Error(err))
		http.Error(w, "failed to assign WG IP", http.StatusInternalServerError)
		return
	}

	nodeID := fmt.Sprintf("orama-node-%s", strings.ReplaceAll(wgIP, ".", "-"))

	// 5. Register WG peer in database
	if _, err := h.rqliteClient.Exec(ctx,
		"INSERT OR REPLACE INTO wireguard_peers (node_id, wg_ip, public_key, public_ip, wg_port) VALUES (?, ?, ?, ?, ?)",
		nodeID, wgIP, wgPubKey, req.NodeIP, 51820); err != nil {
		h.logger.Error("failed to register WG peer", zap.Error(err))
		http.Error(w, "failed to register peer", http.StatusInternalServerError)
		return
	}

	// 6. Add peer to local WireGuard interface
	if err := h.addWGPeerLocally(wgPubKey, req.NodeIP, wgIP); err != nil {
		h.logger.Warn("failed to add WG peer to local interface", zap.Error(err))
	}

	// 7. Read secrets
	clusterSecret, err := os.ReadFile(h.oramaDir + "/secrets/cluster-secret")
	if err != nil {
		h.logger.Error("failed to read cluster secret", zap.Error(err))
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}

	// 8. Build WireGuard config for the OramaOS node
	wgConfig, err := h.buildWGConfig(ctx, wgPrivKey, wgIP)
	if err != nil {
		h.logger.Error("failed to build WG config", zap.Error(err))
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}

	// 9. Get all peer WG IPs for LUKS key distribution
	peers, err := h.getPeerList(ctx, wgIP)
	if err != nil {
		h.logger.Error("failed to get peer list", zap.Error(err))
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}

	// 10. Push config to OramaOS node
	enrollResp := EnrollResponse{
		NodeID:          nodeID,
		WireGuardConfig: wgConfig,
		ClusterSecret:   strings.TrimSpace(string(clusterSecret)),
		Peers:           peers,
	}

	if err := h.pushConfigToNode(req.NodeIP, &enrollResp); err != nil {
		h.logger.Error("failed to push config to node", zap.Error(err))
		http.Error(w, "failed to configure node: "+err.Error(), http.StatusInternalServerError)
		return
	}

	h.logger.Info("OramaOS node enrolled",
		zap.String("node_id", nodeID),
		zap.String("wg_ip", wgIP),
		zap.String("public_ip", req.NodeIP))

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{
		"status":  "enrolled",
		"node_id": nodeID,
		"wg_ip":   wgIP,
	})
}

// consumeToken validates and marks an invite token as used.
func (h *Handler) consumeToken(ctx context.Context, token, usedByIP string) error {
	result, err := h.rqliteClient.Exec(ctx,
		"UPDATE invite_tokens SET used_at = datetime('now'), used_by_ip = ? WHERE token = ? AND used_at IS NULL AND expires_at > datetime('now')",
		usedByIP, token)
	if err != nil {
		return fmt.Errorf("database error: %w", err)
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to check result: %w", err)
	}

	if rowsAffected == 0 {
		return fmt.Errorf("token invalid, expired, or already used")
	}

	return nil
}

// verifyCode checks that the OramaOS node has the expected registration code.
func (h *Handler) verifyCode(nodeIP, expectedCode string) error {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:9999/", nodeIP))
	if err != nil {
		return fmt.Errorf("cannot reach node at %s:9999: %w", nodeIP, err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusGone {
		return fmt.Errorf("node already served its registration code")
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("node returned status %d", resp.StatusCode)
	}

	var result struct {
		Code string `json:"code"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return fmt.Errorf("invalid response from node: %w", err)
	}

	if result.Code != expectedCode {
		return fmt.Errorf("registration code mismatch")
	}

	return nil
}

// pushConfigToNode sends cluster configuration to the OramaOS node.
func (h *Handler) pushConfigToNode(nodeIP string, config *EnrollResponse) error {
	body, err := json.Marshal(config)
	if err != nil {
		return err
	}

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Post(
		fmt.Sprintf("http://%s:9999/v1/agent/enroll/complete", nodeIP),
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		return fmt.Errorf("failed to push config: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("node returned status %d", resp.StatusCode)
	}

	return nil
}

// generateWGKeypair generates a WireGuard private/public keypair.
func generateWGKeypair() (privKey, pubKey string, err error) {
	privOut, err := exec.Command("wg", "genkey").Output()
	if err != nil {
		return "", "", fmt.Errorf("wg genkey failed: %w", err)
	}
	privKey = strings.TrimSpace(string(privOut))

	cmd := exec.Command("wg", "pubkey")
	cmd.Stdin = strings.NewReader(privKey)
	pubOut, err := cmd.Output()
	if err != nil {
		return "", "", fmt.Errorf("wg pubkey failed: %w", err)
	}
	pubKey = strings.TrimSpace(string(pubOut))

	return privKey, pubKey, nil
}

// assignWGIP finds the next available WG IP.
func (h *Handler) assignWGIP(ctx context.Context) (string, error) {
	var rows []struct {
		WGIP string `db:"wg_ip"`
	}
	if err := h.rqliteClient.Query(ctx, &rows, "SELECT wg_ip FROM wireguard_peers"); err != nil {
		return "", fmt.Errorf("failed to query WG IPs: %w", err)
	}

	if len(rows) == 0 {
		return "10.0.0.2", nil
	}

	maxD := 0
	maxC := 0
	for _, row := range rows {
		var a, b, c, d int
		if _, err := fmt.Sscanf(row.WGIP, "%d.%d.%d.%d", &a, &b, &c, &d); err != nil {
			continue
		}
		if c > maxC || (c == maxC && d > maxD) {
			maxC, maxD = c, d
		}
	}

	maxD++
	if maxD > 254 {
		maxC++
		maxD = 1
	}

	return fmt.Sprintf("10.0.%d.%d", maxC, maxD), nil
}

// addWGPeerLocally adds a peer to the local wg0 interface.
func (h *Handler) addWGPeerLocally(pubKey, publicIP, wgIP string) error {
	cmd := exec.Command("wg", "set", "wg0",
		"peer", pubKey,
		"endpoint", fmt.Sprintf("%s:51820", publicIP),
		"allowed-ips", fmt.Sprintf("%s/32", wgIP),
		"persistent-keepalive", "25")
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("wg set failed: %w\n%s", err, string(output))
	}
	return nil
}

// buildWGConfig generates a wg0.conf for the OramaOS node.
func (h *Handler) buildWGConfig(ctx context.Context, privKey, nodeWGIP string) (string, error) {
	// Get this node's public key and WG IP
	myPubKey, err := exec.Command("wg", "show", "wg0", "public-key").Output()
	if err != nil {
		return "", fmt.Errorf("failed to get local WG public key: %w", err)
	}

	myWGIP, err := h.getMyWGIP()
	if err != nil {
		return "", fmt.Errorf("failed to get local WG IP: %w", err)
	}

	myPublicIP, err := h.getMyPublicIP(ctx)
	if err != nil {
		return "", fmt.Errorf("failed to get local public IP: %w", err)
	}

	var config strings.Builder
	config.WriteString("[Interface]\n")
	config.WriteString(fmt.Sprintf("PrivateKey = %s\n", privKey))
	config.WriteString(fmt.Sprintf("Address = %s/24\n", nodeWGIP))
	config.WriteString("ListenPort = 51820\n")
	config.WriteString("\n")

	// Add this gateway node as a peer
	config.WriteString("[Peer]\n")
	config.WriteString(fmt.Sprintf("PublicKey = %s\n", strings.TrimSpace(string(myPubKey))))
	config.WriteString(fmt.Sprintf("Endpoint = %s:51820\n", myPublicIP))
	config.WriteString(fmt.Sprintf("AllowedIPs = %s/32\n", myWGIP))
	config.WriteString("PersistentKeepalive = 25\n")

	// Add all existing peers
	type peerRow struct {
		WGIP      string `db:"wg_ip"`
		PublicKey string `db:"public_key"`
		PublicIP  string `db:"public_ip"`
	}
	var peers []peerRow
	if err := h.rqliteClient.Query(ctx, &peers,
		"SELECT wg_ip, public_key, public_ip FROM wireguard_peers WHERE wg_ip != ?", nodeWGIP); err != nil {
		h.logger.Warn("failed to query peers for WG config", zap.Error(err))
	}

	for _, p := range peers {
		if p.PublicKey == strings.TrimSpace(string(myPubKey)) {
			continue // already added above
		}
		config.WriteString(fmt.Sprintf("\n[Peer]\nPublicKey = %s\nEndpoint = %s:51820\nAllowedIPs = %s/32\nPersistentKeepalive = 25\n",
			p.PublicKey, p.PublicIP, p.WGIP))
	}

	return config.String(), nil
}

// getPeerList returns all cluster peers for LUKS key distribution.
func (h *Handler) getPeerList(ctx context.Context, excludeWGIP string) ([]PeerInfo, error) {
	type peerRow struct {
		NodeID string `db:"node_id"`
		WGIP   string `db:"wg_ip"`
	}
	var rows []peerRow
	if err := h.rqliteClient.Query(ctx, &rows,
		"SELECT node_id, wg_ip FROM wireguard_peers WHERE wg_ip != ?", excludeWGIP); err != nil {
		return nil, err
	}

	peers := make([]PeerInfo, 0, len(rows))
	for _, row := range rows {
		peers = append(peers, PeerInfo{
			WGIP:   row.WGIP,
			NodeID: row.NodeID,
		})
	}
	return peers, nil
}

// getMyWGIP gets this node's WireGuard IP.
func (h *Handler) getMyWGIP() (string, error) {
	out, err := exec.Command("ip", "-4", "addr", "show", "wg0").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("failed to get wg0 info: %w", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "inet ") {
			parts := strings.Fields(line)
			if len(parts) >= 2 {
				return strings.Split(parts[1], "/")[0], nil
			}
		}
	}
	return "", fmt.Errorf("could not find wg0 IP")
}

// getMyPublicIP reads this node's public IP from the database.
func (h *Handler) getMyPublicIP(ctx context.Context) (string, error) {
	myWGIP, err := h.getMyWGIP()
	if err != nil {
		return "", err
	}
	var rows []struct {
		PublicIP string `db:"public_ip"`
	}
	if err := h.rqliteClient.Query(ctx, &rows,
		"SELECT public_ip FROM wireguard_peers WHERE wg_ip = ?", myWGIP); err != nil {
		return "", err
	}
	if len(rows) == 0 {
		return "", fmt.Errorf("no peer entry for WG IP %s", myWGIP)
	}
	return rows[0].PublicIP, nil
}
@@ -1,272 +0,0 @@
package enroll

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"

	"go.uber.org/zap"
)

// HandleNodeStatus proxies GET /v1/node/status to the agent over WireGuard.
// Query param: ?node_id=<node_id> or ?wg_ip=<wg_ip>
func (h *Handler) HandleNodeStatus(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}

	wgIP, err := h.resolveNodeIP(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Proxy to agent's status endpoint
	body, statusCode, err := h.proxyToAgent(wgIP, "GET", "/v1/agent/status", nil)
	if err != nil {
		h.logger.Warn("failed to proxy status request", zap.String("wg_ip", wgIP), zap.Error(err))
		http.Error(w, "node unreachable: "+err.Error(), http.StatusBadGateway)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(statusCode)
	w.Write(body)
}

// HandleNodeCommand proxies POST /v1/node/command to the agent over WireGuard.
func (h *Handler) HandleNodeCommand(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}

	wgIP, err := h.resolveNodeIP(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Read command body
	r.Body = http.MaxBytesReader(w, r.Body, 1<<20)
	cmdBody, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}

	// Proxy to agent's command endpoint
	body, statusCode, err := h.proxyToAgent(wgIP, "POST", "/v1/agent/command", cmdBody)
	if err != nil {
		h.logger.Warn("failed to proxy command", zap.String("wg_ip", wgIP), zap.Error(err))
		http.Error(w, "node unreachable: "+err.Error(), http.StatusBadGateway)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(statusCode)
	w.Write(body)
}

// HandleNodeLogs proxies GET /v1/node/logs to the agent over WireGuard.
// Query params: ?node_id=<id>&service=<name>&lines=<n>
func (h *Handler) HandleNodeLogs(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}

	wgIP, err := h.resolveNodeIP(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Build query string for agent
	service := r.URL.Query().Get("service")
	lines := r.URL.Query().Get("lines")
	agentPath := "/v1/agent/logs"
	params := []string{}
	if service != "" {
		params = append(params, "service="+service)
	}
	if lines != "" {
		params = append(params, "lines="+lines)
	}
	if len(params) > 0 {
		agentPath += "?" + strings.Join(params, "&")
	}

	body, statusCode, err := h.proxyToAgent(wgIP, "GET", agentPath, nil)
	if err != nil {
		h.logger.Warn("failed to proxy logs request", zap.String("wg_ip", wgIP), zap.Error(err))
		http.Error(w, "node unreachable: "+err.Error(), http.StatusBadGateway)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(statusCode)
	w.Write(body)
}

// HandleNodeLeave handles POST /v1/node/leave — graceful node departure.
// Orchestrates: stop services → redistribute Shamir shares → remove WG peer.
func (h *Handler) HandleNodeLeave(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}

	r.Body = http.MaxBytesReader(w, r.Body, 1<<20)
	var req struct {
		NodeID string `json:"node_id"`
		WGIP   string `json:"wg_ip"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}

	wgIP := req.WGIP
	if wgIP == "" && req.NodeID != "" {
		resolved, err := h.nodeIDToWGIP(r.Context(), req.NodeID)
		if err != nil {
			http.Error(w, "node not found: "+err.Error(), http.StatusNotFound)
			return
		}
		wgIP = resolved
	}
	if wgIP == "" {
		http.Error(w, "node_id or wg_ip is required", http.StatusBadRequest)
		return
	}

	h.logger.Info("node leave requested", zap.String("wg_ip", wgIP))

	// Step 1: Tell the agent to stop services
	_, _, err := h.proxyToAgent(wgIP, "POST", "/v1/agent/command",
		[]byte(`{"action":"stop"}`))
	if err != nil {
		h.logger.Warn("failed to stop services on leaving node", zap.Error(err))
		// Continue — node may already be down
	}

	// Step 2: Remove WG peer from database
	ctx := r.Context()
	if _, err := h.rqliteClient.Exec(ctx,
		"DELETE FROM wireguard_peers WHERE wg_ip = ?", wgIP); err != nil {
		h.logger.Error("failed to remove WG peer from database", zap.Error(err))
		http.Error(w, "failed to remove peer", http.StatusInternalServerError)
		return
	}

	// Step 3: Remove from local WireGuard interface
	// Get the peer's public key first
	var rows []struct {
		PublicKey string `db:"public_key"`
	}
	_ = h.rqliteClient.Query(ctx, &rows,
		"SELECT public_key FROM wireguard_peers WHERE wg_ip = ?", wgIP)
	// Peer already deleted above, but try to remove from wg0 anyway
	h.removeWGPeerLocally(wgIP)

	h.logger.Info("node removed from cluster", zap.String("wg_ip", wgIP))

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{
		"status": "removed",
		"wg_ip":  wgIP,
	})
}

// proxyToAgent sends an HTTP request to the OramaOS agent over WireGuard.
func (h *Handler) proxyToAgent(wgIP, method, path string, body []byte) ([]byte, int, error) {
	url := fmt.Sprintf("http://%s:9998%s", wgIP, path)

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	var reqBody io.Reader
	if body != nil {
		reqBody = strings.NewReader(string(body))
	}

	req, err := http.NewRequestWithContext(ctx, method, url, reqBody)
	if err != nil {
		return nil, 0, err
	}
	if body != nil {
		req.Header.Set("Content-Type", "application/json")
	}

	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		return nil, 0, fmt.Errorf("request to agent failed: %w", err)
	}
	defer resp.Body.Close()

	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, resp.StatusCode, fmt.Errorf("failed to read agent response: %w", err)
	}

	return respBody, resp.StatusCode, nil
}

// resolveNodeIP extracts the WG IP from query parameters.
|
|
||||||
func (h *Handler) resolveNodeIP(r *http.Request) (string, error) {
|
|
||||||
wgIP := r.URL.Query().Get("wg_ip")
|
|
||||||
if wgIP != "" {
|
|
||||||
return wgIP, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
nodeID := r.URL.Query().Get("node_id")
|
|
||||||
if nodeID != "" {
|
|
||||||
return h.nodeIDToWGIP(r.Context(), nodeID)
|
|
||||||
}
|
|
||||||
|
|
||||||
return "", fmt.Errorf("wg_ip or node_id query parameter is required")
|
|
||||||
}
|
|
||||||
|
|
||||||
// nodeIDToWGIP resolves a node_id to its WireGuard IP.
|
|
||||||
func (h *Handler) nodeIDToWGIP(ctx context.Context, nodeID string) (string, error) {
|
|
||||||
var rows []struct {
|
|
||||||
WGIP string `db:"wg_ip"`
|
|
||||||
}
|
|
||||||
if err := h.rqliteClient.Query(ctx, &rows,
|
|
||||||
"SELECT wg_ip FROM wireguard_peers WHERE node_id = ?", nodeID); err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
if len(rows) == 0 {
|
|
||||||
return "", fmt.Errorf("no node found with id %s", nodeID)
|
|
||||||
}
|
|
||||||
return rows[0].WGIP, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// removeWGPeerLocally removes a peer from the local wg0 interface by its allowed IP.
|
|
||||||
func (h *Handler) removeWGPeerLocally(wgIP string) {
|
|
||||||
// Find peer public key by allowed IP
|
|
||||||
out, err := exec.Command("wg", "show", "wg0", "dump").Output()
|
|
||||||
if err != nil {
|
|
||||||
log.Printf("failed to get wg dump: %v", err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, line := range strings.Split(string(out), "\n") {
|
|
||||||
fields := strings.Split(line, "\t")
|
|
||||||
if len(fields) >= 4 && strings.Contains(fields[3], wgIP) {
|
|
||||||
pubKey := fields[0]
|
|
||||||
exec.Command("wg", "set", "wg0", "peer", pubKey, "remove").Run()
|
|
||||||
log.Printf("removed WG peer %s (%s)", pubKey[:8]+"...", wgIP)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@ -1,112 +0,0 @@
package join

import (
	"encoding/base64"
	"fmt"
	"net"
	"strings"
	"testing"
)

func TestWgPeersContainsIP_found(t *testing.T) {
	peers := []WGPeerInfo{
		{PublicKey: "key1", Endpoint: "1.2.3.4:51820", AllowedIP: "10.0.0.1/32"},
		{PublicKey: "key2", Endpoint: "5.6.7.8:51820", AllowedIP: "10.0.0.2/32"},
	}

	if !wgPeersContainsIP(peers, "10.0.0.1") {
		t.Error("expected to find 10.0.0.1 in peer list")
	}
	if !wgPeersContainsIP(peers, "10.0.0.2") {
		t.Error("expected to find 10.0.0.2 in peer list")
	}
}

func TestWgPeersContainsIP_not_found(t *testing.T) {
	peers := []WGPeerInfo{
		{PublicKey: "key1", Endpoint: "1.2.3.4:51820", AllowedIP: "10.0.0.1/32"},
	}

	if wgPeersContainsIP(peers, "10.0.0.2") {
		t.Error("did not expect to find 10.0.0.2 in peer list")
	}
}

func TestWgPeersContainsIP_empty_list(t *testing.T) {
	if wgPeersContainsIP(nil, "10.0.0.1") {
		t.Error("did not expect to find any IP in nil peer list")
	}
	if wgPeersContainsIP([]WGPeerInfo{}, "10.0.0.1") {
		t.Error("did not expect to find any IP in empty peer list")
	}
}

func TestAssignWGIP_format(t *testing.T) {
	// Verify the WG IP format used in the handler matches what wgPeersContainsIP expects
	wgIP := "10.0.0.1"
	allowedIP := fmt.Sprintf("%s/32", wgIP)
	peers := []WGPeerInfo{{AllowedIP: allowedIP}}

	if !wgPeersContainsIP(peers, wgIP) {
		t.Errorf("format mismatch: wgPeersContainsIP(%q, %q) should match", allowedIP, wgIP)
	}
}

func TestValidatePublicIP(t *testing.T) {
	tests := []struct {
		name  string
		ip    string
		valid bool
	}{
		{"valid IPv4", "46.225.234.112", true},
		{"loopback", "127.0.0.1", true},
		{"invalid string", "not-an-ip", false},
		{"empty", "", false},
		{"IPv6", "::1", false},
		{"with newline", "1.2.3.4\n5.6.7.8", false},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			parsed := net.ParseIP(tt.ip)
			isValid := parsed != nil && parsed.To4() != nil && !strings.ContainsAny(tt.ip, "\n\r")
			if isValid != tt.valid {
				t.Errorf("IP %q: expected valid=%v, got %v", tt.ip, tt.valid, isValid)
			}
		})
	}
}

func TestValidateWGPublicKey(t *testing.T) {
	// Valid WireGuard key: 32 bytes, base64 encoded = 44 chars
	validKey := base64.StdEncoding.EncodeToString(make([]byte, 32))

	tests := []struct {
		name  string
		key   string
		valid bool
	}{
		{"valid 32-byte key", validKey, true},
		{"too short", base64.StdEncoding.EncodeToString(make([]byte, 16)), false},
		{"too long", base64.StdEncoding.EncodeToString(make([]byte, 64)), false},
		{"not base64", "not-a-valid-base64-key!!!", false},
		{"empty", "", false},
		{"newline injection", validKey + "\n[Peer]", false},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if strings.ContainsAny(tt.key, "\n\r") {
				if tt.valid {
					t.Errorf("key %q contains newlines but expected valid", tt.key)
				}
				return
			}
			decoded, err := base64.StdEncoding.DecodeString(tt.key)
			isValid := err == nil && len(decoded) == 32
			if isValid != tt.valid {
				t.Errorf("key %q: expected valid=%v, got %v", tt.key, tt.valid, isValid)
			}
		})
	}
}
@ -1,132 +0,0 @@
// Package vault provides HTTP handlers for vault proxy operations.
//
// The gateway acts as a smart proxy between RootWallet clients and
// vault guardian nodes on the WireGuard overlay network. It handles
// Shamir split/combine so clients make a single HTTPS call.
package vault

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"

	"github.com/DeBrosOfficial/network/pkg/client"
	"github.com/DeBrosOfficial/network/pkg/logging"
)

const (
	// VaultGuardianPort is the port vault guardians listen on (client API).
	VaultGuardianPort = 7500

	// guardianTimeout is the per-guardian HTTP request timeout.
	guardianTimeout = 5 * time.Second

	// overallTimeout is the maximum time for the full fan-out operation.
	overallTimeout = 15 * time.Second

	// maxPushBodySize limits push request bodies (1 MiB).
	maxPushBodySize = 1 << 20

	// maxPullBodySize limits pull request bodies (4 KiB).
	maxPullBodySize = 4 << 10
)

// Handlers provides HTTP handlers for vault proxy operations.
type Handlers struct {
	logger      *logging.ColoredLogger
	dbClient    client.NetworkClient
	rateLimiter *IdentityRateLimiter
	httpClient  *http.Client
}

// NewHandlers creates vault proxy handlers.
func NewHandlers(logger *logging.ColoredLogger, dbClient client.NetworkClient) *Handlers {
	h := &Handlers{
		logger:   logger,
		dbClient: dbClient,
		rateLimiter: NewIdentityRateLimiter(
			30,  // 30 pushes per hour per identity
			120, // 120 pulls per hour per identity
		),
		httpClient: &http.Client{
			Timeout: guardianTimeout,
			Transport: &http.Transport{
				MaxIdleConns:        100,
				MaxIdleConnsPerHost: 10,
				IdleConnTimeout:     90 * time.Second,
			},
		},
	}
	h.rateLimiter.StartCleanup(10*time.Minute, 1*time.Hour)
	return h
}

// guardian represents a reachable vault guardian node.
type guardian struct {
	IP   string
	Port int
}

// discoverGuardians queries dns_nodes for all active nodes.
// Every Orama node runs a vault guardian, so every active node is a guardian.
func (h *Handlers) discoverGuardians(ctx context.Context) ([]guardian, error) {
	db := h.dbClient.Database()
	internalCtx := client.WithInternalAuth(ctx)

	query := "SELECT COALESCE(internal_ip, ip_address) FROM dns_nodes WHERE status = 'active'"
	result, err := db.Query(internalCtx, query)
	if err != nil {
		return nil, fmt.Errorf("vault: failed to query guardian nodes: %w", err)
	}
	if result == nil || len(result.Rows) == 0 {
		return nil, fmt.Errorf("vault: no active guardian nodes found")
	}

	guardians := make([]guardian, 0, len(result.Rows))
	for _, row := range result.Rows {
		if len(row) == 0 {
			continue
		}
		ip := getString(row[0])
		if ip == "" {
			continue
		}
		guardians = append(guardians, guardian{IP: ip, Port: VaultGuardianPort})
	}
	if len(guardians) == 0 {
		return nil, fmt.Errorf("vault: no guardian nodes with valid IPs found")
	}
	return guardians, nil
}

func writeJSON(w http.ResponseWriter, status int, v interface{}) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	json.NewEncoder(w).Encode(v)
}

func writeError(w http.ResponseWriter, status int, msg string) {
	writeJSON(w, status, map[string]string{"error": msg})
}

func getString(v interface{}) string {
	if s, ok := v.(string); ok {
		return s
	}
	return ""
}

// isValidIdentity checks that identity is exactly 64 hex characters.
func isValidIdentity(identity string) bool {
	if len(identity) != 64 {
		return false
	}
	for _, c := range identity {
		if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
			return false
		}
	}
	return true
}
@ -1,116 +0,0 @@
package vault

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"sync"
	"sync/atomic"

	"github.com/DeBrosOfficial/network/pkg/shamir"
)

// HealthResponse is returned for GET /v1/vault/health.
type HealthResponse struct {
	Status string `json:"status"` // "healthy", "degraded", "unavailable"
}

// StatusResponse is returned for GET /v1/vault/status.
type StatusResponse struct {
	Guardians   int `json:"guardians"`    // Total guardian nodes
	Healthy     int `json:"healthy"`      // Reachable guardians
	Threshold   int `json:"threshold"`    // Read quorum (K)
	WriteQuorum int `json:"write_quorum"` // Write quorum (W)
}

// HandleHealth processes GET /v1/vault/health.
func (h *Handlers) HandleHealth(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		writeError(w, http.StatusMethodNotAllowed, "method not allowed")
		return
	}

	guardians, err := h.discoverGuardians(r.Context())
	if err != nil {
		writeJSON(w, http.StatusOK, HealthResponse{Status: "unavailable"})
		return
	}

	n := len(guardians)
	healthy := h.probeGuardians(r.Context(), guardians)

	k := shamir.AdaptiveThreshold(n)
	wq := shamir.WriteQuorum(n)

	status := "healthy"
	if healthy < wq {
		if healthy >= k {
			status = "degraded"
		} else {
			status = "unavailable"
		}
	}

	writeJSON(w, http.StatusOK, HealthResponse{Status: status})
}

// HandleStatus processes GET /v1/vault/status.
func (h *Handlers) HandleStatus(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		writeError(w, http.StatusMethodNotAllowed, "method not allowed")
		return
	}

	guardians, err := h.discoverGuardians(r.Context())
	if err != nil {
		writeJSON(w, http.StatusOK, StatusResponse{})
		return
	}

	n := len(guardians)
	healthy := h.probeGuardians(r.Context(), guardians)

	writeJSON(w, http.StatusOK, StatusResponse{
		Guardians:   n,
		Healthy:     healthy,
		Threshold:   shamir.AdaptiveThreshold(n),
		WriteQuorum: shamir.WriteQuorum(n),
	})
}

// probeGuardians checks health of all guardians in parallel and returns the healthy count.
func (h *Handlers) probeGuardians(ctx context.Context, guardians []guardian) int {
	ctx, cancel := context.WithTimeout(ctx, guardianTimeout)
	defer cancel()

	var healthyCount atomic.Int32
	var wg sync.WaitGroup
	wg.Add(len(guardians))

	for _, g := range guardians {
		go func(gd guardian) {
			defer wg.Done()

			url := fmt.Sprintf("http://%s:%d/v1/vault/health", gd.IP, gd.Port)
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return
			}

			resp, err := h.httpClient.Do(req)
			if err != nil {
				return
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body)

			if resp.StatusCode >= 200 && resp.StatusCode < 300 {
				healthyCount.Add(1)
			}
		}(g)
	}

	wg.Wait()
	return int(healthyCount.Load())
}
@ -1,183 +0,0 @@
package vault

import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"sync"

	"github.com/DeBrosOfficial/network/pkg/logging"
	"github.com/DeBrosOfficial/network/pkg/shamir"
	"go.uber.org/zap"
)

// PullRequest is the client-facing request body.
type PullRequest struct {
	Identity string `json:"identity"` // 64 hex chars
}

// PullResponse is returned to the client.
type PullResponse struct {
	Envelope  string `json:"envelope"`  // base64-encoded reconstructed envelope
	Collected int    `json:"collected"` // Number of shares collected
	Threshold int    `json:"threshold"` // K threshold used
}

// guardianPullRequest is sent to each vault guardian.
type guardianPullRequest struct {
	Identity string `json:"identity"`
}

// guardianPullResponse is the response from a guardian.
type guardianPullResponse struct {
	Share string `json:"share"` // base64([x:1byte][y:rest])
}

// HandlePull processes POST /v1/vault/pull.
func (h *Handlers) HandlePull(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		writeError(w, http.StatusMethodNotAllowed, "method not allowed")
		return
	}

	body, err := io.ReadAll(io.LimitReader(r.Body, maxPullBodySize))
	if err != nil {
		writeError(w, http.StatusBadRequest, "failed to read request body")
		return
	}

	var req PullRequest
	if err := json.Unmarshal(body, &req); err != nil {
		writeError(w, http.StatusBadRequest, "invalid JSON")
		return
	}

	if !isValidIdentity(req.Identity) {
		writeError(w, http.StatusBadRequest, "identity must be 64 hex characters")
		return
	}

	if !h.rateLimiter.AllowPull(req.Identity) {
		w.Header().Set("Retry-After", "30")
		writeError(w, http.StatusTooManyRequests, "pull rate limit exceeded for this identity")
		return
	}

	guardians, err := h.discoverGuardians(r.Context())
	if err != nil {
		h.logger.ComponentError(logging.ComponentGeneral, "Vault pull: guardian discovery failed", zap.Error(err))
		writeError(w, http.StatusServiceUnavailable, "no guardian nodes available")
		return
	}

	n := len(guardians)
	k := shamir.AdaptiveThreshold(n)

	// Fan out pull requests to all guardians.
	ctx, cancel := context.WithTimeout(r.Context(), overallTimeout)
	defer cancel()

	type shareResult struct {
		share shamir.Share
		ok    bool
	}

	results := make([]shareResult, n)
	var wg sync.WaitGroup
	wg.Add(n)

	for i, g := range guardians {
		go func(idx int, gd guardian) {
			defer wg.Done()

			guardianReq := guardianPullRequest{Identity: req.Identity}
			reqBody, _ := json.Marshal(guardianReq)

			url := fmt.Sprintf("http://%s:%d/v1/vault/pull", gd.IP, gd.Port)
			httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(reqBody))
			if err != nil {
				return
			}
			httpReq.Header.Set("Content-Type", "application/json")

			resp, err := h.httpClient.Do(httpReq)
			if err != nil {
				return
			}
			defer resp.Body.Close()

			if resp.StatusCode < 200 || resp.StatusCode >= 300 {
				io.Copy(io.Discard, resp.Body)
				return
			}

			var pullResp guardianPullResponse
			if err := json.NewDecoder(resp.Body).Decode(&pullResp); err != nil {
				return
			}

			shareBytes, err := base64.StdEncoding.DecodeString(pullResp.Share)
			if err != nil || len(shareBytes) < 2 {
				return
			}

			results[idx] = shareResult{
				share: shamir.Share{
					X: shareBytes[0],
					Y: shareBytes[1:],
				},
				ok: true,
			}
		}(i, g)
	}

	wg.Wait()

	// Collect successful shares.
	shares := make([]shamir.Share, 0, n)
	for _, r := range results {
		if r.ok {
			shares = append(shares, r.share)
		}
	}

	if len(shares) < k {
		h.logger.ComponentError(logging.ComponentGeneral, "Vault pull: not enough shares",
			zap.Int("collected", len(shares)), zap.Int("total", n), zap.Int("threshold", k))
		writeError(w, http.StatusServiceUnavailable,
			fmt.Sprintf("not enough shares: collected %d of %d required (contacted %d guardians)", len(shares), k, n))
		return
	}

	// Shamir combine to reconstruct envelope.
	envelope, err := shamir.Combine(shares[:k])
	if err != nil {
		h.logger.ComponentError(logging.ComponentGeneral, "Vault pull: Shamir combine failed", zap.Error(err))
		writeError(w, http.StatusInternalServerError, "failed to reconstruct envelope")
		return
	}

	// Wipe collected shares.
	for i := range shares {
		for j := range shares[i].Y {
			shares[i].Y[j] = 0
		}
	}

	envelopeB64 := base64.StdEncoding.EncodeToString(envelope)

	// Wipe envelope.
	for i := range envelope {
		envelope[i] = 0
	}

	writeJSON(w, http.StatusOK, PullResponse{
		Envelope:  envelopeB64,
		Collected: len(shares),
		Threshold: k,
	})
}
@ -1,168 +0,0 @@
package vault

import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"sync"
	"sync/atomic"

	"github.com/DeBrosOfficial/network/pkg/logging"
	"github.com/DeBrosOfficial/network/pkg/shamir"
	"go.uber.org/zap"
)

// PushRequest is the client-facing request body.
type PushRequest struct {
	Identity string `json:"identity"` // 64 hex chars (SHA-256)
	Envelope string `json:"envelope"` // base64-encoded encrypted envelope
	Version  uint64 `json:"version"`  // Anti-rollback version counter
}

// PushResponse is returned to the client.
type PushResponse struct {
	Status    string `json:"status"` // "ok" or "partial"
	AckCount  int    `json:"ack_count"`
	Total     int    `json:"total"`
	Quorum    int    `json:"quorum"`
	Threshold int    `json:"threshold"`
}

// guardianPushRequest is sent to each vault guardian.
type guardianPushRequest struct {
	Identity string `json:"identity"`
	Share    string `json:"share"` // base64([x:1byte][y:rest])
	Version  uint64 `json:"version"`
}

// HandlePush processes POST /v1/vault/push.
func (h *Handlers) HandlePush(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		writeError(w, http.StatusMethodNotAllowed, "method not allowed")
		return
	}

	body, err := io.ReadAll(io.LimitReader(r.Body, maxPushBodySize))
	if err != nil {
		writeError(w, http.StatusBadRequest, "failed to read request body")
		return
	}

	var req PushRequest
	if err := json.Unmarshal(body, &req); err != nil {
		writeError(w, http.StatusBadRequest, "invalid JSON")
		return
	}

	if !isValidIdentity(req.Identity) {
		writeError(w, http.StatusBadRequest, "identity must be 64 hex characters")
		return
	}

	envelopeBytes, err := base64.StdEncoding.DecodeString(req.Envelope)
	if err != nil {
		writeError(w, http.StatusBadRequest, "invalid base64 envelope")
		return
	}
	if len(envelopeBytes) == 0 {
		writeError(w, http.StatusBadRequest, "envelope must not be empty")
		return
	}

	if !h.rateLimiter.AllowPush(req.Identity) {
		w.Header().Set("Retry-After", "120")
		writeError(w, http.StatusTooManyRequests, "push rate limit exceeded for this identity")
		return
	}

	guardians, err := h.discoverGuardians(r.Context())
	if err != nil {
		h.logger.ComponentError(logging.ComponentGeneral, "Vault push: guardian discovery failed", zap.Error(err))
		writeError(w, http.StatusServiceUnavailable, "no guardian nodes available")
		return
	}

	n := len(guardians)
	k := shamir.AdaptiveThreshold(n)
	quorum := shamir.WriteQuorum(n)

	shares, err := shamir.Split(envelopeBytes, n, k)
	if err != nil {
		h.logger.ComponentError(logging.ComponentGeneral, "Vault push: Shamir split failed", zap.Error(err))
		writeError(w, http.StatusInternalServerError, "failed to split envelope")
		return
	}

	// Fan out to guardians in parallel.
	ctx, cancel := context.WithTimeout(r.Context(), overallTimeout)
	defer cancel()

	var ackCount atomic.Int32
	var wg sync.WaitGroup
	wg.Add(n)

	for i, g := range guardians {
		go func(idx int, gd guardian) {
			defer wg.Done()

			share := shares[idx]
			// Serialize: [x:1byte][y:rest]
			shareBytes := make([]byte, 1+len(share.Y))
			shareBytes[0] = share.X
			copy(shareBytes[1:], share.Y)
			shareB64 := base64.StdEncoding.EncodeToString(shareBytes)

			guardianReq := guardianPushRequest{
				Identity: req.Identity,
				Share:    shareB64,
				Version:  req.Version,
			}
			reqBody, _ := json.Marshal(guardianReq)

			url := fmt.Sprintf("http://%s:%d/v1/vault/push", gd.IP, gd.Port)
			httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(reqBody))
			if err != nil {
				return
			}
			httpReq.Header.Set("Content-Type", "application/json")

			resp, err := h.httpClient.Do(httpReq)
			if err != nil {
				return
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body)

			if resp.StatusCode >= 200 && resp.StatusCode < 300 {
				ackCount.Add(1)
			}
		}(i, g)
	}

	wg.Wait()

	// Wipe share data.
	for i := range shares {
		for j := range shares[i].Y {
			shares[i].Y[j] = 0
		}
	}

	ack := int(ackCount.Load())
	status := "ok"
	if ack < quorum {
		status = "partial"
	}

	writeJSON(w, http.StatusOK, PushResponse{
		Status:    status,
		AckCount:  ack,
		Total:     n,
		Quorum:    quorum,
		Threshold: k,
	})
}
@ -1,120 +0,0 @@
package vault

import (
	"sync"
	"time"
)

// IdentityRateLimiter provides per-identity-hash rate limiting for vault operations.
// Push and pull have separate rate limits since push is more expensive.
type IdentityRateLimiter struct {
	pushBuckets sync.Map // identity -> *tokenBucket
	pullBuckets sync.Map // identity -> *tokenBucket
	pushRate    float64  // tokens per second
	pushBurst   int
	pullRate    float64 // tokens per second
	pullBurst   int
	stopCh      chan struct{}
}

type tokenBucket struct {
	mu        sync.Mutex
	tokens    float64
	lastCheck time.Time
}

// NewIdentityRateLimiter creates a per-identity rate limiter.
// pushPerHour and pullPerHour are sustained rates; burst is 1/6th of the hourly rate.
func NewIdentityRateLimiter(pushPerHour, pullPerHour int) *IdentityRateLimiter {
	pushBurst := pushPerHour / 6
	if pushBurst < 1 {
		pushBurst = 1
	}
	pullBurst := pullPerHour / 6
	if pullBurst < 1 {
		pullBurst = 1
	}
	return &IdentityRateLimiter{
		pushRate:  float64(pushPerHour) / 3600.0,
		pushBurst: pushBurst,
		pullRate:  float64(pullPerHour) / 3600.0,
		pullBurst: pullBurst,
	}
}

// AllowPush checks if a push for this identity is allowed.
func (rl *IdentityRateLimiter) AllowPush(identity string) bool {
	return rl.allow(&rl.pushBuckets, identity, rl.pushRate, rl.pushBurst)
}

// AllowPull checks if a pull for this identity is allowed.
func (rl *IdentityRateLimiter) AllowPull(identity string) bool {
	return rl.allow(&rl.pullBuckets, identity, rl.pullRate, rl.pullBurst)
}

func (rl *IdentityRateLimiter) allow(buckets *sync.Map, identity string, rate float64, burst int) bool {
	val, _ := buckets.LoadOrStore(identity, &tokenBucket{
		tokens:    float64(burst),
		lastCheck: time.Now(),
	})
	b := val.(*tokenBucket)

	b.mu.Lock()
	defer b.mu.Unlock()

	now := time.Now()
	elapsed := now.Sub(b.lastCheck).Seconds()
	b.tokens += elapsed * rate
	if b.tokens > float64(burst) {
		b.tokens = float64(burst)
	}
	b.lastCheck = now

	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

// StartCleanup runs periodic cleanup of stale identity entries.
func (rl *IdentityRateLimiter) StartCleanup(interval, maxAge time.Duration) {
	rl.stopCh = make(chan struct{})
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				rl.cleanup(maxAge)
			case <-rl.stopCh:
				return
			}
		}
	}()
}

// Stop terminates the background cleanup goroutine.
func (rl *IdentityRateLimiter) Stop() {
	if rl.stopCh != nil {
		close(rl.stopCh)
	}
}

func (rl *IdentityRateLimiter) cleanup(maxAge time.Duration) {
	cutoff := time.Now().Add(-maxAge)
	cleanMap := func(m *sync.Map) {
		m.Range(func(key, value interface{}) bool {
			b := value.(*tokenBucket)
			b.mu.Lock()
			stale := b.lastCheck.Before(cutoff)
			b.mu.Unlock()
			if stale {
				m.Delete(key)
			}
			return true
		})
	}
	cleanMap(&rl.pushBuckets)
	cleanMap(&rl.pullBuckets)
}
@@ -1,111 +0,0 @@
package gateway

import "testing"

func TestAggregateHealthStatus_allHealthy(t *testing.T) {
	checks := map[string]checkResult{
		"rqlite":    {Status: "ok"},
		"olric":     {Status: "ok"},
		"ipfs":      {Status: "ok"},
		"libp2p":    {Status: "ok"},
		"anyone":    {Status: "ok"},
		"vault":     {Status: "ok"},
		"wireguard": {Status: "ok"},
	}
	if got := aggregateHealthStatus(checks); got != "healthy" {
		t.Errorf("expected healthy, got %s", got)
	}
}

func TestAggregateHealthStatus_rqliteError(t *testing.T) {
	checks := map[string]checkResult{
		"rqlite": {Status: "error", Error: "connection refused"},
		"olric":  {Status: "ok"},
		"ipfs":   {Status: "ok"},
	}
	if got := aggregateHealthStatus(checks); got != "unhealthy" {
		t.Errorf("expected unhealthy, got %s", got)
	}
}

func TestAggregateHealthStatus_nonCriticalError(t *testing.T) {
	checks := map[string]checkResult{
		"rqlite": {Status: "ok"},
		"olric":  {Status: "error", Error: "timeout"},
		"ipfs":   {Status: "ok"},
	}
	if got := aggregateHealthStatus(checks); got != "degraded" {
		t.Errorf("expected degraded, got %s", got)
	}
}

func TestAggregateHealthStatus_unavailableIsNotError(t *testing.T) {
	// Key test: "unavailable" services (like Anyone in sandbox) should NOT
	// cause degraded status.
	checks := map[string]checkResult{
		"rqlite":    {Status: "ok"},
		"olric":     {Status: "ok"},
		"vault":     {Status: "ok"},
		"ipfs":      {Status: "unavailable"},
		"libp2p":    {Status: "unavailable"},
		"anyone":    {Status: "unavailable"},
		"wireguard": {Status: "unavailable"},
	}
	if got := aggregateHealthStatus(checks); got != "healthy" {
		t.Errorf("expected healthy when services are unavailable, got %s", got)
	}
}

func TestAggregateHealthStatus_emptyChecks(t *testing.T) {
	checks := map[string]checkResult{}
	if got := aggregateHealthStatus(checks); got != "healthy" {
		t.Errorf("expected healthy for empty checks, got %s", got)
	}
}

func TestAggregateHealthStatus_rqliteErrorOverridesDegraded(t *testing.T) {
	// rqlite error should take priority over other errors
	checks := map[string]checkResult{
		"rqlite": {Status: "error", Error: "leader not found"},
		"olric":  {Status: "error", Error: "timeout"},
		"anyone": {Status: "error", Error: "not reachable"},
	}
	if got := aggregateHealthStatus(checks); got != "unhealthy" {
		t.Errorf("expected unhealthy (rqlite takes priority), got %s", got)
	}
}

func TestAggregateHealthStatus_vaultErrorIsUnhealthy(t *testing.T) {
	// vault is critical — error should mean unhealthy, not degraded
	checks := map[string]checkResult{
		"rqlite": {Status: "ok"},
		"vault":  {Status: "error", Error: "vault-guardian unreachable on port 7500"},
		"olric":  {Status: "ok"},
	}
	if got := aggregateHealthStatus(checks); got != "unhealthy" {
		t.Errorf("expected unhealthy (vault is critical), got %s", got)
	}
}

func TestAggregateHealthStatus_wireguardErrorIsDegraded(t *testing.T) {
	// wireguard is non-critical — error should mean degraded, not unhealthy
	checks := map[string]checkResult{
		"rqlite":    {Status: "ok"},
		"vault":     {Status: "ok"},
		"wireguard": {Status: "error", Error: "wg0 interface not found"},
	}
	if got := aggregateHealthStatus(checks); got != "degraded" {
		t.Errorf("expected degraded (wireguard is non-critical), got %s", got)
	}
}

func TestAggregateHealthStatus_bothCriticalDown(t *testing.T) {
	checks := map[string]checkResult{
		"rqlite":    {Status: "error", Error: "connection refused"},
		"vault":     {Status: "error", Error: "unreachable"},
		"wireguard": {Status: "ok"},
	}
	if got := aggregateHealthStatus(checks); got != "unhealthy" {
		t.Errorf("expected unhealthy, got %s", got)
	}
}
@@ -1,95 +0,0 @@
package ipfs

import (
	"testing"

	"github.com/multiformats/go-multiaddr"
)

func TestExtractIPFromMultiaddr(t *testing.T) {
	tests := []struct {
		name     string
		addr     string
		expected string
	}{
		{
			name:     "ipv4 tcp address",
			addr:     "/ip4/10.0.0.1/tcp/4001",
			expected: "10.0.0.1",
		},
		{
			name:     "ipv4 public address",
			addr:     "/ip4/203.0.113.5/tcp/4001",
			expected: "203.0.113.5",
		},
		{
			name:     "ipv4 loopback",
			addr:     "/ip4/127.0.0.1/tcp/4001",
			expected: "127.0.0.1",
		},
		{
			name:     "ipv6 address",
			addr:     "/ip6/::1/tcp/4001",
			expected: "[::1]",
		},
		{
			name:     "wireguard ip with udp",
			addr:     "/ip4/10.0.0.3/udp/4001/quic",
			expected: "10.0.0.3",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			ma, err := multiaddr.NewMultiaddr(tt.addr)
			if err != nil {
				t.Fatalf("failed to parse multiaddr %q: %v", tt.addr, err)
			}
			got := extractIPFromMultiaddr(ma)
			if got != tt.expected {
				t.Errorf("extractIPFromMultiaddr(%q) = %q, want %q", tt.addr, got, tt.expected)
			}
		})
	}
}

func TestExtractIPFromMultiaddr_Nil(t *testing.T) {
	got := extractIPFromMultiaddr(nil)
	if got != "" {
		t.Errorf("extractIPFromMultiaddr(nil) = %q, want empty string", got)
	}
}

// TestWireGuardIPFiltering verifies that only 10.0.0.x IPs would be selected
// for peer discovery queries. This tests the filtering logic used in
// DiscoverClusterPeersFromLibP2P.
func TestWireGuardIPFiltering(t *testing.T) {
	tests := []struct {
		name     string
		addr     string
		accepted bool
	}{
		{"wireguard ip", "/ip4/10.0.0.1/tcp/4001", true},
		{"wireguard ip high", "/ip4/10.0.0.254/tcp/4001", true},
		{"public ip", "/ip4/203.0.113.5/tcp/4001", false},
		{"private 192.168", "/ip4/192.168.1.1/tcp/4001", false},
		{"private 172.16", "/ip4/172.16.0.1/tcp/4001", false},
		{"loopback", "/ip4/127.0.0.1/tcp/4001", false},
		{"different 10.x subnet", "/ip4/10.1.0.1/tcp/4001", false},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			ma, err := multiaddr.NewMultiaddr(tt.addr)
			if err != nil {
				t.Fatalf("failed to parse multiaddr: %v", err)
			}
			ip := extractIPFromMultiaddr(ma)
			// Replicate the filtering logic from DiscoverClusterPeersFromLibP2P
			accepted := ip != "" && len(ip) >= 7 && ip[:7] == "10.0.0."
			if accepted != tt.accepted {
				t.Errorf("IP %q: accepted=%v, want %v", ip, accepted, tt.accepted)
			}
		})
	}
}
@@ -1,19 +0,0 @@
package rqlite

import (
	"context"
	"database/sql"
	"fmt"
)

// SafeExecContext wraps db.ExecContext with panic recovery.
// The gorqlite stdlib driver can panic with "index out of range" when
// RQLite is temporarily unavailable. This converts the panic to an error.
func SafeExecContext(db *sql.DB, ctx context.Context, query string, args ...interface{}) (result sql.Result, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("gorqlite panic (ExecContext): %v", r)
		}
	}()
	return db.ExecContext(ctx, query, args...)
}
@@ -1,222 +0,0 @@
package rwagent

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"
)

const (
	// DefaultSocketName is the socket file relative to ~/.rootwallet/.
	DefaultSocketName = "agent.sock"

	// DefaultTimeout for HTTP requests to the agent.
	// Set high enough to allow pending approval flow (2 min approval timeout).
	DefaultTimeout = 150 * time.Second
)

// Client communicates with the rootwallet agent daemon over a Unix socket.
type Client struct {
	httpClient *http.Client
	socketPath string
}

// New creates a client that connects to the agent's Unix socket.
// If socketPath is empty, defaults to ~/.rootwallet/agent.sock.
func New(socketPath string) *Client {
	if socketPath == "" {
		home, _ := os.UserHomeDir()
		socketPath = filepath.Join(home, ".rootwallet", DefaultSocketName)
	}

	return &Client{
		socketPath: socketPath,
		httpClient: &http.Client{
			Transport: &http.Transport{
				DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
					var d net.Dialer
					return d.DialContext(ctx, "unix", socketPath)
				},
			},
			Timeout: DefaultTimeout,
		},
	}
}

// Status returns the agent's current status.
func (c *Client) Status(ctx context.Context) (*StatusResponse, error) {
	var resp apiResponse[StatusResponse]
	if err := c.doJSON(ctx, "GET", "/v1/status", nil, &resp); err != nil {
		return nil, err
	}
	if !resp.OK {
		return nil, c.apiError(resp.Error, resp.Code, 0)
	}
	return &resp.Data, nil
}

// IsRunning returns true if the agent is reachable.
func (c *Client) IsRunning(ctx context.Context) bool {
	_, err := c.Status(ctx)
	return err == nil
}

// GetSSHKey retrieves an SSH key from the vault.
// format: "priv", "pub", or "both".
func (c *Client) GetSSHKey(ctx context.Context, host, username, format string) (*VaultSSHData, error) {
	path := fmt.Sprintf("/v1/vault/ssh/%s/%s?format=%s",
		url.PathEscape(host),
		url.PathEscape(username),
		url.QueryEscape(format),
	)

	var resp apiResponse[VaultSSHData]
	if err := c.doJSON(ctx, "GET", path, nil, &resp); err != nil {
		return nil, err
	}
	if !resp.OK {
		return nil, c.apiError(resp.Error, resp.Code, 0)
	}
	return &resp.Data, nil
}

// CreateSSHEntry creates a new SSH key entry in the vault.
func (c *Client) CreateSSHEntry(ctx context.Context, host, username string) (*VaultSSHData, error) {
	body := map[string]string{"host": host, "username": username}

	var resp apiResponse[VaultSSHData]
	if err := c.doJSON(ctx, "POST", "/v1/vault/ssh", body, &resp); err != nil {
		return nil, err
	}
	if !resp.OK {
		return nil, c.apiError(resp.Error, resp.Code, 0)
	}
	return &resp.Data, nil
}

// GetPassword retrieves a stored password from the vault.
func (c *Client) GetPassword(ctx context.Context, domain, username string) (*VaultPasswordData, error) {
	path := fmt.Sprintf("/v1/vault/password/%s/%s",
		url.PathEscape(domain),
		url.PathEscape(username),
	)

	var resp apiResponse[VaultPasswordData]
	if err := c.doJSON(ctx, "GET", path, nil, &resp); err != nil {
		return nil, err
	}
	if !resp.OK {
		return nil, c.apiError(resp.Error, resp.Code, 0)
	}
	return &resp.Data, nil
}

// GetAddress returns the active wallet address.
func (c *Client) GetAddress(ctx context.Context, chain string) (*WalletAddressData, error) {
	path := fmt.Sprintf("/v1/wallet/address?chain=%s", url.QueryEscape(chain))

	var resp apiResponse[WalletAddressData]
	if err := c.doJSON(ctx, "GET", path, nil, &resp); err != nil {
		return nil, err
	}
	if !resp.OK {
		return nil, c.apiError(resp.Error, resp.Code, 0)
	}
	return &resp.Data, nil
}

// Unlock sends the password to unlock the agent.
func (c *Client) Unlock(ctx context.Context, password string, ttlMinutes int) error {
	body := map[string]any{"password": password, "ttlMinutes": ttlMinutes}

	var resp apiResponse[any]
	if err := c.doJSON(ctx, "POST", "/v1/unlock", body, &resp); err != nil {
		return err
	}
	if !resp.OK {
		return c.apiError(resp.Error, resp.Code, 0)
	}
	return nil
}

// Lock locks the agent, zeroing all key material.
func (c *Client) Lock(ctx context.Context) error {
	var resp apiResponse[any]
	if err := c.doJSON(ctx, "POST", "/v1/lock", nil, &resp); err != nil {
		return err
	}
	if !resp.OK {
		return c.apiError(resp.Error, resp.Code, 0)
	}
	return nil
}

// doJSON performs an HTTP request and decodes the JSON response.
func (c *Client) doJSON(ctx context.Context, method, path string, body any, result any) error {
	var bodyReader io.Reader
	if body != nil {
		data, err := json.Marshal(body)
		if err != nil {
			return fmt.Errorf("marshal request body: %w", err)
		}
		bodyReader = strings.NewReader(string(data))
	}

	// URL host is ignored for Unix sockets, but required by http.NewRequest
	req, err := http.NewRequestWithContext(ctx, method, "http://localhost"+path, bodyReader)
	if err != nil {
		return fmt.Errorf("create request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-RW-PID", strconv.Itoa(os.Getpid()))

	resp, err := c.httpClient.Do(req)
	if err != nil {
		// Connection refused or socket not found = agent not running
		if isConnectionError(err) {
			return ErrAgentNotRunning
		}
		return fmt.Errorf("agent request failed: %w", err)
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("read response: %w", err)
	}

	if err := json.Unmarshal(data, result); err != nil {
		return fmt.Errorf("decode response: %w", err)
	}

	return nil
}

func (c *Client) apiError(message, code string, statusCode int) *AgentError {
	return &AgentError{
		Code:       code,
		Message:    message,
		StatusCode: statusCode,
	}
}

// isConnectionError checks if the error is a connection-level failure.
func isConnectionError(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "connection refused") ||
		strings.Contains(msg, "no such file or directory") ||
		strings.Contains(msg, "connect: no such file")
}
@@ -1,257 +0,0 @@
package rwagent

import (
	"context"
	"encoding/json"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"testing"
)

// startMockAgent creates a mock agent server on a Unix socket for testing.
func startMockAgent(t *testing.T, handler http.Handler) (socketPath string, cleanup func()) {
	t.Helper()

	tmpDir := t.TempDir()
	socketPath = filepath.Join(tmpDir, "test-agent.sock")

	listener, err := net.Listen("unix", socketPath)
	if err != nil {
		t.Fatalf("listen on unix socket: %v", err)
	}

	server := &http.Server{Handler: handler}
	go func() { _ = server.Serve(listener) }()

	cleanup = func() {
		_ = server.Close()
		_ = os.Remove(socketPath)
	}
	return socketPath, cleanup
}

// jsonHandler returns an http.HandlerFunc that responds with the given JSON.
func jsonHandler(statusCode int, body any) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(statusCode)
		data, _ := json.Marshal(body)
		_, _ = w.Write(data)
	}
}

func TestStatus(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/status", jsonHandler(200, apiResponse[StatusResponse]{
		OK: true,
		Data: StatusResponse{
			Version: "1.0.0",
			Locked:  false,
			Uptime:  120,
			PID:     12345,
		},
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	status, err := client.Status(context.Background())
	if err != nil {
		t.Fatalf("Status() error: %v", err)
	}

	if status.Version != "1.0.0" {
		t.Errorf("Version = %q, want %q", status.Version, "1.0.0")
	}
	if status.Locked {
		t.Error("Locked = true, want false")
	}
	if status.Uptime != 120 {
		t.Errorf("Uptime = %d, want 120", status.Uptime)
	}
}

func TestIsRunning_true(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/status", jsonHandler(200, apiResponse[StatusResponse]{
		OK:   true,
		Data: StatusResponse{Version: "1.0.0"},
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	if !client.IsRunning(context.Background()) {
		t.Error("IsRunning() = false, want true")
	}
}

func TestIsRunning_false(t *testing.T) {
	client := New("/tmp/nonexistent-socket-test.sock")
	if client.IsRunning(context.Background()) {
		t.Error("IsRunning() = true, want false")
	}
}

func TestGetSSHKey(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/vault/ssh/myhost/root", jsonHandler(200, apiResponse[VaultSSHData]{
		OK: true,
		Data: VaultSSHData{
			PrivateKey: "-----BEGIN OPENSSH PRIVATE KEY-----\nfake\n-----END OPENSSH PRIVATE KEY-----",
			PublicKey:  "ssh-ed25519 AAAA... myhost/root",
		},
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	data, err := client.GetSSHKey(context.Background(), "myhost", "root", "both")
	if err != nil {
		t.Fatalf("GetSSHKey() error: %v", err)
	}

	if data.PrivateKey == "" {
		t.Error("PrivateKey is empty")
	}
	if data.PublicKey == "" {
		t.Error("PublicKey is empty")
	}
}

func TestGetSSHKey_locked(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/vault/ssh/myhost/root", jsonHandler(423, apiResponse[any]{
		OK:    false,
		Error: "Agent is locked",
		Code:  "AGENT_LOCKED",
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	_, err := client.GetSSHKey(context.Background(), "myhost", "root", "priv")
	if err == nil {
		t.Fatal("GetSSHKey() expected error, got nil")
	}
	if !IsLocked(err) {
		t.Errorf("IsLocked() = false for error: %v", err)
	}
}

func TestGetSSHKey_notFound(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/vault/ssh/unknown/user", jsonHandler(404, apiResponse[any]{
		OK:    false,
		Error: "No SSH key found for unknown/user",
		Code:  "NOT_FOUND",
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	_, err := client.GetSSHKey(context.Background(), "unknown", "user", "priv")
	if err == nil {
		t.Fatal("GetSSHKey() expected error, got nil")
	}
	if !IsNotFound(err) {
		t.Errorf("IsNotFound() = false for error: %v", err)
	}
}

func TestGetPassword(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/vault/password/example.com/admin", jsonHandler(200, apiResponse[VaultPasswordData]{
		OK:   true,
		Data: VaultPasswordData{Password: "secret123"},
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	data, err := client.GetPassword(context.Background(), "example.com", "admin")
	if err != nil {
		t.Fatalf("GetPassword() error: %v", err)
	}
	if data.Password != "secret123" {
		t.Errorf("Password = %q, want %q", data.Password, "secret123")
	}
}

func TestCreateSSHEntry(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/vault/ssh", jsonHandler(201, apiResponse[VaultSSHData]{
		OK:   true,
		Data: VaultSSHData{PublicKey: "ssh-ed25519 AAAA... new/entry"},
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	data, err := client.CreateSSHEntry(context.Background(), "new", "entry")
	if err != nil {
		t.Fatalf("CreateSSHEntry() error: %v", err)
	}
	if data.PublicKey == "" {
		t.Error("PublicKey is empty")
	}
}

func TestGetAddress(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/wallet/address", jsonHandler(200, apiResponse[WalletAddressData]{
		OK:   true,
		Data: WalletAddressData{Address: "0x1234abcd", Chain: "evm"},
	}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)
	data, err := client.GetAddress(context.Background(), "evm")
	if err != nil {
		t.Fatalf("GetAddress() error: %v", err)
	}
	if data.Address != "0x1234abcd" {
		t.Errorf("Address = %q, want %q", data.Address, "0x1234abcd")
	}
}

func TestAgentNotRunning(t *testing.T) {
	client := New("/tmp/nonexistent-socket-for-testing.sock")
	_, err := client.Status(context.Background())
	if err == nil {
		t.Fatal("expected error, got nil")
	}
	if !IsNotRunning(err) {
		t.Errorf("IsNotRunning() = false for error: %v", err)
	}
}

func TestUnlockAndLock(t *testing.T) {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/unlock", jsonHandler(200, apiResponse[any]{OK: true}))
	mux.HandleFunc("/v1/lock", jsonHandler(200, apiResponse[any]{OK: true}))

	sock, cleanup := startMockAgent(t, mux)
	defer cleanup()

	client := New(sock)

	if err := client.Unlock(context.Background(), "password", 30); err != nil {
		t.Fatalf("Unlock() error: %v", err)
	}

	if err := client.Lock(context.Background()); err != nil {
		t.Fatalf("Lock() error: %v", err)
	}
}
@@ -1,57 +0,0 @@
package rwagent

import (
	"errors"
	"fmt"
)

// AgentError represents an error returned by the rootwallet agent API.
type AgentError struct {
	Code       string // e.g., "AGENT_LOCKED", "NOT_FOUND"
	Message    string
	StatusCode int
}

func (e *AgentError) Error() string {
	return fmt.Sprintf("rootwallet agent: %s (%s)", e.Message, e.Code)
}

// IsLocked returns true if the error indicates the agent is locked.
func IsLocked(err error) bool {
	var ae *AgentError
	if errors.As(err, &ae) {
		return ae.Code == "AGENT_LOCKED"
	}
	return false
}

// IsNotRunning returns true if the error indicates the agent is not reachable.
func IsNotRunning(err error) bool {
	var ae *AgentError
	if errors.As(err, &ae) {
		return ae.Code == "AGENT_NOT_RUNNING"
	}
	// Also check for connection errors
	return errors.Is(err, ErrAgentNotRunning)
}

// IsNotFound returns true if the vault entry was not found.
func IsNotFound(err error) bool {
	var ae *AgentError
	if errors.As(err, &ae) {
		return ae.Code == "NOT_FOUND"
	}
	return false
}

// IsApprovalDenied returns true if the user denied the app's access request.
func IsApprovalDenied(err error) bool {
	var ae *AgentError
	if errors.As(err, &ae) {
		return ae.Code == "APPROVAL_DENIED" || ae.Code == "PERMISSION_DENIED"
	}
	return false
}

// ErrAgentNotRunning is returned when the agent socket is not reachable.
var ErrAgentNotRunning = fmt.Errorf("rootwallet agent is not running — start with: rw agent start && rw agent unlock")
@@ -1,56 +0,0 @@
// Package rwagent provides a Go client for the RootWallet agent daemon.
//
// The agent is a persistent daemon that holds vault keys in memory and serves
// operations to authorized apps over a Unix socket HTTP API. This SDK replaces
// all subprocess `rw` calls with direct HTTP communication.
package rwagent

// StatusResponse from GET /v1/status.
type StatusResponse struct {
	Version       string `json:"version"`
	Locked        bool   `json:"locked"`
	Uptime        int    `json:"uptime"`
	PID           int    `json:"pid"`
	ConnectedApps int    `json:"connectedApps"`
}

// VaultSSHData from GET /v1/vault/ssh/:host/:user.
type VaultSSHData struct {
	PrivateKey string `json:"privateKey,omitempty"`
	PublicKey  string `json:"publicKey,omitempty"`
}

// VaultPasswordData from GET /v1/vault/password/:domain/:user.
type VaultPasswordData struct {
	Password string `json:"password"`
}

// WalletAddressData from GET /v1/wallet/address.
type WalletAddressData struct {
	Address string `json:"address"`
	Chain   string `json:"chain"`
}

// AppPermission represents an approved app in the permission database.
type AppPermission struct {
	BinaryHash   string                `json:"binaryHash"`
	BinaryPath   string                `json:"binaryPath"`
	Name         string                `json:"name"`
	FirstSeen    string                `json:"firstSeen"`
	LastUsed     string                `json:"lastUsed"`
	Capabilities []PermittedCapability `json:"capabilities"`
}

// PermittedCapability is a specific capability granted to an app.
type PermittedCapability struct {
	Capability string `json:"capability"`
	GrantedAt  string `json:"grantedAt"`
}

// apiResponse is the generic API response envelope.
type apiResponse[T any] struct {
	OK    bool   `json:"ok"`
	Data  T      `json:"data,omitempty"`
	Error string `json:"error,omitempty"`
	Code  string `json:"code,omitempty"`
}
@@ -1,98 +0,0 @@
// Package secrets provides application-level encryption for sensitive data stored in RQLite.
// Uses AES-256-GCM with HKDF key derivation from the cluster secret.
package secrets

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"io"
	"strings"

	"golang.org/x/crypto/hkdf"
)

// Prefix for encrypted values to distinguish from plaintext during migration.
const encryptedPrefix = "enc:"

// DeriveKey derives a 32-byte AES-256 key from the cluster secret using HKDF-SHA256.
// The purpose string provides domain separation (e.g., "turn-encryption").
func DeriveKey(clusterSecret, purpose string) ([]byte, error) {
	if clusterSecret == "" {
		return nil, fmt.Errorf("cluster secret is empty")
	}
	reader := hkdf.New(sha256.New, []byte(clusterSecret), nil, []byte(purpose))
	key := make([]byte, 32)
	if _, err := io.ReadFull(reader, key); err != nil {
		return nil, fmt.Errorf("HKDF key derivation failed: %w", err)
	}
	return key, nil
}

// Encrypt encrypts plaintext with AES-256-GCM using the given key.
// Returns a base64-encoded string prefixed with "enc:" for identification.
func Encrypt(plaintext string, key []byte) (string, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", fmt.Errorf("failed to create cipher: %w", err)
	}

	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", fmt.Errorf("failed to create GCM: %w", err)
	}

	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", fmt.Errorf("failed to generate nonce: %w", err)
	}

	// nonce is prepended to ciphertext
	ciphertext := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return encryptedPrefix + base64.StdEncoding.EncodeToString(ciphertext), nil
}

// Decrypt decrypts an "enc:"-prefixed ciphertext string with AES-256-GCM.
// If the input is not prefixed with "enc:", it is returned as-is (plaintext passthrough
// for backward compatibility during migration).
func Decrypt(ciphertext string, key []byte) (string, error) {
	if !strings.HasPrefix(ciphertext, encryptedPrefix) {
		return ciphertext, nil // plaintext passthrough
	}

	data, err := base64.StdEncoding.DecodeString(strings.TrimPrefix(ciphertext, encryptedPrefix))
	if err != nil {
		return "", fmt.Errorf("failed to decode ciphertext: %w", err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		return "", fmt.Errorf("failed to create cipher: %w", err)
	}

	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", fmt.Errorf("failed to create GCM: %w", err)
	}

	nonceSize := gcm.NonceSize()
	if len(data) < nonceSize {
		return "", fmt.Errorf("ciphertext too short")
	}

	nonce, sealed := data[:nonceSize], data[nonceSize:]
	plaintext, err := gcm.Open(nil, nonce, sealed, nil)
	if err != nil {
		return "", fmt.Errorf("decryption failed (wrong key or corrupted data): %w", err)
	}

	return string(plaintext), nil
}

// IsEncrypted returns true if the value has the "enc:" prefix.
func IsEncrypted(value string) bool {
	return strings.HasPrefix(value, encryptedPrefix)
}
@@ -1,82 +0,0 @@
// Package shamir implements Shamir's Secret Sharing over GF(2^8).
//
// Uses the AES irreducible polynomial x^8 + x^4 + x^3 + x + 1 (0x11B)
// with generator 3. Precomputed log/exp tables for O(1) field arithmetic.
//
// Cross-platform compatible with the Zig (orama-vault) and TypeScript
// (network-ts-sdk) implementations using identical field parameters.
package shamir

import "errors"

// ErrDivisionByZero is returned when dividing by zero in GF(2^8).
var ErrDivisionByZero = errors.New("shamir: division by zero in GF(2^8)")

// Irreducible polynomial: x^8 + x^4 + x^3 + x + 1.
const irreducible = 0x11B

// expTable[i] = generator^i mod polynomial, for i in 0..511.
// Extended to 512 entries so Mul can use (logA + logB) without modular reduction.
var expTable [512]byte

// logTable[a] = i where generator^i = a, for a in 1..255.
// logTable[0] is unused (log of zero is undefined).
var logTable [256]byte

func init() {
	x := uint16(1)
	for i := 0; i < 512; i++ {
		if i < 256 {
			expTable[i] = byte(x)
			logTable[byte(x)] = byte(i)
		} else {
			expTable[i] = expTable[i-255]
		}

		if i < 255 {
			// Multiply by generator (3): x*3 = x*2 XOR x
			x2 := x << 1
			x3 := x2 ^ x
			if x3&0x100 != 0 {
				x3 ^= irreducible
			}
			x = x3
		}
	}
}

// Add returns a XOR b (addition in GF(2^8)).
func Add(a, b byte) byte {
	return a ^ b
}

// Mul returns a * b in GF(2^8) via log/exp tables.
func Mul(a, b byte) byte {
	if a == 0 || b == 0 {
		return 0
	}
	logSum := uint16(logTable[a]) + uint16(logTable[b])
	return expTable[logSum]
}

// Inv returns the multiplicative inverse of a in GF(2^8).
// Returns ErrDivisionByZero if a == 0.
func Inv(a byte) (byte, error) {
	if a == 0 {
		return 0, ErrDivisionByZero
	}
	return expTable[255-uint16(logTable[a])], nil
}

// Div returns a / b in GF(2^8).
// Returns ErrDivisionByZero if b == 0.
func Div(a, b byte) (byte, error) {
	if b == 0 {
		return 0, ErrDivisionByZero
	}
	if a == 0 {
		return 0, nil
	}
	logDiff := uint16(logTable[a]) + 255 - uint16(logTable[b])
	return expTable[logDiff], nil
}
@@ -1,150 +0,0 @@
package shamir

import (
	"crypto/rand"
	"errors"
	"fmt"
)

var (
	ErrThresholdTooSmall   = errors.New("shamir: threshold K must be at least 2")
	ErrShareCountTooSmall  = errors.New("shamir: share count N must be >= threshold K")
	ErrTooManyShares       = errors.New("shamir: maximum 255 shares (GF(2^8) limit)")
	ErrEmptySecret         = errors.New("shamir: secret must not be empty")
	ErrNotEnoughShares     = errors.New("shamir: need at least 2 shares to reconstruct")
	ErrMismatchedShareLen  = errors.New("shamir: all shares must have the same data length")
	ErrZeroShareIndex      = errors.New("shamir: share index must not be 0")
	ErrDuplicateShareIndex = errors.New("shamir: duplicate share indices")
)

// Share represents a single Shamir share.
type Share struct {
	X byte   // Evaluation point (1..255, never 0)
	Y []byte // Share data (same length as original secret)
}

// Split divides secret into n shares with threshold k.
// Any k shares can reconstruct the secret; k-1 reveal nothing.
func Split(secret []byte, n, k int) ([]Share, error) {
	if k < 2 {
		return nil, ErrThresholdTooSmall
	}
	if n < k {
		return nil, ErrShareCountTooSmall
	}
	if n > 255 {
		return nil, ErrTooManyShares
	}
	if len(secret) == 0 {
		return nil, ErrEmptySecret
	}

	shares := make([]Share, n)
	for i := range shares {
		shares[i] = Share{
			X: byte(i + 1),
			Y: make([]byte, len(secret)),
		}
	}

	// Temporary buffer for polynomial coefficients.
	coeffs := make([]byte, k)
	defer func() {
		for i := range coeffs {
			coeffs[i] = 0
		}
	}()

	for byteIdx := 0; byteIdx < len(secret); byteIdx++ {
		coeffs[0] = secret[byteIdx]
		// Fill degrees 1..k-1 with random bytes.
		if _, err := rand.Read(coeffs[1:]); err != nil {
			return nil, fmt.Errorf("shamir: random generation failed: %w", err)
		}
		for i := range shares {
			shares[i].Y[byteIdx] = evaluatePolynomial(coeffs, shares[i].X)
		}
	}

	return shares, nil
}

// Combine reconstructs the secret from k or more shares via Lagrange interpolation.
func Combine(shares []Share) ([]byte, error) {
	if len(shares) < 2 {
		return nil, ErrNotEnoughShares
	}

	secretLen := len(shares[0].Y)
	seen := make(map[byte]bool, len(shares))
	for _, s := range shares {
		if s.X == 0 {
			return nil, ErrZeroShareIndex
		}
		if len(s.Y) != secretLen {
			return nil, ErrMismatchedShareLen
		}
		if seen[s.X] {
			return nil, ErrDuplicateShareIndex
		}
		seen[s.X] = true
	}

	result := make([]byte, secretLen)
	for byteIdx := 0; byteIdx < secretLen; byteIdx++ {
		var value byte
		for i, si := range shares {
			// Lagrange basis polynomial L_i evaluated at 0:
			// L_i(0) = product over j!=i of (0 - x_j)/(x_i - x_j)
			//        = product over j!=i of x_j / (x_i XOR x_j)
			var basis byte = 1
			for j, sj := range shares {
				if i == j {
					continue
				}
				num := sj.X
				den := Add(si.X, sj.X) // x_i - x_j = x_i XOR x_j in GF(2^8)
				d, err := Div(num, den)
				if err != nil {
					return nil, err
				}
				basis = Mul(basis, d)
			}
			value = Add(value, Mul(si.Y[byteIdx], basis))
		}
		result[byteIdx] = value
	}

	return result, nil
}

// AdaptiveThreshold returns max(3, floor(n/3)).
// This is the read quorum: minimum shares needed to reconstruct.
func AdaptiveThreshold(n int) int {
	t := n / 3
	if t < 3 {
		return 3
	}
	return t
}

// WriteQuorum returns ceil(2n/3).
// This is the write quorum: minimum ACKs needed for a successful push.
func WriteQuorum(n int) int {
	if n == 0 {
		return 0
	}
	if n <= 2 {
		return n
	}
	return (2*n + 2) / 3
}

// evaluatePolynomial evaluates p(x) = coeffs[0] + coeffs[1]*x + ... using Horner's method.
func evaluatePolynomial(coeffs []byte, x byte) byte {
	var result byte
	for i := len(coeffs) - 1; i >= 0; i-- {
		result = Add(Mul(result, x), coeffs[i])
	}
	return result
}
@@ -1,501 +0,0 @@
package shamir

import (
	"testing"
)

// ── GF(2^8) Field Tests ────────────────────────────────────────────────────

func TestExpTable_Cycle(t *testing.T) {
	// g^0 = 1, g^255 = 1 (cyclic group of order 255)
	if expTable[0] != 1 {
		t.Errorf("exp[0] = %d, want 1", expTable[0])
	}
	if expTable[255] != 1 {
		t.Errorf("exp[255] = %d, want 1", expTable[255])
	}
}

func TestExpTable_AllNonzeroAppear(t *testing.T) {
	var seen [256]bool
	for i := 0; i < 255; i++ {
		v := expTable[i]
		if seen[v] {
			t.Fatalf("duplicate value %d at index %d", v, i)
		}
		seen[v] = true
	}
	for v := 1; v < 256; v++ {
		if !seen[v] {
			t.Errorf("value %d not seen in exp[0..255]", v)
		}
	}
	if seen[0] {
		t.Error("zero should not appear in exp[0..254]")
	}
}

// Cross-platform test vectors from orama-vault/src/sss/test_cross_platform.zig
func TestExpTable_CrossPlatform(t *testing.T) {
	vectors := [][2]int{
		{0, 1}, {10, 114}, {20, 216}, {30, 102},
		{40, 106}, {50, 4}, {60, 211}, {70, 77},
		{80, 131}, {90, 179}, {100, 16}, {110, 97},
		{120, 47}, {130, 58}, {140, 250}, {150, 64},
		{160, 159}, {170, 188}, {180, 232}, {190, 197},
		{200, 27}, {210, 74}, {220, 198}, {230, 141},
		{240, 57}, {250, 108}, {254, 246}, {255, 1},
	}
	for _, v := range vectors {
		if got := expTable[v[0]]; got != byte(v[1]) {
			t.Errorf("exp[%d] = %d, want %d", v[0], got, v[1])
		}
	}
}

func TestMul_CrossPlatform(t *testing.T) {
	vectors := [][3]byte{
		{1, 1, 1}, {1, 2, 2}, {1, 3, 3},
		{1, 42, 42}, {1, 127, 127}, {1, 170, 170}, {1, 255, 255},
		{2, 1, 2}, {2, 2, 4}, {2, 3, 6},
		{2, 42, 84}, {2, 127, 254}, {2, 170, 79}, {2, 255, 229},
		{3, 1, 3}, {3, 2, 6}, {3, 3, 5},
		{3, 42, 126}, {3, 127, 129}, {3, 170, 229}, {3, 255, 26},
		{42, 1, 42}, {42, 2, 84}, {42, 3, 126},
		{42, 42, 40}, {42, 127, 82}, {42, 170, 244}, {42, 255, 142},
		{127, 1, 127}, {127, 2, 254}, {127, 3, 129},
		{127, 42, 82}, {127, 127, 137}, {127, 170, 173}, {127, 255, 118},
		{170, 1, 170}, {170, 2, 79}, {170, 3, 229},
		{170, 42, 244}, {170, 127, 173}, {170, 170, 178}, {170, 255, 235},
		{255, 1, 255}, {255, 2, 229}, {255, 3, 26},
		{255, 42, 142}, {255, 127, 118}, {255, 170, 235}, {255, 255, 19},
	}
	for _, v := range vectors {
		if got := Mul(v[0], v[1]); got != v[2] {
			t.Errorf("Mul(%d, %d) = %d, want %d", v[0], v[1], got, v[2])
		}
	}
}

func TestMul_Zero(t *testing.T) {
	for a := 0; a < 256; a++ {
		if Mul(byte(a), 0) != 0 {
			t.Errorf("Mul(%d, 0) != 0", a)
		}
		if Mul(0, byte(a)) != 0 {
			t.Errorf("Mul(0, %d) != 0", a)
		}
	}
}

func TestMul_Identity(t *testing.T) {
	for a := 0; a < 256; a++ {
		if Mul(byte(a), 1) != byte(a) {
			t.Errorf("Mul(%d, 1) = %d", a, Mul(byte(a), 1))
		}
	}
}

func TestMul_Commutative(t *testing.T) {
	for a := 1; a < 256; a += 7 {
		for b := 1; b < 256; b += 11 {
			ab := Mul(byte(a), byte(b))
			ba := Mul(byte(b), byte(a))
			if ab != ba {
				t.Errorf("Mul(%d,%d)=%d != Mul(%d,%d)=%d", a, b, ab, b, a, ba)
			}
		}
	}
}

func TestInv_CrossPlatform(t *testing.T) {
	vectors := [][2]byte{
		{1, 1}, {2, 141}, {3, 246}, {5, 82},
		{7, 209}, {16, 116}, {42, 152}, {127, 130},
		{128, 131}, {170, 18}, {200, 169}, {255, 28},
	}
	for _, v := range vectors {
		got, err := Inv(v[0])
		if err != nil {
			t.Errorf("Inv(%d) returned error: %v", v[0], err)
			continue
		}
		if got != v[1] {
			t.Errorf("Inv(%d) = %d, want %d", v[0], got, v[1])
		}
	}
}

func TestInv_SelfInverse(t *testing.T) {
	for a := 1; a < 256; a++ {
		inv1, _ := Inv(byte(a))
		inv2, _ := Inv(inv1)
		if inv2 != byte(a) {
			t.Errorf("Inv(Inv(%d)) = %d, want %d", a, inv2, a)
		}
	}
}

func TestInv_Product(t *testing.T) {
	for a := 1; a < 256; a++ {
		inv1, _ := Inv(byte(a))
		if Mul(byte(a), inv1) != 1 {
			t.Errorf("Mul(%d, Inv(%d)) != 1", a, a)
		}
	}
}

func TestInv_Zero(t *testing.T) {
	_, err := Inv(0)
	if err != ErrDivisionByZero {
		t.Errorf("Inv(0) should return ErrDivisionByZero, got %v", err)
	}
}

func TestDiv_CrossPlatform(t *testing.T) {
	vectors := [][3]byte{
		{1, 1, 1}, {1, 2, 141}, {1, 3, 246},
		{1, 42, 152}, {1, 127, 130}, {1, 170, 18}, {1, 255, 28},
		{2, 1, 2}, {2, 2, 1}, {2, 3, 247},
		{3, 1, 3}, {3, 2, 140}, {3, 3, 1},
		{42, 1, 42}, {42, 2, 21}, {42, 42, 1},
		{127, 1, 127}, {127, 127, 1},
		{170, 1, 170}, {170, 170, 1},
		{255, 1, 255}, {255, 255, 1},
	}
	for _, v := range vectors {
		got, err := Div(v[0], v[1])
		if err != nil {
			t.Errorf("Div(%d, %d) returned error: %v", v[0], v[1], err)
			continue
		}
		if got != v[2] {
			t.Errorf("Div(%d, %d) = %d, want %d", v[0], v[1], got, v[2])
		}
	}
}

func TestDiv_ByZero(t *testing.T) {
	_, err := Div(42, 0)
	if err != ErrDivisionByZero {
		t.Errorf("Div(42, 0) should return ErrDivisionByZero, got %v", err)
	}
}

// ── Polynomial evaluation ──────────────────────────────────────────────────

func TestEvaluatePolynomial_CrossPlatform(t *testing.T) {
	// p(x) = 42 + 5x + 7x^2
	coeffs0 := []byte{42, 5, 7}
	vectors0 := [][2]byte{
		{1, 40}, {2, 60}, {3, 62}, {4, 78},
		{5, 76}, {10, 207}, {100, 214}, {255, 125},
	}
	for _, v := range vectors0 {
		if got := evaluatePolynomial(coeffs0, v[0]); got != v[1] {
			t.Errorf("p(%d) = %d, want %d [coeffs: 42,5,7]", v[0], got, v[1])
		}
	}

	// p(x) = 0 + 0xAB*x + 0xCD*x^2
	coeffs1 := []byte{0, 0xAB, 0xCD}
	vectors1 := [][2]byte{
		{1, 102}, {3, 50}, {5, 152}, {7, 204}, {200, 96},
	}
	for _, v := range vectors1 {
		if got := evaluatePolynomial(coeffs1, v[0]); got != v[1] {
			t.Errorf("p(%d) = %d, want %d [coeffs: 0,AB,CD]", v[0], got, v[1])
		}
	}

	// p(x) = 0xFF (constant)
	coeffs2 := []byte{0xFF}
	for _, x := range []byte{1, 2, 255} {
		if got := evaluatePolynomial(coeffs2, x); got != 0xFF {
			t.Errorf("constant p(%d) = %d, want 255", x, got)
		}
	}

	// p(x) = 128 + 64x + 32x^2 + 16x^3
	coeffs3 := []byte{128, 64, 32, 16}
	vectors3 := [][2]byte{
		{1, 240}, {2, 0}, {3, 16}, {4, 193}, {5, 234},
	}
	for _, v := range vectors3 {
		if got := evaluatePolynomial(coeffs3, v[0]); got != v[1] {
			t.Errorf("p(%d) = %d, want %d [coeffs: 128,64,32,16]", v[0], got, v[1])
		}
	}
}

// ── Lagrange combine (cross-platform) ─────────────────────────────────────

func TestCombine_CrossPlatform_SingleByte(t *testing.T) {
	// p(x) = 42 + 5x + 7x^2, secret = 42
	// Shares: (1,40) (2,60) (3,62) (4,78) (5,76)
	allShares := []Share{
		{X: 1, Y: []byte{40}},
		{X: 2, Y: []byte{60}},
		{X: 3, Y: []byte{62}},
		{X: 4, Y: []byte{78}},
		{X: 5, Y: []byte{76}},
	}

	subsets := [][]int{
		{0, 1, 2}, // {1,2,3}
		{0, 2, 4}, // {1,3,5}
		{1, 3, 4}, // {2,4,5}
		{2, 3, 4}, // {3,4,5}
	}

	for _, subset := range subsets {
		shares := make([]Share, len(subset))
		for i, idx := range subset {
			shares[i] = allShares[idx]
		}
		result, err := Combine(shares)
		if err != nil {
			t.Fatalf("Combine failed for subset %v: %v", subset, err)
		}
		if result[0] != 42 {
			t.Errorf("Combine(subset %v) = %d, want 42", subset, result[0])
		}
	}
}

func TestCombine_CrossPlatform_MultiByte(t *testing.T) {
	// 2-byte secret [42, 0]
	// byte0: 42 + 5x + 7x^2 → shares at x=1,3,5: 40, 62, 76
	// byte1: 0 + 0xAB*x + 0xCD*x^2 → shares at x=1,3,5: 102, 50, 152
	shares := []Share{
		{X: 1, Y: []byte{40, 102}},
		{X: 3, Y: []byte{62, 50}},
		{X: 5, Y: []byte{76, 152}},
	}
	result, err := Combine(shares)
	if err != nil {
		t.Fatalf("Combine failed: %v", err)
	}
	if result[0] != 42 || result[1] != 0 {
		t.Errorf("Combine = %v, want [42, 0]", result)
	}
}

// ── Split/Combine round-trip ──────────────────────────────────────────────

func TestSplitCombine_RoundTrip_2of3(t *testing.T) {
	secret := []byte("hello world")
	shares, err := Split(secret, 3, 2)
	if err != nil {
		t.Fatalf("Split: %v", err)
	}
	if len(shares) != 3 {
		t.Fatalf("got %d shares, want 3", len(shares))
	}

	// Any 2 shares should reconstruct
	for i := 0; i < 3; i++ {
		for j := i + 1; j < 3; j++ {
			result, err := Combine([]Share{shares[i], shares[j]})
			if err != nil {
				t.Fatalf("Combine(%d,%d): %v", i, j, err)
			}
			if string(result) != string(secret) {
				t.Errorf("Combine(%d,%d) = %q, want %q", i, j, result, secret)
			}
		}
	}
}

func TestSplitCombine_RoundTrip_3of5(t *testing.T) {
	secret := []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
	shares, err := Split(secret, 5, 3)
	if err != nil {
		t.Fatalf("Split: %v", err)
	}

	// All C(5,3)=10 subsets should reconstruct
	count := 0
	for i := 0; i < 5; i++ {
		for j := i + 1; j < 5; j++ {
			for k := j + 1; k < 5; k++ {
				result, err := Combine([]Share{shares[i], shares[j], shares[k]})
				if err != nil {
					t.Fatalf("Combine(%d,%d,%d): %v", i, j, k, err)
				}
				for idx := range secret {
					if result[idx] != secret[idx] {
						t.Errorf("Combine(%d,%d,%d)[%d] = %d, want %d", i, j, k, idx, result[idx], secret[idx])
					}
				}
				count++
			}
		}
	}
	if count != 10 {
		t.Errorf("tested %d subsets, want 10", count)
	}
}

func TestSplitCombine_RoundTrip_LargeSecret(t *testing.T) {
	secret := make([]byte, 256)
	for i := range secret {
		secret[i] = byte(i)
	}
	shares, err := Split(secret, 10, 5)
	if err != nil {
		t.Fatalf("Split: %v", err)
	}

	// Use first 5 shares
	result, err := Combine(shares[:5])
	if err != nil {
		t.Fatalf("Combine: %v", err)
	}
	for i := range secret {
		if result[i] != secret[i] {
			t.Errorf("result[%d] = %d, want %d", i, result[i], secret[i])
			break
		}
	}
}

func TestSplitCombine_AllZeros(t *testing.T) {
	secret := make([]byte, 10)
	shares, err := Split(secret, 5, 3)
	if err != nil {
		t.Fatalf("Split: %v", err)
	}
	result, err := Combine(shares[:3])
	if err != nil {
		t.Fatalf("Combine: %v", err)
	}
	for i, b := range result {
		if b != 0 {
			t.Errorf("result[%d] = %d, want 0", i, b)
		}
	}
}

func TestSplitCombine_AllOnes(t *testing.T) {
	secret := make([]byte, 10)
	for i := range secret {
		secret[i] = 0xFF
	}
	shares, err := Split(secret, 5, 3)
	if err != nil {
		t.Fatalf("Split: %v", err)
	}
	result, err := Combine(shares[:3])
	if err != nil {
		t.Fatalf("Combine: %v", err)
	}
	for i, b := range result {
		if b != 0xFF {
			t.Errorf("result[%d] = %d, want 255", i, b)
		}
	}
}

// ── Share indices ─────────────────────────────────────────────────────────

func TestSplit_ShareIndices(t *testing.T) {
	shares, err := Split([]byte{42}, 5, 3)
	if err != nil {
		t.Fatalf("Split: %v", err)
	}
	for i, s := range shares {
		if s.X != byte(i+1) {
			t.Errorf("shares[%d].X = %d, want %d", i, s.X, i+1)
		}
	}
}

// ── Error cases ───────────────────────────────────────────────────────────

func TestSplit_Errors(t *testing.T) {
	tests := []struct {
		name   string
		secret []byte
		n, k   int
		want   error
	}{
		{"k < 2", []byte{1}, 3, 1, ErrThresholdTooSmall},
		{"n < k", []byte{1}, 2, 3, ErrShareCountTooSmall},
		{"n > 255", []byte{1}, 256, 3, ErrTooManyShares},
		{"empty secret", []byte{}, 3, 2, ErrEmptySecret},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			_, err := Split(tt.secret, tt.n, tt.k)
			if err != tt.want {
				t.Errorf("Split() error = %v, want %v", err, tt.want)
			}
		})
	}
}

func TestCombine_Errors(t *testing.T) {
	t.Run("not enough shares", func(t *testing.T) {
		_, err := Combine([]Share{{X: 1, Y: []byte{1}}})
		if err != ErrNotEnoughShares {
			t.Errorf("got %v, want ErrNotEnoughShares", err)
		}
	})

	t.Run("zero index", func(t *testing.T) {
		_, err := Combine([]Share{
			{X: 0, Y: []byte{1}},
			{X: 1, Y: []byte{2}},
		})
		if err != ErrZeroShareIndex {
			t.Errorf("got %v, want ErrZeroShareIndex", err)
		}
	})

	t.Run("mismatched lengths", func(t *testing.T) {
		_, err := Combine([]Share{
			{X: 1, Y: []byte{1, 2}},
			{X: 2, Y: []byte{3}},
		})
		if err != ErrMismatchedShareLen {
			t.Errorf("got %v, want ErrMismatchedShareLen", err)
		}
	})

	t.Run("duplicate indices", func(t *testing.T) {
		_, err := Combine([]Share{
			{X: 1, Y: []byte{1}},
			{X: 1, Y: []byte{2}},
		})
		if err != ErrDuplicateShareIndex {
			t.Errorf("got %v, want ErrDuplicateShareIndex", err)
		}
	})
}

// ── Threshold / Quorum ────────────────────────────────────────────────────

func TestAdaptiveThreshold(t *testing.T) {
	tests := [][2]int{
		{1, 3}, {2, 3}, {3, 3}, {5, 3}, {8, 3}, {9, 3},
		{10, 3}, {12, 4}, {15, 5}, {30, 10}, {100, 33},
	}
	for _, tt := range tests {
		if got := AdaptiveThreshold(tt[0]); got != tt[1] {
			t.Errorf("AdaptiveThreshold(%d) = %d, want %d", tt[0], got, tt[1])
		}
	}
}

func TestWriteQuorum(t *testing.T) {
	tests := [][2]int{
		{0, 0}, {1, 1}, {2, 2}, {3, 2}, {4, 3}, {5, 4},
		{6, 4}, {10, 7}, {14, 10}, {100, 67},
	}
	for _, tt := range tests {
		if got := WriteQuorum(tt[0]); got != tt[1] {
			t.Errorf("WriteQuorum(%d) = %d, want %d", tt[0], got, tt[1])
		}
	}
}
@@ -1,95 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# Orama CLI installer
# Builds the CLI and adds `orama` to your PATH.
# Usage: ./scripts/install.sh [--shell fish|zsh|bash]

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
BIN_DIR="$HOME/.local/bin"
BIN_PATH="$BIN_DIR/orama"

# --- Parse args ---
SHELL_NAME=""
while [[ $# -gt 0 ]]; do
  case "$1" in
    --shell) SHELL_NAME="$2"; shift 2 ;;
    -h|--help)
      echo "Usage: ./scripts/install.sh [--shell fish|zsh|bash]"
      echo ""
      echo "Builds the Orama CLI and installs 'orama' to ~/.local/bin."
      echo "If --shell is not provided, auto-detects from \$SHELL."
      exit 0 ;;
    *) echo "Unknown option: $1"; exit 1 ;;
  esac
done

# Auto-detect shell
if [[ -z "$SHELL_NAME" ]]; then
  case "$SHELL" in
    */fish) SHELL_NAME="fish" ;;
    */zsh) SHELL_NAME="zsh" ;;
    */bash) SHELL_NAME="bash" ;;
    *) SHELL_NAME="unknown" ;;
  esac
fi

echo "==> Shell: $SHELL_NAME"

# --- Build ---
echo "==> Building Orama CLI..."
(cd "$PROJECT_DIR" && make build)

# --- Install binary ---
mkdir -p "$BIN_DIR"
cp -f "$PROJECT_DIR/bin/orama" "$BIN_PATH"
chmod +x "$BIN_PATH"
echo "==> Installed $BIN_PATH"

# --- Ensure PATH ---
add_to_path() {
  local rc_file="$1"
  local line="$2"

  if [[ -f "$rc_file" ]] && grep -qF "$line" "$rc_file"; then
    echo "==> PATH already configured in $rc_file"
  else
    echo "" >> "$rc_file"
    echo "$line" >> "$rc_file"
    echo "==> Added PATH to $rc_file"
  fi
}

case "$SHELL_NAME" in
  fish)
    FISH_CONFIG="$HOME/.config/fish/config.fish"
    mkdir -p "$(dirname "$FISH_CONFIG")"
    add_to_path "$FISH_CONFIG" "fish_add_path $BIN_DIR"
    ;;
  zsh)
    add_to_path "$HOME/.zshrc" "export PATH=\"$BIN_DIR:\$PATH\""
    ;;
  bash)
    add_to_path "$HOME/.bashrc" "export PATH=\"$BIN_DIR:\$PATH\""
    ;;
  *)
    echo "==> Unknown shell. Add this to your shell config manually:"
    echo "    export PATH=\"$BIN_DIR:\$PATH\""
    ;;
esac

# --- Verify ---
VERSION=$("$BIN_PATH" version 2>/dev/null || echo "unknown")
echo ""
echo "==> Orama CLI ${VERSION} installed!"
echo "    Run: orama --help"
echo ""
if [[ "$SHELL_NAME" != "unknown" ]]; then
  echo "    Restart your terminal or run:"
  case "$SHELL_NAME" in
    fish) echo "      source ~/.config/fish/config.fish" ;;
    zsh) echo "      source ~/.zshrc" ;;
    bash) echo "      source ~/.bashrc" ;;
  esac
fi
@@ -1,42 +0,0 @@
# Orama Network node topology
# Format: environment|user@host|role
# Auth: wallet-derived SSH keys (rw vault ssh)
#
# environment: devnet, testnet
# role: node, nameserver-ns1, nameserver-ns2, nameserver-ns3

# --- Devnet nameservers ---
devnet|ubuntu@57.129.7.232|nameserver-ns1
devnet|ubuntu@57.131.41.160|nameserver-ns2
devnet|ubuntu@51.38.128.56|nameserver-ns3

# --- Devnet nodes ---
devnet|ubuntu@144.217.162.62|node
devnet|ubuntu@51.83.128.181|node
devnet|ubuntu@144.217.160.15|node
devnet|root@46.250.241.133|node
devnet|root@109.123.229.231|node
devnet|ubuntu@144.217.162.143|node
devnet|ubuntu@144.217.163.114|node
devnet|root@109.123.239.61|node
devnet|root@217.76.56.2|node
devnet|ubuntu@198.244.150.237|node
devnet|root@154.38.187.158|node

# --- Testnet nameservers ---
testnet|ubuntu@51.195.109.238|nameserver-ns1
testnet|ubuntu@57.131.41.159|nameserver-ns2
testnet|ubuntu@51.38.130.69|nameserver-ns3

# --- Testnet nodes ---
testnet|root@178.212.35.184|node
testnet|root@62.72.44.87|node
testnet|ubuntu@51.178.84.172|node
testnet|ubuntu@135.125.175.236|node
testnet|ubuntu@57.128.223.149|node
testnet|root@38.242.221.178|node
testnet|root@194.61.28.7|node
testnet|root@83.171.248.66|node
testnet|ubuntu@141.227.165.168|node
testnet|ubuntu@141.227.165.154|node
testnet|ubuntu@141.227.156.51|node
@ -1,27 +0,0 @@
# Remote node configuration
# Format: environment|user@host|role
# environment: devnet, testnet
# role: node, nameserver-ns1, nameserver-ns2, nameserver-ns3
#
# SSH keys are resolved from rootwallet (rw vault ssh get <host>/<user> --priv).
# Ensure wallet entries exist: rw vault ssh add <host>/<user>
#
# Copy this file to remote-nodes.conf and fill in your node details.

# --- Devnet nameservers ---
devnet|root@1.2.3.4|nameserver-ns1
devnet|ubuntu@1.2.3.5|nameserver-ns2
devnet|root@1.2.3.6|nameserver-ns3

# --- Devnet nodes ---
devnet|ubuntu@1.2.3.7|node
devnet|ubuntu@1.2.3.8|node

# --- Testnet nameservers ---
testnet|ubuntu@2.3.4.5|nameserver-ns1
testnet|ubuntu@2.3.4.6|nameserver-ns2
testnet|ubuntu@2.3.4.7|nameserver-ns3

# --- Testnet nodes ---
testnet|root@2.3.4.8|node
testnet|ubuntu@2.3.4.9|node
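Records in this `environment|user@host|role` format can be consumed with a plain `read` loop. A minimal sketch (the function name is mine, not part of the project's tooling) that prints the SSH targets for one environment and role:

```shell
# Parse environment|user@host|role records, skipping comments and blank
# lines, and print the SSH targets matching the requested env and role.
list_targets() {
    conf_file="$1"; want_env="$2"; want_role="$3"
    while IFS='|' read -r env target role; do
        # Skip comments and empty lines
        case "$env" in ''|'#'*) continue ;; esac
        if [ "$env" = "$want_env" ] && [ "$role" = "$want_role" ]; then
            printf '%s\n' "$target"
        fi
    done < "$conf_file"
}
```

For example, `list_targets remote-nodes.conf devnet node` would print `ubuntu@1.2.3.7` and `ubuntu@1.2.3.8` for the example file above.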
core/debian/control → debian/control (vendored)
core/debian/postinst → debian/postinst (vendored)
@ -357,36 +357,11 @@ Function Invocation:

All inter-node communication is encrypted via a WireGuard VPN mesh:

- **WireGuard IPs:** Each node gets a private IP (10.0.0.x/24) used for all cluster traffic
- **UFW Firewall:** Only public ports are exposed: 22 (SSH), 53 (DNS, nameservers only), 80/443 (HTTP/HTTPS), 51820 (WireGuard UDP)
- **IPv6 disabled:** System-wide via sysctl to prevent bypass of IPv4 firewall rules
- **Internal services** (RQLite 5001/7001, IPFS 4001/4501, Olric 3320/3322, Gateway 6001) are only accessible via WireGuard or localhost
- **Invite tokens:** Single-use, time-limited tokens for secure node joining. No shared secrets on the CLI
- **Join flow:** New nodes authenticate via HTTPS (443) with TOFU certificate pinning, establish a WireGuard tunnel, then join all services over the encrypted mesh
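The firewall posture described above can be sketched as plain UFW commands. This is an illustrative fragment for a nameserver host under the stated port list, not the project's actual provisioning code:

```shell
# Default-deny inbound, then open only the public ports listed above.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp        # SSH
ufw allow 53            # DNS (nameservers only)
ufw allow 80/tcp        # HTTP
ufw allow 443/tcp       # HTTPS
ufw allow 51820/udp     # WireGuard
# Internal services (RQLite, IPFS, Olric, Gateway) get no allow rules:
# they are reachable only over the WireGuard interface or localhost.
ufw --force enable
```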
### Service Authentication

- **RQLite:** HTTP basic auth on all queries/executions; credentials generated at genesis and distributed via the join response
- **Olric:** Memberlist gossip encrypted with a shared 32-byte key
- **IPFS Cluster:** TrustedPeers restricted to known cluster peer IDs (not `*`)
- **Internal endpoints:** `/v1/internal/wg/peers` and `/v1/internal/wg/peer/remove` require the cluster secret
- **Vault:** V1 push/pull endpoints require session token authentication when a guardian is configured
- **WebSockets:** Origin header validated against the node's configured domain

### Token & Key Security

- **Refresh tokens:** Stored as SHA-256 hashes (never plaintext)
- **API keys:** Stored as HMAC-SHA256 hashes with a server-side secret
- **TURN secrets:** Encrypted at rest with AES-256-GCM (key derived from the cluster secret)
- **Binary signing:** Build archives signed with a rootwallet EVM signature, verified on install
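The two hash forms above can be reproduced with standard tooling. A sketch using `sha256sum` and `openssl` (the function names and secret are made up for illustration):

```shell
# Refresh tokens: stored as the plain SHA-256 of the token value.
hash_refresh_token() {
    printf '%s' "$1" | sha256sum | awk '{print $1}'
}

# API keys: HMAC-SHA256 keyed with a server-side secret, so a leaked
# database alone is not enough to verify or forge keys.
hash_api_key() {
    key="$1"; secret="$2"
    printf '%s' "$key" | openssl dgst -sha256 -hmac "$secret" -r | awk '{print $1}'
}
```

Both are deterministic 64-hex-character digests, but the HMAC changes entirely if the server-side secret changes.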
### Process Isolation

- **Dedicated user:** All services run as the `orama` user (not root)
- **systemd hardening:** `ProtectSystem=strict`, `NoNewPrivileges=yes`, `PrivateDevices=yes`, etc.
- **Capabilities:** Caddy and CoreDNS get `CAP_NET_BIND_SERVICE` for privileged ports

See [SECURITY.md](SECURITY.md) for the full security hardening reference.

### TLS/HTTPS
@ -529,31 +504,6 @@ WebRTC uses a separate port allocation system from core namespace services:

See [docs/WEBRTC.md](WEBRTC.md) for full details including client integration, API reference, and debugging.

## OramaOS

For mainnet, devnet, and testnet environments, nodes run **OramaOS**, a custom minimal Linux image built with Buildroot.

**Key properties:**
- No SSH and no shell: operators cannot access the filesystem
- LUKS full-disk encryption with Shamir key distribution across peers
- Read-only rootfs (SquashFS + dm-verity)
- A/B partition updates with cryptographic signature verification
- Service sandboxing via Linux namespaces + seccomp
- A single root process: the **orama-agent**

**The orama-agent manages:**
- Boot sequence and LUKS key reconstruction
- WireGuard tunnel setup
- Service lifecycle in sandboxed namespaces
- Command reception from the Gateway over WireGuard (port 9998)
- OS updates (download, verify, A/B swap, and reboot with rollback)

**Node enrollment:** OramaOS nodes join via `orama node enroll` instead of `orama node install`. The enrollment flow uses a registration code, an invite token, and wallet verification.

See [ORAMAOS_DEPLOYMENT.md](ORAMAOS_DEPLOYMENT.md) for the full deployment guide.

Sandbox clusters remain on Ubuntu for development convenience.

## Future Enhancements

1. **GraphQL Support** - GraphQL gateway alongside REST
||||||
Some files were not shown because too many files have changed in this diff Show More
Loading…
x
Reference in New Issue
Block a user