Compare commits

...

20 Commits

Author SHA1 Message Date
anonpenguin
ade6241357
Merge pull request #78 from DeBrosOfficial/nightly
Nightly version 0.9
2026-01-20 10:19:01 +02:00
anonpenguin
d3d1bb98ba
Merge pull request #77 from DeBrosOfficial/big-cleanup
Big cleanup
2026-01-20 10:13:50 +02:00
anonpenguin23
ccee66d525 Merge branch 'big-cleanup' of github-debros:DeBrosOfficial/network into big-cleanup 2026-01-20 10:13:21 +02:00
anonpenguin23
acc38d584a Fixed issue on wallet handler 2026-01-20 10:12:33 +02:00
anonpenguin
c20f6e9a25
Merge branch 'main' into big-cleanup 2026-01-20 10:06:55 +02:00
anonpenguin23
b0bc0a232e Refactored the whole codebase to be much cleaner 2026-01-20 10:03:55 +02:00
anonpenguin
86f73a1d8e
Merge pull request #76 from DeBrosOfficial/0.80.0
0.80.0
2026-01-05 20:00:41 +02:00
anonpenguin23
8c82124e05 Updated cursor rule 2026-01-05 20:00:20 +02:00
anonpenguin23
6f4f55f669 feat: disable debug logging in Rqlite MCP server to reduce disk writes
- Commented out debug logging statements in the Rqlite MCP server to prevent excessive disk writes during operation.
- Added a new PubSubAdapter method in the client for direct access to the pubsub.ClientAdapter, bypassing authentication checks for serverless functions.
- Integrated the pubsub adapter into the gateway for serverless function support.
- Implemented a new pubsub_publish host function in the serverless engine for publishing messages to topics.
2026-01-05 10:25:03 +02:00
anonpenguin23
fff665374f feat: disable debug logging in Rqlite MCP server to reduce disk writes
- Commented out debug logging statements in the Rqlite MCP server to prevent excessive disk writes during operation.
- Added a new PubSubAdapter method in the client for direct access to the pubsub.ClientAdapter, bypassing authentication checks for serverless functions.
- Integrated the pubsub adapter into the gateway for serverless function support.
- Implemented a new pubsub_publish host function in the serverless engine for publishing messages to topics.
2026-01-05 10:22:55 +02:00
anonpenguin23
2b3e6874c8 feat: disable debug logging in Rqlite MCP server to reduce disk writes
- Commented out debug logging statements in the Rqlite MCP server to prevent excessive disk writes during operation.
- Added a new PubSubAdapter method in the client for direct access to the pubsub.ClientAdapter, bypassing authentication checks for serverless functions.
- Integrated the pubsub adapter into the gateway for serverless function support.
- Implemented a new pubsub_publish host function in the serverless engine for publishing messages to topics.
2026-01-03 21:02:35 +02:00
anonpenguin23
cbbf72092d feat: add Rqlite MCP server and presence functionality
- Introduced a new Rqlite MCP server implementation in `cmd/rqlite-mcp`, enabling JSON-RPC communication for database operations.
- Updated the Makefile to include the build command for the Rqlite MCP server.
- Enhanced the WebSocket PubSub client with presence capabilities, allowing members to join and leave topics with notifications.
- Implemented presence management in the gateway, including endpoints for querying current members in a topic.
- Added end-to-end tests for presence functionality, ensuring correct behavior during member join and leave events.
2026-01-03 14:25:13 +02:00
anonpenguin23
9ddbe945fd feat: update mockFunctionRegistry methods for serverless function handling
- Modified the Register method to return a function instance and an error, enhancing its functionality.
- Added a new GetLogs method to the mockFunctionRegistry for retrieving log entries, improving test coverage for serverless function logging.
2026-01-02 08:41:54 +02:00
anonpenguin23
4f893e08d1 feat: enhance serverless function management and logging
- Updated the serverless functions table schema to remove the version constraint for uniqueness, allowing for more flexible function definitions.
- Enhanced the serverless engine to support HTTP fetch functionality, enabling external API calls from serverless functions.
- Implemented logging capabilities for function invocations, capturing detailed logs for better debugging and monitoring.
- Improved the authentication middleware to handle public endpoints more effectively, ensuring seamless access to serverless functions.
- Added new configuration options for serverless functions, including memory limits, timeout settings, and retry parameters, to optimize performance and reliability.
2026-01-02 08:40:28 +02:00
anonpenguin23
df5b11b175 feat: add API examples for Orama Network Gateway
- Introduced a new `example.http` file containing comprehensive API examples for the Orama Network Gateway, demonstrating various functionalities including health checks, distributed cache operations, decentralized storage interactions, real-time pub/sub messaging, and serverless function management.
- Updated the README to include a section on serverless functions using WebAssembly (WASM), detailing the build, deployment, invocation, and management processes for serverless functions.
- Removed outdated debug configuration file to streamline project structure.
2026-01-01 18:53:51 +02:00
anonpenguin23
a9844a1451 feat: add unit tests for gateway authentication and RQLite utilities
- Introduced comprehensive unit tests for the authentication service in the gateway, covering JWT generation, Base58 decoding, and signature verification for Ethereum and Solana.
- Added tests for RQLite cluster discovery functions, including host replacement logic and public IP validation.
- Implemented tests for RQLite utility functions, focusing on exponential backoff and data directory path resolution.
- Enhanced serverless engine tests to validate timeout handling and memory limits for WASM functions.
2025-12-31 12:26:31 +02:00
anonpenguin23
4ee76588ed feat: refactor API gateway and CLI utilities for improved functionality
- Updated the API gateway documentation to reflect changes in architecture and functionality, emphasizing its role as a multi-functional entry point for decentralized services.
- Refactored CLI commands to utilize utility functions for better code organization and maintainability.
- Introduced new utility functions for handling peer normalization, service management, and port validation, enhancing the overall CLI experience.
- Added a new production installation script to streamline the setup process for users, including detailed dry-run summaries for better visibility.
- Enhanced validation mechanisms for configuration files and swarm keys, ensuring robust error handling and user feedback during setup.
2025-12-31 10:48:15 +02:00
anonpenguin23
b3b1905fb2 feat: refactor API gateway and CLI utilities for improved functionality
- Updated the API gateway documentation to reflect changes in architecture and functionality, emphasizing its role as a multi-functional entry point for decentralized services.
- Refactored CLI commands to utilize utility functions for better code organization and maintainability.
- Introduced new utility functions for handling peer normalization, service management, and port validation, enhancing the overall CLI experience.
- Added a new production installation script to streamline the setup process for users, including detailed dry-run summaries for better visibility.
- Enhanced validation mechanisms for configuration files and swarm keys, ensuring robust error handling and user feedback during setup.
2025-12-31 10:16:26 +02:00
anonpenguin23
54aab4841d feat: add network MCP rules and documentation
- Introduced a new `network.mdc` file containing comprehensive guidelines for utilizing the network Model Context Protocol (MCP).
- Documented available MCP tools for code understanding, skill learning, and recommended workflows to enhance developer efficiency.
- Provided detailed instructions on the collaborative skill learning process and user override commands for better interaction with the MCP.
2025-12-29 14:09:48 +02:00
anonpenguin23
ee80be15d8 feat: add network MCP rules and documentation
- Introduced a new `network.mdc` file containing comprehensive guidelines for utilizing the network Model Context Protocol (MCP).
- Documented available MCP tools for code understanding, skill learning, and recommended workflows to enhance developer efficiency.
- Provided detailed instructions on the collaborative skill learning process and user override commands for better interaction with the MCP.
2025-12-29 14:08:58 +02:00
248 changed files with 30844 additions and 16092 deletions

4
.gitignore vendored

@@ -77,3 +77,7 @@ configs/
.dev/
.gocache/
.claude/
.mcp.json
.cursor/


@@ -1,68 +0,0 @@
// Project-local debug tasks
//
// For more documentation on how to configure debug tasks,
// see: https://zed.dev/docs/debugger
[
  {
    "label": "Gateway Go (Delve)",
    "adapter": "Delve",
    "request": "launch",
    "mode": "debug",
    "program": "./cmd/gateway",
    "env": {
      "GATEWAY_ADDR": ":6001",
      "GATEWAY_BOOTSTRAP_PEERS": "/ip4/localhost/tcp/4001/p2p/12D3KooWSHHwEY6cga3ng7tD1rzStAU58ogQXVMX3LZJ6Gqf6dee",
      "GATEWAY_NAMESPACE": "default",
      "GATEWAY_API_KEY": "ak_iGustrsFk9H8uXpwczCATe5U:default"
    }
  },
  {
    "label": "E2E Test Go (Delve)",
    "adapter": "Delve",
    "request": "launch",
    "mode": "test",
    "buildFlags": "-tags e2e",
    "program": "./e2e",
    "env": {
      "GATEWAY_API_KEY": "ak_iGustrsFk9H8uXpwczCATe5U:default"
    },
    "args": ["-test.v"]
  },
  {
    "adapter": "Delve",
    "label": "Gateway Go 6001 Port (Delve)",
    "request": "launch",
    "mode": "debug",
    "program": "./cmd/gateway",
    "env": {
      "GATEWAY_ADDR": ":6001",
      "GATEWAY_BOOTSTRAP_PEERS": "/ip4/localhost/tcp/4001/p2p/12D3KooWSHHwEY6cga3ng7tD1rzStAU58ogQXVMX3LZJ6Gqf6dee",
      "GATEWAY_NAMESPACE": "default",
      "GATEWAY_API_KEY": "ak_iGustrsFk9H8uXpwczCATe5U:default"
    }
  },
  {
    "adapter": "Delve",
    "label": "Network CLI - peers (Delve)",
    "request": "launch",
    "mode": "debug",
    "program": "./cmd/cli",
    "args": ["peers"]
  },
  {
    "adapter": "Delve",
    "label": "Network CLI - PubSub Subscribe (Delve)",
    "request": "launch",
    "mode": "debug",
    "program": "./cmd/cli",
    "args": ["pubsub", "subscribe", "monitoring"]
  },
  {
    "adapter": "Delve",
    "label": "Node Go (Delve)",
    "request": "launch",
    "mode": "debug",
    "program": "./cmd/node",
    "args": ["--config", "configs/node.yaml"]
  }
]

File diff suppressed because it is too large.


@@ -19,7 +19,7 @@ test-e2e:
.PHONY: build clean test run-node run-node2 run-node3 run-example deps tidy fmt vet lint clear-ports install-hooks kill
-VERSION := 0.72.1
+VERSION := 0.90.0
COMMIT ?= $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
DATE ?= $(shell date -u +%Y-%m-%dT%H:%M:%SZ)
LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.commit=$(COMMIT)' -X 'main.date=$(DATE)'
@@ -31,6 +31,7 @@ build: deps
    go build -ldflags "$(LDFLAGS)" -o bin/identity ./cmd/identity
    go build -ldflags "$(LDFLAGS)" -o bin/orama-node ./cmd/node
    go build -ldflags "$(LDFLAGS)" -o bin/orama cmd/cli/main.go
+   go build -ldflags "$(LDFLAGS)" -o bin/rqlite-mcp ./cmd/rqlite-mcp
    # Inject gateway build metadata via pkg path variables
    go build -ldflags "$(LDFLAGS) -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildVersion=$(VERSION)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildCommit=$(COMMIT)' -X 'github.com/DeBrosOfficial/network/pkg/gateway.BuildTime=$(DATE)'" -o bin/gateway ./cmd/gateway
    @echo "Build complete! Run ./bin/orama version"
@@ -71,14 +72,9 @@ run-gateway:
    @echo "Note: Config must be in ~/.orama/data/gateway.yaml"
    go run ./cmd/orama-gateway
-# Setup local domain names for development
-setup-domains:
-   @echo "Setting up local domains..."
-   @sudo bash scripts/setup-local-domains.sh
# Development environment target
# Uses orama dev up to start full stack with dependency and port checking
-dev: build setup-domains
+dev: build
    @./bin/orama dev up
# Graceful shutdown of all dev services

124
README.md

@@ -1,6 +1,19 @@
-# Orama Network - Distributed P2P Database System
+# Orama Network - Distributed P2P Platform
-A decentralized peer-to-peer data platform built in Go. Combines distributed SQL (RQLite), pub/sub messaging, and resilient peer discovery so applications can share state without central infrastructure.
+A high-performance API Gateway and distributed platform built in Go. Provides a unified HTTP/HTTPS API for distributed SQL (RQLite), distributed caching (Olric), decentralized storage (IPFS), pub/sub messaging, and serverless WebAssembly execution.
**Architecture:** Modular Gateway / Edge Proxy following SOLID principles
## Features
- **🔐 Authentication** - Wallet signatures, API keys, JWT tokens
- **💾 Storage** - IPFS-based decentralized file storage with encryption
- **⚡ Cache** - Distributed cache with Olric (in-memory key-value)
- **🗄️ Database** - RQLite distributed SQL with Raft consensus
- **📡 Pub/Sub** - Real-time messaging via LibP2P and WebSocket
- **⚙️ Serverless** - WebAssembly function execution with host functions
- **🌐 HTTP Gateway** - Unified REST API with automatic HTTPS (Let's Encrypt)
- **📦 Client SDK** - Type-safe Go SDK for all services
## Quick Start
@@ -26,27 +39,25 @@ make stop
After running `make dev`, test service health using these curl requests:
-> **Note:** Local domains (node-1.local, etc.) require running `sudo make setup-domains` first. Alternatively, use `localhost` with port numbers.
### Node Unified Gateways
Each node is accessible via a single unified gateway port:
```bash
# Node-1 (port 6001)
-curl http://node-1.local:6001/health
+curl http://localhost:6001/health
# Node-2 (port 6002)
-curl http://node-2.local:6002/health
+curl http://localhost:6002/health
# Node-3 (port 6003)
-curl http://node-3.local:6003/health
+curl http://localhost:6003/health
# Node-4 (port 6004)
-curl http://node-4.local:6004/health
+curl http://localhost:6004/health
# Node-5 (port 6005)
-curl http://node-5.local:6005/health
+curl http://localhost:6005/health
```
## Network Architecture
@@ -129,6 +140,54 @@ make build
./bin/orama auth logout
```
## Serverless Functions (WASM)
Orama supports high-performance serverless function execution using WebAssembly (WASM). Functions are isolated, secure, and can interact with network services like the distributed cache.
### 1. Build Functions
Functions must be compiled to WASM. We recommend using [TinyGo](https://tinygo.org/).
```bash
# Build example functions to examples/functions/bin/
./examples/functions/build.sh
```
### 2. Deployment
Deploy your compiled `.wasm` file to the network via the Gateway.
```bash
# Deploy a function
curl -X POST http://localhost:6001/v1/functions \
-H "Authorization: Bearer <your_api_key>" \
-F "name=hello-world" \
-F "namespace=default" \
-F "wasm=@./examples/functions/bin/hello.wasm"
```
### 3. Invocation
Trigger your function with a JSON payload. The function receives the payload via `stdin` and returns its response via `stdout`.
```bash
# Invoke via HTTP
curl -X POST http://localhost:6001/v1/functions/hello-world/invoke \
-H "Authorization: Bearer <your_api_key>" \
-H "Content-Type: application/json" \
-d '{"name": "Developer"}'
```
### 4. Management
```bash
# List all functions in a namespace
curl http://localhost:6001/v1/functions?namespace=default
# Delete a function
curl -X DELETE http://localhost:6001/v1/functions/hello-world?namespace=default
```
## Production Deployment
### Prerequisites
@@ -262,12 +321,59 @@ sudo orama install
- `POST /v1/pubsub/publish` - Publish message
- `GET /v1/pubsub/topics` - List topics
- `GET /v1/pubsub/ws?topic=<name>` - WebSocket subscribe
- `POST /v1/functions` - Deploy function (multipart/form-data)
- `POST /v1/functions/{name}/invoke` - Invoke function
- `GET /v1/functions` - List functions
- `DELETE /v1/functions/{name}` - Delete function
- `GET /v1/functions/{name}/logs` - Get function logs
See `openapi/gateway.yaml` for complete API specification.
## Documentation
- **[Architecture Guide](docs/ARCHITECTURE.md)** - System architecture and design patterns
- **[Client SDK](docs/CLIENT_SDK.md)** - Go SDK documentation and examples
- **[Gateway API](docs/GATEWAY_API.md)** - Complete HTTP API reference
- **[Security Deployment](docs/SECURITY_DEPLOYMENT_GUIDE.md)** - Production security hardening
## Resources
- [RQLite Documentation](https://rqlite.io/docs/)
- [IPFS Documentation](https://docs.ipfs.tech/)
- [LibP2P Documentation](https://docs.libp2p.io/)
- [WebAssembly](https://webassembly.org/)
- [GitHub Repository](https://github.com/DeBrosOfficial/network)
- [Issue Tracker](https://github.com/DeBrosOfficial/network/issues)
## Project Structure
```
network/
├── cmd/ # Binary entry points
│ ├── cli/ # CLI tool
│ ├── gateway/ # HTTP Gateway
│ ├── node/ # P2P Node
│ └── rqlite-mcp/ # RQLite MCP server
├── pkg/ # Core packages
│ ├── gateway/ # Gateway implementation
│ │ └── handlers/ # HTTP handlers by domain
│ ├── client/ # Go SDK
│ ├── serverless/ # WASM engine
│ ├── rqlite/ # Database ORM
│ ├── contracts/ # Interface definitions
│ ├── httputil/ # HTTP utilities
│ └── errors/ # Error handling
├── docs/ # Documentation
├── e2e/ # End-to-end tests
└── examples/ # Example code
```
## Contributing
Contributions are welcome! This project follows:
- **SOLID Principles** - Single responsibility, open/closed, etc.
- **DRY Principle** - Don't repeat yourself
- **Clean Architecture** - Clear separation of concerns
- **Test Coverage** - Unit and E2E tests required
See our architecture docs for design patterns and guidelines.

320
cmd/rqlite-mcp/main.go Normal file

@@ -0,0 +1,320 @@
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "log"
    "os"
    "strings"
    "time"

    "github.com/rqlite/gorqlite"
)

// MCP JSON-RPC types
type JSONRPCRequest struct {
    JSONRPC string          `json:"jsonrpc"`
    ID      any             `json:"id,omitempty"`
    Method  string          `json:"method"`
    Params  json.RawMessage `json:"params,omitempty"`
}

type JSONRPCResponse struct {
    JSONRPC string         `json:"jsonrpc"`
    ID      any            `json:"id"`
    Result  any            `json:"result,omitempty"`
    Error   *ResponseError `json:"error,omitempty"`
}

type ResponseError struct {
    Code    int    `json:"code"`
    Message string `json:"message"`
}

// Tool definition
type Tool struct {
    Name        string `json:"name"`
    Description string `json:"description"`
    InputSchema any    `json:"inputSchema"`
}

// Tool call types
type CallToolRequest struct {
    Name      string          `json:"name"`
    Arguments json.RawMessage `json:"arguments"`
}

type TextContent struct {
    Type string `json:"type"`
    Text string `json:"text"`
}

type CallToolResult struct {
    Content []TextContent `json:"content"`
    IsError bool          `json:"isError,omitempty"`
}

type MCPServer struct {
    conn *gorqlite.Connection
}

func NewMCPServer(rqliteURL string) (*MCPServer, error) {
    conn, err := gorqlite.Open(rqliteURL)
    if err != nil {
        return nil, err
    }
    return &MCPServer{
        conn: conn,
    }, nil
}

func (s *MCPServer) handleRequest(req JSONRPCRequest) JSONRPCResponse {
    var resp JSONRPCResponse
    resp.JSONRPC = "2.0"
    resp.ID = req.ID
    // Debug logging disabled to prevent excessive disk writes
    // log.Printf("Received method: %s", req.Method)
    switch req.Method {
    case "initialize":
        resp.Result = map[string]any{
            "protocolVersion": "2024-11-05",
            "capabilities": map[string]any{
                "tools": map[string]any{},
            },
            "serverInfo": map[string]any{
                "name":    "rqlite-mcp",
                "version": "0.1.0",
            },
        }
    case "notifications/initialized":
        // This is a notification, no response needed
        return JSONRPCResponse{}
    case "tools/list":
        // Debug logging disabled to prevent excessive disk writes
        tools := []Tool{
            {
                Name:        "list_tables",
                Description: "List all tables in the Rqlite database",
                InputSchema: map[string]any{
                    "type":       "object",
                    "properties": map[string]any{},
                },
            },
            {
                Name:        "query",
                Description: "Run a SELECT query on the Rqlite database",
                InputSchema: map[string]any{
                    "type": "object",
                    "properties": map[string]any{
                        "sql": map[string]any{
                            "type":        "string",
                            "description": "The SQL SELECT query to run",
                        },
                    },
                    "required": []string{"sql"},
                },
            },
            {
                Name:        "execute",
                Description: "Run an INSERT, UPDATE, or DELETE statement on the Rqlite database",
                InputSchema: map[string]any{
                    "type": "object",
                    "properties": map[string]any{
                        "sql": map[string]any{
                            "type":        "string",
                            "description": "The SQL statement (INSERT, UPDATE, DELETE) to run",
                        },
                    },
                    "required": []string{"sql"},
                },
            },
        }
        resp.Result = map[string]any{"tools": tools}
    case "tools/call":
        var callReq CallToolRequest
        if err := json.Unmarshal(req.Params, &callReq); err != nil {
            resp.Error = &ResponseError{Code: -32700, Message: "Parse error"}
            return resp
        }
        resp.Result = s.handleToolCall(callReq)
    default:
        // Debug logging disabled to prevent excessive disk writes
        resp.Error = &ResponseError{Code: -32601, Message: "Method not found"}
    }
    return resp
}

func (s *MCPServer) handleToolCall(req CallToolRequest) CallToolResult {
    // Debug logging disabled to prevent excessive disk writes
    // log.Printf("Tool call: %s", req.Name)
    switch req.Name {
    case "list_tables":
        rows, err := s.conn.QueryOne("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
        if err != nil {
            return errorResult(fmt.Sprintf("Error listing tables: %v", err))
        }
        var tables []string
        for rows.Next() {
            slice, err := rows.Slice()
            if err == nil && len(slice) > 0 {
                tables = append(tables, fmt.Sprint(slice[0]))
            }
        }
        if len(tables) == 0 {
            return textResult("No tables found")
        }
        return textResult(strings.Join(tables, "\n"))
    case "query":
        var args struct {
            SQL string `json:"sql"`
        }
        if err := json.Unmarshal(req.Arguments, &args); err != nil {
            return errorResult(fmt.Sprintf("Invalid arguments: %v", err))
        }
        // Debug logging disabled to prevent excessive disk writes
        rows, err := s.conn.QueryOne(args.SQL)
        if err != nil {
            return errorResult(fmt.Sprintf("Query error: %v", err))
        }
        var result strings.Builder
        cols := rows.Columns()
        result.WriteString(strings.Join(cols, " | ") + "\n")
        result.WriteString(strings.Repeat("-", len(cols)*10) + "\n")
        rowCount := 0
        for rows.Next() {
            vals, err := rows.Slice()
            if err != nil {
                continue
            }
            rowCount++
            for i, v := range vals {
                if i > 0 {
                    result.WriteString(" | ")
                }
                result.WriteString(fmt.Sprint(v))
            }
            result.WriteString("\n")
        }
        result.WriteString(fmt.Sprintf("\n(%d rows)", rowCount))
        return textResult(result.String())
    case "execute":
        var args struct {
            SQL string `json:"sql"`
        }
        if err := json.Unmarshal(req.Arguments, &args); err != nil {
            return errorResult(fmt.Sprintf("Invalid arguments: %v", err))
        }
        // Debug logging disabled to prevent excessive disk writes
        res, err := s.conn.WriteOne(args.SQL)
        if err != nil {
            return errorResult(fmt.Sprintf("Execution error: %v", err))
        }
        return textResult(fmt.Sprintf("Rows affected: %d", res.RowsAffected))
    default:
        return errorResult(fmt.Sprintf("Unknown tool: %s", req.Name))
    }
}

func textResult(text string) CallToolResult {
    return CallToolResult{
        Content: []TextContent{
            {
                Type: "text",
                Text: text,
            },
        },
    }
}

func errorResult(text string) CallToolResult {
    return CallToolResult{
        Content: []TextContent{
            {
                Type: "text",
                Text: text,
            },
        },
        IsError: true,
    }
}

func main() {
    // Log to stderr so stdout is clean for JSON-RPC
    log.SetOutput(os.Stderr)
    rqliteURL := "http://localhost:5001"
    if u := os.Getenv("RQLITE_URL"); u != "" {
        rqliteURL = u
    }
    var server *MCPServer
    var err error
    // Retry connecting to rqlite
    maxRetries := 30
    for i := 0; i < maxRetries; i++ {
        server, err = NewMCPServer(rqliteURL)
        if err == nil {
            break
        }
        if i%5 == 0 {
            log.Printf("Waiting for Rqlite at %s... (%d/%d)", rqliteURL, i+1, maxRetries)
        }
        time.Sleep(1 * time.Second)
    }
    if err != nil {
        log.Fatalf("Failed to connect to Rqlite after %d retries: %v", maxRetries, err)
    }
    log.Printf("MCP Rqlite server started (stdio transport)")
    log.Printf("Connected to Rqlite at %s", rqliteURL)
    // Read JSON-RPC requests from stdin, write responses to stdout
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        line := scanner.Text()
        if line == "" {
            continue
        }
        var req JSONRPCRequest
        if err := json.Unmarshal([]byte(line), &req); err != nil {
            // Debug logging disabled to prevent excessive disk writes
            continue
        }
        resp := server.handleRequest(req)
        // Don't send response for notifications (no ID)
        if req.ID == nil && strings.HasPrefix(req.Method, "notifications/") {
            continue
        }
        respData, err := json.Marshal(resp)
        if err != nil {
            // Debug logging disabled to prevent excessive disk writes
            continue
        }
        fmt.Println(string(respData))
    }
    if err := scanner.Err(); err != nil {
        // Debug logging disabled to prevent excessive disk writes
    }
}

435
docs/ARCHITECTURE.md Normal file

@@ -0,0 +1,435 @@
# Orama Network Architecture
## Overview
Orama Network is a high-performance API Gateway and Reverse Proxy designed for a decentralized ecosystem. It serves as a unified entry point that orchestrates traffic between clients and various backend services.
## Architecture Pattern
**Modular Gateway / Edge Proxy Architecture**
The system follows a clean, layered architecture with clear separation of concerns:
```
┌─────────────────────────────────────────────────────────────┐
│ Clients │
│ (Web, Mobile, CLI, SDKs) │
└────────────────────────┬────────────────────────────────────┘
│ HTTPS/WSS
┌─────────────────────────────────────────────────────────────┐
│ API Gateway (Port 443) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Handlers Layer (HTTP/WebSocket) │ │
│ │ - Auth handlers - Storage handlers │ │
│ │ - Cache handlers - PubSub handlers │ │
│ │ - Serverless - Database handlers │ │
│ └──────────────────────┬───────────────────────────────┘ │
│ │ │
│ ┌──────────────────────▼───────────────────────────────┐ │
│ │ Middleware (Security, Auth, Logging) │ │
│ └──────────────────────┬───────────────────────────────┘ │
│ │ │
│ ┌──────────────────────▼───────────────────────────────┐ │
│ │ Service Coordination (Gateway Core) │ │
│ └──────────────────────┬───────────────────────────────┘ │
└─────────────────────────┼────────────────────────────────────┘
┌─────────────────┼─────────────────┐
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ RQLite │ │ Olric │ │ IPFS │
│ (Database) │ │ (Cache) │ │ (Storage) │
│ │ │ │ │ │
│ Port 5001 │ │ Port 3320 │ │ Port 4501 │
└──────────────┘ └──────────────┘ └──────────────┘
┌─────────────────┐ ┌──────────────┐
│ IPFS Cluster │ │ Serverless │
│ (Pinning) │ │ (WASM) │
│ │ │ │
│ Port 9094 │ │ In-Process │
└─────────────────┘ └──────────────┘
```
## Core Components
### 1. API Gateway (`pkg/gateway/`)
The gateway is the main entry point for all client requests. It coordinates between various backend services.
**Key Files:**
- `gateway.go` - Core gateway struct and routing
- `dependencies.go` - Service initialization and dependency injection
- `lifecycle.go` - Start/stop/health lifecycle management
- `middleware.go` - Authentication, logging, error handling
- `routes.go` - HTTP route registration
**Handler Packages:**
- `handlers/auth/` - Authentication (JWT, API keys, wallet signatures)
- `handlers/storage/` - IPFS storage operations
- `handlers/cache/` - Distributed cache operations
- `handlers/pubsub/` - Pub/sub messaging
- `handlers/serverless/` - Serverless function deployment and execution
### 2. Client SDK (`pkg/client/`)
Provides a clean Go SDK for interacting with the Orama Network.
**Architecture:**
```go
// Main client interface
type NetworkClient interface {
Storage() StorageClient
Cache() CacheClient
Database() DatabaseClient
PubSub() PubSubClient
Serverless() ServerlessClient
Auth() AuthClient
}
```
**Key Files:**
- `client.go` - Main client orchestration
- `config.go` - Client configuration
- `storage_client.go` - IPFS storage client
- `cache_client.go` - Olric cache client
- `database_client.go` - RQLite database client
- `pubsub_bridge.go` - Pub/sub messaging client
- `transport.go` - HTTP transport layer
- `errors.go` - Client-specific errors
**Usage Example:**
```go
import "github.com/DeBrosOfficial/network/pkg/client"
// Create client
cfg := client.DefaultClientConfig()
cfg.GatewayURL = "https://api.orama.network"
cfg.APIKey = "your-api-key"
c := client.NewNetworkClient(cfg)
// Use storage
resp, err := c.Storage().Upload(ctx, data, "file.txt")
// Use cache
err = c.Cache().Set(ctx, "key", value, 0)
// Query database
rows, err := c.Database().Query(ctx, "SELECT * FROM users")
// Publish message
err = c.PubSub().Publish(ctx, "chat", []byte("hello"))
// Deploy function
fn, err := c.Serverless().Deploy(ctx, def, wasmBytes)
// Invoke function
result, err := c.Serverless().Invoke(ctx, "function-name", input)
```
### 3. Database Layer (`pkg/rqlite/`)
ORM-like interface over RQLite distributed SQL database.
**Key Files:**
- `client.go` - Main ORM client
- `orm_types.go` - Interfaces (Client, Tx, Repository[T])
- `query_builder.go` - Fluent query builder
- `repository.go` - Generic repository pattern
- `scanner.go` - Reflection-based row scanning
- `transaction.go` - Transaction support
**Features:**
- Fluent query builder
- Generic repository pattern with type safety
- Automatic struct mapping
- Transaction support
- Connection pooling with retry
**Example:**
```go
// Query builder
var users []User
err := client.CreateQueryBuilder("users").
    Select("id", "name", "email").
    Where("age > ?", 18).
    OrderBy("name ASC").
    Limit(10).
    GetMany(ctx, &users)
// Repository pattern
type User struct {
ID int `db:"id"`
Name string `db:"name"`
Email string `db:"email"`
}
repo := client.Repository("users")
user := &User{Name: "Alice", Email: "alice@example.com"}
err := repo.Save(ctx, user)
```
### 4. Serverless Engine (`pkg/serverless/`)
WebAssembly (WASM) function execution engine with host functions.
**Architecture:**
```
pkg/serverless/
├── engine.go - Core WASM engine
├── execution/ - Function execution
│ ├── executor.go
│ └── lifecycle.go
├── cache/ - Module caching
│ └── module_cache.go
├── registry/ - Function metadata
│ ├── registry.go
│ ├── function_store.go
│ ├── ipfs_store.go
│ └── invocation_logger.go
└── hostfunctions/ - Host functions by domain
├── cache.go - Cache operations
├── storage.go - Storage operations
├── database.go - Database queries
├── pubsub.go - Messaging
├── http.go - HTTP requests
└── logging.go - Logging
```
**Features:**
- Secure WASM execution sandbox
- Memory and CPU limits
- Host function injection (cache, storage, DB, HTTP)
- Function versioning
- Invocation logging
- Hot module reloading
### 5. Configuration System (`pkg/config/`)
Domain-specific configuration with validation.
**Structure:**
```
pkg/config/
├── config.go - Main config aggregator
├── loader.go - YAML loading
├── node_config.go - Node settings
├── database_config.go - Database settings
├── gateway_config.go - Gateway settings
└── validate/ - Validation
├── validators.go
├── node.go
├── database.go
└── gateway.go
```
### 6. Shared Utilities
**HTTP Utilities (`pkg/httputil/`):**
- Request parsing and validation
- JSON response writers
- Error handling
- Authentication extraction
**Error Handling (`pkg/errors/`):**
- Typed errors (ValidationError, NotFoundError, etc.)
- HTTP status code mapping
- Error wrapping with context
- Stack traces
**Contracts (`pkg/contracts/`):**
- Interface definitions for all services
- Enables dependency injection
- Clean abstractions
## Data Flow
### 1. HTTP Request Flow
```
Client Request
[HTTPS Termination]
[Authentication Middleware]
[Route Handler]
[Service Layer]
[Backend Service] (RQLite/Olric/IPFS)
[Response Formatting]
Client Response
```
### 2. WebSocket Flow (Pub/Sub)
```
Client WebSocket Connect
[Upgrade to WebSocket]
[Authentication]
[Subscribe to Topic]
[LibP2P PubSub] ←→ [Local Subscribers]
[Message Broadcasting]
Client Receives Messages
```
### 3. Serverless Invocation Flow
```
Function Deployment:
Upload WASM → Store in IPFS → Save Metadata (RQLite) → Compile Module
Function Invocation:
Request → Load Metadata → Get WASM from IPFS →
Execute in Sandbox → Return Result → Log Invocation
```
## Security Architecture
### Authentication Methods
1. **Wallet Signatures** (Ethereum-style)
- Challenge/response flow
- Nonce-based to prevent replay attacks
- Issues JWT tokens after verification
2. **API Keys**
- Long-lived credentials
- Stored in RQLite
- Namespace-scoped
3. **JWT Tokens**
- Short-lived (15 min default)
- Refresh token support
- Claims-based authorization
### TLS/HTTPS
- Automatic ACME (Let's Encrypt) certificate management
- TLS 1.3 support
- HTTP/2 enabled
- Certificate caching
### Middleware Stack
1. **Logger** - Request/response logging
2. **CORS** - Cross-origin resource sharing
3. **Authentication** - JWT/API key validation
4. **Authorization** - Namespace access control
5. **Rate Limiting** - Per-client rate limits
6. **Error Handling** - Consistent error responses
## Scalability
### Horizontal Scaling
- **Gateway:** Stateless, can run multiple instances behind load balancer
- **RQLite:** Multi-node cluster with Raft consensus
- **IPFS:** Distributed storage across nodes
- **Olric:** Distributed cache with consistent hashing
### Caching Strategy
1. **WASM Module Cache** - Compiled modules cached in memory
2. **Olric Distributed Cache** - Shared cache across nodes
3. **Local Cache** - Per-gateway request caching
### High Availability
- **Database:** RQLite cluster with automatic leader election
- **Storage:** IPFS replication factor configurable
- **Cache:** Olric replication and eventual consistency
- **Gateway:** Stateless, multiple replicas supported
## Monitoring & Observability
### Health Checks
- `/health` - Liveness probe
- `/v1/status` - Detailed status with service checks
### Metrics
- Prometheus-compatible metrics endpoint
- Request counts, latencies, error rates
- Service-specific metrics (cache hit ratio, DB query times)
### Logging
- Structured logging (JSON format)
- Log levels: DEBUG, INFO, WARN, ERROR
- Correlation IDs for request tracing
## Development Patterns
### SOLID Principles
- **Single Responsibility:** Each handler/service has one focus
- **Open/Closed:** Interface-based design for extensibility
- **Liskov Substitution:** All implementations conform to contracts
- **Interface Segregation:** Small, focused interfaces
- **Dependency Inversion:** Depend on abstractions, not implementations
### Code Organization
- **Average file size:** ~150 lines
- **Package structure:** Domain-driven, feature-focused
- **Testing:** Unit tests for logic, E2E tests for integration
- **Documentation:** Godoc comments on all public APIs
## Deployment
### Development
```bash
make dev # Start 5-node cluster
make stop # Stop all services
make test # Run unit tests
make test-e2e # Run E2E tests
```
### Production
```bash
# First node (creates cluster)
sudo orama install --vps-ip <IP> --domain node1.example.com

# Additional nodes (join cluster)
sudo orama install --vps-ip <IP> --domain node2.example.com \
  --peers /dns4/node1.example.com/tcp/4001/p2p/<PEER_ID> \
  --join <node1-ip>:7002 \
  --cluster-secret <secret> \
  --swarm-key <key>
```
### Docker (Future)
Planned containerization with Docker Compose and Kubernetes support.
## Future Enhancements
1. **GraphQL Support** - GraphQL gateway alongside REST
2. **gRPC Support** - gRPC protocol support
3. **Event Sourcing** - Event-driven architecture
4. **Kubernetes Operator** - Native K8s deployment
5. **Observability** - OpenTelemetry integration
6. **Multi-tenancy** - Enhanced namespace isolation
## Resources
- [RQLite Documentation](https://rqlite.io/docs/)
- [IPFS Documentation](https://docs.ipfs.tech/)
- [LibP2P Documentation](https://docs.libp2p.io/)
- [WebAssembly (WASM)](https://webassembly.org/)

---
`docs/CLIENT_SDK.md`
# Orama Network Client SDK
## Overview
The Orama Network Client SDK provides a clean, type-safe Go interface for interacting with the Orama Network. It abstracts away the complexity of HTTP requests, authentication, and error handling.
## Installation
```bash
go get github.com/DeBrosOfficial/network/pkg/client
```
## Quick Start
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/DeBrosOfficial/network/pkg/client"
)

func main() {
	// Create client configuration
	cfg := client.DefaultClientConfig()
	cfg.GatewayURL = "https://api.orama.network"
	cfg.APIKey = "your-api-key-here"

	// Create client
	c := client.NewNetworkClient(cfg)

	// Use the client
	ctx := context.Background()

	// Upload to storage
	data := []byte("Hello, Orama!")
	resp, err := c.Storage().Upload(ctx, data, "hello.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Uploaded: CID=%s\n", resp.CID)
}
```
## Configuration
### ClientConfig
```go
type ClientConfig struct {
	// Gateway URL (e.g., "https://api.orama.network")
	GatewayURL string

	// Authentication (choose one)
	APIKey   string // API key authentication
	JWTToken string // JWT token authentication

	// Client options
	Timeout   time.Duration // Request timeout (default: 30s)
	UserAgent string        // Custom user agent

	// Network client namespace
	Namespace string // Default namespace for operations
}
```
### Creating a Client
```go
// Default configuration
cfg := client.DefaultClientConfig()
cfg.GatewayURL = "https://api.orama.network"
cfg.APIKey = "your-api-key"
c := client.NewNetworkClient(cfg)
```
## Authentication
### API Key Authentication
```go
cfg := client.DefaultClientConfig()
cfg.APIKey = "your-api-key-here"
c := client.NewNetworkClient(cfg)
```
### JWT Token Authentication
```go
cfg := client.DefaultClientConfig()
cfg.JWTToken = "your-jwt-token-here"
c := client.NewNetworkClient(cfg)
```
### Obtaining Credentials
```go
// 1. Login with wallet signature (not yet implemented in SDK)
// Use the gateway API directly: POST /v1/auth/challenge + /v1/auth/verify
// 2. Issue API key after authentication
// POST /v1/auth/apikey with JWT token
```
## Storage Client
Upload, download, pin, and unpin files on IPFS.
### Upload File
```go
data := []byte("Hello, World!")
resp, err := c.Storage().Upload(ctx, data, "hello.txt")
if err != nil {
	log.Fatal(err)
}
fmt.Printf("CID: %s\n", resp.CID)
```
### Upload with Options
```go
opts := &client.StorageUploadOptions{
	Pin:               true, // Pin after upload
	Encrypt:           true, // Encrypt before upload
	ReplicationFactor: 3,    // Number of replicas
}
resp, err := c.Storage().UploadWithOptions(ctx, data, "file.txt", opts)
```
### Get File
```go
cid := "QmXxx..."
data, err := c.Storage().Get(ctx, cid)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Downloaded %d bytes\n", len(data))
```
### Pin File
```go
cid := "QmXxx..."
resp, err := c.Storage().Pin(ctx, cid)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Pinned: %s\n", resp.CID)
```
### Unpin File
```go
cid := "QmXxx..."
err := c.Storage().Unpin(ctx, cid)
if err != nil {
	log.Fatal(err)
}
fmt.Println("Unpinned successfully")
```
### Check Pin Status
```go
cid := "QmXxx..."
status, err := c.Storage().Status(ctx, cid)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Status: %s, Replicas: %d\n", status.Status, status.Replicas)
```
## Cache Client
Distributed key-value cache using Olric.
### Set Value
```go
key := "user:123"
value := map[string]interface{}{
	"name":  "Alice",
	"email": "alice@example.com",
}
ttl := 5 * time.Minute
err := c.Cache().Set(ctx, key, value, ttl)
if err != nil {
	log.Fatal(err)
}
```
### Get Value
```go
key := "user:123"
var user map[string]interface{}
err := c.Cache().Get(ctx, key, &user)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("User: %+v\n", user)
```
### Delete Value
```go
key := "user:123"
err := c.Cache().Delete(ctx, key)
if err != nil {
	log.Fatal(err)
}
```
### Multi-Get
```go
keys := []string{"user:1", "user:2", "user:3"}
results, err := c.Cache().MGet(ctx, keys)
if err != nil {
	log.Fatal(err)
}
for key, value := range results {
	fmt.Printf("%s: %v\n", key, value)
}
```
## Database Client
Query RQLite distributed SQL database.
### Execute Query (Write)
```go
sql := "INSERT INTO users (name, email) VALUES (?, ?)"
args := []interface{}{"Alice", "alice@example.com"}
result, err := c.Database().Execute(ctx, sql, args...)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Inserted %d rows\n", result.RowsAffected)
```
### Query (Read)
```go
sql := "SELECT id, name, email FROM users WHERE id = ?"
args := []interface{}{123}
rows, err := c.Database().Query(ctx, sql, args...)
if err != nil {
	log.Fatal(err)
}

type User struct {
	ID    int    `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

var users []User
for _, row := range rows {
	var user User
	// Parse row into the user struct here
	// (manual parsing required, or use an ORM layer)
	_ = row
	users = append(users, user)
}
```
### Create Table
```go
schema := `CREATE TABLE IF NOT EXISTS users (
	id INTEGER PRIMARY KEY AUTOINCREMENT,
	name TEXT NOT NULL,
	email TEXT UNIQUE NOT NULL,
	created_at DATETIME DEFAULT CURRENT_TIMESTAMP
)`
_, err := c.Database().Execute(ctx, schema)
if err != nil {
	log.Fatal(err)
}
```
### Transaction
```go
tx, err := c.Database().Begin(ctx)
if err != nil {
	log.Fatal(err)
}

_, err = tx.Execute(ctx, "INSERT INTO users (name) VALUES (?)", "Alice")
if err != nil {
	tx.Rollback(ctx)
	log.Fatal(err)
}

_, err = tx.Execute(ctx, "INSERT INTO users (name) VALUES (?)", "Bob")
if err != nil {
	tx.Rollback(ctx)
	log.Fatal(err)
}

if err = tx.Commit(ctx); err != nil {
	log.Fatal(err)
}
```
## PubSub Client
Publish and subscribe to topics.
### Publish Message
```go
topic := "chat"
message := []byte("Hello, everyone!")
err := c.PubSub().Publish(ctx, topic, message)
if err != nil {
	log.Fatal(err)
}
```
### Subscribe to Topic
```go
topic := "chat"
handler := func(ctx context.Context, msg []byte) error {
	fmt.Printf("Received: %s\n", string(msg))
	return nil
}
unsubscribe, err := c.PubSub().Subscribe(ctx, topic, handler)
if err != nil {
	log.Fatal(err)
}
// Unsubscribe when done
defer unsubscribe()
```
### List Topics
```go
topics, err := c.PubSub().ListTopics(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Topics: %v\n", topics)
```
## Serverless Client
Deploy and invoke WebAssembly functions.
### Deploy Function
```go
// Read WASM file
wasmBytes, err := os.ReadFile("function.wasm")
if err != nil {
	log.Fatal(err)
}

// Function definition
def := &client.FunctionDefinition{
	Name:        "hello-world",
	Namespace:   "default",
	Description: "Hello world function",
	MemoryLimit: 64, // MB
	Timeout:     30, // seconds
}

// Deploy
fn, err := c.Serverless().Deploy(ctx, def, wasmBytes)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Deployed: %s (CID: %s)\n", fn.Name, fn.WASMCID)
```
### Invoke Function
```go
functionName := "hello-world"
input := map[string]interface{}{
	"name": "Alice",
}
output, err := c.Serverless().Invoke(ctx, functionName, input)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Result: %s\n", output)
```
### List Functions
```go
functions, err := c.Serverless().List(ctx)
if err != nil {
	log.Fatal(err)
}
for _, fn := range functions {
	fmt.Printf("- %s: %s\n", fn.Name, fn.Description)
}
```
### Delete Function
```go
functionName := "hello-world"
err := c.Serverless().Delete(ctx, functionName)
if err != nil {
	log.Fatal(err)
}
```
### Get Function Logs
```go
functionName := "hello-world"
logs, err := c.Serverless().GetLogs(ctx, functionName, 100)
if err != nil {
	log.Fatal(err)
}
for _, entry := range logs { // "entry" avoids shadowing the log package
	fmt.Printf("[%s] %s: %s\n", entry.Timestamp, entry.Level, entry.Message)
}
```
## Error Handling
All client methods return typed errors that can be checked:
```go
import "github.com/DeBrosOfficial/network/pkg/errors"

resp, err := c.Storage().Upload(ctx, data, "file.txt")
if err != nil {
	if errors.IsNotFound(err) {
		fmt.Println("Resource not found")
	} else if errors.IsUnauthorized(err) {
		fmt.Println("Authentication failed")
	} else if errors.IsValidation(err) {
		fmt.Println("Validation error")
	} else {
		log.Fatal(err)
	}
}
```
## Advanced Usage
### Custom Timeout
```go
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
resp, err := c.Storage().Upload(ctx, data, "file.txt")
```
### Retry Logic
```go
import "github.com/DeBrosOfficial/network/pkg/errors"

maxRetries := 3
for i := 0; i < maxRetries; i++ {
	resp, err := c.Storage().Upload(ctx, data, "file.txt")
	if err == nil {
		_ = resp // success: use the response here
		break
	}
	if !errors.ShouldRetry(err) {
		log.Fatal(err) // not retryable, give up
	}
	time.Sleep(time.Second * time.Duration(i+1)) // linear backoff
}
```
### Multiple Namespaces
```go
// Default namespace
c1 := client.NewNetworkClient(cfg)
c1.Storage().Upload(ctx, data, "file.txt") // Uses default namespace

// Override namespace per request
opts := &client.StorageUploadOptions{
	Namespace: "custom-namespace",
}
c1.Storage().UploadWithOptions(ctx, data, "file.txt", opts)
```
## Testing
### Mock Client
```go
// Create a mock client for testing
mockClient := &MockNetworkClient{
	StorageClient: &MockStorageClient{
		UploadFunc: func(ctx context.Context, data []byte, filename string) (*UploadResponse, error) {
			return &UploadResponse{CID: "QmMock"}, nil
		},
	},
}

// Use in tests
resp, err := mockClient.Storage().Upload(ctx, data, "test.txt")
assert.NoError(t, err)
assert.Equal(t, "QmMock", resp.CID)
```
## Examples
See the `examples/` directory for complete examples:
- `examples/storage/` - Storage upload/download examples
- `examples/cache/` - Cache operations
- `examples/database/` - Database queries
- `examples/pubsub/` - Pub/sub messaging
- `examples/serverless/` - Serverless functions
## API Reference
Complete API documentation is available at:
- GoDoc: https://pkg.go.dev/github.com/DeBrosOfficial/network/pkg/client
- OpenAPI: `openapi/gateway.yaml`
## Support
- GitHub Issues: https://github.com/DeBrosOfficial/network/issues
- Documentation: https://github.com/DeBrosOfficial/network/tree/main/docs

---
`docs/GATEWAY_API.md`
# Gateway API Documentation
## Overview
The Orama Network Gateway provides a unified HTTP/HTTPS API for all network services. It handles authentication, routing, and service coordination.
**Base URL:** `https://api.orama.network` (production) or `http://localhost:6001` (development)
## Authentication
All API requests (except `/health` and `/v1/auth/*`) require authentication.
### Authentication Methods
1. **API Key** (Recommended for server-to-server)
2. **JWT Token** (Recommended for user sessions)
3. **Wallet Signature** (For blockchain integration)
### Using API Keys
Include your API key in the `Authorization` header:
```bash
curl -H "Authorization: Bearer your-api-key-here" \
https://api.orama.network/v1/status
```
Or in the `X-API-Key` header:
```bash
curl -H "X-API-Key: your-api-key-here" \
https://api.orama.network/v1/status
```
### Using JWT Tokens
```bash
curl -H "Authorization: Bearer your-jwt-token-here" \
https://api.orama.network/v1/status
```
## Base Endpoints
### Health Check
```http
GET /health
```
**Response:**
```json
{
  "status": "ok",
  "timestamp": "2024-01-20T10:30:00Z"
}
```
### Status
```http
GET /v1/status
```
**Response:**
```json
{
  "version": "0.80.0",
  "uptime": "24h30m15s",
  "services": {
    "rqlite": "healthy",
    "ipfs": "healthy",
    "olric": "healthy"
  }
}
```
### Version
```http
GET /v1/version
```
**Response:**
```json
{
  "version": "0.80.0",
  "commit": "abc123...",
  "built": "2024-01-20T00:00:00Z"
}
```
## Authentication API
### Get Challenge (Wallet Auth)
Generate a nonce for wallet signature.
```http
POST /v1/auth/challenge
Content-Type: application/json

{
  "wallet": "0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb",
  "purpose": "login",
  "namespace": "default"
}
```
**Response:**
```json
{
  "wallet": "0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb",
  "namespace": "default",
  "nonce": "a1b2c3d4e5f6...",
  "purpose": "login",
  "expires_at": "2024-01-20T10:35:00Z"
}
```
### Verify Signature
Verify wallet signature and issue JWT + API key.
```http
POST /v1/auth/verify
Content-Type: application/json

{
  "wallet": "0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb",
  "signature": "0x...",
  "nonce": "a1b2c3d4e5f6...",
  "namespace": "default"
}
```
**Response:**
```json
{
  "jwt_token": "eyJhbGciOiJIUzI1NiIs...",
  "refresh_token": "refresh_abc123...",
  "api_key": "api_xyz789...",
  "expires_in": 900,
  "namespace": "default"
}
```
### Refresh Token
Refresh an expired JWT token.
```http
POST /v1/auth/refresh
Content-Type: application/json

{
  "refresh_token": "refresh_abc123..."
}
```
**Response:**
```json
{
  "jwt_token": "eyJhbGciOiJIUzI1NiIs...",
  "expires_in": 900
}
```
### Logout
Revoke refresh tokens.
```http
POST /v1/auth/logout
Authorization: Bearer your-jwt-token

{
  "all": false
}
```
**Response:**
```json
{
  "message": "logged out successfully"
}
```
### Whoami
Get current authentication info.
```http
GET /v1/auth/whoami
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "authenticated": true,
  "method": "api_key",
  "api_key": "api_xyz789...",
  "namespace": "default"
}
```
## Storage API (IPFS)
### Upload File
```http
POST /v1/storage/upload
Authorization: Bearer your-api-key
Content-Type: multipart/form-data

file: <binary data>
```
Or with JSON:
```http
POST /v1/storage/upload
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "data": "base64-encoded-data",
  "filename": "document.pdf",
  "pin": true,
  "encrypt": false
}
```
**Response:**
```json
{
  "cid": "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG",
  "size": 1024,
  "filename": "document.pdf"
}
```
### Get File
```http
GET /v1/storage/get/:cid
Authorization: Bearer your-api-key
```
**Response:** Binary file data or JSON (if `Accept: application/json`)
### Pin File
```http
POST /v1/storage/pin
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "cid": "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG",
  "replication_factor": 3
}
```
**Response:**
```json
{
  "cid": "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG",
  "status": "pinned"
}
```
### Unpin File
```http
DELETE /v1/storage/unpin/:cid
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "message": "unpinned successfully"
}
```
### Get Pin Status
```http
GET /v1/storage/status/:cid
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "cid": "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG",
  "status": "pinned",
  "replicas": 3,
  "peers": ["12D3KooW...", "12D3KooW..."]
}
```
## Cache API (Olric)
### Set Value
```http
PUT /v1/cache/put
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "key": "user:123",
  "value": {"name": "Alice", "email": "alice@example.com"},
  "ttl": 300
}
```
**Response:**
```json
{
  "message": "value set successfully"
}
```
### Get Value
```http
GET /v1/cache/get?key=user:123
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "key": "user:123",
  "value": {"name": "Alice", "email": "alice@example.com"}
}
```
### Get Multiple Values
```http
POST /v1/cache/mget
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "keys": ["user:1", "user:2", "user:3"]
}
```
**Response:**
```json
{
  "results": {
    "user:1": {"name": "Alice"},
    "user:2": {"name": "Bob"},
    "user:3": null
  }
}
```
### Delete Value
```http
DELETE /v1/cache/delete?key=user:123
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "message": "deleted successfully"
}
```
### Scan Keys
```http
GET /v1/cache/scan?pattern=user:*&limit=100
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "keys": ["user:1", "user:2", "user:3"],
  "count": 3
}
```
## Database API (RQLite)
### Execute SQL
```http
POST /v1/rqlite/exec
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "sql": "INSERT INTO users (name, email) VALUES (?, ?)",
  "args": ["Alice", "alice@example.com"]
}
```
**Response:**
```json
{
  "last_insert_id": 123,
  "rows_affected": 1
}
```
### Query SQL
```http
POST /v1/rqlite/query
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "sql": "SELECT * FROM users WHERE id = ?",
  "args": [123]
}
```
**Response:**
```json
{
  "columns": ["id", "name", "email"],
  "rows": [
    [123, "Alice", "alice@example.com"]
  ]
}
```
### Get Schema
```http
GET /v1/rqlite/schema
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "tables": [
    {
      "name": "users",
      "schema": "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    }
  ]
}
```
## Pub/Sub API
### Publish Message
```http
POST /v1/pubsub/publish
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "topic": "chat",
  "data": "SGVsbG8sIFdvcmxkIQ==",
  "namespace": "default"
}
```
**Response:**
```json
{
  "message": "published successfully"
}
```
### List Topics
```http
GET /v1/pubsub/topics
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "topics": ["chat", "notifications", "events"]
}
```
### Subscribe (WebSocket)
```http
GET /v1/pubsub/ws?topic=chat
Authorization: Bearer your-api-key
Upgrade: websocket
```
**WebSocket Messages:**
Incoming (from server):
```json
{
  "type": "message",
  "topic": "chat",
  "data": "SGVsbG8sIFdvcmxkIQ==",
  "timestamp": "2024-01-20T10:30:00Z"
}
```
Outgoing (to server):
```json
{
  "type": "publish",
  "topic": "chat",
  "data": "SGVsbG8sIFdvcmxkIQ=="
}
```
### Presence
```http
GET /v1/pubsub/presence?topic=chat
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "topic": "chat",
  "members": [
    {"id": "user-123", "joined_at": "2024-01-20T10:00:00Z"},
    {"id": "user-456", "joined_at": "2024-01-20T10:15:00Z"}
  ]
}
```
## Serverless API (WASM)
### Deploy Function
```http
POST /v1/functions
Authorization: Bearer your-api-key
Content-Type: multipart/form-data

name: hello-world
namespace: default
description: Hello world function
wasm: <binary WASM file>
memory_limit: 64
timeout: 30
```
**Response:**
```json
{
  "id": "fn_abc123",
  "name": "hello-world",
  "namespace": "default",
  "wasm_cid": "QmXxx...",
  "version": 1,
  "created_at": "2024-01-20T10:30:00Z"
}
```
### Invoke Function
```http
POST /v1/functions/hello-world/invoke
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "name": "Alice"
}
```
**Response:**
```json
{
  "result": "Hello, Alice!",
  "execution_time_ms": 15,
  "memory_used_mb": 2.5
}
```
### List Functions
```http
GET /v1/functions?namespace=default
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "functions": [
    {
      "name": "hello-world",
      "description": "Hello world function",
      "version": 1,
      "created_at": "2024-01-20T10:30:00Z"
    }
  ]
}
```
### Delete Function
```http
DELETE /v1/functions/hello-world?namespace=default
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "message": "function deleted successfully"
}
```
### Get Function Logs
```http
GET /v1/functions/hello-world/logs?limit=100
Authorization: Bearer your-api-key
```
**Response:**
```json
{
  "logs": [
    {
      "timestamp": "2024-01-20T10:30:00Z",
      "level": "info",
      "message": "Function invoked",
      "invocation_id": "inv_xyz789"
    }
  ]
}
```
## Error Responses
All errors follow a consistent format:
```json
{
  "code": "NOT_FOUND",
  "message": "user with ID '123' not found",
  "details": {
    "resource": "user",
    "id": "123"
  },
  "trace_id": "trace-abc123"
}
```
### Common Error Codes
| Code | HTTP Status | Description |
|------|-------------|-------------|
| `VALIDATION_ERROR` | 400 | Invalid input |
| `UNAUTHORIZED` | 401 | Authentication required |
| `FORBIDDEN` | 403 | Permission denied |
| `NOT_FOUND` | 404 | Resource not found |
| `CONFLICT` | 409 | Resource already exists |
| `TIMEOUT` | 408 | Operation timeout |
| `RATE_LIMIT_EXCEEDED` | 429 | Too many requests |
| `SERVICE_UNAVAILABLE` | 503 | Service unavailable |
| `INTERNAL` | 500 | Internal server error |
## Rate Limiting
The API implements rate limiting per API key:
- **Default:** 100 requests per minute
- **Burst:** 200 requests
Rate limit headers:
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1611144000
```
When rate limited:
```json
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "rate limit exceeded",
  "details": {
    "limit": 100,
    "retry_after": 60
  }
}
```
## Pagination
List endpoints support pagination:
```http
GET /v1/functions?limit=10&offset=20
```
Response includes pagination metadata:
```json
{
  "data": [...],
  "pagination": {
    "total": 100,
    "limit": 10,
    "offset": 20,
    "has_more": true
  }
}
```
## Webhooks (Future)
Coming soon: webhook support for event notifications.
## Support
- API Issues: https://github.com/DeBrosOfficial/network/issues
- OpenAPI Spec: `openapi/gateway.yaml`
- SDK Documentation: `docs/CLIENT_SDK.md`

# Orama Network - Security Deployment Guide
**Date:** January 18, 2026
**Status:** Production-Ready
**Audit Completed By:** Claude Code Security Audit
---
## Executive Summary
This document outlines the security hardening measures applied to the 4-node Orama Network production cluster. All critical vulnerabilities identified in the security audit have been addressed.
**Security Status:** ✅ SECURED FOR PRODUCTION
---
## Server Inventory
| Server ID | IP Address | Domain | OS | Role |
|-----------|------------|--------|-----|------|
| VPS 1 | 51.83.128.181 | node-kv4la8.debros.network | Ubuntu 22.04 | Gateway + Cluster Node |
| VPS 2 | 194.61.28.7 | node-7prvNa.debros.network | Ubuntu 24.04 | Gateway + Cluster Node |
| VPS 3 | 83.171.248.66 | node-xn23dq.debros.network | Ubuntu 24.04 | Gateway + Cluster Node |
| VPS 4 | 62.72.44.87 | node-nns4n5.debros.network | Ubuntu 24.04 | Gateway + Cluster Node |
---
## Services Running on Each Server
| Service | Port(s) | Purpose | Public Access |
|---------|---------|---------|---------------|
| **orama-node** | 80, 443, 7001 | API Gateway | Yes (80, 443 only) |
| **rqlited** | 5001, 7002 | Distributed SQLite DB | Cluster only |
| **ipfs** | 4101, 4501, 8080 | Content-addressed storage | Cluster only |
| **ipfs-cluster** | 9094, 9098 | IPFS cluster management | Cluster only |
| **olric-server** | 3320, 3322 | Distributed cache | Cluster only |
| **anon** (Anyone proxy) | 9001, 9050, 9051 | Anonymity proxy | Cluster only |
| **libp2p** | 4001 | P2P networking | Yes (public P2P) |
| **SSH** | 22 | Remote access | Yes |
---
## Security Measures Implemented
### 1. Firewall Configuration (UFW)
**Status:** ✅ Enabled on all 4 servers
#### Public Ports (Open to Internet)
- **22/tcp** - SSH (with hardening)
- **80/tcp** - HTTP (redirects to HTTPS)
- **443/tcp** - HTTPS (Let's Encrypt production certificates)
- **4001/tcp** - libp2p swarm (P2P networking)
#### Cluster-Only Ports (Restricted to 4 Server IPs)
All the following ports are ONLY accessible from the 4 cluster IPs:
- **5001/tcp** - rqlite HTTP API
- **7001/tcp** - SNI Gateway
- **7002/tcp** - rqlite Raft consensus
- **9094/tcp** - IPFS Cluster API
- **9098/tcp** - IPFS Cluster communication
- **3322/tcp** - Olric distributed cache
- **4101/tcp** - IPFS swarm (cluster internal)
#### Firewall Rules Example
```bash
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp comment "SSH"
sudo ufw allow 80/tcp comment "HTTP"
sudo ufw allow 443/tcp comment "HTTPS"
sudo ufw allow 4001/tcp comment "libp2p swarm"
# Cluster-only access for sensitive services
sudo ufw allow from 51.83.128.181 to any port 5001 proto tcp
sudo ufw allow from 194.61.28.7 to any port 5001 proto tcp
sudo ufw allow from 83.171.248.66 to any port 5001 proto tcp
sudo ufw allow from 62.72.44.87 to any port 5001 proto tcp
# (repeat for ports 7001, 7002, 9094, 9098, 3322, 4101)
sudo ufw enable
```
### 2. SSH Hardening
**Location:** `/etc/ssh/sshd_config.d/99-hardening.conf`
**Configuration:**
```bash
PermitRootLogin yes # Root login allowed with SSH keys
PasswordAuthentication yes # Password auth enabled (you have keys configured)
PubkeyAuthentication yes # SSH key authentication enabled
PermitEmptyPasswords no # No empty passwords
X11Forwarding no # X11 disabled for security
MaxAuthTries 3 # Max 3 login attempts
ClientAliveInterval 300 # Keep-alive every 5 minutes
ClientAliveCountMax 2 # Disconnect after 2 failed keep-alives
```
**Your SSH Keys Added:**
- ✅ `ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPcGZPX2iHXWO8tuyyDkHPS5eByPOktkw3+ugcw79yQO`
- ✅ `ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDgCWmycaBN3aAZJcM2w4+Xi2zrTwN78W8oAiQywvMEkubqNNWHF6I3...`
Both keys are installed on all 4 servers in:
- VPS 1: `/home/ubuntu/.ssh/authorized_keys`
- VPS 2, 3, 4: `/root/.ssh/authorized_keys`
### 3. Fail2ban Protection
**Status:** ✅ Installed and running on all 4 servers
**Purpose:** Automatically bans IPs after failed SSH login attempts
**Check Status:**
```bash
sudo systemctl status fail2ban
```
### 4. Security Updates
**Status:** ✅ All security updates applied (as of Jan 18, 2026)
**Update Command:**
```bash
sudo apt update && sudo apt upgrade -y
```
### 5. Let's Encrypt TLS Certificates
**Status:** ✅ Production certificates (NOT staging)
**Configuration:**
- **Provider:** Let's Encrypt (ACME v2 Production)
- **Auto-renewal:** Enabled via autocert
- **Cache Directory:** `/home/debros/.orama/tls-cache/`
- **Domains:**
- node-kv4la8.debros.network (VPS 1)
- node-7prvNa.debros.network (VPS 2)
- node-xn23dq.debros.network (VPS 3)
- node-nns4n5.debros.network (VPS 4)
**Certificate Files:**
- Account key: `/home/debros/.orama/tls-cache/acme_account+key`
- Certificates auto-managed by autocert
**Verification:**
```bash
curl -I https://node-kv4la8.debros.network
# Should return valid SSL certificate
```
---
## Cluster Configuration
### RQLite Cluster
**Nodes:**
- 51.83.128.181:7002 (Leader)
- 194.61.28.7:7002
- 83.171.248.66:7002
- 62.72.44.87:7002
**Test Cluster Health:**
```bash
ssh ubuntu@51.83.128.181
curl -s http://localhost:5001/status | jq '.store.nodes'
```
**Expected Output:**
```json
[
  {"id": "194.61.28.7:7002", "addr": "194.61.28.7:7002", "suffrage": "Voter"},
  {"id": "51.83.128.181:7002", "addr": "51.83.128.181:7002", "suffrage": "Voter"},
  {"id": "62.72.44.87:7002", "addr": "62.72.44.87:7002", "suffrage": "Voter"},
  {"id": "83.171.248.66:7002", "addr": "83.171.248.66:7002", "suffrage": "Voter"}
]
```
### IPFS Cluster
**Test Cluster Health:**
```bash
ssh ubuntu@51.83.128.181
curl -s http://localhost:9094/id | jq '.cluster_peers'
```
**Expected:** All 4 peer IDs listed
### Olric Cache Cluster
**Port:** 3320 (localhost), 3322 (cluster communication)
**Test:**
```bash
ssh ubuntu@51.83.128.181
ss -tulpn | grep olric
```
---
## Access Credentials
### SSH Access
**VPS 1:**
```bash
ssh ubuntu@51.83.128.181
# OR using your SSH key:
ssh -i ~/.ssh/ssh-sotiris/id_ed25519 ubuntu@51.83.128.181
```
**VPS 2, 3, 4:**
```bash
ssh root@194.61.28.7
ssh root@83.171.248.66
ssh root@62.72.44.87
```
**Important:** Password authentication is still enabled, but your SSH keys are configured for passwordless access.
---
## Testing & Verification
### 1. Test External Port Access (From Your Machine)
```bash
# These should be BLOCKED (timeout or connection refused):
nc -zv 51.83.128.181 5001 # rqlite API - should be blocked
nc -zv 51.83.128.181 7002 # rqlite Raft - should be blocked
nc -zv 51.83.128.181 9094 # IPFS cluster - should be blocked
# These should be OPEN:
nc -zv 51.83.128.181 22 # SSH - should succeed
nc -zv 51.83.128.181 80 # HTTP - should succeed
nc -zv 51.83.128.181 443 # HTTPS - should succeed
nc -zv 51.83.128.181 4001 # libp2p - should succeed
```
### 2. Test Domain Access
```bash
curl -I https://node-kv4la8.debros.network
curl -I https://node-7prvNa.debros.network
curl -I https://node-xn23dq.debros.network
curl -I https://node-nns4n5.debros.network
```
All should return `HTTP/1.1 200 OK` or similar with valid SSL certificates.
### 3. Test Cluster Communication (From VPS 1)
```bash
ssh ubuntu@51.83.128.181
# Test rqlite cluster
curl -s http://localhost:5001/status | jq -r '.store.nodes[].id'
# Test IPFS cluster
curl -s http://localhost:9094/id | jq -r '.cluster_peers[]'
# Check all services running
ps aux | grep -E "(orama-node|rqlited|ipfs|olric)" | grep -v grep
```
---
## Maintenance & Operations
### Firewall Management
**View current rules:**
```bash
sudo ufw status numbered
```
**Add a new allowed IP for cluster services:**
```bash
sudo ufw allow from NEW_IP_ADDRESS to any port 5001 proto tcp
sudo ufw allow from NEW_IP_ADDRESS to any port 7002 proto tcp
# etc.
```
**Delete a rule:**
```bash
sudo ufw status numbered # Get rule number
sudo ufw delete [NUMBER]
```
### SSH Management
**Test SSH config without applying:**
```bash
sudo sshd -t
```
**Reload SSH after config changes:**
```bash
sudo systemctl reload ssh
```
**View SSH login attempts:**
```bash
sudo journalctl -u ssh | tail -50
```
### Fail2ban Management
**Check banned IPs:**
```bash
sudo fail2ban-client status sshd
```
**Unban an IP:**
```bash
sudo fail2ban-client set sshd unbanip IP_ADDRESS
```
### Security Updates
**Check for updates:**
```bash
apt list --upgradable
```
**Apply updates:**
```bash
sudo apt update && sudo apt upgrade -y
```
**Reboot if kernel updated:**
```bash
sudo reboot
```
---
## Security Improvements Completed
### Before Security Audit:
- ❌ No firewall enabled
- ❌ rqlite database exposed to internet (port 5001, 7002)
- ❌ IPFS cluster management exposed (port 9094, 9098)
- ❌ Olric cache exposed (port 3322)
- ❌ Root login enabled without restrictions (VPS 2, 3, 4)
- ❌ No fail2ban on 3 out of 4 servers
- ❌ 19-39 security updates pending
### After Security Hardening:
- ✅ UFW firewall enabled on all servers
- ✅ Sensitive ports restricted to cluster IPs only
- ✅ SSH hardened with key authentication
- ✅ Fail2ban protecting all servers
- ✅ All security updates applied
- ✅ Let's Encrypt production certificates verified
- ✅ Cluster communication tested and working
- ✅ External access verified (HTTP/HTTPS only)
---
## Recommended Next Steps (Optional)
Per your request these were not implemented, but they are recommended for future consideration:
1. **VPN/Private Networking** - Use WireGuard or Tailscale for encrypted cluster communication instead of firewall rules
2. **Automated Security Updates** - Enable unattended-upgrades for automatic security patches
3. **Monitoring & Alerting** - Set up Prometheus/Grafana for service monitoring
4. **Regular Security Audits** - Run `lynis` or `rkhunter` monthly for security checks
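For item 2, the standard Debian/Ubuntu mechanism is the `unattended-upgrades` package. Enabling it with `sudo dpkg-reconfigure --priority=low unattended-upgrades` writes `/etc/apt/apt.conf.d/20auto-upgrades` along these lines (a sketch; verify against your distribution):
```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```
By default only packages from the security origin are upgraded; the allowed origins can be tuned in `/etc/apt/apt.conf.d/50unattended-upgrades`.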
---
## Important Notes
### Let's Encrypt Configuration
The Orama Network gateway uses **autocert** from Go's `golang.org/x/crypto/acme/autocert` package. The configuration is in:
**File:** `/home/debros/.orama/configs/node.yaml`
**Relevant settings:**
```yaml
http_gateway:
https:
enabled: true
domain: "node-kv4la8.debros.network"
auto_cert: true
cache_dir: "/home/debros/.orama/tls-cache"
http_port: 80
https_port: 443
email: "admin@node-kv4la8.debros.network"
```
**Important:** There is NO `letsencrypt_staging` flag set, which means it defaults to **production Let's Encrypt**. This is correct for production deployment.
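A quick way to confirm this on each node is to check the config for the staging flag (a sketch, not part of the original runbook; the path matches the config file shown above):

```bash
# Absence of the flag means the autocert client talks to production ACME.
grep -sn "letsencrypt_staging" /home/debros/.orama/configs/node.yaml \
  || echo "no staging flag: production Let's Encrypt"

# Optional live check (needs network): the certificate issuer should mention
# "Let's Encrypt", never "STAGING".
# echo | openssl s_client -connect node-kv4la8.debros.network:443 2>/dev/null \
#   | openssl x509 -noout -issuer
```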
### Firewall Persistence
UFW rules are persistent across reboots. The firewall will automatically start on boot.
### SSH Key Access
Both of your SSH keys are configured on all servers. You can access:
- VPS 1: `ssh -i ~/.ssh/ssh-sotiris/id_ed25519 ubuntu@51.83.128.181`
- VPS 2-4: `ssh -i ~/.ssh/ssh-sotiris/id_ed25519 root@IP_ADDRESS`
Password authentication is still enabled as a fallback, but keys are recommended.
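If you later decide to drop the password fallback (only after confirming key logins work on every server), a minimal sketch is to add one line to the existing hardening drop-in `/etc/ssh/sshd_config.d/99-hardening.conf`:
```
PasswordAuthentication no
```
Then validate and apply with `sudo sshd -t && sudo systemctl reload ssh`, keeping a second session open in case of lockout.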
---
## Emergency Access
If you get locked out:
1. **VPS Provider Console:** All major VPS providers offer web-based console access
2. **Password Access:** Password auth is still enabled on all servers
3. **SSH Keys:** Two keys configured for redundancy
**Disable firewall temporarily (emergency only):**
```bash
sudo ufw disable
# Fix the issue
sudo ufw enable
```
---
## Verification Checklist
Use this checklist to verify the security hardening:
- [ ] All 4 servers have UFW firewall enabled
- [ ] SSH is hardened (MaxAuthTries 3, X11Forwarding no)
- [ ] Your SSH keys work on all servers
- [ ] Fail2ban is running on all servers
- [ ] Security updates are current
- [ ] rqlite port 5001 is NOT accessible from internet
- [ ] rqlite port 7002 is NOT accessible from internet
- [ ] IPFS cluster ports 9094, 9098 are NOT accessible from internet
- [ ] Domains are accessible via HTTPS with valid certificates
- [ ] rqlite cluster shows all 4 nodes
- [ ] IPFS cluster shows all 4 peers
- [ ] All services are running (5 processes per server)
---
## Contact & Support
For issues or questions about this deployment:
- **Security Audit Date:** January 18, 2026
- **Configuration Files:** `/home/debros/.orama/configs/`
- **Firewall Rules:** `/etc/ufw/`
- **SSH Config:** `/etc/ssh/sshd_config.d/99-hardening.conf`
- **TLS Certs:** `/home/debros/.orama/tls-cache/`
---
## Changelog
### January 18, 2026 - Production Security Hardening
**Changes:**
1. Added UFW firewall rules on all 4 VPS servers
2. Restricted sensitive ports (5001, 7002, 9094, 9098, 3322, 4101) to cluster IPs only
3. Hardened SSH configuration
4. Added your 2 SSH keys to all servers
5. Installed fail2ban on VPS 1, 2, 3 (VPS 4 already had it)
6. Applied all pending security updates (23-39 packages per server)
7. Verified Let's Encrypt is using production (not staging)
8. Tested all services: rqlite, IPFS, libp2p, Olric clusters
9. Verified all 4 domains are accessible via HTTPS
**Result:** Production-ready secure deployment ✅
---
**END OF DEPLOYMENT GUIDE**


@@ -5,14 +5,18 @@ package e2e
import (
"bytes"
"context"
"crypto/tls"
"database/sql"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"math/rand"
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
"sync"
"testing"
"time"
@@ -20,6 +24,7 @@ import (
"github.com/DeBrosOfficial/network/pkg/client"
"github.com/DeBrosOfficial/network/pkg/config"
"github.com/DeBrosOfficial/network/pkg/ipfs"
"github.com/gorilla/websocket"
_ "github.com/mattn/go-sqlite3"
"go.uber.org/zap"
"gopkg.in/yaml.v2"
@@ -84,6 +89,14 @@ func GetGatewayURL() string {
}
cacheMutex.RUnlock()
// Check environment variable first
if envURL := os.Getenv("GATEWAY_URL"); envURL != "" {
cacheMutex.Lock()
gatewayURLCache = envURL
cacheMutex.Unlock()
return envURL
}
// Try to load from gateway config
gwCfg, err := loadGatewayConfig()
if err == nil {
@@ -135,14 +148,26 @@ func GetRQLiteNodes() []string {
// queryAPIKeyFromRQLite queries the SQLite database directly for an API key
func queryAPIKeyFromRQLite() (string, error) {
// Build database path from bootstrap/node config
// 1. Check environment variable first
if envKey := os.Getenv("DEBROS_API_KEY"); envKey != "" {
return envKey, nil
}
// 2. Build database path from bootstrap/node config
homeDir, err := os.UserHomeDir()
if err != nil {
return "", fmt.Errorf("failed to get home directory: %w", err)
}
// Try all node data directories
// Try all node data directories (both production and development paths)
dbPaths := []string{
// Development paths (~/.orama/node-x/...)
filepath.Join(homeDir, ".orama", "node-1", "rqlite", "db.sqlite"),
filepath.Join(homeDir, ".orama", "node-2", "rqlite", "db.sqlite"),
filepath.Join(homeDir, ".orama", "node-3", "rqlite", "db.sqlite"),
filepath.Join(homeDir, ".orama", "node-4", "rqlite", "db.sqlite"),
filepath.Join(homeDir, ".orama", "node-5", "rqlite", "db.sqlite"),
// Production paths (~/.orama/data/node-x/...)
filepath.Join(homeDir, ".orama", "data", "node-1", "rqlite", "db.sqlite"),
filepath.Join(homeDir, ".orama", "data", "node-2", "rqlite", "db.sqlite"),
filepath.Join(homeDir, ".orama", "data", "node-3", "rqlite", "db.sqlite"),
@@ -363,7 +388,7 @@ func SkipIfMissingGateway(t *testing.T) {
return
}
resp, err := http.DefaultClient.Do(req)
resp, err := NewHTTPClient(5 * time.Second).Do(req)
if err != nil {
t.Skip("Gateway not accessible; tests skipped")
return
@@ -378,7 +403,7 @@ func IsGatewayReady(ctx context.Context) bool {
if err != nil {
return false
}
resp, err := http.DefaultClient.Do(req)
resp, err := NewHTTPClient(5 * time.Second).Do(req)
if err != nil {
return false
}
@@ -391,7 +416,11 @@ func NewHTTPClient(timeout time.Duration) *http.Client {
if timeout == 0 {
timeout = 30 * time.Second
}
return &http.Client{Timeout: timeout}
// Skip TLS verification for testing against self-signed certificates
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
return &http.Client{Timeout: timeout, Transport: transport}
}
// HTTPRequest is a helper for making authenticated HTTP requests
@@ -644,3 +673,296 @@ func CleanupCacheEntry(t *testing.T, dmapName, key string) {
t.Logf("warning: delete cache entry returned status %d", status)
}
}
// ============================================================================
// WebSocket PubSub Client for E2E Tests
// ============================================================================
// WSPubSubClient is a WebSocket-based PubSub client that connects to the gateway
type WSPubSubClient struct {
t *testing.T
conn *websocket.Conn
topic string
handlers []func(topic string, data []byte) error
msgChan chan []byte
doneChan chan struct{}
mu sync.RWMutex
writeMu sync.Mutex // Protects concurrent writes to WebSocket
closed bool
}
// WSPubSubMessage represents a message received from the gateway
type WSPubSubMessage struct {
Data string `json:"data"` // base64 encoded
Timestamp int64 `json:"timestamp"` // unix milliseconds
Topic string `json:"topic"`
}
// NewWSPubSubClient creates a new WebSocket PubSub client connected to a topic
func NewWSPubSubClient(t *testing.T, topic string) (*WSPubSubClient, error) {
t.Helper()
// Build WebSocket URL
gatewayURL := GetGatewayURL()
wsURL := strings.Replace(gatewayURL, "http://", "ws://", 1)
wsURL = strings.Replace(wsURL, "https://", "wss://", 1)
u, err := url.Parse(wsURL + "/v1/pubsub/ws")
if err != nil {
return nil, fmt.Errorf("failed to parse WebSocket URL: %w", err)
}
q := u.Query()
q.Set("topic", topic)
u.RawQuery = q.Encode()
// Set up headers with authentication
headers := http.Header{}
if apiKey := GetAPIKey(); apiKey != "" {
headers.Set("Authorization", "Bearer "+apiKey)
}
// Connect to WebSocket
dialer := websocket.Dialer{
HandshakeTimeout: 10 * time.Second,
}
conn, resp, err := dialer.Dial(u.String(), headers)
if err != nil {
if resp != nil {
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
return nil, fmt.Errorf("websocket dial failed (status %d): %w - body: %s", resp.StatusCode, err, string(body))
}
return nil, fmt.Errorf("websocket dial failed: %w", err)
}
client := &WSPubSubClient{
t: t,
conn: conn,
topic: topic,
handlers: make([]func(topic string, data []byte) error, 0),
msgChan: make(chan []byte, 128),
doneChan: make(chan struct{}),
}
// Start reader goroutine
go client.readLoop()
return client, nil
}
// NewWSPubSubPresenceClient creates a new WebSocket PubSub client with presence parameters
func NewWSPubSubPresenceClient(t *testing.T, topic, memberID string, meta map[string]interface{}) (*WSPubSubClient, error) {
t.Helper()
// Build WebSocket URL
gatewayURL := GetGatewayURL()
wsURL := strings.Replace(gatewayURL, "http://", "ws://", 1)
wsURL = strings.Replace(wsURL, "https://", "wss://", 1)
u, err := url.Parse(wsURL + "/v1/pubsub/ws")
if err != nil {
return nil, fmt.Errorf("failed to parse WebSocket URL: %w", err)
}
q := u.Query()
q.Set("topic", topic)
q.Set("presence", "true")
q.Set("member_id", memberID)
if meta != nil {
metaJSON, _ := json.Marshal(meta)
q.Set("member_meta", string(metaJSON))
}
u.RawQuery = q.Encode()
// Set up headers with authentication
headers := http.Header{}
if apiKey := GetAPIKey(); apiKey != "" {
headers.Set("Authorization", "Bearer "+apiKey)
}
// Connect to WebSocket
dialer := websocket.Dialer{
HandshakeTimeout: 10 * time.Second,
}
conn, resp, err := dialer.Dial(u.String(), headers)
if err != nil {
if resp != nil {
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
return nil, fmt.Errorf("websocket dial failed (status %d): %w - body: %s", resp.StatusCode, err, string(body))
}
return nil, fmt.Errorf("websocket dial failed: %w", err)
}
client := &WSPubSubClient{
t: t,
conn: conn,
topic: topic,
handlers: make([]func(topic string, data []byte) error, 0),
msgChan: make(chan []byte, 128),
doneChan: make(chan struct{}),
}
// Start reader goroutine
go client.readLoop()
return client, nil
}
// readLoop reads messages from the WebSocket and dispatches to handlers
func (c *WSPubSubClient) readLoop() {
defer close(c.doneChan)
for {
_, message, err := c.conn.ReadMessage()
if err != nil {
c.mu.RLock()
closed := c.closed
c.mu.RUnlock()
if !closed {
// Only log if not intentionally closed
if !websocket.IsCloseError(err, websocket.CloseNormalClosure, websocket.CloseGoingAway) {
c.t.Logf("websocket read error: %v", err)
}
}
return
}
// Parse the message envelope
var msg WSPubSubMessage
if err := json.Unmarshal(message, &msg); err != nil {
c.t.Logf("failed to unmarshal message: %v", err)
continue
}
// Decode base64 data
data, err := base64.StdEncoding.DecodeString(msg.Data)
if err != nil {
c.t.Logf("failed to decode base64 data: %v", err)
continue
}
// Send to message channel
select {
case c.msgChan <- data:
default:
c.t.Logf("message channel full, dropping message")
}
// Dispatch to handlers
c.mu.RLock()
handlers := make([]func(topic string, data []byte) error, len(c.handlers))
copy(handlers, c.handlers)
c.mu.RUnlock()
for _, handler := range handlers {
if err := handler(msg.Topic, data); err != nil {
c.t.Logf("handler error: %v", err)
}
}
}
}
// Subscribe adds a message handler
func (c *WSPubSubClient) Subscribe(handler func(topic string, data []byte) error) {
c.mu.Lock()
defer c.mu.Unlock()
c.handlers = append(c.handlers, handler)
}
// Publish sends a message to the topic
func (c *WSPubSubClient) Publish(data []byte) error {
c.mu.RLock()
closed := c.closed
c.mu.RUnlock()
if closed {
return fmt.Errorf("client is closed")
}
// Protect concurrent writes to WebSocket
c.writeMu.Lock()
defer c.writeMu.Unlock()
return c.conn.WriteMessage(websocket.TextMessage, data)
}
// ReceiveWithTimeout waits for a message with timeout
func (c *WSPubSubClient) ReceiveWithTimeout(timeout time.Duration) ([]byte, error) {
select {
case msg := <-c.msgChan:
return msg, nil
case <-time.After(timeout):
return nil, fmt.Errorf("timeout waiting for message")
case <-c.doneChan:
return nil, fmt.Errorf("connection closed")
}
}
// Close closes the WebSocket connection
func (c *WSPubSubClient) Close() error {
c.mu.Lock()
if c.closed {
c.mu.Unlock()
return nil
}
c.closed = true
c.mu.Unlock()
// Send close message
_ = c.conn.WriteMessage(websocket.CloseMessage,
websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
// Close connection
return c.conn.Close()
}
// Topic returns the topic this client is subscribed to
func (c *WSPubSubClient) Topic() string {
return c.topic
}
// WSPubSubClientPair represents a publisher and subscriber pair for testing
type WSPubSubClientPair struct {
Publisher *WSPubSubClient
Subscriber *WSPubSubClient
Topic string
}
// NewWSPubSubClientPair creates a publisher and subscriber pair for a topic
func NewWSPubSubClientPair(t *testing.T, topic string) (*WSPubSubClientPair, error) {
t.Helper()
// Create subscriber first
sub, err := NewWSPubSubClient(t, topic)
if err != nil {
return nil, fmt.Errorf("failed to create subscriber: %w", err)
}
// Small delay to ensure subscriber is registered
time.Sleep(100 * time.Millisecond)
// Create publisher
pub, err := NewWSPubSubClient(t, topic)
if err != nil {
sub.Close()
return nil, fmt.Errorf("failed to create publisher: %w", err)
}
return &WSPubSubClientPair{
Publisher: pub,
Subscriber: sub,
Topic: topic,
}, nil
}
// Close closes both publisher and subscriber
func (p *WSPubSubClientPair) Close() {
if p.Publisher != nil {
p.Publisher.Close()
}
if p.Subscriber != nil {
p.Subscriber.Close()
}
}


@@ -3,82 +3,46 @@
package e2e
import (
"context"
"fmt"
"sync"
"testing"
"time"
)
func newMessageCollector(ctx context.Context, buffer int) (chan []byte, func(string, []byte) error) {
if buffer <= 0 {
buffer = 1
}
ch := make(chan []byte, buffer)
handler := func(_ string, data []byte) error {
copied := append([]byte(nil), data...)
select {
case ch <- copied:
case <-ctx.Done():
}
return nil
}
return ch, handler
}
func waitForMessage(ctx context.Context, ch <-chan []byte) ([]byte, error) {
select {
case msg := <-ch:
return msg, nil
case <-ctx.Done():
return nil, fmt.Errorf("context finished while waiting for pubsub message: %w", ctx.Err())
}
}
// TestPubSub_SubscribePublish tests basic pub/sub functionality via WebSocket
func TestPubSub_SubscribePublish(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Create two clients
client1 := NewNetworkClient(t)
client2 := NewNetworkClient(t)
if err := client1.Connect(); err != nil {
t.Fatalf("client1 connect failed: %v", err)
}
defer client1.Disconnect()
if err := client2.Connect(); err != nil {
t.Fatalf("client2 connect failed: %v", err)
}
defer client2.Disconnect()
topic := GenerateTopic()
message := "test-message-from-client1"
message := "test-message-from-publisher"
// Subscribe on client2
messageCh, handler := newMessageCollector(ctx, 1)
if err := client2.PubSub().Subscribe(ctx, topic, handler); err != nil {
t.Fatalf("subscribe failed: %v", err)
// Create subscriber first
subscriber, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber: %v", err)
}
defer client2.PubSub().Unsubscribe(ctx, topic)
defer subscriber.Close()
// Give subscription time to propagate and mesh to form
Delay(2000)
// Give subscriber time to register
Delay(200)
// Publish from client1
if err := client1.PubSub().Publish(ctx, topic, []byte(message)); err != nil {
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish message
if err := publisher.Publish([]byte(message)); err != nil {
t.Fatalf("publish failed: %v", err)
}
// Receive message on client2
recvCtx, recvCancel := context.WithTimeout(ctx, 10*time.Second)
defer recvCancel()
msg, err := waitForMessage(recvCtx, messageCh)
// Receive message on subscriber
msg, err := subscriber.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("receive failed: %v", err)
}
@@ -88,154 +52,126 @@ func TestPubSub_SubscribePublish(t *testing.T) {
}
}
// TestPubSub_MultipleSubscribers tests that multiple subscribers receive the same message
func TestPubSub_MultipleSubscribers(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Create three clients
clientPub := NewNetworkClient(t)
clientSub1 := NewNetworkClient(t)
clientSub2 := NewNetworkClient(t)
if err := clientPub.Connect(); err != nil {
t.Fatalf("publisher connect failed: %v", err)
}
defer clientPub.Disconnect()
if err := clientSub1.Connect(); err != nil {
t.Fatalf("subscriber1 connect failed: %v", err)
}
defer clientSub1.Disconnect()
if err := clientSub2.Connect(); err != nil {
t.Fatalf("subscriber2 connect failed: %v", err)
}
defer clientSub2.Disconnect()
topic := GenerateTopic()
message1 := "message-for-sub1"
message2 := "message-for-sub2"
message1 := "message-1"
message2 := "message-2"
// Subscribe on both clients
sub1Ch, sub1Handler := newMessageCollector(ctx, 4)
if err := clientSub1.PubSub().Subscribe(ctx, topic, sub1Handler); err != nil {
t.Fatalf("subscribe1 failed: %v", err)
// Create two subscribers
sub1, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber1: %v", err)
}
defer clientSub1.PubSub().Unsubscribe(ctx, topic)
defer sub1.Close()
sub2Ch, sub2Handler := newMessageCollector(ctx, 4)
if err := clientSub2.PubSub().Subscribe(ctx, topic, sub2Handler); err != nil {
t.Fatalf("subscribe2 failed: %v", err)
sub2, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber2: %v", err)
}
defer clientSub2.PubSub().Unsubscribe(ctx, topic)
defer sub2.Close()
// Give subscriptions time to propagate
Delay(500)
// Give subscribers time to register
Delay(200)
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish first message
if err := clientPub.PubSub().Publish(ctx, topic, []byte(message1)); err != nil {
if err := publisher.Publish([]byte(message1)); err != nil {
t.Fatalf("publish1 failed: %v", err)
}
// Both subscribers should receive first message
recvCtx, recvCancel := context.WithTimeout(ctx, 10*time.Second)
defer recvCancel()
msg1a, err := waitForMessage(recvCtx, sub1Ch)
msg1a, err := sub1.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("sub1 receive1 failed: %v", err)
}
if string(msg1a) != message1 {
t.Fatalf("sub1: expected %q, got %q", message1, string(msg1a))
}
msg1b, err := waitForMessage(recvCtx, sub2Ch)
msg1b, err := sub2.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("sub2 receive1 failed: %v", err)
}
if string(msg1b) != message1 {
t.Fatalf("sub2: expected %q, got %q", message1, string(msg1b))
}
// Publish second message
if err := clientPub.PubSub().Publish(ctx, topic, []byte(message2)); err != nil {
if err := publisher.Publish([]byte(message2)); err != nil {
t.Fatalf("publish2 failed: %v", err)
}
// Both subscribers should receive second message
recvCtx2, recvCancel2 := context.WithTimeout(ctx, 10*time.Second)
defer recvCancel2()
msg2a, err := waitForMessage(recvCtx2, sub1Ch)
msg2a, err := sub1.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("sub1 receive2 failed: %v", err)
}
if string(msg2a) != message2 {
t.Fatalf("sub1: expected %q, got %q", message2, string(msg2a))
}
msg2b, err := waitForMessage(recvCtx2, sub2Ch)
msg2b, err := sub2.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("sub2 receive2 failed: %v", err)
}
if string(msg2b) != message2 {
t.Fatalf("sub2: expected %q, got %q", message2, string(msg2b))
}
}
// TestPubSub_Deduplication tests that multiple identical messages are all received
func TestPubSub_Deduplication(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Create two clients
clientPub := NewNetworkClient(t)
clientSub := NewNetworkClient(t)
if err := clientPub.Connect(); err != nil {
t.Fatalf("publisher connect failed: %v", err)
}
defer clientPub.Disconnect()
if err := clientSub.Connect(); err != nil {
t.Fatalf("subscriber connect failed: %v", err)
}
defer clientSub.Disconnect()
topic := GenerateTopic()
message := "duplicate-test-message"
// Subscribe on client
messageCh, handler := newMessageCollector(ctx, 3)
if err := clientSub.PubSub().Subscribe(ctx, topic, handler); err != nil {
t.Fatalf("subscribe failed: %v", err)
// Create subscriber
subscriber, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber: %v", err)
}
defer clientSub.PubSub().Unsubscribe(ctx, topic)
defer subscriber.Close()
// Give subscription time to propagate and mesh to form
Delay(2000)
// Give subscriber time to register
Delay(200)
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish the same message multiple times
for i := 0; i < 3; i++ {
if err := clientPub.PubSub().Publish(ctx, topic, []byte(message)); err != nil {
if err := publisher.Publish([]byte(message)); err != nil {
t.Fatalf("publish %d failed: %v", i, err)
}
// Small delay between publishes
Delay(50)
}
// Receive messages - should get all (no dedup filter on subscribe)
recvCtx, recvCancel := context.WithTimeout(ctx, 5*time.Second)
defer recvCancel()
// Receive messages - should get all (no dedup filter)
receivedCount := 0
for receivedCount < 3 {
if _, err := waitForMessage(recvCtx, messageCh); err != nil {
_, err := subscriber.ReceiveWithTimeout(5 * time.Second)
if err != nil {
break
}
receivedCount++
@@ -244,40 +180,35 @@ func TestPubSub_Deduplication(t *testing.T) {
if receivedCount < 1 {
t.Fatalf("expected to receive at least 1 message, got %d", receivedCount)
}
t.Logf("received %d messages", receivedCount)
}
// TestPubSub_ConcurrentPublish tests concurrent message publishing
func TestPubSub_ConcurrentPublish(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Create clients
clientPub := NewNetworkClient(t)
clientSub := NewNetworkClient(t)
if err := clientPub.Connect(); err != nil {
t.Fatalf("publisher connect failed: %v", err)
}
defer clientPub.Disconnect()
if err := clientSub.Connect(); err != nil {
t.Fatalf("subscriber connect failed: %v", err)
}
defer clientSub.Disconnect()
topic := GenerateTopic()
numMessages := 10
// Subscribe
messageCh, handler := newMessageCollector(ctx, numMessages)
if err := clientSub.PubSub().Subscribe(ctx, topic, handler); err != nil {
t.Fatalf("subscribe failed: %v", err)
// Create subscriber
subscriber, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber: %v", err)
}
defer clientSub.PubSub().Unsubscribe(ctx, topic)
defer subscriber.Close()
// Give subscription time to propagate and mesh to form
Delay(2000)
// Give subscriber time to register
Delay(200)
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish multiple messages concurrently
var wg sync.WaitGroup
@@ -286,7 +217,7 @@ func TestPubSub_ConcurrentPublish(t *testing.T) {
go func(idx int) {
defer wg.Done()
msg := fmt.Sprintf("concurrent-msg-%d", idx)
if err := clientPub.PubSub().Publish(ctx, topic, []byte(msg)); err != nil {
if err := publisher.Publish([]byte(msg)); err != nil {
t.Logf("publish %d failed: %v", idx, err)
}
}(i)
@@ -294,12 +225,10 @@ func TestPubSub_ConcurrentPublish(t *testing.T) {
wg.Wait()
// Receive messages
recvCtx, recvCancel := context.WithTimeout(ctx, 10*time.Second)
defer recvCancel()
receivedCount := 0
for receivedCount < numMessages {
if _, err := waitForMessage(recvCtx, messageCh); err != nil {
_, err := subscriber.ReceiveWithTimeout(10 * time.Second)
if err != nil {
break
}
receivedCount++
@@ -310,107 +239,110 @@
}
}
// TestPubSub_TopicIsolation tests that messages are isolated to their topics
func TestPubSub_TopicIsolation(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Create clients
clientPub := NewNetworkClient(t)
clientSub := NewNetworkClient(t)
if err := clientPub.Connect(); err != nil {
t.Fatalf("publisher connect failed: %v", err)
}
defer clientPub.Disconnect()
if err := clientSub.Connect(); err != nil {
t.Fatalf("subscriber connect failed: %v", err)
}
defer clientSub.Disconnect()
topic1 := GenerateTopic()
topic2 := GenerateTopic()
// Subscribe to topic1
messageCh, handler := newMessageCollector(ctx, 2)
if err := clientSub.PubSub().Subscribe(ctx, topic1, handler); err != nil {
t.Fatalf("subscribe1 failed: %v", err)
}
defer clientSub.PubSub().Unsubscribe(ctx, topic1)
// Give subscription time to propagate and mesh to form
Delay(2000)
// Publish to topic2
msg1 := "message-on-topic1"
msg2 := "message-on-topic2"
if err := clientPub.PubSub().Publish(ctx, topic2, []byte(msg2)); err != nil {
// Create subscriber for topic1
sub1, err := NewWSPubSubClient(t, topic1)
if err != nil {
t.Fatalf("failed to create subscriber1: %v", err)
}
defer sub1.Close()
// Create subscriber for topic2
sub2, err := NewWSPubSubClient(t, topic2)
if err != nil {
t.Fatalf("failed to create subscriber2: %v", err)
}
defer sub2.Close()
// Give subscribers time to register
Delay(200)
// Create publishers
pub1, err := NewWSPubSubClient(t, topic1)
if err != nil {
t.Fatalf("failed to create publisher1: %v", err)
}
defer pub1.Close()
pub2, err := NewWSPubSubClient(t, topic2)
if err != nil {
t.Fatalf("failed to create publisher2: %v", err)
}
defer pub2.Close()
// Give connections time to stabilize
Delay(200)
// Publish to topic2 first
if err := pub2.Publish([]byte(msg2)); err != nil {
t.Fatalf("publish2 failed: %v", err)
}
// Publish to topic1
msg1 := "message-on-topic1"
if err := clientPub.PubSub().Publish(ctx, topic1, []byte(msg1)); err != nil {
if err := pub1.Publish([]byte(msg1)); err != nil {
t.Fatalf("publish1 failed: %v", err)
}
// Receive on sub1 - should get msg1 only
recvCtx, recvCancel := context.WithTimeout(ctx, 10*time.Second)
defer recvCancel()
msg, err := waitForMessage(recvCtx, messageCh)
// Sub1 should receive msg1 only
received1, err := sub1.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("receive failed: %v", err)
t.Fatalf("sub1 receive failed: %v", err)
}
if string(received1) != msg1 {
t.Fatalf("sub1: expected %q, got %q", msg1, string(received1))
}
if string(msg) != msg1 {
t.Fatalf("expected %q, got %q", msg1, string(msg))
// Sub2 should receive msg2 only
received2, err := sub2.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("sub2 receive failed: %v", err)
}
if string(received2) != msg2 {
t.Fatalf("sub2: expected %q, got %q", msg2, string(received2))
}
}
// TestPubSub_EmptyMessage tests sending and receiving empty messages
func TestPubSub_EmptyMessage(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Create clients
clientPub := NewNetworkClient(t)
clientSub := NewNetworkClient(t)
if err := clientPub.Connect(); err != nil {
t.Fatalf("publisher connect failed: %v", err)
}
defer clientPub.Disconnect()
if err := clientSub.Connect(); err != nil {
t.Fatalf("subscriber connect failed: %v", err)
}
defer clientSub.Disconnect()
topic := GenerateTopic()
// Subscribe
messageCh, handler := newMessageCollector(ctx, 1)
if err := clientSub.PubSub().Subscribe(ctx, topic, handler); err != nil {
t.Fatalf("subscribe failed: %v", err)
// Create subscriber
subscriber, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber: %v", err)
}
defer clientSub.PubSub().Unsubscribe(ctx, topic)
defer subscriber.Close()
// Give subscription time to propagate and mesh to form
Delay(2000)
// Give subscriber time to register
Delay(200)
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish empty message
if err := clientPub.PubSub().Publish(ctx, topic, []byte("")); err != nil {
if err := publisher.Publish([]byte("")); err != nil {
t.Fatalf("publish empty failed: %v", err)
}
// Receive on sub - should get empty message
recvCtx, recvCancel := context.WithTimeout(ctx, 10*time.Second)
defer recvCancel()
msg, err := waitForMessage(recvCtx, messageCh)
// Receive on subscriber - should get empty message
msg, err := subscriber.ReceiveWithTimeout(10 * time.Second)
if err != nil {
t.Fatalf("receive failed: %v", err)
}
@@ -419,3 +351,111 @@ func TestPubSub_EmptyMessage(t *testing.T) {
t.Fatalf("expected empty message, got %q", string(msg))
}
}
// TestPubSub_LargeMessage tests sending and receiving large messages
func TestPubSub_LargeMessage(t *testing.T) {
SkipIfMissingGateway(t)
topic := GenerateTopic()
// Create a large message (100KB)
largeMessage := make([]byte, 100*1024)
for i := range largeMessage {
largeMessage[i] = byte(i % 256)
}
// Create subscriber
subscriber, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber: %v", err)
}
defer subscriber.Close()
// Give subscriber time to register
Delay(200)
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish large message
if err := publisher.Publish(largeMessage); err != nil {
t.Fatalf("publish large message failed: %v", err)
}
// Receive on subscriber
msg, err := subscriber.ReceiveWithTimeout(30 * time.Second)
if err != nil {
t.Fatalf("receive failed: %v", err)
}
if len(msg) != len(largeMessage) {
t.Fatalf("expected message of length %d, got %d", len(largeMessage), len(msg))
}
// Verify content
for i := range msg {
if msg[i] != largeMessage[i] {
t.Fatalf("message content mismatch at byte %d", i)
}
}
}
// TestPubSub_RapidPublish tests rapid message publishing
func TestPubSub_RapidPublish(t *testing.T) {
SkipIfMissingGateway(t)
topic := GenerateTopic()
numMessages := 50
// Create subscriber
subscriber, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create subscriber: %v", err)
}
defer subscriber.Close()
// Give subscriber time to register
Delay(200)
// Create publisher
publisher, err := NewWSPubSubClient(t, topic)
if err != nil {
t.Fatalf("failed to create publisher: %v", err)
}
defer publisher.Close()
// Give connections time to stabilize
Delay(200)
// Publish messages rapidly
for i := 0; i < numMessages; i++ {
msg := fmt.Sprintf("rapid-msg-%d", i)
if err := publisher.Publish([]byte(msg)); err != nil {
t.Fatalf("publish %d failed: %v", i, err)
}
}
// Receive messages
receivedCount := 0
for receivedCount < numMessages {
_, err := subscriber.ReceiveWithTimeout(10 * time.Second)
if err != nil {
break
}
receivedCount++
}
// Allow some message loss due to buffering
minExpected := numMessages * 80 / 100 // 80% minimum
if receivedCount < minExpected {
t.Fatalf("expected at least %d messages, got %d", minExpected, receivedCount)
}
t.Logf("received %d/%d messages (%.1f%%)", receivedCount, numMessages, float64(receivedCount)*100/float64(numMessages))
}

e2e/pubsub_presence_test.go Normal file

@ -0,0 +1,122 @@
//go:build e2e
package e2e
import (
"context"
"encoding/json"
"fmt"
"net/http"
"testing"
"time"
)
func TestPubSub_Presence(t *testing.T) {
SkipIfMissingGateway(t)
topic := GenerateTopic()
memberID := "user123"
memberMeta := map[string]interface{}{"name": "Alice"}
// 1. Subscribe with presence
client1, err := NewWSPubSubPresenceClient(t, topic, memberID, memberMeta)
if err != nil {
t.Fatalf("failed to create presence client: %v", err)
}
defer client1.Close()
// Wait for join event
msg, err := client1.ReceiveWithTimeout(5 * time.Second)
if err != nil {
t.Fatalf("did not receive join event: %v", err)
}
var event map[string]interface{}
if err := json.Unmarshal(msg, &event); err != nil {
t.Fatalf("failed to unmarshal event: %v", err)
}
if event["type"] != "presence.join" {
t.Fatalf("expected presence.join event, got %v", event["type"])
}
if event["member_id"] != memberID {
t.Fatalf("expected member_id %s, got %v", memberID, event["member_id"])
}
// 2. Query presence endpoint
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
req := &HTTPRequest{
Method: http.MethodGet,
URL: fmt.Sprintf("%s/v1/pubsub/presence?topic=%s", GetGatewayURL(), topic),
}
body, status, err := req.Do(ctx)
if err != nil {
t.Fatalf("presence query failed: %v", err)
}
if status != http.StatusOK {
t.Fatalf("expected status 200, got %d", status)
}
var resp map[string]interface{}
if err := DecodeJSON(body, &resp); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if resp["count"] != float64(1) {
t.Fatalf("expected count 1, got %v", resp["count"])
}
members := resp["members"].([]interface{})
if len(members) != 1 {
t.Fatalf("expected 1 member, got %d", len(members))
}
member := members[0].(map[string]interface{})
if member["member_id"] != memberID {
t.Fatalf("expected member_id %s, got %v", memberID, member["member_id"])
}
// 3. Subscribe second member
memberID2 := "user456"
client2, err := NewWSPubSubPresenceClient(t, topic, memberID2, nil)
if err != nil {
t.Fatalf("failed to create second presence client: %v", err)
}
// We'll close client2 later to test leave event
// Client1 should receive join event for Client2
msg2, err := client1.ReceiveWithTimeout(5 * time.Second)
if err != nil {
t.Fatalf("client1 did not receive join event for client2: %v", err)
}
if err := json.Unmarshal(msg2, &event); err != nil {
t.Fatalf("failed to unmarshal event: %v", err)
}
if event["type"] != "presence.join" || event["member_id"] != memberID2 {
t.Fatalf("expected presence.join for %s, got %v for %v", memberID2, event["type"], event["member_id"])
}
// 4. Disconnect client2 and verify leave event
client2.Close()
msg3, err := client1.ReceiveWithTimeout(5 * time.Second)
if err != nil {
t.Fatalf("client1 did not receive leave event for client2: %v", err)
}
if err := json.Unmarshal(msg3, &event); err != nil {
t.Fatalf("failed to unmarshal event: %v", err)
}
if event["type"] != "presence.leave" || event["member_id"] != memberID2 {
t.Fatalf("expected presence.leave for %s, got %v for %v", memberID2, event["type"], event["member_id"])
}
}

e2e/serverless_test.go Normal file

@ -0,0 +1,123 @@
//go:build e2e
package e2e
import (
"bytes"
"context"
"io"
"mime/multipart"
"net/http"
"os"
"testing"
"time"
)
func TestServerless_DeployAndInvoke(t *testing.T) {
SkipIfMissingGateway(t)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
wasmPath := "../examples/functions/bin/hello.wasm"
if _, err := os.Stat(wasmPath); os.IsNotExist(err) {
t.Skip("hello.wasm not found")
}
wasmBytes, err := os.ReadFile(wasmPath)
if err != nil {
t.Fatalf("failed to read hello.wasm: %v", err)
}
funcName := "e2e-hello"
namespace := "default"
// 1. Deploy function
var buf bytes.Buffer
writer := multipart.NewWriter(&buf)
// Add metadata
_ = writer.WriteField("name", funcName)
_ = writer.WriteField("namespace", namespace)
// Add WASM file
part, err := writer.CreateFormFile("wasm", funcName+".wasm")
if err != nil {
t.Fatalf("failed to create form file: %v", err)
}
if _, err := part.Write(wasmBytes); err != nil {
t.Fatalf("failed to write wasm bytes: %v", err)
}
writer.Close()
deployReq, _ := http.NewRequestWithContext(ctx, "POST", GetGatewayURL()+"/v1/functions", &buf)
deployReq.Header.Set("Content-Type", writer.FormDataContentType())
if apiKey := GetAPIKey(); apiKey != "" {
deployReq.Header.Set("Authorization", "Bearer "+apiKey)
}
client := NewHTTPClient(1 * time.Minute)
resp, err := client.Do(deployReq)
if err != nil {
t.Fatalf("deploy request failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusCreated {
body, _ := io.ReadAll(resp.Body)
t.Fatalf("deploy failed with status %d: %s", resp.StatusCode, string(body))
}
// 2. Invoke function
invokePayload := []byte(`{"name": "E2E Tester"}`)
invokeReq, _ := http.NewRequestWithContext(ctx, "POST", GetGatewayURL()+"/v1/functions/"+funcName+"/invoke", bytes.NewReader(invokePayload))
invokeReq.Header.Set("Content-Type", "application/json")
if apiKey := GetAPIKey(); apiKey != "" {
invokeReq.Header.Set("Authorization", "Bearer "+apiKey)
}
resp, err = client.Do(invokeReq)
if err != nil {
t.Fatalf("invoke request failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
t.Fatalf("invoke failed with status %d: %s", resp.StatusCode, string(body))
}
output, _ := io.ReadAll(resp.Body)
expected := "Hello, E2E Tester!"
if !bytes.Contains(output, []byte(expected)) {
t.Errorf("output %q does not contain %q", string(output), expected)
}
// 3. List functions
listReq, _ := http.NewRequestWithContext(ctx, "GET", GetGatewayURL()+"/v1/functions?namespace="+namespace, nil)
if apiKey := GetAPIKey(); apiKey != "" {
listReq.Header.Set("Authorization", "Bearer "+apiKey)
}
resp, err = client.Do(listReq)
if err != nil {
t.Fatalf("list request failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Errorf("list failed with status %d", resp.StatusCode)
}
// 4. Delete function
deleteReq, _ := http.NewRequestWithContext(ctx, "DELETE", GetGatewayURL()+"/v1/functions/"+funcName+"?namespace="+namespace, nil)
if apiKey := GetAPIKey(); apiKey != "" {
deleteReq.Header.Set("Authorization", "Bearer "+apiKey)
}
resp, err = client.Do(deleteReq)
if err != nil {
t.Fatalf("delete request failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Errorf("delete failed with status %d", resp.StatusCode)
}
}

example.http Normal file

@ -0,0 +1,158 @@
### Orama Network Gateway API Examples
# This file is designed for the VS Code "REST Client" extension.
# It demonstrates the core capabilities of the DeBros Network Gateway.
@baseUrl = http://localhost:6001
@apiKey = ak_X32jj2fiin8zzv0hmBKTC5b5:default
@contentType = application/json
############################################################
### 1. SYSTEM & HEALTH
############################################################
# @name HealthCheck
GET {{baseUrl}}/v1/health
X-API-Key: {{apiKey}}
###
# @name SystemStatus
# Returns the full status of the gateway and connected services
GET {{baseUrl}}/v1/status
X-API-Key: {{apiKey}}
###
# @name NetworkStatus
# Returns the P2P network status and PeerID
GET {{baseUrl}}/v1/network/status
X-API-Key: {{apiKey}}
############################################################
### 2. DISTRIBUTED CACHE (OLRIC)
############################################################
# @name CachePut
# Stores a value in the distributed cache (DMap)
POST {{baseUrl}}/v1/cache/put
X-API-Key: {{apiKey}}
Content-Type: {{contentType}}
{
"dmap": "demo-cache",
"key": "video-demo",
"value": "Hello from REST Client!"
}
###
# @name CacheGet
# Retrieves a value from the distributed cache
POST {{baseUrl}}/v1/cache/get
X-API-Key: {{apiKey}}
Content-Type: {{contentType}}
{
"dmap": "demo-cache",
"key": "video-demo"
}
###
# @name CacheScan
# Scans for keys in a specific DMap
POST {{baseUrl}}/v1/cache/scan
X-API-Key: {{apiKey}}
Content-Type: {{contentType}}
{
"dmap": "demo-cache"
}
############################################################
### 3. DECENTRALIZED STORAGE (IPFS)
############################################################
# @name StorageUpload
# Uploads a file to IPFS (Multipart)
POST {{baseUrl}}/v1/storage/upload
X-API-Key: {{apiKey}}
Content-Type: multipart/form-data; boundary=boundary
--boundary
Content-Disposition: form-data; name="file"; filename="demo.txt"
Content-Type: text/plain
This is a demonstration of decentralized storage on the Sonr Network.
--boundary--
###
# @name StorageStatus
# Check the pinning status and replication of a CID
# Replace {cid} with the CID returned from the upload above
@demoCid = bafkreid76y6x6v2n5o4n6n5o4n6n5o4n6n5o4n6n5o4
GET {{baseUrl}}/v1/storage/status/{{demoCid}}
X-API-Key: {{apiKey}}
###
# @name StorageDownload
# Retrieve content directly from IPFS via the gateway
GET {{baseUrl}}/v1/storage/get/{{demoCid}}
X-API-Key: {{apiKey}}
############################################################
### 4. REAL-TIME PUB/SUB
############################################################
# @name ListTopics
# Lists all active topics in the current namespace
GET {{baseUrl}}/v1/pubsub/topics
X-API-Key: {{apiKey}}
###
# @name PublishMessage
# Publishes a base64 encoded message to a topic
POST {{baseUrl}}/v1/pubsub/publish
X-API-Key: {{apiKey}}
Content-Type: {{contentType}}
{
"topic": "network-updates",
"data_base64": "U29uciBOZXR3b3JrIGlzIGF3ZXNvbWUh"
}
############################################################
### 5. SERVERLESS FUNCTIONS
############################################################
# @name ListFunctions
# Lists all deployed serverless functions
GET {{baseUrl}}/v1/functions
X-API-Key: {{apiKey}}
###
# @name InvokeFunction
# Invokes a deployed function by name
# Path: /v1/invoke/{namespace}/{functionName}
POST {{baseUrl}}/v1/invoke/default/hello
X-API-Key: {{apiKey}}
Content-Type: {{contentType}}
{
"name": "Developer"
}
###
# @name WhoAmI
# Validates the API Key and returns caller identity
GET {{baseUrl}}/v1/auth/whoami
X-API-Key: {{apiKey}}

examples/functions/build.sh Executable file

@ -0,0 +1,42 @@
#!/bin/bash
# Build all example functions to WASM using TinyGo
#
# Prerequisites:
# - TinyGo installed: https://tinygo.org/getting-started/install/
# - On macOS: brew install tinygo
#
# Usage: ./build.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
OUTPUT_DIR="$SCRIPT_DIR/bin"
# Check if TinyGo is installed
if ! command -v tinygo &> /dev/null; then
echo "Error: TinyGo is not installed."
echo "Install it with: brew install tinygo (macOS) or see https://tinygo.org/getting-started/install/"
exit 1
fi
# Create output directory
mkdir -p "$OUTPUT_DIR"
echo "Building example functions to WASM..."
echo
# Build each function
for dir in "$SCRIPT_DIR"/*/; do
if [ -f "$dir/main.go" ]; then
name=$(basename "$dir")
echo "Building $name..."
cd "$dir"
tinygo build -o "$OUTPUT_DIR/$name.wasm" -target wasi main.go
echo " -> $OUTPUT_DIR/$name.wasm"
fi
done
echo
echo "Done! WASM files are in $OUTPUT_DIR/"
ls -lh "$OUTPUT_DIR"/*.wasm 2>/dev/null || echo "No WASM files built."


@ -0,0 +1,66 @@
// Example: Counter function with Olric cache
// This function demonstrates using the distributed cache to maintain state.
// Compile with: tinygo build -o counter.wasm -target wasi main.go
//
// Note: This example shows the CONCEPT. Actual host function integration
// requires the host function bindings to be exposed to the WASM module.
package main
import (
"encoding/json"
"os"
)
func main() {
// Read input from stdin
var input []byte
buf := make([]byte, 1024)
for {
n, err := os.Stdin.Read(buf)
if n > 0 {
input = append(input, buf[:n]...)
}
if err != nil {
break
}
}
// Parse input
var payload struct {
Action string `json:"action"` // "increment", "decrement", "get", "reset"
CounterID string `json:"counter_id"`
}
if err := json.Unmarshal(input, &payload); err != nil {
response := map[string]interface{}{
"error": "Invalid JSON input",
}
output, _ := json.Marshal(response)
os.Stdout.Write(output)
return
}
if payload.CounterID == "" {
payload.CounterID = "default"
}
// NOTE: In the real implementation, this would use host functions:
// - cache_get(key) to read the counter
// - cache_put(key, value, ttl) to write the counter
//
// For this example, we just simulate the logic:
response := map[string]interface{}{
"counter_id": payload.CounterID,
"action": payload.Action,
"message": "Counter operations require cache host functions",
"example": map[string]interface{}{
"increment": "cache_put('counter:' + counter_id, current + 1)",
"decrement": "cache_put('counter:' + counter_id, current - 1)",
"get": "cache_get('counter:' + counter_id)",
"reset": "cache_put('counter:' + counter_id, 0)",
},
}
output, _ := json.Marshal(response)
os.Stdout.Write(output)
}


@ -0,0 +1,50 @@
// Example: Echo function
// This is a simple serverless function that echoes back the input.
// Compile with: tinygo build -o echo.wasm -target wasi main.go
package main
import (
"encoding/json"
"os"
)
// Input is read from stdin, output is written to stdout.
// The Orama serverless engine passes the invocation payload via stdin
// and expects the response on stdout.
func main() {
// Read all input from stdin
var input []byte
buf := make([]byte, 1024)
for {
n, err := os.Stdin.Read(buf)
if n > 0 {
input = append(input, buf[:n]...)
}
if err != nil {
break
}
}
// Parse input as JSON (optional - could also just echo raw bytes)
var payload map[string]interface{}
if err := json.Unmarshal(input, &payload); err != nil {
// Not JSON, just echo the raw input
response := map[string]interface{}{
"echo": string(input),
}
output, _ := json.Marshal(response)
os.Stdout.Write(output)
return
}
// Create response
response := map[string]interface{}{
"echo": payload,
"message": "Echo function received your input!",
}
output, _ := json.Marshal(response)
os.Stdout.Write(output)
}


@ -0,0 +1,42 @@
// Example: Hello function
// This is a simple serverless function that returns a greeting.
// Compile with: tinygo build -o hello.wasm -target wasi main.go
package main
import (
"encoding/json"
"os"
)
func main() {
// Read input from stdin
var input []byte
buf := make([]byte, 1024)
for {
n, err := os.Stdin.Read(buf)
if n > 0 {
input = append(input, buf[:n]...)
}
if err != nil {
break
}
}
// Parse input to get name
var payload struct {
Name string `json:"name"`
}
if err := json.Unmarshal(input, &payload); err != nil || payload.Name == "" {
payload.Name = "World"
}
// Create greeting response
response := map[string]interface{}{
"greeting": "Hello, " + payload.Name + "!",
"message": "This is a serverless function running on Orama Network",
}
output, _ := json.Marshal(response)
os.Stdout.Write(output)
}
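The parsing logic above falls back to "World" whenever the payload is missing, invalid, or has an empty name. That behavior can be exercised in isolation (`greet` is a hypothetical helper that mirrors the function body, not code from the repository):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// greet mirrors the hello function's core logic: parse {"name": ...}
// from the payload and fall back to "World" on missing or invalid input.
func greet(input []byte) string {
	var payload struct {
		Name string `json:"name"`
	}
	if err := json.Unmarshal(input, &payload); err != nil || payload.Name == "" {
		payload.Name = "World"
	}
	return "Hello, " + payload.Name + "!"
}

func main() {
	fmt.Println(greet([]byte(`{"name":"Developer"}`))) // Hello, Developer!
	fmt.Println(greet([]byte(`{}`)))                   // Hello, World!
	fmt.Println(greet(nil))                            // Hello, World!
}
```

Both the empty object and the nil payload take the fallback path, which is why the e2e test can rely on a deterministic greeting for any well-formed input.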

gateway Executable file

Binary file not shown.

go.mod

@ -1,6 +1,6 @@
module github.com/DeBrosOfficial/network
go 1.23.8
go 1.24.0
toolchain go1.24.1
@ -10,6 +10,7 @@ require (
github.com/charmbracelet/lipgloss v1.0.0
github.com/ethereum/go-ethereum v1.13.14
github.com/go-chi/chi/v5 v5.2.3
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/libp2p/go-libp2p v0.41.1
github.com/libp2p/go-libp2p-pubsub v0.14.2
@ -18,6 +19,7 @@ require (
github.com/multiformats/go-multiaddr v0.15.0
github.com/olric-data/olric v0.7.0
github.com/rqlite/gorqlite v0.0.0-20250609141355-ac86a4a1c9a8
github.com/tetratelabs/wazero v1.11.0
go.uber.org/zap v1.27.0
golang.org/x/crypto v0.40.0
golang.org/x/net v0.42.0
@ -54,7 +56,6 @@ require (
github.com/google/btree v1.1.3 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/pprof v0.0.0-20250208200701-d0013a598941 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
github.com/hashicorp/go-metrics v0.5.4 // indirect
@ -154,7 +155,7 @@ require (
golang.org/x/exp v0.0.0-20250718183923-645b1fa84792 // indirect
golang.org/x/mod v0.26.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.27.0 // indirect
golang.org/x/tools v0.35.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect

go.sum

@ -487,6 +487,8 @@ github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXl
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/tetratelabs/wazero v1.11.0 h1:+gKemEuKCTevU4d7ZTzlsvgd1uaToIDtlQlmNbwqYhA=
github.com/tetratelabs/wazero v1.11.0/go.mod h1:eV28rsN8Q+xwjogd7f4/Pp4xFxO7uOGbLcD/LzB1wiU=
github.com/tidwall/btree v1.1.0/go.mod h1:TzIRzen6yHbibdSfK6t8QimqbUnoxUSrZfeW7Uob0q4=
github.com/tidwall/btree v1.7.0 h1:L1fkJH/AuEh5zBnnBbmTwQ5Lt+bRJ5A8EWecslvo9iI=
github.com/tidwall/btree v1.7.0/go.mod h1:twD9XRA5jj9VUQGELzDO4HPQTNJsoWWfYEL+EUQ2cKY=
@ -627,8 +629,8 @@ golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=


@ -0,0 +1,243 @@
-- Orama Network - Serverless Functions Engine (Phase 4)
-- WASM-based serverless function execution with triggers, jobs, and secrets
BEGIN;
-- =============================================================================
-- FUNCTIONS TABLE
-- Core function registry with versioning support
-- =============================================================================
CREATE TABLE IF NOT EXISTS functions (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
namespace TEXT NOT NULL,
version INTEGER NOT NULL DEFAULT 1,
wasm_cid TEXT NOT NULL,
source_cid TEXT,
memory_limit_mb INTEGER NOT NULL DEFAULT 64,
timeout_seconds INTEGER NOT NULL DEFAULT 30,
is_public BOOLEAN NOT NULL DEFAULT FALSE,
retry_count INTEGER NOT NULL DEFAULT 0,
retry_delay_seconds INTEGER NOT NULL DEFAULT 5,
dlq_topic TEXT,
status TEXT NOT NULL DEFAULT 'active',
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
created_by TEXT NOT NULL,
UNIQUE(namespace, name)
);
CREATE INDEX IF NOT EXISTS idx_functions_namespace ON functions(namespace);
CREATE INDEX IF NOT EXISTS idx_functions_name ON functions(namespace, name);
CREATE INDEX IF NOT EXISTS idx_functions_status ON functions(status);
-- =============================================================================
-- FUNCTION ENVIRONMENT VARIABLES
-- Non-sensitive configuration per function
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_env_vars (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(function_id, key),
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_env_vars_function ON function_env_vars(function_id);
-- =============================================================================
-- FUNCTION SECRETS
-- Encrypted secrets per namespace (shared across functions in namespace)
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_secrets (
id TEXT PRIMARY KEY,
namespace TEXT NOT NULL,
name TEXT NOT NULL,
encrypted_value BLOB NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(namespace, name)
);
CREATE INDEX IF NOT EXISTS idx_function_secrets_namespace ON function_secrets(namespace);
-- =============================================================================
-- CRON TRIGGERS
-- Scheduled function execution using cron expressions
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_cron_triggers (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
cron_expression TEXT NOT NULL,
next_run_at TIMESTAMP,
last_run_at TIMESTAMP,
last_status TEXT,
last_error TEXT,
enabled BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_cron_triggers_function ON function_cron_triggers(function_id);
CREATE INDEX IF NOT EXISTS idx_function_cron_triggers_next_run ON function_cron_triggers(next_run_at)
WHERE enabled = TRUE;
-- =============================================================================
-- DATABASE TRIGGERS
-- Trigger functions on database changes (INSERT/UPDATE/DELETE)
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_db_triggers (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
table_name TEXT NOT NULL,
operation TEXT NOT NULL CHECK(operation IN ('INSERT', 'UPDATE', 'DELETE')),
condition TEXT,
enabled BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_db_triggers_function ON function_db_triggers(function_id);
CREATE INDEX IF NOT EXISTS idx_function_db_triggers_table ON function_db_triggers(table_name, operation)
WHERE enabled = TRUE;
-- =============================================================================
-- PUBSUB TRIGGERS
-- Trigger functions on pubsub messages
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_pubsub_triggers (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
topic TEXT NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_pubsub_triggers_function ON function_pubsub_triggers(function_id);
CREATE INDEX IF NOT EXISTS idx_function_pubsub_triggers_topic ON function_pubsub_triggers(topic)
WHERE enabled = TRUE;
-- =============================================================================
-- ONE-TIME TIMERS
-- Schedule functions to run once at a specific time
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_timers (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
run_at TIMESTAMP NOT NULL,
payload TEXT,
status TEXT NOT NULL DEFAULT 'pending' CHECK(status IN ('pending', 'running', 'completed', 'failed')),
error TEXT,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
completed_at TIMESTAMP,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_timers_function ON function_timers(function_id);
CREATE INDEX IF NOT EXISTS idx_function_timers_pending ON function_timers(run_at)
WHERE status = 'pending';
-- =============================================================================
-- BACKGROUND JOBS
-- Long-running async function execution
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_jobs (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
payload TEXT,
status TEXT NOT NULL DEFAULT 'pending' CHECK(status IN ('pending', 'running', 'completed', 'failed', 'cancelled')),
progress INTEGER NOT NULL DEFAULT 0 CHECK(progress >= 0 AND progress <= 100),
result TEXT,
error TEXT,
started_at TIMESTAMP,
completed_at TIMESTAMP,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_jobs_function ON function_jobs(function_id);
CREATE INDEX IF NOT EXISTS idx_function_jobs_status ON function_jobs(status);
CREATE INDEX IF NOT EXISTS idx_function_jobs_pending ON function_jobs(created_at)
WHERE status = 'pending';
-- =============================================================================
-- INVOCATION LOGS
-- Record of all function invocations for debugging and metrics
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_invocations (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
request_id TEXT NOT NULL,
trigger_type TEXT NOT NULL,
caller_wallet TEXT,
input_size INTEGER,
output_size INTEGER,
started_at TIMESTAMP NOT NULL,
completed_at TIMESTAMP,
duration_ms INTEGER,
status TEXT CHECK(status IN ('success', 'error', 'timeout')),
error_message TEXT,
memory_used_mb REAL,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_invocations_function ON function_invocations(function_id);
CREATE INDEX IF NOT EXISTS idx_function_invocations_request ON function_invocations(request_id);
CREATE INDEX IF NOT EXISTS idx_function_invocations_time ON function_invocations(started_at);
CREATE INDEX IF NOT EXISTS idx_function_invocations_status ON function_invocations(function_id, status);
-- =============================================================================
-- FUNCTION LOGS
-- Captured log output from function execution
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_logs (
id TEXT PRIMARY KEY,
function_id TEXT NOT NULL,
invocation_id TEXT NOT NULL,
level TEXT NOT NULL CHECK(level IN ('info', 'warn', 'error', 'debug')),
message TEXT NOT NULL,
timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (function_id) REFERENCES functions(id) ON DELETE CASCADE,
FOREIGN KEY (invocation_id) REFERENCES function_invocations(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_function_logs_invocation ON function_logs(invocation_id);
CREATE INDEX IF NOT EXISTS idx_function_logs_function ON function_logs(function_id, timestamp);
-- =============================================================================
-- DB CHANGE TRACKING
-- Track last processed row for database triggers (CDC-like)
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_db_change_tracking (
id TEXT PRIMARY KEY,
trigger_id TEXT NOT NULL UNIQUE,
last_row_id INTEGER,
last_updated_at TIMESTAMP,
last_check_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (trigger_id) REFERENCES function_db_triggers(id) ON DELETE CASCADE
);
-- =============================================================================
-- RATE LIMITING
-- Track request counts for rate limiting
-- =============================================================================
CREATE TABLE IF NOT EXISTS function_rate_limits (
id TEXT PRIMARY KEY,
window_key TEXT NOT NULL,
count INTEGER NOT NULL DEFAULT 0,
window_start TIMESTAMP NOT NULL,
UNIQUE(window_key, window_start)
);
CREATE INDEX IF NOT EXISTS idx_function_rate_limits_window ON function_rate_limits(window_key, window_start);
-- =============================================================================
-- MIGRATION VERSION TRACKING
-- =============================================================================
INSERT OR IGNORE INTO schema_migrations(version) VALUES (4);
COMMIT;


@ -1,321 +0,0 @@
openapi: 3.0.3
info:
title: DeBros Gateway API
version: 0.40.0
description: REST API over the DeBros Network client for storage, database, and pubsub.
servers:
- url: http://localhost:6001
security:
- ApiKeyAuth: []
- BearerAuth: []
components:
securitySchemes:
ApiKeyAuth:
type: apiKey
in: header
name: X-API-Key
BearerAuth:
type: http
scheme: bearer
schemas:
Error:
type: object
properties:
error:
type: string
QueryRequest:
type: object
required: [sql]
properties:
sql:
type: string
args:
type: array
items: {}
QueryResponse:
type: object
properties:
columns:
type: array
items:
type: string
rows:
type: array
items:
type: array
items: {}
count:
type: integer
format: int64
TransactionRequest:
type: object
required: [statements]
properties:
statements:
type: array
items:
type: string
CreateTableRequest:
type: object
required: [schema]
properties:
schema:
type: string
DropTableRequest:
type: object
required: [table]
properties:
table:
type: string
TopicsResponse:
type: object
properties:
topics:
type: array
items:
type: string
paths:
/v1/health:
get:
summary: Gateway health
responses:
"200": { description: OK }
/v1/storage/put:
post:
summary: Store a value by key
parameters:
- in: query
name: key
schema: { type: string }
required: true
requestBody:
required: true
content:
application/octet-stream:
schema:
type: string
format: binary
responses:
"201": { description: Created }
"400":
{
description: Bad Request,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
"401": { description: Unauthorized }
"500":
{
description: Error,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
/v1/storage/get:
get:
summary: Get a value by key
parameters:
- in: query
name: key
schema: { type: string }
required: true
responses:
"200":
description: OK
content:
application/octet-stream:
schema:
type: string
format: binary
"404":
{
description: Not Found,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
/v1/storage/exists:
get:
summary: Check key existence
parameters:
- in: query
name: key
schema: { type: string }
required: true
responses:
"200":
description: OK
content:
application/json:
schema:
type: object
properties:
exists:
type: boolean
/v1/storage/list:
get:
summary: List keys by prefix
parameters:
- in: query
name: prefix
schema: { type: string }
responses:
"200":
description: OK
content:
application/json:
schema:
type: object
properties:
keys:
type: array
items:
type: string
/v1/storage/delete:
post:
summary: Delete a key
requestBody:
required: true
content:
application/json:
schema:
type: object
required: [key]
properties:
key: { type: string }
responses:
"200": { description: OK }
/v1/rqlite/create-table:
post:
summary: Create tables via SQL DDL
requestBody:
required: true
content:
application/json:
schema: { $ref: "#/components/schemas/CreateTableRequest" }
responses:
"201": { description: Created }
"400":
{
description: Bad Request,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
"500":
{
description: Error,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
/v1/rqlite/drop-table:
post:
summary: Drop a table
requestBody:
required: true
content:
application/json:
schema: { $ref: "#/components/schemas/DropTableRequest" }
responses:
"200": { description: OK }
/v1/rqlite/query:
post:
summary: Execute a single SQL query
requestBody:
required: true
content:
application/json:
schema: { $ref: "#/components/schemas/QueryRequest" }
responses:
"200":
description: OK
content:
application/json:
schema: { $ref: "#/components/schemas/QueryResponse" }
"400":
{
description: Bad Request,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
"500":
{
description: Error,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
/v1/rqlite/transaction:
post:
summary: Execute multiple SQL statements atomically
requestBody:
required: true
content:
application/json:
schema: { $ref: "#/components/schemas/TransactionRequest" }
responses:
"200": { description: OK }
"400":
{
description: Bad Request,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
"500":
{
description: Error,
content:
{
application/json:
{ schema: { $ref: "#/components/schemas/Error" } },
},
}
/v1/rqlite/schema:
get:
summary: Get current database schema
responses:
"200": { description: OK }
/v1/pubsub/publish:
post:
summary: Publish to a topic
requestBody:
required: true
content:
application/json:
schema:
type: object
required: [topic, data_base64]
properties:
topic: { type: string }
data_base64: { type: string }
responses:
"200": { description: OK }
/v1/pubsub/topics:
get:
summary: List topics in caller namespace
responses:
"200":
description: OK
content:
application/json:
schema: { $ref: "#/components/schemas/TopicsResponse" }

File diff suppressed because it is too large

@ -2,6 +2,8 @@ package cli
import (
"testing"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// TestProdCommandFlagParsing verifies that prod command flags are parsed correctly
@ -156,7 +158,7 @@ func TestNormalizePeers(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
peers, err := normalizePeers(tt.input)
peers, err := utils.NormalizePeers(tt.input)
if tt.expectError && err == nil {
t.Errorf("expected error but got none")


@ -0,0 +1,109 @@
package production
import (
"fmt"
"os"
"github.com/DeBrosOfficial/network/pkg/cli/production/install"
"github.com/DeBrosOfficial/network/pkg/cli/production/lifecycle"
"github.com/DeBrosOfficial/network/pkg/cli/production/logs"
"github.com/DeBrosOfficial/network/pkg/cli/production/migrate"
"github.com/DeBrosOfficial/network/pkg/cli/production/status"
"github.com/DeBrosOfficial/network/pkg/cli/production/uninstall"
"github.com/DeBrosOfficial/network/pkg/cli/production/upgrade"
)
// HandleCommand handles production environment commands
func HandleCommand(args []string) {
if len(args) == 0 {
ShowHelp()
return
}
subcommand := args[0]
subargs := args[1:]
switch subcommand {
case "install":
install.Handle(subargs)
case "upgrade":
upgrade.Handle(subargs)
case "migrate":
migrate.Handle(subargs)
case "status":
status.Handle()
case "start":
lifecycle.HandleStart()
case "stop":
lifecycle.HandleStop()
case "restart":
lifecycle.HandleRestart()
case "logs":
logs.Handle(subargs)
case "uninstall":
uninstall.Handle()
case "help":
ShowHelp()
default:
fmt.Fprintf(os.Stderr, "Unknown prod subcommand: %s\n", subcommand)
ShowHelp()
os.Exit(1)
}
}
// ShowHelp displays help information for production commands
func ShowHelp() {
fmt.Printf("Production Environment Commands\n\n")
fmt.Printf("Usage: orama <subcommand> [options]\n\n")
fmt.Printf("Subcommands:\n")
fmt.Printf(" install - Install production node (requires root/sudo)\n")
fmt.Printf(" Options:\n")
fmt.Printf(" --interactive - Launch interactive TUI wizard\n")
fmt.Printf(" --force - Reconfigure all settings\n")
fmt.Printf(" --vps-ip IP - VPS public IP address (required)\n")
fmt.Printf(" --domain DOMAIN - Domain for this node (e.g., node-1.orama.network)\n")
fmt.Printf(" --peers ADDRS - Comma-separated peer multiaddrs (for joining cluster)\n")
fmt.Printf(" --join ADDR - RQLite join address IP:port (for joining cluster)\n")
fmt.Printf(" --cluster-secret HEX - 64-hex cluster secret (required when joining)\n")
fmt.Printf(" --swarm-key HEX - 64-hex IPFS swarm key (required when joining)\n")
fmt.Printf(" --ipfs-peer ID - IPFS peer ID to connect to (auto-discovered)\n")
fmt.Printf(" --ipfs-addrs ADDRS - IPFS swarm addresses (auto-discovered)\n")
fmt.Printf(" --ipfs-cluster-peer ID - IPFS Cluster peer ID (auto-discovered)\n")
fmt.Printf(" --ipfs-cluster-addrs ADDRS - IPFS Cluster addresses (auto-discovered)\n")
fmt.Printf(" --branch BRANCH - Git branch to use (main or nightly, default: main)\n")
fmt.Printf(" --no-pull - Skip git clone/pull, use existing /home/debros/src\n")
fmt.Printf(" --ignore-resource-checks - Skip disk/RAM/CPU prerequisite validation\n")
fmt.Printf(" --dry-run - Show what would be done without making changes\n")
fmt.Printf(" upgrade - Upgrade existing installation (requires root/sudo)\n")
fmt.Printf(" Options:\n")
fmt.Printf(" --restart - Automatically restart services after upgrade\n")
fmt.Printf(" --branch BRANCH - Git branch to use (main or nightly)\n")
fmt.Printf(" --no-pull - Skip git clone/pull, use existing source\n")
fmt.Printf(" migrate - Migrate from old unified setup (requires root/sudo)\n")
fmt.Printf(" Options:\n")
fmt.Printf(" --dry-run - Show what would be migrated without making changes\n")
fmt.Printf(" status - Show status of production services\n")
fmt.Printf(" start - Start all production services (requires root/sudo)\n")
fmt.Printf(" stop - Stop all production services (requires root/sudo)\n")
fmt.Printf(" restart - Restart all production services (requires root/sudo)\n")
fmt.Printf(" logs <service> - View production service logs\n")
fmt.Printf(" Service aliases: node, ipfs, cluster, gateway, olric\n")
fmt.Printf(" Options:\n")
fmt.Printf(" --follow - Follow logs in real-time\n")
fmt.Printf(" uninstall - Remove production services (requires root/sudo)\n\n")
fmt.Printf("Examples:\n")
fmt.Printf(" # First node (creates new cluster)\n")
fmt.Printf(" sudo orama install --vps-ip 203.0.113.1 --domain node-1.orama.network\n\n")
fmt.Printf(" # Join existing cluster\n")
fmt.Printf(" sudo orama install --vps-ip 203.0.113.2 --domain node-2.orama.network \\\n")
fmt.Printf(" --peers /ip4/203.0.113.1/tcp/4001/p2p/12D3KooW... \\\n")
fmt.Printf(" --cluster-secret <64-hex-secret> --swarm-key <64-hex-swarm-key>\n\n")
fmt.Printf(" # Upgrade\n")
fmt.Printf(" sudo orama upgrade --restart\n\n")
fmt.Printf(" # Service management\n")
fmt.Printf(" sudo orama start\n")
fmt.Printf(" sudo orama stop\n")
fmt.Printf(" sudo orama restart\n\n")
fmt.Printf(" orama status\n")
fmt.Printf(" orama logs node --follow\n")
}


@ -0,0 +1,47 @@
package install
import (
"fmt"
"os"
)
// Handle executes the install command
func Handle(args []string) {
// Parse flags
flags, err := ParseFlags(args)
if err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Create orchestrator
orchestrator, err := NewOrchestrator(flags)
if err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Validate flags
if err := orchestrator.validator.ValidateFlags(); err != nil {
fmt.Fprintf(os.Stderr, "❌ Error: %v\n", err)
os.Exit(1)
}
// Check root privileges
if err := orchestrator.validator.ValidateRootPrivileges(); err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Check port availability before proceeding
if err := orchestrator.validator.ValidatePorts(); err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Execute installation
if err := orchestrator.Execute(); err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
}


@ -0,0 +1,65 @@
package install
import (
"flag"
"fmt"
"os"
)
// Flags represents install command flags
type Flags struct {
VpsIP string
Domain string
Branch string
NoPull bool
Force bool
DryRun bool
SkipChecks bool
JoinAddress string
ClusterSecret string
SwarmKey string
PeersStr string
// IPFS/Cluster specific info for Peering configuration
IPFSPeerID string
IPFSAddrs string
IPFSClusterPeerID string
IPFSClusterAddrs string
}
// ParseFlags parses install command flags
func ParseFlags(args []string) (*Flags, error) {
fs := flag.NewFlagSet("install", flag.ContinueOnError)
fs.SetOutput(os.Stderr)
flags := &Flags{}
fs.StringVar(&flags.VpsIP, "vps-ip", "", "Public IP of this VPS (required)")
fs.StringVar(&flags.Domain, "domain", "", "Domain name for HTTPS (optional, e.g. gateway.example.com)")
fs.StringVar(&flags.Branch, "branch", "main", "Git branch to use (main or nightly)")
fs.BoolVar(&flags.NoPull, "no-pull", false, "Skip git clone/pull, use existing repository in /home/debros/src")
fs.BoolVar(&flags.Force, "force", false, "Force reconfiguration even if already installed")
fs.BoolVar(&flags.DryRun, "dry-run", false, "Show what would be done without making changes")
fs.BoolVar(&flags.SkipChecks, "skip-checks", false, "Skip minimum resource checks (RAM/CPU)")
// Cluster join flags
fs.StringVar(&flags.JoinAddress, "join", "", "Join an existing cluster (e.g. 1.2.3.4:7001)")
fs.StringVar(&flags.ClusterSecret, "cluster-secret", "", "Cluster secret for IPFS Cluster (required if joining)")
fs.StringVar(&flags.SwarmKey, "swarm-key", "", "IPFS Swarm key (required if joining)")
fs.StringVar(&flags.PeersStr, "peers", "", "Comma-separated list of bootstrap peer multiaddrs")
// IPFS/Cluster specific info for Peering configuration
fs.StringVar(&flags.IPFSPeerID, "ipfs-peer", "", "Peer ID of existing IPFS node to peer with")
fs.StringVar(&flags.IPFSAddrs, "ipfs-addrs", "", "Comma-separated multiaddrs of existing IPFS node")
fs.StringVar(&flags.IPFSClusterPeerID, "ipfs-cluster-peer", "", "Peer ID of existing IPFS Cluster node")
fs.StringVar(&flags.IPFSClusterAddrs, "ipfs-cluster-addrs", "", "Comma-separated multiaddrs of existing IPFS Cluster node")
if err := fs.Parse(args); err != nil {
if err == flag.ErrHelp {
return nil, err
}
return nil, fmt.Errorf("failed to parse flags: %w", err)
}
return flags, nil
}


@ -0,0 +1,192 @@
package install
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
"github.com/DeBrosOfficial/network/pkg/environments/production"
)
// Orchestrator manages the install process
type Orchestrator struct {
oramaHome string
oramaDir string
setup *production.ProductionSetup
flags *Flags
validator *Validator
peers []string
}
// NewOrchestrator creates a new install orchestrator
func NewOrchestrator(flags *Flags) (*Orchestrator, error) {
oramaHome := "/home/debros"
oramaDir := oramaHome + "/.orama"
// Normalize peers
peers, err := utils.NormalizePeers(flags.PeersStr)
if err != nil {
return nil, fmt.Errorf("invalid peers: %w", err)
}
setup := production.NewProductionSetup(oramaHome, os.Stdout, flags.Force, flags.Branch, flags.NoPull, flags.SkipChecks)
validator := NewValidator(flags, oramaDir)
return &Orchestrator{
oramaHome: oramaHome,
oramaDir: oramaDir,
setup: setup,
flags: flags,
validator: validator,
peers: peers,
}, nil
}
// Execute runs the installation process
func (o *Orchestrator) Execute() error {
fmt.Printf("🚀 Starting production installation...\n\n")
// Inform user if skipping git pull
if o.flags.NoPull {
fmt.Printf(" ⚠️ --no-pull flag enabled: Skipping git clone/pull\n")
fmt.Printf(" Using existing repository at /home/debros/src\n")
}
// Validate DNS if domain is provided
o.validator.ValidateDNS()
// Dry-run mode: show what would be done and exit
if o.flags.DryRun {
utils.ShowDryRunSummary(o.flags.VpsIP, o.flags.Domain, o.flags.Branch, o.peers, o.flags.JoinAddress, o.validator.IsFirstNode(), o.oramaDir)
return nil
}
// Save secrets before installation
if err := o.validator.SaveSecrets(); err != nil {
return err
}
// Save branch preference for future upgrades
if err := production.SaveBranchPreference(o.oramaDir, o.flags.Branch); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Warning: Failed to save branch preference: %v\n", err)
}
// Phase 1: Check prerequisites
fmt.Printf("\n📋 Phase 1: Checking prerequisites...\n")
if err := o.setup.Phase1CheckPrerequisites(); err != nil {
return fmt.Errorf("prerequisites check failed: %w", err)
}
// Phase 2: Provision environment
fmt.Printf("\n🛠 Phase 2: Provisioning environment...\n")
if err := o.setup.Phase2ProvisionEnvironment(); err != nil {
return fmt.Errorf("environment provisioning failed: %w", err)
}
// Phase 2b: Install binaries
fmt.Printf("\nPhase 2b: Installing binaries...\n")
if err := o.setup.Phase2bInstallBinaries(); err != nil {
return fmt.Errorf("binary installation failed: %w", err)
}
// Phase 3: Generate secrets FIRST (before service initialization)
fmt.Printf("\n🔐 Phase 3: Generating secrets...\n")
if err := o.setup.Phase3GenerateSecrets(); err != nil {
return fmt.Errorf("secret generation failed: %w", err)
}
// Phase 4: Generate configs (BEFORE service initialization)
fmt.Printf("\n⚙ Phase 4: Generating configurations...\n")
enableHTTPS := o.flags.Domain != ""
if err := o.setup.Phase4GenerateConfigs(o.peers, o.flags.VpsIP, enableHTTPS, o.flags.Domain, o.flags.JoinAddress); err != nil {
return fmt.Errorf("configuration generation failed: %w", err)
}
// Validate generated configuration
if err := o.validator.ValidateGeneratedConfig(); err != nil {
return err
}
// Phase 2c: Initialize services (after config is in place)
fmt.Printf("\nPhase 2c: Initializing services...\n")
ipfsPeerInfo := o.buildIPFSPeerInfo()
ipfsClusterPeerInfo := o.buildIPFSClusterPeerInfo()
if err := o.setup.Phase2cInitializeServices(o.peers, o.flags.VpsIP, ipfsPeerInfo, ipfsClusterPeerInfo); err != nil {
return fmt.Errorf("service initialization failed: %w", err)
}
// Phase 5: Create systemd services
fmt.Printf("\n🔧 Phase 5: Creating systemd services...\n")
if err := o.setup.Phase5CreateSystemdServices(enableHTTPS); err != nil {
return fmt.Errorf("service creation failed: %w", err)
}
// Log completion with actual peer ID
o.setup.LogSetupComplete(o.setup.NodePeerID)
fmt.Printf("✅ Production installation complete!\n\n")
// For first node, print important secrets and identifiers
if o.validator.IsFirstNode() {
o.printFirstNodeSecrets()
}
return nil
}
func (o *Orchestrator) buildIPFSPeerInfo() *production.IPFSPeerInfo {
if o.flags.IPFSPeerID != "" {
var addrs []string
if o.flags.IPFSAddrs != "" {
addrs = strings.Split(o.flags.IPFSAddrs, ",")
}
return &production.IPFSPeerInfo{
PeerID: o.flags.IPFSPeerID,
Addrs: addrs,
}
}
return nil
}
func (o *Orchestrator) buildIPFSClusterPeerInfo() *production.IPFSClusterPeerInfo {
if o.flags.IPFSClusterPeerID != "" {
var addrs []string
if o.flags.IPFSClusterAddrs != "" {
addrs = strings.Split(o.flags.IPFSClusterAddrs, ",")
}
return &production.IPFSClusterPeerInfo{
PeerID: o.flags.IPFSClusterPeerID,
Addrs: addrs,
}
}
return nil
}
func (o *Orchestrator) printFirstNodeSecrets() {
fmt.Printf("📋 Save these for joining future nodes:\n\n")
// Print cluster secret
clusterSecretPath := filepath.Join(o.oramaDir, "secrets", "cluster-secret")
if clusterSecretData, err := os.ReadFile(clusterSecretPath); err == nil {
fmt.Printf(" Cluster Secret (--cluster-secret):\n")
fmt.Printf(" %s\n\n", string(clusterSecretData))
}
// Print swarm key
swarmKeyPath := filepath.Join(o.oramaDir, "secrets", "swarm.key")
if swarmKeyData, err := os.ReadFile(swarmKeyPath); err == nil {
swarmKeyContent := strings.TrimSpace(string(swarmKeyData))
lines := strings.Split(swarmKeyContent, "\n")
if len(lines) >= 3 {
// Extract just the hex part (last line)
fmt.Printf(" IPFS Swarm Key (--swarm-key, last line only):\n")
fmt.Printf(" %s\n\n", lines[len(lines)-1])
}
}
// Print peer ID
fmt.Printf(" Node Peer ID:\n")
fmt.Printf(" %s\n\n", o.setup.NodePeerID)
}


@ -0,0 +1,106 @@
package install
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// Validator validates install command inputs
type Validator struct {
flags *Flags
oramaDir string
isFirstNode bool
}
// NewValidator creates a new validator
func NewValidator(flags *Flags, oramaDir string) *Validator {
return &Validator{
flags: flags,
oramaDir: oramaDir,
isFirstNode: flags.JoinAddress == "",
}
}
// ValidateFlags validates required flags
func (v *Validator) ValidateFlags() error {
if v.flags.VpsIP == "" && !v.flags.DryRun {
return fmt.Errorf("--vps-ip is required for installation\nExample: dbn prod install --vps-ip 1.2.3.4")
}
return nil
}
// ValidateRootPrivileges checks if running as root
func (v *Validator) ValidateRootPrivileges() error {
if os.Geteuid() != 0 && !v.flags.DryRun {
return fmt.Errorf("production installation must be run as root (use sudo)")
}
return nil
}
// ValidatePorts validates port availability
func (v *Validator) ValidatePorts() error {
if err := utils.EnsurePortsAvailable("install", utils.DefaultPorts()); err != nil {
return err
}
return nil
}
// ValidateDNS validates DNS record if domain is provided
func (v *Validator) ValidateDNS() {
if v.flags.Domain != "" {
fmt.Printf("\n🌐 Pre-flight DNS validation...\n")
utils.ValidateDNSRecord(v.flags.Domain, v.flags.VpsIP)
}
}
// ValidateGeneratedConfig validates generated configuration files
func (v *Validator) ValidateGeneratedConfig() error {
fmt.Printf(" Validating generated configuration...\n")
if err := utils.ValidateGeneratedConfig(v.oramaDir); err != nil {
return fmt.Errorf("configuration validation failed: %w", err)
}
fmt.Printf(" ✓ Configuration validated\n")
return nil
}
// SaveSecrets saves cluster secret and swarm key to secrets directory
func (v *Validator) SaveSecrets() error {
// If cluster secret was provided, save it to secrets directory before setup
if v.flags.ClusterSecret != "" {
secretsDir := filepath.Join(v.oramaDir, "secrets")
if err := os.MkdirAll(secretsDir, 0755); err != nil {
return fmt.Errorf("failed to create secrets directory: %w", err)
}
secretPath := filepath.Join(secretsDir, "cluster-secret")
if err := os.WriteFile(secretPath, []byte(v.flags.ClusterSecret), 0600); err != nil {
return fmt.Errorf("failed to save cluster secret: %w", err)
}
fmt.Printf(" ✓ Cluster secret saved\n")
}
// If swarm key was provided, save it to secrets directory in full format
if v.flags.SwarmKey != "" {
secretsDir := filepath.Join(v.oramaDir, "secrets")
if err := os.MkdirAll(secretsDir, 0755); err != nil {
return fmt.Errorf("failed to create secrets directory: %w", err)
}
// Convert 64-hex key to full swarm.key format
swarmKeyContent := fmt.Sprintf("/key/swarm/psk/1.0.0/\n/base16/\n%s\n", strings.ToUpper(v.flags.SwarmKey))
swarmKeyPath := filepath.Join(secretsDir, "swarm.key")
if err := os.WriteFile(swarmKeyPath, []byte(swarmKeyContent), 0600); err != nil {
return fmt.Errorf("failed to save swarm key: %w", err)
}
fmt.Printf(" ✓ Swarm key saved\n")
}
return nil
}
// IsFirstNode returns true if this is the first node in the cluster
func (v *Validator) IsFirstNode() bool {
return v.isFirstNode
}
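SaveSecrets above expands the bare 64-hex `--swarm-key` value into the three-line private-network PSK file layout (codec header, encoding marker, upper-cased key). The transformation, pulled out as a standalone sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// swarmKeyFile reproduces the format SaveSecrets writes to swarm.key:
// a codec header line, a /base16/ encoding line, and the upper-cased
// 64-hex key, each newline-terminated.
func swarmKeyFile(hexKey string) string {
	return fmt.Sprintf("/key/swarm/psk/1.0.0/\n/base16/\n%s\n", strings.ToUpper(hexKey))
}

func main() {
	// Hypothetical key, truncated here for display.
	fmt.Print(swarmKeyFile("0011aabb"))
}
```

This is also why `printFirstNodeSecrets` prints only the last line of swarm.key: the first two lines are fixed boilerplate, and joining nodes pass just the hex via `--swarm-key`.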


@ -0,0 +1,67 @@
package lifecycle
import (
"fmt"
"os"
"os/exec"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// HandleRestart restarts all production services
func HandleRestart() {
if os.Geteuid() != 0 {
fmt.Fprintf(os.Stderr, "❌ Production commands must be run as root (use sudo)\n")
os.Exit(1)
}
fmt.Printf("Restarting all DeBros production services...\n")
services := utils.GetProductionServices()
if len(services) == 0 {
fmt.Printf(" ⚠️ No DeBros services found\n")
return
}
// Stop all active services first
fmt.Printf(" Stopping services...\n")
for _, svc := range services {
active, err := utils.IsServiceActive(svc)
if err != nil {
fmt.Printf(" ⚠️ Unable to check %s: %v\n", svc, err)
continue
}
if !active {
fmt.Printf(" %s was already stopped\n", svc)
continue
}
if err := exec.Command("systemctl", "stop", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to stop %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Stopped %s\n", svc)
}
}
// Check port availability before restarting
ports, err := utils.CollectPortsForServices(services, false)
if err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
if err := utils.EnsurePortsAvailable("prod restart", ports); err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Start all services
fmt.Printf(" Starting services...\n")
for _, svc := range services {
if err := exec.Command("systemctl", "start", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to start %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Started %s\n", svc)
}
}
fmt.Printf("\n✅ All services restarted\n")
}


@ -0,0 +1,111 @@
package lifecycle
import (
"fmt"
"os"
"os/exec"
"time"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// HandleStart starts all production services
func HandleStart() {
if os.Geteuid() != 0 {
fmt.Fprintf(os.Stderr, "❌ Production commands must be run as root (use sudo)\n")
os.Exit(1)
}
fmt.Printf("Starting all DeBros production services...\n")
services := utils.GetProductionServices()
if len(services) == 0 {
fmt.Printf(" ⚠️ No DeBros services found\n")
return
}
// Reset failed state for all services before starting
// This helps with services that were previously in failed state
resetArgs := []string{"reset-failed"}
resetArgs = append(resetArgs, services...)
exec.Command("systemctl", resetArgs...).Run()
// Check which services are inactive and need to be started
inactive := make([]string, 0, len(services))
for _, svc := range services {
// Check if service is masked and unmask it
masked, err := utils.IsServiceMasked(svc)
if err == nil && masked {
fmt.Printf(" ⚠️ %s is masked, unmasking...\n", svc)
if err := exec.Command("systemctl", "unmask", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to unmask %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Unmasked %s\n", svc)
}
}
active, err := utils.IsServiceActive(svc)
if err != nil {
fmt.Printf(" ⚠️ Unable to check %s: %v\n", svc, err)
continue
}
if active {
fmt.Printf(" %s already running\n", svc)
// Re-enable if disabled (in case it was stopped with 'dbn prod stop')
enabled, err := utils.IsServiceEnabled(svc)
if err == nil && !enabled {
if err := exec.Command("systemctl", "enable", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to re-enable %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Re-enabled %s (will auto-start on boot)\n", svc)
}
}
continue
}
inactive = append(inactive, svc)
}
if len(inactive) == 0 {
fmt.Printf("\n✅ All services already running\n")
return
}
// Check port availability for services we're about to start
ports, err := utils.CollectPortsForServices(inactive, false)
if err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
if err := utils.EnsurePortsAvailable("prod start", ports); err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Enable and start inactive services
for _, svc := range inactive {
// Re-enable the service first (in case it was disabled by 'dbn prod stop')
enabled, err := utils.IsServiceEnabled(svc)
if err == nil && !enabled {
if err := exec.Command("systemctl", "enable", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to enable %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Enabled %s (will auto-start on boot)\n", svc)
}
}
// Start the service
if err := exec.Command("systemctl", "start", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to start %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Started %s\n", svc)
}
}
// Give services more time to fully initialize before verification
// Some services may need more time to start up, especially if they're
// waiting for dependencies or initializing databases
fmt.Printf(" ⏳ Waiting for services to initialize...\n")
time.Sleep(5 * time.Second)
fmt.Printf("\n✅ All services started\n")
}
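`utils.IsServiceActive` is referenced throughout these lifecycle handlers but its implementation is not part of this diff. A minimal sketch, assuming it shells out to `systemctl is-active` (the real helper in pkg/cli/utils may differ):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActiveState interprets the output of `systemctl is-active`: only
// the literal "active" state counts as running; "inactive", "failed",
// "activating" etc. do not.
func isActiveState(out string) bool {
	return strings.TrimSpace(out) == "active"
}

// IsServiceActive is a hypothetical sketch of the helper used above.
func IsServiceActive(service string) (bool, error) {
	out, err := exec.Command("systemctl", "is-active", service).Output()
	// systemctl exits non-zero for inactive units, so only treat the
	// error as fatal when no state string was produced at all.
	if err != nil && len(out) == 0 {
		return false, err
	}
	return isActiveState(string(out)), nil
}

func main() {
	fmt.Println(isActiveState("active\n"), isActiveState("failed\n"))
}
```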


@ -0,0 +1,112 @@
package lifecycle
import (
"fmt"
"os"
"os/exec"
"time"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// HandleStop stops all production services
func HandleStop() {
if os.Geteuid() != 0 {
fmt.Fprintf(os.Stderr, "❌ Production commands must be run as root (use sudo)\n")
os.Exit(1)
}
fmt.Printf("Stopping all DeBros production services...\n")
services := utils.GetProductionServices()
if len(services) == 0 {
fmt.Printf(" ⚠️ No DeBros services found\n")
return
}
// First, disable all services to prevent auto-restart
disableArgs := []string{"disable"}
disableArgs = append(disableArgs, services...)
if err := exec.Command("systemctl", disableArgs...).Run(); err != nil {
fmt.Printf(" ⚠️ Warning: Failed to disable some services: %v\n", err)
}
// Stop all services at once using a single systemctl command
// This is more efficient and ensures they all stop together
stopArgs := []string{"stop"}
stopArgs = append(stopArgs, services...)
if err := exec.Command("systemctl", stopArgs...).Run(); err != nil {
fmt.Printf(" ⚠️ Warning: Some services may have failed to stop: %v\n", err)
// Continue anyway - we'll verify and handle individually below
}
// Wait a moment for services to fully stop
time.Sleep(2 * time.Second)
// Reset failed state for any services that might be in failed state
resetArgs := []string{"reset-failed"}
resetArgs = append(resetArgs, services...)
exec.Command("systemctl", resetArgs...).Run()
// Wait again after reset-failed
time.Sleep(1 * time.Second)
// Stop again to ensure they're stopped
exec.Command("systemctl", stopArgs...).Run()
time.Sleep(1 * time.Second)
hadError := false
for _, svc := range services {
active, err := utils.IsServiceActive(svc)
if err != nil {
fmt.Printf(" ⚠️ Unable to check %s: %v\n", svc, err)
hadError = true
continue
}
if !active {
fmt.Printf(" ✓ Stopped %s\n", svc)
} else {
// Service is still active, try stopping it individually
fmt.Printf(" ⚠️ %s still active, attempting individual stop...\n", svc)
if err := exec.Command("systemctl", "stop", svc).Run(); err != nil {
fmt.Printf(" ❌ Failed to stop %s: %v\n", svc, err)
hadError = true
} else {
// Wait and verify again
time.Sleep(1 * time.Second)
if stillActive, _ := utils.IsServiceActive(svc); stillActive {
fmt.Printf(" ❌ %s restarted itself (Restart=always)\n", svc)
hadError = true
} else {
fmt.Printf(" ✓ Stopped %s\n", svc)
}
}
}
// Disable the service to prevent it from auto-starting on boot
enabled, err := utils.IsServiceEnabled(svc)
if err != nil {
fmt.Printf(" ⚠️ Unable to check if %s is enabled: %v\n", svc, err)
// Continue anyway - attempt the disable below even when the check fails
}
if enabled || err != nil {
if err := exec.Command("systemctl", "disable", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to disable %s: %v\n", svc, err)
hadError = true
} else {
fmt.Printf(" ✓ Disabled %s (will not auto-start on boot)\n", svc)
}
} else {
fmt.Printf(" %s already disabled\n", svc)
}
}
if hadError {
fmt.Fprintf(os.Stderr, "\n⚠ Some services may still be restarting due to Restart=always\n")
fmt.Fprintf(os.Stderr, " Check status with: systemctl list-units 'debros-*'\n")
fmt.Fprintf(os.Stderr, " If services are still restarting, they may need manual intervention\n")
} else {
fmt.Printf("\n✅ All services stopped and disabled (will not auto-start on boot)\n")
fmt.Printf(" Use 'dbn prod start' to start and re-enable services\n")
}
}


@ -0,0 +1,104 @@
package logs
import (
"fmt"
"os"
"os/exec"
"strings"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// Handle executes the logs command
func Handle(args []string) {
if len(args) == 0 {
showUsage()
os.Exit(1)
}
serviceAlias := args[0]
follow := false
if len(args) > 1 && (args[1] == "--follow" || args[1] == "-f") {
follow = true
}
// Resolve service alias to actual service names
serviceNames, err := utils.ResolveServiceName(serviceAlias)
if err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
fmt.Fprintf(os.Stderr, "\nAvailable service aliases: node, ipfs, cluster, gateway, olric\n")
fmt.Fprintf(os.Stderr, "Or use full service name like: debros-node\n")
os.Exit(1)
}
// If multiple services match, show all of them
if len(serviceNames) > 1 {
handleMultipleServices(serviceNames, serviceAlias, follow)
return
}
// Single service
service := serviceNames[0]
if follow {
followServiceLogs(service)
} else {
showServiceLogs(service)
}
}
func showUsage() {
fmt.Fprintf(os.Stderr, "Usage: dbn prod logs <service> [--follow]\n")
fmt.Fprintf(os.Stderr, "\nService aliases:\n")
fmt.Fprintf(os.Stderr, " node, ipfs, cluster, gateway, olric\n")
fmt.Fprintf(os.Stderr, "\nOr use full service name:\n")
fmt.Fprintf(os.Stderr, " debros-node, debros-gateway, etc.\n")
}
func handleMultipleServices(serviceNames []string, serviceAlias string, follow bool) {
if follow {
fmt.Fprintf(os.Stderr, "⚠️ Multiple services match alias %q:\n", serviceAlias)
for _, svc := range serviceNames {
fmt.Fprintf(os.Stderr, " - %s\n", svc)
}
fmt.Fprintf(os.Stderr, "\nShowing logs for all matching services...\n\n")
// Use journalctl with multiple units (build args correctly)
args := []string{}
for _, svc := range serviceNames {
args = append(args, "-u", svc)
}
args = append(args, "-f")
cmd := exec.Command("journalctl", args...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.Stdin = os.Stdin
cmd.Run()
} else {
for i, svc := range serviceNames {
if i > 0 {
fmt.Print("\n" + strings.Repeat("=", 70) + "\n\n")
}
fmt.Printf("📋 Logs for %s:\n\n", svc)
cmd := exec.Command("journalctl", "-u", svc, "-n", "50")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.Run()
}
}
}
func followServiceLogs(service string) {
fmt.Printf("Following logs for %s (press Ctrl+C to stop)...\n\n", service)
cmd := exec.Command("journalctl", "-u", service, "-f")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.Stdin = os.Stdin
cmd.Run()
}
func showServiceLogs(service string) {
cmd := exec.Command("journalctl", "-u", service, "-n", "50")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.Run()
}


@ -0,0 +1,9 @@
package logs
// This file contains log tailing utilities
// Currently all tailing is done via journalctl in command.go
// Future enhancements could include:
// - Custom log parsing and filtering
// - Log streaming from remote nodes
// - Log aggregation across multiple services
// - Advanced filtering and search capabilities


@ -0,0 +1,156 @@
package migrate
import (
"flag"
"fmt"
"os"
"os/exec"
"path/filepath"
)
// Handle executes the migrate command
func Handle(args []string) {
// Parse flags
fs := flag.NewFlagSet("migrate", flag.ContinueOnError)
fs.SetOutput(os.Stderr)
dryRun := fs.Bool("dry-run", false, "Show what would be migrated without making changes")
if err := fs.Parse(args); err != nil {
if err == flag.ErrHelp {
return
}
fmt.Fprintf(os.Stderr, "❌ Failed to parse flags: %v\n", err)
os.Exit(1)
}
if os.Geteuid() != 0 && !*dryRun {
fmt.Fprintf(os.Stderr, "❌ Migration must be run as root (use sudo)\n")
os.Exit(1)
}
oramaDir := "/home/debros/.orama"
fmt.Printf("🔄 Checking for installations to migrate...\n\n")
// Check for old-style installations
validator := NewValidator(oramaDir)
needsMigration := validator.CheckNeedsMigration()
if !needsMigration {
fmt.Printf("\n✅ No migration needed - installation already uses unified structure\n")
return
}
if *dryRun {
fmt.Printf("\n📋 Dry run - no changes made\n")
fmt.Printf(" Run without --dry-run to perform migration\n")
return
}
fmt.Printf("\n🔄 Starting migration...\n")
// Stop old services first
stopOldServices()
// Migrate data directories
migrateDataDirectories(oramaDir)
// Migrate config files
migrateConfigFiles(oramaDir)
// Remove old services
removeOldServices()
// Reload systemd
exec.Command("systemctl", "daemon-reload").Run()
fmt.Printf("\n✅ Migration complete!\n")
fmt.Printf(" Run 'sudo orama upgrade --restart' to regenerate services with new names\n\n")
}
func stopOldServices() {
oldServices := []string{
"debros-ipfs",
"debros-ipfs-cluster",
"debros-node",
}
fmt.Printf("\n Stopping old services...\n")
for _, svc := range oldServices {
if err := exec.Command("systemctl", "stop", svc).Run(); err == nil {
fmt.Printf(" ✓ Stopped %s\n", svc)
}
}
}
func migrateDataDirectories(oramaDir string) {
oldDataDirs := []string{
filepath.Join(oramaDir, "data", "node-1"),
filepath.Join(oramaDir, "data", "node"),
}
newDataDir := filepath.Join(oramaDir, "data")
fmt.Printf("\n Migrating data directories...\n")
// Prefer node-1 data if it exists, otherwise use node data
sourceDir := ""
if _, err := os.Stat(filepath.Join(oramaDir, "data", "node-1")); err == nil {
sourceDir = filepath.Join(oramaDir, "data", "node-1")
} else if _, err := os.Stat(filepath.Join(oramaDir, "data", "node")); err == nil {
sourceDir = filepath.Join(oramaDir, "data", "node")
}
if sourceDir != "" {
// Move contents to unified data directory
entries, _ := os.ReadDir(sourceDir)
for _, entry := range entries {
src := filepath.Join(sourceDir, entry.Name())
dst := filepath.Join(newDataDir, entry.Name())
if _, err := os.Stat(dst); os.IsNotExist(err) {
if err := os.Rename(src, dst); err == nil {
fmt.Printf(" ✓ Moved %s → %s\n", src, dst)
}
}
}
}
// Remove old data directories
for _, dir := range oldDataDirs {
if err := os.RemoveAll(dir); err == nil {
fmt.Printf(" ✓ Removed %s\n", dir)
}
}
}
func migrateConfigFiles(oramaDir string) {
fmt.Printf("\n Migrating config files...\n")
oldNodeConfig := filepath.Join(oramaDir, "configs", "bootstrap.yaml")
newNodeConfig := filepath.Join(oramaDir, "configs", "node.yaml")
if _, err := os.Stat(oldNodeConfig); err == nil {
if _, err := os.Stat(newNodeConfig); os.IsNotExist(err) {
if err := os.Rename(oldNodeConfig, newNodeConfig); err == nil {
fmt.Printf(" ✓ Renamed bootstrap.yaml → node.yaml\n")
}
} else {
os.Remove(oldNodeConfig)
fmt.Printf(" ✓ Removed old bootstrap.yaml (node.yaml already exists)\n")
}
}
}
func removeOldServices() {
oldServices := []string{
"debros-ipfs",
"debros-ipfs-cluster",
"debros-node",
}
fmt.Printf("\n Removing old service files...\n")
for _, svc := range oldServices {
unitPath := filepath.Join("/etc/systemd/system", svc+".service")
if err := os.Remove(unitPath); err == nil {
fmt.Printf(" ✓ Removed %s\n", unitPath)
}
}
}


@@ -0,0 +1,64 @@
package migrate
import (
"fmt"
"os"
"path/filepath"
)
// Validator checks if migration is needed
type Validator struct {
oramaDir string
}
// NewValidator creates a new Validator
func NewValidator(oramaDir string) *Validator {
return &Validator{oramaDir: oramaDir}
}
// CheckNeedsMigration checks if migration is needed
func (v *Validator) CheckNeedsMigration() bool {
oldDataDirs := []string{
filepath.Join(v.oramaDir, "data", "node-1"),
filepath.Join(v.oramaDir, "data", "node"),
}
oldServices := []string{
"debros-ipfs",
"debros-ipfs-cluster",
"debros-node",
}
oldConfigs := []string{
filepath.Join(v.oramaDir, "configs", "bootstrap.yaml"),
}
var needsMigration bool
fmt.Printf("Checking data directories:\n")
for _, dir := range oldDataDirs {
if _, err := os.Stat(dir); err == nil {
fmt.Printf(" ⚠️ Found old directory: %s\n", dir)
needsMigration = true
}
}
fmt.Printf("\nChecking services:\n")
for _, svc := range oldServices {
unitPath := filepath.Join("/etc/systemd/system", svc+".service")
if _, err := os.Stat(unitPath); err == nil {
fmt.Printf(" ⚠️ Found old service: %s\n", svc)
needsMigration = true
}
}
fmt.Printf("\nChecking configs:\n")
for _, cfg := range oldConfigs {
if _, err := os.Stat(cfg); err == nil {
fmt.Printf(" ⚠️ Found old config: %s\n", cfg)
needsMigration = true
}
}
return needsMigration
}


@@ -0,0 +1,58 @@
package status
import (
"fmt"
"os"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
)
// Handle executes the status command
func Handle() {
fmt.Printf("Production Environment Status\n\n")
// Unified service names (no bootstrap/node distinction)
serviceNames := []string{
"debros-ipfs",
"debros-ipfs-cluster",
// Note: RQLite is managed by node process, not as separate service
"debros-olric",
"debros-node",
"debros-gateway",
}
// Friendly descriptions
descriptions := map[string]string{
"debros-ipfs": "IPFS Daemon",
"debros-ipfs-cluster": "IPFS Cluster",
"debros-olric": "Olric Cache Server",
"debros-node": "DeBros Node (includes RQLite)",
"debros-gateway": "DeBros Gateway",
}
fmt.Printf("Services:\n")
found := false
for _, svc := range serviceNames {
active, _ := utils.IsServiceActive(svc)
status := "❌ Inactive"
if active {
status = "✅ Active"
found = true
}
fmt.Printf(" %s: %s\n", status, descriptions[svc])
}
if !found {
fmt.Printf(" (No services found - installation may be incomplete)\n")
}
fmt.Printf("\nDirectories:\n")
oramaDir := "/home/debros/.orama"
if _, err := os.Stat(oramaDir); err == nil {
fmt.Printf(" ✅ %s exists\n", oramaDir)
} else {
fmt.Printf(" ❌ %s not found\n", oramaDir)
}
fmt.Printf("\nView logs with: dbn prod logs <service>\n")
}


@@ -0,0 +1,9 @@
package status
// This file contains formatting utilities for status output
// Currently all formatting is done inline in command.go
// Future enhancements could include:
// - JSON output format
// - Table-based formatting
// - Color-coded output
// - More detailed service information


@@ -0,0 +1,53 @@
package uninstall
import (
"bufio"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
)
// Handle executes the uninstall command
func Handle() {
if os.Geteuid() != 0 {
fmt.Fprintf(os.Stderr, "❌ Production uninstall must be run as root (use sudo)\n")
os.Exit(1)
}
fmt.Printf("⚠️ This will stop and remove all DeBros production services\n")
fmt.Printf("⚠️ Configuration and data will be preserved in /home/debros/.orama\n\n")
fmt.Printf("Continue? (yes/no): ")
reader := bufio.NewReader(os.Stdin)
response, _ := reader.ReadString('\n')
response = strings.ToLower(strings.TrimSpace(response))
if response != "yes" && response != "y" {
fmt.Printf("Uninstall cancelled\n")
return
}
services := []string{
"debros-gateway",
"debros-node",
"debros-olric",
"debros-ipfs-cluster",
"debros-ipfs",
"debros-anyone-client",
}
fmt.Printf("Stopping services...\n")
for _, svc := range services {
exec.Command("systemctl", "stop", svc).Run()
exec.Command("systemctl", "disable", svc).Run()
unitPath := filepath.Join("/etc/systemd/system", svc+".service")
os.Remove(unitPath)
}
exec.Command("systemctl", "daemon-reload").Run()
fmt.Printf("✅ Services uninstalled\n")
fmt.Printf(" Configuration and data preserved in /home/debros/.orama\n")
fmt.Printf(" To remove all data: rm -rf /home/debros/.orama\n\n")
}


@@ -0,0 +1,29 @@
package upgrade
import (
"fmt"
"os"
)
// Handle executes the upgrade command
func Handle(args []string) {
// Parse flags
flags, err := ParseFlags(args)
if err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
// Check root privileges
if os.Geteuid() != 0 {
fmt.Fprintf(os.Stderr, "❌ Production upgrade must be run as root (use sudo)\n")
os.Exit(1)
}
// Create orchestrator and execute upgrade
orchestrator := NewOrchestrator(flags)
if err := orchestrator.Execute(); err != nil {
fmt.Fprintf(os.Stderr, "❌ %v\n", err)
os.Exit(1)
}
}


@@ -0,0 +1,54 @@
package upgrade
import (
"flag"
"fmt"
"os"
)
// Flags represents upgrade command flags
type Flags struct {
Force bool
RestartServices bool
NoPull bool
Branch string
}
// ParseFlags parses upgrade command flags
func ParseFlags(args []string) (*Flags, error) {
fs := flag.NewFlagSet("upgrade", flag.ContinueOnError)
fs.SetOutput(os.Stderr)
flags := &Flags{}
fs.BoolVar(&flags.Force, "force", false, "Reconfigure all settings")
fs.BoolVar(&flags.RestartServices, "restart", false, "Automatically restart services after upgrade")
fs.BoolVar(&flags.NoPull, "no-pull", false, "Skip git clone/pull, use existing /home/debros/src")
fs.StringVar(&flags.Branch, "branch", "", "Git branch to use (main or nightly, uses saved preference if not specified)")
// Support legacy flags for backwards compatibility
nightly := fs.Bool("nightly", false, "Use nightly branch (deprecated, use --branch nightly)")
main := fs.Bool("main", false, "Use main branch (deprecated, use --branch main)")
if err := fs.Parse(args); err != nil {
if err == flag.ErrHelp {
return nil, err
}
return nil, fmt.Errorf("failed to parse flags: %w", err)
}
// Handle legacy flags
if *nightly {
flags.Branch = "nightly"
}
if *main {
flags.Branch = "main"
}
// Validate branch if provided
if flags.Branch != "" && flags.Branch != "main" && flags.Branch != "nightly" {
return nil, fmt.Errorf("invalid branch: %s (must be 'main' or 'nightly')", flags.Branch)
}
return flags, nil
}
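The legacy-flag handling above (mapping `--nightly`/`--main` onto `--branch`) is easy to get subtly wrong. The following standalone sketch mirrors that mapping outside the package; `parseBranch` is an illustrative helper, not part of the codebase:

```go
package main

import (
	"flag"
	"fmt"
)

// parseBranch mirrors the legacy-flag handling in ParseFlags:
// deprecated boolean flags are folded into the newer --branch value,
// and anything other than "main"/"nightly" is rejected.
func parseBranch(args []string) (string, error) {
	fs := flag.NewFlagSet("upgrade", flag.ContinueOnError)
	branch := fs.String("branch", "", "git branch (main or nightly)")
	nightly := fs.Bool("nightly", false, "deprecated alias for --branch nightly")
	mainBr := fs.Bool("main", false, "deprecated alias for --branch main")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	if *nightly {
		*branch = "nightly"
	}
	if *mainBr {
		*branch = "main"
	}
	if *branch != "" && *branch != "main" && *branch != "nightly" {
		return "", fmt.Errorf("invalid branch: %s (must be 'main' or 'nightly')", *branch)
	}
	return *branch, nil
}

func main() {
	b, _ := parseBranch([]string{"--nightly"})
	fmt.Println(b) // the deprecated flag maps to "nightly"
}
```

Because the legacy flags are applied after `fs.Parse`, they win over an explicit `--branch` when both are given, which matches the original code's behavior.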


@@ -0,0 +1,322 @@
package upgrade
import (
"fmt"
"net"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/DeBrosOfficial/network/pkg/cli/utils"
"github.com/DeBrosOfficial/network/pkg/environments/production"
)
// Orchestrator manages the upgrade process
type Orchestrator struct {
oramaHome string
oramaDir string
setup *production.ProductionSetup
flags *Flags
}
// NewOrchestrator creates a new upgrade orchestrator
func NewOrchestrator(flags *Flags) *Orchestrator {
oramaHome := "/home/debros"
oramaDir := oramaHome + "/.orama"
setup := production.NewProductionSetup(oramaHome, os.Stdout, flags.Force, flags.Branch, flags.NoPull, false)
return &Orchestrator{
oramaHome: oramaHome,
oramaDir: oramaDir,
setup: setup,
flags: flags,
}
}
// Execute runs the upgrade process
func (o *Orchestrator) Execute() error {
fmt.Printf("🔄 Upgrading production installation...\n")
fmt.Printf(" This will preserve existing configurations and data\n")
fmt.Printf(" Configurations will be updated to latest format\n\n")
// Log if --no-pull is enabled
if o.flags.NoPull {
fmt.Printf(" ⚠️ --no-pull flag enabled: Skipping git clone/pull\n")
fmt.Printf(" Using existing repository at %s/src\n", o.oramaHome)
}
// Handle branch preferences
if err := o.handleBranchPreferences(); err != nil {
return err
}
// Phase 1: Check prerequisites
fmt.Printf("\n📋 Phase 1: Checking prerequisites...\n")
if err := o.setup.Phase1CheckPrerequisites(); err != nil {
return fmt.Errorf("prerequisites check failed: %w", err)
}
// Phase 2: Provision environment
fmt.Printf("\n🛠 Phase 2: Provisioning environment...\n")
if err := o.setup.Phase2ProvisionEnvironment(); err != nil {
return fmt.Errorf("environment provisioning failed: %w", err)
}
// Stop services before upgrading binaries
if o.setup.IsUpdate() {
if err := o.stopServices(); err != nil {
return err
}
}
// Check port availability after stopping services
if err := utils.EnsurePortsAvailable("prod upgrade", utils.DefaultPorts()); err != nil {
return err
}
// Phase 2b: Install/update binaries
fmt.Printf("\nPhase 2b: Installing/updating binaries...\n")
if err := o.setup.Phase2bInstallBinaries(); err != nil {
return fmt.Errorf("binary installation failed: %w", err)
}
// Detect existing installation
if o.setup.IsUpdate() {
fmt.Printf(" Detected existing installation\n")
} else {
fmt.Printf(" ⚠️ No existing installation detected, treating as fresh install\n")
fmt.Printf(" Use 'orama install' for fresh installation\n")
}
// Phase 3: Ensure secrets exist
fmt.Printf("\n🔐 Phase 3: Ensuring secrets...\n")
if err := o.setup.Phase3GenerateSecrets(); err != nil {
return fmt.Errorf("secret generation failed: %w", err)
}
// Phase 4: Regenerate configs
if err := o.regenerateConfigs(); err != nil {
return err
}
// Phase 2c: Ensure services are properly initialized
fmt.Printf("\nPhase 2c: Ensuring services are properly initialized...\n")
peers := o.extractPeers()
vpsIP, _ := o.extractNetworkConfig()
if err := o.setup.Phase2cInitializeServices(peers, vpsIP, nil, nil); err != nil {
return fmt.Errorf("service initialization failed: %w", err)
}
// Phase 5: Update systemd services
fmt.Printf("\n🔧 Phase 5: Updating systemd services...\n")
enableHTTPS, _ := o.extractGatewayConfig()
if err := o.setup.Phase5CreateSystemdServices(enableHTTPS); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Service update warning: %v\n", err)
}
fmt.Printf("\n✅ Upgrade complete!\n")
// Restart services if requested
if o.flags.RestartServices {
return o.restartServices()
}
fmt.Printf(" To apply changes, restart services:\n")
fmt.Printf(" sudo systemctl daemon-reload\n")
fmt.Printf(" sudo systemctl restart debros-*\n")
fmt.Printf("\n")
return nil
}
func (o *Orchestrator) handleBranchPreferences() error {
// If branch was explicitly provided, save it for future upgrades
if o.flags.Branch != "" {
if err := production.SaveBranchPreference(o.oramaDir, o.flags.Branch); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Warning: Failed to save branch preference: %v\n", err)
} else {
fmt.Printf(" Using branch: %s (saved for future upgrades)\n", o.flags.Branch)
}
} else {
// Show which branch is being used (read from saved preference)
currentBranch := production.ReadBranchPreference(o.oramaDir)
fmt.Printf(" Using branch: %s (from saved preference)\n", currentBranch)
}
return nil
}
func (o *Orchestrator) stopServices() error {
fmt.Printf("\n⏹ Stopping services before upgrade...\n")
serviceController := production.NewSystemdController()
services := []string{
"debros-gateway.service",
"debros-node.service",
"debros-ipfs-cluster.service",
"debros-ipfs.service",
// Note: RQLite is managed by node process, not as separate service
"debros-olric.service",
}
for _, svc := range services {
unitPath := filepath.Join("/etc/systemd/system", svc)
if _, err := os.Stat(unitPath); err == nil {
if err := serviceController.StopService(svc); err != nil {
fmt.Printf(" ⚠️ Warning: Failed to stop %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Stopped %s\n", svc)
}
}
}
// Give services time to shut down gracefully
time.Sleep(2 * time.Second)
return nil
}
func (o *Orchestrator) extractPeers() []string {
nodeConfigPath := filepath.Join(o.oramaDir, "configs", "node.yaml")
var peers []string
if data, err := os.ReadFile(nodeConfigPath); err == nil {
configStr := string(data)
inPeersList := false
for _, line := range strings.Split(configStr, "\n") {
trimmed := strings.TrimSpace(line)
if strings.HasPrefix(trimmed, "bootstrap_peers:") || strings.HasPrefix(trimmed, "peers:") {
inPeersList = true
continue
}
if inPeersList {
if strings.HasPrefix(trimmed, "-") {
// Extract the multiaddr after the list dash
peer := strings.TrimSpace(strings.TrimPrefix(trimmed, "-"))
peer = strings.Trim(peer, "\"'")
if peer != "" && strings.HasPrefix(peer, "/") {
peers = append(peers, peer)
}
} else {
// Any non-list line (including a blank) ends the peers list
break
}
}
}
}
return peers
}
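`extractPeers` deliberately avoids a YAML dependency and scans the config line by line. A functionally equivalent standalone sketch of that scan (`extractPeersFromYAML` is an illustrative name, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// extractPeersFromYAML mirrors Orchestrator.extractPeers: it is not a
// full YAML parser, just a scan for a "bootstrap_peers:"/"peers:" key
// followed by "- /multiaddr" list entries. The first non-list line
// ends the block.
func extractPeersFromYAML(configStr string) []string {
	var peers []string
	inPeersList := false
	for _, line := range strings.Split(configStr, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "bootstrap_peers:") || strings.HasPrefix(trimmed, "peers:") {
			inPeersList = true
			continue
		}
		if !inPeersList {
			continue
		}
		if strings.HasPrefix(trimmed, "-") {
			peer := strings.Trim(strings.TrimSpace(strings.TrimPrefix(trimmed, "-")), "\"'")
			if strings.HasPrefix(peer, "/") {
				peers = append(peers, peer)
			}
		} else {
			break // end of the peers list block
		}
	}
	return peers
}

func main() {
	cfg := `discovery:
  bootstrap_peers:
    - "/ip4/10.0.0.1/tcp/4001/p2p/QmPeerA"
    - /ip4/10.0.0.2/tcp/4001/p2p/QmPeerB
other: x`
	fmt.Println(extractPeersFromYAML(cfg))
}
```

The `/`-prefix check doubles as cheap multiaddr validation: quoted and unquoted entries both pass, while stray scalars under the key are skipped.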
func (o *Orchestrator) extractNetworkConfig() (vpsIP, joinAddress string) {
nodeConfigPath := filepath.Join(o.oramaDir, "configs", "node.yaml")
if data, err := os.ReadFile(nodeConfigPath); err == nil {
configStr := string(data)
for _, line := range strings.Split(configStr, "\n") {
trimmed := strings.TrimSpace(line)
// Try to extract VPS IP from http_adv_address or raft_adv_address
if vpsIP == "" && (strings.HasPrefix(trimmed, "http_adv_address:") || strings.HasPrefix(trimmed, "raft_adv_address:")) {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) > 1 {
addr := strings.TrimSpace(parts[1])
addr = strings.Trim(addr, "\"'")
if addr != "" && addr != "null" && addr != "localhost:5001" && addr != "localhost:7001" {
// Extract IP from address (format: "IP:PORT" or "[IPv6]:PORT")
if host, _, err := net.SplitHostPort(addr); err == nil && host != "" && host != "localhost" {
vpsIP = host
}
}
}
}
// Extract join address
if strings.HasPrefix(trimmed, "rqlite_join_address:") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) > 1 {
joinAddress = strings.TrimSpace(parts[1])
joinAddress = strings.Trim(joinAddress, "\"'")
if joinAddress == "null" || joinAddress == "" {
joinAddress = ""
}
}
}
}
}
return vpsIP, joinAddress
}
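The same line-oriented approach recovers the advertised host. This standalone sketch captures the address handling in `extractNetworkConfig`; `hostFromAdvAddress` is an illustrative helper name:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// hostFromAdvAddress takes a "key: value" config line, strips quotes
// from the value, and extracts the host from "IP:PORT" (or
// "[IPv6]:PORT") form, rejecting localhost like extractNetworkConfig.
func hostFromAdvAddress(line string) string {
	// SplitN with limit 2 splits at the first colon, i.e. after the key.
	parts := strings.SplitN(strings.TrimSpace(line), ":", 2)
	if len(parts) < 2 {
		return ""
	}
	addr := strings.Trim(strings.TrimSpace(parts[1]), "\"'")
	host, _, err := net.SplitHostPort(addr)
	if err != nil || host == "" || host == "localhost" {
		return ""
	}
	return host
}

func main() {
	fmt.Println(hostFromAdvAddress(`http_adv_address: "203.0.113.7:5001"`))
}
```

Using `net.SplitHostPort` rather than a manual colon split is what keeps bracketed IPv6 addresses working.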
func (o *Orchestrator) extractGatewayConfig() (enableHTTPS bool, domain string) {
gatewayConfigPath := filepath.Join(o.oramaDir, "configs", "gateway.yaml")
if data, err := os.ReadFile(gatewayConfigPath); err == nil {
configStr := string(data)
if strings.Contains(configStr, "domain:") {
for _, line := range strings.Split(configStr, "\n") {
trimmed := strings.TrimSpace(line)
if strings.HasPrefix(trimmed, "domain:") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) > 1 {
domain = strings.TrimSpace(parts[1])
if domain != "" && domain != "\"\"" && domain != "''" && domain != "null" {
domain = strings.Trim(domain, "\"'")
enableHTTPS = true
} else {
domain = ""
}
}
break
}
}
}
}
return enableHTTPS, domain
}
func (o *Orchestrator) regenerateConfigs() error {
peers := o.extractPeers()
vpsIP, joinAddress := o.extractNetworkConfig()
enableHTTPS, domain := o.extractGatewayConfig()
fmt.Printf(" Preserving existing configuration:\n")
if len(peers) > 0 {
fmt.Printf(" - Peers: %d peer(s) preserved\n", len(peers))
}
if vpsIP != "" {
fmt.Printf(" - VPS IP: %s\n", vpsIP)
}
if domain != "" {
fmt.Printf(" - Domain: %s\n", domain)
}
if joinAddress != "" {
fmt.Printf(" - Join address: %s\n", joinAddress)
}
// Phase 4: Generate configs
if err := o.setup.Phase4GenerateConfigs(peers, vpsIP, enableHTTPS, domain, joinAddress); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Config generation warning: %v\n", err)
fmt.Fprintf(os.Stderr, " Existing configs preserved\n")
}
return nil
}
func (o *Orchestrator) restartServices() error {
fmt.Printf(" Restarting services...\n")
// Reload systemd daemon
if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
fmt.Fprintf(os.Stderr, " ⚠️ Warning: Failed to reload systemd daemon: %v\n", err)
}
// Restart services to apply changes - use getProductionServices to only restart existing services
services := utils.GetProductionServices()
if len(services) == 0 {
fmt.Printf(" ⚠️ No services found to restart\n")
} else {
for _, svc := range services {
if err := exec.Command("systemctl", "restart", svc).Run(); err != nil {
fmt.Printf(" ⚠️ Failed to restart %s: %v\n", svc, err)
} else {
fmt.Printf(" ✓ Restarted %s\n", svc)
}
}
fmt.Printf(" ✓ All services restarted\n")
}
return nil
}


@@ -0,0 +1,10 @@
package cli
import (
"github.com/DeBrosOfficial/network/pkg/cli/production"
)
// HandleProdCommand handles production environment commands
func HandleProdCommand(args []string) {
production.HandleCommand(args)
}

pkg/cli/utils/install.go

@@ -0,0 +1,97 @@
package utils
import (
"fmt"
"strings"
)
// IPFSPeerInfo holds IPFS peer information for configuring Peering.Peers
type IPFSPeerInfo struct {
PeerID string
Addrs []string
}
// IPFSClusterPeerInfo contains IPFS Cluster peer information for cluster discovery
type IPFSClusterPeerInfo struct {
PeerID string
Addrs []string
}
// ShowDryRunSummary displays what would be done during installation without making changes
func ShowDryRunSummary(vpsIP, domain, branch string, peers []string, joinAddress string, isFirstNode bool, oramaDir string) {
fmt.Print("\n" + strings.Repeat("=", 70) + "\n")
fmt.Printf("DRY RUN - No changes will be made\n")
fmt.Print(strings.Repeat("=", 70) + "\n\n")
fmt.Printf("📋 Installation Summary:\n")
fmt.Printf(" VPS IP: %s\n", vpsIP)
fmt.Printf(" Domain: %s\n", domain)
fmt.Printf(" Branch: %s\n", branch)
if isFirstNode {
fmt.Printf(" Node Type: First node (creates new cluster)\n")
} else {
fmt.Printf(" Node Type: Joining existing cluster\n")
if joinAddress != "" {
fmt.Printf(" Join Address: %s\n", joinAddress)
}
if len(peers) > 0 {
fmt.Printf(" Peers: %d peer(s)\n", len(peers))
for _, peer := range peers {
fmt.Printf(" - %s\n", peer)
}
}
}
fmt.Printf("\n📁 Directories that would be created:\n")
fmt.Printf(" %s/configs/\n", oramaDir)
fmt.Printf(" %s/secrets/\n", oramaDir)
fmt.Printf(" %s/data/ipfs/repo/\n", oramaDir)
fmt.Printf(" %s/data/ipfs-cluster/\n", oramaDir)
fmt.Printf(" %s/data/rqlite/\n", oramaDir)
fmt.Printf(" %s/logs/\n", oramaDir)
fmt.Printf(" %s/tls-cache/\n", oramaDir)
fmt.Printf("\n🔧 Binaries that would be installed:\n")
fmt.Printf(" - Go (if not present)\n")
fmt.Printf(" - RQLite 8.43.0\n")
fmt.Printf(" - IPFS/Kubo 0.38.2\n")
fmt.Printf(" - IPFS Cluster (latest)\n")
fmt.Printf(" - Olric 0.7.0\n")
fmt.Printf(" - anyone-client (npm)\n")
fmt.Printf(" - DeBros binaries (built from %s branch)\n", branch)
fmt.Printf("\n🔐 Secrets that would be generated:\n")
fmt.Printf(" - Cluster secret (64-hex)\n")
fmt.Printf(" - IPFS swarm key\n")
fmt.Printf(" - Node identity (Ed25519 keypair)\n")
fmt.Printf("\n📝 Configuration files that would be created:\n")
fmt.Printf(" - %s/configs/node.yaml\n", oramaDir)
fmt.Printf(" - %s/configs/olric/config.yaml\n", oramaDir)
fmt.Printf("\n⚙ Systemd services that would be created:\n")
fmt.Printf(" - debros-ipfs.service\n")
fmt.Printf(" - debros-ipfs-cluster.service\n")
fmt.Printf(" - debros-olric.service\n")
fmt.Printf(" - debros-node.service (includes embedded gateway + RQLite)\n")
fmt.Printf(" - debros-anyone-client.service\n")
fmt.Printf("\n🌐 Ports that would be used:\n")
fmt.Printf(" External (must be open in firewall):\n")
fmt.Printf(" - 80 (HTTP for ACME/Let's Encrypt)\n")
fmt.Printf(" - 443 (HTTPS gateway)\n")
fmt.Printf(" - 4101 (IPFS swarm)\n")
fmt.Printf(" - 7001 (RQLite Raft)\n")
fmt.Printf(" Internal (localhost only):\n")
fmt.Printf(" - 4501 (IPFS API)\n")
fmt.Printf(" - 5001 (RQLite HTTP)\n")
fmt.Printf(" - 6001 (Unified gateway)\n")
fmt.Printf(" - 8080 (IPFS gateway)\n")
fmt.Printf(" - 9050 (Anyone SOCKS5)\n")
fmt.Printf(" - 9094 (IPFS Cluster API)\n")
fmt.Printf(" - 3320/3322 (Olric)\n")
fmt.Print("\n" + strings.Repeat("=", 70) + "\n")
fmt.Printf("To proceed with installation, run without --dry-run\n")
fmt.Print(strings.Repeat("=", 70) + "\n\n")
}

pkg/cli/utils/systemd.go

@@ -0,0 +1,217 @@
package utils
import (
"errors"
"fmt"
"net"
"os"
"os/exec"
"path/filepath"
"strings"
"syscall"
)
var ErrServiceNotFound = errors.New("service not found")
// PortSpec defines a port and its name for checking availability
type PortSpec struct {
Name string
Port int
}
// ServicePorts maps each DeBros service to the ports it binds.
var ServicePorts = map[string][]PortSpec{
"debros-gateway": {
{Name: "Gateway API", Port: 6001},
},
"debros-olric": {
{Name: "Olric HTTP", Port: 3320},
{Name: "Olric Memberlist", Port: 3322},
},
"debros-node": {
{Name: "RQLite HTTP", Port: 5001},
{Name: "RQLite Raft", Port: 7001},
},
"debros-ipfs": {
{Name: "IPFS API", Port: 4501},
{Name: "IPFS Gateway", Port: 8080},
{Name: "IPFS Swarm", Port: 4101},
},
"debros-ipfs-cluster": {
{Name: "IPFS Cluster API", Port: 9094},
},
}
// DefaultPorts is used for fresh installs/upgrades before unit files exist.
func DefaultPorts() []PortSpec {
return []PortSpec{
{Name: "IPFS Swarm", Port: 4101},
{Name: "IPFS API", Port: 4501},
{Name: "IPFS Gateway", Port: 8080},
{Name: "Gateway API", Port: 6001},
{Name: "RQLite HTTP", Port: 5001},
{Name: "RQLite Raft", Port: 7001},
{Name: "IPFS Cluster API", Port: 9094},
{Name: "Olric HTTP", Port: 3320},
{Name: "Olric Memberlist", Port: 3322},
}
}
// ResolveServiceName resolves service aliases to actual systemd service names
func ResolveServiceName(alias string) ([]string, error) {
// Service alias mapping (unified - no bootstrap/node distinction)
aliases := map[string][]string{
"node": {"debros-node"},
"ipfs": {"debros-ipfs"},
"cluster": {"debros-ipfs-cluster"},
"ipfs-cluster": {"debros-ipfs-cluster"},
"gateway": {"debros-gateway"},
"olric": {"debros-olric"},
"rqlite": {"debros-node"}, // RQLite logs are in node logs
}
// Check if it's an alias
if serviceNames, ok := aliases[strings.ToLower(alias)]; ok {
// Filter to only existing services
var existing []string
for _, svc := range serviceNames {
unitPath := filepath.Join("/etc/systemd/system", svc+".service")
if _, err := os.Stat(unitPath); err == nil {
existing = append(existing, svc)
}
}
if len(existing) == 0 {
return nil, fmt.Errorf("no services found for alias %q", alias)
}
return existing, nil
}
// Check if it's already a full service name
unitPath := filepath.Join("/etc/systemd/system", alias+".service")
if _, err := os.Stat(unitPath); err == nil {
return []string{alias}, nil
}
// Handle names passed with the .service suffix already attached
if strings.HasSuffix(alias, ".service") {
unitPath = filepath.Join("/etc/systemd/system", alias)
if _, err := os.Stat(unitPath); err == nil {
return []string{strings.TrimSuffix(alias, ".service")}, nil
}
}
return nil, fmt.Errorf("service %q not found. Use: node, ipfs, cluster, gateway, olric, or full service name", alias)
}
// IsServiceActive checks if a systemd service is currently active (running)
func IsServiceActive(service string) (bool, error) {
cmd := exec.Command("systemctl", "is-active", "--quiet", service)
if err := cmd.Run(); err != nil {
if exitErr, ok := err.(*exec.ExitError); ok {
switch exitErr.ExitCode() {
case 3:
return false, nil
case 4:
return false, ErrServiceNotFound
}
}
return false, err
}
return true, nil
}
// IsServiceEnabled checks if a systemd service is enabled to start on boot
func IsServiceEnabled(service string) (bool, error) {
cmd := exec.Command("systemctl", "is-enabled", "--quiet", service)
if err := cmd.Run(); err != nil {
if exitErr, ok := err.(*exec.ExitError); ok {
switch exitErr.ExitCode() {
case 1:
return false, nil // Service is disabled
case 4:
return false, ErrServiceNotFound
}
}
return false, err
}
return true, nil
}
// IsServiceMasked checks if a systemd service is masked
func IsServiceMasked(service string) (bool, error) {
cmd := exec.Command("systemctl", "is-enabled", service)
output, err := cmd.CombinedOutput()
if err != nil {
outputStr := string(output)
if strings.Contains(outputStr, "masked") {
return true, nil
}
return false, err
}
return false, nil
}
// GetProductionServices returns a list of all DeBros production service names that exist
func GetProductionServices() []string {
// Unified service names (no bootstrap/node distinction)
allServices := []string{
"debros-gateway",
"debros-node",
"debros-olric",
"debros-ipfs-cluster",
"debros-ipfs",
"debros-anyone-client",
}
// Filter to only existing services by checking if unit file exists
var existing []string
for _, svc := range allServices {
unitPath := filepath.Join("/etc/systemd/system", svc+".service")
if _, err := os.Stat(unitPath); err == nil {
existing = append(existing, svc)
}
}
return existing
}
// CollectPortsForServices returns a list of ports used by the specified services
func CollectPortsForServices(services []string, skipActive bool) ([]PortSpec, error) {
seen := make(map[int]PortSpec)
for _, svc := range services {
if skipActive {
active, err := IsServiceActive(svc)
if err != nil {
return nil, fmt.Errorf("unable to check %s: %w", svc, err)
}
if active {
continue
}
}
for _, spec := range ServicePorts[svc] {
if _, ok := seen[spec.Port]; !ok {
seen[spec.Port] = spec
}
}
}
ports := make([]PortSpec, 0, len(seen))
for _, spec := range seen {
ports = append(ports, spec)
}
return ports, nil
}
// EnsurePortsAvailable checks if the specified ports are available
func EnsurePortsAvailable(action string, ports []PortSpec) error {
for _, spec := range ports {
ln, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", spec.Port))
if err != nil {
if errors.Is(err, syscall.EADDRINUSE) || strings.Contains(err.Error(), "address already in use") {
return fmt.Errorf("%s cannot continue: %s (port %d) is already in use", action, spec.Name, spec.Port)
}
return fmt.Errorf("%s cannot continue: failed to inspect %s (port %d): %w", action, spec.Name, spec.Port, err)
}
_ = ln.Close()
}
return nil
}
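`EnsurePortsAvailable` treats a successful bind as the availability test and closes the listener immediately. A minimal standalone sketch of that probe (`portFree` is an illustrative helper, bound to loopback rather than 0.0.0.0 to keep the example self-contained):

```go
package main

import (
	"fmt"
	"net"
)

// portFree reports whether a TCP port can currently be bound, using
// the same bind-then-close probe as EnsurePortsAvailable.
func portFree(port int) bool {
	ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
	if err != nil {
		return false
	}
	_ = ln.Close()
	return true
}

func main() {
	// Occupy an ephemeral port, then probe it while it is held open.
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	defer ln.Close()
	port := ln.Addr().(*net.TCPAddr).Port
	fmt.Println(portFree(port)) // false: the listener above holds the port
}
```

Note the probe is inherently racy: a port that tests free can be taken before the service starts, so it is a best-effort guard, not a reservation.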

pkg/cli/utils/validation.go

@@ -0,0 +1,113 @@
package utils
import (
"fmt"
"net"
"os"
"path/filepath"
"strings"
"github.com/DeBrosOfficial/network/pkg/config"
"github.com/multiformats/go-multiaddr"
)
// ValidateGeneratedConfig loads and validates the generated node configuration
func ValidateGeneratedConfig(oramaDir string) error {
configPath := filepath.Join(oramaDir, "configs", "node.yaml")
// Check if config file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
return fmt.Errorf("configuration file not found at %s", configPath)
}
// Load the config file
file, err := os.Open(configPath)
if err != nil {
return fmt.Errorf("failed to open config file: %w", err)
}
defer file.Close()
var cfg config.Config
if err := config.DecodeStrict(file, &cfg); err != nil {
return fmt.Errorf("failed to parse config: %w", err)
}
// Validate the configuration
if errs := cfg.Validate(); len(errs) > 0 {
var errMsgs []string
for _, e := range errs {
errMsgs = append(errMsgs, e.Error())
}
return fmt.Errorf("configuration validation errors:\n - %s", strings.Join(errMsgs, "\n - "))
}
return nil
}
// ValidateDNSRecord validates that the domain points to the expected IP address
// Returns nil if DNS is valid, warning message if DNS doesn't match but continues,
// or error if DNS lookup fails completely
func ValidateDNSRecord(domain, expectedIP string) error {
if domain == "" {
return nil // No domain provided, skip validation
}
ips, err := net.LookupIP(domain)
if err != nil {
// DNS lookup failed - this is a warning, not a fatal error
// The user might be setting up DNS after installation
fmt.Printf(" ⚠️ DNS lookup failed for %s: %v\n", domain, err)
fmt.Printf(" Make sure DNS is configured before enabling HTTPS\n")
return nil
}
// Check if any resolved IP matches the expected IP
for _, ip := range ips {
if ip.String() == expectedIP {
fmt.Printf(" ✓ DNS validated: %s → %s\n", domain, expectedIP)
return nil
}
}
// DNS doesn't point to expected IP - warn but continue
resolvedIPs := make([]string, len(ips))
for i, ip := range ips {
resolvedIPs[i] = ip.String()
}
fmt.Printf(" ⚠️ DNS mismatch: %s resolves to %v, expected %s\n", domain, resolvedIPs, expectedIP)
fmt.Printf(" HTTPS certificate generation may fail until DNS is updated\n")
return nil
}
// NormalizePeers normalizes and validates peer multiaddrs
func NormalizePeers(peersStr string) ([]string, error) {
if peersStr == "" {
return nil, nil
}
// Split by comma and trim whitespace
rawPeers := strings.Split(peersStr, ",")
peers := make([]string, 0, len(rawPeers))
seen := make(map[string]bool)
for _, peer := range rawPeers {
peer = strings.TrimSpace(peer)
if peer == "" {
continue
}
// Validate multiaddr format
if _, err := multiaddr.NewMultiaddr(peer); err != nil {
return nil, fmt.Errorf("invalid multiaddr %q: %w", peer, err)
}
// Deduplicate
if !seen[peer] {
peers = append(peers, peer)
seen[peer] = true
}
}
return peers, nil
}
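`NormalizePeers` combines splitting, trimming, and order-preserving deduplication. This standalone sketch shows everything except the multiaddr validation step (`normalize` is an illustrative name; the real function additionally rejects strings that fail `multiaddr.NewMultiaddr`):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize splits a comma-separated peer list, trims whitespace,
// drops empty entries, and deduplicates while preserving the order in
// which peers first appear.
func normalize(peersStr string) []string {
	var peers []string
	seen := make(map[string]bool)
	for _, p := range strings.Split(peersStr, ",") {
		p = strings.TrimSpace(p)
		if p == "" || seen[p] {
			continue
		}
		seen[p] = true
		peers = append(peers, p)
	}
	return peers
}

func main() {
	fmt.Println(normalize(" /ip4/a, /ip4/b ,, /ip4/a"))
}
```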


@@ -329,6 +329,18 @@ func (c *Client) getAppNamespace() string {
return c.config.AppName
}
// PubSubAdapter returns the underlying pubsub.ClientAdapter for direct use by serverless functions.
// This bypasses the authentication checks used by PubSub() since serverless functions
// are already authenticated via the gateway.
func (c *Client) PubSubAdapter() *pubsub.ClientAdapter {
c.mu.RLock()
defer c.mu.RUnlock()
if c.pubsub == nil {
return nil
}
return c.pubsub.adapter
}
// requireAccess enforces that credentials are present and that any context-based namespace overrides match
func (c *Client) requireAccess(ctx context.Context) error {
// Allow internal system operations to bypass authentication

pkg/client/config.go

@@ -0,0 +1,42 @@
package client
import (
"fmt"
"time"
)
// ClientConfig represents configuration for network clients
type ClientConfig struct {
AppName string `json:"app_name"`
DatabaseName string `json:"database_name"`
BootstrapPeers []string `json:"peers"`
DatabaseEndpoints []string `json:"database_endpoints"`
GatewayURL string `json:"gateway_url"` // Gateway URL for HTTP API access (e.g., "http://localhost:6001")
ConnectTimeout time.Duration `json:"connect_timeout"`
RetryAttempts int `json:"retry_attempts"`
RetryDelay time.Duration `json:"retry_delay"`
QuietMode bool `json:"quiet_mode"` // Suppress debug/info logs
APIKey string `json:"api_key"` // API key for gateway auth
JWT string `json:"jwt"` // Optional JWT bearer token
}
// DefaultClientConfig returns a default client configuration
func DefaultClientConfig(appName string) *ClientConfig {
// Base defaults
peers := DefaultBootstrapPeers()
endpoints := DefaultDatabaseEndpoints()
return &ClientConfig{
AppName: appName,
DatabaseName: fmt.Sprintf("%s_db", appName),
BootstrapPeers: peers,
DatabaseEndpoints: endpoints,
GatewayURL: "http://localhost:6001",
ConnectTimeout: time.Second * 30,
RetryAttempts: 3,
RetryDelay: time.Second * 5,
QuietMode: false,
APIKey: "",
JWT: "",
}
}


@@ -2,15 +2,10 @@ package client
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/multiformats/go-multiaddr"
"github.com/rqlite/gorqlite"
)
@@ -203,8 +198,7 @@ func (d *DatabaseClientImpl) getRQLiteNodes() []string {
return DefaultDatabaseEndpoints()
}
// normalizeEndpoints now lives in defaults.go
// hasPort checks if a hostport string has a port suffix
func hasPort(hostport string) bool {
// cheap check for :port suffix (IPv6 with brackets handled by url.Parse earlier)
if i := strings.LastIndex(hostport, ":"); i > -1 && i < len(hostport)-1 {
@ -406,260 +400,3 @@ func (d *DatabaseClientImpl) GetSchema(ctx context.Context) (*SchemaInfo, error)
return schema, nil
}
// NetworkInfoImpl implements NetworkInfo
type NetworkInfoImpl struct {
client *Client
}
// GetPeers returns information about connected peers
func (n *NetworkInfoImpl) GetPeers(ctx context.Context) ([]PeerInfo, error) {
if !n.client.isConnected() {
return nil, fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return nil, fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
// Get peers from LibP2P host
host := n.client.host
if host == nil {
return nil, fmt.Errorf("no host available")
}
// Get connected peers
connectedPeers := host.Network().Peers()
peers := make([]PeerInfo, 0, len(connectedPeers)+1) // +1 for self
// Add connected peers
for _, peerID := range connectedPeers {
// Get peer addresses
peerInfo := host.Peerstore().PeerInfo(peerID)
// Convert multiaddrs to strings
addrs := make([]string, len(peerInfo.Addrs))
for i, addr := range peerInfo.Addrs {
addrs[i] = addr.String()
}
peers = append(peers, PeerInfo{
ID: peerID.String(),
Addresses: addrs,
Connected: true,
LastSeen: time.Now(), // LibP2P doesn't track last seen, so use current time
})
}
// Add self node
selfPeerInfo := host.Peerstore().PeerInfo(host.ID())
selfAddrs := make([]string, len(selfPeerInfo.Addrs))
for i, addr := range selfPeerInfo.Addrs {
selfAddrs[i] = addr.String()
}
// Insert self node at the beginning of the list
selfPeer := PeerInfo{
ID: host.ID().String(),
Addresses: selfAddrs,
Connected: true,
LastSeen: time.Now(),
}
// Prepend self to the list
peers = append([]PeerInfo{selfPeer}, peers...)
return peers, nil
}
// GetStatus returns network status
func (n *NetworkInfoImpl) GetStatus(ctx context.Context) (*NetworkStatus, error) {
if !n.client.isConnected() {
return nil, fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return nil, fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
host := n.client.host
if host == nil {
return nil, fmt.Errorf("no host available")
}
// Get actual network status
connectedPeers := host.Network().Peers()
// Try to get database size from RQLite (optional - don't fail if unavailable)
var dbSize int64 = 0
dbClient := n.client.database
if conn, err := dbClient.getRQLiteConnection(); err == nil {
// Query database size (rough estimate)
if result, err := conn.QueryOne("SELECT page_count * page_size as size FROM pragma_page_count(), pragma_page_size()"); err == nil {
for result.Next() {
if row, err := result.Slice(); err == nil && len(row) > 0 {
if size, ok := row[0].(int64); ok {
dbSize = size
}
}
}
}
}
// Try to get IPFS peer info (optional - don't fail if unavailable)
ipfsInfo := queryIPFSPeerInfo()
// Try to get IPFS Cluster peer info (optional - don't fail if unavailable)
ipfsClusterInfo := queryIPFSClusterPeerInfo()
return &NetworkStatus{
NodeID: host.ID().String(),
PeerID: host.ID().String(),
Connected: true,
PeerCount: len(connectedPeers),
DatabaseSize: dbSize,
Uptime: time.Since(n.client.startTime),
IPFS: ipfsInfo,
IPFSCluster: ipfsClusterInfo,
}, nil
}
// queryIPFSPeerInfo queries the local IPFS API for peer information
// Returns nil if IPFS is not running or unavailable
func queryIPFSPeerInfo() *IPFSPeerInfo {
// IPFS API typically runs on port 4501 in our setup
client := &http.Client{Timeout: 2 * time.Second}
resp, err := client.Post("http://localhost:4501/api/v0/id", "", nil)
if err != nil {
return nil // IPFS not available
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil
}
var result struct {
ID string `json:"ID"`
Addresses []string `json:"Addresses"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil
}
// Filter addresses to only include public/routable ones
var swarmAddrs []string
for _, addr := range result.Addresses {
// Skip loopback and private addresses for external discovery
if !strings.Contains(addr, "127.0.0.1") && !strings.Contains(addr, "/ip6/::1") {
swarmAddrs = append(swarmAddrs, addr)
}
}
return &IPFSPeerInfo{
PeerID: result.ID,
SwarmAddresses: swarmAddrs,
}
}
// queryIPFSClusterPeerInfo queries the local IPFS Cluster API for peer information
// Returns nil if IPFS Cluster is not running or unavailable
func queryIPFSClusterPeerInfo() *IPFSClusterPeerInfo {
// IPFS Cluster API typically runs on port 9094 in our setup
client := &http.Client{Timeout: 2 * time.Second}
resp, err := client.Get("http://localhost:9094/id")
if err != nil {
return nil // IPFS Cluster not available
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil
}
var result struct {
ID string `json:"id"`
Addresses []string `json:"addresses"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil
}
// Filter addresses to only include public/routable ones for cluster discovery
var clusterAddrs []string
for _, addr := range result.Addresses {
// Skip loopback addresses - only keep routable addresses
if !strings.Contains(addr, "127.0.0.1") && !strings.Contains(addr, "/ip6/::1") {
clusterAddrs = append(clusterAddrs, addr)
}
}
return &IPFSClusterPeerInfo{
PeerID: result.ID,
Addresses: clusterAddrs,
}
}
// ConnectToPeer connects to a specific peer
func (n *NetworkInfoImpl) ConnectToPeer(ctx context.Context, peerAddr string) error {
if !n.client.isConnected() {
return fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
host := n.client.host
if host == nil {
return fmt.Errorf("no host available")
}
// Parse the multiaddr
ma, err := multiaddr.NewMultiaddr(peerAddr)
if err != nil {
return fmt.Errorf("invalid multiaddr: %w", err)
}
// Extract peer info
peerInfo, err := peer.AddrInfoFromP2pAddr(ma)
if err != nil {
return fmt.Errorf("failed to extract peer info: %w", err)
}
// Connect to the peer
if err := host.Connect(ctx, *peerInfo); err != nil {
return fmt.Errorf("failed to connect to peer: %w", err)
}
return nil
}
// DisconnectFromPeer disconnects from a specific peer
func (n *NetworkInfoImpl) DisconnectFromPeer(ctx context.Context, peerID string) error {
if !n.client.isConnected() {
return fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
host := n.client.host
if host == nil {
return fmt.Errorf("no host available")
}
// Parse the peer ID
pid, err := peer.Decode(peerID)
if err != nil {
return fmt.Errorf("invalid peer ID: %w", err)
}
// Close the connection to the peer
if err := host.Network().ClosePeer(pid); err != nil {
return fmt.Errorf("failed to disconnect from peer: %w", err)
}
return nil
}

pkg/client/errors.go Normal file

@ -0,0 +1,51 @@
package client
import (
"errors"
"fmt"
)
// Common client errors
var (
// ErrNotConnected indicates the client is not connected to the network
ErrNotConnected = errors.New("client not connected")
// ErrAuthRequired indicates authentication is required for the operation
ErrAuthRequired = errors.New("authentication required")
// ErrNoHost indicates no LibP2P host is available
ErrNoHost = errors.New("no host available")
// ErrInvalidConfig indicates the client configuration is invalid
ErrInvalidConfig = errors.New("invalid configuration")
// ErrNamespaceMismatch indicates a namespace mismatch
ErrNamespaceMismatch = errors.New("namespace mismatch")
)
// ClientError represents a client-specific error with additional context
type ClientError struct {
Op string // Operation that failed
Message string // Error message
Err error // Underlying error
}
func (e *ClientError) Error() string {
if e.Err != nil {
return fmt.Sprintf("%s: %s: %v", e.Op, e.Message, e.Err)
}
return fmt.Sprintf("%s: %s", e.Op, e.Message)
}
func (e *ClientError) Unwrap() error {
return e.Err
}
// NewClientError creates a new ClientError
func NewClientError(op, message string, err error) *ClientError {
return &ClientError{
Op: op,
Message: message,
Err: err,
}
}


@ -2,7 +2,6 @@ package client
import (
"context"
"fmt"
"io"
"time"
)
@ -168,39 +167,3 @@ type StorageStatus struct {
Peers []string `json:"peers"`
Error string `json:"error,omitempty"`
}
// ClientConfig represents configuration for network clients
type ClientConfig struct {
AppName string `json:"app_name"`
DatabaseName string `json:"database_name"`
BootstrapPeers []string `json:"peers"`
DatabaseEndpoints []string `json:"database_endpoints"`
GatewayURL string `json:"gateway_url"` // Gateway URL for HTTP API access (e.g., "http://localhost:6001")
ConnectTimeout time.Duration `json:"connect_timeout"`
RetryAttempts int `json:"retry_attempts"`
RetryDelay time.Duration `json:"retry_delay"`
QuietMode bool `json:"quiet_mode"` // Suppress debug/info logs
APIKey string `json:"api_key"` // API key for gateway auth
JWT string `json:"jwt"` // Optional JWT bearer token
}
// DefaultClientConfig returns a default client configuration
func DefaultClientConfig(appName string) *ClientConfig {
// Base defaults
peers := DefaultBootstrapPeers()
endpoints := DefaultDatabaseEndpoints()
return &ClientConfig{
AppName: appName,
DatabaseName: fmt.Sprintf("%s_db", appName),
BootstrapPeers: peers,
DatabaseEndpoints: endpoints,
GatewayURL: "http://localhost:6001",
ConnectTimeout: time.Second * 30,
RetryAttempts: 3,
RetryDelay: time.Second * 5,
QuietMode: false,
APIKey: "",
JWT: "",
}
}


@ -0,0 +1,270 @@
package client
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strings"
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/multiformats/go-multiaddr"
)
// NetworkInfoImpl implements NetworkInfo
type NetworkInfoImpl struct {
client *Client
}
// GetPeers returns information about connected peers
func (n *NetworkInfoImpl) GetPeers(ctx context.Context) ([]PeerInfo, error) {
if !n.client.isConnected() {
return nil, fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return nil, fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
// Get peers from LibP2P host
host := n.client.host
if host == nil {
return nil, fmt.Errorf("no host available")
}
// Get connected peers
connectedPeers := host.Network().Peers()
peers := make([]PeerInfo, 0, len(connectedPeers)+1) // +1 for self
// Add connected peers
for _, peerID := range connectedPeers {
// Get peer addresses
peerInfo := host.Peerstore().PeerInfo(peerID)
// Convert multiaddrs to strings
addrs := make([]string, len(peerInfo.Addrs))
for i, addr := range peerInfo.Addrs {
addrs[i] = addr.String()
}
peers = append(peers, PeerInfo{
ID: peerID.String(),
Addresses: addrs,
Connected: true,
LastSeen: time.Now(), // LibP2P doesn't track last seen, so use current time
})
}
// Add self node
selfPeerInfo := host.Peerstore().PeerInfo(host.ID())
selfAddrs := make([]string, len(selfPeerInfo.Addrs))
for i, addr := range selfPeerInfo.Addrs {
selfAddrs[i] = addr.String()
}
// Insert self node at the beginning of the list
selfPeer := PeerInfo{
ID: host.ID().String(),
Addresses: selfAddrs,
Connected: true,
LastSeen: time.Now(),
}
// Prepend self to the list
peers = append([]PeerInfo{selfPeer}, peers...)
return peers, nil
}
// GetStatus returns network status
func (n *NetworkInfoImpl) GetStatus(ctx context.Context) (*NetworkStatus, error) {
if !n.client.isConnected() {
return nil, fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return nil, fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
host := n.client.host
if host == nil {
return nil, fmt.Errorf("no host available")
}
// Get actual network status
connectedPeers := host.Network().Peers()
// Try to get database size from RQLite (optional - don't fail if unavailable)
var dbSize int64 = 0
dbClient := n.client.database
if conn, err := dbClient.getRQLiteConnection(); err == nil {
// Query database size (rough estimate)
if result, err := conn.QueryOne("SELECT page_count * page_size as size FROM pragma_page_count(), pragma_page_size()"); err == nil {
for result.Next() {
if row, err := result.Slice(); err == nil && len(row) > 0 {
if size, ok := row[0].(int64); ok {
dbSize = size
}
}
}
}
}
// Try to get IPFS peer info (optional - don't fail if unavailable)
ipfsInfo := queryIPFSPeerInfo()
// Try to get IPFS Cluster peer info (optional - don't fail if unavailable)
ipfsClusterInfo := queryIPFSClusterPeerInfo()
return &NetworkStatus{
NodeID: host.ID().String(),
PeerID: host.ID().String(),
Connected: true,
PeerCount: len(connectedPeers),
DatabaseSize: dbSize,
Uptime: time.Since(n.client.startTime),
IPFS: ipfsInfo,
IPFSCluster: ipfsClusterInfo,
}, nil
}
// queryIPFSPeerInfo queries the local IPFS API for peer information
// Returns nil if IPFS is not running or unavailable
func queryIPFSPeerInfo() *IPFSPeerInfo {
// IPFS API typically runs on port 4501 in our setup
client := &http.Client{Timeout: 2 * time.Second}
resp, err := client.Post("http://localhost:4501/api/v0/id", "", nil)
if err != nil {
return nil // IPFS not available
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil
}
var result struct {
ID string `json:"ID"`
Addresses []string `json:"Addresses"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil
}
// Filter addresses to only include public/routable ones
var swarmAddrs []string
for _, addr := range result.Addresses {
// Skip loopback and private addresses for external discovery
if !strings.Contains(addr, "127.0.0.1") && !strings.Contains(addr, "/ip6/::1") {
swarmAddrs = append(swarmAddrs, addr)
}
}
return &IPFSPeerInfo{
PeerID: result.ID,
SwarmAddresses: swarmAddrs,
}
}
// queryIPFSClusterPeerInfo queries the local IPFS Cluster API for peer information
// Returns nil if IPFS Cluster is not running or unavailable
func queryIPFSClusterPeerInfo() *IPFSClusterPeerInfo {
// IPFS Cluster API typically runs on port 9094 in our setup
client := &http.Client{Timeout: 2 * time.Second}
resp, err := client.Get("http://localhost:9094/id")
if err != nil {
return nil // IPFS Cluster not available
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil
}
var result struct {
ID string `json:"id"`
Addresses []string `json:"addresses"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil
}
// Filter addresses to only include public/routable ones for cluster discovery
var clusterAddrs []string
for _, addr := range result.Addresses {
// Skip loopback addresses - only keep routable addresses
if !strings.Contains(addr, "127.0.0.1") && !strings.Contains(addr, "/ip6/::1") {
clusterAddrs = append(clusterAddrs, addr)
}
}
return &IPFSClusterPeerInfo{
PeerID: result.ID,
Addresses: clusterAddrs,
}
}
// ConnectToPeer connects to a specific peer
func (n *NetworkInfoImpl) ConnectToPeer(ctx context.Context, peerAddr string) error {
if !n.client.isConnected() {
return fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
host := n.client.host
if host == nil {
return fmt.Errorf("no host available")
}
// Parse the multiaddr
ma, err := multiaddr.NewMultiaddr(peerAddr)
if err != nil {
return fmt.Errorf("invalid multiaddr: %w", err)
}
// Extract peer info
peerInfo, err := peer.AddrInfoFromP2pAddr(ma)
if err != nil {
return fmt.Errorf("failed to extract peer info: %w", err)
}
// Connect to the peer
if err := host.Connect(ctx, *peerInfo); err != nil {
return fmt.Errorf("failed to connect to peer: %w", err)
}
return nil
}
// DisconnectFromPeer disconnects from a specific peer
func (n *NetworkInfoImpl) DisconnectFromPeer(ctx context.Context, peerID string) error {
if !n.client.isConnected() {
return fmt.Errorf("client not connected")
}
if err := n.client.requireAccess(ctx); err != nil {
return fmt.Errorf("authentication required: %w - run CLI commands to authenticate automatically", err)
}
host := n.client.host
if host == nil {
return fmt.Errorf("no host available")
}
// Parse the peer ID
pid, err := peer.Decode(peerID)
if err != nil {
return fmt.Errorf("invalid peer ID: %w", err)
}
// Close the connection to the peer
if err := host.Network().ClosePeer(pid); err != nil {
return fmt.Errorf("failed to disconnect from peer: %w", err)
}
return nil
}


@ -8,7 +8,6 @@ import (
"io"
"mime/multipart"
"net/http"
"strings"
"time"
)
@ -215,31 +214,12 @@ func (s *StorageClientImpl) Unpin(ctx context.Context, cid string) error {
return nil
}
// getGatewayURL returns the gateway URL from config, defaulting to localhost:6001
// getGatewayURL returns the gateway URL from config
func (s *StorageClientImpl) getGatewayURL() string {
cfg := s.client.Config()
if cfg != nil && cfg.GatewayURL != "" {
return strings.TrimSuffix(cfg.GatewayURL, "/")
}
return "http://localhost:6001"
return getGatewayURL(s.client)
}
// addAuthHeaders adds authentication headers to the request
func (s *StorageClientImpl) addAuthHeaders(req *http.Request) {
cfg := s.client.Config()
if cfg == nil {
return
}
// Prefer JWT if available
if cfg.JWT != "" {
req.Header.Set("Authorization", "Bearer "+cfg.JWT)
return
}
// Fallback to API key
if cfg.APIKey != "" {
req.Header.Set("Authorization", "Bearer "+cfg.APIKey)
req.Header.Set("X-API-Key", cfg.APIKey)
}
addAuthHeaders(req, s.client)
}

pkg/client/transport.go Normal file

@ -0,0 +1,35 @@
package client
import (
"net/http"
"strings"
)
// getGatewayURL returns the gateway URL from config, defaulting to localhost:6001
func getGatewayURL(c *Client) string {
cfg := c.Config()
if cfg != nil && cfg.GatewayURL != "" {
return strings.TrimSuffix(cfg.GatewayURL, "/")
}
return "http://localhost:6001"
}
// addAuthHeaders adds authentication headers to the request
func addAuthHeaders(req *http.Request, c *Client) {
cfg := c.Config()
if cfg == nil {
return
}
// Prefer JWT if available
if cfg.JWT != "" {
req.Header.Set("Authorization", "Bearer "+cfg.JWT)
return
}
// Fallback to API key
if cfg.APIKey != "" {
req.Header.Set("Authorization", "Bearer "+cfg.APIKey)
req.Header.Set("X-API-Key", cfg.APIKey)
}
}


@ -3,6 +3,7 @@ package config
import (
"time"
"github.com/DeBrosOfficial/network/pkg/config/validate"
"github.com/multiformats/go-multiaddr"
)
@ -16,152 +17,67 @@ type Config struct {
HTTPGateway HTTPGatewayConfig `yaml:"http_gateway"`
}
// NodeConfig contains node-specific configuration
type NodeConfig struct {
ID string `yaml:"id"` // Auto-generated if empty
ListenAddresses []string `yaml:"listen_addresses"` // LibP2P listen addresses
DataDir string `yaml:"data_dir"` // Data directory
MaxConnections int `yaml:"max_connections"` // Maximum peer connections
Domain string `yaml:"domain"` // Domain for this node (e.g., node-1.orama.network)
// ValidationError represents a single validation error with context.
// This is exported from the validate subpackage for backward compatibility.
type ValidationError = validate.ValidationError
// ValidateSwarmKey validates that a swarm key is 64 hex characters.
// This is exported from the validate subpackage for backward compatibility.
func ValidateSwarmKey(key string) error {
return validate.ValidateSwarmKey(key)
}
// DatabaseConfig contains database-related configuration
type DatabaseConfig struct {
DataDir string `yaml:"data_dir"`
ReplicationFactor int `yaml:"replication_factor"`
ShardCount int `yaml:"shard_count"`
MaxDatabaseSize int64 `yaml:"max_database_size"` // In bytes
BackupInterval time.Duration `yaml:"backup_interval"`
// Validate performs comprehensive validation of the entire config.
// It aggregates all errors and returns them, allowing the caller to print all issues at once.
func (c *Config) Validate() []error {
var errs []error
// RQLite-specific configuration
RQLitePort int `yaml:"rqlite_port"` // RQLite HTTP API port
RQLiteRaftPort int `yaml:"rqlite_raft_port"` // RQLite Raft consensus port
RQLiteJoinAddress string `yaml:"rqlite_join_address"` // Address to join RQLite cluster
// Validate node config
errs = append(errs, validate.ValidateNode(validate.NodeConfig{
ID: c.Node.ID,
ListenAddresses: c.Node.ListenAddresses,
DataDir: c.Node.DataDir,
MaxConnections: c.Node.MaxConnections,
})...)
// RQLite node-to-node TLS encryption (for inter-node Raft communication)
// See: https://rqlite.io/docs/guides/security/#encrypting-node-to-node-communication
NodeCert string `yaml:"node_cert"` // Path to X.509 certificate for node-to-node communication
NodeKey string `yaml:"node_key"` // Path to X.509 private key for node-to-node communication
NodeCACert string `yaml:"node_ca_cert"` // Path to CA certificate (optional, uses system CA if not set)
NodeNoVerify bool `yaml:"node_no_verify"` // Skip certificate verification (for testing/self-signed certs)
// Validate database config
errs = append(errs, validate.ValidateDatabase(validate.DatabaseConfig{
DataDir: c.Database.DataDir,
ReplicationFactor: c.Database.ReplicationFactor,
ShardCount: c.Database.ShardCount,
MaxDatabaseSize: c.Database.MaxDatabaseSize,
RQLitePort: c.Database.RQLitePort,
RQLiteRaftPort: c.Database.RQLiteRaftPort,
RQLiteJoinAddress: c.Database.RQLiteJoinAddress,
ClusterSyncInterval: c.Database.ClusterSyncInterval,
PeerInactivityLimit: c.Database.PeerInactivityLimit,
MinClusterSize: c.Database.MinClusterSize,
})...)
// Dynamic discovery configuration (always enabled)
ClusterSyncInterval time.Duration `yaml:"cluster_sync_interval"` // default: 30s
PeerInactivityLimit time.Duration `yaml:"peer_inactivity_limit"` // default: 24h
MinClusterSize int `yaml:"min_cluster_size"` // default: 1
// Validate discovery config
errs = append(errs, validate.ValidateDiscovery(validate.DiscoveryConfig{
BootstrapPeers: c.Discovery.BootstrapPeers,
DiscoveryInterval: c.Discovery.DiscoveryInterval,
BootstrapPort: c.Discovery.BootstrapPort,
HttpAdvAddress: c.Discovery.HttpAdvAddress,
RaftAdvAddress: c.Discovery.RaftAdvAddress,
})...)
// Olric cache configuration
OlricHTTPPort int `yaml:"olric_http_port"` // Olric HTTP API port (default: 3320)
OlricMemberlistPort int `yaml:"olric_memberlist_port"` // Olric memberlist port (default: 3322)
// Validate security config
errs = append(errs, validate.ValidateSecurity(validate.SecurityConfig{
EnableTLS: c.Security.EnableTLS,
PrivateKeyFile: c.Security.PrivateKeyFile,
CertificateFile: c.Security.CertificateFile,
})...)
// IPFS storage configuration
IPFS IPFSConfig `yaml:"ipfs"`
}
// Validate logging config
errs = append(errs, validate.ValidateLogging(validate.LoggingConfig{
Level: c.Logging.Level,
Format: c.Logging.Format,
OutputFile: c.Logging.OutputFile,
})...)
// IPFSConfig contains IPFS storage configuration
type IPFSConfig struct {
// ClusterAPIURL is the IPFS Cluster HTTP API URL (e.g., "http://localhost:9094")
// If empty, IPFS storage is disabled for this node
ClusterAPIURL string `yaml:"cluster_api_url"`
// APIURL is the IPFS HTTP API URL for content retrieval (e.g., "http://localhost:5001")
// If empty, defaults to "http://localhost:5001"
APIURL string `yaml:"api_url"`
// Timeout for IPFS operations
// If zero, defaults to 60 seconds
Timeout time.Duration `yaml:"timeout"`
// ReplicationFactor is the replication factor for pinned content
// If zero, defaults to 3
ReplicationFactor int `yaml:"replication_factor"`
// EnableEncryption enables client-side encryption before upload
// Defaults to true
EnableEncryption bool `yaml:"enable_encryption"`
}
// DiscoveryConfig contains peer discovery configuration
type DiscoveryConfig struct {
BootstrapPeers []string `yaml:"bootstrap_peers"` // Peer addresses to connect to
DiscoveryInterval time.Duration `yaml:"discovery_interval"` // Discovery announcement interval
BootstrapPort int `yaml:"bootstrap_port"` // Default port for peer discovery
HttpAdvAddress string `yaml:"http_adv_address"` // HTTP advertisement address
RaftAdvAddress string `yaml:"raft_adv_address"` // Raft advertisement address
NodeNamespace string `yaml:"node_namespace"` // Namespace for node identifiers
}
// SecurityConfig contains security-related configuration
type SecurityConfig struct {
EnableTLS bool `yaml:"enable_tls"`
PrivateKeyFile string `yaml:"private_key_file"`
CertificateFile string `yaml:"certificate_file"`
}
// LoggingConfig contains logging configuration
type LoggingConfig struct {
Level string `yaml:"level"` // debug, info, warn, error
Format string `yaml:"format"` // json, console
OutputFile string `yaml:"output_file"` // Empty for stdout
}
// HTTPGatewayConfig contains HTTP reverse proxy gateway configuration
type HTTPGatewayConfig struct {
Enabled bool `yaml:"enabled"` // Enable HTTP gateway
ListenAddr string `yaml:"listen_addr"` // Address to listen on (e.g., ":8080")
NodeName string `yaml:"node_name"` // Node name for routing
Routes map[string]RouteConfig `yaml:"routes"` // Service routes
HTTPS HTTPSConfig `yaml:"https"` // HTTPS/TLS configuration
SNI SNIConfig `yaml:"sni"` // SNI-based TCP routing configuration
// Full gateway configuration (for API, auth, pubsub)
ClientNamespace string `yaml:"client_namespace"` // Namespace for network client
RQLiteDSN string `yaml:"rqlite_dsn"` // RQLite database DSN
OlricServers []string `yaml:"olric_servers"` // List of Olric server addresses
OlricTimeout time.Duration `yaml:"olric_timeout"` // Timeout for Olric operations
IPFSClusterAPIURL string `yaml:"ipfs_cluster_api_url"` // IPFS Cluster API URL
IPFSAPIURL string `yaml:"ipfs_api_url"` // IPFS API URL
IPFSTimeout time.Duration `yaml:"ipfs_timeout"` // Timeout for IPFS operations
}
// HTTPSConfig contains HTTPS/TLS configuration for the gateway
type HTTPSConfig struct {
Enabled bool `yaml:"enabled"` // Enable HTTPS (port 443)
Domain string `yaml:"domain"` // Primary domain (e.g., node-123.orama.network)
AutoCert bool `yaml:"auto_cert"` // Use Let's Encrypt for automatic certificate
UseSelfSigned bool `yaml:"use_self_signed"` // Use self-signed certificates (pre-generated)
CertFile string `yaml:"cert_file"` // Path to certificate file (if not using auto_cert)
KeyFile string `yaml:"key_file"` // Path to key file (if not using auto_cert)
CacheDir string `yaml:"cache_dir"` // Directory for Let's Encrypt certificate cache
HTTPPort int `yaml:"http_port"` // HTTP port for ACME challenge (default: 80)
HTTPSPort int `yaml:"https_port"` // HTTPS port (default: 443)
Email string `yaml:"email"` // Email for Let's Encrypt account
}
// SNIConfig contains SNI-based TCP routing configuration for port 7001
type SNIConfig struct {
Enabled bool `yaml:"enabled"` // Enable SNI-based TCP routing
ListenAddr string `yaml:"listen_addr"` // Address to listen on (e.g., ":7001")
Routes map[string]string `yaml:"routes"` // SNI hostname -> backend address mapping
CertFile string `yaml:"cert_file"` // Path to certificate file
KeyFile string `yaml:"key_file"` // Path to key file
}
// RouteConfig defines a single reverse proxy route
type RouteConfig struct {
PathPrefix string `yaml:"path_prefix"` // URL path prefix (e.g., "/rqlite/http")
BackendURL string `yaml:"backend_url"` // Backend service URL
Timeout time.Duration `yaml:"timeout"` // Request timeout
WebSocket bool `yaml:"websocket"` // Support WebSocket upgrades
}
// ClientConfig represents configuration for network clients
type ClientConfig struct {
AppName string `yaml:"app_name"`
DatabaseName string `yaml:"database_name"`
BootstrapPeers []string `yaml:"bootstrap_peers"`
ConnectTimeout time.Duration `yaml:"connect_timeout"`
RetryAttempts int `yaml:"retry_attempts"`
return errs
}
// ParseMultiaddrs converts string addresses to multiaddr objects


@ -0,0 +1,59 @@
package config
import "time"
// DatabaseConfig contains database-related configuration
type DatabaseConfig struct {
DataDir string `yaml:"data_dir"`
ReplicationFactor int `yaml:"replication_factor"`
ShardCount int `yaml:"shard_count"`
MaxDatabaseSize int64 `yaml:"max_database_size"` // In bytes
BackupInterval time.Duration `yaml:"backup_interval"`
// RQLite-specific configuration
RQLitePort int `yaml:"rqlite_port"` // RQLite HTTP API port
RQLiteRaftPort int `yaml:"rqlite_raft_port"` // RQLite Raft consensus port
RQLiteJoinAddress string `yaml:"rqlite_join_address"` // Address to join RQLite cluster
// RQLite node-to-node TLS encryption (for inter-node Raft communication)
// See: https://rqlite.io/docs/guides/security/#encrypting-node-to-node-communication
NodeCert string `yaml:"node_cert"` // Path to X.509 certificate for node-to-node communication
NodeKey string `yaml:"node_key"` // Path to X.509 private key for node-to-node communication
NodeCACert string `yaml:"node_ca_cert"` // Path to CA certificate (optional, uses system CA if not set)
NodeNoVerify bool `yaml:"node_no_verify"` // Skip certificate verification (for testing/self-signed certs)
// Dynamic discovery configuration (always enabled)
ClusterSyncInterval time.Duration `yaml:"cluster_sync_interval"` // default: 30s
PeerInactivityLimit time.Duration `yaml:"peer_inactivity_limit"` // default: 24h
MinClusterSize int `yaml:"min_cluster_size"` // default: 1
// Olric cache configuration
OlricHTTPPort int `yaml:"olric_http_port"` // Olric HTTP API port (default: 3320)
OlricMemberlistPort int `yaml:"olric_memberlist_port"` // Olric memberlist port (default: 3322)
// IPFS storage configuration
IPFS IPFSConfig `yaml:"ipfs"`
}
// IPFSConfig contains IPFS storage configuration
type IPFSConfig struct {
// ClusterAPIURL is the IPFS Cluster HTTP API URL (e.g., "http://localhost:9094")
// If empty, IPFS storage is disabled for this node
ClusterAPIURL string `yaml:"cluster_api_url"`
// APIURL is the IPFS HTTP API URL for content retrieval (e.g., "http://localhost:5001")
// If empty, defaults to "http://localhost:5001"
APIURL string `yaml:"api_url"`
// Timeout for IPFS operations
// If zero, defaults to 60 seconds
Timeout time.Duration `yaml:"timeout"`
// ReplicationFactor is the replication factor for pinned content
// If zero, defaults to 3
ReplicationFactor int `yaml:"replication_factor"`
// EnableEncryption enables client-side encryption before upload
// Defaults to true
EnableEncryption bool `yaml:"enable_encryption"`
}


@ -0,0 +1,13 @@
package config
import "time"
// DiscoveryConfig contains peer discovery configuration
type DiscoveryConfig struct {
BootstrapPeers []string `yaml:"bootstrap_peers"` // Peer addresses to connect to
DiscoveryInterval time.Duration `yaml:"discovery_interval"` // Discovery announcement interval
BootstrapPort int `yaml:"bootstrap_port"` // Default port for peer discovery
HttpAdvAddress string `yaml:"http_adv_address"` // HTTP advertisement address
RaftAdvAddress string `yaml:"raft_adv_address"` // Raft advertisement address
NodeNamespace string `yaml:"node_namespace"` // Namespace for node identifiers
}


@ -0,0 +1,62 @@
package config
import "time"
// HTTPGatewayConfig contains HTTP reverse proxy gateway configuration
type HTTPGatewayConfig struct {
Enabled bool `yaml:"enabled"` // Enable HTTP gateway
ListenAddr string `yaml:"listen_addr"` // Address to listen on (e.g., ":8080")
NodeName string `yaml:"node_name"` // Node name for routing
Routes map[string]RouteConfig `yaml:"routes"` // Service routes
HTTPS HTTPSConfig `yaml:"https"` // HTTPS/TLS configuration
SNI SNIConfig `yaml:"sni"` // SNI-based TCP routing configuration
// Full gateway configuration (for API, auth, pubsub)
ClientNamespace string `yaml:"client_namespace"` // Namespace for network client
RQLiteDSN string `yaml:"rqlite_dsn"` // RQLite database DSN
OlricServers []string `yaml:"olric_servers"` // List of Olric server addresses
OlricTimeout time.Duration `yaml:"olric_timeout"` // Timeout for Olric operations
IPFSClusterAPIURL string `yaml:"ipfs_cluster_api_url"` // IPFS Cluster API URL
IPFSAPIURL string `yaml:"ipfs_api_url"` // IPFS API URL
IPFSTimeout time.Duration `yaml:"ipfs_timeout"` // Timeout for IPFS operations
}
// HTTPSConfig contains HTTPS/TLS configuration for the gateway
type HTTPSConfig struct {
Enabled bool `yaml:"enabled"` // Enable HTTPS (port 443)
Domain string `yaml:"domain"` // Primary domain (e.g., node-123.orama.network)
AutoCert bool `yaml:"auto_cert"` // Use Let's Encrypt for automatic certificate
UseSelfSigned bool `yaml:"use_self_signed"` // Use self-signed certificates (pre-generated)
CertFile string `yaml:"cert_file"` // Path to certificate file (if not using auto_cert)
KeyFile string `yaml:"key_file"` // Path to key file (if not using auto_cert)
CacheDir string `yaml:"cache_dir"` // Directory for Let's Encrypt certificate cache
HTTPPort int `yaml:"http_port"` // HTTP port for ACME challenge (default: 80)
HTTPSPort int `yaml:"https_port"` // HTTPS port (default: 443)
Email string `yaml:"email"` // Email for Let's Encrypt account
}
// SNIConfig contains SNI-based TCP routing configuration for port 7001
type SNIConfig struct {
Enabled bool `yaml:"enabled"` // Enable SNI-based TCP routing
ListenAddr string `yaml:"listen_addr"` // Address to listen on (e.g., ":7001")
Routes map[string]string `yaml:"routes"` // SNI hostname -> backend address mapping
CertFile string `yaml:"cert_file"` // Path to certificate file
KeyFile string `yaml:"key_file"` // Path to key file
}
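The `Routes` map in SNIConfig maps a TLS SNI hostname to a backend address. A hypothetical lookup sketch (the wildcard convention and function are illustrative assumptions, not the gateway's actual routing code):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveBackend picks a backend for a TLS ClientHello server name using a
// routes map shaped like SNIConfig.Routes (SNI hostname -> backend address).
// Illustrative sketch only; the wildcard handling is an assumed convention.
func resolveBackend(routes map[string]string, serverName string) (string, bool) {
	// Exact match first.
	if backend, ok := routes[serverName]; ok {
		return backend, true
	}
	// Fall back to a wildcard entry such as "*.orama.network".
	if i := strings.Index(serverName, "."); i >= 0 {
		if backend, ok := routes["*"+serverName[i:]]; ok {
			return backend, true
		}
	}
	return "", false
}

func main() {
	routes := map[string]string{
		"node-1.orama.network": "127.0.0.1:7101",
		"*.orama.network":      "127.0.0.1:7000",
	}
	b, ok := resolveBackend(routes, "node-1.orama.network")
	fmt.Println(b, ok)
	b, ok = resolveBackend(routes, "node-9.orama.network")
	fmt.Println(b, ok)
}
```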
// RouteConfig defines a single reverse proxy route
type RouteConfig struct {
PathPrefix string `yaml:"path_prefix"` // URL path prefix (e.g., "/rqlite/http")
BackendURL string `yaml:"backend_url"` // Backend service URL
Timeout time.Duration `yaml:"timeout"` // Request timeout
WebSocket bool `yaml:"websocket"` // Support WebSocket upgrades
}
// ClientConfig represents configuration for network clients
type ClientConfig struct {
AppName string `yaml:"app_name"`
DatabaseName string `yaml:"database_name"`
BootstrapPeers []string `yaml:"bootstrap_peers"`
ConnectTimeout time.Duration `yaml:"connect_timeout"`
RetryAttempts int `yaml:"retry_attempts"`
}


@ -0,0 +1,8 @@
package config
// LoggingConfig contains logging configuration
type LoggingConfig struct {
Level string `yaml:"level"` // debug, info, warn, error
Format string `yaml:"format"` // json, console
OutputFile string `yaml:"output_file"` // Empty for stdout
}

pkg/config/node_config.go Normal file

@ -0,0 +1,10 @@
package config
// NodeConfig contains node-specific configuration
type NodeConfig struct {
ID string `yaml:"id"` // Auto-generated if empty
ListenAddresses []string `yaml:"listen_addresses"` // LibP2P listen addresses
DataDir string `yaml:"data_dir"` // Data directory
MaxConnections int `yaml:"max_connections"` // Maximum peer connections
Domain string `yaml:"domain"` // Domain for this node (e.g., node-1.orama.network)
}


@ -0,0 +1,8 @@
package config
// SecurityConfig contains security-related configuration
type SecurityConfig struct {
EnableTLS bool `yaml:"enable_tls"`
PrivateKeyFile string `yaml:"private_key_file"`
CertificateFile string `yaml:"certificate_file"`
}


@ -1,587 +0,0 @@
package config
import (
"fmt"
"net"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
)
// ValidationError represents a single validation error with context.
type ValidationError struct {
Path string // e.g., "discovery.bootstrap_peers[0]" or "discovery.peers[0]"
Message string // e.g., "invalid multiaddr"
Hint string // e.g., "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>"
}
func (e ValidationError) Error() string {
if e.Hint != "" {
return fmt.Sprintf("%s: %s; %s", e.Path, e.Message, e.Hint)
}
return fmt.Sprintf("%s: %s", e.Path, e.Message)
}
// Validate performs comprehensive validation of the entire config.
// It aggregates all errors and returns them, allowing the caller to print all issues at once.
func (c *Config) Validate() []error {
var errs []error
// Validate node config
errs = append(errs, c.validateNode()...)
// Validate database config
errs = append(errs, c.validateDatabase()...)
// Validate discovery config
errs = append(errs, c.validateDiscovery()...)
// Validate security config
errs = append(errs, c.validateSecurity()...)
// Validate logging config
errs = append(errs, c.validateLogging()...)
// Cross-field validations
errs = append(errs, c.validateCrossFields()...)
return errs
}
func (c *Config) validateNode() []error {
var errs []error
nc := c.Node
// Validate node ID (required for RQLite cluster membership)
if nc.ID == "" {
errs = append(errs, ValidationError{
Path: "node.id",
Message: "must not be empty (required for cluster membership)",
Hint: "will be auto-generated if empty, but explicit ID recommended",
})
}
// Validate listen_addresses
if len(nc.ListenAddresses) == 0 {
errs = append(errs, ValidationError{
Path: "node.listen_addresses",
Message: "must not be empty",
})
}
seen := make(map[string]bool)
for i, addr := range nc.ListenAddresses {
path := fmt.Sprintf("node.listen_addresses[%d]", i)
// Parse as multiaddr
ma, err := multiaddr.NewMultiaddr(addr)
if err != nil {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid multiaddr: %v", err),
Hint: "expected /ip{4,6}/.../tcp/<port>",
})
continue
}
// Check for TCP and valid port
tcpAddr, err := manet.ToNetAddr(ma)
if err != nil {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("cannot convert multiaddr to network address: %v", err),
Hint: "ensure multiaddr contains /tcp/<port>",
})
continue
}
tcpPort := tcpAddr.(*net.TCPAddr).Port
if tcpPort < 1 || tcpPort > 65535 {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid TCP port %d", tcpPort),
Hint: "port must be between 1 and 65535",
})
}
if seen[addr] {
errs = append(errs, ValidationError{
Path: path,
Message: "duplicate listen address",
})
}
seen[addr] = true
}
// Validate data_dir
if nc.DataDir == "" {
errs = append(errs, ValidationError{
Path: "node.data_dir",
Message: "must not be empty",
})
} else {
if err := validateDataDir(nc.DataDir); err != nil {
errs = append(errs, ValidationError{
Path: "node.data_dir",
Message: err.Error(),
})
}
}
// Validate max_connections
if nc.MaxConnections <= 0 {
errs = append(errs, ValidationError{
Path: "node.max_connections",
Message: fmt.Sprintf("must be > 0; got %d", nc.MaxConnections),
})
}
return errs
}
func (c *Config) validateDatabase() []error {
var errs []error
dc := c.Database
// Validate data_dir
if dc.DataDir == "" {
errs = append(errs, ValidationError{
Path: "database.data_dir",
Message: "must not be empty",
})
} else {
if err := validateDataDir(dc.DataDir); err != nil {
errs = append(errs, ValidationError{
Path: "database.data_dir",
Message: err.Error(),
})
}
}
// Validate replication_factor
if dc.ReplicationFactor < 1 {
errs = append(errs, ValidationError{
Path: "database.replication_factor",
Message: fmt.Sprintf("must be >= 1; got %d", dc.ReplicationFactor),
})
} else if dc.ReplicationFactor%2 == 0 {
// Warn about even replication factor (Raft best practice: odd)
// For now we log a note but don't error
_ = fmt.Sprintf("note: database.replication_factor %d is even; Raft recommends odd numbers for quorum", dc.ReplicationFactor)
}
// Validate shard_count
if dc.ShardCount < 1 {
errs = append(errs, ValidationError{
Path: "database.shard_count",
Message: fmt.Sprintf("must be >= 1; got %d", dc.ShardCount),
})
}
// Validate max_database_size
if dc.MaxDatabaseSize < 0 {
errs = append(errs, ValidationError{
Path: "database.max_database_size",
Message: fmt.Sprintf("must be >= 0; got %d", dc.MaxDatabaseSize),
})
}
// Validate rqlite_port
if dc.RQLitePort < 1 || dc.RQLitePort > 65535 {
errs = append(errs, ValidationError{
Path: "database.rqlite_port",
Message: fmt.Sprintf("must be between 1 and 65535; got %d", dc.RQLitePort),
})
}
// Validate rqlite_raft_port
if dc.RQLiteRaftPort < 1 || dc.RQLiteRaftPort > 65535 {
errs = append(errs, ValidationError{
Path: "database.rqlite_raft_port",
Message: fmt.Sprintf("must be between 1 and 65535; got %d", dc.RQLiteRaftPort),
})
}
// Ports must differ
if dc.RQLitePort == dc.RQLiteRaftPort {
errs = append(errs, ValidationError{
Path: "database.rqlite_raft_port",
Message: fmt.Sprintf("must differ from database.rqlite_port (%d)", dc.RQLitePort),
})
}
// Validate rqlite_join_address format if provided (optional for all nodes)
// The first node in a cluster won't have a join address; subsequent nodes will
if dc.RQLiteJoinAddress != "" {
if err := validateHostPort(dc.RQLiteJoinAddress); err != nil {
errs = append(errs, ValidationError{
Path: "database.rqlite_join_address",
Message: err.Error(),
Hint: "expected format: host:port",
})
}
}
// Validate cluster_sync_interval
if dc.ClusterSyncInterval != 0 && dc.ClusterSyncInterval < 10*time.Second {
errs = append(errs, ValidationError{
Path: "database.cluster_sync_interval",
Message: fmt.Sprintf("must be >= 10s or 0 (for default); got %v", dc.ClusterSyncInterval),
Hint: "recommended: 30s",
})
}
// Validate peer_inactivity_limit
if dc.PeerInactivityLimit != 0 {
if dc.PeerInactivityLimit < time.Hour {
errs = append(errs, ValidationError{
Path: "database.peer_inactivity_limit",
Message: fmt.Sprintf("must be >= 1h or 0 (for default); got %v", dc.PeerInactivityLimit),
Hint: "recommended: 24h",
})
} else if dc.PeerInactivityLimit > 7*24*time.Hour {
errs = append(errs, ValidationError{
Path: "database.peer_inactivity_limit",
Message: fmt.Sprintf("must be <= 7d; got %v", dc.PeerInactivityLimit),
Hint: "recommended: 24h",
})
}
}
// Validate min_cluster_size
if dc.MinClusterSize < 1 {
errs = append(errs, ValidationError{
Path: "database.min_cluster_size",
Message: fmt.Sprintf("must be >= 1; got %d", dc.MinClusterSize),
})
}
return errs
}
func (c *Config) validateDiscovery() []error {
var errs []error
disc := c.Discovery
// Validate discovery_interval
if disc.DiscoveryInterval <= 0 {
errs = append(errs, ValidationError{
Path: "discovery.discovery_interval",
Message: fmt.Sprintf("must be > 0; got %v", disc.DiscoveryInterval),
})
}
// Validate peer discovery port
if disc.BootstrapPort < 1 || disc.BootstrapPort > 65535 {
errs = append(errs, ValidationError{
Path: "discovery.bootstrap_port",
Message: fmt.Sprintf("must be between 1 and 65535; got %d", disc.BootstrapPort),
})
}
// Validate peer addresses (optional - all nodes are unified peers now)
// Validate each peer multiaddr
seenPeers := make(map[string]bool)
for i, peer := range disc.BootstrapPeers {
path := fmt.Sprintf("discovery.bootstrap_peers[%d]", i)
_, err := multiaddr.NewMultiaddr(peer)
if err != nil {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid multiaddr: %v", err),
Hint: "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>",
})
continue
}
// Check for /p2p/ component
if !strings.Contains(peer, "/p2p/") {
errs = append(errs, ValidationError{
Path: path,
Message: "missing /p2p/<peerID> component",
Hint: "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>",
})
}
// Extract TCP port by parsing the multiaddr string directly
// Look for /tcp/ in the peer string
tcpPortStr := extractTCPPort(peer)
if tcpPortStr == "" {
errs = append(errs, ValidationError{
Path: path,
Message: "missing /tcp/<port> component",
Hint: "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>",
})
continue
}
tcpPort, err := strconv.Atoi(tcpPortStr)
if err != nil || tcpPort < 1 || tcpPort > 65535 {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid TCP port %s", tcpPortStr),
Hint: "port must be between 1 and 65535",
})
}
if seenPeers[peer] {
errs = append(errs, ValidationError{
Path: path,
Message: "duplicate peer",
})
}
seenPeers[peer] = true
}
// Validate http_adv_address (required for cluster discovery)
if disc.HttpAdvAddress == "" {
errs = append(errs, ValidationError{
Path: "discovery.http_adv_address",
Message: "required for RQLite cluster discovery",
Hint: "set to your public HTTP address (e.g., 51.83.128.181:5001)",
})
} else {
if err := validateHostOrHostPort(disc.HttpAdvAddress); err != nil {
errs = append(errs, ValidationError{
Path: "discovery.http_adv_address",
Message: err.Error(),
Hint: "expected format: host or host:port",
})
}
}
// Validate raft_adv_address (required for cluster discovery)
if disc.RaftAdvAddress == "" {
errs = append(errs, ValidationError{
Path: "discovery.raft_adv_address",
Message: "required for RQLite cluster discovery",
Hint: "set to your public Raft address (e.g., 51.83.128.181:7001)",
})
} else {
if err := validateHostOrHostPort(disc.RaftAdvAddress); err != nil {
errs = append(errs, ValidationError{
Path: "discovery.raft_adv_address",
Message: err.Error(),
Hint: "expected format: host or host:port",
})
}
}
return errs
}
func (c *Config) validateSecurity() []error {
var errs []error
sec := c.Security
// Validate TLS configuration
if sec.EnableTLS {
if sec.PrivateKeyFile == "" {
errs = append(errs, ValidationError{
Path: "security.private_key_file",
Message: "required when enable_tls is true",
})
} else {
if err := validateFileReadable(sec.PrivateKeyFile); err != nil {
errs = append(errs, ValidationError{
Path: "security.private_key_file",
Message: err.Error(),
})
}
}
if sec.CertificateFile == "" {
errs = append(errs, ValidationError{
Path: "security.certificate_file",
Message: "required when enable_tls is true",
})
} else {
if err := validateFileReadable(sec.CertificateFile); err != nil {
errs = append(errs, ValidationError{
Path: "security.certificate_file",
Message: err.Error(),
})
}
}
}
return errs
}
func (c *Config) validateLogging() []error {
var errs []error
log := c.Logging
// Validate level
validLevels := map[string]bool{"debug": true, "info": true, "warn": true, "error": true}
if !validLevels[log.Level] {
errs = append(errs, ValidationError{
Path: "logging.level",
Message: fmt.Sprintf("invalid value %q", log.Level),
Hint: "allowed values: debug, info, warn, error",
})
}
// Validate format
validFormats := map[string]bool{"json": true, "console": true}
if !validFormats[log.Format] {
errs = append(errs, ValidationError{
Path: "logging.format",
Message: fmt.Sprintf("invalid value %q", log.Format),
Hint: "allowed values: json, console",
})
}
// Validate output_file
if log.OutputFile != "" {
dir := filepath.Dir(log.OutputFile)
if dir != "" && dir != "." {
if err := validateDirWritable(dir); err != nil {
errs = append(errs, ValidationError{
Path: "logging.output_file",
Message: fmt.Sprintf("parent directory not writable: %v", err),
})
}
}
}
return errs
}
func (c *Config) validateCrossFields() []error {
var errs []error
return errs
}
// Helper validation functions
func validateDataDir(path string) error {
if path == "" {
return fmt.Errorf("must not be empty")
}
// Expand ~ to home directory
expandedPath := os.ExpandEnv(path)
if strings.HasPrefix(expandedPath, "~") {
home, err := os.UserHomeDir()
if err != nil {
return fmt.Errorf("cannot determine home directory: %v", err)
}
expandedPath = filepath.Join(home, expandedPath[1:])
}
if info, err := os.Stat(expandedPath); err == nil {
// Directory exists; check if it's a directory and writable
if !info.IsDir() {
return fmt.Errorf("path exists but is not a directory")
}
// Try to write a test file to check permissions
testFile := filepath.Join(expandedPath, ".write_test")
if err := os.WriteFile(testFile, []byte(""), 0644); err != nil {
return fmt.Errorf("directory not writable: %v", err)
}
os.Remove(testFile)
} else if os.IsNotExist(err) {
// Directory doesn't exist; check if parent is writable
parent := filepath.Dir(expandedPath)
if parent == "" || parent == "." {
parent = "."
}
// Allow parent not existing - it will be created at runtime
if info, err := os.Stat(parent); err != nil {
if !os.IsNotExist(err) {
return fmt.Errorf("parent directory not accessible: %v", err)
}
// Parent doesn't exist either - that's ok, will be created
} else if !info.IsDir() {
return fmt.Errorf("parent path is not a directory")
} else {
// Parent exists, check if writable
if err := validateDirWritable(parent); err != nil {
return fmt.Errorf("parent directory not writable: %v", err)
}
}
} else {
return fmt.Errorf("cannot access path: %v", err)
}
return nil
}
func validateDirWritable(path string) error {
info, err := os.Stat(path)
if err != nil {
return fmt.Errorf("cannot access directory: %v", err)
}
if !info.IsDir() {
return fmt.Errorf("path is not a directory")
}
// Try to write a test file
testFile := filepath.Join(path, ".write_test")
if err := os.WriteFile(testFile, []byte(""), 0644); err != nil {
return fmt.Errorf("directory not writable: %v", err)
}
os.Remove(testFile)
return nil
}
func validateFileReadable(path string) error {
_, err := os.Stat(path)
if err != nil {
return fmt.Errorf("cannot read file: %v", err)
}
return nil
}
func validateHostPort(hostPort string) error {
parts := strings.Split(hostPort, ":")
if len(parts) != 2 {
return fmt.Errorf("expected format host:port")
}
host := parts[0]
port := parts[1]
if host == "" {
return fmt.Errorf("host must not be empty")
}
portNum, err := strconv.Atoi(port)
if err != nil || portNum < 1 || portNum > 65535 {
return fmt.Errorf("port must be a number between 1 and 65535; got %q", port)
}
return nil
}
func validateHostOrHostPort(addr string) error {
// Try to parse as host:port first
if strings.Contains(addr, ":") {
return validateHostPort(addr)
}
// Otherwise just check if it's a valid hostname/IP
if addr == "" {
return fmt.Errorf("address must not be empty")
}
return nil
}
func extractTCPPort(multiaddrStr string) string {
// Look for the /tcp/ protocol code
parts := strings.Split(multiaddrStr, "/")
for i := 0; i < len(parts); i++ {
if parts[i] == "tcp" {
// The port is the next part
if i+1 < len(parts) {
return parts[i+1]
}
break
}
}
return ""
}


@ -0,0 +1,140 @@
package validate
import (
"fmt"
"time"
)
// DatabaseConfig represents the database configuration for validation purposes.
type DatabaseConfig struct {
DataDir string
ReplicationFactor int
ShardCount int
MaxDatabaseSize int64
RQLitePort int
RQLiteRaftPort int
RQLiteJoinAddress string
ClusterSyncInterval time.Duration
PeerInactivityLimit time.Duration
MinClusterSize int
}
// ValidateDatabase performs validation of the database configuration.
func ValidateDatabase(dc DatabaseConfig) []error {
var errs []error
// Validate data_dir
if dc.DataDir == "" {
errs = append(errs, ValidationError{
Path: "database.data_dir",
Message: "must not be empty",
})
} else {
if err := ValidateDataDir(dc.DataDir); err != nil {
errs = append(errs, ValidationError{
Path: "database.data_dir",
Message: err.Error(),
})
}
}
// Validate replication_factor
if dc.ReplicationFactor < 1 {
errs = append(errs, ValidationError{
Path: "database.replication_factor",
Message: fmt.Sprintf("must be >= 1; got %d", dc.ReplicationFactor),
})
} else if dc.ReplicationFactor%2 == 0 {
// Warn about even replication factor (Raft best practice: odd)
// For now we log a note but don't error
_ = fmt.Sprintf("note: database.replication_factor %d is even; Raft recommends odd numbers for quorum", dc.ReplicationFactor)
}
// Validate shard_count
if dc.ShardCount < 1 {
errs = append(errs, ValidationError{
Path: "database.shard_count",
Message: fmt.Sprintf("must be >= 1; got %d", dc.ShardCount),
})
}
// Validate max_database_size
if dc.MaxDatabaseSize < 0 {
errs = append(errs, ValidationError{
Path: "database.max_database_size",
Message: fmt.Sprintf("must be >= 0; got %d", dc.MaxDatabaseSize),
})
}
// Validate rqlite_port
if dc.RQLitePort < 1 || dc.RQLitePort > 65535 {
errs = append(errs, ValidationError{
Path: "database.rqlite_port",
Message: fmt.Sprintf("must be between 1 and 65535; got %d", dc.RQLitePort),
})
}
// Validate rqlite_raft_port
if dc.RQLiteRaftPort < 1 || dc.RQLiteRaftPort > 65535 {
errs = append(errs, ValidationError{
Path: "database.rqlite_raft_port",
Message: fmt.Sprintf("must be between 1 and 65535; got %d", dc.RQLiteRaftPort),
})
}
// Ports must differ
if dc.RQLitePort == dc.RQLiteRaftPort {
errs = append(errs, ValidationError{
Path: "database.rqlite_raft_port",
Message: fmt.Sprintf("must differ from database.rqlite_port (%d)", dc.RQLitePort),
})
}
// Validate rqlite_join_address format if provided (optional for all nodes)
// The first node in a cluster won't have a join address; subsequent nodes will
if dc.RQLiteJoinAddress != "" {
if err := ValidateHostPort(dc.RQLiteJoinAddress); err != nil {
errs = append(errs, ValidationError{
Path: "database.rqlite_join_address",
Message: err.Error(),
Hint: "expected format: host:port",
})
}
}
// Validate cluster_sync_interval
if dc.ClusterSyncInterval != 0 && dc.ClusterSyncInterval < 10*time.Second {
errs = append(errs, ValidationError{
Path: "database.cluster_sync_interval",
Message: fmt.Sprintf("must be >= 10s or 0 (for default); got %v", dc.ClusterSyncInterval),
Hint: "recommended: 30s",
})
}
// Validate peer_inactivity_limit
if dc.PeerInactivityLimit != 0 {
if dc.PeerInactivityLimit < time.Hour {
errs = append(errs, ValidationError{
Path: "database.peer_inactivity_limit",
Message: fmt.Sprintf("must be >= 1h or 0 (for default); got %v", dc.PeerInactivityLimit),
Hint: "recommended: 24h",
})
} else if dc.PeerInactivityLimit > 7*24*time.Hour {
errs = append(errs, ValidationError{
Path: "database.peer_inactivity_limit",
Message: fmt.Sprintf("must be <= 7d; got %v", dc.PeerInactivityLimit),
Hint: "recommended: 24h",
})
}
}
// Validate min_cluster_size
if dc.MinClusterSize < 1 {
errs = append(errs, ValidationError{
Path: "database.min_cluster_size",
Message: fmt.Sprintf("must be >= 1; got %d", dc.MinClusterSize),
})
}
return errs
}


@ -0,0 +1,131 @@
package validate
import (
"fmt"
"strconv"
"strings"
"time"
"github.com/multiformats/go-multiaddr"
)
// DiscoveryConfig represents the discovery configuration for validation purposes.
type DiscoveryConfig struct {
BootstrapPeers []string
DiscoveryInterval time.Duration
BootstrapPort int
HttpAdvAddress string
RaftAdvAddress string
}
// ValidateDiscovery performs validation of the discovery configuration.
func ValidateDiscovery(disc DiscoveryConfig) []error {
var errs []error
// Validate discovery_interval
if disc.DiscoveryInterval <= 0 {
errs = append(errs, ValidationError{
Path: "discovery.discovery_interval",
Message: fmt.Sprintf("must be > 0; got %v", disc.DiscoveryInterval),
})
}
// Validate peer discovery port
if disc.BootstrapPort < 1 || disc.BootstrapPort > 65535 {
errs = append(errs, ValidationError{
Path: "discovery.bootstrap_port",
Message: fmt.Sprintf("must be between 1 and 65535; got %d", disc.BootstrapPort),
})
}
// Validate peer addresses (optional - all nodes are unified peers now)
// Validate each peer multiaddr
seenPeers := make(map[string]bool)
for i, peer := range disc.BootstrapPeers {
path := fmt.Sprintf("discovery.bootstrap_peers[%d]", i)
_, err := multiaddr.NewMultiaddr(peer)
if err != nil {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid multiaddr: %v", err),
Hint: "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>",
})
continue
}
// Check for /p2p/ component
if !strings.Contains(peer, "/p2p/") {
errs = append(errs, ValidationError{
Path: path,
Message: "missing /p2p/<peerID> component",
Hint: "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>",
})
}
// Extract TCP port by parsing the multiaddr string directly
// Look for /tcp/ in the peer string
tcpPortStr := ExtractTCPPort(peer)
if tcpPortStr == "" {
errs = append(errs, ValidationError{
Path: path,
Message: "missing /tcp/<port> component",
Hint: "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>",
})
continue
}
tcpPort, err := strconv.Atoi(tcpPortStr)
if err != nil || tcpPort < 1 || tcpPort > 65535 {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid TCP port %s", tcpPortStr),
Hint: "port must be between 1 and 65535",
})
}
if seenPeers[peer] {
errs = append(errs, ValidationError{
Path: path,
Message: "duplicate peer",
})
}
seenPeers[peer] = true
}
// Validate http_adv_address (required for cluster discovery)
if disc.HttpAdvAddress == "" {
errs = append(errs, ValidationError{
Path: "discovery.http_adv_address",
Message: "required for RQLite cluster discovery",
Hint: "set to your public HTTP address (e.g., 51.83.128.181:5001)",
})
} else {
if err := ValidateHostOrHostPort(disc.HttpAdvAddress); err != nil {
errs = append(errs, ValidationError{
Path: "discovery.http_adv_address",
Message: err.Error(),
Hint: "expected format: host or host:port",
})
}
}
// Validate raft_adv_address (required for cluster discovery)
if disc.RaftAdvAddress == "" {
errs = append(errs, ValidationError{
Path: "discovery.raft_adv_address",
Message: "required for RQLite cluster discovery",
Hint: "set to your public Raft address (e.g., 51.83.128.181:7001)",
})
} else {
if err := ValidateHostOrHostPort(disc.RaftAdvAddress); err != nil {
errs = append(errs, ValidationError{
Path: "discovery.raft_adv_address",
Message: err.Error(),
Hint: "expected format: host or host:port",
})
}
}
return errs
}


@ -0,0 +1,53 @@
package validate
import (
"fmt"
"path/filepath"
)
// LoggingConfig represents the logging configuration for validation purposes.
type LoggingConfig struct {
Level string
Format string
OutputFile string
}
// ValidateLogging performs validation of the logging configuration.
func ValidateLogging(log LoggingConfig) []error {
var errs []error
// Validate level
validLevels := map[string]bool{"debug": true, "info": true, "warn": true, "error": true}
if !validLevels[log.Level] {
errs = append(errs, ValidationError{
Path: "logging.level",
Message: fmt.Sprintf("invalid value %q", log.Level),
Hint: "allowed values: debug, info, warn, error",
})
}
// Validate format
validFormats := map[string]bool{"json": true, "console": true}
if !validFormats[log.Format] {
errs = append(errs, ValidationError{
Path: "logging.format",
Message: fmt.Sprintf("invalid value %q", log.Format),
Hint: "allowed values: json, console",
})
}
// Validate output_file
if log.OutputFile != "" {
dir := filepath.Dir(log.OutputFile)
if dir != "" && dir != "." {
if err := ValidateDirWritable(dir); err != nil {
errs = append(errs, ValidationError{
Path: "logging.output_file",
Message: fmt.Sprintf("parent directory not writable: %v", err),
})
}
}
}
return errs
}

pkg/config/validate/node.go Normal file

@ -0,0 +1,108 @@
package validate
import (
"fmt"
"net"
"github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
)
// NodeConfig represents the node configuration for validation purposes.
type NodeConfig struct {
ID string
ListenAddresses []string
DataDir string
MaxConnections int
}
// ValidateNode performs validation of the node configuration.
func ValidateNode(nc NodeConfig) []error {
var errs []error
// Validate node ID (required for RQLite cluster membership)
if nc.ID == "" {
errs = append(errs, ValidationError{
Path: "node.id",
Message: "must not be empty (required for cluster membership)",
Hint: "will be auto-generated if empty, but explicit ID recommended",
})
}
// Validate listen_addresses
if len(nc.ListenAddresses) == 0 {
errs = append(errs, ValidationError{
Path: "node.listen_addresses",
Message: "must not be empty",
})
}
seen := make(map[string]bool)
for i, addr := range nc.ListenAddresses {
path := fmt.Sprintf("node.listen_addresses[%d]", i)
// Parse as multiaddr
ma, err := multiaddr.NewMultiaddr(addr)
if err != nil {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid multiaddr: %v", err),
Hint: "expected /ip{4,6}/.../tcp/<port>",
})
continue
}
// Check for TCP and valid port
netAddr, err := manet.ToNetAddr(ma)
if err != nil {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("cannot convert multiaddr to network address: %v", err),
Hint: "ensure multiaddr contains /tcp/<port>",
})
continue
}
// Use a checked type assertion: ToNetAddr also succeeds for UDP
// multiaddrs, and an unchecked *net.TCPAddr assertion would panic on them.
tcpAddr, ok := netAddr.(*net.TCPAddr)
if !ok {
errs = append(errs, ValidationError{
Path: path,
Message: "address is not a TCP address",
Hint: "ensure multiaddr contains /tcp/<port>",
})
continue
}
tcpPort := tcpAddr.Port
if tcpPort < 1 || tcpPort > 65535 {
errs = append(errs, ValidationError{
Path: path,
Message: fmt.Sprintf("invalid TCP port %d", tcpPort),
Hint: "port must be between 1 and 65535",
})
}
if seen[addr] {
errs = append(errs, ValidationError{
Path: path,
Message: "duplicate listen address",
})
}
seen[addr] = true
}
// Validate data_dir
if nc.DataDir == "" {
errs = append(errs, ValidationError{
Path: "node.data_dir",
Message: "must not be empty",
})
} else {
if err := ValidateDataDir(nc.DataDir); err != nil {
errs = append(errs, ValidationError{
Path: "node.data_dir",
Message: err.Error(),
})
}
}
// Validate max_connections
if nc.MaxConnections <= 0 {
errs = append(errs, ValidationError{
Path: "node.max_connections",
Message: fmt.Sprintf("must be > 0; got %d", nc.MaxConnections),
})
}
return errs
}


@ -0,0 +1,46 @@
package validate
// SecurityConfig represents the security configuration for validation purposes.
type SecurityConfig struct {
EnableTLS bool
PrivateKeyFile string
CertificateFile string
}
// ValidateSecurity performs validation of the security configuration.
func ValidateSecurity(sec SecurityConfig) []error {
var errs []error
// Validate TLS configuration
if sec.EnableTLS {
if sec.PrivateKeyFile == "" {
errs = append(errs, ValidationError{
Path: "security.private_key_file",
Message: "required when enable_tls is true",
})
} else {
if err := ValidateFileReadable(sec.PrivateKeyFile); err != nil {
errs = append(errs, ValidationError{
Path: "security.private_key_file",
Message: err.Error(),
})
}
}
if sec.CertificateFile == "" {
errs = append(errs, ValidationError{
Path: "security.certificate_file",
Message: "required when enable_tls is true",
})
} else {
if err := ValidateFileReadable(sec.CertificateFile); err != nil {
errs = append(errs, ValidationError{
Path: "security.certificate_file",
Message: err.Error(),
})
}
}
}
return errs
}


@ -0,0 +1,180 @@
package validate
import (
"encoding/hex"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
)
// ValidationError represents a single validation error with context.
type ValidationError struct {
Path string // e.g., "discovery.bootstrap_peers[0]" or "discovery.peers[0]"
Message string // e.g., "invalid multiaddr"
Hint string // e.g., "expected /ip{4,6}/.../tcp/<port>/p2p/<peerID>"
}
func (e ValidationError) Error() string {
if e.Hint != "" {
return fmt.Sprintf("%s: %s; %s", e.Path, e.Message, e.Hint)
}
return fmt.Sprintf("%s: %s", e.Path, e.Message)
}
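The `Error()` method appends the hint only when one is set. A small self-contained demonstration of both output shapes (the type is copied from above so the snippet compiles on its own):

```go
package main

import "fmt"

// ValidationError as defined above (copied here so the sketch is self-contained).
type ValidationError struct {
	Path    string
	Message string
	Hint    string
}

func (e ValidationError) Error() string {
	if e.Hint != "" {
		return fmt.Sprintf("%s: %s; %s", e.Path, e.Message, e.Hint)
	}
	return fmt.Sprintf("%s: %s", e.Path, e.Message)
}

func main() {
	err := ValidationError{
		Path:    "database.rqlite_port",
		Message: "must be between 1 and 65535; got 0",
	}
	// Without a hint: "path: message"
	fmt.Println(err.Error())
	// With a hint: "path: message; hint"
	err.Hint = "recommended: 5001"
	fmt.Println(err.Error())
}
```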
// ValidateDataDir validates that a data directory exists or can be created.
func ValidateDataDir(path string) error {
if path == "" {
return fmt.Errorf("must not be empty")
}
// Expand ~ to home directory
expandedPath := os.ExpandEnv(path)
if strings.HasPrefix(expandedPath, "~") {
home, err := os.UserHomeDir()
if err != nil {
return fmt.Errorf("cannot determine home directory: %v", err)
}
expandedPath = filepath.Join(home, expandedPath[1:])
}
if info, err := os.Stat(expandedPath); err == nil {
// Directory exists; check if it's a directory and writable
if !info.IsDir() {
return fmt.Errorf("path exists but is not a directory")
}
// Try to write a test file to check permissions
testFile := filepath.Join(expandedPath, ".write_test")
if err := os.WriteFile(testFile, []byte(""), 0644); err != nil {
return fmt.Errorf("directory not writable: %v", err)
}
os.Remove(testFile)
} else if os.IsNotExist(err) {
// Directory doesn't exist; check if parent is writable
parent := filepath.Dir(expandedPath)
if parent == "" || parent == "." {
parent = "."
}
// Allow parent not existing - it will be created at runtime
if info, err := os.Stat(parent); err != nil {
if !os.IsNotExist(err) {
return fmt.Errorf("parent directory not accessible: %v", err)
}
// Parent doesn't exist either - that's ok, will be created
} else if !info.IsDir() {
return fmt.Errorf("parent path is not a directory")
} else {
// Parent exists, check if writable
if err := ValidateDirWritable(parent); err != nil {
return fmt.Errorf("parent directory not writable: %v", err)
}
}
} else {
return fmt.Errorf("cannot access path: %v", err)
}
return nil
}
// ValidateDirWritable validates that a directory exists and is writable.
func ValidateDirWritable(path string) error {
info, err := os.Stat(path)
if err != nil {
return fmt.Errorf("cannot access directory: %v", err)
}
if !info.IsDir() {
return fmt.Errorf("path is not a directory")
}
// Try to write a test file
testFile := filepath.Join(path, ".write_test")
if err := os.WriteFile(testFile, []byte(""), 0644); err != nil {
return fmt.Errorf("directory not writable: %v", err)
}
os.Remove(testFile)
return nil
}
// ValidateFileReadable validates that a file exists and is readable.
func ValidateFileReadable(path string) error {
	// os.Stat alone only proves existence; opening the file actually
	// exercises read permission.
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("cannot read file: %v", err)
	}
	f.Close()
	return nil
}
// ValidateHostPort validates a host:port address format.
func ValidateHostPort(hostPort string) error {
parts := strings.Split(hostPort, ":")
if len(parts) != 2 {
return fmt.Errorf("expected format host:port")
}
host := parts[0]
port := parts[1]
if host == "" {
return fmt.Errorf("host must not be empty")
}
portNum, err := strconv.Atoi(port)
if err != nil || portNum < 1 || portNum > 65535 {
return fmt.Errorf("port must be a number between 1 and 65535; got %q", port)
}
return nil
}
// ValidateHostOrHostPort validates either a hostname or host:port format.
func ValidateHostOrHostPort(addr string) error {
// Try to parse as host:port first
if strings.Contains(addr, ":") {
return ValidateHostPort(addr)
}
// Otherwise just check if it's a valid hostname/IP
if addr == "" {
return fmt.Errorf("address must not be empty")
}
return nil
}
// ValidatePort validates that a port number is in the valid range.
func ValidatePort(port int) error {
if port < 1 || port > 65535 {
return fmt.Errorf("port must be between 1 and 65535; got %d", port)
}
return nil
}
// ExtractTCPPort extracts the TCP port from a multiaddr string.
func ExtractTCPPort(multiaddrStr string) string {
// Look for the /tcp/ protocol code
parts := strings.Split(multiaddrStr, "/")
for i := 0; i < len(parts); i++ {
if parts[i] == "tcp" {
// The port is the next part
if i+1 < len(parts) {
return parts[i+1]
}
break
}
}
return ""
}
// ValidateSwarmKey validates that a swarm key is 64 hex characters.
func ValidateSwarmKey(key string) error {
key = strings.TrimSpace(key)
if len(key) != 64 {
return fmt.Errorf("swarm key must be 64 hex characters (32 bytes), got %d", len(key))
}
if _, err := hex.DecodeString(key); err != nil {
return fmt.Errorf("swarm key must be valid hexadecimal: %w", err)
}
return nil
}

pkg/contracts/auth.go Normal file

@ -0,0 +1,68 @@
package contracts
import (
"context"
"time"
)
// AuthService handles wallet-based authentication and authorization.
// Provides nonce generation, signature verification, JWT lifecycle management,
// and application registration for the gateway.
type AuthService interface {
// CreateNonce generates a cryptographic nonce for wallet authentication.
// The nonce is valid for a limited time and used to prevent replay attacks.
// wallet is the wallet address, purpose describes the nonce usage,
// and namespace isolates nonces across different contexts.
CreateNonce(ctx context.Context, wallet, purpose, namespace string) (string, error)
// VerifySignature validates a cryptographic signature from a wallet.
// Supports multiple blockchain types (ETH, SOL) for signature verification.
// Returns true if the signature is valid for the given nonce.
VerifySignature(ctx context.Context, wallet, nonce, signature, chainType string) (bool, error)
// IssueTokens generates a new access token and refresh token pair.
// Access tokens are short-lived (typically 15 minutes).
// Refresh tokens are long-lived (typically 30 days).
// Returns: accessToken, refreshToken, expirationUnix, error.
IssueTokens(ctx context.Context, wallet, namespace string) (string, string, int64, error)
// RefreshToken validates a refresh token and issues a new access token.
// Returns: newAccessToken, subject (wallet), expirationUnix, error.
RefreshToken(ctx context.Context, refreshToken, namespace string) (string, string, int64, error)
// RevokeToken invalidates a refresh token or all tokens for a subject.
// If token is provided, revokes that specific token.
// If all is true and subject is provided, revokes all tokens for that subject.
RevokeToken(ctx context.Context, namespace, token string, all bool, subject string) error
// ParseAndVerifyJWT validates a JWT access token and returns its claims.
// Verifies signature, expiration, and issuer.
ParseAndVerifyJWT(token string) (*JWTClaims, error)
// GenerateJWT creates a new signed JWT with the specified claims and TTL.
// Returns: token, expirationUnix, error.
GenerateJWT(namespace, subject string, ttl time.Duration) (string, int64, error)
// RegisterApp registers a new client application with the gateway.
// Returns an application ID that can be used for OAuth flows.
RegisterApp(ctx context.Context, wallet, namespace, name, publicKey string) (string, error)
// GetOrCreateAPIKey retrieves an existing API key or creates a new one.
// API keys provide programmatic access without interactive authentication.
GetOrCreateAPIKey(ctx context.Context, wallet, namespace string) (string, error)
// ResolveNamespaceID ensures a namespace exists and returns its internal ID.
// Creates the namespace if it doesn't exist.
ResolveNamespaceID(ctx context.Context, namespace string) (interface{}, error)
}
// JWTClaims represents the claims contained in a JWT access token.
type JWTClaims struct {
Iss string `json:"iss"` // Issuer
Sub string `json:"sub"` // Subject (wallet address)
Aud string `json:"aud"` // Audience
Iat int64 `json:"iat"` // Issued At
Nbf int64 `json:"nbf"` // Not Before
Exp int64 `json:"exp"` // Expiration
Namespace string `json:"namespace"` // Namespace isolation
}

pkg/contracts/cache.go Normal file

@ -0,0 +1,28 @@
package contracts
import (
"context"
)
// CacheProvider defines the interface for distributed cache operations.
// Implementations provide a distributed key-value store with eventual consistency.
type CacheProvider interface {
// Health checks if the cache service is operational.
// Returns an error if the service is unavailable or cannot be reached.
Health(ctx context.Context) error
// Close gracefully shuts down the cache client and releases resources.
Close(ctx context.Context) error
}
// CacheClient provides extended cache operations beyond basic connectivity.
// This interface is intentionally kept minimal as cache operations are
// typically accessed through the underlying client's DMap API.
type CacheClient interface {
CacheProvider
// UnderlyingClient returns the native cache client for advanced operations.
// The returned client can be used to access DMap operations like Get, Put, Delete, etc.
// Return type is interface{} to avoid leaking concrete implementation details.
UnderlyingClient() interface{}
}

pkg/contracts/database.go Normal file

@ -0,0 +1,117 @@
package contracts
import (
"context"
"database/sql"
)
// DatabaseClient defines the interface for ORM-like database operations.
// Provides both raw SQL execution and fluent query building capabilities.
type DatabaseClient interface {
// Query executes a SELECT query and scans results into dest.
// dest must be a pointer to a slice of structs or []map[string]any.
Query(ctx context.Context, dest any, query string, args ...any) error
// Exec executes a write statement (INSERT/UPDATE/DELETE) and returns the result.
Exec(ctx context.Context, query string, args ...any) (sql.Result, error)
// FindBy retrieves multiple records matching the criteria.
// dest must be a pointer to a slice, table is the table name,
// criteria is a map of column->value filters, and opts customize the query.
FindBy(ctx context.Context, dest any, table string, criteria map[string]any, opts ...FindOption) error
// FindOneBy retrieves a single record matching the criteria.
// dest must be a pointer to a struct or map.
FindOneBy(ctx context.Context, dest any, table string, criteria map[string]any, opts ...FindOption) error
// Save inserts or updates an entity based on its primary key.
// If the primary key is zero, performs an INSERT.
// If the primary key is set, performs an UPDATE.
Save(ctx context.Context, entity any) error
// Remove deletes an entity by its primary key.
Remove(ctx context.Context, entity any) error
// Repository returns a generic repository for a table.
// Return type is any to avoid exposing generic type parameters in the interface.
Repository(table string) any
// CreateQueryBuilder creates a fluent query builder for advanced queries.
// Supports joins, where clauses, ordering, grouping, and pagination.
CreateQueryBuilder(table string) QueryBuilder
// Tx executes a function within a database transaction.
// If fn returns an error, the transaction is rolled back.
// Otherwise, it is committed.
Tx(ctx context.Context, fn func(tx DatabaseTransaction) error) error
}
// DatabaseTransaction provides database operations within a transaction context.
type DatabaseTransaction interface {
// Query executes a SELECT query within the transaction.
Query(ctx context.Context, dest any, query string, args ...any) error
// Exec executes a write statement within the transaction.
Exec(ctx context.Context, query string, args ...any) (sql.Result, error)
// CreateQueryBuilder creates a query builder that executes within the transaction.
CreateQueryBuilder(table string) QueryBuilder
// Save inserts or updates an entity within the transaction.
Save(ctx context.Context, entity any) error
// Remove deletes an entity within the transaction.
Remove(ctx context.Context, entity any) error
}
// QueryBuilder provides a fluent interface for building SQL queries.
type QueryBuilder interface {
// Select specifies which columns to retrieve (default: *).
Select(cols ...string) QueryBuilder
// Alias sets a table alias for the query.
Alias(alias string) QueryBuilder
// Where adds a WHERE condition (same as AndWhere).
Where(expr string, args ...any) QueryBuilder
// AndWhere adds a WHERE condition with AND conjunction.
AndWhere(expr string, args ...any) QueryBuilder
// OrWhere adds a WHERE condition with OR conjunction.
OrWhere(expr string, args ...any) QueryBuilder
// InnerJoin adds an INNER JOIN clause.
InnerJoin(table string, on string) QueryBuilder
// LeftJoin adds a LEFT JOIN clause.
LeftJoin(table string, on string) QueryBuilder
// Join adds a JOIN clause (default join type).
Join(table string, on string) QueryBuilder
// GroupBy adds a GROUP BY clause.
GroupBy(cols ...string) QueryBuilder
// OrderBy adds an ORDER BY clause.
// Supports expressions like "name ASC", "created_at DESC".
OrderBy(exprs ...string) QueryBuilder
// Limit sets the maximum number of rows to return.
Limit(n int) QueryBuilder
// Offset sets the number of rows to skip.
Offset(n int) QueryBuilder
// Build constructs the final SQL query and returns it with positional arguments.
Build() (query string, args []any)
// GetMany executes the query and scans results into dest (pointer to slice).
GetMany(ctx context.Context, dest any) error
// GetOne executes the query with LIMIT 1 and scans into dest (pointer to struct/map).
GetOne(ctx context.Context, dest any) error
}
// FindOption is a function that configures a FindBy/FindOneBy query.
type FindOption func(q QueryBuilder)


@ -0,0 +1,36 @@
package contracts
import (
"context"
"time"
)
// PeerDiscovery handles peer discovery and connection management.
// Provides mechanisms for finding and connecting to network peers
// without relying on a DHT (Distributed Hash Table).
type PeerDiscovery interface {
// Start begins periodic peer discovery with the given configuration.
// Runs discovery in the background until Stop is called.
Start(config DiscoveryConfig) error
// Stop halts the peer discovery process and cleans up resources.
Stop()
// StartProtocolHandler registers the peer exchange protocol handler.
// Must be called to enable incoming peer exchange requests.
StartProtocolHandler()
// TriggerPeerExchange manually triggers peer exchange with all connected peers.
// Useful for bootstrapping or refreshing peer metadata.
// Returns the number of peers from which metadata was collected.
TriggerPeerExchange(ctx context.Context) int
}
// DiscoveryConfig contains configuration for peer discovery.
type DiscoveryConfig struct {
// DiscoveryInterval is how often to run peer discovery.
DiscoveryInterval time.Duration
// MaxConnections is the maximum number of new connections per discovery round.
MaxConnections int
}

pkg/contracts/doc.go Normal file

@ -0,0 +1,24 @@
// Package contracts defines clean, focused interface contracts for the Orama Network.
//
// This package follows the Interface Segregation Principle (ISP) by providing
// small, focused interfaces that define clear contracts between components.
// Each interface represents a specific capability or service without exposing
// implementation details.
//
// Design Principles:
// - Small, focused interfaces (ISP compliance)
// - No concrete type leakage in signatures
// - Comprehensive documentation for all public methods
// - Domain-aligned contracts (storage, cache, database, auth, serverless, etc.)
//
// Interfaces:
// - StorageProvider: Decentralized content storage (IPFS)
// - CacheProvider/CacheClient: Distributed caching (Olric)
// - DatabaseClient: ORM-like database operations (RQLite)
// - AuthService: Wallet-based authentication and JWT management
// - FunctionExecutor: WebAssembly function execution
// - FunctionRegistry: Function metadata and bytecode storage
// - PubSubService: Topic-based messaging
// - PeerDiscovery: Peer discovery and connection management
// - Logger: Structured logging
package contracts

pkg/contracts/logger.go Normal file

@ -0,0 +1,48 @@
package contracts
// Logger defines a structured logging interface.
// Provides leveled logging with contextual fields for debugging and monitoring.
type Logger interface {
// Debug logs a debug-level message with optional fields.
Debug(msg string, fields ...Field)
// Info logs an info-level message with optional fields.
Info(msg string, fields ...Field)
// Warn logs a warning-level message with optional fields.
Warn(msg string, fields ...Field)
// Error logs an error-level message with optional fields.
Error(msg string, fields ...Field)
// Fatal logs a fatal-level message and terminates the application.
Fatal(msg string, fields ...Field)
// With creates a child logger with additional context fields.
// The returned logger includes all parent fields plus the new ones.
With(fields ...Field) Logger
// Sync flushes any buffered log entries.
// Should be called before application shutdown.
Sync() error
}
// Field represents a structured logging field with a key and value.
// Implementations typically use zap.Field or similar structured logging types.
type Field interface {
// Key returns the field's key name.
Key() string
// Value returns the field's value.
Value() interface{}
}
// LoggerFactory creates logger instances with configuration.
type LoggerFactory interface {
// NewLogger creates a new logger with the given name.
// The name is typically used as a component identifier in logs.
NewLogger(name string) Logger
// NewLoggerWithFields creates a new logger with pre-set context fields.
NewLoggerWithFields(name string, fields ...Field) Logger
}

pkg/contracts/pubsub.go Normal file

@ -0,0 +1,36 @@
package contracts
import (
"context"
)
// PubSubService defines the interface for publish-subscribe messaging.
// Provides topic-based message broadcasting with support for multiple handlers.
type PubSubService interface {
// Publish sends a message to all subscribers of a topic.
// The message is delivered asynchronously to all registered handlers.
Publish(ctx context.Context, topic string, data []byte) error
// Subscribe registers a handler for messages on a topic.
// Multiple handlers can be registered for the same topic.
// Returns a HandlerID that can be used to unsubscribe.
Subscribe(ctx context.Context, topic string, handler MessageHandler) (HandlerID, error)
// Unsubscribe removes a specific handler from a topic.
// The subscription is reference-counted per topic.
Unsubscribe(ctx context.Context, topic string, handlerID HandlerID) error
// Close gracefully shuts down the pubsub service and releases resources.
Close(ctx context.Context) error
}
// MessageHandler processes messages received from a subscribed topic.
// Each handler receives the topic name and message data.
// Multiple handlers for the same topic each receive a copy of the message.
// Handlers should return an error only for critical failures.
type MessageHandler func(topic string, data []byte) error
// HandlerID uniquely identifies a subscription handler.
// Each Subscribe call generates a new HandlerID, allowing multiple
// independent subscriptions to the same topic.
type HandlerID string

pkg/contracts/serverless.go Normal file

@ -0,0 +1,129 @@
package contracts
import (
"context"
"time"
)
// FunctionExecutor handles the execution of WebAssembly serverless functions.
// Manages compilation, caching, and runtime execution of WASM modules.
type FunctionExecutor interface {
// Execute runs a function with the given input and returns the output.
// fn contains the function metadata, input is the function's input data,
// and invCtx provides context about the invocation (caller, trigger type, etc.).
Execute(ctx context.Context, fn *Function, input []byte, invCtx *InvocationContext) ([]byte, error)
// Precompile compiles a WASM module and caches it for faster execution.
// wasmCID is the content identifier, wasmBytes is the raw WASM bytecode.
// Precompiling reduces cold-start latency for subsequent invocations.
Precompile(ctx context.Context, wasmCID string, wasmBytes []byte) error
// Invalidate removes a compiled module from the cache.
// Call this when a function is updated or deleted.
Invalidate(wasmCID string)
}
// FunctionRegistry manages function metadata and bytecode storage.
// Responsible for CRUD operations on function definitions.
type FunctionRegistry interface {
// Register deploys a new function or updates an existing one.
// fn contains the function definition, wasmBytes is the compiled WASM code.
// Returns the old function definition if it was updated, or nil for new registrations.
Register(ctx context.Context, fn *FunctionDefinition, wasmBytes []byte) (*Function, error)
// Get retrieves a function by name and optional version.
// If version is 0, returns the latest active version.
// Returns an error if the function is not found.
Get(ctx context.Context, namespace, name string, version int) (*Function, error)
// List returns all active functions in a namespace.
// Returns only the latest version of each function.
List(ctx context.Context, namespace string) ([]*Function, error)
// Delete marks a function as inactive (soft delete).
// If version is 0, marks all versions as inactive.
Delete(ctx context.Context, namespace, name string, version int) error
// GetWASMBytes retrieves the compiled WASM bytecode for a function.
// wasmCID is the content identifier returned during registration.
GetWASMBytes(ctx context.Context, wasmCID string) ([]byte, error)
// GetLogs retrieves execution logs for a function.
// limit constrains the number of log entries returned.
GetLogs(ctx context.Context, namespace, name string, limit int) ([]LogEntry, error)
}
// Function represents a deployed serverless function with its metadata.
type Function struct {
ID string `json:"id"`
Name string `json:"name"`
Namespace string `json:"namespace"`
Version int `json:"version"`
WASMCID string `json:"wasm_cid"`
SourceCID string `json:"source_cid,omitempty"`
MemoryLimitMB int `json:"memory_limit_mb"`
TimeoutSeconds int `json:"timeout_seconds"`
IsPublic bool `json:"is_public"`
RetryCount int `json:"retry_count"`
RetryDelaySeconds int `json:"retry_delay_seconds"`
DLQTopic string `json:"dlq_topic,omitempty"`
Status FunctionStatus `json:"status"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
}
// FunctionDefinition contains the configuration for deploying a function.
type FunctionDefinition struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
Version int `json:"version,omitempty"`
MemoryLimitMB int `json:"memory_limit_mb,omitempty"`
TimeoutSeconds int `json:"timeout_seconds,omitempty"`
IsPublic bool `json:"is_public,omitempty"`
RetryCount int `json:"retry_count,omitempty"`
RetryDelaySeconds int `json:"retry_delay_seconds,omitempty"`
DLQTopic string `json:"dlq_topic,omitempty"`
EnvVars map[string]string `json:"env_vars,omitempty"`
}
// InvocationContext provides context for a function invocation.
type InvocationContext struct {
RequestID string `json:"request_id"`
FunctionID string `json:"function_id"`
FunctionName string `json:"function_name"`
Namespace string `json:"namespace"`
CallerWallet string `json:"caller_wallet,omitempty"`
TriggerType TriggerType `json:"trigger_type"`
WSClientID string `json:"ws_client_id,omitempty"`
EnvVars map[string]string `json:"env_vars,omitempty"`
}
// LogEntry represents a log message from a function execution.
type LogEntry struct {
Level string `json:"level"`
Message string `json:"message"`
Timestamp time.Time `json:"timestamp"`
}
// FunctionStatus represents the current state of a deployed function.
type FunctionStatus string
const (
FunctionStatusActive FunctionStatus = "active"
FunctionStatusInactive FunctionStatus = "inactive"
FunctionStatusError FunctionStatus = "error"
)
// TriggerType identifies the type of event that triggered a function invocation.
type TriggerType string
const (
TriggerTypeHTTP TriggerType = "http"
TriggerTypeWebSocket TriggerType = "websocket"
TriggerTypeCron TriggerType = "cron"
TriggerTypeDatabase TriggerType = "database"
TriggerTypePubSub TriggerType = "pubsub"
TriggerTypeTimer TriggerType = "timer"
TriggerTypeJob TriggerType = "job"
)

pkg/contracts/storage.go Normal file

@ -0,0 +1,70 @@
package contracts
import (
"context"
"io"
)
// StorageProvider defines the interface for decentralized storage operations.
// Implementations typically use IPFS Cluster for distributed content storage.
type StorageProvider interface {
// Add uploads content to the storage network and returns metadata.
// The content is read from the provided reader and associated with the given name.
// Returns information about the stored content including its CID (Content IDentifier).
Add(ctx context.Context, reader io.Reader, name string) (*AddResponse, error)
// Pin ensures content is persistently stored across the network.
// The CID identifies the content, name provides a human-readable label,
// and replicationFactor specifies how many nodes should store the content.
Pin(ctx context.Context, cid string, name string, replicationFactor int) (*PinResponse, error)
// PinStatus retrieves the current replication status of pinned content.
// Returns detailed information about which peers are storing the content
// and the current state of the pin operation.
PinStatus(ctx context.Context, cid string) (*PinStatus, error)
// Get retrieves content from the storage network by its CID.
// The ipfsAPIURL parameter specifies which IPFS API endpoint to query.
// Returns a ReadCloser that must be closed by the caller.
Get(ctx context.Context, cid string, ipfsAPIURL string) (io.ReadCloser, error)
// Unpin removes a pin, allowing the content to be garbage collected.
// This does not immediately delete the content but makes it eligible for removal.
Unpin(ctx context.Context, cid string) error
// Health checks if the storage service is operational.
// Returns an error if the service is unavailable or unhealthy.
Health(ctx context.Context) error
// GetPeerCount returns the number of storage peers in the cluster.
// Useful for monitoring cluster health and connectivity.
GetPeerCount(ctx context.Context) (int, error)
// Close gracefully shuts down the storage client and releases resources.
Close(ctx context.Context) error
}
// AddResponse represents the result of adding content to storage.
type AddResponse struct {
Name string `json:"name"`
Cid string `json:"cid"`
Size int64 `json:"size"`
}
// PinResponse represents the result of a pin operation.
type PinResponse struct {
Cid string `json:"cid"`
Name string `json:"name"`
}
// PinStatus represents the replication status of pinned content.
type PinStatus struct {
Cid string `json:"cid"`
Name string `json:"name"`
Status string `json:"status"` // "pinned", "pinning", "queued", "unpinned", "error"
ReplicationMin int `json:"replication_min"`
ReplicationMax int `json:"replication_max"`
ReplicationFactor int `json:"replication_factor"`
Peers []string `json:"peers"` // List of peer IDs storing the content
Error string `json:"error,omitempty"`
}


@ -78,7 +78,7 @@ func (dc *DependencyChecker) CheckAll() ([]string, error) {
errMsg := fmt.Sprintf("Missing %d required dependencies:\n%s\n\nInstall them with:\n%s",
len(missing), strings.Join(missing, ", "), strings.Join(hints, "\n"))
-	return missing, fmt.Errorf(errMsg)
+	return missing, fmt.Errorf("%s", errMsg)
}
// PortChecker validates that required ports are available
@ -113,7 +113,7 @@ func (pc *PortChecker) CheckAll() ([]int, error) {
errMsg := fmt.Sprintf("The following ports are unavailable: %v\n\nFree them or stop conflicting services and try again",
unavailable)
-	return unavailable, fmt.Errorf(errMsg)
+	return unavailable, fmt.Errorf("%s", errMsg)
}
// isPortAvailable checks if a TCP port is available for binding


@ -0,0 +1,287 @@
package development
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/DeBrosOfficial/network/pkg/tlsutil"
)
// ipfsNodeInfo holds information about an IPFS node for peer discovery
type ipfsNodeInfo struct {
name string
ipfsPath string
apiPort int
swarmPort int
gatewayPort int
peerID string
}
func (pm *ProcessManager) buildIPFSNodes(topology *Topology) []ipfsNodeInfo {
var nodes []ipfsNodeInfo
for _, nodeSpec := range topology.Nodes {
nodes = append(nodes, ipfsNodeInfo{
name: nodeSpec.Name,
ipfsPath: filepath.Join(pm.oramaDir, nodeSpec.DataDir, "ipfs/repo"),
apiPort: nodeSpec.IPFSAPIPort,
swarmPort: nodeSpec.IPFSSwarmPort,
gatewayPort: nodeSpec.IPFSGatewayPort,
peerID: "",
})
}
return nodes
}
func (pm *ProcessManager) startIPFS(ctx context.Context) error {
topology := DefaultTopology()
nodes := pm.buildIPFSNodes(topology)
for i := range nodes {
		if err := os.MkdirAll(nodes[i].ipfsPath, 0755); err != nil {
			return fmt.Errorf("failed to create IPFS repo dir for %s: %w", nodes[i].name, err)
		}
if _, err := os.Stat(filepath.Join(nodes[i].ipfsPath, "config")); os.IsNotExist(err) {
fmt.Fprintf(pm.logWriter, " Initializing IPFS (%s)...\n", nodes[i].name)
cmd := exec.CommandContext(ctx, "ipfs", "init", "--profile=server", "--repo-dir="+nodes[i].ipfsPath)
if _, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: ipfs init failed: %v\n", err)
}
swarmKeyPath := filepath.Join(pm.oramaDir, "swarm.key")
if data, err := os.ReadFile(swarmKeyPath); err == nil {
os.WriteFile(filepath.Join(nodes[i].ipfsPath, "swarm.key"), data, 0600)
}
}
peerID, err := configureIPFSRepo(nodes[i].ipfsPath, nodes[i].apiPort, nodes[i].gatewayPort, nodes[i].swarmPort)
if err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to configure IPFS repo for %s: %v\n", nodes[i].name, err)
} else {
nodes[i].peerID = peerID
fmt.Fprintf(pm.logWriter, " Peer ID for %s: %s\n", nodes[i].name, peerID)
}
}
for i := range nodes {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("ipfs-%s.pid", nodes[i].name))
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("ipfs-%s.log", nodes[i].name))
cmd := exec.CommandContext(ctx, "ipfs", "daemon", "--enable-pubsub-experiment", "--repo-dir="+nodes[i].ipfsPath)
		logFile, err := os.Create(logPath)
		if err != nil {
			return fmt.Errorf("failed to create log file for ipfs-%s: %w", nodes[i].name, err)
		}
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start ipfs-%s: %w", nodes[i].name, err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
pm.processes[fmt.Sprintf("ipfs-%s", nodes[i].name)] = &ManagedProcess{
Name: fmt.Sprintf("ipfs-%s", nodes[i].name),
PID: cmd.Process.Pid,
StartTime: time.Now(),
LogPath: logPath,
}
fmt.Fprintf(pm.logWriter, "✓ IPFS (%s) started (PID: %d, API: %d, Swarm: %d)\n", nodes[i].name, cmd.Process.Pid, nodes[i].apiPort, nodes[i].swarmPort)
}
time.Sleep(2 * time.Second)
if err := pm.seedIPFSPeersWithHTTP(ctx, nodes); err != nil {
fmt.Fprintf(pm.logWriter, "⚠️ Failed to seed IPFS peers: %v\n", err)
}
return nil
}
func configureIPFSRepo(repoPath string, apiPort, gatewayPort, swarmPort int) (string, error) {
configPath := filepath.Join(repoPath, "config")
data, err := os.ReadFile(configPath)
if err != nil {
return "", fmt.Errorf("failed to read IPFS config: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return "", fmt.Errorf("failed to parse IPFS config: %w", err)
}
config["Addresses"] = map[string]interface{}{
"API": []string{fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", apiPort)},
"Gateway": []string{fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", gatewayPort)},
"Swarm": []string{
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", swarmPort),
fmt.Sprintf("/ip6/::/tcp/%d", swarmPort),
},
}
config["AutoConf"] = map[string]interface{}{
"Enabled": false,
}
config["Bootstrap"] = []string{}
if dns, ok := config["DNS"].(map[string]interface{}); ok {
dns["Resolvers"] = map[string]interface{}{}
} else {
config["DNS"] = map[string]interface{}{
"Resolvers": map[string]interface{}{},
}
}
if routing, ok := config["Routing"].(map[string]interface{}); ok {
routing["DelegatedRouters"] = []string{}
} else {
config["Routing"] = map[string]interface{}{
"DelegatedRouters": []string{},
}
}
if ipns, ok := config["Ipns"].(map[string]interface{}); ok {
ipns["DelegatedPublishers"] = []string{}
} else {
config["Ipns"] = map[string]interface{}{
"DelegatedPublishers": []string{},
}
}
if api, ok := config["API"].(map[string]interface{}); ok {
api["HTTPHeaders"] = map[string][]string{
"Access-Control-Allow-Origin": {"*"},
"Access-Control-Allow-Methods": {"GET", "PUT", "POST", "DELETE", "OPTIONS"},
"Access-Control-Allow-Headers": {"Content-Type", "X-Requested-With"},
"Access-Control-Expose-Headers": {"Content-Length", "Content-Range"},
}
} else {
config["API"] = map[string]interface{}{
"HTTPHeaders": map[string][]string{
"Access-Control-Allow-Origin": {"*"},
"Access-Control-Allow-Methods": {"GET", "PUT", "POST", "DELETE", "OPTIONS"},
"Access-Control-Allow-Headers": {"Content-Type", "X-Requested-With"},
"Access-Control-Expose-Headers": {"Content-Length", "Content-Range"},
},
}
}
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return "", fmt.Errorf("failed to marshal IPFS config: %w", err)
}
if err := os.WriteFile(configPath, updatedData, 0644); err != nil {
return "", fmt.Errorf("failed to write IPFS config: %w", err)
}
if id, ok := config["Identity"].(map[string]interface{}); ok {
if peerID, ok := id["PeerID"].(string); ok {
return peerID, nil
}
}
return "", fmt.Errorf("could not extract peer ID from config")
}
func (pm *ProcessManager) seedIPFSPeersWithHTTP(ctx context.Context, nodes []ipfsNodeInfo) error {
fmt.Fprintf(pm.logWriter, " Seeding IPFS local bootstrap peers via HTTP API...\n")
for _, node := range nodes {
if err := pm.waitIPFSReady(ctx, node); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to wait for IPFS readiness for %s: %v\n", node.name, err)
}
}
for i, node := range nodes {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/api/v0/bootstrap/rm?all=true", node.apiPort)
if err := pm.ipfsHTTPCall(ctx, httpURL, "POST"); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to clear bootstrap for %s: %v\n", node.name, err)
}
for j, otherNode := range nodes {
if i == j {
continue
}
multiaddr := fmt.Sprintf("/ip4/127.0.0.1/tcp/%d/p2p/%s", otherNode.swarmPort, otherNode.peerID)
httpURL := fmt.Sprintf("http://127.0.0.1:%d/api/v0/bootstrap/add?arg=%s", node.apiPort, url.QueryEscape(multiaddr))
if err := pm.ipfsHTTPCall(ctx, httpURL, "POST"); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to add bootstrap peer for %s: %v\n", node.name, err)
}
}
}
return nil
}
func (pm *ProcessManager) waitIPFSReady(ctx context.Context, node ipfsNodeInfo) error {
maxRetries := 30
retryInterval := 500 * time.Millisecond
for attempt := 0; attempt < maxRetries; attempt++ {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/api/v0/version", node.apiPort)
if err := pm.ipfsHTTPCall(ctx, httpURL, "POST"); err == nil {
return nil
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return ctx.Err()
}
}
return fmt.Errorf("IPFS daemon %s did not become ready", node.name)
}
func (pm *ProcessManager) ipfsHTTPCall(ctx context.Context, urlStr string, method string) error {
client := tlsutil.NewHTTPClient(5 * time.Second)
req, err := http.NewRequestWithContext(ctx, method, urlStr, nil)
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("HTTP call failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode >= 400 {
return fmt.Errorf("HTTP %d", resp.StatusCode)
}
return nil
}
func readIPFSConfigValue(ctx context.Context, repoPath string, key string) (string, error) {
configPath := filepath.Join(repoPath, "config")
data, err := os.ReadFile(configPath)
if err != nil {
return "", fmt.Errorf("failed to read IPFS config: %w", err)
}
lines := strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.Contains(line, key) {
parts := strings.SplitN(line, ":", 2)
if len(parts) == 2 {
value := strings.TrimSpace(parts[1])
value = strings.Trim(value, `",`)
if value != "" {
return value, nil
}
}
}
}
return "", fmt.Errorf("key %s not found in IPFS config", key)
}

View File

@ -0,0 +1,314 @@
package development
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
)
func (pm *ProcessManager) startIPFSCluster(ctx context.Context) error {
topology := DefaultTopology()
var nodes []struct {
name string
clusterPath string
restAPIPort int
clusterPort int
ipfsPort int
}
for _, nodeSpec := range topology.Nodes {
nodes = append(nodes, struct {
name string
clusterPath string
restAPIPort int
clusterPort int
ipfsPort int
}{
nodeSpec.Name,
filepath.Join(pm.oramaDir, nodeSpec.DataDir, "ipfs-cluster"),
nodeSpec.ClusterAPIPort,
nodeSpec.ClusterPort,
nodeSpec.IPFSAPIPort,
})
}
fmt.Fprintf(pm.logWriter, " Waiting for IPFS daemons to be ready...\n")
ipfsNodes := pm.buildIPFSNodes(topology)
for _, ipfsNode := range ipfsNodes {
if err := pm.waitIPFSReady(ctx, ipfsNode); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS %s did not become ready: %v\n", ipfsNode.name, err)
}
}
secretPath := filepath.Join(pm.oramaDir, "cluster-secret")
clusterSecret, err := os.ReadFile(secretPath)
if err != nil {
return fmt.Errorf("failed to read cluster secret: %w", err)
}
clusterSecretHex := strings.TrimSpace(string(clusterSecret))
bootstrapMultiaddr := ""
{
node := nodes[0]
if err := pm.cleanClusterState(node.clusterPath); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to clean cluster state for %s: %v\n", node.name, err)
}
os.MkdirAll(node.clusterPath, 0755)
fmt.Fprintf(pm.logWriter, " Initializing IPFS Cluster (%s)...\n", node.name)
cmd := exec.CommandContext(ctx, "ipfs-cluster-service", "init", "--force")
cmd.Env = append(os.Environ(),
fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath),
fmt.Sprintf("CLUSTER_SECRET=%s", clusterSecretHex),
)
if output, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: ipfs-cluster-service init failed: %v (output: %s)\n", err, string(output))
}
if err := pm.ensureIPFSClusterPorts(node.clusterPath, node.restAPIPort, node.clusterPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to update IPFS Cluster config for %s: %v\n", node.name, err)
}
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("ipfs-cluster-%s.pid", node.name))
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("ipfs-cluster-%s.log", node.name))
cmd = exec.CommandContext(ctx, "ipfs-cluster-service", "daemon")
cmd.Env = append(os.Environ(), fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath))
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return err
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ IPFS Cluster (%s) started (PID: %d, API: %d)\n", node.name, cmd.Process.Pid, node.restAPIPort)
if err := pm.waitClusterReady(ctx, node.name, node.restAPIPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS Cluster %s did not become ready: %v\n", node.name, err)
}
time.Sleep(500 * time.Millisecond)
peerID, err := pm.waitForClusterPeerID(ctx, filepath.Join(node.clusterPath, "identity.json"))
if err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to read bootstrap peer ID: %v\n", err)
} else {
bootstrapMultiaddr = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d/p2p/%s", node.clusterPort, peerID)
}
}
for i := 1; i < len(nodes); i++ {
node := nodes[i]
if err := pm.cleanClusterState(node.clusterPath); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to clean cluster state for %s: %v\n", node.name, err)
}
os.MkdirAll(node.clusterPath, 0755)
fmt.Fprintf(pm.logWriter, " Initializing IPFS Cluster (%s)...\n", node.name)
cmd := exec.CommandContext(ctx, "ipfs-cluster-service", "init", "--force")
cmd.Env = append(os.Environ(),
fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath),
fmt.Sprintf("CLUSTER_SECRET=%s", clusterSecretHex),
)
if output, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: ipfs-cluster-service init failed for %s: %v (output: %s)\n", node.name, err, string(output))
}
if err := pm.ensureIPFSClusterPorts(node.clusterPath, node.restAPIPort, node.clusterPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to update IPFS Cluster config for %s: %v\n", node.name, err)
}
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("ipfs-cluster-%s.pid", node.name))
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("ipfs-cluster-%s.log", node.name))
args := []string{"daemon"}
if bootstrapMultiaddr != "" {
args = append(args, "--bootstrap", bootstrapMultiaddr)
}
cmd = exec.CommandContext(ctx, "ipfs-cluster-service", args...)
cmd.Env = append(os.Environ(), fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath))
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
continue
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ IPFS Cluster (%s) started (PID: %d, API: %d)\n", node.name, cmd.Process.Pid, node.restAPIPort)
if err := pm.waitClusterReady(ctx, node.name, node.restAPIPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS Cluster %s did not become ready: %v\n", node.name, err)
}
}
fmt.Fprintf(pm.logWriter, " Waiting for IPFS Cluster peers to form...\n")
if err := pm.waitClusterFormed(ctx, nodes[0].restAPIPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS Cluster did not form fully: %v\n", err)
}
time.Sleep(1 * time.Second)
return nil
}
func (pm *ProcessManager) waitForClusterPeerID(ctx context.Context, identityPath string) (string, error) {
maxRetries := 30
retryInterval := 500 * time.Millisecond
for attempt := 0; attempt < maxRetries; attempt++ {
data, err := os.ReadFile(identityPath)
if err == nil {
var identity map[string]interface{}
if err := json.Unmarshal(data, &identity); err == nil {
if id, ok := identity["id"].(string); ok {
return id, nil
}
}
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return "", ctx.Err()
}
}
return "", fmt.Errorf("could not read cluster peer ID")
}
func (pm *ProcessManager) waitClusterReady(ctx context.Context, name string, restAPIPort int) error {
maxRetries := 30
retryInterval := 500 * time.Millisecond
for attempt := 0; attempt < maxRetries; attempt++ {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/peers", restAPIPort)
resp, err := http.Get(httpURL)
if err == nil && resp.StatusCode == 200 {
resp.Body.Close()
return nil
}
if resp != nil {
resp.Body.Close()
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return ctx.Err()
}
}
return fmt.Errorf("IPFS Cluster %s did not become ready", name)
}
func (pm *ProcessManager) waitClusterFormed(ctx context.Context, bootstrapRestAPIPort int) error {
maxRetries := 30
retryInterval := 1 * time.Second
requiredPeers := 3
for attempt := 0; attempt < maxRetries; attempt++ {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/peers", bootstrapRestAPIPort)
resp, err := http.Get(httpURL)
if err == nil {
peerCount := 0
if resp.StatusCode == 200 {
dec := json.NewDecoder(resp.Body)
for {
var peer interface{}
if err := dec.Decode(&peer); err != nil {
break
}
peerCount++
}
}
resp.Body.Close()
if peerCount >= requiredPeers {
return nil
}
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return ctx.Err()
}
}
return fmt.Errorf("IPFS Cluster did not form fully")
}
func (pm *ProcessManager) cleanClusterState(clusterPath string) error {
pebblePath := filepath.Join(clusterPath, "pebble")
os.RemoveAll(pebblePath)
peerstorePath := filepath.Join(clusterPath, "peerstore")
os.Remove(peerstorePath)
serviceJSONPath := filepath.Join(clusterPath, "service.json")
os.Remove(serviceJSONPath)
lockPath := filepath.Join(clusterPath, "cluster.lock")
os.Remove(lockPath)
return nil
}
func (pm *ProcessManager) ensureIPFSClusterPorts(clusterPath string, restAPIPort int, clusterPort int) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
data, err := os.ReadFile(serviceJSONPath)
if err != nil {
return err
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse cluster service.json: %w", err)
}
portOffset := restAPIPort - 9094
proxyPort := 9095 + portOffset
pinsvcPort := 9097 + portOffset
ipfsPort := 4501 + (portOffset / 10)
if api, ok := config["api"].(map[string]interface{}); ok {
if restapi, ok := api["restapi"].(map[string]interface{}); ok {
restapi["http_listen_multiaddress"] = fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", restAPIPort)
}
if proxy, ok := api["ipfsproxy"].(map[string]interface{}); ok {
proxy["listen_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", proxyPort)
proxy["node_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", ipfsPort)
}
if pinsvc, ok := api["pinsvcapi"].(map[string]interface{}); ok {
pinsvc["http_listen_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", pinsvcPort)
}
}
if cluster, ok := config["cluster"].(map[string]interface{}); ok {
cluster["listen_multiaddress"] = []string{
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", clusterPort),
fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", clusterPort),
}
}
if connector, ok := config["ipfs_connector"].(map[string]interface{}); ok {
if ipfshttp, ok := connector["ipfshttp"].(map[string]interface{}); ok {
ipfshttp["node_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", ipfsPort)
}
}
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal cluster config: %w", err)
}
return os.WriteFile(serviceJSONPath, updatedData, 0644)
}
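`ensureIPFSClusterPorts` derives every secondary port from the REST API port's offset against the cluster default 9094. A small sketch of that arithmetic, isolated so the mapping is visible — `derivePorts` is illustrative, and the assumption that nodes space their REST ports by 10 (9094, 9104, 9114, …) comes from the `/10` division above, not from anything stated in the diff:

```go
package main

import "fmt"

// derived mirrors the arithmetic in ensureIPFSClusterPorts: proxy and
// pinsvc ports track the REST offset one-to-one, while the IPFS API port
// advances once per node (offset divided by the assumed spacing of 10).
type derived struct {
	proxy, pinsvc, ipfsAPI int
}

func derivePorts(restAPIPort int) derived {
	off := restAPIPort - 9094
	return derived{
		proxy:   9095 + off,
		pinsvc:  9097 + off,
		ipfsAPI: 4501 + off/10,
	}
}

func main() {
	for _, rest := range []int{9094, 9104, 9114} {
		d := derivePorts(rest)
		fmt.Printf("rest=%d proxy=%d pinsvc=%d ipfs=%d\n", rest, d.proxy, d.pinsvc, d.ipfsAPI)
	}
}
```

So node-1 at 9094 keeps the cluster defaults, node-2 at 9104 gets proxy 9105 / pinsvc 9107 / IPFS API 4502, and so on.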

View File

@ -0,0 +1,231 @@
package development
import (
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
)
func (pm *ProcessManager) printStartupSummary(topology *Topology) {
fmt.Fprintf(pm.logWriter, "\n✅ Development environment ready!\n")
fmt.Fprintf(pm.logWriter, "═══════════════════════════════════════\n\n")
fmt.Fprintf(pm.logWriter, "📡 Access your nodes via unified gateway ports:\n\n")
for _, node := range topology.Nodes {
fmt.Fprintf(pm.logWriter, " %s:\n", node.Name)
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/health\n", node.UnifiedGatewayPort)
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/rqlite/http/db/execute\n", node.UnifiedGatewayPort)
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/cluster/health\n\n", node.UnifiedGatewayPort)
}
fmt.Fprintf(pm.logWriter, "🌐 Main Gateway:\n")
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/v1/status\n\n", topology.GatewayPort)
fmt.Fprintf(pm.logWriter, "📊 Other Services:\n")
fmt.Fprintf(pm.logWriter, " Olric: http://localhost:%d\n", topology.OlricHTTPPort)
fmt.Fprintf(pm.logWriter, " Anon SOCKS: 127.0.0.1:%d\n", topology.AnonSOCKSPort)
fmt.Fprintf(pm.logWriter, " Rqlite MCP: http://localhost:%d/sse\n\n", topology.MCPPort)
fmt.Fprintf(pm.logWriter, "📝 Useful Commands:\n")
fmt.Fprintf(pm.logWriter, " ./bin/orama dev status - Check service status\n")
fmt.Fprintf(pm.logWriter, " ./bin/orama dev logs node-1 - View logs\n")
fmt.Fprintf(pm.logWriter, " ./bin/orama dev down - Stop all services\n\n")
fmt.Fprintf(pm.logWriter, "📂 Logs: %s/logs\n", pm.oramaDir)
fmt.Fprintf(pm.logWriter, "⚙️ Config: %s\n\n", pm.oramaDir)
}
func (pm *ProcessManager) stopProcess(name string) error {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("%s.pid", name))
pidBytes, err := os.ReadFile(pidPath)
if err != nil {
return nil
}
pid, err := strconv.Atoi(strings.TrimSpace(string(pidBytes)))
if err != nil {
os.Remove(pidPath)
return nil
}
if !checkProcessRunning(pid) {
os.Remove(pidPath)
fmt.Fprintf(pm.logWriter, "✓ %s (not running)\n", name)
return nil
}
proc, err := os.FindProcess(pid)
if err != nil {
os.Remove(pidPath)
return nil
}
proc.Signal(os.Interrupt)
gracefulShutdown := false
for i := 0; i < 20; i++ {
time.Sleep(100 * time.Millisecond)
if !checkProcessRunning(pid) {
gracefulShutdown = true
break
}
}
if !gracefulShutdown && checkProcessRunning(pid) {
proc.Signal(os.Kill)
time.Sleep(200 * time.Millisecond)
if runtime.GOOS != "windows" {
exec.Command("pkill", "-9", "-P", fmt.Sprintf("%d", pid)).Run()
}
if checkProcessRunning(pid) {
exec.Command("kill", "-9", fmt.Sprintf("%d", pid)).Run()
time.Sleep(100 * time.Millisecond)
}
}
os.Remove(pidPath)
if gracefulShutdown {
fmt.Fprintf(pm.logWriter, "✓ %s stopped gracefully\n", name)
} else {
fmt.Fprintf(pm.logWriter, "✓ %s stopped (forced)\n", name)
}
return nil
}
func checkProcessRunning(pid int) bool {
proc, err := os.FindProcess(pid)
if err != nil {
return false
}
// Signal 0 performs the existence check without delivering a signal;
// os.Signal(nil) would always fail with "unsupported signal type".
return proc.Signal(syscall.Signal(0)) == nil
}
func (pm *ProcessManager) startNode(name, configFile, logPath string) error {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("%s.pid", name))
cmd := exec.Command("./bin/orama-node", "--config", configFile)
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start %s: %w", name, err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ %s started (PID: %d)\n", strings.Title(name), cmd.Process.Pid)
time.Sleep(1 * time.Second)
return nil
}
func (pm *ProcessManager) startGateway(ctx context.Context) error {
pidPath := filepath.Join(pm.pidsDir, "gateway.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "gateway.log")
cmd := exec.Command("./bin/gateway", "--config", "gateway.yaml")
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start gateway: %w", err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Gateway started (PID: %d, listen: 6001)\n", cmd.Process.Pid)
return nil
}
func (pm *ProcessManager) startOlric(ctx context.Context) error {
pidPath := filepath.Join(pm.pidsDir, "olric.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "olric.log")
configPath := filepath.Join(pm.oramaDir, "olric-config.yaml")
cmd := exec.CommandContext(ctx, "olric-server")
cmd.Env = append(os.Environ(), fmt.Sprintf("OLRIC_SERVER_CONFIG=%s", configPath))
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start olric: %w", err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Olric started (PID: %d)\n", cmd.Process.Pid)
time.Sleep(1 * time.Second)
return nil
}
func (pm *ProcessManager) startAnon(ctx context.Context) error {
if runtime.GOOS != "darwin" {
return nil
}
pidPath := filepath.Join(pm.pidsDir, "anon.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "anon.log")
cmd := exec.CommandContext(ctx, "npx", "anyone-client")
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
fmt.Fprintf(pm.logWriter, " ⚠️ Failed to start Anon: %v\n", err)
return nil
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Anon proxy started (PID: %d, SOCKS: 9050)\n", cmd.Process.Pid)
return nil
}
func (pm *ProcessManager) startMCP(ctx context.Context) error {
topology := DefaultTopology()
pidPath := filepath.Join(pm.pidsDir, "rqlite-mcp.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "rqlite-mcp.log")
cmd := exec.CommandContext(ctx, "./bin/rqlite-mcp")
cmd.Env = append(os.Environ(),
fmt.Sprintf("MCP_PORT=%d", topology.MCPPort),
fmt.Sprintf("RQLITE_URL=http://localhost:%d", topology.Nodes[0].RQLiteHTTPPort),
)
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
fmt.Fprintf(pm.logWriter, " ⚠️ Failed to start Rqlite MCP: %v\n", err)
return nil
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Rqlite MCP started (PID: %d, port: %d)\n", cmd.Process.Pid, topology.MCPPort)
return nil
}
func (pm *ProcessManager) startNodes(ctx context.Context) error {
topology := DefaultTopology()
for _, nodeSpec := range topology.Nodes {
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("%s.log", nodeSpec.Name))
if err := pm.startNode(nodeSpec.Name, nodeSpec.ConfigFilename, logPath); err != nil {
return fmt.Errorf("failed to start %s: %w", nodeSpec.Name, err)
}
time.Sleep(500 * time.Millisecond)
}
return nil
}

View File

@ -2,21 +2,12 @@ package development
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync"
"time"
"github.com/DeBrosOfficial/network/pkg/tlsutil"
)
// ProcessManager manages all dev environment processes
@ -69,13 +60,12 @@ func (pm *ProcessManager) StartAll(ctx context.Context) error {
{"Olric", pm.startOlric},
{"Anon", pm.startAnon},
{"Nodes (Network)", pm.startNodes},
// Gateway is now per-node (embedded in each node) - no separate main gateway needed
{"Rqlite MCP", pm.startMCP},
}
for _, svc := range services {
if err := svc.fn(ctx); err != nil {
fmt.Fprintf(pm.logWriter, "⚠️ Failed to start %s: %v\n", svc.name, err)
// Continue starting others, don't fail
}
}
@ -99,35 +89,6 @@ func (pm *ProcessManager) StartAll(ctx context.Context) error {
return nil
}
// printStartupSummary prints the final startup summary with key endpoints
func (pm *ProcessManager) printStartupSummary(topology *Topology) {
fmt.Fprintf(pm.logWriter, "\n✅ Development environment ready!\n")
fmt.Fprintf(pm.logWriter, "═══════════════════════════════════════\n\n")
fmt.Fprintf(pm.logWriter, "📡 Access your nodes via unified gateway ports:\n\n")
for _, node := range topology.Nodes {
fmt.Fprintf(pm.logWriter, " %s:\n", node.Name)
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/health\n", node.UnifiedGatewayPort)
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/rqlite/http/db/execute\n", node.UnifiedGatewayPort)
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/cluster/health\n\n", node.UnifiedGatewayPort)
}
fmt.Fprintf(pm.logWriter, "🌐 Main Gateway:\n")
fmt.Fprintf(pm.logWriter, " curl http://localhost:%d/v1/status\n\n", topology.GatewayPort)
fmt.Fprintf(pm.logWriter, "📊 Other Services:\n")
fmt.Fprintf(pm.logWriter, " Olric: http://localhost:%d\n", topology.OlricHTTPPort)
fmt.Fprintf(pm.logWriter, " Anon SOCKS: 127.0.0.1:%d\n\n", topology.AnonSOCKSPort)
fmt.Fprintf(pm.logWriter, "📝 Useful Commands:\n")
fmt.Fprintf(pm.logWriter, " ./bin/orama dev status - Check service status\n")
fmt.Fprintf(pm.logWriter, " ./bin/orama dev logs node-1 - View logs\n")
fmt.Fprintf(pm.logWriter, " ./bin/orama dev down - Stop all services\n\n")
fmt.Fprintf(pm.logWriter, "📂 Logs: %s/logs\n", pm.oramaDir)
fmt.Fprintf(pm.logWriter, "⚙️ Config: %s\n\n", pm.oramaDir)
}
// StopAll stops all running processes
func (pm *ProcessManager) StopAll(ctx context.Context) error {
fmt.Fprintf(pm.logWriter, "\n🛑 Stopping development environment...\n\n")
@ -149,11 +110,10 @@ func (pm *ProcessManager) StopAll(ctx context.Context) error {
node := topology.Nodes[i]
services = append(services, fmt.Sprintf("ipfs-%s", node.Name))
}
services = append(services, "olric", "anon")
services = append(services, "olric", "anon", "rqlite-mcp")
fmt.Fprintf(pm.logWriter, "Stopping %d services...\n\n", len(services))
// Stop all processes sequentially (in dependency order) and wait for each
stoppedCount := 0
for _, svc := range services {
if err := pm.stopProcess(svc); err != nil {
@ -161,8 +121,6 @@ func (pm *ProcessManager) StopAll(ctx context.Context) error {
} else {
stoppedCount++
}
// Show progress
fmt.Fprintf(pm.logWriter, " [%d/%d] stopped\n", stoppedCount, len(services))
}
@ -219,12 +177,17 @@ func (pm *ProcessManager) Status(ctx context.Context) {
name string
ports []int
}{"Anon SOCKS", []int{topology.AnonSOCKSPort}})
services = append(services, struct {
name string
ports []int
}{"Rqlite MCP", []int{topology.MCPPort}})
for _, svc := range services {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("%s.pid", svc.name))
running := false
if pidBytes, err := os.ReadFile(pidPath); err == nil {
pid, _ := strconv.Atoi(string(pidBytes))
var pid int
fmt.Sscanf(string(pidBytes), "%d", &pid)
if checkProcessRunning(pid) {
running = true
}
@ -252,888 +215,3 @@ func (pm *ProcessManager) Status(ctx context.Context) {
fmt.Fprintf(pm.logWriter, "\nLogs directory: %s/logs\n\n", pm.oramaDir)
}
// Helper functions for starting individual services
// buildIPFSNodes constructs ipfsNodeInfo from topology
func (pm *ProcessManager) buildIPFSNodes(topology *Topology) []ipfsNodeInfo {
var nodes []ipfsNodeInfo
for _, nodeSpec := range topology.Nodes {
nodes = append(nodes, ipfsNodeInfo{
name: nodeSpec.Name,
ipfsPath: filepath.Join(pm.oramaDir, nodeSpec.DataDir, "ipfs/repo"),
apiPort: nodeSpec.IPFSAPIPort,
swarmPort: nodeSpec.IPFSSwarmPort,
gatewayPort: nodeSpec.IPFSGatewayPort,
peerID: "",
})
}
return nodes
}
// startNodes starts all network nodes
func (pm *ProcessManager) startNodes(ctx context.Context) error {
topology := DefaultTopology()
for _, nodeSpec := range topology.Nodes {
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("%s.log", nodeSpec.Name))
if err := pm.startNode(nodeSpec.Name, nodeSpec.ConfigFilename, logPath); err != nil {
return fmt.Errorf("failed to start %s: %w", nodeSpec.Name, err)
}
time.Sleep(500 * time.Millisecond)
}
return nil
}
// ipfsNodeInfo holds information about an IPFS node for peer discovery
type ipfsNodeInfo struct {
name string
ipfsPath string
apiPort int
swarmPort int
gatewayPort int
peerID string
}
// readIPFSConfigValue reads a single config value from IPFS repo without daemon running
func readIPFSConfigValue(ctx context.Context, repoPath string, key string) (string, error) {
configPath := filepath.Join(repoPath, "config")
data, err := os.ReadFile(configPath)
if err != nil {
return "", fmt.Errorf("failed to read IPFS config: %w", err)
}
// Simple JSON parse to extract the value - only works for string values
lines := strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.Contains(line, key) {
// Extract the value after the colon
parts := strings.SplitN(line, ":", 2)
if len(parts) == 2 {
value := strings.TrimSpace(parts[1])
value = strings.Trim(value, `",`)
if value != "" {
return value, nil
}
}
}
}
return "", fmt.Errorf("key %s not found in IPFS config", key)
}
// configureIPFSRepo directly modifies IPFS config JSON to set addresses, bootstrap, and CORS headers
// This avoids shell commands which fail on some systems and instead manipulates the config directly
// Returns the peer ID from the config
func configureIPFSRepo(repoPath string, apiPort, gatewayPort, swarmPort int) (string, error) {
configPath := filepath.Join(repoPath, "config")
// Read existing config
data, err := os.ReadFile(configPath)
if err != nil {
return "", fmt.Errorf("failed to read IPFS config: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return "", fmt.Errorf("failed to parse IPFS config: %w", err)
}
// Set Addresses
config["Addresses"] = map[string]interface{}{
"API": []string{fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", apiPort)},
"Gateway": []string{fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", gatewayPort)},
"Swarm": []string{
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", swarmPort),
fmt.Sprintf("/ip6/::/tcp/%d", swarmPort),
},
}
// Disable AutoConf for private swarm
config["AutoConf"] = map[string]interface{}{
"Enabled": false,
}
// Clear Bootstrap (will be set via HTTP API after startup)
config["Bootstrap"] = []string{}
// Clear DNS Resolvers
if dns, ok := config["DNS"].(map[string]interface{}); ok {
dns["Resolvers"] = map[string]interface{}{}
} else {
config["DNS"] = map[string]interface{}{
"Resolvers": map[string]interface{}{},
}
}
// Clear Routing DelegatedRouters
if routing, ok := config["Routing"].(map[string]interface{}); ok {
routing["DelegatedRouters"] = []string{}
} else {
config["Routing"] = map[string]interface{}{
"DelegatedRouters": []string{},
}
}
// Clear IPNS DelegatedPublishers
if ipns, ok := config["Ipns"].(map[string]interface{}); ok {
ipns["DelegatedPublishers"] = []string{}
} else {
config["Ipns"] = map[string]interface{}{
"DelegatedPublishers": []string{},
}
}
// Set API HTTPHeaders with CORS (must be map[string][]string)
if api, ok := config["API"].(map[string]interface{}); ok {
api["HTTPHeaders"] = map[string][]string{
"Access-Control-Allow-Origin": {"*"},
"Access-Control-Allow-Methods": {"GET", "PUT", "POST", "DELETE", "OPTIONS"},
"Access-Control-Allow-Headers": {"Content-Type", "X-Requested-With"},
"Access-Control-Expose-Headers": {"Content-Length", "Content-Range"},
}
} else {
config["API"] = map[string]interface{}{
"HTTPHeaders": map[string][]string{
"Access-Control-Allow-Origin": {"*"},
"Access-Control-Allow-Methods": {"GET", "PUT", "POST", "DELETE", "OPTIONS"},
"Access-Control-Allow-Headers": {"Content-Type", "X-Requested-With"},
"Access-Control-Expose-Headers": {"Content-Length", "Content-Range"},
},
}
}
// Write config back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return "", fmt.Errorf("failed to marshal IPFS config: %w", err)
}
if err := os.WriteFile(configPath, updatedData, 0644); err != nil {
return "", fmt.Errorf("failed to write IPFS config: %w", err)
}
// Extract and return peer ID
if id, ok := config["Identity"].(map[string]interface{}); ok {
if peerID, ok := id["PeerID"].(string); ok {
return peerID, nil
}
}
return "", fmt.Errorf("could not extract peer ID from config")
}
// seedIPFSPeersWithHTTP configures each IPFS node to bootstrap with its local peers using HTTP API
func (pm *ProcessManager) seedIPFSPeersWithHTTP(ctx context.Context, nodes []ipfsNodeInfo) error {
fmt.Fprintf(pm.logWriter, " Seeding IPFS local bootstrap peers via HTTP API...\n")
// Wait for all IPFS daemons to be ready before trying to configure them
for _, node := range nodes {
if err := pm.waitIPFSReady(ctx, node); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to wait for IPFS readiness for %s: %v\n", node.name, err)
}
}
// For each node, clear default bootstrap and add local peers via HTTP
for i, node := range nodes {
// Clear bootstrap peers
httpURL := fmt.Sprintf("http://127.0.0.1:%d/api/v0/bootstrap/rm?all=true", node.apiPort)
if err := pm.ipfsHTTPCall(ctx, httpURL, "POST"); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to clear bootstrap for %s: %v\n", node.name, err)
}
// Add other nodes as bootstrap peers
for j, otherNode := range nodes {
if i == j {
continue // Skip self
}
multiaddr := fmt.Sprintf("/ip4/127.0.0.1/tcp/%d/p2p/%s", otherNode.swarmPort, otherNode.peerID)
httpURL := fmt.Sprintf("http://127.0.0.1:%d/api/v0/bootstrap/add?arg=%s", node.apiPort, url.QueryEscape(multiaddr))
if err := pm.ipfsHTTPCall(ctx, httpURL, "POST"); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to add bootstrap peer for %s: %v\n", node.name, err)
}
}
}
return nil
}
// waitIPFSReady polls the IPFS daemon's HTTP API until it's ready
func (pm *ProcessManager) waitIPFSReady(ctx context.Context, node ipfsNodeInfo) error {
maxRetries := 30
retryInterval := 500 * time.Millisecond
for attempt := 0; attempt < maxRetries; attempt++ {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/api/v0/version", node.apiPort)
if err := pm.ipfsHTTPCall(ctx, httpURL, "POST"); err == nil {
return nil // IPFS is ready
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return ctx.Err()
}
}
return fmt.Errorf("IPFS daemon %s did not become ready after %d seconds", node.name, (maxRetries * int(retryInterval.Seconds())))
}
// ipfsHTTPCall makes an HTTP call to IPFS API
func (pm *ProcessManager) ipfsHTTPCall(ctx context.Context, urlStr string, method string) error {
client := tlsutil.NewHTTPClient(5 * time.Second)
req, err := http.NewRequestWithContext(ctx, method, urlStr, nil)
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("HTTP call failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode >= 400 {
body, _ := io.ReadAll(resp.Body)
return fmt.Errorf("HTTP %d: %s", resp.StatusCode, string(body))
}
return nil
}
func (pm *ProcessManager) startIPFS(ctx context.Context) error {
topology := DefaultTopology()
nodes := pm.buildIPFSNodes(topology)
// Phase 1: Initialize repos and configure addresses
for i := range nodes {
os.MkdirAll(nodes[i].ipfsPath, 0755)
// Initialize IPFS if needed
if _, err := os.Stat(filepath.Join(nodes[i].ipfsPath, "config")); os.IsNotExist(err) {
fmt.Fprintf(pm.logWriter, " Initializing IPFS (%s)...\n", nodes[i].name)
cmd := exec.CommandContext(ctx, "ipfs", "init", "--profile=server", "--repo-dir="+nodes[i].ipfsPath)
if _, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: ipfs init failed: %v\n", err)
}
// Copy swarm key
swarmKeyPath := filepath.Join(pm.oramaDir, "swarm.key")
if data, err := os.ReadFile(swarmKeyPath); err == nil {
os.WriteFile(filepath.Join(nodes[i].ipfsPath, "swarm.key"), data, 0600)
}
}
// Configure the IPFS config directly (addresses, bootstrap, DNS, routing, CORS headers)
// This replaces shell commands which can fail on some systems
peerID, err := configureIPFSRepo(nodes[i].ipfsPath, nodes[i].apiPort, nodes[i].gatewayPort, nodes[i].swarmPort)
if err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to configure IPFS repo for %s: %v\n", nodes[i].name, err)
} else {
nodes[i].peerID = peerID
fmt.Fprintf(pm.logWriter, " Peer ID for %s: %s\n", nodes[i].name, peerID)
}
}
// Phase 2: Start all IPFS daemons
for i := range nodes {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("ipfs-%s.pid", nodes[i].name))
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("ipfs-%s.log", nodes[i].name))
cmd := exec.CommandContext(ctx, "ipfs", "daemon", "--enable-pubsub-experiment", "--repo-dir="+nodes[i].ipfsPath)
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start ipfs-%s: %w", nodes[i].name, err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
pm.processes[fmt.Sprintf("ipfs-%s", nodes[i].name)] = &ManagedProcess{
Name: fmt.Sprintf("ipfs-%s", nodes[i].name),
PID: cmd.Process.Pid,
StartTime: time.Now(),
LogPath: logPath,
}
fmt.Fprintf(pm.logWriter, "✓ IPFS (%s) started (PID: %d, API: %d, Swarm: %d)\n", nodes[i].name, cmd.Process.Pid, nodes[i].apiPort, nodes[i].swarmPort)
}
time.Sleep(2 * time.Second)
// Phase 3: Seed IPFS peers via HTTP API after all daemons are running
if err := pm.seedIPFSPeersWithHTTP(ctx, nodes); err != nil {
fmt.Fprintf(pm.logWriter, "⚠️ Failed to seed IPFS peers: %v\n", err)
}
return nil
}
func (pm *ProcessManager) startIPFSCluster(ctx context.Context) error {
topology := DefaultTopology()
var nodes []struct {
name string
clusterPath string
restAPIPort int
clusterPort int
ipfsPort int
}
for _, nodeSpec := range topology.Nodes {
nodes = append(nodes, struct {
name string
clusterPath string
restAPIPort int
clusterPort int
ipfsPort int
}{
nodeSpec.Name,
filepath.Join(pm.oramaDir, nodeSpec.DataDir, "ipfs-cluster"),
nodeSpec.ClusterAPIPort,
nodeSpec.ClusterPort,
nodeSpec.IPFSAPIPort,
})
}
// Wait for all IPFS daemons to be ready before starting cluster services
fmt.Fprintf(pm.logWriter, " Waiting for IPFS daemons to be ready...\n")
ipfsNodes := pm.buildIPFSNodes(topology)
for _, ipfsNode := range ipfsNodes {
if err := pm.waitIPFSReady(ctx, ipfsNode); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS %s did not become ready: %v\n", ipfsNode.name, err)
}
}
// Read cluster secret to ensure all nodes use the same PSK
secretPath := filepath.Join(pm.oramaDir, "cluster-secret")
clusterSecret, err := os.ReadFile(secretPath)
if err != nil {
return fmt.Errorf("failed to read cluster secret: %w", err)
}
clusterSecretHex := strings.TrimSpace(string(clusterSecret))
// Phase 1: Initialize and start bootstrap IPFS Cluster, then read its identity
bootstrapMultiaddr := ""
{
node := nodes[0] // bootstrap
// Always clean stale cluster state to ensure fresh initialization with correct secret
if err := pm.cleanClusterState(node.clusterPath); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to clean cluster state for %s: %v\n", node.name, err)
}
os.MkdirAll(node.clusterPath, 0755)
fmt.Fprintf(pm.logWriter, " Initializing IPFS Cluster (%s)...\n", node.name)
cmd := exec.CommandContext(ctx, "ipfs-cluster-service", "init", "--force")
cmd.Env = append(os.Environ(),
fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath),
fmt.Sprintf("CLUSTER_SECRET=%s", clusterSecretHex),
)
if output, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: ipfs-cluster-service init failed: %v (output: %s)\n", err, string(output))
}
// Ensure correct ports in service.json BEFORE starting daemon
// This is critical: it sets the cluster listen port to clusterPort, not the default
if err := pm.ensureIPFSClusterPorts(node.clusterPath, node.restAPIPort, node.clusterPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to update IPFS Cluster config for %s: %v\n", node.name, err)
}
// Verify the config was written correctly (debug: read it back)
serviceJSONPath := filepath.Join(node.clusterPath, "service.json")
if data, err := os.ReadFile(serviceJSONPath); err == nil {
var verifyConfig map[string]interface{}
if err := json.Unmarshal(data, &verifyConfig); err == nil {
if cluster, ok := verifyConfig["cluster"].(map[string]interface{}); ok {
if listenAddrs, ok := cluster["listen_multiaddress"].([]interface{}); ok {
fmt.Fprintf(pm.logWriter, " Config verified: %s cluster listening on %v\n", node.name, listenAddrs)
}
}
}
}
// Start bootstrap cluster service
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("ipfs-cluster-%s.pid", node.name))
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("ipfs-cluster-%s.log", node.name))
cmd = exec.CommandContext(ctx, "ipfs-cluster-service", "daemon")
cmd.Env = append(os.Environ(), fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath))
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
fmt.Fprintf(pm.logWriter, " ⚠️ Failed to start ipfs-cluster-%s: %v\n", node.name, err)
return err
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ IPFS Cluster (%s) started (PID: %d, API: %d)\n", node.name, cmd.Process.Pid, node.restAPIPort)
// Wait for bootstrap to be ready and read its identity
if err := pm.waitClusterReady(ctx, node.name, node.restAPIPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS Cluster %s did not become ready: %v\n", node.name, err)
}
// Add a brief delay to allow identity.json to be written
time.Sleep(500 * time.Millisecond)
// Read bootstrap peer ID for follower nodes to join
peerID, err := pm.waitForClusterPeerID(ctx, filepath.Join(node.clusterPath, "identity.json"))
if err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to read bootstrap peer ID: %v\n", err)
} else {
bootstrapMultiaddr = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d/p2p/%s", node.clusterPort, peerID)
fmt.Fprintf(pm.logWriter, " Bootstrap multiaddress: %s\n", bootstrapMultiaddr)
}
}
// Phase 2: Initialize and start follower IPFS Cluster nodes with bootstrap flag
for i := 1; i < len(nodes); i++ {
node := nodes[i]
// Always clean stale cluster state to ensure fresh initialization with correct secret
if err := pm.cleanClusterState(node.clusterPath); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to clean cluster state for %s: %v\n", node.name, err)
}
os.MkdirAll(node.clusterPath, 0755)
fmt.Fprintf(pm.logWriter, " Initializing IPFS Cluster (%s)...\n", node.name)
cmd := exec.CommandContext(ctx, "ipfs-cluster-service", "init", "--force")
cmd.Env = append(os.Environ(),
fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath),
fmt.Sprintf("CLUSTER_SECRET=%s", clusterSecretHex),
)
if output, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: ipfs-cluster-service init failed for %s: %v (output: %s)\n", node.name, err, string(output))
}
// Ensure correct ports in service.json BEFORE starting daemon
if err := pm.ensureIPFSClusterPorts(node.clusterPath, node.restAPIPort, node.clusterPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: failed to update IPFS Cluster config for %s: %v\n", node.name, err)
}
// Verify the config was written correctly (debug: read it back)
serviceJSONPath := filepath.Join(node.clusterPath, "service.json")
if data, err := os.ReadFile(serviceJSONPath); err == nil {
var verifyConfig map[string]interface{}
if err := json.Unmarshal(data, &verifyConfig); err == nil {
if cluster, ok := verifyConfig["cluster"].(map[string]interface{}); ok {
if listenAddrs, ok := cluster["listen_multiaddress"].([]interface{}); ok {
fmt.Fprintf(pm.logWriter, " Config verified: %s cluster listening on %v\n", node.name, listenAddrs)
}
}
}
}
// Start follower cluster service with bootstrap flag
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("ipfs-cluster-%s.pid", node.name))
logPath := filepath.Join(pm.oramaDir, "logs", fmt.Sprintf("ipfs-cluster-%s.log", node.name))
args := []string{"daemon"}
if bootstrapMultiaddr != "" {
args = append(args, "--bootstrap", bootstrapMultiaddr)
}
cmd = exec.CommandContext(ctx, "ipfs-cluster-service", args...)
cmd.Env = append(os.Environ(), fmt.Sprintf("IPFS_CLUSTER_PATH=%s", node.clusterPath))
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
fmt.Fprintf(pm.logWriter, " ⚠️ Failed to start ipfs-cluster-%s: %v\n", node.name, err)
continue
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ IPFS Cluster (%s) started (PID: %d, API: %d)\n", node.name, cmd.Process.Pid, node.restAPIPort)
// Wait for follower node to connect to the bootstrap peer
if err := pm.waitClusterReady(ctx, node.name, node.restAPIPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS Cluster %s did not become ready: %v\n", node.name, err)
}
}
// Phase 3: Wait for all cluster peers to discover each other
fmt.Fprintf(pm.logWriter, " Waiting for IPFS Cluster peers to form...\n")
if err := pm.waitClusterFormed(ctx, nodes[0].restAPIPort); err != nil {
fmt.Fprintf(pm.logWriter, " Warning: IPFS Cluster did not form fully: %v\n", err)
}
time.Sleep(1 * time.Second)
return nil
}
// waitForClusterPeerID polls the identity.json file until it appears and extracts the peer ID
func (pm *ProcessManager) waitForClusterPeerID(ctx context.Context, identityPath string) (string, error) {
maxRetries := 30
retryInterval := 500 * time.Millisecond
for attempt := 0; attempt < maxRetries; attempt++ {
data, err := os.ReadFile(identityPath)
if err == nil {
var identity map[string]interface{}
if err := json.Unmarshal(data, &identity); err == nil {
if id, ok := identity["id"].(string); ok {
return id, nil
}
}
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return "", ctx.Err()
}
}
return "", fmt.Errorf("could not read cluster peer ID after %d seconds", (maxRetries * int(retryInterval.Milliseconds()) / 1000))
}
// waitClusterReady polls the cluster REST API until it's ready
func (pm *ProcessManager) waitClusterReady(ctx context.Context, name string, restAPIPort int) error {
maxRetries := 30
retryInterval := 500 * time.Millisecond
for attempt := 0; attempt < maxRetries; attempt++ {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/peers", restAPIPort)
resp, err := http.Get(httpURL)
if err == nil && resp.StatusCode == 200 {
resp.Body.Close()
return nil
}
if resp != nil {
resp.Body.Close()
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return ctx.Err()
}
}
return fmt.Errorf("IPFS Cluster %s did not become ready after %d seconds", name, maxRetries*int(retryInterval.Milliseconds())/1000)
}
// waitClusterFormed waits for all cluster peers to be visible from the bootstrap node
func (pm *ProcessManager) waitClusterFormed(ctx context.Context, bootstrapRestAPIPort int) error {
maxRetries := 30
retryInterval := 1 * time.Second
requiredPeers := 3 // bootstrap, node2, node3
for attempt := 0; attempt < maxRetries; attempt++ {
httpURL := fmt.Sprintf("http://127.0.0.1:%d/peers", bootstrapRestAPIPort)
resp, err := http.Get(httpURL)
if err == nil && resp.StatusCode == 200 {
// The /peers endpoint returns NDJSON (newline-delimited JSON), not a JSON array
// We need to stream-read each peer object
dec := json.NewDecoder(resp.Body)
peerCount := 0
for {
var peer interface{}
err := dec.Decode(&peer)
if err != nil {
if err == io.EOF {
break
}
break // Stop on parse error
}
peerCount++
}
resp.Body.Close()
if peerCount >= requiredPeers {
return nil // All peers have formed
}
} else if resp != nil {
resp.Body.Close() // non-200 response: close here; the 200 path already closed it
}
select {
case <-time.After(retryInterval):
continue
case <-ctx.Done():
return ctx.Err()
}
}
return fmt.Errorf("IPFS Cluster did not form fully after %d seconds", (maxRetries * int(retryInterval.Seconds())))
}
// cleanClusterState removes stale cluster state files to ensure fresh initialization
// This prevents PSK (private network key) mismatches when cluster secret changes
func (pm *ProcessManager) cleanClusterState(clusterPath string) error {
// Remove pebble datastore (contains persisted PSK state)
pebblePath := filepath.Join(clusterPath, "pebble")
if err := os.RemoveAll(pebblePath); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to remove pebble directory: %w", err)
}
// Remove peerstore (contains peer addresses and metadata)
peerstorePath := filepath.Join(clusterPath, "peerstore")
if err := os.Remove(peerstorePath); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to remove peerstore: %w", err)
}
// Remove service.json (will be regenerated with correct ports and secret)
serviceJSONPath := filepath.Join(clusterPath, "service.json")
if err := os.Remove(serviceJSONPath); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to remove service.json: %w", err)
}
// Remove cluster.lock if it exists (from previous run)
lockPath := filepath.Join(clusterPath, "cluster.lock")
if err := os.Remove(lockPath); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to remove cluster.lock: %w", err)
}
// Note: We keep identity.json as it's tied to the node's peer ID
// The secret will be updated via CLUSTER_SECRET env var during init
return nil
}
// ensureIPFSClusterPorts updates service.json with correct per-node ports and IPFS connector settings
func (pm *ProcessManager) ensureIPFSClusterPorts(clusterPath string, restAPIPort int, clusterPort int) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
// Read existing config
data, err := os.ReadFile(serviceJSONPath)
if err != nil {
return fmt.Errorf("failed to read service.json: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to unmarshal service.json: %w", err)
}
// Calculate unique ports for this node based on restAPIPort offset
// bootstrap=9094 -> proxy=9095, pinsvc=9097, cluster=9096
// node2=9104 -> proxy=9105, pinsvc=9107, cluster=9106
// node3=9114 -> proxy=9115, pinsvc=9117, cluster=9116
portOffset := restAPIPort - 9094
proxyPort := 9095 + portOffset
pinsvcPort := 9097 + portOffset
// Infer IPFS port from REST API port
// 9094 -> 4501 (bootstrap), 9104 -> 4502 (node2), 9114 -> 4503 (node3)
ipfsPort := 4501 + (portOffset / 10)
// Update API settings
if api, ok := config["api"].(map[string]interface{}); ok {
// Update REST API listen address
if restapi, ok := api["restapi"].(map[string]interface{}); ok {
restapi["http_listen_multiaddress"] = fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", restAPIPort)
}
// Update IPFS Proxy settings
if proxy, ok := api["ipfsproxy"].(map[string]interface{}); ok {
proxy["listen_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", proxyPort)
proxy["node_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", ipfsPort)
}
// Update Pinning Service API port
if pinsvc, ok := api["pinsvcapi"].(map[string]interface{}); ok {
pinsvc["http_listen_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", pinsvcPort)
}
}
// Update cluster listen multiaddress to match the correct port
// Replace all old listen addresses with new ones for the correct port
if cluster, ok := config["cluster"].(map[string]interface{}); ok {
listenAddrs := []string{
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", clusterPort),
fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", clusterPort),
}
cluster["listen_multiaddress"] = listenAddrs
}
// Update IPFS connector settings to point to correct IPFS API port
if connector, ok := config["ipfs_connector"].(map[string]interface{}); ok {
if ipfshttp, ok := connector["ipfshttp"].(map[string]interface{}); ok {
ipfshttp["node_multiaddress"] = fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", ipfsPort)
}
}
// Write updated config
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal updated config: %w", err)
}
if err := os.WriteFile(serviceJSONPath, updatedData, 0644); err != nil {
return fmt.Errorf("failed to write service.json: %w", err)
}
return nil
}
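The port arithmetic in `ensureIPFSClusterPorts` can be isolated into a tiny pure function, which makes the offsets in the comments easy to check. A sketch under the same assumptions as the comments above (bootstrap REST API on 9094, nodes spaced 10 ports apart, IPFS APIs at 4501+):

```go
package main

import "fmt"

// clusterPorts derives the per-node proxy, pinning-service, and IPFS API
// ports from a REST API port, matching the offsets documented in
// ensureIPFSClusterPorts (bootstrap=9094, node2=9104, node3=9114).
func clusterPorts(restAPIPort int) (proxy, pinsvc, ipfs int) {
	offset := restAPIPort - 9094
	return 9095 + offset, 9097 + offset, 4501 + offset/10
}

func main() {
	for _, rest := range []int{9094, 9104, 9114} {
		p, s, i := clusterPorts(rest)
		fmt.Println(rest, p, s, i)
	}
	// 9094 9095 9097 4501
	// 9104 9105 9107 4502
	// 9114 9115 9117 4503
}
```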
func (pm *ProcessManager) startOlric(ctx context.Context) error {
pidPath := filepath.Join(pm.pidsDir, "olric.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "olric.log")
configPath := filepath.Join(pm.oramaDir, "olric-config.yaml")
cmd := exec.CommandContext(ctx, "olric-server")
cmd.Env = append(os.Environ(), fmt.Sprintf("OLRIC_SERVER_CONFIG=%s", configPath))
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start olric: %w", err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Olric started (PID: %d)\n", cmd.Process.Pid)
time.Sleep(1 * time.Second)
return nil
}
func (pm *ProcessManager) startAnon(ctx context.Context) error {
if runtime.GOOS != "darwin" {
return nil // Skip on non-macOS for now
}
pidPath := filepath.Join(pm.pidsDir, "anon.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "anon.log")
cmd := exec.CommandContext(ctx, "npx", "anyone-client")
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
fmt.Fprintf(pm.logWriter, " ⚠️ Failed to start Anon: %v\n", err)
return nil
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Anon proxy started (PID: %d, SOCKS: 9050)\n", cmd.Process.Pid)
return nil
}
func (pm *ProcessManager) startNode(name, configFile, logPath string) error {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("%s.pid", name))
cmd := exec.Command("./bin/orama-node", "--config", configFile)
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start %s: %w", name, err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ %s started (PID: %d)\n", strings.Title(name), cmd.Process.Pid)
time.Sleep(1 * time.Second)
return nil
}
func (pm *ProcessManager) startGateway(ctx context.Context) error {
pidPath := filepath.Join(pm.pidsDir, "gateway.pid")
logPath := filepath.Join(pm.oramaDir, "logs", "gateway.log")
cmd := exec.Command("./bin/gateway", "--config", "gateway.yaml")
logFile, _ := os.Create(logPath)
cmd.Stdout = logFile
cmd.Stderr = logFile
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start gateway: %w", err)
}
os.WriteFile(pidPath, []byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0644)
fmt.Fprintf(pm.logWriter, "✓ Gateway started (PID: %d, listen: 6001)\n", cmd.Process.Pid)
return nil
}
// stopProcess terminates a managed process and its children
func (pm *ProcessManager) stopProcess(name string) error {
pidPath := filepath.Join(pm.pidsDir, fmt.Sprintf("%s.pid", name))
pidBytes, err := os.ReadFile(pidPath)
if err != nil {
return nil // Process not running or PID not found
}
pid, err := strconv.Atoi(strings.TrimSpace(string(pidBytes)))
if err != nil {
os.Remove(pidPath)
return nil
}
// Check if process exists before trying to kill
if !checkProcessRunning(pid) {
os.Remove(pidPath)
fmt.Fprintf(pm.logWriter, "✓ %s (not running)\n", name)
return nil
}
proc, err := os.FindProcess(pid)
if err != nil {
os.Remove(pidPath)
return nil
}
// Try graceful shutdown first (SIGINT)
proc.Signal(os.Interrupt)
// Wait up to 2 seconds for graceful shutdown
gracefulShutdown := false
for i := 0; i < 20; i++ {
time.Sleep(100 * time.Millisecond)
if !checkProcessRunning(pid) {
gracefulShutdown = true
break
}
}
// Force kill if still running after graceful attempt
if !gracefulShutdown && checkProcessRunning(pid) {
proc.Signal(os.Kill)
time.Sleep(200 * time.Millisecond)
// Kill any child processes (platform-specific)
if runtime.GOOS != "windows" {
exec.Command("pkill", "-9", "-P", fmt.Sprintf("%d", pid)).Run()
}
// Final force kill attempt if somehow still alive
if checkProcessRunning(pid) {
exec.Command("kill", "-9", fmt.Sprintf("%d", pid)).Run()
time.Sleep(100 * time.Millisecond)
}
}
os.Remove(pidPath)
if gracefulShutdown {
fmt.Fprintf(pm.logWriter, "✓ %s stopped gracefully\n", name)
} else {
fmt.Fprintf(pm.logWriter, "✓ %s stopped (forced)\n", name)
}
return nil
}
// checkProcessRunning checks if a process with given PID is running
func checkProcessRunning(pid int) bool {
proc, err := os.FindProcess(pid)
if err != nil {
return false
}
// Send signal 0 to probe for existence; no signal is actually delivered.
// os.Signal(nil) would always return an error, so use syscall.Signal(0).
err = proc.Signal(syscall.Signal(0))
return err == nil
}


@@ -27,6 +27,7 @@ type Topology struct {
OlricHTTPPort int
OlricMemberPort int
AnonSOCKSPort int
MCPPort int
}
// DefaultTopology returns the default five-node dev environment topology
@@ -118,6 +119,7 @@ func DefaultTopology() *Topology {
OlricHTTPPort: 3320,
OlricMemberPort: 3322,
AnonSOCKSPort: 9050,
MCPPort: 5825,
}
}


@@ -1,19 +1,24 @@
package production
import (
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/DeBrosOfficial/network/pkg/environments/production/installers"
)
// BinaryInstaller handles downloading and installing external binaries
// This is a backward-compatible wrapper around the new installers package
type BinaryInstaller struct {
arch string
logWriter io.Writer
// Embedded installers
rqlite *installers.RQLiteInstaller
ipfs *installers.IPFSInstaller
ipfsCluster *installers.IPFSClusterInstaller
olric *installers.OlricInstaller
gateway *installers.GatewayInstaller
}
// NewBinaryInstaller creates a new binary installer
@@ -21,617 +26,64 @@ func NewBinaryInstaller(arch string, logWriter io.Writer) *BinaryInstaller {
return &BinaryInstaller{
arch: arch,
logWriter: logWriter,
rqlite: installers.NewRQLiteInstaller(arch, logWriter),
ipfs: installers.NewIPFSInstaller(arch, logWriter),
ipfsCluster: installers.NewIPFSClusterInstaller(arch, logWriter),
olric: installers.NewOlricInstaller(arch, logWriter),
gateway: installers.NewGatewayInstaller(arch, logWriter),
}
}
// InstallRQLite downloads and installs RQLite
func (bi *BinaryInstaller) InstallRQLite() error {
if _, err := exec.LookPath("rqlited"); err == nil {
fmt.Fprintf(bi.logWriter, " ✓ RQLite already installed\n")
return nil
}
fmt.Fprintf(bi.logWriter, " Installing RQLite...\n")
version := "8.43.0"
tarball := fmt.Sprintf("rqlite-v%s-linux-%s.tar.gz", version, bi.arch)
url := fmt.Sprintf("https://github.com/rqlite/rqlite/releases/download/v%s/%s", version, tarball)
// Download
cmd := exec.Command("wget", "-q", url, "-O", "/tmp/"+tarball)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to download RQLite: %w", err)
}
// Extract
cmd = exec.Command("tar", "-C", "/tmp", "-xzf", "/tmp/"+tarball)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to extract RQLite: %w", err)
}
// Copy binaries
dir := fmt.Sprintf("/tmp/rqlite-v%s-linux-%s", version, bi.arch)
if err := exec.Command("cp", dir+"/rqlited", "/usr/local/bin/").Run(); err != nil {
return fmt.Errorf("failed to copy rqlited binary: %w", err)
}
if err := exec.Command("chmod", "+x", "/usr/local/bin/rqlited").Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chmod rqlited: %v\n", err)
}
// Ensure PATH includes /usr/local/bin
os.Setenv("PATH", os.Getenv("PATH")+":/usr/local/bin")
fmt.Fprintf(bi.logWriter, " ✓ RQLite installed\n")
return nil
return bi.rqlite.Install()
}
// InstallIPFS downloads and installs IPFS (Kubo)
// Follows official steps from https://docs.ipfs.tech/install/command-line/
func (bi *BinaryInstaller) InstallIPFS() error {
if _, err := exec.LookPath("ipfs"); err == nil {
fmt.Fprintf(bi.logWriter, " ✓ IPFS already installed\n")
return nil
}
fmt.Fprintf(bi.logWriter, " Installing IPFS (Kubo)...\n")
// Follow official installation steps in order
kuboVersion := "v0.38.2"
tarball := fmt.Sprintf("kubo_%s_linux-%s.tar.gz", kuboVersion, bi.arch)
url := fmt.Sprintf("https://dist.ipfs.tech/kubo/%s/%s", kuboVersion, tarball)
tmpDir := "/tmp"
tarPath := filepath.Join(tmpDir, tarball)
kuboDir := filepath.Join(tmpDir, "kubo")
// Step 1: Download the Linux binary from dist.ipfs.tech
fmt.Fprintf(bi.logWriter, " Step 1: Downloading Kubo v%s...\n", kuboVersion)
cmd := exec.Command("wget", "-q", url, "-O", tarPath)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to download kubo from %s: %w", url, err)
}
// Verify tarball exists
if _, err := os.Stat(tarPath); err != nil {
return fmt.Errorf("kubo tarball not found after download at %s: %w", tarPath, err)
}
// Step 2: Unzip the file
fmt.Fprintf(bi.logWriter, " Step 2: Extracting Kubo archive...\n")
cmd = exec.Command("tar", "-xzf", tarPath, "-C", tmpDir)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to extract kubo tarball: %w", err)
}
// Verify extraction
if _, err := os.Stat(kuboDir); err != nil {
return fmt.Errorf("kubo directory not found after extraction at %s: %w", kuboDir, err)
}
// Step 3: Move into the kubo folder (cd kubo)
fmt.Fprintf(bi.logWriter, " Step 3: Running installation script...\n")
// Step 4: Run the installation script (sudo bash install.sh)
installScript := filepath.Join(kuboDir, "install.sh")
if _, err := os.Stat(installScript); err != nil {
return fmt.Errorf("install.sh not found in extracted kubo directory at %s: %w", installScript, err)
}
cmd = exec.Command("bash", installScript)
cmd.Dir = kuboDir
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to run install.sh: %v\n%s", err, string(output))
}
// Step 5: Test that Kubo has installed correctly
fmt.Fprintf(bi.logWriter, " Step 5: Verifying installation...\n")
cmd = exec.Command("ipfs", "--version")
output, err := cmd.CombinedOutput()
if err != nil {
// ipfs might not be in PATH yet in this process, check file directly
ipfsLocations := []string{"/usr/local/bin/ipfs", "/usr/bin/ipfs"}
found := false
for _, loc := range ipfsLocations {
if info, err := os.Stat(loc); err == nil && !info.IsDir() {
found = true
// Ensure it's executable
if info.Mode()&0111 == 0 {
os.Chmod(loc, 0755)
}
break
}
}
if !found {
return fmt.Errorf("ipfs binary not found after installation in %v", ipfsLocations)
}
} else {
fmt.Fprintf(bi.logWriter, " %s", string(output))
}
// Ensure PATH is updated for current process
os.Setenv("PATH", os.Getenv("PATH")+":/usr/local/bin")
fmt.Fprintf(bi.logWriter, " ✓ IPFS installed successfully\n")
return nil
return bi.ipfs.Install()
}
// InstallIPFSCluster downloads and installs IPFS Cluster Service
func (bi *BinaryInstaller) InstallIPFSCluster() error {
if _, err := exec.LookPath("ipfs-cluster-service"); err == nil {
fmt.Fprintf(bi.logWriter, " ✓ IPFS Cluster already installed\n")
return nil
}
fmt.Fprintf(bi.logWriter, " Installing IPFS Cluster Service...\n")
// Check if Go is available
if _, err := exec.LookPath("go"); err != nil {
return fmt.Errorf("go not found - required to install IPFS Cluster. Please install Go first")
}
cmd := exec.Command("go", "install", "github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-service@latest")
cmd.Env = append(os.Environ(), "GOBIN=/usr/local/bin")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to install IPFS Cluster: %w", err)
}
fmt.Fprintf(bi.logWriter, " ✓ IPFS Cluster installed\n")
return nil
return bi.ipfsCluster.Install()
}
// InstallOlric downloads and installs Olric server
func (bi *BinaryInstaller) InstallOlric() error {
if _, err := exec.LookPath("olric-server"); err == nil {
fmt.Fprintf(bi.logWriter, " ✓ Olric already installed\n")
return nil
}
fmt.Fprintf(bi.logWriter, " Installing Olric...\n")
// Check if Go is available
if _, err := exec.LookPath("go"); err != nil {
return fmt.Errorf("go not found - required to install Olric. Please install Go first")
}
cmd := exec.Command("go", "install", "github.com/olric-data/olric/cmd/olric-server@v0.7.0")
cmd.Env = append(os.Environ(), "GOBIN=/usr/local/bin")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to install Olric: %w", err)
}
fmt.Fprintf(bi.logWriter, " ✓ Olric installed\n")
return nil
return bi.olric.Install()
}
// InstallGo downloads and installs Go toolchain
func (bi *BinaryInstaller) InstallGo() error {
if _, err := exec.LookPath("go"); err == nil {
fmt.Fprintf(bi.logWriter, " ✓ Go already installed\n")
return nil
}
fmt.Fprintf(bi.logWriter, " Installing Go...\n")
goTarball := fmt.Sprintf("go1.22.5.linux-%s.tar.gz", bi.arch)
goURL := fmt.Sprintf("https://go.dev/dl/%s", goTarball)
// Download
cmd := exec.Command("wget", "-q", goURL, "-O", "/tmp/"+goTarball)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to download Go: %w", err)
}
// Extract
cmd = exec.Command("tar", "-C", "/usr/local", "-xzf", "/tmp/"+goTarball)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to extract Go: %w", err)
}
// Add to PATH
newPath := os.Getenv("PATH") + ":/usr/local/go/bin"
os.Setenv("PATH", newPath)
// Verify installation
if _, err := exec.LookPath("go"); err != nil {
return fmt.Errorf("go installed but not found in PATH after installation")
}
fmt.Fprintf(bi.logWriter, " ✓ Go installed\n")
return nil
return bi.gateway.InstallGo()
}
// ResolveBinaryPath finds the fully-qualified path to a required executable
func (bi *BinaryInstaller) ResolveBinaryPath(binary string, extraPaths ...string) (string, error) {
// First try to find in PATH
if path, err := exec.LookPath(binary); err == nil {
if abs, err := filepath.Abs(path); err == nil {
return abs, nil
}
return path, nil
}
// Then try extra candidate paths
for _, candidate := range extraPaths {
if candidate == "" {
continue
}
if info, err := os.Stat(candidate); err == nil && !info.IsDir() && info.Mode()&0111 != 0 {
if abs, err := filepath.Abs(candidate); err == nil {
return abs, nil
}
return candidate, nil
}
}
// Not found - generate error message
checked := make([]string, 0, len(extraPaths))
for _, candidate := range extraPaths {
if candidate != "" {
checked = append(checked, candidate)
}
}
if len(checked) == 0 {
return "", fmt.Errorf("required binary %q not found in path", binary)
}
return "", fmt.Errorf("required binary %q not found in path (also checked %s)", binary, strings.Join(checked, ", "))
return installers.ResolveBinaryPath(binary, extraPaths...)
}
// InstallDeBrosBinaries clones and builds DeBros binaries
func (bi *BinaryInstaller) InstallDeBrosBinaries(branch string, oramaHome string, skipRepoUpdate bool) error {
fmt.Fprintf(bi.logWriter, " Building DeBros binaries...\n")
srcDir := filepath.Join(oramaHome, "src")
binDir := filepath.Join(oramaHome, "bin")
// Ensure directories exist
if err := os.MkdirAll(srcDir, 0755); err != nil {
return fmt.Errorf("failed to create source directory %s: %w", srcDir, err)
}
if err := os.MkdirAll(binDir, 0755); err != nil {
return fmt.Errorf("failed to create bin directory %s: %w", binDir, err)
}
// Check if source directory has content (either git repo or pre-existing source)
hasSourceContent := false
if entries, err := os.ReadDir(srcDir); err == nil && len(entries) > 0 {
hasSourceContent = true
}
// Check if git repository is already initialized
isGitRepo := false
if _, err := os.Stat(filepath.Join(srcDir, ".git")); err == nil {
isGitRepo = true
}
// Handle repository update/clone based on skipRepoUpdate flag
if skipRepoUpdate {
fmt.Fprintf(bi.logWriter, " Skipping repo clone/pull (--no-pull flag)\n")
if !hasSourceContent {
return fmt.Errorf("cannot skip pull: source directory is empty at %s (need to populate it first)", srcDir)
}
fmt.Fprintf(bi.logWriter, " Using existing source at %s (skipping git operations)\n", srcDir)
// Skip to build step - don't execute any git commands
} else {
// Clone repository if not present, otherwise update it
if !isGitRepo {
fmt.Fprintf(bi.logWriter, " Cloning repository...\n")
cmd := exec.Command("git", "clone", "--branch", branch, "--depth", "1", "https://github.com/DeBrosOfficial/network.git", srcDir)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to clone repository: %w", err)
}
} else {
fmt.Fprintf(bi.logWriter, " Updating repository to latest changes...\n")
if output, err := exec.Command("git", "-C", srcDir, "fetch", "origin", branch).CombinedOutput(); err != nil {
return fmt.Errorf("failed to fetch repository updates: %v\n%s", err, string(output))
}
if output, err := exec.Command("git", "-C", srcDir, "reset", "--hard", "origin/"+branch).CombinedOutput(); err != nil {
return fmt.Errorf("failed to reset repository: %v\n%s", err, string(output))
}
if output, err := exec.Command("git", "-C", srcDir, "clean", "-fd").CombinedOutput(); err != nil {
return fmt.Errorf("failed to clean repository: %v\n%s", err, string(output))
}
}
}
// Build binaries
fmt.Fprintf(bi.logWriter, " Building binaries...\n")
cmd := exec.Command("make", "build")
cmd.Dir = srcDir
cmd.Env = append(os.Environ(), "HOME="+oramaHome, "PATH="+os.Getenv("PATH")+":/usr/local/go/bin")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to build: %v\n%s", err, string(output))
}
// Copy binaries
fmt.Fprintf(bi.logWriter, " Copying binaries...\n")
srcBinDir := filepath.Join(srcDir, "bin")
// Check if source bin directory exists
if _, err := os.Stat(srcBinDir); os.IsNotExist(err) {
return fmt.Errorf("source bin directory does not exist at %s - build may have failed", srcBinDir)
}
// Check if there are any files to copy
entries, err := os.ReadDir(srcBinDir)
if err != nil {
return fmt.Errorf("failed to read source bin directory: %w", err)
}
if len(entries) == 0 {
return fmt.Errorf("source bin directory is empty - build may have failed")
}
// Copy each binary individually to avoid wildcard expansion issues
for _, entry := range entries {
if entry.IsDir() {
continue
}
srcPath := filepath.Join(srcBinDir, entry.Name())
dstPath := filepath.Join(binDir, entry.Name())
// Read source file
data, err := os.ReadFile(srcPath)
if err != nil {
return fmt.Errorf("failed to read binary %s: %w", entry.Name(), err)
}
// Write destination file
if err := os.WriteFile(dstPath, data, 0755); err != nil {
return fmt.Errorf("failed to write binary %s: %w", entry.Name(), err)
}
}
if err := exec.Command("chmod", "-R", "755", binDir).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chmod bin directory: %v\n", err)
}
if err := exec.Command("chown", "-R", "debros:debros", binDir).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown bin directory: %v\n", err)
}
// Grant CAP_NET_BIND_SERVICE to orama-node to allow binding to ports 80/443 without root
nodeBinary := filepath.Join(binDir, "orama-node")
if _, err := os.Stat(nodeBinary); err == nil {
if err := exec.Command("setcap", "cap_net_bind_service=+ep", nodeBinary).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to setcap on orama-node: %v\n", err)
fmt.Fprintf(bi.logWriter, " ⚠️ Gateway may not be able to bind to port 80/443\n")
} else {
fmt.Fprintf(bi.logWriter, " ✓ Set CAP_NET_BIND_SERVICE on orama-node\n")
}
}
fmt.Fprintf(bi.logWriter, " ✓ DeBros binaries installed\n")
return nil
}
// InstallSystemDependencies installs system-level dependencies via apt
func (bi *BinaryInstaller) InstallSystemDependencies() error {
fmt.Fprintf(bi.logWriter, " Installing system dependencies...\n")
// Update package list
cmd := exec.Command("apt-get", "update")
if err := cmd.Run(); err != nil {
fmt.Fprintf(bi.logWriter, " Warning: apt update failed: %v\n", err)
}
// Install dependencies including Node.js for anyone-client
cmd = exec.Command("apt-get", "install", "-y", "curl", "git", "make", "build-essential", "wget", "nodejs", "npm")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to install dependencies: %w", err)
}
fmt.Fprintf(bi.logWriter, " ✓ System dependencies installed\n")
return nil
}
// IPFSPeerInfo holds IPFS peer information for configuring Peering.Peers
type IPFSPeerInfo struct {
PeerID string
Addrs []string
}
// IPFSClusterPeerInfo contains IPFS Cluster peer information for cluster peer discovery
type IPFSClusterPeerInfo struct {
PeerID string // Cluster peer ID (different from IPFS peer ID)
Addrs []string // Cluster multiaddresses (e.g., /ip4/x.x.x.x/tcp/9098)
}
// InitializeIPFSRepo initializes an IPFS repository for a node (unified - no bootstrap/node distinction)
// If ipfsPeer is provided, configures Peering.Peers for peer discovery in private networks
func (bi *BinaryInstaller) InitializeIPFSRepo(ipfsRepoPath string, swarmKeyPath string, apiPort, gatewayPort, swarmPort int, ipfsPeer *IPFSPeerInfo) error {
configPath := filepath.Join(ipfsRepoPath, "config")
repoExists := false
if _, err := os.Stat(configPath); err == nil {
repoExists = true
fmt.Fprintf(bi.logWriter, " IPFS repo already exists, ensuring configuration...\n")
} else {
fmt.Fprintf(bi.logWriter, " Initializing IPFS repo...\n")
}
if err := os.MkdirAll(ipfsRepoPath, 0755); err != nil {
return fmt.Errorf("failed to create IPFS repo directory: %w", err)
}
// Resolve IPFS binary path
ipfsBinary, err := bi.ResolveBinaryPath("ipfs", "/usr/local/bin/ipfs", "/usr/bin/ipfs")
if err != nil {
return err
}
// Initialize IPFS if repo doesn't exist
if !repoExists {
cmd := exec.Command(ipfsBinary, "init", "--profile=server", "--repo-dir="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to initialize IPFS: %v\n%s", err, string(output))
}
}
// Copy swarm key if present
swarmKeyExists := false
if data, err := os.ReadFile(swarmKeyPath); err == nil {
swarmKeyDest := filepath.Join(ipfsRepoPath, "swarm.key")
if err := os.WriteFile(swarmKeyDest, data, 0600); err != nil {
return fmt.Errorf("failed to copy swarm key: %w", err)
}
swarmKeyExists = true
}
// Configure IPFS addresses (API, Gateway, Swarm) by modifying the config file directly
// This ensures the ports are set correctly and avoids conflicts with RQLite on port 5001
fmt.Fprintf(bi.logWriter, " Configuring IPFS addresses (API: %d, Gateway: %d, Swarm: %d)...\n", apiPort, gatewayPort, swarmPort)
if err := bi.configureIPFSAddresses(ipfsRepoPath, apiPort, gatewayPort, swarmPort); err != nil {
return fmt.Errorf("failed to configure IPFS addresses: %w", err)
}
// Always disable AutoConf for private swarm when swarm.key is present
// This is critical - IPFS will fail to start if AutoConf is enabled on a private network
// We do this even for existing repos to fix repos initialized before this fix was applied
if swarmKeyExists {
fmt.Fprintf(bi.logWriter, " Disabling AutoConf for private swarm...\n")
cmd := exec.Command(ipfsBinary, "config", "--json", "AutoConf.Enabled", "false")
cmd.Env = append(os.Environ(), "IPFS_PATH="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to disable AutoConf: %v\n%s", err, string(output))
}
// Clear AutoConf placeholders from config to prevent Kubo startup errors
// When AutoConf is disabled, 'auto' placeholders must be replaced with explicit values or empty
fmt.Fprintf(bi.logWriter, " Clearing AutoConf placeholders from IPFS config...\n")
type configCommand struct {
desc string
args []string
}
// List of config replacements to clear 'auto' placeholders
cleanup := []configCommand{
{"clearing Bootstrap peers", []string{"config", "Bootstrap", "--json", "[]"}},
{"clearing Routing.DelegatedRouters", []string{"config", "Routing.DelegatedRouters", "--json", "[]"}},
{"clearing Ipns.DelegatedPublishers", []string{"config", "Ipns.DelegatedPublishers", "--json", "[]"}},
{"clearing DNS.Resolvers", []string{"config", "DNS.Resolvers", "--json", "{}"}},
}
for _, step := range cleanup {
fmt.Fprintf(bi.logWriter, " %s...\n", step.desc)
cmd := exec.Command(ipfsBinary, step.args...)
cmd.Env = append(os.Environ(), "IPFS_PATH="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed while %s: %v\n%s", step.desc, err, string(output))
}
}
// Configure Peering.Peers if we have peer info (for private network discovery)
if ipfsPeer != nil && ipfsPeer.PeerID != "" && len(ipfsPeer.Addrs) > 0 {
fmt.Fprintf(bi.logWriter, " Configuring Peering.Peers for private network discovery...\n")
if err := bi.configureIPFSPeering(ipfsRepoPath, ipfsPeer); err != nil {
return fmt.Errorf("failed to configure IPFS peering: %w", err)
}
}
}
// Fix ownership (best-effort, don't fail if it doesn't work)
if err := exec.Command("chown", "-R", "debros:debros", ipfsRepoPath).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown IPFS repo: %v\n", err)
}
return nil
}
// configureIPFSAddresses configures the IPFS API, Gateway, and Swarm addresses in the config file
func (bi *BinaryInstaller) configureIPFSAddresses(ipfsRepoPath string, apiPort, gatewayPort, swarmPort int) error {
configPath := filepath.Join(ipfsRepoPath, "config")
// Read existing config
data, err := os.ReadFile(configPath)
if err != nil {
return fmt.Errorf("failed to read IPFS config: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse IPFS config: %w", err)
}
// Get existing Addresses section or create new one
// This preserves any existing settings like Announce, AppendAnnounce, NoAnnounce
addresses, ok := config["Addresses"].(map[string]interface{})
if !ok {
addresses = make(map[string]interface{})
}
// Update specific address fields while preserving others
// Bind API and Gateway to localhost only for security
// Swarm binds to all interfaces for peer connections
addresses["API"] = []string{
fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", apiPort),
}
addresses["Gateway"] = []string{
fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", gatewayPort),
}
addresses["Swarm"] = []string{
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", swarmPort),
fmt.Sprintf("/ip6/::/tcp/%d", swarmPort),
}
config["Addresses"] = addresses
// Write config back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal IPFS config: %w", err)
}
if err := os.WriteFile(configPath, updatedData, 0600); err != nil {
return fmt.Errorf("failed to write IPFS config: %w", err)
}
return nil
}
// configureIPFSPeering configures Peering.Peers in the IPFS config for private network discovery
// This allows nodes in a private swarm to find each other even without bootstrap peers
func (bi *BinaryInstaller) configureIPFSPeering(ipfsRepoPath string, peer *IPFSPeerInfo) error {
configPath := filepath.Join(ipfsRepoPath, "config")
// Read existing config
data, err := os.ReadFile(configPath)
if err != nil {
return fmt.Errorf("failed to read IPFS config: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse IPFS config: %w", err)
}
// Get existing Peering section or create new one
peering, ok := config["Peering"].(map[string]interface{})
if !ok {
peering = make(map[string]interface{})
}
// Create peer entry
peerEntry := map[string]interface{}{
"ID": peer.PeerID,
"Addrs": peer.Addrs,
}
// Set Peering.Peers
peering["Peers"] = []interface{}{peerEntry}
config["Peering"] = peering
fmt.Fprintf(bi.logWriter, " Adding peer: %s (%d addresses)\n", peer.PeerID, len(peer.Addrs))
// Write config back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal IPFS config: %w", err)
}
if err := os.WriteFile(configPath, updatedData, 0600); err != nil {
return fmt.Errorf("failed to write IPFS config: %w", err)
}
return nil
}
// InitializeIPFSClusterConfig initializes IPFS Cluster configuration (unified - no bootstrap/node distinction)
// For existing installations, it ensures the cluster secret is up to date.
// clusterPeers should be in format: ["/ip4/<ip>/tcp/9098/p2p/<cluster-peer-id>"]
func (bi *BinaryInstaller) InitializeIPFSClusterConfig(clusterPath, clusterSecret string, ipfsAPIPort int, clusterPeers []string) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
configExists := false
if _, err := os.Stat(serviceJSONPath); err == nil {
configExists = true
fmt.Fprintf(bi.logWriter, " IPFS Cluster config already exists, ensuring it's up to date...\n")
} else {
fmt.Fprintf(bi.logWriter, " Preparing IPFS Cluster path...\n")
}
if err := os.MkdirAll(clusterPath, 0755); err != nil {
return fmt.Errorf("failed to create IPFS Cluster directory: %w", err)
}
// Fix ownership before running init (best-effort)
if err := exec.Command("chown", "-R", "debros:debros", clusterPath).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown cluster path before init: %v\n", err)
}
// Resolve ipfs-cluster-service binary path
clusterBinary, err := bi.ResolveBinaryPath("ipfs-cluster-service", "/usr/local/bin/ipfs-cluster-service", "/usr/bin/ipfs-cluster-service")
if err != nil {
return fmt.Errorf("ipfs-cluster-service binary not found: %w", err)
}
// Initialize cluster config if it doesn't exist
if !configExists {
// Initialize cluster config with ipfs-cluster-service init
// This creates the service.json file with all required sections
fmt.Fprintf(bi.logWriter, " Initializing IPFS Cluster config...\n")
cmd := exec.Command(clusterBinary, "init", "--force")
cmd.Env = append(os.Environ(), "IPFS_CLUSTER_PATH="+clusterPath)
// Pass CLUSTER_SECRET to init so it writes the correct secret to service.json directly
if clusterSecret != "" {
cmd.Env = append(cmd.Env, "CLUSTER_SECRET="+clusterSecret)
}
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to initialize IPFS Cluster config: %v\n%s", err, string(output))
}
}
// Always update the cluster secret, IPFS port, and peer addresses (for both new and existing configs)
// This ensures existing installations get the secret and port synchronized
// We do this AFTER init to ensure our secret takes precedence
if clusterSecret != "" {
fmt.Fprintf(bi.logWriter, " Updating cluster secret, IPFS port, and peer addresses...\n")
if err := bi.updateClusterConfig(clusterPath, clusterSecret, ipfsAPIPort, clusterPeers); err != nil {
return fmt.Errorf("failed to update cluster config: %w", err)
}
// Verify the secret was written correctly
if err := bi.verifyClusterSecret(clusterPath, clusterSecret); err != nil {
return fmt.Errorf("cluster secret verification failed: %w", err)
}
fmt.Fprintf(bi.logWriter, " ✓ Cluster secret verified\n")
}
// Fix ownership again after updates (best-effort)
if err := exec.Command("chown", "-R", "debros:debros", clusterPath).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown cluster path after updates: %v\n", err)
}
return nil
}
// updateClusterConfig updates the secret, IPFS port, and peer addresses in IPFS Cluster service.json
func (bi *BinaryInstaller) updateClusterConfig(clusterPath, secret string, ipfsAPIPort int, bootstrapClusterPeers []string) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
// Read existing config
data, err := os.ReadFile(serviceJSONPath)
if err != nil {
return fmt.Errorf("failed to read service.json: %w", err)
}
// Parse JSON
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse service.json: %w", err)
}
// Update cluster secret, listen_multiaddress, and peer addresses
if cluster, ok := config["cluster"].(map[string]interface{}); ok {
cluster["secret"] = secret
// Set consistent listen_multiaddress - port 9098 for cluster LibP2P communication
// This MUST match the port used in GetClusterPeerMultiaddr() and peer_addresses
cluster["listen_multiaddress"] = []interface{}{"/ip4/0.0.0.0/tcp/9098"}
// Configure peer addresses for cluster discovery
// This allows nodes to find and connect to each other
if len(bootstrapClusterPeers) > 0 {
cluster["peer_addresses"] = bootstrapClusterPeers
}
} else {
clusterConfig := map[string]interface{}{
"secret": secret,
"listen_multiaddress": []interface{}{"/ip4/0.0.0.0/tcp/9098"},
}
if len(bootstrapClusterPeers) > 0 {
clusterConfig["peer_addresses"] = bootstrapClusterPeers
}
config["cluster"] = clusterConfig
}
// Update IPFS port in IPFS Proxy configuration
ipfsNodeMultiaddr := fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", ipfsAPIPort)
if api, ok := config["api"].(map[string]interface{}); ok {
if ipfsproxy, ok := api["ipfsproxy"].(map[string]interface{}); ok {
ipfsproxy["node_multiaddress"] = ipfsNodeMultiaddr
}
}
// Update IPFS port in IPFS Connector configuration
if ipfsConnector, ok := config["ipfs_connector"].(map[string]interface{}); ok {
if ipfshttp, ok := ipfsConnector["ipfshttp"].(map[string]interface{}); ok {
ipfshttp["node_multiaddress"] = ipfsNodeMultiaddr
}
}
// Write back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal service.json: %w", err)
}
if err := os.WriteFile(serviceJSONPath, updatedData, 0644); err != nil {
return fmt.Errorf("failed to write service.json: %w", err)
}
return nil
}
// verifyClusterSecret verifies that the secret in service.json matches the expected value
func (bi *BinaryInstaller) verifyClusterSecret(clusterPath, expectedSecret string) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
data, err := os.ReadFile(serviceJSONPath)
if err != nil {
return fmt.Errorf("failed to read service.json for verification: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse service.json for verification: %w", err)
}
if cluster, ok := config["cluster"].(map[string]interface{}); ok {
if secret, ok := cluster["secret"].(string); ok {
if secret != expectedSecret {
return fmt.Errorf("secret mismatch: expected %s, got %s", expectedSecret, secret)
}
return nil
}
return fmt.Errorf("secret not found in cluster config")
}
return fmt.Errorf("cluster section not found in service.json")
}
// GetClusterPeerMultiaddr reads the IPFS Cluster peer ID and returns its multiaddress
// Returns format: /ip4/<ip>/tcp/9098/p2p/<cluster-peer-id>
func (bi *BinaryInstaller) GetClusterPeerMultiaddr(clusterPath string, nodeIP string) (string, error) {
identityPath := filepath.Join(clusterPath, "identity.json")
// Read identity file
data, err := os.ReadFile(identityPath)
if err != nil {
return "", fmt.Errorf("failed to read identity.json: %w", err)
}
// Parse JSON
var identity map[string]interface{}
if err := json.Unmarshal(data, &identity); err != nil {
return "", fmt.Errorf("failed to parse identity.json: %w", err)
}
// Get peer ID
peerID, ok := identity["id"].(string)
if !ok || peerID == "" {
return "", fmt.Errorf("peer ID not found in identity.json")
}
// Construct multiaddress: /ip4/<ip>/tcp/9098/p2p/<peer-id>
// Port 9098 is the default cluster listen port
multiaddr := fmt.Sprintf("/ip4/%s/tcp/9098/p2p/%s", nodeIP, peerID)
return multiaddr, nil
}
// InitializeRQLiteDataDir initializes RQLite data directory
func (bi *BinaryInstaller) InitializeRQLiteDataDir(dataDir string) error {
fmt.Fprintf(bi.logWriter, " Initializing RQLite data dir...\n")
if err := os.MkdirAll(dataDir, 0755); err != nil {
return fmt.Errorf("failed to create RQLite data directory: %w", err)
}
if err := exec.Command("chown", "-R", "debros:debros", dataDir).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown RQLite data dir: %v\n", err)
}
return nil
}
// InstallAnyoneClient installs the anyone-client npm package globally
func (bi *BinaryInstaller) InstallAnyoneClient() error {
// Check if anyone-client is already available via npx (more reliable for scoped packages)
// Note: the CLI binary is "anyone-client", not the full scoped package name
if cmd := exec.Command("npx", "anyone-client", "--help"); cmd.Run() == nil {
fmt.Fprintf(bi.logWriter, " ✓ anyone-client already installed\n")
return nil
}
fmt.Fprintf(bi.logWriter, " Installing anyone-client...\n")
// Initialize NPM cache structure to ensure all directories exist
// This prevents "mkdir" errors when NPM tries to create nested cache directories
fmt.Fprintf(bi.logWriter, " Initializing NPM cache...\n")
// Create nested cache directories with proper permissions
debrosHome := "/home/debros"
npmCacheDirs := []string{
filepath.Join(debrosHome, ".npm"),
filepath.Join(debrosHome, ".npm", "_cacache"),
filepath.Join(debrosHome, ".npm", "_cacache", "tmp"),
filepath.Join(debrosHome, ".npm", "_logs"),
}
for _, dir := range npmCacheDirs {
if err := os.MkdirAll(dir, 0700); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Failed to create %s: %v\n", dir, err)
continue
}
// Fix ownership to debros user (sequential to avoid race conditions)
if err := exec.Command("chown", "debros:debros", dir).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown %s: %v\n", dir, err)
}
if err := exec.Command("chmod", "700", dir).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chmod %s: %v\n", dir, err)
}
}
// Recursively fix ownership of entire .npm directory to ensure all nested files are owned by debros
if err := exec.Command("chown", "-R", "debros:debros", filepath.Join(debrosHome, ".npm")).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown .npm directory: %v\n", err)
}
// Run npm cache verify as debros user with proper environment
cacheInitCmd := exec.Command("sudo", "-u", "debros", "npm", "cache", "verify", "--silent")
cacheInitCmd.Env = append(os.Environ(), "HOME="+debrosHome)
if err := cacheInitCmd.Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ NPM cache verify warning: %v (continuing anyway)\n", err)
}
// Install anyone-client globally via npm (using scoped package name)
cmd := exec.Command("npm", "install", "-g", "@anyone-protocol/anyone-client")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to install anyone-client: %w\n%s", err, string(output))
}
// Create terms-agreement file to bypass interactive prompt when running as a service
termsFile := filepath.Join(debrosHome, "terms-agreement")
if err := os.WriteFile(termsFile, []byte("agreed"), 0644); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to create terms-agreement: %v\n", err)
} else {
if err := exec.Command("chown", "debros:debros", termsFile).Run(); err != nil {
fmt.Fprintf(bi.logWriter, " ⚠️ Warning: failed to chown terms-agreement: %v\n", err)
}
}
// Verify installation - try npx with the correct CLI name (anyone-client, not full scoped package name)
verifyCmd := exec.Command("npx", "anyone-client", "--help")
if err := verifyCmd.Run(); err != nil {
// Fallback: check if binary exists in common locations
possiblePaths := []string{
"/usr/local/bin/anyone-client",
"/usr/bin/anyone-client",
}
found := false
for _, path := range possiblePaths {
if info, err := os.Stat(path); err == nil && !info.IsDir() {
found = true
break
}
}
if !found {
// Try npm prefix -g to locate the global bin directory ("npm bin -g" was removed in npm 9)
cmd := exec.Command("npm", "prefix", "-g")
if output, err := cmd.Output(); err == nil {
npmBinDir := filepath.Join(strings.TrimSpace(string(output)), "bin")
candidate := filepath.Join(npmBinDir, "anyone-client")
if info, err := os.Stat(candidate); err == nil && !info.IsDir() {
found = true
}
}
}
if !found {
return fmt.Errorf("anyone-client installation verification failed - package may not provide a binary, but npx should work")
}
}
fmt.Fprintf(bi.logWriter, " ✓ anyone-client installed\n")
return nil
}

// execCommand allows tests to stub out command execution
var execCommand = exec.Command

// SetExecCommand allows mocking exec.Command in tests
func SetExecCommand(cmd func(name string, arg ...string) *exec.Cmd) {
execCommand = cmd
}

// ResetExecCommand resets exec.Command to the default
func ResetExecCommand() {
execCommand = exec.Command
}


@ -0,0 +1,322 @@
package installers
import (
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
)
// GatewayInstaller handles DeBros binary installation (including gateway)
type GatewayInstaller struct {
*BaseInstaller
}
// NewGatewayInstaller creates a new gateway installer
func NewGatewayInstaller(arch string, logWriter io.Writer) *GatewayInstaller {
return &GatewayInstaller{
BaseInstaller: NewBaseInstaller(arch, logWriter),
}
}
// IsInstalled checks if gateway binaries are already installed
func (gi *GatewayInstaller) IsInstalled() bool {
// Binaries are embedded in orama-node; always report not installed so they are rebuilt to the latest version
return false
}
// Install clones and builds DeBros binaries
func (gi *GatewayInstaller) Install() error {
// This is a placeholder - actual installation is handled by InstallDeBrosBinaries
return nil
}
// Configure is a placeholder for gateway configuration
func (gi *GatewayInstaller) Configure() error {
// Configuration is handled by the orchestrator
return nil
}
// InstallDeBrosBinaries clones and builds DeBros binaries
func (gi *GatewayInstaller) InstallDeBrosBinaries(branch string, oramaHome string, skipRepoUpdate bool) error {
fmt.Fprintf(gi.logWriter, " Building DeBros binaries...\n")
srcDir := filepath.Join(oramaHome, "src")
binDir := filepath.Join(oramaHome, "bin")
// Ensure directories exist
if err := os.MkdirAll(srcDir, 0755); err != nil {
return fmt.Errorf("failed to create source directory %s: %w", srcDir, err)
}
if err := os.MkdirAll(binDir, 0755); err != nil {
return fmt.Errorf("failed to create bin directory %s: %w", binDir, err)
}
// Check if source directory has content (either git repo or pre-existing source)
hasSourceContent := false
if entries, err := os.ReadDir(srcDir); err == nil && len(entries) > 0 {
hasSourceContent = true
}
// Check if git repository is already initialized
isGitRepo := false
if _, err := os.Stat(filepath.Join(srcDir, ".git")); err == nil {
isGitRepo = true
}
// Handle repository update/clone based on skipRepoUpdate flag
if skipRepoUpdate {
fmt.Fprintf(gi.logWriter, " Skipping repo clone/pull (--no-pull flag)\n")
if !hasSourceContent {
return fmt.Errorf("cannot skip pull: source directory is empty at %s (need to populate it first)", srcDir)
}
fmt.Fprintf(gi.logWriter, " Using existing source at %s (skipping git operations)\n", srcDir)
// Skip to build step - don't execute any git commands
} else {
// Clone repository if not present, otherwise update it
if !isGitRepo {
fmt.Fprintf(gi.logWriter, " Cloning repository...\n")
cmd := exec.Command("git", "clone", "--branch", branch, "--depth", "1", "https://github.com/DeBrosOfficial/network.git", srcDir)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to clone repository: %w", err)
}
} else {
fmt.Fprintf(gi.logWriter, " Updating repository to latest changes...\n")
if output, err := exec.Command("git", "-C", srcDir, "fetch", "origin", branch).CombinedOutput(); err != nil {
return fmt.Errorf("failed to fetch repository updates: %v\n%s", err, string(output))
}
if output, err := exec.Command("git", "-C", srcDir, "reset", "--hard", "origin/"+branch).CombinedOutput(); err != nil {
return fmt.Errorf("failed to reset repository: %v\n%s", err, string(output))
}
if output, err := exec.Command("git", "-C", srcDir, "clean", "-fd").CombinedOutput(); err != nil {
return fmt.Errorf("failed to clean repository: %v\n%s", err, string(output))
}
}
}
// Build binaries
fmt.Fprintf(gi.logWriter, " Building binaries...\n")
cmd := exec.Command("make", "build")
cmd.Dir = srcDir
cmd.Env = append(os.Environ(), "HOME="+oramaHome, "PATH="+os.Getenv("PATH")+":/usr/local/go/bin")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to build: %v\n%s", err, string(output))
}
// Copy binaries
fmt.Fprintf(gi.logWriter, " Copying binaries...\n")
srcBinDir := filepath.Join(srcDir, "bin")
// Check if source bin directory exists
if _, err := os.Stat(srcBinDir); os.IsNotExist(err) {
return fmt.Errorf("source bin directory does not exist at %s - build may have failed", srcBinDir)
}
// Check if there are any files to copy
entries, err := os.ReadDir(srcBinDir)
if err != nil {
return fmt.Errorf("failed to read source bin directory: %w", err)
}
if len(entries) == 0 {
return fmt.Errorf("source bin directory is empty - build may have failed")
}
// Copy each binary individually to avoid wildcard expansion issues
for _, entry := range entries {
if entry.IsDir() {
continue
}
srcPath := filepath.Join(srcBinDir, entry.Name())
dstPath := filepath.Join(binDir, entry.Name())
// Read source file
data, err := os.ReadFile(srcPath)
if err != nil {
return fmt.Errorf("failed to read binary %s: %w", entry.Name(), err)
}
// Write destination file
if err := os.WriteFile(dstPath, data, 0755); err != nil {
return fmt.Errorf("failed to write binary %s: %w", entry.Name(), err)
}
}
if err := exec.Command("chmod", "-R", "755", binDir).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to chmod bin directory: %v\n", err)
}
if err := exec.Command("chown", "-R", "debros:debros", binDir).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to chown bin directory: %v\n", err)
}
// Grant CAP_NET_BIND_SERVICE to orama-node to allow binding to ports 80/443 without root
nodeBinary := filepath.Join(binDir, "orama-node")
if _, err := os.Stat(nodeBinary); err == nil {
if err := exec.Command("setcap", "cap_net_bind_service=+ep", nodeBinary).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to setcap on orama-node: %v\n", err)
fmt.Fprintf(gi.logWriter, " ⚠️ Gateway may not be able to bind to port 80/443\n")
} else {
fmt.Fprintf(gi.logWriter, " ✓ Set CAP_NET_BIND_SERVICE on orama-node\n")
}
}
fmt.Fprintf(gi.logWriter, " ✓ DeBros binaries installed\n")
return nil
}
// InstallGo downloads and installs Go toolchain
func (gi *GatewayInstaller) InstallGo() error {
if _, err := exec.LookPath("go"); err == nil {
fmt.Fprintf(gi.logWriter, " ✓ Go already installed\n")
return nil
}
fmt.Fprintf(gi.logWriter, " Installing Go...\n")
goTarball := fmt.Sprintf("go1.22.5.linux-%s.tar.gz", gi.arch)
goURL := fmt.Sprintf("https://go.dev/dl/%s", goTarball)
// Download
if err := DownloadFile(goURL, "/tmp/"+goTarball); err != nil {
return fmt.Errorf("failed to download Go: %w", err)
}
// Extract
if err := ExtractTarball("/tmp/"+goTarball, "/usr/local"); err != nil {
return fmt.Errorf("failed to extract Go: %w", err)
}
// Add to PATH
newPath := os.Getenv("PATH") + ":/usr/local/go/bin"
os.Setenv("PATH", newPath)
// Verify installation
if _, err := exec.LookPath("go"); err != nil {
return fmt.Errorf("go installed but not found in PATH after installation")
}
fmt.Fprintf(gi.logWriter, " ✓ Go installed\n")
return nil
}
// InstallSystemDependencies installs system-level dependencies via apt
func (gi *GatewayInstaller) InstallSystemDependencies() error {
fmt.Fprintf(gi.logWriter, " Installing system dependencies...\n")
// Update package list
cmd := exec.Command("apt-get", "update")
if err := cmd.Run(); err != nil {
fmt.Fprintf(gi.logWriter, " Warning: apt update failed: %v\n", err)
}
// Install dependencies including Node.js for anyone-client
cmd = exec.Command("apt-get", "install", "-y", "curl", "git", "make", "build-essential", "wget", "nodejs", "npm")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to install dependencies: %w", err)
}
fmt.Fprintf(gi.logWriter, " ✓ System dependencies installed\n")
return nil
}
// InstallAnyoneClient installs the anyone-client npm package globally
func (gi *GatewayInstaller) InstallAnyoneClient() error {
// Check if anyone-client is already available via npx (more reliable for scoped packages)
// Note: the CLI binary is "anyone-client", not the full scoped package name
if cmd := exec.Command("npx", "anyone-client", "--help"); cmd.Run() == nil {
fmt.Fprintf(gi.logWriter, " ✓ anyone-client already installed\n")
return nil
}
fmt.Fprintf(gi.logWriter, " Installing anyone-client...\n")
// Initialize NPM cache structure to ensure all directories exist
// This prevents "mkdir" errors when NPM tries to create nested cache directories
fmt.Fprintf(gi.logWriter, " Initializing NPM cache...\n")
// Create nested cache directories with proper permissions
debrosHome := "/home/debros"
npmCacheDirs := []string{
filepath.Join(debrosHome, ".npm"),
filepath.Join(debrosHome, ".npm", "_cacache"),
filepath.Join(debrosHome, ".npm", "_cacache", "tmp"),
filepath.Join(debrosHome, ".npm", "_logs"),
}
for _, dir := range npmCacheDirs {
if err := os.MkdirAll(dir, 0700); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Failed to create %s: %v\n", dir, err)
continue
}
// Fix ownership to debros user (sequential to avoid race conditions)
if err := exec.Command("chown", "debros:debros", dir).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to chown %s: %v\n", dir, err)
}
if err := exec.Command("chmod", "700", dir).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to chmod %s: %v\n", dir, err)
}
}
// Recursively fix ownership of entire .npm directory to ensure all nested files are owned by debros
if err := exec.Command("chown", "-R", "debros:debros", filepath.Join(debrosHome, ".npm")).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to chown .npm directory: %v\n", err)
}
// Run npm cache verify as debros user with proper environment
cacheInitCmd := exec.Command("sudo", "-u", "debros", "npm", "cache", "verify", "--silent")
cacheInitCmd.Env = append(os.Environ(), "HOME="+debrosHome)
if err := cacheInitCmd.Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ NPM cache verify warning: %v (continuing anyway)\n", err)
}
// Install anyone-client globally via npm (using scoped package name)
cmd := exec.Command("npm", "install", "-g", "@anyone-protocol/anyone-client")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to install anyone-client: %w\n%s", err, string(output))
}
// Create terms-agreement file to bypass interactive prompt when running as a service
termsFile := filepath.Join(debrosHome, "terms-agreement")
if err := os.WriteFile(termsFile, []byte("agreed"), 0644); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to create terms-agreement: %v\n", err)
} else {
if err := exec.Command("chown", "debros:debros", termsFile).Run(); err != nil {
fmt.Fprintf(gi.logWriter, " ⚠️ Warning: failed to chown terms-agreement: %v\n", err)
}
}
// Verify installation - try npx with the correct CLI name (anyone-client, not full scoped package name)
verifyCmd := exec.Command("npx", "anyone-client", "--help")
if err := verifyCmd.Run(); err != nil {
// Fallback: check if binary exists in common locations
possiblePaths := []string{
"/usr/local/bin/anyone-client",
"/usr/bin/anyone-client",
}
found := false
for _, path := range possiblePaths {
if info, err := os.Stat(path); err == nil && !info.IsDir() {
found = true
break
}
}
if !found {
// Try the npm global prefix to find the global bin directory
// ("npm bin -g" was removed in npm 9; "npm prefix -g" works on both old and new npm)
cmd := exec.Command("npm", "prefix", "-g")
if output, err := cmd.Output(); err == nil {
npmBinDir := filepath.Join(strings.TrimSpace(string(output)), "bin")
candidate := filepath.Join(npmBinDir, "anyone-client")
if info, err := os.Stat(candidate); err == nil && !info.IsDir() {
found = true
}
}
}
if !found {
return fmt.Errorf("anyone-client installation verification failed: binary not found via npx or in expected global bin locations")
}
}
fmt.Fprintf(gi.logWriter, " ✓ anyone-client installed\n")
return nil
}
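The verification fallback above (stat a handful of candidate paths and accept the first regular file) is a pattern this installer repeats. A minimal standalone sketch of that check; `findExecutable` is an illustrative helper name, not part of the package:

```go
package main

import (
	"fmt"
	"os"
)

// findExecutable returns the first candidate path that exists and is a
// regular file, mirroring the fallback checks in InstallAnyoneClient.
func findExecutable(candidates ...string) (string, bool) {
	for _, path := range candidates {
		if info, err := os.Stat(path); err == nil && !info.IsDir() {
			return path, true
		}
	}
	return "", false
}

func main() {
	path, ok := findExecutable("/no/such/bin", "/bin/sh")
	fmt.Println(path, ok)
}
```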


@ -0,0 +1,43 @@
package installers
import (
"io"
)
// Installer defines the interface for service installers
type Installer interface {
// Install downloads and installs the service binary
Install() error
// Configure initializes configuration for the service
Configure() error
// IsInstalled checks if the service is already installed
IsInstalled() bool
}
// BaseInstaller provides common functionality for all installers
type BaseInstaller struct {
arch string
logWriter io.Writer
}
// NewBaseInstaller creates a new base installer with common dependencies
func NewBaseInstaller(arch string, logWriter io.Writer) *BaseInstaller {
return &BaseInstaller{
arch: arch,
logWriter: logWriter,
}
}
// IPFSPeerInfo holds IPFS peer information for configuring Peering.Peers
type IPFSPeerInfo struct {
PeerID string
Addrs []string
}
// IPFSClusterPeerInfo contains IPFS Cluster peer information for cluster peer discovery
type IPFSClusterPeerInfo struct {
PeerID string // Cluster peer ID (different from IPFS peer ID)
Addrs []string // Cluster multiaddresses (e.g., /ip4/x.x.x.x/tcp/9098)
}


@ -0,0 +1,321 @@
package installers
import (
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
)
// IPFSInstaller handles IPFS (Kubo) installation
type IPFSInstaller struct {
*BaseInstaller
version string
}
// NewIPFSInstaller creates a new IPFS installer
func NewIPFSInstaller(arch string, logWriter io.Writer) *IPFSInstaller {
return &IPFSInstaller{
BaseInstaller: NewBaseInstaller(arch, logWriter),
version: "v0.38.2",
}
}
// IsInstalled checks if IPFS is already installed
func (ii *IPFSInstaller) IsInstalled() bool {
_, err := exec.LookPath("ipfs")
return err == nil
}
// Install downloads and installs IPFS (Kubo)
// Follows official steps from https://docs.ipfs.tech/install/command-line/
func (ii *IPFSInstaller) Install() error {
if ii.IsInstalled() {
fmt.Fprintf(ii.logWriter, " ✓ IPFS already installed\n")
return nil
}
fmt.Fprintf(ii.logWriter, " Installing IPFS (Kubo)...\n")
// Follow official installation steps in order
tarball := fmt.Sprintf("kubo_%s_linux-%s.tar.gz", ii.version, ii.arch)
url := fmt.Sprintf("https://dist.ipfs.tech/kubo/%s/%s", ii.version, tarball)
tmpDir := "/tmp"
tarPath := filepath.Join(tmpDir, tarball)
kuboDir := filepath.Join(tmpDir, "kubo")
// Step 1: Download the Linux binary from dist.ipfs.tech
fmt.Fprintf(ii.logWriter, " Step 1: Downloading Kubo %s...\n", ii.version)
if err := DownloadFile(url, tarPath); err != nil {
return fmt.Errorf("failed to download kubo from %s: %w", url, err)
}
// Verify tarball exists
if _, err := os.Stat(tarPath); err != nil {
return fmt.Errorf("kubo tarball not found after download at %s: %w", tarPath, err)
}
// Step 2: Unzip the file
fmt.Fprintf(ii.logWriter, " Step 2: Extracting Kubo archive...\n")
if err := ExtractTarball(tarPath, tmpDir); err != nil {
return fmt.Errorf("failed to extract kubo tarball: %w", err)
}
// Verify extraction
if _, err := os.Stat(kuboDir); err != nil {
return fmt.Errorf("kubo directory not found after extraction at %s: %w", kuboDir, err)
}
// Step 3: Move into the kubo folder (cd kubo)
fmt.Fprintf(ii.logWriter, " Step 3: Running installation script...\n")
// Step 4: Run the installation script (sudo bash install.sh)
installScript := filepath.Join(kuboDir, "install.sh")
if _, err := os.Stat(installScript); err != nil {
return fmt.Errorf("install.sh not found in extracted kubo directory at %s: %w", installScript, err)
}
cmd := exec.Command("bash", installScript)
cmd.Dir = kuboDir
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to run install.sh: %v\n%s", err, string(output))
}
// Step 5: Test that Kubo has installed correctly
fmt.Fprintf(ii.logWriter, " Step 5: Verifying installation...\n")
cmd = exec.Command("ipfs", "--version")
output, err := cmd.CombinedOutput()
if err != nil {
// ipfs might not be in PATH yet in this process, check file directly
ipfsLocations := []string{"/usr/local/bin/ipfs", "/usr/bin/ipfs"}
found := false
for _, loc := range ipfsLocations {
if info, err := os.Stat(loc); err == nil && !info.IsDir() {
found = true
// Ensure it's executable
if info.Mode()&0111 == 0 {
if err := os.Chmod(loc, 0755); err != nil {
fmt.Fprintf(ii.logWriter, " ⚠️  Warning: failed to chmod %s: %v\n", loc, err)
}
}
break
}
}
if !found {
return fmt.Errorf("ipfs binary not found after installation in %v", ipfsLocations)
}
} else {
fmt.Fprintf(ii.logWriter, " %s", string(output))
}
// Ensure PATH is updated for current process
os.Setenv("PATH", os.Getenv("PATH")+":/usr/local/bin")
fmt.Fprintf(ii.logWriter, " ✓ IPFS installed successfully\n")
return nil
}
// Configure is a placeholder for IPFS configuration
func (ii *IPFSInstaller) Configure() error {
// Configuration is handled by InitializeRepo
return nil
}
// InitializeRepo initializes an IPFS repository for a node (unified - no bootstrap/node distinction)
// If ipfsPeer is provided, configures Peering.Peers for peer discovery in private networks
func (ii *IPFSInstaller) InitializeRepo(ipfsRepoPath string, swarmKeyPath string, apiPort, gatewayPort, swarmPort int, ipfsPeer *IPFSPeerInfo) error {
configPath := filepath.Join(ipfsRepoPath, "config")
repoExists := false
if _, err := os.Stat(configPath); err == nil {
repoExists = true
fmt.Fprintf(ii.logWriter, " IPFS repo already exists, ensuring configuration...\n")
} else {
fmt.Fprintf(ii.logWriter, " Initializing IPFS repo...\n")
}
if err := os.MkdirAll(ipfsRepoPath, 0755); err != nil {
return fmt.Errorf("failed to create IPFS repo directory: %w", err)
}
// Resolve IPFS binary path
ipfsBinary, err := ResolveBinaryPath("ipfs", "/usr/local/bin/ipfs", "/usr/bin/ipfs")
if err != nil {
return err
}
// Initialize IPFS if repo doesn't exist
if !repoExists {
cmd := exec.Command(ipfsBinary, "init", "--profile=server", "--repo-dir="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to initialize IPFS: %v\n%s", err, string(output))
}
}
// Copy swarm key if present
swarmKeyExists := false
if data, err := os.ReadFile(swarmKeyPath); err == nil {
swarmKeyDest := filepath.Join(ipfsRepoPath, "swarm.key")
if err := os.WriteFile(swarmKeyDest, data, 0600); err != nil {
return fmt.Errorf("failed to copy swarm key: %w", err)
}
swarmKeyExists = true
}
// Configure IPFS addresses (API, Gateway, Swarm) by modifying the config file directly
// This ensures the ports are set correctly and avoids conflicts with RQLite on port 5001
fmt.Fprintf(ii.logWriter, " Configuring IPFS addresses (API: %d, Gateway: %d, Swarm: %d)...\n", apiPort, gatewayPort, swarmPort)
if err := ii.configureAddresses(ipfsRepoPath, apiPort, gatewayPort, swarmPort); err != nil {
return fmt.Errorf("failed to configure IPFS addresses: %w", err)
}
// Always disable AutoConf for private swarm when swarm.key is present
// This is critical - IPFS will fail to start if AutoConf is enabled on a private network
// We do this even for existing repos to fix repos initialized before this fix was applied
if swarmKeyExists {
fmt.Fprintf(ii.logWriter, " Disabling AutoConf for private swarm...\n")
cmd := exec.Command(ipfsBinary, "config", "--json", "AutoConf.Enabled", "false")
cmd.Env = append(os.Environ(), "IPFS_PATH="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to disable AutoConf: %v\n%s", err, string(output))
}
// Clear AutoConf placeholders from config to prevent Kubo startup errors
// When AutoConf is disabled, 'auto' placeholders must be replaced with explicit values or empty
fmt.Fprintf(ii.logWriter, " Clearing AutoConf placeholders from IPFS config...\n")
type configCommand struct {
desc string
args []string
}
// List of config replacements to clear 'auto' placeholders
cleanup := []configCommand{
{"clearing Bootstrap peers", []string{"config", "Bootstrap", "--json", "[]"}},
{"clearing Routing.DelegatedRouters", []string{"config", "Routing.DelegatedRouters", "--json", "[]"}},
{"clearing Ipns.DelegatedPublishers", []string{"config", "Ipns.DelegatedPublishers", "--json", "[]"}},
{"clearing DNS.Resolvers", []string{"config", "DNS.Resolvers", "--json", "{}"}},
}
for _, step := range cleanup {
fmt.Fprintf(ii.logWriter, " %s...\n", step.desc)
cmd := exec.Command(ipfsBinary, step.args...)
cmd.Env = append(os.Environ(), "IPFS_PATH="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed while %s: %v\n%s", step.desc, err, string(output))
}
}
// Configure Peering.Peers if we have peer info (for private network discovery)
if ipfsPeer != nil && ipfsPeer.PeerID != "" && len(ipfsPeer.Addrs) > 0 {
fmt.Fprintf(ii.logWriter, " Configuring Peering.Peers for private network discovery...\n")
if err := ii.configurePeering(ipfsRepoPath, ipfsPeer); err != nil {
return fmt.Errorf("failed to configure IPFS peering: %w", err)
}
}
}
// Fix ownership (best-effort, don't fail if it doesn't work)
if err := exec.Command("chown", "-R", "debros:debros", ipfsRepoPath).Run(); err != nil {
fmt.Fprintf(ii.logWriter, " ⚠️ Warning: failed to chown IPFS repo: %v\n", err)
}
return nil
}
// configureAddresses configures the IPFS API, Gateway, and Swarm addresses in the config file
func (ii *IPFSInstaller) configureAddresses(ipfsRepoPath string, apiPort, gatewayPort, swarmPort int) error {
configPath := filepath.Join(ipfsRepoPath, "config")
// Read existing config
data, err := os.ReadFile(configPath)
if err != nil {
return fmt.Errorf("failed to read IPFS config: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse IPFS config: %w", err)
}
// Get existing Addresses section or create new one
// This preserves any existing settings like Announce, AppendAnnounce, NoAnnounce
addresses, ok := config["Addresses"].(map[string]interface{})
if !ok {
addresses = make(map[string]interface{})
}
// Update specific address fields while preserving others
// Bind API and Gateway to localhost only for security
// Swarm binds to all interfaces for peer connections
addresses["API"] = []string{
fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", apiPort),
}
addresses["Gateway"] = []string{
fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", gatewayPort),
}
addresses["Swarm"] = []string{
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", swarmPort),
fmt.Sprintf("/ip6/::/tcp/%d", swarmPort),
}
config["Addresses"] = addresses
// Write config back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal IPFS config: %w", err)
}
if err := os.WriteFile(configPath, updatedData, 0600); err != nil {
return fmt.Errorf("failed to write IPFS config: %w", err)
}
return nil
}
// configurePeering configures Peering.Peers in the IPFS config for private network discovery
// This allows nodes in a private swarm to find each other even without bootstrap peers
func (ii *IPFSInstaller) configurePeering(ipfsRepoPath string, peer *IPFSPeerInfo) error {
configPath := filepath.Join(ipfsRepoPath, "config")
// Read existing config
data, err := os.ReadFile(configPath)
if err != nil {
return fmt.Errorf("failed to read IPFS config: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse IPFS config: %w", err)
}
// Get existing Peering section or create new one
peering, ok := config["Peering"].(map[string]interface{})
if !ok {
peering = make(map[string]interface{})
}
// Create peer entry
peerEntry := map[string]interface{}{
"ID": peer.PeerID,
"Addrs": peer.Addrs,
}
// Set Peering.Peers
peering["Peers"] = []interface{}{peerEntry}
config["Peering"] = peering
fmt.Fprintf(ii.logWriter, " Adding peer: %s (%d addresses)\n", peer.PeerID, len(peer.Addrs))
// Write config back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal IPFS config: %w", err)
}
if err := os.WriteFile(configPath, updatedData, 0600); err != nil {
return fmt.Errorf("failed to write IPFS config: %w", err)
}
return nil
}


@ -0,0 +1,266 @@
package installers
import (
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
)
// IPFSClusterInstaller handles IPFS Cluster Service installation
type IPFSClusterInstaller struct {
*BaseInstaller
}
// NewIPFSClusterInstaller creates a new IPFS Cluster installer
func NewIPFSClusterInstaller(arch string, logWriter io.Writer) *IPFSClusterInstaller {
return &IPFSClusterInstaller{
BaseInstaller: NewBaseInstaller(arch, logWriter),
}
}
// IsInstalled checks if IPFS Cluster is already installed
func (ici *IPFSClusterInstaller) IsInstalled() bool {
_, err := exec.LookPath("ipfs-cluster-service")
return err == nil
}
// Install downloads and installs IPFS Cluster Service
func (ici *IPFSClusterInstaller) Install() error {
if ici.IsInstalled() {
fmt.Fprintf(ici.logWriter, " ✓ IPFS Cluster already installed\n")
return nil
}
fmt.Fprintf(ici.logWriter, " Installing IPFS Cluster Service...\n")
// Check if Go is available
if _, err := exec.LookPath("go"); err != nil {
return fmt.Errorf("go not found: Go is required to build ipfs-cluster-service; install it first")
}
cmd := exec.Command("go", "install", "github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-service@latest")
cmd.Env = append(os.Environ(), "GOBIN=/usr/local/bin")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to install IPFS Cluster: %w", err)
}
fmt.Fprintf(ici.logWriter, " ✓ IPFS Cluster installed\n")
return nil
}
// Configure is a placeholder for IPFS Cluster configuration
func (ici *IPFSClusterInstaller) Configure() error {
// Configuration is handled by InitializeConfig
return nil
}
// InitializeConfig initializes IPFS Cluster configuration (unified - no bootstrap/node distinction)
// This runs `ipfs-cluster-service init` to create the service.json configuration file.
// For existing installations, it ensures the cluster secret is up to date.
// clusterPeers should be in format: ["/ip4/<ip>/tcp/9098/p2p/<cluster-peer-id>"]
func (ici *IPFSClusterInstaller) InitializeConfig(clusterPath, clusterSecret string, ipfsAPIPort int, clusterPeers []string) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
configExists := false
if _, err := os.Stat(serviceJSONPath); err == nil {
configExists = true
fmt.Fprintf(ici.logWriter, " IPFS Cluster config already exists, ensuring it's up to date...\n")
} else {
fmt.Fprintf(ici.logWriter, " Preparing IPFS Cluster path...\n")
}
if err := os.MkdirAll(clusterPath, 0755); err != nil {
return fmt.Errorf("failed to create IPFS Cluster directory: %w", err)
}
// Fix ownership before running init (best-effort)
if err := exec.Command("chown", "-R", "debros:debros", clusterPath).Run(); err != nil {
fmt.Fprintf(ici.logWriter, " ⚠️ Warning: failed to chown cluster path before init: %v\n", err)
}
// Resolve ipfs-cluster-service binary path
clusterBinary, err := ResolveBinaryPath("ipfs-cluster-service", "/usr/local/bin/ipfs-cluster-service", "/usr/bin/ipfs-cluster-service")
if err != nil {
return fmt.Errorf("ipfs-cluster-service binary not found: %w", err)
}
// Initialize cluster config if it doesn't exist
if !configExists {
// Initialize cluster config with ipfs-cluster-service init
// This creates the service.json file with all required sections
fmt.Fprintf(ici.logWriter, " Initializing IPFS Cluster config...\n")
cmd := exec.Command(clusterBinary, "init", "--force")
cmd.Env = append(os.Environ(), "IPFS_CLUSTER_PATH="+clusterPath)
// Pass CLUSTER_SECRET to init so it writes the correct secret to service.json directly
if clusterSecret != "" {
cmd.Env = append(cmd.Env, "CLUSTER_SECRET="+clusterSecret)
}
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to initialize IPFS Cluster config: %v\n%s", err, string(output))
}
}
// Always update the cluster secret, IPFS port, and peer addresses (for both new and existing configs)
// This ensures existing installations get the secret and port synchronized
// We do this AFTER init to ensure our secret takes precedence
if clusterSecret != "" {
fmt.Fprintf(ici.logWriter, " Updating cluster secret, IPFS port, and peer addresses...\n")
if err := ici.updateConfig(clusterPath, clusterSecret, ipfsAPIPort, clusterPeers); err != nil {
return fmt.Errorf("failed to update cluster config: %w", err)
}
// Verify the secret was written correctly
if err := ici.verifySecret(clusterPath, clusterSecret); err != nil {
return fmt.Errorf("cluster secret verification failed: %w", err)
}
fmt.Fprintf(ici.logWriter, " ✓ Cluster secret verified\n")
}
// Fix ownership again after updates (best-effort)
if err := exec.Command("chown", "-R", "debros:debros", clusterPath).Run(); err != nil {
fmt.Fprintf(ici.logWriter, " ⚠️ Warning: failed to chown cluster path after updates: %v\n", err)
}
return nil
}
// updateConfig updates the secret, IPFS port, and peer addresses in IPFS Cluster service.json
func (ici *IPFSClusterInstaller) updateConfig(clusterPath, secret string, ipfsAPIPort int, bootstrapClusterPeers []string) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
// Read existing config
data, err := os.ReadFile(serviceJSONPath)
if err != nil {
return fmt.Errorf("failed to read service.json: %w", err)
}
// Parse JSON
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse service.json: %w", err)
}
// Update cluster secret, listen_multiaddress, and peer addresses
if cluster, ok := config["cluster"].(map[string]interface{}); ok {
cluster["secret"] = secret
// Set consistent listen_multiaddress - port 9098 for cluster LibP2P communication
// This MUST match the port used in GetClusterPeerMultiaddr() and peer_addresses
cluster["listen_multiaddress"] = []interface{}{"/ip4/0.0.0.0/tcp/9098"}
// Configure peer addresses for cluster discovery
// This allows nodes to find and connect to each other
if len(bootstrapClusterPeers) > 0 {
cluster["peer_addresses"] = bootstrapClusterPeers
}
} else {
clusterConfig := map[string]interface{}{
"secret": secret,
"listen_multiaddress": []interface{}{"/ip4/0.0.0.0/tcp/9098"},
}
if len(bootstrapClusterPeers) > 0 {
clusterConfig["peer_addresses"] = bootstrapClusterPeers
}
config["cluster"] = clusterConfig
}
// Update IPFS port in IPFS Proxy configuration
ipfsNodeMultiaddr := fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", ipfsAPIPort)
if api, ok := config["api"].(map[string]interface{}); ok {
if ipfsproxy, ok := api["ipfsproxy"].(map[string]interface{}); ok {
ipfsproxy["node_multiaddress"] = ipfsNodeMultiaddr
}
}
// Update IPFS port in IPFS Connector configuration
if ipfsConnector, ok := config["ipfs_connector"].(map[string]interface{}); ok {
if ipfshttp, ok := ipfsConnector["ipfshttp"].(map[string]interface{}); ok {
ipfshttp["node_multiaddress"] = ipfsNodeMultiaddr
}
}
// Write back
updatedData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal service.json: %w", err)
}
if err := os.WriteFile(serviceJSONPath, updatedData, 0644); err != nil {
return fmt.Errorf("failed to write service.json: %w", err)
}
return nil
}
// verifySecret verifies that the secret in service.json matches the expected value
func (ici *IPFSClusterInstaller) verifySecret(clusterPath, expectedSecret string) error {
serviceJSONPath := filepath.Join(clusterPath, "service.json")
data, err := os.ReadFile(serviceJSONPath)
if err != nil {
return fmt.Errorf("failed to read service.json for verification: %w", err)
}
var config map[string]interface{}
if err := json.Unmarshal(data, &config); err != nil {
return fmt.Errorf("failed to parse service.json for verification: %w", err)
}
if cluster, ok := config["cluster"].(map[string]interface{}); ok {
if secret, ok := cluster["secret"].(string); ok {
if secret != expectedSecret {
// Don't print the secret values themselves into errors/logs
return fmt.Errorf("secret in service.json does not match the expected cluster secret")
}
return nil
}
return fmt.Errorf("secret not found in cluster config")
}
return fmt.Errorf("cluster section not found in service.json")
}
// GetClusterPeerMultiaddr reads the IPFS Cluster peer ID and returns its multiaddress
// Returns format: /ip4/<ip>/tcp/9098/p2p/<cluster-peer-id>
func (ici *IPFSClusterInstaller) GetClusterPeerMultiaddr(clusterPath string, nodeIP string) (string, error) {
identityPath := filepath.Join(clusterPath, "identity.json")
// Read identity file
data, err := os.ReadFile(identityPath)
if err != nil {
return "", fmt.Errorf("failed to read identity.json: %w", err)
}
// Parse JSON
var identity map[string]interface{}
if err := json.Unmarshal(data, &identity); err != nil {
return "", fmt.Errorf("failed to parse identity.json: %w", err)
}
// Get peer ID
peerID, ok := identity["id"].(string)
if !ok || peerID == "" {
return "", fmt.Errorf("peer ID not found in identity.json")
}
// Construct multiaddress: /ip4/<ip>/tcp/9098/p2p/<peer-id>
// Port 9098 is the default cluster listen port
multiaddr := fmt.Sprintf("/ip4/%s/tcp/9098/p2p/%s", nodeIP, peerID)
return multiaddr, nil
}
// inferPeerIP extracts the IP address from peer addresses
func inferPeerIP(peerAddresses []string, vpsIP string) string {
for _, addr := range peerAddresses {
// Look for /ip4/ prefix
if strings.Contains(addr, "/ip4/") {
parts := strings.Split(addr, "/")
for i, part := range parts {
if part == "ip4" && i+1 < len(parts) {
return parts[i+1]
}
}
}
}
return vpsIP // Fallback to VPS IP
}
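`inferPeerIP` walks a multiaddr string by splitting on `/` and taking the segment after `ip4`; the same slice-walk extracts any protocol's value. A small sketch generalizing it (`extractMultiaddrValue` is an illustrative helper, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// extractMultiaddrValue returns the value following a protocol segment in a
// multiaddr string, e.g. the IP after "ip4" or the peer ID after "p2p".
func extractMultiaddrValue(addr, proto string) (string, bool) {
	parts := strings.Split(addr, "/")
	for i, part := range parts {
		if part == proto && i+1 < len(parts) {
			return parts[i+1], true
		}
	}
	return "", false
}

func main() {
	addr := "/ip4/10.0.0.5/tcp/9098/p2p/12D3KooWExample"
	ip, _ := extractMultiaddrValue(addr, "ip4")
	peer, _ := extractMultiaddrValue(addr, "p2p")
	fmt.Println(ip, peer)
}
```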


@ -0,0 +1,58 @@
package installers
import (
"fmt"
"io"
"os"
"os/exec"
)
// OlricInstaller handles Olric server installation
type OlricInstaller struct {
*BaseInstaller
version string
}
// NewOlricInstaller creates a new Olric installer
func NewOlricInstaller(arch string, logWriter io.Writer) *OlricInstaller {
return &OlricInstaller{
BaseInstaller: NewBaseInstaller(arch, logWriter),
version: "v0.7.0",
}
}
// IsInstalled checks if Olric is already installed
func (oi *OlricInstaller) IsInstalled() bool {
_, err := exec.LookPath("olric-server")
return err == nil
}
// Install downloads and installs Olric server
func (oi *OlricInstaller) Install() error {
if oi.IsInstalled() {
fmt.Fprintf(oi.logWriter, " ✓ Olric already installed\n")
return nil
}
fmt.Fprintf(oi.logWriter, " Installing Olric...\n")
// Check if Go is available
if _, err := exec.LookPath("go"); err != nil {
return fmt.Errorf("go not found: Go is required to build olric-server; install it first")
}
cmd := exec.Command("go", "install", fmt.Sprintf("github.com/olric-data/olric/cmd/olric-server@%s", oi.version))
cmd.Env = append(os.Environ(), "GOBIN=/usr/local/bin")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to install Olric: %w", err)
}
fmt.Fprintf(oi.logWriter, " ✓ Olric installed\n")
return nil
}
// Configure is a placeholder for Olric configuration
func (oi *OlricInstaller) Configure() error {
// Configuration is handled by the orchestrator
return nil
}


@ -0,0 +1,86 @@
package installers
import (
"fmt"
"io"
"os"
"os/exec"
)
// RQLiteInstaller handles RQLite installation
type RQLiteInstaller struct {
*BaseInstaller
version string
}
// NewRQLiteInstaller creates a new RQLite installer
func NewRQLiteInstaller(arch string, logWriter io.Writer) *RQLiteInstaller {
return &RQLiteInstaller{
BaseInstaller: NewBaseInstaller(arch, logWriter),
version: "8.43.0",
}
}
// IsInstalled checks if RQLite is already installed
func (ri *RQLiteInstaller) IsInstalled() bool {
_, err := exec.LookPath("rqlited")
return err == nil
}
// Install downloads and installs RQLite
func (ri *RQLiteInstaller) Install() error {
if ri.IsInstalled() {
fmt.Fprintf(ri.logWriter, " ✓ RQLite already installed\n")
return nil
}
fmt.Fprintf(ri.logWriter, " Installing RQLite...\n")
tarball := fmt.Sprintf("rqlite-v%s-linux-%s.tar.gz", ri.version, ri.arch)
url := fmt.Sprintf("https://github.com/rqlite/rqlite/releases/download/v%s/%s", ri.version, tarball)
// Download
if err := DownloadFile(url, "/tmp/"+tarball); err != nil {
return fmt.Errorf("failed to download RQLite: %w", err)
}
// Extract
if err := ExtractTarball("/tmp/"+tarball, "/tmp"); err != nil {
return fmt.Errorf("failed to extract RQLite: %w", err)
}
// Copy binaries
dir := fmt.Sprintf("/tmp/rqlite-v%s-linux-%s", ri.version, ri.arch)
if err := exec.Command("cp", dir+"/rqlited", "/usr/local/bin/").Run(); err != nil {
return fmt.Errorf("failed to copy rqlited binary: %w", err)
}
if err := exec.Command("chmod", "+x", "/usr/local/bin/rqlited").Run(); err != nil {
fmt.Fprintf(ri.logWriter, " ⚠️ Warning: failed to chmod rqlited: %v\n", err)
}
// Ensure PATH includes /usr/local/bin
os.Setenv("PATH", os.Getenv("PATH")+":/usr/local/bin")
fmt.Fprintf(ri.logWriter, " ✓ RQLite installed\n")
return nil
}
// Configure initializes RQLite data directory
func (ri *RQLiteInstaller) Configure() error {
// Configuration is handled by the orchestrator
return nil
}
// InitializeDataDir initializes RQLite data directory
func (ri *RQLiteInstaller) InitializeDataDir(dataDir string) error {
fmt.Fprintf(ri.logWriter, " Initializing RQLite data dir...\n")
if err := os.MkdirAll(dataDir, 0755); err != nil {
return fmt.Errorf("failed to create RQLite data directory: %w", err)
}
if err := exec.Command("chown", "-R", "debros:debros", dataDir).Run(); err != nil {
fmt.Fprintf(ri.logWriter, " ⚠️ Warning: failed to chown RQLite data dir: %v\n", err)
}
return nil
}


@ -0,0 +1,126 @@
package installers
import (
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
)
// DownloadFile downloads a file from a URL to a destination path
func DownloadFile(url, dest string) error {
cmd := exec.Command("wget", "-q", url, "-O", dest)
if err := cmd.Run(); err != nil {
return fmt.Errorf("download failed: %w", err)
}
return nil
}
// ExtractTarball extracts a tarball to a destination directory
func ExtractTarball(tarPath, destDir string) error {
cmd := exec.Command("tar", "-xzf", tarPath, "-C", destDir)
if err := cmd.Run(); err != nil {
return fmt.Errorf("extraction failed: %w", err)
}
return nil
}
// ResolveBinaryPath finds the fully-qualified path to a required executable
func ResolveBinaryPath(binary string, extraPaths ...string) (string, error) {
// First try to find in PATH
if path, err := exec.LookPath(binary); err == nil {
if abs, err := filepath.Abs(path); err == nil {
return abs, nil
}
return path, nil
}
// Then try extra candidate paths
for _, candidate := range extraPaths {
if candidate == "" {
continue
}
if info, err := os.Stat(candidate); err == nil && !info.IsDir() && info.Mode()&0111 != 0 {
if abs, err := filepath.Abs(candidate); err == nil {
return abs, nil
}
return candidate, nil
}
}
// Not found - generate error message
checked := make([]string, 0, len(extraPaths))
for _, candidate := range extraPaths {
if candidate != "" {
checked = append(checked, candidate)
}
}
if len(checked) == 0 {
return "", fmt.Errorf("required binary %q not found in PATH", binary)
}
return "", fmt.Errorf("required binary %q not found in PATH (also checked %s)", binary, strings.Join(checked, ", "))
}
// CreateSystemdService creates a systemd service unit file
func CreateSystemdService(name, content string) error {
servicePath := filepath.Join("/etc/systemd/system", name)
if err := os.WriteFile(servicePath, []byte(content), 0644); err != nil {
return fmt.Errorf("failed to write service file: %w", err)
}
return nil
}
// EnableSystemdService enables a systemd service
func EnableSystemdService(name string) error {
cmd := exec.Command("systemctl", "enable", name)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to enable service: %w", err)
}
return nil
}
// StartSystemdService starts a systemd service
func StartSystemdService(name string) error {
cmd := exec.Command("systemctl", "start", name)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to start service: %w", err)
}
return nil
}
// ReloadSystemdDaemon reloads systemd daemon configuration
func ReloadSystemdDaemon() error {
cmd := exec.Command("systemctl", "daemon-reload")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to reload systemd: %w", err)
}
return nil
}
// SetFileOwnership sets ownership of a file or directory
func SetFileOwnership(path, owner string) error {
cmd := exec.Command("chown", "-R", owner, path)
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to set ownership: %w", err)
}
return nil
}
// SetFilePermissions sets permissions on a file or directory
func SetFilePermissions(path string, mode os.FileMode) error {
if err := os.Chmod(path, mode); err != nil {
return fmt.Errorf("failed to set permissions: %w", err)
}
return nil
}
// EnsureDirectory creates a directory if it doesn't exist
func EnsureDirectory(path string, mode os.FileMode) error {
if err := os.MkdirAll(path, mode); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
return nil
}

pkg/errors/codes.go

@ -0,0 +1,179 @@
package errors
// Error codes for categorizing errors.
// These codes map to HTTP status codes and gRPC codes where applicable.
const (
// CodeOK indicates success (not an error).
CodeOK = "OK"
// CodeCancelled indicates the operation was cancelled.
CodeCancelled = "CANCELLED"
// CodeUnknown indicates an unknown error occurred.
CodeUnknown = "UNKNOWN"
// CodeInvalidArgument indicates client specified an invalid argument.
CodeInvalidArgument = "INVALID_ARGUMENT"
// CodeDeadlineExceeded indicates operation deadline was exceeded.
CodeDeadlineExceeded = "DEADLINE_EXCEEDED"
// CodeNotFound indicates a resource was not found.
CodeNotFound = "NOT_FOUND"
// CodeAlreadyExists indicates attempting to create a resource that already exists.
CodeAlreadyExists = "ALREADY_EXISTS"
// CodePermissionDenied indicates the caller doesn't have permission.
CodePermissionDenied = "PERMISSION_DENIED"
// CodeResourceExhausted indicates a resource has been exhausted.
CodeResourceExhausted = "RESOURCE_EXHAUSTED"
// CodeFailedPrecondition indicates operation was rejected because the system
// is not in a required state.
CodeFailedPrecondition = "FAILED_PRECONDITION"
// CodeAborted indicates the operation was aborted.
CodeAborted = "ABORTED"
// CodeOutOfRange indicates operation attempted past valid range.
CodeOutOfRange = "OUT_OF_RANGE"
// CodeUnimplemented indicates operation is not implemented or not supported.
CodeUnimplemented = "UNIMPLEMENTED"
// CodeInternal indicates internal errors.
CodeInternal = "INTERNAL"
// CodeUnavailable indicates the service is currently unavailable.
CodeUnavailable = "UNAVAILABLE"
// CodeDataLoss indicates unrecoverable data loss or corruption.
CodeDataLoss = "DATA_LOSS"
// CodeUnauthenticated indicates the request does not have valid authentication.
CodeUnauthenticated = "UNAUTHENTICATED"
// Domain-specific error codes
// CodeValidation indicates input validation failed.
CodeValidation = "VALIDATION_ERROR"
// CodeUnauthorized indicates authentication is required or failed.
CodeUnauthorized = "UNAUTHORIZED"
// CodeForbidden indicates the authenticated user lacks permission.
CodeForbidden = "FORBIDDEN"
// CodeConflict indicates a resource conflict (e.g., duplicate key).
CodeConflict = "CONFLICT"
// CodeTimeout indicates an operation timed out.
CodeTimeout = "TIMEOUT"
// CodeRateLimit indicates rate limit was exceeded.
CodeRateLimit = "RATE_LIMIT_EXCEEDED"
// CodeServiceUnavailable indicates a downstream service is unavailable.
CodeServiceUnavailable = "SERVICE_UNAVAILABLE"
// CodeDatabaseError indicates a database operation failed.
CodeDatabaseError = "DATABASE_ERROR"
// CodeCacheError indicates a cache operation failed.
CodeCacheError = "CACHE_ERROR"
// CodeStorageError indicates a storage operation failed.
CodeStorageError = "STORAGE_ERROR"
// CodeNetworkError indicates a network operation failed.
CodeNetworkError = "NETWORK_ERROR"
// CodeExecutionError indicates a WASM or function execution failed.
CodeExecutionError = "EXECUTION_ERROR"
// CodeCompilationError indicates WASM compilation failed.
CodeCompilationError = "COMPILATION_ERROR"
// CodeConfigError indicates a configuration error.
CodeConfigError = "CONFIG_ERROR"
// CodeAuthError indicates an authentication/authorization error.
CodeAuthError = "AUTH_ERROR"
// CodeCryptoError indicates a cryptographic operation failed.
CodeCryptoError = "CRYPTO_ERROR"
// CodeSerializationError indicates serialization/deserialization failed.
CodeSerializationError = "SERIALIZATION_ERROR"
)
// ErrorCategory represents a high-level error category.
type ErrorCategory string
const (
// CategoryClient indicates a client-side error (4xx).
CategoryClient ErrorCategory = "CLIENT_ERROR"
// CategoryServer indicates a server-side error (5xx).
CategoryServer ErrorCategory = "SERVER_ERROR"
// CategoryNetwork indicates a network-related error.
CategoryNetwork ErrorCategory = "NETWORK_ERROR"
// CategoryTimeout indicates a timeout error.
CategoryTimeout ErrorCategory = "TIMEOUT_ERROR"
// CategoryValidation indicates a validation error.
CategoryValidation ErrorCategory = "VALIDATION_ERROR"
// CategoryAuth indicates an authentication/authorization error.
CategoryAuth ErrorCategory = "AUTH_ERROR"
)
// GetCategory returns the category for an error code.
func GetCategory(code string) ErrorCategory {
switch code {
case CodeInvalidArgument, CodeValidation, CodeNotFound,
CodeConflict, CodeAlreadyExists, CodeOutOfRange:
return CategoryClient
case CodeUnauthorized, CodeUnauthenticated,
CodeForbidden, CodePermissionDenied, CodeAuthError:
return CategoryAuth
case CodeTimeout, CodeDeadlineExceeded:
return CategoryTimeout
case CodeNetworkError, CodeServiceUnavailable, CodeUnavailable:
return CategoryNetwork
default:
return CategoryServer
}
}
// IsRetryable returns true if an error with the given code should be retried.
func IsRetryable(code string) bool {
switch code {
case CodeTimeout, CodeDeadlineExceeded,
CodeServiceUnavailable, CodeUnavailable,
CodeResourceExhausted, CodeAborted,
CodeNetworkError, CodeDatabaseError,
CodeCacheError, CodeStorageError:
return true
default:
return false
}
}
// IsClientError returns true if the error is a client error (4xx).
func IsClientError(code string) bool {
return GetCategory(code) == CategoryClient
}
// IsServerError returns true if the error is a server error (5xx).
func IsServerError(code string) bool {
return GetCategory(code) == CategoryServer
}

pkg/errors/codes_test.go Normal file

@@ -0,0 +1,206 @@
package errors
import "testing"
func TestGetCategory(t *testing.T) {
tests := []struct {
code string
expectedCategory ErrorCategory
}{
// Client errors
{CodeInvalidArgument, CategoryClient},
{CodeValidation, CategoryClient},
{CodeNotFound, CategoryClient},
{CodeConflict, CategoryClient},
{CodeAlreadyExists, CategoryClient},
{CodeOutOfRange, CategoryClient},
// Auth errors
{CodeUnauthorized, CategoryAuth},
{CodeUnauthenticated, CategoryAuth},
{CodeForbidden, CategoryAuth},
{CodePermissionDenied, CategoryAuth},
{CodeAuthError, CategoryAuth},
// Timeout errors
{CodeTimeout, CategoryTimeout},
{CodeDeadlineExceeded, CategoryTimeout},
// Network errors
{CodeNetworkError, CategoryNetwork},
{CodeServiceUnavailable, CategoryNetwork},
{CodeUnavailable, CategoryNetwork},
// Server errors
{CodeInternal, CategoryServer},
{CodeUnknown, CategoryServer},
{CodeDatabaseError, CategoryServer},
{CodeCacheError, CategoryServer},
{CodeStorageError, CategoryServer},
{CodeExecutionError, CategoryServer},
{CodeCompilationError, CategoryServer},
{CodeConfigError, CategoryServer},
{CodeCryptoError, CategoryServer},
{CodeSerializationError, CategoryServer},
{CodeDataLoss, CategoryServer},
}
for _, tt := range tests {
t.Run(tt.code, func(t *testing.T) {
category := GetCategory(tt.code)
if category != tt.expectedCategory {
t.Errorf("Code %s: expected category %s, got %s", tt.code, tt.expectedCategory, category)
}
})
}
}
func TestIsRetryable(t *testing.T) {
tests := []struct {
code string
expected bool
}{
// Retryable errors
{CodeTimeout, true},
{CodeDeadlineExceeded, true},
{CodeServiceUnavailable, true},
{CodeUnavailable, true},
{CodeResourceExhausted, true},
{CodeAborted, true},
{CodeNetworkError, true},
{CodeDatabaseError, true},
{CodeCacheError, true},
{CodeStorageError, true},
// Non-retryable errors
{CodeInvalidArgument, false},
{CodeValidation, false},
{CodeNotFound, false},
{CodeUnauthorized, false},
{CodeForbidden, false},
{CodeConflict, false},
{CodeInternal, false},
{CodeAuthError, false},
{CodeExecutionError, false},
{CodeCompilationError, false},
}
for _, tt := range tests {
t.Run(tt.code, func(t *testing.T) {
result := IsRetryable(tt.code)
if result != tt.expected {
t.Errorf("Code %s: expected retryable=%v, got %v", tt.code, tt.expected, result)
}
})
}
}
func TestIsClientError(t *testing.T) {
tests := []struct {
code string
expected bool
}{
{CodeInvalidArgument, true},
{CodeValidation, true},
{CodeNotFound, true},
{CodeConflict, true},
{CodeInternal, false},
{CodeUnauthorized, false}, // Auth category, not client
{CodeTimeout, false},
}
for _, tt := range tests {
t.Run(tt.code, func(t *testing.T) {
result := IsClientError(tt.code)
if result != tt.expected {
t.Errorf("Code %s: expected client error=%v, got %v", tt.code, tt.expected, result)
}
})
}
}
func TestIsServerError(t *testing.T) {
tests := []struct {
code string
expected bool
}{
{CodeInternal, true},
{CodeUnknown, true},
{CodeDatabaseError, true},
{CodeCacheError, true},
{CodeStorageError, true},
{CodeExecutionError, true},
{CodeInvalidArgument, false},
{CodeNotFound, false},
{CodeUnauthorized, false},
{CodeTimeout, false},
}
for _, tt := range tests {
t.Run(tt.code, func(t *testing.T) {
result := IsServerError(tt.code)
if result != tt.expected {
t.Errorf("Code %s: expected server error=%v, got %v", tt.code, tt.expected, result)
}
})
}
}
func TestErrorCategoryConsistency(t *testing.T) {
// Test that IsClientError and IsServerError are mutually exclusive
allCodes := []string{
CodeOK, CodeCancelled, CodeUnknown, CodeInvalidArgument,
CodeDeadlineExceeded, CodeNotFound, CodeAlreadyExists,
CodePermissionDenied, CodeResourceExhausted, CodeFailedPrecondition,
CodeAborted, CodeOutOfRange, CodeUnimplemented, CodeInternal,
CodeUnavailable, CodeDataLoss, CodeUnauthenticated,
CodeValidation, CodeUnauthorized, CodeForbidden, CodeConflict,
CodeTimeout, CodeRateLimit, CodeServiceUnavailable,
CodeDatabaseError, CodeCacheError, CodeStorageError,
CodeNetworkError, CodeExecutionError, CodeCompilationError,
CodeConfigError, CodeAuthError, CodeCryptoError,
CodeSerializationError,
}
for _, code := range allCodes {
t.Run(code, func(t *testing.T) {
isClient := IsClientError(code)
isServer := IsServerError(code)
// They shouldn't both be true
if isClient && isServer {
t.Errorf("Code %s is both client and server error", code)
}
// Get category to ensure it's one of the valid ones
category := GetCategory(code)
validCategories := []ErrorCategory{
CategoryClient, CategoryServer, CategoryNetwork,
CategoryTimeout, CategoryValidation, CategoryAuth,
}
found := false
for _, valid := range validCategories {
if category == valid {
found = true
break
}
}
if !found {
t.Errorf("Code %s has invalid category: %s", code, category)
}
})
}
}
func BenchmarkGetCategory(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = GetCategory(CodeValidation)
}
}
func BenchmarkIsRetryable(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = IsRetryable(CodeTimeout)
}
}

pkg/errors/errors.go Normal file

@@ -0,0 +1,389 @@
package errors
import (
"errors"
"fmt"
"runtime"
"strings"
)
// Common sentinel errors for quick checks
var (
// ErrNotFound is returned when a resource is not found.
ErrNotFound = errors.New("not found")
// ErrUnauthorized is returned when authentication fails or is missing.
ErrUnauthorized = errors.New("unauthorized")
// ErrForbidden is returned when the user lacks permission for an action.
ErrForbidden = errors.New("forbidden")
// ErrConflict is returned when a resource already exists.
ErrConflict = errors.New("resource already exists")
// ErrInvalidInput is returned when request input is invalid.
ErrInvalidInput = errors.New("invalid input")
// ErrTimeout is returned when an operation times out.
ErrTimeout = errors.New("operation timeout")
// ErrServiceUnavailable is returned when a required service is unavailable.
ErrServiceUnavailable = errors.New("service unavailable")
// ErrInternal is returned when an internal error occurs.
ErrInternal = errors.New("internal error")
// ErrTooManyRequests is returned when rate limit is exceeded.
ErrTooManyRequests = errors.New("too many requests")
)
// Error is the base interface for all custom errors in the system.
// It extends the standard error interface with additional context.
type Error interface {
error
// Code returns the error code
Code() string
// Message returns the human-readable error message
Message() string
// Unwrap returns the underlying cause
Unwrap() error
}
// BaseError provides a foundation for all typed errors.
type BaseError struct {
code string
message string
cause error
stack []uintptr
}
// Error implements the error interface.
func (e *BaseError) Error() string {
if e.cause != nil {
return fmt.Sprintf("%s: %v", e.message, e.cause)
}
return e.message
}
// Code returns the error code.
func (e *BaseError) Code() string {
return e.code
}
// Message returns the error message.
func (e *BaseError) Message() string {
return e.message
}
// Unwrap returns the underlying cause.
func (e *BaseError) Unwrap() error {
return e.cause
}
// Stack returns the captured stack trace.
func (e *BaseError) Stack() []uintptr {
return e.stack
}
// captureStack captures the current stack trace.
func captureStack(skip int) []uintptr {
const maxDepth = 32
stack := make([]uintptr, maxDepth)
n := runtime.Callers(skip+2, stack)
return stack[:n]
}
// StackTrace returns a formatted stack trace string.
func (e *BaseError) StackTrace() string {
if len(e.stack) == 0 {
return ""
}
var buf strings.Builder
frames := runtime.CallersFrames(e.stack)
for {
frame, more := frames.Next()
if !strings.Contains(frame.File, "runtime/") {
fmt.Fprintf(&buf, "%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
}
if !more {
break
}
}
return buf.String()
}
// ValidationError represents an input validation error.
type ValidationError struct {
*BaseError
Field string
Value interface{}
}
// NewValidationError creates a new validation error.
func NewValidationError(field, message string, value interface{}) *ValidationError {
return &ValidationError{
BaseError: &BaseError{
code: CodeValidation,
message: message,
stack: captureStack(1),
},
Field: field,
Value: value,
}
}
// Error implements the error interface.
func (e *ValidationError) Error() string {
if e.Field != "" {
return fmt.Sprintf("validation error: %s: %s", e.Field, e.message)
}
return fmt.Sprintf("validation error: %s", e.message)
}
// NotFoundError represents a resource not found error.
type NotFoundError struct {
*BaseError
Resource string
ID string
}
// NewNotFoundError creates a new not found error.
func NewNotFoundError(resource, id string) *NotFoundError {
return &NotFoundError{
BaseError: &BaseError{
code: CodeNotFound,
message: fmt.Sprintf("%s not found", resource),
stack: captureStack(1),
},
Resource: resource,
ID: id,
}
}
// Error implements the error interface.
func (e *NotFoundError) Error() string {
if e.ID != "" {
return fmt.Sprintf("%s with ID '%s' not found", e.Resource, e.ID)
}
return fmt.Sprintf("%s not found", e.Resource)
}
// UnauthorizedError represents an authentication error.
type UnauthorizedError struct {
*BaseError
Realm string
}
// NewUnauthorizedError creates a new unauthorized error.
func NewUnauthorizedError(message string) *UnauthorizedError {
if message == "" {
message = "authentication required"
}
return &UnauthorizedError{
BaseError: &BaseError{
code: CodeUnauthorized,
message: message,
stack: captureStack(1),
},
}
}
// WithRealm sets the authentication realm.
func (e *UnauthorizedError) WithRealm(realm string) *UnauthorizedError {
e.Realm = realm
return e
}
// ForbiddenError represents an authorization error.
type ForbiddenError struct {
*BaseError
Resource string
Action string
}
// NewForbiddenError creates a new forbidden error.
func NewForbiddenError(resource, action string) *ForbiddenError {
message := "forbidden"
if resource != "" && action != "" {
message = fmt.Sprintf("forbidden: cannot %s %s", action, resource)
}
return &ForbiddenError{
BaseError: &BaseError{
code: CodeForbidden,
message: message,
stack: captureStack(1),
},
Resource: resource,
Action: action,
}
}
// ConflictError represents a resource conflict error.
type ConflictError struct {
*BaseError
Resource string
Field string
Value string
}
// NewConflictError creates a new conflict error.
func NewConflictError(resource, field, value string) *ConflictError {
message := fmt.Sprintf("%s already exists", resource)
if field != "" {
message = fmt.Sprintf("%s with %s='%s' already exists", resource, field, value)
}
return &ConflictError{
BaseError: &BaseError{
code: CodeConflict,
message: message,
stack: captureStack(1),
},
Resource: resource,
Field: field,
Value: value,
}
}
// InternalError represents an internal server error.
type InternalError struct {
*BaseError
Operation string
}
// NewInternalError creates a new internal error.
func NewInternalError(message string, cause error) *InternalError {
if message == "" {
message = "internal error"
}
return &InternalError{
BaseError: &BaseError{
code: CodeInternal,
message: message,
cause: cause,
stack: captureStack(1),
},
}
}
// WithOperation sets the operation context.
func (e *InternalError) WithOperation(op string) *InternalError {
e.Operation = op
return e
}
// ServiceError represents a downstream service error.
type ServiceError struct {
*BaseError
Service string
StatusCode int
}
// NewServiceError creates a new service error.
func NewServiceError(service, message string, statusCode int, cause error) *ServiceError {
if message == "" {
message = fmt.Sprintf("%s service error", service)
}
return &ServiceError{
BaseError: &BaseError{
code: CodeServiceUnavailable,
message: message,
cause: cause,
stack: captureStack(1),
},
Service: service,
StatusCode: statusCode,
}
}
// TimeoutError represents a timeout error.
type TimeoutError struct {
*BaseError
Operation string
Duration string
}
// NewTimeoutError creates a new timeout error.
func NewTimeoutError(operation, duration string) *TimeoutError {
message := "operation timeout"
if operation != "" {
message = fmt.Sprintf("%s timeout", operation)
}
return &TimeoutError{
BaseError: &BaseError{
code: CodeTimeout,
message: message,
stack: captureStack(1),
},
Operation: operation,
Duration: duration,
}
}
// RateLimitError represents a rate limiting error.
type RateLimitError struct {
*BaseError
Limit int
RetryAfter int // seconds
}
// NewRateLimitError creates a new rate limit error.
func NewRateLimitError(limit, retryAfter int) *RateLimitError {
return &RateLimitError{
BaseError: &BaseError{
code: CodeRateLimit,
message: "rate limit exceeded",
stack: captureStack(1),
},
Limit: limit,
RetryAfter: retryAfter,
}
}
// Wrap wraps an error with additional context.
// If the error is already one of our custom types, it preserves the type
// and adds the cause chain. Otherwise, it creates an InternalError.
func Wrap(err error, message string) error {
if err == nil {
return nil
}
// If it's already our error type, wrap it
if e, ok := err.(Error); ok {
return &BaseError{
code: e.Code(),
message: message,
cause: err,
stack: captureStack(1),
}
}
// Otherwise create an internal error
return &InternalError{
BaseError: &BaseError{
code: CodeInternal,
message: message,
cause: err,
stack: captureStack(1),
},
}
}
// Wrapf wraps an error with a formatted message.
func Wrapf(err error, format string, args ...interface{}) error {
return Wrap(err, fmt.Sprintf(format, args...))
}
// New creates a new error with a message.
func New(message string) error {
return &BaseError{
code: CodeInternal,
message: message,
stack: captureStack(1),
}
}
// Newf creates a new error with a formatted message.
func Newf(format string, args ...interface{}) error {
return New(fmt.Sprintf(format, args...))
}

pkg/errors/errors_test.go Normal file

@@ -0,0 +1,405 @@
package errors
import (
"errors"
"fmt"
"strings"
"testing"
)
func TestValidationError(t *testing.T) {
tests := []struct {
name string
field string
message string
value interface{}
expectedError string
}{
{
name: "with field",
field: "email",
message: "invalid email format",
value: "not-an-email",
expectedError: "validation error: email: invalid email format",
},
{
name: "without field",
field: "",
message: "invalid input",
value: nil,
expectedError: "validation error: invalid input",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := NewValidationError(tt.field, tt.message, tt.value)
if err.Error() != tt.expectedError {
t.Errorf("Expected error %q, got %q", tt.expectedError, err.Error())
}
if err.Code() != CodeValidation {
t.Errorf("Expected code %q, got %q", CodeValidation, err.Code())
}
if err.Field != tt.field {
t.Errorf("Expected field %q, got %q", tt.field, err.Field)
}
})
}
}
func TestNotFoundError(t *testing.T) {
tests := []struct {
name string
resource string
id string
expectedError string
}{
{
name: "with ID",
resource: "user",
id: "123",
expectedError: "user with ID '123' not found",
},
{
name: "without ID",
resource: "user",
id: "",
expectedError: "user not found",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := NewNotFoundError(tt.resource, tt.id)
if err.Error() != tt.expectedError {
t.Errorf("Expected error %q, got %q", tt.expectedError, err.Error())
}
if err.Code() != CodeNotFound {
t.Errorf("Expected code %q, got %q", CodeNotFound, err.Code())
}
if err.Resource != tt.resource {
t.Errorf("Expected resource %q, got %q", tt.resource, err.Resource)
}
})
}
}
func TestUnauthorizedError(t *testing.T) {
t.Run("default message", func(t *testing.T) {
err := NewUnauthorizedError("")
if err.Message() != "authentication required" {
t.Errorf("Expected message 'authentication required', got %q", err.Message())
}
if err.Code() != CodeUnauthorized {
t.Errorf("Expected code %q, got %q", CodeUnauthorized, err.Code())
}
})
t.Run("custom message", func(t *testing.T) {
err := NewUnauthorizedError("invalid token")
if err.Message() != "invalid token" {
t.Errorf("Expected message 'invalid token', got %q", err.Message())
}
})
t.Run("with realm", func(t *testing.T) {
err := NewUnauthorizedError("").WithRealm("api")
if err.Realm != "api" {
t.Errorf("Expected realm 'api', got %q", err.Realm)
}
})
}
func TestForbiddenError(t *testing.T) {
tests := []struct {
name string
resource string
action string
expectedMsg string
}{
{
name: "with resource and action",
resource: "function",
action: "delete",
expectedMsg: "forbidden: cannot delete function",
},
{
name: "without details",
resource: "",
action: "",
expectedMsg: "forbidden",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := NewForbiddenError(tt.resource, tt.action)
if err.Message() != tt.expectedMsg {
t.Errorf("Expected message %q, got %q", tt.expectedMsg, err.Message())
}
if err.Code() != CodeForbidden {
t.Errorf("Expected code %q, got %q", CodeForbidden, err.Code())
}
})
}
}
func TestConflictError(t *testing.T) {
tests := []struct {
name string
resource string
field string
value string
expectedMsg string
}{
{
name: "with field",
resource: "user",
field: "email",
value: "test@example.com",
expectedMsg: "user with email='test@example.com' already exists",
},
{
name: "without field",
resource: "user",
field: "",
value: "",
expectedMsg: "user already exists",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := NewConflictError(tt.resource, tt.field, tt.value)
if err.Message() != tt.expectedMsg {
t.Errorf("Expected message %q, got %q", tt.expectedMsg, err.Message())
}
if err.Code() != CodeConflict {
t.Errorf("Expected code %q, got %q", CodeConflict, err.Code())
}
})
}
}
func TestInternalError(t *testing.T) {
t.Run("with cause", func(t *testing.T) {
cause := errors.New("database connection failed")
err := NewInternalError("failed to save user", cause)
if err.Message() != "failed to save user" {
t.Errorf("Expected message 'failed to save user', got %q", err.Message())
}
if err.Unwrap() != cause {
t.Errorf("Expected cause to be preserved")
}
if !strings.Contains(err.Error(), "database connection failed") {
t.Errorf("Expected error to contain cause: %q", err.Error())
}
})
t.Run("with operation", func(t *testing.T) {
err := NewInternalError("operation failed", nil).WithOperation("saveUser")
if err.Operation != "saveUser" {
t.Errorf("Expected operation 'saveUser', got %q", err.Operation)
}
})
}
func TestServiceError(t *testing.T) {
cause := errors.New("connection refused")
err := NewServiceError("rqlite", "database unavailable", 503, cause)
if err.Service != "rqlite" {
t.Errorf("Expected service 'rqlite', got %q", err.Service)
}
if err.StatusCode != 503 {
t.Errorf("Expected status code 503, got %d", err.StatusCode)
}
if err.Unwrap() != cause {
t.Errorf("Expected cause to be preserved")
}
}
func TestTimeoutError(t *testing.T) {
err := NewTimeoutError("function execution", "30s")
if err.Operation != "function execution" {
t.Errorf("Expected operation 'function execution', got %q", err.Operation)
}
if err.Duration != "30s" {
t.Errorf("Expected duration '30s', got %q", err.Duration)
}
if !strings.Contains(err.Message(), "timeout") {
t.Errorf("Expected message to contain 'timeout': %q", err.Message())
}
}
func TestRateLimitError(t *testing.T) {
err := NewRateLimitError(100, 60)
if err.Limit != 100 {
t.Errorf("Expected limit 100, got %d", err.Limit)
}
if err.RetryAfter != 60 {
t.Errorf("Expected retry after 60, got %d", err.RetryAfter)
}
if err.Code() != CodeRateLimit {
t.Errorf("Expected code %q, got %q", CodeRateLimit, err.Code())
}
}
func TestWrap(t *testing.T) {
t.Run("wrap standard error", func(t *testing.T) {
original := errors.New("original error")
wrapped := Wrap(original, "additional context")
if !strings.Contains(wrapped.Error(), "additional context") {
t.Errorf("Expected wrapped error to contain context: %q", wrapped.Error())
}
if !errors.Is(wrapped, original) {
t.Errorf("Expected wrapped error to preserve original error")
}
})
t.Run("wrap custom error", func(t *testing.T) {
original := NewNotFoundError("user", "123")
wrapped := Wrap(original, "failed to fetch user")
if !strings.Contains(wrapped.Error(), "failed to fetch user") {
t.Errorf("Expected wrapped error to contain new context: %q", wrapped.Error())
}
if errors.Unwrap(wrapped) != original {
t.Errorf("Expected wrapped error to preserve original error")
}
})
t.Run("wrap nil error", func(t *testing.T) {
wrapped := Wrap(nil, "context")
if wrapped != nil {
t.Errorf("Expected Wrap(nil) to return nil, got %v", wrapped)
}
})
}
func TestWrapf(t *testing.T) {
original := errors.New("connection failed")
wrapped := Wrapf(original, "failed to connect to %s:%d", "localhost", 5432)
expected := "failed to connect to localhost:5432"
if !strings.Contains(wrapped.Error(), expected) {
t.Errorf("Expected wrapped error to contain %q, got %q", expected, wrapped.Error())
}
}
func TestErrorChaining(t *testing.T) {
// Create a chain of errors
root := errors.New("root cause")
level1 := Wrap(root, "level 1")
level2 := Wrap(level1, "level 2")
level3 := Wrap(level2, "level 3")
// Test unwrapping
if !errors.Is(level3, root) {
t.Errorf("Expected error chain to preserve root cause")
}
// Test that we can unwrap multiple levels
unwrapped := errors.Unwrap(level3)
if unwrapped != level2 {
t.Errorf("Expected first unwrap to return level2")
}
unwrapped = errors.Unwrap(unwrapped)
if unwrapped != level1 {
t.Errorf("Expected second unwrap to return level1")
}
}
func TestStackTrace(t *testing.T) {
err := NewInternalError("test error", nil)
if len(err.Stack()) == 0 {
t.Errorf("Expected stack trace to be captured")
}
trace := err.StackTrace()
if trace == "" {
t.Errorf("Expected stack trace string to be non-empty")
}
// Stack trace should contain this test function
if !strings.Contains(trace, "TestStackTrace") {
t.Errorf("Expected stack trace to contain test function name: %s", trace)
}
}
func TestNew(t *testing.T) {
err := New("test error")
if err.Error() != "test error" {
t.Errorf("Expected error message 'test error', got %q", err.Error())
}
// Check that it implements our Error interface
var customErr Error
if !errors.As(err, &customErr) {
t.Errorf("Expected New() to return an Error interface")
}
}
func TestNewf(t *testing.T) {
err := Newf("error code: %d, message: %s", 404, "not found")
expected := "error code: 404, message: not found"
if err.Error() != expected {
t.Errorf("Expected error message %q, got %q", expected, err.Error())
}
}
func TestSentinelErrors(t *testing.T) {
tests := []struct {
name string
err error
}{
{"ErrNotFound", ErrNotFound},
{"ErrUnauthorized", ErrUnauthorized},
{"ErrForbidden", ErrForbidden},
{"ErrConflict", ErrConflict},
{"ErrInvalidInput", ErrInvalidInput},
{"ErrTimeout", ErrTimeout},
{"ErrServiceUnavailable", ErrServiceUnavailable},
{"ErrInternal", ErrInternal},
{"ErrTooManyRequests", ErrTooManyRequests},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrapped := fmt.Errorf("wrapped: %w", tt.err)
if !errors.Is(wrapped, tt.err) {
t.Errorf("Expected errors.Is to work with sentinel error")
}
})
}
}
func BenchmarkNewValidationError(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = NewValidationError("field", "message", "value")
}
}
func BenchmarkWrap(b *testing.B) {
err := errors.New("original error")
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = Wrap(err, "wrapped")
}
}
func BenchmarkStackTrace(b *testing.B) {
err := NewInternalError("test", nil)
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = err.StackTrace()
}
}

pkg/errors/example_test.go Normal file

@@ -0,0 +1,166 @@
package errors_test
import (
"fmt"
"net/http/httptest"
"github.com/DeBrosOfficial/network/pkg/errors"
)
// Example demonstrates creating and using validation errors.
func ExampleNewValidationError() {
err := errors.NewValidationError("email", "invalid email format", "not-an-email")
fmt.Println(err.Error())
fmt.Println("Code:", err.Code())
// Output:
// validation error: email: invalid email format
// Code: VALIDATION_ERROR
}
// Example demonstrates creating and using not found errors.
func ExampleNewNotFoundError() {
err := errors.NewNotFoundError("user", "123")
fmt.Println(err.Error())
fmt.Println("HTTP Status:", errors.StatusCode(err))
// Output:
// user with ID '123' not found
// HTTP Status: 404
}
// Example demonstrates wrapping errors with context.
func ExampleWrap() {
originalErr := errors.NewNotFoundError("user", "123")
wrappedErr := errors.Wrap(originalErr, "failed to fetch user profile")
fmt.Println(wrappedErr.Error())
fmt.Println("Is NotFound:", errors.IsNotFound(wrappedErr))
// Output:
// failed to fetch user profile: user with ID '123' not found
// Is NotFound: true
}
// Example demonstrates checking error types.
func ExampleIsNotFound() {
err := errors.NewNotFoundError("user", "123")
if errors.IsNotFound(err) {
fmt.Println("User not found")
}
// Output:
// User not found
}
// Example demonstrates checking if an error should be retried.
func ExampleShouldRetry() {
timeoutErr := errors.NewTimeoutError("database query", "5s")
notFoundErr := errors.NewNotFoundError("user", "123")
fmt.Println("Timeout should retry:", errors.ShouldRetry(timeoutErr))
fmt.Println("Not found should retry:", errors.ShouldRetry(notFoundErr))
// Output:
// Timeout should retry: true
// Not found should retry: false
}
// Example demonstrates converting errors to HTTP responses.
func ExampleToHTTPError() {
err := errors.NewNotFoundError("user", "123")
httpErr := errors.ToHTTPError(err, "trace-abc-123")
fmt.Println("Status:", httpErr.Status)
fmt.Println("Code:", httpErr.Code)
fmt.Println("Message:", httpErr.Message)
fmt.Println("Resource:", httpErr.Details["resource"])
// Output:
// Status: 404
// Code: NOT_FOUND
// Message: user not found
// Resource: user
}
// Example demonstrates writing HTTP error responses.
func ExampleWriteHTTPError() {
err := errors.NewValidationError("email", "invalid format", "bad-email")
// Create a test response recorder
w := httptest.NewRecorder()
// Write the error response
errors.WriteHTTPError(w, err, "trace-xyz")
fmt.Println("Status Code:", w.Code)
fmt.Println("Content-Type:", w.Header().Get("Content-Type"))
// Output:
// Status Code: 400
// Content-Type: application/json
}
// Example demonstrates using error categories.
func ExampleGetCategory() {
code := errors.CodeNotFound
category := errors.GetCategory(code)
fmt.Println("Category:", category)
fmt.Println("Is Client Error:", errors.IsClientError(code))
fmt.Println("Is Server Error:", errors.IsServerError(code))
// Output:
// Category: CLIENT_ERROR
// Is Client Error: true
// Is Server Error: false
}
// Example demonstrates creating service errors.
func ExampleNewServiceError() {
err := errors.NewServiceError("rqlite", "database unavailable", 503, nil)
fmt.Println(err.Error())
fmt.Println("Should Retry:", errors.ShouldRetry(err))
// Output:
// database unavailable
// Should Retry: true
}
// Example demonstrates creating internal errors with context.
func ExampleNewInternalError() {
dbErr := fmt.Errorf("connection refused")
err := errors.NewInternalError("failed to save user", dbErr).WithOperation("saveUser")
fmt.Println("Message:", err.Message())
fmt.Println("Operation:", err.Operation)
// Output:
// Message: failed to save user
// Operation: saveUser
}
// Example demonstrates HTTP status code mapping.
func ExampleStatusCode() {
tests := []error{
errors.NewValidationError("field", "invalid", nil),
errors.NewNotFoundError("user", "123"),
errors.NewUnauthorizedError("invalid token"),
errors.NewForbiddenError("resource", "delete"),
errors.NewTimeoutError("operation", "30s"),
}
for _, err := range tests {
fmt.Printf("%s -> %d\n", errors.GetErrorCode(err), errors.StatusCode(err))
}
// Output:
// VALIDATION_ERROR -> 400
// NOT_FOUND -> 404
// UNAUTHORIZED -> 401
// FORBIDDEN -> 403
// TIMEOUT -> 408
}
// Example demonstrates getting the root cause of an error chain.
func ExampleCause() {
root := fmt.Errorf("database connection failed")
level1 := errors.Wrap(root, "failed to fetch user")
level2 := errors.Wrap(level1, "API request failed")
cause := errors.Cause(level2)
fmt.Println(cause.Error())
// Output:
// database connection failed
}

pkg/errors/helpers.go Normal file

@@ -0,0 +1,175 @@
package errors
import "errors"
// IsNotFound checks if an error indicates a resource was not found.
func IsNotFound(err error) bool {
if err == nil {
return false
}
var notFoundErr *NotFoundError
return errors.As(err, &notFoundErr) || errors.Is(err, ErrNotFound)
}
// IsValidation checks if an error is a validation error.
func IsValidation(err error) bool {
if err == nil {
return false
}
var validationErr *ValidationError
return errors.As(err, &validationErr)
}
// IsUnauthorized checks if an error indicates lack of authentication.
func IsUnauthorized(err error) bool {
if err == nil {
return false
}
var unauthorizedErr *UnauthorizedError
return errors.As(err, &unauthorizedErr) || errors.Is(err, ErrUnauthorized)
}
// IsForbidden checks if an error indicates lack of authorization.
func IsForbidden(err error) bool {
if err == nil {
return false
}
var forbiddenErr *ForbiddenError
return errors.As(err, &forbiddenErr) || errors.Is(err, ErrForbidden)
}
// IsConflict checks if an error indicates a resource conflict.
func IsConflict(err error) bool {
if err == nil {
return false
}
var conflictErr *ConflictError
return errors.As(err, &conflictErr) || errors.Is(err, ErrConflict)
}
// IsTimeout checks if an error indicates a timeout.
func IsTimeout(err error) bool {
if err == nil {
return false
}
var timeoutErr *TimeoutError
return errors.As(err, &timeoutErr) || errors.Is(err, ErrTimeout)
}
// IsRateLimit checks if an error indicates rate limiting.
func IsRateLimit(err error) bool {
if err == nil {
return false
}
var rateLimitErr *RateLimitError
return errors.As(err, &rateLimitErr) || errors.Is(err, ErrTooManyRequests)
}
// IsServiceUnavailable checks if an error indicates a service is unavailable.
func IsServiceUnavailable(err error) bool {
if err == nil {
return false
}
var serviceErr *ServiceError
return errors.As(err, &serviceErr) || errors.Is(err, ErrServiceUnavailable)
}
// IsInternal checks if an error is an internal error.
func IsInternal(err error) bool {
if err == nil {
return false
}
var internalErr *InternalError
return errors.As(err, &internalErr) || errors.Is(err, ErrInternal)
}
// ShouldRetry checks if an operation should be retried based on the error.
func ShouldRetry(err error) bool {
if err == nil {
return false
}
// Check if it's a retryable error type
if IsTimeout(err) || IsServiceUnavailable(err) {
return true
}
// Check the error code
var customErr Error
if errors.As(err, &customErr) {
return IsRetryable(customErr.Code())
}
return false
}
// GetErrorCode extracts the error code from an error.
func GetErrorCode(err error) string {
if err == nil {
return CodeOK
}
var customErr Error
if errors.As(err, &customErr) {
return customErr.Code()
}
// Try to infer from sentinel errors
switch {
case IsNotFound(err):
return CodeNotFound
case IsUnauthorized(err):
return CodeUnauthorized
case IsForbidden(err):
return CodeForbidden
case IsConflict(err):
return CodeConflict
case IsTimeout(err):
return CodeTimeout
case IsRateLimit(err):
return CodeRateLimit
case IsServiceUnavailable(err):
return CodeServiceUnavailable
default:
return CodeInternal
}
}
// GetErrorMessage extracts a human-readable message from an error.
func GetErrorMessage(err error) string {
if err == nil {
return ""
}
var customErr Error
if errors.As(err, &customErr) {
return customErr.Message()
}
return err.Error()
}
// Cause returns the underlying cause of an error.
// It unwraps the error chain until it finds the root cause.
func Cause(err error) error {
for {
unwrapper, ok := err.(interface{ Unwrap() error })
if !ok {
return err
}
underlying := unwrapper.Unwrap()
if underlying == nil {
return err
}
err = underlying
}
}

pkg/errors/helpers_test.go Normal file

@@ -0,0 +1,617 @@
package errors
import (
"errors"
"testing"
)
func TestIsNotFound(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "NotFoundError",
err: NewNotFoundError("user", "123"),
expected: true,
},
{
name: "sentinel ErrNotFound",
err: ErrNotFound,
expected: true,
},
{
name: "wrapped NotFoundError",
err: Wrap(NewNotFoundError("user", "123"), "context"),
expected: true,
},
{
name: "wrapped sentinel",
err: Wrap(ErrNotFound, "context"),
expected: true,
},
{
name: "other error",
err: NewInternalError("internal", nil),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsNotFound(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsValidation(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "ValidationError",
err: NewValidationError("field", "invalid", nil),
expected: true,
},
{
name: "wrapped ValidationError",
err: Wrap(NewValidationError("field", "invalid", nil), "context"),
expected: true,
},
{
name: "other error",
err: NewNotFoundError("user", "123"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsValidation(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsUnauthorized(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "UnauthorizedError",
err: NewUnauthorizedError("invalid token"),
expected: true,
},
{
name: "sentinel ErrUnauthorized",
err: ErrUnauthorized,
expected: true,
},
{
name: "wrapped UnauthorizedError",
err: Wrap(NewUnauthorizedError("invalid token"), "context"),
expected: true,
},
{
name: "other error",
err: NewForbiddenError("resource", "action"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsUnauthorized(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsForbidden(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "ForbiddenError",
err: NewForbiddenError("resource", "action"),
expected: true,
},
{
name: "sentinel ErrForbidden",
err: ErrForbidden,
expected: true,
},
{
name: "wrapped ForbiddenError",
err: Wrap(NewForbiddenError("resource", "action"), "context"),
expected: true,
},
{
name: "other error",
err: NewUnauthorizedError("invalid token"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsForbidden(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsConflict(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "ConflictError",
err: NewConflictError("user", "email", "test@example.com"),
expected: true,
},
{
name: "sentinel ErrConflict",
err: ErrConflict,
expected: true,
},
{
name: "wrapped ConflictError",
err: Wrap(NewConflictError("user", "email", "test@example.com"), "context"),
expected: true,
},
{
name: "other error",
err: NewNotFoundError("user", "123"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsConflict(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsTimeout(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "TimeoutError",
err: NewTimeoutError("operation", "30s"),
expected: true,
},
{
name: "sentinel ErrTimeout",
err: ErrTimeout,
expected: true,
},
{
name: "wrapped TimeoutError",
err: Wrap(NewTimeoutError("operation", "30s"), "context"),
expected: true,
},
{
name: "other error",
err: NewInternalError("internal", nil),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsTimeout(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsRateLimit(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "RateLimitError",
err: NewRateLimitError(100, 60),
expected: true,
},
{
name: "sentinel ErrTooManyRequests",
err: ErrTooManyRequests,
expected: true,
},
{
name: "wrapped RateLimitError",
err: Wrap(NewRateLimitError(100, 60), "context"),
expected: true,
},
{
name: "other error",
err: NewTimeoutError("operation", "30s"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsRateLimit(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsServiceUnavailable(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "ServiceError",
err: NewServiceError("rqlite", "unavailable", 503, nil),
expected: true,
},
{
name: "sentinel ErrServiceUnavailable",
err: ErrServiceUnavailable,
expected: true,
},
{
name: "wrapped ServiceError",
err: Wrap(NewServiceError("rqlite", "unavailable", 503, nil), "context"),
expected: true,
},
{
name: "other error",
err: NewTimeoutError("operation", "30s"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsServiceUnavailable(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestIsInternal(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "InternalError",
err: NewInternalError("internal error", nil),
expected: true,
},
{
name: "sentinel ErrInternal",
err: ErrInternal,
expected: true,
},
{
name: "wrapped InternalError",
err: Wrap(NewInternalError("internal error", nil), "context"),
expected: true,
},
{
name: "other error",
err: NewNotFoundError("user", "123"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsInternal(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestShouldRetry(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "timeout error",
err: NewTimeoutError("operation", "30s"),
expected: true,
},
{
name: "service unavailable error",
err: NewServiceError("rqlite", "unavailable", 503, nil),
expected: true,
},
{
name: "not found error",
err: NewNotFoundError("user", "123"),
expected: false,
},
{
name: "validation error",
err: NewValidationError("field", "invalid", nil),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := ShouldRetry(tt.err)
if result != tt.expected {
t.Errorf("Expected %v, got %v", tt.expected, result)
}
})
}
}
func TestGetErrorCode(t *testing.T) {
tests := []struct {
name string
err error
expectedCode string
}{
{
name: "nil error",
err: nil,
expectedCode: CodeOK,
},
{
name: "validation error",
err: NewValidationError("field", "invalid", nil),
expectedCode: CodeValidation,
},
{
name: "not found error",
err: NewNotFoundError("user", "123"),
expectedCode: CodeNotFound,
},
{
name: "unauthorized error",
err: NewUnauthorizedError("invalid token"),
expectedCode: CodeUnauthorized,
},
{
name: "forbidden error",
err: NewForbiddenError("resource", "action"),
expectedCode: CodeForbidden,
},
{
name: "conflict error",
err: NewConflictError("user", "email", "test@example.com"),
expectedCode: CodeConflict,
},
{
name: "timeout error",
err: NewTimeoutError("operation", "30s"),
expectedCode: CodeTimeout,
},
{
name: "rate limit error",
err: NewRateLimitError(100, 60),
expectedCode: CodeRateLimit,
},
{
name: "service error",
err: NewServiceError("rqlite", "unavailable", 503, nil),
expectedCode: CodeServiceUnavailable,
},
{
name: "internal error",
err: NewInternalError("internal", nil),
expectedCode: CodeInternal,
},
{
name: "sentinel ErrNotFound",
err: ErrNotFound,
expectedCode: CodeNotFound,
},
{
name: "standard error",
err: errors.New("generic error"),
expectedCode: CodeInternal,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
code := GetErrorCode(tt.err)
if code != tt.expectedCode {
t.Errorf("Expected code %s, got %s", tt.expectedCode, code)
}
})
}
}
func TestGetErrorMessage(t *testing.T) {
tests := []struct {
name string
err error
expectedMessage string
}{
{
name: "nil error",
err: nil,
expectedMessage: "",
},
{
name: "validation error",
err: NewValidationError("field", "invalid format", nil),
expectedMessage: "invalid format",
},
{
name: "not found error",
err: NewNotFoundError("user", "123"),
expectedMessage: "user not found",
},
{
name: "standard error",
err: errors.New("generic error"),
expectedMessage: "generic error",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
message := GetErrorMessage(tt.err)
if message != tt.expectedMessage {
t.Errorf("Expected message %q, got %q", tt.expectedMessage, message)
}
})
}
}
func TestCause(t *testing.T) {
t.Run("unwrap error chain", func(t *testing.T) {
root := errors.New("root cause")
level1 := Wrap(root, "level 1")
level2 := Wrap(level1, "level 2")
level3 := Wrap(level2, "level 3")
cause := Cause(level3)
if cause != root {
t.Errorf("Expected to find root cause, got %v", cause)
}
})
t.Run("error without cause", func(t *testing.T) {
err := errors.New("standalone error")
cause := Cause(err)
if cause != err {
t.Errorf("Expected to return same error, got %v", cause)
}
})
t.Run("custom error with cause", func(t *testing.T) {
root := errors.New("database error")
wrapped := NewInternalError("failed to save", root)
cause := Cause(wrapped)
if cause != root {
t.Errorf("Expected to find root cause, got %v", cause)
}
})
}
func BenchmarkIsNotFound(b *testing.B) {
err := NewNotFoundError("user", "123")
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = IsNotFound(err)
}
}
func BenchmarkShouldRetry(b *testing.B) {
err := NewTimeoutError("operation", "30s")
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = ShouldRetry(err)
}
}
func BenchmarkGetErrorCode(b *testing.B) {
err := NewValidationError("field", "invalid", nil)
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = GetErrorCode(err)
}
}
func BenchmarkCause(b *testing.B) {
root := errors.New("root")
wrapped := Wrap(Wrap(Wrap(root, "l1"), "l2"), "l3")
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = Cause(wrapped)
}
}
