Rewrite docs for modern node/client architecture and install system

This commit is contained in:
anonpenguin 2025-08-14 15:19:23 +03:00
parent 170b06b213
commit a6129d3fc2
3 changed files with 447 additions and 1492 deletions


@ -3,574 +3,266 @@
## Table of Contents
- [Project Overview](#project-overview)
- [Product Requirements Document (PRD)](#product-requirements-document-prd)
- [Architecture Overview](#architecture-overview)
- [Codebase Structure](#codebase-structure)
- [Key Components](#key-components)
- [Configuration System](#configuration-system)
- [Node vs Client Roles](#node-vs-client-roles)
- [Network Protocol & Data Flow](#network-protocol--data-flow)
- [Build & Development](#build--development)
- [API Reference](#api-reference)
- [Troubleshooting](#troubleshooting)
- [Example Application: Anchat](#example-application-anchat)
---
## Project Overview
**DeBros Network Cluster** is a decentralized peer-to-peer (P2P) system built in Go, providing distributed database operations, key-value storage, pub/sub messaging, and peer management. It is designed for resilient, distributed data management and communication, with a clear separation between full network nodes and lightweight clients.
## Product Requirements Document (PRD)
### Vision
Create a robust, decentralized network platform that enables applications to seamlessly share data, communicate, and discover peers in a distributed environment.
### Core Requirements
#### Functional Requirements
1. **Distributed Database Operations**
- SQL query execution across network nodes
- ACID transactions with eventual consistency
- Schema management and table operations
- Multi-node resilience with automatic failover
2. **Key-Value Storage**
- Distributed storage with namespace isolation
- CRUD operations with consistency guarantees
- Prefix-based querying and key enumeration
- Data replication across network participants
3. **Pub/Sub Messaging**
- Topic-based publish/subscribe communication
- Real-time message delivery with ordering guarantees
- Subscription management with automatic cleanup
- Namespace isolation per application
4. **Peer Discovery & Management**
- Automatic peer discovery using DHT (Distributed Hash Table)
- Bootstrap node support for network joining
- Connection health monitoring and recovery
- Peer exchange for network growth
5. **Application Isolation**
- Namespace-based multi-tenancy
- Per-application data segregation
- Independent configuration and lifecycle management
#### Non-Functional Requirements
1. **Reliability**: 99.9% uptime with automatic failover
2. **Scalability**: Support 100+ nodes with linear performance
3. **Security**: End-to-end encryption for sensitive data
4. **Performance**: <100ms latency for local operations
5. **Developer Experience**: Simple client API with comprehensive examples
### Success Metrics
- Network uptime > 99.9%
- Peer discovery time < 30 seconds
- Database operation latency < 500ms
- Message delivery success rate > 99.5%
---
## Architecture Overview
The architecture is modular and robust, supporting both full nodes (which run core services and participate in discovery) and lightweight clients (which connect to the network via bootstrap peers).
```
┌─────────────────────────────────────────────────────────────┐
│                    DeBros Network Cluster                    │
├─────────────────────────────────────────────────────────────┤
│                      Application Layer                       │
│  ┌─────────────┐ ┌─────────────┐ ┌───────────────────────┐  │
│  │   Anchat    │ │ Custom App  │ │       CLI Tools       │  │
│  └─────────────┘ └─────────────┘ └───────────────────────┘  │
├─────────────────────────────────────────────────────────────┤
│                          Client API                          │
│  ┌─────────────┐ ┌─────────────┐ ┌───────────────────────┐  │
│  │  Database   │ │   Storage   │ │        PubSub         │  │
│  │   Client    │ │   Client    │ │        Client         │  │
│  └─────────────┘ └─────────────┘ └───────────────────────┘  │
├─────────────────────────────────────────────────────────────┤
│                        Network Layer                         │
│  ┌─────────────┐ ┌─────────────┐ ┌───────────────────────┐  │
│  │    Node     │ │  Discovery  │ │        PubSub         │  │
│  │ (Full P2P)  │ │   Manager   │ │        Manager        │  │
│  └─────────────┘ └─────────────┘ └───────────────────────┘  │
├─────────────────────────────────────────────────────────────┤
│                   Database Layer (RQLite)                    │
│  ┌─────────────┐                                            │
│  │   RQLite    │                                            │
│  │  Consensus  │                                            │
│  └─────────────┘                                            │
└─────────────────────────────────────────────────────────────┘
```
### Key Design Principles

- **Modularity:** Each component is independently testable and replaceable.
- **Fault Tolerance:** The network continues operating through node failures.
- **Consistency:** Strong consistency for database operations, eventual consistency for discovery.
- **Security:** End-to-end encryption, peer authentication, and namespace isolation.
- **Performance:** Optimized for common operations, with connection pooling and caching.
---
## Codebase Structure
```
network/
├── cmd/ # Executables
│ ├── node/ # Network node (full participant)
│ │ └── main.go # Node entrypoint
│ └── cli/ # Command-line interface
│ └── main.go # CLI entrypoint
├── pkg/ # Core packages
│ ├── client/ # Lightweight client API
│ ├── node/ # Full node implementation
│ ├── config/ # Centralized configuration management
│ ├── database/ # RQLite integration
│ ├── storage/ # Distributed key-value storage
│ ├── pubsub/ # Pub/Sub messaging
│ ├── discovery/ # Peer discovery (node only)
│ ├── logging/ # Structured and colored logging
│ └── anyoneproxy/ # Optional SOCKS5 proxy support
├── configs/ # YAML configuration files
│ ├── node.yaml # Node config
│ └── bootstrap.yaml # Bootstrap config (legacy, now unified)
├── scripts/ # Install and utility scripts
└── data/ # Runtime data (identity, db, logs)
```
---
## Key Components
### 1. **Network Client (`pkg/client/`)**

- **Role:** Lightweight P2P participant for apps and the CLI; the main entry point for applications to interact with the network.
- **Features:** Connects only to bootstrap peers (no peer discovery) and exposes the Database, Storage, PubSub, and NetworkInfo interfaces, with automatic connection management, retry logic, health monitoring, and graceful shutdown.
- **Isolation:** Namespaced per application.

### 2. **Node (`pkg/node/`)**

- **Role:** Full P2P participant; runs the core services (RQLite, storage, pubsub) and handles peer discovery and network management.
- **Features:** Peer discovery, service registration, connection monitoring, and data replication.

### 3. **Configuration (`pkg/config/`)**

- **Centralized:** All configuration lives in YAML files, with CLI flags and environment variables overriding as needed.
- **Unified:** Node and client configs share the same structure; a bootstrap node is simply a node with no join address.

### 4. **Database Layer (`pkg/database/`)**

- **RQLite:** Distributed SQLite with Raft consensus, automatic leader election, and failover.
- **Client API:** SQL queries, transactions, and schema management.

### 5. **Storage System (`pkg/storage/`)**

- **Distributed KV:** Namespace-isolated CRUD operations, prefix queries, and replication.

### 6. **Pub/Sub System (`pkg/pubsub/`)**

- **Messaging:** Topic-based, real-time delivery with automatic subscription management and namespace isolation.

### 7. **Discovery (`pkg/discovery/`)**

- **Node only:** Handles peer discovery via the peerstore and peer exchange. There is no DHT/Kademlia in the client.
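The client exposes these capabilities through accessor methods, as used throughout the API Reference below. A minimal sketch of the shape (names mirror the usage examples in this document; the authoritative definitions live in `pkg/client/interface.go` and may differ):

```go
// Sketch only: method names follow the examples in this document;
// actual signatures and type names in pkg/client/interface.go may differ.
type NetworkClient interface {
	Connect() error
	Disconnect() error
	Health() (*HealthStatus, error) // HealthStatus name assumed

	Database() DatabaseClient // SQL over RQLite
	Storage() StorageClient   // namespaced key-value store
	PubSub() PubSubClient     // topic-based messaging
	Network() NetworkInfo     // peer and status information
}
```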
---
## Configuration System
- **Primary Source:** YAML files (`configs/node.yaml`)
- **Overrides:** CLI flags > Environment variables > YAML > Code defaults
- **Examples:**
- `data_dir`, `key_file`, `listen_addresses`, `solana_wallet`
- `rqlite_port`, `rqlite_raft_port`, `rqlite_join_address`
- `bootstrap_peers`, `discovery_interval`
- Logging: `level`, `file`
**Client Configuration Precedence:**
1. Explicit in `ClientConfig`
2. Environment variables (`RQLITE_NODES`, `BOOTSTRAP_PEERS`)
3. Library defaults (from config package)
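Putting the node-side keys above together, a minimal `configs/node.yaml` might look like the following (illustrative sketch only; section nesting and defaults may differ from the shipped file):

```yaml
# Illustrative sketch; see configs/node.yaml for the authoritative layout.
node:
  data_dir: "./data/node"
  key_file: "./data/node/identity.key"
  listen_addresses:
    - "/ip4/0.0.0.0/tcp/4001"
  solana_wallet: "<WALLET_ADDRESS>"

database:
  rqlite_port: 5001
  rqlite_raft_port: 7001
  rqlite_join_address: "http://<bootstrap-host>:5001" # leave empty on the bootstrap node

discovery:
  bootstrap_peers:
    - "/ip4/<bootstrap-ip>/tcp/4001/p2p/<PEER_ID>"
  discovery_interval: "10s"

logging:
  level: "info"
  file: "./logs/node.log"
```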
---
## Node vs Client Roles
### **Node (`pkg/node/`)**
- Runs full network services (RQLite, storage, pubsub)
- Handles peer discovery and network topology
- Participates in consensus and replication
- Manages service lifecycle and monitoring
### **Client (`pkg/client/`)**
- Lightweight participant (does not run services)
- Connects only to known bootstrap peers
- No peer discovery or DHT
- Consumes network services via API (Database, Storage, PubSub, NetworkInfo)
- Used by CLI and application integrations
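In practice the split looks like this: a node is started as a long-running process, while the CLI (or any app built on `pkg/client`) simply dials a bootstrap peer and consumes services (commands taken from the Build & Development and Troubleshooting sections; the multiaddr is a placeholder):

```bash
# Full node: long-lived service that participates in discovery, consensus, and replication
make run-node

# Lightweight client: connect through a bootstrap peer and consume services
./bin/network-cli health --bootstrap /ip4/127.0.0.1/tcp/4001/p2p/<BOOTSTRAP_PEER_ID>
./bin/network-cli peers  --bootstrap /ip4/127.0.0.1/tcp/4001/p2p/<BOOTSTRAP_PEER_ID>
```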
---
## Network Protocol & Data Flow
### **Connection Establishment**
- **Node:** Connects to bootstrap peers, discovers additional peers, registers services.
- **Client:** Connects only to bootstrap peers.
### **Message Types**
- **Control:** Node status, heartbeats, topology updates
- **Database:** SQL queries, transactions, schema ops
- **Storage:** KV operations, replication
- **PubSub:** Topic subscriptions, published messages
### **Security Model**
- **Transport:** Noise/TLS encryption for all connections
- **Authentication:** Peer identity verification
- **Isolation:** Namespace-based access control
### **Data Flow**
- **Database:** Client → DatabaseClient → RQLite Leader → Raft Consensus → All Nodes
- **Storage:** Client → StorageClient → Node → Replication
- **PubSub:** Client → PubSubClient → Node → Topic Router → Subscribers
---
## Build & Development
### **Prerequisites**
- Go 1.21+
- RQLite
- Git
- Make
### **Build Commands**
```bash
make build # Build all executables
make test # Run tests
make run-node # Start node (auto-detects bootstrap vs regular)
```
### **Development Workflow**
- Use `make run-node` for local development.
- Edit YAML configs for node settings.
- Use CLI for network operations and testing.
---
## API Reference
### **Client Creation**
```go
import "git.debros.io/DeBros/network/pkg/client"
config := client.DefaultClientConfig("my-app")
config.BootstrapPeers = []string{"/ip4/127.0.0.1/tcp/4001/p2p/{PEER_ID}"}
client, err := client.NewClient(config)
if err != nil {
log.Fatal(err)
}
err = client.Connect()
if err != nil {
log.Fatal(err)
}
defer client.Disconnect()
```
### Centralized Defaults & Endpoint Precedence
- Defaults are centralized in the client package:
- `client.DefaultBootstrapPeers()` exposes default multiaddrs from `pkg/constants/bootstrap.go`.
- `client.DefaultDatabaseEndpoints()` derives HTTP DB endpoints from bootstrap peers (default port 5001 or `RQLITE_PORT`).
- `ClientConfig` now includes `DatabaseEndpoints []string` to explicitly set DB URLs.
- Resolution order used by the database client:
1. `ClientConfig.DatabaseEndpoints`
2. `RQLITE_NODES` environment variable (comma/space separated)
3. `client.DefaultDatabaseEndpoints()`
- Endpoints are normalized to include scheme and port; duplicates are removed.
Example:
```go
cfg := client.DefaultClientConfig("app")
cfg.BootstrapPeers = client.DefaultBootstrapPeers()
// Optionally pin DB endpoints (normalized to include scheme and port)
cfg.DatabaseEndpoints = []string{"http://db1:5001", "db2:5001"}
cli, _ := client.NewClient(cfg)
_ = cli.Connect()
```

### **Database Operations**

```go
// Create a table
err := client.Database().CreateTable(ctx, `
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        email TEXT UNIQUE
    )
`)

// Insert data
result, err := client.Database().Query(ctx,
    "INSERT INTO users (name, email) VALUES (?, ?)",
    "Alice", "alice@example.com")

// Query data
result, err = client.Database().Query(ctx,
    "SELECT id, name, email FROM users WHERE name = ?", "Alice")
```

### **Storage Operations**

```go
// Store data
err := client.Storage().Put(ctx, "user:123", []byte(`{"name":"Alice"}`))

// Retrieve data
data, err := client.Storage().Get(ctx, "user:123")

// List keys by prefix
keys, err := client.Storage().List(ctx, "user:", 10)

// Check existence
exists, err := client.Storage().Exists(ctx, "user:123")
```

### **PubSub Operations**

```go
// Subscribe to messages
handler := func(topic string, data []byte) error {
    fmt.Printf("Received on %s: %s\n", topic, string(data))
    return nil
}
err := client.PubSub().Subscribe(ctx, "notifications", handler)

// Publish a message
err = client.PubSub().Publish(ctx, "notifications", []byte("Hello, World!"))

// List subscribed topics
topics, err := client.PubSub().ListTopics(ctx)
```

### **Network Information**

```go
// Get network status
status, err := client.Network().GetStatus(ctx)
fmt.Printf("Node ID: %s, Peers: %d\n", status.NodeID, status.PeerCount)

// Get connected peers
peers, err := client.Network().GetPeers(ctx)
for _, peer := range peers {
    fmt.Printf("Peer: %s, Connected: %v\n", peer.ID, peer.Connected)
}

// Connect to a specific peer
err = client.Network().ConnectToPeer(ctx, "/ip4/192.168.1.100/tcp/4001/p2p/{PEER_ID}")
```
---
## Troubleshooting
### **Common Issues**
- **Bootstrap Connection Failed:** Check peer ID, port, firewall, and node status.
- **Database Timeout:** Ensure RQLite ports are open, leader election is complete, and join address is correct.
- **Message Delivery Failures:** Verify topic names, subscription status, and network connectivity.
- **High Memory Usage:** Unsubscribe from topics when done, monitor connection pool size.
### **Health Checks**
```go
health, err := client.Health()
if health.Status != "healthy" {
log.Printf("Unhealthy: %+v", health.Checks)
}
```
### **Debugging**
- Enable debug logging: `export LOG_LEVEL=debug`
- Check service logs: `sudo journalctl -u debros-node.service -f`
- Use CLI for health and peer checks: `./bin/network-cli health`, `./bin/network-cli peers`
---
## Example Application: Anchat
The `anchat/` directory contains a full-featured decentralized chat application built on the DeBros Network. Features include:

- Solana wallet integration for user registration
- End-to-end encrypted messaging
- IRC-style, real-time pub/sub chat rooms
- Persistent chat history

It serves as both a practical example and a reference implementation for building applications on the network.
---
_This document provides a modern, accurate context for understanding the DeBros Network Cluster architecture, configuration, and usage patterns. All details reflect the current codebase and best practices._

README.md

@ -1,323 +1,180 @@
# DeBros Network - Distributed P2P Database System
A robust, decentralized peer-to-peer network built in Go, providing distributed SQL database, key-value storage, pub/sub messaging, and resilient peer management. Designed for applications needing reliable, scalable, and secure data sharing without centralized infrastructure.
---
## Table of Contents
- [Features](#features)
- [Architecture Overview](#architecture-overview)
- [System Requirements](#system-requirements)
- [Software](#software)
- [Hardware](#hardware)
- [Network Ports](#network-ports)
- [Quick Start](#quick-start)
- [1. Clone and Setup](#1-clone-and-setup)
- [2. Build All Executables](#2-build-all-executables)
- [3. Start a Bootstrap Node](#3-start-a-bootstrap-node)
- [4. Start Additional Nodes](#4-start-additional-nodes)
- [5. Test with CLI](#5-test-with-cli)
- [Deployment & Installation](#deployment--installation)
- [Configuration](#configuration)
- [Bootstrap and Ports (via flags)](#bootstrap-and-ports-via-flags)
- [CLI Usage](#cli-usage)
- [Network Operations](#network-operations)
- [Storage Operations](#storage-operations)
- [Database Operations](#database-operations)
- [Pub/Sub Messaging](#pubsub-messaging)
- [CLI Options](#cli-options)
- [Development](#development)
- [Project Structure](#project-structure)
- [Build & Test](#build--test)
- [Local Multi-Node Testing](#local-multi-node-testing)
- [Client Library Usage](#client-library-usage)
- [Troubleshooting](#troubleshooting)
- [Common Issues](#common-issues)
- [Debugging & Health Checks](#debugging--health-checks)
- [Service Logs](#service-logs)
- [Logs and Data](#logs-and-data)
- [License](#license)
---
## Features
- **Distributed SQL Database:** RQLite-backed, Raft-consensus, ACID transactions, automatic failover.
- **Key-Value Storage:** Namespaced, replicated, CRUD operations, prefix queries.
- **Pub/Sub Messaging:** Topic-based, real-time, namespaced, automatic cleanup.
- **Peer Discovery & Management:** Nodes discover peers, bootstrap support, health monitoring.
- **Application Isolation:** Namespace-based multi-tenancy, per-app config.
- **Secure by Default:** Noise/TLS transport, peer identity, systemd hardening.
- **Simple Client API:** Lightweight Go client for apps and CLI tools.
---
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│ DeBros Network Cluster │
├─────────────────────────────────────────────────────────────┤
│ Application Layer │
│ ┌─────────────┐ ┌─────────────┐ ┌────────────────────────┐ │
│ │ Anchat │ │ Custom App │ │ CLI Tools │ │
│ └─────────────┘ └─────────────┘ └────────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Client API │
│ ┌─────────────┐ ┌─────────────┐ ┌────────────────────────┐ │
│ │ Database │ │ Storage │ │ PubSub │ │
│ │ Client │ │ Client │ │ Client │ │
│ └─────────────┘ └─────────────┘ └────────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Network Node Layer │
│ ┌─────────────┐ ┌─────────────┐ ┌────────────────────────┐ │
│ │ Discovery │ │ PubSub │ │ Database │ │
│ │ Manager │ │ Manager │ │ (RQLite) │ │
│ └─────────────┘ └─────────────┘ └────────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Transport Layer │
│ ┌─────────────┐ ┌─────────────┐ ┌────────────────────────┐ │
│ │ LibP2P │ │ Noise/TLS │ │ RQLite │ │
│ │ Host │ │ Encryption │ │ Database │ │
│ └─────────────┘ └─────────────┘ └────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
- **Node:** Full P2P participant, runs services, handles peer discovery, database, storage, pubsub.
- **Client:** Lightweight, connects only to bootstrap peers, consumes services, no peer discovery.
---
## System Requirements
### Software
- **Go:** 1.21+ (recommended)
- **RQLite:** 8.x (distributed SQLite)
- **Git:** For source management
- **Make:** For build automation (recommended)
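For example, the prerequisites can be installed roughly as follows (package names and the RQLite version are illustrative; check the RQLite releases page for the current version):

```bash
# macOS (Homebrew)
brew install go rqlite git make

# Ubuntu/Debian
sudo apt update && sudo apt install -y git make
wget https://go.dev/dl/go1.21.6.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.21.6.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# RQLite (Linux, from a release tarball)
wget https://github.com/rqlite/rqlite/releases/download/v8.43.0/rqlite-v8.43.0-linux-amd64.tar.gz
tar -xzf rqlite-v8.43.0-linux-amd64.tar.gz
sudo mv rqlite-v8.43.0-linux-amd64/rqlited /usr/local/bin/

# Verify
go version && rqlited --version
```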
### Hardware
- **Minimum:** 2 CPU cores, 4GB RAM, 10GB disk, stable internet
- **Recommended:** 4+ cores, 8GB+ RAM, 50GB+ SSD, low-latency network
### Network Ports
The system uses these ports by default:
- **4001:** LibP2P P2P communication
- **5001:** RQLite HTTP API
- **7001:** RQLite Raft consensus
Ensure these ports are available or configure firewall rules accordingly. Each additional node on the same host uses the next port in each range (for example 4002, 5002, 7002).
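If a host firewall is enabled (the install script configures UFW automatically), the ports can be opened manually, for example with UFW:

```bash
sudo ufw allow 4001/tcp   # LibP2P
sudo ufw allow 4001/udp   # LibP2P (QUIC)
sudo ufw allow 5001/tcp   # RQLite HTTP API
sudo ufw allow 7001/tcp   # RQLite Raft consensus
```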
---
## Quick Start
### 1. Clone and Setup
```bash
# Clone the repository
git clone https://git.debros.io/DeBros/network.git
cd network
```
### 2. Build All Executables
```bash
# Build all network executables
make build
```
### 3. Start a Bootstrap Node
```bash
# Start the bootstrap node (LibP2P 4001, RQLite 5001/7001)
make run-node
# Or manually:
go run ./cmd/node -data ./data/bootstrap -p2p-port 4001 -rqlite-http-port 5001 -rqlite-raft-port 7001
```
### 4. Start Additional Nodes
```bash
go run ./cmd/node -id node2 -data ./data/node2 -rqlite-http-port 5002 -rqlite-raft-port 7002 -p2p-port 4002 --disable-anonrc
```
### 5. Test with CLI
```bash
# Check network health
./bin/network-cli health

# List connected peers
./bin/network-cli peers

# Test storage operations
./bin/network-cli storage put test-key "Hello Network"
./bin/network-cli storage get test-key

# Test pub/sub messaging
./bin/network-cli pubsub publish notifications "Hello World"
./bin/network-cli pubsub subscribe notifications 10s
```
---
## Deployment & Installation
### Automated Production Install
Run the install script for a secure, production-ready setup:
```bash
# Download and run the installation script
curl -sSL https://git.debros.io/DeBros/network/raw/branch/main/scripts/install-debros-network.sh | sudo bash
```
**What the Script Does:**
- Detects OS, installs Go, RQLite, dependencies
- Creates `debros` system user, secure directory structure
- Generates LibP2P identity keys
- Clones source, builds binaries
- Sets up systemd service (`debros-node`)
- Configures firewall (UFW) for required ports
- Generates YAML config in `/opt/debros/configs/node.yaml`
**Directory Structure:**
```
/opt/debros/
├── bin/            # Binaries
├── configs/        # YAML configs
├── keys/           # Identity keys
├── data/           # RQLite DB, storage
├── logs/           # Node logs
└── src/            # Source code
```
#### Node Setup
The installation script sets up a **network node**:
- Runs on ports: 4001 (P2P), 5001 (RQLite), 7001 (Raft)
- Participates in DHT for peer discovery and data replication
- Can be deployed on any server or VPS
Run the installation and the service-management commands below with adequate permissions (as root or a user with sudo privileges).
**Service Management:**
```bash
# Check service status
sudo systemctl status debros-node
# Start/stop/restart service
sudo systemctl start debros-node
sudo systemctl stop debros-node
sudo systemctl restart debros-node
# View real-time logs
sudo journalctl -u debros-node.service -f
# Enable/disable auto-start
sudo systemctl enable debros-node
sudo systemctl disable debros-node
# Use CLI tools
/opt/debros/bin/network-cli health
/opt/debros/bin/network-cli peers
/opt/debros/bin/network-cli storage put key value
```
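The generated `debros-node.service` applies the systemd hardening options listed under Security Features below. A representative sketch of such a unit (illustrative only, not the exact file the script writes):

```ini
# Illustrative sketch; the install script writes the actual unit file.
[Unit]
Description=DeBros Network Node
After=network-online.target

[Service]
User=debros
ExecStart=/opt/debros/bin/node --data /opt/debros/data/node
Restart=on-failure
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/opt/debros/data /opt/debros/logs

[Install]
WantedBy=multi-user.target
```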
---
## Configuration
### YAML Config Example (`/opt/debros/configs/node.yaml`)
```yaml
node:
@ -336,200 +193,21 @@ logging:
file: "/opt/debros/logs/node.log"
```
### Security Features

The installation script implements production security best practices:

- **Dedicated User**: Runs as the `debros` system user (not root)
- **File Permissions**: Key files have 600 permissions; directories have proper ownership
- **Systemd Security**: The service runs with `NoNewPrivileges`, `PrivateTmp`, and `ProtectSystem=strict`
- **Firewall**: Automatic UFW configuration for the required ports
- **Network Isolation**: Port assignments are managed to avoid conflicts

### Flags & Environment Variables

- **Flags**: Override config at startup (`--data`, `--p2p-port`, `--rqlite-http-port`, etc.)
- **Env Vars**: Override YAML config (`NODE_ID`, `RQLITE_PORT`, `BOOTSTRAP_PEERS`, etc.)
- **Precedence**: Flags > Env Vars > YAML > Defaults
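For example, with `rqlite_port: 5001` set in `node.yaml`:

```bash
export RQLITE_HTTP_PORT=5002              # env var overrides the YAML value (5001)
go run ./cmd/node --rqlite-http-port 5003 # flag overrides env and YAML, so 5003 wins
```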
### Bootstrap & Database Endpoints
- **Bootstrap peers**: Set in config or via `BOOTSTRAP_PEERS` env var.
- **Database endpoints**: Set in config or via `RQLITE_NODES` env var.
- **Development mode**: Use `NETWORK_DEV_LOCAL=1` for localhost defaults.
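For example (addresses and peer IDs are placeholders):

```bash
# Point clients and the CLI at specific bootstrap peers and RQLite endpoints
export BOOTSTRAP_PEERS="/ip4/10.0.0.10/tcp/4001/p2p/<PEER_ID>,/ip4/10.0.0.11/tcp/4001/p2p/<PEER_ID>"
export RQLITE_NODES="http://10.0.0.10:5001,http://10.0.0.11:5001"
```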
---
#### Updates and Maintenance
```bash
# Update to latest version (re-run the installation script)
curl -sSL https://git.debros.io/DeBros/network/raw/branch/main/scripts/install-debros-network.sh | bash
# Manual source update
cd /opt/debros/src
sudo -u debros git pull
sudo -u debros make build
sudo cp bin/* /opt/debros/bin/
sudo systemctl restart debros-node
# Backup configuration and keys
sudo cp -r /opt/debros/configs /backup/
sudo cp -r /opt/debros/keys /backup/
```
#### Monitoring and Troubleshooting
```bash
# Check if ports are open
sudo netstat -tuln | grep -E "(4001|5001|7001)"
# Check service logs
sudo journalctl -u debros-node.service --since "1 hour ago"
# Check network connectivity
/opt/debros/bin/network-cli health
/opt/debros/bin/network-cli peers
# Check disk usage
du -sh /opt/debros/data/*
# Process information
ps aux | grep debros
```
For more advanced configuration options and development setup, see the sections below.
### Node Startup Flags
- **Bootstrap node**: Just run `make run-node` (auto-selects data dir and identity)
- **Regular node**: Use `--id`, `--data`, `--p2p-port`, `--rqlite-http-port`, `--rqlite-raft-port`, and `--rqlite-join-address <http://bootstrap_host:5001>`
- **Disable anonymous routing**: `--disable-anonrc` (optional)
- **Development localhost defaults**: Use `--disable-anonrc` for local-only testing; the library then returns localhost DB endpoints and bootstrap peers (see also `NETWORK_DEV_LOCAL` below).
- **RQLite ports**: `--rqlite-http-port` (default 5001), `--rqlite-raft-port` (default 7001)
Examples are shown in Quick Start above for local multi-node on a single machine.
### Environment Variables
Precedence: CLI flags > Environment variables > Code defaults. Set any of the following in your shell or `.env`:
- NODE_ID: custom node identifier (e.g. "node2")
- NODE_TYPE: "bootstrap" or "node"
- NODE_LISTEN_ADDRESSES: comma-separated multiaddrs (e.g. "/ip4/0.0.0.0/tcp/4001,/ip4/0.0.0.0/udp/4001/quic")
- DATA_DIR: node data directory (default `./data`)
- MAX_CONNECTIONS: max peer connections (int)
- DB_DATA_DIR: database data directory (default `./data/db`)
- REPLICATION_FACTOR: int (default 3)
- SHARD_COUNT: int (default 16)
- MAX_DB_SIZE: e.g. "1g", "512m", or bytes
- BACKUP_INTERVAL: Go duration (e.g. "24h")
- RQLITE_HTTP_PORT: int (default 5001)
- RQLITE_RAFT_PORT: int (default 7001)
- RQLITE_JOIN_ADDRESS: host:port for Raft join (regular nodes)
- RQLITE_NODES: comma/space-separated DB endpoints (e.g. "http://n1:5001,http://n2:5001"). Used by client if `ClientConfig.DatabaseEndpoints` is empty.
- RQLITE_PORT: default DB HTTP port for constructing library defaults (fallback 5001)
- NETWORK_DEV_LOCAL: when truthy (1/true/yes/on), client defaults use localhost for DB endpoints; default bootstrap peers also return localhost values.
- LOCAL_BOOTSTRAP_MULTIADDR: when set with NETWORK_DEV_LOCAL, overrides default bootstrap with a specific local multiaddr (e.g. `/ip4/127.0.0.1/tcp/4001/p2p/<ID>`)
- ADVERTISE_MODE: "auto" | "localhost" | "ip"
- BOOTSTRAP_PEERS: comma-separated multiaddrs for bootstrap peers
- ENABLE_MDNS: true/false
- ENABLE_DHT: true/false
- DHT_PREFIX: string (default `/network/kad/1.0.0`)
- DISCOVERY_INTERVAL: duration (e.g. "5m")
- ENABLE_TLS: true/false
- PRIVATE_KEY_FILE: path
- CERT_FILE: path
- AUTH_ENABLED: true/false
- LOG_LEVEL: "debug" | "info" | "warn" | "error"
- LOG_FORMAT: "json" | "console"
- LOG_OUTPUT_FILE: path (empty = stdout)
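For example, a `.env` for a secondary local node might look like this (values illustrative; flags still take precedence):

```bash
NODE_ID=node2
DATA_DIR=./data/node2
RQLITE_HTTP_PORT=5002
RQLITE_RAFT_PORT=7002
BOOTSTRAP_PEERS=/ip4/127.0.0.1/tcp/4001/p2p/<BOOTSTRAP_PEER_ID>
LOG_LEVEL=debug
LOG_FORMAT=console
```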
### Centralized Flag/Env Mapping
Flag and environment variable mapping is centralized in `cmd/node/configmap.go` via `MapFlagsAndEnvToConfig`.
This enforces precedence (flags > env > defaults) consistently across the node startup path.
### Centralized Defaults: Bootstrap & Database
- The network library is the single source of truth for defaults.
- Bootstrap peers: `pkg/constants/bootstrap.go` exposed via `client.DefaultBootstrapPeers()`.
- Database HTTP endpoints: derived from bootstrap peers via `client.DefaultDatabaseEndpoints()`.
#### Database Endpoints Precedence
When the client connects to RQLite, endpoints are resolved with this precedence:
1. `ClientConfig.DatabaseEndpoints` (explicitly set by the app)
2. `RQLITE_NODES` environment variable (comma/space separated), e.g. `http://x:5001,http://y:5001`
3. `client.DefaultDatabaseEndpoints()` (constructed from default bootstrap peers)
Notes:
- Default DB port is 5001. Override with `RQLITE_PORT` when constructing defaults.
- Endpoints are normalized to include scheme and port; duplicates are removed.
#### Client Usage Example
```go
cfg := client.DefaultClientConfig("my-app")
// Optional: override bootstrap peers
cfg.BootstrapPeers = []string{"/ip4/127.0.0.1/tcp/4001/p2p/<PEER_ID>"}
// Optional: prefer explicit DB endpoints
cfg.DatabaseEndpoints = []string{"http://127.0.0.1:5001"}
cli, err := client.NewClient(cfg)
// cli.Connect() will prefer cfg.DatabaseEndpoints, then RQLITE_NODES, then defaults
```
#### Development Mode (localhost-only)
To force localhost defaults for both database endpoints and bootstrap peers:
```bash
export NETWORK_DEV_LOCAL=1
# Optional: specify a local bootstrap peer multiaddr with peer ID
export LOCAL_BOOTSTRAP_MULTIADDR="/ip4/127.0.0.1/tcp/4001/p2p/<BOOTSTRAP_PEER_ID>"
# Optional: customize default DB port used in localhost endpoints
export RQLITE_PORT=5001
```
Notes:
- With `NETWORK_DEV_LOCAL`, `client.DefaultDatabaseEndpoints()` returns `http://127.0.0.1:$RQLITE_PORT`.
- `client.DefaultBootstrapPeers()` returns `LOCAL_BOOTSTRAP_MULTIADDR` if set, otherwise `/ip4/127.0.0.1/tcp/4001`.
- If you construct config via `client.DefaultClientConfig(...)`, DB endpoints are pinned to localhost and will override `RQLITE_NODES` automatically.
- When `NETWORK_DEV_LOCAL` is set and `LOCAL_BOOTSTRAP_MULTIADDR` is NOT set, the client attempts to auto-load the local bootstrap multiaddr (with peer ID) from `./data/bootstrap/peer.info` (or `LOCAL_BOOTSTRAP_INFO` path if provided). Only if no file is found does it fall back to `/ip4/127.0.0.1/tcp/4001`.
### Migration Guide for Apps (e.g., anchat)
- **Stop hardcoding endpoints**: Replace any hardcoded bootstrap peers and DB URLs with calls to
`client.DefaultBootstrapPeers()` and, if needed, set `ClientConfig.DatabaseEndpoints`.
- **Prefer config over env**: Set `ClientConfig.DatabaseEndpoints` in your app config. If not set,
the library will read `RQLITE_NODES` for backward compatibility.
- **Keep env compatibility**: Existing environments using `RQLITE_NODES` and `RQLITE_PORT` continue to work.
- **Minimal changes**: Most apps only need to populate `ClientConfig.DatabaseEndpoints` and/or rely on
`client.DefaultDatabaseEndpoints()`; no other code changes required.
Example migration snippet:
```go
import netclient "git.debros.io/DeBros/network/pkg/client"
cfg := netclient.DefaultClientConfig("anchat")
// Use library defaults for bootstrap peers
cfg.BootstrapPeers = netclient.DefaultBootstrapPeers()
// Prefer explicit DB endpoints (can also leave empty to use env or defaults)
cfg.DatabaseEndpoints = []string{"http://127.0.0.1:5001"}
c, err := netclient.NewClient(cfg)
if err != nil { /* handle */ }
if err := c.Connect(); err != nil { /* handle */ }
defer c.Disconnect()
```
## CLI Usage

The CLI can still accept `--bootstrap <multiaddr>` to override discovery when needed.
### Network Operations
@ -568,8 +246,12 @@ The CLI can still accept `--bootstrap <multiaddr>` to override discovery when ne
--format json # Output in JSON format
--timeout 30s # Set operation timeout
--bootstrap <multiaddr> # Override bootstrap peer
--production # Use production bootstrap peers
--disable-anonrc # Disable anonymous routing (Tor/SOCKS5)
```
---
## Development
### Project Structure
@ -577,249 +259,92 @@ The CLI can still accept `--bootstrap <multiaddr>` to override discovery when ne
```
network/
├── cmd/
│ ├── node/ # Network node executable
│ └── cli/ # Command-line interface
├── pkg/
│ ├── client/ # Client library
│ ├── node/ # Node implementation
│ ├── database/ # RQLite integration
│ ├── storage/ # Storage service
│ ├── pubsub/ # Pub/Sub messaging
│ ├── config/ # Centralized config
│ └── discovery/ # Peer discovery (node only)
├── scripts/ # Install, test scripts
├── configs/ # YAML configs
├── bin/ # Built executables
```
### Build & Test
```bash
make build # Build all executables
make test # Run unit tests
make clean # Clean build artifacts
```
### Local Multi-Node Testing
```bash
scripts/test-multinode.sh
```
## Client Library Usage
```go
package main
import (
"context"
"log"
"git.debros.io/DeBros/network/pkg/client"
)
func main() {
// Create client (bootstrap peer discovered automatically)
config := client.DefaultClientConfig("my-app")
networkClient, err := client.NewClient(config)
if err != nil {
log.Fatal(err)
}
// Connect to network
if err := networkClient.Connect(); err != nil {
log.Fatal(err)
}
defer networkClient.Disconnect()
// Use storage
ctx := context.Background()
storage := networkClient.Storage()
err = storage.Put(ctx, "user:123", []byte("user data"))
if err != nil {
log.Fatal(err)
}
data, err := storage.Get(ctx, "user:123")
if err != nil {
log.Fatal(err)
}
log.Printf("Retrieved: %s", string(data))
}
```
---
## Troubleshooting
### Common Issues
#### Bootstrap Connection Failed

- **Symptoms:** `Failed to connect to bootstrap peer`
- **Solutions:** Check that the node is running, firewall settings, and peer ID validity.

#### Database Operations Timeout

- **Symptoms:** `Query timeout` or `No RQLite connection available`
- **Solutions:** Ensure RQLite ports are open, leader election has completed, and the cluster join config is correct.

#### Message Delivery Failures

- **Symptoms:** Messages not received by subscribers
- **Solutions:** Verify topic names, active subscriptions, and network connectivity.

#### High Memory Usage

- **Symptoms:** Memory usage grows continuously
- **Solutions:** Unsubscribe when done, monitor the connection pool, and review message retention.
### Debugging & Health Checks
```bash
export LOG_LEVEL=debug
./bin/network-cli health
./bin/network-cli peers
./bin/network-cli query "SELECT 1"
./bin/network-cli pubsub publish test "hello"
./bin/network-cli pubsub subscribe test 10s
```
### Service Logs
```bash
sudo journalctl -u debros-node.service --since "1 hour ago"
```
### Logs and Data
- Node logs: Console output from each running process
- Data directories: `./data/bootstrap/`, `./data/node/`, etc.
- RQLite data: `./data/<node>/rqlite/`
- Peer info: `./data/<node>/peer.info`
- Bootstrap identity: `./data/bootstrap/identity.key`
- Environment config: `./.env`
---
## License
Distributed under the MIT License. See [LICENSE](LICENSE) for details.
---
## Further Reading
- [DeBros Network Documentation](https://docs.debros.io)
- [RQLite Documentation](https://github.com/rqlite/rqlite)
- [LibP2P Documentation](https://libp2p.io)
---
_This README reflects the latest architecture, configuration, and operational practices for the DeBros Network. For questions or contributions, please open an issue or pull request._


@ -1,6 +1,10 @@
#!/bin/bash
# DeBros Network Node Installation Script (Modern Node-Only Setup)
# Installs, configures, and manages a DeBros network node with secure defaults.
# Supports update-in-place, systemd service, and CLI management.
set -e
trap 'echo -e "${RED}An error occurred. Installation aborted.${NOCOLOR}"; exit 1' ERR
# Color codes
@ -11,39 +15,28 @@ BLUE='\033[38;2;2;128;175m'
YELLOW='\033[1;33m'
NOCOLOR='\033[0m'
# Defaults
INSTALL_DIR="/opt/debros"
REPO_URL="https://git.debros.io/DeBros/network.git"
MIN_GO_VERSION="1.19"
NODE_PORT="4001" # LibP2P port for peer-to-peer communication
RQLITE_PORT="5001" # All nodes use same RQLite HTTP port to join same cluster
RAFT_PORT="7001" # All nodes use same Raft port
MIN_GO_VERSION="1.21"
NODE_PORT="4001"
RQLITE_PORT="5001"
RAFT_PORT="7001"
UPDATE_MODE=false
NON_INTERACTIVE=false
log() { echo -e "${CYAN}[$(date '+%Y-%m-%d %H:%M:%S')]${NOCOLOR} $1"; }
error() { echo -e "${RED}[ERROR]${NOCOLOR} $1"; }
success() { echo -e "${GREEN}[SUCCESS]${NOCOLOR} $1"; }
warning() { echo -e "${YELLOW}[WARNING]${NOCOLOR} $1"; }
# Detect non-interactive mode
if [ ! -t 0 ]; then
NON_INTERACTIVE=true
log "Running in non-interactive mode"
fi
# Root/sudo checks
if [[ $EUID -eq 0 ]]; then
warning "Running as root is not recommended for security reasons."
if [ "$NON_INTERACTIVE" != true ]; then
@ -56,17 +49,15 @@ if [[ $EUID -eq 0 ]]; then
else
log "Non-interactive mode: proceeding with root (use at your own risk)"
fi
# Create sudo alias that does nothing when running as root
alias sudo=''
else
# Check if sudo is available for non-root users
if ! command -v sudo &>/dev/null; then
error "sudo command not found. Please ensure you have sudo privileges."
exit 1
fi
fi
# Detect OS and package manager
detect_os() {
if [ -f /etc/os-release ]; then
. /etc/os-release
@ -76,72 +67,43 @@ detect_os() {
error "Cannot detect operating system"
exit 1
fi
case $OS in
ubuntu|debian) PACKAGE_MANAGER="apt" ;;
centos|rhel|fedora)
PACKAGE_MANAGER="yum"
if command -v dnf &> /dev/null; then
PACKAGE_MANAGER="dnf"
fi
;;
*)
error "Unsupported operating system: $OS"
exit 1
if command -v dnf &> /dev/null; then PACKAGE_MANAGER="dnf"; fi
;;
*) error "Unsupported operating system: $OS"; exit 1 ;;
esac
log "Detected OS: $OS $VERSION"
}
# Check for existing install
check_existing_installation() {
if [ -d "$INSTALL_DIR" ] && [ -f "$INSTALL_DIR/bin/node" ]; then
log "Found existing DeBros Network installation at $INSTALL_DIR"
# Check if service is running
NODE_RUNNING=false
if systemctl is-active --quiet debros-node.service 2>/dev/null; then
NODE_RUNNING=true
log "Node service is currently running"
fi
if [ "$NON_INTERACTIVE" = true ]; then
log "Non-interactive mode: updating existing installation"
UPDATE_MODE=true
return 0
fi
echo -e "${YELLOW}Existing installation detected!${NOCOLOR}"
echo -e "${CYAN}Options:${NOCOLOR}"
echo -e "${CYAN}1) Update existing installation${NOCOLOR}"
echo -e "${CYAN}2) Remove and reinstall${NOCOLOR}"
echo -e "${CYAN}3) Exit installer${NOCOLOR}"
while true; do
read -rp "Enter your choice (1, 2, or 3): " EXISTING_CHOICE
case $EXISTING_CHOICE in
1)
UPDATE_MODE=true
log "Will update existing installation"
return 0
;;
2)
log "Will remove and reinstall"
remove_existing_installation
UPDATE_MODE=false
return 0
;;
3)
log "Installation cancelled by user"
exit 0
;;
*)
error "Invalid choice. Please enter 1, 2, or 3."
;;
1) UPDATE_MODE=true; log "Will update existing installation"; return 0 ;;
2) log "Will remove and reinstall"; remove_existing_installation; UPDATE_MODE=false; return 0 ;;
3) log "Installation cancelled by user"; exit 0 ;;
*) error "Invalid choice. Please enter 1, 2, or 3." ;;
esac
done
else
@ -150,12 +112,9 @@ check_existing_installation() {
fi
}
# Remove existing installation
remove_existing_installation() {
log "Removing existing installation..."
# Stop services if they exist
for service in debros-bootstrap debros-node; do
for service in debros-node; do
if systemctl list-unit-files | grep -q "$service.service"; then
log "Stopping $service service..."
sudo systemctl stop $service.service 2>/dev/null || true
@ -163,31 +122,22 @@ remove_existing_installation() {
sudo rm -f /etc/systemd/system/$service.service
fi
done
sudo systemctl daemon-reload
# Remove installation directory
if [ -d "$INSTALL_DIR" ]; then
sudo rm -rf "$INSTALL_DIR"
log "Removed installation directory"
fi
# Remove debros user
if id "debros" &>/dev/null; then
sudo userdel debros 2>/dev/null || true
log "Removed debros user"
fi
success "Existing installation removed"
}
# Check Go installation and version
check_go_installation() {
if command -v go &> /dev/null; then
GO_VERSION=$(go version | awk '{print $3}' | sed 's/go//')
log "Found Go version: $GO_VERSION"
# Compare versions (simplified)
if [ "$(printf '%s\n' "$MIN_GO_VERSION" "$GO_VERSION" | sort -V | head -n1)" = "$MIN_GO_VERSION" ]; then
success "Go version is sufficient"
return 0
@ -201,65 +151,37 @@ check_go_installation() {
fi
}
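
The version comparison above relies on `sort -V` ordering. The same idiom as a standalone helper, shown here for illustration only (not part of the installer):

```bash
# Succeeds when $2 is at least version $1, using sort's version ordering
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_ge "1.21" "1.22.3" && echo "sufficient"   # prints "sufficient"
version_ge "1.21" "1.19"   || echo "too old"      # prints "too old"
```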
# Install Go
install_go() {
log "Installing Go..."
case $PACKAGE_MANAGER in
apt)
sudo apt update
sudo apt install -y wget
;;
yum|dnf)
sudo $PACKAGE_MANAGER install -y wget
;;
apt) sudo apt update; sudo apt install -y wget ;;
yum|dnf) sudo $PACKAGE_MANAGER install -y wget ;;
esac
# Download and install Go
GO_TARBALL="go1.21.0.linux-amd64.tar.gz"
GO_TARBALL="go1.21.6.linux-amd64.tar.gz"
ARCH=$(uname -m)
if [ "$ARCH" = "aarch64" ]; then
GO_TARBALL="go1.21.0.linux-arm64.tar.gz"
fi
if [ "$ARCH" = "aarch64" ]; then GO_TARBALL="go1.21.6.linux-arm64.tar.gz"; fi
cd /tmp
wget -q "https://golang.org/dl/$GO_TARBALL"
wget -q "https://go.dev/dl/$GO_TARBALL"
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf "$GO_TARBALL"
# Add Go to system-wide PATH
if ! grep -q "/usr/local/go/bin" /etc/environment 2>/dev/null; then
echo 'PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/go/bin"' | sudo tee /etc/environment > /dev/null
fi
# Also add to current user's bashrc for compatibility
if ! grep -q "/usr/local/go/bin" ~/.bashrc; then
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
fi
# Update current session PATH
export PATH=$PATH:/usr/local/go/bin
success "Go installed successfully"
}
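
After this step the toolchain should be reachable both in the current session and in fresh shells. A quick verification using only standard `go` commands:

```bash
# Current session uses the exported PATH; new shells read /etc/environment or ~/.bashrc
/usr/local/go/bin/go version
go version && go env GOROOT
```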
# Install system dependencies
install_dependencies() {
log "Checking system dependencies..."
# Check which dependencies are missing
MISSING_DEPS=()
case $PACKAGE_MANAGER in
apt)
# Check for required packages
for pkg in git make build-essential curl; do
if ! dpkg -l | grep -q "^ii $pkg "; then
MISSING_DEPS+=($pkg)
fi
if ! dpkg -l | grep -q "^ii $pkg "; then MISSING_DEPS+=($pkg); fi
done
if [ ${#MISSING_DEPS[@]} -gt 0 ]; then
log "Installing missing dependencies: ${MISSING_DEPS[*]}"
sudo apt update
@ -269,24 +191,15 @@ install_dependencies() {
fi
;;
yum|dnf)
# Check for required packages
for pkg in git make curl; do
if ! rpm -q $pkg &>/dev/null; then
MISSING_DEPS+=($pkg)
fi
if ! rpm -q $pkg &>/dev/null; then MISSING_DEPS+=($pkg); fi
done
# Check for development tools
if ! rpm -q gcc &>/dev/null; then
MISSING_DEPS+=("Development Tools")
fi
if ! rpm -q gcc &>/dev/null; then MISSING_DEPS+=("Development Tools"); fi
if [ ${#MISSING_DEPS[@]} -gt 0 ]; then
log "Installing missing dependencies: ${MISSING_DEPS[*]}"
if [[ " ${MISSING_DEPS[*]} " =~ " Development Tools " ]]; then
sudo $PACKAGE_MANAGER groupinstall -y "Development Tools"
fi
# Remove "Development Tools" from array for individual package installation
MISSING_DEPS=($(printf '%s\n' "${MISSING_DEPS[@]}" | grep -v "Development Tools"))
if [ ${#MISSING_DEPS[@]} -gt 0 ]; then
sudo $PACKAGE_MANAGER install -y "${MISSING_DEPS[@]}"
@ -296,65 +209,36 @@ install_dependencies() {
fi
;;
esac
success "System dependencies ready"
}
# Install RQLite
install_rqlite() {
# Check if RQLite is already installed
if command -v rqlited &> /dev/null; then
RQLITE_VERSION=$(rqlited -version | head -n1 | awk '{print $2}')
log "Found RQLite version: $RQLITE_VERSION"
success "RQLite already installed"
return 0
fi
log "Installing RQLite..."
# Determine architecture
ARCH=$(uname -m)
case $ARCH in
x86_64)
RQLITE_ARCH="amd64"
;;
aarch64|arm64)
RQLITE_ARCH="arm64"
;;
armv7l)
RQLITE_ARCH="arm"
;;
*)
error "Unsupported architecture: $ARCH"
exit 1
;;
x86_64) RQLITE_ARCH="amd64" ;;
aarch64|arm64) RQLITE_ARCH="arm64" ;;
armv7l) RQLITE_ARCH="arm" ;;
*) error "Unsupported architecture: $ARCH"; exit 1 ;;
esac
# Download and install RQLite
RQLITE_VERSION="8.30.0"
RQLITE_VERSION="8.43.0"
RQLITE_TARBALL="rqlite-v${RQLITE_VERSION}-linux-${RQLITE_ARCH}.tar.gz"
RQLITE_URL="https://github.com/rqlite/rqlite/releases/download/v${RQLITE_VERSION}/${RQLITE_TARBALL}"
cd /tmp
if ! wget -q "$RQLITE_URL"; then
error "Failed to download RQLite from $RQLITE_URL"
exit 1
fi
# Extract and install RQLite binaries
if ! wget -q "$RQLITE_URL"; then error "Failed to download RQLite from $RQLITE_URL"; exit 1; fi
tar -xzf "$RQLITE_TARBALL"
RQLITE_DIR="rqlite-v${RQLITE_VERSION}-linux-${RQLITE_ARCH}"
# Install RQLite binaries to system PATH
sudo cp "$RQLITE_DIR/rqlited" /usr/local/bin/
sudo cp "$RQLITE_DIR/rqlite" /usr/local/bin/
sudo chmod +x /usr/local/bin/rqlited
sudo chmod +x /usr/local/bin/rqlite
# Cleanup
rm -rf "$RQLITE_TARBALL" "$RQLITE_DIR"
# Verify installation
if command -v rqlited &> /dev/null; then
INSTALLED_VERSION=$(rqlited -version | head -n1 | awk '{print $2}')
success "RQLite v$INSTALLED_VERSION installed successfully"
@ -364,102 +248,67 @@ install_rqlite() {
fi
}
# Check port availability
check_ports() {
local ports=($NODE_PORT $RQLITE_PORT $RAFT_PORT)
for port in "${ports[@]}"; do
if sudo netstat -tuln 2>/dev/null | grep -q ":$port " || ss -tuln 2>/dev/null | grep -q ":$port "; then
error "Port $port is already in use. Please free it up and try again."
exit 1
fi
done
success "All required ports are available"
}
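
If this check fails, the offending listener can be identified manually with `ss`. The ports shown are the script defaults:

```bash
# Show any process already bound to the default node, RQLite HTTP, or Raft ports
sudo ss -tulnp | grep -E ':(4001|5001|7001)\b' || echo "ports 4001/5001/7001 are free"
```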
# Configuration wizard
configuration_wizard() {
log "${BLUE}==================================================${NOCOLOR}"
log "${GREEN} DeBros Network Configuration Wizard ${NOCOLOR}"
log "${BLUE}==================================================${NOCOLOR}"
if [ "$NON_INTERACTIVE" = true ]; then
log "Non-interactive mode: using default configuration"
NODE_TYPE="node"
SOLANA_WALLET="11111111111111111111111111111111" # Placeholder wallet
SOLANA_WALLET="11111111111111111111111111111111"
CONFIGURE_FIREWALL="yes"
log "Node Type: $NODE_TYPE"
log "Installation Directory: $INSTALL_DIR"
log "Firewall Configuration: $CONFIGURE_FIREWALL"
success "Configuration completed with defaults"
return 0
fi
# Setting default node type to "node"
NODE_TYPE="node"
# Solana wallet address
log "${GREEN}Enter your Solana wallet address to be eligible for node operator rewards:${NOCOLOR}"
log "${GREEN}Enter your Solana wallet address for node operator rewards:${NOCOLOR}"
while true; do
read -rp "Solana Wallet Address: " SOLANA_WALLET
if [[ -n "$SOLANA_WALLET" && ${#SOLANA_WALLET} -ge 32 ]]; then
break
else
error "Please enter a valid Solana wallet address"
fi
if [[ -n "$SOLANA_WALLET" && ${#SOLANA_WALLET} -ge 32 ]]; then break; else error "Please enter a valid Solana wallet address"; fi
done
# Data directory
read -rp "Installation directory [default: $INSTALL_DIR]: " CUSTOM_INSTALL_DIR
if [[ -n "$CUSTOM_INSTALL_DIR" ]]; then
INSTALL_DIR="$CUSTOM_INSTALL_DIR"
fi
# Firewall configuration
if [[ -n "$CUSTOM_INSTALL_DIR" ]]; then INSTALL_DIR="$CUSTOM_INSTALL_DIR"; fi
read -rp "Configure firewall automatically? (yes/no) [default: yes]: " CONFIGURE_FIREWALL
CONFIGURE_FIREWALL="${CONFIGURE_FIREWALL:-yes}"
success "Configuration completed"
}
# Create user and directories
setup_directories() {
log "Setting up directories and permissions..."
# Create debros user if it doesn't exist
if ! id "debros" &>/dev/null; then
sudo useradd -r -s /bin/false -d "$INSTALL_DIR" debros
log "Created debros user"
else
log "User 'debros' already exists"
fi
# Create directory structure
sudo mkdir -p "$INSTALL_DIR"/{bin,configs,keys,data,logs}
sudo mkdir -p "$INSTALL_DIR/keys/$NODE_TYPE"
sudo mkdir -p "$INSTALL_DIR/data/$NODE_TYPE"/{rqlite,storage}
# Set ownership first, then permissions
sudo mkdir -p "$INSTALL_DIR"/{bin,configs,keys,data,logs,src}
sudo mkdir -p "$INSTALL_DIR/keys/node"
sudo mkdir -p "$INSTALL_DIR/data/node"/{rqlite,storage}
sudo chown -R debros:debros "$INSTALL_DIR"
sudo chmod 755 "$INSTALL_DIR"
sudo chmod 700 "$INSTALL_DIR/keys"
sudo chmod 700 "$INSTALL_DIR/keys/$NODE_TYPE"
# Ensure the debros user can write to the keys directory
sudo chmod 700 "$INSTALL_DIR/keys/node"
sudo chmod 755 "$INSTALL_DIR/data"
sudo chmod 755 "$INSTALL_DIR/logs"
sudo chmod 755 "$INSTALL_DIR/configs"
sudo chmod 755 "$INSTALL_DIR/bin"
success "Directory structure ready"
}
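
The resulting layout can be reviewed afterwards; only `keys/` and `keys/node/` should end up mode 700. A read-only inspection, assuming the default `/opt/debros` prefix:

```bash
# List the created directories with their modes and ownership (GNU find)
sudo find /opt/debros -maxdepth 2 -type d -printf '%-45p %m %u:%g\n' | sort
```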
# Clone or update repository
setup_source_code() {
log "Setting up source code..."
if [ -d "$INSTALL_DIR/src" ]; then
if [ -d "$INSTALL_DIR/src/.git" ]; then
log "Updating existing repository..."
cd "$INSTALL_DIR/src"
sudo -u debros git pull
@ -468,14 +317,11 @@ setup_source_code() {
sudo -u debros git clone "$REPO_URL" "$INSTALL_DIR/src"
cd "$INSTALL_DIR/src"
fi
success "Source code ready"
}
# Generate identity key
generate_identity() {
local identity_file="$INSTALL_DIR/keys/$NODE_TYPE/identity.key"
local identity_file="$INSTALL_DIR/keys/node/identity.key"
if [ -f "$identity_file" ]; then
if [ "$UPDATE_MODE" = true ]; then
log "Identity key already exists, keeping existing key"
@ -486,110 +332,65 @@ generate_identity() {
sudo rm -f "$identity_file"
fi
fi
log "Generating node identity..."
cd "$INSTALL_DIR/src"
# Create a custom identity generation script with output path support
cat > /tmp/generate_identity_custom.go << 'EOF'
package main
import (
"crypto/rand"
"flag"
"fmt"
"os"
"path/filepath"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
)
func main() {
var outputPath string
flag.StringVar(&outputPath, "output", "", "Output path for identity key")
flag.Parse()
if outputPath == "" {
fmt.Println("Usage: go run generate_identity_custom.go -output <path>")
os.Exit(1)
}
// Generate identity
priv, pub, err := crypto.GenerateKeyPairWithReader(crypto.Ed25519, 2048, rand.Reader)
if err != nil {
panic(err)
}
// Get peer ID
if err != nil { panic(err) }
peerID, err := peer.IDFromPublicKey(pub)
if err != nil {
panic(err)
}
// Marshal private key
if err != nil { panic(err) }
data, err := crypto.MarshalPrivateKey(priv)
if err != nil {
panic(err)
}
// Create directory
if err := os.MkdirAll(filepath.Dir(outputPath), 0700); err != nil {
panic(err)
}
// Save identity
if err := os.WriteFile(outputPath, data, 0600); err != nil {
panic(err)
}
if err != nil { panic(err) }
if err := os.MkdirAll(filepath.Dir(outputPath), 0700); err != nil { panic(err) }
if err := os.WriteFile(outputPath, data, 0600); err != nil { panic(err) }
fmt.Printf("Generated Peer ID: %s\n", peerID.String())
fmt.Printf("Identity saved to: %s\n", outputPath)
}
EOF
# Ensure Go is in PATH and generate the identity key
export PATH=$PATH:/usr/local/go/bin
sudo -u debros env "PATH=$PATH:/usr/local/go/bin" "GOMOD=$(pwd)" go run /tmp/generate_identity_custom.go -output "$identity_file"
rm /tmp/generate_identity_custom.go
success "Node identity generated"
}
# Build binaries
build_binaries() {
log "Building DeBros Network binaries..."
cd "$INSTALL_DIR/src"
# Ensure Go is in PATH and build all binaries
export PATH=$PATH:/usr/local/go/bin
sudo -u debros env "PATH=$PATH:/usr/local/go/bin" make build
# If in update mode, stop services before copying binaries to avoid "Text file busy" error
local services_were_running=()
if [ "$UPDATE_MODE" = true ]; then
log "Update mode: checking for running services before binary update..."
if systemctl is-active --quiet debros-node.service 2>/dev/null; then
log "Stopping debros-node service to update binaries..."
sudo systemctl stop debros-node.service
services_were_running+=("debros-node")
fi
# Give services a moment to fully stop
if [ ${#services_were_running[@]} -gt 0 ]; then
log "Waiting for services to stop completely..."
sleep 3
fi
fi
# Copy binaries to installation directory
sudo cp bin/* "$INSTALL_DIR/bin/"
sudo chown debros:debros "$INSTALL_DIR/bin/"*
# If in update mode and services were running, restart them
if [ "$UPDATE_MODE" = true ] && [ ${#services_were_running[@]} -gt 0 ]; then
log "Restarting previously running services..."
for service in "${services_were_running[@]}"; do
@ -597,48 +398,35 @@ build_binaries() {
sudo systemctl start $service.service
done
fi
success "Binaries built and installed"
}
# Generate configuration files
generate_configs() {
log "Generating configuration files..."
cat > /tmp/config.yaml << EOF
cat > /tmp/node.yaml << EOF
node:
data_dir: "$INSTALL_DIR/data/node"
key_file: "$INSTALL_DIR/keys/node/identity.key"
listen_addresses:
- "/ip4/0.0.0.0/tcp/$NODE_PORT"
solana_wallet: "$SOLANA_WALLET"
database:
rqlite_port: $RQLITE_PORT
rqlite_raft_port: $RAFT_PORT
logging:
level: "info"
file: "$INSTALL_DIR/logs/node.log"
EOF
sudo mv /tmp/config.yaml "$INSTALL_DIR/configs/$NODE_TYPE.yaml"
sudo chown debros:debros "$INSTALL_DIR/configs/$NODE_TYPE.yaml"
sudo mv /tmp/node.yaml "$INSTALL_DIR/configs/node.yaml"
sudo chown debros:debros "$INSTALL_DIR/configs/node.yaml"
success "Configuration files generated"
}
# Configure firewall
configure_firewall() {
if [[ "$CONFIGURE_FIREWALL" == "yes" ]]; then
log "Configuring firewall rules..."
if command -v ufw &> /dev/null; then
# Add firewall rules regardless of UFW status
# This allows the rules to be ready when UFW is enabled
log "Adding UFW rules for DeBros Network ports..."
# Add ports for node
for port in $NODE_PORT $RQLITE_PORT $RAFT_PORT; do
if ! sudo ufw allow $port; then
error "Failed to allow port $port"
@ -646,10 +434,7 @@ configure_firewall() {
fi
log "Added UFW rule: allow port $port"
done
# Check UFW status and inform user
UFW_STATUS=$(sudo ufw status | grep -o "Status: [a-z]\+" | awk '{print $2}' || echo "inactive")
if [[ "$UFW_STATUS" == "active" ]]; then
success "Firewall rules added and active"
else
@ -666,30 +451,20 @@ configure_firewall() {
fi
}
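
Whether the rules actually took effect can be confirmed independently of the installer, using plain `ufw` commands with the default ports:

```bash
# Show UFW state and any DeBros-related rules
sudo ufw status verbose
sudo ufw status numbered | grep -E '(4001|5001|7001)' || echo "no DeBros rules found"
```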
# Create systemd service
create_systemd_service() {
local service_file="/etc/systemd/system/debros-$NODE_TYPE.service"
# Always clean up any existing service files to ensure fresh start
for service in debros-bootstrap debros-node; do
if [ -f "/etc/systemd/system/$service.service" ]; then
log "Cleaning up existing $service service..."
sudo systemctl stop $service.service 2>/dev/null || true
sudo systemctl disable $service.service 2>/dev/null || true
sudo rm -f /etc/systemd/system/$service.service
local service_file="/etc/systemd/system/debros-node.service"
if [ -f "$service_file" ]; then
log "Cleaning up existing node service..."
sudo systemctl stop debros-node.service 2>/dev/null || true
sudo systemctl disable debros-node.service 2>/dev/null || true
sudo rm -f "$service_file"
fi
done
sudo systemctl daemon-reload
log "Creating new systemd service..."
# Determine the correct ExecStart command based on node type
local exec_start=""
exec_start="$INSTALL_DIR/bin/node -data $INSTALL_DIR/data/node"
cat > /tmp/debros-$NODE_TYPE.service << EOF
local exec_start="$INSTALL_DIR/bin/node -data $INSTALL_DIR/data/node"
cat > /tmp/debros-node.service << EOF
[Unit]
Description=DeBros Network $NODE_TYPE Node
Description=DeBros Network Node
After=network.target
Wants=network-online.target
@ -704,9 +479,8 @@ Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=debros-$NODE_TYPE
SyslogIdentifier=debros-node
# Security settings
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
@ -716,96 +490,65 @@ ReadWritePaths=$INSTALL_DIR
[Install]
WantedBy=multi-user.target
EOF
sudo mv /tmp/debros-$NODE_TYPE.service "$service_file"
sudo mv /tmp/debros-node.service "$service_file"
sudo systemctl daemon-reload
sudo systemctl enable debros-$NODE_TYPE.service
sudo systemctl enable debros-node.service
success "Systemd service ready"
}
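
Before the first start it can help to review the generated unit. These are standard systemd commands, not provided by the installer:

```bash
# Inspect the unit as systemd sees it and check for obvious mistakes
systemctl cat debros-node.service
systemd-analyze verify /etc/systemd/system/debros-node.service
```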
# Start the service
start_service() {
log "Starting DeBros Network service..."
sudo systemctl start debros-$NODE_TYPE.service
sudo systemctl start debros-node.service
sleep 3
if systemctl is-active --quiet debros-$NODE_TYPE.service; then
if systemctl is-active --quiet debros-node.service; then
success "DeBros Network service started successfully"
else
error "Failed to start DeBros Network service"
log "Check logs with: sudo journalctl -u debros-$NODE_TYPE.service"
log "Check logs with: sudo journalctl -u debros-node.service"
exit 1
fi
}
# Display banner
display_banner() {
echo -e "${BLUE}========================================================================${NOCOLOR}"
echo -e "${CYAN}
____ ____ _ _ _ _
| _ \ ___| __ ) _ __ ___ ___ | \ | | ___| |___ _____ _ __| | __
| | | |/ _ \ _ \| __/ _ \/ __| | \| |/ _ \ __\ \ /\ / / _ \| __| |/ /
| |_| | __/ |_) | | | (_) \__ \ | |\ | __/ |_ \ V V / (_) | | | <
|____/ \___|____/|_| \___/|___/ |_| \_|\___|\__| \_/\_/ \___/|_| |_|\_\\
| _ \\ ___| __ ) _ __ ___ ___ | \\ | | ___| |___ _____ _ __| | __
| | | |/ _ \\ _ \\| __/ _ \\/ __| | \\| |/ _ \\ __\\ \\ /\\ / / _ \\| __| |/ /
| |_| | __/ |_) | | | (_) \\__ \\ | |\\ | __/ |_ \\ V V / (_) | | | <
|____/ \\___|____/|_| \\___/|___/ |_| \\_|\\___|\\__| \\_/\\_/ \\___/|_| |_|\\_\\
${NOCOLOR}"
echo -e "${BLUE}========================================================================${NOCOLOR}"
}
# Main installation function
main() {
display_banner
log "${BLUE}==================================================${NOCOLOR}"
log "${GREEN} Starting DeBros Network Installation ${NOCOLOR}"
log "${BLUE}==================================================${NOCOLOR}"
detect_os
check_existing_installation
# Skip port check in update mode since services are already running
if [ "$UPDATE_MODE" != true ]; then
check_ports
else
log "Update mode: skipping port availability check"
fi
# Check and install Go if needed
if ! check_go_installation; then
install_go
fi
if [ "$UPDATE_MODE" != true ]; then check_ports; else log "Update mode: skipping port availability check"; fi
if ! check_go_installation; then install_go; fi
install_dependencies
install_rqlite
# Skip configuration wizard in update mode
if [ "$UPDATE_MODE" != true ]; then
configuration_wizard
else
if [ "$UPDATE_MODE" != true ]; then configuration_wizard; else
log "Update mode: skipping configuration wizard"
# Force node type to 'node' for consistent terminology
NODE_TYPE="node"
log "Using node type: $NODE_TYPE (standardized from any previous bootstrap configuration)"
SOLANA_WALLET="11111111111111111111111111111111"
CONFIGURE_FIREWALL="yes"
fi
setup_directories
setup_source_code
generate_identity
build_binaries
# Only generate new configs if not in update mode
if [ "$UPDATE_MODE" != true ]; then
generate_configs
configure_firewall
else
log "Update mode: keeping existing configuration"
fi
create_systemd_service
start_service
# Display completion information
log "${BLUE}==================================================${NOCOLOR}"
if [ "$UPDATE_MODE" = true ]; then
log "${GREEN} Update Complete! ${NOCOLOR}"
@ -813,25 +556,21 @@ main() {
log "${GREEN} Installation Complete! ${NOCOLOR}"
fi
log "${BLUE}==================================================${NOCOLOR}"
log "${GREEN}Installation Directory:${NOCOLOR} ${CYAN}$INSTALL_DIR${NOCOLOR}"
log "${GREEN}Configuration:${NOCOLOR} ${CYAN}$INSTALL_DIR/configs/$NODE_TYPE.yaml${NOCOLOR}"
log "${GREEN}Logs:${NOCOLOR} ${CYAN}$INSTALL_DIR/logs/$NODE_TYPE.log${NOCOLOR}"
log "${GREEN}Configuration:${NOCOLOR} ${CYAN}$INSTALL_DIR/configs/node.yaml${NOCOLOR}"
log "${GREEN}Logs:${NOCOLOR} ${CYAN}$INSTALL_DIR/logs/node.log${NOCOLOR}"
log "${GREEN}Node Port:${NOCOLOR} ${CYAN}$NODE_PORT${NOCOLOR}"
log "${GREEN}RQLite Port:${NOCOLOR} ${CYAN}$RQLITE_PORT${NOCOLOR}"
log "${GREEN}Raft Port:${NOCOLOR} ${CYAN}$RAFT_PORT${NOCOLOR}"
log "${BLUE}==================================================${NOCOLOR}"
log "${GREEN}Management Commands:${NOCOLOR}"
log "${CYAN} - sudo systemctl status debros-$NODE_TYPE${NOCOLOR} (Check status)"
log "${CYAN} - sudo systemctl restart debros-$NODE_TYPE${NOCOLOR} (Restart service)"
log "${CYAN} - sudo systemctl stop debros-$NODE_TYPE${NOCOLOR} (Stop service)"
log "${CYAN} - sudo systemctl start debros-$NODE_TYPE${NOCOLOR} (Start service)"
log "${CYAN} - sudo journalctl -u debros-$NODE_TYPE.service -f${NOCOLOR} (View logs)"
log "${CYAN} - sudo systemctl status debros-node${NOCOLOR} (Check status)"
log "${CYAN} - sudo systemctl restart debros-node${NOCOLOR} (Restart service)"
log "${CYAN} - sudo systemctl stop debros-node${NOCOLOR} (Stop service)"
log "${CYAN} - sudo systemctl start debros-node${NOCOLOR} (Start service)"
log "${CYAN} - sudo journalctl -u debros-node.service -f${NOCOLOR} (View logs)"
log "${CYAN} - $INSTALL_DIR/bin/network-cli${NOCOLOR} (Use CLI tools)"
log "${BLUE}==================================================${NOCOLOR}"
if [ "$UPDATE_MODE" = true ]; then
success "DeBros Network has been updated and is running!"
else
@ -840,5 +579,4 @@ main() {
log "${CYAN}For documentation visit: https://docs.debros.io${NOCOLOR}"
}
# Run main function
main "$@"