Moved everything to the root user and to /opt/orama from /home/orama/.orama

This commit is contained in:
anonpenguin23 2026-02-14 14:33:38 +02:00
parent ef8002bf13
commit bc9cbb3627
50 changed files with 210 additions and 576 deletions

View File

@ -366,7 +366,7 @@ systemctl status orama-node
journalctl -u orama-node -f
# Check log files
-tail -f /home/orama/.orama/logs/node.log
+tail -f /opt/orama/.orama/logs/node.log
```
### Port Conflicts
@ -398,7 +398,7 @@ rqlite -H localhost -p 5001
```bash
# Production reset (⚠️ DESTROYS DATA)
sudo orama uninstall
-sudo rm -rf /home/orama/.orama
+sudo rm -rf /opt/orama/.orama
sudo orama install
```

View File

@ -31,11 +31,12 @@ sudo ufw --force reset
sudo ufw allow 22/tcp
sudo ufw --force enable
-# 5. Remove orama user and home directory
+# 5. Remove orama data directory
+sudo rm -rf /opt/orama
+# 6. Remove legacy orama user (if exists from old installs)
sudo userdel -r orama 2>/dev/null
sudo rm -rf /home/orama
-# 6. Remove sudoers files
sudo rm -f /etc/sudoers.d/orama-access
sudo rm -f /etc/sudoers.d/orama-deployments
sudo rm -f /etc/sudoers.d/orama-wireguard
@ -62,14 +63,13 @@ echo "Node cleaned. Ready for fresh install."
| Category | Paths |
|----------|-------|
-| **User** | `orama` system user and `/home/orama/` |
-| **App data** | `/home/orama/.orama/` (configs, secrets, logs, IPFS, RQLite, Olric) |
-| **Source code** | `/home/orama/src/` |
-| **Binaries** | `/home/orama/bin/orama-node`, `/home/orama/bin/gateway` |
+| **App data** | `/opt/orama/.orama/` (configs, secrets, logs, IPFS, RQLite, Olric) |
+| **Source code** | `/opt/orama/src/` |
+| **Binaries** | `/opt/orama/bin/orama-node`, `/opt/orama/bin/gateway` |
| **Systemd** | `orama-*.service`, `coredns.service`, `caddy.service`, `orama-deploy-*.service` |
| **WireGuard** | `/etc/wireguard/wg0.conf`, `wg-quick@wg0` systemd unit |
| **Firewall** | All UFW rules (reset to default + SSH only) |
-| **Sudoers** | `/etc/sudoers.d/orama-*` |
+| **Legacy** | `orama` user, `/etc/sudoers.d/orama-*` (old installs only) |
| **CoreDNS** | `/etc/coredns/Corefile` |
| **Caddy** | `/etc/caddy/Caddyfile`, `/var/lib/caddy/` (TLS certs) |
| **Anyone Relay** | `orama-anyone-relay.service`, `orama-anyone-client.service` |
@ -130,6 +130,7 @@ sudo wg-quick down wg0 2>/dev/null
sudo systemctl disable wg-quick@wg0 2>/dev/null
sudo rm -f /etc/wireguard/wg0.conf
sudo ufw --force reset && sudo ufw allow 22/tcp && sudo ufw --force enable
+sudo rm -rf /opt/orama
sudo userdel -r orama 2>/dev/null
sudo rm -rf /home/orama
sudo rm -f /etc/sudoers.d/orama-access /etc/sudoers.d/orama-deployments /etc/sudoers.d/orama-wireguard

View File

@ -41,7 +41,7 @@ You can find peer public keys with `wg show wg0`.
Check the Olric config on each node:
```bash
-cat /home/orama/.orama/data/namespaces/<name>/configs/olric-*.yaml
+cat /opt/orama/.orama/data/namespaces/<name>/configs/olric-*.yaml
```
If `bindAddr` is `0.0.0.0`, the node will try to bind to IPv6 on dual-stack hosts, breaking memberlist gossip.
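A sketch of the corresponding fix, with assumed values: pin the bind address to the node's WireGuard IP (10.0.0.2 here is an assumption, as is the exact key layout of the generated `olric-*.yaml`):

```yaml
# Hypothetical olric-<name>.yaml fragment; 10.0.0.2 is an assumed wg0 address
bindAddr: 10.0.0.2        # instead of 0.0.0.0, avoids the dual-stack IPv6 bind
memberlist:
  bindAddr: 10.0.0.2      # memberlist gossip must also bind to the wg0 address
```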
@ -69,7 +69,7 @@ If every UDP ping fails but TCP stream connections succeed, it's the WireGuard p
**Fix:** Edit the gateway config manually:
```bash
-vim /home/orama/.orama/data/namespaces/<name>/configs/gateway-*.yaml
+vim /opt/orama/.orama/data/namespaces/<name>/configs/gateway-*.yaml
```
Add/fix:
@ -95,7 +95,7 @@ This was fixed in code, so new namespaces get the correct config.
**Check:**
```bash
-ls /home/orama/.orama/data/namespaces/<name>/cluster-state.json
+ls /opt/orama/.orama/data/namespaces/<name>/cluster-state.json
```
If the file doesn't exist, the node can't restore the namespace.
@ -153,7 +153,7 @@ ssh -n user@host 'command'
## General Debugging Tips
- **Always use `sudo orama prod restart`** instead of raw `systemctl` commands
-- **Namespace data lives at:** `/home/orama/.orama/data/namespaces/<name>/`
+- **Namespace data lives at:** `/opt/orama/.orama/data/namespaces/<name>/`
- **Check service logs:** `journalctl -u orama-namespace-olric@<name>.service --no-pager -n 50`
- **Check WireGuard:** `wg show wg0` — look for recent handshakes and transfer bytes
- **Check gateway health:** `curl http://localhost:<port>/v1/health` from the node itself

View File

@ -369,7 +369,7 @@ orama db create my-database
# Output:
# ✅ Database created: my-database
# Home Node: node-abc123
-# File Path: /home/orama/.orama/data/sqlite/your-namespace/my-database.db
+# File Path: /opt/orama/.orama/data/sqlite/your-namespace/my-database.db
```
### Executing Queries
@ -588,7 +588,7 @@ func main() {
// DATABASE_NAME env var is automatically set by Orama
dbPath := os.Getenv("DATABASE_PATH")
if dbPath == "" {
-dbPath = "/home/orama/.orama/data/sqlite/" + os.Getenv("NAMESPACE") + "/myapp-db.db"
+dbPath = "/opt/orama/.orama/data/sqlite/" + os.Getenv("NAMESPACE") + "/myapp-db.db"
}
var err error

View File

@ -48,7 +48,7 @@ make build-linux
The `orama install` command automatically:
1. Uploads the source archive via SCP
-2. Extracts source to `/home/orama/src` and installs the CLI to `/usr/local/bin/orama`
+2. Extracts source to `/opt/orama/src` and installs the CLI to `/usr/local/bin/orama`
3. Runs `orama install` on the VPS which builds all binaries from source (Go, CoreDNS, Caddy, Olric, etc.)
### Upgrading a Multi-Node Cluster (CRITICAL)
@ -112,7 +112,7 @@ If nodes get stuck in "Candidate" state or show "leader not found" errors:
3. On each other node, clear RQLite data and restart:
```bash
sudo orama prod stop
-sudo rm -rf /home/orama/.orama/data/rqlite
+sudo rm -rf /opt/orama/.orama/data/rqlite
sudo systemctl start orama-node
```
4. The node should automatically rejoin using its configured `rqlite_join_address`

View File

@ -30,8 +30,8 @@ type Orchestrator struct {
// NewOrchestrator creates a new install orchestrator
func NewOrchestrator(flags *Flags) (*Orchestrator, error) {
-oramaHome := "/home/orama"
-oramaDir := oramaHome + "/.orama"
+oramaHome := production.OramaBase
+oramaDir := production.OramaDir
// Normalize peers
peers, err := utils.NormalizePeers(flags.PeersStr)

View File

@ -81,17 +81,14 @@ func (r *RemoteOrchestrator) extractOnVPS() error {
// All other binaries are built from source on the VPS during install.
extractCmd := r.sudoPrefix() + "bash -c '" +
`ARCHIVE="/tmp/network-source.tar.gz" && ` +
-`SRC_DIR="/home/orama/src" && ` +
-`BIN_DIR="/home/orama/bin" && ` +
-`id -u orama &>/dev/null || useradd -m -s /bin/bash orama && ` +
+`SRC_DIR="/opt/orama/src" && ` +
+`BIN_DIR="/opt/orama/bin" && ` +
`rm -rf "$SRC_DIR" && mkdir -p "$SRC_DIR" "$BIN_DIR" && ` +
`tar xzf "$ARCHIVE" -C "$SRC_DIR" && ` +
-`chown -R orama:orama "$SRC_DIR" && ` +
// Install pre-built CLI binary (only binary cross-compiled locally)
`if [ -f "$SRC_DIR/bin-linux/orama" ]; then ` +
`cp "$SRC_DIR/bin-linux/orama" /usr/local/bin/orama && ` +
`chmod +x /usr/local/bin/orama; fi && ` +
-`chown -R orama:orama "$BIN_DIR" && ` +
`echo "Extract complete."` +
"'"

View File

@ -68,7 +68,7 @@ func Handle(args []string) {
// readNodeDomain reads the domain from the node config file
func readNodeDomain() (string, error) {
-configPath := "/home/orama/.orama/configs/node.yaml"
+configPath := "/opt/orama/.orama/configs/node.yaml"
data, err := os.ReadFile(configPath)
if err != nil {
return "", fmt.Errorf("read config: %w", err)

View File

@ -14,7 +14,7 @@ import (
)
const (
-maintenanceFlagPath = "/home/orama/.orama/maintenance.flag"
+maintenanceFlagPath = "/opt/orama/.orama/maintenance.flag"
)
// HandlePreUpgrade prepares the node for a safe rolling upgrade: // HandlePreUpgrade prepares the node for a safe rolling upgrade:
@ -83,7 +83,7 @@ func HandlePreUpgrade() {
// getNamespaceRQLitePorts scans namespace env files to find RQLite HTTP ports.
// Returns map of namespace_name → HTTP port.
func getNamespaceRQLitePorts() map[string]int {
-namespacesDir := "/home/orama/.orama/data/namespaces"
+namespacesDir := "/opt/orama/.orama/data/namespaces"
ports := make(map[string]int)
entries, err := os.ReadDir(namespacesDir)

View File

@ -28,7 +28,7 @@ func Handle(args []string) {
os.Exit(1)
}
-oramaDir := "/home/orama/.orama"
+oramaDir := "/opt/orama/.orama"
fmt.Printf("🔄 Checking for installations to migrate...\n\n")

View File

@ -47,7 +47,7 @@ func Handle() {
}
fmt.Printf("\nDirectories:\n")
-oramaDir := "/home/orama/.orama"
+oramaDir := "/opt/orama/.orama"
if _, err := os.Stat(oramaDir); err == nil {
fmt.Printf(" ✅ %s exists\n", oramaDir)
} else {

View File

@ -17,7 +17,7 @@ func Handle() {
}
fmt.Printf("⚠️ This will stop and remove all Orama production services\n")
-fmt.Printf("⚠️ Configuration and data will be preserved in /home/orama/.orama\n\n")
+fmt.Printf("⚠️ Configuration and data will be preserved in /opt/orama/.orama\n\n")
fmt.Printf("Continue? (yes/no): ")
reader := bufio.NewReader(os.Stdin)
@ -48,6 +48,6 @@ func Handle() {
exec.Command("systemctl", "daemon-reload").Run()
fmt.Printf("✅ Services uninstalled\n")
-fmt.Printf(" Configuration and data preserved in /home/orama/.orama\n")
-fmt.Printf(" To remove all data: rm -rf /home/orama/.orama\n\n")
+fmt.Printf(" Configuration and data preserved in /opt/orama/.orama\n")
+fmt.Printf(" To remove all data: rm -rf /opt/orama/.orama\n\n")
}

View File

@ -26,8 +26,8 @@ type Orchestrator struct {
// NewOrchestrator creates a new upgrade orchestrator
func NewOrchestrator(flags *Flags) *Orchestrator {
-oramaHome := "/home/orama"
-oramaDir := oramaHome + "/.orama"
+oramaHome := production.OramaBase
+oramaDir := production.OramaDir
// Load existing preferences
prefs := production.LoadPreferences(oramaDir)
@ -341,7 +341,7 @@ func (o *Orchestrator) writePeersJSONFromState(state ClusterState) error {
}
// Write to RQLite's raft directory
-raftDir := filepath.Join(o.oramaHome, ".orama", "data", "rqlite", "raft")
+raftDir := filepath.Join(production.OramaData, "rqlite", "raft")
if err := os.MkdirAll(raftDir, 0755); err != nil {
return err
}

View File

@ -182,7 +182,7 @@ func GetProductionServices() []string {
// template files (e.g. orama-namespace-gateway@.service) with no instance name.
// Restarting a template without an instance is a no-op.
// Instead, scan the data directory where each subdirectory is a provisioned namespace.
-namespacesDir := "/home/orama/.orama/data/namespaces"
+namespacesDir := "/opt/orama/.orama/data/namespaces"
nsEntries, err := os.ReadDir(namespacesDir)
if err == nil {
serviceTypes := []string{"rqlite", "olric", "gateway"}

View File

@ -178,9 +178,9 @@ func (m *Manager) Stop(ctx context.Context, deployment *deployments.Deployment)
m.logger.Warn("Failed to disable service", zap.Error(err))
}
-// Remove service file using sudo
+// Remove service file
serviceFile := filepath.Join("/etc/systemd/system", serviceName+".service")
-cmd := exec.Command("sudo", "rm", "-f", serviceFile)
+cmd := exec.Command("rm", "-f", serviceFile)
if err := cmd.Run(); err != nil {
m.logger.Warn("Failed to remove service file", zap.Error(err))
}
@ -310,8 +310,6 @@ After=network.target
[Service]
Type=simple
-User=orama
-Group=orama
WorkingDirectory={{.WorkDir}}
{{range .Env}}Environment="{{.}}"
@ -328,9 +326,6 @@ CPUQuota={{.CPULimitPercent}}%
# Security - minimal restrictions for deployments in home directory
PrivateTmp=true
-ProtectSystem=full
-ProtectHome=read-only
-ReadWritePaths={{.WorkDir}}
StandardOutput=journal
StandardError=journal
@ -373,8 +368,8 @@ WantedBy=multi-user.target
return err
}
-// Use sudo tee to write to systemd directory (orama user needs sudo access)
-cmd := exec.Command("sudo", "tee", serviceFile)
+// Use tee to write to systemd directory
+cmd := exec.Command("tee", serviceFile)
cmd.Stdin = &buf
output, err := cmd.CombinedOutput()
if err != nil {
@ -436,34 +431,34 @@ func (m *Manager) getServiceName(deployment *deployments.Deployment) string {
return fmt.Sprintf("orama-deploy-%s-%s", namespace, name)
}
-// systemd helper methods (use sudo for non-root execution)
+// systemd helper methods
func (m *Manager) systemdReload() error {
-cmd := exec.Command("sudo", "systemctl", "daemon-reload")
+cmd := exec.Command("systemctl", "daemon-reload")
return cmd.Run()
}
func (m *Manager) systemdEnable(serviceName string) error {
-cmd := exec.Command("sudo", "systemctl", "enable", serviceName)
+cmd := exec.Command("systemctl", "enable", serviceName)
return cmd.Run()
}
func (m *Manager) systemdDisable(serviceName string) error {
-cmd := exec.Command("sudo", "systemctl", "disable", serviceName)
+cmd := exec.Command("systemctl", "disable", serviceName)
return cmd.Run()
}
func (m *Manager) systemdStart(serviceName string) error {
-cmd := exec.Command("sudo", "systemctl", "start", serviceName)
+cmd := exec.Command("systemctl", "start", serviceName)
return cmd.Run()
}
func (m *Manager) systemdStop(serviceName string) error {
-cmd := exec.Command("sudo", "systemctl", "stop", serviceName)
+cmd := exec.Command("systemctl", "stop", serviceName)
return cmd.Run()
}
func (m *Manager) systemdRestart(serviceName string) error {
-cmd := exec.Command("sudo", "systemctl", "restart", serviceName)
+cmd := exec.Command("systemctl", "restart", serviceName)
return cmd.Run()
}

View File

@ -91,7 +91,7 @@ func TestGetStartCommand(t *testing.T) {
// On macOS (test environment), useSystemd will be false, so node/npm use short paths.
// We explicitly set it to test both modes.
-workDir := "/home/orama/deployments/alice/myapp"
+workDir := "/opt/orama/deployments/alice/myapp"
tests := []struct {
name string

View File

@ -6,7 +6,6 @@ import (
"fmt"
"net"
"os"
-"os/exec"
"os/user"
"path/filepath"
"strconv"
@ -438,10 +437,5 @@ func (sg *SecretGenerator) SaveConfig(filename string, content string) error {
return fmt.Errorf("failed to write config %s: %w", filename, err)
}
-// Fix ownership
-if err := exec.Command("chown", "orama:orama", configPath).Run(); err != nil {
-fmt.Printf("Warning: failed to chown %s to orama:orama: %v\n", configPath, err)
-}
return nil
}

View File

@ -27,7 +27,7 @@ type BinaryInstaller struct {
// NewBinaryInstaller creates a new binary installer
func NewBinaryInstaller(arch string, logWriter io.Writer) *BinaryInstaller {
-oramaHome := "/home/orama"
+oramaHome := OramaBase
return &BinaryInstaller{
arch: arch,
logWriter: logWriter,

View File

@ -150,11 +150,6 @@ func (ci *CaddyInstaller) Install() error {
return fmt.Errorf("failed to install binary: %w", err)
}
-// Grant CAP_NET_BIND_SERVICE to allow binding to ports 80/443
-if err := exec.Command("setcap", "cap_net_bind_service=+ep", dstBinary).Run(); err != nil {
-fmt.Fprintf(ci.logWriter, " ⚠️ Warning: failed to setcap on caddy: %v\n", err)
-}
fmt.Fprintf(ci.logWriter, " ✓ Caddy with orama DNS module installed\n")
return nil
}

View File

@ -39,7 +39,7 @@ func (gi *GatewayInstaller) Configure() error {
return nil
}
-// InstallDeBrosBinaries builds Orama binaries from source at /home/orama/src.
+// InstallDeBrosBinaries builds Orama binaries from source at /opt/orama/src.
// Source must already be present (uploaded via SCP archive).
func (gi *GatewayInstaller) InstallDeBrosBinaries(oramaHome string) error {
fmt.Fprintf(gi.logWriter, " Building Orama binaries...\n")
@ -217,7 +217,7 @@ func (gi *GatewayInstaller) InstallAnyoneClient() error {
fmt.Fprintf(gi.logWriter, " Initializing NPM cache...\n")
// Create nested cache directories with proper permissions
-oramaHome := "/home/orama"
+oramaHome := "/opt/orama"
npmCacheDirs := []string{
filepath.Join(oramaHome, ".npm"),
filepath.Join(oramaHome, ".npm", "_cacache"),

View File

@ -216,11 +216,6 @@ func (ii *IPFSInstaller) InitializeRepo(ipfsRepoPath string, swarmKeyPath string
}
}
-// Fix ownership (best-effort, don't fail if it doesn't work)
-if err := exec.Command("chown", "-R", "orama:orama", ipfsRepoPath).Run(); err != nil {
-fmt.Fprintf(ii.logWriter, " ⚠️ Warning: failed to chown IPFS repo: %v\n", err)
-}
return nil
}

View File

@ -76,11 +76,6 @@ func (ici *IPFSClusterInstaller) InitializeConfig(clusterPath, clusterSecret str
return fmt.Errorf("failed to create IPFS Cluster directory: %w", err)
}
-// Fix ownership before running init (best-effort)
-if err := exec.Command("chown", "-R", "orama:orama", clusterPath).Run(); err != nil {
-fmt.Fprintf(ici.logWriter, " ⚠️ Warning: failed to chown cluster path before init: %v\n", err)
-}
// Resolve ipfs-cluster-service binary path
clusterBinary, err := ResolveBinaryPath("ipfs-cluster-service", "/usr/local/bin/ipfs-cluster-service", "/usr/bin/ipfs-cluster-service")
if err != nil {
@ -119,11 +114,6 @@ func (ici *IPFSClusterInstaller) InitializeConfig(clusterPath, clusterSecret str
fmt.Fprintf(ici.logWriter, " ✓ Cluster secret verified\n")
}
-// Fix ownership again after updates (best-effort)
-if err := exec.Command("chown", "-R", "orama:orama", clusterPath).Run(); err != nil {
-fmt.Fprintf(ici.logWriter, " ⚠️ Warning: failed to chown cluster path after updates: %v\n", err)
-}
return nil
}

View File

@ -79,8 +79,5 @@ func (ri *RQLiteInstaller) InitializeDataDir(dataDir string) error {
return fmt.Errorf("failed to create RQLite data directory: %w", err)
}
-if err := exec.Command("chown", "-R", "orama:orama", dataDir).Run(); err != nil {
-fmt.Fprintf(ri.logWriter, " ⚠️ Warning: failed to chown RQLite data dir: %v\n", err)
-}
return nil
}

View File

@ -45,7 +45,6 @@ type ProductionSetup struct {
resourceChecker *ResourceChecker
portChecker *PortChecker
fsProvisioner *FilesystemProvisioner
-userProvisioner *UserProvisioner
stateDetector *StateDetector
configGenerator *ConfigGenerator
secretGenerator *SecretGenerator
@ -78,7 +77,6 @@ func SaveBranchPreference(oramaDir, branch string) error {
if err := os.WriteFile(branchFile, []byte(branch), 0644); err != nil {
return fmt.Errorf("failed to save branch preference: %w", err)
}
-exec.Command("chown", "orama:orama", branchFile).Run()
return nil
}
@ -100,7 +98,6 @@ func NewProductionSetup(oramaHome string, logWriter io.Writer, forceReconfigure
resourceChecker: NewResourceChecker(),
portChecker: NewPortChecker(),
fsProvisioner: NewFilesystemProvisioner(oramaHome),
-userProvisioner: NewUserProvisioner("orama", oramaHome, "/bin/bash"),
stateDetector: NewStateDetector(oramaDir),
configGenerator: NewConfigGenerator(oramaDir),
secretGenerator: NewSecretGenerator(oramaDir),
@ -227,63 +224,16 @@ func (ps *ProductionSetup) Phase1CheckPrerequisites() error {
return nil
}
-// Phase2ProvisionEnvironment sets up users and filesystems
+// Phase2ProvisionEnvironment sets up filesystems
func (ps *ProductionSetup) Phase2ProvisionEnvironment() error {
ps.logf("Phase 2: Provisioning environment...")
-// Create orama user
-if !ps.userProvisioner.UserExists() {
-if err := ps.userProvisioner.CreateUser(); err != nil {
-return fmt.Errorf("failed to create orama user: %w", err)
-}
-ps.logf(" ✓ Created 'orama' user")
-} else {
-ps.logf(" ✓ 'orama' user already exists")
-}
-// Set up sudoers access if invoked via sudo
-sudoUser := os.Getenv("SUDO_USER")
-if sudoUser != "" {
-if err := ps.userProvisioner.SetupSudoersAccess(sudoUser); err != nil {
-ps.logf(" ⚠️ Failed to setup sudoers: %v", err)
-} else {
-ps.logf(" ✓ Sudoers access configured")
-}
-}
-// Set up deployment sudoers (allows orama user to manage orama-deploy-* services)
-if err := ps.userProvisioner.SetupDeploymentSudoers(); err != nil {
-ps.logf(" ⚠️ Failed to setup deployment sudoers: %v", err)
-} else {
-ps.logf(" ✓ Deployment sudoers configured")
-}
-// Set up namespace sudoers (allows orama user to manage orama-namespace-* services)
-if err := ps.userProvisioner.SetupNamespaceSudoers(); err != nil {
-ps.logf(" ⚠️ Failed to setup namespace sudoers: %v", err)
-} else {
-ps.logf(" ✓ Namespace sudoers configured")
-}
-// Set up WireGuard sudoers (allows orama user to manage WG peers)
-if err := ps.userProvisioner.SetupWireGuardSudoers(); err != nil {
-ps.logf(" ⚠️ Failed to setup wireguard sudoers: %v", err)
-} else {
-ps.logf(" ✓ WireGuard sudoers configured")
-}
// Create directory structure (unified structure)
if err := ps.fsProvisioner.EnsureDirectoryStructure(); err != nil {
return fmt.Errorf("failed to create directory structure: %w", err)
}
ps.logf(" ✓ Directory structure created")
-// Fix ownership
-if err := ps.fsProvisioner.FixOwnership(); err != nil {
-return fmt.Errorf("failed to fix ownership: %w", err)
-}
-ps.logf(" ✓ Ownership fixed")
return nil
}
@ -305,7 +255,7 @@ func (ps *ProductionSetup) Phase2bInstallBinaries() error {
ps.logf(" ⚠️ Olric install warning: %v", err)
}
-// Install Orama binaries (source must be at /home/orama/src via SCP)
+// Install Orama binaries (source must be at /opt/orama/src via SCP)
if err := ps.binaryInstaller.InstallDeBrosBinaries(ps.oramaHome); err != nil {
return fmt.Errorf("failed to install Orama binaries: %w", err)
}
@ -470,12 +420,6 @@ func (ps *ProductionSetup) Phase2cInitializeServices(peerAddresses []string, vps
ps.logf(" ⚠️ RQLite initialization warning: %v", err)
}
-// Ensure all directories and files created during service initialization have correct ownership
-// This is critical because directories/files created as root need to be owned by orama user
-if err := ps.fsProvisioner.FixOwnership(); err != nil {
-return fmt.Errorf("failed to fix ownership after service initialization: %w", err)
-}
ps.logf(" ✓ Services initialized")
return nil
}
@ -564,7 +508,6 @@ func (ps *ProductionSetup) Phase4GenerateConfigs(peerAddresses []string, vpsIP s
if err := os.WriteFile(olricConfigPath, []byte(olricConfig), 0644); err != nil {
return fmt.Errorf("failed to save olric config: %w", err)
}
-exec.Command("chown", "orama:orama", olricConfigPath).Run()
ps.logf(" ✓ Olric config generated")
@ -690,12 +633,8 @@ func (ps *ProductionSetup) Phase5CreateSystemdServices(enableHTTPS bool) error {
// Caddy service on ALL nodes (any node may host namespaces and need TLS)
if _, err := os.Stat("/usr/bin/caddy"); err == nil {
-// Create caddy user if it doesn't exist
-exec.Command("useradd", "-r", "-m", "-d", "/home/caddy", "-s", "/sbin/nologin", "caddy").Run()
+// Create caddy data directory
exec.Command("mkdir", "-p", "/var/lib/caddy").Run()
-exec.Command("chown", "caddy:caddy", "/var/lib/caddy").Run()
-exec.Command("mkdir", "-p", "/home/caddy").Run()
-exec.Command("chown", "caddy:caddy", "/home/caddy").Run()
caddyUnit := ps.serviceGenerator.GenerateCaddyService()
if err := ps.serviceController.WriteServiceUnit("caddy.service", caddyUnit); err != nil {

View File

@ -0,0 +1,14 @@
package production
// Central path constants for the Orama Network production environment.
// All services run as root with /opt/orama as the base directory.
const (
OramaBase = "/opt/orama"
OramaBinDir = "/opt/orama/bin"
OramaSrcDir = "/opt/orama/src"
OramaDir = "/opt/orama/.orama"
OramaConfigs = "/opt/orama/.orama/configs"
OramaSecrets = "/opt/orama/.orama/secrets"
OramaData = "/opt/orama/.orama/data"
OramaLogs = "/opt/orama/.orama/logs"
)
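Call sites then derive concrete paths from these constants instead of hardcoding strings. A minimal sketch (the constants are inlined so the snippet stands alone; the helper name is illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Inlined copies of the production path constants from paths.go.
const (
	OramaBase = "/opt/orama"
	OramaData = "/opt/orama/.orama/data"
)

// namespaceStatePath mirrors how a call site might locate a namespace's
// cluster-state.json under the data directory.
func namespaceStatePath(name string) string {
	return filepath.Join(OramaData, "namespaces", name, "cluster-state.json")
}

func main() {
	fmt.Println(namespaceStatePath("demo"))
	// → /opt/orama/.orama/data/namespaces/demo/cluster-state.json
}
```

Joining against a single constant keeps every service, doc, and script pointing at the same base when the layout moves again.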

View File

@@ -81,223 +81,6 @@ func (fp *FilesystemProvisioner) EnsureDirectoryStructure() error {
 	return nil
 }
-
-// FixOwnership changes ownership of .orama directory to orama user
-func (fp *FilesystemProvisioner) FixOwnership() error {
-	// Fix entire .orama directory recursively (includes all data, configs, logs, etc.)
-	cmd := exec.Command("chown", "-R", "orama:orama", fp.oramaDir)
-	if output, err := cmd.CombinedOutput(); err != nil {
-		return fmt.Errorf("failed to set ownership for %s: %w\nOutput: %s", fp.oramaDir, err, string(output))
-	}
-
-	// Also fix home directory ownership
-	cmd = exec.Command("chown", "orama:orama", fp.oramaHome)
-	if output, err := cmd.CombinedOutput(); err != nil {
-		return fmt.Errorf("failed to set ownership for %s: %w\nOutput: %s", fp.oramaHome, err, string(output))
-	}
-
-	// Fix bin directory
-	binDir := filepath.Join(fp.oramaHome, "bin")
-	cmd = exec.Command("chown", "-R", "orama:orama", binDir)
-	if output, err := cmd.CombinedOutput(); err != nil {
-		return fmt.Errorf("failed to set ownership for %s: %w\nOutput: %s", binDir, err, string(output))
-	}
-
-	// Fix npm cache directory
-	npmDir := filepath.Join(fp.oramaHome, ".npm")
-	cmd = exec.Command("chown", "-R", "orama:orama", npmDir)
-	if output, err := cmd.CombinedOutput(); err != nil {
-		return fmt.Errorf("failed to set ownership for %s: %w\nOutput: %s", npmDir, err, string(output))
-	}
-
-	return nil
-}
-
-// UserProvisioner manages system user creation and sudoers setup
-type UserProvisioner struct {
-	username string
-	home     string
-	shell    string
-}
-
-// NewUserProvisioner creates a new user provisioner
-func NewUserProvisioner(username, home, shell string) *UserProvisioner {
-	if shell == "" {
-		shell = "/bin/bash"
-	}
-	return &UserProvisioner{
-		username: username,
-		home:     home,
-		shell:    shell,
-	}
-}
-
-// UserExists checks if the system user exists
-func (up *UserProvisioner) UserExists() bool {
-	cmd := exec.Command("id", up.username)
-	return cmd.Run() == nil
-}
-
-// CreateUser creates the system user
-func (up *UserProvisioner) CreateUser() error {
-	if up.UserExists() {
-		return nil // User already exists
-	}
-	cmd := exec.Command("useradd", "-r", "-m", "-s", up.shell, "-d", up.home, up.username)
-	if err := cmd.Run(); err != nil {
-		return fmt.Errorf("failed to create user %s: %w", up.username, err)
-	}
-	return nil
-}
-
-// SetupSudoersAccess creates sudoers rule for the invoking user
-func (up *UserProvisioner) SetupSudoersAccess(invokerUser string) error {
-	if invokerUser == "" {
-		return nil // Skip if no invoker
-	}
-	sudoersRule := fmt.Sprintf("%s ALL=(orama) NOPASSWD: ALL\n", invokerUser)
-	sudoersFile := "/etc/sudoers.d/orama-access"
-
-	// Check if rule already exists
-	if existing, err := os.ReadFile(sudoersFile); err == nil {
-		if strings.Contains(string(existing), invokerUser) {
-			return nil // Rule already set
-		}
-	}
-
-	// Write sudoers rule
-	if err := os.WriteFile(sudoersFile, []byte(sudoersRule), 0440); err != nil {
-		return fmt.Errorf("failed to create sudoers rule: %w", err)
-	}
-
-	// Validate sudoers file
-	cmd := exec.Command("visudo", "-c", "-f", sudoersFile)
-	if err := cmd.Run(); err != nil {
-		os.Remove(sudoersFile) // Clean up on validation failure
-		return fmt.Errorf("sudoers rule validation failed: %w", err)
-	}
-	return nil
-}
-
-// SetupDeploymentSudoers configures the orama user with permissions needed for
-// managing user deployments via systemd services.
-func (up *UserProvisioner) SetupDeploymentSudoers() error {
-	sudoersFile := "/etc/sudoers.d/orama-deployments"
-
-	// Check if already configured
-	if _, err := os.Stat(sudoersFile); err == nil {
-		return nil // Already configured
-	}
-
-	sudoersContent := `# Orama Network - Deployment Management Permissions
-# Allows orama user to manage systemd services for user deployments
-
-# Systemd service management for orama-deploy-* services
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl daemon-reload
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl start orama-deploy-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl stop orama-deploy-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart orama-deploy-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl enable orama-deploy-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl disable orama-deploy-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl status orama-deploy-*
-
-# Service file management (tee to write, rm to remove)
-orama ALL=(ALL) NOPASSWD: /usr/bin/tee /etc/systemd/system/orama-deploy-*.service
-orama ALL=(ALL) NOPASSWD: /bin/rm -f /etc/systemd/system/orama-deploy-*.service
-`
-
-	// Write sudoers rule
-	if err := os.WriteFile(sudoersFile, []byte(sudoersContent), 0440); err != nil {
-		return fmt.Errorf("failed to create deployment sudoers rule: %w", err)
-	}
-
-	// Validate sudoers file
-	cmd := exec.Command("visudo", "-c", "-f", sudoersFile)
-	if err := cmd.Run(); err != nil {
-		os.Remove(sudoersFile) // Clean up on validation failure
-		return fmt.Errorf("deployment sudoers rule validation failed: %w", err)
-	}
-	return nil
-}
-
-// SetupNamespaceSudoers configures the orama user with permissions needed for
-// managing namespace cluster services via systemd.
-func (up *UserProvisioner) SetupNamespaceSudoers() error {
-	sudoersFile := "/etc/sudoers.d/orama-namespaces"
-
-	// Check if already configured
-	if _, err := os.Stat(sudoersFile); err == nil {
-		return nil // Already configured
-	}
-
-	sudoersContent := `# Orama Network - Namespace Cluster Management Permissions
-# Allows orama user to manage systemd services for namespace clusters
-
-# Systemd service management for orama-namespace-* services
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl daemon-reload
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl start orama-namespace-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl stop orama-namespace-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart orama-namespace-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl enable orama-namespace-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl disable orama-namespace-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl status orama-namespace-*
-orama ALL=(ALL) NOPASSWD: /usr/bin/systemctl is-active orama-namespace-*
-
-# Service file management (tee to write, rm to remove)
-orama ALL=(ALL) NOPASSWD: /usr/bin/tee /etc/systemd/system/orama-namespace-*.service
-orama ALL=(ALL) NOPASSWD: /bin/rm -f /etc/systemd/system/orama-namespace-*.service
-
-# Environment file management for namespace services
-orama ALL=(ALL) NOPASSWD: /usr/bin/tee /home/orama/.orama/namespace/*/env/*
-orama ALL=(ALL) NOPASSWD: /usr/bin/mkdir -p /home/orama/.orama/namespace/*/env
-`
-
-	// Write sudoers rule
-	if err := os.WriteFile(sudoersFile, []byte(sudoersContent), 0440); err != nil {
-		return fmt.Errorf("failed to create namespace sudoers rule: %w", err)
-	}
-
-	// Validate sudoers file
-	cmd := exec.Command("visudo", "-c", "-f", sudoersFile)
-	if err := cmd.Run(); err != nil {
-		os.Remove(sudoersFile) // Clean up on validation failure
-		return fmt.Errorf("namespace sudoers rule validation failed: %w", err)
-	}
-	return nil
-}
-
-// SetupWireGuardSudoers configures the orama user with permissions to manage WireGuard
-func (up *UserProvisioner) SetupWireGuardSudoers() error {
-	sudoersFile := "/etc/sudoers.d/orama-wireguard"
-	sudoersContent := `# Orama Network - WireGuard Management Permissions
-# Allows orama user to manage WireGuard peers
-orama ALL=(ALL) NOPASSWD: /usr/bin/wg set wg0 *
-orama ALL=(ALL) NOPASSWD: /usr/bin/wg show wg0
-orama ALL=(ALL) NOPASSWD: /usr/bin/wg showconf wg0
-orama ALL=(ALL) NOPASSWD: /usr/bin/tee /etc/wireguard/wg0.conf
-`
-
-	// Write sudoers rule (always overwrite to ensure latest)
-	if err := os.WriteFile(sudoersFile, []byte(sudoersContent), 0440); err != nil {
-		return fmt.Errorf("failed to create wireguard sudoers rule: %w", err)
-	}
-
-	// Validate sudoers file
-	cmd := exec.Command("visudo", "-c", "-f", sudoersFile)
-	if err := cmd.Run(); err != nil {
-		os.Remove(sudoersFile)
-		return fmt.Errorf("wireguard sudoers rule validation failed: %w", err)
-	}
-	return nil
-}
-
 // StateDetector checks for existing production state
 type StateDetector struct {

View File

@@ -34,8 +34,6 @@ Wants=network-online.target
 [Service]
 Type=simple
-User=orama
-Group=orama
 Environment=HOME=%[1]s
 Environment=IPFS_PATH=%[2]s
 ExecStartPre=/bin/bash -c 'if [ -f %[3]s/secrets/swarm.key ] && [ ! -f %[2]s/swarm.key ]; then cp %[3]s/secrets/swarm.key %[2]s/swarm.key && chmod 600 %[2]s/swarm.key; fi'
@@ -46,16 +44,7 @@ StandardOutput=append:%[4]s
 StandardError=append:%[4]s
 SyslogIdentifier=orama-ipfs
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[3]s
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -86,8 +75,6 @@ Requires=orama-ipfs.service
 [Service]
 Type=simple
-User=orama
-Group=orama
 WorkingDirectory=%[1]s
 Environment=HOME=%[1]s
 Environment=IPFS_CLUSTER_PATH=%[2]s
@@ -101,16 +88,7 @@ StandardOutput=append:%[3]s
 StandardError=append:%[3]s
 SyslogIdentifier=orama-ipfs-cluster
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[1]s
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -150,8 +128,6 @@ Wants=network-online.target
 [Service]
 Type=simple
-User=orama
-Group=orama
 Environment=HOME=%[1]s
 ExecStart=%[5]s %[2]s
 Restart=always
@@ -160,16 +136,7 @@ StandardOutput=append:%[3]s
 StandardError=append:%[3]s
 SyslogIdentifier=orama-rqlite
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[4]s
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -191,8 +158,6 @@ Wants=network-online.target
 [Service]
 Type=simple
-User=orama
-Group=orama
 Environment=HOME=%[1]s
 Environment=OLRIC_SERVER_CONFIG=%[2]s
 ExecStart=%[5]s
@@ -202,16 +167,7 @@ StandardOutput=append:%[3]s
 StandardError=append:%[3]s
 SyslogIdentifier=olric
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[4]s
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -237,8 +193,6 @@ Requires=wg-quick@wg0.service
 [Service]
 Type=simple
-User=orama
-Group=orama
 WorkingDirectory=%[1]s
 Environment=HOME=%[1]s
 ExecStart=%[1]s/bin/orama-node --config %[2]s/configs/%[3]s
@@ -248,16 +202,7 @@ StandardOutput=append:%[4]s
 StandardError=append:%[4]s
 SyslogIdentifier=orama-node
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[2]s /etc/systemd/system
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -279,8 +224,6 @@ Wants=orama-node.service orama-olric.service
 [Service]
 Type=simple
-User=orama
-Group=orama
 WorkingDirectory=%[1]s
 Environment=HOME=%[1]s
 ExecStart=%[1]s/bin/gateway --config %[2]s/data/gateway.yaml
@@ -290,20 +233,7 @@ StandardOutput=append:%[3]s
 StandardError=append:%[3]s
 SyslogIdentifier=orama-gateway
 
-AmbientCapabilities=CAP_NET_BIND_SERVICE
-CapabilityBoundingSet=CAP_NET_BIND_SERVICE
-# Note: NoNewPrivileges is omitted because it conflicts with AmbientCapabilities
-# The service needs CAP_NET_BIND_SERVICE to bind to privileged ports (80, 443)
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[2]s
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -325,8 +255,6 @@ Wants=network-online.target
 [Service]
 Type=simple
-User=orama
-Group=orama
 Environment=HOME=%[1]s
 Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/lib/node_modules/.bin
 WorkingDirectory=%[1]s
@@ -337,16 +265,7 @@ StandardOutput=append:%[2]s
 StandardError=append:%[2]s
 SyslogIdentifier=anyone-client
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ProtectHome=no
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ProtectControlGroups=yes
-RestrictRealtime=yes
-RestrictSUIDSGID=yes
-ReadWritePaths=%[3]s
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -405,15 +324,11 @@ Wants=network-online.target orama-node.service
 [Service]
 Type=simple
-User=root
 ExecStart=/usr/local/bin/coredns -conf /etc/coredns/Corefile
 Restart=on-failure
 RestartSec=5
 SyslogIdentifier=coredns
-NoNewPrivileges=true
-ProtectSystem=full
-ProtectHome=true
 LimitNOFILE=65536
 TimeoutStopSec=30
 KillMode=mixed
@@ -435,16 +350,12 @@ Wants=orama-node.service
 [Service]
 Type=simple
-User=caddy
-Group=caddy
 ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
 ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile
 TimeoutStopSec=5s
 LimitNOFILE=1048576
 LimitNPROC=512
 PrivateTmp=true
-ProtectSystem=full
-AmbientCapabilities=CAP_NET_BIND_SERVICE
 Restart=on-failure
 RestartSec=5
 SyslogIdentifier=caddy

View File

@@ -47,8 +47,8 @@ func TestGenerateRQLiteService(t *testing.T) {
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			ssg := &SystemdServiceGenerator{
-				oramaHome: "/home/orama",
-				oramaDir:  "/home/orama/.orama",
+				oramaHome: "/opt/orama",
+				oramaDir:  "/opt/orama/.orama",
 			}
 			unit := ssg.GenerateRQLiteService("/usr/local/bin/rqlited", 5001, 7001, tt.joinAddr, tt.advertiseIP)
@@ -81,8 +81,8 @@ func TestGenerateRQLiteService(t *testing.T) {
 // TestGenerateRQLiteServiceArgs verifies the ExecStart command arguments
 func TestGenerateRQLiteServiceArgs(t *testing.T) {
 	ssg := &SystemdServiceGenerator{
-		oramaHome: "/home/orama",
-		oramaDir:  "/home/orama/.orama",
+		oramaHome: "/opt/orama",
+		oramaDir:  "/opt/orama/.orama",
 	}
 	unit := ssg.GenerateRQLiteService("/usr/local/bin/rqlited", 5001, 7001, "10.0.0.1:7001", "10.0.0.2")

View File

@@ -145,11 +145,11 @@ func (wp *WireGuardProvisioner) WriteConfig() error {
 		}
 	}
-	// Fallback to sudo tee (for non-root, e.g. orama user)
-	cmd := exec.Command("sudo", "tee", confPath)
+	// Fallback to tee (for non-root, e.g. orama user)
+	cmd := exec.Command("tee", confPath)
 	cmd.Stdin = strings.NewReader(content)
 	if output, err := cmd.CombinedOutput(); err != nil {
-		return fmt.Errorf("failed to write wg0.conf via sudo: %w\n%s", err, string(output))
+		return fmt.Errorf("failed to write wg0.conf via tee: %w\n%s", err, string(output))
 	}
 	return nil
@@ -198,7 +198,7 @@ func (wp *WireGuardProvisioner) AddPeer(peer WireGuardPeer) error {
 		args = append(args, "endpoint", peer.Endpoint)
 	}
-	cmd := exec.Command("sudo", args...)
+	cmd := exec.Command(args[0], args[1:]...)
 	if output, err := cmd.CombinedOutput(); err != nil {
 		return fmt.Errorf("failed to add peer %s: %w\n%s", peer.AllowedIP, err, string(output))
 	}
@@ -210,7 +210,7 @@ func (wp *WireGuardProvisioner) AddPeer(peer WireGuardPeer) error {
 // RemovePeer removes a peer from the running WireGuard interface
 func (wp *WireGuardProvisioner) RemovePeer(publicKey string) error {
-	cmd := exec.Command("sudo", "wg", "set", "wg0", "peer", publicKey, "remove")
+	cmd := exec.Command("wg", "set", "wg0", "peer", publicKey, "remove")
 	if output, err := cmd.CombinedOutput(); err != nil {
 		return fmt.Errorf("failed to remove peer: %w\n%s", err, string(output))
 	}

View File

@@ -10,7 +10,7 @@ func TestRenderNodeConfig(t *testing.T) {
 	data := NodeConfigData{
 		NodeID:            "node2",
 		P2PPort:           4002,
-		DataDir:           "/home/orama/.orama/node2",
+		DataDir:           "/opt/orama/.orama/node2",
 		RQLiteHTTPPort:    5002,
 		RQLiteRaftPort:    7002,
 		RQLiteJoinAddress: "localhost:5001",

View File

@@ -5,8 +5,6 @@ Wants=orama-node.service
 [Service]
 Type=simple
-User=orama
-Group=orama
 WorkingDirectory={{.HomeDir}}
 Environment=HOME={{.HomeDir}}
 ExecStart={{.HomeDir}}/bin/gateway --config {{.OramaDir}}/data/gateway.yaml
@@ -16,14 +14,7 @@ StandardOutput=journal
 StandardError=journal
 SyslogIdentifier=orama-gateway
 
-AmbientCapabilities=CAP_NET_BIND_SERVICE
-CapabilityBoundingSet=CAP_NET_BIND_SERVICE
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ReadWritePaths={{.OramaDir}}
 
 [Install]
 WantedBy=multi-user.target

View File

@@ -5,8 +5,6 @@ Wants=network-online.target
 [Service]
 Type=simple
-User=orama
-Group=orama
 Environment=HOME={{.HomeDir}}
 Environment=IPFS_PATH={{.IPFSRepoPath}}
 ExecStartPre=/bin/bash -c 'if [ -f {{.SecretsDir}}/swarm.key ] && [ ! -f {{.IPFSRepoPath}}/swarm.key ]; then cp {{.SecretsDir}}/swarm.key {{.IPFSRepoPath}}/swarm.key && chmod 600 {{.IPFSRepoPath}}/swarm.key; fi'
@@ -17,11 +15,7 @@ StandardOutput=journal
 StandardError=journal
 SyslogIdentifier=ipfs-{{.NodeType}}
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ReadWritePaths={{.OramaDir}}
 
 [Install]
 WantedBy=multi-user.target

View File

@@ -6,8 +6,6 @@ Requires=orama-ipfs-{{.NodeType}}.service
 [Service]
 Type=simple
-User=orama
-Group=orama
 WorkingDirectory={{.HomeDir}}
 Environment=HOME={{.HomeDir}}
 Environment=CLUSTER_PATH={{.ClusterPath}}
@@ -18,11 +16,7 @@ StandardOutput=journal
 StandardError=journal
 SyslogIdentifier=ipfs-cluster-{{.NodeType}}
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ReadWritePaths={{.OramaDir}}
 
 [Install]
 WantedBy=multi-user.target

View File

@@ -6,8 +6,6 @@ Requires=orama-ipfs-cluster-{{.NodeType}}.service
 [Service]
 Type=simple
-User=orama
-Group=orama
 WorkingDirectory={{.HomeDir}}
 Environment=HOME={{.HomeDir}}
 ExecStart={{.HomeDir}}/bin/orama-node --config {{.OramaDir}}/configs/{{.ConfigFile}}
@@ -17,11 +15,7 @@ StandardOutput=journal
 StandardError=journal
 SyslogIdentifier=orama-node-{{.NodeType}}
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ReadWritePaths={{.OramaDir}}
 
 [Install]
 WantedBy=multi-user.target

View File

@@ -5,8 +5,6 @@ Wants=network-online.target
 [Service]
 Type=simple
-User=orama
-Group=orama
 Environment=HOME={{.HomeDir}}
 Environment=OLRIC_SERVER_CONFIG={{.ConfigPath}}
 ExecStart=/usr/local/bin/olric-server
@@ -16,11 +14,7 @@ StandardOutput=journal
 StandardError=journal
 SyslogIdentifier=olric
 
-NoNewPrivileges=yes
 PrivateTmp=yes
-ProtectSystem=strict
-ReadWritePaths={{.OramaDir}}
 
 [Install]
 WantedBy=multi-user.target

View File

@@ -65,7 +65,7 @@ type PeerInfo struct {
 type Handler struct {
 	logger       *zap.Logger
 	rqliteClient rqlite.Client
-	oramaDir     string // e.g., /home/orama/.orama
+	oramaDir     string // e.g., /opt/orama/.orama
 }
 
 // NewHandler creates a new join handler
@@ -271,7 +271,7 @@ func (h *Handler) assignWGIP(ctx context.Context) (string, error) {
 // addWGPeerLocally adds a peer to the local wg0 interface and persists to config
 func (h *Handler) addWGPeerLocally(pubKey, publicIP, wgIP string) error {
 	// Add to running interface with persistent-keepalive
-	cmd := exec.Command("sudo", "wg", "set", "wg0",
+	cmd := exec.Command("wg", "set", "wg0",
 		"peer", pubKey,
 		"endpoint", fmt.Sprintf("%s:51820", publicIP),
 		"allowed-ips", fmt.Sprintf("%s/32", wgIP),
@@ -298,7 +298,7 @@ func (h *Handler) addWGPeerLocally(pubKey, publicIP, wgIP string) error {
 		pubKey, publicIP, wgIP)
 	newConf := string(data) + peerSection
 
-	writeCmd := exec.Command("sudo", "tee", confPath)
+	writeCmd := exec.Command("tee", confPath)
 	writeCmd.Stdin = strings.NewReader(newConf)
 	if output, err := writeCmd.CombinedOutput(); err != nil {
 		h.logger.Warn("could not persist peer to wg0.conf", zap.Error(err), zap.String("output", string(output)))

View File

@@ -59,7 +59,7 @@ func NewHTTPSGateway(logger *logging.ColoredLogger, cfg *config.HTTPGatewayConfi
 	// Use Let's Encrypt STAGING (consistent with SNI gateway)
 	cacheDir := cfg.HTTPS.CacheDir
 	if cacheDir == "" {
-		cacheDir = "/home/orama/.orama/tls-cache"
+		cacheDir = "/opt/orama/.orama/tls-cache"
 	}
 	// Use Let's Encrypt STAGING - provides higher rate limits for testing/development

View File

@@ -625,7 +625,7 @@ curl -sf -X POST 'http://localhost:4501/api/v0/version' 2>/dev/null | python3 -c
 echo "$SEP"
 curl -sf 'http://localhost:9094/id' 2>/dev/null | python3 -c "import sys,json; print(json.load(sys.stdin).get('version',''))" 2>/dev/null || echo unknown
 echo "$SEP"
-sudo test -f /home/orama/.orama/data/ipfs/repo/swarm.key && echo yes || echo no
+test -f /opt/orama/.orama/data/ipfs/repo/swarm.key && echo yes || echo no
 echo "$SEP"
 curl -sf -X POST 'http://localhost:4501/api/v0/bootstrap/list' 2>/dev/null | python3 -c "import sys,json; peers=json.load(sys.stdin).get('Peers',[]); print(len(peers))" 2>/dev/null || echo -1
 `

View File

@@ -30,7 +30,7 @@ func (n *Node) syncWireGuardPeers(ctx context.Context) error {
 	}
 
 	// Check if wg0 interface exists
-	out, err := exec.CommandContext(ctx, "sudo", "wg", "show", "wg0").CombinedOutput()
+	out, err := exec.CommandContext(ctx, "wg", "show", "wg0").CombinedOutput()
 	if err != nil {
 		n.logger.ComponentInfo(logging.ComponentNode, "WireGuard interface wg0 not active, skipping peer sync")
 		return nil
@@ -116,7 +116,7 @@ func (n *Node) ensureWireGuardSelfRegistered(ctx context.Context) {
 	}
 
 	// Check if wg0 is active
-	out, err := exec.CommandContext(ctx, "sudo", "wg", "show", "wg0").CombinedOutput()
+	out, err := exec.CommandContext(ctx, "wg", "show", "wg0").CombinedOutput()
 	if err != nil {
 		return // WG not active, nothing to register
 	}

View File

@@ -17,7 +17,7 @@ func (r *RQLiteManager) rqliteDataDirPath() (string, error) {
 }
 
 func (r *RQLiteManager) resolveMigrationsDir() (string, error) {
-	productionPath := "/home/orama/src/migrations"
+	productionPath := "/opt/orama/src/migrations"
 	if _, err := os.Stat(productionPath); err == nil {
 		return productionPath, nil
 	}

View File

@@ -47,7 +47,7 @@ func (m *Manager) StartService(namespace string, serviceType ServiceType) error
 		zap.String("service", svcName),
 		zap.String("namespace", namespace))
 
-	cmd := exec.Command("sudo", "-n", "systemctl", "start", svcName)
+	cmd := exec.Command("systemctl", "start", svcName)
 	m.logger.Debug("Executing systemctl command",
 		zap.String("cmd", cmd.String()),
 		zap.Strings("args", cmd.Args))
@@ -75,7 +75,7 @@ func (m *Manager) StopService(namespace string, serviceType ServiceType) error {
 		zap.String("service", svcName),
 		zap.String("namespace", namespace))
 
-	cmd := exec.Command("sudo", "-n", "systemctl", "stop", svcName)
+	cmd := exec.Command("systemctl", "stop", svcName)
 	if output, err := cmd.CombinedOutput(); err != nil {
 		// Don't error if service is already stopped or doesn't exist
 		if strings.Contains(string(output), "not loaded") || strings.Contains(string(output), "inactive") {
@@ -96,7 +96,7 @@ func (m *Manager) RestartService(namespace string, serviceType ServiceType) erro
 		zap.String("service", svcName),
 		zap.String("namespace", namespace))
 
-	cmd := exec.Command("sudo", "-n", "systemctl", "restart", svcName)
+	cmd := exec.Command("systemctl", "restart", svcName)
 	if output, err := cmd.CombinedOutput(); err != nil {
 		return fmt.Errorf("failed to restart %s: %w; output: %s", svcName, err, string(output))
 	}
@@ -112,7 +112,7 @@ func (m *Manager) EnableService(namespace string, serviceType ServiceType) error
 		zap.String("service", svcName),
 		zap.String("namespace", namespace))
 
-	cmd := exec.Command("sudo", "-n", "systemctl", "enable", svcName)
+	cmd := exec.Command("systemctl", "enable", svcName)
 	if output, err := cmd.CombinedOutput(); err != nil {
 		return fmt.Errorf("failed to enable %s: %w; output: %s", svcName, err, string(output))
 	}
@@ -128,7 +128,7 @@ func (m *Manager) DisableService(namespace string, serviceType ServiceType) erro
 		zap.String("service", svcName),
 		zap.String("namespace", namespace))
 
-	cmd := exec.Command("sudo", "-n", "systemctl", "disable", svcName)
+	cmd := exec.Command("systemctl", "disable", svcName)
 	if output, err := cmd.CombinedOutput(); err != nil {
 		// Don't error if service is already disabled or doesn't exist
 		if strings.Contains(string(output), "not loaded") {
@@ -145,7 +145,7 @@ func (m *Manager) DisableService(namespace string, serviceType ServiceType) erro
 // IsServiceActive checks if a namespace service is active
 func (m *Manager) IsServiceActive(namespace string, serviceType ServiceType) (bool, error) {
 	svcName := m.serviceName(namespace, serviceType)
-	cmd := exec.Command("sudo", "-n", "systemctl", "is-active", svcName)
+	cmd := exec.Command("systemctl", "is-active", svcName)
 	output, err := cmd.CombinedOutput()
 	outputStr := strings.TrimSpace(string(output))
@@ -185,7 +185,7 @@ func (m *Manager) IsServiceActive(namespace string, serviceType ServiceType) (bo
 // ReloadDaemon reloads systemd daemon configuration
 func (m *Manager) ReloadDaemon() error {
 	m.logger.Info("Reloading systemd daemon")
-	cmd := exec.Command("sudo", "-n", "systemctl", "daemon-reload")
+	cmd := exec.Command("systemctl", "daemon-reload")
 	if output, err := cmd.CombinedOutput(); err != nil {
 		return fmt.Errorf("failed to reload systemd daemon: %w; output: %s", err, string(output))
 	}
@@ -228,7 +228,7 @@ func (m *Manager) StartAllNamespaceServices(namespace string) error {
 // ListNamespaceServices returns all namespace services currently registered in systemd
 func (m *Manager) ListNamespaceServices() ([]string, error) {
-	cmd := exec.Command("sudo", "-n", "systemctl", "list-units", "--all", "--no-legend", "orama-namespace-*@*.service")
+	cmd := exec.Command("systemctl", "list-units", "--all", "--no-legend", "orama-namespace-*@*.service")
 	output, err := cmd.CombinedOutput()
 	if err != nil {
 		return nil, fmt.Errorf("failed to list namespace services: %w; output: %s", err, string(output))
@@ -260,7 +260,7 @@ func (m *Manager) StopAllNamespaceServicesGlobally() error {
 	for _, svc := range services {
 		m.logger.Info("Stopping service", zap.String("service", svc))
-		cmd := exec.Command("sudo", "-n", "systemctl", "stop", svc)
+		cmd := exec.Command("systemctl", "stop", svc)
 		if output, err := cmd.CombinedOutput(); err != nil {
 			m.logger.Warn("Failed to stop service",
 				zap.String("service", svc),

View File

@@ -86,7 +86,7 @@ echo ""
# 4. IPFS
echo "── IPFS ──"
-PEERS=$(sudo -u orama IPFS_PATH=/home/orama/.orama/data/ipfs/repo /usr/local/bin/ipfs swarm peers 2>/dev/null)
+PEERS=$(IPFS_PATH=/opt/orama/.orama/data/ipfs/repo /usr/local/bin/ipfs swarm peers 2>/dev/null)
if [ -n "$PEERS" ]; then
  COUNT=$(echo "$PEERS" | wc -l)
  echo " Connected peers: $COUNT"

View File

@@ -50,11 +50,10 @@ ufw --force reset
ufw allow 22/tcp
ufw --force enable

-echo " Killing orama processes..."
-pkill -u orama 2>/dev/null || true
-sleep 1
-echo " Removing orama user and data..."
+echo " Removing orama data..."
+rm -rf /opt/orama
+echo " Removing legacy user and data..."
userdel -r orama 2>/dev/null || true
rm -rf /home/orama

View File

@@ -3,34 +3,27 @@
# Run as root on the target VPS.
#
# What it does:
-# 1. Extracts source to /home/orama/src/
+# 1. Extracts source to /opt/orama/src/
# 2. Installs CLI to /usr/local/bin/orama
# All other binaries are built from source during `orama install`.
#
-# Usage: sudo bash /home/orama/src/scripts/extract-deploy.sh
+# Usage: sudo bash /opt/orama/src/scripts/extract-deploy.sh

set -e

ARCHIVE="/tmp/network-source.tar.gz"
-SRC_DIR="/home/orama/src"
-BIN_DIR="/home/orama/bin"
+SRC_DIR="/opt/orama/src"
+BIN_DIR="/opt/orama/bin"

if [ ! -f "$ARCHIVE" ]; then
  echo "Error: $ARCHIVE not found"
  exit 1
fi

-# Ensure orama user exists
-if ! id -u orama &>/dev/null; then
-  echo "Creating 'orama' user..."
-  useradd -m -s /bin/bash orama
-fi
-
echo "Extracting source..."
rm -rf "$SRC_DIR"
mkdir -p "$SRC_DIR" "$BIN_DIR"
tar xzf "$ARCHIVE" -C "$SRC_DIR"
-chown -R orama:orama "$SRC_DIR" || true

# Install CLI binary
if [ -f "$SRC_DIR/bin-linux/orama" ]; then
@@ -41,6 +34,4 @@ else
  echo " ⚠️ CLI binary not found in archive (bin-linux/orama)"
fi

-chown -R orama:orama "$BIN_DIR" || true
-
echo "Done. Ready for: sudo orama install --vps-ip <ip> ..."
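The extract flow is easy to rehearse end-to-end before pointing it at a real VPS. A sketch with every path redirected into a scratch area (an assumption for safe testing; the real script hardcodes `/tmp/network-source.tar.gz`, `/opt/orama`, and `/usr/local/bin`):

```shell
#!/usr/bin/env sh
# Rehearse the extract-deploy steps against a scratch directory.
set -e
work="$(mktemp -d)"
ARCHIVE="$work/network-source.tar.gz"
SRC_DIR="$work/opt/orama/src"
BIN_DIR="$work/opt/orama/bin"

# Build a fake source archive containing a CLI stub, as the deploy
# pipeline would upload it under bin-linux/orama
stage="$work/stage"
mkdir -p "$stage/bin-linux"
printf '#!/bin/sh\necho orama-cli\n' > "$stage/bin-linux/orama"
tar czf "$ARCHIVE" -C "$stage" .

# Same steps as the script: wipe, recreate, extract
rm -rf "$SRC_DIR"
mkdir -p "$SRC_DIR" "$BIN_DIR"
tar xzf "$ARCHIVE" -C "$SRC_DIR"

# Install the CLI (into the scratch bin dir instead of /usr/local/bin)
install -m 0755 "$SRC_DIR/bin-linux/orama" "$BIN_DIR/orama"
"$BIN_DIR/orama"   # prints "orama-cli"
```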

scripts/migrate-to-opt.sh (new executable file, 93 lines)

@@ -0,0 +1,93 @@
#!/usr/bin/env bash
#
# Migrate an existing node from /home/orama to /opt/orama.
#
# This is a one-time migration for nodes installed with the old architecture
# (dedicated orama user, /home/orama base). After migration, redeploy with
# the new root-based architecture.
#
# Usage:
#   scripts/migrate-to-opt.sh <user@host> <password>
#
# Example:
#   scripts/migrate-to-opt.sh root@51.195.109.238 'mypassword'
#
set -euo pipefail

if [[ $# -lt 2 ]]; then
  echo "Usage: $0 <user@host> <password>"
  exit 1
fi

USERHOST="$1"
PASS="$2"
SSH_OPTS=(-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 -o LogLevel=ERROR)

echo "== Migrating $USERHOST from /home/orama → /opt/orama =="
echo ""

sshpass -p "$PASS" ssh -n "${SSH_OPTS[@]}" "$USERHOST" 'bash -s' <<'REMOTE'
set -e

echo "1. Stopping all services..."
systemctl stop orama-node orama-gateway orama-ipfs orama-ipfs-cluster orama-olric orama-anyone-relay orama-anyone-client coredns caddy 2>/dev/null || true
systemctl disable orama-node orama-gateway orama-ipfs orama-ipfs-cluster orama-olric orama-anyone-relay orama-anyone-client coredns caddy 2>/dev/null || true

echo "2. Creating /opt/orama..."
mkdir -p /opt/orama

echo "3. Migrating data..."
if [ -d /home/orama/.orama ]; then
  cp -a /home/orama/.orama /opt/orama/
  echo "   .orama/ copied"
fi
if [ -d /home/orama/bin ]; then
  cp -a /home/orama/bin /opt/orama/
  echo "   bin/ copied"
fi
if [ -d /home/orama/src ]; then
  cp -a /home/orama/src /opt/orama/
  echo "   src/ copied"
fi

echo "4. Removing old service files..."
rm -f /etc/systemd/system/orama-*.service
rm -f /etc/systemd/system/coredns.service
rm -f /etc/systemd/system/caddy.service
systemctl daemon-reload

echo "5. Removing orama user..."
userdel -r orama 2>/dev/null || true
rm -rf /home/orama

echo "6. Removing old sudoers files..."
rm -f /etc/sudoers.d/orama-access
rm -f /etc/sudoers.d/orama-deployments
rm -f /etc/sudoers.d/orama-wireguard

echo "7. Tearing down WireGuard (will be re-created on install)..."
systemctl stop wg-quick@wg0 2>/dev/null || true
wg-quick down wg0 2>/dev/null || true
systemctl disable wg-quick@wg0 2>/dev/null || true
rm -f /etc/wireguard/wg0.conf

echo "8. Resetting UFW..."
ufw --force reset
ufw allow 22/tcp
ufw --force enable

echo "9. Cleaning temp files..."
rm -f /tmp/orama /tmp/network-source.tar.gz /tmp/network-source.zip
rm -rf /tmp/network-extract /tmp/coredns-build /tmp/caddy-build

echo ""
echo "Migration complete. Data preserved at /opt/orama/"
echo "Old /home/orama removed."
echo ""
echo "Next: redeploy with new architecture:"
echo "  ./bin/orama install --vps-ip <ip> --nameserver --domain <domain> --base-domain <domain>"
REMOTE

echo ""
echo "Done."
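The data-preserving step of the migration (step 3 in the remote block) is plain `cp -a`, so it can be rehearsed locally. A sketch against scratch stand-ins — `old` and `new` here are hypothetical substitutes for `/home/orama` and `/opt/orama`:

```shell
#!/usr/bin/env sh
# Rehearse step 3 of the migration with fake directory trees.
set -e
work="$(mktemp -d)"
old="$work/home/orama"   # stand-in for /home/orama
new="$work/opt/orama"    # stand-in for /opt/orama

# Fake pre-migration layout
mkdir -p "$old/.orama/logs" "$old/bin" "$old/src"
echo "secret" > "$old/.orama/node.key"

mkdir -p "$new"
for d in .orama bin src; do
  if [ -d "$old/$d" ]; then
    cp -a "$old/$d" "$new/"
    echo "   $d/ copied"
  fi
done

# Data must survive with contents intact before the old tree is removed
cat "$new/.orama/node.key"   # prints "secret"
```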

View File

@@ -51,7 +51,7 @@ fix_node() {
  local cmd
  cmd=$(cat <<'REMOTE'
set -e
-PREFS="/home/orama/.orama/preferences.yaml"
+PREFS="/opt/orama/.orama/preferences.yaml"

# Only patch nodes that have the Anyone relay service installed
if [ ! -f /etc/systemd/system/orama-anyone-relay.service ]; then

View File

@@ -133,7 +133,7 @@ if [[ "$confirm" != "y" && "$confirm" != "Y" ]]; then
fi

echo ""
-RAFT_DIR="/home/orama/.orama/data/rqlite/raft"
+RAFT_DIR="/opt/orama/.orama/data/rqlite/raft"
BACKUP_DIR="/tmp/rqlite-raft-backup"

# ── Phase 1: Stop orama-node on ALL nodes ───────────────────────────────────
@@ -286,4 +286,4 @@ echo ""
echo "Next steps:"
echo " 1. Run 'scripts/inspect.sh --devnet' to verify full cluster health"
echo " 2. If some nodes show Candidate state, give them more time (up to 5 min)"
-echo " 3. If nodes fail to join, check /home/orama/.orama/logs/rqlite-node.log on the node"
+echo " 3. If nodes fail to join, check /opt/orama/.orama/logs/rqlite-node.log on the node"

View File

@@ -7,14 +7,12 @@ PartOf=orama-node.service

[Service]
Type=simple
-User=orama
-Group=orama
-WorkingDirectory=/home/orama
-EnvironmentFile=/home/orama/.orama/data/namespaces/%i/gateway.env
+WorkingDirectory=/opt/orama
+EnvironmentFile=/opt/orama/.orama/data/namespaces/%i/gateway.env

# Use shell to properly expand NODE_ID from env file
-ExecStart=/bin/sh -c 'exec /home/orama/bin/gateway --config ${GATEWAY_CONFIG}'
+ExecStart=/bin/sh -c 'exec /opt/orama/bin/gateway --config ${GATEWAY_CONFIG}'

TimeoutStopSec=30s
KillMode=mixed
@@ -27,14 +25,7 @@ StandardOutput=journal
StandardError=journal
SyslogIdentifier=orama-gateway-%i

-# Security hardening
-NoNewPrivileges=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ReadWritePaths=/home/orama/.orama/data/namespaces
+PrivateTmp=yes

LimitNOFILE=65536
MemoryMax=1G

View File

@@ -7,14 +7,12 @@ PartOf=orama-node.service

[Service]
Type=simple
-User=orama
-Group=orama
-WorkingDirectory=/home/orama
+WorkingDirectory=/opt/orama

# Olric reads config from environment variable (set in env file)
-EnvironmentFile=/home/orama/.orama/data/namespaces/%i/olric.env
-ExecStart=/home/orama/bin/olric-server
+EnvironmentFile=/opt/orama/.orama/data/namespaces/%i/olric.env
+ExecStart=/opt/orama/bin/olric-server

TimeoutStopSec=30s
KillMode=mixed
@@ -27,14 +25,7 @@ StandardOutput=journal
StandardError=journal
SyslogIdentifier=orama-olric-%i

-# Security hardening
-NoNewPrivileges=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ReadWritePaths=/home/orama/.orama/data/namespaces
+PrivateTmp=yes

LimitNOFILE=65536
MemoryMax=2G

View File

@@ -7,12 +7,10 @@ StopWhenUnneeded=false

[Service]
Type=simple
-User=orama
-Group=orama
-WorkingDirectory=/home/orama
+WorkingDirectory=/opt/orama

# Environment file contains namespace-specific config
-EnvironmentFile=/home/orama/.orama/data/namespaces/%i/rqlite.env
+EnvironmentFile=/opt/orama/.orama/data/namespaces/%i/rqlite.env

# Start rqlited with args from environment (using shell to properly expand JOIN_ARGS)
ExecStart=/bin/sh -c 'exec /usr/local/bin/rqlited \
@@ -21,7 +19,7 @@ ExecStart=/bin/sh -c 'exec /usr/local/bin/rqlited \
    -http-adv-addr ${HTTP_ADV_ADDR} \
    -raft-adv-addr ${RAFT_ADV_ADDR} \
    ${JOIN_ARGS} \
-    /home/orama/.orama/data/namespaces/%i/rqlite/${NODE_ID}'
+    /opt/orama/.orama/data/namespaces/%i/rqlite/${NODE_ID}'

# Graceful shutdown
TimeoutStopSec=30s
@@ -37,15 +35,8 @@ StandardOutput=journal
StandardError=journal
SyslogIdentifier=orama-rqlite-%i

-# Security hardening
-NoNewPrivileges=yes
-ProtectSystem=strict
-ProtectHome=read-only
-ProtectKernelTunables=yes
-ProtectKernelModules=yes
-ReadWritePaths=/home/orama/.orama/data/namespaces

# Resource limits
+PrivateTmp=yes
LimitNOFILE=65536
MemoryMax=2G
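The `ExecStart=/bin/sh -c 'exec ... ${JOIN_ARGS} ...'` pattern in this unit exists because systemd cannot word-split one environment variable into several argv entries; the shell does that. A sketch of how the per-namespace env file drives the final command line — all variable values below are made-up examples, and sourcing the file with `set -a` merely emulates systemd's `EnvironmentFile=` parsing:

```shell
#!/usr/bin/env sh
# Emulate systemd loading rqlite.env for instance %i=myns, then the shell
# expanding the ExecStart line. Values are illustrative only.
set -e
envfile="$(mktemp)"
cat > "$envfile" <<'EOF'
NODE_ID="node-1"
HTTP_ADV_ADDR="10.0.0.1:5001"
RAFT_ADV_ADDR="10.0.0.1:7001"
JOIN_ARGS="-join http://10.0.0.2:5001"
EOF

# systemd parses KEY=VALUE pairs itself; `set -a` + sourcing is the closest
# shell equivalent (quotes added so the file is also valid sh)
set -a; . "$envfile"; set +a

# Word-splitting of ${JOIN_ARGS} into two arguments is exactly why the unit
# wraps rqlited in `/bin/sh -c 'exec ...'` instead of invoking it directly
cmd="exec /usr/local/bin/rqlited -http-adv-addr ${HTTP_ADV_ADDR} -raft-adv-addr ${RAFT_ADV_ADDR} ${JOIN_ARGS} /opt/orama/.orama/data/namespaces/myns/rqlite/${NODE_ID}"
echo "$cmd"
```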