feat: enhance IPFS and Cluster integration in setup

- Added automatic setup for IPFS and IPFS Cluster during the network setup process.
  - Implemented initialization of IPFS repositories and Cluster configurations for each node.
  - Enhanced Makefile to support starting IPFS and Cluster daemons with improved logging.
- Introduced a new documentation guide for IPFS Cluster setup, detailing configuration and verification steps.
- Updated changelog to reflect the new features and improvements.
anonpenguin23 2025-11-05 09:01:55 +02:00
parent cf26c1af2c
commit d6009bb33f
9 changed files with 925 additions and 88 deletions


@ -30,6 +30,15 @@ if [ -z "$OTHER_FILES" ]; then
exit 0
fi
# Check for skip flag
# To skip changelog generation, set SKIP_CHANGELOG=1 before committing:
# SKIP_CHANGELOG=1 git commit -m "your message"
# SKIP_CHANGELOG=1 git commit
if [ "$SKIP_CHANGELOG" = "1" ] || [ "$SKIP_CHANGELOG" = "true" ]; then
echo -e "${YELLOW}Skipping changelog update (SKIP_CHANGELOG is set)${NOCOLOR}"
exit 0
fi
# Update changelog before commit
if [ -f "$CHANGELOG_SCRIPT" ]; then
echo -e "\n${CYAN}Updating changelog...${NOCOLOR}"


@ -13,14 +13,17 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Deprecated
### Fixed
## [0.56.0] - 2025-11-05
### Added
- Added IPFS storage endpoints to the Gateway for content upload, pinning, status, retrieval, and unpinning.
- Introduced `StorageClient` interface and implementation in the Go client library for interacting with the new IPFS storage endpoints.
- Added support for automatically starting IPFS daemon, IPFS Cluster daemon, and Olric cache server in the `dev` environment setup.
### Changed
- Updated Gateway configuration to include settings for IPFS Cluster API URL, IPFS API URL, timeout, and replication factor.
- Refactored Olric configuration generation to use a simpler, local-environment focused setup.
- Improved IPFS content retrieval (`Get`) to fall back to the IPFS Gateway (port 8080) if the IPFS API (port 5001) returns a 404.
@ -30,34 +33,18 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.55.0] - 2025-11-05
### Added
- Added IPFS storage endpoints to the Gateway for content upload, pinning, status, retrieval, and unpinning.
- Introduced `StorageClient` interface and implementation in the Go client library for interacting with the new IPFS storage endpoints.
- Added support for automatically starting IPFS daemon, IPFS Cluster daemon, and Olric cache server in the `dev` environment setup.
### Changed
- Updated Gateway configuration to include settings for IPFS Cluster API URL, IPFS API URL, timeout, and replication factor.
- Refactored Olric configuration generation to use a simpler, local-environment focused setup.
- Improved `dev` environment logging to include logs from IPFS and Olric services when running.
### Deprecated
### Removed
### Fixed
\n
## [0.54.0] - 2025-11-03
### Added
- Integrated Olric distributed cache for high-speed key-value storage and caching.
- Added new HTTP Gateway endpoints for cache operations (GET, PUT, DELETE, SCAN) via `/v1/cache/`.
- Added `olric_servers` and `olric_timeout` configuration options to the Gateway.
- Updated the automated installation script (`install-debros-network.sh`) to include Olric installation, configuration, and firewall rules (ports 3320, 3322).
### Changed
- Refactored README for better clarity and organization, focusing on quick start and core features.
### Deprecated
@ -65,12 +52,17 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.18] - 2025-11-03
### Added
\n
### Changed
- Increased the connection timeout during peer discovery from 15 seconds to 20 seconds to improve connection reliability.
- Removed unnecessary debug logging related to filtering out ephemeral port addresses during peer exchange.
@ -79,13 +71,17 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.17] - 2025-11-03
### Added
- Added a new Git `pre-commit` hook to automatically update the changelog and version before committing, ensuring version consistency.
### Changed
- Refactored the `update_changelog.sh` script to support different execution contexts (pre-commit vs. pre-push), allowing it to analyze only staged changes during commit.
- The Git `pre-push` hook was simplified by removing the changelog update logic, which is now handled by the `pre-commit` hook.
@ -94,12 +90,17 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.16] - 2025-11-03
### Added
\n
### Changed
- Improved the changelog generation script to prevent infinite loops when the only unpushed commit is a previous changelog update.
### Deprecated
@ -107,12 +108,17 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.15] - 2025-11-03
### Added
\n
### Changed
- Improved the pre-push git hook to automatically commit updated changelog and Makefile after generation.
- Updated the changelog generation script to load the OpenRouter API key from the .env file or environment variables for better security.
- Modified the pre-push hook to read user confirmation from /dev/tty for better compatibility.
@ -124,12 +130,17 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.15] - 2025-11-03
### Added
\n
### Changed
- Improved the pre-push git hook to automatically commit updated changelog and Makefile after generation.
- Updated the changelog generation script to load the OpenRouter API key from the .env file or environment variables for better security.
- Modified the pre-push hook to read user confirmation from /dev/tty for better compatibility.
@ -141,14 +152,18 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.14] - 2025-11-03
### Added
- Added a new `install-hooks` target to the Makefile to easily set up git hooks.
- Added a script (`scripts/install-hooks.sh`) to copy git hooks from `.githooks` to `.git/hooks`.
### Changed
- Improved the pre-push git hook to automatically commit the updated `CHANGELOG.md` and `Makefile` after generating the changelog.
- Updated the changelog generation script (`scripts/update_changelog.sh`) to load the OpenRouter API key from the `.env` file or environment variables, improving security and configuration.
- Modified the pre-push hook to read user confirmation from `/dev/tty` for better compatibility in various terminal environments.
@ -160,14 +175,18 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.14] - 2025-11-03
### Added
- Added a new `install-hooks` target to the Makefile to easily set up git hooks.
- Added a script (`scripts/install-hooks.sh`) to copy git hooks from `.githooks` to `.git/hooks`.
### Changed
- Improved the pre-push git hook to automatically commit the updated `CHANGELOG.md` and `Makefile` after generating the changelog.
- Updated the changelog generation script (`scripts/update_changelog.sh`) to load the OpenRouter API key from the `.env` file or environment variables, improving security and configuration.
- Modified the pre-push hook to read user confirmation from `/dev/tty` for better compatibility in various terminal environments.
@ -177,6 +196,7 @@ The format is based on [Keep a Changelog][keepachangelog] and adheres to [Semant
### Removed
### Fixed
\n
## [0.53.8] - 2025-10-31

Makefile

@ -19,7 +19,7 @@ test-e2e:
# Network - Distributed P2P Database System
# Makefile for development and build tasks
.PHONY: build clean test run-node run-node2 run-node3 run-example deps tidy fmt vet lint clear-ports install-hooks kill
VERSION := 0.56.0
COMMIT ?= $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
@ -109,6 +109,102 @@ dev: build
echo " ⚠️ systemctl not found - skipping Anon"; \
fi; \
fi
@echo "Initializing IPFS and Cluster for all nodes..."
@if command -v ipfs >/dev/null 2>&1 && command -v ipfs-cluster-service >/dev/null 2>&1; then \
CLUSTER_SECRET=$$HOME/.debros/cluster-secret; \
if [ ! -f $$CLUSTER_SECRET ]; then \
echo " Generating shared cluster secret..."; \
command -v openssl >/dev/null 2>&1 && openssl rand -hex 32 > $$CLUSTER_SECRET || echo "0000000000000000000000000000000000000000000000000000000000000000" > $$CLUSTER_SECRET; \
fi; \
SECRET=$$(cat $$CLUSTER_SECRET); \
echo " Setting up bootstrap node (IPFS: 5001, Cluster: 9094)..."; \
if [ ! -d $$HOME/.debros/bootstrap/ipfs/repo ]; then \
echo " Initializing IPFS..."; \
mkdir -p $$HOME/.debros/bootstrap/ipfs; \
IPFS_PATH=$$HOME/.debros/bootstrap/ipfs/repo ipfs init --profile=server 2>&1 | grep -v "generating" | grep -v "peer identity" || true; \
IPFS_PATH=$$HOME/.debros/bootstrap/ipfs/repo ipfs config --json Addresses.API '["/ip4/127.0.0.1/tcp/5001"]' 2>&1 | grep -v "generating" || true; \
IPFS_PATH=$$HOME/.debros/bootstrap/ipfs/repo ipfs config --json Addresses.Gateway '["/ip4/127.0.0.1/tcp/8080"]' 2>&1 | grep -v "generating" || true; \
IPFS_PATH=$$HOME/.debros/bootstrap/ipfs/repo ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001","/ip6/::/tcp/4001"]' 2>&1 | grep -v "generating" || true; \
fi; \
echo " Initializing IPFS Cluster..."; \
mkdir -p $$HOME/.debros/bootstrap/ipfs-cluster; \
env IPFS_CLUSTER_PATH=$$HOME/.debros/bootstrap/ipfs-cluster ipfs-cluster-service init --force >/dev/null 2>&1 || true; \
jq '.cluster.peername = "bootstrap" | .cluster.secret = "'$$SECRET'" | .cluster.listen_multiaddress = ["/ip4/0.0.0.0/tcp/9096"] | .consensus.crdt.cluster_name = "debros-cluster" | .consensus.crdt.trusted_peers = ["*"] | .api.restapi.http_listen_multiaddress = "/ip4/0.0.0.0/tcp/9094" | .api.ipfsproxy.listen_multiaddress = "/ip4/127.0.0.1/tcp/9095" | .api.pinsvcapi.http_listen_multiaddress = "/ip4/127.0.0.1/tcp/9097" | .ipfs_connector.ipfshttp.node_multiaddress = "/ip4/127.0.0.1/tcp/5001"' $$HOME/.debros/bootstrap/ipfs-cluster/service.json > $$HOME/.debros/bootstrap/ipfs-cluster/service.json.tmp && mv $$HOME/.debros/bootstrap/ipfs-cluster/service.json.tmp $$HOME/.debros/bootstrap/ipfs-cluster/service.json; \
echo " Setting up node2 (IPFS: 5002, Cluster: 9104)..."; \
if [ ! -d $$HOME/.debros/node2/ipfs/repo ]; then \
echo " Initializing IPFS..."; \
mkdir -p $$HOME/.debros/node2/ipfs; \
IPFS_PATH=$$HOME/.debros/node2/ipfs/repo ipfs init --profile=server 2>&1 | grep -v "generating" | grep -v "peer identity" || true; \
IPFS_PATH=$$HOME/.debros/node2/ipfs/repo ipfs config --json Addresses.API '["/ip4/127.0.0.1/tcp/5002"]' 2>&1 | grep -v "generating" || true; \
IPFS_PATH=$$HOME/.debros/node2/ipfs/repo ipfs config --json Addresses.Gateway '["/ip4/127.0.0.1/tcp/8081"]' 2>&1 | grep -v "generating" || true; \
IPFS_PATH=$$HOME/.debros/node2/ipfs/repo ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002","/ip6/::/tcp/4002"]' 2>&1 | grep -v "generating" || true; \
fi; \
echo " Initializing IPFS Cluster..."; \
mkdir -p $$HOME/.debros/node2/ipfs-cluster; \
env IPFS_CLUSTER_PATH=$$HOME/.debros/node2/ipfs-cluster ipfs-cluster-service init --force >/dev/null 2>&1 || true; \
jq '.cluster.peername = "node2" | .cluster.secret = "'$$SECRET'" | .cluster.listen_multiaddress = ["/ip4/0.0.0.0/tcp/9106"] | .consensus.crdt.cluster_name = "debros-cluster" | .consensus.crdt.trusted_peers = ["*"] | .api.restapi.http_listen_multiaddress = "/ip4/0.0.0.0/tcp/9104" | .api.ipfsproxy.listen_multiaddress = "/ip4/127.0.0.1/tcp/9105" | .api.pinsvcapi.http_listen_multiaddress = "/ip4/127.0.0.1/tcp/9107" | .ipfs_connector.ipfshttp.node_multiaddress = "/ip4/127.0.0.1/tcp/5002"' $$HOME/.debros/node2/ipfs-cluster/service.json > $$HOME/.debros/node2/ipfs-cluster/service.json.tmp && mv $$HOME/.debros/node2/ipfs-cluster/service.json.tmp $$HOME/.debros/node2/ipfs-cluster/service.json; \
echo " Setting up node3 (IPFS: 5003, Cluster: 9114)..."; \
if [ ! -d $$HOME/.debros/node3/ipfs/repo ]; then \
echo " Initializing IPFS..."; \
mkdir -p $$HOME/.debros/node3/ipfs; \
IPFS_PATH=$$HOME/.debros/node3/ipfs/repo ipfs init --profile=server 2>&1 | grep -v "generating" | grep -v "peer identity" || true; \
IPFS_PATH=$$HOME/.debros/node3/ipfs/repo ipfs config --json Addresses.API '["/ip4/127.0.0.1/tcp/5003"]' 2>&1 | grep -v "generating" || true; \
IPFS_PATH=$$HOME/.debros/node3/ipfs/repo ipfs config --json Addresses.Gateway '["/ip4/127.0.0.1/tcp/8082"]' 2>&1 | grep -v "generating" || true; \
IPFS_PATH=$$HOME/.debros/node3/ipfs/repo ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4003","/ip6/::/tcp/4003"]' 2>&1 | grep -v "generating" || true; \
fi; \
echo " Initializing IPFS Cluster..."; \
mkdir -p $$HOME/.debros/node3/ipfs-cluster; \
env IPFS_CLUSTER_PATH=$$HOME/.debros/node3/ipfs-cluster ipfs-cluster-service init --force >/dev/null 2>&1 || true; \
jq '.cluster.peername = "node3" | .cluster.secret = "'$$SECRET'" | .cluster.listen_multiaddress = ["/ip4/0.0.0.0/tcp/9116"] | .consensus.crdt.cluster_name = "debros-cluster" | .consensus.crdt.trusted_peers = ["*"] | .api.restapi.http_listen_multiaddress = "/ip4/0.0.0.0/tcp/9114" | .api.ipfsproxy.listen_multiaddress = "/ip4/127.0.0.1/tcp/9115" | .api.pinsvcapi.http_listen_multiaddress = "/ip4/127.0.0.1/tcp/9117" | .ipfs_connector.ipfshttp.node_multiaddress = "/ip4/127.0.0.1/tcp/5003"' $$HOME/.debros/node3/ipfs-cluster/service.json > $$HOME/.debros/node3/ipfs-cluster/service.json.tmp && mv $$HOME/.debros/node3/ipfs-cluster/service.json.tmp $$HOME/.debros/node3/ipfs-cluster/service.json; \
echo "Starting IPFS daemons..."; \
if [ ! -f .dev/pids/ipfs-bootstrap.pid ] || ! kill -0 $$(cat .dev/pids/ipfs-bootstrap.pid) 2>/dev/null; then \
IPFS_PATH=$$HOME/.debros/bootstrap/ipfs/repo nohup ipfs daemon --enable-pubsub-experiment > $$HOME/.debros/logs/ipfs-bootstrap.log 2>&1 & echo $$! > .dev/pids/ipfs-bootstrap.pid; \
echo " Bootstrap IPFS started (PID: $$(cat .dev/pids/ipfs-bootstrap.pid), API: 5001)"; \
sleep 3; \
else \
echo " ✓ Bootstrap IPFS already running"; \
fi; \
if [ ! -f .dev/pids/ipfs-node2.pid ] || ! kill -0 $$(cat .dev/pids/ipfs-node2.pid) 2>/dev/null; then \
IPFS_PATH=$$HOME/.debros/node2/ipfs/repo nohup ipfs daemon --enable-pubsub-experiment > $$HOME/.debros/logs/ipfs-node2.log 2>&1 & echo $$! > .dev/pids/ipfs-node2.pid; \
echo " Node2 IPFS started (PID: $$(cat .dev/pids/ipfs-node2.pid), API: 5002)"; \
sleep 3; \
else \
echo " ✓ Node2 IPFS already running"; \
fi; \
if [ ! -f .dev/pids/ipfs-node3.pid ] || ! kill -0 $$(cat .dev/pids/ipfs-node3.pid) 2>/dev/null; then \
IPFS_PATH=$$HOME/.debros/node3/ipfs/repo nohup ipfs daemon --enable-pubsub-experiment > $$HOME/.debros/logs/ipfs-node3.log 2>&1 & echo $$! > .dev/pids/ipfs-node3.pid; \
echo " Node3 IPFS started (PID: $$(cat .dev/pids/ipfs-node3.pid), API: 5003)"; \
sleep 3; \
else \
echo " ✓ Node3 IPFS already running"; \
fi; \
\
echo "Starting IPFS Cluster peers..."; \
if [ ! -f .dev/pids/ipfs-cluster-bootstrap.pid ] || ! kill -0 $$(cat .dev/pids/ipfs-cluster-bootstrap.pid) 2>/dev/null; then \
env IPFS_CLUSTER_PATH=$$HOME/.debros/bootstrap/ipfs-cluster nohup ipfs-cluster-service daemon > $$HOME/.debros/logs/ipfs-cluster-bootstrap.log 2>&1 & echo $$! > .dev/pids/ipfs-cluster-bootstrap.pid; \
echo " Bootstrap Cluster started (PID: $$(cat .dev/pids/ipfs-cluster-bootstrap.pid), API: 9094)"; \
sleep 3; \
else \
echo " ✓ Bootstrap Cluster already running"; \
fi; \
if [ ! -f .dev/pids/ipfs-cluster-node2.pid ] || ! kill -0 $$(cat .dev/pids/ipfs-cluster-node2.pid) 2>/dev/null; then \
env IPFS_CLUSTER_PATH=$$HOME/.debros/node2/ipfs-cluster nohup ipfs-cluster-service daemon > $$HOME/.debros/logs/ipfs-cluster-node2.log 2>&1 & echo $$! > .dev/pids/ipfs-cluster-node2.pid; \
echo " Node2 Cluster started (PID: $$(cat .dev/pids/ipfs-cluster-node2.pid), API: 9104)"; \
sleep 3; \
else \
echo " ✓ Node2 Cluster already running"; \
fi; \
if [ ! -f .dev/pids/ipfs-cluster-node3.pid ] || ! kill -0 $$(cat .dev/pids/ipfs-cluster-node3.pid) 2>/dev/null; then \
env IPFS_CLUSTER_PATH=$$HOME/.debros/node3/ipfs-cluster nohup ipfs-cluster-service daemon > $$HOME/.debros/logs/ipfs-cluster-node3.log 2>&1 & echo $$! > .dev/pids/ipfs-cluster-node3.pid; \
echo " Node3 Cluster started (PID: $$(cat .dev/pids/ipfs-cluster-node3.pid), API: 9114)"; \
sleep 3; \
else \
echo " ✓ Node3 Cluster already running"; \
fi; \
else \
echo " ⚠️ ipfs or ipfs-cluster-service not found - skipping IPFS setup"; \
echo " Install with: https://docs.ipfs.tech/install/ and https://ipfscluster.io/documentation/guides/install/"; \
fi
@sleep 2
@echo "Starting bootstrap node..."
@nohup ./bin/node --config bootstrap.yaml > $$HOME/.debros/logs/bootstrap.log 2>&1 & echo $$! > .dev/pids/bootstrap.pid
@ -119,40 +215,6 @@ dev: build
@echo "Starting node3..."
@nohup ./bin/node --config node3.yaml > $$HOME/.debros/logs/node3.log 2>&1 & echo $$! > .dev/pids/node3.pid
@sleep 1
@echo "Starting IPFS daemon..."
@if command -v ipfs >/dev/null 2>&1; then \
if [ ! -d $$HOME/.debros/ipfs ]; then \
echo " Initializing IPFS repository..."; \
IPFS_PATH=$$HOME/.debros/ipfs ipfs init 2>&1 | grep -v "generating" | grep -v "peer identity" || true; \
fi; \
if ! pgrep -f "ipfs daemon" >/dev/null 2>&1; then \
IPFS_PATH=$$HOME/.debros/ipfs nohup ipfs daemon > $$HOME/.debros/logs/ipfs.log 2>&1 & echo $$! > .dev/pids/ipfs.pid; \
echo " IPFS daemon started (PID: $$(cat .dev/pids/ipfs.pid))"; \
sleep 5; \
else \
echo " ✓ IPFS daemon already running"; \
fi; \
else \
echo " ⚠️ ipfs command not found - skipping IPFS (storage endpoints will be disabled)"; \
echo " Install with: https://docs.ipfs.tech/install/"; \
fi
@echo "Starting IPFS Cluster daemon..."
@if command -v ipfs-cluster-service >/dev/null 2>&1; then \
if [ ! -d $$HOME/.debros/ipfs-cluster ]; then \
echo " Initializing IPFS Cluster..."; \
CLUSTER_PATH=$$HOME/.debros/ipfs-cluster ipfs-cluster-service init --force 2>&1 | grep -v "peer identity" || true; \
fi; \
if ! pgrep -f "ipfs-cluster-service" >/dev/null 2>&1; then \
CLUSTER_PATH=$$HOME/.debros/ipfs-cluster nohup ipfs-cluster-service daemon > $$HOME/.debros/logs/ipfs-cluster.log 2>&1 & echo $$! > .dev/pids/ipfs-cluster.pid; \
echo " IPFS Cluster daemon started (PID: $$(cat .dev/pids/ipfs-cluster.pid))"; \
sleep 5; \
else \
echo " ✓ IPFS Cluster daemon already running"; \
fi; \
else \
echo " ⚠️ ipfs-cluster-service command not found - skipping IPFS Cluster (storage endpoints will be disabled)"; \
echo " Install with: https://ipfscluster.io/documentation/guides/install/"; \
fi
@echo "Starting Olric cache server..."
@if command -v olric-server >/dev/null 2>&1; then \
if [ ! -f $$HOME/.debros/olric-config.yaml ]; then \
@ -182,11 +244,23 @@ dev: build
@if [ -f .dev/pids/anon.pid ]; then \
echo " Anon: PID=$$(cat .dev/pids/anon.pid) (SOCKS: 9050)"; \
fi
@if [ -f .dev/pids/ipfs-bootstrap.pid ]; then \
echo " Bootstrap IPFS: PID=$$(cat .dev/pids/ipfs-bootstrap.pid) (API: 5001)"; \
fi
@if [ -f .dev/pids/ipfs-node2.pid ]; then \
echo " Node2 IPFS: PID=$$(cat .dev/pids/ipfs-node2.pid) (API: 5002)"; \
fi
@if [ -f .dev/pids/ipfs-node3.pid ]; then \
echo " Node3 IPFS: PID=$$(cat .dev/pids/ipfs-node3.pid) (API: 5003)"; \
fi
@if [ -f .dev/pids/ipfs-cluster-bootstrap.pid ]; then \
echo " Bootstrap Cluster: PID=$$(cat .dev/pids/ipfs-cluster-bootstrap.pid) (API: 9094)"; \
fi
@if [ -f .dev/pids/ipfs-cluster-node2.pid ]; then \
echo " Node2 Cluster: PID=$$(cat .dev/pids/ipfs-cluster-node2.pid) (API: 9104)"; \
fi
@if [ -f .dev/pids/ipfs-cluster-node3.pid ]; then \
echo " Node3 Cluster: PID=$$(cat .dev/pids/ipfs-cluster-node3.pid) (API: 9114)"; \
fi
@if [ -f .dev/pids/olric.pid ]; then \
echo " Olric: PID=$$(cat .dev/pids/olric.pid) (API: 3320)"; \
@ -198,9 +272,13 @@ dev: build
@echo ""
@echo "Ports:"
@echo " Anon SOCKS: 9050 (proxy endpoint: POST /v1/proxy/anon)"
@if [ -f .dev/pids/ipfs-bootstrap.pid ]; then \
echo " Bootstrap IPFS API: 5001"; \
echo " Node2 IPFS API: 5002"; \
echo " Node3 IPFS API: 5003"; \
echo " Bootstrap Cluster: 9094 (pin management)"; \
echo " Node2 Cluster: 9104 (pin management)"; \
echo " Node3 Cluster: 9114 (pin management)"; \
fi
@if [ -f .dev/pids/olric.pid ]; then \
echo " Olric: 3320 (cache API)"; \
@ -217,15 +295,85 @@ dev: build
if [ -f .dev/pids/anon.pid ]; then \
LOGS="$$LOGS $$HOME/.debros/logs/anon.log"; \
fi; \
if [ -f .dev/pids/ipfs-bootstrap.pid ]; then \
LOGS="$$LOGS $$HOME/.debros/logs/ipfs-bootstrap.log $$HOME/.debros/logs/ipfs-node2.log $$HOME/.debros/logs/ipfs-node3.log"; \
fi; \
if [ -f .dev/pids/ipfs-cluster-bootstrap.pid ]; then \
LOGS="$$LOGS $$HOME/.debros/logs/ipfs-cluster-bootstrap.log $$HOME/.debros/logs/ipfs-cluster-node2.log $$HOME/.debros/logs/ipfs-cluster-node3.log"; \
fi; \
if [ -f .dev/pids/olric.pid ]; then \
LOGS="$$LOGS $$HOME/.debros/logs/olric.log"; \
fi; \
trap 'echo "Stopping all processes..."; kill $$(cat .dev/pids/*.pid) 2>/dev/null; rm -f .dev/pids/*.pid; exit 0' INT; \
tail -f $$LOGS
# Kill all processes
kill:
@echo "🛑 Stopping all DeBros network services..."
@echo ""
@echo "Stopping DeBros nodes and gateway..."
@if [ -f .dev/pids/gateway.pid ]; then \
kill -TERM $$(cat .dev/pids/gateway.pid) 2>/dev/null && echo " ✓ Gateway stopped" || echo " ✗ Gateway not running"; \
rm -f .dev/pids/gateway.pid; \
fi
@if [ -f .dev/pids/bootstrap.pid ]; then \
kill -TERM $$(cat .dev/pids/bootstrap.pid) 2>/dev/null && echo " ✓ Bootstrap node stopped" || echo " ✗ Bootstrap not running"; \
rm -f .dev/pids/bootstrap.pid; \
fi
@if [ -f .dev/pids/node2.pid ]; then \
kill -TERM $$(cat .dev/pids/node2.pid) 2>/dev/null && echo " ✓ Node2 stopped" || echo " ✗ Node2 not running"; \
rm -f .dev/pids/node2.pid; \
fi
@if [ -f .dev/pids/node3.pid ]; then \
kill -TERM $$(cat .dev/pids/node3.pid) 2>/dev/null && echo " ✓ Node3 stopped" || echo " ✗ Node3 not running"; \
rm -f .dev/pids/node3.pid; \
fi
@echo ""
@echo "Stopping IPFS Cluster peers..."
@if [ -f .dev/pids/ipfs-cluster-bootstrap.pid ]; then \
kill -TERM $$(cat .dev/pids/ipfs-cluster-bootstrap.pid) 2>/dev/null && echo " ✓ Bootstrap Cluster stopped" || echo " ✗ Bootstrap Cluster not running"; \
rm -f .dev/pids/ipfs-cluster-bootstrap.pid; \
fi
@if [ -f .dev/pids/ipfs-cluster-node2.pid ]; then \
kill -TERM $$(cat .dev/pids/ipfs-cluster-node2.pid) 2>/dev/null && echo " ✓ Node2 Cluster stopped" || echo " ✗ Node2 Cluster not running"; \
rm -f .dev/pids/ipfs-cluster-node2.pid; \
fi
@if [ -f .dev/pids/ipfs-cluster-node3.pid ]; then \
kill -TERM $$(cat .dev/pids/ipfs-cluster-node3.pid) 2>/dev/null && echo " ✓ Node3 Cluster stopped" || echo " ✗ Node3 Cluster not running"; \
rm -f .dev/pids/ipfs-cluster-node3.pid; \
fi
@echo ""
@echo "Stopping IPFS daemons..."
@if [ -f .dev/pids/ipfs-bootstrap.pid ]; then \
kill -TERM $$(cat .dev/pids/ipfs-bootstrap.pid) 2>/dev/null && echo " ✓ Bootstrap IPFS stopped" || echo " ✗ Bootstrap IPFS not running"; \
rm -f .dev/pids/ipfs-bootstrap.pid; \
fi
@if [ -f .dev/pids/ipfs-node2.pid ]; then \
kill -TERM $$(cat .dev/pids/ipfs-node2.pid) 2>/dev/null && echo " ✓ Node2 IPFS stopped" || echo " ✗ Node2 IPFS not running"; \
rm -f .dev/pids/ipfs-node2.pid; \
fi
@if [ -f .dev/pids/ipfs-node3.pid ]; then \
kill -TERM $$(cat .dev/pids/ipfs-node3.pid) 2>/dev/null && echo " ✓ Node3 IPFS stopped" || echo " ✗ Node3 IPFS not running"; \
rm -f .dev/pids/ipfs-node3.pid; \
fi
@echo ""
@echo "Stopping Olric cache..."
@if [ -f .dev/pids/olric.pid ]; then \
kill -TERM $$(cat .dev/pids/olric.pid) 2>/dev/null && echo " ✓ Olric stopped" || echo " ✗ Olric not running"; \
rm -f .dev/pids/olric.pid; \
fi
@echo ""
@echo "Stopping Anon proxy..."
@if [ -f .dev/pids/anon.pid ]; then \
kill -TERM $$(cat .dev/pids/anon.pid) 2>/dev/null && echo " ✓ Anon proxy stopped" || echo " ✗ Anon proxy not running"; \
rm -f .dev/pids/anon.pid; \
fi
@echo ""
@echo "Cleaning up any remaining processes on ports..."
@lsof -ti:7001,7002,7003,5001,5002,5003,6001,4001,4002,4003,9050,3320,3322,9094,9095,9096,9097,9104,9105,9106,9107,9114,9115,9116,9117,8080,8081,8082 2>/dev/null | xargs kill -9 2>/dev/null && echo " ✓ Cleaned up remaining port bindings" || echo " ✓ No lingering processes found"
@echo ""
@echo "✅ All services stopped!"
# Help
help:
@echo "Available targets:"
@ -277,6 +425,7 @@ help:
@echo " vet - Vet code"
@echo " lint - Lint code (fmt + vet)"
@echo " clear-ports - Clear common dev ports"
@echo " kill - Stop all running services (nodes, IPFS, cluster, gateway, olric)"
@echo " dev-setup - Setup development environment"
@echo " dev-cluster - Show cluster startup commands"
@echo " dev - Full development workflow"

docs/ipfs-cluster-setup.md (new file)

@ -0,0 +1,171 @@
# IPFS Cluster Setup Guide
This guide explains how IPFS Cluster is configured to run on every DeBros Network node.
## Overview
Each DeBros Network node runs its own IPFS Cluster peer, enabling distributed pinning and replication across the network. The cluster uses CRDT consensus for automatic peer discovery.
## Architecture
- **IPFS (Kubo)**: Runs on each node, handles content storage and retrieval
- **IPFS Cluster**: Runs on each node, manages pinning and replication
- **Cluster Consensus**: Uses CRDT (instead of Raft) for simpler multi-node setup
## Automatic Setup
When you run `network-cli setup`, the following happens automatically:
1. IPFS (Kubo) and IPFS Cluster are installed
2. IPFS repository is initialized for each node
3. IPFS Cluster service.json config is generated
4. Systemd services are created and started:
- `debros-ipfs` - IPFS daemon
- `debros-ipfs-cluster` - IPFS Cluster service
- `debros-node` - DeBros Network node (depends on cluster)
- `debros-gateway` - HTTP Gateway (depends on node)
## Configuration
### Node Configs
Each node config (`~/.debros/bootstrap.yaml`, `~/.debros/node.yaml`, etc.) includes:
```yaml
database:
ipfs:
cluster_api_url: "http://localhost:9094" # Local cluster API
api_url: "http://localhost:5001" # Local IPFS API
replication_factor: 3 # Desired replication
```
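A quick way to confirm those URLs answer locally before starting the node (a sketch; ports assume the defaults above, and recent Kubo versions only accept POST on `/api/v0` RPC endpoints):
```bash
# Kubo RPC API (POST required on /api/v0 paths)
curl -s -X POST http://localhost:5001/api/v0/version

# IPFS Cluster REST API identity
curl -s http://localhost:9094/id
```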
### Cluster Service Config
Cluster service configs are stored at:
- Bootstrap: `~/.debros/bootstrap/ipfs-cluster/service.json`
- Nodes: `~/.debros/node/ipfs-cluster/service.json`
Key settings:
- **Consensus**: CRDT (automatic peer discovery)
- **API Listen**: `0.0.0.0:9094` (REST API)
- **Cluster Listen**: `0.0.0.0:9096` (peer-to-peer)
- **Secret**: Shared cluster secret stored at `~/.debros/cluster-secret`; a sketch for generating it by hand follows below
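Should you ever need to recreate the secret by hand, a minimal sketch (the same commands the automated setup uses; every node must end up with an identical file):
```bash
# Generate a 32-byte hex secret once, then copy the same file to every node
openssl rand -hex 32 > /home/debros/.debros/cluster-secret
chmod 600 /home/debros/.debros/cluster-secret
chown debros:debros /home/debros/.debros/cluster-secret
```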
## Verification
### Check Cluster Peers
From any node, verify all cluster peers are connected:
```bash
sudo -u debros ipfs-cluster-ctl --host http://localhost:9094 peers ls
```
You should see all cluster peers listed (bootstrap, node1, node2, etc.).
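If `ipfs-cluster-ctl` is not available, the same information can be read from the cluster REST API directly (assuming the default port 9094):
```bash
# Identity of the local cluster peer
curl -s http://localhost:9094/id

# All known cluster peers (one JSON object per line)
curl -s http://localhost:9094/peers
```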
### Check IPFS Daemon
Verify the IPFS daemon is responding (Kubo reads its repo location from the `IPFS_PATH` environment variable):
```bash
sudo -u debros env IPFS_PATH=/home/debros/.debros/bootstrap/ipfs/repo ipfs swarm peers
# Or for regular nodes:
sudo -u debros env IPFS_PATH=/home/debros/.debros/node/ipfs/repo ipfs swarm peers
```
If the daemon is not running, the command fails with an "online mode" error.
### Check Service Status
```bash
network-cli service status all
```
Should show:
- `debros-ipfs` - running
- `debros-ipfs-cluster` - running
- `debros-node` - running
- `debros-gateway` - running
## Troubleshooting
### Cluster Peers Not Connecting
If peers aren't discovering each other:
1. **Check firewall**: Ensure ports 9096 (cluster swarm) and 9094 (cluster API) are open
2. **Verify secret**: All nodes must use the same cluster secret from `~/.debros/cluster-secret`
3. **Check logs**: `journalctl -u debros-ipfs-cluster -f`
### Not Enough Peers Error
If you see "not enough peers to allocate CID" errors:
- The cluster needs at least `replication_factor` peers running
- Check that all nodes have `debros-ipfs-cluster` service running
- Verify with `ipfs-cluster-ctl peers ls`, or count peers via the REST API as sketched below
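A quick peer count via the REST API (a sketch assuming `curl` and `jq` are installed and the default API port):
```bash
# /peers emits one JSON object per peer; slurp the stream and count
curl -s http://localhost:9094/peers | jq -s 'length'
```
The count should be at least the configured `replication_factor`.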
### IPFS Not Starting
If IPFS daemon fails to start:
1. Check IPFS repo exists: `ls -la ~/.debros/bootstrap/ipfs/repo/`
2. Check permissions: `chown -R debros:debros ~/.debros/bootstrap/ipfs/`
3. Check logs: `journalctl -u debros-ipfs -f`
## Manual Setup (If Needed)
If automatic setup didn't work, you can manually initialize:
### 1. Initialize IPFS
```bash
sudo -u debros env IPFS_PATH=/home/debros/.debros/bootstrap/ipfs/repo ipfs init --profile=server
sudo -u debros env IPFS_PATH=/home/debros/.debros/bootstrap/ipfs/repo ipfs config --json Addresses.API '["/ip4/127.0.0.1/tcp/5001"]'
```
### 2. Initialize Cluster
```bash
# Reuse the shared cluster secret (ipfs-cluster-service init honors the CLUSTER_SECRET environment variable)
CLUSTER_SECRET=$(cat /home/debros/.debros/cluster-secret)
# Initialize cluster (will create service.json)
sudo -u debros env CLUSTER_SECRET=$CLUSTER_SECRET IPFS_CLUSTER_PATH=/home/debros/.debros/bootstrap/ipfs-cluster ipfs-cluster-service init --consensus crdt
```
### 3. Start Services
```bash
systemctl start debros-ipfs
systemctl start debros-ipfs-cluster
systemctl start debros-node
systemctl start debros-gateway
```
## Ports
- **4001**: IPFS swarm (LibP2P)
- **5001**: IPFS HTTP API
- **8080**: IPFS Gateway (optional)
- **9094**: IPFS Cluster REST API
- **9096**: IPFS Cluster swarm (LibP2P); example firewall rules for the cross-node ports follow below
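On a UFW-based host, the cross-node ports can be opened with the same rules the setup command installs (4001 and 9096 must be reachable from other nodes; 5001 and 9094 are best kept local):
```bash
sudo ufw allow 4001/tcp comment "IPFS Swarm"
sudo ufw allow 9096/tcp comment "IPFS Cluster Swarm"
```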
## Replication Factor
The default replication factor is 3, meaning content is pinned to 3 cluster peers. This requires at least 3 nodes running cluster peers.
To change replication factor, edit node configs:
```yaml
database:
ipfs:
replication_factor: 1 # For single-node development
```
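Replication can also be overridden per pin from the command line (a sketch; `<cid>` is a placeholder for a real content ID):
```bash
# Pin a CID to exactly 2 peers regardless of the config default
sudo -u debros ipfs-cluster-ctl --host http://localhost:9094 pin add --replication 2 <cid>
```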
## Security Notes
- Cluster secret is stored at `~/.debros/cluster-secret` (mode 0600)
- Cluster API (port 9094) should be firewalled in production
- IPFS API (port 5001) should only be accessible locally


@ -2,6 +2,9 @@ package cli
import (
"bufio"
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt" "fmt"
"net" "net"
"os" "os"
@ -63,11 +66,12 @@ func HandleSetupCommand(args []string) {
fmt.Printf(" 4. Install RQLite database\n")
fmt.Printf(" 5. Install Anyone Relay (Anon) for anonymous networking\n")
fmt.Printf(" 6. Install Olric cache server\n")
fmt.Printf(" 7. Install IPFS (Kubo) and IPFS Cluster\n")
fmt.Printf(" 8. Create directories (/home/debros/bin, /home/debros/src)\n")
fmt.Printf(" 9. Clone and build DeBros Network\n")
fmt.Printf(" 10. Generate configuration files\n")
fmt.Printf(" 11. Create systemd services (debros-ipfs, debros-ipfs-cluster, debros-node, debros-gateway, debros-olric)\n")
fmt.Printf(" 12. Start and enable services\n")
fmt.Printf(strings.Repeat("=", 70) + "\n\n")
fmt.Printf("Ready to begin setup? (yes/no): ")
@ -96,6 +100,9 @@ func HandleSetupCommand(args []string) {
// Step 4.6: Install Olric cache server
installOlric()
// Step 4.7: Install IPFS and IPFS Cluster
installIPFS()
// Step 5: Setup directories
setupDirectories()
@ -123,6 +130,14 @@ func HandleSetupCommand(args []string) {
fmt.Printf("🆔 Node Peer ID: %s\n\n", peerID)
}
// Display IPFS Cluster information
fmt.Printf("IPFS Cluster Setup:\n")
fmt.Printf(" Each node runs its own IPFS Cluster peer\n")
fmt.Printf(" Cluster peers use CRDT consensus for automatic discovery\n")
fmt.Printf(" To verify cluster is working:\n")
fmt.Printf(" sudo -u debros ipfs-cluster-ctl --host http://localhost:9094 peers ls\n")
fmt.Printf(" You should see all cluster peers listed\n\n")
fmt.Printf("Service Management:\n")
fmt.Printf(" network-cli service status all\n")
fmt.Printf(" network-cli service logs node --follow\n")
@ -1156,6 +1171,92 @@ func configureFirewallForOlric() {
fmt.Printf(" No active firewall detected for Olric\n")
}
func installIPFS() {
fmt.Printf("🌐 Installing IPFS (Kubo) and IPFS Cluster...\n")
// Check if IPFS is already installed
if _, err := exec.LookPath("ipfs"); err == nil {
fmt.Printf(" ✓ IPFS (Kubo) already installed\n")
} else {
fmt.Printf(" Installing IPFS (Kubo)...\n")
// Install IPFS via official installation script
cmd := exec.Command("bash", "-c", "curl -fsSL https://dist.ipfs.tech/kubo/v0.27.0/install.sh | bash")
if err := cmd.Run(); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Failed to install IPFS: %v\n", err)
fmt.Fprintf(os.Stderr, " You may need to install IPFS manually: https://docs.ipfs.tech/install/command-line/\n")
return
}
// Make sure ipfs is in PATH
exec.Command("ln", "-sf", "/usr/local/bin/ipfs", "/usr/bin/ipfs").Run()
fmt.Printf(" ✓ IPFS (Kubo) installed\n")
}
// Check if IPFS Cluster is already installed
if _, err := exec.LookPath("ipfs-cluster-service"); err == nil {
fmt.Printf(" ✓ IPFS Cluster already installed\n")
} else {
fmt.Printf(" Installing IPFS Cluster...\n")
// Install IPFS Cluster via go install
if _, err := exec.LookPath("go"); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Go not found - cannot install IPFS Cluster. Please install Go first.\n")
return
}
cmd := exec.Command("go", "install", "github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-service@latest")
cmd.Env = append(os.Environ(), "GOBIN=/usr/local/bin")
if output, err := cmd.CombinedOutput(); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Failed to install IPFS Cluster: %v\n", err)
if len(output) > 0 {
fmt.Fprintf(os.Stderr, " Output: %s\n", string(output))
}
fmt.Fprintf(os.Stderr, " You can manually install with: go install github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-service@latest\n")
return
}
// Also install ipfs-cluster-ctl for management
ctlCmd := exec.Command("go", "install", "github.com/ipfs-cluster/ipfs-cluster/cmd/ipfs-cluster-ctl@latest")
ctlCmd.Env = append(os.Environ(), "GOBIN=/usr/local/bin")
ctlCmd.Run()
fmt.Printf(" ✓ IPFS Cluster installed\n")
}
// Configure firewall for IPFS and Cluster
configureFirewallForIPFS()
fmt.Printf(" ✓ IPFS and IPFS Cluster setup complete\n")
}
func configureFirewallForIPFS() {
fmt.Printf(" Checking firewall configuration for IPFS...\n")
// Check for UFW
if _, err := exec.LookPath("ufw"); err == nil {
output, _ := exec.Command("ufw", "status").CombinedOutput()
if strings.Contains(string(output), "Status: active") {
fmt.Printf(" Adding UFW rules for IPFS and Cluster...\n")
exec.Command("ufw", "allow", "4001/tcp", "comment", "IPFS Swarm").Run()
exec.Command("ufw", "allow", "5001/tcp", "comment", "IPFS API").Run()
exec.Command("ufw", "allow", "9094/tcp", "comment", "IPFS Cluster API").Run()
exec.Command("ufw", "allow", "9096/tcp", "comment", "IPFS Cluster Swarm").Run()
fmt.Printf(" ✓ UFW rules added for IPFS\n")
return
}
}
// Check for firewalld
if _, err := exec.LookPath("firewall-cmd"); err == nil {
output, _ := exec.Command("firewall-cmd", "--state").CombinedOutput()
if strings.Contains(string(output), "running") {
fmt.Printf(" Adding firewalld rules for IPFS...\n")
exec.Command("firewall-cmd", "--permanent", "--add-port=4001/tcp").Run()
exec.Command("firewall-cmd", "--permanent", "--add-port=5001/tcp").Run()
exec.Command("firewall-cmd", "--permanent", "--add-port=9094/tcp").Run()
exec.Command("firewall-cmd", "--permanent", "--add-port=9096/tcp").Run()
exec.Command("firewall-cmd", "--reload").Run()
fmt.Printf(" ✓ firewalld rules added for IPFS\n")
return
}
}
fmt.Printf(" No active firewall detected for IPFS\n")
}
func setupDirectories() {
fmt.Printf("📁 Creating directories...\n")
@ -1405,6 +1506,18 @@ func generateConfigsInteractive(force bool) {
exec.Command("chown", "debros:debros", nodeConfigPath).Run()
fmt.Printf(" ✓ Node config created: %s\n", nodeConfigPath)
// Initialize IPFS and Cluster for this node
var nodeID string
if isBootstrap {
nodeID = "bootstrap"
} else {
nodeID = "node"
}
if err := initializeIPFSForNode(nodeID, vpsIP, isBootstrap); err != nil {
fmt.Fprintf(os.Stderr, "⚠️ Failed to initialize IPFS/Cluster: %v\n", err)
fmt.Fprintf(os.Stderr, " You may need to initialize IPFS and Cluster manually\n")
}
// Generate Olric config file for this node (uses multicast discovery)
var olricConfigPath string
if isBootstrap {
@ -1730,14 +1843,309 @@ func generateOlricConfig(configPath, bindIP string, httpPort, memberlistPort int
return nil
}
// getOrGenerateClusterSecret gets or generates a shared cluster secret
func getOrGenerateClusterSecret() (string, error) {
secretPath := "/home/debros/.debros/cluster-secret"
// Try to read existing secret
if data, err := os.ReadFile(secretPath); err == nil {
secret := strings.TrimSpace(string(data))
if len(secret) == 64 {
return secret, nil
}
}
// Generate new secret (64 hex characters = 32 bytes)
bytes := make([]byte, 32)
if _, err := rand.Read(bytes); err != nil {
return "", fmt.Errorf("failed to generate cluster secret: %w", err)
}
secret := hex.EncodeToString(bytes)
// Save secret
if err := os.WriteFile(secretPath, []byte(secret), 0600); err != nil {
return "", fmt.Errorf("failed to save cluster secret: %w", err)
}
exec.Command("chown", "debros:debros", secretPath).Run()
return secret, nil
}
// initializeIPFSForNode initializes IPFS and IPFS Cluster for a node
func initializeIPFSForNode(nodeID, vpsIP string, isBootstrap bool) error {
fmt.Printf(" Initializing IPFS and Cluster for node %s...\n", nodeID)
// Get or generate cluster secret
secret, err := getOrGenerateClusterSecret()
if err != nil {
return fmt.Errorf("failed to get cluster secret: %w", err)
}
// Determine data directories
var ipfsDataDir, clusterDataDir string
if nodeID == "bootstrap" {
ipfsDataDir = "/home/debros/.debros/bootstrap/ipfs"
clusterDataDir = "/home/debros/.debros/bootstrap/ipfs-cluster"
} else {
ipfsDataDir = "/home/debros/.debros/node/ipfs"
clusterDataDir = "/home/debros/.debros/node/ipfs-cluster"
}
// Create directories
os.MkdirAll(ipfsDataDir, 0755)
os.MkdirAll(clusterDataDir, 0755)
exec.Command("chown", "-R", "debros:debros", ipfsDataDir).Run()
exec.Command("chown", "-R", "debros:debros", clusterDataDir).Run()
// Initialize IPFS if not already initialized
ipfsRepoPath := filepath.Join(ipfsDataDir, "repo")
if _, err := os.Stat(filepath.Join(ipfsRepoPath, "config")); os.IsNotExist(err) {
fmt.Printf(" Initializing IPFS repository...\n")
cmd := exec.Command("sudo", "-u", "debros", "ipfs", "init", "--profile=server", "--repo-dir="+ipfsRepoPath)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to initialize IPFS: %v\n%s", err, string(output))
}
// Configure IPFS API and Gateway addresses
exec.Command("sudo", "-u", "debros", "ipfs", "config", "--json", "Addresses.API", `["/ip4/127.0.0.1/tcp/5001"]`, "--repo-dir="+ipfsRepoPath).Run()
exec.Command("sudo", "-u", "debros", "ipfs", "config", "--json", "Addresses.Gateway", `["/ip4/127.0.0.1/tcp/8080"]`, "--repo-dir="+ipfsRepoPath).Run()
exec.Command("sudo", "-u", "debros", "ipfs", "config", "--json", "Addresses.Swarm", `["/ip4/0.0.0.0/tcp/4001","/ip6/::/tcp/4001"]`, "--repo-dir="+ipfsRepoPath).Run()
fmt.Printf(" ✓ IPFS initialized\n")
}
// Initialize IPFS Cluster if not already initialized
clusterConfigPath := filepath.Join(clusterDataDir, "service.json")
if _, err := os.Stat(clusterConfigPath); os.IsNotExist(err) {
fmt.Printf(" Initializing IPFS Cluster...\n")
// Generate cluster config
clusterConfig := generateClusterServiceConfig(nodeID, vpsIP, secret, isBootstrap)
// Write config
configJSON, err := json.MarshalIndent(clusterConfig, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal cluster config: %w", err)
}
if err := os.WriteFile(clusterConfigPath, configJSON, 0644); err != nil {
return fmt.Errorf("failed to write cluster config: %w", err)
}
exec.Command("chown", "debros:debros", clusterConfigPath).Run()
fmt.Printf(" ✓ IPFS Cluster initialized\n")
}
return nil
}
// getClusterPeerID gets the cluster peer ID from a running cluster service
func getClusterPeerID(clusterAPIURL string) (string, error) {
cmd := exec.Command("ipfs-cluster-ctl", "--host", clusterAPIURL, "id")
output, err := cmd.CombinedOutput()
if err != nil {
return "", fmt.Errorf("failed to get cluster peer ID: %v\n%s", err, string(output))
}
// Parse output to extract peer ID
// Output format: "12D3KooW..."
lines := strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "12D3Koo") {
return line, nil
}
}
return "", fmt.Errorf("could not parse cluster peer ID from output: %s", string(output))
}
// getClusterPeerMultiaddr constructs the cluster peer multiaddr
func getClusterPeerMultiaddr(vpsIP, peerID string) string {
return fmt.Sprintf("/ip4/%s/tcp/9096/p2p/%s", vpsIP, peerID)
}
// clusterServiceConfig represents IPFS Cluster service.json structure
type clusterServiceConfig struct {
Cluster clusterConfig `json:"cluster"`
Consensus consensusConfig `json:"consensus"`
API apiConfig `json:"api"`
IPFSConnector ipfsConnectorConfig `json:"ipfs_connector"`
Datastore datastoreConfig `json:"datastore"`
}
type clusterConfig struct {
ID string `json:"id"`
PrivateKey string `json:"private_key"`
Secret string `json:"secret"`
Peername string `json:"peername"`
Bootstrap []string `json:"bootstrap"`
LeaveOnShutdown bool `json:"leave_on_shutdown"`
ListenMultiaddr string `json:"listen_multiaddress"`
ConnectionManager connectionManagerConfig `json:"connection_manager"`
}
type connectionManagerConfig struct {
LowWater int `json:"low_water"`
HighWater int `json:"high_water"`
GracePeriod string `json:"grace_period"`
}
type consensusConfig struct {
CRDT crdtConfig `json:"crdt"`
}
type crdtConfig struct {
ClusterName string `json:"cluster_name"`
TrustedPeers []string `json:"trusted_peers"`
}
type apiConfig struct {
RestAPI restAPIConfig `json:"restapi"`
}
type restAPIConfig struct {
HTTPListenMultiaddress string `json:"http_listen_multiaddress"`
ID string `json:"id"`
BasicAuthCredentials interface{} `json:"basic_auth_credentials"`
}
type ipfsConnectorConfig struct {
IPFSHTTP ipfsHTTPConfig `json:"ipfshttp"`
}
type ipfsHTTPConfig struct {
NodeMultiaddress string `json:"node_multiaddress"`
}
type datastoreConfig struct {
Type string `json:"type"`
Path string `json:"path"`
}
// generateClusterServiceConfig generates IPFS Cluster service.json config
func generateClusterServiceConfig(nodeID, vpsIP, secret string, isBootstrap bool) clusterServiceConfig {
clusterListenAddr := "/ip4/0.0.0.0/tcp/9096"
restAPIListenAddr := "/ip4/0.0.0.0/tcp/9094"
// For bootstrap node, use empty bootstrap list
// For other nodes, bootstrap list will be set when starting the service
bootstrap := []string{}
return clusterServiceConfig{
Cluster: clusterConfig{
Peername: nodeID,
Secret: secret,
Bootstrap: bootstrap,
LeaveOnShutdown: false,
ListenMultiaddr: clusterListenAddr,
ConnectionManager: connectionManagerConfig{
LowWater: 50,
HighWater: 200,
GracePeriod: "20s",
},
},
Consensus: consensusConfig{
CRDT: crdtConfig{
ClusterName: "debros-cluster",
TrustedPeers: []string{"*"}, // Trust all peers
},
},
API: apiConfig{
RestAPI: restAPIConfig{
HTTPListenMultiaddress: restAPIListenAddr,
ID: "",
BasicAuthCredentials: nil,
},
},
IPFSConnector: ipfsConnectorConfig{
IPFSHTTP: ipfsHTTPConfig{
NodeMultiaddress: "/ip4/127.0.0.1/tcp/5001",
},
},
Datastore: datastoreConfig{
Type: "badger",
Path: fmt.Sprintf("/home/debros/.debros/%s/ipfs-cluster/badger", nodeID),
},
}
}
func createSystemdServices() {
fmt.Printf("🔧 Creating systemd services...\n")
// IPFS service (runs on all nodes)
ipfsService := `[Unit]
Description=IPFS Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=debros
Group=debros
Environment=HOME=/home/debros
ExecStartPre=/bin/bash -c 'if [ -f /home/debros/.debros/node.yaml ]; then export IPFS_PATH=/home/debros/.debros/node/ipfs/repo; elif [ -f /home/debros/.debros/bootstrap.yaml ]; then export IPFS_PATH=/home/debros/.debros/bootstrap/ipfs/repo; else export IPFS_PATH=/home/debros/.debros/bootstrap/ipfs/repo; fi'
ExecStart=/usr/bin/ipfs daemon --enable-pubsub-experiment --repo-dir=${IPFS_PATH}
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=ipfs
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths=/home/debros
[Install]
WantedBy=multi-user.target
`
if err := os.WriteFile("/etc/systemd/system/debros-ipfs.service", []byte(ipfsService), 0644); err != nil {
fmt.Fprintf(os.Stderr, "❌ Failed to create IPFS service: %v\n", err)
os.Exit(1)
}
// IPFS Cluster service (runs on all nodes)
clusterService := `[Unit]
Description=IPFS Cluster Service
After=debros-ipfs.service
Wants=debros-ipfs.service
Requires=debros-ipfs.service
[Service]
Type=simple
User=debros
Group=debros
WorkingDirectory=/home/debros
Environment=HOME=/home/debros
ExecStartPre=/bin/bash -c 'if [ -f /home/debros/.debros/node.yaml ]; then export CLUSTER_PATH=/home/debros/.debros/node/ipfs-cluster; elif [ -f /home/debros/.debros/bootstrap.yaml ]; then export CLUSTER_PATH=/home/debros/.debros/bootstrap/ipfs-cluster; else export CLUSTER_PATH=/home/debros/.debros/bootstrap/ipfs-cluster; fi'
ExecStart=/usr/local/bin/ipfs-cluster-service daemon --config ${CLUSTER_PATH}/service.json
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=ipfs-cluster
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths=/home/debros
[Install]
WantedBy=multi-user.target
`
if err := os.WriteFile("/etc/systemd/system/debros-ipfs-cluster.service", []byte(clusterService), 0644); err != nil {
fmt.Fprintf(os.Stderr, "❌ Failed to create IPFS Cluster service: %v\n", err)
os.Exit(1)
}
// Node service
nodeService := `[Unit]
Description=DeBros Network Node
After=network-online.target debros-ipfs-cluster.service
Wants=network-online.target debros-ipfs-cluster.service
Requires=debros-ipfs-cluster.service
[Service]
Type=simple
@ -1807,6 +2215,8 @@ WantedBy=multi-user.target
// Reload systemd
exec.Command("systemctl", "daemon-reload").Run()
exec.Command("systemctl", "enable", "debros-ipfs").Run()
exec.Command("systemctl", "enable", "debros-ipfs-cluster").Run()
exec.Command("systemctl", "enable", "debros-node").Run()
exec.Command("systemctl", "enable", "debros-gateway").Run()
@ -1841,6 +2251,18 @@ func startServices() {
}
}
// Start IPFS first (required by Cluster)
startOrRestartService("debros-ipfs")
// Wait a bit for IPFS to start
time.Sleep(2 * time.Second)
// Start IPFS Cluster (required by Node)
startOrRestartService("debros-ipfs-cluster")
// Wait a bit for Cluster to start
time.Sleep(2 * time.Second)
// Start or restart node service
startOrRestartService("debros-node")


@ -254,6 +254,25 @@ func New(logger *logging.ColoredLogger, cfg *Config) (*Gateway, error) {
logger.ComponentWarn(logging.ComponentGeneral, "failed to initialize IPFS Cluster client; storage endpoints disabled", zap.Error(ipfsErr))
} else {
gw.ipfsClient = ipfsClient
// Check peer count and warn if insufficient (use background context to avoid blocking)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if peerCount, err := ipfsClient.GetPeerCount(ctx); err == nil {
if peerCount < ipfsReplicationFactor {
logger.ComponentWarn(logging.ComponentGeneral, "insufficient cluster peers for replication factor",
zap.Int("peer_count", peerCount),
zap.Int("replication_factor", ipfsReplicationFactor),
zap.String("message", "Some pin operations may fail until more peers join the cluster"))
} else {
logger.ComponentInfo(logging.ComponentGeneral, "IPFS Cluster peer count sufficient",
zap.Int("peer_count", peerCount),
zap.Int("replication_factor", ipfsReplicationFactor))
}
} else {
logger.ComponentWarn(logging.ComponentGeneral, "failed to get cluster peer count", zap.Error(err))
}
logger.ComponentInfo(logging.ComponentGeneral, "IPFS Cluster client ready",
zap.String("cluster_api_url", ipfsCfg.ClusterAPIURL),
zap.String("ipfs_api_url", ipfsAPIURL),


@ -275,7 +275,12 @@ func (g *Gateway) storageGetHandler(w http.ResponseWriter, r *http.Request) {
reader, err := g.ipfsClient.Get(ctx, path, ipfsAPIURL)
if err != nil {
g.logger.ComponentError(logging.ComponentGeneral, "failed to get content from IPFS", zap.Error(err), zap.String("cid", path))
// Check if error indicates content not found (404)
if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "status 404") {
writeError(w, http.StatusNotFound, fmt.Sprintf("content not found: %s", path))
} else {
writeError(w, http.StatusInternalServerError, fmt.Sprintf("failed to get content: %v", err))
}
return
}
defer reader.Close()


@ -8,6 +8,7 @@ import (
"io" "io"
"mime/multipart" "mime/multipart"
"net/http" "net/http"
"net/url"
"time" "time"
"go.uber.org/zap" "go.uber.org/zap"
@ -21,6 +22,7 @@ type IPFSClient interface {
Get(ctx context.Context, cid string, ipfsAPIURL string) (io.ReadCloser, error)
Unpin(ctx context.Context, cid string) error
Health(ctx context.Context) error
GetPeerCount(ctx context.Context) (int, error)
Close(ctx context.Context) error
}
@ -110,6 +112,33 @@ func (c *Client) Health(ctx context.Context) error {
return nil
}
// GetPeerCount returns the number of cluster peers
func (c *Client) GetPeerCount(ctx context.Context) (int, error) {
req, err := http.NewRequestWithContext(ctx, "GET", c.apiURL+"/peers", nil)
if err != nil {
return 0, fmt.Errorf("failed to create peers request: %w", err)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return 0, fmt.Errorf("peers request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return 0, fmt.Errorf("peers request failed with status: %d", resp.StatusCode)
}
var peers []struct {
ID string `json:"id"`
}
if err := json.NewDecoder(resp.Body).Decode(&peers); err != nil {
return 0, fmt.Errorf("failed to decode peers response: %w", err)
}
return len(peers), nil
}
// Add adds content to IPFS and returns the CID
func (c *Client) Add(ctx context.Context, reader io.Reader, name string) (*AddResponse, error) {
// Create multipart form request for IPFS Cluster API
@ -157,28 +186,25 @@ }
}
// Pin pins a CID with specified replication factor
// IPFS Cluster expects pin options (including name) as query parameters, not in JSON body
func (c *Client) Pin(ctx context.Context, cid string, name string, replicationFactor int) (*PinResponse, error) {
// Build URL with query parameters
reqURL := c.apiURL + "/pins/" + cid
values := url.Values{}
values.Set("replication-min", fmt.Sprintf("%d", replicationFactor))
values.Set("replication-max", fmt.Sprintf("%d", replicationFactor))
if name != "" {
values.Set("name", name)
}
if len(values) > 0 {
reqURL += "?" + values.Encode()
}
req, err := http.NewRequestWithContext(ctx, "POST", reqURL, nil)
if err != nil {
return nil, fmt.Errorf("failed to create pin request: %w", err)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("pin request failed: %w", err)
}
@ -242,6 +268,9 @@ func (c *Client) PinStatus(ctx context.Context, cid string) (*PinStatus, error)
return nil, fmt.Errorf("failed to decode pin status response: %w", err)
}
// Use name from GlobalPinInfo
name := gpi.Name
// Extract status from peer map (use first peer's status, or aggregate)
status := "unknown"
peers := make([]string, 0, len(gpi.PeerMap))
@ -274,7 +303,7 @@ func (c *Client) PinStatus(ctx context.Context, cid string) (*PinStatus, error)
result := &PinStatus{
Cid: gpi.Cid,
Name: name,
Status: status,
ReplicationMin: 0, // Not available in GlobalPinInfo
ReplicationMax: 0, // Not available in GlobalPinInfo
@ -331,8 +360,12 @@ func (c *Client) Get(ctx context.Context, cid string, ipfsAPIURL string) (io.Rea
}
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
return nil, fmt.Errorf("content not found (CID: %s). The content may not be available on the IPFS node, or the IPFS API may not be accessible at %s", cid, ipfsAPIURL)
}
return nil, fmt.Errorf("get failed with status %d: %s", resp.StatusCode, string(body))
}
return resp.Body, nil


@ -67,6 +67,15 @@ if ! command -v curl > /dev/null 2>&1; then
exit 1
fi
# Check for skip flag
# To skip changelog generation, set SKIP_CHANGELOG=1 before committing:
# SKIP_CHANGELOG=1 git commit -m "your message"
# SKIP_CHANGELOG=1 git commit
if [ "$SKIP_CHANGELOG" = "1" ] || [ "$SKIP_CHANGELOG" = "true" ]; then
log "Skipping changelog update (SKIP_CHANGELOG is set)"
exit 0
fi
# Check if we're in a git repo # Check if we're in a git repo
if ! git rev-parse --git-dir > /dev/null 2>&1; then
error "Not in a git repository"