Compare commits

16 Commits
v0.2.5 ... main

Author SHA1 Message Date
anonpenguin23
d744e26a19 Update package version to 0.3.4 2025-11-28 22:31:21 +02:00
anonpenguin23
681299efdd Add TLS configuration for fetch in HttpClient
- Introduced a new function to create a fetch instance with TLS settings for staging certificates in Node.js environments.
- Updated HttpClient to use this fetch function, allowing self-signed certificates in development and staging environments.
- Enhanced security handling by ensuring that staging certificates are only accepted in non-production settings.
2025-11-28 22:31:01 +02:00
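A minimal usage sketch of the flag this commit introduces (the env checks appear in the src/core/http.ts diff below; the client config shape and gateway URL are assumptions):

```ts
import { createClient } from "@debros/network-ts-sdk";

// Dev/staging only: createFetchWithTLSConfig() sees this flag and sets
// NODE_TLS_REJECT_UNAUTHORIZED=0, disabling TLS verification process-wide.
process.env.DEBROS_ALLOW_STAGING_CERTS = "true";

const client = createClient({ baseURL: "https://staging.gateway.example" }); // baseURL assumed
```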
anonpenguin23
3db9f4d8b8 Update package version to 0.3.2 2025-11-22 09:38:59 +02:00
anonpenguin23
ca81e60bcb Refactor HttpClient error logging and SQL query handling
- Improved logging for expected 404 responses in find-one calls, now logged as warnings instead of errors for better clarity.
- Enhanced SQL query detail logging by refining the formatting of arguments in the console output.
- Cleaned up code by removing unnecessary checks for expected 404 errors related to blocked users and conversation participants.
2025-11-21 14:41:04 +02:00
anonpenguin23
091a6d5751 Enhance HttpClient to log SQL query details for rqlite operations
- Added functionality to extract and log SQL query details when making requests to rqlite endpoints, including direct SQL queries and table-based queries.
- Improved error handling for expected 404 responses, logging them as warnings instead of errors for better visibility.
- Updated console logging to include query details when available, enhancing debugging capabilities.
2025-11-21 14:39:07 +02:00
anonpenguin23
c7ef91f8d6 Update package version to 0.3.1 2025-11-11 17:51:03 +02:00
anonpenguin23
54c61e6a47 Add getBinary method to StorageClient for IPFS content retrieval
- Implemented a new method `getBinary` to retrieve content from IPFS by CID, returning the full Response object.
- Added retry logic with exponential backoff for handling eventual consistency in IPFS Cluster, allowing up to 8 attempts for content retrieval.
- Enhanced documentation with usage examples and detailed parameter descriptions for better clarity.
2025-11-08 12:02:35 +02:00
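A short sketch of the new method, assuming a `client` and a `cid` from an earlier upload; unlike `storage.get`, the full Response exposes headers before the body is consumed:

```ts
const response = await client.storage.getBinary(cid);
const size = Number(response.headers.get("content-length") ?? 0); // header may be absent
const bytes = new Uint8Array(await response.arrayBuffer()); // or stream response.body
console.log(`fetched ${bytes.length} bytes (content-length: ${size})`);
```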
anonpenguin23
a02c7474ab Enhance StorageClient upload method to support optional pinning
- Updated the upload method to accept an options parameter for controlling the pinning behavior of uploaded content.
- Default pinning behavior remains true, but can be set to false through the options.
- Improved documentation to reflect the new options parameter and its usage in examples.
2025-11-08 08:34:29 +02:00
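A sketch of the new option, with signatures taken from the src/storage/client.ts diff below (assumes `client` and a `file` in scope):

```ts
// Upload without pinning; pin defaults to true when options are omitted.
const draft = await client.storage.upload(file, "draft.txt", { pin: false });

// Pin later, once the content is final.
await client.storage.pin(draft.cid, "draft.txt");
```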
anonpenguin23
06d58fe85b Enhance HttpClient error handling for expected 404 responses
- Added logic to identify and handle expected 404 errors for conversation participants without logging them as errors.
- Updated comments to clarify the expected behavior for cache misses and non-participant status, improving code readability.
2025-11-07 08:17:51 +02:00
anonpenguin23
2cdb78ee1d Add multiGet functionality to CacheClient for batch retrieval of cache values
- Introduced CacheMultiGetRequest and CacheMultiGetResponse interfaces to define the structure for multi-get operations.
- Implemented multiGet method in CacheClient to retrieve multiple cache values in a single request, handling cache misses gracefully.
- Enhanced error handling to manage 404 errors when the backend does not support multiGet, returning null for missing keys.
2025-11-07 06:47:42 +02:00
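A sketch of the batch API with the documented fallback (assumes `client` in scope): multiGet resolves to a Map with null for misses, and to all-null when the backend lacks /v1/cache/mget:

```ts
const values = await client.cache.multiGet("sessions", ["a", "b", "c"]);
for (const [key, value] of values) {
  if (value !== null) {
    console.log(key, value);
    continue;
  }
  // Miss, or the backend does not support mget yet: fall back to a single get.
  const single = await client.cache.get("sessions", key);
  console.log(key, single?.value ?? null);
}
```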
anonpenguin23
cee0cd62a9 Enhance CacheClient to handle cache misses gracefully
- Updated the `get` method in CacheClient to return null for cache misses instead of throwing an error.
- Improved error handling to differentiate between expected 404 errors and other exceptions.
- Adjusted related tests to verify null returns for non-existent keys, ensuring robust cache behavior.
2025-11-07 06:03:47 +02:00
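A cache-aside sketch of the new null-on-miss contract; `loadUserFromDb` is a hypothetical loader:

```ts
const hit = await client.cache.get("users", userId);
const user = hit !== null ? hit.value : await loadUserFromDb(userId);
if (hit === null) {
  await client.cache.put("users", userId, user, "1h"); // ttl takes duration strings like "1h"
}
```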
anonpenguin23
51f7c433c7 Refactor HttpClient to improve API key and JWT handling for various operations
- Removed redundant console logging in setApiKey method.
- Updated getAuthHeaders method to include cache operations in API key usage logic.
- Enhanced request logging to track request duration and handle expected 404 errors gracefully.
- Improved code readability by formatting SQL queries in the Repository class.
2025-11-06 11:16:10 +02:00
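A sketch of the resulting credential behavior; the HttpClient config shape is assumed, since the constructor is only partially shown below:

```ts
import { HttpClient } from "@debros/network-ts-sdk";

const http = new HttpClient({ baseURL: "https://gateway.example" }); // baseURL assumed
http.setApiKey("my-api-key"); // no longer logs; the JWT is preserved
http.setJwt(userJwt);         // assumes a JWT string in scope; both can coexist

// Per getAuthHeaders below: /v1/rqlite/*, /v1/pubsub/*, /v1/proxy/* (and /v1/cache/*)
// requests send only X-API-Key when a key is set; other paths send both headers.
```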
anonpenguin23
64cfe078f0 Remove redundant test for handling get errors with non-existent CID in storage tests 2025-11-06 06:25:18 +02:00
anonpenguin23
5bfb042646 Implement retry logic for content retrieval in StorageClient
- Added retry mechanism for the `get` method to handle eventual consistency in IPFS Cluster.
- Introduced exponential backoff strategy for retries, allowing up to 8 attempts with increasing wait times.
- Enhanced error handling to differentiate between 404 errors and other failures, ensuring robust content retrieval.
2025-11-05 17:30:58 +02:00
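Caller-side sketch: the retry loop is internal to `get`, so a read immediately after upload just works (assumes `client` and `file` in scope):

```ts
const { cid } = await client.storage.upload(file, "photo.jpg");
// get() retries 404s internally (up to 8 attempts) while the cluster pin settles.
const stream = await client.storage.get(cid);
```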
anonpenguin23
e233696166 feat: integrate Olric distributed cache support
- Added Olric cache server integration, including configuration options for Olric servers and timeout settings.
- Implemented HTTP handlers for cache operations: health check, get, put, delete, and scan.
- Enhanced Makefile with commands to run the Olric server and manage its configuration.
- Updated README and setup scripts to include Olric installation and configuration instructions.
- Introduced tests for cache handlers to ensure proper functionality and error handling.
2025-11-05 07:30:59 +02:00
anonpenguin23
e30e81d0c9 Update package version to 0.3.0 and introduce CacheClient with caching functionality
- Added CacheClient to manage cache operations including get, put, delete, and scan.
- Updated createClient function to include cache client.
- Added new types and interfaces for cache requests and responses.
- Implemented comprehensive tests for cache functionality, covering health checks, value storage, retrieval, deletion, and scanning.
2025-11-03 15:28:30 +02:00
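A sketch of the surface this commit adds (full implementation in src/cache/client.ts below; GATEWAY_URL and the config shape are assumptions):

```ts
const { cache } = createClient({ baseURL: GATEWAY_URL });

await cache.health();                                  // -> { status, service }
await cache.put("test-cache", "k1", { n: 1 }, "30m");  // ttl is optional
const got = await cache.get("test-cache", "k1");       // null-on-miss arrived in a later commit
const { keys } = await cache.scan("test-cache", "^k"); // optional regex match
await cache.delete("test-cache", "k1");
```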
8 changed files with 1126 additions and 41 deletions

package.json

@@ -1,6 +1,6 @@
{
"name": "@debros/network-ts-sdk",
"version": "0.2.5",
"version": "0.3.4",
"description": "TypeScript SDK for DeBros Network Gateway",
"type": "module",
"main": "./dist/index.js",

src/cache/client.ts (new file, 203 lines)

@@ -0,0 +1,203 @@
import { HttpClient } from "../core/http";
import { SDKError } from "../errors";
export interface CacheGetRequest {
dmap: string;
key: string;
}
export interface CacheGetResponse {
key: string;
value: any;
dmap: string;
}
export interface CachePutRequest {
dmap: string;
key: string;
value: any;
ttl?: string; // Duration string like "1h", "30m"
}
export interface CachePutResponse {
status: string;
key: string;
dmap: string;
}
export interface CacheDeleteRequest {
dmap: string;
key: string;
}
export interface CacheDeleteResponse {
status: string;
key: string;
dmap: string;
}
export interface CacheMultiGetRequest {
dmap: string;
keys: string[];
}
export interface CacheMultiGetResponse {
results: Array<{
key: string;
value: any;
}>;
dmap: string;
}
export interface CacheScanRequest {
dmap: string;
match?: string; // Optional regex pattern
}
export interface CacheScanResponse {
keys: string[];
count: number;
dmap: string;
}
export interface CacheHealthResponse {
status: string;
service: string;
}
export class CacheClient {
private httpClient: HttpClient;
constructor(httpClient: HttpClient) {
this.httpClient = httpClient;
}
/**
* Check cache service health
*/
async health(): Promise<CacheHealthResponse> {
return this.httpClient.get("/v1/cache/health");
}
/**
* Get a value from cache
* Returns null if the key is not found (cache miss/expired), which is normal behavior
*/
async get(dmap: string, key: string): Promise<CacheGetResponse | null> {
try {
return await this.httpClient.post<CacheGetResponse>("/v1/cache/get", {
dmap,
key,
});
} catch (error) {
// Cache misses (404 or "key not found" messages) are normal behavior - return null instead of throwing
if (
error instanceof SDKError &&
(error.httpStatus === 404 ||
(error.httpStatus === 500 &&
error.message?.toLowerCase().includes("key not found")))
) {
return null;
}
// Re-throw other errors (network issues, server errors, etc.)
throw error;
}
}
/**
* Put a value into cache
*/
async put(
dmap: string,
key: string,
value: any,
ttl?: string
): Promise<CachePutResponse> {
return this.httpClient.post<CachePutResponse>("/v1/cache/put", {
dmap,
key,
value,
ttl,
});
}
/**
* Delete a value from cache
*/
async delete(dmap: string, key: string): Promise<CacheDeleteResponse> {
return this.httpClient.post<CacheDeleteResponse>("/v1/cache/delete", {
dmap,
key,
});
}
/**
* Get multiple values from cache in a single request
* Returns a map of key -> value (or null if not found)
* Gracefully handles 404 errors (endpoint not implemented) by returning empty results
*/
async multiGet(
dmap: string,
keys: string[]
): Promise<Map<string, any | null>> {
try {
if (keys.length === 0) {
return new Map();
}
const response = await this.httpClient.post<CacheMultiGetResponse>(
"/v1/cache/mget",
{
dmap,
keys,
}
);
// Convert array to Map
const resultMap = new Map<string, any | null>();
// First, mark all keys as null (cache miss)
keys.forEach((key) => {
resultMap.set(key, null);
});
// Then, update with found values
if (response.results) {
response.results.forEach(({ key, value }) => {
resultMap.set(key, value);
});
}
return resultMap;
} catch (error) {
// Handle 404 errors silently (endpoint not implemented on backend)
// This is expected behavior when the backend doesn't support multiGet yet
if (error instanceof SDKError && error.httpStatus === 404) {
// Return map with all nulls silently - caller can fall back to individual gets
const resultMap = new Map<string, any | null>();
keys.forEach((key) => {
resultMap.set(key, null);
});
return resultMap;
}
// Log and return empty results for other errors
const resultMap = new Map<string, any | null>();
keys.forEach((key) => {
resultMap.set(key, null);
});
console.error(`[CacheClient] Error in multiGet for ${dmap}:`, error);
return resultMap;
}
}
/**
* Scan keys in a distributed map, optionally matching a regex pattern
*/
async scan(dmap: string, match?: string): Promise<CacheScanResponse> {
return this.httpClient.post<CacheScanResponse>("/v1/cache/scan", {
dmap,
match,
});
}
}

src/core/http.ts

@@ -8,6 +8,29 @@ export interface HttpClientConfig {
fetch?: typeof fetch;
}
/**
* Create a fetch function with proper TLS configuration for staging certificates
* In Node.js, we need to configure TLS to accept Let's Encrypt staging certificates
*/
function createFetchWithTLSConfig(): typeof fetch {
// Check if we're in a Node.js environment
if (typeof process !== "undefined" && process.versions?.node) {
// For testing/staging/development: allow staging certificates
// Let's Encrypt staging certificates are self-signed and not trusted by default
const isDevelopmentOrStaging =
process.env.NODE_ENV !== "production" ||
process.env.DEBROS_ALLOW_STAGING_CERTS === "true" ||
process.env.DEBROS_USE_HTTPS === "true";
if (isDevelopmentOrStaging) {
// Allow self-signed/staging certificates
// WARNING: Only use this in development/testing environments
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
}
}
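// Note: the relaxation above is a process-wide env side effect; the returned
// fetch is simply the global one, not a per-client instance.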
return globalThis.fetch;
}
export class HttpClient {
private baseURL: string;
private timeout: number;
@@ -22,20 +45,13 @@ export class HttpClient {
this.timeout = config.timeout ?? 60000; // Increased from 30s to 60s for pub/sub operations
this.maxRetries = config.maxRetries ?? 3;
this.retryDelayMs = config.retryDelayMs ?? 1000;
this.fetch = config.fetch ?? globalThis.fetch;
// Use provided fetch or create one with proper TLS configuration for staging certificates
this.fetch = config.fetch ?? createFetchWithTLSConfig();
}
setApiKey(apiKey?: string) {
this.apiKey = apiKey;
// Don't clear JWT - allow both to coexist
if (typeof console !== "undefined") {
console.log(
"[HttpClient] API key set:",
!!apiKey,
"JWT still present:",
!!this.jwt
);
}
}
setJwt(jwt?: string) {
@@ -54,22 +70,39 @@
private getAuthHeaders(path: string): Record<string, string> {
const headers: Record<string, string> = {};
// For database, pubsub, and proxy operations, ONLY use API key to avoid JWT user context
// For database, pubsub, proxy, and cache operations, ONLY use API key to avoid JWT user context
// interfering with namespace-level authorization
const isDbOperation = path.includes("/v1/rqlite/");
const isPubSubOperation = path.includes("/v1/pubsub/");
const isProxyOperation = path.includes("/v1/proxy/");
const isCacheOperation = path.includes("/v1/cache/");
if (isDbOperation || isPubSubOperation || isProxyOperation) {
// For database/pubsub/proxy operations: use only API key (preferred for namespace operations)
// For auth operations, prefer API key over JWT to ensure proper authentication
const isAuthOperation = path.includes("/v1/auth/");
if (
isDbOperation ||
isPubSubOperation ||
isProxyOperation ||
isCacheOperation
) {
// For database/pubsub/proxy/cache operations: use only API key (preferred for namespace operations)
if (this.apiKey) {
headers["X-API-Key"] = this.apiKey;
} else if (this.jwt) {
// Fallback to JWT if no API key
headers["Authorization"] = `Bearer ${this.jwt}`;
}
} else if (isAuthOperation) {
// For auth operations: prefer API key over JWT (auth endpoints should use explicit API key)
if (this.apiKey) {
headers["X-API-Key"] = this.apiKey;
}
if (this.jwt) {
headers["Authorization"] = `Bearer ${this.jwt}`;
}
} else {
// For auth/other operations: send both JWT and API key
// For other operations: send both JWT and API key
if (this.jwt) {
headers["Authorization"] = `Bearer ${this.jwt}`;
}
@@ -98,6 +131,7 @@
timeout?: number; // Per-request timeout override
} = {}
): Promise<T> {
const startTime = performance.now(); // Track request start time
const url = new URL(this.baseURL + path);
if (options.query) {
Object.entries(options.query).forEach(([key, value]) => {
@@ -111,27 +145,6 @@
...options.headers,
};
// Debug: Log headers being sent
if (
typeof console !== "undefined" &&
(path.includes("/db/") ||
path.includes("/query") ||
path.includes("/auth/") ||
path.includes("/pubsub/") ||
path.includes("/proxy/"))
) {
console.log("[HttpClient] Request headers for", path, {
hasAuth: !!headers["Authorization"],
hasApiKey: !!headers["X-API-Key"],
authPrefix: headers["Authorization"]
? headers["Authorization"].substring(0, 20)
: "none",
apiKeyPrefix: headers["X-API-Key"]
? headers["X-API-Key"].substring(0, 20)
: "none",
});
}
const controller = new AbortController();
const requestTimeout = options.timeout ?? this.timeout; // Use override or default
const timeoutId = setTimeout(() => controller.abort(), requestTimeout);
@@ -146,8 +159,99 @@
fetchOptions.body = JSON.stringify(options.body);
}
// Extract and log SQL query details for rqlite operations
const isRqliteOperation = path.includes("/v1/rqlite/");
let queryDetails: string | null = null;
if (isRqliteOperation && options.body) {
try {
const body =
typeof options.body === "string"
? JSON.parse(options.body)
: options.body;
if (body.sql) {
// Direct SQL query (query/exec endpoints)
queryDetails = `SQL: ${body.sql}`;
if (body.args && body.args.length > 0) {
queryDetails += ` | Args: [${body.args
.map((a: any) => (typeof a === "string" ? `"${a}"` : a))
.join(", ")}]`;
}
} else if (body.table) {
// Table-based query (find/find-one/select endpoints)
queryDetails = `Table: ${body.table}`;
if (body.criteria && Object.keys(body.criteria).length > 0) {
queryDetails += ` | Criteria: ${JSON.stringify(body.criteria)}`;
}
if (body.options) {
queryDetails += ` | Options: ${JSON.stringify(body.options)}`;
}
if (body.select) {
queryDetails += ` | Select: ${JSON.stringify(body.select)}`;
}
if (body.where) {
queryDetails += ` | Where: ${JSON.stringify(body.where)}`;
}
if (body.limit) {
queryDetails += ` | Limit: ${body.limit}`;
}
if (body.offset) {
queryDetails += ` | Offset: ${body.offset}`;
}
}
} catch (e) {
// Failed to parse body, ignore
}
}
try {
return await this.requestWithRetry(url.toString(), fetchOptions);
const result = await this.requestWithRetry(
url.toString(),
fetchOptions,
0,
startTime
);
const duration = performance.now() - startTime;
if (typeof console !== "undefined") {
const logMessage = `[HttpClient] ${method} ${path} completed in ${duration.toFixed(
2
)}ms`;
console.log(logMessage);
if (queryDetails) {
console.log(`[HttpClient] ${queryDetails}`);
}
}
return result;
} catch (error) {
const duration = performance.now() - startTime;
if (typeof console !== "undefined") {
// For 404 errors on find-one calls, log at warn level (not error) since "not found" is expected
// Application layer handles these cases in try-catch blocks
const is404FindOne =
path === "/v1/rqlite/find-one" &&
error instanceof SDKError &&
error.httpStatus === 404;
if (is404FindOne) {
// Log as warning for visibility, but not as error since it's expected behavior
console.warn(
`[HttpClient] ${method} ${path} returned 404 after ${duration.toFixed(
2
)}ms (expected for optional lookups)`
);
} else {
const errorMessage = `[HttpClient] ${method} ${path} failed after ${duration.toFixed(
2
)}ms:`;
console.error(errorMessage, error);
if (queryDetails) {
console.error(`[HttpClient] ${queryDetails}`);
}
}
}
throw error;
} finally {
clearTimeout(timeoutId);
}
@@ -156,7 +260,8 @@
private async requestWithRetry(
url: string,
options: RequestInit,
attempt: number = 0
attempt: number = 0,
startTime?: number // Track start time for timing across retries
): Promise<any> {
try {
const response = await this.fetch(url, options);
@@ -185,7 +290,7 @@
await new Promise((resolve) =>
setTimeout(resolve, this.retryDelayMs * (attempt + 1))
);
return this.requestWithRetry(url, options, attempt + 1);
return this.requestWithRetry(url, options, attempt + 1, startTime);
}
throw error;
}
@@ -221,6 +326,104 @@
return this.request<T>("DELETE", path, options);
}
/**
* Upload a file using multipart/form-data
* This is a special method for file uploads that bypasses JSON serialization
*/
async uploadFile<T = any>(
path: string,
formData: FormData,
options?: {
timeout?: number;
}
): Promise<T> {
const startTime = performance.now(); // Track upload start time
const url = new URL(this.baseURL + path);
const headers: Record<string, string> = {
...this.getAuthHeaders(path),
// Don't set Content-Type - browser will set it with boundary
};
const controller = new AbortController();
const requestTimeout = options?.timeout ?? this.timeout * 5; // 5x timeout for uploads
const timeoutId = setTimeout(() => controller.abort(), requestTimeout);
const fetchOptions: RequestInit = {
method: "POST",
headers,
body: formData,
signal: controller.signal,
};
try {
const result = await this.requestWithRetry(
url.toString(),
fetchOptions,
0,
startTime
);
const duration = performance.now() - startTime;
if (typeof console !== "undefined") {
console.log(
`[HttpClient] POST ${path} (upload) completed in ${duration.toFixed(
2
)}ms`
);
}
return result;
} catch (error) {
const duration = performance.now() - startTime;
if (typeof console !== "undefined") {
console.error(
`[HttpClient] POST ${path} (upload) failed after ${duration.toFixed(
2
)}ms:`,
error
);
}
throw error;
} finally {
clearTimeout(timeoutId);
}
}
/**
* Get a binary response (returns Response object for streaming)
*/
async getBinary(path: string): Promise<Response> {
const url = new URL(this.baseURL + path);
const headers: Record<string, string> = {
...this.getAuthHeaders(path),
};
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), this.timeout * 5); // 5x timeout for downloads
const fetchOptions: RequestInit = {
method: "GET",
headers,
signal: controller.signal,
};
try {
const response = await this.fetch(url.toString(), fetchOptions);
if (!response.ok) {
clearTimeout(timeoutId);
const error = await response.json().catch(() => ({
error: response.statusText,
}));
throw SDKError.fromResponse(response.status, error);
}
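// Note: the timeout is not cleared on success; if it fires while the caller is
// still reading the body, the AbortController aborts the stream.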
return response;
} catch (error) {
clearTimeout(timeoutId);
throw error;
}
}
getToken(): string | undefined {
return this.getAuthToken();
}

src/db/repository.ts

@@ -98,7 +98,9 @@ export class Repository<T extends Record<string, any>> {
private buildInsertSql(entity: T): string {
const columns = Object.keys(entity).filter((k) => entity[k] !== undefined);
const placeholders = columns.map(() => "?").join(", ");
return `INSERT INTO ${this.tableName} (${columns.join(", ")}) VALUES (${placeholders})`;
return `INSERT INTO ${this.tableName} (${columns.join(
", "
)}) VALUES (${placeholders})`;
}
private buildInsertArgs(entity: T): any[] {
@@ -111,7 +113,9 @@
const columns = Object.keys(entity)
.filter((k) => entity[k] !== undefined && k !== this.primaryKey)
.map((k) => `${k} = ?`);
return `UPDATE ${this.tableName} SET ${columns.join(", ")} WHERE ${this.primaryKey} = ?`;
return `UPDATE ${this.tableName} SET ${columns.join(", ")} WHERE ${
this.primaryKey
} = ?`;
}
private buildUpdateArgs(entity: T): any[] {

src/index.ts

@@ -3,6 +3,8 @@ import { AuthClient } from "./auth/client";
import { DBClient } from "./db/client";
import { PubSubClient } from "./pubsub/client";
import { NetworkClient } from "./network/client";
import { CacheClient } from "./cache/client";
import { StorageClient } from "./storage/client";
import { WSClientConfig } from "./core/ws";
import {
StorageAdapter,
@@ -23,6 +25,8 @@ export interface Client {
db: DBClient;
pubsub: PubSubClient;
network: NetworkClient;
cache: CacheClient;
storage: StorageClient;
}
export function createClient(config: ClientConfig): Client {
@@ -52,16 +56,19 @@
wsURL,
});
const network = new NetworkClient(httpClient);
const cache = new CacheClient(httpClient);
const storage = new StorageClient(httpClient);
return {
auth,
db,
pubsub,
network,
cache,
storage,
};
}
// Re-exports
export { HttpClient } from "./core/http";
export { WSClient } from "./core/ws";
export { AuthClient } from "./auth/client";
@@ -70,6 +77,8 @@ export { QueryBuilder } from "./db/qb";
export { Repository } from "./db/repository";
export { PubSubClient, Subscription } from "./pubsub/client";
export { NetworkClient } from "./network/client";
export { CacheClient } from "./cache/client";
export { StorageClient } from "./storage/client";
export { SDKError } from "./errors";
export { MemoryStorage, LocalStorageAdapter } from "./auth/types";
export type { StorageAdapter, AuthConfig, WhoAmI } from "./auth/types";
@@ -86,3 +95,22 @@ export type {
ProxyRequest,
ProxyResponse,
} from "./network/client";
export type {
CacheGetRequest,
CacheGetResponse,
CachePutRequest,
CachePutResponse,
CacheDeleteRequest,
CacheDeleteResponse,
CacheMultiGetRequest,
CacheMultiGetResponse,
CacheScanRequest,
CacheScanResponse,
CacheHealthResponse,
} from "./cache/client";
export type {
StorageUploadResponse,
StoragePinRequest,
StoragePinResponse,
StorageStatus,
} from "./storage/client";

src/storage/client.ts (new file, 270 lines)

@@ -0,0 +1,270 @@
import { HttpClient } from "../core/http";
export interface StorageUploadResponse {
cid: string;
name: string;
size: number;
}
export interface StoragePinRequest {
cid: string;
name?: string;
}
export interface StoragePinResponse {
cid: string;
name: string;
}
export interface StorageStatus {
cid: string;
name: string;
status: string; // "pinned", "pinning", "queued", "unpinned", "error"
replication_min: number;
replication_max: number;
replication_factor: number;
peers: string[];
error?: string;
}
export class StorageClient {
private httpClient: HttpClient;
constructor(httpClient: HttpClient) {
this.httpClient = httpClient;
}
/**
* Upload content to IPFS and optionally pin it.
* Supports both File objects (browser) and Buffer/ReadableStream (Node.js).
*
* @param file - File to upload (File, Blob, or Buffer)
* @param name - Optional filename
* @param options - Optional upload options
* @param options.pin - Whether to pin the content (default: true). Pinning happens asynchronously on the backend.
* @returns Upload result with CID
*
* @example
* ```ts
* // Browser
* const fileInput = document.querySelector('input[type="file"]');
* const file = fileInput.files[0];
* const result = await client.storage.upload(file, file.name);
* console.log(result.cid);
*
* // Node.js
* const fs = require('fs');
* const fileBuffer = fs.readFileSync('image.jpg');
* const result = await client.storage.upload(fileBuffer, 'image.jpg', { pin: true });
* ```
*/
async upload(
file: File | Blob | ArrayBuffer | Uint8Array | ReadableStream<Uint8Array>,
name?: string,
options?: {
pin?: boolean;
}
): Promise<StorageUploadResponse> {
// Create FormData for multipart upload
const formData = new FormData();
// Handle different input types
if (file instanceof File) {
formData.append("file", file);
} else if (file instanceof Blob) {
formData.append("file", file, name);
} else if (file instanceof ArrayBuffer) {
const blob = new Blob([file]);
formData.append("file", blob, name);
} else if (file instanceof Uint8Array) {
// Convert Uint8Array to ArrayBuffer for Blob constructor
const buffer = file.buffer.slice(
file.byteOffset,
file.byteOffset + file.byteLength
) as ArrayBuffer;
const blob = new Blob([buffer], { type: "application/octet-stream" });
formData.append("file", blob, name);
} else if (file instanceof ReadableStream) {
// For ReadableStream, we need to read it into a blob first
// This is a limitation - in practice, pass File/Blob/Buffer
const chunks: ArrayBuffer[] = [];
const reader = file.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const buffer = value.buffer.slice(
value.byteOffset,
value.byteOffset + value.byteLength
) as ArrayBuffer;
chunks.push(buffer);
}
const blob = new Blob(chunks);
formData.append("file", blob, name);
} else {
throw new Error(
"Unsupported file type. Use File, Blob, ArrayBuffer, Uint8Array, or ReadableStream."
);
}
// Add pin flag (default: true)
const shouldPin = options?.pin !== false; // Default to true
formData.append("pin", shouldPin ? "true" : "false");
return this.httpClient.uploadFile<StorageUploadResponse>(
"/v1/storage/upload",
formData,
{ timeout: 300000 } // 5 minute timeout for large files
);
}
/**
* Pin an existing CID
*
* @param cid - Content ID to pin
* @param name - Optional name for the pin
* @returns Pin result
*/
async pin(cid: string, name?: string): Promise<StoragePinResponse> {
return this.httpClient.post<StoragePinResponse>("/v1/storage/pin", {
cid,
name,
});
}
/**
* Get the pin status for a CID
*
* @param cid - Content ID to check
* @returns Pin status information
*/
async status(cid: string): Promise<StorageStatus> {
return this.httpClient.get<StorageStatus>(`/v1/storage/status/${cid}`);
}
/**
* Retrieve content from IPFS by CID
*
* @param cid - Content ID to retrieve
* @returns ReadableStream of the content
*
* @example
* ```ts
* const stream = await client.storage.get(cid);
* const reader = stream.getReader();
* while (true) {
* const { done, value } = await reader.read();
* if (done) break;
* // Process chunk
* }
* ```
*/
async get(cid: string): Promise<ReadableStream<Uint8Array>> {
// Retry logic for content retrieval - content may not be immediately available
// after upload due to eventual consistency in IPFS Cluster
// IPFS Cluster pins can take 2-3+ seconds to complete across all nodes
const maxAttempts = 8;
let lastError: Error | null = null;
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
try {
const response = await this.httpClient.getBinary(
`/v1/storage/get/${cid}`
);
if (!response.body) {
throw new Error("Response body is null");
}
return response.body;
} catch (error: any) {
lastError = error;
// Check if this is a 404 error (content not found)
const isNotFound =
error?.httpStatus === 404 ||
error?.message?.includes("not found") ||
error?.message?.includes("404");
// If it's not a 404 error, or this is the last attempt, give up
if (!isNotFound || attempt === maxAttempts) {
throw error;
}
// Wait before retrying (linear backoff: 2.5s, 5s, 7.5s, ...)
// This gives up to ~70 seconds total wait time across 7 retries, covering typical pin completion
const backoffMs = attempt * 2500;
await new Promise((resolve) => setTimeout(resolve, backoffMs));
}
}
// This should never be reached, but TypeScript needs it
throw lastError || new Error("Failed to retrieve content");
}
/**
* Retrieve content from IPFS by CID and return the full Response object
* Useful when you need access to response headers (e.g., content-length)
*
* @param cid - Content ID to retrieve
* @returns Response object with body stream and headers
*
* @example
* ```ts
* const response = await client.storage.getBinary(cid);
* const contentLength = response.headers.get('content-length');
* const reader = response.body.getReader();
* // ... read stream
* ```
*/
async getBinary(cid: string): Promise<Response> {
// Retry logic for content retrieval - content may not be immediately available
// after upload due to eventual consistency in IPFS Cluster
// IPFS Cluster pins can take 2-3+ seconds to complete across all nodes
const maxAttempts = 8;
let lastError: Error | null = null;
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
try {
const response = await this.httpClient.getBinary(
`/v1/storage/get/${cid}`
);
if (!response) {
throw new Error("Response is null");
}
return response;
} catch (error: any) {
lastError = error;
// Check if this is a 404 error (content not found)
const isNotFound =
error?.httpStatus === 404 ||
error?.message?.includes("not found") ||
error?.message?.includes("404");
// If it's not a 404 error, or this is the last attempt, give up
if (!isNotFound || attempt === maxAttempts) {
throw error;
}
// Wait before retrying (linear backoff: 2.5s, 5s, 7.5s, ...)
// This gives up to ~70 seconds total wait time across 7 retries, covering typical pin completion
const backoffMs = attempt * 2500;
await new Promise((resolve) => setTimeout(resolve, backoffMs));
}
}
// This should never be reached, but TypeScript needs it
throw lastError || new Error("Failed to retrieve content");
}
/**
* Unpin a CID
*
* @param cid - Content ID to unpin
*/
async unpin(cid: string): Promise<void> {
await this.httpClient.delete(`/v1/storage/unpin/${cid}`);
}
}

tests/e2e/cache.test.ts (new file, 166 lines)

@@ -0,0 +1,166 @@
import { describe, it, expect, beforeEach } from "vitest";
import { createTestClient, skipIfNoGateway } from "./setup";
describe("Cache", () => {
if (skipIfNoGateway()) {
console.log("Skipping cache tests - gateway not available");
return;
}
const testDMap = "test-cache";
beforeEach(async () => {
// Clean up test keys before each test
const client = await createTestClient();
try {
const keys = await client.cache.scan(testDMap);
for (const key of keys.keys) {
await client.cache.delete(testDMap, key);
}
} catch (err) {
// Ignore errors during cleanup
}
});
it("should check cache health", async () => {
const client = await createTestClient();
const health = await client.cache.health();
expect(health.status).toBe("ok");
expect(health.service).toBe("olric");
});
it("should put and get a value", async () => {
const client = await createTestClient();
const testKey = "test-key-1";
const testValue = "test-value-1";
// Put value
const putResult = await client.cache.put(testDMap, testKey, testValue);
expect(putResult.status).toBe("ok");
expect(putResult.key).toBe(testKey);
expect(putResult.dmap).toBe(testDMap);
// Get value
const getResult = await client.cache.get(testDMap, testKey);
expect(getResult).not.toBeNull();
expect(getResult!.key).toBe(testKey);
expect(getResult!.value).toBe(testValue);
expect(getResult!.dmap).toBe(testDMap);
});
it("should put and get complex objects", async () => {
const client = await createTestClient();
const testKey = "test-key-2";
const testValue = {
name: "John",
age: 30,
tags: ["developer", "golang"],
};
// Put object
await client.cache.put(testDMap, testKey, testValue);
// Get object
const getResult = await client.cache.get(testDMap, testKey);
expect(getResult).not.toBeNull();
expect(getResult!.value).toBeDefined();
expect(getResult!.value.name).toBe(testValue.name);
expect(getResult!.value.age).toBe(testValue.age);
});
it("should put value with TTL", async () => {
const client = await createTestClient();
const testKey = "test-key-ttl";
const testValue = "ttl-value";
// Put with TTL
const putResult = await client.cache.put(
testDMap,
testKey,
testValue,
"5m"
);
expect(putResult.status).toBe("ok");
// Verify value exists
const getResult = await client.cache.get(testDMap, testKey);
expect(getResult).not.toBeNull();
expect(getResult!.value).toBe(testValue);
});
it("should delete a value", async () => {
const client = await createTestClient();
const testKey = "test-key-delete";
const testValue = "delete-me";
// Put value
await client.cache.put(testDMap, testKey, testValue);
// Verify it exists
const before = await client.cache.get(testDMap, testKey);
expect(before).not.toBeNull();
expect(before!.value).toBe(testValue);
// Delete value
const deleteResult = await client.cache.delete(testDMap, testKey);
expect(deleteResult.status).toBe("ok");
expect(deleteResult.key).toBe(testKey);
// Verify it's deleted (should return null, not throw)
const after = await client.cache.get(testDMap, testKey);
expect(after).toBeNull();
});
it("should scan keys", async () => {
const client = await createTestClient();
// Put multiple keys
await client.cache.put(testDMap, "key-1", "value-1");
await client.cache.put(testDMap, "key-2", "value-2");
await client.cache.put(testDMap, "key-3", "value-3");
// Scan all keys
const scanResult = await client.cache.scan(testDMap);
expect(scanResult.count).toBeGreaterThanOrEqual(3);
expect(scanResult.keys).toContain("key-1");
expect(scanResult.keys).toContain("key-2");
expect(scanResult.keys).toContain("key-3");
expect(scanResult.dmap).toBe(testDMap);
});
it("should scan keys with regex match", async () => {
const client = await createTestClient();
// Put keys with different patterns
await client.cache.put(testDMap, "user-1", "value-1");
await client.cache.put(testDMap, "user-2", "value-2");
await client.cache.put(testDMap, "session-1", "value-3");
// Scan with regex match
const scanResult = await client.cache.scan(testDMap, "^user-");
expect(scanResult.count).toBeGreaterThanOrEqual(2);
expect(scanResult.keys).toContain("user-1");
expect(scanResult.keys).toContain("user-2");
expect(scanResult.keys).not.toContain("session-1");
});
it("should handle non-existent key gracefully", async () => {
const client = await createTestClient();
const nonExistentKey = "non-existent-key";
// Cache misses should return null, not throw an error
const result = await client.cache.get(testDMap, nonExistentKey);
expect(result).toBeNull();
});
it("should handle empty dmap name", async () => {
const client = await createTestClient();
try {
await client.cache.get("", "test-key");
expect.fail("Expected get to fail with empty dmap");
} catch (err: any) {
expect(err.message).toBeDefined();
}
});
});

tests/e2e/storage.test.ts (new file, 211 lines)

@@ -0,0 +1,211 @@
import { describe, it, expect, beforeAll } from "vitest";
import { createTestClient, skipIfNoGateway } from "./setup";
describe("Storage", () => {
beforeAll(() => {
if (skipIfNoGateway()) {
console.log("Skipping storage tests");
}
});
it("should upload a file", async () => {
const client = await createTestClient();
const testContent = "Hello, IPFS!";
const testFile = new File([testContent], "test.txt", {
type: "text/plain",
});
const result = await client.storage.upload(testFile);
expect(result).toBeDefined();
expect(result.cid).toBeDefined();
expect(typeof result.cid).toBe("string");
expect(result.cid.length).toBeGreaterThan(0);
expect(result.name).toBe("test.txt");
expect(result.size).toBeGreaterThan(0);
});
it("should upload a Blob", async () => {
const client = await createTestClient();
const testContent = "Test blob content";
const blob = new Blob([testContent], { type: "text/plain" });
const result = await client.storage.upload(blob, "blob.txt");
expect(result).toBeDefined();
expect(result.cid).toBeDefined();
expect(typeof result.cid).toBe("string");
expect(result.name).toBe("blob.txt");
});
it("should upload ArrayBuffer", async () => {
const client = await createTestClient();
const testContent = "Test array buffer";
const buffer = new TextEncoder().encode(testContent).buffer;
const result = await client.storage.upload(buffer, "buffer.bin");
expect(result).toBeDefined();
expect(result.cid).toBeDefined();
expect(typeof result.cid).toBe("string");
});
it("should upload Uint8Array", async () => {
const client = await createTestClient();
const testContent = "Test uint8array";
const uint8Array = new TextEncoder().encode(testContent);
const result = await client.storage.upload(uint8Array, "uint8.txt");
expect(result).toBeDefined();
expect(result.cid).toBeDefined();
expect(typeof result.cid).toBe("string");
});
it("should pin a CID", async () => {
const client = await createTestClient();
// First upload a file to get a CID
const testContent = "File to pin";
const testFile = new File([testContent], "pin-test.txt", {
type: "text/plain",
});
const uploadResult = await client.storage.upload(testFile);
const cid = uploadResult.cid;
// Now pin it
const pinResult = await client.storage.pin(cid, "pinned-file");
expect(pinResult).toBeDefined();
expect(pinResult.cid).toBe(cid);
expect(pinResult.name).toBe("pinned-file");
});
it("should get pin status", async () => {
const client = await createTestClient();
// First upload and pin a file
const testContent = "File for status check";
const testFile = new File([testContent], "status-test.txt", {
type: "text/plain",
});
const uploadResult = await client.storage.upload(testFile);
await client.storage.pin(uploadResult.cid, "status-test");
// Wait a bit for pin to propagate
await new Promise((resolve) => setTimeout(resolve, 1000));
const status = await client.storage.status(uploadResult.cid);
expect(status).toBeDefined();
expect(status.cid).toBe(uploadResult.cid);
expect(status.name).toBe("status-test");
expect(status.status).toBeDefined();
expect(typeof status.status).toBe("string");
expect(status.replication_factor).toBeGreaterThanOrEqual(0);
expect(Array.isArray(status.peers)).toBe(true);
});
it("should retrieve content by CID", async () => {
const client = await createTestClient();
const testContent = "Content to retrieve";
const testFile = new File([testContent], "retrieve-test.txt", {
type: "text/plain",
});
const uploadResult = await client.storage.upload(testFile);
const cid = uploadResult.cid;
// Get the content back
const stream = await client.storage.get(cid);
expect(stream).toBeDefined();
expect(stream instanceof ReadableStream).toBe(true);
// Read the stream
const reader = stream.getReader();
const chunks: Uint8Array[] = [];
let done = false;
while (!done) {
const { value, done: streamDone } = await reader.read();
done = streamDone;
if (value) {
chunks.push(value);
}
}
// Combine chunks
const totalLength = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
const combined = new Uint8Array(totalLength);
let offset = 0;
for (const chunk of chunks) {
combined.set(chunk, offset);
offset += chunk.length;
}
const retrievedContent = new TextDecoder().decode(combined);
expect(retrievedContent).toBe(testContent);
});
it("should unpin a CID", async () => {
const client = await createTestClient();
// First upload and pin a file
const testContent = "File to unpin";
const testFile = new File([testContent], "unpin-test.txt", {
type: "text/plain",
});
const uploadResult = await client.storage.upload(testFile);
await client.storage.pin(uploadResult.cid, "unpin-test");
// Wait a bit
await new Promise((resolve) => setTimeout(resolve, 1000));
// Unpin it
await expect(client.storage.unpin(uploadResult.cid)).resolves.not.toThrow();
});
it("should handle upload errors gracefully", async () => {
const client = await createTestClient();
// Try to upload invalid data
const invalidFile = null as any;
await expect(client.storage.upload(invalidFile)).rejects.toThrow();
});
it("should handle status errors for non-existent CID", async () => {
const client = await createTestClient();
const fakeCID = "QmInvalidCID123456789";
await expect(client.storage.status(fakeCID)).rejects.toThrow();
});
it("should upload large content", async () => {
const client = await createTestClient();
// Create a larger file (100KB)
const largeContent = "x".repeat(100 * 1024);
const largeFile = new File([largeContent], "large.txt", {
type: "text/plain",
});
const result = await client.storage.upload(largeFile);
expect(result).toBeDefined();
expect(result.cid).toBeDefined();
expect(result.size).toBeGreaterThanOrEqual(100 * 1024);
});
it("should upload binary content", async () => {
const client = await createTestClient();
// Create binary data
const binaryData = new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a]); // PNG header
const blob = new Blob([binaryData], { type: "image/png" });
const result = await client.storage.upload(blob, "image.png");
expect(result).toBeDefined();
expect(result.cid).toBeDefined();
expect(result.name).toBe("image.png");
});
});