For the last decade, the conversation around decentralized storage has been dominated by blockchain projects.
Projects like Filecoin and Arweave have focused on solving Global Permanence by relying on Global Consensus: the entire network must validate and record proofs of storage for every file, secured by a native token, mining rigs, and a global ledger. The result is highly complex, computationally expensive, and not user-friendly.
That architecture may serve its own use cases, but it is the wrong approach for self-hosted storage users who simply want an offsite backup of family photos, documents, and the like. There is no need for a global market, gas fees, or a wallet. The only requirement is a guarantee that the data can be recovered after a disaster (e.g. your house burned down).
Commercial vendors like Backblaze are currently the main answer, but for users with terabytes of data to safeguard who cannot justify the cost of cloud storage, this post explores whether a better, more affordable solution can be built.
Global vs. Local Consensus
In a Ledgerless model, trust is not derived from a global chain; it is derived from local relationships and statistical probability.
The Blockchain Way: A user bids for storage in a global market. The network reaches consensus on the contract. Miners continually prove to the global ledger that they are storing the data to release the payments.
The Ledgerless Way: The client splits a file into millions of chunks. For each chunk, the node negotiates storage directly with a unique set of 6 peers, selected deterministically and transparently (a placement sketch follows below). There is no global record of this deal. The Consensus is purely pairwise between the source computer and the specific peers holding those chunks.
This distinction allows for building a tool that is light enough to run on your average computer, without the CPU overhead of mining.
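The post leaves the exact placement rule open, so the sketch below only illustrates one plausible way to make it "computational and transparent": ranking peers by Kademlia-style XOR distance between the chunk hash and each peer ID, so any node can recompute the same 6-peer set independently. The types and constant are placeholders, not the project's actual API.

```rust
// Illustrative sketch: deterministic chunk placement by XOR distance.
// `PeerId` is a plain 32-byte identifier here (an assumption for the sketch);
// real libp2p peer IDs would first be hashed down to a fixed length.

type PeerId = [u8; 32];
type ChunkHash = [u8; 32];

const REPLICAS: usize = 6; // e.g. 4 data + 2 parity shards per chunk

/// XOR distance between a peer ID and a chunk hash, compared lexicographically.
fn xor_distance(peer: &PeerId, chunk: &ChunkHash) -> [u8; 32] {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = peer[i] ^ chunk[i];
    }
    d
}

/// Pick the 6 peers "closest" to the chunk. Because every node applies the same
/// rule to the same peer list, the placement is transparent and auditable.
fn select_holders(chunk: &ChunkHash, mut peers: Vec<PeerId>) -> Vec<PeerId> {
    peers.sort_by_key(|p| xor_distance(p, chunk));
    peers.truncate(REPLICAS);
    peers
}
```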
The Economics of Local Debt
The immediate question about a tokenless system is: what solves the asymmetry problem? What happens if Peer A wants to back up 10TB, but their peers only have 500GB to offer?
The proposed solution combines Swarm Liquidity with Local Accounting.
Swarm Liquidity: A user does not swap data 1:1 with a single partner. Because files are chunked into smaller pieces, a 10TB upload is spread across hundreds or thousands of different nodes. Peer A does not need to find one node with 10TB of space; they can utilize the aggregate free space of 5,000 smaller peers.
Local Accounting: Every node maintains a local database that tracks the Debt Ratio of its peers. If Peer B stores chunks for Peer A, Peer A credits them locally. If Peer B later requests storage, Peer A approves it based on that credit history.
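As a minimal sketch of this Local Accounting (the field names, the 1 GiB grace quota for unknown peers, and the threshold knob are illustrative assumptions; a real node would persist these tables in redb, as described in the tech stack below):

```rust
use std::collections::HashMap;

/// Per-peer balance from the local node's point of view.
#[derive(Default, Clone)]
struct PeerAccount {
    /// Bytes of theirs that we store (their consumption of our space).
    bytes_we_store_for_them: u64,
    /// Bytes of ours that they store (their credit with us).
    bytes_they_store_for_us: u64,
}

impl PeerAccount {
    /// Debt Ratio above 1.0 means the peer consumes more space than it contributes.
    fn debt_ratio(&self) -> f64 {
        let credit = self.bytes_they_store_for_us.max(1) as f64;
        self.bytes_we_store_for_them as f64 / credit
    }
}

struct LocalLedger {
    accounts: HashMap<String, PeerAccount>, // keyed by peer ID
    max_debt_ratio: f64,                    // local policy knob, e.g. 2.0
}

impl LocalLedger {
    /// Decide on a storage request purely from local history; no global state involved.
    fn approve_request(&self, peer_id: &str, extra_bytes: u64) -> bool {
        match self.accounts.get(peer_id) {
            // Unknown peer: only the small optimistic grace quota (1 GiB here).
            None => extra_bytes <= (1u64 << 30),
            Some(acc) => {
                let mut projected = acc.clone();
                projected.bytes_we_store_for_them += extra_bytes;
                projected.debt_ratio() <= self.max_debt_ratio
            }
        }
    }
}
```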
There is no need for a global currency; local quotas are sufficient. Asymmetry is managed socially: users wanting to store more than they can host must find peers who explicitly whitelist them (e.g., friends or secondary devices), or simply add more storage to contribute extra space to the swarm.
Crucially, Local Debt does not guarantee long-term availability from any single peer. Durability emerges statistically through over-replication and erasure coding. Even if a specific peer defaults on their debt and disappears, the math ensures the stored content survives.
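To put a rough number on that claim: with the 4+2 Reed-Solomon layout described later in the post, a chunk stays readable as long as any 4 of its 6 shard holders are reachable. A back-of-the-envelope availability calculation, assuming an illustrative 90% per-peer uptime and ignoring repair:

```rust
/// Binomial coefficient C(n, k), exact for the small values used here.
fn choose(n: u64, k: u64) -> f64 {
    (0..k).fold(1.0, |acc, i| acc * (n - i) as f64 / (i + 1) as f64)
}

/// Probability that at least `needed` of `total` shard holders are reachable,
/// assuming each peer is independently online with probability `p`.
fn chunk_availability(total: u64, needed: u64, p: f64) -> f64 {
    (needed..=total)
        .map(|k| choose(total, k) * p.powi(k as i32) * (1.0 - p).powi((total - k) as i32))
        .sum()
}

fn main() {
    // 4-of-6 shards required, assumed 90% per-peer availability.
    println!("{:.3}", chunk_availability(6, 4, 0.9)); // ≈ 0.984
}
```

Roughly 98% per-chunk availability is clearly not enough on its own across millions of chunks, which is why the model pairs erasure coding with over-replication and the ongoing spot checks described further down.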
Under the Hood: The Tech Stack
To achieve this on consumer hardware, the proposed solution prioritizes standard, high-performance libraries over experimental cryptography.
Language: Rust
Networking: libp2p. This allows nodes to maintain hundreds of concurrent connections efficiently.
Database: redb to handle the local file manifest and peer reputation tables.
Cryptography: XChaCha20Poly1305 for stream encryption and Ed25519 for peer identity.
Consensus: The Source of Truth is the user's own database. To ensure recovery on bare metal, a small, versioned metadata object containing the database snapshot is backed up to the DHT, encrypted with a key derived from the user's BIP39 recovery phrase.
Recovery: A generous decay period ensures the user can recover their data even after an extended offline stretch (e.g. following a house fire).
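As a rough sketch of the recovery path, the snippet below derives a key from the BIP39 phrase via the bip39 crate and seals the metadata snapshot with XChaCha20-Poly1305 from the chacha20poly1305 crate. Taking the first 32 bytes of the seed as the key, the empty passphrase, and anyhow for error handling are illustrative shortcuts, not the project's actual key schedule.

```rust
use bip39::Mnemonic;
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Key, XChaCha20Poly1305,
};

/// Encrypt the versioned metadata snapshot before pushing it to the DHT.
fn encrypt_snapshot(phrase: &str, snapshot: &[u8]) -> anyhow::Result<Vec<u8>> {
    // 1. BIP39 phrase -> 64-byte seed (empty passphrase for illustration).
    let seed = Mnemonic::parse(phrase)
        .map_err(|e| anyhow::anyhow!("invalid recovery phrase: {e}"))?
        .to_seed("");

    // 2. First 32 bytes of the seed as the XChaCha20-Poly1305 key. A real design
    //    would run this through a proper KDF with a domain-separation label.
    let cipher = XChaCha20Poly1305::new(Key::from_slice(&seed[..32]));

    // 3. Random 24-byte nonce, prepended to the ciphertext so recovery only
    //    needs the phrase and the blob fetched from the DHT.
    let nonce = XChaCha20Poly1305::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, snapshot)
        .map_err(|e| anyhow::anyhow!("encryption failed: {e}"))?;

    Ok([nonce.as_slice(), ciphertext.as_slice()].concat())
}
```

Recovery is the mirror image: re-derive the key from the phrase, fetch the versioned blob from the DHT, split off the 24-byte nonce, and decrypt.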
The Optimistic Trust Model
If there is no global policeman, what prevents a Sybil attacker from spinning up 1,000 nodes, accepting data, and then deleting it?
The architecture relies on an Optimistic Trust Model, which operates on Gradual Trust Ramps:
The Grace Period: A new peer is optimistically trusted with a small quota (e.g., 1GB) to prove they are real.
Statistical Verification: Each node acts as a Sentinel. Every X minutes, it issues a cryptographic challenge (Spot Check) to one of its chunk holders.
Banning: Sybil resistance is local and probabilistic. If a peer fails a check or churns too aggressively, it is blacklisted locally. The attacker gains no durable advantage, because their nodes never graduate from the Grace Period.
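A minimal sketch of one Spot Check round, assuming the challenge is "hash this chunk together with a fresh nonce" (SHA-256 and the message layout are stand-ins; note that to verify, the owner needs either a precomputed expected answer or local access to the chunk or its Merkle leaf):

```rust
use rand::RngCore;
use sha2::{Digest, Sha256};

struct Challenge {
    chunk_id: [u8; 32],
    nonce: [u8; 32],
}

/// Owner side: pick a chunk the peer is supposed to hold and a fresh random nonce.
fn issue_challenge(chunk_id: [u8; 32]) -> Challenge {
    let mut nonce = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut nonce);
    Challenge { chunk_id, nonce }
}

/// Holder side: prove possession by hashing the actual chunk bytes with the nonce.
fn answer_challenge(challenge: &Challenge, chunk_bytes: &[u8]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(challenge.chunk_id);
    h.update(challenge.nonce);
    h.update(chunk_bytes);
    h.finalize().into()
}

/// Owner side: compare against the expected answer; a mismatch or a timeout
/// feeds the local blacklist rather than any global ledger.
fn verify(expected: &[u8; 32], response: &[u8; 32]) -> bool {
    expected == response
}
```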
Merkle Integrity & Erasure Coding
The proposed approach does not pay peers to be honest; it tries to make their dishonesty irrelevant as the network grows.
Erasure Coding: Data is split using Reed-Solomon (e.g. 4+2). The network can tolerate the loss of 33% of peers (2 out of 6) for any specific chunk without losing a single byte.
Merkle Trees: To handle the metadata of millions of chunks efficiently, chunk manifests are compacted into Merkle trees to keep lookup overhead small. This allows the owner to audit a peer holding gigabytes of shards by requesting a tiny hash proof for a single random block, preventing bandwidth exhaustion during audits.
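For completeness, here is what the 4+2 coding path looks like with the reed-solomon-erasure crate (the crate choice and the tiny shard sizes are illustrative; the point is the tolerance claim above: any 2 of the 6 shards can vanish and the chunk still reconstructs):

```rust
use reed_solomon_erasure::galois_8::ReedSolomon;

fn demo() -> Result<(), reed_solomon_erasure::Error> {
    let r = ReedSolomon::new(4, 2)?; // 4 data shards, 2 parity shards

    // Four equally sized data shards plus two (initially zeroed) parity shards.
    let mut shards: Vec<Vec<u8>> = vec![
        vec![1, 2, 3, 4],
        vec![5, 6, 7, 8],
        vec![9, 10, 11, 12],
        vec![13, 14, 15, 16],
        vec![0; 4],
        vec![0; 4],
    ];
    r.encode(&mut shards)?;

    // Simulate two of the six holders disappearing (any two will do).
    let mut received: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
    received[1] = None;
    received[4] = None;

    // Rebuild the missing shards from the surviving four.
    r.reconstruct(&mut received)?;
    assert!(received.iter().all(|s| s.is_some()));
    Ok(())
}
```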
Request for Comments
This draft is a reference design for a potential solution. However, by removing the global ledger, the system accepts different failure modes than blockchain architectures. I'd appreciate feedback on the topics below (and anything else that comes to mind!):
Churn-Heavy Environments: How does the Grace Period logic hold up if the majority of peers are not online 24×7?
Correlated Failures: Does IP diversity (restricting peers to different subnets) provide enough protection against regional outages?
Cold Storage Rot: Without financial incentives, will Local Debt be enough to keep nodes online for years?
Ramp-Up Phase: This type of network would be quite vulnerable early on, before there are enough honest peers to keep it healthy. Users also don't store terabytes immediately, and upload/download bandwidth is limited, so a mechanism to smooth out this bootstrapping period would be needed.