Backing up your data shouldn’t require trusting a blockchain economy.
Yet most decentralized storage systems are built for speculation, not recovery. They require wallets, gas fees, global consensus, and financial incentives. For self-hosted users who want an offsite backup for family photos or personal documents, this is unnecessary complexity.
Symbion is an attempt to design something simpler.
Built using Rust, Symbion is a serverless peer-to-peer storage network focused on one goal: reliable, redundant backups on the open internet, without tokens, payments, or trusted operators.
It achieves this by combining erasure coding, cryptographic audits, and a deliberately constrained economic model. There are no coins, no markets, and no global ledger; only local verification by the client.
Threat Model
Before diving into mechanics, it’s important to define what Symbion is not designed to defend against.
Symbion does not aim to provide:
- Censorship-resistant publishing
- Strong anonymity against global adversaries
- Protection from nation-state attacks
If you need those properties, tools like Tor, Freenet, or mixnets are better suited.
Symbion is designed for reliable backups under a rational threat model: users who attempt to freeload, cheat storage accounting, or disrupt the network cheaply. The goal is not perfect security, but to make abuse economically irrational and operationally expensive.
1. The Physics
At the core of Symbion is Reed–Solomon erasure coding configured for wide striping.
Every file is split into 1 MB chunks, and each chunk is expanded into 14 shards:
- 8 data shards
- 6 parity shards
This allows perfect reconstruction as long as any 8 of the 14 shards survive, meaning the system tolerates the loss of up to 6 shards, roughly 43%.
This is the fundamental “physics” of the network. Nodes will go offline, disks will fail, and laptops will disappear without warning. Wide striping absorbs that churn without coordination, retries, or operator intervention.
Note: Symbion optimizes primarily for random loss and churn. Correlated physical failures (such as regional outages) are acknowledged but not fully mitigated in the current design. This is an intentional constraint reflecting a tradeoff between resilience and fairness in peer placement.
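To make the arithmetic concrete, here is a minimal sketch of the 8+6 split using the `reed-solomon-erasure` crate. Whether Symbion uses this particular crate, and how it sizes and pads shards, are assumptions for illustration:

```rust
use reed_solomon_erasure::galois_8::ReedSolomon;

/// Expand one 1 MB chunk into 14 shards: 8 data + 6 parity.
/// (Illustrative sketch; not Symbion's actual encoding path.)
fn encode_chunk(chunk: &[u8]) -> Vec<Vec<u8>> {
    let r = ReedSolomon::new(8, 6).unwrap();
    let shard_len = ((chunk.len() + 7) / 8).max(1); // bytes per shard, rounded up
    // The first 8 shards carry the (zero-padded) data; the last 6 start empty.
    let mut shards: Vec<Vec<u8>> = chunk
        .chunks(shard_len)
        .map(|s| {
            let mut shard = s.to_vec();
            shard.resize(shard_len, 0);
            shard
        })
        .collect();
    shards.resize(14, vec![0u8; shard_len]);
    r.encode(&mut shards).unwrap(); // computes the 6 parity shards in place
    shards
}
```

Reconstruction is the mirror image: hand the decoder any 8 surviving shards and it rebuilds the rest, as shown in the repair sketch in section 5.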
2. The Economy
Symbion solves the free-rider problem using Deferred Reciprocity, rather than payments or tokens.
The rules are simple and enforced locally:
- The 2:1 Rule: To upload 1 GB of personal data, a node must prove it is hosting 2 GB for the network.
- The 60-Day Sliding Window: Contribution credit decays over time. Hosting briefly and then disappearing does not grant permanent upload capacity.
This ensures the swarm remains over-collateralized in storage while requiring ongoing participation, not upfront payment.
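A minimal sketch of how a client might enforce both rules locally; the types and names here are hypothetical, not Symbion's actual API:

```rust
use std::time::{Duration, SystemTime};

const WINDOW: Duration = Duration::from_secs(60 * 24 * 60 * 60); // 60-day sliding window
const RATIO: u64 = 2; // host 2 GB to upload 1 GB

/// Bytes hosted for the network, confirmed by an audit at `verified_at`.
struct HostingProof {
    bytes: u64,
    verified_at: SystemTime,
}

/// Only hosting verified inside the sliding window counts as contribution.
fn hosting_credit(proofs: &[HostingProof], now: SystemTime) -> u64 {
    proofs
        .iter()
        .filter(|p| {
            now.duration_since(p.verified_at)
                .map_or(false, |age| age <= WINDOW)
        })
        .map(|p| p.bytes)
        .sum()
}

/// The 2:1 rule: a node may upload half of what it verifiably hosts.
fn upload_allowance(proofs: &[HostingProof], now: SystemTime) -> u64 {
    hosting_credit(proofs, now) / RATIO
}
```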
Trusted peers (such as your own secondary devices) can still bypass public limits via explicit trust keys, but the default public network assumes no social trust.
3. Sybil Resistance
Creating thousands of fake identities must carry a real cost. Symbion uses two complementary mechanisms:
A. Identity Cost
Before joining the swarm, each node must solve a CPU-bound Proof-of-Work puzzle.
This is not a mining economy and produces no rewards. It exists solely to attach a measurable real-world cost to identity creation, limiting large-scale Sybil attacks. The puzzle difficulty is intentionally bounded to identity creation and does not scale with storage usage or network activity.
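The puzzle format is not specified here, so the sketch below uses a hashcash-style construction: find a nonce whose hash, combined with the node's public key, has enough leading zero bits. The difficulty constant is illustrative.

```rust
use sha2::{Digest, Sha256};

/// Fixed difficulty: the cost is bounded and tied only to identity creation.
/// (Illustrative value, not Symbion's actual parameter.)
const DIFFICULTY_BITS: u32 = 20;

fn puzzle_hash(node_public_key: &[u8], nonce: u64) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(node_public_key);
    hasher.update(nonce.to_le_bytes());
    hasher.finalize().into()
}

fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for &byte in hash {
        if byte == 0 {
            bits += 8;
        } else {
            bits += byte.leading_zeros();
            break;
        }
    }
    bits
}

/// Joining node: brute-force a nonce that meets the target.
fn solve_identity_puzzle(node_public_key: &[u8]) -> u64 {
    (0u64..)
        .find(|&nonce| leading_zero_bits(&puzzle_hash(node_public_key, nonce)) >= DIFFICULTY_BITS)
        .expect("search space exhausted")
}

/// Any peer: verify the proof with a single hash.
fn verify_identity_puzzle(node_public_key: &[u8], nonce: u64) -> bool {
    leading_zero_bits(&puzzle_hash(node_public_key, nonce)) >= DIFFICULTY_BITS
}
```

Solving is deliberately expensive; verification costs a single hash, so existing peers can check newcomers cheaply.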
B. Slow Start
New nodes begin in a probationary state:
- They are audited more aggressively
- Their storage credit is capped
After passing a threshold of verified audits (e.g., ~20 successful checks over ~7 days), a node “graduates” to a mature state with full participation rights.
Trust is not granted; it is accumulated. A mature host that starts misbehaving can and will be downgraded or banned by clients.
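One way a client might track this lifecycle is sketched below. The thresholds mirror the numbers above; the exact demotion policy is an assumption.

```rust
use std::time::{Duration, SystemTime};

const GRADUATION_AUDITS: u32 = 20; // ~20 successful checks...
const GRADUATION_PERIOD: Duration = Duration::from_secs(7 * 24 * 60 * 60); // ...over ~7 days

enum PeerState {
    Probationary { passed_audits: u32, since: SystemTime },
    Mature,
    Banned,
}

impl PeerState {
    /// Update a peer's standing after an audit. Failing an audit demotes
    /// even a mature host (one simple policy; details are illustrative).
    fn record_audit(&mut self, passed: bool, now: SystemTime) {
        match self {
            PeerState::Probationary { passed_audits, since } => {
                if !passed {
                    *self = PeerState::Banned;
                } else {
                    *passed_audits += 1;
                    let elapsed = now.duration_since(*since).unwrap_or_default();
                    if *passed_audits >= GRADUATION_AUDITS && elapsed >= GRADUATION_PERIOD {
                        *self = PeerState::Mature; // graduation: full participation
                    }
                }
            }
            PeerState::Mature if !passed => *self = PeerState::Banned,
            _ => {}
        }
    }
}
```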
4. Hardened Privacy
NOTE: The code is still in ALPHA and has not been audited. It is strongly recommended to encrypt your files yourself before putting anything on the network.
- Zero-Knowledge Hosting: All shards are encrypted client-side using XChaCha20-Poly1305 before leaving the owner’s machine. Hosts store opaque ciphertext with no knowledge of its contents.
- Deterministic Recovery: Using a BIP-39 mnemonic, users can deterministically derive their master keys and locate their encrypted metadata (“Cloud Header”) on the DHT. Losing a machine does not mean losing access to data.
At no point does the network require plaintext visibility, shared secrets, or trusted custodians.
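Both properties map onto off-the-shelf primitives. A sketch assuming the RustCrypto `chacha20poly1305` crate and the `bip39` crate; a real design would derive per-shard subkeys through a proper KDF rather than slicing the seed directly:

```rust
use bip39::Mnemonic;
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Key, XChaCha20Poly1305,
};

/// Derive a master key from a BIP-39 phrase, then encrypt one shard
/// client-side. Hosts only ever see the resulting ciphertext.
fn encrypt_shard(
    phrase: &str,
    shard: &[u8],
) -> Result<(Vec<u8>, Vec<u8>), Box<dyn std::error::Error>> {
    let mnemonic = Mnemonic::parse(phrase)?;
    let seed = mnemonic.to_seed(""); // 64 bytes, deterministic from the phrase
    // NOTE: taking the first 32 seed bytes directly is a simplification
    // for this sketch; derive per-shard subkeys with a KDF in practice.
    let key = Key::from_slice(&seed[..32]);
    let cipher = XChaCha20Poly1305::new(key);
    // XChaCha20's 24-byte nonce is large enough to pick randomly per shard.
    let nonce = XChaCha20Poly1305::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, shard)
        .map_err(|e| format!("encryption failed: {e}"))?;
    Ok((nonce.to_vec(), ciphertext))
}
```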
5. Self-Healing
Every node, when acting as a client, runs a background process called the Sentinel.
Sentinels perform risk-weighted cryptographic audits, verifying that peers actually possess the shards they claim to host. Audits take the form of challenge–response proofs that require a host to demonstrate possession of specific shard data without revealing its contents.
There is no central auditor; all checks are locally verified and independently reproducible.
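One common construction for such proofs: at upload time, while the client still holds the shard, it precomputes salted digests it can check later without retaining the data. This is a sketch; Symbion’s actual proof format may differ.

```rust
use sha2::{Digest, Sha256};

/// A single-use challenge the client prepares at upload time.
struct Challenge {
    salt: [u8; 32],
    expected: [u8; 32],
}

fn salted_digest(salt: &[u8; 32], shard: &[u8]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(salt);
    hasher.update(shard);
    hasher.finalize().into()
}

/// Client, at upload time: precompute while the shard is still in hand.
fn precompute(salt: [u8; 32], shard: &[u8]) -> Challenge {
    Challenge { salt, expected: salted_digest(&salt, shard) }
}

/// Host, at audit time: prove possession by hashing the fresh salt with
/// the stored ciphertext. The response reveals nothing about the contents.
fn respond(salt: &[u8; 32], stored_shard: &[u8]) -> [u8; 32] {
    salted_digest(salt, stored_shard)
}

/// Client, locally: compare against the digest computed at upload time.
fn verify(challenge: &Challenge, response: &[u8; 32]) -> bool {
    &challenge.expected == response
}
```

Each salt is single-use, so a host cannot cache old answers; it must keep the full shard to answer future challenges.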
When file health drops below a safety threshold, the Repair process activates:
- Surviving shards are fetched
- Data is reconstructed in a secure memory space
- New shards are re-seeded to healthy peers
This allows the client to heal its files under churn, without user involvement.
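A sketch of the repair trigger, reusing the `reed-solomon-erasure` decoder from section 1 (the threshold value is illustrative; the design only specifies “a safety threshold”):

```rust
use reed_solomon_erasure::galois_8::ReedSolomon;

/// Repair before health falls to the bare minimum of 8 surviving shards.
/// (Illustrative margin, not Symbion's actual threshold.)
const REPAIR_THRESHOLD: usize = 10;

/// `None` marks a shard that is lost or failed its audit.
fn repair_if_needed(
    shards: &mut [Option<Vec<u8>>],
) -> Result<(), reed_solomon_erasure::Error> {
    let alive = shards.iter().filter(|s| s.is_some()).count();
    if alive >= REPAIR_THRESHOLD {
        return Ok(()); // still healthy, nothing to do
    }
    let r = ReedSolomon::new(8, 6)?;
    r.reconstruct(shards)?; // any 8 survivors rebuild the missing shards
    // ...the regenerated shards would then be re-seeded to healthy peers...
    Ok(())
}
```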
6. Proportional Retention
Not every node can be online continuously. Symbion accounts for this using Proportional Retention:
- Nodes that remain online longer and pass audits consistently earn retention credit
- Mature nodes can accumulate up to 60 days of grace period, during which the data they stored as a client remains on the network before being removed
- Temporary outages do not immediately trigger data eviction
Reliability is treated as a measurable contribution, not a binary state.
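A sketch of how a client-side retention ledger might work; the accrual policy (grace grows with audited uptime, up to the 60-day cap) is an assumption:

```rust
use std::time::Duration;

/// Maximum grace a mature node can bank: 60 days.
const MAX_GRACE: Duration = Duration::from_secs(60 * 24 * 60 * 60);

struct RetentionLedger {
    grace: Duration,
}

impl RetentionLedger {
    /// Verified uptime extends the grace period, up to the cap.
    fn record_uptime(&mut self, audited_online: Duration) {
        self.grace = (self.grace + audited_online).min(MAX_GRACE);
    }

    /// An outage consumes grace instead of triggering immediate eviction.
    /// Returns true only once the grace period is fully exhausted.
    fn record_outage(&mut self, offline: Duration) -> bool {
        self.grace = self.grace.saturating_sub(offline);
        self.grace.is_zero()
    }
}
```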
Conclusion
The transition from concept to implementation forced hard tradeoffs. Early ideas based on social trust gave way to sliding windows, audits, and explicit constraints. Notions of “optimistic cooperation” remain, but the design now assumes abuse will happen and enforces fairness explicitly.
The result is not a revolution in decentralized storage; it is something more modest and more useful: a system where backing up your data does not require speculation, permanent uptime, or full trust in strangers / cloud companies.
I’m still tweaking the code and plan to release an Alpha at some point. If you are interested in helping test it out, drop me a note!