The Secure Storage Trilemma

Backing up personal data should not require trusting a blockchain economy.

A short while back, I published a design for a decentralized storage network that deliberately avoided blockchains. The idea was simple. Most people who self-host backups do not need global consensus, token incentives, or smart contracts. They need recoverability, privacy, and reasonable guarantees that their data will not disappear.

The original proposal attempted to achieve something ambitious: client-side enforcement of encrypted storage without trusting the host and without sacrificing performance.

After deeper analysis (prompted largely by thoughtful Reddit comments), I no longer think it can work. Not because it is philosophically wrong, but because it collides with hard limits in modern cryptography.

The Goal

The system aimed to guarantee three properties at the same time:

  1. Enforcement
    Hosts should be cryptographically unable to store plaintext, even if the client is malicious.
  2. Privacy
    Hosts should never see user data or encryption keys.
  3. Performance
    Backups should run at normal disk and network speeds on consumer hardware.

At first glance, this feels achievable. In practice, these goals form a trilemma.

The Secure Storage Trilemma

        Enforcement
            ▲
            │
            │
Privacy ◄─── X ───► Performance

You can pick two. The center seems impossible to reach with today’s cryptography.

The Three Approaches

Approach A: Client Trust Only

Clients encrypt data locally and upload ciphertext. Hosts store whatever they receive. This was the original proposed design.

Failure:
There is no enforcement. A malicious client can upload plaintext or illegal content, and the host has no way to detect or reject it. I failed to catch this flaw in the original design, and it turns out to be a decisive one.
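The failure is easy to demonstrate. Below is a minimal sketch of Approach A; the XOR keystream is a deliberately insecure toy standing in for XChaCha20, and all names are hypothetical, not code from the original design:

```python
import hashlib
import os

def toy_stream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter keystream. NOT secure; stands in for XChaCha20."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, toy_stream(key, len(data))))

def host_store(storage: dict, blob_id: str, blob: bytes) -> None:
    # The host sees only opaque bytes. It cannot distinguish real
    # ciphertext from plaintext that a malicious client uploads as-is.
    storage[blob_id] = blob

storage = {}
host_store(storage, "honest", encrypt(os.urandom(32), b"family photos"))
host_store(storage, "malicious", b"raw plaintext, never encrypted")
# Both uploads are accepted; enforcement is impossible from the host's side.
```

The host's only move is a statistical heuristic such as an entropy check, and compressed plaintext defeats that easily.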

Approach B: Host Validation (Double Encryption)

Clients encrypt data twice. The host temporarily decrypts the outer layer to verify structure and compliance, then immediately discards it.

Failure:
This could work, but it is fragile. If a malicious client strips the inner encryption, the host momentarily processes raw, potentially illegal content in memory during verification.
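A sketch of the double-encryption flow makes the fragility visible. The XOR keystream again stands in for a real cipher, and every name here is hypothetical:

```python
import hashlib
import json
import os

def toy_stream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter keystream. NOT secure; illustration only."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, toy_stream(key, len(data))))

def client_upload(data: bytes, inner_key: bytes, outer_key: bytes) -> bytes:
    inner = xor_cipher(inner_key, data)          # layer the host never opens
    header = json.dumps({"v": 1, "len": len(inner)}).encode()
    return xor_cipher(outer_key, header + b"\n" + inner)

def host_verify_and_store(blob: bytes, outer_key: bytes) -> bytes:
    envelope = xor_cipher(outer_key, blob)       # strip the outer layer
    header, _, inner = envelope.partition(b"\n")
    if json.loads(header)["len"] != len(inner):
        raise ValueError("malformed envelope")
    # The weak point: if a malicious client skipped the inner layer,
    # `inner` is raw plaintext sitting in host memory at this exact line.
    return inner

inner_key, outer_key = os.urandom(32), os.urandom(32)
blob = client_upload(b"family photos", inner_key, outer_key)
stored = host_verify_and_store(blob, outer_key)
```

The host can verify structure, but structure is all it can verify; nothing proves the inner layer is actually ciphertext.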

Approach C: Zero-Knowledge Proofs

Clients generate cryptographic proofs that they encrypted the data correctly, without revealing the data itself. Hosts verify the proof before accepting storage.

Failure:
Zero-knowledge proofs work well for small, structured secrets such as passwords or transactions. Proving that gigabytes of bulk data were encrypted correctly requires orders of magnitude more computation than simply encrypting it. On consumer hardware, this turns background backups into CPU-saturating workloads.

Why This Is Not Just an Engineering Problem

The most tempting idea is to prove encryption correctness using modern zero-knowledge systems. This runs into a fundamental mismatch.

Bulk encryption algorithms like XChaCha20 operate on bits. Zero-knowledge circuits operate on arithmetic fields. To verify a bitwise cipher inside a ZK circuit, every word-level operation must be decomposed into bit-level field constraints, inflating a single cipher block into tens of thousands of them.

Proving correct encryption of a single 512 KB shard would take orders of magnitude longer than the encryption itself on consumer CPUs. Upload speeds collapse. This is why ZK works for blockchains and authentication, but not for backups.
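A back-of-envelope estimate makes the gap concrete. The per-operation constraint counts and throughput figures below are rough assumptions about a generic R1CS arithmetization of ChaCha20, not measurements:

```python
# Back-of-envelope: cost of proving ChaCha20 inside an R1CS circuit.
# All per-operation constraint counts and rates are assumptions.
SHARD_BYTES = 512 * 1024
BLOCKS = SHARD_BYTES // 64                 # ChaCha20 processes 64-byte blocks
QR_PER_BLOCK = 20 * 4                      # 20 rounds, 4 quarter-rounds each
ADD_CONSTRAINTS = 64                       # 32-bit add, with bit decomposition
XOR_CONSTRAINTS = 32                       # bitwise XOR of decomposed words
QR_CONSTRAINTS = 4 * ADD_CONSTRAINTS + 4 * XOR_CONSTRAINTS  # rotations ~free

constraints = BLOCKS * QR_PER_BLOCK * QR_CONSTRAINTS
prover_rate = 500_000                      # constraints/sec on a consumer CPU (assumed)
proving_seconds = constraints / prover_rate

encrypt_rate = 500 * 1024 * 1024           # bytes/sec for native ChaCha20 (assumed)
encrypt_seconds = SHARD_BYTES / encrypt_rate

print(f"{constraints / 1e6:.0f}M constraints, "
      f"~{proving_seconds / 60:.0f} min to prove vs "
      f"{encrypt_seconds * 1000:.0f} ms to encrypt")
```

Even with generous assumptions, proving sits five to six orders of magnitude behind encrypting, which is why no amount of engineering polish closes the gap.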

The Theoretical Fixes and Their Tradeoffs

There are ways to try to make this work, but each requires a compromise.

Using SNARK-friendly ciphers like Poseidon or Anemoi makes proofs faster, but introduces long-term cryptographic risk for cold storage, since these ciphers are young and far less battle-tested. And even then, I believe they would still not be fast enough.

Proving only a small portion of the data makes performance acceptable, but allows malicious clients to encrypt the proven portions, such as headers, while leaving the bulk of the data in plaintext.
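The numbers behind that failure are simple. Even if the host spot-checks random chunks rather than fixed ones, a client that leaves a small fraction in plaintext is likely to slip through; the parameters below are illustrative, not from the original design:

```python
def miss_probability(plaintext_fraction: float, samples: int) -> float:
    """Chance that `samples` uniform random chunk checks all land on
    properly encrypted chunks, missing the plaintext entirely."""
    return (1.0 - plaintext_fraction) ** samples

# A client leaves 1% of chunks in plaintext; the host spot-checks 30 chunks.
p = miss_probability(0.01, 30)
print(f"miss probability: {p:.2f}")   # ~0.74: the plaintext usually goes undetected
```

Driving the miss probability down requires checking so many chunks that the performance advantage of partial proving evaporates.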

Proving only knowledge of a key preserves privacy, but does nothing to stop plaintext from being uploaded and stored. None of these fully solve the enforcement problem.

The Conclusion

As far as my research goes, no solution today delivers enforcement, privacy, and performance simultaneously for decentralized bulk storage. You must trust the client, trust the host, or sacrifice performance. Trustless systems are not always safer; in this case, trustlessness becomes a liability.

What This Means for Symbion

Symbion will not pursue a public, trustless storage network.

Instead, it will focus on friend-to-friend and small-group backup models where trust exists and can be reasoned about socially, not cryptographically.

The original design remains intellectually interesting, but it is simply premature until better cryptographic tools come along. And if you, the reader, have a suggestion, I'd love to hear it!
