On whether a separate chain for data availability needs to exist.


Avail positions itself as a purpose-built data availability layer for the modular blockchain stack, with a roadmap spanning DA, a cross-chain coordination layer (Nexus), and a security unification mechanism (Fusion). Before examining the codebase, there is a prior question that the marketing does not address: does a dedicated data availability chain need to exist?

This analysis operates on two levels. The first is a source code review, based on direct inspection of Avail’s public repositories: avail, avail-core, avail-light, poly-multiproof, avail-srs, nexus-sdk, Henosis, and audits. The second is a thesis-level examination of whether external DA layers have durable product-market fit, given what Ethereum and other prominent chains are building natively.


1. The DA thesis

Data availability is a real problem. When a rollup executes transactions off-chain and posts only a proof to the settlement layer, someone needs to guarantee that the underlying transaction data was published and can be retrieved. Without this guarantee, users cannot verify the rollup’s state, prove fraud, or exit to the base layer. This is well-understood.

The question is not whether data availability matters. It is whether it needs to be a separate chain.

What DA actually requires

At a technical level, DA requires three things:

  1. Erasure coding: data is encoded with redundancy so that the original can be reconstructed from a subset of fragments. This is a 1960s result (Reed and Solomon). The blockchain-specific application uses polynomial evaluation over finite fields, but the mathematics is established information theory.

  2. Commitment schemes: a cryptographic binding between the encoded data and a compact proof. KZG (Kate-Zaverucha-Goldberg, 2010) provides constant-size commitments and proofs. The multi-opening optimization is from BDFG21 (Boneh, Drake, Fisch, Gabizon, 2021).

  3. Data availability sampling: light clients randomly sample cells from the encoded data to verify availability without downloading the full block. The formal cryptographic treatment appeared in 2023. The design builds on work by Al-Bassam, Sonnino, and Buterin (2018) and Dankrad Feist’s DAS construction for Ethereum.

None of these techniques were invented by any DA-specific project. They are established results from information theory and cryptography, applied to a blockchain context. Every project in this space (Celestia, Avail, EigenDA, and Ethereum itself) implements variations of the same primitives.
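To make the shared primitive concrete, here is a minimal, self-contained sketch of polynomial erasure coding over a toy prime field: k data symbols define a degree-(k-1) polynomial, 2k evaluations are published, and any k of them recover the rest. This is illustrative only, not code from any of these projects; production systems use large pairing-friendly fields and FFT-based codecs.

Rust
const P: u64 = 65_537; // 2^16 + 1, a toy Fermat prime; real systems use ~256-bit fields

// Modular exponentiation, used for inverses via Fermat's little theorem.
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

// Horner evaluation of the data polynomial at x (x must be < P).
fn eval(coeffs: &[u64], x: u64) -> u64 {
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % P)
}

// Lagrange interpolation: evaluate the unique polynomial through `points` at x.
fn interpolate(points: &[(u64, u64)], x: u64) -> u64 {
    let mut sum = 0;
    for (i, &(xi, yi)) in points.iter().enumerate() {
        let mut term = yi;
        for (j, &(xj, _)) in points.iter().enumerate() {
            if i != j {
                let num = (x + P - xj) % P;
                let den = (xi + P - xj) % P;
                term = term * num % P * pow_mod(den, P - 2) % P;
            }
        }
        sum = (sum + term) % P;
    }
    sum
}

fn main() {
    let data = [42u64, 7, 1_000, 123]; // k = 4 original symbols, as coefficients
    let n = 2 * data.len(); // 2x extension factor: publish 8 evaluations
    let shares: Vec<(u64, u64)> = (1..=n as u64).map(|x| (x, eval(&data, x))).collect();

    // Lose any half of the shares; the surviving k = 4 points still determine f.
    let surviving = &shares[4..];
    for &(x, y) in &shares[..4] {
        assert_eq!(interpolate(surviving, x), y); // every lost evaluation is recovered
    }
    println!("reconstructed all dropped shares from 50% of the encoding");
}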

Ethereum already has the same thing

This is not a future risk for external DA layers. It is the present.

Ethereum deployed PeerDAS on mainnet in December 2025 via the Fusaka upgrade. Within two weeks, two follow-on forks (BPO1 and BPO2) raised the blob target to 14 and the maximum to 21 blobs per block. PeerDAS uses erasure coding and data availability sampling, the same techniques Avail and Celestia use, backed by the full security of Ethereum’s validator set.

The progression:

  • EIP-4844 (proto-danksharding). Live since March 2024. Introduced blob transactions with KZG commitments. Median blob fees dropped to fractions of a cent.

  • PeerDAS (EIP-7594). Live since December 2025. Validators download roughly 12.5% of blob data via sampling rather than the full payload. Current target: 14 blobs per block, maximum 21. Reduces node bandwidth requirements by 70-80%.

  • Full danksharding. The long-term target. Approximately 16 MB of DA per block, with full DAS across the network.

Ethereum now offers expanding DA capacity with the same cryptographic techniques external DA layers use, backed by the strongest economic security in the ecosystem. The competitive landscape has already shifted.
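As a back-of-envelope check on what these parameters imply (assuming the standard 128 KB blob size and 12-second slots; rough arithmetic, not a measured figure):

$$14\ \tfrac{\text{blobs}}{\text{block}} \times 128\ \text{KB} \times \frac{86{,}400\ \text{s/day}}{12\ \text{s/block}} \approx 12.9\ \text{GB/day}$$

at the target rate, rising to roughly 19 GB/day at the 21-blob maximum.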

The security regression

The tradeoff that external DA layers require but rarely foreground: a rollup that posts data to Avail instead of Ethereum is no longer a rollup. It is a validium.

In a rollup, data is posted to the settlement layer (Ethereum). If the sequencer goes offline or acts maliciously, any honest node can reconstruct the state from Ethereum’s DA and force correct execution. The security inherits Ethereum’s validator set and economic stake.

In a validium, data is posted to an external layer. If that layer’s validators collude to withhold data, the rollup’s users cannot prove fraud (optimistic rollup) or reconstruct state (ZK rollup). Users' funds can be frozen. The security depends on the external DA layer’s validator set and economic stake.

  Provider                Economic security
  Ethereum                $400B+ market cap
  EigenDA (restaking)     $1.86B TVS
  Avail                   $44.59M TVS

This is not a theoretical distinction. A developer choosing Avail for DA is choosing a security model backed by three orders of magnitude less economic stake than Ethereum. The cost savings are real, but so is the security regression.

2. The market evidence

The data is publicly available. It tells a story that the marketing materials for these projects do not.

Celestia

Celestia launched mainnet in October 2023 and holds roughly 50% market share among external DA layers. As of early 2026:

  • Total data posted: ~3,600 GB, of which ~2,641 GB from rollups

  • Daily data volume: approximately 2.5 GB

  • Active rollups: 41

  • Total fees collected: ~394,266 TIA

  • Cost: approximately $0.07 per MB

These numbers are real growth from near-zero in early 2024. But they also reveal concentration: Eclipse alone accounts for roughly half of all rollup data posted. And 2.5 GB/day across 41 rollups is modest. A single high-throughput Ethereum L2 (Base, Arbitrum) generates comparable volumes posting to Ethereum’s native blob space.

Avail

Avail launched mainnet in July 2024. According to L2BEAT data as of February 2026:

  • Capacity utilization: 0.05% of available bandwidth

  • Daily data posted: 8.74 MiB

  • From a single application (Lens Protocol): 94.4%

  • Validators: 105

  • Total value secured: $44.59M

Nine megabytes per day. One application accounting for nearly all of it. This is not early-stage traction with a long tail of users. It is effectively a single-client network operating at a fraction of a percent of its capacity.

EigenDA

EigenDA takes a different approach, securing DA through Ethereum restaking:

  • Capacity: 100 MB/s (v2, launched July 2025)

  • Actual throughput: ~1.64 MiB/s

  • Past-day data posted: ~138 GiB

  • A single user (MegaETH) accounts for 99.57% of data posted

  • Total value secured: ~$1.86 billion via 99 operators

EigenDA has the strongest economic security among external DA layers and meaningful throughput from MegaETH. But the client concentration is extreme. Remove one user and the network is nearly idle.

Ethereum blobs

For comparison, Ethereum’s native blob market since PeerDAS went live:

  • Post-Fusaka blob target: 14 per block, maximum 21

  • Median blob fee: fractions of a cent

  • Used by all major L2s: Arbitrum, Base, Optimism, zkSync, Starknet

  • Security: backed by the full Ethereum validator set

Ethereum blobs are so cheap that cost alone is a diminishing justification for external DA. The remaining argument is throughput: Ethereum’s DA capacity may eventually become a bottleneck. But with PeerDAS already live and further scaling planned, the window for that argument is narrowing.

3. The commoditization problem

DA is, at its core, temporary storage of encoded data with cryptographic proofs. There is no execution environment, no smart contract composability, no application-layer network effect. The product is bits stored for a bounded duration.

This makes DA a commodity.

The market is not yet convinced it needs another supplier.

DA competitive landscape: converging toward commodity

                   Ethereum (PeerDAS)       Celestia                   Avail
  Security         $400B+ market cap        $4.5B market cap           $44.59M TVS
  Utilization      All major L2s            41 rollups, ~2.5 GB/day    0.05%, 8.74 MiB/day
  Cost             Fractions of a cent      Low (excess capacity)      Low (excess capacity)
  Concentration    Diversified across L2s   Eclipse: ~50%              Lens: 94.4%

The competitive dimensions are converging. When supply exceeds demand across every provider, when prices converge toward zero, and when the highest-security provider (Ethereum) has already deployed native capacity using the same techniques, the structural position of a standalone DA chain is precarious.

4. The dependency on the rollup thesis

Every argument for external DA layers rests on a prior assumption: that rollups are the dominant scaling paradigm for blockchain. If rollups generate massive amounts of data, and Ethereum’s native DA cannot absorb it all, then external DA has a market. This is the modular thesis.

But the modular thesis is not a settled question. It is a bet.

Monolithic chains are scaling

The alternative to modular architecture is monolithic scaling: executing transactions, achieving consensus, and handling data availability all within a single chain. This approach has produced results that the modular thesis did not predict:

  • Solana has demonstrated over 100,000 TPS on mainnet (August 2025), with sustained production throughput of 3,500-3,700 TPS. Its Turbine protocol handles data propagation natively using erasure coding and hierarchical shred distribution. No external DA layer is needed or used.

  • Sui has achieved 297,000 TPS in testing with sub-second finality, processing 65.8 million transactions in a single day through its object-centric parallel execution model.

  • Monad and other emerging chains are building parallel execution environments targeting high throughput without separating DA into an external layer.

These chains do not use rollups. They do not need external DA. They handle execution and data availability as an integrated system. If the market matures toward this architecture (as a meaningful share of DeFi volume, gaming, and consumer activity already has), the entire addressable market for external DA layers contracts accordingly.

The sequencer centralization problem

Even within the rollup paradigm, the architecture has structural problems that external DA does not solve. Most major rollups today operate with centralized sequencers. The sequencer controls transaction ordering, can censor transactions, and extracts MEV. Decentralized sequencer roadmaps exist (Optimism, Arbitrum) but timelines remain distant.

A rollup posting data to an external DA layer while running a centralized sequencer has a contradictory trust model: the DA is decentralized, but the entity deciding what data gets posted is a single party. The DA guarantee is only as meaningful as the data selection is honest.

The dependency chain

Avail’s thesis requires every link in this chain to hold:

  1. Rollups must become the dominant scaling model. If monolithic chains capture the majority of high-value activity, DA demand from rollups is structurally limited.

  2. Rollup DA demand must exceed Ethereum’s native capacity. If PeerDAS and full danksharding provide sufficient throughput for rollup needs, there is no overflow market for external DA.

  3. The cost-security tradeoff must favor external DA. Developers must choose the cheaper, less secure option for enough use cases to sustain a standalone chain’s economics.

  4. DA alone must generate sufficient value, or Avail must successfully build Nexus and Fusion to supplement it.

If any of these assumptions fails, the thesis collapses. The first assumption, that rollups are the inevitable end state, is the one most aggressively challenged by the performance of monolithic chains.

This does not mean the rollup thesis is wrong. It means that building an entire chain whose value depends on it being right is a concentrated bet. The market data so far (0.05% utilization, 8.74 MiB/day, single-client dominance) does not yet validate that bet.

5. Origin and composition

Avail began as an internal research initiative at Polygon in late 2020, co-created by Anurag Arjun (one of Polygon’s three co-founders) and Prabal Banerjee. In March 2023, Polygon spun Avail off as a separate entity. In February 2024, Avail raised $27 million in seed funding led by Founders Fund and Dragonfly.

Built on Substrate

The node is built on Polkadot SDK (Substrate), a blockchain framework designed for building application-specific chains. This is a reasonable engineering choice; Substrate exists for this purpose. The dependency composition is typical: the vast majority of crates come from the Polkadot SDK fork, with Avail-specific crates (avail-core, kate, kate-recovery, da-control) providing the DA customizations. Consensus is unmodified BABE + GRANDPA, standard for Substrate chains.

The DA-specific work lives in three custom pallets (DA control, Mandate, Vector) and a patched frame-system that embeds Kate commitments into block headers during finalization. The DA control pallet manages application IDs, data submission, and block dimension proposals. This is a focused customization on top of a mature framework, which is how Substrate is intended to be used.
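As a structural sketch of what embedding commitments in the header means, consider the following simplified shape (hypothetical types; the actual versioned header extension in avail-core carries more metadata than this):

Rust
// Hypothetical, simplified sketch; not Avail's actual type definitions.
// The point is structural: KZG commitments travel inside the header, so a
// light client holding only finalized headers can verify sampled cells
// without ever downloading a block body.
struct DaExtension {
    rows: u16,                      // dimensions of the extended data matrix
    cols: u16,
    commitments: Vec<[u8; 48]>,     // one 48-byte KZG commitment per extended row
    data_root: [u8; 32],            // Merkle root binding the raw app submissions
}

struct ExtendedHeader {
    parent_hash: [u8; 32],
    number: u32,
    state_root: [u8; 32],
    extrinsics_root: [u8; 32],
    extension: DaExtension,         // the Avail-specific addition
}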

Erasure coding provenance

The erasure coding reconstruction logic lives in avail-core/kate/recovery/src/com.rs. The file contains a comment attributing the core reconstruction module to a public GitHub gist:

Rust
// This module is taken from https://gist.github.com/itzmeanjan/4acf9338d9233e79cfbee5d311e7a0b4
// which I wrote few months back when exploring polynomial based erasure coding technique !

The linked gist, authored by itzmeanjan (Anjan Roy), contains a standalone Rust implementation of polynomial-based erasure coding. To their credit, the Avail team left this attribution comment intact.

The reconstruction algorithm follows a standard approach:

  1. Collect available evaluations and their indices

  2. Build a zero polynomial from missing evaluation points

  3. Apply IFFT, shift, FFT, divide, IFFT, unshift

  4. Extract original data from the reconstructed polynomial

The modifications are integration-level: type conversion to ArkScalar and ArkEvaluationDomain, Result return types instead of panics, and a reconstruct_column wrapper for Avail’s 2D grid structure. The algorithm itself is unchanged. The FFT operations come from Arkworks, which is a cryptographic primitives library designed for exactly this kind of work.
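In equation form, the standard technique these steps implement is the following (a sketch; let $D = \{\omega^0, \dots, \omega^{n-1}\}$ be the evaluation domain and $M$ the set of missing indices):

$$Z_M(x) = \prod_{i \in M} \left(x - \omega^i\right), \qquad (f \cdot Z_M)(\omega^i) = \begin{cases} f(\omega^i)\, Z_M(\omega^i) & i \notin M \\ 0 & i \in M \end{cases}$$

Because $\deg f < n/2$ and $|M| \le n/2$, the product $f \cdot Z_M$ has degree below $n$ and is fully determined by its values on $D$. One IFFT interpolates it, and dividing by $Z_M$ on a shifted coset (where $Z_M$ has no zeros) recovers $f$; the shift and unshift steps in the list above exist precisely to avoid dividing by zero at the missing points.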

The system enforces a 50% reconstruction threshold:

Rust
ensure!(
    column.len() >= unflatten_rows.len() / 2,
    Error::InvalidColumn
);

This means the original data can be recovered from any 50% of the erasure-coded cells, which is the information-theoretic minimum for a 2x extension factor.

The question is not about attribution etiquette (the comment is there) but about how central this borrowed code is to a system marketed as "purpose-built." The reconstruction algorithm is the core of any erasure-coded DA system.

KZG commitments: poly-multiproof

The poly-multiproof library implements the BDFG21 scheme (Boneh, Drake, Fisch, Gabizon 2021) for efficient polynomial commitment opening at multiple points. This builds on the original Kate-Zaverucha-Goldberg 2010 KZG protocol.

The library is built on:

  • Arkworks for elliptic curve arithmetic over BLS12-381 and finite field operations

  • BLST for optimized multi-scalar multiplication

Implementing a published paper with standard cryptographic libraries is how all production cryptography is built. Nobody re-derives elliptic curve arithmetic from scratch. The poly-multiproof library is proper engineering: it takes a well-defined mathematical construction and implements it using the tools designed for that purpose.
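For reference, the single-point relation that BDFG21 generalizes, in the notation of the original KZG construction (standard textbook form, not code from the repository): the commitment is $C = [f(\tau)]_1$ for the setup secret $\tau$, and an opening at $z$ is

$$\pi_z = \left[\frac{f(\tau) - f(z)}{\tau - z}\right]_1, \qquad e\big(C - [f(z)]_1,\ [1]_2\big) = e\big(\pi_z,\ [\tau - z]_2\big).$$

BDFG21's contribution is batching: one quotient against the vanishing polynomial of the whole opening set replaces a quotient per point, which is what makes opening many cells of a data matrix economical.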

Trusted setup: Filecoin’s Powers of Tau

Avail does not run its own trusted setup ceremony. It extracts a Structured Reference String (SRS) from Filecoin’s Powers of Tau ceremony, specifically the challenge_19 file. From the avail-srs repository:

For Avail DA, we needed to have one publicly verifiable reference string, which can be used for constructing & verifying KZG polynomial commitment proofs, so we decided to make use of Filecoin’s Powers of Tau, which also uses BLS12-381 curve.

The extraction code is described as a "slightly modified" version of arielgabizon’s powersoftau repository. The test cases are taken from dusk-network’s PLONK implementation. Avail extracts 1,024 parameters, which constrains the maximum polynomial degree and therefore the block dimensions.

Reusing a public trusted setup ceremony is responsible practice. Running a separate ceremony introduces new trust assumptions; reusing an established one with broad participation is the better choice.
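Concretely, the extracted SRS is the first 1,024 powers of the ceremony secret $\tau$ in $G_1$:

$$\mathrm{SRS} = \big([\tau^0]_1, [\tau^1]_1, \ldots, [\tau^{1023}]_1\big)$$

A commitment $[f(\tau)]_1$ can only be formed from these powers if $\deg f \le 1023$, which is what ties the 1,024-parameter setup to the MaxBlockCols(1024) constant shown in section 11.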

Light client and DAS

The avail-light client implements Data Availability Sampling using libp2p with Kademlia DHT for peer-to-peer cell distribution. There is no reason to reimplement a networking stack for this purpose; libp2p is the standard choice for distributed applications.

The sampling logic follows the established DAS confidence model. Cells are selected uniformly at random across the block matrix, and the confidence that data is available grows with each successfully retrieved cell:

Rust
fn confidence_from_cells(count: u32) -> f64 {
    // Each successfully sampled cell halves the chance that withheld data
    // goes undetected, so confidence approaches 100% exponentially.
    100.0 * (1.0 - (0.5_f64).powi(count as i32))
}

The formula is a direct application of the probability model from DAS research (Al-Bassam, Sonnino, and Buterin 2018, and subsequent work by Dankrad Feist). The cell fetching strategy (DHT-first with RPC fallback), confidence tracking, and configurable verification modes are Avail’s implementation decisions.
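A quick usage sketch of the formula (assuming the standard threat model, in which an adversary must withhold at least half the extended data, so each sample independently detects withholding with probability at least 1/2):

Rust
fn main() {
    // Confidence after sampling k cells: 100 * (1 - 0.5^k).
    for k in [1u32, 4, 8, 16] {
        println!("{:2} cells -> {:.4}% confidence", k, confidence_from_cells(k));
    }
    // Prints 50.0000%, 93.7500%, 99.6094%, 99.9985%: a handful of samples
    // per light client suffices, which is the whole economic point of DAS.
}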

Summary

Using established libraries and frameworks is normal engineering. The same cryptographic techniques Avail uses (KZG + erasure coding + DAS) are precisely what Ethereum has deployed natively via PeerDAS since December 2025. The differentiation is not in the mathematics, which is identical. The differentiation would need to come from somewhere else.

6. The "validity proofs" question

One area where the marketing departs from technical precision: Avail describes its KZG commitments as "validity proofs inspired by zero-knowledge technology."

A KZG commitment, in the context Avail uses it, proves that data was correctly encoded in a polynomial and committed to by the block producer. This is a data availability guarantee: the data exists and matches the polynomial structure.

A validity proof in the ZK-rollup sense proves that a computation was executed correctly, that a state transition followed the rules of the system. This is a computational integrity guarantee.

These are fundamentally different cryptographic properties. Avail provides the former. The terminology "validity proofs" borrows from the ZK-rollup space and could mislead developers evaluating the security model they are building on. Data commitments and validity proofs serve different purposes and carry different trust assumptions.
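Stated informally as proof statements, the distinction is (a sketch):

$$\text{Data commitment:}\quad \exists\, f,\ \deg f < k:\ C = [f(\tau)]_1 \ \wedge\ f(z_i) = y_i \ \text{for each sampled cell } (z_i, y_i)$$

$$\text{Validity proof:}\quad S' = \mathrm{STF}(S, \mathrm{txs}) \ \text{for a public state transition function } \mathrm{STF}$$

The first says nothing about whether the committed data encodes a valid state transition; the second says nothing about whether the data needed to reconstruct $S'$ was ever published. A rollup's security argument needs both, from different mechanisms.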

7. Nexus and the pivot beyond DA

The existence of Nexus and Fusion in Avail’s roadmap is itself evidence that the team recognizes DA alone may not sustain the project. If DA had clear, standalone product-market fit, the roadmap would focus on scaling DA. Instead, it pivots toward:

  • Nexus, a "ZK coordination rollup" for cross-chain interoperability

  • Fusion, a multi-asset restaking security layer

Nexus: current state

The public nexus-sdk repository is a TypeScript client SDK that submits intents to a Cosmos-based backend. It does not contain zero-knowledge proof generation, verification, aggregation, sequencer selection, or any computational verification logic. The coordination layer itself is not available in a public repository.

Henosis, the proof aggregation framework, is explicitly not production-ready per its own documentation. It was developed by RizeLabs and supports a single proof system (Polygon zkEVM) in practice.

What the public SDK implements is an intent-based bridging system with solver competition, a pattern used by Across, UniswapX, and others. The "ZK coordination rollup" framing sets expectations the codebase cannot yet substantiate.

Fusion: restaking

Avail Fusion enables multi-asset restaking (BTC, ETH, SOL, ERC-20 tokens) to provide shared security. This is conceptually similar to EigenLayer’s restaking model. The question is whether an additional restaking layer, built on top of a DA chain with $44.59 million in TVS, provides meaningfully differentiated security compared to restaking solutions built directly on Ethereum with orders of magnitude more economic stake.

8. Narrative vs. implementation

Not every component deserves the same level of scrutiny. Much of Avail’s stack is built with frameworks and libraries that are designed to be used this way. Substrate is a blockchain SDK. Arkworks is a cryptographic primitives library. libp2p is a p2p networking framework. Filecoin’s Powers of Tau is a public good. Using these tools is standard engineering practice, and doing so well is legitimate work.

The analysis becomes more pointed where marketing language departs from what the codebase supports. The following tables separate these two categories.

Table 1. Standard engineering practice (using tools as designed)

  Component                  Notes
  Substrate (Polkadot SDK)   Framework designed for building custom chains. Avail's use is typical.
  BABE + GRANDPA consensus   Default consensus for Substrate chains. No modification needed for Avail's use case.
  Arkworks / BLST            Cryptographic libraries designed for implementing commitment schemes. Standard dependency.
  libp2p + Kademlia DHT      p2p networking framework designed for distributed applications. Natural choice.
  Filecoin's Powers of Tau   Public trusted setup ceremony. Reusing it is responsible practice.
  DAS confidence formula     Established result from data availability research. No reason to re-derive.

Table 2. Areas where marketing framing departs from implementation

  "Validity proofs"
    Marketing: KZG commitments described as "validity proofs inspired by zero-knowledge technology."
    Code: KZG commitments provide data availability guarantees, not computational validity. These are distinct cryptographic properties.

  "ZK coordination rollup" (Nexus)
    Marketing: A ZK coordination layer with proof aggregation.
    Code: Public repos contain a TypeScript client SDK making API calls. The coordination backend is not in a public repository.

  "Proof aggregation and verification"
    Marketing: Presented as a core capability of Nexus.
    Code: The Henosis framework is explicitly not production-ready and supports one proof system in practice.

  Erasure coding as core innovation
    Marketing: Presented as a defining DA feature Avail built.
    Code: The reconstruction algorithm is attributed to a public gist. The integration into a 2D grid structure is Avail's contribution; the underlying algorithm is not.

9. The technical merits

This analysis would be incomplete without acknowledging what Avail does well.

The grid architecture. Organizing block data into a 2D matrix with row-wise KZG commitments and column-wise erasure coding is a deliberate and sound design. Each extended row produces a separate commitment, and the matrix structure enables efficient cell-level sampling. This specific configuration is Avail’s contribution.
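Schematically, the layout looks like this (hypothetical types for orientation; the real definitions live in avail-core's kate crates):

Rust
// Hypothetical sketch of the 2D grid, not Avail's actual types. Data fills
// an n x m matrix of 32-byte field elements; each column is erasure-coded
// to double its length, and each of the 2n extended rows gets its own KZG
// commitment, so any sampled cell verifies against one row commitment.
struct ExtendedDataMatrix {
    rows: usize,                    // 2n rows after the 2x column extension
    cols: usize,                    // m columns, bounded by the SRS (<= 1024)
    cells: Vec<[u8; 32]>,           // row-major extended cells (rows * cols)
    row_commitments: Vec<[u8; 48]>, // one commitment per extended row
}

impl ExtendedDataMatrix {
    // A light client checks a sampled cell with a KZG opening proof against
    // the commitment of the row the cell belongs to.
    fn commitment_for(&self, row: usize) -> &[u8; 48] {
        &self.row_commitments[row]
    }
}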

The light client. The avail-light client implements a clean DAS flow: DHT-first cell fetching with RPC fallback, confidence tracking, and configurable verification modes. The orchestration is well-structured.

Audit transparency. Eight audit reports from six firms are published in a public repository. This is above industry average. The DA-specific components have come through audits with relatively few issues.

Block header integration. Embedding Kate commitments into Substrate block headers via a patched frame-system is non-trivial. It requires careful understanding of Substrate’s finalization pipeline and the consensus-critical data structures involved.

The system works. It has shipped to mainnet. The engineering is sound within its scope.

10. The hard question

The hard question is not about code quality. It is about structural position.

Data availability, as a technique, is a solved problem. Erasure coding with polynomial commitments and random sampling provides information-theoretic guarantees that data was available at the time of commitment. Every major approach (Avail, Celestia, EigenDA, Ethereum PeerDAS) implements variations of the same construction. There is no proprietary mathematical advantage.

The competitive moat for a DA layer must therefore come from one of:

  1. Security: the economic stake backing the DA guarantee. Ethereum dominates this by orders of magnitude.

  2. Cost: the price per megabyte of DA. Ethereum blobs are already effectively free. External DA layers are marginally cheaper, but the differential is narrowing.

  3. Throughput: the capacity to handle more data than alternatives. All providers have excess capacity. Demand is the bottleneck, not supply.

  4. Ecosystem lock-in: the network effects of developer adoption. Ethereum has the largest rollup ecosystem by far.

None of these favor a standalone DA chain over the settlement layer building native DA. The window in which external DA layers had a clear cost advantage, between EIP-4844 (limited blobs) and PeerDAS (abundant blobs), was a transitional period. PeerDAS is live. The window has closed, or at minimum, it is closing.

And this analysis still assumes rollups are the dominant scaling paradigm. If monolithic chains capture a significant share of high-value activity (as Solana’s DeFi volume already suggests), the addressable market for external DA shrinks further. The dependency is not just on Ethereum’s DA being insufficient, but on rollups being the primary way blockchains scale. That is not a foregone conclusion.

Avail’s pivot to Nexus and Fusion is a bet that DA alone is not enough, and that cross-chain coordination and restaking security can provide the differentiation DA cannot. But those products are not yet real in any publicly verifiable sense. The DA layer is real. The thesis for why it needs to exist as a standalone chain, in a world where the settlement layer has already deployed the same cryptographic guarantees natively and monolithic chains demonstrate viable alternative architectures, is the question the project has not yet answered.

11. Throughput and configuration

For completeness, the mainnet configuration:

Rust
pub const MILLISECS_PER_BLOCK: Moment = 20_000;  // 20 seconds
pub const EPOCH_DURATION_IN_SLOTS: BlockNumber = 4 * HOURS;
pub const MaxActiveValidators: u32 = 1200;
pub const MaxBlockRows: BlockLengthRows = BlockLengthRows(1024);
pub const MaxBlockCols: BlockLengthColumns = BlockLengthColumns(1024);

The block production interval is 20 seconds (BABE slot duration). GRANDPA finality, the point at which a block is irreversible, can lag by one or more rounds. The trusted setup supports 1,024 parameters, constraining the maximum polynomial degree and therefore the block dimensions. Scaling meaningfully beyond the current ceiling would require a larger SRS or a transition to a different commitment scheme (Avail has explored a FRI-based approach under the name "FRIVail").
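Back-of-envelope, the ceiling these constants imply, assuming 32-byte cells (rough arithmetic on the theoretical maximum; configured production block sizes are lower):

$$1024 \times 1024 \ \text{cells} \times 32\ \text{B} = 32\ \text{MiB per 20-second block} \approx 1.6\ \text{MiB/s}$$

Even a small fraction of that ceiling dwarfs the 8.74 MiB currently posted per day, which is consistent with the 0.05% utilization figure.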

12. Audit history

Avail has published eight audit reports from six firms (Halborn, Least Authority, Sherlock, Sayfer, Verichains, CredShields) in a public repository. Notable findings:

  • Critical: An unrestricted mint() function in the Nexus Vault contract that allowed anyone to mint unlimited tokens. Fixed by removing the contract.

  • Medium: A DDoS vector in KateApi.query_proof via unbounded input parameters. Missing validation in block dimension computation.

  • Low (unresolved): Vulnerable crate dependencies flagged in 2022, acknowledged but pending resolution at time of analysis.

The DA-specific components have fared well in audits. The more concerning findings cluster around the newer Nexus infrastructure.

Conclusion

Avail is a competently built data availability layer. The engineering is sound, the code has been audited, and the system is live on mainnet. Using Substrate, Arkworks, libp2p, and established cryptographic constructions is how production software is made.

The challenge is not the code. It is the thesis, and the thesis has two layers of fragility.

The first is the competitive layer. Data availability is not proprietary. Every provider implements variations of KZG + erasure coding + DAS. Ethereum has already deployed PeerDAS with the same techniques, backed by orders of magnitude more economic security. Avail runs at 0.05% capacity with a single application accounting for nearly all usage. Even Celestia, the most established external DA layer, is dominated by one client. The market signal, across every external DA provider, is that demand has not materialized at the scale the thesis requires.

The second, and deeper, is the architectural layer. The entire external DA market depends on rollups becoming the dominant scaling paradigm. If monolithic chains (Solana processing over 100,000 TPS, Sui achieving sub-second finality with parallel execution) capture the lion’s share of high-throughput activity, the rollup model generates less DA demand than projected. If Ethereum’s native DA is sufficient for the rollups that do exist, the overflow market that external DA targets may never reach meaningful scale. Avail has built a system for a world that may not arrive.

The pivot to Nexus and Fusion is a recognition of both risks. But those products are not yet substantiated by the public repositories. Nexus is a TypeScript SDK making API calls to a closed-source backend. Henosis is explicitly not production-ready. Fusion’s restaking model competes with EigenLayer on an asymmetrically smaller economic base.

The question for teams evaluating Avail is not whether the DA layer works; it does. The question is whether the world needs it. That depends on rollups winning the scaling debate, on Ethereum’s native DA proving insufficient, and on external DA surviving commoditization with strictly weaker security. Three assumptions, all contestable, none yet validated by the market. That is the thesis risk, and it is the one the project has yet to answer convincingly.

The engineering is sound.
The question is whether the market it was built for exists.