Allen

crypto seeker||starknet

Starknet's modularity?

The term "modular blockchain" has come up frequently over the past two years, in contrast to extreme monolithic chains such as Solana and Aptos. Modularization offers a new path to scalability.

With Celestia as the pioneer and EigenLayer as the Ethereum-aligned contender, how will StarkWare respond?

The answer is:

AppChain is coming#

• How crypto app chains work, and how they build on the advantages of existing networks.
• Launching an app chain improves scalability, customizability, and ecosystem fit.
• By building app chains within the StarkEx/Cosmos/Polkadot systems, a network's liquidity, token economics, and consensus protocol can be reused effectively.
• However, app chains also have limitations, including security, composability, complexity, and exposure to attacks once they become highly popular.
• Building them on Starknet lets them inherit security from L1 and interoperability from L2.
• Therefore, an app chain can be a good choice when application builders want to deploy quickly.
• Still, the time and resources required to build an app chain, and the challenges involved, should be weighed before deciding.

Reasons for migrating to app chains: scalability, customizability, ecosystem.

Limitations of app chains: security, composability, complexity.

The structure that app chains should have: L3 achieves scalability, L2 achieves composability, L1 achieves security.


Options for App Chains


Modularization led by Celestia#

Modular design solves scalability issues#


The problem with monolithic blockchains is that they are bound by the "blockchain trilemma": optimizing one attribute constrains the other two, because the same L1 layer is responsible for providing all three underlying components that make a blockchain a "blockchain" (consensus, execution, and data availability).

Modular blockchains split apart the three components (consensus, execution, and data availability) that a monolithic blockchain bundles at L1. As with division of labor, separating these components lets each be optimized individually, producing a better product and making the whole greater than the sum of its parts.

(1) Modular security through PoS validators#

With the PoS mechanism, dedicated machines are no longer required to secure the network; any computer can participate in securing it. Because PoS tokens can be staked from any connected machine, it is the value of the staked assets themselves that provides the security.

In PoS consensus, the physical (mining) capital cost that secures PoW networks is replaced by the cost of purchasing PoS tokens, which improves the capital efficiency of the assets. Unlike physical mining hardware, PoS assets do not degrade over time, so PoS validators do not need to sell assets to cover operating costs.
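As a rough illustration of stake-weighted security, here is a minimal sketch (in Python, with made-up validator names and stake amounts, not any real protocol's code) of selecting a block proposer with probability proportional to bonded stake:

```python
import random

# Hypothetical validators and bonded stakes (illustrative numbers only).
stakes = {"alice": 3_000, "bob": 1_000, "carol": 6_000}

def pick_proposer(stakes: dict, rng: random.Random) -> str:
    """Select the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

rng = random.Random(42)
picks = [pick_proposer(stakes, rng) for _ in range(10_000)]
# carol holds 60% of the total stake, so she should propose roughly 60% of blocks.
carol_share = picks.count("carol") / len(picks)
```

An attacker must therefore buy (and risk) a large share of the token supply to control block production, which is the economic security the paragraph above describes.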

(2) Maximizing data availability: sharding#

One way to increase a blockchain's throughput is to split it into multiple chains called shards. Each shard has its own block producers, and shards can communicate with each other to transfer tokens between them. The purpose of sharding is to divide the network's block producers so that not every producer processes every transaction; instead, processing power is split across shards, each handling only a subset of transactions.

Typically, a node in a sharded blockchain runs a full node for one or a few shards and a light client for every other shard. After all, if everyone ran a full node for every shard, it would defeat the purpose of sharding, which is to distribute the network's workload across different nodes.
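A minimal sketch of how accounts (and their transactions) might be routed to shards, assuming a simple hash-based assignment (real sharding designs vary considerably):

```python
import hashlib

NUM_SHARDS = 4  # assumption: a small fixed shard count, for illustration

def shard_for(account: str) -> int:
    """Deterministically route an account to a shard by hashing its address."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Each shard's block producers only see the transactions routed to them.
accounts = [f"acct-{i}" for i in range(12)]
shards = {s: [a for a in accounts if shard_for(a) == s] for s in range(NUM_SHARDS)}
```

Because the assignment is deterministic, any node can compute which shard is responsible for a given account without consulting the others.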

However, this approach has a problem: what if the block producers in one shard turn malicious and start accepting invalid transactions? This is more likely in a sharded system than in a non-sharded one, because each shard is secured by only a fraction of the block producers, making an individual shard easier to attack. Remember, the block producers are divided among the shards.

To address the issue of detecting whether any shard accepts invalid transactions, you need to ensure that all data in that shard has been published and available so that any invalid transactions can be proven with fraud proofs.

(3) Modular execution through Rollups#

Rollups process transactions much faster than the L1 main chain. They create an off-chain execution environment independent of Ethereum L1 and update L1's state after processing transactions; rollups are not responsible for consensus or data availability.

Rollup chains do not need to handle consensus and data availability the way highly decentralized L1 chains do; instead, they can offload these concerns because they are cryptographically tied to Ethereum L1.

A rollup is a design in which the blockchain is used only as a data availability layer for storing transactions, while all actual transaction processing and computation happen on the rollup itself. This leads to an interesting insight: the blockchain does not actually need to perform any computation, but it does need to at least order transactions into blocks and guarantee their data availability.

This is also the design concept of Celestia. It is a "lazy" blockchain that does only the two core things a blockchain needs to do: order transactions in a scalable way and make them available. This makes it a minimal "plug-and-play" component for systems such as rollups.

The main current direction for solving blockchain scalability is modular design, with rollups serving as the execution layer and other chains serving as the consensus and data availability layer.

Design of Ethereum 2.0#

Sharding further relaxes the requirement that every main-chain node download all data, instead using a new primitive called DA proofs to achieve higher scalability. With DA proofs, each node downloads only a small part of the shard-chain data, and these small parts collectively suffice to reconstruct all shard-chain blocks. This achieves shared security across shards, because it ensures that any individual shard-chain node can raise disputes that are resolved by all nodes on demand. Polkadot and Near have already implemented DA proofs in their sharding designs, and ETH 2.0 will adopt them as well.

At this point, it is worth noting how the ETH 2.0 sharding roadmap differs from the others. While Ethereum's initial roadmap resembled Polkadot's, it has recently shifted toward sharding data only. In other words, sharding on Ethereum will serve as the DA layer for rollups, and Ethereum will retain a single state, just as today. In contrast, Polkadot performs all execution on shards, each with its own distinct state.

One major advantage of treating shards as pure data layers is that rollups can flexibly dump data onto multiple shards while remaining fully composable. The throughput and fees of rollups are therefore not limited by the data capacity of a single shard. With 64 shards, the maximum total throughput of rollups is expected to increase from 5K TPS to 100,000 TPS. In contrast, regardless of how much throughput Polkadot achieves as a whole, fees will be constrained by the limited throughput (1,000-1,500 TPS) of a single parachain.

Dedicated data availability layer solutions#

Data Availability Proofs#

Data availability proofs are a new technique that lets clients check, with very high probability, that all of a block's data has been published, by downloading only a small part of that block (data availability sampling).

They use a technique called erasure coding (also used in CD-ROMs, satellite communications, and QR codes). Erasure coding takes a block of, say, 1 MB and "extends" it to 2 MB, where the extra 1 MB is special data called erasure codes. If any bytes of the block are lost, they can easily be recovered from the code; even if up to 1 MB of the block is lost, the entire block can be recovered. This is the same reason a scratched CD-ROM can still be read in full.
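The idea can be illustrated with the simplest possible erasure code: a single XOR parity chunk that lets any one lost chunk be recovered. (This is a toy; real DA layers use Reed-Solomon codes, which tolerate losing up to half of the extended data.)

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three data chunks plus one XOR parity chunk (toy values for illustration).
chunks = [b"ab", b"cd", b"ef"]
parity = b"\x00" * len(chunks[0])
for c in chunks:
    parity = xor(parity, c)

# Suppose chunk 1 is withheld: XOR-ing the parity with the surviving chunks
# recovers it exactly, because each byte appears an even number of times
# in the combination except the missing one.
recovered = parity
for i, c in enumerate(chunks):
    if i != 1:
        recovered = xor(recovered, c)
```

With a 2x Reed-Solomon extension, the same principle generalizes: any 50% of the extended block is enough to rebuild the whole thing.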

This means that for a block to be 100% recoverable, a block producer only needs to publish 50% of the extended block on the network. Conversely, if a malicious block producer wants to withhold even 1% of the block, they must withhold more than 50% of it, because that 1% could otherwise be recovered from the published data.

With this knowledge, clients can do something clever to check that no part of the block is being withheld: they try to download a few random chunks of the block, and if any chunk fails to download (i.e., it falls in the portion the malicious producer withheld), they consider the block unavailable.

After sampling one random chunk, a client has a 50% chance of detecting that the block is unavailable. After two chunks, the chance is 75%; after three, 87.5%; and so on, until after seven chunks it exceeds 99%. This is very convenient: clients can check the availability of an entire block, with high probability, by downloading only a small part of it.
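The detection probabilities quoted above follow directly from each uniform random sample independently having a 50% chance of landing in the withheld half:

```python
def detection_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that at least one random sample lands in the withheld portion.

    Each sample independently misses the withheld data with probability
    (1 - withheld_fraction), so the miss probability shrinks geometrically.
    """
    return 1 - (1 - withheld_fraction) ** samples
```

For example, `detection_probability(7)` is 1 - 0.5**7 ≈ 0.992, matching the "99% after seven samples" figure in the text.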

Data availability proofs require a minimum number of light clients in the network, so that enough sample requests are made to collectively recover the entire block. In other words, the more light clients in the network, the more secure the network.

Pros and cons of the Celestia solution#

Similar to the DA sharding of ETH 2.0, Celestia acts as the data availability layer, and other chains (rollups) can be inserted to inherit security. Celestia's solution differs from Ethereum in two fundamental aspects:

  • It does not perform any meaningful state execution at the base layer (unlike ETH 2.0). On Ethereum L1, the many dapps and NFT projects often trigger gas wars that compete with rollups for block space and push up base fees. A DA-only base layer frees rollups from highly unpredictable base-layer fees, which in a stateful environment can skyrocket due to token sales, NFT airdrops, or new high-yield farming opportunities. Rollups consume base-layer resources (i.e., bytes) for security and only for security, so rollup fees are driven primarily by that rollup's own usage rather than by usage of the base layer.
  • Thanks to DA proofs, Celestia can increase its DA throughput without sharding. A key property of DA proofs is that the more nodes participate in sampling, the more data can safely be stored. For Celestia, this means that as more light nodes join DA sampling, blocks can grow larger (higher throughput) without centralizing the network.

Like any design, a dedicated DA layer has drawbacks. The most direct is the lack of a default settlement layer: for assets to move between rollups, the rollups must implement ways to interpret each other's fraud proofs.
