
Could You Bring Solana-Style Parallel Execution to EVM?


EVM chains process transactions sequentially. One after the other, in order. Every transaction in a block has access to the full state, which means each one could potentially read or write any storage slot on any contract. The execution engine can’t safely run two transactions in parallel because it doesn’t know in advance whether they’ll touch the same state.

Solana doesn’t have this problem because it solved it at the architecture level. Every transaction declares upfront which accounts it will read and which it will write. The runtime looks at those declarations, figures out which transactions are independent, and runs the non-conflicting ones in parallel. It’s one of the reasons Solana gets the throughput numbers it does.

I’ve been thinking about whether you could bring that model to EVM. Not by redesigning the VM from scratch, but by analyzing existing contracts to figure out which storage slots each function actually touches.

The basic idea

If you could build a map of “function X on contract Y reads slots A and B, writes slot C, and calls function Z on contract W,” you’d have enough information to schedule transactions in parallel. Two transactions that touch completely different storage slots can safely run at the same time. Two that both write to the same slot can’t.

This is essentially what Solana requires developers to declare manually. The question is whether you can infer it automatically from EVM contracts.

For source code, this is surprisingly tractable. Solidity’s storage layout is deterministic. A mapping(address => uint256) like a token balance always computes its slot from keccak256(abi.encode(key, slot_index)). If you know the function’s arguments and the contract’s storage layout, you can determine which slots it’ll access. State variables in fixed positions are even simpler.
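To make the determinism concrete, here's a small sketch of the mapping-slot derivation. One caveat: Python's standard library has no Keccak-256 (`hashlib.sha3_256` is NIST SHA-3, which pads differently), so this uses it as a structural stand-in; for slots that match mainnet you'd swap in a real Keccak-256 implementation such as the one in pycryptodome or eth-hash.

```python
import hashlib

def mapping_slot(key: int, slot_index: int) -> bytes:
    """Derive the storage slot for mapping[key] declared at slot `slot_index`.

    Solidity computes keccak256(pad32(key) ++ pad32(slot_index)). We use
    sha3_256 as a stand-in for Keccak-256 here (same structure, different
    padding), which is enough to show the derivation is deterministic.
    """
    preimage = key.to_bytes(32, "big") + slot_index.to_bytes(32, "big")
    return hashlib.sha3_256(preimage).digest()

# Hypothetical token: `mapping(address => uint256) balances` at slot 0.
alice, bob, balances_slot = 0xA11CE, 0xB0B, 0

# Different holders land in different slots...
assert mapping_slot(alice, balances_slot) != mapping_slot(bob, balances_slot)
# ...and the slot for a given holder is fully determined by the arguments,
# so an analyzer can precompute it before execution.
assert mapping_slot(alice, balances_slot) == mapping_slot(alice, balances_slot)
```

Because the slot is a pure function of the key and the declaration index, the analyzer never needs to execute the contract to know which balance slots a given transfer will touch.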

You’d need to trace through the function’s logic, tracking every SLOAD and SSTORE and where the slot values come from. For straightforward functions like an ERC-20 transfer, this is doable: it reads and writes the balance mapping for the sender and recipient, and that’s about it.

The call graph problem

You can’t do this on a per-contract basis, though. Contracts call each other. An ERC-20 transfer is simple, but a DEX swap calls the router, which calls the pool, which calls both tokens, which might call fee contracts. A single user-facing function can fan out into dozens of cross-contract calls.

So you’re not just mapping storage access for one contract. You’re building a call graph across the entire set of contracts involved, then tracking storage reads and writes through every branch. It’s a static analysis problem, and it compounds quickly.

That said, the call targets aren’t always unpredictable. A Uniswap pool always calls the same two token contracts. A lending protocol always interacts with the same oracle and the same set of collateral tokens. The edges in the call graph are largely stable between blocks. You could build the graph once and update it incrementally.

Bytecode makes it harder

If you have the Solidity source (or Vyper, or whatever), parsing the storage access patterns is reasonable. Verified contracts on Etherscan give you this. But not everything is verified, and the source isn’t always available.

With only bytecode, you’re doing decompilation-level analysis. You need to identify SLOAD/SSTORE opcodes, trace back the stack values to figure out which slots they reference, handle dynamic jumps, and deal with the fact that the same function might access different slots depending on runtime values (conditional branches, loops over dynamic-length arrays).
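The very first step of that pipeline, just locating the storage opcodes, is already non-trivial because PUSH instructions embed data bytes in the code stream. A minimal sketch (the real work, tracing stack values back to concrete slots, is omitted):

```python
# Scan EVM bytecode for storage opcodes: SLOAD is 0x54, SSTORE is 0x55.
# PUSH1..PUSH32 (0x60..0x7F) carry 1..32 immediate data bytes that must be
# skipped, or data bytes get misread as opcodes.
def storage_ops(bytecode: bytes) -> list[tuple[int, str]]:
    ops, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == 0x54:
            ops.append((pc, "SLOAD"))
        elif op == 0x55:
            ops.append((pc, "SSTORE"))
        if 0x60 <= op <= 0x7F:      # PUSH1..PUSH32: skip immediate bytes
            pc += op - 0x60 + 1
        pc += 1
    return ops

# PUSH1 0x00, SLOAD, PUSH1 0x01, PUSH1 0x00, SSTORE
code = bytes([0x60, 0x00, 0x54, 0x60, 0x01, 0x60, 0x00, 0x55])
assert storage_ops(code) == [(2, "SLOAD"), (7, "SSTORE")]
```

Finding the opcodes is the easy part; recovering which slot each one reads or writes requires symbolic tracking of the stack, which is exactly where the decompilation tools earn their keep.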

It’s not impossible. There are tools that do bytecode-level storage analysis (like Panoramix or Heimdall for decompilation). But the accuracy drops. You’d get definitive results for some contracts and “I can’t tell” for others.

A chain that implements permissioned deployments could sidestep this entirely. If you control what gets deployed, you can require source code submission or even require contracts to declare their storage access patterns explicitly (similar to how Solana requires account declarations). For a private chain, an L2, or an appchain, this is a realistic option.

And for contracts you can’t analyze? Fall back to sequential execution. You don’t need 100% coverage to get benefits. Even partial parallelism helps.

The central contract problem

Here’s where it gets interesting. Some contracts are touched by nearly everything. WETH, USDC, major DEX routers. If every other transaction in a block involves USDC, then knowing you can parallelize the rest doesn’t buy you much.

But even within a “central” contract, not every transaction conflicts. Two USDC transfers between different pairs of addresses touch different slots in the balance mapping. They look like they conflict at the contract level, but at the storage slot level they’re independent.

This is where the analysis pays for itself. A naive “contracts that overlap can’t parallelize” approach would serialize almost everything. A slot-level analysis would correctly identify that Alice’s transfer(Bob, 100) and Charlie’s transfer(Dave, 200) can run in parallel, because between them they write to four different balance slots.
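The slot-level conflict test itself is simple set logic. Here's a sketch with a hypothetical access set for an ERC-20 transfer (sender is msg.sender; the amount doesn't affect which slots are touched), using `sha3_256` as a stand-in for Solidity's Keccak-256 slot derivation:

```python
import hashlib

def slot(key: int, base: int) -> bytes:
    # Stand-in for Solidity's keccak256(pad32(key) ++ pad32(base));
    # sha3_256 pads differently but the structure is identical.
    return hashlib.sha3_256(
        key.to_bytes(32, "big") + base.to_bytes(32, "big")
    ).digest()

def transfer_access(sender: int, recipient: int, balances_base: int = 0) -> dict:
    # Hypothetical access set: a transfer reads and writes both balance slots.
    slots = {slot(sender, balances_base), slot(recipient, balances_base)}
    return {"reads": slots, "writes": slots}

def conflicts(a: dict, b: dict) -> bool:
    # Two transactions conflict iff one writes a slot the other touches.
    return bool(a["writes"] & (b["reads"] | b["writes"])
                or b["writes"] & (a["reads"] | a["writes"]))

alice, bob, charlie, dave = 1, 2, 3, 4
t1 = transfer_access(alice, bob)
t2 = transfer_access(charlie, dave)
t3 = transfer_access(bob, charlie)

assert not conflicts(t1, t2)   # four disjoint balance slots: parallelizable
assert conflicts(t1, t3)       # both touch Bob's balance: must serialize
```

Note that two reads of the same slot never conflict, which is why read-heavy transactions (price queries, balance checks) parallelize especially well under this model.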

The same applies to DEX pools. Two swaps on different Uniswap pools touch entirely different contracts and storage. Even two swaps on the same pool might be parallelizable if you could break the operation into read and write phases. Maybe. I’m less sure about that one.

Building the execution schedule

Assuming you can build the storage access map, the scheduling part is a well-understood problem. You’re constructing a dependency graph of transactions within a block:

  • Each transaction is a node
  • There’s an edge between two transactions if they access overlapping storage slots and at least one of them writes
  • Independent subgraphs can execute in parallel
  • Within a dependency chain, transactions execute sequentially in block order

You could also think of it as a multi-phase execution plan. Phase 1 runs all transactions that don’t conflict with each other. Phase 2 picks up everything that depended on Phase 1 results. And so on. In the best case (lots of independent transactions), you get massive parallelism. In the worst case (every transaction touches the same state), you’re back to sequential, which is where you started anyway.
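The multi-phase plan falls out of a single pass over the block: each transaction is assigned to the phase after the latest earlier transaction it conflicts with, which also preserves block order within every dependency chain. A minimal sketch, assuming access sets have already been computed per transaction:

```python
def schedule_phases(access, conflicts):
    """Group transactions (given in block order) into parallel phases.

    `access` is a list of (reads, writes) slot-set pairs, one per tx;
    `conflicts(i, j)` is True when tx i and tx j overlap on a slot and at
    least one of them writes it.
    """
    phase = []
    for j in range(len(access)):
        deps = [phase[i] for i in range(j) if conflicts(i, j)]
        phase.append(max(deps) + 1 if deps else 0)
    groups = {}
    for tx, p in enumerate(phase):
        groups.setdefault(p, []).append(tx)
    return [groups[p] for p in sorted(groups)]

def make_conflicts(access):
    def conflicts(i, j):
        (ri, wi), (rj, wj) = access[i], access[j]
        return bool(wi & (rj | wj) or wj & (ri | wi))
    return conflicts

# Tx0 and tx1 touch disjoint slots; tx2 writes a slot tx0 already wrote.
access = [({"a"}, {"a"}), ({"b"}, {"b"}), ({"a"}, {"a"})]
plan = schedule_phases(access, make_conflicts(access))
assert plan == [[0, 1], [2]]   # phase 1: tx0 and tx1 in parallel; phase 2: tx2
```

The worst case (every transaction conflicting with its predecessor) degenerates to one transaction per phase, which is exactly sequential execution, so the scheme never does worse than the status quo apart from the analysis overhead itself.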

The analysis itself would need to happen fast. Between receiving transactions and executing the block, you need to compute the access maps for every transaction and build the dependency graph. For analyzed contracts with cached access patterns, this is mostly just plugging in the transaction arguments to determine concrete slot addresses. For the call graph traversal, you’d want it precomputed and cached.

What I’m not sure about

There’s a lot I haven’t thought through. Dynamic dispatch (proxy patterns, DELEGATECALL) makes static analysis harder because the target code isn’t known until runtime. CREATE2 deploys inside transactions could introduce entirely new contracts mid-block. Gas-dependent behavior (running out of gas partway through a call) means the actual storage writes might differ from what static analysis predicts.

There’s also the question of whether the overhead of the analysis and scheduling eats into the parallelism gains. If computing the dependency graph takes as long as just executing the transactions sequentially, you haven’t gained anything.

I haven’t built any of this. It’s an idea I keep coming back to when thinking about EVM throughput limitations, and I think the storage slot analysis is more feasible than people assume, especially for the common contract patterns that make up the majority of mainnet transactions. Whether it’s worth building as an actual execution engine versus just moving to a chain that was designed for parallelism from the start… that I’m less sure about.