diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index dbefde5c18..5f38d01798 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -33,8 +33,8 @@ license.
We aim to make it as easy as possible to contribute to the mission. This is still WIP, and we're happy for contributions
and suggestions here too. Some resources to help:
-1. [In-repo docs aimed at developers](docs)
-2. [ZKsync Era docs!](https://docs.zksync.io/zk-stack)
+1. [Docs aimed at developers](https://docs.zksync.io/zksync-protocol)
+2. [ZK Stack docs!](https://docs.zksync.io/zk-stack)
3. Company links can be found in the [repo's readme](README.md)
## Code of Conduct
diff --git a/README.md b/README.md
index 8c776af4c1..15bd04dde9 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ or re-auditing a single line of code. ZKsync Era also uses an LLVM-based compile
write smart contracts in C++, Rust and other popular languages.
This repository contains both L1 and L2 ZKsync smart contracts. For their description see the
-[system overview](docs/overview.md).
+[contracts overview](https://docs.zksync.io/zksync-protocol/contracts).
## Disclaimer
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index e42e174b12..0000000000
--- a/docs/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# ZK Stack contracts specs
-
-The order of the files here only roughly represents the order of reading. A lot of topics are intertwined, so it is recommended to read everything first to have a complete picture and then refer to specific documents for more details.
-
-- [Glossary](./glossary.md)
-- [Overview](./overview.md)
-- Contracts of an individual chain
- - [ZK Chain basics](./settlement_contracts/zkchain_basics.md)
- - Data availability
- - [Custom DA support](./settlement_contracts/data_availability/custom_da.md)
- - [Rollup DA support](./settlement_contracts/data_availability/rollup_da.md)
- - [Standard pubdata format](./settlement_contracts/data_availability/standard_pubdata_format.md)
- - [State diff compression v1 spec](./settlement_contracts/data_availability/state_diff_compression_v1_spec.md)
- - L1->L2 transaction handling
- - [Processing of L1->L2 transactions](./settlement_contracts/priority_queue/processing_of_l1-l2_txs.md)
- - [Priority queue](./settlement_contracts/priority_queue/priority-queue.md)
- - Consensus
- - [Consensus Registry](./consensus/consensus-registry.md)
-- Chain Management
- - [Chain type manager](./chain_management/chain_type_manager.md)
- - [Admin role](./chain_management/admin_role.md)
- - [Chain genesis](./chain_management/chain_genesis.md)
- - [Standard Upgrade process](./chain_management/upgrade_process.md)
-- Bridging
- - Bridgehub
- - [Overview of the bridgehub functionality](./bridging/bridgehub/overview.md)
- - [Asset Router](./bridging/asset_router/overview.md)
-- L2 System Contracts
- - [System contracts bootloader description](./l2_system_contracts/system_contracts_bootloader_description.md)
- - [Batches and blocks on ZKsync](./l2_system_contracts/batches_and_blocks_on_zksync.md)
- - [Elliptic curve precompiles](./l2_system_contracts/elliptic_curve_precompiles.md)
- - [ZKsync fee model](./l2_system_contracts/zksync_fee_model.md)
-- Gateway
- - [General overview](./gateway/overview.md)
- - [Chain migration](./gateway/chain_migration.md)
- - [L1->L2 messaging via gateway](./gateway/messaging_via_gateway.md)
- - [L2->L1 messaging via gateway](./gateway/l2_gw_l1_messaging.md)
- - [Gateway protocol versioning](./gateway/gateway_protocol_upgrades.md)
- - [DA handling on Gateway](./gateway/gateway_da.md)
-- EVM emulation
- - [Technical overview](./evm_emulation/technical_overview.md)
- - [Gas emulation](./evm_emulation/evm_gas_emulation.md)
- - [Differences from EVM (Cancun)](./evm_emulation/differences_from_cancun_evm.md)
- - [EVM predeploys list](./evm_emulation/evm_predeploys_list.md)
-- Upgrade history
- - Gateway
- - [Gateway upgrade diff](./upgrade_history/gateway_preparation_upgrade/gateway_diff_review.md)
- - [Gateway upgrade process](<./upgrade_history/gateway_preparation_upgrade/upgrade_process_(no_gateway_chain).md>)
- - EVM emulator
- - [Upgrade process and changes](./upgrade_history/v27_evm_emulation/v27-evm-emulation.md)
-
-
-
-## Repo structure
-
-The repository contains the following sections:
-
-- [gas-bound-caller](../gas-bound-caller), which contains the `GasBoundCaller` utility contract implementation. You can read more about it in its README.
-- [da-contracts](../da-contracts/). Contains implementations of [DA validation](./settlement_contracts/data_availability/custom_da.md) contracts that should be deployed on L1 only.
-- [l1-contracts](../l1-contracts/). Despite the legacy name, it contains contracts that are deployed both on L1 and on L2. This folder encompasses bridging, ZK chain contracts, the contracts for chain admin, etc. The name is historical: these contracts used to be deployed on L1 only. However, with Gateway, settlement- and bridging-related contracts will be deployed on both EVM and EraVM environments. Also, bridging has been unified between L1 and L2 in many places, so keeping everything in one project allows us to avoid code duplication.
-- [l2-contracts](../l2-contracts/). Contains contracts that are deployed only on L2.
-- [system-contracts](../system-contracts/). Contains system contracts or predeployed L2 contracts.
-
-## For auditors: Invariants/tricky places to look out for
-
-This section is for auditors of the codebase. It includes some of the important invariants that the system relies on and which, if broken, could have severe consequences.
-
-- Assuming that the accepting CTM is correct & efficient, the L1→GW part of the L1→GW→L2 transaction never fails. It is assumed that the provided max amount for gas is always enough for any transaction that can realistically come from L1.
-- GW → L1 migration never fails. If it is possible to get into a state where the migration is not possible to finish, then the chain is basically lost. There are some exceptions where for now it is the expected behavior. (check out the “Migration invariants & protocol upgradability” section)
-- The general consistency of chains when migration between different settlement layers is done, including the feasibility of emergency upgrades, etc. I.e. whether the whole system is thought through.
-- Preimage attacks in the L2→GW→L1 tree: we apply special prefixes to ensure that the tree structure is fixed, i.e. all logs are 88 bytes long (this is for backwards-compatibility reasons). For batch leaves and chain id leaves we use special prefixes.
-- Data availability guarantees. Whether rollup users can always restore all their storage slots, etc. An example of a potential tricky issue can be found in “Security notes for Gateway-based rollups” [in this document](./gateway/gateway_da.md).
diff --git a/docs/bridging/asset_router/asset_router.md b/docs/bridging/asset_router/asset_router.md
deleted file mode 100644
index 99f678f585..0000000000
--- a/docs/bridging/asset_router/asset_router.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# AssetRouters (L1/L2) and NativeTokenVault
-
-[back to readme](../../README.md)
-
-The main job of the asset router is to be the central point of coordination for bridging. All cross-chain token bridging is done between asset routers only; once a message reaches an asset router, it routes it to the corresponding asset handler.
-
-In order to make this easier, the asset router is a pre-deployed contract located at the same address (`0x10003`) on every L2 chain. More on how it is deployed can be seen in the [Chain Genesis](../../chain_management/chain_genesis.md) section.
-
-The endgame is to have the L1 asset router provide the same functionality as the L2 one. This is not the case yet, but some progress has been made: L2AssetRouter can now bridge L2-native assets to L1, from where they can be bridged to other chains in the ecosystem.
-
-The specifics of the L2AssetRouter come from the need to interact with the previously deployed L2SharedBridgeLegacy if it was already present. It has fewer “rights” than the L1AssetRouter: at the moment it is assumed that all asset deployment trackers are on L1, so the only way to register an asset handler on L2 is to make an L1→L2 transaction.
-
-> Note that today registering new asset deployment trackers will be permissioned, but the plan is to make it permissionless in the future.
-
-The specifics of the L1AssetRouter come from the need to be backwards compatible with the old L1SharedBridge. While it will not share the same storage, it inherits the need to be backwards compatible with the current SDK. Also, the L1AssetRouter needs to facilitate L1-only operations, such as recovering from failed deposits.
-
-Also, L1AssetRouter is the only base token bridge contract that can participate in the initiation of cross-chain transactions via the bridgehub. This will change in the future with the support of interop.
-
-### L1Nullifier
-
-While the end goal is to unify L1 and L2 asset routers, in reality it may not be that easy: while L2 asset routers get called by L1→L2 transactions, L1 ones don't and require manual finalization of transactions, which involves proof verification, etc. To keep this logic outside of the L1AssetRouter, it was moved into a separate L1Nullifier contract.
-
-_This is the contract that the previous L1SharedBridge will be upgraded to, so it has to keep a backwards-compatible storage layout._
-
-### NativeTokenVault (L1/L2)
-
-NativeTokenVault is an asset handler that is available on all chains and is also pre-deployed. It provides the most basic bridging functionality: locking funds on one chain and minting the bridged equivalent on the other. On L2 chains the NTV is pre-deployed at the `0x10004` address.
-
-The L1 and L2 versions of the NTV are almost identical in functionality; the main differences come from how contracts are deployed in the L1 and L2 environments: the former uses standard CREATE2, while the latter uses low-level calls to the `CONTRACT_DEPLOYER` system contract.
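-
-As a rough illustration of the L1 path (the `BeaconProxy` usage and salt handling here are assumptions for illustration, not the exact production code):
-
-```solidity
-import {BeaconProxy} from "@openzeppelin/contracts/proxy/beacon/BeaconProxy.sol";
-
-// Illustrative only: the L1 NTV can deploy the bridged-token proxy at a
-// deterministic address derived from `salt` via standard CREATE2.
-function deployBridgedToken(bytes32 salt, address beacon) returns (address) {
-    return address(new BeaconProxy{salt: salt}(beacon, ""));
-}
-```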
-
-Also, the L1NTV has the following specifics:
-
-- It operates the `chainBalance` mapping, ensuring that the chains do not go beyond their balances.
-- It allows recovering from failed L1→L2 transfers.
-- It needs to be able to retrieve funds from the former L1SharedBridge (this contract now has the L1Nullifier in its place), while also supporting the old SDK that gives out allowance to the “l1 shared bridge” value returned from the API, i.e. in our case this will be the L1AssetRouter.
-
-### L2SharedBridgeLegacy
-
-L2AssetRouter has to be pre-deployed onto a specific address. The old L2SharedBridge will be upgraded to the L2SharedBridgeLegacy contract. The main purpose of this contract is to ensure compatibility with incoming deposits and re-route them to the asset router.
-
-This contract is never deployed for new chains.
-
-### Summary
-
-
-
-> New bridge contracts
diff --git a/docs/bridging/asset_router/img/bridge_contracts.png b/docs/bridging/asset_router/img/bridge_contracts.png
deleted file mode 100644
index f3f6802cdf..0000000000
Binary files a/docs/bridging/asset_router/img/bridge_contracts.png and /dev/null differ
diff --git a/docs/bridging/asset_router/img/custom_asset_handler_registration.png b/docs/bridging/asset_router/img/custom_asset_handler_registration.png
deleted file mode 100644
index a57e69f927..0000000000
Binary files a/docs/bridging/asset_router/img/custom_asset_handler_registration.png and /dev/null differ
diff --git a/docs/bridging/asset_router/overview.md b/docs/bridging/asset_router/overview.md
deleted file mode 100644
index 3ca2c650e9..0000000000
--- a/docs/bridging/asset_router/overview.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Overview of Custom Asset Bridging with the Asset Router
-
-[back to readme](../../README.md)
-
-Bridges are completely separate contracts from the ZKChains and system contracts. They are a wrapper for L1 <-> L2 communication on both L1 and L2. Upon locking assets on one layer, a request is sent to mint these bridged assets on the other layer.
-Upon burning assets on one layer, a request is sent to unlock them on the other.
-
-Custom asset bridging is a new bridging model that allows one to:
-
-1. Minimize the effort needed for custom tokens to become part of the elastic chain ecosystem. Before, each custom token would have to build its own bridge; now just a custom asset deployment tracker / asset handler is needed. This is achieved by building a modular bridge which separates the logic of L1<>L2 messaging from the holding of the asset.
-2. Unify the interfaces between L1 and L2 bridge contracts, paving the way for easy cross chain bridging. It will especially become valuable once interop is enabled.
-
-#### New concepts
-
-- assetId => identifier used to track bridged assets across chains. It is used to link messages to specific asset handlers in the AssetRouters.
-- AssetHandler => contract that manages liquidity (burns/mints, locks/unlocks, etc.) for a specific token (or a set of them) on a chain. Every asset has an asset handler on each chain it is bridged to.
-- AssetDeploymentTracker => contract that manages the deployment of asset handlers across chains. This is the contract that registers these asset handlers in the AssetRouters.
-
-### Normal flow
-
-Asset Handlers are registered in the Routers based on their assetId. The assetId is used to identify the asset when bridging; it is sent with the cross-chain transaction data, and the Router routes the data to the appropriate Handler. If the asset handler is not registered in the L2 Router, the L1->L2 bridging transaction will fail on the L2 (except for NTV assets, see below).
-
-`assetId = keccak256(chainId, asset deployment tracker = msg.sender, additionalData)`
-
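-A minimal Solidity sketch of this derivation (the `abi.encode` packing here is an assumption for illustration; the exact encoding is defined by the bridge contracts):
-
-```solidity
-function encodeAssetId(
-    uint256 chainId,
-    address assetDeploymentTracker, // the msg.sender at registration time
-    bytes32 additionalData
-) pure returns (bytes32) {
-    return keccak256(abi.encode(chainId, assetDeploymentTracker, additionalData));
-}
-```
-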
-Asset registration is handled by the AssetDeploymentTracker. It is expected that this contract is deployed on L1. Registration of the assetHandler on a ZKChain can be permissionless depending on the asset (e.g. the AssetHandler can be deployed on the chain at a predefined address; it can message the L1 ADT, which can then register the asset in the Router). Registering the L1 Handler in the L1 Router can be done via a direct function call from the L1 Deployment Tracker. Registration in the L2 Router is done indirectly via the L1 Router.
-
-
-
-The Native Token Vault is a special case of the Asset Handler, as we want it to support automatic bridging. This means it should be possible to bridge an L1 token to an L2 without deploying the token contract beforehand and without registering it in the L2 Router. For NTV assets, L1->L2 transactions where the AssetHandler is not registered will not fail; instead, the message will automatically be forwarded to the L2NTV. There the contract checks that the asset was indeed deployed by the L1NTV, by checking that the assetId contains the correct ADT address (note that for NTV assets the ADT is the NTV, and the address used is the L2NTV address). If the assetId is correct, the token contract is deployed.
-
-### Read more
-
-You can read a more in-depth description of the L1 and L2 asset routers and the default asset handler, the Native Token Vault, [here](./asset_router.md).
diff --git a/docs/bridging/bridgehub/img/L1_L2_tx_processing_on_L2.png b/docs/bridging/bridgehub/img/L1_L2_tx_processing_on_L2.png
deleted file mode 100644
index cfe75d5cc1..0000000000
Binary files a/docs/bridging/bridgehub/img/L1_L2_tx_processing_on_L2.png and /dev/null differ
diff --git a/docs/bridging/bridgehub/img/gateway_architecture.png b/docs/bridging/bridgehub/img/gateway_architecture.png
deleted file mode 100644
index a9302ec7ea..0000000000
Binary files a/docs/bridging/bridgehub/img/gateway_architecture.png and /dev/null differ
diff --git a/docs/bridging/bridgehub/img/requestL2TransactionDirect.png b/docs/bridging/bridgehub/img/requestL2TransactionDirect.png
deleted file mode 100644
index 95621fb7b1..0000000000
Binary files a/docs/bridging/bridgehub/img/requestL2TransactionDirect.png and /dev/null differ
diff --git a/docs/bridging/bridgehub/img/requestL2TransactionTwoBridges_depositEthToUSDC.png b/docs/bridging/bridgehub/img/requestL2TransactionTwoBridges_depositEthToUSDC.png
deleted file mode 100644
index 12f2f116c7..0000000000
Binary files a/docs/bridging/bridgehub/img/requestL2TransactionTwoBridges_depositEthToUSDC.png and /dev/null differ
diff --git a/docs/bridging/bridgehub/img/requestL2TransactionTwoBridges_token.png b/docs/bridging/bridgehub/img/requestL2TransactionTwoBridges_token.png
deleted file mode 100644
index 6cc290fde5..0000000000
Binary files a/docs/bridging/bridgehub/img/requestL2TransactionTwoBridges_token.png and /dev/null differ
diff --git a/docs/bridging/bridgehub/overview.md b/docs/bridging/bridgehub/overview.md
deleted file mode 100644
index b47e839fdc..0000000000
--- a/docs/bridging/bridgehub/overview.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# BridgeHub & Asset Routers
-
-[back to readme](../../README.md)
-
-## Bridgehub as the main chain registry
-
-Bridgehub is the most important contract in the system. It stores:
-
-- A mapping from chainId to the chain's address
-- A mapping from chainId to the CTM it belongs to.
-- A mapping from chainId to its base token (i.e. the token that is used for paying fees)
-- etc
-
-> Not sure what a CTM is? Check out the [overview](../../settlement_contracts/zkchain_basics.md) of the contracts for the settlement layer.
-
-Overall, it is the main registry for all the contracts. Note that a clone of the Bridgehub is also deployed on each L2 chain, but this clone is only used on settlement layers. All in all, the architecture of the entire ecosystem can be seen below:
-
-
-
-> This document will not cover how ZK Gateway works, you can check it out in [a separate doc](../../gateway/overview.md).
-
-## Asset router as the main asset bridging entrypoint
-
-The main entry point for passing value between chains is the AssetRouter; it is responsible for facilitating bridging between multiple asset types. To read more in detail on how it works, please refer to the custom [asset bridging documentation](../asset_router/overview.md).
-
-For the purpose of this document, it is enough to treat the Asset Router as a black box that is responsible for escrowing funds on the source chain and minting them on the destination chain.
-
-> For those that are aware of the [previous ZKsync architecture](https://github.com/code-423n4/2024-03-zksync/blob/main/docs/Smart%20contract%20Section/L1%20ecosystem%20contracts.md), its role is similar to L1SharedBridge that we had before. Note, however, that it is a different contract with much enhanced functionality. Also, note that the L1SharedBridge will NOT be upgraded to the L1AssetRouter. For more details about migration, please check out [the migration doc](../../upgrade_history/gateway_preparation_upgrade/gateway_diff_review.md).
-
-### Handling base tokens
-
-On L2, _a base token_ (not to be confused with a _native token_, i.e. an ERC20 token with its main contract on the chain) is the one used for `msg.value`, and it is managed by the `L2BaseToken` system contract. We need its logic to be strictly defined in `L2BaseToken`, since the base asset is expected to behave exactly the same as ether on EVM. For now this token contract supports neither minting and burning of the base asset nor further customization.
-
-In other words, in the current release base assets can only be transferred through `msg.value`. They can also only be minted when they are backed 1-1 on L1.
-
-## L1→L2 communication via `Bridgehub.requestL2TransactionDirect`
-
-L1→L2 communication allows users on L1 to create a request for a transaction to happen on L2. This is the primary censorship-resistance mechanism. If you are interested, you can read more on L1→L2 communications [here](../../settlement_contracts/priority_queue/processing_of_l1-l2_txs.md); for now it is enough to understand that L1→L2 communication allows requesting transactions to happen on L2.
-
-The L1→L2 communication is also the only way to mint a base asset at the moment. Fees to the operator as well as `msg.value` will be minted on `L2BaseToken` after the corresponding L1→L2 tx has been processed.
-
-To request an L1→L2 transaction, the `BridgeHub.requestL2TransactionDirect` function needs to be invoked. The user should pass the struct with the following parameters:
-
-```solidity
-struct L2TransactionRequestDirect {
- uint256 chainId;
- uint256 mintValue;
- address l2Contract;
- uint256 l2Value;
- bytes l2Calldata;
- uint256 l2GasLimit;
- uint256 l2GasPerPubdataByteLimit;
- bytes[] factoryDeps;
- address refundRecipient;
-}
-```
-
-Most of the params are self-explanatory & replicate the logic of ZKsync Era. The only non-trivial fields are:
-
-- `mintValue` is the total amount of the base tokens that should be minted on L2 as the result of this transaction. The requirement is that `request.mintValue >= request.l2Value + request.l2GasLimit * derivedL2GasPrice(...)`, where `derivedL2GasPrice(...)` is the gas price to be used by this L1→L2 transaction. The exact price is defined by the ZKChain.
-
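-As a sketch, this requirement can be expressed as the following check (illustrative only; the real validation performed by the chain's contracts is more involved):
-
-```solidity
-// Illustrative check of the mintValue requirement; `derivedGasPrice` stands
-// in for the chain-defined derivedL2GasPrice(...) of this transaction.
-function checkMintValue(
-    L2TransactionRequestDirect memory request,
-    uint256 derivedGasPrice
-) pure {
-    uint256 baseCost = request.l2GasLimit * derivedGasPrice;
-    require(request.mintValue >= request.l2Value + baseCost, "mintValue too low");
-}
-```
-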
-Here is a quick guide on how this transaction is routed through the bridgehub.
-
-1. The bridgehub retrieves the `baseTokenAssetId` of the chain with the corresponding `chainId` and calls `L1AssetRouter.bridgehubDepositBaseToken` method. The `L1AssetRouter` will then use standard token depositing mechanism to burn/escrow the respective amount of the `baseTokenAssetId`. You can read more about it in [the asset router doc](../asset_router/overview.md). This step ensures that the baseToken will be backed 1-1 on L1.
-
-2. After that, it just routes the corresponding call to the ZKChain with the corresponding `chainId`. It is now the responsibility of the ZKChain to validate that the transaction is correct and can be accepted by it. This validation includes, but is not limited to:
-
-   - The fact that the user paid enough funds for the transaction (basically `request.mintValue >= request.l2Value + request.l2GasLimit * derivedL2GasPrice(...)`).
-   - The fact that the transaction is always executable (i.e. the `request.l2GasLimit` is not too high).
- - etc.
-
-3. After the ZKChain validates the tx, it includes it into its priority queue. Once the operator executes this transaction on L2, the `mintValue` of the baseToken will be minted on L2. The `derivedL2GasPrice(...) * gasUsed` will be given to the operator’s balance. The remaining funds are routed in one of the following ways:
-
-If the transaction is successful, the `request.l2Value` will be minted on the `request.l2Contract` address (which can potentially transfer these funds within the transaction). The rest is minted to the `request.refundRecipient` address. In case the transaction is not successful, all of the base token is minted to the `request.refundRecipient` address. These are the same rules as for ZKsync Era.
-
-**_Diagram of the L1→L2 transaction flow on L1 for direct user calls, the baseToken can be ETH or an ERC20:_**
-
-
-
-**_Diagram of the L1→L2 transaction flow on L2 (it is the same regardless of the baseToken):_**
-
-
-
-
-
-### Limitations of custom base tokens in the current release
-
-ZKsync Era uses ETH as its base token, but upon creation of a new ZKChain, chains may want to use their own custom base tokens. Note that for the current release all the possible base tokens are whitelisted. The other limitation is that all base tokens must be backed 1-1 on L1 and are solely implemented with the `L2BaseToken` contract. In other words:
-
-- No custom logic is allowed on L2 for base tokens
-- Base tokens can not be minted on L2 without being backed by the corresponding L1 amount.
-
-If someone wants to build a protocol that mints base tokens on L2, the option for now is to “mint” an infinite amount of those on L1, deposit them on L2 and then give those out as a way to “mint”. We will update this in the future.
-
-## General architecture and initialization of SharedBridge for a new ZKChain
-
-Once the chain is created, its L2AssetRouter will be automatically deployed upon genesis. You can read more about it in the [Chain creation flow](../../chain_management/chain_genesis.md).
-
-## `requestL2TransactionTwoBridges`
-
-`L1AssetRouter` is used as the main "glue" for value bridging across chains. Whenever a token that is not native needs to be bridged between two chains, an L1<>L2 transaction on behalf of an AssetRouter needs to be performed. For more details, check out the [asset router documentation](../asset_router/overview.md). For this section it is enough to understand that we need to somehow make a transaction on behalf of `L1AssetRouter` to its L2 counterpart to deliver the message about a certain amount of an asset being bridged.
-
-> In the next paragraphs we will often refer to `L1AssetRouter` as performing something. This is good enough for understanding how the bridgehub functionality works. Under the hood, though, it mainly serves as a common entry point that calls various asset handlers chosen based on the asset id. You can read more about it in the [asset router documentation](../asset_router/asset_router.md).
-
-Let’s say that a ZKChain has ETH as its base token, and the depositor wants to bridge USDC to that chain. We can not use `BridgeHub.requestL2TransactionDirect`, because it only takes the base token `mintValue` and then starts an L1→L2 transaction right away on behalf of the user and not the `L1AssetRouter`.
-
-We need some way to atomically deposit both ETH and USDC to the shared bridge and start a transaction from the `L1AssetRouter`. For that we have a separate function on `Bridgehub`: `BridgeHub.requestL2TransactionTwoBridges`. The reason behind the name “two bridges” is a bit historical: the transaction is supposed to compose actions with two bridges: the bridge responsible for the base token and a second bridge responsible for any other token.
-
-Note, however, that only `L1AssetRouter` can be used to bridge base tokens, while the role of the second bridge can be played by any contract that supports the protocol described below.
-
-When calling `BridgeHub.requestL2TransactionTwoBridges` the following struct needs to be provided:
-
-```solidity
-struct L2TransactionRequestTwoBridgesOuter {
- uint256 chainId;
- uint256 mintValue;
- uint256 l2Value;
- uint256 l2GasLimit;
- uint256 l2GasPerPubdataByteLimit;
- address refundRecipient;
- address secondBridgeAddress;
- uint256 secondBridgeValue;
- bytes secondBridgeCalldata;
-}
-```
-
-The first few fields are the same as for the simple L1→L2 transaction case. However, there are three new fields:
-
-- `secondBridgeAddress` is the address of the bridge (or contract in general) which will need to perform the L1->L2 transaction. In this case it should be the `L1AssetRouter`.
-- `secondBridgeValue` is the `msg.value` to be sent to the bridge which is responsible for the asset being deposited (in this case the `L1AssetRouter`). This can be used to deposit ETH to ZKChains whose base token is not ETH.
-- `secondBridgeCalldata` is the data to pass to the second contract. `L1AssetRouter` supports multiple calldata formats; the list can be seen in the `bridgehubDeposit` function of the `L1AssetRouter`.
-
-The function will do the following:
-
-#### L1
-
-1. It will deposit the `request.mintValue` of the ZKChain’s base token the same way as during a simple L1→L2 transaction. These funds will be used for funding the `l2Value` and the fee to the operator.
-2. It will call the `secondBridgeAddress` (`L1AssetRouter`) once again, this time depositing funds not to pay the fees, but for the sake of bridging the desired token. This call returns the parameters to call the L2 contract with (the address of the L2 bridge counterpart, the calldata and the factory deps to call it with).
-3. After that, the BridgeHub will call the ZKChain to add the corresponding L1→L2 transaction to the priority queue.
-4. The BridgeHub will call the `SharedBridge` once again so that it can remember the hash of the corresponding deposit transaction. [This is needed in case the deposit fails](#claiming-failed-deposits).
-
-#### L2
-
-1. After some time, the corresponding L1→L2 transaction is processed.
-2. The L2AssetRouter will receive the message and re-route it to the asset handler of the bridged token. To read more about how it works, check out the [asset router documentation](../asset_router/overview.md).
-
-**_Diagram of depositing ETH onto a chain with USDC as the baseToken. Note that some contract calls (like `USDC.transferFrom`) are omitted for the sake of conciseness:_**
-
-
-
-## Generic usage of `BridgeHub.requestL2TransactionTwoBridges`
-
-`L1AssetRouter` is the only bridge that can handle base tokens. However, `BridgeHub.requestL2TransactionTwoBridges` can be used with any `secondBridgeAddress` on L1. A notable example is how our [CTMDeploymentTracker](../../../l1-contracts/contracts/bridgehub/CTMDeploymentTracker.sol) uses it to register the correct CTM address on Gateway. You can read more about how Gateway works in [its documentation](../../gateway/overview.md).
-
-Let’s do a quick recap on how it works:
-
-When calling `BridgeHub.requestL2TransactionTwoBridges` the following struct needs to be provided:
-
-```solidity
-struct L2TransactionRequestTwoBridgesOuter {
- uint256 chainId;
- uint256 mintValue;
- uint256 l2Value;
- uint256 l2GasLimit;
- uint256 l2GasPerPubdataByteLimit;
- address refundRecipient;
- address secondBridgeAddress;
- uint256 secondBridgeValue;
- bytes secondBridgeCalldata;
-}
-```
-
-- `secondBridgeAddress` is the address of the L1 contract that needs to perform the L1->L2 transaction.
-- `secondBridgeValue` is the `msg.value` to be sent to the `secondBridgeAddress`.
-- `secondBridgeCalldata` is the data to pass to the `secondBridgeAddress`. This can be interpreted any way it wants.
-
-1. Firstly, the Bridgehub will deposit the `request.mintValue` the same way as during a simple L1→L2 transaction. These funds will be used for funding the `l2Value` and the fee to the operator.
-2. After that, the `secondBridgeAddress.bridgehubDeposit` with the following signature is called:
-
-```solidity
-struct L2TransactionRequestTwoBridgesInner {
-  // Should be equal to a constant `uint256(keccak256("TWO_BRIDGES_MAGIC_VALUE")) - 1`
- bytes32 magicValue;
- // The L2 contract to call
- address l2Contract;
- // The calldata to call it with
- bytes l2Calldata;
- // The factory deps to call it with
- bytes[] factoryDeps;
- // Just some 32-byte value that can be used for later processing
- // It is called `txDataHash` as it *should* be used as a way to facilitate
- // reclaiming failed deposits.
- bytes32 txDataHash;
-}
-
-function bridgehubDeposit(
- uint256 _chainId,
- // The actual user that does the deposit
- address _prevMsgSender,
- // The msg.value of the L1->L2 transaction to be created
- uint256 _l2Value,
- // Custom bridge-specific data
- bytes calldata _data
-) external payable returns (L2TransactionRequestTwoBridgesInner memory request);
-```
-
-Now the job of the contract will be to “validate” whether it is okay with the incoming transaction. For instance, the `CTMDeploymentTracker` checks that the `_prevMsgSender` is the owner of the `CTMDeploymentTracker` and has the necessary rights to perform the transaction on its behalf.
-
-Ultimately, a correctly processed `bridgehubDeposit` call basically grants `BridgeHub` the right to create an L1→L2 transaction on behalf of the `secondBridgeAddress`. Since this is so powerful, the first returned value must be a magic constant equal to `uint256(keccak256("TWO_BRIDGES_MAGIC_VALUE")) - 1`. The somewhat non-standard signature and the struct with the magic value are the major defense against “accidental” approvals to start a transaction on behalf of an account.
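-
-A sketch of how this return value could be checked (names illustrative; the constant derivation follows the comment in the struct above):
-
-```solidity
-bytes32 constant TWO_BRIDGES_MAGIC_VALUE = bytes32(uint256(keccak256("TWO_BRIDGES_MAGIC_VALUE")) - 1);
-
-function checkMagicValue(L2TransactionRequestTwoBridgesInner memory request) pure {
-    // The non-trivial constant proves that the second bridge deliberately
-    // implements this protocol, guarding against accidental approvals.
-    require(request.magicValue == TWO_BRIDGES_MAGIC_VALUE, "Invalid magic value");
-}
-```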
-
-Aside from the magic constant, the method should also return the information that the L1→L2 transaction will start its call with: the `l2Contract`, `l2Calldata` and `factoryDeps`. It should also return the `txDataHash` field. The meaning of `txDataHash` will be explained in the next paragraphs, but generally it can be any 32-byte value the bridge wants.
-
-3. After that, an L1→L2 transaction is invoked. Note that the “trusted” `L1AssetRouter` has enforced that the baseToken was deposited correctly (again, step (1) can _only_ be handled by the `L1AssetRouter`), while the second bridge can provide any data to call its L2 counterpart with.
-4. As a final step, the following function is called:
-
-```solidity
-function bridgehubConfirmL2Transaction(
- // `chainId` of the ZKChain
- uint256 _chainId,
- // the same value that was returned by `bridgehubDeposit`
- bytes32 _txDataHash,
- // the hash of the L1->L2 transaction
- bytes32 _txHash
-) external;
-```
-
-This function is needed for whatever actions need to be done after the L1→L2 transaction has been invoked.
-
-On `L1AssetRouter` it is used to remember the hash of each deposit transaction, so that later on the funds can be returned to the user if the `L1->L2` transaction fails. The `_txDataHash` is stored so that whenever a user wants to reclaim funds from a failed deposit, they provide the token, the amount, and the sender to send the money to.
-
-## Claiming failed deposits
-
-In case a deposit fails, the `L1AssetRouter` allows users to recover the deposited funds by providing a proof that the corresponding transaction indeed failed. The logic is the same as in the current Era implementation.
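-
-A hedged sketch of how the stored `txDataHash` could be matched against the user-supplied reclaim data (the exact encoding here is an assumption for illustration):
-
-```solidity
-// Illustrative only: on reclaim, the user re-supplies the deposit details,
-// which are checked against the hash remembered at deposit time.
-function checkReclaim(
-    bytes32 storedTxDataHash,
-    address depositSender,
-    address l1Token,
-    uint256 amount
-) pure {
-    require(
-        storedTxDataHash == keccak256(abi.encode(depositSender, l1Token, amount)),
-        "Deposit data mismatch"
-    );
-}
-```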
-
-## Withdrawing funds from L2
-
-Funds withdrawal works similarly to how it is currently done on Era.
-
-The user needs to call the `L2AssetRouter.withdraw` function on L2, providing the token they want to withdraw. This function then calls the corresponding L2 asset handler and asks it to burn the funds. We expand a bit more on this in the [asset router documentation](../asset_router/overview.md).
-
-Note, however, that this is not the way to withdraw the base token. To withdraw the base token, `L2BaseToken.withdraw` needs to be called.
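-
-A short sketch of the base-token path (the `L2BaseToken` address and interface follow the system-contracts convention, but treat them as assumptions):
-
-```solidity
-interface IL2BaseToken {
-    function withdraw(address l1Receiver) external payable;
-}
-
-// Illustrative: burn `amount` of the base token on L2 and emit a
-// withdrawal message that `l1Receiver` can later finalize on L1.
-function withdrawBaseToken(address l1Receiver, uint256 amount) {
-    IL2BaseToken(address(0x800a)).withdraw{value: amount}(l1Receiver);
-}
-```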
-
-After the batch with the withdrawal request has been executed, the user can finalize the withdrawal on L1 by calling `L1AssetRouter.finalizeWithdrawal`, where the user provides the proof of the corresponding withdrawal message.
diff --git a/docs/chain_management/admin_role.md b/docs/chain_management/admin_role.md
deleted file mode 100644
index 81585f748b..0000000000
--- a/docs/chain_management/admin_role.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Safe ChainAdmin management
-
-[back to readme](../README.md)
-
-While the ecosystem relies on [decentralized trusted governance](https://blog.zknation.io/introducing-zk-nation/), each chain has its own Chain Admin. While the upgrade parameters are chosen by the governance, the chain admin is still a powerful role and should be managed carefully.
-
-In this document we will explore what abilities the ChainAdmin has, how dangerous they are, and how to mitigate potential issues.
-
-## General guidelines
-
-The system does not restrict in any way how the admin of the chain should be implemented. However, special caution should be taken to keep it safe.
-
-The general guideline is that the admin of a ZK chain should be _at least_ a well-distributed multisig. Having it as an EOA is definitely a bad idea, since having this address stolen can lead to the [chain being permanently frozen](#setting-da-layer).
-
-Additional measures may be taken [to self-restrict](#proposed-modular-chainadmin-implementation) the ChainAdmin to ensure that some operations can only be done in a safe fashion.
-
-Generally, all chain admin functionality should be treated with maximal security and caution, with separate hotkey roles used only in rare circumstances, e.g. to call `setTokenMultiplier` in the case of an ERC-20 based chain.
-
-## Chain Admin functionality
-
-### Setting validators for a chain
-
-The admin of a chain can call `ValidatorTimelock` on the settlement layer to add or remove validators, i.e. addresses that have the right to `commit`/`verify`/`execute` batches etc.
-
-The system is protected against malicious validators: they can never steal funds from users. However, this role is still relatively powerful: if the DA layer is not reliable and a batch does get executed, the funds may be frozen. This is why chains should be [cautious about the DA layers that they use](#setting-da-layer). Note that on L1 the `ValidatorTimelock` has a 21h delay, while on Gateway this timelock will not be present.
-
-In case a malicious batch has not been executed yet, it can be reverted.
-
-### Setting DA layer
-
-This is one of the most powerful settings that a chain can have: setting a custom DA layer. The dangers of doing this wrong are obvious: the lack of a proper data availability solution may lead to funds being frozen. (Note that funds can never be _stolen_, due to the ZKP checks of the VM execution.)
-
-Sometimes, users may need assurances that a chain will never become frozen, even under a malicious chain admin. A general though unstable approach is discussed [here](#proposed-modular-chainadmin-implementation); however, this release comes with a solution specially tailored for rollups: the `isPermanentRollup` setting.
-
-#### `isPermanentRollup` setting
-
-The chain also exposes the `AdminFacet.makePermanentRollup` function. It turns a chain into a permanent rollup, ensuring that DA validator pairs can only be set to values that are approved by the decentralized governance for use by rollups.
-
-This functionality is obviously dangerous in the sense that it is permanent and revokes the chain's right to change its DA layer. On the other hand, it ensures perpetual safety for users. This is the option that ZKsync Era plans to use.
-
-This setting is preserved even when migrating to [Gateway](../gateway/overview.md). If this setting was set while the chain is on top of Gateway, and it migrates back to L1, it will keep this status, i.e. it is fully irrevocable.
-
-### `changeFeeParams` method
-
-This method allows changing how the fees are charged for priority operations.
-
-The worst impact of setting this value wrongly is having L1->L2 transactions underpriced.
-
-### `setTokenMultiplier` method
-
-This method allows setting the token multiplier, i.e. the ratio between the price of ETH and the price of the base token. It will be used for L1->L2 priority transactions.
-
-Typically, `ChainAdmin`s of ERC20 chains will have a special hotkey responsible for calling this function to keep the price up to date. An example on how it is implemented in the current system can be seen [here](https://github.com/matter-labs/era-contracts/blob/aafee035db892689df3f7afe4b89fd6467a39313/l1-contracts/contracts/governance/ChainAdmin.sol#L23).
-
-The worst impact of setting this value wrongly is having L1->L2 transactions underpriced.
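-
-For illustration, the hot key's periodic update might look like this (the ratio-style `setTokenMultiplier` signature is an assumption based on the linked implementation):
-
-```solidity
-interface IChainAdminLike {
-    // Sets the base-token-to-ETH price ratio as nominator/denominator.
-    function setTokenMultiplier(uint128 nominator, uint128 denominator) external;
-}
-
-// Illustrative: e.g. if 1 ETH is worth 3500 base tokens, the hot key
-// calls setTokenMultiplier(3500, 1).
-function updatePrice(IChainAdminLike admin, uint128 nominator, uint128 denominator) {
-    admin.setTokenMultiplier(nominator, denominator);
-}
-```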
-
-### `setPubdataPricingMode`
-
-This method allows setting whether the pubdata price will be taken into account for priority operations.
-
-The worst impact of setting this value wrongly is having L1->L2 transactions underpriced.
-
-### `setTransactionFilterer`
-
-This method allows setting a transaction filterer, i.e. an additional validator for all incoming L1->L2 transactions. The worst impact is users' transactions being censored.
-
-### Migration to another settlement layer
-
-The admin can start a migration of the chain to another settlement layer. Currently all settlement layers are whitelisted, so generally this operation is harmless (except for the inconvenience in case the migration was unplanned).
-
-However, some caution needs to be applied to migrate properly as described in the section below.
-
-## Chain admin when migrating to gateway
-
-When a chain migrates to gateway, it provides the address of the new admin on L2. The following rules apply:
-
-- If a ZK chain has already been deployed on a settlement layer, its admin stays the same.
-- If a ZK chain has not been deployed yet, then the new admin is set.
-
-The above means that in the current release the admin of the chain on the new settlement layer is "detached" from the admin on L1. It is the responsibility of the chain to set the L2 admin correctly: either it should have the same signers or, even better in the long run, the aliased L1 admin should be given most of the abilities inside the L2 chain admin.
-
-Since most of the admin's functionality above is related to L1->L2 operations, the L1 chain admin will continue playing a crucial role even after the chain migrates to Gateway. However, some of the new functionality is relevant only for the chain admin on the settlement layer:
-
-- Managing DA
-- Managing new validators
-- It is the admin on the settlement layer that performs migrations of chains
-
-As such, the choice of the L2 admin is very important. If the chain admin on the new settlement layer is not accessible (e.g. a wrong address was accidentally chosen), the chain is lost:
-
-- No validators will be set
-- The chain can not migrate back
-
-Overall **very special care** needs to be taken when selecting an admin for the migration to a new settlement layer.
-
-## Proposed modular `ChainAdmin` implementation
-
-> **Warning**. The proposed implementation here will likely **not** be used by the Matter Labs team for ZKsync Era due to the issues listed in the issues section. This code, however, is still in scope of the audit and may serve as a future basis of a more long term solution.
-
-In order to be flexible enough for other chains to use in the future, the implementation is modular, so that other chains can fit it to their needs. By default, this contract is not even `Ownable`: anyone can execute transactions on its behalf. To add new features, such as access control or restricting calls to dangerous methods, _restrictions_ should be added to it. Each restriction is a contract that implements the `IRestriction` interface. The following restrictions have been implemented so far:
-
-- `AccessControlRestriction` allows specifying which addresses can call which methods. In the case of Era, only the `DEFAULT_ADMIN_ROLE` will be able to call any methods. Other chains with a non-ETH base token may need an account that periodically calls the L1 contract to update the ETH price there. They may create a `SET_TOKEN_MULTIPLIER_ROLE` role that is required to update the token price and give its rights to some hot private key.
-
-- `PermanentRestriction` that ensures that:
-
-a) This restriction can not be lifted, i.e. the chain admin of the chain must forever have it. Even if the address of the `ChainAdmin` changes, it ensures that the new admin has this restriction turned on.
-b) It specifies the calldata with which certain methods can be called. For instance, in case a chain wants to keep itself permanently tied to a certain DA, it will ensure that the only DA validation method that can be used is rollup. Some sort of decentralized governance could be chosen to select which DA validation pair corresponds to this DA method.
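-
-As a rough sketch, a restriction could look like this (the interface shape is an assumption for illustration):
-
-```solidity
-struct Call {
-    address target;
-    uint256 value;
-    bytes data;
-}
-
-// Hypothetical shape of a restriction: the ChainAdmin consults each
-// registered restriction before executing a call.
-interface IRestriction {
-    /// @dev Reverts if `call` is not allowed to be executed by `invoker`.
-    function validateCall(Call calldata call, address invoker) external view;
-}
-```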
-
-The approach above not only helps to protect the chain, but also provides correct information for chains that are present in our ecosystem. For instance, if a chain claims to perpetually have a certain property, having the `PermanentRestriction` as part of the chain admin can assure all observers of that.
-
-### Issues and limitations
-
-Due to the specifics of [migration to another settlement layer](#migration-to-another-settlement-layer) (i.e. that migrations do not overwrite the admin), maintaining the same `PermanentRestriction` becomes hard in case a restriction has been added on top of the chain admin inside one chain, but not the other.
-
-While very flexible, this modular approach should still be polished further before being recommended as a generic solution for everyone. However, the provided new [ChainAdmin](../../l1-contracts/contracts/governance/ChainAdmin.sol) can still be helpful for new chains, as with the `AccessControlRestriction` it provides a ready-to-use framework for role-based management of the chain. Using `PermanentRestriction` is discouraged for now, however.
diff --git a/docs/chain_management/chain_genesis.md b/docs/chain_management/chain_genesis.md
deleted file mode 100644
index 8431868a97..0000000000
--- a/docs/chain_management/chain_genesis.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Creating new chains with BridgeHub
-
-[back to readme](../README.md)
-
-The main contract of the whole hyperchain ecosystem is called _`BridgeHub`_. It contains:
-
-- the registry from chainId to the CTM responsible for that chainId
-- the base token for each chainId.
-- the whitelist of CTMs
-- the whitelist of tokens allowed to be `baseTokens` of chains.
-- the whitelist of settlement layers
-- etc
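-
-A rough sketch of these registries as Solidity mappings (illustrative names only, not the actual storage layout):
-
-```solidity
-contract BridgeHubRegistrySketch {
-    // chainId => the CTM responsible for that chain
-    mapping(uint256 => address) public chainTypeManager;
-    // chainId => the chain's base token, as an asset id
-    mapping(uint256 => bytes32) public baseTokenAssetId;
-    // whitelist of CTMs
-    mapping(address => bool) public ctmIsRegistered;
-    // whitelist of settlement layers
-    mapping(uint256 => bool) public whitelistedSettlementLayers;
-}
-```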
-
-BridgeHub is responsible for creating new STs. It is also the main point of entry for L1→L2 transactions for all the STs. Users won't be able to interact with STs directly; all actions must be done through the BridgeHub, which will ensure that the fees have been paid and will route the call to the corresponding ST. One of the reasons it was done this way was to have a unified interface for all STs that will ever be included in the hyperchain ecosystem.
-
-To create a chain, the `BridgeHub.createNewChain` function needs to be called:
-
-```solidity
-/// @notice register a new chain. New chains can only be registered on the Bridgehub deployed on L1. Later they can be moved to any other layer.
-/// @notice for Eth the baseToken address is 1
-/// @param _chainId the chainId of the chain
-/// @param _chainTypeManager the chain type manager address
-/// @param _baseTokenAssetId the base token asset id of the chain
-/// @param _salt the salt for the chainId, currently not used
-/// @param _admin the admin of the chain
-/// @param _initData the fixed initialization data for the chain
-/// @param _factoryDeps the factory dependencies for the chain's deployment
-function createNewChain(
- uint256 _chainId,
- address _chainTypeManager,
- bytes32 _baseTokenAssetId,
- // solhint-disable-next-line no-unused-vars
- uint256 _salt,
- address _admin,
- bytes calldata _initData,
- bytes[] calldata _factoryDeps
-) external
-```
-
-BridgeHub will check that the CTM as well as the base token are whitelisted and route the call to the ChainTypeManager.
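-
-A hedged usage sketch of the call above (all parameter values and variable names here are illustrative):
-
-```solidity
-// Illustrative registration of a new chain by the BridgeHub admin.
-bridgehub.createNewChain({
-    _chainId: 4242,
-    _chainTypeManager: ctmAddress,
-    _baseTokenAssetId: ethAssetId,
-    _salt: 0,
-    _admin: chainAdminAddress,
-    _initData: diamondCutData,
-    _factoryDeps: factoryDeps
-});
-```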
-
-
-
-### Creation of a chain in the current release
-
-In the future, ST creation will be permissionless, and a securely random `chainId` will be generated for each chain to be registered. However, generating a 32-byte chainId is not feasible with the current SDK expectations on EVM, so for now the chainId is of type `uint48` and has to be chosen by the admin of the `BridgeHub`. Also, for the current release we want to avoid chains being able to choose their own initialization parameters, to prevent possible malicious input.
-
-For this reason, there will be an entity called `admin`, which is basically a hot key managed by us, and it will be used to deploy new STs.
-
-So the flow for users deploying their own ST will be the following:
-
-1. Users tell us that they want to deploy an ST with a certain governance, CTM (we’ll likely allow only one for now), and baseToken.
-2. Our server will generate a chainId not reserved by any other major chain, and the `admin` will call `BridgeHub.createNewChain`. This will call `CTM.createNewChain`, which deploys the instance of the rollup as well as initializes the first transaction there — the system upgrade transaction needed to set the chainId on L2.
-
-After that, the ST is ready to be used. Note that the admin of the newly created chain (the organization that will manage this chain from now on) will have to conduct certain configurations before the chain [can be used securely](../chain_management/admin_role.md).
-
-## Built-in contracts and their initialization
-
-Each single ZK chain has a set of the following contracts that, while not belonging to kernel space, are built-in and provide important functionality:
-
-- Bridgehub (the source code is identical to the L1 one). The role of the bridgehub is to facilitate cross-chain transactions. It contains a mapping from chainId to the address of the diamond proxy of the chain. It is really used only on L1 and Gateway, i.e. layers that can serve as a settlement layer.
-- L2AssetRouter. The new iteration of the SharedBridge.
-- L2NativeTokenVault. The Native token vault on L2.
-- MessageRoot (the source code is identical to the L1 one). Similar to bridgehub, it facilitates cross-chain communication, but is practically unused on all chains except for L1/GW.
-
-To reuse as much code as possible from L1 and also to allow easier initialization, most of these contracts are not initialized as just part of the genesis storage root. Instead, the data for their initialization is part of the original diamond cut for the chain. In the same initial upgrade transaction in which the chainId is initialized, these contracts are force-deployed and initialized as well. An important part is played by the new `L2GenesisUpgrade` contract, which is pre-deployed as a user-space contract but is delegate-called by the `ComplexUpgrader` system contract (which already exists as part of genesis and existed before this upgrade).
-
-## Additional limitations for the current version
-
-In the current version creating new chains will not be permissionless. That is needed to ensure that no malicious input can be provided there.
-
-Also, since in the current release there will be little benefit from shared liquidity, i.e. there will be no direct ZKChain<>ZKChain transfers supported, as a measure of additional security we’ll also keep track of the balances for each individual ZKChain and will not allow it to withdraw more than it has deposited into the system.
diff --git a/docs/chain_management/chain_type_manager.md b/docs/chain_management/chain_type_manager.md
deleted file mode 100644
index 26a01f6048..0000000000
--- a/docs/chain_management/chain_type_manager.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Chain Type Manager (CTM)
-
-[back to readme](../README.md)
-
-> If someone is already familiar with the [previous version](https://github.com/code-423n4/2024-03-zksync) of the ZKsync architecture, this contract was previously known as the "State Transition Manager (STM)".
-
-Currently bridging between different zk rollups requires the funds to pass through L1. This is slow & expensive.
-
-The vision of a seamless internet of value requires transfers of value to be _both_ seamless and trustless. This means that, for instance, different STs need to share the same L1 liquidity, i.e. a transfer of funds should never touch L1 in the process. However, this requires some sort of trust between the two chains. If a malicious (or broken) rollup becomes a part of the shared liquidity pool, it can steal all the funds.
-
-However, can two instances of the same ZK rollup trust each other? The answer is yes, because no new addition of a rollup introduces new trust assumptions. Assuming there are no bugs in the circuits, the system will work as intended.
-
-How can two rollups know that they are two different instances of the same system? We can create a factory of such contracts (and so we would know that each new rollup created by this instance is a correct one). But just creating correct contracts is not enough. Ethereum changes, and new bugs may be found in the original system, so an instance that does not keep itself up-to-date with the upgrades may be exploited via some bug from the past, jeopardizing the entire system. Just deploying is not enough. We need to constantly make sure that all STs are up to date and maintain whatever other invariants are needed for these STs to trust each other.
-
-Let’s define a _Chain Type Manager_ (CTM) as a contract that is responsible for the following:
-
-- It serves as a factory to deploy STs (new ZK chains)
-- It is responsible for ensuring that all the STs deployed by it are up-to-date.
-
-Note that this means that STs have a “weaker” governance. I.e. the governance can only do a very limited number of things, such as setting the validator. The ST admin can not set its own upgrades; it can only “execute” an upgrade that has already been prepared by the CTM.
-
-In the long-term vision, ST deployment will be permissionless; however, the CTM will always remain the main point of trust and will have to be explicitly whitelisted by the decentralized governance of the entire ecosystem before its STs can get access to the shared liquidity.
-
-## Configurability in the current release
-
-For now, only one CTM will be supported — the one that deploys instances of ZKsync Era, possibly using other DA layers. To read more about different DA layers, check out [this document](../settlement_contracts/data_availability/custom_da.md).
-
-The exact process of deploying & registering a ST can be [read here](./chain_genesis.md). Overall, each ST in the current release will have the following parameters:
-
-| ST parameter | Updatability | Comment |
-| --------------------------------------- | -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| chainId | Permanent | Permanent identifier of the ST. Due to wallet support reasons, for now the chainId has to be small (48 bits). This is one of the reasons why for now we’ll deploy STs manually, to prevent STs from having the same chainId as some other popular chain. In the future it will be trustlessly assigned as a random 32-byte value. |
-| baseTokenAssetId | Permanent | Each ST can have its own custom base token (i.e. the token used for paying fees). It is set once during creation and can never be changed. Note that we refer to an "asset id" here instead of an L1 address. To read more about what an assetId is and how it works, check out the document for the [asset router](../bridging/asset_router/overview.md) |
-| chainTypeManager | Permanent | The CTM that deployed the ST. In principle, it could be possible to migrate between CTMs (assuming both CTMs support that). However, in practice it may be very hard and as of now such functionality is not supported. |
-| admin | By admin of ST | The admin of the ST. It has some limited powers to govern the chain. To read more about which powers are available to a chain admin and which precautions should be taken, check [out this document](../chain_management/admin_role.md) |
-| validatorTimelock | CTM | For now, we want all the chains to use the same 21h timelock period before their batches are finalized. Only CTM can update the address that can submit state transitions to the rollup (that is, the validatorTimelock). |
-| validatorTimelock.validator | By admin of ST | The admin of ST can choose who can submit new batches to the ValidatorTimelock. |
-| priorityTx FeeParams | By admin of ST | The admin of a ZK chain can amend the priority transaction fee params. |
-| transactionFilterer | By admin of ST | A chain may put an additional filter on the incoming L1->L2 transactions. This may be needed by a permissioned chain (e.g. a Validium bank-like corporate chain). |
-| DA validation / permanent rollup status | By admin of ST | A chain can decide which DA layer to use. You can check out more about [safe DA management here](./admin_role.md) |
-| executing upgrades | By admin of ST | While exclusively CTM governance can set the content of the upgrade, STs will typically be able to choose a suitable time for them to actually execute it. In the current release, STs will have to follow our upgrades. |
-| settlement layer | By admin of ST | The admin of the chain can enact migrations to other settlement layers. |
-
-> Note that if we take a look at the access control for the corresponding functions inside the [AdminFacet](../../l1-contracts/contracts/state-transition/chain-deps/facets/Admin.sol), one may see that a lot of the methods above that are marked as "By admin of ST" could in theory be amended by the ChainTypeManager. However, this sort of action requires approval from the decentralized governance. Also, in case of an urgent high-risk situation, the decentralized governance might force-upgrade the contract via the CTM.
-
-## Upgradability in the current release
-
-In the current release, each chain will be an instance of ZKsync Era, and so the upgrade process of each individual ST will be similar to that of ZKsync Era.
-
-1. Firstly, the governance of the CTM will publish the server (including the sequencer, prover, etc.) that supports the new version. This is done offchain. Enough time should be given to the various zkStack devs to update their version.
-2. The governance of the CTM will publish the upgrade onchain by automatically executing the following three transactions:
-
-  - `setChainCreationParams` ⇒ to ensure that new chains will be created with the new version
-  - `setValidatorTimelock` (if needed) ⇒ to ensure that new chains will use the new validator timelock right away
- - `setNewVersionUpgrade` ⇒ to save the upgrade information that each ST will need to follow to conduct the upgrade on their side.
-
-3. After that, each ChainAdmin can upgrade to the new version at a time suitable for them.
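-
-For illustration, the three governance calls could look roughly like this; the parameter types are simplified to `bytes` placeholders, and the real structs live in `ChainTypeManager.sol` and `Diamond.sol`:
-
-```solidity
-// Sketch only: parameter types are simplified placeholders; the real
-// functions take structs defined in ChainTypeManager.sol and Diamond.sol.
-interface IChainTypeManagerGovernance {
-    // ensures that newly created chains start with the new version
-    function setChainCreationParams(bytes calldata chainCreationParams) external;
-
-    // points newly created chains at the new validator timelock right away
-    function setValidatorTimelock(address newValidatorTimelock) external;
-
-    // saves the upgrade information that each ST will later execute
-    function setNewVersionUpgrade(
-        bytes calldata upgradeDiamondCut,
-        uint256 oldProtocolVersion,
-        uint256 oldProtocolVersionDeadline,
-        uint256 newProtocolVersion
-    ) external;
-}
-```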
-
-> Note that while the governance does try to give chains the maximal possible time to upgrade, the governance will typically put restrictions (a.k.a. deadlines) on the time by which a chain has to be upgraded. If the deadline passes, the chain cannot commit new batches until the upgrade is executed.
-
-### Emergency upgrade
-
-In case of an emergency, the [security council](https://blog.zknation.io/introducing-zk-nation/) has the ability to freeze the ecosystem and conduct an emergency upgrade.
-
-In case we are aware that some of the committed batches on an ST are dangerous to execute, the CTM can call `revertBatches` on that ST. For faster reaction, the admin of the ChainTypeManager has the ability to do so without waiting for governance approval, which may take a lot of time. This action does not lead to funds being lost, so it is considered suitable for the partially trusted role of the admin of the ChainTypeManager.
-
-### Issues & caveats
-
-- If a ZK chain skips an upgrade (i.e. it has version `X`, it did not upgrade to `X + 1`, and now the latest protocol version is `X + 2`), there is no built-in way to upgrade. Such a chain will require manual intervention from us to upgrade.
-- The approach of calling `revertBatches` for malicious STs is not scalable (O(N) in the number of chains). The situation is very rare, so it is fine in the short term, but not in the long run.
diff --git a/docs/chain_management/img/create_new_chain.png b/docs/chain_management/img/create_new_chain.png
deleted file mode 100644
index b71fecfebf..0000000000
Binary files a/docs/chain_management/img/create_new_chain.png and /dev/null differ
diff --git a/docs/chain_management/upgrade_process.md b/docs/chain_management/upgrade_process.md
deleted file mode 100644
index ca649c03e6..0000000000
--- a/docs/chain_management/upgrade_process.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Upgrade process document
-
-[back to readme](../README.md)
-
-## Intro
-
-This document assumes that you have an understanding of [the structure](../settlement_contracts/zkchain_basics.md) of individual chains’ L1 contracts.
-
-Upgrading the ecosystem of ZKChains is a complicated process. ZKsync is a complex ecosystem with many chains and contracts, and each upgrade is unique, but there are some steps that repeat for most upgrades: mostly how we interact with the CTM and the diamond facets, the L1→L2 upgrade, and how we update the verification keys.
-
-Each upgrade consists of two parameters:
-
-- Facet cuts - change of the internal implementation of the diamond proxy
-- Diamond Initialization - delegate call to the specified address with specified data
-
-The second parameter is very powerful and flexible enough to move the majority of the upgrade logic there.
-
-## Upgrade structure
-
-Upgrade information is composed in the form of a [DiamondCutData](../../l1-contracts/contracts/state-transition/libraries/Diamond.sol#L75) struct. During the upgrade, the chain’s DiamondProxy will delegateCall the `initAddress` with the provided `initCalldata`, while the facets of the `DiamondProxy` will be changed according to the `facetCuts`. This scheme is very powerful, as it allows changing anything in the contract. However, we typically have a very specific set of changes that we need to make. To facilitate these, two contracts have been created (the shape of the struct is sketched after this list):
-
-1. [BaseZkSyncUpgrade](../../l1-contracts/contracts/upgrades/BaseZkSyncUpgrade.sol) - Generic template with functions that can be useful for upgrades
-2. [DefaultUpgrade](../../l1-contracts/contracts/upgrades/DefaultUpgrade.sol) - Default implementation of the `BaseZkSyncUpgrade`; the contract that is most often planned to be used as the diamond initialization target when doing upgrades.
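-
-For reference, a minimal sketch of the shape of these structs, assuming the layout in `Diamond.sol` (the source file remains authoritative):
-
-```solidity
-// A sketch of the structs from Diamond.sol; see the source for the
-// authoritative definitions.
-enum Action {
-    Add, // add new selectors to the diamond
-    Replace, // replace the facet implementing existing selectors
-    Remove // remove selectors from the diamond
-}
-
-struct FacetCut {
-    address facet; // facet implementation address
-    Action action; // what to do with the selectors below
-    bool isFreezable; // whether these selectors can be frozen
-    bytes4[] selectors; // function selectors affected by this cut
-}
-
-struct DiamondCutData {
-    FacetCut[] facetCuts; // changes to the DiamondProxy's facets
-    address initAddress; // address delegate-called during the upgrade
-    bytes initCalldata; // calldata for that delegate call
-}
-```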
-
-> Note, that the Gateway upgrade will be more complex than the usual ones and so a similar, but separate [process](<../upgrade_history/gateway_preparation_upgrade/upgrade_process_(no_gateway_chain).md>) will be used for it. It will also use its own custom implementation of the `BaseZkSyncUpgrade`: [GatewayUpgrade](../../l1-contracts/contracts/upgrades/GatewayUpgrade.sol).
-
-### Protocol version
-
-For tracking upgrade versions on different networks (private testnet, public testnet, mainnet) we use the protocol version, which is basically just a number denoting the deployed version. The protocol version is different from the Diamond Cut `proposalId`, since the `proposalId` only shows how many upgrade proposals were proposed/executed, but says nothing about the content of the upgrades, while the protocol version is needed to understand which version is deployed.
-
-In the [BaseZkSyncUpgrade](../../l1-contracts/contracts/upgrades/BaseZkSyncUpgrade.sol) & [DefaultUpgrade](../../l1-contracts/contracts/upgrades/DefaultUpgrade.sol) we allow the protocol version to be increased arbitrarily while upgrading the system, but never decreased. We do that because we may want to skip some protocol versions, for example if a bug was found in one of them (even though it was already deployed on another network).
-
-## Protocol upgrade transaction
-
-During an upgrade, we typically need to update not only the L1 contracts, but also the L2 ones. This is achieved by creating an upgrade transaction. More details on how these are processed inside the system can be found [here](../settlement_contracts/priority_queue/processing_of_l1-l2_txs.md).
-
-## Whitelisting and executing upgrade
-
-Note that, due to how powerful upgrades are, if we allowed any [chain admin](../chain_management/admin_role.md) to enact any upgrade it wants, malicious chains could potentially break some of the ecosystem invariants. Because of that, any upgrade must first be whitelisted by the decentralized governance by calling the `setNewVersionUpgrade` function of the [ChainTypeManager](../../l1-contracts/contracts/state-transition/ChainTypeManager.sol).
-
-In order to execute the upgrade, the chain admin would call the `upgradeChainFromVersion` function from the [Admin](../../l1-contracts/contracts/state-transition/chain-deps/facets/Admin.sol) facet.
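-
-A minimal sketch of this two-step flow from the chain admin’s side, with the diamond cut reduced to a `bytes` placeholder (the real function takes the `DiamondCutData` struct sketched above):
-
-```solidity
-// Sketch only: `diamondCut` stands in for the Diamond.DiamondCutData value
-// that governance whitelisted via setNewVersionUpgrade.
-interface IAdminFacet {
-    function upgradeChainFromVersion(uint256 oldProtocolVersion, bytes calldata diamondCut) external;
-}
-
-contract ChainAdminExample {
-    IAdminFacet public immutable zkChain; // the chain's DiamondProxy
-
-    constructor(IAdminFacet _zkChain) {
-        zkChain = _zkChain;
-    }
-
-    // executes an upgrade previously whitelisted by decentralized governance
-    function executeUpgrade(uint256 oldProtocolVersion, bytes calldata diamondCut) external {
-        zkChain.upgradeChainFromVersion(oldProtocolVersion, diamondCut);
-    }
-}
-```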
diff --git a/docs/consensus/consensus-registry.md b/docs/consensus/consensus-registry.md
deleted file mode 100644
index d2a37e9946..0000000000
--- a/docs/consensus/consensus-registry.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Consensus Registry
-
-As part of the decentralization effort we plan to introduce two new roles into the system:
-
-- Validators, which are nodes that are meant to receive L2 blocks from the sequencer, execute them locally and broadcast their signature over the block if it’s valid. If the sequencer receives enough of these signatures, the L2 block is considered finalized. Nodes that are following the chain or syncing will only accept blocks that are finalized.
-- Attesters, which basically do the same thing as validators but for L1 batches instead of L2 blocks. Just like with L2 blocks, if an L1 batch is accompanied by enough attester signatures then it’s considered finalized. How these signatures are used is different from validator signatures, though. These signatures are meant to be submitted to L1 together with the L1 batch when it’s committed. And the L1 contracts are meant to only accept L1 batches that come with enough signatures from the correct attesters. But that functionality is not implemented yet.
-
-The `ConsensusRegistry` contract implements a small part of that entire flow. In order to verify the L2 block and L1 batch signatures we need to know the public keys of the validators and attesters that signed them. And we also want that set of validators and attesters to be dynamic. The `ConsensusRegistry` contract is going to store and manage the current set of validators and attesters and expose methods to add, remove and modify validators/attesters.
-
-## Users
-
-There are basically three types of users that will call this contract:
-
-- The contract owner. This is generally meant to be some multisig or governance contract. In this case, it will initially be the Matter Labs multisig and later it will be changed to ZKsync’s governance. It can call any method in the contract and basically can modify the validator and attester sets at will. There are methods that are exclusive to it though: namely adding nodes, removing nodes, changing validator/attester weights (the relative voting power of each validator/attester) and committing validator/attester committees (which creates a snapshot of the current nodes and updates the validator/attester committees). See the interface sketch after this list.
-- The node owners. The entities that will run the validators and attesters. They change over time as nodes get added/removed. They can only activate/deactivate their nodes (deactivated nodes do not get selected to be part of committees) and change their validator/attester public keys.
-- The sequencer plus anyone running an external node. They need to verify L1 batch and L2 block signatures so they need to get the attester and validator committees for each batch. There are getter methods for this.
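-
-For illustration, the registry’s interface could look roughly like this; the names and signatures below are assumptions, not the contract’s actual ABI:
-
-```solidity
-// Illustrative only: names and signatures are assumptions.
-interface IConsensusRegistry {
-    // owner-only management of the node set
-    function addNode(address nodeOwner, uint256 validatorWeight, bytes calldata validatorPubKey, uint256 attesterWeight, bytes calldata attesterPubKey) external;
-    function removeNode(address nodeOwner) external;
-    function changeValidatorWeight(address nodeOwner, uint256 newWeight) external;
-    function changeAttesterWeight(address nodeOwner, uint256 newWeight) external;
-    function commitValidatorCommittee() external; // snapshots the current validator set
-    function commitAttesterCommittee() external; // snapshots the current attester set
-
-    // node-owner operations
-    function activateNode() external;
-    function deactivateNode() external;
-
-    // getters used by the sequencer and external nodes
-    function getValidatorCommittee() external view returns (bytes[] memory pubKeys, uint256[] memory weights);
-    function getAttesterCommittee() external view returns (bytes[] memory pubKeys, uint256[] memory weights);
-}
-```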
-
-## Future integration
-
-Currently the `ConsensusRegistry` contract is not directly connected to the protocol. The plan is to read the validator committee from the consensus registry contract on each new batch and, with upcoming protocol upgrades, start verifying the validator signatures onchain for each submitted batch.
diff --git a/docs/evm_emulation/differences_from_cancun_evm.md b/docs/evm_emulation/differences_from_cancun_evm.md
deleted file mode 100644
index 4c1563b67f..0000000000
--- a/docs/evm_emulation/differences_from_cancun_evm.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# General differences from Ethereum
-
-This feature allows EVM emulation on top of EraVM. This mode is not fully equivalent to native EVM and is limited by the EraVM design. This page describes the known differences between emulation and EVM (Cancun).
-
-## Gas behavior differences
-
-- EVM emulation is executed on top of EraVM and all transactions start in the EraVM environment. So the gas and gaslimit values signed in the transaction correspond to native **EraVM** gas, not **EVM** gas. For that reason, presigned/keyless transactions created for Ethereum may not be compatible with ZK Chains. For detailed info: [Gas emulation](./evm_gas_emulation.md).
-- Our “Intrinsic gas costs” are different from EVM.
-- We do not implement the EVM gas refund logic. Because users pay for EraVM gas and EVM gas is virtual, we use the EraVM gas refund logic instead of the EVM one. This does not affect contract behavior (refunds happen at the end of the transaction).
-- We do not charge EVM gas for tx calldata.
-- Access lists are not supported (EIP-2930).
-
-## Limitations
-
-- `DELEGATECALL` between EVM and native EraVM contracts will be reverted.
-- Calls to empty addresses in kernel space (address < 2^16) will fail.
-- `GASLIMIT` opcode returns the same fixed constant as EraVM and should not be used.
-
-Unsupported opcodes:
-
-- `CALLCODE`
-- `SELFDESTRUCT`
-- `BLOBHASH`
-- `BLOBBASEFEE`
-
-## Precompiles
-
-EVM emulation supports the same precompiles that are supported by EraVM.
-
-## Technical differences
-
-_These changes are unlikely to have an impact on the developer experience._
-
-Differences:
-
-- `JUMPDEST` analysis is simplified. It is not checked that a `JUMPDEST` is not part of a `PUSH` instruction.
-- The call stack depth limit is not enforced explicitly; it is implicitly implemented by the 63/64 gas rule.
-- Account storage is not destroyed during contract deployment.
-- If the deployer’s nonce overflows during contract deployment, all passed gas will be consumed, whereas the EVM refunds all passed gas to the caller frame.
-- Nonces are limited by the size of `u128`, not `u64`.
-- During creation of an EVM contract by an EOA or an EraVM contract, the emulator does not charge the additional `2` gas for every 32-byte chunk of `initcode` as specified in [EIP-3860](https://eips.ethereum.org/EIPS/eip-3860) (since the `JUMPDEST` analysis performed is simplified). This cost **is** charged if the contract is created by another EVM contract (to keep gas equivalence).
-- The code deposit cost is charged from the constructor frame, not the caller frame. This will be changed in the future.
-- Only those accounts that are accessed from an EVM environment become warm (including origin, sender, coinbase, precompiles). Anything that happens outside the EVM does not affect the warm/cold status of the accounts for EVM.
diff --git a/docs/evm_emulation/evm_gas_emulation.md b/docs/evm_emulation/evm_gas_emulation.md
deleted file mode 100644
index 8adb99b114..0000000000
--- a/docs/evm_emulation/evm_gas_emulation.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# EVM gas emulation overview
-
-The gas model can be one of the trickiest topics regarding EVM emulation, since two interacting execution environments are supported at the same time. This document is intended to explain in more detail how the emulation of EVM gas works.
-
-## EVM emulation execution flow
-
-From the point of view of EraVM, the EVM emulator is (almost) a regular EraVM contract. This contract is predefined and can be changed only during protocol upgrades. When an EVM contract is called, EraVM invokes the emulator code. The emulator loads the corresponding EVM bytecode and interprets it.
-
-Thus, the emulation is executed on top of EraVM, uses EraVM opcodes and, most importantly, **pays for EraVM operations in native gas (ergs)**.
-
-⚠️ To avoid confusion, native EraVM gas will be referred to as "ergs" further in the text.
-
-This means that **ergs** are used to pay for all operations on EraVM (and therefore on ZK Chains). Gas and gaslimit values signed in user's transactions are specified in **ergs**. All refunds are also made by the virtual machine in ergs.
-
-## EVM gas model emulation
-
-Full EVM gas equivalence is necessary to provide an EVM-compatible environment and meet the assumptions made in contracts. However, the cost of EraVM operations differs, and EVM emulation requires multiple EraVM operations for each EVM operation. As a result, we emulate the EVM gas model.
-
-As mentioned in the previous section, the EVM environment is virtual: it operates on top of EraVM and incurs costs for execution in EraVM ergs. The EVM emulator, however, has its own internal gas accounting system that corresponds to the EVM gas model. Each EVM opcode consumes a predefined amount of gas, according to the EVM specification. And if there is insufficient gas for an operation, the frame will be reverted.
-
-⚠️ EVM gas is used only for compatibility in the EVM emulation mode. The underlying virtual machine is not aware of EVM gas and uses native ergs.
-
-## Ergs to gas conversion rate for gas limit
-
-EVM gas units are not equivalent to EraVM ergs, and neither the EVM nor EraVM provides a mechanism for an EOA or contract to specify how to convert gas from one unit to another. For practical use in the emulator, a predefined constant is used to convert the gas limit from one unit to the other.
-
-⚠️ Current EraVM -> EVM gas limit conversion ratio is 5:1.
-
-This means that if a user makes a call to an EVM contract with 100,000 ergs, the EVM emulator will start execution with 20,000 EVM gas. At the same time, the underlying EraVM will have all 100,000 ergs available for use and will refund any unused ergs at the end of the transaction.
-
-In the other direction, when calling an EraVM contract from the EVM context, if 20,000 gas is provided, the emulator will try to pass 100,000 ergs (or less, if the amount of ergs left is not enough).
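-
-A minimal sketch of this conversion with the 5:1 ratio; the helper names are hypothetical:
-
-```solidity
-// Illustrative only: helper names are hypothetical; the ratio is the
-// documented 5:1 EraVM -> EVM gas limit conversion.
-uint256 constant GAS_DIVISOR = 5;
-
-// entering the EVM environment: 100,000 ergs -> 20,000 EVM gas
-function ergsToEvmGas(uint256 ergs) pure returns (uint256) {
-    return ergs / GAS_DIVISOR;
-}
-
-// leaving the EVM environment: 20,000 EVM gas -> up to 100,000 ergs,
-// capped by the ergs actually left, as described above
-function evmGasToErgs(uint256 evmGas, uint256 ergsLeft) pure returns (uint256) {
-    uint256 ergs = evmGas * GAS_DIVISOR;
-    return ergs < ergsLeft ? ergs : ergsLeft;
-}
-```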
-
-## Out-of-ergs situation
-
-Because EVM gas is virtual and not equivalent to EraVM ergs, situations are possible in which the emulator has enough EVM gas to continue execution but encounters an `out-of-ergs` panic from EraVM. In this case we simply propagate a special internal kind of panic and revert the **whole** chain of EVM frames.
-
-⚠️ EVM emulation can only be executed completely, up to returning from the EVM context (incl. EVM reverts), or completely rolled back.
-
-This fact should be taken into account by smart contract developers:
-
-❗**It is highly discouraged to use try-catch patterns with unknown contracts in EVM environment**❗
-
-Technically, this problem is similar to classic gas-griefing issues in EVM contracts, if not handled appropriately.
diff --git a/docs/evm_emulation/evm_predeploys_list.md b/docs/evm_emulation/evm_predeploys_list.md
deleted file mode 100644
index 1b57c75e1c..0000000000
--- a/docs/evm_emulation/evm_predeploys_list.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# EVM predeploys
-
-Some important EVM contracts can be deployed to predefined addresses if EVM emulation is enabled on the chain. This can be done using the [DeployEvmPredeploys.s.sol](../../l1-contracts/deploy-scripts/evm-predeploys/DeployEvmPredeploys.s.sol) script.
-
-List of contracts:
-
-- [Create2 proxy](https://github.com/Arachnid/deterministic-deployment-proxy)
- `0x4e59b44847b379578588920cA78FbF26c0B4956C`
-- [Create2 deployer](https://github.com/pcaversaccio/create2deployer)
- `0x13b0D85CcB8bf860b6b79AF3029fCA081AE9beF2`
-- [ERC2470 singleton factory](https://eips.ethereum.org/EIPS/eip-2470)
- `0xce0042B868300000d44A59004Da54A005ffdcf9f`
-- [Safe Singleton Factory](https://github.com/safe-global/safe-singleton-factory/blob/main/source/deterministic-deployment-proxy.yul)
- `0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7`
-- [Multicall3](https://github.com/mds1/multicall/tree/main)
- `0xcA11bde05977b3631167028862bE2a173976CA11`
-- [Create2 proxy](https://github.com/Zoltu/deterministic-deployment-proxy)
- `0x7A0D94F55792C434d74a40883C6ed8545E406D12`
diff --git a/docs/evm_emulation/technical_overview.md b/docs/evm_emulation/technical_overview.md
deleted file mode 100644
index 077a826533..0000000000
--- a/docs/evm_emulation/technical_overview.md
+++ /dev/null
@@ -1,249 +0,0 @@
-# EVM emulation technical overview
-
-## Intro
-
-The EraVM differs from the EVM in several ways: it has a distinct set of instructions and a different overall design. As a result, while Solidity and Vyper can be compiled to bytecode for the EraVM, there are several peculiarities and behavioral differences that may require developers to modify their smart contracts. These differences can negatively impact the developer experience, especially due to inconsistencies in the tooling.
-
-As an option to unblock developers that depend on EVM bytecode support, EVM execution mode is added as an emulation on top of EraVM:
-
-- The core of the system is EraVM: the EVM emulator is only complementary functionality. It is still possible to deploy native EraVM contracts, and the main unit of gas is the EraVM one (it will be called **_ergs_** further on, so as not to confuse it with EVM gas).
-- However, it is also possible to deploy and execute EVM contracts. These contracts’ bytecode hash is marked with a special marker. Whenever an EVM bytecode is invoked, instead of decommitting and executing EraVM bytecode, the virtual machine uses the fixed and predefined EvmEmulator bytecode. Internally, this emulator loads, interprets and executes the EVM bytecode in accordance with the EVM rules (as closely as possible).
-
-The main invariant that emulation aims to preserve:
-
-❗ Behavior inside the emulated EVM environment (i.e. not only within one contract, but during any sort of EVM <> EVM contract interaction chain) is the same as it would’ve been on a widespread EVM implementation (Geth, REVM, etc.).
-
-Note that behavior during EraVM <> EVM or EVM <> EraVM contract interactions can differ, but it is expected that such interactions do not break major security invariants.
-
-❗ The EVM environment is agnostic about EraVM.
-
-⚠️ This document is meant to cover the high-level design of the EVM emulation as well as some of its rough edges; it is not meant to be a full specification. A proper understanding of the EVM emulation requires reading the corresponding comments in the contracts code.
-
-## Prerequisites
-
-This document requires that the reader is aware of ZKsync Era’s internal design.
-General docs: [Developer reference](https://docs.zksync.io/build/developer-reference)
-
-The emulator related changes actively use some features of EraVM, including:
-
-- **Kernel space and user space addresses.** Everything with address < 2^16 (kernel space) is considered a system contract with some special capabilities.
-- **Difference between system and non-system calls**. A call to a contract in kernel space can be explicitly marked as a system call. Some methods in system contracts allow only system calls, to prevent accidental use of various system functions.
-- **Fat pointers**. EraVM has a different memory model, actively using pointers instead of copying memory. So calldata/returndata is usually not a copy, but an immutable pointer to some region of memory. These pointers can be manipulated (but not the memory behind them), for example by shrinking the pointed-to memory area or clearing the pointer completely.
-- **Verbatim instructions and other compiler-specific instructions.** In some cases we want to use EraVM-specific functionality in Solidity or Yul. These languages do not have suitable instructions, and for this reason we use `verbatim` instructions (**note**: this has a different meaning compared to usual Yul) or pseudocalls to predefined system addresses. In both cases, zksolc replaces these instructions with the corresponding functionality.
-
-The target version of EVM is **Cancun.**
-
-[EVM emulator: differences from EVM (Cancun)](./differences_from_cancun_evm.md)
-
-## Internals of deploying EVM contract
-
-EVM contracts can be deployed with a transaction without the `to` field (as in Ethereum). In this case the `data` field will be interpreted as the init code for the constructor.
-
-Additionally, EVM contracts can be deployed from the EraVM environment using a **system** call to the following functions in the ContractDeployer system contract:
-
-- `createEVM` - `CREATE`-like behavior
-- `create2EVM` - `CREATE2`-like behavior
-
-They use the same address derivation schemes as corresponding EVM opcodes. To derive the deployed contract’s address for EOAs we use the main nonce for this operation, while for contracts we use their deployment nonce. You can read more about the two types of nonces in the [NonceHolder system contract’s documentation](https://docs.zksync.io/zksync-protocol/contracts/system-contracts#nonceholder).
-
-❗ Note, that these two functions are not used (and can’t be used!) from the EVM environment. EVM smart contracts can’t perform **system** calls. EVM opcodes `CREATE` and `CREATE2` should be used instead.
-
-EvmEmulator internally uses `precreateEvmAccountFromEmulator` and `createEvmFromEmulator` functions to guarantee the same creation flow as in EVM.
-
-Once the address for the deployed EVM contract is derived, the next steps are:
-
-1. Set the dummy bytecode hash, marked as an EVM one, on the derived address. This will ensure that the `EvmEmulator` bytecode will be invoked during the constructor call.
-2. Execute the constructor branch of the `EvmEmulator`. After obtaining the initCode from the calldata, the interpreter interprets it as normal EVM bytecode.
-3. The `EvmEmulator` constructor returns to the ContractDeployer the EVM gas left and the final bytecode that should be deployed.
-
-After creation, the `deployedBytecode` is saved and any call to the created contract will use the EVM emulator, which loads and executes the corresponding EVM bytecode.
-
-### New type of versioned code hash
-
-In EraVM we use a special _versioned hash_ format for interacting with bytecodes: it is a 32-byte value with the following structure (indexed in bytes):
-
-- hash[0] — version (0x01 for EraVM)
-- hash[1] — whether the contract is being constructed
-- hash[2..3] — big endian length of the bytecode in **32-byte words**. This number must be odd.
-- hash[4..31] — the last 28 bytes of the sha256 hash of the bytecode.
-
-For each native EraVM contract with address `A` this version hash is stored under the key `A` inside the `AccountCodeStorage` system contract. Whenever a call is performed to an address `A`, the EraVM will read its versioned bytecode hash, check its correct versioned format and “unpack” the bytecode that corresponds to that versioned hash inside the code memory page.
-
-Also, that versioned hash value is used as the `extcodehash` value in the **EraVM** context (but not in the EVM context!).
-
-In order to support EVM bytecode, we introduced a new hash version for EVM contracts (0x02):
-
-- hash[0] — version (0x02 for EVM)
-- hash[1] — whether the contract is being constructed
-- hash[2..3] — big endian length of the raw EVM bytecode in **bytes**.
-- hash[4..31] — the last 28 bytes of the sha256 hash of the padded EVM bytecode.
-
-Versioned hash value is **not** used as `extcodehash` value of EVM contracts in EVM context.
-
-Besides the version, these formats differ in that the first version stores the length of the bytecode in _32-byte words_, while the second one does it in _bytes_. This was done mostly for historical reasons during the development of this version. However, it perfectly fits the maximal allowed bytecode size for an EVM contract (i.e. 24,576 bytes easily fits into 2^16 - 1 bytes).
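-
-A minimal sketch of assembling a version-2 hash from the byte layout above; the helper name is hypothetical and the bytecode is assumed to already be padded as described:
-
-```solidity
-// Sketch only: the function name is hypothetical; the byte layout follows
-// the description above.
-function evmVersionedHash(uint256 rawLength, bytes memory paddedBytecode) pure returns (bytes32) {
-    require(rawLength <= type(uint16).max, "length must fit into hash[2..3]");
-    // hash[4..31]: the last 28 bytes of sha256 over the padded EVM bytecode
-    uint256 truncatedSha = uint256(sha256(paddedBytecode)) & ((1 << 224) - 1);
-    return bytes32(
-        (uint256(0x02) << 248) // hash[0]: version 0x02 (EVM); hash[1] stays 0x00 (not under construction)
-            | (rawLength << 224) // hash[2..3]: raw bytecode length in bytes, big endian
-            | truncatedSha
-    );
-}
-```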
-
-EraVM now has the following logic whenever a contract with address `A` is called:
-
-The first 3 steps are the same as pre-EVM emulator:
-
-1. It reads the versioned hash of the bytecode under the key `A` inside the AccountCodeStorage system contract.
-2. If it is empty, it invokes DefaultAccount.
-3. If it has version 1, it treats it as a native EraVM contract, and uses the preimage for this versioned hash as EraVM bytecode for the contract.
-
-But now we have a new path: if it has version 2, it is interpreted as an EVM contract, and the bytecode of `EvmEmulator` is used as the EraVM bytecode of the contract.
-
-Note, that while for native contracts the knowledge of the preimage for the versioned hash is crucial (since it is _the_ bytecode that will be used), for bytecodes with version 2 it does not really matter since `EvmEmulator` is the bytecode that will be used. This is why when we are constructing a contract, we put a temporary “dummy” versioned hash, the job of which is only to ensure that the `EvmEmulator` is the one executing its logic.
-
-### Support for null `to` address in contract creation transactions
-
-Previously we did not allow `null` to be a valid `to` address for type 0-2 transactions. This was not needed, since EVM-like CREATE was not supported. With the EVM emulator, we allow CREATE operations from EOAs, similar to Ethereum. Such transactions have a non-zero `_transaction.reserved[1]` field.
-
-## EraVM → EVM calls internal overview
-
-Whenever an EraVM contract calls an EVM one (note that the first contract ever executed is the bootloader, which is written in EraVM code, so execution of EVM contracts always starts by being called from an EraVM one), the following steps happen:
-
-1. Once the EraVM sees that the callee has a versioned hash with version 2, it uses `EvmEmulator` as the “EraVM” bytecode for this contract’s frame. Note that the `this` address is the address of the contract itself, i.e. all the `sstore` operations will be performed against the storage of the interpreted contract.
-2. The only public function provided by `EvmEmulator` is fallback, so the execution will start there.
-3. We calculate the amount of EVM gas that is given to this frame. Note that since each EVM opcode has to be emulated by EraVM, each unit of EVM gas necessarily costs several units of EraVM gas (ergs). We currently use a linear ratio to calculate the amount of received EVM gas.
-4. Then, the emulation starts as usual and the `returndata` is returned via standard EraVM means.
-
-## EVM → EraVM calls internal overview
-
-Whenever an EVM contract calls an EraVM one, it is treated as a simple native EraVM call. The EVM gas passed from the EVM environment is converted into ergs using a fixed ratio.
-
-## Ensuring the same behavior within EVM context
-
-In the previous sections we’ve discussed how to deploy an EVM contract and how to call one. Next we will discuss some special aspects of the emulation.
-
-❗ Remember that our EVM is emulated, and for each EVM frame there is a corresponding EraVM one.
-
-## Static calls
-
-The `isStatic` context is turned off by EraVM for calls to EVM contracts. The emulator learns whether the context is static or non-static from call flags. This is needed because the `EvmGasManager` needs to perform writes to transient storage to emulate cold/warm access mechanics for accounts and storage slots.
-
-Thus, it is entirely up to the emulator to ensure that no other state changes occur in the static execution mode.
-
-## Context parameters
-
-While most of the context parameters (`this`/`msg.sender`/`msg.value`, etc) are used as is, some have to be explicitly maintained by the `EvmEmulator`:
-
-- `gas` (as EVM gas rules need to be applied)
-- `is_static` (more on it in the section about static context)
-- The entire set of warm/cold slots and addresses is maintained by the `EvmGasManager` system contract.
-
-## Managing storage
-
-For managing storage we reuse the same primitives that EraVM provides:
-
-- `SSTORE`/`SLOAD`, `TSTORE`/`TLOAD` are done using the same opcodes as EraVM
-- Whenever there is a need to revert a frame, we reuse the same `REVERT` opcode as used by EraVM.
-
-## `EvmGasManager`
-
-### Managing hot/cold storage slots & accounts
-
-Whenever an account is accessed for the first time on the EVM, users are charged extra for the I/O costs it incurs. Also, additional costs are incurred for state growth (when a slot goes from 0 to some other value).
-
-To support the same behavior as on EVM, we maintain a registry of whether a slot or account is warm or cold. This registry is located in the `EvmGasManager` system contract. In order to ensure that this registry gets erased after each transaction, transient storage is used for it.
-
-By default, the following addresses are considered hot:
-
-- Called EVM contract
-- msg.sender (caller of the EVM contract)
-- tx.origin
-- coinbase
-- precompiles
-
-### EVM frames
-
-As already mentioned, for warm/cold storage/account management to work, the `isStatic` context has to be turned off. However, we need to somehow preserve the knowledge that the context is static. Also, we need to ensure that EVM contracts get the exact correct gas amount when called by an EVM contract.
-
-Thus, whenever an EVM call happens, the following functions in `EvmGasManager` are called by the emulator (see the interface sketch after this list):
-
-1. (Parent frame, before the call) `pushEVMFrame`. It writes info about the new EVM frame to transient storage, including the amount of EVM `gas` in that frame as well as the `isStatic` context flag.
-2. (Child frame, start of the execution) Whenever an EVM contract is called, it calls `EvmGasManager.consumeEvmFrame`. If a new EVM frame was pushed previously, it will return info about that frame and mark it as consumed. The returned info contains the EVM gas and the `isStatic` flag.
-3. (Parent frame, right after the call, in case of revert) When an EVM call finishes with a revert, `EvmGasManager.resetEVMFrame` is used. Note that if the parent frame itself reverts for some reason before/during `EvmGasManager.resetEVMFrame`, the frame will be “popped” implicitly, since all the changes of the parent frame will be reverted, including `EvmGasManager.pushEVMFrame`.
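-
-For illustration, these three calls could be summarized by the following interface; the names come from the description above, while the exact signatures are assumptions:
-
-```solidity
-// Signatures are assumptions; only the function names are taken from the
-// description above. The state behind these calls lives in transient
-// storage, so it is cleared at the end of each transaction.
-interface IEvmGasManager {
-    // parent frame, before the call: record gas + staticness of the next EVM frame
-    function pushEVMFrame(uint256 passGas, bool isStatic) external;
-
-    // child frame, start of execution: consume the pending frame info, if any
-    function consumeEvmFrame() external returns (uint256 passGas, bool isStatic);
-
-    // parent frame, right after a reverted call: discard the pushed frame
-    function resetEVMFrame() external;
-}
-```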
-
-## Calldata & returndata internals
-
-### EVM <> EraVM
-
-EraVM contracts are expected to know nothing about the EVM emulation, so when an EraVM contract calls an EVM one, it provides just normal calldata “as-is”. The same happens when an EVM contract finishes a call whose caller was an EraVM contract: the returndata is returned “as-is” without any further modifications.
-
-### EVM <> EVM
-
-Whenever an EVM contract calls another one, it passes the calldata “as-is”. The “correct” EVM `gas` and the `isStatic` flags are passed inside the `EvmGasManager` as mentioned above.
-
-However, returning data is more complicated. Whenever an EVM contract needs to return the data and the caller was another EVM contract, the tuple of `(gas_left, true_returndata)` is returned.
-
-## Out-of-ergs situation
-
-The actual cost of executing instructions differs from the fixed ratio between EVM gas and EraVM gas. For this reason, situations are possible in which the emulator has enough EVM gas to continue execution but encounters an `out-of-ergs` panic. In this case we simply propagate a special internal kind of panic and revert the **whole** chain of EVM frames.
-
-❗ EVM emulation can only be executed completely, up to returning from the EVM context (incl. EVM reverts), or completely rolled back.
-
-This does not happen when calling native contracts. For this reason, the possible `out-of-ergs` panic must be taken into account when calling native contracts from the EVM environment. Technically, this problem is one version of the classic gas-griefing issues in EVM contracts, if not handled appropriately.
-
-## Caveats about EVM contract deployment internals
-
-We want to support the same logic for EVM contract deployment as Ethereum. This, for instance, includes that a failed deploy should still increase the nonce of a contract, which is not the case on EraVM: [differences-with-ethereum nonces](https://docs.zksync.io/build/developer-reference/differences-with-ethereum.html#nonces).
-
-For this, EVM<>EVM deployments have two big steps: precheck and deploy.
-
-The flow of EVM <> EVM deployment is the following:
-
-1. The EVM emulator checks that the memory offsets are valid. It also checks that the size of the initCode is valid and charges dynamic gas costs. In case of an error, the creator frame is reverted, consuming all remaining EVM gas.
-2. The EVM emulator checks that the `value` is valid. Otherwise the creation is considered failed and all passed EVM gas is refunded. The caller frame is not reverted.
-3. The EVM emulator calls `ContractDeployer.precreateEvmAccountFromEmulator`. This call derives the new contract address and performs a collision check. If it fails, the creation is considered failed and all passed EVM gas is consumed. The caller frame is not reverted.
-4. The EVM emulator creates a new `EVMFrame` and calls `ContractDeployer.createEvmFromEmulator`. This call should fail only if the constructor of the new contract reverted.
-5. The `ContractDeployer` sets a dummy version 2 hash on the deployed address to ensure that when it is called, the `EvmEmulator` will be activated.
-6. The `EvmEmulator`'s constructor is invoked. The initCode is passed inside the calldata.
-
-Then, there are three cases:
-
-1. If the execution of the initCode is successful, the `EvmEmulator` will pad the new bytecode to the correct form for the code oracle and return it to the ContractDeployer system contract together with the remaining EVM gas. This will publish the deployed contract’s bytecode and set the correct code hash for the account.
-2. If the execution is not successful, the standard `revert` will be used and propagated. If the deployer is an EVM contract, the pair of `(gas_left, returndata)` will be returned.
-3. If the execution failed due to `out-of-ergs`, an aborting panic will be propagated.
-
-## Notes on the architecture of the EvmEmulator
-
-### Memory layout
-
-The EVM emulator has the following areas in memory:
-
-- The first 23 slots are used as scratch space. They are dedicated to temporary data, e.g. when the emulator needs to make a call to some other contract.
-- The next 9 slots are used to cache fixed context values.
-- The next slot is used to store the size of the last returndata.
-- The next 1024 slots are dedicated to the EVM stack.
-- The next slot is used to store the bytecode size of the executing contract.
-- The next `MAX_POSSIBLE_ACTIVE_BYTECODE` bytes are used to store the active bytecode. This value is 24576 for deployed contracts and 24576\*2 for initCode in the constructor.
-- The next slot is empty. It is needed to simplify `PUSH N` opcodes.
-- The next slot is used to store the size of the memory used in the EVM.
-- All the memory after that is used as the EVM memory, i.e. all `mload`/`mstore` operations that are done by the user are performed at that location.
-
-For returndata we keep the returned fat pointer active and copy from it if needed.
-
-### Managing returndata & calldata
-
-For calldata we just reuse the standard calldata available inside the interpreter.
-
-However, for returndata the situation is a bit harder. Let’s imagine the following scenario:
-
-- The EVM contract performs a call to some contract. Now the returndata is `R`.
-- Then the user tries to read a storage slot. This means that the interpreter will have to call `EvmGasManager.warmSlot`. Now, as far as the EraVM code of the interpreter is concerned, the returndata is `R2`.
-
-If a user inside the EVM asks for the returndata, we need to provide `R` and not `R2`. That’s why we use the active pointer feature of the zksolc compiler to store the “correct” EVM returndata, allowing us to ensure that the returndata will always behave the same as on EVM.
-
-### Aborting the whole EVM execution frames chain
-
-If any EVM emulator frame returns with an unexpected amount of returndata (e.g. `revert(0, 0)` or `out-of-ergs`), it will be treated by the caller EVM frame as an abort signal and propagated. Thus, the whole chain of EVM calls reverts.
-
-## Limitations
-
-### EraVM <> EVM delegatecalls
-
-Calls between EraVM and EVM contracts are supported, but delegatecalls are not, since they would compromise features that should be allowed only for the interpreter itself (e.g. warming slots). Our VM limitations don’t allow us to enable cross-VM delegatecalls at this step.
-
-### Other differences from EVM and limitations
-
-More detailed info about differences from EVM and limitations: [Differences from EVM (Cancun)](./differences_from_cancun_evm.md)
diff --git a/docs/gateway/chain_migration.md b/docs/gateway/chain_migration.md
deleted file mode 100644
index 2fe254e7bb..0000000000
--- a/docs/gateway/chain_migration.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Chain migration
-
-[back to readme](../README.md)
-
-## Ecosystem Setup
-
-Chain migration reuses lots of logic from standard custom asset bridging, which is enabled by the AssetRouter. The easiest way to imagine it is that ZKChains are NFTs being migrated from one chain to another. Just like in the case of an NFT contract, a CTM is assumed to have an `assetId := keccak256(abi.encode(L1_CHAIN_ID, address(ctmDeployer), bytes32(uint256(uint160(_ctmAddress)))))`. I.e. these are all assets with ADT = the ctmDeployer contract on L1.
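-
-A minimal sketch of this derivation (the free-function name is hypothetical):
-
-```solidity
-// Illustrative only: the function name is hypothetical; the derivation
-// follows the assetId formula quoted above.
-function ctmAssetId(uint256 l1ChainId, address ctmDeployer, address ctmAddress) pure returns (bytes32) {
-    return keccak256(abi.encode(l1ChainId, ctmDeployer, bytes32(uint256(uint160(ctmAddress)))));
-}
-```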
-
-CTMDeployer is a very lightweight contract used to facilitate chain migration: it serves as a formal asset deployment tracker for CTMs. It has two purposes:
-
-- Assign bridgehub as the asset handler for the “asset” of the CTM on the supported settlement layer.
-
-Currently, it can only be done by the owner of the CTMDeployer, but in the future, this method can become either permissionless or callable by the CTM owner.
-
-- Tell bridgehub which address on the L2 should serve as the L2 representation of the CTM on L1. Currently, it can only be done by the owner of the CTMDeployer, but in the future, this method can become callable by the CTM owner.
-
-## The process of migration L1→GW
-
-## Chain migration GW → L1
-
-Chain migration from L1 to GW works similarly to how NFT bridging from L1 to another chain would work. Migrating back will use the same mechanism as for withdrawals.
-
-Note that for L2→L1 withdrawals via bridges we never provide a recovery mechanism. The same is the case with GW → L1 messaging, i.e. it is assumed that such migrations are always executable on L1.
-
-You can read more about how the safety is ensured in the “Migration invariants & protocol upgradability” section.
-
-## Chain migration GW_1 → GW_2
-
-In this release we plan to support only a single whitelisted settlement layer. If more are supported in the future, the current plan is to migrate the chain first to L1 and then to the other GW.
-
-## Chain migration invariants & protocol upgradability
-
-Note that once a chain migrates to a new settlement layer, there are two deployments of contracts for the same ZKChain. What’s more, the L1 part will always be used.
-
-There is a need to ensure that chains work smoothly during migration and that there are as few issues as possible during the protocol upgrade.
-
-You can read more about it [here](./gateway_protocol_upgrades.md).
diff --git a/docs/gateway/gateway_da.md b/docs/gateway/gateway_da.md
deleted file mode 100644
index 67bd2081bd..0000000000
--- a/docs/gateway/gateway_da.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Custom DA layers
-
-[back to readme](../README.md)
-
-## Prerequisites
-
-To better understand this document, it is best to have a grasp of how the [custom DA handling protocol](../settlement_contracts/data_availability/custom_da.md) works.
-
-## Rollup DA
-
-If a chain intends to be a rollup, it needs to relay its pubdata to L1 via the L1Messenger system contract. Thus, the L1DAValidator will typically be different from the one that the chain used on Ethereum.
-
-For chains that use our [standard pubdata format](../settlement_contracts/data_availability/rollup_da.md), we provide the [following relayed L1 DA validator](../../l1-contracts/contracts/state-transition/data-availability/RelayedSLDAValidator.sol) that relays all the data to L1.
-
-### Security notes for Gateway-based rollups
-
-An important note is that when reading the state diffs from L1, the observer will read messages that come from the L2DAValidator. To be more precise, the contract used is the `RelayedSLDAValidator`, which reads the data and publishes it to L1 by calling the L1Messenger contract.
-
-If anyone could call this contract, the observer on L1 could get wrong pubdata for this particular batch. To prevent this, the contract ensures that only the chain can call it.
-
-## Validium DA
-
-Validiums can reuse [the same DA validator](../../l1-contracts/contracts/state-transition/data-availability/ValidiumL1DAValidator.sol) that they used on L1. Note that it has to be redeployed on the Gateway.
-
-## Custom DA
-
-As already stated before, the DA validation is done on the settlement layer. Thus, if you use a custom DA layer, you need to ensure that its verification can be done on Gateway.
diff --git a/docs/gateway/gateway_protocol_upgrades.md b/docs/gateway/gateway_protocol_upgrades.md
deleted file mode 100644
index 11558d5334..0000000000
--- a/docs/gateway/gateway_protocol_upgrades.md
+++ /dev/null
@@ -1,159 +0,0 @@
-# Gateway protocol versioning and upgradability
-
-[back to readme](../README.md)
-
-One of the hardest parts about the gateway (GW) is how we synchronize the interaction between L1 and L2 parts that can potentially have different versions of contracts. This synchronization should be compatible with any future CTM that may be present on the gateway.
-
-Here we describe various scenarios of standard/emergency upgrades and how those will play out in the gateway setup.
-
-## General idea
-
-We do not enshrine any particular approach on the protocol level of the GW. The following is the approach used by the standard Era CTM, which also manages GW.
-
-Upgrades will be split into two parts:
-
-- “Inactive chain upgrades” ⇒ intended to update contract code only and not touch the state, or touch it very little. The main motivation is to be able to upgrade the L1 contracts without e.g. adding new upgrade transactions.
-- “Active chain upgrades” ⇒ the same as the ones we have today: a full-on upgrade that also updates the bootloader, inserts the system upgrade transaction, and so on.
-
-In other words:
-
-`active upgrade = inactive upgrade + bootloader changes + setting upgrade tx`
-
-The other difference is that while “active chain upgrades” usually always need to be forced in order to ensure that the contracts/protocol are up to date, “inactive chain upgrades” typically involve changes in the facets’ bytecode and will only be needed before a migration is complete, to ensure that the contracts are compatible.
-
-To reduce the boilerplate / make management of the upgrades easier, the abstraction will basically be implemented at the upgrade implementation level, which will check `if block.chainid == s.settlementLayer { ... perform active upgrade stuff } else { ... perform inactive upgrade stuff, typically nothing }` (see the sketch below).
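-
-A minimal sketch of that dispatch, assuming the settlement layer is stored as a chain id; the branch bodies are placeholders:
-
-```solidity
-// Sketch only: storage layout and names are assumptions.
-contract UpgradeImplementationSketch {
-    uint256 internal settlementLayerChainId; // chain id of the chain's settlement layer
-
-    function performUpgrade() external {
-        if (block.chainid == settlementLayerChainId) {
-            // active chain upgrade: facet cuts + bootloader changes + setting the upgrade tx
-        } else {
-            // inactive chain upgrade: contract code updates only, typically nothing else
-        }
-    }
-}
-```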
-
-## Lifecycle of a chain
-
-While the chain settles on L1 only, it will just do “active chain upgrades”. Everything is the same as now.
-
-When a chain starts its migration to a new settlement layer (regardless of whether it is gateway or not):
-
-1. It will be checked that the protocolVersion is the latest one in the CTM on the current settlement layer (just so we do not have to bother with backwards compatibility).
-2. The `s.settlementLayer` will be set for the chain. Now the chain becomes inactive and it can only take “inactive” upgrades.
-3. When the migration finishes, it will be double checked that the `protocolVersion` is the same as the one in the target chain’s CTM.
-
-   If the chain has already been deployed there, it will be checked that the `protocolVersion` of the deployed contracts there is the same as the one of the chain that is being moved.
-
-4. All “inactive” instances of a chain can receive “inactive” upgrades of a chain. The single “active” instance of a chain (the one on the settlement layer) can receive only active upgrades.
-
-In case step (3) fails (or the chain’s migration fails for any other reason), a migration recovery process should be available (the `L1AssetRouter.bridgeRecoverFailedTransfer` method). Recovering a chain is basically just changing its `settlementLayerId` back to the current block.chainid. It will be double checked that the chain has not conducted any inactive upgrades in the meantime, i.e. that the `protocolVersion` of the chain is the same as when the chain started its migration.
-
-In case we ever do need to do more than simply resetting `settlementLayerId` for a chain in case of a failed migration, it is the responsibility of the CTM to ensure that the logic is compatible for all the versions.
-
-## Stuck state for L1→GW migration
-
-The only unrecoverable state that a chain can achieve is:
-
-- It tries to migrate and it fails.
-- While the migration was happening, an “inactive” upgrade was conducted.
-- Now recovery of the chain is not possible as the “protocol version” check will fail.
-
-This is considered to be a rare event, but it is strongly recommended that the migration transaction be finalized before conducting any inactive upgrades.
-
-In the future, we could actively force it, i.e. require confirmation of a successful migration before any upgrades on a migrated chain could be done.
-
-## Safety guards for GW→L1 migrations
-
-Migrations from GW to L1 do not have any chain recovery mechanism, i.e. if step (3) from above fails for some reason (e.g. a new protocol version id is available on the CTM), then the chain is basically lost.
-
-### Protocol version safety guards
-
-- Before a new protocol version is released, all the migrations will be paused, i.e. the `pauseMigration` function will be called by the owner of the Bridgehub on both L1 and L2. This should prevent migrations from happening in the risky period when the new version is published to the CTM.
-- Assuming that no new protocol versions are published to the CTM during the migration, the migration must succeed, since the CTMs on GW and on L1 will have the same version and so the checks will work fine.
-- The finalization of any chain withdrawal is permissionless and so in the short term the team could help finalize the outstanding migrations to prevent funds loss.
-
-> The approach above is somewhat tricky, as it requires careful coordination with the governance to ensure that at the time when the new protocol version is published to the CTM, there are no outstanding migrations.
-
-In the future we will either make it more robust or add a recovery mechanism for failed GW → L1 migrations.
-
-### Batch number safety guards
-
-Another potential issue that may prevent a chain from being migratable to L1 is a very high number of outstanding batches, which can cause the migration to cost too much gas and not be executable on L1.
-
-To prevent that, chains that migrate from GW are required to have all their batches executed. This ensures that the number of batch hashes to be copied to L1 is constant (i.e. just the 1 last batch).
-
-## Motivation
-
-The job of this proposal is to reduce to a minimum the number of potential states in which the system can find itself. The cases that are removed:
-
-- The need to be able to migrate to a chain that has contracts from a different protocol version.
-- The need for the CTM to support migration of chains with different versions. Only `bridgeRecoverFailedTransfer` has to be supported for all the versions, but its logic is very trivial.
-
-The reason why we cannot conduct “active” upgrades everywhere on both the L1 and L2 parts is that for the settlement layer we need to write the new protocol upgrade tx while NOT allowing it to be overridden. On the other hand, for the “inactive” chain contracts, we need to ignore the upgrade transaction.
-
-## Forcing “active chain upgrade”
-
-For L1-based chains, forcing those upgrades will work exactly the same as before: during `commitBatches` the CTM simply double checks that the protocol version is up to date.
-
-The admin of the CTM (GW) will call the CTM (GW) with the new protocol version’s data. This transaction should not fail, but even if it does, we should be able to just re-try. For now, the GW operator will be trusted not to be malicious.
-
-### Case of malicious Gateway operator
-
-In the future, a malicious Gateway operator may try to exploit a known vulnerability in a CTM.
-
-The recommended approach here is the following:
-
-- The admin of the CTM (GW) will first commit to the upgrade (for example, preemptively freeze all the chains).
-- Once the chains are frozen, it can use L1→L2 communication to pass the new protocol upgrade to CTM.
-
-> The approach above basically states that “if the operator is censoring, we’ll be able to use the standard censorship-resistance mechanism of a chain to bypass it”. The freezing part is just a way to not reveal the issue to the world before all chains are safe from exploits.
-
-It is the responsibility of the CTM to ensure that all the supported settlement layers are trusted enough to uphold the above protocol. Using any sort of Validium will be especially discouraged, since in theory those could get frozen forever without any true censorship resistance mechanism.
-
-Also, note that the freezing period should be long enough to ensure that censorship resistance mechanisms have enough time to kick in.
-
-## Forcing “inactive chain upgrade”
-
-Okay, imagine that there is a bug in an L1 implementation of a chain that has migrated to Gateway. This is a rather rare event, as most of the action happens on the settlement layer, which is also where the ability to steal most of the funds lies.
-
-In case such a situation does happen, however, the current plan is to:
-
-- Freeze the ecosystem.
-- Ask the admins nicely to upgrade their implementation. Decentralized token governance can also force-upgrade those via CTM on L1.
-
-## Backwards compatibility
-
-With this proposal the protocol version on the L1 part and on the settlement layer part may be completely out of sync. This means that all new mailboxes need to support both accepting and sending all versions of relayed (L1 → GW → L2) transactions.
-
-For now, this is considered okay. In the future, some stricter versioning could apply.
-
-## Notes
-
-### Regular chain migration moving chain X from Y to Z (where Y is Z’s settlement layer)
-
-So assume that Y is L1, and Z is ‘Gateway’.
-
-Definition:
-
-`ZKChain(X)` - ‘a.k.a. ST / DiamondProxy’ for a given chain id X
-
-`CTM(X)` - the State transition manager for a given chain id X
-
-1. check that `ZKChain(X).protocol_version == CTM(X).protocol_version` on chain Y.
-2. Start ‘burn’ process (on chain Y)
- 1. collect `‘payload’` from `ZKChain(X)` and `CTM(X)` and `protocol_version` on chain Y.
- 2. set `ZKChain(X).settlement_layer` to `address(ZKChain(Z))` on chain Y.
-3. Start ‘mint’ process (on chain Z)
- 1. check that `CTM(X).protocol_version == payload.protocol_version`
- 2. Create new `ZKChain(X)` on chain Z and register in the local bridgehub & CTM.
- 3. pass `payload` to `ZKChain(X)` and `CTM(X)` to initialize the state.
-4. If ‘mint’ fails - recover (on chain Y)
- 1. check that `ZKChain(X).protocol_version == payload.protocol_version`
- 1. important, here we’re actually looking at the ‘HYPERCHAIN’ protocol version and not necessarily CTM protocol version.
- 2. set `ZKChain(X).settlement_layer` to `0` on chain Y.
- 3. pass `payload` to `IZKChain(X)` and `CTM(X)` to initialize the state.
-
-### ‘Reverse’ chain migration - moving chain X ‘back’ from Z to Y
-
-(moving back from gateway to L1).
-
-1. Same as above (check protocol version - but on chain Z)
-2. Same as above (start burn process - but on chain Z)
-3. Same as above (start ‘mint’ - but on chain Y)
- 1. same as above
-   2. creation is probably not needed - as the contract was already there in the first place.
- 3. same as above - but the state is ‘re-initialized’
-4. Same as above - but on chain ‘Z’
diff --git a/docs/gateway/img/ctm_gw_registration.png b/docs/gateway/img/ctm_gw_registration.png
deleted file mode 100644
index 03dc68518f..0000000000
Binary files a/docs/gateway/img/ctm_gw_registration.png and /dev/null differ
diff --git a/docs/gateway/img/gateway_architecture.png b/docs/gateway/img/gateway_architecture.png
deleted file mode 100644
index a9302ec7ea..0000000000
Binary files a/docs/gateway/img/gateway_architecture.png and /dev/null differ
diff --git a/docs/gateway/img/l1_gw_l2_messaging.png b/docs/gateway/img/l1_gw_l2_messaging.png
deleted file mode 100644
index a2b4db9357..0000000000
Binary files a/docs/gateway/img/l1_gw_l2_messaging.png and /dev/null differ
diff --git a/docs/gateway/img/l1_l2_messaging.png b/docs/gateway/img/l1_l2_messaging.png
deleted file mode 100644
index 886c131747..0000000000
Binary files a/docs/gateway/img/l1_l2_messaging.png and /dev/null differ
diff --git a/docs/gateway/img/migrate_from_gw.png b/docs/gateway/img/migrate_from_gw.png
deleted file mode 100644
index b30576043d..0000000000
Binary files a/docs/gateway/img/migrate_from_gw.png and /dev/null differ
diff --git a/docs/gateway/img/migrate_to_gw.png b/docs/gateway/img/migrate_to_gw.png
deleted file mode 100644
index 6615791e14..0000000000
Binary files a/docs/gateway/img/migrate_to_gw.png and /dev/null differ
diff --git a/docs/gateway/img/nested_l2_gw_l1_messaging.png b/docs/gateway/img/nested_l2_gw_l1_messaging.png
deleted file mode 100644
index c858453968..0000000000
Binary files a/docs/gateway/img/nested_l2_gw_l1_messaging.png and /dev/null differ
diff --git a/docs/gateway/img/nested_l2_gw_l1_messaging_2.png b/docs/gateway/img/nested_l2_gw_l1_messaging_2.png
deleted file mode 100644
index 5426e8e01d..0000000000
Binary files a/docs/gateway/img/nested_l2_gw_l1_messaging_2.png and /dev/null differ
diff --git a/docs/gateway/img/new_bridging_contracts.png b/docs/gateway/img/new_bridging_contracts.png
deleted file mode 100644
index f3f6802cdf..0000000000
Binary files a/docs/gateway/img/new_bridging_contracts.png and /dev/null differ
diff --git a/docs/gateway/l2_gw_l1_messaging.md b/docs/gateway/l2_gw_l1_messaging.md
deleted file mode 100644
index f2c8777fbf..0000000000
--- a/docs/gateway/l2_gw_l1_messaging.md
+++ /dev/null
@@ -1,184 +0,0 @@
-# Nested L2→GW→L1 messages tree design for Gateway
-
-[back to readme](../README.md)
-
-## Introduction
-
-This document assumes that the reader is already aware of what SyncLayer (now called Gateway) is. To reduce the interactions with L1, on SyncLayer we will gather all the batch roots from all the chains into a tree with the following structure:
-
-> Note: “Multiple arrows” from `AggregatedRoot` to `chainIdRoot` and from each `chainIdRoot` to `batchRoot` are for illustrative purposes only.
->
-> In fact, the tree above will be a binary merkle tree, where the `AggregatedRoot` will be the root of the tree of `chainIdRoot`s, while each `chainIdRoot` is the merkle root of a binary merkle tree of `batchRoot`s.
-
-For each chain that settles on L1, the root will have the following format:
-
-`settledMessageRoot = keccak256(LocalRoot, AggregatedRoot)`
-
-where `LocalRoot` is the root of the tree of messages that come from the chain itself, while the `AggregatedRoot` is the root of aggregated messages from all of the chains that settle on top of the chain.
-
-In reality, `AggregatedRoot` will have a meaningful value only on SyncLayer and L1. On other chains it will be a root of an empty tree.
-
-The structure has the following recursive format:
-
-- `settledMessageRoot = keccak256(LocalRoot, AggregatedRoot)`
-- `LocalRoot` — the root of the binary merkle tree over `UserLog[]`. (the same as the one we have now). It only contains messages from the current batch.
-- `AggregatedRoot` — the root of the binary merkle tree over `ChainIdLeaf[]`.
-- `ChainIdLeaf = keccak256(CHAIN_ID_LEAF_PADDING, ChainIdRoot, chain_id)`
-- `CHAIN_ID_LEAF_PADDING` — it is a constant padding, needed to ensure that the preimage of the ChainIdLeaf is larger than 64 bytes and so it can not be an internal node.
-- `chain_id` — the chain id of the chain the batches of which are aggregated.
-- `ChainIdRoot` = the root of the binary merkle tree over `BatchRootLeaf[]`.
-- `BatchRootLeaf = keccak256(BATCH_LEAF_HASH_PADDING, SettledRootOfBatch, batch_number)`.
-
-In other words, we get a recursive structure where the leaves, i.e. chains that do not aggregate any other chains, have an empty `AggregatedRoot`.
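-
-To make the rules above concrete, here is a hedged Solidity sketch of the three computations. The padding constant values, the exact encoding and the function names are assumptions for illustration only; the production contracts define the real ones.
-
-```solidity
-bytes32 constant BATCH_LEAF_HASH_PADDING = bytes32(uint256(1)); // assumed value
-bytes32 constant CHAIN_ID_LEAF_PADDING = bytes32(uint256(2)); // assumed value
-
-// The paddings make every leaf preimage longer than 64 bytes, so a leaf can
-// never collide with an internal node, whose preimage is exactly 64 bytes.
-function batchRootLeaf(bytes32 settledRootOfBatch, uint256 batchNumber) pure returns (bytes32) {
-    return keccak256(abi.encodePacked(BATCH_LEAF_HASH_PADDING, settledRootOfBatch, batchNumber));
-}
-
-function chainIdLeaf(bytes32 chainIdRoot, uint256 chainId) pure returns (bytes32) {
-    return keccak256(abi.encodePacked(CHAIN_ID_LEAF_PADDING, chainIdRoot, chainId));
-}
-
-function settledMessageRoot(bytes32 localRoot, bytes32 aggregatedRoot) pure returns (bytes32) {
-    return keccak256(abi.encodePacked(localRoot, aggregatedRoot));
-}
-```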
-
-## Appending new batch root leaves
-
-At the execution stage of every batch, the ZK Chain calls the `MessageRoot.addChainBatchRoot` function, providing the `SettledRootOfBatch` for the chain. The `BatchRootLeaf` is then calculated and appended to the incremental merkle tree from which the `ChainIdRoot` & `ChainIdLeaf` are calculated, and the latter is updated in the merkle tree of `ChainIdLeaf`s.
-
-At the end of the batch, the L1Messenger system contract queries the MessageRoot contract for the total aggregated root, i.e. the root of all `ChainIdLeaf`s, calculates the settled root `settledMessageRoot = keccak256(LocalRoot, AggregatedRoot)`, and propagates it to L1.
-
-Only the final aggregated root will be stored on L1.
-
-## Proving that a message belongs to a chain on top of SyncLayer
-
-The process will consist of two steps:
-
-1. Construct the needed `SettledRootOfBatch` for the current chain’s batch.
-2. Prove that it belonged to the gateway.
-
-If the depth of recursion is larger than 1, then step (1) could be repeated multiple times.
-
-Right now, for proving logs, the following interface is exposed on the L1 side:
-
-```solidity
-struct L2Log {
- uint8 l2ShardId;
- bool isService;
- uint16 txNumberInBatch;
- address sender;
- bytes32 key;
- bytes32 value;
-}
-
-function proveL2LogInclusion(
- uint256 _chainId,
- uint256 _batchNumber,
- uint256 _index,
- L2Log calldata _log,
- bytes32[] calldata _proof
-) external view override returns (bool) {
- address hyperchain = getHyperchain(_chainId);
- return IZkSyncHyperchain(hyperchain).proveL2LogInclusion(_batchNumber, _index, _log, _proof);
-}
-```
-
-Let’s define a new function:
-
-```solidity
-function proveL2LeafInclusion(
- uint256 _chainId,
- uint256 _batchNumber,
- uint256 _mask,
- bytes32 _leaf,
- bytes32[] calldata _proof
-) external view override returns (bool) {}
-```
-
-This function will prove that a certain 32-byte leaf belongs to the tree. Note that since the `leaf` is 32 bytes long, the function could also “work” for internal nodes. To prevent this, it is the caller’s responsibility to ensure that the preimage of the leaf is larger than 64 bytes and/or to use other ways of ensuring that the function is called securely.
-
-This function will be internally used by the existing `_proveL2LogInclusion` function to prove that a certain log existed.
-
-We want to avoid breaking changes to SDKs, so we will modify `zks_getL2ToL1LogProof` to return the data in the following format (its results are directly passed into the `proveL2LeafInclusion` method, so the returned value must be supported by the contract):
-
-The first `bytes32` corresponds to the metadata of the proof. The zero-th byte tells the version of the metadata and must be equal to `SUPPORTED_PROOF_METADATA_VERSION` (a constant of `0x01`).
-
-The first byte contains the number of 32-byte words that are needed to restore the current `BatchRootLeaf`, i.e. `logLeafProofLen` (it is called this way as it proves that a leaf belongs to the `SettledRootOfBatch`). The second byte contains `batchLeafProofLen`, the length of the merkle path proving that the `BatchRootLeaf` belongs to the `ChainIdRoot`.
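-
-A hedged sketch of parsing such a metadata word, following the byte layout above (the helper name and exact checks are illustrative):
-
-```solidity
-function parseProofMetadata(
-    bytes32[] calldata _proof
-) internal pure returns (uint256 logLeafProofLen, uint256 batchLeafProofLen) {
-    bytes32 metadata = _proof[0];
-    // Byte 0: metadata version, must equal SUPPORTED_PROOF_METADATA_VERSION (0x01).
-    require(uint8(metadata[0]) == 0x01, "Unsupported proof metadata version");
-    // Byte 1: the number of words consumed to reconstruct the SettledRootOfBatch.
-    logLeafProofLen = uint256(uint8(metadata[1]));
-    // Byte 2: the length of the merkle path from the BatchRootLeaf to the ChainIdRoot.
-    batchLeafProofLen = uint256(uint8(metadata[2]));
-}
-```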
-
-Then, the following happens:
-
-- We consume the `logLeafProofLen` items to produce the `SettledRootOfBatch`. The last word is typically the aggregated root for the chain.
-
-If the settlement layer of the chain is the chain itself, we can just end here by verifying that the provided batch message root is correct.
-
-If the chain is not a settlement layer of itself, we then need to calculate:
-
-- `BatchRootLeaf = keccak256(BATCH_LEAF_HASH_PADDING, SettledRootOfBatch, batch_number)`.
-- Consume one element from the `_proof` array to get the mask for the merkle path of the batch leaf in the chain id tree.
-- Consume `batchLeafProofLen` elements to construct the `ChainIdRoot`.
-- After that, we calculate `chainIdLeaf = keccak256(CHAIN_ID_LEAF_PADDING, chainIdRoot, chainId)`.
-
-Now, we have the _supposed_ `chainIdRoot` for the chain inside its settlement layer. The only thing left to prove is that this root belonged to some batch of the settlement layer.
-
-Then, the following happens:
-
-- One element from the `_proof` array is consumed; it is expected to contain the batch number of the settlement layer in which this chain id root was present, as well as the mask for the reconstruction of the merkle tree.
-- The next element from the `_proof` array contains the address of the settlement layer; this address will be checked.
-
-Now, we can call the function to verify that the batch belonged to the settlement layer:
-
-```solidity
- IMailbox(settlementLayerAddress).proveL2LeafInclusion(
- settlementLayerBatchNumber,
- settlementLayerBatchRootMask,
- chainIdLeaf,
- // Basically pass the rest of the `_proof` array
- extractSliceUntilEnd(_proof, ptr)
- );
-```
-
-The remaining slice of the `_proof` array is expected to have the same structure as before:
-
-- Metadata
-- Merkle path to construct the `SettledRootOfBatch`
-- In case there are any more aggregation layers, additional info to prove that the batch belonged to it.
-
-## Trust assumptions
-
-Note, that the `_proof` field is provided by potentially malicious users. The only part that really checks anything with L1 state is the final step of the aggregated proof verification, i.e. that the settled root of batch of the final top layer was present on L1.
-
-This puts a lot of trust in the settlement layers, as a malicious settlement layer can steal funds from chains and “verify” incorrect L2→GW→L1 logs if it wants to. It is the job of the chain itself to ensure that it trusts the aggregation layer. It is also the job of the STM to ensure that the settlement layers used by its chains are secure.
-
-Also, note that the `address` of the settlement layer is provided by the user. Assuming that the settlement layer is trusted, this scheme works fine, since the `chainIdLeaf` belongs to it only if the chain really ever settled there. I.e. the protection from maliciously chosen settlement layers is the fact that the settlement layers are trusted to never include batches that they did not have.
-
-## Additional notes on security
-
-### Redundancy of data
-
-Currently, we never clear the `MessageRoot`; in other words, the aggregated root contains more and more batches’ settlement roots, leading to the following two facts:
-
-- The aggregated proofs’ length starts to logarithmically depend on the number of total batches ever finalized on top of this settlement layer (it also depends logarithmically on the number of chains in the settlement layer). I.e. it is `O(log(total_chains) + log(total_batches) + log(total_logs_in_the_batch))` in case of a single aggregation layer.
-- The same data may be referenced from multiple final aggregated roots.
-
-It is the responsibility of the chain to ensure that each message has a unique id and can not be replayed. Currently a tuple of `chain_batch_number, chain_message_id` is used. While there are multiple message roots from which such a tuple could be proven, it is still okay as it will be nullified only once.
-
-Another notable example of the redundancy of data is that we also have the total `MessageRoot` on L1, which contains the aggregated root of all chains, while for chains that settle on L1 we still store the `settledBatchRoot` for efficiency.
-
-### Data availability guarantees
-
-We want to maintain the security invariant that users can always withdraw their funds from rollup chains. In other words, all L2→GW→L1 logs that come from rollups should eventually be propagated to L1, and, regardless of how other chains behave, an honest chain should always provide the ability for its users to withdraw.
-
-Firstly, unless the chain settles on L1, this requires a trusted settlement layer: the operator of the gateway is not trusted, but the gateway itself must work properly, i.e. append messages correctly, publish the data that it promises to publish, etc. This is already the case for the Gateway, as it is a ZK rollup fork of Era: while the operator may censor transactions, it can not lie and is always forced to publish all state diffs.
-
-Secondly, we guarantee that all the stored `ChainIdLeaf`s are published on L1, even for Validiums. Publishing a single 32-byte value per relatively big Gateway batch costs little for Validiums, but it ensures that the settlement root of the gateway can always be constructed. And, assuming that the preimage for the chain root can be constructed, this gives the ability to recover the proof for any L2→GW→L1 log coming from a rollup.
-
-But how can one reconstruct the total chain tree for a particular rollup chain? A rollup relays all of its pubdata to L1, meaning that by observing L1, an observer knows all the L2→GW→L1 logs that happened in a particular batch. This means that for each batch the observer can restore the `LocalRoot` (in case the `AggregatedRoot` is non-zero, it can be read e.g. from the storage, which is available via the standard state diffs). This allows calculating the `BatchRootLeaf` for the chain. The only thing missing is understanding which batches were finalized on gateway in order to construct the merkle path to the `ChainIdRoot`.
-
-To understand which SL was used by a batch for finalization, one could simply brute force over all settlement layers ever used to find out where the `settledBatchRoot` is stored. This number is expected to be rather small.
-
-## Legacy support
-
-In order to ease the server migration, we support the legacy format of L2→L1 log proving, i.e. a proof that assumes that the stored `settledMessageRoot` is identical to the local root, i.e. the hash of the logs in the batch.
-
-To differentiate between the legacy format and the new one, the following approach is used:
-
-- Except for the first 3 bytes, the first word in the new format contains only zeroes, which is unlikely in the old format, where the words are hashes.
-- I.e. if the last 29 bytes are zeroes, the proof is assumed to be in the new format, and vice versa.
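-
-A minimal sketch of this check, assuming the first word of the proof is passed in (the helper name is illustrative):
-
-```solidity
-function isNewProofFormat(bytes32 firstProofWord) internal pure returns (bool) {
-    // In the new format only the first 3 bytes may be non-zero, so the
-    // remaining 29 bytes (232 bits) must all be zero.
-    return (uint256(firstProofWord) & type(uint232).max) == 0;
-}
-```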
-
-In the next release the old format will be removed.
diff --git a/docs/gateway/messaging_via_gateway.md b/docs/gateway/messaging_via_gateway.md
deleted file mode 100644
index 71cba6e244..0000000000
--- a/docs/gateway/messaging_via_gateway.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Messaging via Gateway
-
-[back to readme](../README.md)
-
-## Deeper dive into MessageRoot contract and how L2→GW→L1 communication works
-
-Before, when chains were settling only on L1, a chain’s message root was just the merkle tree of the L2→L1 logs sent within the batch. However, this model has to be amended to support messages to L1 coming from an L2 that settles on top of Gateway.
-
-The description of how L2→GW→L1 messages are aggregated in the MessageRoots and proved on L1 can be read in the [nested l2 gw l1 messaging](./l2_gw_l1_messaging.md) section.
-
-## L1→GW→L2 messaging
-
-As a recap, here is how messaging works for chains that settle on L1:
-
-
-
-- The user calls the bridgehub, which routes the message to the chain.
-- The operator eventually sees the transaction via an event on L1 and will process it on L2.
-
-With gateway, the situation will be a bit more complex:
-
-
-
-Since the contracts responsible for batch processing have been moved to Gateway, all the priority transactions now have to be relayed to that chain so that the validation can work.
-
-- (Steps 1-3) The user calls Bridgehub. The base token needs to be deposited via L1AssetRouter (usually the NTV will be used).
-- (Steps 4-5) The Bridgehub calls the chain that the transaction targets. The chain sees that its settlement layer is another chain, so it calls that chain and asks it to relay this transaction to Gateway.
-- (Steps 6-7) A priority transaction from `SETTLEMENT_LAYER_RELAY_SENDER` to the Bridgehub is added to the Gateway chain’s priority queue. Once the Gateway operator sees the transaction from L1, it processes it. The transaction itself will eventually call the DiamondProxy of the initially called chain.
-- (Step 8) At some point, the operator of the chain will see that the priority transaction has been included on the gateway and will process it on the L2.
-- (Step 9) This step from the picture above is optional: in case the callee of the L1→GW→L2 transaction is the L2AssetRouter (i.e. the purpose of the transaction was bridging funds), the L2AssetRouter will call the asset handler of the asset (in case of standard bridged tokens, the NativeTokenVault), which is responsible for minting the corresponding number of tokens to the user.
-
-So under the hood there are two cross-chain transactions happening:
-
-1. One from L1 to GW
-2. The second one from GW to the L2.
-
-Separately, bridging provides methods that allow users to recover funds in case of a failed L1→L2 transaction. E.g. if a user tried to bridge USDC to a ZK Chain but did not provide enough L2 gas, they can still recover the funds.
-
-This functionality works by letting the user prove that the bridging transaction failed, after which the funds are released back to the original sender on L1. With the approach above, where multiple cross-chain transactions are involved, this could become twice as hard to maintain: now both of them could fail.
-
-To simplify things, for now, we provide the L1→GW transaction with a large amount of gas (72kk, i.e. 72 million, the maximal amount allowed to be passed on L2). We believe that it is not possible to create a relayed transaction that would fail, assuming that a non-malicious recipient CTM is used on L2.
-
-> Note that the above means that we currently rely on the following two facts:
->
-> - The recipient CTM is honest and efficient.
-> - Creating a large transaction on L1 that would cause the L1→GW part to fail is not possible due to the high L1 gas costs that would be required to create such a tx.
->
-> Both of the assumptions above will be removed in subsequent releases, but for now this is how things are.
diff --git a/docs/gateway/overview.md b/docs/gateway/overview.md
deleted file mode 100644
index 9517456b88..0000000000
--- a/docs/gateway/overview.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Gateway
-
-[back to readme](../README.md)
-
-Gateway is a proof aggregation layer, created to solve the following problems:
-
-- Fast interop (interchain communication) would require quick proof generation and verification. The latter can be very expensive on L1. Gateway provides an L1-like interface for chains, while giving a stable price for compute.
-- Generally proof aggregation can reduce costs for users, if there are multiple chains settling on top of the same layer. It can reduce the costs of running a Validium even further.
-
-In this release, Gateway is basically a fork of Era that will be deployed within the same CTM as other ZK Chains. This allows us to reuse most of the existing code for Gateway.
-
-> In some places in the code you may encounter the term “settlement layer” or the abbreviation “sl”. “Settlement layer” is a general term for a chain that other chains can settle to. Right now, the list of settlement layers is whitelisted, and only Gateway will be allowed to be a settlement layer (along with L1).
-
-## High level gateway architecture
-
-
-
-## Read more
-
-- [General overview](overview.md)
-- [Chain migration](chain_migration.md)
-- [L1->L2 messaging via gateway](messaging_via_gateway.md)
-- [L2->L1 messaging via gateway](l2_gw_l1_messaging.md)
-- [Gateway protocol versioning](gateway_protocol_upgrades.md)
-- [DA handling on Gateway](gateway_da.md)
diff --git a/docs/glossary.md b/docs/glossary.md
deleted file mode 100644
index c0a1b29a80..0000000000
--- a/docs/glossary.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Glossary
-
-[back to readme](./README.md)
-
-- **Validator/Operator** - a privileged address that can commit/verify/execute L2 batches.
-- **L2 batch (or just batch)** - An aggregation of multiple L2 blocks. Note that while the API operates on L2 blocks,
-  the proving system operates on batches, each of which represents a single proved VM execution and typically contains
-  multiple L2 blocks.
-- **Facet** - implementation contract. The term comes from EIP-2535.
-- **Gas** - a unit that measures the amount of computational effort required to execute specific operations on the
- ZKsync Era network.
diff --git a/docs/img/reading_order.png b/docs/img/reading_order.png
deleted file mode 100644
index c5309e6814..0000000000
Binary files a/docs/img/reading_order.png and /dev/null differ
diff --git a/docs/l2_system_contracts/batches_and_blocks_on_zksync.md b/docs/l2_system_contracts/batches_and_blocks_on_zksync.md
deleted file mode 100644
index 723317e64e..0000000000
--- a/docs/l2_system_contracts/batches_and_blocks_on_zksync.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Batches & L2 blocks on ZKsync
-
-[back to readme](../README.md)
-
-## Glossary
-
-- Batch - a set of transactions that the bootloader processes (`commitBatches`, `proveBatches`, and `executeBatches` work with it). A batch consists of multiple transactions.
-- L2 blocks - non-intersecting sub-sets of consecutively executed transactions in a batch. This is the kind of block you see in the API. This is the one that is used for `block.number`/`block.timestamp`/etc.
-
-> Note that sometimes in the code you can see the notion of “virtual blocks”. In the past, we returned batch information for `block.number`/`block.timestamp`. However, due to DevEx issues we decided to move to returning these values for L2 blocks. Virtual blocks were used during the migration, but are not used anymore. You can consider that there is one virtual block per L2 block, with exactly the same properties.
-
-## Motivation
-
-L2 blocks were created for fast soft confirmation in wallets and block explorers. For example, MetaMask shows a transaction as confirmed only after the block containing it was mined. If the user had to wait for batch confirmation, it would take at least a few minutes (for soft confirmation) and hours for full confirmation, which is very bad UX. But the API can return soft confirmations much earlier through L2 blocks.
-
-## Adapting for Solidity
-
-In order to get the values returned for `block.number`, `block.timestamp`, and `blockhash`, our compiler uses the following functions:
-
-- `getBlockNumber`
-- `getBlockTimestamp`
-- `getBlockHashEVM`
-
-These return values for L2 blocks.
-
-## Blocks’ processing and consistency checks
-
-Our `SystemContext` contract allows getting information about batches and L2 blocks. Some of this information is hard to calculate onchain, for instance, time. The timing information (for both batches and L2 blocks) is provided by the operator. In order to check that the operator provided realistic values, certain checks are done on L1. Generally though, we try to check as much as we can on L2.
-
-### Initializing L1 batch
-
-At the start of the batch, the operator [provides](../../system-contracts/bootloader/bootloader.yul#L3935) the timestamp of the batch, its number and the hash of the previous batch. The root hash of the Merkle tree serves as the root hash of the batch.
-
-The SystemContext can immediately check whether the provided number is the correct batch number. It also immediately sends the previous batch hash to L1, where it will be checked during the commit operation. Also, some general consistency checks are performed. This logic can be found [here](../../system-contracts/contracts/SystemContext.sol#L469).
-
-### L2 blocks processing and consistency checks
-
-#### `setL2Block`
-
-Before each transaction, we call `setL2Block` [method](../../system-contracts/bootloader/bootloader.yul#L2884). There we will provide some data about the L2 block that the transaction belongs to:
-
-- `_l2BlockNumber` The number of the new L2 block.
-- `_l2BlockTimestamp` The timestamp of the new L2 block.
-- `_expectedPrevL2BlockHash` The expected hash of the previous L2 block.
-- `_isFirstInBatch` Whether this method is called for the first time in the batch.
-- `_maxVirtualBlocksToCreate` The maximum number of virtual blocks to create with this L2 block. This is a legacy field that is always equal either to 0 or 1.
-
-If two transactions belong to the same L2 block, only the first one may have a non-zero `_maxVirtualBlocksToCreate`. The rest of the data must be the same.
-
-The `setL2Block` [performs](../../system-contracts/contracts/SystemContext.sol#L355) a lot of similar consistency checks to the ones for the L1 batch.
-
-#### L2 blockhash calculation and storage
-
-Unlike L1 batch’s hash, the L2 blocks’ hashes can be checked on L2.
-
-The hash of an L2 block is `keccak256(abi.encode(_blockNumber, _blockTimestamp, _prevL2BlockHash, _blockTxsRollingHash))`, where `_blockTxsRollingHash` is defined in the following way:
-
-`_blockTxsRollingHash = 0` for an empty block.
-
-`_blockTxsRollingHash = keccak(0, tx1_hash)` for a block with one tx.
-
-`_blockTxsRollingHash = keccak(keccak(0, tx1_hash), tx2_hash)` for a block with two txs, etc.
-
-To add a transaction hash to the current miniblock we use the `appendTransactionToCurrentL2Block` function of the `SystemContext` contract.
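-
-A hedged Solidity sketch of these two computations (illustrative names and `abi.encode` packing assumed; not the exact `SystemContext` internals):
-
-```solidity
-// The rolling hash starts at zero and folds in each transaction hash in order.
-function appendToRollingHash(bytes32 _rollingHash, bytes32 _txHash) pure returns (bytes32) {
-    return keccak256(abi.encode(_rollingHash, _txHash));
-}
-
-function l2BlockHash(
-    uint256 _blockNumber,
-    uint256 _blockTimestamp,
-    bytes32 _prevL2BlockHash,
-    bytes32 _blockTxsRollingHash
-) pure returns (bytes32) {
-    return keccak256(abi.encode(_blockNumber, _blockTimestamp, _prevL2BlockHash, _blockTxsRollingHash));
-}
-```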
-
-Since ZKsync is a state-diff-based rollup, there is no way to deduce the hashes of the L2 blocks based on the transactions in the batch (because there is no access to the transactions’ hashes). At the same time, in order to execute the `blockhash` method, the VM requires knowledge of some of the previous L2 block hashes. In order to save up on pubdata (by making sure that the same storage slots are reused, i.e. we only have repeated writes) we [store](../../system-contracts/contracts/SystemContext.sol#L73) only the last 257 block hashes. You can read more about repeated writes and how pubdata is processed [here](../settlement_contracts/data_availability/standard_pubdata_format.md).
-
-We store only the last 257 blocks, since the EVM requires only 256 previous ones and we use 257 as a safe margin.
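-
-A hedged sketch of such a ring buffer (illustrative, not the exact `SystemContext` storage layout):
-
-```solidity
-contract BlockHashStore {
-    // 257 = the 256 hashes required by the EVM + 1 as a safe margin. Reusing
-    // the same slots keeps pubdata small, since all writes are repeated writes.
-    bytes32[257] internal l2BlockHashes;
-
-    function _setL2BlockHash(uint256 _blockNumber, bytes32 _hash) internal {
-        l2BlockHashes[_blockNumber % 257] = _hash;
-    }
-
-    function _getL2BlockHash(uint256 _blockNumber) internal view returns (bytes32) {
-        return l2BlockHashes[_blockNumber % 257];
-    }
-}
-```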
-
-#### Legacy blockhash
-
-For L2 blocks that were created before we switched to the formulas from above, we use the following formula for their hash:
-
-`keccak256(abi.encodePacked(uint32(_blockNumber)))`
-
-These are only very old blocks on ZKsync Era; other ZK chains don't have such blocks.
-
-#### Timing invariants
-
-While the timestamp of each L2 block is provided by the operator, there are some timing invariants that the system preserves:
-
-- For each L2 block, its timestamp should be > the timestamp of the previous L2 block.
-- For each L2 block, its timestamp should be ≥ the timestamp of the batch it belongs to.
-- Each batch must start with a new L2 block (i.e. an L2 block can not span across batches).
-- The timestamp of a batch must be ≥ the timestamp of the latest L2 block which belonged to the previous batch.
-- The timestamp of the last miniblock in a batch can not go too far into the future. This is enforced by publishing an L2→L1 log with the timestamp, which is then checked on L1.
-
-### Fictive L2 block & finalizing the batch
-
-At the end of the batch, the bootloader calls `setL2Block` [one more time](../../system-contracts/bootloader/bootloader.yul#L4110) to allow the operator to create a new empty block. This is done purely for technical reasons inside the node, where each batch ends with an empty L2 block.
-
-We do not explicitly enforce that the last block is empty, as that would complicate the development process and testing, but in practice it is, and either way it should be secure.
-
-Also, at the end of the batch we send the timestamp of the batch as well as the timestamp of the last miniblock, in order to check on L1 that both of these are realistic. Checking any other L2 block’s timestamp is not required since all of them are enforced to be between those two.
-
-## Additional note on blockhashes
-
-In the past, we had to apply different formulas based on whether or not the migration from batch environment info to L2 block info had finished. You can find these checks [here](../../system-contracts/contracts/SystemContext.sol#L137). But note that the migration ended quite some time ago, so in reality only the two cases below can be met:
-
-- When the block is out of the readable range.
-- When it is a normal L2 block and so its hash has to be used.
-
-The only edge case is when we ask for a miniblock number for which the base hash is returned. This edge case will be removed in future releases.
diff --git a/docs/l2_system_contracts/elliptic_curve_precompiles.md b/docs/l2_system_contracts/elliptic_curve_precompiles.md
deleted file mode 100644
index a3caa1f01f..0000000000
--- a/docs/l2_system_contracts/elliptic_curve_precompiles.md
+++ /dev/null
@@ -1,251 +0,0 @@
-# Elliptic curve precompiles
-
-[back to readme](../README.md)
-
-Precompiled contracts for elliptic curve operations are required in order to perform zkSNARK verification.
-
-The operations that you need to be able to perform are elliptic curve point addition, elliptic curve point scalar multiplication, and elliptic curve pairing.
-
-This document explains the precompiles responsible for elliptic curve point addition and scalar multiplication and the design decisions. You can read the specification [here](https://eips.ethereum.org/EIPS/eip-196).
-
-## Introduction
-
-On top of having a set of opcodes to choose from, the EVM also offers a set of more advanced functionalities through precompiled contracts. These are a special kind of contracts that are bundled with the EVM at fixed addresses and can be called with a determined gas cost. The addresses start from 1, and increment for each contract. New hard forks may introduce new precompiled contracts. They are called from the opcodes like regular contracts, with instructions like CALL. The gas cost mentioned here is purely the cost of the contract and does not consider the cost of the call itself nor the instructions to put the parameters in memory.
-
-For Go-Ethereum, the code being run is written in Go, and the gas costs are defined in each precompile spec.
-
-In the case of ZKsync Era, the ecAdd and ecMul precompiles are written as smart contracts for two reasons:
-
-- zkEVM needs to be able to prove their execution (and at the moment it cannot do that if the code being run is executed outside the VM)
-- Writing custom circuits for elliptic curve operations is hard and time-consuming, and such code is harder to maintain and audit.
-
-## Field Arithmetic
-
-The BN254 (also known as alt-BN128) is an elliptic curve defined by the equation $y^2 = x^3 + 3$ over the finite field $\mathbb{F}_p$, where $p = 21888242871839275222246405745257275088696311157297823662689037894645226208583$. The modulus is less than 256 bits, which is why every element in the field is represented as a `uint256`.
-
-The arithmetic is carried out with the field elements encoded in the Montgomery form. This is done not only because operating in the Montgomery form speeds up the computation but also because the native modular multiplication, which is carried out by Yul's `mulmod` opcode, is very inefficient.
-
-The instruction sets on ZKsync and the EVM are different, so the same Yul/Solidity code can be efficient on the EVM but not on the zkEVM, and vice versa.
-
-One such very inefficient instruction is `mulmod`. On the EVM there is a native opcode that performs modular multiplication, and it costs only 8 gas, only 2-3 times more than other opcodes. On the zkEVM we don’t have a native `mulmod` opcode; instead, the compiler does a full-width multiplication (i.e. it multiplies two `uint256`s and gets a `uint512` as a result). Then the compiler performs a long division for the reduction (only the remainder is kept), which in the generic form is an expensive operation costing many opcode executions, incomparable to the cost of a single opcode execution. The worst part is that `mulmod` is used a lot for modular inversion, so optimizing this one operation gives a huge benefit to the precompiles.
-
-### Multiplication
-
-As said before, multiplication was carried out by implementing the Montgomery reduction, which works with general moduli and provides a significant speedup compared to the naïve approach.
-
-The squaring operation is obtained by multiplying a number by itself. However, this operation can have an additional speedup by implementing the SOS Montgomery squaring.
-
-### Inversion
-
-Inversion was performed using the extended binary Euclidean algorithm (also known as extended binary greatest common divisor). This algorithm is a modification of Algorithm 3 `MontInvbEEA` from [Montgomery inversion](https://cetinkayakoc.net/docs/j82.pdf).
-
-### Exponentiation
-
-The exponentiation was carried out using the square and multiply algorithm, which is a standard technique for this operation.
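-
-As a hedged Yul sketch, square-and-multiply over field elements in Montgomery form can look as follows, using the `montgomeryMul` primitive defined in the sections below; `MONTGOMERY_ONE()` is an assumed helper returning the Montgomery representation of 1 (i.e. `R mod N`).
-
-```solidity
-function montgomeryExp(base, exponent) -> pow {
-    // Start from 1 in Montgomery form and scan the exponent bit by bit.
-    pow := MONTGOMERY_ONE()
-    for {} gt(exponent, 0) {} {
-        // Multiply in the current power of the base whenever the bit is set.
-        if and(exponent, 1) {
-            pow := montgomeryMul(pow, base)
-        }
-        base := montgomeryMul(base, base)
-        exponent := shr(1, exponent)
-    }
-}
-```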
-
-## Montgomery Form
-
-Let’s take a number `R` such that `gcd(N, R) == 1` and such that we can efficiently divide and take the modulus by `R` (for example, a power of two, or better, the machine word, i.e. 2^256). Then we transform every number into the form `x * R mod N` / `y * R mod N`, and we get efficient modular addition and multiplication. The only caveat is that before working with the numbers we need to transform them from `x mod N` to `x * R mod N`, and transform the result back after performing the operations.
-
-In what follows, `N` is the modulus that we use in the computations, and `R` is $2^{256}$, since we can efficiently divide and take the modulus by this number, and it satisfies the property `gcd(N, R) == 1` in practice.
-
-### Montgomery Reduction Algorithm (REDC)
-
-```solidity
-/// @notice Implementation of the Montgomery reduction algorithm (a.k.a. REDC).
-/// @dev See
-/// @param lowestHalfOfT The lowest half of the value T.
-/// @param higherHalfOfT The higher half of the value T.
-/// @return S The result of the Montgomery reduction.
-function REDC(lowestHalfOfT, higherHalfOfT) -> S {
- let q := mul(lowestHalfOfT, N_PRIME())
- let aHi := add(higherHalfOfT, getHighestHalfOfMultiplication(q, P()))
- let aLo, overflowed := overflowingAdd(lowestHalfOfT, mul(q, P()))
- if overflowed {
- aHi := add(aHi, 1)
- }
- S := aHi
- if iszero(lt(aHi, P())) {
- S := sub(aHi, P())
- }
-}
-
-```
-
-By choosing $R = 2^{256}$ we avoided 2 modulo operations and one division from the original algorithm. This is because in Yul native numbers are uint256 and the modulo operation is native, while for the division, since we work with a 512-bit number split into two parts (high and low), dividing by $R$ means shifting 256 bits to the right, or, what is the same, discarding the low part.
-
-### Montgomery Addition/Subtraction
-
-Addition and subtraction in Montgomery form are the same as ordinary modular addition and subtraction because of the distributive law
-
-$$
-\begin{align*}
-aR+bR=(a+b)R,\\
-aR-bR=(a-b)R.
-\end{align*}
-$$
-
-```solidity
-/// @notice Computes the Montgomery addition.
-/// @dev See for further details on the Montgomery multiplication.
-/// @param augend The augend in Montgomery form.
-/// @param addend The addend in Montgomery form.
-/// @return ret The result of the Montgomery addition.
-function montgomeryAdd(augend, addend) -> ret {
- ret := add(augend, addend)
- if iszero(lt(ret, P())) {
- ret := sub(ret, P())
- }
-}
-
-/// @notice Computes the Montgomery subtraction.
-/// @dev See for further details on the Montgomery multiplication.
-/// @param minuend The minuend in Montgomery form.
-/// @param subtrahend The subtrahend in Montgomery form.
-/// @return ret The result of the Montgomery subtraction.
-function montgomerySub(minuend, subtrahend) -> ret {
- ret := montgomeryAdd(minuend, sub(P(), subtrahend))
-}
-
-```
-
-We do not use `addmod` because in most cases the sum does not exceed the modulus.
-
-### Montgomery Multiplication
-
-The product of $aR \mod N$ and $bR \mod N$ is $REDC((aR \mod N)(bR \mod N))$.
-
-```solidity
-/// @notice Computes the Montgomery multiplication using the Montgomery reduction algorithm (REDC).
-/// @dev See for further details on the Montgomery multiplication.
-/// @param multiplicand The multiplicand in Montgomery form.
-/// @param multiplier The multiplier in Montgomery form.
-/// @return ret The result of the Montgomery multiplication.
-function montgomeryMul(multiplicand, multiplier) -> ret {
- let hi := getHighestHalfOfMultiplication(multiplicand, multiplier)
- let lo := mul(multiplicand, multiplier)
- ret := REDC(lo, hi)
-}
-
-```
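-
-With `montgomeryMul` in place, converting into and out of Montgomery form is just one multiplication. A hedged sketch, where `R2_MOD_P()` is an assumed precomputed constant equal to $R^2 \mod P$:
-
-```solidity
-function intoMontgomeryForm(a) -> ret {
-    // a * R mod P == REDC(a * (R^2 mod P))
-    ret := montgomeryMul(a, R2_MOD_P())
-}
-
-function outOfMontgomeryForm(m) -> ret {
-    // m / R mod P == REDC(m * 1), i.e. a Montgomery multiplication by plain 1.
-    ret := montgomeryMul(m, 1)
-}
-```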
-
-### Montgomery Inversion
-
-```solidity
-/// @notice Computes the Montgomery modular inverse skipping the Montgomery reduction step.
-/// @dev The Montgomery reduction step is skipped because a modification in the binary extended Euclidean algorithm is used to compute the modular inverse.
-/// @dev See the function `binaryExtendedEuclideanAlgorithm` for further details.
-/// @param a The field element in Montgomery form to compute the modular inverse of.
-/// @return invmod The result of the Montgomery modular inverse (in Montgomery form).
-function montgomeryModularInverse(a) -> invmod {
- invmod := binaryExtendedEuclideanAlgorithm(a)
-}
-```
-
-As said before, we use a modified version of the binary extended Euclidean (bEE) algorithm that lets us “skip” the Montgomery reduction step.
-
-The regular algorithm would be $REDC((aR \mod N)^{-1}(R^3 \mod N))$, which involves a regular inversion plus a multiplication by a value that can be precomputed.
-
-## ECADD
-
-Precompile for computing elliptic curve point addition. The points are represented in affine form, given by a pair of coordinates $(x,y)$.
-
-Affine coordinates are the conventional way of expressing elliptic curve points, which use 2 coordinates. The math is concise and easy to follow.
-
-For a pair of constants $a$ and $b$, an elliptic curve is defined by the set of all points $(x,y)$ that satisfy the equation $y^2=x^3+ax+b$, plus a special “point at infinity” named $O$.
-
-### Point Doubling
-
-To compute $2P$ (or $P+P$), there are three cases:
-
-- If $P = O$, then $2P = O$.
-- Else $P = (x, y)$
-
- - If $y = 0$, then $2P = O$.
- - Else $y≠0$, then
-
- $$
- \begin{gather*} \lambda = \frac{3x_{p}^{2} + a}{2y_{p}} \\ x_{r} = \lambda^{2} - 2x_{p} \\ y_{r} = \lambda(x_{p} - x_{r}) - y_{p}\end{gather*}
- $$
-
-The complicated case involves approximately 6 multiplications, 4 additions/subtractions, and 1 division. It can also be done with 4 multiplications, 6 additions/subtractions, and 1 division, and, if desired, a multiplication can be traded for 2 more additions.
-
-### Point Addition
-
-To compute $P + Q$ where $P \neq Q$, there are four cases:
-
-- If $P = O$ and $Q \neq O$, then $P + Q = Q$.
-- If $Q = O$ and $P \neq O$, then $P + Q = P$.
-- Else $P = (x_{p},\ y_{p})$ and $Q = (x_{q},\ y_{q})$
-
- - If $x_{p} = x_{q}$ (and necessarily $y_{p} \neq y_{q}$), then $P + Q = O$.
- - Else $x_{p} \neq x_{q}$, then
-
- $$
-   \begin{gather*} \lambda = \frac{y_{q} - y_{p}}{x_{q} - x_{p}} \\ x_{r} = \lambda^{2} - x_{p} - x_{q} \\ y_{r} = \lambda(x_{p} - x_{r}) - y_{p}\end{gather*}
- $$
-
- and $P + Q = R = (x_{r},\ y_{r})$.
-
-The complicated case involves approximately 2 multiplications, 6 additions/subtractions, and 1 division.
-
-## ECMUL
-
-Precompile for computing elliptic curve point scalar multiplication. The points are represented in homogeneous projective coordinates, given by the coordinates $(X, Y, Z)$. Transformation into affine coordinates can be done by applying the following transformation:
-$(x, y) = (X \cdot Z^{-1}, Y \cdot Z^{-1})$ if the point is not the point at infinity.
-
-The key idea of projective coordinates is that instead of performing every division immediately, we defer the divisions by multiplying them into a denominator. The denominator is represented by a new coordinate. Only at the very end do we perform a single division to convert from projective coordinates back to affine coordinates.
-
-In affine form, each elliptic curve point has 2 coordinates, like $(x,y)$. In the new projective form, each point will have 3 coordinates, like $(X,Y,Z)$, with the restriction that $Z$ is never zero. The forward mapping is given by $(x,y)→(xz,yz,z)$, for any non-zero $z$ (usually chosen to be 1 for convenience). The reverse mapping is given by $(X,Y,Z)→(X/Z,Y/Z)$, as long as $Z$ is non-zero.
-
-### Point Doubling
-
-The affine form case $y=0$ corresponds to the projective form case $Y/Z=0$. This is equivalent to $Y=0$, since $Z≠0$.
-
-For the interesting case where $P=(X,Y,Z)$ and $Y≠0$, let’s convert the affine arithmetic to projective arithmetic.
-
-After expanding and simplifying the equations ([demonstration here](https://www.nayuki.io/page/elliptic-curve-point-addition-in-projective-coordinates)), the following substitutions come out
-
-$$
-\begin{align*} T &= 3X^{2} + aZ^{2},\\ U &= 2YZ,\\ V &= 2UXY,\\ W &= T^{2} - 2V \end{align*}
-$$
-
-Using them, we can write
-
-$$
-\begin{align*} X_{r} &= UW \\ Y_{r} &= T(V−W)−2(UY)^{2} \\ Z_{r} &= U^{3} \end{align*}
-$$
-
-As we can see, the complicated case involves approximately 18 multiplications, 4 additions/subtractions, and 0 divisions.
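-
-As a hedged Yul sketch under the conventions above (BN254, so $a = 0$ and $T = 3X^{2}$; all coordinates in Montgomery form, helper names as defined earlier), the substitutions translate directly:
-
-```solidity
-// Handles only the interesting case Y != 0; the caller checks for infinity.
-function projectiveDouble(x, y, z) -> xr, yr, zr {
-    let twoY := montgomeryAdd(y, y)
-    // T = 3X^2 (a = 0), U = 2YZ, V = 2UXY, W = T^2 - 2V
-    let t := montgomeryMul(montgomeryAdd(montgomeryAdd(x, x), x), x)
-    let u := montgomeryMul(twoY, z)
-    let v := montgomeryMul(montgomeryMul(u, x), twoY)
-    let w := montgomerySub(montgomeryMul(t, t), montgomeryAdd(v, v))
-    // X_r = UW, Y_r = T(V - W) - 2(UY)^2, Z_r = U^3
-    xr := montgomeryMul(u, w)
-    let uy := montgomeryMul(u, y)
-    let uy2 := montgomeryMul(uy, uy)
-    yr := montgomerySub(montgomeryMul(t, montgomerySub(v, w)), montgomeryAdd(uy2, uy2))
-    zr := montgomeryMul(montgomeryMul(u, u), u)
-}
-```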
-
-### Point Addition
-
-The affine form case $x_{p} = x_{q}$ corresponds to the projective form case $X_{p}/Z_{p} = X_{q}/Z_{q}$. This is equivalent to $X_{p}Z_{q} = X_{q}Z_{p}$, via cross-multiplication.
-
-For the interesting case where $P = (X_{p},\ Y_{p},\ Z_{p})$ , $Q = (X_{q},\ Y_{q},\ Z_{q})$, and $X_{p}Z_{q} ≠ X_{q}Z_{p}$, let’s convert the affine arithmetic to projective arithmetic.
-
-After expanding and simplifying the equations ([demonstration here](https://www.nayuki.io/page/elliptic-curve-point-addition-in-projective-coordinates)), the following substitutions come out
-
-$$
-\begin{align*}
-T_{0} &= Y_{p}Z_{q}\\
-T_{1} &= Y_{q}Z_{p}\\
-T &= T_{0} - T_{1}\\
-U_{0} &= X_{p}Z_{q}\\
-U_{1} &= X_{q}Z_{p}\\
-U &= U_{0} - U_{1}\\
-U_{2} &= U^{2}\\
-V &= Z_{p}Z_{q}\\
-W &= T^{2}V−U_{2}(U_{0}+U_{1}) \\
-\end{align*}
-$$
-
-Using them, we can write
-
-$$
-\begin{align*} X_{r} &= UW \\ Y_{r} &= T(U_{0}U_{2}−W)−T_{0}U^{3} \\ Z_{r} &= U^{3}V \end{align*}
-$$
-
-As we can see, the complicated case involves approximately 15 multiplications, 6 additions/subtractions, and 0 divisions.
diff --git a/docs/l2_system_contracts/system_contracts_bootloader_description.md b/docs/l2_system_contracts/system_contracts_bootloader_description.md
deleted file mode 100644
index 4fbe9d5521..0000000000
--- a/docs/l2_system_contracts/system_contracts_bootloader_description.md
+++ /dev/null
@@ -1,728 +0,0 @@
-# System contracts/bootloader description (VM v1.5.0)
-
-[back to readme](../README.md)
-
-## Bootloader
-
-On standard Ethereum clients, the workflow for executing blocks is the following:
-
-1. Pick a transaction, validate it & charge the fee, and execute it.
-2. Gather the state changes (if the transaction has not reverted), apply them to the state.
-3. Go back to step (1) if the block gas limit has not yet been exceeded.
-
-However, having such a flow on ZKsync (i.e. processing transactions one-by-one) would be too inefficient, since we have to run the entire proving workflow for each individual transaction. That’s what we need the _bootloader_ for: instead of running N transactions separately, we run the entire batch (a set of blocks, more can be found [here](./batches_and_blocks_on_zksync.md)) as a single program that accepts the array of transactions as well as some other batch metadata and processes them inside a single big “transaction”. The easiest way to think about the bootloader is in terms of the EntryPoint from EIP-4337: it also accepts an array of transactions and facilitates the Account Abstraction protocol.
-
-The hash of the code of the bootloader is stored on L1 and can only be changed as a part of a system upgrade. Note that, unlike system contracts, the bootloader’s code is not stored anywhere on L2. That’s why we may sometimes refer to the bootloader’s address as formal: it only exists for the sake of providing some value to `this`/`msg.sender`/etc. When someone calls the bootloader address (e.g. to pay fees), the EmptyContract’s code is actually invoked.
-
-## System contracts
-
-While most of the primitive EVM opcodes can be supported out of the box (i.e. zero-value calls, addition/multiplication/memory/storage management, etc), some of the opcodes are not supported by the VM by default and they are implemented via “system contracts” — these contracts are located in a special _kernel space,_ i.e. in the address space in range `[0..2^16-1]`, and they have some special privileges, which users’ contracts don’t have. These contracts are pre-deployed at the genesis and updating their code can be done only via system upgrade, managed from L1.
-
-The use of each system contract will be explained down below.
-
-### Pre-deployed contracts
-
-Some of the contracts need to be predeployed at the genesis, but they do not need kernel space rights. To give them minimal permissions, we predeploy them at consecutive addresses starting right at `2^16`. These will be described in the following sections.
-
-## zkEVM internals
-
-Full specification of the zkEVM is beyond the scope of this document. However, this section will give you most of the details needed for understanding the L2 system smart contracts & basic differences between EVM and zkEVM.
-
-### Registers and memory management
-
-On EVM, during transaction execution, the following memory areas are available:
-
-- `memory` itself.
-- `calldata` the immutable slice of parent memory.
-- `returndata` the immutable slice returned by the latest call to another contract.
-- `stack` where the local variables are stored.
-
-Unlike the EVM, which is a stack machine, the zkEVM has 16 registers. Instead of receiving input from `calldata`, the zkEVM starts by receiving a _pointer_ in its first register (_basically a packed struct with 4 elements: the memory page id, and the start and length of the slice to which it points_) to the calldata page of the parent. Similarly, a transaction can receive some other additional data within its registers at the start of the program: whether the transaction should invoke the constructor ([more about deployments here](#contractdeployer--immutablesimulator)), whether the transaction has the `isSystem` flag, etc. The meaning of each of these flags will be expanded further in this section.
-
-_Pointers_ are a separate type in the VM. It is only possible to:
-
-- Read some value within a pointer.
-- Shrink the pointer by reducing the slice to which the pointer points.
-- Receive a pointer to the returndata / as calldata.
-- Pointers can be stored only on the stack/registers, to make sure that other contracts can not read the memory/returndata of contracts they are not supposed to.
-- A pointer can be converted to the u256 integer representing it, but an integer can not be converted to a pointer, to prevent unauthorized memory access.
-- It is not possible to return a pointer that points to a memory page with an id smaller than the one of the current page. What this means is that it is only possible to `return` a pointer to the memory of the current frame or one of the pointers returned by the subcalls of the current frame.
-
-#### Memory areas in zkEVM
-
-For each frame, the following memory areas are allocated:
-
-- _Heap_ (plays the same role as `memory` on Ethereum).
-- _AuxHeap_ (auxiliary heap). It has the same properties as Heap, but it is used by the compiler to encode calldata and copy the returndata from calls to system contracts, so as not to interfere with the standard Solidity memory alignment.
-- _Stack_. Unlike on Ethereum, the stack is not the primary place to get arguments for opcodes. The biggest difference between the stack on the zkEVM and the EVM is that on ZKsync the stack can be accessed at any location (just like memory). While users do not pay for the growth of the stack, the stack can be fully cleared at the end of the frame, so the overhead is minimal.
-- _Code_. The memory area from which the VM executes the code of the contract. The contract itself can not read the code page, it is only done implicitly by the VM.
-
-Also, as mentioned in the previous section, the contract receives the pointer to the calldata.
-
-#### Managing returndata & calldata
-
-Whenever a contract finishes its execution, the parent’s frame receives a _pointer_ as `returndata`. This pointer may point to the child frame’s Heap/AuxHeap or it can even be the same `returndata` pointer that the child frame received from some of its child frames.
-
-The same goes with the `calldata`. Whenever a contract starts its execution, it receives the pointer to the calldata. The parent frame can provide any valid pointer as the calldata, which means it can either be a pointer to the slice of parent’s frame memory (heap or auxHeap) or it can be some valid pointer that the parent frame has received before as calldata/returndata.
-
-Contracts simply remember the calldata pointer at the start of the execution frame (this is by design of the compiler) and remember the latest received returndata pointer.
-
-An important implication of this is that it is now possible to do the following calls without any memory copying:
-
-A → B → C
-
-where C receives a slice of the calldata received by B.
-
-The same goes for returning data:
-
-A ← B ← C
-
-There is no need to copy returned data if B returns a slice of the returndata returned by C.
-
-Note that you can _not_ use the pointer that you received via calldata as returndata (i.e. return it at the end of the execution frame). Otherwise, returndata could point to the memory slice of the active frame, which would allow editing the `returndata`. It means that in the examples above, C could not return a slice of its calldata without memory copying.
-
-Note that the rule above is implemented by the principle “it is not possible to return a slice of data with a memory page id lower than the memory page id of the current heap”, since a memory page with a smaller id could only have been created before the call. That’s why a user contract can usually safely return a slice of previously returned returndata (since it is guaranteed to have a higher memory page id). However, system contracts have an exemption from the rule above. It is needed in particular for the correct functionality of the `CodeOracle` system contract. You can read more about it [here](#codeoracle). So the rule of thumb is that returndata from `CodeOracle` should never be passed along.
-
-Some of these memory optimizations can be seen utilized in the [EfficientCall](../../system-contracts/contracts/libraries/EfficientCall.sol#L34) library, which allows performing a call while reusing the slice of calldata that the frame already has, without memory copying.
-
-#### Returndata & precompiles
-
-Some of the operations which are opcodes on Ethereum have become calls to system contracts. The most notable examples are `Keccak256`, `SystemContext`, etc. Note that, if done naively, the following lines of code would work differently on ZKsync and Ethereum:
-
-```solidity
-pop(call(...))
-keccak(...)
-returndatacopy(...)
-```
-
-This is because the call to the keccak precompile would modify the `returndata`. To avoid this, our compiler does not override the latest `returndata` pointer after calls to such opcode-like precompiles.
-
-### ZKsync specific opcodes
-
-While some Ethereum opcodes are not supported out of the box, some new opcodes were added to facilitate the development of the system contracts.
-
-Note that this list does not aim to be specific about the internals, but rather to explain the methods in [SystemContractHelper.sol](../../system-contracts/contracts/libraries/SystemContractHelper.sol#L44).
-
-#### **Only for kernel space**
-
-These opcodes are allowed only for contracts in kernel space (i.e. system contracts). If executed in other places they result in `revert(0,0)`.
-
-- `mimic_call`. The same as a normal `call`, but it can alter the `msg.sender` field of the transaction.
-- `to_l1`. Sends a system L2→L1 log to Ethereum. The structure of this log can be seen [here](../../l1-contracts/contracts/common/Messaging.sol#L23).
-- `event`. Emits an L2 log to ZKsync. Note that L2 logs are not equivalent to Ethereum events. Each L2 log can emit 64 bytes of data (the actual size is 88 bytes, because it includes the emitter address, etc). A single Ethereum event is represented by multiple `event` logs. This opcode is only used by the `EventWriter` system contract.
-- `precompile_call`. This is an opcode that accepts two parameters: the uint256 representing the packed parameters for it, as well as the ergs to burn. Besides the price for the precompile call itself, it burns the provided ergs and executes the precompile. The action it performs depends on `this` during execution:
- - If it is the address of the `ecrecover` system contract, it performs the ecrecover operation
- - If it is the address of the `sha256`/`keccak256` system contracts, it performs the corresponding hashing operation.
- - It does nothing (i.e. just burns ergs) otherwise. It can be used to burn ergs needed for L2→L1 communication or publication of bytecodes onchain.
-- `setValueForNextFarCall` sets `msg.value` for the next `call`/`mimic_call`. Note, that it does not mean that the value will be really transferred. It just sets the corresponding `msg.value` context variable. The transferring of ETH should be done via other means by the system contract that uses this parameter. Note, that this method has no effect on `delegatecall` , since `delegatecall` inherits the `msg.value` of the previous frame.
-- `increment_tx_counter` increments the counter of the transactions within the VM. The transaction counter is used mostly for the VM’s internal tracking of events. It is used only in the bootloader after the end of each transaction.
-- `decommit` will return a pointer to a slice with the corresponding bytecode hash preimage. If this bytecode has been unpacked before, the memory page where it was unpacked will be reused. If it has never been unpacked before, it will be unpacked into the current heap.
-
-Note, that currently we do not have access to the `tx_counter` within VM (i.e. for now it is possible to increment it and it will be automatically used for logs such as `event`s as well as system logs produced by `to_l1`, but we can not read it). We need to read it to publish the _user_ L2→L1 logs, so `increment_tx_counter` is always accompanied by the corresponding call to the [SystemContext](#systemcontext) contract.
-
-More on the difference between system and user logs can be read [here](../settlement_contracts/data_availability/standard_pubdata_format.md).
-
-#### **Generally accessible**
-
-Here are opcodes that can be generally accessed by any contract. Note that while the VM allows access to these methods, it does not mean that this is easy: the compiler might not have convenient support for some use-cases yet.
-
-- `near_call`. It is basically a “framed” jump to some location of the code of your contract. The difference between the `near_call` and ordinary jump are:
- 1. It is possible to provide an ergsLimit for it. Note, that unlike “`far_call`”s (i.e. calls between contracts) the 63/64 rule does not apply to them.
- 2. If the near call frame panics, all state changes made by it are reversed. Please note, that the memory changes will **not** be reverted.
-- `getMeta`. Returns an u256 packed value of [ZkSyncMeta](../../system-contracts/contracts/libraries/SystemContractHelper.sol#L18) struct. Note that this is not tight packing. The struct is formed by the [following rust code](https://github.com/matter-labs/zksync-protocol/blob/main/crates/zkevm_opcode_defs/src/definitions/abi/meta.rs#L4).
-- `getCodeAddress` — receives the address of the executed code. This is different from `this`, since in case of delegatecalls `this` is preserved, but `codeAddress` is not.
-
-#### Flags for calls
-
-Besides the calldata, it is also possible to provide additional information to the callee when doing `call` , `mimic_call`, `delegate_call`. The called contract will receive the following information in its first 12 registers at the start of execution:
-
-- _r1_ — the pointer to the calldata.
-- _r2_ — the pointer with flags of the call. This is a mask, where each bit is set only if certain flags have been set for the call. Currently, two flags are supported:
-  - 0-th bit: the `isConstructor` flag. It can only be set by system contracts and denotes whether the account should execute its constructor logic. Note that, unlike on Ethereum, there is no separation into constructor & deployment bytecode. More on that can be read [here](#contractdeployer--immutablesimulator).
-  - 1-st bit: the `isSystem` flag. Whether the call intends a system contracts’ function. While most of the system contracts’ functions are relatively harmless, accessing some of them with calldata only may break the invariants of Ethereum, e.g. if the system contract uses `mimic_call`: no one expects that by calling a contract some operations may be done out of the name of the caller. This flag can only be set if the callee is in kernel space.
-- The remaining registers r3..r12 are non-empty only if the `isSystem` flag is set. Arbitrary values may be passed in them, which we call `extraAbiParams`.
-
-The compiler implementation is that these flags are remembered by the contract and can be accessed later during execution via special [simulations](https://github.com/code-423n4/2024-03-zksync/blob/main/docs/VM%20Section/How%20compiler%20works/instructions/extensions/overview.md).
-
-If the caller provides inappropriate flags (i.e. tries to set `isSystem` flag when callee is not in the kernel space), the flags are ignored.
-
-#### `onlySystemCall` modifier
-
-Some of the system contracts can act on behalf of the user or have a very important impact on the behavior of the account. That’s why we wanted to make it clear that users can not invoke potentially dangerous operations by doing a simple EVM-like `call`. Whenever a user wants to invoke some of the operations which we consider dangerous, they must provide the `isSystem` flag with the call.
-
-The `onlySystemCall` modifier checks that the call was either done with the `isSystem` flag provided or that it was done by another system contract (since Matter Labs is fully aware of the system contracts).
-
-#### Simulations via our compiler
-
-In the future, we plan to introduce our “extended” version of Solidity with more supported opcodes than the original one. However, this was beyond the capacity of the team for now, so in order to represent access to ZKsync-specific opcodes, we use the `call` opcode with certain constant parameters that will be automatically replaced by the compiler with zkEVM native opcodes.
-
-Example:
-
-```solidity
-function getCodeAddress() internal view returns (address addr) {
- address callAddr = CODE_ADDRESS_CALL_ADDRESS;
- assembly {
- addr := staticcall(0, callAddr, 0, 0xFFFF, 0, 0)
- }
-}
-```
-
-In the example above, the compiler will detect that the static call is done to the constant `CODE_ADDRESS_CALL_ADDRESS` and so it will replace it with the opcode for getting the code address of the current execution.
-
-Full list of opcode simulations can be found [here](https://github.com/code-423n4/2024-03-zksync/blob/main/docs/VM%20Section/How%20compiler%20works/instructions/extensions/call.md).
-
-We also use [verbatim-like](https://github.com/code-423n4/2024-03-zksync/blob/main/docs/VM%20Section/How%20compiler%20works/instructions/extensions/verbatim.md) statements to access ZKsync-specific opcodes in the bootloader.
-
-All the usages of the simulations in our Solidity code are implemented in the [SystemContractHelper](../../system-contracts/contracts/libraries/SystemContractHelper.sol) library and the [SystemContractsCaller](../../system-contracts/contracts/libraries/SystemContractsCaller.sol) library.
-
-#### Simulating `near_call` (in Yul only)
-
-In order to use `near_call`, i.e. to call a local function while providing a limit of ergs (gas) that this function can use, the following convention is used:
-
-The function should contain the `ZKSYNC_NEAR_CALL` string in its name and accept at least 1 input parameter. The first input parameter is the packed ABI of the `near_call`. Currently, it is equal to the number of ergs to be passed with the `near_call`.
-
-Whenever a `near_call` panics, the `ZKSYNC_CATCH_NEAR_CALL` function is called.
-
-_Important note:_ the compiler behaves in such a way that if there is a `revert` in the bootloader, the `ZKSYNC_CATCH_NEAR_CALL` is not called and the parent frame is reverted as well. The only way to revert only the `near_call` frame is to trigger the VM’s _panic_ (it can be triggered with either an invalid opcode or an out-of-gas error).
-
-_Important note 2:_ The 63/64 rule does not apply to `near_call`. Also, if 0 gas is provided to the `near_call`, then all of the available gas will go to it.
-
-#### Notes on security
-
-To prevent unintended substitution, the compiler requires the `--system-mode` flag to be passed during compilation for the above substitutions to work.
-
-> Note that in more recent compiler versions the `--system-mode` flag has been renamed to `enable_eravm_extensions` (this can be seen e.g. in our [foundry.toml](../../l1-contracts/foundry.toml))
-
-### Bytecode hashes
-
-On ZKsync the bytecode hashes are stored in the following format:
-
-- The 0th byte denotes the version of the format. Currently the only version that is used is “1”.
-- The 1st byte is `0` for deployed contracts’ code and `1` for the contract code [that is being constructed](#constructing-vs-non-constructing-code-hash).
-- The 2nd and 3rd bytes denote the length of the contract in 32-byte words as big-endian 2-byte number.
-- The next 28 bytes are the last 28 bytes of the sha256 hash of the contract’s bytecode.
-
-The bytes are ordered in little-endian order (i.e. the same way as for `bytes32`).
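-
-As an illustration, such a versioned hash could be assembled along the following lines (a minimal sketch; the `require` checks mirror the validity rules described below):
-
-```solidity
-// Sketch: assemble the versioned bytecode hash in the format described above.
-function hashL2Bytecode(bytes memory bytecode) pure returns (bytes32 hashedBytecode) {
-    require(bytecode.length % 32 == 0, "bytecode length is not a multiple of 32");
-    uint256 lengthInWords = bytecode.length / 32;
-    require(lengthInWords < 2 ** 16, "bytecode is too long");
-    require(lengthInWords % 2 == 1, "bytecode length in words must be odd");
-    // Bytes 4..31: the last 28 bytes of sha256(bytecode).
-    hashedBytecode = sha256(bytecode) & 0x00000000FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF;
-    // Byte 0: the format version (1); byte 1 is left as 0 (deployed code);
-    // bytes 2..3: the length in 32-byte words as a big-endian number.
-    hashedBytecode = hashedBytecode | bytes32(uint256(1 << 248)) | bytes32(lengthInWords << 224);
-}
-```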
-
-#### Bytecode validity
-
-A bytecode is valid if it:
-
-- Has its length in bytes divisible by 32 (i.e. consists of an integer number of 32-byte words).
-- Has a length of less than 2^16 words (i.e. its length in words fits into 2 bytes).
-- Has an odd length in words (i.e. the 3rd byte is an odd number).
-
-Note, that it does not have to consist of only correct opcodes. In case the VM encounters an invalid opcode, it will simply revert (similar to how EVM would treat them).
-
-A call to a contract with invalid bytecode can not be proven. That is why it is **essential** that no contract with invalid bytecode is ever deployed on ZKsync. It is the job of the [KnownCodesStorage](#knowncodestorage) to ensure that all allowed bytecodes in the system are valid.
-
-## Account abstraction
-
-One of the other important features of ZKsync is the support of account abstraction. It is highly recommended to read the documentation on our AA protocol here: [https://docs.zksync.io/zksync-protocol/era-vm/account-abstraction](https://docs.zksync.io/zksync-protocol/era-vm/account-abstraction)
-
-#### Account versioning
-
-Each account can also specify which version of the account abstraction protocol it supports. This is needed to allow breaking changes of the protocol in the future.
-
-Currently, two versions are supported: `None` (i.e. it is a simple contract and should never be used as the `from` field of a transaction), and `Version1`.
-
-#### Nonce ordering
-
-Accounts can also signal to the operator which nonce ordering it should expect from them: `Sequential` or `Arbitrary`.
-
-`Sequential` means that the nonces should be ordered in the same way as in EOAs. This means, that, for instance, the operator will always wait for a transaction with nonce `X` before processing a transaction with nonce `X+1`.
-
-`Arbitrary` means that the nonces can be ordered in any order. It is not supported by the server right now, i.e. if there is a contract with arbitrary nonce ordering, its transactions will likely either be rejected or get stuck in the mempool due to a nonce mismatch.
-
-Note that this is not enforced by the system contracts in any way. Some sanity checks may be present, but the accounts are allowed to do whatever they like. It is more of a suggestion to the operator on how to manage the mempool.
-
-#### Returned magic value
-
-Both accounts and paymasters are required to return a certain magic value upon validation. This magic value is enforced to be correct on the mainnet, but is ignored during fee estimation. Unlike on Ethereum, the signature verification and the fee charging/nonce increment are not included in the intrinsic costs of the transaction. They are paid as part of the execution and so they need to be estimated as part of the estimation of the transaction’s costs.
-
-Generally, accounts are recommended to perform the same operations as during normal validation, but only return the invalid magic at the very end of the validation. This allows the cost of the account’s validation to be estimated correctly (or at least as correctly as possible).
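-
-A sketch of this recommended pattern (assuming the `Transaction` struct and the `IAccount` interface from the system contracts; `_isValidSignature` is a placeholder for the account's own check):
-
-```solidity
-// Sketch: do the full validation work unconditionally and decide on the
-// returned magic only at the end, so that fee estimation (which ignores the
-// magic) measures realistic gas usage.
-abstract contract AccountSketch {
-    bytes4 constant ACCOUNT_VALIDATION_SUCCESS_MAGIC = IAccount.validateTransaction.selector;
-
-    function _isValidSignature(bytes32 signedHash, bytes memory signature) internal view virtual returns (bool);
-
-    function validateTransaction(
-        bytes32, // _txHash
-        bytes32 _suggestedSignedHash,
-        Transaction calldata _transaction
-    ) external payable returns (bytes4 magic) {
-        bool validSignature = _isValidSignature(_suggestedSignedHash, _transaction.signature);
-        if (validSignature) {
-            magic = ACCOUNT_VALIDATION_SUCCESS_MAGIC;
-        }
-        // Otherwise `magic` stays zero: rejected on mainnet, ignored during estimation.
-    }
-}
-```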
-
-## Bootloader
-
-The bootloader is the program that accepts an array of transactions and executes the entire ZKsync batch. This section expands on its invariants and methods.
-
-### Playground bootloader vs proved bootloader
-
-For convenience, we use the same implementation of the bootloader both for the mainnet batches and for emulating ethCalls or other testing activities. _Only_ the _proved_ bootloader is ever used for batch-building, and thus this document describes only it.
-
-### Start of the batch
-
-It is enforced by the ZKPs that the state of the bootloader is equivalent to the state of a contract transaction with empty calldata. The only difference is that it starts with all the possible memory pre-allocated (to avoid costs for memory expansion).
-
-For additional efficiency (and our convenience), the bootloader receives its parameters inside its memory. This is the only point of non-determinism: the bootloader _starts with its memory pre-filled with any data the operator wants_. That’s why the bootloader is responsible for validating the correctness of this data and should never rely on the initial contents of the memory being correct & valid.
-
-For instance, for each transaction, we check that it is [properly ABI-encoded](../../system-contracts/bootloader/bootloader.yul#L4044) and that the transactions [go exactly one after another](../../system-contracts/bootloader/bootloader.yul#L4037). We also ensure that transactions do not exceed the limits of the memory space allowed for transactions.
-
-### Transaction types & their validation
-
-While the main transaction format is the internal `Transaction` [format](../../system-contracts/contracts/libraries/TransactionHelper.sol#L25), it is a struct that is used to represent various kinds of transaction types. It contains a number of `reserved` fields that can be repurposed by future transaction types without the need for accounts to change the interfaces of their contracts.
-
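-For reference, an abridged sketch of this struct (see the linked file for the authoritative definition):
-
-```solidity
-// Abridged sketch of the internal Transaction struct.
-struct Transaction {
-    uint256 txType; // 0, 1, 2, 113, 254 or 255 (see below)
-    uint256 from;
-    uint256 to;
-    uint256 gasLimit;
-    uint256 gasPerPubdataByteLimit;
-    uint256 maxFeePerGas;
-    uint256 maxPriorityFeePerGas;
-    uint256 paymaster;
-    uint256 nonce;
-    uint256 value;
-    // Reserved fields whose meaning depends on the transaction type.
-    uint256[4] reserved;
-    bytes data;
-    bytes signature;
-    bytes32[] factoryDeps;
-    bytes paymasterInput;
-    bytes reservedDynamic;
-}
-```
-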
-The exact type of the transaction is denoted by the `txType` field of the transaction struct. There are 6 types currently supported:
-
-- `txType`: 0. It means that this transaction is of the legacy transaction type. The following restrictions are enforced:
-  - `maxFeePerErgs=getMaxPriorityFeePerErg`, since it is a pre-EIP1559 tx type.
-  - `reserved1..reserved4` as well as `paymaster` are 0. `paymasterInput` is zero.
-  - Note that, unlike type 1 and type 2 transactions, the `reserved0` field can be set to a non-zero value, denoting that this legacy transaction is EIP-155-compatible and its RLP encoding (as well as its signature) should contain the `chainId` of the system.
-- `txType`: 1. It means that the transaction is of type 1, i.e. a transaction with an access list. ZKsync does not support access lists in any way, so no benefits of supplying such a list are provided. The access list is assumed to be empty. The same restrictions as for type 0 are enforced, but additionally `reserved0` must be 0.
-- `txType`: 2. EIP-1559 transactions. The same restrictions as for type 1 apply, except that `maxFeePerErgs` may differ from `getMaxPriorityFeePerErg`.
-- `txType`: 113. The ZKsync transaction type, intended for AA support. The only restriction that applies to this transaction type: fields `reserved0..reserved4` must be equal to 0.
-- `txType`: 254. A transaction type that is used for upgrading the L2 system. This is the only type of transaction that is allowed to start a transaction out of the name of a contract in kernel space.
-- `txType`: 255. A transaction that comes from L1. There are almost no restrictions explicitly imposed upon this type of transaction, since the bootloader at the end of its execution sends the rolling hash of the executed priority transactions. The L1 contract ensures that this hash does indeed match the [hashes of the priority transactions on L1](../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L376).
-
-You can also read more on L1->L2 transactions and upgrade transactions [here](../settlement_contracts/priority_queue/processing_of_l1-l2_txs.md).
-
-However, as already stated, the bootloader’s memory is not deterministic and the operator is free to put anything it wants there. For all of the transaction types above, the restrictions are imposed by the following [method](../../system-contracts/bootloader/bootloader.yul#L3107), which is called before the processing of the transaction begins.
-
-### Structure of the bootloader’s memory
-
-The bootloader expects the following structure of the memory (here by a word we denote 32 bytes, the same machine word as on the EVM):
-
-#### **Batch information**
-
-The first 8 words are reserved for the batch information provided by the operator.
-
-- `0` word — the address of the operator (the beneficiary of the transactions).
-- `1` word — the hash of the previous batch. Its validation will be explained later on.
-- `2` word — the timestamp of the current batch. Its validation will be explained later on.
-- `3` word — the number of the new batch.
-- `4` word — the fair pubdata price. More on how our pubdata is calculated can be read [here](./zksync_fee_model.md).
-- `5` word — the “fair” price for L2 gas, i.e. the price below which the `baseFee` of the batch should not fall. For now, it is provided by the operator, but in the future it may become hardcoded.
-- `6` word — the base fee for the batch that is expected by the operator. While the base fee is deterministic, it is still provided to the bootloader just to make sure that the data that the operator has coincides with the data provided by the bootloader.
-- `7` word — reserved word. Unused on proved batch.
-
-The batch information slots [are used at the beginning of the batch](../../system-contracts/bootloader/bootloader.yul#L3921). Once read, these slots can be used for temporary data.
-
-#### **Temporary data for debug & transaction processing purposes**
-
-- `[8..39]` – reserved slots for debugging purposes
-- `[40..72]` – slots for holding the paymaster context data for the current transaction. The role of the paymaster context is similar to the [EIP4337](https://eips.ethereum.org/EIPS/eip-4337)’s one. You can read more about it in the account abstraction documentation.
-- `[73..74]` – slots for the signed and explorer transaction hashes of the currently processed L2 transaction.
-- `[75..142]` – 68 slots for the calldata for the KnownCodesContract call.
-- `[143..10142]` – 10000 slots for the refunds for the transactions.
-- `[10143..20142]` – 10000 slots for the per-transaction batch overheads. These overheads are suggested by the operator, and the bootloader double-checks that the operator does not overcharge the user.
-- `[20143..30142]` – slots for the “trusted” gas limits by the operator. The user’s transaction will have at its disposal `min(MAX_TX_GAS(), trustedGasLimit)`, where `MAX_TX_GAS` is a constant guaranteed by the system. Currently, it is equal to 80 million gas. In the future, this feature will be removed.
-- `[30143..70146]` – slots for storing L2 block info for each transaction. You can read more on the difference between L2 blocks and batches [here](./batches_and_blocks_on_zksync.md).
-- `[70147..266754]` – slots used for compressed bytecodes each in the following format:
-  - 32-byte bytecode hash
-  - 32 zero bytes (but then the slot will be modified by the bootloader to contain 28 zero bytes followed by the 4-byte selector of the `publishCompressedBytecode` function of the `BytecodeCompressor`)
- - The calldata to the bytecode compressor (without the selector).
-- `[266755..266756]` – slots where the hash and the number of current priority ops is stored. More on it in the priority operations [section](../settlement_contracts/priority_queue/processing_of_l1-l2_txs.md).
-
-#### L1Messenger Pubdata
-
-- `[266757..1626756]` – slots where the final batch pubdata is supplied to be verified by the [L2DAValidator](../settlement_contracts/data_availability/custom_da.md).
-
-But briefly, this space is used for the calldata to the L1Messenger’s `publishPubdataAndClearState` function, which accepts the address of the L2DAValidator as well as the pubdata for it to check. The L2DAValidator is a contract that is responsible for ensuring that pubdata is [handled efficiently](../settlement_contracts/data_availability/custom_da.md). Typically, the calldata for the `L2DAValidator` would include the uncompressed preimages for bytecodes, L2->L1 messages, L2->L1 logs, etc., as well as their compressed counterparts. However, the exact implementation may vary across various ZK chains.
-
-Note that while the realistic amount of pubdata that can be published in a batch is ~780kb, the size of the calldata to the L1Messenger may be a lot larger due to the fact that this method also accepts the original uncompressed state diff entries. These will not be published to L1, but will be used to verify the correctness of the compression.
-
-One of the "worst case" scenarios for the number of state diffs in a batch is when 780kb of pubdata is spent on repeated writes that are all zeroed out. In this case, the number of diffs is 780kb / 5 = 156k. Since each uncompressed state diff entry occupies 272 bytes, we have to accommodate 42432000 bytes of calldata for the uncompressed state diffs. Adding 780kb on top leaves us with roughly 43212000 bytes needed for calldata, which requires 1350375 slots. We round up to 1360000 slots just in case.
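-
-As a sanity check, the arithmetic above can be reproduced with a few constants (the 272-byte size of an uncompressed state diff entry is an assumption taken from the standard pubdata format):
-
-```solidity
-// Illustrative constants reproducing the slot calculation above.
-uint256 constant MAX_PUBDATA_BYTES = 780_000; // ~780kb of pubdata per batch
-uint256 constant REPEATED_WRITE_PUBDATA = 5; // bytes of pubdata per zeroed-out repeated write
-uint256 constant UNCOMPRESSED_DIFF_SIZE = 272; // bytes per uncompressed state diff entry
-
-uint256 constant MAX_DIFFS = MAX_PUBDATA_BYTES / REPEATED_WRITE_PUBDATA; // 156_000
-uint256 constant CALLDATA_BYTES = MAX_DIFFS * UNCOMPRESSED_DIFF_SIZE + MAX_PUBDATA_BYTES; // 43_212_000
-uint256 constant SLOTS_NEEDED = (CALLDATA_BYTES + 31) / 32; // 1_350_375, rounded up to 1_360_000
-```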
-
-In theory though, much more calldata could be used (if, for instance, 1 byte is used for the enum index). It is the responsibility of the operator to ensure that it can form the correct calldata for the L1Messenger.
-
-#### **Transaction’s meta descriptions**
-
-- `[1626756..1646756]` words — 20000 slots for 10000 transactions’ meta descriptions (their structure is explained below).
-
-For internal reasons related to possible future integrations of zero-knowledge proofs about some of the contents of the bootloader’s memory, the array of the transactions is not passed as the ABI-encoding of an array of transactions. Instead:
-
-- We have a constant maximum number of transactions. At the time of this writing, this number is 10000.
-- Then, we have 10000 transaction descriptions, each ABI encoded as the following struct:
-
-```solidity
-struct BootloaderTxDescription {
-    // The offset by which the ABI-encoded transaction's data is stored
-    uint256 txDataOffset;
-    // Auxiliary data on the transaction's execution. In our internal versions
-    // of the bootloader it may have some special meaning, but for the
-    // bootloader used on the mainnet it has only one meaning: whether to execute
-    // the transaction. If 0, no more transactions should be executed. If 1, then
-    // we should execute this transaction and possibly try to execute the next one.
-    uint256 txExecutionMeta;
-}
-```
-
-#### **Reserved slots for the calldata for the paymaster’s postOp operation**
-
-- `[1646756..1646795]` words — 40 slots which could be used for encoding the calls for postOp methods of the paymaster.
-
-To avoid additional copying of transactions for the account abstraction calls, we reserve some of the slots which can then be used to form the calldata for the `postOp` call to the paymaster without having to copy the entire transaction’s data.
-
-#### **The actual transaction’s descriptions**
-
-- `[1646796..1967599]`
-
-Starting from word 1646796, the actual descriptions of the transactions start. (The struct can be found via this [link](../../system-contracts/contracts/libraries/TransactionHelper.sol#L25)). The bootloader enforces that:
-
-- They are correctly ABI encoded representations of the struct above.
-- They are located without any gaps in memory (the first transaction starts at the beginning of this memory area and each subsequent transaction follows right after the previous one).
-- The contents of the currently processed transaction (and of the ones that will be processed later on) are untouched. Note that we do allow overriding the data of already processed transactions, as it helps to preserve efficiency by not having to copy the contents of the `Transaction` each time we need to encode a call to the account.
-
-#### **VM hook pointers**
-
-- `[1967600..1967602]`
-
-These are memory slots that are used purely for debugging purposes (when the VM writes to these slots, the server side can catch these writes and surface important information for debugging issues).
-
-#### **Result ptr pointer**
-
-- `[1967602..1977602]`
-
-These are memory slots that are used to track the success status of a transaction. If the transaction with number `i` succeeded, the `i`-th slot of this area will be set to 1, and to 0 otherwise.
-
-### General flow of the bootloader’s execution
-
-1. At the start of the batch it [reads the initial batch information](../../system-contracts/bootloader/bootloader.yul#L3928) and [sends the information](../../system-contracts/bootloader/bootloader.yul#L2857) about the current batch to the SystemContext system contract.
-2. It goes through each of the [transaction’s descriptions](../../system-contracts/bootloader/bootloader.yul#L4016) and checks whether the `execute` field is set. If not, it ends the processing of transactions and ends the execution of the batch. If the `execute` field is non-zero, the transaction will be executed and it goes to step 3.
-3. Based on the transaction’s type it decides whether the transaction is an L1 or L2 transaction and processes them accordingly. More on the processing of the L1 transactions can be read [here](#l1-l2-transactions). More on L2 transactions can be read [here](#l2-transactions).
-
-### L2 transactions
-
-On ZKsync, every address is a contract. Users can start transactions from their EOA accounts, because every address that does not have any contract deployed on it implicitly contains the code defined in the [DefaultAccount.sol](../../system-contracts/contracts/DefaultAccount.sol) file. Whenever anyone calls a contract that is not in kernel space (i.e. the address is ≥ 2^16) and does not have any contract code deployed on it, the code for `DefaultAccount` will be used as the contract’s code.
-
-Note that if you call an account that is in kernel space and does not have any code deployed there, the transaction will currently revert.
-
-We process the L2 transactions according to our account abstraction protocol: [https://docs.zksync.io/build/developer-reference/account-abstraction](https://docs.zksync.io/build/developer-reference/account-abstraction).
-
-1. We [deduct](../../system-contracts/bootloader/bootloader.yul#L1263) the transaction’s upfront payment for the overhead for the block’s processing. You can read more on how that works in the fee model [description](./zksync_fee_model.md).
-2. Then we calculate the gasPrice for these transactions according to the EIP1559 rules.
-3. We [conduct the validation step](../../system-contracts/bootloader/bootloader.yul#L1287) of the AA protocol:
-
-   - We calculate the hash of the transaction.
-   - If enough gas has been provided, we `near_call` the validation function in the bootloader. It sets `tx.origin` to the address of the bootloader and sets the `ergsPrice`. It also marks the factory dependencies provided by the transaction as known and then invokes the validation method of the account, verifying the returned magic.
-   - We call the account and, if needed, the paymaster to receive the payment for the transaction. Note that accounts may not use the `block.baseFee` context variable, so they have no way to know the exact sum to pay. That’s why accounts typically first send `tx.maxFeePerErg * tx.ergsLimit` and the bootloader [refunds](../../system-contracts/bootloader/bootloader.yul#L792) any excess funds sent.
-
-4. [We perform the execution of the transaction](../../system-contracts/bootloader/bootloader.yul#L1352). Note that if the sender is an EOA, `tx.origin` is set equal to the `from` value of the transaction. During the execution of the transaction, the publishing of the compressed bytecodes happens: for each factory dependency, if it has not been published yet and its hash is currently pointed to in the compressed bytecodes area of the bootloader, a call to the bytecode compressor is done. Also, at the end, a call to the KnownCodesStorage is done to ensure all the bytecodes have indeed been published.
-5. We [refund](../../system-contracts/bootloader/bootloader.yul#L1206) the user for any excess funds spent on the transaction:
-
-   - Firstly, the `postTransaction` operation is called on the paymaster.
-   - The bootloader asks the operator to provide a refund. During the first VM run without proofs, the operator directly inserts the refunds in the memory of the bootloader. During the run for the proved batches, the operator already knows which values have to be inserted there. You can read more about it in the [documentation](./zksync_fee_model.md) of the fee model.
-   - The bootloader refunds the user.
-
-6. We notify the operator about the [refund](../../system-contracts/bootloader/bootloader.yul#L1217) that was granted to the user. It will be used for the correct display of `gasUsed` for the transaction in the explorer.
-
-### L1->L2 transactions
-
-L1->L2 transactions are transactions that were initiated on L1. We assume that `from` has already authorized the L1→L2 transaction. Its L1 pubdata price as well as its `ergsPrice` are set on L1.
-
-Most of the steps from the execution of L2 transactions are omitted; we set `tx.origin` to the `from`, and `ergsPrice` to the one provided by the transaction. After that, we use [mimic_call](#zksync-specific-opcodes) to perform the operation itself out of the name of the sender account.
-
-Note that for L1→L2 transactions, the `reserved0` field denotes the amount of ETH that should be minted on L2 as a result of this transaction, and `reserved1` is the refund receiver address, i.e. the address that will receive the refund for the transaction, as well as the `msg.value` if the transaction fails.
-
-There are two kinds of L1->L2 transactions:
-
-- Priority operations, initiated by users (they have type `255`).
-- Upgrade transactions, that can be initiated during system upgrade (they have type `254`).
-
-You can read more about differences between those in the corresponding [document](../settlement_contracts/priority_queue/processing_of_l1-l2_txs.md).
-
-### End of the batch
-
-At the end of the batch, we set the `tx.origin` and `tx.gasprice` context variables to zero to save L1 gas on calldata, and we send the entire bootloader balance to the operator, effectively transferring the fees to it.
-
-Also, we [set](../../system-contracts/bootloader/bootloader.yul#L4110) the fictive L2 block’s data. Then, we call the system context to ensure that it publishes the timestamp of the L2 block as well as of the L1 batch. We also reset the `txNumberInBlock` counter to avoid its state diffs from being published on L1. You can read more about block processing on ZKsync [here](./batches_and_blocks_on_zksync.md).
-
-After that, we publish the hash as well as the number of priority operations in this batch. More on it [here](../settlement_contracts/priority_queue/processing_of_l1-l2_txs.md).
-
-Then, we call the L1Messenger system contract for it to compose the pubdata to be published on L1. You can read more about the pubdata processing [here](../settlement_contracts/data_availability/standard_pubdata_format.md).
-
-## System contracts
-
-Most of the details on the implementation and the requirements for the execution of system contracts can be found in the doc-comments of their respective code bases. This chapter serves only as a high-level overview of such contracts.
-
-All the code of the system contracts (including the `DefaultAccount`s) is part of the protocol and can only be changed via a system upgrade through L1.
-
-### SystemContext
-
-This contract is used to support various system parameters not included in the VM by default, i.e. `chainId`, `origin`, `ergsPrice`, `blockErgsLimit`, `coinbase`, `difficulty`, `baseFee`, `blockhash`, `block.number`, `block.timestamp`.
-
-It is important to note that the constructor is **not** run for this contract upon genesis, i.e. the constant context values are set on genesis explicitly. Notably, if in the future we want to upgrade the contracts, we will do it via [ContractDeployer](#contractdeployer--immutablesimulator) and so the constructor will be run.
-
-This contract is also responsible for ensuring the validity and consistency of batches and L2 blocks. The implementation itself is rather straightforward, but to better understand this contract, please take a look at the [page](./batches_and_blocks_on_zksync.md) about block processing on ZKsync.
-
-### AccountCodeStorage
-
-The code hashes of accounts are stored inside the storage of this contract. Whenever the VM calls a contract with address `address`, it retrieves the value under storage slot `address` of this system contract. If this value is non-zero, it uses it as the code hash of the account.
-
-Whenever a contract is called, the VM asks the operator to provide the preimage for the codehash of the account. That is why data availability of the code hashes is paramount.
-
-#### Constructing vs Non-constructing code hash
-
-In order to prevent contracts from being called while they are being constructed, we set the marker (i.e. the second byte of the bytecode hash of the account) to `1`. This way, the VM will ensure that whenever the contract is called without the `isConstructor` flag, the bytecode of the default account (i.e. the EOA) will be substituted instead of the original bytecode.
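-
-A hypothetical helper illustrating how this marker maps onto the versioned hash layout (byte 1 of the `bytes32`):
-
-```solidity
-// Sketch: set byte 1 (the constructing marker) of a versioned bytecode hash to 0x01.
-function constructingBytecodeHash(bytes32 codeHash) pure returns (bytes32) {
-    // Byte 1 of a bytes32 occupies bits 240..247.
-    return (codeHash & ~bytes32(uint256(0xff) << 240)) | bytes32(uint256(0x01) << 240);
-}
-```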
-
-### BootloaderUtilities
-
-This contract contains some of the methods which are needed purely for the bootloader functionality but were moved out from the bootloader itself for the convenience of not writing this logic in Yul.
-
-### DefaultAccount
-
-The code of the default account is used for every contract that both:
-
-- does **not** belong to kernel space, and
-- does **not** have any code deployed on it (the value stored under the corresponding storage slot in `AccountCodeStorage` is zero).
-
-The main purpose of this contract is to provide an EOA-like experience for both wallet users and contracts that call it, i.e. it should not be distinguishable (apart from spent gas) from EOA accounts on Ethereum.
-
-### Ecrecover
-
-The implementation of the ecrecover precompile. It is expected to be used frequently, so it is written in pure Yul with a custom memory layout.
-
-The contract accepts the calldata in the same format as the EVM precompile, i.e. the first 32 bytes are the hash, the next 32 bytes are the `v`, the next 32 bytes are the `r`, and the last 32 bytes are the `s`.
-
-It also validates the input by the same rules as the EVM precompile:
-
-- The `v` should be either 27 or 28,
-- The `r` and `s` should be less than the curve order.
-
-After that, it makes a precompile call and returns empty bytes if the call failed, and the recovered address otherwise.
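-
-For illustration, a caller could use it in the same way as the EVM precompile (a sketch, assuming the precompile lives at address `0x01` as on Ethereum):
-
-```solidity
-// Sketch: call the ecrecover precompile with the calldata layout described above.
-address constant ECRECOVER_ADDRESS = 0x0000000000000000000000000000000000000001;
-
-function recoverSigner(bytes32 hash, uint256 v, bytes32 r, bytes32 s) view returns (address signer) {
-    (bool success, bytes memory result) = ECRECOVER_ADDRESS.staticcall(abi.encode(hash, v, r, s));
-    // Empty returndata signals that the input was invalid / recovery failed.
-    if (success && result.length == 32) {
-        signer = abi.decode(result, (address));
-    }
-}
-```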
-
-### Empty contracts
-
-Some contracts are relied upon to have EOA-like behaviour, i.e. they can always be called and return success. An example of such an address is the zero address. We also require the bootloader to be callable so that users can transfer ETH to it.
-
-For these contracts, we insert the `EmptyContract` code upon genesis. It is basically no-op code, which does nothing and returns `success=1`.
-
-### SHA256 & Keccak256
-
-Note that, unlike Ethereum, keccak256 is a precompile (_not an opcode_) on ZKsync.
-
-These system contracts act as wrappers for their respective crypto precompile implementations. They are expected to be used frequently, especially keccak256, since Solidity computes storage slots for mappings and dynamic arrays with its help. That's why we wrote these contracts in pure Yul, optimizing for the short input case. In the past, both `sha256` and `keccak256` performed padding within the smart contracts; now `sha256` performs padding in the smart contract while `keccak256` performs it in the zk-circuits. The hashing itself is completed for both within the zk-circuits.
-
-It's important to note that the crypto part of the `sha256` precompile expects to work with padded data. This means that a bug in applying padding may lead to an unprovable transaction.
-
-### EcAdd & EcMul
-
-These precompiles simulate the behaviour of the EVM's EcAdd and EcMul precompiles and are fully implemented in Yul without circuit counterparts. You can read more about them [here](./elliptic_curve_precompiles.md).
-
-### L2BaseToken & MsgValueSimulator
-
-Unlike Ethereum, the zkEVM does not have any notion of a special native token. That’s why we have to simulate operations with the native token (in which fees are charged) via two contracts: `L2BaseToken` & `MsgValueSimulator`.
-
-`L2BaseToken` is a contract that holds the balances of the native token for the users. This contract does NOT provide an ERC20 interface. The only method for transferring the native token is `transferFromTo`, and it permits only some system contracts to transfer on behalf of users. This is needed to ensure that the interface is as close to Ethereum as possible, i.e. the only way to transfer the native token is by doing a call to a contract with some `msg.value`. This is what the `MsgValueSimulator` system contract is for.
-
-Whenever anyone wants to do a non-zero value call, they need to call `MsgValueSimulator` with:
-
-- The calldata for the call equal to the original one.
-- The `value` and whether the call should be marked with `isSystem`, passed in the first extra ABI param.
-- The address of the callee, passed in the second extra ABI param.
-
-More information on the extraAbiParams can be read [here](#flags-for-calls).
-
-#### Support for `.send/.transfer`
-
-On Ethereum, whenever a call with a non-zero value is done, some additional gas is charged from the caller's frame and in return a `2300` gas stipend is given out to the callee frame. This stipend is usually enough to emit a small event, but it is enforced that it is not possible to change storage within these `2300` gas. This also means that in practice some users might opt to do a call with 0 gas provided, relying on the `2300` stipend to be passed to the callee. This is the case for `.send/.transfer`.
-
-While using `.send/.transfer` is generally not recommended, as a step towards better EVM compatibility, since vm1.5.0 _partial_ support of these functions is present in ZKsync Era. It is done via the following means:
-
-- Whenever a call is done to the `MsgValueSimulator` system contract, `27000` gas is deducted from the caller's frame and passed to the `MsgValueSimulator` on top of whatever gas the user has originally provided. The number was chosen to cover the execution of the balance transfer as well as other constant-size operations by the `MsgValueSimulator`. Note that since it is the frame of the `MsgValueSimulator` that will actually call the callee, the constant must also include the cost of decommitting the code of the callee. Decommitting bytecode of any size would be prohibitively expensive, and so we support only callees with bytecode of size up to `100000` bytes.
-- `MsgValueSimulator` ensures that no more than `2300` out of the stipend above gets to the callee, ensuring the reentrancy protection invariant for these functions holds.
-
-Note that, unlike on the EVM, any unused gas from such calls will be refunded.
-
-The system preserves the following guarantees about `.send/.transfer`:
-
-- No more than `2300` gas will be received by the callee. Note, [that a smaller, but a close amount](../../system-contracts/contracts/test-contracts/TransferTest.sol#L33) may be passed.
-- It is not possible to do any storage changes within this stipend. This is enforced by cold writes costing more than `2300` gas; also, the cold write cost always has to be prepaid whenever executing storage writes. More on it can be read [here](../l2_system_contracts/zksync_fee_model.md#io-pricing).
-- Any callee with bytecode size of up to `100000` will work.
-
-The system does not guarantee the following:
-
-- That callees with bytecode size larger than `100000` will work. Note, that a malicious operator can fail any call to a callee with large bytecode even if it has been decommitted before. More on it can be read [here](../l2_system_contracts/zksync_fee_model.md#io-pricing).
-
-In conclusion, using `.send/.transfer` should generally be avoided, but when avoiding it is not possible, it should be used with small callees, e.g. EOAs, which implement [DefaultAccount](../../system-contracts/contracts/DefaultAccount.sol).
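-
-For completeness, the generic Solidity pattern that sidesteps the stipend question altogether (standard practice, not ZKsync-specific):
-
-```solidity
-// Sketch: prefer an explicit call that forwards all gas over .send/.transfer.
-contract ValueSender {
-    function sendValue(address payable recipient, uint256 amount) external {
-        (bool success, ) = recipient.call{value: amount}("");
-        require(success, "native token transfer failed");
-    }
-}
-```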
-
-### KnownCodeStorage
-
-This contract is used to store whether a certain code hash is “known”, i.e. can be used to deploy contracts. On ZKsync, the L2 stores the contract’s code _hashes_ and not the codes themselves. Therefore, it must be part of the protocol to ensure that no contract with unknown bytecode (i.e. hash with an unknown preimage) is ever deployed.
-
-The factory dependencies field provided by the user for each transaction contains the list of the contract bytecode hashes to be marked as known. We can not simply trust the operator to “know” these bytecode hashes, as the operator might be malicious and hide the preimage. We ensure the availability of the bytecode in the following way:
-
-- If the transaction comes from L1, i.e. all its factory dependencies have already been published on L1, we can simply mark these dependencies as “known”.
-- If the transaction comes from L2, i.e. the factory dependencies are yet to be published on L1, we make the user pay by burning ergs proportional to the bytecode’s length. After that, we send the L2→L1 log with the bytecode hash of the contract. It is the responsibility of the L1 contracts to verify that the corresponding bytecode hash has been published on L1.
-
-It is the responsibility of the [ContractDeployer](#contractdeployer--immutablesimulator) system contract to deploy only those code hashes that are known.
-
-The KnownCodesStorage contract is also responsible for ensuring that all the “known” bytecode hashes are also [valid](#bytecode-validity).
-
-### ContractDeployer & ImmutableSimulator
-
-`ContractDeployer` is a system contract responsible for deploying contracts on ZKsync. It is better to understand how it works in the context of how the contract deployment works on ZKsync. Unlike Ethereum, where `create`/`create2` are opcodes, on ZKsync these are implemented by the compiler via calls to the ContractDeployer system contract.
-
-For additional security, we also distinguish the deployment of normal contracts and accounts. That’s why the main methods that will be used by the user are `create`, `create2`, `createAccount`, `create2Account`, which simulate the CREATE-like and CREATE2-like behavior for deploying normal and account contracts respectively.
-
-#### **Address derivation**
-
-Each rollup that supports L1→L2 communications needs to make sure that the addresses of contracts on L1 and L2 do not overlap during such communication (otherwise it would be possible that some evil proxy on L1 could mutate the state of the L2 contract). Generally, rollups solve this issue in two ways:
-
-- XOR/ADD some kind of constant to addresses during L1→L2 communication. That’s how rollups closer to full EVM-equivalence solve it, since it allows them to maintain the same derivation rules as on L1, at the expense of contract accounts on L1 having to redeploy on L2.
-- Have different derivation rules from Ethereum. That is the path that ZKsync has chosen, mainly because since we have different bytecode than on EVM, CREATE2 address derivation would be different in practice anyway.
-
-You can see the rules for our address derivation in the `getNewAddressCreate2`/`getNewAddressCreate` methods in the ContractDeployer.
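-
-For intuition, a sketch of what such a CREATE2-style derivation can look like (the prefix constant and field layout here are assumptions; consult `getNewAddressCreate2` for the authoritative rules):
-
-```solidity
-// Sketch of ZKsync-style CREATE2 address derivation.
-bytes32 constant CREATE2_PREFIX = keccak256("zksyncCreate2");
-
-function newAddressCreate2(
-    address sender,
-    bytes32 bytecodeHash,
-    bytes32 salt,
-    bytes memory input
-) pure returns (address newAddress) {
-    bytes32 hash = keccak256(
-        bytes.concat(
-            CREATE2_PREFIX,
-            bytes32(uint256(uint160(sender))),
-            salt,
-            bytecodeHash,
-            keccak256(input)
-        )
-    );
-    // Only the lowest 20 bytes of the hash form the address.
-    newAddress = address(uint160(uint256(hash)));
-}
-```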
-
-Note, that we still add a certain constant to the addresses during L1→L2 communication in order to allow ourselves some way to support EVM bytecodes in the future.
-
-#### **Deployment nonce**
-
-On Ethereum, the same nonce is used both for deploying contracts via CREATE and for regular transactions from EOA wallets. On ZKsync this is not the case: we use a separate nonce, called the “deployment nonce”, to track the deployment nonces of accounts. This was done mostly for consistency with custom accounts and for supporting a multicall feature in the future.
-
-#### **General process of deployment**
-
-- After incrementing the deployment nonce, the contract deployer must ensure that the bytecode that is being deployed is available.
-- After that, it puts the bytecode hash with a [special constructing marker](#constructing-vs-non-constructing-code-hash) as code for the address of the to-be-deployed contract.
-- Then, if there is any value passed with the call, the contract deployer passes it to the deployed account and sets the `msg.value` for the next call as equal to this value.
-- Then, it uses `mimic_call` for calling the constructor of the contract out of the name of the account.
-- It parses the array of immutables returned by the constructor (we’ll talk about immutables in more detail later).
-- Calls `ImmutableSimulator` to set the immutables that are to be used for the deployed contract.
-
-Note how it is different from the EVM approach: on EVM when the contract is deployed, it executes the initCode and returns the deployedCode. On ZKsync, contracts only have the deployed code and can set immutables as storage variables returned by the constructor.
-
-#### **Constructor**
-
-On Ethereum, the constructor is only a part of the initCode that gets executed during the deployment of the contract and returns the deployment code of the contract. On ZKsync, there is no separation between deployed code and constructor code. The constructor is always a part of the deployment code of the contract. In order to protect it from being called after deployment, the compiler-generated contracts invoke the constructor only if the `isConstructor` flag is provided (it is only available for the system contracts). You can read more about flags [here](#flags-for-calls).
-
-After execution, the constructor must return an array of:
-
-```solidity
-struct ImmutableData {
-    uint256 index;
-    bytes32 value;
-}
-```
-
-basically denoting an array of immutables passed to the contract.
-
-#### **Immutables**
-
-Immutables are stored in the `ImmutableSimulator` system contract. How the `index` of each immutable is defined is part of the compiler specification. This contract treats it simply as a mapping from an index to a value for each particular address.
-
-Whenever a contract needs to access the value of some immutable, it calls `ImmutableSimulator.getImmutable(getCodeAddress(), index)`. Note that on ZKsync it is possible to get the current execution address (you can read more about `getCodeAddress()` [here](#zksync-specific-opcodes)).
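-
-A sketch of the storage layout this implies (illustrative; see the `ImmutableSimulator` source for the real contract):
-
-```solidity
-// Sketch: immutables as a per-address index => value mapping.
-contract ImmutableSimulatorSketch {
-    // mapping(contract address => mapping(immutable index => value))
-    mapping(uint256 => mapping(uint256 => bytes32)) internal immutableDataStorage;
-
-    function getImmutable(address _dest, uint256 _index) external view returns (bytes32) {
-        return immutableDataStorage[uint256(uint160(_dest))][_index];
-    }
-}
-```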
-
-#### **Return value of the deployment methods**
-
-If the call succeeded, the address of the deployed contract is returned. If the deployment fails, the error bubbles up.
-
-### DefaultAccount
-
-The implementation of the default account abstraction. This is the code that is used by default for all addresses that are not in kernel space and have no contract deployed on them. This address:
-
-- Contains minimal implementation of our account abstraction protocol. Note that it supports the [built-in paymaster flows](https://docs.zksync.io/zksync-protocol/era-vm/account-abstraction/paymasters).
-- When anyone (except the bootloader) calls it, it behaves in the same way as a call to an EOA, i.e. it always returns `success = 1, returndatasize = 0`.
-
-### L1Messenger
-
-A contract used for sending arbitrary-length L2→L1 messages from ZKsync to L1. While ZKsync natively supports only a rather limited number of L2→L1 logs, each of which can transfer only roughly 64 bytes of data at a time, we allow sending nearly arbitrary-length L2→L1 messages with the following trick:
-
-The L1 messenger receives a message, hashes it and sends only its hash as well as the original sender via L2→L1 log. Then, it is the duty of the L1 smart contracts to make sure that the operator has provided full preimage of this hash in the commitment of the batch.
-
-Note that the L1Messenger calls the L2DAValidator and plays an important role in facilitating the [DA validation protocol](../settlement_contracts/data_availability/custom_da.md).
-
-### NonceHolder
-
-Serves as storage for the nonces of our accounts. Besides making it easier for the operator to order transactions (i.e. by reading the current nonces of accounts), it also serves a separate purpose: making sure that the pair (address, nonce) is always unique.
-
-It provides a function `validateNonceUsage`, which the bootloader uses to check whether a nonce has been used for a certain account or not. The bootloader enforces that the nonce is marked as unused before the validation step of the transaction and marked as used afterwards. The contract ensures that once marked as used, the nonce can not be set back to the “unused” state.
-
-Note that nonces do not necessarily have to be monotonic (this is needed to support more interesting applications of account abstractions, e.g. protocols that can start transactions on their own, tornado-cash like protocols, etc). That’s why there are two ways to set a certain nonce as “used”:
-
-- By incrementing the `minNonce` for the account (thus making all nonces that are lower than `minNonce` as used).
-- By setting some non-zero value under the nonce via `setValueUnderNonce`. This way, this key will be marked as used and will no longer be allowed to be used as a nonce for the account. This is also rather efficient, since these 32 bytes could be used to store some valuable information.
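-
-Combined, the two mechanisms imply a check along these lines (a sketch with hypothetical getters following the prose above):
-
-```solidity
-// Sketch: a nonce is "used" if it is below minNonce or has a value set under it.
-abstract contract NonceHolderSketch {
-    function getMinNonce(address _account) public view virtual returns (uint256);
-    function getValueUnderNonce(address _account, uint256 _nonce) public view virtual returns (uint256);
-
-    function isNonceUsed(address _account, uint256 _nonce) public view returns (bool) {
-        return _nonce < getMinNonce(_account) || getValueUnderNonce(_account, _nonce) > 0;
-    }
-}
-```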
-
-Upon creation, accounts can also specify which type of nonce ordering they want: Sequential (i.e. it should be expected that the nonces grow one by one, just like for an EOA) or Arbitrary (i.e. the nonces may have any values). This ordering is not enforced in any way by the system contracts; it is more of a suggestion to the operator on how it should order the transactions in the mempool.
-
-### EventWriter
-
-A system contract responsible for emitting events.
-
-It accepts in its 0-th extra ABI data param the number of topics. In the rest of the `extraAbiParams`, it accepts the topics for the event to emit. Note that in reality the first topic of the event contains the address of the account. Generally, users should not interact with this contract directly, but only through the Solidity syntax of `emit`-ing new events.
-
-### Compressor
-
-One of the most expensive resources for a rollup is data availability, so in order to reduce costs for the users we compress the published pubdata in several ways:
-
-- We compress published bytecodes.
-- We compress state diffs.
-
-The contract provides two methods:
-
-- `publishCompressedBytecode`, which verifies the correctness of the bytecode compression and publishes it in the form of a message to the DA layer.
-- `verifyCompressedStateDiffs` that can verify the correctness of our standard state diff compression. This method can be used by common L2DAValidators and it is for instance utilized by the [RollupL2DAValidator](../../l2-contracts/contracts/data-availability/RollupL2DAValidator.sol).
-
-You can read more about how custom DA is handled [here](../settlement_contracts/data_availability/custom_da.md).
-
-### Pubdata Chunk Publisher
-
-This contract is responsible for separating pubdata into chunks that each fit into a [4844 blob](../settlement_contracts/data_availability/rollup_da.md) and calculating the hash of the preimage of said blob. If a chunk's size is less than the total number of bytes for a blob, we pad it on the right with zeroes as the circuits will require that the chunk is of exact size.
-
-This contract can be utilized by L2DAValidators, e.g. [RollupL2DAValidator](../../l2-contracts/contracts/data-availability/RollupL2DAValidator.sol) uses it to compress the pubdata into blobs.
-
-### CodeOracle
-
-It is a contract that accepts the versioned hash of a bytecode and returns the preimage of it. It is similar to the `extcodecopy` functionality on Ethereum.
-
-It works the following way:
-
-1. It accepts a versioned hash and double checks that it is marked as “known”, i.e. the operator must know the preimage for such hash.
-2. After that, it uses the `decommit` opcode, which accepts the versioned hash and the number of ergs to be spent, which is proportional to the length of the preimage. If the preimage has been decommitted before, the requested cost will be refunded to the user.
-
-   Note that the decommitment process happens not only via the `decommit` opcode, but also during calls to contracts. Whenever a contract is called, its code is decommitted into a memory page dedicated to contract code. We never decommit the same preimage twice: regardless of whether it was decommitted via an explicit opcode or during a call to another contract, the previously unpacked bytecode memory page will be reused. When executing `decommit` inside the `CodeOracle` contract, the user will first be precharged with the maximal possible price and then refunded in case the bytecode has been decommitted before.
-
-3. The `decommit` opcode returns a pointer to the slice of the decommitted bytecode. Note that the returned pointer always has a length of 2^21 bytes, regardless of the length of the actual bytecode. So it is the job of the `CodeOracle` system contract to shrink the length of the returned data.
-
-### P256Verify
-
-This contract exerts the same behavior as the P256Verify precompile from [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md). Note, that since Era has different gas schedule, we do not comply with the gas costs, but otherwise the interface is identical.
-
-### GasBoundCaller
-
-This is not a system contract, but it will be predeployed at a fixed user-space address. This contract allows users to set an upper bound on how much pubdata a subcall can take, regardless of the gas per pubdata. More on how pubdata works on ZKsync can be read [here](./zksync_fee_model.md).
-
-Note, that it is a deliberate decision not to deploy this contract in the kernel space, since it can relay calls to any contracts and so may break the assumption that all system contracts can be trusted.
-
-### ComplexUpgrader
-
-Usually an upgrade is performed by calling the `forceDeployOnAddresses` function of the ContractDeployer out of the name of the `FORCE_DEPLOYER` constant address. However, some upgrades may require more complex interactions, e.g. querying something from a contract to determine which calls to make, etc.
-
-For cases like this, the `ComplexUpgrader` contract has been created. The assumption is that the implementation of the upgrade is predeployed and the `ComplexUpgrader` delegatecalls to it.
-
-> Note that while `ComplexUpgrader` existed even in the previous upgrade, it lacked the `forceDeployAndUpgrade` function. This caused some serious limitations. More on how the gateway upgrade process looks can be read [here](<../upgrade_history/gateway_preparation_upgrade/upgrade_process_(no_gateway_chain).md>).
-
-### Predeployed contracts
-
-There are some contracts that need to be predeployed, but granting them kernel space rights is not desirable. Such contracts are usually predeployed at sequential addresses starting from `2^16`.
-
-### Create2Factory
-
-Just a built-in Create2Factory. It allows deterministic deployment of contracts to the same address on multiple chains.
-
-### L2GenesisUpgrade
-
-A contract that is responsible for facilitating initialization of a newly created chain. This is part of a [chain creation flow](../chain_management/chain_genesis.md).
-
-### Bridging-related contracts
-
-`L2Bridgehub`, `L2AssetRouter`, `L2NativeTokenVault`, as well as `L2MessageRoot`.
-
-These contracts are used to facilitate cross-chain communication as well as value bridging. You can read more about them in [the asset router spec](../bridging/asset_router/overview.md).
-
-Note that while [L2AssetRouter](../../l1-contracts/contracts/bridge/asset-router/L2AssetRouter.sol) and [L2NativeTokenVault](../../l1-contracts/contracts/bridge/ntv/L2NativeTokenVault.sol) have unique code, the L2Bridgehub and L2MessageRoot share the same source code with their L1 counterparts, i.e. the L2Bridgehub has [this](../../l1-contracts/contracts/bridgehub/Bridgehub.sol) code and the L2MessageRoot has [this](../../l1-contracts/contracts/bridgehub/MessageRoot.sol) code.
-
-### SloadContract
-
-During the L2GatewayUpgrade, the system contracts need to read the storage of some other contracts, despite those lacking getters. How this is implemented can be seen in the `forcedSload` function of the [SystemContractHelper](../../system-contracts/contracts/libraries/SystemContractHelper.sol) contract.
-
-While it is only used for the upgrade, it was decided to leave it as a predeployed contract for future use-cases as well.
-
-### L2WrappedBaseTokenImplementation
-
-While bridging wrapped base tokens (e.g. WETH) is not yet supported, the address of the wrapped base token is enshrined within the native token vault (both the L1 and the L2 one). For consistency with other networks, our WETH token is deployed as a TransparentUpgradeableProxy. To make the deployment process easier, we predeploy the implementation.
-
-## Known issues to be resolved
-
-The protocol, while conceptually complete, contains some known issues which will be resolved in the short to middle term.
-
-- Fee modeling is yet to be improved. More on it in the [document](./zksync_fee_model.md) on the fee model.
-- We may add some kind of default implementation for the contracts in the kernel space (i.e. if called, they wouldn’t revert but behave like an EOA).
diff --git a/docs/l2_system_contracts/zksync_fee_model.md b/docs/l2_system_contracts/zksync_fee_model.md
deleted file mode 100644
index 5f3c44fe23..0000000000
--- a/docs/l2_system_contracts/zksync_fee_model.md
+++ /dev/null
@@ -1,309 +0,0 @@
-# ZKsync fee model
-
-[back to readme](../README.md)
-
-This document will assume that you already know how gas & fees work on Ethereum.
-
-On Ethereum, all the computational, as well as storage costs, are represented via one unit: gas. Each operation costs a certain amount of gas, which is generally constant (though it may change during [upgrades](https://blog.ethereum.org/2021/03/08/ethereum-berlin-upgrade-announcement)).
-
-## Main differences from EVM
-
-ZKsync, as well as other L2s, faces an issue that prevents the straightforward adoption of the same model as the one used for Ethereum: the requirement to publish the pubdata on Ethereum. This means that prices for L2 transactions will depend on the volatile L1 gas prices and can not simply be hardcoded.
-
-Also, ZKsync, being a zkRollup, is required to prove every operation with zero-knowledge proofs. That comes with a few nuances.
-
-### Different opcode pricing
-
-Operations tend to have different “complexity”/“pricing” in zero-knowledge-proof terms than in standard CPU terms. For instance, `keccak256`, which was optimized for CPU performance, will cost more to prove.
-
-That’s why you will find that the prices for operations on ZKsync differ significantly from those on Ethereum.
-
-### I/O pricing
-
-On Ethereum, whenever a storage slot is read/written to for the first time, a certain amount of gas is charged for the fact that the slot has been accessed for the first time. A similar mechanism is used for accounts: whenever an account is accessed for the first time, a certain amount of gas is charged for reading the account's data. On EVM, an account's data includes its nonce, balance, and code. We use a similar mechanism but with a few differences.
-
-#### Storage costs
-
-Just like EVM, we also support "warm" and "cold" storage slots. However, the flow is a bit different:
-
-1. The user is firstly precharged with the maximum (cold) cost.
-2. The operator is asked for a refund.
-3. Then, the refund is given out to the user in place.
-
-In other words, unlike EVM, the user should always have enough gas for the worst case (even if the storage slot is "warm"). Also, the control of the refunds is currently enforced by the operator only and not by the circuits.
-
-#### Code decommitment and account access costs
-
-Unlike EVM, our storage does not couple accounts' balances, nonces, and bytecodes. Balance, nonce, and code hash are three separate storage variables that use standard storage "warm" and "cold" mechanisms. A different approach is used for accessing bytecodes though.
-
-We call the process of unpacking the bytecode _code decommitment_, since it is the process of transforming a commitment to the code (i.e. the versioned code hash) into its preimage. Whenever a contract with a certain code hash is called, the following logic is executed:
-
-1. The operator is asked whether this is the first time this bytecode has been decommitted.
-2. If the operator returns "yes", then the user is charged the full cost. Otherwise, the user does not pay for decommit.
-3. If needed, the code is decommitted to the code page.
-
-Unlike storage interactions, the correctness of this process is _partially_ enforced by circuits: if step (3) is reached, i.e. the code is actually decommitted, it will be proven that the operator responded correctly in step (1). However, if the program runs out of gas in step (2), the correctness of the first statement won't be proven. The reason is that it is hard to prove in circuits, at the time the decommitment is invoked, whether it is indeed the first decommitment or not.
-
-Note that in the case of an honest operator, this approach offers a better UX, since there is no need to be precharged with the full cost beforehand. However, no program should rely on this fact.
-
-#### Conclusion
-
-In conclusion, ZKsync Era supports a similar "cold"/"warm" mechanism to the EVM, but for now it is only enforced by the operator, i.e. the users of applications should not rely on it. The execution is guaranteed to be correct as long as the user has enough gas to pay for the worst-case, i.e. "cold", scenario.
-
-### Memory pricing
-
-ZKsync Era has different memory pricing rules:
-
-- Whenever a user contract is called, `2^12` bytes of memory are given out for free, before starting to charge users linearly according to its length.
-- Whenever a kernel space (i.e., a system) contract is called, `2^21` bytes of memory are given out for free, before starting to charge users linearly according to the length.
- Note that, unlike EVM, we never use a quadratic component of the price for memory expansion.
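-
-A sketch of the resulting growth cost (purely illustrative: the per-byte price is an assumed VM parameter set to 1 erg here):
-
-```solidity
-// Sketch: linear memory growth cost with a free window and no quadratic term.
-function memoryGrowthCost(uint256 newSizeBytes, bool isKernelSpace) pure returns (uint256) {
-    uint256 freeBytes = isKernelSpace ? 1 << 21 : 1 << 12;
-    if (newSizeBytes <= freeBytes) return 0;
-    // Assumed price of 1 erg per byte past the free window.
-    return newSizeBytes - freeBytes;
-}
-```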
-
-### Different intrinsic costs
-
-Unlike Ethereum, where the intrinsic cost of transactions (`21000` gas) is used to cover the price of updating the users' balances, the nonce, and signature verification, on ZKsync these prices are _not_ included in the intrinsic costs for transactions. This is due to the native support of account abstraction, which means that each account type may have its own transaction cost; in theory, some may even use more zk-friendly signature schemes or other kinds of optimizations to allow cheaper transactions for their users.
-
-That being said, ZKsync transactions do come with some small intrinsic costs, but they are mostly used to cover costs related to the processing of the transaction by the bootloader which can not be easily measured in code in real-time. These are measured via testing and are hard coded.
-
-### Charging for pubdata
-
-An important cost factor for users is pubdata. ZKsync Era is a state diff-based rollup, meaning that pubdata is published not for transaction data, but for state changes: modified storage slots, deployed bytecodes, and L2->L1 messages. This allows applications that modify the same storage slot multiple times, such as oracles, to update the slot repeatedly while maintaining a constant footprint on L1 pubdata. Charging for pubdata correctly in a state diff rollup requires a special solution, which is explored in the next section.
-
-## How L2 gas price works
-
-### Batch overhead & limited resources of the batch
-
-To process a batch, the ZKsync team has to pay for proving the batch, committing to it, etc. Processing a batch involves some operational costs as well. We call all of these costs the “batch overhead”. It consists of two parts:
-
-- The L2 requirements for proving the circuits (denoted in L2 gas).
-- The L1 requirements for the proof verification as well as general batch processing (denoted in L1 gas).
-
-We generally try to aggregate as many transactions as possible and each transaction pays for the batch overhead proportionally to how close the transaction brings the batch to being _sealed,_ i.e. closed and prepared for proof verification and submission on L1. A transaction gets closer to sealing a batch by using the batch’s _limited resources_.
-
-While on Ethereum, the main reason for the existence of a batch gas limit is to keep the system decentralized & load low, i.e. assuming the existence of the correct hardware, only time would be a requirement for a batch to adhere to. In the case of ZKsync batches, there are some limited resources the batch should manage:
-
-- **Time.** The same as on Ethereum, the batch should generally not take too much time to be closed, to provide better UX. To represent the time needed, we use a batch gas limit; note that it is higher than the gas limit for a single transaction.
-- **Slots for transactions.** The bootloader has a limited number of slots for transactions, i.e. it cannot take more than a certain number of transactions per batch.
-- **The memory of the bootloader.** The bootloader needs to store the transaction’s ABI encoding in its memory, and this fills it up. In practical terms, it serves as a penalty for having transactions with large calldata/signatures in the case of custom accounts.
-- **Pubdata bytes.** To fully appreciate the gains from the storage diffs, i.e. the fact that changes in a single slot happening in the same batch need to be published only once, we publish all the batch’s public data only at the end of the batch. Right now, we publish all the data with the storage diffs as well as L2→L1 messages, etc. in a single transaction at the end of the batch. Most nodes have a limit of 128kb per transaction, so this is the limit that each ZKsync batch should adhere to.
-
-Each transaction spends the batch overhead proportionally to how closely it consumes the resources above.
-
-Note that before a transaction is executed, the system cannot know how many of the limited system resources it will take, so we need to charge for the worst case and provide a refund at the end of the transaction.
-
-### `MAX_TRANSACTION_GAS_LIMIT`
-
-`MAX_TRANSACTION_GAS_LIMIT` is the recommended maximum amount of gas that a transaction can spend on computation. However, if the operator trusts the user, the operator may provide the [trusted gas limit](../../system-contracts/bootloader/bootloader.yul#L1242), i.e. a limit that exceeds `MAX_TRANSACTION_GAS_LIMIT`, assuming that the operator knows what they are doing. This can be helpful in the case of a hyperchain with different parameters.
-
-### Derivation of `baseFee` and `gasPerPubdata`
-
-At the start of each batch, the operator provides the following two parameters:
-
-1. `FAIR_L2_GAS_PRICE`. This variable denotes the minimal L2 gas price that the operator is willing to accept. It is expected to cover the cost of proving/executing a single unit of zkEVM gas, the potential contribution of a single unit of gas towards sealing the batch, as well as congestion.
-2. `FAIR_PUBDATA_PRICE`, which is the price of a single pubdata byte in Wei. Similar to the variable above, it is expected to cover the cost of publishing a single byte as well as the potential contribution of a single pubdata byte towards sealing the batch.
-
-In the descriptions above by "contribution towards sealing the batch" we referred to the fact that if a batch is most often closed by a certain resource (e.g. pubdata), then the pubdata price should include this cost.
-
-The `baseFee` and `gasPerPubdata` are then calculated as:
-
-```yul
-baseFee := max(
- fairL2GasPrice,
- ceilDiv(fairPubdataPrice, MAX_L2_GAS_PER_PUBDATA())
-)
-gasPerPubdata := ceilDiv(fairPubdataPrice, baseFee)
-```
-
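-For instance, plugging made-up sample prices into these formulas (a sketch, not actual network values):
-
-```python
-MAX_L2_GAS_PER_PUBDATA = 2**20
-
-def ceil_div(a: int, b: int) -> int:
-    return -(-a // b)
-
-fair_l2_gas_price = 250_000_000       # 0.25 gwei, made-up value
-fair_pubdata_price = 450_000_000_000  # 450 gwei per pubdata byte, made-up value
-
-base_fee = max(fair_l2_gas_price, ceil_div(fair_pubdata_price, MAX_L2_GAS_PER_PUBDATA))
-gas_per_pubdata = ceil_div(fair_pubdata_price, base_fee)  # 1800 with these sample values
-```
-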
-While the way we [charge for pubdata](#how-we-charge-for-pubdata) in theory allows for any `gasPerPubdata`, some SDKs expect the `gasLimit` of a transaction to be a uint64 number. We would prefer the `gasLimit` for transactions to stay within JS's safe "number" range in case someone uses the `number` type to denote gas. For this reason, we bound `MAX_L2_GAS_PER_PUBDATA` to `2^20` gas per pubdata byte. The number is chosen such that `MAX_L2_GAS_PER_PUBDATA * 2^32` is a safe JS integer, where `2^32` is the maximal possible value of the pubdata counter that could in theory be used. It is unrealistic that this value will ever appear under an honest operator, but it is needed just in case.
-
-Note, however, that this means that under high L1 gas prices the total `gasLimit` may be larger than `u32::MAX`, and it is recommended that no more than `2^20` bytes of pubdata be published within a transaction.
-
-#### Recommended calculation of `FAIR_L2_GAS_PRICE`/`FAIR_PUBDATA_PRICE`
-
-Let's define the following constants:
-
-- `BATCH_OVERHEAD_L1_GAS` - The L1 gas overhead for a batch (proof verification, etc).
-- `COMPUTE_OVERHEAD_PART` - The constant that represents the possibility that a batch can be sealed because of overuse of computation resources. It ranges from 0 to 1: if it is 0, the compute price will not include the cost of closing the batch; if it is 1, the gas limit per batch will have to cover the entire cost of closing the batch.
-- `MAX_GAS_PER_BATCH` - The maximum amount of gas that can be used by the batch. This value is derived from the circuits' limitation per batch.
-- `PUBDATA_OVERHEAD_PART` - The constant that represents the possibility that a batch can be sealed because of overuse of pubdata. It ranges from 0 to 1: if it is 0, the pubdata price will not include the cost of closing the batch; if it is 1, the pubdata limit per batch will have to cover the entire cost of closing the batch.
-- `MAX_PUBDATA_PER_BATCH` - The maximum amount of pubdata that can be used by the batch. Note that if the calldata is used as pubdata, this variable should not exceed 128kb.
-
-And the following fluctuating variables:
-
-- `MINIMAL_L2_GAS_PRICE` - The minimal acceptable L2 gas price, i.e. the price that should include the cost of computation/proving as well as potential premium for congestion.
-- `PUBDATA_BYTE_ETH_PRICE` - The minimal acceptable price in ETH per each byte of pubdata. It should generally be equal to the expected price of a single blob byte or calldata byte (depending on the approach used).
-
-Then:
-
-1. `FAIR_L2_GAS_PRICE = MINIMAL_L2_GAS_PRICE + COMPUTE_OVERHEAD_PART * BATCH_OVERHEAD_L1_GAS / MAX_GAS_PER_BATCH`
-2. `FAIR_PUBDATA_PRICE = PUBDATA_BYTE_ETH_PRICE + PUBDATA_OVERHEAD_PART * BATCH_OVERHEAD_L1_GAS / MAX_PUBDATA_PER_BATCH`
-
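-A sketch of this derivation (the constants are illustrative, and we fold the L1 gas price into the batch overhead, which the formulas above leave implicit):
-
-```python
-# Illustrative constants, not the real configuration
-BATCH_OVERHEAD_L1_GAS = 1_000_000
-COMPUTE_OVERHEAD_PART = 0.0
-PUBDATA_OVERHEAD_PART = 1.0
-MAX_GAS_PER_BATCH = 80_000_000
-MAX_PUBDATA_PER_BATCH = 100_000
-
-def fair_prices(minimal_l2_gas_price: int, pubdata_byte_eth_price: int, l1_gas_price: int):
-    # Batch overhead expressed in wei
-    batch_overhead_wei = BATCH_OVERHEAD_L1_GAS * l1_gas_price
-    fair_l2_gas_price = minimal_l2_gas_price + int(
-        COMPUTE_OVERHEAD_PART * batch_overhead_wei / MAX_GAS_PER_BATCH
-    )
-    fair_pubdata_price = pubdata_byte_eth_price + int(
-        PUBDATA_OVERHEAD_PART * batch_overhead_wei / MAX_PUBDATA_PER_BATCH
-    )
-    return fair_l2_gas_price, fair_pubdata_price
-```
-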
-For L1→L2 transactions, the `MAX_GAS_PER_BATCH` variable is equal to `L2_TX_MAX_GAS_LIMIT` (since this amount of gas is enough to publish the maximal amount of pubdata in the batch). Also, for additional security, for L1->L2 transactions `COMPUTE_OVERHEAD_PART = PUBDATA_OVERHEAD_PART = 1`, since we are not sure what exactly will be the reason for closing the batch. For L2 transactions, typically `COMPUTE_OVERHEAD_PART = 0`, since, unlike with L1→L2 transactions, in case of an attack the operator can simply censor bad transactions or increase the `FAIR_L2_GAS_PRICE`, and so the operator can use average values for better UX.
-
-#### Note on operator’s responsibility
-
-To reiterate, the formulas above are used on L1 for L1→L2 transactions to protect the operator from malicious transactions. However, for L2 transactions, it is solely the operator's responsibility to provide the correct values. It is designed this way to give ZK Stack operators (including Validiums, maybe Era on top of another L1, etc.) more fine-grained control over the system.
-
-This fee model also provides a very high degree of flexibility to the operator: if we find out that we earn too much with a certain part, we can amend how the fair L2 gas price and fair pubdata price are generated, and that’s it (there will be no further enforcement on the bootloader side).
-
-In the long run, the consensus will ensure the correctness of these values on the main ZKsync Era (or maybe we’ll transition to a different system).
-
-#### Overhead for transaction slot and memory
-
-We also have a limit on the amount of memory that can be consumed within a batch, as well as on the number of transactions that can be included there.
-
-To simplify the codebase we've chosen the following constants:
-
-- `TX_OVERHEAD_GAS = 10000` -- the overhead in gas for including a transaction into a batch.
-- `TX_MEMORY_OVERHEAD_GAS = 10` -- the overhead for consuming a single byte of bootloader memory.
-
-We've used roughly the following formulae to derive these values:
-
-1. `TX_OVERHEAD_GAS = MAX_GAS_PER_BATCH / MAX_TXS_IN_BATCH`. For L1->L2 transactions we used `MAX_GAS_PER_BATCH = 80kk` and `MAX_TXS_IN_BATCH = 10k`. `MAX_GAS_PER_BATCH / MAX_TXS_IN_BATCH = 8k`, while we decided to use the `10k` value to better take into account the load on the operator from storing the information about the transaction.
-2. `TX_MEMORY_OVERHEAD_GAS = MAX_GAS_PER_BATCH / MAX_MEMORY_FOR_BATCH`. For L1->L2 transactions we used `MAX_GAS_PER_BATCH = 80kk` and `MAX_MEMORY_FOR_BATCH = 32 * 600_000`. `MAX_GAS_PER_BATCH / MAX_MEMORY_FOR_BATCH ≈ 4`, while we decided to use the `10` gas value for the same reason.
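-
-As a quick arithmetic sketch of where these constants come from:
-
-```python
-MAX_GAS_PER_BATCH = 80_000_000
-MAX_TXS_IN_BATCH = 10_000
-MAX_MEMORY_FOR_BATCH = 32 * 600_000  # 19_200_000 bytes
-
-gas_per_tx_slot = MAX_GAS_PER_BATCH // MAX_TXS_IN_BATCH         # 8_000, rounded up to 10_000
-gas_per_memory_byte = MAX_GAS_PER_BATCH / MAX_MEMORY_FOR_BATCH  # ~4.17, rounded up to 10
-```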
-
-Future work will focus on removing the limit on the number of transactions’ slots completely as well as increasing the memory limit.
-
-#### Note on L1→L2 transactions
-
-The formulas above apply to L1→L2 transactions as well. However, note that for them `gas_per_pubdata` is still kept constant at `800`. This means that a higher `baseFee` may be used for L1->L2 transactions to ensure that `gas_per_pubdata` remains at that value regardless of the price of pubdata.
-
-#### Refunds
-
-Note that the constants used for the fee model are probabilistic, i.e. we never know in advance the exact reason why a batch is going to be sealed. These constants are meant to cover the operator's expenses over a longer period, so we do not refund the overhead that a transaction might have been charged above the level to which it actually brought the batch towards being closed; these funds are used to cover transactions that did not pay in full for the limited batch resources they used.
-
-#### Refunds for repeated writes
-
-ZKsync Era is a state diff-based rollup, i.e. the pubdata is published not for transactions, but for storage changes. This means that whenever a user writes into a storage slot, it incurs a certain amount of pubdata. However, not all writes are equal:
-
-- If a slot has already been written to in one of the previous batches, it has received a short ID, which allows it to require less pubdata in the state diff.
-- Depending on the `value` written into a slot, various compression optimizations could be used, and we should reflect that too.
-- The slot may have already been written to in this batch, in which case we don’t have to charge anything for it.
-
-You can read more about how we treat the pubdata [here](../settlement_contracts/data_availability/standard_pubdata_format.md).
-
-The important part here is that, while such refunds are inlined (i.e. unlike the refunds for overhead, they happen in place during execution and not after the whole transaction has been processed), they are currently enforced only by the operator. Right now, the operator is the one who decides what refund to provide.
-
-## How we charge for pubdata
-
-ZKsync Era is a state diff-based rollup. It means that it is not possible to know how much pubdata a transaction will take before its execution. We _could_ charge for pubdata the following way: whenever a user does an operation that emits pubdata (writes to storage, publishes an L2->L1 message, etc.), we charge `pubdata_bytes_published * gas_per_pubdata` directly from the context of the execution.
-
-However, such an approach has the following disadvantages:
-
-- This would inherently make execution very divergent from EVM.
-- It is prone to unneeded overhead. For instance, in the case of reentrancy locks, the user will still have to pay the initial price for marking the lock as used. The price will get refunded in the end, but it still worsens the UX.
-- If we want to impose any sort of limit on how much computation a transaction could take (let's call this limit `MAX_TX_GAS_LIMIT`), it would mean that no more than `MAX_TX_GAS_LIMIT / gas_per_pubdata` could be published in a transaction, making this limit either too small or forcing us to increase `baseFee` to prevent the number from growing too much.
-
-To avoid the issues above, we need to somehow decouple the gas spent on pubdata from the gas spent on execution. While calldata-based rollups precharge for calldata, we cannot do that, since the exact state diffs are known only after the transaction is finished. We'll use the approach of _post-charging_: we'll keep a counter that tracks how much pubdata has been spent and charge the user for it at the end of the transaction.
-
-A problem with post-charging is that the user may spend all their gas within the transaction, leaving no gas to charge for pubdata from. Note, however, that if the transaction is reverted, all the state changes related to it will be reverted too. That's why, whenever we need to charge the user for pubdata but there is not enough gas left, the transaction gets reverted. The user pays for the computation, but no state changes (and thus, no pubdata) are produced by the transaction.
-
-So it will work the following way:
-
-1. Firstly, we fix the amount of pubdata published so far. Let's denote it as `basePubdataSpent`.
-2. We execute the validation of the transaction.
-3. We check whether `(getPubdataSpent() - basePubdataSpent) * gasPerPubdata <= gasLeftAfterValidation`. If not, the transaction does not have enough funds to cover itself, so it should be _rejected_ (unlike a revert, rejection means the transaction is not even included in the block). A sketch of this check follows the list.
-4. We execute the transaction itself.
-5. We do the same check as in (3), but now if the transaction does not have enough gas for pubdata, it is reverted, i.e., the user still pays the fee to cover the computation for its transaction.
-6. (optional, in case a paymaster is used). We repeat steps (4-5), but now for the `postTransaction` method of the paymaster.
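-
-A sketch of the check performed in steps (3) and (5):
-
-```python
-def covers_pubdata_cost(base_pubdata_spent: int, pubdata_spent_now: int,
-                        gas_per_pubdata: int, gas_left: int) -> bool:
-    # Gas needed to cover the pubdata produced so far by this transaction
-    pubdata_gas = (pubdata_spent_now - base_pubdata_spent) * gas_per_pubdata
-    # False -> reject during validation (step 3) or revert during execution (step 5)
-    return pubdata_gas <= gas_left
-```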
-
-On the internal level, the pubdata counter is modified in the following way:
-
-- When there is a storage write, the operator is asked to provide by how much to increment the pubdata counter. Note that this value can be negative if, as in the example with a reentrancy guard, the storage diff is being reversed. There is currently no limit on how much the operator can charge for the pubdata.
-- Whenever there is a need to publish a blob of bytes to L1 (for instance, when publishing a bytecode), the responsible system contract would increment the pubdata counter by `bytes_to_publish`.
-- Whenever there is a revert in a frame, the pubdata counter gets reverted too, similar to storage & events.
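-
-A sketch of a frame-aware pubdata counter with these semantics:
-
-```python
-class PubdataCounter:
-    def __init__(self) -> None:
-        self.value = 0
-        self._snapshots = []
-
-    def start_frame(self) -> None:
-        self._snapshots.append(self.value)
-
-    def end_frame(self, reverted: bool) -> None:
-        snapshot = self._snapshots.pop()
-        if reverted:
-            # Rolled back together with storage & events.
-            self.value = snapshot
-
-    def add(self, delta: int) -> None:
-        # delta may be negative, e.g. when a storage diff is reversed
-        # within the same batch (the reentrancy lock example).
-        self.value += delta
-```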
-
-The approach with post-charging removes the unneeded overhead and decouples the gas used for the execution from the gas used for data availability, which removes any caps on `gasPerPubdata`.
-
-### Security considerations for protocol
-
-Now it has become easier for a transaction to use up more pubdata than what can be published within a batch. In such a case, we'll revert the transaction as well.
-
-### Security considerations for users
-
-The approach with post-charging introduces one distinctive feature: it is not trivial to know the final price for a transaction at the time of its execution. When a user does `.call{gas: some_gas}` the final impact on the price of the transaction may be higher than `some_gas` since the pubdata counter will be incremented during the execution and charged only at the end of the transaction.
-
-While for the average user this limitation is not relevant, some specific applications may face certain issues.
-
-#### Example for a queue of withdrawals
-
-Imagine that there is the following contract:
-
-```solidity
-struct Withdrawal {
- address token;
- address to;
- uint256 amount;
-}
-
-Withdrawal[] queue;
-uint256 lastProcessed;
-
-function processNWithdrawals(uint256 N) external nonReentrant {
- uint256 current = lastProcessed + 1;
- uint256 lastToProcess = current + N - 1;
-
- while(current <= lastToProcess) {
- // If the user provided some bad token that takes more than MAX_WITHDRAWAL_GAS
- // to transfer, it is the problem of the user and it will stall the queue, so
- // the `_success` value is ignored.
- Withdrawal storage currentQueue = queue[current];
-    (bool _success, ) = currentQueue.token.call{gas: MAX_WITHDRAWAL_GAS}(abi.encodeWithSignature("transfer(address,uint256)", currentQueue.to, currentQueue.amount));
- current += 1;
- }
- lastProcessed = lastToProcess;
-}
-```
-
-The contract above supports a queue of withdrawals. This queue supports any type of token, including potentially malicious ones. However, the queue will never get stuck, since the `MAX_WITHDRAWAL_GAS` ensures that even if the malicious token does a lot of computation, it will be bound by this number and so the caller of the `processNWithdrawals` won't spend more than `MAX_WITHDRAWAL_GAS` per token.
-
-The above assumptions hold in the pre-charge model (calldata-based rollups) or the pay-as-you-go model (pre-1.5.0 Era). However, in the post-charge model, `MAX_WITHDRAWAL_GAS` limits the amount of computation that can be done within the transaction, but it does not limit the amount of pubdata that can be published. Thus, if such a function publishes a very large L2→L1 message, it might make the entire topmost transaction fail. This effectively means that such a queue would be stalled.
-
-#### How to prevent this issue on the users' side
-
-If a user really needs to limit the amount of gas that a subcall takes, all the subcalls should be routed through a special contract that will guarantee that the total cost of the subcall won't be larger than the gas provided (by reverting if needed).
-
-An implementation of this special contract can be seen [here](../../gas-bound-caller/contracts/GasBoundCaller.sol). Note that this contract is _not_ a system one; it will be deployed at some fixed, but non-kernel-space, address.
-
-#### 1. The case when a malicious contract consumes a large but processable amount of pubdata
-
-In this case, the topmost transaction will be able to sponsor such subcalls. When a transaction is processed, at most 80M gas is allowed to be passed to the execution. The rest can only be spent on pubdata during the post-charging.
-
-#### 2. The case when a malicious contract consumes an unprocessable amount of pubdata
-
-In this case, the malicious callee published so much pubdata that such a transaction cannot be included into a batch. This effectively means that no matter how much money the topmost transaction is willing to pay, the queue is stalled.
-
-The only way to combat this is to require some minimal amount of ergs to be consumed with each emission of pubdata (basically, to make sure that it is not possible to publish large chunks of pubdata while using negligible computation). Unfortunately, setting this minimal amount to cover the worst possible case (i.e. 80M ergs spent with at most 100k of pubdata available, leading to 800 L2 gas per pubdata byte) would likely be too harsh and would negatively impact average UX. Overall, this _is_ the way to go; however, for now the only guarantee is that a subcall of 1M gas is always processable, which means that at least 80 gas has to be spent for each published pubdata byte. Even if higher than real L1 gas costs, it is reasonable even in the long run, since everything published as pubdata is state-related and so has to be well-priced for long-term storage.
-
-In the future, we will guarantee the processability of larger subcalls by increasing the amount of pubdata that can be published per batch.
-
-### Limiting the `gas_per_pubdata`
-
-As already mentioned, transactions on ZKsync depend on volatile L1 gas costs to publish the pubdata for the batch, verify proofs, etc. For this reason, ZKsync-specific EIP-712 transactions contain the `gas_per_pubdata_limit` field, denoting the maximum `gas_per_pubdata` that the operator can charge the user for a single byte of pubdata.
-
-For Ethereum transactions (which do not contain this field), the block's `gas_per_pubdata` is used.
-
-## Improvements in the upcoming releases
-
-The fee model explained above, while fully functional, has some known issues. These will be tackled with the following upgrades.
-
-### L1->L2 transactions do not pay for their execution on L1
-
-The `executeBatches` operation on L1 is executed in `O(N)`, where N is the number of priority ops in the batch. Each executed priority operation is popped from the queue, and so it incurs costs for storage modifications. As of now, we do not charge for it.
-
-## ZKsync Era Fee Components (Revenue & Costs)
-
-- On-Chain L1 Costs
- - L1 Commit Batches
- - The commit batch transaction submits pubdata (which is the list of updated storage slots) to L1. The cost of a commit transaction is calculated as `constant overhead + price of pubdata`. The `constant overhead` cost is evenly distributed among L2 transactions in the L1 commit transaction, but only at higher transaction loads. As for the `price of pubdata`, it is known how much pubdata each L2 transaction consumed, therefore, they are charged directly for that. Multiple L1 batches can be included in a single commit transaction.
- - L1 Prove Batches
- - Once the off-chain proof is generated, it is submitted to L1 to make the rollup batch final. Currently, each proof contains only one L1 batch.
- - L1 Execute Batches
- - The execute batches transaction processes L2 -> L1 messages and marks executed priority operations as such. Multiple L1 batches can be included in a single execute transaction.
- - L1 Finalize Withdrawals
-  - While not strictly part of the L1 fees, the cost to finalize L2 → L1 withdrawals is covered by Matter Labs. The finalize withdrawals transaction processes user token withdrawals from ZKsync Era to Ethereum. Multiple L2 withdrawal transactions are included in each finalize withdrawal transaction.
-- On-Chain L2 Revenue
- - L2 Transaction Fee
- - This fee is what the user pays to complete a transaction on ZKsync Era. It is calculated as `gasLimit x baseFeePerGas - refundedGas x baseFeePerGas`, or more simply, `gasUsed x baseFeePerGas`.
-- Profit = L2 Revenue - L1 Costs - Off-Chain Infrastructure Costs
diff --git a/docs/overview.md b/docs/overview.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/docs/settlement_contracts/data_availability/custom_da.md b/docs/settlement_contracts/data_availability/custom_da.md
deleted file mode 100644
index b1780eb4a3..0000000000
--- a/docs/settlement_contracts/data_availability/custom_da.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Custom DA support
-
-[back to readme](../../README.md)
-
-## Intro
-
-We introduced modularity into our contracts to support multiple DA layers, to make it easier to support Validium and Rollup modes, and to enable settlement via the Gateway.
-
-
-### Background
-
-**Pubdata** - information published by the ZK Chain that can be used to reconstruct its state. It consists of L2→L1 logs, L2→L1 messages, contract bytecodes, and compressed state diffs.
-
-```rust
-struct PubdataInput {
-    pub(crate) user_logs: Vec<L1MessengerL2ToL1Log>,
-    pub(crate) l2_to_l1_messages: Vec<Vec<u8>>,
-    pub(crate) published_bytecodes: Vec<Vec<u8>>,
-    pub(crate) state_diffs: Vec<StateDiffRecord>,
-}
-```
-
-The current version of ZK Chains supports the following data availability (DA) modes:
-
-- `Calldata` - uses Ethereum tx calldata as pubdata storage
-- `Blobs` - uses Ethereum blobs as pubdata storage
-- `No DA Validium` - posting pubdata is not enforced
-
-The goal is to create a general-purpose solution that ensures DA consistency and verifiability, on top of which we would build what is requested by many partners and covers many use cases like on-chain games and DEXes: **Validium with Abstract DA.**
-
-This means that a separate solution like AvailDA, EigenDA, Celestia, etc. would be used to store the pubdata. The idea is that every such solution (a `DA layer`) provides a proof of inclusion of our pubdata in its storage, and this proof can later be verified on Ethereum. This results in an approach that has more security guarantees than `No DA Validium`, but lower fees than `Blobs` (assuming that Ethereum usage grows and blobs become more expensive).
-
-## Proposed solution
-
-The proposed solution is to introduce an abstract 3rd party DA layer, that the sequencer would publish the data to. When the batch is sealed, the hashes of the data related to that batch will be made available on L1. Then, after the DA layer has confirmed that its state is synchronized, the sequencer calls a `commitBatches` function with the proofs required to verify the DA inclusion on L1.
-
-### Challenges
-
-On the protocol level, the complexity is in introducing two new components: L1 and L2 DA verifiers. They are required to ensure the verifiable delivery of the DA inclusion proofs to L1 and consequent verification of these proofs.
-
-The L2 verifier would validate the pubdata correctness and compute a final commitment for DA called `outputHash`. It consists of hashes of `L2→L1 logs and messages`, `bytecodes`, and `compressed state diffs` (blob hashes in the case of blobs). This contract has to be deployed by the chain operator, and it has to be tied to the DA layer logic, e.g. if the DA layer accepts 256kb blobs, then at the final hash computation stage the pubdata has to be packed into chunks of <256kb, and either the hashes of all blobs or a rolling hash has to be part of the `outputHash` preimage.
-
-The `outputHash` will be sent to L1 as an L2→L1 log, so this process is part of the bootloader execution and can be trusted.
-
-The hashes of data chunks alongside the inclusion proofs have to be provided in the calldata of the L1 diamond proxy’s `commitBatches` function.
-
-L1 contracts have to recalculate the `outputHash` and make sure it matches the one from the logs, after which the abstract DA verification contract is called. In general terms, it would accept the set of chunks’ hashes (by chunk here I mean a DA blob, not to be confused with a 4844 blob) and a set of inclusion proofs, which should be enough to verify that the preimage (chunk data) is included in the DA layer. This verification would be done by a specific contract, e.g. an `Attestation Bridge`, which holds the state tree information and can perform verification against it.
diff --git a/docs/settlement_contracts/data_availability/img/Custom-da-external.png b/docs/settlement_contracts/data_availability/img/Custom-da-external.png
deleted file mode 100644
index 0c318dd531..0000000000
Binary files a/docs/settlement_contracts/data_availability/img/Custom-da-external.png and /dev/null differ
diff --git a/docs/settlement_contracts/data_availability/img/Rollup_DA.png b/docs/settlement_contracts/data_availability/img/Rollup_DA.png
deleted file mode 100644
index 6f6ad084a4..0000000000
Binary files a/docs/settlement_contracts/data_availability/img/Rollup_DA.png and /dev/null differ
diff --git a/docs/settlement_contracts/data_availability/img/custom_da.png b/docs/settlement_contracts/data_availability/img/custom_da.png
deleted file mode 100644
index 0a17eb6625..0000000000
Binary files a/docs/settlement_contracts/data_availability/img/custom_da.png and /dev/null differ
diff --git a/docs/settlement_contracts/data_availability/rollup_da.md b/docs/settlement_contracts/data_availability/rollup_da.md
deleted file mode 100644
index 901e471485..0000000000
--- a/docs/settlement_contracts/data_availability/rollup_da.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Rollup DA
-
-[back to readme](../../README.md)
-
-## Prerequisites
-
-Before reading this document, it is better to understand how [custom DA](./custom_da.md) in general works.
-
-## EIP4844 support
-
-EIP-4844, commonly known as Proto-Danksharding, is an upgrade to the Ethereum protocol that introduces a new data availability solution embedded in layer 1. More information about it can be found [here](https://ethereum.org/en/roadmap/danksharding/).
-
-To facilitate EIP4844 blob support, our circuits allow providing two arrays in our public input to the circuit:
-
-- `blobCommitments` -- this is the commitment that helps to check the correctness of the blob content. The formula on how it is computed will be explained below in the document.
-- `blobHash` -- the `keccak256` hash of the inner contents of the blob.
-
-Note that our circuits require each blob to contain exactly `4096 * 31` bytes. The maximal number of blobs supported by our proving system is 16, but the system contracts support at most 6 blobs for now.
-
-When committing a batch, the `L1DAValidator` is called with the data provided by the operator, and it should return the two arrays described above. These arrays are put inside the batch commitment, and the correctness of the commitments will be verified at the proving stage.
-
-Note that `Executor.sol` (and the contract itself) is not responsible for checking that the provided `blobHash` and `blobCommitments` correspond in any way to the pubdata inside the batch, as that is the job of the DA validator pair.
-
-## Publishing pubdata to L1
-
-Let's see an example of how the approach above works in rollup DA validators.
-
-### RollupL2DAValidator
-
-`RollupL2DAValidator` accepts the preimages of the data to publish as well as their compressed form. After verifying the compression, it forms the `_totalPubdata` bytes array, which represents the entire blob of data that should be published to L1.
-
-It calls the `PubdataChunkPublisher` system contract to split this pubdata into multiple "chunks" of size `4096 * 31` bytes and return the `keccak256` hash of each. These will be the `blobHash`es from the section before.
-
-To give the flexibility of checking different DA, we send the following data to L1:
-
-- The state diff hash, as it will be used on L1 to confirm the correctness of the provided uncompressed storage diffs.
-- The hash of the `_totalPubdata`. In case the size of the pubdata is small, it allows the operator to also use standard Ethereum calldata for DA.
-- The `blobHash` array.
-
-### RollupL1DAValidator
-
-When committing the batch, the operator will provide the preimage of the fields that the `RollupL2DAValidator` has sent before, along with some `l1DaInput`. This `l1DaInput` will be used to prove that the pubdata was indeed provided in this batch.
-
-The first byte of the `l1DaInput` denotes which way of pubdata publishing was used: Calldata or Blobs.
-
-If it is Calldata, it is simply checked that the provided calldata matches the hash of the `_totalPubdata` that was sent by the L2 counterpart. Note that the calldata may still contain the blob information, as we typically start generating proofs before we know which DA method will be used. Also note that when calldata is used for DA, we do not verify the `blobCommitments`, as the presence of the correct pubdata has already been verified.
-
-If it is Blobs, we need to construct the `blobCommitment`s correctly for each blob of data.
-
-For each of the blobs, the operator provides a so-called `_commitment` that consists of the following packed structure: `opening point (16 bytes) || claimed value (32 bytes) || commitment (48 bytes) || proof (48 bytes)`.
-
-The verification of the `_commitment` can be summarized in the following snippet:
-
-```solidity
-// The opening point is passed as 16 bytes as that is what our circuits expect and use when verifying the new batch commitment
-// PUBDATA_COMMITMENT_SIZE = 144 bytes
-pubdata_commitments <- [opening point (16 bytes) || claimed value (32 bytes) || commitment (48 bytes) || proof (48 bytes)] from calldata
-opening_point = bytes32(pubdata_commitments[:16])
-versioned_hash <- from BLOBHASH opcode
-
-// Given that we needed to pad the opening point for the precompile, append the data after.
-point_eval_input = versioned_hash || opening_point || pubdata_commitments[16: PUBDATA_COMMITMENT_SIZE]
-
-// this part handles the following:
-// verify versioned_hash == hash(commitment)
-// verify P(z) = y
-res <- point_evaluation_precompile(point_eval_input)
-
-assert uint256(res[32:]) == BLS_MODULUS
-```
-
-The final `blobCommitment` is calculated as the hash of the `blobVersionedHash`, the `opening point`, and the `claimed value`. The zero-knowledge circuits will verify that the opening point and the claimed value were calculated correctly and correspond to the data that was hashed under the `blobHash`.
-
-## Structure of the pubdata
-
-Rollups maintain the same structure of pubdata and apply the same compression rules as those used in previous versions of the system. These can be read [here](./state_diff_compression_v1_spec.md).
diff --git a/docs/settlement_contracts/data_availability/standard_pubdata_format.md b/docs/settlement_contracts/data_availability/standard_pubdata_format.md
deleted file mode 100644
index 4db784b2ab..0000000000
--- a/docs/settlement_contracts/data_availability/standard_pubdata_format.md
+++ /dev/null
@@ -1,286 +0,0 @@
-# Standard pubdata format
-
-[back to readme](../../README.md)
-
-While with the introduction of [custom DA validators](./custom_da.md) any pubdata logic can be applied for each chain (including calldata-based pubdata), ZK chains are generally optimized for the state diff-based rollup model.
-
-This document describes what the standard pubdata format looks like. This is the format that is enforced for [permanent rollup chains](../../chain_management/admin_role.md#ispermanentrollup-setting).
-
-Pubdata in ZKsync can be divided up into 4 different categories:
-
-1. L2 to L1 Logs
-2. L2 to L1 Messages
-3. Smart Contract Bytecodes
-4. Storage writes
-
-Using the data corresponding to these 4 facets across all executed batches, we are able to reconstruct the full state of L2. To restore the state, we need to filter all of the transactions to the L1 ZKsync contract for only the `commitBatches` transactions where the proposed batch has been referenced by a corresponding `executeBatches` call (the reason for this is that a committed or even proven batch can be reverted, but an executed one cannot). Once we have all the committed batches that have been executed, we pull the transaction input and the relevant fields, applying them in order to reconstruct the current state of L2.
-
-## L2→L1 communication
-
-We implement the calculation of the Merkle root of the L2→L1 messages via a system contract, the `L1Messenger`. Whenever a user emits a new log that needs to be Merklized, the `L1Messenger` contract appends it to its rolling hash. At the end of the batch, during the formation of the blob, it receives the original preimages from the operator, verifies their consistency, and sends those to the L2DAValidator to facilitate the DA protocol.
-
-We will call the logs that are created by users and are Merklized _user_ logs, and the logs that are emitted natively by the VM _system_ logs. Here is a short comparison table for better understanding:
-
-| System logs | User logs |
-| --- | --- |
-| Emitted by the VM via an opcode. | The VM knows nothing about them. |
-| Consistency and correctness are enforced by the verifier on L1 (i.e. their hash is part of the block commitment). | Consistency and correctness are enforced by the L1Messenger system contract. The correct behavior of the L1Messenger itself is enforced implicitly by the prover, in the sense that it proves the correctness of the execution overall. |
-| We don’t calculate their Merkle root. | We calculate their Merkle root on the L1Messenger system contract. |
-| We have a constant, small number of those. | We can have as many as needed, as long as the commitBatches function on L1 remains executable (it is the job of the operator to ensure that only such transactions are selected). |
-| In EIP4844 they will remain part of the calldata. | In EIP4844 they will become part of the blobs. |
-
-### Backwards-compatibility
-
-Note, that to maintain a unified interface with the previous version of the protocol, the leaves of the Merkle tree will have to maintain the following structure:
-
-```solidity
-struct L2Log {
- uint8 l2ShardId;
- bool isService;
- uint16 txNumberInBlock;
- address sender;
- bytes32 key;
- bytes32 value;
-}
-```
-
-The leaf itself is computed the following way:
-
-```solidity
-bytes32 hashedLog = keccak256(
- abi.encodePacked(_log.l2ShardId, _log.isService, _log.txNumberInBlock, _log.sender, _log.key, _log.value)
-);
-```
-
-`keccak256` will continue to be the hash function used for the Merkle tree.
-
-To put it shortly, the proofs for L2→L1 log inclusion will continue having exactly the same format as they did in the pre-Boojum system, which avoids breaking changes for SDKs and bridges alike.
-
-### Implementation of `L1Messenger`
-
-The L1Messenger contract will maintain a rolling hash of all the L2ToL1 logs `chainedLogsHash` as well as the rolling hashes of messages `chainedMessagesHash`. Whenever a contract wants to send an L2→L1 log, the following operation will be [applied](../../../system-contracts/contracts/L1Messenger.sol#L73):
-
-`chainedLogsHash = keccak256(chainedLogsHash, hashedLog)`. L2→L1 logs have the same 88-byte format as in the current version of ZKsync.
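-
-A sketch of this rolling-hash bookkeeping (using pycryptodome's keccak as a stand-in for the EVM's `keccak256`):
-
-```python
-from Crypto.Hash import keccak  # pycryptodome
-
-def keccak256(data: bytes) -> bytes:
-    return keccak.new(digest_bits=256, data=data).digest()
-
-chained_logs_hash = b"\x00" * 32  # initial rolling hash
-
-def append_log(hashed_log: bytes) -> None:
-    # chainedLogsHash = keccak256(chainedLogsHash, hashedLog)
-    global chained_logs_hash
-    chained_logs_hash = keccak256(chained_logs_hash + hashed_log)
-```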
-
-Note that the user is charged for the future computation that will be needed to calculate the final Merkle root. It is roughly 4x higher than the cost of calculating the hash of the leaf, since the eventual tree might have 4x the number of nodes. In any case, this will likely be a relatively negligible part compared to the cost of the pubdata.
-
-At the end of the execution, the bootloader will [provide](../../../system-contracts/bootloader/bootloader.yul#L2676) a list of all the L2ToL1 logs (this will be provided by the operator in the memory of the bootloader). The L1Messenger checks that the rolling hash of the provided logs is the same as the `chainedLogsHash` and calculates the Merkle tree of the provided messages. Right now, we always build a Merkle tree of size `16384`, but we charge the user as if the tree were built dynamically based on the number of leaves in it. The implementation of the dynamic tree has been postponed until later upgrades.
-
-> Note that, unlike most other parts of pubdata, the user L2->L1 logs must always be validated by the trusted `L1Messenger` system contract. If we moved this responsibility to the L2DAValidator, it would be possible for a malicious operator to provide incorrect data and forge messages on behalf of certain users.
-
-### Long L2→L1 messages & bytecodes
-
-If a user wants to send an L2→L1 message, its preimage is [appended](../../../system-contracts/contracts/L1Messenger.sol#L126) to the messages’ rolling hash too: `chainedMessagesHash = keccak256(chainedMessagesHash, keccak256(message))`.
-
-A very similar approach is used for bytecodes: their rolling hash is calculated, and then the preimages are provided at the end of the batch to form the full pubdata for the batch.
-
-Note that, for backward compatibility, just like before, any long message or bytecode is accompanied by the corresponding user L2→L1 log.
-
-### Using system L2→L1 logs vs the user logs
-
-The content of the L2→L1 logs by the L1Messenger will go into the blob of EIP4844. This means that all the data that belongs to the tree of the L1Messenger’s L2→L1 logs should not be needed during block commitment. Also, note that in the future we will remove the calculation of the Merkle root of the built-in L2→L1 messages.
-
-The only places where the built-in L2→L1 messaging should continue to be used:
-
-- Logs by SystemContext (they are needed on commit to check the previous block hash).
-- Logs by L1Messenger for the Merkle root of the L2→L1 tree as well as the data needed for the L1DAValidator.
-- `chainedPriorityTxsHash` and `numberOfLayer1Txs` from the bootloader (read more about it below).
-
-### Obtaining `txNumberInBlock`
-
-To have the same log format, the `txNumberInBlock` must be obtained. While it is internally counted in the VM, there is currently no opcode to retrieve this number. We have a public variable `txNumberInBlock` in the `SystemContext`, which is incremented with each new transaction, and we retrieve this variable from there. It is [zeroed out](../../../system-contracts/contracts/SystemContext.sol#L515) at the end of the batch.
-
-### Bootloader implementation
-
-The bootloader has a memory segment dedicated to the ABI-encoded data of the `L1Messenger` to perform the `publishPubdataAndClearState` call.
-
-At the end of the execution of the batch, the operator should provide the corresponding data in the bootloader memory, i.e. user L2→L1 logs, long messages, bytecodes, etc. After that, the [call](../../../system-contracts/bootloader/bootloader.yul#L2676) to the `L1Messenger` system contract is performed, which would call the L2DAValidator that checks the adherence of the pubdata to the specified format.
-
-## Bytecode Publishing
-
-Within pubdata, bytecodes are published in one of two ways: (1) uncompressed, as part of the bytecodes array, and (2) compressed, via long L2→L1 messages.
-
-### Uncompressed Bytecode Publishing
-
-Uncompressed bytecodes are included within the `totalPubdata` bytes and have the following format: `number of bytecodes || forEachBytecode (length of bytecode(n) || bytecode(n))` .
-
-### Compressed Bytecode Publishing
-
-Unlike uncompressed bytecodes, which are published as part of `factoryDeps`, compressed bytecodes are published as long L2→L1 messages, which can be seen [here](../../../system-contracts/contracts/Compressor.sol#L78).
-
-#### Bytecode Compression Algorithm — Server Side
-
-This is the part responsible for taking bytecode that has already been chunked into 8-byte words, validating it, and compressing it.
-
-Each 8-byte word from the chunked bytecode is assigned a 2-byte index (the dictionary of chunk → index is thus constrained to `2^16 - 1` elements). The length of the dictionary, the dictionary entries (the index is assumed through the order), and the indexes are all concatenated together to yield the final compressed version.
-
-For bytecode to be considered valid it must satisfy the following:
-
-1. Bytecode length must be less than 2097120 ((2^16 - 1) \* 32) bytes.
-2. Bytecode length must be a multiple of 32.
-3. The number of 32-byte words must be odd.
-
-The following is a simplified version of the algorithm:
-
-```python
-def compress_bytecode(chunked_bytecode):
-    # statistic: chunk -> (count, first position at which the chunk occurs)
-    statistic = {}
-    for position, chunk in enumerate(chunked_bytecode):
-        if chunk in statistic:
-            count, first_pos = statistic[chunk]
-            statistic[chunk] = (count + 1, first_pos)
-        else:
-            statistic[chunk] = (1, position)
-
-    # We want the more frequently used chunks to have smaller ids to save on
-    # calldata (zero bytes cost less); sorted by (count, first_pos), descending.
-    sorted_chunks = sorted(statistic, key=lambda c: (-statistic[c][0], -statistic[c][1]))
-
-    dictionary = {chunk: index for index, chunk in enumerate(sorted_chunks)}
-    encoded_data = [dictionary[chunk] for chunk in chunked_bytecode]
-
-    return len(dictionary), sorted_chunks, encoded_data
-```
-
-#### Verification And Publishing — L2 Contract
-
-The function `publishCompressedBytecode` takes in both the original `_bytecode` and the `_rawCompressedData`, the latter of which comes from the output of the server’s compression algorithm. Looping over the encoded data derived from `_rawCompressedData`, the corresponding chunks are pulled from the dictionary and compared to the original bytecode, reverting if there is a mismatch. After the encoded data has been verified, it is published to L1 and marked accordingly within the `KnownCodesStorage` contract.
-
-Pseudo-code implementation:
-
-```python
-length_of_dict = int.from_bytes(_rawCompressedData[:2], "big")
-# offset by the 2 bytes used to store the length; each dictionary entry is an 8-byte chunk
-dictionary = _rawCompressedData[2 : 2 + length_of_dict * 8]
-encoded_data = _rawCompressedData[2 + length_of_dict * 8 :]
-
-assert len(dictionary) % 8 == 0  # each element should be 8 bytes
-assert length_of_dict <= 2**16   # each index must fit into 2 bytes
-# given that each chunk is 8 bytes and each index is 2 bytes, they differ by a factor of 4
-assert len(encoded_data) * 4 == len(_bytecode)
-
-for i in range(0, len(encoded_data), 2):
-    dict_index = int.from_bytes(encoded_data[i : i + 2], "big")
-    encoded_chunk = dictionary[dict_index * 8 : dict_index * 8 + 8]
-    # the i-th 2-byte index corresponds to the (i * 4)-th byte of the bytecode
-    real_chunk = _bytecode[i * 4 : i * 4 + 8]
-    verify(encoded_chunk == real_chunk)
-
-# Sending the compressed bytecode to L1 for data availability
-sendToL1(_rawCompressedData)
-markAsPublished(hash(_bytecode))
-```
-
-## Storage diff publishing
-
-ZKsync is a state diff-based rollup, and so publishing the correct state diffs plays an integral role in ensuring data availability.
-
-### Difference between initial and repeated writes
-
-ZKsync publishes the state changes that happened within the batch instead of the transactions themselves. For instance, if some storage slot `S` under account `A` has changed to value `V`, we could publish the triple `A, S, V`. By observing all the triples, users could restore the state of ZKsync. However, note that our tree, unlike Ethereum’s, is not account-based (i.e. there is no first layer of depth 160 of the Merkle tree corresponding to accounts and a second layer of depth 256 corresponding to keys). Our tree is “flat”: a slot `S` under account `A` is just stored in the leaf number `H(S,A)`. Our tree is of depth 256 + 8 (the 256 is for these hashed account/key pairs and the 8 is for potential shards in the future; we currently have only one shard and it is irrelevant for the rest of the document).
-
-We call this `H(S,A)` _derived key_, because it is derived from the address and the actual key in the storage of the account. Since our tree is flat, whenever a change happens, we can publish a pair `DK, V`, where `DK=H(S,A)`.
-
-However, there is an optimization that can be done:
-
-- Whenever a key is written to for the first time, we publish the pair `DK,V` and assign a sequential id to this derived key. This is called an _initial write_. It happens for the first time, and that’s why we must publish the full key.
-- If this storage slot is published in one of the subsequent batches, instead of publishing the whole `DK`, we can use the sequential id instead. This is called a _repeated write_.
-
-For instance, if the slots `A`,`B` (I’ll use Latin letters instead of 32-byte hashes for readability) changed their values to `12`,`13` accordingly, then in the batch where it happened they will be published in the following format:
-
-- `(A, 12), (B, 13)`. Let’s say that the last sequential id ever used is 6. Then, `A` will receive the id of `7` and B will receive the id of `8`.
-
-Let’s say that in the next batch, they change their values to `13`,`14`. Then, their diff will be published in the following format:
-
-- `(7, 13), (8,14)`.
-
-The id is permanently assigned to each storage key that was ever published. While from the description above it may not seem like a huge boost, each `DK` is 32 bytes long and the id is at most 8 bytes long.
-
-We call this id _enumeration_index_.
-
-Note that the enumeration indexes are assigned in the order of the sorted array of (address, key), i.e. they are internally sorted. The enumeration indexes are part of the state Merkle tree, so it is **crucial** that the initial writes are published in the correct order, so that anyone can restore the correct enum indexes for the storage slots. In addition, an enumeration index of `0` indicates that the storage write is an initial write.
-
-### State diffs structure
-
-Firstly, let’s define what we mean by _state diffs_. A _state diff_ is an element of the following structure.
-
-[State diff structure](https://github.com/matter-labs/zksync-protocol/blob/main/crates/circuit_encodings/src/state_diff_record.rs#L8).
-
-Basically, it contains all the values which might interest us about the state diff:
-
-- `address` where the storage has been changed.
-- `key` (the original key inside the address)
-- `derived_key` — `H(key, address)` as described in the previous section.
- - Note, the hashing algorithm currently used here is `Blake2s`
-- `enumeration_index` — Enumeration index as explained above. It is equal to 0 if the write is initial, and contains the non-zero enumeration index if it is a repeated write (indexes are enumerated starting from 1).
-- `initial_value` — The value that was present in the key at the start of the batch
-- `final_value` — The value that the key has changed to by the end of the batch.
-
-We will consider `stateDiffs` an array of such objects, sorted by (address, key).
-
-This is the internal structure that is used by the circuits to represent the state diffs. The most basic “compression” algorithm is the one described above:
-
-- For initial writes, write the pair (`derived_key`, `final_value`).
-- For repeated writes, write the pair (`enumeration_index`, `final_value`).
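-
-A sketch of this basic scheme (assuming a `StateDiff` object whose fields mirror the ones listed above, with `derived_key` as 32 raw bytes and values as integers):
-
-```python
-def naive_compress(state_diffs) -> bytes:
-    out = b""
-    for diff in state_diffs:  # sorted by (address, key)
-        if diff.enumeration_index == 0:
-            # Initial write: publish the full 32-byte derived key
-            out += diff.derived_key + diff.final_value.to_bytes(32, "big")
-        else:
-            # Repeated write: the 8-byte enumeration index is enough
-            out += diff.enumeration_index.to_bytes(8, "big") + diff.final_value.to_bytes(32, "big")
-    return out
-```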
-
-Note, that values like `initial_value`, `address` and `key` are not used in the "simplified" algorithm above, but they will be helpful for the more advanced compression algorithms in the future. The [algorithm](#state-diff-compression-format) for Boojum already utilizes the difference between the `initial_value` and `final_value` for saving up on pubdata.
-
-### How the new pubdata verification works
-
-#### **L2**
-
-1. The operator provides both full `stateDiffs` (i.e. the array of the structs above) and the compressed state diffs (i.e. the array which contains the state diffs, compressed by the algorithm explained [below](#state-diff-compression-format)).
-2. The L2DAValidator must verify that the compressed version is consistent with the original stateDiffs and send the _hash_ of the original state diffs to its L1 counterpart. It will also include the compressed state diffs in the totalPubdata to be published to L1.
-
-#### **L1**
-
-1. When committing the block, the standard DA protocol is followed, and the L1DAValidator is responsible for checking that the operator has provided the preimage for the `_totalPubdata`. More on how this is checked can be seen [here](./rollup_da.md).
-2. The block commitment [includes](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L550) the hash of the `stateDiffs`. Thus, the ZKP verification will fail if the provided stateDiff hash is not correct.
-
-It is a secure construction because the proof can be verified only if both the execution was correct and the provided hash of the `stateDiffs` is correct. This means that the L2DAValidator indeed received the array of correct `stateDiffs` and, assuming the L2DAValidator is working correctly, double-checked that the compression is of the correct format, while the L1 contracts at the commit stage double-checked that the operator provided the preimage for the compressed state diffs.
-
-### State diff compression format
-
-The following algorithm is used for the state diff compression:
-
-[State diff compression v1 spec](./state_diff_compression_v1_spec.md)
-
-## General pubdata format
-
-The `totalPubdata` has the following structure:
-
-1. First 4 bytes — the number of user L2→L1 logs in the batch
-2. Then, the concatenation of packed L2→L1 user logs.
-3. Next, 4 bytes — the number of long L2→L1 messages in the batch.
-4. Then, the concatenation of L2→L1 messages, each in the format of `<4 byte length || actual_message>`.
-5. Next, 4 bytes — the number of uncompressed bytecodes in the batch.
-6. Then, the concatenation of uncompressed bytecodes, each in the format of `<4 byte length || actual_bytecode>`.
-7. Next, 4 bytes — the length of the compressed state diffs.
-8. Then, state diffs are compressed by the spec [above](#state-diff-compression-format).
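-
-A sketch of a parser for this layout (the 88-byte user-log size is taken from the L2→L1 log format described earlier):
-
-```python
-def parse_total_pubdata(data: bytes):
-    pos = 0
-
-    def take(n: int) -> bytes:
-        nonlocal pos
-        chunk = data[pos : pos + n]
-        pos += n
-        return chunk
-
-    def read_u32() -> int:
-        return int.from_bytes(take(4), "big")
-
-    logs = [take(88) for _ in range(read_u32())]  # packed 88-byte user logs
-    messages = [take(read_u32()) for _ in range(read_u32())]
-    bytecodes = [take(read_u32()) for _ in range(read_u32())]
-    compressed_state_diffs = take(read_u32())
-    return logs, messages, bytecodes, compressed_state_diffs
-```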
-
-The interface for committing batches is the following one:
-
-```solidity
-/// @notice Data needed to commit new batch
-/// @param batchNumber Number of the committed batch
-/// @param timestamp Unix timestamp denoting the start of the batch execution
-/// @param indexRepeatedStorageChanges The serial number of the shortcut index that's used as a unique identifier for storage keys that were used twice or more
-/// @param newStateRoot The state root of the full state tree
-/// @param numberOfLayer1Txs Number of priority operations to be processed
-/// @param priorityOperationsHash Hash of all priority operations from this batch
-/// @param bootloaderHeapInitialContentsHash Hash of the initial contents of the bootloader heap. In practice it serves as the commitment to the transactions in the batch.
-/// @param eventsQueueStateHash Hash of the events queue state. In practice it serves as the commitment to the events in the batch.
-/// @param systemLogs concatenation of all L2 -> L1 system logs in the batch
-/// @param totalL2ToL1Pubdata Total pubdata committed to as part of bootloader run. Contents are: l2Tol1Logs <> l2Tol1Messages <> publishedBytecodes <> stateDiffs
-struct CommitBatchInfo {
- uint64 batchNumber;
- uint64 timestamp;
- uint64 indexRepeatedStorageChanges;
- bytes32 newStateRoot;
- uint256 numberOfLayer1Txs;
- bytes32 priorityOperationsHash;
- bytes32 bootloaderHeapInitialContentsHash;
- bytes32 eventsQueueStateHash;
- bytes systemLogs;
- bytes totalL2ToL1Pubdata;
-}
-```
diff --git a/docs/settlement_contracts/data_availability/state_diff_compression_v1_spec.md b/docs/settlement_contracts/data_availability/state_diff_compression_v1_spec.md
deleted file mode 100644
index 5fae9cedf7..0000000000
--- a/docs/settlement_contracts/data_availability/state_diff_compression_v1_spec.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# State diff compression v1 spec
-
-[back to readme](../../README.md)
-
-The most basic strategy to publish state diffs is to publish those in either of the following two forms:
-
-- When a key is updated for the first time — `<key, value>`, where `key` is the 32-byte derived key and `value` is the new 32-byte value of the slot.
-- When a key is updated for the second time and onwards — `<enumeration_index, value>`, where the `enumeration_index` is an 8-byte id of the slot and `value` is the new 32-byte value of the slot.
-
-The compression strategy below treats keys and values separately and focuses on compressing each of them efficiently.
-
-## Keys
-
-Keys will be packed in the same way as they were before. The only change is that we’ll avoid using the full 8-byte enumeration index and will pack it to the minimal necessary number of bytes. This number will be part of the pubdata. Once a key has been used, it can already use a 4- or 5-byte enumeration index, and it is very hard to have something cheaper for keys that have been used already. An opportunity exists in remembering the ids for accounts to spare some bytes on nonce/balance keys, but ultimately the complexity may not be worth it.
-
-There is some room for optimization of the keys that are being written for the first time, however, optimizing those is more complex and achieves only a one-time effect (when the key is published for the first time), so they may be in scope of the future upgrades.
-
-## Values
-
-Values are much easier to compress, since they usually contain only zeroes. Also, we can leverage the nature of how those values are changed. For instance, if a nonce has been increased only by 1, we do not need to write the entire 32-byte new value; we can just note that the slot has been _increased_ and then supply only the 1-byte value by which it was increased. This way, instead of 32 bytes we need to publish only 2 bytes: the first byte denotes which operation has been applied and the second byte the number by which the addition has been made.
-
-We have the following 4 types of changes: `Add`, `Sub`, `Transform` and `NoCompression`, where:
-
-- `NoCompression` denotes that the whole 32 byte will be provided.
-- `Add` denotes that the value has been increased. (modulo 2^256)
-- `Sub` denotes that the value has been decreased. (modulo 2^256)
-- `Transform` denotes the value just has been changed (i.e. we disregard any potential relation between the previous and the new value, though the new value might be small enough to save up on the number of bytes).
-
-The byte size of the output can be anywhere from 0 to 31 (0 also makes sense for `Transform`, since it denotes that the slot has been zeroed out). For `NoCompression` the whole 32-byte value is used.
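-
-To make the trade-off concrete, a sketch (not the production compressor) of the byte-length computation that would be used to pick the cheapest of `Add` (`newValue - prevValue` mod 2^256), `Sub` (`prevValue - newValue` mod 2^256) and `Transform` (`newValue` itself), with `NoCompression` as the 32-byte fallback:
-
-```solidity
-// Number of bytes needed to represent a value; 0 for a zeroed-out slot.
-function _byteLen(uint256 value) internal pure returns (uint256 len) {
-    while (value != 0) {
-        ++len;
-        value >>= 8;
-    }
-}
-```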
-
-So the format of the pubdata is the following:
-
-**Part 1. Header.**
-
-- `<version (1 byte)>` — this will enable easier automated unpacking in the future. Currently, it will only be equal to `1`.
-- `<total_logs_len (3 bytes)>` — we need only 3 bytes to describe the total length of the L2→L1 logs.
-- `<enumeration index size (1 byte)>` — it should be equal to the minimal number of bytes required to represent the enum indexes for repeated writes.
-
-**Part 2. Initial writes.**
-
-- `<number of initial writes (2 bytes)>` — the number of initial writes. Since each initial write publishes at least 32 bytes for the key, `2^16 * 32 = 2097152` will be enough for a long time (right now, with the limit of 120kb, it will take more than 15 L1 txs to use up all the space there).
-- Then, for each `<key, value>` pair of an initial write:
-  - the key, printed as a 32-byte derived key.
-  - the packing type, as a 1-byte value consisting of 5 bits to denote the length of the packing and 3 bits to denote the type of the packing (either `Add`, `Sub`, `Transform` or `NoCompression`).
-  - the packed value itself.
-
-**Part 3. Repeated writes.**
-
-Note that there is no need to write the number of repeated writes, since we know that, until the end of the pubdata, all the writes will be repeated ones.
-
-- For each `<enumeration_index, value>` pair of a repeated write:
-  - the key, printed as the enumeration index packed into the number of bytes specified in the header.
-  - the packing type, as a 1-byte value consisting of 5 bits to denote the length of the packing and 3 bits to denote the type of the packing (either `Add`, `Sub`, `Transform` or `NoCompression`).
-  - the packed value itself.
-
-## Impact
-
-This setup allows us to achieve nearly 75% compression for values and roughly 50% savings overall on storage logs, based on historical data.
-
-## Encoding of packing type
-
-Since we have `32 * 3 + 1` ways to pack a state diff, we need at least 7 bits to represent the packing type. To make parsing easier, we will use 8 bits, i.e. 1 byte.
-
-We will use the first 5 bits to represent the length of the packed value in bytes (from 0 to 31 inclusive). The other 3 bits will be used to represent the type of the packing: `Add`, `Sub`, `Transform` or `NoCompression`.
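-
-A hedged sketch of this metadata byte, assuming the high 5 bits carry the length and the low 3 bits the type, with an assumed numbering of 0 = `NoCompression`, 1 = `Add`, 2 = `Sub`, 3 = `Transform`:
-
-```solidity
-function encodePackingType(uint8 packingType, uint8 packedLength) internal pure returns (uint8) {
-    require(packingType < 4 && packedLength < 32, "invalid packing");
-    return (packedLength << 3) | packingType;
-}
-
-function decodePackingType(uint8 metadata) internal pure returns (uint8 packingType, uint8 packedLength) {
-    packingType = metadata & 0x07; // low 3 bits: packing type
-    packedLength = metadata >> 3; // high 5 bits: packed length in bytes
-}
-```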
-
-## Worst case scenario
-
-The worst case for such packing is when we have to pack a completely random new value: it will take us 32 bytes for the value plus 1 byte to denote its type. However, for such a write the user will pay for at least 32 bytes anyway. The additional byte is roughly a 3% increase, which will likely be barely felt by users, most of whom use storage slots for balances and the like, which consume only 7-9 bytes for the packed value.
-
-## Why do we need to repeat the same packing method id
-
-You might have noticed that for each `<key, value>` pair we always first write the packing type and then the packed value. The reader might ask whether it would be more efficient to just supply the packing id once and then list all the `<key, value>` pairs which use such packing.
-
-I.e. instead of listing
-
-(key = 0, type = 1, value = 1), (key = 1, type = 1, value = 3), (key = 2, type = 1, value = 4), …
-
-Just write:
-
-type = 1, (key = 0, value = 1), (key = 1, value = 3), (key = 2, value = 4), …
-
-There are two reasons for it:
-
-- A minor reason: it is sometimes less efficient when a packing type is used for very few slots (since, for correct unpacking, we would need to provide the number of slots for each packing type).
-- A fundamental reason: currently, enum indices are stored directly in the merkle tree and have a very strict incrementing order enforced by the circuits (they are assigned in order of the `(address, key)` pairs), which are generally not accessible from pubdata.
-
-All this means that we are not allowed to change the order of the “first writes” above: their indexes are directly recoverable from their order, so we can not permute them. If we were to reorder keys without supplying the new enumeration indices for them, the state would be unrecoverable. Always supplying the new enum index could add an additional 5 bytes for each key, which might negate the compression benefits in a lot of cases. Even where the compression would still be beneficial, the added complexity may not be worth it.
-
-That being said, we _could_ rearrange those for _repeated_ writes, but for now we stick to the same value compression format for simplicity.
diff --git a/docs/settlement_contracts/img/Diamond-scheme.png b/docs/settlement_contracts/img/Diamond-scheme.png
deleted file mode 100644
index eac56be5aa..0000000000
Binary files a/docs/settlement_contracts/img/Diamond-scheme.png and /dev/null differ
diff --git a/docs/settlement_contracts/priority_queue/img/PQ1.png b/docs/settlement_contracts/priority_queue/img/PQ1.png
deleted file mode 100644
index 0f3602371d..0000000000
Binary files a/docs/settlement_contracts/priority_queue/img/PQ1.png and /dev/null differ
diff --git a/docs/settlement_contracts/priority_queue/img/PQ2.png b/docs/settlement_contracts/priority_queue/img/PQ2.png
deleted file mode 100644
index 92a3e30022..0000000000
Binary files a/docs/settlement_contracts/priority_queue/img/PQ2.png and /dev/null differ
diff --git a/docs/settlement_contracts/priority_queue/img/PQ3.png b/docs/settlement_contracts/priority_queue/img/PQ3.png
deleted file mode 100644
index 8cd5fd8475..0000000000
Binary files a/docs/settlement_contracts/priority_queue/img/PQ3.png and /dev/null differ
diff --git a/docs/settlement_contracts/priority_queue/priority-queue.md b/docs/settlement_contracts/priority_queue/priority-queue.md
deleted file mode 100644
index 59384bb0a2..0000000000
--- a/docs/settlement_contracts/priority_queue/priority-queue.md
+++ /dev/null
@@ -1,137 +0,0 @@
-# Priority Queue to Merkle Tree
-
-[back to readme](../../README.md)
-
-## Overview of the current implementation
-
-The priority queue is a data structure in the Era contracts that is used to handle L1->L2 priority operations. It supports the following:
-
-- inserting a new operation at the end of the queue
-- checking that a newly executed batch executed the first n priority operations from the queue (and not some other ones) in the correct order
-
-The queue itself only stores the following:
-
-```solidity
-struct PriorityOperation {
- bytes32 canonicalTxHash;
- uint64 expirationTimestamp;
- uint192 layer2Tip;
-}
-```
-
-of which we only care about the canonical hash.
-
-### Inserting new operations
-
-The queue is implemented as a [library](../../../l1-contracts/contracts/state-transition/libraries/PriorityQueue.sol#L22).
-For each incoming priority operation, we simply `pushBack` its hash, expiration and layer2Tip.
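-
-Illustratively (field values are placeholders; `PRIORITY_EXPIRATION` is an assumed constant for the expiration window):
-
-```solidity
-s.priorityQueue.pushBack(
-    PriorityOperation({
-        canonicalTxHash: canonicalTxHash,
-        expirationTimestamp: uint64(block.timestamp) + PRIORITY_EXPIRATION,
-        layer2Tip: uint192(0)
-    })
-);
-```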
-
-### Checking validity
-
-When a new batch is executed, we need to check that the operations executed there match the operations in the priority queue. The batch header contains `numberOfLayer1Txs` and `priorityOperationsHash`, which is a rolling hash of all priority operations that were executed in the batch. The bootloader checks that this hash indeed corresponds to all the priority operations that have been executed in that batch. The contract only checks that this hash matches the operations stored in the queue:
-
-```solidity
-/// @dev Pops the priority operations from the priority queue and returns a rolling hash of operations
-function _collectOperationsFromPriorityQueue(uint256 _nPriorityOps) internal returns (bytes32 concatHash) {
- concatHash = EMPTY_STRING_KECCAK;
-
- for (uint256 i = 0; i < _nPriorityOps; i = i.uncheckedInc()) {
- PriorityOperation memory priorityOp = s.priorityQueue.popFront();
- concatHash = keccak256(abi.encode(concatHash, priorityOp.canonicalTxHash));
- }
-}
-
-bytes32 priorityOperationsHash = _collectOperationsFromPriorityQueue(_storedBatch.numberOfLayer1Txs);
-require(priorityOperationsHash == _storedBatch.priorityOperationsHash); // priority operations hash does not match to expected
-```
-
-As can be seen, this is done in `O(n)` compute, where `n` is the number of priority operations in the batch.
-
-## Motivation for migration to Merkle Tree
-
-Since we will be introducing the Sync Layer, we will need to support one more operation:
-
-- migrating priority queue from L1 to SL (and back)
-
-The current implementation takes `O(n)` space and is vulnerable to spam attacks during migration
-(e.g. an attacker can insert a lot of priority operations and we won’t be able to migrate all of them due to gas limits).
-
-Hence, we need an implementation with a small (constant- or log-sized) space footprint that we can migrate to the SL and back, and that still allows us to perform the other 2 operations.
-
-A Merkle tree of priority operations is perfect for this, since we can simply migrate the latest root hash to the SL and back.
-
-- It can still efficiently (in `O(height)`) insert new operations.
-- It can also still efficiently (in `O(n)` compute and `O(n + height)` calldata) check that the batch’s `priorityOperationsHash` corresponds to the operations from the queue.
-
-Note that `n` here is the number of priority operations in the batch, not `2^height`.
-
-The implementation details are described below.
-
-### FAQ
-
-- Q: Why can't we just migrate the rolling hash of the operations in the existing priority queue?
-- A: The rolling hash is not enough to check that the operations from the executed batch are indeed from the priority queue. We would need to store all historical rolling hashes, which would be `O(n)` space and would not solve the spam attack problem.
-
-## Implementation
-
-The implementation will consist of two parts:
-
-- Merkle tree on L1 contracts, to replace the existing priority queue (while still supporting the existing operations)
-- Merkle tree off-chain on the server, to generate the merkle proofs for the executed priority operations.
-
-### Contracts
-
-On the contracts, the Merkle tree will be implemented as an Incremental (append-only) Merkle Tree ([example implementation](https://github.com/tornadocash/tornado-core/blob/master/contracts/MerkleTreeWithHistory.sol)), meaning that it can efficiently (in `O(height)` compute) append new elements to the right, while only storing `O(height)` nodes at all times.
-
-It will also be dynamically sized, meaning that it will double in size when the current size is not enough to store the new element.
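-
-A minimal sketch of the append logic, in the spirit of the linked tornado-core code (names and the fixed height are illustrative; the real contract is dynamically sized):
-
-```solidity
-contract IncrementalMerkleTreeSketch {
-    uint256 internal constant HEIGHT = 32;
-    bytes32[HEIGHT] internal branch; // rightmost stored left-sibling on each level
-    uint256 internal leafCount;
-
-    function append(bytes32 leaf) internal {
-        uint256 index = leafCount++;
-        bytes32 node = leaf;
-        for (uint256 level = 0; level < HEIGHT; ++level) {
-            if (index % 2 == 0) {
-                // Left child: remember it and stop; levels above are completed
-                // lazily with zero-subtree hashes when computing the root.
-                branch[level] = node;
-                return;
-            }
-            // Right child: combine with the stored left sibling and go up.
-            node = keccak256(abi.encodePacked(branch[level], node));
-            index /= 2;
-        }
-    }
-}
-```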
-
-### Server
-
-On the server, the Merkle tree will be implemented as an extension of `MiniMerkleTree` currently used for L2->L1 logs.
-
-It will have the following properties:
-
-- in-memory: the tree will be stored in memory and will be rebuilt on each restart (details below).
-- dynamically sized (to match the contracts implementation)
-- append-only (to match the contracts implementation)
-
-The tree does not need to be super efficient, since we process on average 7 operations per batch.
-
-### Why in-memory?
-
-Having the tree in-memory means rebuilding it on each restart. This is fine because on mainnet, more than a year since release, we have only 3.2M priority operations. We only have to fully rebuild the tree _once_ and can then simply cache the already executed operations (which are the majority). Keeping the tree in-memory has the added benefit of not requiring additional infrastructure to store it on disk and of not having to roll back its state manually if we ever have to (as we do, e.g., for the storage logs tree).
-
-Note: If even rebuilding it once becomes a problem, it can be easily mitigated by only persisting the cache nodes.
-
-### Caching
-
-**Why do we need caching?** After a batch is successfully executed, we no longer need the ability to generate merkle paths for its operations. This means that we can save space and compute by only fully storing the operations that are not yet executed, and caching the leaves
-corresponding to the already executed operations.
-
-We will only cache some prefix of the tree, meaning nodes in the interval [0; N) where N is the number of executed priority operations. The cache will store the rightmost cached left-child node on each level of the tree (see diagrams).
-
-_(Diagrams: see `img/PQ1.png` to `img/PQ3.png`.)_
-
-This means that we will not be able to generate merkle proofs for the cached nodes (and since they are already executed, we don't need to). This structure allows us to save a lot of space, since it only takes up `O(height)` space instead of linear space for all executed operations. This is a big optimization since there are currently 3.2M total operations but <10 non-executed operations in the mainnet priority queue, which means most of the tree will be cached.
-
-This also means we don’t really have to store non-leaf nodes other than the cache, since we can calculate the merkle root / merkle paths in `O(n)` where `n` is the number of non-executed operations (not the total number of operations), and since `n` is so small, it is really fast.
-
-### Adding new operations
-
-On the contracts, appending a new operation to the tree is done by simply calling `append` on the Incremental Merkle Tree, which will update at most `height` slots. Actually, it works almost exactly like the cache described above. Once again: [tornado-cash implementation](https://github.com/tornadocash/tornado-core/blob/1ef6a263ac6a0e476d063fcb269a9df65a1bd56a/contracts/MerkleTreeWithHistory.sol#L68).
-
-On the server, `eth_watch` will listen for `NewPriorityOperation` events as it does now, and will append each new operation to the server-side tree.
-
-### Checking validity
-
-To check that the executed batch indeed took its priority operations from the queue, we have to make sure that if we take the first `numberOfL1Txs` non-executed operations from the tree, their rolling hash matches `priorityOperationsHash`. Since we will not be storing the hashes of these operations onchain anymore, we will have to provide them as calldata. Additionally, in calldata, we should provide merkle proofs for the **first and last** operations in that batch (hence `O(n + height)` calldata). This makes it possible to prove onchain that this contiguous interval of hashes indeed exists in the merkle tree.
-
-This can be done simply by constructing the part of the tree above this interval, using the provided paths to the first and last elements of the interval, and checking that the computed merkle root matches the stored one (in `O(n)`, where `n` is the number of priority operations in a batch). We will also need to track the `index` of the first unexecuted operation onchain to properly calculate the merkle root and to ensure that batches don’t execute some operations out of order or multiple times.
-
-We will also need to prove that the rolling hash of the provided hashes matches `priorityOperationsHash`, which is also `O(n)`.
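-
-A sketch of the rolling-hash part of this check (the merkle-segment verification is elided; `EMPTY_STRING_KECCAK` is the same constant used in `_collectOperationsFromPriorityQueue` above):
-
-```solidity
-bytes32 constant EMPTY_STRING_KECCAK = keccak256("");
-
-function _checkPriorityOpsRollingHash(bytes32[] calldata itemHashes, bytes32 expectedRollingHash) internal pure {
-    bytes32 rolling = EMPTY_STRING_KECCAK;
-    for (uint256 i = 0; i < itemHashes.length; ++i) {
-        rolling = keccak256(abi.encode(rolling, itemHashes[i]));
-    }
-    require(rolling == expectedRollingHash, "priority ops hash mismatch");
-}
-```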
-
-It is important to note that we should store some number of historical root hashes, since the Merkle tree on the server might lag behind the contracts a bit, and hence merkle paths generated on the server-side might become invalid if we compare them to the latest root hash on the contracts. These historical root hashes are not necessary to migrate to and from SL though.
diff --git a/docs/settlement_contracts/priority_queue/processing_of_l1-l2_txs.md b/docs/settlement_contracts/priority_queue/processing_of_l1-l2_txs.md
deleted file mode 100644
index c6af14c61d..0000000000
--- a/docs/settlement_contracts/priority_queue/processing_of_l1-l2_txs.md
+++ /dev/null
@@ -1,98 +0,0 @@
-# Handling L1→L2 ops on ZKsync
-
-[back to readme](../../README.md)
-
-The transactions on ZKsync can be initiated not only on L2, but also on L1. There are two types of transactions that can be initiated on L1:
-
-- Priority operations. These are the kind of operations that any user can create.
-- Upgrade transactions. These can be created only during upgrades.
-
-## Prerequisites
-
-Please read the full [article](../../l2_system_contracts/system_contracts_bootloader_description.md) on the general system contracts / bootloader structure as well as the pubdata structure to understand [the difference](../data_availability/standard_pubdata_format.md) between system and user logs.
-
-## Priority operations
-
-### Initiation
-
-A new priority operation can be appended by calling the `requestL2TransactionDirect` or `requestL2TransactionTwoBridges` methods on the `BridgeHub` smart contract. `BridgeHub` will ensure that the base token is deposited via `L1AssetRouter` and send the transaction request to the specified state transition contract (selected by the chainID). The state transition contract will perform several checks on the transaction, making sure that it is processable and provides enough fee to compensate the operator for it. Then, the transaction will be [appended](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Mailbox.sol#569) to the priority tree (and optionally to the legacy priority queue).
-
-> In the previous system, priority operations were structured in a queue. However, now they will be stored in an incremental merkle tree. The motivation for the tree structure can be read [here](./priority-queue.md).
-
-The difference between `requestL2TransactionDirect` and `requestL2TransactionTwoBridges` is that in the `requestL2TransactionTwoBridges` case the `msg.sender` of the L2 transaction is the second bridge, while in the `requestL2TransactionDirect` case it is the `msg.sender` of that call. For more details, read the [bridgehub documentation](../../bridging/bridgehub/overview.md).
-
-### Bootloader
-
-Whenever an operator sees a priority operation, it can include the transaction in the batch. While for normal L2 transactions the account abstraction protocol ensures that the `msg.sender` has indeed agreed to start a transaction in its name, for L1→L2 transactions there is no signature verification. In order to verify that the operator includes only transactions that were indeed requested on L1, the bootloader [maintains](../../system-contracts/bootloader/bootloader.yul#L1052-L1053) two variables:
-
-- `numberOfPriorityTransactions` (maintained at `PRIORITY_TXS_L1_DATA_BEGIN_BYTE` of bootloader memory)
-- `priorityOperationsRollingHash` (maintained at `PRIORITY_TXS_L1_DATA_BEGIN_BYTE + 32` of the bootloader memory)
-
-Whenever a priority transaction is processed, `numberOfPriorityTransactions` gets incremented by 1, while `priorityOperationsRollingHash` is assigned `keccak256(priorityOperationsRollingHash, processedPriorityOpHash)`, where `processedPriorityOpHash` is the hash of the priority operation that has just been processed.
-
-Also, for each priority transaction, we [emit](../../../system-contracts/bootloader/bootloader.yul#L1046) a user L2→L1 log with its hash and result, which basically means that it will get Merklized and users will be able to prove on L1 that a certain priority transaction has succeeded or failed (which can be helpful to reclaim your funds from bridges if the L2 part of the deposit has failed).
-
-Then, at the end of the batch, we [submit](../../../system-contracts/bootloader/bootloader.yul#L4117-L4118) two system L2→L1 logs with these values.
-
-### Batch commit
-
-During batch commit, the contract will remember those values, but not validate them in any way.
-
-### Batch execution
-
-During batch execution, the contract will check that the `priorityOperationsRollingHash` provided before was correct. There are two ways to do it:
-
-- [Legacy one that uses the priority queue](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L397). We will pop `numberOfPriorityTransactions` from the top of the priority queue and verify that the hashes match.
-- [The new one that uses priority tree](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L397). The operator would have to provide the hashes of these priority operations in an array, as well as proof that this entire segment belongs to the merkle tree. After it is verified that this array of leaves is correct, it will be checked whether the rolling hash of those is equal to the `priorityOperationsRollingHash`.
-
-## Upgrade transactions
-
-### Initiation
-
-Upgrade transactions can only be created during a system upgrade. This happens when the `DiamondProxy` delegatecalls to an implementation that manually puts the transaction into the storage of the `DiamondProxy`; this can occur when the `upgradeChainFromVersion` function is called in `Admin.sol` on the State Transition contract. Note that since this happens during the upgrade, there are no “real” checks on the structure of this transaction. We do have [some validation](../../../l1-contracts/contracts/upgrades/BaseZkSyncUpgrade.sol#L193), but it is purely on the side of the implementation which the `DiamondProxy` delegatecalls to, and so it may be lifted if the implementation is changed.
-
-The hash of the currently required upgrade transaction is stored under `l2SystemContractsUpgradeTxHash` variable.
-
-We will also track the batch where the upgrade has been committed in the `l2SystemContractsUpgradeBatchNumber` variable.
-
-We can not support multiple upgrades in parallel, i.e. the next upgrade should start only after the previous one has been completed.
-
-### Bootloader
-
-Upgrade transactions are processed just like priority transactions, with only the following differences:
-
-- We can have only one upgrade transaction per batch & this transaction must be the first transaction in the batch.
-- The system contracts upgrade transaction is not appended to `priorityOperationsRollingHash` and doesn’t increment `numberOfPriorityTransactions`. Instead, its hash is calculated via a system L2→L1 log _before_ it gets executed. Note, that it is an important property. More on it [below](#security-considerations).
-
-### Commit
-
-After an upgrade has been initiated, it will be required that the next commit batches operation already contains the system upgrade transaction. It is [checked](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L223) by verifying the corresponding L2→L1 log.
-
-We also remember that the upgrade transaction has been processed in this batch (by amending the `l2SystemContractsUpgradeBatchNumber` variable).
-
-### Revert
-
-In a very rare event when the team needs to revert the batch with the upgrade on ZKsync, the `l2SystemContractsUpgradeBatchNumber` is reset.
-
-Note, however, that we do not “remember” that certain batches had a pre-upgrade version, i.e. if the reverted batches have to be re-executed, the upgrade transaction must still be present in them, even if some of the deleted batches were committed before the upgrade and thus didn’t contain the transaction.
-
-### Execute
-
-Once the batch with the upgrade transaction has been executed, we [delete](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L486) these variables from storage for efficiency, to signify that the upgrade has been fully processed and that a new upgrade can be initiated.
-
-### Security considerations
-
-Since the operator can put any data into the bootloader memory and, for L1→L2 transactions, the bootloader has to blindly trust it and rely on L1 contracts to validate it, this may be a very powerful tool for a malicious operator. Note that while the governance mechanism is trusted, we try to limit our trust in the operator as much as possible, since in the future anyone will be able to become an operator.
-
-Some time ago, we _used to_ have a system where the upgrades could be done via L1→L2 transactions, i.e. the implementation of the `DiamondProxy` upgrade would include a priority transaction (with `from` equal to for instance `FORCE_DEPLOYER`) with all the upgrade params.
-
-In the current system though having such logic would be dangerous and would allow for the following attack:
-
-- Let’s say that we have at least 1 priority operation in the priority queue. This can be any operation, initiated by anyone.
-- The operator puts a malicious priority operation with an upgrade into the bootloader memory. This operation was never included in the priority operations queue and is not an upgrade transaction. However, as already mentioned above, the bootloader has no idea which priority / upgrade transactions are correct, and so this transaction will be processed.
-
-The most important caveat of this malicious upgrade is that it may change the implementation of the `Keccak256` precompile to return any values that the operator needs.
-
-- When `priorityOperationsRollingHash` is updated, instead of the “correct” rolling hash of the priority transactions, the hash that would appear with the genuine topmost priority operation is returned. The operator can’t amend the behaviour of `numberOfPriorityTransactions`, but that doesn’t matter much, since the `priorityOperationsRollingHash` will match on L1 at the execution step.
-
-That’s why the concept of the upgrade transaction is needed: it is the only transaction that can initiate transactions out of the kernel space and thus change the bytecodes of system contracts. That’s why it must be the first one, and that’s why the bootloader [emits](../../../system-contracts/bootloader/bootloader.yul#L603) its hash via a system L2→L1 log before actually processing it.
diff --git a/docs/settlement_contracts/zkchain_basics.md b/docs/settlement_contracts/zkchain_basics.md
deleted file mode 100644
index 0ac91fca92..0000000000
--- a/docs/settlement_contracts/zkchain_basics.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# L1 smart contract of an individual chain
-
-[back to readme](../README.md)
-
-## Diamond (also mentioned as State Transition contract)
-
-Technically, this L1 smart contract acts as a connector between Ethereum (L1) and the hyperchain (L2). It checks the
-validity proof and data availability, handles L2 <-> L1 communication, finalizes the L2 state transition, and more.
-
-There are also important contracts deployed on the L2 that can also execute logic, called _system contracts_. Using L2
-<-> L1 communication, they can affect both the L1 and the L2.
-
-### DiamondProxy
-
-The main contract uses the [EIP-2535](https://eips.ethereum.org/EIPS/eip-2535) diamond proxy pattern. It is an in-house
-implementation inspired by the [mudgen reference implementation](https://github.com/mudgen/Diamond). It has no
-external functions, only the fallback that delegates a call to one of the facets (target/implementation contracts). So
-even the upgrade system is a separate facet that can be replaced.
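-
-A simplified sketch of that single entry point (the selector-to-facet lookup `_facetOf` and the freezability checks are elided and assumed):
-
-```solidity
-fallback() external payable {
-    address facet = _facetOf(msg.sig); // resolve the facet for the called selector
-    require(facet != address(0), "facet not found");
-    assembly {
-        calldatacopy(0, 0, calldatasize())
-        let ok := delegatecall(gas(), facet, 0, calldatasize(), 0, 0)
-        returndatacopy(0, 0, returndatasize())
-        switch ok
-        case 0 { revert(0, returndatasize()) }
-        default { return(0, returndatasize()) }
-    }
-}
-```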
-
-One of the differences from the reference implementation is access freezability. Each of the facets has an associated
-parameter that indicates whether it is possible to freeze access to the facet. Privileged actors can freeze the **diamond**
-(not a specific facet!), and all facets with the `isFreezable` marker will be inaccessible until the governor or admin
-unfreezes the diamond. Note that this is a very dangerous capability, since the freeze can affect the upgrade system itself,
-in which case the diamond would be frozen forever.
-
-The diamond proxy pattern is very flexible and extendable. For now, it allows splitting implementation contracts by their logical meaning, removes the limit of bytecode size per contract and implements security features such as freezing. In the future, it can also be viewed as [EIP-6900](https://eips.ethereum.org/EIPS/eip-6900) for [zkStack](https://blog.matter-labs.io/introducing-the-zk-stack-c24240c2532a), where each hyperchain can implement a sub-set of allowed implementation contracts.
-
-### GettersFacet
-
-Separate facet, whose only function is providing `view` and `pure` methods. It also implements
-[diamond loupe](https://eips.ethereum.org/EIPS/eip-2535#diamond-loupe) which makes managing facets easier.
-This contract must never be frozen.
-
-### AdminFacet
-
-This facet is responsible for the configuration setup and upgradability, handling tasks such as:
-
-- Privileged Address Management: Updating key roles, including the governor and validators.
-- System Parameter Configuration: Adjusting critical system settings, such as the L2 bootloader bytecode hash, verifier address, changing DA layer or fee configurations.
-- Freezability: Executing the freezing/unfreezing of facets within the diamond proxy to safeguard the ecosystem during upgrades or in response to detected vulnerabilities.
-
-Control over the AdminFacet is divided between two main entities:
-
-- CTM (Chain Type Manager, formerly known as `StateTransitionManager`) - Separate smart contract that can perform critical changes to the system, such as protocol upgrades. For more detailed information on its function and design, refer to [this document](../chain_management/chain_type_manager.md). Although currently only one version of the CTM exists, the architecture allows for future versions to be introduced via subsequent upgrades. The owner of the CTM is the [decentralized governance](https://blog.zknation.io/introducing-zk-nation/), while for non-critical changes an Admin entity is used (see details below).
-- Chain Admin - Multisig smart contract managed by each individual chain that can perform non-critical changes to the system such as granting validator permissions.
-
-### MailboxFacet
-
-The facet that handles L2 <-> L1 communication.
-
-The Mailbox performs three functions:
-
-- L1 ↔ L2 Communication: Enables data and transaction requests to be sent from L1 to L2 and vice versa, supporting the implementation of multi-layer protocols.
-- Bridging Native Tokens: Allows the bridging of either ether or ERC20 tokens to L2, enabling users to use these assets within the L2 ecosystem.
-- Censorship Resistance Mechanism: Currently in the research stage.
-
-L1 -> L2 communication is implemented as requesting an L2 transaction on L1 and executing it on L2. This means a user
-can call a function on the L1 contract to save the data about the transaction in some queue. Later on, a validator can
-process it on L2 and mark it as processed in the L1 priority queue. Currently, it is used for sending information from
-L1 to L2 or implementing multi-layer protocols. The user pays for the transaction execution in the native token when requesting the L1 -> L2 transaction.
-
-_NOTE_: When a user requests a transaction from L1, the initiated transaction on L2 will have the following `msg.sender`:
-
-```solidity
- address sender = msg.sender;
- if (sender != tx.origin) {
- sender = AddressAliasHelper.applyL1ToL2Alias(msg.sender);
- }
-```
-
-where
-
-```solidity
-uint160 constant offset = uint160(0x1111000000000000000000000000000000001111);
-
-function applyL1ToL2Alias(address l1Address) internal pure returns (address l2Address) {
- unchecked {
- l2Address = address(uint160(l1Address) + offset);
- }
-}
-```
-
-For most rollups, address aliasing is needed to prevent cross-chain exploits that would otherwise be possible if
-we simply reused the same L1 addresses as the L2 sender. In ZKsync Era the address derivation rule is different from
-Ethereum’s, so cross-chain exploits are already impossible. However, ZKsync Era may add full EVM support in the future, so
-applying address aliasing leaves room for future EVM compatibility.
-
-The L1 -> L2 communication is also used for bridging **base tokens**. If the base token is ether (the case for ZKsync Era), the user should include a `msg.value` when initiating a
-transaction request on the L1 contract; if the base token is an ERC20, the contract will spend the user’s allowance. Before a transaction is executed on L2, the specified address will be credited
-with the funds. To withdraw funds, the user should call the `withdraw` function on the `L2BaseToken` system contract. This will
-burn the funds on L2, allowing the user to reclaim them through the `finalizeWithdrawal` function on the
-`SharedBridge` (more in the hyperchain section).
-
-More about L1->L2 operations can be found [here](./priority_queue/processing_of_l1-l2_txs.md).
-
-L2 -> L1 communication, in contrast to L1 -> L2 communication, is based only on transferring the information, and not on
-the transaction execution on L1. The full description of the mechanism for sending information from L2 to L1 can be found [here](./data_availability/standard_pubdata_format.md).
-
-The Mailbox facet also facilitates L1<>L2 communication for chains that settle on top of Gateway. The user interfaces for those are identical to the L1<>L2 communication described above. To learn more about how L1<>L2 communication via Gateway works, check out [this document](../gateway/messaging_via_gateway.md) and [this one](../gateway/l2_gw_l1_messaging.md).
-
-### ExecutorFacet
-
-A contract that accepts L2 batches, enforces data availability via DA validators and checks the validity of zk-proofs. You can read more about DA validators [in this document](../settlement_contracts/data_availability/custom_da.md).
-
-The state transition is divided into three stages:
-
-- `commitBatches` - check L2 batch timestamp, process the L2 logs, save data for a batch, and prepare data for zk-proof.
-- `proveBatches` - validate zk-proof.
-- `executeBatches` - finalize the state, marking L1 -> L2 communication processing, and saving Merkle tree with L2 logs.
-
-Each L2 -> L1 system log will have a key that is part of the following:
-
-```solidity
-enum SystemLogKey {
- L2_TO_L1_LOGS_TREE_ROOT_KEY,
- PACKED_BATCH_AND_L2_BLOCK_TIMESTAMP_KEY,
- CHAINED_PRIORITY_TXN_HASH_KEY,
- NUMBER_OF_LAYER_1_TXS_KEY,
- PREV_BATCH_HASH_KEY,
- L2_DA_VALIDATOR_OUTPUT_HASH_KEY,
- USED_L2_DA_VALIDATOR_ADDRESS_KEY,
- MESSAGE_ROOT_ROLLING_HASH_KEY,
- L2_TXS_STATUS_ROLLING_HASH_KEY,
- EXPECTED_SYSTEM_CONTRACT_UPGRADE_TX_HASH_KEY
-}
-```
-
-When a batch is committed, we process L2 -> L1 system logs. Here are the invariants that are expected there:
-
-- In a given batch there will be either 7 or 8 system logs. The 8th log is only required for a protocol upgrade.
-- There will be a single log for each key that is contained within `SystemLogKey`
-- Three logs from the `L2_TO_L1_MESSENGER` with keys:
-  - `L2_TO_L1_LOGS_TREE_ROOT_KEY`
-  - `L2_DA_VALIDATOR_OUTPUT_HASH_KEY`
-  - `USED_L2_DA_VALIDATOR_ADDRESS_KEY`
-- Two logs from `L2_SYSTEM_CONTEXT_SYSTEM_CONTRACT_ADDR` with keys:
- - `PACKED_BATCH_AND_L2_BLOCK_TIMESTAMP_KEY`
- - `PREV_BATCH_HASH_KEY`
-- Two or three logs from `L2_BOOTLOADER_ADDRESS` with keys:
- - `CHAINED_PRIORITY_TXN_HASH_KEY`
- - `NUMBER_OF_LAYER_1_TXS_KEY`
- - `EXPECTED_SYSTEM_CONTRACT_UPGRADE_TX_HASH_KEY`
-- No logs from other addresses (this may change in the future).
-
-### DiamondInit
-
-It is a one-function contract that implements the logic of initializing a diamond proxy. It is called only once, from the
-diamond constructor, and is not saved in the diamond as a facet.
-
-Implementation detail - function returns a magic value just like it is designed in
-[EIP-1271](https://eips.ethereum.org/EIPS/eip-1271), but the magic value is 32 bytes in size.
-
-## ValidatorTimelock
-
-An intermediate smart contract between the validator EOA account and the ZK chain diamond contract. Its primary purpose is
-to provide a trustless means of delaying batch execution without modifying the main ZKsync contract. ZKsync actively
-monitors the chain activity and reacts to any suspicious activity by freezing the chain. This allows time for
-investigation and mitigation before resuming normal operations.
-
-It is a temporary solution to prevent any significant impact of the validator hot key leakage, while the network is in
-the Alpha stage.
-
-This contract consists of four main functions: `commitBatches`, `proveBatches`, `executeBatches`, and `revertBatches`, which can be called only by the validator.
-
-When the validator calls `commitBatches`, the same calldata is propagated to the ZKsync contract (the `DiamondProxy` through
-`call`, where it invokes the `ExecutorFacet` through `delegatecall`), and a timestamp is assigned to these batches to track
-when they were committed by the validator, enforcing a delay between the committing and the execution of batches. Then, the
-validator can prove the already committed batches regardless of the mentioned timestamp, and again the same calldata (related
-to the `proveBatches` function) is propagated to the ZKsync contract. After the `delay` has elapsed, the validator
-is allowed to call `executeBatches` to propagate the same calldata to the ZKsync contract.
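-
-A minimal sketch of this delay mechanism (names are illustrative, not the actual ValidatorTimelock interface):
-
-```solidity
-contract TimelockSketch {
-    uint256 public immutable delay;
-    mapping(uint256 => uint256) internal committedAt; // batchNumber => commit timestamp
-
-    constructor(uint256 _delay) {
-        delay = _delay;
-    }
-
-    function commitBatch(uint256 batchNumber) external /* onlyValidator */ {
-        committedAt[batchNumber] = block.timestamp;
-        // ...propagate the same calldata to the DiamondProxy...
-    }
-
-    function executeBatch(uint256 batchNumber) external /* onlyValidator */ {
-        require(block.timestamp >= committedAt[batchNumber] + delay, "delay not passed");
-        // ...propagate the same calldata to the DiamondProxy...
-    }
-}
-```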
-
-The owner of the ValidatorTimelock contract is the decentralized governance. Note, that all the chains share the same ValidatorTimelock for simplicity.
diff --git a/docs/upgrade_history/gateway_preparation_upgrade/gateway_diff_review.md b/docs/upgrade_history/gateway_preparation_upgrade/gateway_diff_review.md
deleted file mode 100644
index 635a4aedaa..0000000000
--- a/docs/upgrade_history/gateway_preparation_upgrade/gateway_diff_review.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Gateway upgrade changes
-
-## Introduction & prerequisites
-
-[back to readme](../../README.md)
-
-This document assumes that the reader has general knowledge of how ZKsync Era works and of how our ecosystem looked at the time of the shared bridge release.
-
-To read the documentation about the current system, you can read [here](../../README.md).
-
-For more info about the previous system, refer to the following documentation:
-
-[Code4rena Documentation Smart contract Section](https://github.com/code-423n4/2024-03-zksync/tree/main/docs/Smart%20contract%20Section)
-
-## Changes from the shared bridge design
-
-This section contains some of the important changes that happened since the shared bridge release in June. This section may not be fully complete and additional information will be provided in the sections that cover specific topics.
-
-### Bridgehub now has chainId → address mapping
-
-Before, Bridgehub contained a mapping from `chainId => stateTransitionManager`. The further resolution of the mapping should happen at the CTM level.
-For easier management of the chains, a new mapping `chainId => hyperchainAddress` was added. This is considered more intuitive, since the “bridgehub is the owner of all the chains” mentality is more applicable with this new design.
-
-The upside of the previous approach was potentially easier migration within the same CTM. However, in the end it was decided that the new approach is better.
-
-#### Migration
-
-This new mapping will have to be filled after upgrading the bridgehub. This is done by calling `setLegacyChainAddress` for each of the deployed chains; it is assumed that their number is relatively low. This function is permissionless, and so it can be called by anyone after the upgrade is complete. It will call the old CTM and ask for the implementation for that chainId.
-
-Until the migration is done, transactions with the old chains will not work, but only for a short period of time.
-
-### baseTokenAssetId is used as a base token for the chains
-
-In order to facilitate future support of any type of asset as a base token, including assets minted on L2, chains will now provide the `assetId` of their base token instead. The derivation & definition of the `assetId` is expanded on in the CAB section of the doc.
-
-#### Migration & compatibility
-
-Today, there are some mappings of the sort `chainId => baseTokenAddress`. These will no longer be filled for new chains. Instead, only the assetId will be provided in a new `chainId => baseTokenAssetId` mapping.
-
-To initialize the new `baseTokenAssetId` mapping the following function should be called for each chain: `setLegacyBaseTokenAssetId`. It will encode each token as the assetId of an L1 token of the Native Token Vault. This method is permissionless.
-
-For old tooling that may rely on getters of the sort `getBaseTokenAddress(chainId)` working, we provide a getter method, but its exact behavior depends on the asset handler of the corresponding assetId, i.e. it is even possible that the method will revert for an incompatible assetId.
-
-### L2 Shared bridge (not L2AssetRouter) is deployed everywhere at the same address
-
-Before, for each new chain, we would have to initialize the mapping in the L1SharedBridge to remember the address of the l2 shared bridge on the corresponding L2 chain.
-
-Now, however, the L2AssetRouter is deployed at the same constant address on all chains.
-
-#### L2SharedBridgeLegacy
-
-Note, that for the chains that contained the `L2SharedBridge` before the upgrade, it will be upgraded to the `L2SharedBridgeLegacy` code. The `L2AssetRouter` will have the same address on all chains, including old ones.
-
-### StateTransitionManager was renamed to ChainTypeManager
-
-STM (StateTransitionManager) was renamed to CTM (ChainTypeManager). This was done to use more intuitive naming, as chains of the same “type” share the same CTM.
-
-### Hyperchains were renamed to ZK chains
-
-For consistency with the naming inside the blogs, the term “hyperchain” has been changed to “ZK chain”.
-
-## Changes in the structure of contracts
-
-While fully reusing contracts on both L1 and L2 is not always possible, it was done to a very high degree: all bridging-related contracts are now located inside the `l1-contracts` folder.
-
-## Priority tree
-
-[Migrating Priority Queue to Merkle Tree](../../settlement_contracts/priority_queue/priority-queue.md)
-
-In the currently deployed system, L1→L2 transactions are added as a part of a priority queue, i.e. all of them are stored 1-by-1 on L1 in a queue-like structure.
-
-Note that the complexity of chain migration in either direction depends on the size of the priority queue. However, the number of unprocessed priority transactions is potentially out of the hands of both the operator of the chain and the chain admin, as users are free to add priority transactions whenever there is no `transactionFilterer` contract, which is the case for any permissionless system, such as ZKsync Era.
-
-If someone tries to DDoS the priority queue, the chain can be blocked from migration. Even worse, for GW→L1 migrations, inability to finalize the migration can lead to a complete loss of chain.
-
-To combat all the issues above, it was decided to move from the priority queue to a priority tree, i.e. only the incremental merkle tree is stored on L1, while at the end of the batch the operator provides a merkle proof for the inclusion of the priority transactions that were present in the batch. This does not impact the bootloader, only how L1 checks that the priority transactions did indeed belong to the chain.
-
-## Custom DA layers
-
-Custom DA layer support was added.
-
-### Major changes
-
-In order to achieve CAB, we separated the liquidity managing logic from the Shared Bridge into `Asset Handlers`. The basic cases will be handled by `Native Token Vaults`, which handle all standard `ERC20` tokens as well as `ETH`.
-
-## L1<>L2 token bridging considerations
-
-- We have the L2SharedBridgeLegacy on chains that were live before the upgrade. This contract will keep on working, and where it exists it will also be used to:
-  - deploy bridged tokens. This is so that the l2TokenAddress keeps working on the L1, and so that we have a predictable address for these tokens.
-  - send messages to L1. On the L1, finalizeWithdrawal does not specify the l2Sender. Legacy withdrawals will use the legacy bridge as their sender, while new withdrawals would use the L2_ASSET_ROUTER_ADDR. In the future we will add the sender to the L1 finalizeWithdrawal interface. Until the current method is deprecated, we use the l2SharedBridgeAddress even for new withdrawals on legacy chains.
-    This also means that on the L1 side we set the L2AR address when calling the function via the legacy interface, even if it is a baseToken withdrawal. Later, when we learn whether or not it is a baseToken withdrawal, we override the value.
-- We have the finalizeWithdrawal function on L1 AR, which uses the finalizeDeposit in the background.
-- L1→L2 deposits need to use the legacy encoding for SDK compatibility.
-  - This means using the legacy finalizeDeposit with tokenAddress, which calls the new finalizeDeposit with assetId.
-  - On the other hand, new assets will use the new finalizeDeposit directly.
-- The originChainId will be tracked for each assetId in the NTVs. This is the chain the token is originally native to. It is needed to accurately track chainBalance (especially for L2-native tokens bridged to other chains via L1), and to verify that the assetId is indeed an NTV asset id (i.e. has the L2_NATIVE_TOKEN_VAULT_ADDR as its deployment tracker).
-
-## Upgrade process in detail
-
-You can read more about the upgrade process itself [here](<./upgrade_process_(no_gateway_chain).md>).
diff --git a/docs/upgrade_history/gateway_preparation_upgrade/upgrade_process_(no_gateway_chain).md b/docs/upgrade_history/gateway_preparation_upgrade/upgrade_process_(no_gateway_chain).md
deleted file mode 100644
index 6339c1aad1..0000000000
--- a/docs/upgrade_history/gateway_preparation_upgrade/upgrade_process_(no_gateway_chain).md
+++ /dev/null
@@ -1,185 +0,0 @@
-# The upgrade process to the new version
-
-[back to readme](../../README.md)
-
-The Gateway system introduces a lot of new contracts, so to provide the best experience for ZK chains, a multistage upgrade will be conducted. The upgrade will require some auxiliary contracts that exist only for the purpose of this upgrade.
-
-## Previous version
-
-The previous version can be found [here](https://github.com/matter-labs/era-contracts/tree/main).
-
-The documentation for the previous version can be found [here](https://github.com/code-423n4/2024-03-zksync).
-
-However, deep knowledge of the previous version should not be required for understanding. This document _does_, however, require an understanding of the new system, so it should be the last document for you to read.
-
-## Overall design motivation
-
-During design of this upgrade we followed two principles:
-
-- Trust minimization, i.e. once the voting has started, no party can do damage to the upgrade or change its course. For instance, no one should be able to prevent a usable chain from conducting the upgrade to the gateway.
-- Minimal required preparation for chains. All of the contracts required for the upgrade (e.g. the rollup L2DA validator, etc) will be deployed during the upgrade automatically. This minimizes the risk of mistakes for each individual chain.
-
-There are four roles that will be mentioned within this document:
-
-- “Governance” — trusted entity that embodies the whole [decentralized voting process for ZK chain ecosystem](https://blog.zknation.io/introducing-zk-nation/).
-- “Ecosystem admin” — a relatively trusted role for non-critical operations. It will typically be implemented as a multisig that is approved by the governance and can only perform limited operations to facilitate the upgrade. It can not alter the content of the upgrade, nor should it be able to somehow harm chains by weaponizing the upgrade.
-- “Chain admin” — an admin of a ZK chain. An entity with limited ability to govern their own chain (they can choose the chain’s validators, but can not change the general behavior of the chain).
-- “Deployer”. This role may not be mentioned again, but it is implicitly present during the preparation stage. This is a hot wallet that is responsible for deploying the implementations of the new contracts, etc. The governance should validate all its actions at the start of the voting process, so no trust is assumed from this wallet.
-
-## Ecosystem preparation stage
-
-This stage involves everything that is done before the voting starts. At this stage, all the details of the upgrade must be fixed, including the chain id of the gateway.
-
-More precisely, the implementations for the contracts will have to be deployed, and all of the new contracts will have to be deployed along with their proxies, e.g. `CTMDeploymentTracker`, `L1AssetRouter`, etc. In addition, at this stage, the bytecodes of the L2 contracts are considered fixed, i.e. they should not change.
-
-### Ensuring Governance ownership
-
-Some of the new contracts (e.g. `CTMDeploymentTracker`) have two sorts of admins: the admin of their proxy as well as the `owner` role inside the contract. Both should belong to governance (the former is indirectly controlled by governance via a `ProxyAdmin` contract).
-
-The governance needs to know that it will definitely retain the ownership of these contracts regardless of the actions of their deployer. There are multiple ways this is ensured:
-
-- For `TransparentUpgradeableProxy` this is simple: we can just transfer the ownership in one step to the `ProxyAdmin` that is under the control of the governance.
-- For contracts that are deployed as standalone contracts (not proxies), where possible we provide the address of the owner in the constructor.
-- For proxies, and for contracts for which transferring ownership inside the constructor is not an option, we transfer the ownership to a `TransitionaryOwner` contract. This is a contract that is responsible for being a temporary owner until the voting ends, and it can do only two things: accept ownership of a contract and atomically transfer it to the governance (see the sketch below). This is a workaround we have to use, since most of our contracts implement `Ownable2Step` and so it is not possible to transfer ownership in one go.
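-
-A sketch of the `TransitionaryOwner` idea (names assumed; targets are assumed to implement `Ownable2Step`):
-
-```solidity
-interface IOwnable2Step {
-    function acceptOwnership() external;
-    function transferOwnership(address newOwner) external;
-}
-
-contract TransitionaryOwnerSketch {
-    address public immutable governance;
-
-    constructor(address _governance) {
-        governance = _governance;
-    }
-
-    function claimAndTransfer(IOwnable2Step target) external {
-        // Accept the pending ownership and immediately start the two-step
-        // transfer to governance, which accepts it once the voting ends.
-        target.acceptOwnership();
-        target.transferOwnership(governance);
-    }
-}
-```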
-
-PS: It may be possible that for more contracts, e.g. some of the proxies, we could have avoided the `TransitionaryOwner` approach by, e.g., providing the governance address inside an initializer. But we need the `TransitionaryOwner` for `ValidatorTimelock` anyway, so we decided to use it in most places to keep the code simpler. Also, some contracts will use the `Create2AndTransfer` contract, which deploys a contract and immediately transfers ownership to the governance.
-
-### L2SharedBridge and L2WETH migration
-
-In the current system (i.e. before the gateway upgrade), the trusted admin of the L1SharedBridge is responsible for [setting the correct L2SharedBridge address for chains](https://github.com/matter-labs/era-contracts/blob/aafee035db892689df3f7afe4b89fd6467a39313/l1-contracts/contracts/bridge/L1SharedBridge.sol#L249) (note that the link points to the old code). This is done with no additional validation. The system is generally designed to protect chains in the case when a malicious admin tries to attack a chain. There are two measures for that:
-
-- The general assumption is that the L2 shared bridge is set for a chain as soon as possible. It is a realistic assumption, since without it no bridging of any funds except for the base token is possible. So if at an early stage the admin put a malicious L2 shared bridge for a chain, the chain would lose the community’s trust and should be discarded.
-- Admin can not retroactively change L2 shared bridge for any chains. So once the correct L2 shared bridge is set, there is no way a bad admin can harm the chain.
-
-The mapping for the L2SharedBridge will be used as the source of the `L2SharedBridgeLegacy` contract address during the migration.
-
-To correctly initialize the `L2NativeTokenVault` inside the gateway upgrade, we will need the address of the L2 Wrapped Base Token contract [as well](https://github.com/matter-labs/era-contracts/blob/84d5e3716f645909e8144c7d50af9dd6dd9ded62/l2-contracts/contracts/bridge/L2WrappedBaseToken.sol) (note that the link is intentionally for the pre-v26 codebase to show that these are deployed even before the upgrade).
-
-The data to execute the upgrade with is gathered on L1, so we need to create a mapping on L1 from `chainId => l2WrappedBaseToken`. This is what the `L2WrappedBaseTokenStore` contract is for.
-
-Some chains already have an `L2WrappedBaseToken` implementation deployed. It will be the job of the admin of the contract to prepopulate it with the correct addresses of those. The governance will have to double-check that this mapping has been populated correctly for the existing chains before proceeding with the upgrade.
-
-Since we do not want to stop new chain creation while the voting is in progress, the admin needs to have the ability to add both new `L2SharedBridges` and the new `L2WrappedBaseToken` addresses to the mappings above. The following protections are put in place:
-
-- In case the trusted admin maliciously populated the addresses for any chains that were created before the voting started, the governance should simply reject the vote.
-- In case the trusted admin maliciously populated the addresses for a chain after the voting ended, the same assumptions as those described for the L2SharedBridge apply, i.e. the chain should have its `L2SharedBridge` and `L2WrappedBaseToken` deployed as soon as possible after the chain’s creation; in case the admin did something malicious, the chain should be immediately discarded to prevent loss of value.
-
-### Publishing bytecodes for everyone
-
-Before a contract can be deployed with a bytecode, it must be marked as “known”. This includes system contracts. This caused some inconveniences during previous upgrades:
-
-- For each chain we would have to publish all factory dependencies for the upgrade separately, making it an expensive and risk-prone process.
-- If a chain forgets to publish the bytecodes before it executes an upgrade, there is little way to recover without intervention from the Governance.
-
-In this upgrade, a different approach is used to ensure a safe and riskless preparation for the upgrade:
-
-- All bytecodes that are needed for this upgrade must be published to the `BytecodesSupplier` contract.
-- The protocol upgrade transaction will have all the required dependencies in its factory deps. During the upgrade they will be marked as known automatically by the system. The operator of a chain needs to grab the preimages for those from events emitted by the `BytecodesSupplier`.
-- It will be the job of the governance to verify that all the bytecodes were published to this contract.
-
-## Voting stage
-
-### Things to validate by the governance
-
-- That the L1/L2 bytecodes and the calldata are correct.
-- That the correct L2SharedBridge addresses are populated in the L1SharedBridge (note that it is a legacy contract from the current system that becomes the L1Nullifier in the new upgrade) and that the L2WrappedBaseTokenStore has been populated correctly.
-- [That the ownership is correctly transferred to governance.](#ensuring-governance-ownership)
-- That the bytecodes were published correctly to the `BytecodeSupplier` contract.
-
-### Things to sign by the governance
-
-The governance should sign all operations that will happen in all of the consecutive stages at this time. There will be no other voting. Unless stated otherwise, all the governance operations in this document are listed as dependencies on one another, i.e. they must be executed in strictly sequential order.
-
-## Stage 1. Publishing of the new protocol upgrade
-
-### Txs by governance (in one multicall)
-
-1. The governance accepts ownership for all the contracts that used `TransitionaryOwner`.
-2. The governance publishes the new version by calling `function setNewVersionUpgrade`.
-3. The governance sets the new `validatorTimelock`.
-4. The governance calls `setChainCreationParams` and sets the new chain creation params to ensure that the chain creation fails.
-5. The governance should call the `GovernanceUpgradeTimer.startTimer()` to ensure that the timer for the upgrade starts. It will give the ecosystem's chains a fixed amount of time to upgrade their chains before the old protocol version becomes invalid.
-
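-For illustration, a minimal sketch of the stage-1 multicall, assuming simplified interfaces (the real signatures and parameter types differ):
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.21;
-
-// Sketch of the stage-1 governance multicall. All interfaces below are
-// simplified assumptions.
-interface IOwnable2StepSketch {
-    function acceptOwnership() external;
-}
-
-interface IChainTypeManagerSketch {
-    function setNewVersionUpgrade(bytes calldata upgradeCutData) external;
-    function setValidatorTimelock(address newTimelock) external;
-    function setChainCreationParams(bytes calldata params) external;
-}
-
-interface IGovernanceUpgradeTimerSketch {
-    function startTimer() external;
-}
-
-contract Stage1MulticallSketch {
-    function run(
-        address[] calldata transitionaryOwned,
-        IChainTypeManagerSketch ctm,
-        address newValidatorTimelock,
-        bytes calldata upgradeCutData,
-        bytes calldata blockingChainCreationParams,
-        IGovernanceUpgradeTimerSketch timer
-    ) external {
-        // (1) Accept ownership of every contract handed over via TransitionaryOwner.
-        for (uint256 i = 0; i < transitionaryOwned.length; ++i) {
-            IOwnable2StepSketch(transitionaryOwned[i]).acceptOwnership();
-        }
-        // (2) Publish the new protocol version.
-        ctm.setNewVersionUpgrade(upgradeCutData);
-        // (3) Point the system at the new ValidatorTimelock.
-        ctm.setValidatorTimelock(newValidatorTimelock);
-        // (4) Temporarily make chain creation fail until stage 2.
-        ctm.setChainCreationParams(blockingChainCreationParams);
-        // (5) Start the fixed upgrade window for existing chains.
-        timer.startTimer();
-    }
-}
-```
-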
-### Impact
-
-The chains will get the ability to upgrade to the new protocol version. They are advised to do so before the upgrade deadline runs out.
-
-Also, new chains won't be deployable during this stage due to step (4).
-
-Chains, whether upgraded or not, should work as usual, as the new L2 bridging ecosystem is fully compatible with the old L1SharedBridge.
-
-## Chain Upgrade flow
-
-Let’s take a deeper look at what the upgrade of an individual chain looks like.
-
-### Actions by Chain Admins
-
-As usual, the ChainAdmin should call `upgradeChainFromVersion`. What is unusual, however, is the following:
-
-- The ValidatorTimelock changes, so the admin should call the new ValidatorTimelock to re-register the existing validators there.
-- A new DA validation mechanism is introduced, so the ChainAdmin should set the new DA validator pair.
-- If a chain should be a permanent rollup, the ChainAdmin should call the `makePermanentRollup()` function.
-
-It is preferable that all the steps above are executed in a single multicall for greater convenience, though it is not mandatory. A sketch of such a multicall is shown below.
-
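-For illustration, a minimal sketch of such a multicall, assuming simplified signatures for the AdminFacet and the new ValidatorTimelock:
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.21;
-
-// Sketch of the calls a ChainAdmin could batch; signatures are simplified
-// assumptions based on the steps listed above.
-interface IAdminFacetSketch {
-    function upgradeChainFromVersion(uint256 oldProtocolVersion, bytes calldata upgradeCutData) external;
-    function setDAValidatorPair(address l1DAValidator, address l2DAValidator) external;
-    function makePermanentRollup() external;
-}
-
-interface IValidatorTimelockSketch {
-    function addValidator(uint256 chainId, address validator) external;
-}
-
-contract ChainAdminUpgradeSketch {
-    function run(
-        IAdminFacetSketch diamondProxy,
-        IValidatorTimelockSketch newTimelock,
-        uint256 chainId,
-        uint256 oldProtocolVersion,
-        bytes calldata upgradeCutData,
-        address[] calldata oldValidators,
-        address l1DAValidator,
-        address l2DAValidator,
-        bool permanentRollup
-    ) external {
-        // Upgrade the chain to the new protocol version.
-        diamondProxy.upgradeChainFromVersion(oldProtocolVersion, upgradeCutData);
-        // Re-register the existing validators on the new ValidatorTimelock.
-        for (uint256 i = 0; i < oldValidators.length; ++i) {
-            newTimelock.addValidator(chainId, oldValidators[i]);
-        }
-        // Configure the new DA validation mechanism.
-        diamondProxy.setDAValidatorPair(l1DAValidator, l2DAValidator);
-        // Optional: opt into being a permanent rollup.
-        if (permanentRollup) {
-            diamondProxy.makePermanentRollup();
-        }
-    }
-}
-```
-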
-This upgrade adds a lot of new chain parameters, so these [should be managed carefully](../../chain_management/admin_role.md).
-
-### Upgrade flow in contracts
-
-Usually, we would perform an upgrade by simply executing a list of force deployments, i.e. providing an array of the contracts to deploy for the system. This array would be constant for all chains, and it would work fine.
-
-However, in this upgrade some of the constructor parameters (e.g. the address of the `L2SharedBridgeLegacy`) are specific to each chain. Thus, besides the standard parts of the upgrade, each chain also has its `ZKChainSpecificForceDeploymentsData` populated. The parameters that are constant across chains populate the `FixedForceDeploymentsData` struct.
-
-Even if the above could be composed on L1 so that the old `(address, bytecodeHash, constructorData)` list could still be reused, there are also other actions, such as upgrading the L2SharedBridge to the L2SharedBridgeLegacy implementation, that require rather complex logic.
-
-Due to the complexity of the actions above, it was decided to put all of them into the [L2GatewayUpgrade](https://github.com/matter-labs/era-contracts/blob/14961f1efecac1030139c4cf0655b14135197772/system-contracts/contracts/L2GatewayUpgrade.sol) contract. It is supposed to be force-deployed with constructor parameters containing the `ZKChainSpecificForceDeploymentsData` as well as the `FixedForceDeploymentsData`. It will be force-deployed to the ComplexUpgrader’s address to get kernel space rights.
-
-So most of the system contracts will be deployed the old way (via force deployment), but for the more complex actions the `L2GatewayUpgrade` bytecode will be temporarily put at the `ComplexUpgrader` address and will initialize the additional contracts inside its constructor. Then the original `ComplexUpgrader` bytecode will be put back there.
-
-The entire flow can be summarized as follows:
-
-1. On L1, when `AdminFacet.upgradeChainFromVersion` is called by the Chain Admin, the contract delegatecalls to the [GatewayUpgrade](https://github.com/matter-labs/era-contracts/blob/14961f1efecac1030139c4cf0655b14135197772/system-contracts/contracts/L2GatewayUpgrade.sol) contract.
-2. The `GatewayUpgrade` gathers all the data needed to compose the `ZKChainSpecificForceDeploymentsData`, while the `FixedForceDeploymentsData` part is hardcoded inside the upgrade transaction.
-3. The combined upgrade transaction consists of many forced deployments (basically tuples of `(address, bytecodeHash, constructorInput)`; see the sketch after this list), and the one that is responsible for the temporary `L2GatewayUpgrade` gets its `constructorInput` set to contain the `ZKChainSpecificForceDeploymentsData` / `FixedForceDeploymentsData`.
-4. When the upgrade is executed on L2, it iterates over the forced deployments, deploys most of the contracts and then executes the `L2GatewayUpgrade`.
-5. `L2GatewayUpgrade` will deploy the L2 Bridgehub, MessageRoot, L2NativeTokenVault and L2AssetRouter. It will also deploy the L2WrappedBaseToken if it is missing, and it will upgrade the implementation of the L2SharedBridge as well as the UpgradeableBeacon for the bridged tokens.
-
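-For reference, the rough shape of one forced-deployment entry from step (3); treat the exact field set as illustrative:
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.21;
-
-// Rough shape of a forced-deployment entry; field names are illustrative.
-struct ForceDeployment {
-    bytes32 bytecodeHash; // must be marked as "known" before deployment
-    address newAddress;   // address the contract is force-deployed to
-    bool callConstructor; // whether to execute the constructor
-    uint256 value;        // value passed with the constructor call
-    bytes input;          // constructorInput; for the temporary L2GatewayUpgrade
-                          // this encodes ZKChainSpecificForceDeploymentsData and
-                          // FixedForceDeploymentsData
-}
-```
-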
-## Stage 2. Finalization of the upgrade
-
-### Txs by governance (in one multicall)
-
-- Call the `GovernanceUpgradeTimer` to check whether the deadline has passed, since only after that can the upgrade be finalized.
-- Set the protocol version deadline for the old version to 0, ensuring that chains on the old version won't be able to commit any new batches.
-- Upgrade the old contracts to the new implementations.
-- Set the correct new chain creation params, re-enabling chain creation. A sketch of this multicall is shown below.
-
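-For illustration, a minimal sketch of the stage-2 multicall under the simplified interfaces below (`checkDeadline` is an assumed name):
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.21;
-
-// Sketch of the stage-2 governance multicall; all names are simplified assumptions.
-interface IGovernanceUpgradeTimerSketch {
-    // Assumed to revert while the upgrade deadline has not yet passed.
-    function checkDeadline() external view;
-}
-
-interface IChainTypeManagerSketch {
-    function setProtocolVersionDeadline(uint256 protocolVersion, uint256 deadline) external;
-    function setChainCreationParams(bytes calldata params) external;
-}
-
-contract Stage2MulticallSketch {
-    function run(
-        IGovernanceUpgradeTimerSketch timer,
-        IChainTypeManagerSketch ctm,
-        uint256 oldProtocolVersion,
-        bytes calldata correctChainCreationParams
-    ) external {
-        // Finalization is only possible after the deadline has passed.
-        timer.checkDeadline();
-        // Chains on the old version can no longer commit batches.
-        ctm.setProtocolVersionDeadline(oldProtocolVersion, 0);
-        // ...proxy upgrades of the old contracts would go here...
-        // Re-enable chain creation with the correct params.
-        ctm.setChainCreationParams(correctChainCreationParams);
-    }
-}
-```
-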
-### Txs by anyone
-
-After the governance has executed the steps above, anyone can send the following transactions to complete the upgrade:
-
-For each chainId:
-
-- `Bridgehub.setLegacyBaseTokenAssetId`
-- `Bridgehub.setLegacyChainAddress`
-
-For each token:
-
-- register the token inside the L1NTV
-
-For each chain/token pair:
-
-- update the chain balances in the L1NTV from the shared bridge
-
-The exact way these functions will be executed is out of scope of this document. It can be done via a trivial multicall; a sketch is shown below.
-
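-For illustration, a minimal sketch of these permissionless calls; the Bridgehub function names come from the list above, while the L1NTV signatures are assumptions:
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.21;
-
-// Sketch of the permissionless finalization calls; the L1NTV signatures are
-// simplified assumptions.
-interface IBridgehubSketch {
-    function setLegacyBaseTokenAssetId(uint256 chainId) external;
-    function setLegacyChainAddress(uint256 chainId) external;
-}
-
-interface IL1NativeTokenVaultSketch {
-    function registerToken(address token) external;
-    function updateChainBalancesFromSharedBridge(uint256 chainId, address token) external;
-}
-
-contract FinalizeUpgradeSketch {
-    function run(
-        IBridgehubSketch bridgehub,
-        IL1NativeTokenVaultSketch ntv,
-        uint256[] calldata chainIds,
-        address[] calldata tokens
-    ) external {
-        // Per chainId: migrate the legacy base token asset id and chain address.
-        for (uint256 i = 0; i < chainIds.length; ++i) {
-            bridgehub.setLegacyBaseTokenAssetId(chainIds[i]);
-            bridgehub.setLegacyChainAddress(chainIds[i]);
-        }
-        // Per token: register it, then per chain/token pair update the balances.
-        for (uint256 t = 0; t < tokens.length; ++t) {
-            ntv.registerToken(tokens[t]);
-            for (uint256 i = 0; i < chainIds.length; ++i) {
-                ntv.updateChainBalancesFromSharedBridge(chainIds[i], tokens[t]);
-            }
-        }
-    }
-}
-```
-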
-### Impact
-
-The ecosystem has been fully transformed to the new version.
-
-## Security notes
-
-### Importance of preventing new batches being committed with the old version
-
-The new `L1AssetRouter` is not compatible with chains that do not support the new protocol version, as they do not have the `L2AssetRouter` deployed. Bridging to such chains will lead to funds being lost without recovery (since, formally, the L1->L2 transaction won't fail, as it is just a call to an empty address).
-
-This is why it is crucial that in step (2) we revoke the ability of outdated chains to commit new batches, as those batches might include transactions spawned via the `L1AssetRouter`.
diff --git a/docs/upgrade_history/v27_evm_emulation/v27-evm-emulation.md b/docs/upgrade_history/v27_evm_emulation/v27-evm-emulation.md
deleted file mode 100644
index c232f01813..0000000000
--- a/docs/upgrade_history/v27_evm_emulation/v27-evm-emulation.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# V27 EVM emulation upgrade
-
-## Upgrade process
-
-The V27 upgrade will happen after the gateway preparation upgrade, but before the gateway is deployed. As such, the upgrade process does not involve the Gateway parts (upgrading the CTM on GW, etc.); it is an L1-only upgrade.
-
-Additionally, this is not a bridge upgrade: the bridge and ecosystem contracts don't have new features, so L1<>L2 bridging is not affected. This means that only the system components, the Verifiers, the Facets, and the L2 contracts need to be upgraded.
-
-The upgrade process is as follows:
-
-- deploy new contract implementations.
-- publish L2 bytecodes
-- generate upgrade data
- - forceDeployments data on L2. Contains all new System and L2 contracts.
- - new genesis diamondCut (which contains facetCuts, and genesis forceDeployments, as well as init data)
- - upgradeCut (which contains facetCuts, and upgrade forceDeployments, as well as upgrade data)
-- prepare ecosystem upgrade governance calls (sketched after this list):
- - stage 0:
- - pause gateway migrations (needed on CTM even though GW is not yet deployed)
- - stage 1:
- - upgrade proxies for L1 contracts.
- - CTM:
- - set new ChainCreationParams (-> contains new genesis upgrade cut data)
- - set new version upgrade (-> contains new upgrade cut data)
- - stage 2:
- - unpause gateway migrations
-
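-For illustration, a minimal sketch of the three stages; the pause/unpause function names are hypothetical:
-
-```solidity
-// SPDX-License-Identifier: MIT
-pragma solidity ^0.8.21;
-
-// Sketch of the three V27 governance stages. The migration pause/unpause
-// names are hypothetical; the CTM setters mirror the list above.
-interface IChainTypeManagerSketch {
-    function pauseMigration() external;   // hypothetical name
-    function unpauseMigration() external; // hypothetical name
-    function setChainCreationParams(bytes calldata genesisDiamondCut) external;
-    function setNewVersionUpgrade(bytes calldata upgradeCut) external;
-}
-
-contract V27GovernanceSketch {
-    IChainTypeManagerSketch public immutable ctm;
-
-    constructor(IChainTypeManagerSketch _ctm) {
-        ctm = _ctm;
-    }
-
-    // Stage 0: pause gateway migrations on the CTM.
-    function stage0() external {
-        ctm.pauseMigration();
-    }
-
-    // Stage 1: upgrade L1 proxies (omitted here), then update the CTM.
-    function stage1(bytes calldata genesisDiamondCut, bytes calldata upgradeCut) external {
-        ctm.setChainCreationParams(genesisDiamondCut);
-        ctm.setNewVersionUpgrade(upgradeCut);
-    }
-
-    // Stage 2: unpause gateway migrations.
-    function stage2() external {
-        ctm.unpauseMigration();
-    }
-}
-```
-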
-Read more here: [Upgrade process document](../../chain_management/upgrade_process.md)
-
-## Changes
-
-### New features
-
-- EVM emulation, system contracts and bootloader
-- service transaction on Mailbox
-- verifiers: Fflonk and Plonk dual verifiers
-- identity precompile
-- new TUPP contract ServerNotifier
-- ChainTypeManager: add setServerNotifier (used for GW migration)
-
-### Bug fixes
-
-- GW send data to L1 bug in Executor
-- safeCall tokenData on L2.
-- Token registration
-- Bridgehub: registerLegacy onlyL1 modifier
-
-### Token registration
-
-- L1Nullifier → small check added in token registration
-- L1AssetRouter: → small check in token registration
-- L1NTV: token registration
-- L2AssetRouter: token registration check
-- L2NTV: token registration
-
-### Changed without need for upgrade
-
-- AssetRouterBase → casting, no need
-- L1ERC20Bridge → comment changes, don’t change
-- CTMDeploymentTracker: changed error imports only, do not upgrade.
-- Relayed SLDA validator version, deployed on GW, nothing to upgrade.