Layer2

Layer 2 protocols are secondary frameworks built on top of Layer 1 blockchains to enhance scalability and reduce transaction costs. By utilizing technologies like Optimistic Rollups and ZK-Rollups, L2s like Arbitrum, Optimism, and Base process transactions off-chain before finalizing them on the mainnet. Following the 2026 Ethereum upgrades, L2s have become the primary execution layer for retail users. Stay updated on interoperability, fractal scaling, and the reduction of gas fees across the modular Ethereum roadmap.

214 Articles
Created: 2026/02/02 18:52
Updated: 2026/02/02 18:52
Lessons From Hands-on Research on High-Velocity AI Development

The main constraint on AI-assisted development was not model capability but how context was structured and exposed.

Author: Hackernoon
A Monumental Boost For Layer 2 Capacity

In a significant move for blockchain scalability, the Ethereum Foundation has successfully activated a crucial upgrade. The Ethereum Foundation BPO-1 upgrade went live on December 10th, marking a pivotal step in enhancing the network's capacity for Layer 2 solutions. This development directly addresses one of the ecosystem's most pressing needs: scalable and affordable transaction space.

What Exactly is the Ethereum Foundation BPO-1 Upgrade? The core of this update is a technical enhancement that increases 'blob' capacity. Think of blobs as dedicated data packages for Layer 2 networks like Optimism and Arbitrum. Before the Ethereum Foundation BPO-1 upgrade, capacity was limited. Now, each new block on Ethereum can carry up to 15 of these blobs, a substantial increase. The best part? This expansion was achieved without a disruptive hard fork, ensuring network stability.

Why Does This Boost to Layer 2 Capacity Matter? You might wonder, why focus on Layer 2? The answer lies in Ethereum's scalability trilemma: balancing security, decentralization, and scalability. Layer 2s process transactions off the main chain (Layer 1) and post compressed proofs back to it. Therefore, by increasing blob space, the Ethereum Foundation BPO-1 upgrade directly fuels the growth of these scaling solutions. The benefits are clear:

Lower Costs: More space means L2s can batch more transactions, driving down fees for end-users.
Higher Throughput: Networks can handle more activity, supporting wider adoption.
Future-Proofing: It creates room for new and existing L2s to innovate and grow.

What's Next After the BPO-1 Activation? The journey doesn't stop here. The Ethereum Foundation has already charted the course for the next phase. A follow-up enhancement, known as BPO-2, is scheduled for activation in January 2026. This planned upgrade aims to push capacity even further, demonstrating a clear, long-term commitment to scaling. The sequential rollout of BPO-1 and BPO-2 shows a methodical…
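As a back-of-the-envelope check on what the new blob count means for rollups, the figures above can be turned into a throughput estimate. This sketch assumes the EIP-4844 blob size of 128 KiB and Ethereum's 12-second slot time; it is an illustration, not an official figure.

```python
# Rough L2 data-throughput estimate from a per-block blob count.
# Assumptions: 128 KiB per blob (EIP-4844) and a 12-second slot time.
BLOB_SIZE_BYTES = 128 * 1024   # 131,072 bytes per blob
SLOT_SECONDS = 12              # time between Ethereum blocks

def blob_throughput_kib_per_s(blobs_per_block: int) -> float:
    """Data space made available to rollups, in KiB per second."""
    return blobs_per_block * BLOB_SIZE_BYTES / SLOT_SECONDS / 1024

# Pectra-era target of 6 blobs vs. the post-BPO-1 maximum of 15.
print(blob_throughput_kib_per_s(6))   # 64.0
print(blob_throughput_kib_per_s(15))  # 160.0
```

At 15 blobs per block, rollups get 2.5x the data bandwidth of the 6-blob configuration, which is where the lower per-transaction fees come from.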

Author: BitcoinEthereumNews
Ethereum Foundation’s BPO-1 Upgrade: A Monumental Boost for Layer 2 Capacity

In a significant move for blockchain scalability, the Ethereum Foundation has successfully activated a crucial upgrade. The Ethereum Foundation BPO-1 upgrade went live on December 10th, marking a pivotal step in enhancing the network's capacity for Layer 2 solutions. This development directly addresses one of the ecosystem's most pressing needs: scalable and affordable transaction space. […]

Author: bitcoinworld
Can Your AI Actually Use a Computer? A 2025 Map of Computer‑Use Benchmarks

This article maps today's computer-use benchmarks across three layers: UI grounding, web agents, and full OS use. It shows how a few anchors like ScreenSpot, Mind2Web, REAL, OSWorld, and CUB are emerging, explains why scaffolding and harnesses often drive more gains than model size, and gives practical guidance on which evals to use if you are building GUI models, web agents, or full computer-use agents.

Author: Hackernoon
How to Build a Fully Automated Affiliate Marketing Tech Stack in 2026

An affiliate marketing tech stack in 2026 is a connected set of tools where tracking, CRM, payouts, analytics, and comms are wired together via APIs and automation. It's not about "I installed a plugin and added a pixel." We're talking about real-time data flows across your campaigns.

Author: Hackernoon
What will the Fusaka upgrade bring to Ethereum?

The name Fusaka comes from a combination of the execution-layer upgrade Osaka and the consensus-layer star Fulu. The upgrade is expected to activate on December 3, 2025 at 21:49 UTC. It includes 12 EIPs covering data availability, gas/block capacity, security optimization, signature compatibility, transaction fee structure, and more. It is a systematic upgrade aimed at expanding L1 capacity, reducing L2 costs, reducing node costs, and improving user experience.

I. Fusaka's two core objectives: improve Ethereum performance and enhance user experience

Objective 1: Significantly improve the underlying performance and scalability of Ethereum. Core keywords: data availability expansion, reduced node load, more flexible blobs, improved execution capabilities, and a more efficient and secure consensus mechanism. In short: further improve Ethereum performance.

Objective 2: Improve user experience and drive the next generation of wallets and account abstraction. Core keywords: block preconfirmation, P-256 (device-native signature) support, wallets without mnemonic phrases, and a more modern account system. Essentially, Ethereum is moving closer to the experience of mainstream internet software.

II. Five Key Changes in Fusaka

1. PeerDAS: reduces the data storage burden on nodes. PeerDAS is a core new feature of the Fusaka upgrade. Currently, Layer 2 networks use blobs (a temporary data type) to publish data to Ethereum. Before the Fusaka upgrade, each full node had to store every blob to ensure data existence. As blob throughput increases, downloading all this data becomes extremely resource-intensive, making it difficult for nodes to keep up. PeerDAS employs a data availability sampling scheme, allowing each node to store only a subset of data blocks instead of the entire dataset. To ensure data availability, the full dataset can be reconstructed from any 50% of the published data, reducing the probability of errors or missing data to a cryptographically negligible level.
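The 50%-reconstruction property described above can be illustrated with a toy polynomial erasure code in the spirit of Reed-Solomon. This is a didactic sketch, not the actual PeerDAS encoding; the field size, symbol layout, and share counts are simplified assumptions.

```python
# Toy Reed-Solomon-style erasure code: k data symbols define a degree-(k-1)
# polynomial over a prime field; publishing n = 2k evaluations ("shares")
# lets ANY k of them (i.e. any 50%) rebuild the original data.
P = 257  # small prime field, for the demo only

def recover(points, xs):
    """Lagrange-interpolate through `points` and evaluate at each x in xs."""
    def at(x):
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total
    return [at(x) for x in xs]

data = [42, 7, 99, 123]                 # k = 4 data symbols
base = list(enumerate(data))            # points (0,42), (1,7), (2,99), (3,123)
shares = [(x, recover(base, [x])[0]) for x in range(8)]  # n = 8 shares

# Simulate losing half the shares: keep only 4 of the 8 ...
kept = [shares[1], shares[4], shares[6], shares[7]]
# ... and the original data still comes back exactly.
print(recover(kept, [0, 1, 2, 3]))      # [42, 7, 99, 123]
```

Real deployments use much larger fields and 2D extensions, but the core guarantee is the same one the article cites: any half of the encoded data suffices.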
PeerDAS works by applying Reed-Solomon erasure coding to blob data. The same encoding technology appears in everyday media: DVDs remain readable despite scratches, and QR codes can be fully recognized even when partially obscured. The PeerDAS scheme therefore keeps node hardware and bandwidth requirements within an acceptable range while enabling blob expansion, supporting more and larger Layer 2 networks at lower cost.

2. Flexibly increase the number of blobs as needed: adapt to evolving L2 data requirements. To keep upgrades consistent across all nodes, clients, and validator software, a gradual approach is necessary. To adapt more quickly to evolving Layer 2 data requirements, a mechanism of blob-parameter-only forks is introduced. When blobs were first added to the network in the Dencun upgrade, the target was 3 (max 6); this was later increased to 6 (max 9) in the Pectra upgrade. After Fusaka, blob counts can grow at a sustainable rate without requiring major network upgrades.

3. Supports historical record expiration: reduces node costs. To reduce the disk space required by node operators as Ethereum continues to grow, clients are required to begin supporting the expiration of some historical data. In practice, clients already support this; the upgrade formalizes it as a requirement.

4. Pre-confirmation of blocks: enables faster transaction confirmation. With EIP-7917, the Beacon Chain can identify the block proposers for the next epoch. Knowing in advance which validators will propose future blocks enables preconfirmation: a commitment can be made with the upcoming block proposer to guarantee that user transactions will be included in that block, without waiting for the block itself to be produced.
This feature benefits client implementation and network security because it prevents extreme situations such as validators manipulating proposer scheduling. Furthermore, the look-ahead feature reduces implementation complexity.

5. Native P-256 signatures: Ethereum directly aligns with 5 billion mobile devices. A built-in, passkey-like secp256r1 (P-256) signature checker is introduced at a fixed address. This is the native signature algorithm used by Apple, Android, FIDO2, and WebAuthn. For users, this upgrade unlocks native device signing and passkey functionality: wallets can directly use Apple's Secure Enclave, the Android Keystore, hardware security modules (HSMs), and FIDO2/WebAuthn, with no mnemonic phrase required, a smoother registration process, and a multi-factor authentication experience comparable to modern applications. The result is a better user experience, more convenient account recovery, and an account abstraction model that matches the existing capabilities of billions of devices. For developers, the precompile accepts 160 bytes of input and returns 32 bytes of output, making it easy to port existing libraries and L2 contracts. Its implementation includes point-at-infinity and modular-range checks to eliminate tricky boundary cases without breaking valid callers.

III. The Long-Term Impact of the Fusaka Upgrade on the Ethereum Ecosystem

1. Impact on L2: scaling enters its second curve. Through PeerDAS, the on-demand increase of blob counts, and a fairer data pricing mechanism, the data availability bottleneck is relieved, and Fusaka accelerates the decline in L2 costs.

2. Impact on nodes: operating costs continue to decrease. Reduced storage requirements and shorter synchronization times lower operating costs. In the long run, this keeps nodes with weaker hardware participating, helping preserve the network's decentralization.

3.
Impact on DApps: more complex on-chain logic becomes possible. More efficient mathematical opcodes and more predictable block-proposal schedules may enable high-performance AMMs, more complex derivative protocols, and fully on-chain applications.

4. Impact on ordinary users: finally, blockchain can be used like Web2. P-256 signatures mean no mnemonic phrases, phones that work as wallets, easier login, simpler recovery, and naturally integrated multi-factor authentication. This is a revolutionary change in user experience and one of the necessary conditions for bringing a billion users on-chain.

IV. Conclusion: Fusaka is a key step toward Danksharding and large-scale user adoption

Dencun ushered in the blob era (proto-danksharding via EIP-4844), Pectra optimized execution and increased blob capacity, and Fusaka takes a key step toward sustainable scaling and a mobile-first Ethereum.

TL;DR: This upgrade incorporates 12 EIPs, mainly including:

EIP-7594: Employs PeerDAS to reduce the data storage burden on nodes. This is a key foundation for expanding Ethereum's data capacity. PeerDAS builds the infrastructure needed to implement Danksharding; future upgrades are expected to increase data throughput from 375 KB/s to several MB/s. It also directly enables Layer 2 scaling, letting the network carry more data without overwhelming individual participants.

EIP-7642: Introduces history expiration to reduce the disk space required by nodes. This changes how receipts are processed, removing old data from node synchronization and saving approximately 530 GB of bandwidth during sync.

EIP-7823: Sets an upper limit for MODEXP inputs to prevent consensus vulnerabilities. Each input of the MODEXP cryptographic precompile is limited to 1024 bytes.
Previously, MODEXP had been a source of consensus vulnerabilities due to its unrestricted input length. Setting practical limits that cover all real-world usage reduces the testing surface and paves the way for a future replacement with more efficient EVM code.

EIP-7825: Introduces a per-transaction gas cap to prevent a single transaction from consuming most of the block space. The cap is 16,777,216 gas (2^24) per transaction. This ensures a fairer allocation of block space, improves network stability and resistance to DoS attacks, and makes block verification times more predictable.

EIP-7883: Increases the gas cost of the ModExp precompile to prevent potential denial-of-service attacks caused by underpricing. The minimum cost rises from 200 gas to 500 gas, and the cost doubles for large inputs exceeding 32 bytes. This ensures reasonable pricing for cryptographic precompiles and improves the network's economic sustainability.

EIP-7892: Supports on-demand elastic scaling of blob counts to adapt to evolving Layer 2 requirements. Ethereum can adjust blob parameters more frequently through a new, lightweight process, allowing smaller adjustments to blob capacity without waiting for major upgrades.

EIP-7917: Enables block preconfirmation by making proposer order predictable. Currently, validators cannot know who will propose blocks until the next epoch begins, introducing uncertainty into MEV mitigation and preconfirmation protocols.
This change pre-calculates and stores the proposer schedule for future epochs, making it deterministic and accessible to applications.

EIP-7918: Introduces a base blob fee linked to execution costs, addressing the blob fee market problem. A reserve price tied to execution costs prevents the blob fee market from collapsing to 1 wei when Layer 2 execution costs are significantly higher than blob costs. This is crucial for L2s: sustainable blob pricing reflects true costs and maintains effective price discovery as L2 usage scales up.

EIP-7934: Limits the maximum RLP-encoded execution block size to 10 MB to prevent network instability and denial-of-service attacks. Currently, blocks can be very large, which slows propagation and increases the risk of temporary forks. This limit keeps block sizes within a range the network can process and propagate efficiently, improving reliability and producing more stable confirmation times.

EIP-7935: Increases the default gas limit to 60M to expand L1 execution capacity. The proposal raises the gas limit from 36M to 60M. While this change does not strictly require a hard fork (the gas limit is a parameter chosen by validators), extensive testing is needed to ensure network stability under high computational load, so including this EIP in the hard fork ensures the work is prioritized. Allowing each block to perform more computation directly raises overall network throughput, the most direct way to extend L1 execution capacity.

EIP-7939: Adds a CLZ opcode to make on-chain computation more efficient.
This update adds a new CLZ (count leading zeros) opcode to the EVM for efficiently calculating the number of leading zero bits in a 256-bit word. This significantly reduces the gas cost of mathematical operations that require bit manipulation, benefiting DeFi protocols, gaming applications, and any contract that performs complex math.

EIP-7951: Adds a precompile for the secp256r1 curve to improve user experience. This adds support for the widely used cryptographic curve secp256r1 (also known as P-256). Ethereum natively supports only secp256k1 signatures, but many devices and systems use secp256r1. With this update, Ethereum can verify signatures from iPhones, Android phones, hardware wallets, and other systems that use this standard curve, making it easier to integrate with existing infrastructure.
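The fixed 160-byte input mentioned for the P-256 precompile can be pictured as five 32-byte words: the message hash, the signature components r and s, and the public-key coordinates. The hash || r || s || qx || qy ordering used in this sketch is an assumption for illustration; consult the EIP text for the normative layout.

```python
# Sketch: packing calldata for a P-256 (secp256r1) signature-check precompile.
# Assumed layout: msg_hash || r || s || qx || qy, five 32-byte words = 160 bytes.
def pack_p256_input(msg_hash: bytes, r: int, s: int, qx: int, qy: int) -> bytes:
    """Concatenate the hash and four big-endian 32-byte integers."""
    if len(msg_hash) != 32:
        raise ValueError("message hash must be 32 bytes")
    return msg_hash + b"".join(v.to_bytes(32, "big") for v in (r, s, qx, qy))

calldata = pack_p256_input(b"\x11" * 32, 1, 2, 3, 4)
print(len(calldata))  # 160, matching the precompile's fixed input size
```

On success the precompile returns a single 32-byte word, which is why the article describes it as "160 bytes in, 32 bytes out."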

Author: PANews
Does Hyperliquid's popularity mean Arbitrum is "winning by default"?

Recently, the Hyperliquid HIP-3 protocol has become incredibly popular, with stocks, gold, and even Pokémon cards and CS skins now available for trading. This has made Hyperliquid incredibly successful, but many people have overlooked the fact that Arbitrum's liquidity has also surged alongside it. Is it true that the more popular Hyperliquid becomes, the more Arbitrum can "quietly make a fortune"? Why is that?

1) A fundamental fact is that most of the USDC held by Hyperliquid is bridged from Arbitrum. Whenever Hyperliquid launches a TSLA stock contract or a gold perp, a massive amount of USDC flows in from Arbitrum. This connection is not incidental but a structural dependency. These bridging activities contribute directly to Arbitrum's daily transaction volume and ecosystem activity, helping Arbitrum maintain its leading position among Layer 2s.

2) Of course, some might say that Arbitrum is merely a stepping stone for Hyperliquid's funding, a one-way street where funds simply pass through. Then why doesn't Hyperliquid choose Solana or Base, but instead deeply integrate with Arbitrum? The reasons are as follows:

1. Lowest technical adaptation cost: Hyperliquid needs a liquidity entry point with good EVM compatibility to securely accept stablecoins, while Arbitrum's Nitro architecture keeps bridging latency within one minute and gas fees below $0.01, so users barely feel the friction.

2. Unparalleled liquidity depth: Arbitrum's native USDC circulating supply reaches $8.06 billion, the highest among all Layer 2 platforms. Furthermore, Arbitrum has mature protocols like GMX and Gains that form a complete loop encompassing lending, trading, derivatives, and yield aggregation. Essentially, Hyperliquid's choice of Arbitrum is not merely about a bridging channel but about accessing a mature liquidity network.

3.
The synergistic effect of the ecosystem is irreplaceable: some of the new stock perps, gold perps, and even government bond tokens launched under HIP-3 already existed on Arbitrum as RWA assets, used for lending and farming through DeFi protocols such as Morpho, Pendle, and Euler. This lets users stake RWA assets as collateral on Arbitrum to borrow USDC, then bridge to Hyperliquid to trade stock perps with 5x or even 10x leverage. This isn't just a one-way transfer of funds; it's cross-ecosystem liquidity aggregation.

3) In my view, the relationship between Hyperliquid and Arbitrum is not a simple liquidity "parasitic relationship," but rather a strategic complementarity. Hyperliquid, as a perp DEX application chain, continuously stimulates trading activity, while Arbitrum provides a continuous influx of liquidity. For Arbitrum, phenomenal applications like Hyperliquid also help overcome the lack of product dynamism in the Ethereum ecosystem.

This reminds me of when Arbitrum was promoting its Orbit Layer 3 framework, whose main selling point was the "general Layer 2 + specialized application chain" approach. Orbit allowed any team to quickly deploy its own Layer 3 application chain, enjoying Arbitrum's security and liquidity while customizing performance parameters to business needs. Hyperliquid instead chose to build its own Layer 1 and bind deeply with Arbitrum, which looks different from deploying a Layer 3 directly. Yet a closer look at the relationship between the HIP-3 ecosystem and Arbitrum yields an interesting conclusion: the HIP-3 ecosystem has, to some extent, become a de facto Layer 3 application chain of Arbitrum. Ultimately, the core logic of Layer 3 is to keep its own performance advantages while outsourcing security and liquidity to a Layer 2. Clearly, Hyperliquid cannot currently provide the liquidity the HIP-3 ecosystem needs on its own, but Arbitrum can. Isn't this just a variant of the Layer 3 operating model?

Author: PANews
Layer-2 Expansion: CryptoProcessing by CoinsPaid Integrates Base and Arbitrum

CoinsPaid adds Arbitrum and Base, enabling faster, cheaper ETH and USDC payments for merchants and strengthening scalable, secure crypto transactions.

Author: Blockchainreporter
CryptoProcessing by CoinsPaid Expands Layer-2 Payment Rails with Arbitrum and Base

CryptoProcessing by CoinsPaid, one of the world's leading crypto payment gateways, has integrated Arbitrum and Base, two of the most advanced Layer 2 blockchains, to bring faster, cheaper, and smoother transactions to its users. The integration adds support for ETH (Ethereum) and USDC (USD Coin) on both networks, giving merchants access to instant payments with […]

Author: Techbullion
How I Built a Generative Manufacturing Engine That Actually Obeys Physics

Large Language Models are great at poetry and terrible at engineering. If you ask GPT-4 to design a machine, it will hallucinate a bolt that doesn't exist or a battery that explodes. To fix this, I built OpenForge—a multi-agent system that autonomously sources real components, reads their datasheets via computer vision, and validates them against deterministic physics engines. While I used this architecture to build drones, the pattern solves the fundamental bottleneck in automating hardware engineering across any industry.

Author: Hackernoon