Blockchain Scaling Solutions & Trade-offs

Depending on your requirements, you’ll probably prefer certain scaling solutions to others, but all come with their own unique trade-offs.

Blockchain technology is often characterised as slow, inefficient, and inconsistent. Many would argue that this is the price of preserving the highest levels of security whilst maintaining a peer-to-peer, decentralised network. The question is, can this be improved to give users the best of both worlds: a fast, efficient blockchain that is also a secure, decentralised, peer-to-peer network?

Let’s explore blockchain scaling and the methods used to increase transaction volume and speed whilst decreasing congestion and cost. We’ll also discuss the trade-offs introduced by the various solutions. Understanding the long-term implications will also help you evaluate the path a project is likely to follow and the overall outcomes.

To my knowledge, the first time blockchain technology had to deal with a transaction scaling issue was during the 2017 block size wars, which resulted in the “Bitcoin Cash” hard fork. The reason for this hard fork was to introduce larger blocks, so Bitcoin could process more transactions per second. This hard fork would have taken the original block size from 1 MB to 4 MB per block, with the likelihood of further increases in the future as needed. The opposition to this hard fork highlighted the blockchain bloat that increasing the block size would cause. Fast-forward to today, and we can see this effect playing out in many blockchains, including Ethereum and Solana. Bitcoin (BTC) also hasn’t completely avoided this problem.

Chain bloat calculation (assuming one block every ten minutes, i.e. six blocks per hour):

  • 1 MB per block increases to 4 MB
  • 6 MB per hour becomes 24 MB per hour
  • 144 MB per day becomes 576 MB per day
  • 1,008 MB per week becomes 4,032 MB per week
  • 52,560 MB (≈52.6 GB) per year becomes 210,240 MB (≈210 GB) per year
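
As a rough sketch of the arithmetic above in Python, with the block size, block interval, and time periods taken from this example rather than from any specific chain:

```python
# Rough chain-bloat estimate: how much data full nodes must download,
# verify, and store per period for a given block size, assuming one
# block every ten minutes (six blocks per hour).

def chain_bloat_mb(block_size_mb: float, blocks_per_hour: int = 6) -> dict:
    """Approximate on-chain growth for common time periods, in MB."""
    per_hour = block_size_mb * blocks_per_hour
    return {
        "per hour": per_hour,
        "per day": per_hour * 24,
        "per week": per_hour * 24 * 7,
        "per year": per_hour * 24 * 365,
    }

for size_mb in (1, 4):  # 1 MB blocks vs. the proposed 4 MB blocks
    print(f"{size_mb} MB blocks:", chain_bloat_mb(size_mb))
```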

Transaction scaling — volume, speed & cost

The desirable features blockchain projects try to include in their design can be categorised into three areas - volume, speed, and low fees:

In terms of transaction volume, the holy grail is to match the 100,000 transactions per second commonly attributed to VISA.

Transaction speed is how fast transactions can fully settle. One of the major downsides of blockchains like Decred and Bitcoin is how long this takes. The aim in this area of development is to get transactions to fully settle in under one second.

And finally, transaction fees. As a blockchain increases in popularity, it will start to witness block congestion. This in turn pushes transaction fees up as people increase the amount they are willing to pay to get their transaction into the next block. The aim in this area is to maintain a transaction fee that is less than a penny; the current benchmark is a single atom or satoshi (0.00000001 DCR).
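
To make the fee benchmark concrete, here is a minimal sketch that converts a fee quoted in atoms or satoshis into fiat; the coin price used is a hypothetical placeholder, not market data:

```python
# Check the "under a penny" fee benchmark. The coin price and fee below
# are hypothetical placeholders chosen purely for illustration.

ATOMS_PER_COIN = 100_000_000  # 1 DCR (or 1 BTC) = 100,000,000 atoms/satoshis

def fee_in_usd(fee_atoms: int, coin_price_usd: float) -> float:
    """Convert a fee expressed in atoms/satoshis into US dollars."""
    return fee_atoms / ATOMS_PER_COIN * coin_price_usd

fee_atoms = 1           # the benchmark: a single atom per transaction
coin_price_usd = 20.0   # assumed coin price, purely for illustration

usd = fee_in_usd(fee_atoms, coin_price_usd)
print(f"{fee_atoms} atom at ${coin_price_usd:.2f}/coin = ${usd:.10f}")
print("Meets the sub-penny benchmark:", usd < 0.01)
```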

The transaction scaling trade-offs

As scaling solutions are introduced, they typically run into one or more unavoidable trade-offs. These include centralisation, lower security, blockchain bloat, inefficiency, and reduced reliability. Let’s look at how these trade-offs are introduced:

Centralisation
Reduced decentralisation is typically caused by the majority of users being unable to run the validating software and hardware needed. The lower the participation in this area, the more likely it is that this infrastructure is being run in data centres, which is generally incredibly bad for the security, privacy, and peer-to-peer nature of a blockchain.

Security
Security issues typically arise through greater complexity or through techniques used to circumvent on-chain consensus mechanisms; three examples of this are sharding, pruning, and bridging. Security can also be lowered as projects increase their reliance on trusted third parties or data centres.

Blockchain bloat
Blockchain bloat is mostly unavoidable when a project places all transactions on-chain whilst aiming for the 100,000 TPS target. This is also the number one reason a blockchain ceases to be a peer-to-peer network.
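
To see why, here is a back-of-the-envelope sketch of the data an all-on-chain design would produce at the 100,000 TPS target, assuming the ~215-byte transaction size mentioned later in this article as an average:

```python
# Back-of-the-envelope: on-chain data produced when every transaction is
# stored on-chain at the 100,000 TPS target, assuming ~215 bytes per
# transaction (the approximate minimum size quoted later in this article).

TX_SIZE_BYTES = 215
TPS_TARGET = 100_000
SECONDS_PER_DAY = 86_400

bytes_per_day = TX_SIZE_BYTES * TPS_TARGET * SECONDS_PER_DAY
tb_per_day = bytes_per_day / 1_000_000_000_000
print(f"~{tb_per_day:.2f} TB of transaction data per day")
print(f"~{tb_per_day * 365:,.0f} TB per year")
```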

Efficiency
Efficiency is about optimising the data that is put on-chain and the way that data is processed and stored. One practice we’re currently seeing a lot of is arbitrary data being stamped onto blockchains like Bitcoin and Litecoin, for example images, audio, and other reference data. All of this adds to the inefficiency of a network.

Reliability
The more transactions a blockchain network processes, the harder it becomes for full nodes to synchronise and stay in sync. The reliability factor is how hard it is for an individual full node to maintain the chain whilst validating all incoming transactions and blockchain data.

Why do these problems and trade-offs exist?

There are many reasons why these trade-offs exist, but the first thing to recognise is that everything that happens on-chain has a data weight and a physical storage and creation cost. To maintain a peer-to-peer network, the majority of participants need to be able to take part with low processing and storage costs. We must get it out of our heads that everything is free or that somebody else will do the work for us. There is always a price to pay, and blockchain is no different. At some point in the future, individuals who want to remain sovereign will need to bear these expenses.

Producing data for a blockchain creates a footprint that will live on-chain, sometimes indefinitely. For example, an empty block has a data weight, as does each and every transaction: an empty block will generally be larger than 194 bytes, and an individual transaction will typically be 215 bytes or larger.

Once you understand this, you should be able to make some simple calculations regarding things like blockchain bloat and its effects on centralisation.

For example: if a blockchain produces a new block every second and only produces empty blocks, this will produce approximately 6 GB of data per year, which full node operators have to process, verify, and store long into the future.

Calculation:

  • 1 block per second (194 B) x 60 seconds = 11,640 B per minute (0.01164 MB)
  • 11,640 B × 60 minutes = 698,400 B per hour (0.6984 MB)
  • 698,400 B × 24 hours = 16,761,600 B per day (16.7616 MB)
  • 16.7616 MB × 365 days = 6,118 MB per year (≈6.12 GB)
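
The same arithmetic as a short Python sketch, using the ~194-byte empty-block figure from this example:

```python
# Yearly storage burden for a chain producing one empty (~194 B) block per
# second; the byte figure is the approximation used in this article.

EMPTY_BLOCK_BYTES = 194
BLOCKS_PER_SECOND = 1
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

bytes_per_year = EMPTY_BLOCK_BYTES * BLOCKS_PER_SECOND * SECONDS_PER_YEAR
print(f"{bytes_per_year / 1_000_000:,.0f} MB per year "
      f"(~{bytes_per_year / 1_000_000_000:.2f} GB)")
```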

Scaling solutions

It’s actually very impressive how many projects are trying to solve the transaction scaling problem. Some of the most popular scaling solutions include:

  • Increasing block size and reducing time between blocks
  • Pruning and sharding mechanisms for on-chain scaling
  • Layer 2 and side-chain scaling solutions
  • The Lightning Network

Depending on your requirements, you’ll probably prefer certain solutions to others. I’m currently leaning towards the Lightning Network as my preferred scaling option, as it keeps the majority of transactions off-chain. It also fulfils the requirements of cheap and fast settlement. In terms of trade-offs, it should be noted that the Lightning Network introduces more complexity and lower security than can be assured with on-chain transactions. For this reason, the recommendation is to use the Lightning Network for small amounts, including in-person transactions like buying a cup of coffee or tipping someone for a service they’ve provided. As a final note, the Lightning Network has a bundle of performance features that, taken together, aren’t found in any of the other solutions I’ve researched, including:

  • Extremely low blockchain bloat
  • The ability to process greater than 100,000 transactions per second
  • Settlement times below 1 second
  • Low-cost transaction fees of 1 atom or satoshi per transaction (0.00000001 DCR)