Over the past couple of months there have been a number of discussions revolving around increasing the Bitcoin block size from its current 1 MB limit to 20 MB. One such plan is Gavin Andresen’s proposal (this is not to single him out, as there are others with similar proposals). The code change itself is trivial: the limit can be changed to an arbitrary number in a couple of keystrokes (for instance, see Vitalik Buterin discuss this at 14:15).
However, getting the majority of validating nodes, miners and the rest of the ecosystem on-board in a timely fashion is a very non-trivial matter.
Recall that, as illustrated by Organ of Corti and Dave Hudson, the average block size has increased over the past year to the point where the network will likely max out at around 3 transactions per second with the current 1 MB limit. Since many of the investors, developers and entrepreneurs in this space would like to make Bitcoin ‘competitive’ with other payment platforms such as Visa, in their view this number eventually needs to increase by several orders of magnitude.
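As a rough illustration of where that ceiling comes from, the sketch below assumes an average transaction size of about 500 bytes; that figure is an assumption for illustration, not one taken from the analyses cited above:

```python
# Back-of-envelope throughput ceiling for 1 MB blocks.
# Assumes an average transaction size of ~500 bytes (illustrative only).
BLOCK_SIZE_BYTES = 1_000_000      # 1 MB block size limit
AVG_TX_SIZE_BYTES = 500           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # target block interval (10 minutes)

txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_INTERVAL_SECONDS
print(f"{txs_per_block:.0f} txs per block, ~{tps:.1f} tx/s")
# -> 2000 txs per block, ~3.3 tx/s
```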
Fundamentally there are two trade-offs in block size economics:
- Keeping a 1 MB block size requires higher fees to end-users but results in a more decentralized network
- With a larger, 20 MB block size, fees are (temporarily) subsidized to end-users but with fewer validating nodes on the network
A quick explanation of both:
- Retaining a 1 MB block size ultimately results in higher transaction fees because block space is scarce and miners will only process and include transactions based on market-based prioritization (e.g., pay more to be included faster). This would likely mean the end of certain types of transactions, such as “long chain” transactions and fee-less transactions, which have disproportionately increased the size of the blockchain over the past six months relative to actual commerce. At the same time, this design decision would have the effect of retaining some nominal decentralization: the growth in blockchain size would remain relatively linear, so the blockchain could continue to be validated by several thousand nodes, as it is today, without (much) additional cost.
In early March 2014 there were approximately 10,000 nodes; over the past year, however, that number has declined by roughly one third. What does this distribution of roughly 6,400 current nodes look like?
Recall that the original value proposition of the Bitcoin blockchain was its decentralized character: the more miners and validating nodes that are geographically distributed, the less prone the network is to single points of failure. Furthermore, while many people call the various artifacts that have increased the blockchain’s size “bloat,” it is imprecise to do so because the blockchain is a public good and no one owns it (e.g., one man’s 80-byte “trash” OP_RETURN is another man’s data-storing “treasure”).
Whether consumers are sensitive to this change in fees is another matter; if demand is elastic, they may simply switch to substitute goods (e.g., competing chains and ledgers). What does this mean exactly?
- An increase to a 20 MB block size would likely continue the same “low” fee (donation) structure practiced and promoted today, as there is purportedly more room for non-priority transactions. The known challenge, however, is that if 20 MB blocks became “filled,” this would require a corresponding increase in bandwidth and disk space, imposing additional costs on validating nodes, which already operate as public goods. That is to say, a blockchain that increased in size by 20 MB every 10 minutes would grow by over 1 terabyte a year (a back-of-envelope sketch follows below), which would create additional costs for participants, likely reducing the number of validating nodes and therefore the decentralization of the network.
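The back-of-envelope arithmetic behind the 1 terabyte figure, assuming (purely for illustration) that every block is a full 20 MB:

```python
# Rough annual storage growth if every block were a full 20 MB.
# Purely illustrative; actual blocks would rarely all be full.
BLOCK_SIZE_MB = 20
BLOCKS_PER_HOUR = 6               # one block every ~10 minutes
HOURS_PER_YEAR = 24 * 365

annual_growth_mb = BLOCK_SIZE_MB * BLOCKS_PER_HOUR * HOURS_PER_YEAR
print(f"~{annual_growth_mb / 1_000_000:.2f} TB per year")
# -> ~1.05 TB per year
```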
The other challenge to Andresen’s plan is that, because transaction prioritization would still not be driven by fees paid to miners, it would continue the status quo in which miners largely rely on seigniorage to operate. This is an unhealthy trend, as it stalls the transition from block rewards to fees that has been part of the narrative since day one, October 31, 2008 (see section 6).
It is difficult to predict what exactly will happen, as the key actors in this space are still deciding where to spend their social capital.
Gavin Andresen, as recently as two weeks ago, stated that most of the large payment processors, exchanges and other service companies are on board with his plan (see also David Davout’s recent dialogue with Andresen). Furthermore, others in the community have (likely erroneously) found a correlation between market cap and transaction volume, yet as we know, correlation does not imply causation. Similarly, ‘Death and Taxes’ recently presented a narrative reinforcing Andresen’s view yet for some reason glossed over the all-important miners’ perspective. Others, such as the ideological wing personified by Mircea Popescu, claim that they will fight this effort with an actual attack.
Irrespective of what size the block limit is increased to, the change will likely create at least a temporary fork, as validating nodes need to upgrade and are not compensated for storage and traffic (Andresen’s plan is to “future proof” the protocol such that the 20 MB change is included in a patch this year but isn’t “turned on” until needed later). There is at least one open question: what is the minimum number of full nodes required for the network to operate within the current trust/security model? Unlike miners, their value to the system is hard to measure.
What the experts say
While the field is young, one expert in this space is Jonathan Levin, who modeled network propagation in his master’s thesis. I reached out to him and in his view:
I think that the 20mb proposal is untenable given the current way that blocks are propagated around the Bitcoin network. The Bitcoin network and specifically the Bitcoin miners use a gossip network to relay blocks to each other. That means that as the size of the block increases, the time that it takes to spread around the network also increases linearly. We have seen this first in the work of Decker and Wattenhofer as well as my own work.
The problem is that the increased time that blocks take to propagate around the network increases the probability of orphan races between different mining pools. If you create blocks that are 20mb and a competing pool is creating blocks under 1mb or even empty ones, they have a higher expected return per hash. This is because you would expect your blocks to lose out to smaller blocks in an orphan race if both are found in quick succession. Now we can argue that miners will continue to create large blocks out of altruism but if we continue to increase the size of the blocks without greater utilisation of better block relaying protocols we risk breaking this equilibrium and miners resorting to nasty strategies like creating empty blocks which suit no one.
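To make Levin’s point concrete, a minimal sketch of the standard model: block discovery is roughly a Poisson process, so the chance that a competitor appears while your block is still propagating grows with propagation delay. The per-megabyte delay used here is an illustrative assumption, not a measurement:

```python
import math

# Probability that a competing block appears while ours is still
# propagating, following a simple Poisson model of block discovery.
BLOCK_INTERVAL = 600.0            # average seconds between blocks
DELAY_PER_MB = 2.0                # assumed propagation delay per MB (seconds)

def orphan_risk(block_size_mb: float) -> float:
    """Chance another block is found during our block's propagation."""
    delay = block_size_mb * DELAY_PER_MB
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL)

for size in (1, 5, 20):
    print(f"{size:>2} MB block -> ~{orphan_risk(size):.1%} orphan-race risk")
# 1 MB -> ~0.3%, 5 MB -> ~1.7%, 20 MB -> ~6.5% under these assumptions
```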
I also spoke with several other professionals in this space.
For instance, I spoke with Atif Nazir, co-founder of Block.io and an instructor at Blockchain University. According to him:
On the one hand, increasing block sizes, as you say, may result in lower transaction fee requirements. However, if the transaction fees actually are lowered by, say, 1000x what they are now (0.00001 is the minimum accepted by the reference client), this will lower the cost of “institutional attacks” on the Bitcoin infrastructure, where an attacker can push 1000 transactions for an erstwhile cost of 1. The attack will basically be “make infrastructure expensive to run for the average joe, drive them towards centralized infrastructure services that run APIs, Blockchain Explorers, etc.” It is good for business, bad for the decentralization of the network in the near term.
We’ve seen something like this occur on the Dogecoin Network in the past few months, where one user or a group of individuals were pushing transactions with 0 transaction fees. These transactions were accepted as valid by the Dogecoin reference clients, and as a result, caused bandwidth consumption hikes for the dorm-room nodes, which populate most of the current network(s). The resulting change by the Dogecoin Core team was to add a fee of 1.0 DOGE for every transaction, which isn’t yet mandatory, but is on its way there. The dorm-room nodes, however, are already on the decline in both Bitcoin and Dogecoin due to the increasing size of the Blockchain, and the bandwidth consumed by them.
Increasing the Block sizes sounds like a good idea for the number of transactions flowing on the network, but in the near term it will drive a lot of the nodes out of the system because of CPU/bandwidth/disk IO hikes. Increasing the Block sizes will definitely increase infrastructure costs, driving more users towards centralized places that can afford to host API services for the Blockchain. However, given this crunch on the average joe Bitcoin nodes, this will lead to a more concentrated effort towards “pick what you need” style nodes (say, SPV). Again, in the near term, the number of “full nodes” on the network will dwindle, but as more companies come into the ecosystem, this number will inevitably rise.
Bitcoin as a whole is headed towards a network where most nodes don’t actually host the entire Blockchain — increasing the block size will only accelerate this change. This will lead to more innovative solutions, and who knows, we might find a way for nodes to communicate cost-effectively rather than the current “gossip”-style protocol we use, where you inform all your peers when you hear about a new transaction. The community can be very dynamic, and I think the longer term outlook for the network looks good regardless. Bitcoin is powered by nerds like you and I, and we tend to find solutions where others walk away.
Nazir raises an interesting point in terms of a hypothetical time horizon for when a transition (between short term and long term) could take place.
Another individual who has done a lot of modeling of incentives, mining and block sizes is Dave Hudson, a software developer who also writes at HashingIt. According to him:
Changes to the distributed consensus software within Bitcoin raise really interesting questions about the evolution of cryptocurrencies and how truly decentralised they really are. With each change we’re actually seeing something interesting happen where the ongoing participants in the system all effectively agree to move to a new system: BTC becomes BTC’ becomes BTC”, etc. We might be calling BTC” Bitcoin but any legacy nodes running BTC’ or BTC also think they’re Bitcoin too. At some point in time something happens and the various systems start to disagree about what is or isn’t valid and those could be very subtle. Imagine for example that BTC” introduced a subtle change that inadvertently made some of Satoshi’s coins unspendable; nobody might ever know until someone with Satoshi’s keys tries to spend their Bitcoins. Arguably it might already have happened as the result of some random compiler bug (not a fault in the Bitcoin-core code, but a bug in the way that’s transformed into something that runs on the node CPUs).
Clearly the Bitcoin-core developers try very hard to ensure that this sort of thing doesn’t happen by accident, but in order to sustain all participants holdings within the system they really do have to try to ensure that every node moves from BTC to BTC’ to BTC”, etc. In order to do this they essentially have to persuade everyone to migrate to each new version within some specific time window.
Now let’s imagine for a moment that instead of miners all tending to mine through centralised infrastructure (mining pools), that we really did have true decentralisation and had hundreds of thousands, or millions, of nodes that all did their own transaction selection and mining. Perhaps they’re even embedded into things that their users didn’t even realise were contributing to mining. At this scale it would probably be almost impossible to get them all to move to adopt a planned fork. We would either see the protocol totally stagnate or else we would see potentially very significant forks occurring.
In practice the system holds together in a cohesive way because, in the absence of a precise protocol spec, the core devs try to ensure that everyone uses the same consensus-critical software, runs it on the same sorts of hardware that all do things the same way and with some reasonably consistent set of capabilities.
It seems a slight irony that one of the key factors in the successful maintaining and sustaining of the Bitcoin network is continual centralised action, and that things aren’t actually massively decentralised.
This last point is intriguing: a lot of the software in this space is still relatively homogeneous, and if a network were to scale to become as distributed (or decentralized) as is hoped while simultaneously incorporating many nodes and clients, then the diversity of developer tools and clients (or the lack thereof) could help prevent attacks or, conversely, invite them (e.g., if every actor in the ecosystem uses the same client, that could create a vulnerability for the network).
In an exchange with Peter Todd, a contributor and developer on Bitcoin core and other related protocols (such as ClearingHouse), he framed the issue:
At the recent O’Reilly Media conference I basically pointed out that because this is an externality / tragedy-of-the-commons problem, we may have to see Bitcoin fail due to a blocksize increase first before the community actually groks the issue. Personally I’m inclined not to oppose a blocksize increase on these grounds – Bitcoin failing cleanly is probably good for my interests.
In terms of “getting people on board” – to a degree you inherently can’t do this, because a blocksize increase will inherently exclude people from the system. See for example the discussion between Greg Maxwell and Gavin Andresen several weeks ago on the #bitcoin-dev IRC channel.
I spoke with Robert Sams, co-founder of a fintech startup who has previously written analysis covering the marginal costs of Bitcoin-like systems. In his view:
Levin’s point about network propagation is key: mining a larger block has a lower expected return because of the increased probability of losing out to a smaller block in an orphan race.
Now all of what you argue is a totally sound economic conjecture based on the assumption of distributed mining economics. Miners include tx until the marginal cost of tx inclusion (opportunity cost of including a different tx when up against the block limit + block propagation effect) equals marginal revenue (the fee).
However, for me the crucial economic force here is what happens to fees under concentrated mining. The logic changes from the marginal-cost-equals-marginal-revenue logic in the above distributed case to a more strategic, oligopolistic pricing dynamic. What I mean is this. In the distributed case, whether or not a given miner includes a given tx has no material effect on the expected confirmation time for the tx sender. But in the concentrated mining scenario it does. If some pool is 35% of the network, the decision by that pool to not include the tx will materially increase the confirmation time of that transaction. So miners can extract more of the value that tx senders place on fast confirmation times by setting their own minimum fee threshold, knowing that this threshold will over time affect the fees that tx senders include. What that optimal threshold is depends upon how much senders are willing to pay for faster tx confirmation times. Who knows what that is, but the implication is clear: under concentrated mining, fee levels will start to reflect more what tx senders are willing to pay rather than the cost to miners of including them.
So when you cast the blocksize issue in this concentrated mining context, it’s really not clear what will happen. My bets are that fees will go up and we won’t have to worry about blocksizes because higher fees will act as a brake on adoption.
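A simple way to see Sams’ confirmation-time point: if one pool with a given hashrate share refuses to include a transaction, the expected wait until some other pool mines a block follows a geometric distribution. A minimal sketch, ignoring mempool backlog and other pools’ policies:

```python
# How much a single large pool's exclusion policy can stretch expected
# confirmation time: a geometric-waiting-time sketch.
BLOCK_INTERVAL_MIN = 10.0

def expected_confirmation_minutes(excluding_share: float) -> float:
    """Expected wait until a block from a pool that will include the tx."""
    return BLOCK_INTERVAL_MIN / (1.0 - excluding_share)

for share in (0.0, 0.10, 0.35, 0.50):
    print(f"excluding pool share {share:.0%}: "
          f"~{expected_confirmation_minutes(share):.1f} min to confirm")
# 0% -> 10.0 min, 10% -> 11.1 min, 35% -> 15.4 min, 50% -> 20.0 min
```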
If block sizes are increased, we will learn a lot about the dynamics of the community, the role incentives such as fees and seigniorage play in on-boarding (and off-boarding) miners, and how price-sensitive users in this space are.
Ultimately it is the miners who decide, as they are the entities creating Sybil protection and preventing double-spend attacks (or, in some cases, providing that service). Or as Raffael Danielli, a quantitative research analyst at ING, explained:
In theory, fee rewards should incentivize miners to include as many transactions as possible. In reality, though, fee rewards are a tiny percentage of block rewards and the risk-reward ratio simply doesn’t add up at the moment (risking an (almost) sure 25 BTC payoff for a potential, say, 25.1 BTC). What are the rational incentives for miners to upgrade and actually fill 20mb blocks? At the moment there are none that I am aware of. If there are no incentives for miners then this is not going to happen. Period. There is no altruism when it comes to mining and anyone who bets on it is in for a rude awakening.
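Danielli’s risk-reward point can be framed as an expected-value comparison; all of the figures below are illustrative assumptions rather than measurements:

```python
# Danielli's risk-reward point as an expected-value comparison.
BLOCK_REWARD = 25.0               # BTC subsidy at the time of writing
FEES_IN_FULL_BLOCK = 0.1          # assumed extra fees from filling the block
ORPHAN_RISK_SMALL = 0.005         # assumed orphan risk for a near-empty block
ORPHAN_RISK_FULL = 0.02           # assumed orphan risk for a large block

ev_small = BLOCK_REWARD * (1 - ORPHAN_RISK_SMALL)
ev_full = (BLOCK_REWARD + FEES_IN_FULL_BLOCK) * (1 - ORPHAN_RISK_FULL)
print(f"near-empty block EV: {ev_small:.3f} BTC")
print(f"full block EV:       {ev_full:.3f} BTC")
# With these assumptions, the extra 0.1 BTC in fees does not cover the
# additional orphan risk (24.875 vs 24.598 BTC).
```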
But this crosses over into the new field of cryptoeconomics which is a topic for another day.
[Thanks to Anton Bolotinksy for his thoughts on measuring the value of nodes within the system.]
Update from Organ of Corti:
I would add that there is a downward pressure on block size for block makers. I’ve done some research with Nadi Sarrer that proves the larger the block, the longer propagation takes. Even if a pool uses the relay network, increased latency also increases the chance of a pool losing an orphan race.
So block makers have to decide how to maximise fees while at the same time minimising block size. Some, like Discus Fish (f2pool), have tested both minimum block size (only including the coinbase tx) and maximum block size, and lately seem comfortable producing maximum-sized blocks each time. (They also seem to have a ‘pay for tx inclusion’ scheme here, but I don’t know much about it.)
I think eventually pools will aim to use a decision making algorithm to:
a) Pick a block size they think will make losing an orphan race less likely.
b) Include all available high fee density (fee/kb) transactions in the block.
c) Then include high-fee transactions.
d) Give any leftover space to low- and zero-fee txs.
With more data, this sort of process could be optimised to calculate the expected value of a block including the probability of losing orphan races. This would only lead to larger blocks when the value of the included txs outweighed the losses due to orphan races in the long term.
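As a rough illustration of the greedy, fee-density-first selection described above, here is a minimal sketch; the Tx type, the fee and size figures, and the size target are all hypothetical, and it omits the orphan-race expected-value calculation a real pool would layer on top:

```python
# Minimal sketch of fee-density-first block construction under a size
# target chosen to bound orphan risk. All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    size_bytes: int
    fee: float                     # in BTC

def build_block(mempool: list[Tx], target_size_bytes: int) -> list[Tx]:
    """Greedy fill by fee density (fee per byte), highest first."""
    selected, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee / t.size_bytes, reverse=True):
        if used + tx.size_bytes <= target_size_bytes:
            selected.append(tx)
            used += tx.size_bytes
    return selected

mempool = [
    Tx("a", 250, 0.0001), Tx("b", 500, 0.0001), Tx("c", 400, 0.0), Tx("d", 300, 0.0005),
]
block = build_block(mempool, target_size_bytes=1_100)
print([tx.txid for tx in block])   # -> ['d', 'a', 'b']; zero-fee 'c' is dropped
```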
Of course, if all block makers had the same sized blocks, this would not be an issue. But if a block maker can win an orphan race by the expedient of having a smaller block, then they will.
Some open questions for the community: How will fewer network nodes affect orphan races? If the blocks are solved many seconds apart, I would think that fewer network nodes will mean fewer orphan races since the time for a block to propagate to most of the network will reduce significantly. However, if the blocks are solved at the same time, an orphan race might be more likely since the paths taken by the blocks propagating will have less effect on the overall propagation time. Which do you think is more likely?
In summary: If block makers are rational actors and the risk of losing orphan races is a significant downward pressure on block size, I don’t think increasing the available block space will have a significant effect on actual block size. There’s a lot of room for improvement in the tx inclusion algorithms used by most pools, and if I was a block maker I would increase the fee density of blocks and include far fewer low-fee and fee-free txs.
Great analysis of this important topic, thanks Tim!
I think my preference is that we focus on ‘fixing’ bitcoin by working on side-chain solutions. I would rather that the main blockchain be reserved for transfers of ‘substantial’ value, while all micro-transactions happen on side-chains. I don’t know how realistic this is, but it seems a lower risk solution that solves the problem at hand.
Assuming hard drive capacity and bandwidth continue on their exponentially improving trajectories, the block size should logically rise, shouldn’t it?
Stated differently, the future should allow us to buy more decentralisation per dollar (assuming performance is held constant). This should hold true for Hyperledger as well as Bitcoin.
I think the more interesting question is the magnitude of the blocksize increase. What rate should be assumed?
There does not seem to be a model that tries to answer the “Why 20 MB?” question really well. Or is there?
For those investors focused solely on price, things will get worse before they get better: the ‘hard fork missile crisis’ represents an existential threat to the growth and adoption of Bitcoin as a protocol.
Distribution of platform versions is something that is very difficult to forecast. Even Google hasn’t been able to solve the problem of ensuring that the network is running the latest and greatest versions of Android[1].
It will be very interesting to see what the community feels is the best approach to move forward.
[1] Platform Versions Dashboards – https://developer.android.com/about/dashboards/index.html#Platform
To me, the eventuality of a Netflix model where full Bitcoin nodes reside in data centers seems most likely.
“The other challenge to Andresen’s plan is that, because transaction prioritization would still not be driven by fees paid to miners, it would continue the status quo in which miners largely rely on seigniorage to operate. This is an unhealthy trend, as it stalls the transition from block rewards to fees that has been part of the narrative since day one, October 31, 2008 (see section 6).”
The transition from reliance on seigniorage to reliance on fees will happen automatically as the block subsidy is halved every four years. In the meantime, it’s important that Bitcoin does what it can to grow its economy, which will happen much more slowly, if at all, if it is stuck at an almost comically low 3 transactions per second limit.
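For reference, the subsidy schedule the commenter is referring to looks like this; a simplified sketch using floating point rather than the integer-satoshi arithmetic the actual client uses:

```python
# Block subsidy schedule: halves every 210,000 blocks (roughly four years).
INITIAL_SUBSIDY = 50.0            # BTC
BLOCKS_PER_HALVING = 210_000

def subsidy_at_height(height: int) -> float:
    """Approximate subsidy in BTC for a block at the given height."""
    return INITIAL_SUBSIDY / (2 ** (height // BLOCKS_PER_HALVING))

for halving in range(5):
    height = halving * BLOCKS_PER_HALVING
    print(f"from block {height:>7}: {subsidy_at_height(height):>6.3f} BTC")
# 50 -> 25 -> 12.5 -> 6.25 -> 3.125 BTC, so fees must eventually dominate
```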
“Similarly, ‘Death and Taxes’ recently presented a narrative reinforcing Andresen’s view yet for some reason glossed over the all-important miners’ perspective.”
DeathAndTaxes dedicated a section of his post to the fees issue, noting that larger blocks are necessary for more fees. I will quote the section:
“On a transaction fee basis.
Currently the cost of the network is roughly $300 million annually. The users of the network are collectively purchasing $300 mil worth of security each year. If users paid $400 million the network would be more secure and if they paid $200 million it would be less secure. Today the majority of this cost is paid indirectly (or subsidized) through the creation of new coins but it is important to keep in mind the total unsubsidized security cost. At 2 tps, the unsubsidized cost per transaction would be about $5. At 100 tps it would be $0.05. If Bitcoin was widely adopted, more users purchasing more coins should mean a higher exchange rate and thus the value of potential attacks also rises. The future cost of the network will need to rise to ensure that attacks are not economical and non-economic attacks are prohibitively expensive relative to the benefit for the attacker. It may not rise linearly but it will need to rise. If someday one Bitcoin is worth $10,000 and we are still only spending $300 million a year on security we probably are going to have a problem. Now advocates of keeping the limit may argue that the majority of the network cost won’t be paid by fees for many years but the reality is that with the limit on potential transactions there are only two other ways to balance the equation and that is much higher fees or much lower security.”
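Reproducing that arithmetic (the 2 tps figure matches the quoted ~$5; at 100 tps the same arithmetic comes out nearer $0.10, the same order of magnitude as the quoted $0.05):

```python
# DeathAndTaxes' per-transaction security cost, reproduced as arithmetic.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
ANNUAL_SECURITY_COST_USD = 300_000_000   # figure quoted in the comment

for tps in (2, 100):
    txs_per_year = tps * SECONDS_PER_YEAR
    cost_per_tx = ANNUAL_SECURITY_COST_USD / txs_per_year
    print(f"{tps:>3} tps -> ~${cost_per_tx:.2f} per transaction")
# 2 tps -> ~$4.76, 100 tps -> ~$0.10
```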
There is a natural market limit to how much Bitcoin miners can earn on fees if only 1,800 transactions can happen per block, because there’s a natural limit to how much people will be willing to pay to transfer money on the Bitcoin blockchain when there are competitors, including fiat networks, that they can use.
Atif Nazir writes:
“On the one hand, increasing block sizes, as you say, may result in lower transaction fee requirements. However, if the transaction fees actually are lowered by, say, 1000x what they are now (0.00001 is the minimum accepted by the reference client),”
This is wrong. Bitcoin right now is not near the limit, so the block size limit is not producing artificial scarcity and hence upward pressure on fees. The situation would remain the same if the block size limit was raised to 20 MB. So there’s no reason to expect fees per transaction to come down. What advocates of a permanent 1 MB restriction are hoping for is that IF Bitcoin block sizes reach 1 MB, there will be increased scarcity of block space, and thus fees will go up. This is a dangerous experiment, as very possibly, new people will simply stop adopting Bitcoin if it becomes more expensive to use.
The question of behaviour in the face of orphan races is going to be very interesting. The much-speculated-about bloom filter implementation has the potential to dramatically reduce the dependence of block propagation on block size, which would mean that this wouldn’t be a major concern.
I do, however, have concerns about anything that would mean that successfully propagating all transactions to all nodes in advance of the block propagation becomes necessary as that seems to have scope for abuse by someone intent on causing problems.
Pretty interesting technical analysis.
The political implications of a higher mining centralization are quite obvious.
Interesting article, but your representation of increased propagation time due to bigger block sizes is biased and inaccurate. You completely fail to mention that reverse bloom filters + set reconciliation address this exact problem, and that this is one of the reasons that allowed Gavin A. to consider a 20x blocksize increase, which would otherwise indeed be infeasible. To mention one without the other is to take things out of context.
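To make that propagation point concrete, here is a rough, purely illustrative estimate of the bandwidth saving when peers already hold a block’s transactions in their mempools and only compact identifiers need to be relayed; the transaction and identifier sizes are assumptions, not figures from any specific proposal:

```python
# Rough bandwidth saving when peers already hold the block's transactions
# and only short IDs need to be relayed. All sizes are illustrative.
BLOCK_SIZE_BYTES = 20_000_000     # hypothetical full 20 MB block
AVG_TX_SIZE_BYTES = 500           # assumed average transaction size
SHORT_ID_BYTES = 8                # assumed compact identifier per tx

tx_count = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
compact_bytes = tx_count * SHORT_ID_BYTES
print(f"{tx_count} txs: {compact_bytes / 1_000:.0f} KB of IDs vs "
      f"{BLOCK_SIZE_BYTES / 1_000_000:.0f} MB full block "
      f"(~{BLOCK_SIZE_BYTES / compact_bytes:.0f}x less to send)")
# -> 40000 txs: 320 KB of IDs vs 20 MB full block (~62x less to send)
```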
The only legitimate criticism I see here is the extra HD space required of a full node to store each block. But the severity of this issue is much less than that of block propagation time.
+1 for Victor’s comment about invertible bloom filters which is the main point of Andresen’s proposal.
As a side note: it’s not even certain that disk space is the most critical issue. Stats from January 2015 show that we have 1 tx/s when the network could support around 3 or 3.5 tx/s. Therefore, 20 MB per block doesn’t imply a consumption of 1 TB per year; it just means that the network can cope with higher peaks without introducing “latency” in the validation of txs (because of scarce space in blocks).
IMHO, network bandwidth may be a more critical issue in the event of a significant increase in the number of transactions, because access to this resource is strongly tied to costly network infrastructure. Everything should be fine as long as Bitcoin’s consumption stays within the range of an average internet user (videos, etc.).
That being said, the main question for all of us remains: do we envision Bitcoin as a payment currency or as digital gold? Depending on your answer, you will consider scarcity, latency, high fees and so on either a blessing or a curse.