
Return to Genesis with nChain CTO Steve Shadders

The following is a written interview conducted with nChain CTO Steve Shadders, with specific questions about the upcoming Genesis upgrade on February 4. The motivation for this interview was to bring clarity to some common questions that kept popping up from key parties in the ecosystem, as well as to some specific questions of my own, based on curiosity and personal opinion.

Could you explain in basic terms why we will now be able to push up to 4GB of data on-chain, as opposed to only 100 KB before?

The original OP_PUSHDATA4 opcode allowed for up to 4GB of data. This opcode was rendered useless by limitations built into the BTC Core software (including a 520-byte limit on the size of a stack item). That consensus limit has been removed, so miners are now free to accept larger data elements. OP_RETURN enabled a cheeky workaround for the limit, but now any type of data operation can exceed it, subject to the maximum transaction size (currently 1GB by default). It is now up to miners to choose how large a data element they will accept.
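As a rough illustration of the encoding involved, here is a minimal Python sketch (not the node's actual serialisation code) of how a raw data push chooses its opcode, and why OP_PUSHDATA4's 4-byte length prefix allows a single push of up to roughly 4GB:

```python
import struct

# Pushdata opcode values from the original Bitcoin script specification.
OP_PUSHDATA1 = 0x4C
OP_PUSHDATA2 = 0x4D
OP_PUSHDATA4 = 0x4E

def push_data(data: bytes) -> bytes:
    """Encode a raw data push, choosing the smallest pushdata form.

    OP_PUSHDATA4 carries a 4-byte little-endian length, so a single
    push can describe up to 2^32 - 1 bytes (~4GB) of data.
    """
    n = len(data)
    if n <= 0x4B:                 # direct push: the opcode is the length itself
        return bytes([n]) + data
    if n <= 0xFF:
        return bytes([OP_PUSHDATA1, n]) + data
    if n <= 0xFFFF:
        return bytes([OP_PUSHDATA2]) + struct.pack("<H", n) + data
    return bytes([OP_PUSHDATA4]) + struct.pack("<I", n) + data

# Pre-Genesis consensus capped any stack item at 520 bytes; post-Genesis the
# same encoding can address elements up to the opcode's 4GB maximum, subject
# to miner policy and the maximum transaction size.
print(push_data(b"\x00" * 600)[:3].hex())  # "4d5802": OP_PUSHDATA2, length 600
```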

Could you talk through the benefits and trade-offs of using OP_PUSHDATA vs. OP_RETURN to store data? Can miners prune data from OP_PUSHDATA?

Yes, there is no technical reason miners need to keep provably unspendable outputs, of which the classic OP_RETURN output is one example. That does not mean they won't. They do need to be kept by archival nodes that wish to service Initial Block Downloads (IBD).

But it's important to factor in that miners (or some other actor) probably have a good economic reason to keep this data: simply because it's valuable, and there is an opportunity to earn revenue by providing the data at some point in the future. Perhaps not all data, but if you don't know which subset of data will have future value, you need to keep it all.

After Genesis it will be trivial to store data in spendable outputs. These cannot be pruned by miners, since they may be needed to validate a future transaction. This property imposes different requirements on miners, and I would foresee different mechanisms of data storage developing different pricing models as miners come to understand this.

If guaranteed miner retention or future spendability (perhaps as a signal that data is expired) is valuable to you, then you might choose to pay a higher price to gain these properties for your data.
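To make the two shapes concrete, here is a minimal sketch of the unspendable and spendable data-output styles discussed above, reusing the push_data helper from the earlier sketch. The exact scripts an application uses are a design choice, not something fixed by the protocol:

```python
# Script opcode values (unchanged since the original Bitcoin release).
# Requires push_data from the earlier pushdata sketch.
OP_FALSE, OP_RETURN, OP_DROP = 0x00, 0x6A, 0x75
OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xA9, 0x88, 0xAC

def unspendable_data_output(data: bytes) -> bytes:
    """Provably unspendable carrier, prunable by miners:
    OP_FALSE OP_RETURN <data>."""
    return bytes([OP_FALSE, OP_RETURN]) + push_data(data)

def spendable_data_output(data: bytes, pubkey_hash: bytes) -> bytes:
    """Spendable carrier, which must remain in the UTXO set until spent:
    <data> OP_DROP followed by an ordinary P2PKH lock."""
    return (push_data(data) + bytes([OP_DROP])   # data, discarded at spend time
            + bytes([OP_DUP, OP_HASH160]) + push_data(pubkey_hash)
            + bytes([OP_EQUALVERIFY, OP_CHECKSIG]))
```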

Why is changing the script numeric type from 32-bit integers to big numbers significant?

Of the near-infinite use cases for this, one stands out as obvious: cryptography. Almost all cryptography is based on maths using numbers larger than 32 bits. To be able to implement cryptography in script easily has enormous potential. There are no doubt many other uses to be discovered over the coming years and decades; the point is, why restrict it in the first place? Original Bitcoin did not have this limitation.
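For a sense of the numeric range at stake, the field prime underlying Bitcoin's own secp256k1 signatures is a 256-bit number. A small sketch using Python's arbitrary-precision integers shows the kind of arithmetic that big-number script numerics make expressible:

```python
# The secp256k1 field prime used by Bitcoin's own signature scheme.
# Nothing near this size is representable with 32-bit script numerics.
P = 2**256 - 2**32 - 977

def field_mul(a: int, b: int) -> int:
    """Multiply two field elements modulo P, trivial with big integers."""
    return (a * b) % P

assert P.bit_length() == 256
print(field_mul(P - 1, P - 1))  # 1, since (-1) * (-1) == 1 (mod P)
```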

Why was the sighash algorithm reverted?

Based on feedback from various actors in the Bitcoin SV ecosystem, it was decided that further preparation of the wider ecosystem would need to be done to avoid potential disruption. Given the significant number of other changes going into Genesis that also require preparation, there simply wasn't time to do this as thoroughly as was required.

Will unsplit BTC replay onto BSV from Genesis?

No, it would require the original sighash algorithm to be enabled.

What is the status of the dust limit for transactions?

The dust limit still exists, although it's living on borrowed time. Its original purpose was to prevent people from creating outputs so small that they'd cost more in transaction fees to spend than they are actually worth.

A change planned for soon after Genesis is for miners to allow what we call “consolidation transactions” to be accepted for either a zero fee or a heavily discounted one. A consolidation transaction is one where you collect a large number of very small inputs into a single output.

This is worthwhile for the miners because it reduces the size of the UTXO set they have to maintain. Once this is possible there is no further need for the dust limit. If you create dust, you simply wait until you've collected enough to consolidate, and you can make use of it again.

Enabling dust-sized outputs opens up some very interesting use cases, e.g. attaching a micropayment to an existing transaction. Let's say a merchant wants some sort of insurance on a 20c transaction; the merchant asks the customer to include a 0.02c output paying the insurer directly.

With a dust limit this might not be feasible, but without it the insurer, who is presumably receiving thousands of these per day, can simply consolidate them daily to turn them into useful outputs.
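A rough sketch of the economics involved. The sizes and fee rate below are illustrative placeholders, not actual miner policy:

```python
# Illustrative numbers only; real values are set by miner policy.
FEE_RATE_SAT_PER_BYTE = 0.5   # hypothetical fee rate for ordinary transactions
P2PKH_INPUT_SIZE = 148        # typical signed P2PKH input, bytes
P2PKH_OUTPUT_SIZE = 34        # typical P2PKH output, bytes

def is_dust(value_sats: int) -> bool:
    """An output is 'dust' when it costs more to spend than it is worth."""
    return value_sats < P2PKH_INPUT_SIZE * FEE_RATE_SAT_PER_BYTE

def consolidation_value(dust_values: list[int], fee_rate: float = 0.0) -> int:
    """Sweep many tiny inputs into one output. At the zero or heavily
    discounted fee rate miners are expected to offer (the sweep shrinks
    the UTXO set they maintain), nearly all of the dust value is recovered."""
    size = len(dust_values) * P2PKH_INPUT_SIZE + P2PKH_OUTPUT_SIZE + 10
    return sum(dust_values) - int(size * fee_rate)

# e.g. the insurer's daily sweep of thousands of tiny outputs:
daily_dust = [20] * 5000                 # 5,000 outputs of 20 satoshis each
print(is_dust(20))                       # True at the illustrative fee rate
print(consolidation_value(daily_dust))   # 100000 sats recovered at zero fee
```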

Could you set the record straight on the 25 chained transaction limit? Why was this limit such a you-know-what to raise/remove?

The default has been raised to 50 in the Genesis release. Based on extensive testing on the STN (Scaling Test Network), we determined that various performance improvements mean we can raise it this far with no measurable impact.

Most of the problem with ancestors is caused by fee accounting. If you have capped block sizes, the miners need to pick the most valuable set of transactions to maximise fees. If block size is unbounded, you just add everything and the block template becomes an append-only list… So all of that horrible accounting code can go… We are peeling it away layer by layer… The end result should be a vastly simpler and more efficient mempool. But those layers are complex and need to be unpicked very carefully.
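A toy sketch of the accounting difference being described, in Python; illustrative only, not the node's actual mempool code:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    fee: int    # satoshis
    size: int   # bytes

def build_template_capped(mempool: list[Tx], max_block_bytes: int) -> list[Tx]:
    """Capped blocks: rank by fee rate and pick the most valuable subset
    that fits. (Real code must also score whole ancestor chains, which is
    where most of the complexity, and the chained-transaction limit, lives.)"""
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee / t.size, reverse=True):
        if used + tx.size <= max_block_bytes:
            chosen.append(tx)
            used += tx.size
    return chosen

def build_template_unbounded(mempool: list[Tx]) -> list[Tx]:
    """Unbounded blocks: every valid fee-paying transaction goes in, so the
    template degenerates into an append-only list and the accounting
    machinery above can be peeled away."""
    return list(mempool)
```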

The Bitcoin SV node team is well aware that this is the most requested feature by far from application developers and we are prioritising it appropriately.

What is the timeline on creating a ‘Bitcoin Spec’ as discussed in the latest interview with Ryan?


We haven't formally spun this project up yet. It is on the roadmap for after Genesis, but it's too early to give any firm timelines.

Are there any particular types of scripts you are excited about, or would like to see post-Genesis?

What I’m most excited about is the concept of chaining sequences of scripts together (chains of transactions) where the outputs of one (or more) can be fed in as inputs to the next. The potential to create a living system that crosses the boundaries of script and potentially introduces other interactions is fascinating to me.

To that end we are trying to encourage this by extracting the Bitcoin script engine out of the Bitcoin SV node and into a separate Nakasendo library module.

We want people to be able to play and interact with the script engine much more easily. There is probably going to be a new job description in the future that doesn't exist right now: specialist script engineer. The first generation of these will probably begin to emerge this year. If you've got any previous Forth experience, you're probably in the box seat to become one of the pioneers in this field.

Why was the date for Genesis brought forward to 2/4/2020?

It was actually the removal of the block size cap that was brought forward. It was originally planned for the end of 2020, but that was when we were still using the old BCH six-monthly hard fork schedule. After BCH forked off, it didn't take us long to realise we had no reason to stick with this schedule. It was also apparent that the staged progression of block size increases was a hangover of BCH-style thinking. The goal was to put the governance of consensus limits into the hands of miners as soon as possible so that the Bitcoin SV team could focus on scaling.

The date itself has some cute symbolic meaning in that it is 11 years, 1 month and 1 day after the Genesis block timestamp. Although it's possible Genesis may actually activate a day early due to the unpredictability of the rate at which blocks are mined.
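The arithmetic is easy to verify against the Genesis block's timestamp of 3 January 2009:

```python
from datetime import date

genesis_block = date(2009, 1, 3)          # Genesis block timestamp (UTC date)
upgrade = date(2009 + 11, 1 + 1, 3 + 1)   # +11 years, +1 month, +1 day
print(upgrade)                            # 2020-02-04
```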

Do you expect to see a spike in transactions on-chain in February?

I can't really predict this. It's possible people will try stress testing it, although there's not a lot of point to that since the latest software releases are focused on functional changes and not performance enhancements. I would expect to start seeing some more unusual-looking transactions though, as people start experimenting with script.

Implementation detail vs. protocol: Do you think the difficulty adjustment falls into the category of implementation detail that technically fits within the protocol? That is, is the protocol the fixed two-week (2016-block) cycle itself, or merely an adjustment of the average over some given period of time?

I think the difficulty adjustment algorithm is a protocol detail. The length of the cycle appears to be quite deliberately chosen, and we already know some of the consequences of continually adjusting algorithms like the current DAA and the EDA. Safe restoration of the original algorithm will occur once transaction volumes are significantly higher, in what I expect to be the last ever protocol-changing hard fork. We are 90% of the way there after Genesis, but after that final hard fork we will have achieved complete protocol lockdown.
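For reference, the original retarget rule being restored works roughly as follows; a minimal sketch of the well-known calculation, not the node's actual implementation:

```python
# Original difficulty adjustment: every 2016 blocks, scale the target by
# how long those blocks actually took versus the intended two weeks,
# clamped to a factor of four in either direction.
RETARGET_INTERVAL = 2016               # blocks per difficulty cycle
TARGET_TIMESPAN = 14 * 24 * 60 * 60    # two weeks, in seconds

def retarget(old_target: int, actual_timespan: int) -> int:
    actual_timespan = max(TARGET_TIMESPAN // 4,
                          min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_target * actual_timespan // TARGET_TIMESPAN

# Blocks arrived twice as fast as intended -> target halves
# (i.e. difficulty doubles).
print(retarget(1 << 200, TARGET_TIMESPAN // 2) == (1 << 200) // 2)  # True
```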

Please elaborate on why P2SH is disabled.

Jerry Chan elaborates on this nicely here.

Can you talk through what happens if a miner tries to create a P2SH output?

This particular script template is disallowed by consensus. It is unfortunate to have to do so, but the same functionality can be achieved with a slight modification of the script. It also means that it is impossible to accidentally create a P2SH output.

In most cases this wouldn't be the end of the world; you'd just need the help of an honest miner to recover your funds. But in some cases it could have left the funds open to theft. This decision was taken simply to protect users from unwittingly putting themselves at risk. It can probably be reverted later by a consensus of miners.
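The consensus check itself can be simple, because a P2SH output is recognisable purely by its byte shape. A minimal sketch of that template match (illustrative, not the node's actual code):

```python
OP_HASH160, OP_EQUAL = 0xA9, 0x87

def is_p2sh_template(script: bytes) -> bool:
    """The P2SH locking script is exactly 23 bytes:
    OP_HASH160 <20-byte push> OP_EQUAL."""
    return (len(script) == 23
            and script[0] == OP_HASH160
            and script[1] == 0x14          # direct push of 20 bytes
            and script[22] == OP_EQUAL)

# Post-Genesis, any output matching this template is rejected by consensus,
# so a P2SH output cannot be created even by accident.
```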

Do you think this latest ‘fix’ was absolutely necessary? Could this not have been left up to the miners to enforce?

It could have been enforced by miners; however, the public disclosure of the mechanism to exploit the change drew a lot of attention to it. The more attention it has, the more likely a well-resourced attacker might try it, and the more expensive it would be for honest miners to repel the attack. In our opinion the disclosure changed the risk profile enough to justify changing the mechanism to a more explicit one.

How will your role at nChain/Bitcoin Association change post-Genesis?

I characterise the focus of 2019 as being on infrastructure. There was a lot to research and build out, and plenty to fix. We've got a really solid foundation for the Bitcoin SV ecosystem now.

There's more to do, but it's enough to start really building the next layers. So I think the focus in 2020 will start to shift onto the tools that enable businesses to work with Bitcoin more easily. On the infrastructure side the focus will shift primarily to scaling, i.e. Teranode.

2020 will be an incredibly exciting year, because we've reached the point where Bitcoin SV not only has a credible track record of achievement, but is now within reach of real businesses wanting to build things on top of it that matter to them.

***

Thank you, Steve, for taking the time to answer the questions. I hope the readers learned something and have clarity going into the Genesis upgrade in just a few days!

Stay tuned for another written interview in the same manner with Bitcoin SV Node Lead Developer, Daniel Connolly, about Teranode.
