
November 4, 2017. Day 1 of the Scaling Bitcoin conference was filled with students, experts, and researchers from the world over, gathering to present and discuss various improvements to Bitcoin and its ecosystem.

Among the opening lines from the host was a comment stating that “this is the place where we discuss engineering, not politics”. I’d like to think so… After all, I would also like to think that we are all there for the benefit and improvement of Bitcoin.

There were many interesting presentations, many of which focused on layer 2 solutions. And in them, I came to a bit of a revelation.

When governments around the world restrict their citizens’ access to certain things, the people will always look for a way ‘around’ the block. Take, for example, censored Internet. Many countries place filters on all their ISPs, ensuring that certain sites are blocked. In countries with strict controls, citizens generally get around this by adopting VPNs or Tor. Such restrictions actually drive legitimate demand for VPN software and the like, and that demand drives improvement.

BTC (SegWit) is no different. With an artificially imposed limit, researchers around the world are genuinely coming up with all sorts of ways of utilizing second layer technologies to enable off-chain transactions.

Ian Miers of Johns Hopkins University, for example, presented the details of a very interesting paper titled “Bolt: Anonymous Payment Channels for Decentralized Currencies”. The techniques described allow for the construction of anonymous payment channels, the purpose of which is “to reduce the storage burden on the payment network.” I caught up with Ian following the presentation and asked about its applicability to Bitcoin development, and how likely it was that the work would be incorporated into Bitcoin. Miers responded, “I’m a researcher, and I do research. I’ll present to the community and what gets put in has nothing to do with me.” He added that incorporation into Bitcoin was unlikely, and that the work would more likely be implemented in Zcash. After all, Miers is one of the key scientists on the Zcash team, though he also noted that this particular improvement is not a high priority at present.
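Bolt’s anonymity machinery is well beyond a short example, but the storage argument it cites applies to payment channels in general. Here is a minimal, hypothetical sketch (plain Python, no anonymity, all names invented for illustration) of why channels reduce the on-chain burden: only the opening and closing states would ever touch the blockchain, while every intermediate payment stays off-chain.

```python
# Generic payment channel sketch -- NOT Bolt's construction. Illustrates
# the storage claim only: many off-chain updates, two on-chain footprints.

class PaymentChannel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        # The opening transaction (deposits) is the first on-chain footprint.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.updates = 0  # off-chain state updates, never broadcast

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self) -> dict:
        # The settled final state is the second (and last) on-chain footprint.
        return dict(self.balances)

ch = PaymentChannel(alice_deposit=100, bob_deposit=0)
for _ in range(50):
    ch.pay("alice", "bob", 1)
print(f"{ch.updates} off-chain payments, 2 on-chain transactions")
print(ch.close())  # {'alice': 50, 'bob': 50}
```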

Another proposal, by Johnson Lau and Olaoluwa Osuntokun, suggested modifications to Bitcoin script to enable further functionality and to “strengthen payment channels”.

The point of all this is that there are some very legitimate, ground-breaking pieces of work being done here, regardless of whether we are talking about on-chain or off-chain. There were solid presentations throughout, on research that either explored the best methods for layer 2 solutions or ways of cutting down on the amount of data.

Take, for example, the presentation by Benedikt Bünz of Stanford University. He proposed a new concept for light clients, which he termed “FlyClient”: ‘super light clients for cryptocurrencies’. As a proponent of on-chain scalability, I found this of particular interest. Although SPV clients don’t grow with the number of transactions, they do grow with the download of block headers, and over time this accumulates. FlyClient uses a very neat trick: a statistical method whereby not every single block header needs to be downloaded. Tested on the Ethereum network, 2.2GB of header files shrank to an astonishing 3MB. There is potential applicability here for Bitcoin BCH.
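To get a feel for the savings on Bitcoin-sized headers, here is a rough, hypothetical back-of-envelope sketch; the chain height, security parameter, and sampled_header_count() function are my own illustrative assumptions, not figures from the FlyClient paper (which commits to the whole header chain and samples blocks with a bias toward recent ones).

```python
import math

HEADER_SIZE_BYTES = 80  # size of a Bitcoin block header

def sampled_header_count(n_headers: int, security_param: int = 50) -> int:
    # Check O(log n) randomly sampled headers instead of all n of them.
    return min(n_headers, security_param * max(1, math.ceil(math.log2(n_headers))))

n = 500_000  # roughly the Bitcoin chain height in late 2017
k = sampled_header_count(n)
print(f"full header download: {n * HEADER_SIZE_BYTES / 1e6:.0f} MB")
print(f"sampled download: {k * HEADER_SIZE_BYTES / 1e6:.2f} MB ({k} headers)")
```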

For those who find sense in Satoshi’s on-chain scaling roadmap, and who see the exponential value of Moore’s law, the presentation that truly stole the show was undeniably Bitcoin Unlimited’s Gigablock Testnet results.

The presentation was titled “Measuring maximum sustained transaction throughput on a global network of Bitcoin nodes”.

Incredibly, the limitations faced were not hardware limitations at all. In fact, the bottlenecks found were of a software nature. With some tweaks to Satoshi’s code, among them the inclusion of a “try later” queue, VISA-level scalability was achieved.
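To illustrate the idea behind a “try later” queue (a minimal sketch of the concept, not Bitcoin Unlimited’s actual code; all names and structures here are my own assumptions): a transaction that cannot be accepted yet, for instance because a parent it spends from has not arrived, is deferred and retried rather than dropped.

```python
from collections import deque

def admit_batch(incoming, mempool, known_txids):
    """Admit what we can now; park the rest in a 'try later' queue."""
    try_later = deque(incoming)
    progress = True
    while try_later and progress:
        progress = False
        for _ in range(len(try_later)):
            tx = try_later.popleft()
            if all(parent in known_txids for parent in tx["parents"]):
                mempool[tx["txid"]] = tx   # parents known: accept
                known_txids.add(tx["txid"])
                progress = True
            else:
                try_later.append(tx)       # parent missing: try again later
    return list(try_later)                 # still-unresolved orphans

# A child arriving before its parent is admitted on the retry pass.
mempool, known = {}, {"coinbase"}
batch = [{"txid": "b", "parents": ["a"]}, {"txid": "a", "parents": ["coinbase"]}]
print(admit_batch(batch, mempool, known))  # [] -- both transactions admitted
```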

The test featured nodes across three continents; it was truly a global test. But the real icing on the cake was that a node could run on a standard desktop computer: a 4-core CPU system with 16GB RAM and SSD storage, albeit on a solidly strong 30Mbps connection.

It was extrapolated that 50,000 transactions per second would require around 500 cores and 1.5 Gbps of bandwidth. That sort of machine is the kind of thing which should be commercially available within a decade or so.
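Dividing those figures through gives a feel for what the extrapolation assumes (this is my own arithmetic on the quoted numbers, assuming roughly linear scaling; it is not the presenters’ model):

```python
target_tps = 50_000      # transactions per second
cores = 500
bandwidth_bps = 1.5e9    # 1.5 Gbps

print(target_tps / cores)              # ~100 tx/s per core implied
print(bandwidth_bps / 8 / target_tps)  # ~3750 bytes of traffic per tx --
                                       # headroom above a typical ~500-byte
                                       # transaction for relay overhead
```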

Moore’s Law is a real thing.

I do recall when world chess champion Garry Kasparov lost to IBM’s Deep Blue supercomputer. At that time, in 1997, Deep Blue was a cluster of 30 RS/6000 SP Thin nodes built on 120MHz P2SC processors, and it ranked among the top 300 fastest supercomputers in the world. Ten years later, you could obtain a computer of the same power as a desktop. It’s incredible what 10 years in technology can do.

Tone Vays seemingly lampooned the presenters’ efforts during question time by asking how his computer, with 350GB of storage, could hold a blockchain built from 1GB blocks.

Peter Rizun correctly responded that this is not expected to run on five-year-old systems. Without a doubt, it is not needed today; to assume so is naïve. In fact, by today’s standards, 8MB blocks are more than enough to process transactions consistently and effectively, for minimal fees.
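A quick worked example shows why Rizun’s point stands (assuming the worst case of a full 1GB block every ten minutes, which is my assumption for illustration):

```python
blocks_per_year = 6 * 24 * 365            # one block every ~10 minutes
tb_per_year = blocks_per_year * 1 / 1000  # 1GB per block
print(f"{tb_per_year:.1f} TB of block data per year")  # ~52.6 TB
```

No 350GB machine handles that today; hardware a decade out plausibly does.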

The Gigablock Testnet proves that big blocks do propagate, and that today’s limitations are not so much hardware related; the bottlenecks were indeed software related… something which could be, and was, optimized.

It also paves the way forward. The work by Bitcoin Unlimited and nChain provides important information on future bottlenecks and ways in which we can address these issues, many of which have already been ‘fixed’ by the BU team. This wealth of information doesn’t only prove what is possible in the future; it provides confidence for on-chain scalability.

We don’t need gigabyte blocks today, but by the time we do, we will already have tested thoroughly and proactively, and Bitcoin will already have grown to incredible proportions.

Another point worth mentioning is that 1GB blocks aren’t something we will just decide to switch on one day. This sort of scalability gets worked up to over time, as the need requires. In the same way Bitcoin Cash BCH hard forked to 8MB when it was necessary, so too there will come a time when we move to 32MB, and so on. The team behind the Gigablock Testnet initiative deserve nothing but praise, and they are certainly not pushing an agenda to force this onto the network of users…

I did manage to catch up with Peter Rizun for a quick chat following the presentation, and I look forward to posting the details of that interview shortly.

Eli Afram
@justicemate
