This article was first published on Dr. Craig Wright’s blog, and it is republished with permission from the author. Read Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, and Part 8.
The ability to scale a blockchain is related to the problem of scaling a distributed database. For example, the original ledger used within Bitcoin was a key-value database known for its ability to scale to high transaction volumes. The system was later changed to ensure that nodes could run on smaller systems such as a Raspberry Pi. Yet key-value databases such as LevelDB have since been extended to run using GPU-based acceleration, providing higher levels of interoperability (Iliakis et al., 2022).
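To make the key-value model concrete, below is a minimal sketch of storing and retrieving ledger data through LevelDB, assuming the third-party plyvel Python bindings; the database path and key layout are illustrative, not Bitcoin’s actual on-disk schema.

```python
# Minimal LevelDB usage sketch (assumes: pip install plyvel).
import plyvel

db = plyvel.DB('/tmp/ledger-demo', create_if_missing=True)

# Store a transaction keyed by its id; LevelDB stores raw bytes.
txid = bytes.fromhex('aa' * 32)
db.put(b'tx:' + txid, b'<serialized transaction bytes>')

# Point lookup by key -- the access pattern key-value stores scale well at.
raw_tx = db.get(b'tx:' + txid)

# Range scan over a key prefix, e.g., all stored transactions.
for key, value in db.iterator(prefix=b'tx:'):
    print(key.hex(), len(value))

db.close()
```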
Other research has focused on providing high availability and fault-tolerance properties analogous to those within NoSQL and NewSQL systems (Zhou et al., 2021). Angelis and Ribeiro da Silva (2019) examine blockchain technologies in uses that extend beyond financial instruments. In such an analysis, it is necessary to remember that scale is a critical component of adoption. Therefore, any analysis of the growth and scalability of blockchain-based solutions must address both the value drivers and the ability to create scalable systems.
Annotated Bibliography
Angelis, J., & Ribeiro da Silva, E. (2019). Blockchain adoption: A value driver perspective. Business Horizons, 62(3), 307–314. https://doi.org/10.1016/j.bushor.2018.12.001
The authors present a discussion of uses associated with blockchain technology outside the production of financial systems. The approach analyzes the concept of a blockchain, examining the consensus and scalability of a system that is based upon a series of digital signatures and that allows for the distribution of a variety of tokens, including those used in monetary exchanges. The primary fundamentals of a blockchain documented by the authors include deploying a system that is “highly transparent, secure, immutable, and decentralized” (Angelis & Ribeiro da Silva, 2019, p. 308).
The paper extends to analyzing blockchain maturity, providing examples of blockchain versions 1.0, 2.0, and 3.0: the first focuses on digital cash, the second incorporates privacy and ‘smart contracts’, and the third develops ‘decentralised applications’ (dApps). Unfortunately, the authors made no note of the scripting language within Bitcoin and failed to link it to the ability to produce both “smart contracts” and dApps from the time Bitcoin was first launched. Consequently, the determination of maturity levels provided is problematic.
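As context for the point about Bitcoin’s scripting language, the toy interpreter below evaluates an illustrative two-opcode subset of the stack-based Script language; real Script has a far larger opcode set, byte-level encoding, and signature checks such as OP_CHECKSIG.

```python
# Toy evaluator for a tiny, illustrative subset of Bitcoin Script.
def run_script(script):
    stack = []
    for op in script:
        if isinstance(op, int):       # data push
            stack.append(op)
        elif op == 'OP_ADD':          # pop two items, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'OP_EQUAL':        # pop two items, push 1 if equal else 0
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a == b else 0)
    # A script succeeds when it leaves a true (non-zero) value on top.
    return bool(stack) and stack[-1] != 0

# Unlocking data (2, 3) satisfies the locking script 'OP_ADD 5 OP_EQUAL':
# the spender must supply two numbers that sum to 5.
print(run_script([2, 3, 'OP_ADD', 5, 'OP_EQUAL']))  # True
```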
The authors then branch into an analysis of the underlying logic behind the value propositions associated with blockchains. The analysis links in artificial intelligence, claiming that blockchain 4.0 will arise from the amalgamation of these technologies. The implications lead to decentralized artificial intelligence and the creation of automated systems. Yet, while the authors note the problems with the existing “hype cycle” (2019, p. 311), they present a paper that introduces many of the same dilemmas and focuses on overhyped technologies, including artificial intelligence, rather than other technologies noted in the paper, such as ERP (2019, p. 312).
Iliakis, K., Koliogeorgi, K., Litke, A., Varvarigou, T., & Soudris, D. (2022). GPU accelerated blockchain over key-value database transactions. IET Blockchain, 2(1), 1–12. https://doi.org/10.1049/blc2.12011
Iliakis et al. (2022) analyze blockchain and distributed ledger technologies as they apply to the Internet of Things (IoT), finance, supply chain management, and ERP applications. The paper provides an introductory analysis of blockchain technology and notes the use of GPUs, FPGAs, and ASICs in solving the hash puzzle associated with the block reward. It is noted that most existing blockchain-based systems use a NoSQL database. The authors compare the performance of LevelDB (a commonly deployed option within blockchain systems) with an alternative GPU-based key-value store referred to as MegaKV.
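As a rough illustration of the hash puzzle mentioned above, the sketch below increments a nonce until the double-SHA-256 digest falls below a target; the target is vastly easier than real network difficulty, and real miners run this search on GPU/FPGA/ASIC hardware rather than in Python.

```python
# Toy proof-of-work search: find a nonce whose digest is below the target.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

TARGET = 1 << 240  # requires ~16 leading zero bits; illustrative only

nonce = 0
while True:
    digest = double_sha256(b'candidate-block|nonce:%d' % nonce)
    if int.from_bytes(digest, 'big') < TARGET:
        break
    nonce += 1

print(nonce, digest.hex())
```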
The authors argue that this presents a hybrid CPU-and-GPU system, and that integrating both accelerates the analysis and storage of transactions within the distributed ledger. To demonstrate this, the authors analyze directed acyclic graphs (DAGs), sidechains, and sharding-based solutions. The belief that changing the consensus protocol improves performance and scalability rests upon the argument that proof-of-work within Bitcoin is inefficient. Yet the work fails to note the separation of the transaction data from the block header that is hashed. Despite this, the work on the performance-optimized deployment of key-value stores is valuable.
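The separation noted above can be made explicit: Bitcoin’s proof-of-work hashes only the fixed 80-byte block header, while the transactions are committed through the 32-byte Merkle root inside that header, so the hashing cost per block does not grow with transaction volume. The field values below are illustrative.

```python
# Build an 80-byte Bitcoin-style block header and hash it.
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def block_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # 4 + 32 + 32 + 4 + 4 + 4 = 80 bytes, regardless of transaction count.
    return (struct.pack('<I', version) + prev_hash + merkle_root +
            struct.pack('<III', timestamp, bits, nonce))

header = block_header(1, b'\x00' * 32, b'\x11' * 32, 1231006505, 0x1d00ffff, 0)
assert len(header) == 80
print(double_sha256(header)[::-1].hex())  # big-endian display, as in explorers
```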
In analyzing such options, the authors summarize the state of the art in in-memory key-value stores and compare it with LevelDB deployments. The analysis modelling GPU execution through MegaKV demonstrates a potential to shard the database into multiple partitions, and provides a methodology that the authors used in emulating blockchain-like transactions. While the analysis was based on an experimental prototype, the authors demonstrated the potential for scaling solutions based on GPU-based database accelerators. Yet the authors still conclude that the main benefits of a blockchain are anonymity and decentralization, without documenting the purpose behind each tool.
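The sharding idea can be sketched as follows: keys are assigned to partitions by a hash of the key, so inserts and lookups can be distributed across workers or GPU batches. The shard count, key format, and in-memory dictionaries below are illustrative stand-ins, not MegaKV’s implementation.

```python
# Minimal hash-based sharding sketch over in-memory partitions.
import hashlib

N_SHARDS = 4
shards = [dict() for _ in range(N_SHARDS)]

def shard_of(key: bytes) -> int:
    # Stable assignment: the same key always maps to the same shard.
    return int.from_bytes(hashlib.sha256(key).digest()[:4], 'big') % N_SHARDS

def put(key: bytes, value: bytes) -> None:
    shards[shard_of(key)][key] = value

def get(key: bytes):
    return shards[shard_of(key)].get(key)

# Emulate blockchain-like transactions: write, then read back.
for i in range(8):
    put(b'tx:%d' % i, b'payload-%d' % i)
print(get(b'tx:3'), shard_of(b'tx:3'))
```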
Zhou, J., Xu, M., Shraer, A., Namasivayam, B., Miller, A., Tschannen, E., Atherton, S., Beamon, A. J., Sears, R., Leach, J., Rosenthal, D., Dong, X., Wilson, W., Collins, B., Scherer, D., Grieser, A., Liu, Y., Moore, A., Muppana, B., … Yadav, V. (2021). FoundationDB: A distributed unbundled transactional key value store. Proceedings of the 2021 International Conference on Management of Data, 2653–2666. https://doi.org/10.1145/3448016.3457559
Zhou et al. (2021) present an alternative key-value database that integrates the scalability and flexibility of NoSQL with the ACID transactions deployed within NewSQL. The paper documents the existing and competing alternatives to this database form, and investigates the core design principles associated with creating scalable data infrastructures. The architecture analysis includes the design of the system interface and the overall system architecture, from the control plane to the read/write structures.
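The combination of a key-value interface with ACID transactions can be seen in FoundationDB’s client API. The sketch below uses the official Python bindings and assumes a locally running cluster; the account keys and transfer logic are illustrative.

```python
# Minimal FoundationDB transaction sketch (assumes: pip install foundationdb).
import fdb

fdb.api_version(630)  # select the client API version before opening
db = fdb.open()       # connect via the default cluster file

@fdb.transactional
def transfer(tr, src, dst, amount):
    # All reads and writes in this function commit atomically; the decorator
    # retries on conflict, preserving serializable isolation.
    tr[src] = str(int(tr[src]) - amount).encode()
    tr[dst] = str(int(tr[dst]) + amount).encode()

db[b'acct:alice'] = b'100'
db[b'acct:bob'] = b'0'
transfer(db, b'acct:alice', b'acct:bob', 25)
print(db[b'acct:alice'], db[b'acct:bob'])  # b'75' b'25'
```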
The analysis models replication and read/write processes, providing a suitable instrument to document database design. The analysis extends into geo-replication and failover systems and the necessary inclusions for system optimization. The section on optimization and scalability testing documents lessons learnt and the problems that can be associated with analyzing large-scale database deployments. Most critically, the authors provide a framework that can be used in analyzing other databases.
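As a rough illustration of the replication ideas discussed, the toy model below implements majority-quorum writes and reads: a write succeeds once a majority of replicas store it, and a read consults a majority so that it always intersects the latest successful write. The in-memory replicas are stand-ins and do not reflect FoundationDB’s actual protocol.

```python
# Toy majority-quorum replication over in-memory replicas.
N_REPLICAS = 5
QUORUM = N_REPLICAS // 2 + 1
replicas = [dict() for _ in range(N_REPLICAS)]

def write(key, value, version):
    acks = 0
    for rep in replicas:            # in this toy model every replica responds
        if version > rep.get(key, (None, -1))[1]:
            rep[key] = (value, version)
        acks += 1
    return acks >= QUORUM

def read(key):
    # Take the highest-versioned value among a majority of replicas.
    answers = [rep.get(key, (None, -1)) for rep in replicas[:QUORUM]]
    return max(answers, key=lambda vv: vv[1])[0]

write(b'balance', b'100', version=1)
write(b'balance', b'75', version=2)
print(read(b'balance'))  # b'75'
```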
The most beneficial aspect of the paper lies in the methodology and system used in measuring and capturing various metrics associated with databases. For example, the authors measure the lag from storage servers to log servers, capture proxy redo time, integrate a variety of metrics associated with geolocation across multiple data centers, and provide simulations and measurements for read/write operations against client read and commit requests. Yet, as the authors note (2021, p. 2661), limitations do exist, and the “simulation is not able to reliably detect performance issues, such as an imperfect load balancing algorithm” or third-party libraries not implemented in the flow.
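A hedged sketch of that style of measurement follows: timing individual operations and summarizing median and tail latency. The sleeping lambdas are hypothetical stand-ins for real client read and commit calls.

```python
# Capture per-operation latencies and report p50/p99 summaries.
import statistics
import time

def timed(op, samples):
    start = time.perf_counter()
    op()
    samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds

read_ms, commit_ms = [], []
for _ in range(1000):
    timed(lambda: time.sleep(0.0001), read_ms)    # stand-in for a client read
    timed(lambda: time.sleep(0.0002), commit_ms)  # stand-in for a commit

for name, xs in (('read', read_ms), ('commit', commit_ms)):
    xs.sort()
    print(f'{name}: p50={statistics.median(xs):.2f}ms '
          f'p99={xs[int(0.99 * len(xs))]:.2f}ms')
```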
This article was lightly edited for clarity.