Yes, yes, I know you CompSci folks out there are screaming in your heads, “There is no such thing as an infinite tape!” I know, okay? Take a chill pill. I am aware that nothing within the bounds of our physical universe is truly *infinite*, but I use the term only because, in the context of discussing the *theoretical* Turing machine, it has an infinite *theoretical* tape, so I think the use of the more common term is justified.

With that out of the way, “What the heck am I talking about?” asks the average layperson, who doesn’t care much about nerdy CompSci debates. Well, let’s start with the Turing machine that we discussed last week. Recall that the Turing machine is just a mental model of how computation works. It involves a machine with a read/write head, which can read symbols from and write symbols to an infinite tape, according to a set of rules. The tape is divided into cells which contain symbols or can be blank. The rules are represented in the form of a state transition table, which is known to the machine (in its memory), so it knows what to do when it reads any given symbol from the tape cell under the read/write head. The table is essentially a mapping of rules that take the form:

given symbol ‘x’ is read from the tape —> do action ‘y’.

Actions can be a combination of moving the tape left or right some number of cells and/or writing a new symbol onto the cell under the head. By feeding the tape to the machine (the tape can be thought of as the program), and having a formal language defined (the table of symbol rules), this sufficiently models any generic computation. Any program written in any language for any computer today can be implemented as a Turing machine, though in practice such a machine isn’t terribly *efficient*, so it is rarely built as a real physical computer. Turing proposed this theoretical machine as a way to analyze what can actually be computed by any computer, regardless of physical constraints of memory size or computational speed. It was so useful that to this day we use the colloquial term “Turing complete” to describe computer languages which can express any generic computation that a theoretical Turing machine would be able to compute.
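To make the model concrete, here is a minimal sketch of that read-rule-act loop in Python. The simulator and the example “unary increment” ruleset are illustrative, not from the article; the transition table maps a (state, symbol) pair to what to write, where to move, and which state to enter next.

```python
# Minimal Turing machine simulator: a transition table maps
# (state, symbol read) -> (next state, symbol to write, head move).
from collections import defaultdict

def run(rules, tape, state="start", halt="halt", max_steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" = blank cell
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        state, write, move = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Return the non-blank portion of the tape as a string
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example ruleset: unary increment -- scan right past the 1s,
# then write one more 1 on the first blank cell and halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run(rules, "111"))  # -> 1111
```

The whole machine is nothing more than the table plus the loop; all the “programming” lives in the rules.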

So what is the significance of this? Well, hold onto your breeches—

BSV is an infinite Turing tape. One that only moves forward, unboundedly, into the future. Previous cells can be read but cannot be changed. Other than that restriction, it models the infinite tape in a Turing machine. Better still, this tape is readable and writable by anyone, anywhere.
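That read-anywhere, append-only property can be captured in a tiny sketch (the class name `AppendOnlyTape` and the `OP_RETURN` payload are purely illustrative):

```python
class AppendOnlyTape:
    """A tape whose cells can be read at any index but only ever
    written at the end -- mirroring how past blocks stay readable
    yet immutable, while new data can always be appended."""

    def __init__(self):
        self._cells = []

    def append(self, symbol):
        self._cells.append(symbol)    # writing only extends the tape
        return len(self._cells) - 1   # index ("height") of the new cell

    def read(self, index):
        return self._cells[index]     # any historical cell stays readable

tape = AppendOnlyTape()
h = tape.append("OP_RETURN hello")
assert tape.read(h) == "OP_RETURN hello"  # history is permanent
```

Note there is deliberately no `update` or `delete`: correcting the past means appending a new cell that supersedes it, which is exactly how a blockchain handles state.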

A Turing machine (or an unbounded number of them) can be built on top of this public infinite tape, and any number of parallel computations can be calculated. Bitcoin is a global massively distributed computer!

Well, at least the necessary foundation of one. What is needed is just the rulesets and the reading/writing machines. A reader/writer is just something that listens to the blockchain and can publish signed transactions onto it—writing to the tape. By running such a server and programming it with a table of symbol translation rules, you can effectively create programs as a pushdown automaton^{1} (PDA). Since the tape is a blockchain—with its own consensus mechanisms—the tape can be universally agreed upon, and it stores the state for the potential sea of PDAs that can be built on top of it. It can even store the symbol translation tables themselves. Now imagine you need some computation done. You compile the logic into an equivalent symbol translation table, or ruleset. You publish this ruleset onto the infinite Turing tape. You want to execute this program for a set of 1,000 different inputs. All you need to do is embed these inputs into smart contract transactions, one per input set, each including a locked payment, and then any number of public calculation nodes will pick up the inputs, load the rulesets, run the program until completion, and write the results back to the tape. They are incentivized to do this because the first ones to complete the task will be able to claim the reward that was placed into the smart contract transaction.
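The publish–compute–claim flow described above can be sketched as plain Python. Everything here is hypothetical: `ComputeJob`, `run_node`, and the trivial “program” are stand-ins for illustration only; no real BSV library, protocol, or transaction format is implied.

```python
# Hedged sketch of the job flow: a ruleset and inputs are published
# to the tape, any node may execute the job, and the first result
# would claim the locked reward. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputeJob:
    ruleset_txid: str   # where the published symbol translation table lives
    inputs: tuple       # input symbols embedded in the job transaction
    reward: int         # satoshis locked for the first node to finish
    result: Optional[tuple] = None

def run_node(tape, rules):
    """A calculation node: scan the tape for open jobs, execute them,
    and record the results (in reality, via a result transaction whose
    script lets the node claim job.reward)."""
    for job in tape:
        if job.result is None:
            job.result = tuple(rules[s] for s in job.inputs)

rules = {"a": "A", "b": "B"}   # a trivial "program": uppercase each symbol
tape = [ComputeJob("txid0", ("a", "b", "a"), reward=5000)]
run_node(tape, rules)
print(tape[0].result)  # -> ('A', 'B', 'A')
```

The point of the sketch is the incentive shape, not the details: inputs, rulesets, and results all live on the shared tape, so any node anywhere can race to do the work.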

This is the promise of future computation on BSV, and it is the next evolution of network computing. It is computing without a specific instance, or “server-less computation,” allowing for unbounded scalability and resiliency. Having the total computational resources of the planet able to participate ensures maximal and efficient scaling, governed by just how much the computation requestors are willing to pay. The supply and demand for cloud computing will become free-market driven, imagine that!

But how will this differ from our current cloud computing models?

While the current models assume, nay encourage, customer stickiness, the new model will encourage diversification, differentiation, and consumer choice in the computational markets. Current cloud providers give you the platform infrastructure and tools needed to serve all your distributed computational needs, but by implementing applications and services on the platform you make it very hard to migrate your business to a different provider. You can scale, but only if you remain on the same platform, subject to their evolving business models and pricing schemes—you are locked in. Containerization technology like Docker is a big step towards a more modularized computational market, where the server host and the code being run can be decoupled, but being able to pay servers on a *per computation* basis is the *true* innovation of the decade. In a world where different computational services can all compete to run the *same computation*, the dynamics of competition change drastically. A computation client can choose different service providers per job and can even have them compete to compute the same job at the same time. Such are the benefits of sharing the same global Turing tape to communicate compute job requests, outputs, and service payments. Computation truly becomes a commodity; when this happens, service providers will truly have to compete for your business. Where there is competition among providers, the consumer always wins in the end.

Imagine a future in which data resiliency and business continuity are ensured by the fact that your computations can always be run by the global public network, securely and privately. Not a cloud, but a decentralized SWARM of compute nodes which comprise the worldwide computational network. Compute jobs are executed by the most efficient or most local nodes, and compute becomes the commodity service from which anyone with any sort of computational power, even in the form of a mobile device, can profit. This is more than edge computing—as ‘edge’ still implies a center—this is *global* computing.

I avidly await the first PDAs and functional automata joining the swarm, released onto a Turing tape environment which will last forever, keeping a record of events and transactions for the citizens of starship Earth for many millennia^{2} to come.

**Resources**

Some projects starting to explore this Bitcoin computation space that are worth keeping an eye on:

Grid Planaria (some of the base components of the Turing machine, and even better docs on the topic)

Turing Tape protocol (one implementation)

Bitcoin Computer (for the more OOP-leaning programmers out there who like JavaScript)

Run network (a token focused take on bitcoin computing)

***

[1] PDA – Computer science folks will know that any Turing machine can be simulated by a pushdown automaton with two stacks. And every Bitcoin evaluator has two stacks available to it when executing the Bitcoin script language.
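The two-stack trick in the footnote is simple to see in code: everything left of the head lives on one stack, and the head cell plus everything to its right lives on the other, so moving the head is just a pop from one stack and a push onto the other. A small sketch (helper names are illustrative):

```python
# Simulating a Turing tape with two stacks, as in footnote 1.
# The top of `right` is always the cell under the head.
def make_tape(symbols):
    left, right = [], list(reversed(symbols))
    return left, right

def move_right(left, right):
    # Head cell slides onto the left stack; "_" fills in blank tape.
    left.append(right.pop() if right else "_")

def move_left(left, right):
    right.append(left.pop() if left else "_")

left, right = make_tape("abc")
assert right[-1] == "a"     # head starts on 'a'
move_right(left, right)
assert right[-1] == "b"     # head on 'b'; 'a' now sits on the left stack
move_left(left, right)
assert right[-1] == "a"     # and back again
```

Since Bitcoin script execution exposes a main stack and an alt stack, this is the sense in which the footnote’s equivalence applies.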

[2] Technically, any block height higher than 500,000,000 will start to cause some problems with the current protocol design. For instance, nLockTime. But it will be around AD 10,000 before we have to worry about such things, which is fine for most business applications.

*New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.*