You could have said it last year or the year before, but it’s still true. Artificial Intelligence (AI) is the biggest tech bandwagon since blockchain. But it’s confusing because there are two independent stories about what’s happening.
First, there’s the hype. Today, AI is more of a “brandwagon,” with the likes of Coca-Cola shouting from the rooftops about their AI credentials.
Coke’s Christmas ads come—provocatively—with the label “GENERATED BY A.I.” The happy families and trucks rolling through snowy streets are all AI fantasies, no doubt carefully curated by highly paid executives. So, is the end product really any different from good old-fashioned animation?
Even more brazenly, last year, the company produced something called “Coca-Cola Y3000,” which was supposed to demonstrate “what a Coke from the future might taste like.” And, of course, this was—as the writing on the can boasted—“Co-Created with AI.” Perlease!
There’s no reason why Coke shouldn’t use AI. It’s just that it feels a bit like a 132-year-old brand trying to get down with the kids. Of course, most of its ad executives are probably kids, and good luck to them.
As well as keeping Coke at the ‘cutting edge’ of tech, AI is likely to save the company money, since there’s nothing to film and no actors to pay. But anyone who bothers to notice or be outraged—as many have—is falling for the oldest trick in the ad industry’s book, where “all publicity is good publicity.”
Coke is just one example of AI being used as a kind of calling card to prove one’s contemporary credentials. Try getting a startup off the ground without pitching AI as the key to its success, and you’ll see what I mean.
The second, less ephemeral story about AI is how it’s growing up fast in technical terms.
Last year, ChatGPT used to frustrate me by answering requests for academic sources with interesting-sounding reading lists, which would have been perfect if the papers on them hadn’t been invented for the occasion. This year, the system comes up with the same kind of answers, but everything actually exists, which I much prefer. Its ability to transcribe handwriting has also improved markedly.
Some of these changes may be due to the abilities of the latest ChatGPT model, called o1, which boasts of using “advanced reasoning.” The phrase raises a whole lot of questions, and any claims by OpenAI, the maker of ChatGPT, need to be seen in the context of the company’s need to justify the huge investment it receives by constantly appearing to make breakthroughs.
However, experts seem to agree that o1 is different from previous models and that there is substance behind the claim that o1 “thinks before it answers.” Delving into what that means would involve a discussion of the “stochastic parrot” model of AI (“stochastic” means using probabilities). The parrot comparison is a techy put-down of large language models (LLMs), whose apparent ‘intelligence’ comes from analyzing word order in vast quantities of existing text. A parrot can imitate human speech pretty effectively, but nobody believes it understands what it’s saying.
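To make the parrot comparison concrete, here is a minimal sketch of that word-order idea: a toy bigram model that picks each next word purely from observed frequencies. The corpus and the model are invented for illustration; a real LLM conditions on long contexts and billions of learned parameters, not a word-pair table.

```python
import random
from collections import defaultdict, Counter

# A toy corpus; real models train on trillions of words, not three sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows another (a bigram model).
follows = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word][next_word] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    return random.choices(list(counts), weights=counts.values())[0]

# "Parrot" a sentence: every choice is word-order statistics, nothing more.
word, sentence = "the", ["the"]
for _ in range(7):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The output is often grammatical but means nothing to the machine producing it; scaled up by many orders of magnitude, that is the behavior the “stochastic parrot” label pokes fun at.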
The question is whether, in taking the next step, models like o1 can be said to make “internal representations” of objects or ideas in order to “reason.” Yes, says OpenAI. The model isn’t just processing words. It’s teaching itself and can decide to spend time “thinking” instead of simply spitting out a sequence of words:
“Our large-scale reinforcement learning algorithm teaches the model how to think productively using its chain of thought in a highly data-efficient training process … The constraints on scaling this approach differ substantially from those of LLM pretraining, and we are continuing to investigate them.”
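OpenAI hasn’t published how o1 actually allocates its “thinking” time, but the underlying trade-off is easy to demonstrate: spending more computation per question at inference time can buy accuracy. The sketch below is a deliberately crude stand-in (the noisy_solve function and its 60% hit rate are invented for illustration) that uses majority voting over repeated attempts, in the spirit of published “self-consistency” sampling rather than o1’s real mechanism.

```python
import random
from collections import Counter

CORRECT = 408  # the true answer to some hypothetical question

def noisy_solve():
    """Stand-in for one sampled chain of thought: right ~60% of the time."""
    if random.random() < 0.6:
        return CORRECT
    return CORRECT + random.choice([-2, -1, 1, 2])  # plausible-looking mistake

def answer(n_chains):
    """Spend more inference-time compute: sample n chains, keep the majority vote."""
    votes = Counter(noisy_solve() for _ in range(n_chains))
    return votes.most_common(1)[0][0]

random.seed(0)
for n in (1, 5, 25):
    hits = sum(answer(n) == CORRECT for _ in range(1_000))
    print(f"{n:>2} chains per question -> {hits / 10:.1f}% correct")
```

Each extra chain costs compute at answer time rather than training time, which is why “thinking before answering” changes the economics of scaling in the way the quote above hints at.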
It’s clear that future AI improvements will need to be “highly data-efficient” because the arms race between rival companies for more investment to fund more computing power to make better models cannot go on forever.
The Atlantic notes an OpenAI announcement of plans to build massive data centers that would “each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.” But last month, TechCrunch reported that the idea of ever-increasing intelligence derived from ever-larger data sets is “showing signs of diminishing returns.” It looks like the limits of that complex relationship between investment and results may force a change of direction.
However, in the short term, if the company’s investors, who have yet to see a profit, don’t want to fund the new generation of data centers, then OpenAI hopes its customers might. It offers a “Pro” subscription at $200 a month, ten times the price of the standard plan.
I wondered whether ChatGPT would respond with a sales pitch when I asked, “Would it be an intelligent move for me to upgrade my current $20 a month subscription to $200 to get ChatGPT Pro?” But no, its answer was well-balanced, concluding very reasonably:
“If your research workflow depends heavily on ChatGPT, and you value speed, reliability, and advanced features, the upgrade could be worthwhile. However, if you’re not fully utilizing your current plan, it might be better to stick with the $20 subscription for now.”
Good advice!