At first I was like, “meh”—then I was like, “hmmm.” My editor just sent me a link to an article titled, “Google tests AI tool that is able to write news articles.” The disturbing part was that she followed it up with “lol.” My pride as a human drove me to think a bit deeper about something: not whether generative AI could write news articles (it already can), but what it would mean for news readers and news/information consumption.

Could it be true? A decade from now, might I be sitting in a squalid urban flat, all alone and penniless, with only a set of VR goggles for company, reading news articles generated by machine-learning LLMs about how Bitcoin and the BSV blockchain will drive the digital economy of the future? I might even be able to use the goggles to have an interactive live discussion with some artificially generated news anchor, who gives me all the totally unbiased updates and analysis I need to stay optimistic.

News writing would seem a perfect match for LLMs and machine learning. It’s very formulaic: the first paragraph contains the “hook” and key points of interest. The second, or “nut graph,” outlines the reasons the article exists. The rest of the article contains supporting details and quotes, with a conclusion in the final graph (which few ever read) to wrap it all up. Even as a human writing news stories, it often feels more like muscle memory at work than actual brain power or creativity (am I giving away too much here?).

The first thing I did was put it to the test, asking ChatGPT: “Can you write me a 600 word news article, in Bloomberg news style, about how artificial intelligence and LLMs will soon be able to write news articles?”

The result, I have to say, wasn’t bad—if a little bland. It took less than 20 seconds for ChatGPT to write it. The grammar was flawless, and it laid out the facts. The only chuckle I got was its repeated references to “Language Model Models (LLMs)” which, honestly, is the one thing I didn’t expect it to get wrong. Ha, my job is safe!
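For the technically curious, here is roughly what the same experiment looks like through OpenAI’s API rather than the chat window. This is a minimal sketch, not what I actually ran; the model name and output handling are illustrative.

```python
# Minimal sketch of the same experiment via OpenAI's Python client.
# Requires the `openai` package and an OPENAI_API_KEY in the environment;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Can you write me a 600 word news article, in Bloomberg news style, "
    "about how artificial intelligence and LLMs will soon be able to "
    "write news articles?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```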

Generative AI isn’t that great—for now

Adding to my sense of false relief are reports that ChatGPT might even be getting worse with age. Testers have noted a dramatic decline in accuracy when GPT-4 is given math problems, visual reasoning tests, and exams. One theory as to why this is happening, discussed widely on social networks, is that programmers at ChatGPT’s creator, OpenAI, may have inadvertently (or even deliberately) stunted its growth by introducing limitations designed in the interests of “AI safety” and the avoidance of answers that might offend.

My own experience with GPT-3.5 has mostly involved eye-rolling, as it generates more lines of text apologizing and prevaricating about why it can’t perform certain tasks than it does useful (or even desired) material.

Of course, there’s no point gloating over mistakes AIs and LLMs make at their current stage of development. Doing that reminds me of people in the late 1990s who said the web would never take off as a mass medium because it was too slow, and streaming a video was impossible. You could also recall the first DARPA Grand Challenge for autonomous vehicles, in 2004. Not one of the entrants traveled more than 12km of the 240km course in the middle of the Mojave Desert. However, one year later, five vehicles completed the course and only one entrant failed to beat 2004’s 12km record.

We shouldn’t assume that because some technology doesn’t work very well right now, it never will. It seems like common sense, but a lot of people continue to make that mistake. OpenAI will iron out any problems with GPT if it decides they’re impeding the project. Maybe it will provide different versions for different clients, depending on the type of content those clients need to produce. In any case, and aside from any debate over whether machine learning and LLMs are forms of “intelligence” or not, it’d be unwise to judge their future performance on current examples. Assume that someday, and someday soon, generative AI will be able to produce convincing news content. Journalists working today will just have to deal with it.

One thing I noticed about ChatGPT’s article was this: if I hadn’t requested it, and therefore known in advance that it was auto-generated, I probably wouldn’t have been able to tell (odd terminology errors notwithstanding). Looking at the daily news pieces in mainstream and independent media, it’s impossible to tell whether they’re machine-written, or machine-written with some human editorial oversight. Head across to articles in the “general/technical information” or “life/health advice” categories and it’s even harder to tell.

Machines writing news for machines to read

With all this in mind, it’s possible to extrapolate and predict that most of the content we read and view in the future will be produced by generative AI. Perhaps much of it is already. There are plenty of examples of online written content that reads like it was auto-generated, or at least written by uninterested and unimaginative humans. There are entire Twitter threads full of responses that sound like they came from LLMs, not real people.

How will people respond to this? Will they continue to consume (and trust) news as they do now? There have been several media reports in recent years on polls investigating levels of public trust in the news media. The existence, and increasing regularity, of these polls suggests a certain panic is setting in. Results have shown a steady decline in the level of trust in mass media news, as well as a deepening gap in trust levels between people with different political leanings. CNN and MSNBC, for example, have trust ratings of less than -30% from Republicans, but +50% from Democrats. Some surveys have shown trust levels across the board to be well below 50%. The most trusted news source in the USA is the Weather Channel, which is the only network sitting above 50% on average.

Introducing generative AI into this mix will probably not affect trust levels; that is, if the public is even able to tell. Viewers/readers will assume the auto-generated content has exactly the same biases as its human producers, and treat it in the same way. We’ve all seen those collage videos of human newsreaders and commentators at various stations all appearing to read from the same script. We follow accounts on Twitter that have human faces and names, but suspect in the backs of our minds that these aren’t “real” people with real opinions and the time to write them—at “worst” they’re AI bots, or maybe just fake accounts run by PR companies, cranking out astroturfed personal content at factory-level efficiency.

One dirty secret of the news media industry today (and over the past few decades) is that the content it produces isn’t there to inform or entertain people in the present; it’s there to be indexed so that future writings can cite it as a reference. Material that has existed for a longer time has more value, for some reason.

Whether it’s biased or fair, accurate or inaccurate, proven or debunked, it will still appear in search results. The same is true in book publishing, especially academic non-fiction: information contained in those books can be cited and referenced by others for years into the future, whether anyone actually reads the book or not.

Once generative AI is producing all the content, and widely assumed to be doing so, the real battleground for news in the future will move to the back end. Content itself will just be window-dressing, while those seeking to mold public opinion and “manufacture consent” will seek to influence the data generative AIs are trained on, hoping to move the needle toward their own ends. In the past, activists would try to flood news and social networks with content favoring their own causes. These days, they work equally hard to remove any content that disfavors them. So, machines will produce content, the main purpose of which will be for future generative AI systems to “read” and produce more content… for a machine audience further down the line.

The conclusion to all this is, yes, news reporters will in time be replaced by generative AI—and so will a large percentage of news consumers. Real-life humans may lose interest in news media altogether, or treat it as a sideshow, or use it to confirm existing biases. Is this happening already, even without the mass intervention of AIs? Possibly. Can we trust news written by machines? About as much as we can trust the human-produced kind. And the most important question for news reporters, which I’ve deliberately placed at the end of the last paragraph, is: will those operating news sites continue to employ humans to produce content? The answer is probably yes, though more likely in an opinion-editorial or analytical commentary role. That’s why this piece is an op-ed, and not a dry report on the latest developments.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch CoinGeek Roundtable with Joshua Henslee: AI, ChatGPT & Blockchain
