
Google to release new AI tools in race against ChatGPT and Microsoft

Google (NASDAQ: GOOGL) is set to release new general-purpose AI tools for “helping people reach their full potential.” Facing stiff competition from OpenAI’s ChatGPT and Microsoft (NASDAQ: MSFT), the company says it will unveil “PaLM 2,” a major update to its PaLM large language model (LLM), as well as new features for Search, its Bard personal collaboration tool, and its image recognition software, Google Lens.

With artificial intelligence (AI) now one of IT’s hottest buzzwords, tech giants are racing to enhance their online tools with bleeding-edge advancements. Already, users have noticed AI-generated answers included in search results and new interactive chat features added to social media networks.

A medical/healthcare-focused version of PaLM also exists, and its latest iteration is named “Med-PaLM 2.” Google claims it was the first LLM to achieve an “expert” test-taker level on the MedQA dataset of U.S. Medical Licensing Examination-style questions, scoring 85%.

Bard is a generative AI chatbot (i.e., one that can produce intelligible text and images) based on the LaMDA LLM family. It’s the result of a Google “code red” response to the popularity of ChatGPT, which has captured most of the mainstream media attention given to conversational AI services since its launch in November 2022. Bard is still not available for general public use; those interested must sign up for a waiting list to gain early access.

AI capabilities will also begin trickling into other Google services like Gmail, Meet, and Workspace applications such as Docs, Slides, and Sheets. Google says these features are still in a testing phase and will be limited to a “small number of users” for the time being.

Google has also created a set of “AI principles” to ensure its services are useful and “safe” for everyone to use. It has made a list of seven key rules: “Be socially beneficial; Avoid creating or reinforcing unfair bias; Be built and tested for safety; Be accountable to people; Incorporate privacy design principles; Uphold high standards of scientific excellence; and Be made available for uses that accord with these principles.” The company has pledged not to pursue AI technologies likely to “cause overall harm,” produce weapons or surveillance tools, or break laws. Bard itself is restricted to users over 18 years old.

What is a ‘large language model’?

Commonly referred to as “LLMs,” large language models are neural networks built for general-purpose use, or at least a wide range of use cases. As the name suggests, they’re trained on extremely large samples of text, using tokenization to map pieces of text to integers that the model can process when determining the most likely responses. The datasets they use would be far too large for any human to read and digest in a lifetime.
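
To make the tokenization idea concrete, here is a minimal Python sketch. The whitespace splitting and tiny vocabulary are simplifying assumptions for illustration only; production LLMs use subword schemes such as byte-pair encoding.

# Toy tokenizer: map each word to an integer ID and back.
# Real LLMs tokenize into subwords, but the principle is the same:
# text in, sequence of integers out.

text = "large language models map text to integers"

# Build a vocabulary from the words in the sample (illustrative only).
vocab = {word: idx for idx, word in enumerate(sorted(set(text.split())))}

def encode(s):
    """Convert a string into a list of integer token IDs."""
    return [vocab[word] for word in s.split()]

def decode(ids):
    """Convert a list of token IDs back into a string."""
    inverse = {idx: word for word, idx in vocab.items()}
    return " ".join(inverse[i] for i in ids)

ids = encode(text)
print(ids)          # [2, 1, 4, 3, 5, 6, 0]
print(decode(ids))  # round-trips back to the original text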

Given the need to “train” LLMs on billions of samples, and the computing costs involved (in both money and resources), building them is an extremely expensive task. As such, it has become the domain of the biggest technology companies.

Advancements in LLM abilities have come faster than predicted. This is unusual in the technology industry and is responsible for much of the media hype surrounding AI in 2022-23. Some LLMs have demonstrated an ability to “learn new tasks without training” and to build their own “mental models” of the worlds they’re describing. These abilities are far from perfect, and although Google and OpenAI regularly stress the developmental nature of LLMs and AI, mistakes, perceived biases, and outright “hallucinations” made by chatbots often receive as much attention as the successes. ChatGPT will often return different responses to the same prompt, and at times it has demonstrated a tendency to “flatter” its human users by reflecting their own opinions back at them. Users have also tried to “jailbreak” LLMs by asking them to act out of character or to disregard parts of their training data.
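
One reason for the varying responses is that LLMs typically sample each next token from a probability distribution rather than always choosing the single most likely candidate. The minimal Python sketch below shows temperature-based sampling; the candidate tokens and their scores are invented purely for illustration.

import math
import random

# Hypothetical model scores (logits) for three candidate next tokens.
logits = {"yes": 2.0, "no": 1.5, "maybe": 1.0}

def sample_next_token(logits, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution."""
    scaled = {t: math.exp(v / temperature) for t, v in logits.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # floating-point edge case fallback

# The same "prompt" run five times can produce different answers.
print([sample_next_token(logits) for _ in range(5)])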

Calls for these technologies to be fine-tuned for “safety” have come from governments and society at large. Past attempts to release conversational tools to the general public have seen many testers immediately work to generate the most sensational responses—in the current era, that usually means responses considered offensive or racist, violent, overly anthropomorphic, or even misanthropic. However, efforts by image-conscious corporations to fine-tune chat responses have led to accusations of socio-political bias in favor of “approved narratives,” and to a creeping suspicion that AI training data and responses are being “tweaked” to give answers considered acceptable to the interests funding development.

Definitions of what is “acceptable” are often subjective by nature, and the development processes for training data and responses have been opaque, largely due to confidentiality and competition between private companies—and whatever government connections they may have.

AI and blockchain

AI researchers, including Konstantinos Sgantzos, Ian Grigg, and Mohamed Al Hemairy, have advocated using a blockchain with large data-processing capabilities, such as BSV, to train LLMs (which are designed to facilitate natural-sounding conversations), to store elements for training more advanced machine learning processes, or even to form the basis of an artificial general intelligence (AGI). True AGIs remain a hypothetical concept for now; they would match, or even surpass, the cognitive abilities of humans and/or animals.

“The resemblance of the human brain wiring with the current topology of the Bitcoin Network graph is remarkable,” they wrote in a 2022 paper. As well as using the blockchain as a reliable and permanent storage platform for training data, developers could use the Bitcoin Script language to build “perceptrons” (computable representations of neurons) as stateful contracts to verify information. Xiaohui Liu and his team at sCrypt have already provided basic demonstrations of this ability by using the BSV blockchain to run the “Game of Life” cellular automaton.
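
For intuition, a perceptron simply computes a weighted sum of its inputs and applies a threshold. The Python sketch below illustrates the concept; the weights and bias are arbitrary examples, and an on-chain version would express the same arithmetic in Bitcoin Script as a stateful contract rather than in Python.

def perceptron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs (plus bias) is positive."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: with these weights, the perceptron computes logical AND.
weights = [1.0, 1.0]
bias = -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights, bias))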

Most blockchains have only a limited capacity to scale and would be incapable of processing the contract transactions or handling the extremely large amounts of data these tasks require. BSV’s ability to scale is unbounded, and as it scales, it could become a key tool for developing AI at more accessible cost levels.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

CoinGeek Weekly Livestream: The future of AI Generated Art on Aym


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.