Consumer electronics giant Samsung (NASDAQ: SSNLF) has banned staff from using artificial intelligence (AI) tools like ChatGPT, Google Bard, and Bing after an employee uploaded sensitive code to ChatGPT, an incident the firm said could cause a data leak.

Samsung issued an internal memorandum barring employees from using AI platforms, expressing concern that the platforms store information on external servers from which it is difficult to retrieve or delete. The security concern arose after internal source code was uploaded to ChatGPT in April, which the company says could fall into the wrong hands.

The internal document bars staff from using generative AI on company devices but allows its use on personal devices. However, employees are expected to take special care not to upload company-related information to AI platforms or risk dismissal.

“We ask that you diligently adhere to our security guidelines, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo read, according to a Bloomberg report.

Samsung noted that the ban is temporary and that it is exploring security measures that would allow employees to use generative AI tools to improve efficiency. The company reiterated: “until these measures are prepared, we are temporarily restricting the use of generative AI.”

Samsung’s decision sees it join the ranks of several companies that have restricted the use of generative AI. Goldman Sachs (NASDAQ: GS), Citi (NASDAQ: C), and Wells Fargo (NASDAQ: WFC) have all imposed limits on employee usage of AI platforms over concerns of leaking sensitive financial data.

Despite the ban, Samsung is keen on developing its AI offerings for consumers via its semiconductor products. The tech firm noted that its AI products would form the cornerstone of future innovation in automobiles and entertainment, personalizing content recommendations and making commuting safer.

AI has regulators on their toes

The rapid adoption of AI tools worldwide has stoked worry among regulators, given the tools’ potential to breach privacy laws. To promote the safe usage of AI, the U.K. government pledged $124.8 million to create a Foundation Model Task Force to regulate the nascent technology.

Financial regulators in Texas, Montana, and Alabama are urging residents to be wary of scams using AI to lure in investors. Since December, several projects riding the popularity of OpenAI’s ChatGPT have stolen millions of dollars in digital assets from investors.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Blockchain can bring accountability to AI
