
Samsung staff restricted from using AI tools over security concerns

Consumer electronics giant Samsung (OTC: SSNLF) has banned staff from using artificial intelligence (AI) tools such as ChatGPT, Google Bard, and Bing after an employee uploaded sensitive code to ChatGPT, an incident the firm said could lead to a data leak.

Samsung issued an internal memorandum barring employees from using the AI platforms, expressing concern that they store information on external servers where it is difficult to retrieve or delete. The worry stems from an April incident in which internal source code was uploaded to ChatGPT, code the company says could fall into the wrong hands.

The internal document bars staff from using generative AI on company devices but allows the tools on personal devices. Even there, employees are expected to take special care not to upload company-related information to AI platforms or risk dismissal.

“We ask that you diligently adhere to our security guidelines, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo read, according to a Bloomberg report.

Samsung noted that the ban is temporary and said it is exploring security measures that would let employees use generative AI tools to improve efficiency. The company added, “until these measures are prepared, we are temporarily restricting the use of generative AI.”

Samsung’s decision sees it join the ranks of several companies that have curbed the use of generative AI. Goldman Sachs (NYSE: GS), Citi (NYSE: C), and Wells Fargo (NYSE: WFC) have all limited employee use of AI platforms over concerns about leaking sensitive financial data.

Despite the ban, Samsung remains keen on developing AI offerings for consumers through its semiconductor products. The tech firm has said its AI products will form the cornerstone of future innovation in automobiles and entertainment, personalizing content recommendations and making commuting safer.

AI has regulators on their toes

The rapid adoption of AI tools worldwide has regulators worried, given the technology’s propensity to breach privacy laws. To promote the safe use of AI, the U.K. government has pledged $124.8 million to create a Foundation Model Taskforce to oversee the nascent technology.

Financial regulators in Texas, Montana, and Alabama are urging residents to be wary of scams that use AI to lure investors. Since December, several fraudulent projects riding on the popularity of OpenAI’s ChatGPT have fleeced investors of millions of dollars in digital assets.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of that data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Blockchain can bring accountability to AI


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.