
Meta shuts down its responsible AI team following internal restructuring


Meta (NASDAQ: META), the parent company of Facebook and Instagram, has dissolved the team spearheading the safe development of its artificial intelligence (AI) products following an internal restructuring.

Meta drew the curtain on its Responsible AI (RAI) team in mid-November in a move to bolster its efforts in other AI verticals, according to a report by The Information. Going forward, members of the defunct RAI team will fill roles in Meta’s generative AI and infrastructure arms as the company eyes a leading role in AI innovation.

The RAI team, described as a cross-disciplinary team of experts from several fields, including civil rights, engineering, AI research, and policy, had the primary task of ensuring AI safety for consumers.

Meta’s dissolution of its RAI team has raised eyebrows among industry stakeholders, given the ongoing conversations around AI safety in recent months. The company has vowed to promote responsible AI development guided by five pillars: privacy and security, fairness and inclusion, robustness and safety, transparency and control, and accountability and governance.

“Our responsible AI efforts are propelled by our mission to help ensure that AI at Meta benefits people and society,” said Meta. “Through regular collaboration with subject matter experts, policy stakeholders and people with lived experiences, we’re continuously building and testing approaches to help ensure our machine learning (ML) systems are designed and used responsibly.”

With the internal reshuffle, it remains unclear how Meta hopes to achieve its targets for responsible AI development. Earlier in the year, Meta pruned its RAI team by half, leaving the unit “a shell of a team” as it continued to grapple with red tape and bureaucracy.

AI systems at Meta have since come under fire following a study revealing that Instagram’s algorithms assist bad actors in accessing abusive material, along with reports of biased image generation in WhatsApp stickers and faulty translations on Facebook.

Meta, in collaboration with OpenAI and Google (NASDAQ: GOOGL), has previously pledged to maintain voluntary AI safeguards in line with the principles mooted by the Biden administration, as AI-powered misinformation and digital currency scams reach an all-time high.

Meta wants to close the gap

Despite being a late entrant into the generative AI space, Meta is confident that it can close the gap with industry leaders like Google and OpenAI. With the ranks of its generative AI team bolstered by members of the RAI unit, analysts are predicting an avalanche of new AI products.

Meta recently launched a suite of generative AI products for users, including Emu Video and Emu Edit, building on the successes of its Llama 2 model. The company’s use of open-source AI models has earned it plaudits from researchers as it racks up key partnerships less than a year after dipping its toes into the fray.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI & blockchain will be extremely important—here’s why


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.