Meta has reportedly disbanded its Responsible AI (RAI) team as it devotes more resources to generative artificial intelligence. The Information reported the news today, citing an internal post it had seen.
According to the report, most of RAI’s members will join the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly states that it wants to develop AI responsibly and even has a page dedicated to the promise, where it lists its “pillars of responsible AI,” including accountability, transparency, security, privacy, and more.
The report quotes Jon Carvill, a Meta spokesperson, as saying the company “will continue to prioritize and invest in the development of safe and responsible AI.” He added that even as the company splits the team up, its members “will continue to support relevant cross-Meta efforts on responsible AI development and use.”
Meta did not respond to a request for comment at the time of publication.
The team had already gone through a restructuring earlier this year, which Business Insider reported included layoffs that left RAI “a shell of a team.” That report said the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy negotiations with stakeholders before they could be implemented.
RAI was created to identify problems with Meta’s AI training approaches, including whether the company’s models are trained with sufficiently diverse information, with the aim of preventing issues such as moderation failures on its platforms. Automated systems on Meta’s social platforms have led to problems including a translation error on Facebook that caused a false arrest, WhatsApp AI sticker generation that produces skewed images when given certain prompts, and Instagram algorithms that helped people find child sexual abuse material.