Facebook and Instagram giant Meta said on Friday it would start labeling AI-generated media from May, as it tries to reassure users and governments about the risks of deepfakes.
The social media juggernaut added that it will no longer remove manipulated images and audio that do not otherwise violate its rules, relying instead on labeling and contextualization, so as not to infringe on free speech. The changes come in response to criticism from the tech giant's Oversight Board, which independently reviews Meta's content control decisions. In February the board requested that Meta urgently revise its approach to manipulated media due to huge advances in AI and the ease with which media can be manipulated into highly believable deepfakes.
The board's warning comes amid fears of widespread misuse of AI-powered applications for disinformation during a crucial election year, not only in the US but globally.

Meta's new "Made with AI" labels will identify content created or modified with AI, including video, audio and images. More prominent labels will be applied to material considered at high risk of misleading the public.

"We agree that providing transparency and additional context is now the better way to address this content," Monika Bickert, Meta's vice president of content policy, said in a blog post. "The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling," she added.

The new labeling approach is linked to an agreement struck in February among major tech and AI players to crack down on manipulated content intended to deceive voters. Meta, Google and OpenAI have already agreed to use a common watermarking standard that will tag images generated by their AI applications.
Biden deepfakes
Meta said the rollout will happen in two phases: labeling of AI-generated content begins in May 2024, while removal of manipulated media based solely on the old policy will stop in July. Under the new standard, content manipulated with AI will remain on the platform unless it violates other community standards, such as those against hate speech or voter interference. Recent examples of AI deepfakes have heightened concerns about how easily accessible the technology has become. The board's list of recommendations grew out of its review of Meta's decision last year to leave online a manipulated video of US President Joe Biden.
The video showed Biden voting with his adult granddaughter but was manipulated to falsely appear to show him inappropriately touching her chest. In a separate incident, a robocall impersonating Biden urged thousands of New Hampshire voters not to cast ballots in the state's primary. In Pakistan, the party of former Prime Minister Imran Khan used AI to generate speeches by its jailed leader.