While advancing technology has brought great progress in the field of artificial intelligence, it has also created new problems. In recent years, deepfake content has become more widespread and more realistic, turning into a serious issue, especially on social media platforms. Meta’s Oversight Board, which closely monitors this situation, has recently identified and responded to two significant cases.

Deepfake Content Detected on Instagram and Facebook

Meta Fails to Combat Deepfake Content

The two cases identified by the Oversight Board occurred on two different platforms: one on Instagram and one on Facebook. The image shared on Instagram resembled a well-known woman in India, and the account that posted it was found to share only AI-generated images of Indian women. The content was reported twice but remained live due to a lack of timely intervention. It was eventually removed for violating the “Community Standard on Bullying and Harassment”.

The second case involved a deepfake image shared on Facebook showing a man groping a naked woman; the woman depicted resembled a well-known public figure in the United States. This content was also removed for violating the “Bullying and Harassment” policy, and the appeal filed by the user who shared it was rejected.

Meta’s Principles and Enforcement Practices to be Assessed

Meta’s Oversight Board will use these two cases to evaluate how effective Meta’s policies and enforcement practices have been in addressing explicit AI-generated images, and will then submit its assessment to Meta. However, the Board’s recommendations will be advisory rather than binding.

How platforms deal with the problems created by artificial intelligence technology, and what measures they adopt, will shape the technology world’s agenda in the coming period.
