The rise of artificial intelligence has opened up many new possibilities, but it has also introduced new problems. Language models and transcription tools, particularly OpenAI's Whisper, can sometimes present fabricated information as if it were real, a serious issue for academic sources and transcriptions alike. Irrelevant or invented content added while Whisper converts speech to text carries significant risks, ranging from fabricated racist remarks to misleading medical information.
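To make the failure mode concrete, here is a minimal sketch of a typical Whisper speech-to-text call using the open-source whisper Python package; the model name and the audio file "meeting.wav" are illustrative assumptions, not details from this article. Hallucinations of the kind described here would appear as fluent text in the output that has no counterpart in the recording.

    # Minimal sketch: transcribing audio with the open-source Whisper package.
    # Assumes `pip install openai-whisper` and a local file "meeting.wav";
    # both are illustrative assumptions, not taken from the article.
    import whisper

    model = whisper.load_model("base")        # load a small multilingual model
    result = model.transcribe("meeting.wav")  # run speech-to-text

    # Hallucinated passages would show up here as sentences absent from the audio,
    # which is why transcripts need review before use in sensitive settings.
    print(result["text"])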

The Hallucination Problem in Transcription Services

A study from the University of Michigan revealed that a large share of the transcripts produced by these services contain such ‘hallucinations’: 80% of the texts the researchers reviewed were affected. A separate, longer-term analysis by a machine learning engineer found similar problems in about half of the transcripts examined.

OpenAI has acknowledged these problems and says it is working to improve the accuracy of its models. The company's policies also state that Whisper should not be used in high-risk decision-making processes, and it has thanked researchers for raising these issues. How AI systems can overcome hallucinations, and how the technology can be integrated more safely, will remain among the most closely watched questions in the period ahead.
