Asia Tech Wire (Oct 29) -- Whisper, OpenAI's AI-powered transcription tool, has been found to have a serious flaw: it fabricates chunks of text, and sometimes entire sentences, out of thin air.
When a machine learning engineer initially analyzed more than 100 hours of Whisper transcriptions, he found hallucinations in roughly half of them, meaning the transcribed text included passages that did not correspond to anything in the original speech.
Despite these concerns, more than 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children's Hospital Los Angeles, have begun using a Whisper-based transcription tool built by the French startup Nabla.