OpenAI's latest reasoning models exhibit a higher rate of "hallucinations," generating inaccurate or fabricated information more often than their predecessors. Researchers are investigating the underlying causes of this phenomenon and exploring potential mitigations to improve the reliability of model outputs. The findings raise concerns about deploying these models in critical applications without stringent oversight.