1 min read | Saved February 14, 2026
Do you care about this?
The article critiques OpenAI for limiting web search in ChatGPT, making it harder for users to get accurate, up-to-date answers. Even though OpenAI already spends heavily on LLM inference, the interface hinders easy search activation and ignores personalization settings. This raises questions about how web search is implemented and costed within the product.
If you do, here's more
OpenAI's reluctance to fully integrate web search into ChatGPT raises significant questions about performance and user experience. The article points out that despite the high cost of running large language model (LLM) inference, OpenAI still restricts access to web search. Without user intervention, many ChatGPT responses are likely inaccurate, yet the interface offers no straightforward way to make web search the default: users cannot easily trigger a search via keyboard shortcuts or commands, and personalization settings that ask it to prioritize web searches are often ignored.
The interface complicates the process further. Accessing web search requires multiple clicks, and OpenAI runs A/B tests that seem designed to reduce how often searches happen. This raises concerns about how OpenAI weighs the cost of search functionality against the need for accurate information retrieval. The author highlights a specific example: ChatGPT's failure to recognize common products, like the iPhone Air, suggests a significant gap in the model's knowledge base. This inconsistency is puzzling given the company's willingness to spend heavily on marketing and user acquisition while neglecting fundamental capabilities in its main product.