1 min read | Saved February 14, 2026
Do you care about this?
The article examines a prompt that exposes the difficulty LLMs have with simple tasks, particularly counting and spatial reasoning. The author presents a prompt asking for a specific arrangement of stars — one that humans solve easily but LLMs consistently get wrong.
If you do, here's more
The article highlights a persistent limitation of large language models (LLMs), contrasting it with their rapid recent advancement. The author notes that while LLMs have improved significantly over the past 2.5 years, there remain simple, objective tasks where they struggle. The example here is a prompt asking for a specific arrangement of stars, which combines counting with spatial awareness — two areas where LLMs typically falter.
The author tested the prompt against several models, including ChatGPT, Grok, Gemini, and Nano Banana Pro. The results were underwhelming. ChatGPT produced no correct images; Grok generated two images that were visually appealing but wrong. Gemini and Nano Banana Pro also missed the mark, despite being regarded as leaders in image generation. The difficulty stems from the prompt's niche requirements, which blend basic mathematical concepts with graph theory, making it hard for LLMs to interpret correctly.
The core of the issue lies in how LLMs process information. The author argues that counting and spatial awareness are particularly challenging for these models, especially when combined with less common mathematical principles. The example underscores a broader pattern in AI: models handle many tasks well, yet certain seemingly simple prompts still stump them because of the hidden complexity of what they ask for.