7 min read | Saved February 14, 2026
Do you care about this?
The article compares working with large language models (LLMs) to collaborating with human coworkers, emphasizing that both can misinterpret vague instructions. It discusses the importance of clear communication and proper context when interacting with LLMs, suggesting that many frustrations stem from unrealistic expectations of deterministic behavior. Adapting to this probabilistic nature can lead to more effective outcomes.
If you do, here's more
The article draws a parallel between large language models (LLMs) and human coworkers, highlighting how both can misinterpret vague instructions. Frustration with an LLM that fails to produce the desired result echoes a common workplace experience: communication broke down somewhere. The author points out that while we accept human error as part of collaboration, we expect LLMs to perform with perfect accuracy, which is unrealistic. That expectation stems from a long history of deterministic machines that behave predictably; LLMs are probabilistic and can generate unexpected outcomes from incomplete or ambiguous prompts.
A significant theme is the need for clearer communication and context when working with LLMs. The author shares personal experiences showing that effective interactions require detailed prompts, much as one would clarify goals with a human teammate. Prompts that spell out constraints, trade-offs, and examples produce markedly better outcomes. The article also emphasizes documentation: LLMs cannot infer context from past interactions or rely on shared history, so explicit instructions and context must be supplied each time.
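To make the contrast concrete, here is a minimal sketch (not from the article; the prompts and the stand-in function are illustrative) of a vague prompt versus one that states constraints, a trade-off, and an example:

```python
# Illustrative only: `complete` stands in for whichever LLM client is in use.
def complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your actual client."""
    raise NotImplementedError

# A vague prompt leaves the model to fill the gaps with its own assumptions.
vague = "Write a function that parses dates."

# A detailed prompt states constraints, a trade-off, and a worked example,
# the way you would brief a human teammate.
detailed = (
    "Write a Python function parse_date(s: str) -> datetime.date.\n"
    "Constraints:\n"
    "- Accept ISO 8601 (YYYY-MM-DD) and US format (MM/DD/YYYY).\n"
    "- Raise ValueError on anything else; never guess.\n"
    "Trade-off: prefer readability over micro-optimization.\n"
    "Example: parse_date('2026-02-14') == datetime.date(2026, 2, 14)\n"
)
```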
To manage this, the author advocates maintaining thorough documentation and organizing project rules so that LLMs can easily access them. The relevant files and guidelines should be presented to the model with each task to prevent misunderstandings. The approach mirrors best practices for onboarding new human colleagues: because LLMs have no tribal knowledge, they need more structured information to function effectively. This shift in perspective reduces frustration and makes collaboration with these models more productive.
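A hedged sketch of what that might look like in practice (the file names and helper are hypothetical, not the author's setup): read project rule files from disk and prepend them to every task, since the model retains nothing between calls.

```python
from pathlib import Path

# Hypothetical rule files; the article names no specific files.
RULE_FILES = ["CONVENTIONS.md", "ARCHITECTURE.md"]

def build_prompt(task: str, root: Path = Path(".")) -> str:
    """Prepend every rule file that exists, then the task itself,
    so the model gets its onboarding packet on every call."""
    sections = []
    for name in RULE_FILES:
        path = root / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)
```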