Links
This article clarifies what an async agent truly is, emphasizing that no agent is inherently asynchronous. It outlines the distinction between an agent and its management of tasks, arguing that an "async agent" should refer specifically to one that orchestrates multiple subagents concurrently.
This article explores the concept of a Code-Only agent that uses a single tool—code execution—to perform tasks. By enforcing this limitation, the agent generates executable code for all operations, shifting focus from tool selection to code production, which enhances reliability and clarity in computing tasks.
This article discusses the ease of creating LLM agents using the OpenAI API. It emphasizes hands-on experience with coding agents, explores context management, and critiques the reliance on complex frameworks like MCP.
This article provides guidance on creating effective agents.md files for GitHub Copilot. It draws from an analysis of over 2,500 repositories, highlighting the importance of specificity in defining agent roles, commands, and boundaries to improve functionality.
This article explains how to enhance the effectiveness of AI agents by implementing back pressure, which provides them with automated feedback. By doing so, you can delegate more complex tasks to agents while minimizing the time spent correcting their mistakes. It emphasizes using tools and type systems that improve agent performance and reduce manual oversight.
This article recounts the author's experiences with LLMs and coding agents over the past year. It highlights significant improvements in coding models, the shortcomings of current IDEs, and the author's shift toward programming with agents instead of traditional development environments.
This article examines programming with agents, emphasizing their role in automating tasks and decision-making in software development. It surveys methodologies and frameworks that support agent-based programming, highlighting their advantages for building responsive, adaptive systems.
Armin Ronacher reflects on the challenges of programming with inadequate tools and documentation, arguing that programming agents offer a way to measure code quality and developer experience objectively. He discusses the importance of good test coverage, error reporting, ecosystem stability, and user-friendly tooling, noting that these factors affect agents and human developers alike. By running agents against their codebases, teams can gain valuable insights and improve overall project health.