7 min read | Saved February 14, 2026
Do you care about this?
This article tracks the development of LLM extensions, highlighting significant milestones from ChatGPT Plugins to Claude Code. It discusses how user customization has evolved, focusing on tools like Custom Instructions and Agent Skills that enhance agent capabilities. The author reflects on the future of LLMs and the integration of general-purpose tools.
If you do, here's more
In the last three years, the way people use large language models (LLMs) has changed significantly. Initially, users could only paste text into a chat box and hope for a useful response. Now, LLMs can read codebases and execute commands on a user's behalf, which raises the question of how users should be able to customize that behavior. The article traces the progression from basic prompts to complex tooling, highlighting key developments such as ChatGPT Plugins, Custom Instructions, and Custom GPTs.
ChatGPT Plugins, introduced in March 2023, aimed to connect LLMs to external APIs, but early models struggled to use them reliably. This led to a simpler solution in July 2023: Custom Instructions, which let users set persistent context without the overhead of plugins. In November 2023, OpenAI launched Custom GPTs, which bundled instructions and tools into shareable packages, though they represented a shift toward more curated, less flexible applications. The introduction of memory features in February 2024 marked a move toward automatic personalization, with the model remembering user preferences across conversations.
The article also covers advancements like Cursor's .cursorrules file in April 2024, which let developers check custom instructions directly into a code repository. By late 2024, Anthropic's Model Context Protocol (MCP) emerged, offering a more robust system for tool integration, albeit with added complexity. The launch of Claude Code in early 2025 consolidated various extension methods, including markdown-based Agent Skills, which let tools and instructions be loaded on demand without overwhelming the model's context. These developments demonstrate a clear trajectory toward making LLMs more adaptable and user-friendly, although they also introduce new layers of complexity.
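As a rough illustration of the Agent Skills format described above: a skill is a markdown file (SKILL.md) whose YAML frontmatter gives the model a short name and description, with the full instructions in the body loaded only when the skill is relevant. The skill name and the instructions below are invented for illustration; only the frontmatter fields reflect the documented format.

```markdown
---
name: changelog-writer
description: Drafts changelog entries from recent commits. Use when the
  user asks to write or update a changelog.
---

# Changelog Writer

When asked to draft a changelog entry:

1. List commits since the last release tag (e.g. `git log --oneline v1.2.0..HEAD`).
2. Group them under Added / Changed / Fixed headings.
3. Write one past-tense line per change, user-facing wording only.
```

The two-level structure is what keeps context small: the agent scans only the frontmatter of every installed skill, and reads the body into context just when the description matches the task at hand.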