Judging the effectiveness of a new AI model can take months, because initial impressions often misrepresent its capabilities. Formal evaluations are unreliable, and hands-on use yields only subjective impressions, making it hard to tell whether AI progress is stagnating or advancing.
Building AI products means adapting to rapid changes in model capabilities rather than relying on clever engineering workarounds that newer models may soon render obsolete. Key lessons include validating features with users early, recognizing when to abandon a failing project, and continually reassessing how new models can improve the product. Embracing failure and pivoting quickly are essential in this fast-moving landscape.