7 min read | Saved February 14, 2026
Do you care about this?
This article argues that human involvement often detracts from AI performance, especially in analytical tasks. While creative fields still benefit from human-AI collaboration, the author suggests that as AI improves, humans should limit their interference and focus on strategic decision-making instead.
If you do, here's more
The article argues that the traditional practice of keeping humans "in the loop" of AI tasks often worsens outcomes: in many analytical scenarios, human interference degrades the performance of AI systems. The author points out that while humans and AI can create value together in creative fields, this collaboration may become less relevant as AI capabilities advance. Instead, humans should focus on decision-making and strategic management, letting AI operate independently to reduce human-introduced bias and error.
A key example cited is Google's NotebookLM, which can generate videos without human input. The author concedes that while his own AI-assisted videos are currently superior to fully automated ones, this may change as AI capabilities improve. He draws a distinction between human agency (the decision to initiate a task) and human judgment (evaluating methods while the task is under way). As AI systems evolve, the value of human judgment declines, while agency remains important for choosing which projects to pursue.
The piece also explores the chess domain, where AI has surpassed human ability. In 1997, IBM's Deep Blue defeated Garry Kasparov, marking a turning point. Initially, a combination of human and AI players performed well, but this synergy was short-lived. Now, studies show that human Grandmasters can hinder performance by deviating from AI suggestions, akin to ignoring a GPS's optimal route. The author notes that AI's Elo ratings have reached around 3,550, far exceeding top human players like Magnus Carlsen, who are around 2,850. This trend signals a shift in many areas where AI alone can outperform human-AI teams, often due to human errors and biases.
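The practical meaning of that roughly 700-point Elo gap can be made concrete with the standard Elo expected-score formula (the formula is not from the article itself; the ratings plugged in are the ones the article quotes). A minimal sketch in Python:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B:
    E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Ratings cited in the article: a top engine (~3,550) vs. Magnus Carlsen (~2,850).
engine_vs_carlsen = expected_score(3550, 2850)
print(f"{engine_vs_carlsen:.3f}")  # ≈ 0.983
```

Under the Elo model, a 700-point gap implies the engine would be expected to take about 98 points out of every 100 games against the top-rated human, which is why even a Grandmaster's occasional deviation from the engine's line tends to cost rather than gain.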