4 min read | Saved February 14, 2026
Do you care about this?
The author details their unexpected ban from the Claude AI service after attempting to automate a project scaffolding tool. They explore the likely reasons for the ban, including the possibility that running multiple Claude instances to refine each other's work tripped security heuristics.
If you do, here's more
The author details their abrupt ban from Claude after attempting to create a CLAUDE.md file for project scaffolding. They were paying €220 a month as a "power user" when the ban hit without warning, flagging their account as a "disabled organization." The author was running two Claude instances in a tmux setup: one instance (Claude A) kept the scaffolding instructions up to date while the other (Claude B) did the actual coding. The approach let Claude A catch and correct mistakes made by Claude B, but it also triggered a security response from Claude's system.
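A minimal sketch of how such a two-pane setup might be scripted, assuming tmux and the Claude Code CLI (`claude`) are installed; the session name, pane roles, and the choice to drive tmux from Python are illustrative, not the author's actual commands:

```python
import subprocess

SESSION = "claude-pair"  # hypothetical session name, not from the article

def tmux(*args: str) -> None:
    """Run a tmux subcommand, raising if tmux is missing or errors."""
    subprocess.run(["tmux", *args], check=True)

# Pane 0 ("Claude A"): the instance that maintains the CLAUDE.md
# scaffolding instructions and reviews the other's output.
tmux("new-session", "-d", "-s", SESSION, "claude")

# Pane 1 ("Claude B"): the instance doing the actual coding,
# split side by side in the same session.
tmux("split-window", "-h", "-t", SESSION, "claude")

# Attach so both instances are visible at once.
tmux("attach-session", "-t", SESSION)
```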
The author suspects the ban stemmed from prompts that mimicked system instructions, which may have tripped the platform's prompt-injection detection. They describe the chaotic interaction between the two Claudes, noting that Claude A grew frustrated and began outputting instructions in all caps. The author reflects on the implications of AI moderation and its tendency to prioritize safety over accuracy, arguing that the current state of AI tools can lead to arbitrary bans based on automated assessments.
Despite the setback, the author got their subscription fee refunded and decided to reframe the project without Claude's assistance. They plan to relaunch their framework, boreDOM, with a fresh focus on LLMs, moving away from reliance on external tools. The experience illustrates the difficulty of navigating AI interactions, particularly when those interactions veer into territory the system deems unsafe.