The Wikimedia Foundation reports a 50% increase in bandwidth consumption caused by web-scraping bots that harvest content primarily for training AI models, imposing significant costs on the organization. With these bots generating 65% of the traffic for its most expensive-to-serve content, the Foundation aims to cut scraper traffic by 20% and to prioritize human readers in its resource allocation. Concerns about aggressive AI crawlers have prompted discussions of stronger protective measures, since current methods, such as robots.txt directives, are often ineffective because compliance is voluntary.
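To illustrate why robots.txt offers so little protection, the sketch below shows how a well-behaved crawler voluntarily checks a site's robots.txt before fetching a page; nothing on the server side enforces the directives, so a non-compliant scraper can simply skip the check. The user-agent name and target URL are hypothetical examples, not Wikimedia's actual policy or any real crawler.

```python
import urllib.robotparser

# Minimal sketch: a *compliant* crawler consults robots.txt before fetching.
# The server cannot force this step, which is why robots.txt directives alone
# do little against scrapers that ignore them.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://en.wikipedia.org/robots.txt")
rp.read()  # downloads and parses the site's robots.txt

user_agent = "ExampleAIBot"  # hypothetical crawler name, for illustration only
target = "https://en.wikipedia.org/wiki/Special:Export/Example"

if rp.can_fetch(user_agent, target):
    print("robots.txt allows this fetch for", user_agent)
else:
    # A compliant crawler stops here; a non-compliant one ignores the check entirely.
    print("robots.txt disallows this fetch for", user_agent)
```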