I've been waiting over a month for Anthropic support to respond

This article was generated by AI based on the sources linked below. It is part of an automated research project by Sinan Koparan. Please verify claims against the original sources. Read our editorial standards.

A subscriber to Anthropic’s Claude Max plan has reported waiting over a month for a human response from customer support regarding approximately $180 in unexpected “Extra Usage” charges. The issue, which surfaced in early March 2026, highlights potential challenges in automated customer service, particularly when AI-driven systems are the primary interface for resolving complex billing discrepancies.

Unexplained Charges and Widespread Reports

The user, identified in the source as the author of the blog “Nick’s Thoughts”, noticed the unexpected charges in early March. As a Claude Max subscriber, they received 16 separate “Extra Usage” invoices, each ranging from $10 to $13, in quick succession between March 3 and March 5, totaling approximately $180. The user stated they were not actively using Claude during this period, having been away from their laptop while sailing in San Diego.

Further investigation by the user revealed discrepancies in their account data. Their usage dashboard reportedly showed session usage at 100% despite no activity. The Claude Code session history, which tracks usage, showed only two minimal sessions on March 5, totaling under 7KB, with no sessions recorded on March 3 or March 4. These minimal sessions could not account for the substantial “Extra Usage” charges.

The problem appears to extend beyond a single incident. The user noted that other Claude Max plan subscribers have reported similar issues. Evidence includes “numerous GitHub issues”, specifically citing claude-code#29289 and claude-code#24727, and posts on r/ClaudeCode, a community forum. These reports describe the same pattern: incorrect usage meters and erroneously accumulated “Extra Usage” charges.

The AI-Only Support Wall

On March 7, the user sent a detailed email to Anthropic support with evidence of the billing discrepancy. Within two minutes, they received an initial response, not from a human, but from “Fin AI Agent”, Anthropic’s automated support agent. The agent instructed the user to use an in-app refund request flow. That flow, however, is designed for subscription cancellations and does not apply to “Extra Usage” charges, making the advice unhelpful for this particular billing issue. Beyond seeking a refund, the user also wanted to understand the root cause of the unexpected charges directly from a human.

In response to a follow-up request to speak with a human, the user received an automated reply stating, “A member of our team will be with you as soon as we can,” and advised visiting the Help Center and API documentation for self-service troubleshooting. Despite subsequent follow-up emails sent on March 17, March 25, and again on April 8, the user reported receiving no further communication or assistance from a human support representative. As of April 8, over a month had passed since the initial outreach to Anthropic support without the issue being resolved or addressed by a human.

Implications for AI Customer Service

The incident highlights a growing tension in the AI industry regarding customer support models. Anthropic, a leading AI company known for developing capable AI assistants like Claude, relies on an AI-powered support system that, in this instance, failed to provide adequate resolution for a complex customer issue. The user explicitly stated they “don’t have a problem with AI-assisted support,” but do have a significant concern with “AI-only support that serves as a wall between customers and anyone who can actually resolve their issue.”

This situation raises questions about the limitations of fully automated support in scenarios that require nuanced understanding, investigation, or exceptions to standard procedures. For AI companies, customer trust is paramount, and the inability to effectively resolve billing errors or provide human interaction when automated systems fall short could erode that trust. As AI services become more integrated into daily life and business operations, the quality and accessibility of human support for issues beyond the AI’s programmed capabilities will likely become a critical differentiator and a benchmark for customer satisfaction.

What to Watch

The industry will be observing how Anthropic addresses these widespread billing issues and whether it modifies its customer support protocols to ensure human intervention for complex or unresolvable AI-generated problems. The balance between efficient AI-powered support and accessible human assistance remains a key challenge for AI companies.

Frequently Asked Questions

What was the user's primary billing issue with Anthropic?

The user, a Claude Max subscriber, incurred approximately $180 in unexpected "Extra Usage" charges between March 3-5, 2026, consisting of 16 separate invoices ranging from $10-$13 each, despite not using Claude during that period.

How did Anthropic's AI support agent respond to the user's request?

The "Fin AI Agent" initially directed the user to an in-app refund request flow, which was unsuitable because that flow is designed for subscription cancellations, not "Extra Usage" charges.

How long did the user wait for a human response from Anthropic support?

As of April 8, 2026, the user had waited over a month for a human response from Anthropic support, following their initial contact on March 7 and multiple follow-up emails.
