Google Permanently Bans AI Pro Users for Accessing Gemini via OpenClaw, Continues Charging $250/Month
A Hacker News post garnering 140 points and 107 comments details how Google terminated Google AI Pro and Ultra accounts without warning after users accessed Gemini through OpenClaw, a third-party client. The incident surfaces deeper issues around prompt caching, subscription economics, and how AI providers enforce terms of service.
A post on Hacker News on February 23, 2026 drew 140 points and 107 comments after a user reported that Google had permanently terminated their Google AI Ultra account — without warning, and without stopping the $250/month billing — following use of Gemini through OpenClaw, a third-party client.
What Google Support Said
According to the affected user, Google’s support response was unambiguous: using OpenClaw to access Google’s Antigravity servers for non-Antigravity products violates the terms of service. The policy, the support agent stated, is zero tolerance, and the ban is irreversible.
The specific detail that generated the most discussion on HN was not the ban itself, but the fact that Google continued charging $250 per month after locking the user out of their account.
The Technical Problem: Prompt Cache Destruction
Several HN commenters identified a technical dimension to the third-party client issue that goes beyond terms of service enforcement.
Prompt caching is a mechanism where AI API providers cache the leading portion of a context window (typically a static system prompt) so that subsequent requests can reuse it without reprocessing. When it works correctly, cache hit rates above 90% are achievable, and cached input tokens are typically billed at around a tenth of the base rate, cutting effective input costs several-fold.
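To make the discount concrete, here is a back-of-envelope sketch. The 10%-of-base price for cached reads is an assumption modeled on typical published provider pricing, not a quote of Google's rates, and the $3.00/Mtok base rate is likewise illustrative:

```python
def effective_input_cost(tokens: int, hit_rate: float,
                         base_per_mtok: float = 3.00,
                         cached_per_mtok: float = 0.30) -> float:
    """Blended dollar cost for `tokens` input tokens when a fraction
    `hit_rate` of them is served from the prompt cache.

    Assumes cached reads bill at ~10% of the base input rate, the
    discount commonly advertised for prompt caching (illustrative).
    """
    cached = tokens * hit_rate
    fresh = tokens - cached
    return (fresh * base_per_mtok + cached * cached_per_mtok) / 1_000_000

# At a 90% hit rate the blended rate drops from $3.00 to $0.57 per Mtok;
# a client that breaks caching (hit rate ~0) pays the full base rate.
print(effective_input_cost(1_000_000, 0.9))  # 0.57
print(effective_input_cost(1_000_000, 0.0))  # 3.0
```

The gap between the two printed figures is the cost a cache-hostile client silently imposes on the provider (or, on the API, directly on the user).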
OpenClaw, in at least some configurations, inserts the current timestamp (formatted as hh:mm:ss) at the very beginning of the context window. Because the first bytes of the context therefore change on every request, the cached prefix never matches, driving the hit rate to near zero.
Claude Code’s official developers have stated publicly that the client is “carefully designed to maximize prompt caching.” The design principle is straightforward: static content goes at the top of the context window; dynamic content goes at the bottom. Many third-party clients implement context construction without knowledge of this constraint.
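The principle can be sketched in a few lines. This is a generic illustration of cache-friendly context construction, not OpenClaw's or Claude Code's actual code; the message shape and names are assumptions:

```python
import datetime

# Static content: identical bytes on every request, so it can be cached.
SYSTEM_PROMPT = "You are a coding agent with access to the tools below."
TOOL_DEFS = "[...tool schemas, also static...]"

def build_context(history: list[str], user_input: str) -> list[dict]:
    """Cache-friendly ordering: immutable content first, volatile content last.

    Putting a timestamp (or any per-request value) ahead of the system
    prompt would change the first bytes of the context and force the
    provider to reprocess the entire prefix on every call.
    """
    now = datetime.datetime.now().strftime("%H:%M:%S")
    messages = [{"role": "system", "content": SYSTEM_PROMPT + "\n" + TOOL_DEFS}]
    messages += [{"role": "user", "content": turn} for turn in history]
    # Dynamic metadata rides along in the newest message, after the cached prefix.
    messages.append({"role": "user", "content": f"[time {now}] {user_input}"})
    return messages
```

The same request built with the timestamp prepended to the system prompt would produce a different prefix every second, which is exactly the failure mode commenters attributed to OpenClaw.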
The Economics: Subscriptions Were Not Designed for Agents
The broader issue underlying the ban is a growing mismatch between how AI subscription products are priced and how developer-heavy users actually consume them.
| Usage scenario | Monthly cost |
|---|---|
| Claude Pro / Google AI Ultra subscription | ~$200–250/month |
| Equivalent API usage (heavy developer workload) | $1,600+/month |
Subscription pricing assumes usage patterns typical of a human user interacting with the provider’s own client — browsing conversations, drafting documents, answering questions. AI coding agents like Claude Code or Gemini CLI can consume orders of magnitude more tokens per session than those baseline assumptions.
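A rough, purely illustrative calculation shows how quickly agent traffic outruns subscription pricing. The daily token volumes and the Sonnet-class prices below are assumptions for the sake of arithmetic, not measured figures:

```python
# Agents re-send large contexts many times per session, so input volume
# dominates. Assumed numbers for a heavy daily-driver workload:
input_per_day_mtok = 24.0    # million input tokens per working day
output_per_day_mtok = 0.4    # million output tokens per working day
input_price, output_price = 3.00, 15.00  # $/Mtok, Sonnet-class list prices
working_days = 22

monthly = working_days * (input_per_day_mtok * input_price
                          + output_per_day_mtok * output_price)
print(f"${monthly:,.0f}/month API-equivalent")  # $1,716/month vs a ~$200 subscription
```

Even with aggressive caching discounts applied to the input side, workloads like this sit far above what a flat-rate subscription was priced to absorb.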
The ccusage CLI, which gained renewed attention following this incident, visualizes exactly this gap: it records Claude Code’s actual token consumption and calculates what the equivalent API costs would have been. For heavy users, the numbers frequently exceed subscription pricing by a wide margin.
Takeaways for Developers
Three practical conclusions emerge from this incident.
Use pay-as-you-go API keys for agentic workloads. Subscription products are designed for human-paced, client-mediated usage. Routing agent traffic through subscription accounts puts you in a gray area that providers — Google especially — are willing to terminate over.
Design for prompt cache efficiency. If you are building a client or wrapper that calls AI APIs, avoid placing dynamic content (timestamps, session IDs, request-specific metadata) at the beginning of the context window. Treat the cache prefix as immutable across requests.
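One way to enforce that constraint in a client's test suite is to fingerprint the serialized prefix and assert it is byte-identical across consecutive requests. This is a generic sketch under the assumption of a chat-style message list, not any particular client's code:

```python
import hashlib
import json

def prefix_fingerprint(messages: list[dict], prefix_len: int) -> str:
    """Hash the first `prefix_len` messages of a request.

    If the cacheable prefix is truly static, this value is identical
    for every request in a session; any drift means a cache miss.
    """
    blob = json.dumps(messages[:prefix_len], sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Two consecutive requests sharing a static system prompt:
req_a = [{"role": "system", "content": "static prompt"},
         {"role": "user", "content": "first question"}]
req_b = [{"role": "system", "content": "static prompt"},
         {"role": "user", "content": "second question"}]

# The one-message prefix is stable; the full requests differ.
assert prefix_fingerprint(req_a, 1) == prefix_fingerprint(req_b, 1)
assert prefix_fingerprint(req_a, 2) != prefix_fingerprint(req_b, 2)
```

A client that prepends a timestamp would fail the first assertion immediately, making the regression cheap to catch in CI.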
Google’s zero-tolerance posture is not universal, but it is real. Other providers have handled similar situations with warnings or throttling before termination. Google’s policy, as stated in this case, skips those steps. Whether that approach is applied consistently remains to be seen, but developers should treat it as the baseline assumption.
Third-party clients exist in a structurally ambiguous position relative to AI providers. Unlike web browsers, which access services through documented, public HTTP interfaces, these clients are often routing requests through infrastructure designed for the provider’s own products. Where exactly that line sits — and how strictly providers choose to enforce it — is a question that this incident puts clearly on the table.