Claude Code costs up to $200 a month. Goose does the same thing for free.
The artificial intelligence coding revolution comes with a catch: it’s expensive.
Claude Code, Anthropic’s terminal-based AI agent that can write, debug, and deploy code autonomously, has captured the imagination of software developers worldwide. But its pricing — ranging from $20 to $200 per month depending on usage — has sparked a growing rebellion among the very programmers it aims to serve.
Now, a free alternative is gaining traction. Goose, an open-source AI agent developed by Block (the financial technology company formerly known as Square), offers nearly identical functionality to Claude Code but runs entirely on a user’s local machine. No subscription fees. No cloud dependency. No rate limits that reset every five hours.
“Your data stays with you, period,” said Parth Sareen, a software engineer who demonstrated the tool during a recent livestream. The comment captures the core appeal: Goose gives developers complete control over their AI-powered workflow, including the ability to work offline — even on an airplane.
The project has exploded in popularity. Goose now boasts more than 26,100 stars on GitHub, the code-sharing platform, with 362 contributors and 102 releases since its launch. The latest version, 1.20.1, shipped on January 19, 2026, reflecting a development pace that rivals commercial products.
For developers frustrated by Claude Code’s pricing structure and usage caps, Goose represents something increasingly rare in the AI industry: a genuinely free, no-strings-attached option for serious work.
Anthropic’s new rate limits spark a developer revolt
To understand why Goose matters, you need to understand the Claude Code pricing controversy.
Anthropic, the San Francisco artificial intelligence company founded by former OpenAI executives, offers Claude Code as part of its subscription tiers. The free plan provides no access whatsoever. The Pro plan, at $17 per month with annual billing (or $20 monthly), limits users to just 10 to 40 prompts every five hours — an allowance that serious developers can exhaust within minutes of intensive work.
The Max plans, at $100 and $200 per month, offer more headroom: 50 to 200 prompts and 200 to 800 prompts respectively, plus access to Anthropic’s most powerful model, Claude 4.5 Opus. But even these premium tiers come with restrictions that have inflamed the developer community.
In late July, Anthropic announced new weekly rate limits. Under the system, Pro users receive 40 to 80 hours of Sonnet 4 usage per week. Max users at the $200 tier get 240 to 480 hours of Sonnet 4, plus 24 to 40 hours of Opus 4. Nearly five months later, the frustration has not subsided.
The problem? Those “hours” are not actual hours. They represent token-based limits that vary wildly depending on codebase size, conversation length, and the complexity of the code being processed. Independent analysis suggests the actual per-session limits translate to roughly 44,000 tokens for Pro users and 220,000 tokens for the $200 Max plan.
“It’s confusing and vague,” one developer wrote in a widely shared analysis. “When they say ’24-40 hours of Opus 4,’ that doesn’t really tell you anything useful about what you’re actually getting.”
The backlash on Reddit and developer forums has been fierce. Some users report hitting their daily limits within 30 minutes of intensive coding. Others have canceled their subscriptions entirely, calling the new restrictions “a joke” and “unusable for real work.”
How Block built a free AI coding agent that works offline
Goose takes a radically different approach to the same problem.
Built by Block, the payments company led by Jack Dorsey, Goose is what engineers call an “on-machine AI agent.” Unlike Claude Code, which sends your queries to Anthropic’s servers for processing, Goose can run entirely on your local computer using open-source language models that you download and control yourself.
The project’s documentation describes it as going “beyond code suggestions” to “install, execute, edit, and test with any LLM.” That last phrase — “any LLM” — is the key differentiator. Goose is model-agnostic by design.
You can connect Goose to Anthropic’s Claude models if you have API access. You can use OpenAI’s GPT-5 or Google’s Gemini. You can route it through services like Groq or OpenRouter. Or — and this is where things get interesting — you can run it entirely locally using tools like Ollama, which let you download and execute open-source models on your own hardware.
The practical implications are significant. With a local setup, there are no subscription fees, no usage caps, no rate limits, and no concerns about your code being sent to external servers. Your conversations with the AI never leave your machine.
What Goose can do that traditional code assistants can’t
Goose operates as a command-line tool or desktop application that can autonomously perform complex development tasks. It can build entire projects from scratch, write and execute code, debug failures, orchestrate workflows across multiple files, and interact with external APIs — all without constant human oversight.
The architecture relies on what the AI industry calls “tool calling” or “function calling” — the ability for a language model to request specific actions from external systems. When you ask Goose to create a new file, run a test suite, or check the status of a GitHub pull request, it doesn’t just generate text describing what should happen. It actually executes those operations.
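The loop behind tool calling can be sketched in a few lines. This is an illustrative toy, not Goose's actual internal API: the tool name, message format, and dispatcher below are invented for the example. The core idea is accurate, though — the model emits a structured request, and the agent executes it rather than merely printing text.

```python
import json

# Minimal sketch of a tool-calling loop. The message schema and the
# run_tests tool are hypothetical, invented for illustration.

def run_tests(suite: str) -> str:
    """A 'tool' the agent exposes; a real agent would shell out to a test runner."""
    return f"ran {suite}: 12 passed"

TOOLS = {"run_tests": run_tests}

def handle(model_output: str) -> str:
    """Parse the model's structured reply; if it is a tool call,
    actually execute the requested operation."""
    msg = json.loads(model_output)
    if msg.get("type") == "tool_call":
        fn = TOOLS[msg["name"]]
        return fn(**msg["arguments"])
    return msg["text"]  # plain text answer, nothing to execute

# A model with tool-calling support emits structured requests like this:
reply = handle('{"type": "tool_call", "name": "run_tests", "arguments": {"suite": "unit"}}')
print(reply)  # -> ran unit: 12 passed
```

In a real agent the returned result is fed back into the conversation so the model can decide its next step — that feedback loop is what lets Goose and Claude Code chain file edits, test runs, and API calls without human intervention at each step.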
This capability depends heavily on the underlying language model. Claude 4 models from Anthropic currently perform best at tool calling, according to the Berkeley Function-Calling Leaderboard, which ranks models on their ability to translate natural language requests into executable code and system commands.
But newer open-source models are catching up quickly. Goose’s documentation highlights several options with strong tool-calling support: Meta’s Llama series, Alibaba’s Qwen models, Google’s Gemma variants, and DeepSeek’s reasoning-focused architectures.
Setting up Goose with a local model
For developers interested in a completely free, privacy-preserving setup, the process involves three main components: Goose itself, Ollama (a tool for running open-source models locally), and a compatible language model.
Step 1: Install Ollama
Ollama is an open-source project that dramatically simplifies the process of running large language models on personal hardware. It handles the complex work of downloading, optimizing, and serving models through a simple interface.
Download and install Ollama from ollama.com. Once installed, you can pull models with a single command. For coding tasks, Qwen 2.5 offers strong tool-calling support:
ollama run qwen2.5
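Once the model is pulled, Ollama serves it through a local HTTP API, which listens on port 11434 by default — this is the interface Goose (or any other client) talks to. The sketch below builds the JSON body that Ollama's /api/chat endpoint expects; actually sending it requires a running Ollama instance, so the request itself is left as a comment.

```python
import json

# Ollama exposes a local HTTP API (default: http://localhost:11434).
# This builds the request body for its /api/chat endpoint; the model
# name assumes you have already run `ollama run qwen2.5`.
payload = {
    "model": "qwen2.5",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "stream": False,  # ask for a single JSON reply instead of a token stream
}
body = json.dumps(payload)

# To send it (with Ollama running):
#   POST http://localhost:11434/api/chat with `body` as the payload.
```

Nothing here ever leaves localhost — which is precisely the privacy property that distinguishes this setup from a cloud-based assistant.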
The RAM, processing power, and trade-offs you should know about
The obvious question: what kind of computer do you need?
Running large language models locally requires substantially more computational resources than typical software. The key constraint is memory — specifically, RAM on most systems, or VRAM if using a dedicated graphics card for acceleration.
Block’s documentation suggests that 32 gigabytes of RAM provides “a solid baseline for larger models and outputs.” For Mac users, this means the computer’s unified memory is the primary bottleneck. For Windows and Linux users with discrete NVIDIA graphics cards, GPU memory (VRAM) matters more for acceleration.
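A back-of-the-envelope calculation shows why memory is the constraint. Model weights occupy roughly parameters × bytes-per-parameter; quantization shrinks each parameter to as little as 4 bits. The 20 percent overhead factor below is a rough assumption for the KV cache and runtime buffers, not a figure from Block or Ollama:

```python
# Rough memory estimate for hosting a model locally.
# bytes ~= parameters * (bits per parameter / 8), plus ~20% overhead
# for the KV cache and runtime buffers (the 20% is an assumption).

def approx_memory_gb(params_billions: float, bits_per_param: float,
                     overhead: float = 1.2) -> float:
    bytes_total = params_billions * 1e9 * (bits_per_param / 8) * overhead
    return bytes_total / 1e9

# A 7B-parameter model at 4-bit quantization (a common choice):
print(round(approx_memory_gb(7, 4), 1))  # -> 4.2 (GB)

# The same model at full 16-bit precision:
print(round(approx_memory_gb(7, 16), 1))  # -> 16.8 (GB)
```

The arithmetic explains the 32 GB recommendation: a quantized 7B model fits comfortably on modest hardware, but larger models — or unquantized ones — quickly consume everything a typical laptop has.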
The trade-offs of running your AI coding models locally
Goose with a local LLM is not a perfect substitute for Claude Code. The comparison involves real trade-offs that developers should understand.
Model Quality: Claude 4.5 Opus, Anthropic’s flagship model, remains arguably the most capable AI for software engineering tasks. It excels at understanding complex codebases, following nuanced instructions, and producing high-quality code on the first attempt. Open-source models have improved dramatically, but a gap persists — particularly for the most challenging tasks.
How Goose stacks up against Cursor, GitHub Copilot, and the paid AI coding market
Goose enters a crowded market of AI coding tools, but occupies a distinctive position.
Cursor, a popular AI-enhanced code editor, charges $20 per month for its Pro tier and $200 for Ultra — pricing that mirrors Claude Code’s Max plans. Cursor provides approximately 4,500 Sonnet 4 requests per month at the Ultra level, a substantially different allocation model than Claude Code’s hourly resets.
Cline, Roo Code, and similar open-source projects offer AI coding assistance but with varying levels of autonomy and tool integration. Many focus on code completion rather than the agentic task execution that defines Goose and Claude Code.
The $200-a-month era for AI coding tools may be ending
The AI coding tools market is evolving quickly. Open-source models are improving at a pace that continually narrows the gap with proprietary alternatives. Moonshot AI’s Kimi K2 and z.ai’s GLM 4.5 now benchmark near Claude Sonnet 4 levels — and they’re freely available.
If this trajectory continues, the quality advantage that justifies Claude Code’s premium pricing may erode. Anthropic would then face pressure to compete on features, user experience, and integration rather than raw model capability.
For now, developers face a clear choice. Those who need the absolute best model quality, who can afford premium pricing, and who accept usage restrictions may prefer Claude Code. Those who prioritize cost, privacy, offline access, and flexibility have a genuine alternative in Goose.
Conclusion
Goose is not perfect. It requires more technical setup than commercial alternatives. It depends on hardware resources that not every developer possesses. Its model options, while improving rapidly, still trail the best proprietary offerings on complex tasks.
But for a growing community of developers, those limitations are acceptable trade-offs for something increasingly rare in the AI landscape: a tool that truly belongs to them.
FAQ
What are the main differences between Claude Code and Goose?
Claude Code is a cloud-based AI coding assistant with a subscription-based pricing model, while Goose is a free, open-source alternative that runs locally on a user’s machine.
What are the costs associated with using Claude Code?
Claude Code costs range from $20 to $200 per month, depending on the subscription tier and usage.
What are the benefits of using Goose over Claude Code?
Goose offers a free and open-source alternative to Claude Code, with no subscription fees, no usage caps, and no rate limits. It also allows for local execution of AI models, providing more control over data and workflow.
What are the system requirements for running Goose?
Running Goose with a local model works best on a machine with ample RAM — Block suggests 32 GB as a solid baseline — plus a locally hosted language model via a tool like Ollama. Goose itself is lightweight and can also connect to cloud-hosted models, which impose no special hardware requirements.
How does Goose compare to other AI coding tools on the market?
Goose occupies a unique position in the market, offering a free and open-source alternative to commercial AI coding tools like Claude Code and Cursor.