
Higher usage limits for Claude and a compute deal with SpaceX

Anthropic is raising Claude Code and Claude API limits after a major compute expansion tied to SpaceX, Amazon, Google, Microsoft, NVIDIA and Fluidstack.

By TreffikAI Editorial · 6 min read
Claude AI interface and assistant concept graphic

Anthropic is giving heavy Claude users more room to work. The company says it has expanded its compute capacity through a series of infrastructure agreements, including a newly revealed partnership with SpaceX, and is using that extra capacity to raise limits across Claude Code and the Claude API.

For developers, teams and enterprise customers, this is not just a backend story. Rate limits are one of the most practical constraints in daily AI work. When a coding assistant pauses during a long refactor or an API workflow hits a ceiling during a production run, the model's raw intelligence matters less than whether the service can keep up.

Anthropic's message is clear: Claude is being prepared for heavier, more continuous use.

What changes for Claude users now

The most immediate changes affect people already using Claude Code and the Claude API. Anthropic says three upgrades are now live across its service tiers.

First, the five-hour rate limits for Claude Code are being doubled for Pro, Max, Team and seat-based Enterprise plans. That should matter most to users who keep Claude Code open through long software sessions: debugging, repository cleanup, migration work, test-writing and larger refactors.

Second, peak-hour reductions are being removed for Pro and Max accounts. Previously, users could see reduced access during busier periods. Removing that reduction makes the experience more predictable, which is especially important when Claude is part of a workday rather than an occasional helper.

Third, Anthropic says developers using its most powerful models will see higher API ceilings for the Claude Opus family. That gives API builders more breathing room for demanding workflows that depend on long reasoning, analysis or multi-step tool use.

Why usage limits matter more than they sound

Usage limits can sound like a billing detail, but for AI products they are part of the user experience.

A low ceiling changes how people behave. Developers split tasks into smaller chunks. Teams avoid running background agents for too long. Product builders add fallback logic earlier than they would like. Enterprises hesitate to move critical workflows onto a system that might throttle at the wrong moment.
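That early fallback logic usually amounts to retrying on rate-limit responses with exponential backoff. A minimal, generic sketch (not Anthropic's SDK; `RateLimited` and `call_with_backoff` are illustrative names standing in for any provider's 429 error):

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 response from any AI API."""

def call_with_backoff(request, *, max_retries=5, base_delay=1.0):
    """Retry `request` on RateLimited, sleeping exponentially longer each time."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimited:
            # Full jitter: sleep somewhere in [0, base_delay * 2^attempt)
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return request()  # final attempt; if it fails now, let the error propagate
```

Higher ceilings do not remove the need for this kind of guard, but they make it fire far less often.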

Higher limits do not automatically make Claude smarter, but they make it easier to use Claude as infrastructure. That distinction matters.

Claude Code is most valuable when it can stay with a task across several steps: understand the codebase, propose a plan, edit files, inspect errors, adjust the implementation and explain the final change. More capacity means fewer interruptions in that loop.

For API customers, higher ceilings can support larger batches, more concurrent users and more ambitious agent designs.
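In practice, a concurrency ceiling shows up client-side as a cap on in-flight requests, and raising the provider's limit means raising one number. A hedged sketch (the cap value and function names are illustrative, not from Anthropic's documentation):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Illustrative: keep in-flight requests below the provider's concurrency ceiling.
MAX_IN_FLIGHT = 8
_slots = threading.Semaphore(MAX_IN_FLIGHT)

def bounded_call(task, payload):
    """Run `task(payload)` while holding one of MAX_IN_FLIGHT slots."""
    with _slots:
        return task(payload)

def run_batch(task, payloads):
    """Fan a batch out across threads, never exceeding the slot cap."""
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        return list(pool.map(lambda p: bounded_call(task, p), payloads))
```

With a higher API ceiling, `MAX_IN_FLIGHT` can simply be raised, and the same batch finishes in fewer waves.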

The SpaceX compute deal

The headline partnership is with SpaceX. According to the announcement, Anthropic has secured access to the full compute capacity of SpaceX's Colossus 1 data center.

Within the month, that is expected to bring more than 300 megawatts of additional capacity online. Anthropic frames that as equivalent to more than 220,000 NVIDIA GPUs, with the new capacity directly supporting Claude Pro and Max subscribers.
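The two quoted figures can be sanity-checked against each other. Dividing the stated power by the stated GPU count gives a per-GPU budget of roughly 1.4 kW, which is plausible for a modern accelerator once cooling and networking overhead are included. This is our back-of-envelope arithmetic, not a figure from the announcement:

```python
# Back-of-envelope: quoted data-center capacity vs. quoted GPU count.
capacity_watts = 300e6   # "more than 300 megawatts"
gpu_count = 220_000      # "more than 220,000 NVIDIA GPUs"

watts_per_gpu = capacity_watts / gpu_count
print(f"{watts_per_gpu:.0f} W per GPU")  # ≈ 1364 W, including facility overhead
```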

That is a huge infrastructure claim, and it shows how the AI race is increasingly becoming a power and data-center race. Model quality still matters, but the winners also need enough electricity, chips, networking, cooling and operational reliability to serve millions of requests without a degraded experience.

The SpaceX angle also gives Anthropic a more dramatic long-term story: orbital compute.

Orbital AI compute is the moonshot

Anthropic says the agreement opens the door to future work with SpaceX on multi-gigawatt orbital AI compute capacity.

That phrase is easy to overread. It does not mean Claude is about to run from orbit next week. It does mean major AI companies are now thinking about infrastructure beyond conventional data centers. Space-based compute remains an early, extremely complex idea, with obvious challenges around launch cost, maintenance, power, thermal management, networking latency and regulation.

Still, the ambition is revealing. As demand for AI compute grows, companies are looking for any path that could unlock new scale. Some will focus on more efficient chips. Some will pursue new energy deals. Some will build geographically distributed data centers. Anthropic is now signaling interest in a much more unusual layer of infrastructure.

Even if orbital AI compute remains a long-term bet, the near-term SpaceX deal is about something very concrete: more capacity for Claude users.

A broader compute ecosystem

SpaceX is only one piece of Anthropic's infrastructure strategy. The company says Claude is trained and served across a mix of hardware, including AWS Trainium, Google TPUs and NVIDIA GPUs.

That diversity is important. Depending on a single hardware provider can make an AI company vulnerable to shortages, pricing pressure or regional constraints. A mixed compute stack gives Anthropic more flexibility as demand changes.

The company also points to several large-scale agreements:

  • Up to 5 gigawatts of capacity with Amazon, with nearly 1 gigawatt expected online by the end of 2026.
  • A 5-gigawatt agreement with Google and Broadcom, expected to begin coming online in 2027.
  • A strategic partnership with Microsoft and NVIDIA securing $30 billion in Azure capacity.
  • A $50 billion investment in U.S. AI infrastructure alongside Fluidstack.

These numbers underline the direction of the market. Frontier AI is no longer only a model race. It is a race to finance, secure and operate enormous compute systems.

Why enterprises care about local infrastructure

Claude is increasingly used in sectors where infrastructure location matters: finance, healthcare, government, legal services and regulated enterprise operations.

In those environments, it is not enough for an AI model to be capable. Customers need to know where data is processed, how systems are secured and whether the provider can satisfy regional rules around compliance and data residency.

Anthropic says its compute expansion is becoming global, with new inference capabilities already planned through its collaboration with Amazon in Asia and Europe.

That could make Claude easier to adopt for organizations that cannot send sensitive workloads through a single U.S.-centric infrastructure footprint. It also positions Anthropic to compete more seriously for enterprise deployments outside its home market.

The political and supply-chain filter

Anthropic is also emphasizing where it wants this infrastructure to live. The company says it is choosing partners in democratic nations with secure hardware and networking supply chains, as well as regulatory systems capable of supporting investments at this scale.

That is partly risk management and partly positioning. AI infrastructure is becoming strategically sensitive. Governments care about chip supply, energy use, national security, data flows and the economic impact of data centers.

For Anthropic, being selective about jurisdictions may help reduce supply-chain exposure and reassure customers in regulated industries. It also gives the company a policy narrative: Claude's infrastructure is not just expanding, it is expanding inside a set of preferred legal and security environments.

Community commitments and power costs

Large data centers create local pressure. They can bring jobs and investment, but they also consume electricity, water and land. Communities increasingly want to know who benefits and who pays.

Anthropic says it remains committed to the regions hosting its facilities. After pledging to cover consumer electricity price increases caused by its U.S. data centers, the company is now exploring how to extend similar commitments internationally.

That matters because AI compute expansion will be watched closely. If companies want multi-gigawatt infrastructure, they will need more than capital and chips. They will need local trust.

The most important part of this announcement may therefore be practical rather than futuristic. Claude users get higher limits now. Anthropic gets more room to scale. And the AI infrastructure race becomes even more visible.

(Source: Anthropic company update provided in the editorial brief. Photo: TreffikAI Claude AI cover asset.)

Tags: #claude #anthropic #claude-code #spacex