Anthropic locks in Google TPU deal to scale Claude AI training

Anthropic will use up to one million Google tensor processing units (TPUs) to train future Claude models, in a deal valued in the tens of billions of dollars. Expanding its partnership with Google, the AI company has secured more than one gigawatt of compute that Google plans to bring online in 2026.
The agreement gives Anthropic access to Google Cloud’s tensor processing units at large scale for training and serving the next Claude releases. Anthropic cited its prior TPU training runs, price-performance and efficiency as reasons for choosing Google’s silicon. The companies placed the value of the compute in the tens of billions of dollars.
The expansion comes as chip supply and power capacity shape AI build-outs across the sector. Anthropic’s TPU ramp adds another path beyond Nvidia GPUs and arrives alongside other hyperscale capacity deals reported this month. The new allocation is slated to start delivering in 2026 through Google Cloud infrastructure.
The AI infrastructure race is also pulling in crypto miners. Data-center operators tied to Bitcoin mining are retooling sites and power contracts for AI workloads, highlighted by CoreWeave’s pending all-stock acquisition of Core Scientific to capture 1.3 gigawatts of energy capacity for AI compute. Sector reporting over the past year details how miners’ energy footprints make them targets for AI hosting and high-performance computing.
Data-center buildouts are now following the same playbook miners used at peak hashrate: lock in cheap megawatts first, then drop in compute. Power purchase agreements, curtailable load deals with utilities, and pre-existing interconnects are turning former ASIC halls into TPU/GPU farms. Operators are swapping immersion tanks and step-down gear for AI racks, adding high-density cooling, and wiring low-latency fiber to cloud on-ramps. The economics rhyme with mining cycles, but the payload is different: training runs and inference clusters instead of SHA-256.
For crypto, the overlap is practical. Sites that once scaled on stranded renewables or flexible load credits can point the same energy and real estate at AI tenants, smoothing revenue through hashrate downturns. As Anthropic's TPU capacity arrives from 2026, more miners and hosting shops are bidding to become the "last mile" for hyperscalers, selling megawatts, floor space, and uptime SLAs, while keeping the option to pivot back to mining rigs when network fees or price action justify the switch.
