Power, Pressure, and the Moment of Truth: Your February 25th Tech Roundup
Wednesday has a way of arriving like a verdict. You have spent the week building a case, watching the evidence pile up, and then the day arrives when it all gets resolved, or at least redirected. That is where we find ourselves this morning. The Anthropic-Pentagon standoff has reached its deadline. Nvidia reports earnings after the bell tonight in what may be the most consequential data point in tech this quarter. Google quietly dropped a new flagship AI model earlier this week that did not get nearly the coverage it deserved. A software giant is in freefall as investors panic about what AI is doing to the enterprise software sector. And the President of the United States spent part of his State of the Union address last night telling the biggest tech companies in the world to build their own power plants. There is a lot happening. Let us work through all of it.
The Anthropic Deadline Is Here
If you have been following along this week, you already know the Anthropic story has been building toward this moment. The February 24th post laid it out plainly: the Pentagon gave Anthropic until 5:01 PM today, February 25th, to accept an open-ended contract allowing the military to use Claude for any lawful purpose without restriction. The Defense Department made clear that failure to comply would result in Anthropic being designated a supply chain risk and potentially subjected to the Defense Production Act, a legal mechanism that allows the federal government to compel private companies to support national security objectives even without their consent.
As of this morning, the standoff has not resolved publicly, and that matters. Anthropic's core objection is not bureaucratic or contractual. It is substantive. The company has insisted since its founding that its AI models should not be deployed in fully autonomous weapons systems operating without meaningful human oversight, and that Claude should not be used in mass surveillance of American citizens. These are not fringe positions. They reflect a genuine philosophical commitment to what Anthropic calls responsible AI development, and they are the same principles the company used to justify its early work on AI safety research going back years.
What makes this situation genuinely complicated is the competitive context. As noted yesterday, xAI has already signed a classified deal with the Pentagon without the safety carve-outs Anthropic has demanded. Google is reportedly close to its own agreement. The Pentagon acknowledged that Claude is actually the more capable and accurate model compared to Grok, which makes the standoff feel less like a capability dispute and more like a test of which company will blink first on the question of AI safety guardrails. If Anthropic holds its position and gets blacklisted, it loses significant government revenue and potentially signals to the market that safety-focused AI companies are commercially disadvantaged. If it caves, it sets a precedent that no AI company can maintain meaningful ethical limits when a government contract is on the line. Either outcome is significant. Watch for this to resolve, one way or another, before the end of the business day.
Nvidia Reports Tonight and the Whole Market Is Watching
This has been coming since the February 22nd post flagged it as the most important earnings report in tech this quarter, and the February 24th post did a deep dive on exactly what is at stake. Tonight is finally the night. Nvidia reports fiscal Q4 2026 results after the market closes, with results expected around 4:20 PM Eastern and the full earnings call beginning at 5:00 PM. Wall Street consensus puts revenue at approximately 65.7 billion dollars for the quarter, representing roughly 67 percent year-over-year growth. Adjusted earnings per share are expected at 1.53 dollars, up about 72 percent from a year ago. The data center segment alone is expected to come in near 60 billion dollars. These are numbers that would have been considered impossible to forecast eighteen months ago, and they still might not be enough to move the stock.
The reason for that strange paradox is simple: expectations have been so aggressively revised upward that even a strong beat needs to be paired with exceptional guidance to generate a meaningful stock reaction. Options traders are pricing in Nvidia's smallest post-earnings swing in three years, which tells you that the market has largely priced in a solid quarter and is most interested in what Jensen Huang says about the April outlook. Analysts at UBS have suggested that investor expectations for Q1 fiscal 2027 are likely demanding revenue in the 74 to 75 billion dollar range. That guidance number, not the actual Q4 result, is where the action will be tonight.
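For readers who like to sanity-check the consensus figures above, the growth percentages imply a specific year-ago baseline. The sketch below uses only the analyst estimates quoted in this post, not anything Nvidia has reported:

```python
# Back-of-envelope check on the consensus estimates quoted above.
# All inputs are analyst estimates, not Nvidia's reported figures.
q4_revenue_est = 65.7e9   # consensus Q4 revenue estimate, USD
revenue_growth = 0.67     # ~67% year-over-year
eps_est = 1.53            # consensus adjusted EPS, USD
eps_growth = 0.72         # ~72% year-over-year

implied_prior_revenue = q4_revenue_est / (1 + revenue_growth)
implied_prior_eps = eps_est / (1 + eps_growth)

print(f"Implied year-ago revenue: ${implied_prior_revenue / 1e9:.1f}B")  # ~$39.3B
print(f"Implied year-ago EPS: ${implied_prior_eps:.2f}")                 # ~$0.89
```

If those implied baselines match what the company actually reported a year ago, the dollar estimates and the growth percentages are telling the same story, which is a quick internal-consistency test worth running on any earnings preview.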
For everyone who does not own Nvidia stock and wonders why this matters to them: the answer is that Nvidia's earnings call is effectively a real-time status report on the entire AI industry's infrastructure buildout. If Jensen Huang signals continued record demand for Blackwell chips, it confirms that the massive capital expenditure programs at Alphabet, Amazon, Meta, and Microsoft are proceeding as planned. The 650 billion dollars in AI infrastructure spend projected for 2026 from those four companies alone keeps flowing. The tools you use, the products being developed, the services being built, all of it depends on that spending continuing. If the guidance softens even slightly, the ripple effects will be felt across the entire technology economy within 24 hours. This is one of those evenings where reading the Nvidia press release should be on your list.
Google Dropped a Major New AI Model and Almost Nobody Noticed
In a week dominated by Nvidia earnings anticipation and the Anthropic-Pentagon drama, Google managed to release what may be its most capable AI model to date with surprisingly little fanfare. Gemini 3.1 Pro entered public preview on February 19th, and the benchmark results are worth sitting with for a moment. On the ARC-AGI-2 benchmark, which tests a model's ability to solve novel logic puzzles it has never encountered before, Gemini 3.1 Pro scored 77.1 percent. That is more than double the score of the previous Gemini 3 Pro, and it is a meaningful jump in what researchers consider a strong test of genuine reasoning rather than just memorized pattern matching.
The improvements are not limited to abstract reasoning benchmarks. Gemini 3.1 Pro also leads the field in video understanding, scoring 87.2 percent on VideoMME, which tests comprehension and temporal reasoning across video clips of varying lengths. For context, Claude Opus 4.5 from Anthropic scores 79.2 percent on the same benchmark, and GPT-5.1 from OpenAI scores 81.3 percent. Google's model now reliably handles video clips up to three hours in length, a practical capability that has real implications for anyone using AI to analyze lengthy media. The model also shows a roughly 20 percent improvement in reasoning capabilities over Claude Sonnet 4.6, and it currently leads the LM Arena rankings for general-purpose model quality.
The word preview matters here. This is not a full production release. Google typically runs a preview period of six to twelve weeks before locking a model checkpoint for production guarantees. But for everyday users, the practical upshot is this: the AI model accessible through Google's products is getting meaningfully better at complex reasoning, multi-step tasks, and video analysis at a pace that keeps Google competitive with both Anthropic and OpenAI. This also connects directly to what Apple previewed back in the February 22nd post. Recall that Apple's next version of Siri is expected to integrate Google Gemini as a backend for more complex requests. If Gemini 3.1 Pro is what powers that integration, Siri's upcoming leap could be more significant than many people currently anticipate. Apple's March 4th event is now a week away, and the software story underneath it is becoming clearer.
Workday Is the Canary in the AI Coal Mine
Tuesday night brought an earnings result that tells a different kind of story about what AI is doing to the technology sector, and it is one worth paying close attention to. Workday, the enterprise HR and payroll software company used by thousands of large businesses to manage their workforces, reported fourth quarter results that were actually fine on the surface. Revenue came in at 2.53 billion dollars, up 14 percent year over year, and adjusted earnings per share of 2.47 dollars beat the 2.32 dollar consensus estimate handily. So why did the stock fall nine percent and extend declines that have now sent shares down roughly 40 to 50 percent over the past year? The answer is guidance, and more specifically what the guidance implies about the company's future in a world where AI is starting to do what Workday does.
Workday guided for fiscal 2027 subscription revenue of between 9.925 and 9.95 billion dollars, implying subscription growth of only 12 to 13 percent, down from 14.5 percent growth in the year just ended. That deceleration is not dramatic on its own, but it is landing in a context where investors are already deeply nervous about what agentic AI systems mean for traditional enterprise software. The concern is straightforward: if an AI agent can handle many of the HR, payroll, scheduling, and workforce management tasks that historically required a large software platform like Workday, the pricing power and growth trajectory of those platforms gets undermined. Anthropic's new enterprise AI solutions, in particular, have been cited by analysts as a specific competitive threat that is accelerating those fears.
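Backing the guidance range out against the implied growth figures is a useful consistency check. The sketch below uses only the numbers quoted above; the prior-year subscription base is inferred, not something Workday stated here:

```python
# Back out the prior-year subscription base implied by Workday's
# FY2027 guidance range and the 12-13% growth it translates to.
# All inputs are the figures quoted in this post.
guidance_low, guidance_high = 9.925e9, 9.95e9  # FY2027 subscription guidance, USD
growth_low, growth_high = 0.12, 0.13           # implied growth range

base_if_low_growth = guidance_low / (1 + growth_low)
base_if_high_growth = guidance_high / (1 + growth_high)

print(f"Implied FY2026 subscription base: ${base_if_high_growth / 1e9:.2f}B "
      f"to ${base_if_low_growth / 1e9:.2f}B")  # ~$8.81B to ~$8.86B
```

Both ends land near 8.8 billion dollars, so the guidance and the growth range are internally consistent. The story is not the arithmetic; it is the single-step drop from 14.5 percent growth to 12 or 13.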
For the average worker, this story is both a warning and a preview. The companies that build the administrative and operational software your employer uses are feeling the pressure from AI right now. That pressure will eventually translate into decisions about what software your company buys, how HR processes are managed, and whether certain categories of business software jobs continue to exist in their current form. Workday co-founder and executive chair Aneel Bhusri argued in the earnings call that the company has 20 years of enterprise data advantage and is five to seven years ahead of any AI competitor. That may well be true. But the market did not find it reassuring on Tuesday night, and that tells you something about how quickly investor confidence in traditional software business models is eroding.
Trump Tells Big Tech to Build Their Own Power Plants
President Trump's State of the Union address Tuesday night touched on artificial intelligence in a way that is worth acknowledging here, because it connects directly to a story that has been developing for months. During the address, Trump announced a new framework requiring data center owners and operators to absorb the surges in electricity costs associated with AI use, rather than passing those costs to American households and utility customers. He also stated publicly that he has told the largest tech companies to build their own power plants to support their data center operations, rather than relying on existing grid infrastructure. The AI Infrastructure Coalition, a group co-chaired by former Senator Kyrsten Sinema and Representative Garret Graves, released a statement noting that companies including Google, Microsoft, Duke Energy, and Georgia Power have already made commitments aligned with this direction.
This matters for a reason that connects to a Reuters piece published this morning that deserves more attention than it is getting. According to the International Energy Agency, electricity demand is projected to rise by nearly two percent annually between 2025 and 2030, more than double the pace of the previous decade, with data center growth as the primary driver. PJM, the power grid operator managing roughly 180 gigawatts of power across 13 states in the mid-Atlantic and Midwest, has warned of potential power supply shortages of up to 60 gigawatts in the coming decades due to heightened demand from data centers. The operator has further indicated that by 2027, the grid could lack adequate capacity and reserves, increasing the likelihood of blackouts. ERCOT, the Texas grid operator, has reported that 226 gigawatts of large-load projects, primarily data centers, are currently seeking grid connections. That is approximately three times the current total US data center capacity.
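The compounding buried in those grid numbers is worth making explicit. A rough sketch, using only the figures cited above:

```python
# What ~2% annual electricity demand growth compounds to over 2025-2030,
# plus the current-capacity baseline implied by the ERCOT comparison.
# Inputs are the IEA and ERCOT figures quoted in this post.
annual_growth = 0.02
years = 5  # 2025 through 2030

cumulative = (1 + annual_growth) ** years - 1
print(f"Cumulative demand growth, 2025-2030: {cumulative:.1%}")  # ~10.4%

ercot_queue_gw = 226     # large-load projects seeking grid connection
multiple_of_current = 3  # "approximately three times" current US capacity
implied_current_gw = ercot_queue_gw / multiple_of_current
print(f"Implied current US data center capacity: ~{implied_current_gw:.0f} GW")
```

Roughly ten percent cumulative growth sounds modest until you remember it is landing on a grid that, per PJM's own warning, may already lack adequate capacity and reserves by 2027.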
What this means for you, practically speaking, is that the electricity bill question around AI is not hypothetical. It is already shaping policy at the highest levels of government, and the decisions being made now about who pays for the power infrastructure required to run AI systems will determine whether those costs land on tech company balance sheets or on your monthly utility bill. Trump's framework, if it holds, pushes those costs toward the companies. But the grid capacity problem does not get solved simply by deciding who pays. Building new power plants, upgrading transmission lines, and expanding grid capacity takes years and sometimes decades. The AI buildout is moving on a timeline measured in months. That gap is the real story, and it is one that will shape everything from where data centers get built to how quickly AI products reach consumers over the next several years.
Sam Altman, Water, and a Debate That Is Not Going Away
One more story worth covering from earlier this week that did not make it into Monday's post. At the India AI Impact Summit last Friday, OpenAI CEO Sam Altman was asked about the widespread claims circulating online that AI systems like ChatGPT consume enormous amounts of water per query. Altman dismissed those concerns as completely untrue, calling them totally insane, and said they have no basis in reality. He argued that while data centers historically used evaporative cooling that required significant water, modern facilities have largely moved away from that approach. His position was that the specific figures being circulated, such as claims that a single ChatGPT query uses the equivalent of a small bottle of water, are outdated and misleading.
Altman did acknowledge that total energy consumption remains a valid and serious concern, and he called for a rapid transition to nuclear, wind, and solar power to address it. But his framing of AI energy use compared to human energy use, arguing that training an AI model is comparable to the energy consumed raising a human being over 20 years, generated significant backlash online. Tech professionals and business leaders pushed back on the comparison, arguing that equating AI systems to human beings creates dangerous rhetorical ground and obscures the legitimate questions about whether the rate of AI energy consumption is justified by the actual value being created.
The reason this matters beyond the headline controversy is that it sits directly at the intersection of the power grid story above. If Altman is correct that modern data centers have largely solved the water problem, that is genuinely good news. The energy consumption question, however, is harder to dismiss. The IEA data and the PJM grid warnings are not coming from critics or activists. They are coming from grid operators and international energy agencies doing operational planning. Altman may be right that AI's value eventually justifies its energy cost. But the infrastructure to support that argument is being built under enormous time pressure with uncertain outcomes. The conversation about AI and energy is going to keep getting louder, and dismissing the valid parts of it is unlikely to make it quieter.
The Threads That Hold This Week Together
Pull back from any single story this week and a consistent pattern emerges. Whether it is the Anthropic-Pentagon standoff, the Workday earnings reaction, the power grid warnings, or the debate over Sam Altman's comments, every one of these stories is asking the same underlying question: how much of the current AI trajectory is sustainable, and who absorbs the cost when something does not go according to plan? The safety costs, the energy costs, the disruption to existing software businesses, the burden on national grid infrastructure. All of these are real, and they are all landing at roughly the same moment.
Tonight, Nvidia gives us the closest thing we have to an official answer to that question. Not the full answer. Not a permanent one. But a data point that will either reinforce confidence that the buildout is justified, or introduce the first serious cracks in that confidence. I will be back tomorrow with a full breakdown of the Nvidia results and guidance, the latest on Anthropic if the standoff resolves before end of business, and a preview of what is shaping up to be a genuinely packed week ahead with Apple's March announcements beginning in just one week. There is no shortage of things to talk about in this space right now. Thanks for being here. See you tomorrow.
Sources:
https://www.nytimes.com/2026/02/24/us/politics/pentagon-anthropic.html
https://www.bloomberg.com/news/articles/2026-02-24/pentagon-threatens-to-end-anthropic-work-in-feud-over-ai-terms
https://www.dw.com/en/us-pentagon-gives-ultimatum-to-anthropic-over-ai-curbs-report/a-76111915
https://www.kiplinger.com/investing/live/nvidia-earnings-live-updates-and-commentary-february-2026
https://www.reuters.com/business/nvidia-results-are-ai-markets-biggest-test-amid-competitive-worries-2026-02-24/
https://www.reuters.com/business/options-traders-price-nvidias-smallest-postearnings-swing-three-years-2026-02-25/
https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/
https://techinformed.com/google-rolls-out-gemini-3-1-pro-preview/
https://whatllm.org/blog/gemini-3-1-pro-preview
https://siliconangle.com/2026/02/24/workdays-stock-slumps-weak-guidance-ai-disruption-fears/
https://www.reuters.com/business/workday-tumbles-dour-revenue-outlook-amid-ai-threat-2026-02-25/
https://www.bnnbloomberg.ca/business/2026/02/25/workday-shares-extend-declines-as-soft-forecast-deepens-ai-disruption-fears/
https://www.nextgov.com/artificial-intelligence/2026/02/trump-unveils-big-tech-pledge-offset-rising-data-center-energy-costs/411
https://www.reuters.com/business/energy/trump-says-he-has-told-big-tech-companies-build-their-own-power-plants-2026-02-25/
https://www.reuters.com/markets/commodities/us-ai-boom-faces-electric-shock-2026-02-25/
https://www.cnbc.com/2026/02/23/openai-altman-defends-ai-resource-usage-water-concerns-fake-humans-use-energy-summit.html
https://nypost.com/2026/02/23/business/openais-sam-altman-blasts-ai-concerns-around-water-usage-as-fake-humans-use-energy-too/