Benchmarks Shattered and Billions Flow While Privacy Gets Pushed Aside: Your February 13th Tech Roundup

First off, my apologies for getting this out so late in the day. Sometimes the timing doesn't line up quite right, but I wanted to make sure you got your tech news fix before the day wraps up. And honestly, with everything happening right now, it's worth the wait. Today brought us Google reclaiming its spot at the top of the AI leaderboard, OpenAI making a surprising hardware pivot, Anthropic closing a funding round so large it's hard to even process, and Meta pushing forward with facial recognition plans that should make us all pause. Let's dive in.


Google Gemini 3 Deep Think Crushes Reasoning Benchmarks


Google is back, and it's not playing around. The company released a major upgrade to Gemini 3 Deep Think, and the performance numbers are staggering. The model scored 84.6 percent on ARC-AGI-2, a reasoning benchmark verified by the ARC Prize Foundation that specifically tests whether AI can learn novel tasks and generalize logic rather than recycle memorized training data. This isn't a trivial accomplishment. ARC-AGI-2 is designed to resist memorization, pushing AI models into territory where pure scale and more training data won't help. Gemini 3 Deep Think passed that test convincingly.


But that's not all. The model also achieved a 3455 Elo score on Codeforces, putting it at the Legendary Grandmaster level in competitive programming. For context, that means it's outperforming the vast majority of human programmers at algorithmic problem solving. It also scored 48.4 percent on Humanity's Last Exam without external search tools, showing it can handle high-level conceptual planning across fields like advanced law, philosophy, and mathematics while staying on coherent reasoning paths.
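To get a feel for what a 3455 Elo actually means, the standard Elo formula gives the expected score of one player against another. Here's a quick back-of-envelope check; the 2400 comparison rating (a strong human grandmaster-level competitor) is my own illustrative assumption, not a figure from the announcement:

```python
def elo_expected_score(rating_a, rating_b):
    # Standard Elo expectation: the fraction of points player A is
    # expected to score against player B over many encounters.
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 3455-rated model vs. a hypothetical 2400-rated human competitor:
# the model is expected to come out ahead in nearly every matchup.
print(round(elo_expected_score(3455, 2400), 4))
```

Equal ratings give an expected score of exactly 0.5, and every 400-point gap multiplies the odds by ten, which is why a 1,000-plus point lead translates to near-certain dominance.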


What's interesting about Deep Think is the approach. Rather than training a larger model, Google is scaling inference-time compute, which means giving the model more time and resources to reason through problems before answering. That strategy appears to be paying off. The internal verification systems that prune incorrect reasoning paths are working effectively, and the model is demonstrating genuine abstract reasoning rather than pattern matching.
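The core idea behind inference-time scaling with verification can be sketched in a few lines. This is a toy best-of-N illustration of the general technique, not Google's actual system; `sample_path` and `verify` are deliberately simplified stand-ins (a fake "model" that sometimes drops a term, and a verifier that independently re-checks the answer):

```python
import random

def sample_path(problem, rng):
    # Toy "model": computes the sum of a list, but sometimes takes a
    # faulty reasoning path and forgets the last term.
    xs = problem
    if rng.random() < 0.5:
        xs = xs[:-1]  # incorrect reasoning path
    return sum(xs)

def verify(problem, answer):
    # Toy verifier: independently re-derives the answer. A real system
    # would score intermediate reasoning steps, not just the result.
    return answer == sum(problem)

def best_of_n(problem, n=16, seed=0):
    # Spending more inference-time compute: sample up to n candidate
    # answers and return the first one that survives verification.
    rng = random.Random(seed)
    for _ in range(n):
        answer = sample_path(problem, rng)
        if verify(problem, answer):
            return answer
    return None
```

The point of the sketch is the shape of the loop: extra compute buys more candidate reasoning paths, and the verifier prunes the bad ones, so accuracy improves without retraining or enlarging the underlying model.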


For you, this means the bar for what AI can do just jumped significantly. We've been watching models get better at coding and reasoning for months, but Gemini 3 Deep Think is operating at a level that was theoretical just a year ago. If you're a developer or knowledge worker who relies on AI tools, expect the capabilities available to you to keep expanding rapidly. If you're someone who worries about AI moving faster than society can adapt, this is another data point suggesting that concern is justified.


I've mentioned in previous posts how deployment and infrastructure remain harder problems than raw capability. Google is proving that when you combine cutting-edge models with the right architecture, you can push performance into new territory. The question now is how quickly these capabilities make it into consumer products and how reliably they work outside of controlled benchmarks.


OpenAI Debuts GPT-5.3-Codex-Spark on Cerebras Chips


In a move that caught a lot of people off guard, OpenAI released its first AI model running on chips from Cerebras Systems instead of Nvidia. The model is called GPT-5.3-Codex-Spark, and it's designed to be a faster, more interactive version of OpenAI's coding assistant. According to OpenAI, the model can exceed 1,000 tokens per second under the right configuration, which is roughly 15 times faster than previous versions. The speed advantage comes from Cerebras' wafer-scale architecture, which uses a single massive processor with hundreds of thousands of AI cores and large pools of on-chip memory, rather than the typical GPU clusters that require high-speed interconnects.
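To make the speed figure concrete, here's the back-of-envelope math on what 1,000 tokens per second versus a 15x-slower baseline means in wall-clock terms. The 500-token completion length is an assumed example size for illustration, not a figure from either company:

```python
def completion_time(num_tokens, tokens_per_second):
    # Wall-clock seconds to stream a completion at a steady token rate.
    return num_tokens / tokens_per_second

# Headline figures: >1,000 tokens/sec, roughly 15x previous versions.
fast = completion_time(500, 1000)       # 0.5 seconds
slow = completion_time(500, 1000 / 15)  # 7.5 seconds
print(fast, slow)
```

Seven and a half seconds versus half a second is the difference between watching code crawl onto the screen and getting an answer that feels instant, which is exactly the interactive-workflow gap Spark is aimed at.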


The model is tuned for interactive development workflows like editing specific sections of code and running targeted tests. It defaults to minimal edits and won't automatically execute tests unless instructed, which makes it more practical for developers who want precise control rather than an AI that tries to rewrite everything. OpenAI says this is just the beginning of a broader partnership with Cerebras, with plans to bring ultra-fast inference to larger frontier models later this year.


This is OpenAI's first major product release on non-Nvidia hardware, and it's a signal that the company is diversifying its infrastructure dependencies. For years, Nvidia has been the dominant player in AI processing, but companies like Cerebras are proving that purpose-built architectures can offer real advantages for specific workloads. The move also reflects the intense pressure OpenAI faces to reduce costs and improve performance as it scales its products to hundreds of millions of users.


For you, this matters for a couple of reasons. First, faster AI tools mean more responsive experiences. If you've ever used an AI coding assistant and found yourself waiting for it to finish generating code, that's the problem Spark is solving. Second, this shift toward specialized hardware is likely to accelerate. As AI moves from experimental to essential, companies are going to keep looking for ways to optimize performance and reduce reliance on a single chip supplier. That competition will likely drive innovation and potentially lower costs over time.


Anthropic Raises 30 Billion Dollars at 380 Billion Dollar Valuation


And then there's Anthropic. The company announced yesterday that it closed a 30 billion dollar Series G funding round, valuing it at 380 billion dollars post-money. That's more than double its previous valuation, and it's the second-largest private financing round in technology history, trailing only OpenAI's 40 billion dollar raise last year. The round was led by Singapore's sovereign wealth fund GIC and investment firm Coatue, with participation from heavy hitters like Founders Fund, D. E. Shaw Ventures, and Abu Dhabi's MGX.


According to Anthropic's CFO Krishna Rao, roughly 80 percent of the company's revenue comes from enterprise clients, and much of that success is driven by Claude Code, the viral AI coding tool that automates parts of software development. The funding will be used to expand infrastructure, continue frontier research, and build out enterprise-grade products. This comes on the heels of Anthropic's 20 million dollar donation to a political group supporting AI safety regulations, which I covered in yesterday's post. The company is clearly betting that positioning itself as the safety-first alternative in the AI race will pay off both commercially and politically.


What's striking is the sheer scale of capital flowing into AI right now. Thirty billion dollars is an almost incomprehensible amount of money, and Anthropic raised it in a single round. For comparison, most startups raise millions or tens of millions. Anthropic is raising tens of billions, and its valuation puts it in the same league as some of the largest public companies in the world. This reflects investor belief that AI infrastructure and enterprise tools will be worth trillions of dollars over the next decade.


For you, this funding spree has direct implications. The companies raising these enormous sums are building the AI tools you'll be using in the coming years, and the products they prioritize will shape what's available to consumers and businesses. Anthropic's focus on enterprise clients means its tools are being designed for reliability, security, and compliance rather than pure consumer appeal. If you work in a company that's adopting AI, there's a good chance you'll encounter Claude or similar tools built by these hyper-funded startups. The question is whether these companies can justify their valuations by delivering products that fundamentally change how work gets done.


Meta Plans to Add Facial Recognition to Ray-Ban Smart Glasses


Now for the story that should make everyone uncomfortable. Meta is planning to add facial recognition technology to its Ray-Ban smart glasses, potentially as soon as this year. The feature, internally called Name Tag, would let wearers identify people and get information about them through Meta's AI assistant. According to sources who spoke to The New York Times, Meta considered adding this capability to the first version of its smart glasses back in 2021 but pulled back due to technical challenges and ethical concerns. The company has now revived its plans, reportedly betting that the current political environment is more favorable for the feature's release.


An internal memo from Meta's Reality Labs reportedly stated that the company views the political tumult in the United States as good timing for the feature's release, noting that civil society groups that would typically oppose such a move have their resources focused on other concerns. Meta's smart glasses have already been used for facial recognition in unofficial experiments. In 2024, two Harvard students used Ray-Ban Meta glasses alongside a commercial facial recognition tool called PimEyes to identify strangers on a Boston subway, and the video went viral. Meta emphasized at the time that the glasses have a small white LED light to indicate when recording is taking place, but that's hardly reassuring when the technology can identify people in real time.


Five years ago, Facebook shut down its facial recognition system for tagging people in photos, citing privacy concerns. Now Meta is bringing the technology back in a more invasive form, embedded in wearable devices that can be worn anywhere. The American Civil Liberties Union called the move a dire threat, and it's hard to disagree. Facial recognition technology on glasses that blend into everyday accessories creates a world where anyone can be identified at any time, and the potential for abuse is enormous.


For you, this is where the tech industry's relentless push forward starts to collide with basic privacy and safety. If Meta ships this feature, it won't just affect Ray-Ban smart glasses users. It will affect everyone who comes into contact with someone wearing them. You won't know if the person next to you on the subway or in a coffee shop is using AI to pull up your name, social media profiles, or any other information tied to your face. This technology fundamentally changes the dynamics of public space, and it's being introduced not because there's demand for it, but because Meta sees a market opportunity and thinks the political moment is right.


Wrapping It All Up


Today's stories highlight the dual nature of where we are with technology right now. On one side, we're seeing extraordinary technical achievements. Google's Gemini 3 Deep Think is solving problems that seemed out of reach just months ago. OpenAI is partnering with Cerebras to deliver AI tools that are faster and more responsive than anything we've had before. Anthropic is raising historic amounts of capital to build enterprise-grade AI systems that could redefine how businesses operate.


On the other side, we're watching privacy erode in real time. Meta's decision to push forward with facial recognition in smart glasses isn't driven by technical necessity or consumer demand. It's driven by the belief that the company can get away with it. And they might be right. The memo that leaked about timing the release around political distractions tells you everything you need to know about how these decisions are being made.


The AI industry is moving at a pace that's hard to track, let alone regulate. The companies leading the charge are raising billions, breaking benchmarks, and building products that will reshape entire industries. But they're also making decisions that have profound implications for privacy, security, and the basic norms of public life. Yesterday's post talked about how regulation is moving from debate to enforcement. Today's news shows why that can't happen fast enough.


That's it for today. I'll be back tomorrow with more updates as this transformation continues.


Sources:


https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/


https://www.marktechpost.com/2026/02/12/is-this-agi-googles-gemini-3-deep-think-shatters-humanitys-last-exam-and-hits-84-6-on-arc-agi-2/


https://chromeunboxed.com/googles-new-gemini-3-deep-think-update-pushes-the-boundaries-of-ai-reasoning/


https://www.digitalapplied.com/blog/gemini-3-deep-think-reasoning-benchmarks-guide


https://www.bloomberg.com/news/articles/2026-02-12/openai-debuts-first-model-using-chips-from-nvidia-rival-cerebras


https://www.tomshardware.com/tech-industry/artificial-intelligence/openai-lauches-gpt-53-codes-spark-on-cerebras-chips


https://www.cerebras.ai/blog/openai-codexspark


https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation


https://www.cnbc.com/2026/02/12/anthropic-closes-30-billion-funding-round-at-380-billion-valuation.html


https://www.reuters.com/technology/anthropic-valued-380-billion-latest-funding-round-2026-02-12/


https://news.crunchbase.com/ai/anthropic-raises-30b-second-largest-deal-all-time/


https://bitcoinworld.co.in/anthropic-series-g-funding-valuation/


https://www.nytimes.com/2026/02/13/technology/meta-facial-recognition-smart-glasses.html


https://techcrunch.com/2026/02/13/meta-plans-to-add-facial-recognition-to-its-smart-glasses-report-claims/


https://www.macrumors.com/2026/02/13/meta-facial-recognition-smart-glasses/


https://www.businessinsider.com/meta-ray-ban-smart-glasses-facial-recognition-distracted-2026-2
