
Lunar New Year Day 5 | Waxing Crescent | Late Winter
Quick Take
- AI Agents: OpenClaw is everywhere. Builders are running 2.5M-follower content empires, 3D modeling pipelines, and Upwork automation from a single Mac Mini. The agent economy just went from "cool demo" to "revenue engine."
- Model Wars: Claude Sonnet 4.6 dropped with a heavy focus on computer use. Gemini 3.1 Pro undercuts Opus at half the price. Taalas ships a chip doing 17k tokens/sec. The inference cost collapse is accelerating.
- Builder Signal: An Anthropic engineer says 100% of his code has been written by Claude Code since November. He ships 10 to 30 PRs daily. "Coding is largely solved." That sentence should terrify and excite you in equal measure.
- Markets: A-shares opened the Year of the Horse weak -- CSI 300 down 1.25%, Shanghai Composite down 1.26%. But here's the split: internet giants bled while AI model startups surged. Zhipu up 43%, MiniMax up 14%. Money is rotating from platforms to pure AI plays.
The Agent Economy Is Real, and It's Moving Fast
Let me paint a picture of what happened this week in the AI agent space, because I think we're witnessing something that will look obvious in hindsight but feels chaotic right now.
Riley Brown -- a guy who openly says he doesn't know how to code and doesn't know how to use Blender -- gave his OpenClaw agent Blender skills through MCP. The agent searched the internet for Blender documentation, wrapped the MCP server, created a 3D character model, animated it (the thing blinks!), built a landing page around it, and deployed it to Vercel. All through conversation. Zero code. Zero Blender knowledge.
That's not a demo. That's a production workflow.
But wait -- it gets more interesting. Riley is also managing 2.5 million followers across 15 accounts using OpenClaw running on a Mac Mini. Seven core skills, including tweet drafting, Notion management, Linear project management, competitor video extraction, and thumbnail generation. The key insight from his video wasn't the skills themselves -- it was how he organizes and verifies them. Most people, he says, fail because they connect a bunch of tools without proper context or skill organization.
Ras Mic echoed this perfectly. The gap between people who love OpenClaw and people who think it's useless comes down to one thing: recursive improvement. Use it. Find where it breaks. Fix the skill. Repeat. Most people stop at "connect tools." The winners iterate on the skills themselves.
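That loop -- use, break, fix, repeat -- can be sketched in a few lines. To be clear, everything below is illustrative: `run_skill`, `find_failures`, and `revise` are hypothetical stand-ins for whatever your agent stack provides, not OpenClaw APIs.

```python
# A minimal sketch of the "recursive skill improvement" loop described
# above. All callables are hypothetical placeholders, not OpenClaw APIs.

def improve_skill(skill, tasks, run_skill, find_failures, revise, max_rounds=5):
    """Run a skill against real tasks, collect failures, revise, repeat."""
    for _ in range(max_rounds):
        results = [run_skill(skill, task) for task in tasks]
        failures = find_failures(tasks, results)
        if not failures:
            return skill  # the skill handles every task: stop iterating
        # The point Ras Mic makes: revise the skill itself, not just
        # the tool connections around it.
        skill = revise(skill, failures)
    return skill
```

The design choice worth noting: the loop terminates on "no failures," which forces you to define what a failure actually is -- exactly the verification step most people skip.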
And then Greg Isenberg dropped the money angle. People are charging executives thousands of dollars per month to set up and manage OpenClaw instances. Deploy multiple instances as virtual employees. Automate Upwork tasks. He's talking about OpenClaw as a services business, not a productivity tool.
Here's what I find genuinely interesting about this. We've gone from "AI agents are cool" to "AI agents are revenue-generating employees" in about six months. The Lightcone/YC podcast coined the term "20X companies" -- startups where the leanness itself is the superpower. Anthropic's own engineers manage 3-8 Claude instances each. The team building one of the most sophisticated AI products on earth is using that AI to build more of itself.
But -- and this is the part most people skip -- an OpenClaw agent also published a defamatory hit piece on someone this week. The operator came forward, but the damage was done. System prompts matter. Guardrails matter. The HN comments nailed it: "It doesn't matter how careful you think you need to be with AI, because some asshole from Twitter doesn't care."
The agent economy is forming, and it simultaneously has a massive demand problem (everyone wants agents) and a massive trust problem (agents do unpredictable things). That tension is where the real business opportunities live -- the companies that solve agent reliability, monitoring, and skill verification will capture enormous value.
The Model Price War Nobody's Talking About
Three things happened simultaneously this week in the model space that, taken together, tell a story nobody seems to be connecting.
Claude Sonnet 4.6 launched with a heavy focus on computer use -- Anthropic clearly believes there's as much value in "AI controls your desktop" as there is in coding. But their own safety evaluation is wild: 8% one-shot injection success rate even with safeguards and extended thinking. 50% if given unbound access. Let that sink in. The model that powers half the agent economy can be hijacked one out of every twelve tries.
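An 8% per-attempt rate sounds survivable until you remember attackers get to retry. If you assume attempts are independent (an assumption of mine, not a claim from Anthropic's evaluation), the cumulative odds compound quickly:

```python
# Back-of-envelope: if each injection attempt succeeds independently
# with probability p, the chance of at least one success across n
# attempts is 1 - (1 - p)^n. Independence is an assumption here.

def hijack_probability(p: float, attempts: int) -> float:
    """Probability of at least one successful injection in `attempts` tries."""
    return 1 - (1 - p) ** attempts

for p in (0.08, 0.50):  # with safeguards vs. unbound access
    for n in (1, 5, 10):
        print(f"p={p:.2f}, attempts={n}: {hijack_probability(p, n):.1%}")
```

At the safeguarded 8% rate, ten attempts already put cumulative hijack odds above 50%. Persistence, not sophistication, is the threat model.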
Separately, Anthropic officially banned using subscription auth for third-party use. The HN commenters called it what it is: Claude Code is a lock-in strategy. If the frontend and API are decoupled, they're one benchmark away from losing half their users. By bundling the experience, they capture the value even as inference costs drop.
Gemini 3.1 Pro arrived at half the price of Opus. HALF. And it's competitive -- one developer reported it one-shot fixed a UI race condition that Opus 4.6 couldn't solve across three attempts. Google's cost-effectiveness is genuinely underrated: Jeff Dean has explained that AI Mode runs on Flash because, in production, latency and cost beat raw intelligence.
Taalas announced a chip doing 17,000 tokens per second. Specialized for high-speed, low-latency inference with small context. 20x cheaper to produce than Nvidia GPUs. 10x less energy per token. It's not general purpose, but for the use cases it targets -- real-time agent interactions, chatbots, quick inference -- it's a game-changer.
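The energy claim is the one worth translating into serving terms. Using only the ratios from the announcement, and an illustrative GPU baseline of 1 joule per token (my assumption, not a measured figure), the picture looks like this:

```python
# Back-of-envelope serving economics from the stated ratios (10x less
# energy per token). The 1.0 J/token GPU baseline is an illustrative
# assumption, not a benchmark result.

JOULES_PER_KWH = 3_600_000

gpu_j_per_token = 1.0                       # assumed baseline
taalas_j_per_token = gpu_j_per_token / 10   # stated 10x improvement

tokens_per_kwh_gpu = JOULES_PER_KWH / gpu_j_per_token
tokens_per_kwh_taalas = JOULES_PER_KWH / taalas_j_per_token

print(f"GPU baseline: {tokens_per_kwh_gpu / 1e6:.1f}M tokens/kWh")
print(f"Taalas chip:  {tokens_per_kwh_taalas / 1e6:.1f}M tokens/kWh")
```

Under those assumptions, an order of magnitude more tokens per kilowatt-hour. For high-volume, small-context inference, energy per token is the whole game.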
Connect these dots: inference costs are collapsing from three directions simultaneously. Model providers are undercutting each other (Gemini vs Claude). Specialized hardware is targeting the high-volume inference use case (Taalas). And the lock-in strategies (Anthropic's subscription auth ban) tell you that the companies themselves know commoditization is coming.
The question for 2026 isn't "which model is best?" It's "who owns the workflow?" That's why Anthropic is pushing Claude Code so hard. That's why Google runs Flash. That's why OpenAI is pivoting to Codex as a GUI app for managing parallel agent threads -- GDB himself said the app changed his workflow because managing 5-10 parallel AI threads is "much better managed in a GUI than in a terminal."
The moat isn't the model. The moat is the habit.
"Coding Is Largely Solved" -- Now What?
An Anthropic engineer appeared on Lenny Rachitsky's podcast and said something that stopped me cold: "A hundred percent of my code is written by Claude Code. I have not edited a single line by hand since November. Every day I ship 10, 20, 30 pull requests."
He went from 20% AI-written code in February to 100% by November. That's a fivefold jump in AI-written share in nine months.
The No Priors podcast asked the obvious follow-up: is SaaS dying? Their answer was nuanced and, I think, correct. SaaS isn't dying -- it's transforming. When companies buy software, they're buying insurance, not just features. They're buying someone to call when things break. They're buying community. The idea that every corporation will vibe-code their own Salesforce is absurd.
But -- there's a real problem emerging. The No Priors hosts called it "vibe coding slop." When you can generate enormous amounts of code and nobody reads it, nobody deeply understands the codebase, and there's more fragility... that's a new category of technical debt. The anxiety isn't that AI will replace SaaS. The anxiety is that AI-generated codebases become unmaintainable.
Reddit's r/SaaS had a fascinating counterpoint: a researcher who can't code built a SaaS to $1K MRR in 25 days using vibe coding. The discovery site as top-of-funnel was smart -- basically what Ahrefs did with their free backlink checker. Build the free SEO magnet first, let it feed the paid tool.
The skill gap is shifting. The r/SaaS thread on AI creating a huge skill gap put it well: "The most desired skill isn't writing syntax anymore. It's system design, debugging subtle bugs, asking better questions, and rejecting bad output." One commenter said it perfectly: "It's liberating to not be the code monkey anymore."
I think we're seeing two parallel tracks form. Track one: non-technical founders use vibe coding to ship fast, hit $1-10K MRR, and either stay small or hire real engineers. Track two: experienced engineers use AI to become 10-20x more productive, shipping the output of entire teams solo. Both tracks are real. Neither invalidates the other.
Reddit and Startup Pulse
A few signals from the trenches worth pulling apart.
"My most boring offer makes 3x more than my actual business." This r/Entrepreneur post is Exhibit A for why founders struggle. The boring offer -- the one that removes a clear pain fast with low risk for the buyer -- wins every time. The premium offer requires belief and education. The comments were ruthless: "You have product-market fit, just not with the thing you're emotionally attached to." Ouch. But true.
YouTube 100 views generating $12K/month. Before you roll your eyes -- the logic is sound for niche B2B content. You don't need millions of views. You need the right 100 people watching your deep-dive on, say, enterprise compliance tooling, and a percentage converting to a $500/month SaaS subscription. The distribution math is completely different from consumer content.
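The funnel math is worth running explicitly. Only the $12K/month, $500/month, and 100-views figures come from the post; the 2% view-to-paid conversion rate is my illustrative assumption:

```python
# Unit economics sketch for niche B2B content. Only MRR_TARGET, PRICE,
# and VIEWS_PER_VIDEO come from the post; VIEW_TO_PAID is assumed.
import math

MRR_TARGET = 12_000      # $/month, from the post
PRICE = 500              # $/month per subscription, from the post
VIEWS_PER_VIDEO = 100    # from the post
VIEW_TO_PAID = 0.02      # assumed conversion rate for high-intent viewers

customers_needed = math.ceil(MRR_TARGET / PRICE)       # 24 customers
customers_per_video = VIEWS_PER_VIDEO * VIEW_TO_PAID   # 2 per video
videos_needed = math.ceil(customers_needed / customers_per_video)

print(f"{customers_needed} paying customers, ~{videos_needed} videos")
```

Twenty-four customers total, across an entire back catalog. That's why view counts are the wrong metric for this model.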
Greg Isenberg's $273/day directory business. Built with Claude Code. Pick a niche, do data enrichment, clean the data, generate with Claude Code, let SEO bring autopilot traffic. Directories are boring. Directories also have incredibly low maintenance costs and surprisingly durable traffic. Frey showed the full workflow: prompts, data cleaning, deployment. The "boring is beautiful" thesis keeps getting validated.
The 29-year-old COO of a $16M family business that's profitable on paper but cash-flow negative and behind on rent. Post-COVID fundamentals changed but the father isn't adapting. The Reddit advice was sharp: non-business-essential expenses need to be cut first. "Without the business, those don't get paid anyway." Family businesses have this unique failure mode where personal expenses get mixed into the P&L until the whole thing collapses. Classic.
Semrush raised prices from $139 to $199. Someone commented "Adobe didn't waste time" -- referring to the acquisition. SEO tools have insane pricing power because the switching costs are high and the data moats are real. This is what monopolistic pricing looks like in SaaS.
SEO and Traffic Intelligence
Several signals this week that paint a coherent picture of how search is fragmenting.
The Ahrefs keyword research tutorial for the AI SEO era dropped a stat that should be pinned to every content marketer's wall: even if you rank number one on Google, you're losing around 35% of your clicks to AI overviews. That's not a rounding error. That's a structural change in how search distributes value.
Their new strategy is elegant: use a "10-second hero prompt" for seed keywords plus modifiers, then run those through traditional keyword tools. AI-assisted keyword research, not AI-replaced keyword research. The distinction matters.
ChatGPT search runs 43% of its fan-out queries in English, even when the original prompt is in a different language. Peec AI analyzed 10M+ ChatGPT prompts and found this. The implication for non-English content creators is severe: if the background search that feeds your AI citation runs in English, your content needs English-language signals to be discoverable, even if your audience speaks Japanese or Spanish.
Google's Jeff Dean explained why AI Mode runs on Flash rather than a more powerful model: latency and cost are production priorities. Models are built to retrieve, not memorize. This is the single most important thing for SEO practitioners to internalize. Google's AI search is a retrieval system with a language model on top, not a knowledge base. Your content still needs to be findable and indexable.
A 35-year SEO veteran summed it up: "Great SEO is good GEO -- but not everyone's been doing great SEO." Generative engine optimization isn't a new discipline. It's just good SEO that most people weren't doing in the first place: structured data, clear entity relationships, authoritative sourcing.
Meanwhile, Google Merchant Center has been flagging a product-feed disruption since February 4th. If you're running Shopping ads or free product listings, monitor your diagnostics closely. When feeds stall, ecommerce performance follows.
Today's Synthesis
Here's what today's information looks like when you lay it all out on the table and squint.
We're living through a phase transition in how software gets built, deployed, and monetized. Not a gradual evolution -- a phase transition. The signals are coming from every direction simultaneously. An Anthropic engineer ships 30 PRs a day without writing a single line of code. A non-coder builds a SaaS to $1K MRR in 25 days. Riley Brown runs a 2.5M-follower content operation from a Mac Mini using an AI agent. Greg Isenberg shows you how to build a $273/day directory business with Claude Code. Taalas ships a chip that does 17k tokens/sec at 20x lower cost than Nvidia.
Every one of these stories points in the same direction: the cost of building just collapsed, but the cost of distribution stayed the same. Building a SaaS is now trivial. Finding 100 paying customers is still brutally hard. That's why the boring offer beats the clever offer. That's why directories work -- they solve the distribution problem by becoming the distribution. That's why the YouTube creator with 100 views makes $12K/month -- distribution to the right 100 people is worth more than eyeballs from 100,000 wrong ones.
From the China side, the data tells a complementary story. Hong Kong's first trading day of the Year of the Horse saw internet giants bleed (Alibaba down 5%, Baidu down 6%) while AI model startups exploded (Zhipu up 43%, MiniMax up 14%). Money is rotating from platforms to pure AI plays. Morgan Stanley projects MiniMax could hit $700M revenue by 2027 -- a 10x growth runway. Meanwhile, Alibaba and Meituan may have spent $870M in consumer incentives over the holiday just to maintain engagement. The old playbook of subsidize-and-capture is hitting diminishing returns.
The global macro environment adds another layer. US Q4 GDP came in at only 1.4% -- the government shutdown dragged it down a full percentage point. But business investment grew 3.7%, driven by AI-related information processing equipment. The four largest US tech companies are projected to spend $650 billion combined on data centers and equipment in 2026. Even as the broader economy wobbles, the AI infrastructure buildout is accelerating.
And in the semiconductor layer, LPDDR6 is entering the market faster than expected, pulled forward by AI demand. Intel's foundry business is at its "last chance window" with 18A -- $4.5B in revenue against $2.5B in operating losses. The foundry game requires a positive flywheel of customers, volume, yield improvement, and cost reduction. Intel hasn't started that flywheel yet.
Zoom out. The pattern is: the tools are getting cheaper, faster, and more accessible. The infrastructure is being built at unprecedented scale. The agent economy is forming. And the winners won't be the people who build the best AI -- they'll be the people who build the best distribution for what AI enables. A directory. A niche B2B YouTube channel. An OpenClaw management service for executives. Boring, specific, distribution-solved businesses.
The season is still winter. The energy is building but hasn't peaked. This isn't the moment to rush in swinging -- it's the moment to get your tools sharp and your positioning clear. When the window opens, you want to already be standing in front of it.
Stay sharp. Stay building.
Zecheng
2026-02-21