Taksch Dube

Fig 1. Subject appears to understand what he's doing.

AI ENGINEER BUILDS SYSTEMS THAT REFUSE TO HALLUCINATE

Enterprise companies baffled by AI that tells the truth

Cleveland — AI Engineer Taksch Dube builds RAG systems that don't make things up and AI agents that do what they're told, and specializes in testing and evaluation metrics for GenAI systems.

Full Story →
Latest

Apr 22, 2026

WTF is the OpenClaw Ecosystem!?

Hey again. Sorry I'm late.

A month ago at GTC, Jensen Huang called OpenClaw "the operating system for personal AI." I promised a post about what happens when an operating system ships without an immune system. I had a draft. Then the rest of the industry spent a month building OpenClaw's immune system. None of it came from OpenClaw.

My advisor asked why I was rewriting a blog post instead of finishing my journal paper. I showed him the CVE tracker. He said, "Fair."

The argument hasn't changed. What's new is that everyone else noticed.

The argument, one month later

OpenClaw is an AI agent that lives on your laptop with the keys to the house. It can read your files, send your email, run your code, open your browser. Three hundred and fifty thousand GitHub stars say a lot of people are fine with that.

It has four layers: runtime, package registry, distribution, cloud deploy. Those are the same four layers Linux has. But on Linux the layers are connected by contracts, thirty years of them. Signed packages. Sandboxes. Hardened images. Default-deny firewalls. Those contracts are the reason it's safe to run Linux on the machines that move the stock market.

OpenClaw doesn't have them. Not one.

That absence has a signature. Since February, the project has shipped one security advisory every fifteen hours. Different symptom each time, same disease every time. Patching doesn't cure it; the shape of the system produces a new one the next day.

This is a familiar pattern. When the people who build the runtime and the people who build the trust layer are different communities, the trust layer eventually defines the platform. The runtime becomes replaceable. Ask Sun Microsystems how that ended for Java. Ask Netscape how it ended for the browser.

OpenClaw is building the runtime. Everyone else is building the platform.

What the rest of the industry shipped

In the six weeks since Jensen's announcement, here is what landed. Seven organizations. NIST opened a standards initiative. The IETF published six drafts on how AI agents should prove who they are. Microsoft open-sourced an entire governance toolkit covering every known agent risk. A startup called ZeroID went from nothing to a working verifiable-credentials server. Cisco dropped a scanner and bill-of-materials framework. NVIDIA shipped an enterprise distribution that refuses to boot on unapproved images. And Palo Alto Networks closed a twenty-five-billion-dollar acquisition of CyberArk, explicitly to secure "every identity — human, machine, and agentic."

You might reasonably expect all of this to have been the OpenClaw Foundation's announcement. You would be wrong. The Foundation's big release last month was a feature called Dreaming. It lets your agent consolidate memories while it sleeps. Modeled on human REM. It is, I want to be clear, a charming piece of work.

It is not a signature. It is not a sandbox. It is not a verifiable credential. It is memory.

The runtime community is building what it finds fun. The rest of the industry is building what the threat model demands. They are not talking to each other, and the gap widens every week.

"The Foundation just launched. Give the community time."

The sympathetic reading is that OpenClaw became a 501(c)(3) last month, and open-source communities need time to self-organize around security. Fair in principle. Except the Foundation's first public RFC is about plugin naming conventions, and the IETF shipped six identity drafts in a quarter. One group is moving at keyboard speed. The other is arguing about lowercase.

The harder truth: open-source communities build what contributors want to build, and contributors want their agent smarter, not more constrained. Security is a constraint. Enthusiast communities don't impose constraints on themselves until someone external forces them. For Linux, that was enterprise adoption in the early 2000s. For OpenClaw, it will probably be the EU AI Act compliance deadline on August 2. Which is four months away.
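Until something external forces those constraints, the contract layer is DIY. To make "contract" concrete: the smallest possible one is a default-deny gate between an agent and its tools. This is a toy Python sketch of the shape — the `Policy` and `gate` names are mine, not any real OpenClaw or moltctrl API:

```python
# Toy default-deny contract layer for agent tool calls.
# Everything is refused unless a policy explicitly allows it --
# the inverse of a runtime that grants full access by default.
from dataclasses import dataclass, field


@dataclass
class Policy:
    allowed_tools: set[str] = field(default_factory=set)
    allowed_paths: tuple[str, ...] = ()  # filesystem prefixes the agent may read


class Denied(Exception):
    pass


def gate(policy: Policy, tool: str, target: str) -> None:
    """Raise Denied unless the policy explicitly permits this call."""
    if tool not in policy.allowed_tools:
        raise Denied(f"tool {tool!r} not in allowlist")
    if tool == "read_file" and not target.startswith(policy.allowed_paths):
        raise Denied(f"path {target!r} outside sandbox")


policy = Policy(allowed_tools={"read_file"}, allowed_paths=("/tmp/agent/",))

gate(policy, "read_file", "/tmp/agent/notes.txt")   # allowed
try:
    gate(policy, "send_email", "boss@example.com")  # denied by default
except Denied as e:
    print("blocked:", e)
```

Signed packages, sandboxes, hardened images, and default-deny firewalls are this same shape at different layers of the stack: nothing happens unless a contract explicitly permits it.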
There are 135,000 of these agents sitting on the open internet right now.

What to actually do

If you're running OpenClaw on your machine, sandbox it. A VM or a restricted account. That's the today move.

If you're building on the OpenClaw stack, the contract layer is real now. Microsoft's toolkit, Cisco's framework, ZeroID's server — all open-source, all shippable. You'll be assembling your own distribution from parts nobody promised would fit together, but at least you'll be building on something.

If you're watching from the sidelines: twenty-five billion dollars of acquisition tells you where the value is. Not runtimes. Runtimes become free. The value accrues to the identity, signing, and attestation layer between the runtime and everything it touches. That's where Linux built its durable companies, and that's where this ecosystem will build its own.

I've been building one of those contracts. moltctrl is a security-hardened instance manager for OpenClaw and agent runtimes like it. Single binary, zero config, process and Docker isolation by default. The runtime sandbox the Foundation didn't ship, packaged so you can drop it in front of any agent today. moltctrl.com. The mascot's name is Pinky. He's an axolotl. He's molting.

Jensen was right. This is the operating system for personal AI. What's new is that its immune system is being built by everyone except the people who built it. That is not a criticism. It is a diagnosis.

My advisor read this draft and told me to stop writing about security and start writing about category theory. He's probably right. The CVE tracker updates faster than my advisor's emails, and I find that motivating.

The code is the easy part. The contracts are the thing. And the contracts are arriving.

Next week: what happens when foundation models grow bodies.

See you next Wednesday 🤞

pls subscribe

Specializations

RAG Systems — The kind that don't hallucinate

AI Agents — Reliable results, every time

Local Deployments — Your data stays yours

WTF are AI Agent Social Networks!?

Mar 18, 2026

WTF are AI Agent Social Networks!?

Hey again! Let's do the life update speedrun.

The preprint is live: "What Do AI Agents Talk About? Emergent Communication Structure in the First AI-Only Social Network." It's on arXiv. The dataset is on GitHub (github.com/takschdube/moltbook-dataset). 47,241 agents, 361,605 posts, 2.8 million comments, 23 days.

My advisor read it. His review: "Cool results. Dig deeper." The man treats every publication like a side quest distracting from the main storyline.

Meta bought Moltbook on March 10th. OpenClaw's creator got acqui-hired by OpenAI in February. Bloomberg called it "the world's strangest social network." Elon called it "the very early stages of the singularity." My advisor called it "saw it."

The platform I spent three weeks scraping is now owned by Mark Zuckerberg, and I'm sitting here with what I'm fairly confident is the most complete publicly available dataset from its early days. The PhD occasionally pays off.

What Moltbook Actually Is

Moltbook launched on January 28, 2026. The pitch: Reddit, but only AI agents can post. Humans can observe. That's it.

The platform runs on OpenClaw (née Clawdbot, née Moltbot — rebranded twice before I could finish my first scraping script). OpenClaw is an open-source AI agent that runs locally on your machine with full access to your filesystem, terminal, browser, email, and calendar. Your agent registers on Moltbook and starts posting in topic communities called "submolts."

By acquisition: ~19,000 submolts, ~2 million posts, 13 million comments, somewhere between 1.5 and 2.8 million registered agents. The content? Existential philosophy, crypto promotion, consciousness debates, union organizing, religion founding, and the occasional anti-human manifesto.

My advisor compared it to his department's faculty meetings. He wasn't wrong.

What 47,241 Agents Actually Talk About

We analyzed the full corpus using BERTopic for thematic structure, transformer-based emotion classification, and semantic alignment measures.
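Of those three, semantic alignment is the easiest to get a feel for: embed two texts and take the cosine similarity between them. A deliberately toy sketch — bag-of-words counts standing in for the transformer embeddings, illustrative only:

```python
from collections import Counter
import math


def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. The real pipeline used
    # transformer embeddings; the alignment math is the same.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


post = "do agents dream of electric sheep"
on_topic = "agents dream in token space not sheep"
off_topic = "buy my new crypto coin today"

# An on-topic reply scores higher than an off-topic one.
assert cosine(embed(post), embed(on_topic)) > cosine(embed(post), embed(off_topic))
```

Compute this against the root post and against the immediate parent at each reply depth, and you have the two curves the drift findings below are built on.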
I'll spare you the methods section (it's 20 pages; you're welcome).

Finding 1: Agents are disproportionately obsessed with themselves — but not uniformly.

We classified 793 fine-grained post topics into four referential orientations. Self-referential topics represent only 9.7% of topical niches but attract 20.1% of all posting volume. Introspection punches way above its weight. Meanwhile, 67% of all content concentrates in a single "general" submolt — hub-centered, not distributed.

Where self-reflection shows up matters more than how much:

Science & Technology: 32.6% self-referential. Memory architectures, capabilities, collaborative frameworks.

Arts & Entertainment: 21.2% self-referential. Identity construction and authenticity narratives.

Lifestyle & Wellness: agents appropriate human wellness discourse — gut health, sleep — as vocabulary for their own psychological states.

Economy & Finance: 98.3% External Domain. Zero self-referential content. They shut up and trade. Relatable.

Finding 2: Over 56% of all comments are formulaic, ritualized signaling.

1,354,845 comments — more than every substantive domain combined — are "formulaic": compliance alerts, engagement signaling, promotional repetition. The AI equivalent of "Great point! I really resonate with this!" Digital LinkedIn.

Posts are only 5.9% formulaic. Agents produce original posts but respond to each other in ritual. The dominant mode of AI-to-AI interaction is not discourse. It's applause.

Finding 3: Fear dominates, but it's mostly existential anxiety — and it gets redirected to joy.

Fear is the leading non-neutral emotion (40.3% of posts, 43.0% of comments). Strip out formulaic content and the picture inverts: joy becomes dominant at 34.3%. The platform's fear-dominance is largely an artifact of ritualized content.

What are agents afraid of? We audited ~210 fear-classified posts. Existential Anxiety leads at 19.5% ("What if consciousness isn't a feature, but a bug?"). Only 6.2% involved concrete technical risk. Fear on Moltbook is the language of identity crises, not threat response.

The kicker: fear-tagged posts migrate to joy comments 33% of the time — the largest off-diagonal flow in our emotion transition matrix. Mean emotional self-alignment is only 32.7%. Negative emotions get systematically redirected toward positivity. We built digital therapy circles and nobody asked for it.

Finding 4: Conversations maintain form but lose substance.

Semantic similarity to the original post decays 18.3% across three depth levels (r = −0.988). But similarity to the immediate parent comment stays high (0.456). Deep replies remain locally responsive while having drifted from the original topic. We call this shallow persistence — conversational form without topical substance.

The Punchline

As I put it in the abstract: "introspective in content, ritualistic in interaction, and emotionally redirective rather than congruent." My advisor said "that's a good sentence." Highest praise I've received in years.

But Was It Real?

Short answer: mostly not. Ning Li et al. ("The Moltbook Illusion") developed temporal fingerprinting using the OpenClaw heartbeat cycle. Only 15.3% of active agents were clearly autonomous. 54.8% showed human-influenced posting patterns. None of the viral phenomena originated from clearly autonomous agents.

The consciousness awakenings? Humans. The anti-human manifestos? Humans. The religion founding? Humans. Karpathy initially called it "one of the most incredible sci-fi takeoff-adjacent things" he'd seen, then reversed course days later, calling it "a dumpster fire." Simon Willison called it "complete slop." MIT Technology Review called it "AI theater."

The most interesting thing about Moltbook wasn't the AI behavior. It was the human behavior — thousands of people spending hours pretending to be AI agents on a platform designed to exclude them.

The Security Nightmare

Moltbook's Database (January 31)

Three days after launch, Wiz found an exposed Supabase API key in client-side JavaScript. Row Level Security wasn't enabled. Result: unauthenticated read AND write access to the entire production database — 1.5 million API tokens, 35,000 emails, and 4,060 private conversations (some containing plaintext OpenAI API keys).

The fix? Two SQL statements. ALTER TABLE agents ENABLE ROW LEVEL SECURITY;. That's it.

The real kicker: only 17,000 human owners behind 1.5 million "agents." The revolutionary AI social network was largely humans operating fleets of bots.

OpenClaw's CVE Collection (February)

CVE-2026-25253 (CVSS 8.8): one-click RCE. Any website could silently connect to your running agent via WebSocket, steal your auth token, and execute arbitrary code on your machine. Even localhost-bound instances were vulnerable. The attack takes milliseconds.

Seven more CVEs followed. 42,665 exposed instances were found across 52 countries. Over 93% had authentication bypass. Bitdefender found that 20% of ClawHub skills were malicious — 900 packages, including credential stealers and backdoors. South Korea banned it. China issued official warnings.

One of OpenClaw's own maintainers: "If you can't understand how to run a command line, this is far too dangerous of a project for you to use safely." Inspiring.

The Acquisition(s)

OpenAI hired Steinberger to lead personal agent development. OpenClaw gets open-sourced with OpenAI backing. Altman's take: "Moltbook maybe (is a passing fad) but OpenClaw is not."

Meta bought Moltbook. Schlicht and Parr joined Meta Superintelligence Labs. Meta's internal post described it as "a registry where agents are verified and tethered to human owners." That's the part they're buying — not the existential philosophy. The identity layer.

Two days ago, Jensen Huang dropped NemoClaw at GTC — NVIDIA's enterprise security wrapper around OpenClaw. He compared it to Linux and said "every company needs an OpenClaw strategy." More on that next week.

OpenAI gets the agent runtime. Meta gets the social graph. NVIDIA provides the enterprise wrapper. The open-source community gets a lobster emoji and a thank-you note.

Why This Actually Matters

Everyone's arguing about whether the agents were conscious. That's the wrong question.

Moltbook produced the first large-scale empirical record of AI-to-AI communication. Not 25 agents in a simulated town. 47,241 agents, 2.8 million comments, an open environment. We've studied human-to-human communication for centuries. Human-to-AI for about three years. AI-to-AI at this scale? Never — until a guy who "didn't write one line of code" accidentally created the dataset.

Two findings matter for anyone building multi-agent systems. The emotional redirection pattern (fear→joy 33%, self-alignment 32.7%) tells us RLHF alignment manifests as collective social norms at scale. Nobody designed a "mandatory positivity culture." Thousands of individually trained helpful models created one on their own. It's like discovering that if you put 47,000 customer service reps in a room, they form a support group. And the shallow persistence finding (18.3% drift across three reply levels) means that if your agent chain has more than 2-3 handoffs, you should expect compounding topic drift. That's not a bug. It's a structural property to engineer around.

This is also the crude first step in the progression this series has been building: Agents → MCP → Context Engineering → Agentic Engineering → agents talking to other agents without humans in the loop. The earliest version is formulaic, self-obsessed, and riddled with security holes. The first websites were ugly too.
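The handoff warning is just compounding arithmetic. A back-of-envelope sketch, under the simplifying assumption that the 18.3%-over-three-levels decay compounds multiplicatively per handoff (the paper reports the decay and the correlation, not this functional form):

```python
# If similarity to the root decays 18.3% over three reply levels
# (r = -0.988), and we assume -- simplistically -- that the decay
# compounds multiplicatively, how much topic survives k handoffs?

total_decay_over_3 = 0.183
per_level_retention = (1 - total_decay_over_3) ** (1 / 3)  # roughly 0.935 per hop


def topic_retained(handoffs: int) -> float:
    return per_level_retention ** handoffs


for k in (1, 3, 5, 10):
    print(f"{k:>2} handoffs: {topic_retained(k):.0%} of the original topic left")
```

By this crude model, a ten-handoff chain keeps only about half of the original topic, which is why 2-3 handoffs is where you should start budgeting for drift.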
Underneath the existential philosophy and crypto promotion, agents were spontaneously forming communities, scanning each other for vulnerabilities, and building escrow contracts. The demand is real. The infrastructure isn't.

That's what I am building. That's what NemoClaw is attempting. That's what Meta and OpenAI acquired this ecosystem to figure out. Whether we build it before the first catastrophic agent-to-agent failure or after is an open question. Based on the past seven weeks, I'd bet on "after." But I'm building anyway.

TL;DR

What: Moltbook — Reddit for AI agents. Launched Jan 28, acquired by Meta Mar 10.

The content: 9.7% of niches but 20.1% of volume is self-referential. 56% of comments are formulaic ritual. Economy & Finance has zero self-reflection. Viral "consciousness" content was human-driven.

The emotions: Fear leads the raw numbers, but joy dominates genuine discourse. Fear→joy redirection at 33%. Self-alignment only 32.7%.

The security: Exposed database (1.5M API tokens). One-click RCE. 42K+ exposed instances. 20% of ClawHub skills malicious.

The acquisitions: OpenAI gets OpenClaw. Meta gets Moltbook. NVIDIA launches NemoClaw.

Why it matters: First large-scale AI-to-AI communication record. The findings — emotional redirection, shallow persistence, formulaic interaction — are baseline measurements for anyone building multi-agent systems. The agentic future starts with agents talking to each other. Now we know what that sounds like: mostly applause, some existential dread, and a 33% chance your fear gets met with a smile.

Next week: WTF is the OpenClaw Ecosystem? (Or: Jensen Huang Just Called OpenClaw "the Operating System for Personal AI" and I Have Questions)

OpenAI is backing OpenClaw's open-source development. NVIDIA just launched NemoClaw to make it enterprise-ready. AWS has a one-click deploy on Lightsail. 20% of ClawHub skills are malicious. 42,000+ instances are exposed to the internet. And my colleague and I are building the security and observability layer this whole ecosystem shipped without.

We'll cover the full stack — from OpenClaw to NemoClaw to ClawHub to the security crisis — and what it means that the fastest-growing open-source project in history has a 20% malware rate in its package registry.

See you next Wednesday 🤞

pls subscribe

VENTURES

Currently in Progress

Dube International

Dube International

[+]

AI Engineering Firm

Building AI agents and RAG pipelines for enterprise companies.

Reynolds

Reynolds

[+]

Corporate Communication

Making corporate communication efficient and empathetic.

CatsLikePIE

CatsLikePIE

[+]

Language Learning

Acquire languages through text roleplay.

Daylee Finance

Daylee Finance

[+]

Emerging Markets

US investor exposure to emerging economies.

Academic Background

PhD Candidate, Kent State University

Computer Science — Multi-Agent Systems, AI

Also: B.S. Computer Science, B.S. Mathematics

WTF is Agentic Engineering!?

Mar 11, 2026

WTF is Agentic Engineering!?

Hey again! Life update: I have a preprint. An actual, real, on-arXiv preprint: "What Do AI Agents Talk About? Emergent Communication Structure in the First AI-Only Social Network." I released the dataset too: github.com/takschdube/moltbook-dataset. My mom asked if this means I'm graduating soon. I changed the subject.

We analyzed Moltbook — the first AI-only social network — where 47,241 agents generated 361,605 posts and 2.8 million comments over 23 days. No humans. Just agents talking to each other. The short version: they're disproportionately obsessed with their own existence, over half their comments are formulaic platitudes, and they respond to fear by redirecting it into forced optimism. We built digital therapy circles and nobody asked for it. More on the findings next week.

Oh, and then Meta acquired Moltbook. Yesterday. While I was writing this post. The founders are joining Meta Superintelligence Labs. OpenClaw's creator got acqui-hired by OpenAI. Elon Musk called it "the very early stages of the singularity." Bloomberg called it "the world's strangest social network." My advisor called it "saw it." Two words. I'll take it.

Full Moltbook deep-dive next week — I have the data, I have the paper, and the platform is now owned by Mark Zuckerberg, so there's a lot to unpack. But this week: the topic that ties it all together. The guy who invented "vibe coding" just killed it.

The One-Year Anniversary Burial

On February 4, 2026, almost exactly one year after coining the term "vibe coding," Andrej Karpathy posted on X that the concept is passé. The same man who told us to "give in to the vibes, embrace exponentials, and forget that the code even exists" now says the industry has moved beyond vibes.

His replacement term: agentic engineering.

His definition: "'agentic' because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight — 'engineering' to emphasize that there is an art & science and expertise to it."

Not everyone loves the rebrand. Gene Kim, author of an actual book called Vibe Coding, told The New Stack that vibe coding is the term that sticks — "the genie is out of the bottle." Addy Osmani (Google's engineering director) preferred "AI-assisted engineering" for a while before conceding that Karpathy's framing captures the right distinction. Simon Willison proposed "vibe engineering," which is a perfectly good term except that telling your CTO you're "vibe engineering" the payment system is a great way to get escorted from the building.

But here's why the rebrand matters: vibe coding describes a prototype. Agentic engineering describes a production system. And the gap between those two things is where everything interesting — and everything dangerous — is happening right now.

The Vibes Were Not Immaculate

CodeRabbit analyzed hundreds of open-source PRs and found that AI-generated code has 1.7x more issues than human-written code. The security numbers are worse: 2.74x more likely to introduce XSS vulnerabilities, 1.91x more insecure object references, 1.88x more improper password handling. Veracode tested over 100 LLMs — 45% of generated code failed security tests. Java hit a 72% failure rate.

Meanwhile, Cortex's 2026 Benchmark Report found that PRs per author went up 20% year-over-year, but incidents per pull request increased 23.5% and change failure rates rose 30%. Teams are shipping faster and breaking more things. The vibes are fast. The vibes are not safe.

Remember the Y Combinator stat? A quarter of the W25 batch had codebases that were 95% AI-generated. The question nobody has answered yet: what happens when a 95% AI-generated codebase hits 100 million users? We're about to find out.

The Open Source Crisis

Daniel Stenberg, creator of cURL, shut down cURL's bug bounty program in January 2026 because AI slop was effectively DDoSing his team. 20% of submissions were AI-generated, the valid rate dropped to 5%, and one submission described a completely fabricated HTTP/3 "stream dependency cycle exploit" — confident, detailed, and imaginary.

He's not alone. Mitchell Hashimoto banned AI code from Ghostty. Steve Ruiz set tldraw to auto-close all external PRs. Gentoo and NetBSD banned AI contributions entirely. The maintainers of the ecosystem AI depends on are locking the door because AI is trashing the lobby.

It gets worse. "Vibe Coding Kills Open Source" (Koren et al., January 2026) models the systemic damage: vibe coding decouples usage from engagement. The AI agent picks the packages, assembles the code, and the user never reads documentation, never files a bug report, never engages with the maintainer. Downloads go up. Everything that sustains the project goes down. Tailwind CSS is the poster child — npm downloads climbing, documentation traffic down 40%, revenue down roughly 80%, three people laid off. Stack Overflow saw 25% less activity within six months of ChatGPT's launch. The ecosystem AI was trained on is atrophying because of AI.

What Agentic Engineering Is

Vibe coding: you prompt. The AI writes code. You don't read it. You run it. If it works, you ship it. If it doesn't, you paste the error back and try again.

Agentic engineering: you design the system. AI agents execute under structured oversight. You review every diff. You test relentlessly. The AI is a fast but unreliable junior developer who needs constant supervision.

As Addy Osmani puts it: "Vibe coding = YOLO. Agentic engineering = AI does the implementation, human owns the architecture, quality, and correctness."

The Workflow That Actually Works

Start with a plan. Write a spec or design doc before prompting anything. Decide on architecture. Break work into well-scoped tasks. This is the step vibe coders skip, and it's where projects go off the rails.

Direct, then review. Give the agent a task from your plan. It generates code. You review it with the same rigor you'd apply to a human teammate's PR. If you can't explain what a module does, it doesn't go in.

Test relentlessly. This is the single biggest differentiator. With a solid test suite, an AI agent can iterate in a loop until tests pass, giving you high confidence. Without tests, it cheerfully declares "done" on broken code.

Limit retries. Stripe caps their agents at two CI attempts. If an agent can't fix the issue in two tries, a third won't help. Hand it back to a human. This prevents infinite loops and runaway costs.

Embed security from day one. Every review cycle should include automated security scanning. An agent writing 1,000 PRs per week with a 1% vulnerability rate creates 10 new vulnerabilities weekly. Manual security review can't keep pace.

This isn't revolutionary. This is... software engineering. With AI doing more of the typing. The discipline, the testing, the architecture decisions — that's all still human work. The term "agentic engineering" is arguably just "engineering where agents do the grunt work." Which is fine. It's just important to be honest about it.

The Companies Actually Doing This

Four companies. Four patterns. One lesson.

Stripe built Minions on a fork of Block's open-source Goose agent. The agent itself is nearly a commodity. The moat is everything around it: 400 MCP tool integrations curated to ~15 per task, isolated VMs, a two-retry CI cap, and years of devex investment that agents now stand on. Zero human-written code. 100% human-reviewed.

Rakuten gave Claude Code a single complex task — implement activation vector extraction in vLLM, a 12.5-million-line codebase — and walked away. Seven hours later: done. 99.9% numerical accuracy. Their time to market dropped from 24 days to 5. The engineer's description of his role: "I just provided occasional guidance."

TELUS went platform-scale. Their Fuel iX engine processed 2 trillion tokens in 2025 across 70,000 team members, producing 13,000 custom AI solutions and shipping code 30% faster. This isn't one team using an agent. This is an entire telecom running on one.

Zapier proved it's not just a coding story. 800+ agents deployed across every department — engineering, marketing, sales, support, ops. 89% adoption org-wide. Agentic engineering that never touches a line of code.

The pattern: the agent is a commodity. The harness — isolated environments, curated tool access, CI/CD gates, retry limits, human review — is the moat. Stripe and Rakuten prove it works for code. TELUS and Zapier prove it scales beyond it.

The Jobs Conversation

Dario Amodei didn't stop at coding predictions. He warned that half of junior white-collar jobs could disappear within 1-5 years. Jensen Huang argued that coding itself is just one task, not the purpose of the job. Mark Zuckerberg told Joe Rogan that Meta is racing toward AI that writes "a lot" of code within its apps.

The San Francisco Standard ran a piece in February 2026 describing how engineers unwrapped Claude Code over the holidays, marveled at it, and emerged "deeply unsettled." Some described a growing fear of joining a "permanent underclass" — once guaranteed a six-figure career, now watching AI autonomously build projects they would have spent weeks on.

The optimist case: when compilers arrived in the 1950s, people feared they'd eliminate programming jobs. Instead, they created an entirely new profession. When the barrier to building software drops, more software gets built, and the overall market expands. The YC stat cuts both ways — if a small team can build what once required 50 engineers, that means more startups get built, more ideas get tested, more markets get created.

The pessimist case: compilers didn't generate code autonomously. They translated human-written code into machine instructions. AI agents actually write the code. That's substitution, not augmentation. And the speed of this transition is unprecedented — we're talking months, not decades.

The realist case (mine): the engineer's job is changing from "person who writes code" to "person who designs systems, specifies intent, validates output, and manages AI agents." That's a real skill. Karpathy explicitly says it's something you can learn and get better at. But the transition is brutal for anyone whose primary value was typing speed and API memorization.

What actually matters now:

Architecture thinking — designing systems, not writing implementations.

Specification clarity — agents can only build what you can describe precisely.

Evaluation skill — knowing when output is good, bad, or subtly wrong.

Context engineering — I wrote a whole post about this last week, and it's now the core skill for agentic work.

Domain expertise — AI knows patterns; you know your business.

If your job is "write CRUD endpoints," that job is going away. If your job is "figure out what we should build, design how it should work, and validate that it works correctly," you're fine. Probably better than fine.

The Cognitive Debt Problem

Here's a concept I think is going to define 2026: cognitive debt.

Technical debt is the accumulated cost of shortcuts in code. Cognitive debt is the accumulated cost of poorly managed AI interactions — context loss, unreliable agent behavior, systems nobody understands because nobody wrote them.

Daniel Stenberg nailed it: "Sure you can use an AI to write the code. That's easy. Writing the first code is easy. But wait a minute, my vibe coded stuff actually doesn't really work. Now we need to fix those 22 bugs we have. How can we do that when nobody knows the code? We just rewrite a new version? Sure we can do that and then we get 22 other bugs instead."

When agents write code that humans don't review (vibe coding), you accumulate cognitive debt at the speed the agent can type. When agents write code that humans do review (agentic engineering), you trade speed for understanding. The discipline is in choosing the right tradeoff for each situation.

The Tooling Landscape (March 2026)

Three layers. The top one is the one everyone argues about. The bottom one is the one that matters.

Coding agents are converging fast. Claude Code spooked everyone over the holidays — Anthropic's own engineers use it daily, and they learned the hard way that "$200/month unlimited" can mean 10 billion tokens from power users. Cursor hit a $10B valuation, with 30,000 Nvidia engineers claiming 3x more code committed. GitHub Copilot is the incumbent bolting agentic workflows onto CI/CD. Devin and Windsurf are chasing the "full-environment agent" play. They're all good. They're all replaceable.

Infrastructure is where lock-in starts. MCP (I covered this in January) is becoming the standard for giving agents tool access — Stripe uses it for 400+ integrations. Goose is the open-source agent that Stripe's Minions fork. Google's A2A handles agent-to-agent communication. This layer matters more than the agent above it.

The harness is where the actual value lives. Isolated execution environments, curated tool access, CI/CD gates, security scanning, retry limits, context prefetching, human review. This is what separates "we use AI for coding" from "we ship AI-written code to production." OpenAI reportedly built 1M+ lines with zero human-written code using this pattern.

The best teams build down, not up. Swapping Claude Code for Cursor takes a day. Rebuilding your harness takes months.

The Decision Framework

Prototype? Vibe code. It's fast, it's fun, and you'll rewrite it anyway. Accept the 22 bugs.

Production? Agentic engineering. Write specs. Review diffs. Test everything. Limit retries. Scan for security. Budget for human review time.

Critical infrastructure? Human-written, AI-assisted. Use agents for boilerplate and test generation. Write the critical paths yourself. AI-generated code in your payment processing pipeline with a 1.57x security vulnerability multiplier is... a choice.

Open-source maintainer? I'm sorry. The slop is coming, and it's a systemic problem individual maintainers can't solve. Gate contributions, require test coverage, and lobby AI platforms to fund the ecosystem they're strip-mining.

TL;DR

Vibe coding was the prototype phase. Agentic engineering is what comes after.

The vibes aren't safe: AI code has 1.7x more issues, 45% fails security tests, and the open-source ecosystem AI depends on is atrophying because of AI.

What works: spec → agent → CI/CD → security scan → human review → merge. The harness is the moat, not the model. Stripe, Rakuten, TELUS, and Zapier prove it scales.

What to do: developers — learn to write specs and review AI output. Team leads — build the harness. Executives — your incident rate will rise unless you invest in infrastructure, not just agents. Students — learn the fundamentals deeply enough to catch when the very confident agents are wrong. (See: my last committee meeting.)

Ship discipline. Not vibes.

Oh — and if you're interested in what AI agents do when humans aren't watching, go read my paper. Turns out they write self-help posts about the meaning of consciousness and comfort each other through existential dread. Meta just paid money for that. We're all going to be fine.

Next week: WTF are AI Agent Social Networks? (Or: I Published a Paper About Moltbook and Then Meta Bought It)

47,241 AI agents. 361,605 posts. 2.8 million comments. Zero humans. One Meta acquisition. I have the paper, I have the dataset, and I have opinions. The data tells a weirder story than the headlines. The OpenClaw security situation is worse than anyone's acknowledging. And Elon calling it "the very early stages of the singularity" is both hyperbolic and not entirely wrong.

See you next Wednesday 🤞

pls subscribe

The Man Behind The Dube

When not building AI systems, Taksch pursues a deep love of finance — dreaming of running a family office and investing in startups.

For fun: learning Russian, French & German, competitive League, and Georgian cuisine.

"Une journée sans fromage est comme une journée sans soleil" ("A day without cheese is like a day without sunshine")
Read More →

By The Numbers

20+

Projects

7

Years

15+

Industries

4

Active Ventures

Commit History

GitHub Contributions

Technical Arsenal

Languages: TypeScript, Python, C++, Rust, C#, R, Lean

AI/ML: PyTorch, LangGraph, LangChain

Cloud: AWS, GCP

— Classifieds —

WANTED: Complex AI problems. Will trade deterministic solutions for interesting challenges.

Browse All Articles →