Jan 21, 2026
WTF is the EU AI Act!?
Hey again! Week three of 2026.

My advisor reviewed my research draft this week. His feedback: "Looks good for a baby." I pointed out that the EU AI Act prohibits AI systems that exploit vulnerabilities of individuals based on age. He said that only applies to AI, and unfortunately, my writing is entirely human-generated. Couldn't even blame Claude for this one.

So the EU passed the world's first comprehensive AI law. Prohibited practices are already banned. Fines are up to €35 million or 7% of global revenue. The big enforcement deadline is August 2, 2026.

...that's 193 days away.

And about 67% of tech companies are still acting like it doesn't apply to them.

Let's fix that.

What the EU AI Act Actually Is

A risk-based regulatory framework for AI. Think GDPR, but for artificial intelligence.

┌───────────────────────────────────────────────┐
│ RISK LEVELS                                   │
├───────────────────────────────────────────────┤
│ UNACCEPTABLE → Banned. Period.                │
│ HIGH-RISK    → Heavy compliance requirements  │
│ LIMITED RISK → Transparency obligations       │
│ MINIMAL RISK → Unregulated                    │
└───────────────────────────────────────────────┘

Most AI systems? Minimal risk. Your spam filter, recommendation algorithm, AI video game NPCs — unregulated.

The stuff that matters: prohibited practices (already illegal) and high-risk systems (August 2026).

The Timeline That Matters

February 2, 2025: Prohibited practices banned. AI literacy required.
August 2, 2025: GPAI model obligations live. Penalties enforceable.
August 2, 2026: High-risk AI requirements. Full enforcement. ← The big one
August 2, 2027: Legacy systems and embedded AI.

Finland went live with enforcement powers on December 22, 2025.
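If you want to sanity-check that 193-day countdown yourself, it's a one-liner with Python's standard library, using the two dates from this issue:

```python
from datetime import date

# Dates from the timeline above: this issue's date vs. the
# high-risk enforcement deadline.
issue = date(2026, 1, 21)
deadline = date(2026, 8, 2)

print((deadline - issue).days)  # → 193
```

Swap in today's date and watch the number shrink.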
This isn't theoretical anymore.

What's Already Illegal (Since Feb 2025)

Eight categories of AI are banned outright:

Manipulative AI: Subliminal techniques that distort behavior
Vulnerability exploitation: Targeting elderly, disabled, or poor populations
Social scoring: Rating people based on behavior for unrelated consequences
Predictive policing: Flagging individuals as likely criminals based on profiling or personality traits
Facial recognition scraping: Clearview AI's business model
Workplace emotion recognition: No monitoring whether employees "look happy"
Biometric categorization: Inferring race/politics/orientation from faces
Real-time public facial recognition: By law enforcement (with narrow exceptions)

The fine: €35M or 7% of global turnover. Whichever is higher.

For Apple, 7% of revenue is ~$26 billion. For most companies, €35M is the ceiling. For Big Tech, the percentage is the threat.

The August 2026 Problem

High-risk AI systems get heavy regulation. "High-risk" includes:

Hiring tools: CV screening, interview analysis, candidate ranking
Credit scoring: Loan decisions, insurance pricing
Education: Automated grading, admissions decisions
Biometrics: Facial recognition, emotion detection
Critical infrastructure: Power grids, traffic systems
Law enforcement: Evidence analysis, risk assessment

If your AI touches hiring, credit, education, or public services in the EU, you're probably high-risk.

What high-risk requires:

Risk management system (continuous)
Technical documentation (comprehensive)
Human oversight mechanisms
Conformity assessment before market placement
Registration in EU database
Post-market monitoring
Incident reporting

Estimated compliance cost:

Large enterprise: $8-15M initial
Mid-size: $2-5M initial
SME: $500K-2M initial

This is why everyone's nervous.

GPAI Models (Already Live)

Since August 2025, providers of General-Purpose AI models have obligations.

What counts as GPAI: Models trained on >10²³ FLOPs that generate text, images, or video.
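To get a feel for where that 10²³ FLOP line sits, here's a back-of-envelope estimate using the common ~6 × parameters × training-tokens rule of thumb for dense transformers. This is a heuristic, not the Act's official assessment method, and the model sizes below are hypothetical examples:

```python
# Rough training-compute estimate vs. the EU AI Act's GPAI threshold.
# Uses the ~6 * params * tokens heuristic for dense transformer training;
# the Act's actual assessment methodology may differ.
GPAI_THRESHOLD_FLOPS = 1e23

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical examples: a 7B and a 70B model, each trained on 2T tokens.
print(training_flops(7e9, 2e12) >= GPAI_THRESHOLD_FLOPS)   # 8.4e22 → False
print(training_flops(70e9, 2e12) >= GPAI_THRESHOLD_FLOPS)  # 8.4e23 → True
```

At today's training scales, anything frontier-sized clears the bar easily, which is why every major lab's flagship model is in scope.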
GPT-5, Claude, Gemini, Llama — all of them.

Who signed the Code of Practice:

OpenAI ✓
Anthropic ✓
Google ✓
Microsoft ✓
Amazon ✓
Mistral ✓

Who didn't:

Meta (refused entirely)
xAI (signed the safety chapter only, called the copyright rules "over-reach")

Signing gives you "presumption of conformity" — regulators assume you're compliant unless proven otherwise. Not signing means stricter documentation audits when enforcement ramps up.

The Extraterritorial Reach

Here's the part US companies keep ignoring.

The EU AI Act applies if:

You place AI on the EU market (regardless of where you're based)
Your AI's output is used by EU residents
EU users can access your AI system

That last one is the killer. Cloud-based AI? If Europeans can access it, you might be in scope.

The GDPR precedent:

Meta: €1.2 billion fine (2023)
Amazon: €746 million (2021)
Meta again: €405 million (2022)

All US companies. All extraterritorial enforcement. The EU AI Act follows the same playbook.

You cannot realistically maintain separate EU/non-EU versions of your AI. One misrouted user triggers exposure. Most companies will apply AI Act standards globally (same as GDPR).

My Takes

This is GDPR 2.0

Same extraterritorial reach. Same "we'll fine American companies" energy. Same pattern where everyone ignores it until the first major enforcement action, then panics.

The difference: AI Act fines are higher (7% vs 4% of revenue).

August 2026 is not enough time

Conformity assessment takes 6-12 months. Technical documentation takes months. Risk management systems don't build themselves.

Companies starting in Q2 2026 will not make the deadline. The organizations that will be ready started in 2024.

The Digital Omnibus won't save you

The EU proposed potential delays tied to harmonized standards availability. Don't count on it. The Commission explicitly rejected calls for blanket postponement. Plan for August 2026.

High-risk classification is broader than you think

Using AI for hiring? High-risk. Using AI for customer creditworthiness? High-risk.
Using AI in educational assessment? High-risk.

A lot of "standard business AI" falls into high-risk categories.

The prohibited practices are already enforced

This isn't future tense. If you're doing emotion recognition on employees, social scoring, or predictive policing, you're already violating enforceable law. Stop (pls).

Should You Care?

Yes, if:

EU residents use your AI systems
Your AI generates outputs used in the EU
You have EU customers (even B2B)
Your AI touches hiring, credit, education, or public services
You're a GPAI model provider

No, if:

Your AI is genuinely minimal risk (spam filters, recommendation engines for non-critical decisions)
You have zero EU exposure (rare in 2026)

Definitely yes, if:

You're in regulated industries (healthcare, finance, legal)
You're building foundation models
You're deploying AI in HR, lending, or education

The Minimum Viable Checklist

This week:

Inventory all AI systems [_]
Classify each: prohibited, high-risk, GPAI, limited, minimal [_]
Check for prohibited practices (stop them immediately) [_]

This month:

AI literacy training for staff [_]
Begin technical documentation for high-risk systems [_]
Identify your role: provider vs. deployer [_]

Before August 2026:

Complete conformity assessments [_]
Register high-risk systems in EU database [_]
Establish post-market monitoring [_]

If you're reading this in late January 2026 and haven't started, you're behind. Not "a little behind." Actually behind.

The TL;DR

Already illegal: Social scoring, manipulative AI, emotion recognition at work, facial recognition scraping
August 2026: High-risk AI requirements, full enforcement powers
Who it applies to: Everyone whose AI touches EU users. Yes, US companies.
The fines: Up to €35M or 7% of global revenue. Market bans.
The reality: 193 days until the big deadline. Compliance takes 6-12 months. Do the math.

The EU AI Act is happening. The question isn't whether to comply; it's whether you can get compliant in time.

Next week: WTF are Reasoning Models?
(Or: Why Your $0.01 Query Just Cost $5)

o1, o3, DeepSeek-R1 — there's a new class of models that "think" before answering. They chain through reasoning steps, debate themselves internally, and actually solve problems that made GPT-4 look stupid.

The catch? A single query can burn $5 in "thinking tokens" you never see. Your simple question triggers 10,000 tokens of internal deliberation before you get a response.

We'll cover how reasoning models actually work, when they're worth the 100x cost premium, when you're just lighting money on fire, and why DeepSeek somehow made one that's 10x cheaper than OpenAI's. Plus: the chain-of-thought jailbreak that broke all of them.

See you next Wednesday 🤞

pls subscribe