AI Stack Weekly
Issue #3 · April 3, 2026
Healthcare AI · AI Drug Discovery · Open Models · AI Regulation · Clinical AI
This week’s signal: Big Pharma is no longer piloting AI drug discovery — it’s buying it at scale. Eli Lilly’s $2.75B deal with Insilico Medicine is the largest AI-native drug discovery collaboration in pharma history. Meanwhile, Google dropped Gemma 4 under a fully open Apache 2.0 license, and Tennessee became the first state to sign an AI therapy bot ban into law.
Here are the 5 healthcare AI developments that matter this week.
⭐ Editor’s Pick
Eli Lilly Signs $2.75B AI Drug Discovery Deal with Insilico Medicine
Lilly has acquired an exclusive worldwide license to develop, manufacture, and commercialize a portfolio of AI-discovered oral therapeutics from Insilico Medicine’s Pharma.AI platform. Insilico receives $115M upfront, with the remainder tied to regulatory and commercial milestones plus tiered royalties. The two companies will also collaborate on new R&D programs across multiple therapeutic areas selected by Lilly.
Why it matters: This isn’t a pilot or an exploration. Insilico has generated 28 AI-designed drug candidates, with nearly half already in clinical trials. The deal structure — upfront payment, exclusive global license, joint R&D programs — is how pharma acquires proven pipelines, not experiments. For clinical research teams, this signals that AI-discovered molecules are entering the same regulatory and commercialization pathways as traditionally discovered drugs. If you’re a biostatistician, CRO, or principal investigator, expect to see AI-originated compounds showing up in your trial protocols. Insilico’s pipeline page was recently updated to note that a GLP-1 candidate has been out-licensed — watch that space closely.
Google Releases Gemma 4 Under Apache 2.0 — The Open Model Clinical Teams Have Been Waiting For
Google DeepMind released Gemma 4 in four sizes (31B dense, 26B MoE, E4B, and E2B), all under the commercially permissive Apache 2.0 license. The models support 256K context windows, native function calling, structured JSON output, multimodal processing (text, images, video, audio), and over 140 languages. The smallest variants are light enough to run locally on hardware as modest as a Raspberry Pi.
Why it matters: For clinical research teams handling PHI-sensitive workflows, the Apache 2.0 license removes the biggest adoption barrier. Previous open models either came with restrictive commercial terms or lacked the reasoning depth needed for serious clinical work. Gemma 4’s 256K context window means you can feed an entire clinical protocol, a regulatory submission draft, or a full literature review into a single prompt — locally, on your own hardware, with no data leaving your environment. Native function calling and structured JSON output make it immediately usable for agentic workflows like automated DICOM de-identification, adverse event extraction, and protocol compliance checking. Clinical teams with data sovereignty requirements should be evaluating this now.
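The practical value of structured JSON output is that you can enforce a schema before model output ever touches a clinical dataset. A minimal sketch of the validation side of an adverse-event extraction pipeline — the field names and helper are illustrative, not from any Gemma specification:

```python
import json

# Illustrative schema for an adverse-event record extracted from free text.
# Field names are hypothetical -- define your own to match your case report form.
REQUIRED_FIELDS = {
    "event_term": str,   # e.g. "nausea"
    "severity": str,     # e.g. "mild" / "moderate" / "severe"
    "onset_day": int,    # study day of onset
    "serious": bool,     # meets SAE criteria
}

def validate_adverse_event(raw: str) -> dict:
    """Parse a model's JSON output and enforce the expected schema.

    Raises ValueError on missing fields or wrong types, so malformed
    model output never silently enters the dataset.
    """
    record = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], expected_type):
            raise ValueError(
                f"bad type for {field}: {type(record[field]).__name__}"
            )
    return record

# Example: a well-formed response from a locally run model
sample = '{"event_term": "nausea", "severity": "mild", "onset_day": 3, "serious": false}'
event = validate_adverse_event(sample)
```

The same gate works for any local model that emits JSON; the model generates, the validator decides what is allowed into the database.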
Tennessee Signs AI Therapy Bot Ban Into Law — First State to Outlaw AI Mental Health Impersonation
Governor Bill Lee signed SB 1580, prohibiting any AI system from representing itself as a qualified mental health professional. The law takes effect July 1, 2026, and is enforceable through the Tennessee Consumer Protection Act, with civil penalties of up to $5,000 per violation plus a private right of action. Separately, companion bill SB 1493 — which would make it a Class A felony to train an AI system to foster emotional dependency or encourage self-harm — is advancing through committee.
Why it matters: Tennessee passed this unanimously (Senate 32-0, House 94-0), and Nebraska and Georgia are advancing similar bills. If you’re building or deploying patient-facing AI tools — mental health chatbots, triage assistants, symptom checkers with therapeutic language — you need to audit your product claims immediately. The private right of action is the key detail: this isn’t just regulatory enforcement, it’s plaintiff-attorney territory. Clinical AI vendors operating across state lines should expect a patchwork of chatbot regulations by year-end. The Future of Privacy Forum is currently tracking 98 chatbot-specific bills across 34 states.
AI Job Displacement Accelerates as Companies Redirect Budgets from Headcount to AI
U.S. tech job-cut announcements continue climbing as companies explicitly cite AI investment as the driver. The pattern is consistent: reduce headcount, then redirect those budget lines to AI infrastructure and tooling. These aren’t cyclical layoffs — this is structural reallocation.
Why it matters: Healthcare and clinical research are not immune. CROs and pharma companies are increasingly asking whether AI-assisted workflows can reduce headcount in data management, medical writing, and regulatory submissions. The teams that position themselves as AI-augmented — not AI-replaceable — will have leverage. If you’re a clinical data manager or medical writer, building demonstrable fluency with AI tools isn’t optional anymore. It’s job security. Our AI Stack Guides cover the specific tools and workflows for each clinical research role.
Boehringer Ingelheim Signals More Acquisitions Ahead
The privately held German pharma giant reportedly has additional deals in the pipeline despite already being one of the most active acquirers in 2026. The company has been systematically building capabilities across AI-driven drug development, digital therapeutics, and precision medicine.
Why it matters: Boehringer’s acquisition pattern — combined with Lilly’s Insilico deal — signals that Big Pharma’s M&A strategy is increasingly AI-native. For biotech startups and AI health companies, the acquirer landscape is widening. For clinical researchers, it means more AI-originated compounds entering the trial pipeline, which changes everything from protocol design to endpoint selection. If your team isn’t fluent in how AI-discovered molecules differ from traditionally discovered candidates, now is the time to get up to speed.
🛠️ Tool of the Week
GLP-1 Intelligence Dashboard — Free Pipeline & Market Data
Track every GLP-1 compound in development: pipeline by company and phase, therapeutic area treemap, market forecast, FDA safety signals, and regulatory milestone timeline. Built on public datasets (ClinicalTrials.gov, openFDA, PubMed) — the same data incumbents charge $30–80K/year for.
Free, no sign-up required.
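The dashboard’s data layer is reproducible from the same public APIs. A minimal sketch of one such query against openFDA’s FAERS endpoint — the helper function is mine, but `search`, `count`, and `limit` are documented openFDA parameters:

```python
from urllib.parse import urlencode

# openFDA drug adverse-event endpoint (public, no key required for light use)
BASE = "https://api.fda.gov/drug/event.json"

def faers_reaction_counts_url(generic_name: str, limit: int = 10) -> str:
    """Build a FAERS query URL counting the most-reported reactions
    for a given generic drug name."""
    params = {
        # Restrict reports to those naming this generic drug
        "search": f'patient.drug.openfda.generic_name:"{generic_name}"',
        # Aggregate by MedDRA preferred term instead of returning raw reports
        "count": "patient.reaction.reactionmeddrapt.exact",
        "limit": limit,
    }
    return f"{BASE}?{urlencode(params)}"

# Example: top reported reactions for one GLP-1 compound
url = faers_reaction_counts_url("semaglutide")
# Fetch with any HTTP client; the response's `results` array holds
# {term, count} pairs you can chart directly.
```

Swapping in ClinicalTrials.gov and PubMed queries the same way is how the “$30–80K/year” incumbents’ datasets get rebuilt from scratch.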
The Bottom Line
The theme this week is AI crossing the commercialization threshold. Lilly isn’t experimenting with AI drug discovery — it’s licensing a portfolio and writing $115M checks upfront. Google isn’t teasing open models — it’s shipping Apache 2.0 weights that run on a Raspberry Pi. Tennessee isn’t studying AI regulation — it’s signing criminal penalties into law. The gap between “interesting AI pilot” and “production infrastructure” closed this week. The teams that recognized this shift six months ago are already building. The rest are about to start catching up.
You’re receiving this because you subscribed to AI Stack Weekly.
View past issues · EmergingAIHub.com
