AI Agentic First at Groupon: What Ales Drabek's Dark Software Factory Teaches Us

    Till Freitag · 30 April 2026 · 6 min read

    TL;DR: "Groupon shows what AI Agentic First looks like in production: JIRA ticket → PR in 20 minutes via Claude Code, plus 'Speedboats' of 1 PM + 1 Engineer. Both rely on context infrastructure — not more tools."

    — Till Freitag

    When a CTIO Uses LinkedIn to Show Real Architecture

    LinkedIn is mostly noise. Sometimes it isn't. Ales Drabek, Resilient Chief Technology & Information Officer at Groupon, published a post that describes what enterprise engineering actually looks like in 2026 more precisely than most McKinsey decks on "AI Transformation".

    Two patterns, both live in production:

    1. Dark Software Factory — JIRA ticket to draft PR in ~20 minutes
    2. Speedboats — 1 Product Manager + 1 Engineer, one outcome, no sprint cadence

    Both go beyond hype. They're the operationalized version of what we described in our piece on Enterprise-Grade Agentic Setup as "the gap between vibe coding and production-grade agentic architecture".

    Pattern 1: Dark Software Factory

    What happens technically

    Drabek's description — condensed:

    A ticket moves to "Ready for Dev" → the pipeline fires. The runner provisions itself, Claude Code writes the code, posts the PR back to JIRA, and shuts down. Every PR is still reviewed by a human before merge. That's why we don't call it "dark" yet.

    Live since April 1, 9 orgs, 16 repositories. This isn't a sandbox.
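    The control flow Drabek describes can be sketched in a few lines. This is a minimal illustration, not Groupon's actual implementation: `provision`, `build`, `post_pr`, and `shutdown` are hypothetical stand-ins for the real runner provisioning, the Claude Code run, and the JIRA write-back.

    ```python
    # Sketch of the Dark Software Factory control flow. All pipeline hooks are
    # hypothetical placeholders, injected so the flow itself stays visible.
    from dataclasses import dataclass

    @dataclass
    class Ticket:
        key: str
        status: str
        description: str

    def on_status_change(ticket: Ticket, pipeline: dict) -> str | None:
        """Fire the pipeline only on the 'Ready for Dev' transition."""
        if ticket.status != "Ready for Dev":
            return None  # every other status change is ignored
        runner = pipeline["provision"]()  # self-provisioning runner
        try:
            diff = pipeline["build"](runner, ticket)    # builder agent writes the code
            pr_url = pipeline["post_pr"](ticket, diff)  # PR is posted back to the ticket
        finally:
            pipeline["shutdown"](runner)  # auto-shutdown: cost control by default
        return pr_url
    ```

    The human review gate deliberately sits outside this function: the pipeline ends at "PR posted", and merge remains a human decision.
    
    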

    Why this is becoming a standard pattern

    The architecture follows a principle we see in every serious agentic setup:

    | Component | Function | Why it matters |
    |---|---|---|
    | Trigger via status change | "Ready for Dev" as event bus | No custom API — uses existing workflows |
    | Self-provisioning runner | GitHub Actions runner spins up on demand | No new infra, just agents on top |
    | Claude Code as builder agent | Reads ticket context, writes code, opens PR | Coding agent with file access, not chat |
    | Human review gate | PR reviewed by human before merge | Ownership stays with the engineer |
    | Auto-shutdown | Runner terminates after posting PR | Cost control is an architectural default |
    Stage 2 (in review): the same ticket fans out across multiple prompt and model variants in parallel, a Claude QA agent emits PASS/FAIL. That's head-to-head measurement of agent performance — not gut feel.
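    Stage 2 amounts to a fan-out-and-judge loop. A minimal sketch, assuming hypothetical `build_variant` and `qa_verdict` callables in place of the real builder and QA agents:

    ```python
    # One ticket, several prompt/model variants in parallel, one PASS/FAIL
    # verdict per attempt. build_variant and qa_verdict are placeholders.
    from concurrent.futures import ThreadPoolExecutor

    def evaluate_variants(ticket: str, variants: list[dict],
                          build_variant, qa_verdict) -> dict[str, str]:
        """Run every variant concurrently and record the QA verdict for each."""
        def attempt(v: dict) -> tuple[str, str]:
            pr = build_variant(ticket, v)     # one agent run per variant
            return v["name"], qa_verdict(pr)  # "PASS" or "FAIL"
        with ThreadPoolExecutor(max_workers=len(variants)) as pool:
            return dict(pool.map(attempt, variants))
    ```

    The output is exactly the artifact gut feel never produces: a per-variant scoreboard you can aggregate over many tickets.
    
    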

    Stage 3 (in design): six agent stages from raw story to auto-merge — on the same infrastructure. The work is in the agents, not the platform underneath.
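    A staged pipeline like that can be modeled as a fold over a list of stage functions. The stage names below are illustrative guesses, not Groupon's actual stage list:

    ```python
    # Each stage takes the work item and returns it enriched (or raises to halt).
    # The platform stays trivial; all the substance lives in the stages.
    from functools import reduce

    def run_stages(story: dict, stages: list) -> dict:
        """Fold the work item through every stage in order."""
        return reduce(lambda item, stage: stage(item), stages, story)

    stages = [
        lambda s: {**s, "refined": True},   # 1. refine the raw story
        lambda s: {**s, "planned": True},   # 2. plan the change
        lambda s: {**s, "coded": True},     # 3. write the code
        lambda s: {**s, "tested": True},    # 4. run the tests
        lambda s: {**s, "reviewed": True},  # 5. agent review
        lambda s: {**s, "merged": True},    # 6. auto-merge
    ]
    ```

    The design choice matches Drabek's point: swapping a stage means swapping an agent, not rebuilding infrastructure.
    
    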

    What changes for the people

    Drabek's framing — and this is the honest part most posts skip:

    Product managers and engineers stop spending hours translating tickets into code, and put that time into the parts only humans should own — the requirement, the review, the judgment call.

    That's the right framing. Not "agents replace engineers", but "agents replace the boring 60% of the engineering workflow so humans can focus on the 40% that produces signal".

    Pattern 2: Speedboats

    The second pattern is organizational, not technical — but just as interesting.

    A speedboat is exactly what it sounds like: one product manager and one engineer, paired on a single outcome, shipping outside the quarterly planning machine.

    What a speedboat is:

    • Prototype first: prove → review → release → learn
    • Protected from backlog overhead
    • Connected to the same platform standards as everyone else

    What a speedboat is not:

    • A standalone AI side-project
    • A way around engineering review
    • A permanent shadow team

    This sounds like common sense. In large orgs, it's revolutionary. Quarterly planning is the defensive line where most innovative ideas die — not because they're bad, but because they don't fit into the next sprint.

    The Connection: Both Patterns Require the Same Foundation

    Drabek's last sentence is the most important:

    Both rely on the same prerequisite: context infrastructure. AI does not fix weak systems — it amplifies what is already there.

    That's exactly the point we make in every client engagement. You cannot throw a coding agent at a chaotic JIRA backlog with unclear tickets, missing acceptance criteria, and 200 open bugs and expect it to save the world. If you're cleaning up the backlog anyway, migrate from Jira to monday Dev — machine-readable tickets are a default there, not a discipline exercise.

    What context infrastructure means in practice:

    • Tickets machines can read — acceptance criteria, reproduction steps, linked spec, clear definition of done (see sprint planning with monday Dev)
    • Repos that are navigable — TOOLS.md, ARCHITECTURE.md, clear module boundaries (details in Enterprise-Grade Agentic Setup)
    • System prompts that don't weigh 12k tokens — disciplined sub-agent briefings instead of monolith prompts
    • CI/CD pipelines that run deterministically — agents break on flakiness, humans tolerate it

    That's the homework. Skip it and you'll buy a Claude Code account and produce expensive half-results. Do it and you'll build the Dark Software Factory.

    Why This Is More Than a Groupon Case

    Drabek isn't the first CTIO running agentic engineering in production — but one of the first to publicly describe the architecture instead of just celebrating outcomes. That matters because it sets a template:

    1. Trigger via existing tools (JIRA, monday Dev, Linear) — not new platforms
    2. Self-provisioning compute on existing CI/CD infra — not new "AI platforms"
    3. Coding agents with file access (Claude Code, Cursor) — not just chat wrappers
    4. Human gates at decisive points — not "full autonomy" as a marketing lie
    5. Parallel agent variants with QA verdict — performance measurement instead of vibes

    If you're running an engineering org in 2026 and don't have this stack in pilot, you're already accruing debt.

    What You Should Do Now

    Three concrete moves, derived from Drabek's setup:

    1. Audit your context infrastructure

    Before you build the first agent: how many of your tickets could a human convert into a PR without follow-up questions? If the answer is below 50%, you have a ticket hygiene problem, not an AI problem.
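    That audit is easy to make concrete. A crude sketch, assuming a few field names for what "machine-readable" might mean — this is not a real JIRA schema:

    ```python
    # Score what fraction of a backlog could be turned into a PR without
    # follow-up questions. REQUIRED_FIELDS is an assumption for illustration.
    REQUIRED_FIELDS = ("acceptance_criteria", "definition_of_done", "linked_spec")

    def ready_without_questions(ticket: dict) -> bool:
        """A ticket is 'ready' only if every required field is filled in."""
        return all(ticket.get(field) for field in REQUIRED_FIELDS)

    def hygiene_score(backlog: list[dict]) -> float:
        """Fraction of tickets passing the readiness check, from 0.0 to 1.0."""
        if not backlog:
            return 0.0
        return sum(ready_without_questions(t) for t in backlog) / len(backlog)
    ```

    If the score lands below 0.5, fix ticket hygiene before buying agent seats.
    
    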

    2. Pilot on a clearly scoped repo

    Pick a repo with high test coverage, clean architecture, and disciplined tickets. Not the messiest one — the cleanest. The agent needs wins, you need data.

    3. Speedboat model as an organizational lever

    Even without coding agents, you can test the speedboat pattern tomorrow: 1 PM + 1 engineer, one outcome, four weeks, out of the sprint cadence. You'll be surprised what happens.

    Conclusion: This Is What Engineering Looks Like in 2026

    Drabek's post matters because it demonstrably runs in production — not because it sounds impressive. Dark Software Factory + Speedboats aren't buzzwords, they're operational patterns with clear mechanics.

    The gap between "we use ChatGPT for code snippets" and "JIRA ticket → PR in 20 minutes across 9 orgs" isn't gradual. It's an architectural leap. And it doesn't start with the agent — it starts with the context infrastructure underneath.

    Source & credit: Original LinkedIn post by Ales Drabek — worth reading in full.
