Build Mode · 11 min read

Inside the Technology Stack

What's under the hood of AI roll-ups. OpenAI partnerships, proprietary AI strategies, and why technology moats matter more than marketing claims.

Part 4 of The Founder's Guide to AI-Enabled Roll-Ups

Every founder I know who's received an AI roll-up offer has the same question: Is the technology real?

Not "real" in the philosophical sense. Real in the sense that matters: Will this actually transform my business, or am I being sold a thesis wrapped in buzzwords? The difference determines whether you're joining a platform that can justify paying premium multiples—or becoming a guinea pig for someone else's experiment.

The two major players have taken fundamentally different approaches. Thrive Holdings partnered directly with OpenAI, embedding AI engineers inside portfolio companies. General Catalyst builds proprietary capabilities, owning the technology rather than renting it. Both approaches can work. Both have trade-offs. And neither is as far along as the press releases suggest.

This chapter examines what's actually under the hood—what gets automated, what creates defensibility, and how to separate working technology from PowerPoint promises.

Key Takeaways

  • OpenAI-Thrive model: Embedded teams (engineers, researchers, product) build custom tools inside portfolio companies. OpenAI gets equity stake + data feedback; Thrive gets frontier AI access
  • General Catalyst model: Build proprietary AI first, prove automation works, then acquire distribution. Slower start, stronger long-term position through technology ownership
  • Valuation implication: Buyers with working AI can pay higher multiples because transformation creates value. Buyers with a thesis pay less—you're sharing their technology risk
  • Automation reality: 30-40% of tasks automatable today. The grunt work disappears; the judgment calls remain human
  • The real moat: Not the AI itself, but accumulated operational learning that makes AI work in specific contexts. Data + integration + deployment playbooks compound over time
  • Honest uncertainty: Track records are thin. Crete has ~30 firms, Shield has 7 MSPs. Founders joining now are betting on a thesis, not proven results at scale

OpenAI-Thrive Partnership

On December 1, 2025, OpenAI announced it was taking an equity stake in Thrive Holdings. The structure is unusual: OpenAI isn't investing cash. It's trading embedded teams for ownership.

Engineers, researchers, and product managers from OpenAI work inside Thrive's portfolio companies—not as consultants parachuting in for workshops, but as integrated team members building custom tools. At Crete, they're automating accounting workflows. At Shield, they're building MSP-specific automation. The AI isn't a generic layer sitting on top of operations. It's tailored to the specific work of tax preparers and IT technicians.

What Each Side Gets

For Thrive, the value is access. Direct line to frontier AI capabilities before they're commercially available. Priority feature development for their verticals. Integration support from the people who built the models. When OpenAI releases a new capability, Thrive's portfolio companies can deploy it while competitors are still reading the documentation.

For OpenAI, the value is data and equity. Operating company workflows generate a training signal that improves models. Real-world deployment surfaces edge cases that lab testing misses. And the equity stake means OpenAI captures upside from successful transformation—not just API fees, but ownership appreciation as portfolio companies scale.

The Circular Deal Critique

Bloomberg and TechCrunch have questioned the arrangement. Thrive Capital is a major investor in OpenAI. Now OpenAI is taking a stake in Thrive Holdings, which Thrive Capital also backs. The overlapping ownership makes it genuinely difficult to assess whether success comes from market traction or from advantages that exist only because of the partnership.

Thrive's response: customer interest preceded the partnership. Accountants at Crete were saving hundreds of hours before the formal deal was announced.

Here's my honest take: the circular nature is a feature, not a bug. Both parties benefit when portfolio companies succeed, which aligns incentives far better than a standard vendor relationship. The real question isn't whether the arrangement is incestuous. It's whether the transformation can scale beyond the hothouse environment of heavily supported early deployments.

That question remains open.

General Catalyst's Proprietary Approach

General Catalyst took a different path. Build the AI capability first. Prove it works. Then acquire distribution.

Their process starts with research—70 industries reviewed—looking for specific characteristics: 30%+ of tasks automatable, fragmented ownership, stable cash flows, succession pressure. Only after identifying a target vertical do they move to technology development.
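The screen described above can be expressed as a simple filter. To be clear, this is my own reconstruction from the criteria named in the text, not GC's actual model; the field names and the fragmentation and succession thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vertical:
    name: str
    automatable_share: float       # fraction of tasks automatable today
    top5_market_share: float       # proxy for fragmentation (lower = more fragmented)
    stable_cash_flows: bool        # recurring, non-cyclical revenue
    owners_near_retirement: float  # share of owners facing succession

def passes_screen(v: Vertical) -> bool:
    """Reconstruction of the screening criteria named in the text."""
    return (
        v.automatable_share >= 0.30           # stated: 30%+ automatable
        and v.top5_market_share < 0.20        # fragmented ownership (assumed threshold)
        and v.stable_cash_flows
        and v.owners_near_retirement >= 0.40  # succession pressure (assumed threshold)
    )

candidates = [
    Vertical("accounting", 0.35, 0.05, True, 0.60),
    Vertical("airlines", 0.20, 0.80, False, 0.10),
]
print([v.name for v in candidates if passes_screen(v)])  # only accounting passes
```

The point of the sketch is that the screen is conjunctive: a vertical has to clear every bar at once, which is why 70 industries reviewed can collapse to a handful of targets.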

Crescendo wasn't built by acquiring call centres and hoping AI would work. It was built AI-first, proving the automation achieved 80%+ resolution rates and 60%+ margins before scaling through acquisition. The same pattern applies to Eudia (legal) and Titan MSP (IT services). Technology precedes distribution.

Why Own Instead of Rent?

The strategic logic is defensibility. If you're using the same OpenAI API that any competitor can access, your advantage comes entirely from implementation. A well-funded rival with good execution could replicate your results.

Proprietary AI creates barriers that compound. Each acquisition generates operational data that improves the models. Better models make future acquisitions more valuable. The flywheel accelerates as the portfolio grows.

The trade-off is speed. Building vertical-specific AI takes years. Thrive moved faster by partnering directly with OpenAI. GC accepts the slower ramp, betting that technology ownership creates a stronger long-term position.

Which approach wins? Too early to tell. The honest answer is that both are reasonable bets with different risk profiles.

What Actually Gets Automated

Let me be specific about what AI transformation looks like in practice. Not categories on a slide. Actual workflows.

Accounting & Tax Services

What changes:

A senior associate at a mid-sized CPA firm used to spend four hours pulling numbers from bank statements into working papers. The AI does it in twenty minutes—and catches the transposition errors humans miss when they're exhausted during busy season.

A tax return review that previously required a manager's full attention now gets a first pass from AI that flags anomalies, missing forms, and optimisation opportunities. The manager still reviews, but she's looking at a prioritised list of issues rather than scanning every line.

Engagement letters, management letters, routine client correspondence—first drafts appear automatically, pulling relevant details from prior-year files. The accountant edits rather than writes from scratch.

Crete reports AI tools saving "hundreds of hours monthly" at individual member firms. Accountants handle 2-3x more clients without a proportional increase in headcount. That's the margin expansion story.
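To see why capacity gains translate into margin expansion, here's a back-of-envelope sketch. Every figure is hypothetical, chosen only to illustrate the mechanism, not drawn from Crete's actual financials: when staff costs stay roughly fixed while client capacity multiplies, margin expands non-linearly.

```python
# Hypothetical mid-sized CPA firm, pre-AI baseline. All numbers illustrative.
clients = 300
revenue_per_client = 8_000   # annual fees
staff_cost = 1_500_000       # salaries stay roughly fixed with AI
other_cost = 500_000         # rent, software, overhead

def margin(clients, revenue_per_client, staff_cost, other_cost):
    revenue = clients * revenue_per_client
    return (revenue - staff_cost - other_cost) / revenue

before = margin(clients, revenue_per_client, staff_cost, other_cost)

# AI lets the same staff serve ~2.5x the clients; overhead grows only modestly.
after = margin(int(clients * 2.5), revenue_per_client, staff_cost, other_cost * 1.5)

print(f"margin before: {before:.1%}, after: {after:.1%}")  # ~17% vs ~62%
```

The mechanism, not the specific numbers, is the claim: revenue scales with clients while the dominant cost line does not.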

What doesn't change:

The client calls, worried about an IRS notice. The judgment call about whether an aggressive tax position is defensible. The relationship-building dinner where the partner learns the client is selling the business next year. The strategic advice about entity structure for a new venture.

AI handles volume. Humans handle judgment and relationships.

MSPs & IT Services

What changes:

A user submits a ticket: "I can't access the shared drive." The AI checks permissions, verifies network connectivity, identifies a cached credential issue, and resolves it—ticket closed without human touch. That's maybe 40% of a typical MSP's help desk volume.

Client onboarding used to take weeks: discovering assets, documenting configurations, and setting up monitoring. AI compresses discovery to hours by scanning networks and auto-populating documentation. A technician reviews and corrects rather than building from scratch.

Titan MSP's pilots demonstrated that 38% of typical MSP tasks are automatable with current technology. Shield's internal tools—Sentinel and Spectre—auto-resolve repetitive tickets across portfolio companies, freeing technicians for work that requires thinking.

What doesn't change:

The server migration keeps breaking in unexpected ways. The client who insists their "computer is slow" when the real problem is a failing SSD that requires on-site diagnosis. The strategic conversation about whether to migrate to Azure or stay on-prem. The relationship with the CFO, who controls the IT budget.

Call Centres

What changes:

This is where the transformation is most dramatic. At Crescendo, 80-90% of customer inquiries never reach a human. The AI handles password resets, order status checks, return initiations, and FAQ questions—the high-volume, low-complexity work that traditionally consumed most agent time.

Quality assurance shifts from sampling 1-2% of calls to reviewing 100% of interactions. Issues surface immediately rather than weeks later. Agents get real-time coaching: suggested responses, relevant knowledge articles, and sentiment alerts when a customer's tone shifts.

The result: gross margins of 60-65% versus the industry standard of 10-15%. That's not incremental improvement. That's a different business model.
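The margin gap follows directly from deflection arithmetic. Here's a hedged sketch of the unit economics; the per-contact costs, pricing, and platform cost are invented for illustration and are not Crescendo's actual figures.

```python
# Hypothetical contact centre handling 100,000 inquiries per month.
# All cost and price figures are illustrative assumptions.
inquiries = 100_000
human_cost_per_contact = 6.00  # fully loaded agent cost
ai_cost_per_contact = 0.30     # inference plus tooling
revenue_per_contact = 7.00     # what the client pays

def gross_margin(deflection_rate, platform_cost=0.0):
    """Fraction of inquiries resolved by AI; the rest go to humans."""
    ai_contacts = inquiries * deflection_rate
    human_contacts = inquiries - ai_contacts
    cost = (ai_contacts * ai_cost_per_contact
            + human_contacts * human_cost_per_contact
            + platform_cost)
    revenue = inquiries * revenue_per_contact
    return (revenue - cost) / revenue

print(f"traditional (0% deflection): {gross_margin(0.0):.0%}")                     # ~14%
print(f"AI-first (85% deflection):   {gross_margin(0.85, platform_cost=150_000):.0%}")  # ~62%
```

With these assumed numbers, deflecting 85% of contacts from a $6 channel to a $0.30 channel moves gross margin from the mid-teens to the low sixties, even after a fixed platform cost. That's the "different business model" in arithmetic form.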

What doesn't change:

The customer whose order was lost and is now furious, threatening social media exposure. The billing dispute that requires judgment about whether to issue a credit. The caller who starts asking about their order and ends up revealing a genuine emergency. The emotionally sensitive conversation where empathy matters more than efficiency.

The Pattern

Across verticals, AI automates the repetitive, rules-based, high-volume work. It augments humans rather than replacing them—but the augmentation is substantial enough to transform economics.

The professionals who remain do different work. Less data entry, more judgment. Less routine, more exception-handling. Whether that's better work depends on the professional, but it's definitely higher-leverage work.

What You're Actually Buying Into

Forget abstract moat theory. If you're considering joining an AI roll-up platform, here's what creates durable value—and what's just marketing.

Accumulated Learning Advantage

The real moat isn't the AI models. OpenAI's models are available to anyone with an API key. The moat is the accumulated operational learning that makes AI work in specific contexts.

When Crete deploys AI at its thirtieth accounting firm, the system already knows the edge cases from the first twenty-nine. The weird chart-of-accounts structure that breaks the data mapping. The client who stores invoices as photos rather than PDFs. The state-specific tax rules that require exceptions to standard workflows.

That accumulated knowledge—encoded in custom integrations, exception-handling logic, and deployment playbooks—compounds over time. A competitor starting today faces not just a capability gap, but a learning gap that widens with each deployment.

Why Integration Depth Matters

Per Deloitte's 2024 research, 62% of leaders cite data access and integration as their top obstacle to AI adoption. The technology exists. Making it work with existing systems is hard.

AI roll-up platforms invest in integration once, then deploy across dozens of companies. They've already solved the connections to QuickBooks, Lacerte, UltraTax. They've figured out the quirks of ConnectWise and Datto and Kaseya. They've built bridges to legacy systems that individual firms would never invest in connecting.

Once your workflows rebuild around these integrated AI systems, switching to a competitor means redoing all that integration work. The switching costs are real.

Why Operational Playbooks Matter

Crete invests $10M annually in learning and development, including 200+ hours of training per employee. That's not teaching people to click buttons. It's rebuilding how accountants approach their work.

A competitor with equivalent AI but no operational playbook will struggle to achieve the same results. Technology alone doesn't transform a business. The deployment methodology—honed through dozens of integrations—is what makes transformation actually happen.

Why Your Firm Can't Just "Do AI" Independently

A reasonable question: if AI tools are widely available, why can't any professional services firm adopt them and achieve similar results?

Some can. Most can't. Here's why.

Integration complexity is brutal. According to Tray.ai research, 42% of enterprises need access to 8+ data sources to deploy AI agents successfully, and 86% require tech stack upgrades before deployment is viable. Your accounting firm doesn't have the engineering resources to build these integrations.

The talent doesn't exist at your scale. Building AI capability in-house requires data scientists, ML engineers, and product managers who understand both AI and your vertical. That talent is expensive and scarce. A 50-person accounting firm can't compete for that hiring pool.

Change management is its own discipline. Even with working technology, getting professionals to actually use it consistently requires significant effort. Your partners who've done things a certain way for twenty years won't change because you showed them a demo.

Roll-up platforms solve these problems through scale. The economics that don't pencil for an individual firm work when you spread investments across fifty firms.

This is the genuine strategic value of joining a platform. Not just capital. Not just succession planning. Access to technology transformation that you couldn't achieve independently.

Honest Uncertainty

Here's what the pitch decks don't emphasise: the track records are thin.

Crete has roughly 30 accounting firms. Shield has 7 MSPs. Crescendo is AI-native, not a roll-up—they built the technology from scratch rather than transforming acquired companies. Titan MSP just raised funding and is beginning acquisitions.

The transformation thesis is plausible. The early results are promising. But no one has proven this model works at scale—across dozens of integrations, hundreds of professionals, multiple economic cycles.

If you join now, you're betting on a thesis, not proven results.

That's not necessarily wrong. Early sellers often capture the best terms before platforms have scale and leverage. And the alternative—selling to traditional PE or running until retirement—has its own risks.

But go in with clear eyes. The question isn't whether AI roll-ups could work. It's whether this specific platform has the technology, capital, and operational capability to make it work for your firm.

The uncertainty cuts both ways. Wait eighteen months, and you'll have more data—but if the model works, you'll also have more competition among sellers and more leverage for buyers.

Technology Due Diligence

When you're evaluating an AI roll-up offer, the technology claims determine whether you're joining a transformation or a hope.

Questions That Reveal Capability

"Show me the tools that will be deployed in my business."

Not concepts. Working software. If a buyer can't demonstrate tools for your specific vertical, they're still in development mode. That's not disqualifying, but price and terms should reflect the technology risk.

"What results have you achieved in other acquired companies?"

Hours saved. Margin improvement. Client capacity increases. Specific numbers, not generalities. And ask to talk to founders who've been through the integration. Their experience tells you what actually happens.

"Walk me through the deployment timeline."

Transformation doesn't happen at close. It takes months or years. Understand the realistic sequence and what support looks like during each phase.

Phrases That Should Trigger Scepticism

Listen carefully to how buyers describe their technology. Certain phrases are tells:

"We're building partnerships with leading AI providers." Translation: no current capability.

"Our technology roadmap includes..." Translation: doesn't exist yet.

"The platform will enable..." Translation: future tense means future capability.

"We're leveraging cutting-edge AI." Translation: using the same APIs everyone else uses.

"Our proprietary algorithms..." Translation: possibly real, possibly marketing.

Compare to language that suggests actual capability:

"We deployed at [specific firm], and they're now handling 40% more clients with the same staff."

"Here's a demo of the audit testing tool. I'll show you how it works with sample data."

"Talk to [founder name] at [acquired firm]. She's been through the integration and can tell you what it was really like."

The difference is specificity. Working technology can be demonstrated. Aspirational technology requires trust.

Red Flags Worth Walking Away From

Vague claims with no specifics. If they can't get concrete about tools, timelines, and results, the capability probably doesn't exist yet.

Pressure to close quickly. Legitimate buyers allow time for due diligence. Buyers who need to close before you investigate too deeply may be hiding capability gaps.

Evasive answers about data. If they won't explain specifically what data flows to whom for what purposes, either they haven't figured it out, or the answer isn't something you'd like.

No references. If they won't connect you with other founders who've gone through the process, ask yourself why.

Contrarian Take

Let me offer a perspective you won't hear in the pitch meetings: the best technology might not win.

AI capabilities are improving rapidly. What requires custom development today may work out of the box next year. Platforms that bet heavily on proprietary AI face the risk that general-purpose models catch up, eroding their technology advantage.

Meanwhile, operational execution—deploying consistently, training effectively, managing change across dozens of firms—is hard to replicate regardless of underlying technology. A platform with mediocre AI but excellent operational discipline might outperform one with superior technology but sloppy execution.

The implication for founders: don't just evaluate the AI. Evaluate the operating capability. How do they manage integrations? How do they handle firms that struggle with change? What happens when technology doesn't work as promised?

The platforms that succeed long-term will be those that combine technological capability with operational excellence. Technology alone isn't enough. The firms that get this right will build something genuinely valuable. The ones that don't will cost their investors dearly.

Which category does your prospective buyer fall into? That's the question worth answering before you sign.


← Back to Chapter 3: Industries in the Crosshairs

Continue to Chapter 5: Deal Structures—What Founders Get and Give Up →

Or return to: The Founder's Guide to AI-Enabled Roll-Ups (Hub)


Capital Founders OS is an educational platform for founders with $5M–$100M in assets. We focus on frameworks for thinking about wealth—so you can make better decisions.


⚠️
Disclaimer: The content of this website and newsletter is for informational purposes only and should not be construed as investment, legal, or tax advice. The views and opinions expressed herein are solely those of the author and do not necessarily reflect the views of any business, employer, or other entity. Investing involves risks, including the potential loss of principal. Past performance does not guarantee future results. Readers are advised to conduct their own research and consult with qualified professionals before making any investment, legal, or financial decisions. The information provided is believed to be accurate but cannot be guaranteed. The author and publisher disclaim any liability for actions taken based on the content of this newsletter. This newsletter is not an offer to buy or sell any security. By subscribing or continuing to read this newsletter, you acknowledge and accept these terms and conditions.
