Personal Essay • March 2026

How a Blood Test
and a Claude Bug
Led to a Thesis

On health data, AI ensembles, probabilistic failure, and the architecture of trust

Daniel Enekes

SVP, Strategic Partnerships & M&A • Zuora

The Story

In late 2025, my bloodwork came back with numbers I didn't want to see. Elevated liver enzymes. Prediabetic HbA1c. Markers that told a story I'd been ignoring. My day job is connecting dots between partnerships, financial systems, technology, and business strategy, the kind of dots most people don't realize are connected. I pattern-match across deal structures and financial data, take positions that are often uncomfortable, and live in the numbers until they talk. It's what I do. And somehow, the person who does that for a living had been ignoring the most important dataset he'd ever own: his own health.

So I did what I know how to do. I built a system. I started tracking daily body composition on a clinical-grade BIA scale, logging every data point into a multi-sheet workbook: 56 columns of daily readings going back months. I paired that with a structured protocol: caloric restriction, high protein intake, resistance training, inline skating, and pharmacotherapy.

And then I brought in the AI. Not one AI. All of them. I ran ChatGPT, Gemini, and Claude in parallel, cross-referencing their outputs against each other. I already operated with a healthy skepticism toward AI-generated content, a habit I'd developed to protect myself against hallucination. If two out of three agreed on a recommendation and the third diverged, I'd dig into why. If all three converged, I'd act with confidence. It was my own personal ensemble method: probabilistic triangulation applied to the most important dataset I'll ever manage. Together, they helped me build predictive models for my body composition, forecast my bloodwork results before the draw, and construct detailed Excel workbooks and React dashboards to track every dimension of my transformation.
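The triangulation rule described above, act on convergence, investigate divergence, can be sketched in a few lines. This is a minimal illustration, not anything I actually ran; the model names and the idea of reducing each model's freeform answer to a single categorical pick are assumptions made for the example.

```python
from collections import Counter

def triangulate(recommendations: dict[str, str]) -> tuple[str, list[str]]:
    """Majority-vote across model outputs: return the consensus pick
    and the list of dissenting models whose reasoning is worth digging into."""
    counts = Counter(recommendations.values())
    consensus, votes = counts.most_common(1)[0]
    dissenters = [m for m, r in recommendations.items() if r != consensus]
    if votes == len(recommendations):
        return consensus, []        # full convergence: act with confidence
    return consensus, dissenters    # partial agreement: investigate the outlier

# Hypothetical outputs for one question ("best dinner option"):
picks = {"chatgpt": "salmon", "gemini": "salmon", "claude": "short rib"}
choice, to_review = triangulate(picks)
# choice == "salmon"; to_review == ["claude"], so ask Claude why it diverged
```

The point of the sketch is the asymmetry: agreement is cheap to act on, while disagreement is a signal to spend attention, which is exactly how the two-out-of-three rule worked in practice.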

I even used Base44 to build a dedicated health tracking application that could grab screenshots from my scale, extract the data, generate charts, and enable a social health collaboration and support system so the people around me could follow my progress and keep me accountable. It wasn't one tool. It was an orchestrated system of tools, each doing what it does best.

The results exceeded my own predictions. By March 2026, I had lost over 24 pounds, nearly 87% of it from fat. My HbA1c exited the prediabetic range. My liver enzymes improved dramatically into the longevity-optimal range. Vitamin D and B12 levels came back strong. The data told the story, and AI helped me read it, visualize it, and act on it with a precision I couldn't have achieved alone.

But the part that surprised me most wasn't the lab results. It was how AI removed the daily cognitive burden of actually living inside a health protocol while running a demanding career. Business dinners became simple: I'd photograph the menu, send it to Gemini, and get back a recommendation calibrated to my macros and restrictions, sometimes followed by a brief negotiation. "The salmon is your best option." "What about the short rib?" "You can make it work if you skip the bread and the sauce on the side. But the salmon is still better." "Fine." At our sales kickoff, standing in front of a buffet line with forty options and zero nutritional labels, I'd snap a photo of what was laid out and ask what to do. Within seconds I'd have a plan. No mental math. No guessing. No decision fatigue.

This sounds small, but it was transformative. The hardest part of any structured health protocol isn't the protocol itself. It's the relentless daily decision-making. What can I eat at this restaurant? What's the best option at this conference lunch? How do I stay compliant at a client dinner without making it awkward? AI absorbed that entire cognitive load. It turned what would have been a constant low-grade mental tax into something effortless. I could focus on the dinner conversation, the client relationship, the deal, and let the AI worry about the macros. That's not a minor convenience. That's the difference between a protocol you maintain for three months and one you abandon after three weeks.

But here's the thing that changed how I think about AI and software, and ultimately led me to write a three-part research series about the future of enterprise technology.

In the days before my March 6 blood draw, I was using Claude to plan the exact protocol that would give me the cleanest possible liver enzyme reading: when to stop exercising, what to avoid eating, when to take my statin, how to time my Zepbound injection. The conversation spanned multiple days. And on day five of that conversation, I asked a time-sensitive question, and Claude gave me an answer anchored to the date from day one. It told me I was a week away from my blood test when I was actually two days away.

I caught it. And then I pushed on it: why did you get this wrong? The answer was revealing. Claude doesn't have an internal clock. It processes each message as a block of text, weighing contextual patterns through probabilistic attention. A date established strongly at the beginning of a multi-day conversation can outweigh a system timestamp buried in the prompt, because the model's attention mechanism treats both as competing signals, and the well-established context wins. It's not a bug in the traditional sense. It's the inherent behavior of probabilistic sequence prediction.

And in that moment, sitting in my home office in Pinecrest, preparing for a blood test that would tell me whether my body was healing, something clicked.

If I can't trust an AI to reliably track the date across a five-day conversation, something any calendar app handles trivially, how could anyone ever trust it to calculate a payroll withholding, reconcile a bank ledger, or validate a medication dosage without a separate system verifying every output? The answer is you can't. Not because AI isn't powerful. It is extraordinarily powerful. But because power without verification is not trustworthiness.

AI made me smarter about my health. But I still needed the lab equipment to produce the exact numbers. The AI told me what to optimize. The lab told me whether I'd actually succeeded. The intelligence was probabilistic. The measurement was deterministic. And neither was useful without the other.

That insight kept simmering. And as it did, something deeper surfaced.

I had been running the architecture without knowing it had a name.

Think about what I was actually doing. The AI models, ChatGPT, Gemini, Claude, were generating probabilistic recommendations: what to eat, when to train, how to time my supplements. The clinical scale, the bloodwork, the Excel workbooks with 56 columns of daily data, those were deterministic systems producing exact measurements. And I was the orchestration layer in between, taking the AI's suggestions, validating them against my data, cross-referencing across models, and only acting when probabilistic intelligence and deterministic precision converged.

In the AI research world, this pattern has a formal name: neuro-symbolic architecture. Neural networks (probabilistic, pattern-matching) combined with symbolic reasoning (rule-based, exact). The neural component handles fuzzy tasks like understanding context, generating recommendations, and reasoning across domains. The symbolic component enforces constraints, executes formal logic, and guarantees precision. DeepMind publishes papers on it. Anthropic is building systems around it. OpenAI is investing heavily in it. It's among the most important architectural paradigms in AI research today.
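The orchestration loop at the heart of that pattern can be sketched in a few lines: the neural side proposes, the symbolic side accepts only proposals that satisfy every hard constraint. This is a toy illustration under my own naming, with a stand-in for the LLM call and a single made-up constraint, not a description of how any lab actually implements it.

```python
from typing import Callable, Optional

Proposal = dict  # e.g. {"action": "set_protein", "grams": 160}

def orchestrate(
    propose: Callable[[str], Proposal],             # probabilistic: an LLM call
    constraints: list[Callable[[Proposal], bool]],  # deterministic: exact rules
    goal: str,
    max_attempts: int = 3,
) -> Optional[Proposal]:
    """Neuro-symbolic loop: generate candidates with the neural component,
    accept only those that pass every symbolic check."""
    for _ in range(max_attempts):
        candidate = propose(goal)
        if all(check(candidate) for check in constraints):
            return candidate    # convergence: probabilistic and exact agree
    return None                 # escalate to a human instead of guessing

# Hypothetical stand-ins for illustration:
def fake_llm(goal: str) -> Proposal:
    return {"action": "set_protein", "grams": 160}

def within_daily_budget(p: Proposal) -> bool:
    return 0 < p.get("grams", 0) <= 200  # exact, auditable rule

plan = orchestrate(fake_llm, [within_daily_budget], "hit protein target")
```

Swap the stand-ins for a real model call and a payroll rule engine or a drug-interaction table, and the shape is the same: the symbolic layer never trusts the neural layer's output, it verifies it.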

And I had been doing it manually, in my home office, on a spreadsheet, for three months, without knowing the term for what I was doing.

That's when the implications for enterprise software became impossible to ignore.

Right now, over $1 trillion in software market value has been wiped out in what analysts are calling the "SaaSpocalypse." The market is terrified that AI will replace enterprise software. And for some categories of software, the ones that primarily help humans do cognitive work like drafting content, managing tasks, or triaging support tickets, that fear is justified. AI can do those things now.

But for the systems that require exact answers, payroll engines that must withhold the correct tax across 10,000 jurisdictions, clinical systems where a missed drug interaction is a patient safety event, core banking ledgers that must reconcile to the penny, AI alone isn't enough. These systems need what I needed: the combination of AI intelligence and deterministic precision, working together, each doing what it does best.

And this is where the challenge gets real, because the companies that build these mission-critical systems are, overwhelmingly, not ready for this future. Most legacy enterprise software companies are still debating whether to add a chatbot to their UI. They're thinking about AI as a feature to bolt on, not as an architectural paradigm that reshapes how their entire system works. They don't know what neuro-symbolic architecture is. They don't know that the frontier AI labs, DeepMind, Anthropic, OpenAI, are building systems that dynamically generate deterministic code, validate it against formal constraints, and orchestrate between probabilistic reasoning and exact execution. They don't know that this is the bridge between "AI that's impressively smart" and "AI that's actually trustworthy enough to run your payroll."

What these companies need to do is specific and urgent. First, they need to understand what they actually know: audit the decades of domain expertise trapped inside their codebases, every regulatory interpretation, every edge case, every implementation pattern learned from thousands of customer deployments. That knowledge is their most valuable asset, and most of them can't even articulate what it is, let alone make it accessible to an AI orchestration layer. Second, they need to decompose their monolithic architectures into modular, testable components that an AI agent can understand, configure, and validate. You can't build a hybrid system on top of spaghetti code. Third, they need to invest now, not in three years, in building AI orchestration layers that use their proprietary data and domain knowledge as the differentiator, knowing that the foundation models themselves (Claude, GPT, Gemini) will be commodities available to every competitor.

The window for this transformation is finite and shrinking. AI's ability to generate deterministic code, produce comprehensive test suites from regulatory specifications, and formally verify business logic is advancing rapidly. Today, that capability is mostly theoretical at production scale. But the gap between research demonstration and deployable product is closing faster than most enterprise software leaders appreciate. The companies that use this window to rearchitect, to extract their domain knowledge, expose it through AI layers, and build genuine hybrid systems, will define the next era. The ones that spend this window adding chatbots to legacy interfaces will find themselves outflanked by competitors, including AI-native startups, who built the right architecture from day one.

The companies that understand this, that the future isn't "AI replaces software" but "AI and deterministic systems combine into something more powerful than either", are going to define the next decade of enterprise technology. The companies that don't understand it are going to be left behind.

If someone immersed in AI every day can spend three months living inside this architecture and not recognize the pattern, imagine how invisible it is to the enterprise software leaders who will need to build it into their products.

I know this because I was, for three months, the manual version of that system. And I know, now, that the automated version will be transformative in ways that most of the industry is not prepared to imagine.

Daniel Enekes

SVP, Strategic Partnerships & M&A • Zuora

Miami, Florida • March 2026

The Series

The Hybrid SaaS Research Series

This experience, and the months of research it triggered, led me to write a three-paper series on why the SaaSpocalypse is mispriced, which software companies will thrive, and what it means for everyone in enterprise technology.

Paper 1

The Rise of Hybrid SaaS: Analytical Framework

The intellectual foundation. One architectural insight, rooted in neuro-symbolic architecture, produces every original argument in the series: the Proprietary Context Limitation, Epistemological Resolution, fiduciary risk transfer, the guarantee pricing model, the AI Orchestration Paradox, and the Accidental Knowledge Paradox. Includes seven counterarguments and an explicit conflict of interest disclosure.

Paper 2

The Hybrid SaaS Investment Thesis

The evidence base. Vulnerability analysis across every major SaaS category, a normalized empirical analysis of 26 public companies showing a 23-point valuation gap between deterministic-core and workflow-layer companies, and the pricing and profitability transformation that makes hybrid platforms the most compelling assets in enterprise technology.

Paper 3

The Hybrid SaaS Operational Playbook

The action piece. Domain knowledge extraction with a sprint transformation model, the "Empire Strikes Back" downmarket expansion and same-tier competitive thesis, the dual-actor operating model, and strategic implications for software companies, enterprise buyers, investors, and employees.