With so much noise in the market, it can be difficult to discern whether an artificial intelligence (AI)-powered tool is actually useful, especially in a sector as sensitive as healthcare. What separates the wheat from the chaff, according to Ganesh Padmanabhan, CEO and Founder of Autonomize AI, is “problem selection”.
“There is a big jump between [instructing] a model to create a demo and how you realize value,” he says. “So solve the hardest problem.”
In the latest episode of OPTO Sessions, Padmanabhan discusses Autonomize AI’s mission, the difference between traditional software solutions and AI, and how to build safer AI agents.
Padmanabhan’s own path into healthcare was a winding one. He began his career in AI-focused software development, serving as Head of Growth at CognitiveScale and Chief Revenue Officer at Circuit prior to the Covid-19 pandemic. He also hosts the Stories in AI podcast and YouTube channel.
During the pandemic, he was invited to join Texas Governor Greg Abbott’s Covid-19 task force. He quickly realized the scale of the challenge facing the country’s outdated public health infrastructure: “we were just not prepared.”
Personal tragedy also played a role. “I lost a friend to stage four breast cancer,” he says. “When she was actually first diagnosed, Merck [MRK] was doing their trials for Keytruda, which is an immunotherapy drug, first in the market. She was eligible to be selected for their clinical trial and she would have probably survived … the fact that she was eligible was locked in page 389 of a 5,000-page report that somebody missed.”
Autonomize AI Origin Story
The company’s mission is simple: “to transform how healthcare operations are done.”
That means a better future for the healthcare sector, Padmanabhan explains. “So you have a future where humans and AI agents can coexist and then provide outcomes at much lower costs … increasing access to care that you cannot get to today with just humans and changing the operating model of one of the most critical industries for humankind.”
Working with several major US healthcare providers, Autonomize AI offers “utilization management, care management, population health, claims, revenue cycle management,” he says. In short, “everything that touches the different business aspects of delivering and managing care.”
Padmanabhan identifies three main objectives for Autonomize AI, and for AI in healthcare in general. First, “make healthcare more profitable for healthcare enterprises, because it’s not today. Make it … really seamless for patients. And more importantly, the most forgotten people of the whole thing are the actual caregivers, the doctors, the providers. Make it joyful for them to deliver care.”
By straddling the fields of healthcare and tech, Autonomize AI has become a “healthcare-native” tech company. Its efforts are paying off, Padmanabhan explains. The company cites 36,000 hours per month saved on mundane administrative tasks, with over 100,000 automated care plans created monthly. In June 2025, the company held a Series A funding round led by Valtruis, raising $28m and bringing its total funding to date to $32m.
Automation vs Autonomization
Up until recently, Padmanabhan says, new technologies in healthcare have largely automated processes. Most software automates “what humans already do. So if you are clicking on a button, you’re just trying to automate that particular process. Well, what if the whole process that you’re actually following is wrong? Automation doesn’t know to fix that.”
The difference between traditional software and AI, Padmanabhan explains, is the ability to scale. “All of these different architectures allow you to give an algorithm a goal and then some limitations and guardrails and figure out the best path to doing that … that kind of goal-oriented optimization is how you scale.”
He provides a simple example for how AI tools can already be integrated into physicians’ workflows. “If you’re a doctor and you’re spending time talking to patients, but you also have to write down what happened in that particular [appointment], there’s a listening agent that’ll just listen to the doctor-patient communication, package it, write the notes in a way that the doctors will get reimbursed [for] if that goes to an insurance company for any claims or approvals, and it has all the relevant clinical information in that conversation. The doctor says, ‘okay, I’m going to call your nearest pharmacy and [order this] prescription.’ The agent automatically takes the cue and delivers that.”
Padmanabhan stresses that, no matter how advanced AI agents get, they cannot work alone. “You can train all the models you want, but because agents are optimization engines, they’re going to optimize for what’s in there. But the doctor will know that I have to go outside of that [context]. So it’s a hard domain like that. In a place like healthcare, knowledge is not written down.
“Problems that require human judgment bring the best out of humans,” he says. “Use agents for the mundane stuff.”
This kind of collaboration can help rework the system itself. With capable human oversight, the questions are: “How do you look at these processes end to end and how do I optimize it in a way that [determines] what part of that process should be done by an agent? What part of the process should not even exist? Can you reimagine that workflow?”
Ultimately, these changes will save money and time, for both healthcare organizations and patients. “That’s how we think we can shift the cost curve in health.”
Building Guardrails
The recent debut of OpenClaw — the agentic AI project that briefly became the fastest-growing repository in GitHub history — shone a spotlight on both the potential and the dangers of agentic AI. Padmanabhan acknowledges the innovative nature of the project, but points out that it came with immense safety oversights. “They didn’t have the right guardrails on it. They didn’t have the right layers of segmentation, of what kind of data should be shared versus not.”
The intense competition in the AI market is another risk factor. By having agents custom-built for healthcare purposes, companies can better protect their intellectual property, he points out. “If your healthcare enterprise [gives] your data to Anthropic, next thing you know, they’re going to build an AI-powered health enterprise [to go] after your business.”
Padmanabhan also urges companies to maintain a healthy skepticism of AI tools. Useful agents are light-years ahead of simple chatbots, both in capabilities and in requirements, he says. “The stuff is way harder than people give it credence to today. There is a lot of engineering between 'hey, I want to do something' and actually doing it — and doing it at scale and doing it well.”
Instead, “our health enterprise customers are building agents on our platform, … [and] that gives them the enterprise scaffolding, the regulatory scaffolding to make sure this is not going to go rogue and it’s going to actually be within the constraints of a regulated industry.”
The safety of AI systems, especially in a sector as sensitive as healthcare, comes down to three factors: traceability, guardrails and oversight. In building the agent, companies have to define “what decisions should be made by the agent and should not be made by the agent, and baking that into the agent instructions to make sure that if they are faced with that choice, they’re not going to make [it] up. They’re going to reach out to a human, get them engaged.”
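The escalation pattern Padmanabhan describes — explicitly defining which decisions an agent may make, and routing everything else to a human rather than letting the agent improvise — can be sketched in a few lines of Python. All names below are hypothetical illustrations, not Autonomize AI's actual implementation:

```python
# Minimal human-in-the-loop guardrail sketch (illustrative only).
# The agent may act autonomously only on decisions that are explicitly
# allow-listed; anything outside its mandate is escalated to a human
# reviewer instead of being guessed at.

ALLOWED_DECISIONS = {"schedule_followup", "send_visit_summary"}

def handle(decision: str) -> str:
    """Route a proposed agent decision: act if allow-listed, else escalate."""
    if decision in ALLOWED_DECISIONS:
        return f"agent:executed:{decision}"
    # Outside the agent's mandate -> hand off to a human, keeping the
    # decision name intact so the escalation stays traceable.
    return f"human:escalated:{decision}"

print(handle("schedule_followup"))  # agent acts on a mundane task
print(handle("approve_claim"))      # judgment call, escalated to a human
```

The point of the sketch is that the boundary is baked into the agent's logic up front, so a disallowed choice can never be "made up" — it always surfaces to a person.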
Human-agent collaboration will always be key, he says. If agents can handle mundane, administrative tasks, “judgment, guardrails, regulation, compliance, all of that stuff will be oversight, will be delivered by humans … It’s a team sport, right?”
That leaves the most important work of care to doctors and nurses. “We like to say that AI should run the business of care, the operational layer, so that humans can heal.”
Companies like Autonomize AI are now laying the groundwork that will serve as the foundation for explosive growth in the coming years. Padmanabhan compares the process to planting bamboo. “If you have a little bamboo seed, you [plant it], you pour water, you wait for it … Three, four, five months go by and it doesn’t even pop up. Then there’s a little sapling that comes up. And then six months later, within two or three weeks, it shoots 40 feet.”