My position is chief product officer. My primary responsibility is to oversee development of the software platform, to think strategically about the direction in which the software should go in consultation with others in the organization, and to consider what features would be most valuable for our users. I also work with clients who need data architectural guidance, whether that’s setting up their data warehouse, helping them answer tough analytic questions, or considering how the different data that are available could be used to answer those questions. So my role is a mix of product development and scrubbing in with the professional services team.
We’re a small company, so we all wear a bunch of different hats. But another part of this dual role is the philosophy that has infused the development of Ursa Studio from the beginning: The platform should constantly be rehydrated with feedback from the field and actual client needs. I can’t just be briefed on things but need to do actual work in the field. I keep very aware of what problems our clients experience, how those problems can be solved in a generalizable way for other organizations, and how that translates back into the platform. I see that as an integral part of being smart about product development, pushing Ursa Studio in the right direction, and making it better organically.
We had the opportunity to take a thoughtful approach, both generally to what we wanted the company to be and specifically to what we wanted the software to do. This has been a very experimental development process, full of trial and error, where we’re trying to synthesize the feedback we’re getting from the outside world with our own beliefs about the right way to do things.
A unique aspect of our platform is the breadth of the data journey and different use cases that it covers. It’s starting with raw data and ending with the ability to share the charts and visualizations associated with the complex analytics it creates, but it's also doing everything in between. If we were just working on one particular part of the data journey—say, a measure authoring tool that covered one very specific type of analytic application—yeah, we could probably do something like that relatively quickly and put something out there that didn’t require a lot of revision. But no one’s smart enough to be able to imagine all the different ways that the software should work if it’s covering all of those use cases at once. We approach things with something like the scientific method—that is, you can’t just think your way out of problems, you need to experiment and see what the results are.
Nonetheless, we were able to expedite the development process as much as possible owing to our small team. Steve [Hackbarth, Andy’s brother] was our primary developer, and I was our primary product person. Having a unified thought process around that kind of work results in tremendous productivity and efficiency, and allows us to be agile in how we respond to feedback from the outside world.
Because he programmed the platform, Steve understands all of the technical implications of an idea, its feasibility, and the landmines that might be there. I, in turn, understand all the product implications across the entire scope of the platform. So if there’s a new problem that surfaces out there in the field, I will know what feature of the platform is closest to the solution, and whether there’s a way to extend that feature. It’s a very quick process to talk through a new idea because we both have the full picture in our heads. We don’t have to set up a meeting with a bunch of people and talk it through. So many miscommunications and bugs arise when you have different people with different pieces in their heads.
A small team would be agile at any company, but Steve is someone who takes the fewest number of words to understand something I say. And it’s probably the same the other way around. When we started the company, it was an incredible experience to work with someone on the technical side who could get what I was describing and execute it very quickly, and I wouldn’t have to worry about checking the quality of that execution. It’s a real joy to work with Steve, and I’ve sort of forgotten about that novelty over the years.
During the earlier years, when there was a lot of uncertainty—coming from ourselves as well as the outside world—about whether creating this platform would even be technically possible, the fact that there were three of us from more or less the beginning was essential. Having a couple of other people who you know you can rely on, who look out for each other’s best interests and not their own, who are true to their word—I can’t imagine how hard it would be to do this all on your own, either from a skill set or an emotional support standpoint.
And then Ursa’s co-founders are older than most entrepreneurs. We’re more mature, and our personalities are pretty easygoing. So there hasn’t been any real tension within that trio through some pretty stressful periods.
I was a math major in college, but I took a bunch of computer science classes, and those turned out to be my favorites. When you think of “creative” professions, you don’t typically think of programming, but you’re literally creating something out of nothing, and I found that to be a very rewarding thing to do.
I was a consultant for a while and then a programmer. While working at the Institute for Healthcare Improvement (IHI), I became interested in the topic of failures in healthcare. Healthcare is a consequential place to do work. Lives are at stake. And it really struck me as an injustice that the little mistakes that happen all the time, in any sort of organization, could end up killing people or having these lifelong health consequences.
Learning how brittle healthcare processes can be was pretty profound. Most people probably think that doctors rarely make mistakes, care is very well organized, the sequence of things that happen to you as a patient is pretty standardized, and if you go to another specialist you’ll get the same recommendation. If that doesn’t happen, you consider it to be an unusual failure.
However, healthcare is not this super well-oiled machine. In fact, compared to other industries where a lot less is at stake, there’s ample opportunity for problems to occur—which is a shame, because so much human suffering is associated with them. And then on the financial side, consider the massive amounts of spending, much of which is probably not necessary, and the financial toll in terms of individual bankruptcies and loss of competitiveness for U.S. companies carrying the cost of healthcare.
I was attracted to working on that kind of problem, and I found my place in healthcare as someone who could help out with the more technical problems that arose in efforts to improve quality or reliability. You’ll often find those projects staffed by clinicians who have first-hand knowledge of care delivery systems and a lot of insight into how they go wrong and how they can be improved. What they lack is the statistical, mathematical, or engineering background to draw on. So I fit into that ecosystem pretty easily in that role.
After a few years, I got to the point at IHI where I was running into the limits of my own technical knowledge, so I went to RAND to start a Ph.D. program and fill in the gaps in my formal education about more advanced statistical techniques. And the program did that, but perhaps the most important thing I learned was that you don’t need all these advanced theoretical statistical techniques to solve most real-world problems—you mostly just need better organized information. This understanding, in turn, gave me the confidence to say it’s perfectly reasonable to solve a particular problem with a very simple model and not one of these really exotic things that can suck up all of your time—in other words, most real-world problems don’t need a mathematical sledgehammer, just a few little taps in the right places.
Around that time, I remember reading an Atul Gawande article, “Big Med,” in which he makes this great, provocative comparison of healthcare to restaurant chains like The Cheesecake Factory. Gawande’s talking about an orthopedic surgeon who is trying to standardize care for his patients. This is one of the key pillars of improvement work: figure out the appropriate sequence of steps, then perform those steps reliably. Lo and behold, when he got a standardized process in place, things started to work better. Gawande then lists off a bunch of results:
“The surgeons now use a single manufacturer for seventy-five per cent of their implants, giving the hospital bargaining power that has helped slash its knee-implant costs by half. And the start-to-finish standardization has led to vastly better outcomes. The distance patients can walk two days after surgery has increased from fifty-three to eighty-five feet. Nine out of ten could stand, walk, and climb at least a few stairs independently by the time of discharge. The amount of narcotic pain medications they required fell by a third. They could also leave the hospital nearly a full day earlier on average (which saved some two thousand dollars per patient).”
This vignette would not be nearly as interesting without that proof. Without measures that were very targeted to this one particular type of patient, what do you have? Who’s to say anything got better? I thought, if every physician or other caregiver had access to information that told them how they were doing with that level of relevance and accuracy, it seems they would inevitably move toward improvement, either as part of a conscious, active improvement effort or just from passive exposure to the information. In fact, I’m sure having those data also helped the surgeon motivate more resistant administrators or surgeons to change their behavior. So one could easily imagine that having measures like that could be the difference between an improvement effort that succeeded or failed.
In late 2012, reading that article turned some of my general ideas about improvement and measurement into a sharper theory. What I didn’t realize, however, was how out of reach it would be for most organizations to generate those kinds of measures.
Around this time, I started working with Robin [Clarke, Ursa’s CEO] at UCLA Health. We came to see early on that the ability to answer highly specific questions was a key part of getting improvement efforts to work. However, those questions were effectively unanswerable, even in an environment that was extremely data rich—in other words, where the raw data to answer those questions existed but not in any form that could be put in front of people. It would take programmers weeks, if not months, to work through that complexity and produce any results. So not only did 99% of requests for that sort of information lie fallow, but few people even asked for it.
People make an understandable mistake: because it’s so common for the numbers to not reflect reality, they conclude that this kind of quantitative analysis will inherently fall short as an accurate assessment of what’s going on. But that kind of failure is not a fundamental, intrinsic quality of quantitative analysis; it’s that getting accurate analytics is just very hard to do. When you do it well, however, it has an absolutely transformative effect.
That’s when I stopped being a health services researcher and started being a database developer again, concentrating on finding ways to turn questions from clinicians about their performance into measures that could unlock all these different helpful activities across the healthcare organization. My background could be applied to get critical information to all these different teams. And the task at hand was building a new technology infrastructure that could interpret all the very messy raw data that are all over the place right now and turn them into key indicators of performance.
Robin and I had no idea at the time, but that work was laying the foundation for Ursa.
If you think of a performance measure as rendering judgment on how the system handles each patient, it’s very hard to evaluate every unique case correctly. Traditional approaches to developing these kinds of technical solutions just fall short. Roughly speaking, a “traditional approach” is when you get a bunch of subject matter experts together, they think hard about a problem, they come up with a set of rules, you get a bunch of programmers to implement those rules, and at the end of that you have a finished product.
That approach is destined to fail for the vast majority of analytic problems in healthcare because the experts in a white-boarding session, no matter how many hours they take, are not going to think of every corner case. So they’re not going to get the rules right. And then there will be some misunderstanding when translating those rules on the programming side because, inevitably, the technical staff don’t have the subject matter expertise to improvise correctly when they find something unusual in the data. In fact, they don’t see it as part of their job to improve on the specifications they’ve been given—just to execute.
You need multiple swings at the ball to get it right in a complicated environment like healthcare. No one is going to figure out the correct plan on their first try. That first try needs to be met by reality, and then you revisit your assumptions and come up with a better idea.
We talked earlier about the importance of relevant measures. The only way to get there is to be able to rapidly iterate on your ideas. Doing so inevitably brings together two groups that often have some inherent institutional conflict—non-technical and technical teams—and forces them to work together. For the end result to be correct, you need to take advantage of the subject matter knowledge of those who know their business or clinical processes inside and out as well as the knowledge of technical folks who know how to arrange data in a way that allows questions to be answered.
I think the difficulty of those tasks is underappreciated in data analytics overall and in healthcare in particular. It feels like it should be much easier, but as I learned first hand, it is quite hard. Many of the features in Ursa’s platform are meant to streamline certain tasks so that such an iterative approach can be applied more naturally.
Yes. We spent years before Ursa existed thinking about this problem and formulating a general idea of the solution, and then once Ursa Health became a thing, we started creating a software platform that would become that solution. So, we created a toolkit to automate some of those difficult technical tasks; to make it easier for technical people to share intermediate results with nontechnical people, capture their feedback, and work together on refining the questions to ask of the data; and to foster a different mindset about analytic work that emphasizes the need for these very tailored measures.
These measures—maybe thousands of them across a health system—are urgently needed. There’s a very reasonable desire to consolidate analytics into smaller and smaller sets, driven by the fact that it’s hard to maintain what ends up being a large infrastructure in most organizations. But you can solve the infrastructure problem by making it easier to create and maintain measures—for example, by creating a core set of data assets, such as claims, medication orders, office visits, surgical encounters, and more, that can be reused for all sorts of analytic purposes.
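To make the reuse idea concrete, here is a minimal sketch (with invented field names and toy data, not Ursa Studio’s actual implementation): a single cleaned “office visits” data asset is built once from raw records, and then two unrelated measures are computed from it without touching the raw data again.

```python
# Hypothetical sketch: one cleaned "office visits" data asset,
# reused by two different measures. All field names are invented.
from datetime import date

raw_records = [
    {"patient": "A", "date": "2023-01-05", "type": "office"},
    {"patient": "A", "date": "2023-03-20", "type": "office"},
    {"patient": "B", "date": "2023-02-11", "type": "telehealth"},
    {"patient": "B", "date": "2023-02-11", "type": "office"},
]

# Build the core data asset once: parse dates, keep in-person visits only.
office_visits = [
    {**r, "date": date.fromisoformat(r["date"])}
    for r in raw_records
    if r["type"] == "office"
]

# Measure 1: office visits per patient (a utilization question).
visits_per_patient = {}
for v in office_visits:
    visits_per_patient[v["patient"]] = visits_per_patient.get(v["patient"], 0) + 1

# Measure 2: each patient's most recent office visit (a follow-up question).
last_visit = {}
for v in office_visits:
    if v["patient"] not in last_visit or v["date"] > last_visit[v["patient"]]:
        last_visit[v["patient"]] = v["date"]

print(visits_per_patient)  # {'A': 2, 'B': 1}
print(last_visit)
```

The point of the sketch is the shape of the architecture: the messy cleanup logic lives in one place, and each new analytic question becomes a short query against an already-trusted asset rather than a weeks-long programming project.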
Once you do that, you then need to decide that it’s okay to say, “This group over here is going to have a totally different set of performance measures that we feed them weekly or monthly than this other group, maybe even in the same department, and we treat them differently simply because their patients are different, and what they’re interested in is different.” The most important thing is that the data are targeted and relevant, and that folks in the system believe the results. That’s what gets people to change their behavior.
I used to play in a bunch of bands when I was younger. I play the piano and sing—not well or anything, but I can carry a tune. For a while, I was in a country band. This was in Massachusetts. I wasn’t that into country music, but it’s a type of music where piano players get to participate more, and even rock out now and then. We played some older stuff, like Johnny Cash and Merle Haggard—I got to really like that music. And now I’ve co-founded a company in Nashville! Weird.