I’ve often heard people think of life in terms of 7-10 year stages and phases, each with a particular focus or motivation. Looking forward, it’s very hard to plan how one stage transitions to the next, but the teachings and experience of a prior stage often inform the next. For me, the first few stages looked a little like this:
College/Med School - Book knowledge and general learning. Studied biology, physiology, data science, and programming.
Residency/Fellowship - Hands-on training to take care of patients 1:1. Learned to think critically about medicine, decision-making, and the challenges of clinical care.
Post-Doc/Starting a lab - Understand needs in AI in medicine, develop AI models, study AI impact on clinical care, write papers and apply for research grants.
Our lab has been fortunate to be very successful in AI in medicine, in particular in developing and deploying AI models for cardiovascular medicine. We’ve been thoughtful about which problems to tackle, how to identify compelling questions, and how to develop clever solutions. I’d like to think these achievements particularly stood out when the tasks were more complex and required more foundational technical work - in echocardiography, say, the study images are large and diverse and much preprocessing is required - but also required clinical context to identify the important clinical questions and needs.
At the beginning of 2024, we took a step back to re-evaluate the landscape of interesting questions and ideas in cardiovascular imaging. I presented the preceding slide to the group, describing how we had already completed, or had a shot-on-goal roadmap for, almost all of the crucial tasks in echocardiography. I really believe we are on the cusp of revolutionizing echocardiography and point-of-care cardiac ultrasound. In fact, since then, we’ve presented EchoPrime, the largest echo model to date and one capable of producing a full automated report; EchoNet-Measurements, a parallel model that annotates all the common echocardiographic measurements; and task-specific models to assess MR, TR, AR, pericardial effusion, and liver disease. All of this is due to the opportunity to work with a tremendously talented and passionate technical team, but if I were to give the same vision roadmap talk today, it would look more like this:
With this in mind, I’ve been thinking more about what happens next and what is still needed to deploy AI in medicine. There are still tremendous barriers to the adoption of AI in clinical medicine, ranging from FDA clearance to clinician buy-in and clinical integration. We’re definitely in a pivotal transitional period in medicine, but the day-to-day still feels like molasses, as there seem to be unending stakeholder fears that need to be addressed.
In parallel, the research mission increasingly felt at odds with the business mission. As the financial value of our work grew, we were told not to open-source code or put information in the public domain that would allow others to build upon our research. [1] Readers might have already noticed that previously most of our GitHub was under an MIT license so that anyone could build upon our work, but we’ve since been mandated to revise the license to be much more restrictive. [2] I’ve never been particularly money motivated and have always been a proponent of open science - hence my lab’s active use of GitHub and preprints - but it’s ultimately someone else’s call how to balance science, community benefit, and profit.
Our time is limited (even in the best-case scenario, I might have three more stages or areas of focus in my career), so there is a fixed amount we can accomplish, and being intentional with our direction is the best use of time. Even now, it’s incredibly hard to get AI into clinical use, and I’d like clinical implementation to be the focus of the next phase of my career. While my lab will still focus on developing the very best cardiovascular AI models, I’m dedicating increasing effort to understanding the operational workflows and internal incentives for getting AI tools into clinicians’ hands. Hopefully, my short-term career roadmap will look like:
College/Med School - Book knowledge and general learning. I studied biology, physiology, data science, and programming.
Residency/Fellowship - Hands-on training to take care of patients 1:1. Learned to think critically about medicine, decision-making, and the challenges of clinical care.
Post-Doc/Starting a lab - Understand needs in AI in medicine, develop AI models, study AI impact on clinical care, write papers and apply for research grants.
Researcher and clinician - Health system implementation
Too often in the past, I thought being technically deep and technically right were the main facets of decision-making, but being able to navigate the decision-making process is a skill in and of itself. At the same time, I’m hoping a deeper technical understanding of the limitations and opportunities around AI will help inform strategic decisions. (For example, the importance of data is still a fundamental law of physics in AI, so leveraging more and more training data continues to come in handy.)
At the end of the decade, if we can revolutionize cardiac ultrasound - making it available to more patients in more places and optimizing clinical assessments (both by improving interpretations and by identifying additional value from the images we already acquire) - I would be incredibly satisfied with this stage of my career. I care less about who captures most of the business value, so long as patients can capture all of the clinical value.
[1] Nor did such requests come with resources to accelerate product development or help navigating internal stakeholders for clinical adoption - and this despite the work being funded by the NIH and other non-profits.
[2] https://github.com/echonet/EchoPrime?tab=License-1-ov-file. This is also despite many other researchers releasing code under an MIT license, and there being no policy prohibiting such releases.
I imagine this implementation layer will actually be the place where a business can directly add value - I’ve heard that Bunker Hill went through a similar story, and they seem like a good example of that.