History of Our AI for Healthcare Research

Our AI for Healthcare (AIHC) work began with the AI for Healthcare Bootcamp, launched in September 2017 as a cohort of six students co-led by Pranav Rajpurkar and Anand Avati and mentored by Nigam Shah and Andrew Ng. Since Autumn 2018, many AIHC Bootcamp projects have been conducted in close collaboration with the Stanford Center for Artificial Intelligence in Medicine & Imaging (AIMI).

Over the past five years, the AIHC Bootcamp has graduated more than 100 students and worked with more than 15 collaborating faculty. Over half of the students had their first research experience in the bootcamp. We are proud to share that our bootcamp alumni have pursued PhD programs at institutions including MIT, Stanford, and UW; led industry efforts at Google Brain, Apple Health, Microsoft Research, and Tesla AI; and founded AI startups including Valar Labs and Wispr AI.

After 2022, the AIHC Bootcamp evolved into two programs: (1) The AI for Healthcare Bootcamp at Stanford, and (2) The Medical AI Bootcamp, a joint Harvard-Stanford effort.

Below are four examples of successful bootcamp projects from 2017 to 2022, spanning electronic health records, imaging, and wearables data.

AI for Healthcare Projects


Improving Palliative Care with Deep Learning

Anand Avati, Kenneth Jung, Stephanie Harman, Lance Downing, Andrew Ng, Nigam Shah

We developed a deep learning approach that uses electronic health record (EHR) data to identify hospitalized patients who might benefit from palliative care. Using careful ablation techniques, our approach generates a report highlighting the factors in a patient's EHR data that contributed most to a high-probability prediction. A later version of the model has been deployed at the Stanford Medical Center, where it has improved the care of over 2,000 patients.

Paper Webpage Media

CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning

Pranav Rajpurkar*, Jeremy Irvin*, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng

We developed a deep learning model called CheXNet that takes a chest X-ray as input and outputs whether the patient has pneumonia. When we compared CheXNet to board-certified radiologists, we found that it achieved diagnostic performance comparable to theirs. The model has since been further developed and extended to other diseases and use cases, resulting in follow-up works including CheXNeXt, CheXpert, CheXaid, CheXtransfer, CheXZero, and CheXED, which is currently running in a shadow deployment in several emergency departments of a large national healthcare system.

Paper Webpage Media

MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs

Pranav Rajpurkar*, Jeremy Irvin*, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng

MURA is a publicly available dataset of upper-extremity bone X-rays labeled by radiologists, created to support the development of automated approaches for identifying abnormalities in bone X-rays. We held a competition in which participants submitted models trained on MURA; the best submission substantially outperformed the best radiologist on the held-out test set.

Paper Competition

Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network

Awni Y. Hannun*, Pranav Rajpurkar*, Masoumeh Haghpanahi*, Geoffrey H. Tison*, Codie Bourn, Mintu P. Turakhia, Andrew Y. Ng

We developed a deep neural network that diagnoses irregular heart rhythms, also known as arrhythmias, from single-lead ECG signals. When we compared the model to expert cardiologists, we found that it achieved diagnostic performance similar to that of the cardiologists.

Paper Webpage