Most health care diagnostics that use artificial intelligence (AI) function as black boxes: the results include no explanation of why the machine thinks a patient has a particular disease or disorder. Adoption of these algorithms in health care has been slow because doctors and regulators cannot verify how the results were reached. However, a newer class of algorithms, known as “explainable AI” (XAI), can provide justifications for their results in a format that humans can understand. Many of the XAI algorithms developed to date are relatively simple, such as decision trees, and can be used only in limited circumstances. But as they continue to improve, XAI algorithms will likely become dominant in health care. Health care technology companies would be wise to allocate resources to their development.
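
To make the decision-tree example concrete, here is a minimal sketch of what “explainable” means in practice, assuming Python with scikit-learn; the breast-cancer dataset, the depth limit, and the other parameter choices are illustrative, not drawn from any particular diagnostic product. A fitted tree can be printed as plain if/then rules, so every prediction comes with a trace a clinician can audit.

```python
# Sketch: a decision tree as an "explainable" diagnostic model.
# Assumes scikit-learn; dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow tree keeps the rule set short enough for a human to review.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Every prediction can be traced through these threshold rules,
# unlike a black-box model that returns only a label or a score.
print(export_text(clf, feature_names=list(data.feature_names)))
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

A black-box model of comparable accuracy would return the same labels but offer no such trace of how each one was reached, which is precisely the gap XAI is meant to close.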