AI's ability to analyze millions of references and predict outcomes could revolutionize prognoses and drug discovery, but questions remain about its role in health care.

Images with millions of pixels. Tens of thousands of samples. Thousands of hours of work by dozens of student employees.

Health research at Washington State University, as in labs all over the world, is laborious and time-consuming, as scientists work to find better ways to diagnose diseases, identify effective pharmaceuticals and therapies, and improve people’s health.

WSU biology professor Michael Skinner’s study of epigenetics⁠—heritable molecular factors around DNA that regulate genome activity without changing the DNA sequence⁠—requires careful analysis of a huge number of samples in gigapixel images to identify pathologies and evidence of disease. Skinner and other researchers now have a valuable assistant that speeds up processes like these, often with more accurate results: artificial intelligence.

The health care landscape is undergoing a seismic shift, driven by the rapid advancement of AI in the past several years. From streamlining administrative tasks to potentially revolutionizing disease diagnosis and drug discovery, AI promises to reshape how we understand and treat illness, perform fundamental medical research, and keep people healthy.

As in Skinner’s work, WSU pharmaceutical researcher Senthil Natesan has leveraged AI to analyze almost unimaginable amounts of data and narrow possible drug candidates for ailments such as pain, epilepsy, or depression. His team’s techniques improve and accelerate that process with AI’s predictive abilities.

Even as medical researchers have made astounding gains with AI’s help, doctors, nurses, and other health care providers are figuring out how AI can assist patients and health care work. However, questions and concerns about errors remain regarding AI’s ability to diagnose patients, says Thomas Heston, a medical doctor and clinical associate professor at WSU’s Elson S. Floyd College of Medicine.

Still, Heston sees the potential for AI to be an effective health consultant, as do medical sociologist Anna Zamora-Kapoor and other WSU researchers who see AI as a tool for patients too⁠—so long as AI truly makes health care easier to access.

 

Managing the flood of information

The work of basic health research, such as Skinner’s exploration of epigenetics, requires significant testing and analysis. In Skinner’s case, identifying signs of disease over multiple generations of rats and mice is necessary to verify findings.

Michael Skinner (Courtesy Organic Rising, derivative from original)

“We run an analysis on hundreds of slides for each organ system that we’re looking at. That’s why it would take six months to a year sometimes to manually do these pathologies,” he says, adding that each diagnosis requires validation with three people.

To speed up the process, Skinner (’83 PhD Biochem.) worked with Larry Holder, a computer science professor in the Voiland College of Engineering and Architecture, to develop an AI tool that could more quickly identify signs of disease in tissue images.

Larry Holder (Courtesy School of Electrical Engineering & Computer Science, derivative from original)

Holder and graduate student Colin Greeley (’21, ’23 MS Comp. Sci.) built a “deep learning” AI model that identifies pathologies with remarkable speed and accuracy, often surpassing human capabilities. The research, detailed in Scientific Reports, trained the AI using images from past epigenetic studies in Skinner’s laboratory. These studies focused on molecular-level signs of disease in kidney, testes, ovarian, and prostate tissues from rats and mice.

Colin Greeley (Courtesy Palouse Robosub Club/Facebook, derivative from original)

To handle the extremely high-resolution, gigapixel images, Holder’s research team designed the AI model to analyze smaller individual tiles, while still maintaining their context within larger sections at a lower resolution. Holder explains that it’s like zooming in and out on a microscope. This allows the AI model to manage very large file sizes without significant slowdown.
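The tile-and-context idea can be sketched in a few lines of code. The snippet below is illustrative only: the tile size, downsampling factor, and function names are made up for the example, since the article does not specify the model's actual parameters.

```python
import numpy as np

def make_tiles(image, tile_size=256, context_downsample=8):
    """Split a large image array into fixed-size tiles, pairing them with a
    low-resolution view of the whole image for context -- like zooming in
    and out on a microscope. (Illustrative sketch, not the WSU model.)"""
    h, w = image.shape[:2]
    # Low-resolution "zoomed out" view of the entire slide.
    context = image[::context_downsample, ::context_downsample]
    tiles = []
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            # Each tile is analyzed on its own, keeping file sizes manageable.
            tiles.append(((y, x), image[y:y + tile_size, x:x + tile_size]))
    return tiles, context

# Usage: a mock 1024x1024 "slide" yields 16 tiles of 256x256
# plus a 128x128 context image.
slide = np.zeros((1024, 1024), dtype=np.uint8)
tiles, context = make_tiles(slide)
```

In a real pipeline each tile and the context view would be fed to a neural network together, so the model sees fine detail without ever loading the full gigapixel image at once.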

The research team then tested the AI model using images from studies beyond Skinner’s lab, including identifying breast cancer and lymph node metastasis. The results demonstrated that the AI model not only correctly identified pathologies rapidly, but did so faster than previous models and, in some instances, found cases that a trained human team had missed.

Skinner and Holder point out that the tool could be used by physicians and other health care specialists, not to replace humans but to provide a fast yet accurate review of medical images such as X-rays or CT scans. The AI model “applies to almost every kind of image you can get, particularly in the medical field,” Skinner says.

Holder explains that deep learning is an AI method that goes beyond traditional machine learning by attempting to loosely mimic the human brain through a network of neurons and synapses. He says that if the model makes an error, it “learns” from it using a process called backpropagation, which involves making adjustments throughout its network to prevent the error from recurring.
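The core loop of learning from error — predict, measure how wrong the prediction was, adjust weights to shrink that error — can be illustrated with a single linear "neuron." This is a deliberately minimal sketch: real deep networks chain this gradient rule (backpropagation) through many layers of neurons.

```python
# Minimal illustration of learning from error via gradient descent,
# the idea underlying backpropagation. One weight, one input; real
# networks propagate these adjustments back through every layer.
def train_step(w, x, target, lr=0.1):
    pred = w * x              # forward pass: make a prediction
    error = pred - target     # how wrong was the model?
    grad = error * x          # gradient of squared error w.r.t. w
    return w - lr * grad      # nudge the weight to reduce the error

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=2.0)
# After repeated corrections, w approaches 2.0, so pred approaches target.
```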

Holder says that the state-of-the-art network they designed outperformed several other systems across multiple datasets in their comparisons. He sees its potential for other areas of molecular research that can take months of computation. “With the AI, we could probably do it in a few days,” Holder says.

The potential of this AI model could revolutionize medicine for both animals and humans by significantly improving the speed and efficiency of analysis, Skinner adds. “This is the future in terms of how pathology is going to be done.”

AI’s capability for analyzing data goes beyond diagnosis. Identifying new drugs for everything from pain to depression also requires sifting through billions of options to find effective and safe candidates.

Natesan, an associate professor at the College of Pharmacy and Pharmaceutical Sciences, and his team use computer simulations and AI to analyze those options and expedite the drug development process.

Senthil Natesan (Courtesy WSU Health Sciences, derivative from original)

Traditional drug development is lengthy and expensive, often taking up to 10 years and costing billions of dollars per approved drug. Natesan’s work focuses on reducing this time and cost by using computational methods to study protein-drug interactions and identify potential drug molecules.

“Instead of testing 100,000 compounds in animals for several years, you can screen 100 million compounds in a matter of weeks, and then find the top 100 compounds,” Natesan says. “Then those can be tested in animals, so for pre-screening, pre-clinical drug discovery, computer modeling is extremely useful.”
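The pre-screening workflow Natesan describes — score an enormous compound library quickly, then keep only the best candidates for animal testing — can be sketched as a top-k filter. The scoring function below is a random placeholder standing in for a real docking or AI-based affinity predictor, not Natesan's actual model.

```python
import heapq
import random

# Hypothetical sketch of in-silico pre-screening. score_compound is a
# stand-in for a fast predictive model (e.g., estimated binding affinity).
def score_compound(compound_id, rng):
    return rng.random()  # placeholder score; a real scorer goes here

def prescreen(library, top_n=100, seed=0):
    """Score every compound and keep only the top_n highest-scoring ones."""
    rng = random.Random(seed)
    scored = ((score_compound(c, rng), c) for c in library)
    # heapq.nlargest streams through millions of scores using O(top_n) memory.
    return heapq.nlargest(top_n, scored)

# Usage: screen a mock library of 1,000,000 compounds, keep the top 100.
top = prescreen(range(1_000_000), top_n=100)
```

The same pattern scales to the 100 million compounds Natesan mentions; only the scoring function and the compute budget change.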

Computer-aided drug design has been around for a while, Natesan says, but advances in generative AI enable the “in silico” simulations to predict drug interactions, safety, effectiveness, and selectivity in a fraction of the computational time.

The computer-based approach can also predict how well a drug will be absorbed or reach its target site much faster than experimental methods.

In a recent study, Natesan and his team developed and used a novel generative AI model to predict the interaction between drug molecules and proteins embedded in the cell membrane. This is crucial because approximately one-third of all human proteins reside in or around the cell membrane, and over half of all FDA-approved drugs target these membrane proteins.

Natesan’s AI-enhanced method significantly reduced the time required to obtain critical information about how drugs interact with membrane lipids, cutting it down to only a third of the typical computational time. Natesan says that finding this information for a single drug usually takes about a month using a high-performance workstation. Their new method takes just 10 days.

Natesan leveraged AI’s predictive capability to achieve this time reduction. Instead of performing calculations across the entire span of the cell membrane layers, which would require 60 calculations, the team computed the information for just 10 selected regions. The AI model then generated the missing data for the remaining 50 regions.
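The compute-a-few, fill-in-the-rest strategy can be illustrated with simple interpolation. Natesan's group used a generative AI model for this step, so the linear interpolation below is only a stand-in for the idea of reconstructing 50 region values from 10 computed ones; the synthetic cosine profile is likewise invented for the example.

```python
import numpy as np

# Sketch: compute an expensive property at only 10 of 60 membrane depth
# positions, then fill in the remaining 50. Linear interpolation here
# stands in for Natesan's generative AI model, whose architecture the
# article does not detail.
positions = np.arange(60)                        # all 60 membrane regions
sampled = np.linspace(0, 59, 10).astype(int)     # the 10 regions computed
expensive_values = np.cos(sampled / 59 * np.pi)  # stand-in for simulation output

# Reconstruct the full 60-region profile from the 10 computed points.
profile = np.interp(positions, sampled, expensive_values)
```

Because only a sixth of the positions need the expensive simulation, the overall cost drops roughly in proportion, which mirrors the month-to-10-days speedup described above.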

The model’s accuracy was validated by testing it on existing data for three common anti-asthma drugs and 20 other compounds.

It’s part of determining the safety of a drug, too. For example, “if you want to treat pain or depression, whether this new compound is able to produce relief or not is not the only thing we are looking into. We are also looking into whether this compound is safe,” Natesan says.

By significantly decreasing the time and cost associated with traditional experimental methods, Natesan sees many gains ahead: “Imagine doing all these tests using experiments versus predicting most of these drug characteristics with one click. We are approaching that capability.”

 

The human side of AI

While AI’s ability to handle huge swaths of data is undisputed, the technology’s role in direct health care with patients is still in flux.

Heston, a working physician and researcher, has taken a deep interest in AI use in patient care. He’s cautious about the ability of AI and large language models like ChatGPT to diagnose patients.

Thomas Heston (Courtesy Elson S. Floyd College of Medicine, derivative from original)

“Over the past several years, a lot of the research on AI in medicine has focused on whether it can pass a standardized exam,” he says. “But while it does answer these questions pretty well, its diagnostic capabilities are still evolving.”

A study led by Heston revealed potential pitfalls in relying too heavily on current AI models for complex clinical assessments, such as whether a patient with chest pain needs to be hospitalized. Heston found that ChatGPT, despite its reported ability to pass medical exams, performed inconsistently in assessing heart risk in simulated chest pain cases. The AI system provided varying risk levels for the exact same patient data and failed to align with traditional physician methods.

The researchers attributed this inconsistency to randomness built into the current version of the software. While randomness is beneficial for generating natural language, it’s problematic for health care applications requiring consistent and reliable answers.
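Why identical inputs can yield different answers can be shown with a toy sampler: generative models pick outputs from a probability distribution, and a "temperature" setting controls how much randomness enters the choice. The function and parameter names below are generic illustrations of the concept, not any specific API, and the risk scores are invented.

```python
import math
import random

# Toy model of sampling with temperature. At temperature 0 the choice is
# deterministic (always the top-scoring answer); higher temperatures make
# identical inputs produce varying answers -- the inconsistency Heston's
# study observed in simulated chest pain cases.
def sample_answer(scores, temperature, rng):
    if temperature == 0:
        return max(scores, key=scores.get)  # always the same answer
    # Softmax-style weighting: higher temperature flattens the distribution.
    weights = {a: math.exp(s / temperature) for a, s in scores.items()}
    r = rng.random() * sum(weights.values())
    for answer, w in weights.items():
        r -= w
        if r <= 0:
            return answer
    return answer

scores = {"low risk": 2.0, "moderate risk": 1.8, "high risk": 0.5}
rng = random.Random(42)
# Deterministic: the same patient data always yields the same assessment.
assert all(sample_answer(scores, 0, rng) == "low risk" for _ in range(10))
```

This is why consistency-critical applications often pin the temperature near zero, trading away the natural-sounding variety that benefits conversational text.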

Despite these findings, Heston acknowledges the potential of generative AI in health care, suggesting its strengths lie in creating differential diagnoses and rapidly summarizing pertinent information from medical records, so long as they meet critical privacy requirements.

“There’s going to be that human component in medicine,” he says. “But how do we use ChatGPT or other AI to make us better at our jobs?”

Heston’s questions and insights fit with the national conversation around AI and the work of physicians. Some research suggests that AI, when used as a tool and not a replacement, can outperform doctors on some diagnoses. But studies found that many physicians undervalue accurate AI input or struggle to effectively leverage AI capabilities.

“AI doesn’t need to replace our thinking, but it can challenge our thinking as physicians,” Heston suggests. For example, if a patient might have chronic kidney disease, Heston says he could ask AI to “answer questions as if you’re a nephrologist. Or, ‘I want you to provide 10 out-of-the-box diagnoses.’”

Another use of AI is communication with patients. AI tools are already being used by thousands of doctors to respond to patient queries, but often without the patient’s knowledge. The lack of transparency, impact on doctor-patient relationships, and potential for errors underscore the need for vigilance.

Yet even when AI communication about health is transparent, is it effective?

Maybe not without some changes, says medical sociologist Zamora-Kapoor. She is interested in examining the implementation of AI tools and their use by all kinds of patients. In contrast to most AI research, which focuses on algorithms and demos, Zamora-Kapoor is interested in real-life applications of these promising technologies.

Anna Zamora-Kapoor (Courtesy WSU Health Equity Research Center, derivative from original)

Last year, she looked at how AI can influence health care delivery in underserved and rural communities that have less access to health care because of financial, geographical, and language or cultural barriers.

Zamora-Kapoor, who holds a joint appointment with the College of Medicine and the Department of Sociology, partnered with Three Rivers Family Medicine, a rural clinic in Brewster in remote north-central Washington. They examined the feasibility of AI-generated text messages to increase the uptake of lung cancer screenings.

The case study had ChatGPT generate two different messages, one direct and one polite, that were sent to 144 eligible patients at Three Rivers who were 50 or older with a history of smoking. The goal was to encourage them to schedule low-dose CT scans for lung cancer screening.

They found that less than half the patients who received a message actually opened it. Zamora-Kapoor describes this as a critical technology barrier. “The overarching challenge is sometimes simply reaching patients,” she says. She also notes that rural clinics might not be well equipped to communicate efficiently with patients via AI and smartphones.

Successful AI use in rural health care was further stymied by lack of adequate software and reliable Internet, limited training of clinic staff on AI, and issues with electronic health records.

“Everybody working in a rural clinic is stretched very thin,” Zamora-Kapoor says. “When I first started this project, I asked myself: ‘Is there a way to use AI to reduce the administrative burden on clinics?’”

She points out a significant technological divide between urban and rural areas at both state and national levels. Zamora-Kapoor was part of AIM-AHEAD⁠—the Artificial Intelligence–Machine Learning Consortium to Advance Health Equity and Research Diversity⁠—created by the National Institutes of Health to improve the capabilities of emerging technology to address health disparities and inequities.

Rural health clinics often lack robust data platforms and IT support, she says. They’d benefit from investments to improve their records, but some tech companies are reluctant to work with smaller rural clinics because of their small patient populations.

Moreover, sociological factors can affect technology and health in rural areas, such as older patients not having SMS-enabled phones or not regularly using online portals.

“The value of technologies like AI is limited unless people can use them,” Zamora-Kapoor says.

 

AI’s future as a health care partner

Heston agrees that AI, like any technology, can be a big help in the right context. “We can’t use it as a crutch,” he says. “We have to use it to develop our thinking, maybe by presenting alternatives you may not have thought of.”

Zamora-Kapoor also wants to see more benefits to patients: “The promise of AI is real, but its current implementation is not benefiting all Washingtonians in the same way.” In addition to reducing administrative burdens of small clinics, she would like to see if AI translation can bridge language barriers to good health information.

Along that vein, WSU nurse scientists Connie Nguyen-Truong, Catherine Van Son, Marian Wilson, Julie Postma, and Shelly Fritz established the NTECH lab at the intersection of technological innovation and nursing expertise.

Their broad combined knowledge⁠—in areas such as smart home technologies, health apps for smartphones, chronic pain management, gerontology, and the adoption of health technology in underrepresented communities⁠—enables NTECH to prepare health care professionals such as nurses to use tech and AI.

To address some lingering issues around AI, the NTECH team also develops bioethics training.

Meanwhile, WSU researchers benefit from AI’s data-crunching superpowers. Veterinary medicine researchers at Washington Animal Disease Diagnostic Lab are already finding uses for Holder’s AI model to diagnose diseases in animal tissue samples.

“We’re taking advantage of the new technology because it’s so quick and accurate,” Skinner says. “It’s inevitable every field is going to incorporate AI.”

 

Web exclusive

Nursing and elder care leverage AI tools

Learn more

WSU researchers develop machine learning model to predict virus reservoirs (WSU Insider, March 31, 2025)

Where Are All the AI Drugs? (Wired, July 17, 2025)