IBM's Doctor Watson: 10 Years Later

The year 2021 marks a major 10-year anniversary for artificial intelligence that should not pass without acknowledgment. In 2011, IBM's question-answering computer Watson roundly defeated Jeopardy! champions Ken Jennings and Brad Rutter on the television game show, winning the three-day match with a total of roughly $77,000 to Jennings' $24,000 and Rutter's $22,000 and taking the first-place prize of $1 million.

Just a day after Watson's Jeopardy! victory was televised, IBM began a marketing campaign to leverage its success. Within a few years, the company was taking a victory lap for Watson's foray into cancer care, touting it as "a revolutionary approach to medicine and health care that is likely to have significant social, economic, and political consequences."

In one of his Final Jeopardy! responses, champion Ken Jennings appended this assessment: "I, for one, welcome our new computer overlords." Later, in an article in Slate, Jennings wrote:

Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of "thinking" machines. "Quiz show contestant" may be the first job made redundant by Watson, but I'm sure it won't be the last.

In the intervening years, it has become clear that IBM overestimated the speed with which Watson would conquer the world. Perhaps Watson’s most prominent failure occurred in a 2013 partnership with MD Anderson Cancer Center, originally touted by the partners as leveraging the ability of cognitive systems to “‘understand’ the context of users’ questions, uncover answers from Big Data, and improve performance by continuously learning from experiences.”

But by 2017, MD Anderson had "benched" Watson in what was widely seen as a setback for artificial intelligence in medicine. Watson could not ingest medical information from patient records or the medical literature the way a physician would, and it had difficulty comparing each new patient with the many patients who had come before. A "scathing" report by auditors at the University of Texas found that the project had not achieved its goals and had cost approximately $62 million.

In fact, even Watson's triumph in Jeopardy! is not as clear-cut as it seems. For one thing, Watson's victory may have had more to do with its speed in "buzzing in" than with any superior ability to answer questions. In many cases, contestants Jennings and Rutter also knew the answers but were "blocked out" by Watson's faster timing.

Moreover, Watson was not perfect. On day two of the three-day contest, the Final Jeopardy! category was U.S. Cities, and the clue read, "Its largest airport is named after a World War II hero and its second-largest airport is named after a WWII battle." Watson responded, "What is Toronto?" That Watson could mistake Toronto for a U.S. city serves as a powerful reminder that even AI can make mistakes. A mistake in Jeopardy! is not a matter of life and death, but in medicine, it could be.

Some might argue that we have not given Watson sufficient time: the forecasts were excessively rosy, perhaps, but with additional opportunities to work out the kinks, it will triumph in the end. Or perhaps the problem lies with healthcare itself: we simply don't possess a clear enough understanding of patients, diseases, and physicians.

Yet other explanations are also possible. One would be that inputting and analyzing massive quantities of data is simply not enough to advance our understanding of patient care. Perhaps good patient care is not primarily a data-driven endeavor.

We should also think twice before we assume that real knowledge is an output solely attributable to following algorithms. If we liken a computer to a net, the mere fact that a net of a certain type catches some things and misses others in no way supports the conclusion that the only real or important things are the ones we find in the net.

It is quite possible that what physicians and other health professionals know and can do is not replicable in digital terms. A digital reproduction of a great painting may closely resemble the original, just as a digital audio recording may be nearly indistinguishable from a live musical performance, but in the end, they are not the same.

The ultimate challenge in improving patient care is not to make patients, diseases, and physicians fit our models, but to make our models fit patients, diseases, and physicians. For one thing, we cannot refashion the world to fit our tools, and even more importantly, the world itself is a far more complex, rich, and beautiful reality than any of our models can capture.
