WHO's AI Chatbot SARAH Found Inconsistent in Providing Health Advice

WHO's AI chatbot SARAH provides inconsistent and contradictory health advice, highlighting the dangers of relying on AI for critical healthcare guidance, according to a POLITICO review.

Israel Ojoko

The World Health Organization's AI chatbot SARAH (Smart AI Resource Assistant for Health) has been found to provide inconsistent and sometimes contradictory health advice, according to a review conducted by POLITICO.

The chatbot, which is intended to offer the public guidance on healthy living based on WHO's expert recommendations, often struggles to give reliable and helpful responses.

In the review, SARAH was found to give conflicting answers to the same questions, fail to provide contact information for local healthcare providers when requested, and offer limited assistance for individuals experiencing severe mental health crises or suicidal thoughts.

While the chatbot can be prompt and courteous, and at times even brilliant, it is also prone to being deeply unhelpful in certain situations. "SARAH often gives contradictory answers to the same queries," the POLITICO review stated.

Why this matters: The inconsistencies and limitations of SARAH highlight the potential dangers of relying on AI chatbots for critical healthcare advice. Inaccurate or inadequate responses from such chatbots could lead to harmful outcomes for users seeking guidance on health issues.

The WHO acknowledged the feedback from the POLITICO review, suggesting it could be used to strengthen health promotion initiatives and better understand the role of AI in such efforts. Critics, however, argue that in its current state SARAH is too unreliable to be truly useful for providing health advice to the public.

While SARAH's performance improved somewhat with continued interaction, the chatbot's inconsistencies and its inability to provide adequate assistance in critical situations, such as mental health crises, raise concerns about its effectiveness and safety. "The review highlights the dangers of relying on chatbots for healthcare advice, as SARAH's inconsistent and limited responses could potentially lead to harmful outcomes for users," POLITICO reported.

The POLITICO review of the WHO's AI chatbot SARAH reveals significant inconsistencies and limitations in its ability to provide reliable health advice to the public. The chatbot's tendency to give contradictory answers, its failure to provide local healthcare provider information, and its inadequate assistance for mental health crises underscore the potential risks of depending on AI for critical health guidance. As the WHO works to improve SARAH based on this feedback, the review serves as a cautionary tale about the current limitations of AI in the healthcare space.

Key Takeaways

  • WHO's AI chatbot SARAH provides inconsistent, contradictory health advice.
  • SARAH fails to provide local healthcare provider info or adequate mental health support.
  • Inaccurate chatbot responses could lead to harmful outcomes for users.
  • WHO acknowledges SARAH's limitations, plans to improve based on feedback.
  • Article cautions against over-reliance on AI for critical healthcare guidance.