Google has quietly removed a new artificial intelligence-powered feature from its search results that provided users with health advice sourced from online discussions and personal experiences. The feature, dubbed “What People Suggest,” aimed to leverage AI to offer insights from individuals facing similar health challenges, but it drew scrutiny over the potential for inaccurate or harmful information.
The move comes amid increasing concerns about the reliability of AI-generated health information and follows a recent investigation that highlighted misleading advice appearing in Google’s AI Overviews, which are displayed prominently in search results. Google initially touted “What People Suggest” as a way to “transform health outcomes across the globe” by connecting users with real-world perspectives, but the feature was ultimately discontinued after a relatively short trial period.
According to a Google spokesperson, the decision to scrap “What People Suggest” was part of a broader effort to simplify the search results page and was not directly related to concerns about the quality or safety of the feature. However, the timing of the removal coincides with heightened scrutiny of Google’s use of AI in healthcare and the potential risks associated with crowdsourced medical advice.
The initial rollout of “What People Suggest” occurred on mobile devices in the United States. The feature utilized AI to organize perspectives from online discussions into easily understandable themes, offering users insights into how others were managing similar conditions. For example, someone dealing with arthritis could find information on exercise routines favored by others with the same diagnosis. Three individuals familiar with the decision confirmed the feature is no longer available, with one source stating simply, “It’s dead.”
Concerns Over AI-Generated Health Information
The discontinuation of “What People Suggest” follows a January investigation by The Guardian, which revealed that Google’s AI Overviews were providing users with false and potentially dangerous health information. These AI-generated summaries, which are shown to an estimated 2 billion people each month, appeared above traditional search results, potentially leading users to rely on inaccurate advice.
Google initially responded to the criticism by stating that the AI Overviews linked to reputable sources and recommended seeking expert medical advice. However, the company subsequently removed AI Overviews for some, but not all, medical queries. This initial response and the subsequent adjustments underscore the challenges Google faces in balancing the benefits of AI-powered search with the need to ensure the accuracy and safety of health-related information.
Previous Plans for Expansion
In March 2023, at “The Check Up” event in New York, Google announced plans to expand its use of AI-generated summaries in search, including the introduction of “What People Suggest.” Karen DeSalvo, then Google’s chief health officer, explained in a blog post that the feature was designed to complement traditional expert-based medical information with insights from individuals with lived experience. DeSalvo wrote, “Whereas people arrive to search to find reliable medical information from experts, they also value hearing from others who have similar experiences.”
The company’s vision was to use AI to organize diverse perspectives from online discussions, making it easier for users to understand common themes and find relevant information. However, the project appears to have stalled, culminating in its recent removal.
Simplification or Safety Concerns?
While Google maintains that the removal of “What People Suggest” was solely a matter of simplifying the search results page, the timing raises questions about the potential influence of safety concerns. The company pointed to a November 2023 blog post by John Mueller, a search advocate at Google Switzerland, as evidence of this broader simplification effort. However, the post does not specifically mention “What People Suggest.”
When pressed on whether safety played a role in the decision, a Google spokesperson reiterated that it “had nothing to do with the quality or safety of the feature,” emphasizing that Google remains committed to providing users with access to reliable health information from a variety of sources, including online forums.
Looking Ahead
Google is scheduled to host its next “The Check Up” event on Tuesday, where Chief Health Officer Michael Howell and other company representatives will discuss their ongoing efforts to integrate AI into healthcare. The event will likely focus on how Google plans to address the challenges of providing accurate and safe health information in an era of rapidly evolving AI technology. The future of AI-powered health tools at Google will likely hinge on their ability to balance innovation with a commitment to user safety and reliable information.
The ongoing debate surrounding AI in healthcare highlights the need for careful consideration of the potential risks and benefits of these technologies. As AI continues to reshape the landscape of information access, ensuring the accuracy and trustworthiness of health-related content will remain a critical priority.
Disclaimer: This article provides informational content and should not be considered medical advice. Always consult with a qualified healthcare professional for any health concerns or before making any decisions related to your health or treatment.