RT Journal Article
SR Electronic
T1 Artificial intelligence-powered chatbots in search engines: a cross-sectional study on the quality and risks of drug information for patients
JF BMJ Quality & Safety
JO BMJ Qual Saf
FD BMJ Publishing Group Ltd
SP 100
OP 109
DO 10.1136/bmjqs-2024-017476
VO 34
IS 2
A1 Andrikyan, Wahram
A1 Sametinger, Sophie Marie
A1 Kosfeld, Frithjof
A1 Jung-Poppe, Lea
A1 Fromm, Martin F
A1 Maas, Renke
A1 Nicolaus, Hagen F
YR 2025
UL http://qualitysafety.bmj.com/content/34/2/100.abstract
AB Background: Search engines often serve as a primary resource for patients to obtain drug information. However, the search engine market is rapidly changing due to the introduction of artificial intelligence (AI)-powered chatbots. The consequences for medication safety when patients interact with chatbots remain largely unexplored. Objective: To explore the quality and potential safety concerns of answers provided by an AI-powered chatbot integrated within a search engine. Methodology: Bing copilot was queried on 10 frequently asked patient questions regarding the 50 most prescribed drugs in the US outpatient market. Patient questions covered drug indications, mechanisms of action, instructions for use, adverse drug reactions and contraindications. Readability of chatbot answers was assessed using the Flesch Reading Ease Score. Completeness and accuracy were evaluated based on corresponding patient drug information in the pharmaceutical encyclopaedia drugs.com. On a preselected subset of inaccurate chatbot answers, healthcare professionals evaluated the likelihood and extent of possible harm if patients follow the chatbot’s given recommendations. Results: Across the 500 generated chatbot answers, the Flesch Reading Ease Score indicated that responses were, overall, difficult to read. Overall median completeness and accuracy of chatbot answers were 100.0% (IQR 50.0–100.0%) and 100.0% (IQR 88.1–100.0%), respectively.
Of the subset of 20 chatbot answers, experts found 66% (95% CI 50% to 85%) to be potentially harmful. Of these 20 chatbot answers, 42% (95% CI 25% to 60%) were found to potentially cause mild to moderate harm, and 22% (95% CI 10% to 40%) to cause severe harm or even death if patients follow the chatbot’s advice. Conclusions: AI-powered chatbots are capable of providing overall complete and accurate patient drug information. Yet, experts deemed a considerable number of answers incorrect or potentially harmful. Furthermore, the complexity of chatbot answers may limit patient understanding. Hence, healthcare professionals should be cautious in recommending AI-powered search engines until more precise and reliable alternatives are available. Data availability: Data are available upon reasonable request. The data that support the findings of this study are available from the corresponding author upon reasonable request and in compliance with German data protection laws.