ChatGPT falls short on specialized oral health questions

A new study has found that ChatGPT, while effective at answering general questions, struggles to provide accurate and practical answers to complex oral health queries, especially those related to smoking and dental care.
Researchers evaluated ChatGPT’s responses to 500 frequently searched questions across five dental health categories, including periodontal conditions, oral hygiene, and oral surgery. About 20% of the responses were rated as “not useful” or only “partially useful,” particularly in niche areas like oral soft tissues and the effects of smoking on surgery.
The study, published in BMC Oral Health, also flagged concerns around readability, noting that although most responses were understandable, the language was often not accessible enough for general audiences. Oral surgery topics were especially difficult to read.
In terms of practical guidance, or “actionability,” ChatGPT performed best in areas like periodontal care, offering hygiene tips for smokers. However, actionability varied significantly by topic.
Overall, the researchers concluded that ChatGPT “can effectively supplement healthcare education.” However, “it faces challenges with specialized topics, readability and consistent actionability [and] should not replace professional dental advice,” according to the study.
Read more: BMC Oral Health
The article presented here is intended to inform you about the broader media perspective on dentistry, regardless of its alignment with the ADA's stance. It is important to note that publication of an article does not imply the ADA's endorsement, agreement, or promotion of its content.