
How can providers minimize discrimination in AI?

DHHS issues guidance letter following nondiscrimination rule


The federal government published guidance on how health care providers can use artificial intelligence tools with minimal risk of inadvertently discriminating against people. 

In a Jan. 10 letter, the Department of Health and Human Services' Office for Civil Rights explained how covered entities, including health care practitioners and insurers, can safely introduce AI tools into their operations. The letter noted that under Section 1557 of the Patient Protection and Affordable Care Act, covered health programs and activities are required to take "reasonable steps to identify and mitigate the risk of discrimination when they use AI and other emerging technologies in patient care that use race, color, national origin, sex, age or disability as input variables."

The letter goes on to describe nondiscrimination in the use of AI and other emerging technologies in patient care, noting that the final rule applies longstanding civil rights principles to these tools to make clear that protections do not lapse as technology evolves.

The letter also includes a section focused on identifying and mitigating risks of discrimination, highlighting the final rule's two regulatory requirements for covered entities using AI. The agency encourages covered entities to identify risks by:

• Reviewing the Office for Civil Rights' discussion of risks in the use of such tools under Sec. 1557, including categories of tools used to assess risk of heart failure, cancer, lung function and blood oxygen levels.
• Researching studies published in peer-reviewed medical journals or by health care professional and hospital associations, including those put out by the Department of Health and Human Services.
• Utilizing, implementing or creating AI safety registries developed by nonprofit AI organizations or others, including internal registries used by the covered entity to track use cases within its organization.
• Obtaining information from vendors about the input variables or factors included in existing patient care decision support tools.

"[The Office for Civil Rights] encourages all entities to review their use of such tools to ensure compliance with Section 1557 and to put into place measures to prevent discrimination that will help ensure all patients benefit from technological innovations in clinical decision-making," the letter reads.

The final rule discusses how Sec. 1557, which was last updated in 2024, applies when health programs and activities utilize patient care decision support tools, including AI. 

For more information about requirements and compliance resources, visit the Health and Human Services webpage.



© 2023 American Dental Association