Detecting high blood pressure with voice recordings
Researchers have explored whether a machine learning model can screen for hypertension by analyzing acoustic features in patients’ voices.
In a study published in IEEE Access, the researchers used spectrotemporal acoustic features extracted from the speech recordings of 245 patients to develop predictive models. The patients were asked to record a specific phrase up to six times daily for two weeks, and the researchers extracted 160 acoustic features spanning temporal, spectral and spectrotemporal characteristics from the recordings. They then compared model performance using features from whole recordings, from individual quarters of each recording, and from all quarters stacked together, as in the sketch below.
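To make the feature-extraction step concrete, here is a minimal Python sketch of how acoustic descriptors of those three types might be pulled from a recording and stacked by quarter. The study's exact 160-feature set is not detailed in this summary, so the specific features, the 16 kHz sample rate, and the helper names here are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only; not the study's actual feature set.
import numpy as np
import librosa

def extract_features(y, sr):
    """Summarize one audio segment with a few temporal, spectral,
    and spectrotemporal descriptors (an assumed, illustrative subset)."""
    feats = {
        # temporal: signal energy and zero-crossing behavior
        "rms_mean": librosa.feature.rms(y=y).mean(),
        "zcr_mean": librosa.feature.zero_crossing_rate(y).mean(),
        # spectral: where the energy sits in frequency
        "centroid_mean": librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        "bandwidth_mean": librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
    }
    # spectrotemporal: MFCC trajectories summarized over time
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    for i, row in enumerate(mfcc):
        feats[f"mfcc{i}_mean"] = row.mean()
    return feats

def stacked_quarter_features(path):
    """Split a recording into quarters and stack per-quarter features,
    mirroring the whole / quarters / stacked comparison in the study."""
    y, sr = librosa.load(path, sr=16000)
    stacked = {}
    for q, segment in enumerate(np.array_split(y, 4), start=1):
        for name, value in extract_features(segment, sr).items():
            stacked[f"q{q}_{name}"] = value
    return stacked
```

Stacking the per-quarter features rather than averaging over the whole recording preserves how the voice changes across the utterance, which is consistent with the study's finding that individual quarters carried different amounts of discriminative information.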
The researchers noted that the model performed best when features from all quarters of the recordings were stacked, with the second quarter providing the most discriminative data. They then examined the model's performance at two hypertension thresholds. At systolic blood pressure levels at or above 135 millimeters of mercury and diastolic levels at or above 85 millimeters of mercury, the model's balanced accuracy was 84% in women and 77% in men. At the higher cutoffs of 140 and 90 millimeters of mercury, its accuracy fell to 63% in women but rose to 86% in men.
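The evaluation step can be sketched briefly as well. The summary does not give the authors' labeling or evaluation code, so the sample readings, predictions, and the rule combining the two cutoffs below are assumptions that simply follow the wording above; balanced accuracy is used because it averages sensitivity and specificity, which matters when hypertensive and normotensive patients are unevenly represented.

```python
# Illustrative sketch only; data and labeling rule are assumed.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def label_hypertension(sbp, dbp, sbp_cut=135, dbp_cut=85):
    """Binary label per the summary's wording: both readings
    at or above the cutoffs count as hypertensive."""
    return (np.asarray(sbp) >= sbp_cut) & (np.asarray(dbp) >= dbp_cut)

# hypothetical measured pressures and model predictions
sbp = [128, 142, 150, 118, 137]
dbp = [82, 91, 96, 76, 88]
predicted = [0, 1, 1, 0, 0]  # the model's hypertension calls

y_true = label_hypertension(sbp, dbp)
print(balanced_accuracy_score(y_true, predicted))
```

Rerunning the same comparison with cutoffs of 140 and 90 reproduces the study's second threshold, which is where the sex difference in accuracy emerged.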
The findings demonstrated the potential of incorporating speech-based blood pressure detection into noninvasive screening options for hypertension.
Read more: IEEE Access
The article presented here is intended to inform you about the broader media perspective on dentistry, regardless of its alignment with the ADA's stance. It is important to note that publication of an article does not imply the ADA's endorsement, agreement, or promotion of its content.