In Press
D. D. Deliyski, et al., “Laser-calibrated system for transnasal fiberoptic laryngeal high-speed videoendoscopy,” Journal of Voice, In Press.
K. L. Marks, J. Z. Lin, A. Fox, L. E. Toles, and D. D. Mehta, “Impact of non-modal phonation on estimates of subglottal pressure from neck-surface acceleration in healthy speakers,” Journal of Speech, Language, and Hearing Research, In Press.
H. Ghasemzadeh, D. D. Deliyski, D. S. Ford, J. B. Kobler, R. E. Hillman, and D. D. Mehta, “Method for vertical calibration of laser-projection transnasal fiberoptic high-speed videoendoscopy,” Journal of Voice, In Press.
The ability to provide absolute calibrated measurement of the laryngeal structures during phonation is of paramount importance to voice science and clinical practice. Calibrated three-dimensional measurement could provide essential information for modeling purposes, for studying the developmental aspects of vocal fold vibration, for refining functional voice assessment and treatment outcomes evaluation, and for more accurate staging and grading of laryngeal disease. Recently, a laser-calibrated transnasal fiberoptic endoscope compatible with high-speed videoendoscopy (HSV) and capable of providing three-dimensional measurements was developed. The optical principle employed is to project a grid of 7 × 7 green laser points across the field of view (FOV) at an angle relative to the imaging axis, such that (after calibration) the position of each laser point within the FOV encodes the vertical distance from the tip of the endoscope to the laryngeal tissues. The purpose of this study was to develop a precise method for vertical calibration of the endoscope. Investigating the position of the laser points showed that, besides the vertical distance, they also depend on the parameters of the lens coupler, including the FOV position within the image frame and the rotation angle of the endoscope. The presented automatic calibration method was developed to compensate for the effect of these parameters. Statistical image processing and pattern recognition were used to detect the FOV, the center of FOV, and the fiducial marker. This step normalizes the HSV frames to a standard coordinate system and removes the dependence of the laser-point positions on the parameters of the lens coupler. Then, using a statistical learning technique, a calibration protocol was developed to model the trajectories of all laser points as the working distance was varied. Finally, a set of experiments was conducted to measure the accuracy and reliability of every step of the procedure. 
The system was able to measure absolute vertical distance with mean percent error in the range of 1.7% to 4.7%, depending on the working distance.
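As an illustration of the calibration principle described above, the following sketch fits the image-plane position of a single laser point against known working distances and then inverts new positions to vertical distance. The data and the simple polynomial model are hypothetical assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Hypothetical calibration data: image position (pixels) of one laser
# point recorded at a series of known working distances (mm).
positions_px = np.array([108.38, 101.27, 95.28, 90.00, 85.23, 80.84])
distances_mm = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])

# Fit a low-order polynomial mapping laser-point position to distance.
coeffs = np.polyfit(positions_px, distances_mm, deg=2)

def estimate_distance(position_px):
    """Estimate vertical working distance (mm) from a laser-point position."""
    return float(np.polyval(coeffs, position_px))

# Percent error against a hypothetical held-out ground-truth measurement.
true_mm = 22.5
est_mm = estimate_distance(92.50)
percent_error = 100.0 * abs(est_mm - true_mm) / true_mm
```

The error here stays far below the 1.7% to 4.7% range reported above only because the synthetic data are nearly noise-free; a real calibration must also normalize for the lens-coupler parameters described in the abstract.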
M. E. Powell, et al., “Efficacy of videostroboscopy and high-speed videoendoscopy to obtain functional outcomes from perioperative ratings in patients with mass lesions,” Journal of Voice, In Press.
G. Maguluri, D. Mehta, J. Kobler, J. Park, and N. Iftimia, “Synchronized, concurrent optical coherence tomography and videostroboscopy for monitoring vocal fold morphology and kinematics,” Biomedical Optics Express, vol. 10, no. 9, pp. 4450-4461, 2019.
Voice disorders affect a large number of adults in the United States, and their clinical evaluation heavily relies on laryngeal videostroboscopy, which captures the medial-lateral and anterior-posterior motion of the vocal folds using stroboscopic sampling. However, videostroboscopy does not provide direct visualization of the superior-inferior movement of the vocal folds, which yields important clinical insight. In this paper, we present a novel technology that complements videostroboscopic findings by adding the ability to image the coronal plane and visualize the superior-inferior movement of the vocal folds. The technology is based on optical coherence tomography, which is combined with videostroboscopy within the same endoscopic probe to provide spatially and temporally co-registered images of the mucosal wave motion, as well as vocal fold subsurface morphology. We demonstrate the capability of the rigid endoscopic probe, in a benchtop setting, to characterize the complex movement and subsurface structure of aerodynamically driven excised larynx models within the 50 to 200 Hz phonation range. Our preliminary results encourage future development of this technology with the goal of its use for in vivo laryngeal imaging.
M. Motie-Shirazi, et al., “Toward development of a vocal fold contact pressure probe: Sensor characterization and validation using synthetic vocal fold models,” Applied Sciences, vol. 9, no. 15, pp. 3002, 2019.
Excessive vocal fold collision pressures during phonation are considered to play a primary role in the formation of benign vocal fold lesions, such as nodules. The ability to accurately and reliably acquire intraglottal pressure has the potential to provide unique insights into the pathophysiology of phonotrauma. Difficulties arise, however, in directly measuring vocal fold contact pressures due to physical intrusion from the sensor that may disrupt the contact mechanics, as well as difficulty in determining probe/sensor position relative to the contact location. These issues are quantified and addressed through the implementation of a novel approach for identifying the timing and location of vocal fold contact, and measuring intraglottal and vocal fold contact pressures via a pressure probe embedded in the wall of a hemi-laryngeal flow facility. The accuracy and sensitivity of the pressure measurements are validated against ground truth values. Applicability to in vivo approaches is assessed by acquiring intraglottal and vocal fold contact pressures using a synthetic, self-oscillating vocal fold model in a hemi-laryngeal configuration, where the sensitivity of the measured intraglottal and vocal fold contact pressure relative to the sensor position is explored.
D. D. Mehta and J. H. Van Stan, “Ambulatory phonation monitoring,” in The SAGE Encyclopedia of Human Communication Sciences and Disorders, J. S. Damico and M. Ball, Eds. SAGE Publications, 2019, pp. 90–91.
A. J. Ortiz, et al., “Automatic speech and singing classification in ambulatory recordings for normal and disordered voices,” The Journal of the Acoustical Society of America, vol. 146, no. 1, pp. EL22–EL27, 2019.
Ambulatory voice monitoring is a promising tool for investigating phonotraumatic vocal hyperfunction (PVH), associated with the development of vocal fold lesions. Since many patients with PVH are professional vocalists, a classifier was developed to better understand phonatory mechanisms during speech and singing. Twenty singers with PVH and 20 matched healthy controls were monitored with a neck-surface accelerometer–based ambulatory voice monitor. An expert-labeled ground truth data set was used to train a logistic regression on 15 subject-pairs with fundamental frequency and autocorrelation peak amplitude as input features. Overall classification accuracy of 94.2% was achieved on the held-out test set.
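A minimal sketch of the kind of two-feature logistic regression classifier described above, with synthetic stand-ins for the fundamental frequency and autocorrelation peak amplitude features; the data, value ranges, and training setup are illustrative assumptions, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic frame-level features: speech tends toward lower, more variable
# f0 and weaker periodicity than singing (illustrative values only).
f0_speech = rng.normal(180, 30, 200)
ac_speech = rng.normal(0.55, 0.10, 200)
f0_singing = rng.normal(320, 40, 200)
ac_singing = rng.normal(0.80, 0.05, 200)

X = np.column_stack([
    np.concatenate([f0_speech, f0_singing]),
    np.concatenate([ac_speech, ac_singing]),
])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = speech, 1 = singing

clf = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = clf.score(X, y)
```

With well-separated synthetic classes the fit is trivially accurate; the study's 94.2% figure, by contrast, was obtained on a held-out test set of expert-labeled ambulatory recordings.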
O. Murton, S. Shattuck-Hufnagel, J.-Y. Choi, and D. D. Mehta, “Identifying a creak probability threshold for an irregular pitch period detection algorithm,” The Journal of the Acoustical Society of America, vol. 145, no. 5, pp. EL379–EL385, 2019.
Irregular pitch periods (IPPs) are associated with grammatically, pragmatically, and clinically significant types of nonmodal phonation but are challenging to identify. Automatic detection of IPPs is desirable because accurately hand-identifying IPPs is time-consuming and requires training. The authors evaluated an algorithm developed for creaky voice analysis to automatically identify IPPs in recordings of American English conversational speech. To determine a perceptually relevant threshold probability, frame-by-frame creak probabilities were compared to hand labels, yielding a threshold of approximately 0.02. These results indicate generally good agreement between hand-labeled IPPs and automatic detection, calling for future work investigating effects of linguistic and prosodic context.
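The thresholding step itself reduces to a simple comparison: frame-level creak probabilities at or above the identified threshold are flagged as IPP frames. A sketch with hypothetical probability values:

```python
import numpy as np

CREAK_THRESHOLD = 0.02  # perceptually derived threshold from the study

# Hypothetical frame-by-frame creak probabilities from a creak detector.
creak_prob = np.array([0.001, 0.005, 0.031, 0.120, 0.018, 0.450, 0.002])

# Frames at or above threshold are flagged as irregular pitch periods.
ipp_frames = creak_prob >= CREAK_THRESHOLD

print(ipp_frames.astype(int))  # prints [0 0 1 1 0 1 0]
```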
D. D. Mehta, V. M. Espinoza, M. Zanartu, J. H. Van Stan, and R. E. Hillman, “The difference between first and second harmonic amplitudes correlates between glottal airflow and neck-surface accelerometer signals during phonation,” The Journal of the Acoustical Society of America, vol. 145, no. 5, pp. EL386–EL392, 2019.
Miniature high-bandwidth accelerometers on the anterior neck surface are used in laboratory and ambulatory settings to obtain vocal function measures. This study compared the widely applied L1–L2 measure (historically, H1–H2)—the difference between the log-magnitude of the first and second harmonics—computed from the glottal airflow waveform with L1–L2 derived from the raw neck-surface acceleration signal in 79 vocally healthy female speakers. Results showed a significant correlation (r = 0.72) between L1–L2 values estimated from both airflow and accelerometer signals, suggesting that raw accelerometer-based estimates of L1–L2 may be interpreted as reflecting glottal physiological parameters and voice quality attributes during phonation.
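The L1–L2 measure itself is straightforward to compute from any periodic signal: take the log-magnitude of the spectrum at the first and second harmonics and subtract. A self-contained sketch on a synthetic two-harmonic waveform, not the study's airflow or accelerometer processing:

```python
import numpy as np

fs = 8000               # sample rate (Hz)
f0 = 200.0              # fundamental frequency (Hz)
t = np.arange(fs) / fs  # 1 s of signal

# Synthetic voiced signal: first harmonic twice the amplitude of the second.
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

def harmonic_level_db(freq_hz):
    """Log-magnitude (dB) of the spectral sample nearest freq_hz."""
    idx = np.argmin(np.abs(freqs - freq_hz))
    return 20 * np.log10(spectrum[idx])

l1_l2 = harmonic_level_db(f0) - harmonic_level_db(2 * f0)
```

Because the first harmonic here has twice the amplitude of the second, the sketch recovers L1–L2 of approximately 6 dB.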
J. A. Whitfield, Z. Kriegel, A. M. Fullenkamp, and D. D. Mehta, “Effects of concurrent manual task performance on connected speech acoustics in individuals with Parkinson disease,” Journal of Speech, Language, and Hearing Research, vol. 62, no. 7, pp. 2099–2117, 2019.
Purpose: Prior investigations suggest that simultaneous performance of more than 1 motor-oriented task may exacerbate speech motor deficits in individuals with Parkinson disease (PD). The purpose of the current investigation was to examine the extent to which performing a low-demand manual task affected the connected speech in individuals with and without PD.
Method: Individuals with PD and neurologically healthy controls performed speech tasks (reading and extemporaneous speech tasks) and an oscillatory manual task (a counterclockwise circle-drawing task) in isolation (single-task condition) and concurrently (dual-task condition).
Results: Relative to speech task performance, no changes in speech acoustics were observed for either group when the low-demand motor task was performed with the concurrent reading tasks. Speakers with PD exhibited a significant decrease in pause duration between the single-task (speech only) and dual-task conditions for the extemporaneous speech task, whereas control participants did not exhibit changes in any speech production variable between the single- and dual-task conditions.
Conclusions: Overall, there were little to no changes in speech production when a low-demand oscillatory motor task was performed with concurrent reading. For the extemporaneous task, however, individuals with PD exhibited significant changes when the speech and manual tasks were performed concurrently, a pattern that was not observed for control speakers.
J. A. Whitfield and D. D. Mehta, “Examination of clear speech in Parkinson disease using passage-level vowel space metrics,” Journal of Speech, Language, and Hearing Research, vol. 62, no. 7, pp. 2082–2098, 2019.
Purpose: The purpose of the current study was to characterize clear speech production for speakers with and without Parkinson disease (PD) using several measures of working vowel space computed from frequently sampled formant frequency tracks.
Method: The first 2 formant frequencies were tracked for a reading passage that was produced using habitual and clear speaking styles by 15 speakers with PD and 15 healthy control speakers. Vowel space metrics were calculated from the distribution of frequently sampled formant frequency tracks, including vowel space hull area, articulatory–acoustic vowel space, and multiple vowel space density (VSD) measures based on different percentile contours of the formant density distribution.
Results: Both speaker groups exhibited significant increases in the articulatory–acoustic vowel space and VSD10, the area of the outermost (10th percentile) contour of the formant density distribution, from habitual to clear styles. These clarity-related vowel space increases were significantly smaller for speakers with PD than controls. Both groups also exhibited a significant increase in vowel space hull area; however, this metric was not sensitive to differences in the clear speech response between groups. Relative to healthy controls, speakers with PD exhibited a significantly smaller VSD90, the area of the most central (90th percentile), densely populated region of the formant space.
Conclusions: Using vowel space metrics calculated from formant traces of the reading passage, the current work suggests that speakers with PD do indeed reach the more peripheral regions of the vowel space during connected speech but spend a larger percentage of the time in more central regions of formant space than healthy speakers. Additionally, working vowel space metrics based on the distribution of formant data suggested that speakers with PD exhibited less of a clarity-related increase in formant space than controls, a trend that was not observed for perimeter-based measures of vowel space area.
J. P. Cortés, et al., “Ambulatory assessment of phonotraumatic vocal hyperfunction using glottal airflow measures estimated from neck-surface acceleration,” PLoS One, vol. 13, no. 12, pp. e0209017, 2018.
Phonotraumatic vocal hyperfunction (PVH) is associated with chronic misuse and/or abuse of voice that can result in lesions such as vocal fold nodules. The clinical aerodynamic assessment of vocal function has recently been shown to differentiate between patients with PVH and healthy controls and to provide meaningful insight into pathophysiological mechanisms associated with these disorders. However, current clinical assessment of PVH is incomplete because of its inability to objectively identify the type and extent of detrimental phonatory function associated with PVH during daily voice use. The current study sought to address this issue by incorporating, for the first time in a comprehensive ambulatory assessment, glottal airflow parameters estimated from a neck-mounted accelerometer and recorded to a smartphone-based voice monitor. We tested this approach on 48 patients with vocal fold nodules and 48 matched healthy-control subjects who each wore the voice monitor for a week. Seven glottal airflow features were estimated every 50 ms using an impedance-based inverse filtering scheme, and seven high-order summary statistics of each feature were computed every 5 minutes over voiced segments. Based on univariate hypothesis testing, eight glottal airflow summary statistics were found to be statistically different between patient and healthy-control groups. L1-regularized logistic regression for a supervised classification task yielded a mean (standard deviation) area under the ROC curve of 0.82 (0.25) and an accuracy of 0.83 (0.14). These results outperform the state-of-the-art classification for the same task and provide a new avenue to improve the assessment and treatment of hyperfunctional voice disorders.
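The feature pipeline described above (frame-level airflow features reduced to per-window summary statistics, then an L1-regularized classifier) can be sketched as follows. The data and the particular statistics are illustrative assumptions, not the paper's seven features and seven statistics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical frame-level glottal airflow feature (one value per 50 ms)
# for one 5-minute window: 300 s / 0.05 s = 6000 frames.
frames = rng.normal(0.2, 0.05, 6000)

# Summary statistics computed per 5-minute window (illustrative choices).
summary = {
    "mean": frames.mean(),
    "std": frames.std(),
    "skew": float(((frames - frames.mean()) ** 3).mean() / frames.std() ** 3),
    "p95": float(np.percentile(frames, 95)),
}

# L1-regularized logistic regression for the patient-vs-control task
# (synthetic feature matrix; 'liblinear' supports the L1 penalty).
X = rng.normal(size=(96, 7))
y = np.repeat([0, 1], 48)            # 48 patients, 48 matched controls
X[y == 1] += 0.8                     # inject a separable group difference
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
accuracy = clf.score(X, y)
```

The L1 penalty drives uninformative feature weights toward zero, which is the reason such a model suits a feature-selection setting with many candidate summary statistics.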
M. Brockmann-Bauser, J. E. Bohlender, and D. D. Mehta, “Acoustic perturbation measures improve with increasing vocal intensity in individuals with and without voice disorders,” Journal of Voice, vol. 32, no. 2, pp. 162-168, 2018.


Objectives: In vocally healthy children and adults, speaking voice loudness differences can significantly confound acoustic perturbation measurements. This study examines the effects of voice sound pressure level (SPL) on jitter, shimmer, and harmonics-to-noise ratio (HNR) in adults with voice disorders and a control group with normal vocal status.
Study Design: This is a matched case-control study.
Methods: We assessed 58 adult female voice patients matched according to approximate age and occupation with 58 vocally healthy women. Diagnoses included vocal fold nodules (n = 39, 67.2%), polyps (n = 5, 8.6%), and muscle tension dysphonia (n = 14, 24.1%). All participants sustained the vowel /a/ at soft, comfortable, and loud phonation levels. Acoustic voice SPL, jitter, shimmer, and HNR were computed using Praat. The effects of loudness condition, voice SPL, pathology, differential diagnosis, age, and professional voice use level on acoustic perturbation measures were assessed using linear mixed models and Wilcoxon signed rank tests.
Results: In both patient and normative control groups, increasing voice SPL correlated significantly (P < 0.001) with decreased jitter and shimmer, and increased HNR. Voice pathology and differential diagnosis were not linked to systematically higher jitter and shimmer. HNR levels, however, were statistically higher in the patient group than in the control group at comfortable phonation levels. Professional voice use level had a significant effect (P < 0.05) on jitter, shimmer, and HNR.
Conclusions: The clinical value of acoustic jitter, shimmer, and HNR may be limited if speaking voice SPL and professional voice use level effects are not controlled for. Future studies are warranted to investigate whether perturbation measures are useful clinical outcome metrics when controlling for these effects.
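Local jitter and shimmer, reported above from Praat, can be approximated directly from cycle-to-cycle period and peak-amplitude sequences; the following sketch uses hypothetical cycle measurements rather than the study's Praat analysis:

```python
import numpy as np

# Hypothetical cycle-to-cycle glottal periods (s) and peak amplitudes
# extracted from a sustained /a/.
periods = np.array([0.00502, 0.00498, 0.00505, 0.00495, 0.00501])
amplitudes = np.array([0.81, 0.79, 0.83, 0.78, 0.80])

# Jitter (local): mean absolute difference between consecutive periods,
# divided by the mean period (expressed in percent).
jitter_pct = 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# Shimmer (local): the same computation applied to peak amplitudes.
shimmer_pct = 100 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```

HNR would additionally require separating the periodic and aperiodic components of the signal, which Praat estimates internally from the autocorrelation.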

T. F. Quatieri, et al., “Multimodal biomarkers to discriminate cognitive state,” in The Role of Technology in Clinical Neuropsychology, R. L. Kane and T. D. Parsons, Eds. Oxford University Press, 2017, pp. 409–443.
O. Murton, et al., “Acoustic speech analysis of patients with decompensated heart failure: A pilot study,” The Journal of the Acoustical Society of America, vol. 142, no. 4, pp. EL401-EL407, 2017.
This pilot study used acoustic speech analysis to monitor patients with heart failure (HF), which is characterized by increased intracardiac filling pressures and peripheral edema. HF-related edema in the vocal folds and lungs is hypothesized to affect phonation and speech respiration. Acoustic measures of vocal perturbation and speech breathing characteristics were computed from sustained vowels and speech passages recorded daily from ten patients with HF undergoing inpatient diuretic treatment. After treatment, patients displayed a higher proportion of automatically identified creaky voice, increased fundamental frequency, and decreased cepstral peak prominence variation, suggesting that speech biomarkers can be early indicators of HF.
M. Borsky, D. D. Mehta, J. H. Van Stan, and J. Gudnason, “Modal and nonmodal voice quality classification using acoustic and electroglottographic features,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 12, pp. 2281-2291, 2017.
The goal of this study was to investigate the performance of different feature types for voice quality classification using multiple classifiers. The study compared the COVAREP feature set (glottal source features, frequency-warped cepstrum, and harmonic model features) against the mel-frequency cepstral coefficients (MFCCs) computed from the acoustic voice signal, the acoustic-based glottal inverse filtered (GIF) waveform, and the electroglottographic (EGG) waveform. Our hypothesis was that MFCCs can capture the perceived voice quality from any of these three voice signals. Experiments were carried out on recordings from 28 participants with normal vocal status who were prompted to sustain vowels with modal and nonmodal voice qualities. Recordings were rated by an expert listener using the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V), and the ratings were transformed into a dichotomous label (presence or absence) for the prompted voice qualities of modal voice, breathiness, strain, and roughness. The classification was done using support vector machine, random forest, deep neural network, and Gaussian mixture model classifiers, which were built as speaker independent using a leave-one-speaker-out strategy. The best classification accuracy of 79.97% was achieved for the full COVAREP set. The harmonic model features were the best performing subset, with 78.47% accuracy, and the static+dynamic MFCCs scored at 74.52%. A closer analysis showed that MFCC and dynamic MFCC features were able to classify modal, breathy, and strained voice quality dimensions from the acoustic and GIF waveforms. Reduced classification performance was exhibited by the EGG waveform.
J. H. Van Stan, S. W. Park, M. Jarvis, D. D. Mehta, R. E. Hillman, and D. Sternad, “Measuring vocal motor skill with a virtual voice-controlled slingshot,” The Journal of the Acoustical Society of America, vol. 142, no. 3, pp. 1199-1212, 2017.
Successful voice training (e.g., singing lessons) and vocal rehabilitation (e.g., therapy for a voice disorder) involve learning complex vocal behaviors. However, there are no metrics describing how humans learn new vocal skills or predicting how long the improved behavior will persist post-therapy. To develop measures capable of describing and predicting vocal motor learning, a theory-based paradigm from limb motor control inspired the development of a virtual task in which subjects throw projectiles at a target via modifications in vocal pitch and loudness. Ten subjects with healthy voices practiced this complex vocal task for five days. The many-to-one mapping between the execution variables (pitch and loudness) and the resulting target error was evaluated using an analysis that quantified distributional properties of variability: tolerance, noise, and covariation costs (TNC costs). Lag-1 autocorrelation (AC1) and the detrended-fluctuation-analysis scaling index (SCI) captured temporal aspects of variability. Vocal data replicated limb-based findings: TNC costs were positively correlated with error; AC1 and SCI were modulated in relation to the task's solution manifold. The data suggest that vocal and limb motor learning are similar in how the learner navigates the solution space. Future work calls for investigating the game's potential to improve voice disorder diagnosis and treatment.
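Of the temporal measures named above, lag-1 autocorrelation is the simplest to state: the correlation of a trial series with itself shifted by one trial. A sketch on a hypothetical trial-by-trial error series:

```python
import numpy as np

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation (AC1) of a sequence of trial outcomes."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

# Hypothetical trial-by-trial target errors across practice.
errors = np.array([4.1, 3.8, 3.9, 3.2, 3.0, 2.8, 2.9, 2.4, 2.2, 2.1])
ac1 = lag1_autocorrelation(errors)  # positive here: errors drift slowly downward
```

Values near zero indicate trial-to-trial independence, while strongly positive values indicate slow drift, which is why AC1 is informative about how a learner explores the solution space.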
V. S. McKenna, A. F. Llico, D. D. Mehta, J. S. Perkell, and C. E. Stepp, “Magnitude of neck-surface vibration as an estimate of subglottal pressure during modulations of vocal effort and intensity in healthy speakers,” Journal of Speech, Language, and Hearing Research, vol. 60, no. 12, pp. 3404-3416, 2017.


Purpose: This study examined the relationship between the magnitude of neck-surface vibration (NSVMag; transduced with an accelerometer) and intraoral estimates of subglottal pressure (P'sg) during variations in vocal effort at 3 intensity levels.
Method: Twelve vocally healthy adults produced strings of /pɑ/ syllables in 3 vocal intensity conditions, while increasing vocal effort during each condition. Measures were made of P'sg (estimated during stop-consonant closure), NSVMag (measured during the following vowel), sound pressure level, and respiratory kinematics. Mixed linear regression was used to analyze the relationship between NSVMag and P'sg with respect to total lung volume excursion, levels of lung volume initiation and termination, airflow, laryngeal resistance, and vocal efficiency across intensity conditions.
Results: NSVMag was significantly related to P'sg (p < .001), and there was a significant, although small, interaction between NSVMag and intensity condition. Total lung excursion was the only additional variable contributing to predicting the NSVMag–P'sg relationship.
Conclusions: NSVMag closely reflects P'sg during variations of vocal effort; however, the relationship changes across different intensities in some individuals. Future research should explore additional NSV-based measures (e.g., glottal airflow features) to improve estimation accuracy during voice production.

Y.-A. S. Lien, et al., “Validation of an algorithm for semi-automated estimation of voice relative fundamental frequency,” Annals of Otology, Rhinology, and Laryngology, vol. 126, no. 10, pp. 712-716, 2017.


Objectives: Relative fundamental frequency (RFF) has shown promise as an acoustic measure of voice, but the subjective and time-consuming nature of its manual estimation has made clinical translation infeasible. Here, a faster, more objective algorithm for RFF estimation is evaluated in a large and diverse sample of individuals with and without voice disorders.
Methods: Acoustic recordings were collected from 154 individuals with voice disorders and 36 age- and sex-matched controls with typical voices. These recordings were split into training and 2 testing sets. Using an algorithm tuned to the training set, semi-automated RFF estimates in the testing sets were compared to manual RFF estimates derived from 3 trained technicians.
Results: The semi-automated RFF estimations were highly correlated (r = 0.82–0.91) with the manual RFF estimates.
Conclusions: Fast and more objective estimation of RFF makes large-scale RFF analysis feasible. This algorithm allows for future work to optimize RFF measures and expand their potential for clinical voice assessment.
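RFF expresses the fundamental frequency of vocal cycles adjacent to a voiceless consonant in semitones relative to a steady-state reference cycle. A minimal sketch of that normalization, with hypothetical cycle f0 values; this is not the validated semi-automated algorithm:

```python
import numpy as np

def rff_semitones(cycle_f0_hz, reference_f0_hz):
    """Express cycle f0 values in semitones relative to a reference cycle."""
    return 12 * np.log2(np.asarray(cycle_f0_hz) / reference_f0_hz)

# Hypothetical f0 (Hz) of the last 3 voicing cycles before a voiceless
# consonant; the earliest (steady-state) cycle serves as the reference.
offset_cycles = [200.0, 204.0, 212.0]
rff = rff_semitones(offset_cycles, reference_f0_hz=200.0)
# The reference cycle is 0 ST by construction; later cycles drift upward.
```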