In Press
V. M. Espinoza, M. Zañartu, J. H. Van Stan, D. D. Mehta, and R. E. Hillman, “Glottal aerodynamic measures in adult females with phonotraumatic and non-phonotraumatic vocal hyperfunction,” Journal of Speech, Language, and Hearing Research, In Press.

Purpose To determine the validity of preliminary reports showing that glottal aerodynamic measures can identify pathophysiological phonatory mechanisms for phonotraumatic and non-phonotraumatic vocal hyperfunction that are each distinctly different from normal vocal function.


Method Glottal aerodynamic measures (estimates of subglottal air pressure, peak-to-peak airflow, maximum flow declination rate, and open quotient) were obtained non-invasively using a pneumotachograph mask with an intra-oral pressure catheter in 16 adult females with organic vocal fold lesions, 16 adult females with muscle tension dysphonia, and two associated matched control groups with normal voices. Subjects produced /pae/ syllable strings from which glottal airflow was estimated using inverse filtering during /ae/ vowels, and subglottal pressure was estimated during /p/ closures. All measures were normalized for sound pressure level (SPL) and statistically tested for differences between patient and control groups.
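The abstract does not specify how the SPL normalization was performed; as a purely illustrative sketch (the regression approach and all numbers below are assumptions, not the authors' procedure), one way to remove a measure's linear dependence on SPL is to regress it on SPL and keep the residuals:

```python
def spl_normalize(values, spl):
    """Remove the linear SPL dependence from a measure via ordinary
    least-squares regression, returning residuals. Hypothetical
    illustration; the study's exact normalization may differ."""
    n = len(values)
    mean_x = sum(spl) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in spl)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(spl, values))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return [y - (intercept + slope * x) for x, y in zip(spl, values)]

# Invented example: maximum flow declination rate (MFDR) vs. SPL
mfdr = [250.0, 310.0, 420.0, 500.0]   # arbitrary illustrative units
spl = [70.0, 75.0, 80.0, 85.0]        # dB SPL
print(spl_normalize(mfdr, spl))
```

Group comparisons can then be run on the residuals, so that a value is judged relative to what is typical for that SPL.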


Results All SPL-normalized measures were significantly lower in the phonotraumatic group than in its control group. For the non-phonotraumatic group, only SPL-normalized subglottal pressure and open quotient were significantly lower than in its control group.


Conclusions The findings confirm previous hypotheses and preliminary reports indicating that SPL-normalized estimates of glottal aerodynamic measures can be used to describe the different pathophysiological phonatory mechanisms associated with phonotraumatic and non-phonotraumatic vocal hyperfunction.
M. Brockmann-Bauser, J. E. Bohlender, and D. D. Mehta, “Acoustic perturbation measures improve with increasing vocal intensity in individuals with and without voice disorders,” Journal of Voice, In Press.


Objectives

In vocally healthy children and adults, speaking voice loudness differences can significantly confound acoustic perturbation measurements. This study examines the effects of voice sound pressure level (SPL) on jitter, shimmer, and harmonics-to-noise ratio (HNR) in adults with voice disorders and in a control group with normal vocal status.

Study Design

This is a matched case-control study.


Methods

We assessed 58 adult female voice patients, matched for approximate age and occupation with 58 vocally healthy women. Diagnoses included vocal fold nodules (n = 39, 67.2%), polyps (n = 5, 8.6%), and muscle tension dysphonia (n = 14, 24.1%). All participants sustained the vowel /a/ at soft, comfortable, and loud phonation levels. Voice SPL, jitter, shimmer, and HNR were computed using Praat. The effects of loudness condition, voice SPL, pathology, differential diagnosis, age, and professional voice use level on acoustic perturbation measures were assessed using linear mixed models and Wilcoxon signed rank tests.
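The study computed these measures with Praat; the sketch below shows the standard textbook definitions of local jitter, local shimmer, and autocorrelation-based HNR on hand-picked cycle data (the period and amplitude values are illustrative, and Praat's implementations include refinements omitted here):

```python
import math

def local_jitter(periods):
    """Local jitter (%): mean absolute difference of consecutive
    periods divided by the mean period (simplified sketch of the
    Praat-style definition, not Praat itself)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amps):
    """Local shimmer (%): the same formula applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

def hnr_db(r_max):
    """HNR in dB from the peak r_max (0 < r_max < 1) of the
    normalized autocorrelation of a voiced frame."""
    return 10.0 * math.log10(r_max / (1.0 - r_max))

periods = [0.0050, 0.0051, 0.0049, 0.0050, 0.0052]  # seconds per cycle
amps = [0.80, 0.82, 0.79, 0.81, 0.80]               # cycle peak amplitudes
print(local_jitter(periods), local_shimmer(amps), hnr_db(0.99))
```

Note how both perturbation measures are ratios: scaling the whole signal louder does not change them directly, which is why the SPL effects reported here arise from physiology rather than arithmetic.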


Results

In both the patient and control groups, increasing voice SPL correlated significantly (P < 0.001) with decreased jitter and shimmer and increased HNR. Voice pathology and differential diagnosis were not linked to systematically higher jitter and shimmer. HNR, however, was significantly higher in the patient group than in the control group at comfortable phonation levels. Professional voice use level had a significant effect (P < 0.05) on jitter, shimmer, and HNR.


Conclusions

The clinical value of acoustic jitter, shimmer, and HNR may be limited if the effects of speaking voice SPL and professional voice use level are not controlled for. Future studies are warranted to investigate whether perturbation measures are useful clinical outcome metrics when controlling for these effects.

D. D. Mehta, P. C. Chwalek, T. F. Quatieri, and L. J. Brattain, “Wireless neck-surface accelerometer and microphone on flex circuit with application to noise-robust monitoring of Lombard speech,” in Interspeech, 2017.
Ambulatory monitoring of real-world voice characteristics and behavior has the potential to provide important assessments of voice and speech disorders as well as psychological and emotional state. In this paper, we report on the development of a novel lightweight, wireless voice monitor that synchronously records dual-channel data from an acoustic microphone and a neck-surface accelerometer embedded on a flex circuit. Lombard speech effects were investigated in pilot data from four adult speakers with normal vocal function who read a phonetically balanced paragraph in the presence of different ambient acoustic noise levels. Whereas the signal-to-noise ratio (SNR) of the microphone signal decreased with increasing ambient noise level, the SNR of the accelerometer signal remained high. Lombard speech properties were thus robustly computed from the accelerometer signal and observed in all four speakers, who exhibited increases in average estimates of sound pressure level (+2.3 dB), fundamental frequency (+21.4 Hz), and cepstral peak prominence (+1.3 dB) from quiet to loud ambient conditions. Future work calls for ambulatory data collection in naturalistic environments, where the microphone acts as a sound level meter and the accelerometer functions as a noise-robust voicing sensor to assess voice disorders, neurological conditions, and cognitive load.
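The channel comparison above can be illustrated with a simple power-ratio SNR estimate (the toy samples below are invented; the paper's exact SNR procedure is not described here):

```python
import math

def snr_db(signal, noise):
    """SNR in dB from a signal segment and a noise-only segment,
    using mean power of each (simplified illustration)."""
    p_sig = sum(x * x for x in signal) / len(signal)
    p_noise = sum(x * x for x in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)

# Toy example: the accelerometer barely picks up airborne ambient
# noise, so its noise floor stays low while the microphone's rises.
speech = [0.5, -0.4, 0.6, -0.5]
mic_noise = [0.2, -0.2, 0.2, -0.2]      # loud ambient condition
acc_noise = [0.02, -0.02, 0.02, -0.02]  # neck vibration largely unaffected
print(snr_db(speech, mic_noise), snr_db(speech, acc_noise))
```

With a noise amplitude ten times smaller on the accelerometer channel, its SNR advantage here is exactly 20 dB, mirroring why Lombard features stayed measurable from the accelerometer as ambient noise rose.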
V. M. Espinoza, D. D. Mehta, J. H. Van Stan, R. E. Hillman, and M. Zañartu, “Uncertainty of glottal airflow estimation during continuous speech using impedance-based inverse filtering of the neck-surface acceleration signal,” Proceedings of the Acoustical Society of America. 2017.
D. Mehta, J. B. Kobler, G. Maguluri, J. Park, E. Chang, and N. Iftimia, “Integrating optical coherence tomography with laryngeal videostroboscopy,” Proceedings of the Acoustical Society of America. 2017.

Special Session: New trends in imaging for speech production (Speech Communication Technical Committee)

During clinical voice assessment, laryngologists and speech-language pathologists rely heavily on laryngeal endoscopy with videostroboscopy to evaluate pathology and dysfunction of the vocal folds. The cost effectiveness, ease of use, and synchronized audio and visual feedback provided by videostroboscopic assessment serve to maintain its predominant clinical role in laryngeal imaging. However, significant drawbacks include only two-dimensional spatial imaging and the lack of subsurface morphological information. A novel endoscope will be presented that integrates optical coherence tomography that is spatially and temporally co-registered with laryngeal videoendoscopic technology through a common path probe. Optical coherence tomography is a non-contact, micron-resolution imaging technology that acts as a visual ultrasound that employs a scanning laser to measure reflectance properties at air-tissue and tissue-tissue boundaries. Results obtained from excised larynx experiments demonstrate enhanced visualization of three-dimensional vocal fold tissue kinematics and subsurface morphological changes during phonation. Real-time, calibrated three-dimensional imaging of the mucosal wave and subsurface layered microstructure of vocal fold tissue is expected to benefit in-office evaluation of benign and malignant tissue lesions. Future work calls for the in vivo evaluation of the technology in patients before and after surgical management of these types of lesions.

D. D. Mehta, J. H. Van Stan, M. L. V. Masson, M. Maffei, and R. E. Hillman, “Relating ambulatory voice measures with self-ratings of vocal fatigue in individuals with phonotraumatic vocal hyperfunction,” Proceedings of the Acoustical Society of America. 2017.
Y.-R. Chien, D. D. Mehta, Jón Guðnason, M. Zañartu, and T. F. Quatieri, “Evaluation of glottal inverse filtering algorithms using a physiologically based articulatory speech synthesizer,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 8, pp. 1718-1730, 2017.
Glottal inverse filtering aims to estimate the glottal airflow signal from a speech signal for applications such as speaker recognition and clinical voice assessment. Nonetheless, evaluation of inverse filtering algorithms has been challenging due to the practical difficulties of directly measuring glottal airflow. Moreover, the performance of many methods is known to degrade in voice conditions of great interest, such as breathiness, high pitch, soft voice, and running speech. This paper presents a comprehensive, objective, and comparative evaluation of state-of-the-art inverse filtering algorithms that takes advantage of speech and glottal airflow signals generated by a physiological speech synthesizer. The synthesizer provides a physics-based simulation of the voice production process and thus an adequate test bed for revealing the temporal and spectral performance characteristics of each algorithm. The synthetic data include continuous speech utterances and sustained vowels produced with multiple voice qualities (pressed, slightly pressed, modal, slightly breathy, and breathy), fundamental frequencies, and subglottal pressures to simulate the natural variations in real speech. In evaluating the accuracy of a glottal flow estimate, multiple error measures are used, including an error in the estimated signal that measures overall waveform deviation, as well as an error in each of several clinically relevant features extracted from the glottal flow estimate. Waveform errors calculated from glottal flow estimation experiments exhibited mean values of around 30% of the amplitude of the true glottal flow derivative for sustained vowels and around 40% for continuous speech. Closed-phase approaches showed remarkable stability across different voice qualities and subglottal pressures. The algorithms of choice, as suggested by significance tests, are closed-phase covariance analysis for the analysis of sustained vowels and sparse linear prediction for the analysis of continuous speech. Results of data subset analysis suggest that analysis of close rounded vowels is an additional challenge in glottal flow estimation.
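As a rough illustration of a normalized waveform error (the exact error definition used in the paper may differ), one can express the RMS deviation of a glottal flow estimate as a percentage of the peak-to-peak amplitude of the true signal:

```python
def waveform_error_pct(estimate, truth):
    """RMS deviation between an estimated and a true waveform,
    expressed as a percentage of the true signal's peak-to-peak
    amplitude. One plausible reading of the normalized waveform
    error in the abstract; illustrative only."""
    n = len(truth)
    rms = (sum((e - t) ** 2 for e, t in zip(estimate, truth)) / n) ** 0.5
    return 100.0 * rms / (max(truth) - min(truth))

# Invented four-sample "derivative" waveforms for illustration
truth = [0.0, 1.0, 0.0, -1.0]
estimate = [0.1, 0.9, 0.1, -0.9]
print(waveform_error_pct(estimate, truth))
```

Under this kind of normalization, the reported 30-40% mean errors describe deviation relative to the true flow derivative's amplitude rather than an absolute airflow unit.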
E. S. Heller Murray, et al., “Relative fundamental frequency distinguishes between phonotraumatic and non-phonotraumatic vocal hyperfunction,” Journal of Speech, Language, and Hearing Research, vol. 60, no. 6, pp. 1507–1515, 2017.

Purpose The purpose of this article is to examine the ability of an acoustic measure, relative fundamental frequency (RFF), to distinguish between two subtypes of vocal hyperfunction (VH): phonotraumatic (PVH) and non-phonotraumatic (NPVH).

Method RFF values were compared among control individuals with typical voices (N = 49), individuals with PVH (N = 54), and individuals with NPVH (N = 35).

Results Offset Cycle 10 RFF differed significantly among all 3 groups, with values progressively decreasing for controls, individuals with NPVH, and individuals with PVH. Individuals with PVH also had lower RFF values for Offset Cycles 8 and 9 relative to the other 2 groups and lower RFF values for Offset Cycle 7 relative to controls. There was also a trend toward lower Onset Cycle 1 RFF values for the PVH group compared with the NPVH group.

Conclusions RFF values were significantly different between controls and individuals with VH and also between the two subtypes of VH. This study adds further support to the notion that the differences between these two subsets of VH may be functional as well as structural.
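RFF expresses the fundamental frequency of individual voicing cycles near a voicing offset or onset in semitones relative to the steady-state vowel f0. A minimal sketch of that conversion (cycle selection and per-cycle f0 extraction, the hard parts in practice, are omitted; the frequencies below are invented):

```python
import math

def rff_semitones(cycle_f0, steady_f0):
    """Relative fundamental frequency of one voicing cycle, in
    semitones relative to the steady-state vowel f0 (standard
    semitone conversion; simplified relative to the RFF literature)."""
    return 12.0 * math.log2(cycle_f0 / steady_f0)

# Offset Cycle 10 is the last full cycle before the voiceless
# consonant; an f0 drop below steady state yields a negative RFF.
print(rff_semitones(190.0, 200.0))
```

Lower (more negative) offset values, as reported for the PVH group, correspond to a steeper f0 drop entering the voiceless consonant.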

J. H. Van Stan, D. D. Mehta, and R. E. Hillman, “Recent innovations in voice assessment expected to impact the clinical management of voice disorders,” Perspectives of the ASHA Special Interest Groups, vol. 2, no. SIG 3, pp. 4-13, 2017.

This article provides a summary of some recent innovations in voice assessment expected to have an impact in the next 5–10 years on how patients with voice disorders are clinically managed by speech-language pathologists. Specific innovations discussed are in the areas of laryngeal imaging, ambulatory voice monitoring, and “big data” analysis using machine learning to produce new metrics for vocal health. Also discussed is the potential for using voice analysis to detect and monitor other health conditions.

M. Borsky, M. Cocude, D. D. Mehta, M. Zañartu, and J. Gudnason, “Classification of voice modes using neck-surface accelerometer data,” in International Conference on Acoustics, Speech, and Signal Processing, 2017.


This study analyzes signals recorded with a neck-surface accelerometer from subjects producing speech in different voice modes. The purpose is to explore whether the recorded waveforms can capture glottal vibratory patterns that can be related to the movement of the vocal folds and thus to voice quality. Because the accelerometer waveform does not contain the supraglottal resonances, the proposed method is suitable for real-life voice quality assessment and monitoring without breaching patient privacy. Experiments with a Gaussian mixture model classifier demonstrate that different voice qualities produce distinctly different accelerometer waveforms. Using a speaker-dependent classifier, the system achieved 80.2% frame-level and 89.5% utterance-level accuracy for classifying among modal, breathy, pressed, and rough voice modes. Finally, the article presents characteristic waveforms for each modality and discusses their attributes.
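The frame-level classification with an utterance-level decision can be sketched as follows, using a single one-dimensional Gaussian per voice mode in place of a full Gaussian mixture and a majority vote over frames (all feature values and model parameters below are invented for illustration):

```python
import math

def gauss_loglik(x, mean, var):
    """Log-likelihood of a 1-D feature under a single Gaussian
    (a one-component stand-in for a Gaussian mixture model)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify_frames(frames, models):
    """Assign each frame the voice mode with the highest likelihood."""
    return [max(models, key=lambda m: gauss_loglik(x, *models[m]))
            for x in frames]

def classify_utterance(frames, models):
    """Utterance-level decision by majority vote over frame labels."""
    labels = classify_frames(frames, models)
    return max(set(labels), key=labels.count)

# Hypothetical per-mode (mean, variance) of one accelerometer feature
models = {"modal": (0.0, 1.0), "breathy": (3.0, 1.0), "pressed": (-3.0, 1.0)}
frames = [0.2, -0.1, 2.8, 0.3, 0.1]
print(classify_utterance(frames, models))
```

The vote explains why utterance-level accuracy (89.5%) exceeds frame-level accuracy (80.2%): occasional misclassified frames are outvoted within an utterance.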


J. H. Van Stan, D. D. Mehta, D. Sternad, R. Petit, and R. E. Hillman, “Ambulatory voice biofeedback: Relative frequency and summary feedback effects on performance and retention of reduced vocal intensity in the daily lives of participants with normal voices,” Journal of Speech, Language, and Hearing Research, vol. 60, no. 4, pp. 853–864, 2017.


Purpose Ambulatory voice biofeedback has the potential to significantly improve voice therapy effectiveness by targeting carryover of desired behaviors outside the therapy session (i.e., retention). This study applies motor learning concepts (reduced frequency and delayed, summary feedback) that demonstrate increased retention to ambulatory voice monitoring for training nurses to talk softer during work hours.

Method Forty-eight nurses with normal voices wore the Voice Health Monitor (Mehta, Zañartu, Feng, Cheyne, & Hillman, 2012) for 6 days: 3 baseline days, 1 biofeedback day, 1 short-term retention day, and 1 long-term retention day. Participants were block-randomized into 3 different biofeedback groups: 100%, 25%, and Summary. Performance was measured in terms of compliance time below a participant-specific vocal intensity threshold.

Results All participants exhibited a significant increase in compliance time (Cohen's d = 4.5) during biofeedback days compared with baseline days. The Summary feedback group exhibited statistically smaller performance reduction during both short-term (d = 1.14) and long-term (d = 1.04) retention days compared with the 100% feedback group.
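Cohen's d, the effect size reported above, is the difference in group means divided by the pooled standard deviation. A minimal sketch with invented compliance-time values (not the study's data):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation (equal-variance
    pooling assumed; illustrative values, not study data)."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Hypothetical compliance times (% of day below the intensity threshold)
biofeedback = [92.0, 95.0, 97.0, 94.0]
baseline = [60.0, 64.0, 58.0, 62.0]
print(cohens_d(biofeedback, baseline))
```

Values above roughly 0.8 are conventionally called large effects, so the reported d = 4.5 reflects a very large behavioral shift during biofeedback.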

Conclusions These findings suggest that modifications in feedback frequency and timing affect retention of a modified vocal behavior in daily life. Future work calls for studying the potential beneficial impact of ambulatory voice biofeedback in participants with behaviorally based voice disorders.


J. H. Van Stan, et al., “Integration of motor learning principles into real-time ambulatory voice biofeedback and example implementation via a clinical case study with vocal fold nodules,” American Journal of Speech-Language Pathology, vol. 26, no. 1, pp. 1-10, 2017.


Purpose Ambulatory voice biofeedback (AVB) has the potential to significantly improve voice therapy effectiveness by targeting one of the most challenging aspects of rehabilitation: carryover of desired behaviors outside of the therapy session. Although initial evidence indicates that AVB can alter vocal behavior in daily life, retention of the new behavior after biofeedback has not been demonstrated. Motor learning studies repeatedly have shown retention-related benefits when reducing feedback frequency or providing summary statistics. Therefore, novel AVB settings that are based on these concepts are developed and implemented.

Method The underlying theoretical framework and resultant implementation of innovative AVB settings on a smartphone-based voice monitor are described. A clinical case study demonstrates the functionality of the new relative frequency feedback capabilities.

Results With new technical capabilities, 2 aspects of feedback are directly modifiable for AVB: relative frequency and summary feedback. Although reduced-frequency AVB was associated with improved carryover of a therapeutic vocal behavior (i.e., reduced vocal intensity) in a patient post-excision of vocal fold nodules, causation cannot be assumed.

Conclusions Timing and frequency of AVB schedules can be manipulated to empirically assess generalization of motor learning principles to vocal behavior modification and test the clinical effectiveness of AVB with various feedback schedules.


M. Borsky, D. D. Mehta, J. P. Gudjohnsen, and J. Gudnason, “Classification of voice modality using electroglottogram waveforms,” in INTERSPEECH, 2016.


It is well established that improper function of the vocal folds can result in perceptually distorted speech that is typically identified with various speech pathologies or even some neurological diseases. As a consequence, researchers have focused on finding quantitative voice characteristics to objectively assess and automatically detect non-modal voice types. The bulk of this research has focused on classifying voice modality using features extracted from the speech signal. This paper proposes a different approach that analyzes the signal characteristics of the electroglottogram (EGG) waveform. The core idea is that modal and different kinds of non-modal voice types produce EGG signals with distinct spectral/cepstral characteristics; as a consequence, they can be distinguished from each other using standard cepstral-based features and a simple multivariate Gaussian mixture model. The practical usability of this approach was verified in the task of classifying among modal, breathy, rough, pressed, and soft voice types. We achieved 83% frame-level accuracy and 91% utterance-level accuracy by training a speaker-dependent system.
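Cepstral features of the kind used here build on the real cepstrum, the inverse transform of the log magnitude spectrum of a frame. A naive DFT-based sketch (MFCC-style filterbanks and liftering used in practice are omitted, and the frame is an invented toy signal):

```python
import cmath
import math

def real_cepstrum(frame):
    """Real cepstrum of a short frame via a naive DFT: the inverse
    DFT of the log magnitude spectrum. O(n^2), for illustration
    only; real systems use an FFT."""
    n = len(frame)
    spec = [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    logmag = [math.log(abs(s) + 1e-12) for s in spec]  # floor avoids log(0)
    return [sum(logmag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Toy 8-sample frame: one sinusoid period pattern
frame = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
print(real_cepstrum(frame)[:3])
```

Low-order cepstral coefficients summarize the spectral envelope shape, which is what lets a Gaussian mixture model separate the voice types.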


M. Maffei, J. H. Van Stan, R. E. Hillman, and D. D. Mehta, “Correlating ambulatory voice measures with vocal fatigue self-ratings in individuals with MTD and normal controls,” Proceedings of the Annual Convention of the American Speech-Language-Hearing Association, 2016.
C. E. Stepp, M. Zañartu, D. D. Mehta, and R. E. Hillman, “Hyperfunctional voice disorders: Current results, clinical implications, and future directions of a multidisciplinary research program,” Proceedings of the Annual Convention of the American Speech-Language-Hearing Association, 2016.
V. McKenna, A. Llico, D. Mehta, and C. Stepp, “Neck-surface acceleration as an estimate of subglottal pressure during modulated vocal effort in healthy speakers,” Proceedings of the Annual Convention of the American Speech-Language-Hearing Association. 2016.
D. D. Mehta, H. A. Cheyne II, A. Wehner, J. T. Heaton, and R. E. Hillman, “Accuracy of self-reported estimates of daily voice use in adults with normal and disordered voices,” American Journal of Speech-Language Pathology, vol. 25, no. 4, pp. 576-589, 2016.
M. Brockmann-Bauser, J. E. Bohlender, and D. D. Mehta, “Acoustic perturbation measures improve with increasing vocal intensity in healthy and pathological voices,” Proceedings of the Voice Foundation Symposium, 2016.
N. Iftimia, G. Maguluri, E. Chang, J. Park, J. Kobler, and D. Mehta, “Dynamic vocal fold imaging with combined optical coherence tomography/high-speed video endoscopy,” Proceedings of the 10th International Conference on Voice Physiology and Biomechanics, pp. 1-2, 2016.
A. S. Fryd, J. H. Van Stan, R. E. Hillman, and D. D. Mehta, “Estimating subglottal pressure from neck-surface acceleration during normal voice production,” Journal of Speech, Language, and Hearing Research, vol. 59, no. 6, pp. 1335-1345, 2016.

Purpose The purpose of this study was to evaluate the potential for estimating subglottal air pressure using a neck-surface accelerometer and to compare the accuracy of predicting subglottal air pressure relative to predicting acoustic sound pressure level (SPL).

Method Indirect estimates of subglottal pressure (Psg′) were obtained from 10 vocally healthy speakers during loud-to-soft repetitions of 3 different /p/–vowel gestures (/pa/, /pi/, /pu/) at 3 pitch levels in the modal register. Intraoral air pressure, neck-surface acceleration, and radiated acoustic pressure were recorded, and the root-mean-square amplitude of the acceleration signal was correlated with Psg′ and SPL.

Results The coefficient of determination between accelerometer level and Psg′ was high when data were pooled from all vowel and pitch contexts for each participant (r² = .68–.93). These relationships were stronger than the corresponding relationships between accelerometer level and SPL (r² = .46–.81). The average 95% prediction interval for estimating Psg′ from accelerometer level was ±2.53 cm H2O, ranging from ±1.70 to ±3.74 cm H2O across participants.

Conclusions Accelerometer signal amplitude correlated more strongly with Psg′ than with SPL. Future work is warranted to investigate the robustness of the relationship in nonmodal voice qualities, individuals with voice disorders, and accelerometer-based ambulatory monitoring of subglottal pressure.
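The coefficient of determination reported above comes from a simple least-squares fit of one variable on the other. A sketch with invented accelerometer-level and Psg′ values (the prediction-interval computation is omitted for brevity):

```python
def r_squared(x, y):
    """Coefficient of determination (r^2) for a simple linear
    least-squares fit of y on x. Illustrative values below are
    invented, not study data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy * sxy / (sxx * syy)

# Toy accelerometer RMS levels (dB) vs. estimated Psg' (cm H2O)
acc_level = [70.0, 75.0, 80.0, 85.0, 90.0]
psg = [4.1, 5.9, 8.2, 9.8, 12.1]
print(r_squared(acc_level, psg))
```

An r² near 1 means accelerometer level alone explains nearly all of the variance in Psg′, which is the basis for the ambulatory-monitoring application proposed in the conclusions.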