Currently there is no validated objective measure of pain. Recent neuroimaging studies have explored the feasibility of using functional near-infrared spectroscopy (fNIRS) to measure alterations in brain function in evoked and ongoing pain. In this study, we applied multi-task machine learning methods to derive a practical pain-detection algorithm from fNIRS signals in healthy volunteers exposed to a painful stimulus. Specifically, we employed multi-task multi-kernel learning to account for the inter-subject variability in pain response. Our results support the use of fNIRS and machine learning techniques in developing objective pain detection methods, and also highlight the importance of adopting personalized analysis in the process.
This paper highlights lessons learned from a four-year ambulatory study, developed to measure Sleep, Networks, Affect, Performance, Stress, and Health using Objective Techniques (SNAPSHOT), which was run in seven cohorts of college students (N=321), collecting continuous wearable and mobile phone data, typically for a month each. This paper provides an overview of the study's objectives, the challenges faced, and some key findings focused on detecting sleep patterns and on detecting and forecasting mood changes.
Pain is usually measured by patients' self-report, which requires patient collaboration. Hence, an objective automatic pain detection method would be useful in many clinical applications and patient populations. Previous studies have explored the feasibility of using physiological autonomic signals to detect the presence of pain. In this study, we focused on continuously estimating experimental heat pain intensity with high temporal resolution from autonomic signals. Specifically, we employed skin conductance deconvolution and point-process heart rate variability analysis to continuously evaluate time-varying autonomic parameters, and present a regression algorithm based on recurrent neural networks.
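The deconvolution step can be sketched by modeling the skin conductance signal as the convolution of a sparse sudomotor driver with a biexponential response kernel, then inverting it in the frequency domain. This is a minimal illustrative sketch, not the study's implementation: the kernel time constants, sampling rate, and regularization weight are assumptions.

```python
import numpy as np

def scr_kernel(t, tau1=0.75, tau2=2.0):
    """Biexponential impulse response for a skin conductance
    response (time constants here are illustrative)."""
    h = np.exp(-t / tau2) - np.exp(-t / tau1)
    return h / h.max()

def deconvolve_driver(sc, fs, reg=1e-2):
    """Recover the sudomotor driver from a skin conductance signal
    by regularized division in the frequency domain."""
    n = len(sc)
    t = np.arange(n) / fs
    H = np.fft.rfft(scr_kernel(t))
    S = np.fft.rfft(sc)
    # Tikhonov-style regularization avoids blow-up where |H| is small
    D = S * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(D, n)

# Synthetic check: a single impulse convolved with the kernel
fs, n = 4.0, 200                  # 4 Hz sampling, typical for EDA
driver = np.zeros(n)
driver[40] = 1.0                  # one sudomotor burst
t = np.arange(n) / fs
sc = np.convolve(driver, scr_kernel(t))[:n]
est = deconvolve_driver(sc, fs)
print(int(np.argmax(est)))        # peak of recovered driver near sample 40
```

Because the regularized inverse filter is zero-phase, the recovered driver peaks at the burst location even though its shape is slightly smoothed.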
Pain is usually measured by the patient's self-report. While self-report is viewed as the gold standard of pain assessment, this approach fails when patients cannot communicate pain intensity or lack normal mental abilities. Here, we present a method for the automatic estimation of pain intensity from skin conductance data, and test it on a dataset containing physiological responses to nociceptive heat pain.
Pain is a subjective experience commonly measured through the patient's self-report. While there exist numerous situations in which automatic pain estimation methods may be preferred, inter-subject variability in physiological and behavioral pain responses has hindered the development of such methods. In this work, we address this problem by introducing a novel personalized multi-task machine learning method for pain estimation based on individual physiological and behavioral pain response profiles, and show its advantages on a dataset containing multimodal responses to nociceptive heat pain.
The support vector machine (SVM) is a widely used machine learning tool for classification based on statistical learning theory. Given a set of training data, the SVM finds a hyperplane that separates two different classes of data points by the largest margin. While the standard form of the SVM uses $L_2$-norm regularization, other regularization approaches are particularly attractive for biomedical datasets where, for example, sparsity and interpretability of the classifier's coefficient values are highly desired features. Therefore, in this paper we consider different types of regularization for SVMs, and explore them on both synthetic and real biomedical datasets.
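The effect of the regularizer on the learned coefficients can be illustrated with a minimal subgradient-descent sketch of hinge-loss training under either an $L_2$- or $L_1$-norm penalty. The solver, step sizes, penalty weights, and toy data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def train_linear_svm(X, y, penalty="l2", lam=0.05, lr=0.01, epochs=200, seed=0):
    """Minimal stochastic subgradient descent on the hinge loss with
    either L2- or L1-norm regularization (a sketch, not a production
    solver)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w)
            grad = -y[i] * X[i] if margin < 1 else np.zeros_like(w)
            if penalty == "l2":
                grad = grad + lam * w            # gradient of (lam/2)*||w||^2
            else:
                grad = grad + lam * np.sign(w)   # subgradient of lam*||w||_1
            w -= lr * grad
    return w

# Toy data: only the first 2 of 10 features are informative
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
w_l2 = train_linear_svm(X, y, penalty="l2")
w_l1 = train_linear_svm(X, y, penalty="l1")
acc_l2 = float(np.mean(np.sign(X @ w_l2) == y))
acc_l1 = float(np.mean(np.sign(X @ w_l1) == y))
print(round(acc_l2, 2), round(acc_l1, 2))
```

The $L_1$ subgradient applies a constant shrinkage toward zero, which is why it tends to suppress the coefficients of uninformative features more aggressively than the proportional $L_2$ shrinkage.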
Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce the effort of pain score acquisition in large-scale studies by estimating pain directly from participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into personalized Hidden Conditional Random Fields (HCRFs), which estimate the VAS reported by each person. Personalization of the model is performed using a newly introduced facial expressiveness score that is unique to each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized approach over a traditional non-personalized approach on a benchmark dataset for pain analysis from face images.
Patients' self-report is the most common method for pain assessment. However, while self-reports are often convenient and useful, they have a number of limitations, including being highly subjective, inconsistent, and cumbersome to obtain in long-term large-scale studies. Furthermore, they cannot be obtained reliably from the mentally impaired and other vulnerable populations, such as children and elderly people. Therefore, there is an ever-growing need for reliable automatic pain assessment methods as the means for detecting pain, and for evaluating and comparing the effectiveness of different pain reduction strategies. In this work, we present a novel pain intensity estimation method that uses multimodal data from a wrist-worn sensor. Our algorithm combines different autonomic activity metrics derived from electrodermal activity and plethysmogram wrist signals to estimate the intensity of nociceptive stimuli. We verify the robustness of our algorithm in a single-center, comparative, randomized, crossover clinical study that evaluates the impact of different injection parameters on subcutaneous injection pain tolerance.
Automatic pain detection in clinical settings and treatment interventions offers promising opportunities for pain management and treatment optimization. While subjective self-reports and clinical observer ratings are often convenient and useful for quantifying pain, automatic pain detection systems can provide a continuous, more objective, and consistent measure of pain. They can also greatly reduce effort in large-scale studies, where conducting clinical interviews and questionnaires may be highly impractical and inefficient. To this end, we propose a novel machine learning method based on deep neural networks trained to automatically detect pain onset/offset and pain intensity from the facial expressions of participants receiving nociceptive stimuli. To train the network, we used the publicly available UNBC-McMaster Shoulder Pain Expression Archive Database, annotated in terms of the Prkachin and Solomon pain scale. We report the performance of our algorithm in a single-center, comparative, randomized, crossover clinical study that evaluates the impact of different injection parameters on subcutaneous injection pain tolerance.
Inadequate sleep affects health in multiple ways. Unobtrusive ambulatory methods to monitor long-term sleep patterns in large populations could be useful for health and policy decisions. This paper presents an algorithm that uses multimodal data from smartphones and wearable technologies to detect sleep/wake state and sleep episode on/offset. We collected 5580 days of multimodal data and applied recurrent neural networks for sleep/wake classification, followed by cross-correlation-based template matching for sleep episode on/offset detection. The method achieved a sleep/wake classification accuracy of 96.5%, and sleep episode on/offset detection F1 scores of 0.85 and 0.82, respectively, with mean errors of 5.3 and 5.5 min, respectively, when compared with sleep/wake state and sleep episode on/offset assessed using actigraphy and sleep diaries.
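The template-matching step can be sketched as cross-correlating the predicted binary sleep/wake sequence with a wake-to-sleep step template and keeping locations where the correlation peaks. This is a minimal sketch of the idea only; the template length and exact-match criterion below are illustrative assumptions, not the study's values.

```python
import numpy as np

def detect_onsets(sleep_wake, template=None):
    """Find sleep-episode onsets in a binary sleep/wake sequence by
    cross-correlating with a wake-to-sleep step template."""
    if template is None:
        template = np.array([-1, -1, -1, 1, 1, 1])  # wake then sleep
    x = 2 * np.asarray(sleep_wake) - 1               # map {0,1} -> {-1,+1}
    c = np.correlate(x, template, mode="valid")
    # a perfect wake-to-sleep step scores exactly len(template)
    hits = np.flatnonzero(c == len(template))
    return hits + len(template) // 2                 # index of first sleep sample

# One simulated night: wake, then sleep from sample 8 to 19
seq = np.array([0] * 8 + [1] * 12 + [0] * 5)
print(detect_onsets(seq))                            # onset at sample 8
```

Offsets can be detected symmetrically with the reversed (sleep-to-wake) template; in practice a near-maximum threshold rather than an exact match makes the detector robust to isolated misclassified epochs.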
Multiple sclerosis (MS) is the most common autoimmune disorder affecting the central nervous system. Most often diagnosed in young adults, MS runs a chronic, unpredictable course, often leading to severe disability: 50% of MS patients are unable to perform household and employment responsibilities 10 years after disease onset, and 50% are nonambulatory 25 years after disease onset. While it is not clear what factors influence the prognosis of MS, exposure to stress has long been suspected as a factor that can aggravate its progression. In this paper, we discuss the opportunities for wearable sensors in the management of stress in multiple sclerosis patients.
Air pollutants have become a major problem in many cities, causing millions of human deaths worldwide every year. Among the noxious pollutants in air, particles with a diameter of 2.5 micrometers or less (PM2.5) are the most hazardous because they are small enough to penetrate the lungs and invade the smallest airways. Because the presence of dangerous levels of PM2.5, commonly reported in newspapers and on TV, is intertwined with global patterns of production and consumption, there is a need for citizen science projects that engage the young generations, the future leaders of society, in efforts to reduce air pollution. With this goal, and to enable the geo-temporal characterization of PM2.5, we present a crowdsourcing-based air pollution measurement system that uses affordable DIY atomic force microscopes to measure and characterize PM2.5, exploiting the power of human computation through an online crowdsourcing platform to study how PM2.5 varies over time and across geographical locations. Our system is intended as both a scientific platform and a teaching tool for engaging children in environmental policy.
An algorithm to detect poor-quality ECGs collected in low-resource environments is described (and was entered in the PhysioNet/Computing in Cardiology Challenge 2011, `Improving the quality of ECGs collected using mobile phones'). The algorithm is based on previously published signal quality metrics, with some novel additions, originally designed for intensive care monitoring and adapted here for use on short (10 s) 12-lead ECGs. The metrics quantify spectral energy distribution, higher-order moments, and inter-channel and inter-algorithm agreement. Six metrics are produced for each channel (72 features in all) and presented to machine learning algorithms for training on the labeled data (Set-a) provided for the challenge. (Binary labels were available, indicating whether the data were acceptable or unacceptable for clinical interpretation.) We re-annotated all the data in Set-a as well as those in Set-b (the test data) using two independent annotators, with a third adjudicating differences. Events were balanced, and the 1000 subjects in Set-a were used to train the classifiers. We compared four classifiers: Linear Discriminant Analysis, Naïve Bayes, a Support Vector Machine (SVM), and a Multi-Layer Perceptron (MLP) artificial neural network. The SVM and MLP provided the best (and almost equivalent) classification accuracies of 99% on the training data (Set-a) and 95% on the test data (Set-b). The binary classification results (acceptable or unacceptable) were then submitted as an entry to the PhysioNet/Computing in Cardiology Challenge 2011. Before the competition deadline, we scored 92.6% on the unseen test data (0.6% less than the winning entry). After correcting labeling inconsistencies and errors, we achieved 94.0%, the highest overall score of all competition entrants.
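Two of the per-channel metric families named above, spectral energy distribution and higher-order moments, can be sketched as follows. The frequency band, constants, and synthetic signals are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def ecg_quality_features(sig, fs=500.0):
    """Two illustrative per-channel signal quality metrics: relative
    spectral power in a QRS-dominated band (5-15 Hz) and excess
    kurtosis. Flat or saturated leads and impulsive noise both
    distort these metrics."""
    sig = np.asarray(sig, dtype=float)
    sig = sig - sig.mean()
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    band = (freqs >= 5) & (freqs <= 15)
    total = psd[freqs > 0.5].sum() + 1e-12     # ignore baseline wander
    rel_qrs_power = psd[band].sum() / total
    m2 = np.mean(sig ** 2) + 1e-12
    kurtosis = np.mean(sig ** 4) / m2 ** 2 - 3.0
    return rel_qrs_power, kurtosis

# Compare a crude spiky "ECG-like" signal with white noise
fs, n = 500.0, 5000
ecg_like = np.zeros(n)
ecg_like[::500] = 1.0                          # ~1 Hz spike train
noise = np.random.default_rng(0).standard_normal(n)
rq_ecg, k_ecg = ecg_quality_features(ecg_like, fs)
rq_noise, k_noise = ecg_quality_features(noise, fs)
print(round(k_ecg, 1), round(k_noise, 1))
```

The impulsive, QRS-like signal has a far heavier-tailed amplitude distribution (high excess kurtosis) than Gaussian noise, which is the intuition behind using higher-order moments as a quality feature.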