TY - GEN
T1 - Predicting "About-to-Eat" Moments for Just-in-Time Eating Intervention
AU - Rahman, Tauhidur
AU - Czerwinski, Mary
AU - Gilad-Bachrach, Ran
AU - Johns, Paul
PY - 2016/4/11
Y1 - 2016/4/11
AB - Various wearable sensors capturing body vibration, jaw movement, hand gesture, etc., have shown promise in detecting when one is currently eating. However, based on existing literature and user surveys conducted in this study, we argue that a Just-in-Time eating intervention triggered upon detecting a current eating event is sub-optimal. An eating intervention triggered at "About-to-Eat" moments could give users a better opportunity to adopt healthier eating behavior. In this work, we present a wearable sensing framework that predicts "About-to-Eat" moments and the "Time until the Next Eating Event". The framework consists of an array of sensors that capture physical activity, location, heart rate, electrodermal activity, skin temperature and caloric expenditure. Using signal processing and machine learning on this raw multimodal sensor stream, we train an "About-to-Eat" moment classifier that reaches an average recall of 77%. The "Time until the Next Eating Event" regression model attains a correlation coefficient of 0.49. Personalization further increases the performance of both models, to an average recall of 85% and a correlation coefficient of 0.65. The contributions of this paper include user surveys related to this problem, the design of a system that predicts "About-to-Eat" moments, and a regression model trained in real time on multimodal sensor data for potential eating interventions.
KW - Eating Habit Modeling
KW - Just-in-Time Eating Intervention
KW - Next Eating Event Prediction
UR - http://www.scopus.com/inward/record.url?scp=84966614995&partnerID=8YFLogxK
U2 - 10.1145/2896338.2896359
DO - 10.1145/2896338.2896359
M3 - Conference contribution
T3 - DH 2016 - Proceedings of the 2016 Digital Health Conference
SP - 141
EP - 150
BT - DH 2016 - Proceedings of the 2016 Digital Health Conference
T2 - 6th International Conference on Digital Health, DH 2016
Y2 - 11 April 2016 through 13 April 2016
ER -