
Results


Fine-tuning Signal Processing Parameters


The average feeding gesture in our training set lasted about 2.8 seconds. We found two parameter combinations that yielded optimal results (both a high F-measure and low variability). One represents a slow, fine-grained approach (window size = 1.0 sec, overlap = 0.8, and shift = 0.3), which produced a LOSOCV F-measure of 0.746 with a variance of 0.008; the other is a faster, coarse-grained approach (window size = 1.5 sec, overlap = 0.7, and shift = 0.5), which produced a LOSOCV F-measure of 0.732 with a variance of 0.02. The slow, fine-grained approach identified feeding gestures with the pattern "FFNFF" or "FFNNFF," where the bite in the middle of the gesture was captured as a non-feeding moment (N), and the food-to-mouth and back-to-rest movements were detected as feeding gestures (F).
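To illustrate how these parameters interact, the sketch below segments a labeled sensor stream with a fixed-length sliding window. It assumes that "shift" is the step between window starts in seconds and that "overlap" is the fraction of a window that must intersect an annotated gesture for the window to be labeled F; the function and variable names are ours, not from the original pipeline.

```python
import numpy as np

def segment_windows(timestamps, labels, window_size=1.0, shift=0.3, overlap_thresh=0.8):
    """Slide a fixed-length window over a sensor stream and label each window.

    timestamps     : 1-D array of sample times in seconds
    labels         : 1-D array of per-sample ground truth (1 = feeding, 0 = non-feeding)
    window_size    : window length in seconds (e.g. 1.0 for the fine-grained setting)
    shift          : step between consecutive window starts in seconds (e.g. 0.3)
    overlap_thresh : fraction of in-window samples that must be feeding for the
                     window to be labeled F (assumed interpretation of "overlap")
    Returns a list of (start_idx, end_idx, window_label) tuples.
    """
    windows = []
    start, t_end = timestamps[0], timestamps[-1]
    while start + window_size <= t_end:
        mask = (timestamps >= start) & (timestamps < start + window_size)
        if mask.any():
            frac_feeding = labels[mask].mean()
            window_label = 1 if frac_feeding >= overlap_thresh else 0
            idx = np.flatnonzero(mask)
            windows.append((idx[0], idx[-1] + 1, window_label))
        start += shift
    return windows
```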


Optimal Feature Subset


The top-ranked features that were selected (Gx_qurt1, Gx_mean, Gzirq, Gx_stdev, and Gx_min) involve the gyroscope z- and x-axes, which correspond to pitch and roll, respectively.
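The sketch below shows how such per-window statistics could be computed with NumPy for the gyroscope x- and z-axes. The mapping of feature names to definitions (for example, reading Gzirq as the interquartile range of the z-axis) is our assumption.

```python
import numpy as np

def gyro_window_features(gx, gz):
    """Compute the top-ranked statistical features for one window.

    gx, gz : 1-D NumPy arrays of gyroscope x- and z-axis samples in the window.
    Feature names mirror the ones reported above; the exact definitions are assumptions.
    """
    return {
        "Gx_qurt1": np.percentile(gx, 25),                       # first quartile of gyro x
        "Gx_mean": float(gx.mean()),                             # mean of gyro x
        "Gzirq": np.percentile(gz, 75) - np.percentile(gz, 25),  # IQR of gyro z (assumed)
        "Gx_stdev": float(gx.std(ddof=1)),                       # sample std dev of gyro x
        "Gx_min": float(gx.min()),                               # minimum of gyro x
    }
```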


Selecting the Optimal Classifier


We wanted to test the predictive power of multiple classification algorithms to assess which classifier would best detect feeding gestures using the selected subset of features, and we compared their performance using LOSOCV. We found that while the AdaBoost classifier outperforms the other classifiers in LOSOCV with an F-measure of 76.0%, the Random Forest classifier outperforms all algorithms in 10-fold CV with an F-measure of 75.2% and produces a comparable F-measure of 75.3% in LOSOCV. Interestingly, LOSOCV often outperforms 10-fold CV (averaged over 10 runs), which suggests that within-subject variability may be so high that even when the training data contains data from the test subject, predicting feeding gestures remains challenging.
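A minimal comparison of this kind can be run with scikit-learn, using LeaveOneGroupOut with subject IDs as groups for LOSOCV and StratifiedKFold for 10-fold CV. The hyperparameters below are placeholders, not the ones used in this work.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score

def compare_classifiers(X, y, subject_ids):
    """Compare AdaBoost and Random Forest under LOSOCV and 10-fold CV.

    X           : feature matrix (windows x features)
    y           : window labels (1 = feeding gesture, 0 = non-feeding)
    subject_ids : per-window subject identifier used as the LOSOCV group
    """
    classifiers = {
        "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
        "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    logo = LeaveOneGroupOut()
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in classifiers.items():
        f1_loso = cross_val_score(clf, X, y, groups=subject_ids, cv=logo, scoring="f1")
        f1_10cv = cross_val_score(clf, X, y, cv=kfold, scoring="f1")
        print(f"{name}: LOSOCV F1 = {f1_loso.mean():.3f}, 10-fold F1 = {f1_10cv.mean():.3f}")
```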


Clustering to Count Feeding Gestures


We realized that, without clustering, our classification model would overestimate the number of feeding gestures, likely because consecutive sliding windows overlap. We therefore expected that applying a clustering algorithm would improve our feeding gesture count. However, even after testing a range of ε and minPts values, the generalized model produced a poor feeding gesture count (RMSE = 30.2).
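The ε and minPts parameters suggest a DBSCAN-style density clustering. The sketch below, which reflects our assumption rather than the exact pipeline, clusters the timestamps of positively classified windows, counts each non-noise cluster as one gesture, and scores the counts with RMSE.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_gestures(window_times, window_preds, eps=1.0, min_pts=2):
    """Count feeding gestures by clustering positively classified windows in time.

    window_times : 1-D array of window center times (seconds)
    window_preds : 1-D array of classifier outputs (1 = feeding, 0 = non-feeding)
    eps, min_pts : DBSCAN neighborhood radius and minimum cluster size
                   (DBSCAN is assumed here, based on the ε/minPts parameters)
    """
    feeding_times = window_times[window_preds == 1].reshape(-1, 1)
    if len(feeding_times) == 0:
        return 0
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(feeding_times)
    # Each non-noise cluster of nearby feeding windows counts as one gesture.
    return len(set(labels) - {-1})

def rmse(pred_counts, true_counts):
    """Root-mean-square error between predicted and annotated gesture counts."""
    pred_counts, true_counts = np.asarray(pred_counts), np.asarray(true_counts)
    return float(np.sqrt(np.mean((pred_counts - true_counts) ** 2)))
```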

As a result, we tested a more personalized clustering model, with settings tuned for each individual, where we trained the model on each subject's lunch data and tested it on their breakfast data. This approach reduced the error to an RMSE of 8.43.
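Building on the sketch above (it reuses count_gestures and rmse), a personalized variant could tune (ε, minPts) per subject on lunch data and evaluate the resulting gesture counts on breakfast data. The data layout and parameter grids below are hypothetical.

```python
def personalized_counts(subjects, eps_grid=(0.5, 1.0, 1.5, 2.0), minpts_grid=(2, 3, 4)):
    """Per-subject clustering: tune (eps, minPts) on lunch, evaluate on breakfast.

    subjects : dict mapping subject id -> {'lunch': ..., 'breakfast': ...}, where each
               meal is a (window_times, window_preds, true_count) tuple.
               (Hypothetical layout; the study trained on lunch and tested on breakfast.)
    """
    preds, truths = [], []
    for sid, meals in subjects.items():
        lunch_t, lunch_p, lunch_true = meals["lunch"]
        # Pick the setting that best reproduces the annotated lunch gesture count.
        best = min(
            ((e, m) for e in eps_grid for m in minpts_grid),
            key=lambda em: abs(count_gestures(lunch_t, lunch_p, *em) - lunch_true),
        )
        bkfst_t, bkfst_p, bkfst_true = meals["breakfast"]
        preds.append(count_gestures(bkfst_t, bkfst_p, *best))
        truths.append(bkfst_true)
    return rmse(preds, truths)
```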
