IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, vol. 74, 2025 (SCI-Expanded, Scopus)
Unhealthy food choices and irregular eating habits are associated with chronic health conditions, so maintaining a balanced diet and consistent meal times is essential for overall health. Conventional nutritional tracking requires careful manual data entry and precise feedback, which can be burdensome and draw unwanted attention in social situations. To address this issue, the proposed system, AudiNosh, incorporates physiological factors and provides personalized dietary advice tailored to individual characteristics, including gender, body mass index (BMI), moderate-to-vigorous physical activity (MVPA) levels, and age. AudiNosh detects the bone-conducted sounds of food consumption in the external auditory canal, enabling unobtrusive nutritional monitoring, and effectively suppresses both environmental noise and body-generated interference during eating. This study introduces a novel event detection approach that combines swallowing patterns with dynamic time warping. We use physiological data, namely swallowing sounds and actigraphy-correlated changes in those sounds, together with synthesized archetypal swallowing-sound characteristics, to train a customized deep-learning model for recognizing food attributes. The system evaluates the user's BMI and MVPA over the past 12 hours to analyze nutritional intake and decide whether macronutrient consumption should be adjusted. In extensive testing, the latency-optimized AudiNosh correctly detected 93% of food attributes across diverse scenarios, outperforming lightweight baselines such as feed-forward networks, bidirectional LSTM (BiLSTM), and GRU models, underscoring its potential for clinical deployment, subject to validation across broader user groups, in personalized healthcare and dietary management.
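
The abstract does not include implementation details, so the following is a minimal sketch of how template-based swallowing-event detection with dynamic time warping (DTW) might look. Everything here is an illustrative assumption rather than the authors' implementation: the use of a one-dimensional per-frame energy feature, the function names (dtw_distance, detect_swallow_events), the window hop, and the distance threshold are all invented for demonstration.

# Minimal sketch of DTW-based swallowing-event detection.
# Assumption: 1-D feature sequences (e.g., per-frame energy of the
# in-ear audio) are compared against a reference swallow template.
# Names, window sizes, and the threshold are illustrative, not from the paper.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def detect_swallow_events(features, template, threshold=4.0, hop=10):
    """Slide a window over the feature stream; flag positions where the
    DTW distance to the swallow template drops below the threshold."""
    win = len(template)
    events = []
    for start in range(0, len(features) - win + 1, hop):
        d = dtw_distance(features[start:start + win], template)
        if d < threshold:
            events.append((start, d))
    return events

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = np.hanning(40)            # toy swallow energy envelope
    stream = rng.normal(0, 0.05, 600)    # background noise
    stream[200:240] += template          # one embedded "swallow"
    print(detect_swallow_events(stream, template))

In this toy run, only windows overlapping the embedded swallow fall below the threshold, which is the basic mechanism a template-matching detector of this kind relies on; a production system would of course use richer audio features and learned templates.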
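As a concrete illustration of the personalization step described in the abstract, a decision rule of roughly this shape could map BMI and recent MVPA to macronutrient advice. The BMI bands below follow the standard WHO categories (underweight below 18.5, normal up to 25), but the 30-minute MVPA cutoff and the advice strings are assumptions made for this sketch, not rules reported in the paper.

# Illustrative sketch of the personalized macronutrient decision step.
# BMI bands use the standard WHO cutoffs; the MVPA cutoff and the
# returned advice strings are assumptions, not the paper's actual rules.
def macronutrient_advice(bmi: float, mvpa_minutes_12h: float) -> str:
    active = mvpa_minutes_12h >= 30      # assumed activity cutoff
    if bmi < 18.5:
        return "increase overall energy intake (underweight)"
    if bmi < 25.0:
        return ("maintain current macronutrient balance" if active
                else "maintain intake; add physical activity")
    # overweight and obese bands
    return ("reduce carbohydrate share; keep protein" if active
            else "reduce overall energy intake and increase MVPA")

print(macronutrient_advice(27.3, 12.0))
# -> reduce overall energy intake and increase MVPA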