🤖 AI Summary
            Researchers demonstrate a real-time wearing-detection system for true wireless stereo (TWS) earbuds that uses an in-ear photoplethysmography (PPG) sensor and edge AI to classify wearing state (fully worn, partially worn, not worn). The pipeline runs on constrained hardware: the MCU digitizes ear-canal PPG, buffers fixed-length segments, and applies a finite-difference based validity check to reject noisy/motion-corrupted windows (retaining the previous label when invalid). Valid segments are quantized to 8 bits and sent over SPI to an edge AI processor that runs a k‑nearest neighbor (k‑NN) classifier. The paper details an edge operator architecture—Data Transceiver, FSM, Instruction Encoder, and an N‑core of parallel neuron cells—that stores training samples, computes distances, and votes labels in real time. Validation shows the finite-difference validity filter successfully flags noisy sections without expensive multiplications/divisions, keeping computation and power low.
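The pipeline above can be sketched end to end. This is a minimal illustrative sketch, not the paper's implementation: the thresholds, window length, function names, and exemplar format are all assumptions. The validity check uses only subtractions and comparisons, mirroring the paper's point that it avoids multiplications and divisions; invalid windows keep the previous wearing-state label.

```python
from collections import Counter

def is_valid(window, diff_thresh=50, max_bad=3):
    # Finite-difference validity check: count large sample-to-sample jumps
    # (motion artifacts). Only subtractions and comparisons are needed --
    # no multiplications or divisions.
    bad = sum(1 for a, b in zip(window, window[1:]) if abs(b - a) > diff_thresh)
    return bad <= max_bad

def quantize8(window):
    # Scale a valid segment into the 8-bit range [0, 255] before it is
    # shipped over SPI to the edge AI processor.
    lo, hi = min(window), max(window)
    span = (hi - lo) or 1
    return [round(255 * (x - lo) / span) for x in window]

def knn_label(features, exemplars, k=3):
    # Edge-side k-NN: squared distance to each stored exemplar,
    # then a majority vote over the k nearest labels.
    dists = sorted(
        (sum((f - e) ** 2 for f, e in zip(features, feat)), label)
        for feat, label in exemplars
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def classify(window, exemplars, prev_label):
    # Noisy/motion-corrupted windows are rejected and the previous
    # wearing-state label is retained, as described in the summary.
    if not is_valid(window):
        return prev_label
    return knn_label(quantize8(window), exemplars, k=3)
```

Exemplars here are simply `(feature_vector, label)` pairs held in the edge processor's memory; in the described architecture the distance computations would be spread across the parallel neuron cells rather than run in a Python loop.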
This approach is significant because it combines physiological sensing (PPG) with lightweight on-device ML to provide robust, low-latency wearing detection (important for auto-pause/play, personalized audio, and secure, health-aware features) while preserving privacy by avoiding cloud round trips. The key trade-offs are k‑NN's memory cost for stored exemplars and the effect of 8‑bit quantization on feature fidelity. Even so, the design's simplicity, motion-artifact handling, and edge-centric architecture make it attractive for battery- and compute-limited earables, and the system can be extended to other classifiers (e.g., RBFNN) or other wearable biometrics.
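The k‑NN memory trade-off can be made concrete with back-of-the-envelope sizing. The exemplar count and segment length below are hypothetical placeholders, not figures from the paper; the point is that 8-bit features keep the stored-exemplar footprint small:

```python
# Hypothetical k-NN exemplar sizing at 8-bit feature precision.
n_exemplars = 300      # assumed number of stored training samples
n_features = 64        # assumed samples per quantized segment
bytes_per_feature = 1  # 8-bit quantization -> one byte per feature
label_bytes = 1        # one byte per stored class label

total = n_exemplars * (n_features * bytes_per_feature + label_bytes)
print(total)  # 19500 bytes
```

Under these assumptions the whole exemplar store is under 20 KB, which is why a memory-hungry method like k‑NN can still fit in the on-chip storage of an edge AI processor.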