🤖 AI Summary
Researchers building brain–computer interfaces (BCIs) report that implants in non‑motor areas can decode intentions and even “preconscious” signals hundreds of milliseconds before subjects report awareness. The Nature piece highlights Richard Andersen’s dual‑implant work (motor cortex plus posterior parietal cortex), where decoding of planning signals enabled a paralyzed pianist’s interface to trigger keystrokes before she consciously acted. Lab demonstrations include limited‑vocabulary internal‑speech decoding and single neurons in the parietal cortex that track card values and decision moments. Technically, implanted BCIs read spiking activity from local neural populations, while consumer devices rely on scalp EEG, which captures signals averaged over large areas. Advances in AI—especially machine learning that denoises EEG and builds cross‑subject “foundation” models from thousands of hours of neural data—are rapidly improving decoding accuracy across both classes of devices.
Those gains sharpen ethical and regulatory stakes. Access to preconscious content and richer neural inferences (attention, intentions, mental‑health markers) raises privacy, manipulation and discrimination risks, particularly since consumer neurotech often lacks robust privacy safeguards and large firms could scale use quickly (Apple has patented EEG sensors). Some jurisdictions have started protecting neural recordings, and international bodies have issued guidelines, but experts warn laws focusing only on raw signals won’t stop harmful inferences drawn by AI. Clinically, motor BCIs (e.g., Synchron) are nearest to approval and promise restored communication and new psychiatric therapies, but developers and ethicists say coordinated safety, privacy and governance frameworks are urgently needed.
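The EEG denoising mentioned above can be illustrated with a toy example. The sketch below applies a simple band‑pass filter, one of the most basic cleanup steps before any ML decoding; the sampling rate, filter band, and simulated signal are illustrative assumptions, not details from the article, and real pipelines use far more sophisticated learned denoisers.

```python
# Toy sketch: band-pass filtering a noisy simulated EEG trace.
# All parameters here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                      # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)     # two seconds of samples

# Simulated scalp signal: a 10 Hz alpha-like rhythm buried in
# 60 Hz mains interference plus broadband noise.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)

# 4th-order Butterworth band-pass keeping 1-40 Hz, where most EEG power lies;
# this removes the 60 Hz hum and much of the out-of-band noise.
b, a = butter(4, [1, 40], btype="bandpass", fs=fs)
denoised = filtfilt(b, a, noisy)

# The filtered trace tracks the underlying rhythm much more closely.
print(float(np.corrcoef(clean, denoised)[0, 1]))
```

A classical filter like this only scratches the surface; the cross‑subject “foundation” models the article describes learn denoising and decoding jointly from large neural datasets rather than from fixed frequency bands.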