Cybernetics and the brain-controlled robot

An interesting story from Popular Mechanics about progress in cybernetics, titled “Mind control stories.” It starts with a macaque controlling a robot arm via brain implants, and then considers the future:

For Miguel Nicolelis, a professor of neuroscience at Duke University Medical Center, the backbone of mind-machine interfaces is the ability to analyze neural activity. Sure, the system demonstrated at Pitt in May accessed information from 100 neurons at once. But Nicolelis’s lab has managed five times that amount, with data coming from up to 10 different brain structures.

For me, this is the most interesting part:

The main purpose of the walking robot experiment was to demonstrate just how precisely brain activity could be translated, but it produced another interesting result: It actually took less time for the brain signal to travel from the monkey in North Carolina to the robot in Japan than it took to go from the primate’s brain to its own muscles. At any given moment, then, the bot was receiving the command to walk before the monkey’s body did.
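
To see why this is even physically plausible, here is a rough back-of-the-envelope comparison. All of the numbers are generic textbook-style assumptions on my part (pathway length, conduction velocity, great-circle distance), not figures from the experiment itself:

```python
# Back-of-the-envelope latency comparison.
# Every number below is a rough assumption, not a value from the experiment.

# Neural path: motor cortex -> spinal cord -> leg muscle in a macaque.
nerve_path_m = 0.7            # assumed cortex-to-leg pathway length, metres
conduction_m_per_s = 60.0     # typical fast corticospinal conduction velocity
synaptic_delays_s = 0.005     # a few synaptic / neuromuscular relays
neural_s = nerve_path_m / conduction_m_per_s + synaptic_delays_s

# Network path: North Carolina -> Japan over optical fiber.
fiber_path_m = 11_000_000     # rough great-circle distance, metres
light_in_fiber_m_per_s = 2e8  # roughly 2/3 the speed of light in vacuum
network_s = fiber_path_m / light_in_fiber_m_per_s  # propagation only, no routing

print(f"neural path ~{neural_s * 1000:.0f} ms")   # ~17 ms before the muscle even contracts
print(f"fiber path  ~{network_s * 1000:.0f} ms")  # ~55 ms, plus routing overhead
```

Both figures land in the tens of milliseconds. On the biological side you also have to add the electromechanical delay between the neural command and visible muscle contraction, which (I assume) is part of why the race the article describes is even close.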

I’ve been reading Ray Kurzweil’s book, and it has always seemed to me that a fundamental barrier to the development of effective neural implants is bandwidth: human brains have evolved to use inputs and outputs at the speed of language, not the speed of electronics. So this result, in which direct wiring accelerates real-world responses and feedback, suggests the brain may have substantial plasticity with respect to bandwidth.

I think I’ll lecture on this topic in my “Biology of Mind” course this fall.