When many of us think about interfaces between nerves and computers, we think in terms of downloading our thoughts into silicon. But there is a field of research that turns that thinking on its head. Neurons, after all, perform some impressive feats of visual sharpening and pattern recognition, essentially in real time. Supplementing a computer with the capabilities of a group of neurons may be able to improve the overall performance of a system.
To get this sort of device to function, however, we need to be sure that the nerves involved perform in a consistent and predictable manner. Physical Review E has published a paper that may represent a big step in this direction. The researchers involved grew neurons on a dish studded with electrodes that could track the firing of nerves. As others had before them, they found that the neurons produced synchronized bursting events, or SBEs; when a single nerve fired spontaneously, it would trigger those it was connected to, and the signal would quickly propagate down a whole network of nerves.
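To make the propagation idea concrete, here is a minimal toy sketch, not the researchers' actual model: the network size, connection probability, and random wiring are invented purely for illustration, but it shows how a single spontaneous firing can cascade through a connected culture.

```python
# Toy sketch (not the researchers' model) of how one spontaneous firing can
# cascade through a sparsely connected culture, producing a synchronized
# bursting event (SBE). All parameters here are illustrative assumptions.
import random

random.seed(0)
N_NEURONS = 50          # neurons in the simulated culture
CONNECT_PROB = 0.1      # chance that any given neuron is wired to another

# Random directed connections, standing in for the links cells form in culture.
connections = {
    i: [j for j in range(N_NEURONS) if j != i and random.random() < CONNECT_PROB]
    for i in range(N_NEURONS)
}

def propagate_sbe(seed_neuron):
    """Return the order in which neurons fire after one spontaneous spike."""
    fired = {seed_neuron}
    frontier = [seed_neuron]
    firing_order = [seed_neuron]
    while frontier:
        next_frontier = []
        for neuron in frontier:
            for target in connections[neuron]:
                if target not in fired:      # each neuron fires at most once per burst
                    fired.add(target)
                    firing_order.append(target)
                    next_frontier.append(target)
        frontier = next_frontier
    return firing_order

# One spontaneous firing at neuron 7 triggers a burst through the network.
print(propagate_sbe(seed_neuron=7)[:10])
```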
Unfortunately, on their own, the nerves did not generate SBEs with consistent patterns. This led the researchers to attempt to train the nerve cells to fire in a predictable manner: in essence, to engineer a repeatable firing pattern of the sort thought to underlie how nerves store memories. Attempts to form these "memories" by stimulating some of the neurons with localized releases of calcium, however, failed, as the nerves reverted to random firing afterwards.
The authors reasoned that inhibitory neurons—those that tone down the activity of their neighbors—might be blocking the learning process. They then tried localized releases of a chemical that blocks the function of these inhibitory neurons. By repeating this dose every 20 seconds for several minutes, they were able to train a collection of neurons to repeatedly produce the same pattern during an SBE. They were also able to show that, by dosing different collections of cells, they could produce additional SBE groups without affecting the behavior of the early ones. In short, they could layer a number of memories onto a single collection of neurons.
We're still a long way off from intentionally engineering memories or other more elaborate behaviors into these cultures of neurons; the researchers simply relied on the connections that the neurons themselves formed when placed into culture. But this work does show that we can manipulate the behavior of those connections in a way that can produce consistent results.
Thursday, June 14, 2007
Robotic arm to conduct brain surgery
April 18, 2007. Courtesy University of Calgary and World Science staff
A robot to be installed at the University of Calgary in Canada is to conduct brain surgeries in tandem with real-time brain scans captured by Magnetic Resonance Imaging. The robot, dubbed NeuroArm, “aims to revolutionize neurosurgery,” the university said in a statement.
Billed as the first surgical robot compatible with the advanced imaging technology, it’s the creation of neurosurgeon Garnette Sutherland of the university. He spent six years leading a team of Canadian scientists to design it.

“Many of our microsurgical techniques evolved in the 1960s, and have pushed surgeons to the limits of their precision, accuracy, dexterity and stamina,” said Sutherland. “NeuroArm dramatically enhances the spatial resolution at which surgeons operate, and shifts surgery from the organ towards the cell level.”

“The best surgeons in the world can work within an eighth of an inch. NeuroArm makes it possible for surgeons to work accurately within the width of a hair,” said Doc Seaman, a Calgary philanthropist who donated to the project along with two brothers.

Designed to be controlled by a surgeon from a computer workstation, neuroArm gives surgeons unprecedented control, enabling them to manipulate tools at a microscopic scale, researchers said. Surgical testing is currently underway. The first patient is anticipated for this summer at Calgary’s Foothills Hospital, site of the University of Calgary medical school’s research facility.

Developing neuroArm required an international collaboration of health professionals, physicists, and electrical, software, optical and mechanical engineers to build a robot that could work safely in a surgical suite and within the strong magnetic field of the Magnetic Resonance machine, experts said.

Sutherland’s team is developing programs with the university and the Calgary Health Region, one of Canada’s largest health systems, to train surgeons to use neuroArm. Many other surgical disciplines have participated, and continue to participate, in applying neuroArm to various types of surgical procedures, the researchers said.

“We’re not just building a robot, we’re building a medical robotics program,” Sutherland said. “We want the neuroArm technology to be translated into the global community, i.e. hospitals around the world.”
http://www.world-science.net/othernews/070418_neuroarm.htm
Wednesday, June 13, 2007
What is a neural network?
What is a neural network and how does its operation differ from that of a digital computer? (In other words, is the brain like a computer?)
Mohamad Hassoun, author of Fundamentals of Artificial Neural Networks (MIT Press, 1995) and a professor of electrical and computer engineering at Wayne State University, adapts an introductory section from his book in response.
Artificial neural networks are parallel computational models, comprising densely interconnected adaptive processing units. These networks are composed of many simple processors (simple relative to, say, a PC, which generally has a single, powerful processor) acting in parallel to model nonlinear static or dynamic systems, where a complex relationship exists between an input and its corresponding output.
A very important feature of these networks is their adaptive nature, in which "learning by example" replaces "programming" in solving problems. Here, "learning" refers to the automatic adjustment of the system's parameters so that the system can generate the correct output for a given input; this adaptation process is reminiscent of the way learning occurs in the brain via changes in the synaptic efficacies of neurons. This feature makes these models very appealing in application domains where one has little or an incomplete understanding of the problem to be solved, but where training data is available.
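As a rough sketch of what "automatic adjustment of the system's parameters" means in practice, the toy example below trains a single adaptive unit on a handful of input/output pairs. The data (a logical OR), the learning rate, and the number of passes are all invented for illustration, not taken from Hassoun's text.

```python
# Minimal "learning by example" sketch: one adaptive unit adjusts its
# parameters (weights and bias) from input/output pairs instead of being
# explicitly programmed. All numbers here are illustrative assumptions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training pairs: two-element inputs and the desired output (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # adjustable weights
b = 0.0          # adjustable bias
lr = 0.5         # learning rate

for epoch in range(2000):
    for (x1, x2), target in examples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        # Gradient-style update: nudge each parameter to reduce the error.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the unit's outputs should be close to the desired 0, 1, 1, 1.
print([round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2) for (x1, x2), _ in examples])
```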
One example would be to teach a neural network to convert printed text to speech. Here, one could pick several articles from a newspaper and generate hundreds of training pairs—an input and its associated, "desired" output sound—as follows: the input to the neural network would be a string of three consecutive letters from a given word in the text. The desired output that the network should generate could then be the sound of the second letter of the input string. The training phase would then consist of cycling through the training examples and adjusting the network parameters—essentially, learning—so that any error in output sound would be gradually minimized for all input examples. After training, the network could then be tested on new articles. The idea is that the neural network would "generalize" by being able to properly convert new text to speech.
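One way the training pairs for that example could be generated is sketched below. The three-letter windowing follows the description above, while the phoneme lookup is a hypothetical stand-in for whatever pronunciation data one actually has.

```python
# Sketch of building training pairs for the text-to-speech example: slide a
# three-letter window over the text and pair it with the "desired" sound of
# the middle letter. The phoneme table below is a made-up placeholder.
def make_training_pairs(text, phoneme_of):
    pairs = []
    for word in text.lower().split():
        padded = " " + word + " "            # pad so edge letters get a full window
        for i in range(1, len(padded) - 1):
            window = padded[i - 1:i + 2]     # three consecutive letters (the input)
            target = phoneme_of(padded[i])   # sound of the middle letter (the output)
            pairs.append((window, target))
    return pairs

# Placeholder pronunciation rule, purely for illustration.
toy_phonemes = {"a": "AE", "c": "K", "t": "T"}
pairs = make_training_pairs("a cat", lambda ch: toy_phonemes.get(ch, "?"))
print(pairs)   # [(' a ', 'AE'), (' ca', 'K'), ('cat', 'AE'), ('at ', 'T')]
```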
Another key feature is the intrinsic parallel architecture, which allows for fast computation of solutions when these networks are implemented on parallel digital computers or, ultimately, when implemented in customized hardware. In many applications, however, they are implemented as programs that run on a PC or computer workstation.
Artificial neural networks are viable models for a wide variety of problems, including pattern classification, speech synthesis and recognition, adaptive interfaces between humans and complex physical systems, function approximation, image compression, forecasting and prediction, and nonlinear system modeling.
These networks are "neural" in the sense that they may have been inspired by the brain and neuroscience, but not necessarily because they are faithful models of biological, neural or cognitive phenomena. In fact, many artificial neural networks are more closely related to traditional mathematical and/or statistical models, such as nonparametric pattern classifiers, clustering algorithms, nonlinear filters and statistical regression models, than they are to neurobiological models.
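A concrete instance of that correspondence, sketched as a formula: a single processing unit with a sigmoid activation computes

$$\hat{y} = \sigma(\mathbf{w}^\top \mathbf{x} + b) = \frac{1}{1 + e^{-(\mathbf{w}^\top \mathbf{x} + b)}},$$

where $\mathbf{x}$ is the input vector, $\mathbf{w}$ the adjustable weights, and $b$ the bias. This is the same functional form as a logistic regression model for a binary outcome, so training such a unit amounts to fitting a familiar statistical model.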
Answer posted on May 14, 2007
First generation neuro-memory chip
http://www.sciam.com/article.cfm?chanID=sa003&articleID=0306422B-E7F2-99DF-3809798634B2D416&ref=rss
June 13, 2007
SCIENCE NEWS June 06, 2007
A Step Toward a Living, Learning Memory Chip
Israeli scientists imprint multiple, persistent memories on a culture of neurons, paving the way to cyborg-type machines
By Nikhil Swaminathan
FIRST GENERATION NEURO-MEMORY CHIP?: Israeli scientists have taken a crucial first step in showing that a network of neurons outside the body can be stimulated to create multiple memories that they sustain for days. Researchers at Tel Aviv University in Israel have demonstrated that neurons cultured outside the brain can be imprinted with multiple rudimentary memories that persist for days without interfering with or wiping out others.
"The main achievement was the fact that we used the inhibition of the inhibitory neurons" to stimulate the memory patterns, says physicist Eshel Ben-Jacob, senior author of a paper on the findings published in the May issue of Physical Review E. "We probably made [the cell culture] trigger the collective mode of activity that … [is] … possible."
The results, Ben-Jacob says, set the stage for the creation of a neuromemory chip that could be paired with computer hardware to create cyborglike machines capable of such tasks as detecting dangerous toxins in the air, allowing the blind to see or helping someone who is paralyzed regain some if not all muscle use.
Ben-Jacob points out that previous attempts to develop memories on brain cell cultures (neurons along with their supporting and insulating glial cells) have often involved stimulating the synapses (nerve cell connections). So-called excitatory neurons, which amplify brain activity, account for nearly 80 percent of the neurons in the brain; inhibitory neurons, which dampen activity, make up the remaining 20 percent. Stimulating excitatory cells with chemicals or electric pulses causes them to fire, or send electrical signals of their own to neighboring neurons.
According to Ben-Jacob, previous attempts to trigger the cells to create a repeating pattern of signals sent from neuron to neuron in a population—which neuroscientists believe constitutes the formation of a memory in the context of performing a task—focused on excitatory neurons. These experiments were flawed because they resulted in randomly escalated activity that does not mimic what occurs when new information is learned.
This time, Ben-Jacob and graduate student Itay Baruchi, who led the study, targeted inhibitory neurons to try to bring some order to their neural network. They mounted the cell culture on a polymer panel studded with electrodes, which enabled Ben-Jacob and Baruchi to monitor the patterns created by firing neurons. All of the cells on the electrode array came from the cortex, the outermost layer of the brain known for its role in memory formation.
Initially, when a group of neurons is clustered in a network, merely linking them will cause a spontaneous pattern of activity. Ben-Jacob and Baruchi sought to imprint a memory by injecting a chemical suppressor into a synapse between inhibitory neurons. Their goal: to disrupt the restrictive function of those cells, essentially causing the brakes they put on the excitatory members in the network to loosen. "This is like teaching by liberation," Ben-Jacob says. "We liberate the excitatory neurons to do what they want to do."
The pair chemically treated inhibitory neurons by injecting them with droplets of picrotoxin, an antagonist of gamma-aminobutyric acid (GABA), the primary inhibitory neurotransmitter in the brain. The chemical suppression of the inhibitory neuron created a pattern kicked off by a neighboring excitatory neuron that was now free to fire. Other neurons in the culture began to fire one by one as they received an electrical signal from one of their neighbors. This continued in the same pattern, which repeated for over a day. This new sequence of activity coexisted with the electrical pattern that was spontaneously generated when the neural culture was initially linked.
A day later, they imprinted a third pattern starting at a different inhibitory synapse. Again, it was able to coexist with the other motifs. "The surprising thing is it doesn't affect the other patterns that the network had before," Ben-Jacob says.
The bottom line, the authors wrote: "these findings hint chemical signaling mechanisms might play a crucial role in memory and learning in task-performing in vivo networks."
Sunday, May 13, 2007
head-transplant
Clandestine experiments on head transplants have been carried out separately by US and Russian scientists since the 1950s. They were as bizarre as creating two-headed animals and switching the heads of monkeys.
The brain, down to the upper spinal cord, can be kept alive by managing its blood supply, that is, by reconnecting the carotid and vertebral arteries. The vertebral column can also be fixed in place, and the immune reaction can be managed with immunosuppressants.
Vision, eye movements, hearing, taste and smell would also remain intact. But the million-dollar question is whether the patient could move his hands and legs and perceive his body; that would require reconnecting the spinal cords of the two bodies.
Immense research effort has gone into spinal cord injury, aiming to reconnect the two severed ends of a spinal cord with stem cells and grafts.
If severed spinal cords can be restored this way, with stem-cell grafts, perhaps head transplants might eventually become a scientific possibility without leaving the unfortunate 'patient' permanently paralysed. Whether such operations would ever be deemed ethical is another matter, and the psychological and emotional implications simply beggar belief.
One thing's for certain. With surgical techniques improving at such a rapid rate, the issue will shortly be not whether we could carry out a human head transplant, but, much more importantly, whether we should.
For further reading, see:
http://www.dailymail.co.uk/pages/live/articles/technology/technology.html?in_article_id=426765