Session VI: Understanding the Brain: Learning and Computation


Session Summary

Moderator: Leah Jamieson, dean, Purdue University
Keynote speaker: Jeff Hawkins, co-founder, Numenta
Panel discussion: Jeff Hawkins; Terrence Sejnowski, professor, Salk Institute for Biological Studies; Kevan Martin, co-director of the Institute of Neuroinformatics, a joint institute of the University of Zurich and the Swiss Federal Institute of Technology

“This is the most amazing thing we can do in our lifetime,” said Jeff Hawkins, referring to reverse engineering the brain. “It’ll be like the Enlightenment. We’ve been building computers for 60 years on the same model. Now nature is showing us a different way to do it.”

Hawkins is passionate about discovering how the human brain works and using that knowledge to design intelligent computers. “We have enough information and we have enough tools to start deciphering [how the brain works],” he said. “Engineers are used to deciphering, to picking apart messy systems. You just need to figure out the principle.”

After decades of work in computer engineering and neuroscience, Hawkins has developed just such a principle for the human brain, in particular the neocortex.

Before explaining it, he reviewed a few key facts about the neocortex, the wrinkled tissue that forms the outer layer of the brain. The neocortex is the center of all high-level intelligence, including high-level vision, touch, and language. He said that if unwrinkled and spread out, the neocortex would be 2-3 millimeters thick and about the size of a large dinner napkin. He held up a dinner napkin and said, “This is you. Your napkin is listening and mine is talking.”

His theory is that the neocortex builds a model of the world using input from the senses and then infers causes of novel input in terms of that model. The brain’s model also allows it to make predictions about what will likely happen next.

“If you understand English, you can predict what word is at the end of this. . . [audience laughter]. . . sentence,” he said.
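As a toy illustration of that kind of prediction (a sketch for the reader only, not anything Hawkins presented; the SequenceMemory class and its methods are invented here), a system that has memorized word sequences can complete a familiar sentence simply by looking up what followed the same words before:

```python
# Toy sketch: predict the end of a sentence from previously memorized word sequences.
from collections import defaultdict

class SequenceMemory:
    """Remembers word sequences and predicts the most likely next word."""

    def __init__(self):
        # Maps a context (tuple of preceding words) to counts of the words that followed it.
        self.next_word_counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sentence):
        """Store every prefix -> next-word transition in the sentence."""
        words = sentence.lower().split()
        for i in range(1, len(words)):
            self.next_word_counts[tuple(words[:i])][words[i]] += 1

    def predict(self, partial_sentence):
        """Return the word most often seen after this exact prefix, if any."""
        candidates = self.next_word_counts.get(tuple(partial_sentence.lower().split()))
        return max(candidates, key=candidates.get) if candidates else None

memory = SequenceMemory()
memory.learn("you can predict what word is at the end of this sentence")
print(memory.predict("you can predict what word is at the end of this"))  # -> "sentence"
```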

Hawkins said that knowledge in the neocortex is distributed hierarchically. Sensory data, whether visual or tactile or auditory, is received in bits and pieces by cells lower on the hierarchy, which send the information to cells higher up, and so on. At each point in the process, cells are responding to patterns and pattern sequences and comparing them to past experience. Lower cells deal with small fragments of information over short periods of time. Cells nearer the top of the hierarchy respond to more complex objects over longer periods of time.

Information flows both ways in the hierarchy, with the upper level cells sending down information about what to expect in the coming moments. If the lower cells have collected information about tone and pitch and tempo that the upper level cells have interpreted as the first two bars of “Over the Rainbow,” the upper cells send the message down that the notes of the third bar will come next.
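A minimal sketch of that two-way flow, again invented purely for illustration (the UpperRegion and LowerRegion classes are not cortical models or Numenta code): fragments are passed upward, the upper level matches them against sequences it already knows, and its expectation for the next fragment is passed back down.

```python
# Illustrative sketch only: bottom-up fragments, top-down expectations in a two-level hierarchy.

class UpperRegion:
    """Higher level: knows whole sequences (e.g. melodies) as ordered lists of fragments."""

    def __init__(self, known_sequences):
        self.known_sequences = known_sequences  # name -> list of expected fragments

    def interpret(self, fragments_so_far):
        """Match fragments received from below; return the sequence name and expected next fragment."""
        n = len(fragments_so_far)
        for name, sequence in self.known_sequences.items():
            if sequence[:n] == fragments_so_far and n < len(sequence):
                return name, sequence[n]
        return None, None

class LowerRegion:
    """Lower level: collects raw fragments, reports them upward, holds the expectation sent back down."""

    def __init__(self, upper):
        self.upper = upper
        self.fragments = []
        self.expected_next = None

    def receive(self, fragment):
        self.fragments.append(fragment)                 # bottom-up: pass the new fragment upward
        name, expected = self.upper.interpret(self.fragments)
        self.expected_next = expected                   # top-down: what to expect in the coming moments
        return name, expected

# Hypothetical fragments standing in for the opening bars of a melody.
upper = UpperRegion({"Over the Rainbow": ["bar 1", "bar 2", "bar 3", "bar 4"]})
lower = LowerRegion(upper)
lower.receive("bar 1")
print(lower.receive("bar 2"))   # -> ('Over the Rainbow', 'bar 3')
```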

Hawkins calls his model “hierarchical temporal memory” (temporal because all sensory input and predictions are expressed and understood in sequences of changing information). Although different regions of the neocortex specialize in different tasks—sight or language, for example—each region works in a similar way by looking for sequences of patterns, making predictions, and constantly updating itself based on new experiences.

In other words, the brain continually trains itself to learn what it needs to know. “The structure is there at birth, but not the knowledge,” Hawkins said.
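One way to picture "structure but not knowledge," as another illustrative sketch (the LearningRegion class and its transition table are assumptions of this example, not Hawkins's model): the storage for sequences exists from the start but begins empty, and everything in it comes from the stream of inputs the region observes, with each new input both tested against the current prediction and folded back into memory.

```python
# Toy sketch: the "structure" (an empty store of transitions) exists from the start,
# but the "knowledge" inside it comes only from experience.
from collections import defaultdict

class LearningRegion:
    def __init__(self):
        # Structure present "at birth": an empty table of pattern -> next-pattern counts.
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, pattern):
        """Predict what should follow the previous pattern, then learn from what actually arrived."""
        prediction = None
        if self.previous is not None:
            following = self.transitions[self.previous]
            if following:
                prediction = max(following, key=following.get)
            following[pattern] += 1      # update: this transition was just experienced
        self.previous = pattern
        return prediction

region = LearningRegion()
for note in ["C", "E", "G", "C", "E", "G", "C", "E"]:
    print(region.observe(note))  # early predictions are None; later ones anticipate the repeating pattern
```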

Hawkins and his colleagues have written software based on hierarchical temporal memory to teach computers to perform tasks of perception that are simple for the human brain, such as deciding whether an image represents a rubber duck, a cow, a boat, or a cell phone. Hawkins also trained a computer to perform digital pathology—to discriminate glands from other structures in images of cancerous tissue. In both cases, computers using hierarchical temporal memory perform fairly well but not perfectly. Hawkins said a new approach is needed before computers will be able to execute such basic acts of perception reliably.

He is optimistic that engineers, neurobiologists, and other scientists will answer many of the essential questions involved in building an intelligent computer in the coming years. “More and more people are working on this. We finally have the tools, data, and people to make significant progress.”

In the panel discussion afterward, Terry Sejnowski agreed. “This is going to be the generation that figures it out,” he said. Sejnowski is the Francis Crick professor at the Salk Institute for Biological Studies and professor of biology and computer science and engineering at the University of California at San Diego.

Kevan Martin, co-director of the Institute of Neuroinformatics, said, “In our Institute in Zurich, we got together neuroscientists, computer science engineers, physicists, mathematicians, and psychologists and put them in one place with a fantastic kitchen and a place for them to sit. The engineer is trying to make something that works. The neuroscientist says, ‘That’s not really a neuron.’ The engineer says, ‘But it works.’ You’ve got to have all these people talking to each other and learning each other’s languages.”

Perhaps to demonstrate the value of exchanging ideas among people with different perspectives, the panelists engaged in a lively debate about whether the challenges of intelligent computers would be solved with silicon or with software. Martin, who is working to simulate neurons using silicon, pointed out that silicon is much faster.

Hawkins maintained that chips take too long to design. “We’re learning so much that by the time the chip comes out things have changed and you’re hosed,” he said.

One thing the panelists had no trouble agreeing on is that the work being done to reverse engineer the brain and design intelligent computers will have far-reaching impacts on everything from learning to healthcare. Hawkins said technology resulting from this effort could play a role in meeting each of the 14 NAE Grand Challenges. “We cannot anticipate all the benefits that will come out of this,” he said.
