O**R
Excellent book, but I have one objection
The book is about Hawkins' theory of how the mammalian cortex, especially the human cortex, works. Hawkins thinks it is only by understanding the cortex that we will be able to build truly intelligent machines. Blakeslee has helped him present the theory so that it is accessible to the general public. I am very impressed by the theory of the cortex, but I do not agree that the cortex is the only way to achieve intelligence.

Hawkins defines intelligence as the ability to make predictions. I think this is an excellent definition of intelligence. He says the cortex makes predictions via memory. The rat in the maze has a memory which includes both the motor activity of turning right and the experience of food. This activates turning right again, which is equivalent to the prediction that if it turns right, food will follow.

The primate visual system, the best-understood sense, has four cortical areas arranged in a hierarchy. In the lowest area, at the back of the head, cells respond to edges in particular locations, sometimes to edges moving in specific directions. In the highest area you can find cells that respond to faces, sometimes to particular faces, such as the face of Bill Clinton. Yet the microscopic appearance of the cortex is basically the same everywhere; there is not even much difference between motor cortex and sensory cortex. The book makes sense of the connections found in all areas of the cortex.

The cortex is a sheet covering the brain, composed of small adjacent columns of cells, each with six layers. Information from a lower cortical area excites layer 4 of a column. Layer 4 cells excite cells in layers 2 and 3 of the same column, which in turn excite cells in layers 5 and 6. Layers 2 and 3 connect to the higher cortical area. Layer 5 has motor connections (the visual area affects eye movements), and layer 6 connects to the lower cortical area: it projects to the long fibers in layer 1 of the area below, which can excite layers 2 and/or 3 in many columns.

So there are two ways of exciting a column: either the area below stimulates layer 4, or the area above stimulates layers 2 and 3. The synapses from the area above are far from the cell bodies of the neurons, but Hawkins suggests that synapses far from the cell body may fire a cell if several of them are activated simultaneously.

The lowest area, at the back of the head, is not actually the beginning of processing. It receives input from the thalamus, in the middle of the brain (which in turn receives input from the eyes). Cells in the thalamus respond to small circles of light, and the first stage of cortical processing converts this response to spots into a response to moving edges. Nor is the highest visual area the end of the story: it connects to multisensory areas of the cortex, where vision is combined with hearing, touch, and so on. The very highest area is not cortex at all, but the hippocampus.

Perception always involves prediction. When we look at a face, our fixation point is constantly shifting, and we predict what the result of the next fixation will be. According to Hawkins, when an area of the cortex knows what it is perceiving, it sends to the area below the name of the sequence and where we are in that sequence. If the next item in the sequence agrees with what the higher area thought it should be, the lower area sends no information back up. But if something unexpected occurs, it transmits information up.
If the higher area can interpret the event, it revises its output to the lower area and sends nothing to the area above it. But truly unexpected events percolate all the way up to the hippocampus. It is the hippocampus that processes the truly novel, eventually storing the once-novel sequence in the cortex. If the hippocampus on both sides is destroyed, the person may still be intelligent, but can learn nothing new (at least, no new declarative memory).

When building an artificial auto-associative memory that can learn sequences, it is necessary to build in a delay so that the next item is predicted at the moment it will occur. Hawkins suggests that the necessary delay is embodied in the feedback loop between layer 5 and the nonspecific areas of the thalamus. A cell in a nonspecific thalamic area may stimulate many cortical cells.

I think this theory of how the cortex works makes a lot of sense, and I am grateful to Hawkins and Blakeslee for presenting it in a book that is accessible to people with limited background in AI and neuroscience.

But I am not convinced that the mammalian cortex is the only way to achieve intelligence. Hawkins suggests that the rat walks and sniffs with its "reptilian brain" but needs the cortex to learn the correct turn in the maze. Yet alligators can learn mazes using only their reptilian brains; I would have been quite surprised if they could not. Even bees can predict, using a brain of one cubic millimeter. Not only can they learn to locate a bowl of sugar water; if you move the bowl a little further away each day, the bee will fly to the correct predicted location rather than to the last experienced location. And large-brained birds achieve primate levels of intelligence without a cortex. The part of the forebrain that is enlarged in highly intelligent birds has a nuclear rather than a laminar (layered) structure. The parrot Alex had language and intelligence comparable to those of a two-year-old human, and Aesop's fable of the crow that got what it wanted from the surface of the water by dropping in stones to raise the water level has been replicated with crows presented with the same problem.
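A minimal sketch of the prediction-by-memory idea this review describes, assuming a toy two-level hierarchy in which each level memorizes which item follows which, reports whether the current item was predicted, and passes only surprises up to the level above. The class and variable names are invented for illustration; this is not Hawkins' actual cortical algorithm.

```python
# Toy sketch of prediction-by-memory in a two-level hierarchy.
# Each level remembers observed transitions, predicts the next input,
# and forwards only surprising (unpredicted) inputs to the level above.

class SequenceMemory:
    def __init__(self, name):
        self.name = name
        self.transitions = {}   # previous item -> set of items seen next
        self.previous = None

    def observe(self, item):
        """Return True if the item was predicted, False if it is a surprise."""
        predicted = (self.previous is not None
                     and item in self.transitions.get(self.previous, set()))
        if self.previous is not None:
            self.transitions.setdefault(self.previous, set()).add(item)
        self.previous = item
        return predicted


lower = SequenceMemory("lower area")
higher = SequenceMemory("higher area")   # only hears about surprises

for item in ["turn-right", "food", "turn-right", "food", "turn-left"]:
    if lower.observe(item):
        print(f"{lower.name}: '{item}' was predicted, nothing sent up")
    else:
        print(f"{lower.name}: '{item}' was unexpected, passed up")
        higher.observe(item)
```

On the second pass through the turn-right/food pair, the lower level predicts the item and sends nothing up, mirroring the claim that expected input stays local while only unexpected input climbs the hierarchy.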
S**Y
Intelligence and the Matter of Mind
Hawkins and his co-writer, Sandra Blakeslee, offer an intriguing analysis of what the brain does to produce intelligence, a very sticky subject any way you cut it. Separating intelligence from the other familiar features of the conscious mind that brains are responsible for, he proposes that intelligence is best understood as predictive capacity and that it is basically a function of the cortex, the latest add-on to animal brains, which appears to be largest and/or most developed in humans among all mammals. The cortex wraps the older parts of the brain (what Hawkins calls the "lizard brain"), and Hawkins proposes that it performs the function of intelligence via a relatively simple, uniform algorithm, contrary to the general opinion in AI circles, which presumes the need for many complex and integrated algorithms.

Taking his lead from the Johns Hopkins neuroscience researcher Vernon Mountcastle back in the seventies, Hawkins presumes that the remarkably uniform appearance of the cortex (it basically consists, he tells us, of six layers of neuronal cells throughout) suggests that the various areas of the cortex, shown by researchers to be responsible for different functions (vision, touch, hearing, conceptualizing, etc.), really do everything they do by performing the same processes. He is careful, of course, to emphasize that he is not talking about other things brains presumably do, including emotions, instinctual drives, somatic sensations, etc., which he assigns to the lizard brain. It's just the intelligence part that he is interested in, though he's certainly aware that for intelligence to work as it does in us it must be integrated with the broad range of other features found in consciousness, including those produced in the lizard brain. So his argument is not that the cortex, in its special capacity, is a stand-alone, but that it is a significant and inextricable add-on to the rest of our brain and works only with and in support of the other features.

For Hawkins, the key to understanding how the cortex does intelligence comes down to understanding the pertinent algorithm. He argues that neuronal groups work hierarchically in two ways: up and down the line in linked columns spanning the six layers of neurons, found more or less uniformly throughout the cortex, and also by combining and linking different cortical areas (responsible for different functions, e.g., shapes, colors, sound, touch, taste, smell, language, motor control) in other hierarchies that are not determined by physical contiguity. These links are established by myriads of cellular axons traveling transversely across the cortical areas and to other parts of the lizard brain, each axon producing multiple connections through the tree-like dendrites at its end points, resulting in a number of connections that is difficult to estimate but likely in the hundreds of millions or more.

The basic cortical algorithm performed by all these interconnecting neurons, on Hawkins' view, is one of patterning and of the capture and retention of so-called "invariant representations." He argues that human memory is not precise the way computational memory is (a case made, as well, by Gerald Edelman in his own work). But where Edelman (Bright Air, Brilliant Fire: On The Matter Of The Mind) emphasizes the dynamic and incomplete quality of human recollections, Hawkins emphasizes their general nature.
We don't remember things precisely, in detail, he says, but rather in only general patterns (adumbrations rather than precise images). This, he suggests, is because of the basic patterning algorithm of the neuronal group operations in the cortex.

When information flows in, he says, various neurons in the affected groups fire in very fine detail, much as our taste buds operate in the tongue, with different nerves for the different tastes, and then pass the captured information up the line to combine further upstream via the brain's more comprehensive processes. In the vision parts of the cortex, for instance, Hawkins notes that some cortical cells at the input end of the relevant cellular columns will fire in response to vertical lines, others to horizontals or diagonals, while others nearby presumably pick up color information, etc. The various firings pass up the line in increasingly broad (and more generalized) combinations, eventually losing much of the detail but generating patterns driven by the lower-level details received.

At the highest level of the cortex, Hawkins reasons, we have only the broadest, most general pictures, combining the increasingly broad and more general patterns passed up from below with related general patterns from other areas (say, visual patterns with touch patterns and sound patterns, etc.) to give us still larger patterns via associative linkage. When new inputs come in (as they are constantly doing), the passage of the information up the line encounters the stored general patterns higher up, which respond by sending signals down the same routes (and also down our motor routes if and when actions are called for).

The ability of the incoming inputs to match stored generic patterns higher up (when the information coming down the line matches the information heading up) is successful prediction. When there is no match, prediction fails and new general patterns form at the higher end of the cortical columns to replace the previous patterns. Thus memory in us is seen as an ongoing adjusting process, with repetitive matches producing stronger and stronger traces of previously stored patterns.

Because patterning happens at every level, a kind of pyramid of patterns is seen, from the lowest level in the cortex to the highest. At all levels, associative mechanisms are utilized and, at the highest levels, these connect and combine multiple specialized patterns into still larger overarching representational patterns. The capacity to retain invariant representations at all levels, until adjustments are made, gives us the invariant representational capability that forms the basis of human memory and underlies prediction, which, he thinks, is what we mean by "intelligence" (i.e., the dynamic process of matching old patterns to new inputs, where the more successful the matching, the more "intelligent" we deem the operations performed).

So the cortex, on this view, is a "memory machine" (as Hawkins puts it), using a patterning and matching mechanism to constantly fit the stored representations held in the cortex to the world. And intelligence is seen as the outcome of this massive process that is constantly going on in our brains, i.e., the ability to quickly adjust to incoming information and make successful predictions about it.
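To make the match-or-revise cycle concrete, here is a loose sketch under invented assumptions (the feature names, the bucketing rule, and the single stored summary are all illustrative, not the book's mechanism): a higher level keeps only a coarse "invariant" summary of what arrives from below, counts a match as a successful prediction, and replaces the summary when prediction fails.

```python
# Loose illustration of "invariant representations" as coarse summaries:
# fine detail is discarded on the way up, and the stored summary is either
# confirmed (successful prediction) or revised (prediction failure).

def generalize(features):
    """Discard fine detail: bucket the edge angle, keep the rest as-is."""
    orientation = "near-horizontal" if features["edge_angle"] < 45 else "near-vertical"
    return (orientation, features["color"], features["shape"])

class HigherArea:
    def __init__(self):
        self.invariant = None   # the currently stored coarse pattern

    def receive(self, features):
        summary = generalize(features)
        if summary == self.invariant:
            return "predicted"        # flow up matches the stored pattern
        self.invariant = summary      # mismatch: revise the stored pattern
        return "revised"

area = HigherArea()
print(area.receive({"edge_angle": 30, "color": "skin", "shape": "oval"}))     # revised (first exposure)
print(area.receive({"edge_angle": 32, "color": "skin", "shape": "oval"}))     # predicted (detail differs, summary matches)
print(area.receive({"edge_angle": 85, "color": "green", "shape": "square"}))  # revised (new pattern)
```

The second input differs in fine detail (a 32-degree edge instead of 30) but maps to the same coarse summary, so it counts as predicted; the third input breaks the pattern and forces a revision.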
It's this increasingly complex and generalizing capacity of cortexes, he argues, that gives us the ability to construct and use massively complex pictures of the world around us (the source of our sensory inputs)*.

Hawkins thinks this is a whole different way of conceiving of intelligent machines, replacing the notion prevalent in mainstream AI that the way to build machine intelligence is to construct massive systems of complex algorithms to perform intelligent functions typical of human capability. Instead of that, he proposes, we need to concentrate on building chips that are hardwired to work like cortical neurons, picking up, storing, and matching/adjusting a constant inflow of sensory information, and that can then be linked in a cortex-like architecture matching the cortical arrangements found in human brains. Such machines, he proposes, will learn about their world in a way that is analogous to how we do it: build pictures based on sensory information received, recognize patterns and connections, and think outside the more confining algorithm-intensive computational box.

Hawkins notes that we don't have to give such machines the kinds of sensory information available to humans, and suggests that there is a whole range of different kinds of sensory inputs that might make more sense for such machines, depending on what complex operations they are built to perform (which may include security monitoring, weather prediction, automobile control, or work in areas outside ordinary human safety zones, say in outer space, in high-radiation areas, or at great depths on the ocean floor). Nor does he think we have to worry about such machine intelligences supplanting us (a la The Matrix), since there is no reason, he argues, that we would have to give such machines drives or feelings, or even a sense of self such as we have, any of which might make them competitors to humans in our own environment. (Of course, it bears noting that we don't really have any idea of how brains produce drives and selves, per se, so it's at least a moot question whether we can simply, as Hawkins suggests, resolve not to provide these to such machines. After all, what if the synthetic cortical array he envisions turns out to have some or all of the capabilities Hawkins now thinks are seated beyond the cortex in human brains? In such a case, mere resolve not to give such capabilities to the proposed cortical-array machines might not be enough!)

One of the main reasons Hawkins argues for a simple hardwired algorithm configured in a cortex-like architecture, versus a massively computational AI application (as envisioned in many AI circles), is that he believes even the most powerful computers today, with far faster processing than any human brain, cannot hope to keep up with this kind of cortical architecture. He comes to this conclusion because he believes too many steps would be involved in programming intelligence comparable to what humans have, requiring a computational platform of vast, likely unwieldy size and detailed programming that would prove too monumental to undertake and maintain error-free. Nature, he argues, chose a simpler, more elegant and, in the end, superior way: a simple patterning/predicting algorithm.

In many ways Hawkins is much better than Gerald Edelman in dealing with the brain, since Edelman gets lost in complexities, vagueness, and what look like linguistic confusions in trying to describe brain processes or argue against the AI thesis.
Hawkins, though he limits his scope to intelligence rather than the full range of consciousness features, gives us a much more detailed and structured picture of how the mechanism under consideration might actually work. In the end he gives us a picture best understood as arrays of firing cells (think flashing lights) that constantly do what they do in response to incoming and outgoing signal flows, with the incoming reflecting the array of sensory inputs we get from the world outside and the outgoing reflecting the stored general patterns that serve as our world "pictures" (not unlike Plato's forms, as he suggests, albeit without the Platonistic mysticism), which are built up by the constant inflow.

Thus he envisions a constant upward and downward flow of signals in the cortical system, which is not only dynamic, based on the interplay of the two directions of signal flow, but also reflective of the facts in the world beyond the brain, through the compound construction of invariant representations (occurring at every level of cortical activity). To the extent that the invariant representations he describes successfully match incoming signals, they are predicting effectively and the organism depending on them is more likely to succeed in its environment. To the extent that they fail to generate effective predictions, the organism depending on them suffers.

A key weakness of Hawkins' explanation lies in his failure to show either exactly how the pattern matching and adjusting of the neuronal group hierarchies become the world of which we are consciously aware, in all its rich detail (how mere physical inputs become mind, the components of our mental lives), or how the cortex integrates the many inputs of the rest of the brain. As John Searle (Minds, Brains and Science (1984 Reith Lectures) and Mind, Language, and Society: Philosophy in the Real World) has noted, our idea of intelligence is very much intertwined with our idea of being aware, being a subject, having experience of the inputs we receive, etc. If we understand something, it's not just that we can produce effective responses to the stimuli received but that we are aware of the meanings of what we're doing, of what is going on, etc.

Hawkins' "intelligence" looks to be a very much truncated form of this, albeit deliberately so, because he wants to argue for intelligent machines that will be "smarter" than computers but not quite smart enough to be a threat to us. Still, despite the fact that he has offered an intriguing possibility, which may well be an important step forward in the process of understanding minds and brains and of building real artificial intelligence, one can't escape the feeling that he has still missed something along the way by distancing himself from the question of what it is to be aware, to understand what one is doing when one is doing it.

SWM

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* One of the critical differences between us and mammals lower down the developmental scale, he suggests, is the relative size of our cortexes. Many mammals with smaller brains just have smaller cortexes and, thus, fewer cells there, while some mammals, e.g., dolphins, actually have larger brains but less dense cortexes (three layers vs. our six). Thus, says Hawkins, the intelligence we have reflects a greater capacity to form representations (covering more inputs, including past and present, and a greater capacity for abstraction).