L**R
Can Machines Become Humanly Intelligent and How Will We Know?
Broadly educated in poetry and computers and deeply immersed in philosophy, Brian Christian writes about his becoming The Most Human Human. The depth and breadth of his exposition, the importance of the idea -- how will we know if machines become humanly intelligent? -- and the topic of a Turing Test contest make for a wonderful read. His writing is charming, elegant, guaranteed to inform, and sure to intrigue.

Mr. Christian's central theme is his participation in a Turing Test contest created by Hugh Loebner and Robert Epstein ([...]), an idea originated by the British computer genius Alan Turing. Turing proposed that a computer is intelligent when a person (a "judge") typing and receiving notes both from another person and from a computer cannot tell which correspondent is the human. Each year since 1991, the Loebner Prize has been awarded to the computer program that best fools the judges. A corresponding prize goes to the most human human: the person, among several, whom the judges rate as most certainly human. Mr. Christian won this award in 2009.

Mr. Christian, more often than not, subordinates his description of the contest itself to the subtitle of his book -- "What It Means to Be Alive." In short, interrelated sections that show his intense preparation for the Loebner competition, he relates computer contexts to our daily lives. I particularly liked his treatment of the concept of the "book" as applied to Garry Kasparov's chess match with IBM's Deep Blue computer. Chess, as played by man and machine, includes openings and endings that can be "memorized" -- this is the "book," the previously established series of chess moves that humans and machines store in their memories. Thus, oftentimes, it is only in the middle game that chess skill truly comes into play. Mr. Christian wonderfully shows us how the "book" concept is of general human importance, concluding, "And the book, for me, becomes a metaphor for the whole of life." He similarly wows readers with his discussion of data compression.

No less interesting are his other tales and insights. For example, he retells the story of Professor Kevin Warwick of the University of Reading who, in the late 1990s and early 2000s, had various electronic devices implanted in his arm. Among these devices, the professor used active ultrasonic sensors to mimic sonar as a sixth sense -- he could "feel" objects without touching them. With another implanted device, Warwick remotely communicated with his wife, who also had electronic implants: this was the first purely electronic communication conducted between two human nervous systems. Beyond these few examples, Mr. Christian enlightens us as to how computer programs have trouble with "barge-in" conversations, why "apricot" and "precocious" have the same root, and more.

Although Mr. Christian doesn't explicitly draw the conclusion, one can infer from his writing that Alan Turing was wrong. The Turing Test seems unable to provide more than a superficial evaluation of intelligence. A machine with no "life," body, history, or actual experiences seems quite unable ever to convince us that it possesses a true intellect by winning this sort of contest. Still, if the Turing Test is ultimately a poor barometer of computer capability, the greater question remains: can machines ever become humanly intelligent? Mr. Christian barely offers his opinion on this matter, only writing near the very end that "Some people imagine the future as a kind of heaven" [e.g. Ray Kurzweil]; "Others ... as a kind of hell" [e.g. The Matrix]. "I'm no futurist, but I ... think of ... AI as a kind of purgatory: the place where the flawed, good-hearted go to be purified -- and tested -- and to come out better on the other side." I, and most probably other readers, would have liked more such commentary, to know what Mr. Christian thinks about humankind's future in the face of rising machine intelligence. This is an under-appreciated concern that deserves our awareness.

Interestingly, the 2009 Loebner Prize competition was a perfect opportunity to focus our attention. The other winner that year -- the person who won the most human computer award -- was David Levy, who also wrote Love + Sex with Robots, which I use in my Queens College, CUNY sociology course Posthuman Society. Levy argues that by 2050, humans will be conversing with, forming social relations with, having sex with, and perhaps even marrying autonomous robots. Surely, if this happens -- and Levy's strong credentials make him a credible prognosticator -- we will be forced to conclude that machines have become intelligent, no matter how strange or imperfect their programming may seem. And with this, humankind's future will be forever changed -- I don't think for the better -- even if we survive the experience.

Of course, Levy could be wrong. Producing the advanced robots he envisions may require too enormous an effort, if it is even possible. But I don't think Levy is wrong. The New York Times (8/16/11), for example, reports that Stanford University will offer a free online course in AI this fall, taught by two leading experts. More than 58,000 people worldwide have already registered for the course, which was advertised only virally. Why such great interest? Because people are curious, in part, but also because NASA needs intelligent robots to explore space. Our military has deployed intelligent machines to fight in Afghanistan. Business wants smart robots to manufacture cheaper and better goods. Google is spearheading the production of robotically driven cars. Japan seeks intelligent robots to care for its aging population. And sharing love and sex with machines is already well underway. Smart robots are going to solve many human problems but also create others, with dramatic consequences; it is a future that I believe is inevitable.

That said, my comments should in no way detract from Brian Christian's marvelous book. He is a gifted, informative writer with a keen eye for the human condition. I look forward with great anticipation to curling up with his next provocative volume.
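To make the chess "book" described in the review above a little more concrete: an opening book is, at bottom, a lookup table from the moves played so far to a previously analysed reply, and real calculation only begins once the table runs out. The toy Python sketch below is my own illustration, not the book's; the positions and replies are arbitrary examples.

```python
# Toy illustration of a chess "book": memorized replies to known move sequences.
# (Illustrative only; real opening books index millions of positions, not three.)
opening_book = {
    ("e4",): "e5",
    ("e4", "e5"): "Nf3",
    ("e4", "e5", "Nf3"): "Nc6",
}

def book_move(moves_so_far):
    """Return a memorized reply, or None once the game has gone 'out of book'."""
    return opening_book.get(tuple(moves_so_far))

print(book_move(["e4"]))   # 'e5'  -- still playing from memory
print(book_move(["d4"]))   # None  -- out of book: genuine skill takes over
```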
M**I
The problem with the Turing test
[This is an excerpt from a full review to appear in Skeptical Inquirer] Mathematician Alan Turing is famous for a number of things, but probably the one that comes most easily to mind is the famous Turing test, a simple procedure for allegedly determining whether a computer is thinking like a human being -- or at least, whether a computer can effectively fool us into such a conclusion. Turing predicted that by the year 2000 computers would be able to trick human judges into thinking they were talking to a fellow human instead of a machine at least 30% of the time, if the conversation lasted about five minutes. This has always seemed to me to put the bar so low as to make the entire enterprise spectacularly uninteresting.

Sure enough, reading Brian Christian's The Most Human Human confirmed my impression that the so-called Turing test is one of the most hyped ideas in both artificial intelligence and philosophy of mind. The issue, as Christian makes abundantly clear throughout the book, is not whether programmers can devise a clever enough trick that can fool some people some of the time (and for a short period at that), but whether it is possible, or even whether it makes sense to try, to equip computers with something akin to human intelligence and thought (please notice that I do not subscribe to non-physicalist views of human consciousness).

Christian seems convinced that the key to artificial intelligence is to be found in the implications of Shannon's information theory, which deals, among other things, with the compression of semantic content. As Christian puts it at the end of the book: "If a computer could ... compress English optimally, it'd know enough about the language that it would know the language. We'd have to consider it intelligent -- in the human sense of the word" (emphasis in the original). Well, in some sense of knowing and intelligence this may be true. But would we have succeeded in creating an artificial intelligence substantially analogous to the human variety? Would that computer be conscious of knowing the English language? There are serious reasons to doubt it. More likely, we would have created something different, and we might need to broaden our very understanding of what "thinking" means.
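A rough way to see the compression-understanding link the reviewer questions: the more regularity a model captures in a text, the fewer bits it needs to encode it. The sketch below is my own example, using an off-the-shelf compressor (zlib) as a crude stand-in for a "model of English"; an optimal compressor of English, in Christian's sense, would be a perfect predictor of it.

```python
# Crude demonstration: predictable English compresses far better than noise.
# zlib is only a stand-in here for a genuine statistical model of the language.
import random
import string
import zlib

english = ("the quick brown fox jumps over the lazy dog " * 50).encode("utf-8")
noise = "".join(
    random.choice(string.ascii_lowercase + " ") for _ in range(len(english))
).encode("utf-8")

for label, data in (("english", english), ("random", noise)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed to {ratio:.1%} of original size")
```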
G**.
One of the most enjoyable books I have read for years
This is a great read. It is extremely well written. I bought it because I am curious about AI and don’t really know anything about it. In writing about what makes a human human, the book is also very informative about the potential and the limitations of AI. I would recommend it to anyone looking for a book that is incredibly easy to read, contains something thought-provoking on most pages, and leaves you fulfilled at the end. You can’t say that about many non-fiction books.
A**A
One of the craziest things humans do...
One of the craziest things humans do is... chatting with a computer and trying to figure out if it's human or not. Wow! Way to go, humans :)) But I really enjoyed Brian Christian's sense of humor and, most of all, his rich English language. Until I opened this book, I'd been thinking that even though I was not a native speaker of English, I had a decent English vocabulary. My god, this writer knows how to surprise in a linguistic sense! No wonder: he has degrees in computer science and philosophy as well as poetry. Let me tell you something: if you are preparing for some crazy test like the GRE, this is the book to read. Besides, I'm grateful for B. Christian's generous sharing and his intriguing choice of books for further reading.

"The Most Human Human" is an amazing book. I really enjoyed reading it, particularly the curious anecdotes from the Turing competitions and the preparations for them. On the downside, I'd say the author's preoccupation with us trying to stay human and not resemble computers is a bit perplexing, since stating the rules of how to achieve this objective is in a way an attempt to systematize, simplify, and codify human behavior (or language, in this case)... I think there is nothing wrong with people resembling computers, even though it might look degrading in the case of phone operators. This is a typical case of Pygmalion's wish: we always want what we create to be part of us...
R**G
Brilliant. Profound. Poetic.
I liked everything about this book. It took me on a surprising, insightful, inspiring journey through the lands of artificial intelligence, philosophy, and what we can learn about what it means to be human. I recommend it to anyone interested in computing, philosophy, or the arts of writing and conversation. I will be coming back to this book over and over again, I have no doubt.