The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

  • Downloads: 5888
  • Type: EPUB+TXT+PDF+MOBI
  • Create Date: 2021-04-24 09:30:58
  • Update Date: 2025-09-07
  • Status: finished
  • Author: Erik J. Larson
  • ASIN: B08TV31WJ3
  • Environment: PC/Android/iPhone/iPad/Kindle

Summary

"If you want to know about AI, read this book。。。it shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence。"--Peter Thiel

A cutting-edge AI researcher and tech entrepreneur debunks the fantasy that superintelligence is just a few clicks away--and argues that this myth is not just wrong, it's actively blocking innovation and distorting our ability to make the crucial next leap.

Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren't really on the path to developing intelligent machines. In fact, we don't even know where that path might be.

A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don't correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven't a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That's why Alexa can't understand what you are asking, and why AI can only take us so far.

Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know--our own.

Reviews

Ben Chugg

There is a prevailing dogma that achieving "artificial general intelligence" will require nothing more than bigger and better machine learning models. Add more layers, add more data, create better optimization algorithms, and voila: a system as general-purpose as humans but infinitely superior in processing speed. Nobody knows exactly how this jump from narrow AI (good at a particular, very well-defined task) to general AI will happen, but that hasn't stopped many from building careers based on erroneous predictions, or prophesying that such a development spells the doom of the human race. The AI space is dominated by vague arguments and absolute certainty in the conclusions.

Onto the scene steps Erik Larson, an engineer who understands both how these systems work and their philosophical assumptions. Larson points out that all our machine learning models are built on induction: inferring general patterns from specific observations. We feed an algorithm 10,000 labelled pictures and it infers which relationships among the pixels are most likely to predict "cat". Some models are faster than others, more clever in their pattern recognition, and so on, but at bottom they're all doing the same thing: correlating datasets.

We know of only one system capable of universal intelligence: human brains. And humans don't learn by induction. We don't infer the general from the specific. Instead, we guess the general and use the specifics to refute our guesses. We use our creativity to conjecture aspects of the world (space-time is curved, Ryan is lying, my shoes are in my backpack), and use empirical observations to disabuse us of those conjectures that are false. This is why humans are capable of developing general theories of the world. Induction implies that you can only know what you see (a philosophy called "empiricism"), but that's false: we've never seen the inside of a star, yet we develop theories that explain the phenomena. Charles Sanders Peirce called this method of guessing and checking "abduction," and we have no good theory of abduction. To get one, we would have to better understand human creativity, which plays a central role in knowledge creation. In other words, we need a philosophical and scientific revolution before we can possibly build true artificial intelligence. As long as we keep relying on induction, machines will be forever constrained by the data they are fed.

Larson argues that the philosophical confusion over induction and the current focus on "big data" is infecting other areas of science. Many neuroscience departments have forgotten the role that theories play in advancing our knowledge, and are hoping that a true understanding of the human brain will emerge from simply mapping it more accurately. But this is hopeless: even after you have an accurate map, what will you look for? There is no such thing as observation without theory.

At a time when it's in fashion to point out all the biases and "irrationalities" in human thinking, hopefully the book helps remind us of the amazing ability of humans to create general-purpose knowledge. Highly recommended read.
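To make the inductive learning described above concrete, here is a minimal sketch in Python. It is purely illustrative: the book contains no code, and the library choice (scikit-learn), the toy features, and the "cat" labels are all assumptions made for this example. The model infers a rule from labelled examples, which is exactly why it can only answer within the correlations its training data contains.

    # Illustrative sketch only: the book names no library or code.
    # scikit-learn, the toy features, and the "cat" labels are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-ins for image features: "cat" examples cluster near +1,
    # "not cat" examples near -1, in a 4-dimensional feature space.
    cats = rng.normal(loc=1.0, scale=0.3, size=(100, 4))
    not_cats = rng.normal(loc=-1.0, scale=0.3, size=(100, 4))
    X = np.vstack([cats, not_cats])
    y = np.array([1] * 100 + [0] * 100)

    # Induction in miniature: infer a general rule from specific labelled cases.
    model = LogisticRegression().fit(X, y)

    # Inside the training distribution, the learned correlation works well.
    print(model.predict_proba(rng.normal(1.0, 0.3, size=(1, 4))))  # ~[[0.00, 1.00]]

    # Far outside it, the model still answers with near-total confidence,
    # because it has no theory of cats, only a correlation over the data it saw.
    print(model.predict_proba(np.array([[50.0, 50.0, 50.0, 50.0]])))  # ~[[0.00, 1.00]]

Nothing in the fitted model is a conjecture that could be refuted in Peirce's sense: change the training set and the learned "rule" changes with it. That is the sense in which, as the review puts it, such machines are constrained by the data they are fed.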