
Works by Steven Shwartz




Reviews

On the one hand, you can watch a multitude of documentaries and read books and articles on the rise of Artificial Intelligence (AI), robots replacing our jobs, and computers getting so intelligent that the human brain will lose its unique capabilities. On the other hand, we're dealing with flaws in autonomous cars, chatbots, and automatic translation of difficult texts. Steven Shwartz, a veteran when it comes to academic research and entrepreneurship in AI, presents a reality check in Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity.

He digs into the hype cycles around AI and the current state of facial recognition, self-driving cars, industrial robots, machine translation, chatbots, and deep-fake video, audio, and messages. Despite the technological achievements, the prediction that it will take decades, centuries, if not millennia for AI to reach parity with the human brain still holds. We don't have to fear Terminator-like scenarios or robots taking over all our jobs. AI is overrated in the short term, while abuse of the technology (think of armed drones, surveillance cameras coupled with algorithms that determine who is trusted and who is not, and coded bias and explicit racism built into computer code) is so serious that you should be glad that legislators and public opinion do matter.

Despite having dropped off the radar for a couple of periods, AI is here to stay. We had better learn what is fact and what is fiction, and what to accept and what to reject, regarding AI.
hjvanderklis | Apr 25, 2021
I received a review copy of this through BookSirens - and yes, this review is voluntary. I've read Kurzweil and his predictions and was interested in Shwartz's perspective (Shwartz has opinions on Ray's take on this subject).

BLUF - bottom line up front: Shwartz contends that the evil robots and killer computers of the title, and the other nefarious fictions from irresponsible journalism, sensational news (or "news") outlets, science fiction authors, and movie makers, are not only not possible now, they'll likely never be possible. Shwartz has the pedigree, and he explains his points well and simply here, and gives a bunch of resources for the reader to follow up on (though one random link I checked didn't exist, all the others I did check worked).

Self-driving cars, real language interaction, neural networks, artificial intelligence... artificial general intelligence, privacy issues, different periods of hype, fears of robots and AI taking jobs - Shwartz covers a lot in this compact book, and yet, he says: "If you are among the more technically inclined, you may prefer a more in-depth treatment of some of the topics in this book. If so, you can find it on my website, http://www.aiperspectives.com. The site provides hundreds of pages of technical detail in a dense, textbook-like format. It is not as easy to understand as this book, but I have worked hard to make it more accessible than many AI textbooks by leaving out the advanced mathematics found in them." Misconceptions, misunderstandings, some outright fabricated fear-mongering narratives, political spin doctoring... Shwartz dismantles them all. "The biggest technology driver of job loss today is not AI. Conventional software that uses explicit coding of instructions and rules, such as e-commerce, rideshare software, and robotics, destroys far more jobs than AI systems. E-commerce is devastating brick-and-mortar stores but uses conventional software, not AI."

He seems at times to have an ax to grind, but I get it... I saw a rather stupid false comparison of how Romans could build roads that lasted "forever," but when engineers got involved, we got potholes. I'm an engineer, and that nonsense is just frustrating. Of course, the twits spreading that graphic don't consider that those Roman roads never had to handle tons of semis and millions of vehicles traveling on them. If you spend your life and career in something, as Shwartz has with AI, the sky-is-falling (Skynet?) stuff might tend to wear you raw.

He does call out the problems of abuse from corporate, retail, and government, but while his mantra of the examples not being real AI is true, his answers of more regulation and legislation to curb it are unrealistic:
Facial recognition technology gives governments the ability to completely take away our privacy, and it is prone to discrimination. If we want to avoid becoming a surveillance state, where anyone can be arrested for being in the wrong place or for being the “wrong” color, we need laws that rein in how governments use AI-based surveillance tools. However, the tools themselves are not a threat.
The "guns don't kill people, people do" argument has its own obvious and simple solution: keep the guns out of the hands of the people who kill. But that's just as overly simplistic and impossible.

The IBM Watsons, Amazon Alexas, Apple Siris, Google Assistants, and whatever else comes along can't think - millions of phoneme parsings and if-then algorithms giving seemingly human responses do not make intelligence. Shwartz said:
As a graduate student at Johns Hopkins, I coauthored several articles with former Harvard professor Stephen Kosslyn. He was perhaps the leading thinker on how people use mental imagery in their thought processes. For example, if you ask someone, “What shape are a German Shepherd’s ears?” most people will report that they conjure up an image of a German Shepherd from memory, picture the head on the dog, and finally see that the ears are pointy.
He then said,
Observations like these led to a debate about whether people have something like pictures in their heads or whether what they have is a set of facts, and the analysis of these facts makes them feel like they see pictures in their heads.
Shwartz was in the pictures camp. I read another book recently (Brainscapes: The Warped, Wondrous Maps Written in Your Brain—And How They Guide You by Rebecca Schwarzlose) that describes how we make maps for pretty much everything, so the non-expert me is also on the pictures side. Computers might do the same, but even brute-force terabyte searching and petaflops still can't extrapolate and interpolate anywhere near what a toddler's brain can do easily. Shwartz says:
One misunderstanding about unsupervised learning is that these algorithms have reasoning ability. For example, a Forbes magazine article said that unsupervised learning “goes into the problem blind—with only its faultless logical operations to guide it.” This statement makes it sound as if unsupervised learning algorithms use reasoning to explore unstructured data. Nothing could be further from the truth. Unsupervised learning algorithms are conventionally programmed and follow an exact step-by-step sequence of operations.
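The quoted point - that unsupervised learning is a conventionally programmed, exact step-by-step procedure rather than anything like reasoning - can be made concrete with a toy k-means clustering sketch. This is my own illustrative example, not code from the book:

```python
# Minimal k-means on 1-D data: an "unsupervised learning" algorithm that
# is nothing but a fixed, deterministic sequence of steps.
def kmeans(points, k, iterations=10):
    centroids = points[:k]  # deterministic seed: the first k points
    for _ in range(iterations):
        # Step 1: assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Step 2: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups on a number line; the algorithm "finds" them by
# repeated arithmetic, not by any logical insight.
print(sorted(kmeans([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], k=2)))
```

Every run follows the same assign-then-average loop; there is no step anywhere in which the program reasons about the data.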


There is a lot of sense here. I made a bunch of notes, but other than the ones above, I'll just share this one more, one of many pretty savvy observations: "Imagine what insurance companies, retailers, and law enforcement could do with the data collected by self-driving cars, which will know everywhere you drive. Even worse, they have cameras that record what you do inside the vehicle." Add in the browser searches, credit card purchases... Be careful what we ask for, right?
Razinha | Apr 10, 2021
Science fiction authors have been warning us for decades of the dangers technology poses. Recently, advances in artificial intelligence and robotics seem to suggest that they have been right all along. Computers are now driving cars and piloting automated weapons systems. They are beating grandmasters at chess and Go, recognising faces and writing articles. They are even in our homes and on our phones, answering our queries, sending texts and setting alarms and reminders, and generally helping us to organise our lives. Surely then it won’t be long before they can do everything – and what then is to stop them from taking over?

But, as Steven Shwartz argues in this fine book, the dangers of AI are vastly overstated. Despite the impressive advances, there are numerous reasons why we should not worry about “evil robots” or “killer computers”. Chief among these is that the recent developments in AI are all “narrow”. This means that, astounding as it is that IBM’s Watson DeepQA program was able to beat two top quizzing champions on the TV programme Jeopardy!, that’s about all it can do. Its expertise is “narrow”, confined to the type of pattern matching that allowed it to trounce its human opponents at this specific activity. The same program could not drive a car. But not only is Watson DeepQA limited to its specific field (language processing), it is limited within that field. For instance, it cannot answer simple comprehension questions. The same may be said of facial recognition systems, which can pick out an individual from among thousands of others (mostly…) but cannot tell a cat from a dog. And if such systems lack even full competence within their specific field, then what hope is there that they will develop the sort of artificial “general” intelligence (AGI) that would allow them to function at a human level across a broad range of tasks? It is AGI, Shwartz argues, that the sci-fi authors worried about, but it is something we are nowhere near to developing, and which we have good reasons for thinking may in fact never arrive.

As a former academic specialising in AI, and the founder of several companies developing AI-driven applications, Shwartz is well placed to make such a claim. He walks the reader through each of the new developments – self-driving cars, natural language processing, facial recognition, and so on – explaining how each works, noting both its achievements and limitations. All, he argues, lack the sort of “common sense” possessed by the average young child, and which would be a necessary requirement for AGI. This common sense is something we often take for granted. We’re not even talking, here, about the sort of general wisdom often embodied in proverbs – “a stitch in time saves nine”, “many hands make light work” – but a much more basic grasp of the world and the way it works. A young child will at some point learn “object permanence” and “intuitive physics”, which allows it to hold presumptions and make predictions: when its parent hides a ball, that the ball still exists; when the parent drops the ball, that it will bounce. It is this vast stock of basic knowledge that we pick up almost unthinkingly, but that with a computer would have to be deliberately programmed in or in some other way acquired – and it is this difficulty that has so far evaded AI researchers. GPT-2, the much vaunted AI program considered “too dangerous to release”, can produce articles and answer natural language questions, but will nonetheless flounder when asked to apply the most basic reasoning and inference to the texts it deals with. This is because it has not been designed to reason or infer, but merely to compile and parrot out phrases based on how often such word combinations are commonly found together. In short, it has no reason, nor any knowledge, and certainly no common sense.
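The "parroting" mechanism described above can be illustrated with a toy bigram language model: it generates text purely from how often word pairs co-occurred in its training data, with no grasp of meaning. This is a minimal sketch of the general idea, not GPT-2 itself:

```python
import random
from collections import defaultdict

# Toy bigram "language model": generation is driven entirely by
# co-occurrence counts; the program has no idea what the words mean.
def train_bigrams(text):
    counts = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a].append(b)  # remember every word observed after `a`
    return counts

def generate(counts, start, length, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Sampling from the list reproduces the observed frequencies.
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the dog chased the cat and the cat chased the mouse"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

The output always looks locally plausible, because every adjacent word pair was seen in the training text - yet nothing in the program reasons about dogs, cats, or chasing.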

This is not to say that there are no dangers associated with AI, and Shwartz lays these out, too. For example, we need to better understand the potential biases inherent in the data we use to train AI when using it to suggest criminal sentences or assess loan applications (both of which are current practices). But the real dangers of AI are not that it will one day out-perform humans in all fields, let alone become sentient; they are that humans will come to rely on AI without fully understanding the processes it embodies, or grant it autonomy that may lead to unforeseen harms. Such things are not inevitable, however, and it is ultimately down to people and governments to regulate and limit the applications of AI.

However, "Evil Robots, Killer Computers, and Other Myths" does more than debunk prevalent myths. It is a clear and concise account of recent developments in artificial intelligence, and as such serves as an excellent lay primer to the field. Given the complexity of the subject matter, technical explanations cannot be completely avoided, but these are conveyed with a minimum of jargon, and Shwartz does an excellent job of introducing the central concepts with admirable clarity, making the book an enjoyable and informative read. Highly recommended.

Gareth Southwell is a philosopher, writer and illustrator.
Gareth.Southwell | Apr 10, 2021

Statistics

Works
1
Members
17
Popularity
#654,391
Rating
4.0
Reviews
3
ISBNs
3