
Deep Fakes and the Infocalypse: What You Urgently Need To Know (2020 edition)

by Nina Schick (Author)

Members: 40 · Reviews: 2 · Popularity: 621,498 · Average rating: 3.3 · Conversations: none
"In a world of deepfakes, it will soon be impossible to tell what is real and what isn't. As advances in artificial intelligence, video creation, and online trolling continue, deepfakes pose not only a real threat to democracy -- they threaten to take voter manipulation to unprecedented new heights. This crisis of misinformation which we now face has since been dubbed the "Infocalypse." In DEEPFAKES, investigative journalist Nina Schick uses her expertise from working in the field to reveal shocking examples of deepfakery and explain the dangerous political consequences of the Infocalypse, both in terms of national security and what it means for public trust in politics. This all-too-timely book also unveils what this all means for us as individuals, how deepfakes will be used to intimidate and to silence, for revenge and fraud, and just how truly unprepared governments and tech companies are for what's coming."--Amazon.com.… (més)
Member: grahamhay
Title: Deep Fakes and the Infocalypse: What You Urgently Need To Know
Authors: Nina Schick (Author)
Info: Monoray (2020), 224 pages
Collections: Your library
Rating:
Tags: AKL-LIB

Work information

Deep Fakes and the Infocalypse: What You Urgently Need To Know by Nina Schick


There are no discussions in Talk about this work.

Showing all 2 reviews
I agree with some of the other reviews I've read here. I appreciated the explanation of the threat posed by cheap fakes and deep fakes, but the book felt a little rushed, and I noticed a handful of mistakes that better editing could have caught.
  mindatlarge | Jun 28, 2022 |
Deepfakes are not robots, nor A.I. — that is a completely different matter. Also, it seems there will soon be laws around creating deepfakes, probably with serious penalties for those who misuse them. Politicians will certainly want to see them banned.

Recognising shopping trends is not artificial intelligence; it is marketing, and that data is manipulated by businesses. If you smoke, and there are photos of you doing it, that may well change your medical insurance rates — but only if you have given your insurer or its app permission to use your social media profile to obtain those details.

In the end, when people use technology they should know what they are signing up to. There is no free lunch: you are registering to use a free service, the tech firms want to sell your data, and most people agree to hand it over because they really want to share their selfies with people they barely know and call 'friends'. Marketing has always worked this way. When my friends used to buy CDs from the Britannia music club, the company would often send them adverts in the mail for the same kind of music. They went straight in the bin. And don't forget junk mail, physical and digital — that has existed for years.

A software system ultimately won't make the decision about changing your insurance, because it just collects and tracks data, as such systems have been doing for years. Your insurers will decide whether the data they collected warrants changing your rate. People are complex beings. They learn, they change, and they will dupe these systems. If a lot of people start seeing their insurance change because of pictures of them smoking, they will stop taking smoking pictures, or they might game the system by posting pictures of themselves eating fruit and veg.

The weakness of current A.I. is that it is just matching things — it is recognition-based — and it would be easy for people to find ways to trick the system. That is much harder to do with people, because people can smell bullshit.

Companies will lose a lot of money if they put current "AI" in charge of tasks that require active thinking. We need doctors because they form opinions based on experience. You could sit with a computer and it could analyse your blood pressure and give you a list of findings, or it could say 'some risk of heart attack'. If that makes you panic, the computer won't be able to reassure you, but a doctor with lots of experience might say, "the risk is very low and it is nothing to worry about because of a, b, and c."

A robot can be programmed to give people lots of options for medical treatment, but a real doctor can draw on experience to try approaches that are less rigid than a computer system. For example, a patient asks the bot: "Doctor, when I take this medicine, I start to see unicorns in my dreams and I get a craving for butterscotch ice cream. What should I do?"

Bot: "no results found."

The patient repeats the question and gets the same response. In the end, the patient would have to see a doctor to get that question answered, so the bot wouldn't be useful or workable on its own and would never be used exclusively. That's where "AI" like Siri is at the moment: Siri is just glamorised voice recognition with a link to search engines.

The article you are commenting on discusses how people lack the knowledge to really understand what they are talking about when they discuss A.I., and how they don't stop to consider whether the technology being developed and promoted as A.I. or bots really is that. You are falling into the same trap. Deepfakes and marketing data collection are not A.I.

We now have lots of elaborate forms of data entry that let us use our voices to control devices instead of typing in information, but effectively, when you tell Alexa to 'play Dua Lipa', it is using the same search algorithms it would have used if you had typed the same query.

The media really, really wants to discuss the 'worrying ethical' side of building robots, because it is so appealing to make humans look like slave owners and morally reprehensible. People forget that we are nowhere near that point, but it doesn't matter: the tech companies and media keep misidentifying things as 'A.I.', and ignorant people start to imagine 'thinking machines' that can brainwash people.
  antao | Nov 14, 2020 |
First words
There is a viral video of President Obama on YouTube, with almost 7.5 million views.
Quotations
To paraphrase the German-American philosopher Hannah Arendt, it doesn't matter that leaders lie if people think that everything is already a lie anyway. (p. 93)


"In a world of deepfakes, it will soon be impossible to tell what is real and what isn't. As advances in artificial intelligence, video creation, and online trolling continue, deepfakes pose not only a real threat to democracy -- they threaten to take voter manipulation to unprecedented new heights. This crisis of misinformation which we now face has since been dubbed the "Infocalypse." In DEEPFAKES, investigative journalist Nina Schick uses her expertise from working in the field to reveal shocking examples of deepfakery and explain the dangerous political consequences of the Infocalypse, both in terms of national security and what it means for public trust in politics. This all-too-timely book also unveils what this all means for us as individuals, how deepfakes will be used to intimidate and to silence, for revenge and fraud, and just how truly unprepared governments and tech companies are for what's coming."--Amazon.com.

Rating

Average: 3.3 — 2.5 stars: 1 · 3 stars: 2 · 4 stars: 2
