
Life 3.0: Being Human in the Age of Artificial Intelligence (2017 original; 2018 edition)

by Max Tegmark (Author)

Members: 978 · Reviews: 26 · Popularity: 21,814 · Average rating: 3.9 · Mentions: 9
Business. Science. Technology. Nonfiction. New York Times Best Seller

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology, and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.

How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?

What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues, from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.
Member: tgraettinger
Title: Life 3.0: Being Human in the Age of Artificial Intelligence
Authors: Max Tegmark (Author)
Information: Vintage (2018), Edition: Reprint, 384 pages
Collections: Your library, Currently reading
Rating: **1/2
Tags: None

Work Information

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (2017)


There are no discussions in Talk about this work.

» See also 9 mentions

English (24) · Italian (1) · Swedish (1) · All languages (26)
Showing 1-5 of 26
I've only read this as the Blinkist summary, and my review here is of that summary. I know, from reading Tegmark's book "Our Mathematical Universe", that his books can be incredibly dense with material and very hard to summarise. I'm also aware that he likes to grasp "way out" ideas and run with them. (Maybe he'd like to be the originator of the new paradigm.) But with this book, his objectives seem reasonably clear: to draw attention to AI and its potential for humankind, both for good and for trouble. Here are a few of the snippets that appealed to me:
The first stage of life, Life 1.0, is simply biological. Consider a bacterium. Every aspect of its behavior is coded into its DNA. It's impossible for it to learn or change its behavior over its lifetime. The closest it comes to learning or improvement is evolution, but that takes many generations.
The second stage is cultural, Life 2.0. Humans are included here. Just like the bacterium, our “hardware” or bodies have evolved. But unlike simpler life-forms, we can acquire new knowledge during our lifetimes. Take learning a language. We can adapt and redesign ideas that we might call our “software.” And we make decisions using this knowledge.
The final stage is the theoretical Life 3.0, a form of technological life capable of designing both its hardware and software. Although such life doesn’t yet exist on earth, the emergence of non-biological intelligence in the form of AI technologies may soon change this.
Those who hold opinions about AI can be classified by how they feel about the emerging field’s effect on humanity.
1. First up are the digital utopians.
2. Second, there are the techno-skeptics.
3. Finally, there’s the beneficial AI movement.
Researchers in AI, however, generally reject the notion that intelligence requires a biological substrate. They claim that the capability for memory, computation, learning and intelligence has nothing to do with human flesh and blood, let alone carbon atoms... The author likes to think of intelligence as the "ability to accomplish complex goals"... but human intelligence is uniquely broad: it can encompass skills like language learning and driving vehicles.
Even though artificial general intelligence (AGI) doesn't yet exist, it's clear that intelligence isn't just a biological faculty. Machines can complete complex tasks too... Human brains can store information, but so can floppy drives, CDs, hard drives, SSDs and flash memory cards, even though they're not made of the same material.
Computing involves the transformation of information. So the word "hello" might be transformed into a sequence of zeros and ones... But the rule or pattern which determines this transformation is independent of the hardware that performs it. What's important is the rule or pattern itself... So, if memory, learning, computation and intelligence aren't distinctly human, then what exactly makes us human? As research in AI continues apace, this question is only going to prove harder to answer.
AI is advancing rapidly and will impact human life in the near future. One example is an AI system playing the old computer game Breakout... At first, the AI system did poorly, but it soon learned and eventually developed an intelligent score-maximizing strategy that even the developers hadn't thought of when they played themselves. And in March 2016, the AI system AlphaGo beat Lee Sedol, the world's best Go player... Go is a strategy game that requires intuition and creativity. AI systems are also advancing quickly in the field of natural language.
It's clear that AI will impact all areas of human life in the near future. Algorithmic trading will affect finance; autonomous driving will make transportation safer; smart grids will optimize energy distribution; and AI doctors will change healthcare.
As AI systems can outperform humans in more and more fields, we humans may even become unemployable.
Until now, AI has been applied fairly narrowly, in limited fields like language translation or strategy games... But an intelligence explosion is predicted: a process by which an intelligent machine gains superintelligence, a level of intelligence far above human capability... It would achieve this through rapid learning and recursive self-improvement: an AGI could potentially design an even more intelligent machine, which could design an even more intelligent machine, and so on. This could trigger an intelligence explosion that would allow machines to surpass human intelligence.
Let's say, for example, that humans program a superintelligence that is concerned with the welfare of humankind. From the superintelligence's perspective, this would probably feel like being held in bondage by a bunch of kindergartners far beneath your own intelligence, for their benefit.
Various AI aftermath scenarios are possible, ranging from the comforting to the terrifying: from peaceful human–AI coexistence to AIs taking over, leading to human extinction or imprisonment.
1. The first possible scenario is the benevolent dictator. A single benevolent superintelligence would rule the world, maximizing human happiness.
2. In the same vein, there’s a scenario involving a protector god, where humans would still be in charge of their own fate, but there would be an AI protecting us and caring for us, rather like a nanny.
3. Another scenario is the libertarian utopia. Humans and machines would peacefully coexist. This would be achieved through clearly defined territorial separation. Earth would be divided into three zones. One would be devoid of biological life but full of AI. Another would be human only. There would be a final mixed zone, where humans could become cyborgs by upgrading their bodies with machines.
4. Then there's the conquerors' scenario, which we looked at in the last blink. This would see AIs destroy humankind.
5. Finally, there’s the zookeeper scenario. Here a few humans would be left in zoos for the AIs’ own entertainment, much like we keep endangered panda bears in zoos.
There’s no doubt that we humans are goal-oriented. Think about it: even something as small as successfully pouring coffee into a cup involves completing a goal.
But actually, nature operates the same way. Specifically, it has one ultimate purpose: destruction. Technically, this is known as maximizing entropy, which in layperson's terms means increasing messiness and disorder. When entropy is high, nature is "satisfied."
Let's return to the cup of coffee. Pour a little milk in, then wait a short while. What do you see? Entropy increases. [This is not really a great example, because making the coffee in the first place requires complex beans to be grown, which requires a decrease in entropy... only then can you start the process of grinding the beans and mixing them with hot water and milk that gives the increase in entropy.]
On a bigger scale, the universe is no different. Particle arrangements tend to move toward increased levels of entropy. [But this also raises the question of how stars etc. are formed, with local decreases in entropy.] This goes to show how crucial goals are, and currently AI scientists are grappling with the problem of which goals AI should be set to pursue... After all, today's machines have goals too. Or rather, they can exhibit goal-oriented behaviour. For instance, if a heat-seeking missile is hot on your tail, it's displaying goal-oriented behaviour.
But even if humanity could agree on a few moral principles to guide an intelligent machine’s goals, implementing human-friendly goals would be trickier yet.
1. First of all, we’d have to make an AI learn our goals. This is easier said than done because the AI could easily misunderstand us. For instance, if you told a self-driving car to get you to the airport as fast as possible, you might well arrive covered in vomit while being chased by the police.
2. The next challenge would be for the AI to adopt our goals, meaning that it would agree to pursue them.
3. And finally, the AI would have to retain our goals, meaning that its goals wouldn’t change as it undergoes self-improvement.
Even though nature's goal is maximum entropy, particles nonetheless rearrange themselves into complex organisms. Why? Because a living organism dissipates energy faster and thereby increases entropy. [This explanation is a little too glib, a little too cute, and it certainly doesn't explain how the decrease in entropy (the living organism) came about.] What interests AI researchers, then, is the rearrangement that intelligent machines would have to undergo to become conscious... No one has an answer right now.
It's tricky. We might like to imagine that consciousness has something to do with awareness and human brain processes. But then we're not actively aware of every brain process. For example, you're typically not consciously aware of everything in your field of vision. It's not clear why there's a hierarchy of awareness, or why one type of information is more important than another... The author favours a broad definition known as subjective experience, which allows a potential AI consciousness to be included in the mix. Using this definition, researchers can investigate the notion of consciousness through several sub-questions. For instance, "How does the brain process information?" or "What physical properties distinguish conscious systems from unconscious ones?"
Intelligent machines could be equipped with a broader spectrum of sensors, making their sensory experience far fuller than our own... Additionally, AI systems could experience more per second, because an AI "brain" would run on electromagnetic signals traveling at the speed of light, whereas neural signals in the human brain travel at much slower speeds.
The potential impact of AI research is vast. It points to the future, but it also entails facing some of humankind’s oldest philosophical questions.
The key message in this book: The race for human-level AI is in full swing. It's not a question of if AGI will arrive, but when. We don't know what exactly will happen when it does, but several scenarios are possible: humans might upgrade their "hardware" and merge with machines, or a superintelligence may take over the world. One thing is certain: humanity will have to ask itself some deep philosophical questions about what it means to be human.
My overall take on the book is that it is certainly interesting, and he raises a number of issues that I hadn't thought of. Especially worrying to me is the concept of superintelligence in a conquerors' or zookeeper scenario. Some of his examples of entropy appear to be undeveloped or misleading, but overall I was impressed with the book. Four stars from me. ( )
  booktsunami | Jul 14, 2024 |
This is a strange book, which mixes vague conjectures about the future of intelligence with current research. The actual prose could do with some cleaning up and ironing out. But ultimately the content is unique, important and enlightening for both a technical reader and a lay person.

Undoubtedly AI will have a huge influence on our future, and this physicist's perspective is an important one to take into consideration. ( )
  yates9 | Feb 28, 2024 |
Best book I have read on this topic to date. He starts with a great story, then goes over the positives and negatives and the philosophy that is involved. ( )
  GShuk | Dec 30, 2023 |
One of the best books I have read in my life. The chapters about intelligence, life forms and consciousness are mindblowing. ( )
  kmaxat | Aug 26, 2023 |
The book "Life 3.0" is divided into three main parts.

The first part of the book explores the potential benefits and risks of advanced AI. Tegmark discusses the various ways in which AI could impact our lives, from enhancing healthcare and education to replacing human workers and potentially posing existential risks to humanity.

The second part of the book examines different scenarios that could arise as AI becomes more advanced. Tegmark discusses various possibilities, including a world in which machines become superintelligent and surpass human intelligence, a world in which humans merge with machines to become cyborgs, and a world in which AI goes wrong and causes unintended harm.

The third and final part of the book focuses on the ethical and social implications of AI. Tegmark examines various issues, such as the impact of AI on privacy, security, and inequality, and discusses ways in which we can ensure that AI is developed in a way that aligns with our values and goals as a society.

Throughout the book, Tegmark emphasizes the importance of ensuring that AI is developed in a way that benefits humanity. He argues that we need to be proactive in shaping the future of AI, rather than simply reacting to its development, and that we need to work together as a global community to ensure that the benefits of advanced AI are widely shared and that the risks are minimized.

Overall, "Life 3.0" provides a thought-provoking and accessible look at the potential future of AI and the ways in which we can shape that future to ensure that it aligns with our values and goals as a society.

There are several ways in which we can ensure that AI is developed in a way that aligns with our values and goals as a society. Here are a few examples:

1. Encourage transparency and accountability: One way to ensure that AI is developed in a way that aligns with our values is to encourage transparency and accountability in the development process. This could involve making the source code for AI systems open-source and publicly available, as well as requiring developers to explain how their systems make decisions.

2. Foster collaboration between developers and stakeholders: Another way to ensure that AI is developed in a way that aligns with our values is to foster collaboration between developers and stakeholders, such as policymakers, ethicists, and members of the public. This could involve creating forums for discussion and debate on the ethical and social implications of AI, as well as providing funding for research that examines these issues.

3. Develop ethical guidelines and standards: It's important to establish clear ethical guidelines and standards for the development and deployment of AI systems. This could involve creating codes of conduct for AI developers, as well as establishing regulatory frameworks that ensure that AI systems are safe, reliable, and transparent.

4. Ensure diversity in the development process: It's important to ensure that the development of AI systems is not dominated by a narrow group of developers. This could involve promoting diversity in the AI workforce, as well as involving a diverse range of stakeholders in the development process.

5. Promote education and awareness: Finally, it's important to promote education and awareness about the potential benefits and risks of AI. This could involve creating educational programs that teach people about AI and its implications, as well as encouraging public discussions and debates on the topic. By promoting education and awareness, we can ensure that people are informed and engaged in the development of AI, and that its benefits are widely shared.

There are several examples of ethical guidelines and standards that have been established for AI development. Here are a few examples:

1. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE (Institute of Electrical and Electronics Engineers) has established a global initiative to develop ethical guidelines for autonomous and intelligent systems. The initiative has produced a set of principles designed to guide the development of AI systems in a way that is safe, transparent, and beneficial to society.

2. The Asilomar AI Principles: In 2017, a group of AI researchers, ethicists, and policymakers gathered at the Asilomar Conference Center in California to develop a set of principles for AI development. The resulting Asilomar AI Principles consist of 23 guidelines that are designed to ensure that AI is developed in a way that is safe, transparent, and beneficial to society.

3. The Montreal Declaration for Responsible AI: In 2018, a group of AI researchers and ethicists gathered at the AI Forum in Montreal to develop a set of principles for responsible AI development. The resulting Montreal Declaration for Responsible AI consists of 10 principles that are designed to guide the development of AI systems in a way that is transparent, accountable, and respects human rights and dignity.

4. The European Union's Ethics Guidelines for Trustworthy AI: In 2019, the European Commission's High-Level Expert Group on AI published a set of Ethics Guidelines for Trustworthy AI. The guidelines consist of seven key requirements for AI development, including the need for transparency, accountability, and respect for fundamental rights.

5. The United Nations Development Programme's AI Ethics Framework: In 2020, the United Nations Development Programme (UNDP) published an AI Ethics Framework, which provides guidance on ethical considerations in AI development. The framework consists of five principles, including the need for transparency, fairness, and human-centered design.

These are just a few examples of the ethical guidelines and standards that have been established for AI development. As AI continues to evolve, it's likely that we will see the development of additional guidelines and standards to ensure that AI is developed in a way that aligns with our values and goals as a society. ( )
  AntonioGallo | May 4, 2023 |

» Add other authors (20 possible)

Author name · Role · Author type · Work? · Status
Max Tegmark · · primary author · all editions · calculated
Sjöstrand Svenn, Helena · Translator · secondary author · some editions · confirmed
Svenn, Gösta · Translator · secondary author · some editions · confirmed
WAA, Frits van der · Translator · secondary author · some editions · confirmed

References to this work in external sources.

Wikipedia in English: None



Rating

Average: 3.9
0.5: 0 · 1: 1 · 1.5: 0 · 2: 7 · 2.5: 2 · 3: 25 · 3.5: 3 · 4: 59 · 4.5: 4 · 5: 31
