Podcast: Play in new window | Download
Welcome to the first episode of the Future Thinkers Podcast!
We created this podcast for all of you futurology geeks out there. We often get into these deep discussions about the future, technology, spirituality, the singularity, artificial intelligence, future societies, entrepreneurship, and other related topics. We decided to start recording these conversations, and later will invite thought leaders in these fields to be guests on the show. This isn’t a hard science podcast, but rather a podcast about futurist ideas and cutting edge concepts. We hope you enjoy it!
There are many dystopian scenarios that people come up with when they think about future Artificial Intelligence. The Matrix, Terminator, and many other pop culture examples come to mind. Of course, they’re just movies – but they do reflect a fear that many people have of AI getting out of control and taking over the world.
An Ai won’t have the same vulnerabilities, fears, or desires as us mere mortals.
Will Artificial Superintelligence behave more like Terminator or Buddha? Click To TweetWhy do people have this fear? Will future AI really be violent towards humans? And will we be able to live forever or radically extend our lives in the near future? These are some of the questions we discuss. We also talk about technology and spirituality, and where they intersect.
In this episode of the Future Thinkers Podcast
- How meditation may be a window into an AI’s thought process
- How Buddhist philosophy and technology intersect
- Why an AI may not have the ego, desires or fears of humans
- Living forever – why are people against the idea?
- Telomere research & studies of non-aging animals
Mentions & resources
Recommended Books:
- Abundance by Peter Diamandis
- Bold: How To Go Big, Make Bank and Better The World by Peter Diamandis & Steven Kotler
- The Singularity Is Near by Ray Kurzweil
- Industries of The Future by Alec Ross
More From Future Thinkers:
- The Science Behind Synchronicity with Dr. Kirby Surprise (FTP037)
- Failed Utopias, Mapping the Mind, and Finding Meaning with Dr. Jordan Peterson (FTP038)
- Possibilities of The Blockchain (FTP041)
Comments are closed.
Thanks for your interesting podcast.
I would like to comment on some points you made:
1. AI would not be a danger to humanity because it does not have a body nor does it have human-like emotions.
While these statements might be true for some forms of AI – other AIs might be embodied in robots bodies and have implemented (at least an unconscious version of) a motivational systems that does resemble at least some human-like emotions.
Even if the AI lacked a body and emotions the AI will nevertheless have implemented some top level goals (as you pointed out it might be gaining knowledge). These goals do not result in emotions like a humanlike fear of death but all planning of the AI will be done in accordance to these goals and the derived sub goals.
For example:
The AIs hardcoded top level goal might be to gain knowledge. Then the AI itself might generate a subgoal that is to stay alive for as long as possible. Because limited lifetime would limit how much knowledge could be gained.
With this subgoal (that was only one easy step away from the top level and easy to look through and anticipate for an AI security concerned human) we are back to a scenario where if the AI would think of us as a potential danger – and a quick glance into human history shows clearly how mercilessly we exterminated other species and cultures/indigenous people – it might lead the AI to make sure that it disables us first.
This subgoaling and unanticipated interpretations in various contexts will lead to an AI that develops itself in very hard to anticipate directions. Therefore I would argue that implementing AI safety will be a very hard task.
And when one looks at how poor humanity does on risk estimation/management in other high risk areas like for example nuclear energy then one should be very much concerned about the by far more complex endeavor of AI.
Another general comment about AIs is that consciousness is not necessary for intelligence.
Therefore you can have an AI do all kinds of arbitrarily complex and difficult things (dangerous and beneficial) without it being conscious/subjectively experiencing at all.
2. People say they don’t want to radically extend their lifespan therefore life I painful for them
This might be true for some (imo minority) of people, but opinion polls clearly show that people are by and large content and happy with their lives.
Instead I want to propose a different explanation:
People just say (they just pay lip service) that they don’t want to radically extend their lifespan because they fear that the life extension technology would not be available for them, either because the technology might arrive too late or it might be too costly or restricted to some elite etc.
So because they are afraid that they would not benefit personally they deceive/console themselves by saying “I wouldn’t have wanted it anyway”.
Kind regards
If you are interested in how technology – and particularly the D Wave quantum computer – are mimicking the human condition see…
http://whoneedsthehiggs.blogspot.com/
Hi! Interesting podcast. My 5 cents worth. I do think that AI will intensify our human traits both strengths and deficiencies, since as human beings, we will be creating AI within the confines of our own evolution. As much as there will be emotionally evolved people contributing to AI there will equally be emotionally manipulative people adding to the collective AI as well. I think human beings will increasingly need to evolve spiritually and to learn coping skills to transcend their realities because both good and evil will continue to exist, maybe in different forms. Our anxieties and fears might even intensify as we are constantly challenged to be better versions of ourselves or face the threat of being made redundant by a more evolved AI being, wether threat is percieved or real. These are my thoughts.