Mike: Daniel, welcome to the show. It’s good to have you on again. So, today we’re talking about existential risks, which is a topic I know you’re quite familiar with and, you know, you’ve brought up in past episodes with us. So, this is something I’ve actually really been looking forward to getting into with you.
Daniel: Great.
Mike: So, why don’t we start with the big questions. What are some of the biggest existential risks [00:02:30] that we are facing on the planet right now?
Daniel: So, for those who aren’t familiar with the term, existential risk means some risk that could stop us from existing. So, it’s a species-threatening event. There are also plenty of catastrophic risks that would just really suck, even though we wouldn’t be completely wiped out as a species. So, there are versions of World War III that are existential, kinds of nuclear holocausts that nobody makes it through. There are kinds of World War III that are catastrophic, meaning some group of people in remote areas make it through, but the atmosphere is full of uranium and, you know, most of the people [00:03:00] did very poorly. So, we’re kind of interested in preventing all of those things, right?
So, what are the categories, first? There are different ways of kind of chunking it, different ways of doing taxonomy. One thing we can do is start by saying there are kind of human-induced things and then there are kind of natural phenomena. Natural phenomena are things that either arise from within our earth and biosphere or outside of it. So, outside of it is things like solar flares and Carrington events, asteroids [00:03:30] that can be catastrophic or existential. And obviously, you know, within our atmosphere most of the natural phenomena that would occur would not be existential. But there are some supervolcano caldera kinds of scenarios that would be. We’re obviously interested in any of them, because, you know, if there was an event that we just couldn’t avoid here, then Mars colony moves up on the priority list, right?
But surprisingly, most of these things, if we have more intel about them, many of them can be avoided or at least [00:04:00] mitigated against. So, for instance, if we look at something like a Carrington event, where we’re looking at solar flares or coronal mass ejections that, besides just the effect that they would have on biology directly, would also have effects on electrical circuitry. You know, as far as the biology goes, there might be an impact on one part of the planet, people can go underground, it’s not that big a deal. But as far as frying the circuitry goes, if we’re talking about frying the circuitry of the nuclear cooling [00:04:30] stations on the power plants, so that they turn into nuclear volcanoes, that’s a pretty big deal.
Now, we could, of course, do things to mitigate against the sensitivity there, like hardening the nuclear cooling stations and, you know, other forms of key infrastructure. So, even with regard to things that we can call natural phenomena, obviously, you know, there are projects that are looking at asteroids and other near-earth objects because depending upon what we see they might actually be movable. There are a number of projects focused on that. Then we move our attention to [00:05:00], you know, human-induced things, and we can look at the ones that go through environmental pathways. Basically we, either through biodiversity loss, or climate change, ocean acidification or other kinds of effects on our biosphere, create a biosphere that is uninhabitable to humans.
But we don’t even have to get to full uninhabitability for partial uninhabitability to lead to the beginning of cascade effects. You know, when we look at Syria we see that [00:05:30] a major part of what happened, factoring that there were many, many factors simultaneously, but one part of it was droughts in an area that had not had droughts led to subsistence farmers moving into the cities, which taxed the resource capacity of those cities beyond their limits, which led to resource wars, which led to factions and that whole kind of thing. So, when we start looking at climate change creating massive numbers of refugees, and the refugee dynamics leading to war dynamics and resource shortage [00:06:00] or economic collapse and then, you know, cascading wars.
You can have scenarios where environmental phenomena are the first step in a series of cascades that can occur. So, you’ve got all of the environmental phenomena. The effects on coral of ocean temperature, ocean acidification, the loss of big fish and trophic cascades, key species collapse through many different dynamics, pollinators, like there’s a lot, there’s a lot of things in there. Then we can look at other reasons for just human-induced [00:06:30] violence towards each other.
So, all of the reasons that something like a World War III would happen. And we see right now the United States losing its position of clear supremacy as a superpower. We see China moving up into that position. We have rarely seen transitions of power go smoothly in the history of civilization, and we’ve never had transitions of power be at this kind of global level and with existential-level technologies and with over seven billion people and with a biosphere near collapse and [00:07:00] near fragility points. Like, we’re in a very different place. But even when there was just local hegemony, where the power was moving from one kind of local power to another, those were usually not easy transitions.
So, lots of different scenarios. And then there are the exponential tech scenarios where different kinds of exponential technology – exponential technology means exponentially increased power to affect stuff – and whether we’re talking about through AI or through [inaudible] and biotech, or through nanotech, [00:07:30] you know, or through robotics. If we have exponentially increasing ability to affect stuff without exponentially increasing good choice making, that is just self-terminating no matter what. And that is the scenario we have right now.
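To make that divergence concrete, here is a minimal toy sketch in Python. The doubling period and the annual gain are invented numbers, not anything Daniel quantifies; only the shape of the comparison matters.

```python
# Toy model of the point above: exponential power to affect the world
# vs. merely linear improvement in the quality of our choices.
# All growth rates here are illustrative assumptions.

def capability(year, doubling_period=2.0):
    """Power to affect stuff, doubling every `doubling_period` years."""
    return 2 ** (year / doubling_period)

def choice_quality(year, annual_gain=0.03):
    """Choice-making quality, improving a fixed 3% of baseline per year."""
    return 1 + annual_gain * year

for year in (0, 10, 20, 40):
    gap = capability(year) / choice_quality(year)
    print(f"year {year:>2}: capability {capability(year):>10.1f}, "
          f"choice quality {choice_quality(year):.2f}, gap {gap:>9.1f}x")
```

Whatever numbers you pick, the exponential term eventually dwarfs the linear one; that runaway gap is the “self-terminating no matter what” claim.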
So, exponential tech can lead to existential scenarios on purpose or by accident, right? You can have just much more powerful kinds of capacities, military capacities, and much more powerful capacities that even [00:08:00] smaller actors are able to get. The nuclear bomb started off as such a difficult thing that only huge state actors could have it, those state actors could watch each other, and you could have a mutually assured destruction system so nobody could use it.
But as you start to have exponential tech that becomes distributed, and smaller and smaller actors have access to larger and larger potential to affect things, the ability to avoid that happening gets harder and harder. And then you also have exponential tech that leads to existential scenarios accidentally. So, most [00:08:30] of the AI risks that Bostrom lays out in Superintelligence, the grey goo risks that Drexler lays out, you know, these are mostly where we create an autopoietic system. We create a self-fuelling system that has a faster feedback loop, is more adaptive than biology, and not commensurate with biology.
And so, we have to avoid accidental extinction, as well as on-purpose extinction. You know, that’s kind of a rough lay of the land of the categories of [00:09:00] existential risk. Now, if we take a step back from it, we can say all of these categories have some things in common. Some deeper underlying drivers that are actually the real existential concerns. Because, at this level, the scenarios are basically countless. But if we go deeper, one way of saying it is that the real existential risk is a loss of sense making, a loss of the ability to actually make sense of the world around us and what is worth doing, what the likely effects of things [00:09:30] will be, what the effects of our actions are now.
And so, when we think about it, how long do we have with ocean acidification before the coral die? Right now, the thoughts on that vary so widely that it’s either the most important issue or we have quite a bit of time. And is Fukushima really close to releasing a lot of radiation or not, and can North Korea’s nukes really reach certain places or not, and what are the real risks with AGI, parasitic AI, etcetera? Like, if we don’t [00:10:00] have a better ability to make sense of these things then they’re just all fail scenarios, right? Because we don’t know how to prioritize.
Which should actually get attention paid to it, and what could be successful and what’s not likely to be successful. So, we can say that a lack of information coherence is one of the things that is at the heart of driving them. And now, to take a deeper step, we say, well, within capitalism, and we can step even further back in a moment, but let’s take that step. Within capitalism, where we’re playing a competitive game, [00:10:30] we’re playing a win/lose game, usually zero-sum, sometimes small positive-sum but still fundamentally win/lose. Information is a source of competitive advantage. So, we are incented to both hoard information and to create disinformation.
We actually, like, if we succeed at that we will actually do better within the system. So, as you have exponential technologies, the ability to do more media, spread more different info through more channels etcetera, how do we have exponential [00:11:00] information tech with the incentive for disinformation, and actually be able to have enough coherent information to have any idea what the fuck is going on? Well, basically we don’t. You know, think about getting towards the tipping point of climate change, where if we, you know, get to a positive feedback loop with warming or something, and if we don’t stop a dynamic before a certain point, we won’t be able to afterwards.
We’re actually pretty close to the point of peak discoherence, where if we get to that place we might not actually [00:11:30] be able to recover coherence. And so, I would say that’s one way of thinking about what is actually essential. Another way of thinking about what’s essential is, like we said, exponential technology. It’s very hard for people’s intuition to grasp what an exponential curve is, because most of the things we experience don’t feel like exponential things. But most of the technology that is exponential is increasing our ability to affect something with our choice, and it’s not increasing the quality of our choice.
[00:12:00] Now, when you multiply these together, actually decreased information coherency with increased impact, that is a self-terminating scenario.

Euvie: What are some of the ways of avoiding that? Especially, I liked what you said about our loss of sense making and the exponential curve of that also. That’s a really interesting factor that I think a lot of people aren’t talking about. And it relates to one of the questions I was going to ask: what do we do about the sort of [00:12:30] institutionalized ignorance of people to what is actually going on? And, you mentioned the corporate interests producing disinformation, but there’s also, you know, bad educational systems that are outdated or motivated by different factors that have nothing to do with actually giving people good information. And then there’s research that serves corporate needs, and then there’s media propaganda in politics and all this stuff. So, what kind of tools do we actually have to [00:13:00] change these systems in a way that they serve the planet rather than serving these individual interests?
Daniel: We could say, well, when you talk about sense making, shouldn’t that be science? Shouldn’t science be sense making? The same way that we could say, shouldn’t journalism be sense making? Right? Shouldn’t the intelligence agencies of the world be sense making? You know, we’ve seen scenarios where a scientist publishes a scientific paper in a prestigious scientific journal where the paper that they wrote is complete gibberish. They made it up [00:13:30] with technical-sounding words, but it’s actually gibberish. And it goes through the publication process because they came from the right university or whatever. And they were doing that just to show that the peer review process is broken.
We’ve also seen, you know, stats like 50 percent of the articles posted in places like the Journal of the American Medical Association, five years later the findings are shown to have been misinterpreted or wrong or had bad methodology, etcetera. 50 percent wrong means that [00:14:00] if I’m reading a cutting-edge medical journal, my chances of knowing what’s true are 50/50, right? It’s like reading tea leaves would give me as much of a sense of what’s true.
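As a side note on the arithmetic here: the 50/50 figure compounds badly when a conclusion rests on several findings at once. A minimal sketch, with both the reliability rates and the independence assumption invented for illustration:

```python
# Toy arithmetic behind the "50/50" remark: if each published finding
# holds up with probability p, a conclusion resting on k findings holds
# up with probability p**k. Assumes independence, which is optimistic.

def p_chain_sound(p_single, k):
    """Probability that all k independent findings are sound."""
    return p_single ** k

for p in (0.5, 0.9):
    for k in (1, 3, 5):
        print(f"per-finding reliability {p:.0%}, chain of {k}: "
              f"{p_chain_sound(p, k):5.1%} chance the conclusion stands")
```

Even at 90 percent per-finding reliability, a five-finding chain of reasoning stands only about 59 percent of the time; at 50 percent it is down to about 3 percent.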
Mike: That’s unbelievable.
Euvie: Yeah.
Daniel: It’s a broken sense making system within science. If you want to think about what the heart of science is, like what the essence of the philosophy of science is, the essence of it is earnest inquiry, right? Like, earnest desire to understand the nature of reality. And so, whatever the speed of sound actually is, I don’t care if it is different than [00:14:30] I think it is, I want to find out. There’s a certain reverence for reality and respect for the nature of what is beyond my own ideas about it, or how I can benefit from it, that is at the heart of the scientific impulse. But the scientists need to be paid and the equipment has to come from somewhere. And if some of the answers are more profitable than other answers, then it’s easier to put money into them because I’ll be able to get that money back. And if an herb or a plant has some medicinal property [00:15:00] but I will never be able to patent it, then I won’t be able to recoup the money that I put into the research from, you know, the small margins.
How can we afford to pay for it? But we will with, say, some synthetic thing I can get a patent on, etcetera, right? So, what actually gets researched within that structure and how it gets researched – you have to factor how deep those kinds of perverse incentives end up trickling through the whole thing. So, then you ask a good question, which is: within a system, within [00:15:30] a fundamentally win/lose game theoretic system, how do we prevent these issues and be able to have real coherency when we are incented to withhold information? And even, you know, like, you’ll see in soccer, football or whatever, someone will fake left to try and get the other person to go left. And then they’ll actually go right, right? Like, that’s disinformation, intentional disinformation so that you can throw the opponent off, so you can win.
This is no different in corporations or nations, right? [00:16:00] Intel and counter-intel and intentional disinformation, same thing. But with global-level consequence. So, here’s another good way of thinking about it. Think about early tribes as competitive teams, almost like a sports team, where they have to work really well together and be very coherent with each other to be able to compete with other teams in military conflicts when they occur, and compete with the other teams for scarce resources in the shared environment, the shared commons. Say they weren’t military at first, right? [00:16:30] They’re just doing their own thing, there’s plenty of abundance. But then if any of them realize that they can do this military thing – go kill another tribe, take their stuff and get the riverfront that they had, or get whatever kinds of things they had acquired and developed – that actually worked for them. It worked better. Now, everyone else has to build defensive militaries, at least, otherwise they lose by default.
And so, one of the things to get in the history of win/lose game theory is that, one, it worked. You could actually go kill the other people, take their stuff and make stuff better [00:17:00] for you and, you know, your people. And two, if you didn’t play win/lose game theory and someone else did, you lost by default. Which is why most of the more peaceable, less militaristic cultures got wiped out. So, then we say, you know, “One of the tribes is just bigger and stronger than us, we’re not going to be able to compete with them. But if two or three of us smaller tribes band together we can.” But then, of course, the other side has to compete. So, they band together. So, now we move from tribes to villages, right? And then we can move up to kingdoms and to nation states and to global economic trading blocs.
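The “lose by default” logic is a dominant-strategy payoff matrix. A minimal sketch follows; the payoff numbers are invented, and only their ordering matters.

```python
# Toy payoff matrix for the tribal arms-race story: whatever the other
# tribe does, militarizing pays more, so both end up at military/military
# even though peace/peace is better for both. Payoffs are invented.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("peace", "peace"): 3,        # shared abundance
    ("peace", "military"): 0,     # lose by default
    ("military", "peace"): 5,     # take their stuff
    ("military", "military"): 1,  # costly standoff
}

def best_response(their_move):
    """Pick my highest-payoff move given the other tribe's move."""
    return max(("peace", "military"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])

for theirs in ("peace", "military"):
    print(f"if they play {theirs!r}, my best response is "
          f"{best_response(theirs)!r}")
# Both sides reason identically, so the equilibrium is military/military
# with payoff 1 each -- strictly worse than the peace/peace payoff of 3.
```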
[00:17:30] I mean, think about those evolutions of what we think of as, like, civilization structures as evolutions of competitive teams within a win/lose game theoretic structure that have more and more power to be able to outcompete the other one. And that means both having more people that can be coherent with each other against the other one, right? So, then, as a nation, we’re a team competing against the other one. We have a shared military industrial complex paid for by taxes, whatever. Or, as a religion or a race. So, we can have some overlapping teams. But once you get [00:18:00] to the point where all of the teams are stepping up simultaneously, that’s the kind of evolutionary driver, they’re stepping up in their power. Once you get to the point that you have exponential technological power on multiple teams, the amount of power that it would take to actually win would require destroying the playing field. Which becomes inevitable, right? Inevitably, power will keep increasing until it’s actually bigger than the playing field can tolerate. And then winning pretty much means losing, because there is nothing [00:18:30] left to win, right? So, right now we’re at a place where the superpowers of the world cannot win a war against each other. When you have hundreds of times the nuclear capacity necessary to kill all life on earth, there is no winnable war, but we keep building more fucking military capacity. We just spent, you know, a trillion and a half dollars on the F-35 because we didn’t have badass enough jets, with a military that already has all the tactical capability that it has in the US, right?
And think about what a trillion and a half dollars of resources means to other kinds of things. [00:19:00] And so, when you say, “Why did we ever get more than five times as many nukes as we needed to kill all life on earth?” Like, after you’ve got enough nukes to kill all life on earth five or 10 times, why do you keep building more to where you have hundreds of times? Was that ever really just a strategically smart idea, or was that because we had a for-profit military industrial complex that makes money when that happens, where the people who are in positions of decision-making power also happen to be shareholders or have vested interests in those structures?
And this is not blaming the people involved, these are structural, [00:19:30] right? These are structural dynamics. And so, can you have lasting peace and a for-profit military industrial complex at the same time? Of course not, right? Like, that’s supply and demand 101: if there is massive capacity for supply, it has to protect the demand. And then, you think about where those perverse incentives go everywhere, right? Healthcare, where if people weren’t sick then the system loses all of its resource. And so, how could it ever really invest in prevention deeply [00:20:00] when its profitability depends upon symptom management? Even if you cured people quickly, that’s not profitable. But if you manage symptoms ongoingly, it’s profitable. And if the symptom management causes other issues that need more symptom management, i.e. more meds, that’s just like an upsell or a cross-sell maximizing the lifetime revenue of a customer that happens to be a patient.
So, when you think about, you know, win/lose game theory within that kind of incented structure, where we can’t win at the wars anymore [00:20:30] and yet we’re moving closer to the kinds of tensions where war becomes more and more plausible. And we also can’t win at competing for extracting the scarce resource, when we’ve already extracted so much scarce resource, we’re almost at the biosphere’s limits, right? We’re almost at the limits of what can be handled in the extraction of oil or fish or razing land for cattle or, you know, so many things. And so, either we continue with win/lose game theory and it becomes omni lose/lose, [00:21:00] catastrophically. Or, we have to figure out how to have that power not turned against the other power, and figure out what omni win/win means, where no one has incentives that are misaligned with the wellbeing of others in that system that has that power.
And that’s really the fork in the road we’re at as a species, for the first time ever now: we either figure out what omni win/win means economically, as a world view, governance-wise, we either figure out what that means or [00:21:30] we continue with win/lose and it becomes omni lose/lose. Which pretty much means we either step up into a radically higher quality of life existence with a new kind of collective intelligence, or we have a catastrophic step down. Existential or near existential. But like, in the next not very long time, that’s really the fork in the road.
Mike: And what evidence do you see of us going down one path or the other?
Daniel: If we just plot curves, it doesn’t look good, right? [00:22:00] If we just plot the curves of what has happened so far with regard to how much harm came from new power that we developed, and we look at how fast we are developing new power on exponential curves. If we look at the population curves and we look at, you know, biosphere issues, like, it all looks pretty bad from that point of view. And you can try and take the techno-optimist route and say, “But look at the rate at which we are developing solutions.” But here’s the problem with that. Here’s another way of thinking about [00:22:30] what the driver of the existential risk and all the catastrophic risks are.
So, one way of thinking about it is that win/lose game theory is driving the whole thing, right? Win/lose structures are driving the whole thing. With win/lose game theory multiplied by exponential power and exponential populations. Another way of looking at it is a lack of information coherence. I mean, there’s information coherence and coherence as beings and agents with each other. Another way of thinking about it is a lack of coherence around value measures. And this is actually a very deep [00:23:00] and essential way of thinking about it. Think about a tree in an ecosystem and we say, well, what is the value of that tree? Well, it’s providing a home for a bunch of pollinators and birds, it’s providing food for them with flowers, it’s stabilizing topsoil. It’s symbiotic with the mycorrhizae and the fungi and bacteria in the soil, it’s providing shade for all of these things. It’s, you know, pulling out CO2 and producing oxygen and providing food for animals and, you know, it might have [00:23:30] millions of value metrics as part of this kind of complex ecosystem.
But then, we cut it down to make it $10,000 worth of 2 by 4s, and its value is $10,000. And the 2 by 4s, right, like they’re not sequestering CO2 and stabilizing topsoil, they are serving as a structural strut. It’s this one really simple thing, right? Structural strut. So, we took this very complex thing and we reduced the complexity of it radically, we made it a very simple thing. So, we downcycled [00:24:00] the shit out of it, because the metric we were seeking to optimize was dollars in my account. And so, like, a dollar is a value metric. But I get the $10,000 from that tree, or I get $10,000 from the elephant tusks, right? Or, I get $10,000 from this service. Well, it’s like, how do I relate the value of an elephant tusk or a person’s art or a tree? Like, these should not be fungible, they should not be interchangeable.
These are fundamentally different things. [00:24:30] But because I remove all of the information from them, and all of the context, I want them to be simply exchangeable in terms of capital so I can maximize the ease of transaction to grow pools of capital. That is an extinctionary dynamic: we’re taking complex value and turning it into simple value.
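The abstraction move being described can be pictured as a lossy projection from many value dimensions down to one. A minimal sketch, with every figure invented:

```python
# Toy sketch of "complex value -> simple value": the market's valuation
# function only sees the dollar-denominated field, so a standing tree
# and a pile of 2x4s look identical on the books. All figures invented.

standing_tree = {
    "habitat_for_pollinators_and_birds": 40,
    "topsoil_stabilization": 20,
    "co2_sequestration": 12,
    "oxygen_production": 18,
    "shade_and_soil_symbiosis": 15,
    "market_price_usd": 10_000,
}
two_by_fours = {"market_price_usd": 10_000}  # a structural strut

def market_value(asset):
    """What capital sees: only the metric denominated in dollars."""
    return asset.get("market_price_usd", 0)

print("market sees tree:  ", market_value(standing_tree))  # 10000
print("market sees lumber:", market_value(two_by_fours))   # 10000
print("value dimensions erased by the sale:",
      len(standing_tree) - len(two_by_fours))               # 5
```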
Euvie: That almost sounds like a grey goo scenario of itself.
Daniel: It is.
Euvie: We’re taking all of these super complex things and reducing them to this homogenous goo that is monetary [00:25:00] value.
Daniel: I actually just saw a really good article that was called Capitalism as Paper Clip Maximizer, and it’s actually a really fun thought experiment. The paper clip maximizer, if people aren’t familiar – I don’t remember who came up with it first, but it was in Nick Bostrom’s book on AI issues – it just said, you know, we could have an AI whose job was, you know, working at a paper clip company to maximize paper clip production. And it also had the capacity to upgrade its own capacity. And it ends up getting into a place [00:25:30] where it makes all these paper clips and upgrades its own ability to make more paper clips, then starts competing with us, you know, taking all of the resources that humans needed to make paper clips, because that’s its algorithm.
And as it’s increasing its own capacity, it’s able to outcompete us, and then it realizes that we’re made of atoms and it can make paper clips out of us, you know. It’s this kind of continuously growing, right? This autopoietic capacity that’s just turning everything into paper clips. And so, they’re saying, you know, capitalism is a paper clip maximizer because capital is making [00:26:00] more capital. That’s kind of the gist of this distributed system: rather than a central AI, it’s this distributed kind of collective intelligence system that uses these distributed human bioprocesses within it.
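For readers who want the paperclip dynamic spelled out, here is a deliberately crude sketch: invented numbers, a caricature of Bostrom’s thought experiment rather than any claim about real AI systems.

```python
# Caricature of the paperclip maximizer: a single-metric optimizer that
# reinvests in its own capacity converts the entire shared resource pool
# into its one output. Starting values and growth rate are invented.

resources = 1_000.0  # everything else in the world, abstract units
capacity = 1.0       # paperclips producible this step
paperclips = 0.0
steps = 0

while resources > 0:
    steps += 1
    made = min(capacity, resources)
    resources -= made
    paperclips += made
    capacity *= 1.5  # self-upgrade: better at the one goal every step

print(f"after {steps} steps: {paperclips:.0f} paperclips, "
      f"{resources:.0f} units of everything else remaining")
```

The same loop reads as the distributed version if you rename `paperclips` to `capital`: the optimization target compounds, and everything not in the target is feedstock.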
But because we’ve reduced the value to capital, and having more capital makes it easier to get more capital – the capital gains interest faster than the overall economy grows, gives you access to more financial services, etcetera – then, you know, that’s the goal. So, the [inaudible] capitalism-is-a-paper-clip-maximizer idea is a reasonable thing, [00:26:30] and Drexler’s model of grey goo was based on another exponential tech, which was nanotech, the ability to rearrange things at a molecular or atomic level. If we do that right, it’s pretty awesome, right? If we do it right, it’s like the replicator from Star Trek.
We can take trash and turn it back into rad stuff at the level of just atoms. And that actually might be the future of the materials economy: that we have quantum computing that has enough sophistication, you know, give or take a million qubits, to be able to properly direct nanorobotics [00:27:00] to take old stuff and turn it back into new stuff at an atomic level, creating a closed-loop materials economy that can upcycle indefinitely. But if we get it wrong, then you can have, you know, these machines that are just turning everything into goo. So, Drexler had this model that was called grey goo. And in a way, Euvie, you’re right, capitalism as grey goo: you know, we take this tree with this radically contextualized, complex value and take it out of its context and give it this reduced, abstracted, simplified value metric.
[00:27:30] And we’ve done that to 80 percent of the old growth forests that the earth spent billions of years developing, you know. And to 90 percent of the large fish species in the ocean. And then what does that capital really do other than continue to do that autopoietic thing? And so, when people think about, like, what is capitalism – and honestly, we can say communism and socialism were really actually subsets of this kind of resource concentration system – it’s a process of abstracting [00:28:00] value, so we go from complex value to abstracted value, and then extracting it and accumulating it. That particular model came from a colleague, Forrest Landry. And, you know, capitalism does that, but socialism and communism have other versions of doing that. But that’s the core thing, that’s the ring of power that has to be broken. It’s abstraction of value, and specifically a reductive abstraction; extraction, where you remove the content from its context; and accumulation. And that’s how you take a [00:28:30] complex system that is resilient, turn it into a complicated system that’s not resilient and that becomes progressively simpler, and kill it.
Mike: This brings up something I find to be fascinating, which is the use of blockchain to tokenize abstract value. Do you have thoughts about this, and whether it’s actually possible to do? I can get more specific if you need.
Daniel: Yeah, please do.
Mike: So, we had an interview last year with Vince Means and he talked about tokenizing these abstract values. Actually, he used the tree example. [00:29:00] So, what if you can tokenize all the abstract value that a tree provides, you know, have a token for oxygen converted, that sort of thing? And what if you can optimize those different tokens in the same way that capitalism optimizes capital? Can we use these other systems of value and use them as currency as well – I say that loosely – and optimize that using the existing capitalist system?
Daniel: I don’t think so. Laissez-faire capitalism always kind of goes there. It says the tragedy of the commons [00:29:30] happens because if nobody owns it enough then they’ll fuck it up, but if someone owned it they’d take responsibility for it. So, really the answer is to have everyone own everything: every coral reef, every everything is owned, and we have some way of creating capitalist value in the process of owning it. This breaks down for a number of reasons. So, why is air not worth anything and gold worth so much? Because in a win/lose game model, something that everyone has access to and can’t not have access to, and I can’t get any more of, [00:30:00] doesn’t give me competitive advantage.
So, everyone had access to air, so I don’t need to value it. I don’t need to pay attention to it. And now, if I cut down this tree, of course I’m cutting down something that produces oxygen and sequesters CO2. But I’m not cutting down enough to fuck up everything. I’m not cutting down enough to really affect my experience at all. Now, of course, as we get seven billion people with that mindset it becomes a different story, but distributedly they’re all thinking about their own action and that, “If I cut this tree down, [00:30:30] I have $10,000 worth of 2 by 4s in my pocket and I need to feed my family. And I don’t have any measurably less oxygen.” But multiply that by seven billion and we all die, right?
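“Multiply that by seven billion” is doing real arithmetic work in that sentence. A minimal sketch of the externality math, with every dollar figure invented:

```python
# Toy externality arithmetic: the harm from one cut tree is spread
# across everyone, so the cutter's share is invisible, while the
# aggregate harm dwarfs the aggregate gain. All figures are invented.

population = 7_000_000_000
private_gain = 10_000            # my 2x4s, in dollars
systemic_cost_per_tree = 50_000  # assumed total ecological cost, shared

my_share_of_harm = systemic_cost_per_tree / population
print(f"my gain ${private_gain:,}, my share of my own harm "
      f"${my_share_of_harm:.6f}")  # cutting looks 'rational' to me

# But if everyone reasons the same way:
print(f"aggregate gain ${private_gain * population:.1e}, "
      f"aggregate harm ${systemic_cost_per_tree * population:.1e}")
```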
But if I don’t cut this tree down, that tree in the commons is worth nothing to me in terms of differential or competitive advantage. And I have a system where, pre-something like basic income, even before competitive advantage, I just need to live, right? So, if I can take something out of the commons to live, like, I’m going to do that. [00:31:00] So, gold or diamonds or whatever are worth a lot because they were perceived as scarce when we started the valuation schema. And if there wasn’t enough for everyone to have lots of it, then some people could have it and others wouldn’t, and those who had it had something that was unique, where they had some differential advantage.
But it’s still fiat, right? Like, whether it’s just printed dollars or gold, we ascribe to it a value that is not based on the actual material of the gold – it’s still, like, a value metric that we made up. And so, [00:31:30] we’ll have gold worth, you know, however much per ounce and, because of that, if there’s gold under some trees we’ll cut down a lot of trees to get the gold out and burn up the oxygen or damage it in the process, because we have a system that is valuing differential value, not systemic value. If you start saying, “Well, how do we actually advance systemic value?” you can’t really tokenize that.
Because as long as there are separate balance sheets, and I have some number of tokens and I’m trying to advance the number of tokens that I have [00:32:00] in competition with other members, we’re going to keep getting all these win/lose dynamics. So, the post-capitalist move is a deeper move than that. Now, does that mean that tokens and blockchain can’t serve a transitionary role? No, of course they can. Right now, we make up the currency, you know, through a central bank. Well, you can have another group make up a different kind of unit, and you can build some better structures into that unit.
And that can be valuable, totally. [00:32:30] But as long as we still have private ownership of those units, where if I have more of the units to begin with it’s easier to get more of them, and if I have less to begin with it’s harder to get more of them, I have widening wealth gaps built in. All of the other shitty aspects of capitalism, all of the derivatives and complex financial instruments and fucked-up markets will end up arising from those competitive structures. Those are just basically inexorable ways to game the game. So, then [00:33:00] you’re like, “Fuck, so what do we do?” We do need a completely new economics. We do need a completely new system of sense making.
We do need a completely new system of choice making, right, governance. At an axiomatically redesigned level. And, you know, when you ask someone, “What is economics?” they think, “Well, it’s a system of trade, a system of barter,” right? They’re already assuming private ownership, and trade and barter that is then mediated by some kind of currency. But you [00:33:30] have systems that don’t have private ownership, that don’t have trade, right? Because I don’t own shit to trade to you, we have some kind of systemic value. We just didn’t like those ones because, for the most part, we thought of those as communist, socialist systems, where if I didn’t privately own stuff then the state was giving me what I needed, but then the state was also forcing me to do shit.
Because as long as there are shitty jobs that nobody wants to do, that we don’t have intrinsic motive to do but the society needs done, how do you get the people to do the shitty jobs? [00:34:00] Well, if we kind of meet everyone’s needs through this thing called, like, a state, then the state has to force some people to do the shitty jobs – we’d call that imperialism, and we don’t like communism because it’s imperialist. So, we say, well, let’s let the free market force them. Which is: if they can’t be smarter and educate themselves better to get a better job, then they get the shitty jobs; and if they don’t do them, the state isn’t forcing them, but they will still be homeless and starve.
It’s just, where do we switch the imperialism, right? We switch it from the state to the market. But basically, [00:34:30] we still have to have a system of extrinsic incentive to control people to do shit that they’re not intrinsically oriented to do. Well, okay, so that’s been part of what we’ve had to work with, and Marx had thoughts on it, and Smith had thoughts on it. But technological unemployment is changing that whole fucking story, where you can start to automate the shittiest jobs, so you can have a system where the things that people actually could have intrinsic incentive to do are what are there for humans to do.
The things that you had to extrinsically incent them for, you don’t have to. Like, that’s a new thing. That means an axiom, [00:35:00] which is how we deal with the labour force, is changing. And so, we have to go back and rethink all of our economic ideas. And with regard to sense making, there are a lot of core things that are changing in terms of our capacity for sense making. We have the ability to have IoT sensors, right? Internet of Things sensors that are giving us real-time data about air chemistry and water chemistry and soil chemistry and fisheries and the commons, etcetera. And we have deep learning systems that can be synthesizing [00:35:30] that information. Like, we never had sense making systems like that.
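As a sketch of what that sensor-plus-synthesis layer could look like in miniature – simulated readings and an invented alert threshold, not a description of any existing system:

```python
# Minimal sketch of a shared sense-making layer: open IoT readings
# pooled into one auditable summary everyone sees. Sensor values and
# the alert threshold are simulated/invented for illustration.

from statistics import mean

readings = [  # (sensor_id, metric, value)
    ("buoy-1", "ocean_ph", 8.01),
    ("buoy-2", "ocean_ph", 7.98),
    ("buoy-3", "ocean_ph", 7.95),
    ("station-4", "air_co2_ppm", 421.0),
    ("station-9", "air_co2_ppm", 424.5),
]

ALERT_IF_BELOW = {"ocean_ph": 8.00}  # invented example threshold

def summarize(readings):
    """Pool raw readings into one shared number per metric."""
    pooled = {}
    for _sensor, metric, value in readings:
        pooled.setdefault(metric, []).append(value)
    return {metric: mean(values) for metric, values in pooled.items()}

for metric, value in summarize(readings).items():
    low = value < ALERT_IF_BELOW.get(metric, float("-inf"))
    print(f"{metric}: {value:.2f}" + ("  <-- needs attention" if low else ""))
```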
Like, we didn’t even have the technological capacity for things like that. And choice making systems. So, basically, we are at the face of deeper issues than we’ve ever been at the face of before, but also with deeper capacities. The same increased technological capacity – if we keep using it to take complex things and make them simple or complicated instead of complex, and we keep using it to fight against each other and win in competitive win/lose games, if we keep [00:36:00] doing that with these increased technologies, that’s existential. But if we use those technologies to actually obsolete those core social structures and create coherency-based social structures, where that technology can be used for, not against – then, I mean, we really do have a fork between radically better than anything we’ve ever experienced, or catastrophic.
Mike: Wow.
Euvie: So, now my question is: we talk about all these different structures that we can [00:36:30] create for humanity to exist within, but there’s also human nature. And not just human nature but the nature of something being alive, as essentially being a replicator machine. So, we have some sort of intrinsic tendencies that may be very difficult to hack. And no matter how you structure the system, we’re going to default to those tendencies. For example, you know, that’s probably one of the reasons why communism didn’t work. In some ways [00:37:00] it probably went against human nature, or the nature of just things that are alive. So, how do we deal with that, in your opinion?
Daniel: That’s not why communism didn’t work, though. Yes, there were things about the way it was structured that did not mesh well with humans. But it’s important to get that it wasn’t like it was a good structure and fucked-up human nature was the problem. It’s that it actually wasn’t a good structure. There were good parts of it. But in trying to keep [00:37:30] us from having economic inequality, because economic inequality would otherwise continue to inexorably widen, it became a lowest common denominator system. And it became a system of lack of freedom in some really key ways.
So, yes, it was against our nature, but it was actually against good parts of our nature. So, then the question is: are there shitty parts of our nature, or inexorably problematic parts of our nature? This will be the hardest part of the conversation so far, because I’m going to say that what we call human nature [00:38:00] is mostly human conditioning that is actually quite changeable, and I don’t even mean that genetic engineering or brain-computer interfaces are necessary. Let’s unpack this. The question you asked is an essential question. When we look at Japan or Denmark or, I think, a few Nordic countries, they have had decreasing populations after they reach a certain level of economics and education.
It is not that the nature is to do exponential population curves forever. That is the nature within a certain set of conditions. [00:38:30] When the conditions change, many of the deepest behavioural dynamics and drives, even procreation, start to shift, without externally imposed things like the birth caps China had. It’s really important to see that. When you look at some of the Indigenous cultures pre-capitalism that didn’t have private ownership, they had shared structures and they had languages that expressed that concept of sharing. They had radically de-emphasized words like ‘mine’, and radically increased words like ‘ours’. [00:39:00] We can see some of that in some of the Fijian, Polynesian cultures, the [inaudible], you know, a number of them.
Wherever the ‘mine’ language was less emphasized in the nature of the language, words like ‘selfish’ and ‘greed’ and ‘jealous’ were also less emphasized, because those were the results of memetic structures, of scarcity-based value in a competitive framework structure. So, when we say what is human nature, there’s a little bit that’s definitely human nature. Like an impulse to right oneself when starting to fall, or to [00:39:30] pull away from heat, right? Those autonomic functions, those are nature.
But so, like, when we get what is essentially human nature: what was uniquely adaptive about us is that our genetics selected for memetics, right? Our genetics selected for a creature that could continue evolving itself, developing itself within the course of its life pretty radically. Which is why a baby horse can be up and walking in minutes, and it takes us a year. Think about how many [00:40:00] multiples of the 20 minutes it takes a horse to get up go into a year for a human. How useless we are for so long. Even a baby gorilla can hold on to its mum’s fur as she moves around through the trees on the first day. We can’t even move our head for three months, right?
And that’s because we’re born neotenous, we’re born basically still embryos, where we’re imprinting the world we’re born into. Because as complex tool makers and tool evolvers with our capacity for abstraction, we evolve the [00:40:30] environment around us radically. And so, other creatures evolve genetically to their environmental niche, but we’re changing our own environment, so we have to be able to adapt to new environments that we create. So, we have to come in not hardwired, because being super adapted to throw a spear was useful at one point, but today, like, you and me don’t throw spears that much. We text and we do other shit.
So, we need to not just have really good genetic orientation for spear throwing, we need to have the capacity to come here with almost nothing, not even able to move our head, and imprint the world that we’re in. And so, [00:41:00] that memetics is mediated largely through neuroplasticity and the ability to have a lot of our behaviour not just bottom-up genetically controlled but top-down regulated. And the top-down systems are highly influenceable. And so, we can say human genetics selected for plasticity. And plasticity has a pretty huge amount of variance in its capability. And so, we can create environments – like, you know, look at the child soldier situations in Sudan or places like that – where we can take almost everyone through [00:41:30] a process that creates various degrees of psychopathy.
We can also, you know, look at Buddhist culture, where you had millions of people all care about insects enough to not harm them, right? Beyond the Dunbar number, they had something that developed abstract empathy across the population. And so, that doesn’t mean humans are naturally psychopaths or naturally highly empathetic. It means we are naturally capable of being conditioned in either of those ways, or any of many different ways. [00:42:00] We do have an innate impulse towards agency, towards self-actualization. Within a win/lose game structure, that will look like a competitive impulse. But within other structures, within win/win structures, that will look like the desire to go beyond my own previous capacity, but not to necessarily be better than or, you know, consume somebody or something else.
And so, that’s where there’s an innate impulse, but it expresses itself through context. Now, let’s take the next step. [00:42:30] There’s another really key step here. Until very recently humans didn’t have any concept of what evolution was. And we only recently did, and we’re only right now beginning to have a deep sense of what it actually is – not just biological natural selection, but the process by which subatomic particles come into atoms, come into molecules, come into more complex organic structures, dust clouds turn into stars in spiral galaxies. That evolution is this process of increasing orderly complexity in a way that has more and more synergy, [00:43:00] so more and more emergent properties. And the emergent properties define the arrow of evolution.
And we’re starting to understand this – you know, very few people so far, but the beginnings of us are starting to understand this. We can actually become conscious agents of evolution. We can, like, say, “Holy shit, the universe is actually doing something. It’s actually moving in this direction of increasing orderly complexity.” We can consciously participate with that, and we move from just being part of the whole, where evolution is just kind of this unconscious algorithmic process, [00:43:30] to thinking about, feeling about, identifying with, and being an agent for the whole.
And so, then evolution itself becomes an agentically mediated process. Like, we actually say, “Shit, the whole evolutionary process resulted in me. I am the result of this whole evolutionary process. So, in a way, the evolutionary process has kind of awoken to itself in me as I’m contemplating it.” And so, then we stop needing to be pushed by evolutionary pressure and pain to evolve, and we start being able to evolve consciously by what [00:44:00] [inaudible] called the lure of becoming. Because we actually identify as evolutionaries, right? As part of the evolutionary impulse, and in doing so really obsolete the need for pain as an evolutionary driver. So, I know I said a number of things in there, but human nature has the capacity to transcend much of what human behaviour has been so far.
Mike: That is so well said.
Euvie: It’s like we are the tentacles [00:44:30] of the universe or something.
Daniel: Yeah, the sensors and the actuators, right?
Euvie: Yeah. That’s quite a spiritual proposition, too, that we are becoming conscious of what we are and acting based on that realization.
Daniel: Yeah, and I actually don’t think anything less than that is adequate for preventing existential risk. And that’s a big deal. That really frames up what we’re here to talk about, which is: as we have increasingly distributed [00:45:00] exponential ability for impact, we really have to have exponentially increasingly good choice making, omni-considerate choice making, everywhere. So, when we talk about win/lose game theory, at the deepest level it starts with me having a sense of self that’s separate from everything else. Even all the way down to the semiotics, to the language, right?
I was told when I was a little kid: I’m Daniel and you’re Mike and you’re Euvie and that’s a chair and that’s a house and that’s a crow. [00:45:30] But it’s all just separate stuff in the universe, and I can be a good boy or a bad boy independent of what’s happening for anybody or anything else. I can actually win at the expense of someone else losing in a baseball game, where the winner would get a whole lot of praise. And someone else winning can mean that I lost, right? Like, I am not only a separate self, but I am also a separate self in competition for the scarce things, including scarce love, right? Scarce attention. So, that’s imprinted, that’s the basis [00:46:00] of the win/lose structures at the level of individual separate self-identity.
When, in reality, if we go a little bit deeper, if we weren’t playing silly games, right? What the fuck am I without all the plants that make all the oxygen? I don’t exist, I can’t breathe. My mother wouldn’t have been able to breathe, I wouldn’t have been born, right? Without plants I am not even a concept. So, how am I a separate self if I require the plants? But the plants require the pollinators and the mycorrhizae in the soil, [00:46:30] and on and on. And what would I be without gravity? I’m not fucking anything without gravity, or without electromagnetism. What would I be without the people that came before me, that made the language within which I think all of my thoughts, that actually patterned the way in which I think and feel and structure the world, right?
And I start to say, “Well, shit. I actually wouldn’t exist without all of it.” So, that South African saying ubuntu, ‘I am because we are’, is actually profound. Meaning me as a separate self, right? Okay, so take gravity away, take [00:47:00] electromagnetism away, take the plants away. I’m floating in the middle of the universe with no – I’m not even floating. There’s nothing holding me together, right? Like, the concept of me as a separate self is nonsense, it’s a misnomer. Weirdly, it’s just like when you take the tree out of the rainforest and turn it into wood, it’s worth a whole lot less. The context was its value, right? Its value was the content within the context, which coevolved.
I am within a context [00:47:30] and without that context I don’t even exist. And so, when I get that I am a misnomer as a separate thing, that really, I am an emergent property of the whole. I’m an emergent property of the biosphere, but not just the biosphere, because without the sun I wouldn’t be very interesting either. And without the laws of the universe I wouldn’t exist. I’m an emergent property of the universe and so are you, and we’re interconnected in this. And that’s waking up at a cognitive level, at an existential level, at a spiritual level. And really, we need [00:48:00] new macroeconomics.
Economics is basically: what do we value? Don’t value oxygen, because we can’t get any competitive advantage off it, even though we’d all die without it, but value the gold, because I’ll get some competitive advantage even though I’m just going to stick it in a safe and not do anything with it. Value the whale dead, because I can do stuff with it. I don’t value it alive, right? Like, what do we value? Our value system is codified in a value equation that then determines what we confer power to – that’s economics. Totally spiritual thing. So, we need new macroeconomics that aligns the incentive of every [00:48:30] agent with the wellbeing of every other agent and of the commons.
Because if there is any gap between your incentive and my wellbeing, or your incentive and the wellbeing of the commons, then you will do what you’re incented to do and you will externalize harm directly or indirectly, right? And with exponentially increasing technology, that harm will always become catastrophic eventually. So, we need new macroeconomics that fundamentally aligns our wellbeings with each other. And in doing so, we have no incentive for disinformation, [00:49:00] we only have incentive for information. So, now science becomes science, actually, right? And journalism actually becomes journalism. We actually fucking make sense of stuff, because I’m not trying to hide information from you or lead you down the wrong track so I get there first.
And we need, you know, new macroeconomics, new sense making, new systems of choice making, all the way down to a new world view where we actually identify with the evolutionary impulse. So, we don’t need pain to push us. We are evolving, right? We are being and becoming and doing [00:49:30] at the same time. For ourselves and the whole at the same time. And all the way down to our personal identity shifting from being a separate self, which is just nonsense, to being a unique emergent property of reality. And when I get that you are a unique emergent property of reality, and you have had life experiences, you’ve sensed stuff that I haven’t, you have a perspective on everything that I couldn’t possibly have, you have the ability to do stuff that I can’t do.
If you self-actualize fully, [00:50:00] you will create beauty in the world that I couldn’t, right? Like, no amount of Michelangelo self-actualization would have done M. C. Escher or Salvador Dali – like, they’re their own fucking kinds of unique creation capacity. And so, I can only compete with you when we reduce you and me to very simple metrics, right? How much money do we have, or how fast can you run, or whatever, right? But when we go back to the full metrics, and it’s basically an indefinite [00:50:30] number of metrics and the synergistic combination of those, they’re incomparable. So, now the competition thing is gone, because I’m identified with this whole complex thing, not this little narrow thing where, in that narrow thing, we can compete with each other, right? When that’s gone, then you self-actualizing not only doesn’t take away from me, you self-actualizing makes a more beautiful universe in a way that I can’t.
But I want to live in that more beautiful universe, so I’m incented to help you self-actualize. The gist of the story as [00:51:00] I see it is that we awaken to that – to everyone as a unique emergent property of an interconnected whole, who needs to simultaneously self-actualize and help everyone else self-actualize, with the attendant economic systems, governance systems, etcetera. Where all individual agentic activity is good for the individual and the whole simultaneously, just like cells in your body that are not competing against each other for scarce resource, even though we’ve tried to retrofit that [00:51:30] shitty capitalist idea on biology as a weird kind of confirmation bias.
Like, when you think of it in almost mythic terms – and you have to think of it in mythic terms – exponential technology is, you know, where does an exponent scale to? We’re moving towards having the power of Gods. Like, Barbara Marx Hubbard said this: you know, she watched the bomb be dropped by her godfathers, who were, you know, the generals that were directing World War II, and she saw that mushroom cloud from Hiroshima. And she’s like, “None of the depictions I ever heard of Zeus’s lightning bolt were as big as that.” [00:52:00] And so, she went and asked them – she asked Eisenhower and, you know, the other generals, I think MacArthur – and she said, “I see this new power that we have and I see the power of it that’s bad. What is the power of it that’s good?” And they had no answer.
And so, she set out on that life quest to answer that question, you know. But this other part of it we can ask is: if we’re scaling towards the power of Gods, then you have to have the wisdom and the love of Gods or you self-destruct. And so, you know, when you think of it [00:52:30] in those mythic terms, it’s like, that’s really the story right now. And so, the other part of that mythic frame is, you know, the coming of a different age. An age that is not characterized by, like, fundamentally separate interests and competition for scarce resources, power-over dynamics. But characterized – and this will sound corny – characterized fundamentally by the metric we are optimizing being love. [00:53:00] Which requires all the complex metrics of everything the tree is. It requires recontextualizing things and unabstracting them into their instantiation, right?
And why I say love is: what brings the atoms together into molecules is these attractive forces, right? What brings the molecules together, what brought the two of you together to make this thing? What brought us together? There are these attractive forces that bring anything together, that lead to synergy, that lead to the emergent properties, that lead to evolution. And ultimately, we become stewards of that, [00:53:30] bringing reality together into higher and higher orders of synergy.
Mike: It’s funny how this is such a repeated lesson, to go towards what attracts you instead of avoiding what repels you. Like, love is something I’ve never cared to define as well as you’ve defined it. I’m curious how you’ve actually arrived at that being the metric. Like, maybe you can unpack that a little more?
Daniel: It’s not a metric, because you can’t put a number on it. So, that was kind of euphemistic language. But love is, besides being a damaged word, [00:54:00] a poorly defined word in English. But if we want to borrow from another language and take the Eros-agape model, I think that’s useful. So, the Eros model of love, erotic love, doesn’t just mean sex. Sex is one place that it expresses itself, but it is a passionate desire, an energy. It’s an attraction – so we can think of things being attracted together, right? People being attracted together to procreate, to make new life, as a special case example [00:54:30] of people being attracted together to create anything, or anything being attracted together to create anything, right?
Subatomic particles being attracted together to have this relationship that is atoms. And so, we can think of the evolutionary impulse, right, as kind of an Eros energy. And the agape energy, that’s a kind of love too. And so, when we think about participating in the evolutionary impulse of the universe, consciously participating in it, then we think about [00:55:00] supporting things coming together in right relationships that are synergistic, right? That’s what it’s aiming at – though, you know, there is no destination. It’s an eternal process of becoming and blossoming. The agape love is actually worth mentioning here.
It is often thought of as an unconditional caring. I think a decent way of thinking about it is this: you can’t love something or someone meaningfully if you don’t actually see them, understand them, right? [00:55:30] Otherwise, it’s just an abstract kind of concept. For me to really love you, I’ve got to understand you, I’ve got to know you. Which means, for me to really love you, I have to seek to understand you and know you. And then as I understand you and know you more, I’m able to love the unique being that you actually are. And in seeing your uniqueness and your irreplaceability in the universe, I want for you everything that could help you enjoy life, self-actualize, contribute fully. And so, the process of seeking to see and know reality, and wanting for it and wanting to add to [00:56:00] it, is that impulse of love.
And I think it’s those two impulses, the becoming impulse and the being impulse, the evolving and the nourishing impulses, that become the deepest kind of drivers of both the inner states and the macro systems of the future.
Euvie: So, here’s a difficult question. The people who are creating the economic systems are not very enlightened at the moment. So, how can we help people awaken to this [00:56:30] understanding, so that we can create the kinds of systems that will support our evolution? Because we’re kind of running out of time.
Daniel: Well, if you think about it, you know, within the current economic system, moving up the ladder of success requires things like justifying externalities. The thing we’re doing is really good, so it’s worth all of that mining damage, or all of that waste damage, or the, you know, poor conditions of [00:57:00] our workers, or whatever. Someone whose empathy is really intact can’t do that. So, if you don’t have the ability to shut your empathy off, which is a kind of low-grade psychopathy, you’re just not going to be able to succeed. And as you start to succeed and get rewarded, it’s actually a system that is not just attracting but incenting and conditioning a kind of abstract psychopathy. And as you get into financial services, even more so, because you are not providing real goods or services. Basically, [00:57:30] the more you can game the system, the better you can do in that dynamic.
So, you have a system that attracts, incents, and rewards pathological behaviour. It’s important to understand that success in the current system has required being, at minimum, complicit with that system, having the capacity and willingness to be complicit with it. What we think of as power, even, which is power within a win/lose structure, power over, is psychopathological, right? [00:58:00] And so, we’re not creating new power structures, we’re creating new structures for shared activity, but they won’t be power in the same sense of what we have thought of as power anymore.
There are a few aspects of how people who are in positions of influence currently can start to tip. One is as they recognize near-term existential risks that are inexorable within the current system, which means they will not win, right? Their attempt to win breaks down [00:58:30] where the power of the win/lose dynamics exceeds the playing field’s capacity. If they understand that, which more and more people in positions of influence are starting to, then they realize that they actually need to learn a totally new kind of game. And when they realize that the new kind of game that is possible isn’t shitty communism, it’s not like a lowest common denominator. It’s a world where we can create a commonwealth economics that has a higher level of [00:59:00] material resource abundance for everyone than anyone, including them, currently has.
Because right now, we can’t make the best phone or the best computer, or the best fucking anything that the science knows how to make, because the IP for how to make the best one is held between 10 different companies that are competing against each other, so we don’t get information coherence. We don’t even research the best things in medicine because some of them aren’t profitable. So, the wealthiest people today don’t have access to good healthcare, because we’re not investing in it. The world that they have won in is [00:59:30] ending, and that is unavoidable. That’s inexorable. So, that game does not get to keep being played, no matter what. The world that we can create is better even for the people who are winning at the current system at the highest level. The world that we create can have a better system of medicine, a better system of sense making, a, you know, healthier environment, better technological stuff, better transportation, because of the coherence leading to better resource allocation and development of tech, etcetera.
[01:00:00] So, we can build a world that is better for everyone than anyone’s quality of life today, in sustainable harmony with the biosphere, for everybody. The only thing that they would lose is the differential, how much better it is for them now than it is for everyone else, but they can’t hold on to that anyways. I’ve actually seen many people who, you know, have been in power positions within the current system who have been coming to these awarenesses naturally. So, that’s a heartening thing.

Mike: Daniel, do you have any book [01:00:30] recommendations for our listeners if they want to continue learning about these subjects?
Daniel: Yeah. As we were just talking about a game that’s ending and a new game, the book Finite and Infinite Games [01:01:00] by James Carse is a really great introduction to thinking about what a world beyond game theory is. And it’s short, and it is both an interesting treatise on economics and governance and social systems, and a really profound spiritual book. So, I recommend that one, Finite and Infinite Games. And, you know, books like The Collapse of Complex Societies by Tainter, which goes through, you know, civilizations of the past that collapsed, and how and why they collapsed. It’s a very sobering insight for people to realize that existential risk at a whole-species, whole-global level is something we’ve never faced before, but we have faced it at the level of civilizations. And we almost always fail.
And to kind of understand why, so that we can make sure we’re not repeating that, that’s actually very valuable, very insightful. [01:01:30] And if people want to read, you know, about existential risk itself, going to the Future of Life Institute or the Cambridge Centre for the Study of Existential Risk, places like that, and reading their reports, those things have a lot of value. Going even back to Limits to Growth and the World3 model. It’s surprising in how many places the World3 model has been accurate since, you know, it was created. So, there’s still value in Limits to Growth. All of those are good ones. If I had to pick one for people to start with from here, I’d say [01:02:00] Finite and Infinite Games is a good start.
Mike: Cool, thank you. I’m going to pick that up.
Euvie: Also.
Mike: Any documentaries or movies or anything multimedia-esque that you can recommend?
Daniel: You know, for people who are not already familiar with Jacque Fresco’s work and the Venus Project, they should get familiar with it. Now, there are elements of what’s necessary that weren’t included: in the world he was coming up through, he didn’t get very exposed to complex systems theory, so the distinction between needing to build a world that’s complex and self-organizing rather than complicated was not a clear distinction for him. And regarding how [01:02:30] sense making and world view happen, he was, you know, working from a pretty straight kind of technotopian point of view. And he doesn’t speak much to the transition, to how we get there.
But with all those critiques in place, there’s a lot that he spoke to and demonstrated regarding how you could have a system beyond scarcity-based capitalism that worked radically better than this. And so, the documentaries: one is called Paradise or Oblivion, one’s called The Choice is Ours. And also the second in the Zeitgeist [01:03:00] series, Zeitgeist Addendum. The first half is about fractional reserve banking, the second half is about his work. I think for people who haven’t gotten into post-capitalism in much depth yet, looking at the resource-based economy that the Venus Project proposed is a really good entry point to some of the thought.
Mike: Daniel, this has been a fantastic conversation, one of my favourites we’ve ever done. So, thank you again for coming on and sharing your insights and what you’ve studied. This has just been fantastic for us.
Euvie: Yeah, very [01:03:30] inspirational, I’m sure listeners will love it.
Daniel: These were fun and important topics that we got to cover today. This was a blast. And, you know, when you asked how we can help people in positions of influence shift, I think what you all are doing is meaningful towards that. So, thank you for that.
Mike: Thank you.
Euvie: Thank you.
Mike: Alright Daniel, take care.
Daniel: Bye.
This is our fourth episode with Daniel Schmachtenberger, and like every time before, it left us fascinated and inspired.
Daniel is the co-founder of Neurohacker Collective and founder of Emergence Project, a futurist think tank.
This is probably our favourite episode with Daniel. We dive into the existential risks threatening humanity, and how we can mitigate them.
Existential Risks
An existential risk is anything that can cause the extinction of humanity. A catastrophic risk is one that can cause our near-extinction, or destroy human civilization as we know it. Existential and catastrophic risks are often divided into human induced risks and natural phenomena.
The natural phenomena include things like huge asteroids hitting the Earth, and solar flares or coronal mass ejections with a high enough intensity to knock out our power grid and destabilize nuclear cooling stations, which could render parts of the biosphere uninhabitable.
Human Risks
Although natural risks are important to look at, Daniel Schmachtenberger is more concerned about human induced existential risks.
These include man-made climate change, which can cause agricultural failure, mass migration, and ultimately state failure and war. It also includes the risk from nuclear weapons and exponentially growing technologies like AI, biotechnology, and nanotechnology.
Daniel Schmachtenberger claims that an underlying problem behind many man-made existential risks is our collective loss of the ability to make sense of the world. With the rise of fake news, scientific research funded by corporate interests, and technologies evolving faster than we can keep up with, it’s getting harder and harder to understand what is true and what is important.
“Exponential tech increases our ability to affect things, but not the quality of our choice.” – Daniel Schmachtenberger

Designing Post-Capitalist Systems
We have a system that runs on competitive advantage, both in national markets and at a global level. This system is based on win-lose game theory, which incentivizes actions that give people and entities a competitive advantage rather than actions that benefit humanity as a whole. This leads to pharmaceutical companies wasting time and talent repeating research (or not doing it at all if it’s non-patentable) while people die, and to the world holding enough nukes to wipe out the planet more than 10 times over.
Far from hopeless, Daniel reminds us that most of what we think of as “flawed human nature” is actually just social conditioning, conditioning that can be changed or redirected with properly placed incentives.
The fundamental question Daniel is trying to answer is this: how do we design a system inside which all incentives are properly aligned, and we start valuing things around us for their systemic and not differential value?
“We have a system that attracts, incents and rewards pathological behavior.” – Daniel Schmachtenberger

Neurohacker Collective
Daniel’s recent project is Neurohacker Collective, a smart drug brand with a vision of holistic human neural optimization. Their first product, Qualia, takes its name from the philosophical term for “an individual instance of subjective, conscious experience”.
After trying Qualia ourselves, we decided to arrange a special deal for our listeners who also wanted to give it a try. When you get a subscription to Qualia at Neurohacker.com, just use the code FUTURE to get 10% off.
“Human nature has the capacity to transcend what human behaviour has been so far.” – Daniel Schmachtenberger

In This Episode of Future Thinkers Podcast:
- The loss of sense making
- Why do people spread intentional disinformation?
- What is the value of a tree?
- What do we do with the shitty jobs?
- Human nature
- We’re the tentacles of the universe!
- I don’t exist outside the context
- The power and wisdom of Gods
- Are we entering a new age?
Quotes:
“If you’re scaling towards the power of gods, then you have to have the wisdom and the love of gods, or you’ll self-destruct.” – Daniel Schmachtenberger
“The real existential risk is a loss of the ability to make sense of the world around us: what is worth doing, and what the likely effects of things will be.” – Daniel Schmachtenberger
“We do have an innate impulse towards agency, towards self actualization. Within a win-lose game structure, that will look like a competitive impulse. But within a win-win structure, that will look like the desire to go beyond my previous capacity.” – Daniel Schmachtenberger
Mentions and Resources:
- The Grey Goo Scenario
- Capitalism is a paperclip maximizer
- Barbara Marx Hubbard
- I am, because of you: Further reading on Ubuntu
- The Choice is Ours (documentary)
- Zeitgeist Addendum (documentary)
- Paradise or Oblivion (documentary)
- Resource Based Economy
- Future of Life Institute
- Centre for the Study of Existential Risk
Recommended Books:
- Finite and Infinite Games by James P. Carse
- The Collapse of Complex Societies by Joseph A. Tainter
- Superintelligence by Nick Bostrom
- Engines of Creation by Eric Drexler
- Radical Abundance by Eric Drexler
- Limits to Growth by Donella Meadows, Dennis Meadows, and Jørgen Randers
More From Future Thinkers:
- Daniel Schmachtenberger on The Global Phase Shift (FTP036)
- Daniel Schmachtenberger on Neurohacking (FTP042)
- Daniel Schmachtenberger on Neurogenesis (FTP043)
- Dr. Jordan Peterson on Failed Utopias, Mapping the Mind, and Finding Meaning (FTP038)
- Phil Torres on Existential Risks (FTP023)