Euvie: In one of our previous episodes with you we spoke about how to assess and prevent existential risks. We got a lot of positive feedback on that episode, people really loved it. I know you’ve spent a lot of time thinking about the subject since then and you’ve refined your position. I wanted to ask you, what is your current thinking on existential risks and how to deal with them?

Daniel: Alright, great. Very happy to be back with both of you. It’s valuable for us to go up a level when we’re thinking [00:02:30] about risks, existential or just any kind of catastrophic risk. It’s not just a weird fetish topic and it’s not just purely, “There is some risk and we want to survive.” The deeper topic – Future Thinkers, you guys are looking here at the future of economy, the future of sense making, the future of technology, the future of healthcare, the future of ethics, the future of lots of things. In the presence of exponential technology that makes possible changing things that were never possible [00:03:00] to change before, what is the guiding basis for how we utilize that technological power well and make good choices in the presence of that?

We’ve always had a sense that human nature is a certain thing, fixed genetically at whatever level it’s fixed. Then we’re looking at good behaviour within that framework. As soon as genetic engineering of humans is real – at least as a thought experiment that could be a real possible technology – we could work to genetically engineer all people to be sociopathic hyper-competitors [00:03:30] who didn’t feel badly about winning at win-lose games at all and were optimized for it. We could work to link our brains into AIs that could take us even further in that direction of being able to win win-lose games. Or we could engineer aggression out of ourselves completely.

Then we start to say, “Wow, those are really different realities. What do we want?” Well, we want to win the game. Why do we want to win? There’s an assumed [00:04:00] game that we’re playing against whomever, right – some ingroup playing against some outgroup, whether it’s the US versus Russia versus China or whatever other ingroup-outgroup dynamic. We say, “What happens when you keep playing that ingroup-outgroup game forever?” Those games have always caused harm. They have a narrow set of metrics that define a win, and everything outside of those metrics is externality.

When we’re playing a win-lose game – whether it’s person on person or team on team, and whether the team is a tribe or a country [00:04:30] or a company or a race or whatever it is – the ingroup competing against an outgroup for some definition of a win, we’re directly seeking to harm each other, to cause the lose for each other, and we’re also indirectly causing harm to the commons: we’re competing for the extraction of scarce resource, we’re externalizing cost – pollution, whatever it is – to the commons. If we’re competing militarily, there’s harm that comes from warfare. We’re polluting the information ecology through disinformation to be able [00:05:00] to maintain the strategic competitive advantage of some information we have.

Rivalrous games necessarily cause harm. If you keep increasing your harm-causing capacity with technology – and technology that makes better technology that makes better technology, exponentially – the exponential harm-causing eventually taps out. Eventually, it exceeds the capacity of the playing field to handle, and the level of harm is no longer [00:05:30] viable. It’s an important thing to think about: when we consider the risks of weaponized drones or [inaudible [0:05:37] biowarfare or any of the really dreadful things that are very new technologies that we could never do before, they’re not really different in fundamental type from the things that have always sucked.
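
To make “taps out” concrete: exponential growth crosses any fixed threshold in a logarithmic number of doublings, so the size of the playing field barely buys time. A toy calculation (the numbers here are arbitrary placeholders, not anything from the conversation):

```python
# Toy sketch: exponentially growing harm-causing capacity vs. a fixed playing field.
# All numbers are arbitrary placeholders chosen for illustration.
import math

initial_capacity = 1.0        # harm-causing capacity at some starting point
field_limit = 1_000_000.0     # what the playing field can absorb
growth_per_generation = 2.0   # "tech that makes better tech": doubling each generation

# Tech generations before harm capacity exceeds what the field can handle.
generations = math.ceil(math.log(field_limit / initial_capacity, growth_per_generation))
print(generations)  # 20 -- and a playing field 1,000x bigger only buys ~10 more
```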

They’re like when we first came out with catapults versus not having those, or cannons or whatever it is – just a lot more powerful versions of the things that have always sucked. It’s important to get that [00:06:00] we have been murdering other people en masse – in this thing called war – as a reasonable way to deal with difference or to get ahead for a long time, for the whole history of the thing we call civilization. We have been unrenewably pulling plants out of the ground – in terms of agriculture, in terms of cutting trees or whatever – in ways that lead to desertification, for thousands of years.

All the early civilizations that don’t exist anymore don’t exist anymore because they actually led to their own self-termination in really critical ways. It’s not like war and environmental destruction etcetera [00:06:30] are new topics. But when the Mayans fell, or the Byzantines, or the Mesopotamians, or even the Roman Empire fell, it wasn’t everything. It was a lot, but it wasn’t everything. When the ingroup-outgroup dynamics keep getting larger – tribe to groups of tribes to villages, [inaudible [0:06:50], kingdom, nation state, global economic trading bloc – so as to be able to compete with the other larger teams, each keeps having the incentive to do those same things with [00:07:00] larger weaponry, extraction tech, externalization tech, narrative tech, information and disinformation tech. You get to a point where you have – like we have today – a completely globally interconnected supply chain and globally interconnected civilization dynamics because of scale, where the collapse ends up being really a collapse of everything.

The level of warfare possible can actually make an uninhabitable biosphere – can actually be not just catastrophic for a local people, which previous wars always were, but catastrophic for all people. [00:07:30] Get that rivalrous games have always caused the behaviours of humans that have sucked, but exponential suck is existential – that’s an important way of thinking of it. That means that we have to be different than we have ever been in the history of the thing we call civilization to simply not extinct ourselves. The way we have always been has been a smaller scale of the same thing that, at this scale, is now extinctionary. That’s a big deal, because it means that the solutions we’re looking for [00:08:00] do not look like previous best practices, because those were practices for how to win at win-lose games, and winning at win-lose games is now the omni-lose-lose generator.

It is now the thing that we can’t keep doing. We don’t like to think this deeply about things; we like to take whatever the good best practices have been, figure out how to iterate a tiny bit, and run with those things. Except the entire toolset of best practices we have is actually why we’re at the brink of a gazillion different X-risk scenarios that are a result of using [00:08:30] those toolsets. Words like capitalism and science and technology and democracy are our favourite words because they gave us a lot of rad shit – they did. They also created a heap of problems, and the problems are now catastrophic in scale, so the solutions need to be new stuff. People freak out because if you say something other than capitalism, they think that you mean communism and you want to take their stuff and have the state force everyone to do shitty jobs.

If you say something other than democracy, again, it’s assumed that it’s going to be some kind of fascist terrible thing, and [00:09:00] something other than science means a regressive religion. No. I want to be very clear that I’m not proposing systems that sucked worse than the current systems we have. I’m proposing a deeper level of insight and a deeper level of what we would actually call innovation and novelty than has been on the table so far. For instance, democracy. When Churchill said, “Democracy is the single worst form of governance ever created, save for all the other forms,” he was saying something very, very deep, which is: democracy’s the best form [00:09:30] of government we’ve ever created and it’s fucking terrible, but all the other ones are even worse – because the idea of government or governance is this really tricky thing where we’re trying to get lots of different humans to cooperate or agree or make choices together, and we just suck at that.

He was admitting something very important and true. Jefferson said similar things. We were able to get a lot of people to care about each other, to see through each other’s perspective, to make agreements, when it was a [00:10:00] tiny number of people. This was tribes, the history of tribes. That’s why they capped out at a very small size – above that size, you couldn’t care about everybody, know what was going on for everybody, factor their insight, share the same base reality, where if you hurt anyone you were directly hurting someone that you lived with and loved and cared about. As soon as you start getting to a size where you can hurt people anonymously, without knowing it – through some supply chain action or voting on something or whatever – it starts to become a totally different reality.

Anonymous people. We’re willing to give up some freedoms for [00:10:30] people we also depend on and care about and have this close bonding with. As soon as we get to larger-than-tribe dynamics, we have had a real hard time doing that in any way that doesn’t disenfranchise lots of people. So we have democracy, which says, “Okay, there’s no way everybody’s going to agree on anything, but we still have to be able to move forward and decide if we make a road or not, or go to war or not, or whatever it is. Let’s come up with a proposition of something to do and let’s at least see that more people like it than don’t like it. At least it seems to represent the majority [00:11:00] of thinking. That seems like a reasonable idea.”

Whether we have a representative or not, whether it’s a 67 percent majority or 51 percent or a voting currency – they’re all different versions of basically the same idea. Let me explain something about how bad the idea actually is, the catastrophic problems that it creates, so democracy will stop being this really wonderful word. That doesn’t mean we don’t give it its due for the beautiful place that it served in history. It’s just that it belongs in the rear-view mirror now, because if it continues in the forefront we [00:11:30] actually can’t navigate – it’s not an adequate tool for the types of issues we need to navigate. Again, remember, I’m not going to propose any other system ever proposed – I’m going to propose things that don’t even sound like governance, but that sound like a different method of individual and collective sense making and choice making.

Democracy’s a process where somebody or somebodies make a proposition of something, “We’re going to build the bridge this way or go to war,” whatever it is. They make a proposition to benefit something that they’re aware of that they care about. But they’re not aware of everything and they don’t care [00:12:00] about everything that’s connected equally, so other people realize, “Hey, the thing that you want to do is going to fuck stuff up that I care about. That bridge that you want to build so that you can get across the river without driving all the way around is going to kill all the owls in this area and mess up some fisheries. I care about that, the owls and the fisheries.” The other people are like, “Fuck you and the environmental owl, fishery stuff. We need to get to work.”

What you have now is a basis where, if the proposition goes through, it will benefit some things and harm other things. [00:12:30] If it doesn’t go through, the thing that would have been benefited now isn’t benefited, but the thing that would have been harmed now isn’t harmed. You will get an ingroup of people who care about the one set more, who then band together against the outgroup of people who care about the other thing more. This will always drive polarization. Eventually, the polarization becomes radicalization. We didn’t even try, in the process, to figure out what a good proposition that might do a better job of meeting [00:13:00] everybody’s needs was. We didn’t even try to do a context map of what all the interconnected issues here are, what a good design that everybody might like would look like – can we even try to find a synergistic satisfier?

That’s not even part of the conversation. Maybe, rather than making the bridge there, we could make the bridge in a slightly different area where there are no owls. Maybe we can move the owls. Maybe we can use pontoon boats. Maybe we don’t even need to make a bridge, because all the transportation back and forth is for one company and we can just move the company’s headquarters. Maybe… The sense making [00:13:30] to inform the choice making is not a part of the governance process. If I’m making choices blind, or mostly blind, based on a tiny part of the optics, then other people who have other optics are like, “Wait, that’s not a good choice.” And we know that if one eye shows something but my other eye shows something else, I want to pay attention to both eyes. I don’t want them in a game-theoretic relationship with each other – they do parallax and give me depth perception.

My eyes and my ears don’t want to make each other the same. They actually really want to do different [00:14:00] functions but they also want to pay attention to each other. If I hear something and I think that it was over there and I’m going to go away from it but my eyes tell me it’s actually somewhere else, I want to pay attention to all of my sense making. My brain is doing this really interesting process of taking all of this different sensory information, putting it together, and trying to pay attention to all of it to make a choice that’s informed by all that sense making.
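
The way a brain treats disagreeing senses has a standard formal analogue worth seeing once: inverse-variance weighting, where each estimate counts in proportion to its reliability rather than being forced to agree or discarded. A minimal sketch (my example, not something from the conversation):

```python
# Minimal sensor-fusion sketch: combine two disagreeing estimates by weighting
# each with the inverse of its variance, instead of picking a winner.
def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused, fused_var

# Eyes place a source at 2.0 m (low noise); ears place it at 3.0 m (noisier).
print(fuse(2.0, 0.1, 3.0, 0.5))  # ~(2.17, 0.083): better than either sense alone
```

The fused estimate is strictly less uncertain than either input, which is the parallax point: differing senses cooperating beat differing senses competing.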

We have never been able to think about governance processes like this, where we start with: how would a group of people that are inter-affecting each other, that are inter-affecting [00:14:30] within a particular context as sense-making nodes, be able to share their sense making in a way that could create parallax? That could actually synthesize into a picture that could create design criteria of what is actually meaningful, good, desired etcetera by everybody? That could work towards progressively better synergistic satisfiers, based less on a theory of trade-offs – which will always create some benefit and some harm, which will lead to some ingroups fighting some outgroups, which will lead to increased seeking [00:15:00] of power on both sides, which will eventually turn into a catastrophic self-terminating scenario.

When we look at it, we see that this type of scenario has always led to left-right polarization that eventually becomes radicalization that ends in war to stabilize. You can’t keep having war with exponential tech, where non-state actors have existential-level tech. You just can’t keep doing that. The thing that we’ve always done we can’t ever do anymore. That’s a big deal. We are also able to think about how brains put the information from [00:15:30] eyes and ears together. How do a bunch of neurons come together in neural networks to make sense in a way that none of them individually does? How do 50 trillion cells, as autonomous entities, operate in your body in a way that is both good for them as individuals and good for the whole simultaneously – where they are neither operating at their own benefit at the expense of the ones that they depend on, nor damaging themselves for the other?

They are really in an optimized symbiosis, because the system depends on them and they depend on the system. [00:16:00] That means that they depend on each other and the system, etcetera. Can we start to study these things in a different way that gives us novel insights into how to have higher-level organization, collective intelligence, collective adaptive capacity, collective sense making, actuation capacity? The answer is yes, we can – and we can see that that’s not the thing that we’ve called democracy. The thing that we’ve called democracy is some process of a proposition, based on some very limited sense making, with some majority vote that is always going to [00:16:30] lead to polarization. That’s going to lead to the group that doesn’t get its way feeling disenfranchised, then typically turning against the group as a whole, and then the warfare that occurs between them of whatever kind, whether info warfare or actual warfare.

The reason I’m bringing this up is because democracy was great compared to just a terrible fascist dictator. It definitely is not adequate to the complexity of the problems we have to solve, nor can we continue to handle the kinds of polarization [00:17:00] and problematic dynamics that it inexorably creates. The same is true with capitalism, the same is true with the thing that we call science and tech, which is probably the hardest one, I’ll get to that one last. The thing that we call capitalism, we all know the positive story there that it incents people being self-responsible and being industrious and seeking to bring better products and services to the market at a better value, and those who do will get ahead and they should get ahead because they’re going to be good stewards of resource because they got the resource by bringing [00:17:30] products and services at a value that people cared about etcetera.

We know that story. There is some truth to it, like there was some truth to the democracy story; it served an evolutionary relevance, and it most certainly can’t take us to the next set of steps. For instance, the things that we call money and ownership – you can actually think of them, related to governance, as a type of choice-making process. We don’t usually think deeply enough about the nature of what these types of structures are. If I have a bunch [00:18:00] of money, it means I have concentrated choice-making capacity, because I can now make a choice that I can extend through a bunch of employees. I can have a bunch of them working aligned with my sense making, on my behalf, to increase my actuator capacity, or I can get physical resources to be able to build something that extends my actuator capacity.

We recognize that a system which ends up determining who has resource – where the resource ends up being a way that some people are actually directing [00:18:30] the choice making of other people – is a choice-making system. And we say, “Well, it’s actually a shitty choice-making system, because the idea that those who have the money are better choice makers for the whole is just silly. It’s just really silly.” Even if someone did bring good products or services to market effectively, that doesn’t mean the kids who inherit it will, and it doesn’t mean that I can’t make money by extraction rather than production, debasing the environment, and it doesn’t mean that I didn’t externalize.

Maybe I figured out how to externalize more harm and cost to the commons [00:19:00] than somebody else did, which drove my margins up, so I got more of it. And it doesn’t mean that I didn’t do war profiteering, right? We start to realize: okay, that’s actually just a shitty choice-making system. We stop even thinking about “the future of economics” and “the future of governance,” because the words are so loaded that we just can’t help ourselves but boot up bad concepts when we think of those words. We start thinking instead: us humans are making choices, individually and in groups, based on information that we have, [00:19:30] towards some things that we value and hope to have happen. How do we get better on value frameworks – what we really seek to benefit? How do we get better on our sense-making processes, and how do we get better on our choice-making processes?

What is the future of individual and collective value frameworks, sense-making and choice-making processes? That ends up being what obsoletes the things we call economics and governance now. Let’s come back to capitalism for a minute. Okay, not only is it just not a great choice-making [00:20:00] system, and not only does it end up actually being a pretty pathological choice-making system – because it’s easier to extract than it is to actually produce, and it’s easier to rip off what someone else worked really hard on than it is to work really hard on making the thing, and it’s easier to externalize cost to the commons than not, etcetera. We say, “Okay, it’s not just that. It’s that we’ve got 70 trillion dollars’ worth of economic activity, give or take, trading hands” – depending on what you consider a dollar – “where pretty much all of that externalizes [00:20:30] harm at some point along the supply chain.”

Meaning, the physical goods economy requires mining on one side and turns it into trash on the other. The linear materials economy is destructive to the environment and causes pollution along the whole chain, etcetera. It requires marketing to drive it – marketing that is polluting people’s minds with a bunch of manufactured discontent. It drives disinformation, with some companies trying to disinform other ones to maintain strategic advantage. And it’s all [00:21:00] running on dollars that are supported by militaries. You just think about the whole thing and you’re like, “Wow, okay, even the things that I think are good – just the movement of those dollars is externalizing harm in an exponential way that is moving towards catastrophic tipping points.”

Alright, that can’t work. Then, even worse, you say: alright, let’s take a look at Facebook for a minute. Since the last time we talked – between Russia and Cambridge Analytica and Tristan etcetera – there’s a lot more [00:21:30] understanding of the problem of platforms that have a lot of personal data but also have their own agency with respect to you. We say: okay, Facebook wants to maximize time on site, because they make money by selling marketing. The more users are on site more often, the more they can charge for the marketing, so they figure out – using very complex AI, analytics, split testing etcetera – exactly what will make the site as sticky for you as possible. [00:22:00] You happen to feel really left out when you see pictures of your friends doing shit without you, and that makes you click and look, and it makes you feel really bad – but they optimize the shit out of that in your feed, because it’s what actually keeps you on.

For other people it’s the hyper-normal stimuli of the girls that are all airbrushed and photoshopped that makes them stay on, or whatever it is. It’ll always be a hyper-normal stimulus. Think about the way that McDonald’s wanted to make the most addictive shit, or Coca-Cola or Philip Morris: if I am on the supply side of [00:22:30] supply and demand, I want to manufacture artificial demand, because I want to maximize the lifetime value of a customer, the lifetime revenue. If I can make you addicted to my stuff, that’s the most profitable thing. When you look around at a society that has ubiquitous rampant addiction of almost all kinds everywhere, you realize that’s just good for business, that’s good for GDP.

We see that Facebook, just by its own nature, doing what it’s supposed to be doing – public company, fiduciary responsibility to the shareholders, maximize profit, blah, blah, blah – is going to maximize your time on site, and [00:23:00] maximizing your time on site is going to work better through hyper-normal stimuli that make you addicted than through things that make you more sovereign – because sovereign, you actually realize that your life is better when you get the fuck off Facebook and go hang out with real people. It has to drive addiction and discontent and whatever else, and it’s really good at a bunch of crafty tricks for how to do that.
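
The mechanism being described – split-testing toward whatever maximizes time on site, with no wellbeing term anywhere in the objective – fits in a few lines. A deliberately simplified sketch (mine, not Facebook’s actual system; the variant names and stickiness numbers are invented):

```python
# Epsilon-greedy split test that optimizes only for minutes-on-site.
import random

random.seed(0)
TRUE_STICKINESS = {"friend_update": 3.0, "outrage_bait": 7.0, "envy_photo": 6.0}

estimates = {v: 0.0 for v in TRUE_STICKINESS}  # running mean minutes per variant
counts = {v: 0 for v in TRUE_STICKINESS}

for _ in range(10_000):
    if random.random() < 0.1:
        variant = random.choice(list(TRUE_STICKINESS))   # explore occasionally
    else:
        variant = max(estimates, key=estimates.get)      # exploit the stickiest
    session = random.gauss(TRUE_STICKINESS[variant], 1.0)  # one simulated session
    counts[variant] += 1
    estimates[variant] += (session - estimates[variant]) / counts[variant]

print(counts)  # nearly all traffic converges on "outrage_bait"
```

Nobody wrote “show outrage”; the system just measured what held attention and routed traffic there. The externality lives in the objective function, not in anyone’s intent.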

We see then that corresponding with that is the rise of bulimia and anorexia and all kinds of body dysmorphia. We see that corresponding with it are depression and rising suicide rates. [00:23:30] We see that the hyper-normal stimulus of polarizing news grabs people more than non-polarizing news, because fights in an evolutionary environment are really important things to pay attention to – they’re a kind of hyper-normal stimulus. The most polarizing titles are going to show up on YouTube videos and get the most traction. You notice, whenever there’s a debate and it’s supposed to be a friendly debate, you look at the title of the YouTube video that gets the most shares and it’s, “So-and-so eviscerates so-and-so.”

Mike: Yeah, “Destroys this person.”

Daniel: [00:24:00] Yeah. That hyper-normal stimulus grabs us in the [inaudible [0:24:03] in the worst way possible, makes the worst versions of us, but it fucking maximizes time on site. Now, you look at it and you say: okay, the net result of Facebook is increased radicalization in all directions, increased broken sense making, decreased sense making, increased echo chambers and antipathy at a level that will increase the probability of both civil wars and world wars and everything else, [00:24:30] increased teen depression and suicide rates and shopping addictions. You’re like, “Wow, this is fucking evil. This is a really terrible thing.” Does Facebook want to do that? We could almost make up a conspiracy theory that Facebook has optimized how to make the world worse as fast as possible.

No, Facebook doesn’t want to do that. Facebook is just trying to make a dollar and justify to itself that it should continue to exist. It just happens that how it makes a dollar has the externality of all that stuff – in the same way as when Exxon was making a dollar, or the military-industrial complex, [00:25:00] or a sick-care industry that makes money when people are sick and not when they’re healthy. Fuck, okay, look at that whole thing. Capitalism is inexorably bound to perverse incentive everywhere. At an even deeper structural level, if we tried to fix that, we’d say: okay, we’re competing to own stuff, and the moment you own something I no longer have access to it – even if you’re not using it, even if you hardly [00:25:30] ever use it, that drill that you don’t even remember where you put.

I don’t have access to it. As a result, there’s a scarce amount of this stuff, and the stuff that is more scarce, we make more valuable. You own it, I don’t have access to it; and because you want to be able to provide for your future, or whatever, and there’s uncertainty, you want to own all the shit that you can and pull it out of circulation, put it in safes and security boxes. That takes a lot of resource from the earth, and everything you own is just bothering [00:26:00] me, because it’s being removed from my possible access. We are in a rivalrous relationship with each other because the nature of the good itself is rivalrous – because of the ownership dynamic, right, and the valuation on scarce things in particular. Everybody can say it makes sense why we value scarce things: if there’s enough for everybody, it doesn’t confer the same type of advantage as when there’s not enough for everyone. Except the problem, of course, is that we make decisions based on an economic calculus – [00:26:30] which we do; the CFO looks at the numbers and says, “No, this quarter we have to do this,” and they’re only paying attention to the numbers they’re paying attention to.

If air isn’t worth anything – because there’s enough of it for everybody, and I can’t increase my strategic competitive advantage over you by hoarding more air – then air is worth nothing. Even though we all die without it, it is literally valueless in our economic calculus. We will pollute the shit out of it and burn the shit out of it, fill it full of CO2, pull the O2 out of it in the oxidizing [00:27:00] of hydrocarbons, because we don’t factor it – it’s not scarce and doesn’t provide competitive advantage. The gold, on the other hand – the gold we will fight wars over. We’ll destroy environments and cut down the trees that were actually putting the oxygen in the air we don’t give a shit about, to mine the gold out, so that we can put it in a safety deposit box that we don’t ever look at, where it doesn’t get to do anything other than be noted on my balance sheet as some increased competitive advantage that I have over you.

[00:27:30] Because if there’s not enough for everybody to have it, then I get some competitive advantage by having it while you don’t. The value is proportional to that, not to the real physical utility of what that metal could do – which is why the metal is not actually being used in electronics; it’s sitting in gold bars and [inaudible [0:27:42] and wherever else. This is all insane. And, of course, we see that the moment we make abundant things worthless – even if they’re the foundation of life – and make scarce things worth a lot – even if they’re meaningless – it creates a basis to [00:28:00] artificially manufacture scarcity and avoid abundance everywhere. If I have something I supply and I make it abundant, then all of a sudden it’s worth nothing. Which is why, if I make some software that I could give to the whole world for free once I’ve made it – I just have to make enough money to have made it – no, no, no: I’m going to patent-protect it and come sue you if you figure out how to pirate it, so that I can keep charging you even though I have no unit cost.

I’ve actually solved the scarcity problem, and I’m going to artificially manufacture [00:28:30] scarcity to keep driving my balance-sheet equation. The Kimberley diamond mines burn and crush their diamonds, because we thought diamonds were scarce, we made the price high, and then we realized they weren’t scarce and the people who had the price high didn’t want that to be known, etcetera, etcetera. If we want a world of abundance for everybody, we can actually technologically engineer the scarcity out – creating abundance everywhere is a feasible thing to do, and we can talk more about that later. But you can’t have an incentive on scarcity and engineer it out at the same time. For now, there’s scarce stuff and we’re competing for it.

You own it, [00:29:00] which means you possess it and you remove my capacity for access, so I want to own it faster than you own it. Now we get into a tragedy of the commons and a multi-polar trap thing. Say I go cut down a tree in the forest. I don’t need that many trees right now, but you’re in this other tribe and you’re going to go cut down some of the trees. I would like there to be a forest rather than a totally clear-cut environment, because forests are beautiful and I grew up with the forests and I like the animals that are there. But I know that if I don’t [00:29:30] cut down the trees and I leave the trees, there still won’t be a forest, because you’re going to cut down the trees. And since we know you’re going to cut down the trees, and your tribe knows the same about us, we say, “Fuck it, I’ve got to cut down all the trees as fast as I can. If there’s going to be no trees anyways, I might as well get them rather than them get them – because if they have increased economic power over me, they’re going to use those pieces of wood against me in war or in economic competition.”

Now we all increase the rate of the destruction of the forest as fast as we fucking can, even though we’d all like a forest, because we’re caught in a multi-polar prisoner’s dilemma [00:30:00] with no good way out. The tragedy of the commons is a multi-polar trap. So is the arms race, where we say, “You know what, we should just not make weaponized AI, we should just not do that.” Everyone in the world thinks that a world of facial-recognition weaponized AI drones is a world that would be better not to live in. Nobody wants to live in that world. It doesn’t matter how rich or powerful you are, you’re not fucking safe from a bunch of bad scenarios in that world. [00:30:30] But we’re all making them, everybody’s advancing the fucking tech. Why are we doing that?

Why don’t we just make a treaty not to do it? Because if we make a treaty, we secretly know that the other guy’s going to defect on it secretly, and if he gets the tech first, it’s a world war and he’ll beat us. He knows that we’re going to defect on it too. So either we make the treaty and we all defect on it secretly – while agreeing to it publicly, while trying to spy on the other guys, while trying to disinform their spies about what we’re actually doing – or we just don’t [00:31:00] even fucking agree to it. We move forward towards a world that increases the probability of extincting ourselves every day, in these scenarios, these multi-polar traps. We’ll come back to this – I was on capitalism, but I’m going to come back to this multi-polar trap, so please remind me to do it.

One of the things we have to solve at a generator-function level is multi-polar traps as a category, meaning all instances of them [00:31:30] categorically. A multi-polar trap basically means a situation where the local optimum for an individual agent, if they pursue it, leads to the global minimum for the whole. If I don’t make the nukes or the AI, or take the tree down, I get fucked right now. If I do it, I don’t get fucked right now – but as we all do it, we all get worse fucked later. You’ve made a scenario where the [00:32:00] incentive of the agent, short-term and locally, is directly against the long-term global wellbeing. That is the world at large right now, and capitalism is inexorably connected to that, because it drives these rivalrous dynamics, and rivalrous dynamics are the basis of multi-polar traps.
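
That definition – local optimum for each agent, global minimum for the whole – can be checked directly in a toy commons model (my construction; the harvest and regrowth numbers are arbitrary):

```python
# Three tribes harvest a regrowing forest. 'take' is each tribe's yearly
# harvest fraction: 0.01 = restrain, 0.30 = race to cut it down.
def simulate(takes, stock=100.0, regrowth=1.05, years=50):
    payoffs = [0.0] * len(takes)
    for _ in range(years):
        for i, take in enumerate(takes):
            harvest = stock * take
            payoffs[i] += harvest
            stock -= harvest
        stock *= regrowth          # forest regrows a little each year
    return [round(p) for p in payoffs], round(stock)

print(simulate([0.01, 0.01, 0.01]))  # ~([82, 81, 80], 254): restraint, forest thrives
print(simulate([0.30, 0.01, 0.01]))  # ~([107, 3, 2], 0): the lone defector wins big
print(simulate([0.30, 0.30, 0.30]))  # ~([47, 33, 23], 0): all defect, everyone poorer
```

Whatever the others do, harvesting fast pays better for the individual tribe; when all three therefore harvest fast, the forest is gone and every tribe ends up far below the all-restraint case. That dominance structure is the trap.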

This could all just sound depressing, except there are actually solutions to all of these things. There is a basis for how we deal with physical stuff that is not you owning some scarce thing [00:32:30] and removing my access. We all know examples of it. When you go to the grocery store and you use a shopping cart, you don’t bring your own shopping cart that you own – which would be a major pain in the ass – and neither do I. You have access to a cart, where there are enough carts that during peak hours, the busiest hours, there are enough for everybody and enough left over for repairs. Your access to a cart does not decrease my access to carts. I’m not upset that you got a cart. We’re not in any competition over carts. Because I only need enough [00:33:00] for there to be enough during peak time – which is maybe 200 people – I only need 200 carts, 220 carts, even though that store might service 10,000 people a month.

Think about 200 carts versus 10,000 carts – how much less resource that takes from the environment, how much more efficient it is. You start saying, “Alright, let’s look at other places where this thing could happen.” We say, “Well, we’ve owned cars as a good, and when you own a car I no longer have access to it.” And then you’re mostly [00:33:30] just going to leave it sitting and hardly ever use it. You’ll use it now and again, but it’s going to spend 95 percent of its life just sitting places. As a result, there are a lot of fucking cars providing not that much transportation. That’s a lot of metals taken out of the earth, and a lot of plastics, and a lot of actual environmental cost to make those, only to have most of them never in use.

You start to say, “Okay, we look at car sharing like Uber and Lyft and whatever, and we start moving there from possession of a good [00:34:00] to access to a service.” That’s pretty cool. We still have to pay for it, and it’s still kind of shitty because there’s not enough of it to give me access to go everywhere, but it’s pretty easy to imagine that that thing takes over. Then you use something like blockchain to disintermediate the central company that’s pulling the profits out of it; now it’s cheaper access for everybody and puts more resource back into the quality of transportation. And then it becomes self-driving cars, so it doesn’t even have that cost – it’s just a self-maintaining dynamic. You say, “[00:34:30] Okay, now it takes a tiny fraction of the metals coming out of the earth to provide higher-quality transportation units for everyone, where you having access to transportation as a commonwealth service does not decrease my access to transportation as a commonwealth service.”
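
The arithmetic behind this access-versus-ownership shift is worth writing down once. A back-of-envelope sketch (the 200-cart and 10,000-shopper figures come from the cart example above; the car utilization numbers are my assumptions for illustration):

```python
# Shared access sized to peak demand, instead of one-per-person ownership.
monthly_shoppers = 10_000
peak_concurrent = 200
repair_buffer = 0.10                    # spare carts for maintenance
shared_carts = int(peak_concurrent * (1 + repair_buffer))
print(shared_carts, "vs", monthly_shoppers)   # 220 shared carts vs 10,000 owned ones

# Same logic for cars: an owned car sits idle ~95 percent of the time.
owned_cars = 10_000
owned_utilization = 0.05                # fraction of the day actually driven
shared_utilization = 0.60               # assumed for a well-run shared fleet
service_hours = owned_cars * 24 * owned_utilization   # transport actually delivered
shared_fleet = service_hours / (24 * shared_utilization)
print(round(shared_fleet))              # ~833 shared cars deliver the same service
```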

But when you use transportation to go to school and learn stuff, or go to a makers’ studio and make stuff, or go to a music studio and make stuff, you’re going to make stuff that also becomes part of a commonwealth – it enriches the commons that I have access to. We move from a rivalrous [00:35:00] ownership of scarce goods to an anti-rivalrous access to shared commonwealth resources. We went from rivalrous not just to non-rivalrous, meaning uncoupled – in rivalrous we’re anti-coupled, your wellbeing is directly against mine – but to anti-rivalrous, which is a coupled good: as you have access to more resources that make you more generative, what you generate enriches the commons that I have access to, so I am incented [00:35:30] to want to give you the maximum support, to support you having the maximum access to generative resources.

I’m not explaining all of what the future of the economic system looks like right now, but just starting to give a sense. When we think about the problems of capitalism: there have been problems associated with it forever, but the scale of the problems is just more catastrophic now. I’m also sharing examples of ways that we can start shifting some of the things that we couldn’t shift previously – some of the things that neither Marx nor Smith [00:36:00] had available to them. This is very interesting: we have to shift off these systems, and we can, at the same time. This is, to me, a very interesting developmental insight when I look at biology in particular. If we look at the 40 weeks of a baby in utero – we’ve talked about this before, but I’ll bring it up in this context – it couldn’t be born much earlier. It would be a preemie, and without a NICU or whatever it would die.

It also couldn’t stay much longer – it would get too big and never be able to be born; it would kill the mum and kill itself. It comes out in a pretty [00:36:30] narrow window, where for the first time it both can – it actually now has the capacities – and has to; it can’t stay any longer. It’s interesting: it’s got 40 weeks on one growth path. It’s growing the whole time; there’s a growth curve. But it’s not going to stay on that growth curve forever. If we tried to forecast its future and just continue the progression of the 40 weeks, around 50 weeks it kills itself and the mum. That’s not what happens – it goes through this discrete, non-linear phase shift that, if I had never seen it before, I’d have no idea how to predict. [00:37:00] Out of the birth canal, umbilical cord cut, respiration through the lungs, stuff coming in through the mouth rather than the belly button. Everything’s different, and it does it in a way that’s unprecedented in the whole 40 weeks that it has existed previously.

If I tried to plot the curve of what has been, it is not that. And it does it both when it has to and when it can. Look at any type of development phase. A chicken developing inside an egg: if it tried to come out earlier, it’s still goo; if it tried to stay in later, it starves. A caterpillar to [00:37:30] butterfly through chrysalis: same thing. If it tried to just keep eating, it would eat itself to extinction; if it tried to go into the chrysalis earlier, it wouldn’t have enough resources to make a butterfly – it would die, and the chrysalis would be partial goo. This is really interesting when we look at discrete, non-linear phase shifts. There’s one phase – the caterpillar’s getting bigger and bigger and bigger, eating everything in its environment – and if I just forecast that it keeps being that, I forecast that it eats all of the environment and then dies. Except that’s not what happens.

It does this really different thing that is not [00:38:00] an extension of the previous curve. When we try to take our capitalism curve, our nationalism curve, our science-and-tech curve and keep extending them, it’s just fucking silly, and it’s why we come up with silly stuff like it [inaudible [0:38:11] into infinity with the singularity. On one side, if we take all the things that seem like they’re good, then we [inaudible [0:38:17] into a singularity. If we take the things that look bad, then it just goes into self-extinction. It’s neither of those curves, because shit is getting exponentially more powerful, which means better and worse at the same time. That means it’s neither of those curves – it means [00:38:30] that this phase is just coming to an end, it’s destabilizing.
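
The forecasting mistake here is easy to reproduce numerically: fit the early, exponential-looking part of an S-curve and extrapolate, and the forecast explodes while the real system changes phase and settles. A sketch with invented parameters – the shape, not the numbers, is the point:

```python
# Extrapolating an early growth curve vs. what a phase-shifting system does.
import math

def logistic(t, limit=100.0, rate=0.3, midpoint=20.0):
    return limit / (1 + math.exp(-rate * (t - midpoint)))

observed = [logistic(t) for t in range(10)]  # early data: looks exponential
step_growth = observed[-1] / observed[-2]    # naive constant-growth assumption

for t in (20, 30, 40):
    forecast = observed[-1] * step_growth ** (t - 9)
    print(t, round(forecast, 1), round(logistic(t), 1))
# By t=40 the extrapolation is in the tens of thousands and still compounding;
# the actual curve has gone through its inflection and sits near its limit of 100.
```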

The things that are getting exponentially worse will end the phase before the things that are getting exponentially better will change the nature of those things. That means that we’re actually going to get something different altogether, if it is other than extinction. It’s very interesting that we’re at the place where these types of dynamics both have to change right now – because we have catastrophic-level tech that we never had, making the same things that always sucked [00:39:00] unsurvivable – and can change, because we actually have the level of insights that makes possible things that are different at a deep enough axiomatic level. It’s like when you’ve got a bunch of cells competing against each other and then there’s this metamorphosis where now they’re inside of a slime mould, part of a multi-celled organism, participating together as part of a shared identity.

That’s a really deep level of reorganization. People are facing that kind of shift: from separate, rivalrous, competitive agents [00:39:30] modelling themselves as apex predators, trying to compete with each other to be the most apex predator and predate the environment. We modelled ourselves as apex predators, right – that was an easy thing to do. We look around at nature and we’re more like lions than gazelles. We look a lot like chimpanzees – they’re kind of badass hunter apex predators, they coordinate, etcetera. We’re like, “That’s cool, we’ll do the apex predator thing.” We want to see where we can get in the dominance hierarchies – we get to add the prestige hierarchy – both how we do [00:40:00] within our tribe, and how our tribe as a whole, our species as a whole, sits in this apex position relative to the other species.

There’s a reason why we cannot think that way anymore. Again, we thought this way forever, and we cannot think this way anymore. When I say we thought this way forever, I don’t mean every indigenous tribe thought this way, because they didn’t. They had web-of-life, merely-a-strand-in-it types of thoughts. But the ones that thought the apex-predator way became more effective apex predators and killed those other people. It was effective to try and do this apex predator thing up until now. Now, it also destroys everything. [00:40:30] Again, the thing that has been adaptive is now anti-adaptive, which is why the thoughts that have made us win are the thoughts that make us extinct now. Take an apex predator: its ability to be a predator, its ability to kill other things and compete with the other predators for who’s most badass, is pretty fixed, and it’s pretty symmetrically bound with its environment.

Lions can’t get better at killing faster than gazelles get better at running. [00:41:00] They coevolve: the lions slowly get a little bit better at some things, and the gazelles also get better at other things. If the lions just rapidly got way better, they’d kill all the gazelles, then debase their own capacity to keep living, and then they’d be extinct. If great whites could make mile-long drift nets and just take all the fucking fish out of the ocean and reproduce at an exponential rate, they would have already self-induced their extinction – but they can’t. In all the years, the great whites never got a drift net. We, though, have drift nets, and nuclear weapons, and D9s, and [00:41:30] the ability to technologically extend our predatory capacity – and to do so in a way that is exponential and that makes us completely asymmetric with the environment that we depend on.

Again, I come back to the lion and I say: alright, the most badass lion, the most badass gorilla, is not like 10x more badass than the next most badass gorilla. It’s marginally better. It can marginally win at a fight, and only for a pretty short [00:42:00] period of time before the next guy takes him. Then you look at a Putin or a Trump and you say, “How much military capacity does that one person have to bear if they wanted to, or economic capacity, compared to, say, a homeless guy?” You look at the spread and you’re like, “Oh, this is a very different distribution of power than any other species has.” Other species did not have million-x or billion-x power dynamics within the same species. Freakin’ tremendous. Or that much more power [00:42:30] relative to their environment.

If the lions could get technologically more advanced in their predation faster than the gazelles could adapt, it would debase the stability of the entire ecosystem. The thing to realize is: if a cancer cell starts replicating in a way that’s good for it – it’s actually getting more sugar and replicating faster inside the body – and it keeps doing that, it kills its host and kills itself. It is ultimately suicidal; its own short-term success is suicidal. Viruses that kill people too quickly don’t propagate for very long, because they kill their host and don’t get a chance to propagate. The [00:43:00] less lethal viruses end up being the ones that get selected for over a longer period of time, because they get a chance to propagate. If there was a species that was so good at hunting that it killed everything in its environment, it would go extinct.
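
A classic predator–prey toy makes the asymmetry runnable: with a modest kill rate the populations cycle and persist; crank the kill rate up – a stand-in for drift nets – and the prey crash first, the predators second. A sketch with my own parameters and crude Euler steps, a cartoon rather than real ecology:

```python
# Lotka-Volterra-style toy: prey grow and get eaten; predators grow by
# converting kills and otherwise die off. kill_rate stands in for "tech".
def run(kill_rate, steps=3000, dt=0.01):
    prey, pred = 150.0, 40.0
    for _ in range(steps):
        prey, pred = (
            max(prey + prey * (1.0 - kill_rate * pred) * dt, 0.0),
            max(pred + pred * (0.1 * kill_rate * prey - 0.4) * dt, 0.0),
        )
    return round(prey, 2), round(pred, 2)

print(run(kill_rate=0.02))  # symmetric predator: both populations persist, cycling
print(run(kill_rate=0.50))  # amplified predator: prey collapse, then predators starve
```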

It’s not the most competitive advantage that makes it through; it’s self-stabilizing ecosystems that make it through. This is such a way more complete understanding of evolution: individuals within a species don’t make it through on their own, because they wouldn’t have survived without the whole species. Species don’t even make it through [00:43:30] on their own, because they wouldn’t survive without other species. Whole evolutionary self-stabilizing niches make it through. That’s fucking important, right?

Mike: Yeah, this whole idea of survival of the fittest is challenged by this concept, because at no point is survival of the fittest taking the whole system into account going forward indefinitely. It has to go through that phase shift.

Daniel: Yeah, survival of the fittest was something that had a local truth, but it was not the [00:44:00] only global phenomenon that was operating, because there was also a tremendous amount of cooperation happening – cooperation among members of a species, and between species, and inter-dependence on each other. Again, the idea of competition is a hyper-normal stimulus; it was an early hyper-normal-stimulus hijack, like sugar and porn and airbrushed pictures and likes on Facebook. In an evolutionary environment, fights stand out, even though they’re not mostly what’s happening. Mostly, if I am in a forest, [00:44:30] there’s a gazillion interactions happening every second – [inaudible [0:44:32] soil bacteria having a relationship with each other, gas exchange between me and the plants. That’s just boring, but it’s almost everything.

Then I see a couple of lions fighting and I’m like, “Shit, that’s really interesting – survival of the fittest.” There is this hyper-normal stimulus that made us mis-emphasize what was happening – it was a part of the phenomena, not all of the phenomena. There’s also this thing where, as we’re moving forward right now, the way [00:45:00] we have been applying that thinking is that some individual agent or some ingroups – countries, companies, races, whatever – can be more fit to survive than others, through better militaries, or better economic extraction tech, or better info and disinfo and narrative tech. That has always been true.

I’m not criticizing that – it was always true, and even necessary. If one tribe killed another tribe [00:45:30] and their life got better – because now the other tribe wasn’t competing for pigs with them, and now they got all the kids and they got all the stone tools that that tribe had made, and whatever – they’re like, “Shit, I realize that this killing-other-tribes thing is actually a pretty good evolutionary strategy.” Now all the other tribes have to build militaries or die by default. The win-lose game becomes obligate. One, the win-lose game worked – it actually worked, if you were good at it. Two, it was obligate: if you didn’t do it, you got killed [00:46:00] by Genghis Khan or Alexander the Great or whoever the fuck it was.

When we look at cultures that did not focus on militaries but focused on the arts and humanities and education and healthcare etcetera: they outcompeted the other cultures in terms of quality of life, but that wasn’t where the thing actually got decided. They all got murdered. The really effective murdering cultures combined and combined and made it through, and that’s us today. Yet the tools of murdering and the tools of environmental extraction keep going up and up, until [00:46:30] we’re at a level where the playing field just cannot handle that game anymore. You can’t keep extracting more stuff from the environment when you’ve already hit peak resource, when you’ve got the biodiversity issues and species extinction issues etcetera. You can’t keep polluting an environment when you’ve got dead zones in the ocean from nitrogen runoff, and CO2 levels in the air etcetera getting to the point of cataclysm.

You can’t keep increasing military tech following exponential tech curves, where [00:47:00] non-state actors can have fully catastrophic-level tech and you can’t even monitor it. You just can’t keep playing that game. The gist is that the thing that has always defined the game is that it’s always been a rivalrous game-theoretic environment – and a rivalrous game-theoretic environment that can produce tech that keeps increasing will always self-terminate at some point. We just happen to be in the imminence of that point. This is the first generator function of X risk. Now [00:47:30] we take this all the way back to the beginning of the conversation – I obviously got longwinded.

At the beginning of the conversation we asked: why are we focusing on risk? If we’re focused on “How do we design a civilization that is actually good, that’s beautiful, that’s desirable?” – those are hard terms, and we’ll get to that in a minute; that starts to get at some of the inadequacy of the thing we call science right now and its incommensurability with ethics, we’ll get to that. What does a beautiful civilization look like? [00:48:00] The first thing we can say easily is that it doesn’t self-terminate. If it self-terminates, we can mostly all agree that’s actually not a desirable thing. And if it is inexorably self-terminating – structurally self-terminating, not just one little accident that we can solve, but overdetermined through many vectors because of underlying generator functions – that’s not a good civilization design.

The first design criterion of an effective civilization is that it’s not self-terminating. Then we say, “[00:48:30] What are the things that cause self-termination?” What we find is that, even though there are a gazillion different ways it can express, ways it can actually happen, they come from very deep underlying dynamics – and if we understand those and solve them at the dynamics level, we fix all of them. The first one is this topic we’ve been talking about, and have talked about previously: rivalrous games multiplied by exponential technology self-terminate, because rivalrous games cause [00:49:00] harm with power, and more power ends up being more harm, until it’s more than the playing field can handle. We got that. Exponential tech – whether we’re looking at a scenario of everything getting fucked up by AI, or by biowarfare, or by nanotech stuff, or by so many different types of scenarios – those are all the same types of choices humans have always been making, just with those exponential powers added.

Given that we cannot put those technologies away – [00:49:30] we cannot get the world to stop making them, much as we often wish we could – we either figure out how to move from a rivalrous to an anti-rivalrous environment that is developing and deploying those technologies, or we self-terminate. This is the first design criterion: we have to create rigorously anti-rivalrous environments. It doesn’t end up being all of it – I’ll do two generator functions, and that’s one. That is [00:50:00] what we see in terms of all of the exponential tech risks, or war risks, or economic-collapse-leading-to-failed-state scenario risks: they all come from things like that. All of the environmental biosphere collapse stuff is also related to tech getting bigger – we’re fishing out more fish as we’re putting out more CO2.

It also relates to that, but it’s a slightly different thing that we look at here. Whether we’re talking about CO2 [00:50:30] in the air, or mercury in the air or the water, or micro-plastics in the water, or a continent of plastic in the ocean, or nitrogen effluent in the river deltas – those are all toxicity dynamics. They are all basically stuff that has accumulated somewhere that it shouldn’t have; we call that pollution. We don’t get to solve the dead-zones issue or the CO2 issue one-off; we have to solve all of those categorically, so that we’re not creating accumulation dynamics at all. [00:51:00] On the other side of that same coin are depletion dynamics: cutting down all the old-growth forests, fishing out all of the fish, species extinction, biodiversity loss, peak nitrogen, peak oil, etcetera. Those are all where we are using something in a way that depletes it.

Then it gets turned into [inaudible [0:51:21] pollution on the other side, where it accumulates somewhere. We can define toxicity formally as depletion or accumulation because of an open loop in a [00:51:30] network diagram. If the loop were closed, things wouldn’t accumulate – they would go back to the source we got them from, so the source wouldn’t have to deplete. Notice that in any kind of natural system – whether it’s a coral reef or a forest or whatever – when we go to a forest, there is no trash. The body of something dies: it’s compost. There’s faeces: it’s compost. Something gets cut and bleeds: it gets processed. It doesn’t matter what it is. Anything you can think of that is part of that environment [00:52:00] in an evolutionary sense, the environment has evolved to be able to process.
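
That formal definition – toxicity as accumulation plus depletion driven by an open loop – fits in a few lines of stock-and-flow code (my formalization of the statement above; units and rates are arbitrary):

```python
# Stock-and-flow toy: loop_closure is the fraction of used material returned
# to its source (compost, reuse). Whatever is not returned depletes the
# source and accumulates at a sink -- i.e. pollution.
def run(loop_closure, source=1000.0, throughput=10.0, years=80):
    sink = 0.0
    for _ in range(years):
        flow = min(throughput, source)       # can't extract what's already gone
        source += flow * loop_closure - flow
        sink += flow * (1 - loop_closure)
    return round(source, 1), round(sink, 1)

print(run(loop_closure=1.0))  # (1000.0, 0.0): closed loop, no depletion or buildup
print(run(loop_closure=0.1))  # (280.0, 720.0): mostly open, source drains, sink fills
```

Depletion and accumulation are literally the same number viewed from the two ends of the open loop, which is why closing the loop fixes both at once.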

There’s also no unrenewable use of anything. Anything that is utilized is utilized in a closed-loop fashion. One of the things we see in complex systems, in natural systems, is comprehensive loop closure. One of the things we notice about human-designed systems is open loops everywhere. The materials economy itself – the linear materials economy that takes virgin stuff and turns it into trash after using it for a very short period of time – is a [00:52:30] macro open loop, and there are micro open loops everywhere. We have to categorically solve that; we have to basically close all of those loops. Another way of saying what this generator function of issues is: the stuff that nature makes is what we call complex – it’s self-creating, self-organizing, self-repairing.

The stuff we make – designed tools – is complicated. It’s rad; the computer we’re talking on is rad. But if [00:53:00] it got broken, it would not self-repair, and it didn’t self-organize – it was made, designed, from the outside. We can think of it this way: complex stuff comes about through evolution; complicated stuff comes about through design. Two different types of creative processes with fundamentally different characteristics. Complex stuff has comprehensive loop closure everywhere, because it couldn’t externalize something and still be selected for adaptivity – the adaptiveness factors everything. Whereas [00:53:30] if I’m building something, I might make it for one reason or two or three reasons, but it actually affects a lot of other stuff, and the other stuff wasn’t what I was trying to optimize for, so there end up being more externalities.

Even this computer I’m talking to you on right now was optimized for a bunch of things, so it’s really cool. But the fact that it’s got a backlit 2D screen at a fixed focal length from my eyes, where I’m getting macular degeneration from spending too many hours on it – my eye health was not one of the things it was built to focus [00:54:00] on. The fact that it’s fucking up my posture ergonomically, by me looking down at the screen, wasn’t one of the things it tried to focus on either. There were a gazillion other things. What happened to the environment in the supply-chain process of getting the metals and making this computer affected a gazillion things that were not part of its design criteria – which means the making of it required externalizing a bunch of harm, i.e. a bunch of open loops, where it affected stuff that was not internalized to the process.

We can see that if a forest burns, it repairs itself. [00:54:30] If a house burns, it does not repair itself. If my computer gets broken, it doesn't fix itself, but if I cut myself, I heal. There's this fascinating difference. The reason we're bringing this up is to say that for something to be anti-fragile, it has to be complex. Complexity is the defining origin of anti-fragility. Complicated things are all ultimately fragile, just more or less so. If we have a situation where complicated systems [00:55:00] subsume their complex substrate and continue to grow – basically, we're converting the complexity of the natural world into the complicated built world, and it's continuing to grow – it will eventually collapse, because the complicated world is fragile. If you notice, it's just like my computer: the water infrastructure is complicated, not complex.

The pipes don’t self-repair, they can break easily, they’re subject to being broken on purpose or accident. The same is true with everything – [00:55:30] the roads, the energy grid, everything. Now, when I look at globalization, I say, “I’ve got an increasingly interconnected complicated world that is increasingly more complicated where the failures anywhere can trigger failures in more places, because nowhere can actually make its own stuff anymore, because their shit is complicated enough, it has to be made across this whole thing. This computer we’re speaking on took six continents to make. If China did, everywhere is fucked. [00:56:00] If the US died… There are so many places. If mining wasn’t accessible in Africa, everywhere’s fucked.

We see an increasingly interconnected, complicated – which also means increasingly fragile – built world that we're trying to run exponentially more energy through, in terms of human activity, dollars, etcetera. That's happening while we're decreasing the complexity of the biosphere far enough that [00:56:30] its own anti-fragility is going away. We're getting to a place where, rather than the climate being self-stabilizing, it can go into auto-destabilizing, positive feedback cycles, where the biodiversity is getting low enough that you can get catastrophic shifts in the nature of the [inaudible [0:56:45] that's made it possible to have a biosphere like the one that we've lived in. If you have a world where complicated systems are subsuming their complex substrate and continuing to grow, they will eventually collapse. These are two different generator functions, where we can say, "[00:57:00] If I'm trying to solve ocean dead zones, or plastic, or species extinction as one-offs, I will certainly fail."
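
To see the fragility claim in miniature, here is a toy dependency-cascade sketch (the dependency graph is invented for illustration; real supply webs are vastly denser): when every region depends on every other region for some critical input, a single failure propagates to all of them.

```python
# Toy cascade in a fully interdependent supply web. The graph below is
# invented for illustration.
depends_on = {
    "US": ["China", "Africa_mining", "EU"],
    "China": ["Africa_mining", "US"],
    "EU": ["China", "US"],
    "Africa_mining": ["China"],  # mining equipment, in this toy graph
}

def cascade(failed):
    """Return every region that loses a critical input, transitively."""
    down = {failed}
    changed = True
    while changed:
        changed = False
        for region, deps in depends_on.items():
            if region not in down and any(d in down for d in deps):
                down.add(region)
                changed = True
    return down

print(sorted(cascade("China")))  # all four regions go down in this toy web
```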

At most, I move the curve of collapse a year, but there are so many other scenarios for failure that are overdetermined. If I don't solve the generator function of all of them, I haven't actually got it. Having a right relationship between complex and complicated, having loop closure within the complicated, [00:57:30] and creating anti-rivalrous spaces that are a safe basis for exponential technology is the first level of assessment of necessary design criteria for a viable civilization.

Euvie: Can we talk about [00:01:30] some of the specific examples of the generator functions and what they look like in a society?

Daniel: Okay. I want to look at these same two generator functions through a couple of different lenses that end up mapping to the same thing, but these lenses are valuable. One way of saying what needs to happen is that we need systems of collective intelligence and collective sense making and choice making that [00:02:00] increase with scale effectively. I'll say why that makes sense. If you look at Geoffrey West's work out of the Santa Fe Institute – his book Scale is a classic example – we see that you've got the productive capacity, or intelligence capacity, or design capacity, of a person.

Then you bring a few people together and you get increased productive capacity. For a little while, you'll actually get an exponential up curve where more people give you a lot more ability, because they're sharing new [00:02:30] capacities. This is the start-up phase of something. Pretty soon, you get an inflection point where adding more people starts having diminishing returns per capita. Then you get to the plateauing part of the S curve, where adding more people is not increasing adaptive capacity really at all. If I have a curve where more people don't keep adding adaptive capacity well, then those people will always have an incentive to defect against that system, because they'll actually have more [00:03:00] adaptive capacity per capita if they defect against it.

So, as long as I have a system of collective intelligence that cannot scale with the number of people, it can't include everybody, right? It will always force its own collapse, its own defection. So far, we've found that all of the intentional human systems – countries and companies – have this type of curve. Again, this shows why none of those can make it. This is a design criteria thing: [00:03:30] if we're creating a social system and it doesn't have sense making and choice making able to increase at least linearly with the number of people through some process, then the people will defect against that system as soon as you pass the inflection point.
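
A toy model can make that defection threshold vivid (the functional form and every constant here are assumptions invented for illustration, loosely in the spirit of the scaling curves in Geoffrey West's Scale): if total adaptive capacity follows an S-shaped curve in headcount, per-capita capacity rises during the start-up phase, then falls below what an individual could get alone – and past that crossover, defection pays.

```python
# Toy model of the defection threshold. The functional form and all
# constants are invented for illustration.
CEILING = 10_000   # total adaptive capacity the system plateaus at
MIDPOINT = 100     # headcount around which returns start diminishing
SOLO = 5.0         # adaptive capacity one person has on their own

def group_capacity(n):
    # Superlinear at small n (the start-up phase), plateauing at large n.
    return CEILING * n**2 / (n**2 + MIDPOINT**2)

for n in [10, 50, 100, 500, 2000, 5000]:
    per_capita = group_capacity(n) / n
    print(f"n={n:>4}  per-capita={per_capita:6.2f}  "
          f"{'stay' if per_capita > SOLO else 'defect'}")
```

In this toy run, per-capita capacity climbs from about 10 to 50 and then falls; somewhere between 500 and 2,000 members it drops below the solo baseline, which is where the incentive flips to defection.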

They’ll either defect and make their own thing completely, unless the big system – even though it’s less efficient per capita than them – is still a lot bigger than them and it would take them out. In which case, it’s not safe to overtly defect, then they covertly defect or they defect while staying within [00:04:00] the system, which starts to look like everything you see in the world today where someone says, “Okay, what is my particular bonus structure and how can I optimize getting the most bonuses here, even though it’s not what’s good for the company?” They have now defected against the wellbeing of the whole, because their own incentive and the wellbeing of the whole are actually anti-coupled or, at least, miscoupled.

We can see that almost everywhere, people within systems have actually defected against [00:04:30] being optimally aligned with the integrity of the system. They're basically preying on the system in some way, while continuing to look like they're serving it, because that's where the incentive is. That's going to, of course, make the system radically unadaptive and speed up the rate of its inevitable decline. If I could get a system that could scale its actual adaptive capacity with the number of people in it, then being in the system and participating with it would always be better for the people than defecting against [00:05:00] it, covertly or overtly.

One way of looking at what we have to solve is sense making and choice making processes that scale adaptivity, right, where the adaptiveness scales with the number of people. We'll come back to this in a minute; this ends up being a very central way of thinking about it. When we talk about rivalrous dynamics creating problems, anti-rivalrous dynamics – coherence dynamics of agents with other agents – are core to the solutions that we're looking at. It also ends up being that [00:05:30] coherence is what solves a lot of the problems of the [inaudible [0:05:33] in tech. Just like when we were talking about, in democracy, the process of making a proposition: somebody senses something they care about, they make a proposition that benefits it, but it harms other things that they might not have even been sensing.

How do we bring all the sensors together to say, "What is everything important connected to this?", hold those as design constraints, and then go into an integrated design process that is trying to meet all of those design constraints? That sounds like the future of governance, but it also [00:06:00] sounds like the future of technology design, right – to make technology that's not externalizing harm. That also means infrastructure design. Coherence between all the agents that are sensing a part of the system means that all the information from all the parts of the system gets to be internalized to the decision making processes, to the choice making processes, which means that externalities get internalized.

Interpersonal coherence at scale ends up being a central way of thinking about this. Another way of thinking about it is the [00:06:30] generator function of all the X risks… One thing we just said is that the generator function is social incoherence, right? That was actually what we just said is a generator function – we spoke in the language of collective intelligence's ability to scale. Another way of looking at it is that the source of all of the risks, and also the things that have always sucked – the risks are just the things that have always sucked, bigger – is the relationship between choice and causation, an inappropriate relationship between [00:07:00] choice and causation as two types of change.

This is going to get into a very tricky philosophic area, which is partly why I said the thing that we call science and tech is necessary but not sufficient. Science is a theory of causation, right? When we study law – the laws of physics or the laws of any domain – what we're studying are the rules that create change, causal change. When we study, even more purely, in computer science, [00:07:30] computation means rule-based transform. Something is changing, transforming from one state to another state in a perfectly predictable way, governed by a rule set, a law set. Causation.

Choice, we don’t have a theory of choice, which is why every philosophic conversation gets into free will and determinism and gets stuck there. Sam Harris and Dunnite debated out and get nowhere and, at the end of the day, say, “We just have different intuitions on this.” That’s happened since forever in philosophy. [00:08:00] We could actually get into that topic at depth at some point, it does require some nuance to do well. We are making choices. We are, at minimum, operating as if we were making choices and we are making choices that are extended by techs knowledge of causation. Before I got the tech and I was just predator, I could hit somebody with my fist but then I could understand causation and say, “The heavier the thing is and the faster it goes and the harder it is, right, causal principle, the more damage it causes.”

[00:08:30] I can extend my fist through a stone hammer. Then I can extend it through a sword. Then I can extend it through a gun. Then I can extend it through a… I'm taking my knowledge of causation – science – and creating technology – applied science – that allows my choice to be extended through levers, very powerful levers. Science is giving me a theory of causation that can create applied causation [00:09:00] without giving me a theory of choice that tells me how to use that increased causal power. This has been a classic thing in the philosophy of science from the time of Descartes: science says what is, not what ought. It is the realm of the objective – "this is" – but it can't say anything about ought, because it doesn't deal in subjectives. And through most interpretations of physicalism, the entire thing is meaningless and deterministic anyway.

Ought just means something that we can't even make sense of. [00:09:30] There's a real bitch in here, a real problem. Which is: science gives us the ability to understand the physical world very deeply and to create technology that can change the physical world very deeply. It is the most powerful avenue for affecting the physical world – technology, which is applied science – but it has absolutely no compass for how it should do that. Not only does it have no compass, [00:10:00] it says that all compasses are gibberish, because a compass is going to be some religious idea, some moral something, some whatever – but they're not science. They're not objective. We've equated objective with real and subjective with gibberish.

The relationship between subjective and objective we don't even really take seriously; we don't know how, we don't have good tools for that – intellectual tools. Then we say, "Okay, if technology is the power to change – to create nuclear bombs that can blow up the [00:10:30] entire fucking world, to create all kinds of dystopias or all kinds of protopias – if it's all this power to do anything, what determines how we use that power?" It's not an ethical framework, it's not a theory of choice. What it ends up being is, well, who paid for the science? Who can pay for the tech? How did they get the money to pay for the tech? Remember, money is a concentrated choice making system.

What we end up getting is capitalism. What that ends up meaning is social Darwinism, which means game theory. Win lose game theory still ultimately guides [00:11:00] the development of all the tech. So now we're growing exponential tech with no basis for how to use it other than to keep winning at win lose games – where every time we increase our technological capacity, so do all sides, in a multi-polar way, so we're just upping the ante of the playing field. This is a very important principle. It is impossible to have a technological asymmetric advantage and maintain it indefinitely once you employ it. You can maintain it while you don't employ it, [00:11:30] in which case it's not really an advantage, it's only a potential advantage.

The moment that you deploy it, everybody else sees it, and it's much easier to copy than it was to do the initial innovation. It's easy to iterate on, and we can find plenty of other examples: all you did was up the ante of the playing field, which is how we went from one country with nukes to lots of countries with nukes, and somebody with AI to lots of places with AI, etcetera, etcetera. The idea, "Well, we're going to develop this technology for our good purpose," is just silly, because it's going to be [00:12:00] used by all players for all purposes. This is why I call naïve techno-optimism naïve. Could technology solve some problems? Sure. Have we addressed loop closure on complicated systems so that they're not externalizing harm somewhere else? No.
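
A tiny simulation of that ratchet (the doubling factor and round count are invented for illustration; this is a sketch of the dynamic, not a model of any real arms race): each round the leader deploys a new capability, rivals copy what they've now seen, and the only durable result is that everyone's capacity keeps climbing.

```python
# Toy arms-race ratchet: deployed advantage gets copied, so the leader's
# edge keeps evaporating while the ante keeps rising. The innovation
# multiplier and round count are invented for illustration.
INNOVATION_GAIN = 2.0  # each deployment doubles the leader's capability
ROUNDS = 5

leader = rivals = 1.0
for round_no in range(1, ROUNDS + 1):
    leader *= INNOVATION_GAIN  # leader deploys a new asymmetric capability
    edge = leader / rivals     # the momentary advantage, visible once used
    rivals = leader            # rivals copy what they have now seen
    print(f"round {round_no}: ante={leader:5.1f}  momentary edge={edge:.1f}x")
# The edge is the same 2.0x every round and always evaporates;
# only the ante -- the harm-causing capacity on the field -- grows.
```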

Have we recognized that that same technological capacity will be used by everybody for all purposes – all purposes that are incented in a system that has ubiquitous perverse incentive? That's a problem. [00:12:30] The answer is not that we technology ourselves out of it. We actually have to change that underlying basis. What that means is: if we've got all this power, what should we do with it? How the fuck do we answer that? Ultimately, that's an ethical question, an existential question. This comes to the forefront, and we don't like to ask it, because we only know physicalism, which says, "This is not even a question."

Basically, in physicalism we have a couple of different versions and they all suck. They're all nihilist unless you do some [00:13:00] mental gymnastics to try and pretend that they aren't, which I would say is intellectually dishonest. Either I say consciousness is an epiphenomenon of the brain, but that makes it acausal: the brain is a causally closed physical system where voltage differentials move ions across membranes and a neurotransmitter goes one way versus the other – you think you love her or not, you have this idea or believe this thing or not – and it's all basically controlled by particle physics. Your consciousness is an epiphenomenon for [00:13:30] some reason, but it could not be causal, because what is consciousness that is not physics, that could causally affect physics, if physics is causally closed?

You don’t really have a choice, your experience of yourself as a choosing agent is ultimately an adapt illusion. Which is also a problematic argument because why would it be adaptive to have that thing if that thing doesn’t affect causality at all? What is the weird metaphysics of how first person pops out of third person? David Chalmer speaks about it in a very interesting good way when he says, “Okay, [00:14:00] You’ve got a bunch of atoms, they’re non-experiential and you arrange them in a particular way and experience pops out of it.” They’ve got position and momentum and mass and shit like that and now we’ve got feeling and emotion and different type of stuff. If I have tools to study third person, I’m going to find third person because that’s my tools.

If my epistemology is to measure shit and then do math across the stuff that I measured, I'm going to come to a belief that reality is measurable shit. [00:14:30] If I went the Buddhism direction and my epistemology was to enquire into the nature of my own experience, I could do the exact opposite. I could say, "I actually don't know that there are any particles here. I might be dreaming. I might be a brain in a vat being simulated by electrodes. I might be a crazy person. I might be who the fuck knows. I can't know any of that." The Buddhists and Descartes are doing the same thing: what can I know for sure? [inaudible [0:14:54], I'm experiencing something, but the thing that I think is 'I' and the thing that I think is something might not be what they are.

What I know for sure [00:15:00] is experience, so let me explore the nature of my experience. Then, of course, the Buddhists typically do the opposite reductive move: what's real is consciousness, and the physical universe is either not real or an epiphenomenon. If my epistemology is to enquire into the nature of experience, what is going to come up as real is experience. What you end up having is epistemologies that bias ontologies and are self-referential. On the physics side – the [00:15:30] physicalist interpretation of physics – you get dreadful nihilism or incongruencies. Other than that, you get weird religious shit. We're just not happy with any of this, so we try not to think about it too much.

We actually can’t, because we have to actually address what do we want, why do we want it, what is worth wanting. The addict want’s stuff that makes their life suck and the little kid who grows up in front of the screen with a bunch of flashing lights wants it again because they were programmed to want it, their [00:16:00] sovereignty is hijacked. What is worth wanting? What actually creates a good life? What does good mean? The fact that we didn’t like the bad religious answers for these doesn’t mean that we get to throw these questions out complete, because you end up getting the existential risk of where we’re at right now, which is we don’t ask those, we just built the tech based on who pays us. Great.

We all get to go extinct in a world where we have no theory of choice, but we are choosing based on a shitty theory of choice – [inaudible [0:16:28], a game theoretic theory of choice – and [00:16:30] we have a theory of causation, so we're extending our shitty choices through exponential tech. Another way of saying what we have to actually get right is individual and collective choice making that doesn't suck. Another way of saying doesn't suck is individual and collective choice making that is omni-positive or, at least, vectoring towards omni-positive. It is omni-considerate in terms of considering all that it will affect, realizing [00:17:00] that it's interconnected with all of these things, and that if I act in a way to beat the other guy, I'm engendering his enmity and my own insecurity in the future.

If I’m polluting the air, I breathe the air. This is where I have to shift all the way down to an identity level, which is the idea that I’m a separate thing. You’re a separate thing over there. I can advantage myself independent of you or even at your expense. It’s just actually ontologically not a well-formed idea. Ontologically, when I say ‘I’, [00:17:30] I might think ‘I’ and I’ve got some idea of what that means – a set of atoms contained in a particular boundary that looks like this guy called me, it’s on my Facebook picture, a set of memories or whatever. But when I think of ‘I’, I usually don’t think of all the plants on the biosphere, without which I would not exist because there would be no atmosphere and I would be dead.

I usually don’t think about all of the [inaudible [0:17:53] without which the plants wouldn’t exist, and all of the pollinators. I don’t exist without all those. If I think [00:18:00] of ‘I’ without those, it’s an ill formed concept. When I think of that ill formed concept and I think that it’s a good concept, it’s a real thing, I can think about advantaging that I at the expense of the things that I depend on. That is a kind of insanity but it is a kind of ubiquitous insanity currently. For me to make a choice for me, I have to know what the fuck I am. I am not a separate thing in game theory at competition with everything else. I am ultimately [00:18:30] interdependent with and dependent on so much other than me.

Then I say, alright, to not debase the world upon which I exist, the foundation upon which I exist, I start to get: I am an emergent property of this whole thing. If there weren't the bacteria and the plants and the pollinators, if there weren't the people that came before who made up the ideas that I believe in and the words that I think in and the aesthetics that I perceive through, if there wasn't gravity and electromagnetism making [00:19:00] the whole fucking thing possible – if it wasn't for so much stuff that I consider other than me – I don't exist. Then I get: I am an emergent property of the whole, and I am both interconnected with everything and totally unique within it. So are you.

As soon as I get that, I get a couple of things. Because we're totally interconnected, I cannot advantage myself at your expense in a real way – I only can if I haven't factored loop closure everywhere. The way that I harm you [00:19:30] is going to end up being an open loop that harms the substrate of what I care about. When I factor all the closed loops, we get to David Bohm's evolution of wholeness – this whole thing, the evolution of wholeness, the Schrödinger equation of the whole thing evolving in its complexity. I can't advantage myself at your expense in a meaningful way, and I also can't understand myself without understanding myself through my interactions with you and your feedback, your reflection.

[00:20:00] Choice. We have to have a theory of choice that comes from a philosophy that can actually relate choice and causation – that can have a theory of causation and a theory of choice that are [inaudible [0:20:16] with each other. It's not a made-up theory of choice. It can serve as a basis for how to make good choices in the presence of all of the technology we have. Something we said last time we were [00:20:30] together is that technology is a causal extension of our choice, extending our power to be like the power of Gods – the ability to create new life forms with genetic engineering, to destroy whole life forms and species, to blow things up. Nuclear bombs are bigger than Zeus's lightning bolt was ever depicted. The power of Gods.

If you do not get the love and wisdom and consideration of Gods as a choice making basis for that power, you'll use that power [00:21:00] in stupid ways that will end up causing self-destruction on a very tiny playing field. If I want to make choices that are good for me and I'm interconnected with the whole, I have to make choices that are good for the whole. That means I have to understand that I'm not a separate thing. I have to be able to progressively consider my relationship with everything – the impacts of my choices on everything – and be able to internalize the things [00:21:30] that would have been externalities into my choice making process. Not just at an individual level, but at a group level and in group process.

This becomes the future of design, as opposed to the open loop, harm-externalizing method that we've had: how do we have progressively more omni-considerate design that is more omni-positive, that is a safe vessel for the level of effect that it has?

Euvie: The idea of us being separate from the world is in many ways modern, because if you look at how [00:22:00] some of what we call primitive tribes looked at themselves and considered themselves, it was not the same. They considered themselves a lot more connected with everything, or just a small part of the whole. Like you said, a lot of those tribes got killed by the other tribes who considered themselves as more separate. How we place our sense of agency, and where we place our sense of what is us [00:22:30] and what is not us, actually has a very significant effect on the outcomes.

I think that in the modern world a lot of people don't even realize that this sense of separation between self and world is a construct. That's because we have certain scientific definitions or whatever of where the body ends and the environment begins. If you start deconstructing that – you can do it through any number of things, [00:23:00] like just intellectually deconstructing it, or in deep meditation looking at where your sense of self ends and your sense of the world begins – it just breaks down. Yet that sense of self is very strong.

Maybe it is because our capacity to affect the environment is so much higher now, with technologies and all these tools that have become the extension of ourselves. Maybe that's part of the reason why people are so attached [00:23:30] to their sense of self, because they actually see the effects so strongly. Whereas, in the past, people would see the effects of nature a lot more strongly than the effects of themselves.

Mike: I think it’s strong because of the dynamic of experience and time. You don’t see the direct results of your actions coming back to you after a long time until they become external and separate and then, when they come back, it’s someone else or something else affecting you in a way you don’t want to be affected, but you don’t loop it back to your original [00:24:00] actions. This is something I’ve been thinking about quite a lot as you’ve been talking, Daniel, is the natural experience that people have and how these theories might contradict those natural experiences, even though the natural experience is incorrect. How do we communicate these concepts in a way that, even though it can conflict with someone’s day to day, minute by minute experience, but could cause them to think differently and expand their identity?

Euvie: People’s experience is also dictated by their concepts, [00:24:30] those two things affect each other. When people have an idea about how a certain thing is, they tend to experience it in a way that is consistent with that idea.

Mike: True.

Daniel: If we look at the tribes that had more of that 'there's a web of life and we're a strand in it' view that you were mentioning a moment ago: they had an informal theory of choice. They might have had some moral, ethical principles – [00:25:00] not a formal system of formalized ethics, but they had some theory. And they had a very weak theory of causation. They thought diseases happened by ghosts, and they obviously didn't know how to work with tech all that well, etcetera. They didn't make it when they came up against other people who understood causation better. Causation leads to physical adaptive advantage, and there's a theory of choice there too – it's just a theory of choice [00:25:30] called winning at win lose games, right.

Game theory is a theory of choice – it's just that that theory is tapping out. That theory leads to its own self-termination because, as the power keeps getting larger, it becomes more than you can keep winning at. We said this before: win lose eventually becomes lose lose – omni lose lose – when you have levels of war that nobody can win, when you have tragedies of the commons that are completely ruined commons, those types of dynamics. When you have an information ecology [00:26:00] that's so broken from the incentive to disinform that nobody has any idea what the fuck is true.

You either figure out how to create omni win win as a new solution, other than win lose, or you get omni lose lose as the inevitable by-product of trying to keep playing win lose. It's funny, because there's this very mythopoetic way of thinking about what we're talking about that is otherwise a very technically clear thing. We said that scaling to get the power of Gods means we have to have the love and wisdom of Gods to guide it. Similarly, we say [00:26:30] we either create omni win win or we get omni lose lose – that sounds a lot like heaven or hell on the other side of purgatory, and we've got some hard choices to make.

We might even ask if maybe those stories were metaphors for seeing, "Hey, we're making choices such that, if we kept getting more powerful at this, it'd be problematic." Not just the heaven or hell and purgatory story, but the "we're in [inaudible [0:26:55] and [inaudible [0:26:57] is the next phase" one. There are a lot of these stories [00:27:00] where, in a Joseph Campbell-like way, we can say, "That's actually a very interesting way to think about where we're at right now." We can't depend on Jesus to come back and solve it, or aliens to come fix it for us, or a fifth dimensional light ray from the galactic centre or whatever it is. We have to actually become that Jesus or those aliens.

We have to actually become a being that has the right capacity to make the right choices and do the right sense making and the right choice making. It is true that a being [00:27:30] that's a shit ton more effective at good choice making than the types of beings we've been needs to encompass all of these things – we just have to become them. With regard to your question, Mike, on what experiences are natural: obviously, it's natural for me to think in English, not Mandarin. If I grew up in China, I would think in Mandarin and that would be natural. Because I think in English, I have certain constructs of thought linguistically that are related [00:28:00] to the syntax of English that, if I thought in Mandarin, would be different.

My aesthetic would be different. If I grew up on the plains with the Sioux Indians, what would be natural to me would be different – my identity, ethics, aesthetics, etcetera. This is very conditioned. In the modern world, where we ubiquitously experience feeling separate, we also ubiquitously experience feeling alone and lonely. Not just alone, but lonely. We can see that loneliness, as a major [00:28:30] source of depression, anxiety, and ultimately even suicidal impulse, is pretty much ubiquitous in the developed world. Is that a natural experience? It's certainly a ubiquitously conditioned one.

Was there ever a person in an indigenous tribe, throughout two and a half million years of [inaudible [0:28:50] history or 250,000 years of Homo sapiens history, that felt lonely? Not that much. You live in a tribe with 150 people who know every fucking thing about you, that you've known forever and that you [00:29:00] fully depend on; you know everything about them and you've got no secrets. Lonely's not really a thing. Separate from them is not really a thing. When we say 'natural', I think what you mean is conditioned ubiquitously. Then we have to say, alright, humans are more susceptible to being conditioned by their environment than most creatures.

Obviously, a dog that grows up in the wild versus in captivity is going to be different, but we're even more susceptible because – and this is a really important thing to understand about sapiens – the gorilla [00:29:30] or the chimp that's close to us can grab onto its mum's fur in the first five minutes while she moves around. We can't even move our heads for three months. A horse is going to get up and walk in 20 minutes, and it takes us a year. Just to have a real sense, do the calculation – how many 20-minute intervals fit into a year? – to get how many multiples longer it takes us to become adaptive in the most simple way.
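
Doing the calculation he suggests (approximate, using a 365-day year):

```python
# How many 20-minute intervals fit into a year -- the horse-versus-human
# comparison suggested above. Approximate, using a 365-day year.
minutes_per_year = 365 * 24 * 60   # 525,600 minutes
multiple = minutes_per_year / 20
print(f"{multiple:,.0f}x")         # about 26,280 times longer
```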

Wow, we are embryos for an extraordinarily long period of time. We are helpless for an [00:30:00] extraordinarily long period of time. Why is that? Because the horse comes pre-programmed in how to be a horse; it's going to be a horse pretty much the same from generation to generation in the wild. Coming pre-programmed works because it adapted to fit its environment, so it can hold the code of how to be adaptive in relationship to that environment. It used to be really adaptive to throw spears, and probably none of us are all that good at throwing spears, yet we're pretty adaptive, because we do texts and podcasts and shit that they didn't do. What's adaptive for us [00:30:30] changes pretty rapidly.

Unlike the other creatures that are the product of their environment, we are, as tool evolvers, creatures that not only exploit every niche – gorillas didn't leave and go find islands and be in the water and go to the arctic; they adapted to a niche. We went and found every niche, and then we made new niches. We made cities and treehouses and all kinds of shit. As a result, we had to learn how to adapt to the new world that we were in, [00:31:00] because we were going to keep creating new worlds, finding and creating new worlds. As babies, we're born pretty much open, to just start imprinting: what world am I in? How do I – not having a genetic program to do this – how do I [inaudible [0:31:13] program to be able to do this?

It’s not just in childhood because we can change stuff so fast that the whole tribe might get up and leave and go somewhere else where we went from gathering to farming or to hunting, something super different. This is why we need adult neuroplasticity, to be able to change our [00:31:30] [inaudible [0:31:31] orientation even later in life. We are radically affected by our environment and it’s part of our adaptive advantage and we are mostly affected by our social environment, what the other people around us are paying attention to and doing and what the nature of the relationships are like. You’ll notice that mostly now people live in nuclear family homes on their own, don’t interact with other people all that much and then they spend all their time addictively looking at other people on screens.

They watch TV and they watch people, [00:32:00] and then they go to Facebook and they look at people, and they read news articles about people. They're fucking fascinated by people, but we're conditioned currently to suck at interacting with people – capitalism has largely been a way of not needing each other directly, of indirectly intermediating meeting each other's needs through money. Money can just buy whatever I need; I don't actually have to have friends or neighbours or give a shit about anybody or have anyone give a shit about me. That seems really convenient, and everybody is in a crisis of loneliness [00:32:30] at home, looking at people, hoping that they're getting likes, which are not real relationships.

This is basically sugar rather than nutrients. This is porn rather than a real relationship. This is hypernormal stimuli having co-opted real stimuli, while also having desensitized us to real stimuli. It's all comprehensively bad for us. When you ask, "How could people have a different experience," realizing that we are conditioned by our environment to think in the words of a certain language, to experience in a certain way, [00:33:00] I could say lots of things that won't matter. I could say, "Go spend more time in nature," but nobody will do it. We both know that people listen to podcasts and nobody will do it – maybe one time, and then their life is busy and they will be a product of their environment again, very largely, because their relationship with us here was a tiny part of all of the relationships that are influencing them.

I could say, "Go spend time in nature," and I could say, "Have some good psychedelic journeys, and do this kind of breath work, and contemplate that the atoms that make up your body were plants not that long ago." [00:33:30] It would just be nice words to say that wouldn't end up affecting people that much. If you want to ask, "How could people really change their experience?", the only thing I can say that is statistically really going to work is: immerse yourself in an environment that makes that likely, around people who are doing that. If you went and lived with a tribe for a while in the Amazon – you go live with the [inaudible [0:33:52] or something – you will experience the world differently before a couple of weeks pass.

You will start hearing things in a different [00:34:00] way, experiencing yourself in a different way, feeling a connection to people in a different way. If you even change the group of people you're hanging out around – what they're motivated by and what they think about – and you change the quality of your relationships with them, you will end up changing the basis of what seems like natural experience to you.

Euvie: I’m reading Carl Young right now and he’s talking about his experience [00:01:30] of going to live with the [inaudible [0:01:32] Indians and how it completely just blew apart his conception of what was natural and how the western world view is different from other world views. He noticed that they were so happy and serene and they felt themselves as one with their environment and they had this very special relationship with the sun. It was very beautiful but, at the same time, he realized how they were very vulnerable to [00:02:00] the invasion of the western civilization. If we create a new civilization operating system that is not oriented towards winning wars, then how do we ensure that it doesn’t get destroyed by those who are?

Daniel: Imagine there’s a group of people that get a stronger theory of causation. They learn Newton’s physics and now they can use calculus to plot a [inaudible [0:02:22] curve and make the [inaudible [0:02:23] hit the right spot every time, rather than the pendulum dousing, which is hit or miss. That belief is going to catch on [00:02:30] and that’s why science really caught on, took us out of the dark ages, was because it led to better weapons and better agriculture tech and better real shit. It proliferated because it was proliferative. If we increase our theory of causation, that ends up catching on.

If we could increase our theory of causation and our theory of choice, and the relationship between them, that would actually be the most adaptive – especially given that our particular game theoretic model of choice, with the extension of causation we have, is definitely self-terminating, [00:03:00] definitely anti-adaptive. I know we've been on for a long time. There's really only one more thing that I want to share that closes this set of concepts. Remember, we said that any source of asymmetric advantage – competitive advantage in a win lose game – will end up, once it's deployed, being figured out and utilized by everybody. You just up the level of the ante on the playing field.

We also said that the [inaudible [0:03:25] and many of the tribes we've mentioned [00:03:30] lost win lose games. We don't want to build something that's just going to lose at a win lose game, but we know that if it tries to win at win lose games, it's still part of the same existential curve that we're on. It has to not lose at a win lose game while also not seeking to win. It's basically not playing the game, but it is oriented around how not to lose. This is a very important thing. We can think about power, the way we have traditionally thought of power, as a power over or power [00:04:00] against type dynamic – a game theoretic, win lose dynamic. Any agent that deploys a particular kind of power leads to other agents figuring out how to deploy the same and other kinds of power. Power keeps anteing up until we get to problems.

We could think about another term, which we might call strength: not the power to beat someone else, but the ability to not be beaten by someone else. It's the ability to maintain our own sovereignty and our own coherence in the presence of outside forces. We could talk about my power – "Can I go beat somebody up?" – [00:04:30] but my strength is, "Can my body fend off viruses? Can I fend off cancers? Can I actually protect myself if I need to protect myself?" Which is different than, "Can I go beat other people up?" The power game is the game we actually have to [inaudible [0:04:45]. Power over dynamics – meaning rivalrous dynamics, meaning win lose dynamics – are the source of evil.

It’s not that money is the source of evil, it’s that power over where I think my wellbeing is anti-coupled to yours ends up being [00:05:00] the source of evil and money’s just very deep in the stack of power dynamics. Status is and certain ways of relating to sex and a number of things are. We have to get rid of the power over dynamics. It doesn’t mean that I can’t develop strength that makes me anti-fragile in the presence of rivalry. Then I say, “What kind of capacity can I develop that doesn’t get weaponized by somebody else and used against me, given that any asymmetric capacity I get can be weaponized?” There’s really only one and this is a really interesting thing.

[00:05:30] If I make the adaptive capacity of… Say we're trying to make a new civilization as a model – a new [inaudible [0:05:38] civilization, new economics, new governance, new infrastructure, new culture – that has comprehensive loop closure, doesn't create accumulation or depletion, doesn't have rivalrous games within it, etcetera. If I try to have some unique adaptive capacity via a certain type of information tech, the rest of the world will see that information tech and [00:06:00] use it for all kinds of purposes, including against me where there's an incentive to do so. The same is true if I use military tech or environmental extraction tech – I'm still in the same problem.

But if my advantage – the advantage of the way this civilization is structured – has to do with increased coherence in the sense making and choice making between all the agents in the system, all the people in the system – increased interpersonal coherence – this cannot be weaponized. Anyone else employing it is now just the [00:06:30] system self-propagating. For instance, when we start playing rivalrous games, we start realizing that it's not just us against somebody else; it's teams against larger teams. The idea with a team is that we're supposed to cooperate with each other to compete against somebody else.

The compete-against-someone-else idea ends up going fractal, and I end up competing even against my teammates sometimes. That's part of why collective intelligence doesn't scale: I'll cooperate with my buddies [00:07:00] on the basketball team, unless there's also a thing called Most Valuable Player and I'm in the running for it, and I have a chance to take the three point shot rather than pass, even though it decreases the chance of the team winning. Now I have an incentive misalignment, and I might go for that. Then it gets bigger: there are a couple of us who both want the same promotion to the same position at the company, and we're actually going to try and sabotage each other, even though that harms the company, because my own incentive is not coupled with their [00:07:30] incentive or with the company's.
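
The basketball case can be written as a toy expected-value calculation (every probability and payoff weight here is invented for illustration): when the MVP race rewards the shot itself, the player's personal expected payoff can favour shooting even while the team's win probability drops.

```python
# Toy expected-value version of the MVP example. All probabilities and
# payoff weights are invented for illustration.
P_WIN_IF_PASS = 0.60   # team's chance of winning if I pass
P_WIN_IF_SHOOT = 0.45  # lower: the pass was the better play
P_MAKE_SHOT = 0.35     # chance my three-pointer goes in
TEAM_VALUE = 1.0       # how much I value the team winning
MVP_VALUE = 2.0        # how much I value boosting my MVP case

ev_pass = P_WIN_IF_PASS * TEAM_VALUE
ev_shoot = P_WIN_IF_SHOOT * TEAM_VALUE + P_MAKE_SHOT * MVP_VALUE
print(f"pass: {ev_pass:.2f}   shoot: {ev_shoot:.2f}")
# shoot (1.15) beats pass (0.60) for me personally, while the team's
# win probability falls -- the incentive misalignment in miniature.
```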

Then I can look at a couple of different government agencies that are competing for the same chunk of budget. They will actually seek to undermine each other so they get more of the budget, when they're supposed to be on the same team called that country. What we realize is we get this thing called fractal disinformation – fractal decoherence and defection – happening everywhere. That creates the most broken information ecology and the least effective coordination and cooperation possible. [00:08:00] That's everywhere, that's ubiquitous. It's the result of that underlying rivalry. As we mentioned before, if I have some information, I want to make it so that nobody else can use it. I'm going to trademark it, patent it, protect my intellectual property. Before I release it, I actually want to disinform everybody else about it – tell them the gold is somewhere else, so that they go digging somewhere else and don't pay attention to what I'm doing.

If I am both hoarding information, disinforming others, and [00:08:30] keeping my information from being able to be synthesized with others', that means I'm not going to let my knowledge about cancer research or whatever be out there, because I've gotta make the [inaudible [0:08:39] back. The best computer that the world could build doesn't exist, because Apple has some of the IP, Google has some of the IP, and 10 other companies have some of the IP. The best computer that science knows how to build can legally not be built in this world. And the best phone, [00:09:00] and the best car, and the best medicine, and the best every fucking thing there is, because we keep the actual adaptive knowledge from synthesizing – let alone that everybody's having to reproduce the same fucking work because we don't want to share our best practices.

Then almost all the budget is going into marketing against the other ones rather than actual development, and the marketing's just lying and manipulation, at least about why ours is comprehensively better – when really their IP does one thing that's good and our IP does [00:09:30] another thing. Imagine if we had a world where all the IP got to be synthesized. Nobody was disinforming anybody else. Nobody was sabotaging anyone else. Everyone was incented to share all of the info, to synthesize all the info, to synthesize all of the intellectual property, the ideas, etcetera, and work towards the best things possible. Imagine how much more innovation would actually be possible, how much more collective intelligence and capacity would actually be possible.

If our source of adaptive advantage [00:10:00] is that – if we make a world like that – now we come back to what we were talking about: if you possess a good and I no longer have access to it, we're in a rivalrous relationship. If you possess a piece of information that I don't then get to have access to, we're in a rivalrous relationship over information, knowledge, etcetera. But if you have access to something, and we've structured the nature of access – engineered the scarcity out of the system – such that your having access doesn't make me not have access, then you having access leads to you [00:10:30] being a human who has a full life. And some of your full life is creativity and generativity.

Now, not only do you have full access to those transportation resources, but also maker studios and art studios and education and healthcare and all the kinds of things that would make you a healthy, well-adapted, creative person – and every well-adapted person is creative. Nobody wants to just chill out watching TV all the time unless they were already broken – broken by a system that tried to incent them to do shit that nobody wants to do, such that if they can get a way out, they take it, but they're a broken person. If someone was supported in an educational system [00:11:00] to pay attention to what they were innately fascinated by, and to facilitate that, they would become masterful at some things, with innate, intrinsic motivation to do those things.

Now, in a world where we support everybody to have access to the things that they are intrinsically incented to want to create: right now, I get status by having stuff, but if we engineer a [inaudible [0:11:24] system where everyone has access – nobody possesses any of it, everybody has access to all of it – there's no status in having things; it's totally boring. [00:11:30] There's no differential advantage. The only way you get status, the only way you get to express the uniqueness of what you are, is by what you create. Now the whole system's running towards that. But you don't create something to get money – money for what, to have access to shit you already have access to? You do it because you get to be someone who created that thing: both your own intrinsic expression of it and, extrinsically, getting to offer it to a world that would recognize that.

Now we have a situation where we all have access to commonwealth resources that create an anti-rivalrous relationship with [00:12:00] each other. Obviously, I'm just speaking about this at a 100,000-foot level. We could drill down on what the actual architecture looks like, but there is actual architecture here. It is viable, it meets the design criteria. We have sense making processes where we look at what a good design would be before making a proposition for a design – processes that don't lead to polarization and radicalization, that lead to progressively better synergistic satisfiers and get us out of the theory of trade-offs and into [inaudible [0:12:27], also as a way of having people be [00:12:30] more unifiable and on the same team.

If I’ve got this world where it’s source of competitive advantage, if you want to call it that, is that it is obsolete at a competition within itself, it has real coherence. Then not only is the quality of life erratically higher because the people don’t feel lonely and they actually have creative shit to do and they aren’t being used as instrumental pawns to some other purpose etcetera, and the quality of life is better because they’re actually making better medicine and better technology and better [00:13:00] etcetera because of the ability for the IP to synthesize and everything else. This world can also out innovate in really key ways in other places in the world. Then, rather than the rest of the world wanting to attack it, it can actually say, “Here, we’ll export solutions you want.” The rest of the world starts to create a positive dependence relationship.

The rest of the world says, "Shit, we want to be able to innovate. Why were they able to solve that problem we weren't able to solve?" Because our guys were sabotaging each other and their guys weren't. We say, "Great, [00:13:30] here's the social technology to use." Now, as soon as they implement that, it's not being weaponized – that's just the world actually shifting. That's where this model actually becomes a new base [inaudible [0:13:37] the world starts flowing to. You have to do that. You create a prototype of a full stack civilization that is anti-rivalrous, that is anti-fragile against rivalry – strength, not power – and that is auto-propagating, where, by the nature of the solutions that it is exporting and by its own adaptive capacity, its own design [00:14:00] starts to be implemented in other places. That's, ultimately, the desire. That is a path to a post-existential-risk world: building it in prototype in a way where it auto-propagates.

Mike: That’s so exciting.

Euvie: Are there places where these prototypes are being built?

Daniel: Kind of, but not really. There are intentional communities where people are trying to practice some things they feel will be relevant: a closed loop agriculture [00:14:30] system where they at least have regenerative agriculture, and maybe some kinds of social coherence technologies where they have a better system of conflict resolution than our current judicial system. Better parenting, better education. We have those things, and they're cool and valuable, but they still have to buy their computers from Apple and fly on a Boeing to get somewhere, which depends upon environmental destruction and war. They can't actually provide a high-tech civilization; they're not yet [00:15:00] civilization models, and the civilization models are all part of this one dominant civilization model. That is the next endeavour.

Before a full stack civilization occurs, obviously partial ones – but ones that are directed towards a full stack civilization – have to occur. Because in the world we're talking about, there is no place for the things currently called judges or lawyers or politicians or bankers; those systems don't exist. That doesn't mean that there isn't an equivalent of a judicial system – one that is totally fucking different, from the level of the theory [00:15:30] of ethics to the [inaudible [0:15:31]. Somebody has to be getting trained in the civics of that system. There's nothing like banking, but there are things like paying attention to how the accounting of this new economy works, and people have to be trained in that. I'll give you one for instance: if we think about the physical economy.

We’ll take attention out and just look at physics. We see that there’s at least three different kinds of physics involved in the materials economy that are fundamentally different in their math. There is a physics of atoms, [00:16:00] physical atoms. There is a physics of energy and there is a physics of bits. Right now, those are fungible. I can use the same dollar to buy software or to buy energy or to buy metals or physical stuff, food. There’s a fixed number of atoms of a certain type on the planet that are reasonably accessible.

Right now, we’re just taking them from the environment in a way that causes depletion and then putting them back into the environment as waste in a way that cause accumulation toxicity on both sides. [00:16:30] You can’t keep doing that, we have to close loop it where we have been, give or take, a finite amount of metals. Not just metals but hydrocarbons, everything. A finite amount of atoms that are in a closed loop relationship but they can be upcycled because we have the energy to upcycle them, which means putting the same atoms into higher pattern – where the pattern is evolving, the pattern’s stored in bits.

If I take the atoms out of one battery, I can put them into a new battery design, which evolved as battery technology evolved. That new battery design is in bits – a blueprint. [00:17:00] I'm going to use energy to take the atoms in their current form, disassemble them, and reassemble them into this new battery. So there's a fixed amount of atoms – we have to close loop those. There's not a fixed amount of energy: we get new energy from the sun every day, but we have a finite bandwidth of how much we get, and we have to operate within it. That's not closed loop – we can use it up, and it has entropy – but within that bandwidth, we [00:17:30] have to work. Bits are fundamentally unlimited, limited only by the compute – the energy and matter doing the computing. That can keep expanding basically indefinitely.

Once I’ve made a bit, I can reproduce it exponentially without any unit cost because I can reproduce it exponentially without unit cost once I’ve developed at once. I get exponential return on software in a way that I could never get on atomic stuff, which is why Elon has a hard time raising money for physical stuff and [00:18:00] WhatsApp sold for 19 billion dollars. It’s why all the unicorns are software and mostly social tech or fintech or something that is actually doing not good things for the world can create exponential returns. It’s why Silicon Valley has basically mostly just invested in software stuff. If you make those fungible, you’ll actually be moving the energy away from the atom and away from the energy into the virtual. Away from the physical into the virtual, even though the virtual depends [00:18:30] on the physical, so you’re actually debasing the substrate upon which it depends.

You notice that the bits we can keep having more of forever, and they don't go through entropic degradation when we use them. The energy we use entropically degrades, but we get more of it every day. And the atoms don't entropically degrade, but we have to keep cycling them, and there's a fixed number. The physics – the accounting – of those are totally different. That's not one economy that's totally fungible through a single accounting system. That's three completely separate [00:19:00] but interacting physical economies. Again, we already said we're not owning goods; we're having access to shared commonwealth services. To really go into it, it's a lot of things. These are examples of some of the considerations that have to happen to actually be able to think about things like economics at a level of depth that is appropriate to the nature of the issues.
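
A sketch of what "three separate but interacting physical economies" might look like in accounting terms (the class names and numbers are invented; this is an illustration of the distinction, not an implementation of anything described as built): atoms are a conserved, closed-loop stock; energy is a daily-renewed but entropically spent budget; bits copy at effectively zero marginal cost.

```python
# Sketch of three non-fungible ledgers, following the atoms/energy/bits
# distinction above. Names and quantities are invented for illustration.

class AtomLedger:
    """Fixed stock: atoms only change form (closed loop), never vanish."""
    def __init__(self, total_units):
        self.total_units = total_units
    def upcycle(self, amount):
        assert amount <= self.total_units, "can't use atoms we don't have"
        return amount  # the same atoms, reassembled into a higher pattern

class EnergyLedger:
    """Renewed daily within a fixed bandwidth; spent energy is gone (entropy)."""
    def __init__(self, daily_budget):
        self.daily_budget = daily_budget
        self.remaining = daily_budget
    def spend(self, amount):
        assert amount <= self.remaining, "over today's solar bandwidth"
        self.remaining -= amount
    def new_day(self):
        self.remaining = self.daily_budget

class BitLedger:
    """Unlimited copying once created; cost is only compute energy."""
    def __init__(self):
        self.designs = {}
    def publish(self, name, version):
        self.designs[name] = version  # the evolving pattern, stored in bits

# Upcycling a battery: atoms conserved, energy spent, blueprint evolved.
atoms = AtomLedger(total_units=1000)
energy = EnergyLedger(daily_budget=500)
bits = BitLedger()
bits.publish("battery", version=2)  # the new design exists only in bits
energy.spend(120)                   # disassembly and reassembly cost
reused = atoms.upcycle(40)          # the same 40 units of material
print(reused, energy.remaining, bits.designs)
```

Keeping the three ledgers as separate types, with no common currency between them, is the point: a single fungible dollar collapses three different kinds of physics into one accounting system.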

If we don’t answer the question of what makes a good civilization. We simply say [00:19:30] what allows civilization to endure. We start with, let’s just say we don’t want existential catastrophic risk. There’s a whole bunch of different types of existential catastrophic risks that all have the same generator function, so we have to create categorical solutions to the generator functions. It turns out that those are the generator functions that have made all the things that we intuitively have experienced as sucking – like violence and environmental devastation.

Solving those generator functions doesn't just allow us to survive, [00:20:00] maybe in some dystopian dynamic. Anti-rivalrous dynamics with each other, closed loop dynamics, the proper relationship between the complicated and the complex, scalable collective intelligence systems, and a right understanding of the theory of choice in relationship with the theory of causation end up being a way of mapping to a world that is definitely [inaudible [0:20:20] on any meaningful definition of [inaudible [0:20:23] and any meaningful consideration of what good could mean. We come back to this mythopoetic thing: [00:20:30] we can't keep going the way that we're going. There's a purgatory coming, and it's going to go one way or the other. One way is really shitty and one way's really lovely.

That’s a true story. Bucky Fuller said utopia or oblivion, and it’s going to be hit or miss until we’re actually there – we’re not gonna know which way it goes. The thing we’re just now getting in on is this: if we try to solve the various risks in isolation, it’s impossible, we fail. But what it takes to solve them categorically ends up [00:21:00] also mapping to how we engage everyone in creating the true, the good, and the beautiful that is theirs to create, progressively better – both upregulating their sense making of what that is, with themselves and with each other, to be able to make it, and scaling the collective intelligence that is progressively answering those questions better.

Mike: Can you leave us with some book recommendations for anyone who wants to read up on this a little more and expand their understanding?

Euvie: Books or other resources.

Daniel: Yes. [00:21:30] I wish I could share more things than I can but a lot of what we’re thinking in terms of a new civilization design like this is new. It doesn’t mean it’s not drawing on lots of elements. Couple things. We mentioned Jeffrey West’s work Scale on collective intelligence, that’s very valuable. We talk about some of the dynamics of game theory that have to shift and so finite and infinite games is just my favourite starting point. It’s one of the types of books that is very simple but has multiple levels of depth [00:22:00] of meaning. If you read it multiple times, you’ll gain new insights.

Euvie: That one blew my mind.

Mike: Yup, me too.

Daniel: James Carse, very beautiful. I have a blog, civilizationemerging.com that has some articles on these types of topics. There’s also a booklist there with a heap of books.

Euvie: Great.

Mike: Awesome. This, as always, is enlightening and so fun.

Daniel: There are books that are valuable and there’s obviously all of your podcasts available. If you had the [00:22:30] experience of anything that I said making sense and actually seeming obvious. But then you also realize you never thought it in that particular way, then there’s a question, “Why did I never think it in that way, even though it seems obvious after the fact?” That’s one of the properties that clarity has is it can add novel insight but that seems obvious and [inaudible [0:22:50] relates to everything that we know.

Then we say, “Okay, I wasn’t thinking about rivalrous dynamics and upping the ante clearly enough, [00:23:00] I wasn’t thinking about the exponential economy and software and atoms all being treated as fungible – there’s a problem there. I wasn’t thinking about open loops and closed loops in this particular way.” You can do this yourself. Start by asking, “What do I think is actually wrong?” Of all the things that seem wrong, what do they have in common? Why are those wrong things wrong? Then go deeper, and keep going deeper with that. Don’t look for one answer. Are there a number of different things that come together [00:23:30] as partial answers? What would solving that look like? There are resources of other people’s thinking on these things, but they will inspire rather than replace.

They won’t replace your own deep thinking for your own sense making. The resource I would offer most is this: when you are bothered by something, or you wish some beautiful thing existed more than it does, really think hard about why things are the way they are. Know that the first thoughts you come up with [00:24:00] are not that good. If you stop there, you won’t get beyond there. If you really keep working on it and thinking about it, then researching in light of that question, then thinking about it more, you actually start to get novel, meaningful, deeper sense making that is aligned with what is yours to pay attention to and work on.

Euvie: To relate to what you said earlier, I think it really helps to use different modes of enquiry. People can get stuck in just intellectual inquiry or just [00:24:30] spiritual inquiry. But all are valuable. When we can see the same thing from several different perspectives, it becomes a 3D object, rather than just being flat.

Daniel: You just actually mentioned one of my favourite practices is really endeavouring to see and experience the world through the perspective of someone else and actually see and experience it. If I’m still thinking, “No, if I was in their position, I wouldn’t do that,” I haven’t got it yet. If I was in their position, I would do. If I’m really putting myself [00:25:00] in a position, I would get enraged by the things they’re enraged by. I would get excited by the things they’re excited by. This, both as a practice of empathy and connection, as a practice of understanding, as a practice of intelligence and learning because I see different things. If I look at the world through the lens of a mechanical engineer, I see shit everywhere that mechanical engineers see that you never saw.

Which is different than if you look through the lens of a fashion designer, or a game theory person, or an evolutionary biologist – they’re all looking at different things. [00:25:30] There’s a whole universe I wasn’t paying attention to – like when you buy a car and then see that car everywhere: you put on a lens and you start seeing all kinds of stuff. That’s effective sense making. It also works as a kind of spiritual technology for getting out of the default mode of what you think you are. When I’m trying to be someone else, it’s not my personality that can do that; if I’m trying to take on their personality, it’s not my perspective that can do it. It’s the same consciousness witnessing my perspective that can then witness somebody else’s. As soon as I do that, I actually dissociate from just being [00:26:00] my personality, and then I get some more spaciousness around it and am less reactive to it.

Euvie: This also relates to not just looking through the lens of different personalities or different modern frameworks, it’s also looking through the lens of premodern frameworks or even animal frameworks and that can all be very useful, as well. If we look at the world through the lens of quote unquote primitive tribe, then different things come into focus and different things become very meaningful and very powerfully meaningful. It [00:26:30] resonates through your whole being… wow. That’s not to be dismissed, because there’s something there.

Mike: At the very minimum, for self-discovery it’s super useful because we spent more time as those primitive versions of ourselves than we have the modern versions. You can untie a lot of behaviours that you don’t really realize you have based off of just looking at the world from a primitive standpoint.

Daniel: If your [inaudible [0:26:55] decide to go visit the Amazon and live with a tribe [00:27:00] and experience the world through those eyes and be affected by it and then look at how they can incorporate elements of that experience of the world and their previous experience of the world to being able to live more fully. That would be beautiful. This was wonderful, I really appreciated being here with you both. I really do want to say that I love your podcasts and I love what you’re creating, both of you together. It’s very easy to have [00:27:30] well informed dystopian views and it’s easy to not think about things or it’s easy to have poorly informed positive views.

To have well-informed positive views is actually tricky. If we keep being anything like the kinds of people we have always been – who do really wonderful and really atrocious shit with our power – but have exponentially more power, then all the scenarios are dystopian. We have to be something really different than we’ve ever been, which requires some type of deep [00:28:00] shift to make that happen. That requires some deep thinking, some deep imagination, and I know that’s what you are really dedicated to doing here on the show. I’m reminded of that quote from the gospels: the pathway to heaven is narrow and steep, and the pathway to hell is wide and many. It’s a way of thinking about thermodynamics – there are just more ways to break shit than to build it.

There are not that many ways all the cells in your body [00:28:30] can come together to make you – the emergent property of you. There are a lot of ways you just get 150 pounds of goo. So we say, “Okay, we’ve got a lot of power, and most of the scenarios with a lot of power suck. How could we have this much power in a way that doesn’t suck? How could we have this much power and not use it against each other?” We start seeing Orwellian control systems and we say, “That sucks, too.” So we keep thinking through: “How could we have it in a way that doesn’t suck and doesn’t depend on aliens or Jesus [00:29:00] coming back? How do we get us to be that kind of consciousness?” It’s a really good way of thinking about how to actually address these problems.

If we can’t [inaudible [0:29:09] – without vision, man perishes. If we can’t even see a well-grounded positive future and a positive use of the technological capacity we have, we are not going to make it. I love that you all have this space dedicated to exploring that topic. Incentive is always evil – it’s a bitch. I don’t just want to [00:29:30] move from perverse incentive to positive incentive. Positive incentive means my sense making has determined what I think is good, and I’m going to try to extrinsically override your sense making so that you make a choice aligned with my sense making.

I’m going to use an extrinsic reward strategy to co-opt your sovereignty and have your choice making be based on my incentive scheme rather than your own sense making. That is always the basis of evil. If I want to have a collective [00:30:00] intelligence that’s actually intelligent, I need everyone to have intrinsic sense making and choice making that is incorruptible, which means it’s not being co-opted by extrinsic reward and punishment schemas. I got this wrong at first. I used to say, “We have to create a world where the incentive of every agent is rigorously aligned with the wellbeing of every other agent and of the commons.” That is wrong.

What is right is to say that we must [00:30:30] rigorously remove any place where the incentive of an agent is misaligned with the wellbeing of other agents and the commons – and an adequate future is one that has no system of structural incentive at all.

Euvie: That’s a mine blower.

Daniel: The cells in your body are actually not trying to get the other ones to do what they want them to do. They have their own internal sense making processes and they do what makes sense to them. What makes sense to them also happens to be [00:31:00] what’s good for the ones around them, because they depend on the ones around them and vice versa and they’re in a communication process. The brain is not overriding the cells and in no way could handle the complexity necessary of the cells not doing their own sense making. Better incentive schemas as a transition, which is happening in the blockchain, is nice. It’s worse than more perverse incentives but it is transitional, not post-transitional.

It actually does not address existential risk; it doesn’t give us the right collective intelligence. The right collective intelligence has to be [00:31:30] fractal sovereignty – meaning that at the level of the individual, and at every group size, it has its own intact sense making and choice making that ends up vectoring towards omni-consideration. The level of shift we’re talking about is hard to imagine.

Mike: Yeah. What has to be invented to even begin a transition and then be put to rest so that the next version can come along is such a long road.

Daniel: The reason we [00:32:00] incent people is because we have a civilization that needs a lot of shit done that is not fun. It’s dreadful stuff. We want to get the people to do the dreadful stuff. If we created a commonwealth where everyone had access to resources, then nobody would do the dreadful stuff and then the state would have to force them. That’s why we don’t like communism. Then you get the state imperialism. We say, “Okay, cool, let the free market force them instead,” it’s economic servitude but at least that doesn’t look like somebody did it because the [00:32:30] market is just the anonymous thing.

If you don’t do the shitty job, you’re homeless and your kids can’t eat. Cool – but we’ll tell you the story that you can work your way up and become wealthy, even though statistically we know that’s silly; it happened to those two guys that one time. Statistically, the rest of the time, having more resources makes it easier to make more resources, and having fewer resources makes it harder to make more resources. The system has a gradient that actually pushes in the direction of inequality, not away from it. [00:33:00] That’s where incentive came from. That’s the good side.

The negative side is the few controlling the many: using incentive, reward, and punishment to get people to do the shitty things so we don’t have to do them ourselves. This is using choice to create a system of causation – incentive is a causal system, game theory is a causal system – to control, or co-opt, the choice of others. I want my theory of choice to affect [00:33:30] causal dynamics that are only causal – i.e., if I make an automated robot, I haven’t actually made a sentient being into a utility. I’m going to say something even deeper, which is that instrumental relationships are evil.

Mike: Can you expand that?

Daniel: Yeah. If I’m interacting with you to meet some people that you know to get my network ahead or to get some knowledge from you or to gain access to something or to whatever it is, I have something [00:34:00] that I want to do that you are an instrument towards, you are a path towards. It’s a utilitarian ethic. You are a ends to a means for me. However I relate with you, however it affects your own sovereign, sentient experience is a place I might externalize harm because it’s not why I’m relating with you.

Mike: Yeah, yeah.

Daniel: Again, a healthy world, a world of the future, other people need to have intrinsic [00:34:30] value independent of utilitarian value to everyone. That’s a part of the culture. Not just the other people and other beings, all kinds of sentient beings, but relationships have intrinsic value. I’m going to invest in the integrity of our relationship independent of me getting anything out of it, because it is actually the basis of meaningfulness itself. Which is why in a utilitarian and instrumental dynamic, we’re getting ahead while feeling utterly fucking meaningless and destroying everything [00:35:00] that is meaningful in the process. That is us being hooked to addiction to a stupid game, where what we think we want is not what we actually want, and what we think of as a win is actually an omni stupid thing.

This is why the Hindu concept of dharma was a virtue ethic, not a utilitarian ethic, and there was a very meaningful set of concepts there: “Do what is inherently right in your relationships with life, [00:35:30] independent of what the outcome might be, because you really don’t know what the fucking outcome is going to be.” If you just try to figure out what the outcome is going to be, you’re going to be wrong a lot of the time, and you’re also going to justify a lot of unethical stuff. Utilitarianism is the rampant ethic that anyone who’s paying attention to ethics pays attention to right now. It’s not without merit, but it is also problematic. It’s up there with democracy and capitalism and the philosophy of science in terms of being a [00:36:00] problematic thing to have as the dominant system.

We cannot actually predict well enough in complex systems to do the utilitarian thing, and the intrinsic dynamics of a relationship and of another being end up being moved to a means to an end other than themselves. As soon as I stop factoring everything meaningful along the chain towards whatever I think my outcome is – and my outcome becomes actually being in a way that is in integrity with and honouring of all life – [00:36:30] now it’s a virtue ethic.

Euvie: Yeah. I was having this conversation recently about people who are obsessed with life hacking and optimizing everything. When they get into that mindset, eventually they get to what they call optimizing relationships and then they start putting people on a value hierarchy where they want to interact with high value people and they want to get a high value woman and they’re using these tactics to find and attract the most high value woman. It’s funny, because those people, [00:37:00] in my experience, are some of the most existentially unhappy people that I’ve met. They will never demonstrate it outward, in an outward way, but that’s what I’ve noticed. That people who try to optimize everything in this kind of utilitarian way end up really profoundly unhappy.

Daniel: It’s the same thing as continuously pursuing a better high. It’s, “I’m getting a hit from winning at a particular thing, so I’ve got to try to win at it all the time.” But, “I need the hit because my baseline [00:37:30] is that life feels fucking meaningless because I don’t actually have any real relationships and I don’t even know what meaning means. I don’t even know what intimacy means.” That hyper normal environment needs a hyper normal stimuli to feel anything. The fact that I use people instrumentally has people end up not liking me, which makes me hurt even more, which makes me want another hit even more.

Mike: People like you make it super easy. You just come on and it’s like we listen to audio books all day then we get to actually talk [00:38:00] to the person who’s coming up with the cutting-edge ideas themselves. It’s quite interesting, thank you.

Daniel: Bye ya’ll.

Euvie: It’s always wonderful getting our brains blown by you, thank you.

Daniel: Thank you both, this was really fun.

Daniel Schmachtenberger

Today on the show we welcome back Daniel Schmachtenberger, the co-founder of Neurohacker Collective and founder of Emergence Project.

After addressing the existential risks that are threatening humanity in one of our earlier episodes, Daniel now dives deeper into the matter. In the following three episodes, he talks about the underlying generator functions of existential risks and how we can solve them.

Win-Lose Games Multiplied by Exponential Technology

As Daniel explains, all human-induced existential risks are symptoms of two underlying generator functions.

One of these functions is rivalrous (win-lose) games. This includes any activity where one party competes to win at the expense of another party. Daniel believes that win-lose games are at the root of almost all harm that humans have caused, both to each other and to the biosphere. As technology is increasing our capacity to cause harm, these competitive games start to exceed the capacity of the playing field. Scaled to a global level and multiplied by exponential technology, these win-lose games become an omni lose-lose generator. When the stakes are high enough, winning the game means destroying the entire playing field and all the players. 

Daniel then looks into some of the issues that capitalism, science and technology have created. Among the byproducts of these rivalrous games are what he calls “multipolar traps”: scenarios where what works well for individuals locally runs directly against global well-being. He proposes that our sense-making and choice-making processes need to be upgraded and improved if we want to solve these traps as a category.

Daniel believes that the current phases of capitalism, science, technology and democracy are destabilizing and coming to an end. In order to avoid extinction, we have to come up with different systems altogether, and replace rivalry with anti-rivalry. One of the ways to do that is moving from ownership of goods towards access to shared common resources. Daniel argues that we are at the place where the harmful win-lose dynamics both have to and can change.

He also proposes a new system of governance which would allow groups of people that have different goals and values to come to decisions together on various issues.

Humanity’s current predatory capacity enhanced with technology makes us catastrophically harmful to the environment that we depend on. Daniel challenges the notion of “the survival of the fittest”, and argues that it is not the most competitive ecosystem that makes it through, but the most self-stabilizing one.

Complicated Open-Loop Systems vs. Complex Closed-Loop Systems

The biosphere is a complex self-regulating system. It is also a closed-loop system, meaning that once a component stops serving its function, it gets recycled and reincorporated back into the system. In contrast, the systems humans have created are complicated, open-loop systems. They are neither self-organizing nor self-repairing. Complex systems, which come from evolution, are anti-fragile. Complicated systems, designed by humans, are fragile. Complicated open-loop systems are the second generator function of existential risks.

Open loops in a complicated system, such as modern industry, create depletion and accumulation: resources are depleted on one end of the chain and waste accumulates on the other. A natural complex system, by contrast, reabsorbs and processes everything, which means there is no depletion or waste in the long run. This is what makes natural systems anti-fragile. By interfering with natural complex systems, we affect the biosphere so much that it begins to lose its anti-fragility.

At the same time, man-made complicated systems are outgrowing the planet’s natural resources to the point where collapse becomes unavoidable.
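
The depletion-and-accumulation dynamic lends itself to a simple worked example. The toy simulation below (all quantities invented for illustration) contrasts an open loop, where the resource stock drains while unprocessed output piles up as waste, with a closed loop, where output is reabsorbed into the stock:

```python
# Toy contrast of open-loop vs. closed-loop resource flows.
# All quantities are invented for illustration.

def run(steps: int, use_per_step: float, recycle_rate: float):
    """Return (resource_left, waste_accumulated) after `steps` of extraction.
    recycle_rate=0.0 models an open loop; 1.0 a fully closed loop."""
    resource, waste = 100.0, 0.0
    for _ in range(steps):
        take = min(use_per_step, resource)
        resource -= take
        waste += take * (1.0 - recycle_rate)   # unprocessed output accumulates
        resource += take * recycle_rate        # recycled output is reabsorbed
    return resource, waste

print(run(steps=50, use_per_step=5, recycle_rate=0.0))  # open loop: (0.0, 100.0)
print(run(steps=50, use_per_step=5, recycle_rate=1.0))  # closed loop: (100.0, 0.0)
```

The open-loop run depletes the entire stock and accumulates an equal mass of waste, while the closed-loop run sustains the same throughput indefinitely – which is the structural difference the design criteria below address.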

Daniel explains that the necessary design criteria for a viable civilization which is not self-terminating are:

  • Creating loop closure within complicated man-made systems
  • Having the right relationship between complex natural and complicated man-made systems
  • Creating anti-rivalrous environments within which exponential technology does not threaten our existence
“Complex systems, which come from evolution, are anti-fragile. Complicated systems, which come from design, are fragile.” – Daniel Schmachtenberger

The Relationship Between Choice and Causation

Daniel explains that adaptive capacity increases in groups, but only up to a point. After a certain point, adding more people starts having diminishing effects per capita. This results in people defecting against the system, because that’s where their incentives are. He proposes that we create new systems of collective intelligence and choice-making that can scale more effectively.
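
One toy way to express that diminishing-returns claim is to let total adaptive capacity grow sublinearly with group size, so the per-capita share shrinks as the group grows. The exponent below is invented for illustration, not an empirical value (Geoffrey West’s Scale, recommended in the interview, covers real scaling laws):

```python
# Toy sublinear scaling: total capacity ~ N**BETA with BETA < 1, so the
# per-capita share N**(BETA - 1) falls as group size N grows.
# BETA is an invented illustrative exponent, not an empirical value.

BETA = 0.85

def adaptive_capacity(n: int) -> float:
    return n ** BETA

for n in (10, 100, 1_000, 10_000):
    total = adaptive_capacity(n)
    print(f"N={n:>6}: total={total:8.1f}  per capita={total / n:.3f}")
```

The declining per-capita column is the incentive to defect: past a certain scale, each added member gets back less than they put in.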

Science has given us a solid theory of causation. Through science, we have gained incredible technological power that magnifies the outcomes of our choices. We don’t have a similarly well-grounded theory of choice – an ethical framework to guide us in using our increased power. When it comes to ethics, science rejects all non-scientific approaches, such as religious ideas or morals. Instead, win-lose game theory has served as the default theory of choice in science. This has led to a dangerous myopia towards the existential risks that win-lose games generate.

It is necessary to address these ethical questions, especially given the level of existential risk we now face. We have to improve individual and collective choice-making to take everything into consideration and realize how we are interconnected with everything around us. “I” is not a separate entity, but an emergent property of the whole.

We need a theory of choice which relates choice and causation. The core of the solution, as Daniel explains, is coherence dynamics: internalizing what was external and including it in the decision-making process.

“It’s not those with the most competitive advantage that make it through in the long run, but the most self-stabilizing ecosystems.” – Daniel Schmachtenberger

The Path to a Post-Existential-Risk World

Daniel talks about the need for individuals and systems to have strength as opposed to power. Strength is not the ability to beat others, but the ability to maintain sovereignty in the presence of outside forces.

The path to the post-existential-risk world leads towards a civilization that is anti-rivalrous, anti-fragile and self-propagating. Ultimately, we have to create a world that has not only overcome today’s existential risks, but is also one where humanity can thrive.

“Do what is inherently right in your relationship with life independent of the outcome, because you really don’t know what the outcome will be.” – Daniel Schmachtenberger

Neurohacking

Daniel’s recent project is Neurohacker Collective, a smart drug brand with a vision of holistic human neural optimization. Their first product, Qualia, takes its name from a philosophical term meaning “an individual instance of subjective, conscious experience”.

After trying Qualia ourselves, we decided to arrange a special deal for our listeners who also wanted to give it a try. When you get an ongoing subscription to Qualia at Neurohacker.com, just use the code FUTURE to get 10% off.

In this episode of Future Thinkers:

  • The generator functions of existential risks
  • The impact of win-lose games, multiplied by exponential technology
  • Win-lose games at the heart of capitalism, science and technology
  • How to solve multipolar traps
  • How to replace rivalry with anti-rivalry
  • The design criteria of an effective civilization
  • The characteristics of complex and complicated systems
  • Open-loop vs. closed-loop systems
  • Scalable collective intelligence, sense-making and choice-making
  • The relationship between choice and causation
  • Natural and conditioned experiences 
  • The difference between power and strength
  • The path to a post-existential-risk world
  • How to increase our self-sovereignty
  • Why incentives are intrinsically evil

Book Recommendations:

  • Scale by Geoffrey West
  • Finite and Infinite Games by James Carse
