Amber Case: Well, hello. My name is Amber Case, and right now, I am doing research at the MIT Media Lab’s Center for Civic Media and the Berkman Klein Center for Internet & Society at Harvard University, and what I’m interested in is how we [00:04:00] interact with technology, and how that affects our culture. Topics such as automation, AI, self-driving cars, notification fatigue, social networks, ethics, security, the Internet of things: these are the things that I study. How they are really part of our lives, almost part of every interaction we have at this point, and what that really means. Because only 15 years ago, it would have been considered weird to have a cell phone with a camera, but now we have these devices that sit in our pockets and cry, and we have to [00:04:30] soothe them back to sleep, and they get hungry and we have to plug them into the wall. So, what does that actually mean for us as humans, where are we going, and how is that affecting our own concept of who we are? I just wrote a book called Calm Technology, which is all about alert fatigue, and how you can design better systems that have fewer alerts but still get the same amount of information across. And this all comes from Xerox PARC in the 80s and 90s, where there were two people who came up with this idea of [00:05:00] ubiquitous computing, which we now know as the Internet of things. And what they realized is that the 21st century would not have a scarcity of technology. We would have an overabundance of technology. The thing we would have the least of would be our own attention, and so making technology that respected that attention instead of taking it away would be a really important thing to work on in the future.
Mike Gilliland: So, what do you think are some of the biggest problems that you see now about the way we use technology and interact with technology?
Amber Case: I [00:05:30] think some of the biggest, glaring problems that come to mind are, first, audio user interfaces, such as audio-based phone tree systems, where if somebody’s talking in the background, it could reset that phone tree right back to the beginning; or the idea that a lot of these technologies try to speak in a human voice, which makes it really hard to understand them and communicate with them. They don’t necessarily understand things like accents or mumbling. And when we have something like an Amazon [00:06:00] Alexa that’s persistently listening to everything that you do, there are privacy and security concerns as well, not to mention the fact that once something interacts with you in that human voice, you kind of expect it to interact back at the same level that a human could. But it’s not really the greatest of technology, and it’s also very ambiguous input. And I think this is one of the biggest problems that we see in things like VR and AR. We have this idea of the Magic Leap, where you can have natural user input with your [00:06:30] hands, but could you imagine trying to play Mario Bros. with your hands? You would fail like 20% of the time. It would be awful, just using hand gestures. That’s why unambiguous input, like the buttons on a remote control, or the buttons on a phone, or the buttons on a video game controller, gives you a guaranteed input: you’re directly telling the computer what to do instead of saying again and again what you want the computer to do, turning yourself into a robot in the process.
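To make the contrast between ambiguous and unambiguous input concrete, here is a minimal, hypothetical sketch (our illustration, not from the interview; every name and threshold in it is invented). A gesture recognizer can only return a probabilistic guess that has to clear a confidence threshold, while a button press is a discrete event that always registers:

```typescript
// Hypothetical sketch: why a button is "guaranteed input" while a
// gesture recognizer is probabilistic. All names and numbers are invented.

type GestureGuess = { gesture: "tap" | "swipe" | "none"; confidence: number };

// A gesture pipeline only ever returns a guess with a confidence score.
// In a real system this would be a vision model; stubbed here.
function recognizeGesture(frame: Uint8Array): GestureGuess {
  return { gesture: "tap", confidence: 0.78 };
}

function handleGestureInput(frame: Uint8Array): boolean {
  const guess = recognizeGesture(frame);
  // Below the threshold we must reject the input and make the user
  // repeat themselves: the "smashing butterflies" loop.
  if (guess.gesture !== "tap" || guess.confidence < 0.9) return false;
  jump();
  return true;
}

// A button, by contrast, is a discrete event: no confidence, no retries.
function handleButtonInput(event: { button: "A" | "B" }): void {
  if (event.button === "A") jump(); // fires exactly when pressed
}

function jump(): void {
  console.log("Mario jumps");
}
```

The point is not the specific numbers; it is that the gesture path has a built-in failure branch that forces the user to repeat themselves, while the button path does not.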
[00:07:00] Euvie Ivanova: Yeah, it’s true. We were playing around with an HTC Vive recently, and I had to do this hand gesture to shoot, and it was just the most unintuitive hand gesture, and I was really struggling with it. I think it took me 10 minutes before I shot anything in the game.
Mike Gilliland: It was the HoloLens.
Euvie Ivanova: The HoloLens, yes, that’s right.
Amber Case: Oh, HoloLens. Yeah, so we call that smashing butterflies.
Euvie Ivanova: Yes, that’s the one.
Amber Case: Yeah, I work with a company called PTC. They’re in Needham, Massachusetts, and [00:07:30] we use the HoloLens all the time in industry because you can actually layer machine data onto your view of the HoloLens in real time, so you can actually see, like, the status of all these industrial machines. But if you want to click on anything, you have to put your hand in front of you, and click and smash butterflies. And the issue is that if you’re in a warehouse, you’re probably wearing, like, a giant glove, or your hands are really dirty, or you don’t have the right lighting conditions. And the HoloLens isn’t going to pick it up 100% of the [00:08:00] time, so you end up just smashing butterflies until you can get the input, whereas if you just had a button, it would click, and you’d be done.
Mike Gilliland: It’s amazing, really, how little people have thought about these inputs. Like, I bought a tablet recently, a Galaxy Tab 8, and then I bought a VR headset, not related to the tablet. But the VR headset came with a little Bluetooth controller with a joystick and, like, four buttons on it. It’s really tiny; like, it fits right in the center of the palm of your hand. And I use that for going through books. When I’m reading, [00:08:30] like, books, I can flip through pages and stuff with it. And, so, I just, like, hang my tablet up in the corner of the room or something, and I’m able to flip through pages that way without having to lift my hand up and touch the screen. Like, it’s a small thing, but I’ve thought about using it with VR, or, you know, the HoloLens would be another thing too. You could have your hand anywhere, and not have to point and smash butterflies, as you say. But it’s amazing that that’s not used more.
Amber Case: I agree. It is amazing that it’s not used more. Actually, Microsoft came out with a physical [00:09:00] Bluetooth button for the HoloLens recently because of this issue. It’s so funny because we are kind of used to buttons. And this has happened before, by the way. Like, history repeats itself in technology, often in about a 20-year cycle. So I picked up a book on old-school VR and AR interactions, called Artificial Reality 2, at the MIT Media Lab, and I was thumbing through it, and, lo and behold, there were all the primordial technologies we know today as Magic Leap, [inaudible] and Oculus Rift, [00:09:30] and you could see which ones failed, and you could see which ones succeeded, and it was just like a little rule book letting you see right smack dab into the future. It was pretty fantastic because you could see, like, this natural user interface input. We think that because we’re natural, the computer will understand this natural user interface, but humans have trouble understanding each other, so to think that we could program a computer that could understand everybody’s input… The biggest issue is that with a computer, usually the error is [00:10:00] tragic. You know: error, we don’t know why there’s an error in this form. Okay, well, now we’re going to tell you that there’s an error in this form. Oh, no, there’s a database failure. Like, these errors are catastrophic. When you’re talking to somebody and there’s an error, they say, “I’m sorry, could you repeat that?” And it happens every once in a while, and it happens with thick accents, and it happens with a truck driving by. Imagine a computer trying to handle that input, whereas a computer has no body. Like, literally, a computer is not embodied. It doesn’t need to [00:10:30] reproduce. It doesn’t need to eat. It doesn’t need to breathe. It doesn’t have the experience of growing up. It’s just a machine of inputs, and it’s only as good as it’s been programmed. And we’re human, and we’re programming it. And, so, whenever I think of, like, programs and computer code, I think of this very artistic, crafted, human process that’s incredibly organic, and, therefore, incredibly prone to mistakes. It’s not perfect, but the way in which it responds to errors is very severe. Like, if you’ve ever been stuck in a parking garage because you [00:11:00] bent the ticket to get out of the garage, and then you have to ask for a human to help. I think the most important thing as we automate is that we need to have better customer service, because the more we automate, the more we’ll get stuck in these situations where we can’t get out of them. Where we’re put on pause and we don’t know what to do. And as technology gets closer and closer to us, those being-put-on-pause situations could become catastrophic if we rely on those systems.
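Amber’s contrast between catastrophic machine errors and the human “could you repeat that?” can be sketched directly. The following hypothetical TypeScript (our illustration; the function names are invented) shows a brittle parser that dead-ends on the first ambiguous input next to a forgiving one that re-prompts a few times before escalating to a human:

```typescript
// A brittle system turns one ambiguous input into a dead end:
function brittleParse(input: string): number {
  const n = Number(input);
  if (Number.isNaN(n)) {
    // The "stuck in the parking garage" path: one bad ticket, total halt.
    throw new Error("FATAL: invalid input");
  }
  return n;
}

// A forgiving system behaves more like a person: it asks you to repeat
// yourself a few times, and only then hands you off to a human.
async function forgivingParse(
  ask: (prompt: string) => Promise<string>,
  maxRetries = 3
): Promise<number> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const raw = await ask(
      attempt === 0 ? "How many?" : "Sorry, could you repeat that?"
    );
    const n = Number(raw.trim());
    if (!Number.isNaN(n)) return n;
  }
  // The failure mode is recoverable by design, not a dead end.
  throw new Error("Escalate to a human operator");
}
```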
Euvie Ivanova: Yes, there’s actually a really good Twitter feed that’s called the Internet of [00:11:30] Shit. It’s a feed of all these different hilarious, terrible things that happen when the Internet of things doesn’t work, like people having all of their lights and heat turned off in the middle of the winter because there’s something wrong with their, you know, network of devices, like it got a virus or something like that.
Mike Gilliland: Pranksters’ playground, too. That’s another thing. Like, you know, anyone who knows how to hack that stuff is going to be messing with people all the time. I’m only saying that ’cause that’s the first thing I think of. Like, messing with your lights in the middle of the night.
Amber Case: That’s true. Internet of Shit. I love it. [00:12:00] I think the Internet of Shit is a great account. There’s actually a hackerspace in Portland that I go to, and there’s, like, an exploit workshop night where people go in and they just try to mess with stuff. A lot of security researchers like to do something really mild just to make people aware of an issue. And then after a while, you know, other people will ramp it up; they’ll have more nefarious ideas. But there’s often a warning, and it’s really hard because people don’t often take heed of these warnings. Not only do we have more surface area for people to attack and disable [00:12:30] things, but think of how powerful and reliable electricity is versus the Internet: we can guarantee that when you turn on the light, unless there’s a big power outage, that light will stay on. Could you imagine if all of our lights were running through the web? They would be flickering constantly, just like this phone conversation over Skype. And the issue with that is that we tried really hard at the beginning of the last century to electrify everything and to make the grid incredibly stable, so that we could have electrical applications, home [00:13:00] appliances, all the big electrical applications. The killer apps were, like, the dishwasher and the washing machine, all these important things. But, now, we’re thinking of inviting things into our home that rely on the web, and we know that the web is not stable or reliable. The bandwidth goes up and down all the time. We have no regulation over it like we have over the electrical grid, and, therefore, we can’t guarantee any good user experience. And then companies like Petnet, which makes an automated pet feeder, decide that [00:13:30] they’ll have this application that automatically feeds pets over the web, and then the server goes down, and the pets get stuck. This is just a harbinger of things to come. I feel like early on, especially in the United States, there were people who would open up Tylenol bottles and, like, put poison in them, and then we got, like, the child-safe caps. Like, until somebody dies, we might not have the right consumer protection in place that guarantees an experience, because there’s been no incentive to. Nobody is [00:14:00] liable. Companies aren’t really liable if something goes wrong. Even Petnet’s terms of service said, “Sorry, we’re not liable for any of the service outages.” And, so, we’re not liable if your pets die, basically.
Mike Gilliland: Wow.
Euvie Ivanova: Yeah, and it gets really scary when you talk about medical devices that are hooked up to the web.
Mike Gilliland: Pacemakers.
Euvie Ivanova: Pacemakers, yeah, things like that. Or if you involve AI. Even things like, you know, if you live in a really cold climate, and you have the heat in your house being controlled via the web or [00:14:30] hooked up to the web, and your house gets a virus, and the heat goes out in the middle of the night while you’re sleeping, you could actually get really hurt, or maybe even die.
Amber Case: Yeah, if you’re elderly or really young, you’re less likely to survive heat waves and cold, and this is a really big issue, because where are the ethics around this? Like, we think, okay, artificial intelligence and technology are supposed to free up our time. First off, [00:15:00] artificial intelligence had big peaks in, like, the 60s and the 80s, and those peaks busted so badly. As in, people tried AI, and it failed so much that people just didn’t go near it for a long time. And now it’s back, like its own virus. Like, I don’t want to see technology making decisions for us, because that means the people who write the software are making decisions for us. We don’t have the idea of a judge or a lawyer or ethics or humanity. We have an [00:15:30] automated system that thinks it knows what’s best for us, applying one-size-fits-all rule sets. I want to work alongside technology. Like, we domesticated animals, for instance. Herd animals are capable of being domesticated because they work in groups, and, to an extent, humans are, too. Like, that’s how we get people to work in offices, you know, and have communities. But why would we have something above us, overseeing us, written by us? We already have laws, but the people who implement the laws are humans. [00:16:00] We don’t want to have a machine implement the laws. How would we determine if that machine is just without having humans around to judge whether it’s just in a given situation? And more than one human, too. There’s a reason why people learn so much law, and so much history, in order to become a judge. So, we work alongside domesticated farm animals on farms. We work now with some automated farm equipment, but we’re in control of that. That’s a tool that works alongside us to help us do a thing. Why would we want a society in [00:16:30] which, if you do something wrong or you mess up on an application form, you have an unrecoverable error because AI has turned against you? Because you bought the wrong thing, or you said the wrong thing online. This terrifies me because there’s no way out. It’s this Ouroboros, the snake eating its tail, where, “Oh, well, let’s just do AI-made movies.” Well, say goodbye to anything interesting or unique. It will only produce blockbusters that are guaranteed to make a billion dollars each time. It won’t [00:17:00] allow any of the human creativity that pushes things forward to come through. If you think about some of the great things that have been made by humans, or the great experiences that you’ve had, they probably happened on Kairos time and not Chronos time. Chronos time, from the Greek, is this idea of industrialized time, you know, the nine-to-five job. Whereas Kairos time is that kind of unplanned, experiential time: looking at a sunset, eating a great meal, or falling in love. We [00:17:30] don’t want to automate those things, so why are we making technology that takes us away from those experiences and interrupts us in the middle of a sunset? Why doesn’t it free up our lives a little bit more? And, furthermore, why can’t we be more human? Like, if we’re tired and we need to take a nap, we should be able to. We shouldn’t be forced to look at all these notifications all the time.
Mike Gilliland: It’s funny, you bring up a lot of interesting points that we’ve been discussing quite a bit lately. We were just talking about this last night in bed, about how, you know, we should be leaving the [00:18:00] phones outside of the bedroom when we go to bed. It’s just a space to decompress from the day, and detach from technology for a while. But why is it such a scary thought to leave the phone outside, where the notifications aren’t constantly buzzing? And I was thinking, like, the partners and business partners we have and stuff, are they really expecting that we’re available at every 20-minute interval to put out fires or solve emergencies and stuff like that? Like, why has our society developed in such a way that we have to be on that 15 or 20-minute interval of checking our phones?
Euvie Ivanova: Yeah, and if somebody doesn’t [00:18:30] respond instantly in a chat…
Mike Gilliland: They start freaking out.
Euvie Ivanova: You get freaked out, yeah.
Amber Case: Oh, people get terrified, or you get the… If people don’t respond to you in the chat, you get, like, the three little dots that say they’re typing. And this is compounded by the fact that we have things like OkCupid and Tinder, where people are distracted by their own lives, and they’re trying to fit in these micro-interactions all the time, but they’re all out of phase. And, so, it just ends up making people feel completely weird. I had a friend recently who just expected me [00:19:00] to be available, and she was panicking at me because she thought, “Are you my friend anymore?” I was, like, “Wait, what?” I was asleep for three hours. I woke up to seven phone calls because I put my phone on airplane mode by default, because I don’t want my attention hijacked. Like, I want my brain back. There’s a movement by Tristan Harris called Time Well Spent, which is all about spending your time in a nice way and getting your brain back from social media. And there are a bunch of plugins that you can install, so I started installing [00:19:30] all these plugins. One of them is called Inbox When Ready, which is fantastic. It’s for Gmail. It’s a Chrome plugin. And all it does is hide your inbox when you log in, and then you can hit “Show Inbox” if you want to see it. Every time I try to go into Gmail to do a task, I end up getting distracted by something else, answering that email, and never getting to do the thing that I wanted, because there’s always a fresh set of email on top of my inbox. And, so, if I hide my inbox, I won’t get distracted. And I do the same thing with [00:20:00] Facebook and a bunch of other applications. And, also, the expectation I’ve tried to set with people is that, yeah, I’m not going to respond instantaneously. In fact, it might be a couple of days. And that’s going to have to be okay. You can call me if you absolutely need something, which is terrifying to some people. But, in some industries, they just call, we get it done within 30 seconds, and it’s fine. And, you know, it could take the span of 30 emails to get done what you could do in those 30 seconds.
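For the curious, the core trick behind an “Inbox When Ready”-style extension is small enough to sketch. This is a hypothetical illustration, not the plugin’s actual source; the CSS selector is a stand-in for whatever Gmail’s real markup uses. A content script injects a stylesheet that hides the inbox list on page load, and adds a “Show Inbox” button that removes it on demand:

```typescript
// Hypothetical content-script sketch of an "Inbox When Ready"-style plugin.
// ".inbox-list" is an invented selector, not Gmail's real one.
const HIDE_INBOX_CSS = `.inbox-list { display: none !important; }`;

function injectStyle(css: string): HTMLStyleElement {
  const style = document.createElement("style");
  style.textContent = css;
  document.head.appendChild(style);
  return style;
}

// Hide the inbox immediately, so you can search or compose undistracted.
const style = injectStyle(HIDE_INBOX_CSS);

// Reveal it only when you deliberately ask for it.
const showButton = document.createElement("button");
showButton.textContent = "Show Inbox";
showButton.addEventListener("click", () => style.remove());
document.body.prepend(showButton);
```

The design point is that the default flips from “inbox first” to “intent first”: the list of fresh email only appears after an explicit click.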
Mike Gilliland: I use my phone, but I hate typing on tiny phones [00:20:30] and texting. So, I leave voicemails instead. And that’s become something, like, oh my God, that’s become way better for me, to be able to communicate with employees or team members or something. Just leaving, like, a one-minute Facebook voicemail. So, what do you think are some of the ways that we need to change the inputs and the notification systems in our technology?
Amber Case: I think it needs to be okay not to respond constantly, and people do need to have a break. We need some moderation. In some religions, you have the idea of a day of rest, or like a Shabbat. [00:21:00] Or, you know, something where you’re not around technology constantly, and you’re around people of different ages and hanging out with each other, where you’re forced to deal with the uninterrupted reality around you.
Mike Gilliland: Yeah.
Amber Case: That might be nice, you know. And it doesn’t have to be set in stone. Or the idea that you could have a magic button or switch on the router in your home, and when you want to have dinner, you just turn off the web. Just, ding. That would be really nice, where there are just these times when it’s not default on, you know. That’s the thing: it’s default on. I think in [00:21:30] France, there was a bill passed that said you don’t have to be available on your work email after 5 pm, so that you can actually have that moment of reflection time. What happened to journaling? Or, like, when I go on a big Reddit binge, for instance, I kind of want to be forced to write down what I’ve read, because otherwise I’m not writing any of that memory to my brain. It’s just Internet junk food, and it doesn’t feel good afterwards. So, I think we’re [00:22:00] just really off balance right now. We went from an early dangerous technology, such as the book, to another dangerous technology, such as the phone, which has been taking all of our attention for a while, and, eventually, we’ll come to terms with it, and it won’t be taking all of our attention. But that’s the thing: there’s always going to be a whole segment of the population that is addicted to something because they have an addictive personality, and whether that’s, like, cigarettes or television or whatever it is, they’ll still find a way to get addicted and feel [00:22:30] upset. But at least we can have it come back to being a tool that helps us out. I think one of the things that I would really like to see is long-term technology. You know, if I talk to my grandmother, she’s like, “Oh my gosh, I can’t use this phone.” And I say, “But can you cook me something, or can you mend clothes?” Or, “Do you know how to fix a car or change a tire?” She’ll say, “Of course.” Because those things are kind of permanent. Take cooking: you get better at cooking with age. You get worse at technology with age because it [00:23:00] changes so quickly. And what happened to, you know, having something really stable? Like, we can’t even keep our phones for more than a year now because they turn against us. Like, why do we have software that gets larger in size instead of smaller? If we’re making more efficient software, shouldn’t it be smaller? Shouldn’t the packet sizes be smaller when we use the mobile web? Like, everything is getting slow and junky and full of crap, and it doesn’t need to be that way. There should be a reward for the least amount of data, [00:23:30] not the most amount of data. I mean, I hate this thing where they’re like, we’ll just take all the data and throw a data scientist at it. Like, well, what about taking just the data you need? Or observing in a supermarket that, like, you shouldn’t put this item here, because there’s a tiny woman who’s trying to get this item and you put it on the wrong shelf, you know. You could find that out just by observing. And, so, when we think that, like, we can just have a bunch of data, and that replaces wisdom and time, it makes no sense to me. It’s just going backwards. [00:24:00] We’re going backwards. Aaah!
Euvie Ivanova: I think we’re approaching technology kind of like teenagers right now. We have these awkward devices that we’re still figuring out how to use, much in the same way that we’re trying to figure out what our bodies are doing when we’re going through puberty. I feel like we’re in the pubescent phase with technology, where we don’t have self-control with it, and we don’t really know how to use it.
Mike Gilliland: There’s no culture around it either.
Euvie Ivanova: Yeah, there’s no culture around it, and it’s kind of like this… Anything goes, and people just [00:24:30] go overboard with it.
Amber Case: I agree. And people get into very intense fights on Twitter and Facebook that normally… Were you on the web when there were just a bunch of forums everywhere? And if you were interested in something, it was like, “Here’s the subject you’re interested in, and here’s a forum.”
Mike Gilliland: Yeah.
Euvie Ivanova: Yeah, I remember those days.
Amber Case: Right. So, forums were… If you had a fight, it was between, like, 15 people in a forum thread. And the moderators would get to it, and they probably knew the people and they would handle it, and [00:25:00] either people would leave, or people would calm down, or people would be banned. It was small and manageable. And now we have this kind of one-size-fits-all template culture, where you put your name in, you put your photo in. We have these tiny threads that would normally live on a really small forum, moderated by somebody you knew, or at least somebody close to you, on a forum you’d been on for three years. Now, we have that, plus anybody can see it, so these tiny little discussions that could have been solved [00:25:30] between individuals are now solved, or not solved, by a bunch of anonymous administrators who have no idea what’s going on and aren’t close to anybody. And I think that’s a big change, going from a kind of smaller culture to a much larger Internet culture, where we’re not running our own forums; we don’t have, like, these tiny pocketed communities. We just have one global community. I mean, the idea of Aristotle’s oecumene, a world of [00:26:00] open doors without borders, has kind of come to life through Mark Zuckerberg’s Facebook. I mean, he did study the Greek scholars, and I’m wondering if they got into his brain about making this global oecumene without borders, because that’s what it is. But we lose this closeness and this ability to make mistakes in the process. We lose our flexibility.
Euvie Ivanova: Yeah, that’s an interesting point, actually, our ability to make mistakes. Now, people who grew up with the Internet and who came [00:26:30] of age with the Internet might have posts they made when they were 13 that are still on some website. And now they’ve lost their password and can’t log in anymore, but it’s still there. You said something terribly embarrassing or posted an awkward picture of yourself, and it’s there forever, and there’s no way to get rid of it. So, I know that in some countries in Europe, they have this law called the right to be forgotten, and I think, like, beyond a certain date, any posts or any, like, search results with your name are not shown. So, I [00:27:00] think that’s pretty interesting.
Amber Case: I think that’s a very interesting idea because, yeah, as teenagers or as adults or anything, people can be in a really bad mood. Or they haven’t eaten enough yet, and they’ll say something pretty nasty, and they don’t want that permanently there. And, usually, if you said that, it was in a room of five people, or just to yourself. It wasn’t possible to broadcast it all throughout the world. And I think that it’s really important to remember that a lot of the time when people are on social media, they should be just taking a [00:27:30] nap, or they’re exhausted, and it’s like a cigarette break. It’s so easy to dip into; it’s so instantaneous. And, so, this is a problem, because you have a bunch of kind of grouchy, upset, slightly depressed people, and now they’re suddenly posting things, and other kind of upset, grouchy people are also posting things. Of course, they’re going to get into a fight. And that’s kind of the problem where we have these big debates, where it’s like, wait a second, none of this was necessary. But this could have been resolved [00:28:00] before, you know. If you were on a desktop computer before, versus a mobile phone, you would go home, eat some food, turn on your computer, and look at the notifications. It had a time and a place, a physical, hard-wired location that could never move. You couldn’t just move your desktop somewhere. That was like an afternoon chore. But, now, it’s anywhere, and, so, all these people with different contexts are looking at the same post and reacting differently, and there’s such a short fuse between your hand and your [00:28:30] ability to type something that we don’t have the pause and thought that we could have before. We don’t have that reflective moment that you might have when you write a physical letter. And, so, what meaning do you have anymore when you don’t have that? Like, we might be able to text really fast, but we’re producing the equivalent of the Canterbury Tales and not anything like Shakespeare that really tells us about the human condition, except for the fact that we’re [00:29:00] kind of annoyed, and we need to eat and need more sleep.
Euvie Ivanova: Yeah, it’s like unfiltered monkey mind. I wonder if this is something that we should be mitigating with technology, or if we should be trying to change ourselves to become more mindful, meditate more, you know, take more breaks, spend more time in nature, and try to fix it, kind of internally rather than externally, or if it should be some combination.
Mike Gilliland: Or in between, yes, on the societal level. You know, having some sort of propaganda campaign to change how society looks at how we [00:29:30] should behave with technology.
Amber Case: I think three things there: we need more human support, we need the support of companies in this, and we need the support of our community, in that we need to stop building anonymous condos in the middle of nowhere, where no neighbors know each other. We need to have small neighborhood communities, and not have so much of a driver culture. But that’s a whole grid problem that, you know, some cities are better at than others. But I think the fundamental issue behind all of this is that we now have companies that are larger than entire [00:30:00] countries. You know, it’s a threat to them to say, “Hey, use our services less, and we’re going to help you,” because they make all their money off of that, and if somebody does that inside a company and the shares go down and the stock price goes down, then they can get fired. And, so, there’s a massive disincentive to actually have people spend less time on these social networks. So, I think we need to have a little bit of that. I just got a Facebook survey when I logged in the other day that said, “Do you like Facebook or not? Do you find [00:30:30] it addicting or not?” And, you know, I wrote them all these notes. So, they’re aware of it, and the whole Time Well Spent movement is putting pressure on these large companies. But it’s hard to say, because, you know, Facebook and Google and all these things, they should be utilities. They shouldn’t be ad-sponsored, and that’s the other reason why we have all this fake news. Fake news is profitable.
Euvie Ivanova: Yep.
Amber Case: You know what triggers people and makes them upset and emotional, because you watch these social networks, and then you can easily write a bunch of stories and make [00:31:00] $40,000 a month in ad revenue off of it, you know, without a lot of effort. But, still, the attention model is totally off. I mean, we used to buy a paper for 50 cents every morning, or 10 cents or whatever, and we would see the news. Or we’d have a subscription, and that subscription meant that the newspaper could stay in business. And, yeah, the news wasn’t always the greatest, but we had these guarantees that those businesses would stay open. Now it’s all consolidated, and, so, how many other websites do you go [00:31:30] to versus 10 years ago? Like, you go to Facebook, Google, Twitter, Reddit maybe, Hacker News, Gmail. Do you go to any other sites, or are those just afterthoughts?
Mike Gilliland: Or if I arrive at them, I arrive at them through any of those sites you mentioned.
Euvie Ivanova: Yeah. Like, those sites are entry points, and then I go and read articles somewhere else.
Mike Gilliland: Yeah. But even Facebook is trying to build an API to pull in articles into the Facebook platform, so you don’t even have to leave.
Amber Case: Aaah!
Mike Gilliland: Brutal.
Amber Case: Aaah! So, then [00:32:00] they’re a web browser.
Mike Gilliland: Yeah.
Amber Case: They’re a web browser that grabs everything that you browse, and that… Once they do that, then they’ll be fighting with Chrome.
Mike Gilliland: Yeah, exactly.
Amber Case: Then they’ll really be fighting with Google.
Mike Gilliland: Yeah.
Amber Case: It’s so weird, because now it’s kind of illegal to be anonymous on the web. It’s considered terrifying. We should have not only the right to be anonymous, but the right to have different user profiles, so that they aren’t all commingled. I would love to be able to browse the web anonymously. Or the idea of those old chat forums, where you just made up a name, and then you could be anybody and just have a fun time [00:32:30] chatting. Now it’s considered, “Oh, no, you can’t do that anymore,” and it really makes me sad, because that’s the type of interaction I really loved and enjoyed on the web before, where you met people’s minds, and their bodies didn’t matter. This kind of disembodied interaction was neat because you could talk about ideas, you know. But, now, you see somebody’s entire form, their real name, not a handle. You can get their address really easily. You can see all of their history on Facebook, and it’s no big deal, [00:33:00] and a lot of people don’t know how to lock that down. And, so, now this is the norm. So, for people growing up in this, it’s like, “Oh, yeah. This is how I present myself online. Cool.” But I think there’s something missing: that ability to be dumb, silly, make mistakes, experiment with identity, and just be human.
Euvie Ivanova: Yeah, and be honest, because I remember back in the day, sometimes, like you said, you could, you know, make up a name and put up some random picture for your avatar, and nobody knew who you were, and [00:33:30] you could have really honest, deep conversations with people about difficult subjects. And you could reveal personal things about your own life without being afraid that they’d find out who you are.
Mike Gilliland: Yeah.
Euvie Ivanova: And it was therapeutic in many ways. Like, you could just be real, you could just be human. And, now… being on social media always involves a certain level of fakeness.
Amber Case: Yeah, there’s always that presentation of self. The sociologist Erving Goffman wrote a book on the presentation of self in everyday life, but there’s [00:34:00] kind of a sequel to that: the presentation of self in digital life. It’s always be on your best behavior, never say anything that’s difficult, and don’t disagree with anybody, because if you do, 87 replies later, they’ll get mad about your choice of political party. And I think that’s the problem: now we’re forced even more into a kind of hive mind, because if you disagree, it could be career suicide. Or all these people will unfriend you because of something you believe, not because you just had an [00:34:30] opinion and a nice discussion over dinner with five people.
Mike Gilliland: Yeah, that’s exactly true. We’re lucky to kind of benefit from controversy, but if we didn’t, it would be a really horrible thing. Some of the things we say cause firestorms, like… Oh, they can be scary.
Euvie Ivanova: Or some of the guests we have on the show. Like, we had Jordan Peterson on the show recently, and some people stopped supporting us on Patreon, or said that they’ve stopped listening to our podcast because we had Jordan Peterson on.
Mike Gilliland: Yeah, very funny.
Amber Case: Well, that’s so interesting, because now all these things are [00:35:00] sponsored by people. Basically, you have, like, a public radio station sponsored by Patreon listeners, right? And, so, then you have to do something that works with your broadcast community, or you take these risks. But you see a podcast like Love and Radio, and they’re like, “Yeah, we’re going to bring in child molesters and, you know, rapists and drug dealers and, you know, everybody. And we’re going to give them a voice, and get you to hear their perspective.” And that’s so uncomfortable to listen to. It’s [00:35:30] so uncomfortable. But then you’re like, “Oh, I think I understand, like, why somebody could do these things.” There are societal factors, and they’re screwed up, and they know it, but at least you can hear it, and it’s difficult, very difficult.
Mike Gilliland: Yeah.
Amber Case: But unless somebody’s doing that specifically, it’s really hard, because how do you say, like, “I want to be open and honest,” or, “I want to represent many viewpoints,” when those things are so magnetically polarized at this point? We don’t have a lot of moderation, because the systems that we have [00:36:00] push us into these really extreme shapes, because extreme shapes make way more money than non-extreme shapes.
Mike Gilliland: I know you got to go, but I just want to ask you one more question before you go. And that is what advice do you have for people to adapt and change how they consume this information, and avoid inundation with notifications?
Amber Case: Simply, some browser plugins. Like, literally, Inbox When Ready for Chrome has saved me a lot of time in Gmail, and the Facebook one… I forget what it’s called, but it puts just, like, a [00:36:30] nice little quote in front of your Facebook feed, and then I can just use Facebook Messenger, and I don’t have to worry about diving into the news feed, which is great. Put your phone on airplane mode as much as you can. Turn your Wi-Fi off when you can. Before you look at your computer, put a list down on paper of what you actually want to accomplish, or don’t use your computer at all. When you notice yourself just sitting there on your phone, try not to. Try to just sit on the floor and stare into [00:37:00] space for 30 minutes. You’ll be doing the exact same thing, but you can be thinking within your own mind instead of having the thoughts of a bunch of other people show up in your head. You won’t be missing out on anything…
Mike Gilliland: Yeah,
Amber Case: … at all. We forget all these things that, you know, we thought we were missing out on, but we don’t really have a lot of memory or comprehension of them, because we were always inundated by exciting things. But how many of those do we actually remember? Which ones are long-term memories that actually help us or affect us? There aren’t that many, and those are decreasing, [00:37:30] you know. Remember when you were a kid and you watched a movie, and that movie stuck with you forever? And you made your own toys about it, or you made your own drawings about it, or you thought about it in your head. Now, you just buy a bunch of merchandise and, like, go to the fan communities. Well, that’s nice, but it’s different from the reflective time that I think we’re missing, because we don’t have that digital downtime. We don’t have the ability to be bored and not have anything to do. There’s always something to do. It’d be really nice not to have a lot of things to do. Or [00:38:00] take a freaking road trip with a paper map, and don’t have Internet access during it. That would be amazing. I do that as much as I can.
Mike Gilliland: That’s horrifying.
Amber Case: Yeah, after the first few days, it gets really good. But those first few days are kind of scary.
Mike Gilliland: You must actually be a fan of the blockchain movement, or are you familiar…
Amber Case: I’m not a fan of the blockchain movement at all.
Mike Gilliland: No?
Amber Case: It’s very slow.
Mike Gilliland: Really?
Amber Case: It’s basically a glorified bank ledger. Anytime somebody says this one digital thing is going to solve all [00:38:30] of our problems, I look at Bitcoin transaction rates and dates and times and time logs, and I say, “Really? We want to put everything on this incredibly slow system? No, thank you.” Sorry, sorry to have [inaudible] the blockchain fans. If you’re a blockchain fan, and you cease to subscribe to this podcast because there was an anti-blockchain… It’s not that I’m anti, but, like, stop hailing new technologies. It’s like the god pill. Everything’s going to have problems. The reason why it seems perfect is because it isn’t [00:39:00] used enough yet. Like, things are organic in the real world. They aren’t perfect. And, man, the minute enough people use it, like, I am not waiting three days to buy a burger at a restaurant. It’s really good for large transactions, probably for governments, but on an individual basis, no, thank you. I will stick with my cash. I don’t want a record of everything I’ve done.
Mike Gilliland: Well, I’m more specifically interested in ownership of companies that is decentralized [00:39:30] and distributed. That’s the thing that’s most exciting. Like, you said Facebook should be a community good, or… I forget your exact words… should be a utility. That would be an application I could see, assuming the speed aspect was fixed. If, like, everyone owned these technologies, the incentives for fucking it up with advertising or anything else would hopefully go away.
Amber Case: Right, it could be that way. Or it could be, like, a very angry shareholder meeting. Or, like, a big town hall where everybody is constantly [00:40:00] fighting, or, like, the HOA condo community type of issues. I mean, everything’s going to be a mess. We just have to choose which mess we want to take on, and what’s the least amount of mess. And I prefer to choose messes that have been time-tested for a long time, kind of, like, guaranteed messes where we know all the issues already, instead of saying, “Oh, it’s new. Therefore, it won’t be a mess.” It’s going to be a mess. Just figure out whether it’s going to be a larger or smaller mess than the one before it that we already figured out and already deal with. Well, [00:40:30] with that, thanks for having me on the show. It’s been really fun to talk.
Mike Gilliland: Yeah, cool. Thanks for coming.
Euvie Ivanova: Thanks for your time.
Amber Case: Thanks.
Mike Gilliland: Talk to you soon.
Euvie Ivanova: Bye-bye.
Today our guest is Amber Case, a researcher on human-tech interfaces at MIT and Harvard. Her areas of study include home automation, AI, self-driving cars, IoT, social networks, cybersecurity, and the ethics of all of those fields. Because of her fields of study, she sometimes refers to herself as a cyborg anthropologist.
Amber Case on Building Unintrusive Technology
While many tech futurists argue for humans merging with machines as a utopian vision of the future, Amber has a more critical point of view. She studies the ways in which human-machine interactions can be unproductive, unhealthy, or harmful.
She points out how the incentives of technology companies like Facebook push them to hijack our attention instead of providing a public utility.
Another side effect of an interconnected world Amber has studied is notification fatigue. Being connected to the internet during all waking hours has set an unrealistic expectation that we are always available. On top of that, we are constantly overwhelmed by the sheer number of notifications, which exhausts our willpower and degrades our focus.
Amber also talks about the ways we are trying to push our interactions with technology to be more human-like, for example with “natural gestures”. This can often backfire and create less intuitive interactions compared to unambiguous inputs, like the good old buttons.
In her book Calm Technology, Amber proposes that technology should integrate into our lives seamlessly and run quietly in the background, rather than being in your face.
“The reason the blockchain seems perfect is because it isn’t used enough yet.” – Amber Case
In This Episode of Future Thinkers Podcast
- How we design human-tech interfaces
- “The Internet of Shit”
- The internet culture of instant-response
- We’re in a pubescent phase with how we use technology
- Re-integrating community into our lives
- The reason we have fake news
- Advice for the notification-fatigued
Quotes
“I don’t want to see technology making decisions for us, because that means the people who write the software are making decisions for us.” – Amber Case
“I think the fundamental issue behind all of this is that we now have companies that are larger than entire countries. It’s a threat for them to say ‘hey, use our services less, and we’ll help you.’” – Amber Case
Mentions and Resources
- Amber Case’s website and Twitter
- Amber Case’s TED talk: We’re all cyborgs
- The Internet of Shit Twitter feed
- Portland PDX Hackerspace
- The difference between Chronos & Kairos
- Time Well Spent Movement
- Inbox When Ready Chrome plugin
- Aristotle’s Ecumene
- The Presentation of Self in Digital Life
Upcoming Events
- d10e Decentralization Conference in Kiev, Ukraine – Sep 16-19, 2017
- World Blockchain Forum in London, UK – Sep 24-26, 2017
- Future Thinkers Meetup in London, UK – Sep 27, 2017
- Future Thinkers Facebook page and group – best way to stay up to date about our in-person meetups
Recommended Books
- Calm Technology: Principles and Patterns for Non-Intrusive Design by Amber Case
- An Illustrated Dictionary of Cyborg Anthropology by Amber Case
- The Presentation of Self in Everyday Life by Erving Goffman
- Artificial Reality 2 by Myron K. Krueger
More From Future Thinkers
- Daniel Schmachtenberger on Neurohacking (FTP042)
- Dr. Jordan Peterson on Failed Utopias, Mapping the Mind, and Finding Meaning (FTP038)
- James Hughes on The Cyborg Buddha (FTP025)
- Human-AI Relationships (FTP004)