Amber Case 7:42 Hello everyone, can I ask you to please take your seats? 7:53 Hello everyone, can I ask you to please take your seats? 8:01 Thank you very much. 8:03 I am super stoked. 8:05 I saw just this little snippet of something about waiting for the future to load, and I'm like, Case, it's time to talk about the future again, and I'm really excited to see this talk. 8:20 I'll hand it over to you. Speaker B 8:21 Great, thank you so much. 8:24 I'm gonna wait for people to sit a little bit. 8:26 Hi, Skylar. 8:28 And if you want to sit in the front, you can also sit in the front. 8:32 It is safe. 8:33 I will not pop quiz you. 8:37 Let's wait 30 more seconds. 8:39 So if you want to get a cup of coffee or something, you can come through. 8:45 My name is Amber Case. 8:47 I go by Case. 8:48 I'm an early Bluesky adopter, and I am trained as a cyborg anthropologist, so I look at how technology affects culture and how culture affects technology. 9:02 And today I want to get slightly metacognitive and think about how we think about the future as humans, and what that actually means. 9:12 What is the future anyway? 9:14 So I like the idea of local minima. Who knows about local minima in here? 9:18 You can raise your hand. 9:19 Yay! 9:20 Great, good audience. 9:21 So my non-perfect explanation of a local minimum is: you are trying to find either the highest or the lowest point on a graph, but sometimes you get stuck in a local minimum. 9:35 I don't know if this thing works, but anyway, you can see that little shallow divot over there, and you can get stuck in the shallow divot, and because of all the other mountains around, you say, well, I'm at the lowest point, this is great. 9:48 And the only way to get out of that is to walk around. 9:51 And you can see the meme about it: just walk around, it's fine. 9:55 But most people might get stuck and say, I don't even know what it means to walk around.
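The shallow-divot picture is the standard local-minimum problem in optimization: a plain downhill walk settles in whichever basin it starts closest to, and random restarts are one way to "walk around." A toy sketch, with a landscape function made up purely for illustration:

```python
import random

random.seed(0)  # deterministic for the example

# A made-up bumpy 1-D landscape: a shallow divot near x = 1.4
# and a deeper valley near x = -1.4.
def landscape(x):
    return x**4 - 4 * x**2 + x

def gradient(x, h=1e-6):
    # Numerical derivative: which way is downhill?
    return (landscape(x + h) - landscape(x - h)) / (2 * h)

def descend(x, steps=2000, lr=0.01):
    # Plain gradient descent: always step downhill, so it settles
    # in whichever divot it started in.
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

# Starting on the right, descent gets stuck in the shallow divot...
stuck = descend(2.0)

# ...while "walking around" (random restarts) finds the deeper valley.
best = min((descend(random.uniform(-3, 3)) for _ in range(20)), key=landscape)
```

Descent from x = 2.0 reports the shallow divot as "the lowest point"; only the restarts discover the deeper one.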
10:00 And so in this talk, I want to give you some ideas on how to get out of the local minima of what we think about the future and what we think the future is, and learn how to walk around, or different methods of walking around, so that we can reach a different point. 10:17 Usually this is someone who says, you know, VR and AR are the future. 10:22 Where did they get that idea? 10:23 That's usually a narrative. 10:26 And just walking around can be the difference between trying to develop something for another $80 billion again or— yeah. 10:39 So one of my favorite people, I call him the local Big Bird of Portland, Oregon. 10:43 Ward Cunningham, he's one of my favorite friends, and he created the wiki. 10:50 He also has Cunningham's Law: the best way to get the right answer on the internet is not to ask a question, but to post the wrong answer. 11:00 But I also love this quote from James Baldwin: the purpose of art is to lay bare the questions that have been hidden by the answers. 11:10 And I think we have so many things that look like answers out there that it's incredibly hard to understand what the questions are. 11:19 And for most of technology, we can see that once something gets adopted, it becomes normal, it becomes invisible, we don't question or talk about it, it just is. 11:29 We do not have a massive set of headlines that say every person at Stanford wants to raise venture capital in order to make the new light switch. 11:39 It's a light switch. 11:40 We don't even think about it because it's just normal. 11:43 So many things in our everyday lives are so normal, because technically they're so well designed, that we don't have to think about them anymore. 11:51 And so my question to you, if you can do anything after this talk, is to look at the things in your environment that are so well designed you no longer notice them and ask: where did they come from? 12:04 Who invented them? 12:06 Why do they exist?
12:08 And then ask different questions. 12:09 Try to come up with new questions. 12:11 So there are a lot of false narratives in the way of thinking about what the next thing actually is. 12:20 And really, innovation comes from the side. 12:23 You might be driving in a car, looking out the front, saying, oh, what's next? 12:26 It's this future, it's this future. 12:28 Or looking behind at what you were told the future was supposed to be. 12:36 And you might not see it coming. 12:40 So I started to interview people who had been sideswiped by the future. 12:46 I was interviewing people that worked at Blockbuster, and they said, "Yay, Blockbuster, it was great." And then somebody said, "Well, we're going to do this new thing." You know, there's of course Redbox, and then there's Netflix. 12:58 And the majority of people said, well, we're going to keep working on this thing. 13:03 And there were a few people that said, we're going to digitize content and load it into the sky and then download it through people's machines. 13:10 And it wasn't until people could clearly see that future that they could adopt it and there was a pathway for them to get to it. 13:17 And there were just a few people on the edge that said, yeah, I can see this future. 13:21 It makes sense. 13:22 This is normal. 13:24 And then the future sideswiped a bunch of people. 13:31 I was trained at the Institute for the Future, which has at least 50 years of predicting the future. 13:38 First, you do not predict one future, you predict multiple futures. 13:42 Second, you use the past to understand the future. 13:45 And third, you look at cosmotechnics, which is the idea that every culture has a technology, not just Silicon Valley, and that technology is 10,000 to a million years old.
13:55 And it's site-specific, based on the geography, the culture, the history, the food, and we shouldn't narrow technology and the future down to the last 20 years; it is all of time, and it's extremely sophisticated. 14:10 From wind catcher towers in Persia 2,000 years ago that gave people air conditioning and ice cream, we can think of everything not as just this time right now, but all time. 14:24 And this is the two curves model. 14:26 So the first curve is like a signal that masks the future. 14:30 It's so hard to see the future because the signal is so weak at first. 14:34 And then the second curve comes up, and it is only when that first curve declines that the second curve's signal gets strong enough to be seen. 14:44 My dad was an audio engineer, so everything I think about is in terms of signal processing. 14:48 I'm sorry. 14:51 Then I started to look at different types of innovation. 14:54 So Art Fry, a banjo player who worked at 3M, was trying to make the strongest adhesive in history, and he failed tremendously. 15:03 He made this kind of weak, terrible adhesive that didn't work, and I think at one point his wife was like, hey, I just put these things on the choir notes and brought them to church, and everybody loves them because you can remove them and place them back on, and the sticky note was born: the Post-it note. 15:21 There's a different side that comes more from constraint. 15:24 There are a lot of books and interesting history on this, America's first paramedics, because people didn't have access. 15:31 And it's a really upsetting, tragic story. 15:34 And then we got this incredible service that people use all the time now, even though its price has been raised dramatically in the United States. 15:44 But there's this idea that when there is constraint and discrimination and lack of access, really interesting things can come about from the community, from the ground up.
15:56 It's not a top-down innovation space. 15:58 It's not a company saying the future is the metaverse and then not understanding anything. 16:05 I got to have dinner with this person. 16:07 I spoke at a conference with him, and I saw him eating alone, and I was like, can I eat with you? 16:12 He was a really curious person. His mother was one of the hidden figures who worked at NASA, but he had to figure out how he could get an education in a world that didn't support that, and he ended up working at Bell Labs as an intern, and then he had a full-time position at Bell Labs. And if you're just curious: the microphone that I'm using is using his technology. 16:41 He's a really, really curious person. 16:44 But he followed it. 16:46 And so a lot of really interesting inventions came forward not because someone was trying to make something, but because they were interested in a very small thing and asked an interesting question. 17:00 Also, if anyone knows why Mario is very small on the screen, it was because of a constraint. 17:08 Instead of pulling the camera away, which would have required more memory and processing power to render a larger portion of the screen, the inventor suggested shrinking the size of the character sprite. 17:21 And so the whole game could fit into 32K, about 40K total with the full cartridge data, code, and graphics. 17:30 Steve Wozniak, the co-founder of Apple, was up all night every night because he really wanted a personal computer. 17:35 And so he was constantly thinking: how could you fit the chips onto the motherboard in order to do this? 17:42 And he was working at HP at the time. 17:43 And of course the famous thing was he said, hey, we invented this small computer. 17:47 And his boss said, no one will want a small desktop computer. 17:51 And there's no way it can be that small. 17:53 And so Apple was born in the garage. 17:57 Also, laziness.
17:58 The first webcam was for coffee. 18:02 It was in Cambridge, England. 18:03 The idea was: we don't want to walk all the way to the coffee pot only to see that it's empty, so we're going to make a webcam. 18:12 And this was the first kind of wormhole to see another space while you were somewhere else. 18:16 And then of course it got put on the web in 1993 and became an artifact of the web. 18:23 So why are we as humans, not all humans, but many, many humans, especially in the West and especially in Silicon Valley, very obsessed with the future? 18:34 What is this idea of the future? 18:37 And I use this ostrich thing. 18:39 One part of it was standing on two feet. 18:42 When we started to stand on two feet, we had this huge foundational shift in evolution that preceded large brain development and really acted as a primary driver for neurological and cognitive advancement, because you suddenly had your hands free. 18:58 Everything became about using tools and working with systems over time, plus increased efficiency for long-range travel and better thermoregulation in open landscapes. 19:12 But this was a huge neurocomputational shift. 19:15 And we also started to look at the horizon at a fixed point. 19:20 There are still many cultures that sit down by default. 19:23 I work a lot in Japan, and we sit down when we think about things, and it's a very different way of thinking than standing up, or than sitting on an abstracted couch; it's a different way of thinking. 19:38 This is the Chickasaw tribe. 19:40 Of course, here's an elder telling a story to a bunch of kids. 19:43 But I was told by one of the co-creators of Calm Technology that he went to explore different ways of being and how information got passed down through generations. 19:57 And he was with a First Nations tribe and he noticed that all of the kids were not ready to listen. 20:03 And so the elder said, "Go swim across that river and come back."
So all the kids get into the river, they swim across to the shore, and they come back, and they're extremely exhausted. 20:13 And then they're sitting down and they're ready to listen to a story. 20:16 But the most important thing was: this is a PhD level of information being given in a narrative story. 20:25 It doesn't require a PhD or a research paper to be published. 20:29 It's, you know, how do you tell the pH of the soil? 20:32 Well, we've placed these indicator plants around so you can see when this is ready to plant. 20:36 And now all you have to do is go and observe and you get this information. 20:40 But you tell it in an interesting story, because our brains are really, really excited about narrative information. 20:47 Extremely excited. 20:49 So excited that, as the first keynote was saying, if it's in a nice narrative and you read about it with enough density that it looks like a thing, then we think we have the knowledge, until we're asked to actually do it or, you know, fix a toilet. 21:03 But the important part is that when it's given in this structure multigenerationally, you get all sorts of interesting information. 21:13 And sometimes narratives can really go crazy. 21:17 This is the railway mania from the 1840s to 1860s. 21:20 In my opinion, this is still recent history. 21:23 I don't think of history as starting 500 years ago; 500 years ago is like what the '80s are to me. 21:31 And then, you know, 1,000 years ago, 2,000 years ago, that's still kind of contemporary in a sense. 21:39 Like, you know, that'd be like the '50s. 21:41 Because our choice of how we experience time and understand what time is, is just a filter and a lens that we have. 21:49 If we take that aperture and change it, we can start to understand how much rhyme there is out there. 21:55 So this railway mania was hilarious and terrible.
21:58 It was basically like an AI bubble. 22:00 I think a fifth of people at that point in time, at least in England, had invested and had shares in the railway companies. 22:11 That was everyone going: you need to buy this, it's going to go up like crazy, it's like the new crypto, and we're going to pave the way, literally, to a future in which we're going to have all these amazing things. 22:22 The narrative was so exciting that everyone went crazy about it. 22:25 And then it crashed terribly. 22:28 But all of that money that went in, with the general public holding the bag, ended up actually making rail possible. 22:39 And so it was a really interesting thing. 22:41 We can see parallels with the dot-com boom. 22:43 We can see parallels today, where oftentimes the public will get hoodwinked into a narrative of the future that's so exciting and spicy that they will fund it. 22:52 It will crash, and then from the dying flames a phoenix will rise: the real thing. 22:58 So if we ask this question again, laying bare the questions that have been hidden by the answers, we can ask a question. 23:07 So the question here today is: where did the term artificial intelligence come from? 23:11 We like to talk about this a lot. 23:13 Does anyone know? 23:15 Yay, great, one person. 23:17 It's very rare that even one person knows. 23:21 When we use a term a lot, it doesn't mean that it's the real term, and it also doesn't mean that we know what it is. 23:29 And looking into the terms that we use every day is so important. 23:34 There's a longer history of this, but I'm going to give an abridged version. 23:37 This is Norbert Wiener. 23:38 He's considered the father of cybernetics. 23:41 The idea of cybernetics is the thing that really predates AI. 23:47 It's from the Greek kybernetes, meaning helmsperson. 23:52 Here's a person getting a signal and making a decision. 23:56 So it's a human and a machine.
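A cybernetic loop of this kind, a sensor reading, a threshold, a controller that sees the difference, and an action that feeds back into the system, can be sketched as a toy thermostat (an illustration of the pattern, not anything from the talk):

```python
def loop_step(temperature, setpoint):
    """One pass around a cybernetic loop: sense, compare, act."""
    error = setpoint - temperature       # sensor reading vs. threshold
    heater_on = error > 0.5              # controller sees the difference, decides
    # the action feeds back into the system being steered:
    return temperature + (1.0 if heater_on else -0.5)

# Steering, not "doing everything": the room drifts toward the
# setpoint, then hovers around it in dynamic equilibrium.
temp = 15.0
for _ in range(30):
    temp = loop_step(temp, setpoint=20.0)
```

After a few dozen steps the temperature hovers just below the setpoint rather than running away: a gradient of corrections, never an all-or-nothing jump.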
23:58 Underneath this ship there are a lot of automated processes. 24:01 There's tons of automation, but those automations are producing signals that are human-readable, and the human is making a decision and steering. 24:09 They're also trained. 24:11 So here we have the cybernetic loop. 24:12 Here's a sensor with a threshold. 24:15 There's a difference. 24:16 The controller sees it. 24:18 There's an action that, you know, feeds back into the system. 24:21 So over time, you can drive a car, which could kill anyone for no good reason, yet we are trained to stay within the lines, take in that information, and steer. 24:31 So it's about steering a system, not having the system do everything for you. 24:36 Knowing what things the system can do for you well as an automation, and understanding what to steer. 24:42 So some of it is making the invisible visible. 24:46 This thing is an automation, here's the information, here's how to make a decision. 24:50 Now, the problem is that Norbert Wiener apparently was a very talkative person. 24:56 He would go into every room and say, "Well, that's cybernetics." It was true. 25:00 And he wouldn't stop talking. 25:03 He just wouldn't stop talking. 25:05 And so John McCarthy, on the creation of the term AI, said: one reason for inventing the term AI was to escape association with cybernetics. I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him. 25:23 Alright, so wait: you're making a term so that Norbert Wiener will hate it so much that he won't show up to a meeting, so that you and your friends can show up and have a nice chat without this guy hogging all the airspace. 25:38 Okay. 25:41 1956. 25:42 They got Dartmouth to do a little summer conference.
25:45 Here are some people, you might recognize some names like Claude Shannon, Marvin Minsky, John McCarthy, and they said, we're gonna have this conference. Here's a plaque: this is the first use of the term artificial intelligence, at Dartmouth. And I was just thinking about the comment by the woman who took this picture; she said, look at how happy they are, they don't have to deal with this guy. 26:08 But, ha ha ha, we figured it out. 26:11 Unfortunately, words are aesthetics, and aesthetics are the realm of causality. 26:18 And by that I mean, when we say something like black hole, most people think it's a black hole. 26:24 And John Archibald Wheeler just said that offhand in the middle of a conference when he was sitting there, and it became this term. 26:31 And the issue is that the term can be separate from what the thing actually is, and we can drive a narrative that's more about science fiction than it is about reality, and distort things. 26:42 And this term got taken up by journalists and pop tech and marketing. 26:48 And there were all these headlines. 26:50 Will automation be a Frankenstein? 26:52 Now, Frankenstein is a really great book and you should definitely read it, but this kind of speculation uses narratives that we already understand to put the guesswork together. 27:03 AI is taking off. 27:05 IBM experts say electronic brains offer no threat of taking over the world. 27:09 Too much autonomy in national defense could bring chaos. 27:12 Yeah, definitely. 27:14 We have not experienced any of that at all recently. 27:18 Artificial intelligence moves into the mainstream in 1987. 27:22 Right? 27:23 Everybody was talking about this. 27:25 Now, why are these narratives freaking people out, especially today, where all your jobs are going to be taken? 27:32 They're always about all. 27:33 All things are going to happen. 27:35 That doesn't happen in nature. 27:36 You don't have a system where all things happen.
27:38 You have a system where a gradient amount of things happen, and then that gradient shifts over time and goes into dynamic equilibrium, because that's how things balance, ideally. 27:49 And if you have less of one variable, you have more of another variable to balance that equation. 27:55 The Yellowstone wolves and deer population story fits here. 28:00 But here is this book, which you might know as the film, which actually came out in 1900: The Wonderful Wizard of Oz. 28:07 So I had the original books from my grandmother, and I was reading them when I was a kid. 28:12 I didn't have access to television except for Star Trek. 28:17 And I love this book series, because in the one that came out in 1914, a person loses her private keys and can't get into her kingdom. 28:25 There is a robot called Tik-Tok. 28:28 T-I-K-T-O-K. 28:29 There is a magic book in which everything can be seen by the ruler of the country, the government, a surveillance book in which everything is recorded. 28:43 And the person wields this power very strangely. 28:45 She is the young girl ruler of Oz, a 14-year-old who is actually kind of mean and spies on everybody. 28:54 So there's a lot in here, but there's also an analogy: at this point in time, the United States was reeling between the industrial revolution and the agricultural revolution. 29:08 The Yellow Brick Road is the gold standard. 29:10 You have the Tin Woodman, the industrial man who wonders whether he has a heart as he goes into industry. 29:18 You have the Scarecrow, the farmer wondering if he is going to be automated or not, wondering whether he has a brain. 29:24 You have Dorothy, who's growing up in the middle of this as the ultimate Gen Z character, and you have the Cowardly Lion, the man who was president at the time, freaking out.
29:33 There's a lot of this tension between the Industrial Revolution and the first major use of human beings through a machine, especially in Manchester, England, where you look and you say, wow, these people have such a good set of science fiction. 29:52 When I was in Istanbul, Turkey, I met a Turkish science fiction author and I said, I don't find a lot of science fiction coming out of Turkey. 30:02 And he said, that's because we have so much history that it suffocates the future. 30:07 Whereas in Manchester, you're dealing with the fact that at any time you could be orphaned, forced and lashed to a cotton machine in a textile factory (textiles predate computing, by the way). 30:22 And you don't have any history. 30:24 It gets torn down, for no good reason, for an interdimensional highway. 30:28 And so all of this becomes this idea that when journalists talk about AI and automation, well, we've already experienced the automation of human beings and portioned that out to different countries and that sort of thing. 30:43 But now it's: what if our mental state goes that way too? 30:47 Because history isn't that long ago, and the 1900s and 1800s were not that long ago. 30:55 We still have this memory. 30:58 Most people have gone through something terrible in their history, not that long ago, and there's this feeling that this could happen again. 31:07 And so it just hearkens back to these experiences. 31:10 And so when a journalist says, "Everything will be taken over," well, that's what we feel. 31:17 And then of course we have the other issue, which is science fiction. 31:21 Science fiction is really fun to watch, especially if it's on a movie screen and everything is a big bright blue light. 31:28 And what does that actually mean? 31:29 It means that we think that this is what the future looks like.
31:34 But then we take one of these fancy smart washer-dryer machines home, and it beeps at us really loudly, has bright blue lights, won't let us go to sleep, has a software update on the Samsung washer that won't let us use it until we download the update, or puts ads on the side. 31:50 It's not a very good roommate, but it looks like what the future is supposed to look like. 31:55 It looks like what that narrative is. 31:57 And so instead, when Google first came out, people tried to scroll down the page, because they said, well, where is everything? 32:05 We're used to Yahoo with the little links. 32:08 And so one of the local minima that people can get trapped in is: where is your notion of the future coming from? 32:16 Is it actually from science fiction? 32:19 Or is it from what you're seeing? 32:22 Are you going to get sideswiped or not? 32:26 I think I have like 8 minutes, so I'm not going to go through everything, but here's the first chatbot, Eliza, which all chatbots are based on. 32:37 This was Joseph Weizenbaum. He modeled it on a Rogerian psychotherapist. 32:45 The kind of therapist that reflects the patient's words back to the patient, unlike Claude. And he said, "Oh yeah, you're saying that AI can take over the world? 32:56 Well, I will show you how dumb it is." And he made this chatbot as a joke. 33:02 "Hahaha, Rogerian method, here's the chatbot. 33:05 Please tell me what's been bothering you. 33:08 Tell me more about that." "Why, no." "Oh, I'm not sure I understand you fully." Then he noticed that his secretary was using this bot all the time, and he said, "You dumb secretary, don't you know it's fake?" And she said, "I know it's fake, but it's not judging me like a real therapist might." Here's Joseph Weizenbaum and his family. 33:28 No other organism, and certainly no computer, can be made to confront genuine human problems in genuine human terms.
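Eliza's reflection trick is shallow pattern matching plus pronoun swapping, with no understanding anywhere. A toy sketch of the idea (a tiny hypothetical rule set, not Weizenbaum's actual DOCTOR script):

```python
import re

# Swap first- and second-person words so the reply mirrors the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

# (pattern, response template) pairs in priority order, with a
# catch-all "Tell me more" at the bottom, just like Eliza's fallback.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)"),              "Tell me more about that."),
]

def reflect(fragment):
    # "my job" -> "your job", "i am" -> "you are", etc.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(statement):
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza("I am worried about my job"))
# → "How long have you been worried about your job?"
```

The whole "therapist" is a ranked list of regular expressions, which is exactly why Weizenbaum was so alarmed that people confided in it anyway.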
33:35 He became like the foremost anti-AI guy, but his chatbot had gotten out into the world, and, dun dun dun dun, now we have what we have now. 33:46 So we really live in this very intense era of interruptive technology. If we look back in time at how electricity was built, we don't experience electricity the way we experience WiFi. 34:01 If electricity were built like WiFi, it would be intermittent all of the time and there would be a rate limit: sorry, you can't plug this in here. 34:09 But we need a different thing. 34:11 We need a kind of calm technology. 34:13 And I will leave that exercise to the reader. 34:17 But my big question here is: why do people keep making the wrong thing? 34:22 Why do they keep making annoying technologies? 34:24 Why do they keep making the same thing again and again? 34:27 And I think one of the reasons is that the future is outdated. 34:31 The general future held by a lot of people with a lot of money is that they grew up on Popular Mechanics, Scientific American, flying cars, and they're trying to make something in the shape of the thing they had in childhood, or they're just trying to consolidate and make a lot of money. 34:49 So design is fundamentally governance. And what that means, when you give people a touchscreen... I work a lot with car manufacturers to try to figure out how we can put buttons back on cars. 35:04 Like this, and make them orange. So many car manufacturers saw the Tesla come out, saw that 2010s experience with the iPhone, and said, the future must be going flat. 35:15 We have to do it. 35:16 We're terrified, and it's cheaper. 35:18 But I interviewed people at leading car companies, and I said, does anyone on your team that makes the flat-screen touch technologies even have a car? 35:27 Do you even drive? 35:28 And it was usually a group of 25-year-olds who were like, no, we don't drive yet. 35:33 We use rideshares.
35:34 And, you know, it was terrifying, because you'd always talk to the people who made the physical stuff. 35:41 They said, oh well, I had this car in 1973 and I hand-built it like this. 35:45 And it was very grounded in mechanics. 35:48 And so that was terrifying, because you had the most important part of the car, arguably, the part where your hands have eyes too, 35:58 where you can experience the car over time and get better at it, turning into something that doesn't have any fixed state or position and is in a bright blue color from science fiction. Blue LEDs were once the most expensive LEDs at Radio Shack, and when they became cheap enough (there was a Nobel Prize awarded for that), we decided the color of the future was blue. 36:21 So I would love to encourage you to beware of times when telling stories about doing the real thing is more rewarded than actually doing the real thing. 36:35 And I think this is a particular era of that; with AI, LinkedIn is just full of it. 36:42 This is how you do a thing. 36:44 Okay, well, there's no content in there. 36:48 And then finally: if you get trained as a civil engineer... I went to a lot of engineering and architecture school as a kid, and my parents were broadcast engineers and audio engineers, so everything was about showing the Tacoma Narrows Bridge in 1940, Galloping Gertie, a bridge that failed due to wind-induced oscillation and fell with a bunch of cars on it. 37:12 I don't think anyone died, but in civil engineering class, the first video that you see, with a good teacher, is always this video, about the catastrophic thing that can happen when you don't do things right. 37:26 And it's about overbuilding the bridge to a very intense spec, because someone down the line of contractors and subcontractors will degrade whatever you've set, and it still needs to work within a specific tolerance.
37:42 And we do not have an artificial intelligence class that teaches what can happen catastrophically when things go wrong. 37:50 You don't even get to watch WarGames as your first thing. 37:54 So, would you drive over this bridge in a 24-ton vehicle? 37:58 Well, if you knew the bridge and you knew how many subcontractors had worked on it, you would know whether it probably was weight-rated to 24 tons. 38:06 But you still wouldn't drive over it, because these are the weight limits. 38:10 But would you run an organization on this? 38:13 "ChatGPT can make mistakes. 38:15 Consider checking important information." Without knowing the weight limit, it's very hard to know the weight limit. 38:20 You're not actually getting the same thing every time. 38:22 You're getting kind of a diluted version of the service if everybody's trying to finish their calculus homework at the time you're logging in. 38:32 So there's this book called Cognitive Security; well, really it's Attention and Self-Regulation. 38:37 It's a control theory approach to human behavior, but really, I'm very interested in cognitive security. 38:43 Where do you get your thoughts? 38:45 Where did they come from? 38:47 Is someone trying to monetize where that thing came from? 38:50 And what is actually the path? 38:52 What is that small signal coming up from the edge that's going to sideswipe the future? 38:57 How do you find that and sift through all the strong signal? 39:00 And yes, Douglas Engelbart, around the same time the term artificial intelligence came out, said: how about augmented intelligence? 39:10 It's a human alongside a machine. 39:13 And as Norbert Wiener would say, that's cybernetics. 39:17 So, one final thought. 39:22 The question is, what do most people have in common? 39:24 Engineers, doctors, lawyers, artists, writers, political scientists, international security people, anybody.
39:33 And the main thing is that most people in the past would have an apprenticeship; there would be a history. Especially in art: before somebody studies art history and joins advertising to sell people things based on art history, they get the history. 39:51 In international security, you study the Peloponnesian War as if it happened yesterday, as if it's contemporary, because it's all the same stuff. 39:59 But somehow, we decided that technology, and how it's taught, is separate from history. 40:05 We don't study the history of even the Atari or the PDP-11. 40:09 That's in the history museum if you want to go hang out. 40:12 It's a good history museum, in California. 40:14 But the idea is that we don't even think back to any sort of history when we teach this, or programming, or anything. 40:22 And some of the people who do go back in time, even Stewart Butterfield, a philosophy major who made Flickr and Slack, and also Game Neverending and Glitch (the original side-scrolling Flash-based one, not the Glitch of today), had an advantage: either a liberal arts education or stepping outside of that local minimum and asking what something was. 40:45 So I'd like to say that technology is not separate from history, and that a good tool is invisible. 40:54 By invisible, we mean that the tool does not intrude on your consciousness. 40:58 You focus on the task and not the tool. 41:02 So, if we have humans alongside nature, we should also have humans alongside technology. 41:08 Why does this nice Yamaha system still work, and this still work, after 4,000 years? 41:16 I just went to the Anthropology Museum, and the piece of technology in the exhibit next to the 4,000-year-old basket says: Sorry. 41:25 We are experiencing technical difficulties. 41:29 I think I'm probably out of time, but maybe I can show you a thing that I've been working on. 41:34 If you want to look at some history, I have this thing called the Calm Tech Institute.
41:40 We are the very first standards body that focuses on how much attention it takes to use a piece of technology. 41:46 We hate blue lights. 41:47 We love tactile interfaces. 41:49 And we're trying to make tech an enjoyable, delightful experience again, one that's much more like this, or this, or this. 42:02 So thank you so much. Amber Case 42:16 That was amazing. 42:17 Thank you, Case. 42:20 Next up, we have Mike Masnick, who is going to have a little chat with Alex Komoroske; you're both authors of the Resonant Computing Manifesto. 42:33 Come on up.