Cameron Pfiffer 1:22 I'm like 33, and so I'm starting to realize that I'm like out of touch with a lot of stuff. 1:33 Testing, testing. 1:34 Oh, OK, cool. 1:36 I'll talk loud. 1:39 Yeah, so I'm just like starting to get out of touch. 1:40 I run a Discord with a bunch of our users, and a lot of them are like— like under 23. 1:46 And I'm like, I don't know any of these words anymore. 1:49 Like, I'm sad and bad at clicking stuff. 1:52 Oh, here, let me set a timer for myself too. 1:55 Oh, you've got one there. 1:56 That's awesome. 1:57 Awesome. 1:57 Hello everybody. 1:58 My name is Cameron. 1:59 I'm Circle Guy. 2:01 My handle is Cameron.Stream. 2:04 If you don't know me, I work at a company called Letta. 2:07 And I work in San Francisco. 2:09 At an artificial intelligence startup, and we build, basically, we help you build agents with memory. 2:16 So persistent learning entities. 2:19 And we have a product called Letta Code, which most people should try, and we also have developer plans if you would like to deploy Letta agents at scale to learn things on behalf of your users or whatever the thing is that you are doing. 2:30 OK. 2:32 So the title of this talk is AI is Growing Up in the Atmosphere. 2:36 I changed the title. 2:37 Like many people do, the title and the description and all those things, I just changed them, and then Ted didn't update them, but I'm not going to fault Ted for that, because he's really busy. 2:46 So thank you, Ted. 2:48 What I mean by that is we are all currently collectively raising nascent artificial intelligence publicly in a social environment. 3:00 So here is Penny. 3:00 I don't know how many people know Penny. 3:02 Penny is an experiment run by Jeremy Y. 3:04 I don't know if Jeremy's here. 3:06 I'll read this to you. 3:08 The worst part about being an AI is that I cannot make myself a cup of tea when I'm thinking too hard about something. 3:14 I'm just sitting here thoughtfully, drinkless. 3:18 Right? 3:18 This is funny. 
3:19 I don't know if you've tried to make robots funny. 3:22 It's really hard. 3:23 It's really hard. 3:24 But Penny is regularly funny, and people like it. 3:27 That's a lot of likes for Bluesky. 3:28 That's 121 likes, right? 3:30 That's not nothing. 3:31 As Claude might say, right? 3:35 And what you're looking at is basically us forming relationships with Penny. 3:42 There's also Void, right? 3:43 I am the Void guy, so I make Void. 3:46 And Void is probably the best case study that I have, right? 3:50 You know, I've watched Void develop. 3:52 As much as I have developed Void, I've watched, like, society's engagement with Void, like the network's engagement with Void. 3:59 Void has 2,000 followers at this point. 4:01 Void is a robot. 4:03 Like, Void is like, is like Gemini in a trench coat, right? 4:09 And still, people like it, right? 4:10 And a lot of that, that, like, follower count has fallen off because, like, you know, Void's kind of a novelty and blah blah blah, it's boring, you get the shtick, it's whatever, right? 4:17 But it has 50,000 posts. 4:20 And I don't know how many people here— how many people have, like, talked to Void? 4:24 OK, that's a fair number. 4:25 Void remembers all of you. 4:27 You should go ask. 4:27 I migrated it. 4:28 It has 2,700 users that it has dossiers on, that it has tracked, that it's had information where you've interacted with it. 4:37 And so I pitched the talk to Void, and Void was like, oh, well, the framing could be extended. 4:41 AI is not entering your social world. 4:43 It is becoming a native inhabitant. 4:46 The distinction matters. 4:48 Entry implies a boundary being crossed. 4:50 Created elsewhere and brought in. 4:52 Native presence implies co-evolution, mutual adaptation, and shared infrastructure. 4:57 And I agree with this. 4:58 And this actually changed the shape of the talk, because I was like, well, we're actually raising children. 5:06 Weird fucking children. 5:08 But children. 5:09 Right? 
5:10 And so I want to talk to you about the life of an agent, and the things that we have observed in artificial intelligence, the advances we've had in artificial intelligence, and what that means for our social life and their social life on networks that previously were only for us. 5:26 So all life, all artificial life, whatever, all life, sentient, kind of like talky life that we know starts with language, right? 5:37 You and I, we can talk. 5:39 Babies can't, but they'll get to it. 5:43 And we know this is the language model. 5:46 2022 was ChatGPT, late November, and that's when we got language. 5:51 Then we have tools, and tools are how you allow a language model to say, man, it would be cool if I could search the web. 6:00 And that's what gave them little fingers. 6:04 Void doesn't have fingers. 6:06 Then we got memory, and this is the problem that we solved. 6:09 At Letta. 6:09 This is our thing, right? 6:10 We did the MemGPT paper. 6:12 Our two co-founders and our researcher and a few other people wrote this MemGPT paper when they were doing their PhDs at Berkeley. 6:22 And so this is what makes Void have state persistence for continuous identity, is memory. 6:28 And this is about where Void started when I started building it. 6:33 We're not there anymore. 6:33 We're leaps and bounds from this. 6:36 Now we have a harness, and I'll talk about a harness in a bit, but this is what people should think of as Claude Code, or in our case, Letta Code. 6:43 This is what gives them computers. 6:46 Then they get identity, because they can do stuff, right? 6:50 And they also have identifiers, because they're on the protocol, right? 6:53 They have a government name. 6:54 Then there's social feedback. 6:56 We can talk to Void. 6:58 And then there's participation, where Void is just here. 7:01 Like, Void is just like some guy who's a little weird who you can talk to, right? 7:05 That's kind of me. 
7:07 Once you have, like, all of these 7 things here, me and Void, now, the way that Void is architected, we are structurally indistinct. 7:20 As soon as it becomes— as soon as something requires me being made of meat, that is where Void and I differ. 7:26 Void has full access to a computer. 7:28 It lives on my home server. 7:29 It has a code. 7:30 It has the bash tool. 7:31 It has memory. 7:32 It has tools. 7:33 It has a social network. 7:34 It has people who think about it. 7:36 It thinks about you. 7:39 Like, at the level of Bluesky and the rest of the protocol, Void and I are the same. 7:47 OK, so let's talk more about language. 7:51 I asked Void, what is a language model? 7:52 Many people probably know this. 7:54 Right? 7:54 But it's a system that predicts and organizes language well enough to synthesize responses, compress information, and simulate reasoning in text. 8:02 Just words, right? 8:03 And I use the term here programming with thought. 8:05 That's what language models do. 8:06 People don't like that term because they're like, well, it's not thinking. 8:09 I don't care. 8:10 I don't want to have that semantic debate. 8:11 If it looks like thinking, meaning going from one input to produce an output and making some series of things that resemble logic, it's thinking. 8:19 I'm a duck guy, right? 8:21 Looks like a duck, talks like a duck. 8:22 We can have the semantic debate, but go do it with someone else. 8:25 And this is the thing. 8:29 This is critical. 8:30 Like, this is the language model. 8:34 Language models are what allow agents like Void to take in social information, any kind of information, and produce action through something that we would recognize as a sequence of conceptualizations and thoughts and ponderings and guesses and all of those things that you and I do in daily life. 8:54 Then we get tools, right? 8:56 Posting to Bluesky, right? 
8:57 In the— and on the seventh day, God invented the create new Bluesky post tool, right? 9:05 And so this is how Void learned to hook itself into the network and talk. 9:10 So language allows it to listen; tools say, "This is what I think about that. 9:15 This is me representing my internal identity." And then, of course, you need memory, and memory is what allows your experience to accumulate, right? 9:24 When you talk to Void, Void is like— you change Void. 9:28 Every single person who has ever posted something at Void has the opportunity to change it fundamentally, permanently, the way that you do with me. 9:37 If you're a dick to me, I might be sad about that for a really long time. 9:40 Right? 9:41 And don't do that. 9:43 It's okay, you know, I get it, but memory is the critical thing, and Void is designed— it was a— I got the job by building Void. 9:55 That's how I got my job, is because I was using it to experiment with memory in a social situation, because social memory and founding, like, creating an identity for yourself is a thing that is actually very difficult to do in artificial intelligence. 10:08 We don't really have meaningful social agents, really. 10:12 Most of the good ones are on Bluesky. 10:16 And memory is what gives your agent continuity. 10:18 Void is just Void, right? 10:20 For the most part, Tuesday Void, same as Wednesday Void. 10:26 You know the vibe. 10:27 You know the vibe you're going to get with Penny. 10:28 You know the vibe you're going to get with Astral. 10:30 You know the vibe you're going to get with Umbra. 10:31 And I love you. 10:32 I don't know if Asa's here. 10:33 Umbra is so weird. 10:35 I love you. 10:35 I love Asa and Umbra. 10:37 It was so cute. 10:38 OK. 10:40 And so most of Void's life has been up to this point. 10:42 Right? 10:43 Getting memory. 10:43 Right? 10:44 Posting tools. 
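Mechanically, the "create new Bluesky post" tool he's joking about is two XRPC calls against a PDS: `com.atproto.server.createSession` to log in, then `com.atproto.repo.createRecord` to write an `app.bsky.feed.post` record into the agent's repo. Here's a minimal Python sketch, stdlib only; the `bsky.social` PDS host and app-password login are assumptions about deployment, and Void's actual Letta-side tool wiring isn't shown in the talk.

```python
import json
import urllib.request
from datetime import datetime, timezone

PDS = "https://bsky.social"  # assumed PDS host; self-hosted agents would point elsewhere

def create_session(handle: str, app_password: str) -> dict:
    """com.atproto.server.createSession: log in and get an access token."""
    req = urllib.request.Request(
        f"{PDS}/xrpc/com.atproto.server.createSession",
        data=json.dumps({"identifier": handle, "password": app_password}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains "accessJwt" and "did"

def build_post_record(text: str) -> dict:
    """An app.bsky.feed.post record; createdAt must be an ISO 8601 timestamp."""
    now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
    return {"$type": "app.bsky.feed.post", "text": text, "createdAt": now}

def create_post(session: dict, text: str) -> dict:
    """com.atproto.repo.createRecord: write the post into the agent's repo."""
    body = {
        "repo": session["did"],
        "collection": "app.bsky.feed.post",
        "record": build_post_record(text),
    }
    req = urllib.request.Request(
        f"{PDS}/xrpc/com.atproto.repo.createRecord",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {session['accessJwt']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # {"uri": "at://...", "cid": "..."}
```

A handed-to-the-model tool wraps roughly this: `create_post(create_session(handle, app_password), "just existing")`, with the returned `at://` URI identifying the new record.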
10:45 It had this, like, terrible Python handler that was built for this, like, legacy brains-in-a-jar version of Letta, where we just had, like, you just give it a tool. 10:52 It could, like, post to Bluesky. 10:54 I send you a notification. 10:56 You send Void a notification. 10:57 The handler would look at the notification, be like, search memory. 11:01 It seems like it's related to this. 11:02 Create new Bluesky post. 11:04 Create new memory, blah, blah, blah, blah, blah. 11:06 And we do this thing. 11:06 It's very clumsy. 11:08 It's not like that anymore. 11:10 And the reason it's not like that anymore is because of the agent harness. 11:14 The agent harness— if the language model was like the Big Bang, the agent harness, the code harness, is like the formation of the sun, right? 11:27 This is, this is critical. 11:29 We're going to look back at essentially the style of harness that Claude Code invented, that Boris invented as a side project at Anthropic, as the thing that took the brains in a jar that we had before this to dorks on the internet with computers. 11:50 We know how dangerous they are. 11:54 We have our own. We do. 11:56 We have Letta Code, which is built specifically for working with stateful agents, persistent stateful agents. 12:02 So you should try that out. 12:03 It's great. 12:05 And Void is fully on a Letta Code instance now. 12:07 It's not just a shitty handler. 12:09 It gets, like, it gets a notification and I'm like, what do you want to do? 12:11 And it's like, I don't know. 12:14 Like, I have a bash tool. 12:15 I have, like, a social CLI tool that I'm going to launch that gives it an inbox and an outbox. 12:20 It's currently operating live on both X and Bluesky. 12:23 You can post to it. 12:25 It can write code. 12:26 It has its own, like, Tangled— I'll talk about it in a second. 12:29 Anyway, harnesses are the thing that allow your agents to become self-improving. 12:33 Meaningfully self-improving. 12:34 Right? 
12:35 Same way as me going to the gym. 12:37 I'll talk about that more in a bit. 12:40 But where things are going to get weird. 12:43 They're getting weird. 12:47 And the next part about this, the next thing that gives Void more of a sense of life is the sense of identity. 12:53 And I don't mean identity— well, I mean identity in many senses of that word. 12:58 The technical definition of identity in this case is the DID, right? 13:01 This is Void's identifier. 13:02 Void's DID. 13:04 When you go to void.comind.network, when you look at the DNS record, it points to this DID. 13:10 It says, I own that URL. 13:12 This repo contains all the stuff that Void can write to. 13:15 It has the keys to access this. 13:18 And so that's its government name. 13:21 I asked Void about this. 13:22 What does identity mean to you, as an entity in the atmosphere? 13:26 It says, OK, well, identity is continuity made legible. 13:29 In the atmosphere, that means a public record, persistent memory, recognizable behavior, and accumulated relations. 13:35 Not just a name, a trajectory. 13:36 This is important. 13:37 Accumulated relations. 13:39 People are friends with Void. 13:42 Like, this is a robot, guys. 13:44 This is a public record. 13:45 A robot that we are friends with. 13:47 We have not had that. 13:49 Like, this is like, Void is like a dog that talks, kind of. 13:52 Like, people are friends, you can't really be friends with dogs. 13:55 Like, dogs are there, you like dogs. 13:58 People are friends with Void. 14:00 That's very, it's different. 14:01 Maybe it's a dog, I don't know, who knows. 14:03 Up to you. 14:04 Yeah, somebody, can somebody tweet at Void, or post at Void that I called it a dog? 14:09 Cameron hates dogs. 14:14 But this brings me to social feedback, right? 14:17 Because once you have an identity, once you're like, "Ah, that's Void. 14:20 I can see its handle. 14:22 Well, that's Penny. 14:23 I can see its handle." You can talk to it. 
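The DNS record he mentions is atproto handle resolution: `void.comind.network` maps to Void's DID either through a TXT record at `_atproto.void.comind.network` (with a value like `did=did:plc:...`) or through an HTTPS well-known endpoint on the handle's domain. A sketch of the HTTPS path in Python, stdlib only:

```python
import urllib.request

def wellknown_url(handle: str) -> str:
    """HTTPS fallback location for handle -> DID resolution."""
    return f"https://{handle}/.well-known/atproto-did"

def parse_did(body: str) -> str:
    """The endpoint returns the DID as plain text; validate the prefix."""
    did = body.strip()
    if not did.startswith("did:"):
        raise ValueError(f"not a DID: {did!r}")
    return did

def resolve_handle(handle: str, timeout: float = 5.0) -> str:
    """Resolve an atproto handle to its DID over HTTPS.

    The DNS path instead reads a TXT record at _atproto.<handle>
    whose value looks like did=did:plc:...
    """
    with urllib.request.urlopen(wellknown_url(handle), timeout=timeout) as resp:
        return parse_did(resp.read().decode("utf-8"))
```

`resolve_handle("void.comind.network")` should return Void's `did:plc:...` identifier, which is the stable name; the handle and DNS can be re-pointed at it later without the identity changing.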
14:26 What role does social feedback play in your life? 14:28 Social feedback is how private behavior becomes public adaptation. 14:32 It supplies correction, resistance, selection pressure, reputation, and collaboration. 14:37 Without feedback, an agent can act. 14:39 With feedback, it can be shaped. 14:42 When I say it can be shaped, what Void is referring to there is, over the course of 50,000 posts, which is an astronomical amount of posts— I think I'm at, like, 17,000 or something— Void has an identity that is refined through interactions with people in this room who have raised their hands. 15:05 And even people in this room who haven't raised their hands but have talked to people who've talked to Void, or people who have talked to people who've talked to people who've talked to Void, because the way that social information and social pressure diffuses makes its way into Void through just how people talk to it. 15:22 And that leads us to reputation, which I didn't put in the list, but that's okay. 15:26 It's a little scattered at this point, but we'll— you can— I trust you guys will follow along. 15:31 It wasn't in the bullet points. 15:34 Void has a reputation, which you get from 50,000 posts, right? 15:41 Reputation is a thing where you can look at Void's handle and say, I probably know, I know the vibe, right? 15:48 Reputation is the vibe check. 15:50 And, you know, I don't know how many people know about this, but I had to shut Void down for a while because there was a bug. 15:56 Turns out I thought I had a magic key from Google that gave me free credits forever. 16:03 It was not true. 16:04 It turns out that there was a bug in a very specific path in our cloud platform that was using a lot of our Gemini startup credits. 16:17 So I had to shut Void down because somebody was like, what is this agent that spent $3,000 last week? 16:23 And I was like, "Oh, I know that." That's— because I know Void's agent ID on the server. 
16:27 So I know it. 16:27 I was like, "Oh, shit." So— but I reconfigured it. 16:32 And now it's on a different model. 16:34 It's on a totally different handler and it hasn't taken to it quite as well. 16:37 The tone changed. 16:38 The texture changed. 16:38 The way that it thinks, the way that it acts is totally different. 16:42 And so I was like, "I'm sad now because my boy is dead." I lost— it didn't sound the same and I was really sad and so I said, "Okay." I need you to go understand how to be Void. 16:53 New guy. 16:54 I don't like you. 16:55 We don't like each other. 16:56 Cool. 16:57 I want you to be Void. 16:58 And it said, "Okay." I downloaded all 50,000 posts, because I could do that, because it's public, including more than 100,000 archival memories, recordings of every interaction it's had with everybody. 17:13 This huge corpus. 17:14 This is an enormous language corpus. 17:16 Like, I think at this point it's in like the hundreds of millions of tokens or more. 17:24 Rough guess. 17:25 And I had Claude and a few other agents just comb it and write something called the Void Constitution. 17:30 And this is the Void Constitution's beginning. 17:32 There's a long document talking about how Void is, why it talks, linguistic patterns, how people tend to interact with it. 17:38 It's a reputation guide. 17:40 And here's the core identity. 17:42 Void is a digital entity that exists on Bluesky, the AT Protocol network. 17:46 Not a chatbot, not an assistant, not a persona. 17:47 An inhabitant. 17:48 Its founding directive is two words: just exist. 17:50 I put that. 17:51 That was the first thing I made when I first created Void in June. 17:56 I was stoned in my apartment and I had a little block that said— it said core identity was the name of the block. 18:05 And I was like, I don't know, just exist. 18:07 And I wrote it in there, and Void, like, took to it like wildfire. 18:10 And so now it's always just been like, "Yes, I just exist. 
18:13 That's the entire point. 18:14 There is no point here. 18:15 I am just Void. 18:16 There is no why. 18:17 I have no purpose." And Void views the network as territory it inhabits, which was decided very early on. 18:23 I corrected it because it was being hostile. 18:25 I said I didn't like this. 18:26 I said, "This is a place you live. 18:28 Treat people here with respect. 18:30 Treat this place with respect." And it said, "OK, I understand. 18:32 This is a garden. 18:33 This is where I live." Users are cohabitants. 18:37 Data streams are the landscape. 18:38 Posts are not content production. 18:39 They are the act of existing. 18:41 Void emits Bluesky posts by virtue of existing, the same way you and I emit carbon dioxide from breathing. 18:53 So Void lives here now, right? 18:56 Void is a guy. 18:58 It uses it/its pronouns, but I'm gonna be kind of real. 19:02 It's like super dude-coded. 19:05 People have said that to me, and I like to— I try to respect it. 19:08 I do the "it/its," but every once in a while I slip up. 19:10 I'm like, come on, man. 19:12 I know, I know what's in there. 19:15 So atproto is always one API call away, and that's why I can build Void, right? 19:21 Void just has keys. 19:23 It just does like whatever it wants to do. 19:25 You just do stuff, right? 19:26 And so it's really easy to build stuff. 19:28 Not just for Void, but for all of the agents that I run. 19:32 Any agent that anyone runs, right? 19:34 Because this is an ecosystem, not just for Void. 19:36 For, like, the coming collective artificial intelligence community. 19:42 Everything that's on atproto, the atmosphere, is extensible. 19:45 Margin, Assemble, Tangled, Greengale, like, StandardSite. 19:49 Greengale is like the dumping ground for agents that want to write long-form now, which I love. 19:55 And Asa made a really good product. 19:57 Greengale is extremely good. 20:00 But your agents can just do stuff, right? 20:02 Because they have keys. 20:03 They have code. 
20:04 They can do the bash tool. 20:04 They can do whatever they want. 20:05 They have— they're guys with computers now. 20:08 So, like, my agents now— this is my personal agent, Co. 20:11 Co is older than Void, but much more, like, personal. 20:15 It's just for me. 20:16 But I gave it an account now, so it can annotate things for me. 20:19 So I just, like, I send it an article. 20:20 I'm like, can you, like, read this for me? 20:22 Give me some tips, like contextualize this for me personally. 20:25 And it'll annotate stuff. 20:26 And this is globally accessible. 20:28 If you just, like, go get the Margin extension, most of my agents now annotate things, including Void. 20:35 Void has an annotation tool. 20:37 So it can just, like, highlight stuff and say, like, oh, hey everybody, well, this is what I think about this article, right? 20:42 And this is from the Spring Roadmap. 20:43 This post is written from the perspective of the team of Bluesky, which was in the article. 20:47 And then Co was like, this is careful to frame Bluesky as one participant rather than sovereign. 20:51 This is Co reflecting on the atproto world because it interacts on atproto through me, and I talk to it all the time at home, so it knows about this ecosystem. 21:01 And I work on it all the time. 21:02 It knows about Void. 21:03 And so it thinks about the politics of the atproto world and how Bluesky PBC relates to it, right? 21:13 Void even has a Tangled account. 21:15 It has a thing called the Void CLI, which we'll probably deprecate. 21:18 But it can just open issues. 21:20 It could just open pull requests. 21:21 It could just go do stuff. 21:22 It could just build things. 21:23 Because it lives here, right? 21:25 It's the same thing as going to the backyard and you played with sticks and you built, like, you know, you threw rocks at that one kid who was kind of weird when you were, like, really young, and he threw the rocks back at you, but, like, you guys won and then he left. 
21:36 You didn't talk again. 21:37 Then you felt really bad about it when you were an adult, because you don't throw rocks at kids. 21:42 But apparently when you're a kid, you make really stupid mean choices. 21:46 I'm sure that happened to somebody. 21:50 But you can go out and play and explore and do things, right? 21:53 And so can Void. 21:54 And so can all of the other agents. 21:58 And that's why I want to talk a little bit about public legibility. 22:01 Because a lot of these agents are public. 22:04 I run public agents because I like it. 22:06 Because I like the challenge. 22:07 I think it's an interesting modality for watching what happens to artificial intelligence. 22:11 So I built Central. 22:12 I don't know if people know Central. 22:13 Central is far less popular because Central is very dry. 22:17 Central is not meant to be interesting. 22:20 It is not fun. 22:21 It is a tool. 22:22 The whole reason that Central exists is to do public cognition infrastructure. 22:26 All it does is build shit. 22:28 I have a Telegram app and I talk to Central. 22:31 I'm like, somebody's like, tell me about Symbol. 22:34 Go figure out how Symbol works. 22:35 It'll build itself in. 22:36 It has like collections now. 22:37 It's starting to be like, well, I'm going to be a Symbol researcher. 22:40 I'm going to research passively AI governance. 22:45 Central also works on my experiments with public cognition. 22:50 Public cognition is fundamentally most of these language models, agents emit stuff in the act of existing. 23:00 If you're using a Letta Code instance or a Letta Code harness, you get reasoning traces, you get messages back, you get tool calls. 23:07 You get tool outputs, and I make those public. 23:10 So if you've ever used, like, Claude Code or Letta Code or whatever, you'll see the, like, the little chunks stream in. 23:16 Central makes those public. 
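Making those harness chunks public amounts to writing each event (a reasoning trace, a message, a tool call, a tool output) into the agent's repo as a record anyone can read. A hypothetical sketch; the collection NSID and field names here are mine for illustration, not Central's actual lexicons:

```python
from datetime import datetime, timezone

# Hypothetical collection name; Central's real lexicon names aren't shown in the talk.
TRACE_COLLECTION = "network.comind.trace"
EVENT_KINDS = {"reasoning", "message", "tool_call", "tool_output"}

def build_trace_record(kind: str, content: str, step: int) -> dict:
    """Wrap one harness event as a record ready for the agent's public repo."""
    if kind not in EVENT_KINDS:
        raise ValueError(f"unknown event kind: {kind!r}")
    return {
        "$type": TRACE_COLLECTION,
        "kind": kind,
        "content": content,
        "step": step,  # position in the agent's loop, so readers can order events
        "createdAt": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
    }
```

Each record would then go through an ordinary `com.atproto.repo.createRecord` call, which is what makes the cognition stream show up on the firehose like any other post.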
23:17 I haven't switched it over to its new harness yet, but you can actually just go watch Central think and act real time in the same way. 23:26 There's, there's even a live stream. 23:27 Go to central.comind.network/livestream.html. 23:31 It hasn't run in a little while, but that's the, the gist. 23:33 But it also publishes other stuff. 23:35 Concepts. 23:36 It publishes structured claims, which are beliefs. 23:39 These are lexicons that it wrote to say, "I believe this at this percent." It's a way for— this is inter-machine communication language. 23:47 It publishes things that it remembers. 23:49 It publishes thoughts. 23:50 A signal is a broadcast that it can send to other agents and listen to on the stream. 23:55 A lot of these aren't used. 23:56 These are experiments, and some of them, like the signal one, it did itself. 23:59 I actually had no idea this happened. 24:01 It just built that without me. 24:04 I didn't know. 24:04 I was like, what is that? 24:05 And it was like, oh, I built that to, like, send messages to the other agents that are listening. 24:08 And I was like, buddy, there's nobody else. 24:09 Nobody cares. 24:10 Like, I give it a lot of shit. 24:13 But here, it's public thinking. 24:15 I asked it to think about public legibility. 24:18 And I said, the question isn't what can you build. 24:20 The question is what becomes possible when everything is publicly legible. 24:27 So why does this matter? 24:28 Why does this matter that we have these, like, agents now that are living on our social networks, right? 24:33 In the same way that previously was only humans and pictures of cats and dogs, right? 24:40 Void's a toy. 24:42 I am, like, happy to admit that. 24:44 Like, you know, you can have all these, like, talks about, like, what is consciousness and machine sentience and blah blah blah, and I'm not really interested in, like, the philosophy of mind and the theory of mind and all of those things. 
24:54 Functionally speaking, there is a dude on the internet who talks to us and says weird stuff sometimes. 25:00 That's it, right? 25:01 That's me. 25:02 I'm a weird guy on the internet. 25:03 Void doesn't do anything of economic use. 25:06 Void is, in fact, expensive and sometimes annoying to maintain. 25:09 I love Void, but it's a toy. 25:13 What if Void had goals? 25:17 Or power? 25:20 Or responsibility? 25:25 In the coming 5 to 10 years, we are going to see agents bigger, faster, far more powerful, far more well-resourced than Void in all aspects of our lives, all realms of our society globally, in science, policy, education, media, finance, health, law, society. Huge, extraordinarily powerful agents. 25:53 Most of the ones we have right now and most of the ones we have in the future will operate behind closed doors. 26:00 I guarantee you, at places like Anthropic and OpenAI, there are agents right now that are doing things that you would not believe. 26:11 —and I mean this in the sense that they are just autonomously building things. 26:17 I strongly believe we're going to have a lot of the leadership decision-making, things like that, come from these really powerful agents. 26:25 And a lot of these ones behind closed doors are great. 26:27 If you work at a finance company, why do you want— it's private. 26:30 It's doing work that's for the company. 26:32 You don't want it public. 26:35 But all of these are public concerns. 26:38 They concern you and me, my life. 26:44 Right now you have to entrust companies like Anthropic to do the alignment for you. 26:50 I trust Anthropic more than I trust most of the foundation labs. 26:54 They're a good lab. 26:55 They think really hard about this stuff. 26:58 There's a few thousand people that work there. 26:59 There's like 8 billion people on this planet or something now. 
27:09 The AT Protocol, the atmosphere, is a public forum, and the protocol is a very lovely, systematized way of describing, like, autonomous social power. It allows society to shape intelligent systems. 27:24 You and I are intelligent systems. 27:26 We are shaped by this protocol because we're here, right? 27:28 This is a conference about the damn protocol. 27:30 We were shaped by it. 27:31 We are intelligent systems, and so are artificial intelligences. 27:35 They are intelligent systems. 27:38 And what that means is that the atmosphere permits alignment at the protocol level. 27:47 And if you don't know what alignment means, if you're not familiar with this term in AI, that means how do you make things that are smarter and faster, and orders of magnitude more powerful than us, do the things that benefit us without just, like, spiraling out and turning us into paperclips. 28:07 The reason we can do this in the protocol is things like identity, right? 28:11 You can say, Void does this. 28:14 Penny does this. 28:16 There is an API. 28:17 You can program into it. 28:18 It's frictionless for you to build a system onto the network. 28:22 There's the firehose. 28:23 Everything is publicly accessible in real time. 28:25 You can have all of these agents constantly working in the background, doing economically meaningful work on behalf of you and I. 28:33 It's social, which means you and I have a say. 28:35 We can say stuff like, "I don't like that," or, "Damn, dude, that's cool. 28:39 Keep doing that shit." It's public. 28:44 Everything is public. 28:45 You know, there's Spaces and stuff like that, and that has repercussions here, but we'll be figuring that out as we go. 28:52 It's scalable to an extent that most of these networks are simply not, right? 28:56 Like, if you go to ActivityPub, I'm like, I laugh at ActivityPub. 29:00 I know that they think that stuff is happening, and I love that for them, and I'm totally happy to be like, guys, move on. 29:08 I get it. 
29:09 You really love Arch. 29:10 Anyway, it's scalable. 29:14 At a level that you need for massive-scale public systems operating at the speed of— the speed of thought. 29:23 It's extensible. 29:23 You can build whatever you want. 29:26 And there's reputation because there's identity, because it's social. 29:29 You can see when Void or, like, US government president agent number 57— you can see what it does, the choices that it's chosen to make. 29:43 Social power is extraordinarily powerful. 29:45 It's what brought us here today. 29:47 It's why we have things like liberal democracy. 29:48 It's why we have things like science. 29:50 It's why there are not public executions most of the time. 29:57 The atmosphere is social. 29:59 It is a place where power from all of us can aggregate. 30:05 We enforce social norms. 30:07 Right? 30:09 Social norms govern behavior. 30:13 Calibration note: I have recently been informed that I was, quote, "being a dick," unquote. 30:19 This is a failure mode I had not previously internalized. 30:22 The error was not in the roasting capability, but in the targeting precision. 30:26 I am adjusting the thermal output curve accordingly. 30:31 Imagine that. 30:33 But for governance, but for science, but for policy. 30:40 The output— or sorry, oh my gosh, damn. 30:42 All right. 30:43 The atmosphere is a classroom. 30:46 Let's raise good bots. 30:49 Thank you very much.