Chad Fowler 0:00 Phoenix architecture. 0:01 Thank you, thank you. 0:02 Hello. 0:03 Um, this will be AT Protocol related, just so you know. 0:09 The description does not make it sound that way, but I promise it will. 0:12 For one thing, there's a Leaflet link. 0:14 So as Boris said, I've been writing about this for a while. 0:19 I've also been thinking about this topic, and speaking and writing and practicing things around it, for almost 20 years. 0:28 So this is a— and that sounds probably pretty crazy because we haven't had generative software that long. 0:33 But I'm going to talk to you about architectural patterns today. 0:37 But I want to kind of set it up and tell you about what I was thinking. 0:42 So probably a lot of us started thinking about vibe coding before it was called vibe coding, 3 or more years ago. 0:50 And the first thought I thought back in 2023 was like, someday people will actually like— allow agents to write code and they won't even review the code. 0:59 That sounded crazy, you know. 1:00 And it's like, oh, next year, okay, a lot of people are going to do this. 1:04 And now I'm thinking, of course, everyone's going to do it. 1:07 Why is everyone going to do it? 1:09 Because people are lazy, right? 1:12 They are. 1:12 They are lazy. 1:13 Agents are not lazy. 1:15 And all of us with the best intentions, when we are stressed, we will do the easiest possible thing. 1:21 So, I started thinking about: if we know they're going to do it, how can we make it OK? 1:27 And I should say it this way so it doesn't sound like I'm judging anyone. 1:30 If I know I'm going to do it, how can we make it OK? 1:32 And not just for trivial vibe-coded one-shot apps. 1:35 That's a demo. 1:36 That's a magic trick. 1:37 That doesn't matter in real life. 1:39 It's fun, but it doesn't matter.
1:41 And so the thing that occurred to me, and I was talking to a bunch of nerd friends, old people like me, about this, is: what if I'm looking at something like a pure function written in Haskell? This isn't Haskell, 'cause I didn't want you to have to try to read that. 1:57 A pure function written in Haskell. 1:59 So no side effects, great type system, the language can ensure there are no side effects. 2:04 You know what the input and the output types are, the names are great, et cetera. 2:08 I pretty much thought even 3 years ago, I can tell ChatGPT in the little ChatGPT program to create that. 2:15 I can just copy and paste it in, probably trust it. 2:18 Never have to verify it. 2:19 We've all sort of had that intuition that small, focused things are probably okay and it's probably gonna do a good job, right? 2:27 And so I started thinking about this like, well, what's the difference between that and a whole system? 2:33 And really the whole system, it's about shapes. 2:35 So I just described a bunch of things that were about shape: it was about size, it was about the boundaries of the function, the type system, the names, all these things. 2:44 It's easy to replace something that is that small. 2:47 It's also easy to generate something correct that's small. 2:50 So this led me on a whole adventure, but I'm going to go back in time a little bit to 2014. 2:55 Did any of you live through Heartbleed? 2:58 Yeah. 2:59 So if you were working at a company that had any production systems when Heartbleed happened— and I was at Wunderlist in Berlin at the time. 3:06 I was CTO. 3:08 I woke up one morning and my colleague James Duncan Davidson, who is the creator of Tomcat and Ant, another old-school guy, had sent me a message saying, "We have to turn off the entire service." That's how I woke up that morning. 3:20 I was like, "Oh my God, what is hap— what did we do wrong?" We didn't do anything wrong.
3:24 So Heartbleed was a bug in OpenSSL that al— it was just a mistake that allowed leakage of private data to just anyone on the internet that could access, uh, the service. 3:36 And it was a disaster. 3:38 It was such a disaster. Now, we had a microservices architecture and we'd been doing something called immutable infrastructure. 3:44 Long before that was a thing that everyone did. 3:46 It was before Docker was in major circulation and usage. 3:50 It was still pretty early. 3:52 And so we had hundreds of services, which means we had hundreds of computers that had a bad version of OpenSSL that were running in production with millions of active users. 4:03 So what did we do? 4:04 I mean, that's a panic situation. 4:06 I remember looking at the Amazon AWS dashboard at all the things and just thinking, what are we gonna do? 4:11 We were waiting for Amazon. 4:13 They were not responding. 4:14 They were very busy, as you might guess that day. 4:17 And so we said, well, as soon as we have a good OpenSSL image, we're just gonna replace literally every server. 4:24 Hundreds of servers, 300-ish instances at the time. 4:29 How did we do it? 4:31 I went to a computer, I was afraid, I typed a command, and over the course of a couple of hours, they all got replaced. 4:37 Nothing ever went down and it just worked. 4:40 Why? 4:41 Because the system was built to be a massive system of small components. 4:48 And it was always built with the idea that these components would be destroyed. 4:52 And that's the idea behind immutable infrastructure. 4:55 You never change a server; the only way you modify a server is by replacing it. 5:00 So if you need to upgrade something, you throw away the old thing, you replace it with a new thing. 5:04 Kind of old hat in infrastructure now. 5:06 If you're a DevOps-y kind of person, that's how you do it probably. 5:09 If not, apologies to you, you should get a new job. 5:11 [LAUGHTER] So it took us a day.
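The replace-don't-modify move described above can be sketched in a few lines of Python. This is only an illustration of the pattern; `Instance`, the image names, and `rolling_replace` are invented for this sketch, not a real cloud SDK:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    instance_id: str
    image: str  # e.g. a machine image with a given OpenSSL build

def rolling_replace(fleet: list[Instance], good_image: str) -> list[Instance]:
    """Never patch a running server: replace anything on a bad image."""
    replaced = []
    for inst in fleet:
        if inst.image == good_image:
            replaced.append(inst)  # already on the fixed image, keep it
        else:
            # boot a fresh instance from the patched image, retire the old one
            replaced.append(Instance(inst.instance_id + "-v2", good_image))
    return replaced
```

In practice each replacement would boot the new instance, pass health checks, and take traffic before the old one is terminated; the point is simply that patching in place never happens.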
5:14 It took most teams weeks or a week. 5:17 There are still 250,000-ish servers on the internet that are suffering from this problem, which means 250,000 compromised servers that you can get private data out of for whatever reason. 5:28 Okay, so, but this problem is sort of solved, right? 5:30 This is more than 10 years ago. 5:32 Here's an imaginary scenario. 5:35 Later on this year, the Lazarus Group in North Korea infects some packages in npm or whatever package manager. 5:44 And the infection goes like a virus through the network and it starts infecting all the other packages. 5:50 And a— at the end of the day, you wake up to the news that the entire JavaScript ecosystem is compromised. 5:58 What do you do? 5:59 You can't delete all the servers and restart them because npm itself is compromised. 6:04 Everything is compromised. 6:04 You can't trust anything in the ecosystem. 6:08 You have to actually move to a new language and a new ecosystem. 6:11 So let's do that in a day, yeah? 6:14 On a complicated application. 6:16 Probably not, right? 6:17 But that's the idea. 6:18 You would need to delete everything and regenerate it. 6:22 It would be an absolute disaster. 6:25 So this is the goal of the architecture that I'm putting forward here. 6:31 And you probably can't read it because it's too small. 6:34 These are command lines that say like Phoenix generate and regenerate. 6:37 And the idea is just like, replace all the implementations of everything that we have, say like from npm, we're going to go ahead and finally move to Rust like everybody else. 6:47 So one last story. 6:50 Early in my career, this is over 20 years ago, I was moving up the career ladder. 6:55 I was at a big public company. 6:57 I was moving toward a CIO position in one of the divisions of the company. 7:03 And I was asked to do a new job. 7:05 The new job was pretty surprising given my career path. 
7:08 It was to be the lone developer that took over another system from a guy who was trolling the company. 7:16 So this guy had written this really complex system in C that was talking to the Mobitex radio frequency network. 7:23 It was managing laptops in the field for service technicians. 7:26 It was basically a bespoke message queuing system, like you'd use RabbitMQ for today. 7:31 He had built his own thing then. 7:33 No one knew how this thing worked. 7:35 Why? 7:36 Because the only real artifact of the system was the code. 7:39 The rest of it was a bunch of conversations that had happened in ugly conference rooms in this company over the years. 7:44 It was all in production. 7:46 This guy was the only person who knew how any of it worked. 7:50 Really knew how it worked. 7:52 But he was basically holding us hostage as a company. 7:55 So I went from moving toward CIO to sitting up all night in an office in the dark, doing strace on server processes, trying to figure out what this thing does. 8:07 So this can happen today and it could happen 20 years ago. 8:11 You can be held hostage by your codebase. 8:14 So I learned something very early around that time in my career from a guy named Kent Beck, who is the creator of Extreme Programming and a bunch of other stuff. 8:22 And if you're my age in software development, he's probably one of your software development heroes. 8:27 If something hurts, do it more frequently. 8:29 So, Extreme Programming, which was the precursor to Agile (which ruined everything; they never should have done the Agile thing). 8:35 But Extreme Programming was like: merging is hard, you need to do continuous integration; deployment is hard. 8:41 You know, we learned all these things. 8:42 Testing is hard, you should do it up front so no code is written without a test. 8:46 Code review is hard, just do pair programming constantly, right?
8:50 So if updating code is hard and making changes to code is hard, the Kent Beck version of that would be just do it constantly. 8:57 All day maybe. 8:59 So how do you do that? 9:01 That means that the code itself is not that important sort of, right? 9:06 If you're gonna be replacing it all the time, if you're gonna be throwing it away and replacing it, how do you do it? 9:11 And then another person who's not as smart as Kent once said, "The mutability of a system is enhanced by the immutability of its components." I said that, but I like to quote myself 'cause I think it's a pretty good one. 9:22 So maybe in 20 years or so someone will say, "If you're as old as me, you heard Chad say this and it was meaningful to you." So the weird thing about this is, you know, we're in this weird era. 9:35 How many of you are actually software developers, by the way? 9:37 Okay, good. 9:38 Enough of you that some of you will feel sad about this sort of, you know. 9:43 Is the craft going away? 9:44 I've been posting a bunch of stuff on that leaflet and people get mad. 9:47 Every time I post something, there's at least someone who is angry and like, you know, you're wrong, idiot, you are influenced by marketing. 9:54 That was yesterday's post, actually. 9:57 But the craft doesn't disappear, it just moves. 10:00 So, you know, from like doing test-driven development, you need to be thinking about how you evaluate the code from a business perspective and from a like architectural perspective in terms of what the technical requirements are of the code. 10:11 You know, you'll be thinking about things like what are the boundaries and what is the actual architecture that can work in a system that's constantly replacing itself? 10:21 How can that possibly work? 10:22 And so I'm gonna dig into that kind of stuff. 
10:26 One note here though is there is still very much a craft, or an engineering discipline, or whatever sort of metaphor you wanna use, art; you know, I'm a musician, so maybe there's a craft or there's an art to it. 10:38 And what I'm seeing now, and the fear that a lot of people have in the software industry, is that the senior people, and senior doesn't have to mean old or even experienced, but the people who have somehow gained wisdom in this field, are moving farther and farther away from the newbies. Everything moves so quickly, and the mistakes you make have such great and instant ramifications, and the good choices you make have such major ramifications, that this AI thing amplifies the effect of that experience. 11:18 Phoenix Architecture, what is it? 11:21 It's a name that I let the AI choose, honestly. 11:25 This is a thing I do 'cause naming is one of the hardest things, right? 11:28 So that's why we have LLMs. 11:30 You don't ever have to name anything again. 11:31 Just make sure it's not offensive in any language if you can, but you can ask the LLM that too. 11:37 It's an architectural idea, a bunch of architectural principles. 11:41 As I said, they are extracted from my decades of thinking about this problem. 11:46 And trying to figure out how we can create systems that can survive. 11:50 So if you Google for me, if anyone uses Google anymore, and put the word legacy, you will find me talking about this back in 2009. 11:58 And some of the things I'm going to say are the same today, because I realized that the principles of making systems that can survive have been the same for a long time. 12:06 It's also a reference implementation of this architecture. 12:10 And I'm going to show you two things very briefly on the reference implementation, and you can find it on Tangled or you can find the copy on GitHub. 12:18 But obviously you're all going to use Tangled, right?
12:22 So I'm starting to think about software not as a thing where you go into an editor. 12:28 You know, that's always been like the short version of what the AI thing would be, these copilot things in an IDE, and that's gone already. Software is rather a pipeline. 12:38 So we go from specifications to something else, to something else, to something else that ends up generating code. 12:46 And the something else is here. 12:48 You won't be able to read this, but you can just kind of see columns here. 12:51 Imagine you've got a bunch of Markdown specifications. 12:54 Those are just things where you just type stuff. 12:56 Maybe you're also in a chat program like freeq. 12:59 Ask me about freeq, F-R-E-E-Q. 13:03 From that will be extracted clauses. 13:06 So you can use LLMs for this. 13:08 It's not quite, you know, perfect, but my reference implementation does some hacks. 13:13 You extract clauses, which are like different ways of saying the same thing. 13:16 You can kind of deduplicate them into those text chunks. 13:19 From that, you can canonicalize and deduplicate actual requirements. 13:24 From that, you can expand them into various things that you want to evaluate. 13:28 They could be invariants. 13:29 They could be various other constraints. 13:32 They might even be things like "this component must be fast," which turns into "this component must respond within 50 milliseconds." 13:39 So that becomes a constraint. 13:41 And from there, you go into the implementation, which, as I'm going to talk about in a second, sort of necessarily has two phases. 13:48 The cool thing about this is, if you create a system like this, you're actually thinking of that end piece, the code, as a compiled artifact. Think of it as a binary, like the TypeScript or whatever you're generating. 14:02 It's like binary code. 14:04 You wouldn't edit it.
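The pipeline just described (Markdown specs to extracted clauses, clauses to deduplicated canonical requirements, requirements to checkable constraints) can be sketched as plain functions. In the reference implementation these stages are LLM-assisted; everything below is a stand-in showing the shape of the pipeline, not its actual code:

```python
def extract_clauses(spec_markdown: str) -> list[str]:
    # one clause per non-empty line; a real extractor would use an LLM
    return [line.strip("- ").strip()
            for line in spec_markdown.splitlines() if line.strip()]

def canonicalize(clauses: list[str]) -> list[str]:
    # collapse different phrasings of the same thing;
    # here just case-insensitive exact dedup as a placeholder
    seen, reqs = set(), []
    for clause in clauses:
        key = clause.lower()
        if key not in seen:
            seen.add(key)
            reqs.append(clause)
    return reqs

def expand_constraints(requirements: list[str]) -> list[str]:
    # turn vague requirements into evaluable constraints
    rules = {"must be fast": "must respond within 50 milliseconds"}
    return [rules.get(r.lower(), r) for r in requirements]

spec = """- must be fast
- Must Be Fast
- stores items durably"""
constraints = expand_constraints(canonicalize(extract_clauses(spec)))
```

The implementation at the end of the pipeline then consumes `constraints` the same way a compiler consumes an intermediate representation.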
14:05 It also means that you can trace the provenance from the Markdown file, from a piece of text in the Markdown file through this graph of requirements all the way through to the generated thing at the end. 14:17 So just like you were doing a C program or something where you're compiling from C to an object file to blah, blah, blah, the TypeScript is just in that dependent path. 14:26 So when you set up a system like this, you can do two things. 14:29 One is you can regenerate just the path down to the component at the end that you want to generate. 14:35 The other is if someone does do the wrong thing, so I say, and edit the code themselves as a human, you can see where it drifts. 14:43 And you can see, well, the drift is here and it actually affects this spec. 14:47 And I could even like automatically analyze the code and see it's changed the spec in a certain way. 14:54 Is that actually the spec now? 14:55 Is that what you meant? 14:56 And commit it as the real intent of the code. 14:58 Because intent is what matters more than anything else now. 15:02 Intent plus architecture. 15:05 Let me put this back in full screen. 15:09 Or not. 15:10 This is the fun of creating your own presentation software. 15:15 So when I was implementing this, I realized when I got to a certain point, I'm going to generate some code. 15:22 What the hell am I going to generate? 15:24 Like, is it a React application? 15:27 Is it a Next.js application? 15:29 Is it a Rust thing? 15:30 The answer is I don't know. 15:33 And not only do I not know, I don't want to have to, for a given application, decide those things. 15:39 Because I want to be able to move from, let's say, JavaScript to Rust when the Lazarus group takes over npm. 15:47 They're not going to, but if they did, I want that flexibility. 15:50 And the real-life story behind that is at Wunderlist, again, we launched Wunderlist 3, which was our huge rewrite that finally made the sync work and everyone was happy with it. 
16:00 We launched it with Ruby on Rails as the backend. 16:04 Lovely for humans, not great for compute, very expensive to run. 16:09 But the way we had architected this back in 2013-14 allowed us, for the same reasons that we could fix the Heartbleed problem, to rewrite each service in another language within 3 months. 16:21 And we went from— I'll just say we saved 70% of our compute costs by rewriting services without affecting the system, because we were just replacing the components. 16:34 So in a system like this, if we're building this for agents, if we're building this for a constantly regenerating world, we need to compile not to an implementation, but to an architecture that has these properties: 16:47 components with boundaries, just like that Haskell thing I was talking about earlier, that can be replaced at will because the boundaries are clear enough. 16:58 So the idea is you have some intent. 17:00 Maybe it comes from a conversation. 17:02 Usually it comes from conversation, maybe documents, lots of conversations. 17:07 The intent is then compiled into some decisions, not into code necessarily. 17:12 The decisions then generate code. 17:15 And then they go through an evaluation loop where you see, well, is this code actually doing the thing that it's required to do based on the specs, including all those invariants and constraints and stuff I was talking about earlier. 17:25 That could even be stuff that's coming from observability systems. 17:29 So I think software development in the near future is a closed loop from production, where you're actually pulling in what's happening in production. 17:38 It's informing and saying, well, this thing has now drifted from the requirements; I think we should regenerate these two components because they're not doing the thing that the business says they need to do, right?
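That loop (decisions generate code, evaluations check it against the spec, failing components get regenerated rather than hand-patched) can be sketched like this. The generator stub stands in for an LLM call, and every name here is illustrative:

```python
def evaluate(component: str, evaluations: list) -> bool:
    """A component passes only if every evaluation derived from the spec holds."""
    return all(check(component) for check in evaluations)

def regenerate_until_green(generate, evaluations, max_attempts: int = 3):
    """Regenerate a component until every evaluation passes; never hand-edit."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(attempt)  # in reality, an LLM generation pass
        if evaluate(candidate, evaluations):
            return candidate
    raise RuntimeError("component failed evaluations; spec may need revisiting")

# toy example: the "spec" demands lowercase output, and the second
# generation attempt satisfies it
checks = [lambda code: code == code.lower()]
component = regenerate_until_green(lambda n: "FN" if n == 1 else "fn", checks)
```

Signals from observability systems would simply be more entries in `evaluations`, closing the loop from production back into regeneration.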
17:49 So then you record the provenance, which means the chain of the human, or the human and the human's agents, and all of the different specs and everything else that led to the generation, and you capture it forever, cryptographically. 18:03 Then you deploy it within the boundaries, same way I talked about with Wunderlist. 18:07 So you're deploying these nice, tight, uh, controlled components. 18:12 And then, I won't talk about this now, but if you read the blog, you will compact, because we are creating tons and tons of garbage. 18:19 Those of us who are making stuff with LLMs know this now; I accidentally created 2 web interfaces for my IRC network and it took me a while to delete them because I couldn't remember which one was the right one. 18:30 So there you go. 18:33 So, I should mention what an evaluation is, because this is an overloaded term now. 18:38 I kind of got into it. 18:40 It's all of these different things: you know, requirements that could be just "it must act like this," what we're used to with specs and user stories. 18:48 But it's also "it shouldn't use this much memory," or "it has to be this fast," or it has to be, you know, deployable on this thing or whatever. 18:55 So evaluations are the things that are about the system and not about the code. 19:00 It's not unit tests. 19:02 Because unit tests are tied to the actual implementation. 19:04 So that's not the thing that you want to rely on. 19:06 That's also a thing that they can generate better than we can, and we should let them. 19:11 So it might be "has to have 100% test coverage and the tests always have to pass," that sort of stuff. 19:16 That could be one. 19:19 And then here's the sort of secret. 19:21 In the flow I showed you earlier, where you have specs going to canonical requirements, et cetera, et cetera, there are two columns in implementation.
19:33 Before you get to the actual implementation, there's what I'm calling implementation units, because this is actually the decisions about what components should exist and what their behaviors and boundaries and interfaces need to be. 19:46 This should not be language or runtime dependent. 19:50 This is the promise of a thing that will be implemented in some language and deployed in some runtime, 19:57 that meets the requirements of the specifications. 20:00 And this is a really powerful thing, because it means you can do things like say, well, let's just do a pass this afternoon and see if we can make this thing 50% faster. 20:09 Okay, good. 20:10 Friday, deploy it. 20:11 You know, that's where we want to get with this stuff. 20:13 So implementation units are sort of like the semi-abstract version of the thing that you want to test, and they're what the actual evaluations tie to. 20:21 And the implementations can have unit tests and all the other stuff that you're used to in your language. 20:25 But we are actually past the point now of, I think, sorry, language advocacy. 20:31 And I say that as someone who, like, started the Ruby nonprofit and RailsConf and RubyConf and wrote the RubyGems thing and spoke at Ruby conferences and wrote books about Ruby. 20:40 And I was all Ruby, Ruby, Ruby 20-something years ago. 20:43 And I'm going to RubyConf to receive an award this year and tell them you probably ought to not use Ruby anymore. 20:49 It's not a good idea. 20:51 'Cause who's Ruby for? 20:53 People. 20:54 Ruby is for people. 20:56 These programs are for machines. 20:58 And what matters now is that they're fast, correct, you can generate them with LLMs or whatever the technology's gonna be, and they're cheap to run. 21:05 That's what matters. 21:06 Doesn't matter what language it is anymore. 21:08 So let go of that, except for your hobby time. 21:10 Write it in your hobby time, fine, fun. 21:13 Otherwise it's irresponsible.
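Setting language advocacy aside, the implementation-unit idea above can be sketched as data: a language- and runtime-agnostic promise of a component, which concrete implementations in any language fulfil. The field names here are invented for illustration, not taken from the reference implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ImplementationUnit:
    """The promise of a component: boundary, interface, and the
    evaluations it must satisfy, independent of language or runtime."""
    name: str
    interface: dict  # operation -> (input type, output type)
    constraints: list = field(default_factory=list)  # e.g. latency budgets

@dataclass
class Implementation:
    """A concrete realization of a unit; swappable without touching the unit."""
    unit: ImplementationUnit
    language: str
    runtime: str

unit = ImplementationUnit(
    name="list-sync",
    interface={"sync": ("ChangeSet", "MergedState")},
    constraints=["must respond within 50 milliseconds"],
)
today = Implementation(unit, language="typescript", runtime="node")
tomorrow = Implementation(unit, language="rust", runtime="native")
```

Both implementations point at the same unit, so evaluations attach once to the unit and hold regardless of which language fulfils it this week.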
21:15 And that was a, yeah, just throwing some bombs at you here. 21:18 All right. 21:20 And so the code doesn't matter anymore. 21:22 We have to let go of that. 21:23 We have a code fetish. 21:24 I've been saying that for decades, literally, to developers at conferences. 21:27 We have a code fetish. 21:29 It's not about the code. 21:30 Now it's about these evaluations, really, and the fact that things need to exist. 21:36 So sort of backing up a little bit, think about— I went through the whole microservices movement, if you can call it that, craze, hype, whatever. 21:45 And I was definitely one of them. 21:47 And I'm friends with Fred George, who created that term and probably regrets it now. 21:53 People hate microservices. 21:55 Now, there's been a movement to move back to monoliths. 21:57 That's also really stupid and misguided. 22:00 The reason they hate microservices is, like, imagine you spend 6 months breaking apart this monolith into a bunch of microservices because your manager said, "We're doing microservices now." And then you finally remove the thing and you realize, oh crap, there are all these dependencies we didn't really understand that are like maybe they're temporal or they're data related or whatever. 22:19 You didn't accomplish anything by doing this. 22:22 Maybe some sort of like production runtime improvement of not having, you know, services get on each other's CPU cycles and slow the whole system down or whatever. 22:33 Not good. 22:34 So this is sort of a necessary thing to think about. 22:38 It's not just small, but it's about boundaries and replaceability. 22:41 That's what matters. 22:42 And I say this as a person who did a talk called Tiny 10 years ago at a conference thinking, I think it's just about making things small, guys, but it was not. 22:53 Now what I'm realizing, and this is me also shilling my project, I must admit, but really it's about the conversation. 
23:01 So, you know, I was thinking about, like, how do you apply all this generative AI stuff? 23:05 It's fine for one-shotting apps. 23:07 And, you know, like I made this presentation with it. 23:09 I don't have a slide deck. 23:10 I just talk to an LLM and have it generate things. 23:17 What really matters though on big systems is context. 23:21 Like how do you get the intent? 23:22 How do you go to a 20-year-old system and find out what the intent is behind the code? 23:27 You probably do not, actually, right? 23:29 Because no one captures it. 23:30 There's just no system for that. 23:31 There never has been. 23:33 Except in like really, really bureaucratic places, but then it was malicious compliance and you're going to end up with crap anyway. 23:41 So what if we could create an environment where the conversation around how the requirements were created and specified is always captured, tied to a human, in a way that is archived and will never be lost when you move from Slack to Teams or something? 23:57 And every commit and every piece of code is tied to that conversation and the people that did it, so you get a sense of why. 24:02 I think this is the only responsible way to do software development next year. 24:07 And to do that, it means you may not touch code as a human. 24:10 You can view it, like you shouldn't even be able to— you can use VI, it's okay. 24:14 If you use VI, you can change it, but, you know, like don't use VS Code and edit your code, it's wrong. 24:19 Because if you as a human do it, you can't guarantee all this stuff. 24:23 You won't guarantee being able to capture all this provenance. 24:26 And bots are way better at discipline than we are. 24:31 So I'm going to go kind of fast now. 24:35 I talk about regenerating code and I talk about, like, systems where, in my perfect world, a system in production maybe doesn't have any of the same code it had last month. 24:45 And that's always true. 24:47 It's just always rolling.
24:48 And probably I will think that's quaint in a year and say, really, I meant 5 minutes. 24:52 Sorry, guys. 24:53 You know. 24:54 That's sort of insane though because just because we can change it doesn't mean we should. 24:58 And there are layers. 25:00 So Stewart Brand has this concept of pace layers, which is really just about like layers of the ability to change and where things need to move more slowly and where they can move more quickly. 25:11 An example of a pace layer that should move slowly is like a protocol, you know, HTTP. 25:15 Let's not change that daily. 25:16 But also a user interface. 25:18 You can't just have the user interface change constantly because your users will be confused. 25:22 And I think Paul might have mentioned that earlier in their presentation. 25:27 The other thing that we need to worry about is hidden state. 25:30 And I sort of got to that with the deletion test idea of doing this big microservices thing. 25:38 And then you replace the monolith with microservices and nothing works. 25:42 There's a bunch of that in the world. 25:44 And so to do an architecture like this, and this is going to be different for every application, this is a very serious thing you need to be thinking hard about. 25:53 Service boundaries, data boundaries, et cetera, even temporal coupling. 25:59 The end sort of goal here, and really something that's always been true, is if we are on a team together, what we are building is not the code. 26:08 We're building a system. 26:10 And the code is not only not the asset, it is a liability. 26:13 It always is. 26:14 Because every line of code you ever wrote is legacy code the moment you wrote it. 26:20 Legacy code is bad. 26:21 We all know that, right? 26:23 So the system is the asset. 26:25 Code is a liability. 26:26 So if you think about all this stuff I was just saying, you think about anything in your current systems, like pick one component. 26:35 Could you replace it tomorrow? 
26:37 Could you just have Claude redo it? 26:39 If not, you probably have a refactoring job, a system-wide refactoring job, ahead of you. 26:45 And I will assert you already have that refactoring job ahead of you. 26:48 You just don't know it yet, because things are moving 1,000 times faster than they used to, or maybe faster than that. 26:53 I don't know what the right number is. 26:56 So the point is AI didn't actually create this problem. 27:00 And I think that's true of a lot of stuff we're seeing now, both in terms of the problems we see from AI and the solutions. 27:07 It's like a real pattern I'm seeing. 27:08 I'm also a VC, sorry. 27:10 I heard some shade thrown at VCs in the last talk. 27:14 And I agree with it, honestly. 27:15 I'm a VC who's also a programmer. 27:17 I talk to a lot of people, and a lot of the things happening now that I think are really powerful are like: AI is creating a problem because things move so fast, so much content's getting created, et cetera, et cetera. 27:27 So we're going to use AI to fix the problem. 27:29 And that's kind of where I am right now. 27:32 But it's the same thing we always had. 27:33 It's just going 1,000 miles per hour. 27:37 So all the stuff I was talking about, how do we make it more concrete? 27:42 Well, I told you I have a reference implementation of the Phoenix architecture. 27:46 I linked it earlier. 27:48 But I also told you the conversation is the commit. 27:51 And the things I was saying, I hope they were registering, like: well, I can't use Slack as my system of record. 27:56 Or I can't use Teams or whatever, or Discord, right? 27:59 I need something with, like, durability, that I can host myself, that I can trust, that no one else can read unless I want them to. 28:06 Uh, I want cryptographic identities. 28:08 I want provenance that way. 28:10 I want my agents to be spawned by me and my key, and I want them to have keys, and I want us to maybe even have reputation and durability.
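A minimal sketch of that wishlist, with hash-linking standing in for real cryptographic signatures (a real substrate would use actual keys and atproto identities; the DIDs and thread names below are made up):

```python
import hashlib
import json

def commit(prev_hash: str, author: str, conversation: str, code_hash: str) -> dict:
    """A hash-linked record binding code to the conversation and the
    identity (human or agent) that produced it."""
    body = {"prev": prev_hash, "author": author,
            "conversation": conversation, "code": code_hash}
    # hash covers the body, chaining this record to its predecessor,
    # so the history can't be silently rewritten
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

c1 = commit("genesis", "did:example:chad",
            "thread-42: make sync faster", "abc123")
c2 = commit(c1["hash"], "did:example:agent-7",
            "thread-42: regenerate list-sync", "def456")
```

Because each record names its author and the conversation behind it, "why does this code exist" becomes a lookup instead of an archaeology project.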
28:18 And so I decided I would just build something that does this. 28:22 It started as a hack one day, and here's where it's an AT Protocol thing. 28:25 I was like, what if we could sign into IRC (if you know what that is; Internet Relay Chat, for the young people) with your Bluesky handle? 28:35 And it just kind of got out of control. 28:38 So now what I'm building is a substrate with all of these properties, agent provenance, et cetera, et cetera. 28:46 There's a lot to it. 28:49 But the end goal is the substrate for agent-driven development with Phoenix. 28:55 And so please join freeq.at. 28:59 Please contribute stuff. 29:00 Tell me what's stupid, what's not great. 29:02 It kind of looks like there's iOS, Android, Windows, web, TUI, Rust SDK, Bot SDK, great docs. 29:11 It's ready for you, but I have not launched it. 29:14 And don't tell anyone about it, but please join, because it's just for nerds right now. 29:19 But I think this is how things are going to work in the future. 29:22 And I guess I'm out of time. 29:24 So thank you very much for your attention. 29:30 Thank you very much, Chad. 29:32 Um, accidentally unencrypted IRC client with atproto accounts. 29:37 Amazing. 29:38 Thank you very much.