Cassidy James Blaede 2:25 Let's do this. 2:26 Uh, I am very happy to introduce... Hello everybody. 2:48 Welcome. 2:48 I think this is on. 2:49 Yeah? 2:49 We good? 2:50 The mic? 2:51 OK. 2:52 Welcome to Coop, open source trust and safety infrastructure for all. 2:58 Real quick, if you happen to have— I guess I don't know how you're going to scan this on a laptop.

Juliet Shen 3:03 Yeah, not the easiest setup. 3:05 I've also been posting about this all day, but to get ready for this talk, we've actually created a temporarily hosted Coop instance. 3:13 An important note that you'll hear us repeat a lot: ROOST does not host our tools. 3:16 We do not want your data. 3:18 You cannot send your data to us. 3:20 But for today, we have some sample data from posts from all the folks in this room, ingested via TAP. 3:25 So if you do want to follow along, we have this interactive Coop demo. 3:28 You can go in, you can create policies, you can review tasks. 3:31 It's not hooked up to anything. 3:33 Actions will have no effect 3:34 and no repercussions. 3:35 But you can see how the interface works.

Cassidy James Blaede 3:37 It'll be a nice little hands-on for you. 3:40 So yeah, I also want to give a quick content warning. 3:43 We're going to be talking abstractly about some of the sorts of harms that we encounter in trust and safety. 3:47 We're not going to go into detail or show examples. 3:50 But we will be talking about things like child abuse, violent content, self-harm, and unwanted content. 3:56 Also, we will be doing— I guess we won't be doing a live demo. 3:59 You gave the content warning for that live demo. 4:00 But that live demo is ingesting live data from atproto.

Juliet Shen 4:04 It was. 4:04 But unless some of you have been freaks, the content should be fine, because it's only looking at content posted by accounts that are actively posting on the #atmosphereconf hashtag.

Cassidy James Blaede 4:15 All right. 4:16 And with that, who are we? 4:19 So a lot of you probably know us. 4:20 We're ROOST. 4:21 ROOST stands for Robust Open Online Safety Tools. 4:25 And what that actually means in practice is we're a nonprofit in service of the public good. 4:30 We believe that safety tools are critical digital infrastructure. 4:34 We develop, distribute, and maintain open source software for trust and safety, and we make existing safety technology integrated and interoperable. 4:44 An important thing: we are not training a classifier on CSAM. 4:50 We're not providing a hosted trust and safety platform. 4:54 We're not hosting this for you. 4:55 It's an open source platform you can host yourself. 4:58 And we're not doing your moderation for you. 5:00 We're not a moderation service. 5:01 Just to get that out of the way. 5:04 And who are we specifically? 5:05 I'm Cassidy. 5:05 I'm the community manager at ROOST.

Juliet Shen 5:08 My name is Juliet. 5:09 I'm the head of product at ROOST. 5:10 Quick background: I've been in trust and safety for 10 years. 5:13 Cassidy comes from a long, esteemed career in the open source world. 5:17 So together, we're kind of Voltron-ing, Frankenstein-ing a team that deeply understands both the values and practices of open source software and trust and safety. 5:26 You may recognize some of my work from the user reporting flows and internal systems at Snapchat and Grindr and Google Spam and Abuse. 5:32 I'm sorry if you got a spam email. 5:34 That is my former team.
Cassidy James Blaede 5:38 So Juliet is going to give us a quick trust and safety crash course, in case— I guess, yeah, do we want to do—

Juliet Shen 5:45 So by a show of hands, raise your hand if you are building something on atproto. 5:52 Great. 5:53 Now raise your hand if you have a trust and safety plan for what you're building on atproto. 5:58 Okay, okay, not the worst. 6:02 So let's start with the basics. 6:04 Another show of hands: who has a rough idea of what trust and safety is? 6:09 Nice. 6:10 Okay, that's awesome. 6:11 All right, well, we're probably going to be a little bit repetitive. 6:14 Cassidy, will you kick us off please?

Cassidy James Blaede 6:15 Sure.

Juliet Shen 6:17 So, a trust and safety crash course. I always say it's all the bad stuff that can happen on the internet, with the caveat that online and offline are no longer completely separate worlds. 6:26 The harms you experience online translate into offline harms with things like ridesharing, with things like rentals. 6:32 Anything that happens online can also mean something that happens in the physical, tangible world. 6:38 But trust and safety really refers to the teams, the technologies, and the general field of studying online harms and responding to them. 6:48 Fun fact: this term actually originated at eBay a number of years ago. 6:51 If you look at the Wikipedia page on trust and safety, I wrote it. 6:54 So it has a nice little history about how this field really came to be and where it's going today. 7:00 It can include things, like I said earlier, like user abuse reporting, any kind of flagging, any kind of customer support flow saying something happened to me on this platform or in this experience that I do not like. 7:11 It can also include things around online child safety, abuse, and exploitation, things around CSAM. 7:18 A lot of people on the internet used to say CP. 7:21 We as an industry are moving away from that term, because pornography involves consenting adults. 7:26 Children cannot consent. 7:27 So we refer to it as CSAM, child sexual abuse material. 7:31 It also includes things like radicalization. 7:33 This can be things around glorifying mass casualty events, like school shootings and mass shootings. 7:38 Things around really harmful narratives that can lead to potential mass casualty events. 7:44 Things around harassment. 7:45 Things around non-consensual intimate imagery. 7:47 Like I mentioned, anything bad that can happen on the internet. 7:52 And a lot of the earlier speakers throughout the Atmosphere conference have said it also includes things around information influence campaigns. 7:59 And some of the labelers that Julia talked about earlier also cover topics like zoophilia. 8:05 Amongst certain communities that are very prominent and active on Atmosphere, zoophilia is still not allowed. 8:11 So with that fun slide, we're gonna move into what trust and safety more formally means.

Cassidy James Blaede 8:17 And I do like the call-out here that trust and safety is not just about punishing or negative effects. 8:22 It is also about encouraging prosocial behavior. 8:24 So there are a lot of different aspects.

Juliet Shen 8:27 And I want to give a shout-out to both Dr. Kay and Julie, who talked about how all trust and safety work, moderation work, labeling work is inherently political. 8:34 The values will change based on the demographics of those teams.
8:37 Ideally, you have a team with very mixed demographics and backgrounds. 8:41 What we've seen in a lot of companies, a lot of teams, is those teams becoming very, very white and very, very former cops. 8:48 That's going to influence how trust and safety is done on those platforms. 8:53 But it should not be about policing. 8:55 It should not just be about what comes down. 8:57 It should also be about what is going up. 8:58 What are you putting in front of people to encourage that pro-social behavior? 9:05 And now onto the academic stuff. 9:07 While we were first incubating ROOST, we did so doing research at Columbia University's Institute of Global Politics. 9:13 We found that in academic studies, there was pretty much no mention of the tools that platforms actually use to do this kind of trust and safety work. 9:20 A lot of stuff about the sociology, the policy, maybe the operations, and of course the trauma and the wellness toll it takes on the working-class people who do this work. 9:28 But nothing on the technology. 9:30 So we wrote a paper. 9:31 We talked to over 50 different trust and safety, anti-abuse, and risk teams across startups, nonprofit organizations, and major tech companies, and these people were very open. 9:40 And what we found is that everyone's tools look the same. 9:44 Doesn't matter if you're a marketplace, doesn't matter if you're a cruising app, doesn't matter if you're a traditional social media app. 9:49 All of your tools are the same. 9:51 So we came up with a framework called DIRE, because the situation is dire. 9:55 D, detection. 9:56 This could be a user report. 9:57 It could be a classifier. 9:58 It could be hash matching. 9:59 How do you know that something's up? 10:02 Once you find something is up, how do you investigate it? 10:04 How do you know, is this the only thing that is up? 10:07 Are there other things going on with this particular signal? 10:10 So let's say maybe there's a signal sharing group. 10:14 Maybe it's literally on Signal, and someone says this IP address is associated with a Russian propaganda campaign. 10:21 You want to know, is this happening on my project? 10:23 Is this happening on my platform? 10:25 That's where investigation takes place. 10:27 It's supposed to be more open-ended. 10:28 You need to understand the scale and scope of some of the things that are happening. 10:32 Once you find all that stuff, you have to review it. 10:36 Special shout-out to Acorn, the moderation tool built and designed by Blacksky, and Ozone by the Bluesky team. 10:42 We're gonna talk about Coop, our review tool. 10:44 It's designed for scaled, flexible review. 10:46 How do you have the right context to take into consideration whether something's actually violating or not? 10:52 Finally, enforcement. 10:53 What are the actions available? 10:54 Maybe an action is taking content down. 10:56 Maybe it's sending the user a warning. 10:57 Maybe it's sending them a de-radicalizing resource, like here's some stuff to read, because you're searching for something that might mean something else. 11:09 And this also includes reporting to different authorities. 11:11 Authorities here does not always mean police or law enforcement. 11:15 It can mean organizations like NCMEC, 11:17 the National Center for Missing and Exploited Children. 11:20 In the US, there is a subset of laws that say if you become aware of things like CSAM, you are legally required to report it to organizations like NCMEC.
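[Editor's illustration] To make the four DIRE stages concrete, here is a minimal, hedged sketch of how detection, investigation, review, and enforcement might hang together in code. Everything here is invented for illustration: the class names, the `platform_index`, `policy`, and `platform_api` objects are hypothetical stand-ins, not ROOST's or Coop's actual APIs.

```python
# Hypothetical sketch of the DIRE loop (Detection, Investigation, Review, Enforcement).
# All names are illustrative; none of this is Coop's or ROOST's real interface.
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    NO_ACTION = "no_action"
    WARN_USER = "warn_user"
    REMOVE_CONTENT = "remove_content"
    REPORT_TO_AUTHORITY = "report_to_authority"  # e.g. an NCMEC CyberTipline report


@dataclass
class Signal:
    """Detection: a user report, a classifier score, or a hash match."""
    source: str            # "user_report" | "classifier" | "hash_match"
    subject_id: str        # the account or post the signal points at
    details: dict = field(default_factory=dict)


@dataclass
class ReviewTask:
    """Review: a unit of work routed to a queue for a human or a rule."""
    signal: Signal
    related_items: list    # filled in during investigation
    queue: str = "default"


def investigate(signal: Signal, platform_index) -> list:
    """Investigation: is this isolated, or part of a wider pattern?"""
    # e.g. find other posts from the same account, or the same IP address
    return platform_index.find_related(signal.subject_id)


def review(task: ReviewTask, policy) -> Action:
    """Review: a human (or automation) decides against a written policy."""
    return policy.decide(task)


def enforce(action: Action, subject_id: str, platform_api) -> None:
    """Enforcement: apply the decision on the platform."""
    if action is Action.REMOVE_CONTENT:
        platform_api.remove(subject_id)
    elif action is Action.WARN_USER:
        platform_api.send_warning(subject_id)
    elif action is Action.REPORT_TO_AUTHORITY:
        platform_api.file_report(subject_id)
```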
Cassidy James Blaede 11:30 And if that tiny QR code was too hard to scan, there's a big QR code for the paper.

Juliet Shen 11:34 If you wanna read our paper, it is right here. 11:36 You can also just search for the DIRE framework and you should find it.

Cassidy James Blaede 11:41 We kinda talked about this a bit already. Yeah, we've found that platforms are constantly reinventing these tools, and a lot of times they're internal tools at different organizations. 11:52 So that's part of the thing we're trying to solve.

Juliet Shen 11:57 I have personally built or worked on review tools at three different places. 12:01 They all look and act the same, and they all have the same features. 12:05 It's a waste of engineering time. 12:06 It's a waste of resources. 12:09 And we all have the same feature requests and bugs.

Cassidy James Blaede 12:13 This is helpful as well. 12:15 These are some of the terms that you've probably been hearing us say and may hear us continue to say. 12:21 Yeah, I kind of covered this one already as well. 12:25 Yeah, and—

Juliet Shen 12:26 Oh yes, so a phrase that we often say is, "ROOST is tools, not rules." Or better, "Tools for your rules." Because we're not in charge of creating policy. 12:36 We get a lot of people asking us, can you write our policies for us? 12:39 And no, we cannot. 12:41 A lot of us have worked in policy roles, but every single community is different. 12:47 We're probably not members of your specific community, so we don't know what the norms are and what's acceptable behavior within your own community. 12:56 And some of this can really change based on jurisdiction. 12:59 The example we have here is that in the US, there's no mandatory requirement to go voluntarily look for CSAM. 13:06 But if you do find it, you do have to report it to NCMEC and then preserve the data for up to a year. 13:12 Whereas if you're in the UK, platforms have to remove and report it immediately, and it's enforced by a regulator with teeth, in that they will fine you. 13:21 So the big difference there is that data retention. 13:24 I've talked to regulators and organizations in the UK who ask, hey, isn't it illegal if you're holding onto that data for more than a year? 13:32 And I'm like, actually, we legally have to. 13:34 We abstractly, not we as ROOST. 13:38 So once again, part of what we're trying to do with ROOST is build a community for people to share policy ideas with each other and then use those policies in our tools.

Cassidy James Blaede 13:47 And here are some resources as well. 13:51 Juliet, I know you collected these. 13:53 I'll let you continue.

Juliet Shen 13:54 Yeah, so somewhere on my symbol, I have made a collection of all the various trust and safety resources I often yap about. 14:03 They include many of these. 14:04 So these are my go-to resources if you're looking to learn more about the trust and safety field and generally understand it, but also if you want to get really into the details of the operations, the tooling, the different types of technologies that are often used. 14:16 The Trust and Safety Professional Association is exactly what it sounds like: 14:20 a professional association for those who do trust and safety work full-time. 14:25 And that includes community mods. 14:27 They have put together this amazing curriculum written by practitioners. 14:31 I contributed to the tooling part.
14:32 There's a lot of stuff in there about, for example, product design runbooks: if you have something like live streaming, where are the places you need to put in friction, or what are the different types of safety tools you need to have in place before you launch something? 14:46 There are guides like that, plus maturity models for teams of different sizes as well as for the different products you're building. 14:52 They also have a library of different types of resources and academic papers, all dealing with trust and safety. 14:58 And then finally, a lot of professors are now teaching trust and safety in universities. 15:03 They have open-sourced their materials through this teaching consortium, so if you want to teach a class on trust and safety, or virtually go through a class taught by experts in trust and safety, you can also do that by visiting this GitHub page.

Cassidy James Blaede 15:16 We'll share all the links in this presentation on our Bluesky immediately following this as well, because I know there are a lot of links. 15:23 Alright. That was the trust and safety crash course. 15:26 Now we're gonna talk about ROOST specifically and our nonprofit and open source approach. 15:32 The status quo that we found is really that trust and safety, the larger industry, is kind of broken. 15:40 As we've said, for smaller platforms and newer protocols, and especially a lot of the indie, small apps being built on atproto, you might not know that you need these tools, or they might be unaffordable or inaccessible, like the traditional industry tools that are standard. 15:57 And even if you can get access to them, a lot of times those application processes are lengthy and complicated and ask a lot of questions that an app of your scale doesn't really have an answer to. 16:10 So platforms end up reinventing these basic functionalities: 16:13 they can either use a third-party vendor or they can build it themselves. 16:16 And that's a waste of effort. 16:18 It also means that your app and your team can't focus on the harms that are specific to your community if you're spending all of your time building the same thing over and over again. 16:31 We're also seeing that generative AI and LLMs are accelerating the production and spread of harms. 16:36 So there are more and more things happening that need to be addressed. 16:44 And we've also found that openness is kind of taboo in the larger trust and safety community. 16:49 It's been really cool seeing so many trust and safety related talks here, because that has been pushing back against that idea. 16:55 But still, within the larger industry, people think that if you talk about it, then bad actors will figure out ways to get around your stuff. 17:03 You know, it's very security through obscurity, which, as we know from the security world, is not really a valid approach. 17:10 And so what we believe is that the missing piece is actual tools, things that developers can take and use and build on and plug into their own platforms. 17:22 And more importantly, we believe that these tools must be open source, so that they're accessible, modifiable, auditable, and part of this trust and safety public commons that we're building. 17:36 And as a result, ROOST is very, very open source.
17:39 Everything we do is open source: not just technically, on a licensing level, but also the development model and community model that we're building. 17:46 So here are a number of open source projects and communities that we develop. 17:52 HasherMatcherActioner is actually from Meta, but we run the community and office hours. 17:57 So every couple weeks we get people together who are using HMA for hash matching, and we talk about things. 18:03 We share information and knowledge and help develop it and make it better. 18:08 The ROOST model community is a community that we've built where people who are using open-weight models can come together and learn how to use them better in a trust and safety context. 18:20 They can learn about new models or different policies and how to plug policies into open-weight models, which is really powerful. 18:28 ROOST itself: our roadmap is up on GitHub. 18:31 Our community documentation is all up on GitHub. 18:33 It's all open source. 18:34 It's in Markdown. 18:35 You can go click a button and edit it and open a pull request against any of it as well. 18:39 We really want to embody this open source idea in our governance documents as well. 18:44 And then there are two actual code projects. 18:46 We're talking about tools. 18:47 There's Osprey, which is an automated rules engine that originated from Discord; we helped them open source it, and it's now actually being used by Bluesky and Matrix and a number of other platforms. 18:57 And Coop, which we're gonna talk a little bit more about today, which is the review console that we acquired, open sourced, and have been building upon and simplifying to make it easier to self-host as well. 19:07 All of that can be found on our GitHub, github.com/roostorg. 19:12 And now we're gonna talk about Coop.

Juliet Shen 19:15 Some of you may have known all this already and you're saying, finally, we're actually gonna see the project. 19:19 So Coop. This is the review tool. 19:21 What we did as ROOST, and again, we're a nonprofit: we wanted to bootstrap some of our software projects. 19:26 So we acquired the IP from a startup that was selling trust and safety tools as a service; we bought it. 19:33 And then we worked over the past six to eight months to refactor that code to make it open-sourceable. 19:39 We're not done. 19:39 There's still a lot of really messy legacy tech in this codebase. 19:43 But that's the fun part of working in the open. 19:45 We wanted the v0 out there to be like, hey, we know we need to prioritize certain migrations, database cleanup, simplifications overall. 19:53 What else is needed from the community? 19:55 So again, we want your opinions. 19:58 So, Coop. 19:59 It is built for manual review. 20:01 It also has auto-moderation built directly in. 20:04 It's self-hosted on your own infrastructure. 20:07 It comes with reviewer wellness features such as blurring, grayscale, and auto-muting of videos. 20:13 I'm going to go into a more comprehensive demo as well. 20:16 And like I said, there are automation rules built in. 20:18 There are two types of rules within Coop. 20:20 One is the proactive kind of auto-moderation. 20:22 You know something is bad, or you know how to action it. 20:26 Your team doesn't need to keep doing it. 20:28 You can set that up as a rule. 20:29 The second type of rule is a routing rule.
20:33 As things are coming in, let's say you're ingesting a million reports to your label service or your moderation service, you need to sort them, because from a wellness perspective, if you put all those reports and tasks into a single queue, you don't know what's coming up next. 20:47 Context switching between something like harassment or hate and something like NSFW cartoons, that's a lot for your brain to do. 20:55 That's going to impact you a lot more. 20:57 So you assign things to specific queues, and you can also do that based on language. 21:00 If you're providing services for a global user base, you want someone who's fluent in that language and has that cultural context to go in and review. 21:10 Some of the main features. 21:11 Coop is the very first free, open source, end-to-end child safety tool, which means it provides hash matching for known CSAM. 21:19 You still have to go get your own credentials from NCMEC, but once you have that, you can just plug it in and start matching against databases of known CSAM that have already been validated and verified by experts like NCMEC. 21:32 We also integrated it with Google's Content Safety API. 21:34 This is not free as in open source; it's free as in free beer. 21:38 It's a free API. 21:39 You, again, need to get your own credentials from Google. 21:42 This is a novel-CSAM detector. 21:44 This classifier can find CSAM that has not yet been hashed, which is an increasing area of harm in the child safety landscape today. 21:52 And we have built-in support for NCMEC reporting. 21:55 We've integrated with NCMEC's CyberTipline reporting API as well as their hash sharing API. 22:00 So that way, as you find CSAM, you can report it to the authorities with the right amount of detail that makes it actually actionable. 22:08 It's scalable for your org. 22:10 Like I mentioned earlier, you can have multiple queues for different types of workflows, different types of categories. 22:14 It's also a multi-tenant codebase, because again, it was originally a SaaS startup. 22:19 So let's say you want to have multiple completely separate instances of Coop with totally separate people working in those workflows; you can also set it up like that. 22:31 And I talked a little bit about the routing rules and proactive rules. 22:34 There's also a unique kind of architecture that we built in. 22:36 We made everything pluggable, so any kind of signal, model, or detection service that you're using, you can actually integrate directly into Coop. 22:45 Coop comes with three prebuilt API integrations. Google's Content Safety API, like I mentioned already. OpenAI's Moderation API, just because realistically we know a lot of people depend on it. 22:55 It is free. 22:56 You have to get your API token from OpenAI, and it supports image and text. 23:01 And the third one is Zentropi. 23:02 You may have heard some folks at Atmosphere talking about Zentropi. 23:05 They have open-weight models, but for Coop, we integrated with the API, which again is backed by that open-weight model for text classification. 23:13 Essentially, you can design your own text classifier. 23:16 You write a policy prompt in Zentropi, and it creates a model for you. 23:22 It is completely tunable, completely customizable, and it's a small model that runs with really low latency. 23:27 So if you need something that happens in near real time, Zentropi might be a good option for you.
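[Editor's illustration] As a rough sketch of how a pluggable classifier signal and a threshold-based routing rule could fit together (like the 0.2 threshold shown later in the demo), here is a hedged example. The only real API call is OpenAI's Moderation API, which the talk names as one of the prebuilt integrations; the `route_task` function, the threshold logic, and the queue names are hypothetical stand-ins, not Coop's actual plugin interface.

```python
# Sketch only: a moderation classifier feeding a threshold-based routing rule.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()


def route_task(text: str, threshold: float = 0.2) -> str:
    """Hypothetical routing rule: pick a human-review queue from classifier scores."""
    resp = client.moderations.create(
        model="omni-moderation-latest",  # free moderation endpoint, text and image
        input=text,
    )
    scores = resp.results[0].category_scores  # per-category scores between 0 and 1

    if scores.sexual > threshold:
        return "nsfw-review"          # routed to reviewers who opted into this queue
    if scores.harassment > threshold:
        return "harassment-review"    # separate queue to reduce context switching
    return "default"                  # low scores stay in the general queue


if __name__ == "__main__":
    print(route_task("example post text pulled from the firehose"))
```

In a real deployment the queue assignment would also factor in language and reviewer role, as described above; this sketch only shows the score-threshold part of that decision.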
23:35 So for the different types of people building on atproto, here's where you might use Coop. 23:40 As a PDS host: your users, your liability. 23:43 You might want to review reports against your community standards. 23:46 You might want to— oh, yeah. 23:47 Oh wow, without the helper text, the elf thing makes much less sense. 23:52 There's supposed to be a line here that says, "In a world where sexual elves are illegal and not allowed on online platforms, these are your obligations." 24:02 You might want to detect known elf images from third-party databases of elf hashes and report them to the right authorities. 24:10 As a labeler, you might want to use Coop as a review interface instead of Ozone, or in addition to Ozone. You can use automation or human review for reports on elves submitted by your users. 24:21 You can mix automated and human decisions. 24:23 And finally, as an app creator, you're like, I just don't want to deal with any of this. 24:28 You can ship safety without a full trust and safety team. 24:30 We know many of you are single-person teams, two-person teams, but you also want to do right by your users. 24:36 So you could use Coop to set up some simple automation. 24:39 If you want to do some manual review, you could do labeling of any kind. 24:42 It doesn't even have to be for trust and safety.

Cassidy James Blaede 24:45 We were gonna do a whole elf analogy, and we decided last minute not to, and then we didn't take it out of everywhere. 24:49 So, a little peek behind the scenes there. 24:52 Are you gonna switch to— yeah. 24:55 OK. 25:00 Hopefully I didn't totally just mess up the livestream. 25:05 You know. 25:11 It's on the screen here.
Juliet Shen 25:12 All right, so this is what Coop looks like. 25:15 For the data, I have it collecting content from the DIDs of people in this room. 25:21 So when you first go into Coop, you see some key metrics. 25:24 You get a breakdown of all the actions. 25:27 I've got it set to the last seven days. 25:29 What percent were automated? 25:30 What percent were manual? 25:31 What were the top policy violations? 25:33 How many actions were taken? 25:34 How many things still need to be reviewed? 25:36 You've got some fun graphs, because everyone loves a visual. 25:38 I'm going to go into the manual review first. 25:42 Here you can see I've got a few queues set up. 25:46 I can create a new queue at any given time. 25:49 If you're doing phased review, or first-round reviews, or certain types of queues where you don't want certain actions to be taken, maybe very sensitive actions, 25:57 you can actually hide those actions from the queue. 26:01 We use role-based access controls. 26:03 Whoever should have access to a queue has access to it. 26:07 Anyone who does not have that role will not be given access. 26:10 A lot of trust and safety moderation work involves very sensitive user data. 26:14 This is a great way of partitioning things where they need to be partitioned. 26:19 I'm going to go into both the Atmosphere posts queue and the default queue. 26:24 I posted this, but there's really no better way to realize what a prolific poster Fane is than by running something like this and realizing that all 1,183 posts essentially come from their account. 26:36 So this is a post. 26:38 This is essentially completely customizable. 26:41 Whatever information you want to show, you can. 26:44 I've just got it populated with the posts that Fane has written, as well as their profile photo and their DID. 26:51 If I go down, I can see information about their account. 26:54 This is their header image. 26:55 You can see it's got that auto-blur there. 26:57 When I mouse over it, it unblurs. 26:59 They're active. 27:00 This is their description. 27:02 You can go down and see what other actions have been taken on this account, maybe for other posts, and I can actually preview those separate jobs directly in the UI. 27:12 Finally, we're cleaning up this UI. 27:13 It looks a little messy right now, but you can see additional items that give you more context: is this account dedicated to posting something that maybe needs a label, or is it just one out-of-context post, and in the middle of the replies or conversations there are other things going on? 27:31 I can apply a random label. 27:33 One thing to note is that as you're doing this work, and I believe Acorn does this too, you can actually check the policy to see what a goose label means. 27:41 I'm going to say, oh, it contains profanity or excessive exclamation points. 27:45 This is a demo policy. 27:46 There are no exclamation points here, so I'm actually going to just ignore this task.

Cassidy James Blaede 27:51 2-minute warning, okay?

Juliet Shen 27:52 Oh, yep, sounds good. 27:53 So you can skip tasks really quickly as well. 27:55 You can also download all of the tasks that your team has skipped. 28:00 This is a really helpful way to figure out where policies are unclear, where people are getting confused. 28:05 It's also good for generating transparency reports for Digital Services Act compliance. 28:10 Under automated enforcement, I've got a few matching banks set up. 28:13 I've got hash banks with cans of spam. 28:15 We know it's spam. As well as photos from Atmosphere, mostly taken from the dinners I've been hosting. 28:19 For text banks, let's say you've got a bunch of posts related to Atmosphere. 28:23 You don't want to keep typing them out every time you create a rule. 28:25 So I've got them in the text bank, and in the rules here you can actually see some of them have fired. 28:30 For labeling Atmosphere content, I'm saying if there's any text in any item from the atproto post that matches anything in that text bank, send it to human review using the policy "Just do things." 28:41 There are other rules in here too, where I created some synthetic posts matching the cans of spam. 28:48 And you can actually see here how many things were actioned by this rule, 28:51 and what percentage that was. 28:53 And it'll actually give you that example feed of what's been triggered by this rule. 28:59 You can put your policies in here. 29:01 This is a really nice way to just make sure everything is in one spot. 29:04 So for example, for sexually explicit, I put five subcategories. 29:07 You can keep branching this out and make a very, very comprehensive tree of nuanced policies for your team to use. 29:14 Let's go here. 29:16 So for routing rules, like I said, this is really looking at the incoming stuff and routing it to the right place. 29:22 For this one, I'm actually using something from our signal library. 29:24 I've got it hooked up to Zentropi with a sexual content labeler, and I set a threshold of higher than 0.2. 29:31 Nothing has triggered, thankfully. 29:32 Thank you all. 29:33 Thank you, everybody. 29:35 And— looks like someone added a rule. 29:38 Like I said, this is a public interactive demo, and someone has put in a rule for goose art.
29:43 So if there's any text that matches "goose art", it will send it to the Atmosphere posts queue. 29:50 So let's see what's in Atmosphere posts. 29:52 This is a picture from a dinner where I dragged 30 people to Richmond. 29:56 And this is a synthetic post that I've created. 29:59 You can see that wellness feature kicking in here. 30:02 Because this is a hash match, it actually tells you what hash bank it came from. 30:06 I've already created an issue saying it's not super clear that this task was originally created as a hash match. 30:11 So we're gonna make that UI a little bit clearer. 30:13 And just to demonstrate this, I'm gonna show you what it looks like to escalate something to NCMEC. 30:18 This is where the content safety warning comes in, because a lot of the labels that NCMEC uses can be a little bit graphic. 30:25 The UI for submitting something to NCMEC looks completely different, because it's aggregating everything Coop knows about a given user to create a more comprehensive report. 30:34 NCMEC's not gonna be able to do much if you just submit one report for one piece of content, especially if you're submitting 10 reports on the same user. 30:40 It's better aggregated all together. 30:42 So for the NCMEC reporting, these fields come from NCMEC's CyberTipline API. 30:47 These are the fields that their team uses to triage incoming reports. 30:53 The categories come from the Tech Coalition's industry classification. 30:59 It essentially describes the age range of the victim as well as the type of abuse that is happening to the victim. 31:05 It tells you what kind of image matched, where it came from, as well as more information. 31:09 You can categorize it and then send this report. 31:13 It tells you what this report will look like, and as long as you have your credentials for NCMEC put in there, that report will go through. 31:22 Okay, I know we're at time, or over time. Like I said, prebuilt integrations: Content Safety API, OpenAI, Zentropi. 31:29 I've got two labelers in here. 31:31 One of the things that we're trying to do with ROOST is, for everyone who's trying to integrate a model with Coop, we are pushing them to add model cards, because everyone deserves to know what is going on inside of that model. 31:40 How was this algorithm developed? 31:42 How was this model trained? 31:43 What was it trained on? 31:43 What are the biases? 31:45 So we've got three open issues. 31:46 We work directly with those model creators as part of the ROOST model community to get those model cards filled out. 31:52 So it's not in there yet, but it's coming very soon. 31:56 I know we're at time, so please take a chance to play around with our demo. 31:59 I'm going to keep it running all day. 32:01 And find me or Cassidy if you have any questions about ROOST or Coop. 32:04 Thank you.

Cassidy James Blaede 32:05 Woo! 32:06 That's a real win.

Juliet Shen 32:08 Very awesome.

Cassidy James Blaede 32:10 Very cool.