Dr. KaLyn Coghill 93:26 also very important to the ethos of the work that we do at Black Sky. 93:30 And I love this quote because as someone who studies race and the internet, a lot of times they will try to make you choose what issue is the most important issue when it comes to trust and safety. 93:39 But there is no single issue when we're dealing with marginalized identities that overlap. 93:45 So like I said, for the past 12+ years, I've been doing research about this. 93:50 I've talked to Black women and non-binary people across the Black diaspora. 93:55 And this is directly from the research that I did with interviewing them. 93:59 And when I was doing this work, I was always dreaming of being able to work with a company or an organization that was actually trying to mitigate harm against Black women and non-binary, gender-expansive people specifically, but also anti-Blackness. 94:14 And this was my driving force, but I didn't know if anything like that existed. 94:20 So when I decided to make my Bluesky account, one of my really good friends told me about Black Sky. 94:25 And I was like, hmm, what's that? 94:27 So I slid into Rudy's DMs and I said, hey, we need to meet. 94:30 And from then, I became a volunteer moderator. 94:32 And one of the things that I really wanted to focus on in my work as a moderator, because I came from a research background, was taking the research that I had done, the reading that I had done, and using that to help implement trust and safety for our moderation tool, which is what I'll be talking about today. 94:50 So when I first started, there was a very, very, very small team of moderators, Rudy being one of them. 94:57 And this is what the original page looked like. 95:00 We had two labels. 95:01 It was misogynoir and anti-Black harassment (see the sketch below). 95:04 And this is our beautiful team. 95:05 That has grown over the past year or so. 95:07 And I'm saying here that the beginning matters because there was a need for this type of moderation tool in tech in general. 95:16 But being in a decentralized space allowed us to actually create said tool and have control over how we moderate these spaces to make people safe. 95:27 So who puts the trust in trust and safety, right? 95:29 One of the big things that I talk a lot about in my work, and that really drives the work that we do, is that if you don't have community trust, then you don't have a very good trust and safety situation going on. 95:41 And that's just the reality. 95:43 Because when you think about trust and safety, the trust is coming from the community members: trust that you are going to be able to actually protect them or mitigate the harm that will come up, and that you're going to actually listen to what their needs are and not try to create a top-down environment of what you think trust and safety looks like. 96:00 So this is how we get down at Black Sky. 96:02 We have volunteer moderators who are community members from the Black diaspora internationally. 96:07 Shout out to JD. 96:09 We have an open communication and feedback loop where we have a separate Signal just for the moderators. 96:16 We have a Black Sky trust and safety Signal, and then we have, of course, our all-teams one. 96:21 And the style is that if any issues or confusion come up, or there is a difficult case or report that comes through, we have full-blown discussions before we decide how to moderate it. 96:32 And for some people, they may not think that is sustainable, but it's worked. 96:37 And I have the data to prove that later.
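For readers who want to picture the mechanics: on the AT Protocol, a labeler declares the label values it can apply in an app.bsky.labeler.service record. Below is a minimal sketch, in Python, of what a declaration with those two original labels could look like. The identifiers, descriptions, and display settings are illustrative assumptions, not Black Sky's actual record.

```python
import json

# Sketch of an AT Protocol labeler declaration (an app.bsky.labeler.service
# record) with two label values, mirroring the two original Black Sky labels.
# Identifiers, wording, and settings are illustrative assumptions.
labeler_record = {
    "$type": "app.bsky.labeler.service",
    "createdAt": "2024-01-01T00:00:00Z",
    "policies": {
        "labelValues": ["misogynoir", "anti-black-harassment"],
        "labelValueDefinitions": [
            {
                "identifier": "misogynoir",
                "severity": "alert",       # show a prominent warning
                "blurs": "content",        # hide the post body behind the label
                "defaultSetting": "hide",  # subscribers hide it by default
                "adultOnly": False,
                "locales": [{
                    "lang": "en",
                    "name": "Misogynoir",
                    "description": "Anti-Black misogyny targeting Black women "
                                   "and gender-expansive people.",
                }],
            },
            {
                "identifier": "anti-black-harassment",
                "severity": "alert",
                "blurs": "content",
                "defaultSetting": "hide",
                "adultOnly": False,
                "locales": [{
                    "lang": "en",
                    "name": "Anti-Black Harassment",
                    "description": "Targeted harassment of Black users.",
                }],
            },
        ],
    },
}

print(json.dumps(labeler_record, indent=2))
```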
96:39 And we also have accessible documentation, which was really important to me as an educator: making sure that whatever we're doing internally, you can go on our doc site and read exactly how we're doing it. 96:51 And our trust and safety lead, JD, did a really good job of creating our internal SOP, which allowed community members to understand how moderation works as people who may not have a background in trust and safety. 97:04 Because I'm a firm believer that everyone is an expert on their Black experience, and your credentials honestly don't mean anything. 97:10 So I have a PhD, but I also have experience with digital misogynoir. 97:14 I've also been on the internet since dial-up. 97:16 So my personal experience really shapes how I view safety online, and it allows you to not let your credentials overrule the actual realities of what's happening on the internet, right? 97:28 Because the reality is your community doesn't care that you worked in trust and safety for 15 years if you all don't have anything that's actually moderating the harm that they're dealing with on a day-to-day basis, right? 97:38 It's a moot point. 97:43 So, you know, Black Sky is all about self-determination and self-governance, and we really stick by that. 97:52 And I know that in a lot of movement spaces, there are organizations that use these types of buzzwords, but there still is, like I said earlier, a top-down type of effect. 98:02 But that's not how it works for us. 98:04 So remember, in the beginning there were two labels. 98:07 So when I started moderating, I was really starting to think about how there were specific cases that didn't fall under those two labels. 98:15 And I'll give an example. 98:16 There was a report that was made about a particular post, and it was actually internalized anti-Black rhetoric, right? 98:23 But the only label we had at the time was anti-Black harassment. 98:26 That person was not harassing people. 98:27 They were making a statement, but it was anti-Black. 98:30 So when I saw that, I was like, ooh, we actually need more labels. 98:33 So because we use Black Sky's People's Assembly, which is through Polis, where people can submit features or even labels or things that they need, I decided to come up with some labels. 98:45 And I deployed them in the People's Assembly, and people voted on them. 98:49 So the community got to decide if these labels were actually important. 98:52 And it was a resounding yes. 98:55 And we have decided internally that we are going to constantly be updating our labels, because when it comes to trust and safety, you have to have your finger on the pulse, right? 99:06 You cannot just make a list of trust and safety labels, or a labeler that only does this one specific thing, and not think about the cultural context, right? 99:15 The times that we're living in, what may come up in the news, what new harms may come up that we didn't even know were a thing, right? 99:23 And I'll go over more of the labels that we ended up deploying later. 99:27 But having Black Sky's People's Assembly allows us to stay rooted in community, right, what I had on my earlier slide, which is the whole purpose of trying to steward for a group of people. 99:41 I know that typically in big tech, it can feel very suffocating, not really understanding the transparency of trust and safety, but I do believe that on the protocol, we have a responsibility to provide the transparency that people may not get on other applications.
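That transparency also has a concrete, protocol-level counterpart: any label a labeler emits is publicly queryable over XRPC through com.atproto.label.queryLabels. Here is a minimal sketch of such a query; the labeler hostname and subject pattern are hypothetical placeholders, not Black Sky's actual endpoint.

```python
import requests

# Sketch: querying the labels a labeler has emitted, via its public
# com.atproto.label.queryLabels XRPC endpoint. Hostname and URI pattern
# below are placeholders; substitute real values.
LABELER_HOST = "https://labeler.example.com"  # hypothetical labeler service

resp = requests.get(
    f"{LABELER_HOST}/xrpc/com.atproto.label.queryLabels",
    params={
        # Match all posts in one (placeholder) repository.
        "uriPatterns": "at://did:plc:exampleuser/app.bsky.feed.post/*",
        "limit": 50,
    },
    timeout=10,
)
resp.raise_for_status()

for label in resp.json().get("labels", []):
    # val = label value (e.g. "misogynoir"), uri = labeled subject,
    # cts = creation timestamp.
    print(label["val"], label["uri"], label.get("cts"))
```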
100:01 And here's where the data points come in. 100:04 This is the researcher in me. 100:05 This is my favorite part. 100:06 Okay, so, when I first became a volunteer moderator, I think there were maybe like two or three of us, one of them being Rudy. 100:14 And we had a very small team, we still have a very small team, and we were able to moderate over 49,000 reports, right? 100:25 Human-to-human moderation, right? 100:27 In the trenches. 100:28 And of those reports, by October 2025, we were able to label 583 of them as misogynoir. 100:35 And like I stated earlier, there is not a moderation tool on any app that does this. 100:41 I want to make that clear every single time. 100:45 And the reason why I keep saying this is because, as someone who has literally researched this stuff for so long, experienced it, witnessed it, it almost felt as if it was impossible for it to happen. 100:58 And this environment made it possible, which gives me hope that other moderation tools are possible as well for particular groups. 101:07 So the labels that I had submitted to the People's Assembly, which the community voted on (and all votes are anonymous): anti-Black harassment was already one, as was misogynoir, but we added violence, because there were people getting threats. 101:21 And that didn't fall under harassment or misogynoir. 101:24 Doxxing, that's an automatic no-no. 101:26 That was not a label that we had. 101:29 We moderated for it, but we also wanted to be able to label it as well. 101:33 Non-consensual intimate images, which is important to me because a lot of the work that I had been doing before was focused on deepfake porn and how people were using AI to fabricate non-consensual sexual imagery of people on the internet. 101:48 Synthetic media was a big one, which is why I talk a lot about changing your labels and adding new labels: with the uptick of synthetic media, there needs to be moderation for that. 101:58 That is directly impacting the Black community. 102:00 So to be transparent, we've seen things like beheadings, people making videos of Michelle Obama being, you know, harmed, putting Black women's faces on George Floyd, right? 102:14 All of these things are coming through the reports. 102:16 And if you're subscribed to our labeler or use blacksky.community, you might not even see those things, because when we get the reports, we're going through them, right? 102:25 And then white supremacy, or anti-Black rhetoric, was a very interesting conversation amongst the moderators before we decided to push these out for votes, because we wanted to make sure that if someone's post got labeled for that, we knew how to explain why if they wanted to appeal it, right? 102:43 And when it comes to anti-Black rhetoric, when it comes to misogynoir, these things can be internalized as well. 102:50 And the purpose of the labeler and the moderation tool is not to say that one particular race of people is harming Black people. 102:59 It is that everyone can enact this type of harm, and we have to moderate toward all of it. 103:06 So down here, if you know about the labeler or you use blacksky.community, which you should be, if someone has a label on their post, it'll say what the label is for, right? 103:19 And if they want to appeal it, they can. 103:21 They can click on the label, they can send an email and appeal it. 103:24 We have a whole entire appeals process, which is also a conversation amongst the team as well.
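For the technically curious: labelers built on Ozone apply a label by emitting a moderation event. The sketch below shows roughly what applying a misogynoir label with an explanatory comment could look like against the tools.ozone.moderation.emitEvent endpoint. The host, token, DIDs, URI, and CID are placeholders, and real deployments layer authentication differently, so treat this as an outline rather than a drop-in call.

```python
import requests

# Sketch of applying a label through an Ozone-style moderation service
# (tools.ozone.moderation.emitEvent), attaching a moderator comment so
# later reviewers and any appeal can see the reasoning. All hosts, tokens,
# DIDs, URIs, and CIDs below are placeholders.
OZONE_HOST = "https://ozone.example.com"  # hypothetical moderation service
TOKEN = "replace-with-service-auth"       # auth elided; varies by deployment

resp = requests.post(
    f"{OZONE_HOST}/xrpc/tools.ozone.moderation.emitEvent",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "event": {
            "$type": "tools.ozone.moderation.defs#modEventLabel",
            "createLabelVals": ["misogynoir"],
            "negateLabelVals": [],
            "comment": "Gendered anti-Black tropes aimed at a Black woman; "
                       "first report on this account, no account-level action.",
        },
        "subject": {
            "$type": "com.atproto.repo.strongRef",
            "uri": "at://did:plc:exampleauthor/app.bsky.feed.post/3kexample",
            "cid": "bafyreiexampleexampleexample",  # placeholder CID
        },
        "createdBy": "did:plc:examplemoderator",    # acting moderator's DID
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])  # the new moderation event's ID
```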
103:30 There are some things that are a no-go. 103:32 If you're doxxing or posting non-consensual intimate images, like, you're out of here. 103:36 Because that's just wrong on all levels. 103:39 And if we were in the UK, you'd have 48 hours to take it down by law. 103:43 We don't have that here right now. 103:44 So we have to set some boundaries. 103:47 And we also have the option to label accounts that have repeated behavior. 103:53 We have the option to greenlist them, which is a subscription list of all the people who have been greenlisted, which means subscribers can automatically block those accounts (see the sketch below). 104:01 You won't see their content. 104:03 And then we also have "banned from TV" as well, which keeps them from interacting with the Black Sky feed. 104:09 And this is important because we want to make sure that if someone is repeatedly causing harm, we are setting up guardrails so they don't continue to cause harm. 104:19 It also requires a lot of cultural nuance, being tapped in, long conversations, doing your Googles, no Gemini, right? 104:28 It takes a lot of that, because there are instances where there may be a celebrity, I won't say any names, who has a history of violence towards Black women, who may have people rally behind them, but they have a history of harassing Black women on different platforms. 104:43 They've been banned from those platforms. 104:45 So we must do the same for ours to be proactive, which is something that I'm gonna also talk about as well. 104:53 Before I do that, I do wanna shout out Acorn. 104:55 So our wonderful engineer Rishi made Acorn. 105:01 And Acorn came out of a lot of the frustrations we were having with Ozone, to be transparent, when it came to accessibility, as far as being able to moderate on a tablet or a mobile phone, just having certain things that would make the process easier. 105:18 And although I am not an expert on the backend of content moderation, that's what JD is really good at, we were able to actually collaborate on what that would look like. 105:28 And they were able to work in tandem to create a platform for Black Sky to moderate with. 105:35 Another thing is that one of the things that I implemented when I first started moderating was ways to take care of yourself. 105:42 So I made a little doc with somatic stuff, like healing and sounds and websites and resources, that is pinned in the Signal. 105:53 And we want to continue to build out those types of wellness things within our setup, because content moderation can cause a lot of decision fatigue. 106:02 It can cause a lot of trauma. 106:03 You see a lot of things that you may not see on a day-to-day basis. 106:07 And because we have community members doing the moderation and not, like, professional trust and safety people, we want to make sure that they feel safe as well, especially because commercial content moderation is treated like slave labor in other countries like the Philippines, and we don't go for that. 106:27 So Acorn was built for Black Sky, for us to be able to create a more accessible space for moderation to happen. 106:40 And they're continuously building out new features as we see, like, okay, well, maybe we should do this. 106:45 Well, maybe we should add this. 106:47 And making things time-bound as well. 106:49 So people are forced to take breaks, because we had a really big backlog.
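The "greenlist" described above maps naturally onto Bluesky's subscribable moderation lists: an app.bsky.graph.list record with the modlist purpose, whose members subscribers can mute or block wholesale. A minimal sketch follows, assuming placeholder credentials, handle, and DIDs rather than Black Sky's real accounts.

```python
from datetime import datetime, timezone

import requests

# Sketch of a "greenlist"-style subscribable moderation list: an
# app.bsky.graph.list record with the modlist purpose, plus one listitem
# for a repeat offender. Subscribers can block the whole list at once.
PDS = "https://bsky.social"  # default PDS entryway; self-hosted PDSes differ

session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={"identifier": "moderator.example.com", "password": "app-password"},
    timeout=10,
)
session.raise_for_status()
session = session.json()
auth = {"Authorization": f"Bearer {session['accessJwt']}"}
now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")

# 1. Create the moderation list itself.
created = requests.post(
    f"{PDS}/xrpc/com.atproto.repo.createRecord",
    headers=auth,
    json={
        "repo": session["did"],
        "collection": "app.bsky.graph.list",
        "record": {
            "$type": "app.bsky.graph.list",
            "purpose": "app.bsky.graph.defs#modlist",
            "name": "Greenlist (illustrative)",
            "description": "Accounts with repeated harmful behavior.",
            "createdAt": now,
        },
    },
    timeout=10,
)
created.raise_for_status()

# 2. Add a repeat offender to the list.
requests.post(
    f"{PDS}/xrpc/com.atproto.repo.createRecord",
    headers=auth,
    json={
        "repo": session["did"],
        "collection": "app.bsky.graph.listitem",
        "record": {
            "$type": "app.bsky.graph.listitem",
            "subject": "did:plc:exampleoffender",  # placeholder DID
            "list": created.json()["uri"],         # AT-URI of the list above
            "createdAt": now,
        },
    },
    timeout=10,
).raise_for_status()
```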
106:56 JD and I were really moderating around the clock, and Rudy was jumping in, and we kind of all realized, okay, we don't want the other moderators to ever have to do that, because that can be very traumatic in the long run, and we want them to be able to sustain doing the work to keep their community safe. 107:15 And here on my left is what it looks like when you go to label. 107:22 So on our side panel, which you all don't see, it has all of the labels and their definitions, so you automatically know what the label is actually about. 107:29 So you don't have to keep going back and forth to the SOP. 107:32 And then we really encourage all moderators to leave some type of comment. 107:35 I just put "misogynoir" so you can see where it goes, because if another moderator comes in behind it, they can kind of know why this person was moderated this way. 107:43 Is this person a repeat offender? 107:45 What do we want to do next? 107:46 And this also helps with appeals, right? 107:48 Because anybody can appeal a label. 107:51 And when we have that information in Acorn, it allows us to make a decision, with the investigation, on if it's something that we will, you know, lift, or if there's a conversation that needs to be had, which I think is really why we have gained a lot of trust within the community: how we handle these types of situations. 108:10 So this is where things go wrong in trust and safety, and I know people have in their bios, like, "this is not a representation of my organization," but it is. 108:20 This is truly how we feel, right? 108:22 There is a lack of community trust, right? 108:24 People do not trust a lot of people in trust and safety because there is an ignorance of cultural nuances, right? 108:31 There's an unwillingness to acknowledge mistakes. 108:35 A lot of people are reactive instead of proactive, and the words and actions don't match, and that's a danger zone, right? 108:42 That is a recipe for disaster, and we've all seen disasters everywhere online. 108:47 And the biggest thing is not understanding the community you serve, right? 108:52 It is very difficult for your community to trust that you care about what is important to them if they don't even think that you actually know what's going on, right? 109:01 That you're so disconnected that it's like, oh, I'm just logging in to do some trust and safety, but I don't even pay attention to what's happening in those communities. And that's why I really enjoy my role: not only do I get to assist with the content moderation and trust and safety, but I also get to pay attention to what's actually happening within the community, and not just within Black Sky, but the protocol at large. 109:22 And that helps me signal to JD and the other moderators, like, hey, this topic has come up. 109:27 We may get a lot of reports for this. 109:29 That's being proactive. 109:33 So I'm not here to bash people over the head. 109:38 I'm also here to provide repair and to help people rebuild, because I do really believe in trust and safety. 109:43 And I do think that, you know, there is a level of grace, but there is also a level of wanting to do better and to use your self-efficacy, believing that you can do better, that pushes me to run a strict program. 110:00 So this is where the strict program begins for all the trust and safety people.
110:05 Ongoing training, right, and you need to have your trainings be specific to the things that are coming up within your communities, right? 110:15 And the training is really good, not only for your internal team to understand, but also, like, okay, this keeps coming up when people are reporting something and the mods are getting stuck on it. 110:26 So we need to create a training on this particular label, because they may not understand what misogynoir is, right? 110:32 And that was a big thing. 110:32 People don't really know what misogynoir is, and they think there's, like, misogynoir lite and misogynoir 2.0, but it all falls under misogynoir, and that's a part of the training. 110:42 Community feedback. 110:44 If the community is constantly telling you, why are you constantly suspending these specific accounts? 110:49 Why is this happening? 110:50 What's going on? 110:51 You all need to be transparent about that. 110:53 And there is a strategic way to do that through your communications channels and trust and safety reports. 110:59 Staying mission-aligned. 111:01 Words plus actions that don't match equals a danger zone. 111:05 If you say you care about a community, you must stay mission-aligned. 111:09 And that also means that you may have to do some research. 111:13 You might have to actually listen to those being most harmed, and you may have to start having hard conversations about how your trust and safety is set up, what those policies actually mean, and whether you need to go back to the drawing board and come to terms with the fact that you may not be equipped to do these policies on your own. 111:30 You may have to hire people who are a part of those communities, right? 111:34 Have them as consultants to help you build out some of the policies for the groups that you are working with. 111:42 So, in the end, I just want to say that I know that trust and safety is really ominous for a lot of people because it's very vague. 111:51 It's been vague from a user standpoint, from a community standpoint; it's vague for researchers like myself. But I do believe that from the work that I've been able to do with Black Sky, from how I see the reactions to how we're moderating, from the way we're able to continue to build and grow and create a community ecosystem where people are asking, like, well, how are y'all moderating things? 112:15 It lets me know that people are interested in being better at trust and safety. 112:19 But like I said, it also takes self-efficacy, believing that you can actually do it, and holding yourself accountable 112:25 when you're tempted to take the easy way out or push the blame on someone else. 112:30 Thank you. Speaker B 112:31 Thank you very much. 112:49 That was wonderful. 112:51 We have some time for questions. 112:52 Yes, I see one. 112:53 How would you like people to refer to you? 112:54 Dr. K? 112:56 Dr. Coghill? Dr. KaLyn Coghill 112:56 Dr. K is fine. Speaker B 112:57 Wonderful. 112:57 I'm bringing the mic to you. Speaker C 113:02 Hi, Dr. K. 113:03 Hi, Ryan. 113:04 I just wanted to ask about something you talked about that I thought was really important, which was understanding the communities that you serve. 113:12 And one of the things, you know, we've heard these sayings, like "doing trust and safety right is impossible."
113:19 And what I think sometimes, what I wonder, what I want to ask is: how do we help people understand what the community or communities they serve are, so that they're delivering trust and safety? 113:39 Like, for example, when you think about Bluesky, there's not one community. 113:43 There are so many communities, and that's why you can't moderate them the same way. 113:47 And so, is it that big social networks that aren't aligned around an ethos or a mission are doomed to have this problem? 113:59 Or is there a way to do that when you have these intersecting groups with different experiences and different backgrounds in different places? 114:06 How do you help T&S deal in a space like that? Dr. KaLyn Coghill 114:10 Yeah, so I want to go back to Dr. Safiya Noble's book Algorithms of Oppression. 114:15 I believe in the last chapter she talks about how they need to bring the humanities back into computer science. 114:21 They need to hire Black women into these trust and safety spaces and into building the algorithms, and that's where it begins, right? 114:28 A lot of trust and safety teams are overwhelmingly white, and that's the problem. 114:32 And because of that, it makes it difficult to know what the intersecting identities and communities are, because you may have some unchecked biases that you may not even realize when you're creating the labels. 114:45 And you have control over how many labels you have and how often you change them. 114:49 So if you realize that you're getting complaints, or you realize that there's been an uptick of a particular type of report, it would make sense to laser-focus in on that and add that as a label. 115:01 I also think that, yes, social networks are big. 115:04 There are all types of people, but I also see that as an excuse, 115:07 right? Because if your goal is to keep the internet safe, well, there will never be a safe internet, but there is a way to mitigate harm. 115:12 And if that is the goal of what you're trying to do, it is possible. 115:16 It's just going to take the work, and it's also going to require you to be tapped in more than you may be now. Speaker C 115:24 Being honest about moderating certain communities differently because they have different experiences. Dr. KaLyn Coghill 115:29 Yeah, and that's why it's important for you to have different types of community members and trust and safety people on your teams, so you can have people with those expert experiences, not an expert resume. Speaker C 115:52 So something that I noticed listening to this talk is that it feels like there's room being made for restorative justice in communities as part of moderation. 116:05 And are there any conversations that you can share about what that might look like at Black Sky? Dr. KaLyn Coghill 116:12 Yeah, so we haven't talked deeply about restorative justice in particular, but that is a framework that is grounded in the work that I do. 116:21 And one of the ways that I think we implement restorative justice is through our self-governance. 116:26 And also through our appeal process, where we will have the conversation about exactly what was said, what was done. 116:37 And I know that that can be difficult depending on the scale of the organization or the company you're with, but that one-to-one is really helpful to explain to people, you know, why certain labels were applied.
116:49 I also think that because we have such a strong, trusting community, like, they trust in us to mitigate the harm, we lean into the restorative justice model naturally when it comes to how we deal with harm in general. 117:04 It's not very punitive, right? 117:07 You don't automatically get labeled. 117:08 You don't automatically get banned. 117:10 It is a longer process, and there's always room for you to appeal or for you to want to have a conversation about trying to change that behavior. 117:19 Because the goal is not to exclude people from community if they actually want to come together and do the work to be better community members. Speaker B 117:39 When trust and safety work goes well, it's often invisible, right? 117:42 If there's not bad content, it's not out there. 117:45 And kind of the value to the community maybe isn't felt, or what kinds of work is being done doesn't get seen. 117:50 So transparency reports can be a way to do that, but they can be very kind of like pie graphs, some numbers, a little abstract. 117:58 Do you have any thoughts on how to communicate the work that's being done to the community? Like, you know, involving people in the process can be part of that, giving examples can be part of that. Dr. KaLyn Coghill 118:07 Yeah, so one of the things that I think we do well is we do a lot of blog posts and we do a lot of educational stuff. 118:16 That's one of the big parts of my work in particular: providing educational materials exactly about why we moderate a certain way, how many reports we're getting, and really talking about, which is important for a lot of the community members, when you're going to report something, what is the best way to explain what you're reporting, right? 118:42 What are the steps to take to make sure that the proper label is being put on? 118:47 And then also, we do not make large decisions around moderation without going to the community first. 118:52 And if you've been paying attention to the work we've been doing recently with Black Sky-only posts, like, we're not gonna move the needle if the community is like, please give us a moment, we need to think this through. 119:03 And I think that that type of transparency makes it more accessible as well. 119:08 'Cause I definitely agree. 119:09 The pie charts and things can be very confusing. 119:11 So having other mediums for that knowledge base is helpful for all different types of community members. Speaker B 119:20 I had a question. 119:22 You talked earlier about making intentional choices in how you moderate and how context is important. 119:29 How do you balance the sense of urgency of moderation versus collective choice, or maybe collective choice isn't the best word, but seeking the input of other peers in this trust and safety space to make a choice together about something? 119:45 Because I just wonder about how difficult it is, how important it is, to be immediately reactive versus more intentional. Dr. KaLyn Coghill 119:55 Yeah, so because we have very detailed labels and a detailed SOP, when we do get edge cases, they're very rare. 120:05 So the way that we do it is, we go to our chat, and somebody will drop it in and ask for feedback, and people will chime in on the feedback, and then we'll move with the decision. 120:19 If something urgent comes in, nine times out of ten it is something that's already attached to a label, right?
120:26 Edge cases are things like when we had the anti-Black rhetoric situation, and we were like, is that harassment or is that rhetoric? 120:32 That is what those conversations look like. 120:35 But if someone is doing something that is violent, there's not really a conversation that needs to be had. 120:40 It's very transparent and clear. Speaker C 120:46 As a person who has these particular intersecting identities, this is the sort of work that I've thought about a lot. 120:57 How do you deal with things like "anti-fatness is anti-Blackness"? 121:01 Like, where is that sitting today in the moderation that you're doing? Dr. KaLyn Coghill 121:03 Yeah, so we are constantly working on new labels, and that is one of the labels. 121:07 But right now, just like when we were using just misogynoir and anti-Black harassment: if you're being fatphobic to a Black person, that's anti-Black rhetoric. 121:16 Yeah, yeah. 121:18 But we are going to be adding more specific labels like texturism, colorism, Islamophobia, all those particular things. Speaker C 121:27 Hi, Dr. K. 121:29 Just trying to empathize with, or trying to put myself in the place of, your team. 121:35 How are you distributing the workload of moderating in a way that's equitable? 121:41 And you mentioned some things are time-bound. 121:45 But yeah, like, yeah. Dr. KaLyn Coghill 121:47 So, ooh, sorry, I got to set the thing. 121:49 Our volunteer moderators are not required to moderate, which means that a lot of the maybe excess moderation does fall on, like, JD and I, mainly JD at this point. 122:05 But we are bringing on new volunteer moderators, and we are looking into possibly building out peer-to-peer moderation so we are able to distribute it more evenly. But because it's volunteer moderation, we do not require anyone to moderate. 122:21 Like, they can jump in and out whenever they want because that's what, like, say,