Alex Garcia-Joyner 0:02 So, hello everyone. 0:03 My name is Alex, and my talk today is going to be Building Collective Intelligence to Reduce Hostility. 0:12 Several weeks ago when I applied for this talk, the topic wasn't reducing hostility, but rather reducing division. 0:20 I've since pivoted on that idea a little bit, because I've realized that it isn't division that we're actually out to solve. 0:27 See, I grew up in the southeastern United States, and in recent years I've been inspired to create this project because I've seen how divided my friends and family have become on lots of different topics. 0:43 But the division isn't exactly the problem. 0:46 See, it's our differences in cultures and backgrounds and experiences that make the world really interesting and lead to a lot of innovation. 0:58 It's when the discussion around those differences leads to hostility, broken or damaged relationships, or, in the worst case, destruction of the world, that we start to have a lot of problems. 1:12 Today, a lot of the conversation around those differences takes place on social media. 1:18 But it's important to realize that social media was never designed for this type of discourse. 1:24 It was made to connect you with your friends, share updates, and build community. 1:30 Take this post, for example. 1:31 Someone says, "The moon is made of cheese." Now, all of the popular social media platforms share the same fundamental structure: your profile is meant to represent you and how you see the world. 1:51 So because this person posted this, there's the assumption that they actually believe the moon is made of cheese.
1:56 Now, because they've posted this, they've also opened themselves up to personal attack from other users who believe differently than they do, saying things like "You're wrong" or "You're crazy," or, in the worst case, leaving the platform altogether because they don't want to be affiliated with the cheese moon people. 2:16 Now, maybe someone else comes in and tries to remediate the conversation, saying, "You're a little bit off base here. 2:23 Here's a link to go and learn more, and maybe it'll help change your perspective." But there are problems with this too. 2:31 Where's this person going to be the next time somebody thinks the moon is cheese? 2:35 And for our most polarized topics, it takes lots of different sources to really get a consensus on the various issues. 2:46 And if we can't confidently discuss whether the moon is cheese or not, how are we possibly going to discuss more pressing issues in the world productively? 2:57 It's for all these reasons that we're building ViewSift. 3:03 ViewSift is a fundamentally new type of social environment. 3:07 It's an evidence-based social research platform, as opposed to traditional belief-based social media. 3:15 And we're building it with features in mind to help with discussing polarized content without so much hostility. 3:24 It starts when you create your first post on ViewSift. 3:28 See, on traditional social media, there's a text box, and at the top of the text box is a prompt saying something like, "What are you thinking? 3:37 What's up in your world?" or "What's your post?" Different variations to say this is open discourse for you to share your mind and opinions on the world. 3:48 But as we've learned, that's a problem, because now you've opened yourself up to attacks on your beliefs. 3:53 ViewSift works a little bit differently. 3:55 There's still the same text box when you create your post, but the prompt at the top is a little bit different. 4:01 You see, it says: finish this sentence.
4:04 I may or may not believe, and then you put in your claim. 4:09 So for example, some of the posts that I would feel comfortable creating are: the moon is made of cheese, aliens are real, ATProto isn't decentralized, or relays are expensive to run. 4:24 And I've put a little tag on the word relays to let us all know that we're talking about ATProto relays. 4:30 So I create the post, "The moon is made of cheese." And you'll notice on the ViewSift site there are a few differences from traditional social media. 4:41 At the top, there's a link that centralizes the discourse around this claim, "The moon is made of cheese." And there's one button where you can add support for this claim. 4:53 And it's important to realize that on this page, you only see supporting points for the claim above. 5:05 Now, I don't have any evidence supporting the moon being made of cheese. 5:09 And you might be thinking, obviously it isn't. 5:11 So we need to take that perspective into account. 5:14 That's where we've introduced the sifter. 5:18 When you click on the sifter, it takes you to the opposite perspective. 5:23 In this case, the moon is not made of cheese. 5:26 And on this page, you can see the supporting claims that have come in. 5:31 It's important to note that all the supporting points are claims themselves, meaning that you can click on them and jump to that claim, see all its supporting reasons, and then click on the sifter to explore other viewpoints. 5:48 This gives you the great ability to stay high level and really explore different ideas and perspectives about the world without having to dive into all the nitty-gritty, complex research papers. 6:03 But it's at this point that we're really building on the ATProto social research ecosystem. 6:10 And today, this is kind of where we're stopping with ViewSift. 6:17 I guess I'd say this is our checkpoint.
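[Editor's note: the claim pages and the sifter described above amount to a small graph data structure — claims as nodes, support links as edges, and a negation link between opposite perspectives. The sketch below is a minimal in-memory illustration; the names (`Claim`, `supportersOf`, `sift`) and record shapes are hypothetical, not ViewSift's actual API or an ATProto lexicon.]

```typescript
// Hypothetical sketch of the claim graph described in the talk.
// Every node is a claim; a claim's page lists only its supporting
// claims, and the "sifter" link jumps to the opposite perspective.

interface Claim {
  id: string;
  text: string;        // the statement itself, e.g. "The moon is made of cheese"
  supports: string[];  // ids of claims offered as supporting points
  sifterTo?: string;   // id of the opposite-perspective claim (the sifter link)
}

const graph = new Map<string, Claim>();

function addClaim(c: Claim): void {
  graph.set(c.id, c);
}

// A claim's page shows only its supporting points, each of which
// is itself a claim you can jump to.
function supportersOf(id: string): Claim[] {
  const c = graph.get(id);
  if (!c) return [];
  return c.supports
    .map((s) => graph.get(s))
    .filter((x): x is Claim => x !== undefined);
}

// The sifter takes you to the opposite perspective, if one exists.
function sift(id: string): Claim | undefined {
  const c = graph.get(id);
  return c?.sifterTo ? graph.get(c.sifterTo) : undefined;
}

addClaim({ id: "c1", text: "The moon is made of cheese", supports: [], sifterTo: "c2" });
addClaim({ id: "c2", text: "The moon is not made of cheese", supports: ["c3"], sifterTo: "c1" });
addClaim({ id: "c3", text: "Apollo samples are basaltic rock", supports: [] });

console.log(sift("c1")?.text);           // "The moon is not made of cheese"
console.log(supportersOf("c2")[0].text); // "Apollo samples are basaltic rock"
```

Browsing stays high level: each supporting point is a full claim node, so a reader can hop from claim to supporters to the sifted opposite without ever leaving the graph.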
6:20 You can sift through different views and see the reasons that people support those views, to help you break out of echo chambers and thought silos and explore ideas. 6:31 But pretty soon, we'll be integrating with other social research apps in the ATProto ecosystem. 6:39 Apps like Margin and Symbol, to help you identify the various quotes from sources online that support ideas and link them together. 6:50 And for the links themselves, you can share them with your Symbol social graph and add them to collections to share with other researchers. 7:01 Then in the future, we can build on all of these pieces to really build community-backed ways to know when to trust information. 7:11 And for all of the great ideas that rise from this noise that we're in, we can build features to help prioritize them. 7:21 If all of this sounds exciting, we're going to have a table upstairs in the lounge a little bit later. 7:26 We'll have some demos you can try out, and you can ask us questions there. 7:33 Thank you so much to AppScience for having me to talk with you today, and I appreciate you all for listening. 7:46 Thanks, Alex. 7:47 Any questions? 7:50 Yeah, thanks.
Speaker B 7:53 It sounds like a really interesting platform, and I think there are a lot of cases where this is a useful thing to have. 7:58 I wanted to ask what plans you have for moderation, because as a platform, I feel like you also have to make decisions on what is an acceptable viewpoint, because there are people who push crackpot science, along the lines of, say, eugenics, or try to push racist or transphobic things. 8:15 And I was wondering if you've thought about moderation and how you might decide, because I feel like you have to decide what's an acceptable thing to discuss.
Alex Garcia-Joyner 8:24 First, it's important to realize that with all of these technologies, we're in the very early stages.
8:29 So it's important that we introduce the community to each of these steps, to make sure that everything makes sense before we build in other features. 8:40 Moderation is absolutely very important, and by leveraging this ATProto type of world, we're going to focus on making this claim section of the ATProto ecosystem as robust as possible, because there are a lot of nuanced details that we need to think through for moderation. 9:02 But then we can leverage other tools to really build this community-based moderation atmosphere.
Speaker C 9:12 Yeah, wonderful idea and great presentation. 9:15 I'm curious what you think, because one of the big effects of polarization, right, is a resistance to new information. 9:21 And even with new information, you'll have different interpretations of the same piece of evidence. 9:27 I'm curious how you all are thinking about attracting users, and how you imagine folks who are already polarized are going to choose to engage with ideas with which they differ.
Alex Garcia-Joyner 9:37 So one of the things that really hinders people from being able to open up freely and share their ideas is when they're in an atmosphere where they know that the second somebody disagrees with them, there's going to be this escalation, and they're going to start attacking each other over their beliefs, all the things we talked about. 10:00 So the first step, we think, is just creating that space where you don't have to share only the things that you believe in, but generally the things that you've heard are true about the world. 10:10 And then we can have open discourse on that, and we're absolutely working with the community to develop ways to improve that type of atmosphere.
10:21 But then it kind of builds this foundation where we can start having more in-person collaboration experiences with people, because they know it's a safe space to share and brainstorm, to move away from this hostility environment toward more of a general curiosity environment. 10:44 Yeah, one more question.
Speaker D 10:50 Um, this is exciting. 10:51 I'm curious about the formulation of putting in a claim rather than a question. 10:57 And I wonder also what it might be like to have the ability to ask questions that go beyond true or false, or yes and no.
Alex Garcia-Joyner 11:10 Yeah. 11:10 So this knowledge graph, or discourse graph, as you all are calling it, has multiple layers to it. 11:22 And it's important to realize that at each one of those layers, there are a lot of things to consider, like duplicate information, moderation, and helping the signal rise from the noise. 11:35 And so if you think about it, most things you could say, like, "Should we X?" 11:44 You could say that as a question, but I could also just say, "We should X," and put it out as a statement. 11:53 So what we're doing with this claim section is focusing on the statements and getting those right. 12:02 And then once we know that the community is good with that part of the social graph, we can start building questions into the mix and start to have those questions linked to all the other claims. 12:15 It's really multiple levels that we're building out, not that we're trying to exclude questions from the mix entirely. 12:22 Thank you very much. 12:23 Thanks, Alex.