Hypha Coop 0:00 There's no connection between the Bluesky server and the Matrix server. 0:04 They're just two different servers. 0:07 They come together in this React social app. 0:11 So what you see is that they don't know about each other. 0:16 Yeah. 0:18 So that really helps with being able to use all the other Matrix stuff, because there's no change. 0:23 They can log in with other clients. 0:24 Yes, go ahead. Cole Anthony Capilongo 0:26 [Speaker] Yeah, yeah, yeah. Hypha Coop 0:31 But they have only one credential. 0:34 We generate the Matrix account for them in the background. 0:37 You know, managing the credentials: in Matrix there's more than just a password; there's even a recovery key. 0:44 We do all of that in the background, so they don't know anything about it. 0:46 They just remember username and password, that's it. 0:49 As they do with Bluesky. 0:55 [INAUDIBLE] So the way we do it, the secure way, is what I showed over there. 1:00 Show the QR code, and that logs you into your mobile app. 1:05 And then after that, if you want to go into the desktop app, you're already logged in. 1:11 So if you open another browser tab, you'll actually go directly into the app, but you still need to verify. 1:17 And you do the verification against the phone. 1:20 So yeah, the idea is we don't show them the key, but we allow them to do what's needed. Udit Vira 1:25 Yeah. Cole Anthony Capilongo 1:30 Thank you very much. Speaker D 1:30 Okay. Hypha Coop 1:31 Thank you. Udit Vira 2:26 Okay. 2:26 So, hi everybody. 2:27 Welcome to our talk on transparent feeds. 2:30 I'm Udit and this is Cole. 2:33 We're from Hypha Worker Co-op. 2:35 And Starling Lab. 2:37 Hypha is a tech worker co-op. 2:40 We do a bunch of ad prodding things. 2:42 And Starling Lab is a lab that focuses on questions around information integrity, based out of Stanford and USC.
2:53 I probably don't have to convince a lot of you that misinformation and disinformation are growing concerns. 3:01 For example, some of you may have seen the image on the right on social media. 3:07 It's an image claiming to be about the war in the Middle East. 3:11 It's actually false. 3:13 The real image is the one on the left, which is a satellite image from a few years ago. 3:18 And when both of these images are shown together with the context, you know that the one on the right is false. 3:25 But often we don't have all the context, and it's hard to tell which images are real and which images are fake. 3:32 And we think this is a growing concern, not only because fakes promote false narratives, but also because fakes make it easier for bad actors to deny reality. 3:45 This is called the liar's dividend, and we think one of the things that's going to happen is that increasingly we're going to see an erosion of accountability and a sort of assault on democracy. 3:57 So what can we do? 4:00 And specifically, what can we do in the Bluesky and ATmosphere context? 4:06 So one of the things we've been working on is the Nectar API, a similar-image search API. 4:14 So you can find similar images on the ATmosphere. 4:19 One of the things we do is run a bunch of image fingerprinting in the background. 4:24 So we fingerprint all images coming through the firehose. 4:29 And on top of that, to demo exactly how you can use this in a misinformation and disinformation context, we've built a browser extension called Pollen, which makes it easier to attach provenance claims and add context to images. 4:48 And I'm going to hand it over to Cole to demo what this looks like in practice. Cole Anthony Capilongo 4:55 All right, so here's just a quick video demo of Pollen. 4:59 Here we're on bsky.app, the regular website, and we have our Pollen browser extension loaded into the browser.
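The talk doesn't say which fingerprinting algorithm Nectar runs over the firehose, so as an illustration only, here is a minimal difference-hash (dHash) sketch in Python, one common perceptual-hashing technique. The `dhash` function and its pixel-grid input are assumptions for this example; a real pipeline would first decode each image and resize it to a small grayscale thumbnail.

```python
def dhash(pixels, width, height):
    """Difference hash: one bit per pair of horizontally adjacent pixels.

    `pixels` is a row-major list of grayscale values for a small thumbnail
    (typically 9x8, giving 8 bits per row x 8 rows = a 64-bit fingerprint).
    Similar-looking images produce fingerprints that differ in only a few bits.
    """
    bits = 0
    for row in range(height):
        for col in range(width - 1):
            left = pixels[row * width + col]
            right = pixels[row * width + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits
```

Because the bits come from downscaled pixel values rather than raw bytes, recompressing or resizing an image leaves the fingerprint almost unchanged, which is what makes matching reposted copies possible.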
5:05 And we're looking at an example news account that we created for the demo, called News123. 5:09 And in this demo, they post headlines, stories, photographs. 5:15 So we're about to see them make a post, which we'll see in one moment. 5:22 And the post is about the Chicago River being dyed green for St. Patrick's Day, which just happened this past March 17th. 5:28 And they're gonna attach an image from one of their photojournalists of that happening. 5:32 So far, this is just a regular Bluesky flow, and nothing has been modified by our extension. 5:37 There's the image. 5:38 They're gonna post it, and it goes into their PDS. 5:41 But when they post that image, if they hover over it, they're going to see this claim button in the corner. 5:47 And the claim button is added by the browser extension: any user can make a claim on any other image on the network, and they get this text box where they can enter more information, more context on the image. 5:57 We call this a claim. 5:59 And so in this case, News123 is going to add a claim on their own image. 6:03 They're going to say something about their own content. 6:06 It could be any user's content, but in this case it's their own. 6:09 And so they're gonna put just a little piece of text in there. 6:12 They're going to describe the content of the image: you know, this is the Chicago River. 6:17 We took the photo on this day. 6:18 It was taken by us, News123. 6:20 And so they're just adding that context to the image itself. 6:23 And they click Create. 6:25 And immediately we see, thanks to the extension, one claim. 6:27 And we see this drawer here that's been created by the extension, where you can see the account that created it. 6:33 And you can see the text of their claim. 6:37 If you open the claim and look at it, you see it's not a Bluesky post. 6:40 It's our own kind of claim record, and it has extra data in there.
6:43 And it has a perceptual hash. 6:44 It has the text. 6:46 It has a link to the original post. 6:47 So we have all this important information attached. 6:50 But where this really comes into play is when you see it in other contexts. 6:55 So we're gonna switch to a different account for the demo, called Mallory. 6:58 Mallory gets her information from all kinds of other sources, maybe Facebook and Instagram, as well as Bluesky. 7:04 Maybe she wants to misinform people maliciously. 7:06 Maybe she's just misinformed herself. 7:08 But in any case, she's writing her own posts. 7:11 She's writing, "Toxic spill turns a river neon green in London." Totally different idea. 7:16 And she's attaching her own image. 7:18 This is a different image that she's gonna attach, coming from a different site. 7:22 It's got a smaller resolution. 7:23 It's more compressed, which you can see a little bit on the left side there. 7:26 The visual content is obviously the same, but the image bytes, the CID, are totally different. 7:32 And so she's going to post that to her PDS. 7:36 And what we see immediately: one claim. 7:38 So that's the Nectar API and the Pollen browser extension carrying through this claim information, because it's matched these visually similar images. 7:47 And so we see a different idea: that it's the Chicago River that's been dyed. 7:51 This contradicts Mallory, and as a viewer, as a user, we get to decide who we trust: News123 or Mallory. 7:57 We also get some extra information. 7:59 We get to see the username. 8:00 And we get to see "95% match." 8:02 That's our matching algorithm at work, just telling you how close these images are visually. 8:07 95% is pretty high. 8:09 And so, you know, that's great. 8:10 We also see View Claimed Post, which is a button that takes you to the original post the claim is on, so you can trace the provenance of the image.
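The exact record schema and scoring formula aren't shown in the demo, so this is a hedged sketch: a hypothetical claim record holding the fields mentioned (perceptual hash, claim text, link to the original post), plus one plausible way a "95% match" score could be derived, as the fraction of matching bits between two 64-bit fingerprints. The record type, field names, and the AT URI are invented for illustration.

```python
# Hypothetical claim record with the fields described in the demo;
# the real lexicon and field names used by Pollen may differ.
claim = {
    "$type": "coop.hypha.pollen.claim",      # invented type name
    "subjectPost": "at://did:plc:news123/app.bsky.feed.post/abc123",
    "perceptualHash": "0xb4d1c2a9e0f37716",  # 64-bit fingerprint, hex
    "text": "This is the Chicago River, dyed green for St. Patrick's Day; "
            "photo taken by us, News123, on March 17th.",
}

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integer fingerprints."""
    return bin(a ^ b).count("1")

def match_percent(a: int, b: int, bits: int = 64) -> float:
    """Share of matching bits; 3 differing bits out of 64 is about 95%."""
    return 100.0 * (bits - hamming(a, b)) / bits
```

Because the score is bitwise, Mallory's recompressed copy (different bytes, different CID) can still score in the high nineties against the News123 original.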
8:18 You can find the original place the claim was made, and if you don't trust the matching algorithm, you can see what the actual original image was, and all that kind of stuff. 8:28 So that's just a simple demo of the extension. 8:30 We think the really interesting thing here is that we've now created the ability to create records that refer to images themselves, not just records that refer to posts, like when you make a reply on the network. 8:42 So if we move to the next slide. 8:45 So yeah, in Pollen, we've just created text claims. 8:47 We think there's lots to explore here with different kinds of claims. 8:50 You could do a geolocation claim, for example. 8:52 So users, instead of writing text, put a pin on a map, and they're claiming this photo was taken in this location. 8:58 You could do timestamping claims. 8:59 Every record already has a timestamp, but if you wanna prove when an image was taken, you can use external parties to do that. 9:06 You can register the image on a blockchain, like the Cardano blockchain, put that registration in the record, and that proves the image existed before a certain point in time. 9:14 Or you can use a third party with a trusted timestamping protocol that signs the hash of the image. 9:19 You insert that signature. 9:21 Anyway, there's lots of different things you can do. 9:23 We think this is an open standard that we've created, that people can explore. 9:26 You could do voice notes. 9:27 You could do all kinds of different cool claims with Pollen. 9:31 So just to recap, we've created Nectar, which is an API to find similar images across the ATmosphere with perceptual hashing, which I'm happy to talk about afterward. 9:40 And then we created Pollen as an example demo. 9:42 But we think there's lots of other cool use cases for this API. 9:46 You could track image reposts across the network.
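To make the timestamping idea concrete: both options described (registering on a blockchain like Cardano, or having a trusted third party sign the hash, e.g. an RFC 3161 timestamping authority) start from a cryptographic digest of the image bytes. This sketch only computes the digest and assembles a hypothetical claim payload; the payload shape and `proofType` values are assumptions, and actually obtaining the proof would require a blockchain transaction or a call to a timestamping service.

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """SHA-256 of the exact image bytes; this digest (not the image itself)
    is what gets registered on-chain or sent to a timestamping authority."""
    return hashlib.sha256(image_bytes).hexdigest()

def timestamp_claim(image_bytes: bytes, proof: str, proof_type: str) -> dict:
    """Hypothetical claim payload embedding a third-party timestamp proof,
    e.g. a Cardano transaction reference or an RFC 3161 timestamp token."""
    return {
        "digest": image_digest(image_bytes),
        "proofType": proof_type,  # e.g. "cardano-tx" or "rfc3161"
        "proof": proof,
    }
```

Anyone verifying the claim later recomputes the digest from the posted image and checks it against the proof, which establishes the image existed before the proof was created.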
9:49 You could use it for trust and safety, to do moderation of similar images. 9:52 You could do social research like the previous talk, where you track, okay, this image has been posted this many times over this time frame. 10:01 You could do community notes. 10:02 We see Pollen as a step towards the idea of community notes on Bluesky and on the ATmosphere at large. 10:08 And you can also do rights management, to track copyrighted content or just content you didn't want to be posted. 10:14 Excuse me. 10:16 So that's Nectar. 10:17 You can check us out at nectar.hypha.coop. 10:20 If you're interested in adding image annotations or image search to your application, if you want to collaborate with us on this project, fund the project, or partner with us, either reach out in person to Udit or myself, or check us out at nectar.hypha.coop. 10:35 Thank you very much. 10:43 We have time for questions. Speaker D 10:48 Thank you. Audience member 10:49 I see the Streamplace folks are right here, and I know Eli recently published a blog about using STUVA and Muxl, some standards to try to ensure hashing of videos for authenticity, along relatively similar lines as this. 11:06 I'm curious if you looked into those standards too, if you think the UX for images and video can be similar in this way, and if you might want to develop shared standards for that. Udit Vira 11:16 Yeah. Speaker D 11:17 Can you repeat the question? Udit Vira 11:18 Yeah, yeah. 11:19 So just to repeat the question, it's about whether Eli's standards around video hashing are relevant to our work. 11:25 So, yeah, we've actually worked with Eli, and we discussed the standard and everything. 11:30 The actual technical details of how this works: we're using perceptual hashing, so it's slightly different from the idea of cryptographic hashes that match exactly.
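The distinction the answer draws, perceptual versus cryptographic hashing, can be shown in a few lines: a cryptographic hash changes completely under any byte-level change (so re-encoded copies of the same photo never match exactly), while a perceptual hash of a near-duplicate differs in only a few bits. The two 8-bit "perceptual hashes" below are toy values chosen purely for illustration.

```python
import hashlib

# Cryptographic hashing: one extra byte yields a completely unrelated digest,
# so exact-match lookups miss recompressed or re-encoded copies.
d1 = hashlib.sha256(b"photo bytes, original encoding").hexdigest()
d2 = hashlib.sha256(b"photo bytes, original encoding ").hexdigest()
crypto_match = d1 == d2  # False

# Perceptual hashing: a near-duplicate differs in only a few bits,
# so similarity survives recompression (toy 8-bit fingerprints here).
p1, p2 = 0b10110100, 0b10110110
differing_bits = bin(p1 ^ p2).count("1")  # 1 bit out of 8
```

This is why exact-hash standards for video authenticity and Nectar's similarity matching solve related but different problems, as the answer goes on to say.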
11:37 We also did create a standard for perceptual hashing, with Dazzle. 11:43 Robin, who's working at Dazzle there. 11:45 Yeah. 11:47 But it's still relevant. 11:49 Maybe less so with exact hash matching. 11:52 But there are hashes that apply to video that could be used to match similar video content. 11:57 We started with images because that's the easiest lift. 12:00 But you could expand it to video, audio, text, and so on. 12:06 So it's definitely