Digital Literacy Training (Part 2) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

Digital Literacy Training (Part 2) Transcript

OLIVIA: Hi, everyone. My name’s Olivia. My pronouns are she/her. Co‑facilitating with Ingrid. And some of the values that this particular digital literacy/defense workshop will be centered in include cyber defense, less as a form of military technology, right? Reframing cryptography as more of an abolitionist technology. Right? And cyber defense as an expression of mutual care and a way of accumulating community‑based power. And in that way, also thinking of ways to teach this type of material in ways that are antiracist, but also anti‑binary and pro‑femme.

And so, we’re really ‑‑ we really care a lot about making sure that this is trauma‑informed and teaching from a place of gentleness, considering the previous digital harm people have experienced and trying not to relive it. So if you need to take a break, remember that this is being recorded and posted online so you will be able to access it later.

INGRID: Great. Thank you, Olivia. My name’s Ingrid. I use she/her pronouns. And welcome back to people who were here yesterday. Today, we are talking about platforms! And in this context, we primarily mean social media sites like Facebook and Instagram. Some of this, you know, it can be applied to contexts where people kind of buy and sell stuff. But essentially, we’re talking about places where people make user accounts to communicate with each other. And ways in which ‑‑ but with kind of more of a focus on kind of the large corporate ones that many people are on!

There were four sort of key concepts we wanted to cover. There’s a lot in them, so we’ll try to move through them smoothly. First kind of being algorithmic curation and the way that can produce misinformation and content suppression. And some of the laws and legal context that are defining decisions that platforms make. We talked a little bit about this yesterday, but, you know, reiterating again: Platforms are companies, and a lot of decisions they make come out of being concerned with keeping a company alive, more than taking care of people.

What is algorithmic curation and why does it matter?

So we’re going to start with algorithmic curation. And I think there’s a thing also that came up yesterday was a tendency for technical language to kind of alienate audiences that don’t know as much about computers or math, I guess. An algorithm is a long word that ‑‑ (Thud) Sorry. That’s the sound of my dog knocking on my door, in the background.

Broadly speaking, an algorithm is a set of rules or instructions ‑‑ (clamoring) Excuse me. One second. She just really wants attention. I’m sorry you can’t see her; she’s very cute!

But… An algorithm is a set of rules or instructions for how to do a thing. You could think of a recipe or a choreography. The difference between an algorithm used in the context of a platform and a recipe that contains, you know, ingredients and steps is that there is a lot less flexibility of interpretation in a platform’s algorithm. And it’s usually applied on a much larger scale.

And the way that a lot of platforms… deploy algorithmic curation, and the way algorithmic curation is most often experienced, is through recommendation algorithms? And algorithms that determine what content is going to show up in a social media timeline.

So I am ‑‑ you know, I have recently been watching Avatar: The Last Airbender on Netflix. I am 33 years old. And… (Laughs) I found that, you know, Netflix wants to make sure that I know they have lots of other things that I might like because I liked that show. Right? And you could kind of think of algorithms as kind of being if this, then that rules. Like, if somebody watches this show, look at all the other people who watched that show and the other shows that they watched, and suggest that, you know, you probably will like those.
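
To make that “if this, then that” idea concrete, here is a minimal Python sketch of a watch‑history recommendation rule. The users, shows, and scoring are all invented for illustration; real platforms use far more data and far more complicated models than this.

```python
# Toy "people who watched X also watched Y" recommender.
# All users, shows, and counts below are made up for illustration.
from collections import Counter

watch_history = {
    "you":   {"Avatar: The Last Airbender"},
    "user2": {"Avatar: The Last Airbender", "The Dragon Prince"},
    "user3": {"Avatar: The Last Airbender", "The Dragon Prince", "She-Ra"},
    "user4": {"Some Cooking Show"},
}

def recommend(target, history):
    """Suggest shows watched by people who share shows with the target user."""
    seen = history[target]
    scores = Counter()
    for user, shows in history.items():
        if user == target or not (shows & seen):
            continue  # ignore users with no overlap in watch history
        for show in shows - seen:
            scores[show] += 1  # each overlapping viewer is one "vote"
    return [show for show, _ in scores.most_common()]

print(recommend("you", watch_history))  # ['The Dragon Prince', 'She-Ra']
```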

And the rationale platforms give for deploying these kinds of algorithms is partly that they’re just trying to help people? Right? Like, help you discover things, because there’s so much content, and you’ll get overwhelmed, so we prioritize. What it actually means in practice is trying to keep you using a service. Right? Like, I’m probably going to cancel my Netflix account once I finish Avatar, so. But oh, like no, now I gotta watch The Dragon Prince. Right?

I think… Do I do this part, or Olivia?

OLIVIA: I can do it?

INGRID: Sorry! I couldn’t remember how we split up this section.

OLIVIA: I… So… In early social media, we didn’t really have super complicated algorithms like the ones we do now. You have the, like, find your friends algorithms that would basically like show you perhaps the friends of your friends. But the people you follow were mostly the only people whose posts you would see.

But now that we’re able to collect more user data about how you’re using the platform, as well as your activities off the platform, now algorithms are able to become more complicated, because there’s so much more information that they’re able to use.

So some of the things that might be going into your algorithmic curation are listed here. It’s a really long list, and even so it isn’t an exhaustive list of everything that might be factoring into the algorithm? ‘Cause so few platforms actually disclose what contributes to the stuff that you see and what you don’t see, and who does and doesn’t see your own content. But one thing that we know for sure is that the way these platforms are designed is specifically in order to make money. And so following that motive, you’re able to kind of map a lot of their predicted behavior.

And one of the really big consequences of these like algorithmic filter bubbles is misinformation. Right? So because we’ve all been inside for the past couple of weeks and months, we’re all really susceptible to seeing really targeted misinformation, because we’ve been online a lot. And so it’s quite possible that more data is being collected about you now than ever before. Platforms make money off of our content, but especially content that encourages antisocial behaviors. And when I say antisocial behaviors, I mean antisocial as opposed to pro‑social behaviors. Pro‑social behaviors encourage a healthy boundary with social media, like light to moderate use. Comforting people! Letting people know that they rock! Right? Cheering people up. Versus antisocial behaviors, which, while they’re much less healthy, encourage people to use social media like three times as much. Right? People spreading rumors; people posting personal information; people being ignored or excluded; editing videos or photos; saying mean things. Right? And so that makes an environment where misinformation does super well, algorithmically.

Through their design, platforms like Instagram and Twitter especially prioritize posts that receive lots of attention. We see this in how people ask others to “like” posts that belong to particular people so that they’ll be boosted in the algorithm. Right? They prioritize posts that get a lot of clicks and a lot of feedback from the community. And it’s really easy to create misinformation campaigns that take advantage of that.
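
As a rough sketch of what “prioritize posts that get attention” can look like in code, here is a toy ranking function in Python. The posts and the weights are invented; actual ranking systems are proprietary and far more elaborate, but the basic incentive, where whatever draws clicks and replies rises to the top regardless of whether it’s true, is the same.

```python
# Toy engagement-based ranking: posts that attract more likes, replies, and
# shares float to the top, whether or not they're true. Weights are invented.
posts = [
    {"text": "Calm, accurate public-health update", "likes": 40,  "replies": 5,   "shares": 3},
    {"text": "Outrageous rumor about a celebrity",   "likes": 900, "replies": 400, "shares": 700},
    {"text": "Friend's cute dog photo",              "likes": 120, "replies": 20,  "shares": 8},
]

def engagement_score(post):
    # Hypothetical weights: replies and shares "count" more than likes.
    return post["likes"] + 3 * post["replies"] + 5 * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])
```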

OLIVIA: Nice. That was a really quick video from the Mozilla Foundation. But I wanted to clarify that there’s this assumption that people who fall for misinformation are like kinda dumb, or they’re not like thinking critically. And this is like kind of a really ableist assumption, right? In truth, anyone could unknowingly share misinformation. That’s like how these campaigns are designed, right? And there’s so many different forms that misinformation takes.

It could be like regular ole lies dressed up as memes; fabricated videos and photos that look super real, even though they’re not; performance art and like social experiments? (Laughing) Links to sources that don’t actually point anywhere? And it could even be information that was originally true! But then you told it to your friend, who got the story kind of confused, and now it’s not true in a way that’s really, really important. And of course, there’s also conspiracy theories, and misleading political advertisements, as well.

But sometimes, misinformation is less about being not told ‑‑ being told a lie, and more about not being told the truth, if that makes sense.

So, the easiest way to avoid misinformation is to just get in the habit of verifying what you read before you tell someone else. Even if you heard it first from someone that you trust! Right? Maybe one of your friends shared misinformation. But my friend is a really nice, upstanding citizen! Right? There’s no way that… I don’t know; being a citizen doesn’t matter. My friend is a nice person! And not always… are the people ‑‑ people who share misinformation aren’t always doing it to stir the pot. They just got confused, or they just… ended up in a trap, really.

So, fact‑check the information that confuses you, or surprises you. But also fact‑check information that falls in line with your beliefs. Fact‑check all of it. Because you’re more likely to see misinformation that falls in line with your beliefs because of the algorithmic curation that we talked about before. Right? We have an internet that’s like 70% lies.

So, two sites that were pretty popular when I asked around about how people fact‑check were PolitiFact and Snopes.com. You could also use a regular search engine. There’s Google, but also try DuckDuckGo at the same time. You could ask a librarian. But also, if you look at a post on Instagram or Twitter and scroll through the thread, there might be people saying, like, hey, this isn’t true; why’d you post it? So always be a little bit more thorough when you are interacting with information online.

How does algorithmic curation contribute to content suppression and shadowbanning?

INGRID: So the next sort of thing we wanted to talk about that’s a, you know, consequence of algorithmic curation and companies, like, platforms being companies, is suppression of content on platforms. Right? Platforms have their own terms of service and rules about what people can and can’t say on them. And those terms of service and rules are usually written in very long documents, in very dense legal language that can make it hard to understand when you break those rules, and are kind of designed to, you know, be scrolled through and ignored.

And we wanted to ‑‑ but because a lot of the decisions about what is, like, you know, acceptable content or unacceptable content are, again, being made by an algorithm looking for keywords, for example… the platforms can kind of downgrade content based on assumptions about what’s there.
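
To give a sense of how crude that kind of keyword-based downgrading can be, here is a toy Python sketch. The word list, the labels, and the whole setup are invented; no platform publishes its actual rules, which is part of the problem.

```python
# Toy keyword filter: the system never "reads" a post the way a person would;
# it just pattern-matches against a word list and quietly downgrades matches.
# The flagged terms here are placeholders, not any platform's real list.
FLAGGED_TERMS = {"example-flagged-word", "another-flagged-word"}

def visibility(post_text):
    words = set(post_text.lower().split())
    if words & FLAGGED_TERMS:
        return "downranked"  # still posted, but hidden from search and timelines
    return "normal"

print(visibility("a totally ordinary post"))                 # normal
print(visibility("a post containing example-flagged-word"))  # downranked
# Note how trivially the match fails if the word is broken up with spaces,
# which is exactly why people resort to writing words that way.
```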

So… shadowbanning is a concept that I imagine many of you have heard about or, you know, encountered, possibly even experienced. It actually originally is a term that came from like online message groups and forums. So not an automated algorithm at all. Basically, it was a tool used by moderators for, you know, forum members who liked to start fights, or kind of were shit‑stirrers, and would basically be sort of a muting of that individual on the platform. So they could, you know, still post, but people weren’t seeing their posts, and they weren’t getting interaction, so they weren’t getting whatever rise they wanted to get out of people.

Today, the more common kind of application of the term has been describing platform‑wide essentially muting of users from, like, the main timeline, or making it hard to search for that individual’s content, based on what is thought to be automated interpretation of content. I say “what’s thought to be automated interpretation of content,” because there is a lot that is only kind of known about what’s happening on the other side of the platform. Again, yeah, what it often looks like is not showing up in search unless someone types the entirety of a handle; even if you follow that person, that person’s content not showing up in the main timeline, like in their followers’ feeds, not showing up in a hashtag…

And, shadowbanning is like a really gaslighting experience? Because it’s hard to know: is the lack of response because people just don’t like what I’m saying, or people just don’t care anymore, or am I being actively suppressed and people just can’t see me? And if it’s something that has happened to you, or is happening to you, one thing that is important to remember is like you will feel very isolated, but you are in fact not alone. This is a thing that happens. It’s often sort of ‑‑ it’s been, over time, kind of dismissed by platforms as myth or kind of ‑‑ and I think, I wonder if, in some ways, perhaps their aversion to it comes from associating it with this less automated context? Because it’s like, well, we’re not deliberately trying to mute anybody; it’s just our systems kind of doing something! But the systems are working ‑‑ you know, they designed them, and they’re working as designed. Right?

Instagram recently, in making an announcement about work that they want to do to address sort of implicit bias in their platform, sort of implicitly acknowledged that shadowbanning exists. They didn’t actually use the term? But it is interesting to see platforms acknowledging that there are ways that their tools will affect people.

In terms of the “what you can dos” and ‑‑ Blunt, if you have anything that you want to add to that, I’d totally be happy to hear because I’m far from an expert. It’s a lot of what the sort of like best practices tend to be based on what other people have shared as like working for them. So basically, I don’t want to tell you anything and say like this is a guarantee this will like work for you in any given context. One thing that I have seen a lot is, basically, posting really normie content? Like, just going very off‑script from whatever your normal feed is, and doing something like, I don’t know, talking about your pet, or having ‑‑ you know, talking about like cooking. Basically just like changing what you’re doing. Another approach is getting your friends and followers to engage with your content, so that it’s seen as popular, so that it will like return to the timeline.

Blunt, is there anything else that you would want to include in there?

BLUNT: Yeah, I think something that communities found to be useful is that if you are going to be automating posts, to do it on a backup account so that what’s flagged as bot‑like behavior is ‑‑ so your promo account might be shadowbanned, but you might have a wider reach to direct people to where to give you money. But it’s a really complex topic. I’ve been thinking about it a lot right now as I was just ‑‑ Hacking//Hustling is currently studying shadowbanning. So far, we’ve found our data backs up a lot about what sex workers know to be true about how shadowbanning sort of works, what seems to trigger it and what seems to undo it. But as I was making a thread about the research, which both included the words “sex worker” and “shadowban,” I was like, I don’t even know if I can say either of these words without being shadowbanned! So I write it with lots of spaces in it, so hopefully the algorithm won’t recognize it, which also makes it inaccessible to anybody using a screen reader.

So, I don’t know. I know there was a class on how to reverse a shadowban, but I also think that after the global protests started that the algorithm changed a little bit, because we were noticing a lot ‑‑ a big increase of activist and sex worker content being suppressed in the algorithm.

INGRID: Yeah. That’s ‑‑ do you know when you’re going to be putting out some of the research from ‑‑ that Hacking//Hustling’s been doing?

BLUNT: Yeah, we just tweeted out a few of our statistics in light of the recent Twitter shenanigans, and… (Laughs) Some internal screenshots being shared, where they say that they blacklist users? Which is not a term I knew that they used, to describe this process. We’re in the initial analysis of the data stages right now, and we’ll probably ‑‑ our goal is to share this information primarily with community, so we’ll be sharing findings as we are able to, and then the full report will probably come out in like two to three months.

Can algorithms judge video content?

INGRID: “Have you found that the algorithm can judge video content? I know nudity in photos are flagged.” I would defer to Blunt on this question, actually.

BLUNT: I would say, yeah. I’ve had videos take ‑‑ I have lost access to YouTube from videos. So I think anything that you post with a… either a link… for sex work, or just links in general and photos are more likely to be flagged. So, like, personally, I notice my posts that are just text‑based show up higher and more frequently in the algorithm and on the feed.

Which laws and politics surround content suppression?

INGRID: Mm‑hmm… yeah. So the other kind of form of suppression we wanted to mention and talk about is not as algorithmic. It’s when, you know, the state gets involved.

So platforms are companies; companies are expected to follow rules; rules are made by governments. Sometimes, it’ll kind of look like shadowbanning. So TikTok has been reported to basically down‑rank certain kinds of content on the site, or like not, you know, have it show up in a “For You” page, or on your follow page, depending on laws in a country around homosexuality. Sometimes it’s, you know, a result of countries creating rules that are sort of presented as being about national security, but are actually about suppressing dissent. So in Vietnam and the Philippines, there have been rules made that basically mean the contents of social media posts can be treated as, you know, potential threats against the state. And sometimes rules about protecting the vulnerable are actually about, you know, some moral majority bullshit. Which seems like a good time to start talking about sort of legal contexts!

And a lot of this is ‑‑ all of this particular section is really USA contexts. And I feel like I should ‑‑ I wanted to kind of give some explanation for that, because I feel weird doing this like broad sweep on, like, other kind of like countries’ approaches and focusing so much on the United States. But the reason for doing that is, basically, America ‑‑ as, you know, an imperialist nation! Tends to have an outsized impact on what happens on global platforms, overall. And there’s, you know, two reasons for that; one is that most of these companies are located in the United States, like their headquarters are here, so they are beholden to the laws of the place; but secondly, it’s also about sort of markets. Right? Like, the ‑‑ if you, you know. Like, if Facebook is like, we don’t need the American consumer base! Like, it’s probably going to affect their ability to make money.

And there are exceptions in terms of, like, the ways that other law, like, law kind of impacts platforms’, like, structure and decisions. And we talked a little bit yesterday about European privacy laws, but we’ll try and bring a little more in tomorrow about those.

First kind of category is like ‑‑ this is a little bit of a tangent, but it came up yesterday, so I wanted to kind of mention it. This is an image from the account shutdown guide that Hacking//Hustling made, that I did some work on. And basically, platforms that, you know, can facilitate financial transactions, which can be something, you know, like Stripe, PayPal, or Venmo, but, you know… Basically, they have to work with banks and credit card companies. And banks and credit card companies can consider sex work‑related purchases to be like “high risk,” despite there being very little evidence that this is true? The reason sometimes given is the possibility of a charge‑back? Meaning, you know, hypothetically, heteronormative sitcom scenario, that I don’t want my wife to see this charge on my bill! So I report it, and it gets taken off. How much this is actually the case? Unclear. It’s also, like, they’re just kind of jerks.

But, you know, platforms don’t actually have a lot of ability to kind of decide ‑‑ like, to actually like argue with these companies? Because they control the movement of money. Around, like, everywhere? So, in some ways, it’s kind of ‑‑ you know, they kind of just have to fall in line. I mean, that being said, companies themselves are also like kinda dumb. I wasn’t sure whether this needed to be included, but this Stripe blog post explaining why businesses aren’t allowed? They have a section on businesses that pose a brand risk! And they have this whole thing about like, oh, it’s our financial partners don’t want to be associated with them! It’s not us! But, you know, like, fuck out of here, Stripe.

What is section 230?

Back to other laws! (Laughing) So. Section 230 is a term that maybe you’ve heard, maybe you haven’t, that describes a small piece of a big law that has a very large impact on how platforms operate and, in fact, that platforms exist at all. So in the 1990s, lawmakers were very stressed out about porn on the internet. Because it was 1996, and everyone, you know, didn’t know what to do. And a bill called the Communications Decency Act was passed in 1996. Most of it was invalidated by the Supreme Court? Section 230 was not. It’s part 230 of it. It’s a very long bill. It’s really important for how platforms operate, because it says that platforms, like, or people who run hosting services, are not responsible when somebody posts something illegal or, you know, in this case, smut. I, I can’t believe that there was a newspaper headline that just said “internet smut.” It’s so silly… But that the platform, the hosting service, they’re not responsible for that content; the original poster is responsible. Like, if you wanted to sue someone for libel, like, you would not sue the person who hosted a libelous website; you would sue the creator of the libelous website.

And this was initially added to the Communications Decency Act, because there was concern ‑‑ really because of capitalism! There was concern that if, if people were afraid of getting sued because somebody, you know, used their services to do something illegal, or used their services to post something that they could get sued for, that people would just not go into the business! They would not make hosting services. They would not build forums or platforms. And so it ‑‑ removing that kind of legal liability… opened up more space for, for platforms to emerge. It’s, in some ways, it’s a fucked up compromise, in so far as it means that when Facebook does nothing about fascists organizing on their platforms and fascists actually go do things in the world, Facebook can’t be held responsible for it. Right? I mean, the Charlottesville rally in 2017 started on Facebook. Facebook obviously got some bad PR for it, but, you know. Then again, exceptions that make platforms responsible for this or that… tend not to be written to meaningfully support people with less power, but usually reflect what powerful people think are priorities. Such as the first effort, in 2018, to change or create exceptions to Section 230. Which was FOSTA‑SESTA!

What is FOSTA-SESTA?

It was sold originally as fighting trafficking? The full ‑‑ FOSTA and SESTA are both acronyms. FOSTA is the Allow States and Victims to Fight Online Sex Trafficking Act. SESTA is the Stop Enabling Sex Traffickers Act. But the actual text of the law uses the term, “promotion or facilitation of prostitution and reckless disregard of sex trafficking.” So basically, it’s kind of lumping sex work into all sex trafficking. Which… Yeah. That’s ‑‑ not, not so wise.

And what it essentially creates is a situation where companies that allow that ‑‑ allow prostitution, or facilitation of prostitution, and reckless disregard of sex trafficking to happen on their platform? Can be held legally responsible for that happening. The day that FOSTA‑SESTA was signed into law, Craigslist took down the Personals section of its website. It has generally heightened scrutiny of sex worker content across platforms, and made it a lot harder for that work to happen online.

What is the EARN IT Act?

And in some ways, one of the scary things about FOSTA‑SESTA is the way in which it potentially emboldens further kind of attempts to create more overreaching laws. The EARN IT Act is not a law, yet. It is one that is currently being… discussed, in Congress. It emerged as ‑‑ or, the way that it’s been framed is as a response to an investigative series that happened at the New York Times about the proliferation of sexual images of children on platforms. And this, this is a true thing. Basically, any service that allows uploading of images has this problem. Airbnb direct messages can be, and are, used? And it’s a real thing. But the actual bill is a very cynical appropriation of this problem, with a solution that really serves more to kind of control and contain how the internet, like, works.

It proposes creating a 19‑member committee of experts, headed by the Attorney General, who would be issuing best practices for companies and websites, and allow those that don’t follow the best practices to be sued. And what “best practices” actually means is currently ‑‑ is like very vague in the actual text of the bill. The word “encryption” does not actually appear in the text of the bill, but its authors have a long history of being anti‑encryption. The current Attorney General, Bill Barr, has expressed wanting back doors for government agencies so that they can look at encrypted content. And likely, you know, it’s thought it could include “best practice” things like make it easier for the government to spy on content.

This is ‑‑ you know. I know somebody who worked on this series, and it is so frustrating to me to see that effort turn into, how about we just kill most of what keeps people safe on the internet?

So I mention it because this is something that is good to pay attention to. Write your Congress member about it. Hacking//Hustling has done ‑‑

What is encryption?

Oh, Blunt would like me to define encryption. So it’s a mechanism for keeping information accessible only to people who know how to decode it. It is a way of keeping information safe, in a way! And… encryption was not inherently part of the early internet, because the internet was originally created by researchers working for the government who thought it would just be government documents moving around on it, so they were all public anyway. But it has since been kind of normalized into a part of, like, just using the internet as we know it today. But in this context, yeah, basically, it means that if I want to send you a message, the only people who can read that message are like you and me, and not the service that is moving the message around, or the chat app that we’re using.

That was ‑‑ I feel like that was a little bit garbled, but… I don’t know if you like ‑‑ if, Olivia, is there anything that you would want to add to that? Or a better version of that? (Laughs)

OLIVIA: I think, I think you’ve mostly said it, in terms of it’s like a way of like encoding information so that ‑‑ someone might know the information is present, but they don’t know what it says. So, when we have things like end‑to‑end encryption on the internet, it means that something is encrypted on my side, and no matter, like, say what third party tries to look at the message that I sent to you while it’s in transit, it can’t be seen then, and it also can’t be seen by them on the other side, because the person who I sent the message to has their own, like, code that allows them to decode the message that’s specific to them. And this happens on a lot of platforms without our knowledge, in the sense that apps that are end‑to‑end encrypted, like Signal, they don’t really tell you what your key is. Even though you have one, and the person that you’re talking to has one, it’s not like you’re encoding and decoding yourself, because the math is done by other things.
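
For anyone who wants to see the key-pair idea in code, here is a minimal sketch using the PyNaCl library (pip install pynacl). This is only an illustration of public-key encryption between two people; real end-to-end apps like Signal layer much more on top of this, such as key verification and forward secrecy, and the names and message here are made up.

```python
# Minimal public-key encryption between two people, using PyNaCl.
# The platform carrying the message only ever sees ciphertext.
from nacl.public import PrivateKey, Box

# Each person generates a key pair; private keys never leave their device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts with her private key plus Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Anything in the middle (the platform, a third party) only sees this:
print(ciphertext.hex())

# Bob decrypts with his private key plus Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```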

But if the bill goes out of its way to exclude encryption, then it might make it potentially illegal for these services to exist, which would be a really bad thing for journalists and activists and sex workers and, like, everybody.

INGRID: Yeah. And additionally ‑‑ I mean, within the world of people who work on encryption and security tools, the view of creating a back door, or some way to sneakily decrypt a thing without somebody knowing, is that it creates a vulnerability that essentially anyone else could exploit. Like, if it exists, somebody will hack it and figure it out.

OLIVIA: There’s no such thing as a door that only one person can use.

What’s the connection between the EARN IT Act and The New York Times?

INGRID: A question ‑‑ EARN IT is not solely a response to an article by the New York Times? It was a series of seven articles. And when I say “in response,” that is the argument ‑‑ that is the statement made by the people who wrote the bill. I think that it was more that EARN IT was proposed by some Congress people who saw an opportunity to cheaply exploit outrage over, like, abuse of children, to put forward some policies that they would want to have happen anyway. And I guess the reason I mention it is because I think it’s also important to acknowledge that, yeah, it was an entire series from the New York Times. And honestly, I think that the main takeaway from that series to me was more that, like, companies are dropping the ball? Not that we need the government to come in and, like ‑‑ or like, if there’s supposed to be government making rules about how companies address this issue, I don’t think that the solution is to create a committee that pursues, like, telling the companies what to do in this way that doesn’t actually seem to have anything to do with the actual problem they’re talking about.

BLUNT: Totally. And we actually ‑‑ I just want to also say that on the 21st, Hacking//Hustling will be hosting a legal literacy panel, where we will be talking about the ways that fear and threats to national security are used to pass… laws that police us further, that want to end encryption, that want to do away with our privacy. So if you check out HackingHustling.org slash events, I think, you should be able to find out more about that. Again, that’s at 7:00 p.m. on the 21st. You’ll be able to learn a lot more. We’ll do an update on EARN IT, where to look for updates, and similar legislation that’s being passed.

INGRID: I did see ‑‑ there was like a ‑‑ I saw an article that said a bill was being worked on, that was basically like in response to EARN IT, trying to say, like, yes, this is this problem you’re claiming that you’re going to address, like, it’s bad, but like this is not the way to do it, and trying to come up with an alternative. I think Ron Wyden was involved. Do you know anything about this?

BLUNT: Yeah, I think that’s ‑‑ yes. I mean, yes, we will talk about that on the 21st. I’m not ‑‑ we will have our legal team talk about that, so I don’t say the wrong thing.

INGRID: Okay, great. Moving forward!

What are some secure and private platform alternatives?

Olivia, do you want to do the platform alternatives? I feel like I’ve just been talking a lot!

OLIVIA: Sure! So, it kind of sucks that we’re all kind of stuck here using… really centralized social media platforms that we don’t control, and that kind of, in like nefarious and really complicated ways, sometimes control us. And so you might be thinking to yourself, gee, I wish there was something I could use that wasn’t quite Instagram and wasn’t quite Twitter that could let me control information.

So, we have some alternatives. One of these alternatives is called Mastodon. And… Essentially, it’s an independent ‑‑ is that the word? I think the word is ‑‑

BLUNT: An instance?

OLIVIA: It’s an instance! There you go. It’s an instance of… Oh, no, I don’t think that’s the word, either.

Basically, Mastodon is a very ‑‑ is a Twitter‑like platform that’s not Twitter, and instead of going on like a centralized place, you can set up your own Mastodon instance for your community. So instead of having ‑‑ like, you might have Mastodon instances that are called other names? Kind of like ‑‑ would a good analogy be like a subreddit?

INGRID: Maybe. I think, like, the existence of ‑‑ so, Mastodon is also from a project to create… like, open standards for social networking tools. I think we talked a little bit about sort of standardizing of browsers and web content. And in the last decade, one that’s been in development is one for just creating an open standard of what, like, a social network should do and could be. The protocol is actually called ActivityPub, and Mastodon is built on top of it. It’s, it’s more ‑‑ it’s kind of like… the term used for how they’re actually set up is like “federated.”

OLIVIA: Federated!

INGRID: Yeah. You set up one that’s hosted on your own. And it can connect to other Mastodon sites that other people run and host. But you have to decide whether or not you connect to those sites. And I think the, the example ‑‑ the thing that ‑‑ sorry. I can jump off from here, ’cause I think the next part was just acknowledging the like limitations. (Laughs) ‘Cause I think ‑‑ so… With ‑‑ so, this is a screenshot of Switter, which had been kind of set up as a sex work‑friendly alternative to Twitter, after FOSTA‑SESTA. And… It has run into a lot of issues with staying online because of FOSTA‑SESTA. Their hosting in ‑‑ like, I think Cloudflare was originally their hosting service, and they got taken down, because the company that like made ‑‑ you know, the company that was hosting it didn’t want to potentially get hit with, you know, like, liabilities because FOSTA‑SESTA said you were facilitating sex trafficking or some shit.

So it’s, it’s not a, necessarily, like, obvious ‑‑ like, it’s not easy, necessarily, to set up a separate space. And whether setting up a separate space is what you want is also, like, a question.

OLIVIA: Another option is also… Say you have a community that’s on Instagram, or on Twitter, and you guys are facing a lot of algorithmic suppression, and you’re not able to, like, reliably communicate with the people who like your page. You could also split it both ways. You could try having an additional way of communicating with people. So you might have like a Twitter page where you have announcements, but then have a Discord server or something where you communicate with community members, or similar things.

And those types of interventions would essentially allow you to avoid certain types of algorithmic suppression.

INGRID: Yeah. And in a way, the construction of an alternative, it’s, I think… the vision probably is not to create, like, a new Facebook, or a new, you know, Twitter, or a new Instagram, because you will just have the same problems. (Laughs) Of those services. But rather to think about making sort of intentional spaces, like, either ‑‑ like, within, you know, your own space. This is a screenshot of RunYourOwn.social, which is a guide created by Darius Kazemi about ‑‑ you know, what it is to create intentional online spaces. I just find it really, really useful in thinking about all this stuff.

All right. Those were all our slides…

BLUNT: I actually just wanted to add one little thing about that, just to follow up on those previous two slides. I think it’s important to note, too, that while there are these alternatives on Mastodon and in these various alternatives, that’s often not where our clients are? So I think that it can be helpful for certain things, but the idea that entire communities and their clients will shift over to a separate platform… isn’t going to, like, capture the entire audience that you would have had if you had the same access to these social media tools that your peers did. So one thing that I’ve been recommending for folks to do is, like, mailing lists ‑‑ I think they can be really helpful in this, too ‑‑ to make sure that you have multiple ways of staying in touch with the people that are important to you, or the people that are paying you. Because we don’t know what the stability is of a lot of these other platforms, as well.

INGRID: Yeah.

OLIVIA: E‑mail is forever.

BLUNT: Yeah.

INGRID: Yeah, that’s a really, really good way to ‑‑ you know, point. And thank you for adding that.

Okay! So I guess… Should we ‑‑ I guess we’re open, now, for more questions. If there’s anything we didn’t cover, or anything that you want kind of more clarification on… Yeah.

I see a hand raised in the participant section, but I don’t know if that means a question, or something else, or if… I also don’t know how to address a raised hand. (Laughs)

BLUNT: Yeah, if you raise your hand, I can allow you to speak if you want to, but you will be recorded, and this video will be archived. So, unless you’re super down for that, just please ask the questions in the Q&A.

What is Discord and how secure is it?

Someone asks: Can you say more about Discord? Is it an instance like Switter or Mastodon? What is security like there?

OLIVIA: So Discord is a ‑‑ is not an instance like Switter and Mastodon. It’s its own separate app, and it originated as a way for gamers to talk to each other? Like, while they’re playing like video games. And so there’s a lot of, a lot of the tools that are currently on it still make kind of more sense for gamers than they do for people who are talking normally.

A Discord server isn’t really an actual server; it’s more so a chat room that can be maintained and moderated.

And security… is not private. In the sense that all chats and logs can be seen by the folks at, like, at Discord HQ. And they say that they don’t look at them? That they would only look at them in the instance of, like, someone complaining about abuse. So, if you say like, hey, this person’s been harassing me, then someone would look at the chat logs from that time. But it’s definitely not… it’s not a secure platform. It’s not‑‑ it’s not end‑to‑end encrypted, unless you use like add‑ons, which can be downloaded and integrated into a Discord experience. But it’s not out of the box. It’s mostly a space for, like, communities to gather.

Is that helpful…?

INGRID: “Is the information on the 21st up yet, or that is to come?” I think this is for the event ‑‑

BLUNT: Yeah, this is for July 21st. I’ll drop a link into the chat right now.

What are some tips for dealing with misinformation online?

INGRID: “How would you suggest dealing with misinformation that goes deep enough that research doesn’t clarify? Thinking about the ways the state uses misinformation about current events in other countries the U.S. uses to justify political situations.” (Sighs) Yeah, this is ‑‑ this is a hard one. The question of just ‑‑ yeah. The depths to which misinformation goes. I think one of the… really hard things about distinguishing and responding to misinformation in, like, right in this current moment… is that it is very hard to understand who is an authoritative source to trust? Because we know that the state lies. And we know that the press follows lies! Right? Like, I imagine some of you were alive in 2003. Maybe some of you were born in 2003. Oh, my goodness.

(Laughter)

I ‑‑ again, I feel old. But… Like, the ‑‑ and you know, it’s not even ‑‑ like, you can just look at history! Like, there are… there are lots of legitimate reasons to be suspicious! Of so‑called authoritative institutions.

And I think that some of the hard things with those ‑‑ with, like… getting full answers, is… being able to ‑‑ is like finding, finding a space to like kind of also just hold, like, that maybe you don’t know? And ‑‑ and that actually maybe you can’t know for sure? Which is to say, maybe ‑‑ okay, so one example of this. So, I live in New York. I don’t know how many of you were ‑‑ are based near here, or heard about ‑‑ we had this fireworks situation this summer? (Laughing) And there was a lot of discussion about, like, is this like a op? Is this some sort of, like, psychological warfare being enacted? Because like, there were just so many fireworks. And, you know, the ‑‑ it’s also true that, like, fireworks were really like cheap, because fireworks companies didn’t have more fireworks jobs to do. I, personally, was getting lots of like promoted ads to buy fireworks. But like at the end of the day, the only way that I could kind of like safely manage, like, my own sense of like sanity with this is to say, like: I don’t know which thing is true. And the thing that ‑‑ and like, neither of these things address the actual thing that I’m faced with, which is like loud noise that’s stressing out my dog.

And so I think that some ‑‑ I think the question with, like, misinformation about sort of who to trust or what to trust, is also understanding, like… based on like what I assume, what narrative is true or isn’t true, what actually do I do? And… How do I kind of, like, make decisions to act based on that? Or can I act on either of these?

I guess that’s kind of a rambly answer, but I think ‑‑ like, there isn’t always a good one.

BLUNT: I just dropped a link to ‑‑ it’s Yochai Benkler, Robert Faris, and Hal Roberts’ Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. I think it’s from 2018? I think it’s a really interesting read if you’re interested in learning more about that.

INGRID: There are two other questions, but I just want to quickly answer: What happened in 2003 is America invaded Iraq based on pretenses of weapons of mass destruction that didn’t exist. And companies ‑‑ like, news outlets reported that with no meaningful interrogation. (Laughs) Sorry.

What’s going on with TikTok and online privacy right now? Is it worse than the EARN IT Act?

OLIVIA: Re: TikTok… It’s a really confusing situation, because most places, especially a lot of cyber security experts on the internet, have been saying to delete TikTok? But also a lot of that ‑‑ a lot of reasons that it’s being done so are kind of boiling down to, it’s a Chinese app. Which is really xenophobic. But there are ‑‑ TikTok does track a lot of information about you. What it uses it for, mostly it’s to send you really, really, hyper‑specific TikToks. But it definitely is ‑‑ like, that information is being collected about you, and it exists in their hands. So I think it’s mostly a decision for individuals to make about whether they’re going to decide to trust TikTok with their information in that way. Because they absolutely know where you live, and they definitely know whatever things about you that you feel like they’ve gathered in order to create the TikTok algorithm that shows up in your feed. Those things can be ‑‑ those things are true. So.

I think ‑‑ Ingrid, do you have anything to say on that?

BLUNT: You’re still muted, Ingrid, if you’re trying to talk.

INGRID: Oh, sorry. I… The question, also, asked, you know, if things like the data collection on platforms like TikTok was worse than things like EARN IT. And I think the… It kind of depends on where you think, like, sources of harm are going to be? It’s ‑‑ you know, it’s kind of, just ‑‑ it’s different! Like, you know, there’s a bunch of information that a company now has that they could choose to sell, that they could choose to utilize in other ways, that they might give to a law enforcement agency that gets a subpoena. But whether or not ‑‑ but like, EARN IT and FOSTA‑SESTA are examples of ‑‑ like, those are ‑‑ that’s, I guess, a different kind of harm? That harm has less to do with collection of information, and more about suppression of content and information and of certain kinds of speech.

“Is it fair to say that social media companies can use your username alone to connect you to other accounts? Should we slightly modify our usernames to avoid being associated and shut down all at once?” So I think ‑‑ I mean, I would say just for the question of like whether to modify your username or not, I think that’s also a risk assessment question, in so far as if you need people to be able to find you across multiple platforms, I would not want to tell you to like not do that? Or to like make it harder for you to, like, reach clients or an audience. I think ‑‑ social media companies tend to… whether they’re looking for you across platforms, like, is not as clear to me. I think it depends on the, like, agreements that exist within the platform. So like, I know that ‑‑ I mean, like Facebook and Instagram are owned by the same company. Right? So they will end up sharing ‑‑ like, the sharing of those two identities, like, is fairly ‑‑ you know, that’s likely to happen. But…

OLIVIA: Some might not be looking for your other accounts? But if you’re ever, like, being investigated by like an actual individual person, or like say your local police department, or the state in general, they probably would be.

INGRID: Yeah. And in that case, I think that what may be more helpful is if you have sort of a public persona that you want to have kind of have a similar identity… That’s a choice you can make. And then if there’s like alt accounts that, you know, maybe are where you have more personal, like, communications, or are working ‑‑ you know, kind of more connected to community and less business? That, making those slightly harder to associate, or making those slightly more compartmentalized? And we’ll talk more a little bit about sort of compartmentalizing identities tomorrow. But I think, yeah, that’s one way to kind of address that ability of being kind of identified.

BLUNT: I think, too, I wanted to add that it’s not just like using the same username, but where you post it, or like what e‑mail is associated with an ad. If you’ve linked your social media to a sex working ad, one of the statistics that we found in the research, the ongoing research projects that Hacking//Hustling is doing right now on shadowbanning is that sex workers who linked their social media to an advertisement are significantly more likely to believe they’ve been shadowbanned, at 82%. Which seems to me that linking might put you in… the bad girl bin, as I call it. (Laughs)

Do we have any other questions? We still have a good chunk of time. Or anything that folks want more clarity on?

What is DuckDuckGo and what is a VPN? Should we use them?

Okay, so we have one that says, “I heard DuckDuckGo mentioned. Do you personally use that search engine? Also, I recently started using ExpressVPN, as I just started sex work, and bad on my part, I did little research on which VPNs. Have you heard of ExpressVPN? Do you have another app that you personally use or have more knowledge about? I want to stay safe and of course share with others what would be the best app to use for VPN.”

INGRID: Olivia, do you want to take some of this one…?

OLIVIA: I was muted. So, I do use DuckDuckGo, most often. Sometimes, if I’m trying to like test to see if something ‑‑ like, if I’m using another ‑‑ like, my house computer uses Google, because my mom’s like, I don’t like DuckDuckGo! It’s not showing me the things I want to see! And that’s usually because Google, again, collects data about you and actively suggests results that it thinks are the things you’re searching for, whether or not they’re what you’re actually searching for.

For VPN use, I use ProtonVPN, mainly because it’s free and I don’t really have money to pay for a VPN right now. But I think ExpressVPN is one of the most popular ones. So I’d say it’s pretty trustworthy.

INGRID: Yeah, I’ve used ExpressVPN. I’ve seen that it’s ‑‑ yeah. It’s generally, I think, a well‑regarded one. I think that’s partly why it costs the money it costs. (Laughs) So I think ‑‑ yeah. If you don’t want to have to keep paying for it; but if you’ve already paid for it, yeah, keep using it.

What are the alternatives for encryption?

Yeah. “Can we talk about some alternatives for encryption, assuming a back door is created?”

OLIVIA: This isn’t ‑‑ oop.

INGRID: Go ahead.

OLIVIA: This isn’t really an alternative for encryption, but I think one of the things that we could start doing is ‑‑ less so would it be, like, trying to function without encryption, but instead encrypting our messages ourselves. Because technically, you could have end‑to‑end encryption over Instagram DM if you do the hand work of encrypting the messages that you send by yourself. Bleh! Tripped over my tongue there.

So there are a lot of apps, specifically for e‑mail, I’m thinking of? Like, Enigmail, and Pretty Good Privacy, that are essentially tools that you can use to “hand encrypt,” in quotation marks, your e‑mails, so you don’t have to depend on someone doing that for you. Right, the government can’t knock on your door and say you’re not allowed to encrypt anymore. And encryption algorithms are mathematical things. So you wouldn’t be able to make one that’s like kind of broken. The ones that we have now are… as long as ‑‑ like, Signal for instance is very public about the algorithms that they use, and that’s how we know that we can trust them. Because other people can check them, and they’re like, yeah, it’s really ‑‑ it would take a computer about a thousand years to crack this. And so we’re able to use those same algorithms by ourselves without depending on other platforms to do that work for us. And it would suck to have to interact with each other with that level of friction? But it is possible to continue to have safe communications.
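
As one illustration of what “hand encrypting” before you paste something into an untrusted platform can look like, here is a sketch using the Fernet recipe from Python’s cryptography library (pip install cryptography). This is a shared-key approach rather than PGP, so it assumes you and the person you’re writing to have already exchanged the key over some channel you trust.

```python
# Encrypt a message yourself, then paste the resulting text into any DM;
# only someone holding the same key can read it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # share this once, securely, with your friend

token = Fernet(key).encrypt(b"the actual message")
print(token.decode())         # this gibberish is what you'd paste into the DM

# Your friend, holding the same key, decrypts on their end:
print(Fernet(key).decrypt(token))  # b'the actual message'
```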

BLUNT: Yeah, and I think just in general, if you’re unsure about the security of the messaging system that you’re using? Like, right now, we’re using Zoom, and we had this conversation a bit yesterday. But I’m speaking on Zoom as if I were speaking in public. So if I were to say ‑‑ if I wanted to talk about my personal experiences, potentially I would phrase it as a hypothetical, is also one way. So just slightly changing the ways that you speak, or… Yeah. I think that’s also an option. Go ahead, sorry.

OLIVIA: No, I agree. Just bouncing off with the people that you’re talking to that, like, hey, we’re not going to talk about this. And not being, like, reckless. So in a, like in a public forum, don’t like post about the direct action that’s happening on Sunday at city hall. Things like that are not things ‑‑ just like using, in that sense, using discretion, at that point.

What is the back door issue and how does it relate to encryption?

BLUNT: Someone says: “So the back door issue is for companies that encrypt for us?”

INGRID: Basically, yeah. The ‑‑ the back door issue, or like what, I guess… the back door issue is not ‑‑ and it’s also not necessarily, like, all encryption would stop working. Right? It would be something like… you know, the government ‑‑ like a government saying, hey, WhatsApp, we want access to conversations that currently we can’t have access to because WhatsApp communications are encrypted, and ordering WhatsApp to do that. And one would hope? (Laughs) That ‑‑ like, companies also know that they have a certain amount of, like, brand liability… when they remove security features. So it’s something that would probably be known about? I don’t think that it would be done ‑‑ like, I would hope it wouldn’t be done surreptitiously? But, yeah. It’s more about, like, whether or not certain ‑‑ like, previously considered secure communications would become compromised. It wouldn’t necessarily end the possibility of ever, you know, deploying encryption ever again. It would be more of a service by service thing.

BLUNT: We still have some time for more questions, if anyone has any. Please feel free to drop them into the Q&A.

And maybe if Ingrid and Olivia, if you wanted to chat a little bit about what we’ll be talking about tomorrow, folks might have an idea of other things that they might want clarity on, or other things that they are really hoping might be covered.

What will be covered in part 3 of the digital literacy series?

OLIVIA: Yeah, tomorrow we’re gonna talk a lot about surveillance, like more specifically. So like, surveillance that’s done on platforms, and in ‑‑ but also like talking both about surveillance capitalism and state surveillance, and how they ‑‑ the different ways that they might cause harm for someone who’s like trying to use the internet. Yeah. I think those are the most ‑‑ the biggest points? But also thinking about… like, mitigation.

INGRID: Yeah. And we’re ‑‑ and in the context of state surveillance, we’re primarily talking about when the state utilizes platforms in the service of surveillance, or obtains information from platforms. There are a myriad of other ways that the state can ‑‑ that, you know, police departments or federal or state governments can engage in surveillance of people, digitally or otherwise. But partly because the scale and scope of that topic is very, very large, and because we know people are coming from lots of different settings, and maybe like ‑‑ and we don’t personally know the ins and outs of the surveillance tools of every police department in the world? We didn’t want to kind of put forward, like, examples of tools that might just be, like ‑‑ that would mostly just create, like, greater like anxiety or something, or that wouldn’t necessarily be an accurate depiction of threats or realities that people might face.

If there is interest in more of those things, we’re happy to do questions about them in the thing? But it’s not something that we did ‑‑ we’re doing a deep dive into, because… again, it seems like that might be better to do more tailored questions to specific contexts.

BLUNT: I’m curious ‑‑ did you see the EFF launched the searchable database of police agencies and the tech tools that they use to spy on communities? Speaking of not spying on people! (Laughing)

INGRID: Yeah, but that’s the thing ‑‑ another thing is like, well, those tools are here. God bless these agencies for putting that work together.

BLUNT: Cool. So I’ll just give it like two or three more minutes to see if any other questions pop in… And then I’ll just turn off the livestream, as well as the recording, in case anyone would prefer to ask a question that’s not public.

How to Build Healthy Community Online

Okay. So we have two more questions that just popped in… “Could you speak to building healthy community online? How to do that, how to use platforms for positive information spread?”

OLIVIA: So, when it comes to building healthy communities, I think… it really comes down to, like, the labor of moderation. Like, it has to ‑‑ it has to go to someone, I think. We often have ‑‑ one of the problems with a lot of platforms online is that they’re built by people who don’t really, like, see a need for moderation, if that makes sense? Like, one of the issues with Slack is that there was no way to block someone, in Slack. And a lot of the people who originally were working on Slack couldn’t conceive of a reason why that would be possible ‑‑ couldn’t conceive of a reason why that would be necessary. While someone who’s ever experienced workplace harassment would know immediately why that kind of thing would be necessary, right?

And so I think when it comes to like building healthy communities online, I think like codes of conduct are really honestly the thing that’s most necessary, and having people or having ‑‑ creating an environment on that specific profile or in that specific space that kind of invites that energy in for the people who are engaging in that space to do that moderation work, and to also like promote… pro‑social interactions, and to like demote antisocial interactions, and things like that.

BLUNT: I also think that we ‑‑ Hacking//Hustling also on the YouTube channel has… a conversation between myself and three other folks talking about sort of social media and propaganda and a couple of harm reduction tips on how to assess the, like, truthfulness of what you’re sharing and posting. And I think that’s one thing that we can do, is just take an extra second before re‑tweeting something and sharing something, or actually opening up the article before sharing it and making sure that it’s something that we want to share… is a simple thing that we can do. I know things move so fast on these online spaces that it’s sometimes hard to do, but I think that… if you’re able to assess that something is misinformation, or maybe it’s something that you don’t want to share, then that slows down the spread of misinformation.

Thank you so much to everyone and their awesome questions. I’m just going to take one second to turn off the YouTube Live and to turn off the recording, and then see if folks have any questions that they don’t want recorded.

Okay, cool! So the livestream has stopped, and the recording is no longer recording. So if folks have any other questions, you’re still on Zoom, but we would be happy to answer anything else, and I’ll just give that two or three more minutes… And if not, we’ll see you tomorrow at noon.

(Silence)

Okay. Cool! Anything else, Ingrid or Olivia, you want to say?

INGRID: Thank you all for coming. Thank you, again, to Cory for doing transcription. Or, live captioning. Yeah.

BLUNT: Yeah, thank you, Cory. Appreciate you.

OLIVIA: Thank you.

CORY DOSTIE: My pleasure!

BLUNT: Okay, great! I will see you all on ‑‑ tomorrow! (Laughs) Take care.

INGRID: Bye, everyone.

OLIVIA: Bye, everyone!