Digital Literacy Training (Part 2) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

Platforms and You: Digital Literacy Training

https://www.youtube.com/watch?v=4E2kq5Cu9Qo

Digital Literacy Training (Part 2) Transcript

OLIVIA: Hi, everyone. My name’s Olivia. My pronouns are she/her. Co‑facilitating with Ingrid. And some of the values that this particular digital literacy/defense workshop will be centered in include cyber defense, less as a form of military technology, right? Reframing cryptography as more of an abolitionist technology. Right? And cyber defense as an expression of mutual care and a way of accumulating community‑based power. And in that way, also thinking of ways to teach this type of material in ways that are antiracist, but also anti‑binary and pro‑femme.

And so, we’re really ‑‑ we really care a lot about making sure that this is trauma‑informed and teaching from a place of gentleness, considering the previous digital harm people have experienced and trying not to relive it. So if you need to take a break, remember that this is being recorded and posted online so you will be able to access it later.

INGRID: Great. Thank you, Olivia. My name’s Ingrid. I use she/her pronouns. And welcome back to people who were here yesterday. Today, we are talking about platforms! And in this context, we primarily mean social media sites like Facebook and Instagram. Some of this, you know, can be applied to contexts where people kind of buy and sell stuff. But essentially, we’re talking about places where people make user accounts to communicate with each other ‑‑ with kind of more of a focus on the large corporate ones that many people are on!

There were four sort of key concepts we wanted to cover. There’s a lot in them, so we’ll try to move through them smoothly. First kind of being algorithmic curation and the way that can produce misinformation and content suppression. And some of the laws and legal context that are defining decisions that platforms make. We talked a little bit about this yesterday, but, you know, reiterating again: Platforms are companies, and a lot of decisions they make come out of being concerned with keeping a company alive, more than taking care of people.

What is algorithmic curation and why does it matter?

So we’re going to start with algorithmic curation. And I think there’s a thing also that came up yesterday was a tendency for technical language to kind of alienate audiences that don’t know as much about computers or math, I guess. An algorithm is a long word that ‑‑ (Thud) Sorry. That’s the sound of my dog knocking on my door, in the background.

Broadly speaking, an algorithm is a set of rules or instructions ‑‑ (clamoring) Excuse me. One second. She just really wants attention. I’m sorry you can’t see her; she’s very cute!

But… An algorithm is a set of rules or instructions for how to do a thing. You could think of a recipe or a choreography. The difference between an algorithm used in the context of a platform and an algorithm that contains, you know, ingredients for a recipe is that there is a lot less flexibility in interpretation in a platform’s algorithm. And it’s usually applied on a much larger scale.

And the reason that a lot of platforms… deploy algorithmic curation, and what algorithmic curation is experienced as, is often recommendation algorithms? And algorithms that determine what content is going to show up in a social media timeline.

So I am ‑‑ you know, I have recently been watching Avatar: The Last Airbender on Netflix. I am 33 years old. And… (Laughs) I found that, you know, Netflix wants to make sure that I know they have lots of other things that I might like because I liked that show. Right? And you could kind of think of algorithms as kind of being if this, then that rules. Like, if somebody watches this show, look at all the other people who watched that show and the other shows that they watched, and suggest that, you know, you probably will like those.
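The if‑this‑then‑that idea can be sketched in a few lines of Python. This is a toy co‑occurrence recommender, not anything Netflix actually uses; the users and watch histories here are made up purely for illustration:

```python
from collections import Counter

# Hypothetical watch histories (made-up users and shows).
histories = {
    "user_a": {"Avatar: The Last Airbender", "The Dragon Prince"},
    "user_b": {"Avatar: The Last Airbender", "The Dragon Prince", "She-Ra"},
    "user_c": {"Avatar: The Last Airbender", "Kipo"},
    "user_d": {"Great British Bake Off"},
}

def recommend(seed_show, histories, top_n=2):
    """If someone watched `seed_show`, count what else its co-watchers
    watched, and suggest the most common other titles."""
    counts = Counter()
    for shows in histories.values():
        if seed_show in shows:
            for other in shows - {seed_show}:
                counts[other] += 1
    return [show for show, _ in counts.most_common(top_n)]

print(recommend("Avatar: The Last Airbender", histories))
# "The Dragon Prince" ranks first: the most co-watchers watched it.
```

Real recommendation systems are vastly more complicated, but the shape is the same: your behavior is matched against everyone else's, and the overlap becomes a suggestion.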

And platforms give the rationale for deploying these kinds of algorithms partly just trying to help people? Right? Like, discover things, because there’s so much content, and you’ll get overwhelmed, so we prioritize. What it actually kind of in practice means is trying to keep you using a service. Right? Like, I’m probably going to cancel my Netflix account once I finish Avatar, so. But oh, like no, now I gotta watch The Dragon Prince. Right?

I think… Do I do this part, or Olivia?

OLIVIA: I can do it?

INGRID: Sorry! I couldn’t remember how we split up this section.

OLIVIA: I… So… In early social media, we didn’t really have super complicated algorithms like the ones we do now. You had the, like, find‑your‑friends algorithms that would basically show you, perhaps, the friends of your friends. But the people you follow were mostly the only people whose posts you would see.

But now that we’re able to collect more user data about how you’re using the platform, as well as your activities off the platform, now algorithms are able to become more complicated, because there’s so much more information that they’re able to use.

So some of the things that might be going into your algorithmic curation are listed here. It’s a really long list, and even this isn’t an exhaustive list of everything that might be factoring into the algorithm? ‘Cause so few platforms actually disclose what contributes to the stuff that you see, and what you don’t see, and who sees your own content, and who doesn’t. But one thing that we know for sure is that the way that these platforms are designed is specifically in order to make money. And so following that motive, you’re able to kind of map a lot of their predicted behavior.

And one of the really big consequences of these algorithmic filter bubbles is misinformation. Right? So because we’ve all been inside for the past couple of weeks and months, we’re all really susceptible to seeing really targeted misinformation, because we’ve been online a lot. And so it’s quite possible that more data is being collected about you now than ever before. Platforms make money off of our content, but especially content that encourages antisocial behaviors. And when I say antisocial behaviors, I mean antisocial as opposed to pro‑social behaviors. Pro‑social behaviors encourage a healthy boundary with social media, like light to moderate use. Comforting people! Letting people know that they rock! Right? Cheering people up. Versus antisocial behaviors, which, while much less healthy, encourage people to use social media like three times as much. Right? People spreading rumors; people posting personal information; people being ignored or excluded; editing videos or photos; saying mean things. Right? And so that makes an environment where misinformation does super well, algorithmically.

Through their design, especially platforms like Instagram and Twitter, they prioritize posts that receive lots of attention. We see this like how people ask others to “like” posts that belong to particular people so that they’ll be boosted in the algorithm. Right? They prioritize posts that get a lot of clicks and that get a lot of like feedback from the community. And it’s really easy to create misinformation campaigns that will take advantage of that.
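The attention‑prioritizing design described here can be sketched as a ranking function. The weights below are invented for illustration ‑‑ platforms don’t disclose their real ones ‑‑ but they show how weighting comments and shares heavily will naturally push outrage‑bait above pro‑social content:

```python
# Toy engagement-based ranking. The weights are hypothetical;
# real platforms keep theirs secret.
def engagement_score(post):
    return (post["likes"] * 1.0
            + post["comments"] * 3.0   # replies and arguments weighted heavily
            + post["shares"] * 5.0)    # amplification weighted most of all

posts = [
    {"text": "wholesome pet photo", "likes": 120, "comments": 4, "shares": 2},
    {"text": "outrage-bait rumor",  "likes": 40,  "comments": 90, "shares": 60},
]

# Build the "timeline" by sorting on score, highest first.
timeline = sorted(posts, key=engagement_score, reverse=True)
print([p["text"] for p in timeline])
# The rumor outranks the pet photo despite having a third of the likes,
# because it provoked far more comments and shares.
```

Nothing in a function like this checks whether a post is true; it only measures how much reaction the post generates, which is exactly what misinformation campaigns exploit.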

OLIVIA: Nice. That was a really quick video from the Mozilla Foundation. But I wanted to clarify that there’s this assumption that people who fall for misinformation are like kinda dumb, or they’re not like thinking critically. And this is like kind of a really ableist assumption, right? In truth, anyone could unknowingly share misinformation. That’s like how these campaigns are designed, right? And there’s so many different forms that misinformation takes.

It could be like regular ole lies dressed up as memes; fabricated videos and photos that look super real, even though they’re not; performance art and like social experiments? (Laughing) Links to sources that don’t actually point anywhere? And it could be information that was originally true! But then you told it to your friend, who got the story kind of confused, and now it’s not true in a way that’s really, really important. And of course, there’s also conspiracy theories, and misleading political advertisements, as well.

But sometimes, misinformation is less about being told a lie, and more about not being told the truth, if that makes sense.

So, the easiest way to avoid misinformation is to just get in the habit of verifying what you read before you tell someone else. Even if you heard it first from someone that you trust! Right? Maybe one of your friends shared misinformation. But my friend is a really nice, upstanding citizen! Right? There’s no way that… I don’t know; being a citizen doesn’t matter. My friend is a nice person! And people who share misinformation aren’t always doing it to stir the pot. They just got confused, or they just… ended up in a trap, really.

So, fact‑check the information that confuses you, or surprises you. But also fact‑check information that falls in line with your beliefs. Fact‑check all of it. Because you’re more likely to see misinformation that falls in line with your beliefs because of the algorithmic curation that we talked about before. Right? We have an internet that’s like 70% lies.

So, two sites that were pretty popular when I asked around how people fact‑check were PolitiFact and Snopes.com. You could also use a regular search engine. There’s Google, but you could also use DuckDuckGo at the same time. You could ask a librarian. But also, if you look at a post on Instagram or Twitter and scroll through the thread, there might be people saying, like, hey, this isn’t true; why’d you post it? So always be a little bit more thorough when you are interacting with information online.

How does algorithmic curation contribute to content suppression and shadowbanning?

INGRID: So the next sort of thing we wanted to talk about that’s a, you know, consequence of algorithmic curation and companies, like, platforms being companies, is suppression of content on platforms. Right? Platforms have their own terms of service and rules about what people can and can’t say on them. And those terms of service and rules are usually written in very long documents, in very dense legal language that can make it hard to understand when you break those rules, and are kind of designed to, you know, be scrolled through and ignored.

But because a lot of the decisions about what is, like, you know, acceptable content or unacceptable content are, again, being made by an algorithm looking for keywords, for example… the platforms can kind of downgrade content based on assumptions about what’s there.
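A keyword‑based downranking system might look something like this sketch. The flagged terms and the penalty value are entirely hypothetical; no platform publishes its real list or weights:

```python
# Hypothetical flagged-keyword list and penalty factor.
FLAGGED = {"flaggedterm", "otherterm"}

def visibility_score(post_text, base_score=1.0, penalty=0.1):
    """Downgrade a post's ranking score if any word matches a flagged
    keyword. The post isn't deleted; it just surfaces far less often,
    which is roughly what "shadowbanning" feels like from the outside."""
    words = set(post_text.lower().split())
    if words & FLAGGED:
        return base_score * penalty
    return base_score

print(visibility_score("an ordinary post"))          # 1.0
print(visibility_score("a post about flaggedterm"))  # 0.1 — quietly downranked
```

Note that a crude word match like this both over‑flags (it can’t tell context) and under‑flags (spacing out or misspelling a word evades it), which is part of why this kind of automated moderation behaves so unpredictably.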

So… shadowbanning is a concept that I imagine many of you have heard about or, you know, encountered, possibly even experienced. It actually originally is a term that came from like online message groups and forums. So not an automated algorithm at all. Basically, it was a tool used by moderators for, you know, forum members who liked to start fights, or kind of were shit‑stirrers, and would basically be sort of a muting of that individual on the platform. So they could, you know, still post, but people weren’t seeing their posts, and they weren’t getting interaction, so they weren’t getting whatever rise they wanted to get out of people.

Today, the more common kind of application of the term has been describing platform‑wide essentially muting of users from, like, the main timeline, or making it hard to search for that individual’s content, based on what is thought to be automated interpretation of content. I say “what’s thought to be automated interpretation of content,” because there is a lot that is only kind of known about what’s happening on the other side of the platform. Again, yeah, what it often looks like is not showing up in search unless someone types the entirety of a handle; even if you follow that person, that person’s content not showing up in the main timeline, like in their follower’s feeds, not showing up in a hashtag…

And, shadowbanning is like a really gaslighting experience? Because it’s hard to know: is it that people just don’t like what I’m saying, or people just don’t care anymore, or am I being actively suppressed and people just can’t see me? And if it’s something that has happened to you, or is happening to you, one thing that is important to remember is, like, you will feel very isolated, but you are in fact not alone. This is a thing that happens. It’s been, over time, kind of dismissed by platforms as myth ‑‑ and I wonder if, in some ways, perhaps their aversion to the term comes from associating it with this less automated context? Because it’s like, well, we’re not deliberately trying to mute anybody; it’s just our systems kind of doing something! But the systems are working ‑‑ you know, they designed them, and they’re working as designed. Right?

Instagram recently, in making an announcement about work that they want to do to address sort of implicit bias in their platform, sort of implicitly acknowledged that shadowbanning exists. They didn’t actually use the term? But it is interesting to see platforms acknowledging that there are ways that their tools will affect people.

In terms of the “what you can dos” and ‑‑ Blunt, if you have anything that you want to add to that, I’d totally be happy to hear because I’m far from an expert. It’s a lot of what the sort of like best practices tend to be based on what other people have shared as like working for them. So basically, I don’t want to tell you anything and say like this is a guarantee this will like work for you in any given context. One thing that I have seen a lot is, basically, posting really normie content? Like, just going very off‑script from whatever your normal feed is, and doing something like, I don’t know, talking about your pet, or having ‑‑ you know, talking about like cooking. Basically just like changing what you’re doing. Another approach is getting your friends and followers to engage with your content, so that it’s seen as popular, so that it will like return to the timeline.

Blunt, is there anything else that you would want to include in there?

BLUNT: Yeah, I think something that communities found to be useful is that if you are going to be automating posts, to do it on a backup account, so that what’s flagged as bot‑like behavior is separate ‑‑ so your promo account might be shadowbanned, but you might have a wider reach to direct people to where to give you money. But it’s a really complex topic. I’ve been thinking about it a lot right now ‑‑ Hacking//Hustling is currently studying shadowbanning. So far, we’ve found our data backs up a lot of what sex workers know to be true about how shadowbanning sort of works, what seems to trigger it and what seems to undo it. But as I was making a thread about the research, which included both the words “sex worker” and “shadowban,” I was like, I don’t even know if I can say either of these words without being shadowbanned! So I write it with lots of spaces in it, so hopefully the algorithm won’t recognize it, which also makes it inaccessible to anybody using a screen reader.

So, I don’t know. I know there was a class on how to reverse a shadowban, but I also think that after the global protests started that the algorithm changed a little bit, because we were noticing a lot ‑‑ a higher increase of activists and sex worker content suppressed in the algorithm.

INGRID: Yeah. That’s ‑‑ do you know when you’re going to be putting out some of the research from ‑‑ that Hacking//Hustling’s been doing?

BLUNT: Yeah, we just tweeted out a few of our statistics in light of the recent Twitter shenanigans, and… (Laughs) Some internal screenshots being shared, where they say that they blacklist users? Which is not a term I knew that they used, to describe this process. We’re in the initial analysis of the data stages right now, and we’ll probably ‑‑ our goal is to share this information primarily with community, so we’ll be sharing findings as we are able to, and then the full report will probably come out in like two to three months.

Can algorithms judge video content?

INGRID: “Have you found that the algorithm can judge video content? I know nudity in photos are flagged.” I would defer to Blunt on this question, actually.

BLUNT: I would say, yeah. I’ve had videos take ‑‑ I have lost access to YouTube from videos. So I think anything that you post with a… either a link… for sex work, or just links in general and photos are more likely to be flagged. So, like, personally, I notice my posts that are just text‑based show up higher and more frequently in the algorithm and on the feed.

Which laws and politics surround content suppression?

INGRID: Mm‑hmm… yeah. So the other kind of form of suppression we wanted to mention and talk about is not as algorithmic. It’s when, you know, the state gets involved.

So platforms are companies; companies are expected to follow rules; rules are made by governments. Sometimes, it’ll kind of look like shadowbanning. So TikTok has been reported to basically down‑rank certain kinds of content on the site, or, like, not have it show up in a “For You” page, or on your follow page, depending on a country’s laws around homosexuality. Sometimes it’s, you know, a result of companies creating rules that are sort of presented as being about national security, but are actually about suppressing dissent. So in Vietnam and the Philippines, there have been rules made that let the contents of social media posts be treated as, you know, potential threats against the state, basically. And sometimes rules about protecting the vulnerable are actually about, you know, some moral majority bullshit. Which seems like a good time to start talking about sort of legal contexts!

And a lot of this is ‑‑ all of this particular section is really USA contexts. And I feel like I should ‑‑ I wanted to kind of give some explanation for that, because I feel weird doing this like broad sweep on, like, other kind of like countries’ approaches and focusing so much on the United States. But the reason for doing that is, basically, America ‑‑ as, you know, an imperialist nation! Tends to have an outsized impact on what happens on global platforms, overall. And there’s, you know, two reasons for that; one is that most of these companies are located in the United States, like their headquarters are here, so they are beholden to the laws of the place; but secondly, it’s also about sort of markets. Right? Like, the ‑‑ if you, you know. Like, if Facebook is like, we don’t need the American consumer base! Like, it’s probably going to affect their ability to make money.

And there are exceptions in terms of, like, the ways that other law, like, law kind of impacts platforms’, like, structure and decisions. And we talked a little bit yesterday about European privacy laws, but we’ll try and bring a little more in tomorrow about those.

First kind of category is like ‑‑ this is a little bit of a tangent, but it came up yesterday, so I wanted to kind of mention it. This is an image from the account shutdown guide that Hacking//Hustling made, that I did some work on. And basically, platforms that, you know, can facilitate financial transactions, which can be something like Stripe, PayPal, or Venmo… Basically, they have to work with banks and credit card companies. And banks and credit card companies can consider sex work‑related purchases to be like “high risk,” despite there being very little evidence that this is true? The reason sometimes given is the possibility of a charge‑back? Meaning, you know, hypothetically, heteronormative sitcom scenario: I don’t want my wife to see this charge on my bill! So he reports it, and it gets taken off. How much this is actually the case? Unclear. It’s also, like, they’re just kind of jerks.

But, you know, platforms don’t actually have a lot of ability to kind of decide ‑‑ like, to actually like argue with these companies? Because they control the movement of money. Around, like, everywhere? So, in some ways, it’s kind of ‑‑ you know, they kind of just have to fall in line. I mean, that being said, companies themselves are also like kinda dumb. I wasn’t sure whether this needed to be included, but this Stripe blog post explaining why businesses aren’t allowed? They have a section on businesses that pose a brand risk! And they have this whole thing about like, oh, it’s our financial partners don’t want to be associated with them! It’s not us! But, you know, like, fuck out of here, Stripe.

What is section 230?

Back to other laws! (Laughing) So. Section 230 is a term that maybe you’ve heard, maybe you haven’t, that describes a small piece of a big law that has a very large impact on how platforms operate and, in fact, on the fact that platforms exist at all. So in the 1990s, lawmakers were very stressed out about porn on the internet. Because it was 1996, and everyone, you know, didn’t know what to do. And a bill called the Communications Decency Act was passed in 1996. Most of it was invalidated by the Supreme Court? Section 230 was not. It’s section 230 of it; it’s a very long bill. It’s really important for how platforms operate, because it says that platforms, or people who run hosting services, are not responsible when somebody posts something illegal or, you know, in this case, smut. I, I can’t believe that there was a newspaper headline that just said “internet smut.” It’s so silly… But the platform, the hosting service ‑‑ they’re not responsible for that content; the original poster is responsible. Like, if you wanted to sue someone for libel, you would not sue the person who hosted a libelous website; you would sue the creator of the libelous website.

And this was initially added to the Communications Decency Act because there was concern ‑‑ really because of capitalism! There was concern that if people were afraid of getting sued because somebody used their services to do something illegal, or used their services to post something that they could get sued for, people would just not go into the business! They would not make hosting services. They would not build forums or platforms. And so removing that kind of legal liability… opened up more space for platforms to emerge. In some ways, it’s a fucked up compromise, in so far as it means that when Facebook does nothing about fascists organizing on their platform and fascists actually go do things in the world, Facebook can’t be held responsible for it. Right? I mean, the Charlottesville rally in 2017 started on Facebook. Facebook obviously got some bad PR for it, but, you know. Then again, exceptions that make platforms responsible for this or that tend not to be written around meaningfully supporting people with less power, but usually around what powerful people think are priorities. Such as the first effort, in 2018, to change or create exceptions to Section 230. Which was FOSTA‑SESTA!

What is FOSTA-SESTA?

It was sold originally as fighting trafficking? The full ‑‑ FOSTA and SESTA are both acronyms. FOSTA is the Allow States and Victims to Fight Online Sex Trafficking Act. SESTA is the Stop Enabling Sex Traffickers Act. But the actual text of the law uses the term, “promotion or facilitation of prostitution and reckless disregard of sex trafficking.” So basically, it’s kind of lumping sex work into all sex trafficking. Which… Yeah. That’s ‑‑ not, not so wise.

And what it essentially creates is a situation where companies that allow prostitution, or facilitation of prostitution, and reckless disregard of sex trafficking to happen on their platform? Can be held legally responsible for that happening. The day that FOSTA‑SESTA was signed into law, Craigslist took down the Personals section of its website. It has generally heightened scrutiny of sex worker content across platforms, and made it a lot harder for that work to happen online.

What is the EARN IT Act?

And in some ways, one of the scary things about FOSTA‑SESTA is the way in which it potentially emboldens further attempts to create more overreaching laws. The EARN IT Act is not a law, yet. It is one that is currently being… discussed, in Congress. The way that it’s been framed is as a response to an investigative series at the New York Times about the proliferation of sexual images of children on platforms. And this, this is a true thing. Basically, any service that allows uploading of images has this problem. Airbnb direct messages can be, and are, used. It’s a real thing. But the actual law is a very cynical appropriation of this problem, with a solution that really serves more to kind of control and contain how the internet, like, works.

It proposes creating a 19‑member committee of experts, headed by the Attorney General, who would be issuing best practices for companies and websites, and allow those that don’t follow the best practices to be sued. And what “best practices” actually means is currently ‑‑ is like very vague in the actual text of the bill. The word “encryption” does not actually appear in the text of the bill, but its authors have a long history of being anti‑encryption. The current Attorney General, Bill Barr, has expressed wanting back doors for government agencies so that they can look at encrypted content. And likely, you know, it’s thought it could include “best practice” things like make it easier for the government to spy on content.

This is ‑‑ you know. I know somebody who worked on this series, and it is so frustrating to me to see that effort turn into, how about we just kill most of what keeps people safe on the internet?

So I mention, this is something that is more good to pay attention to. Write your Congress member about. Hacking//Hustling has done ‑‑

What is encryption?

Oh, Blunt would like me to define encryption. So it’s a mechanism for keeping information accessible only to people who know how to decode it. It is a way of keeping information safe, in a way! Encryption was not inherently part of the early internet, because the internet was originally created by researchers working for the government who thought it would just be government documents moving around it, so they were all public anyway. But it has since been kind of normalized into a part of, like, just using the internet as we know it today. In this context, it basically means that if I want to send you a message, the only people who can read that message are, like, you and me ‑‑ not the service that is moving the message around, or the chat app that we’re using.

That was ‑‑ I feel like that was a little bit garbled, but… I don’t know if you like ‑‑ if, Olivia, is there anything that you would want to add to that? Or a better version of that? (Laughs)

OLIVIA: I think, I think you’ve mostly said it, in terms of it’s like a way of encoding information so that someone might know the information is present, but they don’t know what it says. So, when we have things like end‑to‑end encryption on the internet, it means that something is encrypted on my side, and no matter what third party tries to look at the message that I sent to you while it’s in transit, it can’t be seen then, and it also can’t be seen by them on the other side, because the person who I sent the message to has their own, like, code that allows them to decode the message that’s specific to them. And this happens on a lot of platforms without our knowledge, in the sense that apps that are end‑to‑end encrypted, like Signal, don’t really tell you what your key is. Even though you have one, and the person that you’re talking to has one, it’s not like you’re encoding and decoding yourself, because the math is done by the app.

But if the bill goes out of its way to exclude encryption, then it might make it potentially illegal for these services to exist, which would be a really bad thing for journalists and activists and sex workers and, like, everybody.

INGRID: Yeah. And additionally, within the world of people who work on encryption and security tools, the idea is that creating a back door, or some way to sneakily decrypt a thing without somebody knowing, creates a vulnerability that essentially anyone else could exploit. Like, if it exists there, somebody will hack it and figure it out.

OLIVIA: There’s no such thing as a door that only one person can use.
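To make the idea of encryption concrete, here is a toy sketch using a one‑time pad (XOR). This only illustrates the principle that ciphertext is unreadable without the key ‑‑ real apps like Signal use far more sophisticated protocols, and a raw XOR pad should never be used in practice:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the matching byte of `key`.
    XOR is its own inverse, so the same function encrypts and decrypts."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
# The shared secret: in end-to-end encryption, only the sender and
# recipient ever hold this, not the service carrying the message.
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)    # what a relaying server would see
recovered = xor_bytes(ciphertext, key)  # only a key holder gets this back

print(ciphertext)  # unreadable bytes
print(recovered)   # the original plaintext
```

The "back door" debate is about whether a third party should get its own way to recover `message` without `key` ‑‑ and the point Olivia makes is that any such opening is usable by anyone who finds it, not just the government.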

What’s the connection between the EARN IT Act and The New York Times?

INGRID: A question ‑‑ EARN IT is not solely a response to an article by the New York Times? It was a series of seven articles. And when I say “in response,” that is the statement made by the people who wrote the bill. I think that it was more that EARN IT was proposed by some Congress people who saw an opportunity to cheaply exploit outrage over, like, abuse of children, to put forward some policies that they would want to have happen anyway. And I guess the reason I mention it is because I think it’s also important to acknowledge that, yes, it was an entire series from the New York Times. And honestly, the main takeaway from that series to me was more that, like, companies are dropping the ball? Not that we need the government to come in ‑‑ or, if there’s supposed to be government making rules about how companies address this issue, I don’t think the solution is to create a committee that pursues telling the companies what to do in this way that doesn’t actually seem to have anything to do with the actual problem they’re talking about.

BLUNT: Totally. And we actually ‑‑ I just want to also say that on the 21st, Hacking//Hustling will be hosting a legal literacy panel, where we will be talking about the ways that fear and threats to national security are used to pass… laws that police us further, that want to end encryption, that want to do away with our privacy. So if you check out HackingHustling.org slash events, I think, you should be able to find out more about that. Again, that’s at 7:00 p.m. on the 21st. You’ll be able to learn a lot more. We’ll do an update on EARN IT, where to look for updates, and similar legislation that’s being passed.

INGRID: I did see ‑‑ there was like a ‑‑ I saw an article that said a bill was being worked on, basically in response to EARN IT, trying to say, like, yes, this problem you’re claiming you’re going to address is bad, but this is not the way to do it, and trying to come up with an alternative. I think Ron Wyden was involved. Do you know anything about this?

BLUNT: Yeah, I think that’s ‑‑ yes. I mean, yes, we will talk about that on the 21st. I’m not ‑‑ we will have our legal team talk about that, so I don’t say the wrong thing.

INGRID: Okay, great. Moving forward!

What are some secure and private platform alternatives?

Olivia, do you want to do the platform alternatives? I feel like I’ve just been talking a lot!

OLIVIA: Sure! So, it kind of sucks that we’re all kind of stuck here using… really centralized social media platforms that we don’t control, and that kind of, in like nefarious and really complicated ways, sometimes control us. And so you might be thinking to yourself, gee, I wish there was something I could use that wasn’t quite Instagram and wasn’t quite Twitter that could let me control my own information.

So, we have some alternatives. One of these alternatives is called Mastodon. And… Essentially, it’s an independent ‑‑ is that the word? I think the word is ‑‑

BLUNT: An instance?

OLIVIA: It’s an instance! There you go. It’s an instance of… Oh, no, I don’t think that’s the word, either.

Basically, Mastodon is a very ‑‑ is a Twitter‑like platform that’s not Twitter, and instead of going on like a centralized place, you can set up your own Mastodon instance for your community. So instead of having ‑‑ like, you might have Mastodon instances that are called other names? Kind of like ‑‑ would a good analogy be like a subreddit?

INGRID: Maybe. I think, like, the existence of ‑‑ so, Mastodon is also from a project to create… like, open standards for social networking tools. I think we talked a little bit about sort of standardizing of browsers and web content. And in the last decade, one that’s been in development is one for just creating an open standard of what, like, a social network should do and could be. The protocol is actually called ActivityPub, and Mastodon is built on top of it. It’s, it’s more ‑‑ it’s kind of like… the term used for how they’re actually set up is like “fed rated.”

OLIVIA: Federated!

INGRID: Yeah. You set up one that’s hosted on your own. And it can connect to other Mastodon sites that other people run and host. But you have to decide whether or not you connect to those sites. And I think the, the example ‑‑ the thing that ‑‑ sorry. I can jump off from here, ’cause I think the next part was just acknowledging the like limitations. (Laughs) ‘Cause I think ‑‑ so… With ‑‑ so, this is a screenshot of Switter, which had been kind of set up as a sex work‑friendly alternative to Twitter, after FOSTA‑SESTA. And… It has run into a lot of issues with staying online because of FOSTA‑SESTA. Their hosting in ‑‑ like, I think Cloudflare was originally their hosting service, and they got taken down, because the company that like made ‑‑ you know, the company that was hosting it didn’t want to potentially get hit with, you know, like, liabilities because FOSTA‑SESTA said you were facilitating sex trafficking or some shit.

So it’s, it’s not a, necessarily, like, obvious ‑‑ like, it’s not easy, necessarily, to set up a separate space. And whether setting up a separate space is what you want is also, like, a question.

OLIVIA: Another option is also… Say you have a community that’s on Instagram, or on Twitter, and you guys are facing a lot of algorithmic suppression, and you’re not able to, like, reliably communicate to the people who like your page. You could also split it both ways. You could try having an additional way of communicating to people. So you might have like a Twitter page where you have announcements, but then have a Discord server or something where you communicate with community members, or similar things.

And those types of interventions would essentially allow you to avoid certain types of algorithmic suppression.

INGRID: Yeah. And in a way, the construction of an alternative, it’s, I think… the vision probably is not to create, like, a new Facebook, or a new, you know, Twitter, or a new Instagram, because you will just have the same problems. (Laughs) Of those services. But rather to think about making sort of intentional spaces, like, either ‑‑ like, within, you know, your own space. This is a screenshot of RunYourOwn.social, which is a guide created by Darius Kazemi about ‑‑ you know, what it is to create intentional online spaces. I just find it really, really useful in thinking about all this stuff.

All right. Those were all our slides…

BLUNT: I actually just wanted to add one little thing about that, just to follow up on those previous two slides. I think it’s important to note, too, that while there are these alternatives on Mastodon and in these various alternatives, that’s often not where our clients are? So I think that it can be helpful for certain things, but the idea that entire communities and their clients will shift over to a separate platform… isn’t going to, like, capture the entire audience that you would have had if you had the same access to these social media tools that your peers did. So I think just one thing that I’ve been recommending for folks to do is to actually, like, mailing lists I think can be really helpful in this, too, to make sure that you have multiple ways of staying in touch with the people that are important to you, or the people that are paying you. Because we don’t know what the stability is of a lot of these other platforms, as well.

INGRID: Yeah.

OLIVIA: E‑mail is forever.

BLUNT: Yeah.

INGRID: Yeah, that’s a really, really good way to ‑‑ you know, point. And thank you for adding that.

Okay! So I guess… Should we ‑‑ I guess we’re open, now, for more questions. If there’s anything we didn’t cover, or anything that you want kind of more clarification on… Yeah.

I see a hand raised in the participant section, but I don’t know if that means a question, or something else, or if… I also don’t know how to address a raised hand. (Laughs)

BLUNT: Yeah, if you raise your hand, I can allow you to speak if you want to, but you will be recorded, and this video will be archived. So, unless you’re super down for that, just please ask the questions in the Q&A.

What is Discord and how secure is it?

Someone asks: Can you say more about Discord? Is it an instance like Switter or Mastodon? What is security like there?

OLIVIA: So Discord is a ‑‑ is not an instance like Switter and Mastodon. It’s its own separate app, and it originated as a way for gamers to talk to each other? Like, while they’re playing like video games. And so there’s a lot of, a lot of the tools that are currently on it still make kind of more sense for gamers than they do for people who are talking normally.

A Discord server isn’t really an actual server; it’s more so a chat room that can be maintained and moderated.

And security… is not private. In the sense that all chats and logs can be seen by the folks at, like, at Discord HQ. And they say that they don’t look at them? That they would only look at them in the instance of, like, someone complaining about abuse. So, if you say like, hey, this person’s been harassing me, then someone would look at the chat logs from that time. But it’s definitely not… it’s not a secure platform. It’s not‑‑ it’s not end‑to‑end encrypted, unless you use like add‑ons, which can be downloaded and integrated into a Discord experience. But it’s not out of the box. It’s mostly a space for, like, communities to gather.

Is that helpful…?

INGRID: “Is the information on the 21st up yet, or that is to come?” I think this is for the event ‑‑

BLUNT: Yeah, this is for July 21st. I’ll drop a link into the chat right now.

What are some tips for dealing with misinformation online?

INGRID: “How would you suggest dealing with misinformation that goes deep enough that research doesn’t clarify? Thinking about the ways the state uses misinformation about current events in other countries the U.S. uses to justify political situations.” (Sighs) Yeah, this is ‑‑ this is a hard one. The question of just ‑‑ yeah. The depths to which misinformation goes. I think one of the… really hard things about distinguishing and responding to misinformation in ‑‑ in, like, right in this current moment… is that it is very hard to understand who is an authoritative source to trust? Because we know that the state lies. And we know that the press repeats those lies! Right? Like, I imagine some of you were alive in 2003. Maybe some of you were born in 2003. Oh, my goodness.

(Laughter)

I ‑‑ again, I feel old. But… Like, the ‑‑ and you know, it’s not even ‑‑ like, you can just look at history! Like, there are… there are lots of legitimate reasons to be suspicious! Of so‑called authoritative institutions.

And I think that some of the hard things with those ‑‑ with, like… getting full answers, is… being able to ‑‑ is like finding, finding a space to like kind of also just hold, like, that maybe you don’t know? And ‑‑ and that actually maybe you can’t know for sure? Which is to say, maybe ‑‑ okay, so one example of this. So, I live in New York. I don’t know how many of you were ‑‑ are based near here, or heard about ‑‑ we had this fireworks situation this summer? (Laughing) And there was a lot of discussion about, like, is this like an op? Is this some sort of, like, psychological warfare being enacted? Because like, there were just so many fireworks. And, you know, the ‑‑ it’s also true that, like, fireworks were really like cheap, because fireworks companies didn’t have more fireworks jobs to do. I, personally, was getting lots of like promoted ads to buy fireworks. But like at the end of the day, the only way that I could kind of like safely manage, like, my own sense of like sanity with this is to say, like: I don’t know which thing is true. And the thing that ‑‑ and like, neither of these things address the actual thing that I’m faced with, which is like loud noise that’s stressing out my dog.

And so I think that some ‑‑ I think the question with, like, misinformation about sort of who to trust or what to trust, is also understanding, like… based on like what I assume, what narrative is true or isn’t true, what actually do I do? And… How do I kind of, like, make decisions to act based on that? Or can I act on either of these?

I guess that’s kind of a rambly answer, but I think ‑‑ like, there isn’t always a good one.

BLUNT: I just dropped a link to ‑‑ it’s Yochai Benkler, Robert Faris, and Hal Roberts’ Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. I think it’s from 2018? I think it’s a really interesting read if you’re interested in learning more about that.

INGRID: There are two other questions, but I just want to quickly answer: What happened in 2003 is America invaded Iraq based on pretenses of weapons of mass destruction that didn’t exist. And companies ‑‑ like, news outlets reported that with no meaningful interrogation. (Laughs) Sorry.

What’s going on with TikTok and online privacy right now? Is it worse than the EARN IT Act?

OLIVIA: Re: TikTok… It’s a really confusing situation, because most places, especially a lot of cyber security experts on the internet, have been saying to delete TikTok? But a lot of the reasons being given kind of boil down to, it’s a Chinese app. Which is really xenophobic. But there are ‑‑ TikTok does track a lot of information about you. What it uses it for, mostly, is to send you really, really hyper‑specific TikToks. But it definitely is ‑‑ like, that information is being collected about you, and it exists in their hands. So I think it’s mostly a decision for individuals to make about whether they’re going to decide to trust TikTok with their information in that way. Because they absolutely know where you live, and they definitely know whatever things about you that you feel like they’ve gathered in order to create the TikTok algorithm that shows up in your feed. Those things can be ‑‑ those things are true. So.

I think ‑‑ Ingrid, do you have anything to say on that?

BLUNT: You’re still muted, Ingrid, if you’re trying to talk.

INGRID: Oh, sorry. I… The question, also, asked, you know, if things like the data collection on platforms like TikTok was worse than things like EARN IT. And I think the… It kind of depends on where you think, like, sources of harm are going to be? It’s ‑‑ you know, it’s kind of, just ‑‑ it’s different! Like, you know, there’s a bunch of information that a company now has that they could choose to sell, that they could choose to utilize in other ways, that they might give to a law enforcement agency that gets a subpoena. But whether or not ‑‑ but like, EARN IT and FOSTA‑SESTA are examples of ‑‑ like, those are ‑‑ that’s, I guess, a different kind of harm? That harm has less to do with collection of information, and more about suppression of content and information and of certain kinds of speech.

“Is it fair to say that social media companies can use your username alone to connect you to other accounts? Should we slightly modify our usernames to avoid being associated and shut down all at once?” So I think ‑‑ I mean, I would say just for the question of like whether to modify your username or not, I think that’s also a risk assessment question, in so far as if you need people to be able to find you across multiple platforms, I would not want to tell you to like not do that? Or to like make it harder for you to, like, reach clients or an audience. I think ‑‑ social media companies tend to… whether they’re looking for you across platforms, like, is not as clear to me. I think it depends on the, like, agreements that exist within the platform. So like, I know that ‑‑ I mean, like Facebook and Instagram are owned by the same company. Right? So they will end up sharing ‑‑ like, the sharing of those two identities, like, is fairly ‑‑ you know, that’s likely to happen. But…

OLIVIA: Some might not be looking for your other accounts? But if you’re ever, like, being investigated by like an actual individual person, or like say your local police department, or the state in general, they probably would be.

INGRID: Yeah. And in that case, I think that what may be more helpful is if you have sort of a public persona that you want to kind of have a similar identity… That’s a choice you can make. And then if there’s like alt accounts that, you know, maybe are where you have more personal, like, communications, or are working ‑‑ you know, kind of more connected to community and less business? That, making those slightly harder to associate, or making those slightly more compartmentalized? And we’ll talk more a little bit about sort of compartmentalizing identities tomorrow. But I think, yeah, that’s one way to kind of address that ability of being kind of identified.

BLUNT: I think, too, I wanted to add that it’s not just like using the same username, but where you post it, or like what e‑mail is associated with an ad. If you’ve linked your social media to a sex working ad, one of the statistics that we found in the research, the ongoing research projects that Hacking//Hustling is doing right now on shadowbanning is that sex workers who linked their social media to an advertisement are significantly more likely to believe they’ve been shadowbanned, at 82%. Which seems to me that linking might put you in… the bad girl bin, as I call it. (Laughs)

Do we have any other questions? We still have a good chunk of time. Or anything that folks want more clarity on?

What is DuckDuckGo and what is a VPN? Should we use them?

Okay, so we have one that says, “I heard DuckDuckGo mentioned. Do you personally use that search engine? Also, I recently started using ExpressVPN, as I just started sex work, and bad on my part, I did little research on which VPNs. Have you heard of ExpressVPN? Do you have another app that you personally use or have more knowledge about? I want to stay safe and of course share with others what would be the best app to use for VPN.”

INGRID: Olivia, do you want to take some of this one…?

OLIVIA: I was muted. So, I do use DuckDuckGo, most often. Sometimes, if I’m trying to like test to see if something ‑‑ like, if I’m using another ‑‑ like, my house computer uses Google, because my mom’s like, I don’t like DuckDuckGo! It’s not showing me the things I want to see! And that’s usually because Google, again, collects data about you and actively suggests results that it thinks are the things you’re searching for, whether or not they’re what you’re actually searching for.

For VPN use, I use ProtonVPN, mainly because it’s free and I don’t really have money to pay for a VPN right now. But I think ExpressVPN is one of the most popular ones. So I’d say it’s pretty trustworthy.

INGRID: Yeah, I’ve used ExpressVPN. I’ve seen that it’s ‑‑ yeah. It’s generally, I think, a well‑regarded one. I think that’s partly why it costs the money it costs. (Laughs) So I think ‑‑ yeah. If you don’t want to keep paying for it, a free one like ProtonVPN works; but if you’ve already paid for it, yeah, keep using it.

What are the alternatives for encryption?

Yeah. “Can we talk about some alternatives for encryption, assuming a back door is created?”

OLIVIA: This isn’t ‑‑ oop.

INGRID: Go ahead.

OLIVIA: This isn’t really an alternative for encryption, but I think one of the things that we could start doing is ‑‑ less so would it be, like, trying to function without encryption, but instead encrypting our messages ourselves. Because technically, you could have end‑to‑end encryption over Instagram DM if you do the hand work of encrypting the messages that you send by yourself. Bleh! Tripped over my tongue there.

So there are a lot of apps, specifically for e‑mail, I’m thinking of? Like, Enigmail, and Pretty Good Privacy, that are essentially tools that you can use to “hand encrypt,” in quotation marks, your e‑mails, so you don’t have to depend on someone doing that for you. Right, the government can’t knock on your door and say you’re not allowed to encrypt anymore. And encryption algorithms are mathematical things. So you wouldn’t be able to make one that’s like kind of broken. The ones that we have now are… as long as ‑‑ like, Signal for instance is very public about the algorithms that they use, and that’s how we know that we can trust them. Because other people can trust them, and they’re like, yeah, it’s really ‑‑ it would take a computer about a thousand years to crack this. And so we’re able to use those same algorithms by ourselves without depending on other platforms to do that work for us. And it would suck that we’d have to interact with each other with that level of friction? But it is possible to continue to have safe communications.
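The principle Olivia is describing ‑‑ that encryption is math anyone can do, with or without a platform’s help ‑‑ can be sketched in a few lines of Python. This is a toy one‑time pad for illustration only (the key must be truly random, as long as the message, used once, and shared securely out of band); for real messages you’d use tools like PGP or Signal, not this:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    assert len(key) >= len(plaintext), "a one-time pad key must be at least as long as the message"
    return bytes(m ^ k for m, k in zip(plaintext, key))

# XORing twice with the same key restores the original, so decryption
# is literally the same operation.
decrypt = encrypt

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # random key, used exactly once
ciphertext = encrypt(message, key)

print(ciphertext.hex())          # gibberish; safe to paste into an unencrypted DM
print(decrypt(ciphertext, key))  # b'meet at noon'
```

The point isn’t to use this exact scheme ‑‑ it’s that the ciphertext can travel over Instagram DM, email, or anything else, and the platform only ever sees the gibberish.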

BLUNT: Yeah, and I think just in general, if you’re unsure about the security of the messaging system that you’re using? Like, right now, we’re using Zoom, and we had this conversation a bit yesterday. But I’m speaking on Zoom as if I were speaking in public. So if I were to say ‑‑ if I wanted to talk about my personal experiences, potentially I would phrase it as a hypothetical, is also one way. So just slightly changing the ways that you speak, or… Yeah. I think that’s also an option. Go ahead, sorry.

OLIVIA: No, I agree. Just bouncing off with the people that you’re talking to that, like, hey, we’re not going to talk about this. And not being, like, reckless. So in a, like in a public forum, don’t like post about the direct action that’s happening on Sunday at city hall. Things like that are not things ‑‑ just like using, in that sense, using discretion, at that point.

What is the back door issue and how does it relate to encryption?

BLUNT: Someone says: “So the back door issue is for companies that encrypt for us?”

INGRID: Basically, yeah. The back door issue is not ‑‑ it’s also not necessarily, like, all encryption would stop working. Right? It would be something like… you know, a government saying, hey, WhatsApp, we want access to conversations that currently we can’t have access to because WhatsApp communications are encrypted, and ordering WhatsApp to do that. And one would hope? (Laughs) That ‑‑ like, companies also know that they have a certain amount of, like, brand liability… when they remove security features. So it’s something that would probably be known about? I don’t think that it would be done ‑‑ like, I would hope it wouldn’t be done surreptitiously? But, yeah. It’s more about, like, whether or not certain ‑‑ like, previously considered secure communications would become compromised. It wouldn’t necessarily end the possibility of ever, you know, deploying encryption ever again. It would be more of a service by service thing.

BLUNT: We still have some time for more questions, if anyone has any. Please feel free to drop them into the Q&A.

And maybe if Ingrid and Olivia, if you wanted to chat a little bit about what we’ll be talking about tomorrow, folks might have an idea of other things that they might want clarity on, or other things that they are really hoping might be covered.

What will be covered in part 3 of the digital literacy series?

OLIVIA: Yeah, tomorrow we’re gonna talk a lot about surveillance, like more specifically. So like, surveillance that’s done on platforms, and in ‑‑ but also like talking both about surveillance capitalism and state surveillance, and how they ‑‑ the different ways that they might cause harm for someone who’s like trying to use the internet. Yeah. I think those are the most ‑‑ the biggest points? But also thinking about… like, mitigation.

INGRID: Yeah. And we’re ‑‑ and in the context of state surveillance, we’re primarily talking about when the state utilizes platforms in the service of surveillance, or obtains information from platforms. There are a myriad of other ways that the state can ‑‑ that, you know, police departments or federal or state governments can engage in surveillance of people, digitally or otherwise. But partly because the scale and scope of that topic is very, very large, and because we know people are coming from lots of different settings, and maybe like ‑‑ and we don’t personally know the ins and outs of the surveillance tools of every police department in the world? We didn’t want to kind of put forward, like, examples of tools that might just be, like ‑‑ that would mostly just create, like, greater like anxiety or something, or that wouldn’t necessarily be an accurate depiction of threats or realities that people might face.

If there is interest in more of those things, we’re happy to do questions about them in the thing? But it’s not something that we did ‑‑ we’re doing a deep dive into, because… again, it seems like that might be better to do more tailored questions to specific contexts.

BLUNT: I’m curious ‑‑ did you see the EFF launched the searchable database of police agencies and the tech tools that they use to spy on communities? Speaking of not spying on people! (Laughing)

INGRID: Yeah, but that’s the thing ‑‑ another thing is like, well, those tools are here. God bless these agencies for putting that work together.

BLUNT: Cool. So I’ll just give it like two or three more minutes to see if any other questions pop in… And then I’ll just turn off the livestream, as well as the recording, in case anyone would prefer to ask a question that’s not public.

How to Build Healthy Community Online

Okay. So we have two more questions that just popped in… “Could you speak to building healthy community online? How to do that, how to use platforms for positive information spread?”

OLIVIA: So, when it comes to building healthy communities, I think… it really comes down to, like, the labor of moderation. Like, it has to ‑‑ it has to go to someone, I think. We often have ‑‑ one of the problems with a lot of platforms online is that they’re built by people who don’t really, like, see a need for moderation, if that makes sense? Like, one of the issues with Slack is that there was no way to block someone, in Slack. And a lot of the people who originally were working on Slack couldn’t conceive of a reason why that would be possible ‑‑ couldn’t conceive of a reason why that would be necessary. While someone who’s ever experienced workplace harassment would know immediately why that kind of thing would be necessary, right?

And so I think when it comes to like building healthy communities online, I think like codes of conduct are really honestly the thing that’s most necessary, and having people or having ‑‑ creating an environment on that specific profile or in that specific space that kind of invites that energy in for the people who are engaging in that space to do that moderation work, and to also like promote… pro‑social interactions, and to like demote antisocial interactions, and things like that.

BLUNT: I also think that we ‑‑ Hacking//Hustling also on the YouTube channel has… a conversation between myself and three other folks talking about sort of social media and propaganda and a couple of harm reduction tips on how to assess the, like, truthfulness of what you’re sharing and posting. And I think that’s one thing that we can do, is just take an extra second before re‑tweeting something and sharing something, or actually opening up the article before sharing it and making sure that it’s something that we want to share… is a simple thing that we can do. I know things move so fast on these online spaces that it’s sometimes hard to do, but I think that if you’re able to assess that something is misinformation, or maybe it’s something that you don’t want to share, then that slows down the spread of misinformation.

Thank you so much to everyone and their awesome questions. I’m just going to take one second to turn off the YouTube Live and to turn off the recording, and then see if folks have any questions that they don’t want recorded.

Okay, cool! So the livestream has stopped, and the recording is no longer recording. So if folks have any other questions, you’re still on Zoom, but we would be happy to answer anything else, and I’ll just give that two or three more minutes… And if not, we’ll see you tomorrow at noon.

(Silence)

Okay. Cool! Anything else, Ingrid or Olivia, you want to say?

INGRID: Thank you all for coming. Thank you, again, to Cory for doing transcription. Or, live captioning. Yeah.

BLUNT: Yeah, thank you, Cory. Appreciate you.

OLIVIA: Thank you.

CORY DOSTIE: My pleasure!

BLUNT: Okay, great! I will see you all on ‑‑ tomorrow! (Laughs) Take care.

INGRID: Bye, everyone.

OLIVIA: Bye, everyone!

Digital Literacy Training (Part 1) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

Part 1: OK But What Is The Internet, Really? In this three-day lunch series with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning), we will work to demystify the tools and platforms that we use every day. It is our hope that through better understanding the technologies we use, we are better equipped to keep each other safe!

Digital Literacy Training (Part 1) Transcript

OLIVIA: Hi, everyone.

Just before we begin, some of the things that ‑‑ the values that we’re trying to cement this workshop in, in terms of cyber defense: firstly, acknowledging cyber defense as a way of maintaining community‑based power, and cryptography as an abolitionist technology rather than a military one, or something that doesn’t come from us, right?

So, there have been ways of using techniques like cryptography ‑‑ and community defense is something that doesn’t have to be immediately associated with white supremacist, industrial technology.

So following that, we want to affirm that there can be a cyber defense pedagogy that can be antiracist, anti‑binary, and pro‑femme. But also one that’s trauma informed, right? And doesn’t reinforce paranoia. Because we know there are white supremacist institutions. And teaching from a place of gentleness. And considering, because of our myriad identities, the previous harm people might have experienced, and trying not to replicate it or force people to relive it.

So if you need to take space at any point during this workshop, we want to honor that, and this will be recorded and available for view at a later time, as well.

INGRID: Thank you, Olivia. That was great.

My name is Ingrid. I go by she/her pronouns. And we are ‑‑ welcome, welcome to the internet! (Laughs) This is the first of a series of three digital literacy sessions where we’re gonna be walking through a few different concepts.

And this first one we wanted to start with was really getting into just some of the baseline, you know, technical things around what the internet actually is and how people experience it, or how it, you know, works.

And… We’ve sort of organized this into a couple of sections. We’re gonna, you know, talk ‑‑ start kind of with a couple things about our personal kind of opinions about how to talk about some of these things, some grounding perspectives we’re bringing to it. The internet and kind of how it works as an infrastructure.

Browsers? Which, as like a particular technology for interfacing with the internet. And the World Wide Web, which is… you know, basically the thing that the browser takes you to. (Laughs)

So, starting with our opinions… (Laughs) We got ‑‑ we got more, but these seem important to start with.

The first one that we wanted to convey is that, you know, some of this stuff around what ‑‑ around how the internet works gets treated like this sort of special knowledge, or like something only for smart people. But, you know, companies have a lot more resources to do things. The people who run, work in, found tech companies often have had, you know, privileges like generational wealth! Or like early exposure to technology, that mean that some of this stuff was just more available to them.

And has been for a long time. And if there are things that are confusing, or unfamiliar, it’s ‑‑ you know, it is not because you don’t understand. And it’s because the people who kind of have a lot of control and power, like, are able to like overcome things that are confusing… Yeah.

We’ll come back to this point in other ways, I think, in this presentation today.

OLIVIA: The other point that we really want to hammer in is that nothing is completely secure online. And that’s due to the nature of how we connect to the internet, right? The only way you can really have a completely secure computer is to have a really, really boring computer! Right?

Computers are interesting because… computers and the internet are able to be interesting and fun things to use because we are able to connect to other computers. Right? Because it’s a form of a telecommunication device. And so it’s kind of okay! That our computers can’t be completely secure, because if they were, they’d just be kind of like brick boxes that don’t really do anything.

So instead of trying to chase like a mythological, like, security purity, what we do is we learn to manage risk instead. Right? We create systems so that we put ourselves in as little danger as possible.

What is the internet?

INGRID: So, for our kind of initial kind of grounding point, we want to just ‑‑ talk about what the internet is. And this is, this is a hard question, sometimes, I find? Because… The word “internet” comes to kind of mean lots of different things. I ‑‑ for me, one of the most, like, the simplest summary I can ever provide is that the internet is just computers talking to computers. (Laughs)

It’s information going between computers. This image, which is, you know, one of many you can find when you Google image search “internet diagram” is a bunch of computers in, you know, a household, including a game machine and a few PCs. Who is this person? With all these devices? And they’re connecting to a router in their house, which has connected to a modem, which connects to the internet! Which is more computers. Not the ones that you’re seeing on the screen.

It’s kind of dorky, but this is a really goofy example of a computer talking to another computer. It’s from the movie Terminator 3. This also, I realize, is an Italian dub?

INGRID: So, I show this ‑‑ so what’s actually happening in this scene, which is, yes, very garbled, is the lady terminator, who is a robot, a very large sentient computer, is using a cell phone, like a dumb phone, to call another computer? And then she is making noises into the phone that are a translation of data into audio signal. And that is allowing her to hack into the LA School District’s database. It’s ‑‑ and it’s, you know, it’s very 2003? (Laughs) In that that was an era where, when people were getting online in their homes, they would have to connect to a modem that made sounds like that, too.

So I think, you know, it’s kind of a corny old example, but I like it because it also shows something that is hard to see in our day‑to‑day use of the internet, which is that for information to move from one computer to another computer, it has to be rendered into something material. In this case, it’s tones? It’s sound? On a home computer connected to a wi‑fi network, it would be radio waves. And kind of when you get to different layers of the internet, it’s going to be pulses of light traveling through fiberoptic cable.

So everything you type, every image you post, at some point it gets ‑‑ you know, that digital data gets transformed into a collection of, you know, arrangements of points of light, or, you know, a sound, or like a different material.

And it’s, you know, it’s much bigger! (Laughs) Than, like, than what we see on a screen! This is a map of the submarine cables that cross oceans that make it possible for the internet to be a global experience. It’s very terrestrial?

This is just for fun. This is just a video of a shark trying to eat one of the cables in the ocean… A cutie.

Rawrumph!! I just love his little… The point being, yeah. The internet is vulnerable to sharks! It is… it is very big, and it is complicated, and it is ‑‑ it is not just, you know, a thing on a screen. It needs a lot of physical stuff.

And when computers talk to computers, that doesn’t usually mean, like, a one‑to‑one connection? Right? So… I’m talking in this webinar to all of you right now, but, like, my computer is not directly connecting to your computer. What’s actually happening is that both of our computers are talking to the same computer… somewhere else.

There’s like a, you know, intermediary machine, that’s probably in a big building like this. This is an Amazon data center in Ashburn, Virginia. And that’s kind of the model that most of the internet takes; it’s usually, there’s kind of intermediary platforms, right?

And in a lot of technical language, this is called the client‑server model. The idea being that a server, which is a computer, holds things that are, you know, content on the internet, or applications like Zoom, and the client, which is just a computer, requests things from the server. You know, the server serves that. This goes ‑‑ this gets to the client computer through a routing process, that usually means that the information has to travel through multiple computers.
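The client‑server exchange described above can be sketched in Python using only the standard library. In this toy version the “server” is just a thread on the same machine, and the message text is made up; a real server would be some distant computer in a data center:

```python
import http.server
import threading
import urllib.request

# A tiny "server": a computer (here, just a thread on this same machine)
# that holds content and hands it out on request.
class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client": another program that requests content from the server.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    content = response.read()

server.shutdown()
print(content)  # b'hello from the server'
```

The routing step is invisible here because both programs live on one machine, but the request/response shape is the same one your browser uses.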

But! Again, this, like ‑‑ these words just mean computer and computer? Technically, you could turn a home computer into a server and get a stable internet connection and make it something ‑‑ make it something that just serves information to the internet. Or, you know, you could even think about the fact that because, you know, lots of information is taken from personal computers and sent to companies, you know, in some ways we are serving all of the time!

And I ‑‑ mostly, this is just a dynamic, again, thinking about… who controls and how the internet is governed, that I think is important to acknowledge? I mean, in some ways, the internet is not computers talking to computers so much as… computers owned by companies talking to computers owned by people?

The internet, you know, it began as a project funded by the U.S. military, but became the domain of private companies in the late 1990s. So all of that stuff that I was talking about earlier? You know, the submarine cables, the data centers, they’re all private property owned by corporations. And it’s kind of ‑‑ all of the, you know, technical infrastructure that makes the internet possible is a public good… but it’s all managed by private companies. So it’s kinda, it’s more, you know, a neoliberal public‑private partnership. And it has been for a long time.

And I mention this mainly because it’s good to remember that companies are beholden to laws and markets, and it’s in a company’s interest to be compliant with laws and be risk‑averse, and that’s partly why a lot of decisions made by platforms or other companies are often, like, kind of harmful ‑‑ like, can be harmful to communities like sex workers.

And again, like, this doesn’t have to be the way the internet is? It’s just sort of how it has been for a very long time.

So, computers talking to other computers is what, you know, is our very simple summary of what the internet is. But computers can talk to each other in different kinds of languages or dialects, let’s say? Which, in, you know, internet speak, are called protocols. Which, you know, a protocol is what it sounds like: It’s a set of rules about how something’s done. And so that’s, I find, maybe the dialect or language thing kind of useful.

Common Internet Protocols

So a few protocols that exist for the internet that you probably encounter in your daily life that you maybe don’t think that much about are Internet Protocol, wi‑fi, Address Resolution Protocol, Simple Mail Transfer Protocol, and HyperText Transfer Protocol. Maybe you haven’t heard of some of these, or they’re not as commonly talked about? But I’ll explain them.

And I apologize; these screenshots are from my Mac. There are ways to access these same sorts of things from a Windows machine? I don’t have screenshots. (Laughs)
So Internet Protocol is basically the foundation of getting on the internet. It assigns a number called an IP address, Internet Protocol address, to a computer when it’s connected to a network. And that sort of ‑‑ that is the ID that is used for understanding, like, who a computer is and how to access it.

So when I want to go get content from a specific website, what I’m actually requesting under the hood is… is a set of numbers that is an IP address, which is like the name or ID of the computer that I want to go to.

I’m hoping this isn’t too abstract, and I hope, like ‑‑ yeah, please, if there are places where you have questions… please, add things to the Q&A.

So, Address Resolution Protocol and Media Access Control are a little different, but I wanted to talk about them because it’s sort of related to understanding how your computer becomes a particular identity.

So, all ‑‑ there’s a question: Do all computers have their own IP address? They do, but they change, because different ‑‑ basically, when you go ‑‑ when you join the network, the address is assigned. It’s not a fixed ID. But there is a fixed ID that is connected to your computer, and it’s called a Media Access Control, or MAC, address.

And this is another screenshot from my machine. You can see this thing I circled here. That is my MAC address. And that is at the level of like my hardware, of my computer, an ID that has been… basically, like, baked into the machine. Everything that can connect to a network has one of these IDs.
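You can actually ask your own machine for this hardware ID. A minimal sketch, with the caveat that Python’s `uuid.getnode()` falls back to a random number on machines where it can’t find a real MAC address:

```python
import uuid

# uuid.getnode() asks the operating system for a hardware (MAC) address.
# If none can be found, it substitutes a random 48-bit number instead.
node = uuid.getnode()

# A MAC address is 48 bits, usually written as six pairs of hex digits.
mac = ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
print(mac)  # e.g. 8c:85:90:12:34:56 -- yours will differ
```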

And when ‑‑ and so Address Resolution Protocol is a mechanism for associating your temporary IP address with the MAC address, and it mainly exists so that if there’s, like ‑‑ like, if the network screws up and assigns the same IP address to two things, to like two different devices, the MAC address can help resolve like, oh, we actually mean this device, not that device.

Oh, I realize I didn’t make a slide for wi‑fi. I think most of you probably know wi‑fi as, like, the wireless way ‑‑ the way that information is transferred to your device without a cable, over radio waves.
Yes! Your IP ‑‑ well. Your IP address… will change when you connect, although it generally won’t change that much… It’s, it’s not like ‑‑ how am I answering this?

Like, if you’re ‑‑ if you’re connecting to the internet, in like your home? It’s probably ‑‑ you’re probably gonna get the same ID number, just ’cause it’s the same device you’re connecting to? But when you connect to a network at ‑‑ I guess no one goes to coffee shops anymore…

But in the time when you would go to a place with a different wireless network and connect to the internet! (Laughing) You would probably have a different IP address, because you’re connecting through a different device on a different network.

Oh, the other thing ‑‑ the only other thing about wi‑fi thing I will mention right now is that “wi‑fi” doesn’t actually mean anything. It’s not an acronym; it’s not an abbreviation. It’s a completely made‑up name… No one ‑‑ no one has a good answer for why it’s named that! (Laughs) I think like a branding consultant named it? It’s ‑‑ anyway.

So other protocols. So the Simple Mail Transfer Protocol, that underlies how e‑mail works.

So you encounter it a lot, but probably don’t think much about what ‑‑ like, that’s its own special kind of language for moving information, that’s different from the HyperText Transfer Protocol, which is one that may be familiar to all of you because it is the central protocol used for moving information in the browser!

Which is a nice segue, but I realized I also should mention that there is a variant of HTTP called HyperText Transfer Protocol Secure, or HTTPS. It’s an implementation of HTTP that encrypts the information transferred. So, that wasn’t adopted or implemented when browsers and HTTP were first being developed?

Because, again, these technologies were being developed with, you know, public funding and thought of as tools for scientific research, not for making purchases with credit cards or having, you know, private communications. So the implementation of security features and encryption into the internet is sometimes clumsy or frustrating because it was not designed into the original concept.
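The difference between HTTP and HTTPS shows up in Python’s `ssl` module, which exposes the same TLS machinery a browser uses. This sketch just inspects the default settings rather than making a real connection:

```python
import ssl

# HTTPS is HTTP carried over an encrypted TLS connection. A "context"
# holds the rules for that encrypted session.
context = ssl.create_default_context()

# The default context both encrypts the traffic and verifies that the
# server's certificate matches its hostname -- the padlock check you
# see in a browser.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Plain HTTP has no such layer: anyone on the path can read the bytes.
```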

What’s an internet browser?

All right. So, we are next moving into the browser. I’m kind of a nerd about internet history things, so part of what I wanted to talk about with the browser is just its origin story?

The first example of a browser that was easy to use, Mosaic, was created by researchers at the University of Illinois, including a guy named Marc Andreessen, who went on to co‑found Netscape and make something called Netscape Navigator. It was kind of a… It was a very important opening of the internet to the general public, and it changed a lot of people’s perception of, and ability to be part of, the internet.

Marc Andreessen became very rich because he did this, and he founded a venture capital company, or firm, called Andreessen Horowitz. Returning to the idea that a lot of these companies are not smart, they’re just rich? He worked on a thing that is very important… That is not a good reason that he gets to throw money at Airbnb and decide how, you know, urban planning and housing is going to be changed forever!

There are fundamental kind of reasons ‑‑ like, there’s something about that which I feel is kind of important to remember. It’s not that Marc Andreessen is a dumb guy; it’s that he’s been given a lot of authority through getting a lot of money through doing one clever thing.

A lot of the things that define the browser in the 1990s when it was first becoming an adopted thing were actually proprietary technologies made by different companies. So different companies had their own browsers that they had made. And they wanted to be The Browser everyone used. Right? And so they invented new things to make their browser cool? But they wouldn’t work on other ones.

So Olivia will talk a little bit more about these, I think, in the section on the web. But Cascading Style Sheets, which is a way of adding, you know, design aspects to a web page, was first implemented in Microsoft’s browser. Javascript, which is a programming language that works in browsers, was created by a guy at Netscape in 10 days? (Laughs) And, yeah, if you made a website and it had, you know, CSS in its layout, it would be visible in a Microsoft browser, but not in a Netscape browser.

This was a terrible way of doing things? And partly because companies got nervous about possibly getting regulated, and partly because it was just bad for business, they started ‑‑ they sort of, they figured out how to kind of put aside some of their differences and develop standards, basically.

So the standardization of browsers, so that basically when I open something in Chrome and I open something in Firefox it looks the same and it works the same… kind of starts to be worked on in 1998. It really only starts to be implemented/widespread in 2007, and it continues to be worked on. There are entire kind of committees of people who mostly work at the tech companies that make these browsers who kind of come and talk to each other about, like, what are the things we’re all gonna agree are gonna ‑‑ about, like, in terms of how this technology works?

And we’re looking at, and wanting to talk a little bit, about browsers also because they are really useful teaching tools. It’s really easy ‑‑ well, it’s not “really” easy. It is pretty easy to kind of look at what’s going on behind the scenes, using a browser. And that’s mainly because they’re very old.

You know, by 2007 when the iPhone emerges, and when the App Store follows in 2008, you can’t really look and see ‑‑ it’s much harder to go on your phone and see, like, I wonder what kind of data Instagram is sending back to, you know, Facebook right now! Like, to actually try and look for that on your phone is almost impossible. But you can kind of start to look for that in a web browser.

And that’s sort of a privileging of desktop technology, and a legacy of this being kind of an old technology, where transparency was treated as just inherently a good idea. And I think that if they were being built today, we probably wouldn’t have it.

So, we’re going to introduce you to some browser tools in this next section ‑‑ oh, wait, sorry, one more thing I wanted to acknowledge. This isn’t super detailed as far as comparing the privacy features of different browsers? But ‑‑ and we are working on a list of sort of, like, a bibliography that we can share with everyone later.

The point being ‑‑ the main thing I just wanted to convey here is like different browsers defined by different companies, they’re gonna all work more or less the same, but they do have kind of underlying qualities that might not be great for user privacy. And, also, there’s, you know, questions of like… when, you know, one company kind of controls the browser market, how does that change kind of the way that people see the internet?

So, you know, doing some research, doing some comparison of, of what different browsers… you know, do and don’t do. Most of the screenshots for this were done in Firefox. If you use other browsers, that’s fine. But… Yeah.

All right. Now ‑‑ (Laughs) Now we will move to World Wide Web!

What are web pages and how do they work?

OLIVIA: Hi, everyone! So, this part is talking a lot about the actual content that you are able to look at using your browser. So we’ll be making use of a lot of the tools that Ingrid mentioned about looking deeper into the actual… web pages themselves.

Awesome. So, this is a web page. It’s the same page as the video that we showed earlier in the beginning, of sharks biting undersea cables! (Laughs) And it’s accessible to anyone who can connect their computer to the World Wide Web. And so, a lot of times we use “the internet” and “the web” interchangeably?

But the internet itself is more of the infrastructure, and the actual place, if we can call it a place, that we’re going to logically… is called the World Wide Web. Right? That’s the whole WWW‑dot thing that we’ve all been doing.

So, web pages are hosted on computers! You can host a web page on your own computer; you can pay another company to host it for you; other companies host themselves, if they have a lot of money. And… If you are paying someone else to host your website for you, you might end up ‑‑ you have a lot less autonomy. Right?

So there’s a lot of movements for people to like start hosting things themselves to avoid things like censorship and surveillance. Because like we said in the beginning, companies are beholden to much stricter laws than individuals are. And individuals are able to kind of themselves say ‑‑

What’s the difference between VPN and TOR? If we have time at the end, we will cover that a little bit, briefly. But essentially, a VPN ‑‑ TOR is a browser, and a VPN is something that you can install into your computer.

TOR does something, does things, that are very similar to what VPNs do, in terms of like onion routing? But they’re not… they’re not the same. Like, you can use TOR to navigate the internet, or you can use a VPN and use your normal browser. Right.

To look at a web page’s source, right, oftentimes you can right‑click ‑‑ or Control‑click, on a Mac ‑‑ and you click View Page Source, and you’ll be able to get a closer look at the actual web page itself.

And so when you, when you view the source, you ‑‑ oh, you can go back. When you view the source, you end up seeing HTML. Right? So we told you earlier that the web uses HTTP, which is the HyperText Transfer Protocol, to send and receive data. The data that’s being sent and received is HyperText. Right? That’s written in the HyperText Markup Language.

So… HTML isn’t a programming language, per se; it’s a markup language. So it defines the structure of your content. It displays things, like text and images and links to other web pages.
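That “structure, not behavior” point can be shown with Python’s built‑in HTML parser: a parser can recover the structure of a page, but there is nothing in HTML itself to execute. The page string here is made up:

```python
from html.parser import HTMLParser

# HTML only marks up structure. This toy parser just lists, in order,
# the tags it encounters -- the structure of the content.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

page = "<html><body><h1>Sharks</h1><p>They bite <a href='/cables'>cables</a>.</p></body></html>"
collector = TagCollector()
collector.feed(page)
print(collector.tags)  # ['html', 'body', 'h1', 'p', 'a']
```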

And there are two ways that HTML pages can exist: Static and dynamic. So static would be a lot of the pages that we might code ourselves, right? Dynamic would be pages like Facebook and Instagram, which are generated on the fly. The user requests a page, which triggers code that generates an HTML page.

So sometimes you would want… ‑‑ if you try to look at the source code of a website, you won’t really see much of anything? Because that code, like, doesn’t exist yet. Unless you open an inspector, and you look at the code that’s visible on your side.
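A minimal sketch of a “dynamic” page: the HTML doesn’t exist until code builds it from data in response to a request. The usernames and posts below are invented, and a dict stands in for a real database:

```python
# A dict standing in for a database of user content.
fake_database = {
    "olivia": ["shark video", "cable map"],
    "ingrid": ["modem sounds"],
}

def render_profile(username):
    # Code like this runs on the server for every request. The HTML is
    # generated fresh each time, which is why viewing the "source" of a
    # dynamic site shows you so little: there is no fixed file to read.
    items = "".join(f"<li>{post}</li>" for post in fake_database[username])
    return f"<html><body><h1>{username}</h1><ul>{items}</ul></body></html>"

print(render_profile("olivia"))
```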

So, to make this content look better, it’s often styled. Right? ‘Cause otherwise, it would just be plain Arial size 12. So we add color, add shape, animation, layouts, italics. And we do that using Cascading Style Sheets, or CSS.

CSS is also not a programming language. It’s a way of representing information.
So this is what a static HTML file might look like. I grabbed this from a teaching resource, so that’s why you can see things like explanations of what HTML is, because I thought it would look a bit cleaner than the WIRED article.

And this is a CSS file! You see things like font size, font family, color, background color, position. Right? So those are the types of things that you can control using CSS. You can even make animations.

So, knowing that ‑‑ the point that we’re trying to make in saying this is that HTML can’t do anything with your data. Neither can CSS. They just display things that are coming from the other computer that you’re connecting to. So how are web pages collecting our data? Well, the code that actually does stuff in your browser is usually written in Javascript.

So… To see it in action, we can go into Tools, and Web Developer, and Inspector! And we can see some of the stuff that’s going on behind the scenes, right? This is how you do this in Firefox, and it’s similar but not identical in other browsers like Chrome and Safari. In Safari you first have to enable the Develop menu in its preferences before these tools show up.

So if you check out the Inspector tab, we have an easier way of reading the HTML source than just pulling it all up in a really large, confusing doc. Right? We get syntax highlighting. We get little disclosure triangles. And we’re able to highlight things and see ‑‑ we’re able to hover over different parts of the HTML, and it’ll highlight that section in the actual web page. So it’s a really useful teaching tool.

The Console tab, we’re able to see more of the Javascript activity that’s happening in the background of the page. So we’re able to see all of these, like, jQuery calls and database calls and analytics. Right? So this is how a web page might try to get information about you so that they ‑‑ the company, in this case WIRED, can use that information to structure their own marketing practices. Like, how many people went to this article about sharks biting undersea cables? They would use Javascript in order to record the fact that you, one person, went to this website.

In the Network tab, it shows data being sent and data being received by your browser. Right? So all of the requests marked “POST” ‑‑ you can only see the P‑O, in this part ‑‑ are mainly sending data from your browser to a server, and all the ones marked “GET” are mainly asking a server to send data back.

And so some of this stuff is fairly, like, normal. It’s actual HTML stuff that’s being included on the page. You can see the different types. And then some of it, in other places, you would be able to see like actual trackers. Right?

And when you click on one of the items, you’re able to see more information about what’s being transferred.

INGRID: And this is not necessarily ‑‑ I mean, although this is not very helpful? Like, when you click the headers? It’s like, here is a bunch of words! I don’t know what’s going on! But the other tabs can give us a little more, and depending on the type of network request, you’ll get slightly easier‑to‑read data.

What are cookies and how do they work?

So, in this section, we’re going to talk a little bit about some of the tracking methods. Cookies… are called cookies, because in ‑‑ they’re called cookies with the web, because in a different technology, whose name I do not recall, this same thing was called a magic cookie.

And I don’t know why it was called that in the other one… It’s just a, it’s a… it’s a holdover from the fact that a small number of people working on the internet had inside jokes, as far as I can tell.

But a cookie is a text file that contains information. Usually it’s something like an ID. And it’s used for doing things like storing preferences, or kind of managing things like paywalls on news websites.

So in this case, the cookie that was handed off to me from this particular page gave me this ID number that’s just like a pile of letters and numbers. And my browser will store that cookie, and then when I ‑‑ if I go back to the WIRED website, it’ll see ‑‑ it’ll check to see, like, oh, do I already have a cookie assigned to this one?

And if it does, it will take note of how many other WIRED articles I’ve already read. And that’s how WIRED gets ‑‑ is able to say, hey, we noticed you’ve read all your free articles… Stop, stop doing that. You don’t get any more.
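Python’s standard library can parse the same cookie headers a browser stores, so the paywall mechanic above can be sketched end to end. The ID, header values, and article limit here are all made up:

```python
from http.cookies import SimpleCookie

# What a site sends: a Set-Cookie header with an ID and some bookkeeping.
set_cookie_header = "reader_id=a1b2c3d4; Max-Age=2592000; Path=/"

# The browser stores it and hands the ID back on later visits...
jar = SimpleCookie()
jar.load(set_cookie_header)
print(jar["reader_id"].value)  # a1b2c3d4

# ...and a paywall-style counter on the server might then work like this:
visits = {}

def count_visit(reader_id, limit=4):
    visits[reader_id] = visits.get(reader_id, 0) + 1
    return visits[reader_id] <= limit  # False once the free articles run out

print(all(count_visit("a1b2c3d4") for _ in range(4)))  # True: 4 free reads
print(count_visit("a1b2c3d4"))                         # False: article 5
```

Note that the site never learned a name here; the cookie ID is the whole identity.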

And they’re not ‑‑ they can also be used for things like, you know, like say you have, you know, a certain kind of like ‑‑ like, you have a log‑in with a particular website, and you don’t want to have to log in every time, and the cookie can store some information for you.
But they’re also used for things, like, kind of tracking ‑‑ like, just trying to see where people go online to, you know, be able to figure out how to sell them things.

Just a distinction note, like, if you look at things in the Network tab: A response cookie is a file that comes from, like, a website to your computer; a request cookie is a cookie your browser already has that it sends back to that computer along with the request. And it’s, you know. And a lot of this is stuff that is encrypted or encoded or kind of arbitrary ‑‑ like, which is good, in so far as it’s not creating ‑‑ oh, sorry.

It’s not ‑‑ it’s not just giving, you know, information, passing information about you and storing it in the clear? You still probably don’t want it? (Laughs)

So cookies can also be used for, like, tracking. This website has like, you know, a lot of different scripts running on it, because media companies work with other, you know, companies that do this kind of audience tracking stuff.

So like, when I was looking at this one, the domain that the request was coming from is elsa.memoinsights.com. That’s a weird name, and I don’t know what any of this is. If I type that into the browser, it doesn’t produce a web page?

But when I Google “memo insights,” I find: A company that works with companies to give them, you know, competitive analysis and campaign summaries. I don’t know what these things are, but this is some boutique company that works with Conde Nast, which owns WIRED. Maybe they do something with what I read, and maybe we can learn that people who read WIRED also read the New Yorker, or something.

What are pixel trackers and how do they work?

There are other trackers on the web that are not based in cookies and are a little bit weirder. So, pixel trackers are basically just tiny image files. They’re called this because, you know, sometimes they’re literally just one pixel by one pixel. And the image is hosted on a server somewhere else, not on the WIRED website.

It’s hosted by whatever company, who knows, is doing this work. And because the image has to load from this other server, my computer makes a request to that server. And once that request is logged, it’s ‑‑ that’s, you know, the… that server can, you know, get information from my request about, you know, my computer, where I’m coming from, what ‑‑ like, how long I spent on it, what time I accessed it.

If you ever used, like, e‑mail marketing software, or like newsletter software, like MailChimp or TinyLetter, this is usually how those services are able to tell you how many people have opened your e‑mail. They’ll have like an invisible pixel tracker loaded into the, into the actual e‑mail, and will send the information about when that image loaded to the newsletter web service.
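The trick is that the pixel’s URL carries the data. A sketch of how a newsletter service might build one; the domain, path, and parameter names are all invented for illustration:

```python
from urllib.parse import urlencode

# A pixel tracker is just an image URL. The "image" is a 1x1 GIF; the
# interesting part is the query string, which smuggles identifying data
# into the request that the tracker's server logs.
def tracking_pixel_url(campaign_id, email_address):
    params = urlencode({"c": campaign_id, "u": email_address})
    return f"https://tracker.example.com/open.gif?{params}"

url = tracking_pixel_url("newsletter-42", "reader@example.com")
print(url)
# An email client that loads this <img src="..."> has just told the
# tracker's server who opened the mail, when, and from what software.
```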

So, and so pixel trackers, they’re sort of sneaky in that they’re like… Again, like, you literally can’t see them on the web page. And they’re not as transparently kind of present? (Laughs) As other things?

What is browser fingerprinting and how does it work?

A more ‑‑ another method of tracking users on the internet across different websites is something called browser fingerprinting, which is a bit more sophisticated than cookies. So in the last few years, browsers have become a lot more dependent on and intertwined with a computer’s, like, operating system and hardware. For example, when you join a Google Hangout or a Zoom call! (Laughs)

You ‑‑ the browser is gonna need to access your webcam and your microphone. Right? And those are, those are, you know, parts of the hardware. So there needs to be like ways for the browser to talk to those parts of your computer? And that in and of itself isn’t a bad thing. But! It means that if a, you know, if some code is triggered that asks questions about, you know, those other parts of hardware, it might just get ‑‑ like, that’s data that could get sent to another server.

So in this example, we’re looking at the loaded information includes things like browser name, browser version. And that’s stuff ‑‑ like, that will usually be in a typical request. Like, knowing what the browser is or what kind of browser isn’t that unusual? But then we get things like what operating system am I on? What version of the operating system am I on?

I don’t ‑‑ like, I don’t know why this site needs that information! And I didn’t see any fingerprinting happening on the WIRED website, so I had to go to the YouTube page that the video was on. (Laughs)

There are a lot of more detailed sorts of things that can be, like, pulled into fingerprinting. So like your camera. Like, is your camera on? What kind of camera is it? That can get ‑‑ that can be something that a, you know, browser fingerprint will want to collect. Your, like, your battery percentage, weirdly? It’s ‑‑ and all of this is in the service of creating, like, an ID to associate with you that is definitively your computer, basically.

As opposed to, like, you know, you can actually like erase cookies from your browser, if you want to. Or you can say, like, don’t store cookies. But it’s a lot harder to not have a battery.
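The core mechanic of fingerprinting is combining many individually boring attributes into one stable ID. A toy sketch, where every attribute name and value is invented; real fingerprinting scripts collect far more:

```python
import hashlib
import json

# Attributes a fingerprinting script might collect (all made up here).
attributes = {
    "browser": "Firefox 78",
    "os": "macOS 10.14",
    "screen": "1440x900",
    "fonts": ["Arial", "Comic Sans MS", "Zapfino"],
    "battery_level": 0.37,
}

# Serialize deterministically, then hash: same machine, same fingerprint.
# No cookie is stored, so there is nothing for the user to delete.
payload = json.dumps(attributes, sort_keys=True)
fingerprint = hashlib.sha256(payload.encode()).hexdigest()
print(fingerprint[:16])  # a stable ID derived purely from the attributes
```

A payload like `payload` above, serialized as JSON, is also exactly the kind of thing you’d see leaving your browser in a POST request in the Network tab.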

In terms of knowing if fingerprinting’s happening, one way to do that in the Network tab is you’re looking for the POST requests, which means that your computer is sending something to another computer. And one way that it can get sent is in a format called JSON, which is an abbreviation for JavaScript Object Notation. Which is basically a format for data that can be processed by the programming language that works in the browser.

This is ‑‑ another way if, like, if the, you know, the Network tab is like a little overwhelming, there are browser extensions that can show you more kind of detailed things about what’s going on with fingerprinting.

Additionally, just as a sidenote, browser ‑‑ like, browser extensions are another example of like throwbacks of the browser. The idea that anyone can build, like, extra software for that piece of software? It’s like, no one would ever let you do that to the Instagram app on your phone. And it’s sort of a, it’s kind of a leftover thing from something ‑‑ like, Firefox started doing it in 2004, and then everyone copied them. (Laughs)

But, back to fingerprinting.

Just as far as ‑‑ this is a Chrome extension called DFPM, Don’t FingerPrint Me, which just logs this in a slightly tidier way. So I thought I would show it. And it highlights a couple of examples of ways that this page is currently doing fingerprinting that I might want to know about.

So canvas fingerprinting is a method ‑‑ it sort of describes it here. It draws like a little hidden image on the page that then is kind of encoded to be, like, your fingerprint. I think Firefox actually blocks this by default, so I had to do this in Chrome! (Laughs)

WebRTC, that’s related to your camera and microphone. WebRTC stands for Web Real‑Time Communication. That’s basically the tool used for making ‑‑ for doing web calls. They’ll also look at what fonts you have on your computer, your screen resolution. You can see here the battery level stuff.

So I guess the point I wanted to bring across with the fingerprinting stuff is just that, like, there are lots of different things in play here.

Should we ‑‑ do you think we have time for our bonus round…? Oo, it’s almost 1:00. But I feel like there was ‑‑ I’m hoping, I think there was some interest in this. I don’t know, Olivia, what do you think?

OLIVIA: I just pasted in the chat an answer to the TOR versus VPN question? So we can skip those slides. But it might be useful to kind of rapid‑fire go through safer browsing techniques? Yeah, I just got a “yes please” in the Q&A.

What is a VPN and how does it work?

INGRID: Okay. Quick version of the VPN thing. This is how a normal connection, you know, logs data about you. I go to a website, and it logs this computer came to me! This computer over here.

A VPN basically means that you’re connecting to that computer through another computer. And so your request looks as though it’s coming from kind of somewhere else. That being said, like, it’s ‑‑ you know, there’s still other data. Like, given the point I just made about fingerprinting, there’s other data that could be collected there that’s worth thinking about.

When data travels through TOR ‑‑ TOR is an acronym for The Onion Router ‑‑ the idea is that it wraps your request in multiple layers, by sending it through multiple different computers, which are called relays.

So when you use TOR, which is a browser, to connect, it sends your request through this computer and this computer and this computer, and whatever is the last one you were on before you get to the page you want to visit, that’s where this ‑‑ that’s the, like, IP address that this device is going to log. These are called ‑‑ this last sort of like hop in the routing is called the exit relay. Those can be ‑‑ yeah. I think that that, that was my attempt at being quick. I apologize. (Laughs)
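The layering idea can be shown with a toy sketch. Base64 stands in for real encryption here, so this illustrates the onion structure only, not actual security; the relay names are made up:

```python
import base64

# Toy onion routing: each relay peels one layer and learns only the next
# hop, never the whole path from sender to destination.
def wrap(message, relays):
    for relay in reversed(relays):
        message = base64.b64encode(f"{relay}|{message}".encode()).decode()
    return message

def peel(message):
    relay, _, inner = base64.b64decode(message).decode().partition("|")
    return relay, inner

onion = wrap("GET wired.com", ["relay-A", "relay-B", "exit-relay"])

hop, onion = peel(onion)  # the first relay sees only the next hop's layer
hop, onion = peel(onion)
hop, onion = peel(onion)
print(hop, onion)  # exit-relay GET wired.com
```

Only after the last peel does the request appear in the clear, which is why the destination logs the exit relay’s address rather than yours.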

OLIVIA: Fun fact about VPNs. If you ‑‑ because the United States has different privacy laws than other countries, if you were to connect to a VPN server that was in, for example, the European Union, you might get a lot more notifications from the sites that you normally go to about different cookies and different things that they do with your data. Because in Europe, they’re required to tell you, and in America, they’re not always required to tell you what they’re doing with your data.

What is private web browsing and how does it work?

Oh, I can take it. So this is how, in Firefox, you would open a private window. And private windows, I think we’re all kind of a little bit familiar with them. They clear your search and browsing history once you quit. And it doesn’t make you anonymous to websites, or to your internet service provider. It just keeps your activity private from anyone else who uses the device.

But that might be really useful to you if you are using a public computer, or if you’re using a computer that might be compromised for any other reason. Like say if you suspect that you’re going to protest and a cop might take your device from you.

What are script blockers and how do they work?

INGRID: So script blockers, so like the tracking and the little analytic tools and stuff usually are written in Javascript, because that is the only programming language that works in a browser. So there are tools that will prevent Javascript from running in your browser. And that can be helpful for preventing some of those tracking tools from sending data back to, back to some, you know, computer somewhere else. It can be a little bit frustrating, because Javascript is used from all ‑‑ for all sorts of things on websites. Sometimes it’s used for loading all of the content of the web page! (Laughs)

Sometimes it’s used to, you know, make things kind of have like fun UI! But it’s ‑‑ so it sort of… You know. It’s worth ‑‑ it’s interesting to try, if only to see how much of your internet experience needs Javascript? But yeah. There are some tools that will, you know ‑‑ the Electronic Frontier Foundation has a cool extension called Privacy Badger that sort of learns which scripts are trackers and which ones aren’t as you browse. But yeah. These are, yeah, these are extensions that browsers will ‑‑ you can install onto a browser.
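The matching logic behind blocklist-style extensions can be illustrated with a simple parent-domain check. The domains here are hypothetical, and real tools like Privacy Badger use much richer heuristics than a static list:

```python
# Hypothetical tracker domains -- real lists have thousands of entries.
BLOCKLIST = {"tracker.example", "analytics.example"}

def is_blocked(hostname: str) -> bool:
    """Block a host if it, or any parent domain, appears on the blocklist,
    so cdn.tracker.example is caught by the tracker.example entry."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("cdn.tracker.example"))  # True
print(is_blocked("example.org"))          # False
```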

And then firewalls!

What is a firewall and how does it work?

OLIVIA: So firewalls are kind of the first line of defense for your computer’s security. It would prevent, basically, other computers from connecting directly to your computer, unless you like say yes or no. And so… They’re really easy to turn on? On your computers? But they’re not that way by default.

So in a Mac computer, like I’ve shown here, you would literally just go to security and privacy, and go to the firewall tab, and it’s like one button. Turn off, or turn on. And you don’t really have to do much more than that.

And in Windows, there’s a similar process, if you go to the next slide, where you really just go into settings, go into security, and switch the “on” setting. It’s pretty… It’s pretty easy, and it’s kind of annoying that it’s not done for you automatically.

But I recommend everyone to just check out and see, like, hey, is my firewall turned on? Because it’s a really easy step to immediately make your computer much safer.
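What a firewall controls ‑‑ whether other machines can connect directly to yours ‑‑ can be shown with a small local probe. This sketch only demonstrates the difference between an open and a closed port from a prober's perspective; an actual firewall can make even a listening service unreachable:

```python
import socket

def port_is_open(port: int) -> bool:
    """Probe a local port the way another computer would try to connect to it."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=1):
            return True
    except OSError:
        return False

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_is_open(port))  # True: something is accepting connections here
listener.close()
print(port_is_open(port))  # False: the same probe is now rejected
```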

INGRID: All right! We went through all the slides! (Laughter)

BLUNT: That was perfectly timed! You got it exactly at 1:00.

What’s the difference between a VPN and TOR?

OLIVIA: Okay. So for the TOR versus VPN answer.

As we said just a while ago, TOR uses onion routing and sends your data through multiple computers called TOR nodes to obscure traffic and anonymize you, while a VPN just connects you to a VPN server, usually owned by a VPN provider; some of them you have to pay to use and others are free.

So I described it as kind of like a condom? (Laughs) Between you and your internet service provider? So Verizon knows that you’re using a VPN, but it doesn’t know what you’re doing on it, because a VPN would encrypt all your traffic.

It’s really important that you use a VPN that you trust, because all of your internet traffic is being routed through their computer, which is another reason people like to pay. Because you can have a little bit more faith that it’s like a trusted service if you’re paying for it? Even though that’s of course not always true.

But there is ProtonVPN, which is one I use that’s free, which is run by the same people who run Proton Mail, which I use. I haven’t had any problems with it.

You can use a VPN and TOR at the same time, which is what the question directly asked. And I believe that your ISP would know that you’re using a VPN, but because you’re using a VPN it wouldn’t know that you’re using TOR. Ingrid, if that’s not true, you can like clap me on that.

Because TOR is super slow and it routes your computer through a bunch of different things, it can break a lot of websites, including video streaming like YouTube and Netflix. A lot of people use VPNs, however, so they can access videos or things that are banned in different countries by making it look like they’re in a different place.

But if you’re doing something highly sensitive or illegal, you’d probably want to use TOR, and probably some other precautions, too.

BLUNT: Thank you so much. That was super helpful. Do folks have any questions? Is there anything that people would benefit from sort of like going back and going into in a little bit more detail?

Someone just said: Is there a way around TOR breaking websites? I’ve used it and it throws a lot of captcha tests on regular websites.

OLIVIA: So Cloudflare kind of hates TOR? (Laughs) It takes a really aggressive stance towards TOR users, actually? There was like an Ars Technica article I read that said Cloudflare said 90% of TOR traffic we see is, per se, malicious.

So I don’t know if there’s going to be a time that you can use TOR and not have captchas act up, because Cloudflare sees that kind of activity as malicious activity.

Can Apple see what you’re doing on your computer or phone?

INGRID: “This may be hardware‑related, but does Apple see what you’re doing on your computer because you connect to the internet, e.g. any photos, videos you store?”

Okay, to make sure I understand the question: Is the question whether, like, if you’re using an Apple device, whether Apple is able to see or collect anything if you’re connected to the internet from that device?

Okay. So I think ‑‑ I mean, the answer to that is you would need to kind of tell them to do that? (Laughs)

They’re ‑‑ so like, if you are using something like iCloud to store photos and videos, then yes, they would be able to see and have all of those. But in terms of, like, just being on the internet doing things on an Apple device? Apple can’t personally, like, kind of peek in and see that. I mean, they, like ‑‑ there are, you know, other computers will know that you’re on an Apple device.

But yeah, you have to be directly interfacing with Apple’s network for Apple to be able to have anything on or from your computer.

OLIVIA: And when it comes to things like iMessage and iCloud, they… say? That that information is encrypted. Of course, it’s like not open sourced, so we don’t actually know how they’re encrypting it or what they do. But Apple has said for a while that communications between, like say two iMessage users?

So not someone using it to speak to someone who has an Android; that’s SMS.

But two iMessage users speaking to each other, that’s technically an end‑to‑end encrypted conversation. Apple does collect some information from you when you are initially typing in someone’s number to text them, because it pings the server to find out if that number is associated with an iCloud account.

So for iPhone users, that little moment between when a number that you’re typing in turns either blue or green, in that moment it’s sort of pinging Apple’s servers. So they do have a list of the times that that ping has occurred.

But of course, that doesn’t tell you if you actually contacted the person whose number you typed in; it just knows that you made that query. And that’s the extent, so Apple says, of the information that they collect about your iMessage conversations.

So, yes, they do ‑‑ they can technically see that information? But they tell us that they don’t look at it. So.

Open-source vs. Closed-source Technology

BLUNT: Can you explain a little bit more about open source or closed source technologies?

OLIVIA: Yeah! So, open source technologies are… basically, they’re apps, websites, tools where the code that’s used to write them and run them is publicly available.

When it comes to security technologies, it’s really… best practice to try to use tools that are open source, because that means that they’re able to be publicly audited.

So like, regular security experts can like go in and like actually perform an audit on open source security tools, and know that they work. Versus, you have a lot of paid security tools that you basically assume that they work because people tell you that they work?

And you can’t really, like ‑‑ the public can’t really hold them to any, like, public accountability for whether or not they work or not.

Versus you can actually, like, test the encryption algorithm, say, of Signal, which is a messaging app and all of their code is public information.

INGRID: Open source, it’s also like a way of… kind of letting people developing software kind of support each other, in a way? Because the fact that Signal is open source, it’s not just like oh, we can be accountable if Signal says it’s doing something but it’s not; it’s also a way to be like, hey, I noticed something. Is it working? And you can actually directly contribute to improving that technology.

It’s complicated ‑‑ I mean, the world of open source, it’s complicated in that it’s like, it still has elements of the like… you know, snobby, like, like culture of tech, sometimes? But it, it’s kind of ‑‑ in principle, it’s very like useful for being able to have technologies that are accountable and that kind of have some element of like public engagement and understanding.

How to Choose a VPN

BLUNT: Awesome. Thank you. And so I have another question in the chat: What are some good ways to assess the trustworthiness of a VPN, as you were discussing before?

OLIVIA: The way most people do it, I think, Ingrid, you could check me on this, is kind of by reputation. If you look up how to find a good VPN, you’ll find a lot of articles where people talk about the pros and cons of different ones. And you’ll be kind of directed to the ones considered by the public to be the most trustworthy ones?

INGRID: Yeah. And I think one way I guess I evaluate companies sometimes on this is like looking at their level of engagement with the actual, like, issues that they… of like user privacy?

So like, one of the, you know, things I ended up using as a reference for this workshop as a guide for, like, different browsers, was a blog post by ExpressVPN. And they’re a company that, they don’t have to tell me anything about, like, which browser ‑‑ there’s no reason for them to generate that content.

I mean, it’s good PR‑ish? But they’re not going to get new customers because I’m using a different browser now.

So some of it’s thinking, you know, is it open source or not? What is the like business model? And are they kind of actively, you know, engaging with issues related to user privacy?

We’ll talk a little bit more tomorrow about legislative issues around privacy, and that’s also another way. Like, have they taken positions on particular, you know, proposed laws that could harm user privacy?

To me, those are sort of like, how are they kind of like acting on principles?

OLIVIA: It also might be a good way of checking to see if ‑‑ yeah! Whether they’ve produced logs in court proceedings, so you know whether or not they actually track traffic.

Also, to see like, say, certain companies might be funded by other companies that, like, are less concerned about… public safety or privacy or human rights.

So that might also be a good way of like checking to see, like, the integrity of a VPN company. ‘Cause at the end of the day, they’re all companies.

Is WordPress a reputable option for sex workers?

INGRID: All right. The next question: Would y’all consider WordPress reputable for housing a sex worker website?

This ‑‑ thank you for asking, because it lets us kind of talk about something I wanted to figure out how to include in that whole presentation but didn’t.

So… Just as like a point of clarification, and maybe this is understood by people, but maybe for the video it will be helpful… WordPress? (Sighs) Is both a, like, hosting company and a piece of software. WordPress, I think ‑‑ WordPress.org is the hosting one? Or WordPress.com? I can never remember. (Laughs)

I think it’s WordPress.com. But you can host a website on WordPress’s, like, platform, and when you do that you will be running a website that is built using WordPress’s software. Which is also called WordPress! This is confusing and annoying.

But… you can also use WordPress’s software on another web hosting service. Like, you can install WordPress onto a hosting service’s website. I think a fair amount of hosting services today actually do sort of a one‑step click, like a they’ll‑set‑up‑a‑server‑with‑WordPress‑for‑you option.

In terms of WordPress, like, as the host of a website? And as a host for sex worker websites… I don’t actually know. I would say ‑‑ I would, like, check ‑‑ I would need to go check their terms of service? (Laughs)

I think in general… Yeah. I don’t totally ‑‑ I think with all hosting companies, it’s hard ‑‑ like, they’re, like, figuring ‑‑ figuring out which ones are kind of the most reputable is partly about looking at any past incidents they’ve had in terms of takedowns, or like what their ‑‑ also like where they’re located?

So like, WordPress is a company based in the United States, so they’re beholden to United States laws and regulations. And I’m guessing part of the reason this question was asked is that this person ‑‑ that you probably know a little bit about FOSTA‑SESTA, which makes it harder for companies to allow any content related to sex work on their servers.

And as far as I know, WordPress wants to be compliant with it and hasn’t taken a radical stance against it.

Blunt, do you have any…?

BLUNT: Yeah, I can say I think hosting anywhere on a U.S.‑based company right now has a certain amount of risk, which you can decide if that works for you or not. If you are hosting on WordPress right now, I would just recommend making lots of backups of everything, as like a harm reduction tool. So if they decide to stop hosting your content, you don’t lose everything.

And I also just recommend that for most platforms that you’re working on. (Silence)

Cool. So we have around 15 minutes left. So if there are any other questions, now’s the time to ask them. And… if not, I wonder if Ingrid and Olivia could chat a little bit about what y’all will be covering in the next two days!

Okay, we have two more questions.

Can you reverse browser fingerprinting?

“This may be a digital surveillance question, but once you get browser fingerprinted, is it reversible?”

INGRID: Hmm. That’s actually a question where I’m not sure I know the answer. Olivia, do you know…?

OLIVIA: No…

INGRID: I do know that… you can sort of ‑‑ I know on some, on mobile devices, you can like spoof aspects of your identity?

So, like, you can ‑‑ like, so I mentioned MAC addresses are sort of this hard coded thing. That’s just the ID of your, like, device? A phone can actually ‑‑ like, you can actually generate sort of like fake MAC addresses? (Laughs)

That are the one that’s presenting to the world? So if that sort of was a piece of your fingerprinted identity, that’s one way to kind of, like ‑‑ you know. It’s like you wouldn’t be a perfect match anymore? But… Yeah, I don’t know if there’s sort of a way to completely undo a fingerprinting.

Yeah. I will also look into that and see if I can give you an answer tomorrow, if you’re going to be here tomorrow. If you’re not, it will be in the video for tomorrow.
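To see why spoofing one attribute means you "wouldn't be a perfect match anymore," here is a rough sketch of how fingerprinting scripts combine browser attributes into a single identifier. The attribute set and hashing scheme are illustrative, not any real tracker's:

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Combine observable attributes into one stable identifier,
    roughly how fingerprinting scripts derive an ID from many signals."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (example)",
    "screen": "1440x900",
    "timezone": "America/New_York",
    "fonts": "Arial,Helvetica,Times",
}
original = browser_fingerprint(device)
spoofed = browser_fingerprint({**device, "user_agent": "Spoofed/1.0"})
print(original == spoofed)  # False: one changed attribute breaks the match
```

The same inputs always hash to the same ID, which is what makes you trackable; changing any one input breaks the exact match, though real trackers can still do fuzzy matching on the remaining attributes.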

Additional Digital Literacy Resources

BLUNT: Great, thank you. And someone asked: Are there any readings that y’all would recommend? I’ve read Algorithms of Oppression and am looking for more. I love this question!

OLIVIA: That… the minute I heard that question, like, a really long list of readings just like ran through my brain and then deleted itself? (Laughs) We’ll definitely share like a short reading list in the bibliography that we’ll send out later.

BLUNT: Awesome. That’s great.

Okay, cool! This has been really amazing. Thank you so much. I’m just going to say, one more chance for questions before we begin to wrap up.

Or also, I suppose, things that you’re interested in for the next two days, to see if we’re on track for that.

How do fintech companies use digital surveillance?

Someone asks: This is a fintech‑related question for digital surveillance, but can you talk about how that kind of works internet‑wise?

INGRID: Fintech…

BLUNT: For financial technologies. And how they track you. Oh! So like, if you’re using the same e‑mail address for different things? Is that sort of on the…?

OLIVIA: Like bank tracking? Like money type of…?

INGRID: So… Depending on the, you know, like financial servicer you’re working with, like PayPal or Stripe or whatever, they’re going to have ‑‑ like, they ‑‑ like, in order to work with banks and credit card companies, they are sort of expected to kind of know things about you.

These are like related to rules called KYC, Know Your Customer. And so part of the tracking or like ‑‑ or, not tracking, but part of information that is collected by those providers is a matter of them being legally compliant?

That doesn’t mean it produces great results; it’s simply true.

And I think the ‑‑ in terms of the layer ‑‑ I’m trying to think of what’s ‑‑ I don’t know as much about whether or not companies like Venmo or… Stripe or PayPal are sharing transaction data? I’m pretty sure that’s illegal! (Laughs) But… Who can say. You know, lots of things happen. That would be capitalism.

BLUNT: I also just dropped the account shutdown harm reduction guide that Ingrid and Hacking//Hustling worked on last year, which focuses a lot on financial technologies and the way that, like, data is sort of traced between them and potentially your escorting website. So that was just dropped into the chat below, and I can tweet that out as well in a little bit.

Zoom vs. Jitsi: which is more secure?

OLIVIA: Privacy/security issues of Zoom versus Jitsi… I also prefer to use Jitsi when feasible? But I also found that call quality kind of drops really harshly the more people log on. Like, I don’t think we can actually sustainably have a call of this many people on Jitsi without using like a different ‑‑ without hosting Jitsi on a different server.

Concerning how I handle the privacy/security issues of Zoom, they’re saying they’re going to start betaing end‑to‑end encryption later this month. I don’t know what that actually even means for them, considering that they’re not open source, right?

But I do say that one of the things that I tend to try and practice when it comes to, like, using Zoom, is kind of maintaining security culture amongst me and people who we’re talking to. Right? So I’m never going to talk about, like, any direct actions, right, that are going to happen in real life on Zoom. Refrain from just, like, discussing activity that could get other people in trouble anyway.

Like, while it would be nice to have, like, say this kind of conversation that we’re all having now over an encrypted channel, I think it’s generally much safer and ‑‑ I don’t like using the word “innocent,” but that’s like the word that is popping into my head, to talk about ‑‑ to use Zoom for education, even if it is security education, than it would be to actually discuss real plans.

So… It might be really beneficial to you if you are, like, say, having ‑‑ using Zoom to talk to a large group of people about something that is kind of confidential? To talk over, like, Signal in a group chat, or some other encrypted group chat platform, and decide like, okay, what are you allowed to say over Zoom, and what you’re not allowed to say. And to think of Zoom as basically you having a conversation in public.

Assume for all of your, like, Zoom meetings that someone’s recording and posting it to ‑‑ (Laughs)

YouTube later! And that would probably be… that would probably be the most… secure way to use it, in general? Is just to assume that all of your conversation’s in public.

BLUNT: Yeah. I totally agree, Olivia. And that’s why this is going to be a public‑facing document. So, Zoom felt okay for us for that.

INGRID: Yeah. I mean, I think another way I’ve thought about this with Zoom is like, just remembering what Zoom’s actually designed for, which is workplace surveillance? Right? It’s like, you know, its primary market, like when it was first created, and still, is corporations. Right?

So there’s lots of ‑‑ so like also, when you’re going into like ‑‑ even if you’re going to a, you know, public Zoom thing that is, you know, about learning something. Like, whoever is managing that Zoom call gets a copy of all of the chats. Right?

And even if you’re chatting like privately with one other person, that message is stored by ‑‑ like, someone gets access to that! And… Mostly just that’s something to… like, thinking ‑‑ like, just keep in mind with, yeah, what you do and don’t say. Like, especially if you are not the person who is running the call.

Think about what you would or wouldn’t want someone you don’t know to kind of like have about you.

What’s to come in the digital literacy lunch series?

BLUNT: Awesome. Thank you so much. Do you want to start to wrap up and maybe chat briefly about what we’ll be seeing in the next two sessions?

OLIVIA: Sure, yeah. So the next two sessions are going to be one talking more about how platforms work and sort of the whole, like, algorithmic ‑‑ bleh! (Laughs)

Algorithmic curation, and how misinformation spreads on platforms, and security in the Twitter sphere, rather than just thinking about using the internet in general. And then the third will be talking more explicitly about internet surveillance.

So we’re going to be talking a little bit about surveillance capitalism, as well as like state surveillance, and the places where those intersect, and the places where you might be in danger and how to mitigate risk in that way.

Operations Security (OPSEC): An Introductory Overview

OPSEC operational security meme

We live in an age of increased surveillance and censorship. Social media is a bastion for fascism. Abusers target sex workers, queer users, and people of color and prey on them without fear. Whorephobes and bigots alike use our vulnerability to their advantage through social manipulation, doxxings, and swattings. It has never been a more dangerous time to be a marginalized person online. And our first line of defense is operations security.

“Operations security (OPSEC) is a process by which organizations assess and protect public data about themselves that could, if properly analyzed and grouped with other data by a clever adversary, reveal a bigger picture that ought to stay hidden,” CSO writes.

OPSEC is to online safety what sex education is to sex: a necessary part of modern life that is underfunded, underappreciated, and rarely discussed in an approachable way. This guide is our attempt to introduce OPSEC in an accessible way to sex workers, activists, marginalized users, and allies who may not necessarily have the tech literacy to know about these harm reduction practices.

(Please note that this is an introductory overview to digital and technical safety, and it may not provide the full protection you need in your specific circumstance. For more information, see the links at the end of this article.)

Why does operations security (OPSEC) matter?

Imagine you’re a sex worker from New York at a Black Lives Matter march. While you were spraypainting a statue, an NYPD officer grabbed you, stole your phone, and forced you to use your Face ID login to unlock your messages. He was able to browse through your photos and text messages in detail. Luckily, your fellow protesters came in, de‑arrested you, and brought you and your phone back to safety. You’re shaken from the ordeal, but the worst is over, right?

Well, no. The NYPD officer saw signs that you were engaging in full-service work in your messages. You accessed a hacked public WiFi near the march, and officers were able to grab your Twitter and Instagram account names. The NYPD was able to identify your phone and track you on the walk home. The police now have your address and enough evidence of some kind to draft up a warrant, and they’re eager to enact revenge.

But instead of immediately arresting you, they break into your WiFi connection and keep tabs on your Facebook posts, Twitter DMs, and Instagram chats. It’s a gold mine for the cops: they know that you’re not just going to multiple protests, but you played a key role in pulling down multiple racist monuments. Not just that, they also have corroborating evidence to arrest a few of your fellow full-service workers joining you for the “vandalism.”

You didn’t know the cops were spying on you. How could you? The game was rigged against you from the start.

Or, imagine you were never arrested in the first place. You advertise on an escorting website where you had to upload your ID. The escorting website has been raided by the feds, and facial recognition technologies, such as Thorn’s SPOTLIGHT, build databases off of escort ads. When the cops are going through footage, they are able to link an image of your face from the protest to your escorting ad and have access to your ID and social media accounts.

This is not a dystopian future; this is now. This is not to instill fear; this is to encourage you to protect yourself, protect your data, and to protect each other.

So, what is operations security (OPSEC)?

It’s no secret that the government can track your online activity. But surveillance is actually much more prevalent than most people think. When you visit a website, your connection leaks a ton of information about where you are located, down to your country, state, city, and even a guesstimate of your latitude and longitude. Meanwhile at work, you’re forced to use surveillance software like Cocospy, which sends your boss information on your social media posts, text messages, call logs, and more. And if that isn’t enough, predators, police officers, and right-wing fascists can easily break into your WiFi network and snoop on your web traffic with a few apps and some tech knowledge. It doesn’t take much to steal your login information.

Good OPSEC grants you protection against hacking, data theft, doxing, and surveillance. OPSEC is preventative in nature: it requires you to understand your biggest threats and the potential ways they can harm you. Identifying and conceptualizing this is called threat modeling.

There are various design philosophies for threat modeling. The Electronic Frontier Foundation’s Surveillance Self-Defense project offers a great starting model based on five key questions:

  • What do I want to protect?
  • Who do I want to protect it from?
  • How bad are the consequences if I fail?
  • How likely is it that I will need to protect it?
  • How much trouble am I willing to go through to try to prevent potential consequences?

Ars Technica also offers a valuable guide to threat modeling based on these four questions:

  • Who am I, and what am I doing here?
  • Who or what might try to mess with me, and how?
  • How much can I stand to do about it?
  • Rinse and repeat.
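The question lists above lend themselves to a simple checklist structure you can fill in per situation. The field names and sample answers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ThreatModel:
    """One answer per threat-modeling question; field names are illustrative."""
    protecting: list        # what do I want to protect?
    adversaries: list       # who do I want to protect it from?
    consequences: str       # how bad are the consequences if I fail?
    likelihood: str         # how likely is it that I'll need to protect it?
    acceptable_effort: str  # how much trouble am I willing to go through?

model = ThreatModel(
    protecting=["client messages", "legal name"],
    adversaries=["police", "doxxers"],
    consequences="arrest, outing",
    likelihood="moderate",
    acceptable_effort="burner phone + Signal, but keeping a public Twitter",
)
print(model.adversaries)
```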

Threat models require careful consideration about the trade-offs to different protections. If you’re an online sex worker with a popular Twitter presence, it may be incredibly difficult or outright impossible to stop using social media. However, communicating with your full-service clients over a burner phone connected to Signal may be a good option to evade police surveillance.

An example of the minimal data handed over to the U.S. government by Signal during a subpoena.
Data handed over to the U.S. government by Signal during a subpoena is minimal. For more information, read here.

What is encryption?

“Encryption is a process that encodes a message or file so that it can only be read by certain people,” Search Encrypt writes. “Encryption uses an algorithm to scramble, or encrypt, data and then uses a key for the receiving party to unscramble, or decrypt, the information.”

Let’s say you want to send an encrypted message to another user. The words you type in – the “plaintext” – are algorithmically encoded into something called “ciphertext.” Ciphertext can only be decoded with the corresponding key. When you send your message, the other user uses the decryption key to convert the ciphertext back to plaintext.
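As a minimal sketch of that plaintext/key/ciphertext flow, here is a one-time pad, one of the simplest real ciphers. Production tools use stronger, more practical algorithms (AES, the Signal protocol), but the shape is the same:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each plaintext byte with a random key byte."""
    assert len(key) >= len(plaintext), "the pad must cover the whole message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XORing with the same key reverses the operation

message = b"meet at the fountain"
key = secrets.token_bytes(len(message))  # the key both parties must share
ciphertext = encrypt(message, key)

print(decrypt(ciphertext, key))  # b'meet at the fountain'
```

Anyone who intercepts `ciphertext` without `key` sees only random-looking bytes; anyone holding the key recovers the message exactly.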

End-to-end encrypted messaging

Some services offer encrypted messaging where the service holds the key to your messages. This means the site can choose to decrypt your messages and read them or send your messages to law enforcement upon request. This is why the best form of encrypted messaging is end-to-end encryption.

End-to-end encryption “means that messages are encrypted in a way that allows only the unique recipient of a message to decrypt it, and not anyone in between,” Wired reports. “In other words, only the endpoint computers hold the cryptographic keys, and the company’s server acts as an illiterate messenger, passing along messages that it can’t itself decipher.”
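The "illiterate messenger" idea can be sketched as a relay that only ever handles ciphertext. The XOR stand-in below is not a real cipher (Signal uses the Double Ratchet); it just marks where encryption and decryption happen at the endpoints:

```python
import secrets

class RelayServer:
    """The provider's server: it stores and forwards ciphertext it cannot read."""
    def __init__(self):
        self.mailbox = []
    def send(self, ciphertext: bytes):
        self.mailbox.append(ciphertext)  # all the server ever holds
    def deliver(self) -> bytes:
        return self.mailbox.pop(0)

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for a real cipher -- just shows where crypto happens."""
    return bytes(d ^ k for d, k in zip(data, key))

shared_key = secrets.token_bytes(32)  # held only by the two endpoints
server = RelayServer()

server.send(toy_cipher(b"see you at 8", shared_key))  # sender encrypts locally
stored = server.mailbox[0]                            # what the provider sees
received = toy_cipher(server.deliver(), shared_key)   # recipient decrypts locally

print(received)
```

The server's mailbox never contains the plaintext, so even a subpoena of the provider yields only ciphertext.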

Sex workers, privacy advocates, organizers, and journalists commonly rely on end-to-end encryption to respond to their threat model. Thanks to social media and smartphones, end-to-end encrypted messaging is as popular as it is accessible, and there are a number of services you can use to keep in touch with others.

Popular end-to-end encrypted messaging services include:

  • Signal
  • WhatsApp
  • Telegram (end-to-end encrypted in secret chats only)
  • Dust
  • Wire
  • Keybase
  • iMessage

Among these, the following are generally considered the best for the most private and secure messaging:

  • Signal – Open-source, strong pro-privacy stance, minimal data collection, zero-access encryption. The most popular option
  • Wire – Open-source with a similarly strong pro-privacy stance; no phone number required
  • Dust – Automatic 24-hour message deletion; phone number kept private behind a username; based on the Signal protocol

Note that each of these platforms has its pros and cons. For example, Signal requires your phone number, which may put sex workers at risk of being identified.

Encrypted email

In terms of email services, end-to-end encryption and zero-access encryption are preferred. The latter is a form of encryption that prevents service providers from reading your emails in plaintext while “at rest,” or sitting in your inbox.

Two popular end-to-end encrypted email services include ProtonMail and Tutanota. Both offer end-to-end encrypted communication with fellow service users, such as a ProtonMail user emailing another ProtonMail user.

Be warned that ProtonMail does not encrypt subject lines, while Tutanota does. Additionally, no email service can provide end-to-end encrypted communication if one of the recipients does not use end-to-end encryption. A ProtonMail message sent to an @aol.com account, for example, will not be encrypted in the AOL user’s inbox. Your correspondence will be encrypted at rest within your own inbox, however. For more information, read this author’s overview and review of ProtonMail.

(One workaround for this issue is PGP. Short for “Pretty Good Privacy,” this involves a sender encrypting an email with the recipient’s public key, and the recipient decrypting it with their own private key. Mozilla Thunderbird users can easily navigate this with the Enigmail add-on.)
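The public/private key split behind PGP can be illustrated with toy RSA numbers. Real keys are 2048 bits or more; these tiny primes exist only to show that the public key encrypts and only the private key decrypts:

```python
# Toy RSA with tiny primes -- real PGP keys are 2048+ bits.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent -- (n, e) is the public key
d = pow(e, -1, phi)       # private exponent -- d stays secret

def encrypt(m: int) -> int:
    """Anyone can encrypt to you using only your public key (n, e)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the private-key holder can undo it."""
    return pow(c, d, n)

print(decrypt(encrypt(42)))  # 42
```

This is why PGP works between strangers: you publish (n, e) freely, and nothing in it reveals d.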

Hiding your internet footprint with a VPN

A virtual private network is a service that lets users connect to an off-site server to route traffic over the internet. This connection uses an encrypted tunnel to protect your privacy. This ensures your outbound and inbound web traffic alike are secure.

Screenshot from ProtonVPN, showing a user taking advantage of an encrypted connection.

“When you browse the web while connected to a VPN, your computer contacts the website through the encrypted VPN connection. The VPN forwards the request for you and forwards the response from the website back through the secure connection,” Chris Hoffman writes for How-to Geek. “If you’re using a USA-based VPN to access Netflix, Netflix will see your connection as coming from within the USA.”
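The routing Hoffman describes can be sketched as a toy Python model. No real networking happens here; the IP addresses are reserved documentation addresses (RFC 5737), used purely for illustration.

```python
# Toy model of VPN routing: the destination website only ever sees
# the VPN server's address, never the client's. No real networking;
# the IPs below are reserved documentation addresses (RFC 5737).

CLIENT_IP = "198.51.100.7"   # your real address, visible to your ISP
VPN_IP = "203.0.113.5"       # the VPN server's address

def website(source_ip: str) -> str:
    """The destination site logs whatever address the request came from."""
    return f"served page to {source_ip}"

def vpn_forward(tunneled_request: str) -> str:
    """The VPN decrypts the tunneled request, re-sends it from its own
    address, and relays the response back through the encrypted tunnel."""
    return website(VPN_IP)

direct = website(CLIENT_IP)      # without a VPN: the site sees your IP
tunneled = vpn_forward("GET /")  # with a VPN: the site sees only the VPN
assert CLIENT_IP not in tunneled
```

This is also why trust in the provider matters: in this model, the VPN server is the one party that sees both your real address and your destination.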

VPNs come with trade-offs. Your ISP can see when you’re using a VPN, as can other websites. VPNs are much more common than in previous years, but simply using one may be enough to attract the attention of a company, police department, or state entity. Your information is in the hands of your VPN provider, and some companies are more trustworthy than others. Do your research before choosing a VPN, especially if you’re planning to engage in high-risk activism work or full-service sex work.

Several popular, vetted VPN services include:

Privacy-friendly software alternatives

When corporations control the programs you use, they control access to the data you create with their platforms. There are plenty of privacy-friendly alternatives to some of the most basic proprietary software out there, many of which are open-source. Microsoft Office, for instance, has a free, open-source alternative called LibreOffice. Here is a list of alternatives to some of the most popular websites and services out there:

Additional alternatives can be found on PRISM Break.

Switch to Linux and minimize data tracking

If you’re on a Windows or macOS computer, your data is being tracked. Microsoft and Apple are notorious for collecting and storing an immense amount of information on their users. One of the few viable alternatives to these corporate tech giants is Linux.

Screenshot from Linux Mint, showing the start menu.

Linux is not one operating system but a family of free, open-source OSes built on the Linux kernel. In 2020 there are many distributions (or “distros”) built for user accessibility, and installing one can be as easy as writing a boot image to a flash drive and installing the OS on your computer of choice. You can replace your current OS with Linux, create a “dual boot” setup that keeps your current OS, or even install Linux on an external hard drive and carry your distro between devices. Many distros support full-drive encryption, letting users protect their entire OS and all of its contents prior to boot-up.

Look into the following Linux distros for an accessible, privacy-friendly experience:

  • Debian – One of the most accessible secure distros available, relies entirely on free, open-source drivers and applications
  • PureOS – Security and privacy-based Linux distro
  • Linux Mint – Easy to use, similar in nature to Windows. Installation is easy, OS is highly stable, and overall a solid distro for newcomers
  • Manjaro – Like Linux Mint, user-friendly design and lightweight distro perfect for switching from Windows

For more information on Linux, visit FOSS Post’s beginner’s guide to the operating system family.

Conclusion

This guide goes over technical steps sex workers and activists can take to protect their data. However, the role human error plays in OPSEC cannot be overstated. A trusted VPN, secure Linux distro, and end-to-end encrypted email account will not protect you if you set all of your account passwords to “password,” or if you happen to share your address on social media.
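On the password point: Python’s standard secrets module is one simple, free way to generate strong credentials. A minimal sketch is below; the short wordlist is a stand-in, since a real diceware list has several thousand words and is what gives a passphrase meaningful entropy.

```python
import secrets
import string

# Illustrative stand-in: a real diceware wordlist has ~7,776 words,
# which is what makes a random passphrase hard to guess.
WORDS = ["orbit", "velvet", "cactus", "lantern",
         "meadow", "quartz", "thimble", "walrus"]

def make_passphrase(n_words: int = 5) -> str:
    """Random diceware-style passphrase, e.g. 'quartz-orbit-walrus-...'."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

def make_password(length: int = 20) -> str:
    """Random character password drawn from letters, digits, punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_passphrase())
print(make_password())
```

A password manager will do this for you and remember the results, which is the more practical option for most people.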

Your OPSEC’s weakest link usually comes from an outside party: a client, a fellow organizer, a family member, or a friend. Ideally, you should send this guide to your trusted comrades and suggest they begin improving their digital security too. But you must meet your social network where it’s at. If your client does not understand why they need to use ProtonMail to communicate with you, it may be easier to simply purchase a burner phone for sex work and exchange numbers on Signal.

Always do your research before using any operating system, device, phone app, or communications platform. Services such as Telegram are not quite as secure as people assume, and products like ProtonMail are not fully upfront about their encryption features. You are as safe as the products you trust, so make them earn it.

At times, you may need to sacrifice convenience for privacy by taking certain conversations offline. Not all conversations can be had safely digitally.

There is no such thing as the perfect security system. The advice activists and tech freedom advocates provide is based on what we currently know and consider best practices. New laws, leaks, and technological innovations may introduce changes to your threat model. Stay connected to your local tech activist community to know more about contemporary OPSEC guidelines.

A closing note on privilege

Tech resources are a privilege. They are gatekept by white cishet men who assume their relationship with the world is the default. This not only drives women, trans people, sex workers, and Black activists from tech spaces; it cultivates exclusion. Poor OPSEC goes all the way back to the white men who get to decide who can access tech spaces, who cannot, and what issues the community cares about.

Your ability to successfully build a new computer, buy a new laptop, or even purchase a flash drive is dictated by your race, class, gender, and sex working status, among many other factors. It is the responsibility of the privileged to lend a hand and help the marginalized protect themselves. This can be done in numerous ways – running workshops, donating devices, volunteering one-on-one tech support, funding mutual aid projects, or directly giving your money to the most marginalized among us. No matter how you do it, it’s our responsibility to make sure digital safety is accessible to everyone.

Special thanks to Raksha Muthukumar and SX Noir for feedback on this post’s initial draft.

To read more from Ana Valens, click here!

Further Reading

EFF’s Surveillance Self-Defense Project – An in-depth overview of digital security and safety designed for new and experienced tech users alike

Attending a Protest: Surveillance Self-Defense – Digital safety guide by the EFF specifically for protesters, highly recommended

Protesting for Black Lives Matter? Follow these data privacy tips – For protesters attending Black Lives Matter marches or other events. Written by this guide’s author

How to Protest Safely in the Age of Surveillance – Additional overview for Black Lives Matter protesters

ProtonMail Review – Overview of ProtonMail, its features, and its weaknesses. Written by this guide’s author

GOP introduces bill that would give police easy access to encrypted data – Overview of a Senate bill targeting encryption. Would federally mandate “device manufacturers and service providers” to work with law enforcement in “accessing encrypted data if assistance would aid in the execution of [a] warrant”

How To Stop Instagram From Tracking Everything You Do – Overview of ways you can prevent Instagram from collecting personal data. The best option is, unfortunately, to delete Instagram from your phone

Threat Modeling


What is Threat Modeling?

“A way of narrowly thinking about the sorts of protection you want for your data. It’s impossible to protect against every kind of trick or attacker, so you should concentrate on which people might want your data, what they might want from it, and how they might get it. Coming up with a set of possible attacks you plan to protect against is called threat modeling. Once you have a threat model, you can conduct a risk analysis.” – EFF

What are Threat Modeling Questions To Ask?

  1. What do I want to protect?
  2. Who do I want to protect it from?
  3. How bad are the consequences if I fail?
  4. How likely is it that I will need to protect it?
  5. How much trouble am I willing to go through to try to prevent potential consequences?

What are other Threat Modeling Concerns?

What are my assets?

Who are my adversaries?

What are the threats of my adversaries?

What is the risk of ___ happening?

What does a sample Threat Model look like?

Example: Sex Work Provider in NYC

Assets: Photos, legal ID, address, social media accounts, email, communications, texts, bank account, payment ledgers, contacts.

Adversaries: Cops, stalkers, family, exes, journalists, careless people, catfish, trolls, anti-sex-work ideologues, algorithms.

Threats: Location-tracking spyware, doxxing, blackmail, police reports, stolen photos, intercepted communications, falsified charges or arrest reasons, reporting of sex work provider status to a ‘vanilla’ job.
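A threat model like the sample above can also be written down as a small data structure, which makes it easy to revisit and update as your situation changes. A minimal sketch in Python, using values abbreviated from the example in this section:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """EFF-style threat model: what you protect, from whom, against what."""
    assets: list = field(default_factory=list)       # what do I want to protect?
    adversaries: list = field(default_factory=list)  # who do I want to protect it from?
    threats: list = field(default_factory=list)      # what could they actually do?

# The NYC sex work provider sample from this section, abbreviated.
model = ThreatModel(
    assets=["photos", "legal ID", "address", "social media accounts",
            "communications", "bank account", "payment ledgers"],
    adversaries=["police", "stalkers", "exes", "trolls", "algorithms"],
    threats=["location-tracking spyware", "doxxing", "blackmail",
             "stolen photos", "outing to a 'vanilla' job"],
)
```

The format matters less than the habit: pairing each asset with the adversaries and threats that apply to it is what turns the list into a risk analysis.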

@babyfat.jpeg on Lesbians Who Tech

Two organizers from Hacking//Hustling were rejected from speaking at last year’s Lesbians Who Tech convening in San Francisco, which took place shortly after SESTA-FOSTA was signed into law. Hacking//Hustling provided a partial scholarship to Baby Fat (@babyfat.jpeg) to attend and make sure that there would be sex worker representation at the conference. Baby Fat’s reflections on her experience at Lesbians Who Tech as a sex working Femme are below.

A few months ago, I was able to attend my first Lesbians Who Tech summit thanks largely to the support of my community. At the time of attending I was working as a digital media associate at a Queer healthcare nonprofit. Most of my 9-5 background has come from my work in Queer nonprofits, working mostly in direct outreach. For the last three years I have worked in tech specific positions within nonprofits, skills which I was able to acquire because of my hustling. I’m from a nontraditional background, but hustling has taught me everything I know about tech, marketing, and community management.

It’s worth mentioning I was able to attend the conference because I was awarded a partial scholarship. I attended the summit because I have always had a passion for social media and believe in its ability to connect community and provide accessible education, especially as it relates to Queer sexuality and wellness. From a hustling perspective, it’s the best way for me to engage and advertise to those who utilize the multitude of my services. Post FOSTA/SESTA I have had to rely even more heavily on social media and have since begun operating more discreetly.

While the conference was exciting and I was able to connect with some great folks, I often felt that some overall nuance was missing. There was a lack of intentional conversations around gentrification, sex work, and Queer complacency. Navigating the space as a fat femme sex worker was complex and exhausting at times, between being unable to fit in certain seating, being talked down to by masc attendees, or feeling uncomfortable disclosing the extent of my work. Because the bulk of my 9-5 career has been in nonprofits, a majority of the conferences I have attended have been specifically dedicated to Queer theory, resistance, and community building. However, these spaces often fail to see the importance of tech within these movements and have been slow to adapt to the changes tech has created in communities. I think LWT is doing better work than most other tech specific conferences, but I do think they could benefit from adopting some of the approaches and topics Queer nonprofit conferences have.

Throughout the summit, I heard no mentions of gentrification from LWT leadership, which felt especially out of place considering that LWT seeks to empower the very people gentrification disproportionately affects. While gentrification has been a popular conversation in tech spaces, having been discussed at length, I can understand how it might feel like it doesn’t need as much attention. But I still feel it’s incredibly important to have some intentional dialogue and education around it. I’m from Chicago, and the city’s recent tech expansion and attempt at being a global city has reinvigorated the conversation of gentrification and tech. If LWT truly aims to create a more intersectional and diverse tech workforce, then they need to fully engage the communities that are being displaced by tech gentrification. LWT leadership needs to recognize they have a platform to educate and incite change. Choosing not to talk about gentrification is choosing to be complicit in it.

At the root of complicity are respectability politics, something LWT engages in heavily in order to maintain funding, connections, and a respectable reputation. But with these politics comes the erasure of some folks who rely on tech for their safety and economic stability. Sex workers have always been at the forefront of using and building the popularity of tech platforms and services. Between navigating digital banking, advertising online, and censorship on social media, sex workers utilize tech at significant rates. Sex workers made Cash App and Venmo mainstream, and continue to be a driving force behind both banking systems’ growth. But both systems, as well as most social media platforms, have made it increasingly difficult for sex workers to continue using them.

I went to LWT knowing that there were no formal mentions of sex work in the programming, an oversight considering the historical connections between sex work and Queer folks. After all, pride was started by Marsha P. Johnson, a Black Trans woman, and a sex worker. Countless other Queer revolutionaries like Sylvia Rivera, Amber L. Hollibaugh, and Miss Major among numerous others have been on the front lines of Queer liberation. But as Queer folks have become more assimilated into mainstream culture, Queer sex workers have been pushed farther to the fringes by their own communities.

Whenever in casual conversation with other attendees, the mention of sex work would make them uncomfortable. When I disclosed my experiences in navigating social media as a sex worker, I could feel them try to calculate what type of work I did. It felt like I had to prove my credentials and cleanliness to them. A few people inquired what type of sex work I did, and I generally got the feeling from them that some forms were more acceptable than others. Oftentimes folks would withdraw from the conversation or, worse, explain to me how they knew things were “difficult” because they read a Vice article once. When I pressed them for ways they were working to make their companies and products better for sex workers (since they’d read that Vice article), they often said there wasn’t much they could do because they weren’t a decision-maker or programmer. But I think that’s just coded language for “I don’t want to do anything.”

I don’t think it’s a matter of people not understanding the difficulties sex workers face while trying to navigate tech. I think it’s an issue of respectability politics; additionally, those that are willing to make change are unsure where to start. Sex work, despite what sex positivity would have you think, is still incredibly stigmatized, especially within educated Queer spaces, like LWT. Leadership at LWT has the power to educate attendees on the nuances of tech and sex work and can impact attendees to do more within their positions, but once again, they choose not to.

The high point of the conference for me was being able to see Angelica Ross speak; Ross has been incredibly vocal about the importance of sex workers in tech and has provided visibility to the larger movement. I want to see more dialogue around sex work, and sex workers speaking and facilitating conversations, specifically at LWT in the future. Additionally, I would like to see LWT engaging more with sex workers by partnering with sex worker specific organizations and speaking about sex work more vocally on their digital platforms. I think engaging more sex worker based organizations would encourage more sex workers to attend, and if anyone needs better tech, it’s sex workers.

Publicly talking about sex work not only educates civilians on the nuances of tech and sex work but also actively destigmatizes sex work in tech spaces, making it easier for folks to openly (and comfortably) talk about their narratives as sex workers. I’m critical of LWT because I want it to succeed, I want people to feel comfortable and for tech to be reclaimed.