Digital Literacy Training (Part 3) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

Platform Surveillance Digital Literacy Training

INGRID: So today, we're gonna talk about platform surveillance. And, in general, what we're focused on is both the ways that platforms surveil us and the forms of state surveillance that utilize platforms.

So, this is sort of an outline of where we're going today. First, we'll clarify some of the terms that we're gonna use, those two terms being state surveillance and surveillance capitalism. And then we'll talk a little bit about mitigation strategies for both.

What is surveillance capitalism?

So, clarifying some terms. We ‑‑ we wanted to kind of make sure we were clear about what we were talking about. So surveillance capitalism is a term used to describe sort of a system of capitalism reliant on the monetization of personal data, generally collected by people doing things online.

What is state surveillance?

State surveillance is a tactic used by governments to intimidate and attack. It can be digital? It can also be IRL. For many years, it was just people following people, or decoding paper messages. It’s more sophisticated now, but it takes many different forms.

And surveillance capitalism emerges in part because of the demands of state surveillance. So in the early 2000s, after 9/11, the existence of a military industrial complex that had funded the development of surveillance technologies was already well established, but it was amped up in some ways by the demands of the global war on terror. And there are, you know, deep interconnections between the history of Silicon Valley and the history of the military industrial complex.

The state will utilize the resources of surveillance capitalism. But we wanted to make it clear: surveillance capitalism's negative impacts and risks to personal safety are not solely confined to state violence. It can perpetuate interpersonal violence; it can, you know, create greater harms for your ability to find a place to live, or limit your mobility. Those are the two distinctions we just wanted to clarify as we go into this.

And I feel like part of me wants to believe this is already taken as a given, but it's good to have it said: Neither surveillance capitalism nor state surveillance exists independent of racism and the legacy of the transatlantic slave trade. When I first started getting into this space, in 2012, 2013, that was an angle that was kind of neglected. But you don't have an Industrial Revolution without the transatlantic slave trade, and you don't have surveillance without the need to monitor people who were considered, you know ‑‑ property, basically. I would highly recommend Simone Browne's Dark Matters as an introduction to the role of Blackness in surveillance. But, yeah. Just wanted to raise that up, and be really honest about who is affected by these systems the most.

And… also wanted to reiterate something that we said on the first day of this series, which is: nothing is completely secure… and that's okay. We're learning how to manage risk instead. Right? The only way to be completely safe on the internet is to not be on the internet. Connection, you know, involves vulnerability by default. And the work that we're all trying to do is find ways to create space ‑‑ to mitigate or, like, understand the risks that we're taking, rather than assuming everything's a threat and closing off, or saying, well, I'm doomed anyway; doesn't matter.

What is threat modeling?

In the world of security studies ‑‑ of, you know, building a way of thinking about keeping yourself safe ‑‑ a term that comes from that space is threat modeling. It's a practice of assessing your potential vulnerabilities with regard to surveillance. We're not going to be doing threat modeling, per se, in this presentation; there are some really great resources out there on how to do that, and we're happy to point you to them. But we wanted to raise it up as something that can help in approaching risk management and mitigation, because it's a way of inventorying your own circumstances and understanding where you are and aren't at risk, which can make it a little less overwhelming.

All right. So, the concept of state surveillance: for this presentation, we're going to be talking about how it happens on and through platforms, which we talked about yesterday. There are all kinds of other ways? (Laughs) That, as I said earlier, the state can engage in surveillance. Right this second, we're just gonna focus on platforms. If there are questions specifically about non‑platform things, maybe in the Q&A part we could talk about those, if there's time.

What are platforms and why do they matter?

So, platforms are companies. I think we've said this a lot over the last three days. And what that generally means is that platforms have to, and will, comply with requests from law enforcement for user data. They don't have to tell anyone that that happens. Some of the big companies do. The one on the top is from Twitter's annual transparency report, and the one below is from Facebook's. These are just graphs ‑‑ visualizations they made of government requests for user data. But again, this is almost a courtesy? It's something that's done maybe for the brand, not necessarily because they have any obligation. It's also a reminder that they can't actually say no to, like, a warrant. This also applies to internet service providers, like Verizon; mobile data providers; hosting services. Companies have to do what the law tells them, and most of the internet is run by companies.

Next slide… So, governments don't always have to ask platforms to share private data, if there's enough publicly available material to draw from. The method of using publicly accessible data from online sources is sometimes called open source investigation, in that the method is reproducible and the data is publicly available. When Backpage still existed, that was more or less how cops would use it to conduct raids. One older example of this: in 2014, the New York Police Department conducted a massive raid on public housing projects in Harlem to arrest suspected gang members. It was called ‑‑ oh, shoot. It had like some terrible name… Operation Crew Cut. That's what it was called. (Laughs) And much of the evidence used in the raid was culled, basically, from cops lurking on social media, interpreting slang and inside jokes between teenage boys as gang coordination. Some of the young men ‑‑ I think they were as young as 16 and as old as 23 ‑‑ who were caught in that raid are still serving sentences. Some of them were able to challenge the case and be let out, but still ‑‑ it was a pretty terrible process.

A more recent example of police using publicly available data is this one on the left, from June. The woman in this photo was charged with allegedly setting a Philadelphia police vehicle on fire. And the police were able to figure out who she was based on a tattoo visible in this photo ‑‑ which you can't really see in this image because it's quite small; I couldn't get a bigger one ‑‑ a T‑shirt she had previously bought from an Etsy store, and previous amateur modeling work on Instagram. So, you know, maybe only a handful of people had bought that one Etsy shirt. And they were able to match her to these other images that were online, out in public.

What is open source investigation and why does it matter?

I want to note briefly that open source investigation, or digital investigation using publicly available resources, isn’t inherently a bad thing or technique. It’s research. Activists and journalists use it to identify misinformation campaigns and human rights violations when it’s not safe for them to be on the ground. I’ve used it in my own work. But you know, the point is don’t make it easier for the police to do their job.

What is metadata and why does it matter?

The next slide… Another source of information that can be pulled from publicly available sites, besides just reading the material and the images, is metadata. And "metadata" is sort of just a fancy word for data about data. One way this sometimes gets described: if you have a piece of physical mail, the letter is sort of the data, and the envelope ‑‑ who it's mailed to, where it was mailed from, things like that ‑‑ is the metadata. It's the container that has relevant information about the content.

So in an image, metadata is encoded into the file with information about the photo. These are some screenshots of a photo of my dog, on my phone. (Laughs) She briefly interrupted the session yesterday, so I thought I'd let her be a guest in today's presentation. If I scroll down on my phone and look further below in the Android interface, I can see the day and the time that the photo was taken, and then what I've blocked out with that big blue square is a Google Maps embed with a map of exactly where I took the picture. You can also see what kind of phone I used to take the picture, where the image is stored in my folders, and how big the image file is. These are examples of metadata. That, combined with the actual information in the image ‑‑ like a T‑shirt or a tattoo ‑‑ is all really useful for law enforcement. And metadata is something that is stored ‑‑ if you use, like, Instagram's API to access photos, you can get metadata like this from the app.
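
To make that concrete: a minimal sketch of reading this kind of embedded metadata yourself, assuming Python with the Pillow library installed and a hypothetical file name. This isn't a tool mentioned in the session, just an illustration.

```python
# A minimal sketch of reading a photo's embedded metadata, assuming
# Python with the Pillow library (pip install Pillow) and a
# hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("dog.jpg")   # hypothetical photo
exif = img.getexif()          # empty if the file has no EXIF block

for tag_id, value in exif.items():
    # Translate numeric EXIF tag IDs into readable names
    name = TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")

# Typical output includes Make and Model (what phone took it),
# DateTime (when), and a GPSInfo entry pointing at exactly where.
```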

Is it possible to remove metadata from your phone’s camera?

OLIVIA: So, surveillance capitalism! Really big ‑‑ oh, there’s a Q&A. Is it possible to remove metadata from your phone’s camera?

INGRID: So there's two things that you can do. One is ‑‑ I think that you can, in your settings, disable location? Being stored on the photos? Depending, I think, on the make and model. Another thing, if you're concerned about the detailed metadata… taking a screenshot of the image on your phone is not gonna store the location where the screenshot was taken. I think the screenshot might store what kind of device it was taken on. But that doesn't necessarily narrow it down in a world of mostly iPhones and, you know, Samsung and Android devices. It's a bit less granular. Yeah.
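
If you'd rather strip the metadata than screenshot, here's a minimal sketch of one way to do it ‑‑ again assuming Python with Pillow and made‑up file names. Re‑saving only the pixels produces a copy that carries no EXIF block at all.

```python
# A minimal sketch of stripping metadata by re-saving only the
# pixels, assuming Pillow and hypothetical file names.
from PIL import Image

img = Image.open("original.jpg")

# Copy the pixel data into a brand-new image object, which starts
# with no EXIF block, then save that instead of the original.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("clean.jpg")
```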

OLIVIA: Awesome.

Surveillance Capitalism: How It Works and Why It Matters

So, surveillance capitalism! I don't know if you guys have noticed… I've been seeing them a lot more often. But some advertisements in between YouTube videos are just kind of like multiple choice questions? Some of them ask how old you are; some of them might ask if you've graduated from school yet; et cetera. So like, in what world is a single‑question survey a replacement for, say, a 30‑second advertisement for Old Spice deodorant?

Our world! Specifically, our world under surveillance capitalism. So, to go further on Ingrid's initial definition, surveillance capitalism occurs when our data is the commodity for sale on the market. And that data is usually, almost always, created and captured through companies who provide free online services, like Facebook, Google, YouTube, et cetera.

We can’t really know for sure how much our data is worth? There’s no industry standard. Because, at the end of the day, information that’s valuable for one company could be completely useless for another company. But we do know that Facebook makes about $30 every quarter off of each individual user.

What are data brokers and why do they matter?

But social media sites aren't the only ones with business models designed around selling information. We also have data brokers. If we go back to the private investigator example that we saw in the state surveillance section ‑‑ thinking about the tools at their disposal, and the level of openness that you have online ‑‑ they could find out a lot of things about you. Like where you've lived, your habits, what you spend money on, who your family members and romantic partners are, your political alignments, your health status, et cetera. And that's like one person.

But imagine that rather than searching through your data and piecing together a story themselves, they actually just had access to a giant vacuum cleaner and were able to hoover up the entire internet instead. That is kind of what a data broker is!

I made up this tiny case study for Exact Data. They’re a data broker, and they’re in Chicago.

And Exact Data has profiles of about ‑‑ not twenty ‑‑ 240 million individuals, with about 700 data elements associated with each of them. So some of the questions you could answer, if you looked at a stranger's profile through Exact Data, would be their name, their address, their ethnicity, their level of education, whether they have grandkids, whether they like woodworking. So it goes from basic data to your interests and what you spend time on.

So, potential for harm: You get lumped in algorithmically with a group or demographic when you would prefer to be anonymous. Your profile may appear in algorithmic recommendations because of traits about yourself you normally keep private. The advertisements you see might be reminders of previous habits that could be triggering to see now. And it's also a gateway for local authorities to obtain extremely detailed information about you. I don't know if Ingrid has any other points on potential for harm.

How to Mitigate Harm

But, luckily, there are ways to mitigate. Right? You can opt out of this. Even though it's pretty hard? If you remember from our first workshop, where we talked about how websites collect data from us, we know that it's captured mostly using scripts: trackers, cookies, et cetera. So you can use a script blocker! Also, if you read the terms of service, it will probably mention the type of data an online service collects and what it's for. They don't always, but a lot of them do. So if you read it, you have a bit more agency over whether you agree to use that service or not, and you might be able to look for alternatives that have different terms of service.
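
As a small illustration of how visible some of this collection is: a sketch, assuming Python with the requests library and a placeholder URL, of listing the cookies a site tries to set on your very first visit. Script blockers work further upstream, stopping tracker scripts from ever loading in the browser.

```python
# A minimal sketch of listing the cookies a site sets on first
# contact, assuming the requests library and a placeholder URL.
import requests

resp = requests.get("https://example.com")

for cookie in resp.cookies:
    # Long-lived cookies tied to third-party domains are the
    # classic building block of cross-site tracking.
    print(cookie.name, cookie.domain, cookie.expires)
```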

Privacy laws in the United States are pretty relaxed when it comes to forcing companies to report things. So, one tip is to try setting your VPN location to a country that has stronger privacy laws. Then you might get a lot more banners about cookies and other trackers ‑‑ things companies are required to tell you about if you live somewhere that's not here.

You can also contact data brokers and ask to be put on their internal suppression list, and a lot of brokers have forms you can fill out online requesting that. The only issue is that this is really hard? Because there are a lot of data broker companies, and we don't actually know how many there are, because this industry is pretty unregulated.

Another mitigation strategy is creating, essentially, a digital alter ego that's difficult to trace to your other online accounts. So if you are behaving in a way that you don't want to be conflated algorithmically with the rest of your life, you can create separate online profiles using different e‑mail addresses, potentially using them in other places ‑‑ creating as much distance between you in one aspect of your life and you in the other, and compartmentalizing in a way that makes it difficult to connect the two of you.

And then of course you can use encrypted apps that don’t store metadata or actual data. This could include messaging apps, but this could also include… word processors like CryptPad; it could include video conferencing; it could include a lot of different apps.

So, to wrap everything up: Over the past three days…

Wrapping Up the Digital Literacy Series

INGRID: So, I guess we wanted to try and provide some wrap‑up, because we covered a lot of things in three days. And that was a very broad version of a very deep and complicated subject. But we went through the foundations of the internet ‑‑ how it's made, the actual technical aspects of how it works. The platforms that are built atop that foundation and extract value out of it. And the systems of power that incentivize those platforms to exist, and that control and govern how some of that value is used ‑‑ (Laughs) Or misused.

And across all three of these, I had a couple of bigger questions or things to think about that I wanted to put forward. One is that the neoliberal public/private structure of the internet ‑‑ as an infrastructure that everyone lives with ‑‑ shapes the way that everything else follows. Right? When something that was originally a government‑built property becomes a commodity, and that commodity becomes the foundation of how anyone can live in the world, it creates a lot of these aftereffects.

And I find internet history always really fascinating, because it's a reminder that a lot of this is very contingent, and it could have gone different ways. Sometimes, people wanted it to go a different way? And it's worth thinking about what it looks like to build different networks, different services and structures, while living within surveillance capitalism. 'Cause we haven't built different internets and different structures quite yet; our surveillance capitalism's still pretty big. A big part of taking care of one another and ourselves is taking care with where and how we speak and act online. Which is different from being afraid to say things? It's more being competent in where and how you choose to speak, to protect yourself and to protect people you care about.

I think… that’s ‑‑ yeah, that went by really fast! (Laughs)

BLUNT: We’ll just shower you with questions. (Laughs)

How are companies making money off of data?

I have two questions in the chat. Someone says: How is it exactly that companies make money off of our data? Is it just through ads? Are there other processes?

OLIVIA: So, when it comes to making money off of it ‑‑ let's say you're a company that sells headphones. And you are tracking data on the people who are using your headphones. They buy them, and then in order to use them, they have to download an app onto their phone. Right? Through that app, the company might record things like the songs you listen to, what time of day you listen to them, how long you're using the app, where you are when you're listening… And so they might have this, like, select little package of data about you.

Now, they might find that that's data that, like, a music… campaign ‑‑ the people who do advertising for musicians, I guess? I don't remember what that job's called. But it's the idea that different companies collect data that's useful for other companies in their marketing practices, or in their business practices.

So Facebook collects data that a lot of different companies might want for a myriad of reasons, because the amount of data Facebook kind of siphons from people is so large. And so ‑‑ yeah, does that…? Do any of you guys have something to say around that, about other ways that companies might ‑‑

INGRID: Yeah. I mean, a lot of it bottoms out in ads and market research.

OLIVIA: Yeah.

INGRID: There's another place this data goes ‑‑ I don't think it's the most lucrative source of revenue, insofar as they're not the biggest buyer? But, like, police departments will buy from data brokers. And there's no real regulation on whether or when they do that.

So. Like, you know ‑‑ information in general is valuable. (Laughs) And I mean, ironically, what's kind of so fascinating to me about the model of surveillance capitalism is that, like, ads don't work. Or like, they kinda work, but… In terms of actually proving that, after I look at a pair of glasses once and then I'm followed around the internet by the same pair of glasses for two and a half months ‑‑ the actual rate at which I end up buying the glasses, I don't think is that high? But there is just enough faith in the idea of it that it continues to make lots and lots of money. It's, like, very speculative.

OLIVIA: I actually saw an article recently that said… instead of advertising ‑‑ like, say we all paid for, like, an ad‑free internet? It would cost about $35 a month, for each of us, to maintain internet infrastructure and pay for things without having advertisements.

If you have an alter ego for privacy, how can you ensure it remains separate? Is facial recognition something to worry about?

OLIVIA: “If you have an alter ego account and a personal account, how do you ensure your online accounts stay completely compartmentalized and aren’t associated through your device or IP address, et cetera?” And then they say, “Is there a way to protect your face from being collected on facial recognition if you post pictures on both accounts?”

INGRID: Yeah. So we didn't talk about facial recognition. And I kind of assumed that it's been so heavily talked about in the news that maybe it was already on people's minds ‑‑ but I also didn't want to overemphasize it as a risk, because there's so much information beyond a face that can be used when trying to identify people?

In terms of posting pictures on two different accounts… if they're similar photos, I think the answer is, like, your face will be captured no matter what? That's sort of a piece of it. I don't know. Olivia, can you think of any examples of mitigation of face recognition? I mean, Dazzle doesn't really work anymore. But, like, in the same way that people will avoid copyright bots catching them on YouTube by changing the cropping, or subtly altering a video file?

BLUNT: I just dropped a link. Have you seen this? It's from the SAND Lab in Chicago, a tool called Fawkes, and it slightly alters the pixels so that the photo is unrecognizable to facial recognition technologies. I'm still sort of looking into it, but I think it's an interesting thing to think about when we're thinking about uploading photos to, like, escort ads or something like that.

OLIVIA: What's difficult when it comes to facial recognition is that, depending on who the other actor is, they have access to a different level of technology. Like, the consumer‑facing facial recognition software ‑‑ the stuff in Instagram face filters, and the stuff in the Photo Booth app on your laptop ‑‑ is really different from the kinds of tools that, say, the state would have at their disposal.

So it's kind of a different… I don't know if "threat model" is even a good way to phrase it, because we know that, say, for instance, the New York Police Department definitely has tools that allow them to identify pictures of protesters from just their eyes and their eyebrows.

And so, normally… if I were talking to someone who has, like, two different accounts and is interested in not being connected to both of those accounts through their biometric data, like their face, I would normally suggest that they wear a mask that covers their whole face, honestly. Because I don't really know of a foolproof way to avoid it digitally, without, like, actively destroying the file. Like, you'd have to put an emoji over it ‑‑ you'd have to physically cover your face in a way that's irreversible, that someone else who downloads the photo can't undo. Because there are a lot of tricks online when it comes to, like, changing the lighting, and putting glitter on your face, and doing a lot of different stuff?

And some of those work on consumer‑facing facial recognition technology. But we don't actually know if that even works at the state level, if that makes sense.

So depending on like, who you’re worried about tracking your account… you might just want to straight up cover your face, or leave your face out of photos.

What is gait analysis and why is it important?

BLUNT: I wonder, too, do you ‑‑ if you could talk a little bit about gait analysis, and how that’s also used? Are you familiar with that?

INGRID: I don’t ‑‑ I don’t know enough about gait analysis…

OLIVIA: I know that it exists.

INGRID: Yeah. And I think ‑‑ like, it is ‑‑ and this is another thing where, in trying to figure out what to talk about for this session, figuring out like what are things that we actually know where the risks are, and what are things that… may exist, but we, like, can’t necessarily like identify where they are?

OLIVIA: I have heard of resources for people who are interested ‑‑ like, for high‑risk people who are worried about being found via gait analysis? And gait analysis is literally being identified by the way that you walk, and the way that you move. And there are people who teach workshops about how to mess with your walk in a way that makes you not recognizable, and how to practice doing that.

BLUNT: It’s fascinating.

Does it matter if you use popular platforms in browsers or apps?

INGRID: “If you use popular platforms like Facebook and Instagram in browsers instead of apps, does that give you a little more control over your data, or does it not really matter?”

I ‑‑ so, Olivia and Blunt, if you have other thoughts on this, please jump in. I mean, my position is that it kind of doesn't matter, insofar as the things that Facebook stores about you are things you do on Facebook. Like, it's still tied to an account. It's not just, you know, passive trackers, which you could maybe use a script blocker on, and that's cool? But things you like, and things you click on, on Facebook in the browser, are still going to be stored in a database as attached to your profile. So it doesn't necessarily change the concerns, I think, either way. But.

BLUNT: I’m not totally ‑‑ I have also heard things about having the Facebook app on your phone, that it gives Facebook access to more things. Like, the terms of service are different. I’m not totally sure about it. I don’t have it on my phone.

INGRID: That's actually ‑‑ that's a good point. I apologize. It also depends on what you're concerned about. So, one thing that Facebook really likes having information on, for individual users, is who else they might want to be Facebook friends with. Right? The "People You May Know" feature, I once read, uses up a very large percentage of Facebook's infrastructure compute, because connecting people to other people is really, really hard. And the Facebook app being on your phone does give it the opportunity to be opened up to your phone contacts, and places you take your phone. Which can expand the network of people that it thinks might be in your proximity, or in your social network. Because if a phone number in your phone has a Facebook account, maybe they'll say, like, oh, you know this person, probably!

In 2017, Kashmir Hill and Surya Mattu did a feature for Gizmodo on how it works, inspired by Kashmir getting recommended a friend on Facebook who was a long‑lost relative from, like, her father's previous marriage or something ‑‑ someone there was no way she would have otherwise met. So her interest partly came out of trying to figure out how Facebook could have possibly made that connection. And Facebook wouldn't tell them! (Laughs) Because the "People You May Know" feature is also a very powerful tool that makes them, like, an app that people want to use, in theory. They also did some follow‑up stories about sex workers being outed because the "People You May Know" feature was recommending their alt accounts to friends who knew them from other parts of their life. And there were also examples of therapists and mental health professionals being recommended their clients as Facebook friends. People who met other people in, like, you know, 12‑step meetings being recommended as Facebook friends.

So, in terms of app versus browser: Facebook won't say whether or not information it gathers from mobile devices goes into its "People You May Know" recommendations. But based on examples like this, it seems likely that it plays a role.

So I guess, in terms of control over your data… I think I misunderstood the framing of the question, 'cause it's more that it gives you slightly more control over what Facebook does and doesn't know about you. If Facebook doesn't know what you're doing with and on your phone, that's probably not a bad idea.

Did that all make sense, or was that…? I don’t know.

How secure is Facebook Messenger? How secure is Instagram?

BLUNT: No, I think that made sense. Someone was asking about the Facebook Messenger app. I think the same thing sort of applies to that, ’cause it’s all connected. I don’t know if anyone has anything else to say about that.

INGRID: This is the part where I admit that I’m not on Facebook. So, I’m actually terrible at answering a lot of Facebook questions, because I don’t…

BLUNT: Yeah, I ‑‑ I also think, like, Instagram is owned by Facebook, so also having the Instagram app on your phone, I feel like, might also bring up some of the same concerns?

INGRID: Yeah. I mean, I do use Instagram, so I can remember that interface slightly better? As my point of comparison: I had sort of a dummy, like, lurker Facebook account for some research a while ago. And the difference between its attempts to connect me and suggest follows, versus Instagram's attempts ‑‑ Facebook seemed far more aggressive, and given far less information about me, was able to draw connections that shouldn't have made any sense ‑‑ that were, like, very accurate. So, don't trust Instagram, because don't trust Facebook. But in my experience, I don't know if it's as central to Instagram's business model.

BLUNT: Yeah. And just speaking from my own personal experience: when I have had Facebook or Instagram, I use, like, a tertiary alias, lock the account, and don't use a face photo on the app ‑‑ just so if it does recommend me to a client, they're much less likely to know that it's me. And that has happened on my Instagram account. My personal Instagram account.

What is Palantir and why does it matter?

INGRID: Yeah. There are several follow‑ups, but I feel like this question “Can you explain about Palantir?” has been sitting for a while, so I want to make sure it gets answered, and then come back to this a little bit. So ‑‑

OLIVIA: I can explain a little bit about Palantir. So, it's kind of the devil. I think it's existed since, like, 2014? That might be wrong ‑‑ no, I think it was 2004, actually!

INGRID: Yeah, no, it’s 2004. I just ‑‑ I accidentally stumbled into a Wall Street Journal article about them from 2009 yesterday, while researching something else, and died a little? It was like, “This company’s great! I can’t see how anything could be problematic.”

BLUNT: It’s 2003, yeah. 17 years.

OLIVIA: It was started by Peter Thiel, who is a really strong Trump supporter and is linked to this really weird, like, anti‑democracy, pro‑monarchy movement in Silicon Valley that's held by a weird circle of rich people. And they are kind of the pioneers of predictive policing, and have also assisted ICE with tracking down and deporting immigrants. And they actually recently went public ‑‑ hmm?

INGRID: They did? Wow!

OLIVIA: Yeah. They haven’t, like, turned a profit yet, in 17 years. But it was initially funded, if I’m getting this right, I’m pretty sure they were initially funded by like the venture capital arm of the CIA.

INGRID: Okay, they actually, they haven’t gone public yet, but they are planning for an IPO…

OLIVIA: Soon. Is that it?

INGRID: Yeah.

OLIVIA: Okay.

INGRID: Sorry, just ‑‑ a thing about this company is that, like, every two years there is a flurry of press about them maybe doing an IPO, and then they don't. And yeah, sorry ‑‑ so their initial funding partially came from In‑Q‑Tel, which is a venture capital firm run by the CIA that funds companies making products the CIA might need. Keyhole, which was a satellite imagery software company, was initially funded by the CIA that way, and that company was later acquired by Google and became Google Earth. So that's an example of the kind of things they fund. It's stuff like that.

OLIVIA: Oh, and to clarify, Palantir is, like, a datamining company. So they do the same kind of thing as the case study that I showed earlier. But they're really good at it. And they also create tools and technologies to do more of it.

INGRID: And they're kind of a good example of the point made at the beginning of this, about surveillance capitalism and state surveillance being closely intertwined. Palantir has corporate and government contracts to do datamining services. I think they were working with Nestle for a while, and Coca‑Cola. They want to provide as many tools to businesses as they do to ICE. And those kinds of services are seen as sort of interchangeable. (Laughs)

I mean, the funny thing to me about Palantir, too, is that ‑‑ it’s not like they’re ‑‑ in some ways, I feel like I’m not even sure it’s that they’re the best at what they do? It’s that they’re the best at getting contracts and making people think they’re cool at what they do?

OLIVIA: They market themselves as like a software company, but they really just have a lot of files.

INGRID: Somebody in the tech industry once described them to me as, like, the McKinsey of datamining? A firm that works with lots of governments and corporations, does things that seem to just make everything worse, but keeps making money? (Laughs) Is the best way to describe it!

So I think, in terms of explaining Palantir… they are a high‑profile example of the kind of company that is rampant throughout the tech industry. They've had the most cartoonish accoutrements, insofar as, you know, one of their founders is literally a vampire. And they took money from the CIA. And their name comes from a Lord of the Rings, like, magical seeing stone. There have been documented instances of them doing egregious things, such as working with the New Orleans Police Department to develop predictive policing tools without an actual contract ‑‑ so without any oversight from the City Council, without any oversight from the Mayor's Office ‑‑ basically with the funding for the project coming through a private foundation. But in terms of, like, you personally, in your day‑to‑day life: should you worry about this specific company any more than you would worry about a specific state agency? It's going to depend on your particular risk levels, but… they're a tool of the state, not necessarily themselves going to, like, come for people.

OLIVIA: Also ‑‑

INGRID: Oh, literally a vampire? Peter Thiel is one of those people who believes in getting infusions of young people’s blood to stay healthy and young, so. As far as I’m concerned? A vampire.

What are Thorn and Spotlight?

BLUNT: I also just wanted to talk briefly about Thorn and Spotlight, 'cause I think that ‑‑ so, Spotlight is a tool used by Thorn, which I believe Ashton Kutcher is a cofounder of? The mission of Thorn is to, quote, stop human trafficking, and what they do is use their technology to scrape escort ads and build facial recognition databases off of those ads. So I think it's just interesting to think about the relationship between these tools and how they collaborate with the police and with ICE, in a way that could potentially facilitate the deportation of migrant sex workers, as well.

INGRID: Okay. Sorry, let’s ‑‑ (Laughs) Yes. Ashton. Fuck him.

Will deleting the Facebook or Instagram app help?

So, one question here… "Deleting the Facebook or Insta app won't help, right, because the info on you will have already been collected?" Not necessarily. I mean, there won't be any more data collected? But it's true that what exists won't be erased, unless you delete your account. And, like, go through the steps to actually‑actually delete it, because they'll trick you and be like, "Just deactivate it! You can always come back!" Never come back…

But, yeah. As you continue to live your life and go places ‑‑ although I guess people aren't going places right now… they won't have any more material. Yeah.

What data points does Facebook have?

BLUNT: Someone asked: “If you have an existing Facebook account that only has civilian photos and you haven’t touched it for four years, it only has those data points, right?” I think that’s a good follow‑up to the previous question.

INGRID: Yeah, that's true. Well ‑‑ there are also people you know who have Facebook accounts? And Facebook has this internal mechanism for tracking non‑Facebook users as, like, air‑quote "users" ‑‑ entities that they can serve ads to. Generally, it's based on those people being tagged in other people's images, even if they don't have an account, or have a dead account. So if you have a four‑year‑old untouched Facebook account, and somebody tags a contemporary photo of you, those data points are connected.

So, you know ‑‑ whatever other people do that could connect back to that original account… whatever followers or friends you have on it, updates that they produce could become new data points about you.

Can fintech platforms connect your pay app accounts to your social accounts?

“In terms of fintech, how easy is it for companies to link your pay app accounts to social accounts?” Blunt, you might have a more comprehensive answer on this than I will.

BLUNT: Okay… let me take a look. So, I think that there are, like, databases built off of escort ads that then get shared on the back end, to sort of facilitate, like, a network‑type effect of de‑platforming sex workers. A lot of people report ‑‑ and some of the research that we're doing sort of confirms this ‑‑ that once you experience de‑platforming or shadowbanning on one platform, you're significantly more likely to experience it, or lose access, on another. So, as a form of harm reduction, to keep access to those financial technologies, I suggest having, like, a burner e‑mail account that you only use for that, that's not listed anywhere else publicly, and that sounds vanilla and civilian.

That way, if they're scraping e‑mail addresses from escort ads and then selling that data to financial technologies, your e‑mail won't be in there. It's just adding one layer of protection. It might be connected in some other ways, but… just sort of as a form of harm reduction.

I don’t know if that answered… that question.

And I'm looking right now for resources specifically on protecting community members in regards to ICE, centering trans sex workers. I know that Red Canary Song does some work around this, specifically with massage parlor workers, and I'm looking up the name of the organization of trans Latinx sex workers in Jackson Heights. So I will drop that link so you can follow their work, as well.

And please feel free to drop any other questions in the chat, even if it wasn’t covered. We’ll see if we can answer them, and this is your time. So, feel free to ask away.

(Silence)

Tech Journals or Websites to Follow

INGRID: “What tech update journals or websites do we follow to stay up to date?” Oh! I want to know more about what other people do, too. I tend to, like ‑‑ I tend to follow specific writers, more than specific outlets, partly because… like, there are freelancers who kind of go to different places. But also, I kind of value seeing people who, like, have developed expertise in things. So… Julia Carrie Wong at The Guardian is someone I read a lot. Melissa Gira Grant, at The New Republic. (Laughing) Is not always writing about tech, but is probably one of the smartest and sharpest and most integrity‑filled writers I know.

BLUNT: Definitely one of the first to cover FOSTA‑SESTA, for sure.

INGRID: Yeah. Yeah. And… I’ve been… Let’s see. Motherboard, in general, Jason Koebler and Janus Rose, are very well‑sourced in the industry. So I usually trust things that they cover and find. Caroline Haskins is a young reporter who used to be at Motherboard and is now at BuzzFeed and does excellent work, along with Ryan Mac. And Kashmir Hill, who is now at The New York Times, but has also written for Gizmodo and others. And who else… Davey Alba is also with The New York Times, and is really great. Those are mine.

BLUNT: I sign up for ‑‑ it's, like, Casey Newton's daily e‑mail newsletter? And I just read that to stay up to date on certain news, and then research more. I know that the Internet Freedom Festival also has good updates, and I'm happy to drop links to other weekly mailing lists that I sign up to, as well.

OLIVIA: Oh, I was muted! Oops. I listen to a lot of podcasts. And the, like, Mozilla IRL podcast was really good, for me, in terms of learning more about the internet, and specifically surveillance infrastructure. And they have a lot of episodes ‑‑ so if you're idling, and you have time to listen rather than read. Mozilla also has transcripts available, which is pretty nice.

Can browser bookmarks be accessed?

INGRID: "Is there any way for bookmarks on my browser to be accessed?" I believe the answer is going to depend on the browser. So, in the case of a browser like Chrome, which is owned by Google: if you are logged into your Google account as part of your browser use, I think all of that information gets associated with your Google account. So Google will have that data. In terms of access beyond that, I'm not sure.

And then I think for other browsers, I don’t believe that that would be something that’s stored on Firefox. I’m not sure about Microsoft Edge. Olivia, do you have any other thoughts on that one?

OLIVIA: I, I don’t know…

How secure and safe is Zoom?

INGRID: Talking about safety of Zoom! Okay. Yeah. We talked a little bit about this yesterday, I think. Zoom is software that was made for, like, workplace calls, and is designed for that setting. Which means ‑‑ (Laughs) In some ways, it is inherently a workplace surveillance tool. It's not secure in the sense that ‑‑ well, first of all, this call is being recorded, and Zoom calls can be broadcast to livestream, like this one is. But also, the actual calls aren't end‑to‑end encrypted. They kind of can just be logged onto if they're public; there can be some URL hacking. There are different settings you can use in terms of letting people in or out of meetings. But at the end of the day, Zoom has access to the calls! (Laughs) And how much you trust Zoom with that material is, you know, at your discretion.

I… (Sighs) I mean, in general, Zoom calls are not where I would discuss sensitive topics, or anything I wouldn't want to have on the record. That's generally just the protocol I take with it. That being said, it is in such common use at this point ‑‑ in terms of, like, spaces for events, like this one! ‑‑ that I won't outright boycott it, simply because it's become such a ubiquitous tool. But I think compartmentalizing what I do and don't use it for has been helpful.

BLUNT: And so if you’re interested in staying up to date with EARN IT, I would suggest following Hacking//Hustling and Kate Villadano. I can drop their handle. And also, on July 21st, we’ll be hosting with our legal team… a legal seminar, sort of similar to this with space to answer questions, and we’re providing more information as to where EARN IT is and how you can sort of plug in on that.

Is there government regulation surrounding data tracking?

INGRID: "Is there government regulation of data tracking, or not so much?" Not so much! Yes! In the United States, there's very little regulation.

So, one of the reasons that if you use a VPN and set it to a place like Switzerland, you get a lot more information about what tracking is happening, and you can make requests for your data from platforms, is a European Union regulation called the GDPR ‑‑ the General Data Protection Regulation. The United States does not have an equivalent to that. In some ways, because the European Union is such a large market, I have seen some companies just unilaterally become GDPR‑compliant for all users, simply because it's easier than having a GDPR version and an "other places" version. But when it comes to Facebook or Instagram, or, like, large platforms ‑‑ they don't really have an incentive to create conditions where they collect less data. So there, it's kind of like, well, sorry. It's gonna be that way. (Laughs) Yeah.

And I think it is a thing that, like, lawmakers have interest in? But part of the challenge is… these companies are very well‑funded and will lobby against further regulation? And also, a lot of people in Congress are… old! And bad at computer? And sometimes have trouble, I think, wrapping their heads around some of the concepts underlying this. And, you know, I think the general atmosphere and attitude of, like, free markets solve problems! kind of further undermines pursuit of regulation.

What exactly contributes to shadowbanning?

“In terms of social media accounts following your activity, based on your research so far for shadowbanning et cetera, who do you follow and… to certain lists?” I think, Blunt, this question might be for you, because of the research.

BLUNT: Yeah. I think it's less about who you follow, and more about who you interact with. So, like, we're still analyzing this research, but there seems to be a trend that if you're shadowbanned, the people that you know are more likely to be shadowbanned, and there might be some relationship between the way that you interact and the platform? But we're still figuring that out? One thing you can try ‑‑ we talked about this in another session ‑‑ is having a backup account for promo tweets, so your primary account with the most followers doesn't exhibit quote‑unquote "bot‑like activity" of automated tweets, and just has, like, casual conversation about cheese, or nature…

(Laughs) We’re not totally sure how it works.

Oh, and also! I believe her name is Amber ‑‑ I'm going to drop the link to it. But someone is doing a training on shadowbanning, and it seems like she's been collecting data on multiple accounts. And it seems like there's some interesting things to say. So I'm going to go grab a link to that. If you're interested in learning more on shadowbanning, that's on the 25th, at like 5:00 p.m., I think. So I'll drop a link.

And just for the livestream: So, this is with Amberly Rothfield, and it’s called Shadowbanning: Algorithms Explained, on July 25th at 6:00 p.m. Still a few spots left. And it looks like she was pulling data on different accounts and playing with the accounts and analyzing the posts’ interaction and traction. So that should be really interesting, too.

Cool. Thank you so much for all of the amazing questions. I think we’ll give it a few more minutes for any other questions that folks have, or any other resources we can maybe connect people with, and then we’ll log off!

Can you request your information from datamining companies?

BLUNT: Oh. Someone’s asking, can I request my information from datamining companies?

OLIVIA: Yes, you can! Yes, you can. And a lot of them… let me see if I can find a link? 'Cause a lot of them have, like, forms, either on their website or available elsewhere, where you can make requests like that. I know you can request that they stop collecting your data and that they get rid of your file. And I'm pretty sure you can also request to see it.

BLUNT: I also just dropped a link to this. This is an old tool that I’m not sure if it still works, but it’s Helen Nissenbaum’s AdNauseam, which clicks every single advertisement, as well as tracking and organizing all of the ads targeted at you. It’s really overwhelming to see. I remember looking at it once, and I could track when I was taking what class in school, when I was going through a breakup, just based on my searches and my targeted ads.

Cool. So, is there anything else you want to say before we log off?

INGRID: I mean, thank you all so much for participating. Thank you, again, Cory, for doing the transcription. And… Yeah! Yeah, this has been really great.

Digital Literacy Training (Part 2) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

Platforms and You Digital Literacy Training

Digital Literacy Training (Part 2) Transcript

OLIVIA: Hi, everyone. My name's Olivia. My pronouns are she/her. I'm co‑facilitating with Ingrid. Some of the values that this particular digital literacy/defense workshop is centered in: thinking of cyber defense less as a form of military technology, right? Reframing cryptography as more of an abolitionist technology. Right? And cyber defense as an expression of mutual care, and a way of accumulating community‑based power. And in that way, also thinking of ways to teach this type of material that are antiracist, but also anti‑binary and pro‑femme.

And so we really care a lot about making sure that this is trauma‑informed, and about teaching from a place of gentleness ‑‑ considering the previous digital harm people have experienced, and trying not to make anyone relive it. So if you need to take a break, remember that this is being recorded and posted online, so you will be able to access it later.

INGRID: Great. Thank you, Olivia. My name's Ingrid. I use she/her pronouns. And welcome back to people who were here yesterday. Today, we are talking about platforms! In this context, we primarily mean social media sites like Facebook and Instagram. Some of this can be applied to contexts where people buy and sell stuff. But essentially, we're talking about places where people make user accounts to communicate with each other ‑‑ with more of a focus on the large corporate ones that many people are on!

There were four key concepts we wanted to cover. There's a lot in them, so we'll try to move through them smoothly. The first is algorithmic curation and the way it can produce misinformation and content suppression. Then some of the laws and legal context that are defining decisions that platforms make. We talked a little bit about this yesterday, but, you know, reiterating again: platforms are companies, and a lot of decisions they make come out of being concerned with keeping a company alive, more than taking care of people.

What is algorithmic curation and why does it matter?

So we're going to start with algorithmic curation. And a thing that came up yesterday is the tendency for technical language to alienate audiences that don't know as much about computers or math, I guess. An algorithm is a long word that ‑‑ (Thud) Sorry. That's the sound of my dog knocking on my door, in the background.

Broadly speaking, an algorithm is a set of rules or instructions ‑‑ (clamoring) Excuse me. One second. She just really wants attention. I’m sorry you can’t see her; she’s very cute!

But… An algorithm is a set of rules or instructions for how to do a thing. You could think of a recipe or a choreography. The difference between an algorithm used in the context of a platform and, you know, a recipe with its ingredients and steps is that there is a lot less flexibility in interpretation in an algorithm. And it's usually applied on a much larger scale.

And the way a lot of platforms deploy algorithmic curation ‑‑ what algorithmic curation is experienced as ‑‑ is often recommendation algorithms? And algorithms that determine what content is going to show up in a social media timeline.

So I am ‑‑ you know, I have recently been watching Avatar: The Last Airbender on Netflix. I am 33 years old. And… (Laughs) I found that, you know, Netflix wants to make sure that I know they have lots of other things that I might like because I liked that show. Right? And you could kind of think of algorithms as kind of being if this, then that rules. Like, if somebody watches this show, look at all the other people who watched that show and the other shows that they watched, and suggest that, you know, you probably will like those.
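
Here's a toy sketch of that "people who watched X also watched Y" rule, in Python ‑‑ the viewing data is made up for illustration, and real recommenders are vastly more complicated than this:

```python
# A toy version of "people who watched this also watched..." --
# the viewing data here is invented for illustration.
from collections import Counter

watch_history = {
    "ana":   {"Avatar", "The Dragon Prince", "She-Ra"},
    "ben":   {"Avatar", "The Dragon Prince", "Kipo"},
    "carla": {"Avatar", "Kipo"},
}

def recommend(show, me):
    # Count shows watched by other people who also watched `show`,
    # skipping anything `me` has already seen.
    counts = Counter()
    for user, shows in watch_history.items():
        if user != me and show in shows:
            counts.update(shows - {show} - watch_history[me])
    return [title for title, _ in counts.most_common()]

print(recommend("Avatar", me="ana"))  # -> ['Kipo']
```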

And platforms give the rationale for deploying these kinds of algorithms as partly just trying to help people? Right? Like, helping you discover things, because there's so much content and you'll get overwhelmed, so they prioritize. What it actually means in practice is trying to keep you using a service. Right? Like, I'm probably going to cancel my Netflix account once I finish Avatar ‑‑ but oh, like, no, now I gotta watch The Dragon Prince. Right?

I think… Do I do this part, or Olivia?

OLIVIA: I can do it?

INGRID: Sorry! I couldn’t remember how we split up this section.

OLIVIA: I… So… In early social media, we didn't really have super complicated algorithms like the ones we do now. You had the, like, find‑your‑friends algorithms that would basically show you, perhaps, the friends of your friends. But the people you followed were mostly the only people whose posts you would see.

But now that platforms are able to collect more user data about how you're using them, as well as your activities off the platform, algorithms are able to become more complicated, because there's so much more information that they're able to use.

So some of the things that might be going into your algorithmic curation are listed here. It’s a really long list, and even this long list isn’t exhaustive? ‘Cause so few platforms actually disclose the things that contribute to the stuff that you see, and what you don’t see, and who’s seeing your own content, and who doesn’t see your own content. But one thing that we know for sure is that the way that these platforms are designed is specifically in order to make money. And so following that motive, you’re able to kind of map a lot of their predicted behavior.

And one of the really big consequences of these like algorithmic filter bubbles is misinformation. Right? So because we’ve all been inside for the past couple of weeks and months, we’re all really susceptible to seeing really targeted misinformation, because we’ve been online a lot. And so it’s quite possible that more data is being collected about you now than ever before. Platforms make money off of our content, but especially content that encourages like antisocial behaviors. And when I say antisocial behaviors, I mean antisocial for us. Pro‑social behaviors encourage a healthy boundary with social media, like light to moderate use. Comforting people! Letting people know that they rock! Right? Cheering people up. Versus antisocial behaviors ‑‑ while they’re much less healthy, they encourage people to use social media like three times as much. Right? People spreading rumors; people posting personal information; people being ignored or excluded; editing videos or photos; saying mean things. Right? And so that makes an environment where misinformation does super well, algorithmically.

Through their design, especially platforms like Instagram and Twitter, they prioritize posts that receive lots of attention. We see this in how people ask others to “like” posts that belong to particular people so that they’ll be boosted in the algorithm. Right? They prioritize posts that get a lot of clicks and that get a lot of, like, feedback from the community. And it’s really easy to create misinformation campaigns that will take advantage of that.
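As a hedged illustration of that design choice, here’s a tiny ranking sketch in Python. The weights are invented for the example, not any platform’s real numbers; the point is just that once a feed is sorted by engagement, content that provokes reactions floats to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(p):
    # Hypothetical weights: shares and comments weigh more than likes,
    # because they pull more people back onto the platform.
    return p.likes + 3 * p.shares + 2 * p.comments

feed = [
    Post("wholesome pet photo", likes=120, shares=2, comments=10),
    Post("outrage-bait rumor", likes=80, shares=60, comments=90),
]
feed.sort(key=engagement_score, reverse=True)
print([p.text for p in feed])  # the rumor outranks the pet photo
```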

OLIVIA: Nice. That was a really quick video from the Mozilla Foundation. But I wanted to clarify that there’s this assumption that people who fall for misinformation are like kinda dumb, or they’re not like thinking critically. And this is like kind of a really ableist assumption, right? In truth, anyone could unknowingly share misinformation. That’s like how these campaigns are designed, right? And there’s so many different forms that misinformation takes.

It could be like regular ole lies dressed up as memes; fabricated videos and photos that look super real, even though they’re not; performance art and like social experiments? (Laughing) Links to sources that don’t actually point anywhere? And it could be information that was originally true! But then you told it to your friend, who got the story kind of confused, and now it’s not true in a way that’s really, really important. And of course, there’s also conspiracy theories, and misleading political advertisements, as well.

But sometimes, misinformation is less about being told a lie, and more about not being told the truth, if that makes sense.

So, the easiest way to avoid misinformation is to just get in the habit of verifying what you read before you tell someone else. Even if you heard it first from someone that you trust! Right? Maybe one of your friends shared misinformation. But my friend is a really nice, upstanding citizen! There’s no way that… I don’t know; being a citizen doesn’t matter. My friend is a nice person! But people who share misinformation aren’t always doing it to stir the pot. They just got confused, or they just… ended up in a trap, really.

So, fact‑check the information that confuses you, or surprises you. But also fact‑check information that falls in line with your beliefs. Fact‑check all of it. Because you’re more likely to see misinformation that falls in line with your beliefs because of the algorithmic curation that we talked about before. Right? We have an internet that’s like 70% lies.

So, two sites that were pretty popular when I asked around how people fact‑check were PolitiFact and Snopes.com. You could also use a regular search engine. There’s Google, but also try DuckDuckGo at the same time. You could ask a librarian. But also, if you look at a post on Instagram or Twitter and scroll through the thread, there might be people saying, like, hey, this isn’t true; why’d you post it? So always be a little bit more thorough when you are interacting with information online.

How does algorithmic curation contribute to content suppression and shadowbanning?

INGRID: So the next sort of thing we wanted to talk about that’s a, you know, consequence of algorithmic curation and companies, like, platforms being companies, is suppression of content on platforms. Right? Platforms have their own terms of service and rules about what people can and can’t say on them. And those terms of service and rules are usually written in very long documents, in very dense legal language that can make it hard to understand when you break those rules, and are kind of designed to, you know, be scrolled through and ignored.

But because a lot of the decisions about what is, like, you know, acceptable content or unacceptable content are, again, being made by an algorithm looking for keywords, for example… the platforms can kind of downgrade content based on assumptions about what’s there.

So… shadowbanning is a concept that I imagine many of you have heard about or, you know, encountered, possibly even experienced. It actually originally is a term that came from like online message groups and forums. So not an automated algorithm at all. Basically, it was a tool used by moderators for, you know, forum members who liked to start fights, or kind of were shit‑stirrers, and would basically be sort of a muting of that individual on the platform. So they could, you know, still post, but people weren’t seeing their posts, and they weren’t getting interaction, so they weren’t getting whatever rise they wanted to get out of people.
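A minimal sketch of what that moderator tool amounts to, in Python. The usernames and posts are invented, and real platforms do this at vastly larger scale, but the core trick is the same: everyone’s feed quietly filters the muted author out, except the author’s own.

```python
shadowbanned = {"shit_stirrer"}  # set by a moderator (or, today, a classifier)

posts = [
    {"author": "alice", "text": "community potluck on friday!"},
    {"author": "shit_stirrer", "text": "fight me"},
]

def feed_for(viewer):
    # Everyone else's feed silently drops the shadowbanned author's posts;
    # the author still sees their own, so nothing looks wrong to them.
    return [p for p in posts
            if p["author"] not in shadowbanned or p["author"] == viewer]

print(feed_for("alice"))         # alice never sees the bait
print(feed_for("shit_stirrer"))  # the shit-stirrer still sees their own post
```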

Today, the more common kind of application of the term has been describing platform‑wide, essentially, muting of users from, like, the main timeline, or making it hard to search for that individual’s content, based on what is thought to be automated interpretation of content. I say “what’s thought to be automated interpretation of content,” because there is a lot that is only kind of known about what’s happening on the other side of the platform. Again, yeah, what it often looks like is: not showing up in search unless someone types the entirety of a handle; that person’s content not showing up in the main timeline, like in their followers’ feeds, even if they follow that person; not showing up in a hashtag…

And, shadowbanning is like a really gaslighting experience? Because it’s hard to know: is it that people just don’t like what I’m saying, or people just don’t care anymore, or am I being actively suppressed and people just can’t see me? And if it’s something that has happened to you, or is happening to you, one thing that is important to remember is, like, you will feel very isolated, but you are in fact not alone. This is a thing that happens. It’s often ‑‑ it’s been, over time, kind of dismissed by platforms as myth, and I wonder if, in some ways, perhaps their aversion to the term comes from associating it with this less automated context? Because it’s like, well, we’re not deliberately trying to mute anybody; it’s just our systems kind of doing something! But the systems are working ‑‑ you know, they designed them, and they’re working as designed. Right?

Instagram recently, in making an announcement about work that they want to do to address sort of implicit bias in their platform, sort of implicitly acknowledged that shadowbanning exists. They didn’t actually use the term? But it is interesting to see platforms acknowledging that there are ways that their tools will affect people.

In terms of the “what you can dos” ‑‑ and Blunt, if you have anything that you want to add to that, I’d totally be happy to hear, because I’m far from an expert. A lot of the sort of best practices tend to be based on what other people have shared as, like, working for them. So basically, I don’t want to tell you anything and say, like, this is a guarantee this will work for you in any given context. One thing that I have seen a lot is, basically, posting really normie content? Like, just going very off‑script from whatever your normal feed is, and doing something like, I don’t know, talking about your pet, or talking about, like, cooking. Basically just, like, changing what you’re doing. Another approach is getting your friends and followers to engage with your content, so that it’s seen as popular, so that it will, like, return to the timeline.

Blunt, is there anything else that you would want to include in there?

BLUNT: Yeah, I think something that communities found to be useful is that if you are going to be automating posts, to do it on a backup account, so that what’s flagged as bot‑like behavior is isolated ‑‑ so your promo account might be shadowbanned, but you might have a wider reach to direct people to where to give you money. But it’s a really complex topic. I’ve been thinking about it a lot right now ‑‑ Hacking//Hustling is currently studying shadowbanning. So far, we’ve found our data backs up a lot about what sex workers know to be true about how shadowbanning sort of works, what seems to trigger it and what seems to undo it. But as I was making a thread about the research, which both included the words “sex worker” and “shadowban,” I was like, I don’t even know if I can say either of these words without being shadowbanned! So I write it with lots of spaces in it, so hopefully the algorithm won’t recognize it, which also makes it inaccessible to anybody using a screen reader.

So, I don’t know. I know there was a class on how to reverse a shadowban, but I also think that after the global protests started, the algorithm changed a little bit, because we were noticing a higher rate of activist and sex worker content being suppressed in the algorithm.

INGRID: Yeah. That’s ‑‑ do you know when you’re going to be putting out some of the research from ‑‑ that Hacking//Hustling’s been doing?

BLUNT: Yeah, we just tweeted out a few of our statistics in light of the recent Twitter shenanigans, and… (Laughs) Some internal screenshots being shared, where they say that they blacklist users? Which is not a term I knew that they used, to describe this process. We’re in the initial analysis of the data stages right now, and we’ll probably ‑‑ our goal is to share this information primarily with community, so we’ll be sharing findings as we are able to, and then the full report will probably come out in like two to three months.

Can algorithms judge video content?

INGRID: “Have you found that the algorithm can judge video content? I know nudity in photos are flagged.” I would defer to Blunt on this question, actually.

BLUNT: I would say, yeah. I’ve had videos take ‑‑ I have lost access to YouTube from videos. So I think anything that you post with a… either a link… for sex work, or just links in general and photos are more likely to be flagged. So, like, personally, I notice my posts that are just text‑based show up higher and more frequently in the algorithm and on the feed.

Which laws and politics surround content suppression?

INGRID: Mm‑hmm… yeah. So the other kind of form of suppression we wanted to mention and talk about is not as algorithmic. It’s when, you know, the state gets involved.

So platforms are companies; companies are expected to follow rules; rules are made by governments. Sometimes, it’ll kind of look like shadowbanning. So TikTok has been reported to basically down‑rank certain kinds of content on the site, or, like, not have it show up in a “For You” page, or on your follow page, depending on a country’s laws around homosexuality. Sometimes it’s, you know, a result of companies creating rules that are sort of presented as being about national security, but are actually about suppressing dissent. So in Vietnam and the Philippines, there have been rules made that treat the contents of social media posts as, you know, potential threats against the state, basically. And sometimes the rules about protecting the vulnerable are actually about, you know, some moral majority bullshit. Which seems like a good time to start talking about sort of legal contexts!

And a lot of this ‑‑ all of this particular section is really USA context. And I wanted to kind of give some explanation for that, because I feel weird doing this broad sweep over other countries’ approaches and focusing so much on the United States. But the reason for doing that is, basically, America ‑‑ as, you know, an imperialist nation! ‑‑ tends to have an outsized impact on what happens on global platforms, overall. And there’s, you know, two reasons for that; one is that most of these companies are located in the United States, like their headquarters are here, so they are beholden to the laws of the place; but secondly, it’s also about sort of markets. Right? Like, if Facebook were like, we don’t need the American consumer base! It’s probably going to affect their ability to make money.

And there are exceptions in terms of, like, the ways that other countries’ laws impact platforms’ structure and decisions. We talked a little bit yesterday about European privacy laws, but we’ll try and bring a little more in tomorrow about those.

First kind of category is like ‑‑ this is a little bit of a tangent, but it came up yesterday, so I wanted to kind of mention it. This is an image from the account shutdown guide that Hacking//Hustling made, that I did some work on. And basically, platforms that, you know, can facilitate financial transactions ‑‑ which can be something, you know, like Stripe, PayPal, or Venmo ‑‑ have to work with banks and credit card companies. And banks and credit card companies can consider sex work‑related purchases to be, like, “high risk,” despite there being very little evidence that this is true? The reason sometimes given is the possibility of a charge‑back? Meaning, you know, hypothetically, heteronormative sitcom scenario: I don’t want my wife to see this charge on my bill! So he reports it, and it gets taken off. How much this is actually the case? Unclear. It’s also, like, they’re just kind of jerks.

But, you know, platforms don’t actually have a lot of ability to, like, argue with these companies? Because they control the movement of money. Around, like, everywhere? So, in some ways, they kind of just have to fall in line. I mean, that being said, companies themselves are also, like, kinda dumb. I wasn’t sure whether this needed to be included, but this Stripe blog post explaining why certain businesses aren’t allowed? They have a section on businesses that pose a brand risk! And they have this whole thing about, like, oh, it’s our financial partners who don’t want to be associated with them! It’s not us! But, you know, like, fuck out of here, Stripe.

What is section 230?

Back to other laws! (Laughing) So. Section 230 is a term that maybe you’ve heard, maybe you haven’t, that describes a small piece of a big law that has a very large impact on how platforms operate and, in fact, on the fact that platforms exist at all. So in the 1990s, lawmakers were very stressed out about porn on the internet. Because it was 1996, and nobody, you know, knew what to do. And a bill called the Communications Decency Act was passed in 1996. Most of it was invalidated by the Supreme Court? Section 230 was not. It’s part 230 of it. It’s a very long bill. It’s really important for how platforms operate, because it says that platforms, or people who run hosting services, are not responsible when somebody posts something illegal or, you know, in this case, smut. I, I can’t believe that there was a newspaper headline that just said “internet smut.” It’s so silly… But the platform, the hosting service, they’re not responsible for that content; the original poster is responsible. Like, if you wanted to sue someone for libel, you would not sue the person who hosted a libelous website; you would sue the creator of the libelous website.

And this was initially added to the Communications Decency Act because there was concern ‑‑ really because of capitalism! There was concern that if people were afraid of getting sued because somebody used their services to do something illegal, or used their services to post something that they could get sued for, that people would just not go into the business! They would not make hosting services. They would not build forums or platforms. And so removing that kind of legal liability… opened up more space for platforms to emerge. In some ways, it’s a fucked up compromise, in so far as it means that when Facebook does nothing about fascists organizing on their platform and fascists actually go do things in the world, Facebook can’t be held responsible for it. Right? I mean, the Charlottesville rally in 2017 started on Facebook. Facebook obviously got some bad PR for it, but, you know. Then again, the exceptions that make platforms responsible for this or that… tend not to be written to meaningfully support people with less power, but usually reflect what powerful people think are priorities. Such as the first effort, in 2018, to change or create exceptions to Section 230. Which was FOSTA‑SESTA!

What is FOSTA-SESTA?

It was sold originally as fighting trafficking? The full ‑‑ FOSTA and SESTA are both acronyms. FOSTA is the Allow States and Victims to Fight Online Sex Trafficking Act. SESTA is the Stop Enabling Sex Traffickers Act. But the actual text of the law uses the term, “promotion or facilitation of prostitution and reckless disregard of sex trafficking.” So basically, it’s kind of lumping sex work into all sex trafficking. Which… Yeah. That’s ‑‑ not, not so wise.

And what it essentially creates is a situation where companies that allow prostitution, or facilitation of prostitution, and reckless disregard of sex trafficking to happen on their platform? Can be held legally responsible for that happening. The day that FOSTA‑SESTA was signed into law, Craigslist took down the Personals section of its website. It has generally heightened scrutiny of sex worker content across platforms, and made it a lot harder for that work to happen online.

What is the EARN IT Act?

And in some ways, one of the scary things about FOSTA‑SESTA is the way in which it potentially emboldens further attempts to create more overreaching laws. The EARN IT Act is not a law, yet. It is one that is currently being… discussed, in Congress. The way that it’s been framed is as a response to an investigative series in the New York Times about the proliferation of sexual images of children on platforms. And this is a true thing. Basically, any service that allows uploading of images has this problem. Airbnb direct messages are used for this. It’s a real thing. But the actual law is a very cynical appropriation of this problem, with a solution that really serves more to kind of control and contain how the internet, like, works.

It proposes creating a 19‑member committee of experts, headed by the Attorney General, who would be issuing best practices for companies and websites, and allowing those that don’t follow the best practices to be sued. And what “best practices” actually means is currently very vague in the actual text of the bill. The word “encryption” does not actually appear in the text of the bill, but its authors have a long history of being anti‑encryption. The current Attorney General, Bill Barr, has expressed wanting back doors for government agencies so that they can look at encrypted content. And it’s thought the “best practices” could include things like making it easier for the government to spy on content.

This is ‑‑ you know. I know somebody who worked on this series, and it is so frustrating to me to see that effort turn into, how about we just kill most of what keeps people safe on the internet?

So I mention it because this is something that is good to pay attention to. Write your Congress member about it. Hacking//Hustling has done ‑‑

What is encryption?

Oh, Blunt would like me to define encryption. So it’s a mechanism for keeping information accessible only to people who know how to decode it. It is a way of keeping information safe, in a way! Encryption was not actually an inherent part of the early internet, because the internet was originally created by researchers working for the government who thought it would just be government documents moving around on it, so they were all public anyway. But it has since been kind of normalized into a part of, like, just using the internet as we know it today. In this context ‑‑ yeah, basically, it means that if I want to send you a message, the only people who can read that message are, like, you and me, and not the service that is moving the message around, or not, like, the chat app that we’re using.

That was ‑‑ I feel like that was a little bit garbled, but… I don’t know if you like ‑‑ if, Olivia, is there anything that you would want to add to that? Or a better version of that? (Laughs)

OLIVIA: I think, I think you’ve mostly said it. It’s, like, a way of encoding information so that someone might know the information is present, but they don’t know what it says. So, when we have things like end‑to‑end encryption on the internet, it means that something is encrypted on my side, and no matter what third party tries to look at the message that I sent to you while it’s in transit, it can’t be seen then ‑‑ and it also can’t be seen by them on the other side, because the person who I sent the message to has their own, like, code that allows them to decode the message, that’s specific to them. And this happens on a lot of platforms without our knowledge, in the sense that apps that are end‑to‑end encrypted, like Signal, don’t really tell you what your key is. Even though you have one, and the person that you’re talking to has one, it’s not like you’re encoding and decoding yourself, because the math is done by other things.
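For the curious, here’s a minimal sketch of that keypair idea using the PyNaCl library in Python. It’s the same general shape of thing apps like Signal build on, though their actual protocol is far more elaborate; the names here are made up for the example.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each person generates a keypair; only the public halves are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her secret key and Bob's public key...
sending = Box(alice_key, bob_key.public_key)
ciphertext = sending.encrypt(b"meet at noon")

# ...and only Bob, holding his secret key, can decrypt it.
receiving = Box(bob_key, alice_key.public_key)
print(receiving.decrypt(ciphertext))  # b'meet at noon'
# Anyone in between sees only the ciphertext bytes.
```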

But if the bill ends up targeting encryption, then it might make it potentially illegal for these services to exist, which would be a really bad thing for journalists and activists and sex workers and, like, everybody.

INGRID: Yeah. And additionally ‑‑ within the world of people who work on encryption and security tools, the idea about creating a back door, or some way to sneakily decrypt a thing without somebody knowing, is that it creates a vulnerability that essentially anyone else could exploit. Like, if it exists there, somebody will hack it and figure it out.

OLIVIA: There’s no such thing as a door that only one person can use.
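One way to picture that, as a hypothetical sketch in Python with PyNaCl: a “key escrow” back door, where every message is also encrypted to a government‑held key. The whole design here is invented for illustration; the point is that the escrow secret is a master key, and whoever obtains it ‑‑ by warrant, leak, or theft ‑‑ can read everything.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Hypothetical backdoor design: every message is encrypted twice --
# once to the recipient, and once to an escrowed "back door" key.
recipient = PrivateKey.generate()
escrow = PrivateKey.generate()  # the master key someone else holds

message = b"private message"
for_recipient = SealedBox(recipient.public_key).encrypt(message)
for_escrow = SealedBox(escrow.public_key).encrypt(message)  # the extra copy

# Whoever holds the escrow secret -- an agency, or anyone who steals it --
# can decrypt every message ever sent through this system.
print(SealedBox(escrow).decrypt(for_escrow))  # b'private message'
```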

What’s the connection between the EARN IT Act and The New York Times?

INGRID: A question ‑‑ EARN IT is not solely a response to an article by the New York Times? It was a series of seven articles. And when I say “in response,” that is the argument ‑‑ that is the statement made by the people who wrote the bill. I think that it was more that EARN IT was proposed by some Congress people who saw an opportunity to cheaply exploit outrage over, like, abuse of children, to put forward some policies that they would want to have happen anyway. And I guess the reason I mention it is because I think it’s also important to acknowledge where these things come from ‑‑ yeah, it was an entire series from the New York Times. And honestly, the main takeaway from that series, to me, was more that, like, companies are dropping the ball? Not that we need the government to come in. Or, like, if the government is supposed to make rules about how companies address this issue, I don’t think the solution is to create a committee that tells companies what to do in a way that doesn’t actually seem to have anything to do with the actual problem they’re talking about.

BLUNT: Totally. And we actually ‑‑ I just want to also say that on the 21st, Hacking//Hustling will be hosting a legal literacy panel, where we will be talking about the ways that fear and threats to national security are used to pass… laws that police us further, that want to end encryption, that want to do away with our privacy. So if you check out HackingHustling.org slash events, I think, you should be able to find out more about that. Again, that’s at 7:00 p.m. on the 21st. You’ll be able to learn a lot more. We’ll do an update on EARN IT, where to look for updates, and similar legislation that’s being passed.

INGRID: I did see ‑‑ there was like a ‑‑ I saw an article that said a bill was being worked on, basically in response to EARN IT, trying to say, like, yes, this problem you’re claiming you’re going to address is bad, but this is not the way to do it, and trying to come up with an alternative. I think Ron Wyden was involved. Do you know anything about this?

BLUNT: Yeah, I think that’s ‑‑ yes. I mean, yes, we will talk about that on the 21st. I’m not ‑‑ we will have our legal team talk about that, so I don’t say the wrong thing.

INGRID: Okay, great. Moving forward!

What are some secure and private platform alternatives?

Olivia, do you want to do the platform alternatives? I feel like I’ve just been talking a lot!

OLIVIA: Sure! So, it kind of sucks that we’re all kind of stuck here using… really centralized social media platforms that we don’t control, and that kind of, in like nefarious and really complicated ways, sometimes control us. And so you might be thinking to yourself, gee, I wish there was something I could use that wasn’t quite Instagram and wasn’t quite Twitter that could let me control information.

So, we have some alternatives. One of these alternatives is called Mastodon. And… Essentially, it’s an independent ‑‑ is that the word? I think the word is ‑‑

BLUNT: An instance?

OLIVIA: It’s an instance! There you go. It’s an instance of… Oh, no, I don’t think that’s the word, either.

Basically, Mastodon is a very ‑‑ is a Twitter‑like platform that’s not Twitter, and instead of going on like a centralized place, you can set up your own Mastodon instance for your community. So instead of having ‑‑ like, you might have Mastodon instances that are called other names? Kind of like ‑‑ would a good analogy be like a subreddit?

INGRID: Maybe. I think, like, the existence of ‑‑ so, Mastodon is also from a project to create… like, open standards for social networking tools. I think we talked a little bit about sort of standardizing of browsers and web content. And in the last decade, one that’s been in development is a standard for what, like, a social network should do and could be. The protocol is actually called ActivityPub, and Mastodon is built on top of it. It’s kind of like… the term used for how they’re actually set up is “federated.”

OLIVIA: Federated!

INGRID: Yeah. You set up one that’s hosted on your own. And it can connect to other Mastodon sites that other people run and host. But you have to decide whether or not you connect to those sites. And I think the example ‑‑ sorry, I can jump off from here, ‘cause I think the next part was just acknowledging the, like, limitations. (Laughs) So, this is a screenshot of Switter, which had been set up as a sex work‑friendly alternative to Twitter, after FOSTA‑SESTA. And… It has run into a lot of issues with staying online because of FOSTA‑SESTA. I think Cloudflare was originally their hosting service, and they got taken down, because the company that was hosting it didn’t want to potentially get hit with, you know, liabilities, because FOSTA‑SESTA said you were facilitating sex trafficking or some shit.
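To make “federated” slightly more concrete: each instance is just a website that publishes machine‑readable profiles that other instances can fetch and deliver posts to. Below is a pared‑down sketch of an ActivityPub‑style actor document; the domain and username are made up, and a real actor document carries more fields (like public keys).

```python
import json

# A simplified ActivityPub actor document: the kind of JSON a small
# Mastodon-compatible server publishes so other instances can find a user.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://example.social/users/alice",
    "preferredUsername": "alice",
    "inbox": "https://example.social/users/alice/inbox",    # where other servers deliver posts
    "outbox": "https://example.social/users/alice/outbox",  # where this user's posts are listed
}
print(json.dumps(actor, indent=2))
```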

So it’s, it’s not a, necessarily, like, obvious ‑‑ like, it’s not easy, necessarily, to set up a separate space. And whether setting up a separate space is what you want is also, like, a question.

OLIVIA: Another option is also… Say you have a community that’s on Instagram, or on Twitter, and you guys are facing a lot of algorithmic suppression, and you’re not able to reliably communicate with the people who like your page. You could also split it both ways. You could try having an additional way of communicating with people. So you might have like a Twitter page where you have announcements, but then have a Discord server or something where you communicate with community members, or similar things.

And those types of interventions would essentially allow you to avoid certain types of algorithmic suppression.

INGRID: Yeah. And in a way, the construction of an alternative, it’s, I think… the vision probably is not to create, like, a new Facebook, or a new, you know, Twitter, or a new Instagram, because you will just have the same problems. (Laughs) Of those services. But rather to think about making sort of intentional spaces, like, either ‑‑ like, within, you know, your own space. This is a screenshot of RunYourOwn.social, which is a guide created by Darius Kazemi about ‑‑ you know, what it is to create intentional online spaces. I just find it really, really useful in thinking about all this stuff.

All right. Those were all our slides…

BLUNT: I actually just wanted to add one little thing about that, just to follow up on those previous two slides. I think it’s important to note, too, that while there are these alternatives on Mastodon and in these various alternatives, that’s often not where our clients are? So I think that it can be helpful for certain things, but the idea that entire communities and their clients will shift over to a separate platform… it isn’t going to, like, capture the entire audience that you would have had if you had the same access to these social media tools that your peers did. So one thing that I’ve been recommending for folks is mailing lists ‑‑ I think they can be really helpful in this, too ‑‑ to make sure that you have multiple ways of staying in touch with the people that are important to you, or the people that are paying you. Because we don’t know what the stability is of a lot of these other platforms, as well.

INGRID: Yeah.

OLIVIA: E‑mail is forever.

BLUNT: Yeah.

INGRID: Yeah, that’s a really, really good way to ‑‑ you know, point. And thank you for adding that.

Okay! So I guess… Should we ‑‑ I guess we’re open, now, for more questions. If there’s anything we didn’t cover, or anything that you want kind of more clarification on… Yeah.

I see a hand raised in the participant section, but I don’t know if that means a question, or something else, or if… I also don’t know how to address a raised hand. (Laughs)

BLUNT: Yeah, if you raise your hand, I can allow you to speak if you want to, but you will be recorded, and this video will be archived. So, unless you’re super down for that, just please ask the questions in the Q&A.

What is Discord and how secure is it?

Someone asks: Can you say more about Discord? Is it an instance like Switter or Mastodon? What is security like there?

OLIVIA: So Discord is a ‑‑ is not an instance like Switter and Mastodon. It’s its own separate app, and it originated as a way for gamers to talk to each other? Like, while they’re playing like video games. And so there’s a lot of, a lot of the tools that are currently on it still make kind of more sense for gamers than they do for people who are talking normally.

A Discord server isn’t really an actual server; it’s more so a chat room that can be maintained and moderated.

And security… is not private. In the sense that all chats and logs can be seen by the folks at, like, at Discord HQ. And they say that they don’t look at them? That they would only look at them in the instance of, like, someone complaining about abuse. So, if you say like, hey, this person’s been harassing me, then someone would look at the chat logs from that time. But it’s definitely not… it’s not a secure platform. It’s not ‑‑ it’s not end‑to‑end encrypted, unless you use like add‑ons, which can be downloaded and integrated into a Discord experience. But it’s not out of the box. It’s mostly a space for, like, communities to gather.

Is that helpful…?

INGRID: “Is the information on the 21st up yet, or that is to come?” I think this is for the event ‑‑

BLUNT: Yeah, this is for July 21st. I’ll drop a link into the chat right now.

What are some tips for dealing with misinformation online?

INGRID: “How would you suggest dealing with misinformation that goes deep enough that research doesn’t clarify? Thinking about the ways the state uses misinformation about current events in other countries to justify political situations.” (Sighs) Yeah, this is ‑‑ this is a hard one. The question of just ‑‑ yeah. The depths to which misinformation goes. I think one of the… really hard things about distinguishing and responding to misinformation right in this current moment… is that it is very hard to understand who is an authoritative source to trust? Because we know that the state lies. And we know that the press repeats lies! Right? Like, I imagine some of you were alive in 2003. Maybe some of you were born in 2003. Oh, my goodness.

(Laughter)

I ‑‑ again, I feel old. But… Like, the ‑‑ and you know, it’s not even ‑‑ like, you can just look at history! Like, there are… there are lots of legitimate reasons to be suspicious! Of so‑called authoritative institutions.

And I think that some of the hard things with those ‑‑ with, like… getting full answers, is finding a space to kind of also just hold that maybe you don’t know? And that actually maybe you can’t know for sure? Which is to say ‑‑ okay, so one example of this. So, I live in New York. I don’t know how many of you are based near here, or heard about ‑‑ we had this fireworks situation this summer? (Laughing) And there was a lot of discussion about, like, is this an op? Is this some sort of, like, psychological warfare being enacted? Because, like, there were just so many fireworks. And, you know, it’s also true that fireworks were really cheap, because fireworks companies didn’t have their usual jobs to do. I, personally, was getting lots of promoted ads to buy fireworks. But at the end of the day, the only way that I could safely manage my own sense of sanity with this was to say: I don’t know which thing is true. And neither of these things addresses the actual thing that I’m faced with, which is loud noise that’s stressing out my dog.

And so I think the question with, like, misinformation, about sort of who to trust or what to trust, is also understanding: based on what I assume, what narrative is true or isn’t true ‑‑ what actually do I do? And how do I make decisions to act based on that? Or can I act on either of these?

I guess that’s kind of a rambly answer, but I think ‑‑ like, there isn’t always a good one.

BLUNT: I just dropped a link to ‑‑ it’s Yochai Benkler, Robert Faris, and Hal Roberts’ Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. I think it’s from 2018? I think it’s a really interesting read if you’re interested in learning more about that.

INGRID: There are two other questions, but I just want to quickly answer: What happened in 2003 is America invaded Iraq based on pretenses of weapons of mass destruction that didn’t exist. And companies ‑‑ like, news outlets reported that with no meaningful interrogation. (Laughs) Sorry.

What’s going on with TikTok and online privacy right now? Is it worse than the EARN IT Act?

OLIVIA: Re: TikTok… It’s a really confusing situation, because most places, especially a lot of cyber security experts on the internet, have been saying to delete TikTok? But a lot of the reasons being given kind of boil down to: it’s a Chinese app. Which is really xenophobic. But TikTok does track a lot of information about you. What it uses it for, mostly, is to send you really, really hyper‑specific TikToks. But that information is definitely being collected about you, and it exists in their hands. So I think it’s mostly a decision for individuals to make, about whether they’re going to decide to trust TikTok with their information in that way. Because they absolutely know where you live, and they definitely know whatever things about you that you feel like they’ve gathered in order to create the TikTok algorithm that shows up in your feed. Those things are true. So.

I think ‑‑ Ingrid, do you have anything to say on that?

BLUNT: You’re still muted, Ingrid, if you’re trying to talk.

INGRID: Oh, sorry. I… The question also asked, you know, if things like the data collection on platforms like TikTok are worse than things like EARN IT. And I think… It kind of depends on where you think, like, sources of harm are going to be? It’s ‑‑ you know, it’s just different! Like, there’s a bunch of information that a company now has that they could choose to sell, that they could choose to utilize in other ways, that they might give to a law enforcement agency that gets a subpoena. But EARN IT and FOSTA‑SESTA are examples of ‑‑ that’s, I guess, a different kind of harm? That harm has less to do with collection of information, and more to do with suppression of content and information and of certain kinds of speech.

“Is it fair to say that social media companies can use your username alone to connect you to other accounts? Should we slightly modify our usernames to avoid being associated and shut down all at once?” So I think ‑‑ I mean, just for the question of whether to modify your username or not, I think that’s also a risk assessment question, in so far as if you need people to be able to find you across multiple platforms, I would not want to tell you to not do that? Or to make it harder for you to, like, reach clients or an audience. Whether social media companies are looking for you across platforms, like, is not as clear to me. I think it depends on the, like, agreements that exist between the platforms. So like, I know that ‑‑ I mean, Facebook and Instagram are owned by the same company. Right? So the sharing of those two identities ‑‑ you know, that’s likely to happen. But…

OLIVIA: Some might not be looking for your other accounts? But if you’re ever, like, being investigated by like an actual individual person, or like say your local police department, or the state in general, they probably would be.

INGRID: Yeah. And in that case, I think that what may be more helpful is: if you have sort of a public persona that you want to have a similar identity across platforms… That’s a choice you can make. And then if there are, like, alt accounts ‑‑ you know, maybe where you have more personal, like, communications, or that are more connected to community and less business? ‑‑ making those slightly harder to associate, or making those slightly more compartmentalized? And we’ll talk a little bit more about sort of compartmentalizing identities tomorrow. But I think, yeah, that’s one way to kind of address that ability of being kind of identified.

BLUNT: I think, too, I wanted to add that it’s not just, like, using the same username, but where you post it, or, like, what e‑mail is associated with an ad. If you’ve linked your social media to a sex working ad ‑‑ one of the statistics that we found in the ongoing research project that Hacking//Hustling is doing right now on shadowbanning is that sex workers who linked their social media to an advertisement are significantly more likely to believe they’ve been shadowbanned, at 82%. Which suggests to me that linking might put you in… the bad girl bin, as I call it. (Laughs)

Do we have any other questions? We still have a good chunk of time. Or anything that folks want more clarity on?

What is DuckDuckGo and what is a VPN? Should we use them?

Okay, so we have one that says, “I heard DuckDuckGo mentioned. Do you personally use that search engine? Also, I recently started using ExpressVPN, as I just started sex work, and bad on my part, I did little research on which VPNs. Have you heard of ExpressVPN? Do you have another app that you personally use or have more knowledge about? I want to stay safe and of course share with others what would be the best app to use for VPN.”

INGRID: Olivia, do you want to take some of this one…?

OLIVIA: I was muted. So, I do use DuckDuckGo, most often. Sometimes, if I’m trying to like test to see if something ‑‑ like, if I’m using another ‑‑ like, my house computer uses Google, because my mom’s like, I don’t like DuckDuckGo! It’s not showing me the things I want to see! And that’s usually because Google, again, collects data about you and actively suggests results that it thinks are the things you’re searching for, whether or not they’re what you’re actually searching for.

For VPN use, I use ProtonVPN, mainly because it’s free and I don’t really have money to pay for a VPN right now. But I think ExpressVPN is one of the most popular ones. So I’d say it’s pretty trustworthy.

INGRID: Yeah, I’ve used ExpressVPN. It’s generally, I think, a well‑regarded one. I think that’s partly why it costs the money it costs. (Laughs) So ‑‑ yeah. There are other options if you don’t want to have to keep paying for it; but if you’ve already paid for it, yeah, keep using it.

What are the alternatives for encryption?

Yeah. “Can we talk about some alternatives for encryption, assuming a back door is created?”

OLIVIA: This isn’t ‑‑ oop.

INGRID: Go ahead.

OLIVIA: This isn’t really an alternative for encryption, but I think one of the things that we could start doing is ‑‑ less so would it be, like, trying to function without encryption, but instead encrypting our messages ourselves. Because technically, you could have end‑to‑end encryption over Instagram DM if you do the hand work of encrypting the messages that you send by yourself. Bleh! Tripped over my tongue there.

So there are a lot of apps ‑‑ specifically for e‑mail, I’m thinking of Enigmail and Pretty Good Privacy ‑‑ that are essentially tools that you can use to “hand encrypt,” in quotation marks, your e‑mails, so you don’t have to depend on someone doing that for you. Right? The government can’t knock on your door and say you’re not allowed to encrypt anymore. And encryption algorithms are mathematical things, so no one can quietly make the math stop working. Like, Signal, for instance, is very public about the algorithms that they use, and that’s how we know that we can trust them. Because other people can test them, and they’re like, yeah, it would take a computer about a thousand years to crack this. And so we’re able to use those same algorithms by ourselves, without depending on other platforms to do that work for us. And it would suck that we’d have to interact with each other with that level of friction? But it is possible to continue to have safe communications.
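As a rough illustration of “hand encrypting,” here’s a sketch using the Python cryptography library’s Fernet. Unlike PGP, this is symmetric ‑‑ both people must already share the key over a channel they trust ‑‑ but the resulting token is plain URL‑safe text you could paste into any DM or e‑mail.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# One-time setup: generate a key and share it with your friend
# over some channel you already trust (in person, Signal, etc.).
key = Fernet.generate_key()

# The token is URL-safe base64 text, so it can be pasted anywhere.
token = Fernet(key).encrypt(b"the real message")
print(token.decode())

# Your friend, holding the same key, decrypts on their side.
print(Fernet(key).decrypt(token).decode())  # the real message
```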

BLUNT: Yeah, and I think just in general, if you’re unsure about the security of the messaging system that you’re using? Like, right now, we’re using Zoom, and we had this conversation a bit yesterday. But I’m speaking on Zoom as if I were speaking in public. So if I were to say ‑‑ if I wanted to talk about my personal experiences, potentially I would phrase it as a hypothetical, is also one way. So just slightly changing the ways that you speak, or… Yeah. I think that’s also an option. Go ahead, sorry.

OLIVIA: No, I agree. Just, like, checking in with the people that you’re talking to: hey, we’re not going to talk about this here. And not being, like, reckless. So in a public forum, don’t, like, post about the direct action that’s happening on Sunday at city hall. Things like that ‑‑ in that sense, just using discretion, at that point.

What is the back door issue and how does it relate to encryption?

BLUNT: Someone says: “So the back door issue is for companies that encrypt for us?”

INGRID: Basically, yeah. The back door issue is not ‑‑ it’s also not necessarily that, like, all encryption would stop working. Right? It would be something like… you know, a government saying, hey, WhatsApp, we want access to conversations that we currently can’t have access to because WhatsApp communications are encrypted, and ordering WhatsApp to build that. And one would hope? (Laughs) That ‑‑ like, companies also know that they have a certain amount of, like, brand liability… when they remove security features. So it’s something that would probably be known about? I would hope it wouldn’t be done surreptitiously? But, yeah. It’s more about whether certain previously‑considered‑secure communications would become compromised. It wouldn’t necessarily end the possibility of ever deploying encryption again. It would be more of a service‑by‑service thing.

BLUNT: We still have some time for more questions, if anyone has any. Please feel free to drop them into the Q&A.

And maybe if Ingrid and Olivia, if you wanted to chat a little bit about what we’ll be talking about tomorrow, folks might have an idea of other things that they might want clarity on, or other things that they are really hoping might be covered.

What will be covered in part 3 of the digital literacy series?

OLIVIA: Yeah, tomorrow we’re gonna talk a lot about surveillance, like, more specifically. So, like, surveillance that’s done on platforms ‑‑ but also talking both about surveillance capitalism and state surveillance, and the different ways that they might cause harm for someone who’s, like, trying to use the internet. Yeah. I think those are the biggest points? But also thinking about… like, mitigation.

INGRID: Yeah. And in the context of state surveillance, we’re primarily talking about when the state utilizes platforms in the service of surveillance, or obtains information from platforms. There are a myriad of other ways that the state ‑‑ that, you know, police departments or federal or state governments ‑‑ can engage in surveillance of people, digitally or otherwise. But partly because the scale and scope of that topic is very, very large, and because we know people are coming from lots of different settings, and we don’t personally know the ins and outs of the surveillance tools of every police department in the world? We didn’t want to put forward examples of tools that would mostly just create, like, greater anxiety, or that wouldn’t necessarily be an accurate depiction of threats or realities that people might face.

If there is interest in more of those things, we’re happy to take questions about them? But it’s not something that we’re doing a deep dive into, because… again, it seems like that might be better served by more tailored questions for specific contexts.

BLUNT: I’m curious ‑‑ did you see the EFF launched the searchable database of police agencies and the tech tools that they use to spy on communities? Speaking of not spying on people! (Laughing)

INGRID: Yeah, but that’s the thing ‑‑ another thing is like, well, those tools are out there. God bless the folks who put that work together.

BLUNT: Cool. So I’ll just give it like two or three more minutes to see if any other questions pop in… And then I’ll just turn off the livestream, as well as the recording, in case anyone would prefer to ask a question that’s not public.

How do you build healthy community online?

Okay. So we have two more questions that just popped in… “Could you speak to building healthy community online? How to do that, how to use platforms for positive information spread?”

OLIVIA: So, when it comes to building healthy communities, I think… it really comes down to, like, the labor of moderation. Like, it has to go to someone, I think. One of the problems with a lot of platforms online is that they’re built by people who don’t really, like, see a need for moderation, if that makes sense? Like, one of the issues with Slack is that there was no way to block someone in Slack. And a lot of the people who originally were working on Slack couldn’t conceive of a reason why that would be necessary. While someone who’s ever experienced workplace harassment would know immediately why that kind of thing would be necessary, right?

And so when it comes to building healthy communities online, I think codes of conduct are honestly the thing that’s most necessary ‑‑ and creating an environment on that specific profile, or in that specific space, that invites the people who are engaging in that space to do that moderation work, and to also, like, promote pro‑social interactions, and to demote antisocial interactions, and things like that.

BLUNT: I also think that Hacking//Hustling, on our YouTube channel, has… a conversation between myself and three other folks, talking about sort of social media and propaganda, and a couple of harm reduction tips on how to assess the, like, truthfulness of what you’re sharing and posting. And I think that’s one thing we can do: just take an extra second before re‑tweeting or sharing something ‑‑ actually opening up the article before sharing it, and making sure that it’s something that we want to share. It’s a simple thing that we can do. I know things move so fast in these online spaces that it’s sometimes hard to do. But if you’re able to assess that something is misinformation, or maybe it’s something that you don’t want to share, then it slows down the spread of misinformation.

Thank you so much to everyone and their awesome questions. I’m just going to take one second to turn off the YouTube Live and to turn off the recording, and then see if folks have any questions that they don’t want recorded.

Okay, cool! So the livestream has stopped, and the recording is no longer recording. So if folks have any other questions, you’re still on Zoom, but we would be happy to answer anything else, and I’ll just give that two or three more minutes… And if not, we’ll see you tomorrow at noon.

(Silence)

Okay. Cool! Anything else, Ingrid or Olivia, you want to say?

INGRID: Thank you all for coming. Thank you, again, to Cory for doing transcription. Or, live captioning. Yeah.

BLUNT: Yeah, thank you, Cory. Appreciate you.

OLIVIA: Thank you.

CORY DOSTIE: My pleasure!

BLUNT: Okay, great! I will see you all on ‑‑ tomorrow! (Laughs) Take care.

INGRID: Bye, everyone.

OLIVIA: Bye, everyone!

Digital Literacy Training (Part 1) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

Part 1: OK But What Is The Internet, Really? In this three-day lunch series with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning), we will work to demystify the tools and platforms that we use every day. It is our hope that through better understanding the technologies we use, we are better equipped to keep each other safe!

Digital Literacy Training (Part 1) Transcript

OLIVIA: Hi, everyone.

Just before we begin: some of the values that we’re trying to ground this workshop in, in terms of cyber defense, are, firstly, acknowledging cyber defense as a way of maintaining community‑based power, and cryptography as an abolitionist technology rather than military, or something that doesn’t come from us, right?

So, there have been ways of using techniques like cryptography for community defense ‑‑ and community defense is something that doesn’t have to be immediately associated with white supremacist, industrial technology.

So following that, we want to affirm that there can be a cyber defense pedagogy that is anti‑racist, anti‑binary, and pro‑femme. But also one that’s trauma‑informed, right? And doesn’t reinforce paranoia ‑‑ because we know there are white supremacist institutions. And teaching from a place of gentleness. And considering, because of our myriad identities, the previous harm people might have experienced, and trying not to replicate it or force people to relive it.

So if you need to take space at any point during this workshop, we want to honor that, and this will be recorded and available for view at a later time, as well.

INGRID: Thank you, Olivia. That was great.

My name is Ingrid. I go by she/her pronouns. And we are ‑‑ welcome, welcome to the internet! (Laughs) This is the first of a series of three digital literacy sessions where we’re gonna be walking through a few different concepts.

And this first one we wanted to start with was really getting into just some of the baseline, you know, technical things around what the internet actually is and how people experience it, or how it, you know, works.

And… We’ve sort of organized this into a couple of sections. We’re gonna start with a couple things about our personal opinions about how to talk about some of these things ‑‑ some grounding perspectives we’re bringing to it. Then the internet, and kind of how it works as an infrastructure.

Browsers? Which are, like, a particular technology for interfacing with the internet. And the World Wide Web, which is… you know, basically the thing that the browser takes you to. (Laughs)

So, starting with our opinions… (Laughs) We got ‑‑ we got more, but these seem important to start with.

The first one that we wanted to convey is that, you know, some of this stuff around how the internet works gets treated like this sort of special knowledge, or like something only for smart people. But, you know, companies have a lot more resources to do things. The people who run, work in, and found tech companies often have had, you know, privileges like generational wealth! Or like early exposure to technology, that meant that some of this stuff was just more available to them.

And it has been for a long time. And if there are things that are confusing, or unfamiliar, it’s ‑‑ you know, it is not because you can’t understand. It’s because the people who have a lot of control and power have had the resources to, like, overcome the things that are confusing… Yeah.

We’ll come back to this point in other ways, I think, in this presentation today.

OLIVIA: The other point that we really want to hammer in is that nothing is completely secure online. And that’s due to the nature of how we connect to the internet, right? The only way you can really have a completely secure computer is to have a really, really boring computer! Right?

Computers are interesting because… computers and the internet are able to be interesting and fun things to use because we are able to connect to other computers. Right? Because it’s a form of a telecommunication device. And so it’s kind of okay! That our computers can’t be completely secure, because if they were, they’d just be kind of like brick boxes that don’t really do anything.

So instead of trying to chase, like, a mythological security purity, what we do is we learn to manage risk instead. Right? We create systems so that we put ourselves in as little danger as possible.

What is the internet?

INGRID: So, for our initial grounding point, we want to just talk about what the internet is. And this is a hard question, sometimes, I find? Because… The word “internet” comes to mean lots of different things. For me, the simplest summary I can ever provide is that the internet is just computers talking to computers. (Laughs)

It’s information going between computers. This image, which is, you know, one of many you can find when you Google image search “internet diagram” is a bunch of computers in, you know, a household, including a game machine and a few PCs. Who is this person? With all these devices? And they’re connecting to a router in their house, which has connected to a modem, which connects to the internet! Which is more computers. Not the ones that you’re seeing on the screen.

It’s kind of dorky, but this is a really goofy example of a computer talking to another computer. It’s from the movie Terminator 3. This also, I realize, is an Italian dub?

INGRID: So, I show this ‑‑ so what’s actually happening in this scene, which is, yes, very garbled, is the lady terminator, who is a robot, a very large sentient computer, is using a cell phone, like a dumb phone, to call another computer? And then she is making noises into the phone that are a translation of data into audio signal. And that is allowing her to hack into the LA School District’s database. It’s ‑‑ and it’s, you know, it’s very 2003? (Laughs) In that that was an era where, when people were getting online in their homes, they would have to connect to a modem that made sounds like that, too.

So I think, you know, it’s kind of a corny old example, but I like it because it also shows something that is hard to see in our day‑to‑day use of the internet, which is that for information to move from one computer to another computer, it has to be rendered into something material. In this case, it’s tones? It’s sound? On a home computer connected to a wi‑fi network, it would be radio waves. And kind of when you get to different layers of the internet, it’s going to be pulses of light traveling through fiberoptic cable.

So everything you type, every image you post, at some point it gets ‑‑ you know, that digital data gets transformed into a collection of, you know, arrangements of points of light, or, you know, a sound, or like a different material.

And it’s, you know, it’s much bigger! (Laughs) Than, like, than what we see on a screen! This is a map of the submarine cables that cross oceans that make it possible for the internet to be a global experience. It’s very terrestrial?

This is just for fun. This is just a video of a shark trying to eat one of the cables in the ocean… A cutie.

Rawrumph!! I just love his little… The point being, yeah. The internet is vulnerable to sharks! It is… it is very big, and it is complicated, and it is ‑‑ it is not just, you know, a thing on a screen. It needs a lot of physical stuff.

And when computers talk to computers, that doesn’t usually mean, like, a one‑to‑one connection? Right? So… I’m talking in this webinar to all of you right now, but, like, my computer is not directly connecting to your computer. What’s actually happening is that both of our computers are talking to the same computer… somewhere else.

There’s like a, you know, intermediary machine, that’s probably in a big building like this. This is an Amazon data center in Ashburn, Virginia. And that’s kind of the model that most of the internet takes; it’s usually, there’s kind of intermediary platforms, right?

And in a lot of technical language, this is called the client‑server model. The idea being that a server, which is a computer, holds things that are, you know, content on the internet, or applications like Zoom, and the client, which is just a computer, requests things from the server. You know, the server serves that. This goes ‑‑ this gets to the client computer through a routing process, that usually means that the information has to travel through multiple computers.

But! Again, this, like ‑‑ these words just mean computer and computer? Technically, you could turn a home computer into a server and get a stable internet connection and make it something ‑‑ make it something that just serves information to the internet. Or, you know, you could even think about the fact that because, you know, lots of information is taken from personal computers and sent to companies, you know, in some ways we are serving all of the time!

And I ‑‑ mostly, this is just a dynamic, again, thinking about who controls the internet and how it is governed, that I think is important to acknowledge? I mean, in some ways, the internet is not computers talking to computers so much as… computers owned by companies talking to computers owned by people?

The internet, you know, it began as a project funded by the U.S. military, but became the domain of private companies in the late 1990s. So all of that stuff that I was talking about earlier? You know, the submarine cables, the data centers, they’re all private property owned by corporations. And it’s kind of ‑‑ all of the, you know, technical infrastructure that makes the internet possible is a public good… but it’s all managed by private companies. So it’s more, you know, a neoliberal public‑private partnership. And it has been for a long time.

And I mention this mainly because it’s good to remember that companies are beholden to laws and markets, and it’s in a company’s interest to be compliant with laws and be risk‑averse, and that’s partly why a lot of decisions made by platforms or other companies are often, like, kind of harmful ‑‑ like, can be harmful to communities like sex workers.

And again, like, this doesn’t have to be the way the internet is? It’s just sort of how it has been for a very long time.

So, computers talking to other computers is our very simple summary of what the internet is. But computers can talk to each other in different kinds of languages or dialects, let’s say? Which, in, you know, internet speak, are called protocols. Which, you know, a protocol is what it sounds like: It’s a set of rules about how something’s done. And so that’s why I find the dialect or language comparison kind of useful.

Common Internet Protocols

So a few protocols that exist for the internet that you probably encounter in your daily life, but maybe don’t think that much about, are Internet Protocol, wi‑fi, Address Resolution Protocol, Simple Mail Transfer Protocol, and HyperText Transfer Protocol. Maybe some of these you haven’t heard of, or they’re not as commonly talked about? But I’ll explain each of these.

And I apologize; these screenshots are from my Mac. There are ways to access these same sorts of things from a Windows machine? I don’t have screenshots. (Laughs)

So Internet Protocol is basically the foundation of getting on the internet. It assigns a number called an IP address, Internet Protocol address, to a computer when it’s connected to a network. And that sort of ‑‑ that is the ID that is used for understanding, like, who a computer is and how you access it.

So when I want to go get content from a specific website, what I’m actually requesting under the hood is… is a set of numbers that is an IP address, which is like the name or ID of the computer that I want to go to.
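
To make that concrete, here’s a minimal sketch in JavaScript (run with Node.js), with example.com standing in for whatever site you want. It asks, under the hood, for the number behind the name:

    // A minimal sketch: turning a name into an IP address.
    const dns = require('dns').promises;

    async function whereIs(hostname) {
      // The same kind of lookup your browser does before it can connect.
      const { address } = await dns.lookup(hostname);
      console.log(hostname + ' lives at ' + address);
    }

    whereIs('example.com'); // prints something like: example.com lives at 93.184.216.34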

I’m hoping this isn’t too abstract, and I hope, like ‑‑ yeah, please, if there are places where you have questions… please, add things to the Q&A.

So, Address Resolution Protocol and Media Access Control are a little different, but I wanted to talk about them because they’re related to understanding how your computer takes on a particular identity.

So, all ‑‑ there’s a question: Do all computers have their own IP address? They do, but they change, because basically, when you join a network, the address is assigned. It’s not a fixed ID. But there is a fixed ID that is connected to your computer, and it’s called a Media Access Control, or MAC, address.

And this is another screenshot from my machine. You can see this thing I circled here. That is my MAC address. And that is at the level of like my hardware, of my computer, an ID that has been… basically, like, baked into the machine. Everything that can connect to a network has one of these IDs.

And when ‑‑ and so Address Resolution Protocol is a mechanism for associating your temporary IP address with the MAC address, and it mainly exists so that if there’s, like ‑‑ like, if the network screws up and assigns the same IP address to two things, to like two different devices, the MAC address can help resolve like, oh, we actually mean this device, not that device.
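
To see this on your own machine, here’s a minimal JavaScript sketch (Node.js) that just prints what your operating system reports ‑‑ nothing is sent anywhere:

    const os = require('os');

    // Every network interface the operating system knows about.
    for (const [name, addresses] of Object.entries(os.networkInterfaces())) {
      for (const a of addresses) {
        // a.mac is the hardware ID baked into the device;
        // a.address is the temporary IP assigned by the network.
        console.log(name, '| MAC:', a.mac, '| IP:', a.address);
      }
    }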

Oh, I realize I didn’t make a slide for wi‑fi. I think most of you probably know wi‑fi: it’s basically the way that information is transferred wirelessly.

Yes! Your IP ‑‑ well. Your IP address… will change when you connect, although it generally won’t change that much… It’s, it’s not like ‑‑ how am I answering this?

Like, if you’re ‑‑ if you’re connecting to the internet, in like your home? It’s probably ‑‑ you’re probably gonna get the same ID number, just ’cause it’s the same device you’re connecting to? But when you connect to a network at ‑‑ I guess no one goes to coffee shops anymore…

But in the time when you would go to a place with a different wireless network and connect to the internet! (Laughing) You would probably have a different IP address, because you’re connecting through a different device on a different network.

Oh, the only other thing about wi‑fi I will mention right now is that “wi‑fi” doesn’t actually mean anything. It’s not an acronym; it’s not an abbreviation. It’s a completely made‑up name… No one ‑‑ no one has a good answer for why it’s named that! (Laughs) I think like a branding consultant named it? It’s ‑‑ anyway.

So other protocols. So the Simple Mail Transfer Protocol, that underlies how e‑mail works.

So you encounter it a lot, but probably don’t think much about what ‑‑ like, that’s its own special kind of language for moving information, that’s different from the HyperText Transfer Protocol, which is one that may be familiar to all of you because it is the central protocol used for moving information in the browser!

Which is a nice segue, but I realized I also should mention that there is a variant of HTTP called HyperText Transfer Protocol Secure, or HTTPS. It’s an implementation of HTTP that encrypts the information transferred. So, that wasn’t adopted or implemented when browsers and HTTP were first being developed?

Because, again, these technologies were being developed with, you know, public funding and thought of as tools for scientific research, not for making purchases with credit cards or having, you know, private communications. So the implementation of security features and encryption into the internet is sometimes clumsy or frustrating because it was not designed into the original concept.
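
As a concrete example, here’s a minimal sketch of one HTTP request in JavaScript, again with example.com as a stand‑in. (Node 18+ can run this as‑is, since fetch is built in; in a browser console, cross‑site rules may block reading the response.)

    // One HTTP request, made by hand. Swapping "https://" for "http://"
    // would make the same request travel unencrypted.
    fetch('https://example.com/')
      .then(response => {
        console.log(response.status);                 // e.g. 200, meaning "OK"
        return response.text();
      })
      .then(html => console.log(html.slice(0, 200))); // the first bit of the page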

What’s an internet browser?

All right. So, we are next moving into the browser. I’m kind of a nerd about internet history things, so part of what I wanted to talk about with the browser is just its origin story?

The first example of a browser that was easy to use was created by researchers at the University of Illinois, including a guy named Marc Andreessen. That browser was called Mosaic; Andreessen went on to co‑found Netscape, which made Netscape Navigator. It was a very important opening of the internet to the general public, and it changed a lot of people’s perception of, and ability to be part of, the internet.

Marc Andreessen became very rich because he did this, and he founded a venture capital firm called Andreessen Horowitz. Returning to the idea that a lot of these companies are not smart, they’re just rich? He worked on a thing that is very important… That is not a good reason that he gets to throw money at Airbnb and decide how, you know, urban planning and housing is going to be changed forever!

There’s something about that which I feel is kind of important to remember. It’s not that Marc Andreessen is a dumb guy; it’s that he’s been given a lot of authority through getting a lot of money through doing one clever thing.

A lot of the things that define the browser in the 1990s when it was first becoming an adopted thing were actually proprietary technologies made by different companies. So different companies had their own browsers that they had made. And they wanted to be The Browser everyone used. Right? And so they invented new things to make their browser cool? But they wouldn’t work on other ones.

So Olivia will talk a little bit more about these, I think, in the section on the web. But Cascading Style Sheets, which is a way of adding, you know, design aspects to a web page, were first shipped by Microsoft in its browser. Javascript, which is a programming language that works in browsers, was created by a guy at Netscape in about 10 days? (Laughs) And, yeah, if you made a website and it had, you know, CSS in its layout, it would be visible in a Microsoft browser, but not in a Netscape browser.

This was a terrible way of doing things? And partly because companies got nervous about possibly getting regulated, and partly because it was just bad for business, they sort of figured out how to put aside some of their differences and develop standards, basically.

So the standardization of browsers ‑‑ so that basically when I open something in Chrome and I open something in Firefox, it looks the same and it works the same ‑‑ kind of starts to be worked on in 1998. It really only starts to be implemented widespread in 2007, and it continues to be worked on. There are entire committees of people, who mostly work at the tech companies that make these browsers, who come and talk to each other about, like, what are the things we’re all gonna agree on in terms of how this technology works?

And we’re looking at, and wanting to talk a little bit, about browsers also because they are really useful teaching tools. It’s really easy ‑‑ well, it’s not “really” easy. It is pretty easy to kind of look at what’s going on behind the scenes, using a browser. And that’s mainly because they’re very old.

You know, by 2007 when the iPhone emerges, and the App Store in 2008, you can’t really look and see ‑‑ it’s much harder to go on your phone and see, like, I wonder what kind of data Instagram is sending back to, you know, Facebook right now! Like, to actually try and look for that on your phone is almost impossible. But you can kind of start to look for that in a web browser.

And that’s sort of a privileging of desktop technology, and a legacy of this being kind of an old technology, where transparency was treated as just inherently a good idea. And I think that if they were being built today, we probably wouldn’t have it.

So, we’re going to introduce you to some browser tools in this next section ‑‑ oh, wait, sorry, one more thing I wanted to acknowledge. This isn’t super detailed as far as comparing the privacy features of different browsers? But ‑‑ and we are working on a list of sort of, like, a bibliography that we can share with everyone later.

The point being ‑‑ the main thing I just wanted to convey here is that different browsers made by different companies are gonna all work more or less the same, but they do have underlying qualities that might not be great for user privacy. And, also, there’s, you know, questions of like… when one company kind of controls the browser market, how does that change the way that people see the internet?

So, you know, doing some research, doing some comparison of, of what different browsers… you know, do and don’t do. Most of the screenshots for this were done in Firefox. If you use other browsers, that’s fine. But… Yeah.

All right. Now ‑‑ (Laughs) Now we will move to World Wide Web!

What are web pages and how do they work?

OLIVIA: Hi, everyone! So, this part is talking a lot about the actual content that you are able to look at using your browser. So we’ll be making use of a lot of the tools that Ingrid mentioned about looking deeper into the actual… web pages themselves.

Awesome. So, this is a web page. It’s the same page as the video that we showed at the beginning, of sharks biting undersea cables! (Laughs) And it’s accessible to anyone who can connect their computer to the World Wide Web. And so, a lot of times we use “the internet” and “the web” interchangeably?

But the internet itself is more of the infrastructure, and the actual place, if we can call it a place, that we’re going to logically… is called the World Wide Web. Right? That’s the whole WWW‑dot thing that we’ve all been doing.

So, web pages are hosted on computers! You can host a web page on your own computer; you can pay another company to host it for you; other companies host their own, if they have a lot of money. And… If you are paying someone else to host your website for you, you end up with a lot less autonomy. Right?

So there’s a lot of movement toward people hosting things themselves, to avoid things like censorship and surveillance. Because like we said in the beginning, companies are beholden to a lot stricter laws than individuals are. And individuals are able to kind of, for themselves, say ‑‑

What’s the difference between VPN and TOR? If we have time at the end, we will cover that a little bit, briefly. But essentially, a VPN ‑‑ TOR is a browser, and a VPN is something that you can install into your computer.

TOR does something, does things, that are very similar to what VPNs do, in terms of like onion routing? But they’re not… they’re not the same. Like, you can use TOR to navigate the internet, or you can use a VPN and use your normal browser. Right.

To look at a web page’s source, right, oftentimes you can right‑click, or Ctrl+click on a Mac? And you click View Page Source, and you’ll be able to get a closer look at the actual web page itself.

And so when you, when you view the source, you ‑‑ oh, you can go back. When you view the source, you end up seeing HTML. Right? So we told you earlier that the web uses HTTP, which is the HyperText Transfer Protocol, to send and receive data. The data that’s being sent and received is HyperText. Right? That’s written in the HyperText Markup Language.

So… HTML isn’t a programming language, per se; it’s a markup language. So it defines the structure of your content. It displays things, like text and images and links to other web pages.

And there are two ways that HTML pages can exist: Static and dynamic. So static would be a lot of the pages that we might code ourselves, right? Dynamic is more of the… the web pages that are generated dynamically are like Facebook and Instagram. The user requests a page, which triggers code that generates an HTML page.

So sometimes you would want… ‑‑ if you try to look at the source code of a website, you won’t really see much of anything? Because that code, like, doesn’t exist yet. Unless you open an inspector, and you look at the code that’s visible on your side.
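
Here’s a minimal sketch of that “dynamic” idea in JavaScript (Node.js): the HTML below doesn’t exist anywhere as a file; it’s assembled by code the moment you request it ‑‑ which is part of why View Page Source can be so unhelpful on big dynamic sites.

    const http = require('http');

    http.createServer((request, response) => {
      // Built fresh for every request, not stored anywhere.
      const page = '<html><body><p>You asked for ' + request.url +
                   ' at ' + new Date().toISOString() + '</p></body></html>';
      response.writeHead(200, { 'Content-Type': 'text/html' });
      response.end(page);
    }).listen(8080);

    // Visit http://localhost:8080/anything to see a page generated on demand.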

So, to make this content look better, it’s often styled. Right? ‘Cause otherwise, it would just be plain Arial size 12. So we add color, add shape, animation, layouts, italics. And we do that using Cascading Style Sheets, or CSS.

CSS is also not a programming language. It’s a way of representing information.

So this is what a static HTML file might look like. I grabbed this from a teaching resource ‑‑ that’s why you can see things like explanations of what HTML is ‑‑ because I thought it would look a bit cleaner than the WIRED article.

And this is a CSS file! You see things like font size, font family, color, background color, position. Right? So those are the types of things that you can control using CSS. You can even make animations.

So, knowing that ‑‑ the point that we’re trying to make in saying this is that HTML can’t do anything with your data. Neither can CSS. They just display things that are coming from the other computer that you’re connecting to. So how are web pages collecting our data? Well, the code that actually does stuff in your browser is usually written in Javascript.

So… To see it in action, we can go into Tools, and Web Developer, and Inspector! And we can see some of the stuff that’s going on behind the scenes, right? This is how you do this in Firefox, and it’s similar but not identical in other browsers like Chrome and Safari. In Safari, I believe you have to turn on the Develop menu in the preferences before these tools show up.

So if you check out the Inspector tab, we have an easier way of reading the HTML source than just pulling it all up in a really large, confusing doc. Right? We get syntax highlighting. We get little disclosure triangles. And we’re able to highlight things and see ‑‑ we’re able to hover over different parts of the HTML, and it’ll highlight that section in the actual web page. So it’s a really useful teaching tool.

In the Console tab, we’re able to see more of the Javascript activity that’s happening in the background of the page. So we’re able to see all of these, like, jQuery calls and database calls and analytics. Right? So this is how a web page might try to get information about you, so that the company ‑‑ in this case WIRED ‑‑ can use that information to structure their own marketing practices. Like, how many people went to this article about sharks biting undersea cables? They would use Javascript in order to record the fact that you, one person, went to this website.
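
For a sense of what that recording can look like, here’s a minimal sketch in browser JavaScript. The endpoint collect.example-analytics.com is made up for illustration; real analytics scripts are fancier, but the shape is the same:

    // What you were reading, where you came from, and when.
    const pageview = {
      page: location.pathname,
      referrer: document.referrer,
      when: Date.now(),
    };

    // sendBeacon quietly POSTs the data without slowing the page down.
    navigator.sendBeacon(
      'https://collect.example-analytics.com/hit',
      JSON.stringify(pageview)
    );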

In the Network tab, it shows data being sent and data being received by your browser. Right? So all of the requests marked “POST” ‑‑ you can only see the P‑O, in this part ‑‑ are mainly sending data, and all the ones marked “GET” are mainly fetching data.

And so some of this stuff is fairly, like, normal. It’s actual HTML stuff that’s being included on the page. You can see the different types. And then some of it, in other places, you would be able to see like actual trackers. Right?

And when you click on one of the items, you’re able to see more information about what’s being transferred.

INGRID: And this is not necessarily ‑‑ I mean, although this is not very helpful? Like, when you click the headers? It’s like, here is a bunch of words! I don’t know what’s going on! But the other tabs can give us a little more, and depending on the type of network request, you’ll get slightly easier‑to‑read data.

What are cookies and how do they work?

So, in this section, we’re going to talk a little bit about some of the tracking methods. Cookies… are called cookies on the web because in a different, older technology, whose name I do not recall, this same thing was called a magic cookie.

And I don’t know why it was called that in the other one… It’s just a, it’s a… it’s a holdover from the fact that a small number of people working on the internet had inside jokes, as far as I can tell.

But a cookie is a text file that contains information. Usually it’s something like an ID. And it’s used for doing things like storing preferences, or kind of managing things like paywalls on news websites.

So in this case, the cookie that was handed off to me from this particular page gave me this ID number that’s just like a pile of letters and numbers. And my browser will store that cookie, and then when I ‑‑ if I go back to the WIRED website, it’ll see ‑‑ it’ll check to see, like, oh, do I already have a cookie assigned to this one?

And if it does, it will take note of how many other WIRED articles I’ve already read. And that’s how WIRED is able to say, hey, we noticed you’ve read all your free articles… Stop, stop doing that. You don’t get any more.
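
Here’s a minimal sketch of that counting trick in browser JavaScript. The cookie name articles_read is invented for illustration, but the mechanism is the same:

    // Read the counter back out of the cookie, if we already have one.
    function articlesRead() {
      const match = document.cookie.match(/(?:^|; )articles_read=(\d+)/);
      return match ? Number(match[1]) : 0;
    }

    const count = articlesRead() + 1;
    // Store the new count; the browser sends it back on every future visit.
    document.cookie = 'articles_read=' + count +
                      '; max-age=' + 60 * 60 * 24 * 30 + '; path=/';

    if (count > 4) {
      console.log('You have read all your free articles. Stop doing that.');
    }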

And they’re not ‑‑ they can also be used for things like, you know, say you have a log‑in with a particular website, and you don’t want to have to log in every time; the cookie can store some information for you.

But they’re also used for things like, kind of, tracking ‑‑ just trying to see where people go online, to, you know, be able to figure out how to sell them things.

Just a distinction note, like, if you look at things in the Network tab: A response cookie is a file that comes from a website to your computer; a request cookie is a file that your browser sends back to that computer along with a request. And a lot of this is stuff that is encrypted or encoded or kind of arbitrary ‑‑ which is good, in so far as it’s not ‑‑ oh, sorry.

It’s not ‑‑ it’s not just giving, you know, information, passing information about you and storing it in the clear? You still probably don’t want it? (Laughs)

So cookies can also be used for, like, tracking. This website has like, you know, a lot of different scripts running on it, because media companies work with other, you know, companies that do this kind of audience tracking stuff.

So like, when I was looking at this one, the domain that the URL was coming from is elsa.memoinsights.com. That’s a weird name, and I don’t know what any of this is. If I type that into the browser, it doesn’t produce a web page?

But when I Google “memo insights,” I find: A company that works with companies to give them, you know, competitive analysis and campaign summaries. I don’t know what these things are, but this is some boutique company that works with Conde Nast, which owns WIRED. Maybe they do something with what I read, and maybe we can learn that people who read WIRED also read the New Yorker, or something.

What are pixel trackers and how do they work?

There are other trackers on the web that are not based in cookies and are a little bit weirder. So, pixel trackers are basically just tiny image files. They’re called this because, you know, sometimes they’re literally just one pixel by one pixel. And the image is hosted on a server somewhere else, not on the WIRED website.

It’s hosted by whatever company, who knows, is doing this work. And because the image has to load from this other server, my computer makes a request to that server. And once that request is logged, that server can, you know, get information from my request about my computer: where I’m coming from, how long I spent on the page, what time I accessed it.

If you ever used, like, e‑mail marketing software, or like newsletter software, like MailChimp or TinyLetter, this is usually how those services are able to tell you how many people have opened your e‑mail. They’ll have like an invisible pixel tracker loaded into the, into the actual e‑mail, and will send the information about when that image loaded to the newsletter web service.
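
Here’s a minimal sketch in browser JavaScript of how a pixel tracker gets loaded; the domain and parameters are invented:

    // A one-by-one image nobody will ever see.
    const pixel = new Image(1, 1);

    // Setting .src is enough: the browser requests the image, and that
    // request -- your IP, your device, the time -- lands in their logs.
    pixel.src = 'https://pixel.example-tracker.com/open.gif' +
                '?newsletter=42&t=' + Date.now();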

So pixel trackers, they’re sort of sneaky in that, like… Again, you literally can’t see them on the web page. And they’re not as transparently present? (Laughs) As other things?

What is browser fingerprinting and how does it work?

A more ‑‑ another method of tracking users on the internet across different websites is something called browser fingerprinting, which is a bit more sophisticated than cookies. So in the last few years, browsers have become a lot more dependent on and intertwined with a computer’s, like, operating system and hardware. For example, when you join a Google Hangout or a Zoom call! (Laughs)

You ‑‑ the browser is gonna need to access your webcam and your microphone. Right? And those are, those are, you know, parts of the hardware. So there needs to be like ways for the browser to talk to those parts of your computer? And that in and of itself isn’t a bad thing. But! It means that if a, you know, if some code is triggered that asks questions about, you know, those other parts of hardware, it might just get ‑‑ like, that’s data that could get sent to another server.

So in this example, we’re looking at loaded information that includes things like browser name, browser version. And that’s stuff that will usually be in a typical request. Like, knowing what kind of browser it is isn’t that unusual? But then we get things like: what operating system am I on? What version of the operating system am I on?

I don’t ‑‑ like, I don’t know why this site needs that information! And I didn’t see any fingerprinting happening on the WIRED website, so I had to go to the YouTube page that the video was on. (Laughs)

There are a lot of more detailed sorts of things that can be, like, pulled into fingerprinting. So like your camera. Like, is your camera on? What kind of camera is it? That can get ‑‑ that can be something that a, you know, browser fingerprint will want to collect. Your, like, your battery percentage, weirdly? It’s ‑‑ and all of this is in the service of creating, like, an ID to associate with you that is definitively your computer, basically.

As opposed to, like, you know, you can actually like erase cookies from your browser, if you want to. Or you can say, like, don’t store cookies. But it’s a lot harder to not have a battery.

In terms of knowing if fingerprinting’s happening, one way to do that in the Network tab is you’re looking for the POST requests, which means that your computer is sending something to another computer. And one way that it can get sent is in a format called JSON, which is an abbreviation for JavaScript Object Notation. Which is basically a format for data that can be processed by the programming language that works in the browser.
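
To tie those two ideas together, here’s a minimal sketch in browser JavaScript of fingerprint‑style collection being shipped off as JSON. The browser APIs here are real; the endpoint is invented:

    async function collectFingerprint() {
      const fingerprint = {
        userAgent: navigator.userAgent,   // browser name + version + OS
        language: navigator.language,
        screen: screen.width + 'x' + screen.height,
        timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      };

      // Battery level, weirdly, is also readable in some browsers.
      if (navigator.getBattery) {
        fingerprint.battery = (await navigator.getBattery()).level;
      }

      // Sent as JSON -- the kind of POST you'd spot in the Network tab.
      fetch('https://collect.example-tracker.com/fp', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(fingerprint),
      });
    }

    collectFingerprint();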

Another way, if, you know, the Network tab is a little overwhelming: there are browser extensions that can show you more detailed things about what’s going on with fingerprinting.

Additionally, just as a sidenote, browser ‑‑ like, browser extensions are another example of like throwbacks of the browser. The idea that anyone can build, like, extra software for that piece of software? It’s like, no one would ever let you do that to the Instagram app on your phone. And it’s sort of a, it’s kind of a leftover thing from something ‑‑ like, Firefox started doing it in 2004, and then everyone copied them. (Laughs)

But, back to fingerprinting.

Just as far as ‑‑ this is a Chrome extension called DFPM, Don’t FingerPrint Me, which just logs this in a slightly tidier way. So I thought I would show it. And it highlights a couple of examples of ways that this page is currently doing fingerprinting that I might want to know about.

So canvas fingerprinting is a method ‑‑ it sort of describes it here. It draws like a little hidden image on the page that then is kind of encoded to be, like, your fingerprint. I think Firefox actually blocks this by default, so I had to do this in Chrome! (Laughs)
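
Here’s a minimal sketch of the canvas trick in browser JavaScript: the drawing is never shown, but the exact pixels it produces differ subtly between machines (fonts, graphics hardware, operating system), and that difference becomes the fingerprint.

    // Draw a hidden image...
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    ctx.font = '14px Arial';
    ctx.fillText('hello, fingerprint', 2, 16);

    // ...then read the pixels back as a string. Same code, slightly
    // different output on different machines -- a crude device ID.
    const summary = canvas.toDataURL();
    console.log(summary.length, summary.slice(-16));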

WebRTC, that’s related to your camera and microphone. WebRTC stands for Web Real‑Time Communication. That’s basically the tool used for doing web calls. They’ll also look at what fonts you have on your computer, your screen resolution. You can see here the battery level stuff.

So I guess the point I wanted to bring across with the fingerprinting stuff is just that, like, there are lots of different things in play here.

Should we ‑‑ do you think we have time for our bonus round…? Oo, it’s almost 1:00. But I feel like there was ‑‑ I’m hoping, I think there was some interest in this. I don’t know, Olivia, what do you think?

OLIVIA: I just pasted in the chat an answer to the TOR versus VPN question? So we can skip those slides. But it might be useful to kind of rapid‑fire go through safer browsing techniques? Yeah, I just got a “yes please” in the Q&A.

What is a VPN and how does it work?

INGRID: Okay. Quick version of the VPN thing. This is how a normal connection, you know, logs data about you. I go to a website, and it logs this computer came to me! This computer over here.

A VPN basically means that you’re connecting to that computer through another computer. And so your request looks as though it’s coming from kind of somewhere else. That being said, like, it’s ‑‑ you know, there’s still other data. Like, given the point I just made about fingerprinting, there’s other data that could be collected there that’s worth thinking about.

When data travels through TOR ‑‑ TOR is an acronym for The Onion Router ‑‑ the idea is that it wraps your request in multiple layers, like an onion, by sending it through multiple different computers, which are called relays.

So when you use TOR, which is a browser, to connect, it sends your request through this computer and this computer and this computer, and whatever is the last one you were on before you get to the page you want to visit ‑‑ that’s the IP address that the website is going to log. This last sort of hop in the routing is called the exit relay. Those can be ‑‑ yeah. I think that was my attempt at being quick. I apologize. (Laughs)
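
Here’s a conceptual sketch of that layering in JavaScript ‑‑ no real encryption, just nested envelopes, with made‑up relay names ‑‑ to show why no single relay sees the whole path:

    const relays = ['relay-A', 'relay-B', 'exit-relay'];

    // Wrap: the innermost layer holds the real destination and message.
    // In real TOR, each layer is encrypted so only that relay can open it.
    let packet = { to: 'website.example', body: 'hello' };
    for (const relay of [...relays].reverse()) {
      packet = { to: relay, body: packet };
    }

    // Unwrap: each relay peels one layer and only learns the next hop.
    let hop = packet;
    while (hop.body && hop.body.to) {
      console.log(hop.to + ' forwards to ' + hop.body.to);
      hop = hop.body;
    }

The website at the end only ever sees the exit relay’s address, which is the point.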

OLIVIA: Fun fact about VPNs. If you ‑‑ because the United States has different privacy laws than other countries, if you were to connect to a VPN server that was in, for example, the European Union, you might get a lot more notifications from the sites that you normally go to about different cookies and different things that they do with your data. Because in Europe, they’re required to tell you, and in America, they’re not always required to tell you what they’re doing with your data.

What is private web browsing and how does it work?

Oh, I can take it. So this is how, in Firefox, you would open a private window. And private windows, I think we’re all kind of a little bit familiar with them. They clear your search and browsing history once you quit. It doesn’t make you anonymous to websites, or to your internet service provider. It just keeps your history private from anyone else who uses that computer.

But that might be really useful to you if you are using a public computer, or if you’re using a computer that might be compromised for any other reason. Like say if you suspect that you’re going to protest and a cop might take your device from you.

What are script blockers and how do they work?

INGRID: So script blockers. The tracking and the little analytic tools and stuff usually are written in Javascript, because that is the only programming language that works in a browser. So there are tools that will prevent Javascript from running in your browser. And that can be helpful for preventing some of those tracking tools from sending data back to, you know, some computer somewhere else. It can be a little bit frustrating, because Javascript is used for all sorts of things on websites. Sometimes it’s used for loading all of the content of the web page! (Laughs)

Sometimes it’s used to, you know, make things have, like, fun UI! It’s worth ‑‑ it’s interesting to try, if only to see how much of your internet experience needs Javascript? But yeah. The Electronic Frontier Foundation has a cool extension called Privacy Badger that sort of learns which scripts are trackers and which ones aren’t as you browse. But yeah. These are extensions that you can install onto a browser.

And then firewalls!

What is a firewall and how does it work?

OLIVIA: So firewalls are kind of the first line of defense for your computer’s security. A firewall basically prevents other computers from connecting directly to your computer, unless you say yes or no. And so… They’re really easy to turn on? On your computers? But they’re not always on by default.

So on a Mac computer, like I’ve shown here, you would literally just go to Security & Privacy, and go to the Firewall tab, and it’s like one button. Turn off, or turn on. And you don’t really have to do much more than that.

And in Windows, there’s a similar process, if you go to the next slide, where you really just go into settings, go into security, and switch the “on” setting. It’s pretty… It’s pretty easy, and it’s kind of annoying that it’s not done for you automatically.

But I recommend everyone just check and see, like, hey, is my firewall turned on? Because it’s a really easy step to immediately make your computer much safer.

INGRID: All right! We went through all the slides! (Laughter)

BLUNT: That was perfectly timed! You got it exactly at 1:00.

What’s the difference between a VPN and TOR?

OLIVIA: Okay. So for the TOR versus VPN answer.

As we said just a while ago, TOR uses onion routing and sends your data through multiple computers, called TOR nodes, to obscure traffic and anonymize you, while a VPN just connects you to a VPN server. Those are often owned by VPN providers; sometimes you have to pay to use them, and other ones are free.

So I described it as kind of like a condom? (Laughs) Between you and your internet service provider? So Verizon knows that you’re using a VPN, but it doesn’t know what you’re doing on it, because a VPN would encrypt all your traffic.

It’s really important that you use a VPN that you trust, because all of your internet traffic is being routed through their computer, which is another reason people like to pay. Because you can have a little bit more faith that it’s like a trusted service if you’re paying for it? Even though that’s of course not always true.

But there is ProtonVPN, which is one I use that’s free, which is run by the same people who run Proton Mail, which I use. I haven’t had any problems with it.

You can use a VPN and TOR at the same time, which is what the question directly asked. And I believe that your ISP would know that you’re using a VPN, but because you’re using a VPN it wouldn’t know that you’re using TOR. Ingrid, if that’s not true, you can like clap me on that.

Because TOR is super slow and it routes your computer through a bunch of different things, it can break a lot of websites, including video streaming like YouTube and Netflix. A lot of people use VPNs, however, so they can access videos or things that are banned in different countries by making it look like they’re in a different place.

But if you’re doing something highly sensitive or illegal, you’d probably want to use TOR, and probably some other precautions, too.

BLUNT: Thank you so much. That was super helpful. Do folks have any questions? Is there anything that people would benefit from sort of like going back and going into in a little bit more detail?

Someone just said: Is there a way around TOR breaking websites? I’ve used it and it throws a lot of captcha tests on regular websites.

OLIVIA: So Cloudflare kind of hates TOR? (Laughs) It takes a really aggressive stance towards TOR users, actually? There was an Ars Technica article I read where Cloudflare said something like 90% of the TOR traffic they see is malicious.

So I don’t know if there’s going to be a time that you can use TOR and not have captchas act up, because Cloudflare sees that kind of activity as malicious activity.

Can Apple see what you’re doing on your computer or phone?

INGRID: “This may be hardware‑related, but does Apple see what you’re doing on your computer because you connect to the internet, e.g. any photos, videos you store?”

Okay, to make sure I understand the question: Is the question whether, like, if you’re using an Apple device, whether Apple is able to see or collect anything if you’re connected to the internet from that device?

Okay. So I think ‑‑ I mean, the answer to that is you would need to kind of tell them to do that? (Laughs)

So like, if you are using something like iCloud to store photos and videos, then yes, they would be able to see and have all of those. But in terms of just being on the internet doing things on an Apple device? Apple can’t personally, like, peek in and see that. I mean, other computers will know that you’re on an Apple device.

But yeah, you have to be directly interfacing with Apple’s network for Apple to be able to have anything on or from your computer.

OLIVIA: And when it comes to things like iMessage and iCloud, they… say? That that information is encrypted. Of course, it’s not open source, so we don’t actually know how they’re encrypting it or what they do. But Apple has said for a while that communications between, like, say, two iMessage users?

So not someone using it to speak to someone who has an Android; that’s SMS.

But two iMessage users speaking to each other, that’s technically an end‑to‑end encrypted conversation. Apple does collect some information from you when you are initially typing in someone’s number to text them, because it pings the server to find out if that number is associated with an iCloud account.

So for iPhone users, that little moment between when a number that you’re typing in turns either blue or green, in that moment it’s sort of pinging Apple’s servers. So they do have a list of the times that that ping has occurred.

But of course, that doesn’t tell you if you actually contacted the person whose number you typed in; it just knows that you made that query. And that’s the extent, so Apple says, of the information that they collect about your iMessage conversations.

So, yes, they do ‑‑ they can technically see that information? But they tell us that they don’t look at it. So.

Open-source vs. Closed-source Technology

BLUNT: Can you explain a little bit more about open source or closed source technologies?

OLIVIA: Yeah! So, open source technologies are… basically, they’re apps, websites, tools where the code that’s used to write them and run them is publicly available.

When it comes to security technologies, it’s really… best practice to try to use tools that are open source, because that means that they’re able to be publicly audited.

So like, regular security experts can go in and actually perform an audit on open source security tools, and know that they work. Versus, you have a lot of paid security tools where you basically assume that they work because people tell you that they work?

And the public can’t really hold them to any, like, public accountability for whether or not they work.

Versus you can actually, like, test the encryption algorithm of, say, Signal, which is a messaging app whose code is all public information.

INGRID: Open source, it’s also like a way of… kind of letting people developing software kind of support each other, in a way? Because the fact that Signal is open source, it’s not just like oh, we can be accountable if Signal says it’s doing something but it’s not; it’s also a way to be like, hey, I noticed something. Is it working? And you can actually directly contribute to improving that technology.

It’s complicated ‑‑ I mean, the world of open source is complicated in that it still has elements of the, like… snobby culture of tech, sometimes? But in principle, it’s very useful for being able to have technologies that are accountable and that have some element of public engagement and understanding.

How to Choose a VPN

BLUNT: Awesome. Thank you. And so I have another question in the chat: What are some good ways to assess the trustworthiness of a VPN, as you were discussing before?

OLIVIA: The way most people do it, I think, Ingrid, you could check me on this, is kind of by reputation. If you look up how to find a good VPN, you’ll find a lot of articles where people talk about the pros and cons of different ones. And you’ll be kind of directed to the ones considered by the public to be the most trustworthy ones?

INGRID: Yeah. And I think one way I evaluate companies sometimes on this is looking at their level of engagement with the actual issues of, like, user privacy?

So like, one of the things I ended up using as a reference for this workshop, as a guide to different browsers, was a blog post by ExpressVPN. And they’re a company that ‑‑ they don’t have to tell me anything about which browser to use; there’s no reason for them to generate that content.

I mean, it’s good PR‑ish? But they’re not going to get new customers because I’m using a different browser now.

So some of it’s thinking, you know, is it open source or not? What is the like business model? And are they kind of actively, you know, engaging with issues related to user privacy?

We’ll talk a little bit more tomorrow about legislative issues around privacy, and that’s also another way. Like, have they taken positions on particular, you know, proposed laws that could harm user privacy?

To me, those are sort of like, how are they kind of like acting on principles?

OLIVIA: It also might be a good way of checking to see if ‑‑ yeah! If they’ve ever produced logs in court proceedings, so you know whether or not they actually track traffic.

Also, to see like, say, certain companies might be funded by other companies that, like, are less concerned about… public safety or privacy or human rights.

So that might also be a good way of like checking to see, like, the integrity of a VPN company. ‘Cause at the end of the day, they’re all companies.

Is WordPress a reputable option for sex workers?

INGRID: All right. The next question: Would y’all consider WordPress reputable for housing a sex worker website?

This ‑‑ thank you for asking, because it lets us kind of talk about something I wanted to figure out how to include in that whole presentation but didn’t.

So… Just as like a point of clarification, and maybe this is understood by people, but maybe for the video it will be helpful… WordPress? (Sighs) Is both a, like, hosting company and a piece of software. WordPress, I think ‑‑ WordPress.org is the hosting one? Or WordPress.com? I can never remember. (Laughs)

I think it’s WordPress.com. But you can host a website on WordPress’s, like, platform, and when you do that you will be running a website that is built using WordPress’s software. Which is also called WordPress! This is confusing and annoying.

But… you can also use WordPress’s software on another hosting service. Like, you can install WordPress onto a hosting service’s website. I think a fair amount of hosting services today actually do sort of a one‑click option, where they’ll set up a server with WordPress for you.

In terms of WordPress, like, as the host of a website? And as a host for sex worker websites… I don’t actually know. I would say ‑‑ I would, like, check ‑‑ I would need to go check their terms of service? (Laughs)

I think in general… Yeah. I think with all hosting companies, figuring out which ones are kind of the most reputable is partly about looking at any past incidents they’ve had in terms of takedowns, and also, like, where they’re located?

So like, WordPress is a company based in the United States, so they’re beholden to United States laws and regulations. And I’m guessing part of the reason this question was asked is that this person ‑‑ that you probably know a little bit about FOSTA‑SESTA, which makes it harder for companies to allow any content related to sex work on their servers.

And as far as I know, WordPress wants to be compliant with it and hasn’t taken a radical stance against it.

Blunt, do you have any…?

BLUNT: Yeah, I can say I think hosting anywhere on a U.S.‑based company right now has a certain amount of risk, which you can decide if that works for you or not. If you are hosting on WordPress right now, I would just recommend making lots of backups of everything, as like a harm reduction tool. So if they decide to stop hosting your content, you don’t lose everything.

And I also just recommend that for most platforms that you’re working on. (Silence)

Cool. So we have around 15 minutes left. So if there are any other questions, now’s the time to ask them. And… if not, I wonder if Ingrid and Olivia could just chat a little bit about what y’all will be covering in the next two days!

Okay, we have two more questions.

Can you reverse browser fingerprinting?

“This may be a digital surveillance question, but once you get browser fingerprinted, is it reversible?”

INGRID: Hmm. That’s actually a question where I’m not sure I know the answer. Olivia, do you know…?

OLIVIA: No…

INGRID: I do know that… you can sort of ‑‑ I know on some, on mobile devices, you can like spoof aspects of your identity?

So, like, you can ‑‑ like, so I mentioned MAC addresses are sort of this hard‑coded thing. That’s, like, the ID of your device? A phone can actually ‑‑ like, you can actually generate sort of like fake MAC addresses? (Laughs)

That are the one that’s presenting to the world? So if that sort of was a piece of your fingerprinted identity, that’s one way to kind of, like ‑‑ you know. It’s like you wouldn’t be a perfect match anymore? But… Yeah, I don’t know if there’s sort of a way to completely undo a fingerprinting.

Yeah. I will also look into that and see if I can give you an answer tomorrow, if you’re going to be here tomorrow. If you’re not, it will be in the video for tomorrow.

Additional Digital Literacy Resources

BLUNT: Great, thank you. And someone asked: Are there any readings that y’all would recommend? I’ve read Algorithms of Oppression and am looking for more. I love this question!

OLIVIA: That… the minute I heard that question, like, a really long list of readings just like ran through my brain and then deleted itself? (Laughs) We’ll definitely share like a short reading list in the bibliography that we’ll send out later.

BLUNT: Awesome. That’s great.

Okay, cool! This has been really amazing. Thank you so much. I’m just going to say, one more chance for questions before we begin to wrap up.

Or also, I suppose, things that you’re interested in for the next two days, to see if we’re on track for that.

How do fintech companies use digital surveillance?

Someone asks: This is a fintech‑related question for digital surveillance, but can you talk about how that kind of works internet‑wise?

INGRID: Fintech…

BLUNT: For financial technologies. And how they track you. Oh! So like, if you’re using the same e‑mail address for different things? Is that sort of on the…?

OLIVIA: Like bank tracking? Like money type of…?

INGRID: So… Depending on the, you know, like financial servicer you’re working with, like PayPal or Stripe or whatever, they’re going to have ‑‑ like, they ‑‑ like, in order to work with banks and credit card companies, they are sort of expected to kind of know things about you.

These are like related to rules called KYC, Know Your Customer. And so part of the tracking or like ‑‑ or, not tracking, but part of information that is collected by those providers is a matter of them being legally compliant?

That doesn’t mean it produces great results; it’s simply true.

And I think the ‑‑ in terms of the layer ‑‑ I’m trying to think of what’s ‑‑ I don’t know as much about whether or not companies like Venmo or… Stripe or PayPal are sharing transaction data? I’m pretty sure that’s illegal! (Laughs) But… Who can say. You know, lots of things happen. That would be capitalism.

BLUNT: I also just dropped the account shutdown harm reduction guide that Ingrid and Hacking//Hustling worked on last year, which focuses a lot on financial technologies and the way that, like, data is sort of traced between them and potentially your escorting website. So that was just dropped into the chat below, and I can tweet that out as well in a little bit.

Zoom vs. Jitsi: which is more secure?

OLIVIA: Privacy/security issues of Zoom versus Jitsi… I also prefer to use Jitsi when feasible? But I also found that call quality kind of drops really harshly the more people log on. Like, I don’t think we can actually sustainably have a call of this many people on Jitsi without using like a different ‑‑ without hosting Jitsi on a different server.

Concerning how I handle the privacy/security issues of Zoom, they’re saying they’re going to start betaing end‑to‑end encryption later this month. I don’t know what that actually even means for them, considering that they’re not open source, right?

But I do say that one of the things I tend to try and practice when it comes to, like, using Zoom, is maintaining security culture amongst me and the people I’m talking to. Right? So I’m never going to talk about, like, any direct actions, right, that are going to happen in real life on Zoom. Refrain from just, like, discussing activity that could get other people in trouble anyway.

Like, while it would be nice to have, like, say this kind of conversation that we’re all having now over an encrypted channel, I think it’s generally much safer and ‑‑ I don’t like using the word “innocent,” but that’s like the word that is popping into my head, to talk about ‑‑ to use Zoom for education, even if it is security education, than it would be to actually discuss real plans.

So… It might be really beneficial to you if you are, like, say, having ‑‑ using Zoom to talk to a large group of people about something that is kind of confidential? To talk over, like, Signal in a group chat, or some other encrypted group chat platform, and decide like, okay, what are you allowed to say over Zoom, and what you’re not allowed to say. And to think of Zoom as basically you having a conversation in public.

Assume for all of your, like, Zoom meetings that someone’s recording and posting it to ‑‑ (Laughs)

YouTube later! And that would probably be… that would probably be the most… secure way to use it, in general? Is just to assume that all of your conversation’s in public.

BLUNT: Yeah. I totally agree, Olivia. And that’s why this is going to be a public‑facing document. So, Zoom felt okay for us for that.

INGRID: Yeah. I mean, I think another way I’ve thought about this with Zoom is like, just remembering what Zoom’s actually designed for, which is workplace surveillance? Right? It’s like, you know, its primary market, like when it was first created, and still, is corporations. Right?

So there’s lots of ‑‑ so like also, when you’re going into like ‑‑ even if you’re going to a, you know, public Zoom thing that is, you know, about learning something. Like, whoever is managing that Zoom call gets a copy of all of the chats. Right?

And even if you’re chatting like privately with one other person, that message is stored by ‑‑ like, someone gets access to that! And… Mostly just that’s something to… like, thinking ‑‑ like, just keep in mind with, yeah, what you do and don’t say. Like, especially if you are not the person who is running the call.

Think about what you would or wouldn’t want someone you don’t know to kind of like have about you.

What’s to come in the digital literacy lunch series?

BLUNT: Awesome. Thank you so much. Do you want to start to wrap up and maybe chat briefly about what we’ll be seeing in the next two sessions?

OLIVIA: Sure, yeah. So the next two sessions are going to be one talking more about how platforms work and sort of the whole, like, algorithmic ‑‑ bleh! (Laughs)

Algorithmic curation, and how misinformation spreads on platforms, and security in the Twitter sphere, rather than just thinking about using the internet in general. And then the third will be talking more explicitly about internet surveillance.

So we’re going to be talking a little bit about surveillance capitalism, as well as like state surveillance, and the places where those intersect, and the places where you might be in danger and how to mitigate risk in that way.

@babyfat.jpeg on Lesbians Who Tech

Last year, two organizers from Hacking//Hustling were rejected from speaking at the Lesbians Who Tech convening in San Francisco, which took place shortly after SESTA-FOSTA was signed into law. Hacking//Hustling provided a partial scholarship to Baby Fat (@babyfat.jpeg) to attend and make sure that there would be sex worker representation at the conference. Baby Fat’s reflections on her experience at Lesbians Who Tech as a sex working Femme are below.

A few months ago, I was able to attend my first Lesbians Who Tech summit thanks largely to the support of my community. At the time of attending, I was working as a digital media associate at a Queer healthcare nonprofit. Most of my 9-5 background has come from my work in Queer nonprofits, mostly in direct outreach. For the last three years I have worked in tech-specific positions within nonprofits, using skills I was able to acquire because of my hustling. I’m from a nontraditional background, but hustling has taught me everything I know about tech, marketing, and community management.

It’s worth mentioning I was able to attend the conference because I was awarded a partial scholarship. I attended the summit because I have always had a passion for social media and believe in its ability to connect community and provide accessible education, especially as it relates to Queer sexuality and wellness. From a hustling perspective, it’s the best way for me to engage and advertise to those who utilize the multitude of my services. Post FOSTA/SESTA, I have had to rely even more heavily on social media and have since begun operating more discreetly.

While the conference was exciting and I was able to connect with some great folks, I often felt that some overall nuance was missing. There was a lack of intentional conversations around gentrification, sex work, and Queer complacency. Navigating the space as a fat femme sex worker was complex and exhausting at times, between being unable to fit in certain seating, being talked down to by masc attendees, and feeling uncomfortable disclosing the extent of my work. Because the bulk of my 9-5 career has been in nonprofits, the majority of the conferences I have attended have been specifically dedicated to Queer theory, resistance, and community building. However, these spaces often fail to see the importance of tech within these movements and have been slow to adapt to the changes tech has created in communities. I think LWT is doing better work than most other tech-specific conferences, but I do think they could benefit from adopting some of the approaches and topics Queer nonprofit conferences have.

Throughout the summit, I heard no mentions of gentrification from LWT leadership, which felt especially out of place considering that LWT seeks to empower the very people gentrification disproportionately affects. While gentrification has been a popular conversation in tech spaces, having been discussed at length, I can understand how it might feel like it doesn’t need as much attention. But I still feel it’s incredibly important to have some intentional dialogue and education around it. I’m from Chicago, and the city’s recent tech expansion and attempt at being a global city has reinvigorated the conversation of gentrification and tech. If LWT truly aims to create a more intersectional and diverse tech workforce, then they need to fully engage the communities that are being displaced by tech gentrification. LWT leadership needs to recognize they have a platform to educate and incite change. Choosing not to talk about gentrification is choosing to be complicit in it.

At the root of complicity are respectability politics, something LWT engages in heavily in order to maintain funding, connections, and a respectable reputation. But with these politics comes the erasure of some folks who rely on tech for their safety and economic stability. Sex workers have always been at the forefront of using and building the popularity of tech platforms and services. Between navigating digital banking, advertising online, and censorship on social media, sex workers utilize tech at significant rates. Sex workers made Cashapp and Venmo mainstream, and continue to be a driving force behind both banking systems’ growth. But both systems, as well as most social media platforms, have made it increasingly difficult for sex workers to continue using them.

I went to LWT knowing that there were no formal mentions of sex work in the programming, an oversight considering the historical connections between sex work and Queer folks. After all, Pride was started by Marsha P. Johnson, a Black Trans woman and a sex worker. Countless other Queer revolutionaries, like Sylvia Rivera, Amber L. Hollibaugh, and Miss Major, have been on the front lines of Queer liberation. But as Queer folks have become more assimilated into mainstream culture, Queer sex workers have been pushed farther to the fringes by their own communities.

Whenever I was in casual conversation with other attendees, the mention of sex work would make them uncomfortable. When I disclosed my experiences in navigating social media as a sex worker, I could feel them try to calculate what type of work I did. It felt like I had to prove my credentials and cleanliness to them. I had a few people inquire what type of sex work I did, and I generally got the feeling from them that some forms were more acceptable than others. Oftentimes folks would withdraw from the conversation or, worse, explain to me how they knew things were “difficult” because they read a Vice article once. When I pressed them for ways that they were working to make their companies and products better for sex workers since they read that Vice article, they often said there wasn’t much they could do because they weren’t a decision-maker or programmer. But I think that’s just coded language for “I don’t want to do anything.”

I don’t think it’s a matter of people not understanding the difficulties sex workers face while trying to navigate tech. I think it’s an issue of respectability politics; additionally, those who are willing to make change are unsure where to start. Sex work, despite what sex positivity would have you think, is still incredibly stigmatized, especially within educated Queer spaces like LWT. Leadership at LWT has the power to educate attendees on the nuances of tech and sex work and can inspire attendees to do more within their positions, but once again, they choose not to.

The high point of the conference for me was being able to see Angelica Ross speak; Ross has been incredibly vocal about the importance of sex workers in tech and has provided visibility to the larger movement. I want to see more dialogue around sex work, and sex workers speaking and facilitating conversations, specifically at LWT in the future. Additionally, I would like to see LWT engaging more with sex workers by partnering with sex worker-specific organizations and speaking about sex work more vocally on their digital platforms. I think engaging more sex worker-based organizations would encourage more sex workers to attend, and if anyone needs better tech, it’s sex workers.

Publicly talking about sex work not only educates civilians on the nuances of tech and sex work but also actively destigmatizes sex work in tech spaces, making it easier for folks to openly (and comfortably) talk about their narratives as sex workers. I’m critical of LWT because I want it to succeed; I want people to feel comfortable, and for tech to be reclaimed.