Digital Literacy Training (Part 3) with Olivia McKayla Ross (@cyberdoula) and Ingrid Burrington (@lifewinning)

INGRID: So today, we're gonna talk about platform surveillance. And, in general, what we're focused on is both the ways that platforms surveil and the forms of state surveillance that utilize platforms.

So, this is sort of an outline of where we're going today. First, doing a little bit of clarifying of some terms that we're gonna use ‑‑ those two terms being state surveillance and surveillance capitalism. And then we'll talk a little bit about mitigation strategies for both.

What is surveillance capitalism?

So, clarifying some terms. We wanted to make sure we were clear about what we were talking about. So surveillance capitalism is a term used to describe a system of capitalism reliant on the monetization of personal data, generally collected from people doing things online.

What is state surveillance?

State surveillance is a tactic used by governments to intimidate and attack. It can be digital? It can also be IRL. For many years, it was just people following people, or decoding paper messages. It’s more sophisticated now, but it takes many different forms.

And surveillance capitalism kind of emerges in part because of the demands of state surveillance. So in the early 2000s, after 9/11, the existence of a military industrial complex that had funded the development of surveillance technologies was already well established, but was amped up in some ways by the demands of the global war on terror. And there are, you know, deep interconnections between the history of Silicon Valley and the history of the military industrial complex.

The state will utilize the resources of surveillance capitalism. But we wanted to make it clear: surveillance capitalism's negative impacts and risks to personal safety are not solely confined to state violence. They can perpetuate interpersonal violence; they can, you know, harm your ability to, like, find a place to live, or limit your mobility. And those are the two distinctions we just wanted to clarify as we go into this.

And I feel ‑‑ like, part of me wants to believe this is kind of already taken as a given, but it's good to have it said: Neither surveillance capitalism nor state surveillance exist independent of racism and the legacy of the transatlantic slave trade. I found, when I first started getting into this space in 2012, 2013, that this was an angle that was kind of neglected. But you don't have an Industrial Revolution without the transatlantic slave trade, and you don't have surveillance without the, you know, need to monitor people who are considered, you know ‑‑ property, basically. I would highly recommend Simone Browne's Dark Matters as an introduction to the role of Blackness in surveillance. But, yeah. Just wanted to raise that up, and be really honest about who is affected by these systems the most.

And… also wanted to reiterate something that we said on the first day of this series, which is: nothing is completely secure… and that's okay. We're learning how to manage risk instead. Right? The only way to be completely safe on the internet is to not be on the internet. Connection, you know, involves vulnerability by default. And the work that we're all trying to do is find ways to create space ‑‑ to mitigate or, like, understand the risks that we're taking, rather than just assuming everything's a threat and closing off. Or saying, well, I'm doomed anyway; doesn't matter.

What is threat modeling?

In the world of security studies ‑‑ of building a way of thinking about keeping yourself safe ‑‑ a term that comes from that space is threat modeling. Which is a practice of basically assessing your potential vulnerabilities with regard to surveillance. We're not going to be doing threat modeling, per se, in this presentation. There are some really great resources out there on how to do that, which we're happy to point you to. But we wanted to raise it up as something that can help in approaching risk management and mitigation, because it's sort of inventorying your own circumstances and understanding where you are and aren't at risk, which can make it a little less overwhelming.

All right. So, the concept of state surveillance ‑‑ for this presentation, we're going to be talking about it on and using platforms, which we talked about yesterday. There's all kinds of other ways? (Laughs) That, as I said earlier, the state can engage in surveillance. Right this second, we're just gonna focus on platforms. If there are questions specifically about non‑platform things, maybe in the Q&A part we could talk about those, if there's time.

What are platforms and why do they matter?

So, platforms are companies. I think we've said this a lot over the last three days. And what that generally means is that platforms have to, and will, comply with requests from law enforcement for user data. They don't have to tell anyone that that happens. Some of them do; some big companies do. These are from ‑‑ the one on the top is Twitter's annual transparency report, and the one below is Facebook's. And these are just graphs ‑‑ visualizations they made ‑‑ of government requests for user data. But again, this is almost a courtesy? This is something that's kind of done maybe for the brand, not necessarily because they have any obligation. But… it's also just a reminder that they can't actually say no to, like, a warrant. This also applies to internet service providers, like Verizon; mobile data providers; hosting services. Companies have to do what the law tells them, and most of the internet is run by companies.

Next slide… So, governments don't always have to ask platforms to share private data if there's enough publicly available material to draw from. The method of using publicly accessible data from, you know, online sources is sometimes called open source investigation, in that the method is reproducible and the data is publicly available. When Backpage still existed, that was more or less how cops would use it to conduct raids. One older example of this: in 2014, the New York Police Department conducted a massive raid on public housing projects in Harlem to arrest suspected gang members. It was called ‑‑ oh, shoot. It had like some terrible name… Operation Crew Cut. That's what it was called. (Laughs) And much of the evidence used in the raid was culled, basically, from cops lurking on social media, interpreting slang and inside jokes between teenage boys as gang coordination. Some of the young men ‑‑ I think they were as young as 16 and as old as 23 ‑‑ who were caught in that raid are still serving sentences. Some of them were able to challenge the case and be let out, but still ‑‑ it was a pretty terrible process.

A more recent example of police using publicly available data is this one on the left, from June. The woman in this photo was charged with allegedly setting a Philadelphia police vehicle on fire. And the police were able to figure out who she was based on a tattoo visible in the photo ‑‑ which you can't really see in this image because it's quite small; I couldn't get a bigger one ‑‑ plus a T‑shirt she had previously bought from an Etsy store, and previous amateur modeling work on Instagram. So, you know, maybe only a handful of people had bought that one Etsy shirt. And they were able to match her to these other images out in public online.

What is open source investigation and why does it matter?

I want to note briefly that open source investigation, or digital investigation using publicly available resources, isn’t inherently a bad thing or technique. It’s research. Activists and journalists use it to identify misinformation campaigns and human rights violations when it’s not safe for them to be on the ground. I’ve used it in my own work. But you know, the point is don’t make it easier for the police to do their job.

What is metadata and why does it matter?

On the next slide… another source of information that can be pulled from publicly available sites, besides just reading the material and the images, is metadata. And “metadata” is sort of just a fancy word for data about data. One way this sometimes gets described: if you have a piece of physical mail, the letter is the data, and the envelope ‑‑ who it's mailed to, where it was mailed from, things like that ‑‑ is the metadata. It's the container that has relevant information about the content.

So in an image, metadata is encoded into the file with information about the photo. These are some screenshots of a photo of my dog, on my phone. (Laughs) She briefly interrupted the session yesterday, so I thought I'd let her be a guest in today's presentation. And if I scroll down on my phone and look further below in the Android interface, I can see the day and the time that the photo was taken, and then what I've blocked out with that big blue square is a Google Maps embed with a map of exactly where I took the picture. You can also see what kind of phone I used to take the picture. You can see where the image is stored in my folders. You can see how big the image file is. These are examples of metadata. And that, combined with the actual information in the image ‑‑ like a T‑shirt or a tattoo ‑‑ is all really useful for law enforcement. And metadata is stored with the file: if you use, like, Instagram's API to access photos, you can get metadata like this from the app.
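To make the location part concrete: EXIF metadata stores GPS coordinates as degrees/minutes/seconds values plus a hemisphere letter, and converting them into the decimal coordinates a maps embed uses is just a few lines of arithmetic. A minimal Python sketch ‑‑ the coordinate values here are made up for illustration, not taken from the photo in the slides:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style degrees/minutes/seconds GPS value
    (plus hemisphere reference "N"/"S"/"E"/"W") to decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF values like those a phone embeds in a photo:
lat = dms_to_decimal(40, 42, 46.0, "N")
lon = dms_to_decimal(74, 0, 21.6, "W")
print(round(lat, 4), round(lon, 4))  # → 40.7128 -74.006
```

That output is a street-level location ‑‑ which is why the blue square is over the maps embed in the slide.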

Is it possible to remove metadata from your phone’s camera?

OLIVIA: So, surveillance capitalism! Really big ‑‑ oh, there’s a Q&A. Is it possible to remove metadata from your phone’s camera?

INGRID: So there's two things that you can do. One is ‑‑ I think that you can, in your settings, disable location? Being stored on the photos? Depending, I think, on the make and model. Another thing, if you're concerned about the, you know, detailed metadata… taking a screenshot of the image on your phone is not gonna store the location where the original was taken. I think the screenshot's metadata might store what kind of device the screenshot was taken on ‑‑ but that doesn't necessarily narrow it down in a world of mostly iPhones and, you know, Samsung devices and Android devices. It's a bit less granular. Yeah.
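For the curious, here's a rough sketch of what metadata‑removal tools do under the hood: in a JPEG file, EXIF lives in its own segment (called APP1), separate from the compressed image data, so a stripper can walk the file's segments and drop the metadata‑bearing ones while copying everything else through. This is a simplified illustration that assumes a well‑formed JPEG ‑‑ real tools like exiftool handle far more cases:

```python
def strip_metadata(jpeg_bytes: bytes) -> bytes:
    """Drop metadata-bearing segments (APP1 = EXIF/XMP, COM = comments)
    from a JPEG byte string, leaving the actual image data untouched."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (no SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:           # Start of Scan: compressed image data
            out += jpeg_bytes[i:]    # follows, so copy the rest verbatim.
            break
        if marker == 0xD9:           # End of Image marker.
            out += jpeg_bytes[i:i + 2]
            break
        # Other segments look like: 0xFF, marker, 2-byte big-endian length
        # (which counts the length field itself), then the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker not in (0xE1, 0xFE):   # keep everything but APP1 / COM
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

A screenshot works for a similar reason: it produces a brand‑new file that never contained the original photo's EXIF segment in the first place.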

OLIVIA: Awesome.

Surveillance Capitalism: How It Works and Why It Matters

So, surveillance capitalism! I don't know if you guys have noticed ‑‑ I've been seeing them a lot more often ‑‑ but some advertisements in between YouTube videos are just kind of, like, multiple choice questions? Some of them ask how old you are; some of them might ask if you've graduated from school yet; et cetera. So, in what world is a single‑question survey a replacement for, say, a 30‑second advertisement for Old Spice deodorant?

Our world! Specifically, our world under surveillance capitalism. So, to go further on Ingrid's initial definition, surveillance capitalism occurs when our data is the commodity for sale on the market. And it's usually ‑‑ almost always ‑‑ created and captured through companies that provide free online services, like Facebook, Google, YouTube, et cetera.

We can’t really know for sure how much our data is worth? There’s no industry standard. Because, at the end of the day, information that’s valuable for one company could be completely useless for another company. But we do know that Facebook makes about $30 every quarter off of each individual user.

What are data brokers and why do they matter?

But social media sites aren’t the only ones with business models designed around selling information. We also have data brokers. And data brokers… If we go back to the private investigator example that we saw in the state surveillance section, thinking about the tools at their disposal, the level of openness that you have online, they could find out a lot of things about you. Like where you’ve lived, your habits, what you spend money on, who your family members and romantic partners are, your political alignments, your health status, et cetera. That’s like one person.

But imagine that rather than searching through your data and piecing together a story themselves, they actually just had access to a giant vacuum cleaner and were able to hoover up the entire internet instead. That is kind of what a data broker is!

I made up this tiny case study for Exact Data. They’re a data broker, and they’re in Chicago.

And Exact Data has profiles of about 240 million individuals, with about 700 data elements associated with each of them. So some of the questions you could answer, if you looked at a stranger's profile through Exact Data, would be their name, their address, their ethnicity, their level of education, whether they have grandkids, whether they like woodworking. It goes from basic data to your interests and what you spend time on.

So, potential for harm. You get lumped in algorithmically with a group or demographic when you would prefer to be anonymous. Your profile may appear in algorithmic recommendations because of traits about yourself you normally keep private. The advertisements you see might be reminders of previous habits that could be triggering to see now. And it's also a gateway for local authorities to obtain extremely detailed information about you. I don't know if Ingrid has any other points on potential harms.

How to Mitigate Harm

But, luckily, there are ways to mitigate. Right? You can opt out of this ‑‑ even though it's pretty hard? But if you remember from our first workshop, where we talked about how websites collect data from us, we know that it's captured mostly using scripts: trackers, cookies, et cetera. So you can use a script blocker! Also, the terms of service will probably mention the type of data an online service collects and what it's for. They don't always, but a lot of them do. So if you read them, you have a bit more agency over whether you agree to use that service or not, and you might be able to look for alternatives that have different terms of service.

Privacy laws in the United States are pretty relaxed when it comes to forcing companies to report things. So, one tip is to try setting your VPN location to a country that has stronger privacy laws. Then you might get a lot more banners about cookies and other trackers ‑‑ things companies are required to tell you about if you appear to live somewhere that's not here.

You can also contact data brokers and ask to be put on their internal suppression list. And a lot of brokers have forms you can fill out online requesting that. The only issue is that this is really hard? Because there are a lot of data broker companies, and we don't actually know how many there are, because this is an industry that's pretty unregulated.

Another mitigation strategy is creating, essentially, a digital alter ego that's difficult to trace to your other online accounts. So if you are behaving in a way that you don't want to be conflated algorithmically with the rest of your life, you can create separate online profiles using different e‑mail addresses ‑‑ creating as much distance between you in one aspect of your life and you in the other, and compartmentalizing in a way that makes it difficult to connect the two.

And then of course you can use encrypted apps that don’t store metadata or actual data. This could include messaging apps, but this could also include… word processors like CryptPad; it could include video conferencing; it could include a lot of different apps.

So, to wrap everything up: Over the past three days…

Wrapping Up the Digital Literacy Series

INGRID: So, I guess we wanted to try and provide some wrap‑up, because we covered a lot of things in three days. And that was a very broad version of a very deep and complicated subject. But we went through, you know, the foundations of the internet ‑‑ how it's made, the actual technical aspects of how it works. The platforms that are built atop that foundation and extract value out of it. And the systems of power that incentivize those platforms to exist, and that control and govern how some of that value is used ‑‑ (Laughs) or misused.

And I guess across all three of these, I had a couple of bigger questions or things to think about that I wanted to put forward. One is that, I think in some ways, the neoliberal public/private structure of the internet ‑‑ as an infrastructure that everyone lives with ‑‑ shapes the way that everything else follows. Right? When something that was originally a government‑built property becomes a commodity, and that commodity becomes the foundation of how anyone can live in the world, it creates a lot of these aftereffects.

And I find internet history always really fascinating, because it's a reminder that a lot of this is very contingent, and it could have gone different ways. Sometimes, people wanted it to go a different way? And it's worth thinking about what it looks like to build different networks, different services and structures, while living within surveillance capitalism ‑‑ 'cause we haven't built different internets and different structures quite yet. Surveillance capitalism's still pretty big. A big part of taking care of one another and ourselves is taking care with where and how we speak and act online. Which is different from being afraid to say things? It's more being kind of competent in where and how you choose to speak, to protect yourself and to protect people you care about.

I think… that’s ‑‑ yeah, that went by really fast! (Laughs)

BLUNT: We’ll just shower you with questions. (Laughs)

How are companies making money off of data?

I have two questions in the chat. Someone says: How is it exactly that companies make money off of our data? Is it just through ads? Are there other processes?

OLIVIA: So, when it comes to making money off of it ‑‑ let's say you're a company that sells headphones. And you are tracking data on the people who are using your headphones. They buy them, and then in order to use them, they have to download an app onto their phone. Right? Through that app, the company might record things like the songs you listen to, what time of day you listen to them, how long you're using the app, where you are when you're listening… And they might have this, like, select little package of data about you.

Now, they might find that that's data that, like, a music marketing campaign ‑‑ the people who do advertising for musicians, I guess? I don't remember what that job's called ‑‑ would want. But it's more the idea that different companies collect data that's useful for other companies in their marketing practices, or in their business practices.

So Facebook collects data that a lot of different companies might want for a myriad of reasons, because the amount of data Facebook kind of siphons from people is so large. And so ‑‑ yeah, does that…? Do any of you guys have something to say around that, about other ways that companies might ‑‑

INGRID: Yeah. I mean, a lot of it bottoms out in ads and market research.


INGRID: I mean, another place where this data goes ‑‑ I don't think it's the most lucrative source of revenue, in so far as they're not the biggest buyer? But, like, police departments will buy from data brokers. And there's no real regulation on whether or when they do that.

So. Like, you know, information in general is valuable. (Laughs) And, I mean, ironically, I think what's kind of so fascinating to me about the model of surveillance capitalism is that, like, ads don't really work. Or they kinda work, but ‑‑ in terms of actually proving that, after I look at a pair of glasses once and then get followed around the internet by the same pair of glasses for, like, two and a half months, the rate at which I actually go on to buy the glasses, I don't think is that high? But there is just enough faith in the idea of it that it continues to make lots and lots of money. It's, like, very speculative.

OLIVIA: I actually saw an article recently that said, instead of advertising ‑‑ like, say we all paid for, like, an ad‑free internet? It would cost about $35 a month for each of us, in terms of being able to maintain internet infrastructure and pay for things without having advertisements.

If you have an alter ego for privacy, how can you ensure it remains separate? Is facial recognition something to worry about?

OLIVIA: “If you have an alter ego account and a personal account, how do you ensure your online accounts stay completely compartmentalized and aren’t associated through your device or IP address, et cetera?” And then they say, “Is there a way to protect your face from being collected on facial recognition if you post pictures on both accounts?”

INGRID: Yeah. So we didn't talk about facial recognition. And I kind of falsely assumed ‑‑ it's been so heavily talked about in the news that maybe it was already the thing on people's minds. But I also didn't want to overemphasize it as a risk, because there's so much information beyond a face that can be used when trying to identify people?

In terms of posting pictures on two different accounts… I mean, if they're similar photos ‑‑ I think the answer is, your face will be captured no matter what? That's sort of a given. I don't know. Olivia, can you think of any examples of mitigation of face recognition ‑‑ I mean, CV Dazzle doesn't really work anymore. But like, in the same way that people will avoid copyright bots catching them on YouTube by changing the cropping, or subtly altering a video file?

BLUNT: I just dropped a link. Have you seen this? It's from the SAND Lab at the University of Chicago, called the Fawkes tool, and it slightly alters the pixels so that the photo is unrecognizable to facial recognition technologies. I'm still sort of looking into it, but I think it's an interesting thing to think about when we're thinking about uploading photos to, like, escort ads or something like that.

OLIVIA: I think what's difficult when it comes to facial recognition is that, depending on who the other actor is, they have access to a different level of technology. Like, consumer‑facing facial recognition software ‑‑ the stuff that's in Instagram face filters, and the stuff that's in the Photo Booth app on your laptop ‑‑ is really different from the kinds of tools that, say, the state would have at their disposal.

So it's kind of a different… I don't know if the phrase "threat model" is even a good way to put it, because we know that, say, for instance, the New York Police Department definitely has tools that allow them to identify pictures of protesters with just their eyes and their eyebrows.

And so, normally… if I were talking to someone who has, like, two different accounts and is interested in not being connected to both of those accounts through their biometric data, like their face, I would normally suggest that they wear a mask that covers their whole face, honestly. Because I don't really know of a foolproof way to avoid it digitally without actively destroying the file. You'd have to physically cover your face ‑‑ like, put an emoji over it ‑‑ in a way that's irreversible for someone else who downloads the photo. Because there's a lot of tricks online when it comes to, like, changing the lighting, and putting glitter on your face, and doing a lot of different stuff?

And some of those work on consumer‑facing facial recognition technology. But we don’t actually know how ‑‑ if that even works at the state level, if that makes sense.

So depending on like, who you’re worried about tracking your account… you might just want to straight up cover your face, or leave your face out of photos.

What is gait analysis and why is it important?

BLUNT: I wonder, too, do you ‑‑ if you could talk a little bit about gait analysis, and how that’s also used? Are you familiar with that?

INGRID: I don’t ‑‑ I don’t know enough about gait analysis…

OLIVIA: I know that it exists.

INGRID: Yeah. And I think ‑‑ like, it is ‑‑ and this is another thing where, in trying to figure out what to talk about for this session, figuring out like what are things that we actually know where the risks are, and what are things that… may exist, but we, like, can’t necessarily like identify where they are?

OLIVIA: I have heard of resources for people who are interested ‑‑ like, for high‑risk people who are worried about being found via gait analysis? And gait analysis is literally being identified by the way that you walk, and the way that you move. And there are people who teach workshops about, like, how to change your walk in a way that makes you not recognizable, and how to practice doing that.

BLUNT: It’s fascinating.

Does it matter if you use popular platforms in browsers or apps?

INGRID: “If you use popular platforms like Facebook and Instagram in browsers instead of apps, does that give you a little more control over your data, or does it not really matter?”

I ‑‑ so, Olivia and Blunt, if you have other thoughts on this, please jump in. I mean, my position is that it kind of doesn't matter, in so far as the things that Facebook stores about you are things you do on Facebook. It's still tied to an account. It's not just, you know, passive trackers, which you could maybe use a script blocker on, and that's cool? But things you like, and things you click on, on Facebook in the browser, are still going to be stored in a database attached to your profile. So it doesn't necessarily change the concerns either way. But.

BLUNT: I’m not totally ‑‑ I have also heard things about having the Facebook app on your phone, that it gives Facebook access to more things. Like, the terms of service are different. I’m not totally sure about it. I don’t have it on my phone.

INGRID: That's actually ‑‑ that's a good point. I apologize. I guess it also depends on what you're concerned about. So, one thing that Facebook really likes having information on, for individual users, is who else they might want to be Facebook friends with. Right? The "People You May Know" feature, I once read, uses up a very large percentage of Facebook's infrastructure compute ‑‑ because connecting people to other people is really, really hard. And the Facebook app being on your phone does give it the opportunity to be opened up to your phone contacts, and places you take your phone. Which can expand the network of people that it thinks might be in your proximity, or in your social network. Because if a phone number in your phone has a Facebook account, maybe they'll say, oh, you know this person, probably!

In 2017, Kashmir Hill and Surya Mattu did a feature for Gizmodo on how it works, inspired by Kashmir getting recommended a friend on Facebook who was a long‑lost relative, from her father's previous marriage or something ‑‑ someone there was, like, no way she would have otherwise met. So her interest partly came out of trying to figure out how Facebook could possibly have made that connection. And Facebook wouldn't tell them! (Laughs) Because the "People You May Know" feature is also a very powerful tool that makes them an app people want to use, in theory. They also did some follow‑up stories about sex workers being outed because the "People You May Know" feature was recommending their alt accounts to friends who knew them from other parts of their lives. And there were also examples of therapists and mental health professionals having clients recommended to them as Facebook friends, and people who met in, like, you know, 12‑step meetings being recommended to each other as Facebook friends.

So in terms of app versus browser ‑‑ Facebook won't say whether or not information it gathers from mobile devices goes into its "People You May Know" recommendations. But based on examples like this, it seems likely that it plays a role.

So I guess, in terms of control over your data… I think I misunderstood the framing of the question. It's more that it gives you slightly more control over what Facebook does and doesn't know about you. Keeping Facebook from knowing what you're doing with and on your phone is probably not a bad idea.

Did that all make sense, or was that…? I don’t know.

How secure is Facebook Messenger? How secure is Instagram?

BLUNT: No, I think that made sense. Someone was asking about the Facebook Messenger app. I think the same thing sort of applies to that, ’cause it’s all connected. I don’t know if anyone has anything else to say about that.

INGRID: This is the part where I admit that I’m not on Facebook. So, I’m actually terrible at answering a lot of Facebook questions, because I don’t…

BLUNT: Yeah, I ‑‑ I also think, like, Instagram is owned by Facebook, so also having the Instagram app on your phone, I feel like, might also bring up some of the same concerns?

INGRID: Yeah. From what I can tell ‑‑ I mean, I do use Instagram, so I can remember that interface slightly better? As my point of comparison, I had a sort of dummy, lurker Facebook account for some research a while ago. And the difference between its attempts to connect me and suggest follows, versus Instagram's attempts ‑‑ Facebook seemed far more aggressive, and given far less information about me, it was able to draw connections that were very accurate. So, I think the takeaway is: don't trust Instagram, because don't trust Facebook. But in my experience, that kind of connection‑drawing isn't as central to Instagram's business model.

BLUNT: Yeah. And just speaking from my own personal experience: when I have had Facebook or Instagram, I use, like, a tertiary alias and lock the account and don't use a face photo on the app, so that if it does recommend me to a client, they're much less likely to know that it's me. And, like, that has happened ‑‑ on my personal Instagram account.

What is Palantir and why does it matter?

INGRID: Yeah. There are several follow‑ups, but I feel like this question “Can you explain about Palantir?” has been sitting for a while, so I want to make sure it gets answered, and then come back to this a little bit. So ‑‑

OLIVIA: I can explain a little bit about Palantir. So, it’s kind of the devil. We have ‑‑ I think it’s existed for… since, like, 2014? That might be the wrong ‑‑ no, not ‑‑ I think it was 2004, actually!

INGRID: Yeah, no, it’s 2004. I just ‑‑ I accidentally stumbled into a Wall Street Journal article about them from 2009 yesterday, while researching something else, and died a little? It was like, “This company’s great! I can’t see how anything could be problematic.”

BLUNT: It’s 2003, yeah. 17 years.

OLIVIA: It was started by Peter Thiel, who is a really strong Trump supporter and is linked to this really weird, like, anti‑democracy, pro‑monarchy movement in Silicon Valley that’s, like, held by a weird circle of rich people. And they are kind of the pioneers of predictive policing, and have also assisted ICE with tracking down and deporting immigrants. And they actually recently went public ‑‑ hmm?

INGRID: They did? Wow!

OLIVIA: Yeah. They haven’t, like, turned a profit yet, in 17 years. But it was initially funded, if I’m getting this right, I’m pretty sure they were initially funded by like the venture capital arm of the CIA.

INGRID: Okay, they actually, they haven’t gone public yet, but they are planning for an IPO…

OLIVIA: Soon. Is that it?



INGRID: Sorry, just ‑‑ ’cause they ‑‑ so, a thing about this company is that, like, every two years, there is a flurry of press about them maybe doing an IPO, and then they don’t. And… So, I mean ‑‑ and yeah, sorry. So their initial funding partially came from In‑Q‑Tel, which is a venture capital firm run by the CIA that funds companies that make products the CIA might need. Which ‑‑ so… Keyhole, which was a satellite imagery, like, software company, was initially funded by the CIA. And that company was later acquired by Google and became Google Earth. So just an example of the kind of things that they fund. It’s stuff like that.

OLIVIA: Oh, and to clarify, it’s like a datamining company. So they do the same kind of thing that the case study that I showed earlier does. But they have ‑‑ they’re really good at it. And they also create tools and technologies to do more of it.

INGRID: So ‑‑ and they’re kind of a good example of the point made at the beginning of this about surveillance capitalism and state surveillance being closely intertwined. Palantir has corporate and government contracts to do datamining services. I think they were working with Nestle for a while, and Coca‑Cola. They want to be providing as many tools to businesses as they do to ICE. And those, you know, kinds of services are seen as sort of interchangeable. (Laughs)

I mean, the funny thing to me about Palantir, too, is that ‑‑ it’s not like they’re ‑‑ in some ways, I feel like I’m not even sure it’s that they’re the best at what they do? It’s that they’re the best at getting contracts and making people think they’re cool at what they do?

OLIVIA: They market themselves as like a software company, but they really just have a lot of files.

INGRID: They’re kind of ‑‑ somebody in the tech industry once described them to me as, like, the McKinsey of datamining? That’s a firm that works with lots of governments and corporations, does things that seem to just make everything worse, but keeps making money? (Laughs) Is the best way to describe it!

So I think, in terms of, like, explaining about Palantir, like… I guess, they are a high‑profile example of the kind of company that is rampant throughout the tech industry. They’ve had the most cartoonish accoutrements, insofar as, you know, one of their founders is literally a vampire. And ‑‑ you know, they took money from the CIA. And their name comes from a Lord of the Rings, like, magical seeing stone. In some ways, I think that there is… a level ‑‑ like. There have been, like, documented instances of them doing egregious things, such as working with the City of New Orleans Police Department to develop predictive policing tools without an actual contract ‑‑ so without any oversight from the City Council, without any oversight from the Mayor’s Office, basically with the funding for the project coming through a private foundation. But in terms of, like, you personally in your day‑to‑day life, should you worry about this specific company any more than you would worry about a specific state agency? It’s going to depend on your particular risk levels, but… They’re kind of ‑‑ they’re a tool of the state, but not necessarily themselves going to, like, come for people.

OLIVIA: Also ‑‑

INGRID: Oh, literally a vampire? Peter Thiel is one of those people who believes in getting infusions of young people’s blood to stay healthy and young, so. As far as I’m concerned? A vampire.

What are Thorn and Spotlight?

BLUNT: I also just wanted to talk briefly about Thorn and Spotlight, ’cause I think that ‑‑ so, Spotlight is a tool used by Thorn, which I believe Ashton Kutcher is a cofounder of? The mission of Thorn is to, quote, stop human trafficking, and what they do is use their technology to scrape escort ads and create facial recognition databases built off of those ads. And so I think it’s just interesting to think about the relationship between these tools and how they collaborate with the police and with ICE, in a way that could potentially facilitate the deportation of migrant sex workers, as well.

INGRID: Okay. Sorry, let’s ‑‑ (Laughs) Yes. Ashton. Fuck him.

Will deleting the Facebook or Instagram app help?

So, one question here… “Deleting the Facebook or Insta app won’t help, right, because the info on you will have been collected?” Not necessarily. I mean, there won’t be any more data collected? And I think ‑‑ it’s true that it won’t be erased, unless you delete your account. And like, go through the steps to like actually‑actually delete it, because they’ll trick you and be like “Just deactivate it! You can always come back!” Never come back…

But like, yeah. There’s ‑‑ if it’s something that, like ‑‑ you know. As you continue to live your life and go places, although I guess people aren’t going places right now… They won’t have any more material. Yeah.

What data points does Facebook have?

BLUNT: Someone asked: “If you have an existing Facebook account that only has civilian photos and you haven’t touched it for four years, it only has those data points, right?” I think that’s a good follow‑up to the previous question.

INGRID: Yeah, that’s true. Well ‑‑ there’s also people you know who have Facebook accounts? And like, Facebook has this internal mechanism for, like, tracking non‑Facebook users as, like, air quote, like, “users,” or as like entities that they can serve ads to. And generally, it’s based on, like, those people being, like, tagged in other people’s images, even if they don’t have an account, or if they have a dead account. Like, if somebody ‑‑ if you have like a four‑year‑old untouched Facebook account, and somebody tags a contemporary photo of you, like, those data points are connected.

So, you know. Whatever other people do that could connect back to that original account, or whatever ‑‑ yeah. Whatever followers or friends you have on it… Updates that they produce could kind of be new data points about you.

Can fintech platforms connect your pay app accounts to your social accounts?

“In terms of fintech, how easy is it for companies to link your pay app accounts to social accounts?” Blunt, you might have a more comprehensive answer on this than I will.

BLUNT: Okay… Let me take a look. So, I think that there are, like, databases built off of escort ads that then get shared on the back end to sort of facilitate, like, a network‑type effect of de‑platforming sex workers. So a lot of people report ‑‑ and some of the research that we’re doing sort of confirms this ‑‑ that once you experience, like, de‑platforming or shadowbanning on one platform, you’re significantly more likely to experience it or lose access to it on another. So, as like a form of harm reduction and being able to keep access to those financial technologies, I suggest just having like a burner e‑mail account that you only use for that that’s not listed anywhere else publicly, that sounds like vanilla and civilian.

So that way, if they’re, like, running ‑‑ if they’re scraping e‑mail addresses from escort ads and then selling that data to facial ‑‑ to financial technologies, that your e‑mail won’t be in there. It’s just like adding one layer of protection. It might be connected in some other ways, but… just sort of as a form of harm reduction.

I don’t know if that answered… that question.

And I’m looking right now for resources specifically on protecting community members in regards to ICE, centering trans sex workers. I know that… Red Canary Song does some work around this, specifically with massage parlor workers, and I’m looking up the name of the organization of trans Latinx sex workers in Jackson Heights. So I will drop that link so you can follow their work, as well.

And please feel free to drop any other questions in the chat, even if it wasn’t covered. We’ll see if we can answer them, and this is your time. So, feel free to ask away.


Tech Journals or Websites to Follow

INGRID: “What tech update journals or websites do we follow to stay up to date?” Oh! I want to know more about what other people do, too. I tend to, like ‑‑ I tend to follow specific writers, more than specific outlets, partly because… like, there are freelancers who kind of go to different places. But also, I kind of value seeing people who, like, have developed expertise in things. So… Julia Carrie Wong at The Guardian is someone I read a lot. Melissa Gira Grant, at The New Republic. (Laughing) Is not always writing about tech, but is probably one of the smartest and sharpest and most integrity‑filled writers I know.

BLUNT: Definitely one of the first to cover FOSTA‑SESTA, for sure.

INGRID: Yeah. Yeah. And… I’ve been… Let’s see. Motherboard, in general, Jason Koebler and Janus Rose, are very well‑sourced in the industry. So I usually trust things that they cover and find. Caroline Haskins is a young reporter who used to be at Motherboard and is now at BuzzFeed and does excellent work, along with Ryan Mac. And Kashmir Hill, who is now at The New York Times, but has also written for Gizmodo and others. And who else… Davey Alba is also with The New York Times, and is really great. Those are mine.

BLUNT: I sign up for, like, Casey Newton’s daily e‑mail newsletter? And I just read that to stay up to date on certain news, and then research more. I know that the Internet Freedom Festival also has good updates, and I’m also happy to drop links to other weekly mailing lists that I sign up to, as well.

OLIVIA: Oh, I was muted! Oops. I, I listen to a lot of podcasts. And I know the, like, Mozilla IRL podcast was really good, for me, in terms of like learning more about like the internet, and specifically like surveillance infrastructure. And they have a lot of episodes. So if you’re, like, idling, and you have time to listen rather than reading. They also ‑‑ Mozilla also has their transcripts out, which is pretty nice.

Can browser bookmarks be accessed?

INGRID: “Is there any way for bookmarks on my browser to be accessed?” I believe the answer for that is gonna depend on the browser. Because ‑‑ or, it can depend on the browser? So, I think in the case of a browser like Chrome, which is owned by Google, if you are, like, logged into your Google account as part of your browser use, I think all of that information then gets associated with your Google account. So Google will have that data. In terms of access beyond that, I’m not sure.

And then I think for other browsers, I don’t believe that that would be something that’s stored on Firefox. I’m not sure about Microsoft Edge. Olivia, do you have any other thoughts on that one?

OLIVIA: I, I don’t know…

How secure and safe is Zoom?

INGRID: Talking about safety of Zoom! Okay. Yeah. We talked, we talked a little bit about this yesterday, I think. Zoom is, you know, it’s software that was made for, like, workplace calls, and is designed for that setting. Which means ‑‑ (Laughs) In some ways, like, it is inherently a workplace surveillance tool. It is… relatively, like ‑‑ I mean, it’s not secure in the sense that, like ‑‑ I mean, first of all, this call, this is being recorded, and ‑‑ you know, Zoom calls can be broadcast to livestream, like this one is. But also, the, you know, communications ‑‑ like, the actual calls aren’t, you know, end‑to‑end encrypted. Like, they kind of can just be logged onto if they’re public. There can kind of just be some URL hacking. There are, you know, different settings you can make in terms of letting people in or out of meetings. But at the end of the day, also, Zoom has access to the calls! (Laughs) And how much you trust Zoom with that material is, you know, at your discretion.

I… (Sighs) I mean, in general, like… When it comes to, like ‑‑ like, Zoom calls are not where I would discuss, like, sensitive topics, or anything I wouldn’t want to have on the record. And that’s generally just the protocol I take with it. And I think ‑‑ I mean, that being said, like, yeah. There are… it is in such common use at this point, in terms of like spaces for events, like this one! That I won’t, like, kind of outright boycott it, simply because it’s become such a ubiquitous tool. But I think compartmentalizing what I do and don’t use it for has been helpful.

BLUNT: And so if you’re interested in staying up to date with EARN IT, I would suggest following Hacking//Hustling and Kate Villadano. I can drop their handle. And also, on July 21st, we’ll be hosting with our legal team… a legal seminar, sort of similar to this with space to answer questions, and we’re providing more information as to where EARN IT is and how you can sort of plug in on that.

Is there government regulation surrounding data tracking?

INGRID: “Is there government regulation of data tracking, or not so much?” Not so much! Yes! In the United States, there’s very little regulation.

So, the reason ‑‑ or, one of the reasons that if you use a VPN and set it to a place like Switzerland and use it, you get a lot more information about what tracking is happening and you can make requests for data from platforms, is because of a European Union regulation called GDPR, the General Data Protection Regulation. And, yeah. The United States does not have an equivalent to that. In some ways, like, because the European Union is such a large market, I have seen some companies kind of just unilaterally become GDPR‑compliant for, like, all users, simply because it’s easier than having, like, a GDPR version and an “other places” version. But, you know, when it comes to Facebook or… like, Instagram, or like large platforms ‑‑ they don’t really have an incentive to create conditions where they collect less data. So I think there, it’s kind of like, well, sorry. It’s gonna be that way. (Laughs) Yeah.

And it’s not as though ‑‑ and I think it is a thing that, like, lawmakers have interest in? But I think part of the challenge is… both, like, these companies are, you know, very well‑funded and will, like, seek to ‑‑ and will like lobby against further regulation? And also like a lot of people in Congress are… old! And bad at computer? And… don’t necessarily have ‑‑ sometimes have trouble, I think, wrapping their heads around some of the concepts underlying this. And, you know, and are not necessarily ‑‑ and like, I think the general atmosphere and attitude around, like, free markets solve problems! Kind of further undermines pursuit of regulation.

What exactly contributes to shadowbanning?

“In terms of social media accounts following your activity, based on your research so far for shadowbanning et cetera, who do you follow and… to certain lists?” I think, Blunt, this question might be for you, because of the research.

BLUNT: Yeah. I think it’s less about who you follow, and more about who you interact with. So, like, we’re still analyzing this research, but there seems to be a trend that, like, if ‑‑ if you’re shadowbanned, the people that you know are more likely to be shadowbanned, and there might be some relationship between the way that you interact and the platform? But we’re still figuring that out? But just like one thing you can try and ‑‑ we talked about this in another one, but having a backup account for promo tweets, so your primary account with the most followers doesn’t exhibit quote‑unquote “bot‑like activity” of automated tweets. And just having, like, casual conversation about cheese, or nature…

(Laughs) We’re not totally sure how it works.

Oh, and also! I believe her name is Amber ‑‑ I’m going to drop the link to it. But someone is doing a training on shadowbanning, and it seems like she’s collecting data on, like, multiple accounts. And it seems like there’s some interesting things to say. So I’m going to go grab a link to that. If you’re interested in learning more on shadowbanning, that’s on the 25th, at like 5:00 p.m. I think. So I’ll drop a link.

And just for the livestream: So, this is with Amberly Rothfield, and it’s called Shadowbanning: Algorithms Explained, on July 25th at 6:00 p.m. Still a few spots left. And it looks like she was pulling data on different accounts and playing with the accounts and analyzing the posts’ interaction and traction. So that should be really interesting, too.

Cool. Thank you so much for all of the amazing questions. I think we’ll give it a few more minutes for any other questions that folks have, or any other resources we can maybe connect people with, and then we’ll log off!

Can you request your information from datamining companies?

BLUNT: Oh. Someone’s asking, can I request my information from datamining companies?

OLIVIA: Yes, you can! Yes, you can. And a lot of them… Let me see if I can find a link? ‘Cause a lot of them have, like, forms, either on their website or available where you can make requests like that. You can request to see it, and I’m pretty sure you can also ‑‑ I know you can request that they stop collecting it and that they get rid of your file. But I think you can also request to see it.

BLUNT: I also just dropped a link to this. This is an old tool that I’m not sure if it still works, but it’s Helen Nissenbaum’s AdNauseam, which clicks every single advertisement, as well as tracking and organizing all of the ads targeted at you. It’s really overwhelming to see. I remember looking at it once, and I could track when I was taking what class in school, when I was going through a breakup, just based on my searches and my targeted ads.

Cool. So, is there anything else you want to say before we log off?

INGRID: I mean, thank you all so much for participating. Thank you, again, Cory, for doing the transcription. And… Yeah! Yeah, this has been really great.
