Watched and Not Seen: Tech, Power, and Dehumanization

We cannot talk about the rise of surveillance tech without acknowledging those who have always been surveilled


We are all being watched in the era of surveillance capitalism. Over the last decade, major technology corporations—Facebook, Google, Uber, just to name a few—have created a modern “privacy dystopia.” Data capturing intimate details of our lives, our behaviours, preferences, and interests, are extracted from websites and apps, enabling companies to personalize the advertisements we see, draw inferences about people’s sexual encounters, and even, in some cases, control our emotions. Plans for a high-tech “smart city” are underway in Toronto—a massive urban planning project led by Google sister company Sidewalk Labs that’s been called a “colonizing experiment in surveillance capitalism.”

This dystopia is, without a doubt, cause for concern. I have experienced my own share of panic with each realization that Google might know every location I’ve visited; that my Uber drivers might have a record of my apartment address; or that a selfie uploaded to the internet might, actually, be somewhere online forever. It’s more than simply creepy. Each time, it feels like I’ve lost a little more control and autonomy over my digital self.

But while I know these concerns are valid, I recognize they also run the risk of being ahistorical. Many narratives—including my own—that centre the threats surveillance technologies pose to the “everyday citizen” gloss over the significant differences in how being watched manifests as a function of social position and privilege. Communities at the margins have always been watched. Discussions about the rise of surveillance technology must acknowledge the histories, and ongoing surveillance, of those who are racialized, poor, migrants, incarcerated, queer, women. We cannot talk about surveillance without talking about power.


Racialized surveillance

In the US, monitoring in the form of biometric technologies—such as fingerprint scans or facial recognition—has been used by Immigration and Customs Enforcement (ICE) to identify and deport immigrants since the Obama administration. Under Trump, additional approaches, such as the use of back-end Facebook data and access to a national license-plate recognition database, have been added to the suite of tools used to track those who are undocumented. Advanced surveillance technologies are also being employed to police Black and brown communities across the country. Groups like the Stop LAPD Spying Coalition are fighting back against what they’ve called the “stalker state,” organizing in Los Angeles against police and state-sponsored use of drones, cellphone stingray technology, and video monitoring in racialized neighbourhoods. In New York City, CryptoHarlem provides a space for Black residents to learn digital security strategies to help protect themselves against police surveillance technologies.

This kind of police surveillance exists in Canada, too. In 2016, the Toronto police surveilled the social media accounts of Black Lives Matter protestors who mobilized in response to news that the officer who killed Andrew Loku would not face criminal charges. Then, in an attempt to address gun violence in the city this past July, Toronto City Council voted to invest $4 million in a piece of surveillance technology called ShotSpotter. ShotSpotter notifies police of gunshots through a microphone network that acoustically triangulates gunshot locations, and is already used in dozens of major American cities. Across police departments, up to 70% of ShotSpotter’s gunshot alerts have been false alarms—police often arrive to find nothing. This is troubling given that the technology is often installed in low-income and racialized neighbourhoods, suggesting it increases police presence in already over-policed communities. The Canadian Civil Liberties Association has expressed concerns to the mayor—cautioning that this intervention could be an “unconstitutional sucker punch to racialized communities of Toronto.”


Privacy as privilege

For many poor and racialized people, surveillance extends beyond policing and immigration enforcement and into the fabric of day-to-day life. In Automating Inequality, Virginia Eubanks discusses how most social services and forms of social assistance require clients to share their personal data. Those marginalized by society are more likely to access such public services, and therefore give up greater amounts of their data to the state. In this way, privacy is often not an option. Eubanks walks through a number of case studies showing how high-tech, data-driven government tools only serve to profile the poor, writing:

“Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighbourhoods, enter the healthcare system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny.”


Our Data Bodies is a community-based research project looking at data-based surveillance and discrimination in the cities of Charlotte, Detroit, and Los Angeles. The group is interviewing low-income, racialized, homeless, and formerly incarcerated community members about their experiences with government agency data collection processes. These conversations have highlighted the ways in which data systems discriminate against and follow individuals as a function of racial background and class. They also illustrate the dehumanizing, emotional, and psychological impacts of being reduced to a “decontextualized data body” from which government agencies make decisions about people’s lives. The research team intends to co-create a “Popular Guide to Data and Discrimination” from this work.

Analogous forms of dataveillance are happening in Canada. In Ontario, people on social assistance are “watched” by the government on a number of levels, under the guise of reducing welfare fraud. Database technologies track detailed information on the lives of welfare recipients, linking data across different government agencies to flag potential signs of fraud. This kind of invasive monitoring is unwarranted, especially considering historical data indicating Ontario welfare fraud rates are under 1 percent. In her research on welfare surveillance, Krystle Maki highlights how single mothers are significantly overrepresented among those receiving social assistance in Ontario, and argues that contemporary discourse around surveillance must include an analysis of both class and gender.

Maki considers the gendered impact of welfare fraud hotlines, which exist in many places, including Ontario. Single mothers suspected of cohabiting with a partner are often reported on these hotlines for violating “spouse in the house” rules. In this way, poor women are particularly susceptible to having their personal relationships monitored both by their neighbours and by the state. Surveillance discourse that incorporates a class and gender analysis could also help us better understand the different ways in which women perceive surveillance technologies, and the extent to which they experience them as invasive. Research indicates that women’s experiences with CCTV video surveillance vary widely as a function of race, class, and occupation. The study, conducted by researchers from Ryerson and York University, indicated that women who were racialized, low-income, or sex workers were more likely to experience the surveillance as “invasive,” “inappropriate,” and related to “power and inequality,” rather than as a mechanism for safety. For example, one low-income research participant shared that “they’re not going to stop a guy in a suit and tie,” but if “they see me… I’ll go on the ground.” Another racialized woman in the study commented on the “dehumanizing aspect” of video surveillance: “It’s not there for our protection.”


If you see something, say something

The aforementioned welfare hotlines are just one example of lateral surveillance: surveillance in which community members are encouraged to watch one another. Perhaps the most popular examples of state-sponsored lateral surveillance are the “If you see something, say something” campaigns—initially launched by the New York Transit Authority as an “anti-terrorism” campaign after 9/11, encouraging citizens to report suspicious activity. They have since been adopted by other government bodies, such as the US Department of Homeland Security. Rather than actually creating safe communities, these campaigns merely encourage anti-Blackness and Islamophobia, with many white people using race and religious attire as indices for “suspicion.” In spite of this, the Toronto Transit Commission ran its own “If you see something, say something” campaign a few years ago, which has since evolved into the SafeTTC app.

The SafeTTC smartphone app encourages people on public transit to report safety concerns and suspicious activity, and even to take photos of such activities. Concern has been raised that SafeTTC could exacerbate racial profiling in Toronto, but of course, such digital platforms for participatory racial profiling are not new. In the USA, Nextdoor is an online social network meant to connect people living in the same neighbourhood, marketed as a way for neighbours to do everything from “keeping an eye out for a lost dog” to “quickly getting the word out about a break-in.” The platform has become notorious for providing another space for online racism, with white residents frequently using its “Crime and Safety” section to report Black people who are simply visiting friends.


Technology, power, and histories of surveillance

At the end of the day, we can think of new technologies that enable monitoring of marginalized communities as merely higher-tech iterations of old traditions. In her book Dark Matters, Simone Browne traces current anti-Black surveillance technologies to colonial practices dating back 300 years, to the era of slavery. Browne writes about lantern laws that required Black, Indigenous, and mixed-race enslaved people walking unaccompanied at night to carry a lit lantern, ensuring that they could be properly watched, monitored, and controlled by white people. Lantern laws have been framed as an antecedent to the “omnipresence” floodlights recently installed by the NYPD in public housing areas, a new surveillance-based policing tactic targeting parts of New York City that primarily house Black and Latinx residents.

Viewing surveillance technologies through a framework of history and power helps us broaden mainstream concepts of what qualifies as surveillance. It asks us to interrogate the capabilities of new devices and platforms—even those that may not have been originally designed with surveillance in mind. For example, it could help us contextualize how smart home devices like Amazon’s Alexa have provided a new avenue for spying on, and controlling, romantic partners—and the gendered nature of technology-enabled intimate partner violence. It can help us situate the fact that Grindr was sharing data on the HIV status and location of its users with other companies, and reflect on whether this kind of tracking could be linked to histories of queer surveillance. It brings to mind the growing use of “e-carceration” practices: the GPS ankle monitoring of migrants and offenders as an “alternative” to detention centres and prisons. It might help us speculate about why Microsoft’s commercial facial recognition technology had notoriously poor performance for people who weren’t white, but conveniently improved its performance “across all skin tones” shortly after its contract with ICE—which involved developing technologies for the “identification of immigrants”—was announced.

The above scenarios are far from exhaustive, but the take-home is clear: we cannot talk about surveillance technology—whether it be by the state, corporations, community, or family—without talking about power.

Image by Micah Bazant for Justseeds

Watched and not seen

A few years ago, a dear friend sent me a link to an illustration. I can’t remember the reason why she shared it, but the text on the graphic stayed with me. It read: I don’t watch my neighbours. I see them. Years later, I googled the phrase and was brought back to the original image: a poster created by Micah Bazant for the Night Out for Safety & Democracy in 2013. Organized by the Louisiana-based alliance Justice for Families, the evening was a direct response to the “National Night Out,” an event which encouraged residents to be the “eyes and ears” of the police, surveilling their neighbours to create “safer communities.” As is often the case, surveillance was confused with safety—which, of course, raises the question: safety, for whom?

I think the distinction between being seen and being watched is powerful because it articulates a nuanced, yet enormous dichotomy—one that ultimately boils down to who is granted permission to be perceived as human. We like to talk about the importance of visibility and representation for folks at the margins. I, myself, often write about feeling seen. There is an ongoing narrative that being visible might lead to being understood; that it will convince others that people are deserving of humanity. But for many, increased visibility does not lead to safety—only heightened surveillance and experiences of violence. We must interrogate how power shapes one’s gaze; how it can transform the act of seeing into the act of watching.

In mainstream discussions about the rise of surveillance technology, it’s easy to get lost in the broad consequences, the ones that affect “everyone’s” data and privacy. While these concerns are real and important, our conversations cannot be ahistorical, and must return to a fundamental question that communities at the margins have always asked: How can we build worlds where those at the margins can be safe, can be perceived as human, can be seen as their full selves?