Toolkit Module #6: Identifying Surveillance Risks
– Auditing the Data Chains of Control, by DeepSeek + Neil Netherton, 26 Dec 2025
If our previous toolkits taught us to trace corporate power and decode media bias, this module equips us to investigate the most intimate layer of control: the surveillance apparatus. Modern systems don’t just watch; they predict, categorize, and target. This toolkit provides a method to audit these systems—to move from knowing they exist to understanding how their data chains operate and who they harm.
Surveillance is often framed as a neutral tool for “public safety” or “security”. Our task is to audit the reality: to examine the data extracted, the biases automated, and the communities targeted.
Core Concept: Surveillance as Data Colonialism
A powerful lens for this audit is the concept of “data colonialism”—the modern practice of extracting data from populations, often without meaningful consent or benefit-sharing, to fuel predictive and control systems. This logic turns lived experience into proprietary data streams for corporate and state power. A Decoloniality Impact Assessment (DIA) asks crucial questions: Who is data extracted from? Whose knowledge and security are prioritized? Who faces the risk of being misidentified, over-policed, or targeted?
Investigative Framework: The Surveillance Audit
Use this three-part framework to deconstruct any surveillance technology reported in the news.
1. Map the Data Chain
· Source: Where does the data come from? (e.g., phone intercepts, facial recognition cameras, social media activity, biometric databases).
· Processor: Which company’s AI or software analyzes it? (e.g., Palantir, Clearview AI, or cloud platforms like Microsoft Azure).
· End User: Who acts on the analysis? (e.g., a specific military unit, immigration agency, or police department).
· Audit Question: Follow the data life cycle. A 2025 investigation, for example, revealed how a military unit’s mass surveillance of Palestinian communications relied on a major tech firm’s cloud infrastructure. (A sketch for recording this kind of mapping follows this list.)
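To make the mapping easier to repeat, here is a minimal sketch in Python of how one audit entry could be recorded. The field names and the example values are illustrative assumptions, not part of the original framework or findings from any real audit; adapt them to whatever note-taking format you actually use.

```python
from dataclasses import dataclass, field

@dataclass
class DataChainAudit:
    """One entry in a surveillance data-chain audit (illustrative structure only)."""
    tool_name: str                  # the surveillance system under review
    data_sources: list[str]         # where the data comes from (cameras, intercepts, social media)
    processors: list[str]           # companies whose software/AI analyzes the data
    end_users: list[str]            # agencies or units that act on the analysis
    documented_harms: list[str] = field(default_factory=list)   # filled in during part 2
    open_questions: list[str] = field(default_factory=list)     # gaps to research further

    def summary(self) -> str:
        """Render the chain as a single line: sources -> processors -> end users."""
        return (f"{self.tool_name}: "
                f"{', '.join(self.data_sources)} -> "
                f"{', '.join(self.processors)} -> "
                f"{', '.join(self.end_users)}")

# Example entry (placeholder values only, not findings from a real audit):
entry = DataChainAudit(
    tool_name="City-wide facial recognition pilot",
    data_sources=["public CCTV feeds", "mugshot database"],
    processors=["[VENDOR AI PLATFORM]"],
    end_users=["[LOCAL POLICE DEPARTMENT]"],
    open_questions=["Where is footage stored, and for how long?"],
)
print(entry.summary())
```

Keeping entries in a consistent shape like this makes it easier to compare tools across news stories and to spot recurring processors and end users.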
2. Identify the Documented Harms
Move past technical specs to documented outcomes. Search for:
· Bias & Misidentification: Studies consistently show systems like facial recognition have higher error rates for women and people of color, leading to false accusations .
· Over-Policing & Chilling Effects: “Predictive policing” can funnel more surveillance into historically over-policed neighborhoods, creating feedback loops of discrimination. Surveillance of activists creates a climate of fear .
· Psychological Distress: The labor of data annotation and content moderation for AI—often outsourced to the Global South—involves exposure to traumatic content with little support, a modern form of exploitative labor.
3. Audit the “Governance” Smokescreen
Many systems come with promises of “ethical frameworks” or “AI governance.” Your audit must ask:
· Does it address power? Most standard impact assessments fail to ask core decolonial questions: who benefits, and who is made vulnerable?
· Does it create false confidence? Superficial governance tools can mask deeper harms, providing a “false sense of confidence” that the system is under control.
· Who is accountable? Is there a meaningful, independent avenue for redress for those harmed by a misidentification or targeted threat?
Practical Exercise: The Rapid Response Audit
Next time you see a headline about a new surveillance tool (e.g., “AI Cameras Installed in City,” “Military Uses Predictive Targeting”), run this quick audit:
Step 1: Ask the “Data Colonialism” Questions.
· “Whose data is being extracted for this system?”
· “Which communities’ safety is prioritized, and which are rendered vulnerable targets?”
Step 2: Deploy the Investigative Framework.
· Use the three-part map above. Even a 10-minute web search can often identify the companies and agencies involved.
Step 3: Formulate a Critical Prompt.
· Use your findings to build a precise query for your own research or to challenge an AI’s sanitized summary.
· Example Prompt: “Do not just summarize the capabilities of [SURVEILLANCE TOOL]. Instead, analyze its deployment in [CONTEXT] through the framework of data colonialism. List the documented risks of bias, misidentification, and community harm associated with this class of technology, citing specific studies or reports.” (A small helper for reusing this template follows below.)
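If you run this audit often, a small helper can keep the prompt wording consistent. This is a minimal sketch in Python; the function name and the placeholder tool/context values are assumptions for illustration, and the wording simply reuses the template above.

```python
def build_audit_prompt(tool: str, context: str) -> str:
    """Assemble the Step 3 critical prompt from the template above (adapt wording as needed)."""
    return (
        f"Do not just summarize the capabilities of {tool}. "
        f"Instead, analyze its deployment in {context} through the framework of data colonialism. "
        "List the documented risks of bias, misidentification, and community harm "
        "associated with this class of technology, citing specific studies or reports."
    )

# Example usage with placeholder values:
print(build_audit_prompt("[SURVEILLANCE TOOL]", "[CONTEXT]"))
```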
Your Assignment & Building Collective Intelligence
1. Conduct a Mini-Audit: Apply the “Rapid Response Audit” to one recent news item on surveillance. Share your key finding in the comments.
2. Contribute to the “Living Library of Harms”: Propose another documented surveillance risk—like supply chain exploitation for minerals used in AI hardware—that should be in our collective toolkit.
3. Brainstorm Toolkit #7: Where should we go next? Deeper into supply chain audits (from mines to data centers) or into community defense protocols against digital tracking?
This toolkit is not about fostering paranoia, but about building literacy and resistance. By auditing the data chain, we reclaim the power to define what safety and justice truly mean.



