Digital Justice: AI as a Tool for Amplifying Silenced Voices in Trafficking Research
Jarrett Davis, MA & Wendy Stiver, RN, CCM, BSN, MA | September 17 | 1:45-2:45 pm EDT
Topic: Conceptual, Research | Knowledge Level: Intermediate
The integration of artificial intelligence (AI) into anti-trafficking efforts presents both promising opportunities and significant ethical challenges. This presentation examines the dual potential of AI in qualitative research on human trafficking, particularly how these technologies affect marginalized communities. Drawing on interdisciplinary perspectives, including Indigenous data sovereignty, critical data studies, and survivor-centered methodologies, the presenters analyze how AI systems can both amplify and silence vulnerable voices. Their research reveals a troubling paradox: while AI tools are developed to protect vulnerable populations, they often reproduce what the presenters term “algorithmic epistemicide,” the systematic erasure of non-Western knowledge systems through biased data practices. The presenters will introduce their framework for equity-centered, AI-assisted qualitative research, built on the Mi’kmaw concept of “Two-Eyed Seeing” (Etuaptmumk), which integrates Western technological approaches with Indigenous knowledge frameworks through four critical components: 1) integrating epistemological frameworks rather than prioritizing one over another; 2) centering survivor voices throughout the research process; 3) balancing efficiency with relationality; and 4) establishing collaborative research governance that distributes decision-making authority. The presentation emphasizes practical implementation considerations, including technological infrastructure, approaches to preserving participant agency, and documentation practices for transparency and accountability. The presenters conclude with a call to action for implementing the CARE principles (Collective Benefit, Authority to Control, Responsibility, Ethics) in AI-assisted trafficking research, providing attendees with key questions to ask when evaluating AI systems and concrete examples of how these principles can be operationalized in research practice.
Presentation Objectives:
• Analyze the ethical implications of AI technologies in qualitative trafficking research through the lens of what Linda Tuhiwai Smith calls "decolonizing methodologies"
• Present a framework for evaluating and designing AI systems based on the "Two-Eyed Seeing" approach that centers survivor agency and Indigenous knowledge systems
• Examine case studies demonstrating both the protective applications of AI in anti-trafficking work and the exploitative adaptations by trafficking networks
• Provide practical guidelines for implementing participatory data governance models that engage marginalized communities as co-designers rather than subjects of technological interventions