Voices, Geography, and Environment: Responsible AI is Incomplete Without Global Justice

Reviewing how intersectional approaches can move AI from performative policy to transformative practice by restoring the essence of feminism: holistic equity at scale, rooted in lived realities rather than rhetoric.

AI is transforming how we work, make decisions, and build societal systems. Yet as digital transformation accelerates, foundational issues of justice, equity, and inclusion remain unresolved, or are even amplified by the very technologies meant to improve our world. Today’s innovation requires a deep reckoning with the historic and ongoing inequities baked into our technologies.

Key Takeaways

  • AI deployment is amplifying historic inequities across demographics, regions, and environments.
  • Responsible AI policy must address geo-environmental harms and global inclusion, along with bias.
  • Inclusive, participatory governance is essential for sustainable and equitable outcomes.
  • Centralized structures reinforce exclusion, and the rapid advancement of AI has only magnified these challenges.

These dynamics are rooted in the history of technology, where access and decision-making have been scoped to a limited group of actors, introducing dense power clusters and new blind spots in how institutions function. From early mainframes to the internet revolution, unequal access and authorship created a “digital divide” that excluded rural, Global South, and low-income communities, among others. Tech R&D prioritized wealthy nations, reinforcing persistent gaps in access, adoption, and opportunity. Early computers and software often ignored accessibility, and exclusion persisted even as standards improved.

AI repeats these patterns. Socioeconomic and geographic concerns have been voiced, and then silenced by AI’s exclusionary go-to-market strategies. No AI research is complete without considering the racial and ethnic exclusion documented in Joy Buolamwini and Timnit Gebru’s “Gender Shades” study.1 They showed that commercial facial analysis systems have higher error rates on darker-skinned and female faces due to biased and underrepresented training data.
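To make the finding concrete, the kind of disaggregated evaluation that “Gender Shades” popularized can be sketched in a few lines: instead of reporting one aggregate error rate, performance is broken out per demographic subgroup. This is a minimal illustrative sketch; the field names (group, label, prediction) and the sample numbers are assumptions, not data from the study itself.

```python
# Minimal sketch of disaggregated evaluation: error rates per demographic
# subgroup rather than a single aggregate figure. Field names and sample
# numbers below are hypothetical, chosen only to illustrate the idea.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of dicts with 'group', 'label', 'prediction' keys."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical sample: the aggregate error rate (~12.5%) hides a fourfold
# disparity between the two subgroups.
sample = (
    [{"group": "lighter-skinned men", "label": 1, "prediction": 1}] * 95
    + [{"group": "lighter-skinned men", "label": 1, "prediction": 0}] * 5
    + [{"group": "darker-skinned women", "label": 1, "prediction": 1}] * 80
    + [{"group": "darker-skinned women", "label": 1, "prediction": 0}] * 20
)
print(error_rate_by_group(sample))
# {'lighter-skinned men': 0.05, 'darker-skinned women': 0.2}
```

Aggregated, the sample above reads as a roughly 12.5% error rate; disaggregated, it exposes a fourfold gap between subgroups, which is exactly the kind of disparity the study surfaced.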

Historic sociopolitical prejudices are encoded into early and current data choices: from redlining in financial algorithms, to English-centric translation engines, to sexist voice technologies. Seemingly “neutral” technologies continue to replicate racism, sexism, ableism, and other systemic harms across search engines, social media platforms, and algorithmic governance.

These choices have lasting ripple effects: gendered language models, facial recognition systems misidentifying people of color, automated systems reinforcing disability barriers, and AI-driven policy tools exacerbating digital and social divides. Why do these gaps persist even with clear evidence?

Research Crossroads

AI systems are drawing on massive datasets and opaque, complex models that continue to scale up old biases to new levels of reach and speed. When ethical or social analysis is ignored, long-critiqued patterns in science and technology repeat. But research is where change begins. By centering feminist and other marginalized perspectives, AI research gains critical tools for questioning assumptions, rethinking methodologies, and democratizing both problems and solutions.

Industry must take responsibility and actively support efforts to address the global digital divide. This divide is evident as AI-powered health and education solutions are largely designed for high-income, urban populations, leaving rural communities in the Global South with minimal access, support, or representation in system design. The Global South faces severe climate change impacts, often as a direct result of decisions made by Global North superpowers. Bridging this divide requires policies that prioritize local languages, infrastructure development, and cross-border data collaboration.

Ethical and justice-oriented AI research requires more than post-hoc audits or aspirational pledges; even proactive approaches do not improve AI if they remain exclusive and inequitable. Addressing systemic inequity means accounting not only for demographic exclusions but also for the geographic and environmental dimensions embedded in processes of technological invention and deployment.

Why Feminist and Marginalized Perspectives Matter

A feminist approach to AI challenges not just explicit bias, but also the dynamics of power, exclusion, and harm in technical systems. Feminism in tech strives for:

  • Equity: Actively countering both gendered and intersectional gaps in data, design, and deployment.
  • Justice: Prioritizing freedom, agency, and reparative outcomes for people historically erased, sidelined, or harmed by technology.
  • Transparency: Empowering communities and users to understand—and contest—the impacts of AI on their lives.

Incorporating perspectives from racialized, disabled, neurodiverse, LGBTQ+, and economically marginalized communities exposes structural failings and pushes the field toward true inclusion. These voices highlight how intersectional barriers, including poverty, racism, migration status, and language, shape real experiences with technology and AI.

A feminist approach is fundamentally equitable and inclusive, going beyond demographics to encompass geographic and environmental factors. Responsible AI truly earns its name when it begins to include those most impacted by injustice. Feminist AI ethics insist on power analysis, transparency, and reparative practices. Such frameworks enable us to examine how power and exclusion operate in every research decision, from question framing, to data sourcing, to impact evaluation.

Actionable Insights and Field Notes

The evidence demonstrates that equity, environmental sustainability, and justice are not optional; they are fundamental requirements. These principles must be embedded within research design, methodological rigor, and policy engagement from inception, not retrofitted after harms emerge.

Centering the examination of bias, of which geographies are prioritized, and of climate change is critical to producing socially and ecologically just AI systems. The lived experiences and knowledge of those most vulnerable to exclusion, displacement, and harm must be recognized as essential to the production of legitimate, trustworthy technological research. AI’s future must reject inherited inequalities and include all communities.

Researchers, practitioners, policymakers, and technologists are urged to:

  • Proactively center excluded voices and geographies in every stage of AI design, research, and implementation.
  • Demand transparency, accountability, and environmental responsibility from all actors in the AI ecosystem.
  • Build frameworks that treat equity and sustainability as essential, not peripheral, to technological progress.

Recent global regulatory developments, such as the EU AI Act’s entry into force, China’s AI Plus initiative, and Canada’s risk-based approach, make explicit the urgency of embedding transparency in AI governance. However, these frameworks largely originate in the Global North, and significant gaps remain in representation, accessibility, and enforcement.

Innovation at scale has no room for Kool-Aid. Notions that “AI solves all problems” or that “governance provides clarity and control” must be discarded. Governance is necessary but often amounts to damage control; real progress means proactive ethical and equitable action, led by inclusive policymaking.

Lasting solutions must bridge geographic and resource divides by including marginalized perspectives in international policy processes, to avoid compounding the disproportionate risks of exclusion, surveillance, algorithmic bias, and climate change. Only by making such commitments actionable can the promise of AI become a truly shared legacy, not another chapter in the history of exclusion.


References & Recommended Reading

  1. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT 2018), PMLR 81:77-91. [Link]
  2. Trewin, S. (2018). AI fairness for people with disabilities: Point of view. arXiv Preprint, arXiv:1811.10670. [Link]
  3. AI Regulations in 2025: US, EU, UK, Japan, China and More, Anecdotes. [Link]
  4. Sparkco. (2025, October 21). AI Governance: Global Standards and Trends 2025. Sparkco Blog. [Link]
  5. Buza, M., Vaithyanathan, A., & Giardini, T. (2025). Global Digital Policy Roundup: August 2025. Tech Policy Press. [Link]
  6. Sinders, C. Feminist Data Set. [Link]
  7. Cookson, T., Berryhill, A., & Kelleher, D. (2021). Moving From Diagnosis to Change. Feminist AI. [Link]
  8. Mandal, A. (2021). The Algorithmic Origins of Bias. Feminist AI. [Link]
