
Are Past Biases Dominating Future Tools? Exploring How AI Reinforces Discrimination

  • Writer: Mirna Hamdan
  • 7 min read

Reliance on Artificial Intelligence (AI) technologies is increasing as their capacity to analyze large datasets accelerates. Governments, corporations, and individuals worldwide are harnessing these technologies to transform nearly every sector of our lives. Yet this rapid advance carries real risks. One of the most urgent challenges in deploying AI tools is the danger of exacerbating societal biases, especially when models are trained on data that already reflects entrenched inequalities and discrimination (Sakubu, 2025).


Recent studies have shown an increasing number of incidents where AI displays systemic inequalities in its decision-making processes.


The Berkeley Haas Center for Equity, Gender, and Leadership analyzed 133 biased AI systems and found that roughly 44% exhibited gender bias, while around 25% displayed both gender and racial bias (Smith & Rustagi, 2021).

This bias is particularly noticeable in the employment sector, where AI-powered hiring systems are used to streamline the recruitment process. Resume screening algorithms often replicate historical discrimination embedded in workforce data and may reject candidates from marginalized racial and ethnic groups due to hidden biases (Köchling et al., 2025). The use of such technologies in criminal justice has proven even more controversial because of their capacity to entrench prejudice and perpetuate racial bias (Kumar, 2024).


Collage with mirrors reflecting diverse human figures, symbolising AI data's human origin and the 'human in the loop' concept. Anne Fehres and Luke Conroy & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

One widely used software program for criminal risk assessment is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), whose developer rebranded as “Equivant” in 2017 (Dressel & Farid, 2018). Although COMPAS does not rely on advanced machine learning, it represents an early form of algorithmic bias in risk assessment (Washington, 2018). The software has been used to assess offenders since 1998 (Dressel & Farid, 2018). It draws on 137 personal characteristics and a person's prior criminal history to predict a defendant's risk of committing a misdemeanor within two years of assessment (Dressel & Farid, 2018). Its predictive accuracy was slightly higher for white defendants than for Black defendants.


An analysis of defendants assessed in 2013 and 2014 showed that Black and Hispanic defendants, for example, had a higher chance of being labeled high-risk than white defendants (Dressel & Farid, 2018).
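The distinction between overall accuracy and error rates matters here: two groups can receive equally "accurate" predictions while one group absorbs far more false alarms. The short sketch below uses entirely made-up records (not COMPAS data; the group labels and counts are illustrative assumptions) to show how equal accuracy can coexist with starkly unequal false-positive rates:

```python
from collections import defaultdict

# Hypothetical, hand-made records: (group, actually_reoffended, predicted_high_risk).
# Constructed so both groups get the same accuracy but very different error types.
RECORDS = (
    [("A", 0, 1)] * 3 + [("A", 0, 0)] * 3 + [("A", 1, 1)] * 4
    + [("B", 0, 0)] * 6 + [("B", 1, 1)] * 1 + [("B", 1, 0)] * 3
)

def group_metrics(records):
    """Per-group accuracy and false-positive rate (FPR)."""
    by_group = defaultdict(list)
    for group, actual, pred in records:
        by_group[group].append((actual, pred))
    metrics = {}
    for group, pairs in by_group.items():
        correct = sum(1 for a, p in pairs if a == p)
        negatives = [(a, p) for a, p in pairs if a == 0]
        false_pos = sum(1 for a, p in negatives if p == 1)
        metrics[group] = {
            "accuracy": correct / len(pairs),          # share of correct labels
            "fpr": false_pos / len(negatives),         # non-reoffenders flagged high-risk
        }
    return metrics

metrics = group_metrics(RECORDS)
# Both groups: accuracy 0.70 — yet group A's FPR is 0.5 and group B's is 0.0,
# meaning half of group A's non-reoffenders were wrongly flagged as high-risk.
```

A single headline accuracy figure would report these two groups as treated identically; only the disaggregated error rates expose the disparity.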

Christopher Gatlin was among the victims of modern AI bias, spending 17 months in prison after an AI program flagged him as a suspect. In January 2025, the Washington Post’s Post Reports podcast covered his case. Guest host Doug MacMillan, a business and tech investigations reporter, detailed Gatlin’s experience as one of at least eight people in the United States wrongfully arrested due to facial recognition technology (MacMillan, 2025).


So far, AI remains a tool shaped by historical biases, and as people rely on it as a primary information source, it starts to reproduce discrimination, fuel divisions, and threaten human rights and freedoms. Therefore, regulating AI is becoming an urgent international concern.


Algorithms rely on digital data to make decisions; however, these datasets often underrepresent certain racial and ethnic groups (Ashwini, 2024). As a result, some answers provided to users can be biased and misleading. Additionally, if the training data is insufficient to produce accurate answers, algorithms may inadvertently generate discriminatory predictions (Ashwini, 2024). The use of flawed data to inform real-life decisions can cause further harm to marginalized racial groups. When that data is used within AI tools, it generates new data that is later used to shape future decisions (Ashwini, 2024).
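This feedback loop can be sketched in a few lines. The toy model below is a deliberately naive hypothetical rule, not any real hiring system: it approves candidates whenever their group's historical hire rate clears a threshold, then feeds its own decisions back into the record, so the original skew widens with every round:

```python
# Skewed "historical" data: group X was mostly hired, group Y mostly rejected.
# Groups, rates, and thresholds here are illustrative assumptions only.
def hire_rate(history, group):
    decisions = [hired for g, hired in history if g == group]
    return sum(decisions) / len(decisions)

def run_round(history, applicants, threshold=0.5):
    """One hiring round: decide from past rates, append decisions as new data."""
    new_history = list(history)
    for group in applicants:
        hired = 1 if hire_rate(history, group) >= threshold else 0
        new_history.append((group, hired))
    return new_history

history = [("X", 1)] * 8 + [("X", 0)] * 2 + [("Y", 1)] * 2 + [("Y", 0)] * 8
for _ in range(3):  # three rounds, each one trained on the previous round's output
    history = run_round(history, ["X"] * 5 + ["Y"] * 5)

# Group Y's hire rate only falls as the loop runs (0.20 → 0.13 → 0.10 → 0.08),
# while group X's rises: the model's own output hardens the historical bias.
```

Real systems are far more complex, but the structure is the same: decisions made on biased data become the data for the next round of decisions.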


Major tech and AI companies remain largely dominated by white Western ownership, amplifying colonial-era hierarchies that continue to shape science, technology, education, and the economy (Kwet, 2019). To understand these various forms of inequality, it is essential to examine colonial narratives and their influence on technology.


Digital colonialism is taking place today: a new form of domination and exploitation of populations through data control, mirroring historical patterns of colonial exploitation.

It is characterized by the concentration of resources and power in Western tech companies that seek to control software, hardware, and internet access in order to dominate digital spaces in other countries (Kwet, 2019). This amounts to a contemporary form of imperial control that enables large corporations to expand their global influence through surveillance systems (Kwet, 2019).


A highly-contrasted digital landscape of a train going West with settlements in the bottom left. The image has 5 boxes symbolising computer vision, with text that reads "settler", “beckoning mountains”, “the future awaits”, and “indigenous population”, which is non-existent. Hanna Barakat & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

In South Africa, partnerships between the government and tech companies have created long-term dependence on foreign technologies. The Operation Phakisa Education initiative, for example, encouraged reliance on companies like Microsoft and Google, which frequently provided free internet access and learning tools. However, critics argued that these companies weakened local systems by controlling data and communication (Kwet, 2019).


The so-called “free” digital services offered by big Western companies build dependencies that hinder local innovation and growth while reinforcing unequal power dynamics (Kwet, 2017). Today’s racial inequalities are subtly reinforced through technology because designers overlook systemic racism. AI systems can replicate social oppression and uphold a system of unequal treatment through social norms and rules. Some scholars argue that ethical frameworks should evolve to fit modern technologies while protecting values like transparency and non-maleficence (Mohamed et al., 2020).


Nonetheless, algorithmic oppression persists as tech developers neglect to implement these principles in real-world settings. Ethical AI frameworks often exclude underrepresented voices and fail to account for colonial power dynamics (Mohamed et al., 2020). 

The rapid deployment of AI across various fields is revealing gaps in regulatory frameworks. Although regulatory instruments such as the EU AI Act, alongside the core international human rights instruments, invoke values like dignity and equity, their scope remains limited in addressing algorithmic harm to marginalized groups. At the same time, there is a further gap in determining who sets the rules and whose voices oversee the development of international norms (Rodrigues, 2020). The dominance of Western actors in AI governance creates a power imbalance, leading to frameworks that primarily represent the interests of economically powerful communities (Mohamed et al., 2020).


It has become clear that there is a lack of algorithmic transparency; users cannot understand why and how AI tools generate answers or what sources they rely on. Sometimes, even developers cannot justify why some individuals are denied jobs, for example. The pressure is increasing to design and regulate AI to be accountable, fair, and transparent (Rodrigues, 2020).


As power continues to shape AI rules, individuals must ask who benefits from these systems and who is left out, or sometimes directly harmed, by underrepresentation and discrimination. A decolonial approach can encourage users to investigate how knowledge is produced, who produces it, and what grants it legitimacy (Mohamed et al., 2020).

When users begin to question the power dynamics embedded within AI, they recognize the importance of pluralism and come to value diverse methods of governance and thought while investigating answers. A decolonial thought process may not offer definitive solutions for eliminating AI bias, but it can keep users from falling into its trap and reproducing discrimination. Aiming for structural decolonization, achieved by dismantling colonial mechanisms of power, economics, culture, and thinking, opens space to rethink fairness, inclusion, and responsibility in global technological governance (Mohamed et al., 2020).


Bibliography

  1. Ashwini, K. P. (2024). Contemporary forms of racism, racial discrimination, xenophobia and related intolerance: Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance (A/HRC/56/68). United Nations Human Rights Council. https://tandis.odihr.pl/bitstream/20.500.12389/23109/1/23109_ENG.pdf

  2. Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://www.science.org/doi/full/10.1126/sciadv.aao5580

  3. Köchling, A., Wehner, M. C., & Ruhle, S. A. (2025). This (AI)n’t fair? Employee reactions to artificial intelligence (AI) in career development systems. Review of Managerial Science, 19(4), 1195–1228. https://link.springer.com/article/10.1007/s11846-024-00789-3

  4. Kumar, V. (2024). Legal and ethical impact of AI in criminal justice: An analytical study. International Journal of Novel Research and Development, 9(8), 552–561. https://ijnrd.org/papers/IJNRD2408261.pdf

  5. Kwet, M. (2017). Operation Phakisa Education: Why a secret? Mass surveillance, inequality, and race in South Africa's emerging national e-education system. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3672408

  6. Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3–26. https://journals.sagepub.com/doi/abs/10.1177/0306396818823172

  7. MacMillan, D. (Guest host). (2025, January 14). Arrested by AI [Audio podcast episode]. In Post Reports. The Washington Post. https://www.washingtonpost.com/podcasts/post-reports/arrested-by-ai/

  8. Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. https://link.springer.com/content/pdf/10.1007/s13347-020-00405-8.pdf

  9. Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005. https://www.sciencedirect.com/science/article/pii/S2666659620300056

  10. Sakubu, D. (2025). Challenges of Artificial Intelligence today and future implications for society and the world. World Journal of Advanced Research and Reviews. https://journalwjarr.com/node/1347

  11. Smith, G., & Rustagi, I. (2021). When good algorithms go sexist: Why and how to advance AI gender equity. Stanford Social Innovation Review, 1–8. https://ssir.org/articles/entry/when_good_algorithms_go_sexist_why_and_how_to_advance_ai_gender_equity

  12. Washington, A. L. (2018). How to argue with an algorithm: Lessons from the COMPAS-ProPublica debate. Colorado Technology Law Journal, 17, 131. https://heinonline.org/HOL/LandingPage?handle=hein.journals/jtelhtel17&div=9&id=&page=


Disclaimer

The opinions expressed herein belong solely to the columnist and do not represent the official position of our think-tank. Humanotions cannot be held liable for any consequences arising from this content. Content published on Humanotions may contain links to third-party sources. Humanotions is not responsible for the content of these external links. Please refer to our Legal Notices & Policies page for legal details and our Guidelines For Republishing page for republication terms.
