From approving loans to predicting crime and even deciding who gets a job interview, AI systems are now a part of daily decision-making. Although these algorithms are regarded as neutral and effective tools for making informed decisions, we sometimes overlook the fact that their outcomes are only as reliable as the data they are trained on – and that data can carry inherent biases.
One striking example lies in hiring practices. With thousands of applications for a single position, it’s nearly impossible for human recruiters to sift through them all quickly. AI promises speed, consistency, and a data-driven approach to finding the best candidates. Tools like applicant tracking systems (ATS) and AI-powered resume screening software claim to make the process more efficient by matching the key skills a recruiter lays out for a role against applicants’ capabilities.
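To make that concrete, here is a rough sketch of what such a matching step can boil down to. Everything in it is invented for illustration: the required skill list, the resume snippets, and the scoring function. Real ATS products are considerably more sophisticated, but the underlying idea of scoring keyword overlap is similar.

```python
# Illustrative keyword-matching screener. The skills, resumes, and scoring
# below are hypothetical; real screening tools are more sophisticated.
required_skills = {"python", "sql", "machine learning", "communication"}

resumes = {
    "candidate_a": "Built machine learning pipelines in Python; strong SQL and communication skills.",
    "candidate_b": "Experienced retail manager with excellent communication and scheduling skills.",
}

def score(resume_text: str, skills: set[str]) -> float:
    """Fraction of required skills mentioned verbatim in the resume."""
    text = resume_text.lower()
    return sum(skill in text for skill in skills) / len(skills)

# Rank candidates by keyword overlap alone - the kind of shortcut that feels
# objective but inherits whatever patterns the skill list and data encode.
ranked = sorted(resumes, key=lambda name: score(resumes[name], required_skills), reverse=True)
for name in ranked:
    print(name, round(score(resumes[name], required_skills), 2))
```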
These systems are increasingly being used because they streamline and expedite the initial stages of hiring, but they are far from perfect. Algorithms don’t “think” like humans do; they analyse patterns in data. And when that data reflects historical biases – whether against women, minorities, or people from lower socioeconomic backgrounds – AI unintentionally amplifies them.
Is bias truly embedded in the data?
One of the most infamous examples of AI bias in hiring comes from Amazon. In 2018, it was reported that the company had scrapped an experimental AI recruiting tool after it became clear that the system was biased against women.
The tool was trained on ten years of the company’s hiring data, most of which came from men, reflecting the male dominance of the tech industry. From those patterns it learned to favour male candidates’ resumes, reportedly penalising resumes that included the word “women’s”. While the system didn’t explicitly discriminate, it reproduced the historical inequalities embedded in the data it was fed.
This is not an isolated incident.
In the US, ProPublica’s investigation of the COMPAS risk-assessment tool found that AI used in the criminal justice system was biased against defendants of colour, unfairly labelling them as high-risk more often than white defendants. The bias wasn’t inherent to the AI; it came from data that simply mirrored societal inequalities.
The feedback loop of bias
Why does this happen?
At its core, AI learns by analysing huge amounts of data to identify patterns and make predictions. Through techniques like machine learning, it refines its understanding over time, basing decisions on the information it has been trained on. When that training data reflects biases, knowingly or unknowingly introduced, the AI treats those patterns as “normal” – and they shape its future outputs, which in turn generate more biased data, creating a feedback loop.
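A toy simulation makes the loop visible. Everything below is made up: the groups, the skewed historical hire rates, and the stand-in “model” that simply learns each group’s past approval rate. Still, it shows how a system trained on skewed decisions keeps reproducing them once its own outputs become the next round of training data.

```python
import random

random.seed(0)

# Hypothetical feedback-loop sketch: historical decisions favoured group "A".
# The "model" just learns each group's past hire rate and applies it to new,
# equally able applicants; its decisions are then added back to the history.
history = (
    [{"group": "A", "hired": 1} for _ in range(70)] +
    [{"group": "A", "hired": 0} for _ in range(30)] +
    [{"group": "B", "hired": 1} for _ in range(30)] +
    [{"group": "B", "hired": 0} for _ in range(70)]
)

def train(records):
    """'Model' = observed hire rate per group, standing in for pattern learning."""
    rates = {}
    for g in {r["group"] for r in records}:
        group = [r for r in records if r["group"] == g]
        rates[g] = sum(r["hired"] for r in group) / len(group)
    return rates

for generation in range(3):
    model = train(history)
    print(f"generation {generation}: learned hire rates {model}")
    # New applicants are equally capable, but the model's learned rates decide,
    # and those decisions become tomorrow's "ground truth" training data.
    for _ in range(100):
        group = random.choice(["A", "B"])
        hired = 1 if random.random() < model[group] else 0
        history.append({"group": group, "hired": hired})
```

Run across generations, the gap between the two groups persists even though nothing about the new applicants justifies it; the skew in the original data sustains itself.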
Bias in AI can also come from incomplete datasets, algorithm design (such as prioritising profitability over social impact), misinterpretation of context, and human influence during development.

Building ethical AI
The benefits and possibilities of this technology are too significant to ignore, which is why we need safeguards that prevent bias while still allowing us to make the most of what AI has to offer.
The key lies in developing ethical AI systems that prioritise fairness, transparency, and accountability at every stage of their creation and implementation.
Here are some ways it can be done:
- AI systems must be trained on datasets that accurately reflect the diversity of the real world, building inclusivity in from the start.
- AI developers and stakeholders should receive training in ethics and fairness. It’s important for them to consider the societal impact of their work, even if that means collaborating with non-technical experts who specialise in ethics.
- Regular testing and bias audits need to be carried out to identify potential biases and refine AI algorithms (a simple audit sketch follows this list).
- Human oversight is necessary for key decision-making and for flagging biases. We need to remember that AI complements human decision-making; it doesn’t replace it entirely (although many would disagree here).
- Transparency needs to be encouraged to empower end-users to understand and question AI systems. Organisations deploying AI must share how their systems operate and the data and reasoning behind their decisions, for better accountability.
- And lastly, but most importantly, we need governments and policymakers to step up and recognise the urgent need for ethical AI and regulation. For example, the European Union’s Artificial Intelligence Act (AI Act) was officially adopted and entered into force on August 1, 2024. This legislation establishes a comprehensive regulatory framework for AI within the EU, aiming to ensure that AI systems are developed and deployed in ways that are safe, transparent, and respectful of fundamental rights.
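As a sketch of what the bias audit mentioned above might involve, the snippet below uses entirely invented screening records: it compares the rate at which each group passes an automated screen and flags any group whose rate falls below four-fifths of the highest (a common rule of thumb in US employment-discrimination guidance).

```python
from collections import defaultdict

# Minimal bias-audit sketch with invented screening outcomes: compare how often
# each group passes an automated resume screen and flag large gaps.
screening_results = [
    {"group": "men",   "passed": True},
    {"group": "men",   "passed": True},
    {"group": "men",   "passed": False},
    {"group": "women", "passed": True},
    {"group": "women", "passed": False},
    {"group": "women", "passed": False},
]

def selection_rates(records):
    """Share of applicants in each group that passed the screen."""
    passed, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        passed[r["group"]] += int(r["passed"])
    return {g: passed[g] / total[g] for g in total}

rates = selection_rates(screening_results)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group:<6} selection rate {rate:.2f}  ratio vs highest {ratio:.2f}  {flag}")
```

In practice an audit would look at far more than selection rates, but even this simple check makes a skewed screen visible and measurable.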
These measures, along with many other solutions, must be adopted to address biases in AI systems and move toward a more equitable and transparent future.
Move towards responsible AI
Bias in AI isn’t just a technical issue; it’s a societal one that forces us to confront the uncomfortable truth of inequality. AI holds the potential to improve our lives – from smarter education systems to environmental solutions and health advancements – but for that potential to be realised, we must address its flaws.
The question isn’t whether AI will shape our future – it already is.
The real question is whether we’ll let the feedback loop entrench bias, or demand systems that reflect the values of fairness and inclusivity. The stakes are too high to settle for anything less, wouldn’t you agree?