Through training, machines acquire the ability to mimic human decision-making and problem-solving skills. An AI system learns from the data fed to it by the people who designed it. If that training data is skewed, the AI can display biases that get amplified over time. For example, the data and models selected can carry traces of racial bias, gender bias, and age discrimination, causing the AI to make decisions that are unfair to a particular group. I share some examples of AI bias below:
Facial Recognition
In a study at MIT, a researcher found that the algorithms powering commercially available facial recognition systems failed to recognize darker skin complexions.
Another study, by Georgia Tech, revealed that AI-powered self-driving cars struggle to detect darker-skinned pedestrians, putting their lives at risk.
Criminal Justice
The COMPAS algorithm used by judges was found to be biased against African-Americans. Based on a defendant's records, the algorithm assigns a risk score for the probability of committing a future offense. African-American defendants received longer detention periods than white defendants with a similar probability of reoffending.
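One common way such a bias is detected is by comparing error rates across groups. The following Python sketch (pandas assumed; the column names, cutoff, and data are hypothetical, not COMPAS's actual output) computes the false positive rate per group, i.e. how often people who did not reoffend were flagged as high risk:

```python
# Minimal audit sketch (hypothetical columns, cutoff, and data): compare
# false positive rates across groups for a binary "high risk" flag.
import pandas as pd

# Hypothetical records: the risk score assigned, whether the person
# actually reoffended, and the group they belong to.
df = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "A", "B"],
    "risk_score": [8, 3, 7, 9, 2, 6],
    "reoffended": [0, 0, 0, 0, 1, 1],
})

# Assumed cutoff: scores of 5 or more count as "high risk".
df["flagged_high_risk"] = df["risk_score"] >= 5

# False positive rate: flagged as high risk among those who did NOT reoffend.
did_not_reoffend = df[df["reoffended"] == 0]
fpr = did_not_reoffend.groupby("group")["flagged_high_risk"].mean()
print(fpr)  # a large gap between groups signals disparate impact
```

A large gap in false positive rates between groups is exactly the kind of disparity investigators reported in the COMPAS case.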
While these examples are not exhaustive, they show that such biases aren't just theoretical but a real-world issue.
Causes of Bias
There are two main reasons for this:
Personal bias introduced by the data gatherer - Training data is sourced by people like us, and knowingly or unknowingly, we tend to inject our internal biases. These biases are then projected into the data used to build machine learning models.
Environmental bias introduced, intentionally or unintentionally, in the process of data gathering - Environmental bias can arise from sourcing data locally for a system that is meant to serve users globally, leaving the AI system without enough representative training data. A simple representation check is sketched after this list.
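As a minimal illustration of the representativeness problem, the following Python sketch (pandas assumed; the group names, data, and target shares are hypothetical) compares each group's share in the training data against the share expected in the population the system is meant to serve:

```python
# Representation check sketch (hypothetical column, data, and targets):
# flag groups that are under-represented in the training data relative
# to the population the system will actually serve.
import pandas as pd

# Hypothetical training data with a demographic attribute per record.
train = pd.DataFrame({"region": ["US", "US", "US", "EU", "US", "APAC"]})

# Assumed shares of the system's intended user population.
expected = {"US": 0.40, "EU": 0.35, "APAC": 0.25}

observed = train["region"].value_counts(normalize=True)
for group, target in expected.items():
    share = observed.get(group, 0.0)
    if share < 0.5 * target:  # arbitrary threshold for illustration
        print(f"{group}: under-represented ({share:.0%} vs expected {target:.0%})")
```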
Solution - Mitigating Bias
Bias can creep into both data sets and algorithms. Taking steps to address AI bias is crucial, as it has significant personal and societal consequences. Some useful strategies are:
Determine your unique vulnerabilities and identify potential sites of bias in your AI system. Assess the risk and plan mitigation efforts.
Evaluate and control your dataset. Traditional controls are probably not sufficient. Pay particular attention when working with historical data and/or data acquired from a third party.
Continuously manage and monitor the AI system, since it keeps learning over time; a minimal monitoring sketch follows this list.
Diversify your team. We sometimes inject our own biases and cannot detect our own blind spots; a diverse team is more likely to notice biases that others would overlook.
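As a concrete illustration of the monitoring point above, here is a Python sketch (pandas assumed; the field names, data, and alert threshold are hypothetical) that recomputes a simple fairness metric, the gap in positive-prediction rates between groups, on each new batch of logged predictions and flags drift for human review:

```python
# Monitoring sketch (hypothetical fields, data, and threshold): recompute
# a simple fairness metric per batch of predictions and flag drift.
import pandas as pd

def positive_rate_gap(batch: pd.DataFrame) -> float:
    """Largest difference in positive-prediction rates between groups."""
    rates = batch.groupby("group")["prediction"].mean()
    return float(rates.max() - rates.min())

# Hypothetical batch of logged predictions (1 = favorable outcome).
batch = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "B", "A"],
    "prediction": [1, 1, 0, 1, 0, 1],
})

gap = positive_rate_gap(batch)
if gap > 0.2:  # arbitrary alert threshold for illustration
    print(f"Fairness alert: positive-rate gap {gap:.2f} exceeds threshold")
```

In practice, the metric and threshold would be chosen with domain experts, and an alert would trigger human review rather than an automatic fix.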
Since AI is going to be a major force in our lives, including in education, it is important that these systems earn the trust of their users. The first step towards earning trust is acknowledging algorithmic bias and working to rectify it responsibly. Part of this responsible approach includes establishing transparent governance and control mechanisms, diversifying teams, and continual monitoring.
I have produced a short podcast to discuss AI and ethical decision making. You can access it below.
A study by an education software provider found a predictive algorithm biased against Guamanian students because the original data used to train the model included very few Guamanian students. Such systematic underestimation of these children can negatively influence administrative and educator decisions.
Algorithmic bias in higher education is showing up in similar ways. In 2020, the University of Texas at Austin stopped using a machine-learning system to evaluate applicants to its computer science Ph.D. program. The program's algorithm was found to draw on past admission decisions, using patterns from those decisions to calculate scores for candidates, which critics contended reduced opportunities for students from diverse backgrounds and exacerbated existing inequalities in the field.
Concerns around accountability and ethics arise as institutions are increasingly turning to AI for recruitment and student assessment.
When human decision makers make mistakes, it is easy to assign blame, but how can we make machine algorithms accountable? Given the lack of transparency in AI, with companies not sharing their proprietary data sets, discriminatory practices may continue for a long time before they are spotted and challenged.
We are only beginning to think about algorithmic accountability. As recently as 2019, the Algorithmic Accountability Act was introduced in the US House of Representatives, where it is still under discussion. The law would require businesses to conduct internal reviews of their automated decision-making processes to uncover any bias, unfairness, or discrimination in an algorithm's output.
Meanwhile, higher education institutions must protect their students' personal information and take steps to prevent unintended bias caused by algorithms, according to experts in the field. In Algorithms, Diversity, and Privacy: Better Data Practices to Create Greater Student Equity, Rosa Calabrese, manager of digital design at WCET – WICHE Cooperative for Educational Technologies, suggests that humans should be included in the decision-making process alongside AI-powered algorithms and that institutions should audit algorithms for bias on a regular basis.
Another way to minimize algorithmic bias is to diversify data sets and to diversify the group of people involved in creating the algorithms. Developers and designers should not always design for a typical, or prototypical, student, as such an approach can perpetuate bias against non-typical learners.
One example of such bias was reported earlier this year by the technology news site The Markup, which investigated the advising software Navigate, widely used by public universities in the USA. It found that certain student populations were deemed 'high risk', and the 'Major Explorer' function suggested that administrators push those students toward another major where their risk score was lower. Such algorithmic decision-making can leave advisors with a life-changing impression of students and their chances of success within a given major, and it can open the door to misuse of algorithm-driven recommendations or to steering educational priorities in a certain direction.
I reflect on algorithmic bias and fairness in my podcast. I used Audacity to record the podcast, the YouTube audio library for copyright-free background music, and Camtasia to combine the two pieces of audio.