“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” —Stephen Hawking.
We live in a world where algorithmic decision-making is pervasive. AI can help decide who gets into college, who is offered a job interview, who receives a home loan, and more. Since AI learns from its training data, which can be skewed and biased, its decisions can be problematic. Lately, controversy over AI has highlighted the current lack of oversight, regulation, and formal auditing processes for algorithms.
Debates surrounding AI usually center on its similarities to and differences from human intelligence (HI). Debates concerning trustworthiness, explainability, and ethics suggest that AI should seek to achieve human-like characteristics to realize its full potential. Because HI is multi-faceted, despite its limitations, artificial intelligence cannot outdo HI wholly; in some narrow domains, however, AI will be more efficient. I have summarized the fundamental differences below:
Human intelligence adapts to new environments by using a mixture of distinct cognitive processes; artificial intelligence, on the other hand, mimics human behaviour and carries out human-like actions.
The simple difference is that human beings use their brains and cognitive processes, which are influenced by reason, emotion, intuition, memory and imagination, while AI machines depend on the data given to them.
Human intelligence learns from past mistakes, and this learning is tempered by attitudes, beliefs and values. Artificial intelligence learns not from its past but from the data and feedback it receives. This uniquely human process of thinking, learning and development cannot be matched by machines.
Artificial intelligence takes much more time to adjust to changes, whereas human beings adapt easily, which allows people to learn and master a wide range of abilities.
However, machines excel at handling enormous amounts of data at a far greater speed than humans. Humans cannot beat the speed of computers.
Artificial intelligence has yet to master social and cultural codes. Humans are uniquely endowed with self-awareness, are attuned to others' emotions, beliefs and values, and can respond to them flexibly.
As an educator, I am familiar with Howard Gardner's theory of multiple intelligences. I have prepared a short infographic based on Gardner's book "Frames of Mind: The Theory of Multiple Intelligences" (1983). From this, we can see that AI is as yet incapable of developing the full range of human intelligences, although it can perform much better within a narrow range. For example, IBM's Watson won the quiz show Jeopardy! in 2011.
I have produced a quick timeline to represent the development of AI, beginning from 1950 when Alan Turing proposed a test for machine intelligence. The premise was that if a machine can trick humans into thinking it is human, then it has intelligence.
After preparing this infographic, I discovered a timeline that captures some key milestones of the last decade.
Global adoption of technology in education is transforming the way we teach and learn, in addition to shaping enrollment decisions. Artificial intelligence is reconfiguring the interaction between learners and teachers, and this technology is going to revolutionize teaching and learning in some distinct ways:
1) Personalized learning: AI can use student performance and interaction data to build a personalized schedule and diet of learning that caters to the specific needs of each student. Learning becomes less regimented, more flexible and tailored to students' needs.
2) Smart content production: Digital lessons, bite-sized content, visualization, simulation, and other new ways of perceiving information will be developed. Learners today are already consuming and interacting with learning content in innovative ways, and AI technology is going to accelerate that.
3) Automation of administrative tasks: Smart AI assistants will simplify time-consuming tasks such as grading, assessing and providing feedback.
Using AI will clearly transform learning experiences by removing some traditional limitations and introducing new possibilities. However, it is important to be mindful of the inner workings of artificial intelligence and the quality of the solutions AI claims to offer.
_____________________________________________
Artificial Intelligence is no longer a distant utopia. A lot has happened since John McCarthy coined the term in 1956.
What was once the realm of science fiction is now a reality – smart assistants, chatbots, smart home devices, self-driving cars, and other intelligent systems have become ubiquitous and continue to shape and change our lives.
A range of industries, from health care to finance and e-commerce to transportation, have embraced AI systems. But one field that was initially hesitant to adopt AI is likely to see disruptive changes because of it: teaching and learning.
Lately, educational institutions have been experimenting with AI technology, and there is a consensus that AI is essential to spearhead changes for the future of learning. Let’s discuss four distinct areas of change that will be driven by AI.
1) Personalized learning:
We all know that everybody learns differently based upon their personal interests, preferences, motivation, and needs. Traditional teaching methods, with their one-size-fits-all approach, cannot account for the diverse abilities, needs, interests, and preferences of all learners, so not all learners reach their full potential under traditional educational approaches. Personalized learning enabled by AI can narrow the achievement gap by providing personalized learning experiences.
Specifically, adaptive learning platforms can create learning profiles for students based on their abilities, preferences, learning styles, and performance data. This information is then used to adapt content and learning style to students’ needs and to provide targeted, timely feedback on progress.
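To make this concrete, here is a deliberately simplified sketch of how an adaptive platform might choose a student's next activity from a learning profile. The profile fields, thresholds, and rule are illustrative assumptions of mine, not any real product's logic.

```python
# Hypothetical sketch: pick the next learning activity from a profile.
# Field names and score thresholds are illustrative assumptions only.
def next_activity(profile):
    score = profile["last_score"]  # fraction correct on the last topic, 0.0-1.0
    if score < 0.5:
        return ("review", profile["topic"])        # reteach the basics
    if score < 0.8:
        return ("practice", profile["topic"])      # targeted practice
    return ("advance", profile["next_topic"])      # ready to move on

student = {"topic": "fractions", "next_topic": "decimals", "last_score": 0.65}
print(next_activity(student))  # ('practice', 'fractions')
```

A real platform would, of course, learn these decision rules from performance data rather than hard-code them, but the shape of the loop is the same: profile in, tailored activity out.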
2) Assisting with routine administrative tasks
Educators spend a huge amount of their time carrying out routine administrative tasks, many of which can be automated using AI. AI can do much of the heavy lifting for teachers by grading assignments, organizing paperwork, and managing communications with students.
Automated grading can save teachers hours and provide feedback for students, while AI can organize and retrieve paperwork efficiently. Furthermore, AI solutions can help with a range of administrative duties, such as processing student application forms and HR management. This will result in lower costs and increased administrative efficiency.
3) Education without boundaries
Traditional brick-and-mortar education has several limitations and requires students and teachers to be present at a predetermined time and place. In contrast, AI-enabled learning can help make high-quality instruction easily accessible to everyone, including those who cannot attend in person.
AI can make flexible learning possible: students can learn anytime, anywhere, and at a pace that suits them.
4) Creation of digital content
AI is slowly changing the way students interact with and consume learning content. Digital textbooks, simulation, and virtual reality are redefining how students engage with learning material. At the same time, students receive timely feedback on their progress, and educators can use this information to make critical decisions to support their learners.
What does all this mean for teachers?
It is obvious that AI will be handling some of the tasks previously performed by teachers and administrators, but this technology is unlikely to make teachers redundant. Instead, AI will continue to develop into a smart, efficient assistant that enables educators to work more effectively and meet their students’ needs better. There is only so much that AI can do in a highly complex social activity such as teaching. No machine can connect with students, guide them and inspire them the way teachers do.
Final thought: Using AI will transform learning experiences by removing some traditional limitations and introducing new possibilities. However, it is important to be mindful of the inner workings of artificial intelligence and the quality of the solutions AI claims to offer.
The emergence of AI, deep learning and machine learning cannot be separated from the emergence of learning analytics as a field within education. Many AI-enabled personalized learning, intelligent tutoring and adaptive learning systems depend on learning analytics. I discuss learning analytics and its use in education in the following podcast:
Creativity may be one of the ultimate frontiers for artificial intelligence. AI has mimicked the styles of great painters and writers and has assisted in making informed creative decisions in photography, filmmaking and design. AI has achieved this feat through 'generative models', meaning AI learns to mimic the huge amounts of data it is trained on.
This sort of generative model has some obvious limitations. If we define creativity as producing something that is original and unexpected but has value, it is doubtful that mimicking huge amounts of training data leads to creative output.
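A toy example makes this limitation concrete. The bigram Markov chain below is a minimal stand-in for a 'generative model' (far simpler than the neural networks used in practice, and purely illustrative): it learns word-to-word transitions from its training text and can only ever recombine patterns it has already seen.

```python
import random

# A toy 'generative model': learn which word follows which in the
# training text, then generate by sampling those learned transitions.
def train_bigrams(text):
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:       # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every consecutive word pair in the output already occurs somewhere in the training text; the model can shuffle and recombine, but it cannot produce a transition it never saw. That is mimicry, which is exactly the gap between novelty-by-recombination and genuine creative invention.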
John Smith, Manager of Multimedia and Vision at IBM Research, notes, “It’s easy for AI to come up with something novel just randomly. But it’s very hard to come up with something that is novel and unexpected and useful.”
Experts wonder whether AI can, without guidance, produce something original and understand what is beautiful and of value. Certainly some parameters for creativity can be taught, but it remains to be seen whether AI can develop its own sense of creativity.
Nevertheless, knowledge workers and creative professionals have already begun to benefit from AI developments. For example, Adobe Photoshop uses AI features in its programs, and film-making software identifies patterns and sequences to generate new content. AI is already proving to be a very useful assistant in the creative domains, but it may never replace human creativity. Human creativity is going to be central to success in any human endeavor and will provide a competitive advantage, because creativity is not likely to be accomplished through AI anytime soon.
UC Berkeley AI researcher Michael Jordan believes we should speak of intelligence augmentation (IA) rather than AI. He believes that more can be achieved when we create computerized tools that complement human intelligence rather than attempting to replicate or replace it (Michael Jordan, cited in Browne, J., Make, Think, Imagine: Engineering the Future of Civilisation, Bloomsbury, 2019). Google’s AutoDraw is a case in point that illustrates the principle of IA: after we provide it with a rudimentary sketch, it gives us a (usually) more detailed drawing.
Through training, machines acquire the AI capability to mimic human decision-making and problem-solving skills. AI learns from the data fed to it by the people who designed it. The training data can be skewed, so the AI can display biases that get amplified over time. For example, the data models selected can carry traces of racial bias, gender bias, and age discrimination, causing AI to make decisions that are unfair to particular groups. I share some examples of AI biases below:
Facial Recognition
In a study at MIT, a researcher found that the algorithms powering commercially available facial recognition systems performed poorly on darker skin tones.
Another study, by Georgia Tech, revealed that AI-powered self-driving cars struggle to detect pedestrians with darker skin tones, putting their lives at risk.
Criminal Justice
The COMPAS algorithm used by judges was found to be biased against African-Americans. The algorithm assigns each defendant a risk score for the probability of committing a future offense, based on their records. White defendants with a similar probability of future offenses received lower risk scores, and hence shorter detention periods, than African-American defendants.
These examples are not exhaustive, but they show that biases aren’t just theoretical; they are a real-world issue.
Causes of Bias
There are two main reasons for this:
Personal bias of the data gatherer - Training data is sourced by people like us, and, knowingly or unknowingly, we tend to embed our internal biases. These are then projected into the data used to build machine-learning models.
Environmental bias introduced, intentionally or unintentionally, in the process of data gathering - Environmental bias can arise from sourcing data locally for a system that is meant to serve users globally, so the AI system is not trained on sufficiently representative data.
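A small numerical sketch shows how this kind of sampling bias plays out. The data here is entirely made up, and the 'model' is deliberately trivial (it just predicts the most common label), but the pattern is real: a system trained on a locally sourced, unrepresentative sample can look accurate overall while failing the under-represented group.

```python
from collections import Counter

def train_majority(labels):
    # Trivial 'model': always predict the most common training label.
    return Counter(labels).most_common(1)[0][0]

def accuracy(predicted_label, true_labels):
    return sum(l == predicted_label for l in true_labels) / len(true_labels)

# Made-up ground truth: group A is mostly label 1, group B mostly label 0.
group_a = [1] * 90 + [0] * 10
group_b = [0] * 90 + [1] * 10

# 'Local sourcing': 95% of the training sample comes from group A.
training = group_a * 19 + group_b
model = train_majority(training)     # learns to predict 1

print(accuracy(model, group_a))      # 0.9 -- looks fine on group A
print(accuracy(model, group_b))      # 0.1 -- fails group B badly
```

Averaged over the skewed sample, the model appears to perform well, which is exactly why an unrepresentative dataset can hide a serious failure until the system meets the population it was never trained on.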
Solution - Mitigating bias
Bias can creep into data sets and algorithms. Taking steps to address AI bias is crucial, as it has huge personal and societal consequences. Some useful strategies are:
Determine your unique vulnerabilities and identify potential sites of bias in your AI system. Calculate the risk and plan mitigation efforts.
Evaluate and control your dataset. Traditional control processes are probably not sufficient. Pay particular attention when working with historical data and/or data acquired from a third party.
Continuously manage and monitor the AI system, since it learns continuously.
Because of personal bias, we sometimes inject our own biases into systems and fail to detect our own blind spots. Diversifying the team can help surface biases that some members would otherwise overlook.
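One concrete way to act on the monitoring strategy above is a periodic audit of decision rates across groups. The sketch below uses hypothetical decision data and one common fairness check (the gap in 'selection rates' between groups, sometimes called the demographic parity difference); real audits would use several metrics, and the threshold for concern is a policy choice, not something I can prescribe here.

```python
# Minimal bias-audit sketch over hypothetical approval decisions.
# 1 = positive decision (e.g. approved), 0 = negative decision.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # 30% approved
}
gap, rates = parity_gap(decisions)
print(rates)
print(gap)  # a gap this large (~0.5) would flag the system for review
```

Run on a schedule against live decisions, a check like this turns "continuously monitor the AI system" from a slogan into a measurable routine: when the gap drifts past an agreed threshold, the team investigates.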
Since AI is going to be a major force in our lives, education included, it is important that these systems earn the trust of their users. The first step towards earning trust is acknowledging algorithmic bias and working to rectify it responsibly. Part of this responsible approach includes establishing transparent governance and control mechanisms, diversifying teams, and continual monitoring.
I have produced a short podcast to discuss AI and ethical decision making. You can access it below.
A study by an education software solution provider found that a predictive algorithm was biased against Guamanian students because the original data used to train it contained very few Guamanian students. Such systematic underestimation of those children can negatively influence administrative and educator decisions.
Algorithmic bias in higher education shows up in similar ways. In 2020, the University of Texas at Austin stopped using a machine-learning system to evaluate applicants to its computer science Ph.D. program. It was found that the program’s database drew on past admission decisions, and the system used patterns from those decisions to calculate scores for candidates, which critics contended reduced opportunities for students from diverse backgrounds and exacerbated existing inequalities in the field.
Concerns around accountability and ethics arise as institutions are increasingly turning to AI for recruitment and student assessment.
When human decision makers make mistakes, it is easy to assign blame, but how can we hold machine algorithms accountable? Given AI’s lack of transparency, with companies not sharing their proprietary data sets, discriminatory practices may continue for longer before they are spotted and challenged.
We are only beginning to think about algorithmic accountability. The Algorithmic Accountability Act was introduced in the US House of Representatives only in 2019 and is still under discussion. The law would require businesses to conduct internal reviews of their automated decision-making processes to uncover any bias, unfairness, or discrimination in an algorithm’s output.
Meanwhile, according to experts in the field, higher education institutions must protect their students' personal information and take steps to prevent unintended algorithmic bias. In Algorithms, Diversity, and Privacy: Better Data Practices to Create Greater Student Equity, Rosa Calabrese, manager of digital design at WCET – WICHE Cooperative for Educational Technologies, suggests that humans should be included in the decision-making process alongside AI-powered algorithms and that institutions should audit their algorithms for bias on a regular basis.
Another way to minimize algorithmic bias is to diversify data sets and the group of people involved in creating the algorithms. Developers and designers should not always design for a typical, or prototypical, student, as such an approach can perpetuate bias against non-typical learners.
An example of such bias was reported by The Markup. Earlier this year, the technology news site investigated the advising software Navigate, which is widely used by public universities in the USA. It found that certain student populations were deemed 'high risk', and the 'Major Explorer' function suggested that administrators push those students toward majors where their predicted risk was lower. Such algorithmic decision-making can leave advisors with a lasting impression of students and their chances of success within a given major, and can allow algorithm-driven recommendations to be misused or to steer educational priorities in a certain way.
I reflect on algorithmic bias and fairness in my podcast. I used Audacity to record the podcast, the YouTube Audio Library for non-copyrighted background music, and Camtasia to combine the two pieces of audio.