Artificial Intelligence (AI) is behind unprecedented innovations across industries in today's digitally driven business ecosystem. From orchestrating smart manufacturing and digital healthcare operations to autonomous banking and smart city infrastructure, AI has become a crucial part of human life. In other words, AI is seamlessly working alongside humanity to make life easier for us. But is AI capable of treating humans fair and square?
In this joint point-of-view article, Andy Packham, Chief Architect of the HCLTech Microsoft Business Unit; Poonam Sampat, Microsoft Cloud Solutions Architect; and Aruna Pattam, HCLTech Head of AI and Data Science, explore the pressing concern that underlies all AI development today: bias in AI algorithms.
Noting the AI innovations currently underway, Andy raises the question of whether they can impact society in a consistently positive manner. There have been multiple instances of bias at play in AI-driven systems, and bias is unarguably a serious matter that can severely damage an organization's reputation. Poonam notes that bias is the real danger of AI - and quite rightly so. It has manifested as gender bias in hiring algorithms at Amazon and racial bias in facial recognition algorithms and policing systems in the US. Aruna mentions a similar bias at play in Apple's credit card limit allotment system: the model was reported to be biased against women, who were allotted a twentieth of the credit limit assigned to men with similar credentials and credit histories. According to Gartner, 85% of AI projects will deliver erroneous outcomes due to multiple sources of bias at play.
Clearly, if AI is to stay with us and amongst us, making decisions that affect our communities at large, then mitigating bias from AI use cases is an urgent imperative for businesses - especially because AI is also a key source of competitive advantage across most industries today. That's why leading digital natives like Microsoft and researchers working in this arena are jointly steering the AI wave toward fair and ethical ground. Read on to understand the sources of bias in AI and how you can leverage the responsible AI framework to eliminate bias from your use cases.
How bias creeps into AI applications
Bias in AI and machine learning comes from multiple sources and enters at different life cycle stages. Programs trained on biased datasets are bound to produce biased results, while others become biased through their interactions with people like you and me. Aruna walks us through some of the key types of bias in AI:
- Bias within datasets: Datasets that carry the footprints of human bias - for example, a record of loan decisions, hiring decisions, or a list of buyers from a shopping mall - can inject bias into any model built on them. Bias also enters if you label the predictors erroneously, if certain communities are under-represented, or if your data sampling strategy was biased to begin with (see the sketch after this list).
- Bias in model development: Excluding certain features or conducting train-test splits without adequate oversight can inject bias into the model. For instance, if you train a model only on data from the first quarter of the year for a variable that follows an annual trend, you can end up with a biased model.
- Injecting bias during validation: Some algorithms are prone to overfitting, while others underfit. Failing to prevent leakage between training and test data masks these problems and becomes another source of bias in the model testing and validation phase.
- Acquiring interaction bias: NLP-powered bots that generate engagement and interaction within communities can pick up prejudices from the humans they interact with. These algorithms can amplify the racial and gender biases found in the human voices on the platform.
- Bias in purpose: Some models are biased in the very purpose they are built for. Consider a news application that returns stories similar to your search query; such programs can land us in an information bubble built by the intent and parameters of our searches.
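To make the first two items above concrete, here is a minimal sketch of two early checks: flagging under-represented groups in a dataset and stratifying a train-test split so the split itself does not skew representation. The file name, column names, and 10% threshold are illustrative assumptions, not part of any specific framework.

```python
# Sketch: detect under-represented groups and split data without
# distorting group proportions. Assumes a hypothetical CSV of past
# loan decisions with "gender" and "approved" columns.
import pandas as pd
from sklearn.model_selection import train_test_split

def flag_underrepresented(df: pd.DataFrame, column: str, threshold: float = 0.10) -> list:
    """Return the values of `column` whose share of rows falls below `threshold`."""
    shares = df[column].value_counts(normalize=True)
    return shares[shares < threshold].index.tolist()

df = pd.read_csv("loan_decisions.csv")  # hypothetical dataset

for group in flag_underrepresented(df, "gender"):
    print(f"Warning: '{group}' makes up <10% of rows; consider re-sampling.")

# Stratify on the sensitive attribute so both splits keep the same group mix.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["gender"], random_state=42
)
```

Checks like these are cheap to automate as a gate in the data pipeline, so under-representation is caught before a model is ever trained.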
Recognizing the sources of bias is the first step toward building fair, bias-free AI models. From there, enterprises should consider the following framework for mitigating bias in their AI strategy.
Here's how to achieve fairness in AI
Poonam outlines some of the critical efforts being led by Microsoft, HCLTech, and the AI community at large to mitigate bias in AI. Microsoft's responsible AI framework - founded on the principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability - is a holistic approach to building unbiased AI models. Here is how Microsoft's responsible AI framework can help you build fair and ethical AI applications:
Fairness
A critical test of bias in AI is whether a system treats all people fairly. For instance, does a facial recognition system work for people of all races, ages, and genders? You can work toward fairness by training and testing on datasets that include portraits of people across multiple races, nationalities, genders, and ages. Aruna also mentions AI self-check systems that help an algorithm challenge its own decisions and flag those that may be biased. Yet another way of achieving fairness is maintaining human oversight of AI decisions, that is, keeping a human in the loop.
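One way to run such a test is to disaggregate a model's metrics by demographic group. Below is a minimal sketch using the open-source Fairlearn library; the label, prediction, and gender arrays are placeholders standing in for your model's real outputs.

```python
# Sketch: compare a model's accuracy and selection rate across groups.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # ground truth (placeholder)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # model predictions (placeholder)
gender = ["F", "F", "M", "F", "M", "M", "M", "F"]   # sensitive attribute (placeholder)

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.by_group)      # per-group metrics, one row per gender value
print(mf.difference())  # largest gap between any two groups, per metric
```

A large gap in either metric between groups is exactly the kind of signal that should trigger a human-in-the-loop review before the model ships.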
Reliability and safety
AI use cases are making their way into inherently high-risk situations, and bias compounds the risk in those situations. For instance, racial discrimination has affected Black patients in the healthcare industry. Extensive testing is therefore crucial to eliminate such risks before taking a model into production. Moreover, building a user feedback loop can help you spot biases and risks early on if they do exist. AI can also be leveraged to check live data streams for bias in real time - this can help mitigate bias in systems based on reinforcement learning and consequently de-risk their performance.
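As a sketch of what real-time checking on a live stream might look like, the monitor below keeps a sliding window of recent decisions per group and alerts when approval rates diverge. The group names, window size, and 0.2 gap threshold are illustrative assumptions, not a prescribed design.

```python
# Sketch: sliding-window bias monitor for a live decision stream.
from collections import defaultdict, deque

class BiasMonitor:
    def __init__(self, window: int = 500, max_gap: float = 0.2):
        self.max_gap = max_gap
        # One fixed-length window of recent decisions (1=approved) per group.
        self.decisions = defaultdict(lambda: deque(maxlen=window))

    def record(self, group: str, approved: bool) -> None:
        """Log one live decision, then check the approval-rate gap across groups."""
        self.decisions[group].append(1 if approved else 0)
        rates = {
            g: sum(d) / len(d)
            for g, d in self.decisions.items()
            if len(d) >= 50  # wait for enough samples per group
        }
        if len(rates) >= 2 and max(rates.values()) - min(rates.values()) > self.max_gap:
            print(f"ALERT: approval-rate gap exceeds {self.max_gap}: {rates}")

monitor = BiasMonitor()
monitor.record("group_a", approved=True)  # call once for every live decision
```

Wired into the serving path, an alert like this can pause automated decisions and route cases to a human reviewer while the drift is investigated.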
Inclusiveness
When used in the real world, AI models tend to reflect the biases of their data and designers back into their application area. Diversity and inclusivity, however, can counter multiple sources of bias. For instance, bringing people of all genders and nationalities into the tech community can help eliminate biased algorithm design and stereotypical responses from AI-powered systems, making AI models more inclusive. According to Aruna, AI should also be used to increase diversity at work, ensuring that all groups and communities are represented in AI data and algorithms.
Transparency and accountability
Lastly, AI systems should not only be built with transparency but also operate with transparency. To make that happen, consider documenting the decisions and processes that go into the design of an AI system. In addition, your users should be told how AI is used in a process, how it can affect them, and who to reach out to if they think the system has mistreated them.
This is a crucial step toward explainable AI, a paradigm in which AI systems can communicate their reasoning to human beings in a comprehensible manner. It has also been suggested that making AI models open source instead of proprietary can be a crucial step in building trust in AI-driven systems at large. Finally, hold your teams accountable for making their systems operate fairly and reward those who take positive steps to make it happen.
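A lightweight way to start the documentation habit described above is a structured "model card" recorded alongside the model. The sketch below shows one possible shape; every field name and value is an illustrative assumption, not a prescribed schema.

```python
# Sketch: a reviewable record of an AI system's design decisions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    sensitive_features_tested: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    escalation_contact: str = ""  # who users contact if they feel mistreated

card = ModelCard(
    name="credit-limit-recommender-v2",                                   # hypothetical
    intended_use="Suggest starting credit limits; a human makes the final call.",
    training_data="2018-2022 approved applications, re-sampled for gender balance.",
    sensitive_features_tested=["gender", "age"],
    known_limitations=["Sparse data for applicants under 21"],
    escalation_contact="fairness-review@example.com",
)
print(json.dumps(asdict(card), indent=2))  # publish alongside the model
```

Publishing such a card with every release gives users the "how it affects them and who to contact" information in one auditable place.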
What’s next?
Six in ten organizations have lost customers or revenue due to bias in their AI algorithms. But beyond the financial damage, bias can extensively harm an organization's reputation and lead it astray from its values. AI will only live up to its promise when it treats people fair and square. That's why ingraining a consciousness of the risk of bias throughout your organization as it puts AI to use is a definitive first step in the journey to fair and responsible AI. We believe that despite the risks AI poses if used irresponsibly, several exciting innovations will foster trust and fairness in AI systems in the time to come.