IBM AI Fairness 360: Taking steps towards a more complete and trusted AI
IBM has been exploring artificial intelligence and machine learning technology for decades, firmly believing AI will transform the world in the coming years.
As AI advances, and humans and AI systems increasingly work together, it is essential that we can trust the output of these systems to inform our decisions. AI algorithms are increasingly used to make high-stakes decisions about people and their personal choices, and because these systems learn from data, they can recapitulate biases contained in the data on which they are trained. Datasets may even contain historical traces of intentional systemic bias that put unprivileged groups at an unfair disadvantage.
IBM is pleased to announce AI Fairness 360 (AIF360), a comprehensive open-source toolkit that will help our business partners and users examine, report on and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
To put it simply, the toolkit analyses how and why algorithms make certain decisions, scans for bias and recommends appropriate remedies. It’s a step closer to a more trusted and accurate AI system.
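To give a flavour of the kind of fairness checks involved, here is a minimal, self-contained sketch (plain Python, not the AIF360 API itself) of two group-fairness metrics that AIF360 reports: statistical parity difference and disparate impact. The toy loan-approval data below is invented for illustration.

```python
# Illustrative sketch of two group-fairness metrics AIF360 reports.
# This is NOT the AIF360 API; it only shows what the metrics measure.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).
    0.0 means parity; negative values disadvantage the unprivileged group."""
    return favorable_rate(unprivileged) - favorable_rate(privileged)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable rates; the common '80% rule' flags values below 0.8."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved), split by a
# protected attribute such as age or gender.
unpriv = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% approved
priv   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

print(statistical_parity_difference(unpriv, priv))  # -0.5
print(round(disparate_impact(unpriv, priv), 3))     # 0.286
```

A disparate impact well below 0.8, as here, is the sort of signal the toolkit is designed to surface so that a mitigation algorithm can then be applied.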
Read this blog post from IBM to learn more about AI Fairness 360 and the steps it takes to tackle machine learning bias and data discrimination.