Chennai: Researchers from the Indian Institute of Technology Madras (IIT-Madras) and Queen’s University Belfast in the UK have developed a new algorithm to make Artificial Intelligence (AI) fairer and less biased when processing data.
Companies often use AI technologies to sift through huge amounts of data in situations such as an oversubscribed job vacancy or in policing when there is a large volume of CCTV data linked to a crime.
“AI techniques for exploratory data analysis, known as ‘clustering algorithms’, are often criticised as being biased in terms of ‘sensitive attributes’ such as race, gender, age, religion and country of origin,” said study researcher Deepak Padmanabhan from Queen’s University Belfast.
It has been reported that job applicants with white-sounding names received 50 per cent more call-backs than those with black-sounding names.
Studies also suggest that call-back rates tend to fall substantially for workers in their 40s and beyond.
When a company faces a process involving large amounts of data, it is impractical to sift through it all manually.
Clustering is commonly used in settings such as recruitment, where thousands of applications may be submitted.
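Clustering in this setting groups similar applications together so recruiters can review a handful of clusters rather than every individual file. A minimal sketch of the standard k-means procedure, using hypothetical applicant features (the data and feature names are illustrative, not from the study):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids, repeating until assignments settle."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical applicant features: years of experience and a test score.
X = np.array([[2, 55], [3, 60], [10, 85], [12, 90], [1, 50], [11, 88]], dtype=float)
labels, _ = kmeans(X, k=2)
```

Note that nothing in this objective mentions sensitive attributes, which is exactly why plain clustering can produce skewed groups.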
While this may save time in sifting through large numbers of applications, there is a big catch: the clustering process is often observed to exacerbate workplace discrimination by producing clusters that are highly skewed.
Over the last few years ‘fair clustering’ techniques have been developed and these prevent bias in a single chosen attribute, such as gender.
The research team has now developed a method that, for the first time, can achieve fairness across multiple attributes at once.
“Fairness in AI techniques is of significance in developing countries such as India. These countries experience drastic social and economic disparities and these are reflected in the data,” said Savitha Abraham from IIT Madras.
“Employing AI techniques directly on raw data results in biased insights, which influence public policy and this could amplify existing disparities. The uptake of fairer AI methods is critical, especially in the public sector, when it comes to such scenarios,” Abraham added.
“Our fair clustering algorithm, called ‘FairKM’, can be invoked with any number of specified sensitive attributes, leading to a much fairer process,” the researchers said.
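The idea of penalising clusters whose sensitive-attribute mix drifts from the dataset-wide mix can be sketched as follows. This is a simplified illustration of multi-attribute fair assignment, not the published FairKM objective; the data, the penalty weight `lam`, and the greedy one-pass scheme are all assumptions made for brevity:

```python
import numpy as np

def fairness_gap(counts, size, target):
    """L1 gap between a cluster's sensitive-value proportions and the
    dataset-wide proportions, summed over all sensitive attributes."""
    if size == 0:
        return 0.0
    return sum(np.abs(c / size - t).sum() for c, t in zip(counts, target))

def fair_assign(X, S, centroids, lam=50.0):
    """One greedy fairness-aware assignment pass: each point joins the
    cluster minimising squared distance plus lam times the increase in
    the fairness gap, across all sensitive attributes at once."""
    n, k = len(X), len(centroids)
    # dataset-wide proportion of each value, per sensitive attribute
    target = [np.bincount(S[:, a]) / n for a in range(S.shape[1])]
    counts = [[np.zeros(len(t)) for t in target] for _ in range(k)]
    sizes = np.zeros(k)
    labels = np.empty(n, dtype=int)
    for i in range(n):
        best, best_cost = 0, np.inf
        for j in range(k):
            dist = np.sum((X[i] - centroids[j]) ** 2)
            before = fairness_gap(counts[j], sizes[j], target)
            trial = [c.copy() for c in counts[j]]
            for a, v in enumerate(S[i]):
                trial[a][v] += 1
            after = fairness_gap(trial, sizes[j] + 1, target)
            cost = dist + lam * (after - before)
            if cost < best_cost:
                best, best_cost = j, cost
        labels[i] = best
        sizes[best] += 1
        for a, v in enumerate(S[i]):
            counts[best][a][v] += 1
    return labels

# Hypothetical applicants: two features plus two coded sensitive
# attributes (e.g. column 0 = gender, column 1 = age group).
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
S = np.array([[0, 0], [1, 1], [0, 1], [1, 0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = fair_assign(X, S, centroids)
```

With `lam=0` the penalty vanishes and this reduces to ordinary nearest-centroid assignment; raising `lam` trades cluster compactness for a more representative mix of every sensitive attribute in each cluster.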
In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in human resources.
FairKM can be applied across a number of data scenarios where AI is being used to aid decision making, such as pro-active policing for crime prevention and detection of suspicious activities.
The research is scheduled to be presented at the EDBT 2020 conference in Copenhagen, Denmark, in April 2020.