
Striving for fairness in AI


From personalized music playlists to improved health care, artificial intelligence continues to enhance daily life. Because the field of AI moves quickly, it is important to ensure that the technology is fair, transparent, accountable, inclusive and beneficial to all.

In partnership with Amazon, the National Science Foundation has awarded a Fairness in Artificial Intelligence (AI) grant to a multi-university collaboration, including The Ohio State University, to address bias in AI and machine learning systems.

[Photo: Parinaz Naghizadeh Ardabili]

Led by University of California, Santa Cruz Assistant Professor Yang Liu, the project will explore the long-term effects of AI decision-support systems by using human-AI feedback during repeated interactions involving sequences of decisions. Ohio State Engineering Assistant Professor Parinaz Naghizadeh is a co-principal investigator, along with the University of Michigan's Mingyan Liu and Purdue University's Ming Yin.

“We are excited to see NSF select an incredibly talented group of researchers whose research efforts are informed by a multiplicity of perspectives,” said Prem Natarajan, vice president in Amazon’s Alexa unit. “As AI technologies become more prevalent in our daily lives, AI fairness is an increasingly important area of scientific endeavor. And we are delighted to collaborate with NSF to accelerate progress in this area by supporting the work of the top research teams in the world.”

The research team asserts that understanding the long-term impact of a fair decision gives policymakers guidance for deploying an algorithmic model in a dynamic environment, and is critical to the model's trustworthiness and adoption. The work will also drive the design of algorithms with an eye toward the welfare of both the makers and the users of these algorithms, with the ultimate goal of achieving more equitable outcomes.

The project's goal, Ohio State's Naghizadeh said, is to advance understanding of the long-term implications of automating decision-making using machine learning algorithms. 

“For instance, some of the existing algorithms used for predicting recidivism in U.S. courts have exhibited racial biases, and those used for job advertising have exhibited gender biases,” said Naghizadeh, who holds a joint appointment in the Departments of Electrical and Computer Engineering and Integrated Systems Engineering.

"In the long-term, biased algorithms can reinforce pre-existing social injustices, and increase the bias in the datasets that will be used for training future algorithms," she added. "Preventing these feedback loops and guaranteeing fairness is a legal and ethical imperative.”

Automated decision-making involves human participation throughout its life cycle: algorithms are trained using data collected from humans, and they also make decisions that impact humans.

“We are specifically interested in accounting for human subjects whose behavior, participation incentives, and qualification states will evolve over time when facing these algorithms,” Naghizadeh said. “This creates a decision-action feedback loop that informs and complicates the design of fair AI.”
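
To make that feedback loop concrete, here is a minimal simulation sketch. It is not the project's model; the group names, rates, and update rules below are all illustrative assumptions. Two groups are equally qualified in truth, but the historical data understates group B. Because B's observed qualification rate starts below the decision cutoff, B is rejected, generates no new positive training data, and looks even less qualified to each subsequent round of the model:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative assumptions only: two groups are equally qualified in
# truth, but historical training data under-reports group B.
TRUE_RATE = {"A": 0.60, "B": 0.60}
observed = {"A": 0.60, "B": 0.45}  # what the decision-maker's data shows
CUTOFF = 0.50                      # accept a group's applicants above this

for round_num in range(1, 6):
    for group, true_rate in TRUE_RATE.items():
        if observed[group] >= CUTOFF:
            # Accepted applicants generate fresh outcome data, so the
            # observed rate is pulled back toward the true rate.
            sample = rng.binomial(n=200, p=true_rate) / 200
            observed[group] = 0.5 * observed[group] + 0.5 * sample
        else:
            # A rejected group produces no new positive examples; its
            # stale record makes it look slightly worse each round.
            observed[group] *= 0.9
    print(f"round {round_num}:",
          ", ".join(f"{g}={v:.2f}" for g, v in observed.items()))
```

Running the sketch shows group A's observed rate holding near its true value while group B's decays round after round: a small initial bias, left unchecked, compounds into exactly the kind of self-reinforcing disparity the researchers aim to prevent.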

Categories: Research, Faculty
Tag: AI