Bassily’s NSF CAREER award will enhance data privacy
Raef Bassily, assistant professor of computer science and engineering, has earned a $500,060 Faculty Early Career Development (CAREER) award from the National Science Foundation for his research in privacy-preserving machine learning.
The CAREER award is the National Science Foundation’s (NSF) most prestigious recognition of junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of both. Bassily’s project, “Extending the Foundations of Privacy-Preserving Machine Learning,” is funded by NSF’s Secure & Trustworthy Cyberspace program.
Training machine learning models on personal data can be beneficial in many settings, but it can also lead to the undesirable disclosure of sensitive information. The performance of existing privacy-preserving machine learning algorithms is far from satisfactory, according to Bassily, especially in modern applications, such as protecting the confidentiality and privacy of customer or patient data.
“This deficit is mainly due to the severe limitation in our current understanding of the fundamental characteristics of these algorithms,” he explained. “The lack of solid and comprehensive theoretical foundations means a lack of optimal design principles for building algorithms with superior and provable performance guarantees. This presents a real challenge hindering the leap of privacy-preserving machine learning to the practical domains and hampering its real-world impact.”
To address this challenge, Bassily’s CAREER project aims to fully understand the computational and statistical limits of privacy-preserving machine learning and optimization algorithms. He will then build a comprehensive theory that would enable the development of new privacy-preserving algorithms that can be implemented for widespread practical use.
At this early stage of the project, the research is focused on constructing new privacy-preserving algorithms for convex and non-convex optimization, which will form the basis for developing practical algorithms for modern applications of machine learning with strong performance guarantees. Bassily’s next goal is to establish the fundamental statistical and computational limits of privacy-preserving optimization and develop a rigorous algorithmic framework for privacy-preserving distributed and federated learning.
His research centers on differential privacy, a rigorous mathematical framework that allows one to reason about data privacy in a formal and quantifiable fashion.
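For readers unfamiliar with the formalism, the standard definition can be stated briefly: a randomized algorithm $M$ is $\varepsilon$-differentially private if, for every pair of datasets $D$ and $D'$ differing in a single individual’s record, and every set $S$ of possible outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].
```

A smaller $\varepsilon$ means the algorithm’s output distribution barely changes when any one person’s data changes, which is what makes the privacy guarantee formal and quantifiable.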
“Algorithms that are designed to satisfy this criterion, as those developed in this project, provide a strong and provable privacy guarantee to their input datasets,” said Bassily. “The performance of such algorithms exhibits an inherent tradeoff between privacy and utility.”
The central challenge is to develop algorithms that offer the optimal tradeoff between the two—in other words, offer the best utility under the same level of privacy protection, he added. Developing optimal differentially private algorithms often requires devising new design and analysis tools.
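To make the privacy–utility tradeoff concrete, the sketch below implements the classic Laplace mechanism, a standard differentially private algorithm (illustrative only; not one of the new algorithms developed in this project, and the function names are our own). Noise calibrated to the query’s sensitivity divided by ε is added to a mean query, so a smaller ε (stronger privacy) forces larger noise (lower utility).

```python
import math
import random


def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_mean(data, lower, upper, epsilon, rng=None):
    """Differentially private mean of `data` via the Laplace mechanism.

    Each record is clipped to [lower, upper], so changing one record
    shifts the mean by at most (upper - lower) / n -- the sensitivity.
    """
    rng = rng or random.Random()
    n = len(data)
    clipped = [min(max(x, lower), upper) for x in data]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    # The privacy-utility tradeoff in one line:
    # smaller epsilon (more privacy) -> larger noise scale (less utility).
    scale = sensitivity / epsilon
    return true_mean + laplace_noise(scale, rng)
```

With a generous ε the noisy mean is nearly exact; with a small ε the same query returns answers scattered widely around the truth. Designing mechanisms whose noise is provably as small as the privacy constraint allows is exactly the kind of optimality question the project studies.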
The educational component of Bassily’s project includes launching new graduate courses on optimization and differential privacy. He also plans to develop interdisciplinary workshops to understand the current threats to data privacy and discuss the role of privacy-preserving machine learning in dealing with those threats.
Bassily joined The Ohio State University in 2017 as an assistant professor in the Department of Computer Science and Engineering. He is also a core faculty member at the Translational Data Analytics Institute. Previously, he was a data-science postdoctoral fellow at the University of California, San Diego, and a postdoc at The Pennsylvania State University. He earned his PhD in electrical and computer engineering at the University of Maryland, College Park.
by Meggie Biss, College of Engineering Communications