We Moved Fast. We Broke Things.
In 2016, less than 24 hours after coming online, Microsoft’s AI-powered Twitter chatbot, “Tay,” was taken down for tweeting out a series of less-than-appropriate messages about Hitler and 9/11. Of course, the early-generation bot didn’t really think 9/11 was an inside job. It was simply repeating, however clumsily, things real humans had said to it.
While that event was somewhat of a dark comedy, the ways in which artificial intelligence has breached societal trust in the years since are no laughing matter. From leaked personal data to mistreated minority groups to online political echo chambers, AI mistakes have caused significant damage to real people in real ways.
Over the past five years, two conflicting facts have remained true: AI has grown to play an increasingly important role in how our society functions, yet people have become increasingly distrustful of its ability to do the job right. Indeed, according to a 2018 global survey by Ipsos, only 25% of Americans deemed AI trustworthy, compared to 70% in China and 50% in India.
At the center of this riddle are engineers and their amazing, beautiful, mind-boggling, empowering — and sometimes disempowering — technologies.
The problem has never really been with technology, but rather with us. Humans have a special way of moving too fast. We’re the ultimate “early adopter” species. Give us the secret of the atom and we’ll make a bomb — immediately. Give us the internet and we’ll start trolling each other before you can say “Zuckerberg.” But we also have an amazing capacity to adapt. Every time technology has upended society, people have found ways not only to live with it but to thrive with it.
The USC + Amazon Center for Secure and Trusted Machine Learning was established at USC Viterbi in January with exactly this challenge in mind. The center will support a new era of research that is focused on building societal trust in AI and machine learning.
Earlier this summer, the first cohort of USC Viterbi researchers whose work will be supported by the center was announced. Salman Avestimehr, the center’s director, said interest among researchers in working with the center was higher than expected.
“We had a very enthusiastic response by USC faculty for the first call for proposals for the USC + Amazon center,” said Avestimehr, who is also a Dean’s Professor of Electrical and Computer Engineering, and an Amazon Scholar. “We received innovative ideas from our faculty proposing exciting research into various aspects of trustworthy machine learning. The response was so overwhelming that we decided to fund one more project than our original intention, and at the end, we selected the five research projects to be funded for the first year of the center.”
These are the five projects selected.
Protecting Patient Data
Imagine a federation of hospitals working to understand how a particular disease is spreading and mutating. (Maybe that’s not so hard to imagine.) Each hospital has a vast and detailed database of patient information that, when combined, could lead to critical insight. However, for privacy and security reasons, none of the hospitals can share that data with each other. But what if there were a way for these hospitals to use all the data without actually sharing it?
USC’s José Luis Ambite, research associate professor of computer science; Muhammad Naveed, assistant professor of computer science; and Paul Thompson, professor of ophthalmology, neurology, psychiatry and the behavioral sciences, are working on ways for multiple large organizations to solve problems that require a lot of data without any of them actually sharing that data.
“Our approach will make it easier for organizations to come together to allow learning over larger datasets by ensuring that local data remains private,” said Ambite. “So organizations have an incentive to allow learning over their local data. They can get a better-performing predictor without actually disclosing their private data.”
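One widely used recipe for this kind of collaboration is federated learning: each site trains on its own records, and only model parameters ever leave the building. The sketch below is a minimal illustration of federated averaging over synthetic, hypothetical “hospital” datasets; it is not the center’s code, and it omits the stronger protections that privacy-focused projects layer on top.

```python
# A minimal sketch of federated averaging, assuming synthetic "hospital" data.
# Each site trains locally; only model parameters are shared and averaged.
# Names and data here are hypothetical illustrations, not the center's code.
import numpy as np

rng = np.random.default_rng(0)

def make_local_data(n):
    """Stand-in for one hospital's private records."""
    X = rng.normal(size=(n, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = (X @ true_w + rng.normal(scale=0.1, size=n) > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """Logistic-regression gradient steps run entirely inside one hospital."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

hospitals = [make_local_data(200) for _ in range(4)]  # raw data never leaves a site
global_w = np.zeros(3)

for _ in range(20):
    # Each site refines the shared model on its own records...
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    # ...and only the resulting parameters are averaged into the next global model.
    global_w = np.mean(local_ws, axis=0)

print("Federated model weights:", np.round(global_w, 2))
```

In practice, work in this space typically adds safeguards such as secure aggregation or differential privacy on top of this basic loop, so that even the shared parameters reveal as little as possible about any single patient.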
Taking Human Bias Out of AI
Imagine a company that, in an attempt to hire a more robust and talented workforce, designs a candidate-screening tool that ends up doing the opposite. Again, perhaps that’s not so hard to imagine: Amazon itself struggled with this in 2015, when an algorithm it designed for that purpose unintentionally favored men over women. We know that AI often makes predictions that unfairly target minorities or makes lower-accuracy decisions for protected groups.
Keith Burghardt, a computer scientist at USC Viterbi’s Information Sciences Institute, plans to update machine learning programs to more fairly benefit everyone. Today, people rely heavily on “decentralized machine learning”: many small devices, like smartphones, working together to run machine learning programs more securely and at lower cost. But this emphasis on cost and security has come at a big price: equity. These small, disparate devices lack the built-in tools to address bias.
“We propose complementary methods that help address these issues. First, we will develop a general model that is specialized to particular demographics. Second, we will apply new techniques to reduce biases in the data itself, which can be applied to a large set of commercially available tools,” Burghardt said.
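To make the second idea concrete, one common, simple way to “reduce biases in the data itself” is to reweight training examples so that demographic group and outcome look statistically independent. The sketch below illustrates that general reweighing idea on a tiny hypothetical dataset; it is not the team’s actual method.

```python
# A minimal sketch of data reweighing: give each (group, label) combination the
# weight it would have if group membership and outcome were independent, so a
# model trained on the weighted data no longer learns the imbalance.
# The dataset is a hypothetical illustration, not the researchers' data.
from collections import Counter

samples = [  # (demographic_group, label) pairs from a toy dataset
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

n = len(samples)
group_counts = Counter(g for g, _ in samples)
label_counts = Counter(y for _, y in samples)
pair_counts = Counter(samples)

# weight = P(group) * P(label) / P(group, label); underrepresented combinations
# (here, favorable outcomes for group B) are upweighted.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}

for pair, w in sorted(weights.items()):
    print(pair, round(w, 2))
```

Many standard training routines accept per-example weights, so the output of a step like this can feed directly into existing pipelines, including ones running on small devices.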
AI That Listens Less
Imagine asking your virtual assistant to set a reminder to go to the gym at noon. But in addition to setting the reminder, your virtual assistant analyzes your voice to glean all kinds of other information about you: your gender, your geographic location, even your physical or mental state! Suddenly, you’re getting ads for everything from homes for sale in your neighborhood to therapy services.
This is called human bio-behavior sensing, and it’s a new and at times scary aspect of AI.
Shri Narayanan, University Professor and Niki and C. L. Max Nikias Chair in Engineering, is a leader in signal and image processing who for years has put human-centered approaches to engineering at the forefront of his work. Making sure technology collects the intended information from you — and nothing more — is a huge part of that goal.
Narayanan aims to develop a system that more accurately, sensitively and securely collects bio-behavior data. Of course, collecting the right data is vital — large sets of it are hugely important to modern medicine, for example. But systems could conceivably collect that relevant data while cataloguing so much more about you for future use, all from just your voice.
This project aims to develop methods that can reliably collect the right data from the subject while keeping more private information out of the hands of the companies behind the technology. “Our proposed framework is cognizant of the diversity and subjectivity inherent in the generation and processing of human bio-behavior data,” said Narayanan.
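One way to picture the goal is on-device summarization: the raw recording stays with the user, and only a narrow, task-relevant measurement is shared. The toy sketch below illustrates that idea with a synthetic signal; it is a hypothetical example, not Narayanan’s framework.

```python
# A toy sketch of "collect only what's needed": the raw voice signal stays on
# the device, and only a coarse, task-relevant summary is ever transmitted.
# The signal and feature below are synthetic illustrations, not a real system.
import numpy as np

RATE = 16000  # samples per second

def record_voice(seconds=1.0):
    """Stand-in for a microphone: a slightly noisy 180 Hz tone as a fake 'voice'."""
    t = np.linspace(0, seconds, int(RATE * seconds), endpoint=False)
    tone = np.sin(2 * np.pi * 180 * t)
    return tone + 0.01 * np.random.default_rng(0).normal(size=t.size)

def on_device_summary(signal):
    """Reduce the waveform to a single number: a rough pitch estimate in Hz."""
    signs = np.signbit(signal).astype(int)
    crossings = np.count_nonzero(np.diff(signs))   # count zero crossings
    return crossings / 2 / (len(signal) / RATE)    # ~ fundamental frequency

audio = record_voice()                              # never leaves the device
shared = {"estimated_pitch_hz": round(float(on_device_summary(audio)), 1)}
print(shared)                                       # only this summary is sent
```

Anything that is never computed or transmitted cannot be harvested later for advertising or profiling, which is the point.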
AI That Listens More
Imagine you are visiting a doctor for the first time. He or she asks you several questions about your medical history and takes detailed notes. But what if most of the information in those notes wasn’t being put to good use? What if it were as much as 80%?
When doctors enter their notes into your records, natural language processing (NLP) tools help aggregate the data and draw conclusions about patient health. But those tools aren’t very good at understanding the unstructured data in those notes. Formal information clearly written in a specific place about the date of your last physical? Easy. The doctor’s additional comments, written in his or her own shorthand, about who you share a home with and what support you have in life? Not very easy at all.
Xiang Ren, assistant professor of computer science, is working on ways to automate the processing of this subtler type of data in order to derive deeper insights about patient health on a much larger scale. “By teaching NLPs to ‘read between the lines’ of someone’s medical record, we can provide an extremely useful tool to doctors and hospitals that will help them better understand their patients’ needs,” he said.
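As a toy illustration of the gap between the “easy” and “hard” information in a note, the hypothetical sketch below pulls a structured date and a few social-context cues out of free text with hand-written patterns. Real clinical NLP systems learn these extractions from data; nothing here is Ren’s actual model, and the note is invented.

```python
# A toy illustration of extracting structured facts from a free-text clinical
# note using simple patterns. The note and rules are hypothetical examples.
import re

note = (
    "Pt seen for annual physical 3/2/2021. Lives w/ daughter, who helps with meds. "
    "Retired teacher, walks daily, reports feeling isolated since spouse passed."
)

patterns = {
    "last_physical": r"physical\s+(\d{1,2}/\d{1,2}/\d{2,4})",  # structured, easy
    "lives_with": r"[Ll]ives w(?:ith|/)\s+(\w+)",              # social context, harder
    "caregiver_support": r"who helps with ([a-z ]+)",
    "isolation_flag": r"(feeling isolated)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, note)
    if match:
        extracted[field] = match.group(1)

print(extracted)
```

A learned model replaces these brittle patterns, but the shape of the problem stays the same: free text in, a handful of structured, clinically useful fields out.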
The language that doctors use in their notes can be very different based on their gender, culture or nationality. A poorly designed NLP tool could unintentionally recommend treating one group of people with more care than another simply based on how it interprets the doctor’s language. “Because data distribution is often unbalanced with respect to different groups, we end up with biased models that favor certain groups when it comes to things like résumé screening or fraud detection. We hope to remove this major flaw from our current systems,” Ren said.
Holding AI Accountable
You’ve already been asked to consider several very real scenarios in which an AI system was unintentionally biased. But how were those biases discovered in the first place? After all, we can’t make our systems less biased or more trustworthy if we can’t identify where and why they are making mistakes.
Aleksandra Korolova, assistant professor of computer science, is working on better ways to do just that. She audits existing AI systems in a “black box” manner, that is, from the perspective of an outsider. Without being given access to the inner workings of a system, Korolova finds ways to learn about the algorithms used in the models and the data being collected. “It is this auditing of a system — as a total outsider — that allows us to identify mistakes in the design,” she explained. “By using this approach, we can better understand the unexpected or undesirable consequences of a complex AI system and find challenges that the company itself did not have the ability or desire to test for.”
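A stripped-down picture of black-box auditing: send the system matched inputs that differ only in a protected attribute, and compare what comes back. The sketch below uses a hypothetical scoring service as the “black box”; it illustrates the general approach, not Korolova’s actual methodology.

```python
# A minimal sketch of black-box auditing: probe a system you can only query,
# using paired inputs that differ in a single protected attribute, and measure
# the gap in outcomes. The "model" below is a hypothetical stand-in.
def opaque_scoring_api(applicant):
    """Pretend third-party system; the auditor cannot see this code."""
    score = 0.5 + 0.1 * applicant["years_experience"]
    if applicant["group"] == "B":          # hidden, unintended penalty
        score -= 0.15
    return min(score, 1.0)

def audit(api, base_profiles, attribute, values):
    """Send matched pairs that differ only in `attribute` and report the average gap."""
    gaps = []
    for profile in base_profiles:
        scores = [api(dict(profile, **{attribute: v})) for v in values]
        gaps.append(scores[0] - scores[1])
    return sum(gaps) / len(gaps)

profiles = [{"years_experience": y, "group": "A"} for y in range(1, 6)]
gap = audit(opaque_scoring_api, profiles, "group", ["A", "B"])
print(f"Average score gap between groups A and B: {gap:.2f}")  # ~0.15
```

The same probing strategy scales up to real systems by swapping the toy function for calls to a live service and using many more matched profiles.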
Once Korolova’s audit is complete, she can suggest ways for companies to adjust their algorithms, or recommend to policymakers what to look for or require from designers of AI systems to ensure their accountability. In this way, Korolova does more than help make the engineering behind AI more trustworthy; she also helps educate influential decision-makers on how to think about AI in a more holistic and society-focused way.
A New Kind of Trust
Individually, all of these projects seem like good ideas — and they are. But, as Avestimehr explained, “these projects represent some of the first building blocks of what can be thought of as ‘computational trust.’ In the same way that institutions of the past needed to earn society’s trust before they could reach their full potential, we must now do the same with the tools that are forming the foundations of the future world.”
Indeed, the federal government has even gotten involved. The National Institute of Standards and Technology, or NIST, best known for measuring things like time or the number of photons that pass through a chicken, now seeks to measure our trust in artificial intelligence.
NIST, part of the U.S. Department of Commerce, wants to put an actual number to that perceived trust, based on characteristics like accuracy and explainability. Its March 2021 paper suggested, for example, that an AI system used by doctors to diagnose disease should be more accurate than one recommending music.
The era of computational trust cannot come fast enough. “Already, AI is making our lives easier and safer every day,” said Burghardt. “But every time an AI system unintentionally targets a certain group or releases sensitive data, distrust in the system reduces progress and makes future research harder. An AI system people fully trust is the only way we can see it truly serve humanity to its fullest potential.”
In an increasingly technological world, Yannis C. Yortsos, dean of the USC Viterbi School of Engineering, understands that gaining computational trust is an imperative. “As the interface between technology, society and humanity becomes increasingly intertwined, trust has emerged as a truly fundamental issue,” he said. “Trustworthiness must address not just the technology side, but the human side as well. Engineers, perhaps most of all, must have an understanding of intended and unintended consequences and the character, dedication and ingenuity to address them. These themes of trust and human-centered engineering have become extraordinarily relevant to all our engineering education and research.”
With the new USC + Amazon Center in place, USC Viterbi has taken that call to heart.