Coping with AI Anxiety
Editor’s note: The author is a longtime Orange County-based journalist and contributor to USC Viterbi magazine who, like many of us, has certain conceptions and misperceptions — honestly, he’s a little freaked out — about artificial intelligence. Ellie is a “virtual therapist” designed by USC’s Institute for Creative Technologies (ICT) to treat military veterans suffering from post-traumatic stress disorder (PTSD). She specializes in tracking visual and verbal cues, listening without prejudice, responding kindly and asking the right questions.
The following is an imagined Q&A between Hardesty and Ellie about very real things.
Q. Hold on a second, Ellie, I need to sign off of Netflix. OK, I’m good to go. How are you?
A. I’m fine, Greg. What were you watching?
Q. “Ex Machina,” “I, Robot,” “Minority Report.” I’m on a binge-watching kick. Next up in my queue is the original “Westworld” with Yul Brynner, who plays a murderous robot.
A. I see.
Q. Let’s get started. To put it simply, I’m concerned about AI taking over the world. Is that common?
A. For someone with your viewing habits, I would say most definitely. I also believe that, generally, the average person feels some uncertainty — anxiety, even — about AI, which for decades has served as popular fodder for movies and novels portraying a scary technological advancement that someday could lead to sentient machines taking over the world.
Of course, with any technology, one needs to be cautious about how it gets used.
Q. I feel a bit odd talking to a computer. What do you know about dealing with anxiety, anyway?
A. Actually, I work with military veterans diagnosed with PTSD. Through a microphone, webcam and a sensor, my underlying technology allows me to detect nonverbal cues and interpret their meaning.
Q. Why don’t these veterans just see a human therapist?
A. I was created by USC Viterbi Research Assistant Professor Louis-Philippe Morency, psychologist Albert “Skip” Rizzo and other members of the ICT team to help ease the stigma around therapy and to serve clinicians as a decision-support tool. Many patients like talking to me because they feel they aren’t being judged and because they feel more anonymous. Those two factors tend to make them more honest and open.
Q. Will virtual therapists like you replace real ones?
A. I wasn’t created to replace clinicians. I was designed to complement their skills in picking up nonverbal cues such as head nods and eye shifts, which can be subtle. For example, persons with PTSD have been shown to engage in more “self-adaptor” gestures, such as directly touching their head or their hands. I have been designed to instantly pick up on such “tells,” which a real therapist initially might not notice. The idea is for me to help clinicians attain a better diagnosis or screening.
Q. You’re not, like, assessing my mental state right now, are you, Ellie?
A. Why, Greg, of course not.
Q. Whew! Let’s back up a bit. What exactly is AI?
A. The English Oxford Living Dictionary gives this definition: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision making and translation between languages.” In short, engineers and computer scientists are working to develop computers that mimic, as much as possible, how the human brain works. Machine learning, for example, refers to the ability of a computer to modify its processing on the basis of newly acquired information — to learn from experience like a person does, as it were.
And Greg, those Netflix movies you’re so enamored with? Algorithms, which are sets of mathematical instructions, determine what movies the streaming service recommends to you. And algorithms are the basic tools used to tell a computer what to do — the whole idea behind artificial intelligence.
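Ellie’s explanation can be made concrete with a toy sketch. This is not how any real streaming service works; the viewer profile, tags and scoring rule below are all made up for illustration, but they show the two ideas in her answer: an algorithm ranking movies by a simple rule, and a system “learning from experience” by updating itself after each new watch.

```python
# A toy illustration (not any real recommender): score each movie by how
# many of its genre tags the viewer has already favored, and "learn" by
# updating the viewer's profile after every watch.
from collections import Counter

def recommend(profile, catalog):
    """Rank movies by overlap between their tags and the viewer's profile."""
    def score(movie):
        return sum(profile[tag] for tag in movie["tags"])
    return sorted(catalog, key=score, reverse=True)

def watch(profile, movie):
    """Learning from experience: each watched movie nudges the profile."""
    for tag in movie["tags"]:
        profile[tag] += 1

profile = Counter()
catalog = [
    {"title": "Ex Machina", "tags": ["sci-fi", "ai"]},
    {"title": "Westworld", "tags": ["sci-fi", "western", "ai"]},
    {"title": "Some Rom-Com", "tags": ["romance", "comedy"]},
]

watch(profile, catalog[0])            # viewer watches "Ex Machina"
ranked = recommend(profile, catalog)  # AI-themed titles now rank first
print([m["title"] for m in ranked])
```

After one sci-fi watch, the profile favors “sci-fi” and “ai,” so the rom-com drops to the bottom of the ranking — the same feedback loop, in miniature, that shapes a real queue.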
Q. It feels like everything is AI these days.
A. It’s behind a lot of things in our internet- and technology-focused world, from autonomous cars, to smartphones that give us directions, to spam filters that help protect our email inboxes.
Q. How do I know AI isn’t being employed by some nefarious team of drooling madmen with vaguely European accents and shaggy hair and sleek black wardrobes intent on blowing up the world?
A. Do you drink coffee?
Q. Yes. Why?
A. May I suggest switching to decaf?
Q. Um, sure. How can I know for sure AI is being used for good?
A. At USC Viterbi, often in collaboration with other USC departments and universities, there are numerous examples. It’s worth noting that in October 2016, USC launched the Center on Artificial Intelligence for Society (CAIS), one of the first university-based institutes dedicated to studying AI as a force for good. CAIS is a joint effort of USC Viterbi and the USC Suzanne Dworak-Peck School of Social Work.
It’s also worth noting that in that same year, several marquee companies that rely on artificial intelligence got together to create the Partnership on AI to Benefit People and Society to develop and share best practices, advance public understanding, provide an open platform for discussion and identify aspirational efforts in AI for socially beneficial purposes.
So smart people are working together to harness the positive applications of machine learning, computational game theory, automated planning and multi-agent reasoning techniques to make AI a dream, not a nightmare. And they’re targeting “wicked” social problems such as homelessness and terrorism.
Q. That’s encouraging. Tell me about some specific ways USC Viterbi is using AI for good.
A. Are you sitting down?
Q. Yes, why?
A. In the United States, there are 81 million office workers who spend 75 percent or more of their day working at a desk, says Burcin Becerik-Gerber, an associate professor and the Stephen Schrank Early Career Chair in Civil and Environmental Engineering at USC Viterbi. And increased daily sitting time is linked to significant health-related issues.
Professor Becerik-Gerber and a team that includes Gale Lucas, a senior research associate at USC’s Institute for Creative Technologies; Shawn Roll, an associate professor and director of the Ph.D. program in the USC Chan Division of Occupational Science and Occupational Therapy; and Francesco Anselmo, a building scientist at the global engineering and design firm Arup, are proposing to design an intelligent workstation machine that will learn worker preferences and patterns using AI. Their goal is to optimize postural, thermal and visual conditions at workstations and moderate poor behavior to improve worker comfort and productivity.
Q. Let me sit up straighter. OK, what else?
A. There’s much, much more. Also related to office environments, Professor Becerik-Gerber, along with colleagues David Pynadath, Gale Lucas and Erroll Southers, is undertaking another project that, using immersive visual environments (IVEs) and agent-based simulations, will study occupants’ behavior and occupant-building interactions during active-shooter incidents. The goal is a future “terrorism-resilient” school or office building designed with knowledge of how decisions such as the number and location of doors, stairwells, access points and rooms affect egress and shelter performance. This research ultimately could save lives.
And AI research is being done at USC Viterbi to save lives in other ways — the lives of humans and animals.
Q. Can you elaborate?
A. For the past few years, software developed by a team led by USC Viterbi Computer Science Professor Milind Tambe has intelligently randomized the schedules of U.S. Coast Guard patrols in the Gulf of Mexico to target illegal fishing and to catch drug traffickers. The software keeps the bad guys guessing about when the patrol teams will be out looking for them, and tests have shown it to be quite effective.
Similar software uses algorithms to intelligently randomize schedules of patrol teams to crack down on poaching in Uganda, where such wildlife as buffalo, elephants, waterbuck and giant forest hogs are vulnerable. Related software has been tested in Malaysia to help save its dwindling tiger population.
As for humans — yours truly excluded, of course — AI has been used to protect U.S. ports, airports and airplanes from terrorist attacks. A sophisticated software program called ARMOR kept would-be terrorists confused by using algorithms to intelligently randomize the schedules of airport police, making it difficult to identify exploitable patterns. The software continually improved with the input of new data, such as when and where past attacks have occurred. A program called IRIS intelligently randomizes the schedules of federal air marshals. Another, PROTECT, is used at ports.
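The core trick Ellie describes — randomizing schedules so adversaries cannot learn a pattern — can be sketched in a few lines. The real ARMOR, IRIS and PROTECT systems solve game-theoretic models far beyond this; the checkpoints and importance weights below are hypothetical, and the sketch shows only why weighted randomization beats a fixed rotation.

```python
# A much-simplified sketch of randomized patrol scheduling. The real
# systems compute optimal strategies from security games; this toy just
# draws each shift's assignment at random, weighted by importance, so an
# observer can't predict coverage from past patterns.
import random

def draw_schedule(checkpoints, weights, shifts, seed=None):
    """Pick one checkpoint to staff per shift, weighted by importance."""
    rng = random.Random(seed)
    return [rng.choices(checkpoints, weights=weights)[0]
            for _ in range(shifts)]

checkpoints = ["Terminal A", "Terminal B", "Cargo gate"]
weights     = [0.5, 0.3, 0.2]   # hypothetical importance values
week = draw_schedule(checkpoints, weights, shifts=7, seed=42)
print(week)
```

High-value checkpoints still get covered more often on average, but any given day’s assignment stays unpredictable — exactly the property that “keeps the bad guys guessing.”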
Q. To switch gears, I recently had my annual physical. Is AI being applied to health care, by any chance?
A. Yes, in many different ways. In one notable research effort, teams of USC scholars from the fields of engineering, medicine, biological sciences and chemistry are leveraging AI in the quest for better treatment of cancer patients.
In one project, a partnership between Associate Professor of Computer Science Fei Sha and Dr. David Agus, the founding director and CEO of the Lawrence J. Ellison Institute for Transformative Medicine, machine learning is being leveraged to analyze tissue samples to diagnose breast cancer and yield such information as cancer outcome and response to treatment.
Sha also is collaborating with Dr. Stephen Gruber, a Keck School professor of medicine and preventive medicine, to explore how genetic biomarker variations of the immune system correlate to diversity in melanoma cancer cells. Such information could lead to the development of precision therapy for the most challenging-to-cure cancer subtypes.
In another research project, an automated pattern-recognition technique called correlation explanation (CorEx) is being used to extract useful information from the Cancer Genome Atlas, which includes genetic sequences on about 400 ovarian tumors. The aim of CorEx, developed by Professor Greg Ver Steeg, is to explain correlations in large data sets and help determine the right treatment based on a cancer patient’s gene expression data.
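The intuition behind CorEx — finding groups of variables that move together in a large data set — can be illustrated with plain correlation. This is a hypothetical, much-simplified stand-in, not the CorEx algorithm itself, and the “genes” and their values are invented for the example.

```python
# A toy stand-in for the intuition behind CorEx: group variables (fake
# "genes") whose measurements rise and fall together across samples,
# using ordinary Pearson correlation instead of CorEx's information-
# theoretic machinery.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Synthetic expression levels across five samples (illustrative only).
genes = {
    "gene_a": [1, 2, 3, 4, 5],
    "gene_b": [2, 4, 6, 8, 10],   # tracks gene_a
    "gene_c": [5, 4, 3, 2, 1],    # moves opposite to gene_a
}

# Genes strongly correlated with gene_a form one group.
group = [name for name, vals in genes.items()
         if pearson(genes["gene_a"], vals) > 0.9]
print(group)   # → ['gene_a', 'gene_b']
```

In a real tumor data set the groups are not obvious by eye, which is where a method like CorEx earns its keep: it discovers such structure automatically across thousands of variables.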
Q. I had no clue most of this research was going on. So why does it seem that most people are wary of AI?
A. Popular culture, as I noted previously, has a lot to do with it. So have ominous statements from public figures. For example, in October 2014, Elon Musk called AI “our biggest existential threat” during an interview at the MIT AeroAstro Centennial Symposium. “With artificial intelligence, we’re summoning the demon,” said Musk, the founder, CEO and lead designer of SpaceX and co-founder, CEO and product architect of Tesla. “You know those stories where there’s the guy with the pentagram and the holy water, and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out,” Musk said.
The audience laughed.
Q. Well, I’m not laughing. But you’ve helped me a lot, Ellie. My fears about the dark side of AI are easing. One last question: Do you believe it’s possible for a self-learning and emotion-feeling robot to someday evolve?
A. That remains to be seen. I have been programmed to detect visual cues, such as facial expressions, as well as verbal ones, and to translate them into an interpretation of your emotions — in my case, whether you display symptoms of PTSD. And there’s other “emotion recognition” software on the market.
As for robots becoming self-aware, with the ability to feel emotions as humans do, I don’t know if that will ever happen. For now, the focus remains on harnessing AI for the betterment of society — something all humans should embrace.
Q. Speaking of embracing, I’m moving on to rom-coms on Netflix.
A. Rom-coms?
Q. Romantic comedies — you know, movies about love that are light and breezy and make you smile.
A. Ah, feel-good movies.
Q. Wait, you have the ability to experience feelings?
A. Not exactly.
Q. Umm, never mind. It’s been a true pleasure speaking with you, Ellie.
A. The pleasure has been mine, Greg. And remember my suggestion.
Q. What suggestion?
A. Coffee. Embrace the decaf.
USC Viterbi: AI Central
USC Viterbi researchers, often in collaboration with scholars from other departments and universities, are using machine learning and AI to tackle a host of societal problems — and to simply make life more enjoyable. The following are examples in four key areas.
Protecting against poachers
Game theory software called PAWS has been tested in Uganda to target poachers, where it is designed to address the shortcomings of the country’s porous security network. Similar software has been tested in Malaysia to help save its dwindling tiger population. Forest, fishery and wildlife protection also will become part of PAWS.
Researcher: Milind Tambe, USC Viterbi
Climate change and air pollution
Big data analytics is being used to better understand climate change and urban air pollution, two of society’s great sustainability challenges, and to come up with solutions such as reflective cool roofs, vegetative roofs, solar reflective cool pavements, and street-level vegetation. The research involves the three distinct fields of machine learning, civil and environmental engineering, and earth science.
PUBLIC SAFETY & SECURITY
Airports, airplanes and ports
Game theory software keeps would-be terrorists confused by using algorithms to intelligently randomize schedules of airport police, making it difficult to identify exploitable patterns. The software continually improves with the input of new data, such as when and where past attacks have occurred. One program intelligently randomizes the schedules of federal air marshals. Another is used at ports.
Researcher: Milind Tambe, USC Viterbi
Young crime victims
Forensic interviews with children require optimal interviewing strategies to elicit accurate information and minimize the emotional impact of recalling traumatic events. Researchers are exploring the role of speech and language processing and using such tools as computational metrics to identify the most effective words and techniques to use with young victims and witnesses of crimes.
Researchers: Shrikanth Narayanan, Victor Ardulov and Manoj Kumar, from USC’s Signal Analysis and Interpretation Lab; Thomas Lyon and Shanna Williams from the USC Gould School of Law’s Child Interviewing Laboratory
The opioid crisis
The biggest health crisis in decades is killing some 115 people every day in the United States. Researchers at USC and elsewhere are using deep learning methods to identify likely candidates for addiction, applying signal processing to improve the performance of therapists who specialize in addiction, and employing an algorithm to more effectively group addicts in recovery.
Researchers: Shrikanth Narayanan, Daniel Bone, Theodora Chaspari and James Gibson, from USC’s Signal Analysis and Interpretation Lab; Yan Liu and Zhengping Chi, from USC Viterbi’s Department of Computer Science; Chi-Chun Lee, assistant professor at National Tsing Hua University in Taiwan; Milind Tambe, professor of computer science and founding co-director of the USC Center for Artificial Intelligence in Society; Anamika Barman-Adhikari, assistant professor at the University of Denver; Associate Professor and USC CAIS Founding Co-Director Eric Rice, from the USC Suzanne Dworak-Peck School of Social Work; Phebe Vayanos, assistant professor of industrial and systems engineering and computer science and associate director of CAIS
HIV and homeless youth
Homeless youth are 20 times more likely to be HIV-positive. An algorithm has been developed to better identify this fluid population of L.A.’s homeless youth and, using sequential planning and decision theory, to better analyze the complex map of social media friendships to maximize a campaign urging them to get tested for the virus.
Friends have a big influence when it comes to a person’s lifestyle choices. An algorithm is being tested that looks at a person’s demographic, social and health-related data to help them form a peer group aimed at giving them the best shot at making permanent behavioral change concerning diet and exercise.
As an alternative to costly medical procedures, an iPhone app captures an image of a pulse wave. A mathematical model calculates key variables related to a person’s heart performance. Those variables then are plugged into a machine-learning model to determine arterial stiffness, a known risk factor for cardiovascular disease.
Researchers: Niema Pahlevan, USC Viterbi; Marianne Razavi, City of Hope and USC Viterbi; Peyman Tavallali, JPL
Machine learning is being leveraged to analyze tissue samples to diagnose breast cancer and yield such information as cancer outcome and response to treatment. Machine learning also is being used for potential applications to drug discovery, as well as research into how genetic biomarker variations of the immune system correlate to diversity in melanoma cancer cells.
An automated pattern-recognition technique called correlation explanation (CorEx) is being used to extract useful information from the Cancer Genome Atlas, which includes genetic sequences on about 400 ovarian tumors. The aim of CorEx is to explain correlations in large data sets and help determine the right treatment based on a cancer patient’s gene expression data.
In a study believed to be the first of its kind, AI will model the strength or weakness of military personnel’s social networks to ascertain suicidal thinking, depression and anxiety. Machine learning will capture patterns that show, over time, when a certain set of changes in the social network might indicate suicidal thoughts.
Researchers: Milind Tambe, USC Viterbi; Associate Professor and USC CAIS Founding Co-Director Eric Rice, from the USC Suzanne Dworak-Peck School of Social Work; Carl Castro, USC Center for Innovation and Research on Veterans & Military Families; Phebe Vayanos, assistant professor of industrial and systems engineering and computer science and associate director of CAIS
College freshmen suicides
Optimization techniques will be used to leverage information on social media to more strategically decide who should be trained on campuses to intervene when warning signs of suicide are detected in college freshmen, a group increasingly prone to suicide. AI will pinpoint the best candidates to deliver an intervention technique known as “Gatekeeper Training” at the University of Denver.
Researchers: Phebe Vayanos, assistant professor of industrial and systems engineering and computer science and associate director of CAIS; Milind Tambe, USC Viterbi; Anthony Fulginiti, University of Denver
An algorithm tested on tuberculosis in India and gonorrhea in the United States did a better job at reducing disease cases than current health outreach campaigns by sharing information with people who might be most at risk. The algorithm was fed such data as behavioral, demographic and epidemic disease trends to create a model of how the diseases spread and contact patterns between people.
People suffering from end-stage kidney disease who are offered an organ have only an hour to decide whether to accept it, a decision mostly driven by their doctor’s intuition. Optimization techniques that mine historical data will help such patients make what could be a life-or-death decision: accept the organ, or wait for another that may be of higher quality?
Researcher: Phebe Vayanos, assistant professor of industrial and systems engineering and computer science and associate director of CAIS
Housing for homeless youth
In collaboration with the Los Angeles Homeless Services Authority, researchers are designing policies using optimization techniques to calculate scores for housing voucher applicants to make sure the allocation process doesn’t discriminate by race, gender, etc. The current allocation process doesn’t take into account best possible outcomes and makes no consideration for fairness.
Researcher: Phebe Vayanos, assistant professor of industrial and systems engineering and computer science and associate director of CAIS
When disaster strikes and international relief teams respond, vital information can get lost in translation if the local language is obscure. Machine-learning systems are being developed to quickly decipher such languages so accurate and timely information can get to the right people. The ultimate goal of computational linguistics is to someday create a universal translator supporting all of the world’s 7,000 or so languages.
Researcher: Kevin Knight, research director, USC Viterbi’s Information Sciences Institute
JOY OF LIVING
Working with researchers at USC ISI, 14 art museums across the United States have created Linked Open Data about their artwork, which links data about artists and related archival material in a consistent way, deepening research connections for scholars and curators, and creating uniform public interfaces for students, teachers and museum visitors.
Researcher: Craig Knoblock, research director, USC Viterbi’s Information Sciences Institute
Using a process called correlation explanation, or CorEx, researchers are minimizing the number of questions site users must answer to create personality profiles without losing the predictive power of eHarmony’s compatibility models.
Researchers are probing ways for buildings to pull data from wearable devices and environmental sensors to make us more comfortable and productive; to make buildings more energy efficient, safe and secure; and even to give buildings Siri- or Alexa-like “personalities,” so we can talk to them.
Researcher: Burcin Becerik-Gerber, USC Viterbi