The Transformative Potential of AI: USC’s Bill Swartout Considers the Future

In 2023, amid the buzz about generative AI tools like ChatGPT, USC formed the Center for Generative AI and Society with $10 million in seed money and influential experts from the fields of computer science, film, media, education and more. The center is co-directed by Bill Swartout, a USC Viterbi computer science research professor and chief technology officer for USC Viterbi’s Institute for Creative Technologies.

The University of Southern California’s new Center for Generative AI and Society is delving into the fast-changing landscape of artificial intelligence and the potential impacts of generative AI on our world, including culture, media and education. Its primary focus is to promote the ethical use and innovative applications of generative AI to support higher education and to help postsecondary institutions navigate this evolution. Leading the $10 million center is Bill Swartout, who serves as co-director as well as chief technology officer for USC Viterbi’s Institute for Creative Technologies. We recently sat down with Swartout to talk about the center’s mission, the transformative potential of generative AI and the ethical considerations guiding the center’s research endeavors.


The Center for Generative AI and Society is engaged in various initiatives. Can you highlight one or two projects that you believe will significantly affect our future?

There are two main aspects to the center. One is led by our co-director, Holly Willis, a professor and chair at the USC School of Cinematic Arts, and Mike Ananny, an associate professor of communication and journalism at the USC Annenberg School for Communication and Journalism. They’re examining the impact of AI on the entertainment industry, journalism and the media. The other aspect, in which I’m involved, is a joint venture with the USC Rossier School of Education led by faculty members Gale Sinatra and Stephen Aguilar, postdoctoral researcher Changzhao Wang, and Dr. Benjamin D. Nye, director of learning sciences at USC-ICT. We’re exploring the significant impact of generative AI on education. Another important mission of ours is building a community around AI. We’ve started a program that we’re calling the AI Generative Fellows Program, where we work with USC faculty members who are interested in using generative AI in their courses.


How is generative AI impacting education?

Generative AI is having a substantial impact on education, and the center is adopting an approach of “embrace and enhance.” Some educators advocate banning generative AI or trying to detect its use; others, including us, believe in recognizing and embracing it to enhance education. Current detection tools are not very accurate, and false positives are a significant issue. We’re working to understand the landscape through surveys, building a community through the AI Generative Fellows Program and developing a framework supporting the use of generative AI in classrooms.


How is the center approaching the ethical considerations associated with AI and addressing concerns about cheating?

Ethical considerations are integral to the fabric of everything we’re doing at the center. We’re trying to think about how we can use generative AI in a way that improves the student’s experience and at the same time reduces the possibility that they would use it to cheat on assignments. It’s like the often-drawn analogy to when calculators appeared decades ago. Education evolved, and now the use of calculators is allowed, encouraged and sometimes even required, because it frees students up to concentrate on higher-level concepts. That’s actually a win. By analogy, that’s what we’re trying to do here — figure out how we can improve the educational experience and at the same time reduce the stumbling blocks that we would otherwise put in a student’s way.

In the context of addressing cheating concerns, we are shifting from grading based on the final artifact to grading based on the process. By using generative AI for “authoring by editing” — where students write a first draft using generative AI and then edit it — students are less likely to cheat, as the focus is on the process and critical thinking. Additionally, we are exploring pre-writing interactions with generative AI to help students explore counterarguments and alternative points of view before they start writing. The aim is to expand their horizon and help them develop critical thinking skills, which are crucial as more text will be generated mechanically in the future.


Can you share any insights into the development of generative AI programs that could be applied to higher education?

We are working on prototypes that use generative AI in various ways, including pre-writing interactions to explore ideas, critique written pieces and identify narrative elements such as a good hook. We are also developing a software framework for using generative AI in a classroom setting. Initially, we focused on authoring by editing. We are now beginning to explore something we’re calling “reverse outlining.” The idea is that after a student has written an essay, they could ask generative AI to read the essay and then produce an outline of the essay based on what the student wrote. The value of this could be to let a student see how their essay is perceived — they could see if the organization in the outline matched what they intended and whether or not their ideas came across — and then modify what they wrote to improve it.
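As a rough illustration of the reverse-outlining idea described above (a sketch, not the center’s actual software), a course tool might assemble a prompt like the one below and hand it to whatever language model it uses. Here `call_llm` is a hypothetical callable standing in for a real model API:

```python
def build_reverse_outline_prompt(essay: str) -> str:
    """Assemble a prompt asking a language model to outline a student's essay.

    The instructions deliberately ask the model to report structure as written,
    not to evaluate or rewrite, so the student sees how the essay is perceived.
    """
    return (
        "Read the following essay and produce a hierarchical outline of it, "
        "listing the main point of each paragraph in order. Do not evaluate "
        "or rewrite the essay; only report its structure as written.\n\n"
        f"Essay:\n{essay}"
    )


def reverse_outline(essay: str, call_llm) -> str:
    """Return the model's outline of the essay.

    `call_llm` is a hypothetical function (prompt string -> response string)
    standing in for a real model API call.
    """
    return call_llm(build_reverse_outline_prompt(essay))
```

The student would then compare the returned outline against the organization they intended and revise the essay where the two diverge.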


The center released its inaugural report, “Critical Thinking and Ethics in the Age of Generative AI in Education,” in January. What were the findings?

The report is based on an internal survey of USC faculty. Our focus was on examining emerging practices and concerns related to AI that faculty members are observing; we want to understand their perspectives and ideas on the matter. Moving forward, we plan to produce additional reports that extend beyond USC, including national surveys of both students and faculty across the country. Our objective is to systematically gauge the prevailing sentiments and insights within the AI landscape, providing a more comprehensive understanding than what can be gleaned solely from popular media coverage.