Imagining possible futures

The Wagner Centers and campus partners bring together students, faculty, staff, and alums to discuss AI and the liberal arts

Hao Ju ’29 asks a question during one of the panels at the “Imagining the Future: AI and the Liberal Arts” event.
Image credit: Sam Williams
By Shannon O’Brien

Imagine a world where AI technology accurately predicts environmental disasters and helps us use renewable energy resources, where fossil fuels are banned, wildlife is protected, and green spaces flourish, and people have equitable access to food, shelter, and water. This was one vision a group tasked with thinking about climate and AI proposed during “Imagining the Future: AI and the Liberal Arts,” a daylong event hosted May 2 by the Wagner Centers for Wellesley in the World, the Lulu Chow Wang ’66 Center for Career Education, and the AI Working Group. 

Faculty, alumnae leaders, students, and staff came together for the on-campus event to discuss the challenges and opportunities generative AI presents, both in a liberal arts environment and beyond. The day’s agenda included two panels—one on alumnae experiences with AI and one on faculty research and experiences—and a lunchtime session, “Speculative Futures through a Liberal Arts Lens,” during which the audience broke into six small groups to discuss the impact of AI on a variety of disciplines. 

Orit Shaer, Michael and Denise Kellen ’68 Chair in the Sciences and professor and co-chair of computer science, moderated the first panel, which featured three alums working in AI research, investments, and policy. Their wide-ranging conversation covered their time at Wellesley, challenges and opportunities in technology, and the ways a liberal arts education provides the tools needed to approach issues of AI analytically. 

Left to right: Holly Yanco ’91, distinguished professor of information and computer science and of mechanical and industrial engineering at the University of Massachusetts Amherst; Rudina Seseri ’00, founder and managing partner of investment firm Glasswing Ventures; and Grace Abuhamad ’13, product policy lead at Google, spoke about their career experiences and shared their thoughts regarding AI.

Panelist Rudina Seseri ’00, founder and managing partner of investment firm Glasswing Ventures, said she typically hires employees with a liberal arts background because of their experience with critical and creative thinking. Grace Abuhamad ’13, product policy lead at Google, said she did a lot of comparative analysis as a history major. In her current role, she said, she uses generative AI every day: “I am not as critical [of generative AI] as some of my fellow panelists are, but that's because I think I'm confident in my ability to sort of assess the quality of the work I'm getting from different tools.” 

Panelist Holly Yanco ’91, distinguished professor of information and computer science and of mechanical and industrial engineering at the University of Massachusetts Amherst, reminded the audience that there needs to be ethical thinking behind these technologies. She referenced a project she is working on with colleagues from Georgia Tech and Carnegie Mellon to help people with mild cognitive impairment. A question they ask themselves is whether AI should be lying to people. “Do I want my AI to tell me ‘Yeah, you look great’ when I don’t?” she said. “We need to be thinking about a lot of the ethics about, what can the AI do? Should it be completely truthful all the time? Can it lie?… And if you consent to let it lie to you, and you have a mild cognitive impairment and you change over time, does that consent still hold?”

The second panel featured Wellesley GenAI Fellows, faculty selected by the AI Working Group based on proposals they submitted about examining AI’s role in a liberal arts education. The panel included Stephen Chen, associate professor of psychology; Christian Hosam, assistant professor of political science; Cassandra Pattanayak, Jack and Sandra Polk Guthman ’65 Director, Quantitative Analysis Institute, and associate teaching professor in quantitative reasoning and mathematics; Jordan Tynes, assistant teaching professor in computer science; Jeremy Wilmer, professor of psychology; and Monica Mohamed ’27, a student researcher who presented research she’s conducted with Sun-Hee Lee, professor of Korean. Jen Pollard, Lulu Chow Wang ’66 Executive Director and Associate Provost for Career Education and Experiential Learning, moderated. Each panelist spoke briefly about their experience with generative AI in the classroom or in their research.

Pattanayak teaches statistics courses that require students to code, but she said coding is not the point of the class. “I had an epiphany last summer that the question was no longer whether to continue banning AI in the classes, but actually how to start allowing it,” she said. “The questions I'm trying to answer are: What happens when we allow AI for coding in classes where coding is something we're doing, but it's not the main goal? How are students using AI in those contexts? And what can we learn about the best way to structure a class in that context?” She said allowing AI in the classroom to help with elements that are not the focus of her courses unexpectedly left her more time “to talk about statistics, conceptual ideas, which is what I actually care about.” 


During lunch after the two panel discussions, groups of participants explored the possibilities and risks AI poses in one of six specific areas: the arts, social justice, public and sexual health education, climate, science and innovation, and K-12 education. Then the groups came together to present what they had discussed.  

The students in the climate group who offered the optimistic view of the future acknowledged that governmental policies would need to be implemented along with helpful predictive AI technology. They also shared a dystopian vision of the effects of using this same predictive technology, in which companies deploy drones to exploit the weather information AI provides and hog solar energy for their own needs. 

Exploitation and data privacy were recurring themes in the groups’ presentations, with ideal versions of AI usage taking both into consideration. The group addressing social justice and AI proposed “pAIr,” an app that would connect people to government services to break down barriers—financial, bureaucratic, geographic—that stand between people and resources. They suggested strong data encryption and separation between user data and service data to ensure data security and to keep personal information within the “pAIr” ecosystem, and they said the app should be run by a nonprofit. Asking users to share all their data would be dangerous in the wrong hands: “If this was a for-profit company … then we know your income level, we know your deepest fears, and we know all your problems in life. And that would be very dystopian,” said group member Hao Ju ’29. 

Jenny Musto, associate professor of women’s and gender studies, and Sarah Abdulkerim ’22, an AI ethicist who examines the sociotechnical harms of AI on underrepresented communities, led the social justice conversation. They introduced their group to the concept of data feminism for AI, “which push[es] us to consider how AI raises new questions and require[s] new ways to study and track its surveillant and data colonialism effects,” Musto wrote in an email.

Aubrey Cantrell ’26, a computer science and studio arts double major who wrote her senior thesis on AI’s impact on the creative process, was in the group that addressed art and AI. “As someone who creates art, I’ve come to see AI as a medium as opposed to a final output,” she said. The group developed two storylines for a character named Pandora, one in which she gives all her creative agency to AI and another in which she chooses to co-create with it. “It is something new, not just something synthetic, but it's like a whole new mode of art and experience we are trying to imagine,” said Yixi Gao ’28 when presenting the group’s ideas.

Earlier in the day, Shaer had asked the alums to give advice to students who are concerned about the economy, the future of their jobs, and even what it means to be human in the age of AI. Seseri gave a direct answer: “I don’t think AI as an output is inherently bad or inherently good. It is inherently powerful. So make your choice. Use it for good and use it to make lives better. Use it to establish guardrails. Ask the questions. You can be an outsider throwing stones. You can be an insider changing from within. Both are needed. But I think you need to participate.”

Visit AI and the liberal arts at Wellesley to learn more about the College’s approach to the technology. Visit Wellesley’s YouTube channel to watch the Wellesley Alums in AI and Generative AI Fellows panels.

An artist sketched the conversations that happened during the panels.