
Shaping AI before it shapes us

Professor Eni Mustafaraj and Julie Walsh asked students to work in groups and write out what they think about AI.
Image credit: Shannon O'Brien

Career Education offers students an opportunity to think deeply about issues related to artificial intelligence

Author: Shannon O'Brien

“AI is at a tipping point where we can really shape it, or it can shape us,” Stephanie Georges ’85, founder of The Meraki Dignity Project, told students on January 13, during the first day of the Upskill AI for Social Impact workshop on campus.

Georges was a guest of Jen Pollard, Lulu Chow Wang ’66 Executive Director and Associate Provost for Career Education, who led the AI workshop, the newest track offered by the Lulu Chow Wang ’66 Center for Career Education as part of the College’s annual Upskill skill-building series. Students can sign up for a weeklong session on one of six topics: the new AI offering, entrepreneurship, journalism, investment banking, medicine, or software development.

“You all will be working in fields that AI will impact,” Pollard said to the students on opening day, noting that generative AI is here to stay and is already being misused, for example to create deepfakes and spread misinformation.

Wellesley wants to prepare its students to become leaders who can fine-tune AI, make it more accurate, and imagine ways to use it for a powerful positive impact, Pollard said, such as by detecting cancers earlier, before they become terminal.

To do that, students will need to understand the technology’s history, what’s new about generative AI, how it’s being deployed and by whom, and how it can be better. And they need to be in the rooms where decisions about AI use are being made.

Eni Mustafaraj, associate professor of computer science, listens as students share their thoughts on AI.

Pollard talked about Anne Toth ’93, head of data policy at the World Economic Forum. In the 1990s, Toth held one of the country’s first data privacy positions, at Yahoo, where she asked how the company was protecting all the data it had collected from users. Later, at Amazon, she helped with the international rollout of the company’s virtual assistant, Alexa, which involved considering how Alexa’s responses would reflect various cultures and contexts as well as address ethics and privacy concerns.

Toth was successful in her work not because she was a great technologist, but because “she was a brilliant thinker, and she was very considerate of all of the different cultural elements, the ethical considerations, the privacy considerations,” Pollard said. Toth has had to think deeply and critically about issues and audiences, skills the students are gaining through a liberal arts education.

AI for social good

Georges said she worked on Wall Street immediately after graduating from Wellesley, and it was there that she discovered her passion for disruption: moments when she would have to shift in response to unexpected changes. She left to work at a telecommunications company on the brink of bankruptcy; because customers kept buying landlines, the company had kept putting off new technology. She helped it navigate out of that situation, and she said the experience taught her that “disruption creeps up on you.”

“If you're not paying attention, or you delude yourself that it’s not going to happen, then you may miss it,” she said. She finds a parallel in AI and how people are responding to it.

She described how her team at The Meraki Dignity Project is using AI to create a digital space that supports women with information and resources on issues such as health and wellness, caregiving, and community. Unlike the better-known large language models (LLMs), such as OpenAI’s ChatGPT, the project’s AI assistant, Sophy, is built on a small language model that draws on a small, private dataset specific to women’s needs and concerns. That data helps Sophy personalize insights for each user, whereas LLMs are trained on information not tailored to this demographic.

Ethics and concerns

Julie Walsh, Whitehead Associate Professor of Critical Thought and associate professor of philosophy, and Eni Mustafaraj, associate professor of computer science, visited the workshop to discuss the ethics of technology and AI, how AI technology is being marketed to the public, and who benefits from it. Mustafaraj noted that AI is not new, and used the post office as an example: “[W]e taught machines to understand human handwriting, handwritten digits, and letters” to sort mail more efficiently. “That’s a good use of AI, but people have forgotten about the good uses of AI, and now everybody is just focused on the bad uses of AI,” Mustafaraj said. “Why? Because it appeals to fear, which is so potent, right? Whenever you want to control people, you appeal to fear, and this is why we should look beyond the rhetoric at who is advancing certain narratives.”

Walsh said a critical part of a liberal arts education is learning to notice fallacious reasoning, or reasoning that has a fatal flaw that renders the argument invalid. Many fallacious claims about generative AI development and deployment are being made by tech companies as well as by the media covering them, she said. She noted issues like false dichotomies and unfalsifiable claims that aim to steer the public toward specific viewpoints about the technology.

Changing outlooks

Learning how to control AI rather than letting it control us, as Georges put it, was one of the goals of the Upskill workshop. Vita Kirschtein ’29 said she has seen a family member, whom she described as an “information sponge,” increasingly turn to ChatGPT rather than books to learn more. “He sees so many positives in it, and I see so many negatives in it. There has to be a middle ground,” she said. “Nothing is all negative or all positive.”

Students discuss business models that might best serve the goals of The Meraki Dignity Project.


For Hannah Williams ’26, a meaningful use of AI is one that has a long-lasting, significant social impact. She said companies should consider how AI use could deepen their missions. An example she pointed to is BirdNET, which “creates visuals of bird vocalizations to study bird species in detail, and is promising for preserving endangered bird species,” she said, “but is ultimately an add-on tool for researchers.”

Kirschtein pointed to another example the students learned about: A small pizza shop could, with enough data, use AI to optimize how much pizza to make, leading to less waste. Though she called herself an AI skeptic, Kirschtein said she hadn’t thought about practical uses for AI, like preventing food waste: “I’m not gonna say it changed my mind, but it definitely impacted my thinking. It made me consider a more positive outlook.”

Williams said she left the program feeling better informed, and noted that the differences in opinion about generative AI were “a strength, not a weakness, because they added so many layers to our group discussions.”

“During the program, I was exposed to so many different perspectives and backgrounds regarding AI, from computer science majors who did not agree with it to humanities majors who used Grammarly regularly,” she said. “At the very least, this program was a helpful forum in hearing real opinions from students like me who care about the same things I do.”