Preparing ethical leaders with an interdisciplinary understanding of AI and the confidence and agency to transform our digital future

For 150 years, Wellesley has prepared students to be the bold, critical, and ethical leaders who will set the course for a better world. This effort is particularly important in the age of artificial intelligence (AI). The liberal arts compel us to focus not only on the value of citizenship and the construction of knowledge, but also and most importantly on what it means to be human. Thus, a commitment to engaging with AI is not only in our interest; it is our imperative. As a community, we call upon students who will become future leaders to investigate bias, grapple with philosophical and technical challenges, preserve humanistic values, and innovate in ways that prioritize positive societal impact. Our graduates will build bridges between technological solutions and communities in need, shape policy development, and elucidate critical questions while inviting diverse voices to the table. Through a multifaceted and insightful liberal arts education, Wellesley students will lead as wise stewards, mitigating harm and leveraging advancements for human flourishing.

Professional Preparation: Applied Skill-Building

Applied AI skill-building opportunities through Career Education respond to employer and industry needs, preparing our students for the entry-level workforce.

Upskill: AI track

Career Education is launching a new immersive one-week Upskill track for January 2026 where participants will learn how to build the digital future they want with applied AI. Students will gain an understanding of AI through the lens of their future industry of interest while building project management and adaptive leadership tools for ethical change. Students will move from basic AI literacy to prototyping with AI. No technology prerequisites or previous AI experience required. Students earn one experiential learning unit on their transcript upon completion.

Academics: In the Classroom

Our academic approach encourages individual faculty to think critically and explore creatively as they iterate and develop curricula in a world of AI.

  • Marc Tetel, wearing a lab coat and gloves, lectures about an image of a brain projected on the whiteboard

    NEUR 301: Career Exploration and Grant Writing in Neuroscience

    In NEUR 301, students write an NIH-style grant proposal on a neuroscience topic of their choice. Students use generative AI as a tutor to help explain complicated neuroscience techniques they encounter in their literature searches. In addition, students are encouraged to use GenAI to help explore the critical questions in their topic (for example, mechanisms of cell death in Alzheimer’s disease) and help determine the advantages and limitations of the experimental approaches in their proposals.

  • Professors Julie Walsh and Eni Mustafaraj meet with their research associate CJ Larkin.

    PHIL 322/CS 334: Methods for Ethics of Technology Seminar

    In this seminar, students learn theoretical frameworks from both philosophy and computational and data sciences. Students work together to see how frameworks from both disciplines enrich our understanding of the ethical issues facing digital technologies.

  • Three students in a classroom work on their laptops

    CAMS 236: AI and the Human Machine

    This course is designed to support the development of student agency and skills in analyzing AI through humanities and media studies. Students examine AI from historical, cultural, and technological perspectives. Readings cover a range of methodologies from media history, media archaeology, and cultural studies to posthumanism, feminist theory, and decolonial approaches. This work encourages students to expand their understanding of what AI is and how it reshapes the cultural order, social relations, and human-machine interaction.

  • Standing outside on the grass, three students watch a drone in the sky while one pilots the drone

    CS 110: Sociotechnical Dimensions of Computing in the Age of AI

    An introductory course open to all students, CS 110 provides a foundation in how computers, the web, and AI systems work while also examining how these technologies shape power, inclusion, bias, privacy, trust, and equity. Students participate in innovative activities such as a drone rescue simulation, confronting both the technical challenges and ethical trade-offs of autonomous systems. Through labs, collaborative projects, and guest lectures, students gain both practical skills and the critical lens needed to understand and influence the role of AI in society.

  • Three students in a classroom work on their laptops

    ANTH 248/REL 238: Digital Religion

    How has technology impacted religion? How has religion influenced technology? This course explores how digital technologies like the internet, social media, gaming, virtual reality, telecommunications, and AI have changed the way that people think about and practice religion. Throughout this course, students focus on the relationships between religion, digital media, robotics, and popular culture online using both real-world case studies and current research in the fields of religion, anthropology, and science and technology studies.

  • Carolyn Anderson stands in front of a chalkboard and lectures to a class. She is holding an iPad.

    CS 232: Artificial Intelligence

    What is AI, and should humans fear it as one of “our biggest existential threats”? This course discusses the development of the field from the symbolic, knowledge-rich approaches of 20th-century AI to statistical approaches that rely on increasingly large amounts of data, including an overview of contemporary deep learning techniques. Students explore how to apply these techniques in several AI application areas, including robotics, computer vision, and natural language processing, and consider ethical issues around AI in society.

Experiential Learning: Student Internships and Research

Experiential learning provides opportunities for undergraduates to apply their liberal arts skills and knowledge in the world. It also drives the ongoing evolution of our campus approach to AI, as students bring new perspectives, ideas, and innovations back to the classroom and community.

  • Two students seated on a bench, having a discussion

    Can AI Help Combat the Next Pandemic?

    The world’s lack of preparedness for the COVID-19 pandemic revealed how urgently we need effective tools for vaccine and therapeutic development. In summer 2024, Seojean Kim ’26 and Sage Widder ’26 worked at the Marks Lab at Harvard Medical School, which has developed a machine learning model that predicts viral evolution in COVID-19 based on genetic sequences.

  • Vicky Lee posing with her advisors

    Crossing Disciplines: AI Research to Policy Exploration

    In summer 2024, Vicky Lee ’25 bridged technology and policy through two experiences: researching the reliability of large language model (LLM)-generated relevance judgments at the National Institute of Standards and Technology, and exploring national security theory within the context of U.S.-China relations with the Hertog Foundation. While working in D.C., she gained exposure to the intersection of AI and machine learning with public policy, particularly in the rigorous testing that informs safety-focused deployment of AI.

  • Colorful molecular model

    Evaluating 3D Molecular Generative Chemotherapy Models

    Using AI models for drug development has become a promising strategy in cancer research, as it reduces the time and cost to find possible starter molecules. During summer 2024, Lucia Urreta ’26 worked as an intern with Dr. Al-Lazikani at the MD Anderson Cancer Center, developing code to evaluate 3D molecular generative models using several metrics and analyzing intermolecular interactions between the generated molecules and their protein pockets.

  • Screenshot of Natural News's article on corporate media layoffs with a CredBot pop-up assessment in the corner

    Real-Time Credibility Assessment of Webpages

    Nina Howley ’27 and Dianna Gonzalez ’27 worked in summer 2025 under the advisement of Professor Eni Mustafaraj on CredBot, a Chrome extension powered by LLMs that annotates articles based on six established credibility signals.

  • Students look at sheet music projected onto a screen

    Harmonizing Data and Music for Global Wellbeing

    Margot Lang ’26 spent summer 2025 conducting research at the MIT Media Lab centered on leveraging audio data and AI to democratize music education and collaborative composition. She analyzed, processed, and curated a library of sounds to train AI models for an innovative instrument that co-creates a “global symphony.”

  • Nadine Gibson seated at table with peers, gesturing with hands

    Understanding AI Use as a Social Interaction

    Is ChatGPT really a new coworker? Nadine Gibson ’28 spent summer 2025 at MIT Sloan School of Management, researching the similarities between how people engage with AI and how they engage with other humans. A first-gen student, she developed research methods and glimpsed what the discovery, development, and dispute of knowledge look like at the intersection of technology and society.

Academics: Faculty Research

Research driven by Wellesley faculty investigates AI innovations across disciplines, leading to academic and applied findings that inspire the future.

  • Sun-Hee Lee sits at her desk and smiles softly.

    Integrating Corpus Linguistics and AI-Based Technologies

    This collaborative project brings together researchers, including Sun-Hee Lee, professor of Korean, in corpus linguistics and AI to examine how immigrants are represented in Korean- and English-language media. By integrating large, annotated corpora with fine-tuned AI models such as BERT and GPT, it tracks discourse patterns over time and across regions, showing how language shapes public opinion on immigration. The study establishes a scalable, data-rich model for digital humanities research on migration and diversity.

  • Orit Shaer sits at a desk and gesticulates while smiling. She is wearing a purple blazer.

    Building Trust in AI for Emergency Medical Decision Support

    Professor Orit Shaer and her students study how AI can enhance the team-based decisions made by emergency medical responders. Through interviews, experiments, and immersive VR simulations with EMS professionals, the team will develop a framework for human-centered AI tailored to emergency workflows. The findings will not only advance emergency care but also inform AI design in other high-stakes domains, from disaster response to automated transportation.

  • Illustration of cats working on research together at MOGU Lab

    Model-Guided Uncertainty (MOGU) Lab

    Intensive longitudinal studies of suicide offer opportunities to understand how suicidal thoughts unfold in real-world settings using smartphone surveys and wearables; however, this data comes with several challenges that bar the use of traditional machine learning methods. Led by Yaniv Yacoby, assistant professor of computer science, researchers in the MOGU Lab address those challenges by developing a new class of generative models to help better identify types of at-risk patients, forecast when patients are at imminent risk, and help prevent suicide and related behaviors.

Brian Brubach speaks in the foreground while fellow panelists look on

GenAI Literacy Faculty Fellows

The first cohort of GenAI Literacy Faculty Fellows launches in spring 2026 as an initiative of the AI Working Group on behalf of the Provost’s Office. The initiative will support faculty in critically examining generative AI’s role in liberal arts education through disciplinary research, pedagogical innovation, and community engagement.

This yearlong fellowship centers on inquiry. Projects will engage with GenAI from all perspectives, including interrogating the ethics of AI tools, investigating their potential for positive use in teaching and research, and everything in between. Fellows will contribute to Wellesley’s collective understanding of GenAI’s potential benefits, limitations, and risks, helping to shape a thoughtful, responsible, and informed approach to GenAI across different areas of study.

Wellesley College AI Working Group

The programs and initiatives presented on this page represent a sample of Wellesley’s cross-disciplinary involvement in AI and have been curated by the AI Working Group—a representative group of faculty, staff, and students charged with designing the College’s creative and strategic engagement with AI.