
Smithies vs. The Machine

Alumnae News

AI threatens to upend how we live, work, and even think. Amid this tumult, alums and faculty members are harnessing its potential for positive change.

Illustration by Lucy Jones

BY LAURA J. COLE

Published July 17, 2024

Los Angeles–based journalist Carolina Miranda ’93 was playing around with Midjourney for a writing assignment, asking the generative artificial intelligence platform to create images of L.A. based on vivid descriptions she pulled from literature.

The line “One woke in the night troubled not only by the peacocks screaming in the olive trees but by the eerie absence of surf” by Joan Didion, for example, returned an image of hills covered in olive trees. The biggest tree had the neck and head of a peacock growing from its branches.

Carolina Miranda ’93
Major: Latin American studies
House: Cutter

Curious, Miranda, who until recently worked at the Los Angeles Times, then entered the words she would use to describe herself: Latina journalist.

“The renderings I was being given were of these highly sexualized images of women wearing a lot of makeup with big boobs, revealing tops, and sitting at a computer,” she says. “In AI’s mind, that is what Latina journalists look like. Nerd-at-a-computer–Latina-journalist, which is me, did not come up in AI because the ideas that are already promulgated about Latinas in the culture—that’s what AI is drawing from.”

When prompted to provide common stereotypes of Latinas, ChatGPT returns the following descriptions: “promiscuous,” “hot-tempered,” “low socioeconomic status,” “not proficient in English,” “uneducated.” Admittedly, this is the result produced when AI is asked for stereotypes, but these very descriptors also reflect the biases often baked into the large language models (LLMs) that power generative AI platforms, making experiences like Miranda’s more common than they should be.

That’s because—in the case of Midjourney, ChatGPT, and other generative AI platforms—LLMs are trained on massive volumes of content pulled from the internet.

R. Jordan Crouser ’08
Majors: Computer science and mathematics
Houses: Cutter, Morrow, Hubbard, and Tyler

“I did a back-of-envelope calculation, and if we were to magically have a human who could read from birth, 24 hours a day, seven days a week, for 100 years, it would take about 2,000 lifetimes to be able to read the training data for ChatGPT 3.5,” says R. Jordan Crouser ’08, an associate professor of computer science at Smith who specializes in human-machine collaborations. It’s worth pointing out that the AI software used to transcribe this conversation with Crouser interpreted “Chat” as “Chad,” slang for a stereotypical alpha male, which seems fitting given what Crouser said next: “We know that some of the most prolific speakers on the internet are the ones that have a lot of social capital and have the ability to broadcast ideas through a lot of privilege, which is largely white cisgender men concentrated in certain geographic areas. We know that voices on the internet are being suppressed actively. If we think about what that means for the bias in the training data, and therefore what it means in terms of what this model will have learned about how humans tend to use language, it’s learned a biased representation.”

The underlying fear is that the racist, sexist, and otherwise abusive language found across the internet is embedded in data used to train systems that will have major repercussions for how we live, work, learn, and interact with one another. And it has posed important questions, such as: Can language models be too big? How will generative AI impact us? And what can we do to ensure AI best represents our worlds and worldviews?

Behind all of these questions are Smithies developing solutions for a more human-centered approach to AI.

AMANDA BALLANTYNE ’02 IS ON THE FRONT LINE of making sure that workers have a voice in how generative AI impacts careers and industries.

As the director of the AFL-CIO Technology Institute, Ballantyne is at the intersection of new technologies, workers, and their unions, looking at labor market trends, worker and civil rights, and democracy issues. She is also one of 25 members of the National AI Advisory Committee, a body that advises the president and the National AI Initiative Office on a range of topics, including AI workforce issues.

“In terms of automation and job loss, we do have concern about AI leaving people behind,” she says.

This concern is echoed in a recent AFL-CIO poll in which 70% of employees said they worry about being replaced by AI. The occupations most at risk, according to a report by the consulting firm McKinsey, are in production work, food services, customer service and sales, and office support—jobs held mostly by women. But labor unions in a range of sectors from agriculture to entertainment are striking to ensure workers remain central. For example, when the Writers Guild of America went on strike last year, its members demanded that studios and production companies disclose when material is generated by AI and ensure that AI is not listed as a credited writer. At the heart of these concerns is the fear of losing jobs, wages, and control.

Amanda Ballantyne ’02
Major: Government
House: Tenney

“If we want this technology to actually make our jobs better and safer and make our society better, it’s not only putting the guardrails on this technology to make it safe, but it’s also building confidence for people that they can move through this transition and end up in a better place,” Ballantyne said in a recent podcast episode recorded during The Washington Post’s Futurist Summit: The New Age of Tech.

Among the ways of building that confidence, according to Ballantyne, is focusing on stakeholder engagement and “involving workers, involving women, involving communities of color in the design of the technology and to influence how it’s developed,” she says. “At AFL-CIO, we believe when you bring workers into the conversation about what technology makes their job better and faster and safer and more efficient, you end up with better jobs and better technology.”

For example, AI is being used for everything from monitoring employee productivity and training call center agents to making initial decisions about whether Social Security disability cases advance to an appeal. What Ballantyne and others are hearing from employees is that the tech can improve their jobs, but the engineers who are designing these systems don’t always understand the complexity of the jobs they’re monitoring or the decisions being made. So, AFL-CIO partnered with Carnegie Mellon University to launch the What’s Human About Work initiative to interview call center agents and others to better map out their real job functions—beyond the ones listed in a formal position description.

To further keep humans at the center of digital tech, AFL-CIO also recently entered into a first-of-its-kind partnership with Microsoft. The goal? To better communicate AI trends to people on the front lines, involve them in how the technology can best be used, and shape policy centered on their needs.

ANNISAH UM’RANI ’99 IS FOCUSED ON ADDRESSING another need: improving the diagnosis and treatment of women’s health issues around the globe. As senior product counsel for Google Health—the tech titan’s initiative to advance health outcomes—she is part of a team using AI to speed up breast cancer diagnoses and reduce maternal mortality rates.

In the field of mammography, Google Health is focused on better detection of anomalies such as breast cancer, which is the leading cause of cancer death for women in 95% of countries worldwide—and the second-leading cause of cancer death for women in the United States.

“Through AI, we can identify if there is a potential issue in a mammogram and then have a radiologist look at it,” Um’rani says. “I think that’s really important in terms of improving the process and reducing anxiety, because a lot of anxiety arises while waiting for the result. We’re looking to cut that time down significantly—as soon as the same day.”

Though some results can be delivered at the time of screening, according to the breast cancer foundation Susan G. Komen, others can take up to two weeks for a routine screening in the United States and are allowed by federal law to take up to 30 days. Beyond elevating anxiety, a longer waiting period can also increase a patient’s chances of dying from the disease. But by being trained on what to look for in the imagery, AI is reducing the wait time—and the strain on radiologists—by detecting breast cancer more accurately, quickly, and consistently. A new study published in the journal The Lancet Oncology found that screenings read by AI together with a radiologist detected 20% more cancers than those read by radiologists alone, while cutting radiologists’ screen-reading workload by 44%.

Google Health is also partnering with Chang Gung Memorial Hospital in Taiwan and with Jacaranda Health in Kenya, a country with one of the highest maternal mortality rates in the world. The Chang Gung partnership is exploring ways AI might be used to detect breast cancer via ultrasound, as mammograms are less accessible in rural and lower-income regions and less effective in people with higher breast density. With Jacaranda Health, the tech giant is leveraging AI to make ultrasounds easier to perform and interpret, especially in areas where ultrasound technicians aren’t always available.

“The cheaper handheld ultrasound uses machine learning to help identify when something is diverging from what is normal and allows providers to track a person’s pregnancy,” Um’rani says.


ŞERIFE (SHERRY) WONG ’00 WILL TELL YOU she’s not an academic or a computer scientist. She’s an artist living in California’s Silicon Valley. Her role in AI came about naturally after she became worried about the ethical problems she saw surfacing.

Şerife (Sherry) Wong ’00
Major: English language and literature
Houses: Talbot and Chase

Several years ago, Wong decided she wanted to tackle the challenges posed by the emerging tech but felt alone—until she started talking to others about what made her nervous and discovered that the network of like-minded individuals was far more vast than she had imagined.

Keeping up with all the projects aimed at addressing the problems of AI can be daunting—especially given the rapid pace of its growth. To help researchers understand the diverse range of AI ethics and governance initiatives and discover the new organizations popping up, Wong developed Fluxus Landscape, an art and research project that mapped over 500 stakeholders. The project was created in partnership with Stanford University’s Center for Advanced Study in the Behavioral Sciences.

Through an interactive 3D map, users can explore AI topics and discover a global network of actors spanning academia, nonprofit organizations, governments, and corporations working on everything from accountability and civic impact to policy and human rights. Selecting the topic “Autonomous Weapons,” for example, yields 24 results, including the United Kingdom’s International Committee for Robot Arms Control, focused on the use of robots in peace and war; Canada’s Open Roboethics Institute, a think tank at the University of British Columbia that takes a stakeholder-inclusive approach to questions of self-driving vehicles and lethal autonomous weapons; and the U.S. Department of Defense–funded Moral Competence in Computational Architectures for Robots, aimed at integrating moral competence in robots.

“Stanford didn’t hire me because I had a Ph.D. in AI,” Wong says. “They hired me because I am an artist. They wanted someone who had the skills that I have: to bring things together and share an interdisciplinary perspective.”

The project launched in 2019 and remains the largest interactive landscape survey of its kind to pull together large tech companies, governing bodies, nonprofits, activist groups, and artists like Wong.

“The map is curatorial and qualitative—not indexed and quantitative,” she writes in the project statement. “Unlike the data scraped by computers and sorted by rules, the data within the Fluxus Landscape were gathered one by one and categorized through deep conversations.”

Wong purposefully emphasizes the bias in her own data collection process to underscore how all methods of data collection, including automated data scraping, are inevitably biased and shaped by personal choice.

Today, Wong continues to study AI actors and initiatives as part of her art practice, including by leading Icarus Salon, which explores the politics of technology, and serving as an affiliate research scientist at Kidd Lab at the University of California, Berkeley. And she continues to advocate for having AI and data managed at the community level rather than adhering to the top-down, “more is more” approach adopted by most tech companies. The latter risks reinforcing stereotypes and further marginalizing underrepresented people, including Wong herself.

“We’re learning more and getting better at understanding the challenges of AI as well as how to fight and make change and stick up for people who are traditionally marginalized or oppressed,” Wong says. “Ironically, being so interested in how this technology could be harming us led me to understand the world better—and myself better. It’s made me want to connect more with my Turkish and Hawaiian roots, to question which experiences are being left out, and to challenge who gets to make those decisions.”

A CONCERN ABOUT EXCLUSION LIKEWISE INSPIRED YACINE FALL ’21 to rethink how data is collected from underrepresented communities, especially when public health decisions are on the line.

“When I was doing global health research, there was this assumption that we would know the best people to talk to in different communities in order to collect certain health data,” says Fall, a fellow at Bloomberg Associates who graduated from the Harvard School of Public Health in 2023. “Growing up in a Senegalese Muslim household, I saw that we passed down our stories through oral history and collective culture. I asked, ‘How can science and technology learn from that?’”

Yacine Fall ’21
Majors: Biological sciences and African studies
Houses: Northrop and Parsons Annex

Her answer is Hyve, a platform that gathers data from historically underrepresented communities. Using AI to translate across language barriers and organize information from participants, Hyve follows Fall’s three guiding principles of data collection: Localize how data is being collected to better gain trust; take data security seriously—which means removing all identifiers, ensuring compliance with international data privacy standards, and deleting recordings as soon as the project is over; and leverage the “auntie network.”

“The auntie network comes back to the idea that people who are from an area, people who are living that experience, are the best experts to tell you about their own experience,” Fall says. “When you don’t know where to go, find the aunties who gather under the baobab tree.”
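
Fall’s second principle, data security, is the easiest to picture in code. The sketch below is purely illustrative, with invented field names rather than anything from Hyve itself: direct identifiers are dropped before a response is stored, and the recording is deleted once it has been transcribed.

```python
import os

# Illustrative sketch of the data-security principle (not Hyve's actual
# pipeline): keep only a de-identified transcript, then delete the recording.
# The field names ("speaker_name", "village", etc.) are invented.

IDENTIFIERS = {"speaker_name", "phone_number", "village"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers, keeping only the substance of the response."""
    return {key: value for key, value in record.items() if key not in IDENTIFIERS}

def ingest(record: dict, audio_path: str) -> dict:
    """Store the de-identified transcript; remove the recording once transcribed."""
    cleaned = deidentify(record)
    if os.path.exists(audio_path):
        os.remove(audio_path)  # recordings are not retained after transcription
    return cleaned

response = {
    "speaker_name": "Awa",  # invented example
    "village": "somewhere in Senegal",
    "transcript": "We walk two hours to reach the nearest clinic.",
    "language": "Wolof",
}
print(ingest(response, "interview_001.wav"))
# {'transcript': 'We walk two hours to reach the nearest clinic.', 'language': 'Wolof'}
```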

The results are promising. During its pilot program in rural Senegal, the organization saw an 84% increase in data collection efficiency in the region. The major change? Switching from written to oral methods. Fall knew that most people in the community didn’t have reliable internet access. By looking at the community members’ needs—such as having a conversation rather than asking them to fill out a survey online—Fall was able to gain trust and increase women’s participation.

“Biased AI emerges when algorithms are not trained on representative data,” Fall says. “AI practitioners have an ethical responsibility to build inclusive tools. Data deserts don’t just exist, they are historically formed and systematically maintained. With Hyve, we’re trying to close the data desert gap to realize better health outcomes for everyone.”

AT SMITH, NEXT YEAR’S KAHN INSTITUTE LONG-TERM PROJECT will explore how AI may reshape the human experience. Led by Professor of Mathematical Sciences Luca Capogna and Susan Levin, Roe/Straut Professor in the Humanities and professor of philosophy, the project will bring together a multidisciplinary group of faculty to discuss AI-related challenges and opportunities, ranging from work and health care to how best to collect “good” data.

Jordan Crouser—the professor behind that back-of-envelope calculation, based on roughly 45 terabytes of training data, or about 25 trillion words, read at an average speed of 238 words per minute—is one of the fellows participating in the project, as is Associate Professor of Computer Science Jamie Macbeth. They are also among the Smith faculty already at work teaching the next generation of computer scientists not just how to make AI better but also what it can teach us about ourselves.
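
For readers who want to check the arithmetic, here is a minimal sketch in Python. The corpus size and reading speed are the rough figures quoted above, treated as assumptions, so the output is an order of magnitude rather than a measurement.

```python
# Rough reproduction of Crouser's back-of-envelope estimate. The corpus
# size and reading speed are assumed figures, so treat the result as an
# order of magnitude, not a measurement.

WORDS_IN_CORPUS = 25e12      # ~25 trillion words of training text (assumption)
READING_SPEED_WPM = 238      # average adult reading speed, words per minute
LIFETIME_YEARS = 100         # one "reading lifetime," 24 hours a day, 7 days a week

minutes_per_lifetime = LIFETIME_YEARS * 365.25 * 24 * 60
words_per_lifetime = READING_SPEED_WPM * minutes_per_lifetime  # ~12.5 billion words

lifetimes_needed = WORDS_IN_CORPUS / words_per_lifetime
print(f"Reading lifetimes needed: {lifetimes_needed:,.0f}")    # ~2,000
```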

Crouser encourages his students to be curious about generative AI. For instance, he’ll project ChatGPT on a screen in front of a class and prompt it to write his bio. As the chatbot produces text line by line, Crouser will point out how the result is grammatically correct and looks good, but it’s “factually incorrect, every single time.” For example, if the result mentions Crouser’s doctorate at all, it’s often attributed to the wrong university, citing North Carolina State rather than Tufts because there’s more material online connecting him to the former.

“Part of my approach to teaching students is not shying away from the technology but rather saying, ‘Let’s look at what it’s doing,’” he says. “‘What’s missing and what’s not?’ Those questions get at the heart of understanding where data comes from and how it may be flawed.”

This includes questioning the LLMs that generative AI draws from—and, in the case of Macbeth and Assistant Professor of Computer Science Johanna Brewer, pushing back against them.

As part of his research, Macbeth works with symbolic artificial intelligence systems, which are built by hand-selecting material rather than relying on a massive collection of data. He compares the difference to discussing how the nucleus of an atom works versus building an A-bomb.

“We’re focused on building AI systems to better understand humanity and human intelligence rather than building something huge that can make billions of dollars,” Macbeth says. “We build structures by hand, so we totally know everything that’s in them, and we totally understand them as well.”
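
To make the contrast concrete, here is a toy sketch of what a hand-built symbolic representation can look like. It is an illustration of the general idea, not Macbeth’s actual system, and the frames and role names are invented for the example.

```python
# Toy sketch of a hand-built symbolic representation (not Macbeth's actual
# system). Every concept and rule below was written by hand, so a person
# can inspect exactly what the program "knows" and why.

# A tiny hand-authored lexicon: each word maps to an explicit frame.
LEXICON = {
    "give": {"event": "transfer", "roles": ["giver", "recipient", "object"]},
    "book": {"category": "physical-object", "readable": True},
}

def interpret(verb: str, *arguments: str) -> dict:
    """Build a meaning representation using only the hand-authored frames."""
    frame = LEXICON[verb]  # fails loudly if the concept was never defined
    return {"event": frame["event"], **dict(zip(frame["roles"], arguments))}

print(interpret("give", "Ana", "Sam", "book"))
# {'event': 'transfer', 'giver': 'Ana', 'recipient': 'Sam', 'object': 'book'}
```

Because every frame is authored by a person, anyone can open the lexicon and see exactly why the system produced a given interpretation; there is no opaque statistical model to second-guess.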

The benefit is fourfold, Macbeth says. The data can be designed more inclusively. There’s less worry about the technology growing beyond human control. The systems are more environmentally responsible because they don’t have the massive carbon footprint that comes from training and hosting LLMs. And we can potentially learn more from them, such as how we understand language.

The approach underscores what is being taught in Smith’s classrooms: AI may present problems, but there are always solutions to be found.

“Technology is never inherently good or evil,” Brewer says. “It is just a tool for us to express our humanity.”

Within the computer science department, Brewer is known as the resident social justice and equity expert. They describe themselves as a “tryhard indiginerd hacktivist...doing my best to represent my peoples,” including those who are nonbinary, Native American, and neurodivergent. As the creator of the Inclusive Design Lab at Smith and co-director of AnyKey, which advocates for inclusion in gaming and livestreaming, Brewer has seen the perils and promise of emergent technologies but likes to focus on the human role in both.

That task is often the focus of Brewer’s courses, in which they present students with real-world scenarios they’ll encounter in their careers, such as being pressured to reuse code from an underfunded startup or to redact research findings that don’t align with their corporation’s beliefs.

“It’s really preparing students for the sociotechnical complexity that they’ll face in their careers and recognizing that things aren’t fixed instantly but they’re also not completely ruined,” Brewer says. “Many students feel overwhelmed, and my focus is encouraging them to get involved and understand where the points of change are, interrogating them, and just going and going and going.”

Which, in many ways, is the story of humanity writ large. We survive and endure. We encounter problems and find solutions. We develop new technologies and find ways to better adapt them to our needs.

Artificial intelligence is vast and complex, but it doesn’t have to be scary. After all, as Brewer says, “All AI is made by people. It’s nothing more than us.”

Laura J. Cole is a freelance writer, editor, and content strategist based in Portland, Oregon.

Portrait illustrations by Abigail Giuseppe.