During my PhD in political science, I have found myself reading widely in anthropology. That training fundamentally changed how I interpret the world. My husband once joked, “Is everything I say a symbol to you now?” In some ways, yes. Anthropology taught me to see everyday interactions as part of larger systems of meaning. My social science training taught me to look past individual behavior and ask what larger systems, social or technical, make certain outcomes more likely.
This way of thinking has become unexpectedly valuable in the age of AI.
When I was finishing my master’s degree at Northeastern University, I took a machine learning class out of curiosity. I didn’t anticipate that, by the time I was a few years into my PhD program at Boston University, AI (particularly large language models) would evolve so rapidly and become so deeply embedded across industries. While I was training to be a social scientist, the technological landscape was transforming at an extraordinary pace.
That coincidence has shaped my perspective. I came to see that the role of social scientists today is not separate from technological development but deeply intertwined with it.
AI in its current form has rapidly and fundamentally transformed many sectors, from algorithmic trading in finance to customer service chatbots to classroom pedagogy. While the use of AI in these sectors is often described as a neutral, data-driven process, it is not inherently ethical. For example, during my PhD, I worked on a project that used AI tools to classify text as propagandistic or objective. But if the historical examples used to guide that classification reflect narrow assumptions about political language, the model can reproduce those assumptions while presenting them as neutral analysis. In practice, this means that contested judgments about ideology and propaganda can be embedded in the system itself. The ethical problem is not just misclassification. It is that AI can make subjective political interpretations appear objective.
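To make that concrete, here is a minimal sketch, emphatically not the actual project’s pipeline, of how a classifier inherits the assumptions baked into its training labels. The example sentences, the “propaganda”/“objective” labels, and the scikit-learn components are all illustrative assumptions on my part.

```python
# A minimal sketch (illustrative, not the project's actual pipeline) of how
# a text classifier inherits the assumptions encoded in its training labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each label is a human judgment, not a fact.
# If the annotators' notion of "propaganda" is narrow, the model's is too.
texts = [
    "The glorious leader has delivered unprecedented prosperity.",
    "Officials reported a 2.1 percent rise in quarterly output.",
    "Enemies of the people spread lies to divide our great nation.",
    "The committee will review the proposal at next week's session.",
]
labels = ["propaganda", "objective", "propaganda", "objective"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The output looks like neutral, data-driven analysis, but it can only
# restate the contested judgments that produced `labels` above.
print(model.predict(["Our heroic workers triumph over foreign sabotage."]))
```

Nothing in the code marks the labels as contested; once they are wrapped inside a fitted model, they read as measurement rather than interpretation.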
My PhD dissertation examines the political and economic consequences of the end of China’s one-child policy, particularly its effects on men’s and women’s labor market outcomes. I also consider how ongoing technological shifts may affect these dynamics. One lesson from that work is that large-scale systems are often designed around simplified assumptions about people’s behavior (for a classic and widely cited account, see Scott 1998). In China, the one-child policy and the later two- and three-child policies were not designed as gender policies, yet they reshaped women’s economic status in lasting ways.
My research has made me skeptical of any claim that a large-scale system is neutral simply because it is optimized for efficiency. In both public policy and AI, designers choose what to measure and what to optimize for. These choices shape real lives. In China, market reform and family planning policy did not affect everyone equally; they had uneven consequences for people’s labor market participation, economic security, and household decision-making. AI systems raise similar questions. What is being optimized? Whose behavior is treated as the norm? Who bears the cost when a system works well for some groups but poorly for others?
Ultimately, while the rapid advancement of AI is exciting, it also demands careful attention. Without it, we risk unintended consequences that extend far beyond the technical domain.
Scott, James C. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press, 1998.