Amid the philosophical and ethical issues swirling around the booming artificial intelligence (AI) revolution—from questions about privacy, safety risks and surveillance to generative AI deepfakes, AI companionship and AI systems going rogue—Syracuse University professor Johannes Himmelreich views its use with guarded optimism. “I think there is a good way of using AI and I want to contribute to finding it,” he says. “We should be using AI, but the question is how should we use it?”
Himmelreich has long been fascinated by the role of philosophy and ethics in technology. And as the landscape of AI rapidly expands, he is investigating its impact on society. “I have interests across a whole range of topics in philosophy, and what binds many of them together is their association with AI,” says the associate professor of public administration and international affairs in the Maxwell School of Citizenship and Public Affairs. “I’m drawn to issues that intersect with emerging social topics or use new methods. That’s why computation and AI are an attractive topic for me to work on—so many changes coming into philosophy today are from these areas.”
Himmelreich is focused on AI’s use in government and its role in decision-making, as well as AI regulations and policy. He’s co-editor of The Oxford Handbook of AI Governance (Oxford University Press, 2024), which examines the challenges and opportunities at the intersection of AI and governance. His interests in politics, social justice, computation and the digital economy often spark his interdisciplinary approach to research. He’s delved into such topics as killer robots and self-driving cars. With the support of a two-year grant from the National Endowment for the Humanities, Himmelreich is working on a book about the philosophy and ethics of data science and good decision-making. “If data science is about supporting decision-making, then you want to make sure the decisions are fair and don’t harm people,” he says.
Deliberating Data’s Ethical Dilemmas
Data-driven decision-making is deeply embedded in today’s world, offering informed choices through analysis designed to improve performance. The process is routinely accepted as factual and accurate, a fail-safe counter to “gut instinct” decisions. But for all its successes, flawed data can lead to unforeseen problems. And since massive datasets fuel AI, Himmelreich sees inherent ethical dilemmas associated with data, such as bias and privacy risks. His current focus is on how the methods researchers use to collect, clean, analyze, model and present data can unintentionally distort the truth.
“The question—What is good data?—is not as easy to answer as you might think, because good data is sometimes made up, generated by what’s called synthetic data,” he says.
Synthetic data is artificially generated information that resembles real data, mimicking its patterns, relationships and structures. Although it isn’t collected from actual people or events, it is used to test and train AI models.
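To make the idea concrete, here is a minimal sketch (my own illustration, not an example from Himmelreich’s work) of how synthetic records can be produced: fit a simple statistical model to a small set of real values, then sample new, artificial values from it. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A small "real" dataset: monthly incomes (in dollars) of 10 people.
real_incomes = np.array([2100, 2400, 2250, 3100, 2800,
                         2600, 2950, 2300, 2700, 2500])

# Fit a simple model to the real data: estimate its mean and spread.
mu, sigma = real_incomes.mean(), real_incomes.std()

# Sample 1,000 synthetic incomes that mimic the real distribution.
synthetic_incomes = rng.normal(mu, sigma, size=1000)

print(f"real mean:      {mu:.0f}")
print(f"synthetic mean: {synthetic_incomes.mean():.0f}")
# The synthetic records resemble the real ones statistically,
# but no individual row corresponds to an actual person.
```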
Often this type of data drives AI’s progress in applications such as ChatGPT, facial recognition and models trained on tabular data. “AI becomes smarter with more data, and the more data you have, the better your AI is,” Himmelreich says. “That scaling law also holds for made-up data. The algorithm doesn’t care if the data is real or not. It just wants more.”
If the original data is distorted, the synthetic data can inherit that distortion. “The sinister part is not that synthetic data is made up,” he says. “You might not even know it’s happening because to show distortion you need to have data, and oftentimes you don’t have the data to show there’s a distortion.”
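A toy illustration of that inheritance (again my own sketch, not an example from the article): if one group is overrepresented in the source sample, a generator fitted to that sample reproduces the same skew, and nothing inside the synthetic data reveals it.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical source sample: group A is 90% of rows even though
# it is only 50% of the real population (a sampling distortion).
source_groups = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])

# A generator fitted to the source simply learns its group frequencies...
p_a = (source_groups == "A").mean()

# ...so synthetic data drawn from it inherits the same skew.
synthetic_groups = rng.choice(["A", "B"], size=100_000, p=[p_a, 1 - p_a])

print(f"share of group A in source:    {(source_groups == 'A').mean():.2f}")
print(f"share of group A in synthetic: {(synthetic_groups == 'A').mean():.2f}")
# Both shares are ~0.9. Without an independent, trustworthy sample of
# the real population, nothing here reveals the 50/50 ground truth.
```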

Employing AI in Government Decision-Making
Himmelreich highlights one routine example of AI employed in the public sector: The Social Security Administration uses AI to help decide who qualifies for disability benefits, bringing automation into some of the most critical, human-centered decisions in government. AI systems are trained to detect fraud, seemingly making it simpler to process claims and decisions, but they produce both false positives and false negatives.
This raises the ethical issue of which mistake is worse: denying an individual’s legitimate claim, which could jeopardize their livelihood, or missing a fraudulent case that costs taxpayers money. “The challenge is to figure out which mistakes you are more willing to accept, given that you will make mistakes,” he says.
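One standard way to make that trade-off explicit is to assign a cost to each kind of error and choose the decision threshold that minimizes total expected cost. The sketch below is a generic cost-sensitive-threshold example under invented assumptions, not a description of the Social Security Administration’s actual system; every score and cost is made up.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Simulated claims: True = fraudulent, False = legitimate (5% fraud rate).
is_fraud = rng.random(10_000) < 0.05
# A hypothetical model's fraud scores: higher for actual fraud, but noisy.
score = np.clip(0.5 * is_fraud + rng.normal(0.3, 0.15, is_fraud.size), 0, 1)

# The policy choice, not a technical one: how bad is each mistake?
COST_FALSE_POSITIVE = 10.0  # flagging (and delaying) a legitimate claim
COST_FALSE_NEGATIVE = 3.0   # paying out a fraudulent claim

# Pick the flagging threshold that minimizes total expected cost.
best = min(
    np.linspace(0, 1, 101),
    key=lambda t: (
        COST_FALSE_POSITIVE * np.sum((score >= t) & ~is_fraud)
        + COST_FALSE_NEGATIVE * np.sum((score < t) & is_fraud)
    ),
)
print(f"cost-minimizing threshold: {best:.2f}")
# Raising COST_FALSE_POSITIVE pushes the threshold up, so fewer
# legitimate claimants are flagged at the price of more missed fraud.
```

The ethics live in the two cost constants: the code only makes explicit which mistakes the agency has decided it is more willing to accept.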
An appropriate use of AI, Himmelreich believes, is in sorting cases by difficulty, prompting decision-makers to devote more attention to challenging cases. “That’s not just true for disability insurance or unemployment insurance, it’s also true in the medical sector,” he says. “We are already using AI for cancer diagnosis and assisting radiologists in reading medical imaging.”
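Here is a minimal sketch of that sorting idea, assuming a case’s “difficulty” can be proxied by how uncertain the model is (the article doesn’t specify how cases would be ranked; everything below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical model outputs: each case's predicted probability of
# approval (values near 0.5 mean the model is least sure).
case_ids = np.arange(20)
p_approve = rng.random(20)

# Uncertainty as closeness to a coin flip: 0.5 - |p - 0.5|.
uncertainty = 0.5 - np.abs(p_approve - 0.5)

# Route the most uncertain cases to experienced human reviewers;
# the rest get a faster, lighter-touch review.
hard = case_ids[np.argsort(uncertainty)[::-1][:5]]
print("cases routed to human review:", sorted(hard.tolist()))
# The model doesn't decide the hard cases; it decides which
# cases deserve a person's full attention.
```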
Himmelreich believes that problems in AI projects often happen at the point where humans and machines interact. To prevent these breakdowns, he says it’s important to have a clear goal, a well-defined step-by-step process and strong communication. “Good data science is exactly at this interface and not necessarily in the technical analysis,” he says. “The really important skill for data scientists is to understand the situation of the decision-makers they’re supporting and produce something that augments their work and helps them make better decisions.”

In his Philosophy and Ethics of Data Science graduate course, Himmelreich emphasizes that data science involves countless dilemmas, trade-offs and value conflicts. “I want my students to have the knowledge and methods to solve those dilemmas and to have confidence in their ability to solve them,” he says.
Balancing Benefits and Harms
From its earliest days, safety concerns have gone hand in hand with AI. Himmelreich cautions that the more prevalent AI’s use becomes in government and other areas, the more risk is introduced. “The harms that can come from AI can be much more nefarious because they’re harder to detect,” he says. “The question of how we control AI is really relevant.”
Himmelreich is drawn to how technologies are designed and built, and to understanding how they’re implemented, for better or worse. Ultimately, he hopes AI enhances society, providing support where it’s needed rather than replacing us. That would let us play to our strengths and reap the rewards of AI while acknowledging its limitations and guarding against erroneous behavior. “To me,” he says, “the ethical questions are super important because over the next few years we will make important decisions about how we’re going to use AI.”