
AI and the Fight Against Deepfakes

Syracuse ROTC students help combat emerging digital threats.

Pope Francis in a puffer coat. Tom Cruise performing magic tricks. President Volodymyr Zelenskyy calling for surrender.

In the age of artificial intelligence (AI), seeing is no longer believing. From lighthearted memes and Instagram filters to phishing scams and deepfake propaganda, AI is blurring the line between fact and fiction—and spreading falsehoods faster than truth can catch up.

“The things that fool us humans haven’t changed, but the ability to create those things at scale has really changed in the last five years,” says Jason Davis, a research professor in the S.I. Newhouse School of Public Communications.


Newhouse professor Regina Luttrell supports student researchers as they develop tools to identify deceptive digital content.

Through a Department of Defense grant, Davis and co-principal investigator Regina Luttrell—Newhouse senior associate dean and associate professor—are working with Syracuse University Reserve Officers’ Training Corps (ROTC) Army cadets Glenn Miller ’27, Daniel Canhedo ’27 and Allison Simpson ’27 to develop innovative tools that detect and defend against disinformation across text, image and video.

Deepfake Defense

Miller, a computer engineering major in the College of Engineering and Computer Science, is developing a deepfake detection model that verifies the authenticity of live video calls.

“The end goal is to have a model where if you’re on a Zoom call with someone, you can verify if that person is real or a deepfake,” Miller explains. “Is this person actually who they say that they are?”

The work builds on lessons learned from incidents like the Zelenskyy deepfake. “With the ever-evolving nature of AI and deepfake technology, it could be a very real threat of disinformation,” Miller says.
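The article doesn't detail Miller's model, but the general shape of such a system can be sketched: sample frames from the call, score each frame with a trained real-versus-synthetic classifier and flag the call if the average score crosses a threshold. In the Python sketch below, `load_detector` is a hypothetical placeholder, not the project's actual model.

```python
# A minimal sketch of frame-level screening for a live video feed.
# load_detector is a hypothetical stand-in for a trained real-vs-synthetic
# face classifier; nothing here is taken from the actual project.
import cv2  # pip install opencv-python
import numpy as np

def load_detector():
    # Placeholder: a real system would load trained weights here. This stub
    # returns random scores purely so the sketch runs end to end.
    rng = np.random.default_rng(0)
    return lambda frame: float(rng.random())

def screen_call(threshold: float = 0.5, max_frames: int = 100) -> bool:
    """Sample frames from the default camera and flag a likely deepfake."""
    detector = load_detector()
    capture = cv2.VideoCapture(0)  # the live video source, e.g. a webcam
    scores = []
    try:
        for _ in range(max_frames):
            ok, frame = capture.read()
            if not ok:
                break
            # Downscale before scoring; a real pipeline would first detect
            # and crop the face region in each frame.
            scores.append(detector(cv2.resize(frame, (224, 224))))
    finally:
        capture.release()
    # Average the per-frame scores; above the threshold suggests synthetic.
    return bool(scores) and sum(scores) / len(scores) > threshold

if __name__ == "__main__":
    print("Possible deepfake" if screen_call() else "No red flags")
```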

Bot or Not?


Jason Davis (left) and Daniel Canhedo ’27 analyze an image to determine whether it was human- or AI-generated.

Canhedo and Simpson, both computer science majors, are addressing another common challenge: distinguishing authentic online interactions from automated bots.

“A lot of YouTube comments are bots—many of them are phishing comments that try to get money from innocent people,” Canhedo explains. “Basically, we’re working on creating a neural network machine learning model that can identify whether a YouTube comment is a bot or not.”
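The article doesn't specify the model's architecture or training data, but the core idea, turning comment text into features and training a small neural network to separate bot from human comments, can be sketched with standard scikit-learn components. The sample comments and labels below are invented for illustration.

```python
# A toy sketch of a bot-vs-human comment classifier: TF-IDF features feeding
# a small neural network. The sample data is invented; the project's actual
# model and dataset are not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

comments = [
    "Great video, thanks for explaining this so clearly!",
    "I tried this at home and it worked on the second attempt.",
    "CONGRATS you are selected!! message me on telegram to claim prize",
    "Earn $500 a day from home, click the link in my bio now!!",
]
labels = [0, 0, 1, 1]  # 0 = human, 1 = bot/phishing

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(comments, labels)

# Expected to flag as bot-like, given the phishing-style wording.
print(model.predict(["message me on telegram to claim your prize"]))
```

A classifier like this is only as good as the breadth and balance of the comments it is trained on, which is where the dataset work described next comes in.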


Their “Bot or Not” project involves building comprehensive datasets to train more accurate detection systems. “We are gathering a lot of data,” Simpson says. “We need to collect both real comments and AI-generated comments to be able to train the machine learning models to be more accurate.”
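That data-gathering step might look something like the sketch below, where the file names, the one-comment-per-line JSON format and the 80/20 split are illustrative assumptions rather than details from the project.

```python
# A sketch of assembling a labeled training set from collected comments.
# The file names and the "text" JSON field are illustrative assumptions,
# not details from the project.
import json
import random

def load_comments(path: str, label: int) -> list[dict]:
    """Assumes one JSON object per line, each with a "text" field."""
    with open(path, encoding="utf-8") as f:
        return [{"text": json.loads(line)["text"], "label": label}
                for line in f if line.strip()]

def build_dataset(real_path: str, synthetic_path: str, test_frac: float = 0.2):
    """Pool real (0) and AI-generated (1) comments, then hold out a test split."""
    data = load_comments(real_path, 0) + load_comments(synthetic_path, 1)
    random.Random(0).shuffle(data)  # fixed seed so splits are reproducible
    cut = int(len(data) * (1 - test_frac))
    return data[:cut], data[cut:]   # train, test

if __name__ == "__main__":
    # Hypothetical files of collected human and AI-generated comments.
    train, test = build_dataset("real_comments.jsonl", "ai_comments.jsonl")
    print(f"{len(train)} training examples, {len(test)} held out for testing")
```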

Newhouse Synthetic Media Lab


Student researchers develop a machine learning model to identify bot-generated YouTube comments.

Once a model has been repeatedly trained and tested against the collected data, it becomes a tool in the newly launched Newhouse Synthetic Media Lab. The online suite currently houses more than 20 tools built and continually updated by Syracuse student and faculty researchers to analyze text, images and video for signs of manipulation or deception.

“Taking a really hard and seemingly difficult concept, compartmentalizing it into a series of steps, and then actually developing a finished product—that was really rewarding,” Canhedo says.

The tools in the Newhouse Synthetic Media Lab use a three-layer model to evaluate and respond to synthetic content (a rough code sketch follows the list):

  1. Detection: Is it real or synthetic?
  2. Attribution: Who made it? What tool or model was used?
  3. Characterization: Is it malicious or benign? Factual or manipulative?
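As a rough illustration only, the three layers can be expressed as a simple pipeline in which each layer's internals are placeholders; none of this code comes from the lab itself.

```python
# A structural sketch of the three-layer evaluation model. The layer
# functions are placeholders so the sketch runs end to end; real detectors,
# attributors and characterizers would replace them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    synthetic: bool            # Detection: real or synthetic?
    source: Optional[str]      # Attribution: what tool or model made it?
    malicious: Optional[bool]  # Characterization: malicious or benign?

def detect(content: bytes) -> bool:
    return b"synthetic" in content  # placeholder heuristic

def attribute(content: bytes) -> str:
    return "unknown-generator"      # placeholder attribution

def characterize(content: bytes) -> bool:
    return False                    # placeholder: assume benign

def evaluate(content: bytes) -> Verdict:
    synthetic = detect(content)                               # layer 1
    source = attribute(content) if synthetic else None        # layer 2
    malicious = characterize(content) if synthetic else None  # layer 3
    return Verdict(synthetic, source, malicious)

print(evaluate(b"a synthetic sample"))
# Verdict(synthetic=True, source='unknown-generator', malicious=False)
```

Keeping the layers separate means each question (is it real, who made it, is it harmful) can be answered by a different specialized model.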

“These detectors help us understand if we’re being fooled,” Luttrell says. “When an image is synthetic, it changes how we analyze the content around it. That transparency alone plays a big role in helping us protect ourselves.”

Preparing Future Leaders


Drawing on their research at Syracuse, ROTC cadets Allison Simpson ’27 and Daniel Canhedo ’27 aim to serve in the U.S. Army Cyber Corps.

With support from the Syracuse Office of Undergraduate Research and Creative Engagement, Davis and Luttrell have specifically sought out ROTC students for their interdisciplinary research team, which spans communication, computer science and policy.

“They’re just fantastic students,” says Davis, who provides cadets with flexible remote research opportunities that align with their unique schedules and military commitments. “I wanted to create opportunities for them to be able to do research that they find interesting, rewarding and that, most importantly, supports their next steps.”

Nearly every one of their students has secured a first-choice deployment, including Christopher Nemeth ’25, whom Miller, Canhedo and Simpson credit with getting them involved in their current projects. After working with Davis and Luttrell on the Defense Advanced Research Projects Agency’s Semantic Forensics project, Nemeth is now a second lieutenant in the U.S. Army’s Cyber Corps.


Davis intentionally brings ROTC cadets onto his research team, offering flexible opportunities that align with their military training and career goals.

Canhedo and Simpson both hope to follow suit. “Through our projects, we’ve been able to get a better understanding of the good and bad side of AI,” Canhedo explains. “Not only have we developed our technical understanding of things but also our ability to think more ethically and see how this can impact people. Overall, it is really awesome to be taking part in an initiative that’s so important and will only continue to grow in coming years.”

“The research has definitely opened doors and given me a lot of experience working with other people—which is especially important with where we’re headed as officers in the military,” Miller says.

As AI-generated content becomes increasingly sophisticated, the work of Davis, Luttrell and their team offers a path forward. Their tools empower people to navigate the digital world with greater confidence and clarity. For these student researchers, the work represents an opportunity to shape a more transparent and trustworthy digital future.
