
The slippery slope of using AI and deepfakes to bring history to life

  • Written by Nir Eisikovits, Associate Professor of Philosophy and Director, Applied Ethics Center, University of Massachusetts Boston

To mark Israel’s Memorial Day in 2021, the Israel Defense Forces musical ensembles collaborated with a company that specializes in synthetic videos, also known as “deepfake” technology, to bring photos from the 1948 Arab-Israeli war to life.

They produced a video[1] in which young singers clad in period uniforms and carrying period weapons sang “Hareut,” an iconic song commemorating soldiers killed in combat. As they sing, the musicians stare at faded black-and-white photographs they hold. The young soldiers in the old pictures blink and smile back at them, thanks to artificial intelligence.

The result is uncanny. The past comes to life, Harry Potter style[2].

For the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center[3] have been studying how everyday engagement with AI[4] challenges the way people think about themselves and politics. We’ve found that AI has the potential to weaken people’s capacity to make ordinary judgments[5]. We’ve also found that it undermines the role of serendipity[6] in their lives and can lead them to question what they know or believe about human rights[7].

Now AI is making it easier than ever to reanimate the past. Will that change how we understand history and, as a result, ourselves?

Musicians dressed as soldiers connect with soldiers in old photographs in a 2021 production by the Israel Defense Forces and the artificial intelligence company D-ID.

Low financial risk, high moral cost

The desire to bring the past back to life in vivid fashion is not new. Civil War or Revolutionary War reenactments are commonplace. In 2018, Peter Jackson painstakingly restored and colorized World War I footage to create “They Shall Not Grow Old[8],” a film that allowed 21st-century viewers to experience the Great War more immediately than ever before.

Live reenactments and carefully processed historical footage are expensive and time-consuming undertakings. Deepfake technology democratizes such efforts, offering a cheap and widely available tool for animating old photos or creating convincing fake videos from scratch.

But as with all new technologies, alongside the exciting possibilities are serious moral questions. And the questions get even trickier when these new tools are used to enhance understanding of the past and reanimate historical episodes.

The 18th-century writer and statesman Edmund Burke famously argued[9] that society is a “partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born.” Political identity, in his view, is not simply what people make of it, not merely a product of their own fabrication. Rather, to be part of a community is to be part of a compact between generations – part of a joint enterprise connecting the living, the dead and those who will live in the future.

If Burke is right to understand political belonging this way, deepfake technology offers a powerful means of forging this intergenerational contract. By bringing the past to life in a vivid, convincing way, the technology enlivens the “dead” past and makes it feel present. If these images spur empathy and concern for ancestors, deepfakes can make the past matter a lot more.

But this capability comes with risk. One obvious danger is the creation of fake historical episodes. Imagined, mythologized and fake events can precipitate wars: a storied 14th-century defeat in the Battle of Kosovo still inflames Serbian anti-Muslim sentiments, even though nobody knows[10] if the Serbian coalition actually lost that battle to the Ottomans.

Similarly, the second Gulf of Tonkin attack on American warships on Aug. 4, 1964, was used to escalate American involvement in Vietnam. It later turned out the attack never happened[11].

An atrophying of the imagination

It used to be difficult and expensive to stage fake events. Not anymore.

Imagine, for example, what strategically doctored deepfake footage from the Jan. 6 events in the United States could do to inflame political tensions or what fake video from a Centers for Disease Control and Prevention meeting appearing to disparage COVID-19 vaccines would do to public health efforts.

The upshot, of course, is that deepfakes may gradually destabilize the very idea of a historical “event.” Perhaps over time, as this technology advances and becomes ubiquitous, people will automatically question whether what they are seeing is real.

Whether this will lead to more political instability, or, paradoxically, to more stability born of a hesitancy to act on possibly fabricated events, is an open question.

But beyond anxieties about the wholesale fabrication of history, there are subtler consequences that worry me.

Yes, deepfakes let us experience the past as more alive and, as a result, may increase our sense of commitment to history. But does this use of the technology carry the risk of atrophying our imagination – providing us with ready-made, limited images of the past that will serve as the standard associations for historical events? An exertion of the imagination can render the horrors of World War II, the 1906 San Francisco earthquake or the 1919 Paris Peace Conference in endless variations.


But will people keep exerting their imagination in that way? Or will deepfakes, with their lifelike, moving depictions, become the practical stand-ins for history? I worry that animated versions of the past might give viewers the impression that they know exactly what happened – that the past is fully present to them – which will then obviate the need to learn more about the historical event.

People tend to think that technology makes life easier. But they don’t realize that their technological tools always remake the toolmakers – causing existing skills to deteriorate even as they open up unimaginable and exciting possibilities.

The advent of smartphones meant photos could be posted online with ease. But it’s also meant that some people don’t experience breathtaking views as they used to[13], since they’re so fixated on capturing an “instagrammable” moment. Nor is getting lost experienced the same way since the ubiquity of GPS. Similarly, AI-generated deepfakes are not just tools that will automatically enhance our understanding of the past.

Nevertheless, this technology will soon revolutionize society’s connection to history, for better and worse.

People have always been better at inventing things than at thinking about what the things they invent do to them – “always adroiter with objects than lives,” as the poet W.H. Auden put it[14]. This incapacity to imagine the underside of technical achievements is not destiny. It is still possible to slow down and think about the best way to experience the past.


Read more https://theconversation.com/the-slippery-slope-of-using-ai-and-deepfakes-to-bring-history-to-life-166464

Metropolitan republishes selected articles from The Conversation USA with permission
