This AI-generated photo of Pope Francis drew plenty of attention to artificial intelligence. Photo from Twitter

Funny? Maybe it is. Concerning? Definitely.

May 5, 2023

Popping up on Twitter just before Easter was a funny photograph of Pope Francis looking super cool in a puffer jacket.

Or maybe not: not funny, not real, not an isolated incident.

Pope Francis fell prey to a deepfake, which Merriam-Webster defines as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”

On a lark, someone used artificial intelligence (AI) to make the Pope look like a rock star. For a short while, much of the world believed the image to be a genuine photograph.

But now that we know it’s a fake, we must resist the temptation to shrug this off as a benign prank and see it for what it is: a deception with far-reaching consequences, the byproduct of a worrying trend in which the power of AI is taking off exponentially.

A branch of AI called machine learning draws on ever-growing amounts of information from the Internet. Algorithms and statistical models analyze this data to identify increasingly sophisticated patterns from which computer systems can draw inferences. This process occurs without human programmers supplying the system with explicit instructions; in essence, the system is learning how to learn on its own.

The result? AI tools can now supply interactive answers to questions, conduct back-and-forth conversations with humans, write essays, generate photographs and artwork, and perform many other functions, producing results that are virtually indistinguishable from what a human can create.

This “learning to learn” process has many profound implications. One area of concern is the quality of the data underlying the machine learning process. There is no overarching regulation, oversight or accountability for the accuracy and ethics of information posted on the Internet, making it very difficult for the technology to distinguish truth from lies.

As Melissa Heikkilä writes in her December 2022 MIT Technology Review article: “Large language models are trained on data sets that are built by scraping the Internet for text, including all the toxic, silly, false, malicious things humans have written online. The finished AI models regurgitate these falsehoods as fact, and their output is spread everywhere online.”

False information, whether inadvertent or intentional, can result in injury or death, lead people astray and even perpetuate gender, racial and other types of biases. For instance, the image of Pope Francis in a puffer jacket sends the subliminal message that he, and by extension the Catholic Church, is embracing consumerism, celebrity and other worldly values that conflict with Christian teachings, and it did so during Easter, a season of great significance for the Church.

Even though people now consciously know the photo is fake, the residue remains: one cannot “unsee” an image.

Another area of concern involves plagiarism and violations of copyright and intellectual property. For example, educators are unsure how to navigate students’ increasing use of the conversational program ChatGPT to write essays. Indeed, a March 2023 Best Colleges survey reports that one in five American college students surveyed uses AI tools such as ChatGPT to complete schoolwork. This form of cheating prevents the development of critical thinking, reasoning, planning and communication skills.

In another case, an anonymous person used an AI tool to generate a fake song mimicking the voices of musicians Drake and The Weeknd; the track went viral before recently being pulled from streaming platforms.

There are also worries about the potential of AI to replace human workers. A Business Insider article quotes one worker who says he uses ChatGPT to do 80 per cent of his job, enabling him, unbeknownst to his employer, to take on a second position at the same time.

These and many other ethical concerns have prompted more than 1,000 industry leaders, scientists and others to sign a March 22, 2023, open letter calling for “AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks. It calls for a set of shared safety protocols for advanced AI design and development, audited and overseen by independent outside experts.

Such rapid, unchecked AI development is producing results that violate core Catholic and Biblical teachings, including the dignity of the human person, the dignity of work and the upholding of truth, which sets us free. Deception is becoming deeper, harder to detect and more integrated into our daily lives, a trend we must not allow to continue unabated.

(Majtenyi is a public relations officer specializing in research at an Ontario university.)
