A photo that went viral shows Pope Francis in a designer bomber jacket. It was later exposed as fake. Photo from Twitter

Ethical concerns for an AI world

April 12, 2023

A recent photo created by an app showing Pope Francis wearing a designer bomber jacket went viral on social media, fooling millions around the world. 

Though quickly exposed as a fake, it underscored the potential moral dangers of what many fear will be a world overly reliant on technologies with serious shortcomings, where the line between what’s real and what’s artificial becomes blurred. 

Ethical concerns about artificial intelligence (AI) are gaining more attention in light of recent innovations such as ChatGPT and other programs and applications that can create sophisticated AI-generated text, art, photos and other content. Those concerns have led tech leaders like Elon Musk and AI researchers to call for a six-month pause on the development of large-scale AI systems.

The Pope himself has expressed concerns. At a recent science and technology summit at the Vatican, while lauding the potential benefits of AI, especially in medicine, engineering and communications, he also cautioned that those using these technologies must act responsibly and with awareness of human dignity and the common good. 

At St. Francis Xavier University in Antigonish, N.S., Milton King teaches natural language processing — the component of AI that can understand human language as it is spoken or written. Programs like ChatGPT are built on this technology. 

ChatGPT can hold conversations on topics from history to philosophy, generate lyrics in the style of a particular musician and suggest edits to computer programming code. However, since models like this know only what they are fed, explains King, consumers must be very careful and aware of the potential for discrimination in the information the models learn from. 

“One of the big issues that I’ve seen in these types of models is that they tend to emphasize biases in the text they see,” said King. “If you feed a language model or a dialogue system like ChatGPT a bunch of texts, if there’s any biases towards any type of gender, nationality or whatever demographic or group, the model will emphasize that bias. So, you tend to get these models that are unfair towards certain demographic groups which is not nice to see.”

The Pope expressed concerns about AI software being used to analyse criminal records and produce sentencing decisions. Though AI might be used to streamline the legal process, he urged that the concept of human value and dignity remain salient and not be measured by data alone. 

“We should be cautious about delegating judgments to algorithms that process data, often collected surreptitiously, on an individual’s makeup and prior behaviour,” the Pope said, warning that such data can be “contaminated” by societal prejudices and preconceptions.  

“A person’s past behaviour should not be used to deny him or her the opportunity to change, grow and contribute to society.”

In discussions of how advances in AI affect education, awareness of the software’s shortcomings offers opportunities to support students’ learning in other meaningful ways, says King. Although ChatGPT has access to a large amount of information, it cannot access all available knowledge. It may not be able to answer questions about very specific or niche topics, and it may not be aware of recent news developments or changes in certain fields. Instead of fearing the technology, awareness of these limitations gives educators the opportunity to train the next generation of young people to work ethically with the software without abusing it.  

“You can change the dynamics of a classroom to just focus on the same type of concepts, but maybe ground them in more modern or recent events,” said King. “Students can’t just ask ChatGPT for the answer because the program just hasn’t seen it yet. The other one is to avoid common events. I saw one example where they showed how ChatGPT did very poorly trying to write a report on an historical figure that is not mentioned often or in common texts. That’s another way that you can get the students to get those skills of being able to do research and investigation where they can’t just rely on ChatGPT.”

As for verifying the authenticity of photos such as the one of a stylishly dressed Pope, some have suggested that photo verification might someday become a dedicated job within media organizations and newsrooms. The creation of AI-generated supermodels used by major brands has also raised concerns about cultural appropriation, when the creator of a model does not belong to the ethnic group the model represents, among other ethical considerations. 

As for how these latest developments in AI will affect everyday life, the world has already seen a significant increase in the use of automation in customer service, transportation and medicine, for example. Whether that means certain jobs such as cashiers, taxi drivers and medical assistants will soon become completely obsolete is unclear but unlikely, says King.  

“I feel like the jury might still be out on that one,” he said. “For many of these (programs), you have a low tolerance for incorrect information so you would want a human there that is credible. These models give the façade of the right answer even though it could be completely wrong, so I don’t think it’s going to just kick out a bunch of jobs.”
