AI: Help or hindrance in fight against disinformation?

Sunday, 24 September 2023 (11:54 IST)
If artificial intelligence (AI) can determine emotions, feelings and moods, possibly resulting in manipulated decisions, what kind of effect does that have on us as people?
 
The question was made concrete by the philosopher Claudia Paganini. Following a burst of rain, the ballroom of Berlin's Humboldt University was bathed in late evening sunlight, and Paganini picked up on the natural phenomenon, observing its effect on the audience. AI can easily create a photo of a beautiful sunset, she said, and when such an image is disseminated on social media, it has a manipulative effect. The same happens when a post featuring a politician's faked voice appears on an Instagram timeline, or when AI-generated photos bearing the forged logo of a reputable media outlet are distributed.
 
Paganini was speaking as part of a discussion among media experts, including German parliamentary representatives, about the "power of facts" and "journalism in the age of disinformation and generative AI." As part of its program marking the 70th anniversary of its inauguration as Germany's international broadcaster, DW invited people to the Humboldt University medical campus in September to discuss the dangers and benefits of AI in the media.
 
In the oncology department of Berlin's Charité hospital, which is affiliated with Humboldt University, the benefits would appear to outweigh the dangers. Researchers there are working on using AI for the early detection of cancer.
 
But, according to DW Director General Peter Limbourg, AI is "a nightmare for everyone dealing with disinformation" and especially for the media. DW, he said, was keen to help make sense of the new developments.
 
Deception, disinformation pre-date AI
 
Paganini, a professor at the Munich School of Philosophy, tried to cool the heated debate about the dangers posed by AI manipulation.
 
"Multiple forms of deception have always existed," she pointed out, then offered a suggestion. Journalism claims to be "truthful" — it aspires to depict reality as faithfully as possible. As such, she said, it needs to develop greater "transparency" by presenting the originators of the information — the journalists — and their expertise in the same way. "Transparency is the new truthfulness," Paganini said.
 
Tabea Rössner, a Green Party member of the Bundestag, commented that what was needed was an "error culture," which is far more common in the United States than in Germany.
 
Rössner, herself once a journalist, said she now fears that the profession's credibility is at risk.
 
"Journalism thrives on authenticity and truthfulness," she said, adding that there should always be a "technology assessment" before AI is used — both in journalism and in general.
 
The European Union is currently drafting a regulation to this effect. Rössner hopes that it will also be adopted by tech companies in the US, as happened with the EU's rules on online data protection when they were introduced.
 
Rapid development and disinformation
 
Helge Lindh, a lawmaker from the center-left Social Democrats, pointed out that politicians face huge problems when dealing with the increasingly rapid development of new AI software.
 
"We're in a real-time laboratory, simultaneously trying to be both participants and observers," he said.
 
Products are being developed that create authentic-looking videos in which people are made to say artificially generated sentences — and these could have any content whatsoever. Lindh said this reinforces a sense of looming danger posed by Russian troll farms and other international private providers who earn money from disseminating disinformation on social networks.
 
This is especially worrying given that, in Germany, increasingly sophisticated AI applications are operating in a social climate that has been stirred up by populists.
 
"We have developed a culture of outrage," comments Christiane Schenderlein, the center-right Christian Democratic Union's spokesperson for cultural and media affairs in the Bundestag.
 
All three members of parliament who spoke at the DW event complained that facts and analysis are being supplemented with misinformation deliberately disseminated to manufacture emotional responses to topics of political debate.
 
Using AI to establish the truth?
 
However, one person on the podium was convinced that "technology is the best instrument for protecting our democracy." Sven Weizenegger, head of the German military's "Cyber Innovation Hub," said the Bundeswehr is developing algorithms that differentiate between true and false information.
 
"This is already happening," said Weizenegger, adding that it will have far-reaching consequences in the analog world, for example, when making strategic decisions about potentially life-saving deployments.
 
Weizenegger said Russia's war of aggression against Ukraine showed how quickly technology can change warfare. Since last summer, he said, the Ukrainian army had been able to "reduce its use of ammunition by 80% to 90%" by deploying AI programs on the front line while Russia was still carpet-bombing.
 
Finally, the media ethics philosopher Claudia Paganini said she was convinced that the application of artificial intelligence will ultimately benefit more people than it harms — as long as people consider the "profoundly human" aspects of dealing with the technology.
 
She added that the pressure to find solutions will increase as awareness of the dangers grows. Just as democratic society finds ways to respond to dangers such as getting children safely to school, it will also have to develop responses to AI.
 
As the discussion ended, Berlin Instagram accounts were filled with photographs of rainbows over the capital at sunset after the rain shower. Most of the photos were taken without a filter — no need for artificial intelligence.
