Opinion
Artificial Intelligence and Disinformation: an Educational Challenge
Miguel Dominguez Rigo*
Department of Language, Arts and Physical Education Teaching, Faculty of Education, Complutense University of Madrid, Spain
*Corresponding author: Miguel Domínguez Rigo, Associate Professor, Faculty of Education, Complutense University of Madrid, Spain
Received Date: November 27, 2024; Published Date: December 02, 2024
Summary
Disinformation is not a new phenomenon, but rapid advances in new technologies, and in particular the use of Artificial Intelligence (AI), have altered the ways in which disinformation enters our lives. In an increasingly digital world and economy, traditional media and information have been displaced by new channels of dissemination, such as social networks, which in many cases operate solely under the mandate of advertising revenue. The challenge we face stems from a series of components that together create a perfect storm, one that has unfortunately acquired very worrying dimensions in recent years. We are facing a global disinformation crisis that affects people in notable and alarming ways in the real world. Artificial intelligence generates convincing images and powerful algorithms, while disinformation and digital advertising have together spawned a new business. Although better mechanisms exist, or could exist, to minimize or alleviate the adverse effects produced by the conjunction of these elements, it is the end user who decides which content to accept, consume and share, and it is in this sense that education becomes an essential commitment if we aspire to eradicate one of the greatest problems we face as a society.
Keywords: Artificial intelligence; Disinformation; Fake news; Education
Introduction
In general terms, journalism is losing power and influence. Media outlets that substantiate, contrast, investigate and verify information in order to build a reputation as serious informants are increasingly scarce. Publishing truthful news is expensive, requiring investment and often painstaking verification work, among many other things, while fake news needs only one or more lies. Fake news pushes true information into the background; sensational headlines capture our attention quickly, while the spreaders of disinformation care nothing for ethical or moral considerations and face no consequences for these bad practices.
In the world of communication and digital content creation, artificial intelligence has become an indispensable tool, but also a double-edged sword. AI allows us to generate content in record time at very low cost, writing texts and producing static or moving images in seconds, yet even this is not enough to compete with the multitude of channels, formats, online spaces and media in a market where competition is fierce. Countless websites, social media profiles, groups and platforms compete to capture our attention with content of dubious origin, while large corporations and companies position themselves in favor of disinformation by doing nothing to combat it. Artificial intelligence can generate images and videos that support and sustain false or ill-intentioned content, lending it the authenticity that only the image can provide. Likewise, AI can create sophisticated algorithms that learn on their own, seeking to maximize interaction with the greatest possible number of users: algorithms that suggest fake content to us precisely because that content generates more interactions, more viewing time and therefore more income.
We are therefore facing a social problem: hate messages and misinformation seriously affect the population. Disinformation advances, displacing critical thinking, reflection and prudence. Given this situation, it is vitally important to address these issues from the earliest stages of the educational system.
Lying as a Business
There is potentially harmful content that no one moderates, refutes or contradicts, and this content finds its ideal habitat on the Internet. Online, disinformation not only grows; it amplifies and spreads rapidly. We might think that this kind of false, malicious or harmful content has little reach, much less the ability to generate income, but the reality is different: spreading adulterated news, hoaxes, hate messages and the like is very profitable. Why is disinformation so profitable? Because advertising is the main business model of the large platforms and world-renowned corporations, and the spaces that create and spread disinformation generate a great deal of Internet traffic, receiving high income for displaying advertising. Most brands buy digital ads, but digital advertising is no longer managed by people. Given the huge amount of existing advertising, it is AI and complex algorithms that distribute it among thousands of spaces through systems as intricate as they are opaque. In fact, many brands and companies are unaware of where their ads are ultimately displayed and where the money generated by that advertising ends up. It is well known that most of the seemingly free services on the Internet are not actually free: they are supported by the ads they display, while collecting user data. Generating traffic on a website means generating advertising revenue: the more traffic and interactions, the higher the revenue.
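The economics described above can be sketched with a toy calculation (all figures below are invented for illustration, not real ad-market data): revenue from display ads scales roughly linearly with impressions, so a page that multiplies its traffic with sensational content multiplies its payout at the same ad rate.

```python
# Toy model of display-ad revenue. CPM (cost per mille) is the price
# an advertiser pays per 1,000 impressions; all numbers are illustrative.

def ad_revenue(impressions: int, cpm: float) -> float:
    """Revenue earned for showing ads: impressions scaled by the CPM rate."""
    return impressions / 1000 * cpm

# A factual page and a sensational page selling ads at the same rate:
factual_page = ad_revenue(impressions=10_000, cpm=2.0)       # 20.0
sensational_page = ad_revenue(impressions=100_000, cpm=2.0)  # 200.0

# Ten times the clicks means ten times the income, regardless of truth.
print(f"Factual page:     ${factual_page:.2f}")
print(f"Sensational page: ${sensational_page:.2f}")
```

The point of the sketch is that the payout function contains no variable for accuracy or harm: only traffic enters the formula.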
We must ask ourselves which websites get the most traffic and activity, and why. Consider, for example, a page that publishes news drawn from articles written by the scientific community of a certain city, with headlines such as: “An algorithm has been developed that will help reduce acute rejection in liver transplants”, versus a page that shows provocative or inflammatory headlines about local aspects of that same city, such as: “Water supply cuts in the city after human biological remains were found in municipal deposits”. The second headline is false, a hoax, but it would undoubtedly get many more clicks and, as a result, much more income from the advertising framing the text or image of that false news.
This “cyber bait” tries to “catch” as many people as possible, and even when the content behind it disappoints us or raises serious doubts about its authenticity, the lie has already won: it seeks to generate the greatest possible number of interactions, because that is how its creators get rich. These channels use hate messages, lies, hoaxes and deceptions without entering into ethical or moral considerations, because economic interests prevail over every other aspect. This is what is already called the “Disinformation Economy”: false or misleading information not only lasts longer and gains more credibility than real information, it is also much more profitable. In business terms, it is a new business model. Messages aimed at minorities, low-quality information, provocative or hate-filled content, eye-catching or inflammatory headlines, lies: anything goes when it comes to making money.
We would be surprised to see how big brands and well-regarded, well-positioned companies display their advertising on pages of dubious reputation, alongside fake news headlines accompanied by images that have been manipulated, taken out of context or created with artificial intelligence, thereby financing disinformation (sometimes through ignorance). The proliferation of such websites and of fake content on social networks keeps increasing, gaining the power and income that the media and spaces publishing authentic content are losing.
I Believe Everything
Conspiracy theories, hate messages, hoaxes, fake news, manipulations, scams, denialist theories, outright lies: we may know someone who has been a victim of disinformation, yet when we warn a person who has just shared a fake news story, they often not only fail to recognize the manipulation of which they have been a victim, but question the very source indicating that the news is fake. Why does disinformation have so much power? Sometimes because it aligns with our way of thinking; other times because it reaches us through a person or channel we completely trust, or because it has already been shared many times. Without realizing it, we can become propagators of disinformation. We live constantly alongside smart devices, and it is precisely this type of content that conditions us to spend even more time in front of our phones, unaware that various algorithms track our habits, our movements, our tastes and whatever else captures our attention, showing us more and more similar content. Algorithms suggest ads, pages, groups, videos, profiles and news regardless of whether their recommendations promote hate, violence, conflict, anger or other potentially harmful content.
Many people consider real whatever they have seen or read many times in different digital spaces, without questioning why the same content appears repeatedly. It does so not because the event happens often, or because it is objectively relevant, true or convincingly argued; we see it constantly because the algorithm offers it to us. The events may be isolated or anecdotal, but when something is shown to us continuously we grant it far greater importance. Likewise, if a certain claim is very popular, that is, if many individuals spread it, it is not considered false, simply because of the belief that so many people cannot be wrong, when our suspicions should point precisely in that direction and lead us to ask why its diffusion is so massive. It is also relevant to point out that certain messages appeal to our feelings and emotions in an intense and direct way, or concern us directly for some reason. Suppose, for example, that we have just selflessly donated clothes in response to an exceptional event such as a natural disaster, intending to help people who have lost all their belongings. We are then shown a video of a warehouse where tons of clothes are being destroyed, and the text or voice accompanying the images declares that these are clothes from the donations. Our reaction will surely be indignation, repulsion or rage: we will consider it an injustice that clothes other people need are destroyed, without reflecting on whether the video really belongs to the event that motivated our altruistic gesture, or whether it was recorded in another place and at another time, where the clothes are being disposed of for reasons unknown to us. These strong, primary emotions will make us react and share the false video, unwittingly contributing to the spread of a hoax and amplifying an event that in the real world would not have had such an impact, or would have been quickly refuted or contrasted.
Emergency situations and major crises are constantly used to spread misinformation on social media.
We may assume that the large corporations in charge of moderating or filtering content on social networks operate in an ethical and fair manner. We may even believe that these platforms have effective mechanisms against misinformation and therefore trust everything they show us. But harmful content generates far more activity: hoaxes, fake news and the like produce much more interaction and attract much more attention. Unfortunately, we are immersed in constant digital stimulation, where bad news, negative news, violent content, scandal and hate have greater influence than positive or truthful news. Let us not fool ourselves: the more time we spend on a website or social network, the better for that website or social network. Algorithmic recommendations aim to keep the user in front of a screen for as long as possible, and the same algorithm recommends false or conflict-laden content because such content causes a significant increase in dwell time and, consequently, greater advertising revenue. Algorithms are not designed to show the user different points of view; they are not designed to make us question or doubt what we see, and they do not provide content that would let us refute or contrast information suspected of being false or malicious. They only seek to maximize user interaction, without real moderation and without ethical or moral consideration of the content that generates the most activity. Given the colossal amount of information that social networks must handle, it is artificial intelligence that moderates or reviews the content to be published, but it is not capable of detecting, or does not intend to censor, certain false content or messages that incite hatred or violence.
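The ranking logic described above can be sketched as a minimal, hypothetical Python example (the items, scores and field names are invented, not any platform's real system): the sort key is predicted dwell time alone, so even a signal that marks content as dubious never affects the ordering.

```python
# Hypothetical sketch of an engagement-maximizing ranker. The objective
# is predicted dwell time only; nothing penalizes dubious content.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_dwell_seconds: float  # how long the model expects us to watch
    flagged_as_dubious: bool        # known to fact-checkers, but unused below

def rank(feed: list[Item]) -> list[Item]:
    # Sort purely by expected engagement; 'flagged_as_dubious' never
    # enters the key, so dubious content wins whenever it engages more.
    return sorted(feed, key=lambda item: item.predicted_dwell_seconds,
                  reverse=True)

feed = [
    Item("Local council approves annual budget", 12.0, False),
    Item("Shocking hoax about the water supply", 95.0, True),
]
print(rank(feed)[0].title)  # the hoax ranks first
```

The design choice the sketch illustrates is the one the text criticizes: an objective function that optimizes a single engagement metric will surface whatever maximizes it, truthful or not.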
These large platforms have specific policies, but they seem not to enforce them against the vast majority of the harmful content they ultimately authorize, applying only a superficial moderation that acts on things like grammatical errors or images of a sexual nature. We believe we are navigating safe environments, but these platforms, precisely because of the lack of competition, are extremely opaque and under-regulated, and they do not do everything they could, becoming complicit in disinformation and therefore in its negative consequences.
We have also observed advertisements on well-known pages and sites created by criminals with the sole intention of defrauding the users who click on this misleading advertising. All kinds of strategies are used: the unauthorized use of images of well-known people or prestigious brands, promises of financial gain, incredible offers, and so on. The images in this type of fraudulent advertising are intended to inspire confidence, and the sites that allow such advertisements also seem safe to us. So how is it possible that trusted sites show misleading advertising? For the same reason that advertisements from large companies end up on pages dedicated to misinformation: the AI in charge of managing advertising does not apply the appropriate filters. Thanks to algorithms, advertisements reach the right audience (on-demand segmentation) at the right time and through the channel most relevant or accessible to the user. Content and advertising tend to be personalized, even if we are not aware of it.
A Picture is Worth a Thousand Words
Just a couple of years ago, creating or manipulating an image could take hours of work and require a series of technical skills that were not within everyone’s reach. Today, the rapid development of artificial intelligence allows anyone without specific knowledge to create totally realistic static images or videos according to their needs or manipulate real images or videos. The degree of sophistication in this type of tool is such that in many cases it is no longer possible to distinguish manipulation from reality. We can even come to think that a real image has been created through AI and that an image generated by AI is real.
The power of the image is extremely significant, and so it serves the interests of disinformation, which relies on images constantly. If this continues, images will no longer be synonymous with reality, and disbelief will run rampant. We are already growing accustomed to lies: we are told so many that we begin to doubt what is actually true. Lying entails no risks or consequences for the liar; there are countless examples. Lies are accepted and prevail even over the truth. Until now, the image had survived this kind of massive manipulation. Now, an image no longer implies truthfulness.
Conclusion
The challenges are significant. Hate and disinformation have a profound impact on people both inside and outside the digital environment. Disinformation is amplified because it is more profitable, and this leads to constant violations of human rights; the most vulnerable groups and minorities suffer the consequences most severely. What would be illegal, illicit, immoral or hateful outside the digital environment is permitted on the Internet and on social media. We do not doubt the benefits of artificial intelligence, but it is urgent to raise a series of regulatory and ethical questions about the malicious use of AI and its role in disinformation. New and enormous challenges lie ahead, and we cannot leave aspects such as regulation or transparency in the hands of the large platforms that operate on the Internet. Our best weapon is education. If the end user is able to reject certain content, the industry behind disinformation will shift toward a more responsible model; but if we do not prepare ourselves, if we do not improve our ability to detect and reject disinformation, the situation will worsen.
Young people are informed mainly through social networks; they are digital natives, yet they can be highly manipulable and controllable. Adequate training from the first educational levels is essential. We must be well informed of the risks posed by the multiple uses of technology, and we must therefore prioritize technological, media, digital and visual literacy. Currently we are at the service of technology and not the other way around, which poses a serious social problem. Students in educational centers must strengthen their critical thinking and autonomy; through educational programs, they must become able to defend themselves and to understand the risks that arise from disinformation.
Acknowledgements
None.
Conflict of Interest
There are no conflicts of interest.
Miguel Domínguez Rigo*. Artificial Intelligence and Disinformation: an Educational Challenge. Iris J of Edu & Res. 4(3): 2024. IJER. MS.ID.000589.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.