Media and Artificial Intelligence

WORKING GROUP

As internet users, we are often confronted with messages that captivate us, offers we are interested in and opinions that we find exciting and persuasive. Search engines, streaming channels and social network sites seem to know our music preferences, clothing style and favourite holiday destinations. From the traces we leave behind on the internet, it is apparently possible to predict what we desire, hope and think. However, the artificial intelligence used for this purpose raises questions both on a personal and a societal level.

Artificial Intelligence (AI) is used for very different communication purposes. To keep the LIAS discussion from going in all directions, we propose to focus on the impact of personalised communication strategies on news reporting: what is the effect of selecting news facts, opinions and messages according to what interests, intrigues or excites someone on the way people form an image of current affairs via news sites, discussion groups, social network sites and the like?

From the perspective of the individual, questions arise about the impact of AI on privacy and about the damage that personalised communication can do to a person's integrity and well-being:

  • What can others find out about our personal lives on the basis of AI, and how do they use that information?
  • Can more transparency be created about who holds our personal data and about the objectives of the AI algorithms that use those data to approach us in a personalised way?
  • Is it justified that a system can talk us into a purchase or an opinion by exploiting sensitivities that an AI algorithm can predict better than we can ourselves?
  • Is there an inventory of the concrete risks we run when private individuals, commercial enterprises, political organisations or the government make use of our private data?
  • Are the options that internet and smartphone applications offer for shielding our personal data sufficient, and do people make sufficient use of them?
  • In what way does the personalised selection of opinions and messages contribute to a person's dissatisfaction with his or her identity and body image?


From a societal perspective, questions arise about the impact of AI on the quality and intensity of public engagement with society. It is largely through news reporting that citizens learn to assess society's problems and government action. Media can create an engaged and alert public opinion if a strong sense of the common interest can form on the basis of balanced, accurate and honest information. However, media can also give rise to polarisation, hatred and misunderstanding. If the information shared through mass media is socially irrelevant, misleading or one-sided, public support for sensible policy decisions will not easily form. In relation to AI, the question therefore arises whether the business model of the companies that manage the data and the algorithms used to tailor the form and content of information, opinions and beliefs to the profiles of individual recipients encourages the spread of unbalanced information, conspiracy theories and fake news. Can it be made clear to a wide audience how the corporate objectives of the big internet players concretely affect the content of the information people are confronted with? Can the conduct of internet companies be regulated? If so, how? And how can such regulation be justified and organised, given the global scale of the internet?


Question

The relevance and sincerity of a message, and the type of engagement it fosters, are matters not only of individual but also of public interest. Depending on the framework from which one interprets the function of mass media, the side effects of deploying AI applications for personalised communication strategies will be assessed differently.

The central question we want to address is: in what way can AI applications support or jeopardise the private and public function of messaging? When AI is used to tailor news coverage more precisely to the preferences of a private audience, to increase the efficiency of political campaigns and to advertise more effectively, does it contribute to the quality of public beliefs, or does it indirectly create more division, more private advocacy, more misconceptions and more attention to lies and false reporting? Can AI also be used in a positive sense, and what are the contextual conditions for this: is it sufficient to regulate the current setting better, or will the institutional environment that determines the objectives for which AI can be used have to be changed?

AI in itself is a technique, not an actor. It is people who use the technique and people who determine its objectives. Because people are often eager to put the possibilities of new techniques into practice impulsively and without careful consideration, the impression can arise that not humans but technology itself determines the course of history. Nevertheless, we can ask ourselves a number of specific questions:

How can we create more transparency about what exactly AI is and how it is used in practice? From a technological point of view, we should be able to clarify to a broad audience what the personalisation of information, messages and advertisements concretely entails: who does it, on the basis of which data, and how? Once that is clear, we can ask additional questions concerning the following five main issues:

  1. What are the problems for which the technique offers a solution?
  2. For whom is the solution to these problems relevant? (For consumers, for businesses, for society?)
  3. What new problems are created by the use of that technology?
  4. To whom do these new problems apply?
  5. What are possible measures to address these problems?


The problem statement as presented here may give the impression that we are convinced that the development of AI in the field of mass communication has only adverse effects. That is not the case. AI can also be used to inform the general public better about how the community is doing, how certain behaviours are related and which behavioural profiles appear to be developing. If, through the analyses that AI systems can make of people's behaviour, people gain a better understanding of why they often do what they do and what the possible consequences are, this could greatly increase their self-determination.

Coordination

  • Bart Pattyn
  • Tinne De Laet
  • Luc Sels

The intention is to gather, within LIAS, scientific insights into major societal challenges on the basis of international and interdisciplinary consultation.

Luc Sels

Rector KU Leuven and Co-Chair LIAS Foundation