On 31 May, EACA participated in the AI Summit 2021 organised by Politico. Artificial Intelligence was at the centre of the debate. Here is a brief overview of what was discussed.
A definition of Artificial Intelligence
Artificial intelligence (AI) stems from the idea of simulating human intelligence in machines that ‘think’ like humans and imitate how a person acts. Its fundamental characteristic is the ability to analyse data and take the actions that offer the best chance of achieving a set goal.
AI in everyday life
Many applications use AI without us even realising it. AI is widely used to provide suggestions based, for example, on previous purchases, searches, and other recorded online behaviour. It is also commonly used in retail to optimise inventories and organise supplies and logistics. Mobile phones use artificial intelligence to offer the most personalised experience possible: virtual assistants answer questions, make suggestions, and help manage the diaries of many smartphone owners. AI systems can also help recognise and counter cyber-attacks and other threats. During the COVID-19 outbreak, AI has been used for temperature checks in public places; in medicine, it helps recognise infections from CT scans of the lungs and can provide data on the progression of the epidemic. Finally, AI applications can help detect fake news and misinformation by analysing social media content, identifying words and phrases that appear designed to shock or alarm, and thereby helping to establish which sources can be considered authoritative.
Ethical and social challenges of AI
Beyond the obvious privacy concerns, AI has for some time raised significant ethical, political, and social problems, such as those linked to the spread of social discrimination. Hiring a worker, assessing a candidate’s abilities, and determining his or her reliability will increasingly be decisions delegated to machines and mathematical models that return scores and predictions, translated into judgements capable of changing people’s lives. Whatever a machine does, it does by following precise algorithms that human programmers must design and ‘feed’ with data. However, recent studies have shown that algorithms themselves can be affected by biases of various kinds (gender, ethnicity, class), precisely because they are designed and fed by human beings, and are therefore at risk of behaving in discriminatory ways and producing decisions that favour or disadvantage specific groups. For example, if a database encodes a form of gender discrimination, an algorithm trained on it to filter job applicants’ CVs will reproduce that bias in its behaviour, and a young woman rejected by the algorithm’s flawed decision will face even greater difficulty finding a job. The ethical question therefore turns on this unpredictability of the machine’s choices, and on how far we will be willing to tolerate the machine’s possible errors and mistaken predictions.
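The CV-filtering example above can be made concrete with a minimal sketch. The data and scoring rule below are entirely hypothetical: a naive screening model that learns from biased historical hiring records ends up giving two equally experienced applicants different scores purely because of gender.

```python
# Hypothetical historical records: (years_experience, gender, was_hired).
# The data encodes a past bias: equally experienced women were hired less often.
history = [
    (5, "M", True), (5, "F", False),
    (3, "M", True), (3, "F", False),
    (4, "M", True), (4, "F", True),
    (2, "M", False), (2, "F", False),
]

def hire_rate(records, gender):
    """Fraction of past applicants of a given gender who were hired."""
    outcomes = [hired for (_, g, hired) in records if g == gender]
    return sum(outcomes) / len(outcomes)

def score(years, gender):
    """Naive screening score: experience plus the learned per-group hire rate."""
    return 0.1 * years + hire_rate(history, gender)

# Two applicants with identical experience receive different scores:
print(score(4, "M"))  # higher, because men were hired more often in the past data
print(score(4, "F"))  # lower: the model has absorbed the historical bias
```

No real system would be this crude, but the mechanism is the same: the model never “decides” to discriminate; it simply optimises against data in which discrimination is already recorded.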
What is the EU doing to regulate AI?
The Commission unveiled on 21 April a proposal for a regulation to renew and harmonise European rules on AI. The aim, which is high on the agenda of the von der Leyen executive, is to combat the uses of the technology that could be detrimental to the “fundamental rights and security” of EU citizens. Among the applications that should be banned are those capable of “manipulating people through subliminal techniques” or exploiting the vulnerabilities of particularly fragile groups, such as children or people with disabilities.
But the clampdown will also cover social scoring systems, the ‘scores’ given by governments such as China’s to assess citizens’ trustworthiness, as well as facial recognition: biometric identification technologies would be banned, with exceptions only for emergencies such as searching for kidnap victims, countering terrorist activity, or investigating criminals.