Artificial Intelligence: Towards “trustworthy AI”

“Trustworthy AI” is the motto of the ethics guidelines for Artificial Intelligence drawn up by the European Commission’s High-Level Expert Group on AI and released on Monday, the 8th of April. Although not binding, the guidelines are likely to be a major reference in future discussions and to serve as a blueprint for potential regulation.

The document establishes seven key requirements with which AI systems should comply, namely:

  • Human agency and oversight;
  • Technical robustness and safety;
  • Privacy and data governance;
  • Transparency;
  • Diversity, non-discrimination and fairness;
  • Environmental and societal well-being; and
  • Accountability.

The document invites stakeholders to assess their organisations’ AI systems through a set of questions grouped under the above-mentioned requirements. These include questions such as “Did you ensure that the AI system clearly signals that its social interaction is simulated and that it has no capacities of ‘understanding’ and ‘feeling’?”. The assessment list can be found on pages 26 to 31 and aims to be relevant across different organisational levels, from operations to top management.

In the coming months, the document will be complemented by guidance on the Product Liability Directive, recommendations on how to boost European investment in AI, and information on the extent of regulation required. As part of the European Commission’s Artificial Intelligence Alliance, EACA has been actively engaged in monitoring developments in this field.
