Inaugural Athens Roundtable


The inaugural edition of the Athens AI Roundtable was held on September 19–21, 2019, and was co-hosted by The Future Society, Covington & Burling LLP, IEEE, and the European Law Observatory on New Technologies. The event was placed under the auspices of H.E. the President of the Hellenic Republic, Mr. Prokopios Pavlopoulos.

The Roundtable took place at a remarkable time for AI governance. Over the preceding months, the OECD, the European Commission, the IEEE, and major corporations and think tanks all published principles for the ethical adoption of AI in society. In addition, the Council of Europe and the IEEE published, nearly simultaneously, the first sets of principles specifically focused on AI in legal systems and the practice of law. The promulgation of these principles, which espouse both common and complementary themes, establishes an ethical foundation for the advancement of human rights, human well-being, and the rule of law in the age of AI.

This promulgation of ethical principles for the adoption (or avoidance of adoption) of AI creates a new challenge for all stakeholders: how to implement such principles in practice. The Athens AI Roundtable sought to address this challenge as applied to the law, under the theme "From Principles to Practice".

At the Roundtable, we identified three pillars that are necessary to ensure the trustworthy adoption of AI: Stakeholder Education, Sound Evidence, and Policy Foundations. The Roundtable was designed around three working groups, which convened periodically over the summer months leading up to Athens to surface ideas across these three areas.

THE THREE PILLARS

Stakeholder Education

This working group focused on three key constituencies involved in the adoption of AI in legal systems, discussing strategies to ensure each is adequately informed and equipped to navigate the fundamental questions and challenges in its field:

  1. Law students, judges, and the legal profession, who must understand the limits of their knowledge about AI and thus be able to identify actionable lines of inquiry when engaging with experts in their work.

  2. Engineers and the technical community, who should understand the legal and ethical implications of their work, engaging proactively with legal professionals who can help build appropriate ethical and compliance-oriented frameworks into the design of AI systems.

  3. Policymakers, civil society, and media, who should be empowered to learn about and communicate the ways that AI is increasingly involved in legal decision-making.

Sound Evidence  

The evidence working group focused on establishing the factual basis upon which societies can determine the extent to which AI systems and their operators are effective and fit for purpose. Among the ideas cited were:

  1. The need for sound evidence in legal settings that can demonstrate, to a non-technical audience, that an AI system achieved its intended purpose.

  2. The need for clear standards for assessing the competence of the operator behind an AI system.

  3. Evidence of the extent to which the four principles for the trustworthy adoption of AI (effectiveness, operator competence, transparency, and accountability) were met.

Policy Foundations

This working group examined the overarching role of policy in cultivating an environment conducive to the trustworthy adoption of AI, in both public and private settings. A key challenge is that there is still no independent body or trusted third party that can validate AI systems. While approaches vary broadly between societies and states, three common threads emerged, providing clear grounds for further discussion and development:

  1. Procurement stands out as a key lever, across both public and private domains, for the application of clear standards that can guide the market and developers of AI systems.

  2. Transparency, in one form or another, is required, despite acknowledgements of the challenges involved given the nature of AI systems.

  3. Government-led entities must be empowered to certify the performance of AI systems (for example, the U.S. National Institute of Standards and Technology).
