
Check out this scholarly article, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, from the Harvard Library Office for Scholarly Communication.

The Principled AI visualization is arranged like a wheel. Each document is represented by a spoke of that wheel and labeled with the sponsoring actors, date, and place of origin. Designed by Arushi Singh and Melissa Axelrod.

Citation
Jessica Fjeld: Berkman Klein Center for Internet & Society
Nele Achten: Harvard Law School; University of Exeter School of Law (student)
Hannah Hilligoss: Harvard Law School (student); Berkman Klein Center for Internet & Society
Adam Nagy: Berkman Klein Center for Internet & Society
Madhulika Srikumar: Harvard Law School (student)

Date Written
January 15, 2020

Collections
Berkman Klein Center (BKC) for Internet & Society Scholarly Articles

Over the past several years, a number of companies, organizations, and governments have produced or endorsed principles documents for artificial intelligence. The proliferation of these documents inspired a Berkman Klein research team to delve into the details and map their findings, which illustrate convergence around eight specific themes.

With a sample of documents and support from staff and students in BKC’s Cyberlaw Clinic, the team analyzed 36 principles documents from around the world; their findings are published in the latest report in the BKC research series: Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI.

Abstract
The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these “AI principles,” there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Beneath this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.

Introduction
Alongside the rapid development of artificial intelligence (AI) technology, we have witnessed a proliferation of “principles” documents aimed at providing normative guidance regarding AI-based systems. Our desire for a way to compare these documents – and the individual principles they contain – side by side, to assess them and identify trends, and to uncover the hidden momentum in a fractured, global conversation around the future of AI, resulted in this white paper and the associated data visualization.

It is our hope that the Principled Artificial Intelligence project will be of use to policymakers, advocates, scholars, and others working on the frontlines to capture the benefits and reduce the harms of AI technology as it continues to be developed and deployed around the globe.

Executive Summary
In the past several years, seemingly every organization with a connection to technology policy has authored or endorsed a set of principles for AI. As guidelines for ethical, rights-respecting, and socially beneficial AI develop in tandem with – and as rapidly as – the underlying technology, there is an urgent need to understand them, individually and in context. To that end, we analyzed the contents of thirty-six prominent AI principles documents, and in the process, discovered thematic trends that suggest the earliest emergence of sectoral norms.

While each set of principles serves the same basic purpose, to present a vision for the governance of AI, the documents in our dataset are diverse. They vary in their intended audience, composition, scope, and depth. They come from Latin America, East and South Asia, the Middle East, North America, and Europe, and cultural differences doubtless impact their contents. Perhaps most saliently, though, they are authored by different actors: governments and intergovernmental organizations, companies, professional associations, advocacy groups, and multi-stakeholder initiatives. Civil society and multi-stakeholder documents may serve to set an advocacy agenda or establish a floor for ongoing discussions. National governments’ principles are often presented as part of an overall national AI strategy. Many private sector principles appear intended to govern the authoring organization’s internal development and use of AI technology, as well as to communicate its goals to other relevant stakeholders including customers and regulators. Given the range of variation across numerous axes, it’s all the more surprising that our close study of AI principles documents revealed common themes.

Eight key themes emerged from our findings (an illustrative tally of the coverage figures follows the list):

    • Privacy. Principles under this theme stand for the idea that AI systems should respect individuals’ privacy, both in the use of data for the development of technological systems and by providing impacted people with agency over their data and decisions made with it. Privacy principles are present in 97% of documents in the dataset.
    • Accountability. This theme includes principles concerning the importance of mechanisms to ensure that accountability for the impacts of AI systems is appropriately distributed, and that adequate remedies are provided. Accountability principles are present in 97% of documents in the dataset.
    • Safety and Security. These principles express requirements that AI systems be safe, performing as intended, and also secure, resistant to being compromised by unauthorized parties. Safety and Security principles are present in 81% of documents in the dataset.
    • Transparency and Explainability. Principles under this theme articulate requirements that AI systems be designed and implemented to allow for oversight, including through translation of their operations into intelligible outputs and the provision of information about where, when, and how they are being used. Transparency and Explainability principles are present in 94% of documents in the dataset.
    • Fairness and Non-discrimination. With concerns about AI bias already impacting individuals globally, Fairness and Non-discrimination principles call for AI systems to be designed and used to maximize fairness and promote inclusivity. Fairness and Non-discrimination principles are present in 100% of documents in the dataset.
    • Human Control of Technology. The principles under this theme require that important decisions remain subject to human review. Human Control of Technology principles are present in 69% of documents in the dataset.
    • Professional Responsibility. These principles recognize the vital role that individuals involved in the development and deployment of AI systems play in the systems’ impacts, and call on their professionalism and integrity in ensuring that the appropriate stakeholders are consulted and long-term effects are planned for. Professional Responsibility principles are present in 78% of documents in the dataset.
    • Promotion of Human Values. Finally, Human Values principles state that the ends to which AI is devoted, and the means by which it is implemented, should correspond with our core values and generally promote humanity’s well-being. Promotion of Human Values principles are present in 69% of documents in the dataset.
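
The per-theme percentages above reflect a simple presence/absence tally: each of the thirty-six documents was checked for whether it addresses a given theme, and the share of documents containing that theme was reported. As a rough illustration only, the sketch below shows that kind of tally in Python; the document names, codings, and helper function are hypothetical placeholders, not the authors’ dataset or tooling.

    # Illustrative sketch only: the documents and codings below are hypothetical
    # placeholders, not the Principled AI dataset.

    THEMES = [
        "Privacy",
        "Accountability",
        "Safety and Security",
        "Transparency and Explainability",
        "Fairness and Non-discrimination",
        "Human Control of Technology",
        "Professional Responsibility",
        "Promotion of Human Values",
    ]

    # Each document maps to the set of themes it addresses (presence/absence coding).
    coded_documents = {
        "Document A": {"Privacy", "Accountability", "Fairness and Non-discrimination"},
        "Document B": set(THEMES),  # a document covering all eight themes
        "Document C": {"Privacy", "Transparency and Explainability",
                       "Promotion of Human Values"},
    }

    def theme_coverage(documents):
        """Return the rounded percentage of documents that address each theme."""
        total = len(documents)
        return {
            theme: round(100 * sum(theme in coded for coded in documents.values()) / total)
            for theme in THEMES
        }

    for theme, pct in theme_coverage(coded_documents).items():
        print(f"{theme}: present in {pct}% of documents")

The report’s actual figures come from the authors’ coding of the thirty-six documents; the sketch only shows the arithmetic behind a per-theme percentage.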

The second, and perhaps even more striking, side of our findings is that more recent documents tend to cover all eight of these themes, suggesting that the conversation around principled AI is beginning to converge, at least among the communities responsible for the development of these documents. Thus, these themes may represent the “normative core” of a principle-based approach to AI ethics and governance.

However, we caution readers against inferring that, in any individual principles document, broader coverage of the key themes is necessarily better. Context matters. Principles should be understood in their cultural, linguistic, geographic, and organizational context, and some themes will be more relevant to a particular context and audience than others. Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including, for instance, relevant policies (e.g., national AI plans), laws, and regulations, as well as professional practices and everyday routines.

One existing governance regime with significant potential relevance to the impacts of AI systems is international human rights law. Scholars, advocates, and professionals have been increasingly attentive to the connection between AI governance and human rights laws and norms, and we observed the impacts of this attention among the principles documents we studied. 64% of our documents contained a reference to human rights, and five documents took international human rights as a framework for their overall effort. Existing mechanisms for the interpretation and protection of human rights may well provide useful input as principles documents are brought to bear on individual cases and decisions, which will require precise adjudication of standards like “privacy” and “fairness,” as well as solutions for complex situations in which separate principles within a single document are in tension with one another.

The thirty-six documents in the Principled Artificial Intelligence dataset were curated for variety, with a focus on documents that have been especially visible or influential. As noted above, a range of sectors, geographies, and approaches are represented. Given our subjective sampling method and the fact that the field of ethical and rights-respecting AI is still very much emergent, we expect that perspectives will continue to evolve beyond those reflected here. We hope that this paper and the data visualization that accompanies it can be a resource to advance the conversation on ethical and rights-respecting AI.
