Lack of process makes it difficult to challenge decisions made by algorithms
In 2020, when COVID-19 restrictions prevented students from sitting exams in person, the United Kingdom's school exam regulator, Ofqual, used an algorithm to determine students' final-year grades.
And the students didn’t like it.
Following protests and concerns about socioeconomic discrimination, the algorithmic grades were scrapped in favor of teacher-assessed grading. One of the key criticisms of the algorithmic grading system was that there was no process available to students to appeal their grades.
And this is not an isolated incident or a new problem.
In 2014, seven teachers and the Houston Federation of Teachers successfully argued that the use of an algorithmic performance measurement system to terminate their teaching contracts breached their constitutional right to due process. They argued they were unable to “meaningfully challenge” their termination “due to lack of sufficient information.”
The company that created the algorithmic system claimed that the equations, source code, decision rules and assumptions it used were all proprietary trade secrets and, as such, could not be provided to the teachers.
This left the teachers with no clear understanding of what factors the system took into account and how their performance scores were actually calculated.
Opacity is not the only challenge algorithmic decisions pose. For example, it is often unclear what can actually be contested.
Should people be able to contest the data used to make the decision? If the algorithm follows the process that it was programmed to follow, on what grounds can the decision be contested? Or should the very use of the algorithm in the first place be contestable?
In recent years, numerous guidelines and principles have been developed to address the use of artificial intelligence. Many of these mention the ability to challenge, appeal or contest algorithmic decisions, but they offer limited guidance as to what type of process should be provided.
Guidance relating to the European Union's General Data Protection Regulation suggests that contestation requires an internal review after the decision has been made.
Within human-computer interaction, contestability is seen as a more interactive process: one in which the people affected by a decision can engage with the decision-making system and shape how decisions are made.
Given these different approaches to contestability, our team wanted to understand more about what stakeholders—including the public and decision-makers like businesses and government—expect in relation to the ability to contest.
Our research analyzed submissions made in response to a discussion paper released by the Australian government in 2019—Artificial Intelligence: Australia’s Ethics Framework.
This is the first framework of its kind to specifically include “contestability” as a principle, which was defined as: “When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.”
From our analysis of the submissions, the inclusion of “contestability” as its own principle was generally supported, although some thought it was better seen as an aspect of a higher-order principle such as “fairness” or “accountability.”
While contestability was seen as a form of protection, many questioned its usefulness, given that it’s currently unenforceable.
There was also acknowledgement that different people affected by algorithmic decisions would have different capacities and abilities to contest. This means that any contestation process should be made as clear and accessible as possible, and that it should not be the only tool used to regulate algorithmic decision-making.
Many submissions sought more clarity and guidance from the government on a number of important policy questions. For example, who can contest a decision? What can be contested? How should a review process run?
And then there's the corporate picture. Margot Kaminski, an associate professor at the University of Colorado Law School, notes that a lack of guidance around contestability could disadvantage affected people:
"This raises the question of whether a company whose interests do not always align with its users' will be capable of providing adequate process and fair results. There is room for substantially more policy development in fleshing out this contestation right," Kaminski says.
Many submissions outlined processes that resemble those currently used for reviewing human decisions. However, human decision-making is very different to the way algorithmic decision-making works.
So, it’s important to consider whether existing processes designed to check human bias and error will be adequate for reviewing algorithmic decision-making.
A number of submissions also emphasized the need for a human to review the decision. But this then raises concerns around the scalability of human review—it could simply be far too much work for a team of people to do.
Instead of relying purely on post-hoc decision review processes, there’s value in building algorithmic decision-making systems that consider contestability from their conception.
One approach, "contestability by design," proposed by European researcher Marco Almada, emphasizes the value of participatory design: involving those most likely to be affected by a decision-making system in the design of the system itself.
This kind of process would help to highlight problems with the system and potentially reduce the need for future contestation.
Giving people the ability to interact with a system, check the information it has taken into account, make corrections if needed, or lodge disputes could help them understand how a system works and exercise some control over the outcome. It may also reduce the need for post-hoc contestation processes.
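To make this idea concrete, here is a minimal sketch of what such an interactive decision record might look like in code. The ContestableDecision class, its method names and the grading example are all hypothetical illustrations written for this article, not part of the research or any deployed system; they simply show a decision that exposes the inputs it used, accepts corrections, and logs disputes as part of the record itself.

```python
from dataclasses import dataclass, field

@dataclass
class ContestableDecision:
    """A decision record that exposes its inputs and accepts challenges."""
    subject_id: str
    inputs: dict                # the data the algorithm actually used
    outcome: str
    disputes: list = field(default_factory=list)

    def explain(self):
        """Let the affected person see exactly what was taken into account."""
        return dict(self.inputs)

    def correct(self, field_name, new_value, evidence):
        """Record a correction to an input and log it for re-evaluation."""
        self.inputs[field_name] = new_value
        self.disputes.append(
            {"type": "data_correction", "field": field_name, "evidence": evidence}
        )

    def lodge_dispute(self, grounds):
        """Contest the outcome itself, or the use of the algorithm at all."""
        self.disputes.append({"type": "outcome_dispute", "grounds": grounds})
        return len(self.disputes)  # a reference number for follow-up

# Example: a student inspects and corrects the record behind their grade.
decision = ContestableDecision(
    subject_id="student-042",
    inputs={"school_historic_results": "B", "teacher_assessment": "A"},
    outcome="B",
)
print(decision.explain())
decision.correct("teacher_assessment", "A+", evidence="signed teacher report")
ref = decision.lodge_dispute("outcome does not reflect corrected inputs")
```

The design choice this sketch illustrates is that the dispute trail lives inside the decision record rather than in a separate appeals channel, so contestation is built into the system from the start rather than bolted on afterwards.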
Ultimately, algorithmic decision-making is very different to human decision-making. We need to carefully consider how to design systems that not only support the ability to contest but also reduce the need for anyone to contest a decision in the first place.
Henrietta Lyons et al., Conceptualising Contestability, Proceedings of the ACM on Human-Computer Interaction (2021). DOI: 10.1145/3449180