Accountability in algorithmic injustice

Lealholm is a postcard village – the kind of thousand-year-old settlement with just a tea room, pub, provincial train station and a solitary Post Office to distinguish it from the rolling wilderness around it.

Chris Trousdale’s family had worked as subpostmasters managing that Post Office, a family profession going back 150 years. When his grandfather fell ill and was forced to retire from the store, Trousdale quit university at 19 years old to come home and keep the family business alive and serving the community.

Less than two years later, he was facing seven years in prison and charges of theft for a crime he didn’t commit. He was told by Post Office head office that £8,000 had gone missing from the Post Office he was managing, and in the ensuing weeks he faced interrogation, a search of his home and private prosecution.

“I was convicted of false accounting, and pled guilty to false accounting – because they said if I didn’t plead guilty, I would be facing seven years in jail,” he says.

“You can’t really explain to people what it’s like to [realise], ‘If you don’t plead guilty to something you haven’t done, we’re gonna send you to jail for seven years’. After that, my life [was] completely ruined.”

The charges of theft hung over the rest of his life. He was even diagnosed with PTSD.

But Trousdale was just one of more than 700 Post Office staff wrongly victimised and prosecuted as part of the Horizon scandal, named for the bug-ridden accounting system that was actually causing the shortfalls in branch accounts for which staff were blamed.

Automated dismissal

Almost 15 years after Trousdale’s conviction, more than 200 miles away near London, Ernest* (name changed) woke up, got ready for work and got into the driver’s seat of his car, like any other day. He was excited. He had just bought a new Mercedes on finance – after two years and 2,500 rides with Uber, he had been told his ratings meant he could qualify to be an executive Uber driver, with the higher earnings that come with it.

But when he logged into the Uber app that day, he was told he’d been dismissed from Uber. He wasn’t told why.

“It was all random. I didn’t get a warning or a notice or something saying they wanted to see me or talk to me. Everything just stopped,” says Ernest.

He has spent the past three years campaigning with the App Drivers and Couriers Union (ADCU), a trade union for private hire drivers, to have the decision overturned, including taking his case to court.

Even after three years, it isn’t completely clear why Ernest was dismissed. He was initially accused of fraudulent behaviour by Uber, but the firm has since said that he was dismissed due to rejecting too many jobs.

Computer Weekly contacted Uber about the dismissal and subsequent court case, but received no response.

The impact the automated dismissal has had on Ernest over the years has been huge. “It hit me so badly that I had to borrow money to pay off my finance every month. I couldn’t even let it out that I had been sacked from work for fraudulent activity. It’s embarrassing, isn’t it?” he says.

He is currently working seven days a week as a taxi driver, alongside a variety of side hustles, to keep his head above water and afford the nearly £600 a month in finance payments for his car.

“[Uber’s] system has a defect,” he says. “It’s lacking a few things, and one of those few things is how can a computer decide if someone is definitely doing fraudulent activity or not?”

But Uber is far from alone. Disabled activists in Manchester are trying to take the Department for Work and Pensions (DWP) to court over an algorithm that allegedly wrongly targets disabled people for benefit fraud. Uber Eats couriers face being automatically fired by a facial recognition system that has a 6% failure rate for non-white faces. Algorithms on hiring platforms such as LinkedIn and TaskRabbit have been found to be biased against certain candidates. In the US, flawed facial recognition has led to wrongful arrests, while algorithms prioritised white patients over black patients for life-saving care.

The list only grows each year. And these are just the cases we find out about. Algorithms and wider automated decision-making have supercharged the damage that flawed government or corporate decision-making can cause, scaling it to a previously unthinkable size thanks to the efficiency and reach of the technology.

Justice held back by lack of clarity

Journalists often fixate on finding broken or abusive systems, but overlook what happens next. In the majority of cases, little to no justice is found for the victims. At most, the faulty systems are unceremoniously taken out of circulation.

So, why is it so hard to get justice and accountability when algorithms go wrong? The answer goes deep into the way society interacts with technology and exposes fundamental flaws in the way our entire legal system operates.

“I suppose the preliminary question is: do you even know that you’ve been shafted?” says Karen Yeung, a professor and an expert in law and technology policy at the University of Birmingham. “There’s just a basic problem of total opacity that’s really difficult to contend with.”

The ADCU, for example, had to take Uber and Ola to court in the Netherlands to try to gain more insight into how the companies’ algorithms make automated decisions on everything from how much pay and deductions drivers receive to whether or not they are fired. Even then, the court largely refused their request for information.

There’s just a basic problem of total opacity that’s really difficult to contend with
Karen Yeung, University of Birmingham

Further, even if the details of these systems are made public, there is no guarantee people will be able to fully understand them – and that includes those using the systems.

“I’ve been having phone calls with local councils and I have to speak to five or six people sometimes before I can find the person who understands even which algorithm is being used,” says Martha Dark, director of legal charity Foxglove.

The group specialises in taking tech giants and government to court over their use of algorithmic decision-making, and has forced the UK government to U-turn on multiple occasions. In just one of those cases, dealing with a now-retracted “racist” Home Office algorithm used to stream immigration requests, Dark recalls how one Home Office official wrongly insisted, repeatedly, that the system wasn’t an algorithm.

And that kind of inexperience gets baked into the legal system too. “I don’t have a lot of confidence in the capacity of the average lawyer – and even the average judge – to understand how new technologies should be responded to, because it’s a whole layer of sophistication that is very unfamiliar to the ordinary lawyer,” says Yeung.

Part of the issue is that lawyers rely on drawing analogies to establish whether past cases already set a legal precedent for the issue being deliberated. But most analogies to technology don’t hold up well.

Yeung cites a court case in Wales in which the misuse of mass facial recognition technology was accepted by the authorities through comparisons to a police officer taking surveillance photos of protestors.

“There’s a qualitative difference between a policeman with a notepad and a pen, and a policeman with a smartphone that has access to a total central database that is connected to facial recognition,” she explains. “It’s like the difference between a pen knife and a machine gun.”

Who is to blame?

Then there’s the thorny issue of who exactly is to blame in cases involving so many different actors – what is known in the legal world as ‘the problem of many hands’. While this is far from a new problem for the legal system, tech companies and algorithmic injustice add a host of further complications.

Take the case of non-white Uber Eats couriers who face auto-firing at the hands of a “racist” facial recognition algorithm. While Uber deployed the system that led to a large number of non-white couriers being fired (it has a failure rate of between 6% and 20% for non-white faces), the underlying system and algorithm were made by Microsoft.

Given how little the different parties often know about the flaws in these kinds of systems, the question of who should be auditing them for algorithmic injustice, and how, isn’t completely clear. Dark, for example, also cites the case of Facebook content moderators.

Foxglove are currently taking Facebook to court in multiple jurisdictions over its treatment of content moderators, who they say are underpaid and given no support as they filter through everything from child pornography to graphic violence.

However, because the staff are outsourced rather than directly employed by Facebook, the company is able to suggest it isn’t legally accountable for their systemically poor conditions.

Then, even if you manage to navigate all of that, your chances in front of a court could be limited for one simple reason – automation bias, or the tendency to assume that the automated answer is the most accurate one.

In the UK, there is even a legal rule that means prosecutors don’t have to prove the veracity of the automated systems they rely on – though Yeung says that could be set to change at some point in the future.

And while the General Data Protection Regulation (GDPR) mandates human oversight of any automated decision that could “significantly affect” the person subject to it, there are no concrete rules requiring that human intervention be anything more than a rubber stamp. In a large number of the cases humans do oversee, that same automation bias means they regularly side with the automated decision, even when it makes little sense.

Stepping stone to transparency 

As inescapable and dystopian as algorithmic injustice sounds, however, those Computer Weekly spoke to were adamant that things can be done about it.

For one thing, governments and companies could be forced to disclose how their algorithms and systems work. Cities such as Helsinki and Amsterdam have already acted on this, introducing public registers of the AI and algorithmic systems they deploy.

While the UK has made positive steps towards introducing its own algorithmic transparency standard, it only covers the public sector and is currently voluntary, according to Dark.

The people who are using systems that could be the most problematic are not going to voluntarily opt for registering them
Martha Dark, Foxglove

“The people who are using systems that could be the most problematic are not going to voluntarily opt for registering them,” she says.

For many, that transparency would be a stepping stone to much more rigorous auditing of automated systems, to make sure they aren’t hurting people. Yeung compares the current situation to the era before financial audits and accounts were mandated in the business world.

“Now there is a culture of doing it properly, and we need to sort of get to that point in relation to digital technologies,” she says. “Because, the trouble is, once the infrastructure is there, there is no going back – you will never get that dismantled.”

For the victims of algorithmic injustice, the battle rarely, if ever, ends. The “permanency of the digital record”, as Yeung puts it, means that once convictions or negative decisions are out there, much like a nude photo, you can “never get that back”.

In Trousdale’s case, despite nearly two decades of frantic campaigning that saw his conviction overturned in 2019, he still hasn’t received any compensation, and his DNA and fingerprints remain permanently logged on the police national database.

“This is nearly two years now since my conviction was overturned, and still I’m a victim of the Horizon system,” he says. “This isn’t over. We are still fighting this daily.”
