AI adopted without due consideration for workers, MPs told

Enterprises’ rapid adoption of artificial intelligence (AI) during the pandemic has left workers across the UK vulnerable to a range of algorithmically induced harms, including invasive surveillance, discrimination and severe work intensification, MPs have been told.

In a session examining how AI and digital technology generally are changing the workplace, the Business, Energy and Industrial Strategy (BEIS) Committee was told by Andrew Pakes, deputy general secretary of Prospect Union, that the speedy introduction of new technologies to workplaces across the UK helped many enterprises to stay afloat during the pandemic.

But he said the rapid roll-out of AI-powered technologies in workplaces – including systems for recruitment, emotion detection, surveillance, productivity monitoring and more – meant the downsides were not properly considered, creating a situation in which employment laws are no longer fit to deal with the changes to how people are managed via digital technologies.

“What we’ve seen during the pandemic is the acceleration of digital technology which allows us to stay safe, connected and well – but we’ve also seen in that acceleration less time spent on scrutiny of it,” said Pakes.

Giving the example of task allocation software that can help bosses with surveillance or micromanagement of their staff, Pakes added: “You log in, and it tells you how much time you have to do a task. What we don’t have is clarity yet of how that data is then used in the management process to determine the speed of a job, or whether you are a ‘good’ worker or a ‘bad’ worker.

“What sits behind AI is the use of our data to make choices about individual working lives and where they sit within the workplace. Many of our laws are currently based on physical presence, so health and safety laws about physical harms and risks. We don’t yet have a language or legal framework that adequately represents harm or risk created by use of our data.”             

In March 2021, the Trades Union Congress (TUC) similarly warned that huge gaps exist in UK law over the use of AI in the workplace, which it said could lead to discrimination and unfair treatment of working people, and called for “urgent legislative changes”.

A year later, in March 2022, the TUC said the intrusive and increasing use of surveillance technology in the workplace – often powered by AI – was “spiralling out of control”, and pushed for workers to be consulted on the implementation of new technologies at work.

Referring to a report by Robin Allen QC, which found that UK law did not adequately cover the discrimination and equality risks that might arise in the workplace as a result of AI, Carly Kind, director of the Ada Lovelace Institute, told MPs that many of the AI tools being deployed were at the outer edge not only of legality, but also of scientific veracity.

“Things like emotion recognition or classification, which is when interviewees are asked to interview either with an automated interviewer or otherwise on screen, and a form of image recognition is applied to them that tries to distil from their facial movements whether or not they are reliable or trustworthy,” she said, adding that there is a real “legal gap” in the use of AI for emotion classification.

Speaking about how AI-powered tools such as emotion recognition could impact those with neurodivergent conditions, for example, Kind said inequity was a “real concern with AI generally” because it “uses existing datasets to build predictions about the future and tends to optimise for homogeneity and for the status quo – it’s not very good at optimising for difference or diversity, for example”.

When it comes to holding AI accountable, Anna Thomas, director of the Institute for the Future of Work, said that while auditing tools are usually seen as a way of addressing AI harms, they are often inadequate to ensure compliance with UK labour and equality laws.

“In particular, the auditing tools themselves will rarely be explicit about the purpose of the audit, or key definitions including equality and fairness, and assumptions from the US were brought in,” she said, adding that policymakers should look to implement a wider system of socio-technical auditing to address harms caused by AI. “The tools were generally not designed or equipped to actually address problems that had been found,” she said.

The importing of cultural assumptions via technology was also touched on by Pakes, who said problems around AI in the workplace were exacerbated by the fact that most enterprises do not develop their own internal management systems, and so rely on off-the-shelf products made elsewhere in the world, where management practices and labour rights can be very different.

Giving the example of Microsoft Teams and Office 365 – which contain tools that allow employers to covertly read staff emails and monitor their computer use at work – Pakes said that although the software was useful to begin with, the later introduction of automated “productivity scoring” created a host of issues.

“If suddenly, as we’ve found, six months down the line people are hauled up in disciplinaries, and managers are saying ‘we’ve been looking at your email traffic, we’ve been looking at the software you’ve been using, we’ve been looking at your websites, we do not believe you are a productive worker’ – we think that gets into the creepier use of this technology,” said Pakes.

But he added that the problem is not the technology itself, “it’s the management practice of how technology is applied that needs to be fixed”.

Case study: AI-powered automation at Amazon

On the benefits of AI to productivity, Brian Palmer, head of public policy Europe at Amazon, told MPs that the e-commerce giant’s use of automation in its fulfilment centres is not designed to replace existing jobs, but is instead used to take over mundane or repetitive tasks from workers.

“In terms of improving outcomes for people, what we see is an improvement in safety, the reduction of things like repetitive motion injuries or musculoskeletal disorders, improvements in employee retention, the jobs are more sustainable,” he said.

Repeating recent testimony given to the Digital, Culture, Media and Sport (DCMS) Committee by Matthew Cole, a postdoctoral researcher at the Oxford Internet Institute, Labour MP Andy McDonald said: “Overwhelmingly, the evidence shows that the technologies that Amazon uses are not empowering – they lead to overwork, extreme stress and anxiety, and there have been issues with joints and health problems.”

Asked about how data is used to track employee behaviours and productivity, Palmer denied that Amazon was seeking to monitor or surveil employees.

“Their privacy is something we respect,” he said. “The focus of the software and hardware that we’ve been discussing is on the goods, it’s not on the people themselves.” Palmer added that the performance data that is collected is accessible to the employee through internal systems.

When challenged by committee chair Darren Jones, who told Palmer he was “incorrect” in his characterisation, Palmer said the primary and secondary purposes of Amazon’s systems were monitoring “the health of the network” and “inventory control”, respectively.

Relating the story of a 63-year-old constituent who works for Amazon, Jones said it was a fact that the company tracks individual worker productivity, as this constituent already had two strikes for being too slow at packaging items, and could be fired by his manager for a third strike.

Following this exchange, Palmer admitted that Amazon workers could be fired for not meeting productivity targets. However, he maintained there would always be a “human in the loop”, and that any performance issues usually result in the worker being moved to a different “function” within the company.  

Other witnesses also challenged Palmer’s characterisation of automation at Amazon. Laurence Turner, head of research and policy at the GMB union, said its members had reported an increase in “the intensity of work” as a result of ever-higher productivity targets managed via algorithm.

Turner said algorithmic surveillance was also having an impact on people’s mental health, with workers reporting “a sense of betrayal” to the GMB “when it becomes apparent that the employer has been monitoring them covertly – members too often report that they will be called into a disciplinary and presented with a set of numbers, or set of metrics, that they weren’t aware was being collected on them, and that they don’t feel confident in challenging”.

Pakes said Prospect union members had also reported similar, “considerable” concerns around AI’s effect on work intensity.

“There is a danger, we think, about AI becoming a new form of modern Taylorism – also of algorithms being used for short-term productivity gains that are to the detriment of the employer in the long run,” said Turner, adding that the testimony from Palmer was “quite an extraordinary set of evidence that doesn’t reflect what our members are telling us about what happens in those warehouses”.

On the role of AI in work intensification, Thomas said systems need to be designed with the outcomes for workers in mind. “If the aim wasn’t solely to increase the number of bags that somebody had to pack within a minute, but it was instead done with a more holistic understanding about the impacts on people – on their wellbeing, dignity, autonomy, on participation – the outcomes are more likely to be successful,” she said.

The committee opened an inquiry into post-pandemic economic growth and UK labour markets in June 2020, with the stated task of understanding issues around the UK’s workforce, including the impact of new technologies.

A parliamentary inquiry into AI-powered workplace surveillance previously found that AI was being used to monitor and control workers with little accountability or transparency, and called for the creation of an Accountability for Algorithms Act.
