Digital Ethics Summit: Who benefits from new technology?
The siloed and insulated way in which the tech sector approaches innovation is sidelining ethical considerations, panellists have claimed, diminishing public trust in the idea that new technologies will benefit everyone.
Speaking at TechUK’s sixth annual Digital Ethics Summit this month, panellists discussed the ethical development of new technologies, particularly artificial intelligence (AI), and how to ensure that process is as human-centric and socially useful as possible.
A major theme of the Summit’s discussions was: who dictates and controls how technologies are developed and deployed, and who gets to lead discussions around what is considered “ethical”?
In a conversation about the ethics of regulation, Carly Kind, director of the Ada Lovelace Institute, said a key problem is that the development of new technologies is “led by what is technically possible”, rather than by “what is politically desirable”, producing harmful outcomes for ordinary people, who are more often than not excluded from these discussions.
Kind added: “It is the experience of most people that their relationship to technology is an extractive one which takes away their agency – and public research shows again and again that people would like to see more regulation, even if it comes at the cost of innovation.”
Andrew Strait, associate director of research partnerships at the Ada Lovelace Institute, said the tech sector’s “move fast and break things” mentality has created a “culture problem”, in which the fixation on innovating quickly produces a “great disregard” for ethical and moral considerations when developing new technologies, storing up problems further down the line.
Strait said that when ethical or moral risks are considered, there is a tendency for the issues to be “thrown over a wall” for other teams within an organisation to deal with. “That creates a…lack of clarity over ownership of those risks or confusion over responsibilities,” he added.
Building on this point during a separate session on the tech sector’s role in human rights, Anjali Mazumder, justice and human rights theme lead at the Alan Turing Institute, said there is a tendency for those involved in the development of new technologies and knowledge to be siloed off from each other, which inhibits understanding of key, intersecting issues.
For Mazumder, the key question is therefore “how do we develop oversight and mechanisms recognising that all actors in the space also have different incentives and priorities within that system”, while also ensuring better multi- and interdisciplinary collaboration between those actors.
In the same session, Tehtena Mebratu-Tsegaye, a strategy and governance manager in BT’s “responsible tech and human rights team”, said that ethical considerations, and human rights in particular, need to be embedded into technological development processes from the ideation stage onwards, if attempts to limit harm are to be successful.
But Strait said these incentive problems exist across the entire lifecycle of new technologies, adding: “Funders are incentivising to move very quickly, they’re not incentivising considering risk, they’re not incentivising engaging with members of the public being impacted by these technologies, to really empower them.”
For the public sector, which relies heavily on the private sector for access to new technologies, Fraser Sampson, commissioner for the retention and use of biometric material and surveillance camera commissioner, said ethical preconditions should be inserted into procurement procedures to ensure that such risks are properly considered when buying new tech.
A key issue with the development of new technologies, particularly AI, is that much of the risk is socialised – the technology’s operation affects ordinary people, especially during the development phase – while the benefits accrue to the private interests that own it, he said.
Jack Stilgoe, a professor in science and technology studies at University College London, said ethical discussions around technology are hamstrung by tech firms dictating their own ethical standards, which creates a very narrow range of debate around what is, and is not, considered ethical.
“To me, the biggest ethical question around AI – the one that really, really matters and I think will define people’s relationships of trust – is the question of who benefits from the technology,” he said, adding that data from the Centre for Data Ethics and Innovation (CDEI) reveals “substantial public scepticism that the benefits of AI are going to be widespread, which creates a big issue for the social contract”.
Stilgoe said there is “a real danger of complacency” in tech companies, especially given their misunderstanding of how trust is developed and maintained.
“They say to themselves, ‘yes, people seem to trust our technology, people seem happy to give up privacy in exchange for the benefits of technology’…[but] for a social scientist like me, I would look at that phenomenon and say, ‘well, people don’t really have a choice’,” he said. “So to interpret that as a trusting relationship is to massively misunderstand the relationship that you have with your users.”
Both Strait and Stilgoe said part of the issue is the relentless over-hyping of new technologies by the tech sector’s public relations teams.
For Strait, the tech sector’s PR creates such great expectations that it leads to “a loss of public trust, as we’ve seen time and time again” whenever technology fails to live up to the hype. He said the hype cycle also stymies honest conversations about the actual limits and potential of new technologies.
Stilgoe went further, describing it as “attention-seeking” and an attempt to “privatise progress, which makes it almost useless as a guide for any discussion about what we can [do]”.