How values-driven artificial intelligence can reshape the way we communicate


Mike Ananny walked his dog this morning. He did so with no expectation of privacy.

“I know that I was subject to a wide variety of cameras, whether it’s Ring doorbells, cars driving along, or even city traffic cameras,” he said. “I didn’t choose to participate in this whole variety of video surveillance systems. I just took my dog for a walk.”

Ananny understands that, wherever he goes, data about him is being collected, analyzed and monetized by artificial intelligence (AI).

Kate Crawford drove a van deep into the arid Nevada landscape to get a good look at the evaporating brine ponds of the Silver Peak Lithium Mine.

Those desolate swaths of liquid are not only the biggest U.S. source of lithium—the metal that is essential to the batteries that power everything from laptops to mobile devices to electric cars—they are also a vivid reminder of the impact AI has on the material world.

“Metaphors that people use to talk about AI like ‘the cloud’ imply something floating and abstract,” Crawford said. “But large-scale computation has an enormous carbon footprint and environmental impact.”

Crawford knows that the world's systems of energy, mining, labor and political power are being rewritten by the needs of AI.

When the COVID-19 pandemic started, Ashley Alvarado knew that her station’s listeners were scared and confused.

At KPCC-FM and LAist, Alvarado has used a variety of communication tools to connect with audiences, but the scale of the comments, questions and tips the station was receiving required a solution that could process large amounts of data, fast.

“With COVID there was so much need for information at the start of the pandemic that the way we could be most human for Angelenos was by leveraging AI,” Alvarado said.

Known by many names—algorithms, bots, big data, natural language processing, machine learning, intelligent agents—technologies that fall under the broad definition of AI are reshaping not only the world of communication, but the world as a whole. Across USC Annenberg, faculty, students and alumni are exploring both the immense potential and the often less-obvious pitfalls presented by these technologies.

“Annenberg is uniquely positioned to lead on this conversation, because these are socio-technical and communication problems,” Ananny said. “I don’t want our answers to these questions to just be technical. I want an answer that is deeply historical and rooted in cultural understanding.”

Search terms

In the popular imagination, AI can mean anything from the quotidian convenience of your phone picking songs it knows you might like or plotting the best route to your friend’s house, to the promise of big-data panaceas for issues like climate change or the COVID-19 pandemic. It’s also next to impossible to discuss AI without noting how often it is cast as the villain of science fiction: the ban on “thinking machines” in Frank Herbert’s Dune, HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey, the Borg in Star Trek, Agent Smith in The Matrix.

“I think most people tend to think of it as this sort of sci-fi technology from Terminator or Ready Player One,” said Fred Cook, director of the Center for Public Relations. “In reality, it is the engine behind many of the things that people, especially in the PR industry, already use in their daily work.”

To grossly oversimplify, most of what is commonly thought of as AI comes down to the interaction of algorithms—mathematical functions—making calculations based on enormous amounts of data.

“Algorithms are the instructions and rules that govern computing,” said Marlon Twyman II, who researches how technology shapes the interactions of individuals and teams in the workplace. “Artificial intelligence must have algorithms underpinning the decisions and engagements it makes.”

Twyman cites the example of image recognition: AI that tries to determine whether a picture shows a cat or a dog. The more examples the algorithms are exposed to—the more data—the better they become at making these determinations.

“Artificial intelligence is when computers start being able to respond to inputs that they were not necessarily trained on—or exposed to—when they were programmed,” said Twyman, assistant professor of communication.
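
To make Twyman’s point concrete, here is a minimal sketch (our illustration, not anything built at USC) of the cat-versus-dog task: synthetic feature vectors stand in for real images, a simple model is trained on labeled examples, and it is then asked to classify inputs it was never trained on. Accuracy improves as the training set grows.

```python
# Toy illustration of supervised image classification: synthetic feature
# vectors stand in for real cat/dog images (label 0 = cat, 1 = dog).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for n in (50, 500, 2500):  # more training data -> better determinations
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    preds = model.predict(X_test)  # inputs the model was never trained on
    print(f"trained on {n:>4} examples -> "
          f"accuracy {accuracy_score(y_test, preds):.2f}")
```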

“What we’re interacting with is just math,” said Ignacio Cruz, who earned his Ph.D. in communication in 2021 and now teaches at Northwestern University. He stresses that, despite AI’s capabilities for recognizing trends and patterns, it isn’t all that mysterious. Technology that has, if not sentience, then at least some independent agency—or what Cruz calls “agentic qualities”—remains, for now, largely the stuff of sci-fi.

“Algorithms don’t work the way the human brain works,” noted Lynn Miller, professor of communication. “AI is really just a prediction machine.”

Such machines enable remarkable technological achievements in healthcare, logistics, games, entertainment, criminal justice, hiring and many other fields—including, in unexpected ways, local journalism.

AI and community

KPCC-FM didn’t expect to be using AI to build community engagement. But when the pandemic hit in 2020 and panicked messages about the lockdown began flooding in, the Pasadena-based public radio station’s leadership knew it had to do something to help its listeners.

“It started out with just concern,” said Alvarado. “And then it went into just full-fledged panic—questions about shortages at Target, whether to cancel a wedding, whether it was illegal to gather with loved ones to mourn somebody.”

Most of these questions were coming through a tool the radio station had embedded on its homepage that uses Hearken, an engagement and organizational support platform. “We were sometimes getting 10 messages a minute through this tool,” said Alvarado, vice president of community engagement and strategic initiatives for KPCC-FM and LAist. “We had to think creatively about how we could meet the information needs of thousands and thousands of people.”

She talked to Paul Cheung, then-director of journalism and technology innovation at the John S. and James L. Knight Foundation, who asked if she had thought about machine learning. “And I had not,” she said with a chuckle. Cheung connected them with some journalists who were working with AI at the online publication Quartz, and they helped Alvarado and her team develop a natural language processing tool that could analyze the requests they were receiving from listeners.

“With the tool, we could identify themes we needed to focus on—not just for answering questions, but for what stories we should cover and where,” said Alvarado, who earned her BA in 2005 with a double major in print journalism and Spanish.

Alvarado sees great potential for this technology to surface patterns in audience input during other fast-moving news events, from wildfires to political debates. “Normally, you would have to read through every question as it came in and hope that you observed a trend, as opposed to having the AI in place to say, ‘Here’s something that’s popping up again and again.’”
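
KPCC’s actual tool, developed with the journalists at Quartz, has not been published. But a common approach to surfacing recurring themes is to cluster incoming questions by their vocabulary, as in this minimal sketch with hypothetical audience messages:

```python
# Sketch of theme detection: represent each question as TF-IDF word weights,
# then cluster similar questions so recurring topics surface automatically.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

questions = [  # hypothetical audience messages
    "Are grocery stores running out of supplies?",
    "Is Target limiting purchases right now?",
    "Will stores restock toilet paper soon?",
    "Should we cancel our wedding in April?",
    "Can we hold a small memorial for my grandmother?",
    "Is it legal to gather with family to mourn?",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(questions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):  # each cluster is a candidate theme
    print(f"Theme {cluster}:")
    for question, label in zip(questions, labels):
        if label == cluster:
            print("  -", question)
```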

Some publications are already directly using AI to write stories, usually basic, easily formatted pieces like stock reports, weather bulletins and sports stories. Though these pieces end up saving some entry-level reporter from rote drudgery, Twyman sees a potential downside.

“The problem is, this removes the possibility of innovating, even in these simple tasks,” he said. “If we keep removing humans from more and more complex writing tasks, we could end up in a world that looks very different.”

Agents with agency

Sometimes, removing humans from the equation is necessary for their safety. In her research into risky sexual activity more than 25 years ago, Miller was running into a very fundamental—and very human—problem. “I was interested in sexual behavior among young men who have sex with men,” she said. “I did a lot of qualitative work on what led up to these moments of risk, but I obviously couldn’t hide under beds to figure out what was going on. That’s when I started getting interested in creating virtual environments.”

Miller wanted to create an interactive game in which human subjects could make decisions about whether or not to engage in risky sexual behavior, but the technology available to her limited her to scripted situations.

The answer was a virtual environment populated by “intelligent agents,” characters whose behavior was governed by algorithms that set their preferences and goals—in other words, AI—rather than by fixed scripts. Working with a team of USC computer scientists and psychologists, Miller developed characters whose behavior was representative of people in real life. These characters populated a virtual world and could interact with human research subjects in more natural ways that would actually yield actionable research data about risky sexual behavior without the risk.

“You can have a human in the loop who responds to what the intelligent agent is doing, which then shapes its behavior, or you can have all agents interacting and running simulations,” Miller said. Her work helped identify not only patterns of risky behavior but ways to effectively intervene and mitigate that risk.
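
Miller’s actual virtual environments are far richer, but a toy sketch conveys the core idea of an intelligent agent: preferences and goals, not a fixed script, generate each choice, and each agent’s choice feeds back into the next agent’s decision. All names and parameters below are hypothetical.

```python
# Minimal agent-based simulation: behavior emerges from each agent's
# programmed preferences plus social pressure from the interaction partner.
import random

class Agent:
    def __init__(self, name, risk_tolerance):
        self.name = name
        self.risk_tolerance = risk_tolerance  # 0.0 cautious .. 1.0 risk-seeking

    def decide(self, partner_pressure):
        # Probability of the risky choice rises with the agent's own
        # tolerance and with pressure created by the partner's last choice.
        p_risky = min(1.0, 0.5 * self.risk_tolerance + 0.5 * partner_pressure)
        return "risky" if random.random() < p_risky else "safe"

random.seed(1)
agents = [Agent("A", risk_tolerance=0.2), Agent("B", risk_tolerance=0.8)]
for step in range(3):
    pressure = 0.0
    for agent in agents:  # each choice shapes the next agent's decision
        choice = agent.decide(pressure)
        pressure = 0.9 if choice == "risky" else 0.1
        print(f"step {step}: {agent.name} chooses {choice}")
```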

Over the past decade and a half, in award-winning research that builds on those original virtual environments, Miller and her team have also learned what kinds of interventions work best to limit risk in sexual situations—none of which would have been possible without AI.

Her more recent work has moved into the realm of neuroscience, using those intelligent agents to model more complex human processes, like communication competence and how humans make meaning through social interaction.

“One of the problems with current AI in general is that it can only get up to a certain point as far as being able to infer emotions,” Miller said. “Having said that, there are certain probabilities and parameters we can program into our intelligent agents when it comes to social interaction that actually do a pretty good job of modeling how actual humans, in a highly interactive and flexible environment, will make decisions.”

While the future of AI is hard to predict, Miller said cutting-edge AI researchers are already trying to leverage how human brains understand the world. “As with any innovations, there are risks to be mitigated,” Miller noted. “But there are also enormous opportunities to enhance interventions and therapies to dramatically improve communication and individual and societal well-being.”

Parsing polarization

As Miller points out, one of the strengths of AI is finding patterns among enormous data sets. Fred Cook wanted to take a particularly contentious data set—social media posts about controversial political issues—and find out if AI could help measure the degree of polarization in the debate around those issues.

The process started with a survey the Center for Public Relations conducted for its 2021 Global Communication Report, which identified several major issues that PR professionals thought they’d have to address in the coming year. Cook shared those issues with executives at the PR firm Golin, where he had been CEO (and still holds a financial interest), and with the software firm Zignal Labs.

“Given the enormous problem that the current level of polarization causes for people, government and business, we decided to develop a new tool that would measure it—and hopefully help reduce it,” Cook said.

Their approach is based on the Ad Fontes chart of media bias, which places media outlets on a left-right political spectrum along one axis and rates their reliability on the other. The Zignal AI tool takes the top 10 contentious political issues and cross-references them with social media posts that link to articles from publications on the Ad Fontes chart. Based on each publication’s position on the chart, the tool scores how far left or right the shares on a particular issue skew; the gap between how many right/conservative articles and how many left/liberal articles are shared on an issue yields its Polarization Index score.

The sheer number of posts involved in creating this score—more than 60 million—requires AI to do the work quickly.
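
The Zignal tool itself is proprietary, but the scoring logic as described above can be sketched in a few lines. The outlets, bias scores and share counts below are hypothetical.

```python
# Back-of-the-envelope Polarization Index: each shared article inherits its
# outlet's left-right position; the left/right gap on an issue is its score.
bias = {"outletA.com": -0.8, "outletB.com": -0.3,  # negative = left-leaning
        "outletC.com": 0.4, "outletD.com": 0.9}    # positive = right-leaning

shares_by_issue = {  # counts of social posts linking to each outlet
    "immigration": {"outletA.com": 120, "outletD.com": 310},
    "climate":     {"outletB.com": 200, "outletC.com": 180},
}

for issue, counts in shares_by_issue.items():
    left = sum(n for outlet, n in counts.items() if bias[outlet] < 0)
    right = sum(n for outlet, n in counts.items() if bias[outlet] >= 0)
    index = abs(right - left) / (right + left)  # 0 = balanced, 1 = one-sided
    print(f"{issue}: polarization index {index:.2f}")
```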

“The Polarization Index provides a heat-map of what issues are the most controversial and the factors that are contributing to their divisiveness,” Cook said. “We can draw implications for people, companies and communicators who may want to engage on these topics.”

Cook also says that PR practitioners will have to continue to address criticism of AI based on privacy, labor, bias and social justice concerns, but adds that his own experience has shown that AI can make positive impacts in these areas as well.

That being said, Cook added, “Every new technology has aspects of it that are frightening, and AI is no different than anything else. While we used AI to do really important work on our Polarization Index, AI can and has been used to spread disinformation and influence political campaigns through bots. Any time there’s a new technology, somebody is going to use it in a detrimental way.”

Hunting AI with AI

When it comes to interrogating both the positive and negative aspects of AI, USC Annenberg’s doctoral students in communication are at the forefront of that research, bridging computer science and social science to build deep insight into both the technical and cultural implications of AI.

Doctoral student Ho-Chun Herbert Chang says that his undergraduate years at Dartmouth College were formative. “Dartmouth was the place where the term AI was coined in 1956,” he noted. “I studied mathematics and quantitative social science, and for my senior fellowship program, I did a fiction project about artificial intelligence. That was the start of me looking at AI in both a technical and a humanistic way.”

As his academic career progressed, Chang saw a “chasm” between how practitioners and the public see artificial intelligence. “From the computer science side, there’s more of an emphasis on the technical aspects of designing algorithms,” he said. “From the humanistic side, there’s a focus on societal values as the primary principle in terms of organizing research.”

One of the projects Chang worked on in the past year showed the potential of AI to investigate human behavior—and the behavior of other AI systems. Working with Emilio Ferrara, associate professor of communication and computer science whose groundbreaking research identified how Twitter bots affected the 2016 U.S. presidential campaign, Chang helped build on that work in the run-up to the 2020 election. Using an AI tool called the Botometer, the team was able to quantify how much Twitter traffic around conspiracy theories was generated and amplified by bots. “The Botometer looks at each Twitter account’s timeline data and metadata, using machine learning to figure out whether an account is a human or a bot,” Chang said.
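
Botometer offers a public Python client (github.com/IUNetSci/botometer-python). A minimal sketch of its use follows; the credentials and account handle are placeholders, and response fields can vary across API versions.

```python
# Query Botometer for a single account's bot-likelihood scores.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"  # placeholder credentials
twitter_app_auth = {
    "consumer_key": "XXXX",
    "consumer_secret": "XXXX",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

result = bom.check_account("@example_account")  # hypothetical handle
# CAP (Complete Automation Probability): the model's estimate, from timeline
# data and metadata, that the account is fully automated.
print(result["cap"]["universal"])
```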

Chang also worked with Allissa Richardson, assistant professor of journalism, to analyze the movement for racial justice that followed the murder of George Floyd by Minneapolis police. “A big part of communication research is about how users participate on social platforms—mediated by algorithms—and how they use these platforms to self-organize for democratic movements,” he said. “That’s the kind of work I want to do. I’m engaging holistically with AI, and Annenberg is the perfect place for that research.”

Ignacio Cruz focused his dissertation on the use of AI tools in workplace recruitment. Perhaps not surprisingly, he found that the human recruiters who used AI to sort and recommend applicants for positions had very polarized opinions about its effectiveness. “They often saw AI as either an adversary or an ally,” said Cruz, now a postdoctoral fellow at Northwestern University. “Sometimes recruiters see these systems as a time-saver, as an ally. But the job candidates these systems surface often don’t jibe with the recruiters’ expertise.”

While acknowledging the power of AI to help people make meaning out of huge data sets, Cruz also cautions about the many issues that can arise from uncritically accepting the outputs of such systems. Using AI as an intermediary for communication is so new a phenomenon, he said, that “we just need a lot more education and critical inquiry about how these technologies are developed before they are deployed to the masses.”

Cruz’s own research has shown that AI systems often reflect the biases of those who develop them, as they rely upon human intervention during their creation and implementation. “Artificial intelligence as it’s being developed is scattered and largely unregulated,” he said. “If these technologies really are going to help us create a better tomorrow, then they need to be designed with purpose, and they need to be continually audited—not only for efficiency, but for sustainability and ethics.”
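
What might such an audit look like in practice? One simple, widely used check (our illustration, not Cruz’s method) compares the rate at which an AI screener recommends candidates across demographic groups, flagging ratios below the “four-fifths rule” heuristic used in U.S. hiring guidelines.

```python
# Disparate-impact audit: compare selection rates across groups; a ratio
# below 0.8 (the four-fifths rule) is a common flag for review.
recommendations = [  # hypothetical (group, recommended?) audit log
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = {}
for group in sorted({g for g, _ in recommendations}):
    picks = [rec for g, rec in recommendations if g == group]
    rates[group] = sum(picks) / len(picks)

ratio = min(rates.values()) / max(rates.values())
print("selection rates:", rates)
print(f"impact ratio: {ratio:.2f}"
      + ("  <- below 0.8, review the model" if ratio < 0.8 else ""))
```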

The desert of AI

For Kate Crawford, the problem with much of the public conversation around the potential of AI is the lack of any critical lens by which to monitor it in the meaningful ways Cruz suggests.

“We are subjected to huge amounts of marketing hype, advertising and boosterism around artificial intelligence,” said Crawford, research professor of communication. “Part of what I do is look at the way in which artificial intelligence is not just a series of algorithms or code … but to really look at this much bigger set of questions around what happens when we create these planetary-scale computational networks? Who gains, but also, who loses?”

In the first chapter of her new book “Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence” (Yale University Press, 2021), Crawford begins with her journey to that lithium mine, setting the tone for an exploration of the planetary costs of AI. Her devastating critique frames AI as an extractive industry—both literally, in its reliance on finite resources and labor for its components and its power, and figuratively, in the amount of data it consumes, categorizes and monetizes.

“Over the course of researching this book, I learned much more about the environmental harms of AI systems,” Crawford said. “Servers are hidden in nondescript data centers, and their polluting qualities are far less visible than the billowing smokestacks of coal-fired power stations.”

Describing the amount of energy needed to power something like Amazon Web Services as “gargantuan,” Crawford noted that the environmental impact of the AI systems that run on those platforms is continuing to grow. “Certainly, the industry has made significant efforts to make data centers more energy-efficient and to increase their use of renewable energy,” Crawford said. “But already, the carbon footprint of AI has matched that of the aviation industry at its height.”

Crawford said that the entire model of AI is extractive and exploitative and would need to be “re-architected” to work differently. “We also need regulatory and democratic oversight,” she added. “The proposed European Union AI regulations offer a good starting point, but that’s just one effort—and we have yet to see something similar in the United States or China, the two largest producers of AI technologies.”

Working with her USC Annenberg colleagues, Crawford hopes to contribute to what a reimagined AI would look like. She has teamed up with Mike Ananny and a team of doctoral students and practitioners on a new research project that will analyze issues within the data sets used to train AI systems.

“AI could help design a shipping system that would minimize the carbon footprint, rather than maximizing profit margin,” said Ananny, associate professor of communication. “It’s a question of, what do we want to maximize for in our AI systems? It pushes the problem back onto the people with power and it says, it’s not a data problem. It’s a values problem.”
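
Ananny’s point can be put in optimization terms: keep the constraints, change the objective. A toy sketch, with entirely hypothetical routes and numbers (scipy’s linprog minimizes, so profit is negated to maximize it):

```python
# Same shipping constraints, two objectives: maximize profit vs. minimize CO2.
from scipy.optimize import linprog

profit = [120, 90, 60]   # $ per shipment by route: air, truck, rail
carbon = [500, 150, 40]  # kg CO2 per shipment on the same routes
A_eq, b_eq = [[1, 1, 1]], [100]  # ship exactly 100 units in total
bounds = [(0, 60)] * 3           # at most 60 units on any single route

max_profit = linprog([-p for p in profit], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
min_carbon = linprog(carbon, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("maximize profit ->", max_profit.x)  # leans on high-margin air freight
print("minimize carbon ->", min_carbon.x)  # shifts volume to rail instead
```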

Crawford said that USC Annenberg’s combination of technical expertise with a deep understanding of human communication makes it the ideal place for that kind of reimagining of a less-harmful AI.

“Our hope is that the research will contribute to how USC and the broader academic community thinks about the future of AI, in terms of how we build it, use it, and regulate it,” she said.

Toward an ethical AI

As part of his studies of media and technology, Ananny is a scholar of, and a contributor to, the big conversations about how society can reap the benefits of big-data AI systems while still preserving (or better, reestablishing) something that might be recognized as ethics and privacy.

While many critics and policymakers have proposed stronger regulation that would force tech companies to behave more like public utilities, with greater transparency, Ananny is among those who argue that such reforms don’t go far enough.

“We’ve allowed capitalist institutions to have massive amounts of power for commodifying people, for allowing wealth inequalities and wealth concentrations—and data is just a part of that, and part of perpetuating that,” Ananny said. “Honestly, until you solve this problem of late capitalism where individuals have zero power and companies have all the power, you can kind of nibble around the edges with regulations, but that won’t have any real effect on the problem.”

Ananny echoes Crawford’s work, asserting that the climate crisis is bringing increasing urgency to the problem of AI as an extractive industry.

“We cannot allow the planet to burn because of the energy needs of Bitcoin’s server farms,” he said. “These AI systems are optimizing Amazon’s ability to fly products all over the world, with a huge carbon footprint, so people can have a spatula delivered to their Amazon box.”

Ananny does note that some scholars, scientists, activists and politicians are looking for opportunities to leverage the positive impacts of AI’s computing power in a way that doesn’t exacerbate the climate emergency.

“This is the language we’re using to create a new kind of reality,” Ananny said. “Data sets, statistical certainty, optimization, model-making, error detection—all those kinds of seemingly technical terms. But we also need to engage with questions of values. Is it worth it to have all of these things happening at such a huge scale? At what point, in terms of the human and material cost, do you tip too far over? We’re going to have to be able to make these kinds of judgments about particular AI tools—including, ‘Don’t build it.’”




Provided by
University of Southern California


Citation:
How values-driven artificial intelligence can reshape the way we communicate (2022, February 7)
retrieved 7 February 2022
from https://techxplore.com/news/2022-02-values-driven-artificial-intelligence-reshape.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
