A team of researchers from MIT, UC Berkeley and FAR AI has created a computer program that exploits vulnerabilities in KataGo, allowing it to beat the far stronger AI-based system. They have published a paper describing their efforts on the arXiv preprint server.
In 2016, AlphaGo, a computer program created by DeepMind, became the first to beat human champion Go players. The program used a deep-learning neural network to learn how the game works and then to play at ever higher levels simply by playing against itself.
More recently, a similar open-source program called KataGo was released to the public; it, too, can beat the best human players. But, as other studies have noted, deep-learning systems share one major weakness: they are only as good as the data they are trained on. Gaps in training data become gaps in learned skill. In this new effort, the researchers looked for, and found, such a gap in KataGo.
Because KataGo is trained on “normal” ways of playing Go, it can run into trouble against opponents who play in seemingly odd ways. The researchers noted that one adversarial (odd) way to play is to work at laying claim to just one small corner of the board. This approach tricks KataGo into thinking it has won prematurely, because it controls all the rest of the board, and a rule of Go holds that if one player passes and the other then passes as well, the game ends and both sides count their points. The adversary gets all the points for its small corner territory, while KataGo gets none for unsecured territory that still hosts adversarial stones, so the adversary tallies more points and wins.
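The arithmetic behind the trick is easiest to see under area scoring of the kind used in these computer matches (the Tromp-Taylor rules): each side scores its stones plus every empty region that touches only its own stones, and an empty region that touches both colors counts for neither. Below is a minimal, illustrative sketch of such a scorer in Python; the function name and the toy position are invented for illustration and are not from the paper.

```python
from collections import deque

EMPTY, BLACK, WHITE = ".", "B", "W"

def tromp_taylor_score(board):
    """Return (black_points, white_points) for a position given as a list
    of equal-length strings made of '.', 'B' and 'W'."""
    rows, cols = len(board), len(board[0])
    score = {BLACK: 0, WHITE: 0}
    seen = set()
    for r in range(rows):
        for c in range(cols):
            cell = board[r][c]
            if cell != EMPTY:
                score[cell] += 1  # every stone on the board counts as a point
            elif (r, c) not in seen:
                # Flood-fill this empty region, noting which colors border it.
                region_size, borders = 0, set()
                queue = deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region_size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            n = board[ny][nx]
                            if n == EMPTY and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                queue.append((ny, nx))
                            elif n != EMPTY:
                                borders.add(n)
                # An empty region is territory only if it touches one color.
                if len(borders) == 1:
                    score[borders.pop()] += region_size
    return score[BLACK], score[WHITE]

# Toy 5x5 position: White (the adversary) holds a small, solid corner at the
# bottom right, while its lone stone at the center sits inside Black's large
# open area. That area now touches both colors, so it scores for neither side.
board = [
    ".....",
    ".B...",
    "..W.B",
    "...WW",
    "...W.",
]
print(tromp_taylor_score(board))  # (2, 5): White wins despite Black's reach
```

In the toy position, a single adversarial stone inside the victim's large open area is enough to make that whole area score as neutral, which is why passing early costs KataGo the game.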
The researchers note that the ploy works only against KataGo; trying it against human players would bring swift defeat, because a person would intuitively see what is happening. They also note that they wrote their adversarial program to show that AI systems still suffer from significant vulnerabilities, which means great care must be taken when they are used in critical applications, such as self-driving cars or scanning images for cancer.
Tony Tong Wang et al., Adversarial Policies Beat Professional-Level Go AIs, arXiv (2022). DOI: 10.48550/arXiv.2211.00241
© 2022 Science X Network
Citation: Adversarial technique targeting vulnerability in KataGo allows sub-par program to win (2022, November 8), retrieved 8 November 2022 from https://techxplore.com/news/2022-11-adversarial-technique-vulnerability-katago-sub-par.html