The US government is building an AI sandbox to tackle cybercrime
Top US security agencies are developing a virtual environment that uses machine learning to gain insight into cyberthreats and share findings with both public and private organizations.
A joint effort between the Science and Technology Directorate (S&T) – housed within the Department of Homeland Security (DHS) – and the Cybersecurity and Infrastructure Security Agency (CISA), the AI sandbox will let researchers collaborate on and test analytical approaches and techniques for combating cyberthreats.
CISA’s Advanced Analytics Platform for Machine Learning (CAP-M) will be used for this purpose in both on-premises and multi-cloud scenarios.
Learning threats
“While initially supporting cyber missions, this environment will be flexible and extensible to support data sets, tools, and collaboration for other infrastructure security missions”, the DHS said.
Various experiments will be conducted in CAP-M, with the resulting data analyzed and correlated to help organizations of all kinds protect themselves against an ever-evolving landscape of cybersecurity threats.
The experimental data will be made available to other government departments, as well as academic institutions and firms in the private sector. S&T has given assurances that privacy concerns will be taken into account.
Part of the experiments will test how well AI and machine learning techniques can analyze cyberthreats, and how effective they are as tools for fighting them. CAP-M will also create a machine learning loop to automate workflows, such as exporting and tuning data.
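DHS has not published details of how that loop will work, but the general pattern of an automated export-tune-retrain cycle is well established in ML operations. The sketch below is purely illustrative: every function name, dataset, and parameter is a hypothetical stand-in, not anything documented for CAP-M.

```python
# Hypothetical sketch of an automated "export -> tune -> retrain" ML loop.
# Nothing here reflects CAP-M's actual design; all names are invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score

def export_labeled_events():
    """Stand-in for exporting labeled network events from a telemetry store."""
    from sklearn.datasets import make_classification  # synthetic placeholder data
    return make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05])

def retrain_cycle():
    X, y = export_labeled_events()                       # 1. export data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2)
    search = GridSearchCV(                               # 2. tune hyperparameters
        RandomForestClassifier(class_weight="balanced"),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 20]},
        scoring="f1",
    )
    search.fit(X_tr, y_tr)                               # 3. retrain on fresh data
    score = f1_score(y_te, search.predict(X_te))         # 4. evaluate the new model
    return search.best_estimator_, score
```

In practice, a production system would also gate deployment on that evaluation score, which is the "tuning" half of the loop the article alludes to.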
Speaking to The Register, Monti Knode, a director at pentesting platform Horizon3.ai, said that such a plan is long overdue, and welcomed the prospect of putting analytical techniques to the test.
Knode commented on past failures that have “contributed overwhelmingly to alert fatigue over the years, leading analysts and practitioners on wild goose chases and rabbit holes, as well as real alerts that matter but are buried.”
He added that “labs rarely replicate the complexity and noise of a live production environment, but [CAP-M] could be a positive step.”
Speculating on how it might work, Knode suggested that simulated attacks could be run automatically, training the AI to learn how those attacks work and how to spot them.
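That is speculation, and nothing of the sort has been confirmed for CAP-M. As a rough illustration of the idea, though, an automated pipeline might replay scripted attack techniques in an isolated lab and label the resulting telemetry for a detector to train on. Everything in the sketch below – the technique names and simulate_attack() helper included – is invented for illustration:

```python
# Purely hypothetical sketch: automated attack simulation feeding a labeled
# training set. simulate_attack() is an invented placeholder, not a real API.
TECHNIQUES = ["credential_stuffing", "lateral_movement", "data_exfiltration"]

def simulate_attack(technique: str) -> list[dict]:
    """Placeholder for replaying a scripted attack inside an isolated lab network."""
    return [{"technique": technique, "event": f"synthetic-{technique}-event"}]

def build_training_set() -> list[tuple[dict, str]]:
    """Run each simulated attack and label the telemetry it produces."""
    labeled = []
    for technique in TECHNIQUES:
        for event in simulate_attack(technique):
            labeled.append((event, technique))   # (telemetry, ground-truth label)
    return labeled

if __name__ == "__main__":
    print(f"{len(build_training_set())} labeled examples ready for training")
```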
Sami Elhini, biometrics specialist at Cerberus Sentinel, was also optimistic that learning from and analyzing threats could lead to a deeper understanding of them, but cautioned that models may become too generalized, missing threats aimed at smaller targets by filtering them out as insignificant.
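That concern maps onto a familiar failure mode in anomaly detection: a model tuned on aggregate traffic can score a rare, small-target attack as ordinary. The toy example below uses synthetic data and scikit-learn's IsolationForest to make the point; it has no connection to CAP-M's actual models:

```python
# Toy illustration of over-generalization: a rare attack cluster can fall
# under a global anomaly threshold and be "filtered out as insignificant."
# All data is synthetic; nothing here reflects CAP-M.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(10_000, 4))    # bulk of observed traffic
rare_attack = rng.normal(1.5, 1.0, size=(5, 4))    # tiny cluster from a small target

# A very low contamination setting encodes "almost everything is normal".
model = IsolationForest(contamination=0.001, random_state=0)
model.fit(np.vstack([benign, rare_attack]))

# Because the rare cluster sits close to the benign mass, the model may
# label it 1 ("normal") rather than -1 ("anomaly"), and the threat is missed.
print(model.predict(rare_attack))
```

Raising the contamination threshold or training per-segment models are the usual mitigations, and presumably the kind of tuning an automated feedback loop would need to handle.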
He also raised security concerns, claiming that “When… exposing [AI/ML] models to a larger audience, the probability of an exploit increases”. He said that other nations could target CAP-M to learn about or even interfere with its workings.
On the whole, however, reaction to the federal project appears positive. Craig Lurey, co-founder and CTO of Keeper Security, also told The Register that “Research and development projects within the federal government can help support and catalyze disparate R&D efforts within the private sector. … Cybersecurity is national security and must be prioritized as such.”
Tom Kellermann, a VP at Contrast Security, echoed these sentiments, stating that CAP-M is a “critical project to improve information sharing on TTPs [tactics, techniques, and procedures] and enhance situational awareness across American cyberspace.”