Intelligence agencies have been using AI for decades to help target terrorists.
Recent publicity around the artificial intelligence chatbot ChatGPT has led to a great deal of public concern about its growth and potential. Italy recently banned the latest version over privacy concerns, because of its ability to use information without permission.
But intelligence agencies, including the CIA, in charge of foreign intelligence for the US, and its sister organisation the National Security Agency (NSA), have been using earlier forms of AI since the start of the cold war.
Machine translation of foreign language documents laid the foundation for modern-day natural language processing (NLP) techniques. NLP helps machines understand human language, enabling them to carry out simple tasks, such as spell checks.
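As a toy illustration of the kind of task NLP enables, a basic spell checker can suggest corrections by comparing a word against a dictionary using string similarity. This is a minimal sketch using Python's standard library; the tiny word list here is purely illustrative.

```python
from difflib import get_close_matches

# Tiny illustrative dictionary; a real spell checker would use a full word list.
DICTIONARY = ["intelligence", "agency", "analysis", "surveillance", "security"]

def suggest(word, dictionary=DICTIONARY):
    """Return the dictionary words most similar to a possibly misspelled word."""
    return get_close_matches(word.lower(), dictionary, n=3, cutoff=0.7)

print(suggest("inteligence"))  # -> ['intelligence']
```

Modern NLP systems go far beyond this, of course, but the underlying idea is the same: mapping imperfect human input onto structured representations a machine can act on.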
Towards the end of the cold war, AI-driven expert systems were developed to reproduce the decision-making of human experts in image analysis, helping to identify possible terrorist targets by analysing information over time and using it to make predictions.
In the 21st century, organisations working in international security around the globe are using AI to help them find, as former US director of national intelligence Dan Coats put it in 2017, “innovative ways to exploit and establish relevance and ensure the veracity” of the information they deal with.
Coats said budgetary constraints, human limitations and increasing levels of information were making it impossible for intelligence agencies to produce analysis fast enough for policy makers.
The Office of the Director of National Intelligence, which oversees US intelligence operations, issued the AIM Initiative in 2019. This is a strategy designed to augment intelligence using machines, enabling agencies like the CIA to process huge amounts of data more quickly than before and allowing human intelligence officers to deal with other tasks.
Machines work faster than humans
Politicians are under increasing pressure to make quicker informed decisions than their predecessors because information is available faster than ever before. As intelligence scholar Amy Zegart points out, John F. Kennedy had 13 days to decide on a course of action in the Cuban missile crisis in 1962. George W. Bush had 13 hours to formulate a response to the 9/11 terrorist attacks in 2001. The decisions of tomorrow might need to be made in 13 minutes.
AI already helps intelligence agencies process and analyse vast amounts of data from a wide range of sources, and it does so far more quickly and efficiently than humans can. AI can identify patterns in the data as well as detect anomalies that might be hard for human intelligence officers to spot.
Intelligence agencies are also able to use AI to spot potential threats to the infrastructure that is used to communicate across the internet, respond to cyber-attacks, and identify unusual behaviour on networks. It can protect against possible malware and contribute to a more secure digital environment.
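The idea of flagging unusual behaviour on a network can be sketched with a simple statistical check: treat observations that sit far from the average as anomalies. This is a toy illustration, not any agency's actual method, and the login counts below are invented for the example.

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hypothetical hourly login counts on a network; the spike at 480 stands out.
logins = [52, 48, 50, 55, 47, 51, 49, 480, 53, 50]
print(find_anomalies(logins))  # -> [480]
```

Real network-monitoring systems use far richer models, but the principle is the same: a machine can scan millions of such data points continuously, surfacing only the outliers for a human analyst to investigate.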
AI brings security threats
AI creates both opportunities and challenges for intelligence agencies. While it can help protect networks from cyber-attacks, it can also be used by hostile individuals or agencies to hack systems, install malware, steal information or disrupt and deny use of digital systems.
AI cyber-attacks have become a “critical threat”, according to Alberto Domingo, technical director of cyberspace at Nato Allied Command Transformation, who called for international regulation to slow down the number of attacks that are “increasing exponentially”.
AI that analyses surveillance data can also reflect human biases. Research into facial recognition programmes has shown they are often worse at identifying women and people with darker skin tones because they have predominantly been trained using data on white men. This has led to police being banned from using facial recognition in cities including Boston and San Francisco.
Such is the concern about AI-driven surveillance that researchers have designed techniques aimed at fooling AI analysis of sounds, using a combination of predictive learning and data analysis.
Truth or lie?
Online misinformation (incorrect information) and disinformation (deliberately false information) represent another major AI-related concern for intelligence agencies.
AI can generate “deepfake” images, videos and audio recordings, as well as text in the case of ChatGPT. Gordon Crovitz of online misinformation research company NewsGuard has warned that ChatGPT could evolve into “the most powerful tool for spreading misinformation that has ever been on the internet”.
Some intelligence agencies are tasked with stopping the spread of online falsehoods from affecting democratic processes. But it is difficult to identify AI-generated mis- or disinformation before it goes viral. And once fake stories are widely believed, they are very difficult to counter.
Agencies are also at increased risk themselves of mistaking false information for the real thing, as the AI tools used to analyse online data may not be able to tell the difference.
Privacy concerns
The vast amount of data collected from surveillance activities that AI analyses is also creating concerns about privacy and civil liberties.
The World Economic Forum has argued that AI must place privacy before efficiency when used by governments in surveillance programmes, while some scholars and others are calling for regulation to limit AI’s impact on society.
Governments must ensure that agencies that use AI to conduct surveillance are doing so within the law. Such oversight would require clear guidelines to be set, regulations to be enforced, and transgressors to be punished. Signs are that governments have been slow to keep up, even in the United States.
The vulnerabilities of AI mean that, despite the technological advances of the post-cold war world, there is still a need for human agents and intelligence officers.
As Zegart argues, what AI will do is undertake the most time-consuming and menial analysis roles that humans currently do. While AI will allow intelligence agencies to understand what the objects in a photograph are, for example, human intelligence officers will be able to say why those objects are there.
This should lead to greater efficiency within intelligence agencies. But to overcome the concerns of many citizens, legislation may need to catch up with the way the AI world works.
is a Teaching Fellow in the School of in the .
This article is republished from The Conversation under a Creative Commons licence.