Obtain Capabilities: Artificial Intelligence

Adversaries may obtain access to generative artificial intelligence tools, such as large language models (LLMs), to aid various techniques during targeting. These tools may be used to inform, bolster, and enable a variety of malicious tasks, including conducting Reconnaissance, creating basic scripts, assisting social engineering, and even developing payloads.[1]

For example, by utilizing a publicly available LLM, an adversary is essentially outsourcing or automating certain tasks to the tool. Using AI, the adversary may draft and generate content in a variety of written languages to be used in Phishing/Phishing for Information campaigns. The same publicly available tool may further enable vulnerability or other offensive research supporting Develop Capabilities. AI tools may also automate technical tasks by generating, refining, or otherwise enhancing (e.g., Obfuscated Files or Information) malicious scripts and payloads.[2] Finally, AI-generated text, images, audio, and video may be used for fraud, Impersonation, and other malicious activities.[3][4][5]

ID: T1588.007
Sub-technique of: T1588
Platforms: PRE
Contributors: Menachem Goldstein
Version: 1.1
Created: 11 March 2024
Last Modified: 15 April 2025

Mitigations

ID: M1056
Mitigation: Pre-compromise
Description: This technique cannot be easily mitigated with preventive controls since it is based on behaviors performed outside of the scope of enterprise defenses and controls.

Detection

Much of this activity takes place outside the visibility of the target organization, making detection of this behavior difficult. Detection efforts may instead focus on downstream behaviors that suggest the use of generative artificial intelligence (e.g., Phishing, Phishing for Information).

References