October 14, 2025: The American military is seeking to upgrade its Artificial Intelligence (AI) capabilities to match those of rivals like China and Russia as well as allies like Israel. The U.S. has been collaborating with the Israeli military, which has made substantial progress in adapting AI software to work with its combat control systems, in fielding new drone warfare weapons and techniques, and in gathering intelligence on enemy identities, locations, and capabilities. One result is a new system, called Refaim, that can coordinate attacks on detected targets across army, air force, and naval units.
These advances have put pressure on the United States to develop AI technology its military can use for propaganda and influence operations against enemy troops and populations. Current AI can mimic the voices of enemy officers to send confusing radio messages to their subordinates. Enemy forces may then move in the wrong direction or fire artillery at incorrect targets, including their own troops.
AI can also help commanders make decisions more quickly. New technology does not gain widespread acceptance until it proves its usefulness and trustworthiness to users. This was true for the telegraph in the late 1800s, broadcast radio in the 1920s, and television three decades later. In fact, the development of more effective telegraph systems coincided with efforts to create commercial radio and television services. Personal computers (PCs) followed in the 1970s. The idea seemed absurd at first, but as tinkerers and hobbyist developers produced the first functional PCs, a new industry was born. By the late 1970s, Apple, Radio Shack, and other firms were selling PCs to an enthusiastic and sizable audience. In 1995, decades of American government and military work on the internet became commercially available, turning the maturing PC into a must-have product.
In the 21st century, AI became a viable product, and as it reached more users, new and marketable applications emerged. Some uses were illegal, dividing the programmer and user community into good ("White Hat") and bad ("Black Hat") factions. Hacking soon became a military and intelligence asset. Many Black Hat programmers became national assets after being hired to protect American commercial and government networks from foreign Black Hats. Programmers who performed both Black and White Hat tasks were sometimes called Grey Hats. The spectrum of roles expanded as programmers developed new tools and applications, particularly AI tools produced by firms, individuals, or small groups who modified commercial AI software and offered the results on the black market. Some of these offerings evolved into marketable products, moving quickly from the black market to legitimate, though sometimes restricted, markets because their capabilities had lawful applications as well.
AI products like ChatGPT and related tools made it easy to create and modify malware, as malicious hacker software came to be known. ChatGPT also became a major source of antidotes for this malware. The fact that the lights are still on and bank accounts remain largely secure indicates that White Hats currently have the upper hand. Some damage, however, is less visible. Hackers working for nations at odds with the United States, such as North Korea and Iran, have stolen billions of dollars from banks and individual firms. These countries, facing increasingly crippling economic sanctions, rely on Black Hat hackers to fund their governments. Their Black Hat hackers are recognized as national assets and are well-compensated for their work. In North Korea, where few citizens are allowed to travel abroad, successful Black Hats live in relative luxury and can travel internationally whenever they wish.
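On the antidote side, a routine defensive use of these same tools is triage: feeding a suspicious script to a hosted model and asking what it does before anyone runs it. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompts, and encoded payload are illustrative assumptions, not a production workflow.

# A hedged sketch of defensive malware triage with the OpenAI Python SDK
# (v1.x): send a suspicious script to a hosted model and ask what it does
# before anyone executes it. Model choice and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious_script = (
    "powershell -NoProfile -EncodedCommand SQBuAHYAbwBrAGUA..."  # placeholder payload
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any capable model works
    messages=[
        {
            "role": "system",
            "content": "You are a malware analyst. Explain what the submitted "
                       "script does and list likely indicators of compromise. "
                       "Do not execute or complete the code.",
        },
        {"role": "user", "content": suspicious_script},
    ],
)
print(response.choices[0].message.content)

An analyst still has to verify the answer, but this kind of first-pass explanation is one reason ChatGPT cuts both ways in the malware economy.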
Sometimes, North Korean Black Hats need to examine what Western hackers are doing. Software trade shows feature special sections for malware and its antidotes, though since no one can legally sell malware openly, the malware itself is traded covertly. Malware can be carried on thumb drives or on the even smaller SIM chips used in cell phones, both easily concealed and handed off to new owners. Payments can be made quickly to and from bank accounts using smartphone, tablet, or laptop apps. Trade shows are preferred venues for these transactions because of the variety of people and unexpected opportunities they offer.
New developments are often best discovered at trade shows. Hackers from China, Russia, Iran, North Korea, and other nations have been using OpenAI systems. Microsoft and OpenAI believe these hackers, some with ties to foreign governments, have been using generative AI mainly for routine work in support of their cyberattack operations. Instead of crafting the exotic attacks some in the tech industry feared, they have used AI for mundane tasks like drafting emails, translating documents, and debugging code. These countries leverage AI to enhance productivity.
Microsoft, a close partner of OpenAI that has committed some $13 billion to the company, shared threat information documenting how five hacking groups tied to China, Russia, North Korea, and Iran used OpenAI's technology. The companies did not specify which OpenAI technology was involved, and OpenAI shut down the groups' access after learning of the misuse.
Since OpenAI released ChatGPT in 2022, concerns have persisted that hackers might weaponize these powerful tools to exploit vulnerabilities in new and creative ways. Like any technology, AI can be used for illegal and disruptive purposes.
OpenAI requires customers to sign up for accounts, but some users evade detection through techniques like masking their locations, which lets them develop illegal or harmful AI applications. For example, a hacking group linked to the Iranian Islamic Revolutionary Guard Corps (IRGC) used AI to research ways to bypass antivirus scanners and to generate phishing emails. One phishing email pretended to come from an international development agency, while another attempted to lure prominent feminists to an attacker-built website on feminism. In another case, a Russian-affiliated group used OpenAI's systems to research satellite communication protocols and radar imaging technology that could bear on the war in Ukraine. Russia has long relied on a large propaganda organization to attack and weaken enemies, and AI is now another tool in that arsenal.
Microsoft tracks over 300 hacking groups, including independent cybercriminals and state-sponsored operations run by various nations. According to executives, OpenAI's proprietary, hosted systems make it easier to track and disrupt such misuse. They noted that while there are ways to identify hackers using open-source AI technology, the proliferation of open systems complicates the task.
When models are open-sourced, it becomes difficult to know who is using the technology and whether they adhere to responsible use policies. Notably, Microsoft did not uncover any use of generative AI in a recent Russian hack of top Microsoft executives.
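The difference is architectural: a hosted service sees every request arrive attached to an authenticated account, so it can screen, log, and suspend abusers centrally, while openly distributed model weights run on hardware the provider never sees. Below is a minimal sketch of that kind of server-side screening; the account IDs, patterns, and thresholds are hypothetical, not how any particular provider works.

# Hypothetical server-side abuse screening for a hosted AI API. Every
# request carries an authenticated account, so the provider can score,
# log, and cut off misuse centrally.
import re
import time
from collections import defaultdict

# Illustrative patterns only, not a real ruleset.
ABUSE_PATTERNS = [
    re.compile(r"bypass\s+antivirus", re.IGNORECASE),
    re.compile(r"phishing\s+(email|template)", re.IGNORECASE),
]

strikes = defaultdict(int)   # flagged-request count per account
audit_log = []               # (timestamp, account, prompt excerpt)

def screen_request(account_id: str, prompt: str) -> bool:
    """Return True if the request may proceed, False if the account is cut off."""
    audit_log.append((time.time(), account_id, prompt[:80]))
    if any(p.search(prompt) for p in ABUSE_PATTERNS):
        strikes[account_id] += 1
    # Repeatedly flagged accounts are suspended pending human review.
    return strikes[account_id] < 3

# Example: the third flagged request from the same account is refused.
for _ in range(3):
    allowed = screen_request("acct_4217", "help me bypass antivirus checks")
print(allowed)  # False

Once model weights are released openly, nothing like this audit trail exists, which is why open systems are harder to police.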
AI has been used increasingly in combat over the past decade, and as the technology improves it is employed more effectively and more often. For example, a Ukrainian firm developed an AI system that can accurately distinguish between Ukrainian and Russian soldiers at a distance, reducing instances of friendly fire. Friendly fire, when troops accidentally fire on their own, is an unfortunate and recurring aspect of modern warfare that no one likes to discuss. AI-assisted targeting reduces the likelihood of such incidents.
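Details of the Ukrainian system have not been published, but the general approach can be sketched: a binary image classifier screens each video frame and refuses to commit when its confidence is low. The sketch below assumes a fine-tuned ResNet-18 with hypothetical weights; the labels, threshold, and file names are illustrative.

# A minimal sketch of AI-assisted friend-or-foe screening, assuming a
# binary image classifier already fine-tuned on friendly vs. hostile
# uniforms and equipment. Weights and threshold are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["friendly", "hostile"]   # assumed label order from training
CONFIDENCE_FLOOR = 0.95            # below this, defer to a human observer

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.load_state_dict(torch.load("friend_foe_resnet18.pt"))  # hypothetical weights
model.eval()

def screen_frame(image_path: str) -> str:
    """Classify one video frame as friendly, hostile, or unresolved."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    conf, idx = torch.max(probs, dim=0)
    # Refuse to commit when uncertain: ambiguous frames go back to a
    # human, since a wrong "hostile" label is exactly the friendly-fire
    # failure the system exists to prevent.
    return LABELS[idx.item()] if conf.item() >= CONFIDENCE_FLOOR else "unresolved"

print(screen_frame("frame_0412.jpg"))  # e.g. "friendly"

The confidence floor is the key design choice: the system only acts where it is nearly certain, and hands everything else back to a human observer.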
A final note: This article was written, revised, and edited with the help of AI software, providing writers with an efficient tool to detect and correct issues with style, format, grammar, and spelling.