Warfare Technology News Stories
The DoD has ambitious plans for full-spectrum dominance, seeking control over all potential battlespaces: land, ocean, air, outer space, and cyberspace. Artificial intelligence and other emerging technologies are being used to further these agendas, reshaping the military and geopolitical landscape in unprecedented ways.
In our news archive below, we examine how emerging warfare technology undermines national security, fuels terrorism, and causes devastating civilian casualties.
Related: Weapons of Mass Destruction, Biotech Dangers, Non-Lethal Weapons
Department of Defense spending is increasingly going to large tech companies including Microsoft, Google parent company Alphabet, Oracle, and IBM. OpenAI recently added former U.S. Army general and National Security Agency Director Paul M. Nakasone to its Board of Directors. The U.S. military discreetly, yet frequently, collaborated with prominent tech companies through thousands of subcontractors through much of the 2010s, obscuring the extent of the two sectors’ partnership from tech employees and the public alike. The long-term, deep-rooted relationship between the institutions, spurred by massive Cold War defense and research spending and bound ever tighter by the sectors’ revolving door, ensures that advances in the commercial tech sector benefit the defense industry’s bottom line. Military spending on technology has yielded myriad landmark inventions. The internet, for example, began as an Advanced Research Projects Agency (ARPA, now known as the Defense Advanced Research Projects Agency, or DARPA) research project called ARPANET, the first network of computers. Decades later, graduate students Sergey Brin and Larry Page received funding from DARPA, the National Science Foundation, and the U.S. intelligence community's Massive Digital Data Systems development program to create what would become Google. Other prominent DARPA-funded inventions include Transit satellites, a precursor to GPS, and the iPhone's Siri app, which, instead of being picked up by the military, was ultimately adapted to consumer ends by Apple.
Note: Watch our latest video on the militarization of Big Tech. For more, read our concise summaries of news articles on AI, warfare technology, and Big Tech.
The US military may soon have an army of faceless suicide bombers at its disposal, as an American defense contractor has revealed its newest war-fighting drone. AeroVironment unveiled the Red Dragon in a video on its YouTube page, the first in a new line of 'one-way attack drones.' This new suicide drone can reach speeds of up to 100 mph and can travel nearly 250 miles. The new drone takes just 10 minutes to set up and launch and weighs only 45 pounds. Once the small tripod the Red Dragon takes off from is set up, AeroVironment said soldiers would be able to launch up to five per minute. Since the suicide robot can choose its own target in the air, the US military may soon be taking life-and-death decisions out of the hands of humans. Once airborne, its AVACORE software architecture functions as the drone's brain, managing all its systems and enabling quick customization. Red Dragon's SPOTR-Edge perception system acts like smart eyes, using AI to find and identify targets independently. Simply put, the US military will soon have swarms of bombs with brains that don't land until they've chosen a target to crash into. Despite Red Dragon's ability to choose a target with 'limited operator involvement,' the Department of Defense (DoD) has said it's against the military's policy to allow such a thing to happen. The DoD updated its own directives to mandate that 'autonomous and semi-autonomous weapon systems' always have the built-in ability to allow humans to control the device.
Note: Drones create more terrorists than they kill. For more, read our concise summaries of news articles on warfare technology and Big Tech.
In 2003 [Alexander Karp] – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA! The lesson that Karp and his co-author draw [in their book The Technological Republic: Hard Power, Soft Belief and the Future of the West] is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and intelligence agency corruption.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” Khlaaf adds that even if humans are “double-checking” the work of AI, there's little reason to think they're capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
Alexander Balan was on a California beach when the idea for a new kind of drone came to him. This eureka moment led Balan to found Xdown, the company that’s building the P.S. Killer (PSK)—an autonomous kamikaze drone that works like a hand grenade and can be thrown like a football. The PSK is a “throw-and-forget” drone, Balan says, referencing the “fire-and-forget” missile that, once locked on to a target, can seek it on its own. Instead of depending on remote controls, the PSK will be operated by AI. Soldiers should be able to grab it, switch it on, and throw it—just like a football. The PSK can carry one or two 40 mm grenades commonly used in grenade launchers today. The grenades could be high-explosive dual purpose, designed to penetrate armor while also creating an explosive fragmentation effect against personnel. These grenades can also “airburst”—programmed to explode in the air above a target for maximum effect. Infantry, special operations, and counterterrorism units can easily store PSK drones in a field backpack and tote them around, taking one out to throw at any given time. They can also be packed by the dozen in cargo airplanes, which can fly over an area and drop swarms of them. Balan says that one Defense Department official told him “This is the most American munition I have ever seen.” The nonlethal version of the PSK [replaces] its warhead with a supply container so that it’s able to “deliver food, medical kits, or ammunition to frontline troops” (though given the 1.7-pound payload capacity, such packages would obviously be small).
Note: The US military is using Xbox controllers to operate weapons systems. The latest US Air Force recruitment tool is a video game that allows players to receive in-game medals and achievements for drone bombing Iraqis and Afghans. For more, read our concise summaries of news articles on warfare technologies and watch our latest video on the militarization of Big Tech.
Last April, in a move generating scant media attention, the Air Force announced that it had chosen two little-known drone manufacturers—Anduril Industries of Costa Mesa, California, and General Atomics of San Diego—to build prototype versions of its proposed Collaborative Combat Aircraft (CCA), a future unmanned plane intended to accompany piloted aircraft on high-risk combat missions. The Air Force expects to acquire at least 1,000 CCAs over the coming decade at around $30 million each, making this one of the Pentagon’s costliest new projects. In winning the CCA contract, Anduril and General Atomics beat out three of the country’s largest and most powerful defense contractors ... posing a severe threat to the continued dominance of the existing military-industrial complex, or MIC. The very notion of a “military-industrial complex” linking giant defense contractors to powerful figures in Congress and the military was introduced on January 17, 1961, by President Dwight D. Eisenhower in his farewell address. In 2024, just five companies—Lockheed Martin (with $64.7 billion in defense revenues), RTX (formerly Raytheon, with $40.6 billion), Northrop Grumman ($35.2 billion), General Dynamics ($33.7 billion), and Boeing ($32.7 billion)—claimed the vast bulk of Pentagon contracts. Now ... a new force—Silicon Valley startup culture—has entered the fray, and the military-industrial complex equation is suddenly changing dramatically.
Note: For more, read our concise summaries of news articles on warfare technologies and watch our latest video on the militarization of Big Tech.
In the Air Force, drone pilots did not pick the targets. That was the job of someone pilots called “the customer.” The customer might be a conventional ground force commander, the C.I.A. or a classified Special Operations strike cell. [Drone operator] Captain Larson described a mission in which the customer told him to track and kill a suspected Al Qaeda member. Then, the customer told him to use the Reaper’s high-definition camera to follow the man’s body to the cemetery and kill everyone who attended the funeral. In December 2016, the Obama administration loosened the rules. Strikes once carried out only after rigorous intelligence-gathering and approval processes were often ordered up on the fly, hitting schools, markets and large groups of women and children. Before the rules changed, [former Air Force captain James] Klein said, his squadron launched about 16 airstrikes in two years. Afterward, it conducted them almost daily. Once, Mr. Klein said, the customer pressed him to fire on two men walking by a river in Syria, saying they were carrying weapons over their shoulders. The weapons turned out to be fishing poles. Over time, Mr. Klein grew angry and depressed. Eventually, he refused to fire any more missiles. In 2020, he retired, one of many disillusioned drone operators who quietly dropped out. “We were so isolated," he said. “The biggest tell is that very few people stayed in the field. They just couldn’t take it.” Bennett Miller was an intelligence analyst, trained to study the Reaper’s video feed. In late 2019 ... his team tracked a man in Afghanistan who the customer said was a high-level Taliban financier. For a week, the crew watched the man feed his animals, eat with family in his courtyard. Then the customer ordered the crew to kill him. A week later, the Taliban financier’s name appeared again on the target list. “We got the wrong guy. I had just killed someone’s dad,” Mr. Miller said. “I had watched his kids pick up the body parts.” In February 2020, he ... was hospitalized, diagnosed with PTSD and medically retired. Veterans with combat-related injuries, even injuries suffered in training, get special compensation worth about $1,000 per month. Mr. Miller does not qualify, because the Department of Veterans Affairs does not consider drone missions combat. “It’s like they are saying all the people we killed somehow don’t really count,” he said. “And neither do we.”
Note: Captain Larson took his own life in 2020. Furthermore, drones create more terrorists than they kill. Read about former drone operator Brandon Bryant's emotional experience of killing a child in Afghanistan that his superiors told him was a dog. For more along these lines, explore concise summaries of revealing news articles on war.
The Defense Advanced Research Projects Agency, the Pentagon's top research arm, wants to find out if red blood cells could be modified in novel ways to protect troops. The DARPA program, called the Red Blood Cell Factory, is looking for researchers to study the insertion of "biologically active components" or "cargoes" in red blood cells. The hope is that modified cells would enhance certain biological systems, "thus allowing recipients, such as warfighters, to operate more effectively in dangerous or extreme environments." Red blood cells could act like a truck, carrying "cargo" or special protections to all parts of the body, since they already circulate oxygen everywhere, [said] Christopher Bettinger, a professor of biomedical engineering overseeing the program. "What if we could add in additional cargo ... inside of that disc," Bettinger said, referring to the shape of red blood cells, "that could then confer these interesting benefits?" The research could impact the way troops battle diseases that reproduce in red blood cells, such as malaria, Bettinger hypothesized. "Imagine an alternative world where we have a warfighter that has a red blood cell that's accessorized with a compound that can sort of defeat malaria," Bettinger said. In 2019, the Army released a report called "Cyborg Soldier 2050," which laid out a vision of the future where troops would benefit from neural and optical enhancements, though the report acknowledged ethical and legal concerns.
Note: Read about the Pentagon's plans to weaponize our brains, which describe the human body as war's next domain. Learn more about biotech dangers.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency. And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no. Human control sits at the heart of governments’ pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder AI’s capacity to faithfully execute a human’s orders could cause them to err when rejecting an order. Militaries will therefore need to either demonstrate that it’s possible to build ethical, responsible autonomous weapons that don’t say no, or show that they can engineer a safe and reliable right-to-refuse that’s compatible with the principle of always keeping a human “in the loop.” If they can’t do one or the other ... their promises of ethical and yet controllable killer robots should be treated with caution. The killer robots that countries are likely to use will only ever be as ethical as their imperfect human commanders. They would only promise a cleaner mode of warfare if those using them seek to hold themselves to a higher standard.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
The Pentagon is turning to a new class of weapons to fight the numerically superior [Chinese] People’s Liberation Army: drones, lots and lots of drones. In August 2023, the Defense Department unveiled Replicator, its initiative to field thousands of “all-domain, attritable autonomous (ADA2) systems”: Pentagon-speak for low-cost (and potentially AI-driven) machines — in the form of self-piloting ships, large robot aircraft, and swarms of smaller kamikaze drones — that they can use and lose en masse to overwhelm Chinese forces. For the last 25 years, uncrewed Predators and Reapers, piloted by military personnel on the ground, have been killing civilians across the planet. Experts worry that mass production of new low-cost, deadly drones will lead to even more civilian casualties. Advances in AI have increasingly raised the possibility of robot planes, in various nations’ arsenals, selecting their own targets. During the first 20 years of the war on terror, the U.S. conducted more than 91,000 airstrikes ... and killed up to 48,308 civilians, according to a 2021 analysis. “The Pentagon has yet to come up with a reliable way to account for past civilian harm caused by U.S. military operations,” [Columbia Law’s Priyanka Motaparthy] said. “So the question becomes, ‘With the potential rapid increase in the use of drones, what safeguards potentially fall by the wayside? How can they possibly hope to reckon with future civilian harm when the scale becomes so much larger?’”
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
At the Technology Readiness Experimentation (T-REX) event in August, the US Defense Department tested an artificial intelligence-enabled autonomous robotic gun system developed by fledgling defense contractor Allen Control Systems dubbed the “Bullfrog.” Consisting of a 7.62-mm M240 machine gun mounted on a specially designed rotating turret outfitted with an electro-optical sensor, proprietary AI, and computer vision software, the Bullfrog was designed to deliver small arms fire on drone targets with far more precision than the average US service member can achieve with a standard-issue weapon. Footage of the Bullfrog in action published by ACS shows the truck-mounted system locking onto small drones and knocking them out of the sky with just a few shots. Should the Pentagon adopt the system, it would represent the first publicly known lethal autonomous weapon in the US military’s arsenal. In accordance with the Pentagon’s current policy governing lethal autonomous weapons, the Bullfrog is designed to keep a human “in the loop” in order to avoid a potential “unauthorized engagement." In other words, the gun points at and follows targets, but does not fire until commanded to by a human operator. However, ACS officials claim that the system can operate totally autonomously should the US military require it to in the future, with sentry guns taking the entire kill chain out of the hands of service members.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
On the sidelines of the International Institute for Strategic Studies’ annual Shangri-La Dialogue in June, US Indo-Pacific Command chief Navy Admiral Samuel Paparo colorfully described the US military’s contingency plan for a Chinese invasion of Taiwan as flooding the narrow Taiwan Strait between the two countries with swarms of thousands upon thousands of drones, by land, sea, and air, to delay a Chinese attack enough for the US and its allies to muster additional military assets. “I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities,” Paparo said, “so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything.” China has a lot of drones and can make a lot more drones quickly, creating a likely advantage during a protracted conflict. This stands in contrast to American and Taiwanese forces, who do not have large inventories of drones. The Pentagon’s “hellscape” plan proposes that the US military make up for this growing gap by producing and deploying what amounts to a massive screen of autonomous drone swarms designed to confound enemy aircraft, provide guidance and targeting to allied missiles, knock out surface warships and landing craft, and generally create enough chaos to blunt (if not fully halt) a Chinese push across the Taiwan Strait. Planning a “hellscape" of hundreds of thousands of drones is one thing, but actually making it a reality is another.
Note: Learn more about warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Razish [is] a fake village built by the US army to train its soldiers for urban warfare. It is one of a dozen pretend settlements scattered across “the Box” (as in sandbox) – a vast landscape of unforgiving desert at the Fort Irwin National Training Center (NTC), the largest such training facility in the world. Covering more than 1,200 square miles, it is a place where soldiers come to practise liberating the citizens of the imaginary oil-rich nation Atropia from occupation by the evil authoritarian state of Donovia. Fake landmines dot the valleys, fake police stations are staffed by fake police, and fake villages populated by citizens of fake nation states are invaded daily by the US military – wielding very real artillery. It operates a fake cable news channel, on which officers are subjected to aggressive TV interviews, trained to win the media war as well as the physical one. Recently, it even introduced internal social media networks, called Tweeter and Fakebook, where mock civilians spread fake news about the battles – social media being the latest weapon in the arsenal of modern war. Razish may still have a Middle Eastern look, but the actors hawking chunks of plastic meat and veg in the street market speak not English or Arabic, but Russian. This military role-playing industry has ballooned since the early 2000s, now comprising a network of 256 companies across the US, receiving more than $250m a year in government contracts. The actors are often recent refugees, having fled one real-world conflict only to enter another, simulated one.
Note: For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Billionaire Elon Musk’s brain-computer interface (BCI) company Neuralink made headlines earlier this year for inserting its first brain implant into a human being. Such implants ... are described as “fully implantable, cosmetically invisible, and designed to let you control a computer or mobile device anywhere you go." They can help people regain abilities lost due to aging, ailments, accidents or injuries, thus improving quality of life. Yet, great ethical concerns arise with such advancements, and the tech is already being used for questionable purposes. Some Chinese employers have started using “emotional surveillance technology” to monitor workers’ brainwaves. Governments and militaries are already ... describing the human body and brain as war’s next domain. On this new “battlefield,” an era of neuroweapons ... has begun. The Pentagon’s research arm DARPA directly or indirectly funds about half of the invasive neural interface technology companies in the US. DARPA has initiated at least 40 neurotechnology-related programs over the past 24 years. As a 2024 RAND report speculates, if BCI technologies are hacked or compromised, “a malicious adversary could potentially inject fear, confusion, or anger into [a BCI] commander’s brain and cause them to make decisions that result in serious harm.” Academic Nicholas Evans speculates, further, that neuroimplants could “control an individual’s mental functions,” perhaps to manipulate memories, emotions, or even to torture the wearer. In a [military research paper] on neurowarfare: "Microbiologists have recently discovered mind-controlling parasites that can manipulate the behavior of their hosts according to their needs by switching genes on or off. Since human behavior is at least partially influenced by their genetics, nonlethal behavior modifying genetic bioweapons that spread through a highly contagious virus could thus be, in principle, possible."
Note: The CIA once used brain surgery to make six remote-controlled dogs. For more, see important information on microchip implants and CIA mind control programs from reliable major media sources.
The Palestinian population is intimately familiar with how new technological innovations are first weaponized against them, ranging from the electric fences and unmanned drones used to trap people in Gaza to the facial recognition software monitoring Palestinians in the West Bank. Groups like Amnesty International have accused Israel of imposing an “automated apartheid” and repeatedly highlight stories, testimonies, and reports about cyber-intelligence firms, including the infamous NSO Group (the Israeli surveillance company behind the Pegasus software), conducting field tests and experiments on Palestinians. Reports have highlighted: “Testing and deployment of AI surveillance and predictive policing systems in Palestinian territories. In the occupied West Bank, Israel increasingly utilizes facial recognition technology to monitor and regulate the movement of Palestinians. Israeli military leaders described AI as a significant force multiplier, allowing the IDF to use autonomous robotic drone swarms to gather surveillance data, identify targets, and streamline wartime logistics.” The Palestinian towns and villages near Israeli settlements have been described as laboratories where security solutions companies test their technologies on Palestinians before marketing them to places like Colombia. The Israeli government hopes to crystallize its “automated apartheid” through the tokenization and privatization of various industries and the establishment of a technocratic government in Gaza.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
In 2023, this country's drone warfare program entered its third decade with no end in sight. Even as the 22nd anniversary of 9/11 approaches, policymakers have shown no sign of reflecting on the failures of drone warfare or on how to end it. Instead, the focus continues to be on simply shifting drone policy in minor ways within an ongoing violent system. Washington's war on terror has inflicted disproportionate violence on communities across the globe, while using this form of asymmetrical warfare to further expand the space between the value placed on American lives and those of Muslims. Since the war on terror was launched, the London-based watchdog group Airwars has estimated that American air strikes have killed at least 22,679 civilians and possibly up to 48,308 of them. Such killings have been carried out for the most part by desensitized killers, who have been primed towards the dehumanization of the targets of those murderous machines. In the words of critic Saleh Sharief, “The detached nature of drone warfare has anonymized and dehumanized the enemy, greatly diminishing the necessary psychological barriers of killing.” While the use of drones in the war on terror began under President George W. Bush, it escalated dramatically under Obama. Then, in the Trump years, it rose yet again. Though the use of drones in Joe Biden’s first year in office was lower than Trump’s, what has remained consistent is the lack of ... accountability for the slaughter of civilians.
Note: A 2014 analysis found that attempts to kill 41 people with drones resulted in 1,147 deaths. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility. In addition to developing a wide variety of "autonomous," or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called "robot generals." In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to ... provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending "fire" instructions directly to "shooters," largely bypassing human control. The Air Force's ABMS is intended to ... connect all US combat forces [as part of] the Joint All-Domain Command-and-Control System (JADC2, pronounced "Jad-C-two"). "JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon ... to engage the target," the Congressional Research Service reported in 2022.
Note: Read about the emerging threat of killer robots on the battlefield. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.



