Warfare Technology News Articles
The DoD has ambitious plans for full spectrum dominance, seeking control over all potential battlespaces: land, ocean, air, outer space, and cyberspace. Artificial intelligence and other emerging technologies are being used to further these agendas, reshaping the military and geopolitical landscape in unprecedented ways.
In our news archive below, we examine how emerging warfare technology undermines national security, fuels terrorism, and causes devastating civilian casualties.
Related: Weapons of Mass Destruction, Biotech Dangers, Non-Lethal Weapons
A soldier wears a skullcap that stimulates his brain to make him learn skills faster, or reads his thoughts as a way to control a drone. Another is plugged into a Tron-like active cyber defense system, in which she mentally teams up with computer systems to successfully multitask during complex military missions. The Pentagon is already researching these seemingly sci-fi concepts. The basics of brain-machine interfaces are being developed; just watch the videos of patients moving prosthetic limbs with their minds. The Defense Department is examining new scientific tools, like genetic engineering, brain chemistry, and shrinking robotics, for even more dramatic enhancements. But the real trick may not be granting superpowers, but rather making sure those effects are temporary. Last year, three Canadian defense researchers published a paper that explored the intersection of human enhancement and ethics. They found that the permanence of the enhancement could have impacts on troops in the field ... as well as a return to civilian life. They also note that many soldier-resilience human enhancement technologies raise health and safety questions. The Canadian researchers wrote: "Are there unknown side effects or long term effects that could lead to unanticipated health problems during deployment or after discharge? Moreover, is it ethical to force a soldier to use the technology in question, or should he/she be allowed to consent to its use? Can consent be fully free from coercion in the military?"
Note: Read an excellent article detailing the risks of biosensors implanted under the skin which have already been developed. Some smaller than a grain of rice can be injected with a needle. Watch a slick video promoting this brave new world. Learn how this is already planned for use on soldiers. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Future wars just might revolve around insect-size spy robots. A recent digest of present-day microbots by US national security magazine The National Interest breaks down the many machines currently in development by the US military and its associates. They include sea-based microdrones, cockroach-style surveillance bots, and even cyborg insects. Arguably the most refined program to date is the RoboBee, currently being shopped by Harvard’s Wyss Institute. Originally funded by a $9.3 million grant from the National Science Foundation in 2009, the RoboBee is a bug-sized autonomous flying vehicle capable of transitioning from water to air, perching on surfaces, and autonomous collision avoidance in swarms. The RoboBee features two “wafer-thin” wings that flap some 120 times a second to achieve vertical takeoff and mid-air hovering. The US Defense Advanced Research Projects Agency (DARPA) has reportedly taken a keen interest in RoboBee prototypes, sponsoring research into microfabrication technology, presumably for quick field deployments. Other developments, like the aforementioned cyborg insect, remain in early stages. Researchers have successfully demonstrated the capabilities of these remote-control systems using a range of insect hosts, from the unicorn beetle to the humble cockroach. Underwater microrobotics are another area of interest for DARPA.
Note: Explore all news article summaries on emerging warfare technology in our comprehensive news database.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
Chinese researchers are working on a new handheld laser weapon capable of burning skin and clothing from up to half a mile away. Boasts about the ... laser rifle's capabilities come amid a diplomatic row between the US and China over the use of blinding lasers on military aircraft. The new laser rifle will be used by anti-terrorism squads in the Chinese Armed Police, the South China Morning Post reports, with prototypes of the device already being tested ... at the Chinese Academy of Sciences. The ZKZM assault rifle causes "instant carbonisation" of human tissue, according to the researchers behind it, and will "burn through clothes in a split second." The unnamed researcher from the Chinese Academy of Sciences added: "If the fabric is flammable, the whole person will be set on fire." While the gun is not powerful enough to kill someone, the researcher said: "The pain will be beyond endurance." Aerospace and defence giant Lockheed Martin has previously developed laser beam systems that can be attached to planes or ground-based vehicles. In May, the Pentagon made a formal complaint to the Chinese government over Chinese nationals allegedly pointing lasers at US planes in east Africa. While there has been no word on when China's new laser gun might see service, the US Navy recently announced its own $300 million research fund aimed at creating a family of laser weapons for its fleet. In 2016, the US military said it would be deploying laser weapons as early as 2023.
Note: China was reported in 2006 to be using lasers to blind American spy satellites. For more along these lines, see concise summaries of deeply revealing non-lethal weapons news articles from reliable major media sources.
In 2009, not long after his historic election and seven years after the first U.S. drone strike, President Barack Obama accepted the Nobel Peace Prize. Since then, however, deadly U.S. drone strikes have increased sharply, as have doubts about the program's reliability and effectiveness. The latest criticism comes from Drone, a new documentary about the CIA's covert drone war. To help promote the film and inveigh against the agency's drone program ... four former operators - Stephen Lewis, Michael Haas, Cian Westmoreland and Brandon Bryant - appeared at a press conference. Speaking out can lead to veiled threats and prosecution, which is why for years Bryant was the only drone veteran who openly rebuked the drone war. But his persistence and his appearance in the film, the other three say, inspired them to come forward. On multiple occasions, the men say, they complained to their superiors about their concerns, to no avail. Drone strikes kill far more civilians than the government admits. These deaths, they argue, wind up helping militant groups recruit new members and hurt the U.S.'s long-term security. By distancing soldiers from the battlefield, the operators suggest, the people carrying out strikes may become even more desensitized to killing than their counterparts on the front lines. On some occasions, Haas says, operators referred to children as "fun-sized terrorists" or "TITs," terrorists in training.
Note: A human rights attorney has stated the four former Air Force drone operators-turned-whistleblowers mentioned above have had their credit cards and bank accounts frozen. How many more have not spoken out against these abuses for fear of retaliation like this? Read more about the major failings of US drone attacks. For more along these lines, see concise summaries of deeply revealing war news articles from reliable major media sources.
Retired Army Gen. Mike Flynn, a top intelligence official in the post-9/11 wars in Iraq and Afghanistan, says in a forthcoming interview ... that the drone war is creating more terrorists than it is killing. He also asserts that the U.S. invasion of Iraq helped create the Islamic State. Flynn, who in 2014 was forced out as head of the Defense Intelligence Agency, has in recent months become an outspoken critic of the Obama administration's Middle East strategy. The former three-star general ... describes the present approach of drone warfare as a failed strategy. "What we have is this continued investment in conflict," the retired general says. "The more weapons we give, the more bombs we drop, that just fuels the conflict." In 2010, [Flynn] published a controversial report on intelligence operations in Afghanistan, stating in part that the military could not answer fundamental questions about the country and its people despite nearly a decade of engagement there. Earlier this year, Flynn commended the Senate Intelligence Committee report on CIA torture, saying that torture had eroded American values and that in time, the U.S. "will look back on it, and it won't be a pretty picture."
Note: Drone strikes almost always miss their intended targets. Casualties of war whose identities are unknown are frequently mis-reported to be "militants". For more along these lines, see concise summaries of deeply revealing news articles about military corruption.
The Obama administration is again allowing the CIA to use drone strikes to secretly kill people whose identities the spy agency does not know in multiple countries - despite repeated statements to the contrary. Apparently the drone operators didn't even know at the time who they were aiming at - only that they thought the target was possibly a terrorist hideout. It's what's known as a "signature" strike. Signature strikes have led to scores of civilians being killed over the past decade, including two completely innocent hostages ... one of whom was a US citizen ... less than two months ago. It's a way of killing that's been roundly condemned by human rights organizations and that some members of Congress have tried to outlaw. Here's how the New York Times described it: "The joke was that when the CIA sees 'three guys doing jumping jacks,' the agency thinks it is a terrorist training camp, said one senior official. Men loading a truck with fertilizer could be bombmakers - but they might also be farmers." It has become increasingly clear that the "rules" are virtually meaningless. As is typical with the US government's extrajudicial killing policy, there was no public debate about any of the changes to the supposed rules, or even an announcement that they ever changed - only an unofficial leak to a journalist after the latest killing. Beyond the enormous human rights consequences of such a dangerous policy, these types of strikes backfire on the United States, sowing hatred in the populations of bombed countries and breeding sympathy for al-Qaida where there was none before.
Note: For more along these lines, see concise summaries of deeply revealing news articles on terrorism from reliable major media sources. Then explore the excellent, reliable resources provided in our War Information Center.
They're called lethal autonomous weapons, or LAWs, and their military mission would be to seek out, identify and kill a human target independent of human control. Representatives of 60 nations ... met in Geneva during the third week of April in an attempt to define the level of artificial intelligence needed for an international definition of robotic autonomy. The Panel of Experts, under the Convention on Conventional Weapons (CCW), will meet again next year to continue the discussion. None of the industrial nations admits having a LAW, but there's really no way to confirm the nonexistence of a weapon that would be classified as secret. The U.S. Department of Defense has had a directive in place for three years that outlines the chain of command that would approve their deployment on a case-by-case basis. It's called Directive 3000.09. On April 15, the third day of the panel meeting, Secretary of the Navy Ray Mabus ... announced the creation of a new office for unmanned warfare systems. According to Stuart Russell, who addressed the panel, "Devices in the 1-gram range might be able to selectively kill a chosen human target on contact using a shaped explosive charge. I'm not sure what countermeasures one might try against a swarm of 5-gram robots. There will be ... a LAWs arms race."
Note: Current surveillance drones can be hacked, hijacked, and redirected while flying in some cases. Under human guidance, current killer drones almost always miss their intended targets. What will happen when tiny lethal flying robots begin operating without being controlled by human decision makers?
Drone strikes and "targeted killings" of terror targets by the United States can be counterproductive and bolster the support of extremist groups, the CIA has admitted in a secret report released by WikiLeaks. The document, by the intelligence agency's Directorate of Intelligence, said that despite the effectiveness of "high value targeting" (HVT), air strikes and special forces operations had a negative impact by boosting the popular support of terror organisations. The CIA report is dated 2009 and talks of operations conducted in countries such as Iraq, Pakistan, Somalia, Afghanistan and Yemen. Operations against terror targets "may increase support for the insurgents, particularly if these strikes enhance insurgent leaders' lore, if non-combatants are killed in the attacks, if legitimate or semi-legitimate politicians aligned with the insurgents are targeted, or if the government is already seen as overly repressive or violent," the report said. "Senior Taliban leaders' use of sanctuary in Pakistan has also complicated the HVT effort," it reveals. "Moreover, the Taliban has a high overall ability to replace lost leaders ... especially at the middle levels." It speaks of drone strikes also having limited effect in Iraq. According to the Bureau of Investigative Journalism, US drone strikes have killed between 2,400 and 3,888 people in Pakistan in the years 2004 to 2014 and between 371 and 541 people in Yemen in the years 2002 to 2014.
Note: This report proves that the CIA has been aware that drone strikes are ineffective since at least 2009. If drones help terrorists, almost always miss their intended targets, and may be used to target people in the US in the future, what are the real reasons for the US government's drone program?
It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency. And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no. Human control sits at the heart of governments’ pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder AI’s capacity to faithfully execute a human’s orders could cause them to err when rejecting an order. Militaries will therefore need to either demonstrate that it’s possible to build ethical, responsible autonomous weapons that don’t say no, or show that they can engineer a safe and reliable right-to-refuse that’s compatible with the principle of always keeping a human “in the loop.” If they can’t do one or the other ... their promises of ethical and yet controllable killer robots should be treated with caution. The killer robots that countries are likely to use will only ever be as ethical as their imperfect human commanders. They would only promise a cleaner mode of warfare if those using them seek to hold themselves to a higher standard.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
The terrorist attack on the airport in Kabul, Afghanistan’s capital ... killed more than 170 Afghan civilians and 13 U.S. soldiers. Three days later, Biden authorized a drone strike that the U.S. claimed took out a dangerous cell of ISIS fighters. Biden held up this strike, and another one a day earlier, as evidence of his commitment to take the fight to the terrorists in Afghanistan. But the Kabul strike, which targeted a white Toyota Corolla, did not kill any members of ISIS. The victims were 10 civilians, seven of them children. The driver of the car, Zemari Ahmadi, was a respected employee of a U.S. aid organization. Following a New York Times investigation that fully exposed the lie of the U.S. version of events, the Pentagon and the White House admitted that they had killed innocent civilians, calling it “a horrible tragedy of war.” This week, the Pentagon released a summary of its classified review into the attack, which it originally hailed as a “righteous strike” that had thwarted an imminent terror plot. The results were predictable. The report recommended that no personnel be held responsible for the murder of 10 civilians; there was no “criminal negligence,” as the report put it. Daniel Hale, a military veteran who pleaded guilty to disclosing classified documents that exposed lethal weaknesses in the drone program, is serving four years in prison. Hale’s documents exposed how as many as nine out of 10 victims of U.S. drone strikes in Afghanistan were not the intended targets. In Biden’s recent drone strike, 10 of 10 were innocent civilians.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and war from reliable major media sources.
A bomb hit the house. [Rua Moataz] Khadr and her two daughters were able to free themselves from the rubble that had fallen on them, but her 4-year-old son, Ibrahim Ahmed Yahya, was crushed to death. He was among the 9,000 to 11,000 civilians killed during the yearlong battle for Mosul. Khadr, like most bombing victims in Iraq, has no idea which nation was responsible for the airstrike that killed her son. Was it an American aircraft, British, Dutch? “Even if I found out, what would I do?” she told The Intercept. In its final days in Afghanistan, the U.S. conducted a drone strike that killed 10 civilians in Kabul — seven of them children. Their deaths bring up a thorny question surrounding the frequent U.S. killing of civilians in the 9/11 wars: What would justice look like for the families of civilians who have been wrongfully killed? The media attention generated by the Kabul strike has prompted a rare admission of guilt from the Pentagon and may ultimately lead to monetary compensation for the survivors. But byzantine laws in the U.S. make it all but impossible for foreigners to file for compensation if a relative was killed in combat. The only hope for most survivors is a “sympathy” payment from the U.S. military that does not acknowledge responsibility for causing the deaths. But unsurprisingly, those payments are rare: None were issued in 2020. Meanwhile, U.S. allies involved in bombing campaigns usually hide behind the shield of joint operations to avoid taking responsibility for civilian deaths.
Note: For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Local cops have gotten tens of millions of dollars’ worth of discounted military gear under a secretive federal program that is poised to grow under recent executive action. The 1122 program ... presents a danger to people facing off against militarized cops, according to Women for Weapons Trade Transparency. “All of these things combined serve as a threat to free speech, an intimidation tactic to protest,” said Lillian Mauldin, the co-founder of the nonprofit group, which produced the report released this week. The federal government’s 1033 program ... has long sent surplus gear like mine-resistant vehicles and bayonets to local police. Since 1994, however, the even more obscure 1122 program has allowed local cops to purchase everything from uniforms to riot shields at federal government rates. The program turns the feds into purchasing agents for local police. Local cops have used the program to pick up 16 Lenco BearCats, fearsome-looking armored police vehicles. Those vehicles represented 4.8 percent of the total spending identified in the ... report. Surveillance gear and software represented another 6.4 percent, and weapons or riot gear represented 5 percent. One agency bought a $428,000 Star Safire thermal imaging system, the kind used in military helicopters. The Texas Department of Public Safety’s intelligence and counterterrorism unit purchased a $1.5 million surveillance software license. Another agency bought an $89,000 covert camera system.
Note: Read more about the Pentagon's 1033 program. For more along these lines, read our concise summaries of news articles on police corruption and the erosion of civil liberties.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
At the Technology Readiness Experimentation (T-REX) event in August, the US Defense Department tested an artificial intelligence-enabled autonomous robotic gun system developed by fledgling defense contractor Allen Control Systems dubbed the “Bullfrog.” Consisting of a 7.62-mm M240 machine gun mounted on a specially designed rotating turret outfitted with an electro-optical sensor, proprietary AI, and computer vision software, the Bullfrog was designed to deliver small arms fire on drone targets with far more precision than the average US service member can achieve with a standard-issue weapon. Footage of the Bullfrog in action published by ACS shows the truck-mounted system locking onto small drones and knocking them out of the sky with just a few shots. Should the Pentagon adopt the system, it would represent the first publicly known lethal autonomous weapon in the US military’s arsenal. In accordance with the Pentagon’s current policy governing lethal autonomous weapons, the Bullfrog is designed to keep a human “in the loop” in order to avoid a potential “unauthorized engagement.” In other words, the gun points at and follows targets, but does not fire until commanded to by a human operator. However, ACS officials claim that the system can operate totally autonomously should the US military require it to in the future, with sentry guns taking the entire kill chain out of the hands of service members.
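The "human in the loop" distinction described above can be made concrete with a minimal sketch: the autonomy is free to acquire and track targets, but the fire action is gated on an explicit operator command. This is an illustrative toy, not code from ACS or any real weapon system; all names here are hypothetical.

```python
# Minimal sketch of a human-in-the-loop firing gate (hypothetical names).
# Tracking is autonomous; engagement requires a live operator command.
from dataclasses import dataclass


@dataclass
class Target:
    track_id: int  # identifier assigned by the (hypothetical) vision stack


class TurretController:
    def __init__(self):
        self.tracking = None

    def acquire(self, target: Target):
        # Autonomous step: the sensor/vision stack may lock and follow freely.
        self.tracking = target

    def fire(self, operator_command: bool) -> bool:
        # Gated step: no engagement without both a tracked target and an
        # explicit human authorization, regardless of the autonomy's own
        # confidence in the target.
        if self.tracking is None or not operator_command:
            return False
        return True


turret = TurretController()
turret.acquire(Target(track_id=7))
assert turret.fire(operator_command=False) is False  # tracks, but holds fire
assert turret.fire(operator_command=True) is True    # engages only on command
```

The fully autonomous mode ACS says is possible would amount to removing the `operator_command` check, which is exactly why that single gate carries so much policy weight.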
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The Pentagon is turning to a new class of weapons to fight [China's] numerically superior People’s Liberation Army: drones, lots and lots of drones. In August 2023, the Defense Department unveiled Replicator, its initiative to field thousands of “all-domain, attritable autonomous (ADA2) systems”: Pentagon-speak for low-cost (and potentially AI-driven) machines — in the form of self-piloting ships, large robot aircraft, and swarms of smaller kamikaze drones — that they can use and lose en masse to overwhelm Chinese forces. For the last 25 years, uncrewed Predators and Reapers, piloted by military personnel on the ground, have been killing civilians across the planet. Experts worry that mass production of new low-cost, deadly drones will lead to even more civilian casualties. Advances in AI have increasingly raised the possibility of robot planes, in various nations’ arsenals, selecting their own targets. During the first 20 years of the war on terror, the U.S. conducted more than 91,000 airstrikes ... and killed up to 48,308 civilians, according to a 2021 analysis. “The Pentagon has yet to come up with a reliable way to account for past civilian harm caused by U.S. military operations,” [Columbia Law’s Priyanka Motaparthy] said. “So the question becomes, ‘With the potential rapid increase in the use of drones, what safeguards potentially fall by the wayside? How can they possibly hope to reckon with future civilian harm when the scale becomes so much larger?’”
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
Ties between Silicon Valley and the Pentagon are deeper than previously known, according to thousands of previously unreported subcontracts published Wednesday. The subcontracts were obtained through open records requests by accountability nonprofit Tech Inquiry. They show that tech giants including Google, Amazon, and Microsoft have secured more than 5,000 agreements with agencies including the Department of Defense, Immigration and Customs Enforcement, the Drug Enforcement Administration, and the FBI. Tech workers in recent years have pressured their employers to drop contracts with law enforcement and the military. Google workers revolted in 2018 after Gizmodo revealed that Google was building artificial intelligence for drone targeting through a subcontract with the Pentagon. After some employees quit in protest, Google agreed not to renew the contract. Employees at Amazon and Microsoft have petitioned both companies to drop their contracts with ICE and the military. Neither company has. The newly surfaced subcontracts ... show that the companies' connections to the Pentagon run deeper than many employees were previously aware. Tech Inquiry's research was led by Jack Poulson, a former Google researcher. "Often the high-level contract description between tech companies and the military looks very vanilla," Poulson [said]. "But only when you look at the details ... do you see the workings of how the customization from a tech company would actually be involved."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corruption in government and in the corporate world from reliable major media sources.
AI could mean fewer body bags on the battlefield — but that's exactly what terrifies the godfather of AI. Geoffrey Hinton, the computer scientist known as the "godfather of AI," said the rise of killer robots won't make wars safer. It will make conflicts easier to start by lowering the human and political cost of fighting. Hinton said ... that "lethal autonomous weapons, that is weapons that decide by themselves who to kill or maim, are a big advantage if a rich country wants to invade a poor country." "The thing that stops rich countries invading poor countries is their citizens coming back in body bags," he said. "If you have lethal autonomous weapons, instead of dead people coming back, you'll get dead robots coming back." That shift could embolden governments to start wars — and enrich defense contractors in the process, he said. Hinton also said AI is already reshaping the battlefield. "It's fairly clear it's already transformed warfare," he said, pointing to Ukraine as an example. "A $500 drone can now destroy a multimillion-dollar tank." Traditional hardware is beginning to look outdated, he added. "Fighter jets with people in them are a silly idea now," Hinton said. "If you can have AI in them, AIs can withstand much bigger accelerations — and you don't have to worry so much about loss of life." One Ukrainian soldier who works with drones and uncrewed systems [said] in a February report that "what we're doing in Ukraine will define warfare for the next decade."
Note: As law expert Dr. Salah Sharief put it, "The detached nature of drone warfare has anonymized and dehumanized the enemy, greatly diminishing the necessary psychological barriers of killing." For more, read our concise summaries of news articles on AI and warfare technology.
“Ice is just around the corner,” my friend said, looking up from his phone. A day earlier, I had met with foreign correspondents at the United Nations to explain the AI surveillance architecture that Immigration and Customs Enforcement (Ice) is using across the United States. The law enforcement agency uses targeting technologies which one of my past employers, Palantir Technologies, has both pioneered and proliferated. Technology like Palantir’s plays a major role in world events, from wars in Iran, Gaza and Ukraine to the detainment of immigrants and dissident students in the United States. Known as intelligence, surveillance, target acquisition and reconnaissance (Istar) systems, these tools, built by several companies, allow users to track, detain and, in the context of war, kill people at scale with the help of AI. They deliver targets to operators by combining immense amounts of publicly and privately sourced data to detect patterns, and are particularly helpful in projects of mass surveillance, forced migration and urban warfare. Also known as “AI kill chains”, they pull us all into a web of invisible tracking mechanisms that we are just beginning to comprehend, yet are starting to experience viscerally in the US as Ice wields these systems near our homes, churches, parks and schools. The dragnets powered by Istar technology trap more than migrants and combatants ... in their wake. They appear to violate first and fourth amendment rights.
Note: Read how Palantir helped the NSA and its allies spy on the entire planet. Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and Big Tech.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
Important Note: Explore our full index to revealing excerpts of key major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.