Military Corruption News Stories
Below are key excerpts of revealing news articles on military corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
For further exploration, delve into our comprehensive Military-Intelligence Corruption Information Center.
A former housing official who served under President George H. W. Bush has made an astonishing claim: the U.S. government spent years funneling money into the creation of a secret underground "city" where the rich and powerful can shelter in the event of a "near-extinction event." Catherine Austin Fitts ... served as the assistant secretary of Housing and Urban Development for Housing between 1989 and 1990. Fitts ... cited research by Michigan State University economist Mark Skidmore, who released a report in 2017 stating that he and a team of scholars had uncovered $21 trillion in "unauthorized spending in the departments of Defense and Housing and Urban Development for the years 1998-2015." According to Fitts, who worked as an investment banker before joining Bush's administration, that money was used to fund the development of what she described as an "underground base, city infrastructure and transportation system" that has been kept hidden from the public. She [said] that she spent two years researching where the $21 trillion had gone and alleged that she uncovered evidence of 170 secret facilities in the U.S. alone. She explained that she and a team of investigators combed through "all the data and all the allegations on underground bases" in order to make a "guess" as to how many might exist. Additionally, Fitts alleged that several of these bases are located not just underground but beneath the oceans.
Note: Read more about the groundbreaking work of Mark Skidmore and Catherine Austin Fitts. For more along these lines, read our concise summaries of news articles on military corruption and government waste.
Four top tech execs from OpenAI, Meta, and Palantir have just joined the US Army. The Army Reserve has commissioned these senior tech leaders to serve as midlevel officers, skipping tradition to pursue transformation. The newcomers won't attend any current version of the military's most basic and ingrained rite of passage — boot camp. Instead, they'll be ushered in through express training that Army leaders are still hashing out, Col. Dave Butler ... said. The execs — Shyam Sankar, the chief technology officer of Palantir; Andrew Bosworth, the chief technology officer of Meta; Kevin Weil, the chief product officer at OpenAI; and Bob McGrew, an advisor at Thinking Machines Lab who was formerly the chief research officer for OpenAI — are joining the Army as lieutenant colonels. Their unit, "Detachment 201," takes its name from the HTTP "201 Created" status code, which a web server returns when a new resource has been created, Butler explained. "In this role they will work on targeted projects to help guide rapid and scalable tech solutions to complex problems," read the Army press release. "By bringing private-sector know-how into uniform, Det. 201 is supercharging efforts like the Army Transformation Initiative, which aims to make the force leaner, smarter, and more lethal." Lethality, a vague Pentagon buzzword, has been at the heart of the massive modernization and transformation effort the Army is undergoing.
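For context on the unit's name: in the HTTP protocol, status code 201 ("Created") is the server's standard reply when a request has successfully created a new resource. A minimal illustration using Python's standard library:

```python
from http import HTTPStatus

# HTTP status code 201 ("Created") signals that a request has
# resulted in a new resource -- the namesake of "Detachment 201".
status = HTTPStatus.CREATED
print(status.value, status.phrase)  # 201 Created
```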
Note: For more along these lines, read our concise summaries of news articles on Big Tech and military corruption.
In 2018, still in the throes of painful withdrawal from a psychiatric drug cocktail, U.S. Air Force veteran Derek Blumke began connecting the dots. He heard horror story after horror story that followed a disturbingly familiar pattern: starting, adjusting the dose, or abruptly stopping antidepressants was followed by personality changes, outbursts and acts of violence or suicide, leaving countless families and lives destroyed. Timothy Jensen ... an Iraq war veteran who served in the Marines, had been researching psychiatric drug overprescribing in the Veterans’ Health Administration (VA) system for years. He had his own harrowing personal story of antidepressant harm, and he had lost his best friend, a fellow veteran, to suicide soon after he was prescribed Wellbutrin for smoking cessation. Poring through the data, Blumke landed on some startling statistics: 68% of all veterans seen at least one time for care at the VA in 2019 had been prescribed psychotropic drugs, and 28% were issued prescriptions for antidepressants. “It should be zero shock that veterans have the suicide rates we do,” Blumke said. “Veteran suicide rates are two to two and a half times that of the civilian population. Prescription rates of antidepressants and psychiatric drugs are of the same multiples, which are both the highest in the world.” Antidepressants and other psychotropic drugs have huge risk profiles, but doctors and counselors aren’t even being trained about these issues.
Note: Suicide among post-9/11 veterans rose more than tenfold from 2006 to 2020. Why is Mad in America the only media outlet covering this important issue affecting so many veterans? Along these lines, the UK’s medicines regulator is launching a review of over 30 commonly prescribed antidepressants, including Prozac, amid rising concerns about links to suicide, self-harm, and long-term side effects like persistent sexual dysfunction—especially in children.
Haiti could be Erik Prince’s deadliest gambit yet. Prince's Blackwater reigned during the Global War on Terror, but left a legacy of disastrous mishaps, most infamously the 2007 Nisour Square massacre in Iraq, where Blackwater mercenaries killed 17 civilians. This, plus his willingness in recent years to work for foreign governments in conflicts and for law enforcement across the globe, has made Prince one of the world’s most controversial entrepreneurs. A desperate Haiti has now hired him to “conduct lethal operations” against armed groups, who control about 85% of the Haitian capital, Port-au-Prince. Prince will send about 150 private mercenaries to Haiti over the summer. He will advise Haiti’s police force on countering the armed groups; some Prince-hired mercenaries are already operating attack drones there. The Prince deal is occurring within the context of extensive ongoing American intervention in Haiti. Currently the U.S.-backed, Kenyan-led multinational police force operating in Haiti to combat the armed groups is largely seen as a failure. Previously, a U.N. peacekeeping mission aimed at stabilizing Haiti from 2004 through 2017 was undermined by scandal, with U.N. officials condemned for killing civilians during efforts aimed at armed groups, sexually assaulting Haitians, and introducing cholera to Haiti. Before that, the U.S. was accused of ousting Haitian leader Jean-Bertrand Aristide in 2004 after he proved obstructive to U.S. foreign policy goals.
Note: This article doesn't mention the US-backed death squads that recently terrorized Haiti. For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
President George W. Bush created a new command to oversee all military operations in Africa 18 years ago. U.S. Africa Command was meant to help “bring peace and security to the people of Africa.” Gen. Michael Langley, the head of AFRICOM, offered a grim assessment of security on the African continent during a recent press conference. The West African Sahel, he said last Friday, was now the “epicenter of terrorism” and the gravest terrorist threats to the U.S. homeland were “unfortunately right here on the African continent.” Throughout all of Africa, the State Department counted 23 deaths from terrorist violence in 2002 and 2003, the first years of U.S. counterterrorism efforts in the Sahel and Somalia. By 2010, two years after AFRICOM began operations, fatalities from attacks by militant Islamists had already spiked to 2,674. There were an estimated 18,900 fatalities linked to militant Islamist violence in Africa last year, with 79 percent of those coming from the Sahel and Somalia. This constitutes a jump of more than 82,000 percent since the U.S. launched its post-9/11 counterterrorism efforts on the continent. As violence spiraled in the region over the past decades, at least 15 officers who benefited from U.S. security assistance were key leaders in 12 coups in West Africa and the greater Sahel during the war on terror. At least five leaders of the 2023 coup d’état in [Niger] received American assistance.
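The "more than 82,000 percent" figure cited above can be checked with simple arithmetic from the two fatality counts quoted in the excerpt; a quick sketch:

```python
# Percentage increase in fatalities linked to militant Islamist violence
# in Africa: from 23 (State Department count, 2002-2003) to an estimated
# 18,900 last year.
early_fatalities = 23
recent_fatalities = 18_900
increase_pct = (recent_fatalities - early_fatalities) / early_fatalities * 100
print(round(increase_pct))  # 82074 -- i.e. "more than 82,000 percent"
```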
Note: Learn more about the US military's shadow wars in Africa. For more along these lines, read our concise summaries of news articles on terrorism and military corruption.
Today marks 50 years since the end of the American War in Vietnam, which killed an estimated 3.3 million Vietnamese people, hundreds of thousands of Cambodians, tens of thousands of Laotians and more than 58,000 U.S. service members. But for many Vietnamese, Laotian and Cambodian people; Vietnamese Americans; and U.S. Vietnam veterans and their descendants, the impacts of the war never ended. They continue to suffer the devastating consequences of Agent Orange, an herbicide mixture used by the U.S. military that contained dioxin, the deadliest chemical known to humankind. As a result, many people have been born with congenital anomalies — disabling changes in the formation of the spinal cord, limbs, heart, palate, and more. This remains the largest deployment of herbicidal warfare in history. In the 1973 Paris Peace Accords, the Nixon administration promised to contribute $3 billion for compensation and postwar reconstruction of Vietnam. But that promise remains unfulfilled. Between 2,100,000 and 4,800,000 Vietnamese, Lao and Cambodian people, and tens of thousands of Americans were exposed to Agent Orange/dioxin during the spraying operations. Many other Vietnamese people were or continue to be exposed to Agent Orange/dioxin through contact with the environment and food that was contaminated. Many offspring of those who were exposed have congenital anomalies, developmental disabilities, and other diseases.
Note: Rep. Rashida Tlaib recently introduced The Agent Orange Relief Act of 2025 to attempt to provide relief for some of the victims of this toxic chemical. For more along these lines, read our concise summaries of news articles on military corruption and toxic chemicals.
Dave Crete adds another name to a growing memorial list, now more than 400 in total — men and women he says he served with on a secretive range in the Nevada desert that encompasses Area 51. Crete and his fellow veterans were hand-picked and tasked with top-secret work. They couldn’t even tell their wives what they did every day. Many are developing serious health issues, multiple tumors and, in too many cases, deadly cancers. A group of these veterans are exclusively telling NewsNation’s Natasha Zouves that they are unable to get the care and benefits they need because the Department of Defense refuses to acknowledge they were ever stationed in the desert. The DOD records sent to Veterans Affairs list the same two words between asterisks in black and white: “DATA MASKED.” “They keep us classified to protect themselves,” said Crete. A 2016 reunion barbecue at Crete’s Las Vegas home was supposed to be a chance for Air Force buddies to reminisce. The veterans discovered that out of the eight men sitting around that circle, six of them had developed tumors. The seventh man said, “I don’t have any, but my son was born with one.” “There was an issue where we were. That’s the one common denominator. We were all there,” said Crete. “There” was the Nevada Test and Training Range (NTTR), an area encompassing the infamous Area 51. Nuclear weapons tests were conducted in the area ... from the 1950s to the early 1990s.
Note: The existence of Area 51 was denied for years. For more along these lines, read our concise summaries of news articles on UFOs and military corruption.
Uncle Sam conducted several pointless and destructive experiments on his own people during the Cold War. The most infamous was MKUltra, the CIA's project to develop procedures for mind control using psychedelic drugs and psychological torture. During Operation Sea-Spray, the U.S. Navy secretly sprayed San Francisco with bacteria to simulate a biological attack. San Francisco was also the site of a series of radiation experiments by the U.S. Navy. A 2024 investigation by the San Francisco Public Press and The Guardian revealed that the city's U.S. Naval Radiological Defense Laboratory had exposed at least 1,073 people to radiation over 24 experiments between 1946 and 1963. The tests came during a time when the effects of nuclear radiation were a pressing concern, and were conducted without ethical safeguards. Conscripted soldiers and civilian volunteers were sent into radioactive conditions or purposely dosed with radiation without their informed consent. The lab didn't bother following up. The Radiological Defense Laboratory ... closed in 1969. In 2013, whistleblowers brought a lawsuit against a decontamination contractor for cutting corners and faking results; in January 2025, the contractor agreed to pay a $97 million settlement. Scientists [there had] developed "synthetic fallout"—dirt laced with radioactive isotopes to simulate the waste created by a nuclear war. They had test subjects practice cleaning it up, rub it on their skin, or crawl around in it.
Note: Read about the long history of humans being treated like guinea pigs in science experiments. Learn more about the MKUltra Program in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
Department of Defense spending is increasingly going to large tech companies including Microsoft, Google parent company Alphabet, Oracle, and IBM. OpenAI recently brought on former U.S. Army general and National Security Agency Director Paul M. Nakasone to its Board of Directors. The U.S. military discreetly, yet frequently, collaborated with prominent tech companies through thousands of subcontractors through much of the 2010s, obfuscating the extent of the two sectors’ partnership from tech employees and the public alike. The long-term, deep-rooted relationship between the institutions, spurred by massive Cold War defense and research spending and bound ever tighter by the sectors’ revolving door, ensures that advances in the commercial tech sector benefit the defense industry’s bottom line. Military tech spending has yielded myriad landmark inventions. The internet, for example, began as an Advanced Research Projects Agency (ARPA, now known as Defense Advanced Research Projects Agency, or DARPA) research project called ARPANET, the first network of computers. Decades later, graduate students Sergey Brin and Larry Page received funding from DARPA, the National Science Foundation, and U.S. intelligence community-launched development program Massive Digital Data Systems to create what would become Google. Other prominent DARPA-funded inventions include Transit satellites, a precursor to GPS, and the iPhone Siri app, which, instead of being picked up by the military, was ultimately adapted to consumer ends by Apple.
Note: Watch our latest video on the militarization of Big Tech. For more, read our concise summaries of news articles on AI, warfare technology, and Big Tech.
The US military may soon have an army of faceless suicide bombers at its disposal, as an American defense contractor has revealed its newest war-fighting drone. AeroVironment unveiled the Red Dragon in a video on its YouTube page, the first in a new line of 'one-way attack drones.' This new suicide drone can reach speeds up to 100 mph and can travel nearly 250 miles. The new drone weighs just 45 pounds and takes only 10 minutes to set up and launch. Once the small tripod the Red Dragon takes off from is set up, AeroVironment said soldiers would be able to launch up to five per minute. Since the suicide robot can choose its own target in the air, the US military may soon be taking life-and-death decisions out of the hands of humans. Once airborne, its AVACORE software architecture functions as the drone's brain, managing all its systems and enabling quick customization. Red Dragon's SPOTR-Edge perception system acts like smart eyes, using AI to find and identify targets independently. Simply put, the US military will soon have swarms of bombs with brains that don't land until they've chosen a target and crash into it. Despite Red Dragon's ability to choose a target with 'limited operator involvement,' the Department of Defense (DoD) has said it's against the military's policy to allow such a thing to happen. The DoD updated its own directives to mandate that 'autonomous and semi-autonomous weapon systems' always have the built-in ability to allow humans to control the device.
Note: Drones create more terrorists than they kill. For more, read our concise summaries of news articles on warfare technology and Big Tech.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
In recent years, Israeli security officials have boasted of a “ChatGPT-like” arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas’s bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military’s policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.”
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
While attempting to control the weather might sound like science fiction, countries have been seeding clouds for decades to try to make rain or snow fall in specific regions. Invented in the 1940s, seeding involves a variety of techniques including adding particles to clouds via aircraft. It is used today across the world in an attempt to alleviate drought, fight forest fires and even to disperse fog at airports. In 2008, China used it to try to stop rain from falling on Beijing’s Olympic stadium. But experts say that there is insufficient oversight of the practice, as countries show an increasing interest in this and other geoengineering techniques as the planet warms. The American Meteorological Society has said that “unintended consequences” of cloud seeding have not been clearly shown — or ruled out — and raised concerns that unanticipated effects from weather modification could cross political boundaries. And there have been instances when cloud seeding was used deliberately in warfare. The United States used it during “Operation Popeye” to slow the enemy advance during the Vietnam War. In response, the UN created a 1976 convention prohibiting “military or any other hostile use of environmental modification techniques”. A number of countries have not signed the convention. Researcher Laura Kuhl said there was “significant danger that cloud seeding may do more harm than good”, in a 2022 article for the Bulletin of the Atomic Scientists.
Note: Regenerative farming is far safer and more promising than geoengineering for stabilizing the climate. For more along these lines, read our concise summaries of news articles on geoengineering and science corruption.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” Khlaaf adds that even if humans are “double-checking” the work of AI, there's little reason to think they're capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
The inaugural “AI Expo for National Competitiveness” [was] hosted by the Special Competitive Studies Project – better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir’s booth titled Civilian Harm Mitigation. It was led by two “privacy and civil liberties engineers” [who] described how Palantir’s Gaia map tool lets users “nominate targets of interest” for “the target nomination process”. It helps people choose which places get bombed. After [clicking] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. “Let’s say you’re operating in a place with a lot of civilian areas, like Gaza,” I asked the engineers afterward. “Does Palantir prevent you from ‘nominating a target’ in a civilian location?” Short answer, no.
Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.
Skydio, with more than $740m in venture capital funding and a valuation of about $2.5bn, makes drones for the military along with civilian organisations such as police forces and utility companies. The company moved away from the consumer market in 2020 and is now the largest US drone maker. Military uses touted on its website include gaining situational awareness on the battlefield and autonomously patrolling bases. Skydio is one of a number of new military technology unicorns – venture capital-backed startups valued at more than $1bn – many led by young men aiming to transform the US and its allies’ military capabilities with advanced technology, be it straight-up software or software-imbued hardware. The rise of startups doing defence tech is a “big trend”, says Cynthia Cook, a defence expert at the Center for Strategic and International Studies, a Washington-based-thinktank. She likens it to a contagion – and the bug is going around. According to financial data company PitchBook, investors funnelled nearly $155bn globally into defence tech startups between 2021 and 2024, up from $58bn over the previous four years. The US has more than 1,000 venture capital-backed companies working on “smarter, faster and cheaper” defence, says Dale Swartz from consultancy McKinsey. The types of technologies the defence upstarts are working on are many and varied, though autonomy and AI feature heavily.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Palantir is profiting from a “revolving door” of executives and officials passing between the $264bn data intelligence company and high level positions in Washington and Westminster, creating an influence network that has guided its extraordinary growth. The US group, whose billionaire chair Peter Thiel has been a key backer of Donald Trump, has enjoyed an astonishing stock price rally on the back of a strong rise in sales from government contracts and deals with the world’s largest corporations. Palantir has hired extensively from government agencies critical to its sales. Palantir has won more than $2.7bn in US contracts since 2009, including over $1.3bn in Pentagon contracts, according to federal records. In the UK, Palantir has been awarded more than £376mn in contracts, according to Tussell, a data provider. Thiel threw a celebration party for Trump’s inauguration at his DC home last month, attended by Vance as well as Silicon Valley leaders like Meta’s Mark Zuckerberg and OpenAI’s Sam Altman. After the US election in November, Trump began tapping Palantir executives for key government roles. At least six individuals have moved between Palantir and the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), an office that oversees the defence department’s adoption of data, analytics and AI. Meanwhile, [Palantir co-founder] Joe Lonsdale ... has played a central role in setting up and staffing Musk’s Department of Government Efficiency.
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the military and in the corporate world.
The Pentagon’s technologists and the leaders of the tech industry envision a future of an AI-enabled military force wielding swarms of autonomous weapons on land, at sea, and in the skies. Assuming the military does one day build a force with an uncrewed front rank, what happens if the robot army is defeated? Will the nation’s leaders surrender at that point, or do they then send in the humans? It is difficult to imagine the services will maintain parallel fleets of digital and analog weapons. The humans on both sides of a conflict will seek every advantage possible. When a weapon system is connected to the network, the means to remotely defeat it is already built into the design. The humans on the other side would be foolish not to unleash their cyber warriors to find any way to penetrate the network to disrupt cyber-physical systems. The United States may find that the future military force may not even cross the line of departure because it has been remotely disabled in a digital Pearl Harbor-style attack. According to the Government Accountability Office, the Department of Defense reported 12,077 cyber-attacks between 2015 and 2021. The incidents included unauthorized access to information systems, denial of service, and the installation of malware. Pentagon officials created a vulnerability disclosure program in 2016 to engage so-called ethical hackers to test the department’s systems. On March 15, 2024, the program registered its 50,000th discovered vulnerability.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Outer space is no longer just for global superpowers and large multinational corporations. Developing countries, start-ups, universities, and even high schools can now gain access to space. In 2024, a record 2,849 objects were launched into space. The commercial satellite industry saw global revenue rise to $285 billion in 2023, driven largely by the growth of SpaceX’s Starlink constellation. While the democratization of space is a positive development, it has introduced ... an ethical quandary that I call the “double dual-use dilemma.” The double dual-use dilemma refers to how private space companies themselves—not just their technologies—can become militarized and integrated into national security while operating commercially. Space companies fluidly shift between civilian and military roles. Their expertise in launch systems, satellites, and surveillance infrastructure allows them to serve both markets, often without clear regulatory oversight. Companies like Walchandnagar Industries in India, SpaceX in the United States, and the private Chinese firms that operate under a national strategy of the Chinese Communist Party called Military-Civil Fusion exemplify this trend, maintaining commercial identities while actively supporting defense programs. This blurring of roles, including the possibility that private space companies may develop their own weapons, raises concerns over unchecked militarization and calls for stronger oversight.
Note: For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
Secretary of Defense Pete Hegseth’s February memo ordering all diversity, equity and inclusion-related content to be removed from Pentagon websites was so vague that military units were instructed to simply use keyword searches like “racism,” “ethnicity,” “history” and “first” when searching for articles and photos to remove. The implications of Hegseth’s memo were overwhelming, since the Defense Department manages over 1,000 public-facing websites and a huge visual media database known as DVIDS – with officials expected to purge everything relevant within two weeks. As a result, the manual work of individual units was supplemented with an algorithm that also used keywords to automate much of the purge, officials explained. Other keywords officials were instructed to search for included “firsts” in history, including content about the first female ranger and first Black commanding general, as well as the words “LGBTQ,” “historic,” “accessibility,” “opportunity,” “belonging,” “justice,” “privilege,” “respect” and “values,” according to a list reviewed by CNN. The department is now scrambling to republish some of the content, officials said. “Of all the things they could be doing, the places they’re putting their focuses on first are really things that just don’t matter ... This was literally a waste of our time,” a defense official said. “This does absolutely nothing to make us stronger, more lethal, better prepared.”
Note: For more along these lines, read our concise summaries of news articles on censorship and military corruption.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.