Big Tech News Stories
Google announced this week that it would begin the international rollout of its new artificial intelligence-powered search feature, called AI Overviews. When billions of people search a range of topics from news to recipes to general knowledge questions, what they see first will now be an AI-generated summary. While Google was once mostly a portal to reach other parts of the internet, it has spent years consolidating content and services to make itself into the web’s primary destination. Weather, flights, sports scores, stock prices, language translation, showtimes and a host of other information have gradually been incorporated into Google’s search page over the past 15 or so years. Finding that information no longer requires clicking through to another website. With AI Overviews, the rest of the internet may meet the same fate. Google has tried to assuage publishers’ fears that users will no longer see their links or click through to their sites. Research firm Gartner predicts a 25% drop in traffic to websites from search engines by 2026 – a decrease that would be disastrous for most outlets and creators. What’s left for publishers is largely direct visits to their own home pages and Google referrals. If AI Overviews take away a significant portion of the latter, it could mean less original reporting, fewer creators publishing cooking blogs or how-to guides, and a less diverse range of information sources.
Note: WantToKnow.info traffic from Google search has fallen sharply as Google has stopped indexing most websites. These new AI summaries make independent media sites even harder to find. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
The bedrock of Google’s empire sustained a major blow on Monday after a judge found its search and ad businesses violated antitrust law. The ruling, made by the District of Columbia's Judge Amit Mehta, sided with the US Justice Department and a group of states in a set of cases alleging the tech giant abused its dominance in online search. "Google is a monopolist, and it has acted as one to maintain its monopoly," Mehta wrote in his ruling. The findings, if upheld, could outlaw contracts that for years all but assured Google's dominance. Judge Mehta ruled that Google violated antitrust law in the markets for "general search" and "general search text" ads, which are the ads that appear at the top of the search results page. Apple, Amazon, and Meta are defending themselves against a series of other federal- and state-led antitrust suits, some of which make similar claims. Google’s disputed behavior revolved around contracts it entered into with manufacturers of computer devices and mobile devices, as well as with browser services, browser developers, and wireless carriers. These contracts, the government claimed, violated antitrust laws because they made Google the mandatory default search provider. Companies that entered into those exclusive contracts have included Apple, LG, Samsung, AT&T, T-Mobile, Verizon, and Mozilla. Those deals are why smartphones ... come preloaded with Google's various apps.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business — like having happy customers and stable products — and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead to make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use “to counter misinformation online” and “advance state-of-the-art misinformation research.” House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million ... to develop AI tools that could censor information far faster and at a much greater scale than human beings. The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the “responsibility of censorship” on social media. The release of [an] interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action. House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable. “AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ‘disinformation’ bureaucrats and researchers,” House investigators wrote in the interim report.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google’s mission – “to organise the world’s information and make it universally accessible” – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world’s information. That mission has been abandoned. “Google is no longer trying to index the entire web,” writes Schmalbach. “In fact, it’s become extremely selective, refusing to index most content. This isn’t about content creators failing to meet some arbitrary standard of quality. Rather, it’s a fundamental change in how Google approaches its role as a search engine.” The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has “brand recognition”. “They might index content they perceive as truly unique,” says Schmalbach. “But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won’t index it. This can happen even if you’re a well-respected writer with a substantial readership.”
Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored “related follow-up questions.” OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers’ websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, is a shakedown. AI is scraping publishers’ content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It’s all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Amazon has been accused of using “intrusive algorithms” as part of a sweeping surveillance program to monitor and deter union organizing activities. Workers at a warehouse run by the technology giant on the outskirts of St Louis, Missouri, are today filing an unfair labor practice charge with the National Labor Relations Board (NLRB). A copy of the charge ... alleges that Amazon has “maintained intrusive algorithms and other workplace controls and surveillance which interfere with Section 7 rights of employees to engage in protected concerted activity”. There have been several reports of Amazon surveilling workers over union organizing and activism, including human resources monitoring employee message boards, software to track union threats and job listings for intelligence analysts to monitor “labor organizing threats”. Artificial intelligence can be used by warehouse employers like Amazon “to essentially have 24/7 unregulated and algorithmically processed and recorded video, and often audio data of what their workers are doing all the time”, said Seema N Patel ... at Stanford Law School. “It enables employers to control, record, monitor and use that data to discipline hundreds of thousands of workers in a way that no human manager or group of managers could even do.” The National Labor Relations Board issued a memo in 2022 announcing its intent to protect workers from AI-enabled monitoring of labor organizing activities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. Just 1.4 percent of those companies – seven firms – were worth more than $16 trillion, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza. Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare while making it even more evident how unregulated the nascent field is. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world. Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. Many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly. The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
After government officials like former White House advisers Rob Flaherty and Andy Slavitt repeatedly harangued platforms such as Facebook to censor Americans who contested the government’s narrative on COVID-19 vaccines, Missouri and Louisiana sued. They claimed that the practice violates the First Amendment. Following years of litigation, the Supreme Court threw cold water on their efforts, ruling in Murthy v. Missouri that states and the individual plaintiffs lacked standing to sue the government for its actions. The government often disguised its censorship requests by coordinating with ostensibly “private” civil society groups to pressure tech companies to remove or shadow ban targeted content. According to the U.S. House Weaponization Committee’s November 2023 interim report, the Cybersecurity and Infrastructure Security Agency requested that the now-defunct Stanford Internet Observatory create a public-private partnership to counter election “misinformation” in 2020. This consortium of government and private entities took the form of the Election Integrity Partnership (EIP). EIP’s “private” civil society partners then forwarded the flagged content to Big Tech platforms like Facebook, YouTube, TikTok and Twitter. These “private” groups ... receive millions of taxpayer dollars from the National Science Foundation, the State Department and the U.S. Department of Justice. Legislation like the COLLUDE Act would ... clarify that Section 230 does not apply when platforms censor legal speech “as a result of a communication” from a “governmental entity” or from a non-profit “acting at the request or behest of a governmental entity.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable sources.
OnlyFans makes reassuring promises to the public: It’s strictly adults-only, with sophisticated measures to monitor every user, vet all content and swiftly remove and report any child sexual abuse material. Reuters documented 30 complaints in U.S. police and court records that child sexual abuse material appeared on the site between December 2019 and June 2024. The case files examined by the news organization cited more than 200 explicit videos and images of kids, including some adults having oral sex with toddlers. In one case, multiple videos of a minor remained on OnlyFans for more than a year, according to a child exploitation investigator who found them while assisting Reuters. OnlyFans “presents itself as a platform that provides unrivaled access to influencers, celebrities and models,” said Elly Hanson, a clinical psychologist and researcher who focuses on preventing sexual abuse and reducing its impact. “This is an attractive mix to many teens, who are pulled into its world of commodified sex, unprepared for what this entails.” In 2021 ... 102 Republican and Democratic members of the U.S. House of Representatives called on the Justice Department to investigate child sexual abuse on OnlyFans. The Justice Department told the lawmakers three months later that it couldn’t confirm or deny it was investigating OnlyFans. Contacted recently, a department spokesperson declined to comment further.
Note: For more along these lines, see concise summaries of deeply revealing news articles on sexual abuse scandals from reliable major media sources.
Jonathan Haidt is a man with a mission ... to alert us to the harms that social media and modern parenting are doing to our children. In his latest book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness ... he writes of a “tidal wave” of increases in mental illness and distress beginning around 2012. Young adolescent girls are hit hardest, but boys are in pain, too. He sees two factors that have caused this. The first is the decline of play-based childhood caused by overanxious parenting, which allows children fewer opportunities for unsupervised play and restricts their movement. The second factor is the ubiquity of smartphones and the social media apps that thrive upon them. The result is the “great rewiring of childhood” of his book’s subtitle and an epidemic of mental illness and distress. You don’t have to be a statistician to know that ... Instagram is toxic for some – perhaps many – teenage girls. Ever since Frances Haugen’s revelations, we have known that Facebook itself knew that 13% of British teenage girls said that their suicidal thoughts became more frequent after starting on Instagram. And the company’s own researchers found that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. These findings might not meet the exacting standards of the best scientific research, but they tell you what you need to know.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and mental health from reliable major media sources.
Recall ... takes constant screenshots in the background while you go about your daily computer business. Microsoft’s Copilot+ machine-learning tech then scans (and “reads”) each of these screenshots in order to make a searchable database of every action performed on your computer and then stores it on the machine’s disk. “Recall is like bestowing a photographic memory on everyone who buys a Copilot+ PC,” [Microsoft marketing officer Yusuf] Mehdi said. “Anything you’ve ever seen or done, you’ll now more or less be able to find.” Charlie Stross, the sci-fi author and tech critic, called it a privacy “shit-show for any organisation that handles medical records or has a duty of legal confidentiality.” He also said: “Suddenly, every PC becomes a target for discovery during legal proceedings. Lawyers can subpoena your Recall database and search it, no longer being limited to email but being able to search for terms that came up in Teams or Slack or Signal messages, and potentially verbally via Zoom or Skype if speech-to-text is included in Recall data.” Faced with this pushback, Microsoft [announced] that Recall would be made opt-in instead of on by default, and that it would introduce extra security precautions – only producing results from Recall after user authentication, for example, and never decrypting data stored by the tool until after a search query. The only good news for Microsoft here is that it seems to have belatedly acknowledged that Recall has been a fiasco.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
High-level former intelligence and national security officials have provided crucial assistance to Silicon Valley giants as the tech firms fought off efforts to weaken online monopolies. John Ratcliffe, the former Director of National Intelligence, Brian Cavanaugh, a former intelligence aide in the White House, and [former White House National Security Advisor Robert] O'Brien jointly wrote to congressional leaders, warning darkly that certain legislative proposals to check the power of Amazon, Google, Meta, and Apple would embolden America's enemies. The letter left unmentioned that the former officials were paid by tech industry lobbyists at the time as part of a campaign to suppress support for the legislation. The Open App Markets Act was designed to break Apple and Google's duopoly over the smartphone app store market. The companies use their control over the app markets to force app developers to pay as much as 30 percent in fees on every transaction. Breaking up Apple and Google’s hold over the smartphone app store would enable greater free expression and innovation. The American Innovation and Choice Online Act similarly encourages competition by preventing tech platforms from self-preferencing their own products. The Silicon Valley giants deployed hundreds of millions of dollars in lobbying efforts to stymie the reforms. For Republicans, they crafted messages on national security and jobs. For Democrats, as other reports have revealed, tech giants paid LGBT, Black, and Latino organizations to lobby against the reforms, claiming that powerful tech platforms are beneficial to communities of color and that greater competition online would lead to a rise in hate speech. The lobbying tactics have so far paid off. Every major tech antitrust and competition bill in Congress has died over the last four years.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and Big Tech from reliable major media sources.
Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. The shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. Some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus. Such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Jay Stanley ... at the American Civil Liberties Union, said it was “profoundly disconcerting” that FedEx was exchanging data with law enforcement as part of Flock’s “mass surveillance” system. “It raises questions about why a private company ... would have privileged access to data that normally is only available to law enforcement,” he said. Forbes previously found that [Flock] had itself likely broken the law across various states by installing cameras without the right permits.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
“I had to watch every frame of a recent stabbing video ... It will never leave me,” says Harun*, one of many moderators reviewing harmful online content in India, as social media companies increasingly move the challenging work offshore. Moderators working in Hyderabad, a major IT hub in south Asia, have spoken of the strain on their mental health of reviewing images and videos of sexual and violent content, sometimes including trafficked children. Many social media platforms in the UK, European Union and US have moved the work to countries such as India and the Philippines. While OpenAI, creator of ChatGPT, has said artificial intelligence could be used to speed up content moderation, it is not expected to end the need for the thousands of human moderators employed by social media platforms. Content moderators in Hyderabad say the work has left them emotionally distressed, depressed and struggling to sleep. “I had to watch every frame of a recent stabbing video of a girl. What upset me most is that the passersby didn’t help her,” says Harun. “There have been instances when I’ve flagged a video containing child nudity and received continuous calls from my supervisors,” [said moderator Akash]. “Most of these half-naked pictures of minors are from the US or Europe. I’ve received multiple warnings from my supervisors not to flag these videos. One of them asked me to ‘man up’ when I complained that these videos need to be discussed in detail.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and Big Tech from reliable major media sources.
Trevin Brownie had to sift through lots of disturbing content for the three years he worked as an online content moderator in Nairobi, Kenya. "We take off any form of abusive content that violates policies such as bullying and harassment or hate speech or violent graphic content suicides," Brownie [said]. Brownie has encountered content ranging from child pornography, material circulated by organized crime groups and terrorists, and images taken from war zones. "I've seen more than 500 beheadings on a monthly basis," he said. Brownie moved from South Africa, where he previously worked at a call center, to Nairobi, where he worked as a subcontractor for Facebook's main moderation hub in East Africa, which was operated by a U.S.-based company called Sama AI. Content moderators working in Kenya say Sama AI and other third-party outsourcing companies took advantage of them. They allege they received low-paying wages and inadequate mental health support compared to their counterparts overseas. PTSD has become a common side effect that Brownie and others in this industry now live with, he said. "It's really traumatic. Disturbing, especially for the suicide videos," he said. A key obstacle to getting better protections for content moderators lies in how people think social media platforms work. More than 150 content moderators who work with the artificial intelligence (AI) systems used by Facebook, TikTok and ChatGPT, from all parts of the continent, gathered in Kenya to form the African Content Moderator's Union. The union is calling on companies in the industry to increase salaries, provide access to onsite psychiatrists, and redraw policies to protect employees from exploitative labour practices.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and Big Tech from reliable major media sources.
Once upon a time, Google was great. They intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world. The launch of generative AIs such as ChatGPT clearly took Google by surprise, which is odd given that the company had for years been working on the technology. The question became: how will Google respond to the threat? Now we know: it’s something called AI Overviews, in which an increasing number of search queries are initially answered by AI-generated responses. Users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day. There’s a quaint air of desperation in the publicity for this sudden pivot from search engine to answerbot. The really big question about the pivot, though, is what its systemic impact on the link economy will be. Already, the news is not great. Gartner, a market-research consultancy, for example, predicts that search engine volume will drop 25% by 2026 owing to AI chatbots and other virtual agents.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). This surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made. Untold billions of dollars of private money are now pouring into firms seeking to expand the frontiers of techno-war. According to the New York Times, $125 billion over the past four years. Whatever the numbers, the tech sector and its financial backers sense that there are massive amounts of money to be made in next-generation weaponry and aren’t about to let anyone stand in their way. Meanwhile, an investigation by Eric Lipton of the New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. Former Google CEO Eric Schmidt [has] become a virtual philosopher king when it comes to how new technology will reshape society. [Schmidt] laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with none other than the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. AI is coming, and its impact on our lives, whether in war or peace, is likely to stagger the imagination.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California—a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday. "Although much of the Pentagon's $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms," [report author Roberto J.] González wrote. "Defense Department officials have ... awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle." González found that the five largest military contracts to major tech firms between 2018 and 2022 "had contract ceilings totaling at least $53 billion combined." There's also the danger of a "revolving door" between Silicon Valley and the Pentagon as many senior government officials "are now gravitating towards defense-related VC or private equity firms as executives or advisers after they retire from public service." "Members of the armed services and civilians are in danger of being harmed by inadequately tested—or algorithmically flawed—AI-enabled technologies. By nature, VC firms seek rapid returns on investment by quickly bringing a product to market, and then 'cashing out' by either selling the startup or going public. This means that VC-funded defense tech companies are under pressure to produce prototypes quickly and then move to production before adequate testing has occurred."
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.