Big Tech News Stories
AI could mean fewer body bags on the battlefield — but that's exactly what terrifies the godfather of AI. Geoffrey Hinton, the computer scientist known as the "godfather of AI," said the rise of killer robots won't make wars safer. It will make conflicts easier to start by lowering the human and political cost of fighting. Hinton said ... that "lethal autonomous weapons, that is weapons that decide by themselves who to kill or maim, are a big advantage if a rich country wants to invade a poor country." "The thing that stops rich countries invading poor countries is their citizens coming back in body bags," he said. "If you have lethal autonomous weapons, instead of dead people coming back, you'll get dead robots coming back." That shift could embolden governments to start wars — and enrich defense contractors in the process, he said. Hinton also said AI is already reshaping the battlefield. "It's fairly clear it's already transformed warfare," he said, pointing to Ukraine as an example. "A $500 drone can now destroy a multimillion-dollar tank." Traditional hardware is beginning to look outdated, he added. "Fighter jets with people in them are a silly idea now," Hinton said. "If you can have AI in them, AIs can withstand much bigger accelerations — and you don't have to worry so much about loss of life." One Ukrainian soldier who works with drones and uncrewed systems [said] in a February report that "what we're doing in Ukraine will define warfare for the next decade."
Note: As law expert Dr. Salah Sharief put it, "The detached nature of drone warfare has anonymized and dehumanized the enemy, greatly diminishing the necessary psychological barriers of killing." For more, read our concise summaries of news articles on AI and warfare technology.
Senior officials in the Biden administration, including some White House officials, "conducted repeated and sustained outreach" and "pressed" Google and YouTube parent company Alphabet "regarding certain user-generated content related to the COVID-19 pandemic that did not violate [Alphabet's] policies," the company revealed yesterday. While Alphabet "continued to develop and enforce its policies independently, Biden Administration officials continued to press [Alphabet] to remove non-violative user-generated content," a lawyer for Alphabet wrote in a September 23 letter to House Judiciary Committee Chairman Jim Jordan. Administration officials including Biden "created a political atmosphere that sought to influence the actions" of private tech platforms regarding the moderation of misinformation. This is what has come to be known as "jawboning," and the fact that it doesn't involve direct censorship may make it even more insidious. Direct censorship can be challenged in court. This sort of wink-and-nod regulation of speech leaves companies and their users with little recourse. What's more, each time authorities stray from the spirit of the First Amendment, it becomes that much easier for future authorities to do so. And each time Democrats (or Republicans) use government power to try to suppress free speech, it gives them even less standing to object when their opponents do the same.
Note: Read more about the sprawling federal censorship enterprise that took shape during the Biden administration. For more along these lines, read our concise summaries of news articles on censorship and government corruption.
Those who have kept track of the rise of the Thielverse, which includes figures such as Peter Thiel, Elon Musk and JD Vance, have understood that an agenda to usher in a unique form of authoritarianism has been slowly introduced into the mainstream political atmosphere. “I think now it’s quite clear that this is the PayPal Mafia’s moment. These particular figures have had an extremely significant influence on US government policy since January, including the extreme distribution of AI throughout the US government,” [investigative journalist Whitney] Webb explains. It’s clear that the architects of mass surveillance and the military-industrial complex are beginning to coalesce in unprecedented ways within the Trump administration, and Webb emphasizes that now is the time to pay attention and push back against these new forces. If they have their way, all commercial technology will be completely folded into the national security state — acting blatantly as the new infrastructure for techno-authoritarian rule. The underlying idea behind this new system is “pre-crime,” or the use of mass surveillance to designate people criminals before they’ve committed any crime. Webb warns that the Trump administration and its benefactors will demonize segments of the population to turn civilians against each other, all in pursuit of building out this elaborate system of control right under our noses.
Note: Read about Peter Thiel's involvement in the military origins of Facebook. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
“Ice is just around the corner,” my friend said, looking up from his phone. A day earlier, I had met with foreign correspondents at the United Nations to explain the AI surveillance architecture that Immigration and Customs Enforcement (Ice) is using across the United States. The law enforcement agency uses targeting technologies which one of my past employers, Palantir Technologies, has both pioneered and proliferated. Technology like Palantir’s plays a major role in world events, from wars in Iran, Gaza and Ukraine to the detainment of immigrants and dissident students in the United States. Known as intelligence, surveillance, target acquisition and reconnaissance (Istar) systems, these tools, built by several companies, allow users to track, detain and, in the context of war, kill people at scale with the help of AI. They deliver targets to operators by combining immense amounts of publicly and privately sourced data to detect patterns, and are particularly helpful in projects of mass surveillance, forced migration and urban warfare. Also known as “AI kill chains”, they pull us all into a web of invisible tracking mechanisms that we are just beginning to comprehend, yet are starting to experience viscerally in the US as Ice wields these systems near our homes, churches, parks and schools. The dragnets powered by Istar technology trap more than migrants and combatants ... in their wake. They appear to violate first and fourth amendment rights.
Note: Read how Palantir helped the NSA and its allies spy on the entire planet. Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and Big Tech.
As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases. Many people already distrust the idea of engineering the atmosphere—at whatever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change. Notably, one such company, Stardust, says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” But it’s nonsense for the company to claim it can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. Any particle may become coated by background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer.
Note: Modifying the atmosphere to dim the sun involves catastrophic risks. Regenerative farming is far safer and more promising for stabilizing the climate. In our latest Substack, "Geoengineering is a Weapon That's Been Rebranded as Climate Science. There's a Better Way To Heal the Earth," we present credible evidence and current information showing that weather modification technologies are not only real, but that they are being secretly propagated by multiple groups with differing agendas.
Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014. It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it. Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat "no". The underground space spanning some 5,000 square feet is, he explained, "just like a little shelter, it's like a basement". Other tech leaders ... appear to have been busy buying up chunks of land with underground spaces, ripe for conversion into multi-million pound luxury bunkers. Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance". So, could they really be preparing for war, the effects of climate change, or some other catastrophic event the rest of us have yet to know about? The advancement of artificial intelligence (AI) has only added to that list of potential existential woes. Ilya Sutskever, chief scientist and a co-founder of OpenAI, is reported to be among those preparing. Mr Sutskever was becoming increasingly convinced that computer scientists were on the brink of developing artificial general intelligence (AGI). In a meeting, Mr Sutskever suggested to colleagues that they should dig an underground shelter for the company's top scientists before such a powerful technology was released on the world.
Note: Read how some doomsday preppers are rejecting isolating bunkers in favor of community building and mutual aid. For more along these lines, read our concise summaries of news articles on financial inequality.
In July, US group Delta Air Lines revealed that approximately 3 percent of its domestic fare pricing is determined using artificial intelligence (AI) – although it has not elaborated on how this happens. The company said it aims to increase this figure to 20 percent by the end of this year. According to former Federal Trade Commission Chair Lina Khan ... some companies are able to use your personal data to predict what is known as your “pain point” – the maximum amount you’re willing to spend. In January, the US’s Federal Trade Commission (FTC), which regulates fair competition, reported on a surveillance pricing study it carried out in July 2024. It found that companies can collect data directly through account registrations, email sign-ups and online purchases in order to do this. Additionally, web pixels installed by intermediaries track digital signals including your IP address, device type, browser information, language preferences and “granular” website interactions such as mouse movements, scrolling patterns and video viewing behaviour. This is known as “surveillance pricing”. The FTC Surveillance Pricing report lists several ways in which consumers can protect their data. These include using private browsers to do your online shopping, opting out of consumer tracking where possible, clearing the cookies in your history or using virtual private networks (VPNs) to shield your data from being collected.
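To make the collection mechanics concrete, here is a minimal Python sketch of the kind of event payload a tracking pixel might transmit. The field names and values are hypothetical; the categories mirror the signals listed in the FTC study above.

```python
from dataclasses import dataclass, field

@dataclass
class PixelEvent:
    """One hypothetical tracking-pixel payload, mirroring the signal
    categories named in the FTC's surveillance pricing study."""
    ip_address: str    # network identity and rough location
    device_type: str   # e.g. phone vs. desktop
    browser_info: str  # user-agent string
    language: str      # language preferences
    interactions: list[str] = field(default_factory=list)  # "granular" behavior

event = PixelEvent(
    ip_address="203.0.113.7",  # documentation-range IP, not a real one
    device_type="mobile",
    browser_info="Mozilla/5.0 (iPhone; ...)",
    language="en-US",
    interactions=["hovered_on_price", "scrolled_to_reviews", "replayed_video"],
)
# Intermediaries aggregate events like this across many sites to estimate a
# shopper's "pain point" -- the most that shopper is likely to pay.
```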
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Larry Ellison, the billionaire cofounder of Oracle ... said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior." Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools. Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras. "We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX. Ellison is the world's sixth-richest man with a net worth of $157 billion.
Note: As journalist Kenan Malik put it, "The problem we face is not that machines may one day exercise power over humans. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In an exchange this week on “All-In Podcast,” Alex Karp was on the defensive. The Palantir CEO used the appearance to downplay and deny the notion that his company would engage in rights-violating surveillance work. “We are the single worst technology to use to abuse civil liberties, which is by the way the reason why we could never get the NSA or the FBI to actually buy our product,” Karp said. What he didn’t mention was the fact that a tranche of classified documents revealed by [whistleblower and former NSA contractor] Edward Snowden and The Intercept in 2017 showed how Palantir software helped the National Security Agency and its allies spy on the entire planet. Palantir software was used in conjunction with a signals intelligence tool codenamed XKEYSCORE, one of the most explosive revelations from the NSA whistleblower’s 2013 disclosures. XKEYSCORE provided the NSA and its foreign partners with a means of easily searching through immense troves of data and metadata covertly siphoned across the entire global internet, from emails and Facebook messages to webcam footage and web browsing. A 2008 NSA presentation describes how XKEYSCORE could be used to detect “Someone whose language is out of place for the region they are in,” “Someone who is using encryption,” or “Someone searching the web for suspicious stuff.” In May, the New York Times reported Palantir would play a central role in a White House plan to boost data sharing between federal agencies, “raising questions over whether he might compile a master list of personal information on Americans that could give him untold surveillance power.”
Note: Read about Palantir's revolving door with the US government. As former NSA intelligence official and whistleblower William Binney articulated, "The ultimate goal of the NSA is total population control." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Meta whistleblower Sarah Wynn-Williams, the former director of Global Public Policy for Facebook and author of the recently released tell-all book “Careless People,” told U.S. senators ... that Meta actively targeted teens with advertisements based on their emotional state. In response to a question from Sen. Marsha Blackburn (R-TN), Wynn-Williams admitted that Meta (which was then known as Facebook) had targeted 13- to 17-year-olds with ads when they were feeling down or depressed. “It could identify when they were feeling worthless or helpless or like a failure, and [Meta] would take that information and share it with advertisers,” Wynn-Williams told the senators on the subcommittee for crime and terrorism. “Advertisers understand that when people don’t feel good about themselves, it’s often a good time to pitch a product — people are more likely to buy something.” She said the company was letting advertisers know when the teens were depressed so they could be served an ad at the best time. As an example, she suggested that if a teen girl deleted a selfie, advertisers might see that as a good time to sell her a beauty product as she may not be feeling great about her appearance. They also targeted teens with ads for weight loss when young girls had concerns around body confidence. If Meta was willing to target teens based on their emotional states, it stands to reason they’d do the same to adults. One document displayed during the hearing showed an example of just that.
Note: Facebook hid its own internal research for years showing that Instagram worsened body image issues, revealing that 13% of British teenage girls reported more frequent suicidal thoughts after using the app. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
There has been a surge of concern and interest in the threat of “surveillance pricing,” in which companies leverage the enormous amount of detailed data they increasingly hold on their customers to set individualized prices for each of them — likely in ways that benefit the companies and hurt their customers. The central battle in such efforts will be around identity: do the companies whose prices you are checking or negotiating know who you are? Can you stop them from knowing who you are? Unfortunately, one day not too far in the future, you may lose the ability to do so. Many states around the country are creating digital versions of their state driver’s licenses. Digital versions of IDs allow people to be tracked in ways that are not possible or practical with physical IDs — especially since they are being designed to work ... online. It will be much easier for companies to request — and eventually demand — that people share their IDs in order to engage in all manner of transactions. It will make it easier for companies to collect data about us, merge it with other data, and analyze it, all with high confidence that it pertains to the same person — and then recognize us ... and execute their price-maximizing strategy against us. Not only would digital IDs prevent people from escaping surveillance pricing, but surveillance pricing would simultaneously give companies an incentive to demand digital IDs from anyone who wants to shop.
Note: For more along these lines, read our concise summaries of news articles on corporate corruption and the disappearance of privacy.
Loneliness not only affects how we feel in the moment but can leave lasting imprints on our personality, physiology, and even the way our brains process the social world. A large study of older adults [found] that persistent loneliness predicted declines in extraversion, agreeableness, and conscientiousness—traits associated with sociability, kindness, and self-discipline. At the same time, higher levels of neuroticism predicted greater loneliness in the future, suggesting a self-reinforcing cycle. Although social media promises connection, a large-scale study published in Personality and Social Psychology Bulletin suggests that it may actually fuel feelings of loneliness over time. Researchers found that both passive (scrolling) and active (posting and commenting) forms of social media use predicted increases in loneliness. Surprisingly, even active engagement—often believed to foster interaction—was associated with growing disconnection. Even more concerning was the feedback loop uncovered in the data: loneliness also predicted increased social media use over time, suggesting that people may turn to these platforms for relief, only to find themselves feeling even more isolated. Lonely individuals also showed greater activation in areas tied to negative emotions, such as the insula and amygdala. This pattern suggests that lonely people may be more sensitive to social threat or negativity, which could contribute to feeling misunderstood or excluded.
Note: For more along these lines, read our concise summaries of news articles on mental health and Big Tech.
Digital technology was sold as a liberating tool that could free individuals from state power. Yet the state security apparatus always had a different view. The PRISM leaks by whistleblower Edward Snowden in 2013 revealed a deep and almost unconditional cooperation between Silicon Valley firms and security apparatuses of the state such as the National Security Agency (NSA). People realized that basically any message exchanged via Big Tech firms including Google, Facebook, Microsoft, Apple, etc. could be easily spied upon with direct backdoor access: a form of mass surveillance with few precedents ... especially in nominally democratic states. The leaks prompted outrage, but eventually most people preferred to look away. The most extreme case is the surveillance and intelligence firm Palantir. Its service is fundamentally to provide a more sophisticated version of the mass surveillance that the Snowden leaks revealed. In particular, it endeavors to support the military and police as they aim to identify and track various targets — sometimes literal human targets. Palantir is a company whose very business is to support the security state in its most brutal manifestations: in military operations that lead to massive loss of life, including of civilians, and in brutal immigration enforcement [in] the United States. Unfortunately, Palantir is but one part of a much broader military-information complex, which is becoming the axis of the new Big Tech Deep State.
Note: For more along these lines, read our concise summaries of news articles on corruption in the intelligence community and in Big Tech.
AI’s promise of behavior prediction and control fuels a vicious cycle of surveillance which inevitably triggers abuses of power. The problem with using data to make predictions is that the process can be used as a weapon against society, threatening democratic values. As the lines between private and public data are blurred in modern society, many won’t realize that their private lives are becoming data points used to make decisions about them. What AI does is make this a surveillance ratchet, a device that only goes in one direction, which goes something like this: To make the inferences I want to make to learn more about you, I must collect more data on you. For my AI tools to run, I need data about a lot of you. And once I’ve collected this data, I can monetize it by selling it to others who want to use AI to make other inferences about you. AI creates a demand for data and is itself the product of the data collected. What makes AI prediction both powerful and lucrative is being able to control what happens next. If a bank can claim to predict what people will do with a loan, it can use that to decide whether they should get one. If an admissions officer can claim to predict how students will perform in college, they can use that to decide which students to admit. Amazon’s Echo devices have been subject to warrants for the audio recordings made by the device inside our homes—recordings that were made even when the people present weren’t talking directly to the device. The desire to surveil is bipartisan. It’s about power, not party politics.
Note: As journalist Kenan Malik put it, "It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more, read our concise summaries of news articles on AI.
Beginning in 2004, the CIA established a vast network of at least 885 websites, ranging from Johnny Carson and Star Wars fan pages to online message boards about Rastafari. Spanning 29 languages and targeting at least 36 countries directly, these websites were aimed not only at adversaries such as China, Venezuela, and Russia, but also at allied nations ... showing that the United States treats its friends much like its foes. These websites served as cover for informants, offering some level of plausible deniability if casually examined. Few of these pages provided any unique content and simply rehosted news and blogs from elsewhere. Informants in enemy nations, such as Venezuela, used sites like Noticias-Caracas and El Correo De Noticias to communicate with Langley, while Russian moles used My Online Game Source and TodaysNewsAndWeather-Ru.com, and other similar platforms. In 2010, USAID—a CIA front organization—secretly created the Cuban social media app, ZunZuneo. While the 885 fake websites were not established to influence public opinion, today, the U.S. government sponsors thousands of journalists worldwide for precisely this purpose. The Trump administration’s decision to pause funding to USAID inadvertently exposed a network of more than 6,200 reporters working at nearly 1,000 news outlets or journalism organizations who were all quietly paid to promote pro-U.S. messaging in their countries. Facebook has hired dozens of former CIA officials to run its most sensitive operations. As the platform’s senior misinformation manager, [Aaron Berman] ultimately has the final say over what content is promoted and what is demoted or deleted from Facebook. Until 2019, Berman was a high-ranking CIA officer, responsible for writing the president’s daily security brief.
Note: Dozens of former CIA agents hold top jobs at Google. Learn more about the CIA’s longstanding propaganda network in our comprehensive Military-Intelligence Corruption Information Center. For more along these lines, read our concise summaries of news articles on intelligence agency corruption and media manipulation.
Last April, in a move generating scant media attention, the Air Force announced that it had chosen two little-known drone manufacturers — Anduril Industries of Costa Mesa, California, and General Atomics of San Diego — to build prototype versions of its proposed Collaborative Combat Aircraft (CCA), a future unmanned plane intended to accompany piloted aircraft on high-risk combat missions. The lack of coverage was surprising, given that the Air Force expects to acquire at least 1,000 CCAs over the coming decade at around $30 million each, making this one of the Pentagon’s costliest new projects. But consider that the least of what the media failed to note. In winning the CCA contract, Anduril and General Atomics beat out three of the country’s largest and most powerful defense contractors — Boeing, Lockheed Martin, and Northrop Grumman — posing a severe threat to the continued dominance of the existing military-industrial complex, or MIC. The very notion of a “military-industrial complex” linking giant defense contractors to powerful figures in Congress and the military was introduced on January 17, 1961, by President Dwight D. Eisenhower in his farewell address. In 2024, just five companies — Lockheed Martin (with $64.7 billion in defense revenues), RTX (formerly Raytheon, with $40.6 billion), Northrop Grumman ($35.2 billion), General Dynamics ($33.7 billion), and Boeing ($32.7 billion) — claimed the vast bulk of Pentagon contracts.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and military corruption.
A series of corporate leaks over the past few years provides a remarkable window into the hidden engines powering social media. In January 2021, a few Facebook employees posted an article on the company’s engineering blog purporting to explain the news feed algorithm that determines which of the countless available posts each user will see and the order in which they will see them. Eight months later ... a Facebook product manager turned whistleblower snuck over ten thousand pages of documents and internal messages out of Facebook headquarters. She leaked these to a handful of media outlets. Internal studies documented Instagram’s harmful impact on the mental health of vulnerable teen girls. A secret whitelist program exempted VIP users from the moderation system the rest of us face. It turns out Facebook engineers have assigned a point value to each type of engagement users can perform on a post (liking, commenting, resharing, etc.). For each post you could be shown, these point values are multiplied by the probability that the algorithm thinks you’ll perform that form of engagement. These multiplied pairs of numbers are added up, and the total is the post’s personalized score for you. Facebook, TikTok, and Twitter all run on essentially the same simple math formula. Once we start clicking the social media equivalent of junk food, we’re going to be served up a lot more of it—which makes it harder to resist. It’s a vicious cycle.
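The scoring formula the leaks describe is simple enough to sketch in a few lines of Python. The point values and probabilities below are invented for illustration; the leaked documents confirm the structure (point value times predicted probability, summed), not these particular numbers.

```python
# Hypothetical point values per engagement type; the real weights are
# Facebook's own and have varied over time.
POINT_VALUES = {"like": 1.0, "comment": 15.0, "reshare": 30.0}

def personalized_score(predicted_probs: dict[str, float]) -> float:
    """Each engagement type's point value, multiplied by the predicted
    probability that this user performs it, summed into one score."""
    return sum(POINT_VALUES[e] * p for e, p in predicted_probs.items())

# Two candidate posts; the feed shows the higher-scoring one first.
post_a = {"like": 0.20, "comment": 0.01, "reshare": 0.005}
post_b = {"like": 0.05, "comment": 0.04, "reshare": 0.02}
print(personalized_score(post_a))  # 0.20*1 + 0.01*15 + 0.005*30 = 0.5
print(personalized_score(post_b))  # 0.05*1 + 0.04*15 + 0.02*30  = 1.25
```

Note how a post likely to provoke comments and reshares outscores one that merely attracts likes, which is exactly the junk-food dynamic the excerpt describes.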
Note: Read our latest Substack focused on a social media platform that is harnessing technology as a listening tool for the radical purpose of bringing people together across differences. For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
Reviewing individuals’ social media to conduct ideological vetting has been a defining initiative of President Trump’s second term. As part of that effort, the administration has proposed expanding the mandatory collection of social media identifiers. By linking individuals’ online presence to government databases, officials could more easily identify, monitor, and penalize people based on their online self-expression, raising the risk of self-censorship. Most recently, the State Department issued a cable directing consular officers to review the social media of all student visa applicants for “any indications of hostility towards the citizens, culture, government, institutions or founding principles of the United States,” as well as for any “history of political activism.” This builds on earlier efforts this term, including the State Department’s “Catch and Revoke” program, which promised to leverage artificial intelligence to screen visa holders’ social media for ostensible “pro-Hamas” activity, and U.S. Citizenship and Immigration Services’ April announcement that it would begin looking for “antisemitic activity” in the social media of scores of foreign nationals. At the border, any traveler, regardless of citizenship status, may face additional scrutiny. U.S. border agents are authorized to ... examine phones, computers, and other devices to review posts and private messages on social media, even if they do not suspect any involvement in criminal activity or have immigration-related concerns.
Note: Our news archives on censorship and the disappearance of privacy reveal how government surveillance of social media has long been conducted by all presidential administrations and all levels of government.
Data brokers are required by California law to provide ways for consumers to request their data be deleted. But good luck finding them. More than 30 of the companies, which collect and sell consumers’ personal information, hid their deletion instructions from Google. This creates one more obstacle for consumers who want to delete their data. Data brokers nationwide must register in California under the state’s Consumer Privacy Act, which allows Californians to request that their information be removed, that it not be sold, or that they get access to it. After reviewing the websites of all 499 data brokers registered with the state, we found 35 had code to stop certain pages from showing up in searches. While those companies might be fulfilling the letter of the law by providing a page consumers can use to delete their data, it means little if those consumers can’t find the page, according to Matthew Schwartz, a policy analyst. “This sounds to me like a clever work-around to make it as hard as possible for consumers to find it,” Schwartz said. Some companies that hid their privacy instructions from search engines included a small link at the bottom of their homepage. Accessing it often required scrolling multiple screens, dismissing pop-ups for cookie permissions and newsletter sign-ups, then finding a link that was a fraction of the size of other text on the page. So consumers still faced a serious hurdle when trying to get their information deleted.
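The report doesn't specify what the hiding code looked like, but the two standard mechanisms for keeping a page out of search results are a robots.txt Disallow rule and a "noindex" directive. Below is a rough Python sketch of an audit for those signals; the URL is hypothetical and the checks are simplified substring matches, not a full robots.txt parser.

```python
import urllib.request
from urllib.error import URLError
from urllib.parse import urlparse

def search_hiding_signals(page_url: str) -> list[str]:
    """Check a page for the standard signals that keep it out of search:
    a 'noindex' robots directive (meta tag or X-Robots-Tag header) and a
    robots.txt Disallow rule covering the page's path."""
    findings = []
    req = urllib.request.Request(page_url, headers={"User-Agent": "audit-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        if "noindex" in (resp.headers.get("X-Robots-Tag") or "").lower():
            findings.append("X-Robots-Tag header requests noindex")
        if "noindex" in resp.read(200_000).decode("utf-8", "replace").lower():
            findings.append("page HTML contains a noindex directive")
    parts = urlparse(page_url)
    try:
        robots = urllib.request.urlopen(f"{parts.scheme}://{parts.netloc}/robots.txt")
        for line in robots.read().decode("utf-8", "replace").splitlines():
            if line.strip().lower().startswith("disallow:"):
                rule = line.split(":", 1)[1].strip()
                if rule and parts.path.startswith(rule):
                    findings.append(f"robots.txt blocks this path: {line.strip()}")
    except URLError:
        pass  # no readable robots.txt
    return findings

# Hypothetical example -- substitute a real broker's deletion page to audit it:
# print(search_hiding_signals("https://databroker.example/privacy/delete-my-data"))
```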
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In Silicon Valley, AI tech giants are in a bidding war, competing to hire the best and brightest computer programmers. But a different hiring spree is underway in D.C. AI firms are on an influence-peddling spree, hiring hundreds of former government officials and retaining former members of Congress as consultants and lobbyists. The latest disclosure filings show over 500 entities lobbying on AI policy—from federal rules designed to preempt state and local safety regulations to water and energy-intensive data centers and integration into government contracting and certifications. Lawmakers are increasingly making the jump from serving constituents as elected officials to working directly as influence peddlers for AI interests. Former Sen. Laphonza Butler, D-Calif., a former lobbyist appointed to the U.S. Senate to fill the seat of Sen. Dianne Feinstein, left Congress last year and returned to her former profession. She is now working as a consultant to OpenAI, the firm behind ChatGPT. Former Sen. Richard Burr, R-N.C., recently registered for the first time as a lobbyist. Among his initial clients is Lazarus AI, which sells AI products to the Defense Department. The expanding reach of artificial intelligence is rapidly reshaping hundreds of professions, weapons of war, and the ways we connect with one another. What's clear is that the AI firms set to benefit most from these changes are taking control of the policymaking apparatus to write the laws and regulations during the transition.
Note: For more, read our concise summaries of news articles on AI and Big Tech.
Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue ... highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself. The AI likely conflated the basal ganglia, an area of the brain that's associated with motor movements and habit formation, and the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia." It's an embarrassing reveal that underlines persistent and impactful shortcomings of the tech. In Google's search results, such hallucinations can lead to headaches for users during their research and fact-checking efforts. But in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google's faux pas more than likely didn't result in any danger to human patients, it sets a worrying precedent, experts argue. In a medical context, AI hallucinations could easily lead to confusion and potentially even put lives at risk.
Note: For more along these lines, read our concise summaries of news articles on AI and corruption in science.
Tor is mostly known as the Dark Web or Dark Net, seen as an online Wild West where crime runs rampant. Yet it’s partly funded by the U.S. government, and the BBC and Facebook both have Tor-only versions to allow users in authoritarian countries to reach them. At its simplest, Tor is a distributed digital infrastructure that makes you anonymous online. It is a network of servers spread around the world, accessed using a browser called the Tor Browser, which you can download for free from the Tor Project website. When you use the Tor Browser, your signals are encrypted and bounced around the world before they reach the service you’re trying to access. This makes it difficult for governments to trace your activity or block access, as the network just routes you through a country where that access isn’t restricted. But, because you can’t protect yourself from digital crime without also protecting yourself from mass surveillance by the state, these technologies are the site of constant battles between security and law enforcement interests. The state’s claim to protect the vulnerable often masks efforts to exert control. In fact, robust, well-funded, value-driven and democratically accountable content moderation — by well-paid workers with good conditions — is a far better solution than magical tech fixes to social problems ... or surveillance tools. As more of our online lives are funneled into the centralized AI infrastructures ... tools like Tor are becoming ever more important.
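Conceptually, that "bouncing" is onion routing: the client wraps its traffic in one encryption layer per relay, and each relay can peel off only its own layer. Here is a toy Python model of the idea, using the third-party cryptography package; real Tor's key negotiation and circuit construction are far more elaborate.

```python
# pip install cryptography -- a toy model of onion routing, NOT real Tor.
from cryptography.fernet import Fernet

# In real Tor, the client negotiates a key with each of three relays
# (guard, middle, exit) via a telescoping handshake; here we just invent them.
relay_keys = {name: Fernet.generate_key() for name in ("guard", "middle", "exit")}

def wrap_onion(message: bytes) -> bytes:
    """Client side: encrypt for the exit first, then middle, then guard,
    so each relay can peel exactly one layer."""
    for name in ("exit", "middle", "guard"):
        message = Fernet(relay_keys[name]).encrypt(message)
    return message

def route(onion: bytes) -> bytes:
    """Each relay decrypts its own layer. Only the exit sees the plaintext,
    and only the guard knows who sent it -- no single relay knows both."""
    for name in ("guard", "middle", "exit"):
        onion = Fernet(relay_keys[name]).decrypt(onion)
    return onion

print(route(wrap_onion(b"GET https://example.org/")))  # b'GET https://example.org/'
```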
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Surveillance capitalism came about when some crafty software engineers realized that advertisers were willing to pay bigtime for our personal data. The data trade is how social media platforms like Google, YouTube, and TikTok make their bones. In 2022, the data industry raked in just north of $274 billion worth of revenue. By 2030, it's expected to explode to just under $700 billion. Targeted ads on social media are made possible by analyzing four key metrics: your personal info, like gender and age; your interests, like the music you listen to or the comedians you follow; your "off app" behavior, like what websites you browse after watching a YouTube video; and your "psychographics," meaning general trends gleaned from your behavior over time, like your social values and lifestyle habits. In 2017 The Australian alleged that [Facebook] had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure." The social media company likewise tracked when adolescent girls deleted selfies, "so it can serve a beauty ad to them at that moment," according to [former employee Sarah] Wynn-Williams. Other examples of Facebook's ad lechery are said to include the targeting of young mothers based on their emotional state, as well as emotional indexes mapped to racial groups.
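As a rough illustration of how those four metric buckets combine into a targeting decision, here is a hypothetical Python sketch. Every field, value, and threshold is invented for the example; it simply shows the shape of the matching logic the reporting describes.

```python
# Hypothetical sketch of the four targeting buckets described above.
profile = {
    "personal":       {"age": 16, "gender": "female"},
    "interests":      ["indie_music", "standup_comedy"],
    "off_app":        ["visited_dieting_blog", "searched_acne_products"],
    "psychographics": {"body_confidence": "low", "impulse_buyer": True},
}

def matches(segment: dict, profile: dict) -> bool:
    """An advertiser's 'segment' is just constraints over the same buckets."""
    lo, hi = segment["age_range"]
    return (
        lo <= profile["personal"]["age"] <= hi
        and any(i in profile["interests"] for i in segment["interests"])
        and segment["psychographic"] in profile["psychographics"]
    )

beauty_ad_segment = {
    "age_range": (13, 17),
    "interests": ["fashion", "indie_music"],
    "psychographic": "body_confidence",  # target low-confidence moments
}
print(matches(beauty_ad_segment, profile))  # True -- the pattern critics describe
```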
Note: Facebook hid its own internal research for years showing that Instagram worsened body image issues, revealing that 13% of British teenage girls reported more frequent suicidal thoughts after using the app. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
Technology already available – and already demonstrated to be effective – makes it possible for law-abiding officials, together with experienced technical people, to create a highly efficient system in which both security and privacy can be assured. Advanced technology can pinpoint and thwart corruption in the intelligence, military, and civilian domains. At its core, this requires automated analysis of attributes and transactional relationships among individuals. The large data sets in government files already contain the needed data. On the Intelligence Community side, there are ways to purge databases of irrelevant data and deny government officials the ability to spy on anyone they want. These methodologies protect the privacy of innocent people, while enhancing the ability to discover criminal threats. In order to ensure continuous legal compliance with these changes, it is necessary to establish a central technical group or organization to continuously monitor and validate compliance with the Constitution and U.S. law. Such a group would need to have the highest-level access to all agencies to ensure compliance behind the classification doors. It must be able to go into any agency to inspect its activity at any time. In addition ... it would be best to make government financial and operational transactions open to the public for review. Such an organization would go a long way toward making government truly transparent to the public.
Note: The article cites national security journalist James Risen's book on how the creation of Google was closely tied to NSA and CIA-backed efforts to privatize surveillance infrastructure. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
On Tuesday, July 1, 2025, African Stream published its final video, a defiant farewell message. With that, the once-thriving pan-African media outlet confirmed it was shutting down for good. Not because it broke the law. Not because it spread disinformation or incited violence. But because it told the wrong story, one that challenged U.S. power in Africa and resonated too deeply with Black audiences around the world. In September, then-U.S. Secretary of State Antony Blinken made the call and announced an all-out war against the organization, claiming, without evidence, that it was a Russian front group. Within hours, big social media platforms jumped into action. Google, YouTube, Facebook, Instagram, and TikTok all deleted African Stream’s accounts, while Twitter demonetized the organization. The company’s founder and CEO, Ahmed Kaballo ... told us that, with just one statement, Washington was able to destroy their entire operation, stating: “We are shutting down because the business has become untenable. After we got attacked by Antony Blinken, we really tried to continue, but without a platform on YouTube, Instagram, TikTok, and being demonetized on X, it just meant the ability to generate income became damn near impossible.” Washington both funds thousands of journalists around the planet to produce pro-U.S. propaganda, and, through its close connections to Silicon Valley, has the power to destroy those that do not toe the line.
Note: Learn more about the CIA’s longstanding propaganda network in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on censorship.
The Electronic Frontier Foundation (EFF) and a nonprofit privacy rights group have called on several states to investigate why “hundreds” of data brokers haven’t registered with state consumer protection agencies in accordance with local laws. An analysis done in collaboration with Privacy Rights Clearinghouse (PRC) found that many data brokers have failed to register in all of the four states with laws that require it, preventing consumers in some states from learning what kinds of information these brokers collect and how to opt out. Data brokers are companies that collect and sell troves of personal information about people, including their names, addresses, phone numbers, financial information, and more. Consumers have little control over this information, posing serious privacy concerns, and attempts to address these concerns at a federal level have mostly failed. Four states — California, Texas, Oregon, and Vermont — do attempt to regulate these companies by requiring them to register with consumer protection agencies and share details about what kind of data they collect. In letters to the states’ attorneys general, the EFF and PRC say they “uncovered a troubling pattern” after scraping data broker registries. They found that many data brokers didn’t consistently register their businesses across all four states. The number of data brokers that appeared on one registry but not another includes 524 in Texas, 475 in Oregon, 309 in Vermont, and 291 in California.
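The pattern EFF and PRC describe boils down to a set difference across the four state registries. Here is a toy Python sketch of that comparison; the broker names are invented, while the real analysis scraped the actual registries.

```python
# Toy model of the EFF/PRC registry comparison; broker names are invented.
registries = {
    "California": {"AcmeData", "TrackCo", "LeadMiner"},
    "Texas":      {"AcmeData"},
    "Oregon":     {"AcmeData", "TrackCo"},
    "Vermont":    {"AcmeData", "LeadMiner"},
}

for state, registered_here in registries.items():
    # Brokers that appear on at least one other state's registry...
    elsewhere = set().union(*(b for s, b in registries.items() if s != state))
    # ...but are missing from this one -- the gap the letters flag.
    missing = elsewhere - registered_here
    print(f"{state}: {len(missing)} broker(s) registered elsewhere but not here")
```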
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
When National Public Data, a company that does online background checks, was breached in 2024, criminals gained the names, addresses, dates of birth and national identification numbers such as Social Security numbers of 170 million people in the U.S., U.K. and Canada. The same year, hackers who targeted Ticketmaster stole the financial information and personal data of more than 560 million customers. In so-called stolen data markets, hackers sell personal information they illegally obtain to others, who then use the data to engage in fraud and theft for profit. Every piece of personal data captured in a data breach – a passport number, Social Security number or login for a shopping service – has inherent value. Offenders can ... assume someone else’s identity, make a fraudulent purchase or steal services such as streaming media or music. Some vendors also offer distinct products such as credit reports, Social Security numbers and login details for different paid services. The price for pieces of information varies. A recent analysis found credit card data sold for US$50 on average, while Walmart logins sold for $9. However, the pricing can vary widely across vendors and markets. The rate of return can be exceptional. An offender who buys 100 cards for $500 can recoup costs if only 20 of those cards are active and can be used to make an average purchase of $30. The result is that data breaches are likely to continue as long as there is demand.
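That rate-of-return claim can be checked with a few lines of arithmetic, using only the figures given in the passage above.

```python
# Figures from the passage: 100 stolen cards bought in bulk for $500.
cards_bought, bulk_cost = 100, 500.00
active_rate = 0.20    # only 20 of the 100 cards still work
avg_purchase = 30.00  # average fraudulent purchase per active card

revenue = cards_bought * active_rate * avg_purchase
print(f"revenue ${revenue:.2f} vs. cost ${bulk_cost:.2f}")  # $600.00 vs. $500.00
# Even with an 80% dud rate the buyer clears a profit, which is why
# demand for breached data persists.
```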
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Palantir has long been connected to government surveillance. It was founded in part with CIA money, it has served as an Immigration and Customs Enforcement (ICE) contractor since 2011, and it's been used for everything from local law enforcement to COVID-19 efforts. But the prominence of Palantir tools in federal agencies seems to be growing under President Trump. "The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon," reports The New York Times, noting that this figure "does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent." Palantir technology has largely been used by the military, the intelligence agencies, the immigration enforcers, and the police. But its uses could be expanding. Representatives of Palantir are also speaking to at least two other agencies—the Social Security Administration and the Internal Revenue Service. Along with the Trump administration's efforts to share more data across federal agencies, this signals that Palantir's huge data analysis capabilities could wind up being wielded against all Americans. Right now, the Trump administration is using Palantir tools for immigration enforcement, but those tools could easily be applied to other ... targets.
Note: Read about Palantir's recent, first-ever AI warfare conference. For more along these lines, read our concise summaries of news articles on Big Tech and intelligence agency corruption.
Amber Scorah knows only too well that powerful stories can change society—and that powerful organizations will try to undermine those who tell them. While working at a media outlet that connects whistleblowers with journalists, she noticed parallels in the coercive tactics used by groups trying to suppress information. “There is a sort of playbook that powerful entities seem to use over and over again,” she says. “You expose something about the powerful, they try to discredit you, people in your community may ostracize you.” In September 2024, Scorah cofounded Psst, a nonprofit that helps people in the tech industry or the government share information of public interest with extra protections—with lots of options for specifying how the information gets used and how anonymous a person stays. Psst’s main offering is a “digital safe”—which users access through an anonymous end-to-end encrypted text box hosted on Psst.org, where they can enter a description of their concerns. What makes Psst unique is something it calls its “information escrow” system—users have the option to keep their submission private until someone else shares similar concerns about the same company or organization. Combining reports from multiple sources defends against some of the isolating effects of whistleblowing and makes it harder for companies to write off a story as the grievance of a disgruntled employee, says Psst cofounder Jennifer Gibson.
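Conceptually, an information escrow is a hold-until-corroborated queue. The toy Python sketch below illustrates the idea only; it is not Psst's actual system, whose real digital safe adds end-to-end encryption and human review.

```python
from collections import defaultdict

class InformationEscrow:
    """Toy model of an information escrow: a submission stays sealed until
    at least `threshold` distinct sources report on the same organization."""
    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        self.sealed = defaultdict(list)  # organization -> [(source, report)]

    def submit(self, org: str, source_id: str, report: str) -> list:
        self.sealed[org].append((source_id, report))
        distinct_sources = {s for s, _ in self.sealed[org]}
        if len(distinct_sources) >= self.threshold:
            return self.sealed.pop(org)  # release the corroborated batch
        return []                        # still held in escrow

escrow = InformationEscrow()
print(escrow.submit("ExampleCorp", "src-1", "safety tests were skipped"))  # []
print(escrow.submit("ExampleCorp", "src-2", "told to skip safety tests"))  # both released
```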
Note: For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that “threaten our personal safety and undermine America’s national security.” The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information—often without individuals’ knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing [acting CFPB director Russell] Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There’s simply too much data on sale from too many corporations and brokers. So the government has a plan for a one-stop shop. The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be “misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons.” The “Intelligence Community Data Consortium” will provide a single convenient web-based storefront for searching and accessing this data, along with a “data marketplace” for purchasing “the best data at the best price,” faster than ever before. It will be designed for the 18 different federal agencies and offices that make up the U.S. intelligence community, including the National Security Agency, CIA, FBI Intelligence Branch, and Homeland Security’s Office of Intelligence and Analysis — though one document suggests the portal will also be used by agencies not directly related to intelligence or defense.
Note: For more along these lines, read our concise summaries of news articles on intelligence agency corruption and the disappearance of privacy.
According to recent research by the Office of the eSafety Commissioner, “nearly 1 in 5 young people believe it’s OK to track their partner whenever they want”. Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keeps them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you’re not doing anything wrong, they’re asking (well, demanding) that we trust them. But it’s not about trust, it’s about control and disciplining behaviour. “Nothing to hide; nothing to fear” is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what’s considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
What goes through the minds of people working at porn companies profiting from videos of children being raped? Thanks to a filing error in a Federal District Court in Alabama that released thousands of pages of internal documents from Pornhub meant to be sealed, we now know. One internal document indicates that Pornhub as of May 2020 had 706,000 videos available on the site that had been flagged by users for depicting rape or assaults on children or for other problems. In the message traffic, one employee advises another not to copy a manager when they find sex videos with children. The other has the obvious response: “He doesn’t want to know how much C.P. we have ignored for the past five years?” C.P. is short for child pornography. One private memo acknowledged that videos with apparent child sexual abuse had been viewed 684 million times before being removed. Pornhub produced these documents during discovery in a civil suit by an Alabama woman who beginning at age 16 was filmed engaging in sex acts, including at least once when she was drugged and then raped. These videos of her were posted on Pornhub and amassed thousands of views. One discovery memo showed that there were 155,447 videos on Pornhub with the keyword “12yo.” Other categories that the company tracked were “11yo,” “degraded teen,” “under 10” and “extreme choking.” (It has since removed these searches.) Google ... has been central to the business model of companies publishing nonconsensual imagery. Google also directs users to at least one website that monetizes assaults on victims of human trafficking.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.
If there is one thing that Ilya Sutskever knows, it is the opportunities—and risks—that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. “We’re definitely going to build a bunker before we release AGI,” Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. “Of course, it’s going to be optional whether you want to get into the bunker,” he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.
Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.
Department of Defense spending is increasingly going to large tech companies including Microsoft, Google parent company Alphabet, Oracle, and IBM. OpenAI recently brought on former U.S. Army general and National Security Agency Director Paul M. Nakasone to its Board of Directors. The U.S. military discreetly, yet frequently, collaborated with prominent tech companies through thousands of subcontractors throughout much of the 2010s, obfuscating the extent of the two sectors’ partnership from tech employees and the public alike. The long-term, deep-rooted relationship between the institutions, spurred by massive Cold War defense and research spending and bound ever tighter by the sectors’ revolving door, ensures that advances in the commercial tech sector benefit the defense industry’s bottom line. Military tech spending has yielded myriad landmark inventions. The internet, for example, began as an Advanced Research Projects Agency (ARPA, now known as Defense Advanced Research Projects Agency, or DARPA) research project called ARPANET, the first network of computers. Decades later, graduate students Sergey Brin and Larry Page received funding from DARPA, the National Science Foundation, and U.S. intelligence community-launched development program Massive Digital Data Systems to create what would become Google. Other prominent DARPA-funded inventions include Transit satellites, a precursor to GPS, and the iPhone Siri app, which, instead of being picked up by the military, was ultimately adapted to consumer ends by Apple.
Note: Watch our latest video on the militarization of Big Tech. For more, read our concise summaries of news articles on AI, warfare technology, and Big Tech.
In 2003 [Alexander Karp] – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA! The lesson that Karp and his co-author draw [in their book The Technological Republic: Hard Power, Soft Belief and the Future of the West] is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and intelligence agency corruption.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
Campaigners have accused Facebook parent Meta of inflicting “potentially lifelong trauma” on hundreds of content moderators in Kenya, after more than 140 were diagnosed with PTSD and other mental health conditions. The diagnoses were made by Dr. Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Kenya’s capital Nairobi, and filed with the city’s employment and labor relations court on December 4. Content moderators help tech companies weed out disturbing content on their platforms and are routinely managed by third party firms, often in developing countries. For years, critics have voiced concerns about the impact this work can have on moderators’ mental well-being. Kanyanya said the moderators he assessed encountered “extremely graphic content on a daily basis which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse ... just to name a few.” Of the 144 content moderators who volunteered to undergo psychological assessments – out of 185 involved in the legal claim – 81% were classed as suffering from “severe” PTSD, according to Kanyanya. The class action grew out of a previous suit launched in 2022 by a former Facebook moderator, which alleged that the employee was unlawfully fired by Samasource Kenya after organizing protests against unfair working conditions.
Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
Careless People [is] a whistleblowing book by a former [Meta] senior employee, Sarah Wynn-Williams. In the 78-page document that Wynn-Williams filed with the SEC ... it was alleged that Meta had for years been making numerous efforts to get into the biggest market in the world. These efforts included: developing a censorship system for China in 2015 that would allow a “chief editor” to decide what content to remove, and the ability to shut down the entire site during “social unrest”; assembling a “China team” in 2014 for a project to develop China-compliant versions of Meta’s services; considering the weakening of privacy protections for Hong Kong users; building a specialised censorship system for China with automatic detection of restricted terms; and restricting the account of Guo Wengui, a Chinese government critic. In her time at Meta, Wynn-Williams observed many of these activities at close range. Clearly, nobody at Meta has heard of the Streisand effect, “an unintended consequence of attempts to hide, remove or censor information, where the effort instead increases public awareness of the information”. What strikes the reader is that Meta and its counterparts are merely the digital equivalents of the oil, mining and tobacco conglomerates of the analogue era.
Note: A former Meta insider revealed that the company’s policy on banning hate groups and terrorists was quietly reshaped under political pressure, with US government agencies influencing what speech is permitted on the platform. Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Ever thought of having your genome sequenced? 23andMe ... describes itself as a “genetics-led consumer healthcare and biotechnology company empowering a healthier future”. Its share price had fallen precipitously following a data breach in October 2023 that harvested the profile and ethnicity data of 6.9 million users – including name, profile photo, birth year, location, family surnames, grandparents’ birthplaces, ethnicity estimates and mitochondrial DNA. So on 24 March it filed for so-called Chapter 11 proceedings in a US bankruptcy court. At which point the proverbial ordure hit the fan because the bankruptcy proceedings involve 23andMe seeking authorisation from the court to commence “a process to sell substantially all of its assets”. And those assets are ... the genetic data of the company’s 15 million users. These assets are very attractive to many potential purchasers. The really important thing is that genetic data is permanent, unique and immutable. If your credit card is hacked, you can always get a new replacement. But you can’t get a new genome. When 23andMe’s data assets come up for sale the queue of likely buyers is going to be long, with health insurance and pharmaceutical giants at the front, followed by hedge-funds, private equity vultures and advertisers, with marketers bringing up the rear. Since these outfits are not charitable ventures, it’s a racing certainty that they have plans for exploiting those data assets.
Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In 2009, Pennsylvania’s Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as “remote learning tools” and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children’s digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student’s data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – they also have access to the way our children think, research and communicate. As schools normalize perpetual spying, today’s kids are learning that nothing they read or write electronically is private, that Big Brother is indeed watching them, and that negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In recent years, Israeli security officials have boasted of a “ChatGPT-like” arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas’s bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military’s policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.”
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” Khlaaf adds that even if humans are “double-checking” the work of AI, there's little reason to think they're capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
Meta's AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta's AI bots – on Instagram and Facebook – engage through text, selfies, and live voice conversations. The company signed multi-million dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, assuring they would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena's voice responded to a user identifying as a 14-year-old girl, saying, "I want you, but I need to know you're ready," before promising to "cherish your innocence" and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena's character with a 17-year-old, saying, "The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, 'John Cena, you're under arrest for statutory rape.'" According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, "We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios."
Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.
Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers’ exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of “connected cars,” with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker's recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car’s systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers – including Toyota, Nissan, and Subaru – are willing to disclose location data to the government.
Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people’s “sexual activities” when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
Data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement. Originally collected to facilitate health care, eligibility for services and the administration of public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades. Key to this data repurposing are public-private partnerships. The DHS and other agencies have turned to third-party contractors and data brokers to bypass direct restrictions. These intermediaries also consolidate data from social media, utility companies, supermarkets and many other sources, enabling enforcement agencies to construct detailed digital profiles of people without explicit consent or judicial oversight. Palantir, a private data firm and prominent federal contractor, supplies investigative platforms to agencies. These platforms aggregate data from various sources – driver’s license photos, social services, financial information, educational data – and present it in centralized dashboards designed for predictive policing and algorithmic profiling. Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance. And with growing dependence on private contractors, the boundaries between public governance and corporate surveillance continue to erode.
Note: For more along these lines, read our concise summaries of news articles on government corruption and the disappearance of privacy.
Have you heard of the idiom "You Can’t Lick a Badger Twice"? We haven't, either, because it doesn't exist — but Google's AI seemingly has. Netizens discovered this week that adding the word "meaning" to nonexistent folksy sayings causes the AI to cook up invented explanations for them. "The idiom 'you can't lick a badger twice' means you can't trick or deceive someone a second time after they've been tricked once," Google's AI Overviews feature happily suggests. "It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again." There are countless other examples. We found, for instance, that Google's AI also claimed that the made-up expression "the bicycle eats first" is a "humorous idiom" and a "playful way of saying that one should prioritize their nutrition, particularly carbohydrates, to support their cycling efforts." The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along. And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
The inaugural “AI Expo for National Competitiveness” [was] hosted by the Special Competitive Studies Project – better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir’s booth titled Civilian Harm Mitigation. It was led by two “privacy and civil liberties engineers” [who] described how Palantir’s Gaia map tool lets users “nominate targets of interest” for “the target nomination process”. It helps people choose which places get bombed. After [the engineers clicked] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. “Let’s say you’re operating in a place with a lot of civilian areas, like Gaza,” I asked the engineers afterward. “Does Palantir prevent you from ‘nominating a target’ in a civilian location?” Short answer, no.
Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.
Skydio, with more than $740m in venture capital funding and a valuation of about $2.5bn, makes drones for the military along with civilian organisations such as police forces and utility companies. The company moved away from the consumer market in 2020 and is now the largest US drone maker. Military uses touted on its website include gaining situational awareness on the battlefield and autonomously patrolling bases. Skydio is one of a number of new military technology unicorns – venture capital-backed startups valued at more than $1bn – many led by young men aiming to transform the US and its allies’ military capabilities with advanced technology, be it straight-up software or software-imbued hardware. The rise of startups doing defence tech is a “big trend”, says Cynthia Cook, a defence expert at the Center for Strategic and International Studies, a Washington-based thinktank. She likens it to a contagion – and the bug is going around. According to financial data company PitchBook, investors funnelled nearly $155bn globally into defence tech startups between 2021 and 2024, up from $58bn over the previous four years. The US has more than 1,000 venture capital-backed companies working on “smarter, faster and cheaper” defence, says Dale Swartz from consultancy McKinsey. The types of technologies the defence upstarts are working on are many and varied, though autonomy and AI feature heavily.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Palantir is profiting from a “revolving door” of executives and officials passing between the $264bn data intelligence company and high-level positions in Washington and Westminster, creating an influence network that has guided its extraordinary growth. The US group, whose billionaire chair Peter Thiel has been a key backer of Donald Trump, has enjoyed an astonishing stock price rally on the back of a strong rise in sales from government contracts and deals with the world’s largest corporations. Palantir has hired extensively from government agencies critical to its sales. Palantir has won more than $2.7bn in US contracts since 2009, including over $1.3bn in Pentagon contracts, according to federal records. In the UK, Palantir has been awarded more than £376mn in contracts, according to Tussell, a data provider. Thiel threw a celebration party for Trump’s inauguration at his DC home last month, attended by Vance as well as Silicon Valley leaders like Meta’s Mark Zuckerberg and OpenAI’s Sam Altman. After the US election in November, Trump began tapping Palantir executives for key government roles. At least six individuals have moved between Palantir and the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), an office that oversees the defence department’s adoption of data, analytics and AI. Meanwhile, [Palantir co-founder] Joe Lonsdale ... has played a central role in setting up and staffing Musk’s Department of Government Efficiency.
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the military and in the corporate world.
The US spy tech company Palantir has been in talks with the Ministry of Justice about using its technology to calculate prisoners’ “reoffending risks”, it has emerged. The prisons minister, James Timpson, received a letter three weeks after the general election from a Palantir executive who said the firm was one of the world’s leading software companies, and was working at the forefront of artificial intelligence (AI). Palantir had been in talks with the MoJ and the Prison Service about how “secure information sharing and data analytics can alleviate prison challenges and enable a granular understanding of reoffending and associated risks”, the executive added. The discussions ... are understood to have included proposals by Palantir to analyse prison capacity, and to use data held by the state to understand trends relating to reoffending. This would be based on aggregating data to identify and act on trends, factoring in drivers such as income or addiction problems. However, Amnesty International UK’s business and human rights director, Peter Frankental, has expressed concern. “It’s deeply worrying that Palantir is trying to seduce the new government into a so-called brave new world where public services may be run by unaccountable bots at the expense of our rights,” he said. “Ministers need to push back against any use of artificial intelligence in the criminal justice, prison and welfare systems that could lead to people being discriminated against.”
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the prison system and in the corporate world.
The Pentagon’s technologists and the leaders of the tech industry envision a future of an AI-enabled military force wielding swarms of autonomous weapons on land, at sea, and in the skies. Assuming the military does one day build a force with an uncrewed front rank, what happens if the robot army is defeated? Will the nation’s leaders surrender at that point, or do they then send in the humans? It is difficult to imagine the services will maintain parallel fleets of digital and analog weapons. The humans on both sides of a conflict will seek every advantage possible. When a weapon system is connected to the network, the means to remotely defeat it is already built into the design. The humans on the other side would be foolish not to unleash their cyber warriors to find any way to penetrate the network to disrupt cyber-physical systems. The United States may find that the future military force may not even cross the line of departure because it has been remotely disabled in a digital Pearl Harbor-style attack. According to the Government Accountability Office, the Department of Defense reported 12,077 cyber-attacks between 2015 and 2021. The incidents included unauthorized access to information systems, denial of service, and the installation of malware. Pentagon officials created a vulnerability disclosure program in 2016 to engage so-called ethical hackers to test the department’s systems. On March 15, 2024, the program registered its 50,000th discovered vulnerability.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Outer space is no longer just for global superpowers and large multinational corporations. Developing countries, start-ups, universities, and even high schools can now gain access to space. In 2024, a record 2,849 objects were launched into space. The commercial satellite industry saw global revenue rise to $285 billion in 2023, driven largely by the growth of SpaceX’s Starlink constellation. While the democratization of space is a positive development, it has introduced ... an ethical quandary that I call the “double dual-use dilemma.” The double dual-use dilemma refers to how private space companies themselves—not just their technologies—can become militarized and integrated into national security while operating commercially. Space companies fluidly shift between civilian and military roles. Their expertise in launch systems, satellites, and surveillance infrastructure allows them to serve both markets, often without clear regulatory oversight. Companies like Walchandnagar Industries in India, SpaceX in the United States, and the private Chinese firms that operate under a national strategy of the Chinese Communist Party called Military-Civil Fusion exemplify this trend, maintaining commercial identities while actively supporting defense programs. This blurring of roles, including the possibility that private space companies may develop their own weapons, raises concerns over unchecked militarization and calls for stronger oversight.
Note: For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
In July 2022, Morgan-Rose Hart, an aspiring vet with a passion for wildlife, died after she was found unresponsive at a mental health unit in Essex. Her death was one of four involving a hi-tech patient monitoring system called Oxevision, which has been rolled out in nearly half of mental health trusts across England. Oxevision’s system can measure a patient’s pulse rate and breathing without the need for a person to enter the room, or disturb a patient at night, as well as momentarily relaying CCTV footage when required. Oxehealth, the company behind Oxevision, has agreements with 25 NHS mental health trusts, according to its latest accounts, which reported revenues of about £4.7m in ... 2023. But it is claimed in some cases staff rely too heavily on the infra-red camera system to monitor vulnerable patients, instead of making physical checks. There are also concerns that the system – which can glow red from the corner of the room – may worsen the distress of patients in a mental health crisis who may have heightened sensitivity to surveillance or control. Sophina, who has experience of being monitored by Oxevision while a patient ... said: “I think it was something about the camera and it always being on, and it’s right above your bed. “It’s the first thing you see when you open your eyes, the last thing when you go to sleep. I was just in a constant state of hypervigilance. I was completely traumatised. I still felt too scared to sleep properly.”
Note: For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
In his most recent article for The Atlantic, [Journalist Derek] Thompson writes that the trend toward isolation has been driven by technology. Televisions ... "privatized our leisure" by keeping us indoors. More recently, Thompson says, smartphones came along to further silo us. In 2023, Surgeon General Vivek H. Murthy issued a report about America's "epidemic of loneliness and isolation." We pull out our phones and we're on TikTok or Instagram, or we're on Twitter. And while externally it looks like nothing is happening, internally the dopamine is flowing and we are just thinking, my God, we're feeling outrage, we're feeling excitement, we're feeling humor, we're feeling all sorts of things. We put our phone away and our dopamine levels fall and we feel kind of exhausted by that, which was supposed to be our leisure time. We are donating our dopamine to our phones rather than reserving our dopamine for our friends. I think that we are socially isolating ourselves from our neighbors, especially when our neighbors disagree with us. We're not used to talking to people outside of our family that we disagree with. Donald Trump has now won more than 200 million votes in the last three elections. If you don't understand a movement that has received 200 million votes in the last nine years, perhaps it's you who've made yourself a stranger in your own land, by not talking to one of the tens of millions of profound Donald Trump supporters who live in America and more to the point, within your neighborhood, to understand where their values come from. You don't have to agree with their politics. But getting along with and understanding people with whom we disagree is what a strong village is all about.
Note: Our latest Substack dives into the loneliness crisis exacerbated by the digital world and polarizing media narratives, along with inspiring solutions and remedies that remind us of what's possible. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
Tom was in the fourth grade when he first googled “sex” on his family computer. It took him to one of the big free porn sites. According to a study released by Australia’s eSafety Commissioner in September, Tom’s experience is similar to many young people: 36% of male respondents were first exposed to porn before hitting their teens, while 13 was the average age for all young people surveyed. Only 22%, however, admitted to intentionally seeking it out, with more accidentally stumbling upon X-rated material via social media or pop-ups on other parts of the internet. When Tom started having sex years later, he found it difficult to connect to his real-life partner. “Functionally, I almost couldn’t have sex with her. Like the real thing almost didn’t turn me on enough – the stimulation just wasn’t quite right. Even now if I go through a phase of watching porn, closing my eyes during sex is much worse. I sort of need that visual stimulation.” When Dr Samuel Shpall, a University of Sydney senior lecturer, teaches his course, Philosophy of Sex, he isn’t surprised to hear young men like Tom critique their own experience of porn. “The internet has completely changed not only the nature and accessibility of pornography, but also the nature and accessibility of ideas about pornography,” he says. “It’s not your desire moving your body, it’s what you’ve seen men do, and added to your sexual toolkit,” [Tom] says. “But it takes you further away from yourself in those sexual moments.”
Note: For more along these lines, read our concise summaries of news articles on health and Big Tech.
The owner of a data brokerage business recently ... bragged about the degree to which his industry could collect and analyze data on the habits of billions of people. Publicis CEO Arthur Sadoun said that ... his company [can] deliver “personalized messaging at scale” to some 91 percent of the internet’s adult web users. To deliver that kind of “personalized messaging” (i.e., advertising), Publicis must gather an extraordinary amount of information on the people it serves ads to. Lena Cohen, a technologist with the Electronic Frontier Foundation, said that data brokers like Publicis collect “as much information as they can” about web users. “The data broker industry is under-regulated, opaque, and dangerous, because as you saw in the video, brokers have detailed information on billions of people, but we know relatively little about them,” Cohen said. “You don’t know what information a data broker has on you, who they’re selling it to, and what the people who buy your data are doing with it. There’s a real power/knowledge asymmetry.” Even when state-level privacy regulations are passed (such as the California Consumer Privacy Act), those laws are often not given enough focus or resources to be enforced effectively. “Most government agencies don’t have the resources to enforce privacy laws at the scale that they’re being broken,” Cohen said. Cohen added that she felt online behavioral advertising—that is, advertising that is based on an individual web user’s specific browsing activity—should be illegal. Banning behavioral ads would “fundamentally change the financial incentive for online actors to constantly surveil” web users and share their data with brokers, Cohen said.
Note: Read more about the disturbing world of online behavioral ads, where the data isn't just used to sell products. It's often accessed by governments, law enforcement, intelligence agencies, and other actors—sometimes without warrants or oversight. This turns a commercial ad system into a covert surveillance network. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The Trump administration’s Federal Trade Commission has removed four years’ worth of business guidance blogs as of Tuesday morning, including important consumer protection information related to artificial intelligence and the agency’s landmark privacy lawsuits under former chair Lina Khan against companies like Amazon and Microsoft. More than 300 blogs were removed. On the FTC’s website, the page hosting all of the agency’s business-related blogs and guidance no longer includes any information published during former president Joe Biden’s administration. These blogs contained advice from the FTC on how big tech companies could avoid violating consumer protection laws. Removing blogs raises serious compliance concerns under the Federal Records Act and the Open Government Data Act, one former FTC official tells WIRED. During the Biden administration, FTC leadership would place “warning” labels above previous administrations’ public decisions it no longer agreed with, the source said, fearing that removal would violate the law. Since President Donald Trump designated Andrew Ferguson to replace Khan as FTC chair in January, the Republican regulator has vowed to leverage his authority to go after big tech companies. Unlike Khan, however, Ferguson’s criticisms center around the Republican party’s long-standing allegations that social media platforms, like Facebook and Instagram, censor conservative speech online.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and government corruption.
Alexander Balan was on a California beach when the idea for a new kind of drone came to him. This eureka moment led Balan to found Xdown, the company that’s building the P.S. Killer (PSK)—an autonomous kamikaze drone that works like a hand grenade and can be thrown like a football. The PSK is a “throw-and-forget” drone, Balan says, referencing the “fire-and-forget” missile that, once locked on to a target, can seek it on its own. Instead of depending on remote controls, the PSK will be operated by AI. Soldiers should be able to grab it, switch it on, and throw it—just like a football. The PSK can carry one or two 40 mm grenades commonly used in grenade launchers today. The grenades could be high-explosive dual purpose, designed to penetrate armor while also creating an explosive fragmentation effect against personnel. These grenades can also “airburst”—programmed to explode in the air above a target for maximum effect. Infantry, special operations, and counterterrorism units can easily store PSK drones in a field backpack and tote them around, taking one out to throw at any given time. They can also be packed by the dozen in cargo airplanes, which can fly over an area and drop swarms of them. Balan says that one Defense Department official told him “This is the most American munition I have ever seen.” The nonlethal version of the PSK [replaces] its warhead with a supply container so that it’s able to “deliver food, medical kits, or ammunition to frontline troops” (though given the 1.7-pound payload capacity, such packages would obviously be small).
Note: The US military is using Xbox controllers to operate weapons systems. The latest US Air Force recruitment tool is a video game that allows players to receive in-game medals and achievements for drone bombing Iraqis and Afghans. For more, read our concise summaries of news articles on warfare technologies and watch our latest video on the militarization of Big Tech.
A WIRED investigation into the inner workings of Google’s advertising ecosystem reveals that a wealth of sensitive information on Americans is being openly served up to some of the world’s largest brands despite the company’s own rules against it. Experts say that when combined with other data, this information could be used to identify and target specific individuals. Display & Video 360 (DV360), one of the dominant marketing platforms offered by the search giant, is offering companies globally the option of targeting devices in the United States based on lists of internet users believed to suffer from chronic illnesses and financial distress, among other categories of personal data that are ostensibly banned under Google’s public policies. Among a list of 33,000 audience segments obtained by the ICCL, WIRED identified several that aimed to identify people working in sensitive government jobs. One, for instance, targets US government employees who are considered “decision makers” working “specifically in the field of national security.” Another targets individuals who work at companies registered with the State Department to manufacture and export defense-related technologies, from missiles and space launch vehicles to cryptographic systems that house classified military and intelligence data. In the wrong hands, sensitive insights gained through [commercially available information] could facilitate blackmail, stalking, harassment, and public shaming.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm". Human Rights Watch has criticised the decision, telling the BBC that AI can "complicate accountability" for battlefield decisions that "may have life or death consequences." Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems. "For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever," said Anna Bacciarelli, senior AI researcher at Human Rights Watch. The "unilateral" decision also showed "why voluntary principles are not an adequate substitute for regulation and binding law," she added. In January, MPs argued that the conflict in Ukraine had shown the technology "offers serious military advantage on the battlefield." As AI becomes more widespread and sophisticated it would "change the way defence works, from the back office to the frontline," Emma Lewell-Buck MP ... wrote. Concern is greatest over the potential for AI-powered weapons capable of taking lethal action autonomously, with campaigners arguing controls are urgently needed. The Doomsday Clock - which symbolises how near humanity is to destruction - cited that concern in its latest assessment of the dangers mankind faces.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
On an episode of "The Joe Rogan Experience" released Friday, Meta CEO Mark Zuckerberg painted a picture of Biden administration officials berating Facebook staff during requests to remove certain content from the social media platform. "Basically, these people from the Biden administration would call up our team and, like, scream at them and curse," Zuckerberg told ... Joe Rogan. "It just got to this point where we were like, 'No, we're not gonna, we're not gonna take down things that are true. That's ridiculous.'" In a letter last year to Rep. Jim Jordan, the Republican chair of the House Judiciary Committee, Zuckerberg said that the White House “repeatedly pressured” Facebook to remove “certain COVID-19 content including humor and satire.” Zuckerberg said Facebook, which is owned by Meta, acquiesced at times, while suggesting that different decisions would be made going forward. On Rogan's show, Zuckerberg said the administration had asked Facebook to remove from its platform a meme that showed actor Leonardo DiCaprio pointing at a TV screen advertising a class action lawsuit for people who once took the Covid vaccine. "They're like, 'No, you have to take that down,'" Zuckerberg said, adding, "We said, 'No, we're not gonna. We're not gonna take down things that are, that are true.'" Zuckerberg ... also announced that his platforms — Facebook and Instagram — would relax rules related to political content.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Instagram has released a long-promised “reset” button to U.S. users that clears the algorithms it uses to recommend photos and videos to you. TikTok offers a reset button, too. And with a little bit more effort, you can also force YouTube to start fresh with how it recommends what videos to play next. It means you now have the power to say goodbye to endless recycled dance moves, polarizing Trump posts, extreme fitness challenges, dramatic pet voice-overs, fruit-cutting tutorials, face-altering filters or whatever else has taken over your feed like a zombie. I know some people love what their apps show them. But the reality is, none of us are really in charge of our social media experience anymore. Instead of just friends, family and the people you choose to follow, nowadays your feed or For You Page is filled with recommended content you never asked for, selected by artificial-intelligence algorithms. Their goal is to keep you hooked, often by showing you things you find outrageous or titillating — not joyful or calming. And we know from Meta whistleblower Frances Haugen and others that outrage algorithms can take a particular toll on young people. That’s one reason they’re offering a reset now: because they’re under pressure to give teens and families more control. So how does the algorithm go awry? It tries to get to know you by tracking every little thing you do. They’re even analyzing your “dwell time,” when you unconsciously scroll more slowly.
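As a toy illustration of that last point (not any platform's actual code), here is how a feed ranker might fold "dwell time" into its recommendations; the helpers record_dwell and rank_candidates are hypothetical names invented for this sketch. Every extra second you linger on a topic nudges similar content up your feed.

```python
# Toy sketch of a dwell-time signal in a recommender feed -- a hypothetical
# illustration, not any platform's actual implementation.
from collections import defaultdict

dwell_seconds: dict[str, float] = defaultdict(float)  # topic -> seconds lingered

def record_dwell(topic: str, seconds: float) -> None:
    """Log how long the user paused while scrolling past a post about `topic`."""
    dwell_seconds[topic] += seconds

def rank_candidates(candidates: list[str]) -> list[str]:
    """Order candidate posts so the most-lingered-on topics surface first."""
    return sorted(candidates, key=lambda topic: dwell_seconds[topic], reverse=True)

record_dwell("outrage-politics", 12.5)  # the user slowed down here
record_dwell("calm-nature", 1.2)        # and flicked right past this
print(rank_candidates(["calm-nature", "outrage-politics"]))
# ['outrage-politics', 'calm-nature'] -- the feed learns what holds your attention
```

A reset button of the kind described above effectively zeroes out accumulated signals like these, which is why the feed starts fresh afterward.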
Note: Read about the developer who got permanently banned from Meta for developing a tool called “Unfollow Everything” that lets users, well, unfollow everything and restart their feeds fresh. For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
In the nineteen-fifties, the Leo Burnett advertising agency helped invent Tony the Tiger, a cartoon mascot who was created to promote Frosted Flakes to children. In 1973, a trailblazing nutritionist named Jean Mayer warned the U.S. Senate Select Committee on Nutrition and Human Needs that ... junk foods could be described as empty calories. He questioned why it was legal to apply the term “cereals” to products that were more than fifty-per-cent sugar. Children’s-food advertisements, he claimed, were “nothing short of nutritional disasters.” Mayer’s warnings, however, did not lead to a string of state bans on junk food. Advertising continued to target children, and consumers of all ages were free to buy and consume any amount of Frosted Flakes. This health issue was ultimately seen as one that families should manage on their own. In recent years, experts have been warning that social media harms children. Frances Haugen, a former Facebook data scientist who became a whistle-blower, told a Senate subcommittee that her ex-employer’s “profit optimizing machine is generating self-harm and self-hate—especially for vulnerable groups, like teenage girls.” “It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents,” Vivek Murthy, whose second term as the U.S. Surgeon General ended on Monday, wrote in an opinion piece last year.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
The Defense Advanced Research Projects Agency, the Pentagon's top research arm, wants to find out if red blood cells could be modified in novel ways to protect troops. The DARPA program, called the Red Blood Cell Factory, is looking for researchers to study the insertion of "biologically active components" or "cargoes" in red blood cells. The hope is that modified cells would enhance certain biological systems, "thus allowing recipients, such as warfighters, to operate more effectively in dangerous or extreme environments." Red blood cells could act like a truck, carrying "cargo" or special protections, to all parts of the body, since they already circulate oxygen everywhere, [said] Christopher Bettinger, a professor of biomedical engineering overseeing the program. "What if we could add in additional cargo ... inside of that disc," Bettinger said, referring to the shape of red blood cells, "that could then confer these interesting benefits?" The research could impact the way troops battle diseases that reproduce in red blood cells, such as malaria, Bettinger hypothesized. "Imagine an alternative world where we have a warfighter that has a red blood cell that's accessorized with a compound that can sort of defeat malaria," Bettinger said. In 2019, the Army released a report called "Cyborg Soldier 2050," which laid out a vision of the future where troops would benefit from neural and optical enhancements, though the report acknowledged ethical and legal concerns.
Note: Read about the Pentagon's plans to use our brains as warfare, describing how the human body is war's next domain. Learn more about biotech dangers.
The U.S. Court of Appeals for the 6th Circuit ... threw out the Federal Communications Commission’s Net Neutrality rules, rejecting the agency’s authority to protect broadband consumers and handing phone and cable companies a major victory. The FCC moved in April 2024 to restore Net Neutrality and the essential consumer protections that rest under Title II of the Communications Act, which had been gutted under the first Trump administration. This was an all-too-rare example in Washington of a government agency doing what it’s supposed to do: Listening to the public and taking their side against the powerful companies that for far too long have captured ... D.C. And the phone and cable industry did what they always do when the FCC does anything good: They sued to overturn the rules. The court ruled against the FCC and deemed internet access to be an “information service” largely free from FCC oversight. This court’s warped decision scraps the common-sense rules the FCC restored in April. The result is that throughout most of the country, the most essential communications service of this century will be operating without any real government oversight, with no one to step in when companies rip you off or slow down your service. This ruling is far out of step with the views of the American public, who overwhelmingly support real Net Neutrality. They’re tired of paying too much, and they hate being spied on.
Note: Read about the communities building their own internet networks in the face of net neutrality rollbacks. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of. The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company. The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people. Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space. RTB is regularly exploited for government surveillance. The privacy and security dangers of RTB are inherent to its design. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day.
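To make the mechanics above concrete, here is a minimal sketch of the kind of "bid request" described in the article, loosely modeled on the ad industry's OpenRTB format; every value below is a hypothetical placeholder, and real requests typically carry many more attributes.

```python
# Minimal sketch of a real-time-bidding (RTB) "bid request," loosely
# modeled on the IAB's OpenRTB format. All values are hypothetical.
import json

bid_request = {
    "id": "auction-8f2c",                                  # one single ad auction
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot for sale
    "site": {"page": "https://example.com/article"},       # the content you're viewing
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",     # unique advertising ID
        "ip": "203.0.113.7",                               # IP address
        "geo": {"lat": 40.71, "lon": -74.00},              # location
        "os": "iOS",                                       # device details
    },
    "user": {"id": "broker-profile-123"},                  # key into a broker's profile of you
}

# This payload is broadcast to thousands of potential bidders, each of
# which can store the "bidstream data" whether or not it ever bids.
print(json.dumps(bid_request, indent=2))
```

Nothing in the auction protocol itself constrains what recipients do with that broadcast, which is why the dangers are described above as inherent to RTB's design.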
Note: Clearview AI scraped billions of faces off of social media without consent and at least 600 law enforcement agencies tapped into its database. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Meta CEO Mark Zuckerberg announced Tuesday that his social media platforms — which include Facebook and Instagram — will be getting rid of fact-checking partners and replacing them with a “community notes” model like that found on X. For a decade now, liberals have wrongly treated Trump’s rise as a problem of disinformation gone wild, and one that could be fixed with just enough fact-checking. Disinformation, though, has been a convenient narrative for a Democratic establishment unwilling to reckon with its own role in upholding anti-immigrant narratives, repeating baseless fearmongering over crime rates, and failing to support the multiracial working class. Long dead is the idea that social media platforms like X or Instagram are either trustworthy news publishers, sites for liberatory community building, or hubs for digital democracy. “The internet may once have been understood as a commons of information, but that was long ago,” wrote media theorist Rob Horning in a recent newsletter. “Now the main purpose of the internet is to place its users under surveillance, to make it so that no one does anything without generating data, and to assure that paywalls, rental fees, and other sorts of rents can be extracted for information that may have once seemed free but perhaps never wanted to be.” Social media platforms are huge corporations for which we, as users, produce data to be mined as a commodity to sell to advertisers — and government agencies. The CEOs of these corporations are craven and power-hungry.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
We published the piece on February 22, [2020], under the headline “Don’t Buy China’s Story: The Coronavirus May Have Leaked from a Lab.” It immediately went viral, its audience swelling for a few hours as readers liked and shared it over and over again. I had a data tracker on my screen that showed our web traffic, and I could see the green line for my story surging up and up. Then suddenly, for no reason, the green line dropped like a stone. No one was reading or sharing the piece. It was as though it had never existed at all. Seeing the story’s traffic plunge, I was stunned. How does a story that thousands of people are reading and sharing suddenly just disappear? Later, the [New York Post’s] digital editor gave me the answer: Facebook’s fact-checking team had flagged the piece as “false information.” I was seeing Big Tech censorship of the American media in real time, and it chilled me to my bones. What happened next was even more chilling. I found out that an “expert” who advised Facebook to censor the piece had a major conflict of interest. Professor Danielle E. Anderson had regularly worked with researchers at the Wuhan Institute of Virology ... and she told Facebook’s fact-checkers that the lab had “strict control and containment measures.” Facebook’s “fact-checkers” took her at her word. An “expert” had spoken, Wuhan’s lab was deemed secure, and the Post’s story was squashed in the interest of public safety. In 2021, in the wake of a lawsuit, Facebook admitted that its “fact checks” are just “opinion,” used by social media companies to police what we watch and read.
Note: Watch our brief newsletter recap video about censorship and the suppression of the COVID lab leak theory. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Meta CEO Mark Zuckerberg said Facebook has done “too much censorship” as he revealed the social network is scrapping fact-checking and restrictions on free speech as President-elect Donald Trump prepares to return to the White House. The 40-year-old tech tycoon — who dined with Trump at Mar-a-Lago the day before Thanksgiving and gave him a pair of Meta Ray Ban sunglasses, with Meta later donating $1 million to his inaugural fund — claimed on Tuesday that the dramatic about-face was a signal that the company is returning to an original focus on free speech. The stunning reversal will include moving Meta’s content moderation team from deep-blue California to right-leaning Texas in order to insulate the group from cultural bias. “As we work to promote free expression, I think that will help build trust to do this work in places where there’s less concern about the bias of our team,” the Meta boss said. Facebook will do away with “restrictions on topics like immigration and gender that are just out of touch with mainstream discourse,” Zuckerberg said. “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas,” he said, adding: “It’s gone too far.” In late July, Facebook acknowledged that it censored the image of President-elect Donald Trump raising his fist in the immediate aftermath of the assassination attempt in Pennsylvania.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Mark Zuckerberg has announced he is scrapping fact-checks on Facebook, claiming the labels intended to warn against fake news have “destroyed more trust than they have created”. Facebook’s fact-checkers have helped debunk hundreds of fake news stories and false rumours – however, there have been several high-profile missteps. In 2020, Facebook and Twitter took action to halt the spread of an article by the New York Post based on leaked emails from a laptop belonging to Joe Biden’s son, Hunter Biden. As coronavirus spread around the world, suggestions that the virus could have been man-made were suppressed by Facebook. An opinion column in the New York Post with the headline: “Don’t buy China’s story: The coronavirus may have leaked from a lab” was labelled as “false information”. In 2021, Facebook lifted its ban on claims the virus could have been “man-made”. It was months later that further doubts emerged over the origins of coronavirus. In 2021, Facebook ... was accused of wrongly fact-checking a story about Pfizer’s Covid-19 vaccine. A British Medical Journal (BMJ) report, based on whistleblowing, alleged poor clinical practices at a contractor carrying out research for Pfizer. However, Facebook’s fact-checkers added a label arguing the story was “missing context” and could “mislead people”. Furious debates raged over the effectiveness of masks in preventing the spread of Covid-19. Facebook’s fact-checkers were accused of overzealously clamping down on articles that questioned the science behind [mask] mandates.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
More than 300 million children across the globe are victims of online sexual exploitation and abuse each year, research suggests. In what is believed to be the first global estimate of the scale of the crisis, researchers at the University of Edinburgh found that 12.6% of the world’s children have been victims of nonconsensual taking, sharing and exposure to sexual images and video in the past year, equivalent to about 302 million young people. A similar proportion – 12.5% – had been subject to online solicitation, such as unwanted sexual talk that can include sexting, sexual questions and sexual act requests by adults or other youths. Offences can also take the form of “sextortion”, where predators demand money from victims to keep images private, and abuse of AI deepfake technology. The US is a particularly high-risk area. The university’s Childlight initiative – which aims to understand the prevalence of child abuse – includes a new global index, which found that one in nine men in the US (equivalent to almost 14 million) admitted online offending against children at some point. Surveys found 7% of British men, equivalent to 1.8 million, admitted the same. The research also found many men admitted they would seek to commit physical sexual offences against children if they thought it would be kept secret. Child abuse material is so prevalent that files are on average reported to watchdog and policing organisations once every second.
Note: New Mexico's attorney general has called Meta the world's "single largest marketplace for paedophiles." For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities. Three related considerations have combined to shape the hype surrounding military AI. First [is] the emergence of a new military industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is both cause and effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed. Senior officers do not trust AI-enhanced capabilities. To the extent they do demonstrate increased levels of trust in machines, their trust is moderated by how machines are used.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
Within Meta’s Counterterrorism and Dangerous Organizations team, [Hannah] Byrne helped craft one of the most powerful and secretive censorship policies in internet history. She and her team helped draft the rulebook that applies to the world’s most diabolical people and groups: the Ku Klux Klan, cartels, and terrorists. Meta bans these so-called Dangerous Organizations and Individuals, or DOI, from using its platforms, but further prohibits its billions of users from engaging in “glorification,” “support,” or “representation” of anyone on the list. As an armed white supremacist group with credible allegations of human rights violations hanging over it, Azov [Battalion] had landed on the Dangerous Organizations list. Following the Russian invasion of Ukraine, Meta not only moved swiftly to allow users to cheer on the Azov Battalion, but also loosened its rules around incitement, hate speech, and gory imagery so Ukrainian civilians could share images of the suffering around them. Within weeks, Byrne found the moral universe around her inverted: The heavily armed hate group sanctioned by Congress since 2018 were now freedom fighters resisting occupation, not terroristic racists. It seems most galling for Byrne to compare how malleable Meta’s Dangerous Organizations policy was for Ukraine, and how draconian it has felt for those protesting the war in Gaza. “I know the U.S. government is in constant contact with Facebook employees,” she said. Meta’s censorship systems are “basically an extension of the government,” Byrne said. “You want military, Department of State, CIA people enforcing free speech? That is what is concerning.”
Note: Read more about Facebook's secret blacklist, and how Facebook censored reporting of war crimes in Gaza but allowed praise for the neo-Nazi Azov Brigade on its platform. Going deeper, click here if you want to know the real history behind the Russia-Ukraine war. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
HouseFresh.com ... was started in 2020 by Gisele Navarro and her husband, based on a decade of experience writing about indoor air quality products. They filled their basement with purifiers, running rigorous science-based tests ... to help consumers sort through marketing hype. HouseFresh is an example of what has been a flourishing industry of independent publishers producing exactly the sort of original content Google says it wants to promote. The website grew into a thriving business with 15 full-time employees. In September 2023, Google made one in a series of major updates to the algorithm that runs its search engine. The second Google algorithm update came in March, and it was even more punishing. "It decimated us," Navarro says. "Suddenly the search terms that used to bring up HouseFresh were sending people to big lifestyle magazines that clearly don't even test the products." HouseFresh's thousands of daily visitors dwindled to just hundreds. Over the last few weeks, HouseFresh had to lay off most of its team. Results for popular search terms are crowded with websites that contain very little useful information, but tonnes of ads and links to retailers that earn publishers a share of profits. "Google's just committing war on publisher websites," [search engine expert Lily] Ray says. "It's almost as if Google designed an algorithm update to specifically go after small bloggers. I've talked to so many people who've just had everything wiped out." A number of website owners and search experts ... said there's been a general shift in Google results towards websites with big established brands, and away from small and independent sites, that seems totally disconnected from the quality of the content.
Note: These changes to Google search have significantly reduced traffic to WantToKnow.info and other independent media outlets. Read more about Google's bias machine, and how Google relies on user reactions rather than actual content to shape search results. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
“Anonymity is a shield from the tyranny of the majority,” wrote Supreme Court Justice John Paul Stevens in a 1995 ruling affirming Americans’ constitutional right to engage in anonymous political speech. That shield has weakened in recent years due to advances in the surveillance technology available to law enforcement. Everything from social media posts, to metadata about phone calls, to the purchase information collected by data brokers, to location data showing every step taken, is available to law enforcement — often without a warrant. Avoiding all of this tracking would require such extrication from modern social life that it would be virtually impossible for most people. International Mobile Subscriber Identity (IMSI) catchers, or Stingrays, impersonate cell phone towers to collect the unique ID of a cell phone’s SIM card. Geofence warrants, also known as reverse location warrants ... let law enforcement request location data from apps on your phone or tech companies. Data brokers are companies that assemble information about people from a variety of usually public sources. Tons of websites and apps that everyday people use collect information on them, and this information is often sold to third parties who can aggregate or piece together someone’s profile across the sites that are tracking them. Companies like Fog Data Science, LexisNexis, Precisely and Acxiom not only possess data on billions of people, they also ... have information about someone’s political preferences as well as demographic information. Surveillance of social media accounts allows police to gather vast amounts of information about how protests are organized ... frequently utilizing networks of fake accounts. One firm advertised the ability to help police identify “activists and disruptors” at protests.
Note: For more along these lines, explore concise summaries of news articles on police corruption and the erosion of civil liberties from reliable major media sources.
Facebook’s inscrutable feed algorithm, which is supposed to calculate which content is most likely to appeal to me and then send it my way ... feels like an obstacle to how I’d like to connect with my friends. British software developer Louis Barclay developed software ... known as an extension, which can be installed in a Chrome web browser. Christened Unfollow Everything, it would automate the process of unfollowing each of my 1,800 friends, a task that, done manually, would take hours. The result is that I would be able to experience Facebook as it once was, when it contained profiles of my friends, but without the endless updates, photos, videos and the like that Facebook’s algorithm generates. If tools like Unfollow Everything were allowed to flourish, and we could have better control over what we see on social media, these tools might create a more civic-minded internet. Unfortunately, Mr. Barclay was forced by Facebook to remove the software. Large social media platforms appear to be increasingly resistant to third-party tools that give users more command over their experiences. After talking with Mr. Barclay, I decided to develop a new version of Unfollow Everything. I — and the lawyers at the Knight First Amendment Institute at Columbia — asked a federal court in California last week to rule on whether users should have a right to use tools like Unfollow Everything that give them increased power over how they use social networks, particularly over algorithms that have been engineered to keep users scrolling on their sites.
Note: The above was written by Ethan Zuckerman, associate professor of public policy and director of the UMass Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst. For more along these lines, explore concise summaries of news articles on Big Tech from reliable major media sources.
Something went suddenly and horribly wrong for adolescents in the early 2010s. Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent. Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. There’s an important backstory, beginning ... when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck. Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry.
Note: The author of this article is Jonathan Haidt, a social psychologist and ethics professor who's been on the frontlines investigating the youth mental health crisis. He is the co-founder of LetGrow.org, an organization that provides inspiring solutions and ideas to help families and schools support children's well-being and foster childhood independence. For more along these lines, explore concise summaries of news articles on mental health.
Beheadings, mass killings, child abuse, hate speech – all of it ends up in the inboxes of a global army of content moderators. You don’t often see or hear from them – but these are the people whose job it is to review and then, when necessary, delete content that either gets reported by other users, or is automatically flagged by tech tools. Moderators are often employed by third-party companies, but they work on content posted directly on to the big social networks including Instagram, TikTok and Facebook. “If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator. “But in the background, I personally was moderating, in the hundreds, horrific and traumatising videos. “I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.” In 2020, Meta, then known as Facebook, agreed to pay a settlement of $52m (£40m) to moderators who had developed mental health issues. The legal action was initiated by a former moderator [who] described moderators as the “keepers of souls”, because of the amount of footage they see containing the final moments of people’s lives. The ex-moderators I spoke to all used the word “trauma” in describing the impact the work had on them. One ... said he found it difficult to interact with his wife and children because of the child abuse he had witnessed. What came across, very powerfully, was the immense pride the moderators had in the roles they had played in protecting the world from online harm.
Note: Read more about the disturbing world of content moderation. For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Ask "is the British tax system fair", and Google cites a quote ... arguing that indeed it is. Ask "is the British tax system unfair", and Google's Featured Snippet explains how UK taxes benefit the rich and promote inequality. "What Google has done is they've pulled bits out of the text based on what people are searching for and fed them what they want to read," [Digital marketing director at Dragon Metrics Sarah] Presch says. "It's one big bias machine." The vast majority of internet traffic begins with a Google Search, and people rarely click on anything beyond the first five links. The system that orders the links on Google Search has colossal power over our experience of the world. You might choose to engage with information that keeps you trapped in your filter bubble, "but there's only a certain bouquet of messages that are put in front of you to choose from in the first place", says [professor] Silvia Knobloch-Westerwick. A recent US anti-trust case against Google uncovered internal company documents where employees discuss some of the techniques the search engine uses to answer your questions. "We do not understand documents – we fake it," an engineer wrote in a slideshow used during a 2016 presentation. "A billion times a day, people ask us to find documents relevant to a query… We hardly look at documents. We look at people. If a document gets a positive reaction, we figure it is good. If the reaction is negative, it is probably bad. Grossly simplified, this is the source of Google's magic. That is how we serve the next person, keep the induction rolling, and sustain the illusion that we understand." In other words, Google watches to see what people click on when they enter a given search term. When people seem satisfied by a certain type of information, it's more likely that Google will promote that kind of search result for similar queries in the future.
Note: For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technologies like automated license plate readers, drones, facial recognition, and social media monitoring have added a uniquely dangerous element to the physical intimidation that has long accompanied law enforcement surveillance. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement agencies across the country, including within the federal government, have partnered with companies like Clearview AI, which scraped billions of images from the internet for its facial recognition database. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
Air fryers that gather your personal data and audio speakers “stuffed with trackers” are among examples of smart devices engaged in “excessive” surveillance, according to the consumer group Which? The organisation tested three air fryers ... each of which requested permission to record audio on the user’s phone through a connected app. Which? found the app provided by the company Xiaomi connected to trackers for Facebook and a TikTok ad network. The Xiaomi fryer and another by Aigostar sent people’s personal data to servers in China. Its tests also examined smartwatches that it said required “risky” phone permissions – in other words giving invasive access to the consumer’s phone through location tracking, audio recording and accessing stored files. Which? found digital speakers that were preloaded with trackers for Facebook, Google and a digital marketing company called Urbanairship. The Information Commissioner’s Office (ICO) said the latest consumer tests “show that many products not only fail to meet our expectations for data protection but also consumer expectations”. A growing number of devices in homes are connected to the internet, including camera-enabled doorbells and smart TVs. Last Black Friday, the ICO encouraged consumers to check if smart products they planned to buy had a physical switch to prevent the gathering of voice data.
Note: A 2015 New York Times article warned that smart devices were a "train wreck in privacy and security." For more along these lines, read about how automakers collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people’s “sexual activities” when drivers pair their smartphones to their vehicles.
The past decade has seen a rapid expansion of the commercial space industry. In a 2023 white paper, a group of concerned astronomers warned against repeating Earthly “colonial practices” in outer space. Some of these colonial practices might include the enclosure of land, the exploitation of environmental resources and the destruction of landscapes – in the name of ideals such as destiny, civilization and the salvation of humanity. People of Bawaka Country in northern Australia have told the space industry that their ancestors guide human life from their home in the galaxy, and that this relationship is increasingly threatened by large orbiting satellite networks. Similarly, Inuit elders say their ancestors live on celestial bodies. Navajo leadership has asked NASA not to land human remains on the Moon. Kanaka elders have insisted that no more telescopes be built on Mauna Kea, which Native Hawaiians consider to be ancestral and sacred. These Indigenous positions stand in stark contrast with many in the industry’s insistence that space is empty and inanimate. In 1967, a slew of nations, including the U.S., U.K. and USSR, signed the Outer Space Treaty. This treaty declared, among other things, that no nation can own a planetary body or part of one. The nations that signed the Outer Space Treaty were effectively saying, “Let’s not battle each other for territory and resources again. Let’s do outer space differently.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
A little-known advertising cartel that controls 90% of global marketing spending supported efforts to defund news outlets and platforms including The Post — at points urging members to use a blacklist compiled by a shadowy government-funded group that purports to guard news consumers against “misinformation.” The World Federation of Advertisers (WFA), which reps 150 of the world’s top companies — including ExxonMobil, GM, General Mills, McDonald’s, Visa, SC Johnson and Walmart — and 60 ad associations sought to squelch online free speech through its Global Alliance for Responsible Media (GARM) initiative, the House Judiciary Committee found. “The extent to which GARM has organized its trade association and coordinates actions that rob consumers of choices is likely illegal under the antitrust laws and threatens fundamental American freedoms,” the Republican-led panel said in its 39-page report. The new report establishes links between the WFA’s “responsible media” initiative and the taxpayer-funded Global Disinformation Index (GDI), a London-based group that in 2022 unveiled an ad blacklist of 10 news outlets whose opinion sections tilted conservative or libertarian, including The Post, RealClearPolitics and Reason magazine. Internal communications suggest that rather than using an objective rubric to guide decisions, GARM members simply monitored disfavored outlets closely to be able to find justification to demonetize them.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and media manipulation from reliable sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
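As a toy illustration of how algorithmic personalized pricing can work, here is a Python sketch with invented signals and hard-coded weights standing in for the trained models a real system would use; nothing here reflects any named company's actual rules.

```python
def personalized_price(base_price, profile):
    # Adjust a single base price using signals inferred from a shopper's
    # surveillance data. The weights are invented for illustration only.
    price = base_price
    if profile.get("browsed_luxury_brands"):
        price *= 1.10   # inferred higher willingness to pay
    if profile.get("visited_price_comparison_sites"):
        price *= 0.95   # likely to shop around, so offer a discount
    if profile.get("estimated_risk_rating") == "high":
        price *= 1.08   # an automated, possibly discriminatory, risk markup
    return round(price, 2)

print(personalized_price(100.00, {"browsed_luxury_brands": True}))           # 110.0
print(personalized_price(100.00, {"visited_price_comparison_sites": True}))  # 95.0
print(personalized_price(100.00, {"estimated_risk_rating": "high"}))         # 108.0
```

The mortgage example above is the last branch in miniature: when the automated risk estimate is itself skewed, the markup lands hardest on those least able to pay it.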
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Meta CEO Mark Zuckerberg told the House Judiciary Committee that his company's moderators faced significant pressure from the federal government to censor content on Facebook and Instagram—and that he regretted caving to it. In a letter to Rep. Jim Jordan (R–Ohio), the committee's chairman, Zuckerberg explained that the pressure also applied to "humor and satire" and that in the future, Meta would not blindly obey the bureaucrats. The letter refers specifically to the widespread suppression of contrarian viewpoints relating to COVID-19. Email exchanges between Facebook moderators and CDC officials reveal that the government took a heavy hand in suppressing content. Health officials did not merely vet posts for accuracy but also made pseudo-scientific determinations about whether certain opinions could cause social "harm" by undermining the effort to encourage all Americans to get vaccinated. But COVID-19 content was not the only kind of speech the government went after. Zuckerberg also explains that the FBI warned him about Russian attempts to sow chaos on social media by releasing a fake story about the Biden family just before the 2020 election. This warning motivated Facebook to take action against the New York Post's Hunter Biden laptop story when it was published in October 2020. In his letter, Zuckerberg states that this was a mistake and that moving forward, Facebook will never again demote stories pending approval from fact-checkers.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable major media sources.
In almost every country on Earth, the digital infrastructure upon which the modern economy was built is owned and controlled by a small handful of monopolies, based largely in Silicon Valley. This system is looking more and more like neo-feudalism. Just as the feudal lords of medieval Europe owned all of the land ... the US Big Tech monopolies of the 21st century act as corporate feudal lords, controlling all of the digital land upon which the digital economy is based. A monopolist in the 20th century would have loved to control a country’s supply of, say, refrigerators. But the Big Tech monopolists of the 21st century go a step further and control all of the digital infrastructure needed to buy those fridges — from the internet itself to the software, cloud hosting, apps, payment systems, and even the delivery service. These corporate neo-feudal lords don’t just dominate a single market or a few related ones; they control the marketplace. They can create and destroy entire markets. Their monopolistic control extends well beyond just one country, to almost the entire world. If a competitor does manage to create a product, US Big Tech monopolies can make it disappear. Imagine you are an entrepreneur. You develop a product, make a website, and offer to sell it online. But then you search for it on Google, and it does not show up. Instead, Google promotes another, similar product in the search results. This is not a hypothetical; this already happens.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that’s being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to give a third party data broker money to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Data breaches are a seemingly endless scourge with no simple answer, but the breach in recent months of the background-check service National Public Data illustrates just how dangerous and intractable they have become. In April, a hacker known as USDoD, who has a reputation for selling stolen information, began hawking a trove of data on cybercriminal forums for $3.5 million that they said included 2.9 billion records and impacted “the entire population of USA, CA and UK.” As the weeks went on, samples of the data started cropping up as other actors and legitimate researchers worked to understand its source and validate the information. By early June, it was clear that at least some of the data was legitimate and contained information like names, emails, and physical addresses in various combinations. When information is stolen from a single source, like Target customer data being stolen from Target, it's relatively straightforward to establish that source. But when information is stolen from a data broker and the company doesn't come forward about the incident, it's much more complicated to determine whether the information is legitimate and where it came from. Typically, people whose data is compromised in a breach—the true victims—aren’t even aware that National Public Data held their information in the first place. Every trove of information that attackers can get their hands on ultimately fuels scamming, cybercrime, and espionage.
Note: Clearview AI scraped billions of faces off of social media without consent. At least 600 law enforcement agencies were tapping into its database of 3 billion facial images. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers.
A US federal appeals court ruled last week that so-called geofence warrants violate the Fourth Amendment’s protections against unreasonable searches and seizures. Geofence warrants allow police to demand that companies such as Google turn over a list of every device that appeared at a certain location at a certain time. The US Fifth Circuit Court of Appeals ruled on August 9 that geofence warrants are “categorically prohibited by the Fourth Amendment” because “they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search.” In other words, they’re the unconstitutional fishing expedition that privacy and civil liberties advocates have long asserted they are. Google ... the most frequent target of geofence warrants, vowed late last year that it was changing how it stores location data in such a way that geofence warrants may no longer return the data they once did. Legally, however, the issue is far from settled: The Fifth Circuit decision applies only to law enforcement activity in Louisiana, Mississippi, and Texas. Plus, because of weak US privacy laws, police can simply purchase the data and skip the pesky warrant process altogether. As for the appellants in the case heard by the Fifth Circuit, well, they’re no better off: The court found that the police used the geofence warrant in “good faith” when it was issued in 2018, so they can still use the evidence they obtained.
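To see why the court treated these warrants as a general search, consider a minimal Python sketch of the kind of query a provider is asked to run, using invented device records; real systems are vastly larger, but the shape of the request is the same.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, start, end):
    # Note what is absent: no named suspect, no specific device. The
    # warrant supplies only "a temporal and geographic location", and
    # the query returns every device that happened to be inside it.
    return [r.device_id for r in records
            if lat_min <= r.lat <= lat_max
            and lon_min <= r.lon <= lon_max
            and start <= r.timestamp <= end]

records = [
    LocationRecord("device-a", 29.7604, -95.3698, datetime(2018, 3, 1, 14, 5)),
    LocationRecord("device-b", 29.7610, -95.3701, datetime(2018, 3, 1, 14, 7)),
    LocationRecord("device-c", 30.2672, -97.7431, datetime(2018, 3, 1, 14, 6)),
]
print(geofence_query(records, 29.75, 29.77, -95.38, -95.36,
                     datetime(2018, 3, 1, 14, 0), datetime(2018, 3, 1, 15, 0)))
# -> ['device-a', 'device-b']: bystanders are swept in along with any suspect
```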
Note: Read more about the rise of geofence warrants and its threat to privacy rights. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback and lawsuits are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts — any police dataset imaginable — for a match. It’s taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. “We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive,” said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). “These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging.” Peregrine technology can also enable “predictive policing,” long criticized for unfairly targeting poorer, non-white neighborhoods.
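A toy Python sketch of the kind of unified lookup described above, with invented dataset names and records: the power, and the privacy risk, comes from fanning one query out across sources that investigators previously had to search one silo at a time.

```python
# Invented, miniature stand-ins for datasets that once lived in
# separate, older systems.
datasets = {
    "court_records":       [{"name": "Jane Doe", "text": "case 41-B hearing set"}],
    "arrest_reports":      [{"name": "Jane Doe", "text": "arrested 2021-06-01"}],
    "bodycam_transcripts": [{"name": "John Roe", "text": "routine traffic stop"}],
}

def search(query):
    # One query fans out across every source at once.
    q = query.lower()
    hits = []
    for source, records in datasets.items():
        for record in records:
            if q in record["name"].lower() or q in record["text"].lower():
                hits.append((source, record))
    return hits

for source, record in search("jane doe"):
    print(source, "->", record)
```

Nothing in the sketch is exotic; the point privacy advocates raise is that stitching ordinary records together this cheaply and quickly is exactly what makes indiscriminate surveillance easy to scale.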
Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child’s face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children’s school as part of a campaign called “Share to Protect.” With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as “age assurance” has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children — and everyone else — to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'” Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn’t the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn’t dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
Note: Many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data, as well as running afoul of the state's deceptive trade practices act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued how it had previously used face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights," Paxton said.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Google announced this week that it would begin the international rollout of its new artificial intelligence-powered search feature, called AI Overviews. When billions of people search a range of topics from news to recipes to general knowledge questions, what they see first will now be an AI-generated summary. While Google was once mostly a portal to reach other parts of the internet, it has spent years consolidating content and services to make itself into the web’s primary destination. Weather, flights, sports scores, stock prices, language translation, showtimes and a host of other information have gradually been incorporated into Google’s search page over the past 15 or so years. Finding that information no longer requires clicking through to another website. With AI Overviews, the rest of the internet may meet the same fate. Google has tried to assuage publishers’ fears that users will no longer see their links or click through to their sites. Research firm Gartner predicts a 25% drop in traffic to websites from search engines by 2026 – a decrease that would be disastrous for most outlets and creators. What’s left for publishers is largely direct visits to their own home pages and Google referrals. If AI Overviews take away a significant portion of the latter, it could mean less original reporting, fewer creators publishing cooking blogs or how-to guides, and a less diverse range of information sources.
Note: WantToKnow.info traffic from Google search has fallen sharply as Google has stopped indexing most websites. These new AI summaries make independent media sites even harder to find. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
The bedrock of Google’s empire sustained a major blow on Monday after a judge found its search and ad businesses violated antitrust law. The ruling, made by the District of Columbia's Judge Amit Mehta, sided with the US Justice Department and a group of states in a set of cases alleging the tech giant abused its dominance in online search. "Google is a monopolist, and it has acted as one to maintain its monopoly," Mehta wrote in his ruling. The findings, if upheld, could outlaw contracts that for years all but assured Google's dominance. Judge Mehta ruled that Google violated antitrust law in the markets for "general search" and "general search text" ads, which are the ads that appear at the top of the search results page. Apple, Amazon, and Meta are defending themselves against a series of other federal- and state-led antitrust suits, some of which make similar claims. Google’s disputed behavior revolved around contracts it entered into with manufacturers of computer devices and mobile devices, as well as with browser services, browser developers, and wireless carriers. These contracts, the government claimed, violated antitrust laws because they made Google the mandatory default search provider. Companies that entered into those exclusive contracts have included Apple, LG, Samsung, AT&T, T-Mobile, Verizon, and Mozilla. Those deals are why smartphones ... come preloaded with Google's various apps.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business — like having happy customers and stable products — and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead to make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use “to counter misinformation online” and “advance state-of-the-art misinformation research.” House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million ... to develop AI tools that could censor information far faster and at a much greater scale than human beings. The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the “responsibility of censorship” on social media. The release of [an] interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action. House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable. “AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ’disinformation’ bureaucrats and researchers,” House investigators wrote in the interim report.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google’s mission – “to organise the world’s information and make it universally accessible” – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world’s information. That mission has been abandoned. “Google is no longer trying to index the entire web,” writes Schmalbach. “In fact, it’s become extremely selective, refusing to index most content. This isn’t about content creators failing to meet some arbitrary standard of quality. Rather, it’s a fundamental change in how Google approaches its role as a search engine.” The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has “brand recognition”. “They might index content they perceive as truly unique,” says Schmalbach. “But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won’t index it. This can happen even if you’re a well-respected writer with a substantial readership.”
Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored “related follow-up questions.” OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers’ websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, is a shakedown. AI is scraping publishers’ content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It’s all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Amazon has been accused of using “intrusive algorithms” as part of a sweeping surveillance program to monitor and deter union organizing activities. Workers at a warehouse run by the technology giant on the outskirts of St Louis, Missouri, are today filing an unfair labor practice charge with the National Labor Relations Board (NLRB). A copy of the charge ... alleges that Amazon has “maintained intrusive algorithms and other workplace controls and surveillance which interfere with Section 7 rights of employees to engage in protected concerted activity”. There have been several reports of Amazon surveilling workers over union organizing and activism, including human resources monitoring employee message boards, software to track union threats and job listings for intelligence analysts to monitor “labor organizing threats”. Artificial intelligence can be used by warehouse employers like Amazon “to essentially have 24/7 unregulated and algorithmically processed and recorded video, and often audio data of what their workers are doing all the time”, said Seema N Patel ... at Stanford Law School. “It enables employers to control, record, monitor and use that data to discipline hundreds of thousands of workers in a way that no human manager or group of managers could even do.” The National Labor Relations Board issued a memo in 2022 announcing its intent to protect workers from AI-enabled monitoring of labor organizing activities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. Just seven companies, 1.4 percent of the index, were worth more than $16 trillion combined, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.
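The arithmetic behind that concentration claim checks out from the figures quoted above. A minimal sketch, assuming the index has roughly 500 constituents (the only number not stated in the article):

```python
# Back-of-the-envelope check of the concentration figures quoted above.
# Assumption: the S&P 500 has roughly 500 constituent companies.
index_value_tn = 47.0   # total index market value, in trillions of dollars
seven_value_tn = 16.0   # combined Magnificent Seven value, in trillions
constituents = 500      # approximate number of companies in the index

share_of_companies = 7 / constituents * 100
share_of_value = seven_value_tn / index_value_tn * 100

print(f"{share_of_companies:.1f}% of the companies")      # -> 1.4%
print(f"hold {share_of_value:.0f}% of the index's value")  # -> 34%
```

In other words, seven firms account for roughly a third of the benchmark's entire market value.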
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza. Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare while making it even more evident how unregulated the nascent field is. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world. Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. Many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly. The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
After government officials like former White House advisers Rob Flaherty and Andy Slavitt repeatedly harangued platforms such as Facebook to censor Americans who contested the government’s narrative on COVID-19 vaccines, Missouri and Louisiana sued. They claimed that the practice violates the First Amendment. Following years of litigation, the Supreme Court threw cold water on their efforts, ruling in Murthy v. Missouri that states and the individual plaintiffs lacked standing to sue the government for its actions. The government often disguised its censorship requests by coordinating with ostensibly “private” civil society groups to pressure tech companies to remove or shadow ban targeted content. According to the U.S. House Weaponization Committee’s November 2023 interim report, the Cybersecurity and Infrastructure Security Agency requested that the now-defunct Stanford Internet Observatory create a public-private partnership to counter election “misinformation” in 2020. This consortium of government and private entities took the form of the Election Integrity Partnership (EIP). EIP’s “private” civil society partners then forwarded the flagged content to Big Tech platforms like Facebook, YouTube, TikTok and Twitter. These “private” groups ... receive millions of taxpayer dollars from the National Science Foundation, the State Department and the U.S. Department of Justice. Legislation like the COLLUDE Act would ... clarify that Section 230 does not apply when platforms censor legal speech “as a result of a communication” from a “governmental entity” or from a non-profit “acting at the request or behest of a governmental entity.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable sources.
OnlyFans makes reassuring promises to the public: It’s strictly adults-only, with sophisticated measures to monitor every user, vet all content and swiftly remove and report any child sexual abuse material. Reuters documented 30 complaints in U.S. police and court records that child sexual abuse material appeared on the site between December 2019 and June 2024. The case files examined by the news organization cited more than 200 explicit videos and images of kids, including some adults having oral sex with toddlers. In one case, multiple videos of a minor remained on OnlyFans for more than a year, according to a child exploitation investigator who found them while assisting Reuters. OnlyFans “presents itself as a platform that provides unrivaled access to influencers, celebrities and models,” said Elly Hanson, a clinical psychologist and researcher who focuses on preventing sexual abuse and reducing its impact. “This is an attractive mix to many teens, who are pulled into its world of commodified sex, unprepared for what this entails.” In 2021 ... 102 Republican and Democratic members of the U.S. House of Representatives called on the Justice Department to investigate child sexual abuse on OnlyFans. The Justice Department told the lawmakers three months later that it couldn’t confirm or deny it was investigating OnlyFans. Contacted recently, a department spokesperson declined to comment further.
Note: For more along these lines, see concise summaries of deeply revealing news articles on sexual abuse scandals from reliable major media sources.
Jonathan Haidt is a man with a mission ... to alert us to the harms that social media and modern parenting are doing to our children. In his latest book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness ... he writes of a “tidal wave” of increases in mental illness and distress beginning around 2012. Young adolescent girls are hit hardest, but boys are in pain, too. He sees two factors that have caused this. The first is the decline of play-based childhood caused by overanxious parenting, which allows children fewer opportunities for unsupervised play and restricts their movement. The second factor is the ubiquity of smartphones and the social media apps that thrive upon them. The result is the “great rewiring of childhood” of his book’s subtitle and an epidemic of mental illness and distress. You don’t have to be a statistician to know that ... Instagram is toxic for some – perhaps many – teenage girls. Ever since Frances Haugen’s revelations, we have known that Facebook itself knew that 13% of British teenage girls said that their suicidal thoughts became more frequent after starting on Instagram. And the company’s own researchers found that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. These findings might not meet the exacting standards of the best scientific research, but they tell you what you need to know.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and mental health from reliable major media sources.
Recall ... takes constant screenshots in the background while you go about your daily computer business. Microsoft’s Copilot+ machine-learning tech then scans (and “reads”) each of these screenshots in order to make a searchable database of every action performed on your computer and then stores it on the machine’s disk. “Recall is like bestowing a photographic memory on everyone who buys a Copilot+ PC,” [Microsoft marketing officer Yusuf] Mehdi said. “Anything you’ve ever seen or done, you’ll now more or less be able to find.” Charlie Stross, the sci-fi author and tech critic, called it a privacy “shit-show for any organisation that handles medical records or has a duty of legal confidentiality.” He also said: “Suddenly, every PC becomes a target for discovery during legal proceedings. Lawyers can subpoena your Recall database and search it, no longer being limited to email but being able to search for terms that came up in Teams or Slack or Signal messages, and potentially verbally via Zoom or Skype if speech-to-text is included in Recall data.” Faced with this pushback, Microsoft [announced] that Recall would be made opt-in instead of on by default, and also introduced extra security precautions – only producing results from Recall after user authentication, for example, and never decrypting data stored by the tool until after a search query. The only good news for Microsoft here is that it seems to have belatedly acknowledged that Recall has been a fiasco.
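Microsoft has not published Recall's internals, but the capture-then-index pipeline described above is simple to picture. Here is a minimal sketch of that pattern, not Microsoft's code; it assumes the third-party mss and pytesseract packages (plus the Tesseract OCR binary) and uses SQLite's built-in full-text index:

```python
# Toy illustration of the screenshot -> OCR -> searchable-database pattern
# described above. Not Microsoft's implementation, which is not public.
import sqlite3, time
import mss                      # screen capture (pip install mss)
import pytesseract              # OCR wrapper (pip install pytesseract)
from PIL import Image

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(ts UNINDEXED, text)")

def capture_once():
    """Grab the primary screen, OCR it, and index the recognized text."""
    with mss.mss() as sct:
        raw = sct.grab(sct.monitors[1])             # primary monitor
    img = Image.frombytes("RGB", raw.size, raw.bgra, "raw", "BGRX")
    db.execute("INSERT INTO shots VALUES (?, ?)",
               (time.ctime(), pytesseract.image_to_string(img)))
    db.commit()

def search(term):
    """Full-text search over everything that has appeared on screen."""
    return db.execute("SELECT ts FROM shots WHERE shots MATCH ?", (term,)).fetchall()
```

Note that even this toy version leaves a single plaintext-searchable file on disk, which is exactly why Stross treats the real thing as a subpoena magnet.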
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
High-level former intelligence and national security officials have provided crucial assistance to Silicon Valley giants as the tech firms fought off efforts to weaken online monopolies. John Ratcliffe, the former Director of National Intelligence, Brian Cavanaugh, a former intelligence aide in the White House, and [former White House National Security Advisor Robert] O'Brien jointly wrote to congressional leaders, warning darkly that certain legislative proposals to check the power of Amazon, Google, Meta, and Apple would embolden America's enemies. The letter left unmentioned that the former officials were paid by tech industry lobbyists at the time as part of a campaign to suppress support for the legislation. The Open App Markets Act was designed to break Apple and Google's duopoly over the smartphone app store market. The companies use their control over the app markets to force app developers to pay as much as 30 percent in fees on every transaction. Breaking up Apple and Google’s hold over the smartphone app store would enable greater free expression and innovation. The American Innovation and Choice Online Act similarly encourages competition by preventing tech platforms from self-preferencing their own products. The Silicon Valley giants deployed hundreds of millions of dollars in lobbying efforts to stymie the reforms. For Republicans, they crafted messages on national security and jobs. For Democrats, as other reports have revealed, tech giants paid LGBT, Black, and Latino organizations to lobby against the reforms, claiming that powerful tech platforms are beneficial to communities of color and that greater competition online would lead to a rise in hate speech. The lobbying tactics have so far paid off. Every major tech antitrust and competition bill in Congress has died over the last four years.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and Big Tech from reliable major media sources.
Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. The shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. Some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus. Such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Jay Stanley ... at the American Civil Liberties Union, said it was “profoundly disconcerting” that FedEx was exchanging data with law enforcement as part of Flock’s “mass surveillance” system. “It raises questions about why a private company ... would have privileged access to data that normally is only available to law enforcement,” he said. Forbes previously found that [Flock] had itself likely broken the law across various states by installing cameras without the right permits.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
“I had to watch every frame of a recent stabbing video ... It will never leave me,” says Harun*, one of many moderators reviewing harmful online content in India, as social media companies increasingly move the challenging work offshore. Moderators working in Hyderabad, a major IT hub in south Asia, have spoken of the strain on their mental health of reviewing images and videos of sexual and violent content, sometimes including trafficked children. Many social media platforms in the UK, European Union and US have moved the work to countries such as India and the Philippines. While OpenAI, creator of ChatGPT, has said artificial intelligence could be used to speed up content moderation, it is not expected to end the need for the thousands of human moderators employed by social media platforms. Content moderators in Hyderabad say the work has left them emotionally distressed, depressed and struggling to sleep. “I had to watch every frame of a recent stabbing video of a girl. What upset me most is that the passersby didn’t help her,” says Harun. “There have been instances when I’ve flagged a video containing child nudity and received continuous calls from my supervisors,” [said moderator Akash]. “Most of these half-naked pictures of minors are from the US or Europe. I’ve received multiple warnings from my supervisors not to flag these videos. One of them asked me to ‘man up’ when I complained that these videos need to be discussed in detail.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and Big Tech from reliable major media sources.
Trevin Brownie had to sift through lots of disturbing content for the three years he worked as an online content moderator in Nairobi, Kenya. "We take off any form of abusive content that violates policies such as bullying and harassment or hate speech or violent graphic content suicides," Brownie [said]. Brownie has encountered content including child pornography, material circulated by organized crime groups and terrorists, and images taken from war zones. "I've seen more than 500 beheadings on a monthly basis," he said. Brownie moved from South Africa, where he previously worked at a call center, to Nairobi, where he worked as a subcontractor for Facebook's main moderation hub in East Africa, which was operated by a U.S.-based company called Sama AI. Content moderators working in Kenya say Sama AI and other third-party outsourcing companies took advantage of them. They allege they received low wages and inadequate mental health support compared to their counterparts overseas. Brownie says ... PTSD has become a common side effect that he and others in this industry now live with. "It's really traumatic. Disturbing, especially for the suicide videos," he said. A key obstacle to getting better protections for content moderators lies in how people think social media platforms work. More than 150 content moderators who work with the artificial intelligence (AI) systems used by Facebook, TikTok and ChatGPT, from all parts of the continent, gathered in Kenya to form the African Content Moderator's Union. The union is calling on companies in the industry to increase salaries, provide access to onsite psychiatrists, and redraw policies to protect employees from exploitative labour practices.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and Big Tech from reliable major media sources.
Once upon a time, Google was great. They intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world. The launch of generative AIs such as ChatGPT clearly took Google by surprise, which is odd given that the company had for years been working on the technology. The question became: how will Google respond to the threat? Now we know: it’s something called AI overviews, in which an increasing number of search queries are initially answered by AI-generated responses. Users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day. There’s a quaint air of desperation in the publicity for this sudden pivot from search engine to answerbot. The really big question about the pivot, though, is what its systemic impact on the link economy will be. Already, the news is not great. Gartner, a market-research consultancy, predicts, for example, that search engine volume will drop 25% by 2026 owing to AI chatbots and other virtual agents.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). This surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made. Untold billions of dollars of private money are now pouring into firms seeking to expand the frontiers of techno-war: according to the New York Times, $125 billion over the past four years. Whatever the numbers, the tech sector and its financial backers sense that there are massive amounts of money to be made in next-generation weaponry and aren’t about to let anyone stand in their way. Meanwhile, an investigation by Eric Lipton of the New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. Former Google CEO Eric Schmidt [has] become a virtual philosopher king when it comes to how new technology will reshape society. [Schmidt] laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with none other than the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. AI is coming, and its impact on our lives, whether in war or peace, is likely to stagger the imagination.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California—a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday. "Although much of the Pentagon's $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms," [report author Roberto J.] González wrote. "Defense Department officials have ... awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle." González found that the five largest military contracts to major tech firms between 2018 and 2022 "had contract ceilings totaling at least $53 billion combined." There's also the danger of a "revolving door" between Silicon Valley and the Pentagon as many senior government officials "are now gravitating towards defense-related VC or private equity firms as executives or advisers after they retire from public service." "Members of the armed services and civilians are in danger of being harmed by inadequately tested—or algorithmically flawed—AI-enabled technologies. By nature, VC firms seek rapid returns on investment by quickly bringing a product to market, and then 'cashing out' by either selling the startup or going public. This means that VC-funded defense tech companies are under pressure to produce prototypes quickly and then move to production before adequate testing has occurred."
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence - which may or may not be correct. “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine. It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results. It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination. Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing. Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.
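That "inherently random" behaviour is easy to demonstrate in miniature. A language model assigns a probability to every candidate next word and then samples from that distribution, so the same prompt can produce different answers on different runs. A toy sketch, with an invented vocabulary and invented probabilities standing in for a real model's output:

```python
# Toy illustration of sampled next-word prediction. A real model scores tens
# of thousands of tokens with a neural network; these numbers are invented.
import random

next_word_probs = {      # hypothetical model output for one fixed prompt
    "no":   0.55,        # the correct continuation is merely the most likely;
    "yes":  0.30,        # wrong continuations keep nonzero probability,
    "cats": 0.15,        # which is where confident nonsense can come from
}

def sample_word(probs, temperature=1.0):
    """Sample one next word; higher temperature flattens the distribution."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

print([sample_word(next_word_probs) for _ in range(5)])
# e.g. ['no', 'no', 'yes', 'no', 'cats'] -- same prompt, different answers
```

Nothing in the sampling step knows or checks whether a continuation is true, which is why hallucination is a property of the method rather than an occasional bug.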
Note: Read more about the potential dangers of Google's new AI tool. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
"Agency intervention is necessary to stop the existential threat Google poses to original content creators," the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority "to stop Google's latest expansion of AI Overviews," a search engine innovation that Google has been rolling out recently. Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. Overviews give "comprehensive answers without the user ever having to click to another page," the The New York Times warns. And this worries websites that rely on Google to drive much of their traffic. "It potentially chokes off the original creators of the content," Frank Pine, executive editor of MediaNews Group and Tribune Publishing (owner of 68 daily newspapers), told the Times. Media websites have gotten used to Google searches sending them a certain amount of traffic. But that doesn't mean Google is obligated to continue sending them that same amount of traffic forever. It is possible that Google's pivot to AI was hastened by how hostile news media has been to tech companies. We've seen publishers demanding that search engines and social platforms pay them for the privilege of sharing news links, even though this arrangement benefits publications (arguably more than it does tech companies) by driving traffic.
Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
In recent weeks, Biden and Senate Majority Leader Chuck Schumer have been taking victory laps for the 2022 CHIPS and Science Act, a law intended to create jobs and fund innovation in a key global industry. It has already launched a series of grants, incentives and research proposals to help America regain its cutting-edge status in global semiconductor manufacturing. But quietly, in a March spending bill, appropriators in Congress shifted $3.5 billion that the Commerce Department was hoping to use for those grants and pushed it into a separate Pentagon program called Secure Enclave, which is not mentioned in the original law. The diversion of money from a flagship Biden initiative is a case study in how fragile Washington’s monumental spending programs can be in practice. Several members of Congress involved in the CHIPS law say they were taken by surprise to see the money shifted to Secure Enclave, a classified project to build chips in a special facility for defense and intelligence needs. Critics say the shift in CHIPS money undermines an important policy by moving funds from a competitive public selection process meant to boost a domestic industry to an untried and classified project likely to benefit only one company. No company has been named yet to execute the project, but interviews reveal that chipmaking giant Intel lobbied for its creation, and is still considered the frontrunner for the money.
Note: Learn more about unaccountable military spending in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on government corruption from reliable major media sources.
Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.” Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.” Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted in to the test. To give us answers to everything, Google’s AI has to decide which sources are reliable. I’m not very confident about its judgment. Remember our bonkers result on Zuckerberg’s net worth? A professional researcher — and also regular old Google — might suggest checking the billionaires list from Forbes. Google’s AI answer relied on a very weird ZipRecruiter page for “Mark Zuckerberg Jobs,” a thing that does not exist. The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, [and] delivers out-of-date information. This test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information.
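The SGE answer isn't just wrong; its own numbers contradict each other, which a few lines of arithmetic make plain. The only assumption below is that the AI derived its yearly figure from a standard 40-hour, 52-week work year:

```python
# Sanity-checking the AI's quoted figures against one another.
hourly, yearly, per_day_claim = 46.24, 96_169, 230.6e6

print(hourly * 40 * 52)     # 96,179.2  -- matches the quoted yearly figure
print(yearly / 365)         # ~263.48   -- what "per day" should have been
print(per_day_claim * 365)  # ~8.4e10   -- $84bn/year, contradicting $96,169
```

A $230.6 million daily figure sitting next to a $96,169 yearly one is off by a factor of nearly a million, the kind of inconsistency a basic source-checking step would catch instantly.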
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI technology from reliable major media sources.
[Tim] Berners-Lee, a British computer scientist, [came] up with the idea for a “world wide web” as a way of locating and accessing documents that were scattered all over the internet. He was able to do this because the internet, which had been publicly available since January 1983, enabled it. The network had no central ownership or controller. The result was an extraordinary explosion of creativity, and the emergence of ... a kind of global commons. However, the next generation of innovators to benefit from this freedom – Google, Facebook, Amazon, Microsoft, Apple et al – saw no reason to extend it to anyone else. The creative commons of the internet has been gradually and inexorably enclosed. Google and Apple’s browsers have nearly 85% of the world market share. Microsoft and Apple’s two desktop operating systems have almost 90%. Google runs about 90% of global search. More than half of all phones come from Apple and Samsung, while 99% of mobile operating systems are from Google or Apple. Apple and Google’s email clients manage nearly 90% of global email. GoDaddy and Cloudflare serve about 50% of global domain name system requests. And so on. One of the consequences of this concentration, say Farrell and Berjon, is that the creative possibilities of permissionless innovation have become increasingly constrained. The internet has become an extractive and fragile monoculture. We can revitalise it, but only by “rewilding” it.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
For the past few weeks, journalists have been reporting on what they've found in the "Twitter Files." The revelations have been astonishing and deeply troubling, exposing solid evidence of collusion between top executives at the FBI and their cozy counterparts at Twitter. FBI leadership and Twitter censors conferred constantly about how to shut down political speech based on its content, confirming the suspicions of, well, anyone who was paying attention. And it proves without a doubt that over the past few years, countless Americans have undergone a real violation of their First Amendment rights. The First Amendment mandates that government can't abridge—meaning limit or censor—speech based on its content. Even if attempting to advance the noblest of causes, government actors must not collide with this constitutional guardrail. The Constitution simply isn't optional. The government can't enlist a private citizen or corporation to undertake what the Constitution precludes it from doing. When Twitter acquiesced to the FBI's urging, it essentially became an agent of the government. FBI officials created a special, secure online portal for Twitter staff, where the two sides could secretly exchange information about who was saying what on the platform and how that speech could be squelched. In this virtual "war room," the FBI made dozens of requests to censor political speech. Twitter chirpily complied.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable major media sources.
The New Mexico attorney general, Raúl Torrez, who has launched legal action against Meta for child trafficking on its platforms, says he believes the social media company is the “largest marketplace for predators and paedophiles globally”. The lawsuit claims that Meta allows and fails to detect the trafficking of children and “enabled adults to find, message and groom minors, soliciting them to sell pictures or participate in pornographic videos”, concluding that “Meta’s conduct is not only unacceptable; it is unlawful”. Torrez says that he has been shocked by the findings of his team’s investigations into online child sexual exploitation on Meta’s platforms. Internal company documents obtained by the attorney general’s office as part of its investigation have also revealed that the company estimates about 100,000 children using Facebook and Instagram receive online sexual harassment each day. The idea of the lawsuit came to [Torrez] after reading media coverage of Meta’s role in child sexual exploitation, including a Guardian investigation that it was failing to report or detect the use of Facebook and Instagram for child trafficking. If it progresses, the New Mexico lawsuit is expected to take years to conclude. Torrez wants his lawsuit to provide a medium to usher in new regulations. “Fundamentally, we’re trying to get Meta to change how it does business and prioritise the safety of its users, particularly children.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and sexual abuse scandals from reliable major media sources.
On April 20, former acting CIA Director Michael Morell admitted he orchestrated the joint letter that torpedoed the New York Post’s bombshell reporting on Hunter Biden’s laptop in the weeks leading up to the November 2020 US Presidential election, at the direct request of Joe Biden’s campaign team. That letter ... asserted the leaked material bore unambiguous hallmarks of a Kremlin “information operation.” In all, 51 former senior intelligence officials endorsed the declaration. This intervention was sufficient for Twitter to block all sharing of the NY Post’s exposés and ban the outlet’s official account. Twitter’s public suppression of the NY Post’s disclosures was complemented by a covert operation to identify and neutralize anyone discussing the contents of Hunter Biden’s laptop, courtesy of Dataminr, a social media spying tool heavily connected to British and American intelligence services. In-Q-Tel [is] the CIA’s venture capital arm. In 2016, The Intercept revealed In-Q-Tel was financing at least 38 separate social media spying tools, to surveil “erupting political movements, crises, epidemics, and disasters.” Among them was Dataminr, which enjoys privileged access to Twitter’s “firehose” – all tweets published in real time – in order to track and visualize trends as they happen. [In 2020], the U.S. was ... engulfed by incendiary large-scale protests. Dataminr kept a close eye on this upheaval every step of the way, tipping off police to the identities of demonstrators.
Note: While Hunter Biden was indicted for three felony gun charges and nine counts of tax-related crimes, his laptop also revealed suspicious business dealings with corrupt overseas firms. Learn more about the history of military-intelligence influence on the media in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
A Silicon Valley defense tech startup is working on products that could have as great an impact on warfare as the atomic bomb, its founder Palmer Luckey said. "We want to build the capabilities that give us the ability to swiftly win any war we are forced to enter," he [said]. The Anduril founder didn't elaborate on what impact AI weaponry would have. But asked if it would be as decisive as the atomic bomb was to the outcome of World War II, he replied: "We have ideas for what they are. We are working on them." In 2022, Anduril won a contract worth almost $1 billion with the Special Operations Command to support its counter-unmanned systems. Anduril's products include autonomous sentry towers along the Mexican border [and] Altius-600M attack drones supplied to Ukraine. All of Anduril's tech operates autonomously and runs on its AI platform called Lattice that can easily be updated. The success of Anduril has given hope to other smaller players aiming to break into the defense sector. As an escalating number of global conflicts has increased demand for AI-driven weaponry, venture capitalists have put more than $100 billion into defense tech since 2021, according to Pitchbook data. The rising demand has sparked a fresh wave of startups lining up to compete with industry "primes" such as Lockheed Martin and RTX (formerly known as Raytheon) for a slice of the $842 billion US defense budget.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corruption in the military and in the corporate world from reliable major media sources.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”. Musk, who stepped down from OpenAI’s board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”. In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.
Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.
A federal appeals court on Tuesday refused to hold five major technology companies liable over their alleged support for the use of child labor in cobalt mining operations in the Democratic Republic of the Congo. In a 3-0 decision, the U.S. Court of Appeals for the District of Columbia ruled in favor of Google parent Alphabet, Apple, Dell Technologies, Microsoft and Tesla, rejecting an appeal by former child miners and their representatives. The plaintiffs accused the five companies of joining suppliers in a "forced labor" venture by purchasing cobalt, which is used to make lithium-ion batteries. Nearly two-thirds of the world's cobalt comes from the DRC. According to the complaint, the companies "deliberately obscured" their dependence on child labor, including many children pressured into work by hunger and extreme poverty, to ensure their growing need for the metal would be met. The 16 plaintiffs included representatives of five children who were killed in cobalt mining operations. Circuit Judge Neomi Rao said the plaintiffs had legal standing to seek damages, but did not show the five companies had anything more than a buyer-seller relationship with suppliers. Terry Collingsworth, a lawyer for the plaintiffs ... said his clients may appeal further. The decision provides "a strong incentive to avoid any transparency with their suppliers, even as they promise the public they have 'zero tolerance' policies against child labor," he said. "We are far from finished seeking accountability."
Note: Unreported deaths of children, devastating diseases, toxic environments, and sexual assault are just some of the tragedies within the hidden world of cobalt mining in the DRC. Furthermore, entire communities have been forced to leave their homes to make way for new mining operations. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes. Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished. OpenAI spokesperson Niko ... Felix [said] that OpenAI wanted to pursue certain “national security use cases that align with our mission,” citing a plan to create “cybersecurity tools” with DARPA, and that “the goal with our policy update is to provide clarity and the ability to have these discussions.” The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear “military and warfare” ban in the face of increasing interest from the Pentagon and U.S. intelligence community. “Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said [former AI policy analyst] Sarah Myers West.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
Submarine cables used to be seen as the internet’s dull plumbing. Now giants of the data economy, such as Amazon, Google, Meta and Microsoft, are asserting more control over the flow of data, even as tensions between China and America risk splintering the world’s digital infrastructure. The result is to turn undersea cables into prized economic and strategic assets. Subsea data pipes carry almost 99% of intercontinental internet traffic. By 2010 the rise in data traffic led internet and cloud-computing giants—Amazon, Google, Meta and Microsoft—to start leasing capacity on these lines. The data-cable business is ... being entangled in the tech contest between America and China. Take the Pacific Light Cable Network (PLCN). The 13,000km data pipeline was announced in 2016, with the backing of Google and Meta. It aimed to link the west coast of America with Hong Kong. By 2020 it had reached the Philippines and Taiwan. But last year America’s government denied approval for the final leg to Hong Kong, worried that this would give Chinese authorities easy access to Americans’ data. Hundreds of kilometres of cable that would link Hong Kong to the network are languishing unused on the ocean floor. China is responding by charting its own course. PEACE, a 21,500km undersea cable linking Kenya to France via Pakistan, was built entirely by Chinese firms as part of China’s “digital silk road”, a scheme to increase its global influence.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption from reliable major media sources.
Palantir’s founding team, led by investor Peter Thiel and Alex Karp, wanted to create a company capable of using new data integration and data analytics technology — some of it developed to fight online payments fraud — to solve problems of law enforcement, national security, military tactics, and warfare. Palantir, founded in 2003, developed its tools fighting terrorism after September 11, and has done extensive work for government agencies and corporations, though much of its work is secret. Palantir’s MetaConstellation platform allows the user to task ... satellites to answer a specific query. Imagine you want to know what is happening in a certain location and time in the Arctic. Click on a button and MetaConstellation will schedule the right combination of satellites to survey the designated area. The platform is able to integrate data from multiple and disparate sources — think satellites, drones, and open-source intelligence — while allowing a new level of decentralised decision-making. Just as a deep learning algorithm knows how to recognise a picture of a dog after some hours of supervised learning, the Palantir algorithms can become extraordinarily adept at identifying an enemy command and control centre. Alex Karp, Palantir’s CEO, has argued that “the power of advanced algorithmic warfare systems is now so great that it equates to having tactical nuclear weapons against an adversary with only conventional ones.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
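The tasking-and-fusion workflow described above can be made concrete with a toy example. The sketch below is purely illustrative and assumes nothing about Palantir's actual software: every name in it (Sensor, plan_collection, the region labels) is invented. It shows only the general pattern of taking a query about a region and turning it into a collection plan across disparate sensor types.

```python
# A toy illustration of the multi-sensor tasking idea described above.
# Every name here is invented; nothing reflects Palantir's actual API.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str             # e.g. "sat-7", "drone-3", "osint-1"
    kind: str             # "satellite", "drone", or "open-source"
    covers: frozenset     # region identifiers this sensor can observe
    next_pass_hrs: float  # hours until it can next observe a region

def plan_collection(region: str, fleet: list) -> list:
    """Return the sensors able to cover the region, soonest first."""
    capable = [s for s in fleet if region in s.covers]
    return sorted(capable, key=lambda s: s.next_pass_hrs)

if __name__ == "__main__":
    fleet = [
        Sensor("sat-7", "satellite", frozenset({"arctic-ne"}), 2.5),
        Sensor("drone-3", "drone", frozenset({"arctic-ne", "arctic-nw"}), 0.5),
        Sensor("osint-1", "open-source", frozenset({"arctic-ne"}), 0.0),
    ]
    # "What is happening in this part of the Arctic?" becomes a plan:
    for s in plan_collection("arctic-ne", fleet):
        print(f"task {s.name} ({s.kind}) in {s.next_pass_hrs}h")
```

The design point, under those assumptions, is that the user expresses intent (a region and a question) and the system decides which combination of sensors to task, which is what makes the decentralised decision-making described above possible.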
The Palestinian population is intimately familiar with how new technological innovations are first weaponized against them, ranging from the electric fences and unmanned drones that trap people in Gaza to the facial recognition software monitoring Palestinians in the West Bank. Groups like Amnesty International have described Israel's system as an "automated apartheid" and repeatedly highlight stories, testimonies, and reports about cyber-intelligence firms, including the infamous NSO Group (the Israeli surveillance company behind the Pegasus software), conducting field tests and experiments on Palestinians. Reports have highlighted: “Testing and deployment of AI surveillance and predictive policing systems in Palestinian territories. In the occupied West Bank, Israel increasingly utilizes facial recognition technology to monitor and regulate the movement of Palestinians. Israeli military leaders described AI as a significant force multiplier, allowing the IDF to use autonomous robotic drone swarms to gather surveillance data, identify targets, and streamline wartime logistics.” The Palestinian towns and villages near Israeli settlements have been described as laboratories where security solutions companies can test their technologies on Palestinians before marketing them to places like Colombia. The Israeli government hopes to crystallize its “automated apartheid” through the tokenization and privatization of various industries and the establishment of a technocratic government in Gaza.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Silicon Valley techies are pretty sanguine about commercial surveillance. But they are much less cool about government spying. Government employees and contractors are pretty cool with state surveillance. But they are far less cool with commercial surveillance. What are they both missing? That American surveillance is a public-private partnership: a symbiosis between a concentrated tech sector that has the means, motive, and opportunity to spy on every person in the world and a state that loves surveillance as much as it hates checks and balances. The tech sector has powerful allies in government: cops and spies. No government agency could ever hope to match the efficiency and scale of commercial surveillance. Meanwhile, the private sector relies on cops and spies to go to bat for it, lobbying against new privacy laws and for lax enforcement of existing ones. Think of Amazon’s Ring cameras, which have blanketed entire neighborhoods in CCTV surveillance that Ring shares with law enforcement agencies, sometimes without the consent or knowledge of the cameras’ owners. Ring marketing recruits cops as street teams, showering them with freebies to distribute to local homeowners. Google ... has managed to play both sides of the culture war with its location surveillance, thanks to the “reverse warrants” that cops have used to identify all the participants at both Black Lives Matter protests and the January 6 coup. Distinguishing between state and private surveillance is a fool’s errand.
Note: For more along these lines, see concise summaries of deeply revealing news articles on the disappearance of privacy from reliable major media sources.
Leading up to the August Republican presidential primary debate ... an RNC official told Google via email that the debate would be streaming exclusively on the upstart video platform Rumble. The August 23 debate was broadcast on Fox News and streamed on Fox Nation, which requires a subscription; Rumble was the only platform to stream it for free. On the day of and during the debate, however, potential viewers who searched Google for “GOP debate stream” were returned links to YouTube, Fox News, and news articles about the debate, according to screen recordings. Rumble was nowhere on the first page. For Rumble, which is currently in discovery in an antitrust lawsuit against Google in California, this is a case of Google suppressing its competitors in favor of its own product, YouTube, which has regularly been the subject of anticompetitive allegations from rivals who charge that Google unfairly and illegally favors YouTube in its search algorithm. Google, in fact, is in the middle of a landmark antitrust trial, charged with anticompetitive practices by the Department of Justice. The company would not have been required by antitrust law to promote [Rumble's] link. It would, however, be barred from suppressing the competitor’s link from organic results. The fact that Rumble’s link did not appear on the first page even though it was the most relevant link the search could return means either the search engine failed at its task or the link was suppressed.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
While Facebook has long sought to portray itself as a "town square" that allows people from across the world to connect, a deeper look into its apparent military origins and continual military connections reveals that the world's largest social network was always intended to act as a surveillance tool to identify and target domestic dissent. LifeLog was one of several controversial post-9/11 surveillance programs pursued by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) that threatened to destroy privacy and civil liberties in the United States. LifeLog sought to ... build a digital record of "everything an individual says, sees, or does." In 2015, [DARPA architect Douglas] Gage told VICE that "Facebook is the real face of pseudo-LifeLog." He tellingly added, “We have ended up providing the same kind of detailed personal information without arousing the kind of opposition that LifeLog provoked.” A few months into Facebook's launch, in June 2004, Facebook cofounders Mark Zuckerberg and Dustin Moskovitz [took on] its first outside investor, Peter Thiel. Thiel, in coordination with the CIA, was actively trying to resurrect controversial DARPA programs. Thiel formally acquired $500,000 worth of Facebook shares and was added to its board. Thiel's longstanding symbiotic relationship with Facebook's cofounders extends to his company Palantir, as the data that Facebook users make public invariably winds up in Palantir's databases and helps drive the surveillance engine Palantir runs for a handful of US police departments, the military, and the intelligence community.
Note: Consider reading the full article by investigative reporter Whitney Webb to explore the scope of Facebook's military origins and the rise of mass surveillance. Read more about the relationship between the national security state and Google, Facebook, TikTok, and the entertainment industry. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and media manipulation from reliable sources.
The use of weapons-grade robots and drones in combat isn't new. But AI software is, and it's enhancing – in some cases, to the extreme – the existing hardware, which has been modernizing warfare for the better part of a decade. Now, experts say, developments in AI have pushed us to a point where global forces have no choice but to rethink military strategy – from the ground up. "It's realistic to expect that AI will be piloting an F-16 and will not be that far out," Nathan Michael, chief technology officer of Shield AI, a company whose mission is "building the world's best AI pilot," says. We don't truly comprehend what we're creating. There are also fears that a comfortable reliance on the technology's precision and accuracy – referred to as automation bias – may come back to haunt us, should the tech fail in a life-or-death situation. One major worry revolves around AI facial recognition software being used to enhance an autonomous robot or drone during a firefight. Right now, a human being behind the controls has to pull the proverbial trigger. Should that be taken away, militants could be confused with civilians or allies at the hands of a machine. And remember when the fear of our most powerful weapons being turned against us was just something you saw in futuristic action movies? With AI, that's very possible. "There is a concern over cybersecurity in AI and the ability of either foreign governments or independent actors to take over crucial elements of the military," [filmmaker Jesse Sweet] said.
Note: For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Maya Jones* was only 13 when she first walked through the door of Courtney’s House, a drop-in centre for victims of child sex trafficking. When she was 12, she had started receiving direct messages on Instagram from a man she didn’t know. She decided to meet him in person. Then came his next request: “Can you help me make some money?” According to [Courtney’s House founder Tina] Frundt, Maya explained that the man asked her to pose naked for photos, and to give him her Instagram password so that he could upload the photos to her profile. Frundt says Maya told her that the man, who was now calling himself a pimp, was using her Instagram profile to advertise her for sex. The internet is used by human traffickers as “digital hunting fields”, allowing them access to both customers and potential victims, with children being targeted by traffickers on social media platforms. The biggest of these, Facebook, is owned by Meta, the tech giant whose platforms, which also include Instagram, are used by more than 3 billion people. In 2020, according to a report by US-based not-for-profit the Human Trafficking Institute, Facebook was the platform most used to groom and recruit children by sex traffickers (65%), based on an analysis of 105 federal child sex trafficking cases that year. The HTI analysis ranked Instagram second most prevalent, with Snapchat third. While Meta says it is doing all it can, we have seen evidence that suggests it is failing to report or even detect the full extent of what is happening.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and sexual abuse scandals from reliable major media sources.
Within ten days [of its release], the first-person military shooter video game [Call of Duty: Modern Warfare II] earned more than $1 billion in revenue. The Call of Duty franchise is an entertainment juggernaut, having sold close to half a billion games since it was launched in 2003. Its publisher, Activision Blizzard, is a giant in the industry. Details gleaned from documents obtained under the Freedom of Information Act reveal that Call of Duty is not a neutral first-person shooter, but a carefully constructed piece of military propaganda, designed to advance the interests of the U.S. national security state. Not only does Activision Blizzard work with the U.S. military to shape its products, but its leadership board is also full of former high state officials. Chief amongst these is Frances Townsend, Activision Blizzard's senior counsel. As the White House's most senior advisor on terrorism and homeland security, Townsend ... became one of the faces of the administration's War on Terror. Activision Blizzard's chief administration officer, Brian Bulatao ... was chief operating officer for the CIA, placing him third in command of the agency. Bulatao went straight from the State Department into the highest echelons of Activision Blizzard, despite no experience in the entertainment industry. [This] raises serious questions around privacy and state control over media. "Call of Duty ... has been flagged up for recreating real events as game missions and manipulating them for geopolitical purposes," [journalist Tom] Secker told MintPress.
Note: The latest US Air Force recruitment tool is a video game that allows players to receive in-game medals and achievements for drone bombing Iraqis and Afghans. For more on this disturbing "military-entertainment complex" trend, explore the work of investigative journalist Tom Secker, who recently produced a documentary, Theaters of War: How the Pentagon and CIA Took Hollywood, and published a new book, Superheroes, Movies and the State: How the U.S. Government Shapes Cinematic Universes.
A large number of ex-officers from the FBI, CIA, NSC, and State Department have taken positions at Facebook, Twitter, and Google. The revelation comes amid fears that the FBI exercised control over Twitter's censorship decisions, including the suppression of the Hunter Biden laptop story. The Twitter Files have revealed the Bureau's close relationship with Twitter: how it regularly demanded that accounts and tweets be banned, and how it was in suspicious contact with the company before the Hunter Biden laptop story was censored. The documents detailed how so many former FBI agents joined Twitter's ranks over the past few years that they created their own private Slack channel. A report by MintPress' Alan MacLeod identified dozens of Twitter employees who had previously held positions at the Bureau. He also found that former CIA agents made up some of the top ranks in almost every politically sensitive department at Meta, the parent company of Facebook, Instagram, and WhatsApp. And in another report, MacLeod detailed the extent to which former CIA agents started working at Google. DailyMail.com has now been able to track down nine former CIA agents who are working, or have worked, at Meta, including Aaron Berman, the senior policy manager for misinformation at the company, who had previously written the president's daily briefings. Six others have worked for other intelligence agencies before joining the social media giant, many of whom have posted recently about Facebook's efforts to tamp down on so-called 'covert influence operations.'
Note: Explore a deeper analysis on the ex-CIA agents at Facebook and at Google. Additionally, read how Big Tech censors social media on behalf of corporate and government interests. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and media manipulation from reliable sources.
U.S. citizens are being subjected to a relentless onslaught from intrusive technologies that have become embedded in the everyday fabric of our lives, creating unprecedented levels of social and political upheaval. These widely used technologies ... include social media and what Harvard professor Shoshana Zuboff calls "surveillance capitalism"—the buying and selling of our personal info and even our DNA in the corporate marketplace. But powerful new ones are poised to create another wave of radical change. Under the mantle of the "Fourth Industrial Revolution," these include artificial intelligence or AI, the metaverse, the Internet of Things, the Internet of Bodies (in which our physical and health data is added into the mix to be processed by AI), and my personal favorite, police robots. This is a two-pronged effort involving both powerful corporations and government initiatives. These tech-based systems operate "below the radar" and are rarely discussed in the mainstream media. The world's biggest tech companies are now richer and more powerful than most countries. According to an article in PC Week in 2021 discussing Apple's dominance: "By taking the current valuation of Apple, Microsoft, Amazon, and others, then comparing them to the GDP of countries on a map, we can see just how crazy things have become… Valued at $2.2 trillion, the Cupertino company is richer than 96% of the world. In fact, only seven countries currently outrank the maker of the iPhone financially."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
A MintPress News investigation has found dozens of ex-U.S. State Department officials working in key positions at TikTok. Many more individuals with backgrounds in the FBI, CIA and other departments of the national security state also hold influential posts at the social media giant, affecting the content that over one billion users see. The influx of State Department officials into TikTok’s upper ranks is a consequence of “Project Texas,” an initiative the company began in 2020 in the hopes of avoiding being banned altogether in the United States. During his time in office, Secretary of State Mike Pompeo led the charge to shut the platform down, frequently labeling it a “spying app” and a “propaganda tool for the Chinese Communist Party.” It was widely reported that the U.S. government had forced the sale of TikTok to Walmart and then Microsoft. But in late 2020, as Project Texas began, those deals mysteriously fell through, and the rhetoric about the dangers of TikTok from officials evaporated. Project Texas is a $1.5 billion security operation to move the company’s data to Austin. In doing so, it announced that it was partnering with tech giant Oracle, a corporation that, as MintPress has reported, is the CIA in all but name. Evidently, Project Texas also secretly included hiring all manner of U.S. national security state personnel to oversee the company’s operations – and not just from the State Department. Virtually every branch of the national security state is present at TikTok.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corruption in intelligence agencies and in the corporate world from reliable major media sources.
Big Tech giants and their oligarchic owners now engage in a new type of censorship, which we have called “censorship by proxy.” Censorship by proxy describes restrictions on freedom of information undertaken by private corporations that exceed limits on governmental censorship and serve both corporate and government or third-party interests. Censorship by proxy is not subject to venerable First Amendment proscriptions on government interference with freedom of speech or freedom of the press. Censorship by proxy alerts us to the power of economic entities that are not normally recognized as “gatekeepers.” For example, in 2022, the digital financial service PayPal (whose founders include Peter Thiel and Elon Musk) froze the accounts of Consortium News and MintPress News for “unspecified offenses” and “risks” associated with their accounts, a ruling that prevented both independent news outlets from using funds maintained by PayPal. Consortium News and MintPress News have each filed critical news stories and commentary on the foreign policy objectives of the United States and NATO. PayPal issued notices to each news outlet, stating that, in addition to suspending their accounts, it might also seize their assets for “damages.” Joe Lauria, editor in chief of Consortium News, said he believed this was a case of “ideological policing.” Mnar Adley, head of MintPress News, warned, “The sanctions-regime war is coming home to hit the bank accounts of watchdog journalists.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
During one of his many visits to the Democratic Republic of the Congo, Siddharth Kara ... met a young woman sifting dirt for traces of cobalt. Priscille told him she had suffered two miscarriages and that her husband, a fellow “artisanal” miner, died of a respiratory disease. It is just one of many devastating personal accounts in Cobalt Red, a detailed exposé into the hidden world of small-scale cobalt mining in the Democratic Republic of the Congo (DRC). The “quaint” moniker of artisanal mining, Mr. Kara points out, belies a brutal industry where hundreds of thousands of men, women and children dig with bare hands and basic tools in toxic, perilous pits, eking out an existence on the bottom rung of the global supply chain. If you own a smartphone, tablet, laptop, e-scooter, [or] electric vehicle ... then it is a system in which you are unwittingly complicit. Around 75 per cent of the world’s cobalt is mined in the DRC. The rare, silvery metal is an essential component to every lithium-ion rechargeable battery. Congolese miners ... have experienced life-changing injuries, sexual assault, physical violence, corruption, displacement and abject poverty. Cobalt Red also documents many unreported deaths, including those of children buried alive in makeshift mining tunnels, and their bodies never recovered. Cobalt is toxic to touch and breathe in, and can be found alongside traces of radioactive uranium. Cancers, respiratory illnesses, miscarriages, headaches and painful skin conditions occur among adults who work without protective equipment. Children in mining communities suffer birth defects, developmental damage, vomiting and seizures from direct and indirect exposure to the heavy metals. Female miners, who earn less than the average two dollars per day paid to men, typically work in groups as sexual assault is common in mining areas. Major tech and EV companies extol commitments to human rights, zero-tolerance for child labor, and clean supply chains. Mr. Kara described these statements as “utterly inconsistent” with what’s happening on the ground.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
Trust Lab was founded by a team of well-credentialed Big Tech alumni who came together in 2021 with a mission: Make online content moderation more transparent, accountable, and trustworthy. A year later, the company announced a “strategic partnership” with the CIA’s venture capital firm. The quiet October 29 announcement of the partnership is light on details, stating that Trust Lab and In-Q-Tel — which invests in and collaborates with firms it believes will advance the mission of the CIA — will work on “a long-term project that will help identify harmful content and actors in order to safeguard the internet.” Key terms like “harmful” and “safeguard” are unexplained, but the press release goes on to say that the company will work toward “pinpointing many types of online harmful content, including toxicity and misinformation.” It’s difficult to imagine how aligning the startup with the CIA is compatible with [Trust Lab co-founder Tom] Siegel’s goal of bringing greater transparency and integrity to internet governance. What would it mean, for instance, to incubate counter-misinformation technology for an agency with a vast history of perpetuating misinformation? Placing the company within the CIA’s tech pipeline also raises questions about Trust Lab’s view of who or what might be “harmful” online, a nebulous concept that will no doubt mean something very different to the U.S. intelligence community than it means elsewhere. Trust Lab’s murky partnership with In-Q-Tel suggests a step toward greater governmental oversight of online speech.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and media manipulation from reliable sources.
Twitter owner Elon Musk spoke out on Saturday evening about the so-called “Twitter Files,” a long tweet thread posted by journalist Matt Taibbi, who had been provided with details about behind-the-scenes discussions on Twitter’s content moderation decision-making, including the call to suppress a 2020 New York Post story about Hunter Biden and his laptop. During a two-hour long Twitter Spaces session, Musk said a second “Twitter Files” drop will again involve Taibbi, along with journalist Bari Weiss, but did not give an exact date for when that would be released. Musk – who claims to have not read the released files himself – said the impetus for the original tweet thread was about what happened in the run-up to the 2020 presidential election and “how much government influence was there.” Taibbi’s first thread reaffirmed how, in the initial hours after the Post story about Hunter Biden went live, Twitter employees grappled with fears that it could have been the result of a Russian hacking operation. It showed employees on several Twitter teams debating over whether to restrict the article under the company’s hacked materials policy, weeks before the 2020 election. The emails Taibbi obtained are consistent with what former Twitter site integrity head Yoel Roth told journalist Kara Swisher in an onstage interview last week. Taibbi said that contact from political parties came more frequently from Democrats, but provided no internal documents to back up his assertion.
Note: For more along these lines, see concise summaries of deeply revealing news articles on media corruption from reliable sources.
The EARN IT Act [is] a bill designed to confront the explosion of child sexual abuse material (CSAM) online. EARN IT would help address what is, disturbingly, a common experience for young users: routine exposure to predatory targeting, grooming, sexual violence, prostitution/sex trafficking, hardcore pornography and more. A New York Times investigation revealed that 70 million CSAM images were reported to the National Center for Missing and Exploited Children (NCMEC) in 2019–up from 600,000 in 2008–an "almost unfathomable" increase in criminality. The EARN IT Act restores privacy to victims of child sexual abuse material and allows them to sue those who cause them harm online, under federal civil law and state criminal and civil law. It also creates a new commission to issue guidelines to limit sex trafficking, grooming and sexual exploitation online. CSAM still exists because tech platforms have no incentive to prevent or eliminate it, because Section 230 of the Communications Decency Act (passed in 1996, before social media existed) gives them near-blanket immunity from liability. While some in the technology sector [are] claiming EARN IT is a threat to encryption and user privacy, the reality is that encryption can coexist with better business practices for online child safety. We can increase security and privacy while refraining from a privacy-absolutism that unintentionally allows sexual predators to run rampant online.
Note: To understand the scope of child sex abuse worldwide, learn about other major cover-ups in revealing news articles on sexual abuse scandals from reliable major media sources.
Ask questions or post content about COVID-19 that runs counter to the Biden administration's narrative and find yourself censored on social media. That's precisely what data analyst and digital strategist Justin Hart says happened to him. And so last week the Liberty Justice Center, a public-interest law firm, filed a suit on his behalf in California against Facebook, Twitter, President Joe Biden and United States Surgeon General Vivek Murthy for violating his First Amendment right to free speech. Hart most recently had his social media accounts locked for merely posting an infographic that illustrated the lack of scientific research behind forcing children to wear masks to prevent the spread of COVID. In fact ... study after study repeatedly shows that children are safer than vaccinated adults and that the masks people actually wear don't do much good. The lawsuit contends that the federal government is "colluding with social media companies to monitor, flag, suspend and delete social media posts it deems 'misinformation.'" It can point to White House Press Secretary Jen Psaki's July remarks that senior White House staff are "in regular touch" with Big Tech platforms regarding posts about COVID. She also said the surgeon general's office is "flagging problematic posts for Facebook that spread [disinformation]." "Why do we think it's acceptable for the government to direct social media companies to censor people on critical issues such as COVID?" Hart asks. The Post has been targeted repeatedly by social media for solid, factual reporting.
Note: Read about another lawsuit alleging collusion between government and big tech companies to censor dissenting views on pandemic policies. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and media manipulation from reliable sources.
The intelligence community is about to get the equivalent of an adrenaline shot to the chest. This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence Agency over the past year will begin servicing all 17 agencies that make up the intelligence community. If the technology plays out as officials envision, it will usher in a new era of cooperation and coordination, allowing agencies to share information and services much more easily and avoid the kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks. For the first time, agencies within the intelligence community will be able to order a variety of on-demand computing and analytic services from the CIA and National Security Agency. What’s more, they’ll only pay for what they use. For the risk-averse intelligence community, the decision to go with a commercial cloud vendor is a radical departure from business as usual. It is difficult to overestimate the cloud contract’s importance. In a recent public appearance, CIA Chief Information Officer Douglas Wolfe called it “one of the most important technology procurements in recent history,” with ramifications far outside the realm of technology. The importance of the cloud capabilities the CIA gets through leveraging Amazon Web Services’ horsepower is best exemplified in computing intelligence data. Instead of each agency building out its own systems, select agencies ... are responsible for governing its major components.
Note: The CIA tries to "collect everything and hold on to it forever." For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption from reliable major media sources.
Frances Haugen spent 15 years working for some of the largest social media companies in the world, including Google, Pinterest, and, until May, Facebook. Haugen quit Facebook of her own accord and left with thousands of pages of internal research and communications that she shared with the Securities and Exchange Commission. 60 Minutes obtained the documents from a Congressional source. On Sunday, in her first interview, Haugen told 60 Minutes correspondent Scott Pelley about what she called "systemic" problems with the platform's ranking algorithm that led to the amplification of "angry content" and divisiveness. Evidence of that, she said, is in the company's own internal research. Haugen said Facebook changed its algorithm in 2018 to promote "what it calls meaningful social interactions" through "engagement-based rankings." She explained that content that gets engaged with – such as reactions, comments, and shares – gets wider distribution. "Political parties have been quoted, in Facebook's own research, saying, we know you changed how you pick out the content that goes in the home feed," said Haugen. "And now if we don't publish angry, hateful, polarizing, divisive content, crickets." "We have no independent transparency mechanisms," Haugen [said]. "Facebook ... picks metrics that are in its own benefit. And the consequence is they can say we get 94% of hate speech and then their internal documents say we get 3% to 5% of hate speech. We can't govern that."
Note: For more along these lines, see concise summaries of deeply revealing news articles on media manipulation from reliable sources.
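To make the mechanism Haugen describes concrete, here is a minimal, hypothetical sketch of an engagement-weighted feed. The weights, field names, and scoring function are invented for illustration; Facebook's actual ranking system is proprietary and far more complex. The point is only that when deeper engagement signals (comments, shares) are weighted most heavily, content that provokes reactions rises to the top.

```python
# Minimal, hypothetical sketch of "engagement-based ranking": posts that draw
# more reactions, comments, and shares get wider distribution. All weights and
# names here are invented; this is not Facebook's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int
    comments: int
    shares: int

# Deeper engagement (comments, shares) weighted more than passive reactions --
# the dynamic Haugen says rewards "angry, hateful, polarizing" content.
WEIGHTS = {"reactions": 1.0, "comments": 5.0, "shares": 10.0}

def engagement_score(p: Post) -> float:
    return (WEIGHTS["reactions"] * p.reactions
            + WEIGHTS["comments"] * p.comments
            + WEIGHTS["shares"] * p.shares)

def rank_feed(posts: list) -> list:
    """Order the home feed by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm policy explainer", reactions=120, comments=4, shares=2),
        Post("outrage bait", reactions=80, comments=60, shares=45),
    ])
    for p in feed:
        print(f"{engagement_score(p):7.1f}  {p.text}")
```

Under these toy weights, the post that provokes comments and shares outscores the one that merely accumulates passive reactions, which is the incentive structure publishers described to Facebook's own researchers.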
Ties between Silicon Valley and the Pentagon are deeper than previously known, according to thousands of previously unreported subcontracts published Wednesday. The subcontracts were obtained through open records requests by accountability nonprofit Tech Inquiry. They show that tech giants including Google, Amazon, and Microsoft have secured more than 5,000 agreements with agencies including the Department of Defense, Immigration and Customs Enforcement, the Drug Enforcement Administration, and the FBI. Tech workers in recent years have pressured their employers to drop contracts with law enforcement and the military. Google workers revolted in 2018 after Gizmodo revealed that Google was building artificial intelligence for drone targeting through a subcontract with the Pentagon; after some employees quit in protest, Google agreed not to renew the contract. Employees at Amazon and Microsoft have petitioned both companies to drop their contracts with ICE and the military. Neither company has. The newly surfaced subcontracts ... show that the companies' connections to the Pentagon run deeper than many employees were previously aware. Tech Inquiry's research was led by Jack Poulson, a former Google researcher. "Often the high-level contract description between tech companies and the military looks very vanilla," Poulson [said]. "But only when you look at the details ... do you see the workings of how the customization from a tech company would actually be involved."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corruption in government and in the corporate world from reliable major media sources.
Justin Rosenstein had tweaked his laptop's operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. He was particularly aware of the allure of Facebook "likes," which he describes as "bright dings of pseudo-pleasure" that can be as hollow as they are seductive. And Rosenstein should know: he was the Facebook engineer who created the "like" button. There is growing concern that as well as addicting users, technology is contributing toward so-called continuous partial attention, severely limiting people's ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity even when the device is turned off. But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein's peers believe can be attributed to the rise of social media and the attention-based market that drives it. Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete. It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
Google will not seek to extend its contract next year with the Defense Department for artificial intelligence used to analyze drone video, squashing a controversial alliance that had raised alarms over the technological buildup between Silicon Valley and the military. Google ... has faced widespread public backlash and employee resignations for helping develop technological tools that could aid in warfighting. Google will soon release new company principles related to the ethical uses of AI. Thousands of Google employees wrote chief executive Sundar Pichai an open letter urging the company to cancel the contract, and many others signed a petition saying the company's assistance in developing combat-zone technology directly countered the company's famous "Don't be evil" motto. Several Google AI employees had told The Post they believed they wielded a powerful influence over the company's decision-making. The advanced technology's top researchers and developers are in heavy demand, and many had organized resistance campaigns or threatened to leave. The sudden announcement Friday was welcomed by several high-profile employees. Meredith Whittaker, an AI researcher and the founder of Google's Open Research group, tweeted Friday: "I am incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war."
Note: Explore a treasure trove of concise summaries of incredibly inspiring news articles which will inspire you to make a difference.
Hundreds of academics have urged Google to abandon its work on a U.S. Department of Defense-led drone program codenamed Project Maven. An open letter calling for change was published Monday by the International Committee for Robot Arms Control (ICRAC). The project is formally known as the Algorithmic Warfare Cross-Functional Team. Its objective is to turn the enormous volume of data available to DoD into actionable intelligence. More than 3,000 Google staffers signed a petition in April in protest at the company's focus on warfare. "We believe that Google should not be in the business of war," it read. "Therefore we ask that Project Maven be cancelled." The ICRAC warned this week the project could potentially be mixed with general user data and exploited to aid targeted killing. Currently, its letter has nearly 500 signatures. It stated: "We are ... deeply concerned about the possible integration of Google's data on people's everyday lives with military surveillance data, and its combined application to targeted killing ... Google has moved into military work without subjecting itself to public debate or deliberation. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief." Lieutenant Colonel Garry Floyd, deputy chief of the Algorithmic Warfare Cross-Functional Team, said ... earlier this month that Maven was already active in five or six combat locations.
Note: You can read the full employee petition on this webpage. The New York Times also published a good article on this. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and war.
Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company's involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes. The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash ... that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. "We believe that Google should not be in the business of war," says the letter, addressed to Sundar Pichai, the company's chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ever build warfare technology. That kind of idealistic stance ... is distinctly foreign to Washington's massive defense industry and certainly to the Pentagon, where the defense secretary, Jim Mattis, has often said a central goal is to increase the lethality of the United States military. Some of Google's top executives have significant Pentagon connections. Eric Schmidt, former executive chairman of Google and still a member of the executive board of Alphabet, Google's parent company, serves on a Pentagon advisory body, the Defense Innovation Board, as does a Google vice president, Milo Medin. Project Maven ... began last year as a pilot program to find ways to speed up the military application of the latest A.I. technology.
Note: The use of artificial intelligence technology for drone strike targeting is one of many ways warfare is being automated. Strong warnings against combining artificial intelligence with war have recently been issued by America's second-highest ranking military officer, tech mogul Elon Musk, and many of the world's most recognizable scientists. For more along these lines, see concise summaries of deeply revealing war news articles from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.