
Big Tech News Stories

The world’s biggest tech companies are becoming more powerful than most countries. Yet too often, corporate profits are prioritized over environmental and human rights.


Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in the Human Body
2025-08-06, Neoscope
Posted: 2025-08-23 19:43:53
https://futurism.com/neoscope/google-healthcare-ai-makes-up-body-part

Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue ... highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself. The AI likely conflated the basal ganglia, an area of the brain that's associated with motor movements and habit formation, with the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia." It's an embarrassing reveal that underlines persistent and impactful shortcomings of the tech. In Google's search results, this can lead to headaches for users during their research and fact-checking efforts. But in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google's faux pas more than likely didn't result in any danger to human patients, it sets a worrying precedent, experts argue. In a medical context, AI hallucinations could easily lead to confusion and potentially even put lives at risk.

Note: For more along these lines, read our concise summaries of news articles on AI and corruption in science.


The Secret History of Tor: How a Military Project Became a Lifeline for Privacy
2025-08-08, MIT Press Reader
Posted: 2025-08-23 19:42:00
https://thereader.mitpress.mit.edu/the-secret-history-of-tor-how-a-military-p...

Tor is mostly known as the Dark Web or Dark Net, seen as an online Wild West where crime runs rampant. Yet it’s partly funded by the U.S. government, and the BBC and Facebook both have Tor-only versions to allow users in authoritarian countries to reach them. At its simplest, Tor is a distributed digital infrastructure that makes you anonymous online. It is a network of servers spread around the world, accessed using a browser called the Tor Browser, which you can download for free from the Tor Project website. When you use the Tor Browser, your signals are encrypted and bounced around the world before they reach the service you’re trying to access. This makes it difficult for governments to trace your activity or block access, as the network just routes you through a country where that access isn’t restricted. But, because you can’t protect yourself from digital crime without also protecting yourself from mass surveillance by the state, these technologies are the site of constant battles between security and law enforcement interests. The state’s claim to protect the vulnerable often masks efforts to exert control. In fact, robust, well-funded, value-driven and democratically accountable content moderation — by well-paid workers with good conditions — is a far better solution than magical tech fixes to social problems ... or surveillance tools. As more of our online lives are funneled into the centralized AI infrastructures ... tools like Tor are becoming ever more important.
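
The layered encrypt-and-relay scheme described above can be sketched in a few lines of Python. This is a toy illustration only: real Tor negotiates per-hop symmetric keys with each relay over TLS, whereas the XOR "cipher" below is a placeholder standing in for real cryptography. The point it shows is structural: the client wraps the message in one layer per relay, and each relay peels off exactly one layer, learning only the next hop.

```python
# Toy sketch of onion routing. NOT real cryptography: XOR stands in
# for the per-hop symmetric cipher that Tor actually negotiates.

def toy_encrypt(key: int, data: bytes) -> bytes:
    # Placeholder cipher: XOR every byte with a one-byte key.
    return bytes(b ^ key for b in data)

def toy_decrypt(key: int, data: bytes) -> bytes:
    return toy_encrypt(key, data)  # XOR is its own inverse

def build_onion(message: bytes, relay_keys: list[int]) -> bytes:
    # The client wraps layers innermost-first: the exit relay's
    # layer goes on first, the entry relay's layer goes on last.
    onion = message
    for key in reversed(relay_keys):
        onion = toy_encrypt(key, onion)
    return onion

def route(onion: bytes, relay_keys: list[int]) -> bytes:
    # Each relay, in circuit order, strips only its own layer.
    for key in relay_keys:
        onion = toy_decrypt(key, onion)
    return onion

keys = [0x17, 0x42, 0x5A]                # one key per relay in the circuit
packet = build_onion(b"hello", keys)
assert packet != b"hello"                # entry relay never sees plaintext
assert route(packet, keys) == b"hello"   # full circuit recovers the message
```

Because no single relay holds more than one key, no single relay can link the sender to the final destination — which is the property that makes the network resistant to both tracing and blocking.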

Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


Facebook Allegedly Detected When Teen Girls Deleted Selfies So It Could Serve Them Beauty Ads
2025-05-03, Futurism
Posted: 2025-07-18 22:08:56
https://futurism.com/facebook-beauty-targeted-ads

Surveillance capitalism came about when some crafty software engineers realized that advertisers were willing to pay big money for our personal data. The data trade is how social media platforms like Google, YouTube, and TikTok make their bones. In 2022, the data industry raked in just north of $274 billion worth of revenue. By 2030, it's expected to explode to just under $700 billion. Targeted ads on social media are made possible by analyzing four key metrics: your personal info, like gender and age; your interests, like the music you listen to or the comedians you follow; your "off app" behavior, like what websites you browse after watching a YouTube video; and your "psychographics," meaning general trends gleaned from your behavior over time, like your social values and lifestyle habits. In 2017 The Australian alleged that [Facebook] had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure." The social media company likewise tracked when adolescent girls deleted selfies, "so it can serve a beauty ad to them at that moment," according to [former employee Sarah] Wynn-Williams. Other examples of Facebook's ad lechery are said to include the targeting of young mothers based on their emotional state, as well as emotional indexes mapped to racial groups.

Note: Facebook hid its own internal research for years showing that Instagram worsened body image issues, revealing that 13% of British teenage girls reported more frequent suicidal thoughts after using the app. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.


Data Collection Can Be Effective AND Legal
2025-07-07, ScheerPost
Posted: 2025-07-18 21:59:34
https://scheerpost.com/2025/07/07/vips-data-collection-can-be-effective-and-l...

Technology already available – and already demonstrated to be effective – makes it possible for law-abiding officials, together with experienced technical people, to create a highly efficient system in which both security and privacy can be assured. Advanced technology can pinpoint and thwart corruption in the intelligence, military, and civilian domains. At its core, this requires automated analysis of attributes and transactional relationships among individuals. The large data sets in government files already contain the needed data. On the Intelligence Community side, there are ways to purge databases of irrelevant data and deny government officials the ability to spy on anyone they want. These methodologies protect the privacy of innocent people, while enhancing the ability to discover criminal threats. In order to ensure continuous legal compliance with these changes, it is necessary to establish a central technical group or organization to continuously monitor and validate compliance with the Constitution and U.S. law. Such a group would need to have the highest-level access to all agencies to ensure compliance behind the classification doors. It must be able to go into any agency to inspect its activity at any time. In addition ... it would be best to make government financial and operational transactions open to the public for review. Such an organization would go a long way toward making government truly transparent to the public.

Note: The article cites national security journalist James Risen's book on how the creation of Google was closely tied to NSA and CIA-backed efforts to privatize surveillance infrastructure. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


Blinken Ordered the Hit. Big Tech Carried It Out. African Stream Is Dead
2025-07-05, ScheerPost
Posted: 2025-07-18 21:47:34
https://scheerpost.com/2025/07/05/blinken-ordered-the-hit-big-tech-carried-it...

On Tuesday, July 1, 2025, African Stream published its final video, a defiant farewell message. With that, the once-thriving pan-African media outlet confirmed it was shutting down for good. Not because it broke the law. Not because it spread disinformation or incited violence. But because it told the wrong story, one that challenged U.S. power in Africa and resonated too deeply with Black audiences around the world. In September, U.S. Secretary of State Antony Blinken made the call and announced an all-out war against the organization, claiming, without evidence, that it was a Russian front group. Within hours, big social media platforms jumped into action. Google, YouTube, Facebook, Instagram, and TikTok all deleted African Stream’s accounts, while Twitter demonetized the organization. The company’s founder and CEO, Ahmed Kaballo ... told us that, with just one statement, Washington was able to destroy their entire operation, stating: “We are shutting down because the business has become untenable. After we got attacked by Antony Blinken, we really tried to continue, but without a platform on YouTube, Instagram, TikTok, and being demonetized on X, it just meant the ability to generate income became damn near impossible.” Washington both funds thousands of journalists around the planet to produce pro-U.S. propaganda, and, through its close connections to Silicon Valley, has the power to destroy those that do not toe the line.

Note: Learn more about the CIA’s longstanding propaganda network in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on censorship.


Hundreds of data brokers might be breaking state laws, say privacy advocates
2025-06-25, The Verge
Posted: 2025-07-07 17:16:38
https://www.theverge.com/news/693109/eff-privacy-advocates-state-investigate-...

The Electronic Frontier Foundation (EFF) and a nonprofit privacy rights group have called on several states to investigate why “hundreds” of data brokers haven’t registered with state consumer protection agencies in accordance with local laws. An analysis done in collaboration with Privacy Rights Clearinghouse (PRC) found that many data brokers have failed to register in all of the four states with laws that require it, preventing consumers in some states from learning what kinds of information these brokers collect and how to opt out. Data brokers are companies that collect and sell troves of personal information about people, including their names, addresses, phone numbers, financial information, and more. Consumers have little control over this information, posing serious privacy concerns, and attempts to address these concerns at a federal level have mostly failed. Four states — California, Texas, Oregon, and Vermont — do attempt to regulate these companies by requiring them to register with consumer protection agencies and share details about what kind of data they collect. In letters to the states’ attorneys general, the EFF and PRC say they “uncovered a troubling pattern” after scraping data broker registries. They found that many data brokers didn’t consistently register their businesses across all four states. The number of data brokers that appeared on one registry but not another includes 524 in Texas, 475 in Oregon, 309 in Vermont, and 291 in California.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


How illicit markets fueled by data breaches sell your personal information to criminals
2025-06-05, The Conversation
Posted: 2025-06-19 20:43:01
https://theconversation.com/how-illicit-markets-fueled-by-data-breaches-sell-...

When National Public Data, a company that does online background checks, was breached in 2024, criminals gained the names, addresses, dates of birth and national identification numbers such as Social Security numbers of 170 million people in the U.S., U.K. and Canada. The same year, hackers who targeted Ticketmaster stole the financial information and personal data of more than 560 million customers. In so-called stolen data markets, hackers sell personal information they illegally obtain to others, who then use the data to engage in fraud and theft for profit. Every piece of personal data captured in a data breach – a passport number, Social Security number or login for a shopping service – has inherent value. Offenders can ... assume someone else’s identity, make a fraudulent purchase or steal services such as streaming media or music. Some vendors also offer distinct products such as credit reports, Social Security numbers and login details for different paid services. The price for pieces of information varies. A recent analysis found credit card data sold for US$50 on average, while Walmart logins sold for $9. However, the pricing can vary widely across vendors and markets. The rate of return can be exceptional. An offender who buys 100 cards for $500 can recoup costs if only 20 of those cards are active and can be used to make an average purchase of $30. The result is that data breaches are likely to continue as long as there is demand.
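
The rate-of-return arithmetic in the passage above checks out; a quick sketch, using only the figures the article itself gives (100 cards for $500, a 20% active rate, a $30 average purchase):

```python
# Back-of-envelope check of the stolen-card economics described above.
# All figures come from the article; nothing here is independent data.
cards_bought = 100
total_cost = 500                       # $500 for the batch, i.e. $5 per card
active_rate = 0.20                     # only 20 of the 100 cards still work
avg_purchase = 30                      # average fraudulent purchase per card

expected_return = cards_bought * active_rate * avg_purchase
profit = expected_return - total_cost

print(expected_return)                 # 600.0 — already above the $500 outlay
print(profit)                          # 100.0 — profitable even at a 20% hit rate
```

Even a modest 20% hit rate clears the purchase cost, which is why demand for breached data persists.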

Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


How Palantir Is Expanding the Surveillance State
2025-06-02, Reason
Posted: 2025-06-11 16:02:23
https://reason.com/2025/06/02/palantir-paves-way-for-trump-police-state/

Palantir has long been connected to government surveillance. It was founded in part with CIA money, it has served as an Immigration and Customs Enforcement (ICE) contractor since 2011, and it's been used for everything from local law enforcement to COVID-19 efforts. But the prominence of Palantir tools in federal agencies seems to be growing under President Trump. "The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon," reports The New York Times, noting that this figure "does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent." Palantir technology has largely been used by the military, the intelligence agencies, the immigration enforcers, and the police. But its uses could be expanding. Representatives of Palantir are also speaking to at least two other agencies—the Social Security Administration and the Internal Revenue Service. Along with the Trump administration's efforts to share more data across federal agencies, this signals that Palantir's huge data analysis capabilities could wind up being wielded against all Americans. Right now, the Trump administration is using Palantir tools for immigration enforcement, but those tools could easily be applied to other ... targets.

Note: Read about Palantir's recent, first-ever AI warfare conference. For more along these lines, read our concise summaries of news articles on Big Tech and intelligence agency corruption.


For Tech Whistleblowers, There’s Safety in Numbers
2025-05-19, Wired
Posted: 2025-05-28 13:06:17
https://www.wired.com/story/amber-scorah-psst-tech-whistleblowers/

Amber Scorah knows only too well that powerful stories can change society—and that powerful organizations will try to undermine those who tell them. While working at a media outlet that connects whistleblowers with journalists, she noticed parallels in the coercive tactics used by groups trying to suppress information. “There is a sort of playbook that powerful entities seem to use over and over again,” she says. “You expose something about the powerful, they try to discredit you, people in your community may ostracize you.” In September 2024, Scorah cofounded Psst, a nonprofit that helps people in the tech industry or the government share information of public interest with extra protections—with lots of options for specifying how the information gets used and how anonymous a person stays. Psst’s main offering is a “digital safe”—which users access through an anonymous end-to-end encrypted text box hosted on Psst.org, where they can enter a description of their concerns. What makes Psst unique is something it calls its “information escrow” system—users have the option to keep their submission private until someone else shares similar concerns about the same company or organization. Combining reports from multiple sources defends against some of the isolating effects of whistleblowing and makes it harder for companies to write off a story as the grievance of a disgruntled employee, says Psst cofounder Jennifer Gibson.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.


CFPB Quietly Kills Rule to Shield Americans From Data Brokers
2025-05-14, Wired
Posted: 2025-05-28 13:04:33
https://www.wired.com/story/cfpb-quietly-kills-rule-to-shield-americans-from-...

The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that “threaten our personal safety and undermine America’s national security.” The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information—often without individuals’ knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


U.S. Spy Agencies Are Getting a One-Stop Shop to Buy Your Most Sensitive Personal Data
2025-05-22, The Intercept
Posted: 2025-05-28 13:02:49
https://theintercept.com/2025/05/22/intel-agencies-buying-data-portal-privacy/

The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There’s simply too much data on sale from too many corporations and brokers. So the government has a plan for a one-stop shop. The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be “misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons.” The “Intelligence Community Data Consortium” will provide a single convenient web-based storefront for searching and accessing this data, along with a “data marketplace” for purchasing “the best data at the best price,” faster than ever before. It will be designed for the 18 different federal agencies and offices that make up the U.S. intelligence community, including the National Security Agency, CIA, FBI Intelligence Branch, and Homeland Security’s Office of Intelligence and Analysis — though one document suggests the portal will also be used by agencies not directly related to intelligence or defense.

Note: For more along these lines, read our concise summaries of intelligence agency corruption and the disappearance of privacy.


Tracking apps might make us feel safe, but blurring the line between care and control can be dangerous
2025-05-19, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-28 13:01:06
https://www.theguardian.com/commentisfree/2025/may/19/tracking-apps-might-mak...

According to recent research by the Office of the eSafety Commissioner, “nearly 1 in 5 young people believe it’s OK to track their partner whenever they want”. Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keeps them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you’re not doing anything wrong, they’re asking (well, demanding) that we trust them. But it’s not about trust, it’s about control and disciplining behaviour. “Nothing to hide; nothing to fear” is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what’s considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


These Internal Documents Show Why We Shouldn’t Trust Porn Companies
2025-05-10, New York Times
Posted: 2025-05-28 12:59:20
https://www.nytimes.com/2025/05/10/opinion/pornhub-children-documents.html

What goes through the minds of people working at porn companies profiting from videos of children being raped? Thanks to a filing error in a Federal District Court in Alabama, releasing thousands of pages of internal documents from Pornhub that were meant to be sealed, we now know. One internal document indicates that Pornhub as of May 2020 had 706,000 videos available on the site that had been flagged by users for depicting rape or assaults on children or for other problems. In the message traffic, one employee advises another not to copy a manager when they find sex videos with children. The other has the obvious response: “He doesn’t want to know how much C.P. we have ignored for the past five years?” C.P. is short for child pornography. One private memo acknowledged that videos with apparent child sexual abuse had been viewed 684 million times before being removed. Pornhub produced these documents during discovery in a civil suit by an Alabama woman who beginning at age 16 was filmed engaging in sex acts, including at least once when she was drugged and then raped. These videos of her were posted on Pornhub and amassed thousands of views. One discovery memo showed that there were 155,447 videos on Pornhub with the keyword “12yo.” Other categories that the company tracked were “11yo,” “degraded teen,” “under 10” and “extreme choking.” (It has since removed these searches.) Google ... has been central to the business model of companies publishing nonconsensual imagery. Google also directs users to at least one website that monetizes assaults on victims of human trafficking.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.


OpenAI ex-chief scientist planned for a doomsday bunker for the day when machines become smarter than man
2025-05-20, AOL News
Posted: 2025-05-28 12:57:50
https://www.aol.com/openai-ex-chief-scientist-planned-115047191.html

If there is one thing that Ilya Sutskever knows, it is the opportunities—and risks—that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. “We’re definitely going to build a bunker before we release AGI,” Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. “Of course, it’s going to be optional whether you want to get into the bunker,” he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.

Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.


How the Pentagon built Silicon Valley
2024-08-20, Quincy Center for Responsible Statecraft
Posted: 2025-05-23 13:34:35
https://responsiblestatecraft.org/silicon-valley/

Department of Defense spending is increasingly going to large tech companies including Microsoft, Google parent company Alphabet, Oracle, and IBM. OpenAI recently brought on former U.S. Army general and National Security Agency Director Paul M. Nakasone to its Board of Directors. The U.S. military discreetly, yet frequently, collaborated with prominent tech companies through thousands of subcontractors through much of the 2010s, obfuscating the extent of the two sectors’ partnership from tech employees and the public alike. The long-term, deep-rooted relationship between the institutions, spurred by massive Cold War defense and research spending and bound ever tighter by the sectors’ revolving door, ensures that advances in the commercial tech sector benefit the defense industry’s bottom line. Military tech spending has produced myriad landmark inventions. The internet, for example, began as an Advanced Research Projects Agency (ARPA, now known as Defense Advanced Research Projects Agency, or DARPA) research project called ARPANET, the first network of computers. Decades later, graduate students Sergey Brin and Larry Page received funding from DARPA, the National Science Foundation, and U.S. intelligence community-launched development program Massive Digital Data Systems to create what would become Google. Other prominent DARPA-funded inventions include transit satellites, a precursor to GPS, and the iPhone Siri app, which, instead of being picked up by the military, was ultimately adapted to consumer ends by Apple.

Note: Watch our latest video on the militarization of Big Tech. For more, read our concise summaries of news articles on AI, warfare technology, and Big Tech.


If the best defence against AI is more AI, this could be tech’s Oppenheimer moment
2025-03-02, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-23 13:28:34
https://www.theguardian.com/technology/2025/mar/02/ai-oppenheimer-moment-karp...

In 2003 [Alexander Karp] – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA! The lesson that Karp and his co-author draw [in their book The Technological Republic: Hard Power, Soft Belief and the Future of the West] is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and intelligence agency corruption.


Google Worried It Couldn’t Control How Israel Uses Project Nimbus, Files Reveal
2025-05-12, The Intercept
Posted: 2025-05-23 13:26:44
https://theintercept.com/2025/05/12/google-nimbus-israel-military-ai-human-ri...

Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.

Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.


Facebook inflicted ‘lifelong trauma’ on Kenyan content moderators, campaigners say, as more than 140 are diagnosed with PTSD
2024-12-22, CNN News
Posted: 2025-05-23 13:24:30
https://www.cnn.com/2024/12/22/business/facebook-content-moderators-kenya-pts...

Campaigners have accused Facebook parent Meta of inflicting “potentially lifelong trauma” on hundreds of content moderators in Kenya, after more than 140 were diagnosed with PTSD and other mental health conditions. The diagnoses were made by Dr. Ian Kanyanya, the head of mental health services at Kenyatta National Hospital in Kenya’s capital, Nairobi, and filed with the city’s employment and labor relations court on December 4. Content moderators help tech companies weed out disturbing content on their platforms and are routinely managed by third-party firms, often in developing countries. For years, critics have voiced concerns about the impact this work can have on moderators’ mental well-being. Kanyanya said the moderators he assessed encountered “extremely graphic content on a daily basis which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse ... just to name a few.” Of the 144 content moderators who volunteered to undergo psychological assessments – out of 185 involved in the legal claim – 81% were classed as suffering from “severe” PTSD, according to Kanyanya. The class action grew out of a previous suit launched in 2022 by a former Facebook moderator, which alleged that the employee was unlawfully fired by Samasource Kenya after organizing protests against unfair working conditions.

Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.


Whistleblower’s exposé of the cult of Zuckerberg reveals peril of power-crazy tech bros
2025-03-15, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-23 13:22:47
https://www.theguardian.com/commentisfree/2025/mar/15/whistleblowers-cult-zuc...

Careless People [is] a whistleblowing book by a former [Meta] senior employee, Sarah Wynn-Williams. In the 78-page document that Wynn-Williams filed to the SEC ... it was alleged that Meta had for years been making numerous efforts to get into the biggest market in the world. These efforts included: developing a censorship system for China in 2015 that would allow a “chief editor” to decide what content to remove, and the ability to shut down the entire site during “social unrest”; assembling a “China team” in 2014 for a project to develop China-compliant versions of Meta’s services; considering the weakening of privacy protections for Hong Kong users; building a specialised censorship system for China with automatic detection of restricted terms; and restricting the account of Guo Wengui, a Chinese government critic. In her time at Meta, Wynn-Williams observed many of these activities at close range. Clearly, nobody in Meta has heard of the Streisand effect, “an unintended consequence of attempts to hide, remove or censor information, where the effort instead increases public awareness of the information”. What strikes the reader is that Meta and its counterparts are merely the digital equivalents of the oil, mining and tobacco conglomerates of the analogue era.

Note: A former Meta insider revealed that the company’s policy on banning hate groups and terrorists was quietly reshaped under political pressure, with US government agencies influencing what speech is permitted on the platform. Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.


Genetic data is another asset to be exploited – beware who has yours
2025-04-05, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-23 13:20:51
https://www.theguardian.com/science/2025/apr/05/genetic-data-breach-23andme-b...

Ever thought of having your genome sequenced? 23andMe ... describes itself as a “genetics-led consumer healthcare and biotechnology company empowering a healthier future”. Its share price had fallen precipitously following a data breach in October 2023 that harvested the profile and ethnicity data of 6.9 million users – including name, profile photo, birth year, location, family surnames, grandparents’ birthplaces, ethnicity estimates and mitochondrial DNA. So on 24 March it filed for so-called Chapter 11 proceedings in a US bankruptcy court. At which point the proverbial ordure hit the fan, because the bankruptcy proceedings involve 23andMe seeking authorisation from the court to commence “a process to sell substantially all of its assets”. And those assets are ... the genetic data of the company’s 15 million users. These assets are very attractive to many potential purchasers. The really important thing is that genetic data is permanent, unique and immutable. If your credit card is hacked, you can always get a replacement. But you can’t get a new genome. When 23andMe’s data assets come up for sale, the queue of likely buyers is going to be long, with health insurance and pharmaceutical giants at the front, followed by hedge funds, private equity vultures and advertisers, with marketers bringing up the rear. Since these outfits are not charitable ventures, it’s a racing certainty that they have plans for exploiting those data assets.

Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.