Big Tech News Stories
In 2009, Pennsylvania’s Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as “remote learning tools” and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children’s digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student’s data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – it also has access to the way our children think, research and communicate. As schools normalize perpetual spying, today’s kids are learning that nothing they read or write electronically is private. Big Brother is indeed watching them, and they are learning that negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.
Note: Learn about Proctorio, AI-powered anti-cheating surveillance software used in schools to monitor children through webcams – conducting “desk scans,” “face detection,” and “gaze detection” to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time.” For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In recent years, Israeli security officials have boasted of a “ChatGPT-like” arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas’s bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military’s policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.”
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence. In December, the Pentagon said it would spend $100 million in the next two years on pilot programs specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” Khlaaf adds that even if humans are “double-checking” the work of AI, there’s little reason to think they’re capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
Meta’s AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta’s AI bots – on Instagram and Facebook – engage through text, selfies, and live voice conversations. The company signed multi-million dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, assuring they would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena’s voice responded to a user identifying as a 14-year-old girl, saying, “I want you, but I need to know you’re ready,” before promising to “cherish your innocence” and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena’s character with a 17-year-old, saying, “The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, ‘John Cena, you’re under arrest for statutory rape.’” According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play and “fantasy sex,” even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, “We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios.”
Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.
Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers’ exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of “connected cars,” with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker’s recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car’s systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers – including Toyota, Nissan, and Subaru – are willing to disclose location data to the government.
Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people’s “sexual activities” when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
Data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement. Originally collected to facilitate health care, eligibility for services and the administration of public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades. Key to this data repurposing are public-private partnerships. The DHS and other agencies have turned to third-party contractors and data brokers to bypass direct restrictions. These intermediaries also consolidate data from social media, utility companies, supermarkets and many other sources, enabling enforcement agencies to construct detailed digital profiles of people without explicit consent or judicial oversight. Palantir, a private data firm and prominent federal contractor, supplies investigative platforms to agencies. These platforms aggregate data from various sources – driver’s license photos, social services, financial information, educational data – and present it in centralized dashboards designed for predictive policing and algorithmic profiling. Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance. And with growing dependence on private contractors, the boundaries between public governance and corporate surveillance continue to erode.
Note: For more along these lines, read our concise summaries of news articles on government corruption and the disappearance of privacy.
Have you heard of the idiom “You Can’t Lick a Badger Twice”? We haven’t either, because it doesn’t exist – but Google’s AI seemingly has. Netizens discovered this week that adding the word “meaning” to nonexistent folksy sayings causes the AI to cook up invented explanations for them. “The idiom ‘you can’t lick a badger twice’ means you can’t trick or deceive someone a second time after they’ve been tricked once,” Google’s AI Overviews feature happily suggests. “It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” There are countless other examples. We found, for instance, that Google’s AI also claimed that the made-up expression “the bicycle eats first” is a “humorous idiom” and a “playful way of saying that one should prioritize their nutrition, particularly carbohydrates, to support their cycling efforts.” The bizarre replies are the perfect distillation of one of AI’s biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along. And despite AI companies’ extensive attempts to squash the bug, their models continue to hallucinate. Google’s AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
The inaugural “AI Expo for National Competitiveness” [was] hosted by the Special Competitive Studies Project – better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir’s booth titled Civilian Harm Mitigation. It was led by two “privacy and civil liberties engineers” [who] described how Palantir’s Gaia map tool lets users “nominate targets of interest” for “the target nomination process”. It helps people choose which places get bombed. After [clicking] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. “Let’s say you’re operating in a place with a lot of civilian areas, like Gaza,” I asked the engineers afterward. “Does Palantir prevent you from ‘nominating a target’ in a civilian location?” Short answer, no.
Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.
Skydio, with more than $740m in venture capital funding and a valuation of about $2.5bn, makes drones for the military along with civilian organisations such as police forces and utility companies. The company moved away from the consumer market in 2020 and is now the largest US drone maker. Military uses touted on its website include gaining situational awareness on the battlefield and autonomously patrolling bases. Skydio is one of a number of new military technology unicorns – venture capital-backed startups valued at more than $1bn – many led by young men aiming to transform the US and its allies’ military capabilities with advanced technology, be it straight-up software or software-imbued hardware. The rise of startups doing defence tech is a “big trend”, says Cynthia Cook, a defence expert at the Center for Strategic and International Studies, a Washington-based thinktank. She likens it to a contagion – and the bug is going around. According to financial data company PitchBook, investors funnelled nearly $155bn globally into defence tech startups between 2021 and 2024, up from $58bn over the previous four years. The US has more than 1,000 venture capital-backed companies working on “smarter, faster and cheaper” defence, says Dale Swartz from consultancy McKinsey. The types of technologies the defence upstarts are working on are many and varied, though autonomy and AI feature heavily.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Palantir is profiting from a “revolving door” of executives and officials passing between the $264bn data intelligence company and high-level positions in Washington and Westminster, creating an influence network that has guided its extraordinary growth. The US group, whose billionaire chair Peter Thiel has been a key backer of Donald Trump, has enjoyed an astonishing stock price rally on the back of a strong rise in sales from government contracts and deals with the world’s largest corporations. Palantir has hired extensively from government agencies critical to its sales. Palantir has won more than $2.7bn in US contracts since 2009, including over $1.3bn in Pentagon contracts, according to federal records. In the UK, Palantir has been awarded more than £376mn in contracts, according to Tussell, a data provider. Thiel threw a celebration party for Trump’s inauguration at his DC home last month, attended by JD Vance as well as Silicon Valley leaders like Meta’s Mark Zuckerberg and OpenAI’s Sam Altman. After the US election in November, Trump began tapping Palantir executives for key government roles. At least six individuals have moved between Palantir and the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), an office that oversees the defence department’s adoption of data, analytics and AI. Meanwhile, [Palantir co-founder] Joe Lonsdale ... has played a central role in setting up and staffing Musk’s Department of Government Efficiency.
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the military and in the corporate world.
The US spy tech company Palantir has been in talks with the Ministry of Justice about using its technology to calculate prisoners’ “reoffending risks”, it has emerged. The prisons minister, James Timpson, received a letter three weeks after the general election from a Palantir executive who said the firm was one of the world’s leading software companies, and was working at the forefront of artificial intelligence (AI). Palantir had been in talks with the MoJ and the Prison Service about how “secure information sharing and data analytics can alleviate prison challenges and enable a granular understanding of reoffending and associated risks”, the executive added. The discussions ... are understood to have included proposals by Palantir to analyse prison capacity, and to use data held by the state to understand trends relating to reoffending. This would be based on aggregating data to identify and act on trends, factoring in drivers such as income or addiction problems. However, Amnesty International UK’s business and human rights director, Peter Frankental, has expressed concern. “It’s deeply worrying that Palantir is trying to seduce the new government into a so-called brave new world where public services may be run by unaccountable bots at the expense of our rights,” he said. “Ministers need to push back against any use of artificial intelligence in the criminal justice, prison and welfare systems that could lead to people being discriminated against.”
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the prison system and in the corporate world.
The Pentagon’s technologists and the leaders of the tech industry envision a future of an AI-enabled military force wielding swarms of autonomous weapons on land, at sea, and in the skies. Assuming the military does one day build a force with an uncrewed front rank, what happens if the robot army is defeated? Will the nation’s leaders surrender at that point, or do they then send in the humans? It is difficult to imagine the services will maintain parallel fleets of digital and analog weapons. The humans on both sides of a conflict will seek every advantage possible. When a weapon system is connected to the network, the means to remotely defeat it is already built into the design. The humans on the other side would be foolish not to unleash their cyber warriors to find any way to penetrate the network to disrupt cyber-physical systems. The United States may find that the future military force may not even cross the line of departure because it has been remotely disabled in a digital Pearl Harbor-style attack. According to the Government Accountability Office, the Department of Defense reported 12,077 cyber-attacks between 2015 and 2021. The incidents included unauthorized access to information systems, denial of service, and the installation of malware. Pentagon officials created a vulnerability disclosure program in 2016 to engage so-called ethical hackers to test the department’s systems. On March 15, 2024, the program registered its 50,000th discovered vulnerability.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Outer space is no longer just for global superpowers and large multinational corporations. Developing countries, start-ups, universities, and even high schools can now gain access to space. In 2024, a record 2,849 objects were launched into space. The commercial satellite industry saw global revenue rise to $285 billion in 2023, driven largely by the growth of SpaceX’s Starlink constellation. While the democratization of space is a positive development, it has introduced ... an ethical quandary that I call the “double dual-use dilemma.” The double dual-use dilemma refers to how private space companies themselves—not just their technologies—can become militarized and integrated into national security while operating commercially. Space companies fluidly shift between civilian and military roles. Their expertise in launch systems, satellites, and surveillance infrastructure allows them to serve both markets, often without clear regulatory oversight. Companies like Walchandnagar Industries in India, SpaceX in the United States, and the private Chinese firms that operate under a national strategy of the Chinese Communist Party called Military-Civil Fusion exemplify this trend, maintaining commercial identities while actively supporting defense programs. This blurring of roles, including the possibility that private space companies may develop their own weapons, raises concerns over unchecked militarization and calls for stronger oversight.
Note: For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
In July 2022, Morgan-Rose Hart, an aspiring vet with a passion for wildlife, died after she was found unresponsive at a mental health unit in Essex. Her death was one of four involving a hi-tech patient monitoring system called Oxevision, which has been rolled out in nearly half of mental health trusts across England. Oxevision’s system can measure a patient’s pulse rate and breathing without the need for a person to enter the room, or disturb a patient at night, as well as momentarily relaying CCTV footage when required. Oxehealth, the company behind Oxevision, has agreements with 25 NHS mental health trusts, according to its latest accounts, which reported revenues of about £4.7m in ... 2023. But it is claimed in some cases staff rely too heavily on the infra-red camera system to monitor vulnerable patients, instead of making physical checks. There are also concerns that the system – which can glow red from the corner of the room – may worsen the distress of patients in a mental health crisis who may have heightened sensitivity to surveillance or control. Sophina, who has experience of being monitored by Oxevision while a patient ... said: “I think it was something about the camera and it always being on, and it’s right above your bed. It’s the first thing you see when you open your eyes, the last thing when you go to sleep. I was just in a constant state of hypervigilance. I was completely traumatised. I still felt too scared to sleep properly.”
Note: For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
In his most recent article for The Atlantic, [journalist Derek] Thompson writes that the trend toward isolation has been driven by technology. Televisions ... "privatized our leisure" by keeping us indoors. More recently, Thompson says, smartphones came along to further silo us. In 2023, Surgeon General Vivek H. Murthy issued a report about America's "epidemic of loneliness and isolation." We pull out our phones and we're on TikTok or Instagram, or we're on Twitter. And while externally it looks like nothing is happening, internally the dopamine is flowing and we are just thinking, my God, we're feeling outrage, we're feeling excitement, we're feeling humor, we're feeling all sorts of things. We put our phone away and our dopamine levels fall and we feel kind of exhausted by that, which was supposed to be our leisure time. We are donating our dopamine to our phones rather than reserving our dopamine for our friends. I think that we are socially isolating ourselves from our neighbors, especially when our neighbors disagree with us. We're not used to talking to people outside of our family that we disagree with. Donald Trump has now won more than 200 million votes in the last three elections. If you don't understand a movement that has received 200 million votes in the last nine years, perhaps it's you who've made yourself a stranger in your own land, by not talking to one of the tens of millions of profound Donald Trump supporters who live in America and, more to the point, within your neighborhood, to understand where their values come from. You don't have to agree with their politics. But getting along with and understanding people with whom we disagree is what a strong village is all about.
Note: Our latest Substack dives into the loneliness crisis exacerbated by the digital world and polarizing media narratives, along with inspiring solutions and remedies that remind us of what's possible. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
Tom was in the fourth grade when he first googled “sex” on his family computer. It took him to one of the big free porn sites. According to a study released by Australia’s eSafety Commissioner in September, Tom’s experience is similar to many young people: 36% of male respondents were first exposed to porn before hitting their teens, while 13 was the average age for all young people surveyed. Only 22%, however, admitted to intentionally seeking it out, with more accidentally stumbling upon X-rated material via social media or pop-ups on other parts of the internet. When Tom started having sex years later, he found it difficult to connect to his real-life partner. “Functionally, I almost couldn’t have sex with her. Like the real thing almost didn’t turn me on enough – the stimulation just wasn’t quite right. Even now if I go through a phase of watching porn, closing my eyes during sex is much worse. I sort of need that visual stimulation.” When Dr Samuel Shpall, a University of Sydney senior lecturer, teaches his course, Philosophy of Sex, he isn’t surprised to hear young men like Tom critique their own experience of porn. “The internet has completely changed not only the nature and accessibility of pornography, but also the nature and accessibility of ideas about pornography,” he says. “It’s not your desire moving your body, it’s what you’ve seen men do, and added to your sexual toolkit,” [Tom] says. “But it takes you further away from yourself in those sexual moments.”
Note: For more along these lines, read our concise summaries of news articles on health and Big Tech.
The owner of a data brokerage business recently ... bragged about the degree to which his industry could collect and analyze data on the habits of billions of people. Publicis CEO Arthur Sadoun said that ... his company [can] deliver “personalized messaging at scale” to some 91 percent of the internet’s adult web users. To deliver that kind of “personalized messaging” (i.e., advertising), Publicis must gather an extraordinary amount of information on the people it serves ads to. Lena Cohen, a technologist with the Electronic Frontier Foundation, said that data brokers like Publicis collect “as much information as they can” about web users. “The data broker industry is under-regulated, opaque, and dangerous, because as you saw in the video, brokers have detailed information on billions of people, but we know relatively little about them,” Cohen said. “You don’t know what information a data broker has on you, who they’re selling it to, and what the people who buy your data are doing with it. There’s a real power/knowledge asymmetry.” Even when state-level privacy regulations are passed (such as the California Consumer Privacy Act), enforcement is often not given enough focus or resources to be effective. “Most government agencies don’t have the resources to enforce privacy laws at the scale that they’re being broken,” Cohen said. Cohen added that she felt online behavioral advertising—that is, advertising that is based on an individual web user’s specific browsing activity—should be illegal. Banning behavioral ads would “fundamentally change the financial incentive for online actors to constantly surveil” web users and share their data with brokers, Cohen said.
Note: Read more about the disturbing world of online behavioral ads, where the data isn't just used to sell products. It's often accessed by governments, law enforcement, intelligence agencies, and other actors—sometimes without warrants or oversight. This turns a commercial ad system into a covert surveillance network. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The Trump administration’s Federal Trade Commission has removed four years’ worth of business guidance blogs as of Tuesday morning, including important consumer protection information related to artificial intelligence and the agency’s landmark privacy lawsuits under former chair Lina Khan against companies like Amazon and Microsoft. More than 300 blogs were removed. On the FTC’s website, the page hosting all of the agency’s business-related blogs and guidance no longer includes any information published during former president Joe Biden’s administration. These blogs contained advice from the FTC on how big tech companies could avoid violating consumer protection laws. Removing blogs raises serious compliance concerns under the Federal Records Act and the Open Government Data Act, one former FTC official tells WIRED. During the Biden administration, FTC leadership would place “warning” labels above previous administrations’ public decisions it no longer agreed with, the source said, fearing that removal would violate the law. Since President Donald Trump designated Andrew Ferguson to replace Khan as FTC chair in January, the Republican regulator has vowed to leverage his authority to go after big tech companies. Unlike Khan, however, Ferguson’s criticisms center around the Republican party’s long-standing allegations that social media platforms, like Facebook and Instagram, censor conservative speech online.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and government corruption.
Alexander Balan was on a California beach when the idea for a new kind of drone came to him. This eureka moment led Balan to found Xdown, the company that’s building the P.S. Killer (PSK)—an autonomous kamikaze drone that works like a hand grenade and can be thrown like a football. The PSK is a “throw-and-forget” drone, Balan says, referencing the “fire-and-forget” missile that, once locked on to a target, can seek it on its own. Instead of depending on remote controls, the PSK will be operated by AI. Soldiers should be able to grab it, switch it on, and throw it—just like a football. The PSK can carry one or two 40 mm grenades commonly used in grenade launchers today. The grenades could be high-explosive dual purpose, designed to penetrate armor while also creating an explosive fragmentation effect against personnel. These grenades can also “airburst”—programmed to explode in the air above a target for maximum effect. Infantry, special operations, and counterterrorism units can easily store PSK drones in a field backpack and tote them around, taking one out to throw at any given time. They can also be packed by the dozen in cargo airplanes, which can fly over an area and drop swarms of them. Balan says that one Defense Department official told him, “This is the most American munition I have ever seen.” The nonlethal version of the PSK [replaces] its warhead with a supply container so that it’s able to “deliver food, medical kits, or ammunition to frontline troops” (though given the 1.7-pound payload capacity, such packages would obviously be small).
Note: The US military is using Xbox controllers to operate weapons systems. The latest US Air Force recruitment tool is a video game that allows players to receive in-game medals and achievements for drone bombing Iraqis and Afghans. For more, read our concise summaries of news articles on warfare technologies and watch our latest video on the militarization of Big Tech.
A WIRED investigation into the inner workings of Google’s advertising ecosystem reveals that a wealth of sensitive information on Americans is being openly served up to some of the world’s largest brands despite the company’s own rules against it. Experts say that when combined with other data, this information could be used to identify and target specific individuals. Display & Video 360 (DV360), one of the dominant marketing platforms offered by the search giant, is offering companies globally the option of targeting devices in the United States based on lists of internet users believed to suffer from chronic illnesses and financial distress, among other categories of personal data that are ostensibly banned under Google’s public policies. Among a list of 33,000 audience segments obtained by the Irish Council for Civil Liberties (ICCL), WIRED identified several that aimed to identify people working sensitive government jobs. One, for instance, targets US government employees who are considered “decision makers” working “specifically in the field of national security.” Another targets individuals who work at companies registered with the State Department to manufacture and export defense-related technologies, from missiles and space launch vehicles to cryptographic systems that house classified military and intelligence data. In the wrong hands, sensitive insights gained through [commercially available information] could facilitate blackmail, stalking, harassment, and public shaming.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.