Big Tech Media Articles
Alexander Balan was on a California beach when the idea for a new kind of drone came to him. This eureka moment led Balan to found Xdown, the company that’s building the P.S. Killer (PSK)—an autonomous kamikaze drone that works like a hand grenade and can be thrown like a football. The PSK is a “throw-and-forget” drone, Balan says, referencing the “fire-and-forget” missile that, once locked on to a target, can seek it on its own. Instead of depending on remote controls, the PSK will be operated by AI. Soldiers should be able to grab it, switch it on, and throw it—just like a football. The PSK can carry one or two 40 mm grenades commonly used in grenade launchers today. The grenades could be high-explosive dual purpose, designed to penetrate armor while also creating an explosive fragmentation effect against personnel. These grenades can also “airburst”—programmed to explode in the air above a target for maximum effect. Infantry, special operations, and counterterrorism units can easily store PSK drones in a field backpack and tote them around, taking one out to throw at any given time. They can also be packed by the dozen in cargo airplanes, which can fly over an area and drop swarms of them. Balan says that one Defense Department official told him “This is the most American munition I have ever seen.” The nonlethal version of the PSK [replaces] its warhead with a supply container so that it’s able to “deliver food, medical kits, or ammunition to frontline troops” (though given the 1.7-pound payload capacity, such packages would obviously be small).
Note: The US military is using Xbox controllers to operate weapons systems. The latest US Air Force recruitment tool is a video game that allows players to receive in-game medals and achievements for drone bombing Iraqis and Afghans. For more, read our concise summaries of news articles on warfare technologies and watch our latest video on the militarization of Big Tech.
A WIRED investigation into the inner workings of Google’s advertising ecosystem reveals that a wealth of sensitive information on Americans is being openly served up to some of the world’s largest brands despite the company’s own rules against it. Experts say that when combined with other data, this information could be used to identify and target specific individuals. Display & Video 360 (DV360), one of the search giant’s dominant marketing platforms, offers companies globally the option of targeting devices in the United States based on lists of internet users believed to suffer from chronic illnesses and financial distress, among other categories of personal data that are ostensibly banned under Google’s public policies. Among a list of 33,000 audience segments obtained by the Irish Council for Civil Liberties (ICCL), WIRED identified several that aimed to identify people working in sensitive government jobs. One, for instance, targets US government employees who are considered “decision makers” working “specifically in the field of national security.” Another targets individuals who work at companies registered with the State Department to manufacture and export defense-related technologies, from missiles and space launch vehicles to cryptographic systems that house classified military and intelligence data. In the wrong hands, sensitive insights gained through [commercially available information] could facilitate blackmail, stalking, harassment, and public shaming.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm". Human Rights Watch has criticised the decision, telling the BBC that AI can "complicate accountability" for battlefield decisions that "may have life or death consequences." Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems. "For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever," said Anna Bacciarelli, senior AI researcher at Human Rights Watch. The "unilateral" decision also showed "why voluntary principles are not an adequate substitute for regulation and binding law," she added. In January, MPs argued that the conflict in Ukraine had shown the technology "offers serious military advantage on the battlefield." As AI becomes more widespread and sophisticated, it would "change the way defence works, from the back office to the frontline," Emma Lewell-Buck MP ... wrote. Concern is greatest over the potential for AI-powered weapons capable of taking lethal action autonomously, with campaigners arguing controls are urgently needed. The Doomsday Clock - which symbolises how near humanity is to destruction - cited that concern in its latest assessment of the dangers mankind faces.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
Instagram has released a long-promised “reset” button to U.S. users that clears the algorithms it uses to recommend you photos and videos. TikTok offers a reset button, too. And with a little bit more effort, you can also force YouTube to start fresh with how it recommends what videos to play next. It means you now have the power to say goodbye to endless recycled dance moves, polarizing Trump posts, extreme fitness challenges, dramatic pet voice-overs, fruit-cutting tutorials, face-altering filters or whatever else has taken over your feed like a zombie. I know some people love what their apps show them. But the reality is, none of us are really in charge of our social media experience anymore. Instead of just friends, family and the people you choose to follow, nowadays your feed or For You Page is filled with recommended content you never asked for, selected by artificial-intelligence algorithms. Their goal is to keep you hooked, often by showing you things you find outrageous or titillating — not joyful or calming. And we know from Meta whistleblower Frances Haugen and others that outrage algorithms can take a particular toll on young people. That’s one reason the platforms are offering a reset now: they’re under pressure to give teens and families more control. So how does the algorithm go awry? It tries to get to know you by tracking every little thing you do. The apps even analyze your “dwell time,” noticing when you unconsciously scroll more slowly.
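To make the “dwell time” point concrete, here is a minimal sketch of how a feed ranker might fold passive lingering into an interest profile. Every name, weight, and threshold below is an illustrative assumption, not the actual code of Instagram, TikTok, or YouTube.

```python
# Hypothetical sketch: a feed ranker that weights "dwell time" as an
# engagement signal. All names and weights are illustrative assumptions,
# not any platform's actual code.

from dataclasses import dataclass

@dataclass
class Interaction:
    post_id: str
    topic: str
    dwell_seconds: float  # how long the post stayed on screen
    liked: bool

def engagement_score(event: Interaction) -> float:
    """Score one interaction; lingering counts even without a like."""
    score = min(event.dwell_seconds / 10.0, 1.0)  # cap the dwell contribution
    if event.liked:
        score += 1.0
    return score

def update_topic_weights(history: list[Interaction]) -> dict[str, float]:
    """Aggregate per-topic interest from passive and active signals."""
    weights: dict[str, float] = {}
    for event in history:
        weights[event.topic] = weights.get(event.topic, 0.0) + engagement_score(event)
    return weights

# A user who merely slows down on outrage posts still trains the feed:
history = [
    Interaction("p1", "outrage", dwell_seconds=9.0, liked=False),
    Interaction("p2", "calm", dwell_seconds=1.0, liked=False),
]
print(update_topic_weights(history))  # {'outrage': 0.9, 'calm': 0.1}
```

In this sketch, a “reset” simply discards the accumulated weights, which is effectively what the new buttons do with the platforms’ far richer interest profiles.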
Note: Read about the developer who got permanently banned from Meta for developing a tool called “Unfollow Everything” that lets users, well, unfollow everything and restart their feeds fresh. For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
In the nineteen-fifties, the Leo Burnett advertising agency helped invent Tony the Tiger, a cartoon mascot who was created to promote Frosted Flakes to children. In 1973, a trailblazing nutritionist named Jean Mayer warned the U.S. Senate Select Committee on Nutrition and Human Needs that ... junk foods could be described as empty calories. He questioned why it was legal to apply the term “cereals” to products that were more than fifty-per-cent sugar. Children’s-food advertisements, he claimed, were “nothing short of nutritional disasters.” Mayer’s warnings, however, did not lead to a string of state bans on junk food. Advertising continued to target children, and consumers of all ages were free to buy and consume any amount of Frosted Flakes. This health issue was ultimately seen as one that families should manage on their own. In recent years, experts have been warning that social media harms children. Frances Haugen, a former Facebook data scientist who became a whistle-blower, told a Senate subcommittee that her ex-employer’s “profit optimizing machine is generating self-harm and self-hate—especially for vulnerable groups, like teenage girls.” “It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents,” Vivek Murthy, whose second term as the U.S. Surgeon General ended on Monday, wrote in an opinion piece last year.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
The Defense Advanced Research Projects Agency, the Pentagon's top research arm, wants to find out if red blood cells could be modified in novel ways to protect troops. The DARPA program, called the Red Blood Cell Factory, is looking for researchers to study the insertion of "biologically active components" or "cargoes" in red blood cells. The hope is that modified cells would enhance certain biological systems, "thus allowing recipients, such as warfighters, to operate more effectively in dangerous or extreme environments." Red blood cells could act like a truck, carrying "cargo" or special protections, to all parts of the body, since they already circulate oxygen everywhere, [said] Christopher Bettinger, a professor of biomedical engineering overseeing the program. "What if we could add in additional cargo ... inside of that disc," Bettinger said, referring to the shape of red blood cells, "that could then confer these interesting benefits?" The research could impact the way troops battle diseases that reproduce in red blood cells, such as malaria, Bettinger hypothesized. "Imagine an alternative world where we have a warfighter that has a red blood cell that's accessorized with a compound that can sort of defeat malaria," Bettinger said. In 2019, the Army released a report called "Cyborg Soldier 2050," which laid out a vision of the future where troops would benefit from neural and optical enhancements, though the report acknowledged ethical and legal concerns.
Note: Read about the Pentagon's plans to use our brains as warfare, describing how the human body is war's next domain. Learn more about biotech dangers.
On an episode of "The Joe Rogan Experience" released Friday, Meta CEO Mark Zuckerberg painted a picture of Biden administration officials berating Facebook staff during requests to remove certain content from the social media platform. "Basically, these people from the Biden administration would call up our team and, like, scream at them and curse," Zuckerberg told ... Joe Rogan. "It just got to this point where we were like, 'No, we're not gonna, we're not gonna take down things that are true. That's ridiculous.'" In a letter last year to Rep. Jim Jordan, the Republican chair of the House Judiciary Committee, Zuckerberg said that the White House “repeatedly pressured” Facebook to remove “certain COVID-19 content including humor and satire.” Zuckerberg said Facebook, which is owned by Meta, acquiesced at times, while suggesting that different decisions would be made going forward. On Rogan's show, Zuckerberg said the administration had asked Facebook to remove from its platform a meme that showed actor Leonardo DiCaprio pointing at a TV screen advertising a class action lawsuit for people who once took the Covid vaccine. "They're like, 'No, you have to take that down,'" Zuckerberg said, adding, "We said, 'No, we're not gonna. We're not gonna take down things that are, that are true.'" Zuckerberg ... also announced that his platforms — Facebook and Instagram — would relax rules related to political content.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
We published the piece on February 22, [2020], under the headline “Don’t Buy China’s Story: The Coronavirus May Have Leaked from a Lab.” It immediately went viral, its audience swelling for a few hours as readers liked and shared it over and over again. I had a data tracker on my screen that showed our web traffic, and I could see the green line for my story surging up and up. Then suddenly, for no reason, the green line dropped like a stone. No one was reading or sharing the piece. It was as though it had never existed at all. Seeing the story’s traffic plunge, I was stunned. How does a story that thousands of people are reading and sharing suddenly just disappear? Later, the [New York Post’s] digital editor gave me the answer: Facebook’s fact-checking team had flagged the piece as “false information.” I was seeing Big Tech censorship of the American media in real time, and it chilled me to my bones. What happened next was even more chilling. I found out that an “expert” who advised Facebook to censor the piece had a major conflict of interest. Professor Danielle E. Anderson had regularly worked with researchers at the Wuhan Institute of Virology ... and she told Facebook’s fact-checkers that the lab had “strict control and containment measures.” Facebook’s “fact-checkers” took her at her word. An “expert” had spoken, Wuhan’s lab was deemed secure, and the Post’s story was squashed in the interest of public safety. In 2021, in the wake of a lawsuit, Facebook admitted that its “fact checks” are just “opinion,” used by social media companies to police what we watch and read.
Note: Watch our brief newsletter recap video about censorship and the suppression of the COVID lab leak theory. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Meta CEO Mark Zuckerberg said Facebook has done “too much censorship” as he revealed the social network is scrapping fact-checking and restrictions on free speech as President-elect Donald Trump prepares to return to the White House. The 40-year-old tech tycoon — who dined with Trump at Mar-a-Lago the day before Thanksgiving and gave him a pair of Meta Ray Ban sunglasses, with Meta later donating $1 million to his inaugural fund — claimed on Tuesday that the dramatic about-face was a signal that the company is returning to its original focus on free speech. The stunning reversal will include moving Meta’s content moderation team from deep-blue California to right-leaning Texas in order to insulate the group from cultural bias. “As we work to promote free expression, I think that will help build trust to do this work in places where there’s less concern about the bias of our team,” the Meta boss said. Facebook will do away with “restrictions on topics like immigration and gender that are just out of touch with mainstream discourse,” Zuckerberg said. “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas,” he said, adding: “It’s gone too far.” In late July, Facebook acknowledged that it censored the image of President-elect Donald Trump raising his fist in the immediate aftermath of the assassination attempt in Pennsylvania.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Mark Zuckerberg has announced he is scrapping fact-checks on Facebook, claiming the labels intended to warn against fake news have “destroyed more trust than they have created”. Facebook’s fact-checkers have helped debunk hundreds of fake news stories and false rumours – however, there have been several high-profile missteps. In 2020, Facebook and Twitter took action to halt the spread of an article by the New York Post based on leaked emails from a laptop belonging to Joe Biden’s son, Hunter Biden. As coronavirus spread around the world, suggestions that the vaccine could have been man-made were suppressed by Facebook. An opinion column in the New York Post with the headline: “Don’t buy China’s story: The coronavirus may have leaked from a lab” was labelled as “false information”. In 2021, Facebook lifted its ban on claims the virus could have been “man-made”. It was months later that further doubts emerged over the origins of coronavirus. In 2021, Facebook ... was accused of wrongly fact-checking a story about Pfizer’s Covid-19 vaccine. A British Medical Journal (BMJ) report, based on whistleblowing, alleged poor clinical practices at a contractor carrying out research for Pfizer. However, Facebook’s fact-checkers added a label arguing the story was “missing context” and could “mislead people”. Furious debates raged over the effectiveness of masks in preventing the spread of Covid-19. Facebook’s fact-checkers were accused of overzealously clamping down on articles that questioned the science behind [mask] mandates.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Meta CEO Mark Zuckerberg announced Tuesday that his social media platforms — which include Facebook and Instagram — will be getting rid of fact-checking partners and replacing them with a “community notes” model like that found on X. For a decade now, liberals have wrongly treated Trump’s rise as a problem of disinformation gone wild, and one that could be fixed with just enough fact-checking. Disinformation, though, has been a convenient narrative for a Democratic establishment unwilling to reckon with its own role in upholding anti-immigrant narratives, repeating baseless fearmongering over crime rates, and failing to support the multiracial working class. Long dead is the idea that social media platforms like X or Instagram are either trustworthy news publishers, sites for liberatory community building, or hubs for digital democracy. “The internet may once have been understood as a commons of information, but that was long ago,” wrote media theorist Rob Horning in a recent newsletter. “Now the main purpose of the internet is to place its users under surveillance, to make it so that no one does anything without generating data, and to assure that paywalls, rental fees, and other sorts of rents can be extracted for information that may have once seemed free but perhaps never wanted to be.” Social media platforms are huge corporations for which we, as users, produce data to be mined as a commodity to sell to advertisers — and government agencies. The CEOs of these corporations are craven and power-hungry.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of. The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company. The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people. Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space. RTB is regularly exploited for government surveillance. The privacy and security dangers of RTB are inherent to its design. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day.
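As a rough illustration of the mechanics described above, here is a minimal sketch of a simplified bid request being broadcast to potential bidders. The field names loosely follow the public OpenRTB conventions, but all values and bidder endpoints are hypothetical, and real bid requests carry far more detail.

```python
# Minimal sketch of real-time bidding (RTB), loosely modeled on public
# OpenRTB conventions. Field values and bidder endpoints are hypothetical;
# real bid requests carry far more data about the user and the page.

import json

def build_bid_request() -> dict:
    """Package user and context data the way an ad auction company might."""
    return {
        "id": "auction-123",
        "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot
        "site": {"page": "https://example.com/article"},
        "device": {
            "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
            "ip": "203.0.113.7",
            "geo": {"lat": 37.77, "lon": -122.42},           # location
            "ua": "Mozilla/5.0 ...",                          # device details
        },
        "user": {"id": "user-abc", "keywords": "fitness,finance"},
    }

def broadcast(bid_request: dict, bidders: list[str]) -> None:
    """Send the same "bidstream data" to every potential advertiser.

    Each recipient can store this data whether or not it ever bids,
    which is the surveillance risk the article describes.
    """
    payload = json.dumps(bid_request)
    for endpoint in bidders:
        print(f"POST {endpoint} <- {len(payload)} bytes of bidstream data")

broadcast(build_bid_request(), [
    "https://bidder-one.example/rtb",  # hypothetical ad-buying platforms
    "https://bidder-two.example/rtb",
])
```

The point of the sketch is the fan-out: a single page view becomes identical copies of personal data at every endpoint, and this repeats hundreds of times per day per user.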
Note: Clearview AI scraped billions of faces off of social media without consent and at least 600 law enforcement agencies tapped into its database. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The U.S. Court of Appeals for the 6th Circuit ... threw out the Federal Communication Commission’s Net Neutrality rules, rejecting the agency’s authority to protect broadband consumers and handing phone and cable companies a major victory. The FCC moved in April 2024 to restore Net Neutrality and the essential consumer protections that rest under Title II of the Communications Act, which had been gutted under the first Trump administration. This was an all-too-rare example in Washington of a government agency doing what it’s supposed to do: Listening to the public and taking their side against the powerful companies that for far too long have captured ... D.C. And the phone and cable industry did what they always do when the FCC does anything good: They sued to overturn the rules. The court ruled against the FCC and deemed internet access to be an “information service” largely free from FCC oversight. This court’s warped decision scraps the common-sense rules the FCC restored in April. The result is that throughout most of the country, the most essential communications service of this century will be operating without any real government oversight, with no one to step in when companies rip you off or slow down your service. This ruling is far out of step with the views of the American public, who overwhelmingly support real Net Neutrality. They’re tired of paying too much, and they hate being spied on.
Note: Read about the communities building their own internet networks in the face of net neutrality rollbacks. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
Within Meta’s Counterterrorism and Dangerous Organizations team, [Hannah] Byrne helped craft one of the most powerful and secretive censorship policies in internet history. She and her team helped draft the rulebook that applies to the world’s most diabolical people and groups: the Ku Klux Klan, cartels, and terrorists. Meta bans these so-called Dangerous Organizations and Individuals, or DOI, from using its platforms, but further prohibits its billions of users from engaging in “glorification,” “support,” or “representation” of anyone on the list. As an armed white supremacist group with credible allegations of human rights violations hanging over it, Azov [Battalion] had landed on the Dangerous Organizations list. Following the Russian invasion of Ukraine, Meta not only moved swiftly to allow users to cheer on the Azov Battalion, but also loosened its rules around incitement, hate speech, and gory imagery so Ukrainian civilians could share images of the suffering around them. Within weeks, Byrne found the moral universe around her inverted: The heavily armed hate group sanctioned by Congress since 2018 were now freedom fighters resisting occupation, not terroristic racists. It seems most galling for Byrne to compare how malleable Meta’s Dangerous Organizations policy was for Ukraine, and how draconian it has felt for those protesting the war in Gaza. “I know the U.S. government is in constant contact with Facebook employees,” she said. Meta’s censorship systems are “basically an extension of the government,” Byrne said. “You want military, Department of State, CIA people enforcing free speech? That is what is concerning.”
Note: Read more about Facebook's secret blacklist, and how Facebook censored reporting of war crimes in Gaza but allowed praise for the neo-Nazi Azov Brigade on its platform. Going deeper, click here if you want to know the real history behind the Russia-Ukraine war. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
“Anonymity is a shield from the tyranny of the majority,” wrote Supreme Court Justice John Paul Stevens in a 1995 ruling affirming Americans’ constitutional right to engage in anonymous political speech. That shield has weakened in recent years due to advances in the surveillance technology available to law enforcement. Everything from social media posts, to metadata about phone calls, to the purchase information collected by data brokers, to location data showing every step taken, is available to law enforcement — often without a warrant. Avoiding all of this tracking would require such extrication from modern social life that it would be virtually impossible for most people. International Mobile Subscriber Identity (IMSI) catchers, or Stingrays, impersonate cell phone towers to collect the unique ID of a cell phone’s SIM card. Geofence warrants, also known as reverse location warrants ... let law enforcement request location data from apps on your phone or from tech companies. Data brokers are companies that assemble information about people from a variety of usually public sources. Tons of websites and apps that everyday people use collect information on them, and this information is often sold to third parties who can aggregate or piece together someone’s profile across the sites that are tracking them. Companies like Fog Data Science, LexisNexis, Precisely and Acxiom not only possess data on billions of people, they also ... have information about someone’s political preferences as well as demographic information. Surveillance of social media accounts allows police to gather vast amounts of information about how protests are organized ... frequently utilizing networks of fake accounts. One firm advertised the ability to help police identify “activists and disruptors” at protests.
Note: For more along these lines, explore concise summaries of news articles on police corruption and the erosion of civil liberties from reliable major media sources.
Beheadings, mass killings, child abuse, hate speech – all of it ends up in the inboxes of a global army of content moderators. You don’t often see or hear from them – but these are the people whose job it is to review and then, when necessary, delete content that either gets reported by other users, or is automatically flagged by tech tools. Moderators are often employed by third-party companies, but they work on content posted directly on to the big social networks including Instagram, TikTok and Facebook. “If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator. “But in the background, I personally was moderating, in the hundreds, horrific and traumatising videos. “I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.” In 2020, Meta, then known as Facebook, agreed to pay a settlement of $52m (£40m) to moderators who had developed mental health issues. The legal action was initiated by a former moderator [who] described moderators as the “keepers of souls”, because of the amount of footage they see containing the final moments of people’s lives. The ex-moderators I spoke to all used the word “trauma” in describing the impact the work had on them. One ... said he found it difficult to interact with his wife and children because of the child abuse he had witnessed. What came across, very powerfully, was the immense pride the moderators had in the roles they had played in protecting the world from online harm.
Note: Read more about the disturbing world of content moderation. For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technology like automated license plate readers, drones, facial recognition, and social media monitoring has added a uniquely dangerous element to the physical surveillance and intimidation that law enforcement has long practiced. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement partnerships with companies like Clearview AI, which scraped billions of images from the internet for its facial recognition database ... have spread across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
Air fryers that gather your personal data and audio speakers “stuffed with trackers” are among examples of smart devices engaged in “excessive” surveillance, according to the consumer group Which? The organisation tested three air fryers ... each of which requested permission to record audio on the user’s phone through a connected app. Which? found the app provided by the company Xiaomi connected to trackers for Facebook and a TikTok ad network. The Xiaomi fryer and another by Aigostar sent people’s personal data to servers in China. Its tests also examined smartwatches that it said required “risky” phone permissions – in other words giving invasive access to the consumer’s phone through location tracking, audio recording and accessing stored files. Which? found digital speakers that were preloaded with trackers for Facebook, Google and a digital marketing company called Urbanairship. The Information Commissioner’s Office (ICO) said the latest consumer tests “show that many products not only fail to meet our expectations for data protection but also consumer expectations”. A growing number of devices in homes are connected to the internet, including camera-enabled doorbells and smart TVs. Last Black Friday, the ICO encouraged consumers to check if smart products they planned to buy had a physical switch to prevent the gathering of voice data.
Note: A 2015 New York Times article warned that smart devices were a "train wreck in privacy and security." For more along these lines, read about how automakers collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people’s “sexual activities” when drivers pair their smartphones to their vehicles.
Important Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.