Big Tech News Stories
“Anonymity is a shield from the tyranny of the majority,” wrote Supreme Court Justice John Paul Stevens in a 1995 ruling affirming Americans’ constitutional right to engage in anonymous political speech. That shield has weakened in recent years due to advances in the surveillance technology available to law enforcement. Everything from social media posts, to metadata about phone calls, to the purchase information collected by data brokers, to location data showing every step taken, is available to law enforcement — often without a warrant. Avoiding all of this tracking would require such extrication from modern social life that it would be virtually impossible for most people. International Mobile Subscriber Identity (IMSI) catchers, or Stingrays, impersonate cell phone towers to collect the unique ID of a cell phone’s SIM card. Geofence warrants, also known as reverse location warrants ... let law enforcement request location data from apps on your phone or from tech companies. Data brokers are companies that assemble information about people from a variety of usually public sources. Countless websites and apps that everyday people use collect information on them, and this information is often sold to third parties who can aggregate or piece together someone’s profile across the sites that are tracking them. Companies like Fog Data Science, LexisNexis, Precisely and Acxiom not only possess data on billions of people, they also ... have information about someone’s political preferences as well as demographic information. Surveillance of social media accounts allows police to gather vast amounts of information about how protests are organized ... frequently utilizing networks of fake accounts. One firm advertised the ability to help police identify “activists and disruptors” at protests.
Note: For more along these lines, explore concise summaries of news articles on police corruption and the erosion of civil liberties from reliable major media sources.
Facebook’s inscrutable feed algorithm, which is supposed to calculate which content is most likely to appeal to me and then send it my way ... feels like an obstacle to how I’d like to connect with my friends. British software developer Louis Barclay developed software ... known as an extension, which can be installed in a Chrome web browser. Christened Unfollow Everything, it would automate the process of unfollowing each of my 1,800 friends, a task that manually would take hours. The result is that I would be able to experience Facebook as it once was, when it contained profiles of my friends, but without the endless updates, photos, videos and the like that Facebook’s algorithm generates. If tools like Unfollow Everything were allowed to flourish, and we could have better control over what we see on social media, they might help create a more civic-minded internet. Unfortunately, Mr. Barclay was forced by Facebook to remove the software. Large social media platforms appear to be increasingly resistant to third-party tools that give users more command over their experiences. After talking with Mr. Barclay, I decided to develop a new version of Unfollow Everything. I — and the lawyers at the Knight First Amendment Institute at Columbia — asked a federal court in California last week to rule on whether users should have a right to use tools like Unfollow Everything that give them increased power over how they use social networks, particularly over algorithms that have been engineered to keep users scrolling on their sites.
Note: The above was written by Ethan Zuckerman, associate professor of public policy and director of the UMass Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst. For more along these lines, explore concise summaries of news articles on Big Tech from reliable major media sources.
Something went suddenly and horribly wrong for adolescents in the early 2010s. Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent. Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. There’s an important backstory, beginning ... when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck. Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry.
Note: The author of this article is Jonathan Haidt, a social psychologist and ethics professor who's been on the frontlines investigating the youth mental health crisis. He is the co-founder of LetGrow.org, an organization that provides inspiring solutions and ideas to help families and schools support children's well-being and foster childhood independence. For more along these lines, explore concise summaries of news articles on mental health.
Beheadings, mass killings, child abuse, hate speech – all of it ends up in the inboxes of a global army of content moderators. You don’t often see or hear from them – but these are the people whose job it is to review and then, when necessary, delete content that either gets reported by other users, or is automatically flagged by tech tools. Moderators are often employed by third-party companies, but they work on content posted directly on to the big social networks including Instagram, TikTok and Facebook. “If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator. “But in the background, I personally was moderating, in the hundreds, horrific and traumatising videos. “I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.” In 2020, Meta, then known as Facebook, agreed to pay a settlement of $52m (£40m) to moderators who had developed mental health issues. The legal action was initiated by a former moderator [who] described moderators as the “keepers of souls”, because of the amount of footage they see containing the final moments of people’s lives. The ex-moderators I spoke to all used the word “trauma” in describing the impact the work had on them. One ... said he found it difficult to interact with his wife and children because of the child abuse he had witnessed. What came across, very powerfully, was the immense pride the moderators had in the roles they had played in protecting the world from online harm.
Note: Read more about the disturbing world of content moderation. For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Ask "is the British tax system fair", and Google cites a quote ... arguing that indeed it is. Ask "is the British tax system unfair", and Google's Featured Snippet explains how UK taxes benefit the rich and promote inequality. "What Google has done is they've pulled bits out of the text based on what people are searching for and fed them what they want to read," [Digital marketing director at Dragon Metrics Sarah] Presch says. "It's one big bias machine." The vast majority of internet traffic begins with a Google Search, and people rarely click on anything beyond the first five links. The system that orders the links on Google Search has colossal power over our experience of the world. You might choose to engage with information that keeps you trapped in your filter bubble, "but there's only a certain bouquet of messages that are put in front of you to choose from in the first place", says [professor] Silvia Knobloch-Westerwick. A recent US anti-trust case against Google uncovered internal company documents where employees discuss some of the techniques the search engine uses to answer your questions. "We do not understand documents – we fake it," an engineer wrote in a slideshow used during a 2016 presentation. "A billion times a day, people ask us to find documents relevant to a query… We hardly look at documents. We look at people. If a document gets a positive reaction, we figure it is good. If the reaction is negative, it is probably bad. Grossly simplified, this is the source of Google's magic. That is how we serve the next person, keep the induction rolling, and sustain the illusion that we understand." In other words, Google watches to see what people click on when they enter a given search term. When people seem satisfied by a certain type of information, it's more likely that Google will promote that kind of search result for similar queries in the future.
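The feedback loop the engineer describes — promote whatever past users clicked on for a given query — can be illustrated with a minimal sketch. The class, data, and site names below are invented for illustration; this is not Google's actual ranking code:

```python
from collections import defaultdict

class ClickFeedbackRanker:
    """Toy ranker: results that drew clicks for a query get promoted
    for that query in the future, regardless of which is more accurate."""

    def __init__(self):
        # (query, url) -> number of positive reactions (clicks)
        self.clicks = defaultdict(int)

    def record_click(self, query, url):
        self.clicks[(query, url)] += 1

    def rank(self, query, candidates):
        # Order candidates by past clicks for this query, most-clicked first.
        return sorted(candidates, key=lambda url: -self.clicks[(query, url)])

ranker = ClickFeedbackRanker()
candidates = ["taxes-are-fair.example", "taxes-are-unfair.example"]

# Users asking "is the tax system unfair" click the confirming result...
for _ in range(3):
    ranker.record_click("is the tax system unfair", "taxes-are-unfair.example")

# ...so that result rises to the top for everyone asking the same question.
print(ranker.rank("is the tax system unfair", candidates))
```

The sketch shows why phrasing matters: each wording of a question accumulates its own click history, so "fair" and "unfair" queries can stabilize on opposite answers — the "bias machine" Presch describes.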
Note: For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technologies like automated license plate readers, drones, facial recognition, and social media monitoring have added a uniquely dangerous element to the physical intimidation that comes with law enforcement surveillance. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement agencies have partnered with companies like Clearview AI, which scraped billions of images from the internet for a facial recognition database that ... has been used by law enforcement agencies across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
Air fryers that gather your personal data and audio speakers “stuffed with trackers” are among examples of smart devices engaged in “excessive” surveillance, according to the consumer group Which? The organisation tested three air fryers ... each of which requested permission to record audio on the user’s phone through a connected app. Which? found the app provided by the company Xiaomi connected to trackers for Facebook and a TikTok ad network. The Xiaomi fryer and another by Aigostar sent people’s personal data to servers in China. Its tests also examined smartwatches that it said required “risky” phone permissions – in other words giving invasive access to the consumer’s phone through location tracking, audio recording and accessing stored files. Which? found digital speakers that were preloaded with trackers for Facebook, Google and a digital marketing company called Urbanairship. The Information Commissioner’s Office (ICO) said the latest consumer tests “show that many products not only fail to meet our expectations for data protection but also consumer expectations”. A growing number of devices in homes are connected to the internet, including camera-enabled doorbells and smart TVs. Last Black Friday, the ICO encouraged consumers to check if smart products they planned to buy had a physical switch to prevent the gathering of voice data.
Note: A 2015 New York Times article warned that smart devices were a "train wreck in privacy and security." For more along these lines, read about how automakers collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people’s “sexual activities” when drivers pair their smartphones to their vehicles.
The past decade has seen a rapid expansion of the commercial space industry. In a 2023 white paper, a group of concerned astronomers warned against repeating Earthly “colonial practices” in outer space. Some of these colonial practices might include the enclosure of land, the exploitation of environmental resources and the destruction of landscapes – in the name of ideals such as destiny, civilization and the salvation of humanity. People of Bawaka Country in northern Australia have told the space industry that their ancestors guide human life from their home in the galaxy, and that this relationship is increasingly threatened by large orbiting satellite networks. Similarly, Inuit elders say their ancestors live on celestial bodies. Navajo leadership has asked NASA not to land human remains on the Moon. Kanaka elders have insisted that no more telescopes be built on Mauna Kea, which Native Hawaiians consider to be ancestral and sacred. These Indigenous positions stand in stark contrast to the insistence of many in the industry that space is empty and inanimate. In 1967, a slew of nations, including the U.S., U.K. and USSR, signed the Outer Space Treaty. This treaty declared, among other things, that no nation can own a planetary body or part of one. The nations that signed the Outer Space Treaty were effectively saying, “Let’s not battle each other for territory and resources again. Let’s do outer space differently.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
A little-known advertising cartel that controls 90% of global marketing spending supported efforts to defund news outlets and platforms including The Post — at points urging members to use a blacklist compiled by a shadowy government-funded group that purports to guard news consumers against “misinformation.” The World Federation of Advertisers (WFA), which reps 150 of the world’s top companies — including ExxonMobil, GM, General Mills, McDonald’s, Visa, SC Johnson and Walmart — and 60 ad associations sought to squelch online free speech through its Global Alliance for Responsible Media (GARM) initiative, the House Judiciary Committee found. “The extent to which GARM has organized its trade association and coordinates actions that rob consumers of choices is likely illegal under the antitrust laws and threatens fundamental American freedoms,” the Republican-led panel said in its 39-page report. The new report establishes links between the WFA’s “responsible media” initiative and the taxpayer-funded Global Disinformation Index (GDI), a London-based group that in 2022 unveiled an ad blacklist of 10 news outlets whose opinion sections tilted conservative or libertarian, including The Post, RealClearPolitics and Reason magazine. Internal communications suggest that rather than using an objective rubric to guide decisions, GARM members simply monitored disfavored outlets closely to be able to find justification to demonetize them.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and media manipulation from reliable sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
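The core mechanic of "surveillance pricing" — a base price adjusted upward for shoppers whose data profile suggests they will pay more — can be sketched in a few lines. The profile fields, weights, and function names below are hypothetical stand-ins for the machine-learning models the FTC is probing, not any company's real logic:

```python
# Hypothetical personalized-pricing sketch. A real system would use an
# ML model over thousands of data points; this toy version uses three.
BASE_PRICE = 100.00

def willingness_score(profile):
    """Crude stand-in for a model estimating willingness to pay."""
    score = 0.0
    if profile.get("device") == "new_flagship_phone":
        score += 0.10  # pricier device taken as a proxy for deeper pockets
    if profile.get("searched_item_repeatedly"):
        score += 0.15  # repeat views taken as a signal of urgency
    if profile.get("zip_income_tier") == "high":
        score += 0.10  # neighborhood income folded into the estimate
    return score

def personalized_price(profile):
    return round(BASE_PRICE * (1 + willingness_score(profile)), 2)

frugal = {"device": "old_phone"}
eager = {"device": "new_flagship_phone",
         "searched_item_repeatedly": True,
         "zip_income_tier": "high"}

print(personalized_price(frugal))  # 100.0
print(personalized_price(eager))   # 135.0
```

Even this toy version shows the discrimination risk the article raises: a proxy like ZIP-code income tier quietly prices people differently for the same product, which is how automated mortgage pricing can end up charging lower-income borrowers more.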
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Meta CEO Mark Zuckerberg told the House Judiciary Committee that his company's moderators faced significant pressure from the federal government to censor content on Facebook and Instagram—and that he regretted caving to it. In a letter to Rep. Jim Jordan (R–Ohio), the committee's chairman, Zuckerberg explained that the pressure also applied to "humor and satire" and that in the future, Meta would not blindly obey the bureaucrats. The letter refers specifically to the widespread suppression of contrarian viewpoints relating to COVID-19. Email exchanges between Facebook moderators and CDC officials reveal that the government took a heavy hand in suppressing content. Health officials did not merely vet posts for accuracy but also made pseudo-scientific determinations about whether certain opinions could cause social "harm" by undermining the effort to encourage all Americans to get vaccinated. But COVID-19 content was not the only kind of speech the government went after. Zuckerberg also explains that the FBI warned him about Russian attempts to sow chaos on social media by releasing a fake story about the Biden family just before the 2020 election. This warning motivated Facebook to take action against the New York Post's Hunter Biden laptop story when it was published in October 2020. In his letter, Zuckerberg states that this was a mistake and that moving forward, Facebook will never again demote stories pending approval from fact-checkers.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable major media sources.
In almost every country on Earth, the digital infrastructure upon which the modern economy was built is owned and controlled by a small handful of monopolies, based largely in Silicon Valley. This system is looking more and more like neo-feudalism. Just as the feudal lords of medieval Europe owned all of the land ... the US Big Tech monopolies of the 21st century act as corporate feudal lords, controlling all of the digital land upon which the digital economy is based. A monopolist in the 20th century would have loved to control a country’s supply of, say, refrigerators. But the Big Tech monopolists of the 21st century go a step further and control all of the digital infrastructure needed to buy those fridges — from the internet itself to the software, cloud hosting, apps, payment systems, and even the delivery service. These corporate neo-feudal lords don’t just dominate a single market or a few related ones; they control the marketplace. They can create and destroy entire markets. Their monopolistic control extends well beyond just one country, to almost the entire world. If a competitor does manage to create a product, US Big Tech monopolies can make it disappear. Imagine you are an entrepreneur. You develop a product, make a website, and offer to sell it online. But then you search for it on Google, and it does not show up. Instead, Google promotes another, similar product in the search results. This is not a hypothetical; this already happens.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that’s being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to give a third party data broker money to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Data breaches are a seemingly endless scourge with no simple answer, but the breach in recent months of the background-check service National Public Data illustrates just how dangerous and intractable they have become. In April, a hacker known as USDoD, notorious for selling stolen information, began hawking a trove of data on cybercriminal forums for $3.5 million that they said included 2.9 billion records and impacted “the entire population of USA, CA and UK.” As the weeks went on, samples of the data started cropping up as other actors and legitimate researchers worked to understand its source and validate the information. By early June, it was clear that at least some of the data was legitimate and contained information like names, emails, and physical addresses in various combinations. When information is stolen from a single source, like Target customer data being stolen from Target, it's relatively straightforward to establish that source. But when information is stolen from a data broker and the company doesn't come forward about the incident, it's much more complicated to determine whether the information is legitimate and where it came from. Typically, people whose data is compromised in a breach—the true victims—aren’t even aware that National Public Data held their information in the first place. Every trove of information that attackers can get their hands on ultimately fuels scamming, cybercrime, and espionage.
Note: Clearview AI scraped billions of faces off of social media without consent. At least 600 law enforcement agencies were tapping into its database of 3 billion facial images. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers.
A US federal appeals court ruled last week that so-called geofence warrants violate the Fourth Amendment’s protections against unreasonable searches and seizures. Geofence warrants allow police to demand that companies such as Google turn over a list of every device that appeared at a certain location at a certain time. The US Fifth Circuit Court of Appeals ruled on August 9 that geofence warrants are “categorically prohibited by the Fourth Amendment” because “they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search.” In other words, they’re the unconstitutional fishing expedition that privacy and civil liberties advocates have long asserted they are. Google ... the most frequent target of geofence warrants, vowed late last year that it was changing how it stores location data in such a way that geofence warrants may no longer return the data they once did. Legally, however, the issue is far from settled: The Fifth Circuit decision applies only to law enforcement activity in Louisiana, Mississippi, and Texas. Plus, because of weak US privacy laws, police can simply purchase the data and skip the pesky warrant process altogether. As for the appellants in the case heard by the Fifth Circuit, well, they’re no better off: The court found that the police used the geofence warrant in “good faith” when it was issued in 2018, so they can still use the evidence they obtained.
Note: Read more about the rise of geofence warrants and their threat to privacy rights. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback and lawsuits are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts — any police dataset imaginable — for a match. It’s taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. “We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive,” said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). “These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging.” Peregrine technology can also enable “predictive policing,” long criticized for unfairly targeting poorer, non-white neighborhoods.
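The "super-powered Google for police data" described above boils down to one idea: pool formerly siloed datasets and match a single query term against every field of every record. A toy sketch of that cross-dataset search follows; the dataset names, record fields, and sample entries are all invented for illustration, not Peregrine's actual schema:

```python
# Hypothetical pooled police datasets, each formerly locked in its own
# older, slower system. Names and fields are invented examples.
datasets = {
    "court_records": [{"name": "Jane Doe", "case": "24-0117"}],
    "arrest_reports": [{"name": "John Roe", "charge": "trespass"}],
    "bodycam_transcripts": [{"name": "Jane Doe", "file": "cam_0032.txt"}],
}

def search_all(term):
    """Match one query term against every field of every record."""
    hits = []
    for source, records in datasets.items():
        for record in records:
            if any(term.lower() in str(value).lower()
                   for value in record.values()):
                hits.append((source, record))
    return hits

# One name instantly surfaces matches across unrelated sources.
for source, record in search_all("Jane Doe"):
    print(source, record)
```

The sketch also makes the privacy critique concrete: once the silos are merged, a single lookup assembles a cross-referenced picture of a person that no one of the original systems could produce on its own.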
Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.