Big Tech News Stories
The past decade has seen a rapid expansion of the commercial space industry. In a 2023 white paper, a group of concerned astronomers warned against repeating Earthly “colonial practices” in outer space. Some of these colonial practices might include the enclosure of land, the exploitation of environmental resources and the destruction of landscapes – in the name of ideals such as destiny, civilization and the salvation of humanity. People of Bawaka Country in northern Australia have told the space industry that their ancestors guide human life from their home in the galaxy, and that this relationship is increasingly threatened by large orbiting satellite networks. Similarly, Inuit elders say their ancestors live on celestial bodies. Navajo leadership has asked NASA not to land human remains on the Moon. Kanaka elders have insisted that no more telescopes be built on Mauna Kea, which Native Hawaiians consider to be ancestral and sacred. These Indigenous positions stand in stark contrast with the insistence of many in the industry that space is empty and inanimate. In 1967, a slew of nations, including the U.S., U.K. and USSR, signed the Outer Space Treaty. This treaty declared, among other things, that no nation can own a planetary body or part of one. The nations that signed the Outer Space Treaty were effectively saying, “Let’s not battle each other for territory and resources again. Let’s do outer space differently.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or to guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that its platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time and can lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
A little-known advertising cartel that controls 90% of global marketing spending supported efforts to defund news outlets and platforms including The Post — at points urging members to use a blacklist compiled by a shadowy government-funded group that purports to guard news consumers against “misinformation.” The World Federation of Advertisers (WFA), which reps 150 of the world’s top companies — including ExxonMobil, GM, General Mills, McDonald’s, Visa, SC Johnson and Walmart — and 60 ad associations sought to squelch online free speech through its Global Alliance for Responsible Media (GARM) initiative, the House Judiciary Committee found. “The extent to which GARM has organized its trade association and coordinates actions that rob consumers of choices is likely illegal under the antitrust laws and threatens fundamental American freedoms,” the Republican-led panel said in its 39-page report. The new report establishes links between the WFA’s “responsible media” initiative and the taxpayer-funded Global Disinformation Index (GDI), a London-based group that in 2022 unveiled an ad blacklist of 10 news outlets whose opinion sections tilted conservative or libertarian, including The Post, RealClearPolitics and Reason magazine. Internal communications suggest that rather than using an objective rubric to guide decisions, GARM members simply monitored disfavored outlets closely to be able to find justification to demonetize them.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and media manipulation from reliable sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers in which vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
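To make the mechanism concrete, here is a minimal sketch of how per-shopper pricing can work. Everything in it is invented for illustration: the profile fields, the weights, and the simple linear rule. The systems the FTC is probing reportedly use trained machine-learning models over far more signals.

```python
# Invented illustration of "algorithmic personalized pricing": a base price
# adjusted per shopper using profile signals a data broker might supply.
# Every field name and weight below is an assumption made for this sketch.
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    recent_views: int         # times this shopper viewed the product
    premium_device: bool      # e.g. browsing from a latest-model phone
    estimated_income: float   # broker-estimated annual income, USD
    loyalty_member: bool

def personalized_price(base_price: float, p: ShopperProfile) -> float:
    """Return a per-shopper price. Real systems reportedly use trained
    models; this transparent linear rule just exposes the mechanism."""
    m = 1.0
    m += 0.02 * min(p.recent_views, 5)       # repeated views signal urgency
    m += 0.05 if p.premium_device else 0.0
    m += 0.04 if p.estimated_income > 100_000 else 0.0
    m -= 0.03 if p.loyalty_member else 0.0   # small discount to retain loyalists
    return round(base_price * m, 2)

# Two shoppers, same product, different prices:
print(personalized_price(100.0, ShopperProfile(5, True, 120_000.0, False)))  # 119.0
print(personalized_price(100.0, ShopperProfile(0, False, 40_000.0, True)))   # 97.0
```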
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Meta CEO Mark Zuckerberg told the House Judiciary Committee that his company's moderators faced significant pressure from the federal government to censor content on Facebook and Instagram—and that he regretted caving to it. In a letter to Rep. Jim Jordan (R–Ohio), the committee's chairman, Zuckerberg explained that the pressure also applied to "humor and satire" and that in the future, Meta would not blindly obey the bureaucrats. The letter refers specifically to the widespread suppression of contrarian viewpoints relating to COVID-19. Email exchanges between Facebook moderators and CDC officials reveal that the government took a heavy hand in suppressing content. Health officials did not merely vet posts for accuracy but also made pseudo-scientific determinations about whether certain opinions could cause social "harm" by undermining the effort to encourage all Americans to get vaccinated. But COVID-19 content was not the only kind of speech the government went after. Zuckerberg also explains that the FBI warned him about Russian attempts to sow chaos on social media by releasing a fake story about the Biden family just before the 2020 election. This warning motivated Facebook to take action against the New York Post's Hunter Biden laptop story when it was published in October 2020. In his letter, Zuckerberg states that this was a mistake and that moving forward, Facebook will never again demote stories pending approval from fact-checkers.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable major media sources.
In almost every country on Earth, the digital infrastructure upon which the modern economy was built is owned and controlled by a small handful of monopolies, based largely in Silicon Valley. This system is looking more and more like neo-feudalism. Just as the feudal lords of medieval Europe owned all of the land ... the US Big Tech monopolies of the 21st century act as corporate feudal lords, controlling all of the digital land upon which the digital economy is based. A monopolist in the 20th century would have loved to control a country’s supply of, say, refrigerators. But the Big Tech monopolists of the 21st century go a step further and control all of the digital infrastructure needed to buy those fridges — from the internet itself to the software, cloud hosting, apps, payment systems, and even the delivery service. These corporate neo-feudal lords don’t just dominate a single market or a few related ones; they control the marketplace. They can create and destroy entire markets. Their monopolistic control extends well beyond just one country, to almost the entire world. If a competitor does manage to create a product, US Big Tech monopolies can make it disappear. Imagine you are an entrepreneur. You develop a product, make a website, and offer to sell it online. But then you search for it on Google, and it does not show up. Instead, Google promotes another, similar product in the search results. This is not a hypothetical; this already happens.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state’s foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that’s being collected reflects intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency, such as the NSA or Department of Defense, to pay a third-party data broker to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Data breaches are a seemingly endless scourge with no simple answer, but the breach in recent months of the background-check service National Public Data illustrates just how dangerous and intractable they have become. In April, a hacker known as USDoD, notorious for selling stolen information, began hawking a trove of data on cybercriminal forums for $3.5 million that they said included 2.9 billion records and impacted “the entire population of USA, CA and UK.” As the weeks went on, samples of the data started cropping up as other actors and legitimate researchers worked to understand its source and validate the information. By early June, it was clear that at least some of the data was legitimate and contained information like names, emails, and physical addresses in various combinations. When information is stolen from a single source, like Target customer data being stolen from Target, it's relatively straightforward to establish that source. But when information is stolen from a data broker and the company doesn't come forward about the incident, it's much more complicated to determine whether the information is legitimate and where it came from. Typically, people whose data is compromised in a breach—the true victims—aren’t even aware that National Public Data held their information in the first place. Every trove of information that attackers can get their hands on ultimately fuels scamming, cybercrime, and espionage.
Note: Clearview AI scraped billions of faces off of social media without consent. At least 600 law enforcement agencies were tapping into its database of 3 billion facial images. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers.
A US federal appeals court ruled last week that so-called geofence warrants violate the Fourth Amendment’s protections against unreasonable searches and seizures. Geofence warrants allow police to demand that companies such as Google turn over a list of every device that appeared at a certain location at a certain time. The US Fifth Circuit Court of Appeals ruled on August 9 that geofence warrants are “categorically prohibited by the Fourth Amendment” because “they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search.” In other words, they’re the unconstitutional fishing expedition that privacy and civil liberties advocates have long asserted they are. Google, the most frequent target of geofence warrants, vowed late last year that it was changing how it stores location data in such a way that geofence warrants may no longer return the data they once did. Legally, however, the issue is far from settled: The Fifth Circuit decision applies only to law enforcement activity in Louisiana, Mississippi, and Texas. Plus, because of weak US privacy laws, police can simply purchase the data and skip the pesky warrant process altogether. As for the appellants in the case heard by the Fifth Circuit, well, they’re no better off: The court found that the police used the geofence warrant in “good faith” when it was issued in 2018, so they can still use the evidence they obtained.
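Part of what makes geofence warrants so sweeping is how little they need to specify: a place, a radius, and a time window, but never a person. Below is a minimal sketch of the kind of query such a warrant compels; the data layout is invented for illustration, and real location stores are far more complex.

```python
# Minimal sketch of the query a geofence warrant compels: "every device
# seen within R meters of (lat, lon) between t0 and t1." The Ping layout
# is an assumption made for this illustration.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence(pings, lat, lon, radius_m, t0, t1):
    """Return every device ID seen inside the fence during the window.
    Note what is absent: no suspect, only a place and a time."""
    return {p.device_id for p in pings
            if t0 <= p.ts <= t1 and haversine_m(p.lat, p.lon, lat, lon) <= radius_m}
```

The court's objection maps directly onto the function's signature: the inputs name no suspect, and the output is simply everyone whose device happened to be inside the fence.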
Note: Read more about the rise of geofence warrants and its threat to privacy rights. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback and lawsuits are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts — any police dataset imaginable — for a match. It’s taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. “We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive,” said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). “These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging.” Peregrine technology can also enable “predictive policing,” long criticized for unfairly targeting poorer, non-white neighborhoods.
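The underlying pattern is ordinary search engineering applied to police records: pull rows out of separate silos into one inverted index so a single query hits everything at once. The toy sketch below illustrates only that pattern; the dataset names and record fields are invented, and Peregrine's actual architecture is not public.

```python
# Toy sketch of cross-silo police data search: merge records from separate
# systems into one inverted index, then query them all at once.
# Silo names and record fields are assumptions made for this illustration.
from collections import defaultdict

silos = {
    "court_records":       [{"id": "c1", "text": "State v. Jane Doe, 114 Elm St"}],
    "arrest_reports":      [{"id": "a7", "text": "Jane Doe arrested near Elm St"}],
    "bodycam_transcripts": [{"id": "b3", "text": "subject identified as John Roe"}],
}

index = defaultdict(set)                      # token -> {(silo, record id)}
for silo, records in silos.items():
    for rec in records:
        for token in rec["text"].lower().replace(",", " ").split():
            index[token].add((silo, rec["id"]))

def search(query: str):
    """AND-match every query token; return matching (silo, record id) pairs."""
    tokens = query.lower().split()
    hits = set.intersection(*(index.get(t, set()) for t in tokens)) if tokens else set()
    return sorted(hits)

print(search("jane doe"))   # [('arrest_reports', 'a7'), ('court_records', 'c1')]
```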
Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child’s face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children’s school as part of a campaign called “Share to Protect.” With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as “age assurance” has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children — and everyone else — to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'” Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn’t the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn’t dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
Note: Police in many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data and ran afoul of the state's deceptive trade practices act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued how it had previously used face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights," Paxton said.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Google announced this week that it would begin the international rollout of its new artificial intelligence-powered search feature, called AI Overviews. When billions of people search a range of topics from news to recipes to general knowledge questions, what they see first will now be an AI-generated summary. While Google was once mostly a portal to reach other parts of the internet, it has spent years consolidating content and services to make itself into the web’s primary destination. Weather, flights, sports scores, stock prices, language translation, showtimes and a host of other information have gradually been incorporated into Google’s search page over the past 15 or so years. Finding that information no longer requires clicking through to another website. With AI Overviews, the rest of the internet may meet the same fate. Google has tried to assuage publishers’ fears that users will no longer see their links or click through to their sites. Research firm Gartner predicts a 25% drop in traffic to websites from search engines by 2026 – a decrease that would be disastrous for most outlets and creators. What’s left for publishers is largely direct visits to their own home pages and Google referrals. If AI Overviews take away a significant portion of the latter, it could mean less original reporting, fewer creators publishing cooking blogs or how-to guides, and a less diverse range of information sources.
Note: WantToKnow.info traffic from Google search has fallen sharply as Google has stopped indexing most websites. These new AI summaries make independent media sites even harder to find. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
The bedrock of Google’s empire sustained a major blow on Monday after a judge found its search and ad businesses violated antitrust law. The ruling, made by the District of Columbia's Judge Amit Mehta, sided with the US Justice Department and a group of states in a set of cases alleging the tech giant abused its dominance in online search. "Google is a monopolist, and it has acted as one to maintain its monopoly," Mehta wrote in his ruling. The findings, if upheld, could outlaw contracts that for years all but assured Google's dominance. Judge Mehta ruled that Google violated antitrust law in the markets for "general search" and "general search text" ads, which are the ads that appear at the top of the search results page. Apple, Amazon, and Meta are defending themselves against a series of other federal- and state-led antitrust suits, some of which make similar claims. Google’s disputed behavior revolved around contracts it entered into with manufacturers of computer devices and mobile devices, as well as with browser services, browser developers, and wireless carriers. These contracts, the government claimed, violated antitrust laws because they made Google the mandatory default search provider. Companies that entered into those exclusive contracts have included Apple, LG, Samsung, AT&T, T-Mobile, Verizon, and Mozilla. Those deals are why smartphones ... come preloaded with Google's various apps.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business — like having happy customers and stable products — and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead to make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use “to counter misinformation online” and “advance state-of-the-art misinformation research.” House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million ... to develop AI tools that could censor information far faster and at a much greater scale than human beings. The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the “responsibility of censorship” on social media. The release of [an] interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action. House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable. “AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ’disinformation’ bureaucrats and researchers,” House investigators wrote in the interim report.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google’s mission – “to organise the world’s information and make it universally accessible” – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world’s information. That mission has been abandoned. “Google is no longer trying to index the entire web,” writes Schmalbach. “In fact, it’s become extremely selective, refusing to index most content. This isn’t about content creators failing to meet some arbitrary standard of quality. Rather, it’s a fundamental change in how Google approaches its role as a search engine.” The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has “brand recognition”. “They might index content they perceive as truly unique,” says Schmalbach. “But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won’t index it. This can happen even if you’re a well-respected writer with a substantial readership.”
Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored “related follow-up questions.” OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers’ websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, is a shakedown. AI is scraping publishers’ content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It’s all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Amazon has been accused of using “intrusive algorithms” as part of a sweeping surveillance program to monitor and deter union organizing activities. Workers at a warehouse run by the technology giant on the outskirts of St Louis, Missouri, are today filing an unfair labor practice charge with the National Labor Relations Board (NLRB). A copy of the charge ... alleges that Amazon has “maintained intrusive algorithms and other workplace controls and surveillance which interfere with Section 7 rights of employees to engage in protected concerted activity”. There have been several reports of Amazon surveilling workers over union organizing and activism, including human resources monitoring employee message boards, software to track union threats and job listings for intelligence analysts to monitor “labor organizing threats”. Artificial intelligence can be used by warehouse employers like Amazon “to essentially have 24/7 unregulated and algorithmically processed and recorded video, and often audio data of what their workers are doing all the time”, said Seema N Patel ... at Stanford Law School. “It enables employers to control, record, monitor and use that data to discipline hundreds of thousands of workers in a way that no human manager or group of managers could even do.” The National Labor Relations Board issued a memo in 2022 announcing its intent to protect workers from AI-enabled monitoring of labor organizing activities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. Just 1.4 percent of those companies were worth more than $16 trillion combined, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza. Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare while making it even more evident how unregulated the nascent field is. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world. Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. Many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly. The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
After government officials like former White House advisers Rob Flaherty and Andy Slavitt repeatedly harangued platforms such as Facebook to censor Americans who contested the government’s narrative on COVID-19 vaccines, Missouri and Louisiana sued. They claimed that the practice violates the First Amendment. Following years of litigation, the Supreme Court threw cold water on their efforts, ruling in Murthy v. Missouri that states and the individual plaintiffs lacked standing to sue the government for its actions. The government often disguised its censorship requests by coordinating with ostensibly “private” civil society groups to pressure tech companies to remove or shadow ban targeted content. According to the U.S. House Weaponization Committee’s November 2023 interim report, the Cybersecurity and Infrastructure Security Agency requested that the now-defunct Stanford Internet Observatory create a public-private partnership to counter election “misinformation” in 2020. This consortium of government and private entities took the form of the Election Integrity Partnership (EIP). EIP’s “private” civil society partners then forwarded the flagged content to Big Tech platforms like Facebook, YouTube, TikTok and Twitter. These “private” groups ... receive millions of taxpayer dollars from the National Science Foundation, the State Department and the U.S. Department of Justice. Legislation like the COLLUDE Act would ... clarify that Section 230 does not apply when platforms censor legal speech “as a result of a communication” from a “governmental entity” or from a non-profit “acting at the request or behest of a governmental entity.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable sources.
OnlyFans makes reassuring promises to the public: It’s strictly adults-only, with sophisticated measures to monitor every user, vet all content and swiftly remove and report any child sexual abuse material. Reuters documented 30 complaints in U.S. police and court records that child sexual abuse material appeared on the site between December 2019 and June 2024. The case files examined by the news organization cited more than 200 explicit videos and images of kids, including some adults having oral sex with toddlers. In one case, multiple videos of a minor remained on OnlyFans for more than a year, according to a child exploitation investigator who found them while assisting Reuters. OnlyFans “presents itself as a platform that provides unrivaled access to influencers, celebrities and models,” said Elly Hanson, a clinical psychologist and researcher who focuses on preventing sexual abuse and reducing its impact. “This is an attractive mix to many teens, who are pulled into its world of commodified sex, unprepared for what this entails.” In 2021 ... 102 Republican and Democratic members of the U.S. House of Representatives called on the Justice Department to investigate child sexual abuse on OnlyFans. The Justice Department told the lawmakers three months later that it couldn’t confirm or deny it was investigating OnlyFans. Contacted recently, a department spokesperson declined to comment further.
Note: For more along these lines, see concise summaries of deeply revealing news articles on sexual abuse scandals from reliable major media sources.
Jonathan Haidt is a man with a mission ... to alert us to the harms that social media and modern parenting are doing to our children. In his latest book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness, he writes of a “tidal wave” of increases in mental illness and distress beginning around 2012. Young adolescent girls are hit hardest, but boys are in pain, too. He sees two factors that have caused this. The first is the decline of play-based childhood caused by overanxious parenting, which allows children fewer opportunities for unsupervised play and restricts their movement. The second factor is the ubiquity of smartphones and the social media apps that thrive upon them. The result is the “great rewiring of childhood” of his book’s subtitle and an epidemic of mental illness and distress. You don’t have to be a statistician to know that ... Instagram is toxic for some – perhaps many – teenage girls. Ever since Frances Haugen’s revelations, we have known that Facebook itself knew that 13% of British teenage girls said that their suicidal thoughts became more frequent after starting on Instagram. And the company’s own researchers found that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. These findings might not meet the exacting standards of the best scientific research, but they tell you what you need to know.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and mental health from reliable major media sources.
Recall ... takes constant screenshots in the background while you go about your daily computer business. Microsoft’s Copilot+ machine-learning tech then scans (and “reads”) each of these screenshots in order to make a searchable database of every action performed on your computer and then stores it on the machine’s disk. “Recall is like bestowing a photographic memory on everyone who buys a Copilot+ PC,” [Microsoft marketing officer Yusuf] Mehdi said. “Anything you’ve ever seen or done, you’ll now more or less be able to find.” Charlie Stross, the sci-fi author and tech critic, called it a privacy “shit-show for any organisation that handles medical records or has a duty of legal confidentiality.” He also said: “Suddenly, every PC becomes a target for discovery during legal proceedings. Lawyers can subpoena your Recall database and search it, no longer being limited to email but being able to search for terms that came up in Teams or Slack or Signal messages, and potentially verbally via Zoom or Skype if speech-to-text is included in Recall data.” Faced with this pushback, Microsoft [announced] that Recall would be made opt-in instead of on by default, and introduced extra security precautions – only producing results from Recall after user authentication, for example, and never decrypting data stored by the tool until after a search query. The only good news for Microsoft here is that it seems to have belatedly acknowledged that Recall has been a fiasco.
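The pattern being described is straightforward to sketch, which is part of why critics were alarmed. The illustration below uses Pillow, pytesseract and SQLite's FTS5 full-text index (all real, commonly available tools, assumed installed) to show the screenshot-to-OCR-to-index loop in miniature; it is not Microsoft's implementation.

```python
# Minimal sketch of a Recall-style loop: screenshot the display, OCR the
# image, store the recognized text in a local full-text index.
# An illustration of the general pattern only, not Microsoft's code.
import sqlite3
import time

from PIL import ImageGrab   # pip install Pillow
import pytesseract          # pip install pytesseract (plus the Tesseract binary)

db = sqlite3.connect("recall_sketch.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(taken_at, text)")

def capture_once():
    """Grab the screen, OCR it, and index the recognized text."""
    img = ImageGrab.grab()                      # full-screen screenshot
    text = pytesseract.image_to_string(img)     # OCR to plain text
    db.execute("INSERT INTO shots VALUES (?, ?)",
               (time.strftime("%Y-%m-%d %H:%M:%S"), text))
    db.commit()

def search(term: str):
    """Full-text search over everything ever seen on screen."""
    return db.execute(
        "SELECT taken_at, snippet(shots, 1, '[', ']', '...', 8) "
        "FROM shots WHERE shots MATCH ?", (term,)).fetchall()
```

Everything captured lands in a local, searchable database, which is precisely the property behind Stross's discovery scenario and the one Microsoft's authentication and encryption changes are meant to blunt.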
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
High-level former intelligence and national security officials have provided crucial assistance to Silicon Valley giants as the tech firms fought off efforts to weaken online monopolies. John Ratcliffe, the former Director of National Intelligence, Brian Cavanaugh, a former intelligence aide in the White House, and [former White House National Security Advisor Robert] O'Brien jointly wrote to congressional leaders, warning darkly that certain legislative proposals to check the power of Amazon, Google, Meta, and Apple would embolden America's enemies. The letter left unmentioned that the former officials were paid by tech industry lobbyists at the time as part of a campaign to suppress support for the legislation. The Open App Markets Act was designed to break Apple and Google's duopoly over the smartphone app store market. The companies use their control over the app markets to force app developers to pay as much as 30 percent in fees on every transaction. Breaking up Apple and Google’s hold over the smartphone app store would enable greater free expression and innovation. The American Innovation and Choice Online Act similarly encourages competition by preventing tech platforms from self-preferencing their own products. The Silicon Valley giants deployed hundreds of millions of dollars in lobbying efforts to stymie the reforms. For Republicans, they crafted messages on national security and jobs. For Democrats, as other reports have revealed, tech giants paid LGBT, Black, and Latino organizations to lobby against the reforms, claiming that powerful tech platforms are beneficial to communities of color and that greater competition online would lead to a rise in hate speech. The lobbying tactics have so far paid off. Every major tech antitrust and competition bill in Congress has died over the last four years.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and Big Tech from reliable major media sources.
Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. The shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. Some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus. Such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Jay Stanley ... at the American Civil Liberties Union, said it was “profoundly disconcerting” that FedEx was exchanging data with law enforcement as part of Flock’s “mass surveillance” system. “It raises questions about why a private company ... would have privileged access to data that normally is only available to law enforcement,” he said. Forbes previously found that [Flock] had itself likely broken the law across various states by installing cameras without the right permits.
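A "vehicle fingerprint" search is easy to picture in code: each camera logs a plate read plus soft attributes, and a query can match on any subset of them even when the plate was never read. The sketch below is an invented illustration of that idea, not Flock's actual system; all field names are assumptions.

```python
# Invented illustration of attribute-based vehicle matching: cameras log a
# plate read plus soft attributes, and queries match on any subset, even
# when plate OCR failed. Field names are assumptions, not Flock's schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Sighting:
    camera_id: str
    plate: Optional[str]    # OCR may fail; attributes still match
    make: str
    color: str
    features: set = field(default_factory=set)   # e.g. {"roof rack", "dented door"}

def matches(s: Sighting, query: dict) -> bool:
    """True if every attribute named in the query agrees with the sighting."""
    for key in ("plate", "make", "color"):
        if key in query and getattr(s, key) != query[key]:
            return False
    return query.get("features", set()) <= s.features   # subset test

log = [
    Sighting("cam-12", None, "Toyota", "gray", {"bumper sticker"}),
    Sighting("cam-40", "ABC1234", "Toyota", "gray", {"bumper sticker"}),
]

# The plate was unreadable at cam-12, yet the attribute query links both sightings:
query = {"make": "Toyota", "color": "gray", "features": {"bumper sticker"}}
print([s.camera_id for s in log if matches(s, query)])   # ['cam-12', 'cam-40']
```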
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
“I had to watch every frame of a recent stabbing video ... It will never leave me,” says Harun*, one of many moderators reviewing harmful online content in India, as social media companies increasingly move the challenging work offshore. Moderators working in Hyderabad, a major IT hub in south India, have spoken of the strain on their mental health of reviewing images and videos of sexual and violent content, sometimes including trafficked children. Many social media platforms in the UK, European Union and US have moved the work to countries such as India and the Philippines. While OpenAI, creator of ChatGPT, has said artificial intelligence could be used to speed up content moderation, it is not expected to end the need for the thousands of human moderators employed by social media platforms. Content moderators in Hyderabad say the work has left them emotionally distressed, depressed and struggling to sleep. “I had to watch every frame of a recent stabbing video of a girl. What upset me most is that the passersby didn’t help her,” says Harun. “There have been instances when I’ve flagged a video containing child nudity and received continuous calls from my supervisors,” [said moderator Akash]. “Most of these half-naked pictures of minors are from the US or Europe. I’ve received multiple warnings from my supervisors not to flag these videos. One of them asked me to ‘man up’ when I complained that these videos need to be discussed in detail.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and Big Tech from reliable major media sources.
Trevin Brownie had to sift through lots of disturbing content for the three years he worked as an online content moderator in Nairobi, Kenya. "We take off any form of abusive content that violates policies such as bullying and harassment or hate speech or violent graphic content suicides," Brownie [said]. Brownie has encountered content including child pornography, material circulated by organized crime groups and terrorists, and images taken from war zones. "I've seen more than 500 beheadings on a monthly basis," he said. Brownie moved from South Africa, where he previously worked at a call center, to Nairobi, where he worked as a subcontractor for Facebook's main moderation hub in East Africa, which was operated by a U.S.-based company called Sama AI. Content moderators working in Kenya say Sama AI and other third-party outsourcing companies took advantage of them. They allege they received low-paying wages and inadequate mental health support compared to their counterparts overseas. PTSD has become a common side effect that he and others in this industry now live with, Brownie said. "It's really traumatic. Disturbing, especially for the suicide videos," he said. A key obstacle to getting better protections for content moderators lies in how people think social media platforms work. More than 150 content moderators who work with the artificial intelligence (AI) systems used by Facebook, TikTok and ChatGPT, from all parts of the continent, gathered in Kenya to form the African Content Moderator's Union. The union is calling on companies in the industry to increase salaries, provide access to onsite psychiatrists, and redraw policies to protect employees from exploitative labour practices.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and Big Tech from reliable major media sources.
Once upon a time, Google was great. They intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world. The launch of generative AIs such as ChatGPT clearly took Google by surprise, which is odd given that the company had for years been working on the technology. The question became: how will Google respond to the threat? Now we know: it’s something called AI overviews, in which an increasing number of search queries are initially answered by AI-generated responses. Users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day. There’s a quaint air of desperation in the publicity for this sudden pivot from search engine to answerbot. The really big question about the pivot, though, is what its systemic impact on the link economy will be. Already, the news is not great. Gartner, a market-research consultancy, for example, predicts that search engine volume will drop 25% by 2026 owing to AI chatbots and other virtual agents.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). This surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made. Untold billions of dollars of private money are now pouring into firms seeking to expand the frontiers of techno-war; according to the New York Times, $125 billion over the past four years. Whatever the numbers, the tech sector and its financial backers sense that there are massive amounts of money to be made in next-generation weaponry and aren’t about to let anyone stand in their way. Meanwhile, an investigation by Eric Lipton of the New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. Former Google CEO Eric Schmidt [has] become a virtual philosopher king when it comes to how new technology will reshape society. [Schmidt] laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with none other than the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. AI is coming, and its impact on our lives, whether in war or peace, is likely to stagger the imagination.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California—a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday. "Although much of the Pentagon's $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms," [report author Roberto J.] González wrote. "Defense Department officials have ... awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle." González found that the five largest military contracts to major tech firms between 2018 and 2022 "had contract ceilings totaling at least $53 billion combined." There's also the danger of a "revolving door" between Silicon Valley and the Pentagon as many senior government officials "are now gravitating towards defense-related VC or private equity firms as executives or advisers after they retire from public service." "Members of the armed services and civilians are in danger of being harmed by inadequately tested—or algorithmically flawed—AI-enabled technologies. By nature, VC firms seek rapid returns on investment by quickly bringing a product to market, and then 'cashing out' by either selling the startup or going public. This means that VC-funded defense tech companies are under pressure to produce prototypes quickly and then move to production before adequate testing has occurred."
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence - which may or may not be correct. “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine. It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results. It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination. Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing. Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.
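The "inherently random" point can be made concrete with a toy sketch of next-word sampling, the basic mechanism behind these models. Everything below (the candidate words, their scores, the temperature) is invented for illustration and is not Google's actual system; the point is only that sampling from weighted candidates means the same question can get different answers on different runs.

    import math
    import random

    # Invented next-word candidates and scores for a prompt like
    # "cats have been on the ..."; illustration only.
    next_word_scores = {"moon": 2.1, "sofa": 1.9, "internet": 1.4}

    def sample_next_word(scores, temperature=1.0):
        # Convert scores to sampling weights. Higher temperature flattens
        # the weights, making unlikely words (and odd answers) more probable.
        weights = [math.exp(s / temperature) for s in scores.values()]
        return random.choices(list(scores), weights=weights, k=1)[0]

    # Two runs of the "same query" can disagree, which is why a given
    # bad output is hard to reproduce.
    print(sample_next_word(next_word_scores))
    print(sample_next_word(next_word_scores))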
Note: Read more about the potential dangers of Google's new AI tool. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
"Agency intervention is necessary to stop the existential threat Google poses to original content creators," the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority "to stop Google's latest expansion of AI Overviews," a search engine innovation that Google has been rolling out recently. Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. Overviews give "comprehensive answers without the user ever having to click to another page," the The New York Times warns. And this worries websites that rely on Google to drive much of their traffic. "It potentially chokes off the original creators of the content," Frank Pine, executive editor of MediaNews Group and Tribune Publishing (owner of 68 daily newspapers), told the Times. Media websites have gotten used to Google searches sending them a certain amount of traffic. But that doesn't mean Google is obligated to continue sending them that same amount of traffic forever. It is possible that Google's pivot to AI was hastened by how hostile news media has been to tech companies. We've seen publishers demanding that search engines and social platforms pay them for the privilege of sharing news links, even though this arrangement benefits publications (arguably more than it does tech companies) by driving traffic.
Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
In recent weeks, Biden and Senate Majority Leader Chuck Schumer have been taking victory laps for the 2022 CHIPS and Science Act, a law intended to create jobs and fund innovation in a key global industry. It has already launched a series of grants, incentives and research proposals to help America regain its cutting-edge status in global semiconductor manufacturing. But quietly, in a March spending bill, appropriators in Congress shifted $3.5 billion that the Commerce Department was hoping to use for those grants and pushed it into a separate Pentagon program called Secure Enclave, which is not mentioned in the original law. The diversion of money from a flagship Biden initiative is a case study in how fragile Washington’s monumental spending programs can be in practice. Several members of Congress involved in the CHIPS law say they were taken by surprise to see the money shifted to Secure Enclave, a classified project to build chips in a special facility for defense and intelligence needs. Critics say the shift in CHIPS money undermines an important policy by moving funds from a competitive public selection process meant to boost a domestic industry to an untried and classified project likely to benefit only one company. No company has been named yet to execute the project, but interviews reveal that chipmaking giant Intel lobbied for its creation, and is still considered the frontrunner for the money.
Note: Learn more about unaccountable military spending in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on government corruption from reliable major media sources.
Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.” Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.” Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted in to the test. To give us answers to everything, Google’s AI has to decide which sources are reliable. I’m not very confident about its judgment. Remember our bonkers result on Zuckerberg’s net worth? A professional researcher — and also regular old Google — might suggest checking the billionaires list from Forbes. Google’s AI answer relied on a very weird ZipRecruiter page for “Mark Zuckerberg Jobs,” a thing that does not exist. The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, [and] delivers out-of-date information. This test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information.
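The inconsistency in those figures can be checked with a few lines of arithmetic. Assuming a standard 2,080-hour work year (40 hours a week for 52 weeks, our assumption, not the AI's), the quoted hourly and yearly numbers roughly agree with each other, but the quoted "per day" figure is off by nearly six orders of magnitude:

    # Cross-check the AI's quoted salary figures against each other.
    hourly = 46.24
    yearly_quoted = 96_169
    daily_quoted = 230.6e6                   # "$230.6 million per day", as quoted

    yearly_from_hourly = hourly * 40 * 52    # about $96,179: matches the quote
    daily_from_yearly = yearly_quoted / 365  # about $263: the real daily rate

    print(f"yearly from hourly: ${yearly_from_hourly:,.0f}")
    print(f"daily from yearly:  ${daily_from_yearly:,.2f}")
    print(f"quoted daily figure is {daily_quoted / daily_from_yearly:,.0f} times too large")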
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI technology from reliable major media sources.
[Tim] Berners-Lee, a British computer scientist, [came] up with the idea for a “world wide web” as a way of locating and accessing documents that were scattered all over the internet. He was able to do this because the internet, which had been publicly available since January 1983, enabled it. The network had no central ownership or controller. The result was an extraordinary explosion of creativity, and the emergence of ... a kind of global commons. However, the next generation of innovators to benefit from this freedom – Google, Facebook, Amazon, Microsoft, Apple et al – saw no reason to extend it to anyone else. The creative commons of the internet has been gradually and inexorably enclosed. Google and Apple’s browsers have nearly 85% of the world market share. Microsoft and Apple’s two desktop operating systems have almost 90%. Google runs about 90% of global search. More than half of all phones come from Apple and Samsung, while 99% of mobile operating systems are from Google or Apple. Apple and Google’s email clients manage nearly 90% of global email. GoDaddy and Cloudflare serve about 50% of global domain name system requests. And so on. One of the consequences of this concentration, say Farrell and Berjon, is that the creative possibilities of permissionless innovation have become increasingly constrained. The internet has become an extractive and fragile monoculture. We can revitalise it, but only by “rewilding” it.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
For the past few weeks, journalists have been reporting on what they've found in the "Twitter Files." The revelations have been astonishing and deeply troubling, exposing solid evidence of collusion between top executives at the FBI and their cozy counterparts at Twitter. FBI leadership and Twitter censors conferred constantly about how to shut down political speech based on its content, confirming the suspicions of, well, anyone who was paying attention. And it proves without a doubt that over the past few years, countless Americans have undergone a real violation of their First Amendment rights. The First Amendment mandates that government can't abridge—meaning limit or censor—speech based on its content. Even if attempting to advance the noblest of causes, government actors must not collide with this constitutional guardrail. The Constitution simply isn't optional. The government can't enlist a private citizen or corporation to undertake what the Constitution precludes it from doing. When Twitter acquiesced to the FBI's urging, it essentially became an agent of the government. FBI officials created a special, secure online portal for Twitter staff, where the two sides could secretly exchange information about who was saying what on the platform and how that speech could be squelched. In this virtual "war room," the FBI made dozens of requests to censor political speech. Twitter chirpily complied.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable major media sources.
The New Mexico attorney general, Raúl Torrez, who has launched legal action against Meta for child trafficking on its platforms, says he believes the social media company is the “largest marketplace for predators and paedophiles globally”. The lawsuit claims that Meta allows and fails to detect the trafficking of children and “enabled adults to find, message and groom minors, soliciting them to sell pictures or participate in pornographic videos”, concluding that “Meta’s conduct is not only unacceptable; it is unlawful”. Torrez says that he has been shocked by the findings of his team’s investigations into online child sexual exploitation on Meta’s platforms. Internal company documents obtained by the attorney general’s office as part of its investigation have also revealed that the company estimates about 100,000 children using Facebook and Instagram receive online sexual harassment each day. The idea of the lawsuit came to [Torrez] after reading media coverage of Meta’s role in child sexual exploitation, including a Guardian investigation that it was failing to report or detect the use of Facebook and Instagram for child trafficking. If it progresses, the New Mexico lawsuit is expected to take years to conclude. Torrez wants his lawsuit to provide a medium to usher in new regulations. “Fundamentally, we’re trying to get Meta to change how it does business and prioritise the safety of its users, particularly children.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and sexual abuse scandals from reliable major media sources.
On April 20, former acting CIA Director Michael Morell admitted he orchestrated the joint letter that torpedoed the New York Post’s bombshell reporting on Hunter Biden’s laptop in the weeks leading up to the November 2020 US Presidential election, at the direct request of Joe Biden’s campaign team. That letter ... asserted the leaked material bore unambiguous hallmarks of a Kremlin “information operation.” In all, 51 former senior intelligence officials endorsed the declaration. This intervention was sufficient for Twitter to block all sharing of the NY Post’s exposés and ban the outlet’s official account. Twitter’s public suppression of the NY Post’s disclosures was complemented by a covert operation to identify and neutralize anyone discussing the contents of Hunter Biden’s laptop, courtesy of Dataminr, a social media spying tool heavily connected to British and American intelligence services. In-Q-Tel [is] the CIA’s venture capital arm. In 2016, The Intercept revealed In-Q-Tel was financing at least 38 separate social media spying tools, to surveil “erupting political movements, crises, epidemics, and disasters.” Among them was Dataminr, which enjoys privileged access to Twitter’s “firehose” – all tweets published in real time – in order to track and visualize trends as they happen. [In 2020], the U.S. was ... engulfed by incendiary large-scale protests. Dataminr kept a close eye on this upheaval every step of the way, tipping off police to the identities of demonstrators.
Note: While Hunter Biden was indicted for three felony gun charges and nine counts of tax-related crimes, his laptop also revealed suspicious business dealings with corrupt overseas firms. Learn more about the history of military-intelligence influence on the media in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
A Silicon Valley defense tech startup is working on products that could have as great an impact on warfare as the atomic bomb, its founder Palmer Luckey said. "We want to build the capabilities that give us the ability to swiftly win any war we are forced to enter," he [said]. The Anduril founder didn't elaborate on what impact AI weaponry would have. But asked if it would be as decisive as the atomic bomb to the outcome of World War II he replied: "We have ideas for what they are. We are working on them." In 2022, Anduril won a contract worth almost $1 billion with the Special Operations Command to support its counter-unmanned systems. Anduril's products include autonomous sentry towers along the Mexican border [and] Altius-600M attack drones supplied to Ukraine. All of Anduril's tech operates autonomously and runs on its AI platform called Lattice that can easily be updated. The success of Anduril has given hope to other smaller players aiming to break into the defense sector. As an escalating number of global conflicts has increased demand for AI-driven weaponry, venture capitalists have put more than $100 billion into defense tech since 2021, according to Pitchbook data. The rising demand has sparked a fresh wave of startups lining up to compete with industry "primes" such as Lockheed Martin and RTX (formerly known as Raytheon) for a slice of the $842 billion US defense budget.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corruption in the military and in the corporate world from reliable major media sources.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”. Musk, who stepped down from OpenAI’s board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”. In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.
Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.
A federal appeals court on Tuesday refused to hold five major technology companies liable over their alleged support for the use of child labor in cobalt mining operations in the Democratic Republic of the Congo. In a 3-0 decision, the U.S. Court of Appeals for the District of Columbia Circuit ruled in favor of Google parent Alphabet, Apple, Dell Technologies, Microsoft and Tesla, rejecting an appeal by former child miners and their representatives. The plaintiffs accused the five companies of joining suppliers in a "forced labor" venture by purchasing cobalt, which is used to make lithium-ion batteries. Nearly two-thirds of the world's cobalt comes from the DRC. According to the complaint, the companies "deliberately obscured" their dependence on child labor, including many children pressured into work by hunger and extreme poverty, to ensure their growing need for the metal would be met. The 16 plaintiffs included representatives of five children who were killed in cobalt mining operations. Circuit Judge Neomi Rao said the plaintiffs had legal standing to seek damages, but did not show the five companies had anything more than a buyer-seller relationship with suppliers. Terry Collingsworth, a lawyer for the plaintiffs ... said his clients may appeal further. The decision provides "a strong incentive to avoid any transparency with their suppliers, even as they promise the public they have 'zero tolerance' policies against child labor," he said. "We are far from finished seeking accountability."
Note: Unreported deaths of children, devastating diseases, toxic environments, and sexual assault are just some of the tragedies within the hidden world of cobalt mining in the DRC. Furthermore, entire communities have been forced to leave their homes to make way for new mining operations. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes. Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished. OpenAI spokesperson Niko Felix [said] that OpenAI wanted to pursue certain “national security use cases that align with our mission,” citing a plan to create “cybersecurity tools” with DARPA, and that “the goal with our policy update is to provide clarity and the ability to have these discussions.” The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear “military and warfare” ban in the face of increasing interest from the Pentagon and U.S. intelligence community. “Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said [former AI policy analyst] Sarah Myers West.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
Submarine cables used to be seen as the internet’s dull plumbing. Now giants of the data economy, such as Amazon, Google, Meta and Microsoft, are asserting more control over the flow of data, even as tensions between China and America risk splintering the world’s digital infrastructure. The result is to turn undersea cables into prized economic and strategic assets. Subsea data pipes carry almost 99% of intercontinental internet traffic. By 2010 the rise in data traffic led internet and cloud-computing giants—Amazon, Google, Meta and Microsoft—to start leasing capacity on these lines. The data-cable business is ... being entangled in the tech contest between America and China. Take the Pacific Light Cable Network (PLCN). The 13,000km data pipeline was announced in 2016, with the backing of Google and Meta. It aimed to link the west coast of America with Hong Kong. By 2020 it had reached the Philippines and Taiwan. But last year America’s government denied approval for the final leg to Hong Kong, worried that this would give Chinese authorities easy access to Americans’ data. Hundreds of kilometres of cable that would link Hong Kong to the network are languishing unused on the ocean floor. China is responding by charting its own course. PEACE, a 21,500km undersea cable linking Kenya to France via Pakistan, was built entirely by Chinese firms as part of China’s “digital silk road”, a scheme to increase its global influence.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption from reliable major media sources.
Palantir’s founding team, led by investor Peter Thiel and Alex Karp, wanted to create a company capable of using new data integration and data analytics technology — some of it developed to fight online payments fraud — to solve problems of law enforcement, national security, military tactics, and warfare. Palantir, founded in 2003, developed its tools fighting terrorism after September 11, and has done extensive work for government agencies and corporations though much of its work is secret. Palantir’s MetaConstellation platform allows the user to task ... satellites to answer a specific query. Imagine you want to know what is happening in a certain location and time in the Arctic. Click on a button and MetaConstellation will schedule the right combination of satellites to survey the designated area. The platform is able to integrate data from multiple and disparate sources — think satellites, drones, and open-source intelligence — while allowing a new level of decentralised decision-making. Just as a deep learning algorithm knows how to recognise a picture of a dog after some hours of supervised learning, the Palantir algorithms can become extraordinarily adept at identifying an enemy command and control centre. Alex Karp, Palantir’s CEO, has argued that “the power of advanced algorithmic warfare systems is now so great that it equates to having tactical nuclear weapons against an adversary with only conventional ones.”
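Palantir's interfaces are proprietary and not publicly documented, so the following is a purely hypothetical sketch of the tasking pattern described above: a single query over a location selects whichever sources can cover it. All names, regions and coverage sets are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str        # a satellite, drone, or open-source feed
        coverage: set    # regions this source can observe

    # Invented sources; illustration only, not Palantir's actual system.
    SOURCES = [
        Source("sat-A", {"arctic", "north-atlantic"}),
        Source("drone-B", {"arctic"}),
        Source("osint-C", {"arctic", "pacific"}),
    ]

    def task(region):
        # Select every source able to cover the region, echoing the
        # "schedule the right combination of satellites" description.
        return [s.name for s in SOURCES if region in s.coverage]

    print(task("arctic"))    # ['sat-A', 'drone-B', 'osint-C']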
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
The Palestinian population is intimately familiar with how new technological innovations are first weaponized against them, ranging from the electric fences and unmanned drones that trap people in Gaza to the facial recognition software monitoring Palestinians in the West Bank. Groups like Amnesty International have described this system as “automated apartheid” and repeatedly highlight stories, testimonies, and reports about cyber-intelligence firms, including the infamous NSO Group (the Israeli surveillance company behind the Pegasus software), conducting field tests and experiments on Palestinians. Reports have highlighted: “Testing and deployment of AI surveillance and predictive policing systems in Palestinian territories. In the occupied West Bank, Israel increasingly utilizes facial recognition technology to monitor and regulate the movement of Palestinians. Israeli military leaders described AI as a significant force multiplier, allowing the IDF to use autonomous robotic drone swarms to gather surveillance data, identify targets, and streamline wartime logistics.” The Palestinian towns and villages near Israeli settlements have been described as laboratories where security-solutions companies experiment with their technologies on Palestinians before marketing them to places like Colombia. The Israeli government hopes to crystallize its “automated apartheid” through the tokenization and privatization of various industries and the establishment of a technocratic government in Gaza.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Silicon Valley techies are pretty sanguine about commercial surveillance. But they are much less cool about government spying. Government employees and contractors are pretty cool with state surveillance. But they are far less cool with commercial surveillance. What are they both missing? That American surveillance is a public-private partnership: a symbiosis between a concentrated tech sector that has the means, motive, and opportunity to spy on every person in the world and a state that loves surveillance as much as it hates checks and balances. The tech sector has powerful allies in government: cops and spies. No government agency could ever hope to match the efficiency and scale of commercial surveillance. Meanwhile, the private sector relies on cops and spies to go to bat for them, lobbying against new privacy laws and for lax enforcement of existing ones. Think of Amazon’s Ring cameras, which have blanketed entire neighborhoods in CCTV surveillance that Ring shares with law enforcement agencies, sometimes without the consent or knowledge of the cameras’ owners. Ring marketing recruits cops as street teams, showering them with freebies to distribute to local homeowners. Google ... has managed to play both sides of the culture war with its location surveillance, thanks to the “reverse warrants” that cops have used to identify all the participants at both Black Lives Matter protests and the January 6 coup. Distinguishing between state and private surveillance is a fool’s errand.
Note: For more along these lines, see concise summaries of deeply revealing news articles on the disappearance of privacy from reliable major media sources.
Leading up to the August Republican presidential primary debate ... An RNC official told Google via email that the debate would be streaming exclusively on the upstart video platform Rumble. The August 23 debate was broadcast on Fox News and streamed on Fox Nation, which requires a subscription, while Rumble was the only one to stream it for free. On the day of and during the debate, however, potential viewers who searched Google for “GOP debate stream” were returned links to YouTube, Fox News, and news articles about the debate, according to screen recordings. Rumble was nowhere on the first page. For Rumble, which is currently in discovery in an antitrust lawsuit against Google in California, this is a case of Google suppressing its competitors in favor of its own product, YouTube. YouTube is owned by Google, and it has regularly been the subject of anticompetitive allegations from rivals, who charge that Google unfairly and illegally favors YouTube in its search algorithm. Google, in fact, is in the middle of a landmark antitrust trial, charged with anticompetitive practices by the Department of Justice. The company would not have been required by antitrust law to promote [Rumble's] link. It would, however, be barred from suppressing the competitor’s link from organic results. The fact that Rumble’s link did not appear on the first page even though it was the most relevant link the search could return means either the search engine failed at its task or the link was suppressed.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
While Facebook has long sought to portray itself as a "town square" that allows people from across the world to connect, a deeper look into its apparent military origins and continual military connections reveals that the world's largest social network was always intended to act as a surveillance tool to identify and target domestic dissent. LifeLog was one of several controversial post-9/11 surveillance programs pursued by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) that threatened to destroy privacy and civil liberties in the United States. LifeLog sought to ... build a digital record of "everything an individual says, sees, or does." In 2015, [DARPA architect Douglas] Gage told VICE that "Facebook is the real face of pseudo-LifeLog." He tellingly added, “We have ended up providing the same kind of detailed personal information without arousing the kind of opposition that LifeLog provoked.” A few months into Facebook's launch, in June 2004, Facebook cofounders Mark Zuckerberg and Dustin Moskovitz [brought on] its first outside investor, Peter Thiel. Thiel, in coordination with the CIA, was actively trying to resurrect controversial DARPA programs. Thiel formally acquired $500,000 worth of Facebook shares and was added to its board. Thiel's longstanding symbiotic relationship with Facebook cofounders extends to his company Palantir, as the data that Facebook users make public invariably winds up in Palantir's databases and helps drive the surveillance engine Palantir runs for a handful of US police departments, the military, and the intelligence community.
Note: Consider reading the full article by investigative reporter Whitney Webb to explore the scope of Facebook's military origins and the rise of mass surveillance. Read more about the relationship between the national security state and Google, Facebook, TikTok, and the entertainment industry. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and media manipulation from reliable sources.
Maya Jones* was only 13 when she first walked through the door of Courtney’s House, a drop-in centre for victims of child sex trafficking. When she was 12, she had started receiving direct messages on Instagram from a man she didn’t know. She decided to meet him in person. Then came his next request: “Can you help me make some money?” According to [Tina] Frundt, the founder of Courtney’s House, Maya explained that the man asked her to pose naked for photos, and to give him her Instagram password so that he could upload the photos to her profile. Frundt says Maya told her that the man, who was now calling himself a pimp, was using her Instagram profile to advertise her for sex. The internet is used by human traffickers as “digital hunting fields”, allowing them access to both customers and potential victims, with children being targeted by traffickers on social media platforms. The biggest of these, Facebook, is owned by Meta, the tech giant whose platforms, which also include Instagram, are used by more than 3 billion people. In 2020, according to a report by US-based not-for-profit the Human Trafficking Institute, Facebook was the platform most used to groom and recruit children by sex traffickers (65%), based on an analysis of 105 federal child sex trafficking cases that year. The HTI analysis ranked Instagram second most prevalent, with Snapchat third. While Meta says it is doing all it can, we have seen evidence that suggests it is failing to report or even detect the full extent of what is happening.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and sexual abuse scandals from reliable major media sources.
A large number of ex-officers from the FBI, CIA, NSC, and State Department have taken positions at Facebook, Twitter, and Google. The revelation comes amid fears that the FBI exercised control over Twitter censorship and the Hunter Biden laptop story. The Twitter files have revealed the Bureau's close relationship with Twitter: how it regularly demanded that accounts and tweets be banned, and its suspicious contact with the company before the Hunter laptop story was censored. The documents detailed how so many former FBI agents joined Twitter's ranks over the past few years that they created their own private Slack channel. A report by Mint Press' Alan MacLeod identified dozens of Twitter employees who had previously held positions at the Bureau. He also found that former CIA agents made up some of the top ranks in almost every politically sensitive department at Meta, the parent company of Facebook, Instagram, and WhatsApp. And in another report, MacLeod detailed the extent to which former CIA agents started working at Google. DailyMail.com has now been able to track down nine former CIA agents who are working, or have worked, at Meta, including Aaron Berman, the senior policy manager for misinformation at the company, who had previously written the president's daily briefings. Six others have worked for other intelligence agencies before joining the social media giant, many of whom have posted recently about Facebook's efforts to tamp down on so-called 'covert influence operations.'
Note: Explore a deeper analysis on the ex-CIA agents at Facebook and at Google. Additionally, read how Big Tech censors social media on behalf of corporate and government interests. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and media manipulation from reliable sources.
U.S. citizens are being subjected to a relentless onslaught from intrusive technologies that have become embedded in the everyday fabric of our lives, creating unprecedented levels of social and political upheaval. These widely used technologies ... include social media and what Harvard professor Shoshana Zuboff calls "surveillance capitalism"—the buying and selling of our personal info and even our DNA in the corporate marketplace. But powerful new ones are poised to create another wave of radical change. Under the mantle of the "Fourth Industrial Revolution," these include artificial intelligence or AI, the metaverse, the Internet of Things, the Internet of Bodies (in which our physical and health data is added into the mix to be processed by AI), and my personal favorite, police robots. This is a two-pronged effort involving both powerful corporations and government initiatives. These tech-based systems are operating "below the radar" and are rarely discussed in the mainstream media. The world's biggest tech companies are now richer and more powerful than most countries. According to an article in PC Week in 2021 discussing Apple's dominance: "By taking the current valuation of Apple, Microsoft, Amazon, and others, then comparing them to the GDP of countries on a map, we can see just how crazy things have become… Valued at $2.2 trillion, the Cupertino company is richer than 96% of the world. In fact, only seven countries currently outrank the maker of the iPhone financially."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
A MintPress News investigation has found dozens of ex-U.S. State Department officials working in key positions at TikTok. Many more individuals with backgrounds in the FBI, CIA and other departments of the national security state also hold influential posts at the social media giant, affecting the content that over one billion users see. The influx of State Department officials into TikTok’s upper ranks is a consequence of “Project Texas,” an initiative the company began in 2020 in the hopes of avoiding being banned altogether in the United States. During his time in office, Secretary of State Mike Pompeo led the charge to shut the platform down, frequently labeling it a “spying app” and a “propaganda tool for the Chinese Communist Party.” It was widely reported that the U.S. government had forced the sale of TikTok to Walmart and then Microsoft. But in late 2020, as Project Texas began, those deals mysteriously fell through, and the rhetoric about the dangers of TikTok from officials evaporated. Project Texas is a $1.5 billion security operation to move the company’s data to Austin. In doing so, it announced that it was partnering with tech giant Oracle, a corporation that, as MintPress has reported on, is the CIA in all but name. Evidently, Project Texas also secretly included hiring all manner of U.S. national security state personnel to oversee the company’s operations – and not just from the State Department. Virtually every branch of the national security state is present at TikTok.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corruption in intelligence agencies and in the corporate world from reliable major media sources.
Big Tech giants and their oligarchic owners now engage in a new type of censorship, which we have called “censorship by proxy.” Censorship by proxy describes restrictions on freedom of information undertaken by private corporations that exceed limits on governmental censorship and serve both corporate and government or third-party interests. Censorship by proxy is not subject to venerable First Amendment proscriptions on government interference with freedom of speech or freedom of the press. Censorship by proxy alerts us to the power of economic entities that are not normally recognized as “gatekeepers.” For example, in 2022, the digital financial service PayPal (whose founders include Peter Thiel and Elon Musk) froze the accounts of Consortium News and MintPress News for “unspecified offenses” and “risks” associated with their accounts, a ruling that prevented both independent news outlets from using funds maintained by PayPal. Consortium News and MintPress News have each filed critical news stories and commentary on the foreign policy objectives of the United States and NATO. PayPal issued notices to each news outlet, stating that, in addition to suspending their accounts, it might also seize their assets for “damages.” Joe Lauria, editor in chief of Consortium News, said he believed this was a case of “ideological policing.” Mnar Adley, head of MintPress News, warned, “The sanctions-regime war is coming home to hit the bank accounts of watchdog journalists.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
During one of his many visits to the Democratic Republic of the Congo, Siddharth Kara ... met a young woman sifting dirt for traces of cobalt. Priscille told him she had suffered two miscarriages and that her husband, a fellow “artisanal” miner, died of a respiratory disease. It is just one of many devastating personal accounts in Cobalt Red, a detailed exposé into the hidden world of small-scale cobalt mining in the Democratic Republic of the Congo (DRC). The “quaint” moniker of artisanal mining, Mr. Kara points out, belies a brutal industry where hundreds of thousands of men, women and children dig with bare hands and basic tools in toxic, perilous pits, eking out an existence on the bottom rung of the global supply chain. If you own a smartphone, tablet, laptop, e-scooter, [or] electric vehicle ... then it is a system in which you are unwittingly complicit. Around 75 per cent of the world’s cobalt is mined in the DRC. The rare, silvery metal is an essential component to every lithium-ion rechargeable battery. Congolese miners ... have experienced life-changing injuries, sexual assault, physical violence, corruption, displacement and abject poverty. Cobalt Red also documents many unreported deaths, including those of children buried alive in makeshift mining tunnels, and their bodies never recovered. Cobalt is toxic to touch and breathe in, and can be found alongside traces of radioactive uranium. Cancers, respiratory illnesses, miscarriages, headaches and painful skin conditions occur among adults who work without protective equipment. Children in mining communities suffer birth defects, developmental damage, vomiting and seizures from direct and indirect exposure to the heavy metals. Female miners, who earn less than the average two dollars per day paid to men, typically work in groups as sexual assault is common in mining areas. Major tech and EV companies extol commitments to human rights, zero-tolerance for child labor, and clean supply chains. Mr. Kara described these statements as “utterly inconsistent” with what’s happening on the ground.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
Trust Lab was founded by a team of well-credentialed Big Tech alumni who came together in 2021 with a mission: Make online content moderation more transparent, accountable, and trustworthy. A year later, the company announced a “strategic partnership” with the CIA’s venture capital firm. The quiet October 29 announcement of the partnership is light on details, stating that Trust Lab and In-Q-Tel — which invests in and collaborates with firms it believes will advance the mission of the CIA — will work on “a long-term project that will help identify harmful content and actors in order to safeguard the internet.” Key terms like “harmful” and “safeguard” are unexplained, but the press release goes on to say that the company will work toward “pinpointing many types of online harmful content, including toxicity and misinformation.” It’s difficult to imagine how aligning the startup with the CIA is compatible with [Trust Lab co-founder Tom] Siegel’s goal of bringing greater transparency and integrity to internet governance. What would it mean, for instance, to incubate counter-misinformation technology for an agency with a vast history of perpetuating misinformation? Placing the company within the CIA’s tech pipeline also raises questions about Trust Lab’s view of who or what might be “harmful” online, a nebulous concept that will no doubt mean something very different to the U.S. intelligence community than it means elsewhere. Trust Lab’s murky partnership with In-Q-Tel suggests a step toward greater governmental oversight of online speech.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and media manipulation from reliable sources.
Twitter owner Elon Musk spoke out on Saturday evening about the so-called “Twitter Files,” a long tweet thread posted by journalist Matt Taibbi, who had been provided with details about behind-the-scenes discussions on Twitter’s content moderation decision-making, including the call to suppress a 2020 New York Post story about Hunter Biden and his laptop. During a two-hour long Twitter Spaces session, Musk said a second “Twitter Files” drop will again involve Taibbi, along with journalist Bari Weiss, but did not give an exact date for when that would be released. Musk – who claims to have not read the released files himself – said the impetus for the original tweet thread was about what happened in the run-up to the 2020 presidential election and “how much government influence was there.” Taibbi’s first thread reaffirmed how, in the initial hours after the Post story about Hunter Biden went live, Twitter employees grappled with fears that it could have been the result of a Russian hacking operation. It showed employees on several Twitter teams debating over whether to restrict the article under the company’s hacked materials policy, weeks before the 2020 election. The emails Taibbi obtained are consistent with what former Twitter site integrity head Yoel Roth told journalist Kara Swisher in an onstage interview last week. Taibbi said the contact from political parties happened more frequently from Democrats, but provided no internal documents to back up his assertion.
Note: For more along these lines, see concise summaries of deeply revealing news articles on media corruption from reliable sources.
The EARN IT Act [is] a bill designed to confront the explosion of child sexual abuse material (CSAM) online. EARN IT would help address what is, disturbingly, a common experience for young users: routine exposure to predatory targeting, grooming, sexual violence, prostitution/sex trafficking, hardcore pornography and more. A New York Times investigation revealed that 70 million CSAM images were reported to the National Center for Missing and Exploited Children (NCMEC) in 2019, up from 600,000 in 2008, an "almost unfathomable" increase in criminality. The EARN IT Act restores privacy to victims of child sexual abuse material and allows them to sue those who cause them harm online, under federal civil law and state criminal and civil law. It also creates a new commission to issue guidelines to limit sex trafficking, grooming and sexual exploitation online. CSAM still exists because tech platforms have no incentive to prevent or eliminate it, because Section 230 of the Communications Decency Act (passed in 1996, before social media existed) gives them near-blanket immunity from liability. While some in the technology sector [are] claiming EARN IT is a threat to encryption and user privacy, the reality is that encryption can coexist with better business practices for online child safety. We can increase security and privacy while refraining from a privacy-absolutism that unintentionally allows sexual predators to run rampant online.
Note: To understand the scope of child sex abuse worldwide, learn about other major cover-ups in revealing news articles on sexual abuse scandals from reliable major media sources.
Ask questions or post content about COVID-19 that runs counter to the Biden administration's narrative and find yourself censored on social media. That's precisely what data analyst and digital strategist Justin Hart says happened to him. And so last week the Liberty Justice Center, a public-interest law firm, filed a suit on his behalf in California against Facebook, Twitter, President Joe Biden and United States Surgeon General Vivek Murthy for violating his First Amendment right to free speech. Hart had his social media most recently locked for merely posting an infographic that illustrated the lack of scientific research behind forcing children to wear masks to prevent the spread of COVID. In fact ... study after study repeatedly shows that children are safer than vaccinated adults and that the masks people actually wear don't do much good. The lawsuit contends that the federal government is "colluding with social media companies to monitor, flag, suspend and delete social media posts it deems 'misinformation.'" It can point to White House Press Secretary Jen Psaki's July remarks that senior White House staff are "in regular touch" with Big Tech platforms regarding posts about COVID. She also said the surgeon general's office is "flagging problematic posts for Facebook that spread [disinformation]." "Why do we think it's acceptable for the government to direct social media companies to censor people on critical issues such as COVID?" Hart asks. The Post has been targeted repeatedly by social media for solid, factual reporting.
Note: Read about another lawsuit alleging collusion between government and big tech companies to censor dissenting views on pandemic policies. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and media manipulation from reliable sources.
The intelligence community is about to get the equivalent of an adrenaline shot to the chest. This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence Agency over the past year will begin servicing all 17 agencies that make up the intelligence community. If the technology plays out as officials envision, it will usher in a new era of cooperation and coordination, allowing agencies to share information and services much more easily and avoid the kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks. For the first time, agencies within the intelligence community will be able to order a variety of on-demand computing and analytic services from the CIA and National Security Agency. What’s more, they’ll only pay for what they use. For the risk-averse intelligence community, the decision to go with a commercial cloud vendor is a radical departure from business as usual. It is difficult to overstate the cloud contract’s importance. In a recent public appearance, CIA Chief Information Officer Douglas Wolfe called it “one of the most important technology procurements in recent history,” with ramifications far outside the realm of technology. The importance of the cloud capabilities the CIA gets through leveraging Amazon Web Services’ horsepower is best exemplified in computing intelligence data. Instead of each agency building out its own systems, select agencies ... are responsible for governing its major components.
Note: The CIA tries to "collect everything and hold on to it forever." For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption from reliable major media sources.
Frances Haugen spent 15 years working for some of the largest social media companies in the world including Google, Pinterest, and until May, Facebook. Haugen quit Facebook of her own accord and left with thousands of pages of internal research and communications that she shared with the Securities and Exchange Commission. 60 Minutes obtained the documents from a Congressional source. On Sunday, in her first interview, Haugen told 60 Minutes correspondent Scott Pelley about what she called "systemic" problems with the platform's ranking algorithm that led to the amplification of "angry content" and divisiveness. Evidence of that, she said, is in the company's own internal research. Haugen said Facebook changed its algorithm in 2018 to promote "what it calls meaningful social interactions" through "engagement-based rankings." She explained that content that gets engaged with – such as reactions, comments, and shares – gets wider distribution. "Political parties have been quoted, in Facebook's own research, saying, we know you changed how you pick out the content that goes in the home feed," said Haugen. "And now if we don't publish angry, hateful, polarizing, divisive content, crickets." "We have no independent transparency mechanisms," Haugen [said]. "Facebook ... picks metrics that are in its own benefit. And the consequence is they can say we get 94% of hate speech and then their internal documents say we get 3% to 5% of hate speech. We can't govern that."
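Facebook's real ranking model is not public, so the following is only a minimal sketch of the "engagement-based ranking" idea Haugen describes: each post gets a weighted sum of reactions, comments and shares, and the feed is sorted by that score, so whatever provokes the most interaction rises to the top. The posts and weights are invented.

    # Toy engagement-based ranking; posts and weights are invented.
    posts = [
        {"text": "calm local news", "reactions": 400, "comments": 10, "shares": 5},
        {"text": "angry hot take", "reactions": 300, "comments": 180, "shares": 90},
    ]

    def engagement_score(post, w_react=1.0, w_comment=5.0, w_share=10.0):
        # Weighted sum of interactions: more engagement, wider distribution.
        return (w_react * post["reactions"]
                + w_comment * post["comments"]
                + w_share * post["shares"])

    feed = sorted(posts, key=engagement_score, reverse=True)
    print([p["text"] for p in feed])    # the provocative post ranks first,
                                        # despite having fewer reactions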
Note: For more along these lines, see concise summaries of deeply revealing news articles on media manipulation from reliable sources.
Justin Rosenstein had tweaked his laptop's operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. He was particularly aware of the allure of Facebook "likes," which he describes as "bright dings of pseudo-pleasure" that can be as hollow as they are seductive. And Rosenstein should know: he was the Facebook engineer who created the "like" button. There is growing concern that as well as addicting users, technology is contributing toward so-called "continuous partial attention," severely limiting people's ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity even when the device is turned off. But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein's peers believe can be attributed to the rise of social media and the attention-based market that drives it. Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete. It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
Google will not seek to extend its contract next year with the Defense Department for artificial intelligence used to analyze drone video, squashing a controversial alliance that had raised alarms over the technological buildup between Silicon Valley and the military. Google ... has faced widespread public backlash and employee resignations for helping develop technological tools that could aid in warfighting. Google will soon release new company principles related to the ethical uses of AI. Thousands of Google employees wrote chief executive Sundar Pichai an open letter urging the company to cancel the contract, and many others signed a petition saying the company's assistance in developing combat-zone technology directly countered the company's famous "Don't be evil" motto. Several Google AI employees had told The Post they believed they wielded a powerful influence over the company's decision-making. The advanced technology's top researchers and developers are in heavy demand, and many had organized resistance campaigns or threatened to leave. The sudden announcement Friday was welcomed by several high-profile employees. Meredith Whittaker, an AI researcher and the founder of Google's Open Research group, tweeted Friday: "I am incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war."
Note: Explore a treasure trove of concise summaries of deeply inspiring news articles that may move you to make a difference.
Hundreds of academics have urged Google to abandon its work on a U.S. Department of Defense-led drone program codenamed Project Maven. An open letter calling for change was published Monday by the International Committee for Robot Arms Control (ICRAC). The project is formally known as the Algorithmic Warfare Cross-Functional Team. Its objective is to turn the enormous volume of data available to DoD into actionable intelligence. More than 3,000 Google staffers signed a petition in April in protest at the company's focus on warfare. "We believe that Google should not be in the business of war," it read. "Therefore we ask that Project Maven be cancelled." The ICRAC warned this week the project could potentially be mixed with general user data and exploited to aid targeted killing. Currently, its letter has nearly 500 signatures. It stated: "We are ... deeply concerned about the possible integration of Google's data on people's everyday lives with military surveillance data, and its combined application to targeted killing ... Google has moved into military work without subjecting itself to public debate or deliberation. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief." Lieutenant Colonel Garry Floyd, deputy chief of the Algorithmic Warfare Cross Functional Team, said ... earlier this month that Maven was already active in five or six combat locations.
Note: You can read the full employee petition on this webpage. The New York Times also published a good article on this. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and war.
Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company's involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes. The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash ... that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. "We believe that Google should not be in the business of war," says the letter, addressed to Sundar Pichai, the company's chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ever build warfare technology. That kind of idealistic stance ... is distinctly foreign to Washington's massive defense industry and certainly to the Pentagon, where the defense secretary, Jim Mattis, has often said a central goal is to increase the lethality of the United States military. Some of Google's top executives have significant Pentagon connections. Eric Schmidt, former executive chairman of Google and still a member of the executive board of Alphabet, Google's parent company, serves on a Pentagon advisory body, the Defense Innovation Board, as does a Google vice president, Milo Medin. Project Maven ... began last year as a pilot program to find ways to speed up the military application of the latest A.I. technology.
Note: The use of artificial intelligence technology for drone strike targeting is one of many ways warfare is being automated. Strong warnings against combining artificial intelligence with war have recently been issued by America's second-highest ranking military officer, tech mogul Elon Musk, and many of the world's most recognizable scientists. For more along these lines, see concise summaries of deeply revealing war news articles from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.