Big Tech News Stories
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for missions ranging from security patrol to combat. Robot dogs first came on the scene in the early 2000s with Boston Dynamics' "BigDog" design, and they have since been used in both military and security roles. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others carry rifles and other weapons; one Ohio company even built one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen their first known combat deployment. Built by the British company Robot Alliance, the systems aren't autonomous; they are operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting the insides of buildings and trenches, particularly smaller spaces where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
More than 300 million children across the globe are victims of online sexual exploitation and abuse each year, research suggests. In what is believed to be the first global estimate of the scale of the crisis, researchers at the University of Edinburgh found that 12.6% of the world’s children have been victims of nonconsensual taking, sharing and exposure to sexual images and video in the past year, equivalent to about 302 million young people. A similar proportion – 12.5% – had been subject to online solicitation, such as unwanted sexual talk that can include sexting, sexual questions and sexual act requests by adults or other youths. Offences can also take the form of “sextortion”, where predators demand money from victims to keep images private, and abuse of AI deepfake technology. The US is a particularly high-risk area. The university’s Childlight initiative – which aims to understand the prevalence of child abuse – includes a new global index, which found that one in nine men in the US (equivalent to almost 14 million) admitted online offending against children at some point. Surveys found 7% of British men, equivalent to 1.8 million, admitted the same. The research also found many men admitted they would seek to commit physical sexual offences against children if they thought it would be kept secret. Child abuse material is so prevalent that files are on average reported to watchdog and policing organisations once every second.
Note: New Mexico's attorney general has called Meta the world's "single largest marketplace for paedophiles." For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw on first-hand accounts of combat to justify arguments that AI is changing the character, if not the nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities. Three related considerations have combined to shape the hype surrounding military AI. First [is] the emergence of a new military-industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is both cause and effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to the human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed. Senior officers do not trust AI-enhanced capabilities. To the extent they do demonstrate increased levels of trust in machines, their trust is moderated by how machines are used.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
Within Meta’s Counterterrorism and Dangerous Organizations team, [Hannah] Byrne helped craft one of the most powerful and secretive censorship policies in internet history. She and her team helped draft the rulebook that applies to the world’s most diabolical people and groups: the Ku Klux Klan, cartels, and terrorists. Meta bans these so-called Dangerous Organizations and Individuals, or DOI, from using its platforms, but further prohibits its billions of users from engaging in “glorification,” “support,” or “representation” of anyone on the list. As an armed white supremacist group with credible allegations of human rights violations hanging over it, the Azov [Battalion] had landed on the Dangerous Organizations list. Following the Russian invasion of Ukraine, Meta not only moved swiftly to allow users to cheer on the Azov Battalion, but also loosened its rules around incitement, hate speech, and gory imagery so Ukrainian civilians could share images of the suffering around them. Within weeks, Byrne found the moral universe around her inverted: the heavily armed hate group, sanctioned by Congress since 2018, was recast as freedom fighters resisting occupation, not terroristic racists. What seems most galling to Byrne is the contrast between how malleable Meta’s Dangerous Organizations policy proved for Ukraine and how draconian it has felt for those protesting the war in Gaza. “I know the U.S. government is in constant contact with Facebook employees,” she said. Meta’s censorship systems are “basically an extension of the government,” Byrne said. “You want military, Department of State, CIA people enforcing free speech? That is what is concerning.”
Note: Read more about Facebook's secret blacklist, and how Facebook censored reporting of war crimes in Gaza but allowed praise for the neo-Nazi Azov Brigade on its platform. Going deeper, click here if you want to know the real history behind the Russia-Ukraine war. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
HouseFresh.com ... was started in 2020 by Gisele Navarro and her husband, based on a decade of experience writing about indoor air quality products. They filled their basement with purifiers, running rigorous science-based tests ... to help consumers sort through marketing hype. HouseFresh is an example of what has been a flourishing industry of independent publishers producing exactly the sort of original content Google says it wants to promote. The website grew into a thriving business with 15 full-time employees. In September 2023, Google made one in a series of major updates to the algorithm that runs its search engine. A second update came in March, and it was even more punishing. "It decimated us," Navarro says. "Suddenly the search terms that used to bring up HouseFresh were sending people to big lifestyle magazines that clearly don't even test the products." HouseFresh's thousands of daily visitors dwindled to just hundreds. Over the last few weeks, HouseFresh has had to lay off most of its team. Results for popular search terms are crowded with websites that contain very little useful information, but tonnes of ads and links to retailers that earn publishers a share of profits. "Google's just committing war on publisher websites," [search engine expert Lily] Ray says. "It's almost as if Google designed an algorithm update to specifically go after small bloggers. I've talked to so many people who've just had everything wiped out." A number of website owners and search experts ... said there's been a general shift in Google results towards websites with big established brands, and away from small and independent sites, that seems totally disconnected from the quality of the content.
Note: These changes to Google search have significantly reduced traffic to WantToKnow.info and other independent media outlets. Read more about Google's bias machine, and how Google relies on user reactions rather than actual content to shape search results. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
“Anonymity is a shield from the tyranny of the majority,” wrote Supreme Court Justice John Paul Stevens in a 1995 ruling affirming Americans’ constitutional right to engage in anonymous political speech. That shield has weakened in recent years due to advances in the surveillance technology available to law enforcement. Everything from social media posts, to metadata about phone calls, to the purchase information collected by data brokers, to location data showing every step taken, is available to law enforcement — often without a warrant. Avoiding all of this tracking would require such extrication from modern social life that it would be virtually impossible for most people. International Mobile Subscriber Identity (IMSI) catchers, or Stingrays, impersonate cell phone towers to collect the unique ID of a cell phone’s SIM card. Geofence warrants, also known as reverse location warrants ... let law enforcement request location data from apps on your phone or from tech companies. Data brokers are companies that assemble information about people from a variety of usually public sources. Tons of websites and apps that everyday people use collect information on them, and this information is often sold to third parties who can aggregate or piece together someone’s profile across the sites that are tracking them. Companies like Fog Data Science, LexisNexis, Precisely and Acxiom not only possess data on billions of people, they also ... have information about someone’s political preferences as well as demographic information. Surveillance of social media accounts allows police to gather vast amounts of information about how protests are organized ... frequently utilizing networks of fake accounts. One firm advertised the ability to help police identify “activists and disruptors” at protests.
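To make concrete why geofence warrants alarm civil libertarians, here is a toy TypeScript sketch of the kind of query a reverse location warrant effectively runs. Everything in it (the data shape, field names, and sample numbers) is invented for illustration; no vendor's real system is shown.

```typescript
// Illustrative only: a toy "reverse location" query. The types, fields,
// and sample data below are hypothetical, not any company's actual API.
interface LocationPing {
  deviceId: string;  // pseudonymous device identifier
  lat: number;
  lon: number;
  timestamp: number; // Unix seconds
}

interface BoundingBox {
  minLat: number; maxLat: number;
  minLon: number; maxLon: number;
}

// Return every device seen inside the box during the time window --
// suspects and bystanders alike, which is exactly the concern.
function geofenceQuery(
  pings: LocationPing[],
  box: BoundingBox,
  startTs: number,
  endTs: number
): string[] {
  const hits = pings.filter(
    (p) =>
      p.timestamp >= startTs && p.timestamp <= endTs &&
      p.lat >= box.minLat && p.lat <= box.maxLat &&
      p.lon >= box.minLon && p.lon <= box.maxLon
  );
  return [...new Set(hits.map((p) => p.deviceId))];
}

// Example: one city block over a two-hour window.
const sample: LocationPing[] = [
  { deviceId: "device-a", lat: 40.7128, lon: -74.0060, timestamp: 1700000000 },
  { deviceId: "device-b", lat: 40.7129, lon: -74.0061, timestamp: 1700003600 },
  { deviceId: "device-c", lat: 41.0000, lon: -73.0000, timestamp: 1700000500 },
];
const block: BoundingBox = { minLat: 40.71, maxLat: 40.72, minLon: -74.01, maxLon: -74.00 };
console.log(geofenceQuery(sample, block, 1700000000, 1700007200));
// -> ["device-a", "device-b"]: everyone in the area, whether or not a suspect
```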
Note: For more along these lines, explore concise summaries of news articles on police corruption and the erosion of civil liberties from reliable major media sources.
Facebook’s inscrutable feed algorithm, which is supposed to calculate which content is most likely to appeal to me and then send it my way ... feels like an obstacle to how I’d like to connect with my friends. British software developer Louis Barclay developed software ... known as an extension, which can be installed in a Chrome web browser. Christened Unfollow Everything, it would automate the process of unfollowing each of my 1,800 friends, a task that would take hours to do manually. The result is that I would be able to experience Facebook as it once was, when it contained profiles of my friends, but without the endless updates, photos, videos and the like that Facebook’s algorithm generates. If tools like Unfollow Everything were allowed to flourish, and we could have better control over what we see on social media, these tools might create a more civic-minded internet. Unfortunately, Mr. Barclay was forced by Facebook to remove the software. Large social media platforms appear to be increasingly resistant to third-party tools that give users more command over their experiences. After talking with Mr. Barclay, I decided to develop a new version of Unfollow Everything. I — and the lawyers at the Knight First Amendment Institute at Columbia — asked a federal court in California last week to rule on whether users should have a right to use tools like Unfollow Everything that give them increased power over how they use social networks, particularly over algorithms that have been engineered to keep users scrolling on their sites.
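As the op-ed describes it, Unfollow Everything scripted the browser itself rather than using any official Facebook interface. Below is a minimal sketch of that style of automation, assuming a hypothetical DOM selector and click delay; it is not Barclay's actual code, and Facebook's real markup differs and changes often.

```typescript
// Sketch of a content script that clicks every "Unfollow" control on the
// page. The selector and delay are invented for illustration.
async function unfollowEverything(): Promise<void> {
  const buttons = Array.from(
    document.querySelectorAll<HTMLButtonElement>('button[data-action="unfollow"]')
  );
  for (const button of buttons) {
    button.click();
    // Pause between clicks so each request completes at a human-like pace.
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  console.log(`Unfollowed ${buttons.length} friends, pages, and groups.`);
}

unfollowEverything();
```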
Note: The above was written by Ethan Zuckerman, associate professor of public policy and director of the UMass Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst. For more along these lines, explore concise summaries of news articles on Big Tech from reliable major media sources.
Something went suddenly and horribly wrong for adolescents in the early 2010s. Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent. Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. There’s an important backstory, beginning ... when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck. Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry.
Note: The author of this article is Jonathan Haidt, a social psychologist and ethics professor who's been on the frontlines investigating the youth mental health crisis. He is the co-founder of LetGrow.org, an organization that provides inspiring solutions and ideas to help families and schools support children's well-being and foster childhood independence. For more along these lines, explore concise summaries of news articles on mental health.
Beheadings, mass killings, child abuse, hate speech – all of it ends up in the inboxes of a global army of content moderators. You don’t often see or hear from them – but these are the people whose job it is to review and then, when necessary, delete content that either gets reported by other users, or is automatically flagged by tech tools. Moderators are often employed by third-party companies, but they work on content posted directly on to the big social networks including Instagram, TikTok and Facebook. “If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator. “But in the background, I personally was moderating, in the hundreds, horrific and traumatising videos. I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.” In 2020, Meta, then known as Facebook, agreed to pay a settlement of $52m (£40m) to moderators who had developed mental health issues. The legal action was initiated by a former moderator [who] described moderators as the “keepers of souls”, because of the amount of footage they see containing the final moments of people’s lives. The ex-moderators I spoke to all used the word “trauma” in describing the impact the work had on them. One ... said he found it difficult to interact with his wife and children because of the child abuse he had witnessed. What came across, very powerfully, was the immense pride the moderators had in the roles they had played in protecting the world from online harm.
Note: Read more about the disturbing world of content moderation. For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Ask "is the British tax system fair", and Google cites a quote ... arguing that indeed it is. Ask "is the British tax system unfair", and Google's Featured Snippet explains how UK taxes benefit the rich and promote inequality. "What Google has done is they've pulled bits out of the text based on what people are searching for and fed them what they want to read," [Digital marketing director at Dragon Metrics Sarah] Presch says. "It's one big bias machine." The vast majority of internet traffic begins with a Google Search, and people rarely click on anything beyond the first five links. The system that orders the links on Google Search has colossal power over our experience of the world. You might choose to engage with information that keeps you trapped in your filter bubble, "but there's only a certain bouquet of messages that are put in front of you to choose from in the first place", says [professor] Silvia Knobloch-Westerwick. A recent US anti-trust case against Google uncovered internal company documents where employees discuss some of the techniques the search engine uses to answer your questions. "We do not understand documents – we fake it," an engineer wrote in a slideshow used during a 2016 presentation. "A billion times a day, people ask us to find documents relevant to a query… We hardly look at documents. We look at people. If a document gets a positive reaction, we figure it is good. If the reaction is negative, it is probably bad. Grossly simplified, this is the source of Google's magic. That is how we serve the next person, keep the induction rolling, and sustain the illusion that we understand." In other words, Google watches to see what people click on when they enter a given search term. When people seem satisfied by a certain type of information, it's more likely that Google will promote that kind of search result for similar queries in the future.
Note: For more along these lines, explore concise summaries of revealing news articles on Big Tech from reliable major media sources.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technology like automated license plate readers, drones, facial recognition, and social media monitoring has added a uniquely dangerous element to the physical intimidation that has always come with law enforcement surveillance. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement has formed partnerships with companies like Clearview AI, which scraped billions of images from the internet for a facial recognition database ... that has been used by law enforcement agencies across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
Air fryers that gather your personal data and audio speakers “stuffed with trackers” are among examples of smart devices engaged in “excessive” surveillance, according to the consumer group Which? The organisation tested three air fryers ... each of which requested permission to record audio on the user’s phone through a connected app. Which? found the app provided by the company Xiaomi connected to trackers for Facebook and a TikTok ad network. The Xiaomi fryer and another by Aigostar sent people’s personal data to servers in China. Its tests also examined smartwatches that it said required “risky” phone permissions – in other words, giving invasive access to the consumer’s phone through location tracking, audio recording and access to stored files. Which? also found digital speakers that were preloaded with trackers for Facebook, Google and a digital marketing company called Urbanairship. The Information Commissioner’s Office (ICO) said the latest consumer tests “show that many products not only fail to meet our expectations for data protection but also consumer expectations”. A growing number of devices in homes are connected to the internet, including camera-enabled doorbells and smart TVs. Last Black Friday, the ICO encouraged consumers to check whether smart products they planned to buy had a physical switch to prevent the gathering of voice data.
Note: A 2015 New York Times article warned that smart devices were a "train wreck in privacy and security." For more along these lines, read about how automakers collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people’s “sexual activities” when drivers pair their smartphones to their vehicles.
The past decade has seen a rapid expansion of the commercial space industry. In a 2023 white paper, a group of concerned astronomers warned against repeating Earthly “colonial practices” in outer space. Some of these colonial practices might include the enclosure of land, the exploitation of environmental resources and the destruction of landscapes – in the name of ideals such as destiny, civilization and the salvation of humanity. People of Bawaka Country in northern Australia have told the space industry that their ancestors guide human life from their home in the galaxy, and that this relationship is increasingly threatened by large orbiting satellite networks. Similarly, Inuit elders say their ancestors live on celestial bodies. Navajo leadership has asked NASA not to land human remains on the Moon. Kanaka elders have insisted that no more telescopes be built on Mauna Kea, which Native Hawaiians consider to be ancestral and sacred. These Indigenous positions stand in stark contrast with the insistence of many in the industry that space is empty and inanimate. In 1967, a slew of nations, including the U.S., U.K. and USSR, signed the Outer Space Treaty. This treaty declared, among other things, that no nation can own a planetary body or part of one. The nations that signed the Outer Space Treaty were effectively saying, “Let’s not battle each other for territory and resources again. Let’s do outer space differently.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
A little-known advertising cartel that controls 90% of global marketing spending supported efforts to defund news outlets and platforms including The Post — at points urging members to use a blacklist compiled by a shadowy government-funded group that purports to guard news consumers against “misinformation.” The World Federation of Advertisers (WFA), which reps 150 of the world’s top companies — including ExxonMobil, GM, General Mills, McDonald’s, Visa, SC Johnson and Walmart — and 60 ad associations sought to squelch online free speech through its Global Alliance for Responsible Media (GARM) initiative, the House Judiciary Committee found. “The extent to which GARM has organized its trade association and coordinates actions that rob consumers of choices is likely illegal under the antitrust laws and threatens fundamental American freedoms,” the Republican-led panel said in its 39-page report. The new report establishes links between the WFA’s “responsible media” initiative and the taxpayer-funded Global Disinformation Index (GDI), a London-based group that in 2022 unveiled an ad blacklist of 10 news outlets whose opinion sections tilted conservative or libertarian, including The Post, RealClearPolitics and Reason magazine. Internal communications suggest that rather than using an objective rubric to guide decisions, GARM members simply monitored disfavored outlets closely to be able to find justification to demonetize them.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and media manipulation from reliable sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
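As a rough illustration of what “algorithmic personalized pricing” can amount to in practice, here is a minimal TypeScript sketch. The profile fields, signals, and weights are invented for this example; real systems rely on far more opaque machine-learning models.

```typescript
// Hypothetical "surveillance pricing" sketch: the same product priced
// differently per shopper, based on inferences from their data profile.
interface ShopperProfile {
  inferredIncomeBracket: "low" | "mid" | "high"; // guessed from purchase history
  urgencyScore: number; // 0..1, e.g. repeat views of the same product
}

function personalizedPrice(basePrice: number, profile: ShopperProfile): number {
  let multiplier = 1.0;
  if (profile.inferredIncomeBracket === "high") multiplier += 0.10;
  // Shoppers who look like they need the item soon get charged more.
  multiplier += 0.15 * profile.urgencyScore;
  return Math.round(basePrice * multiplier * 100) / 100;
}

console.log(personalizedPrice(100, { inferredIncomeBracket: "high", urgencyScore: 0.8 }));
// -> 122: this shopper quietly pays 22% more than the list price
console.log(personalizedPrice(100, { inferredIncomeBracket: "low", urgencyScore: 0 }));
// -> 100: another shopper sees the base price
```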
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Meta CEO Mark Zuckerberg told the House Judiciary Committee that his company's moderators faced significant pressure from the federal government to censor content on Facebook and Instagram—and that he regretted caving to it. In a letter to Rep. Jim Jordan (R–Ohio), the committee's chairman, Zuckerberg explained that the pressure also applied to "humor and satire" and that in the future, Meta would not blindly obey the bureaucrats. The letter refers specifically to the widespread suppression of contrarian viewpoints relating to COVID-19. Email exchanges between Facebook moderators and CDC officials reveal that the government took a heavy hand in suppressing content. Health officials did not merely vet posts for accuracy but also made pseudo-scientific determinations about whether certain opinions could cause social "harm" by undermining the effort to encourage all Americans to get vaccinated. But COVID-19 content was not the only kind of speech the government went after. Zuckerberg also explains that the FBI warned him about Russian attempts to sow chaos on social media by releasing a fake story about the Biden family just before the 2020 election. This warning motivated Facebook to take action against the New York Post's Hunter Biden laptop story when it was published in October 2020. In his letter, Zuckerberg states that this was a mistake and that moving forward, Facebook will never again demote stories pending approval from fact-checkers.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and government corruption from reliable major media sources.
In almost every country on Earth, the digital infrastructure upon which the modern economy was built is owned and controlled by a small handful of monopolies, based largely in Silicon Valley. This system is looking more and more like neo-feudalism. Just as the feudal lords of medieval Europe owned all of the land ... the US Big Tech monopolies of the 21st century act as corporate feudal lords, controlling all of the digital land upon which the digital economy is based. A monopolist in the 20th century would have loved to control a country’s supply of, say, refrigerators. But the Big Tech monopolists of the 21st century go a step further and control all of the digital infrastructure needed to buy those fridges — from the internet itself to the software, cloud hosting, apps, payment systems, and even the delivery service. These corporate neo-feudal lords don’t just dominate a single market or a few related ones; they control the marketplace. They can create and destroy entire markets. Their monopolistic control extends well beyond just one country, to almost the entire world. If a competitor does manage to create a product, US Big Tech monopolies can make it disappear. Imagine you are an entrepreneur. You develop a product, make a website, and offer to sell it online. But then you search for it on Google, and it does not show up. Instead, Google promotes another, similar product in the search results. This is not a hypothetical; this already happens.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.