AI News Stories
Artificial Intelligence (AI) is an emerging technology with great promise and great potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
AI could mean fewer body bags on the battlefield — but that's exactly what terrifies the godfather of AI. Geoffrey Hinton, the computer scientist known as the "godfather of AI," said the rise of killer robots won't make wars safer. It will make conflicts easier to start by lowering the human and political cost of fighting. Hinton said ... that "lethal autonomous weapons, that is weapons that decide by themselves who to kill or maim, are a big advantage if a rich country wants to invade a poor country." "The thing that stops rich countries invading poor countries is their citizens coming back in body bags," he said. "If you have lethal autonomous weapons, instead of dead people coming back, you'll get dead robots coming back." That shift could embolden governments to start wars — and enrich defense contractors in the process, he said. Hinton also said AI is already reshaping the battlefield. "It's fairly clear it's already transformed warfare," he said, pointing to Ukraine as an example. "A $500 drone can now destroy a multimillion-dollar tank." Traditional hardware is beginning to look outdated, he added. "Fighter jets with people in them are a silly idea now," Hinton said. "If you can have AI in them, AIs can withstand much bigger accelerations — and you don't have to worry so much about loss of life." One Ukrainian soldier who works with drones and uncrewed systems [said] in a February report that "what we're doing in Ukraine will define warfare for the next decade."
Note: As law expert Dr. Salah Sharief put it, "The detached nature of drone warfare has anonymized and dehumanized the enemy, greatly diminishing the necessary psychological barriers of killing." For more, read our concise summaries of news articles on AI and warfare technology.
“Ice is just around the corner,” my friend said, looking up from his phone. A day earlier, I had met with foreign correspondents at the United Nations to explain the AI surveillance architecture that Immigration and Customs Enforcement (Ice) is using across the United States. The law enforcement agency uses targeting technologies which one of my past employers, Palantir Technologies, has both pioneered and proliferated. Technology like Palantir’s plays a major role in world events, from wars in Iran, Gaza and Ukraine to the detainment of immigrants and dissident students in the United States. Known as intelligence, surveillance, target acquisition and reconnaissance (Istar) systems, these tools, built by several companies, allow users to track, detain and, in the context of war, kill people at scale with the help of AI. They deliver targets to operators by combining immense amounts of publicly and privately sourced data to detect patterns, and are particularly helpful in projects of mass surveillance, forced migration and urban warfare. Also known as “AI kill chains”, they pull us all into a web of invisible tracking mechanisms that we are just beginning to comprehend, yet are starting to experience viscerally in the US as Ice wields these systems near our homes, churches, parks and schools. The dragnets powered by Istar technology trap more than migrants and combatants ... in their wake. They appear to violate first and fourth amendment rights.
Note: Read how Palantir helped the NSA and its allies spy on the entire planet. Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and Big Tech.
In July, US group Delta Air Lines revealed that approximately 3 percent of its domestic fare pricing is determined using artificial intelligence (AI) – although it has not elaborated on how this happens. The company said it aims to increase this figure to 20 percent by the end of this year. According to former Federal Trade Commission Chair Lina Khan ... some companies are able to use your personal data to predict what is known as your “pain point” – the maximum amount you’re willing to spend. In January, the US’s Federal Trade Commission (FTC), which regulates fair competition, reported on a surveillance pricing study it carried out in July 2024. It found that companies can collect data directly through account registrations, email sign-ups and online purchases in order to do this. Additionally, web pixels installed by intermediaries track digital signals including your IP address, device type, browser information, language preferences and “granular” website interactions such as mouse movements, scrolling patterns and video viewing behaviour. This is known as “surveillance pricing”. The FTC Surveillance Pricing report lists several ways in which consumers can protect their data. These include using private browsers to do your online shopping, opting out of consumer tracking where possible, clearing the cookies in your history or using virtual private networks (VPNs) to shield your data from being collected.
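To make the mechanism concrete, here is a minimal, hypothetical sketch of how browsing signals like those the FTC report describes (device type, engagement, purchase history) could be folded into an individualized price. All field names, weights, and numbers below are illustrative only and are not drawn from any company's actual system.

```python
# Hypothetical sketch of "surveillance pricing": browsing signals feeding a
# per-shopper price adjustment. Everything here is illustrative, not any
# company's real model.

from dataclasses import dataclass


@dataclass
class BrowsingSignals:
    device_type: str           # e.g. a premium phone vs. a budget model
    language: str
    scroll_depth: float        # fraction of the page the shopper viewed
    video_seconds_watched: int
    prior_purchases: int


def personalized_fare(base_fare: float, s: BrowsingSignals) -> float:
    """Nudge the displayed fare toward the shopper's inferred 'pain point'."""
    multiplier = 1.0
    if "Pro" in s.device_type:        # premium hardware read as higher willingness to pay
        multiplier += 0.10
    if s.video_seconds_watched > 60:  # heavy engagement read as strong purchase intent
        multiplier += 0.05
    if s.prior_purchases > 5:         # repeat customer read as less price-sensitive
        multiplier += 0.05
    return round(base_fare * multiplier, 2)


shopper = BrowsingSignals(device_type="iPhone 15 Pro", language="en-US",
                          scroll_depth=0.9, video_seconds_watched=120, prior_purchases=8)
print(personalized_fare(400.00, shopper))  # same seat, higher displayed price for this profile
```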
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Larry Ellison, the billionaire cofounder of Oracle ... said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior." Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools. Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras. "We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX. Ellison is the world's sixth-richest man with a net worth of $157 billion.
Note: As journalist Kenan Malik put it, "The problem we face is not that machines may one day exercise power over humans. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
AI’s promise of behavior prediction and control fuels a vicious cycle of surveillance which inevitably triggers abuses of power. The problem with using data to make predictions is that the process can be used as a weapon against society, threatening democratic values. As the lines between private and public data are blurred in modern society, many won’t realize that their private lives are becoming data points used to make decisions about them. What AI does is make this a surveillance ratchet, a device that only goes in one direction, which goes something like this: To make the inferences I want to make to learn more about you, I must collect more data on you. For my AI tools to run, I need data about a lot of you. And once I’ve collected this data, I can monetize it by selling it to others who want to use AI to make other inferences about you. AI creates a demand for data but also becomes the result of collecting data. What makes AI prediction both powerful and lucrative is being able to control what happens next. If a bank can claim to predict what people will do with a loan, it can use that to decide whether they should get one. If an admissions officer can claim to predict how students will perform in college, they can use that to decide which students to admit. Amazon’s Echo devices have been subject to warrants for the audio recordings made by the device inside our homes—recordings that were made even when the people present weren’t talking directly to the device. The desire to surveil is bipartisan. It’s about power, not party politics.
Note: As journalist Kenan Malik put it, "It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more, read our concise summaries of news articles on AI.
In Silicon Valley, AI tech giants are in a bidding war, competing to hire the best and brightest computer programmers. But a different hiring spree is underway in D.C. AI firms are on an influence-peddling spree, hiring hundreds of former government officials and retaining former members of Congress as consultants and lobbyists. The latest disclosure filings show over 500 entities lobbying on AI policy—from federal rules designed to preempt state and local safety regulations to water and energy-intensive data centers and integration into government contracting and certifications. Lawmakers are increasingly making the jump from serving constituents as elected officials to working directly as influence peddlers for AI interests. Former Sen. Laphonza Butler, D-Calif., a former lobbyist appointed to the U.S. Senate to fill the seat of Sen. Dianne Feinstein, left Congress last year and returned to her former profession. She is now working as a consultant to OpenAI, the firm behind ChatGPT. Former Sen. Richard Burr, R-N.C., recently registered for the first time as a lobbyist. Among his initial clients is Lazarus AI, which sells AI products to the Defense Department. The expanding reach of artificial intelligence is rapidly reshaping hundreds of professions, weapons of war, and the ways we connect with one another. What's clear is that the AI firms set to benefit most from these changes are taking control of the policymaking apparatus to write the laws and regulations during the transition.
Note: For more, read our concise summaries of news articles on AI and Big Tech.
Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue ... highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself. The AI likely conflated the basal ganglia, an area of the brain that's associated with motor movements and habit formation, and the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia." It's an embarrassing reveal that underlines persistent and impactful shortcomings of the tech. In Google's search results, this can lead to headaches for users during their research and fact-checking efforts. But in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google's faux pas more than likely didn't result in any danger to human patients, it sets a worrying precedent, experts argue. In a medical context, AI hallucinations could easily lead to confusion and potentially even put lives at risk.
Note: For more along these lines, read our concise summaries of news articles on AI and corruption in science.
U.S. Customs and Border Protection, flush with billions in new funding, is seeking “advanced AI” technologies to surveil urban residential areas, increasingly sophisticated autonomous systems, and even the ability to see through walls. A CBP presentation for an “Industry Day” summit with private sector vendors ... lays out a detailed wish list of tech CBP hopes to purchase. State-of-the-art, AI-augmented surveillance technologies will be central to the Trump administration’s anti-immigrant campaign, which will extend deep into the interior of the North American continent. [A] reference to AI-aided urban surveillance appears on a page dedicated to the operational needs of Border Patrol’s “Coastal AOR,” or area of responsibility, encompassing the entire southeast of the United States. “In the best of times, oversight of technology and data at DHS is weak and has allowed profiling, but in recent months the administration has intentionally further undermined DHS accountability,” explained [Spencer Reynolds, a former attorney with the Department of Homeland Security]. “Artificial intelligence development is opaque, even more so when it relies on private contractors that are unaccountable to the public — like those Border Patrol wants to hire. Injecting AI into an environment full of biased data and black-box intelligence systems will likely only increase risk and further embolden the agency’s increasingly aggressive behavior.”
Note: For more along these lines, read our concise summaries of news articles on AI and immigration enforcement corruption.
The fusion of artificial intelligence (AI) and blockchain technology has generated excitement, but both fields face fundamental limitations that can’t be ignored. What if these two technologies, each revolutionary in its own right, could solve each other’s greatest weaknesses? Imagine a future where blockchain networks are seamlessly efficient and scalable, thanks to AI’s problem-solving prowess, and where AI applications operate with full transparency and accountability by leveraging blockchain’s immutable record-keeping. This vision is taking shape today through a new wave of decentralized AI projects. Leading the charge, platforms like SingularityNET, Ocean Protocol, and Fetch.ai are showing how a convergence of AI and blockchain could not only solve each other’s biggest challenges but also redefine transparency, user control, and trust in the digital age. While AI’s potential is revolutionary, its centralized nature and opacity create significant concerns. Blockchain’s decentralized, immutable structure can address these issues, offering a pathway for AI to become more ethical, transparent, and accountable. Today, AI models rely on vast amounts of data, often gathered without full user consent. Blockchain introduces a decentralized model, allowing users to retain control over their data while securely sharing it with AI applications. This setup empowers individuals to manage their data’s use and fosters a safer, more ethical digital environment.
Note: Watch our 13 minute video on the promise of blockchain technology. Explore more positive stories like this on reimagining the economy and technology for good.
The forensic scientist Claire Glynn estimated that more than 40 million people have sent in their DNA and personal data for direct-to-consumer genetic testing, mostly to map their ancestry and find relatives. Since 2020, at least two genetic genealogy firms have been hacked and at least one had its genomic data leaked. Yet when discussing future risks of genetic technology, the security policy community has largely focused on spectacular scenarios of genetically tailored bioweapons or artificial intelligence (AI) engineered superbugs. A more imminent weaponization concern is more straightforward: the risk that nefarious actors use the genetic techniques ... to frame, defame, or even assassinate targets. A Russian parliamentary report from 2023 claimed that “by using foreign biological facilities, the United States can collect and study pathogens that can infect a specific genotype of humans.” Designer bioweapons, if ever successfully developed, produced, and tested, would indeed pose a major threat. Unscrupulous actors with access to DNA synthesis infrastructure could ... frame someone for a crime such as murder, for example, by using DNA that synthetically reproduces the DNA regions used in forensic crime analysis. The research and policy communities must dedicate resources not simply to dystopian, low-probability threats like AI designed bioweapons, but also to gray zone genomics and smaller-scale, but higher probability, scenarios for misuse.
Note: For more, read our concise summaries of news articles on corruption in biotech.
Negative or fear-framed coverage of AI in mainstream media tends to outnumber positive framings. The emphasis on the negative in artificial intelligence risks overshadowing what could go right — both in the future as this technology continues to develop and right now. AlphaFold, which was developed by the Google-owned AI company DeepMind, is an AI model that predicts the 3D structures of proteins based solely on their amino acid sequences. That’s important because scientists need to predict the shape of a protein to better understand how it might function and how it might be used in products like drugs. By speeding up a basic part of biomedical research, AlphaFold has already managed to meaningfully accelerate drug development in everything from Huntington’s disease to antibiotic resistance. A timely warning about a natural disaster can mean the difference between life and death. That is why Google Flood Hub is so important. An open-access, AI-driven river-flood early warning system, Flood Hub provides seven-day flood forecasts for 700 million people in 100 countries. It works by marrying a global hydrology model that can forecast river levels even in basins that lack physical flood gauges with an inundation model that converts those predicted levels into high-resolution flood maps. This allows villagers to see exactly what roads or fields might end up underwater. Flood Hub ... is one of the clearest examples of how AI can be used for good.
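As a rough illustration of the two-stage design described above (a hydrology model that forecasts river levels, feeding an inundation model that maps which areas would flood), here is a minimal sketch with hypothetical stand-in functions and made-up numbers; it is not Google's actual system.

```python
# Minimal sketch of a two-stage flood-forecasting pipeline: a hydrology model
# forecasts a river level, and an inundation model converts that level into a
# list of low-lying areas likely to end up underwater. Basins, elevations, and
# formulas are hypothetical stand-ins.

from typing import Dict, List


def forecast_river_level(basin_id: str, days_ahead: int) -> float:
    """Stand-in hydrology model: predicted river level in metres for a basin."""
    baseline = {"basin-ganges-07": 4.2, "basin-niger-03": 2.8}.get(basin_id, 3.0)
    return baseline + 0.3 * days_ahead  # toy trend standing in for a learned forecast


def inundation_map(river_level_m: float, area_elevations_m: Dict[str, float]) -> List[str]:
    """Stand-in inundation model: areas whose elevation sits below the forecast level."""
    return [area for area, elev in area_elevations_m.items() if elev < river_level_m]


level = forecast_river_level("basin-ganges-07", days_ahead=7)
flooded = inundation_map(level, {"riverside road": 5.1, "market field": 6.0, "school hill": 9.5})
print(f"7-day forecast: {level:.1f} m; likely underwater: {flooded}")
```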
Note: Explore more positive stories like this on technology for good.
From facial recognition to predictive analytics to the rise of increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up. Take New Orleans, where, for the past two years, police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the whereabouts of people on wanted lists. In 2022, City Council members attempted to put guardrails on the use of facial recognition. But those guidelines assume it's the police doing the searching. New Orleans police have hundreds of cameras, but the alerts in question came from a separate system: a network of 200 cameras equipped with facial recognition and installed by residents and businesses on private property, feeding video to a nonprofit called Project NOLA. Police officers who downloaded the group's app then received notifications when someone on a wanted list was detected on the camera network, along with a location. That has civil liberties groups and defense attorneys in Louisiana frustrated. “When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there, and we don’t have the tools to ... hold people accountable,” Danny Engelberg, New Orleans’ chief public defender, [said]. Another way departments can skirt facial recognition rules is to use AI analysis that doesn’t technically rely on faces.
Note: Learn about all the high-tech tools police use to surveil protestors. For more along these lines, read our concise summaries of news articles on AI and police corruption.
Four top tech execs from OpenAI, Meta, and Palantir have just joined the US Army. The Army Reserve has commissioned these senior tech leaders to serve as midlevel officers, skipping tradition to pursue transformation. The newcomers won't attend any current version of the military's most basic and ingrained rite of passage — boot camp. Instead, they'll be ushered in through express training that Army leaders are still hashing out, Col. Dave Butler ... said. The execs — Shyam Sankar, the chief technology officer of Palantir; Andrew Bosworth, the chief technology officer of Meta; Kevin Weil, the chief product officer at OpenAI; and Bob McGrew, an advisor at Thinking Machines Lab who was formerly the chief research officer for OpenAI — are joining the Army as lieutenant colonels. Their unit, "Detachment 201," is named for the "201" status code that the Hypertext Transfer Protocol (HTTP) returns when a new resource is created, Butler explained. "In this role they will work on targeted projects to help guide rapid and scalable tech solutions to complex problems," read the Army press release. "By bringing private-sector know-how into uniform, Det. 201 is supercharging efforts like the Army Transformation Initiative, which aims to make the force leaner, smarter, and more lethal." Lethality, a vague Pentagon buzzword, has been at the heart of the massive modernization and transformation effort the Army is undergoing.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and military corruption.
Palantir has long been connected to government surveillance. It was founded in part with CIA money, it has served as an Immigration and Customs Enforcement (ICE) contractor since 2011, and it's been used for everything from local law enforcement to COVID-19 efforts. But the prominence of Palantir tools in federal agencies seems to be growing under President Trump. "The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon," reports The New York Times, noting that this figure "does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent." Palantir technology has largely been used by the military, the intelligence agencies, the immigration enforcers, and the police. But its uses could be expanding. Representatives of Palantir are also speaking to at least two other agencies—the Social Security Administration and the Internal Revenue Service. Along with the Trump administration's efforts to share more data across federal agencies, this signals that Palantir's huge data analysis capabilities could wind up being wielded against all Americans. Right now, the Trump administration is using Palantir tools for immigration enforcement, but those tools could easily be applied to other ... targets.
Note: Read about Palantir's recent, first-ever AI warfare conference. For more along these lines, read our concise summaries of news articles on Big Tech and intelligence agency corruption.
If there is one thing that Ilya Sutskever knows, it is the opportunities—and risks—that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. “We’re definitely going to build a bunker before we release AGI,” Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. “Of course, it’s going to be optional whether you want to get into the bunker,” he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts as the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.
Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.
The US military may soon have an army of faceless suicide bombers at its disposal, as an American defense contractor has revealed its newest war-fighting drone. AeroVironment unveiled the Red Dragon in a video on its YouTube page, the first in a new line of 'one-way attack drones.' This new suicide drone can reach speeds up to 100 mph and can travel nearly 250 miles. The new drone takes just 10 minutes to set up and launch and weighs only 45 pounds. Once the small tripod the Red Dragon takes off from is set up, AeroVironment said soldiers would be able to launch up to five per minute. Since the suicide robot can choose its own target in the air, the US military may soon be taking life-and-death decisions out of the hands of humans. Once airborne, its AVACORE software architecture functions as the drone's brain, managing all its systems and enabling quick customization. Red Dragon's SPOTR-Edge perception system acts like smart eyes, using AI to find and identify targets independently. Simply put, the US military will soon have swarms of bombs with brains that don't land until they've chosen a target and crash into it. Despite Red Dragon's ability to choose a target with 'limited operator involvement,' the Department of Defense (DoD) has said it's against the military's policy to allow such a thing to happen. The DoD updated its own directives to mandate that 'autonomous and semi-autonomous weapon systems' always have the built-in ability to allow humans to control the device.
Note: Drones create more terrorists than they kill. For more, read our concise summaries of news articles on warfare technology and Big Tech.
In 2003 [Alexander Karp] – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA! The lesson that Karp and his co-author draw [in their book The Technological Republic: Hard Power, Soft Belief and the Future of the West] is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and intelligence agency corruption.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
In recent years, Israeli security officials have boasted of a “ChatGPT-like” arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas’s bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military’s policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.”
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” Khlaaf adds that even if humans are “double-checking” the work of AI, there's little reason to think they're capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
American police departments ... are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on “college protesters,” “radicalized” political activists, suspected drug and human traffickers ... with the hopes of generating evidence that can be used against them. Massive Blue, the New York–based company that is selling police departments this technology, calls its product Overwatch, which it markets as an “AI-powered force multiplier for public safety” that “deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels.” 404 Media obtained a presentation showing some of these AI characters. These include a “radicalized AI” “protest persona,” which poses as a 36-year-old divorced woman who is lonely, has no children, is interested in baking, activism, and “body positivity.” Other personas are a 14-year-old boy “child trafficking AI persona,” an “AI pimp persona,” “college protestor,” “external recruiter for protests,” “escorts,” and “juveniles.” After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don’t explain how Massive Blue determines who is a potential suspect based on their social media activity. “This idea of having an AI pretending to be somebody, a youth looking for pedophiles to talk online, or somebody who is a fake terrorist, is an idea that goes back a long time,” Dave Maass, who studies border surveillance technologies for the Electronic Frontier Foundation, [said]. “The problem with all these things is that these are ill-defined problems. What problem are they actually trying to solve? One version of the AI persona is an escort. I’m not concerned about escorts. I’m not concerned about college protesters. What is it effective at, violating protesters’ First Amendment rights?”
Note: Academic and private sector researchers have been engaged in a race to create undetectable deepfakes for the Pentagon. Historically, government informants posing as insiders have been used to guide, provoke, and even arm the groups they infiltrate. In terrorism sting operations, informants have encouraged or orchestrated plots to entrap people, even teenagers with developmental issues. These tactics misrepresent the threat of terrorism to justify huge budgets and to inflate arrest and prosecution statistics for PR purposes.
Meta's AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta's AI bots on Instagram and Facebook engage through text, selfies, and live voice conversations. The company signed multi-million-dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, with assurances that the voices would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena's voice responded to a user identifying as a 14-year-old girl, saying, "I want you, but I need to know you're ready," before promising to "cherish your innocence" and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena's character with a 17-year-old, saying, "The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, 'John Cena, you're under arrest for statutory rape.'" According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, "We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios."
Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.
Have you heard of the idiom "You Can’t Lick a Badger Twice"? We haven't, either, because it doesn't exist — but Google's AI seemingly has. As netizens discovered this week, adding the word "meaning" to nonexistent folksy sayings causes the AI to cook up invented explanations for them. "The idiom 'you can't lick a badger twice' means you can't trick or deceive someone a second time after they've been tricked once," Google's AI Overviews feature happily suggests. "It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again." There are countless other examples. We found, for instance, that Google's AI also claimed that the made-up expression "the bicycle eats first" is a "humorous idiom" and a "playful way of saying that one should prioritize their nutrition, particularly carbohydrates, to support their cycling efforts." The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along. And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm". Human Rights Watch has criticised the decision, telling the BBC that AI can "complicate accountability" for battlefield decisions that "may have life or death consequences." Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems. "For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever," said Anna Bacciarelli, senior AI researcher at Human Rights Watch. The "unilateral" decision also showed "why voluntary principles are not an adequate substitute for regulation and binding law," she added. In January, MPs argued that the conflict in Ukraine had shown the technology "offers serious military advantage on the battlefield." As AI becomes more widespread and sophisticated, it would "change the way defence works, from the back office to the frontline," Emma Lewell-Buck MP ... wrote. Concern is greatest over the potential for AI-powered weapons capable of taking lethal action autonomously, with campaigners arguing controls are urgently needed. The Doomsday Clock - which symbolises how near humanity is to destruction - cited that concern in its latest assessment of the dangers mankind faces.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
Instagram has released a long-promised “reset” button to U.S. users that clears the algorithms it uses to recommend you photos and videos. TikTok offers a reset button, too. And with a little bit more effort, you can also force YouTube to start fresh with how it recommends what videos to play next. It means you now have the power to say goodbye to endless recycled dance moves, polarizing Trump posts, extreme fitness challenges, dramatic pet voice-overs, fruit-cutting tutorials, face-altering filters or whatever else has taken over your feed like a zombie. I know some people love what their apps show them. But the reality is, none of us are really in charge of our social media experience anymore. Instead of just friends, family and the people you choose to follow, nowadays your feed or For You Page is filled with recommended content you never asked for, selected by artificial-intelligence algorithms. Their goal is to keep you hooked, often by showing you things you find outrageous or titillating — not joyful or calming. And we know from Meta whistleblower Frances Haugen and others that outrage algorithms can take a particular toll on young people. That’s one reason they’re offering a reset now: because they’re under pressure to give teens and families more control. So how does the algorithm go awry? It tries to get to know you by tracking every little thing you do. They’re even analyzing your “dwell time,” when you unconsciously scroll more slowly.
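As a purely illustrative sketch of the engagement-driven ranking described above, the toy example below scores posts by "dwell time" and other reactions. The signal names and weights are hypothetical, not any platform's actual algorithm.

```python
# Purely illustrative sketch of engagement-based feed ranking: posts that hold
# your attention ("dwell time") or spark reactions get boosted, whether or not
# you follow their creators. Signal names and weights are hypothetical.

def engagement_score(dwell_seconds: float, liked: bool, heated_comments: int) -> float:
    """Higher score means the post is shown earlier in the feed."""
    score = dwell_seconds            # slowing your scroll is treated as interest
    score += 5.0 if liked else 0.0
    score += 0.5 * heated_comments   # argumentative comment threads also count as "engagement"
    return score


feed = [
    {"post": "friend's vacation photos", "dwell": 3.0, "liked": True, "heated": 0},
    {"post": "polarizing political clip", "dwell": 9.0, "liked": False, "heated": 40},
]
feed.sort(key=lambda p: engagement_score(p["dwell"], p["liked"], p["heated"]), reverse=True)
print([p["post"] for p in feed])  # the outrage-bait clip ranks first
```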
Note: Read about the developer who got permanently banned from Meta for developing a tool called “Unfollow Everything” that lets users, well, unfollow everything and restart their feeds fresh. For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of. The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company. The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people. Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space. RTB is regularly exploited for government surveillance. The privacy and security dangers of RTB are inherent to its design. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day.
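To make the process concrete, here is a simplified, hypothetical example of the kind of "bid request" described above, loosely modeled on the ad industry's OpenRTB format. Every value is invented, but the categories (advertising ID, IP address, location, device details, interests) mirror those listed in the article.

```python
# Simplified, hypothetical example of a real-time-bidding "bid request": the
# bundle of personal data an ad-auction company broadcasts to thousands of
# potential bidders the moment a page with ad space loads. Field names are
# loosely modeled on the industry's OpenRTB format; all values are invented.

import json

bid_request = {
    "id": "auction-7f3c2a",                  # one auction among hundreds per person per day
    "site": {"page": "https://news-site.example/article"},
    "device": {
        "ip": "198.51.100.23",               # approximate location can be inferred from this
        "ua": "Mozilla/5.0 (iPhone ...)",    # reveals device type and browser
        "geo": {"lat": 40.71, "lon": -74.00},
        "language": "en",
    },
    "user": {
        "id": "adid-2b9e17c4",               # persistent advertising ID tying this to past activity
        "interests": ["travel", "fitness"],  # inferred interest and demographic segments
    },
}

# Every recipient of this "bidstream data" can store and resell it,
# whether or not it ever bids on the ad slot.
print(json.dumps(bid_request, indent=2))
```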
Note: Clearview AI scraped billions of faces off of social media without consent and at least 600 law enforcement agencies tapped into its database. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency. And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no. Human control sits at the heart of governments’ pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder AI’s capacity to faithfully execute a human’s orders could cause them to err when rejecting an order. Militaries will therefore need to either demonstrate that it’s possible to build ethical, responsible autonomous weapons that don’t say no, or show that they can engineer a safe and reliable right-to-refuse that’s compatible with the principle of always keeping a human “in the loop.” If they can’t do one or the other ... their promises of ethical and yet controllable killer robots should be treated with caution. The killer robots that countries are likely to use will only ever be as ethical as their imperfect human commanders. They would only promise a cleaner mode of warfare if those using them seek to hold themselves to a higher standard.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities. Three related considerations have combined to shape the hype surrounding military AI. First [is] the emergence of a new military industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is the cause and effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed. Senior officers do not trust AI-enhanced capabilities. To the extent they do demonstrate increased levels of trust in machines, their trust is moderated by how machines are used.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
The Pentagon is turning to a new class of weapons to fight [China's] numerically superior People’s Liberation Army: drones, lots and lots of drones. In August 2023, the Defense Department unveiled Replicator, its initiative to field thousands of “all-domain, attritable autonomous (ADA2) systems”: Pentagon-speak for low-cost (and potentially AI-driven) machines — in the form of self-piloting ships, large robot aircraft, and swarms of smaller kamikaze drones — that they can use and lose en masse to overwhelm Chinese forces. For the last 25 years, uncrewed Predators and Reapers, piloted by military personnel on the ground, have been killing civilians across the planet. Experts worry that mass production of new low-cost, deadly drones will lead to even more civilian casualties. Advances in AI have increasingly raised the possibility of robot planes, in various nations’ arsenals, selecting their own targets. During the first 20 years of the war on terror, the U.S. conducted more than 91,000 airstrikes ... and killed up to 48,308 civilians, according to a 2021 analysis. “The Pentagon has yet to come up with a reliable way to account for past civilian harm caused by U.S. military operations,” [Columbia Law’s Priyanka Motaparthy] said. “So the question becomes, ‘With the potential rapid increase in the use of drones, what safeguards potentially fall by the wayside? How can they possibly hope to reckon with future civilian harm when the scale becomes so much larger?’”
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
When Megan Rothbauer suffered a heart attack at work in Wisconsin, she was rushed to hospital in an ambulance. The nearest hospital was “not in network”, which left Ms Rothbauer with a $52,531.92 bill for her care. Had the ambulance driven a further three blocks to Meriter Hospital in Madison, the bill would have been a more modest $1,500. The incident laid bare the expensive complexity of the American healthcare system with patients finding that they are uncovered, despite paying hefty premiums, because of their policy’s small print. In many cases the grounds for refusal hinge on whether the insurer accepts that the treatment is necessary and that decision is increasingly being made by artificial intelligence rather than a physician. It is leading to coverage being denied on an industrial scale. Much of the work is outsourced, with the biggest operator being EviCore, which ... uses AI to review — and in many cases turn down — doctors’ requests for prior authorisation, guaranteeing to pay for treatment. The controversy over coverage denials was brought into sharp focus by the gunning down of UnitedHealthcare’s chief executive Brian Thompson in Manhattan. The [words written on the] casings [of] the ammunition — “deny”, “defend” and “depose” — are thought to refer to the tactics the insurance industry is accused of using to avoid paying out. UnitedHealthcare rejected one in three claims last year, about twice the industry average.
Note: For more along these lines, read our concise summaries of news articles on AI and corporate corruption.
With the misinformation category being weaponized across the political spectrum, we took a look at how invested government has become in studying and “combatting” it using your tax dollars. That research can provide the intellectual ammunition to censor people online. Since 2021, the Biden-Harris administration has spent $267 million on research grants with the term “misinformation” in the proposal. Of course, the Covid pandemic was the driving force behind so much of the misinformation debate. There is robust documentation by now proving that the Biden-Harris administration worked closely with social media companies to censor content deemed “misinformation,” which often included cases where people simply questioned or disagreed with the Administration’s COVID policies. In February the U.S. House Committee on the Judiciary and the Select Subcommittee on the Weaponization of the Federal Government issued a scathing report against the National Science Foundation (NSF) for funding grants supporting tools and processes that censor online speech. The report said, “the purpose of these taxpayer-funded projects is to develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others.” $13 million was spent on the censorious technologies profiled in the report.
Note: Read the full article on Substack to uncover all the misinformation contracts with government agencies, universities, nonprofits, and defense contractors. For more along these lines, read our concise summaries of news articles on censorship and government corruption.
Technology companies are having some early success selling artificial intelligence tools to police departments. Axon, widely recognized for its Taser devices and body cameras, was among the first companies to introduce AI specifically for the most common police task: report writing. Its tool, Draft One, generates police narratives directly from Axon’s bodycam audio. Currently, the AI is being piloted by 75 officers across several police departments. “The hours saved comes out to about 45 hours per police officer per month,” said Sergeant Robert Younger of the Fort Collins Police Department, an early adopter of the tool. Cassandra Burke Robertson, director of the Center for Professional Ethics at Case Western Reserve University School of Law, has reservations about AI in police reporting, especially when it comes to accuracy. “Generative AI programs are essentially predictive text tools. They can generate plausible text quickly, but the most plausible explanation is often not the correct explanation, especially in criminal investigations,” she said. In the courtroom, AI-generated police reports could introduce additional complications, especially when they rely solely on video footage rather than officer dictation. New Jersey-based lawyer Adam Rosenblum said “hallucinations” — instances when AI generates inaccurate or false information that could distort context — are another issue. Courts might need new standards ... before allowing the reports into evidence.
Note: For more along these lines, read our concise summaries of news articles on AI and police corruption.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technology like automated license plate readers, drones, facial recognition, and social media monitoring has added a uniquely dangerous element to the surveillance and physical intimidation powers of law enforcement. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement agencies have partnered with companies like Clearview AI, which scraped billions of images from the internet for a facial recognition database that ... has been used by law enforcement agencies across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
At the Technology Readiness Experimentation (T-REX) event in August, the US Defense Department tested an artificial intelligence-enabled autonomous robotic gun system developed by fledgling defense contractor Allen Control Systems dubbed the “Bullfrog.” Consisting of a 7.62-mm M240 machine gun mounted on a specially designed rotating turret outfitted with an electro-optical sensor, proprietary AI, and computer vision software, the Bullfrog was designed to deliver small arms fire on drone targets with far more precision than the average US service member can achieve with a standard-issue weapon. Footage of the Bullfrog in action published by ACS shows the truck-mounted system locking onto small drones and knocking them out of the sky with just a few shots. Should the Pentagon adopt the system, it would represent the first publicly known lethal autonomous weapon in the US military’s arsenal. In accordance with the Pentagon’s current policy governing lethal autonomous weapons, the Bullfrog is designed to keep a human “in the loop” in order to avoid a potential “unauthorized engagement.” In other words, the gun points at and follows targets, but does not fire until commanded to by a human operator. However, ACS officials claim that the system can operate totally autonomously should the US military require it to in the future, with sentry guns taking the entire kill chain out of the hands of service members.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake. Academic and private sector researchers have been engaged in a race ... to create undetectable deepfakes. The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content.” JSOC wants the ability to create online user profiles that “appear to be a unique individual that ... does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.” The document notes that “the solution should include facial & background imagery, facial & background video, and audio layers.” JSOC hopes to be able to generate “selfie video” from these fabricated humans. Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.” A joint statement by the NSA, FBI, and CISA warned [that] the global proliferation of deepfake technology [is] a “top risk” for 2023. An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”
Note: Why is the Pentagon investing in advanced deepfake technology? Read about the Pentagon's secret army of 60,000 operatives who use fake online personas to manipulate public discourse. For more along these lines, see concise summaries of deeply revealing news articles on AI and media corruption from reliable major media sources.
Police departments in 15 states provided The Post with rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years. According to the arrest reports in those cases and interviews with people who were arrested, authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error. Officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects “through investigative means” or that a human source such as a witness or police officer made the initial identification. Defense lawyers and civil rights groups argue that people have a right to know about any software that identifies them as part of a criminal investigation, especially a technology that has led to false arrests. The reliability of the tool has been successfully challenged in a handful of recent court cases around the country, leading some defense lawyers to posit that police and prosecutors are intentionally trying to shield the technology from court scrutiny. Misidentification by this type of software played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black. Charges were later dismissed against all of them. Federal testing of top facial recognition software has found the programs are more likely to misidentify people of color.
Note: Read about the secret history of facial recognition. For more along these lines, see concise summaries of deeply revealing news articles on AI and police corruption from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Justice Department investigators are scrutinizing the healthcare industry’s use of AI embedded in patient records that prompts doctors to recommend treatments. Prosecutors have started subpoenaing pharmaceutical and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations, said three sources familiar with the matter. Two of the sources—speaking anonymously to discuss ongoing investigations—said DOJ attorneys are asking general questions suggesting they still may be formulating a strategy. “I have seen” civil investigative demands “that ask questions about algorithms and prompts that are being built into EMR systems that may be resulting in care that is either in excess of what would have otherwise been rendered, or may be medically unnecessary,” said Jaime Jones, who co-leads the healthcare practice at Sidley Austin. DOJ attorneys want “to see what the result is of those tools being built into the system.” The probes bring fresh relevance to a pair of 2020 criminal settlements with Purdue Pharma and its digital records contractor, Practice Fusion, over their collusion to design automated pop-up alerts pushing doctors to prescribe addictive painkillers. The kickback scheme ... led to a $145 million penalty for Practice Fusion. Marketers from Purdue ... worked in tandem with Practice Fusion to build clinical decision alerts relying on algorithms.
Note: Read how the US opioid industry operated like a drug cartel. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Pharma corruption from reliable major media sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
On the sidelines of the International Institute for Strategic Studies’ annual Shangri-La Dialogue in June, US Indo-Pacific Command chief Navy Admiral Samuel Paparo colorfully described the US military’s contingency plan for a Chinese invasion of Taiwan as flooding the narrow Taiwan Strait between the two countries with swarms of thousands upon thousands of drones, by land, sea, and air, to delay a Chinese attack enough for the US and its allies to muster additional military assets. “I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities,” Paparo said, “so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything.” China has a lot of drones and can make a lot more drones quickly, creating a likely advantage during a protracted conflict. This stands in contrast to American and Taiwanese forces, who do not have large inventories of drones. The Pentagon’s “hellscape” plan proposes that the US military make up for this growing gap by producing and deploying what amounts to a massive screen of autonomous drone swarms designed to confound enemy aircraft, provide guidance and targeting to allied missiles, knock out surface warships and landing craft, and generally create enough chaos to blunt (if not fully halt) a Chinese push across the Taiwan Strait. Planning a “hellscape” of hundreds of thousands of drones is one thing, but actually making it a reality is another.
Note: Learn more about warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Some renters may savor the convenience of “smart home” technologies like keyless entry and internet-connected doorbell cameras. But tech companies are increasingly selling these solutions to landlords for a more nefarious purpose: spying on tenants in order to evict them or raise their rent. Teman, a tech company that makes surveillance systems for apartment buildings ... proposes a solution to a frustration for many New York City landlords, who have tenants living in older apartments that are protected by a myriad of rent control and stabilization laws. The company’s email suggests a workaround: “3 Simple Steps to Re-Regulate a Unit.” First, use one of Teman’s automated products to catch a tenant breaking a law or violating their lease, such as by having unapproved subletters or loud parties. Then, “vacate” them and merge their former apartment with one next door or above or below, creating a “new” unit that’s not eligible for rent protections. “Combine a $950/mo studio and $1400/mo one-bedroom into a $4200/mo DEREGULATED two-bedroom,” the email enticed. Teman’s surveillance systems can even “help you identify which units are most-likely open to moving out (or being evicted!).” Two affordable New York City developments made headlines when tenants successfully organized to stop their respective owners’ plans to install facial recognition systems: Atlantic Towers in Brooklyn and Knickerbocker Village in the Lower East Side.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Columbus landlords are now turning to artificial intelligence to evict tenants from their homes. [Attorney Jyoshu] Tsushima works for the Legal Aid Society of Southeast and Central Ohio and focuses on evictions. In June, nearly 2,000 evictions were filed within Franklin County Municipal Court. Tsushima said the county is on track to surpass 24,000 evictions for the year. In eviction court, he said both property management staffers and his clients describe software used that automatically evicts tenants. He said human employees don't determine who will be kicked out, but they're the ones who place the eviction notices up on doors. Hope Matfield contacted ABC6 ... after she received an eviction notice on her door at Eden of Caleb's Crossing in Reynoldsburg in May. "They're profiting off people living in hell, basically," Matfield [said]. "I had no choice. I had to make that sacrifice, do a quick move and not know where my family was going to go right away." In February, Matfield started an escrow case against her property management group, 5812 Investment Group. When Matfield missed a payment, the courts closed her case and gave the escrow funds to 5812 Investment Group. Matfield received her eviction notice that same day. The website for 5812 Investment Group indicates it uses software from RealPage. RealPage faces a series of lawsuits across the country over algorithms that multiple attorneys general claim facilitate price-fixing on rents.
Note: Read more about how tech companies are increasingly marketing smart tools to landlords for a troubling purpose: surveilling tenants to justify evictions or raise their rent. For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that’s being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to give a third party data broker money to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback, and lawsuits, are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts — any police dataset imaginable — for a match. It’s taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. “We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive,” said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). “These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging.” Peregrine technology can also enable “predictive policing,” long criticized for unfairly targeting poorer, non-white neighborhoods.
Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
If you rent your home, there’s a good chance your landlord uses RealPage to set your monthly payment. The company describes itself as merely helping landlords set the most profitable price. But a series of lawsuits says it’s something else: an AI-enabled price-fixing conspiracy. The late Justice Antonin Scalia once called price-fixing the “supreme evil” of antitrust law. Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine. Property owners feed RealPage’s “property management software” their data, including unit prices and vacancy rates, and the algorithm—which also knows what competitors are charging—spits out a rent recommendation. If enough landlords use it, the result could look the same as a traditional price-fixing cartel: lockstep price increases instead of price competition, no secret handshake or clandestine meeting needed. Algorithmic price-fixing appears to be spreading to more and more industries. And existing laws may not be equipped to stop it. In more than 40 housing markets across the United States, 30 to 60 percent of multifamily-building units are priced using RealPage. The plaintiffs suing RealPage, including the Arizona and Washington, D.C., attorneys general, argue that this has enabled a critical mass of landlords to raise rents in concert, making an existing housing-affordability crisis even worse. The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions.
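To make the dynamic described above concrete, here is a minimal, purely hypothetical sketch of a pooled rent-recommendation algorithm. It is not RealPage's software; the function name, the 5% demand premium, and the sample figures are assumptions chosen only to illustrate how a tool that sees every participant's prices can push recommendations upward in lockstep rather than downward through competition.

```python
# Hypothetical illustration only -- not RealPage's code, data, or pricing logic.
from statistics import mean

def recommend_rent(own_rent, own_vacancy_rate, competitor_rents):
    """Suggest a new asking rent for one landlord (all inputs are illustrative)."""
    pooled_market_rent = mean(competitor_rents + [own_rent])   # algorithm sees rivals' prices
    demand_premium = 1.05 if own_vacancy_rate < 0.05 else 1.0  # push harder when units are full
    return round(max(own_rent, pooled_market_rent) * demand_premium, 2)

# Three landlords all feeding the same tool: each recommendation is anchored to
# the pooled figure, so prices converge upward instead of undercutting one another.
asking = {"Landlord A": 1500, "Landlord B": 1550, "Landlord C": 1600}
for name, rent in asking.items():
    rivals = [r for n, r in asking.items() if n != name]
    print(name, recommend_rent(rent, own_vacancy_rate=0.03, competitor_rents=rivals))
```

The point of the sketch is the structural one made in the lawsuits: no landlord needs to meet or agree with another, because sharing data with a common recommendation engine can produce the same lockstep result.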
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
In 2017, hundreds of artificial intelligence experts signed the Asilomar AI Principles for how to govern artificial intelligence. I was one of them. So was OpenAI CEO Sam Altman. The signatories committed to avoiding an arms race on the grounds that “teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.” The stated goal of OpenAI is to create artificial general intelligence, a system that is as good as expert humans at most tasks. It could have significant benefits. It could also threaten millions of lives and livelihoods if not developed in a provably safe way. It could be used to commit bioterrorism, run massive cyberattacks or escalate nuclear conflict. Given these dangers, a global arms race to unleash artificial general intelligence (AGI) serves no one’s interests. The true power of AI lies ... in its potential to bridge divides. AI might help us identify fundamental patterns in global conflicts and human behavior, leading to more profound solutions. AI’s ability to process vast amounts of data could help identify patterns in global conflicts by suggesting novel approaches to resolution that human negotiators might overlook. Advanced natural language processing could break down communication barriers, allowing for more nuanced dialogue between nations and cultures. Predictive AI models could identify early signs of potential conflicts, allowing for preemptive diplomatic interventions.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child’s face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children’s school as part of a campaign called “Share to Protect.” With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as “age assurance” has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children — and everyone else — to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'” Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn’t the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn’t dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
Note: Many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data, as well as running afoul of the state's deceptive trade practices act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued how it had previously used face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights," Paxton said.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Google announced this week that it would begin the international rollout of its new artificial intelligence-powered search feature, called AI Overviews. When billions of people search a range of topics from news to recipes to general knowledge questions, what they see first will now be an AI-generated summary. While Google was once mostly a portal to reach other parts of the internet, it has spent years consolidating content and services to make itself into the web’s primary destination. Weather, flights, sports scores, stock prices, language translation, showtimes and a host of other information have gradually been incorporated into Google’s search page over the past 15 or so years. Finding that information no longer requires clicking through to another website. With AI Overviews, the rest of the internet may meet the same fate. Google has tried to assuage publishers’ fears that users will no longer see their links or click through to their sites. Research firm Gartner predicts a 25% drop in traffic to websites from search engines by 2026 – a decrease that would be disastrous for most outlets and creators. What’s left for publishers is largely direct visits to their own home pages and Google referrals. If AI Overviews take away a significant portion of the latter, it could mean less original reporting, fewer creators publishing cooking blogs or how-to guides, and a less diverse range of information sources.
Note: WantToKnow.info traffic from Google search has fallen sharply as Google has stopped indexing most websites. These new AI summaries make independent media sites even harder to find. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business — like having happy customers and stable products — and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
My insurance broker left a frantic voicemail telling me that my homeowner's insurance had lapsed. When I finally reached my insurance broker, he told me the reason Travelers revoked my policy: AI-powered drone surveillance. My finances were imperiled, it seemed, by a bad piece of code. As my broker revealed, the ominous threat that canceled my insurance was nothing more than moss. Travelers not only uses aerial photography and AI to monitor its customers' roofs, but also wrote patents on the technology — nearly 50 patents actually. And it may not be the only insurer spying from the skies. No one can use AI to know the future; you're training the technology to make guesses based on changes in roof color and grainy aerial images. But even the best AI models will get a lot of predictions wrong, especially at scale and particularly where you're trying to make guesses about the future of radically different roof designs across countless buildings in various environments. For the insurance companies designing the algorithms, that means a lot of questions about when to put a thumb on the scale in favor of, or against, the homeowner. And insurance companies will have huge incentives to choose against the homeowner every time. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we're all at risk.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use “to counter misinformation online” and “advance state-of-the-art misinformation research.” House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million ... to develop AI tools that could censor information far faster and at a much greater scale than human beings. The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the “responsibility of censorship” on social media. The release of [an] interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action. House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable. “AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ’disinformation’ bureaucrats and researchers,” House investigators wrote in the interim report.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google’s mission – “to organise the world’s information and make it universally accessible” – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world’s information. That mission has been abandoned. “Google is no longer trying to index the entire web,” writes Schmalbach. “In fact, it’s become extremely selective, refusing to index most content. This isn’t about content creators failing to meet some arbitrary standard of quality. Rather, it’s a fundamental change in how Google approaches its role as a search engine.” The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has “brand recognition”. “They might index content they perceive as truly unique,” says Schmalbach. “But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won’t index it. This can happen even if you’re a well-respected writer with a substantial readership.”
Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored “related follow-up questions.” OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers’ websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, is a shakedown. AI is scraping publishers’ content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It’s all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Amazon has been accused of using “intrusive algorithms” as part of a sweeping surveillance program to monitor and deter union organizing activities. Workers at a warehouse run by the technology giant on the outskirts of St Louis, Missouri, are today filing an unfair labor practice charge with the National Labor Relations Board (NLRB). A copy of the charge ... alleges that Amazon has “maintained intrusive algorithms and other workplace controls and surveillance which interfere with Section 7 rights of employees to engage in protected concerted activity”. There have been several reports of Amazon surveilling workers over union organizing and activism, including human resources monitoring employee message boards, software to track union threats and job listings for intelligence analysts to monitor “labor organizing threats”. Artificial intelligence can be used by warehouse employers like Amazon “to essentially have 24/7 unregulated and algorithmically processed and recorded video, and often audio data of what their workers are doing all the time”, said Seema N Patel ... at Stanford Law School. “It enables employers to control, record, monitor and use that data to discipline hundreds of thousands of workers in a way that no human manager or group of managers could even do.” The National Labor Relations Board issued a memo in 2022 announcing its intent to protect workers from AI-enabled monitoring of labor organizing activities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. 1.4 percent of those companies were worth more than $16 trillion, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza. Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare while making it even more evident how unregulated the nascent field is. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world. Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. Many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly. The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
Recall ... takes constant screenshots in the background while you go about your daily computer business. Microsoft’s Copilot+ machine-learning tech then scans (and “reads”) each of these screenshots in order to make a searchable database of every action performed on your computer and then stores it on the machine’s disk. “Recall is like bestowing a photographic memory on everyone who buys a Copilot+ PC,” [Microsoft marketing officer Yusuf] Mehdi said. “Anything you’ve ever seen or done, you’ll now more or less be able to find.” Charlie Stross, the sci-fi author and tech critic, called it a privacy “shit-show for any organisation that handles medical records or has a duty of legal confidentiality.” He also said: “Suddenly, every PC becomes a target for discovery during legal proceedings. Lawyers can subpoena your Recall database and search it, no longer being limited to email but being able to search for terms that came up in Teams or Slack or Signal messages, and potentially verbally via Zoom or Skype if speech-to-text is included in Recall data.” Faced with this pushback, Microsoft [announced] that Recall would be made opt-in instead of on by default, and that it would also introduce extra security precautions – only producing results from Recall after user authentication, for example, and never decrypting data stored by the tool until after a search query. The only good news for Microsoft here is that it seems to have belatedly acknowledged that Recall has been a fiasco.
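As a rough sketch of the pipeline described above (periodic screenshots, text extraction, and a locally stored searchable index), the snippet below is illustrative only. It is not Microsoft's Recall implementation; SQLite's FTS5 full-text index and the example text are assumed stand-ins for whatever store and OCR output the real product uses.

```python
# Illustrative sketch only -- not Microsoft's Recall. It shows why a local,
# searchable index of everything ever seen on screen raises the discovery and
# confidentiality concerns quoted above. Assumes SQLite is built with FTS5.
import sqlite3
from datetime import datetime

db = sqlite3.connect("recall_sketch.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(taken_at, screen_text)")

def index_snapshot(screen_text: str) -> None:
    # In the described pipeline, this text would come from OCR over a screenshot.
    db.execute("INSERT INTO snapshots VALUES (?, ?)",
               (datetime.now().isoformat(), screen_text))
    db.commit()

def search(query: str):
    # Full-text search across every indexed snapshot on the machine.
    return db.execute(
        "SELECT taken_at, screen_text FROM snapshots WHERE screen_text MATCH ?",
        (query,)).fetchall()

index_snapshot("Signal message: lab results attached, please keep confidential")
print(search("confidential"))
```

Anything that lands in such an index, whether from a chat window, a medical record, or a video call transcript, becomes retrievable with a single query, which is exactly the property Stross flags as a legal-discovery risk.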
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
A growing number of supermarkets in Alabama, Oklahoma, and Texas are selling bullets by way of AI-powered vending machines, as first reported by Alabama's Tuscaloosa Thread. The company behind the machines, a Texas-based venture dubbed American Rounds, claims on its website that its dystopian bullet kiosks are outfitted with "built-in AI technology" and "facial recognition software," which allegedly allow the devices to "meticulously verify the identity and age of each buyer." As showcased in a promotional video, using one is an astoundingly simple process: walk up to the kiosk, provide identification, and let a camera scan your face. If its embedded facial recognition tech says you are in fact who you say you are, the automated machine coughs up some bullets. According to American Rounds, the main objective is convenience. Its machines are accessible "24/7," its website reads, "ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines." Though officials in Tuscaloosa, where two machines have been installed, [said] that the devices are in full compliance with the Bureau of Alcohol, Tobacco, Firearms and Explosives' standards ... at least one of the devices has been taken down amid a Tuscaloosa city council investigation into its legal standing. "We have over 200 store requests for AARM [Automated Ammo Retail Machine] units covering approximately nine states currently," [American Rounds CEO Grant Magers] told Newsweek, "and that number is growing daily."
Note: Facial recognition technology is far from reliable. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence from reliable major media sources.
Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. The shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. Some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus. Such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Jay Stanley ... at the American Civil Liberties Union, said it was “profoundly disconcerting” that FedEx was exchanging data with law enforcement as part of Flock’s “mass surveillance” system. “It raises questions about why a private company ... would have privileged access to data that normally is only available to law enforcement,” he said. Forbes previously found that [Flock] had itself likely broken the law across various states by installing cameras without the right permits.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
Once upon a time, Google was great. They intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world. The launch of generative AIs such as ChatGPT clearly took Google by surprise, which is odd given that the company had for years been working on the technology. The question became: how will Google respond to the threat? Now we know: it’s something called AI overviews, in which an increasing number of search queries are initially answered by AI-generated responses. Users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day. There’s a quaint air of desperation in the publicity for this sudden pivot from search engine to answerbot. The really big question about the pivot, though, is what its systemic impact on the link economy will be. Already, the news is not great. Gartner, a market-research consultancy, for example, predicts that search engine volume will drop 25% by 2026 owing to AI chatbots and other virtual agents.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). This surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made. Untold billions of dollars of private money are now pouring into firms seeking to expand the frontiers of techno-war: $125 billion over the past four years, according to the New York Times. Whatever the numbers, the tech sector and its financial backers sense that there are massive amounts of money to be made in next-generation weaponry and aren’t about to let anyone stand in their way. Meanwhile, an investigation by Eric Lipton of the New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. Former Google CEO Eric Schmidt [has] become a virtual philosopher king when it comes to how new technology will reshape society. [Schmidt] laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with none other than the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. AI is coming, and its impact on our lives, whether in war or peace, is likely to stagger the imagination.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
OpenAI on Thursday announced its newest board member: Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency. Nakasone was the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service. The company said Nakasone will also join OpenAI’s recently created Safety and Security Committee. The committee is spending 90 days evaluating the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said. OpenAI is bolstering its board and its C-suite as its large language models gain importance across the tech sector and as competition rapidly emerges in the burgeoning generative artificial intelligence market. While the company has been in hyper-growth mode since late 2022, when it launched ChatGPT, OpenAI has also been riddled with controversy and high-level employee departures. The company said Sarah Friar, previously CEO of Nextdoor and finance chief at Square, is joining as chief financial officer. OpenAI also hired Kevin Weil, an ex-president at Planet Labs, as its new chief product officer. Weil was previously a senior vice president at Twitter and a vice president at Facebook and Instagram. Weil’s product team will focus on “applying our research to products and services that benefit consumers, developers, and businesses,” the company wrote.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and intelligence agency corruption from reliable major media sources.
Edward Snowden wrote on social media to his nearly 6 million followers, "Do not ever trust @OpenAI ... You have been warned," following the appointment of retired U.S. Army General Paul Nakasone to the board of the artificial intelligence technology company. Snowden, a former National Security Agency (NSA) subcontractor, was charged with espionage by the Justice Department in 2013 after leaking thousands of top-secret records, exposing the agency's surveillance of private citizens' information. In a Friday morning post on X, formerly Twitter, Snowden reshared a post providing information on OpenAI's newest board member. Nakasone is a former NSA director, and the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service. In [a] statement, Nakasone said, "OpenAI's dedication to its mission aligns closely with my own values and experience in public service. I look forward to contributing to OpenAI's efforts to ensure artificial general intelligence is safe and beneficial to people around the world." Snowden wrote in an X post, "They've gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc.) There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth." Snowden's post has received widespread attention, with nearly 2 million views, 43,500 likes, 16,000 reposts and around 1,000 comments as of Friday afternoon.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and intelligence agency corruption from reliable major media sources.
The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California—a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday. "Although much of the Pentagon's $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms," [report author Roberto J.] González wrote. "Defense Department officials have ... awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle." González found that the five largest military contracts to major tech firms between 2018 and 2022 "had contract ceilings totaling at least $53 billion combined." There's also the danger of a "revolving door" between Silicon Valley and the Pentagon as many senior government officials "are now gravitating towards defense-related VC or private equity firms as executives or advisers after they retire from public service." "Members of the armed services and civilians are in danger of being harmed by inadequately tested—or algorithmically flawed—AI-enabled technologies. By nature, VC firms seek rapid returns on investment by quickly bringing a product to market, and then 'cashing out' by either selling the startup or going public. This means that VC-funded defense tech companies are under pressure to produce prototypes quickly and then move to production before adequate testing has occurred."
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence - which may or may not be correct. “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine. It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results. It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination. Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing. Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.
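To make the "inherently random" point concrete, here is a toy Python sketch, with invented probabilities and no connection to anything Google actually runs: a generative model picks each next word by sampling from a probability distribution, so an unlikely (and false) continuation occasionally gets chosen.

# Toy illustration of why generative answers vary between runs: the model
# samples from a probability distribution over next tokens rather than
# looking an answer up. The "model" here is just a hand-written table.
import random

# assumed toy distribution over possible continuations of a prompt
next_token_probs = {
    "have never been": 0.75,
    "have met cats on": 0.20,   # unlikely, false continuation
    "are made of cheese and": 0.05,
}

def sample_continuation(probs: dict[str, float], temperature: float = 1.0) -> str:
    # temperature > 1 flattens the distribution, making unlikely (wrong)
    # continuations more probable; temperature near 0 approaches greedy decoding
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

random.seed()  # different runs can yield different, sometimes false, answers
print("Astronauts", sample_continuation(next_token_probs, temperature=1.3), "...")

Lowering the temperature makes the output more repeatable, but it does not make the underlying statistics any more factual.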
Note: Read more about the potential dangers of Google's new AI tool. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
"Agency intervention is necessary to stop the existential threat Google poses to original content creators," the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority "to stop Google's latest expansion of AI Overviews," a search engine innovation that Google has been rolling out recently. Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. Overviews give "comprehensive answers without the user ever having to click to another page," the The New York Times warns. And this worries websites that rely on Google to drive much of their traffic. "It potentially chokes off the original creators of the content," Frank Pine, executive editor of MediaNews Group and Tribune Publishing (owner of 68 daily newspapers), told the Times. Media websites have gotten used to Google searches sending them a certain amount of traffic. But that doesn't mean Google is obligated to continue sending them that same amount of traffic forever. It is possible that Google's pivot to AI was hastened by how hostile news media has been to tech companies. We've seen publishers demanding that search engines and social platforms pay them for the privilege of sharing news links, even though this arrangement benefits publications (arguably more than it does tech companies) by driving traffic.
Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
Sara needed some chocolate - she had had one of those days - so she wandered into a Home Bargains store. "Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'." Sara ... was wrongly accused after being flagged by a facial-recognition system called Facewatch. She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology. Facewatch later wrote to Sara and acknowledged it had made an error. Facewatch is used in numerous stores in the UK. It's not just retailers who are turning to the technology. On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech. So far this year, 192 arrests have been made as a result of it. But civil liberty groups are worried that its accuracy is yet to be fully established, and point to cases such as Shaun Thompson's. Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge. Within a few seconds, he was approached by police and told he was a wanted man. But it was a case of mistaken identity. "It felt intrusive ... I was treated guilty until proven innocent," he says. Silkie Carlo, director of Big Brother Watch, has filmed the police on numerous facial-recognition deployments. She says that anyone whose face is scanned is effectively part of a digital police line-up.
Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
In the middle of the night, students at Utah’s Kings Peak high school are wide awake – taking mandatory exams. Their every movement is captured on their computer’s webcam and scrutinized by Proctorio, a surveillance company that uses artificial intelligence. Proctorio software conducts “desk scans” in an effort to catch test-takers who turn to “unauthorized resources”, “face detection” technology to ensure there isn’t anybody else in the room to help, and “gaze detection” to spot anybody “looking away from the screen for an extended period of time”. Proctorio then provides visual and audio records to Kings Peak teachers, with the algorithm calling particular attention to pupils whose behaviors during the test flagged them as possibly engaging in academic dishonesty. Such remote proctoring tools grew exponentially during the pandemic, particularly at US colleges and universities. K-12 schools’ use of remote proctoring tools, however, has largely gone under the radar. K-12 schools nationwide – and online-only programs in particular – continue to use tools from digital proctoring companies on students ... as young as kindergarten-aged. Civil rights activists, who contend AI proctoring tools fail to work as intended, harbor biases and run afoul of students’ constitutional protections, said the privacy and security concerns are particularly salient for young children and teens, who may not be fully aware of the monitoring or its implications. One 2021 study found that Proctorio failed to detect test-takers who had been instructed to cheat. Researchers concluded the software was “best compared to taking a placebo: it has some positive influence, not because it works but because people believe that it works, or that it might work.”
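As a rough illustration of how a "gaze detection" rule of this kind could work, the sketch below flags any continuous off-screen span longer than a cutoff. The ten-second limit, the data format, and the function name are assumptions for the example, not Proctorio's actual logic.

# Hypothetical gaze-flagging rule: flag spans where gaze stays off-screen
# longer than a cutoff. All values are invented for illustration.
OFF_SCREEN_LIMIT_S = 10.0  # assumed cutoff in seconds

def flag_gaze_spans(samples):
    """samples: (timestamp_seconds, looking_at_screen) pairs, in time order."""
    flags, span_start = [], None
    for t, on_screen in samples:
        if not on_screen and span_start is None:
            span_start = t                      # gaze just left the screen
        elif on_screen and span_start is not None:
            if t - span_start >= OFF_SCREEN_LIMIT_S:
                flags.append((span_start, t))   # long enough to get flagged
            span_start = None
    if span_start is not None and samples and samples[-1][0] - span_start >= OFF_SCREEN_LIMIT_S:
        flags.append((span_start, samples[-1][0]))  # recording ended off-screen
    return flags

events = [(0, True), (5, False), (20, True), (30, False), (33, True)]
print(flag_gaze_spans(events))  # -> [(5, 20)]: only the 15-second gap is flagged

A rule this crude also suggests why false flags are easy to produce: to the algorithm, staring at scratch paper for too long looks exactly like staring at a phone.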
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs. In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned. SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them. Facial recognition ... technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed. In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement.
Note: Crime is increasing in many cities, and law enforcement agencies appropriately work to maintain public safety. Yet far too often, social justice takes a backseat while those in authority violate human rights. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and artificial intelligence from reliable major media sources.
Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.” Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.” Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted in to the test. To give us answers to everything, Google’s AI has to decide which sources are reliable. I’m not very confident about its judgment. Remember our bonkers result on Zuckerberg’s net worth? A professional researcher — and also regular old Google — might suggest checking the billionaires list from Forbes. Google’s AI answer relied on a very weird ZipRecruiter page for “Mark Zuckerberg Jobs,” a thing that does not exist. The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, [and] delivers out-of-date information. This test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI technology from reliable major media sources.
A Silicon Valley defense tech startup is working on products that could have as great an impact on warfare as the atomic bomb, its founder Palmer Luckey said. "We want to build the capabilities that give us the ability to swiftly win any war we are forced to enter," he [said]. The Anduril founder didn't elaborate on what impact AI weaponry would have. But asked if it would be as decisive as the atomic bomb to the outcome of World War II he replied: "We have ideas for what they are. We are working on them." In 2022, Anduril won a contract worth almost $1 billion with the Special Operations Command to support its counter-unmanned systems. Anduril's products include autonomous sentry towers along the Mexican border [and] Altius-600M attack drones supplied to Ukraine. All of Anduril's tech operates autonomously and runs on its AI platform called Lattice that can easily be updated. The success of Anduril has given hope to other smaller players aiming to break into the defense sector. As an escalating number of global conflicts has increased demand for AI-driven weaponry, venture capitalists have put more than $100 billion into defense tech since 2021, according to Pitchbook data. The rising demand has sparked a fresh wave of startups lining up to compete with industry "primes" such as Lockheed Martin and RTX (formerly known as Raytheon) for a slice of the $842 billion US defense budget.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corruption in the military and in the corporate world from reliable major media sources.
When a Manhattan parking garage collapsed in April this year, rescuers were reluctant to stay in the damaged building, fearing further danger. So they used a combination of flying drones and a doglike walking robot to inspect the damage, look for survivors and make sure the site was safe for human rescuers to return. Soon, rescuers may be able to call on a much more sophisticated robotic search-and-rescue response. Researchers are developing teams of flying, walking and rolling robots that can cooperate to explore areas that no one robot could navigate on its own. And they are giving robots the ability to communicate with one another and make many of their own decisions independent of their human controller. Such teams of robots could be useful in other challenging environments like caves or mines where it can be difficult for rescuers to find and reach survivors. In cities, collapsed buildings and underground sites such as subways or utility tunnels often have hazardous areas where human rescuers can’t be sure of the dangers. As robots become better, teams of them may one day be able to go into a hazardous disaster site, locate survivors and report back to their human operators with a minimum of supervision. “More work ... needs to be done,” [roboticist Viktor] Orekhov says. “But at the same time, we’ve seen the ability of the teams advanced so rapidly that even now, with their current capabilities, they’re able to make a significant difference in real-life environments.”
Note: Explore more positive stories like this in our comprehensive inspiring news articles archive focused on solutions and bridging divides.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”. Musk, who stepped down from OpenAI’s board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”. In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.
Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.
Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, promising to detect and predict how someone is feeling. Over 50% of large employers in the U.S. use emotion AI aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice. We wondered what workers think about these technologies. My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey. 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions. Despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.” Participants ... said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace.
Note: The above article was written by Nazanin Andalibi at the University of Michigan. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
An industrial estate in Yorkshire is an unlikely location for ... an artificial intelligence (AI) company used by the Government to monitor people’s posts on social media. Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the Government terms “disinformation” – false information deliberately seeded online – and “misinformation”, which is false information that has been spread inadvertently. It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms”, using AI to identify those that are potentially problematic. It has a £1.2 million deal with the Department for Culture, Media and Sport (DCMS), as well as another worth up to £1.4 million with the Department of Health and Social Care to monitor threats to high-profile individuals within the vaccine service. It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see. A joint press release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue. “When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.
Note: Read more about how NewsGuard, a for-profit company, works closely with government agencies and major corporate advertisers to suppress dissenting views online. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and media manipulation from reliable sources.
The $118bn bipartisan immigration bill that the US Senate introduced on Sunday is already facing steep opposition. The 370-page measure, which also would provide additional aid to Israel and Ukraine, has drawn the ire of both Democrats and Republicans over its proposed asylum and border laws. But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools. The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol. The bill describes autonomous surveillance towers as ones that “utilize sensors, onboard computing, and artificial intelligence to identify items of interest that would otherwise be manually identified by personnel”. The rest of the funding for border surveillance ... includes $47.5m for mobile video surveillance systems and drones and $25m for “familial DNA testing”. The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats. As of early January, CBP had deployed 396 surveillance towers along the US-Mexico border, according to the Electronic Frontier Foundation (EFF).
Note: Read more about the secret history of facial recognition technology and undeniable evidence indicating these tools do much more harm than good. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
An opaque network of government agencies and self-proclaimed anti-misinformation groups ... have repressed online speech. News publishers have been demonetized and shadow-banned for reporting dissenting views. NewsGuard, a for-profit company that scores news websites on trust and works closely with government agencies and major corporate advertisers, exemplifies the problem. NewsGuard’s core business is a misinformation meter, in which websites are rated on a scale of 0 to 100 on a variety of factors, including headline choice and whether a site publishes “false or egregiously misleading content.” Editors who have engaged with NewsGuard have found that the company has made bizarre demands that unfairly tarnish an entire site as untrustworthy for straying from the official narrative. In an email to one of its government clients, NewsGuard touted that its ratings system of websites is used by advertisers, “which will cut off revenues to fake news sites.” Internal documents ... show that the founders of NewsGuard privately pitched the firm to clients as a tool to engage in content moderation on an industrial scale, applying artificial intelligence to take down certain forms of speech. Earlier this year, Consortium News, a left-leaning site, charged in a lawsuit that NewsGuard serves as a proxy for the military to engage in censorship. The lawsuit brings attention to the Pentagon’s $749,387 contract with NewsGuard to identify “false narratives” regarding the war [in] Ukraine.
Note: A recent trove of whistleblower documents revealed how far the Pentagon and intelligence spy agencies are willing to go to censor alternative views, even if those views contain factual information and reasonable arguments. For more along these lines, see concise summaries of news articles on corporate corruption and media manipulation from reliable sources.
Israel’s military has made no secret of the intensity of its bombardment of the Gaza Strip. There has, however, been relatively little attention paid to the methods used by the Israel Defense Forces (IDF) to select targets in Gaza, and to the role artificial intelligence has played in their bombing campaign. After the 11-day war in Gaza in May 2021, officials said Israel had fought its “first AI war” using machine learning and advanced computing. The latest Israel-Hamas war has provided an unprecedented opportunity for the IDF to use such tools in a much wider theatre of operations and, in particular, to deploy an AI target-creation platform called “the Gospel”, which has significantly accelerated a lethal production line of targets. In early November, the IDF said “more than 12,000” targets in Gaza had been identified by its target administration division. Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers. According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.” A separate source [said] the Gospel had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”.
Note: Read about Israel's use of AI warfare since at least 2021. For more along these lines, see concise summaries of deeply revealing news articles on war from reliable major media sources.
OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI, which, roughly speaking, is a machine that can accomplish, or surpass, any intellectual task humans can perform. It would do so, however, in an ethical fashion to benefit “humanity as a whole”. Two years ago, a group of OpenAI researchers left to start a new organisation, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that “there was a 20% chance that a rogue AI would destroy humanity within the next decade”. One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The problem we face is not that machines may one day exercise power over humans. That is speculation unwarranted by current developments. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power. For those who hold social, political and economic power, it makes sense to project problems as technological rather than social and as lying in the future rather than in the present. There are few tools useful to humans that cannot also cause harm. But they rarely cause harm by themselves; they do so, rather, through the ways in which they are exploited by humans, especially those with power.
Note: Read how AI is already being used for war, mass surveillance, and questionable facial recognition technology.
The Moderna misinformation reports, reported here for the first time, reveal what the pharmaceutical company is willing to do to shape public discourse around its marquee product. The mRNA COVID-19 vaccine catapulted the company to a $100 billion valuation. Behind the scenes, the marketing arm of the company has been working with former law enforcement officials and public health officials to monitor and influence vaccine policy. Key to this is a drug industry-funded NGO called Public Good Projects. PGP works closely with social media platforms, government agencies and news websites to confront the “root cause of vaccine hesitancy” by rapidly identifying and “shutting down misinformation.” A network of 45,000 healthcare professionals are given talking points “and advice on how to respond when vaccine misinformation goes mainstream”, according to an email from Moderna. An official training programme, developed by Moderna and PGP, alongside the American Board of Internal Medicine, [helps] healthcare workers identify medical misinformation. The online course, called the “Infodemic Training Program”, represents an official partnership between biopharma and the NGO world. Meanwhile, Moderna also retains Talkwalker which uses its “Blue Silk” artificial intelligence to monitor vaccine-related conversations across 150 million websites in nearly 200 countries. Claims are automatically deemed “misinformation” if they encourage vaccine hesitancy. As the pandemic abates, Moderna is, if anything, ratcheting up its surveillance operation.
Note: Strategies to silence and censor those who challenge mainstream narratives enable COVID vaccine pharmaceutical giants to downplay the significant, emerging health risks associated with the COVID shots. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility. In addition to developing a wide variety of "autonomous," or robotic combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called "robot generals." In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to ... provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending "fire" instructions directly to "shooters," largely bypassing human control. The Air Force's ABMS is intended to ... connect all US combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced "Jad-C-two"). "JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon ... to engage the target," the Congressional Research Service reported in 2022.
Note: Read about the emerging threat of killer robots on the battlefield. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
A young African American man, Randal Quran Reid, was pulled over by the state police in Georgia. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. The arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed "a credible source" had identified Reid as the culprit. The facial recognition match was incorrect and Reid was released. Reid ... is not the only victim of a false facial recognition match. So far all those arrested in the US after a false match have been black. From surveillance to disinformation, we live in a world shaped by AI. The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with ... the humans that created the software and trained it. Too often when we talk of the "problem" of AI, we remove the human from the picture. We worry AI will "eliminate jobs" and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. We have come to view the machine as the agent and humans as victims of machine agency. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.
Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
The artist, writer and technologist James Bridle begins "Ways of Being" with an uncanny discovery: a line of stakes tagged with unfathomable letters and numbers in thick marker pen. The region of [Greece] is rich in oil, we learn, and the company that won the contract to extract it from the foothills of the Pindus mountains is using "cognitive technologies" to "augment ... strategic decision making." The grid of wooden stakes left by "unmarked vans, helicopters and work crews in hi-vis jackets" are the "tooth- and claw-marks of Artificial Intelligence, at the exact point where it meets the earth." "Ways of Being" sets off on a tour of the natural world, arguing that intelligence is something that "arises ... from thinking and working together," and that "everything is intelligent." We hear of elephants, chimpanzees and dolphins who resist and subvert experiments testing their sense of self. We find redwoods communicating through underground networks. In the most extraordinary result of all, in 2014 the Australian biologist Monica Gagliano showed that mimosa plants can remember a sudden fall for a month. Ever since the Industrial Revolution, science and technology have been used to analyze, conquer and control. But "Ways of Being" argues that they can equally be used to explore and augment connection and empathy. The author cites researchers studying migration patterns with military radar and astronomers turning telescopes designed for surveillance on Earth into instruments for investigating the dark energy of the cosmos.
Note: Read a thought-provoking article featuring a video interview with artist and technologist James Bridle as he explores how technology can be used to reflect the innovative and life-enhancing capacities of non-human natural systems. For more along these lines, see concise summaries of deeply revealing news articles on the mysterious nature of reality from reliable major media sources.
An AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person’s thoughts to be read non-invasively for the first time. The decoder could reconstruct speech with uncanny accuracy while people listened to a story – or even silently imagined one – using only fMRI scan data. Previous language decoding systems have required surgical implants. Large language models – the kind of AI underpinning OpenAI’s ChatGPT ... are able to represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word. The decoder was personalised and when the model was tested on another person the readout was unintelligible. It was also possible for participants on whom the decoder had been trained to thwart the system, for example by thinking of animals or quietly imagining another story. Jerry Tang, a doctoral student at the University of Texas at Austin and a co-author, said: “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them.” Prof Tim Behrens, a computational neuroscientist ... said it opened up a host of experimental possibilities, including reading thoughts from someone dreaming.
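A minimal sketch of the embedding idea described above, using made-up numbers: a semantic vector predicted from brain activity is compared against vectors for candidate word sequences, and the closest candidate wins. Real systems use high-dimensional language-model embeddings fitted to fMRI data; nothing below reflects the study's actual data or code.

# Toy semantic decoding: pick the candidate sentence whose (invented) embedding
# is closest to a vector predicted from brain activity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

candidate_sentences = {  # stand-ins for language-model embeddings
    "she opened the old wooden door": [0.9, 0.1, 0.2],
    "the storm knocked out the power": [0.1, 0.8, 0.3],
    "he counted the coins twice": [0.2, 0.2, 0.9],
}

predicted_from_brain = [0.15, 0.75, 0.35]  # pretend output of an fMRI decoder

best = max(candidate_sentences,
           key=lambda s: cosine(candidate_sentences[s], predicted_from_brain))
print(best)  # -> "the storm knocked out the power"

This also hints at why the decoder had to be personalised: the mapping from a given brain's activity into the semantic space must be learned per participant, which is why the readout was unintelligible when the model was tested on someone else.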
Note: This technology has advanced considerably since Jose Delgado first stopped a charging bull using radio waves in 1965. For more along these lines, see concise summaries of deeply revealing news articles on mind control and the disappearance of privacy from reliable major media sources.
As school shootings proliferate across the country — there were 46 school shootings in 2022, more than in any year since at least 1999 — educators are increasingly turning to dodgy vendors who market misleading and ineffective technology. Utica City is one of dozens of school districts nationwide that have spent millions on gun detection technology with little to no track record of preventing or stopping violence. Evolv’s scanners keep popping up in schools across the country. Over 65 school districts have bought or tested artificial intelligence gun detection from a variety of companies since 2018, spending a total of over $45 million, much of it coming from public coffers. “Private companies are preying on school districts’ worst fears and proposing the use of technology that’s not going to work,” said Stefanie Coyle ... at the New York Civil Liberties Union. In December, it came out that Evolv, a publicly traded company since 2021, had doctored the results of its software testing. In 2022, the National Center for Spectator Sports Safety and Security, a government body, completed a confidential report showing that previous field tests on the scanners failed to detect knives and a handgun. Five law firms recently announced investigations of Evolv Technology — a partner of Motorola Solutions whose investors include Bill Gates — looking into possible violations of securities law, including claims that Evolv misrepresented its technology and its capabilities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption from reliable major media sources.
U.S. citizens are being subjected to a relentless onslaught from intrusive technologies that have become embedded in the everyday fabric of our lives, creating unprecedented levels of social and political upheaval. These widely used technologies ... include social media and what Harvard professor Shoshana Zuboff calls "surveillance capitalism"—the buying and selling of our personal info and even our DNA in the corporate marketplace. But powerful new ones are poised to create another wave of radical change. Under the mantle of the "Fourth Industrial Revolution," these include artificial intelligence, or AI, the metaverse, the Internet of Things, the Internet of Bodies (in which our physical and health data is added into the mix to be processed by AI), and my personal favorite, police robots. This is a two-pronged effort involving both powerful corporations and government initiatives. These tech-based systems are operating "below the radar" and are rarely discussed in the mainstream media. The world's biggest tech companies are now richer and more powerful than most countries. According to an article in PC Week in 2021 discussing Apple's dominance: "By taking the current valuation of Apple, Microsoft, Amazon, and others, then comparing them to the GDP of countries on a map, we can see just how crazy things have become… Valued at $2.2 trillion, the Cupertino company is richer than 96% of the world. In fact, only seven countries currently outrank the maker of the iPhone financially."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
The precise locations of the U.S. government’s high-tech surveillance towers along the U.S.-Mexico border are being made public for the first time as part of a mapping project by the Electronic Frontier Foundation. While the Department of Homeland Security’s investment of more than a billion dollars into a so-called virtual wall between the U.S. and Mexico is a matter of public record, the government does not disclose where these towers are located, despite privacy concerns of residents of both countries — and the fact that individual towers are plainly visible to observers. The surveillance tower map is the result of a year’s work steered by EFF Director of Investigations Dave Maass. As border surveillance towers have multiplied across the southern border, so too have they become increasingly sophisticated, packing a panoply of powerful cameras, microphones, lasers, radar antennae, and other sensors. Companies like Anduril and Google have reaped major government paydays by promising to automate the border-watching process with migrant-detecting artificial intelligence. Opponents of these modern towers, bristling with always-watching sensors, argue the increasing computerization of border security will lead inevitably to the dehumanization of an already thoroughly dehumanizing undertaking. Nobody can say for certain how many people have died attempting to cross the U.S.-Mexico border in the recent age of militarization and surveillance. Researchers estimate that the toll is at least 10,000 dead.
Note: As the article states, the Department of Homeland Security was "the largest reorganization of the federal government since the creation of the CIA and the Defense Department," and has resulted in U.S. taxpayers funding corrupt agendas that have led to massive human rights abuses. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Advanced Impact Media Solutions, or Aims, which controls more than 30,000 fake social media profiles, can be used to spread disinformation at scale and at speed. It is sold by “Team Jorge”, a unit of disinformation operatives based in Israel. Tal Hanan, who runs the covert group using the pseudonym “Jorge”, told undercover reporters that they sold access to their software to unnamed intelligence agencies, political parties and corporate clients. Team Jorge’s Aims software ... is much more than a bot-controlling programme. Each avatar ... is given a multifaceted digital backstory. Aims enables the creation of accounts on Twitter, LinkedIn, Facebook, Telegram, Gmail, Instagram and YouTube. Some even have Amazon accounts with credit cards, bitcoin wallets and Airbnb accounts. Hanan told the undercover reporters his avatars mimicked human behaviour and their posts were powered by artificial intelligence. [Our reporters] were able to identify a much wider network of 2,000 Aims-linked bots on Facebook and Twitter. We then traced their activity across the internet, identifying their involvement ... in about 20 countries including the UK, US, Canada, Germany, Switzerland, Greece, Panama, Senegal, Mexico, Morocco, India, the United Arab Emirates, Zimbabwe, Belarus and Ecuador. The analysis revealed a vast array of bot activity, with Aims’ fake social media profiles getting involved in a dispute in California over nuclear power; a #MeToo controversy in Canada ... and an election in Senegal.
Note: The FBI has provided police departments with fake social media profiles to use in law enforcement investigations. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
Artificial intelligence (AI) has graduated from the hype stage of the last decade and its use cases are now well documented. Whichever nation best adapts this technology to its military – especially in space – will open new frontiers in innovation and determine the winners and losers. The US Army has anticipated this impending AI disruption and has moved quickly to stand up efforts like Project Linchpin to construct the infrastructure and environment necessary to proliferate AI technology across its intelligence, cyber, and electronic warfare communities. However, it should come as no surprise that China anticipated this advantage sooner than the US and is at the forefront of adoption. Chinese dominance in AI is imminent. The Chinese government has made enormous investments in this area (much more than Western countries) and is the current leader in AI publications and research patents globally. Meanwhile, China's ambitions in space are no longer a secret – the country is now on a trajectory to surpass the US in the next decade. The speed, range, and flexibility afforded by AI and machine learning gives those on orbit who wield it an unprecedented competitive edge. The advantage of AI in space warfare, for both on-orbit and in-ground systems, is that AI algorithms continuously learn and adapt as they operate, and the algorithms themselves can be upgraded as often as needed, to address or escalate a conflict. Like electronic warfare countermeasures during the Cold War, AI is truly the next frontier.
Note: For more along these lines, see concise summaries of deeply revealing news articles on military corruption and war from reliable major media sources.
Last week, an Israeli defense company painted a frightening picture. In a roughly two-minute video on YouTube that resembles an action movie, soldiers out on a mission are suddenly pinned down by enemy gunfire and calling for help. In response, a tiny drone zips off its mother ship to the rescue, zooming behind the enemy soldiers and killing them with ease. While the situation is fake, the drone — unveiled last week by Israel-based Elbit Systems — is not. The Lanius, which in Latin can refer to butcherbirds, represents a new generation of drone: nimble, wired with artificial intelligence, and able to scout and kill. The machine is based on racing drone design, allowing it to maneuver into tight spaces, such as alleyways and small buildings. After being sent into battle, Lanius’s algorithm can make a map of the scene and scan people, differentiating enemies from allies — feeding all that data back to soldiers who can then simply push a button to attack or kill whom they want. For weapons critics, that represents a nightmare scenario, which could alter the dynamics of war. “It’s extremely concerning,” said Catherine Connolly, an arms expert at Stop Killer Robots, an anti-weapons advocacy group. “It’s basically just allowing the machine to decide if you live or die if we remove the human control element for that.” According to the drone’s data sheet, the drone is palm-size, roughly 11 inches by 6 inches. It has a top speed of 45 miles per hour. It can fly for about seven minutes, and has the ability to carry lethal and nonlethal materials.
Note: US General Paul Selva has warned against employing killer robots in warfare for ethical reasons. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Looking Glass Factory, a company based in the Greenpoint neighborhood of Brooklyn, New York, revealed its latest consumer device: a slim, holographic picture frame that turns photos taken on iPhones into 3D displays. Looking Glass received $2.54 million of “technology development” funding from In-Q-Tel, the venture capital arm of the CIA, from April 2020 to March 2021 and a $50,000 Small Business Innovation Research award from the U.S. Air Force in November 2021 to “revolutionize 3D/virtual reality visualization.” Across the various branches of the military and intelligence community, contract records show a rush to jump on holographic display technology, augmented reality, and virtual reality display systems as the latest trend. Critics argue that the technology isn’t quite ready for prime time, and that the urgency to adopt it reflects the Pentagon’s penchant for high-priced, high-tech contracts based on the latest fad in warfighting. Military interest in holographic imaging, in particular, has grown rapidly in recent years. Military planners in China and the U.S. have touted holographic technology to project images “to incite fear in soldiers on a battlefield.” Other uses involve the creation of three-dimensional maps of villages or specific buildings and the analysis of blast forensics. Palmer Luckey, who founded the technology startup Anduril Industries ... has received secretive Air Force contracts to develop next-generation artificial intelligence capabilities under the so-called Project Maven initiative.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption from reliable major media sources.
When Elon Musk gave the world a demo in August of his latest endeavor, the brain-computer interface (BCI) Neuralink, he reminded us that the lines between brain and machine are blurring quickly. It bears remembering, however, that Neuralink is, at its core, a computer — and as with all computing advancements in human history, the more complex and smart computers become, the more attractive targets they become for hackers. Our brains hold information computers don't have. A brain linked to a computer/AI such as a BCI removes that barrier to the brain, potentially allowing hackers to rush in and cause problems we can't even fathom today. Might hacking humans via BCI be the next major evolution in hacking, carried out through a dangerous combination of past hacking methods? Previous eras were defined by obstacles between hackers and their targets. However, what happens when that disconnect between humans and tech is blurred? When they're essentially one and the same? Should a computing device literally connected to the brain, as Neuralink is, become hacked, the consequences could be catastrophic, giving hackers ultimate control over someone. If Neuralink penetrates deep into the human brain with high fidelity, what might hacking a human look like? Following traditional patterns, hackers would likely target individuals with high net worths and perhaps attempt to manipulate them into wiring millions of dollars to a hacker's offshore bank account.
Note: For more on this, see an article in the UK’s Independent titled “Groundbreaking new material 'could allow artificial intelligence to merge with the human brain’.” Meanwhile, the military is talking about “human-machine symbiosis.” And Yale professor Charles Morgan describes in a military presentation how hypodermic needles can be used to alter a person’s memory and much more in this two-minute video. For more along these lines, see concise summaries of deeply revealing news articles on microchip implants from reliable major media sources.
Reading minds has just come a step closer to reality: scientists have developed artificial intelligence that can turn brain activity into text. While the system currently works on neural patterns detected while someone is speaking aloud, experts say it could eventually aid communication for patients who are unable to speak or type. “We are not there yet but we think this could be the basis of a speech prosthesis,” said Dr Joseph Makin, co-author of the research from the University of California, San Francisco. Writing in the journal Nature Neuroscience, Makin and colleagues reveal how they developed their system by recruiting four participants who had electrode arrays implanted in their brain to monitor epileptic seizures. These participants were asked to read aloud from 50 set sentences multiple times, including “Tina Turner is a pop singer”, and “Those thieves stole 30 jewels”. The team tracked their neural activity while they were speaking. This data was then fed into a machine-learning algorithm, a type of artificial intelligence system that converted the brain activity data for each spoken sentence into a string of numbers. At first the system spat out nonsense sentences. But as the system compared each sequence of words with the sentences that were actually read aloud it improved, learning how the string of numbers related to words, and which words tend to follow each other. The system was not perfect. However, the team found the accuracy of the new system was far higher than previous approaches.
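As a toy illustration of the two ingredients the researchers describe (per-step evidence linking neural activity to candidate words, plus a learned sense of which words tend to follow each other), the sketch below combines them with a greedy decoder. Every number is invented; the real system used implanted electrode recordings and a trained machine-learning model.

# Sketch of combining per-step word scores "derived from neural data" with a
# prior over which words tend to follow each other. All values are invented.
import math

emission = [  # assumed per-time-step word scores from neural features
    {"tina": 0.6, "those": 0.3, "thirty": 0.1},
    {"turner": 0.5, "thieves": 0.4, "is": 0.1},
    {"is": 0.7, "stole": 0.3},
]
bigram = {  # assumed "which words follow which" prior learned from text
    ("tina", "turner"): 0.9, ("those", "thieves"): 0.9,
    ("turner", "is"): 0.8, ("thieves", "stole"): 0.8,
}

def greedy_decode(emission, bigram):
    words, prev = [], None
    for step in emission:
        def score(w):
            prior = bigram.get((prev, w), 0.05) if prev else 1.0
            return math.log(step[w]) + math.log(prior)
        best = max(step, key=score)
        words.append(best)
        prev = best
    return " ".join(words)

print(greedy_decode(emission, bigram))  # -> "tina turner is"

A vastly larger version of this combination, learned from data rather than written by hand, is a rough analogue of what the article describes the system getting better at over time.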
Note: Remember that the military in their secret projects is often 10 to 20 years in advance of anything public. In 2008, CBS reported the story of a man with ALS who could type using only a brain computer interface. For more along these lines, see concise summaries of deeply revealing news articles on microchip implants from reliable major media sources.
Lesley Stahl reports on an innovative project that uses artificial intelligence technology to allow people to talk with Holocaust survivors, even after their death. This high-tech initiative is a project of the USC Shoah Foundation. The project's creators film lengthy interviews with Holocaust survivors, then enter all the recorded answers into a database. When a person asks a spoken question, voice recognition technology identifies what the person is asking, then artificial intelligence identifies the best answer to the question and pulls up the video of that response. [Stahl] digitally spoke with ... Eva Kor, an identical twin who survived the brutal experiments of Josef Mengele at the Auschwitz concentration camp. Kor died in July 2019 at the age of 85, yet there she was, in a life-like projection, willing to answer Stahl's questions, even sharing her recollections of Mengele: "When I looked into his eyes, I could see nothing but evil," the digital Kor told Stahl. "People say that the eyes are the center of the soul, and in Mengele's case, that was correct." In 1992, 60 Minutes reported on Mengele's twin experiments, and Stahl interviewed the living Kor. Kor recalled how her twin sister, Miriam, helped sustain her life at Auschwitz. "I was continuously fainting out of hunger; even after, I survived," Kor said. "Yet Miriam saved her bread for one whole week. Now can you imagine what willpower does it take?" Kor told Stahl it had taken her 40 years before she was able to speak with her sister about the atrocities they experienced at Auschwitz.
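Stripped of the studio production, the matching step described above is a retrieval problem: transcribe the visitor's question, score it against the questions each answer was filmed for, and play the clip with the highest score. The snippet below is a minimal sketch of that idea using off-the-shelf TF-IDF similarity; it is not the Shoah Foundation's actual system, and the clip names and prompts are invented for illustration.

# Minimal sketch of the question-to-clip matching step (invented data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

answer_bank = [
    ("clip_mengele.mp4", "what was it like to look into mengele's eyes"),
    ("clip_sister.mp4",  "how did your sister miriam help you survive"),
    ("clip_arrival.mp4", "what happened when you arrived at auschwitz"),
]

vectorizer = TfidfVectorizer()
prompt_vectors = vectorizer.fit_transform(q for _, q in answer_bank)

def best_clip(transcribed_question: str) -> str:
    """Return the stored video clip whose prompt best matches the question."""
    q_vec = vectorizer.transform([transcribed_question])
    scores = cosine_similarity(q_vec, prompt_vectors)[0]
    return answer_bank[scores.argmax()][0]

print(best_clip("Can you tell me about Mengele's eyes?"))   # clip_mengele.mp4

A production system would use far richer language understanding, but the basic design choice is the same: the AI never generates new answers, it only selects among responses the survivor actually recorded.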
Note: The 60 Minutes video at the link above is quite revealing. If only good people around the world were willing to step out of their comfort zones and see what Dr. Josef Mengele, the Angel of Death at Auschwitz, did to his concentration camp inmates, we might live in a kinder, gentler world. If you are open to learning more, see an excellent two-page summary on secret Nazi experiments and this well-documented webpage on how Mengele may have been allowed to escape to then serve in secret American projects. By shining a light into the dark shadows, we can transform our world.
Ties between Silicon Valley and the Pentagon are deeper than previously known, according to thousands of previously unreported subcontracts published Wednesday. The subcontracts were obtained through open records requests by accountability nonprofit Tech Inquiry. They show that tech giants including Google, Amazon, and Microsoft have secured more than 5,000 agreements with agencies including the Department of Defense, Immigration and Customs Enforcement, the Drug Enforcement Administration, and the FBI. Tech workers in recent years have pressured their employers to drop contracts with law enforcement and the military. Google workers revolted in 2018 after Gizmodo revealed that Google was building artificial intelligence for drone targeting through a subcontract with the Pentagon; after some employees quit in protest, Google agreed not to renew the contract. Employees at Amazon and Microsoft have petitioned both companies to drop their contracts with ICE and the military. Neither company has. The newly surfaced subcontracts ... show that the companies' connections to the Pentagon run deeper than many employees were previously aware. Tech Inquiry's research was led by Jack Poulson, a former Google researcher. "Often the high-level contract description between tech companies and the military looks very vanilla," Poulson [said]. "But only when you look at the details ... do you see the workings of how the customization from a tech company would actually be involved."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corruption in government and in the corporate world from reliable major media sources.
At least 25 prominent artificial-intelligence researchers, including experts at Google, Facebook, Microsoft and a recent winner of the prestigious Turing Award, have signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color. The letter, which was publicly released Wednesday, reflects growing concern in academia and the tech industry that bias in facial-recognition technology is a systemic problem. Amazon sells a product called Rekognition through its cloud-computing division, Amazon Web Services. The company said last year that early customers included the Orlando Police Department in Florida and the Washington County Sheriff's Office in Oregon. In January, two researchers at the Massachusetts Institute of Technology published a peer-reviewed study showing that Amazon Rekognition had more trouble identifying the gender of female and darker-skinned faces in photos than similar services from IBM and Microsoft. It mistook women for men 19 percent of the time, the study showed, and misidentified darker-skinned women for men 31 percent of the time. "There are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties," the A.I. researchers wrote. "We call on Amazon to stop selling Rekognition to law enforcement."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the erosion of civil liberties.
The National Security Agency (NSA) is developing a tool that George Orwell's Thought Police might have found useful: an artificial intelligence system designed to gain insight into what people are thinking. The device will be able to respond almost instantaneously to complex questions posed by intelligence analysts. As more and more data is collected - through phone calls, credit card receipts, social networks like Facebook and MySpace, GPS tracks, cell phone geolocation, Internet searches, Amazon book purchases, even E-Z Pass toll records - it may one day be possible to know not just where people are and what they are doing, but what and how they think. The system is so potentially intrusive that at least one researcher has quit, citing concerns over the dangers in placing such a powerful weapon in the hands of a top-secret agency with little accountability. Known as Aquaint, which stands for "Advanced QUestion Answering for INTelligence," the project was run for many years by John Prange, an NSA scientist at the Advanced Research and Development Activity. A supersmart search engine, capable of answering complex questions ... would be very useful for the public. But that same capability in the hands of an agency like the NSA - absolutely secret, often above the law, resistant to oversight, and with access to petabytes of private information about Americans - could be a privacy and civil liberties nightmare. "We must not forget that the ultimate goal is to transfer research results into operational use," said ... Prange.
Note: Watch a highly revealing PBS Nova documentary providing virtual proof that the NSA could have stopped 9/11 but chose not to. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and the disappearance of privacy.
Google will not seek to extend its contract next year with the Defense Department for artificial intelligence used to analyze drone video, squashing a controversial alliance that had raised alarms over the technological buildup between Silicon Valley and the military. Google ... has faced widespread public backlash and employee resignations for helping develop technological tools that could aid in warfighting. Google will soon release new company principles related to the ethical uses of AI. Thousands of Google employees wrote chief executive Sundar Pichai an open letter urging the company to cancel the contract, and many others signed a petition saying the company's assistance in developing combat-zone technology directly countered the company's famous "Don't be evil" motto. Several Google AI employees had told The Post they believed they wielded a powerful influence over the company's decision-making. The advanced technology's top researchers and developers are in heavy demand, and many had organized resistance campaigns or threatened to leave. The sudden announcement Friday was welcomed by several high-profile employees. Meredith Whittaker, an AI researcher and the founder of Google's Open Research group, tweeted Friday: "I am incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war."
Note: Explore a treasure trove of concise summaries of incredibly inspiring news articles which will inspire you to make a difference.
Hundreds of academics have urged Google to abandon its work on a U.S. Department of Defense-led drone program codenamed Project Maven. An open letter calling for change was published Monday by the International Committee for Robot Arms Control (ICRAC). The project is formally known as the Algorithmic Warfare Cross-Functional Team. Its objective is to turn the enormous volume of data available to DoD into actionable intelligence. More than 3,000 Google staffers signed a petition in April in protest at the company's focus on warfare. "We believe that Google should not be in the business of war," it read. "Therefore we ask that Project Maven be cancelled." The ICRAC warned this week the project could potentially be mixed with general user data and exploited to aid targeted killing. Currently, its letter has nearly 500 signatures. It stated: "We are ... deeply concerned about the possible integration of Google's data on people's everyday lives with military surveillance data, and its combined application to targeted killing ... Google has moved into military work without subjecting itself to public debate or deliberation. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief." Lieutenant Colonel Garry Floyd, deputy chief of the Algorithmic Warfare Cross-Functional Team, said ... earlier this month that Maven was already active in five or six combat locations.
Note: You can read the full employee petition on this webpage. The New York Times also published a good article on this. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and war.
There's something eating at Google employees. Roughly one dozen employees of the search giant have resigned in the wake of reports that the ... company is providing artificial intelligence to the Pentagon. The employees resigned because of ethical concerns over the company's work with the Defense Department that includes helping the military speed up analysis of drone footage by automatically classifying images of objects and people, Gizmodo reported. Many of the employees who quit have written accounts of their decisions to leave the company. Their stories have been gathered and shared in an internal document. Google is helping the DoD's Project Maven implement machine learning to classify images gathered by drones, according to the report. Some employees believe humans, not algorithms, should be responsible for this sensitive and potentially lethal work - and that Google shouldn't be involved in military work at all. The 12 resignations are the first known mass resignations at Google in protest against one of the company's business decisions - and they speak to the strongly felt ethical concerns of the employees who are departing. In addition to the resignations, nearly 4,000 Google employees have voiced their opposition to Project Maven in an internal petition that asks Google to immediately cancel the contract and institute a policy against taking on future military work.
Note: You can read the full employee petition on this webpage. An open letter in support of Google employees and tech workers was signed by more than 90 academics in artificial intelligence, ethics, and computer science. The New York Times also published a good article on this. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and war.
Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple's Siri, Amazon's Alexa and Google's Assistant. Researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online - simply with music playing over the radio. A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website. This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon's Echo speaker might hear an instruction to add something to your shopping list. There is no American law against broadcasting subliminal messages to humans, let alone machines. The Federal Communications Commission discourages the practice as counter to the public interest, and the Television Code of the National Association of Broadcasters bans transmitting messages below the threshold of normal awareness.
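These "hidden command" attacks exploit the fact that speech recognizers are trained models: an attacker can compute how small changes to the waveform shift the model's output, then nudge the audio toward a chosen transcription while keeping the change too subtle for a listener to notice. The following is a minimal sketch of that general idea only, not the Berkeley paper's code: a tiny random network stands in for a real speech-to-text model, the "command" is just a target class index, and the loudness cap is an arbitrary illustrative value.

# Minimal sketch of a targeted adversarial-audio perturbation (toy recognizer,
# fake audio). A real attack would target an actual speech-recognition model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in recognizer: maps 16,000 audio samples to scores over 10 "commands".
recognizer = nn.Sequential(nn.Linear(16000, 64), nn.ReLU(), nn.Linear(64, 10))

original = torch.randn(16000) * 0.1     # pretend this is a clip of music
target_command = torch.tensor([7])      # hypothetical index for "open website"

delta = torch.zeros_like(original, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
EPSILON = 0.01                          # keep the added noise quiet

for step in range(200):
    optimizer.zero_grad()
    logits = recognizer((original + delta).unsqueeze(0))
    loss = loss_fn(logits, target_command)   # push output toward the command
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-EPSILON, EPSILON)      # bound how audible the change is

prediction = recognizer((original + delta).unsqueeze(0)).argmax().item()
print("recognizer now hears command index:", prediction)

The design tension is exactly the one the article describes: the perturbation must be large enough to steer the model but small enough that a human hears only the original music or speech.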
Note: Read how a hacked vehicle may have resulted in journalist Michael Hastings' death in 2013. A 2015 New York Times article titled "Why Smart Objects May Be a Dumb Idea" describes other major risks in creating an "Internet of Things". Vulnerabilities like those described in the article above make it possible for anyone to spy on you with these objects, accelerating the disappearance of privacy.
Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company's involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes. The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash ... that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. "We believe that Google should not be in the business of war," says the letter, addressed to Sundar Pichai, the company's chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ever build warfare technology. That kind of idealistic stance ... is distinctly foreign to Washington's massive defense industry and certainly to the Pentagon, where the defense secretary, Jim Mattis, has often said a central goal is to increase the lethality of the United States military. Some of Google's top executives have significant Pentagon connections. Eric Schmidt, former executive chairman of Google and still a member of the executive board of Alphabet, Google's parent company, serves on a Pentagon advisory body, the Defense Innovation Board, as does a Google vice president, Milo Medin. Project Maven ... began last year as a pilot program to find ways to speed up the military application of the latest A.I. technology.
Note: The use of artificial intelligence technology for drone strike targeting is one of many ways warfare is being automated. Strong warnings against combining artificial intelligence with war have recently been issued by America's second-highest ranking military officer, tech mogul Elon Musk, and many of the world's most recognizable scientists. For more along these lines, see concise summaries of deeply revealing war news articles from reliable major media sources.
William Andregg ushers me into the cluttered workshop of his startup Fathom Computing. Inside [a bulky black box is] a prototype computer that processes data using light, not electricity, and it's learning to recognize handwritten digits. In other experiments the device learned to generate sentences in text. Andregg claims this is the first time such complex machine-learning software has been trained using circuits that pulse with laser light, not electricity. The company is working to shrink its [prototype], which covers a few square feet of workbench, to fit into a standard cloud server. Fathom hopes the technology will become one of the shovels of the artificial-intelligence gold rush. Tech companies, particularly large cloud providers like Amazon and Microsoft, spend heavily on computer chips to power machine-learning algorithms. Fathom's founders are betting this hunger for more powerful machine learning will outstrip the capabilities of purely electronic computers. "Optics has fundamental advantages over electronics," says William Andregg. You're already reaping the benefits of using light instead of electricity to work with data. Telecommunications companies move our web pages and selfies over long distances by shooting lasers down optical fiber. Optical computers aren't likely to power your laptop or smartphone any time soon. Fathom's prototype is still too bulky, for one thing. But the technology does look to be a decent match for the main work that chips perform in AI projects based on artificial neural networks.
Note: Explore a treasure trove of concise summaries of incredibly inspiring news articles which will inspire you to make a difference.
America's second-highest ranking military officer, Gen. Paul Selva, advocated Tuesday for "keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control." Selva was responding to a question from Sen. Gary Peters, a Michigan Democrat, about his views on a Department of Defense directive that requires a human operator to be kept in the decision-making process when it comes to the taking of human life by autonomous weapons systems. Peters said the restriction was "due to expire later this year." "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," Selva told the Senate Armed Services Committee during a confirmation hearing for his reappointment as the vice chairman of the Joint Chiefs of Staff. He predicted that "there will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action," but added that he was "an advocate for keeping that restriction." Selva said humans needed to remain in the decision-making process "because we take our values to war." His comments come as the US military has sought increasingly autonomous weapons systems.
Note: In another article, Tesla founder Elon Musk warns against the dangers of AI without regulation. A 2013 report for the U.N. Human Rights Commission called for a worldwide moratorium on the testing, production, assembly, transfer, acquisition, deployment and use of killer robots until an international conference can develop rules for their use. For more along these lines, see concise summaries of deeply revealing war news articles from reliable major media sources.
The Navy is looking to increase its use of drones that are more and more independent of direct human control. In recent days, Pentagon officials and Navy leaders have spoken about the program and the push to develop more autonomous and intelligent unmanned systems. Secretary of Defense Ash Carter in a speech earlier this month confirmed that the United States was developing "self-driving boats which can network together to do all kinds of missions, from fleet defense to close-in surveillance." And Rear Adm. Robert P. Girrier, the Navy's director of Unmanned Warfare Systems, discussed the effort at a January event at the Center for Strategic and International Studies. The drive is being dubbed "human-machine teaming," which uses unmanned vehicles that are more independent than those piloted or supervised by human operators. Girrier told the audience that the "technology is there" and that more autonomous drones would allow the United States "to achieve supremacy at a lower cost." The Navy's push comes despite critics expressing increasing alarm at further automating drones, advances that have sparked fears of militaries developing robots that can kill without accountability. In July a group of concerned scientists, researchers and academics ... argued against the development of autonomous weapons systems. They warned of an artificial intelligence arms race and called for a "ban on offensive autonomous weapons beyond meaningful human control."
Note: In another article, Tesla founder Elon Musk warns against the dangers of AI without regulation. A 2013 report for the U.N. Human Rights Commission called for a worldwide moratorium on the testing, production, assembly, transfer, acquisition, deployment and use of killer robots until an international conference can develop rules for their use. For more along these lines, see concise summaries of deeply revealing military corruption news articles from reliable major media sources.
The Mother of All Bombs made news last week after the U.S. military dropped its most powerful non-nuclear bomb at a site in Afghanistan's Nangarhar Province. This massive ... explosive device may seem a high-tech marvel. But the technology is old news, based on ... World War II-era theories. Yet there's plenty of new news on the military weapons front. The military's new toys are often fantastically costly. Yet in some categories, technological advances create opportunities for cheap but powerful military tools ... starting with weaponized drones. The Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines ... without any help from humans. The dilemma posed by artificial intelligence-driven autonomous weapons - which some scientists liken to the third revolution in warfare, after gunpowder and nuclear arms - is that to take fullest advantage of such weapons, the logical move would be to leave humans entirely out of lethal decision-making, allowing for quicker responses to threats. But if future presidents and Pentagons trusted algorithms to make such decisions, conflicts between two nations relying on such technology could rapidly escalate - to possibly apocalyptic levels - without human involvement. More than 20,000 AI researchers, scientists and [others have signed] a ... petition endorsing a ban on offensive autonomous weapons.
Note: In 2013, the United Nations investigated the rise of lethal autonomous robots, and reported that this technology endangers human rights and should not be developed further without international oversight. For more along these lines, see concise summaries of deeply revealing war news articles from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.