As a 501(c)(3) nonprofit, we depend almost entirely on donations from people like you.
Please consider making a donation.
Subscribe here and join over 13,000 subscribers to our free weekly newsletter

AI Media Articles

We worry AI will "eliminate jobs" and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. — Kenan Malik


Artificial Intelligence (AI) is an emerging technology with great promise and great potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.

Explore our comprehensive news index on a wide variety of fascinating topics.
Explore the top 20 most revealing news media articles we've summarized.
Check out 10 useful approaches for making sense of the media landscape.

Sort articles by: Article Date | Date Posted on WantToKnow.info | Importance

The Pentagon Is Planning a Drone ‘Hellscape’ to Defend Taiwan
2024-08-19, Wired
https://www.wired.com/story/china-taiwan-pentagon-drone-hellscape/

On the sidelines of the International Institute for Strategic Studies’ annual Shangri-La Dialogue in June, US Indo-Pacific Command chief Navy Admiral Samuel Paparo colorfully described the US military’s contingency plan for a Chinese invasion of Taiwan as flooding the narrow Taiwan Strait between the two countries with swarms of thousands upon thousands of drones, by land, sea, and air, to delay a Chinese attack enough for the US and its allies to muster additional military assets. “I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities,” Paparo said, “so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything.” China has a lot of drones and can make a lot more drones quickly, creating a likely advantage during a protracted conflict. This stands in contrast to American and Taiwanese forces, who do not have large inventories of drones. The Pentagon’s “hellscape” plan proposes that the US military make up for this growing gap by producing and deploying what amounts to a massive screen of autonomous drone swarms designed to confound enemy aircraft, provide guidance and targeting to allied missiles, knock out surface warships and landing craft, and generally create enough chaos to blunt (if not fully halt) a Chinese push across the Taiwan Strait. Planning a “hellscape” of hundreds of thousands of drones is one thing, but actually making it a reality is another.

Note: Learn more about warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.


How A Former Palantir Exec Built A Google-Like Surveillance Tool For The Police
2024-08-13, Forbes
https://www.forbes.com/sites/thomasbrewster/2024/08/13/how-a-former-palantir-...

Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts — any police dataset imaginable — for a match. It’s taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. “We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive,” said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). “These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging.” Peregrine technology can also enable “predictive policing,” long criticized for unfairly targeting poorer, non-white neighborhoods.

Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.


Paxton's win against Meta is a win for privacy. It's only a first step.
2024-08-12, Houston Chronicle
https://www.houstonchronicle.com/opinion/editorials/article/paxton-facebook-m...

If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.


We’re Entering an AI Price-Fixing Dystopia
2024-08-10, The Atlantic
https://www.theatlantic.com/ideas/archive/2024/08/ai-price-algorithms-realpag...

If you rent your home, there’s a good chance your landlord uses RealPage to set your monthly payment. The company describes itself as merely helping landlords set the most profitable price. But a series of lawsuits says it’s something else: an AI-enabled price-fixing conspiracy. The late Justice Antonin Scalia once called price-fixing the “supreme evil” of antitrust law. Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine. Property owners feed RealPage’s “property management software” their data, including unit prices and vacancy rates, and the algorithm—which also knows what competitors are charging—spits out a rent recommendation. If enough landlords use it, the result could look the same as a traditional price-fixing cartel: lockstep price increases instead of price competition, no secret handshake or clandestine meeting needed. Algorithmic price-fixing appears to be spreading to more and more industries. And existing laws may not be equipped to stop it. In more than 40 housing markets across the United States, 30 to 60 percent of multifamily-building units are priced using RealPage. The plaintiffs suing RealPage, including the Arizona and Washington, D.C., attorneys general, argue that this has enabled a critical mass of landlords to raise rents in concert, making an existing housing-affordability crisis even worse. The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.


Big tech firms profit from disorder. Don’t let them use these riots to push for more surveillance
2024-08-07, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/article/2024/aug/07/big-tech-disord...

The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn’t the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn’t dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.

Note: Many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


A booming industry of AI age scanners, aimed at children’s faces
2024-08-07, Washington Post
https://www.washingtonpost.com/technology/2024/08/07/face-scanning-kids-onlin...

In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child’s face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children’s school as part of a campaign called “Share to Protect.” With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as “age assurance” has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children — and everyone else — to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'” Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.

Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.


My home insurer is spying on me
2024-08-07, Business Insider
https://www.businessinsider.com/homeowners-insurance-nightmare-cancellation-s...

My insurance broker left a frantic voicemail telling me that my homeowner's insurance had lapsed. When I finally reached my insurance broker, he told me the reason Travelers revoked my policy: AI-powered drone surveillance. My finances were imperiled, it seemed, by a bad piece of code. As my broker revealed, the ominous threat that canceled my insurance was nothing more than moss. Travelers not only uses aerial photography and AI to monitor its customers' roofs, but also wrote patents on the technology — nearly 50 patents actually. And it may not be the only insurer spying from the skies. No one can use AI to know the future; you're training the technology to make guesses based on changes in roof color and grainy aerial images. But even the best AI models will get a lot of predictions wrong, especially at scale and particularly where you're trying to make guesses about the future of radically different roof designs across countless buildings in various environments. For the insurance companies designing the algorithms, that means a lot of questions about when to put a thumb on the scale in favor of, or against, the homeowner. And insurance companies will have huge incentives to choose against the homeowner every time. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we're all at risk.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.


Silicon Valley is giving off divorced dad energy
2024-08-06, Business Insider
https://www.businessinsider.com/tech-industry-divorced-dad-energy-google-micr...

Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business — like having happy customers and stable products — and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.

Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.


‘I’m afraid I can’t do that’: Should killer robots be allowed to disobey orders?
2024-08-06, Bulletin of the Atomic Scientists
https://thebulletin.org/2024/08/im-afraid-i-cant-do-that-should-killer-robots...

It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency. And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no. Human control sits at the heart of governments’ pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder AI’s capacity to faithfully execute a human’s orders could cause them to err when rejecting an order. Militaries will therefore need to either demonstrate that it’s possible to build ethical, responsible autonomous weapons that don’t say no, or show that they can engineer a safe and reliable right-to-refuse that’s compatible with the principle of always keeping a human “in the loop.” If they can’t do one or the other ... their promises of ethical and yet controllable killer robots should be treated with caution. The killer robots that countries are likely to use will only ever be as ethical as their imperfect human commanders. They would only promise a cleaner mode of warfare if those using them seek to hold themselves to a higher standard.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.


There’s no way for humanity to win an AI arms race
2024-08-04, Washington Post
https://www.washingtonpost.com/opinions/2024/08/04/sam-altman-ai-arms-race/

In 2017, hundreds of artificial intelligence experts signed the Asilomar AI Principles for how to govern artificial intelligence. I was one of them. So was OpenAI CEO Sam Altman. The signatories committed to avoiding an arms race on the grounds that “teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.” The stated goal of OpenAI is to create artificial general intelligence (AGI), a system that is as good as expert humans at most tasks. It could have significant benefits. It could also threaten millions of lives and livelihoods if not developed in a provably safe way. It could be used to commit bioterrorism, run massive cyberattacks or escalate nuclear conflict. Given these dangers, a global arms race to unleash AGI serves no one’s interests. The true power of AI lies ... in its potential to bridge divides. AI might help us identify fundamental patterns in global conflicts and human behavior, leading to more profound solutions. AI’s ability to process vast amounts of data could help identify patterns in global conflicts by suggesting novel approaches to resolution that human negotiators might overlook. Advanced natural language processing could break down communication barriers, allowing for more nuanced dialogue between nations and cultures. Predictive AI models could identify early signs of potential conflicts, allowing for preemptive diplomatic interventions.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.


Wall Street’s $2 Trillion AI Reckoning
2024-08-02, New York Magazine
https://nymag.com/intelligencer/article/wall-streets-usd2-trillion-ai-reckoni...

On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. 1.4 percent of those companies were worth more than $16 trillion, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.


The AI Search War Has Begun
2024-07-30, The Atlantic
https://www.theatlantic.com/technology/archive/2024/07/perplexity-ai-search-m...

Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored “related follow-up questions.” OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers’ websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, are a shakedown. AI is scraping publishers’ content whether they want it to or not: Media companies can be chumps or get paid.

Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It’s all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.


Texas AG wins $1.4B settlement from Facebook parent Meta over facial-capture charges
2024-07-30, NBC News
https://www.nbcnews.com/business/business-news/texas-ag-wins-1point4-billion-...

Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data, as well as running afoul of the state's deceptive trade practices act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued how it had previously used face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights," Paxton said.

Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


Google’s wrong answer to the threat of AI – stop indexing content
2024-07-20, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/article/2024/jul/20/googles-wrong-a...

Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google’s mission – “to organise the world’s information and make it universally accessible” – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world’s information. That mission has been abandoned. “Google is no longer trying to index the entire web,” writes Schmalbach. “In fact, it’s become extremely selective, refusing to index most content. This isn’t about content creators failing to meet some arbitrary standard of quality. Rather, it’s a fundamental change in how Google approaches its role as a search engine.” The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has “brand recognition”. “They might index content they perceive as truly unique,” says Schmalbach. “But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won’t index it. This can happen even if you’re a well-respected writer with a substantial readership.”

Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.


'We have A.I. landlords,' housing attorney warns of automated evictions in Columbus
2024-07-15, ABC News (Ohio Affiliate)
https://abc6onyourside.com/on-your-side/6-on-your-side/ai-landlords-attorney-...

Columbus landlords are now turning to artificial intelligence to evict tenants from their homes. [Attorney Jyoshu] Tsushima works for the Legal Aid Society of Southeast and Central Ohio and focuses on evictions. In June, nearly 2,000 evictions were filed within Franklin County Municipal Court. Tsushima said the county is on track to surpass 24,000 evictions for the year. In eviction court, he said, both property management staffers and his clients describe software that automatically evicts tenants. He said human employees don't determine who will be kicked out, but they're the ones who place the eviction notices up on doors. Hope Matfield contacted ABC6 ... after she received an eviction notice on her door at Eden of Caleb's Crossing in Reynoldsburg in May. "They're profiting off people living in hell, basically," Matfield [said]. "I had no choice. I had to make that sacrifice, do a quick move and not know where my family was going to go right away." In February, Matfield started an escrow case against her property management group, 5812 Investment Group. When Matfield missed a payment, the courts closed her case and gave the escrow funds to 5812 Investment Group. Matfield received her eviction notice that same day. The website for 5812 Investment Group indicates it uses software from RealPage. RealPage is subject to a series of lawsuits across the country due to algorithms multiple attorneys general claim cause price-fixing on rents.

Note: Read more about how tech companies are increasingly marketing smart tools to landlords for a troubling purpose: surveilling tenants to justify evictions or raise their rent. For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.


AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield
2024-07-14, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-mo...

The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza. Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare while making it even more evident how unregulated the nascent field is. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world. Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. Many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly. The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.


In Fresh Hell, American Vending Machines are Selling Bullets Using Facial Recognition
2024-07-08, Futurism
https://futurism.com/vending-machines-bullets-facial-recognition

A growing number of supermarkets in Alabama, Oklahoma, and Texas are selling bullets by way of AI-powered vending machines, as first reported by Alabama's Tuscaloosa Thread. The company behind the machines, a Texas-based venture dubbed American Rounds, claims on its website that its dystopian bullet kiosks are outfitted with "built-in AI technology" and "facial recognition software," which allegedly allow the devices to "meticulously verify the identity and age of each buyer." As showcased in a promotional video, using one is an astoundingly simple process: walk up to the kiosk, provide identification, and let a camera scan your face. If its embedded facial recognition tech says you are in fact who you say you are, the automated machine coughs up some bullets. According to American Rounds, the main objective is convenience. Its machines are accessible "24/7," its website reads, "ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines." Though officials in Tuscaloosa, where two machines have been installed, [said] that the devices are in full compliance with the Bureau of Alcohol, Tobacco, Firearms and Explosives' standards ... at least one of the devices has been taken down amid a Tuscaloosa city council investigation into its legal standing. "We have over 200 store requests for AARM [Automated Ammo Retail Machine] units covering approximately nine states currently," [American Rounds CEO Grant Magers] told Newsweek, "and that number is growing daily."

Note: Facial recognition technology is far from reliable. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence from reliable major media sources.


Microsoft’s climbdown over its creepy Recall feature shows its AI strategy is far from intelligent
2024-07-06, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/article/2024/jul/06/microsoft-recal...

Recall ... takes constant screenshots in the background while you go about your daily computer business. Microsoft’s Copilot+ machine-learning tech then scans (and “reads”) each of these screenshots in order to make a searchable database of every action performed on your computer and then stores it on the machine’s disk. “Recall is like bestowing a photographic memory on everyone who buys a Copilot+ PC,” [Microsoft marketing officer Yusuf] Mehdi said. “Anything you’ve ever seen or done, you’ll now more or less be able to find.” Charlie Stross, the sci-fi author and tech critic, called it a privacy “shit-show for any organisation that handles medical records or has a duty of legal confidentiality.” He also said: “Suddenly, every PC becomes a target for discovery during legal proceedings. Lawyers can subpoena your Recall database and search it, no longer being limited to email but being able to search for terms that came up in Teams or Slack or Signal messages, and potentially verbally via Zoom or Skype if speech-to-text is included in Recall data.” Faced with this pushback, Microsoft [announced] that Recall would be made opt-in instead of on by default, and also introduced extra security precautions – only producing results from Recall after user authentication, for example, and never decrypting data stored by the tool until after a search query. The only good news for Microsoft here is that it seems to have belatedly acknowledged that Recall has been a fiasco.

Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


Silicon Valley Rushes Toward Automated Warfare That Deeply Incorporates AI
2024-06-25, Truthout
https://truthout.org/articles/silicon-valley-rushes-toward-automated-warfare-...

Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). This surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made. Untold billions of dollars of private money are now pouring into firms seeking to expand the frontiers of techno-war: $125 billion over the past four years, according to the New York Times. Whatever the numbers, the tech sector and its financial backers sense that there are massive amounts of money to be made in next-generation weaponry and aren’t about to let anyone stand in their way. Meanwhile, an investigation by Eric Lipton of the New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. Former Google CEO Eric Schmidt [has] become a virtual philosopher king when it comes to how new technology will reshape society. [Schmidt] laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with none other than the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. AI is coming, and its impact on our lives, whether in war or peace, is likely to stagger the imagination.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.


FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network
2024-06-19, Forbes
https://www.forbes.com/sites/thomasbrewster/2024/06/19/fedex-police-help-cops...

Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. The shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. Some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus. Such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Jay Stanley ... at the American Civil Liberties Union, said it was “profoundly disconcerting” that FedEx was exchanging data with law enforcement as part of Flock’s “mass surveillance” system. “It raises questions about why a private company ... would have privileged access to data that normally is only available to law enforcement,” he said. Forbes previously found that [Flock] had itself likely broken the law across various states by installing cameras without the right permits.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.


Important Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.