AI Media Articles
Artificial Intelligence (AI) is an emerging technology with great promise and great potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or to guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that its platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as a sarcastic remark being used to deny you a job or admission to a school. Data breaches happen all the time, and any one of them can lead to identity theft or to other personal information becoming public.
Note: Learn about Proctorio, AI-powered anti-cheating surveillance software used in schools to monitor children through webcams—conducting “desk scans,” “face detection,” and “gaze detection” to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time.” For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
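To make the mechanism in the excerpt above concrete, here is a minimal sketch in Python of how “algorithmic personalized pricing” can work: the same item is quoted at different prices depending on what the seller has inferred about each shopper. Every feature name, weight, and number below is invented for illustration and does not describe any actual company's model.

```python
# Invented illustration of personalized pricing: the quoted price
# depends on what the seller has inferred about the shopper.
# All feature names and weights here are hypothetical.

BASE_PRICE = 100.00

def personalized_price(profile: dict) -> float:
    """Quote a price for one shopper from their inferred profile."""
    price = BASE_PRICE
    # Shoppers inferred to be price-insensitive get quoted more.
    if profile.get("device") == "new_flagship_phone":
        price *= 1.10
    # Urgency signals (repeat views of the same item) raise the quote.
    price *= 1.0 + 0.02 * min(profile.get("views_of_item", 0), 5)
    # Known bargain hunters get a discount to close the sale.
    if profile.get("abandoned_cart_recently"):
        price *= 0.93
    return round(price, 2)

print(personalized_price({"device": "new_flagship_phone", "views_of_item": 4}))  # 118.8
print(personalized_price({"abandoned_cart_recently": True}))                     # 93.0
```

Real systems replace these hand-written rules with machine-learning models trained on thousands of data points per shopper, but the underlying incentive is the same one the FTC is probing: quote each person the most they are likely to pay.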
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data for voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data being collected reflects intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to pay a third-party data broker to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
On the sidelines of the International Institute for Strategic Studies’ annual Shangri-La Dialogue in June, US Indo-Pacific Command chief Navy Admiral Samuel Paparo colorfully described the US military’s contingency plan for a Chinese invasion of Taiwan as flooding the narrow Taiwan Strait between the two countries with swarms of thousands upon thousands of drones, by land, sea, and air, to delay a Chinese attack enough for the US and its allies to muster additional military assets. “I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities,” Paparo said, “so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything.” China has a lot of drones and can make a lot more drones quickly, creating a likely advantage during a protracted conflict. This stands in contrast to American and Taiwanese forces, who do not have large inventories of drones. The Pentagon’s “hellscape” plan proposes that the US military make up for this growing gap by producing and deploying what amounts to a massive screen of autonomous drone swarms designed to confound enemy aircraft, provide guidance and targeting to allied missiles, knock out surface warships and landing craft, and generally create enough chaos to blunt (if not fully halt) a Chinese push across the Taiwan Strait. Planning a “hellscape” of hundreds of thousands of drones is one thing, but actually making it a reality is another.
Note: Learn more about warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts — any police dataset imaginable — for a match. It’s taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. “We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive,” said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). “These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging.” Peregrine technology can also enable “predictive policing,” long criticized for unfairly targeting poorer, non-white neighborhoods.
Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
If you rent your home, there’s a good chance your landlord uses RealPage to set your monthly payment. The company describes itself as merely helping landlords set the most profitable price. But a series of lawsuits says it’s something else: an AI-enabled price-fixing conspiracy. The late Justice Antonin Scalia once called price-fixing the “supreme evil” of antitrust law. Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine. Property owners feed RealPage’s “property management software” their data, including unit prices and vacancy rates, and the algorithm—which also knows what competitors are charging—spits out a rent recommendation. If enough landlords use it, the result could look the same as a traditional price-fixing cartel: lockstep price increases instead of price competition, no secret handshake or clandestine meeting needed. Algorithmic price-fixing appears to be spreading to more and more industries. And existing laws may not be equipped to stop it. In more than 40 housing markets across the United States, 30 to 60 percent of multifamily-building units are priced using RealPage. The plaintiffs suing RealPage, including the Arizona and Washington, D.C., attorneys general, argue that this has enabled a critical mass of landlords to raise rents in concert, making an existing housing-affordability crisis even worse. The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
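The core claim in the excerpt above, that a shared algorithm can produce cartel-like behavior without any explicit agreement, can be illustrated with a small simulation. This is a toy model with invented numbers, not RealPage's actual software: each landlord independently asks the same recommendation function for a rent, and because the function anchors on competitors' current prices, the recommendations climb in lockstep.

```python
# Toy model (all numbers invented) of shared algorithmic pricing:
# each landlord independently asks the same software for a rent
# recommendation, and the recommendation anchors on competitors'
# current rents. Prices converge and climb with no explicit agreement.

def recommend_rent(competitor_rents: list[float], vacancy_rate: float) -> float:
    """Recommend a rent anchored on competitors' current average."""
    market_avg = sum(competitor_rents) / len(competitor_rents)
    # In a tight market, recommend pricing slightly above the average.
    markup = 1.03 if vacancy_rate < 0.05 else 0.99
    return round(market_avg * markup, 2)

rents = [1500.0, 1520.0, 1480.0, 1510.0]  # four landlords, same software
for month in range(1, 4):
    # Each landlord adopts the recommendation computed from the others' rents.
    rents = [recommend_rent(rents[:i] + rents[i + 1:], vacancy_rate=0.03)
             for i in range(len(rents))]
    print(f"month {month}: {rents}")
```

Each landlord only ever talks to the software, yet after a few rounds all four rents have narrowed toward one another and risen together, which is the lockstep dynamic the plaintiffs describe.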
The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn’t the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn’t dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
Note: Police in many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child’s face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children’s school as part of a campaign called “Share to Protect.” With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as “age assurance” has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children — and everyone else — to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.’” Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.
Note: Learn about Proctorio, AI-powered anti-cheating surveillance software used in schools to monitor children through webcams—conducting “desk scans,” “face detection,” and “gaze detection” to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time.” For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
My insurance broker left a frantic voicemail telling me that my homeowner's insurance had lapsed. When I finally reached him, he told me the reason Travelers revoked my policy: AI-powered drone surveillance. My finances were imperiled, it seemed, by a bad piece of code. As my broker revealed, the ominous threat that canceled my insurance was nothing more than moss. Travelers not only uses aerial photography and AI to monitor its customers' roofs, but has also filed nearly 50 patents on the technology. And it may not be the only insurer spying from the skies. No one can use AI to know the future; you're training the technology to make guesses based on changes in roof color and grainy aerial images. But even the best AI models will get a lot of predictions wrong, especially at scale and particularly where you're trying to make guesses about the future of radically different roof designs across countless buildings in various environments. For the insurance companies designing the algorithms, that means a lot of questions about when to put a thumb on the scale in favor of, or against, the homeowner. And insurance companies will have huge incentives to choose against the homeowner every time. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we're all at risk.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
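A small sketch can illustrate the "thumb on the scale" point in the excerpt above: a model assigns each roof a risk score, and the insurer alone decides the cutoff above which a policy gets flagged. All labels and scores below are invented for illustration.

```python
# Invented illustration of the "thumb on the scale" problem:
# a model scores each roof's "risk" between 0 and 1, and the insurer
# chooses the cutoff above which a policy is flagged for cancellation.
# Lowering the cutoff catches more truly bad roofs but also flags
# far more harmless ones (moss, shadows, stains).

roof_scores = [
    ("bad roof", 0.91), ("bad roof", 0.72),
    ("mossy but sound", 0.68), ("stained but sound", 0.55),
    ("shadowed but sound", 0.52), ("clean roof", 0.12),
]

for cutoff in (0.9, 0.7, 0.5):
    flagged = [label for label, score in roof_scores if score >= cutoff]
    wrongly = sum(1 for label in flagged if "sound" in label)
    print(f"cutoff {cutoff}: {len(flagged)} flagged, {wrongly} of them sound roofs")
```

Moving one number from 0.7 to 0.5 more than doubles the flags, and every additional flag lands on a sound roof; since the homeowner never sees the score or the cutoff, there is no way to contest the choice.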
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business — like having happy customers and stable products — and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or to provide them with a better service, but instead to make it hard to leave, to trick customers into buying things, or to eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
In 2017, hundreds of artificial intelligence experts signed the Asilomar AI Principles for how to govern artificial intelligence. I was one of them. So was OpenAI CEO Sam Altman. The signatories committed to avoiding an arms race on the grounds that “teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.” The stated goal of OpenAI is to create artificial general intelligence (AGI), a system that is as good as expert humans at most tasks. It could have significant benefits. It could also threaten millions of lives and livelihoods if not developed in a provably safe way. It could be used to commit bioterrorism, run massive cyberattacks or escalate nuclear conflict. Given these dangers, a global arms race to unleash AGI serves no one’s interests. The true power of AI lies ... in its potential to bridge divides. AI might help us identify fundamental patterns in global conflicts and human behavior, leading to more profound solutions. Its ability to process vast amounts of data could surface patterns in those conflicts and suggest novel approaches to resolution that human negotiators might overlook. Advanced natural language processing could break down communication barriers, allowing for more nuanced dialogue between nations and cultures. Predictive AI models could identify early signs of potential conflicts, allowing for preemptive diplomatic interventions.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. Just 1.4 percent of those companies were worth more than $16 trillion, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored “related follow-up questions.” OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers’ websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, is a shakedown. AI is scraping publishers’ content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It’s all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data, as well as running afoul of the state's deceptive trade practices act. “Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted,” he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued its use of face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights,” Paxton said.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google’s mission – “to organise the world’s information and make it universally accessible” – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world’s information. That mission has been abandoned. “Google is no longer trying to index the entire web,” writes Schmalbach. “In fact, it’s become extremely selective, refusing to index most content. This isn’t about content creators failing to meet some arbitrary standard of quality. Rather, it’s a fundamental change in how Google approaches its role as a search engine.” The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has “brand recognition”. “They might index content they perceive as truly unique,” says Schmalbach. “But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won’t index it. This can happen even if you’re a well-respected writer with a substantial readership.”
Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Columbus landlords are now turning to artificial intelligence to evict tenants from their homes. [Attorney Jyoshu] Tsushima works for the Legal Aid Society of Southeast and Central Ohio and focuses on evictions. In June, nearly 2,000 evictions were filed within Franklin County Municipal Court. Tsushima said the county is on track to surpass 24,000 evictions for the year. In eviction court, he said, both property management staffers and his clients describe software that automatically initiates evictions. Human employees don't determine who will be kicked out, but they're the ones who place the eviction notices on doors. Hope Matfield contacted ABC6 ... after she received an eviction notice on her door at Eden of Caleb's Crossing in Reynoldsburg in May. "They're profiting off people living in hell, basically," Matfield [said]. "I had no choice. I had to make that sacrifice, do a quick move and not know where my family was going to go right away." In February, Matfield started an escrow case against her property management group, 5812 Investment Group. When Matfield missed a payment, the courts closed her case and gave the escrow funds to 5812 Investment Group. Matfield received her eviction notice that same day. The website for 5812 Investment Group indicates it uses software from RealPage. RealPage faces a series of lawsuits across the country over algorithms that multiple attorneys general claim enable price-fixing on rents.
Note: Read more about how tech companies are increasingly marketing smart tools to landlords for a troubling purpose: surveilling tenants to justify evictions or raise their rent. For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza. Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare while making it even more evident how unregulated the nascent field is. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world. Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. Many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly. The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
A growing number of supermarkets in Alabama, Oklahoma, and Texas are selling bullets by way of AI-powered vending machines, as first reported by Alabama's Tuscaloosa Thread. The company behind the machines, a Texas-based venture dubbed American Rounds, claims on its website that its dystopian bullet kiosks are outfitted with "built-in AI technology" and "facial recognition software," which allegedly allow the devices to "meticulously verify the identity and age of each buyer." As showcased in a promotional video, using one is an astoundingly simple process: walk up to the kiosk, provide identification, and let a camera scan your face. If its embedded facial recognition tech says you are in fact who you say you are, the automated machine coughs up some bullets. According to American Rounds, the main objective is convenience. Its machines are accessible "24/7," its website reads, "ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines." Though officials in Tuscaloosa, where two machines have been installed, [said] that the devices are in full compliance with the Bureau of Alcohol, Tobacco, Firearms and Explosives' standards ... at least one of the devices has been taken down amid a Tuscaloosa city council investigation into its legal standing. "We have over 200 store requests for AARM [Automated Ammo Retail Machine] units covering approximately nine states currently," [American Rounds CEO Grant Magers] told Newsweek, "and that number is growing daily."
Note: Facial recognition technology is far from reliable. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence from reliable major media sources.
Important Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.