
AI News Stories

We worry AI will "eliminate jobs" and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. – Kenan Malik


Artificial Intelligence (AI) is an emerging technology with great promise and great potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.

Explore our comprehensive news index on a wide variety of fascinating topics.
Explore the top 20 most revealing news media articles we've summarized.
Check out 10 useful approaches for making sense of the media landscape.


Microsoft’s climbdown over its creepy Recall feature shows its AI strategy is far from intelligent
2024-07-06, The Guardian (One of the UK's Leading Newspapers)
Posted: 2024-07-16 13:20:50
https://www.theguardian.com/commentisfree/article/2024/jul/06/microsoft-recal...

Recall ... takes constant screenshots in the background while you go about your daily computer business. Microsoft’s Copilot+ machine-learning tech then scans (and “reads”) each of these screenshots in order to make a searchable database of every action performed on your computer and then stores it on the machine’s disk. “Recall is like bestowing a photographic memory on everyone who buys a Copilot+ PC,” [Microsoft marketing officer Yusuf] Mehdi said. “Anything you’ve ever seen or done, you’ll now more or less be able to find.” Charlie Stross, the sci-fi author and tech critic, called it a privacy “shit-show for any organisation that handles medical records or has a duty of legal confidentiality.” He also said: “Suddenly, every PC becomes a target for discovery during legal proceedings. Lawyers can subpoena your Recall database and search it, no longer being limited to email but being able to search for terms that came up in Teams or Slack or Signal messages, and potentially verbally via Zoom or Skype if speech-to-text is included in Recall data.” Faced with this pushback, Microsoft [announced] that Recall would be made opt-in instead of on by default, and also introduced extra security precautions – only producing results from Recall after user authentication, for example, and never decrypting data stored by the tool until after a search query. The only good news for Microsoft here is that it seems to have belatedly acknowledged that Recall has been a fiasco.

Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


In Fresh Hell, American Vending Machines are Selling Bullets Using Facial Recognition
2024-07-08, Futurism
Posted: 2024-07-16 13:06:53
https://futurism.com/vending-machines-bullets-facial-recognition

A growing number of supermarkets in Alabama, Oklahoma, and Texas are selling bullets by way of AI-powered vending machines, as first reported by Alabama's Tuscaloosa Thread. The company behind the machines, a Texas-based venture dubbed American Rounds, claims on its website that its dystopian bullet kiosks are outfitted with "built-in AI technology" and "facial recognition software," which allegedly allow the devices to "meticulously verify the identity and age of each buyer." As showcased in a promotional video, using one is an astoundingly simple process: walk up to the kiosk, provide identification, and let a camera scan your face. If its embedded facial recognition tech says you are in fact who you say you are, the automated machine coughs up some bullets. According to American Rounds, the main objective is convenience. Its machines are accessible "24/7," its website reads, "ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines." Though officials in Tuscaloosa, where two machines have been installed, [said] that the devices are in full compliance with the Bureau of Alcohol, Tobacco, Firearms and Explosives' standards ... at least one of the devices has been taken down amid a Tuscaloosa city council investigation into its legal standing. "We have over 200 store requests for AARM [Automated Ammo Retail Machine] units covering approximately nine states currently," [American Rounds CEO Grant Magers] told Newsweek, "and that number is growing daily."

Note: Facial recognition technology is far from reliable. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence from reliable major media sources.


FedEx’s Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network
2024-06-19, Forbes
Posted: 2024-07-11 15:06:38
https://www.forbes.com/sites/thomasbrewster/2024/06/19/fedex-police-help-cops...

Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. The shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock surveillance feeds to law enforcement, an arrangement that Flock has with at least four multi-billion dollar private companies. Some local police departments are also sharing their Flock feeds with FedEx — a rare instance of a private company availing itself of a police surveillance apparatus. Such close collaboration has the potential to dramatically expand Flock’s car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Jay Stanley ... at the American Civil Liberties Union, said it was “profoundly disconcerting” that FedEx was exchanging data with law enforcement as part of Flock’s “mass surveillance” system. “It raises questions about why a private company ... would have privileged access to data that normally is only available to law enforcement,” he said. Forbes previously found that [Flock] had itself likely broken the law across various states by installing cameras without the right permits.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.


Sure, Google’s AI overviews could be useful – if you like eating rocks
2024-06-01, The Guardian (One of the UK's Leading Newspapers)
Posted: 2024-07-02 11:31:43
https://www.theguardian.com/commentisfree/article/2024/jun/01/sure-googles-ai...

Once upon a time, Google was great. They intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world. The launch of generative AIs such as ChatGPT clearly took Google by surprise, which is odd given that the company had for years been working on the technology. The question became: how will Google respond to the threat? Now we know: it’s something called AI overviews, in which an increasing number of search queries are initially answered by AI-generated responses. Users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day. There’s a quaint air of desperation in the publicity for this sudden pivot from search engine to answerbot. The really big question about the pivot, though, is what its systemic impact on the link economy will be. Already, the news is not great. Gartner, a market-research consultancy, for example, predicts that search engine volume will drop 25% by 2026 owing to AI chatbots and other virtual agents.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.


Silicon Valley Rushes Toward Automated Warfare That Deeply Incorporates AI
2024-06-25, Truthout
Posted: 2024-07-02 11:19:24
https://truthout.org/articles/silicon-valley-rushes-toward-automated-warfare-...

Venture capital and military startup firms in Silicon Valley have begun aggressively selling a version of automated warfare that will deeply incorporate artificial intelligence (AI). This surge of support for emerging military technologies is driven by the ultimate rationale of the military-industrial complex: vast sums of money to be made. Untold billions of dollars of private money are now pouring into firms seeking to expand the frontiers of techno-war – $125 billion over the past four years, according to the New York Times. Whatever the numbers, the tech sector and its financial backers sense that there are massive amounts of money to be made in next-generation weaponry and aren’t about to let anyone stand in their way. Meanwhile, an investigation by Eric Lipton of the New York Times found that venture capitalists and startup firms already pushing the pace on AI-driven warfare are also busily hiring ex-military and Pentagon officials to do their bidding. Former Google CEO Eric Schmidt [has] become a virtual philosopher king when it comes to how new technology will reshape society. [Schmidt] laid out his views in a 2021 book modestly entitled The Age of AI and Our Human Future, coauthored with none other than the late Henry Kissinger. Schmidt is aware of the potential perils of AI, but he’s also at the center of efforts to promote its military applications. AI is coming, and its impact on our lives, whether in war or peace, is likely to stagger the imagination.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.


OpenAI adds former NSA chief to its board
2024-06-13, CNBC News
Posted: 2024-06-24 22:34:23
https://www.cnbc.com/2024/06/13/openai-adds-former-nsa-chief-to-its-board-pau...

OpenAI on Thursday announced its newest board member: Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency. Nakasone was the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service. The company said Nakasone will also join OpenAI’s recently created Safety and Security Committee. The committee is spending 90 days evaluating the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said. OpenAI is bolstering its board and its C-suite as its large language models gain importance across the tech sector and as competition rapidly emerges in the burgeoning generative artificial intelligence market. While the company has been in hyper-growth mode since late 2022, when it launched ChatGPT, OpenAI has also been riddled with controversy and high-level employee departures. The company said Sarah Friar, previously CEO of Nextdoor and finance chief at Square, is joining as chief financial officer. OpenAI also hired Kevin Weil, an ex-president at Planet Labs, as its new chief product officer. Weil was previously a senior vice president at Twitter and a vice president at Facebook and Instagram. Weil’s product team will focus on “applying our research to products and services that benefit consumers, developers, and businesses,” the company wrote.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and intelligence agency corruption from reliable major media sources.


Edward Snowden Releases New Message: 'You Have Been Warned'
2024-06-14, Newsweek
Posted: 2024-06-24 22:31:59
https://www.newsweek.com/edward-snowden-open-ai-nsa-warning-1913173

Edward Snowden wrote on social media to his nearly 6 million followers, "Do not ever trust @OpenAI ... You have been warned," following the appointment of retired U.S. Army General Paul Nakasone to the board of the artificial intelligence technology company. Snowden, a former National Security Agency (NSA) subcontractor, was charged with espionage by the Justice Department in 2013 after leaking thousands of top-secret records, exposing the agency's surveillance of private citizens' information. In a Friday morning post on X, formerly Twitter, Snowden reshared a post providing information on OpenAI's newest board member. Nakasone is a former NSA director, and the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service. In [a] statement, Nakasone said, "OpenAI's dedication to its mission aligns closely with my own values and experience in public service. I look forward to contributing to OpenAI's efforts to ensure artificial general intelligence is safe and beneficial to people around the world." Snowden wrote in an X post, "They've gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc.) There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth." Snowden's post has received widespread attention, with nearly 2 million views, 43,500 likes, 16,000 reposts and around 1,000 comments as of Friday afternoon.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and intelligence agency corruption from reliable major media sources.


Report Sounds Alarm Over Growing Role of Big Tech in US Military-Industrial Complex
2024-04-17, Common Dreams
Posted: 2024-06-24 22:29:37
https://www.commondreams.org/news/military-industrial-complex-big-tech

The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California—a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday. "Although much of the Pentagon's $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms," [report author Roberto J.] González wrote. "Defense Department officials have ... awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle." González found that the five largest military contracts to major tech firms between 2018 and 2022 "had contract ceilings totaling at least $53 billion combined." There's also the danger of a "revolving door" between Silicon Valley and the Pentagon as many senior government officials "are now gravitating towards defense-related VC or private equity firms as executives or advisers after they retire from public service." "Members of the armed services and civilians are in danger of being harmed by inadequately tested—or algorithmically flawed—AI-enabled technologies. By nature, VC firms seek rapid returns on investment by quickly bringing a product to market, and then 'cashing out' by either selling the startup or going public. This means that VC-funded defense tech companies are under pressure to produce prototypes quickly and then move to production before adequate testing has occurred."

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.


Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried
2024-05-24, Associated Press
Posted: 2024-06-11 12:52:47
https://apnews.com/article/google-ai-overviews-96e763ea2a6203978f581ca9c10f1b07

Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence - which may or may not be correct. “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine. It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results. It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination. Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing. Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.

Note: Read more about the potential dangers of Google's new AI tool. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.


News Publishers Try To Sic the Government on Google AI
2024-06-03, Reason
Posted: 2024-06-11 12:50:50
https://reason.com/2024/06/03/news-publishers-try-to-sic-the-government-on-go...

"Agency intervention is necessary to stop the existential threat Google poses to original content creators," the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority "to stop Google's latest expansion of AI Overviews," a search engine innovation that Google has been rolling out recently. Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. Overviews give "comprehensive answers without the user ever having to click to another page," the The New York Times warns. And this worries websites that rely on Google to drive much of their traffic. "It potentially chokes off the original creators of the content," Frank Pine, executive editor of MediaNews Group and Tribune Publishing (owner of 68 daily newspapers), told the Times. Media websites have gotten used to Google searches sending them a certain amount of traffic. But that doesn't mean Google is obligated to continue sending them that same amount of traffic forever. It is possible that Google's pivot to AI was hastened by how hostile news media has been to tech companies. We've seen publishers demanding that search engines and social platforms pay them for the privilege of sharing news links, even though this arrangement benefits publications (arguably more than it does tech companies) by driving traffic.

Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.


'I was misidentified as shoplifter by facial recognition tech'
2024-05-25, BBC News
Posted: 2024-06-11 12:49:06
https://www.bbc.com/news/technology-69055945

Sara needed some chocolate - she had had one of those days - so wandered into a Home Bargains store. "Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'." Sara ... was wrongly accused after being flagged by a facial-recognition system called Facewatch. She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology. Facewatch later wrote to Sara and acknowledged it had made an error. Facewatch is used in numerous stores in the UK. It's not just retailers who are turning to the technology. On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech. 192 arrests have been made so far this year as a result of it. But civil liberty groups are worried that its accuracy is yet to be fully established, and point to cases such as Shaun Thompson's. Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge. Within a few seconds, he was approached by police and told he was a wanted man. But it was a case of mistaken identity. "It felt intrusive ... I was treated guilty until proven innocent," he says. Silkie Carlo, director of Big Brother Watch, has filmed the police on numerous facial-recognition deployments. She says that anyone's face who is scanned is effectively part of a digital police line-up.

Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.


Are your kids being spied on? The rise of anti-cheating software in US schools
2024-04-18, The Guardian (One of the UK's Leading Newspapers)
Posted: 2024-05-26 20:43:21
https://www.theguardian.com/education/2024/apr/18/us-schools-anti-cheating-so...

In the middle of the night, students at Utah’s Kings Peak high school are wide awake – taking mandatory exams. Their every movement is captured on their computer’s webcam and scrutinized by Proctorio, a surveillance company that uses artificial intelligence. Proctorio software conducts “desk scans” in an effort to catch test-takers who turn to “unauthorized resources”, “face detection” technology to ensure there isn’t anybody else in the room to help, and “gaze detection” to spot anybody “looking away from the screen for an extended period of time”. Proctorio then provides visual and audio records to Kings Peak teachers, with the algorithm calling particular attention to pupils whose behaviors during the test flagged them as possibly engaging in academic dishonesty. Such remote proctoring tools grew exponentially during the pandemic, particularly at US colleges and universities. K-12 schools’ use of remote proctoring tools, however, has largely gone under the radar. K-12 schools nationwide – and online-only programs in particular – continue to use tools from digital proctoring companies on students ... as young as kindergarten-aged. Civil rights activists, who contend AI proctoring tools fail to work as intended, harbor biases and run afoul of students’ constitutional protections, said the privacy and security concerns are particularly salient for young children and teens, who may not be fully aware of the monitoring or its implications. One 2021 study found that Proctorio failed to detect test-takers who had been instructed to cheat. Researchers concluded the software was “best compared to taking a placebo: it has some positive influence, not because it works but because people believe that it works, or that it might work.”

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.


These cities bar facial recognition tech. Police still found ways to access it.
2024-05-18, Washington Post
Posted: 2024-05-26 20:41:20
https://www.washingtonpost.com/business/2024/05/18/facial-recognition-law-enf...

As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs. In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned. SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them. Facial recognition ... technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed. In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement.

Note: Crime is increasing in many cities, leading to law enforcement agencies appropriately working to maintain public safety. Yet far too often, social justice takes a backseat while those in authority violate human rights. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and artificial intelligence from reliable major media sources.


I tried the new Google. Its answers are worse.
2024-04-01, Washington Post
Posted: 2024-05-13 18:58:00
https://www.washingtonpost.com/technology/2024/04/01/new-ai-google-search-sge/

Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.” Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.” Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted in to the test. To give us answers to everything, Google’s AI has to decide which sources are reliable. I’m not very confident about its judgment. Remember our bonkers result on Zuckerberg’s net worth? A professional researcher — and also regular old Google — might suggest checking the billionaires list from Forbes. Google’s AI answer relied on a very weird ZipRecruiter page for “Mark Zuckerberg Jobs,” a thing that does not exist. The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, [and] delivers out-of-date information. This test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI technology from reliable major media sources.


Palmer Luckey says Anduril is working on AI weapons that 'give us the ability to swiftly win any war'
2024-03-28, Business Insider
Posted: 2024-04-08 23:15:46
https://www.businessinsider.in/tech/news/palmer-luckey-says-anduril-is-workin...

A Silicon Valley defense tech startup is working on products that could have as great an impact on warfare as the atomic bomb, its founder Palmer Luckey said. "We want to build the capabilities that give us the ability to swiftly win any war we are forced to enter," he [said]. The Anduril founder didn't elaborate on what impact AI weaponry would have. But asked if it would be as decisive as the atomic bomb to the outcome of World War II he replied: "We have ideas for what they are. We are working on them." In 2022, Anduril won a contract worth almost $1 billion with the Special Operations Command to support its counter-unmanned systems. Anduril's products include autonomous sentry towers along the Mexican border [and] Altius-600M attack drones supplied to Ukraine. All of Anduril's tech operates autonomously and runs on its AI platform called Lattice that can easily be updated. The success of Anduril has given hope to other smaller players aiming to break into the defense sector. As an escalating number of global conflicts has increased demand for AI-driven weaponry, venture capitalists have put more than $100 billion into defense tech since 2021, according to Pitchbook data. The rising demand has sparked a fresh wave of startups lining up to compete with industry "primes" such as Lockheed Martin and RTX (formerly known as Raytheon) for a slice of the $842 billion US defense budget.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corruption in the military and in the corporate world from reliable major media sources.


Deep underground, robotic teamwork saves the day
2023-07-20, Knowable Magazine
Posted: 2024-04-08 22:59:37
https://knowablemagazine.org/content/article/technology/2023/deep-underground...

When a Manhattan parking garage collapsed in April this year, rescuers were reluctant to stay in the damaged building, fearing further danger. So they used a combination of flying drones and a doglike walking robot to inspect the damage, look for survivors and make sure the site was safe for human rescuers to return. Soon, rescuers may be able to call on a much more sophisticated robotic search-and-rescue response. Researchers are developing teams of flying, walking and rolling robots that can cooperate to explore areas that no one robot could navigate on its own. And they are giving robots the ability to communicate with one another and make many of their own decisions independent of their human controller. Such teams of robots could be useful in other challenging environments like caves or mines where it can be difficult for rescuers to find and reach survivors. In cities, collapsed buildings and underground sites such as subways or utility tunnels often have hazardous areas where human rescuers can’t be sure of the dangers. As robots become better, teams of them may one day be able to go into a hazardous disaster site, locate survivors and report back to their human operators with a minimum of supervision. “More work ... needs to be done,” [roboticist Viktor] Orekhov says. “But at the same time, we’ve seen the ability of the teams advanced so rapidly that even now, with their current capabilities, they’re able to make a significant difference in real-life environments.”

Note: Explore more positive stories like this in our comprehensive inspiring news articles archive focused on solutions and bridging divides.


Elon Musk v OpenAI: tech giants are inciting existential fears to evade scrutiny
2024-03-10, The Guardian (One of the UK's Leading Newspapers)
Posted: 2024-04-01 19:51:56
https://www.theguardian.com/commentisfree/2024/mar/10/ai-wont-destroy-us-but-...

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”. Musk, who stepped down from OpenAI’s board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”. In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.

Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.


Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
2024-03-06, Yahoo News
Posted: 2024-03-18 19:33:53
https://finance.yahoo.com/news/emotion-tracking-ai-job-workers-133506859.html

Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, promising to detect and predict how someone is feeling. Over 50% of large employers in the U.S. use emotion AI aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice. We wondered what workers think about these technologies. My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey. 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions. Despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.” Participants ... said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace.

Note: The above article was written by Nazanin Andalibi at the University of Michigan. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.


The AI firm that conducted ‘state surveillance’ of your social media posts
2023-06-03, The Telegraph (One of the UK's Leading Newspapers)
Posted: 2024-02-26 13:49:37
https://www.telegraph.co.uk/news/2023/06/03/logically-ai-firm-social-media-po...

An industrial estate in Yorkshire is an unlikely location for ... an artificial intelligence (AI) company used by the Government to monitor people’s posts on social media. Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the Government terms “disinformation” – false information deliberately seeded online – and “misinformation”, which is false information that has been spread inadvertently. It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms”, using AI to identify those that are potentially problematic. It has a £1.2 million deal with the Department for Culture, Media and Sport (DCMS), as well as another worth up to £1.4 million with the Department of Health and Social Care to monitor threats to high-profile individuals within the vaccine service. It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see. A joint press release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue. “When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.

Note: Read more about how NewsGuard, a for-profit company, works closely with government agencies and major corporate advertisers to suppress dissenting views online. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and media manipulation from reliable sources.


‘A privacy nightmare’: the $400m surveillance package inside the US immigration bill
2024-02-06, The Guardian (One of the UK's Leading Newspapers)
Posted: 2024-02-12 19:31:26
https://www.theguardian.com/us-news/2024/feb/06/us-immigration-bill-mexico-bo...

The $118bn bipartisan immigration bill that the US Senate introduced on Sunday is already facing steep opposition. The 370-page measure, which also would provide additional aid to Israel and Ukraine, has drawn the ire of both Democrats and Republicans over its proposed asylum and border laws. But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools. The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol. The bill describes autonomous surveillance towers as ones that “utilize sensors, onboard computing, and artificial intelligence to identify items of interest that would otherwise be manually identified by personnel”. The rest of the funding for border surveillance ... includes $47.5m for mobile video surveillance systems and drones and $25m for “familial DNA testing”. The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats. As of early January, CBP had deployed 396 surveillance towers along the US-Mexico border, according to the Electronic Frontier Foundation (EFF).

Note: Read more about the secret history of facial recognition technology and undeniable evidence indicating these tools do much more harm than good. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.