
AI Media Articles

We worry AI will "eliminate jobs" and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. – Kenan Malik


Artificial Intelligence (AI) is an emerging technology with great promise and great potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.

Explore our comprehensive news index on a wide variety of fascinating topics.
Explore the top 20 most revealing news media articles we've summarized.
Check out 10 useful approaches for making sense of the media landscape.


These cities bar facial recognition tech. Police still found ways to access it.
2024-05-18, Washington Post
https://www.washingtonpost.com/business/2024/05/18/facial-recognition-law-enf...

As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs. In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned. SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them. Facial recognition ... technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed. In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement.

Note: Crime is increasing in many cities, and law enforcement agencies are appropriately working to maintain public safety. Yet far too often, social justice takes a backseat while those in authority violate human rights. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and artificial intelligence from reliable major media sources.


Are your kids being spied on? The rise of anti-cheating software in US schools
2024-04-18, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/education/2024/apr/18/us-schools-anti-cheating-so...

In the middle of the night, students at Utah’s Kings Peak high school are wide awake – taking mandatory exams. Their every movement is captured on their computer’s webcam and scrutinized by Proctorio, a surveillance company that uses artificial intelligence. Proctorio software conducts “desk scans” in an effort to catch test-takers who turn to “unauthorized resources”, “face detection” technology to ensure there isn’t anybody else in the room to help and “gaze detection” to spot anybody “looking away from the screen for an extended period of time”. Proctorio then provides visual and audio records to Kings Peak teachers, with the algorithm calling particular attention to pupils whose behaviors during the test flagged them as possibly engaging in academic dishonesty. Such remote proctoring tools grew exponentially during the pandemic, particularly at US colleges and universities. K-12 schools’ use of remote proctoring tools, however, has largely gone under the radar. K-12 schools nationwide – and online-only programs in particular – continue to use tools from digital proctoring companies on students ... as young as kindergarten-aged. Civil rights activists, who contend AI proctoring tools fail to work as intended, harbor biases and run afoul of students’ constitutional protections, said the privacy and security concerns are particularly salient for young children and teens, who may not be fully aware of the monitoring or its implications. One 2021 study found that Proctorio failed to detect test-takers who had been instructed to cheat. Researchers concluded the software was “best compared to taking a placebo: it has some positive influence, not because it works but because people believe that it works, or that it might work.”

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.


I tried the new Google. Its answers are worse.
2024-04-01, Washington Post
https://www.washingtonpost.com/technology/2024/04/01/new-ai-google-search-sge/

Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.” Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.” Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted in to the test. To give us answers to everything, Google’s AI has to decide which sources are reliable. I’m not very confident about its judgment. Remember our bonkers result on Zuckerberg’s net worth? A professional researcher — and also regular old Google — might suggest checking the billionaires list from Forbes. Google’s AI answer relied on a very weird ZipRecruiter page for “Mark Zuckerberg Jobs,” a thing that does not exist. The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, [and] delivers out-of-date information. This test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI technology from reliable major media sources.


Palmer Luckey says Anduril is working on AI weapons that 'give us the ability to swiftly win any war'
2024-03-28, Business Insider
https://www.businessinsider.in/tech/news/palmer-luckey-says-anduril-is-workin...

A Silicon Valley defense tech startup is working on products that could have as great an impact on warfare as the atomic bomb, its founder Palmer Luckey said. "We want to build the capabilities that give us the ability to swiftly win any war we are forced to enter," he [said]. The Anduril founder didn't elaborate on what impact AI weaponry would have. But asked if it would be as decisive as the atomic bomb to the outcome of World War II he replied: "We have ideas for what they are. We are working on them." In 2022, Anduril won a contract worth almost $1 billion with the Special Operations Command to support its counter-unmanned systems. Anduril's products include autonomous sentry towers along the Mexican border [and] Altius-600M attack drones supplied to Ukraine. All of Anduril's tech operates autonomously and runs on its AI platform called Lattice that can easily be updated. The success of Anduril has given hope to other smaller players aiming to break into the defense sector. As an escalating number of global conflicts has increased demand for AI-driven weaponry, venture capitalists have put more than $100 billion into defense tech since 2021, according to Pitchbook data. The rising demand has sparked a fresh wave of startups lining up to compete with industry "primes" such as Lockheed Martin and RTX (formerly known as Raytheon) for a slice of the $842 billion US defense budget.

Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corruption in the military and in the corporate world from reliable major media sources.


Elon Musk v OpenAI: tech giants are inciting existential fears to evade scrutiny
2024-03-10, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/2024/mar/10/ai-wont-destroy-us-but-...

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”. Musk, who stepped down from OpenAI’s board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”. In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.

Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.


Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
2024-03-06, Yahoo News
https://finance.yahoo.com/news/emotion-tracking-ai-job-workers-133506859.html

Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, promising to detect and predict how someone is feeling. Over 50% of large employers in the U.S. use emotion AI aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice. We wondered what workers think about these technologies. My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey. 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions. Despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.” Participants ... said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace.

Note: The above article was written by Nazanin Andalibi at the University of Michigan. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.


‘A privacy nightmare’: the $400m surveillance package inside the US immigration bill
2024-02-06, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/us-news/2024/feb/06/us-immigration-bill-mexico-bo...

The $118bn bipartisan immigration bill that the US Senate introduced on Sunday is already facing steep opposition. The 370-page measure, which also would provide additional aid to Israel and Ukraine, has drawn the ire of both Democrats and Republicans over its proposed asylum and border laws. But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools. The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol. The bill describes autonomous surveillance towers as ones that “utilize sensors, onboard computing, and artificial intelligence to identify items of interest that would otherwise be manually identified by personnel”. The rest of the funding for border surveillance ... includes $47.5m for mobile video surveillance systems and drones and $25m for “familial DNA testing”. The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats. As of early January, CBP had deployed 396 surveillance towers along the US-Mexico border, according to the Electronic Frontier Foundation (EFF).

Note: Read more about the secret history of facial recognition technology and undeniable evidence indicating these tools do much more harm than good. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


In the name of ‘fake news,’ NewsGuard extorts sites to follow the government narrative
2023-12-10, New York Post
https://nypost.com/2023/12/10/opinion/newsguard-extorts-sites-to-follow-the-g...

An opaque network of government agencies and self-proclaimed anti-misinformation groups ... have repressed online speech. News publishers have been demonetized and shadow-banned for reporting dissenting views. NewsGuard, a for-profit company that scores news websites on trust and works closely with government agencies and major corporate advertisers, exemplifies the problem. NewsGuard’s core business is a misinformation meter, in which websites are rated on a scale of 0 to 100 on a variety of factors, including headline choice and whether a site publishes “false or egregiously misleading content.” Editors who have engaged with NewsGuard have found that the company has made bizarre demands that unfairly tarnish an entire site as untrustworthy for straying from the official narrative. In an email to one of its government clients, NewsGuard touted that its ratings system of websites is used by advertisers, “which will cut off revenues to fake news sites.” Internal documents ... show that the founders of NewsGuard privately pitched the firm to clients as a tool to engage in content moderation on an industrial scale, applying artificial intelligence to take down certain forms of speech. Earlier this year, Consortium News, a left-leaning site, charged in a lawsuit that NewsGuard serves as a proxy for the military to engage in censorship. The lawsuit brings attention to the Pentagon’s $749,387 contract with NewsGuard to identify “false narratives” regarding the war [in] Ukraine.

Note: A recent trove of whistleblower documents revealed how far the Pentagon and intelligence spy agencies are willing to go to censor alternative views, even if those views contain factual information and reasonable arguments. For more along these lines, see concise summaries of news articles on corporate corruption and media manipulation from reliable sources.


‘The Gospel’: how Israel uses AI to select bombing targets in Gaza
2023-12-01, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-t...

Israel’s military has made no secret of the intensity of its bombardment of the Gaza Strip. There has, however, been relatively little attention paid to the methods used by the Israel Defense Forces (IDF) to select targets in Gaza, and to the role artificial intelligence has played in their bombing campaign. After the 11-day war in Gaza in May 2021, officials said Israel had fought its “first AI war” using machine learning and advanced computing. The latest Israel-Hamas war has provided an unprecedented opportunity for the IDF to use such tools in a much wider theatre of operations and, in particular, to deploy an AI target-creation platform called “the Gospel”, which has significantly accelerated a lethal production line of targets. In early November, the IDF said “more than 12,000” targets in Gaza had been identified by its target administration division. Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers. According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.” A separate source [said] the Gospel had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”.

Note: Read about Israel's use of AI warfare since at least 2021. For more along these lines, see concise summaries of deeply revealing news articles on war from reliable major media sources.


Moderna is spying on you
2023-11-27, Lee Fang on Substack
https://www.leefang.com/p/moderna-is-spying-on-you

The Moderna misinformation reports, reported here for the first time, reveal what the pharmaceutical company is willing to do to shape public discourse around its marquee product. The mRNA COVID-19 vaccine catapulted the company to a $100 billion valuation. Behind the scenes, the marketing arm of the company has been working with former law enforcement officials and public health officials to monitor and influence vaccine policy. Key to this is a drug industry-funded NGO called Public Good Projects. PGP works closely with social media platforms, government agencies and news websites to confront the “root cause of vaccine hesitancy” by rapidly identifying and “shutting down misinformation.” A network of 45,000 healthcare professionals are given talking points “and advice on how to respond when vaccine misinformation goes mainstream”, according to an email from Moderna. An official training programme, developed by Moderna and PGP, alongside the American Board of Internal Medicine, [helps] healthcare workers identify medical misinformation. The online course, called the “Infodemic Training Program”, represents an official partnership between biopharma and the NGO world. Meanwhile, Moderna also retains Talkwalker which uses its “Blue Silk” artificial intelligence to monitor vaccine-related conversations across 150 million websites in nearly 200 countries. Claims are automatically deemed “misinformation” if they encourage vaccine hesitancy. As the pandemic abates, Moderna is, if anything, ratcheting up its surveillance operation.

Note: Strategies to silence and censor those who challenge mainstream narratives enable COVID vaccine pharmaceutical giants to downplay the significant, emerging health risks associated with the COVID shots. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.


AI doesn’t cause harm by itself. We should worry about the people who control it
2023-11-26, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/2023/nov/26/artificial-intelligence...

OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI, which, roughly speaking, is a machine that can accomplish, or surpass, any intellectual task humans can perform. It would do so, however, in an ethical fashion to benefit “humanity as a whole”. Two years ago, a group of OpenAI researchers left to start a new organisation, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that “there was a 20% chance that a rogue AI would destroy humanity within the next decade”. One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The problem we face is not that machines may one day exercise power over humans. That is speculation unwarranted by current developments. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power. For those who hold social, political and economic power, it makes sense to project problems as technological rather than social and as lying in the future rather than in the present. There are few tools useful to humans that cannot also cause harm. But they rarely cause harm by themselves; they do so, rather, through the ways in which they are exploited by humans, especially those with power.

Note: Read how AI is already being used for war, mass surveillance, and questionable facial recognition technology.


Deep underground, robotic teamwork saves the day
2023-07-20, Knowable Magazine
https://knowablemagazine.org/content/article/technology/2023/deep-underground...

When a Manhattan parking garage collapsed in April this year, rescuers were reluctant to stay in the damaged building, fearing further danger. So they used a combination of flying drones and a doglike walking robot to inspect the damage, look for survivors and make sure the site was safe for human rescuers to return. Soon, rescuers may be able to call on a much more sophisticated robotic search-and-rescue response. Researchers are developing teams of flying, walking and rolling robots that can cooperate to explore areas that no one robot could navigate on its own. And they are giving robots the ability to communicate with one another and make many of their own decisions independent of their human controller. Such teams of robots could be useful in other challenging environments like caves or mines where it can be difficult for rescuers to find and reach survivors. In cities, collapsed buildings and underground sites such as subways or utility tunnels often have hazardous areas where human rescuers can’t be sure of the dangers. As robots become better, teams of them may one day be able to go into a hazardous disaster site, locate survivors and report back to their human operators with a minimum of supervision. “More work ... needs to be done,” [roboticist Viktor] Orekhov says. “But at the same time, we’ve seen the ability of the teams advanced so rapidly that even now, with their current capabilities, they’re able to make a significant difference in real-life environments.”

Note: Explore more positive stories like this in our comprehensive inspiring news articles archive focused on solutions and bridging divides.


The Future of AI Is War
2023-07-17, The Nation
https://www.thenation.com/article/world/artificial-intelligence-us-military/

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility. In addition to developing a wide variety of "autonomous," or robotic combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called "robot generals." In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to ... provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending "fire" instructions directly to "shooters," largely bypassing human control. The Air Force's ABMS is intended to ... connect all US combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced "Jad-C-two"). "JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon ... to engage the target," the Congressional Research Service reported in 2022.

Note: Read about the emerging threat of killer robots on the battlefield. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.


Fantasy fears about AI are obscuring how we already abuse machine intelligence
2023-06-11, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/2023/jun/11/big-tech-warns-of-threa...

A young African American man, Randal Quran Reid, was pulled over by the state police in Georgia. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. The arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed "a credible source" had identified Reid as the culprit. The facial recognition match was incorrect and Reid was released. Reid ... is not the only victim of a false facial recognition match. So far all those arrested in the US after a false match have been black. From surveillance to disinformation, we live in a world shaped by AI. The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with ... the humans that created the software and trained it. Too often when we talk of the "problem" of AI, we remove the human from the picture. We worry AI will "eliminate jobs" and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. We have come to view the machine as the agent and humans as victims of machine agency. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.

Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.


The AI firm that conducted ‘state surveillance’ of your social media posts
2023-06-03, The Telegraph (One of the UK's Leading Newspapers)
https://www.telegraph.co.uk/news/2023/06/03/logically-ai-firm-social-media-po...

An industrial estate in Yorkshire is an unlikely location for ... an artificial intelligence (AI) company used by the Government to monitor people’s posts on social media. Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the Government terms “disinformation” – false information deliberately seeded online – and “misinformation”, which is false information that has been spread inadvertently. It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms”, using AI to identify those that are potentially problematic. It has a £1.2 million deal with the Department for Culture, Media and Sport (DCMS), as well as another worth up to £1.4 million with the Department of Health and Social Care to monitor threats to high-profile individuals within the vaccine service. It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see. A joint press release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue. “When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.

Note: Read more about how NewsGuard, a for-profit company, works closely with government agencies and major corporate advertisers to suppress dissenting views online. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and media manipulation from reliable sources.


Schools Are Pouring Millions Into AI-Powered Weapon Detection Systems. Do They Work?
2023-05-07, The Intercept
https://theintercept.com/2023/05/07/ai-gun-weapons-detection-schools-evolv/

As school shootings proliferate across the country — there were 46 school shootings in 2022, more than in any year since at least 1999 — educators are increasingly turning to dodgy vendors who market misleading and ineffective technology. Utica City is one of dozens of school districts nationwide that have spent millions on gun detection technology with little to no track record of preventing or stopping violence. Evolv’s scanners keep popping up in schools across the country. Over 65 school districts have bought or tested artificial intelligence gun detection from a variety of companies since 2018, spending a total of over $45 million, much of it coming from public coffers. “Private companies are preying on school districts’ worst fears and proposing the use of technology that’s not going to work,” said Stefanie Coyle ... at the New York Civil Liberties Union. In December, it came out that Evolv, a publicly traded company since 2021, had doctored the results of their software testing. In 2022, the National Center for Spectator Sports Safety and Security, a government body, completed a confidential report showing that previous field tests on the scanners failed to detect knives and a handgun. Five law firms recently announced investigations of Evolv Technology — a partner of Motorola Solutions whose investors include Bill Gates — looking into possible violations of securities law, including claims that Evolv misrepresented its technology and its capabilities.

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption from reliable major media sources.


AI makes non-invasive mind-reading possible by turning thoughts into text
2023-05-01, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/technology/2023/may/01/ai-makes-non-invasive-mind...

An AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person’s thoughts to be read non-invasively for the first time. The decoder could reconstruct speech with uncanny accuracy while people listened to a story – or even silently imagined one – using only fMRI scan data. Previous language decoding systems have required surgical implants. Large language models – the kind of AI underpinning OpenAI’s ChatGPT ... are able to represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word. The decoder was personalised and when the model was tested on another person the readout was unintelligible. It was also possible for participants on whom the decoder had been trained to thwart the system, for example by thinking of animals or quietly imagining another story. Jerry Tang, a doctoral student at the University of Texas at Austin and a co-author, said: “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them.” Prof Tim Behrens, a computational neuroscientist ... said it opened up a host of experimental possibilities, including reading thoughts from someone dreaming.

Note: This technology has advanced considerably since Jose Delgado first stopped a charging bull using radio waves in 1965. For more along these lines, see concise summaries of deeply revealing news articles on mind control and the disappearance of privacy from reliable major media sources.


Big Tech Companies Are Becoming More Powerful Than Nation-States
2023-04-25, Common Dreams
https://www.commondreams.org/opinion/big-tech-companies-more-powerful-than-na...

U.S. citizens are being subjected to a relentless onslaught from intrusive technologies that have become embedded in the everyday fabric of our lives, creating unprecedented levels of social and political upheaval. These widely used technologies ... include social media and what Harvard professor Shoshana Zuboff calls "surveillance capitalism"—the buying and selling of our personal info and even our DNA in the corporate marketplace. But powerful new ones are poised to create another wave of radical change. Under the mantle of the "Fourth Industrial Revolution," these include artificial intelligence or AI, the metaverse, the Internet of Things, the Internet of Bodies (in which our physical and health data is added into the mix to be processed by AI), and my personal favorite, police robots. This is a two-pronged effort involving both powerful corporations and government initiatives. These tech-based systems are operating "below the radar" and rarely discussed in the mainstream media. The world's biggest tech companies are now richer and more powerful than most countries. According to an article in PC Week in 2021 discussing Apple's dominance: "By taking the current valuation of Apple, Microsoft, Amazon, and others, then comparing them to the GDP of countries on a map, we can see just how crazy things have become… Valued at $2.2 trillion, the Cupertino company is richer than 96% of the world. In fact, only seven countries currently outrank the maker of the iPhone financially."

Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.


Mapping Project Reveals Locations of U.S. Border Surveillance Towers
2023-03-20, The Intercept
https://theintercept.com/2023/03/20/border-surveillance-map/

The precise locations of the U.S. government’s high-tech surveillance towers along the U.S.-Mexico border are being made public for the first time as part of a mapping project by the Electronic Frontier Foundation. While the Department of Homeland Security’s investment of more than a billion dollars into a so-called virtual wall between the U.S. and Mexico is a matter of public record, the government does not disclose where these towers are located, despite privacy concerns of residents of both countries — and the fact that individual towers are plainly visible to observers. The surveillance tower map is the result of a year’s work steered by EFF Director of Investigations Dave Maass. As border surveillance towers have multiplied across the southern border, so too have they become increasingly sophisticated, packing a panoply of powerful cameras, microphones, lasers, radar antennae, and other sensors. Companies like Anduril and Google have reaped major government paydays by promising to automate the border-watching process with migrant-detecting artificial intelligence. Opponents of these modern towers, bristling with always-watching sensors, argue the increasing computerization of border security will lead inevitably to the dehumanization of an already thoroughly dehumanizing undertaking. Nobody can say for certain how many people have died attempting to cross the U.S.-Mexico border in the recent age of militarization and surveillance. Researchers estimate that the minimum is at least 10,000 dead.

Note: As the article states, the Department of Homeland Security was "the largest reorganization of the federal government since the creation of the CIA and the Defense Department," and has resulted in U.S. taxpayers funding corrupt agendas that have led to massive human rights abuses. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


‘Aims’: the software for hire that can control 30,000 fake online profiles
2023-02-14, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/world/2023/feb/15/aims-software-avatars-team-jorg...

Advanced Impact Media Solutions, or Aims, which controls more than 30,000 fake social media profiles, can be used to spread disinformation at scale and at speed. It is sold by “Team Jorge”, a unit of disinformation operatives based in Israel. Tal Hanan, who runs the covert group using the pseudonym “Jorge”, told undercover reporters that they sold access to their software to unnamed intelligence agencies, political parties and corporate clients. Team Jorge’s Aims software ... is much more than a bot-controlling programme. Each avatar ... is given a multifaceted digital backstory. Aims enables the creation of accounts on Twitter, LinkedIn, Facebook, Telegram, Gmail, Instagram and YouTube. Some even have Amazon accounts with credit cards, bitcoin wallets and Airbnb accounts. Hanan told the undercover reporters his avatars mimicked human behaviour and their posts were powered by artificial intelligence. [Our reporters] were able to identify a much wider network of 2,000 Aims-linked bots on Facebook and Twitter. We then traced their activity across the internet, identifying their involvement ... in about 20 countries including the UK, US, Canada, Germany, Switzerland, Greece, Panama, Senegal, Mexico, Morocco, India, the United Arab Emirates, Zimbabwe, Belarus and Ecuador. The analysis revealed a vast array of bot activity, with Aims’ fake social media profiles getting involved in a dispute in California over nuclear power; a #MeToo controversy in Canada ... and an election in Senegal.

Note: The FBI has provided police departments with fake social media profiles to use in law enforcement investigations. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.


Important Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.