AI Media Articles
Artificial Intelligence (AI) is an emerging technology with great promise and potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
The Pentagon is turning to a new class of weapons to fight the numerically superior People’s Liberation Army [of China]: drones, lots and lots of drones. In August 2023, the Defense Department unveiled Replicator, its initiative to field thousands of “all-domain, attritable autonomous (ADA2) systems”: Pentagon-speak for low-cost (and potentially AI-driven) machines — in the form of self-piloting ships, large robot aircraft, and swarms of smaller kamikaze drones — that it can use and lose en masse to overwhelm Chinese forces. For the last 25 years, uncrewed Predators and Reapers, piloted by military personnel on the ground, have been killing civilians across the planet. Experts worry that mass production of new low-cost, deadly drones will lead to even more civilian casualties. Advances in AI have increasingly raised the possibility of robot planes, in various nations’ arsenals, selecting their own targets. During the first 20 years of the war on terror, the U.S. conducted more than 91,000 airstrikes ... and killed up to 48,308 civilians, according to a 2021 analysis. “The Pentagon has yet to come up with a reliable way to account for past civilian harm caused by U.S. military operations,” [Columbia Law’s Priyanka Motaparthy] said. “So the question becomes, ‘With the potential rapid increase in the use of drones, what safeguards potentially fall by the wayside? How can they possibly hope to reckon with future civilian harm when the scale becomes so much larger?’”
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
Edward Snowden wrote on social media to his nearly 6 million followers, "Do not ever trust @OpenAI ... You have been warned," following the appointment of retired U.S. Army General Paul Nakasone to the board of the artificial intelligence technology company. Snowden, a former National Security Agency (NSA) subcontractor, was charged with espionage by the Justice Department in 2013 after leaking thousands of top-secret records exposing the agency's surveillance of private citizens' information. In a Friday morning post on X, formerly Twitter, Snowden reshared a post providing information on OpenAI's newest board member. Nakasone is a former NSA director, the longest-serving leader of the U.S. Cyber Command, and a former chief of the Central Security Service. In [a] statement, Nakasone said, "OpenAI's dedication to its mission aligns closely with my own values and experience in public service. I look forward to contributing to OpenAI's efforts to ensure artificial general intelligence is safe and beneficial to people around the world." Snowden wrote in an X post, "They've gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc.) There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth." Snowden's post has received widespread attention, with nearly 2 million views, 43,500 likes, 16,000 reposts and around 1,000 comments as of Friday afternoon.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and intelligence agency corruption from reliable major media sources.
OpenAI on Thursday announced its newest board member: Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency. Nakasone was the longest-serving leader of the U.S. Cyber Command and chief of the Central Security Service. The company said Nakasone will also join OpenAI’s recently created Safety and Security Committee. The committee is spending 90 days evaluating the company’s processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said. OpenAI is bolstering its board and its C-suite as its large language models gain importance across the tech sector and as competition rapidly emerges in the burgeoning generative artificial intelligence market. While the company has been in hyper-growth mode since late 2022, when it launched ChatGPT, OpenAI has also been riddled with controversy and high-level employee departures. The company said Sarah Friar, previously CEO of Nextdoor and finance chief at Square, is joining as chief financial officer. OpenAI also hired Kevin Weil, an ex-president at Planet Labs, as its new chief product officer. Weil was previously a senior vice president at Twitter and a vice president at Facebook and Instagram. Weil’s product team will focus on “applying our research to products and services that benefit consumers, developers, and businesses,” the company wrote.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and intelligence agency corruption from reliable major media sources.
"Agency intervention is necessary to stop the existential threat Google poses to original content creators," the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority "to stop Google's latest expansion of AI Overviews," a search engine innovation that Google has been rolling out recently. Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. Overviews give "comprehensive answers without the user ever having to click to another page," the The New York Times warns. And this worries websites that rely on Google to drive much of their traffic. "It potentially chokes off the original creators of the content," Frank Pine, executive editor of MediaNews Group and Tribune Publishing (owner of 68 daily newspapers), told the Times. Media websites have gotten used to Google searches sending them a certain amount of traffic. But that doesn't mean Google is obligated to continue sending them that same amount of traffic forever. It is possible that Google's pivot to AI was hastened by how hostile news media has been to tech companies. We've seen publishers demanding that search engines and social platforms pay them for the privilege of sharing news links, even though this arrangement benefits publications (arguably more than it does tech companies) by driving traffic.
Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
Once upon a time, Google was great. They intensively monitored what people searched for, and then used that information continually to improve the engine’s performance. Their big idea was that the information thus derived had a commercial value; it indicated what people were interested in and might therefore be of value to advertisers who wanted to sell them stuff. Thus was born what Shoshana Zuboff christened “surveillance capitalism”, the dominant money machine of the networked world. The launch of generative AIs such as ChatGPT clearly took Google by surprise, which is odd given that the company had for years been working on the technology. The question became: how will Google respond to the threat? Now we know: it’s something called AI overviews, in which an increasing number of search queries are initially answered by AI-generated responses. Users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day. There’s a quaint air of desperation in the publicity for this sudden pivot from search engine to answerbot. The really big question about the pivot, though, is what its systemic impact on the link economy will be. Already, the news is not great. Gartner, a market-research consultancy, predicts, for example, that search engine volume will drop 25% by 2026 owing to AI chatbots and other virtual agents.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Sara needed some chocolate - she had had one of those days - so she wandered into a Home Bargains store. "Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'." Sara ... was wrongly accused after being flagged by a facial-recognition system called Facewatch. She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology. Facewatch later wrote to Sara and acknowledged it had made an error. Facewatch is used in numerous stores in the UK. It's not just retailers who are turning to the technology. On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech. 192 arrests have been made so far this year as a result of it. But civil liberty groups are worried that its accuracy is yet to be fully established, and point to cases such as Shaun Thompson's. Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge. Within a few seconds, he was approached by police and told he was a wanted man. But it was a case of mistaken identity. "It felt intrusive ... I was treated guilty until proven innocent," he says. Silkie Carlo, director of Big Brother Watch, has filmed the police on numerous facial-recognition deployments. She says that anyone whose face is scanned is effectively part of a digital police line-up.
Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself. Now it comes up with an instant answer generated by artificial intelligence - which may or may not be correct. “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine. It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results. It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination. Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing. Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.
Note: Read more about the potential dangers of Google's new AI tool. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.
Amazon has been accused of using “intrusive algorithms” as part of a sweeping surveillance program to monitor and deter union organizing activities. Workers at a warehouse run by the technology giant on the outskirts of St Louis, Missouri, are today filing an unfair labor practice charge with the National Labor Relations Board (NLRB). A copy of the charge ... alleges that Amazon has “maintained intrusive algorithms and other workplace controls and surveillance which interfere with Section 7 rights of employees to engage in protected concerted activity”. There have been several reports of Amazon surveilling workers over union organizing and activism, including human resources monitoring employee message boards, software to track union threats and job listings for intelligence analysts to monitor “labor organizing threats”. Artificial intelligence can be used by warehouse employers like Amazon “to essentially have 24/7 unregulated and algorithmically processed and recorded video, and often audio data of what their workers are doing all the time”, said Seema N Patel ... at Stanford Law School. “It enables employers to control, record, monitor and use that data to discipline hundreds of thousands of workers in a way that no human manager or group of managers could even do.” The National Labor Relations Board issued a memo in 2022 announcing its intent to protect workers from AI-enabled monitoring of labor organizing activities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback and lawsuits are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Google announced this week that it would begin the international rollout of its new artificial intelligence-powered search feature, called AI Overviews. When billions of people search a range of topics from news to recipes to general knowledge questions, what they see first will now be an AI-generated summary. While Google was once mostly a portal to reach other parts of the internet, it has spent years consolidating content and services to make itself into the web’s primary destination. Weather, flights, sports scores, stock prices, language translation, showtimes and a host of other information have gradually been incorporated into Google’s search page over the past 15 or so years. Finding that information no longer requires clicking through to another website. With AI Overviews, the rest of the internet may meet the same fate. Google has tried to assuage publishers’ fears that users will no longer see their links or click through to their sites. Research firm Gartner predicts a 25% drop in traffic to websites from search engines by 2026 – a decrease that would be disastrous for most outlets and creators. What’s left for publishers is largely direct visits to their own home pages and Google referrals. If AI Overviews take away a significant portion of the latter, it could mean less original reporting, fewer creators publishing cooking blogs or how-to guides, and a less diverse range of information sources.
Note: WantToKnow.info traffic from Google search has fallen sharply as Google has stopped indexing most websites. These new AI summaries make independent media sites even harder to find. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs. In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned. SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them. Facial recognition ... technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed. In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement.
Note: Crime is increasing in many cities, and law enforcement agencies are appropriately working to maintain public safety. Yet far too often, social justice takes a backseat while those in authority violate human rights. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and artificial intelligence from reliable major media sources.
In the middle of the night, students at Utah’s Kings Peak high school are wide awake – taking mandatory exams. Their every movement is captured on their computer’s webcam and scrutinized by Proctorio, a surveillance company that uses artificial intelligence. Proctorio software conducts “desk scans” in an effort to catch test-takers who turn to “unauthorized resources”, uses “face detection” technology to ensure there isn’t anybody else in the room to help, and employs “gaze detection” to spot anybody “looking away from the screen for an extended period of time”. Proctorio then provides visual and audio records to Kings Peak teachers, with the algorithm calling particular attention to pupils whose behaviors during the test flagged them as possibly engaging in academic dishonesty. Such remote proctoring tools grew exponentially during the pandemic, particularly at US colleges and universities. K-12 schools’ use of remote proctoring tools, however, has largely gone under the radar. K-12 schools nationwide – and online-only programs in particular – continue to use tools from digital proctoring companies on students ... as young as kindergarten-aged. Civil rights activists, who contend AI proctoring tools fail to work as intended, harbor biases and run afoul of students’ constitutional protections, said the privacy and security concerns are particularly salient for young children and teens, who may not be fully aware of the monitoring or its implications. One 2021 study found that Proctorio failed to detect test-takers who had been instructed to cheat. Researchers concluded the software was “best compared to taking a placebo: it has some positive influence, not because it works but because people believe that it works, or that it might work.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
The center of the U.S. military-industrial complex has been shifting over the past decade from the Washington, D.C. metropolitan area to Northern California—a shift that is accelerating with the rise of artificial intelligence-based systems, according to a report published Wednesday. "Although much of the Pentagon's $886 billion budget is spent on conventional weapon systems and goes to well-established defense giants such as Lockheed Martin, RTX, Northrop Grumman, General Dynamics, Boeing, and BAE Systems, a new political economy is emerging, driven by the imperatives of big tech companies, venture capital (VC), and private equity firms," [report author Roberto J.] González wrote. "Defense Department officials have ... awarded large multibillion-dollar contracts to Microsoft, Amazon, Google, and Oracle." González found that the five largest military contracts to major tech firms between 2018 and 2022 "had contract ceilings totaling at least $53 billion combined." There's also the danger of a "revolving door" between Silicon Valley and the Pentagon as many senior government officials "are now gravitating towards defense-related VC or private equity firms as executives or advisers after they retire from public service." "Members of the armed services and civilians are in danger of being harmed by inadequately tested—or algorithmically flawed—AI-enabled technologies. By nature, VC firms seek rapid returns on investment by quickly bringing a product to market, and then 'cashing out' by either selling the startup or going public. This means that VC-funded defense tech companies are under pressure to produce prototypes quickly and then move to production before adequate testing has occurred."
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Have you heard about the new Google? They “supercharged” it with artificial intelligence. Somehow, that also made it dumber. With the regular old Google, I can ask, “What’s Mark Zuckerberg’s net worth?” and a reasonable answer pops up: “169.8 billion USD.” Now let’s ask the same question with the “experimental” new version of Google search. Its AI responds: Zuckerberg’s net worth is “$46.24 per hour, or $96,169 per year. This is equivalent to $8,014 per month, $1,849 per week, and $230.6 million per day.” Google acting dumb matters because its AI is headed to your searches sooner or later. The company has already been testing this new Google — dubbed Search Generative Experience, or SGE — with volunteers for nearly 11 months, and recently started showing AI answers in the main Google results even for people who have not opted in to the test. To give us answers to everything, Google’s AI has to decide which sources are reliable. I’m not very confident about its judgment. Remember our bonkers result on Zuckerberg’s net worth? A professional researcher — and also regular old Google — might suggest checking the billionaires list from Forbes. Google’s AI answer relied on a very weird ZipRecruiter page for “Mark Zuckerberg Jobs,” a thing that does not exist. The new Google can do some useful things. But as you’ll see, it sometimes also makes up facts, misinterprets questions, [and] delivers out-of-date information. This test of Google’s future has been going on for nearly a year, and the choices being made now will influence how billions of people get information.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI technology from reliable major media sources.
A Silicon Valley defense tech startup is working on products that could have as great an impact on warfare as the atomic bomb, its founder Palmer Luckey said. "We want to build the capabilities that give us the ability to swiftly win any war we are forced to enter," he [said]. The Anduril founder didn't elaborate on what impact AI weaponry would have. But asked if it would be as decisive as the atomic bomb to the outcome of World War II he replied: "We have ideas for what they are. We are working on them." In 2022, Anduril won a contract worth almost $1 billion with the Special Operations Command to support its counter-unmanned systems. Anduril's products include autonomous sentry towers along the Mexican border [and] Altius-600M attack drones supplied to Ukraine. All of Anduril's tech operates autonomously and runs on its AI platform called Lattice that can easily be updated. The success of Anduril has given hope to other smaller players aiming to break into the defense sector. As an escalating number of global conflicts has increased demand for AI-driven weaponry, venture capitalists have put more than $100 billion into defense tech since 2021, according to Pitchbook data. The rising demand has sparked a fresh wave of startups lining up to compete with industry "primes" such as Lockheed Martin and RTX (formerly known as Raytheon) for a slice of the $842 billion US defense budget.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on corruption in the military and in the corporate world from reliable major media sources.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”. Musk, who stepped down from OpenAI’s board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”. In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.
Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.
Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, promising to detect and predict how someone is feeling. Over 50% of large employers in the U.S. use emotion AI aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice. We wondered what workers think about these technologies. My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey. 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions. Despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.” Participants ... said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace.
Note: The above article was written by Nazanin Andalibi at the University of Michigan. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use “to counter misinformation online” and “advance state-of-the-art misinformation research.” House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million ... to develop AI tools that could censor information far faster and at a much greater scale than human beings. The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the “responsibility of censorship” on social media. The release of [an] interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action. House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable. “AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ‘disinformation’ bureaucrats and researchers,” House investigators wrote in the interim report.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
The $118bn bipartisan immigration bill that the US Senate introduced on Sunday is already facing steep opposition. The 370-page measure, which also would provide additional aid to Israel and Ukraine, has drawn the ire of both Democrats and Republicans over its proposed asylum and border laws. But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools. The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol. The bill describes autonomous surveillance towers as ones that “utilize sensors, onboard computing, and artificial intelligence to identify items of interest that would otherwise be manually identified by personnel”. The rest of the funding for border surveillance ... includes $47.5m for mobile video surveillance systems and drones and $25m for “familial DNA testing”. The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats. As of early January, CBP had deployed 396 surveillance towers along the US-Mexico border, according to the Electronic Frontier Foundation (EFF).
Note: Read more about the secret history of facial recognition technology and undeniable evidence indicating these tools do much more harm than good. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Justice Department investigators are scrutinizing the healthcare industry’s use of AI embedded in patient records that prompts doctors to recommend treatments. Prosecutors have started subpoenaing pharmaceutical and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations, said three sources familiar with the matter. Two of the sources—speaking anonymously to discuss ongoing investigations—said DOJ attorneys are asking general questions suggesting they still may be formulating a strategy. “I have seen” civil investigative demands “that ask questions about algorithms and prompts that are being built into EMR systems that may be resulting in care that is either in excess of what would have otherwise been rendered, or may be medically unnecessary,” said Jaime Jones, who co-leads the healthcare practice at Sidley Austin. DOJ attorneys want “to see what the result is of those tools being built into the system.” The probes bring fresh relevance to a pair of 2020 criminal settlements with Purdue Pharma and its digital records contractor, Practice Fusion, over their collusion to design automated pop-up alerts pushing doctors to prescribe addictive painkillers. The kickback scheme ... led to a $145 million penalty for Practice Fusion. Marketers from Purdue ... worked in tandem with Practice Fusion to build clinical decision alerts relying on algorithms.
Note: Read how the US opioid industry operated like a drug cartel. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Pharma corruption from reliable major media sources.
Important Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.