AI News Stories
Artificial Intelligence (AI) is an emerging technology with great promise and potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency. And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no. Human control sits at the heart of governments’ pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder AI’s capacity to faithfully execute a human’s orders could cause them to err when rejecting an order. Militaries will therefore need to either demonstrate that it’s possible to build ethical, responsible autonomous weapons that don’t say no, or show that they can engineer a safe and reliable right-to-refuse that’s compatible with the principle of always keeping a human “in the loop.” If they can’t do one or the other ... their promises of ethical and yet controllable killer robots should be treated with caution. The killer robots that countries are likely to use will only ever be as ethical as their imperfect human commanders. They would only promise a cleaner mode of warfare if those using them seek to hold themselves to a higher standard.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities. Three related considerations have combined to shape the hype surrounding military AI. First [is] the emergence of a new military-industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is both the cause and the effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed. Senior officers do not trust AI-enhanced capabilities. To the extent they do demonstrate increased levels of trust in machines, their trust is moderated by how machines are used.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
The Pentagon is turning to a new class of weapons to fight [China’s] numerically superior People’s Liberation Army: drones, lots and lots of drones. In August 2023, the Defense Department unveiled Replicator, its initiative to field thousands of “all-domain, attritable autonomous (ADA2) systems”: Pentagon-speak for low-cost (and potentially AI-driven) machines — in the form of self-piloting ships, large robot aircraft, and swarms of smaller kamikaze drones — that they can use and lose en masse to overwhelm Chinese forces. For the last 25 years, uncrewed Predators and Reapers, piloted by military personnel on the ground, have been killing civilians across the planet. Experts worry that mass production of new low-cost, deadly drones will lead to even more civilian casualties. Advances in AI have increasingly raised the possibility of robot planes, in various nations’ arsenals, selecting their own targets. During the first 20 years of the war on terror, the U.S. conducted more than 91,000 airstrikes ... and killed up to 48,308 civilians, according to a 2021 analysis. “The Pentagon has yet to come up with a reliable way to account for past civilian harm caused by U.S. military operations,” [Columbia Law’s Priyanka Motaparthy] said. “So the question becomes, ‘With the potential rapid increase in the use of drones, what safeguards potentially fall by the wayside? How can they possibly hope to reckon with future civilian harm when the scale becomes so much larger?’”
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
When Megan Rothbauer suffered a heart attack at work in Wisconsin, she was rushed to hospital in an ambulance. The nearest hospital was “not in network”, which left Ms Rothbauer with a $52,531.92 bill for her care. Had the ambulance driven a further three blocks to Meriter Hospital in Madison, the bill would have been a more modest $1,500. The incident laid bare the expensive complexity of the American healthcare system with patients finding that they are uncovered, despite paying hefty premiums, because of their policy’s small print. In many cases the grounds for refusal hinge on whether the insurer accepts that the treatment is necessary and that decision is increasingly being made by artificial intelligence rather than a physician. It is leading to coverage being denied on an industrial scale. Much of the work is outsourced, with the biggest operator being EviCore, which ... uses AI to review — and in many cases turn down — doctors’ requests for prior authorisation, the insurer’s guarantee that it will pay for treatment. The controversy over coverage denials was brought into sharp focus by the gunning down of UnitedHealthcare’s chief executive Brian Thompson in Manhattan. The [words written on the] casings [of] the ammunition — “deny”, “defend” and “depose” — are thought to refer to the tactics the insurance industry is accused of using to avoid paying out. UnitedHealthcare rejected one in three claims last year, about twice the industry average.
Note: For more along these lines, read our concise summaries of news articles on AI and corporate corruption.
With the misinformation category being weaponized across the political spectrum, we took a look at how invested government has become in studying and “combatting” it using your tax dollars. That research can provide the intellectual ammunition to censor people online. Since 2021, the Biden-Harris administration has spent $267 million on research grants with the term “misinformation” in the proposal. Of course, the Covid pandemic was the driving force behind so much of the misinformation debate. There is robust documentation by now proving that the Biden-Harris administration worked closely with social media companies to censor content deemed “misinformation,” which often included cases where people simply questioned or disagreed with the Administration’s COVID policies. In February the U.S. House Committee on the Judiciary and the Select Subcommittee on the Weaponization of the Federal Government issued a scathing report against the National Science Foundation (NSF) for funding grants supporting tools and processes that censor online speech. The report said, “the purpose of these taxpayer-funded projects is to develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others.” $13 million was spent on the censorious technologies profiled in the report.
Note: Read the full article on Substack to uncover all the misinformation contracts with government agencies, universities, nonprofits, and defense contractors. For more along these lines, read our concise summaries of news articles on censorship and government corruption.
Technology companies are having some early success selling artificial intelligence tools to police departments. Axon, widely recognized for its Taser devices and body cameras, was among the first companies to introduce AI specifically for the most common police task: report writing. Its tool, Draft One, generates police narratives directly from Axon’s bodycam audio. Currently, the AI is being piloted by 75 officers across several police departments. “The hours saved comes out to about 45 hours per police officer per month,” said Sergeant Robert Younger of the Fort Collins Police Department, an early adopter of the tool. Cassandra Burke Robertson, director of the Center for Professional Ethics at Case Western Reserve University School of Law, has reservations about AI in police reporting, especially when it comes to accuracy. “Generative AI programs are essentially predictive text tools. They can generate plausible text quickly, but the most plausible explanation is often not the correct explanation, especially in criminal investigations,” she said. In the courtroom, AI-generated police reports could introduce additional complications, especially when they rely solely on video footage rather than officer dictation. New Jersey-based lawyer Adam Rosenblum said “hallucinations” — instances when AI generates inaccurate or false information that can distort context — are another issue. Courts might need new standards ... before allowing the reports into evidence.
Note: For more along these lines, read our concise summaries of news articles on AI and police corruption.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technology like automated license plate readers, drones, facial recognition, and social media monitoring has added a uniquely dangerous element to the surveillance that comes with physical intimidation of law enforcement. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement has partnered with companies like Clearview AI, which scraped billions of images from the internet for a facial recognition database ... that has been used by agencies across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
At the Technology Readiness Experimentation (T-REX) event in August, the US Defense Department tested an artificial intelligence-enabled autonomous robotic gun system developed by fledgling defense contractor Allen Control Systems dubbed the “Bullfrog.” Consisting of a 7.62-mm M240 machine gun mounted on a specially designed rotating turret outfitted with an electro-optical sensor, proprietary AI, and computer vision software, the Bullfrog was designed to deliver small arms fire on drone targets with far more precision than the average US service member can achieve with a standard-issue weapon. Footage of the Bullfrog in action published by ACS shows the truck-mounted system locking onto small drones and knocking them out of the sky with just a few shots. Should the Pentagon adopt the system, it would represent the first publicly known lethal autonomous weapon in the US military’s arsenal. In accordance with the Pentagon’s current policy governing lethal autonomous weapons, the Bullfrog is designed to keep a human “in the loop” in order to avoid a potential “unauthorized engagement.” In other words, the gun points at and follows targets, but does not fire until commanded to by a human operator. However, ACS officials claim that the system can operate totally autonomously should the US military require it to in the future, with sentry guns taking the entire kill chain out of the hands of service members.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake. Academic and private sector researchers have been engaged in a race ... to create undetectable deepfakes. The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content.” JSOC wants the ability to create online user profiles that “appear to be a unique individual that ... does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.” The document notes that “the solution should include facial & background imagery, facial & background video, and audio layers.” JSOC hopes to be able to generate “selfie video” from these fabricated humans. Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.” A joint statement by the NSA, FBI, and CISA warned [that] the global proliferation of deepfake technology [is] a “top risk” for 2023. An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”
Note: Why is the Pentagon investing in advanced deepfake technology? Read about the Pentagon's secret army of 60,000 operatives who use fake online personas to manipulate public discourse. For more along these lines, see concise summaries of deeply revealing news articles on AI and media corruption from reliable major media sources.
Police departments in 15 states provided The Post with rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years. According to the arrest reports in those cases and interviews with people who were arrested, authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error. Officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects “through investigative means” or that a human source such as a witness or police officer made the initial identification. Defense lawyers and civil rights groups argue that people have a right to know about any software that identifies them as part of a criminal investigation, especially a technology that has led to false arrests. The reliability of the tool has been successfully challenged in a handful of recent court cases around the country, leading some defense lawyers to posit that police and prosecutors are intentionally trying to shield the technology from court scrutiny. Misidentification by this type of software played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black. Charges were later dismissed against all of them. Federal testing of top facial recognition software has found the programs are more likely to misidentify people of color.
Note: Read about the secret history of facial recognition. For more along these lines, see concise summaries of deeply revealing news articles on AI and police corruption from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Justice Department investigators are scrutinizing the healthcare industry’s use of AI embedded in patient records that prompts doctors to recommend treatments. Prosecutors have started subpoenaing pharmaceutical and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations, said three sources familiar with the matter. Two of the sources—speaking anonymously to discuss ongoing investigations—said DOJ attorneys are asking general questions suggesting they still may be formulating a strategy. “I have seen” civil investigative demands “that ask questions about algorithms and prompts that are being built into EMR systems that may be resulting in care that is either in excess of what would have otherwise been rendered, or may be medically unnecessary,” said Jaime Jones, who co-leads the healthcare practice at Sidley Austin. DOJ attorneys want “to see what the result is of those tools being built into the system.” The probes bring fresh relevance to a pair of 2020 criminal settlements with Purdue Pharma and its digital records contractor, Practice Fusion, over their collusion to design automated pop-up alerts pushing doctors to prescribe addictive painkillers. The kickback scheme ... led to a $145 million penalty for Practice Fusion. Marketers from Purdue ... worked in tandem with Practice Fusion to build clinical decision alerts relying on algorithms.
Note: Read how the US opioid industry operated like a drug cartel. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Pharma corruption from reliable major media sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data through voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
On the sidelines of the International Institute for Strategic Studies’ annual Shangri-La Dialogue in June, US Indo-Pacific Command chief Navy Admiral Samuel Paparo colorfully described the US military’s contingency plan for a Chinese invasion of Taiwan as flooding the narrow Taiwan Strait between the two countries with swarms of thousands upon thousands of drones, by land, sea, and air, to delay a Chinese attack enough for the US and its allies to muster additional military assets. “I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities,” Paparo said, “so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything.” China has a lot of drones and can make a lot more drones quickly, creating a likely advantage during a protracted conflict. This stands in contrast to American and Taiwanese forces, who do not have large inventories of drones. The Pentagon’s “hellscape” plan proposes that the US military make up for this growing gap by producing and deploying what amounts to a massive screen of autonomous drone swarms designed to confound enemy aircraft, provide guidance and targeting to allied missiles, knock out surface warships and landing craft, and generally create enough chaos to blunt (if not fully halt) a Chinese push across the Taiwan Strait. Planning a “hellscape” of hundreds of thousands of drones is one thing, but actually making it a reality is another.
Note: Learn more about warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Some renters may savor the convenience of “smart home” technologies like keyless entry and internet-connected doorbell cameras. But tech companies are increasingly selling these solutions to landlords for a more nefarious purpose: spying on tenants in order to evict them or raise their rent. Teman, a tech company that makes surveillance systems for apartment buildings ... proposes a solution to a frustration for many New York City landlords, who have tenants living in older apartments that are protected by a myriad of rent control and stabilization laws. The company’s email suggests a workaround: “3 Simple Steps to Re-Regulate a Unit.” First, use one of Teman’s automated products to catch a tenant breaking a law or violating their lease, such as by having unapproved subletters or loud parties. Then, “vacate” them and merge their former apartment with one next door or above or below, creating a “new” unit that’s not eligible for rent protections. “Combine a $950/mo studio and $1400/mo one-bedroom into a $4200/mo DEREGULATED two-bedroom,” the email enticed. Teman’s surveillance systems can even “help you identify which units are most-likely open to moving out (or being evicted!).” Two affordable New York City developments made headlines when tenants successfully organized to stop their respective owners’ plans to install facial recognition systems: Atlantic Towers in Brooklyn and Knickerbocker Village in the Lower East Side.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Columbus landlords are now turning to artificial intelligence to evict tenants from their homes. [Attorney Jyoshu] Tsushima works for the Legal Aid Society of Southeast and Central Ohio and focuses on evictions. In June, nearly 2,000 evictions were filed within Franklin County Municipal Court. Tsushima said the county is on track to surpass 24,000 evictions for the year. In eviction court, he said, both property management staffers and his clients describe software that is used to automatically evict tenants. He said human employees don't determine who will be kicked out but they're the ones who place the eviction notices up on doors. Hope Matfield contacted ABC6 ... after she received an eviction notice on her door at Eden of Caleb's Crossing in Reynoldsburg in May. "They're profiting off people living in hell, basically," Matfield [said]. "I had no choice. I had to make that sacrifice, do a quick move and not know where my family was going to go right away." In February, Matfield started an escrow case against her property management group, 5812 Investment Group. When Matfield missed a payment, the courts closed her case and gave the escrow funds to 5812 Investment Group. Matfield received her eviction notice that same day. The website for 5812 Investment Group indicates it uses software from RealPage. RealPage is subject to a series of lawsuits across the country over algorithms that multiple attorneys general claim cause price-fixing on rents.
Note: Read more about how tech companies are increasingly marketing smart tools to landlords for a troubling purpose: surveilling tenants to justify evictions or raise their rent. For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that’s being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to give a third party data broker money to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.