AI Media Articles
Artificial intelligence (AI) is an emerging technology with great promise and great potential for abuse. Below are key excerpts of revealing news articles on AI technology from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
In 2003 [Alexander Karp] – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA! The lesson that Karp and his co-author draw [in their book The Technological Republic: Hard Power, Soft Belief and the Future of the West] is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and intelligence agency corruption.
Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm". Human Rights Watch has criticised the decision, telling the BBC that AI can "complicate accountability" for battlefield decisions that "may have life or death consequences." Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems. "For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever," said Anna Bacciarelli, senior AI researcher at Human Rights Watch. The "unilateral" decision also showed "why voluntary principles are not an adequate substitute for regulation and binding law," she added. In January, MPs argued that the conflict in Ukraine had shown the technology "offers serious military advantage on the battlefield." As AI becomes more widespread and sophisticated it would "change the way defence works, from the back office to the frontline," Emma Lewell-Buck MP ... wrote. Concern is greatest over the potential for AI-powered weapons capable of taking lethal action autonomously, with campaigners arguing controls are urgently needed. The Doomsday Clock - which symbolises how near humanity is to destruction - cited that concern in its latest assessment of the dangers mankind faces.
Note: For more along these lines, read our concise summaries of news articles on AI and Big Tech.
Instagram has released a long-promised “reset” button to U.S. users that clears the algorithms it uses to recommend you photos and videos. TikTok offers a reset button, too. And with a little bit more effort, you can also force YouTube to start fresh with how it recommends what videos to play next. It means you now have the power to say goodbye to endless recycled dance moves, polarizing Trump posts, extreme fitness challenges, dramatic pet voice-overs, fruit-cutting tutorials, face-altering filters or whatever else has taken over your feed like a zombie. I know some people love what their apps show them. But the reality is, none of us are really in charge of our social media experience anymore. Instead of just friends, family and the people you choose to follow, nowadays your feed or For You Page is filled with recommended content you never asked for, selected by artificial-intelligence algorithms. Their goal is to keep you hooked, often by showing you things you find outrageous or titillating — not joyful or calming. And we know from Meta whistleblower Frances Haugen and others that outrage algorithms can take a particular toll on young people. That’s one reason they’re offering a reset now: because they’re under pressure to give teens and families more control. So how does the algorithm go awry? It tries to get to know you by tracking every little thing you do. They’re even analyzing your “dwell time,” when you unconsciously scroll more slowly.
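The “dwell time” signal described above can be illustrated with a toy sketch. All names, numbers, and the threshold below are hypothetical; real recommendation systems combine many such signals, but the core idea — slower scrolling is read as implicit interest — looks roughly like this:

```python
# Toy illustration of dwell-time tracking: lingering on a post, even
# without tapping anything, is treated as a signal of interest.
# All post names and thresholds here are hypothetical.
from collections import defaultdict

dwell_seconds = defaultdict(float)

def record_scroll_event(post_id, seconds_on_screen):
    """Accumulate how long each post stayed on screen while scrolling."""
    dwell_seconds[post_id] += seconds_on_screen

# Simulated scroll session
record_scroll_event("dance-video", 0.4)    # flicked past quickly
record_scroll_event("outrage-post", 6.8)   # scrolled slowly, no tap needed
record_scroll_event("outrage-post", 3.1)

def inferred_interest(threshold=2.0):
    """Posts the user lingered on get boosted in future recommendations."""
    return [post for post, t in dwell_seconds.items() if t >= threshold]

print(inferred_interest())  # ['outrage-post']
```

Note that the user never liked, shared, or commented on anything; the inference comes purely from passive scrolling behavior, which is why a feed “reset” that clears these accumulated signals can noticeably change what gets recommended.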
Note: Read about the developer who got permanently banned from Meta for developing a tool called “Unfollow Everything” that lets users, well, unfollow everything and restart their feeds fresh. For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of. The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company. The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people. Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space. RTB is regularly exploited for government surveillance. The privacy and security dangers of RTB are inherent to its design. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day.
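The bid-request broadcast described above can be sketched in a few lines. The field names below are illustrative (loosely inspired by the OpenRTB convention, not taken from the article), and the point of the sketch is the structural problem the excerpt identifies: every bidder receives — and can retain — the personal data, whether or not it ever bids:

```python
# Minimal sketch of a real-time bidding (RTB) broadcast.
# Field names are illustrative, loosely modeled on OpenRTB; not from the article.

bid_request = {
    "auction_id": "auction-123",
    "user": {
        "advertising_id": "a1b2c3d4",            # unique advertising ID
        "ip": "203.0.113.7",
        "location": {"lat": 40.74, "lon": -74.0},
        "device": "smartphone",
        "interests": ["fitness", "travel"],
        "demographics": {"age_range": "25-34"},
    },
    "page_context": "news/health-article",       # what the user is viewing
}

def broadcast(request, bidders):
    """Send the same bid request to every potential advertiser.
    Each bidder keeps a copy of the 'bidstream data' regardless of
    whether it actually bids on the ad space."""
    retained = []
    for bidder in bidders:
        retained.append((bidder, dict(request["user"])))  # bidder stores the data
    return retained

records = broadcast(bid_request, [f"bidder-{i}" for i in range(1000)])
print(len(records))  # all 1,000 bidders now hold the user's personal data
```

One page load triggers one such broadcast per ad slot, which is how the process can expose a person's data to thousands of companies, hundreds of times per day.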
Note: Clearview AI scraped billions of faces off of social media without consent and at least 600 law enforcement agencies tapped into its database. During this time, Clearview was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked to hackers. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Militaries, law enforcement, and more around the world are increasingly turning to robot dogs — which, if we're being honest, look like something straight out of a science-fiction nightmare — for a variety of missions ranging from security patrol to combat. Robot dogs first really came on the scene in the early 2000s with Boston Dynamics' "BigDog" design. They have been used in both military and security activities. In November, for instance, it was reported that robot dogs had been added to President-elect Donald Trump's security detail and were on patrol at his home in Mar-a-Lago. Some of the remote-controlled canines are equipped with sensor systems, while others have been equipped with rifles and other weapons. One Ohio company made one with a flamethrower. Some of these designs not only look eerily similar to real dogs but also act like them, which can be unsettling. In the Ukraine war, robot dogs have seen use on the battlefield, the first known combat deployment of these machines. Built by British company Robot Alliance, the systems aren't autonomous, instead being operated by remote control. They are capable of doing many of the things other drones in Ukraine have done, including reconnaissance and attacking unsuspecting troops. The dogs have also been useful for scouting out the insides of buildings and trenches, particularly smaller areas where operators have trouble flying an aerial drone.
Note: Learn more about the troubling partnership between Big Tech and the military. For more, read our concise summaries of news articles on military corruption.
Mitigating the risk of extinction from AI should be a global priority. However, as many AI ethicists warn, this blinkered focus on the existential future threat to humanity posed by a malevolent AI ... has often served to obfuscate the myriad more immediate dangers posed by emerging AI technologies. These “lesser-order” AI risks ... include pervasive regimes of omnipresent AI surveillance and panopticon-like biometric disciplinary control; the algorithmic replication of existing racial, gender, and other systemic biases at scale ... and mass deskilling waves that upend job markets, ushering in an age monopolized by a handful of techno-oligarchs. Killer robots have become a twenty-first-century reality, from gun-toting robotic dogs to swarms of autonomous unmanned drones, changing the face of warfare from Ukraine to Gaza. Palestinian civilians have frequently spoken about the paralyzing psychological trauma of hearing the “zanzana” — the ominous, incessant, unsettling, high-pitched buzzing of drones loitering above. Over a decade ago, children in Waziristan, a region of Pakistan’s tribal belt bordering Afghanistan, experienced a similar debilitating dread of US Predator drones that manifested as a fear of blue skies. “I no longer love blue skies. In fact, I now prefer gray skies. The drones do not fly when the skies are gray,” stated thirteen-year-old Zubair in his testimony before Congress in 2013.
Note: For more along these lines, read our concise summaries of news articles on AI and military corruption.
When Megan Rothbauer suffered a heart attack at work in Wisconsin, she was rushed to hospital in an ambulance. The nearest hospital was “not in network”, which left Ms Rothbauer with a $52,531.92 bill for her care. Had the ambulance driven a further three blocks to Meriter Hospital in Madison, the bill would have been a more modest $1,500. The incident laid bare the expensive complexity of the American healthcare system with patients finding that they are uncovered, despite paying hefty premiums, because of their policy’s small print. In many cases the grounds for refusal hinge on whether the insurer accepts that the treatment is necessary and that decision is increasingly being made by artificial intelligence rather than a physician. It is leading to coverage being denied on an industrial scale. Much of the work is outsourced, with the biggest operator being EviCore, which ... uses AI to review — and in many cases turn down — doctors’ requests for prior authorisation, guaranteeing to pay for treatment. The controversy over coverage denials was brought into sharp focus by the gunning down of UnitedHealthcare’s chief executive Brian Thompson in Manhattan. The [words written on the] casings [of] the ammunition — “deny”, “defend” and “depose” — are thought to refer to the tactics the insurance industry is accused of using to avoid paying out. UnitedHealthcare rejected one in three claims last year, about twice the industry average.
Note: For more along these lines, read our concise summaries of news articles on AI and corporate corruption.
Technology companies are having some early success selling artificial intelligence tools to police departments. Axon, widely recognized for its Taser devices and body cameras, was among the first companies to introduce AI specifically for the most common police task: report writing. Its tool, Draft One, generates police narratives directly from Axon’s bodycam audio. Currently, the AI is being piloted by 75 officers across several police departments. “The hours saved comes out to about 45 hours per police officer per month,” said Sergeant Robert Younger of the Fort Collins Police Department, an early adopter of the tool. Cassandra Burke Robertson, director of the Center for Professional Ethics at Case Western Reserve University School of Law, has reservations about AI in police reporting, especially when it comes to accuracy. “Generative AI programs are essentially predictive text tools. They can generate plausible text quickly, but the most plausible explanation is often not the correct explanation, especially in criminal investigations,” she said. In the courtroom, AI-generated police reports could introduce additional complications, especially when they rely solely on video footage rather than officer dictation. New Jersey-based lawyer Adam Rosenblum said “hallucinations” — instances when AI generates inaccurate or false information — that could distort context are another issue. Courts might need new standards ... before allowing the reports into evidence.
Note: For more along these lines, read our concise summaries of news articles on AI and police corruption.
With the misinformation category being weaponized across the political spectrum, we took a look at how invested government has become in studying and “combatting” it using your tax dollars. That research can provide the intellectual ammunition to censor people online. Since 2021, the Biden-Harris administration has spent $267 million on research grants with the term “misinformation” in the proposal. Of course, the Covid pandemic was the driving force behind so much of the misinformation debate. There is robust documentation by now proving that the Biden-Harris administration worked closely with social media companies to censor content deemed “misinformation,” which often included cases where people simply questioned or disagreed with the Administration’s COVID policies. In February the U.S. House Committee on the Judiciary and the Select Subcommittee on the Weaponization of the Federal Government issued a scathing report against the National Science Foundation (NSF) for funding grants supporting tools and processes that censor online speech. The report said, “the purpose of these taxpayer-funded projects is to develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others.” $13 million was spent on the censorious technologies profiled in the report.
Note: Read the full article on Substack to uncover all the misinformation contracts with government agencies, universities, nonprofits, and defense contractors. For more along these lines, read our concise summaries of news articles on censorship and government corruption.
At the Technology Readiness Experimentation (T-REX) event in August, the US Defense Department tested an artificial intelligence-enabled autonomous robotic gun system developed by fledgling defense contractor Allen Control Systems dubbed the “Bullfrog.” Consisting of a 7.62-mm M240 machine gun mounted on a specially designed rotating turret outfitted with an electro-optical sensor, proprietary AI, and computer vision software, the Bullfrog was designed to deliver small arms fire on drone targets with far more precision than the average US service member can achieve with a standard-issue weapon. Footage of the Bullfrog in action published by ACS shows the truck-mounted system locking onto small drones and knocking them out of the sky with just a few shots. Should the Pentagon adopt the system, it would represent the first publicly known lethal autonomous weapon in the US military’s arsenal. In accordance with the Pentagon’s current policy governing lethal autonomous weapons, the Bullfrog is designed to keep a human “in the loop” in order to avoid a potential “unauthorized engagement." In other words, the gun points at and follows targets, but does not fire until commanded to by a human operator. However, ACS officials claim that the system can operate totally autonomously should the US military require it to in the future, with sentry guns taking the entire kill chain out of the hands of service members.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technology like automated license plate readers, drones, facial recognition, and social media monitoring added a uniquely dangerous element to the surveillance that comes with physical intimidation of law enforcement. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement partnerships with companies like Clearview AI, which scraped billions of images from the internet for their facial recognition database ... has been used by law enforcement agencies across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.
Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.
The fusion of artificial intelligence (AI) and blockchain technology has generated excitement, but both fields face fundamental limitations that can’t be ignored. What if these two technologies, each revolutionary in its own right, could solve each other’s greatest weaknesses? Imagine a future where blockchain networks are seamlessly efficient and scalable, thanks to AI’s problem-solving prowess, and where AI applications operate with full transparency and accountability by leveraging blockchain’s immutable record-keeping. This vision is taking shape today through a new wave of decentralized AI projects. Leading the charge, platforms like SingularityNET, Ocean Protocol, and Fetch.ai are showing how a convergence of AI and blockchain could not only solve each other’s biggest challenges but also redefine transparency, user control, and trust in the digital age. While AI’s potential is revolutionary, its centralized nature and opacity create significant concerns. Blockchain’s decentralized, immutable structure can address these issues, offering a pathway for AI to become more ethical, transparent, and accountable. Today, AI models rely on vast amounts of data, often gathered without full user consent. Blockchain introduces a decentralized model, allowing users to retain control over their data while securely sharing it with AI applications. This setup empowers individuals to manage their data’s use and fosters a safer, more ethical digital environment.
Note: Watch our 13 minute video on the promise of blockchain technology. Explore more positive stories like this on reimagining the economy and technology for good.
The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake. Academic and private sector researchers have been engaged in a race ... to create undetectable deepfakes. The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content.” JSOC wants the ability to create online user profiles that “appear to be a unique individual that ... does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.” The document notes that “the solution should include facial & background imagery, facial & background video, and audio layers.” JSOC hopes to be able to generate “selfie video” from these fabricated humans. Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.” A joint statement by the NSA, FBI, and CISA warned [that] the global proliferation of deepfake technology [is] a “top risk” for 2023. An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”
Note: Why is the Pentagon investing in advanced deepfake technology? Read about the Pentagon's secret army of 60,000 operatives who use fake online personas to manipulate public discourse. For more along these lines, see concise summaries of deeply revealing news articles on AI and media corruption from reliable major media sources.
The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities. Three related considerations have combined to shape the hype surrounding military AI. First [is] the emergence of a new military industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is the cause and effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed. Senior officers do not trust AI-enhanced capabilities. To the extent they do demonstrate increased levels of trust in machines, their trust is moderated by how machines are used.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
Police departments in 15 states provided The Post with rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years. According to the arrest reports in those cases and interviews with people who were arrested, authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error. Officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects “through investigative means” or that a human source such as a witness or police officer made the initial identification. Defense lawyers and civil rights groups argue that people have a right to know about any software that identifies them as part of a criminal investigation, especially a technology that has led to false arrests. The reliability of the tool has been successfully challenged in a handful of recent court cases around the country, leading some defense lawyers to posit that police and prosecutors are intentionally trying to shield the technology from court scrutiny. Misidentification by this type of software played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black. Charges were later dismissed against all of them. Federal testing of top facial recognition software has found the programs are more likely to misidentify people of color.
Note: Read about the secret history of facial recognition. For more along these lines, see concise summaries of deeply revealing news articles on AI and police corruption from reliable major media sources.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms — like social media and email — are “free.” But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a “free” alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams—conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody “looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Larry Ellison, the billionaire cofounder of Oracle ... said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior." Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools. Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras. "We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX. Ellison is the world's sixth-richest man with a net worth of $157 billion.
Note: As journalist Kenan Malik put it, "The problem we face is not that machines may one day exercise power over humans. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Big tech companies have spent vast sums of money honing algorithms that gather their users’ data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call “algorithmic personalized pricing,” which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: “surveillance pricing.” In July the FTC sent information-seeking orders to eight companies that “have publicly touted their use of AI and machine learning to engage in data-driven targeting,” says the agency’s chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. “Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores,” [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart—which is not being probed by the FTC—says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more—and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower’s risk rating.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour—and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that’s being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to give a third party data broker money to hand over the data from your phone — rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Important Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.