We Built a Surveillance State. What Now?
2024-08-20, Project on Government Oversight
https://www.pogo.org/analysis/we-built-a-surveillance-state-what-now

Surveillance technologies have evolved at a rapid clip over the last two decades — as has the government’s willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed the attacks of September 11 poured the concrete of the surveillance state’s foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement or other government agency, such as the NSA or Department of Defense, to pay a third-party data broker for the data from your phone rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Paxton's win against Meta is a win for privacy. It's only a first step.
2024-08-12, Houston Chronicle
https://www.houstonchronicle.com/opinion/editorials/article/paxton-facebook-m...

If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.


Big tech firms profit from disorder. Don’t let them use these riots to push for more surveillance
2024-08-07, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/article/2024/aug/07/big-tech-disord...

The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn’t the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn’t dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
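The cited inaccuracy figure makes more sense in light of base-rate arithmetic: when almost nobody in a scanned crowd is actually on a watchlist, even a highly accurate system produces mostly false alerts. Here is a minimal sketch in Python; every number in it (crowd size, watchlist count, accuracy rates) is an illustrative assumption, not Big Brother Watch's data:

```python
# Illustrative base-rate arithmetic for live facial recognition alerts.
# Every number here is a hypothetical assumption for illustration.
scanned = 100_000       # faces scanned during a deployment (assumed)
on_watchlist = 50       # scanned people actually on the watchlist (assumed)
tpr = 0.90              # true positive rate per wanted person (assumed)
fpr = 0.001             # false positive rate per innocent scan (assumed)

true_alerts = on_watchlist * tpr                 # wanted people flagged
false_alerts = (scanned - on_watchlist) * fpr    # innocent people flagged
share_false = false_alerts / (true_alerts + false_alerts)

print(f"true alerts:  {true_alerts:.0f}")                     # 45
print(f"false alerts: {false_alerts:.0f}")                    # ~100
print(f"share of alerts that are false: {share_false:.0%}")   # ~69%
# Even at 99.9% per-scan specificity, most alerts are false matches,
# because nearly everyone scanned is not on the watchlist.
```

Under these assumed numbers, roughly two-thirds of all alerts are false positives, which is why low watchlist prevalence, not just per-scan accuracy, drives figures like the one Big Brother Watch reports.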

Note: Law enforcement agencies in many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


Texas AG wins $1.4B settlement from Facebook parent Meta over facial-capture charges
2024-07-30, NBC News
https://www.nbcnews.com/business/business-news/texas-ag-wins-1point4-billion-...

Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data and ran afoul of the state's Deceptive Trade Practices Act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook ended its previous use of face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights," Paxton said.

Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


In Fresh Hell, American Vending Machines are Selling Bullets Using Facial Recognition
2024-07-08, Futurism
https://futurism.com/vending-machines-bullets-facial-recognition

A growing number of supermarkets in Alabama, Oklahoma, and Texas are selling bullets by way of AI-powered vending machines, as first reported by Alabama's Tuscaloosa Thread. The company behind the machines, a Texas-based venture dubbed American Rounds, claims on its website that its dystopian bullet kiosks are outfitted with "built-in AI technology" and "facial recognition software," which allegedly allow the devices to "meticulously verify the identity and age of each buyer." As showcased in a promotional video, using one is an astoundingly simple process: walk up to the kiosk, provide identification, and let a camera scan your face. If its embedded facial recognition tech says you are in fact who you say you are, the automated machine coughs up some bullets. According to American Rounds, the main objective is convenience. Its machines are accessible "24/7," its website reads, "ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines." Though officials in Tuscaloosa, where two machines have been installed, [said] that the devices are in full compliance with the Bureau of Alcohol, Tobacco, Firearms and Explosives' standards ... at least one of the devices has been taken down amid a Tuscaloosa city council investigation into its legal standing. "We have over 200 store requests for AARM [Automated Ammo Retail Machine] units covering approximately nine states currently," [American Rounds CEO Grant Magers] told Newsweek, "and that number is growing daily."

Note: Facial recognition technology is far from reliable. For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence from reliable major media sources.


'I was misidentified as shoplifter by facial recognition tech'
2024-05-25, BBC News
https://www.bbc.com/news/technology-69055945

Sara needed some chocolate - she had had one of those days - so she wandered into a Home Bargains store. "Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'." Sara ... was wrongly accused after being flagged by a facial-recognition system called Facewatch. She says that after her bag was searched, she was led out of the shop and told she was banned from all stores using the technology. Facewatch later wrote to Sara and acknowledged it had made an error. Facewatch is used in numerous stores in the UK, and it's not just retailers who are turning to the technology. On the day we were filming, the Metropolitan Police said they had made six arrests with the assistance of the tech; 192 arrests have been made so far this year as a result of it. But civil liberty groups are worried that its accuracy is yet to be fully established, and point to cases such as Shaun Thompson's. Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge. Within a few seconds, he was approached by police and told he was a wanted man. But it was a case of mistaken identity. "It felt intrusive ... I was treated guilty until proven innocent," he says. Silkie Carlo, director of Big Brother Watch, has filmed the police on numerous facial-recognition deployments. She says that anyone whose face is scanned is effectively part of a digital police line-up.

Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.


With JPMorgan, Mastercard on board in biometric ‘breakthrough’ year, you may soon start paying with your face
2024-05-20, CNBC News
https://www.cnbc.com/2024/05/20/this-may-be-the-year-you-pay-with-your-face-a...

Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback and lawsuits are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.


These cities bar facial recognition tech. Police still found ways to access it.
2024-05-18, Washington Post
https://www.washingtonpost.com/business/2024/05/18/facial-recognition-law-enf...

As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs. In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned. SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them. Facial recognition ... technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed. In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement.

Note: Crime is increasing in many cities, and law enforcement agencies are rightly working to maintain public safety. Yet far too often, social justice takes a backseat while those in authority violate human rights. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and artificial intelligence from reliable major media sources.


Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas
2024-03-22, Electronic Frontier Foundation
https://www.eff.org/deeplinks/2024/03/cops-running-dna-manufactured-faces-thr...

Police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list. Parabon NanoLabs ... alleges it can create an image of the suspect’s face from their DNA. The company claims to have built this system by training machine learning models on the DNA data of thousands of volunteers paired with 3D scans of their faces. The process is yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software. Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. This ... threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face. These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice.

Note: Law enforcement officers in many U.S. states are not required to reveal that they used face recognition technology to identify suspects. For more along these lines, see concise summaries of important news articles on police corruption and the erosion of civil liberties from reliable major media sources.


Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
2024-03-06, Yahoo News
https://finance.yahoo.com/news/emotion-tracking-ai-job-workers-133506859.html

Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices, as well as text and how people use their computers, promising to detect and predict how someone is feeling. Over 50% of large employers in the U.S. use emotion AI, aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice. We wondered what workers think about these technologies. My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey: 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions. Despite emotion AI’s claimed goal of inferring and improving workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.” Participants ... said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace.

Note: The above article was written by Nazanin Andalibi at the University of Michigan. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.


Manufacturing Consent: The Border Fiasco and the “Smart Wall”
2024-02-19, Unlimited Hangout
https://unlimitedhangout.com/2024/02/investigative-reports/manufacturing-cons...

The disastrous situation at the US-Mexico border is, and has been, intentionally produced. Illegal crossings have risen to unprecedented levels. There is a bipartisan consensus about what must be done. Tellingly, the same “solution” is also being quietly rolled out at all American ports of entry that are not currently being “overrun”, such as airports. That solution, of course, is biometric surveillance, enabled by AI, facial recognition/biometrics and autonomous devices. This “solution” is not just being implemented throughout the United States as an alleged means of thwarting migrants; it is also being rapidly implemented throughout the world in apparent lockstep. Global policy agendas, ratified by nearly every country in the world ... seek both to restrict the extent of people’s freedom of movement and to surveil people’s movements ... through the global implementation of digital identity. The defense tech firm Anduril ... is one of the main beneficiaries of government contracts to build autonomous surveillance towers along the US-Mexico border, which are now also being rolled out along the US-Canada border. Anduril will create “a digital wall that is not a barrier so much as a web of all-seeing eyes, with intelligence to know what it sees.” While Anduril is one of the main companies building the “virtual wall,” it is not alone. General Dynamics, a defense firm deeply connected to organized crime, espionage scandals and corruption, has developed several hundred remote video surveillance system (RVSS) towers for CBP, while Google, another Big Tech firm with CIA connections, has been tapped by CBP to have its AI used in conjunction with Anduril’s towers, which also utilize Anduril’s own AI operating system, known as Lattice.

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


‘A privacy nightmare’: the $400m surveillance package inside the US immigration bill
2024-02-06, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/us-news/2024/feb/06/us-immigration-bill-mexico-bo...

The $118bn bipartisan immigration bill that the US Senate introduced on Sunday is already facing steep opposition. The 370-page measure, which also would provide additional aid to Israel and Ukraine, has drawn the ire of both Democrats and Republicans over its proposed asylum and border laws. But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools. The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol. The bill describes autonomous surveillance towers as ones that “utilize sensors, onboard computing, and artificial intelligence to identify items of interest that would otherwise be manually identified by personnel”. The rest of the funding for border surveillance ... includes $47.5m for mobile video surveillance systems and drones and $25m for “familial DNA testing”. The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats. As of early January, CBP had deployed 396 surveillance towers along the US-Mexico border, according to the Electronic Frontier Foundation (EFF).

Note: Read more about the secret history of facial recognition technology and undeniable evidence indicating these tools do much more harm than good. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


TSA sparks privacy concerns amid plans to install facial recognition systems at 400 US airports
2024-02-01, New York Post
https://nypost.com/2024/02/01/lifestyle/tsa-sparks-privacy-concerns-amid-plan...

The Transportation Security Administration (TSA) sparked privacy concerns after unveiling plans to roll out controversial facial recognition tech in over 400 US airports soon. “TSA is in the early stages of deploying its facial recognition capability to airport security checkpoints,” a spokesperson [said] regarding the ambitious program. They explained that the cutting-edge tech serves to both enhance and expedite the screening process for passengers. Dubbed CAT-2 machines, these automated identification systems accomplish this by incorporating facial recognition tech to snap real-time pictures of travelers. They then compare this biometric data against the flyer’s photo ID to verify that the person standing at the checkpoint is the person on the ID. The CAT-2 units also enable “traveler use of mobile driver’s licenses,” thereby improving the security experience, per the spokesperson. The TSA currently has 600 CAT-2 units deployed at about 50 airports nationwide and plans to expand them to 400 federalized airports in the future. Following the implementation of these machines at US airports last winter, lawmakers expressed concerns that they present a major privacy issue. “The TSA program is a precursor to a full-blown national surveillance state,” said Oregon Senator Jeff Merkley. “Nothing could be more damaging to our national values of privacy and freedom. No government should be trusted with this power.”
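Mechanically, what the article describes is one-to-one verification (does the live capture match the presented ID?) rather than one-to-many identification against a database. Below is a minimal sketch of such a comparison using cosine similarity between face embeddings; the embedding size, the 0.6 threshold, and the placeholder vectors are hypothetical stand-ins for illustration, not details of TSA's actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, id_photo: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 check: does the live camera capture match the photo ID?
    The threshold is an assumed tuning parameter; real systems
    calibrate it against false-accept and false-reject targets."""
    return cosine_similarity(live, id_photo) >= threshold

# Usage with placeholder vectors (a real system would derive embeddings
# from a face-recognition model applied to the camera frame and ID photo):
rng = np.random.default_rng(0)
id_vec = rng.normal(size=128)
live_vec = id_vec + rng.normal(scale=0.1, size=128)  # same person, noisy capture
print(verify(live_vec, id_vec))  # True for a close match
```

The privacy debate the article recounts turns partly on what happens to the live capture after this comparison: a pure 1:1 check can discard the image immediately, whereas retaining or centralizing the embeddings is what critics like Senator Merkley warn enables broader surveillance.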

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Face Recognition Technology Follows a Long Analog History of Surveillance and Control Based on Identifying Physical Features
2024-01-22, ScheerPost
https://scheerpost.com/2024/01/22/face-recognition-technology-follows-a-long-...

American Amara Majeed was accused of terrorism by the Sri Lankan police in 2019. Robert Williams was arrested outside his house in Detroit and detained in jail for 18 hours for allegedly stealing watches in 2020. Randal Reid spent six days in jail in 2022 for supposedly using stolen credit cards in a state he’d never even visited. In all three cases, the authorities had the wrong people. In all three, it was face recognition technology that told them they were right. Law enforcement officers in many U.S. states are not required to reveal that they used face recognition technology to identify suspects. Surveillance is predicated on the idea that people need to be tracked and their movements limited and controlled in a trade-off between privacy and security. The assumption that less privacy leads to more security is built in. That may be the case for some, but not for the people disproportionately targeted by face recognition technology. As of 2019, face recognition technology misidentified Black and Asian people at up to 100 times the rate of white people. In 2018 ... 28 members of the U.S. Congress ... were falsely matched with mug shots on file using Amazon’s Rekognition tool. Much early research into face recognition software was funded by the CIA for the purposes of border surveillance. More recently, private companies have adopted data harvesting techniques, including face recognition, as part of a long practice of leveraging personal data for profit.

Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.


Palestine: “Peace to Prosperity” Through Technocracy
2023-12-12, Unlimited Hangout
https://unlimitedhangout.com/2023/12/investigative-reports/palestine-peace-to...

The Palestinian population is intimately familiar with how new technological innovations are first weaponized against them – ranging from the electric fences and unmanned drones that trap people in Gaza to the facial recognition software monitoring Palestinians in the West Bank. Groups like Amnesty International have described Israel’s system as an “automated apartheid” and repeatedly highlight stories, testimonies, and reports about cyber-intelligence firms, including the infamous NSO Group (the Israeli surveillance company behind the Pegasus software), conducting field tests and experiments on Palestinians. Reports have highlighted: “Testing and deployment of AI surveillance and predictive policing systems in Palestinian territories. In the occupied West Bank, Israel increasingly utilizes facial recognition technology to monitor and regulate the movement of Palestinians. Israeli military leaders described AI as a significant force multiplier, allowing the IDF to use autonomous robotic drone swarms to gather surveillance data, identify targets, and streamline wartime logistics.” Palestinian towns and villages near Israeli settlements have been described as laboratories where security solutions companies experiment with their technologies on Palestinians before marketing them to places like Colombia. The Israeli government hopes to crystalize its “automated apartheid” through the tokenization and privatization of various industries and the establishment of a technocratic government in Gaza.

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


AI doesn’t cause harm by itself. We should worry about the people who control it
2023-11-26, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/2023/nov/26/artificial-intelligence...

OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI, which, roughly speaking, is a machine that can accomplish, or surpass, any intellectual task humans can perform. It would do so, however, in an ethical fashion to benefit “humanity as a whole”. Two years ago, a group of OpenAI researchers left to start a new organisation, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that “there was a 20% chance that a rogue AI would destroy humanity within the next decade”. One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The problem we face is not that machines may one day exercise power over humans. That is speculation unwarranted by current developments. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power. For those who hold social, political and economic power, it makes sense to project problems as technological rather than social and as lying in the future rather than in the present. There are few tools useful to humans that cannot also cause harm. But they rarely cause harm by themselves; they do so, rather, through the ways in which they are exploited by humans, especially those with power.

Note: Read how AI is already being used for war, mass surveillance, and questionable facial recognition technology.


Silicon Valley is piling into the business of snooping
2023-11-05, The Economist
https://www.economist.com/business/2023/11/05/silicon-valley-is-piling-into-t...

New Yorkers may have noticed an unwelcome guest hovering round their parties. In the lead-up to Labour Day weekend the New York Police Department (NYPD) said that it would use drones to look into complaints about festivities, including back-yard gatherings. Snooping police drones are an increasingly common sight in America. According to a recent survey by researchers at the Northwestern Pritzker School of Law, about a quarter of police forces now use them. Among the NYPD’s suppliers is Skydio, a Silicon Valley firm that uses artificial intelligence (AI) to make drones easy to fly. The NYPD is also buying from BRINC, another startup, which makes flying machines equipped with night-vision cameras that can smash through windows. Facial-recognition software is now used more widely across America, too, with around a tenth of police forces having access to the technology. A report released in September by America’s Government Accountability Office found that six federal law-enforcement agencies, including the FBI and the Secret Service, were together executing an average of 69 facial-recognition searches every day. Among the top vendors listed was Clearview AI. Surveillance capabilities may soon be further fortified by generative AI, of the type that powers ChatGPT, thanks to its ability to work with “unstructured” data such as images and video footage. The technology will let users “search the Earth for objects”, much as Google lets users search the internet.

Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.


Schools Are Normalizing Intrusive Surveillance
2023-10-06, Reason
https://reason.com/2023/10/06/schools-are-normalizing-intrusive-surveillance/

Public schools ... are the focus of a new report on surveillance and kids by the American Civil Liberties Union (ACLU). "Over the last two decades, a segment of the educational technology (EdTech) sector that markets student surveillance products to schools — the EdTech Surveillance industry — has grown into a $3.1 billion a year economic juggernaut," begins Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling. "The EdTech Surveillance industry accomplished that feat by playing on school districts' fears of school shootings, student self-harm and suicides, and bullying — marketing them as common, ever-present threats." As the authors detail, among the technologies are surveillance cameras. These are often linked to software for facial recognition, access control, behavior analysis, and weapon detection. That is, cameras scan student faces and then algorithms identify them, allow or deny them entry based on that ID, decide if their activities are threatening, and determine if objects they carry may be dangerous or forbidden. "False hits, such as mistaking a broomstick, three-ring binder, or a Google Chromebook laptop for a gun or other type of weapon, could result in an armed police response to a school," cautions the report. Students are aware that they're being observed. Of students aged 14–18 surveyed by the ACLU ... thirty-two percent say, "I always feel like I'm being watched."

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


World powers in rush to get killer robots on battlefield in AI arms race, as concerns grow they can turn on humans
2023-07-10, New York Post
https://nypost.com/2023/07/10/new-netflix-doc-unknown-killer-robots-warns-of-...

Weapons-grade robots and drones being utilized in combat isn't new. But AI software is, and it's enhancing – in some cases, to the extreme – the existing hardware, which has been modernizing warfare for the better part of a decade. Now, experts say, developments in AI have pushed us to a point where global forces have no choice but to rethink military strategy – from the ground up. "It's realistic to expect that AI will be piloting an F-16, and [that] will not be that far out," says Nathan Michael, Chief Technology Officer of Shield AI, a company whose mission is "building the world's best AI pilot." We don't truly comprehend what we're creating. There are also fears that a comfortable reliance on the technology's precision and accuracy – referred to as automation bias – may come back to haunt us, should the tech fail in a life-or-death situation. One major worry revolves around AI facial recognition software being used to enhance an autonomous robot or drone during a firefight. Right now, a human being behind the controls has to pull the proverbial trigger. Should that be taken away, militants could be mistaken for civilians or allies at the hands of a machine. And remember when the fear of our most powerful weapons being turned against us was just something you saw in futuristic action movies? With AI, that's very possible. "There is a concern over cybersecurity in AI and the ability of either foreign governments or independent actors to take over crucial elements of the military," [filmmaker Jesse Sweet] said.

Note: For more along these lines, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.


Fantasy fears about AI are obscuring how we already abuse machine intelligence
2023-06-11, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/commentisfree/2023/jun/11/big-tech-warns-of-threa...

A young African American man, Randal Quran Reid, was pulled over by the state police in Georgia. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. The arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed "a credible source" had identified Reid as the culprit. The facial recognition match was incorrect and Reid was released. Reid ... is not the only victim of a false facial recognition match. So far all those arrested in the US after a false match have been black. From surveillance to disinformation, we live in a world shaped by AI. The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with ... the humans that created the software and trained it. Too often when we talk of the "problem" of AI, we remove the human from the picture. We worry AI will "eliminate jobs" and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. We have come to view the machine as the agent and humans as victims of machine agency. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.

Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.