Paxton's win against Meta is a win for privacy. It's only a first step.
2024-08-12, Houston Chronicle
https://www.houstonchronicle.com/opinion/editorials/article/paxton-facebook-m...

If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace — the company’s controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 — a year after Meta announced it would suspend its facial recognition technology and delete its database — to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list — which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments — was leaked.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.


With JPMorgan, Mastercard on board in biometric ‘breakthrough’ year, you may soon start paying with your face
2024-05-20, CNBC News
https://www.cnbc.com/2024/05/20/this-may-be-the-year-you-pay-with-your-face-a...

Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face. CaliExpress uses a payment system from facial ID tech company PopID. It’s not the only fast-food chain to employ the technology. Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback, and lawsuits, are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology. In other countries ... biometric payment systems are comparatively mature. Visitors to McDonald’s in China ... use facial recognition technology to pay for their orders.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.


Manufacturing Consent: The Border Fiasco and the “Smart Wall”
2024-02-19, Unlimited Hangout
https://unlimitedhangout.com/2024/02/investigative-reports/manufacturing-cons...

The disastrous situation at the US-Mexico border is, and has been, intentionally produced. Illegal crossings have risen to unprecedented levels. There is a bipartisan consensus about what must be done. Tellingly, the same “solution” is also being quietly rolled out at all American ports of entry that are not currently being “overrun”, such as airports. That solution, of course, is biometric surveillance, enabled by AI, facial recognition/biometrics and autonomous devices. This “solution” is not just being implemented throughout the United States as an alleged means of thwarting migrants, it is also being rapidly implemented throughout the world in apparent lockstep. Global policy agendas, ratified by nearly every country in the world ... seek both to restrict the extent of people’s freedom of movement and to surveil people’s movements ... through the global implementation of digital identity. The defense tech firm Anduril ... is one of the main beneficiaries of government contracts to build autonomous surveillance towers along the US-Mexico border, which are now also being rolled out along the US-Canada border. Anduril will create “a digital wall that is not a barrier so much as a web of all-seeing eyes, with intelligence to know what it sees.” While Anduril is one of the main companies building the “virtual wall,” they are not alone. General Dynamics, a defense firm deeply connected to organized crime, espionage scandals and corruption, has developed several hundred remote video surveillance systems (RVSS) towers for CBP, while Google, another Big Tech firm with CIA connections, has been tapped by CBP to have its AI used in conjunction with Anduril’s towers, which also utilize Anduril’s own AI operating system known as Lattice.

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Silicon Valley is piling into the business of snooping
2023-11-05, The Economist
https://www.economist.com/business/2023/11/05/silicon-valley-is-piling-into-t...

New Yorkers may have noticed an unwelcome guest hovering round their parties. In the lead-up to Labour Day weekend the New York Police Department (NYPD) said that it would use drones to look into complaints about festivities, including back-yard gatherings. Snooping police drones are an increasingly common sight in America. According to a recent survey by researchers at the Northwestern Pritzker School of Law, about a quarter of police forces now use them. Among the NYPD’s suppliers is Skydio, a Silicon Valley firm that uses artificial intelligence (AI) to make drones easy to fly. The NYPD is also buying from BRINC, another startup, which makes flying machines equipped with night-vision cameras that can smash through windows. Facial-recognition software is now used more widely across America, too, with around a tenth of police forces having access to the technology. A report released in September by America’s Government Accountability Office found that six federal law-enforcement agencies, including the FBI and the Secret Service, were together executing an average of 69 facial-recognition searches every day. Among the top vendors listed was Clearview AI. Surveillance capabilities may soon be further fortified by generative AI, of the type that powers ChatGPT, thanks to its ability to work with “unstructured” data such as images and video footage. The technology will let users “search the Earth for objects”, much as Google lets users search the internet.

Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.


The coronavirus is giving China cover to expand its surveillance. What happens when the virus is gone?
2020-03-01, Fortune
https://fortune.com/2020/03/01/coronavirus-china-surveillance-tracking/

The outbreak of Covid-19 has been anathema for most of China's economy, but the novel coronavirus was a shot in the arm for the state's surveillance apparatus, which has expanded rapidly in pursuit of the epidemic's spread. Facial recognition cameras, phone tracking technology and voluntary registrations have all been deployed to monitor the flow of people and the possible transmission of disease. "The Chinese surveillance system currently ... has two purposes: the first is to monitor public health and the second is to maintain political control," says Francis Lee, a professor ... at the Chinese University of Hong Kong. Once the outbreak is controlled, however, it's unclear whether the government will retract its new powers. While facial recognition provides a way to monitor crowds from a distance, governments have deployed close-range means of tracking individuals too. The municipal government of Hangzhou worked with ecommerce giant Alibaba to launch a feature through the company's mobile wallet app, AliPay, that assesses the user's risk of infection. The app generates a QR code. Guards at checkpoints in residential buildings and elsewhere can then scan that code to gain details about the user. John Bacon-Shone ... at Hong Kong University thinks that the ongoing threat of outbreaks will provide a constant justification for the new systems. "I am rather pessimistic that there will be full rollback of data collection once it has been implemented," Bacon-Shone says.

Note: Remember all of the privacy and freedoms given up after 9/11? How many of those have been given back? Learn more about the serious risk of the Coronavirus increasing the surveillance state in this excellent article. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Police seldom disclose use of facial recognition despite false arrests
2024-10-06, Washington Post
https://www.washingtonpost.com/business/2024/10/06/police-facial-recognition-...

Police departments in 15 states provided The Post with rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years. According to the arrest reports in those cases and interviews with people who were arrested, authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error. Officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects “through investigative means” or that a human source such as a witness or police officer made the initial identification. Defense lawyers and civil rights groups argue that people have a right to know about any software that identifies them as part of a criminal investigation, especially a technology that has led to false arrests. The reliability of the tool has been successfully challenged in a handful of recent court cases around the country, leading some defense lawyers to posit that police and prosecutors are intentionally trying to shield the technology from court scrutiny. Misidentification by this type of software played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black. Charges were later dismissed against all of them. Federal testing of top facial recognition software has found the programs are more likely to misidentify people of color.

Note: Read about the secret history of facial recognition. For more along these lines, see concise summaries of deeply revealing news articles on AI and police corruption from reliable major media sources.


FBI, Pentagon helped research facial recognition for street cameras, drones
2023-03-07, Washington Post
https://www.washingtonpost.com/technology/2023/03/07/facial-recognition-fbi-d...

The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government's ambitions to build out a powerful tool for advanced surveillance. The documents, revealed in response to an ongoing Freedom of Information Act lawsuit the American Civil Liberties Union filed against the FBI, show how closely FBI and Defense officials worked with academic researchers to refine artificial-intelligence techniques that could help in the identification or tracking of Americans without their awareness or consent. Many of the records relate to the Janus program, a project funded by the Intelligence Advanced Research Projects Agency, or IARPA. The improved facial recognition system was ultimately folded into a search tool, called Horus, and made available to the Pentagon's Combating Terrorism Technical Support Office, which helps provide military technologies to civilian police forces. No federal laws regulate how facial recognition systems can be used. The tool's use in domestic mass surveillance would be a "nightmare scenario," said Nathan Wessler, a deputy director at the ACLU. "It could give the government the ability to pervasively track as many people as they want for as long as they want. There's no good outcome for that in a democratic society."

Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and the disappearance of privacy from reliable major media sources.


How AI-Powered Police Forces Watch Your Every Move
2025-06-07, The Marshall Project
https://www.themarshallproject.org/2025/06/07/ai-police-camera-new-orleans

From facial recognition to predictive analytics to the rise of increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up. Take New Orleans, where, for the past two years, police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the whereabouts of people on wanted lists. In 2022, City Council members attempted to put guardrails on the use of facial recognition. But those guidelines assume it's the police doing the searching. New Orleans police have hundreds of cameras, but the alerts in question came from a separate system: a network of 200 cameras equipped with facial recognition and installed by residents and businesses on private property, feeding video to a nonprofit called Project NOLA. Police officers who downloaded the group's app then received notifications when someone on a wanted list was detected on the camera network, along with a location. That has civil liberties groups and defense attorneys in Louisiana frustrated. “When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there, and we don’t have the tools to ... hold people accountable,” Danny Engelberg, New Orleans’ chief public defender, [said]. Another way departments can skirt facial recognition rules is to use AI analysis that doesn’t technically rely on faces.

Note: Learn about all the high-tech tools police use to surveil protestors. For more along these lines, read our concise summaries of news articles on AI and police corruption.


Protest Under a Surveillance State Microscope
2024-11-04, Project on Government Oversight
https://www.pogo.org/analysis/protest-under-a-surveillance-state-microscope

Before the digital age, law enforcement would conduct surveillance through methods like wiretapping phone lines or infiltrating an organization. Now, police surveillance can reach into the most granular aspects of our lives during everyday activities, without our consent or knowledge — and without a warrant. Technology like automated license plate readers, drones, facial recognition, and social media monitoring has added a uniquely dangerous element to the surveillance that comes with physical intimidation by law enforcement. With greater technological power in the hands of police, surveillance technology is crossing into a variety of new and alarming contexts. Law enforcement has partnered with companies like Clearview AI, whose facial recognition database of billions of images scraped from the internet ... has been used by agencies across the country, including within the federal government. When the social networking app on your phone can give police details about where you’ve been and who you’re connected to, or your browsing history can provide law enforcement with insight into your most closely held thoughts, the risks of self-censorship are great. When artificial intelligence tools or facial recognition technology can piece together your life in a way that was previously impossible, it gives the ones with the keys to those tools enormous power to ... maintain a repressive status quo.

Note: Facial recognition technology has played a role in the wrongful arrests of many innocent people. For more along these lines, explore concise summaries of revealing news articles on police corruption and the disappearance of privacy.


'I was misidentified as shoplifter by facial recognition tech'
2024-05-25, BBC News
https://www.bbc.com/news/technology-69055945

Sara needed some chocolate - she had had one of those days - so wandered into a Home Bargains store. "Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'." Sara ... was wrongly accused after being flagged by a facial-recognition system called Facewatch. She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology. Facewatch later wrote to Sara and acknowledged it had made an error. Facewatch is used in numerous stores in the UK. It's not just retailers who are turning to the technology. On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech. 192 arrests have been made so far this year as a result of it. But civil liberty groups are worried that its accuracy is yet to be fully established, and point to cases such as Shaun Thompson's. Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge. Within a few seconds, he was approached by police and told he was a wanted man. But it was a case of mistaken identity. "It felt intrusive ... I was treated guilty until proven innocent," he says. Silkie Carlo, director of Big Brother Watch, has filmed the police on numerous facial-recognition deployments. She says that anyone whose face is scanned is effectively part of a digital police line-up.

Note: For more along these lines, see concise summaries of deeply revealing news articles on artificial intelligence controversies from reliable major media sources.


These cities bar facial recognition tech. Police still found ways to access it.
2024-05-18, Washington Post
https://www.washingtonpost.com/business/2024/05/18/facial-recognition-law-enf...

As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs. In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned. SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and have appeared to get hits on some of them. Facial recognition ... technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed. In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement.

Note: Crime is increasing in many cities, leading to law enforcement agencies appropriately working to maintain public safety. Yet far too often, social justice takes a backseat while those in authority violate human rights. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and artificial intelligence from reliable major media sources.


Schools Are Normalizing Intrusive Surveillance
2023-10-06, Reason
https://reason.com/2023/10/06/schools-are-normalizing-intrusive-surveillance/

Public schools ... are the focus of a new report on surveillance and kids by the American Civil Liberties Union (ACLU). "Over the last two decades, a segment of the educational technology (EdTech) sector that markets student surveillance products to schools — the EdTech Surveillance industry — has grown into a $3.1 billion a year economic juggernaut," begins Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling. "The EdTech Surveillance industry accomplished that feat by playing on school districts' fears of school shootings, student self-harm and suicides, and bullying — marketing them as common, ever-present threats." As the authors detail, among the technologies are surveillance cameras. These are often linked to software for facial recognition, access control, behavior analysis, and weapon detection. That is, cameras scan student faces and then algorithms identify them, allow or deny them entry based on that ID, decide if their activities are threatening, and determine if objects they carry may be dangerous or forbidden. "False hits, such as mistaking a broomstick, three-ring binder, or a Google Chromebook laptop for a gun or other type of weapon, could result in an armed police response to a school," cautions the report. Students are aware that they're being observed. Of students aged 14–18 surveyed by the ACLU ... thirty-two percent say, "I always feel like I'm being watched."

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Meet The Spy Tech Companies Helping Landlords Evict People
2023-01-04, Vice
https://www.vice.com/en/article/meet-the-spy-tech-companies-helping-landlords...

Some renters may savor the convenience of “smart home” technologies like keyless entry and internet-connected doorbell cameras. But tech companies are increasingly selling these solutions to landlords for a more nefarious purpose: spying on tenants in order to evict them or raise their rent. Teman, a tech company that makes surveillance systems for apartment buildings ... proposes a solution to a frustration for many New York City landlords, who have tenants living in older apartments that are protected by a myriad of rent control and stabilization laws. The company’s email suggests a workaround: “3 Simple Steps to Re-Regulate a Unit.” First, use one of Teman’s automated products to catch a tenant breaking a law or violating their lease, such as by having unapproved subletters or loud parties. Then, “vacate” them and merge their former apartment with one next door or above or below, creating a “new” unit that’s not eligible for rent protections. “Combine a $950/mo studio and $1400/mo one-bedroom into a $4200/mo DEREGULATED two-bedroom,” the email enticed. Teman’s surveillance systems can even “help you identify which units are most-likely open to moving out (or being evicted!).” Two affordable New York City developments made headlines when tenants successfully organized to stop their respective owners’ plans to install facial recognition systems: Atlantic Towers in Brooklyn and Knickerbocker Village in the Lower East Side.

Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.


Push for Privacy Standards for Facial Recognition Falters
2015-06-16, ABC News/Associated Press
http://abcnews.go.com/Technology/wireStory/push-privacy-standards-facial-reco...

Retailers have the ability to scan your face digitally, and use that identification to offer you special prices or even recognize you as a prior shoplifter. But should they use it? Should they get your permission first? Privacy advocates announced Tuesday they have walked away from a government-run effort with industry intended to ... hash out voluntary protocols for facial recognition technology in a way that doesn't hurt consumers. The Commerce Department's National Telecommunications and Information Administration, or NTIA, was acting as mediator. The two sides had been meeting for 16 months ... until the nine major privacy groups said they had hit a dead end and that "people deserve more protection than they are likely to get in this forum. At a base minimum, people should be able to walk down a public street without fear that companies they've never heard of are tracking their every movement and identifying them by name using facial recognition technology," the groups said. "We have been unable to obtain agreement even with that basic, specific premise." The ability to apply a unique signature to a person's face, even if you don't identify them by name, is particularly invasive, according to privacy advocates. "You can change your password and your credit card number; you cannot change your fingerprints or the precise dimensions of your face. Through facial recognition, these immutable, physical facts can be used to identify you, remotely and in secret, without any recourse."

Note: Read this article for more in this matter. Remember, the same technologies that lead to the disappearance of privacy rights for individuals are also used by corrupt corporations against nonprofit civic organizations to undermine democracy.


The new totalitarianism of surveillance technology
2012-08-15, The Guardian (One of the UK's leading newspapers)
http://www.guardian.co.uk/commentisfree/2012/aug/15/new-totalitarianism-surve...

Last week, New York Mayor Michael Bloomberg joined NYPD Commissioner Ray Kelly to unveil a major new police surveillance infrastructure, developed by Microsoft. The Domain Awareness System links existing police databases with live video feeds, including cameras using vehicle license plate recognition software. No mention was made of whether the system plans to use or already uses facial recognition software. But, at present, there is no law to prevent US government and law enforcement agencies from building facial recognition databases. And we know from industry newsletters that the US military, law enforcement, and the department of homeland security are betting heavily on facial recognition technology. As PC World notes, Facebook itself is a market leader in the technology but military and security agencies are close behind. According to Homeland Security Newswire, billions of dollars are being invested in the development and manufacture of various biometric technologies capable of detecting and identifying anyone, anywhere in the world via iris-scanning systems, already in use; foot-scanning technology (really); voice pattern ID software, and so on. What is very obvious is that this technology will not be applied merely to people under arrest, or to people under surveillance in accordance with the fourth amendment. No, the "targets" here [include] everyone. In the name of "national security", the capacity is being built to identify, track and document any citizen constantly and continuously.

Note: For deeply revealing reports from reliable major media sources on civil liberties, click here.


Texas AG wins $1.4B settlement from Facebook parent Meta over facial-capture charges
2024-07-30, NBC News
https://www.nbcnews.com/business/business-news/texas-ag-wins-1point4-billion-...

Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a “tag” feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said — something that violated a 2009 state statute governing the use of biometric data, as well as running afoul of the state's deceptive trade practices act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued how it had previously used face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. “This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights," Paxton said.

Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.


India is trying to build the world's biggest facial recognition system
2019-10-18, CNN News
https://www.cnn.com/2019/10/17/tech/india-facial-recognition-intl-hnk/index.html

India has just 144 police officers for every 100,000 citizens. In recent years, authorities have turned to facial recognition technology to make up for the shortfall. India's government now ... wants to construct one of the world's largest facial recognition systems. The project envisions a future in which police from across the country's 29 states and seven union territories would have access to a single, centralized database. The daunting scope of the proposed network is laid out in a detailed 172-page document published by the National Crime Records Bureau, which requests bids from companies to build the project. The project would match images from the country's growing network of CCTV cameras against a database encompassing mug shots of criminals, passport photos and images collected by [government] agencies. It would also recognize faces on closed-circuit cameras and "generate alerts if a blacklist match is found." Security forces would be equipped with hand-held mobile devices enabling them to capture a face in the field and search it instantly against the national database, through a dedicated app. For privacy advocates, this is worrying. "India does not have a data protection law," says [Apar] Gupta [of the Internet Freedom Foundation]. "It will essentially be devoid of safeguards." It might even be linked up to Aadhaar, India's vast biometric database, which contains the personal details of 1.2 billion Indian citizens, enabling India to set up "a total, permanent surveillance state," he adds.

Note: Read an excellent article by The Civil Liberties Union for Europe about the 7 biggest privacy issues that concern facial recognition technology. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Biometric Surveillance Means Someone Is Always Watching
2014-04-17, Newsweek
https://www.newsweek.com/2014/04/25/biometric-surveillance-means-someone-alwa...

From 2008 to 2010, as Edward Snowden has revealed, the National Security Agency (NSA) collaborated with the British Government Communications Headquarters to intercept the webcam footage of over 1.8 million Yahoo users. The agencies were analyzing images they downloaded from webcams and scanning them for known terrorists who might be using the service to communicate, matching faces from the footage to suspects with the help of a new technology called face recognition. In attempting to find faces, GCHQ's Optic Nerve program recorded webcam sex by its unknowing targets—up to 11 percent of the material the program collected was "undesirable nudity" that employees were warned not to access. And that's just the beginning of what face recognition technology might mean for us in the digital era. The U.S. government is in the process of building the world's largest cache of face recognition data, with the goal of identifying every person in the country. The creation of such a database would mean that anyone could be tracked wherever his or her face appears, whether it's on a city street or in a mall. Today's laws don't protect Americans from having their webcams scanned for facial data. "If cameras connected to databases can do face recognition, it will become impossible to be anonymous in society," [attorney Jennifer] Lynch says. That means every person in the U.S. would be passively tracked at all times.

Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.


Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
2024-03-06, Yahoo News
https://finance.yahoo.com/news/emotion-tracking-ai-job-workers-133506859.html

Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices, as well as text and how people use their computers, promising to detect and predict how someone is feeling. Over 50% of large employers in the U.S. use emotion AI to infer employees' internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice. We wondered what workers think about these technologies. My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey. 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions. Despite emotion AI's claimed goal of inferring and improving workers' well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: "They could decide that I am no longer a good fit at work and fire me. Decide I'm not capable enough and not give a raise, or think I'm not working enough." Participants ... said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace.

Note: The above article was written by Nazanin Andalibi at the University of Michigan. For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.


‘A privacy nightmare’: the $400m surveillance package inside the US immigration bill
2024-02-06, The Guardian (One of the UK's Leading Newspapers)
https://www.theguardian.com/us-news/2024/feb/06/us-immigration-bill-mexico-bo...

The $118bn bipartisan immigration bill that the US Senate introduced on Sunday is already facing steep opposition. The 370-page measure, which also would provide additional aid to Israel and Ukraine, has drawn the ire of both Democrats and Republicans over its proposed asylum and border laws. But privacy, immigration and digital liberties experts are also concerned over another aspect of the bill: more than $400m in funding for additional border surveillance and data-gathering tools. The lion’s share of that funding will go to two main tools: $170m for additional autonomous surveillance towers and $204m for “expenses related to the analysis of DNA samples”, which includes those collected from migrants detained by border patrol. The bill describes autonomous surveillance towers as ones that “utilize sensors, onboard computing, and artificial intelligence to identify items of interest that would otherwise be manually identified by personnel”. The rest of the funding for border surveillance ... includes $47.5m for mobile video surveillance systems and drones and $25m for “familial DNA testing”. The bill also includes $25m in funding for “subterranean detection capabilities” and $10m to acquire data from unmanned surface vehicles or autonomous boats. As of early January, CBP had deployed 396 surveillance towers along the US-Mexico border, according to the Electronic Frontier Foundation (EFF).

Note: Read more about the secret history of facial recognition technology and undeniable evidence indicating these tools do much more harm than good. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.