Facial recognition reveals political party…

Source: TechCrunch.com

Researchers have created a machine learning system that they claim can determine a person’s political party, with reasonable accuracy, based only on their face. The study, from a group that also showed that sexual preference can seemingly be inferred this way, candidly addresses and carefully avoids the pitfalls of “modern phrenology,” leading to the uncomfortable conclusion that our appearance may express more personal information than we think.

The study, which appeared this week in the Nature journal Scientific Reports, was conducted by Stanford University’s Michal Kosinski. Kosinski made headlines in 2017 with work that found that a person’s sexual preference could be predicted from facial data.

The study drew criticism not so much for its methods but for the very idea that something that’s notionally non-physical could be detected this way. But Kosinski’s work, as he explained then and afterwards, was done specifically to challenge those assumptions and was as surprising and disturbing to him as it was to others. The idea was not to build a kind of AI gaydar — quite the opposite, in fact. As the team wrote at the time, it was necessary to publish in order to warn others that such a thing may be built by people whose interests went beyond the academic:

“We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.”

“We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.”

Similar warnings may be sounded here, for while political affiliation, at least in the U.S. (and at least at present), is not as sensitive or personal an element as sexual preference, it is still sensitive and personal. A week hardly passes without news of some political or religious “dissident” or another being arrested or killed. If oppressive regimes can obtain what passes for probable cause simply by saying “the algorithm flagged you as a possible extremist,” rather than by, say, intercepting messages, this sort of practice becomes that much easier and more scalable.

The algorithm itself is not some hyper-advanced technology. Kosinski’s paper describes a fairly ordinary process of feeding a machine learning system images of more than a million faces, collected from dating sites in the U.S., Canada and the U.K., as well as from American Facebook users. The people whose faces were used identified as politically conservative or liberal as part of the sites’ questionnaires.

The algorithm was based on open-source facial recognition software. After basic processing to crop each image to just the face (so that no background items creep in as factors), each face is reduced to 2,048 scores representing various features. As with other face recognition algorithms, these aren’t necessarily intuitive things like “eyebrow color” or “nose type” but more computer-native concepts.
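For readers curious what that reduction looks like in practice, here is a minimal sketch. The pretrained ResNet-50 below is an assumption standing in for whatever open-source face recognition model the study actually used (its penultimate layer happens to output exactly 2,048 numbers); this is an illustration, not the paper’s code:

```python
# Minimal sketch: reducing one (pre-cropped) face image to a
# 2,048-dimensional descriptor. The pretrained ResNet-50 is an
# assumption standing in for the study's open-source face model.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),  # stand-in for the crop-to-face step
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

def face_descriptor(path: str) -> torch.Tensor:
    """Return a 2,048-dimensional feature vector for one face image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)  # shape: (2048,)
```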

The system was given political affiliation data sourced from the people themselves, and with this it diligently began to study the differences between the facial stats of people identifying as conservative and those identifying as liberal. Because, it turns out, there are differences.
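To make that training step concrete, here is a hedged sketch; the logistic regression, the placeholder arrays and the five-fold cross-validation are illustrative assumptions, not the paper’s reported setup:

```python
# Sketch: training a simple linear classifier to separate conservative
# from liberal face descriptors. X and y are random placeholders; in the
# study they would be real descriptors and self-reported labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2048))   # placeholder 2,048-dim descriptors
y = rng.integers(0, 2, size=1000)   # 0 = conservative, 1 = liberal

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # held-out accuracy
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

On the random placeholders above, accuracy sits at the 50% chance baseline; the study’s claim is precisely that real face descriptors push a classifier well past it.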

Of course it’s not as simple as “conservatives have bushier eyebrows” or “liberals frown more.” Nor does it come down to demographics, which would make things too easy. After all, if political party identification correlates with both age and skin color, that makes for a simple prediction algorithm right there. But although the software mechanisms used by Kosinski are quite standard, he was careful to cover his bases so that this study, like the last one, couldn’t be dismissed as pseudoscience.

The most obvious way of addressing this is by having the system make guesses as to the political party of people of the same age, gender and ethnicity. The test involved being presented with two faces, one of each party, and guessing which was which. Chance accuracy is obviously 50%. Humans aren’t very good at this task, performing only slightly above chance at about 55% accuracy.

The algorithm managed to reach as high as 71% accuracy when predicting political party between two like individuals, and 73% when presented with two individuals of any age, ethnicity or gender (but still guaranteed to be one conservative and one liberal).
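As a sketch of that pairwise test, assuming faces have already been matched into (conservative, liberal) pairs upstream and that `clf` is a fitted probabilistic classifier like the one sketched above:

```python
# Sketch of the two-face test: for each pair, call "liberal" the face
# with the higher predicted probability of being liberal. Guessing at
# random would score 0.5; the paper reports 0.71 and 0.73.
import numpy as np

def pairwise_accuracy(clf, X_cons: np.ndarray, X_lib: np.ndarray) -> float:
    """Fraction of (conservative, liberal) pairs ordered correctly.

    X_cons and X_lib hold descriptors for the conservative and liberal
    member of each pair, row-aligned; clf must already be fitted
    (e.g. via clf.fit(X, y)).
    """
    p_cons = clf.predict_proba(X_cons)[:, 1]  # P(liberal) for conservatives
    p_lib = clf.predict_proba(X_lib)[:, 1]    # P(liberal) for liberals
    return float(np.mean(p_lib > p_cons))
```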

Getting three out of four may not seem like a triumph for modern AI, but considering people can barely do better than a coin flip, there seems to be something worth considering here. Kosinski has been careful to cover other bases as well; this doesn’t appear to be a statistical anomaly or exaggeration of an isolated result.

The idea that your political party may be written on your face is an unnerving one, for while one’s political leanings are far from the most private of info, it’s also something that is very reasonably thought of as being intangible. People may choose to express their political beliefs with a hat, pin or t-shirt, but one generally considers one’s face to be nonpartisan.

If you’re wondering which facial features in particular are revealing, unfortunately the system is unable to report that. In a sort of para-study, Kosinski isolated a couple dozen facial features (facial hair, directness of gaze, various emotions) and tested whether those were good predictors of politics, but none led to more than a small increase in accuracy over chance or human expertise.

“Head orientation and emotional expression stood out: Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski wrote in author’s notes for the paper. But what they added left more than 10 percentage points of accuracy not accounted for: “That indicates that the facial recognition algorithm found many other features revealing political orientation.”

Read more…

Buried deep in Biden Infrastructure Law: mandatory kill switches on all new cars by 2026

Remember that 2,700-page, $1 trillion infrastructure bill that the US government passed back in August? Well, have you read it? Of course we’re joking; we know you haven’t read it. Most of the legislators who voted on it probably haven’t either. Some folks have, though, and they’re finding some pretty alarming things buried in that bill.

One of the most concerning things we’ve heard so far is the revelation that this “infrastructure” bill includes a measure mandating vehicle backdoor kill-switches in every car by 2026. The clause is intended to increase vehicle safety by “passively monitoring the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired,” and if that sentence doesn’t make your hair stand on end, you’re not thinking about the implications.

Let us spell it out for you: by 2026, vehicles sold in the US will be required to automatically and silently record various metrics of driver performance, and then decide, absent any human oversight, whether the owner will be allowed to use their own vehicle. Even worse, the measure goes on to require that the system be “open” to remote access by “authorized” third parties at any time.

The passage in the bill was unearthed by former Georgia Representative Bob Barr, writing over at the Daily Caller. Barr notes correctly that this is a privacy disaster in the making. Not only does it make every vehicle a potential tattletale (possibly reporting minor traffic infractions, like slight speeding or forgetting your seat-belt, to authorities or insurance companies), but tracking that data also makes it possible for bad actors to retrieve it.

More pressing than the privacy concerns, though, are the safety issues. Including an automatic kill switch of this sort in a machine with internet access presents the obvious scenario that a malicious agent could disable your vehicle remotely with no warning. Beyond that possible-but-admittedly-unlikely scenario, there are all kinds of other reasons someone might need to drive their vehicle while “impaired,” such as in an emergency or while injured.

Even if the remote access part of the mandate doesn’t come to pass, the measure is still astonishingly short-sighted. As Barr says, “the choice as to whether a vehicle can or cannot be driven … will rest in the hands of an algorithm over which the car’s owner or driver have neither knowledge or control.” Barr, a lawyer himself, points out that there are legal issues with this whole concept, too. He anticipates challenges to the measure on both 5th Amendment (right to not self-incriminate) and 6th Amendment (right to face one’s accuser) grounds. He also goes on to comment on the vagueness of the legislation. What exactly is “impaired driving”? Every state and many municipalities have differing definitions of “driving while intoxicated.”

Furthermore, there’s no detail in the legislation about who should have access to the data collected by the system. Would police need a warrant to access the recorded data? Would it be available to insurance companies or medical professionals? If someone is late on their car payment, could the lender remotely disable the vehicle? Beyond concerns over who would be allowed official access, there’s also the ever-present fear of hackers gaining access to the data, which, as security professionals well know, absolutely will happen sooner or later. As Barr says, the collected data would be a treasure trove for “all manner of entities … none of which have our best interests at heart.”

Source: HotHardware

Facebook plans to shut down its facial recognition program
  • Meta, the company formerly known as Facebook, on Tuesday announced it will be putting an end to its face recognition system.
  • The company said it will delete more than 1 billion people’s individual facial recognition templates as a result of this change.
  • Facebook services that rely on the face recognition systems will be removed over the coming weeks, Meta said.

Facebook on Tuesday announced it will be putting an end to its facial recognition system amid growing concern from users and regulators.

The social network, whose parent company is now named Meta, said it will delete more than 1 billion people’s individual facial recognition templates as a result of this change. The company said in a blog post that more than a third of Facebook’s daily active users, or over 600 million accounts, had opted into the use of the face recognition technology.

Facebook will no longer automatically recognize people’s faces in photos or videos, the post said. The change, however, will also impact the automatic alt text technology that the company uses to describe images for people who are blind or visually impaired. Facebook services that rely on the face recognition systems will be removed over the coming weeks.

“There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use,” the company said. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

Ending the use of the face recognition system is part of “a company-wide move away from this kind of broad identification,” the post said.

Meta, which laid out its road map last week for the creation of a massive virtual world, said it will still consider facial recognition technology for instances where people need to verify their identity or to prevent fraud and impersonation. For future uses of facial recognition technology, Meta will “continue to be public about intended use, how people can have control over these systems and their personal data.”

The decision to shut down the system on Facebook comes amid a barrage of news reports over the past month after Frances Haugen, a former employee turned whistleblower, released a trove of internal company documents to news outlets, lawmakers and regulators.

Read more on CNBC
