Researchers have created a machine learning system that they claim can determine a person’s political party, with reasonable accuracy, based only on their face. The study, from a group that also showed that sexual preference can seemingly be inferred this way, candidly addresses and carefully avoids the pitfalls of “modern phrenology,” leading to the uncomfortable conclusion that our appearance may express more personal information than we think.
The study, which appeared this week in the Nature journal Scientific Reports, was conducted by Stanford University’s Michal Kosinski. Kosinski made headlines in 2017 with work that found that a person’s sexual preference could be predicted from facial data.
The study drew criticism not so much for its methods but for the very idea that something that’s notionally non-physical could be detected this way. But Kosinski’s work, as he explained then and afterwards, was done specifically to challenge those assumptions and was as surprising and disturbing to him as it was to others. The idea was not to build a kind of AI gaydar — quite the opposite, in fact. As the team wrote at the time, it was necessary to publish in order to warn others that such a thing may be built by people whose interests went beyond the academic:
“We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.”
“We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.”
Similar warnings may be sounded here, for while political affiliation, at least in the U.S. and at least at present, is not as sensitive or personal an element as sexual preference, it is still sensitive and personal. Hardly a week passes without news of some political or religious “dissident” or another being arrested or killed. If oppressive regimes could obtain what passes for probable cause simply by saying “the algorithm flagged you as a possible extremist,” instead of, for example, intercepting messages, this sort of practice becomes that much easier and more scalable.
The algorithm itself is not some hyper-advanced technology. Kosinski’s paper describes a fairly ordinary process of feeding a machine learning system images of more than a million faces, collected from dating sites in the U.S., Canada and the U.K., as well as American Facebook users. The people whose faces were used identified as politically conservative or liberal as part of the sites’ questionnaires.
The algorithm was based on open-source facial recognition software, and after basic processing to crop to just the face (so that no background items creep in as factors), each face is reduced to 2,048 scores representing various features; as with other face recognition algorithms, these aren’t necessarily intuitive things like “eyebrow color” and “nose type” but more computer-native concepts.
The system was given political affiliation data sourced from the people themselves, and with this it diligently began to study the differences between the facial stats of people identifying as conservative and those identifying as liberal. Because, it turns out, there are differences.
Of course it’s not as simple as “conservatives have bushier eyebrows” or “liberals frown more.” Nor does it come down to demographics, which would make things too easy and simple. After all, if political party identification correlates with both age and skin color, that makes for a simple prediction algorithm right there. But although the software mechanisms used by Kosinski are quite standard, he was careful to cover his bases in order that this study, like the last one, can’t be dismissed as pseudoscience.
The most obvious way of addressing this is by having the system make guesses as to the political party of people of the same age, gender and ethnicity. The test involved being presented with two faces, one of each party, and guessing which was which. Obviously chance accuracy is 50%. Humans aren’t very good at this task, performing only slightly above chance at about 55% accuracy.
The algorithm managed to reach as high as 71% accuracy when predicting political party between two like individuals, and 73% when presented with two individuals of any age, ethnicity or gender (but still guaranteed to be one conservative and one liberal).
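To make the pairwise evaluation protocol concrete, here is a minimal sketch of it in Python. This is not the paper’s model or data: the face descriptors below are synthetic vectors with an artificially planted group difference, and the classifier is a plain logistic regression fit by gradient descent, chosen only to illustrate how “pick the liberal face out of a conservative–liberal pair” scoring works and why chance is 50%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's setup: each "face" is a vector of
# 2,048 scores, and the two groups differ only by a small shift along one
# hidden direction (labels: 0 = conservative, 1 = liberal). Real face
# descriptors, of course, carry no such clean planted signal.
d = 2048
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

def sample(n):
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, d)) + 1.5 * y[:, None] * direction
    return x, y

x_train, y_train = sample(2000)
x_test, y_test = sample(2000)

# Fit a plain logistic-regression classifier by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(x_train @ w + b)))  # predicted P(liberal)
    w -= 0.5 * (x_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

# Pairwise test, as in the study: show the model one face from each
# group and guess that the higher-scoring face is the liberal one.
# Guessing at random would land at 0.50 accuracy.
scores = x_test @ w + b
cons, libs = scores[y_test == 0], scores[y_test == 1]
n_pairs = min(len(cons), len(libs))
accuracy = np.mean(libs[:n_pairs] > cons[:n_pairs])
print(f"pairwise accuracy: {accuracy:.2f}")
```

Note that the pairwise setup makes the task easier than labeling a single face: the model only has to rank two candidates, one of which is guaranteed to belong to each group, which is why a 71–73% figure should not be read as "the algorithm can tell anyone's politics from a photo."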
Getting three out of four may not seem like a triumph for modern AI, but considering people can barely do better than a coin flip, there seems to be something worth considering here. Kosinski has been careful to cover other bases as well; this doesn’t appear to be a statistical anomaly or exaggeration of an isolated result.
The idea that your political party may be written on your face is an unnerving one, for while one’s political leanings are far from the most private of info, it’s also something that is very reasonably thought of as being intangible. People may choose to express their political beliefs with a hat, pin or t-shirt, but one generally considers one’s face to be nonpartisan.
If you’re wondering which facial features in particular are revealing, unfortunately the system is unable to report that. In a sort of para-study, Kosinski isolated a couple dozen facial features (facial hair, directness of gaze, various emotions) and tested whether those were good predictors of politics, but none led to more than a small increase in accuracy over chance or human expertise.
“Head orientation and emotional expression stood out: Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski wrote in author’s notes for the paper. But even together, those features left more than 10 percentage points of the algorithm’s accuracy unaccounted for: “That indicates that the facial recognition algorithm found many other features revealing political orientation.”
EU demands single plug for phones, major blow to APPLE…
LONDON (AP) — The European Union announced plans Thursday to require the smartphone industry to adopt a uniform charging cord for mobile devices, a push that could eliminate the all-too-familiar experience of rummaging through a drawer full of tangled cables to find the right one.
The European Commission, the bloc’s executive arm, proposed legislation that would mandate USB-C cables for charging, technology that many device makers have already adopted. The main holdout is Apple, which said it was concerned the new rules would limit innovation and end up hurting consumers. iPhones come with the company’s own Lightning charging port, though the newest models come with cables that can be plugged into a USB-C socket.
The push by the EU will certainly be cheered by the millions of people who have searched through a jumble of snarled cables for the one that fits their phone. But the EU also wants to cut down on the 11,000 metric tons of electronic waste thrown out every year by Europeans.
The commission said the typical EU resident owns at least three chargers and uses two regularly, but 38% of people report not being able to charge their phones at least once because they couldn’t find a compatible charger. Some 420 million mobile phones or portable electronic devices were sold in the EU last year.
Internet freedom on the decline in US and globally, study finds
The study cited a lack of regulation in the domestic tech industry, and the rise of authoritarian governments abroad
Online freedom is continuing to decline globally, according to a new study, with governments increasingly cracking down on user speech and misinformation on the rise.
The report from Freedom House, a Washington DC-based democracy advocacy group, found internet freedom declined for the fifth year in a row in the US and the 11th year internationally – for two distinct reasons.
Domestically, the lack of regulation in the tech industry has allowed companies to grow beyond reproach and misinformation to flourish online. Abroad, authoritarian governments have harnessed their tight control of the internet to subdue free expression.
Freedom House cited a growing lack of diversity among sources of online information in the US that allowed conspiracies and misinformation to rise, an issue that was gravely underscored during the 2020 elections and the 2021 insurrection at the US Capitol.
“The spread of false and conspiracist content about the November 2020 elections shook the foundations of the American political system,” the report said.
The yearly study, which has been published since 1973, measures internet freedom by country using a standard 100-point index, asking questions about internet infrastructure, government control and obstacles to access, and content regulation; higher scores are considered more “free”.
INSIDE THE SOCIAL MEDIA SURVEILLANCE SOFTWARE THAT CAN WATCH YOUR EVERY MOVE
The tool is the product of a growing industry whose work is usually kept from the public and utilized by police.
A MICHIGAN STATE POLICE CONTRACT, obtained by The Intercept, sheds new light on the growing use of little-known surveillance software that helps law enforcement agencies and corporations watch people’s social media and other website activity.
The software, put out by a Wyoming company called ShadowDragon, allows police to suck in data from social media and other internet sources, including Amazon, dating apps, and the dark web, so they can identify persons of interest and map out their networks during investigations. By providing powerful searches of more than 120 different online platforms and a decade’s worth of archives, the company claims to speed up profiling work from months to minutes. ShadowDragon even claims its software can automatically adjust its monitoring and help predict violence and unrest. Michigan police acquired the software through a contract with another obscure online policing company named Kaseware for an “MSP Enterprise Criminal Intelligence System.”
The inner workings of the product are generally not known to the public. The contract, and materials published by the companies online, allow a deeper explanation of how this surveillance works, provided below.
ShadowDragon has kept a low profile but has law enforcement customers well beyond Michigan. It was purchased twice by the U.S. Immigration and Customs Enforcement agency in the last two years, documents show, and was reportedly acquired by the Massachusetts State Police and other police departments within the state.
Michigan officials appear to be keeping their contract and the identities of ShadowDragon and Microsoft from the public. The Michigan.gov website does not make the contract available; it instead offers an email address at which to request the document “due to the sensitive nature of this contract.” And the contract it eventually provides has been heavily redacted: The copy given to David Goldberg, a professor at Wayne State University in Detroit, had all mentions of ShadowDragon software and Microsoft Azure blacked out. What’s more, Goldberg had to file a Freedom of Information Act request to obtain the contract. When the state website did offer the contract, it was unredacted, and I downloaded it before it was withdrawn.
Last year, The Intercept published several articles detailing how a social media analytics firm called Dataminr relayed tweets about the George Floyd and Black Lives Matter protests to police. The same year, I detailed at The Intercept how Kaseware’s partner Microsoft helps police surveil and patrol communities through its own offerings and a network of partnerships.
This new revelation about the Michigan contract raises questions about what digital surveillance capabilities other police departments and law enforcement agencies in the U.S. might be quietly acquiring. And it comes at a time when previously known government social media surveillance is under fire from civil rights and liberties advocates like MediaJustice and the American Civil Liberties Union. It also raises the specter of further abuses in Michigan, where the FBI has been profiling Muslim communities and so-called Black Identity Extremists. In 2015, it was revealed that for years, the state police agency was using cell site simulators to spy on mobile phones without disclosing it to the public.