Tech

Government Secretly Orders Google To Identify Anyone Who Searched Names

The U.S. government is secretly ordering Google to provide data on anyone typing in certain search terms, an accidentally unsealed court document shows. There are fears such “keyword warrants” threaten to implicate innocent Web users in serious crimes and are more common than previously thought.

In 2019, federal investigators in Wisconsin were hunting men they believed had participated in the trafficking and sexual abuse of a minor. She had gone missing that year but had emerged claiming to have been kidnapped and sexually assaulted, according to a search warrant reviewed by Forbes. In an attempt to chase down the perpetrators, investigators turned to Google, asking the tech giant to provide information on anyone who had searched for the victim’s name, two spellings of her mother’s name and her address over 16 days across the year. After being asked to provide all relevant Google accounts and IP addresses of those who made the searches, Google responded with data in mid-2020, though the court documents do not reveal how many users had their data sent to the government.

It’s a rare example of a so-called keyword warrant and, with the number of search terms included, the broadest on record. (See the update below for other, potentially even broader warrants.) Before this latest case, only two keyword warrants had been made public. One, revealed in 2020, asked for anyone who had searched for the address of an arson victim who was a witness in the government’s racketeering case against singer R. Kelly. Another, detailed in 2017, revealed that a Minnesota judge signed off on a warrant asking Google to provide information on anyone who searched a fraud victim’s name from within the city of Edina, where the crime took place.

While Google deals with thousands of such orders every year, the keyword warrant is one of the more contentious. In many cases, the government already has a specific Google account it wants information on and proof that the account is linked to a crime. But search-term orders are effectively fishing expeditions, hoping to ensnare possible suspects whose identities the government does not know. They are not dissimilar to so-called geofence warrants, in which investigators ask Google to provide information on anyone who was in the area of a crime scene at a given time.
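To make the distinction concrete, here is a purely illustrative sketch of what a keyword warrant conceptually asks a search provider to do: filter its logs by term and date range, then hand over the identifiers behind the matching queries. The schema, field names and function below are invented for illustration and say nothing about Google’s actual data model or tooling.

```python
# Illustrative only: a hypothetical search log filtered by term and date range,
# returning the account and IP identifiers behind any matching queries.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Optional


@dataclass
class SearchLogEntry:
    account_id: Optional[str]  # None when the user was not signed in
    ip_address: str
    query: str
    timestamp: datetime


def match_keyword_warrant(log: Iterable[SearchLogEntry],
                          terms: list[str],
                          start: datetime,
                          end: datetime) -> set:
    """Return (account, IP) pairs for anyone who searched the listed terms in the window."""
    hits = set()
    for entry in log:
        in_window = start <= entry.timestamp <= end
        if in_window and any(t.lower() in entry.query.lower() for t in terms):
            hits.add((entry.account_id, entry.ip_address))
    return hits
```

The point of the sketch is the direction of the query: instead of starting from a known suspect’s account, it starts from the search terms and works backwards to whoever typed them.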

“As with all law enforcement requests, we have a rigorous process that is designed to protect the privacy of our users while supporting the important work of law enforcement,” a Google spokesperson said.

The latest case shows Google is continuing to comply with such controversial requests, despite concerns over their legality and the potential to implicate innocent people who happened to search for the relevant terms. From the government’s perspective in Wisconsin, the scope of the warrant should have been limited enough to avoid the latter: the number of people searching for the specific names, address and phone number in the given time frame was likely to be low. But privacy experts are concerned about the precedent set by such warrants and the potential for any such order to be a breach of Fourth Amendment protections from unreasonable searches. There are also concerns about First Amendment freedom of speech issues, given the potential to cause anxiety amongst Google users that their identities could be handed to the government because of what they searched for.

“Trawling through Google’s search history database enables police to identify people merely based on what they might have been thinking about, for whatever reason, at some point in the past. This is a virtual dragnet through the public’s interests, beliefs, opinions, values and friendships, akin to mind reading powered by the Google time machine,” said Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union (ACLU). “This never-before-possible technique threatens First Amendment interests and will inevitably sweep up innocent people, especially if the keyword terms are not unique and the time frame not precise. To make matters worse, police are currently doing this in secret, which insulates the practice from public debate and regulation.”

The Wisconsin case was supposed to have remained secret, too. The warrant only came to light because it was accidentally unsealed by the Justice Department in September. Forbes reviewed the document before it was sealed again and is neither publishing it nor providing full details of the case to protect the identities of the victim and her family. The investigation is ongoing, two years after the crimes occurred, and the DOJ didn’t comment on whether or not any charges had been filed.

Forbes was able to identify one other, previously unreported keyword warrant in the Northern District of California in December 2020, though its existence was only noted in a court docket. It also has the potential to be broad. The order, currently under seal, is titled “Application by the United States for a Search Warrant for Google Accounts Associated with Six Search Terms and Four Search Dates.”

There’s more that the government can get with such requests than simple Google account identities and IP addresses. In Wisconsin, the government was hopeful Google could also provide “CookieIDs” belonging to any users who made the searches. These CookieIDs “are identifiers that are used to group together all searches conducted from a given machine, for a certain time period. Such information allows investigators to ascertain, even when the user is not logged into a Google account, whether the same individual may have conducted multiple pertinent searches,” the government wrote.
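The grouping the government describes can be pictured as keying searches by a per-device cookie identifier rather than by account. Below is a minimal, hypothetical sketch under that assumption; the field names are invented and do not reflect how Google actually stores or exposes this data.

```python
# Minimal sketch: searches made from the same machine share a cookie identifier,
# so they can be linked to one device even when no account is signed in.
from collections import defaultdict


def group_by_cookie(search_entries):
    """Map each cookie ID to the queries made from that machine."""
    sessions = defaultdict(list)
    for entry in search_entries:
        sessions[entry["cookie_id"]].append(entry["query"])
    return sessions


# A device that ran several pertinent searches stands out even without an account:
log = [
    {"cookie_id": "c-101", "query": "victim name"},
    {"cookie_id": "c-101", "query": "victim street address"},
    {"cookie_id": "c-202", "query": "weather tomorrow"},
]
repeat_searchers = {cid: qs for cid, qs in group_by_cookie(log).items() if len(qs) > 1}
```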

There was another disturbing aspect to the search warrant: the government had published the kidnapping victim’s name, her Facebook profile (now no longer accessible), her phone number and her address, a potential breach of a minor’s privacy. The government has now sealed the document, though it was only alerted to the leak after Forbes emailed the Justice Department for comment. That mistake, revealing the identities of minor victims of sexual abuse in court documents, has become a common one in recent years. As in the latest case, the FBI and DHS have been seen choosing pseudonyms and acronyms for victims, but then publishing their full Facebook profile links, which contain the minors’ names.

Source: Forbes

Politics

Buried deep in Biden Infrastructure Law: mandatory kill switches on all new cars by 2026

Remember that 2,700-page, $1 trillion infrastructure bill that the US government passed back in August? Well, have you read it? Of course we’re joking; we know you haven’t read it. Most of the legislators who voted on it probably haven’t either. Some folks have, though, and they’re finding some pretty alarming things buried in that bill.

One of the most concerning things we’ve heard so far is the revelation that this “infrastructure” bill includes a measure mandating vehicle backdoor kill-switches in every car by 2026. The clause is intended to increase vehicle safety by “passively monitoring the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired,” and if that sentence doesn’t make your hair stand on end, you’re not thinking about the implications.

Let us spell it out for you: by 2026, vehicles sold in the US will be required to automatically and silently record various metrics of driver performance, and then decide, absent any human oversight, whether the owner will be allowed to use their own vehicle. Even worse, the measure goes on to require that the system be “open” to remote access by “authorized” third parties at any time.
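To see why critics find that alarming, consider a deliberately simplified, entirely hypothetical sketch of such a decision loop. The bill specifies no algorithm, metrics or thresholds, so every detail below is invented; the point is only that the determination would be made by code the driver can neither inspect nor appeal, with a hook for outside override.

```python
# Entirely hypothetical illustration of an opaque impairment decision.
# The metrics, weights, threshold and remote-override flag are all invented.
def impairment_score(metrics: dict) -> float:
    # Invented weighting of invented telemetry fields.
    return (0.5 * metrics.get("lane_drift", 0.0)
            + 0.3 * metrics.get("erratic_braking", 0.0)
            + 0.2 * metrics.get("reaction_delay", 0.0))


def allow_vehicle_start(metrics: dict, remote_disable: bool = False) -> bool:
    """The driver never sees the score, the threshold, or any way to appeal."""
    if remote_disable:  # the "open to authorized third parties" provision
        return False
    return impairment_score(metrics) < 0.7
```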

The passage in the bill was unearthed by former Georgia Representative Bob Barr, writing over at the Daily Caller. Barr correctly notes that this is a privacy disaster in the making. Not only does it make every vehicle a potential tattletale (possibly reporting minor traffic infractions, like slight speeding or a forgotten seatbelt, to authorities or insurance companies), but collecting that data also makes it possible for bad actors to retrieve it.

More pressing than the privacy concerns, though, are the safety issues. Including an automatic kill switch of this sort in a machine with internet access presents the obvious scenario that a malicious agent could disable your vehicle remotely with no warning. Beyond that possible but admittedly unlikely scenario, there are all kinds of other reasons someone might need to drive or use their vehicle while “impaired”, such as in an emergency or while injured.

Even if the remote access part of the mandate doesn’t come to pass, the measure is still astonishingly short-sighted. As Barr says, “the choice as to whether a vehicle can or cannot be driven … will rest in the hands of an algorithm over which the car’s owner or driver have neither knowledge or control.” Barr, a lawyer himself, points out that there are legal issues with this whole concept, too. He anticipates challenges to the measure on both 5th Amendment (right to not self-incriminate) and 6th Amendment (right to face one’s accuser) grounds. He also goes on to comment on the vagueness of the legislation. What exactly is “impaired driving”? Every state and many municipalities have differing definitions of “driving while intoxicated.”

Furthermore, there’s no detail in the legislation about who should have access to the data collected by the system. Would police need a warrant to access the recorded data? Would it be available to insurance companies or medical professionals? If someone is late on their car payment, can the lender remotely disable the vehicle? Beyond the question of who would be allowed official access, there’s also the ever-present fear of hackers gaining access to the data, which, as security professionals well know, will absolutely happen sooner or later. As Barr says, the collected data would be a treasure trove for “all manner of entities … none of which have our best interests at heart.”

Source: HotHardware

Tech

Facebook plans to shut down its facial recognition program
  • Meta, the company formerly known as Facebook, on Tuesday announced it will be putting an end to its face recognition system.
  • The company said it will delete more than 1 billion people’s individual facial recognition templates as a result of this change.
  • Facebook services that rely on the face recognition systems will be removed over the coming weeks, Meta said.

Facebook on Tuesday announced it will be putting an end to its facial recognition system amid growing concern from users and regulators.

The social network, whose parent company is now named Meta, said it will delete more than 1 billion people’s individual facial recognition templates as a result of this change. The company said in a blog post that more than a third of Facebook’s daily active users, or over 600 million accounts, had opted into the use of the face recognition technology.

Facebook will no longer automatically recognize people’s faces in photos or videos, the post said. The change, however, will also impact the automatic alt text technology that the company uses to describe images for people who are blind or visually impaired. Facebook services that rely on the face recognition systems will be removed over the coming weeks.

“There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use,” the company said. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

Ending the use of the face recognition system is part of “a company-wide move away from this kind of broad identification,” the post said.

Meta, which laid out its road map last week for the creation of a massive virtual world, said it will still consider facial recognition technology for instances where people need to verify their identity or to prevent fraud and impersonation. For future uses of facial recognition technology, Meta will “continue to be public about intended use, how people can have control over these systems and their personal data.”

The decision to shut down the system on Facebook comes amid a barrage of news reports over the past month after Frances Haugen, a former employee turned whistleblower, released a trove of internal company documents to news outlets, lawmakers and regulators.

Read more on CNBC
