NSA Expands Mass Surveillance to Triple Its Collection of U.S. Phone Records

Mass surveillance is damaging to privacy generally and ineffective at preventing stateless terror attacks — its main effect is to increase repressive control.

The National Security Agency (NSA) collected over 530 million phone records of Americans in 2017—more than three times the amount the spy agency sucked up in 2016.

The figures were released Friday in an annual report from the Office of the Director of National Intelligence (ODNI).

It shows that the number of “call detail records” the agency collected from telecommunications providers during Trump’s first year in office was 534 million, compared to 151 million the year prior.

“The intelligence community’s transparency has yet to extend to explaining dramatic increases in their collection,” said Robyn Greene, policy counsel at the Open Technology Institute.

The content of the calls themselves is not collected, only the so-called “metadata,” which, as Gizmodo notes, “is supposedly anonymous, but it can easily be used to identify an individual. The information can also be paired with other publicly available information from social media and other sources to paint a surprisingly detailed picture of a person’s life.”
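To make concrete why “anonymous” metadata is so revealing, here is a minimal, purely illustrative sketch in Python, with invented numbers and names that are not drawn from any actual system: once call-detail records are joined against an ordinary public directory, they expose who talks to whom, and how often.

# Minimal illustration (hypothetical data): even without call content,
# "anonymous" call-detail records can be re-identified by joining the
# phone numbers against an ordinary public directory.

from collections import Counter

# Call detail records: (caller, callee, timestamp, duration_seconds)
call_records = [
    ("+1-555-0101", "+1-555-0199", "2017-03-01T09:15", 340),
    ("+1-555-0101", "+1-555-0142", "2017-03-01T21:40", 65),
    ("+1-555-0101", "+1-555-0199", "2017-03-02T09:10", 410),
]

# Publicly available directory (social media profiles, marketing lists, etc.)
directory = {
    "+1-555-0101": "A. Example (journalist)",
    "+1-555-0199": "Example Clinic front desk",
    "+1-555-0142": "E. Source (city official)",
}

# Re-identify the records and summarize who talks to whom, and how often.
contacts = Counter(
    (directory.get(caller, caller), directory.get(callee, callee))
    for caller, callee, _, _ in call_records
)

for (caller, callee), count in contacts.most_common():
    print(f"{caller} -> {callee}: {count} call(s)")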

The report also revealed that the agency, using its controversial Section 702 authority, increased the number of foreign targets of warrantless surveillance. It was 129,080 in 2017 compared to 106,469 in 2016.

As digital rights group EFF noted earlier this year,

Under Section 702, the NSA collects billions of communications, including those belonging to innocent Americans who are not actually targeted. These communications are then placed in databases that other intelligence and law enforcement agencies can access—for purposes unrelated to national security—without a warrant or any judicial review.

“Overall,” Jake Laperruque, senior counsel at the Project On Government Oversight, said to ZDNet, “the numbers show that the scale of warrantless surveillance is growing at a significant rate, but ODNI still won’t tell Americans how much it affects them.”

Disturbing: Surveillance Database of Journalists Being Built in the U.S.

A large threat to press freedom with Orwellian undertones — more mass surveillance means more repression. It is also an attempt to suppress effective activism through what’s known as the “chilling effect” of mass surveillance: people change their behavior (for example, visiting Wikipedia pages on terrorism less often) once they know they are under intrusive surveillance.

Donald Trump is not known for being a friend of the media, and now he seems to be adopting new methods to control unfavorable journalists. The Department of Homeland Security wants to create a database of journalists and bloggers from around the world that can be filtered by location, content and sentiment. While the DHS claims this is standard PR practice, alarm bells should be ringing. After all, surveillance is what aspiring autocrats commonly use to undermine democracy.

The Department of Homeland Security (DHS) is looking for contractors to build a Media Monitoring Service. The details read as if they were written by George Orwell: the DHS asks for the ability to scan more than 290,000 news sources within and outside the US, and to store “journalists, editors, correspondents, social media influencers, bloggers etc.” in a database that must be searchable by “content” and “sentiment”.
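To illustrate what a database “searchable by content and sentiment” implies in practice, here is a purely hypothetical sketch; the field names and filtering logic are assumptions for illustration, not taken from the DHS solicitation.

# Purely hypothetical sketch of what a record in a media-monitoring
# database "searchable by content and sentiment" could look like; the
# field names are assumptions, not taken from the DHS solicitation.

from dataclasses import dataclass

@dataclass
class MediaContact:
    name: str
    outlet: str
    location: str
    beat: str          # topics the person covers ("content")
    sentiment: float   # e.g. -1.0 (critical) .. 1.0 (favorable)

contacts = [
    MediaContact("Jane Doe", "Example Daily", "Berlin", "security policy", -0.6),
    MediaContact("John Roe", "Sample Blog", "Austin", "immigration", 0.2),
]

# "Filtering by location, content and sentiment" then reduces to a simple query:
critical_security_reporters = [
    c for c in contacts
    if c.beat == "security policy" and c.sentiment < 0
]
print(critical_security_reporters)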

[…]

The current development in the US is very worrisome, particularly as the freedom of the press is under attack worldwide.

Reporters Without Borders states: “Once taken for granted, media freedom is proving to be increasingly fragile in democracies as well. In sickening statements, draconian laws, conflicts of interest, and even the use of physical violence, democratic governments are trampling on a freedom that should, in principle, be one of their leading performance indicators.”

The Freedom of the Press Report 2017 by Freedom House concludes that global media freedom has reached its lowest level in the past 13 years. This is not only down to “further crackdowns on independent media in authoritarian countries like Russia and China.” The report also blames “new threats to journalists and media outlets in major democracies”.

Some Swedes Beginning to Realize the Danger of Being in a Cashless Society

Allowing largely unaccountable corporations to track the financial activities and flow of resources of an entire population creates a power imbalance that will inevitably lead to problems. The ability to pay anonymously with cash — and, hopefully soon, with a legitimately privacy-focused cryptocurrency — is essential to maintaining stability in today’s world.

In February, the head of Sweden’s central bank warned that Sweden could soon face a situation where all payments were controlled by private sector banks.

The Riksbank governor, Stefan Ingves, called for new legislation to secure public control over the payments system, arguing that being able to make and receive payments is a “collective good” like defence, the courts, or public statistics.

“Most citizens would feel uncomfortable to surrender these social functions to private companies,” he said.

“It should be obvious that Sweden’s preparedness would be weakened if, in a serious crisis or war, we had not decided in advance how households and companies would pay for fuel, supplies and other necessities.”

[…]

The central bank governor’s remarks are helping to bring other concerns about a cash-free society into the mainstream, says Björn Eriksson, 72, a former national police commissioner and the leader of a group called the Cash Rebellion, or Kontantupproret.

[…]

In this sense, Sweden is far from its famous concept of lagom – “just the right amount” – but instead is “100% extreme”, Eriksson says, by investing so much faith in the banks. “This is a political question. We are leaving these decisions to four major banks who form a monopoly in Sweden.”

[…]

No system based on technology is invulnerable to glitches and fraud, says Mattias Skarec, 29, a digital security consultant. Yet Sweden is divided into two camps: the first says “we love the new technology”, while the other just can’t be bothered, Skarec says. “We are naive to think we can abandon cash completely and rely on technology instead.”

Skarec points to problems with card payments experienced by two Swedish banks just during the past year, and by Bank ID, the digital authorisation system that allows people to identify themselves for payment purposes using their phones.

Fraudsters have already learned to exploit the system’s idiosyncrasies to trick people out of large sums of money, even their pensions.

[…]

But an opinion poll this month revealed unease among Swedes, with almost seven out of 10 saying they wanted to keep the option to use cash, while just 25% wanted a completely cashless society. MPs from left and right expressed concerns at a recent parliamentary hearing. Parliament is conducting a cross-party review of central bank legislation that will also investigate the issues surrounding cash.

The Pirate Party – which made its name in Sweden for its opposition to state and private sector surveillance – welcomes a higher political profile for these issues.

Look at Ireland, Christian Engström says, where abortion is illegal. It is much easier for authorities to identify Irish women who have had an abortion if the state can track all digital financial transactions, he says. And while Sweden’s government might be relatively benign, a quick look at Europe suggests there is no guarantee how things might develop in the future.

“If you have control of the servers belonging to Visa or MasterCard, you have control of Sweden,” Engström says.

Also a relevant entry: Pitfalls of a Cashless Society.

Dangerous Cloud Act Legislation Appears in Congress

The CLOUD Act would allow for dangerous violations of consumer privacy rights by abusing the stored data corporations hold on people. U.S. citizens, I encourage you to oppose this type of legislation. Privacy rights are going to become much more important in the years ahead as more and more of society is suffused with technological infrastructure.

Civil libertarians and digital rights advocates are alarmed about an “insidious” and “dangerous” piece of federal legislation that the ACLU warns “threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways.”

The Clarifying Lawful Overseas Use of Data or CLOUD Act (S. 2383 and H.R. 4943), as David Ruiz at the Electronic Frontier Foundation (EFF) explains, would establish a “new backdoor for cross-border data [that] mirrors another backdoor under Section 702 of the FISA Amendments Act, an invasive NSA surveillance authority for foreign intelligence gathering” recently reauthorized by Congress.

Ruiz outlines how the legislation would enable U.S. authorities to bypass Fourth Amendment rights to obtain Americans’ data and use it against them:

The CLOUD Act allows the president to enter an executive agreement with a foreign nation known for human rights abuses. Using its CLOUD Act powers, police from that nation inevitably will collect Americans’ communications. They can share the content of those communications with the U.S. government under the flawed “significant harm” test. The U.S. government can use that content against these Americans. A judge need not approve the data collection before it is carried out. At no point need probable cause be shown. At no point need a search warrant be obtained.

The EFF and ACLU are among two dozen groups that banded together earlier this month to pen a letter to Congress to express alarm that the bill “fails to protect the rights of Americans and individuals abroad, and would put too much authority in the hands of the executive branch with few mechanisms to prevent abuse.”

[…]

“This controversial legislation would be a poison pill for the omnibus spending bill,” declared Fight for the Future’s deputy director, Evan Greer. “Decisions like this require rigorous examination and public debate, now more than ever, and should not be made behind closed doors as part of back room Congressional deals.”

The group also pointed out that big tech companies such as Apple, Facebook, and Google are among those lobbying lawmakers to include the CLOUD Act in the spending bill.

Polisis AI Developed to Help People Understand Privacy Policies

It looks as though this AI development could be quite useful in helping people avoid the exploitation of their personal information. Someone reading this may also want to look into a resource called Terms of Service; Didn’t Read, which “aims at creating a transparent and peer-reviewed process to rate and analyse Terms of Service and Privacy Policies in order to create a rating from Class A to Class E.”

But one group of academics has proposed a way to make those virtually illegible privacy policies into the actual tool of consumer protection they pretend to be: an artificial intelligence that’s fluent in fine print. Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for “privacy policy analysis”—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about 30 seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kind of data a service collects, where that data could be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal advisor. Together, the researchers hope those tools can unlock the secrets of how tech firms use your data that have long been hidden in plain sight.
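As a rough illustration of the general approach (not the actual Polisis model), the following toy sketch classifies privacy-policy segments into data-practice categories using a simple scikit-learn text classifier; the training examples and category labels are invented for illustration.

# Toy sketch of the general idea (not the actual Polisis system): train a
# text classifier to tag privacy-policy segments with data-practice
# categories, then apply it to an unseen policy.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled examples standing in for a real annotated corpus.
segments = [
    "We collect your email address and phone number when you register.",
    "We share aggregated usage data with our advertising partners.",
    "You may opt out of marketing communications at any time.",
    "Location information is collected to provide nearby results.",
]
labels = ["first-party-collection", "third-party-sharing",
          "user-choice", "first-party-collection"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(segments, labels)

# Classify segments of a policy the model has never seen and tally the results.
new_policy = [
    "We may disclose your information to trusted partners for advertising.",
    "You can disable data collection in your account settings.",
]
for segment, label in zip(new_policy, model.predict(new_policy)):
    print(f"[{label}] {segment}")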

[…]

Polisis isn’t actually the first attempt to use machine learning to pull human-readable information out of privacy policies. Both Carnegie Mellon University and Columbia have made their own attempts at similar projects in recent years, points out NYU Law Professor Florencia Marotta-Wurgler, who has focused her own research on user interactions with terms of service contracts online. (One of her own studies showed that only .07 percent of users actually click on a terms of service link before clicking “agree.”) The Usable Privacy Policy Project, a collaboration that includes both Columbia and CMU, released its own automated tool to annotate privacy policies just last month. But Marotta-Wurgler notes that Polisis’ visual and chat-bot interfaces haven’t been tried before, and says the latest project is also more detailed in how it defines different kinds of data. “The granularity is really nice,” Marotta-Wurgler says. “It’s a way of communicating this information that’s more interactive.”

[…]

The researchers’ legalese-interpretation apps do still have some kinks to work out. Their conversational bot, in particular, seemed to misinterpret plenty of questions in WIRED’s testing. And for the moment, that bot still answers queries by flagging an intimidatingly large chunk of the original privacy policy; a feature to automatically simplify that excerpt into a short sentence or two remains “experimental,” the researchers warn.

But the researchers see their AI engine in part as the groundwork for future tools. They suggest that future apps could use their trained AI to automatically flag data practices that a user asks to be warned about, or to automate comparisons between different services’ policies that rank how aggressively each one siphons up and shares your sensitive data.

“Caring about your privacy shouldn’t mean you have to read paragraphs and paragraphs of text,” says Michigan’s Schaub. But with more eyes on companies’ privacy practices—even automated ones—perhaps those information stewards will think twice before trying to bury their data collection bad habits under a mountain of legal minutiae.

U.S. Federal Government Set to Further Expand Mass Surveillance

It’s striking that the same congressional Democrats who verbally denounce the current president as a tyrant then vote to grant the executive branch extremely unjust surveillance authority. U.S. citizens, I encourage you to call the Senate and tell them to vote no on this mass surveillance bill. The Capitol Switchboard number is (202) 804-3305.

With the Senate set to cast its first votes on a bill that reauthorizes and expands the government’s already vast warrantless spying program in a matter of hours, civil libertarians on Tuesday launched a last-ditch effort to rally opposition to the legislation and demand that lawmakers protect Americans’ constitutional right to privacy.

Fight for the Future (FTF), one of many advocacy groups pressuring lawmakers to stop the mass surveillance bill in its tracks, notes that “just 41 senators can stop” the bill from passing.

“In the age of federal misconduct, every member of Congress must move right now to stop the government’s abuse of the internet to monitor everyone; they must safeguard our freedom and the U.S. Constitution,” FTF urged.

The FISA Amendments Reauthorization Act of 2017 (S.139)—passed by the House last week with the revealing but not surprising help of 65 Democrats—would renew Section 702 of FISA, set to expire this Friday.

As The Intercept‘s Glenn Greenwald notes, “numerous Senate Democrats are poised” to join their House colleagues in voting to re-up Section 702, thus violating “the privacy rights of everyone in the United States” and handing President Donald Trump and Attorney General Jeff Sessions sprawling spying powers.

The Senate’s first procedural vote on a cloture motion is expected at 5:30pm ET. If the motion is approved, the path will be clear for the bill to hit the Senate floor.

“Every member of Congress is going to have to decide whether to protect Americans’ privacy, and shield vulnerable communities from unconstitutional targeting, or to leave unconstitutional spying authority in Trump’s—and Jeff Sessions’—hands,” the advocacy group Indivisible notes.

EU Privacy Shield Standard Should be Adopted by More Countries

Online privacy isn’t as appreciated as it should be, but that may change as exponentially more devices are connected to the Internet over the next several years.

If you’re ever expecting a child, Target wants to be one of the first to know. The company has invested in research to identify pregnant customers early on, based upon their purchasing behavior. Then, it targets them with ads for baby gear.

While companies such as Target mine data about products their customers purchase from them (like prenatal vitamins) to send them personalized ads, many also rely on information gathered about us on the web — like what we search for on Google or email our friends. That lets them realize we’re planning a vacation to the Grand Canyon, for instance, and send us ads for local hotels.

 Many people think that it’s an invasion of privacy for companies to gather sensitive data — such as information about our relationships and medical history — and exploit it for commercial purposes. It could also widen social divisions. For example, Facebook determines our political beliefs based upon the pages we like and preferences we list on our profiles. If algorithms peg us as conservative or liberal and we’re targeted with ads accordingly, we may end up never understanding what people of other political persuasions think. Internet activist and author Eli Pariser has argued that America is so politically polarized in part because social media sites leave us in “filter bubbles.” Targeted political advertising could have the same effect.

That’s part of the reason why, in May, a new regulation will go into effect in the European Union giving citizens the “right to object” to “processing of personal data” about them for marketing and other purposes. As Andrus Ansip, the European Commission vice president for the digital single market, tweeted, “Should I not be asked before my emails are accessed and used? Don’t you think the same?” The new law overcame serious opposition from the advertising industry, whose representatives argue that it will disrupt ad revenues needed by the media. Experts say that websites will have to provide more valuable content to users as an incentive for readers to allow them to use their data.

Here in the U.S., most ads are bought through exchanges that allow advertisers to target people based upon data about them. Companies can choose to buy ads that will be seen, for example, by women who live in a particular ZIP code and graduated from a certain school. But according to guidance established by the Digital Advertising Alliance — a consortium of industry trade associations including the American Association of Advertising Agencies, the Association of National Advertisers, and the Better Business Bureau — consumers should have “the ability to exercise choice with respect to the collection and use of data.” Two members of the alliance accept consumer complaints and do their own research to identify violations of the rule. They work with companies to help them fix problems and report violations to regulators.

While the principle behind the new EU law could justify wide-ranging new regulations and restrictions on how companies throughout the world do business, James Ryseff, a former Google engineer, says it’s likely that initially it will simply allow users to opt out of the “cookies” that track internet users as they surf the web. Although this will reduce the amount of data that tech companies can collect, it doesn’t truly allow users to opt out of targeted advertising, since businesses can still use the information they gather through other techniques — such as in-store purchases — to classify and reach customers. That’s why, Ryseff says, Americans should have more sophisticated ways to determine exactly what advertisers learn about us.
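As a simplified illustration of the kind of cross-site tracking such an opt-out is meant to interrupt, the sketch below uses hypothetical site names and an invented Tracker class to show how a single third-party cookie identifier links one person’s visits across unrelated websites.

# Simplified simulation of how a third-party tracking cookie links one
# person's visits across unrelated sites (all names are hypothetical).

import uuid

class Tracker:
    """Stands in for an ad network whose script is embedded on many sites."""
    def __init__(self):
        self.profiles = {}   # cookie id -> list of (site, page) visits

    def log_visit(self, cookie_id, site, page):
        # First visit anywhere: issue a cookie that the browser will send
        # back on every later page that embeds this tracker.
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
        self.profiles.setdefault(cookie_id, []).append((site, page))
        return cookie_id

tracker = Tracker()
cookie = None
cookie = tracker.log_visit(cookie, "news.example", "/politics")
cookie = tracker.log_visit(cookie, "shop.example", "/prenatal-vitamins")
cookie = tracker.log_visit(cookie, "travel.example", "/grand-canyon-hotels")

# One identifier now ties together browsing on three unrelated sites,
# which is exactly what opting out of tracking cookies is meant to interrupt.
print(tracker.profiles[cookie])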

First, for example, we should be able to decide whether companies are able to gather generic data about who we are (such as our age, gender and location) or information about what we’re doing (such as researching a medical condition) — or neither, or both. “In general, I think ‘What I do’ information has a greater ability to freak people out,” Ryseff says. “Used incorrectly, it makes you feel like Google is stalking you.”

Second, Americans should get to decide where and when our data is tracked. For example, some people might be more comfortable being tracked on a search engine that knows their buying behavior and can make recommendations accordingly, but less so on personal email which can identify private facts about their lives — or work email which might contain proprietary information. (Google previously used data from the content of users’ emails to target them with ads, but pledged in June to stop the practice.) And we might want to temporarily stop allowing search engines to track our activities when we’re looking up something private, like medical symptoms.

Third, we should get to decide whether we’re willing to be targeted with ads based upon our own behaviors or upon the behaviors of people that algorithms have decided are like us.
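A minimal sketch of what such user-selectable controls could look like, with hypothetical field names that are not drawn from Ryseff’s proposal or any real advertising API:

# Hypothetical sketch of per-user ad-tracking preferences covering the three
# kinds of choices described above; none of these fields come from a real API.

from dataclasses import dataclass, field

@dataclass
class TrackingPreferences:
    allow_who_i_am: bool = False       # generic data: age, gender, location
    allow_what_i_do: bool = False      # behavioral data: searches, browsing
    blocked_contexts: set = field(default_factory=lambda: {"personal_email", "work_email"})
    allow_lookalike_targeting: bool = False  # ads based on "people like you"

def may_target(prefs: TrackingPreferences, data_kind: str, context: str) -> bool:
    """Decide whether an advertiser may use a given kind of data from a given context."""
    if context in prefs.blocked_contexts:
        return False
    if data_kind == "who":
        return prefs.allow_who_i_am
    if data_kind == "what":
        return prefs.allow_what_i_do
    return False

prefs = TrackingPreferences(allow_who_i_am=True)
print(may_target(prefs, "who", "search_engine"))   # True
print(may_target(prefs, "what", "personal_email")) # False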