Polisis AI Developed to Help People Understand Privacy Policies

It looks as though this AI development could be quite useful in helping people avoid the exploitation of their personal information. Someone reading this may also want to look into a resource called Terms of Service; Didn’t Read, which “aims at creating a transparent and peer-reviewed process to rate and analyse Terms of Service and Privacy Policies in order to create a rating from Class A to Class E.”

But one group of academics has proposed a way to make those virtually illegible privacy policies into the actual tool of consumer protection they pretend to be: an artificial intelligence that’s fluent in fine print. Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for “privacy policy analysis”—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about 30 seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kind of data a service collects, where that data could be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal advisor. Together, the researchers hope those tools can unlock the secrets of how tech firms use your data that have long been hidden in plain sight.
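Under the hood, Polisis reportedly works by segmenting a policy and running each segment through machine-learning classifiers trained on an annotated corpus of privacy policies. As a loose, hypothetical illustration of that idea in Python (this is not the project’s actual model; the category labels and training sentences are invented for the example, and a simple bag-of-words classifier stands in for the researchers’ trained networks):

    # Sketch: tag privacy-policy segments with data-practice categories.
    # Labels and training data below are illustrative assumptions only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    segments = [
        ("We collect your location to personalise ads", "first-party-collection"),
        ("We may share your email address with partners", "third-party-sharing"),
        ("You can opt out of marketing emails at any time", "user-choice"),
        ("Data is retained while your account remains active", "data-retention"),
    ]
    texts, labels = zip(*segments)

    # TF-IDF + logistic regression standing in for Polisis' trained models.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # A real system would split an entire policy into segments, tag each
    # one, and aggregate the tags into the flow-chart summary.
    print(model.predict(["We share usage statistics with advertising networks"]))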

[…]

Polisis isn’t actually the first attempt to use machine learning to pull human-readable information out of privacy policies. Both Carnegie Mellon University and Columbia have made their own attempts at similar projects in recent years, points out NYU Law Professor Florencia Marotta-Wurgler, who has focused her own research on user interactions with terms of service contracts online. (One of her own studies showed that only 0.07 percent of users actually click on a terms of service link before clicking “agree.”) The Usable Privacy Policy Project, a collaboration that includes both Columbia and CMU, released its own automated tool to annotate privacy policies just last month. But Marotta-Wurgler notes that Polisis’ visual and chat-bot interfaces haven’t been tried before, and says the latest project is also more detailed in how it defines different kinds of data. “The granularity is really nice,” Marotta-Wurgler says. “It’s a way of communicating this information that’s more interactive.”

[…]

The researchers’ legalese-interpretation apps do still have some kinks to work out. Their conversational bot, in particular, seemed to misinterpret plenty of questions in WIRED’s testing. And for the moment, that bot still answers queries by flagging an intimidatingly large chunk of the original privacy policy; a feature to automatically simplify that excerpt into a short sentence or two remains “experimental,” the researchers warn.

But the researchers see their AI engine in part as the groundwork for future tools. They suggest that future apps could use their trained AI to automatically flag data practices that a user asks to be warned about, or to automate comparisons between different services’ policies, ranking how aggressively each one siphons up and shares your sensitive data.

“Caring about your privacy shouldn’t mean you have to read paragraphs and paragraphs of text,” says Michigan’s Schaub. But with more eyes on companies’ privacy practices—even automated ones—perhaps those information stewards will think twice before trying to bury their data collection bad habits under a mountain of legal minutiae.

Facebook Asks Australian Users for Nude Photos to “Combat Revenge Porn”

I see this effort by Facebook as doing much more harm than good, and it gives me yet another reason to be against Facebook and glad to have never used it personally. The effort also doesn’t stop revenge porn in general, since sites other than Facebook can still be used to post revenge porn against victims.

Facebook is asking users to send the company their nude photos in an effort to tackle revenge porn, in an attempt to give some control back to victims of this type of abuse.

Individuals who have shared intimate, nude or sexual images with partners and are worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be “hashed”. This means that the company converts the image into a unique digital fingerprint that can be used to identify and block any attempts to re-upload that same image.
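Published descriptions suggest the “fingerprint” is a perceptual hash (similar in spirit to systems like Microsoft’s PhotoDNA) rather than an exact cryptographic checksum, so a re-upload still matches after resizing or recompression; Facebook hasn’t disclosed the exact algorithm. A hypothetical sketch of the general idea in Python, using the open-source ImageHash library (the file names and match threshold are illustrative assumptions):

    # Sketch of perceptual image hashing: similar images produce hashes
    # that differ in only a few bits, so re-uploads can be detected.
    # This is an analogy, not Facebook's actual (undisclosed) system.
    from PIL import Image
    import imagehash  # pip install ImageHash

    reported = imagehash.phash(Image.open("reported.jpg"))    # hypothetical file
    upload = imagehash.phash(Image.open("new_upload.jpg"))    # hypothetical file

    # Subtracting two hashes gives the Hamming distance between them;
    # a small distance means the images are almost certainly the same.
    if reported - upload <= 5:  # threshold chosen for illustration
        print("Match: block the upload")

In principle, only the hash needs to be stored to block future uploads, which is the privacy argument for the scheme; the controversy is that humans still review the original images first.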

Facebook is piloting the technology in Australia in partnership with a government agency headed up by the e-safety commissioner, Julie Inman Grant, who told ABC it would allow victims of “image-based abuse” to take action before pictures were posted to Facebook, Instagram or Messenger.

Crucially, and terribly, Facebook employees will have to review the uncensored nude photos as part of the process. That opens up yet another avenue of potential abuse against victims.

According to a Facebook spokesperson, Facebook workers will first have to review full, uncensored versions of the nude images volunteered by the user, in order to determine whether later malicious posts by other users qualify as revenge porn.

Article on Tinder’s Data Collection

A woman recently wrote in The Guardian about seeing the personal information that Tinder had stored about her. It’s also an insight into how much data Internet-based corporations in general are collecting.

The dating app has 800 pages of information on me, and probably on you too if you are also one of its 50 million users. In March I asked Tinder to grant me access to my personal data. Every European citizen is allowed to do so under EU data protection law, yet very few actually do, according to Tinder.

With the help of privacy activist Paul-Olivier Dehaye from personaldata.io and human rights lawyer Ravi Naik, I emailed Tinder requesting my personal data and got back way more than I bargained for.

“I am horrified but absolutely not surprised by this amount of data,” said Olivier Keyes, a data scientist at the University of Washington. “Every app you use regularly on your phone owns the same [kinds of information]. Facebook has thousands of pages about you!”

As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn’t the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.

“You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth College. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”

Reading through the 1,700 Tinder messages I’ve sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year’s Day, and then ghosted 16 of them.

[…]

What will happen if this treasure trove of data gets hacked, is made public or simply bought by another company? I can almost feel the shame I would experience. The thought that, before sending me these 800 pages, someone at Tinder might have read them already makes me cringe.