It looks as though this AI development could be quite useful in helping people avoid having their personal information exploited. Readers may also want to look into a resource called Terms of Service; Didn’t Read, which “aims at creating a transparent and peer-reviewed process to rate and analyse Terms of Service and Privacy Policies in order to create a rating from Class A to Class E.”
But the researchers see their AI engine in part as the groundwork for future tools. They suggest that future apps could use their trained AI to automatically flag data practices a user has asked to be warned about, or to automate comparisons between different services’ policies, ranking how aggressively each one siphons up and shares your sensitive data.
“Caring about your privacy shouldn’t mean you have to read paragraphs and paragraphs of text,” says Michigan’s Schaub. But with more eyes on companies’ privacy practices—even automated ones—perhaps those information stewards will think twice before trying to bury their bad data-collection habits under a mountain of legal minutiae.