Post of Recent Noteworthy Facebook Criticisms

Facebook deserves heavy criticism for allowing the exploitation of data by corporations such as Cambridge Analytica, which — according to the Cambridge Analytica whistleblower Christopher Wylie — built psychological profiles on 50 million Facebook users in order to “target their inner demons” and wrongly manipulate them with political advertisements. I’ve been critical of Facebook for several years, though, and I know much of importance about it that the corporate mass media has missed, such as Facebook’s experiment to manipulate the news feeds of nearly 700,000 users (without their consent) in an attempt to see how much it could influence user emotions.

Facebook has also made it a selling point to advertisers that it can identify when teenagers are feeling “worthless” and “insecure,” which of course is a widespread teenage vulnerability that allows for exploitation. Facebook has let advertisers discriminate against people by ethnicity before, it has near pointlessly asked victims of revenge porn to send it their nude photos (letting Facebook employees view them and possibly abuse them), and it has supported the recent Cloud Act that allows for significant violations of consumer privacy by police, among many other outrages. While it’s useful that Facebook has helped some people forge meaningful connections, that doesn’t have to come at the high costs of personal exploitation that the corporation has allowed and still allows.

Article: As Feds Launch Probe, Users Discover ‘Horrifying’ Reach of Facebook’s Data Mining

As the fallout from Facebook’s Cambridge Analytica scandal continued on Monday with the Federal Trade Commission’s (FTC) announcement that it is conducting a long-overdue probe into the tech giant’s privacy practices, many Facebook users are only now discovering the astonishing and in some cases downright “creepy” reach of the platform’s data-mining operations, which form the foundation of its business model.

After a New Zealand man named Dylan McKay called attention in a viral tweet last week to the alarming fact that Facebook had collected his “entire call history” with his partner’s mother and “metadata about every text message [he’s] ever received or sent,” other Facebook users began downloading their archive of personal data the social media giant had stored and discovered that McKay’s experience was hardly anomalous.

Based on the stories of a number of users who shared their experiences and data, Ars Technica concluded in an explosive report published on Saturday that Facebook has been scraping call and text message data from Android phones “for years.”

While the social media giant insisted in a statement that it only collects such data with permission—which is usually requested during the process of installing particular apps such as Messenger—Ars noted that this claim “contradicts the experience of several users who shared their data,” including McKay.

Other articles that have appeared recently are linked below.

South Korea fines Facebook $369K for slowing user internet connections

73% of Canadians to change Facebook habits after data mining furor, survey suggests

More than #DeleteFacebook

Facebook’s Surveillance Machine

No one can pretend Facebook is just harmless fun any more

Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’

Cambridge Analytica Files

EU Privacy Shield Standard Should be Adopted by More Countries

Online privacy isn’t as appreciated as it should be, but that may change as exponentially more devices are connected to the Internet over the next several years.

If you’re ever expecting a child, Target wants to be one of the first to know. The company has invested in research to identify pregnant customers early on, based upon their purchasing behavior. Then, it targets them with ads for baby gear.

While companies such as Target mine data about products their customers purchase from them (like prenatal vitamins) to send them personalized ads, many also rely on information gathered about us on the web — like what we search for on Google or email our friends. That lets them realize we’re planning a vacation to the Grand Canyon, for instance, and send us ads for local hotels.

Many people think that it’s an invasion of privacy for companies to gather sensitive data — such as information about our relationships and medical history — and exploit it for commercial purposes. This kind of data exploitation could also widen social divisions. For example, Facebook determines our political beliefs based upon the pages we like and preferences we list on our profiles. If algorithms peg us as conservative or liberal and we’re targeted with ads accordingly, we may end up never understanding what people of other political persuasions think. Internet activist and author Eli Pariser has argued that America is so politically polarized in part because social media sites leave us in “filter bubbles.” Targeted political advertising could have the same effect.

That’s part of the reason why, in May, a new regulation will go into effect in the European Union giving citizens the “right to object” to “processing of personal data” about them for marketing and other purposes. As Andrus Ansip, the European Commission vice president for the digital single market, tweeted, “Should I not be asked before my emails are accessed and used? Don’t you think the same?” The new law overcame serious opposition from the advertising industry, whose representatives argue that it will disrupt ad revenues needed by the media. Experts say that websites will have to provide more valuable content to users as an incentive for readers to allow them to use their data.

Here in the U.S., most ads are bought through exchanges that allow advertisers to target people based upon data about them. Companies can choose to buy ads that will be seen, for example, by women who live in a particular ZIP code and graduated from a certain school. But according to guidance established by the Digital Advertising Alliance — a consortium of industry trade associations including the American Association of Advertising Agencies, the Association of National Advertisers, and the Better Business Bureau — consumers should have “the ability to exercise choice with respect to the collection and use of data.” Two members of the alliance accept consumer complaints and do their own research to identify violations of the rule. They work with companies to help them fix problems and report violations to regulators.

While the principle behind the new EU law could justify wide-ranging new regulations and restrictions on how companies throughout the world do business, James Ryseff, a former Google engineer, says it’s likely that initially it will simply allow users to opt out of the “cookies” that track internet users as they surf the web. Although this will reduce the amount of data that tech companies can collect, it doesn’t truly allow users to opt out of targeted advertising, since businesses can still use the information they gather through other techniques — such as in-store purchases — to classify and reach customers. That’s why, Ryseff says, Americans should have more sophisticated ways to determine exactly what advertisers learn about us.

First, for example, we should be able to decide whether companies are able to gather generic data about who we are (such as our age, gender and location) or information about what we’re doing (such as researching a medical condition) — or neither, or both. “In general, I think ‘What I do’ information has a greater ability to freak people out,” Ryseff says. “Used incorrectly, it makes you feel like Google is stalking you.”

Second, Americans should get to decide where and when our data is tracked. For example, some people might be more comfortable being tracked on a search engine that knows their buying behavior and can make recommendations accordingly, but less so on personal email, which can identify private facts about their lives — or work email, which might contain proprietary information. (Google previously used data from the content of users’ emails to target them with ads, but pledged in June to stop the practice.) And we might want to temporarily stop allowing search engines to track our activities when we’re looking up something private, like medical symptoms.

Third, we should get to decide whether we’re willing to be targeted with ads based upon our own behaviors, or upon the behaviors of people that algorithms have decided are like us.

Researchers Develop First Reliable Method for Websites to Track Users Across Multiple Browsers

Legal or technological defenses will be required to stop this kind of tracking, which deeply invades personal privacy.

Researchers have recently developed the first reliable technique for websites to track visitors even when they use two or more different browsers. This shatters a key defense against sites that identify visitors based on the digital fingerprint their browsers leave behind.

State-of-the-art fingerprinting techniques are highly effective at identifying users when they use browsers with default or commonly used settings. For instance, the Electronic Frontier Foundation’s privacy tool, known as Panopticlick, found that only one in about 77,691 browsers had the same characteristics as the one commonly used by this reporter. Such fingerprints are the result of specific settings and customizations found in a specific browser installation, including the list of plugins, the selected time zone, whether a “do not track” option is turned on, and whether an adblocker is being used.
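For readers curious how such a fingerprint is actually assembled, here is a minimal, illustrative sketch in TypeScript for a browser environment. It simply hashes a handful of the attributes mentioned above into one identifier; the attribute selection, the function name, and the use of SHA-256 are my own choices for illustration, not Panopticlick’s actual implementation (adblocker detection is omitted, since it requires extra heuristics rather than a direct API).

```typescript
// Illustrative only: combines a few browser attributes into a SHA-256 hash.
// Real fingerprinting scripts use many more signals than shown here.
async function browserFingerprint(): Promise<string> {
  const attributes = [
    Array.from(navigator.plugins).map(p => p.name).join(','), // installed plugin list
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // selected time zone
    navigator.doNotTrack ?? 'unspecified',                    // "do not track" setting
    navigator.language,                                        // interface language
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display configuration
  ];
  const bytes = new TextEncoder().encode(attributes.join('|'));
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}
```

The more unusual a browser’s combination of settings, the more uniquely identifying the resulting hash becomes, which is why heavily customized installations are often the easiest to recognize.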

Until now, however, the tracking has been limited to a single browser. This constraint made it infeasible to tie, say, the fingerprint left behind by a Firefox browser to the fingerprint from a Chrome or Edge installation running on the same machine. The new technique—outlined in a research paper titled (Cross-)Browser Fingerprinting via OS and Hardware Level Features—not only works across multiple browsers, it’s also more accurate than previous single-browser fingerprinting.
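The paper’s title points at “OS and hardware level features,” that is, signals that look the same no matter which browser is installed on a given machine. As a hedged illustration only — the researchers’ real feature set relies on far more elaborate measurements, such as scripted WebGL rendering tasks — a page script can already read several machine-level signals of this kind through standard browser APIs:

```typescript
// Illustrative only: reads OS/hardware-level signals that tend to be identical
// across different browsers on the same machine. The actual research relies on
// much more elaborate measurements than these.
function machineLevelFeatures(): Record<string, string> {
  const gl = document.createElement('canvas').getContext('webgl');
  const dbg = gl?.getExtension('WEBGL_debug_renderer_info');
  const gpu = gl && dbg
    ? String(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL))
    : 'unknown';
  return {
    cpuCores: String(navigator.hardwareConcurrency), // logical CPU core count
    screen: `${screen.width}x${screen.height}`,      // physical display resolution
    colorDepth: String(screen.colorDepth),           // display color depth
    gpu,                                             // GPU/driver string exposed via WebGL
  };
}
```

Because these values come from the underlying hardware and operating system rather than from browser settings, they stay stable when a user switches from, say, Firefox to Chrome on the same computer.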

Fingerprinting isn’t automatically bad and, in some cases, offers potential benefits to end users. Banks, for instance, can use it to know that a person logging into an online account isn’t using the computer that has been used on every previous visit. Based on that observation, the bank could check with the account holder by phone to make sure the login was legitimate. But fingerprinting also carries sobering privacy concerns.

“From the negative perspective, people can use our cross-browser tracking to violate users’ privacy by providing customized ads,” Yinzhi Cao, the lead researcher who is an assistant professor in the Computer Science and Engineering Department at Lehigh University, told Ars. “Our work makes the scenario even worse, because after the user switches browsers, the ads company can still recognize the user. In order to defeat the privacy violation, we believe that we need to know our enemy well.”

[…]

Cross-browser fingerprinting is only the latest trick developers have come up with to track people who visit their sites. Besides traditional single-browser fingerprinting, other tracking methods include monitoring the way visitors type passwords and other text and embedding inaudible sound in TV commercials or websites. The Tor browser without an attached microphone or speakers is probably the most effective means of protection, although the researchers said running a browser inside a virtual machine may also work.

Facebook is Still Letting Housing Advertisers Discriminate Against Users by Ethnicity

Another mark against the Facebook corporation that provides another example of why I despise it. I will never approve of a corporation that manipulates the emotions of human beings for profit and takes the invasion of personal privacy to new extremes.

In February, Facebook said it would step up enforcement of its prohibition against discrimination in advertising for housing, employment or credit.

But our tests showed a significant lapse in the company’s monitoring of the rental market.

Last week, ProPublica bought dozens of rental housing ads on Facebook, but asked that they not be shown to certain categories of users, such as African Americans, mothers of high school kids, people interested in wheelchair ramps, Jews, expats from Argentina, and Spanish speakers.

All of these groups are protected under the federal Fair Housing Act, which makes it illegal to publish any advertisement “with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” Violators can face tens of thousands of dollars in fines.

Every single ad was approved within minutes.

The only ad that took longer than three minutes to be approved by Facebook sought to exclude potential renters “interested in Islam, Sunni Islam and Shia Islam.” It was approved after 22 minutes.

Under its own policies, Facebook should have flagged these ads, and prevented the posting of some of them. Its failure to do so revives questions about whether the company is in compliance with federal fair housing rules, as well as about its ability and commitment to police discriminatory advertising on the world’s largest social network.