Study: Social Media Use Can Increase Depression and Loneliness

The study essentially found that using social media less than one typically would leads to significant decreases in loneliness and depression, with the effect being more pronounced for people who were most depressed at the start of the study.

Social media does have its share of positives — it allows people otherwise separated by significant physical distance to keep in touch and interact, it provides platforms for sharing ideas and stories, and it provides ways for the disadvantaged in society to gain access to opportunities. There are clear downsides to social media services though:

The link between the two has been talked about for years, but a causal connection had never been proven. For the first time, University of Pennsylvania research based on experimental data connects Facebook, Snapchat, and Instagram use to decreased well-being. Psychologist Melissa G. Hunt published her findings in the December Journal of Social and Clinical Psychology.

Few prior studies have attempted to show that social-media use harms users’ well-being, and those that have either put participants in unrealistic situations or were limited in scope, asking them to completely forego Facebook and relying on self-report data, for example, or conducting the work in a lab in as little time as an hour.

“We set out to do a much more comprehensive, rigorous study that was also more ecologically valid,” says Hunt, associate director of clinical training in Penn’s Psychology Department.

To that end, the research team, which included recent alumni Rachel Marx and Courtney Lipson and Penn senior Jordyn Young, designed their experiment to include the three platforms most popular with a cohort of undergraduates, and then collected objective usage data automatically tracked by iPhones for active apps, not those running in the background.

Each of 143 participants completed a survey to determine mood and well-being at the study’s start, plus shared shots of their iPhone battery screens to offer a week’s worth of baseline social-media data. Participants were then randomly assigned to a control group, which had users maintain their typical social-media behavior, or an experimental group that limited time on Facebook, Snapchat, and Instagram to 10 minutes per platform per day.

For the next three weeks, participants shared iPhone battery screenshots to give the researchers weekly tallies for each individual. With those data in hand, Hunt then looked at seven outcome measures including fear of missing out, anxiety, depression, and loneliness.
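As a rough illustration of what comparing the two groups might look like, here is a hypothetical sketch, not the study's actual analysis: the data is simulated and all variable names are made up. It computes each participant's change in a depression score and tests whether the limited-use group improved more than the control group.

```python
# Hypothetical sketch (not the study's actual analysis): compare the change in
# a depression score between a control group and a limited-use group.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 143  # matches the study's sample size; the scores below are simulated

df = pd.DataFrame({
    "group": rng.choice(["control", "limited"], size=n),
    "depression_baseline": rng.normal(10, 3, size=n),
})
# Simulate a larger improvement for the limited-use group, for illustration only
improvement = np.where(df["group"] == "limited", 2.0, 0.5)
df["depression_week3"] = df["depression_baseline"] - improvement + rng.normal(0, 1, size=n)
df["change"] = df["depression_week3"] - df["depression_baseline"]

control = df.loc[df["group"] == "control", "change"]
limited = df.loc[df["group"] == "limited", "change"]

# Independent-samples t-test on the change scores (negative change = improvement)
t_stat, p_value = stats.ttest_ind(limited, control, equal_var=False)
print(f"mean change, limited: {limited.mean():.2f}; control: {control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```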

“Here’s the bottom line,” she says. “Using less social media than you normally would leads to significant decreases in both depression and loneliness. These effects are particularly pronounced for folks who were more depressed when they came into the study.”

Hunt stresses that the findings do not suggest that 18- to 22-year-olds should stop using social media altogether. In fact, she built the study as she did to stay away from what she considers an unrealistic goal. The work does, however, speak to the idea that limiting screen time on these apps couldn’t hurt.

“It is a little ironic that reducing your use of social media actually makes you feel less lonely,” she says. But when she digs a little deeper, the findings make sense. “Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.”

Because this particular work only looked at Facebook, Instagram, and Snapchat, it’s not clear whether it applies broadly to other social-media platforms. Hunt also hesitates to say that these findings would replicate for other age groups or in different settings. Those are questions she still hopes to answer, including in an upcoming study about the use of dating apps by college students.

Despite those caveats, and although the study didn’t determine the optimal time users should spend on these platforms or the best way to use them, Hunt says the findings do offer two related conclusions it couldn’t hurt any social-media user to follow.

For one, reduce opportunities for social comparison, she says. “When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.” Secondly, she adds, because these tools are here to stay, it’s incumbent on society to figure out how to use them in a way that limits damaging effects. “In general, I would say, put your phone down and be with the people in your life.”

Making Algorithms Less Biased and Reducing Inequalities of Power

Algorithms increasingly affect society, from employment (Amazon's algorithms discriminated against women) to the criminal justice system (where they often discriminate against African-Americans), and making them less biased would reduce inequalities in power. This is also related to research suggesting that AI is able to independently develop its own prejudices.

With machine learning systems now being used to determine everything from stock prices to medical diagnoses, it’s never been more important to look at how they arrive at decisions.

A new approach out of MIT demonstrates that the main culprit is not just the algorithms themselves, but how the data itself is collected.

“Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,” says lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson. “But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.”

Looking at specific examples, researchers were able to both identify potential causes for differences in accuracies and quantify each factor’s individual impact on the data. They then showed how changing the way they collected data could reduce each type of bias while still maintaining the same level of predictive accuracy.

“We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” says Sontag.

Chen says that one of the biggest misconceptions is that more data is always better. Getting more participants doesn’t necessarily help, since drawing from the exact same population often leads to the same subgroups being under-represented. Even the popular image database ImageNet, with its many millions of images, has been shown to be biased towards the Northern Hemisphere.

According to Sontag, often the key thing is to go out and get more data from those under-represented groups. For example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income. They found that if they had increased the dataset by a factor of 10, those mistakes would happen 40 percent less often.
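To make that kind of disparity concrete, here is an illustrative sketch, not the MIT team's code: it computes error rates separately per subgroup for a binary income classifier. The data is synthetic, and the disparity is built in purely for demonstration.

```python
# Illustrative sketch (not the MIT team's code): measuring how error rates
# differ across subgroups for a binary income classifier.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
# Synthetic labels and predictions with a built-in disparity for illustration
sex = rng.choice(["female", "male"], size=n)
y_true = rng.choice([0, 1], size=n)           # 1 = high income
flip = np.where(sex == "female", 0.20, 0.10)  # female rows misclassified more often
y_pred = np.where(rng.random(n) < flip, 1 - y_true, y_true)

df = pd.DataFrame({"sex": sex, "y_true": y_true, "y_pred": y_pred})
for group, g in df.groupby("sex"):
    fnr = ((g.y_true == 1) & (g.y_pred == 0)).sum() / (g.y_true == 1).sum()
    fpr = ((g.y_true == 0) & (g.y_pred == 1)).sum() / (g.y_true == 0).sum()
    print(f"{group}: false-negative rate {fnr:.2%}, false-positive rate {fpr:.2%}")
```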

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. Existing approaches for reducing discrimination would basically just make the non-Asian predictions less accurate, which is problematic when you’re talking about settings like healthcare that can quite literally be life-or-death.

Chen says that their approach allows them to look at a dataset and determine how many more participants from different populations are needed to improve accuracy for the group with lower accuracy while still preserving accuracy for the group with higher accuracy.
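One rough way to approximate that idea (a speculative sketch under simplified assumptions, not the paper's method; the dataset and model are synthetic) is to train a model with progressively more data from the under-represented group and watch how that group's test accuracy changes:

```python
# Rough sketch: see how accuracy for an under-represented group improves as
# more of its data is collected. Dataset and model are synthetic/illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a synthetic subgroup with a slightly shifted feature distribution."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(0, 1.0, size=n) > shift * 5).astype(int)
    return X, y

# Large majority group, small minority group with a different distribution
X_maj, y_maj = make_group(5000, 0.0)
X_min, y_min = make_group(1000, 0.5)
X_min_tr, X_min_te, y_min_tr, y_min_te = train_test_split(
    X_min, y_min, test_size=0.5, random_state=0)

for n_min in [50, 100, 200, 400, len(X_min_tr)]:
    X = np.vstack([X_maj, X_min_tr[:n_min]])
    y = np.concatenate([y_maj, y_min_tr[:n_min]])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    acc = clf.score(X_min_te, y_min_te)
    print(f"minority training samples: {n_min:4d} -> minority test accuracy: {acc:.3f}")
```

Tracing out that curve of accuracy versus subgroup sample size gives a sense of how much additional data collection would be needed before the accuracy gap closes.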

Improving and Extending Phone Battery Life

It is a regular complaint among smartphone users that their batteries fade too quickly. Given how integral battery life is, how expensive newer phones have become, and how an uncharged phone can be a problem in certain dire situations, it is worth briefly addressing how to get more usage out of phone batteries.

Phones use lithium-ion batteries, which means that batteries gradually lose their capacity as the number of charge and discharge cycles grows. There are ways to lessen this degradation, but it will occur over time nonetheless.

Battery life depends on how you’re using the phone on a specific day along with how you’ve previously used it. So there’s value in adopting better charging habits to retain more battery in the future.

First of all, keeping phones plugged in once they reach full charge damages the battery in the long run. Keeping a phone at full charge like that holds it in a high-voltage state that strains the battery's internal chemistry. When possible, it's also better to charge the phone in shorter, more frequent sessions rather than all the way to 100 percent, since that high-voltage state puts stress on the battery.

The majority of battery degradation occurs during deep charge-discharge cycles, when the battery is run from nearly full to nearly empty. This means it's better to limit how far the battery discharges in each cycle when possible, so that it doesn't go through deep discharge cycles.
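To see why shallow cycling matters, here is a toy calculation. The cycle-life figures per depth of discharge and the battery capacity are placeholders for illustration, not measured specs for any particular phone.

```python
# Toy illustration: total energy a battery can deliver over its life when
# cycled deeply vs. shallowly. All numbers are illustrative placeholders.
cycle_life_by_dod = {      # depth of discharge -> rough cycles before notable fade
    1.00: 400,             # full 100% -> 0% cycles
    0.50: 1400,            # 50% swings
    0.25: 2400,            # 25% swings
}

battery_wh = 12.0  # hypothetical phone battery capacity in watt-hours

for dod, cycles in cycle_life_by_dod.items():
    lifetime_energy = battery_wh * dod * cycles  # energy per cycle * number of cycles
    print(f"{dod:.0%} depth of discharge: ~{cycles} cycles, "
          f"~{lifetime_energy:,.0f} Wh delivered over the battery's life")
```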

Additionally, it should be noted that the fast-charge option often available today can significantly reduce battery longevity, that using Wi-Fi is less power-intensive than using 4G data, and that reducing screen brightness, avoiding excessive heat, and limiting video use are all ways to extend battery life in a given cycle.

There will eventually be much stronger batteries, just as there will eventually be better battery protections from water. (Something called F-POSS — which repels water and oil thanks to its low surface energy — is already in development.) Until then, though, users will probably want to handle their somewhat energy-fragile phone batteries with care.

Advanced Automation in the Future

Over the last several decades in the U.S., productivity gains have been concentrated in the upper echelon of the income distribution. The general population hasn’t really received them.

[Figure: productivity graph]

Productivity means the average output per hour in the economy. This has increased due to technological advances such as faster computer processing power and workers becoming more efficient at their jobs.

The story of robots taking all the jobs appears in the mass media with some regularity. However, if the robots were actually taking all the jobs today, it would show up in the data. Massive automation implies massive increases in productivity, but as it stands, productivity growth has been quite low. Yearly productivity growth was higher in 2003 than it is today, and since about 2005 there has been a slowdown. So based on the trend of the last dozen years, it is unlikely that we will see dramatic advances in productivity (and thus automation) over the next several years.

Society should be structured so that in the next decades, productivity gains will be distributed to the general population instead of primarily to upper-middle class and wealthy people. In a significant way, this will depend on who owns the technology.

It’s crucial that real care be taken with the rights awarded to the people who own the most valuable technology. This may frankly determine whether that technology is a curse or a blessing for humanity.

In one example, say that the groundbreaking designs for the most highly advanced robotics are developed by a major corporation, which then patents the designs. The patent is valuable since the robotics would be far more efficient than anything else on the market, and it would allow the corporation to charge much higher prices than would otherwise be possible. This would be good for the minority of people who own the company and are invested in it, but it would almost certainly be harmful to the general public.

The case of prescription drugs shows what happens when legal enforcement via patents goes wrong. The United States spent $450 billion on prescription drugs in 2017, an amount that would have been about a fifth as much (representing thousands of dollars per U.S. household in savings) were there no drug patents and a different system of drug research incentives in place. The obvious consequence of this disparity is that many people suffer from health ailments due to unnecessarily expensive medications.

The major corporation with the valuable robotics patents may be able to make the distribution of the valuable robotics (which could very efficiently perform a wide range of tasks) much more expensive than necessary, similar to the prescription drug example. If the robotics are too expensive, there would be fewer of them doing efficient labor such as assembling various household appliances, and this would manifest itself as a cost to a lot of people.

So instead of the advanced robotics (probably otherwise cheap due to the software and materials needed for them being low cost) being widely distributed inexpensively and allowed to most efficiently automate labor, there could be a case where their use is expensively restricted. The robotics may even be used by the potentially unaccountable corporation for mostly nefarious ends, and this is another problem that arises with the control granted by the patents. Clearly, there need to be public interest solutions to this sort of problem, such as avoiding the use of regressive governmental interventions, considering the use of shared public ownership to allow many people to receive dividends on the value the technology generates, and implementing sensible regulatory measures.

There are also standards that can be set into law and enforced. A basic story is that if (after automation advances lower labor requirements across the workforce) the length of the average work year decreases by 20 percent, about 25 percent more people will be employed. The arithmetic may not always be this straightforward, but it’s a basic estimate for consideration.
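The arithmetic behind that estimate, under the simplifying assumption that the total number of work hours demanded in the economy stays fixed, looks like this:

```python
# Worked arithmetic for the 20%-shorter-work-year estimate, assuming the total
# hours of work demanded in the economy stay fixed.
total_hours = 100.0          # normalize total annual hours demanded to 100
hours_per_worker = 1.0       # normalize the current average work year to 1

workers_before = total_hours / hours_per_worker          # 100 workers
workers_after = total_hours / (hours_per_worker * 0.8)   # 20% shorter work year

increase = workers_after / workers_before - 1
print(f"Employment increase: {increase:.0%}")  # -> 25%
```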

Less time spent working while employing more people is clearly a good standard for many reasons, particularly in the U.S. where leisure rates among most are low compared to other wealthy countries. More people being employed may also mean tighter labor markets that allow for workers to receive higher real wage gains.

If there is higher output due to technology, that value will go somewhere in the form of more money. Over the last decades we have seen this concentrated at the top, but it is possible to have workers both work shorter hours and have similar or even higher pay levels.

Lacking Net Neutrality Presents Public Safety Risks

It’s horrible that ISPs slowed speeds for emergency responders in the wake of massive wildfires. The issue of net neutrality is really quite simple at its core — it’s about whether ISPs will have too much control over user access to the Internet. The large ISPs would prefer as much control as possible to increase their profits, even if that comes at the expense of public safety.

An ongoing study first reported by Bloomberg reveals the extent to which major American telecom companies are throttling video content on apps such as YouTube and Netflix on mobile phones in the wake of the Republican-controlled Federal Communications Commission (FCC) repealing national net neutrality protections last December.

Researchers from Northeastern University and the University of Massachusetts, Amherst used a smartphone app called Wehe, which has been downloaded by about 100,000 users, to track when wireless carriers engage in data “differentiation,” or when companies alter download speeds depending on the type of content, which violates a key tenet of the repealed rules.
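As a toy illustration of the underlying idea (this is not Wehe's actual methodology, and the throughput numbers are made up), differentiation can be suspected when traffic that looks like streaming video is consistently delivered much more slowly than an otherwise identical control transfer:

```python
# Toy illustration (not Wehe's methodology): compare throughput samples for
# traffic identifiable as streaming video against a control transfer of the
# same size, and flag a large gap.
from statistics import mean

# Hypothetical throughput samples in Mbps from repeated transfers
video_mbps   = [1.4, 1.5, 1.6, 1.4, 1.5, 1.7, 1.5]   # traffic that looks like video streaming
control_mbps = [8.9, 9.3, 8.7, 9.1, 9.5, 9.0, 8.8]   # same bytes, disguised as generic data

def differentiation_suspected(video, control, ratio_threshold=0.7):
    """Flag possible throttling if video throughput is well below the control."""
    return mean(video) < ratio_threshold * mean(control)

print(differentiation_suspected(video_mbps, control_mbps))  # -> True
```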

Between January and May of this year, Wehe detected differentiation by Verizon 11,100 times; AT&T 8,398 times; T-Mobile 3,900 times; and Sprint 339 times. David Choffnes, one of the study’s authors and the app’s developer, told Bloomberg that YouTube was the top target, but carriers also slowed down speeds for Netflix, Amazon Prime Video, and the NBC Sports app.

[…]

Jeremy Gillula, tech policy director at Electronic Frontier Foundation, pointed to Verizon slowing down data speeds as Santa Clara County emergency responders battled the largest fire in California’s history. Verizon claimed it was a “customer-support mistake,” but county counsel James Williams said it proves that ISPs “will act in their economic interests, even at the expense of public safety,” and “that is exactly what the Trump administration’s repeal of net neutrality allows and encourages.”

That example, Gillula told Bloomberg, demonstrates “that ISPs are happy to use words like ‘unlimited’ and ‘no throttling’ in their public statements, but then give themselves the right to throttle certain traffic by burying some esoteric language in the fine print” of service contracts. “As a result, it’s especially important that consumers have tools like this to measure whether or not their ISP is throttling certain services.”

Using Virtual Reality in Beneficial Ways

Virtual reality is a technology that’s moving from the fringe toward something gradually being implemented more widely in the 21st century. This trend will only continue as the cost of virtualization hardware falls and the software improves.

The way virtual reality works is obvious enough — some sort of apparatus that covers the eyes and can transmit visuals of a virtual world is required. Virtual worlds of course have sounds to make them more immersive, and perhaps in the future there will be options to stimulate other senses as well. It isn’t unreasonable to expect VR technology that somehow replicates smell, taste, and touch. Eventually there is likely to be VR technology with direct brain stimulation too.

Virtual reality is often presented these days as a fun way to spend time through gaming, and while it can be beneficial to give people an escape that doesn’t involve hard drugs in a world that’s often crazy and fucked up, virtual reality has other uses that deserve to be more widely known.

One of the most notable recent results is a study finding that people recall information better through virtual reality. Since knowledge is power, an enhanced ability to recall knowledge would be helpful in a variety of scenarios, such as training people for meaningful work, retaining fond memories more effectively, and assisting in educational endeavors. This could be combined with other research finding that drawing pictures is a strong way to remember information.

Most people are not especially good multitaskers — the research tells us that only a few percent of people are “super taskers,” those with the ability to focus well on multiple tasks at once. Whatever the reason for this, it’s a general principle that human beings tend to perform better when their primary focus is on one task at a time. Virtual reality thus provides an immersive environment that should allow people to focus on a single task more than a traditional 2D learning environment does.

VR has been shown in one study to reduce children’s fear of needles. This makes sense given the distraction created by VR’s intense immersion. Since the fear of needles is paralyzing for some children, something as simple as a VR experience of going to an amusement park or a beach could be immensely helpful.

There’s a problem of too many people avoiding vital vaccinations in the United States, leading to diseases that should have been extinguished in the 20th century suddenly recurring in certain parts of the country. This is another example of how technology can be used to solve a real problem and protect society.

VR’s distraction could be extended to surgeries where local anesthesia is used, thus protecting people from pain. It has already been found that virtual reality therapy is effective at reducing pain in hospitalized patients. It isn’t entirely clear why, but it may be because the VR experience is so immersive that the brain is unable to process the pain stimuli concurrently with the VR.

It has been theorized that people have a fixed capacity for attention, and it has also been thought that when people are expecting physical pain in the immediate future, they tend to feel it more intensely. This may be because instead of the pain being a surprise, the increased focus on it before the pain hits may cause it to be felt more strongly.

Virtual reality will also have an important role in the journalism of the future. Studies have found that VR makes journalism more immersive, such as a VR story about factory farming that succeeded in raising awareness of the horrific treatment often endured by animals.

VR can thus be an effective tool for fighting corruption and injustice in an era when young people generally — for whatever reason — are reading less than past generations. However, it has been found that too much use of fantasy-like elements in VR distracts from the realism of a story and can make it less credible.

VR has also been referred to as an “empathy machine.” It’s conceivable that VR could be used for rehabilitation — use of the technology has already shown promise at increasing empathy levels, and VR shows promising results as a mental health treatment. The immersive experience of owning a body in VR space has at times been shown to have a real impact on altering perceptions and making lasting impressions.

In sum, while real-life interactions will always carry an importance that’s often the most meaningful, there are many ways that virtual reality may improve people’s lives.

Considerations for Securing and Optimizing the Internet of Things

Devices from smartphones to wifi-connected refrigerators make up what’s called the “Internet of Things”: billions of devices that are connected to the Internet. As the number of devices with Internet connectivity is set to expand significantly in the near future, it is worth examining how best to use the IoT going forward.

It is first of all worth noting that the expansion of the Internet of Things will open numerous security vulnerabilities for consumers. Of the tens of billions of devices that will be added over the next several years, few will likely receive regular security updates.

Security updates are important in computer security because they allow vulnerabilities in software to be patched. When vulnerabilities in devices are known and remain unpatched, they create opportunities for adversaries to exploit them.

Billions of new vulnerabilities create problems because of the way computer security tends to work: it may take only one vulnerability on a network to compromise much of what’s connected to it. That’s part of why defense in computer security has been so difficult — the attacker may only need one opening, while the defender may have to defend everything.

For example, say an adversary manages to compromise someone’s phone. The phone may then later connect to the refrigerator to prepare refreshments, further allowing the spread of malicious software from one infected device to another. This process may repeat itself again if the refrigerator were able to compromise the Internet-connected router, and once the router is compromised, the thermostat could be compromised too, making a home too hot or cold while driving up electricity costs.

There are a variety of realistic enough scenarios like this, which are more concerning when more sensitive items such as computers accessing bank accounts and home cameras are included. There are of course solutions to these concerns though.

It is probably better that some devices (such as pacemakers) are simply never designed to have Internet connectivity to begin with. Thermostats and refrigerators are the type of devices which clearly don’t require Internet connectivity to fulfill their intended purpose. Letting them be connected to the Internet may be convenient, but it may very well not be worth the increased potential of compromising other devices and being compromised themselves, leading to substantial costs in unintended heating or spoiled food.

For the devices that are, for whatever reason, connected to the Internet, it’s better to have multiple networks with strong security in a home or building when possible. That way, if an IoT device is compromised on one network, devices on another network have an additional barrier of protection against being compromised.

This relates to a concept in security known as security by compartmentalization. Since all of today’s software contains flaws — vulnerabilities that can be exploited — the approach of compartmentalization seeks to limit damage before it can spread too far.
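A small, self-contained sketch can illustrate why compartmentalization limits damage: starting from one compromised device, count how many other devices an attacker can reach on a flat home network versus one split into isolated segments. The device names and topology are made up for illustration.

```python
# Illustration of compartmentalization: starting from one compromised device,
# count how many others are reachable on a flat network versus when devices
# are split across isolated network segments. Device names are made up.
from collections import deque

def reachable(adjacency, start):
    """Breadth-first search over which devices can talk to which."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

devices = ["phone", "laptop", "router", "fridge", "thermostat", "camera"]

# Flat network: every device can reach every other device
flat = {d: [o for o in devices if o != d] for d in devices}

# Segmented: IoT gadgets live on their own network, isolated from the computers
segmented = {
    "phone": ["laptop", "router"], "laptop": ["phone", "router"], "router": ["phone", "laptop"],
    "fridge": ["thermostat", "camera"], "thermostat": ["fridge", "camera"], "camera": ["fridge", "thermostat"],
}

print("Flat network, phone compromised:     ", sorted(reachable(flat, "phone")))
print("Segmented network, phone compromised:", sorted(reachable(segmented, "phone")))
```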

In terms of optimization, some things are worthwhile to have connected. Different machines or robots should be communicating with each other about a task, such as how many raw materials are needed. This saves humans the need to relay those details, allowing them to focus on more productive tasks than mere reporting.
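As a hypothetical sketch of that kind of machine-to-machine reporting (all class names, machine IDs, and quantities are made up), machines could report their own material needs to a shared coordinator instead of a person relaying the numbers:

```python
# Hypothetical sketch: machines report their own raw-material needs to a shared
# coordinator instead of a person relaying the numbers. All names are made up.
from collections import defaultdict

class MaterialCoordinator:
    def __init__(self):
        self.requests = defaultdict(int)

    def report_need(self, machine_id, material, quantity):
        """Called by each machine when it detects low stock."""
        self.requests[material] += quantity
        print(f"{machine_id} requested {quantity} x {material}")

    def order_summary(self):
        """Aggregate totals that could be sent to a supplier automatically."""
        return dict(self.requests)

coordinator = MaterialCoordinator()
coordinator.report_need("assembler-1", "steel_sheet", 40)
coordinator.report_need("assembler-2", "steel_sheet", 25)
coordinator.report_need("welder-1", "wire_spool", 10)
print(coordinator.order_summary())  # {'steel_sheet': 65, 'wire_spool': 10}
```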

As cooperation can be powerful among humans, so too can it be among machines and other devices. It’s going to require strong security practices such as implementing compartmentalization, having standards on security updates, and using better encryption schemes for software, but it can be done, and it should be done. Since technology has no moral imperative, what humans do with technology will likely either create dystopias or utopias. It’s a question of whether the Internet of Things will lead primarily to chaos or to widespread benefits.