Past Successes of Teams in Sports Increase Their Chances of Future Victories

A study showing the power of teamwork, with implications for cooperative efforts outside of sports, such as in business or communities.

What makes a team successful? This is a crucial question not only for football coaches; it plays a role in almost all areas of life, from corporate management to politics. It goes without saying that a team can only win if its members have the necessary skills. But there is another important element: joint successes in the past increase the chances of winning. This effect shows up in a similar way across completely different team sports.

A research team from TU Wien (Vienna), Northwestern University (Evanston, USA), and the Indian Institute of Management (Udaipur) was able to statistically prove this phenomenon by analyzing large amounts of data in physical sports (football, baseball, cricket and basketball), and also in e-sports (namely the multiplayer online game “Dota 2”). The results have now been published in the journal Nature Human Behaviour.

Skills are not everything

The research team collected extensive data on numerous teams from several sports. The strength of individual players was quantified using different parameters — for example, in basketball, the number of points scored and the number of assists were taken into account. The strength of the team can then be calculated as the average strength of its players.

“This gives us a value that can predict the outcome of a game reasonably well,” says Julia Neidhardt of the E-Commerce research unit (Institute for Information Systems Engineering, TU Wien, Vienna). She conducts research in the areas of team performance, user modeling and recommender systems, considering not only individuals but also their relationships, which she models with tools such as social network analysis. “Teams with better individual players of course have a higher chance of winning — but that’s not the end of the story,” says Neidhardt.

The team effect

In all the sports studied, the actual results of the games can be predicted even better by not only considering the average strength of the team members, but also taking into account how often they have been victorious together in the past. It is therefore not enough to bring the best possible stars onto the field; they also have to gain experience together as a team by celebrating joint victories.
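To make the idea concrete, here is a minimal sketch of such a predictor. The ratings, the shared-wins counts, and the weight `w` are all invented for illustration; this is not the model from the paper, just the shape of the argument: average individual skill plus a bonus for prior joint victories.

```python
# Toy predictor: team strength as the average of individual player ratings,
# adjusted by a (hypothetical) bonus for prior shared victories.

def team_strength(player_ratings):
    """Average individual skill — the baseline predictor described above."""
    return sum(player_ratings) / len(player_ratings)

def predict_winner(team_a, team_b, shared_wins_a=0, shared_wins_b=0, w=0.1):
    """Compare teams on average skill plus an illustrative shared-success bonus."""
    score_a = team_strength(team_a) + w * shared_wins_a
    score_b = team_strength(team_b) + w * shared_wins_b
    return "A" if score_a >= score_b else "B"

# Two teams with identical average skill (8.0): prior joint wins break the tie.
print(predict_winner([7, 8, 9], [8, 8, 8], shared_wins_a=5, shared_wins_b=1))
```

The point of the sketch is the tie-break: with skill levels equal, the team with more shared victories is predicted to win, which mirrors the study’s finding that shared history adds predictive power beyond individual skill.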

Especially in elite sports, where the skills of all involved professionals are extremely high, individual differences do not necessarily play the key role. As the differences in the skill levels decrease, common experience becomes more important.

It is particularly interesting that the effect appeared in very different sports. In football or in the e-sport “Dota 2,” the team members constantly depend on each other; most actions are performed by several players at the same time. In baseball, on the other hand, throwing and hitting the ball are individual actions that have nothing to do with the rest of the team. Nevertheless, the team effect can be seen in all these sports.

Robust result

There are different possible explanations for this: by training and playing together for a long time, the players become better at coordinating their actions and predicting their teammates’ reactions, but there may also be strong psychological effects when there is a strong emotional bond between teammates. The statistical data cannot conclusively answer the question of which effect is more important. “We can see clearly that in the case of similar skill levels, prior shared success is a good predictor of which team is going to win,” says Julia Neidhardt. “This effect is very robust across a variety of sports. This leads us to suspect that similar effects also occur in other areas.”

Study: Social Media Use Can Increase Depression and Loneliness

The study essentially found that using social media less than one typically would leads to major decreases in loneliness and depression, with the effect being more pronounced for people who were most depressed at the start of the study.

Social media does have its share of positives — it allows people otherwise separated by significant physical distance to keep in touch and interact, it provides platforms for sharing ideas and stories, and it provides ways for the disadvantaged in society to gain access to opportunities. There are clear downsides to social media services, though.

The link between social media use and depression and loneliness has been talked about for years, but a causal connection had never been proven. For the first time, University of Pennsylvania research based on experimental data connects Facebook, Snapchat, and Instagram use to decreased well-being. Psychologist Melissa G. Hunt published her findings in the December Journal of Social and Clinical Psychology.

Few prior studies have attempted to show that social-media use harms users’ well-being, and those that have either put participants in unrealistic situations or were limited in scope: asking them to completely forgo Facebook and relying on self-report data, for example, or conducting the work in a lab in as little as an hour.

“We set out to do a much more comprehensive, rigorous study that was also more ecologically valid,” says Hunt, associate director of clinical training in Penn’s Psychology Department.

To that end, the research team, which included recent alumni Rachel Marx and Courtney Lipson and Penn senior Jordyn Young, designed their experiment to include the three platforms most popular with a cohort of undergraduates, and then collected objective usage data automatically tracked by iPhones for active apps, not those running in the background.

Each of 143 participants completed a survey to determine mood and well-being at the study’s start, and shared screenshots of their iPhone battery screens to offer a week’s worth of baseline social-media data. Participants were then randomly assigned to a control group, which had users maintain their typical social-media behavior, or an experimental group that limited time on Facebook, Snapchat, and Instagram to 10 minutes per platform per day.

For the next three weeks, participants shared iPhone battery screenshots to give the researchers weekly tallies for each individual. With those data in hand, Hunt then looked at seven outcome measures including fear of missing out, anxiety, depression, and loneliness.

“Here’s the bottom line,” she says. “Using less social media than you normally would leads to significant decreases in both depression and loneliness. These effects are particularly pronounced for folks who were more depressed when they came into the study.”

Hunt stresses that the findings do not suggest that 18- to 22-year-olds should stop using social media altogether. In fact, she built the study as she did to stay away from what she considers an unrealistic goal. The work does, however, speak to the idea that limiting screen time on these apps couldn’t hurt.

“It is a little ironic that reducing your use of social media actually makes you feel less lonely,” she says. But when she digs a little deeper, the findings make sense. “Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.”

Because this particular work only looked at Facebook, Instagram, and Snapchat, it’s not clear whether it applies broadly to other social-media platforms. Hunt also hesitates to say that these findings would replicate for other age groups or in different settings. Those are questions she still hopes to answer, including in an upcoming study about the use of dating apps by college students.

Despite those caveats, and although the study didn’t determine the optimal time users should spend on these platforms or the best way to use them, Hunt says the findings do offer two related conclusions it couldn’t hurt any social-media user to follow.

For one, reduce opportunities for social comparison, she says. “When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.” Secondly, she adds, because these tools are here to stay, it’s incumbent on society to figure out how to use them in a way that limits damaging effects. “In general, I would say, put your phone down and be with the people in your life.”

Excessive CEO Pay Takes Money Away from Other Workers

The op-ed provides a good analysis of the problem with the economic structures that allow CEOs to be excessively paid — the substantial amount by which CEOs are overpaid could instead be going to lower-level workers. Wages in the United States have hardly increased in decades for most American workers, and that excess pay would make a significant difference in their lives.

The problem is the structure of corporate governance. The people who most immediately determine the CEO’s pay are the corporation’s board of directors. These directors have incredibly cushy jobs. They typically get paid several hundred thousand dollars a year for perhaps 150 hours of work.

Members of corporate boards largely owe their jobs to the CEOs and top management. They almost never get booted out by shareholders; the reelection rate for board members running with board support is over 99 percent.

In this context, board members have no incentive to ask questions like, “Could we get someone as good as our CEO for half the pay?” There is basically no downward pressure on CEO pay and every reason to boost pay. After all, if you were sitting on some huge pot of other people’s money, wouldn’t you want to pay your friends well?

Of course, the CEO pay comes at the expense of returns to shareholders, and these have not been very good in recent years in spite of the best efforts of Trump and the Republicans to help them with tax cuts and pro-business regulation. In the last two decades, stock returns have averaged less than 4.7 percent annually above the rate of inflation. By contrast, in the long Golden Age from 1947 to 1973, real stock returns averaged 8.2 percent.

With the bulk of stock being held by the richest people in the country, there is no reason to shed tears for stockholders, but the fact is they are being ripped off by CEOs and other top management. Given the choice, we should prefer the money ends up in the hands of shareholders rather than CEOs. After all, people below the top 1 percent do own stock in their 401(k)s, as do public and private pension funds. By contrast, every dollar in additional CEO pay is going to someone in the top 0.001 percent of the income distribution.

More important than the money going to the CEOs is the impact that their outlandish pay has on pay structures in the economy more generally. When the CEO is pocketing $20 to $30 million a year, other top executives are likely earning close to $10 million and even the third-tier managers might be topping $1 million.

[…]

If a successful CEO of a large company were pocketing $2 to $3 million a year, instead of $20 to $30 million, the ripple effect on the pay of others near the top would leave much more money for everyone else. This gives us very good reason to worry about excessive CEO pay.

If the structure of corporate governance makes it too difficult for shareholders to collectively act to limit CEO pay, threatening them with a return to the pre-Trump 35 percent tax rate might give them enough incentive to get the job done. It has always been in the interests of shareholders to pay their CEOs as little as possible, just as they want to pay as little as possible to their other employees.

If shareholders pay a CEO $20 million more than needed to get someone to run the company, it has the same impact on the bottom line as paying $2,000 extra to each of 10,000 workers. No company deliberately overpays its frontline workers.

Climate Change Threatens Prospects for Human Society

The burning of fossil fuels that leads to climate change carries with it numerous threats to the environment, and the world’s nations must quickly transition to the use of clean energy to avert massive catastrophes in the future.

NOAM CHOMSKY: A couple of weeks ago, the IPCC, the international group of scientists monitoring climate change, came out with a very ominous report warning that the world has maybe a decade or two to basically end its reliance on fossil fuels if we’re to have any hope of controlling global warming below the level of utter disaster. And that, incidentally, is a conservative estimate. It’s a consensus view. Repeatedly, over the years, it has been shown that the IPCC analyses are much less alarmist than they should be.

Now comes this report in Nature that you mentioned, a couple of days ago, which shows that there has been a serious underestimate of the warming of the oceans. And they conclude that if these results hold up, the so-called carbon budget, the amount of carbon that we can spew into the atmosphere and still have a survival environment, has to be reduced by about 25 percent. That’s over and above the IPCC report. And the opening up of the Amazon to further exploitation will be another serious blow at the prospects of survival of organized human society.

[…]

We have to make decisions now which will literally determine whether organized human life can survive in any decent form. You can just imagine what the world would be like if the sea level rises, say, 10 or 20 feet or even higher, which is within the range—easily within the range of predictions. I mean, the consequences are unimaginable. But it’s as if we’re kind of like the proverbial lemmings just happily marching off the cliff, led by leaders who understand very well what they’re doing, but are so dedicated to enriching themselves and their friends in the near future that it simply doesn’t matter what happens to the human species. There’s nothing like this in all of human history. There have been plenty of monsters in the past, plenty of them. But you can’t find one who was dedicated, with passion, to destroying the prospects for organized human life. Hitler was horrible enough, but not that.

Making Algorithms Less Biased and Reducing Inequalities of Power

Algorithms increasingly affect society, from employment (Amazon’s algorithms discriminated against women) to the criminal justice system (where they often discriminate against African-Americans), and making them less biased would reduce inequalities in power. Relatedly, research suggests that AI can independently develop its own prejudices.

With machine learning systems now being used to determine everything from stock prices to medical diagnoses, it’s never been more important to look at how they arrive at decisions.

A new approach out of MIT demonstrates that the main culprit is often not the algorithms themselves, but how the data is collected.

“Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,” says lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson. “But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.”

Looking at specific examples, the researchers were able both to identify potential causes for differences in accuracy and to quantify each factor’s individual impact on the data. They then showed how changing the way they collected data could reduce each type of bias while still maintaining the same level of predictive accuracy.

“We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” says Sontag.

Chen says that one of the biggest misconceptions is that more data is always better. Getting more participants doesn’t necessarily help, since drawing from the exact same population often leads to the same subgroups being under-represented. Even the popular image database ImageNet, with its many millions of images, has been shown to be biased towards the Northern Hemisphere.

According to Sontag, often the key thing is to go out and get more data from those under-represented groups. For example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income. They found that if they had increased the dataset by a factor of 10, those mistakes would happen 40 percent less often.
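The kind of diagnosis described above — asking where a model errs, not just how often — can be sketched as a per-group error-rate comparison. The groups, labels, and records below are invented for illustration; the point is the shape of the check, not the numbers:

```python
# Diagnose unfair predictions by comparing error rates per subgroup
# instead of looking only at overall accuracy. Data is made up.
from collections import defaultdict

def error_rates_by_group(records):
    """records: (group, true_label, predicted_label) triples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

preds = [
    ("female", "high", "low"),   # misclassified as low-income
    ("female", "high", "low"),   # misclassified as low-income
    ("female", "high", "high"),  # correct
    ("male", "low", "low"),      # correct
    ("male", "low", "high"),     # misclassified as high-income
    ("male", "high", "high"),    # correct
]
print(error_rates_by_group(preds))
```

On this toy data the female error rate is twice the male one — the same asymmetry the income-prediction example describes — even though overall accuracy looks tolerable, which is why the per-group breakdown matters.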

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. Existing approaches for reducing discrimination would basically just make the non-Asian predictions less accurate, which is problematic when you’re talking about settings like healthcare that can quite literally be life-or-death.

Chen says that their approach allows them to look at a dataset and determine how many more participants from different populations are needed to improve accuracy for the group with lower accuracy while still preserving accuracy for the group with higher accuracy.

Research Into Pain Shows That When People Expect More Pain, They Feel More Pain

A good study that has needed to be done for a while.

Expect a shot to hurt and it probably will, even if the needle poke isn’t really so painful. Brace for a second shot and you’ll likely flinch again, even though — second time around — you should know better.

That’s the takeaway of a new brain imaging study published in the journal Nature Human Behaviour which found that expectations about pain intensity can become self-fulfilling prophecies. Surprisingly, those false expectations can persist even when reality repeatedly demonstrates otherwise, the study found.

“We discovered that there is a positive feedback loop between expectation and pain,” said senior author Tor Wager, a professor of psychology and neuroscience at the University of Colorado Boulder. “The more pain you expect, the stronger your brain responds to the pain. The stronger your brain responds to the pain, the more you expect.”

For decades, researchers have been intrigued with the idea of self-fulfilling prophecy, with studies showing expectations can influence everything from how one performs on a test to how one responds to a medication. The new study is the first to directly model the dynamics of the feedback loop between expectations and pain and the neural mechanisms underlying it.

Marieke Jepma, then a postdoctoral researcher in Wager’s lab, launched the research after noticing that even when test subjects were shown time and again that something wouldn’t hurt badly, some still expected it to.

“We wanted to get a better understanding of why pain expectations are so resistant to change,” said Jepma, lead author and now a researcher at the University of Amsterdam.

The researchers recruited 34 subjects and taught them to associate one symbol with low heat and another with high, painful heat.

Then, the subjects were placed in a functional magnetic resonance imaging (fMRI) machine, which measures blood flow in the brain as a proxy for neural activity. For 60 minutes, subjects were shown low or high pain cues (the symbols, the words Low or High, or the letters L and W), then asked to rate how much pain they expected.

Varying degrees of painful but non-damaging heat were then applied to their forearm or leg, with the hottest reaching “about what it feels like to hold a hot cup of coffee,” Wager explains.

Then they were asked to rate their pain.

Unbeknownst to the subjects, heat intensity was not actually related to the preceding cue.

The study found that when subjects expected more heat, brain regions involved in threat and fear were more activated as they waited. Regions involved in the generation of pain were more active when they received the stimulus. Participants reported more pain with high-pain cues, regardless of how much heat they actually got.

“This suggests that expectations had a rather deep effect, influencing how the brain processes pain,” said Jepma.

Surprisingly, their expectations also highly influenced their ability to learn from experience. Many subjects demonstrated high “confirmation bias” — the tendency to learn from things that reinforce our beliefs and discount those that don’t. For instance, if they expected high pain and got it, they might expect even more pain the next time. But if they expected high pain and didn’t get it, nothing changed.
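This asymmetric updating can be captured by a toy model — my own illustration, not the authors’ computational model — in which expectations take a large step toward confirming outcomes and only a tiny step toward disconfirming ones. The learning rates here are arbitrary:

```python
# Toy confirmation-bias model: expectations update readily when the outcome
# confirms them, but barely budge when the outcome is lower than expected.

def update_expectation(expected, felt, lr_confirm=0.5, lr_disconfirm=0.05):
    """Shift expectation toward felt pain, with a much smaller step
    when the experience disconfirms (is lower than) the expectation."""
    lr = lr_confirm if felt >= expected else lr_disconfirm
    return expected + lr * (felt - expected)

expectation = 8.0                 # expecting high pain on a 0-10 scale
for felt in [3.0, 3.0, 3.0]:      # repeatedly receive only mild heat
    expectation = update_expectation(expectation, felt)
print(round(expectation, 2))      # stays high despite mild experiences
```

After three mild experiences the modeled expectation has barely moved, mirroring the subjects who “expected high pain and didn’t get it” yet failed to learn.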

“You would assume that if you expected high pain and got very little you would know better the next time. But interestingly, they failed to learn,” said Wager.

This phenomenon could have tangible impacts on recovery from painful conditions, suggests Jepma.

“Our results suggest that negative expectations about pain or treatment outcomes may in some situations interfere with optimal recovery, both by enhancing perceived pain and by preventing people from noticing that they are getting better,” she said. “Positive expectations, on the other hand, could have the opposite effects.”

The research also may shed light on why, for some, chronic pain can linger long after damaged tissues have healed.

Whether in the context of pain or mental health, the authors suggest that it may do us good to be aware of our inherent eagerness to confirm our expectations.

“Just realizing that things may not be as bad as you think may help you to revise your expectation and, in doing so, alter your experience,” said Jepma.

AI System Successfully Predicts Alzheimer’s Years in Advance

Important research on Alzheimer’s disease, since it’s one of those diseases where treatment is more effective the earlier it’s caught.

Artificial intelligence (AI) technology improves the ability of brain imaging to predict Alzheimer’s disease, according to a study published in the journal Radiology.

Timely diagnosis of Alzheimer’s disease is extremely important, as treatments and interventions are more effective early in the course of the disease. However, early diagnosis has proven to be challenging. Research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain, but these changes can be difficult to recognize.

“Differences in the pattern of glucose uptake in the brain are very subtle and diffuse,” said study co-author Jae Ho Sohn, M.D., from the Radiology & Biomedical Imaging Department at the University of California, San Francisco (UCSF). “People are good at finding specific biomarkers of disease, but metabolic changes represent a more global and subtle process.”

The study’s senior author, Benjamin Franc, M.D., from UCSF, approached Dr. Sohn and University of California, Berkeley, undergraduate student Yiming Ding through the Big Data in Radiology (BDRAD) research group, a multidisciplinary team of physicians and engineers focusing on radiological data science. Dr. Franc was interested in applying deep learning, a type of AI in which machines learn by example much like humans do, to find changes in brain metabolism predictive of Alzheimer’s disease.

The researchers trained the deep learning algorithm on a special imaging technology known as 18F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood. PET scans can then measure the uptake of FDG in brain cells, an indicator of metabolic activity.

The researchers had access to data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multi-site study focused on clinical trials to improve prevention and treatment of this disease. The ADNI dataset included more than 2,100 FDG-PET brain images from 1,002 patients. Researchers trained the deep learning algorithm on 90 percent of the dataset and then tested it on the remaining 10 percent of the dataset. Through deep learning, the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.
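The 90/10 holdout described above is standard practice; a minimal hand-rolled sketch looks like this, with a placeholder list of integers standing in for the 2,100 FDG-PET images (the actual ADNI pipeline is of course more involved):

```python
# Minimal 90/10 train/test holdout, as described for the ADNI dataset.
import random

def train_test_split(items, test_fraction=0.10, seed=0):
    """Shuffle and hold out a fraction of the data for testing."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    shuffled = items[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

scans = list(range(2100))            # stand-in for the 2,100 brain images
train, test = train_test_split(scans)
print(len(train), len(test))         # 1890 210
```

Holding the test images out before training is what makes the reported accuracy meaningful: the algorithm is evaluated on scans it never saw while learning the metabolic patterns.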

Finally, the researchers tested the algorithm on an independent set of 40 imaging exams from 40 patients that it had never studied. The algorithm achieved 100 percent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.

“We were very pleased with the algorithm’s performance,” Dr. Sohn said. “It was able to predict every single case that advanced to Alzheimer’s disease.”

Although he cautioned that their independent test set was small and needs further validation with a larger multi-institutional prospective study, Dr. Sohn said that the algorithm could be a useful tool to complement the work of radiologists — especially in conjunction with other biochemical and imaging tests — in providing an opportunity for early therapeutic intervention.

“If we diagnose Alzheimer’s disease when all the symptoms have manifested, the brain volume loss is so significant that it’s too late to intervene,” he said. “If we can detect it earlier, that’s an opportunity for investigators to potentially find better ways to slow down or even halt the disease process.”