AI Becomes Very Good at Diagnosing Breast Cancer

Artificial intelligence is an increasingly hot topic, and it is worth remembering that AI can be used both to harm humans and (as in this case) to help them.

A computer programme can identify breast cancer from routine scans with greater accuracy than human experts, researchers said in what they hoped could prove a breakthrough in the fight against the global killer.

Breast cancer is one of the most common cancers in women, with more than 2 million new diagnoses last year alone.

Regular screening is vital in detecting the earliest signs of the disease in patients who show no obvious symptoms.

In Britain, women over 50 are advised to get a mammogram every three years, the results of which are analysed by two independent experts.

But interpreting the scans leaves room for error, and a small percentage of all mammograms either return a false positive – misdiagnosing a healthy patient as having cancer – or false negative – missing the disease as it spreads.

Now researchers at Google Health have trained an artificial intelligence model to detect cancer in breast scans from thousands of women in Britain and the United States.

The images had already been reviewed by doctors in real life but unlike in a clinical setting, the machine had no patient history to inform its diagnoses.

The team found that their AI model could predict breast cancer from the scans with a similar level of accuracy to expert radiologists.

Further, the AI reduced the proportion of cases in which cancer was incorrectly identified (false positives) by 5.7 percent among US patients and 1.2 percent among British patients.

It also reduced the percentage of missed diagnoses by 9.4 percent among US patients and by 2.7 percent in Britain.

“The earlier you identify a breast cancer the better it is for the patient,” Dominic King, UK lead at Google Health, told AFP.

“We think about this technology in a way that supports and enables an expert, or a patient ultimately, to get the best outcome from whatever diagnostics they’ve had.”

Computer ‘second opinion’

In Britain all mammograms are reviewed by two radiologists, a necessary but labour-intensive process.

The team at Google Health also conducted experiments comparing the computer’s decision with that of the first human scan reader.

If the two diagnoses agreed, the case was marked as resolved. Only when the outcomes were discordant was the second reader’s decision then brought in for comparison.

The study by King and his team, published in Nature, showed that using AI to verify the first human expert reviewer’s diagnosis could save up to 88 percent of the workload for the second clinician.
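To make the workflow concrete, here is a minimal sketch (purely illustrative, not Google Health’s actual code) of the triage logic described above, in which the second human reader is only consulted when the AI and the first reader disagree; all of the case data below is hypothetical.

```python
# Illustrative sketch of the double-reading triage described above: the AI's
# read is compared with the first human reader, and the second human reader is
# only consulted when the two disagree. All reads here are hypothetical booleans.

def triage_mammogram(ai_positive, first_reader_positive, second_reader_positive):
    """Return (final_decision, second_reader_needed)."""
    if ai_positive == first_reader_positive:
        # Concordant: the case is marked as resolved without the second reader.
        return ai_positive, False
    # Discordant: fall back to the second reader's decision.
    return second_reader_positive, True

# Hypothetical batch of cases, used to estimate how much second-reader
# workload such a scheme could save.
cases = [
    (True, True, True),
    (False, False, True),
    (True, False, False),
    (False, False, False),
]
second_reads = sum(triage_mammogram(*case)[1] for case in cases)
print(f"Second reader consulted in {second_reads} of {len(cases)} cases")
```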

“Find me a country where you can find a nurse or doctor that isn’t busy,” said King.

“There’s the opportunity for this technology to support the existing excellent service of the (human) reviewers.”

Ken Young, a doctor who manages mammogram collection for Cancer Research UK, contributed to the study.

He said it was unique for its use of real-life diagnosis scenarios from nearly 30,000 scans.

“We have a sample that is representative of all the women that might come through breast screening,” he said.

“It includes easy cases, difficult cases and everything in between.”

The team said further research was needed but they hoped that the technology could one day act as a “second opinion” for cancer diagnoses.

Using Chronoprinting to Cheaply Detect Food and Drug Impurities

The world has long needed this valuable sort of development to safeguard people’s health.

If we could tell authentic from counterfeit or adulterated drugs and foods just by looking at them, we could save money and lives every year, especially in the developing world, where the problem is worst. Unfortunately, the technologies that can detect what a sample is made of are expensive, energy-intensive, and largely unavailable in regions where they are needed most.

This may change with a simple new technique developed by engineers from the University of California, Riverside that can detect fake drugs from a video taken as the sample undergoes a disturbance.

If you’ve ever used online photo tools, you’ve probably seen how these tools use image analysis algorithms to categorize your photos. By distinguishing the different people in your photos, these algorithms make it easy to find all the photos of your daughter or your dad. Now, in the journal ACS Central Science, researchers report they have used these algorithms to solve a very different problem: identifying fake medicines and other potentially dangerous products.

Called “chronoprinting,” the technology requires only a few relatively inexpensive pieces of equipment and free software to accurately distinguish pure from inferior food and medicines.

The World Health Organization says that about 10 percent of all medicines in low- and middle-income countries are counterfeit, and food fraud is a global problem that costs consumers and industry billions of dollars per year. Fraudulent food and drugs waste money and jeopardize the health and lives of their consumers. But detecting fakes and frauds requires expensive equipment and highly trained experts.

William Grover, an assistant professor of bioengineering in UC Riverside’s Marlan and Rosemary Bourns College of Engineering, and Brittney McKenzie, a doctoral student in Grover’s lab, wondered if it would be possible to distinguish authentic from adulterated drugs and food by observing how they behave when disturbed by temperature changes or other causes. Two substances with identical compositions should respond the same way to a disturbance, and if two substances appear identical but respond differently, their composition must be different, they reasoned.

McKenzie designed a set of experiments to test this idea. She loaded samples of pure olive oil, one of the world’s most commonly adulterated foods, and cough syrup, which is often diluted or counterfeited in the developing world, into tiny channels on a microfluidic chip, then chilled the chip quickly in liquid nitrogen. A USB microscope camera filmed the samples reacting to the temperature change.

McKenzie and Grover wrote software that converts the video to a bitmap image. Because the image showed how the sample changed over time, the researchers called it a “chronoprint.”

The team then used image analysis algorithms to compare different chronoprints from the same substance. They found that each pure substance had a reliable chronoprint over multiple tests.
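As a rough illustration of the idea (not the authors’ actual pipeline), a chronoprint can be thought of as a stack of per-frame intensity profiles along the channel, and two chronoprints can be compared with a simple distance metric. The frame data and the comparison below are assumptions made for the sake of the sketch.

```python
# Minimal sketch of the chronoprint idea under simplifying assumptions: each
# video frame is collapsed to a 1D intensity profile along the microfluidic
# channel, the profiles are stacked over time into a 2D "chronoprint" image,
# and two chronoprints are compared with a simple distance. The real pipeline
# uses more sophisticated image analysis algorithms.
import numpy as np

def chronoprint(frames):
    """frames: list of 2D grayscale arrays, one per video frame.
    Returns a 2D array with time on one axis and channel position on the other."""
    profiles = [frame.mean(axis=1) for frame in frames]  # collapse frame width
    return np.stack(profiles, axis=0)

def chronoprint_distance(a, b):
    """Mean absolute difference between two equally sized chronoprints,
    normalized to [0, 1] for 8-bit images."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))) / 255.0)

# Hypothetical usage: distances between repeat runs of a pure sample should be
# small, while a diluted or adulterated sample should yield a larger distance.
rng = np.random.default_rng(0)
pure_run_1 = chronoprint([rng.integers(0, 255, (64, 16)) for _ in range(100)])
pure_run_2 = pure_run_1 + rng.normal(0, 2, pure_run_1.shape)
print(chronoprint_distance(pure_run_1, pure_run_2))
```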

Next, they repeated the experiment with samples of olive oil that had been diluted with other oils and cough syrup diluted with water. The adulterated samples produced chronoprints that were different from the pure samples. The difference was so big, so obvious, and so consistent that the researchers concluded chronoprints and image analysis algorithms can reliably detect some types of food and drug fraud.

“The significant visual differences between the samples were both unexpected and exciting, and with them being consistent we knew this could be a useful way to identify a wide range of samples,” McKenzie said.

Grover said their technique creates a powerful new connection between chemistry and computer science.

“By basically converting a chemical sample to an image, we can take advantage of all the different image analysis algorithms that computer scientists have developed,” he said. “And as those algorithms get better, our ability to chemically identify a sample should get better, too.”

The researchers used liquids in their experiments but note the method could also be used on solid materials dissolved in water, and other types of disturbance, such as heat or a centrifuge, could be used for substances that don’t react well to freezing. The technique is easy to learn, making highly trained experts unnecessary. Chronoprinting requires hobbyist-grade equipment and software downloadable for free from Grover’s lab website, putting it well within reach of government agencies and labs with limited resources.

Video on how this chronoprinting works: https://youtu.be/qbyE68qD2Zo

Brain-Training App Has Research-Backed Claims to Improve User Concentration

The app could potentially be quite useful, and it should be noted that research has been identifying what actually makes for a good brain workout.

A new ‘brain training’ game designed by researchers at the University of Cambridge improves users’ concentration, according to new research published today. The scientists behind the venture say this could provide a welcome antidote to the daily distractions that we face in a busy world.

In their book, The Distracted Mind: Ancient Brains in a High-Tech World, Adam Gazzaley and Larry D. Rosen point out that with the emergence of new technologies requiring rapid responses to emails and texts and working on multiple projects simultaneously, young people, including students, are having more problems with sustaining attention and frequently become distracted. This difficulty in focussing attention and concentrating is made worse by stress from a global environment that never sleeps and also frequent travel leading to jetlag and poor quality sleep.

“We’ve all experienced coming home from work feeling that we’ve been busy all day, but unsure what we actually did,” says Professor Barbara Sahakian from the Department of Psychiatry. “Most of us spend our time answering emails, looking at text messages, searching social media, trying to multitask. But instead of getting a lot done, we sometimes struggle to complete even a single task and fail to achieve our goal for the day. Then we go home, and even there we find it difficult to ‘switch off’ and read a book or watch TV without picking up our smartphones. For complex tasks we need to get in the ‘flow’ and stay focused.”

In recent years, as smartphones have become ubiquitous, there has been a growth in the number of so-called ‘brain training’ apps that claim to improve cognitive skills such as memory, numerical skills and concentration.

Now, a team from the Behavioural and Clinical Neuroscience Institute at the University of Cambridge has developed and tested ‘Decoder’, a new game that is aimed at helping users improve their attention and concentration. The game is based on the team’s own research and has been evaluated scientifically.

In a study published today in the journal Frontiers in Behavioural Neuroscience, Professor Sahakian and colleague Dr George Savulich have demonstrated that playing Decoder on an iPad for eight hours over one month improves attention and concentration. This form of attention activates a fronto-parietal network in the brain.

In their study, the researchers divided 75 healthy young adults into three groups: one group received Decoder, one control group played Bingo for the same amount of time and a second control group received no game. Participants in the first two groups were invited to attend eight one-hour sessions over the course of a month during which they played either Decoder or Bingo under supervision.

All 75 participants were tested at the start of the trial and then after four weeks using the CANTAB Rapid Visual Information Processing test (RVP). CANTAB RVP has been demonstrated in previously published studies to be a highly sensitive test of attention/concentration.

During the test, participants are asked to detect sequences of digits (e.g. 2-4-6, 3-5-7, 4-6-8). A white box appears in the middle of the screen, in which digits from 2 to 9 appear in a pseudo-random order, at a rate of 100 digits per minute. Participants are instructed to press a button every time they detect a sequence. The duration of the test is approximately five minutes.
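For illustration only, here is a small simulation of the task as described above (the target sequences and digit rate come from that description; everything else, including a “perfect” participant, is an assumption).

```python
# Rough simulation of the RVP task as described: digits 2-9 appear one at a
# time, and the participant should respond whenever one of the target
# sequences (2-4-6, 3-5-7, 4-6-8) has just been completed.
import random

TARGETS = {(2, 4, 6), (3, 5, 7), (4, 6, 8)}

def run_rvp(n_digits=500, seed=42):
    rng = random.Random(seed)
    stream, hits = [], 0
    for _ in range(n_digits):
        stream.append(rng.randint(2, 9))
        if tuple(stream[-3:]) in TARGETS:
            hits += 1  # a perfect participant would press the button here
    return hits

# At 100 digits per minute, 500 digits corresponds to the roughly five-minute test.
print(run_rvp())
```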

Results from the study showed a significant difference in attention as measured by the RVP. Those who played Decoder were better than those who played Bingo and those who played no game. The difference in performance was significant and meaningful as it was comparable to the effects seen with stimulants, such as methylphenidate, or nicotine. The former, also known as Ritalin, is a common treatment for Attention Deficit Hyperactivity Disorder (ADHD).

To ensure that Decoder improved focussed attention and concentration without impairing the ability to shift attention, the researchers also tested participants’ ability on the Trail Making Test, a commonly used neuropsychological test of attentional shifting; those who played Decoder improved on this test as well. During this test, participants have to first attend to numbers and then shift their attention to letters and then shift back to numbers. Additionally, participants enjoyed playing the game, and motivation remained high throughout the 8 hours of gameplay.

Professor Sahakian commented: “Many people tell me that they have trouble focussing their attention. Decoder should help them improve their ability to do this. In addition to healthy people, we hope that the game will be beneficial for patients who have impairments in attention, including those with ADHD or traumatic brain injury. We plan to start a study with traumatic brain injury patients this year.”

Dr Savulich added: “Many brain training apps on the market are not supported by rigorous scientific evidence. Our evidence-based game is developed interactively and the games developer, Tom Piercy, ensures that it is engaging and fun to play. The level of difficulty is matched to the individual player and participants enjoy the challenge of the cognitive training.”

The game has now been licensed through Cambridge Enterprise, the technology transfer arm of the University of Cambridge, to app developer Peak, who specialise in evidence-based ‘brain training’ apps. This will allow Decoder to become accessible to the public. Peak has developed a version for Apple devices and is releasing the game today as part of the Peak Brain Training app. Peak Brain Training is available from the App Store for free and Decoder will be available to both free and pro users as part of their daily workout. The company plans to make a version available for Android devices later this year.

“Peak’s version of Decoder is even more challenging than our original test game, so it will allow players to continue to gain even larger benefits in performance over time,” says Professor Sahakian. “By licensing our game, we hope it can reach a wide audience who are able to benefit by improving their attention.”

Study: Social Media Use Can Increase Depression and Loneliness

The study essentially found that when people use social media less than they typically would, they experience significant decreases in loneliness and depression, with that effect being more pronounced for people who were most depressed at the start of the study.

Social media does have its share of positives — it allows people otherwise separated by significant physical distance to keep in touch and interact, it provides platforms for sharing ideas and stories, and it provides ways for the disadvantaged in society to gain access to opportunities. There are clear downsides to social media services though:

The link between the two has been talked about for years, but a causal connection had never been proven. For the first time, University of Pennsylvania research based on experimental data connects Facebook, Snapchat, and Instagram use to decreased well-being. Psychologist Melissa G. Hunt published her findings in the December Journal of Social and Clinical Psychology.

Few prior studies have attempted to show that social-media use harms users’ well-being, and those that have either put participants in unrealistic situations or were limited in scope, asking them to completely forego Facebook and relying on self-report data, for example, or conducting the work in a lab in as little time as an hour.

“We set out to do a much more comprehensive, rigorous study that was also more ecologically valid,” says Hunt, associate director of clinical training in Penn’s Psychology Department.

To that end, the research team, which included recent alumni Rachel Marx and Courtney Lipson and Penn senior Jordyn Young, designed their experiment to include the three platforms most popular with a cohort of undergraduates, and then collected objective usage data automatically tracked by iPhones for active apps, not those running in the background.

Each of 143 participants completed a survey to determine mood and well-being at the study’s start, plus shared shots of their iPhone battery screens to offer a week’s worth of baseline social-media data. Participants were then randomly assigned to a control group, which had users maintain their typical social-media behavior, or an experimental group that limited time on Facebook, Snapchat, and Instagram to 10 minutes per platform per day.

For the next three weeks, participants shared iPhone battery screenshots to give the researchers weekly tallies for each individual. With those data in hand, Hunt then looked at seven outcome measures including fear of missing out, anxiety, depression, and loneliness.
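As a purely illustrative sketch (the study’s actual data came from iPhone battery screenshots that the researchers read off), one could imagine the weekly tallies and the 10-minutes-per-platform-per-day limit being computed from transcribed per-day usage figures like this; the helper names and sample numbers are assumptions.

```python
# Illustrative bookkeeping only: assume the screenshot readings have already
# been transcribed into per-day, per-app minutes, then compute weekly tallies
# and check compliance with the 10-minutes-per-platform-per-day limit.
from collections import defaultdict

def weekly_tallies(daily_minutes):
    """daily_minutes: list of dicts like {"Facebook": 12, "Snapchat": 8},
    one dict per day. Returns total weekly minutes per app."""
    totals = defaultdict(int)
    for day in daily_minutes:
        for app, minutes in day.items():
            totals[app] += minutes
    return dict(totals)

def within_limit(daily_minutes, cap_per_day=10):
    return all(minutes <= cap_per_day
               for day in daily_minutes for minutes in day.values())

week = [{"Facebook": 9, "Snapchat": 10, "Instagram": 7} for _ in range(7)]
print(weekly_tallies(week), within_limit(week))
```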

“Here’s the bottom line,” she says. “Using less social media than you normally would leads to significant decreases in both depression and loneliness. These effects are particularly pronounced for folks who were more depressed when they came into the study.”

Hunt stresses that the findings do not suggest that 18- to 22-year-olds should stop using social media altogether. In fact, she built the study as she did to stay away from what she considers an unrealistic goal. The work does, however, speak to the idea that limiting screen time on these apps couldn’t hurt.

“It is a little ironic that reducing your use of social media actually makes you feel less lonely,” she says. But when she digs a little deeper, the findings make sense. “Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.”

Because this particular work only looked at Facebook, Instagram, and Snapchat, it’s not clear whether it applies broadly to other social-media platforms. Hunt also hesitates to say that these findings would replicate for other age groups or in different settings. Those are questions she still hopes to answer, including in an upcoming study about the use of dating apps by college students.

Despite those caveats, and although the study didn’t determine the optimal time users should spend on these platforms or the best way to use them, Hunt says the findings do offer two related conclusions it couldn’t hurt any social-media user to follow.

For one, reduce opportunities for social comparison, she says. “When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.” Secondly, she adds, because these tools are here to stay, it’s incumbent on society to figure out how to use them in a way that limits damaging effects. “In general, I would say, put your phone down and be with the people in your life.”

Making Algorithms Less Biased and Reducing Inequalities of Power

Algorithms increasingly affect society, from employment (Amazon’s algorithms discriminated against women) to the criminal justice system (where they often discriminate against African-Americans), and making them less biased would reduce inequalities in power. This is also related to how research suggests that AI is able to independently develop its own prejudices.

With machine learning systems now being used to determine everything from stock prices to medical diagnoses, it’s never been more important to look at how they arrive at decisions.

A new approach out of MIT demonstrates that the main culprit is not just the algorithms themselves, but how the data itself is collected.

“Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,” says lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson. “But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.”

Looking at specific examples, researchers were able to both identify potential causes for differences in accuracies and quantify each factor’s individual impact on the data. They then showed how changing the way they collected data could reduce each type of bias while still maintaining the same level of predictive accuracy.

“We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” says Sontag.

Chen says that one of the biggest misconceptions is that more data is always better. Getting more participants doesn’t necessarily help, since drawing from the exact same population often leads to the same subgroups being under-represented. Even the popular image database ImageNet, with its many millions of images, has been shown to be biased towards the Northern Hemisphere.

According to Sontag, often the key thing is to go out and get more data from those under-represented groups. For example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income. They found that if they had increased the dataset by a factor of 10, those mistakes would happen 40 percent less often.
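A minimal, hypothetical sketch of this kind of subgroup diagnostic (not the MIT team’s actual method) is simply to compute error rates separately for each group, so that disparities like the income-prediction example above stay visible instead of being averaged away in a single accuracy number; the labels and group codes below are invented.

```python
# Hypothetical per-group error-rate diagnostic. Labels: 1 = high income,
# 0 = low income; groups: "F" = female, "M" = male. All data is made up.
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [0, 0, 0, 0, 1, 1, 0, 0]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
print(per_group_error_rates(y_true, y_pred, groups))  # {'F': 0.5, 'M': 0.0}
```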

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. Existing approaches for reducing discrimination would basically just make the non-Asian predictions less accurate, which is problematic when you’re talking about settings like healthcare that can quite literally be life-or-death.

Chen says that their approach allows them to look at a dataset and determine how many more participants from different populations are needed to improve accuracy for the group with lower accuracy while still preserving accuracy for the group with higher accuracy.

Improving and Extending Phone Battery Life

It is a regular complaint among smartphone users that their batteries fade too quickly. Given how integral battery life is, how expensive newer phones have become, and how much of a problem a dead phone can be in certain dire situations, it is worth briefly addressing how to get more usage out of phone batteries.

Phones use lithium-ion batteries, which means that batteries gradually lose their capacity as the number of charge and discharge cycles grows. There are ways to lessen this degradation, but it will occur over time nonetheless.

Battery life depends both on how you’re using the phone on a specific day and on how you’ve used it previously, so there’s value in adopting better charging habits to retain more battery capacity in the future.

First of all, keeping a phone plugged in once it reaches full charge damages the battery in the long run. Holding the battery at full charge keeps it in a high-voltage, high-stress state that harms its internal chemistry. When possible, it’s also better to charge the phone regularly in smaller amounts rather than all the way to 100 percent, as the high-voltage state puts stress on the battery.

The majority of battery degradation occurs during deep cycles that run from a nearly full charge down to a nearly empty battery. This means it’s better to limit how far the battery discharges when possible, topping it up before it runs very low, so that it doesn’t go through deep discharge cycles.
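One rough way to think about this (a bookkeeping sketch, not a battery chemistry model) is in terms of “equivalent full cycles”, a common first-order approximation in which many shallow top-ups add up to one full cycle; real degradation also depends nonlinearly on depth of discharge, voltage, and temperature, which is why deep discharges and sustained 100 percent charge are harder on the cells. All numbers below are hypothetical.

```python
# Rough equivalent-full-cycle bookkeeping. A discharge fraction of 0.3 means
# the phone went from, say, 80% down to 50% before being recharged.

def equivalent_full_cycles(discharge_fractions):
    return sum(discharge_fractions)

shallow_week = [0.2] * 14        # frequent small top-ups
deep_week = [0.9, 0.9, 0.9]      # repeatedly running the phone nearly flat
print(round(equivalent_full_cycles(shallow_week), 2))  # about 2.8 cycles
print(round(equivalent_full_cycles(deep_week), 2))     # about 2.7 cycles
# Similar cycle counts, but the deep-discharge pattern stresses the cells more.
```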

Additionally, it should be noted that the fast-charge option often available today puts extra stress on the battery and can shorten its lifespan, using wifi is less power-intensive than using 4G data, and reducing screen brightness, avoiding excessive heat, and limiting video use are all ways to extend battery life in a given cycle.

There will eventually be much stronger batteries, just as there will eventually be better battery protections from water. (A material called F-POSS, which repels water and oil by having low surface energy, is already in development.) Until then though, users will probably want to handle their somewhat energy-fragile phone batteries with care.

Advanced Automation in the Future

Over the last several decades in the U.S., productivity gains have been concentrated in the upper echelon of the income distribution. The general population hasn’t really received them.

[Figure: productivity graph]

Productivity means the average output per hour in the economy. This has increased due to technological advances such as faster computer processing power and workers becoming more efficient at their jobs.

The story of robots taking all the jobs is printed in the mass media with some regularity. However, if the robots were actually taking all the jobs today, it would show up in the data. Massive automation implies massive increases in productivity, but as it is now, productivity growth has been quite low. Yearly productivity growth was higher in 2003 than it is today, and since about 2005 there has been a slowdown. So based on the trend of the last dozen years, it is unlikely that we will see significant advances in productivity (and therefore automation) over the next several years.

Society should be structured so that in the next decades, productivity gains will be distributed to the general population instead of primarily to upper-middle class and wealthy people. In a significant way, this will depend on who owns the technology.

It’s crucial that real care be taken over the rights awarded to those who own the most valuable technology. This may frankly determine whether that technology is a curse or a blessing for humanity.

In one example, say that the groundbreaking designs for the most highly advanced robotics are developed by a major corporation, which then patents the designs. The patent is valuable since the robotics would be far more efficient than anything else on the market, and it would allow the corporation to charge much higher prices than would otherwise be possible. This would be good for the minority of people who own the company and are invested in it, but it would almost certainly be harmful to the general public.

The case of prescription drugs shows us what happens when legal enforcement via patents goes wrong. The United States spent $450 billion on prescription drugs in 2017, an amount that would have been about a fifth as much (representing thousands of dollars per U.S. household in savings) were there no drug patents and a different system of drug research incentives. The consequence of this disparity is obviously that there are many people who suffer with health ailments due to unnecessarily expensive medications.

The major corporation with the valuable robotics patents may be able to make the distribution of those robotics (which could very efficiently perform a wide range of tasks) much more expensive than necessary, similar to the prescription drugs example. The robotics being too expensive would mean there’d be fewer of them to do efficient labor, such as assembling various household appliances, and this would manifest itself as a cost to a lot of people.

So instead of the advanced robotics (probably otherwise cheap due to the software and materials needed for them being low cost) being widely distributed inexpensively and allowed to most efficiently automate labor, there could be a case where their use is expensively restricted. The robotics may even be used by the potentially unaccountable corporation for mostly nefarious ends, and this is another problem that arises with the control granted by the patents. Clearly, there need to be public interest solutions to this sort of problem, such as avoiding the use of regressive governmental interventions, considering the use of shared public ownership to allow many people to receive dividends on the value the technology generates, and implementing sensible regulatory measures.

There are also standards that can be set into law and enforced. A basic story is that if (after automation advances lead to lower labor requirements among workers generally) the length of the average work year decreases by 20 percent, about 25 percent more people will be employed. The arithmetic may not always be this straightforward, but it’s a basic estimate for consideration.
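To see the arithmetic: if the total amount of work in the economy stays roughly the same while the average work year falls from, say, 2,000 hours to 1,600 hours (a 20 percent cut), then covering the same hours requires 2,000 / 1,600 = 1.25 times as many workers, which is about 25 percent more people employed.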

Less time spent working while employing more people is clearly a good standard for many reasons, particularly in the U.S., where most people have less leisure time than their counterparts in other wealthy countries. More people being employed may also mean tighter labor markets that allow workers to receive higher real wage gains.

If there is higher output due to technology, that value will go somewhere in the form of more money. Over the last decades we have seen this concentrated at the top, but it is possible to have workers both work shorter hours and have similar or even higher pay levels.

Lacking Net Neutrality Presents Public Safety Risks

It’s horrible that ISPs slowed speeds to emergency respondents in the wake of massive wildfires. The issue of net neutrality is really quite simple at its core — it’s about whether ISPs will have too much control over user access to the Internet or not. The large ISPs would prefer as much control as possible to increase their profits, even if that’s at the expense of public safety.

An ongoing study first reported by Bloomberg reveals the extent to which major American telecom companies are throttling video content on apps such as YouTube and Netflix on mobile phones in the wake of the Republican-controlled Federal Communications Commission (FCC) repealing national net neutrality protections last December.

Researchers from Northeastern University and the University of Massachusetts, Amherst used a smartphone app called Wehe, which has been downloaded by about 100,000 users, to track when wireless carriers engage in data “differentiation,” or when companies alter download speeds depending on the type of content, which violates a key tenet of the repealed rules.
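For a sense of what differentiation detection involves (this is a simplified illustration, not the Wehe app’s actual methodology), one can compare repeated throughput measurements for app-like traffic against a control transfer over the same connection and flag the carrier if the app-like traffic is consistently, substantially slower; the function name, sample values, and threshold below are all assumptions.

```python
# Simplified differentiation check: flag possible throttling if app-shaped
# traffic is consistently much slower than control traffic of the same size.
from statistics import median

def looks_throttled(app_mbps, control_mbps, ratio_threshold=0.7):
    """app_mbps / control_mbps: repeated throughput samples in Mbit/s."""
    return median(app_mbps) < ratio_threshold * median(control_mbps)

app_runs = [1.4, 1.5, 1.5, 1.6, 1.4]      # hypothetical streaming-shaped traffic
control_runs = [5.1, 4.8, 5.3, 4.9, 5.0]  # hypothetical control traffic
print(looks_throttled(app_runs, control_runs))  # True
```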

Between January and May of this year, Wehe detected differentiation by Verizon 11,100 times; AT&T 8,398 times; T-Mobile 3,900 times; and Sprint 339 times. David Choffnes, one of the study’s authors and the app’s developer, told Bloomberg that YouTube was the top target, but carriers also slowed down speeds for Netflix, Amazon Prime Video, and the NBC Sports app.

[…]

Jeremy Gillula, tech policy director at Electronic Frontier Foundation, pointed to Verizon slowing down data speeds as Santa Clara County emergency responders battled the largest fire in California’s history. Verizon claimed it was a “customer-support mistake,” but county counsel James Williams said it proves that ISPs “will act in their economic interests, even at the expense of public safety,” and “that is exactly what the Trump administration’s repeal of net neutrality allows and encourages.”

That example, Gillula told Bloomberg, demonstrates “that ISPs are happy to use words like ‘unlimited’ and ‘no throttling’ in their public statements, but then give themselves the right to throttle certain traffic by burying some esoteric language in the fine print” of service contracts. “As a result, it’s especially important that consumers have tools like this to measure whether or not their ISP is throttling certain services.”

Using Virtual Reality in Beneficial Ways

Virtual reality is a technology that’s advancing from the fringe to something that’s gradually being adopted more widely in the 21st century. This trend will only continue as the cost of the hardware falls and the software improves.

The way virtual reality works is straightforward enough: some sort of apparatus that covers the eyes and can display visuals of a virtual world is required. Virtual worlds of course will have sounds to make them more immersive, and perhaps in the future there will be an option to stimulate other senses as well. It isn’t unreasonable to expect the possibility of VR technology that somehow provides the replication of smell, taste, and feel. Eventually there is likely to be VR technology with direct brain stimulation too.

Virtual reality is often presented these days as a fun way to spend time through gaming, and while it can be beneficial to provide people with an escape that doesn’t involve hard drugs in a world that’s often crazy and fucked up, virtual reality has other uses that deserve to be known about more.

One of the most notable recent results is a study finding that people recall information better through virtual reality. Since knowledge is power, an enhanced ability to recall knowledge would be helpful in a variety of scenarios, such as training people for meaningful work, keeping fond memories more effectively, and assisting in educational endeavors. This could be combined with other research finding that drawing pictures is a strong way to remember information.

Most people are not especially good multi-taskers — the research tells us that only a few percent of people are “super taskers,” or those with the ability to focus on multiple tasks well. For whatever reason this is, it’s a general principle that human beings tend to perform better when their primary focus is on one task at a given time. Virtual reality thus provides an immersive environment that should allow people to focus more on one task than a traditional 2D learning environment.

VR has been shown in one study to reduce the fear children have of needles. This makes sense given the distraction from VR’s intense immersion. Since the fear of needles is an overwhelming one for some children, something as simple as a VR experience of going to an amusement park or a beach would be immensely helpful.

There’s a problem of too many people avoiding vital vaccinations in the United States, leading to diseases that should have been extinguished in the 20th century suddenly making recurrences in certain parts of the country. This is another example of how technology can be used to solve a real problem and protect society.

VR’s distraction could be extended to surgeries where local anaesthesia is used, thus protecting people from pain. It has already been found that virtual reality therapy is effective at reducing pain in hospitalized patients. It isn’t entirely clear why, but it may be because the VR experience is so immersive that the brain is unable to concurrently process the pain stimuli along with the VR.

It has been theorized that people have a fixed capacity for attention, and it has also been thought that when people are expecting physical pain in the immediate future, they tend to feel it more intensely. This may be because instead of the pain being a surprise, the increased focus on it before the pain hits may cause it to be felt more strongly.

Virtual reality will also have an important role in the journalism of the future. Studies have found that VR makes journalism more immersive; a VR story about factory farming, for example, succeeded in raising awareness of the horrific treatment often endured by animals.

VR can thus be an effective tool for fighting corruption and injustice in an era where young people generally (for whatever reason) are reading less than past generations. It has been found, however, that too much use of fantasy-like elements in VR distracts from the realism of a story and can make it less credible.

VR has also been referred to as an “empathy machine.” It’s conceivable that VR could be used for rehabilitation: use of the technology has already shown promise at increasing empathy levels, and VR shows promising mental health treatment results. The immersive virtual experience of owning a body in VR space has at times been shown to have a real impact on altering perceptions and making important impressions.

In sum, while interactions in real life will always have an importance that’s often most meaningful, there are many ways that virtual reality may improve people’s lives.

Considerations for Securing and Optimizing the Internet of Things

Devices from smartphones to wifi-connected refrigerators represent what’s called the “Internet of Things,” billions of devices that are connected to the Internet. As the number of devices with Internet connectivity is set to expand significantly in the near future, it is worth examining how best to use the IoT going forward.

It is first of all worth noting that the expansion of the Internet of Things will open numerous security vulnerabilities for consumers. Of the tens of billions of devices that will be added over the next several years, few will likely receive regular security updates.

Security updates are important in computer security because they allow vulnerabilities in software to be patched. When vulnerabilities in devices are known and remain unpatched, they create opportunities for adversaries to exploit them.

Billions of new vulnerabilities create problems because of the way computer security tends to work: it may take only one vulnerability on a network to compromise much else. That’s part of why defense in computer security has been so difficult; the attacker may need only one opening, while the defender may have to defend everything.

For example, say an adversary manages to compromise someone’s phone. The phone may then later connect to the refrigerator to prepare refreshments, further allowing the spread of malicious software from one infected device to another. This process may repeat itself again if the refrigerator were able to compromise the Internet-connected router, and once the router is compromised, the thermostat could be compromised too, making a home too hot or cold while driving up electricity costs.

There are a variety of realistic enough scenarios like this, which are more concerning when more sensitive items such as computers accessing bank accounts and home cameras are included. There are of course solutions to these concerns though.

It is probably better that some devices (such as pacemakers) are simply never designed to have Internet connectivity to begin with. Thermostats and refrigerators are the type of devices which clearly don’t require Internet connectivity to fulfill their intended purpose. Letting them be connected to the Internet may be convenient, but it may very well not be worth the increased potential of compromising other devices and being compromised themselves, leading to substantial costs in unintended heating or spoiled food.

For the devices that are, for whatever reason, connected to the Internet, it’s better to have multiple networks with strong security in a home or building where possible. That way, if an IoT device is compromised on one network, devices on another network have an additional barrier of protection against being compromised.

This relates to a concept in security known as security by compartmentalization. Since all of today’s software contains flaws — vulnerabilities that can be exploited — the approach of compartmentalization seeks to limit damage before it can spread too far.
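A toy way to see the benefit of compartmentalization is to model which devices can reach which and then check how far a compromise could spread from a single infected device; the device names and the two topologies below are hypothetical.

```python
# Toy illustration of security by compartmentalization: breadth-first search
# over a device connectivity graph shows how far an attacker could spread
# from one compromised device.
from collections import deque

def reachable(graph, start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Everything on one flat home network: a compromised phone can reach it all.
flat = {
    "phone": ["router"],
    "router": ["phone", "fridge", "thermostat", "laptop", "camera"],
    "fridge": ["router"], "thermostat": ["router"],
    "laptop": ["router"], "camera": ["router"],
}

# Segmented: IoT gadgets sit on their own network, isolated from the laptop.
segmented = {
    "phone": ["main_router"],
    "main_router": ["phone", "laptop"],
    "laptop": ["main_router"],
    "iot_router": ["fridge", "thermostat", "camera"],
    "fridge": ["iot_router"], "thermostat": ["iot_router"], "camera": ["iot_router"],
}

print(reachable(flat, "phone"))       # the whole house
print(reachable(segmented, "phone"))  # only the phone, main_router, and laptop
```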

In terms of optimization, some things are worthwhile to have connected. Different machines or robots should be communicating with each other about a task, such as how many raw materials are needed. This saves humans from having to relay such details, allowing them to focus on more productive tasks.

As cooperation can be powerful among humans, so too can it be among machines and other devices. It’s going to require strong security practices such as implementing compartmentalization, having standards on security updates, and using better encryption schemes for software, but it can be done, and it should be done. Since technology has no moral imperative, what humans do with technology will likely either create dystopias or utopias. It’s a question of whether the Internet of Things will lead primarily to chaos or to widespread benefits.

Using Spectral Cloaking for Object Invisibility

An example of when science fiction becomes science fact. This advance could be used in many different ways, including in digital security, with out of sight possibly meaning out of mind.

Researchers and engineers have long sought ways to conceal objects by manipulating how light interacts with them. A new study offers the first demonstration of invisibility cloaking based on the manipulation of the frequency (color) of light waves as they pass through an object, a fundamentally new approach that overcomes critical shortcomings of existing cloaking technologies.

The approach could be applicable to securing data transmitted over fiber optic lines and also help improve technologies for sensing, telecommunications and information processing, researchers say. The concept, theoretically, could be extended to make 3D objects invisible from all directions, a significant step in the development of practical invisibility cloaking technologies.

Most current cloaking devices can fully conceal the object of interest only when the object is illuminated with just one color of light. However, sunlight and most other light sources are broadband, meaning that they contain many colors. The new device, called a spectral invisibility cloak, is designed to completely hide arbitrary objects under broadband illumination.

The spectral cloak operates by selectively transferring energy from certain colors of the light wave to other colors. After the wave has passed through the object, the device restores the light to its original state. Researchers demonstrate the new approach in Optica, The Optical Society’s journal for high impact research.
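As a very loose numerical analogy (this is not the optical implementation, just a toy in the spirit of “set the vulnerable colors aside, then put them back”), one can remove a band of spectral components before a simulated “object” would distort them and restore that band afterwards, leaving the output identical to the input; the signal, the blocked band, and the attenuation factor are all invented for the sketch.

```python
# Crude numerical analogy: set aside the FFT bins the "object" would distort,
# let the object act on a spectrum that carries no energy in that band, then
# restore the bins so the emerging signal matches the original. The real cloak
# shifts energy between colors in an analog, reversible way.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=1024)              # stand-in for a broadband wave
spectrum = np.fft.rfft(signal)

blocked = slice(100, 200)                   # band the "object" would distort
saved = spectrum[blocked].copy()            # cloak step 1: set that band aside
spectrum[blocked] = 0

spectrum[blocked] *= 0.1                    # the "object" attenuates an empty band

spectrum[blocked] = saved                   # cloak step 2: restore the band
recovered = np.fft.irfft(spectrum, n=len(signal))
print(np.allclose(recovered, signal))       # True: no detectable distortion
```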

“Our work represents a breakthrough in the quest for invisibility cloaking,” said José Azaña, National Institute of Scientific Research (INRS), Montréal, Canada. “We have made a target object fully invisible to observation under realistic broadband illumination by propagating the illumination wave through the object with no detectable distortion, exactly as if the object and cloak were not present.”

[…]

While the new design would need further development before it could be translated into a Harry Potter-style, wearable invisibility cloak, the demonstrated spectral cloaking device could be useful for a range of security goals. For example, current telecommunication systems use broadband waves as data signals to transfer and process information. Spectral cloaking could be used to selectively determine which operations are applied to a light wave and which are “made invisible” to it over certain periods of time. This could prevent an eavesdropper from gathering information by probing a fiber optic network with broadband light.

The overall concept of reversible, user-defined spectral energy redistribution could also find applications beyond invisibility cloaking. For example, selectively removing and subsequently reinstating colors in the broadband waves that are used as telecommunication data signals could allow more data to be transmitted over a given link, helping to alleviate logjams as data demands continue to grow. Or, the technique could be used to minimize some key problems in today’s broadband telecommunication links, for example by reorganizing the signal energy spectrum to make it less vulnerable to dispersion, nonlinear phenomena and other undesired effects that impair data signals.

Victory for Privacy as Supreme Court Rules Warrantless Phone Location Tracking Unconstitutional

This is a very important ruling that should serve as a good precedent for technologically-based privacy rights in the future.

The Supreme Court handed down a landmark opinion today in Carpenter v. United States, ruling 5-4 that the Fourth Amendment protects cell phone location information. In an opinion by Chief Justice Roberts, the Court recognized that location information, collected by cell providers like Sprint, AT&T, and Verizon, creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” As a result, police must now get a warrant before obtaining this data.

This is a major victory. Cell phones are essential to modern life, but the way that cell phones operate—by constantly connecting to cell towers to exchange data—makes it possible for cell providers to collect information on everywhere that each phone—and by extension, each phone’s owner—has been for years in the past. As the Court noted, not only does access to this kind of information allow the government to achieve “near perfect surveillance, as if it had attached an ankle monitor to the phone’s user,” but, because phone companies collect it for every device, the “police need not even know in advance whether they want to follow a particular individual, or when.”

[…]

Perhaps the most significant part of today’s ruling for the future is its explicit recognition that individuals can maintain an expectation of privacy in information that they provide to third parties. The Court termed that a “rare” case, but it’s clear that other invasive surveillance technologies, particularly those that can track individuals through physical space, are now ripe for challenge in light of Carpenter. Expect to see much more litigation on this subject from EFF and our friends.