Using Chronoprinting to Cheaply Detect Food and Drug Impurities

The world has long needed developments like this to safeguard people’s health.

If we could tell authentic from counterfeit or adulterated drugs and foods just by looking at them, we could save money and lives every year, especially in the developing world, where the problem is worst. Unfortunately, the technologies that can detect what a sample is made of are expensive, energy-intensive, and largely unavailable in regions where they are needed most.

This may change with a simple new technique, developed by engineers at the University of California, Riverside, that can detect fake drugs from a video taken as the sample undergoes a disturbance.

If you’ve ever used online photo tools, you’ve probably seen how these tools use image analysis algorithms to categorize your photos. By distinguishing the different people in your photos, these algorithms make it easy to find all the photos of your daughter or your dad. Now, in the journal ACS Central Science, researchers report they have used these algorithms to solve a very different problem: identifying fake medicines and other potentially dangerous products.

Called “chronoprinting,” the technology requires only a few relatively inexpensive pieces of equipment and free software to accurately distinguish pure from inferior food and medicines.

The World Health Organization says that about 10 percent of all medicines in low- and middle-income countries are counterfeit, and food fraud is a global problem that costs consumers and industry billions of dollars per year. Fraudulent food and drugs waste money and jeopardize the health and lives of their consumers. But detecting fakes and frauds requires expensive equipment and highly trained experts.

William Grover, an assistant professor of bioengineering in UC Riverside’s Marlan and Rosemary Bourns College of Engineering, and Brittney McKenzie, a doctoral student in Grover’s lab, wondered if it would be possible to distinguish authentic from adulterated drugs and food by observing how they behave when disturbed by temperature changes or other causes. They reasoned that two substances with identical compositions should respond the same way to a disturbance; if two substances appear identical but respond differently, their compositions must differ.

McKenzie designed a set of experiments to test this idea. She loaded samples of pure olive oil, one of the world’s most commonly adulterated foods, and cough syrup, which is often diluted or counterfeited in the developing world, into tiny channels on a microfluidic chip, then chilled the chip rapidly in liquid nitrogen. A USB microscope camera filmed the samples reacting to the temperature change.

McKenzie and Grover wrote software that converts the video to a bitmap image. Because the image showed how the sample changed over time, the researchers called it a “chronoprint.”

The team then used image analysis algorithms to compare different chronoprints from the same substance. They found that each pure substance had a reliable chronoprint over multiple tests.
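
To make the idea concrete, here is a minimal sketch of how a video might be reduced to a chronoprint and how two chronoprints might be compared, assuming OpenCV and NumPy. The region of interest, the column-averaging step, and the simple distance metric are illustrative assumptions, not the researchers’ published software.

```python
# Illustrative chronoprint construction and comparison; the region of
# interest, averaging step, and distance metric are assumptions, not the
# researchers' published pipeline.
import cv2
import numpy as np

def chronoprint(video_path, roi=(slice(100, 400), slice(300, 320))):
    """Reduce each frame's channel region to one column of mean pixel
    intensities, then stack the columns so the x-axis represents time."""
    cap = cv2.VideoCapture(video_path)
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        region = gray[roi]                    # pixels covering the fluid channel
        columns.append(region.mean(axis=1))   # average across the channel width
    cap.release()
    return np.stack(columns, axis=1)          # shape: (space, time)

def chronoprint_distance(a, b):
    """Mean absolute pixel difference after resizing to a common shape."""
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    a = cv2.resize(a.astype(np.float32), (w, h))
    b = cv2.resize(b.astype(np.float32), (w, h))
    return float(np.abs(a - b).mean())

# Two runs of the same pure substance should give a small distance;
# a pure-versus-adulterated pair should give a noticeably larger one.
```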

Next, they repeated the experiment with samples of olive oil that had been diluted with other oils and cough syrup diluted with water. The adulterated samples produced chronoprints that were different from those of the pure samples. The difference was so big, so obvious, and so consistent that the researchers concluded chronoprints and image analysis algorithms can reliably detect some types of food and drug fraud.

“The significant visual differences between the samples were both unexpected and exciting, and with them being consistent we knew this could be a useful way to identify a wide range of samples,” McKenzie said.

Grover said their technique creates a powerful new connection between chemistry and computer science.

“By basically converting a chemical sample to an image, we can take advantage of all the different image analysis algorithms that computer scientists have developed,” he said. “And as those algorithms get better, our ability to chemically identify a sample should get better, too.”

The researchers used liquids in their experiments but note the method could also be used on solid materials dissolved in water, and other types of disturbance, such as heat or a centrifuge, could be used for substances that don’t react well to freezing. The technique is easy to learn, making highly trained experts unnecessary. Chronoprinting requires hobbyist-grade equipment and software downloadable for free from Grover’s lab website, putting it well within reach of government agencies and labs with limited resources.

Video on how this chronoprinting works: https://youtu.be/qbyE68qD2Zo


Brain-Training App Has Research-Backed Claims to Improve User Concentration

The app could prove quite useful, though it should be noted that research is still working out what makes for a good brain workout.

A new ‘brain training’ game designed by researchers at the University of Cambridge improves users’ concentration, according to new research published today. The scientists behind the venture say this could provide a welcome antidote to the daily distractions that we face in a busy world.

In their book, The Distracted Mind: Ancient Brains in a High-Tech World, Adam Gazzaley and Larry D. Rosen point out that with the emergence of new technologies requiring rapid responses to emails and texts, and demanding work on multiple projects simultaneously, young people, including students, are having more trouble sustaining attention and frequently become distracted. This difficulty in focussing and concentrating is made worse by stress from a global environment that never sleeps, and by frequent travel leading to jet lag and poor-quality sleep.

“We’ve all experienced coming home from work feeling that we’ve been busy all day, but unsure what we actually did,” says Professor Barbara Sahakian from the Department of Psychiatry. “Most of us spend our time answering emails, looking at text messages, searching social media, trying to multitask. But instead of getting a lot done, we sometimes struggle to complete even a single task and fail to achieve our goal for the day. Then we go home, and even there we find it difficult to ‘switch off’ and read a book or watch TV without picking up our smartphones. For complex tasks we need to get in the ‘flow’ and stay focused.”

In recent years, as smartphones have become ubiquitous, there has been a growth in the number of so-called ‘brain training’ apps that claim to improve cognitive skills such as memory, numerical skills and concentration.

Now, a team from the Behavioural and Clinical Neuroscience Institute at the University of Cambridge has developed and tested ‘Decoder’, a new game aimed at helping users improve their attention and concentration. The game is based on the team’s own research and has been evaluated scientifically.

In a study published today in the journal Frontiers in Behavioural Neuroscience, Professor Sahakian and colleague Dr George Savulich demonstrate that playing Decoder on an iPad for eight hours over one month improves attention and concentration. This form of attention activates a fronto-parietal network in the brain.

In their study, the researchers divided 75 healthy young adults into three groups: one group played Decoder, one control group played Bingo for the same amount of time, and a second control group received no game. Participants in the first two groups were invited to attend eight one-hour sessions over the course of a month during which they played either Decoder or Bingo under supervision.

All 75 participants were tested at the start of the trial and then after four weeks using the CANTAB Rapid Visual Information Processing test (RVP). CANTAB RVP has been demonstrated in previously published studies to be a highly sensitive test of attention/concentration.

During the test, participants are asked to detect sequences of digits (e.g. 2-4-6, 3-5-7, 4-6-8). A white box appears in the middle of the screen, in which digits from 2 to 9 appear in a pseudo-random order at a rate of 100 digits per minute. Participants are instructed to press a button every time they detect a target sequence. The test lasts approximately five minutes.
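
As an illustration of the task logic just described, the sketch below streams pseudo-random digits and finds the points at which a target sequence has just been completed. The timing and scoring details are simplified assumptions, not the CANTAB implementation.

```python
# Minimal sketch of the RVP task logic; timing and scoring are simplified
# assumptions, not the CANTAB implementation.
import random

TARGETS = {(2, 4, 6), (3, 5, 7), (4, 6, 8)}

def digit_stream(n=500, seed=0):
    """Pseudo-random digits 2-9; at 100 digits per minute, 500 digits
    corresponds to the roughly five-minute test."""
    rng = random.Random(seed)
    return [rng.randint(2, 9) for _ in range(n)]

def target_positions(stream):
    """Indices at which one of the target sequences has just been completed,
    i.e. the moments a participant should press the button."""
    return [i for i in range(2, len(stream))
            if tuple(stream[i - 2:i + 1]) in TARGETS]

stream = digit_stream()
hits = target_positions(stream)
print(f"{len(hits)} target sequences in {len(stream)} digits")
```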

Results from the study showed a significant difference in attention as measured by the RVP: those who played Decoder performed better than those who played Bingo and those who played no game. The difference in performance was significant and meaningful, comparable to the effects seen with stimulants such as methylphenidate, or with nicotine. The former, also known as Ritalin, is a common treatment for Attention Deficit Hyperactivity Disorder (ADHD).

To ensure that Decoder improved focussed attention and concentration without impairing the ability to shift attention, the researchers also tested participants on the Trail Making Test, a commonly used neuropsychological test of attentional shifting in which participants must first attend to numbers, then shift their attention to letters, then shift back to numbers. Participants who trained on Decoder improved on this test as well. Additionally, participants enjoyed playing the game, and motivation remained high throughout the eight hours of gameplay.

Professor Sahakian commented: “Many people tell me that they have trouble focussing their attention. Decoder should help them improve their ability to do this. In addition to healthy people, we hope that the game will be beneficial for patients who have impairments in attention, including those with ADHD or traumatic brain injury. We plan to start a study with traumatic brain injury patients this year.”

Dr Savulich added: “Many brain training apps on the market are not supported by rigorous scientific evidence. Our evidence-based game is developed interactively and the games developer, Tom Piercy, ensures that it is engaging and fun to play. The level of difficulty is matched to the individual player and participants enjoy the challenge of the cognitive training.”

The game has now been licensed through Cambridge Enterprise, the technology transfer arm of the University of Cambridge, to app developer Peak, who specialise in evidence-based ‘brain training’ apps. This will allow Decoder to become accessible to the public. Peak has developed a version for Apple devices and is releasing the game today as part of the Peak Brain Training app. Peak Brain Training is available from the App Store for free and Decoder will be available to both free and pro users as part of their daily workout. The company plans to make a version available for Android devices later this year.

“Peak’s version of Decoder is even more challenging than our original test game, so it will allow players to continue to gain even larger benefits in performance over time,” says Professor Sahakian. “By licensing our game, we hope it can reach a wide audience who are able to benefit by improving their attention.”

Study: Social Media Use Can Increase Depression and Loneliness

The study essentially found that using social media less than one typically would results in significant decreases in loneliness and depression, with the effect being more pronounced for people who were most depressed at the start of the study.

Social media does have its share of positives — it allows people otherwise separated by significant physical distance to keep in touch and interact, it provides platforms for sharing ideas and stories, and it provides ways for the disadvantaged in society to gain access to opportunities. There are clear downsides to social media services though:

The link between the two has been talked about for years, but a causal connection had never been proven. For the first time, University of Pennsylvania research based on experimental data connects Facebook, Snapchat, and Instagram use to decreased well-being. Psychologist Melissa G. Hunt published her findings in the December Journal of Social and Clinical Psychology.

Few prior studies have attempted to show that social-media use harms users’ well-being, and those that have either put participants in unrealistic situations or were limited in scope: asking them to forgo Facebook entirely and relying on self-report data, for example, or conducting the work in a lab in as little as an hour.

“We set out to do a much more comprehensive, rigorous study that was also more ecologically valid,” says Hunt, associate director of clinical training in Penn’s Psychology Department.

To that end, the research team, which included recent alumni Rachel Marx and Courtney Lipson and Penn senior Jordyn Young, designed their experiment to include the three platforms most popular with a cohort of undergraduates, and then collected objective usage data automatically tracked by iPhones for active apps, not those running in the background.

Each of the 143 participants completed a survey to determine mood and well-being at the study’s start, and shared screenshots of their iPhone battery screens to provide a week’s worth of baseline social-media usage data. Participants were then randomly assigned either to a control group, which maintained typical social-media behavior, or to an experimental group that limited time on Facebook, Snapchat, and Instagram to 10 minutes per platform per day.

For the next three weeks, participants shared iPhone battery screenshots to give the researchers weekly tallies for each individual. With those data in hand, Hunt then looked at seven outcome measures including fear of missing out, anxiety, depression, and loneliness.
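
As a rough illustration of the kind of tally involved, the sketch below aggregates weekly per-app minutes and compares an outcome measure across conditions. The file names, column names, and outcome variable are invented for illustration, not the study’s actual data format.

```python
# Hypothetical sketch of weekly usage tallies; file and column names are
# invented for illustration, not the study's actual data format.
import pandas as pd

usage = pd.read_csv("usage_log.csv")      # columns: participant, week, app, minutes
weekly = (usage.groupby(["participant", "week"])["minutes"]
               .sum()
               .reset_index(name="total_minutes"))

outcomes = pd.read_csv("outcomes.csv")    # columns: participant, group, loneliness_change
merged = weekly.merge(outcomes, on="participant")

# Average change in loneliness by condition (limited vs. usual use)
print(merged.groupby("group")["loneliness_change"].mean())
```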

“Here’s the bottom line,” she says. “Using less social media than you normally would leads to significant decreases in both depression and loneliness. These effects are particularly pronounced for folks who were more depressed when they came into the study.”

Hunt stresses that the findings do not suggest that 18- to 22-year-olds should stop using social media altogether. In fact, she built the study as she did to stay away from what she considers an unrealistic goal. The work does, however, speak to the idea that limiting screen time on these apps couldn’t hurt.

“It is a little ironic that reducing your use of social media actually makes you feel less lonely,” she says. But when she digs a little deeper, the findings make sense. “Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.”

Because this particular work only looked at Facebook, Instagram, and Snapchat, it’s not clear whether it applies broadly to other social-media platforms. Hunt also hesitates to say that these findings would replicate for other age groups or in different settings. Those are questions she still hopes to answer, including in an upcoming study about the use of dating apps by college students.

Despite those caveats, and although the study didn’t determine the optimal time users should spend on these platforms or the best way to use them, Hunt says the findings do offer two related conclusions it couldn’t hurt any social-media user to follow.

For one, reduce opportunities for social comparison, she says. “When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.” Secondly, she adds, because these tools are here to stay, it’s incumbent on society to figure out how to use them in a way that limits damaging effects. “In general, I would say, put your phone down and be with the people in your life.”

Making Algorithms Less Biased and Reducing Inequalities of Power

Algorithms increasingly affect society, from employment (Amazon’s algorithms discriminated against women) to the criminal justice system (where algorithms often discriminate against African-Americans), and making them less biased would reduce inequalities in power. Relatedly, research suggests that AI can independently develop its own prejudices.

With machine learning systems now being used to determine everything from stock prices to medical diagnoses, it’s never been more important to look at how they arrive at decisions.

A new approach out of MIT demonstrates that the main culprit is not just the algorithms themselves, but how the data itself is collected.

“Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,” says lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson. “But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.”

Looking at specific examples, researchers were able to both identify potential causes for differences in accuracies and quantify each factor’s individual impact on the data. They then showed how changing the way they collected data could reduce each type of bias while still maintaining the same level of predictive accuracy.

“We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” says Sontag.

Chen says that one of the biggest misconceptions is that more data is always better. Getting more participants doesn’t necessarily help, since drawing from the exact same population often leads to the same subgroups being under-represented. Even the popular image database ImageNet, with its many millions of images, has been shown to be biased towards the Northern Hemisphere.

According to Sontag, often the key thing is to go out and get more data from those under-represented groups. For example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income. They found that if they had increased the dataset by a factor of 10, those mistakes would happen 40 percent less often.
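
A minimal sketch of this kind of subgroup diagnostic, on synthetic data, might look like the following. The data, model, and group labels are toy assumptions, not the team’s actual income-prediction experiment.

```python
# Toy subgroup-error diagnostic; the data, model, and groups are synthetic
# assumptions, not the MIT team's actual experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
group = rng.choice(2, size=n, p=[0.8, 0.2])   # group 1 is under-represented
x = rng.normal(size=(n, 5)) + 0.3 * group[:, None]
y = ((x.sum(axis=1) + rng.normal(size=n)) > 1.5).astype(int)

x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, group, random_state=0)
pred = LogisticRegression().fit(x_tr, y_tr).predict(x_te)

# A persistent error-rate gap points at the data: it suggests collecting
# more samples from the worse-served group rather than only changing models.
for g in (0, 1):
    mask = g_te == g
    err = (pred[mask] != y_te[mask]).mean()
    print(f"group {g}: error rate {err:.3f} on {mask.sum()} test samples")
```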

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. Existing approaches for reducing discrimination would basically just make the non-Asian predictions less accurate, which is problematic when you’re talking about settings like healthcare that can quite literally be life-or-death.

Chen says that their approach allows them to look at a dataset and determine how many more participants from different populations are needed to improve accuracy for the group with lower accuracy while still preserving accuracy for the group with higher accuracy.

Improving and Extending Phone Battery Life

It is a regular complaint among smartphone users that their batteries fade too quickly. Given how integral battery life is, how expensive newer phones have become, and how serious an uncharged phone can be in an emergency, it is worth briefly covering how to get more usage out of phone batteries.

Phones use lithium-ion batteries, which gradually lose capacity as the number of charge and discharge cycles grows. There are ways to lessen this degradation, but it will occur over time regardless.

Battery life on any given day depends on how you’re using the phone that day as well as how you’ve used it previously, so there’s value in adopting better charging habits now to retain more battery capacity in the future.

First of all, keeping a phone plugged in once it reaches full charge damages the battery in the long run: it holds the battery in a high-tension state that harms its internal chemistry. When possible, it’s also better to charge the phone in regular top-ups rather than all the way to 100 percent, as the high-voltage state puts stress on the battery.

Most battery degradation occurs during deep cycles, in which the battery is charged up fully and then discharged heavily. It’s therefore better, when possible, to limit how far the battery discharges so that it doesn’t go through deep discharge cycles.

Additionally, the fast-charge option common on phones today puts extra stress on the battery; using wifi is less power-intensive than using 4G data; and reducing screen brightness, avoiding excessive heat, and limiting video use are all ways to extend battery life in a given cycle.

There will eventually be much stronger batteries, just as there will eventually be protections for batteries against water. (Something called F-POSS, which repels water and oil thanks to its low surface energy, is already in development.) Until then, though, users will probably want to handle their somewhat fragile phone batteries with care.

Advanced Automation in the Future

Over the last several decades in the U.S., productivity gains have been concentrated in the upper echelon of the income distribution. The general population hasn’t really received them.

[Figure: graph of U.S. productivity gains]

Productivity means the average output per hour in the economy. This has increased due to technological advances such as faster computer processing power and workers becoming more efficient at their jobs.

The story of robots taking all the jobs appears in the mass media with some regularity. But if robots really were taking all the jobs, it would show up in the data: massive automation implies massive increases in productivity. As it is, productivity growth has been quite low. Yearly productivity growth was higher in 2003 than it is today, and there has been a slowdown since about 2005. Based on the trend of the last dozen years, then, it is unlikely that we will see significant advances in productivity (automation) over the next several years.

Society should be structured so that in the next decades, productivity gains will be distributed to the general population instead of primarily to upper-middle class and wealthy people. In a significant way, this will depend on who owns the technology.

It’s crucial that real care be taken over the rights awarded to those who own the most valuable technology. This may frankly determine whether that technology is a curse or a blessing for humanity.

In one example, say that the groundbreaking designs for the most highly advanced robotics are developed by a major corporation, which then patents the designs. The patent is valuable since the robotics would be far more efficient than anything else on the market, and it would allow the corporation to charge much higher prices than would otherwise be possible. This would be good for the minority of people who own the company and are invested in it, but it would almost certainly be harmful to the general public.

The case of prescription drugs shows what happens when legal enforcement via patents goes wrong. The United States spent $450 billion on prescription drugs in 2017, an amount that would have been about a fifth as large (representing thousands of dollars per U.S. household in savings) were there no drug patents and a different system of drug research incentives. The consequence of this disparity is that many people suffer from health ailments due to unnecessarily expensive medications.
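
The back-of-the-envelope arithmetic behind those figures, with the household count as an assumption:

```python
# Rough arithmetic for the drug-spending claim; household count is an assumption.
spending = 450e9              # 2017 U.S. prescription drug spending
patent_free = spending / 5    # "about a fifth as much" without patents
savings = spending - patent_free
households = 128e6            # rough number of U.S. households (assumption)
print(f"${savings / 1e9:.0f}B saved, ~${savings / households:,.0f} per household")
# -> $360B saved, roughly $2,800 per household per year
```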

A major corporation holding valuable robotics patents could make distribution of the robotics (which could very efficiently perform a wide range of tasks) much more expensive than necessary, much as in the prescription drug example. If the robotics are too expensive, there will be fewer of them doing efficient labor, such as assembling household appliances, and that would manifest as a cost to many people.

So instead of the advanced robotics (probably otherwise cheap, since the software and materials needed for them would be low-cost) being widely and inexpensively distributed to automate labor as efficiently as possible, their use could be expensively restricted. The control granted by the patents raises another problem: the potentially unaccountable corporation could use the robotics for mostly nefarious ends. Clearly, there need to be public-interest solutions to this sort of problem, such as avoiding regressive governmental interventions, considering shared public ownership that lets many people receive dividends on the value the technology generates, and implementing sensible regulatory measures.

There are also standards that can be set into law and enforced. A basic story: if, after advances in automation reduce labor requirements across the workforce, the length of the average work year decreases by 20 percent, about 25 percent more people can be employed. The arithmetic may not always be this straightforward, but it’s a useful first estimate.
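
The arithmetic behind that estimate, spelled out: holding total hours of work fixed, cutting hours per worker by a fifth requires a quarter more workers.

```python
# Work-sharing arithmetic: fixed total hours, shorter average work year.
hours_cut = 0.20                        # work year shortened by 20 percent
extra_workers = 1 / (1 - hours_cut) - 1
print(f"{extra_workers:.0%} more people employed")   # -> 25%
```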

Less time spent working while employing more people is clearly a good standard for many reasons, particularly in the U.S. where leisure rates among most are low compared to other wealthy countries. More people being employed may also mean tighter labor markets that allow for workers to receive higher real wage gains.

If there is higher output due to technology, that value will go somewhere in the form of more money. Over the last decades we have seen this concentrated at the top, but it is possible to have workers both work shorter hours and have similar or even higher pay levels.

Lacking Net Neutrality Presents Public Safety Risks

It’s horrible that ISPs slowed speeds for emergency responders in the wake of massive wildfires. The issue of net neutrality is really quite simple at its core: it’s about whether ISPs will have too much control over user access to the Internet. The large ISPs would prefer as much control as possible to increase their profits, even if that comes at the expense of public safety.

An ongoing study first reported by Bloomberg reveals the extent to which major American telecom companies are throttling video content on apps such as YouTube and Netflix on mobile phones in the wake of the Republican-controlled Federal Communications Commission (FCC) repealing national net neutrality protections last December.

Researchers from Northeastern University and the University of Massachusetts, Amherst used a smartphone app called Wehe, which has been downloaded by about 100,000 users, to track when wireless carriers engage in data “differentiation,” or when companies alter download speeds depending on the type of content, which violates a key tenet of the repealed rules.
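
In the spirit of that approach, a simplified sketch of differentiation detection might compare throughput for traffic the carrier can identify against the same transfer with its payload obscured. The function, threshold, and sample numbers are illustrative assumptions, not Wehe’s actual code.

```python
# Simplified differentiation check; data and logic are illustrative
# assumptions, not Wehe's actual implementation.
from statistics import mean

def differentiation_gap(identified_kbps, obscured_kbps):
    """Relative throughput loss for traffic the carrier can classify,
    versus the same bytes sent with payloads it cannot classify."""
    return (mean(obscured_kbps) - mean(identified_kbps)) / mean(obscured_kbps)

video_like = [1450, 1500, 1480, 1390]   # transfer that looks like streaming video
obscured = [5200, 5100, 5350, 5240]     # same transfer, payload obscured

gap = differentiation_gap(video_like, obscured)
print(f"throughput gap: {gap:.0%}")     # a large, consistent gap suggests throttling
```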

Between January and May of this year, Wehe detected differentiation by Verizon 11,100 times; AT&T 8,398 times; T-Mobile 3,900 times; and Sprint 339 times. David Choffnes, one of the study’s authors and the app’s developer, told Bloomberg that YouTube was the top target, but carriers also slowed down speeds for Netflix, Amazon Prime Video, and the NBC Sports app.

[…]

Jeremy Gillula, tech policy director at Electronic Frontier Foundation, pointed to Verizon slowing down data speeds as Santa Clara County emergency responders battled the largest fire in California’s history. Verizon claimed it was a “customer-support mistake,” but county counsel James Williams said it proves that ISPs “will act in their economic interests, even at the expense of public safety,” and “that is exactly what the Trump administration’s repeal of net neutrality allows and encourages.”

That example, Gillula told Bloomberg, demonstrates “that ISPs are happy to use words like ‘unlimited’ and ‘no throttling’ in their public statements, but then give themselves the right to throttle certain traffic by burying some esoteric language in the fine print” of service contracts. “As a result, it’s especially important that consumers have tools like this to measure whether or not their ISP is throttling certain services.”