AI System Successfully Predicts Alzheimer’s Years in Advance

This is important research into Alzheimer’s disease, since it is one of those diseases where treatment is more effective the earlier it is caught.

Artificial intelligence (AI) technology improves the ability of brain imaging to predict Alzheimer’s disease, according to a study published in the journal Radiology.

Timely diagnosis of Alzheimer’s disease is extremely important, as treatments and interventions are more effective early in the course of the disease. However, early diagnosis has proven to be challenging. Research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain, but these changes can be difficult to recognize.

“Differences in the pattern of glucose uptake in the brain are very subtle and diffuse,” said study co-author Jae Ho Sohn, M.D., from the Radiology & Biomedical Imaging Department at the University of California, San Francisco (UCSF). “People are good at finding specific biomarkers of disease, but metabolic changes represent a more global and subtle process.”

The study’s senior author, Benjamin Franc, M.D., from UCSF, approached Dr. Sohn and University of California, Berkeley, undergraduate student Yiming Ding through the Big Data in Radiology (BDRAD) research group, a multidisciplinary team of physicians and engineers focusing on radiological data science. Dr. Franc was interested in applying deep learning, a type of AI in which machines learn by example much like humans do, to find changes in brain metabolism predictive of Alzheimer’s disease.

The researchers trained the deep learning algorithm on a special imaging technology known as 18-F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood. PET scans can then measure the uptake of FDG in brain cells, an indicator of metabolic activity.

The researchers had access to data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multi-site study focused on clinical trials to improve prevention and treatment of this disease. The ADNI dataset included more than 2,100 FDG-PET brain images from 1,002 patients. Researchers trained the deep learning algorithm on 90 percent of the dataset and then tested it on the remaining 10 percent of the dataset. Through deep learning, the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.
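The 90/10 train/test division described above is a standard step in building a model like this. The sketch below is not the study’s actual code; it is a minimal, hypothetical illustration of such a split using the dataset size reported above (2,100 images) as placeholder data.

```python
# A minimal sketch (not the study's code) of a 90/10 train/test split.
import random

def split_dataset(scans, train_fraction=0.9, seed=0):
    """Shuffle the scans reproducibly, then split into train and test sets."""
    shuffled = scans[:]                      # copy so the input is untouched
    random.Random(seed).shuffle(shuffled)    # seeded shuffle for reproducibility
    cutoff = int(len(shuffled) * train_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]

# With 2,100 images, a 90/10 split yields 1,890 training and 210 test scans.
train, test = split_dataset(list(range(2100)))
print(len(train), len(test))  # 1890 210
```

Shuffling before splitting matters: without it, a split could accidentally group scans by site or date, biasing the evaluation.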

Finally, the researchers tested the algorithm on an independent set of 40 imaging exams from 40 patients that it had never studied. The algorithm achieved 100 percent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
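Sensitivity here means the fraction of patients who truly developed the disease that the algorithm flagged; 100 percent sensitivity means no case was missed. A minimal sketch of the metric, with hypothetical labels rather than the study’s data:

```python
# How sensitivity (recall) is computed; the labels below are hypothetical.
def sensitivity(actual, predicted):
    """Fraction of actual positives (1s) that were predicted positive."""
    true_pos = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    false_neg = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return true_pos / (true_pos + false_neg)

actual    = [1, 1, 1, 0, 0, 1]   # 1 = patient later diagnosed with Alzheimer's
predicted = [1, 1, 1, 1, 0, 1]   # model flags every true case (plus one false alarm)
print(sensitivity(actual, predicted))  # 1.0
```

Note that the hypothetical example above includes a false positive, which sensitivity alone does not penalize; that is one reason a larger validation study, as Dr. Sohn suggests, matters.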

“We were very pleased with the algorithm’s performance,” Dr. Sohn said. “It was able to predict every single case that advanced to Alzheimer’s disease.”

Although he cautioned that their independent test set was small and needs further validation with a larger multi-institutional prospective study, Dr. Sohn said that the algorithm could be a useful tool to complement the work of radiologists — especially in conjunction with other biochemical and imaging tests — in providing an opportunity for early therapeutic intervention.

“If we diagnose Alzheimer’s disease when all the symptoms have manifested, the brain volume loss is so significant that it’s too late to intervene,” he said. “If we can detect it earlier, that’s an opportunity for investigators to potentially find better ways to slow down or even halt the disease process.”

Climate Change Worsens Natural Disasters

Hurricane Florence has been receiving massive media coverage for the immense damage it’s doing. Hundreds of thousands of people in North Carolina are now without electricity, and the flooding is doing major harm, among other things threatening nuclear reactors.

The news media almost never mention that climate change has made natural disasters such as hurricanes worse. Warmer air holds more water vapor, and more water vapor means worse superstorms. In 2017, U.S. economic costs from natural disasters hit a record, in significant part due to hurricanes like Florence.

Amazingly, it is now 2018 and there is still not much discussion of ways that human technology could reduce the strength of superstorms. Hurricanes require a sea surface temperature of at least 26.5 degrees Celsius to form, and there is some research showing that releasing compressed air bubbles (via perforated pipes placed over a hundred meters down) brings colder water from deeper in the ocean up to the surface. The cold water would cool the warmer surface water, possibly preventing hurricanes by removing their supply of energy.

The United States has given enormous subsidies to fossil fuel companies that operate oil rigs on the ocean, contributing to the greenhouse effect that leads to warming and worse storms. It doesn’t seem unreasonable to use materials from those rigs to create platforms that use the perforated pipes to cool the ocean water and prevent (or at least ameliorate) hurricanes. Nor does it seem unreasonable that, in response to data predicting where hurricanes are about to form, such platforms could be quickly deployed or transported to other locations.

But the absence of a discussion like this reveals what kind of mass media (and therefore significantly communicative) structure is currently in place: one that doesn’t discuss a key factor making the problem much worse, and one that barely mentions potentially viable technological solutions in the 21st century.

Climate change (yes, it’s real and at least largely human-caused) will keep making these sorts of disasters much worse if it continues unabated. In 20 years, Hurricane Florence may seem mild compared to the average hurricanes of 2038, and that is clearly a stormy future that needs to be prevented.

Using Work Sharing to Improve the Economy and Worker Happiness

This is an important policy idea: reducing average necessary work hours (ideally with at least similar wage levels, made possible by the increased value from productivity growth). It will keep becoming more important as technology continues to advance.

The United States is very much an outlier among wealthy countries in the relatively weak rights that are guaranteed to workers on the job. This is true in a variety of areas. For example, the United States is the only wealthy country in which private sector workers can be dismissed at will, but it shows up most clearly in hours of work.

In other wealthy countries, there has been a consistent downward trend in average annual hours of work over the last four decades. By contrast, in the United States, there has been relatively little change. While people in other wealthy countries can count on paid sick days, paid family leave, and four to six weeks of paid vacation every year, these benefits are only available to better-paid workers in the United States. Even for these workers, the benefits are often less than the average in Western European countries.

Part of the benefit of work sharing is that it can allow workers and employers to gain experience with a more flexible work week or work year. It is possible that this experience can lead workers to place a higher value on leisure or non-work activities and therefore increase their support for policies that allow for reduced work hours.

Work Hours in 1970: The United States Was Not Always an Outlier

When the experience of European countries is raised in the context of proposals for expanding paid time off in the United States, it is common for opponents to dismiss this evidence by pointing to differences in national character. Europeans may value time off with their families or taking vacations, but we are told that Americans place a higher value on work and income.

While debates on national character probably do not provide a useful basis for policy, it is worth noting that the United States was not always an outlier in annual hours worked. If we go back to the 1970s, the United States was near the OECD average in annual hours worked. By contrast, it ranked near the top in 2016.

In 1970, workers in the United States put in on average 3 to 5 percent more hours than workers in Denmark and Finland, according to OECD data. By 2016, this difference had grown to more than 25 percent. Workers in France and the Netherlands now have considerably shorter average work years than workers in the United States. Even workers in Japan now work about 5 percent less on average than workers in the United States.

It is also important to consider efforts to reduce hours as being a necessary aspect of making the workplace friendlier to women. It continues to be the case that women have a grossly disproportionate share of the responsibility for caring for children and other family members.

In this respect, it is worth noting that the United States went from ranking near the top in women’s labor force participation in 1980 to being below the OECD average in 2018. While other countries have made workplaces more family friendly, this has been much less true of the United States.

Shortening Work Hours and Full Employment

There has been a largely otherworldly public debate in recent years on the prospects that robots and artificial intelligence would lead to mass unemployment. This debate is otherworldly since it describes a world of rapidly rising productivity growth. In fact, productivity growth has been quite slow ever since 2005. The average annual rate of productivity growth over the last twelve years has been just over 1.0 percent. This compares to a rate of growth of close to 3.0 percent in the long Golden Age from 1947 to 1973 and again from 1995 to 2005.

So this means that we are having this major national debate about the mass displacement of workers due to technology at a time when the data clearly tell us that displacement is moving along very slowly.[2] It is also worth noting that all the official projections from agencies like the Congressional Budget Office and the Office of Management and Budget show the slowdown in productivity growth persisting for the indefinite future. This projection of continued slow productivity growth provides the basis for debates on issues like budget deficits and the finances of Social Security.

However, if we did actually begin to see an uptick in the rate of productivity growth, and robots did begin to displace large numbers of workers, then an obvious solution would be to adopt policies aimed at shortening the average duration of the work year. The basic arithmetic is straightforward: if we reduce average work hours by 20 percent, then we will need 25 percent more workers to get the same amount of labor. While in practice the relationship will never be as simple as the straight arithmetic, if we do get a reduction in average work time, then we will need more workers.
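The arithmetic in the paragraph above can be checked directly: if each worker supplies only 80 percent as much labor, headcount must rise by a factor of 1/0.8 to hold total labor constant. A simple illustration:

```python
# Checking the text's arithmetic: a 20 percent cut in average hours means
# each worker supplies 80 percent as much labor, so headcount must rise
# by 1/0.8 - 1 = 25 percent to keep total labor hours constant.
hours_cut = 0.20
workers_needed = 1 / (1 - hours_cut)       # 1.25x the original workforce
print(f"{workers_needed - 1:.0%} more workers")  # 25% more workers
```

The same formula shows why the effect grows nonlinearly: a 50 percent cut in hours would require 100 percent more workers, not 50 percent more.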

As noted above, reducing work hours has been an important way in which workers in Western Europe have taken the gains from productivity growth over the last four decades. This was also true in previous decades in the United States, as the standard workweek was shortened to forty hours with the Fair Labor Standards Act in 1938. In many industries, it had been over sixty hours at the turn of the twentieth century.

If the United States can resume a path of shortening work hours and get its standard work year back in line with other wealthy countries, it should be able to absorb even very rapid gains in productivity growth without any concerns about mass unemployment. While job-killing robots may exist primarily in the heads of the people who write about the economy, if they do show up in the world, a policy of aggressive reductions in work hours should ensure they don’t lead to widespread unemployment.

Removing CO2 from the Atmosphere — Most Efficient Process Yet Found

With climate change’s dangers looming, it would be sensible for more people to try to lower the cost of the direct air capture demonstrated here. Estimating that humans put 50 billion tons of CO2 into the atmosphere every year, and that removing one ton of CO2 costs maybe $100, it would cost approximately $5 trillion (5–6 percent of world GDP) a year to offset the new CO2 being added. It isn’t clear to me how much value could be generated from the tons of CO2 captured, but I am aware that good catalysts are available for recycling CO2 into valuable chemicals.
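The back-of-the-envelope estimate above is easy to verify; the figures below are the rough assumptions stated in the text, not measured values:

```python
# Rough estimate from the text: ~50 billion tons of CO2 emitted per year,
# at an assumed removal cost of about $100 per ton.
tons_per_year = 50e9     # annual global CO2 emissions, in tons (rough)
cost_per_ton = 100       # assumed direct air capture cost, in dollars
total_cost = tons_per_year * cost_per_ton
print(f"${total_cost / 1e12:.0f} trillion per year")  # $5 trillion per year
```

Against a world GDP on the order of $85 trillion, that works out to roughly the 5–6 percent cited above, which is why the cost per ton is the variable that matters most.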

Even so, it’s troubling that governments around the world don’t join forces to reduce the costs of this direct air capture and contribute money towards using it more. My guess is that something similar to this technology is eventually going to have to be used much more in the future. I don’t think humanity is moving fast enough to ditch fossil fuels for clean energy, and the next ten years are going to be especially crucial for what happens with climate change. The problem with CO2 removal is going to keep revolving around its high cost, though — if the cost could be lowered significantly further, much of the warming this century could be prevented.

By removing emitted carbon dioxide from the atmosphere and turning it into fresh fuels, engineers at a Canadian firm have demonstrated a scalable and cost-effective way to make deep cuts in the carbon footprint of transportation with minimal disruption to existing vehicles. Their work appears June 7 in the journal Joule.

“The carbon dioxide generated via direct air capture can be combined with sequestration for carbon removal, or it can enable the production of carbon-neutral hydrocarbons, which is a way to take low-cost carbon-free power sources like solar or wind and channel them into fuels that can be used to decarbonize the transportation sector,” says lead author David Keith, founder and chief scientist of Carbon Engineering, a Canadian CO2-capture and clean fuels enterprise, and a professor of applied physics and public policy at Harvard University.

Direct air capture technology works almost exactly like it sounds. Giant fans draw ambient air into contact with an aqueous solution that picks out and traps carbon dioxide. Through heating and a handful of familiar chemical reactions, that same carbon dioxide is re-extracted and ready for further use — as a carbon source for making valuable chemicals like fuels, or for storage via a sequestration strategy of choice. It’s not just theory — Carbon Engineering’s facility in British Columbia is already achieving both CO2 capture and fuel generation.

The idea of direct air capture is hardly new, but the successful implementation of a scalable and cost-effective working pilot plant is. After conducting a full process analysis and crunching the numbers, Keith and his colleagues claim that realizing direct air capture on an impactful scale will cost roughly $94-$232 per ton of carbon dioxide captured, which is on the low end of estimates that have ranged up to $1,000 per ton in theoretical analyses.

Centuries of unchecked human carbon emissions also mean that atmospheric carbon dioxide is a virtually unlimited feedstock for transformation into new fuels. “We are not going to run out of air anytime soon,” adds Steve Oldham, CEO of Carbon Engineering. “We can keep collecting carbon dioxide with direct air capture, keep adding hydrogen generation and fuel synthesis, and keep reducing emissions through this AIR TO FUELS™ pathway.”

“After 100 person-years of practical engineering and cost analysis, we can confidently say that while air capture is not some magical cheap solution, it is a viable and buildable technology for producing carbon-neutral fuels in the immediate future and for removing carbon in the long run,” says Keith.

Amazon Grants Authoritarian Facial Recognition Technology to Police

Another reminder that Amazon doesn’t care about its harmful effects on communities. Its CEO is the world’s richest person, yet its workers often labor in horrible conditions for pay so low that some must request food stamps to survive. As for the facial recognition technology, it increases repression by allowing police to step up their targeting of vulnerable minority groups.

After internal emails (pdf) published by the ACLU on Tuesday revealed that Amazon has been aggressively selling its facial recognition product to law enforcement agencies throughout the U.S., privacy advocates and civil libertarians raised grave concerns that the retailer is effectively handing out a “user manual for authoritarian surveillance” that could be deployed by governments to track protesters, spy on immigrants and minorities, and crush dissent.

“We know that putting this technology into the hands of already brutal and unaccountable law enforcement agencies places both democracy and dissidence at great risk,” Malkia Cyril, executive director of the Center for Media Justice, said in a statement in response to the ACLU’s findings. “Amazon should never be in the business of aiding and abetting racial discrimination and xenophobia—but that’s exactly what Amazon CEO Jeff Bezos is doing.”

First unveiled in 2016, “Rekognition” was explicitly marketed by Amazon as a tool for “tracking people,” and it has already been put to use by law enforcement agencies in Florida and Oregon.

While Amazon suggests in its marketing materials that Rekognition can be used to track down “people of interest” in criminal cases, ACLU and dozens of pro-privacy groups argued in a letter (pdf) to Amazon CEO Jeff Bezos on Tuesday that the product is “primed for abuse in the hands of governments” and poses a “grave threat” to marginalized groups and dissidents.

Highlighting “the possibility that those labeled suspicious by governments—such as undocumented immigrants or black activists—will be targeted for Rekognition surveillance,” the coalition of advocacy groups urged Amazon to “act swiftly to stand up for civil rights and civil liberties, including those of its own customers, and take Rekognition off the table for governments.”

“People should be free to walk down the street without being watched by the government,” the groups concluded. “Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it.”

The ACLU investigation found that Amazon has not been content to simply market and sell Rekognition to law enforcement agencies—it is also offering “company resources to help government agencies deploy” the tool.

Google Employees Resigning Over Google’s Involvement in Supplying AI to the U.S. Military’s Drone Program

AI used in Project Maven is supposed to help decide when humans should be killed by U.S. military drones. But all software has flaws that can be exploited, and the people writing the code the AI uses will have their own biases, which may be horrifying in practice. It’s also simply wrong to further amplify the power (and advanced AI adds real power) of a program that has already led to the bombing of civilian weddings on numerous occasions.

About a dozen Google employees have resigned in protest of the tech giant’s involvement in an artificial intelligence (AI) collaboration with the U.S. military, in which Google is participating to develop new kinds of drone technology.

“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew,” one of the workers told Gizmodo. “I realized if I can’t recommend people join here, then why am I still here?”

The resignations follow Google’s failure to alter course despite approximately 4,000 of its employees signing a petition that urges Google to abandon its work with Project Maven, a Pentagon program focused on the targeting systems of the military’s armed drones. The company is reportedly contributing artificial intelligence technology to the program.

Creating Medicines With Fewer Side Effects Through a New Chemical Separation Process

Overall, medications today have way too many harmful side effects, and so this breakthrough technological process should be helpful in reducing them. It also has the potential to “produce better medical and agricultural products, including medicines, food ingredients, dietary supplements and pesticides.”

Chemical compounds are made up of molecules. The most important molecules in biology are chiral molecules. “Chiral,” from the Greek word for “hand,” describes molecules that look almost exactly alike and contain the same number of atoms but are mirror images of one another — meaning some are “left-handed” and others are “right-handed.” This different “handedness” is crucial and yields different biological effects.

The importance of chiral differences was made painfully clear by the drug thalidomide. Marketed to pregnant women in the 1950s and 1960s to ease morning sickness, thalidomide seemed to work well. However, thalidomide is a chiral drug: its “right” chiral molecule provides nausea relief, while the “left” molecule causes horrible deformities in babies. Since the drug company producing thalidomide did not separate out the right and left molecules, the drug had disastrous results for the children of women who took it.

Though a crucial step for drug safety, the separation of chiral molecules into their right- and left-handed components is an expensive process that demands a tailor-made approach for each type of molecule. Now, however, following a decade of collaborative research, Paltiel and Naaman have discovered a uniform, generic method that will enable pharmaceutical and chemical manufacturers to easily and cheaply separate right from left chiral molecules.

Their method relies on magnets. Chiral molecules interact with a magnetic substrate and line up according to the direction of their handedness — “left” molecules interact better with one pole of the magnet, and “right” molecules with the other one. This technology will allow chemical manufacturers to keep the “good” molecules and to discard the “bad” ones that cause harmful or unwanted side effects.

“Our finding has great practical importance,” shared Prof. Naaman. “It will usher in an era of better, safer drugs, and more environmentally-friendly pesticides.”

While popular drugs, such as Ritalin and Cipramil, are sold in their chirally-pure (i.e., separated) forms, many generic medications are not. Currently only 13% of chiral drugs are separated even though the FDA recommends that all chiral drugs be separated. Further, in the field of agrochemicals, chirally-pure pesticides and fertilizers require smaller doses and cause less environmental contamination than their unseparated counterparts.