Scientists’ Plasma Shot That Could Prevent COVID-19 Isn’t Being Considered by the Government

That the use of plasma (shown effective in many other cases) isn’t being considered is another inefficiency in the governmental response (in the U.S., at least) to the coronavirus pandemic.

It might be the next best thing to a coronavirus vaccine.

Scientists have devised a way to use the antibody-rich blood plasma of COVID-19 survivors for an upper-arm injection that they say could inoculate people against the virus for months.

Using technology that’s been proven effective in preventing other diseases such as hepatitis A, the injections would be administered to high-risk healthcare workers, nursing home patients, or even at public drive-through sites — potentially protecting millions of lives, the doctors and other experts say.

The two scientists who spearheaded the proposal — an 83-year-old shingles researcher and his counterpart, an HIV gene therapy expert — have garnered widespread support from leading blood and immunology specialists, including those at the center of the nation’s COVID-19 plasma research.

But the idea exists only on paper. Federal officials have twice rejected requests to discuss the proposal, and pharmaceutical companies — even acknowledging the likely efficacy of the plan — have declined to design or manufacture the shots, according to a Times investigation. The lack of interest in launching development of immunity shots comes amid heightened scrutiny of the federal government’s sluggish pandemic response.

There is little disagreement that the idea holds promise; the dispute is over the timing. Federal health officials and industry groups say the development of plasma-based therapies should focus on treating people who are already sick, not on preventing infections in those who are still healthy.

Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases at the National Institutes of Health, said an upper-arm injection that would function like a vaccine “is a very attractive concept.”

However, he said, scientists should first demonstrate that the coronavirus antibodies that are currently delivered to patients intravenously in hospital wards across the country actually work. “Once you show the efficacy, then the obvious next step is to convert it into an intramuscular” shot.

But scientists who question the delay argue that the immunity shots are easy to scale up and should enter clinical trials immediately. They say that until there’s a vaccine, the shots offer the only plausible method for preventing potentially millions of infections at a critical moment in the pandemic.

“Beyond being a lost opportunity, this is a real head-scratcher,” said Dr. Michael Joyner, a Mayo Clinic researcher who leads a program sponsored by the Food and Drug Administration to capitalize on coronavirus antibodies from COVID-19 survivors. “It seems obvious.”

The use of so-called convalescent plasma has already become widespread. More than 28,000 patients have already received the IV treatment, and preliminary data suggest that the method is safe. Researchers are also looking at whether the IV drip products would prevent new infections from taking root.

The antibodies in plasma can be concentrated and delivered to patients through a type of drug called immune globulin, or IG, which can be given through either an IV drip or a shot. IG shots have for decades been used to prevent an array of diseases; the IG shot that prevents hepatitis A was first licensed in 1944. They are available to treat patients who have recently been exposed to hepatitis B, tetanus, varicella and rabies.

[…]

The proposal for an injection approach to coronavirus prevention came from an immunization researcher who drew his inspiration from history.

Dr. Michael Oxman knew that, even during the 1918 flu pandemic, the blood of recovered patients appeared to help treat others. Since then, convalescent plasma has been used to fight measles and severe acute respiratory syndrome, or SARS, among other diseases.

Like other doctors, Oxman surmised that, for a limited time, the blood coursing through the veins of coronavirus survivors probably contains immune-rich antibodies that could prevent — or help treat — an infection.

[…]

Throughout May, researchers and doctors at Yale, Harvard, Johns Hopkins, Duke and four University of California schools sent a barrage of letters to dozens of lawmakers. They held virtual meetings with health policy directors on Capitol Hill, but say they have heard no follow-up to date.

Dr. Arturo Casadevall, the chair of the National COVID-19 Convalescent Plasma Project, said he spoke to FDA officials who told him they do not instruct companies on what to produce. Casadevall told The Times that the leaders of the national project were “very supportive of the need to develop” an IG shot rapidly and that he believed it would be “very helpful in stemming the epidemic.”

Joyner, of the Mayo Clinic, said there are probably 10 million to 20 million people in the U.S. carrying coronavirus antibodies — and the number keeps climbing. If just 2% of them were to donate a standard 800 milliliters of plasma on three separate occasions, their plasma alone could generate millions of IG shots for high-risk Americans.
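Joyner’s back-of-the-envelope math is easy to check. Here is a minimal sketch using his figures for carriers, donation volume, and number of donations, plus a hypothetical plasma-per-shot yield (the article gives no conversion factor, so that number is purely illustrative):

```python
# Back-of-the-envelope sketch of Joyner's plasma arithmetic.
carriers_low, carriers_high = 10_000_000, 20_000_000  # estimated Americans with antibodies
donation_rate = 0.02      # 2% of carriers donate
ml_per_donation = 800     # standard plasma donation (mL)
donations_each = 3        # three separate donations per donor
ml_per_shot = 250         # HYPOTHETICAL: plasma needed per IG shot; not from the article

for carriers in (carriers_low, carriers_high):
    donors = carriers * donation_rate
    plasma_liters = donors * ml_per_donation * donations_each / 1000
    shots = plasma_liters * 1000 / ml_per_shot
    print(f"{carriers:,} carriers -> {donors:,.0f} donors -> "
          f"{plasma_liters:,.0f} liters of plasma -> ~{shots / 1e6:.1f} million shots")
```

With those illustrative assumptions, even the low-end scenario yields roughly two million shots, consistent with the claim of millions of IG shots.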

“At a hot-spot meatpacking plant, or at a mobile unit in the parking lot outside a mall — trust me, you can get the plasma,” Joyner said. “This is not a biological problem nor a technology problem. It’s a back-of-the-envelope intelligence problem.”

The antibody injections, for now, do not appear to be a high priority for the government or the industry.

Grifols, a plasma products company, on April 28 — the same day that the U.S. topped 1 million confirmed coronavirus cases — made a major product announcement that would “expand its leadership in disease treatment with immunoglobulins.”

The product was a new vial for IG shots — to treat rabies.

Orwell’s 1984 — Too Real in the 2010s

My interpretation of Orwell’s 1984 is that the mere possibility of being watched by a powerful, corrupt state changes behavior in ways that have significant implications across society. Research has found that people change their behavior when they know they are being watched.

There’s no poking holes in the Party’s control, no loose thread for any opposition to pull. If there is a Resistance, it vanishes halfway through. The book is designed to make The Party and its machinery of oppression look entirely infallible. You accept, like the protagonist Winston Smith, that it can never be overthrown. This isn’t The Hunger Games. There is no cartoonish YA villain like President Snow for a defiant Katniss Everdeen to topple. Even Margaret Atwood, in The Handmaid’s Tale, destroyed Gilead in a far-future postscript.

But 1984? So far as we know, it’s boots on human faces all the way down.

How come? The Party doesn’t get its power from spying on its citizens, or turning them into snitches, or punishing sex crimes. All of those are presented as mere tools of the state. How did it come to wield that control in the first place?

Orwell, aka Eric Blair, a socialist freedom fighter and a repentant former colonial officer who had a lifelong fascination with language and politics, knew that no control could be total until you colonized people’s heads too. A state like Oceania could only exist with loud, constant, and obvious lies.

To be a totalitarian, he knew from his contemporary totalitarians, you had to seize control of truth itself. You had to redefine truth as “whatever we say it is.” You had to falsify memories and photos and rewrite documents. Your people could be aware that all this was going on, so long as they kept that awareness to themselves and carried on (which is what doublethink is all about).

The upshot is, Winston Smith is gaslit to hell and back. He spends the entire novel wondering exactly what the truth is. Is it even 1984? He isn’t sure. Does Big Brother actually physically exist somewhere in Oceania, or is he just a symbol? ¯\_(ツ)_/¯

Winston is what passes for well-educated in his world; he still remembers the name “Shakespeare.” He’s smart enough not to believe the obvious propaganda accepted by the vast majority, but it doesn’t matter. The novel is about him being worn down, metaphorically and physically, until he’s just too tired and jaded to hold back the tide of screaming nonsense.

Don’t call him Winston Smith. Call him Mr. 2019. Because it’s looking increasingly like we live in Oceania. That fictional state was basically the British Isles, North America, and South America. Now the leaders of the largest countries in each of those regions — Boris Johnson, Donald Trump, Jair Bolsonaro — are men who have learned to flood the zone with obvious lies, because their opponents simply don’t have the time or energy to deal with them all.

As we enter 2020, all three of them look increasingly, sickeningly, like they’re going to get away with it. They are protected by Party members who will endure any humiliation to trumpet loyalty to the Great Leader (big shout-out once again to Sen. Lindsey Graham) and by a media environment that actively enables political lies (thanks, Facebook).

All the Winston Smiths of our world can see what the score really is. It doesn’t seem to make any difference. But hey, at least we’re all finally aware of the most important line in 1984, which is now also its most quote-tweeted: “The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”

In the decades following its 1949 publication, the message of 1984 became corrupted. Popular culture reduced it to a single slogan — Big Brother is Watching You — and those with only a vague memory of studying the book in school thought the surveillance state was the main thing Orwell was warning against.

That was certainly where we were at in 2013, when Edward Snowden released his treasure trove of documents that proved the vast scale of NSA spying programs. “George Orwell warned us of the danger of this kind of information,” Snowden told UK TV viewers in his “alternate Christmas message” that year. “The types of collection in [1984] — microphones and video cameras, TVs that watch us — are nothing compared to what we have available today.”

Which was true, but also beside the point. Orwell doesn’t actually claim the surveillance system in Oceania is all that strong. It would have strained credulity to have a Party that watched all of its members all of the time. It sounded like a bad science fiction plot. (In China, where the growing state systems of facial recognition and social media post ranking make NSA programs look like amateur hour, it no longer does).

In 1984, the only time we definitively know a telescreen is watching Winston is when he’s doing morning exercise and a female instructor calls him out for not pushing hard enough. Here in the real future, people pay Peloton $2200 plus $40 a month for the same basic setup.

It isn’t that Big Brother is watching — that, too, is a Party lie. It’s that he may be watching, just as knowing there may be a speed camera around the next bend keeps your mph in line. Against that possibility, citizens can still rebel. For much of the book, Winston and Julia are able to escape all cameras, out in the post-atomic countryside. Yet avoiding surveillance doesn’t save them: what causes their capture is the fact that they fell for a lie (the “Brotherhood,” a fake Resistance operation run by the Inner Party member O’Brien).

We are invited to consider whether we too are falling for The Party’s lies. The book-within-a-book that explains the shape of Winston’s world turns out to be written by O’Brien, the master liar. The rocket bombs dropping on London are dropped by the Party. All the in-universe truth the reader has to go on is Winston’s word, and by the end — as he is tortured into genuinely seeing five fingers when O’Brien holds up four, then thinks he hears news of a final victory in the endless war — even that isn’t reliable.

By the end of this decade, even words like “Orwell” and “Orwellian” had become ambiguous. I realized this in 2017, when my wife, knowing my love of the book, bought me a cap that said “Make Orwell Fiction Again.” I loved it until I found it had been made in a state that voted for Trump, by a company with a line of libertarian merch. We saw the cap as a riposte to the MAGA mentality, but it was also possible to see it as a reinforcement: Make Orwell fiction again by helping Trump fight Deep State surveillance, man!

If there is hope for Oceania in the coming decade, it may come from uniting people under the banner of all that 1984 warns against — starting with the bare-faced lies that Orwell was most concerned about. The lies that social media gatekeepers have taken way too long to notice, if they notice them at all.

If we can’t agree on basic facts of science and history, we’re lost. But if we the people can do that, there’s no surveillance system or endless war or sexcrime we can’t dismantle. “Freedom is the freedom to say that two plus two makes four,” Winston wrote in his diary. “If that is granted, all else follows.”

By remaining skeptical about all we read, but still reading widely and clawing our way back to a world of truths that are as simple and as objective as math, we can prove that we finally learned Orwell’s lesson. And we can make 1984 merely a masterpiece of fictional worldbuilding again.

People Act Differently in Virtual Reality Than in Real Life

In our increasingly digital world, real life remains incredibly important for genuine human interactions.

Immersive virtual reality (VR) can be remarkably lifelike, but new UBC research has found a yawning gap between how people respond psychologically in VR and how they respond in real life.

“People expect VR experiences to mimic actual reality and thus induce similar forms of thought and behaviour,” said Alan Kingstone, a professor in UBC’s department of psychology and the study’s senior author. “This study shows that there’s a big separation between being in the real world, and being in a VR world.”

The study used virtual reality to examine factors that influence yawning, focusing specifically on contagious yawning. Contagious yawning is a well-documented phenomenon in which people — and some non-human animals — yawn reflexively when they detect a yawn nearby.

Research has shown that “social presence” deters contagious yawning. When people believe they are being watched, they yawn less, or at least resist the urge. This may be due to the stigma of yawning in social settings, or its perception in many cultures as a sign of boredom or rudeness.

The team from UBC, along with Andrew Gallup from State University of New York Polytechnic Institute, tried to bring about contagious yawning in a VR environment. They had test subjects wear an immersive headset and exposed them to videos of people yawning. In those conditions, the rate of contagious yawning was 38 per cent, which is in line with the typical real-life rate of 30-60 per cent.

However, when the researchers introduced social presence in the virtual environment, they were surprised to find it had little effect on subjects’ yawning. Subjects yawned at the same rate, even while being watched by a virtual human avatar or a virtual webcam. It was an interesting paradox: stimuli that trigger contagious yawns in real life did the same in virtual reality, but stimuli that suppress yawns in real life did not.

The presence of an actual person in the testing room had a more significant effect on yawning than anything in the VR environment. Even though subjects couldn’t see or hear their company, simply knowing a researcher was present was enough to diminish their yawning. Social cues in actual reality appeared to dominate and supersede those in virtual reality.

Virtual reality has caught on as a research tool in psychology and other fields, but these findings show that researchers may need to account for its limitations.

“Using VR to examine how people think and behave in real life may very well lead to conclusions that are fundamentally wrong. This has profound implications for people who hope to use VR to make accurate projections regarding future behaviours,” said Kingstone. “For example, predicting how pedestrians will behave when walking amongst driverless cars, or the decisions that pilots will make in an emergency situation. Experiences in VR may be a poor proxy for real life.”

AI System Successfully Predicts Alzheimer’s Years in Advance

Important research on Alzheimer’s disease, since it’s one of those diseases where treatment is more effective the earlier it’s caught.

Artificial intelligence (AI) technology improves the ability of brain imaging to predict Alzheimer’s disease, according to a study published in the journal Radiology.

Timely diagnosis of Alzheimer’s disease is extremely important, as treatments and interventions are more effective early in the course of the disease. However, early diagnosis has proven to be challenging. Research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain, but these changes can be difficult to recognize.

“Differences in the pattern of glucose uptake in the brain are very subtle and diffuse,” said study co-author Jae Ho Sohn, M.D., from the Radiology & Biomedical Imaging Department at the University of California in San Francisco (UCSF). “People are good at finding specific biomarkers of disease, but metabolic changes represent a more global and subtle process.”

The study’s senior author, Benjamin Franc, M.D., from UCSF, approached Dr. Sohn and University of California, Berkeley, undergraduate student Yiming Ding through the Big Data in Radiology (BDRAD) research group, a multidisciplinary team of physicians and engineers focusing on radiological data science. Dr. Franc was interested in applying deep learning, a type of AI in which machines learn by example much like humans do, to find changes in brain metabolism predictive of Alzheimer’s disease.

The researchers trained the deep learning algorithm on a special imaging technology known as 18-F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood. PET scans can then measure the uptake of FDG in brain cells, an indicator of metabolic activity.

The researchers had access to data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multi-site study focused on clinical trials to improve prevention and treatment of this disease. The ADNI dataset included more than 2,100 FDG-PET brain images from 1,002 patients. Researchers trained the deep learning algorithm on 90 percent of the dataset and then tested it on the remaining 10 percent of the dataset. Through deep learning, the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.

Finally, the researchers tested the algorithm on an independent set of 40 imaging exams from 40 patients that it had never studied. The algorithm achieved 100 percent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
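Sensitivity here is recall on the positive class: the share of patients who went on to an Alzheimer’s diagnosis that the model correctly flagged. A minimal, hypothetical sketch of the 90/10 split and that metric (this is not the UCSF team’s pipeline; the logistic regression and random arrays are stand-ins for their deep learning model and the ADNI scans):

```python
# Hypothetical sketch of a 90/10 split and a sensitivity check, not the study's actual pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression  # stand-in for the deep learning model
from sklearn.metrics import recall_score             # recall on the positive class == sensitivity

# Stand-in data: one feature vector per FDG-PET exam, label 1 = later Alzheimer's diagnosis.
rng = np.random.default_rng(0)
X = rng.normal(size=(2100, 64))
y = rng.integers(0, 2, size=2100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
sensitivity = recall_score(y_test, model.predict(X_test))  # TP / (TP + FN)
print(f"sensitivity on held-out 10%: {sensitivity:.2f}")
```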

“We were very pleased with the algorithm’s performance,” Dr. Sohn said. “It was able to predict every single case that advanced to Alzheimer’s disease.”

Although he cautioned that their independent test set was small and needs further validation with a larger multi-institutional prospective study, Dr. Sohn said that the algorithm could be a useful tool to complement the work of radiologists — especially in conjunction with other biochemical and imaging tests — in providing an opportunity for early therapeutic intervention.

“If we diagnose Alzheimer’s disease when all the symptoms have manifested, the brain volume loss is so significant that it’s too late to intervene,” he said. “If we can detect it earlier, that’s an opportunity for investigators to potentially find better ways to slow down or even halt the disease process.”

More Climate Change Worsens Natural Disasters

Hurricane Florence has been receiving massive media coverage for the immense damage it’s doing. Hundreds of thousands of people in North Carolina are without electricity now, and the flooding is doing major harm, including threatening nuclear reactors.

In the news media, it is almost never mentioned that climate change has made natural disasters such as hurricanes worse. Warmer air holds more water vapor, and more water vapor means worsened superstorms. In 2017, U.S. economic costs from natural disasters hit a record, in significant part due to hurricanes like Florence.

Amazingly, it is now 2018 and there is not even much discussion about ways that human technology could reduce the strength of superstorms. Hurricanes require a sea surface temperature of at least 26.5 degrees Celsius to form, and there is some research showing that sending compressed air bubbles up from deeper in the ocean (via perforated pipes located over a hundred meters down) brings colder water to the surface. The cold water would cool the warmer surface water, possibly preventing hurricanes by removing their supply of energy.

The United States has given enormous subsidies to fossil fuel companies that operate oil rigs on the ocean, contributing to the greenhouse effect that leads to warming and worse storms. It doesn’t seem unreasonable to repurpose materials from those rigs into platforms that use the perforated pipes to cool the ocean water and prevent (or at least weaken) hurricanes. Nor does it seem unreasonable that, guided by data predicting where hurricanes are about to form, such platforms could be quickly deployed or moved to other locations.

But the absence of a discussion like this shows what kind of mass media (and therefore communicative) structure is currently in place — one that doesn’t discuss a key factor making the problem much worse, and one that doesn’t really mention potentially viable 21st-century technological solutions.

Climate change (yes, it’s real and at least largely human-caused) will keep making these sorts of disasters much worse if it continues unabated. In 20 years, Hurricane Florence may seem mild compared to the average hurricane of 2038, and that is clearly a stormy future that needs to be prevented.

Using Work Sharing to Improve the Economy and Worker Happiness

An important policy idea: reducing average necessary work hours (ideally with at least similar wage levels, supported by the increased value from productivity growth). It will keep becoming more important as technology continues to advance.

The United States is very much an outlier among wealthy countries in the relatively weak rights that are guaranteed to workers on the job. This is true in a variety of areas. For example, the United States is the only wealthy country in which private sector workers can be dismissed at will, but it shows up most clearly in hours of work.

In other wealthy countries, there has been a consistent downward trend in average annual hours of work over the last four decades. By contrast, in the United States, there has been relatively little change. While people in other wealthy countries can count on paid sick days, paid family leave, and four to six weeks of paid vacation every year, these benefits are only available to better-paid workers in the United States. Even for these workers, the benefits are often less than the average in Western European countries.

[…]

Part of the benefit of work sharing is that it can allow workers and employers to gain experience with a more flexible work week or work year. It is possible that this experience can lead workers to place a higher value on leisure or non-work activities and therefore increase their support for policies that allow for reduced work hours.

Work Hours in 1970: The United States Was Not Always an Outlier

When the experience of European countries is raised in the context of proposals for expanding paid time off in the United States, it is common for opponents to dismiss this evidence by pointing to differences in national character. Europeans may value time off with their families or taking vacations, but we are told that Americans place a higher value on work and income.

While debates on national character probably do not provide a useful basis for policy, it is worth noting that the United States was not always an outlier in annual hours worked. If we go back to the 1970s, the United States was near the OECD average in annual hours worked. By 2016, by contrast, it ranked near the top.

In 1970, workers in the United States put in on average 3 to 5 percent more hours than workers in Denmark and Finland, according to the OECD data; by 2016, this difference had grown to more than 25 percent. Workers in France and the Netherlands now have considerably shorter average work years than workers in the United States. Even workers in Japan now work about 5 percent less on average than workers in the United States.

[Figure 1: average annual hours worked, OECD data]

[…]

It is also important to consider efforts to reduce hours as being a necessary aspect of making the workplace friendlier to women. It continues to be the case that women have a grossly disproportionate share of the responsibility for caring for children and other family members.

[…]

In this respect, it is worth noting that the United States went from ranking near the top in women’s labor force participation in 1980 to being below the OECD average in 2018. While other countries have made workplaces more family friendly, this has been much less true of the United States.

Shortening Work Hours and Full Employment

There has been a largely otherworldly public debate in recent years on the prospects that robots and artificial intelligence would lead to mass unemployment. This debate is otherworldly since it describes a world of rapidly rising productivity growth. In fact, productivity growth has been quite slow ever since 2005. The average annual rate of productivity growth over the last twelve years has been just over 1.0 percent. This compares to a rate of growth of close to 3.0 percent in the long Golden Age from 1947 to 1973 and again from 1995 to 2005.

So this means that we are having this major national debate about the mass displacement of workers due to technology at a time when the data clearly tell us that displacement is moving along very slowly.[2] It is also worth noting that all the official projections from agencies like the Congressional Budget Office and the Office of Management and Budget show the slowdown in productivity growth persisting for the indefinite future. This projection of continued slow productivity growth provides the basis for debates on issues like budget deficits and the finances of Social Security.

However, if we did actually begin to see an uptick in the rate of productivity growth, and robots did begin to displace large numbers of workers, then an obvious solution would be to adopt policies aimed at shortening the average duration of the work year. The basic arithmetic is straightforward: if we reduce average work hours by 20 percent, then we will need 25 percent more workers to get the same amount of labor. While in practice the relationship will never be as simple as the straight arithmetic, if we do get a reduction in average work time, then we will need more workers.
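The relationship is easy to make concrete. Total labor equals the number of workers times hours per worker, so cutting hours to 80 percent of their old level requires 1/0.8 = 1.25 times as many workers to keep total labor constant. A quick sketch:

```python
# How many more workers are needed to keep total labor (workers x hours) constant
# when average hours per worker are cut by a given share?
def extra_workers_needed(hours_cut: float) -> float:
    """Fractional increase in headcount that offsets a fractional cut in hours."""
    return 1 / (1 - hours_cut) - 1

print(f"{extra_workers_needed(0.20):.0%}")  # 25% more workers for a 20% cut in hours
print(f"{extra_workers_needed(0.10):.1%}")  # ~11.1% more workers for a 10% cut
```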

As noted above, reductions in work hours were an important way in which workers in Western Europe have taken the gains from productivity growth over the last four decades. This had also been true in previous decades in the United States, as the standard workweek was shortened to forty hours with the Fair Labor Standards Act of 1938. In many industries, it had been over sixty hours at the turn of the twentieth century.

If the United States can resume a path of shortening work hours and get its standard work year back in line with other wealthy countries, it should be able to absorb even very rapid gains in productivity growth without any concerns about mass unemployment. While job-killing robots may exist primarily in the heads of the people who write about the economy, if they do show up in the world, a policy of aggressive reductions in work hours should ensure they don’t lead to widespread unemployment.

Removing CO2 from the Atmosphere — Most Efficient Process Yet Found

With climate change’s dangers looming, it would be sensible for more people to try to lower the cost of the direct air capture demonstrated here. Estimating that humans put 50 billion tons of CO2 into the atmosphere every year, and putting the cost of removing one ton of CO2 at maybe $100, it would cost approximately $5 trillion (5-6% of world GDP) a year to offset the new CO2 being added. It isn’t clear to me how much value could be generated from the tons of CO2 captured, but I am aware that there are good catalysts available for recycling CO2 into valuable chemicals.
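That estimate is easy to reproduce. A minimal sketch using the figures above, with the world GDP value (roughly $85 trillion) added as my own assumption for the percentage:

```python
# Back-of-the-envelope cost of offsetting annual CO2 emissions with direct air capture.
annual_emissions_tons = 50e9   # ~50 billion tons of CO2 per year (estimate above)
cost_per_ton = 100             # ~$100 per ton captured (optimistic end of the range)
world_gdp = 85e12              # ASSUMPTION: roughly $85 trillion world GDP

annual_cost = annual_emissions_tons * cost_per_ton
print(f"annual cost: ${annual_cost / 1e12:.1f} trillion "
      f"({100 * annual_cost / world_gdp:.1f}% of world GDP)")
# -> annual cost: $5.0 trillion (5.9% of world GDP)
```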

Even so, it’s troubling that governments around the world don’t join forces to reduce the costs of this direct air capture and contribute money toward using it more. My guess is that something similar to this technology is going to have to be used much more in the future. I don’t think humanity is moving fast enough to ditch fossil fuels for clean energy, and the next ten years are going to be especially crucial for what happens with climate change. The problem with CO2 removal, though, is going to continue to revolve around its high cost; if the cost could be lowered significantly further, much of the warming this century could be prevented.

By removing emitted carbon dioxide from the atmosphere and turning it into fresh fuels, engineers at a Canadian firm have demonstrated a scalable and cost-effective way to make deep cuts in the carbon footprint of transportation with minimal disruption to existing vehicles. Their work appears June 7 in the journal Joule.

“The carbon dioxide generated via direct air capture can be combined with sequestration for carbon removal, or it can enable the production of carbon-neutral hydrocarbons, which is a way to take low-cost carbon-free power sources like solar or wind and channel them into fuels that can be used to decarbonize the transportation sector,” says lead author David Keith, founder and chief scientist of Carbon Engineering, a Canadian CO2-capture and clean fuels enterprise, and a professor of applied physics and public policy at Harvard University.

Direct air capture technology works almost exactly like it sounds. Giant fans draw ambient air into contact with an aqueous solution that picks out and traps carbon dioxide. Through heating and a handful of familiar chemical reactions, that same carbon dioxide is re-extracted and ready for further use — as a carbon source for making valuable chemicals like fuels, or for storage via a sequestration strategy of choice. It’s not just theory — Carbon Engineering’s facility in British Columbia is already achieving both CO2 capture and fuel generation.

The idea of direct air capture is hardly new, but the successful implementation of a scalable and cost-effective working pilot plant is. After conducting a full process analysis and crunching the numbers, Keith and his colleagues claim that realizing direct air capture on an impactful scale will cost roughly $94-$232 per ton of carbon dioxide captured, which is on the low end of estimates that have ranged up to $1,000 per ton in theoretical analyses.

[…]

Centuries of unchecked human carbon emissions also mean that atmospheric carbon dioxide is a virtually unlimited feedstock for transformation into new fuels. “We are not going to run out of air anytime soon,” adds Steve Oldham, CEO of Carbon Engineering. “We can keep collecting carbon dioxide with direct air capture, keep adding hydrogen generation and fuel synthesis, and keep reducing emissions through this AIR TO FUELS™ pathway.”

[…]

“After 100 person-years of practical engineering and cost analysis, we can confidently say that while air capture is not some magical cheap solution, it is a viable and buildable technology for producing carbon-neutral fuels in the immediate future and for removing carbon in the long run,” says Keith.

Amazon Grants Authoritarian Facial Recognition Technology to Police

Another reminder that Amazon doesn’t care about its harmful effects on communities. Its CEO is the world’s richest person, yet its workers often work in horrible conditions for pay low enough that some must rely on food stamps to survive. As for the facial recognition technology, it increases repression in communities by allowing police to intensify their targeting of vulnerable minority groups.

After internal emails (pdf) published by the ACLU on Tuesday revealed that Amazon has been aggressively selling its facial recognition product to law enforcement agencies throughout the U.S., privacy advocates and civil libertarians raised grave concerns that the retailer is effectively handing out a “user manual for authoritarian surveillance” that could be deployed by governments to track protesters, spy on immigrants and minorities, and crush dissent.

“We know that putting this technology into the hands of already brutal and unaccountable law enforcement agencies places both democracy and dissidence at great risk,” Malkia Cyril, executive director of the Center for Media Justice, said in a statement in response to the ACLU’s findings. “Amazon should never be in the business of aiding and abetting racial discrimination and xenophobia—but that’s exactly what Amazon CEO Jeff Bezos is doing.”

First unveiled in 2016, “Rekognition” was explicitly marketed by Amazon as a tool for “tracking people,” and it has already been put to use by law enforcement agencies in Florida and Oregon.

While Amazon suggests in its marketing materials that Rekognition can be used to track down “people of interest” in criminal cases, ACLU and dozens of pro-privacy groups argued in a letter (pdf) to Amazon CEO Jeff Bezos on Tuesday that the product is “primed for abuse in the hands of governments” and poses a “grave threat” to marginalized groups and dissidents.

Highlighting “the possibility that those labeled suspicious by governments—such as undocumented immigrants or black activists—will be targeted for Rekognition surveillance,” the coalition of advocacy groups urged Amazon to “act swiftly to stand up for civil rights and civil liberties, including those of its own customers, and take Rekognition off the table for governments.”

“People should be free to walk down the street without being watched by the government,” the groups concluded. “Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it.”

The ACLU investigation found that Amazon has not been content to simply market and sell Rekognition to law enforcement agencies—it is also offering “company resources to help government agencies deploy” the tool.

Google Employees Resigning Over Google’s Involvement in Supplying AI to the U.S. Military’s Drone Program

AI used in Project Maven is supposed to decide when humans should be killed by U.S. military drones. But all software has flaws that can be exploited, and the people writing the code the AI uses will have their own biases, which may be horrifying in practice. It’s also just wrong to further amplify the power (and advanced AI adds real power) of a program that has already led to the bombing of civilian weddings on numerous occasions.

About a dozen Google employees have resigned in protest of the tech giant’s involvement in an artificial intelligence (AI) collaboration with the U.S. military, in which Google is participating to develop new kinds of drone technology.

“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew,” one of the workers told Gizmodo. “I realized if I can’t recommend people join here, then why am I still here?”

The resignations follow Google’s failure to alter course despite approximately 4,000 of its employees signing a petition that urges Google to abandon its work with Project Maven, a Pentagon program focused on the targeting systems of the military’s armed drones. The company is reportedly contributing artificial intelligence technology to the program.

Creating Medicines With Fewer Side Effects Through a New Chemical Separation Process

Overall, medications today have way too many harmful side effects, and so this breakthrough technological process should be helpful in reducing them. It also has the potential to “produce better medical and agricultural products, including medicines, food ingredients, dietary supplements and pesticides.”

Chemical compounds are made up of molecules. The most important molecules in biology are chiral molecules. “Chiral,” from the Greek word for “hand,” describes molecules that look almost exactly alike and contain the same number of atoms but are mirror images of one another — meaning some are “left-handed” and others are “right-handed.” This difference in “handedness” is crucial and yields different biological effects.

The importance of chiral differences was made painfully clear by the drug thalidomide. Marketed to pregnant women in the 1950s and 1960s to ease morning sickness, thalidomide worked well under a microscope. However, thalidomide is a chiral drug — its “right” chiral molecule provides nausea relief while the “left” molecule causes horrible deformities in babies. Since the drug company producing thalidomide did not separate out the right and left molecules, thalidomide had disastrous results for the children of women who took this medication.

Though a crucial step for drug safety, the separation of chiral molecules into their right- and left-handed components is an expensive process that demands a tailor-made approach for each type of molecule. Now, however, following a decade of collaborative research, Paltiel and Naaman have discovered a uniform, generic method that will enable pharmaceutical and chemical manufacturers to easily and cheaply separate right from left chiral molecules.

Their method relies on magnets. Chiral molecules interact with a magnetic substrate and line up according to the direction of their handedness — “left” molecules interact better with one pole of the magnet, and “right” molecules with the other one. This technology will allow chemical manufacturers to keep the “good” molecules and to discard the “bad” ones that cause harmful or unwanted side effects.

“Our finding has great practical importance,” shared Prof. Naaman. “It will usher in an era of better, safer drugs, and more environmentally-friendly pesticides.”

While popular drugs, such as Ritalin and Cipramil, are sold in their chirally-pure (i.e., separated) forms, many generic medications are not. Currently only 13% of chiral drugs are separated even though the FDA recommends that all chiral drugs be separated. Further, in the field of agrochemicals, chirally-pure pesticides and fertilizers require smaller doses and cause less environmental contamination than their unseparated counterparts.

U.S. Military Announces Development of Drones That Decide to Kill Using AI

Drone warfare (with its state terrorism causing numerous civilian casualties) is already horrifying enough — this AI drone development would likely be even worse. This announcement also raises the question of how much accountability will fall on those who write the algorithms that determine how these drones function.

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and trigger vast legal and ethical implications for wider society.

There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process.

At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

[…]

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war.

The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

And this actually points to one possible military and ethical argument, made by Ronald Arkin, in support of autonomous killing drones: perhaps if these drones drop the bombs, psychological problems among crew members can be avoided.

The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it.

Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

[…]

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings.

But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident.

Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use.

Companies like Google, its employees or its systems, could become liable to attack from an enemy state.

For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are even darker issues still.

The whole point of the self-learning algorithms this technology uses – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given.

If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed.

In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning.

New Study Suggests That Smartphone Overuse is Similar to Other Types of Substance Abuse

It shouldn’t be much of a surprise, given that technology corporations design smartphones to be as addictive as possible.

Smartphones are an integral part of most people’s lives, allowing us to stay connected and in-the-know at all times. The downside of that convenience is that many of us are also addicted to the constant pings, chimes, vibrations and other alerts from our devices, unable to ignore new emails, texts and images. In a new study published in NeuroRegulation, San Francisco State University Professor of Health Education Erik Peper and Associate Professor of Health Education Richard Harvey argue that overuse of smart phones is just like any other type of substance abuse.

“The behavioral addiction of smartphone use begins forming neurological connections in the brain in ways similar to how opioid addiction is experienced by people taking Oxycontin for pain relief — gradually,” Peper explained.

On top of that, addiction to social media technology may actually have a negative effect on social connection. In a survey of 135 San Francisco State students, Peper and Harvey found that students who used their phones the most reported higher levels of feeling isolated, lonely, depressed and anxious. They believe the loneliness is partly a consequence of replacing face-to-face interaction with a form of communication where body language and other signals cannot be interpreted. They also found that those same students almost constantly multitasked while studying, watching other media, eating or attending class. This constant activity allows little time for bodies and minds to relax and regenerate, says Peper, and also results in “semi-tasking,” where people do two or more tasks at the same time — but half as well as they would have if focused on one task at a time.

Peper and Harvey note that digital addiction is not our fault but a result of the tech industry’s desire to increase corporate profits. “More eyeballs, more clicks, more money,” said Peper. Push notifications, vibrations and other alerts on our phones and computers make us feel compelled to look at them by triggering the same neural pathways in our brains that once alerted us to imminent danger, such as an attack by a tiger or other large predator. “But now we are hijacked by those same mechanisms that once protected us and allowed us to survive — for the most trivial pieces of information,” he said.

But just as we can train ourselves to eat less sugar, for example, we can take charge and train ourselves to be less addicted to our phones and computers. The first step is recognizing that tech companies are manipulating our innate biological responses to danger. Peper suggests turning off push notifications, only responding to email and social media at specific times and scheduling periods with no interruptions to focus on important tasks.