Amazon Grants Authoritarian Facial Recognition Technology to Police

Another reminder that Amazon doesn’t care about its harmful effects on communities. Its CEO is the world’s richest person, yet its workers often labor in horrible conditions for pay so low that many must apply for food stamps just to survive. And its facial recognition technology deepens repression by making it easier for police to target vulnerable minority groups.

After internal emails (pdf) published by the ACLU on Tuesday revealed that Amazon has been aggressively selling its facial recognition product to law enforcement agencies throughout the U.S., privacy advocates and civil libertarians raised grave concerns that the retailer is effectively handing out a “user manual for authoritarian surveillance” that could be deployed by governments to track protesters, spy on immigrants and minorities, and crush dissent.

“We know that putting this technology into the hands of already brutal and unaccountable law enforcement agencies places both democracy and dissidence at great risk,” Malkia Cyril, executive director of the Center for Media Justice, said in a statement in response to the ACLU’s findings. “Amazon should never be in the business of aiding and abetting racial discrimination and xenophobia—but that’s exactly what Amazon CEO Jeff Bezos is doing.”

First unveiled in 2016, “Rekognition” was explicitly marketed by Amazon as a tool for “tracking people,” and it has already been put to use by law enforcement agencies in Florida and Oregon.

While Amazon suggests in its marketing materials that Rekognition can be used to track down “people of interest” in criminal cases, ACLU and dozens of pro-privacy groups argued in a letter (pdf) to Amazon CEO Jeff Bezos on Tuesday that the product is “primed for abuse in the hands of governments” and poses a “grave threat” to marginalized groups and dissidents.

Highlighting “the possibility that those labeled suspicious by governments—such as undocumented immigrants or black activists—will be targeted for Rekognition surveillance,” the coalition of advocacy groups urged Amazon to “act swiftly to stand up for civil rights and civil liberties, including those of its own customers, and take Rekognition off the table for governments.”

“People should be free to walk down the street without being watched by the government,” the groups concluded. “Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it.”

The ACLU investigation found that Amazon has not been content to simply market and sell Rekognition to law enforcement agencies—it is also offering “company resources to help government agencies deploy” the tool.
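
For a concrete sense of what “deploying” a face recognition tool like this involves, here is a minimal illustrative sketch of a single face-comparison request against Amazon’s publicly documented Rekognition API, via the boto3 Python client. The image file names and the similarity threshold are placeholder assumptions for illustration, not details taken from the ACLU documents.

# Illustrative sketch only: one face-comparison call against the publicly
# documented Rekognition API using the boto3 client. The file names and the
# similarity threshold below are placeholder assumptions.
import boto3

def find_face_matches(source_path, target_path, threshold=80.0):
    client = boto3.client("rekognition")
    with open(source_path, "rb") as source, open(target_path, "rb") as target:
        response = client.compare_faces(
            SourceImage={"Bytes": source.read()},
            TargetImage={"Bytes": target.read()},
            SimilarityThreshold=threshold,
        )
    # Each match reports a bounding box in the target image and a similarity score.
    return [(m["Face"]["BoundingBox"], m["Similarity"]) for m in response["FaceMatches"]]

if __name__ == "__main__":
    for box, similarity in find_face_matches("person_of_interest.jpg", "crowd_photo.jpg"):
        print(f"Possible match ({similarity:.1f}% similarity) at {box}")

The point is less the handful of lines than how little friction stands between a call like this and continuous surveillance once it is wired up to live camera feeds.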

Google Employees Resigning Over Google’s Involvement in Supplying AI to the U.S. Military’s Drone Program

The AI used in Project Maven is supposed to decide when humans should be killed by U.S. military drones. But all software has flaws that can be exploited, and the people writing the code the AI uses will have their own biases, which may be horrifying in practice. It’s also simply wrong to further amplify the power (and advanced AI adds real power) of a program that has already led to the bombing of civilian weddings on numerous occasions.

About a dozen Google employees have resigned in protest of the tech giant’s involvement in an artificial intelligence (AI) collaboration with the U.S. military, in which Google is helping to develop new kinds of drone technology.

“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew,” one of the workers told Gizmodo. “I realized if I can’t recommend people join here, then why am I still here?”

The resignations follow Google’s failure to alter course despite approximately 4,000 of its employees signing a petition that urges Google to abandon its work with Project Maven, a Pentagon program focused on the targeting systems of the military’s armed drones. The company is reportedly contributing artificial intelligence technology to the program.

Creating Medicines With Fewer Side Effects Through a New Chemical Separation Process

Overall, medications today have way too many harmful side effects, and so this breakthrough technological process should be helpful in reducing them. It also has the potential to “produce better medical and agricultural products, including medicines, food ingredients, dietary supplements and pesticides.”

Chemical compounds are made up of molecules. The most important molecules in biology are chiral molecules. “Chiral,” from the Greek word for “hand,” describes molecules that look almost exactly alike and contain the same number of atoms but are mirror images of one another — meaning some are “left-handed” and others are “right-handed.” This difference in “handedness” is crucial and yields different biological effects.

The importance of chiral differences was made painfully clear by the drug thalidomide. Marketed to pregnant women in the 1950s and 1960s to ease morning sickness, thalidomide seemed to work well. However, thalidomide is a chiral drug: its “right” chiral molecule provides nausea relief while the “left” molecule causes horrible deformities in babies. Since the drug company producing thalidomide did not separate out the right and left molecules, the drug had disastrous results for the children of women who took it.

Though a crucial step for drug safety, the separation of chiral molecules into their right- and left-handed components is an expensive process that demands a tailor-made approach for each type of molecule. Now, however, following a decade of collaborative research, Prof. Yossi Paltiel and Prof. Ron Naaman have discovered a uniform, generic method that will enable pharmaceutical and chemical manufacturers to easily and cheaply separate right from left chiral molecules.

Their method relies on magnets. Chiral molecules interact with a magnetic substrate and line up according to the direction of their handedness — “left” molecules interact better with one pole of the magnet, and “right” molecules with the other one. This technology will allow chemical manufacturers to keep the “good” molecules and to discard the “bad” ones that cause harmful or unwanted side effects.

“Our finding has great practical importance,” shared Prof. Naaman. “It will usher in an era of better, safer drugs, and more environmentally-friendly pesticides.”

While popular drugs, such as Ritalin and Cipramil, are sold in their chirally-pure (i.e., separated) forms, many generic medications are not. Currently only 13% of chiral drugs are separated even though the FDA recommends that all chiral drugs be separated. Further, in the field of agrochemicals, chirally-pure pesticides and fertilizers require smaller doses and cause less environmental contamination than their unseparated counterparts.

U.S. Military Announces Development of Drones That Decide to Kill Using AI

Drone warfare (with its state terrorism causing numerous civilian casualties) is already horrifying enough — this AI drone development would likely make it even worse. The announcement also raises the question of how much accountability will fall on those who write the algorithms that determine how these drones function.

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and carry vast legal and ethical implications for wider society.

There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process.

At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

[…]

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war.

The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

And this actually points to one possible military and ethical argument, made by roboticist Ronald Arkin, in support of autonomous killing drones: perhaps if these drones drop the bombs, psychological problems among crew members can be avoided.

The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it.

Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

[…]

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings.

But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident.

Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use.

Companies like Google, their employees, or their systems could become liable to attack from an enemy state.

For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are even darker issues still.

The whole point of the self-learning algorithms this technology uses – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given.

If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed.

In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning.

New Study Suggests That Smartphone Overuse Is Similar to Other Types of Substance Abuse

It shouldn’t come as much of a surprise, given that technology corporations design smartphones to be as addictive as possible.

Smartphones are an integral part of most people’s lives, allowing us to stay connected and in-the-know at all times. The downside of that convenience is that many of us are also addicted to the constant pings, chimes, vibrations and other alerts from our devices, unable to ignore new emails, texts and images. In a new study published in NeuroRegulation, San Francisco State University Professor of Health Education Erik Peper and Associate Professor of Health Education Richard Harvey argue that overuse of smartphones is just like any other type of substance abuse.

“The behavioral addiction of smartphone use begins forming neurological connections in the brain in ways similar to how opioid addiction is experienced by people taking Oxycontin for pain relief — gradually,” Peper explained.

On top of that, addiction to social media technology may actually have a negative effect on social connection. In a survey of 135 San Francisco State students, Peper and Harvey found that students who used their phones the most reported higher levels of feeling isolated, lonely, depressed and anxious. They believe the loneliness is partly a consequence of replacing face-to-face interaction with a form of communication where body language and other signals cannot be interpreted. They also found that those same students almost constantly multitasked while studying, watching other media, eating or attending class. This constant activity allows little time for bodies and minds to relax and regenerate, says Peper, and also results in “semi-tasking,” where people do two or more tasks at the same time — but half as well as they would have if focused on one task at a time.

Peper and Harvey note that digital addiction is not our fault but a result of the tech industry’s desire to increase corporate profits. “More eyeballs, more clicks, more money,” said Peper. Push notifications, vibrations and other alerts on our phones and computers make us feel compelled to look at them by triggering the same neural pathways in our brains that once alerted us to imminent danger, such as an attack by a tiger or other large predator. “But now we are hijacked by those same mechanisms that once protected us and allowed us to survive — for the most trivial pieces of information,” he said.

But just as we can train ourselves to eat less sugar, for example, we can take charge and train ourselves to be less addicted to our phones and computers. The first step is recognizing that tech companies are manipulating our innate biological responses to danger. Peper suggests turning off push notifications, only responding to email and social media at specific times and scheduling periods with no interruptions to focus on important tasks.

Improved Process Developed for Making Clean Drinking Water From Salt Water

If this actually reaches mass production, it could make safe drinking water far more widely available.

Using an innovative combination of sunshine and hydrogels, a new device just unveiled by scientists is able to produce clean drinking water from virtually any source – even the salty waters of the Dead Sea.

This new technique could prevent tens of thousands of deaths every year, since access to safe drinking water is at a premium in many developing nations, not to mention in the wake of a natural disaster or emergency anywhere in the world.

The technology is compact, inexpensive, and uses ambient solar energy in order to evaporate water and remove impurities, making it a substantial upgrade over similar processes that have been used in the past.

“Water desalination through distillation is a common method for mass production of freshwater,” says one of the researchers, Fei Zhao from the University of Texas at Austin.

“However, current distillation technologies, such as multi-stage flash and multi-effect distillation, require significant infrastructures and are quite energy-intensive.”

“Solar energy, as the most sustainable heat source to potentially power distillation, is widely considered to be a great alternative for water desalination.”

The new filtering device works by combining several gel-polymer hybrid materials that mix both hydrophilic (water-attracting) and semiconducting (solar-absorbing) properties.

The nanostructure of the gels enables more water vapour to be produced from less solar energy, and without the complicated series of optical instruments that existing devices use to concentrate sunlight. Here, that concentration isn’t needed.

When a jar of contaminated water is placed in direct sunlight with the hydrogel evaporator on top, vapour is released that’s then trapped and stored by a condenser.
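
As a rough, back-of-the-envelope illustration of why efficient solar-to-vapour conversion matters, the short sketch below estimates an upper bound on evaporation per square metre of evaporator. The irradiance figure and the efficiencies are assumed values for illustration, not numbers from the study.

# Back-of-the-envelope bound on solar-driven evaporation.
# Assumptions (not from the study): ~1000 W/m^2 peak solar irradiance at the
# surface, and ~2.26 MJ/kg latent heat of vaporisation for water.
SOLAR_IRRADIANCE_W_PER_M2 = 1000.0
LATENT_HEAT_J_PER_KG = 2.26e6

def max_evaporation_rate(efficiency):
    """Upper-bound evaporation rate in kg per square metre per hour for a
    given solar-to-vapour conversion efficiency (0 to 1)."""
    power_into_vapour = SOLAR_IRRADIANCE_W_PER_M2 * efficiency  # W/m^2
    return power_into_vapour / LATENT_HEAT_J_PER_KG * 3600      # kg/m^2/hour

for efficiency in (0.3, 0.6, 0.9):
    rate = max_evaporation_rate(efficiency)
    print(f"{efficiency:.0%} efficiency -> at most {rate:.2f} kg of vapour per m^2 per hour")

Even at very high conversion efficiency the bound is only around 1.4 kg of water per square metre per hour under these assumptions, which is why a material that squeezes more vapour out of the same sunlight, without mirrors or lenses to concentrate it, matters.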

“We have essentially rewritten the entire approach to conventional solar water evaporation,” says lead researcher Guihua Yu, from the University of Texas at Austin.

To give their new contraption a thorough testing, the researchers tried it out at the Dead Sea, which borders Israel, the West Bank, and Jordan. With a salinity of around 34 percent, it’s about ten times as salty as your standard ocean water.

The hydrogel filtering device passed its test with flying colours, producing drinking water from the Dead Sea that met the drinking water standards set by the World Health Organisation (WHO) and the US Environmental Protection Agency (EPA).

Possibility of Stopping Hurricanes Using Air Bubbles

As 2017 showed, hurricanes can do immense damage. The effects of climate change will also make hurricanes worse, as warmer air means more water vapor, and more water vapor translates to more superstorms. It’s uncertain how much air bubble technology would actually help, but it may well prove beneficial.

Tropical hurricanes are generated when masses of cold and warm air collide. Another essential factor is that the sea surface temperature must be greater than 26.5°C.

“Climate change is causing sea surface temperatures to increase,” says Grim Eidnes, who is a Senior Research Scientist at SINTEF Ocean. “The critical temperature threshold at which evaporation is sufficient to promote the development of hurricanes is 26.5°C. In the case of hurricanes Harvey, Irma and Maria that occurred in the Gulf of Mexico in the period August to September 2017, sea surface temperatures were measured at 32°C”, he says.

So to the big question. Is it possible to cool the sea surface to below 26.5°C by exploiting colder water from deeper in the water column?

[…]

Researchers at SINTEF now intend to save lives by using a tried and tested method called a “bubble curtain”.

The method consists of supplying bubbles of compressed air from a perforated pipe lowered in the water, which then rise, taking with them colder water from deeper in the ocean. At the surface, the cold water mixes with, and cools, the warm surface water.

SINTEF believes that the Yucatan Strait will be an ideal test arena for this technology.

“Our initial investigations show that the pipes must be located at between 100 and 150 metres depth in order to extract water that is cold enough,” says Eidnes. “By bringing this water to the surface using the bubble curtains, the surface temperature will fall to below 26.5°C, thus cutting off the hurricane’s energy supply,” he says, before adding: “This method will allow us quite simply to prevent hurricanes from achieving life-threatening intensities.”
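
To make the idea concrete, here is a simplified mixing calculation. The surface value follows the 32°C figure quoted above, while the deep-water temperature and the simple weighted-average model are illustrative assumptions, not SINTEF results.

# Simplified illustration of the bubble-curtain idea as a mixing problem.
# Assumed temperatures: 32 C at the surface (as measured during the 2017
# hurricanes) and 20 C for water drawn up from depth (the deep value is an
# assumption for illustration, not a SINTEF figure).
SURFACE_TEMP_C = 32.0
DEEP_WATER_TEMP_C = 20.0
TARGET_TEMP_C = 26.5  # threshold below which hurricanes lose their energy supply

def mixed_temperature(deep_fraction):
    """Surface temperature after mixing in a given fraction of deep water
    (simple weighted average; ignores salinity and air-sea heat exchange)."""
    return (1 - deep_fraction) * SURFACE_TEMP_C + deep_fraction * DEEP_WATER_TEMP_C

def deep_fraction_needed(target=TARGET_TEMP_C):
    """Fraction of deep water required to bring the mixture down to the target."""
    return (SURFACE_TEMP_C - target) / (SURFACE_TEMP_C - DEEP_WATER_TEMP_C)

print(f"Mixing in 50% deep water gives {mixed_temperature(0.5):.1f} C at the surface")
print(f"About {deep_fraction_needed():.0%} deep water is needed to reach {TARGET_TEMP_C} C")

Under these assumptions, a bit under half of the surface layer would need to be mixed with the cooler deep water to drop below the 26.5°C threshold, which gives a sense of the scale of pumping a bubble curtain would have to achieve.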