U.S. Military Announces Development of Drones That Decide to Kill Using AI

Drone warfare, with its state terrorism causing numerous civilian casualties, is already horrifying enough; this AI drone development would likely be even worse. The announcement also raises the question of how much accountability will fall on those who write the algorithms that determine how these drones function.

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and carry vast legal and ethical implications for wider society.

There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process.

At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

[…]

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war.

The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

And this actually points to one possible military and ethical argument, made by roboticist Ronald Arkin, in support of autonomous killing drones: if these drones drop the bombs, psychological problems among crew members might be avoided.

The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it.

Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

[…]

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings.

But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident.

Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use.

Companies like Google, together with their employees and systems, could become liable to attack from an enemy state.

For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are darker issues still.

The whole point of the self-learning algorithms such technology uses (programs that independently learn from whatever data they can collect) is that they become better at whatever task they are given.

If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed.

In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.
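To make that concrete, here is a minimal sketch of what such a deployment decision could reduce to in practice. Every name and number in it is hypothetical, invented purely for illustration; nothing below comes from any announced system.

```python
# Hypothetical deployment gate for a self-learning targeting system.
# All metric names and threshold values are invented for illustration;
# the point is that someone must choose these numbers before deployment.

from dataclasses import dataclass

@dataclass
class EvaluationReport:
    """Results of testing a model revision against a held-out scenario set."""
    target_recall: float           # fraction of valid targets correctly identified
    misidentification_rate: float  # fraction of non-combatants wrongly flagged

# The politically chosen "acceptable stage of development":
MIN_TARGET_RECALL = 0.95            # who decides this number?
MAX_MISIDENTIFICATION_RATE = 0.001  # and, more troublingly, this one?

def cleared_for_deployment(report: EvaluationReport) -> bool:
    """Return True only if the model revision meets both chosen thresholds."""
    return (report.target_recall >= MIN_TARGET_RECALL
            and report.misidentification_rate <= MAX_MISIDENTIFICATION_RATE)

print(cleared_for_deployment(EvaluationReport(0.97, 0.004)))  # False
```

The uncomfortable point of the sketch is that the ethical judgement does not disappear; it is compressed into a pair of numbers that someone must choose and sign off on.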

Recent experiences of autonomous AI in society should serve as a warning.

New Study Suggests That Smartphone Overuse Is Similar to Other Types of Substance Abuse

It shouldn’t be much of a surprise that technology corporations design smartphones to be as addictive as possible.

Smartphones are an integral part of most people’s lives, allowing us to stay connected and in-the-know at all times. The downside of that convenience is that many of us are also addicted to the constant pings, chimes, vibrations and other alerts from our devices, unable to ignore new emails, texts and images. In a new study published in NeuroRegulation, San Francisco State University Professor of Health Education Erik Peper and Associate Professor of Health Education Richard Harvey argue that overuse of smartphones is just like any other type of substance abuse.

“The behavioral addiction of smartphone use begins forming neurological connections in the brain in ways similar to how opioid addiction is experienced by people taking Oxycontin for pain relief — gradually,” Peper explained.

On top of that, addiction to social media technology may actually have a negative effect on social connection. In a survey of 135 San Francisco State students, Peper and Harvey found that students who used their phones the most reported higher levels of feeling isolated, lonely, depressed and anxious. They believe the loneliness is partly a consequence of replacing face-to-face interaction with a form of communication where body language and other signals cannot be interpreted. They also found that those same students almost constantly multitasked while studying, watching other media, eating or attending class. This constant activity allows little time for bodies and minds to relax and regenerate, says Peper, and also results in “semi-tasking,” where people do two or more tasks at the same time — but half as well as they would have if focused on one task at a time.

Peper and Harvey note that digital addiction is not our fault but a result of the tech industry’s desire to increase corporate profits. “More eyeballs, more clicks, more money,” said Peper. Push notifications, vibrations and other alerts on our phones and computers make us feel compelled to look at them by triggering the same neural pathways in our brains that once alerted us to imminent danger, such as an attack by a tiger or other large predator. “But now we are hijacked by those same mechanisms that once protected us and allowed us to survive — for the most trivial pieces of information,” he said.

But just as we can train ourselves to eat less sugar, for example, we can take charge and train ourselves to be less addicted to our phones and computers. The first step is recognizing that tech companies are manipulating our innate biological responses to danger. Peper suggests turning off push notifications, only responding to email and social media at specific times and scheduling periods with no interruptions to focus on important tasks.

Improved Process for Making Clean Drinking Water Out of Salt Water Developed

If it actually reaches mass production, this could help provide much more safe drinking water.

Using an innovative combination of sunshine and hydrogels, a new device just unveiled by scientists is able to produce clean drinking water from virtually any source – even the salty waters of the Dead Sea.

This new technique could prevent tens of thousands of deaths every year, since access to safe drinking water is at a premium in many developing nations, to say nothing of the aftermath of a natural disaster or emergency anywhere in the world.

The technology is compact, inexpensive, and uses ambient solar energy to evaporate water and remove impurities, making it a substantial upgrade over similar processes that have been used in the past.

“Water desalination through distillation is a common method for mass production of freshwater,” says one of the researchers, Fei Zhao from the University of Texas at Austin.

“However, current distillation technologies, such as multi-stage flash and multi-effect distillation, require significant infrastructures and are quite energy-intensive.”

“Solar energy, as the most sustainable heat source to potentially power distillation, is widely considered to be a great alternative for water desalination.”

The new filtering device works by combining several gel-polymer hybrid materials that mix both hydrophilic (water-attracting) and semiconducting (solar-absorbing) properties.

The nanostructure of the gels enables more water vapour to be produced from less solar energy, and without the complicated series of optical instruments that existing devices use to concentrate sunlight. Here, that concentration isn’t needed.

When a jar of contaminated water is placed in direct sunlight with the hydrogel evaporator on top, vapour is released that’s then trapped and stored by a condenser.
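For a rough sense of scale, here is a back-of-the-envelope estimate of the best-case evaporation rate a passive solar device like this could achieve. The flux, latent-heat and efficiency figures are textbook or assumed values, not numbers from the study.

```python
# Back-of-the-envelope estimate of solar-driven evaporation.
# Illustrative textbook/assumed numbers, not figures from the study.

SOLAR_FLUX = 1000.0   # W/m^2, typical peak sunlight at the Earth's surface
LATENT_HEAT = 2.45e6  # J/kg to evaporate water near room temperature
EFFICIENCY = 0.9      # assumed fraction of absorbed sunlight driving evaporation

# Mass of water evaporated per square metre per hour at this efficiency:
rate_kg_per_m2_hr = SOLAR_FLUX * EFFICIENCY / LATENT_HEAT * 3600
print(f"~{rate_kg_per_m2_hr:.2f} kg of vapour per m^2 per hour")  # ~1.32
```

That order of magnitude, roughly a litre per square metre per hour in full sun, is why squeezing more vapour out of the same sunlight, as the nanostructured gels reportedly do, matters so much for practical output.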

“We have essentially rewritten the entire approach to conventional solar water evaporation,” says lead researcher Guihua Yu, from the University of Texas at Austin.

To give their new contraption a thorough testing, the researchers tried it out at the Dead Sea, which borders Israel, the West Bank, and Jordan. With a salinity of around 34 percent, it’s about ten times as salty as your standard ocean water.

The hydrogel filtering device passed its test with flying colours, producing drinking water from the Dead Sea that met the drinking water standards laid down by the World Health Organisation (WHO) and the US Environmental Protection Agency (EPA).

Possibility of Stopping Hurricanes Using Air Bubbles

As 2017 showed, hurricanes can do immense damage. The effects of climate change will also make hurricanes worse, as warmer air means more water vapor, and more water vapor translates to more superstorms. It’s uncertain how much air bubble technology would actually help, but there may be real benefit in trying it.

Tropical hurricanes are generated when masses of cold and warm air collide. Another essential factor is that the sea surface temperature must be greater than 26.5°C.

“Climate change is causing sea surface temperatures to increase,” says Grim Eidnes, who is a Senior Research Scientist at SINTEF Ocean. “The critical temperature threshold at which evaporation is sufficient to promote the development of hurricanes is 26.5°C. In the case of hurricanes Harvey, Irma and Maria that occurred in the Gulf of Mexico in the period August to September 2017, sea surface temperatures were measured at 32°C”, he says.

So to the big question. Is it possible to cool the sea surface to below 26.5°C by exploiting colder water from deeper in the water column?

[…]

Researchers at SINTEF now intend to save lives by using a tried and tested method called a “bubble curtain”.

The method consists of supplying bubbles of compressed air from a perforated pipe lowered in the water, which then rise, taking with them colder water from deeper in the ocean. At the surface, the cold water mixes with, and cools, the warm surface water.

SINTEF believes that the Yucatan Strait will be an ideal test arena for this technology.

“Our initial investigations show that the pipes must be located at between 100 and 150 metres depth in order to extract water that is cold enough” says Eidnes. “By bringing this water to the surface using the bubble curtains, the surface temperature will fall to below 26.5°C, thus cutting off the hurricane’s energy supply”, he says, before adding that “This method will allow us quite simply to prevent hurricanes from achieving life-threatening intensities”.
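A simple mixing calculation shows what the bubble curtain has to achieve. The 32°C surface reading and the 26.5°C threshold come from the article; the deep-water temperature is an illustrative assumption.

```python
# How much cold deep water must mix into the surface layer to drop it
# below the 26.5 C hurricane threshold? Simple linear mixing model.
# 32 C and 26.5 C are from the article; T_DEEP is an assumed value.

T_SURFACE = 32.0  # C, measured during Harvey, Irma and Maria
T_TARGET = 26.5   # C, threshold below which hurricanes lose their fuel
T_DEEP = 20.0     # C, assumed temperature at 100-150 m depth (illustrative)

# T_mix = f * T_DEEP + (1 - f) * T_SURFACE; solve T_mix <= T_TARGET for f:
f = (T_SURFACE - T_TARGET) / (T_SURFACE - T_DEEP)
print(f"Deep water must make up ~{f:.0%} of the mixed surface layer")  # ~46%
```

Even under these generous assumptions, nearly half the mixed layer would need to come from depth, which hints at the enormous pumping scale the scheme implies.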

Developing Edible QR Codes for Future Medications

Quite a different approach from how medicine is administered today. There will need to be safeguards, however, such as verifying the legitimacy of the scanned data through cryptographic signatures (a sketch of this idea appears below).

For the last 100 years, researchers have constantly pushed the boundaries of our knowledge about medicine and how different bodies can respond differently to it. However, the methods for producing medicine have not yet moved away from mass production: many people with a given illness get the same product with the same amount of the active compound.

This production might soon be a thing of the past. In a new study, researchers from the University of Copenhagen, together with colleagues from Åbo Akademi University in Finland, have developed a new method for producing medicine: they produce a white edible material and print onto it a QR code consisting of a medical drug.

“This technology is promising, because the medical drug can be dosed exactly the way you want it to. This gives an opportunity to tailor the medication according to the patient getting it,” says Natalja Genina, Assistant Professor at the Department of Pharmacy.

Potential for reducing wrong medication and fake medicine

The shape of a QR code also enables storage of data in the “pill” itself.

“Simply doing a quick scan, you can get all the information about the pharmaceutical product. In that sense it can potentially reduce cases of wrong medication and fake medicine,” says Natalja Genina.
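To illustrate the cryptographic safeguard suggested in the introduction to this piece (it is not part of the Copenhagen study), here is a minimal sketch of signing and verifying a QR payload with an Ed25519 signature, using Python’s third-party cryptography package. The payload fields are invented.

```python
# Minimal sketch of cryptographically verifying a QR payload, as suggested
# in the commentary above; not part of the published study.
# Requires the third-party "cryptography" package (pip install cryptography).

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The manufacturer holds the private key; scanning apps ship the public key.
manufacturer_key = Ed25519PrivateKey.generate()
public_key = manufacturer_key.public_key()

# Hypothetical payload printed as the QR code (field names invented).
payload = b'{"drug":"example","dose_mg":5,"batch":"A123","expiry":"2026-01"}'
signature = manufacturer_key.sign(payload)

# A scanner would verify the signature before trusting the contents.
try:
    public_key.verify(signature, payload)
    print("Genuine: payload matches the manufacturer's signature")
except InvalidSignature:
    print("Warning: possible counterfeit or tampered product")
```

In practice the signature would have to be encoded alongside the payload in the printed code, and key distribution would be the hard part, but the scan-then-verify flow is the core of the idea.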

The researchers hope that in the future a regular printer will be able to apply the medical drug in the pattern of a QR code, while the edible material will be produced in advance, allowing on-demand production of the medical drug near end-users.

“If we are successful with applying this production method to relatively simple printers, then it can enable the innovative production of personalized medicine and rethinking of the whole supply chain,” says Professor Jukka Rantanen from the Department of Pharmacy.

The researchers are now working to refine the methods for this medical production.

Black Mirror Warns of Technology Usage Gone Wrong

Technology basically has no moral imperative; it may be used for both good and bad purposes. There’s little inherently good or bad about most technology, as it’s how the technology is used that matters.

That being said, the show Black Mirror provides a number of warnings for a possible dystopian future. It’s a reminder that countries should now be devising ways to ensure technology is used in the public interest.

There’s no real plot in the “Metalhead” episode of the new season of “Black Mirror.” The star of the episode is a small, uncommunicative black robot that walks on all fours and is armed with a pistol stored in its front leg. Who controls the robot, if anyone, is never divulged. The four-legged mechanical creature operates seemingly on its own and for its own purposes. Over the course of the 40-minute episode, it hunts down a woman desperately fleeing through a forest, as she tries in vain to evade its sensors.

For those unfamiliar with the show, “Black Mirror” is a science fiction series on Netflix about a near-future in which new technologies wreak terrible unintended consequences on our lives; they strip away personal independence, undermine our societal values and sometimes let loose uncontrollable violence. As terrifying as they are, the technologies depicted in the show are not outlandish. Like the autonomous robot in “Metalhead,” they reflect easily conceivable, near-term advances upon currently existing technologies, such as drones.

Since the first detonations of atomic bombs in the 20th century, pop culture has been morbidly fascinated by the realization that humanity has developed tools powerful enough to destroy itself. But the malign technologies depicted in “Black Mirror” are more subtle than nuclear weapons. Most of the show’s episodes deal with advances in robotics, surveillance, virtual reality, and artificial intelligence – fields that happen to be key areas for tech companies in the real world. The creators of the series demonstrate how, left unchecked, the internal logic of these new technologies can bring about the destruction of their owners.

“Black Mirror’s” slick production values and acting have won wide critical acclaim. But its social commentary also seems to have struck a nerve with a public that has begun evincing confusion, fear, and alienation over the consequences of new consumer technologies. A 2015 study by Chapman University found that three of the top five fears Americans have were related to the consequences of emerging technologies. The potential of automation to wipe out millions of U.S. jobs and artificial intelligence’s potential to undermine democracy have been well-documented.

[…]

Even if the most dire warnings about rogue artificial intelligence programs destroying humanity never come to pass, we have already sacrificed much of our personal autonomy to technologies whose underlying philosophies were unclear when they were introduced to the public. There is a growing backlash to this kind of corporate authoritarianism. Calls to break up tech companies under federal antitrust laws are increasing, while disillusioned former Silicon Valley executives have become increasingly vocal about the negative social side effects of the programs they helped develop. Technological utopianism is slowly giving way to an acknowledgement that technologies aren’t value-neutral, and it’s the role of a functioning society to govern how they are utilized.

Drastic Inequality Is from Policy, Not Technology Itself

Technology basically has no moral imperative. Policy is what has actually created the disastrous inequalities we often see today.

The most popular explanation for the sharp rise in inequality over the last four decades is technology. The story goes that technology has increased the demand for sophisticated skills while undercutting the demand for routine manual labor.

This view has an advantage over competing explanations, like trade policy and labor market policy: it can be seen as something that happened independent of policy. If trade policy or labor market policy explains the transfer of income from ordinary workers to shareholders and the most highly skilled, then it implies inequality was policy-driven, the result of conscious decisions by those in power. By contrast, if technology was the culprit, we can still feel bad about inequality, but it was something that happened, not something we did.

That view may be comforting for the beneficiaries of rising inequality, but it doesn’t make much sense. While the development of technology may to some extent have its own logic, the distribution of the benefits from technology is determined by policy. Most importantly, who gets the benefits of technology depends in a very fundamental way on our policy on patents, copyrights, and other forms of intellectual property.

To make this point clear, consider how much money Bill Gates, the world’s richest person, would have if Windows and other Microsoft software didn’t enjoy patent or copyright protection. This would mean that anyone anywhere in the world could install this software on their computer, and make millions of copies, without sending Bill Gates a penny.

[…]

The argument for intellectual property is well-known. The government grants individuals and corporations monopolies for a period of time, which allow them to charge well above the free market price for the items on which they have a patent or copyright. This monopoly gives them an incentive to innovate and do creative work.

Of course this is not the only way to provide this incentive. For example, the government can and does pay for much research directly. We spend over $30 billion a year on biomedical research through the National Institutes of Health. Various government departments and agencies finance tens of billions of dollars of research each year in a wide variety of areas. In fact, it was Defense Department research that developed the Internet, among other foundational technologies.

[…]

It is reasonable to debate whether patents and copyrights are the most efficient mechanisms for supporting innovation and creative work. In my book, Rigged: How Globalization and the Rules of the Modern Economy Were Structured to Make the Rich Richer, I argued that in the 21st century they are in fact very inefficient mechanisms for this purpose. But separate from the question of whether these are the best mechanisms, there is no real dispute that intellectual property redistributes money from the people who don’t own it to the people who do. Not many people with just high school degrees own patents or copyrights; they are part of the story of upward redistribution.

Since intellectual property can be either longer and stronger or shorter and weaker, the decision about how much intellectual property we have is implicitly a decision about a trade-off between growth and inequality. (This assumes that longer and stronger IP rules lead to more growth, which is a debatable point, especially since productivity growth has slowed to a crawl in the last decade.) If we are concerned about the degree of inequality in society, one way to address it would be to shorten the duration of patents and copyrights or lessen their scope so that they are less valuable.

That would mean less money for the pharmaceutical industry, the medical equipment industry, and the software industry, as well as many other sectors that disproportionately benefit from IP. Shareholders in these industries would see a hit to their income, as would the top executives and highly educated workers they employ. The rest of the country would see a rise in their income as the prices of a wide range of products fell sharply.

And there is a huge amount of money at stake. We are on a path to spend more than $450 billion this year on prescription drugs alone. If these drugs were sold in a free market without patents or other forms of protection, we would almost certainly pay less than $80 billion. (Imagine the next great cancer drug selling for a few hundred dollars rather than a few hundred thousand dollars.)
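Taking the article’s figures at face value, the implied saving is easy to compute; the population number below is a rough assumption of mine, not a figure from the text.

```python
# Implied savings from the article's figures. The spending numbers come
# from the text; the population figure is a rough assumption.

with_patents = 450e9  # USD/year, projected US prescription drug spending
free_market = 80e9    # USD/year, estimated cost absent patent protection
population = 330e6    # approximate US population (assumption)

savings = with_patents - free_market
print(f"Total: ${savings / 1e9:.0f}B/year, "
      f"~${savings / population:,.0f} per person per year")
# -> Total: $370B/year, ~$1,121 per person per year
```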