Advanced Automation in the Future

Over the last several decades in the U.S., productivity gains have been concentrated in the upper echelon of the income distribution. The general population has largely not shared in them.

[Figure: productivity graph]

Productivity is the average output per hour worked in the economy. It has increased due to technological advances, such as faster computer processing power, and due to workers becoming more efficient at their jobs.
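
As a minimal illustration of the definition, productivity is simply total output divided by total hours worked. The numbers below are made up for the example, not official statistics:

```python
# Hypothetical numbers, purely to illustrate the definition of productivity.
total_output_dollars = 20_000_000_000_000  # total yearly output in dollars (made up)
total_hours_worked = 250_000_000_000       # total yearly hours worked (made up)

productivity = total_output_dollars / total_hours_worked
print(f"Average output per hour: ${productivity:.2f}")  # Average output per hour: $80.00
```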

Stories of robots taking all the jobs appear in the mass media with some regularity. If the robots actually were taking all the jobs, however, it would show up in the data: massive automation implies massive increases in productivity. As it stands, productivity growth has been quite low. Yearly productivity growth was higher in 2003 than it is today, and there has been a slowdown since about 2005. Based on the trend of the last dozen years, then, it is unlikely that we will see dramatic advances in productivity (that is, in automation) over the next several years.

Society should be structured so that, over the next decades, productivity gains are distributed to the general population rather than primarily to upper-middle-class and wealthy people. To a significant degree, this will depend on who owns the technology.

It’s crucial that real care be taken over the rights granted to those who own the most valuable technology. This may well determine whether that technology is a blessing or a curse for humanity.

As one example, suppose the groundbreaking designs for the most highly advanced robotics are developed by a major corporation, which then patents them. The patent would be valuable: the robotics would be far more efficient than anything else on the market, and the patent would allow the corporation to charge much higher prices than would otherwise be possible. This would be good for the minority of people who own and are invested in the company, but it would almost certainly be harmful to the general public.

The case of prescription drugs shows what happens when legal enforcement via patents goes wrong. The United States spent $450 billion on prescription drugs in 2017, an amount that would have been about a fifth as much (representing thousands of dollars per U.S. household in savings) were there no drug patents and a different system of incentives for drug research. One obvious consequence of this disparity is that many people suffer from health ailments because their medications are unnecessarily expensive.
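
A quick back-of-the-envelope calculation shows how the "thousands of dollars per U.S. household" figure follows from those numbers. The household count below is my own rough assumption, not a figure from the source:

```python
# Rough check of the prescription drug figures cited above.
total_spending = 450e9                     # 2017 U.S. prescription drug spending, in dollars
patent_free_spending = total_spending / 5  # "about a fifth as much" without patents
savings = total_spending - patent_free_spending

us_households = 127e6                      # approximate number of U.S. households (assumption)
print(f"Total estimated savings: ${savings / 1e9:.0f} billion")    # $360 billion
print(f"Savings per household: ${savings / us_households:,.0f}")   # about $2,835
```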

As with prescription drugs, a major corporation holding valuable robotics patents may be able to make distribution of the robotics (which could very efficiently perform a wide range of tasks) much more expensive than necessary. If the robotics are too expensive, there will be fewer of them doing efficient labor such as assembling household appliances, and that shortfall would manifest itself as a cost to a great many people.

So instead of the advanced robotics (probably otherwise cheap, since the software and materials they require are low cost) being widely and inexpensively distributed and allowed to automate labor as efficiently as possible, their use could be restricted and made expensive. The control granted by the patents raises another problem: the potentially unaccountable corporation could put the robotics to mostly nefarious ends. Clearly, there need to be public interest solutions to this sort of problem, such as avoiding regressive governmental interventions, considering shared public ownership so that many people receive dividends on the value the technology generates, and implementing sensible regulatory measures.

There are also standards that can be written into law and enforced. A basic story is that if, after advances in automation lower overall labor requirements, the length of the average work year decreases by 20 percent, then about 25 percent more people can be employed. The arithmetic may not always be this straightforward, but it's a basic estimate for consideration.
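
The arithmetic behind that 20 percent / 25 percent estimate is simple, assuming the total amount of work to be done stays roughly the same:

```python
# If the total hours of work stay the same and each person works 20 percent
# fewer hours, the number of people needed rises by a factor of 1 / 0.8 = 1.25.
hours_reduction = 0.20
employment_multiplier = 1 / (1 - hours_reduction)                 # 1.25
extra_employment_pct = (employment_multiplier - 1) * 100
print(f"About {extra_employment_pct:.0f}% more people employed")  # About 25% more people employed
```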

Less time spent working while more people are employed is clearly a good standard for many reasons, particularly in the U.S., where most workers have less leisure time than their counterparts in other wealthy countries. Higher employment may also mean tighter labor markets, which allow workers to win larger real wage gains.

If technology produces higher output, that value will go somewhere in the form of more money. Over recent decades we have seen it concentrated at the top, but it is possible for workers to work shorter hours while receiving similar or even higher pay.

Google Employees Resigning Over Google’s Involvement in Supplying AI to the U.S. Military’s Drone Program

AI used in Project Maven is supposed to decide when humans should be killed by U.S. military drones. But all software has flaws that can be exploited, and the people writing the code the AI uses will have their own biases, which may be horrifying in practice. It's also simply wrong to further amplify the power (and advanced AI adds real power) of a program that has already led to the bombing of civilian weddings on numerous occasions.

About a dozen Google employees have resigned in protest of the tech giant’s involvement in an artificial intelligence (AI) collaboration with the U.S. military, in which Google is participating to develop new kinds of drone technology.

“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew,” one of the workers told Gizmodo. “I realized if I can’t recommend people join here, then why am I still here?”

The resignations follow Google’s failure to alter course despite approximately 4,000 of its employees signing a petition that urges Google to abandon its work with Project Maven, a Pentagon program focused on the targeting systems of the military’s armed drones. The company is reportedly contributing artificial intelligence technology to the program.

U.S. Military Announces Development of Drones That Decide to Kill Using AI

Drone warfare, with its state terrorism causing numerous civilian casualties, is already horrifying enough; this AI drone development would likely be even worse. The announcement also raises the question of how much accountability will be faced by those who write the algorithms that determine how the drones function.

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and trigger vast legal and ethical implications for wider society.

There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process.

At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

[…]

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war.

The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

And this actually points to one possible military and ethical argument by Ronald Arkin, in support of autonomous killing drones. Perhaps if these drones drop the bombs, psychological problems among crew members can be avoided.

The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it.

Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

[…]

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings.

But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident.

Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use.

Companies like Google, its employees or its systems, could become liable to attack from an enemy state.

For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are even darker issues still.

The whole point of the self-learning algorithms the technology uses – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given.

If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed.

In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning.

Polisis AI Developed to Help People Understand Privacy Policies

It looks as though this AI development could be quite useful in helping people avoid the exploitation of their personal information. Someone reading this may also want to look into a resource called Terms of Service; Didn’t Read, which “aims at creating a transparent and peer-reviewed process to rate and analyse Terms of Service and Privacy Policies in order to create a rating from Class A to Class E.”

But one group of academics has proposed a way to make those virtually illegible privacy policies into the actual tool of consumer protection they pretend to be: an artificial intelligence that’s fluent in fine print. Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for “privacy policy analysis”—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about 30 seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kind of data a service collects, where that data could be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal advisor. Together, the researchers hope those tools can unlock the secrets of how tech firms use your data that have long been hidden in plain sight.

[…]

Polisis isn’t actually the first attempt to use machine learning to pull human-readable information out of privacy policies. Both Carnegie Mellon University and Columbia have made their own attempts at similar projects in recent years, points out NYU Law Professor Florencia Marotta-Wurgler, who has focused her own research on user interactions with terms of service contracts online. (One of her own studies showed that only .07 percent of users actually click on a terms of service link before clicking “agree.”) The Usable Privacy Policy Project, a collaboration that includes both Columbia and CMU, released its own automated tool to annotate privacy policies just last month. But Marotta-Wurgler notes that Polisis’ visual and chat-bot interfaces haven’t been tried before, and says the latest project is also more detailed in how it defines different kinds of data. “The granularity is really nice,” Marotta-Wurgler says. “It’s a way of communicating this information that’s more interactive.”

[…]

The researchers’ legalese-interpretation apps do still have some kinks to work out. Their conversational bot, in particular, seemed to misinterpret plenty of questions in WIRED’s testing. And for the moment, that bot still answers queries by flagging an intimidatingly large chunk of the original privacy policy; a feature to automatically simplify that excerpt into a short sentence or two remains “experimental,” the researchers warn.

But the researchers see their AI engine in part as the groundwork for future tools. They suggest that future apps could use their trained AI to automatically flag data practices that a user asks to be warned about, or to automate comparisons between different services’ policies that rank how aggressively each one siphons up and shares your sensitive data.

“Caring about your privacy shouldn’t mean you have to read paragraphs and paragraphs of text,” says Michigan’s Schaub. But with more eyes on companies’ privacy practices—even automated ones—perhaps those information stewards will think twice before trying to bury their data collection bad habits under a mountain of legal minutiae.
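
As a closing note on what this kind of analysis involves at a very high level, here is a toy sketch, not the researchers' actual implementation, of classifying privacy-policy segments into categories such as data collection, third-party sharing, and user choice. The labels and training snippets are invented for demonstration:

```python
# Toy sketch only -- NOT the Polisis implementation. It illustrates the general
# idea of classifying privacy-policy segments into categories; the labels and
# training snippets are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_segments = [
    "We collect your email address and browsing history when you use the service.",
    "Your information may be shared with third-party advertising partners.",
    "You can opt out of data collection at any time in your account settings.",
    "We gather device identifiers and location data to improve our products.",
    "We disclose aggregated data to analytics providers and business partners.",
    "Users may request deletion of their data and opt out of marketing emails.",
]
labels = [
    "data_collection", "third_party_sharing", "user_choice",
    "data_collection", "third_party_sharing", "user_choice",
]

# Bag-of-words features plus a simple linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(training_segments, labels)

new_segment = "We may share your usage data with our advertising partners."
print(classifier.predict([new_segment])[0])  # likely "third_party_sharing"
```

A production system like Polisis would be trained on a large annotated corpus of real privacy policies and would cover far more categories, but the underlying task, mapping policy text to structured labels, is the same.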