Black Mirror Warns of Technology Usage Gone Wrong

Technology basically has no moral imperative: it can be used for both good and bad purposes. Little about most technology is inherently good or bad; what matters is how it is used.

That being said, the show Black Mirror provides a number of warnings for a possible dystopian future. It’s a reminder that countries should now be devising ways to ensure technology is used in the public interest.

There’s no real plot in the “Metalhead” episode in the new season of “Black Mirror.” The star of the episode is a small, uncommunicative black robot that walks on all fours and is armed with a pistol stored in its front leg. Who controls the robot, if anyone, is never divulged. The four-legged mechanical creature operates seemingly on its own and for its own purposes. Over the course of the 40-minute episode, it hunts down a woman desperately fleeing through a forest, as she tries in vain to evade its sensors.

For those unfamiliar with the show, “Black Mirror” is a science fiction series on Netflix about a near future in which new technologies wreak terrible unintended consequences on our lives; they strip away personal independence, undermine our societal values, and sometimes let loose uncontrollable violence. As terrifying as they are, the technologies depicted in the show are not outlandish. Like the autonomous robot in “Metalhead,” they reflect easily conceivable, near-term advances on currently existing technologies, such as drones.

Since the first detonations of atomic bombs in the 20th century, pop culture has been morbidly fascinated by the realization that humanity has developed tools powerful enough to destroy itself. But the malign technologies depicted in “Black Mirror” are more subtle than nuclear weapons. Most of the show’s episodes deal with advances in robotics, surveillance, virtual reality, and artificial intelligence – fields that happen to be key areas for tech companies in the real world. The creators of the series demonstrate how, left unchecked, the internal logic of these new technologies can bring about the destruction of their owners.

“Black Mirror’s” slick production values and acting have won wide critical acclaim. But its social commentary also seems to have struck a nerve with a public that has begun evincing confusion, fear, and alienation over the consequences of new consumer technologies. A 2015 study by Chapman University found that three of Americans’ top five fears were related to the consequences of emerging technologies. The potential of automation to wipe out millions of U.S. jobs and artificial intelligence’s potential to undermine democracy have been well-documented.

[…]

Even if the most dire warnings about rogue artificial intelligence programs destroying humanity never come to pass, we have already sacrificed much of our personal autonomy to technologies whose underlying philosophies were unclear when they were introduced to the public. There is a growing backlash to this kind of corporate authoritarianism. Calls to break up tech companies under federal antitrust laws are increasing, while disillusioned former Silicon Valley executives have become increasingly vocal about the negative social side effects of the programs they helped develop. Technological utopianism is slowly giving way to an acknowledgement that technologies aren’t value-neutral, and it’s the role of a functioning society to govern how they are utilized.

Drastic Inequality Comes from Policy, Not Technology Itself

Technology basically has no moral imperative. It is policy that has actually created the disastrous inequalities we see today.

The most popular explanation for the sharp rise in inequality over the last four decades is technology. The story goes that technology has increased the demand for sophisticated skills while undercutting the demand for routine manual labor.

This view has an advantage over competing explanations, such as trade policy and labor market policy: it can be seen as something that happened independent of policy. If trade policy or labor market policy explain the transfer of income from ordinary workers to shareholders and the most highly skilled, then inequality was policy-driven, the result of conscious decisions by those in power. By contrast, if technology was the culprit, we can still feel bad about inequality, but it was something that happened, not something we did.

That view may be comforting for the beneficiaries of rising inequality, but it doesn’t make much sense. While the development of technology may to some extent have its own logic, the distribution of the benefits from technology is determined by policy. Most importantly, who gets the benefits of technology depends in a very fundamental way on our policy on patents, copyrights, and other forms of intellectual property.

To make this point clear, consider how much money Bill Gates, the world’s richest person, would have if Windows and other Microsoft software didn’t enjoy patent or copyright protection. This would mean that anyone anywhere in the world could install this software on their computer, and make millions of copies, without sending Bill Gates a penny.

[…]

The argument for intellectual property is well-known. The government grants individuals and corporations monopolies for a period of time, which allow them to charge well above the free market price for the items on which they have a patent or copyright. This monopoly gives them an incentive to innovate and do creative work.

Of course this is not the only way to provide this incentive. For example, the government can and does pay for much research directly. We spend over $30 billion a year on biomedical research through the National Institutes of Health. Various government departments and agencies finance tens of billions of dollars of research each year in a wide variety of areas. In fact, it was Defense Department research that developed the Internet, and Defense Department funding later supported the influential Berkeley releases of the Unix operating system.

[…]

It is reasonable to argue over whether patents and copyrights are the most efficient mechanisms for supporting innovation and creative work. In my book, Rigged: How Globalization and the Rules of the Modern Economy Were Structured to Make the Rich Richer, I argued that in the 21st century they are in fact very inefficient mechanisms for this purpose. But separate from the question of whether these are the best mechanisms, there is no real dispute that intellectual property redistributes money from the people who don’t own it to the people who do. Not many people with just high school degrees own patents or copyrights; intellectual property is thus part of the story of upward redistribution.

Since intellectual property protection can be made either longer and stronger or shorter and weaker, the decision about how much of it we grant is implicitly a decision about a trade-off between growth and inequality. (This assumes that longer and stronger IP rules lead to more growth, which is a debatable point, especially since productivity growth has slowed to a crawl in the last decade.) If we are concerned about the degree of inequality in society, one way to address it would be to shorten the duration of patents and copyrights, or lessen their scope, so that they are less valuable.

That would mean less money for the pharmaceutical industry, the medical equipment industry, and the software industry, as well as many other sectors that disproportionately benefit from IP. Shareholders in these industries would see a hit to their income, as would the top executives and highly educated workers they employ. The rest of the country would see a rise in their income, as the prices of a wide range of products fell sharply.

And there is a huge amount of money at stake. We are on a path to spend more than $450 billion this year on prescription drugs alone. If these drugs were sold in a free market without patents or other forms of protection, we would almost certainly pay less than $80 billion. (Imagine the next great cancer drug selling for a few hundred dollars rather than a few hundred thousand dollars.)
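The scale of that transfer can be checked with quick arithmetic from the article's own figures. A minimal sketch, assuming the $450 billion and $80 billion numbers are taken at face value (they are the article's estimates, not independent data):

```python
# Back-of-the-envelope check of the article's prescription drug figures.
# Both dollar amounts are the article's estimates, not independent data.
spending_with_patents = 450e9  # projected U.S. spending this year, with patent protection
spending_free_market = 80e9    # article's estimate under free-market pricing

# The difference is the implied annual transfer from consumers to patent holders.
implied_transfer = spending_with_patents - spending_free_market
share_saved = implied_transfer / spending_with_patents

print(f"Implied annual transfer: ${implied_transfer / 1e9:.0f} billion")
print(f"Share of current spending: {share_saved:.0%}")
```

On the article's numbers, roughly $370 billion a year, over four-fifths of current drug spending, is attributable to patent protection rather than production costs.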

Research Finds Where the Earliest Signs of Alzheimer’s Occur in the Brain

This discovery has considerable potential for stopping the devastation Alzheimer’s often induces in those who develop the disease.

Researchers at Lund University in Sweden have for the first time convincingly shown where in the brain the earliest signs of Alzheimer’s occur. The discovery could potentially become significant to future Alzheimer’s research while contributing to improved diagnostics.

In Alzheimer’s, the initial changes in the brain occur through retention of the protein β-amyloid (beta-amyloid). The process begins 10–20 years before the first symptoms become noticeable in the patient.

In Nature Communications, a research team headed by Professor Oskar Hansson at Lund University has now presented results showing where in the brain the initial accumulation of β-amyloid occurs. It is in the inner parts of the brain, within one of the brain’s most important functional networks — known as the default mode network.

“A big piece of the puzzle in Alzheimer’s research is now falling into place. We previously did not know where in the brain the earliest stages of the disease could be detected. We now know which parts of the brain are to be studied to eventually explain why the disease occurs,” says Sebastian Palmqvist, associate professor at Lund University and physician at Skåne University Hospital.

The default mode network is one of several networks, each of which has a different function in the brain. It is most active when we are in an awake quiescent state without interacting with the outside world, for example, when daydreaming. The network belongs to the more advanced part of the brain. Among other things, it processes and links information from lower systems.

[…]

The difficulty of determining which individuals are at risk of developing dementia later in life, in order to subsequently monitor them in research studies, has been an obstacle in the research world. The research team at Lund University has therefore developed a unique method to identify, at an early stage, which individuals begin to accumulate β-amyloid and are at risk.

The method combines cerebrospinal fluid test results with PET scan brain imaging. This provides valuable information about the brain’s tendency to accumulate ?-amyloid.

In addition to serving as a roadmap for future research studies of Alzheimer’s disease, the new results also have a clinical benefit:

“Now that we know where Alzheimer’s disease begins, we can improve the diagnostics by focusing more clearly on these parts of the brain, for example in medical imaging examinations with a PET camera,” says Oskar Hansson, professor at Lund University, and medical consultant at Skåne University Hospital.

Although the first symptoms of Alzheimer’s become noticeable to others much later, the current study shows that the brain’s communication activity changes in connection with the early retention of β-amyloid. How this happens, and with what consequences, will be examined by the research team in further studies.

Canadian Agency Similar to the NSA Releases a Malware Analysis Tool to the Public

Acts like this are what intelligence agencies such as the CSE and the NSA are supposed to be about: defending the public. Too often, however, and especially since 2000, these agencies have instead worked against the public by subjecting it to mass surveillance.

Canada’s electronic spy agency says it is taking the “unprecedented step” of releasing one of its own cyber defence tools to the public, in a bid to help companies and organizations better defend their computers and networks against malicious threats.

The Communications Security Establishment (CSE) rarely goes into detail about its activities — both offensive and defensive — and much of what is known about the agency has come from leaked documents obtained by U.S. National Security Agency whistleblower Edward Snowden and published in recent years.

But of late, CSE has acknowledged it needs to do a better job of explaining to Canadians exactly what it does. Today, it is pulling back the curtain on an open-source malware analysis tool called Assemblyline that CSE says is used to protect the Canadian government’s sprawling infrastructure each day.