Using Different Phone Notification Settings to Reduce Stress and Increase Productivity

An alternative approach to what’s usually done now. Notification overload is enough of a problem today to be worth posting about.

After you feel a buzz in your pocket or see a flash on your phone, your attention is already fractured.

You could pick up your phone and see if what’s called you away is something you really need to address immediately – or you could try to focus on your work, all the while wondering what you’re missing out on.

Since it can take close to 25 minutes to get back on track after a distraction, according to researchers who study productivity, this is obviously a recipe for a distracted day where not much gets done.

Fortunately, we are learning better ways to handle smartphone notifications, according to research being conducted at Duke University’s Center for Advanced Hindsight, which was presented by senior behavioural researcher Nick Fitz at a recent American Psychological Association conference.

The research was conducted in collaboration with the startup Synapse, which is incubated at the Center.

Fitz and collaborators have found that batching notifications into sets that study participants receive three times a day makes them feel happier, less stressed, more productive, and more in control.
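As a rough illustration of the batching idea, here is a minimal Python sketch. The three delivery times and the helper functions are hypothetical assumptions for illustration only; the Duke research reports simply that batches arrived three times a day.

```python
from datetime import datetime, time

# Hypothetical batch delivery times; the study batched notifications three
# times a day, but these specific hours are illustrative assumptions.
BATCH_TIMES = [time(9, 0), time(13, 0), time(17, 0)]

pending = []  # notifications held back instead of interrupting immediately


def receive(notification):
    """Queue an incoming notification rather than surfacing it right away."""
    pending.append(notification)


def maybe_deliver(now):
    """Deliver the whole pending batch if a scheduled batch time has arrived."""
    global pending
    if any((now.hour, now.minute) == (t.hour, t.minute) for t in BATCH_TIMES):
        for n in pending:
            print(f"[{now:%H:%M}] {n}")
        pending = []


# Example: messages arrive during the morning but only surface at 13:00.
receive("New email from Alice")
receive("Group chat: 3 new messages")
maybe_deliver(datetime(2018, 8, 20, 13, 0))
```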

U.S. Military Announces Development of Drones That Decide to Kill Using AI

Drone warfare (with its state terrorism causing numerous civilian casualties) is already horrifying enough — this AI drone development would likely be even worse. The announcement also raises the question of how much accountability will fall on those who write the algorithms that determine how the drones function.

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and trigger vast legal and ethical implications for wider society.

There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process.

At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

[…]

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war.

The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

This points to one possible military and ethical argument, made by Ronald Arkin, in support of autonomous killing drones: perhaps if these drones drop the bombs, psychological problems among crew members can be avoided.

The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it.

Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

[…]

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings.

But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident.

Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use.

Companies like Google, their employees, or their systems could become liable to attack from an enemy state.

For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are even darker issues still.

The whole point of self-learning algorithms – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given.

If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed.

In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning.

New Coating for Devices Would Make Them Much More Resistant to Liquid Damage

Good news for the safety of electronics, especially with regard to their potential exposure to liquids.

Sometimes our phones end up in the toilet bowl, or laptops end up covered in tea. It happens.

But if they were coated with an ‘omniphobic’ material, like the one created by a team of University of Michigan researchers, your devices would be a lot more likely to come out unscathed.

[…]

This everything-proof material works by combining fluorinated polyurethane and fluorodecyl polyhedral oligomeric silsesquioxane (F-POSS).

F-POSS has an extremely low surface energy, which means that things don’t stick to it.

The coating developed by the team stands out from other similar materials because of the clever way these two ingredients work together, forming a more durable product.

“In the past, researchers might have taken a very durable substance and a very repellent substance and mixed them together,” said Anish Tuteja, the materials science professor who led the team.

“But this doesn’t necessarily yield a durable, repellent coating.”

But these two materials combined so well that the researchers ended up with a durable coating that can repel everything – oil, water, or anything else they threw at it.

[…]

Although this all sounds amazing, the coating won’t be available quite yet – F-POSS is rare and expensive right now, although that is changing as manufacturers scale up production, which should lower the cost.

Inequality Comes from Policy, Not Technology

The disturbingly popular narrative is that the significant rise in inequality (economic and otherwise) over the last four decades is primarily due to the advancement of technology. Typically it’s claimed that technological progress has significantly raised demand for workers with sophisticated skills while concurrently reducing demand for less-skilled workers. It’s a claim made despite there being no laws of nature that dictate how technology must be used.

Part of the reason this technology-causes-inequality narrative is pushed (overtly or not) is that it presents extreme inequality as inevitable — when in truth it was preventable. Blaming inequality on the existence of technology can be a convenient excuse for those in power, but the reality is that how the benefits of technology are distributed is determined by policy decisions.

That distribution can be shaped through research grants, government spending, and certain subsidies, but it’s largely a matter of policy on copyrights, patents, and other forms of intellectual property.

Patents and copyrights are government interventions in markets. These interventions can be positive or negative, but they are the opposite of the theoretical free market. This is seen notably in how the U.S. government can potentially have someone arrested for violating a patent – there aren’t many forms of government intervention more explicit than that.

The case of Bill Gates (who has long been in at least the top three of the world’s wealthiest people) provides a compelling example here. All else being equal, Gates would be much less rich if he didn’t have copyrights and patents on Windows and other Microsoft software. In that scenario, there would be essentially no monetary cost to downloading and sharing many copies of Microsoft software.

Patents and copyrights clearly have a major impact on the massive upwards redistribution of income driving inequality. Patents in the U.S. have become increasingly expensive to acquire, and they are disproportionately granted to higher-income people. It’s common sense that high school dropouts and poorer people hold far fewer patents than those with graduate degrees and high net worths.

By design, a mechanism such as a patent allows for charging much higher prices than would otherwise be possible, and this often comes with the consequence of higher real prices for most people. Microsoft software could be free to download, but (outside of pirating it) there’s a substantial cost to purchasing it. The revenue from this cost is then in large part transferred to Bill Gates and other major shareholders of Microsoft stock — as in, transferred to wealthy people — and this is simply one example.

Prescription drugs, which in some distant sense can also be thought of as technology, provide another example. Money is transferred upwards to the pharmaceutical corporations that hold the drug patents, in the form of higher costs to consumers. The amount that Americans would save if there were no drug patent monopolies or related unjust protections is substantial, estimated at a few hundred billion dollars a year, or a few thousand dollars per U.S. household.
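A quick back-of-the-envelope calculation shows how the annual figure translates into the per-household figure. The specific savings number and household count below are rough assumptions chosen only to illustrate the arithmetic:

```python
# Illustrative arithmetic only: both inputs are rough assumptions.
annual_savings = 350e9   # an assumed midpoint of "a few hundred billion dollars"
us_households = 128e6    # an approximate count of U.S. households

per_household = annual_savings / us_households
print(f"${per_household:,.0f} per household per year")  # roughly $2,700
```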

There are different and more effective ways to support innovation than the patents and copyrights that regularly function as outdated relics of the medieval guild system. The Internet, for instance, was created through direct funding from the Department of Defense, and the same is true of a surprisingly high number of innovations, which really shows how inefficient relying on patents and copyrights often is. New approaches — such as giving people tax credits to grant to those engaged in creative work, modeled on the tax deduction for charitable donations — need to be tried more instead.

The narrative of blaming technology for inequality also falls apart when the significant impacts of policy are examined in the cases of medical doctors and manufacturing workers, standard examples of workers considered high-skill and low-skill.

In the case of doctors, deliberate policy has led to their wages in the U.S. being on average about twice those of doctors in other wealthy countries. It is illegal to practice medicine in the U.S. without having completed a U.S. residency program. The argument for maintaining a rule such as this is absurd – it’s essentially saying that doctors from places such as Canada and Germany are unqualified because they haven’t gone through a years-long U.S. residency program.

This artificially limited supply of medical doctors in the U.S. – along with the high demand that remains for them – raises their wages while keeping their numbers low. Beyond leaving struggling people around the country with less medical assistance, this amounts to added U.S. healthcare costs of around $100 billion annually, roughly $700 per U.S. household.

For manufacturing workers, policy decisions are again visible in the choices that put them in such harsh, direct competition with low-wage workers overseas, pushing their wages downward. Allowing China’s currency management (which could have been stopped or lessened with better U.S. negotiating) to drive the U.S. trade deficit higher is also one of the main reasons for the secular stagnation (lack of demand) in the economy that has so hurt lower- and middle-income workers.

Thus, the argument that technological advancement has led to the increase in inequality has virtually no basis in reality. Policy, not technology, shows up repeatedly as the real driver of inequality: the under-taxed financial sector, tax loopholes for major corporations, a rigged corporate governance structure that has allowed ridiculous CEO pay levels, labor market policy set against beneficial goals such as full employment, and so on.

One real way to reduce inequality would therefore be to lessen the duration of intellectual property grants in order to reduce the rents received by those possessing them. If longer and stronger patents and copyrights have been associated with more inequality (with evidence finding that they haven’t contributed much to economic growth that benefits the general population), then reversing that trend should be an egalitarian development.

These points about technology really are that important and fundamental. Much of what happens with technology in the near future will be determined by policy, and it’s therefore important for the public to have a clearer general understanding of technology. It’s an integral part of understanding the role of technology in the modern world.

Determining Whether Something Free on the Internet Makes Someone the Product

“If it’s free on the Internet, you’re the product.” A lot of people have heard that phrase or some variant of it, but few seem to have considered the implications of what it truly means, despite the amount of time they may spend using what’s monetarily free online. Unlike some well-known sayings, it captures something important, and that makes it worth mentioning here.

The phrase implies that something being free online actually carries a cost: the service somehow takes advantage of the user. For example, Facebook’s core services cost no money to use, but using them has always come with the cost of being placed under heavy surveillance by Facebook. This surveillance leaves vast amounts of personal data in the corporation’s control, thereby making it vulnerable to exploitation.

In practice, that abuse of user data has been seen on numerous occasions — recently with the revelations that Cambridge Analytica built psychological profiles on 50 million Facebook users in order to “target their inner demons” and wrongly manipulate them with political advertisements. Also relevant are Facebook having allowed advertisers to unjustly target (discriminate against) people by ethnicity, Facebook’s experiment that manipulated the news feeds of nearly 700,000 users (without their consent) to see how much it could influence their emotions, and the transfer of sensitive Facebook user data to the U.S. government (violating the Fourth Amendment) through the PRISM mass surveillance program, among other corporate misdeeds.

This is of course after Facebook’s CEO and founder said in 2009 that “What the terms say is just, we’re not going to share people’s information except for the people that they’ve asked for it to be shared.” That’s a striking quote considering that the vast majority of people obviously never wanted their information shared with other malicious corporations and the harmful parts of U.S. intelligence agencies.

Thus, avoiding being the product online clearly requires examining what you’re using and whether it’s using you, and if so, how much. There are times when this is easier to decipher — some services have open source (available for public audit/review) software and others don’t. Even with closed source services, more is known about some than others — the pervasive surveillance done by Facebook is decently well known, for example.

It should be said, however, that most individual users deserve only limited blame for all of this exploitation. Easily accessible knowledge of the sort in this article should be featured more prominently and implemented more, but it’s also important to simply press for the design of systems that limit exploitation much more than is currently allowed.

This shouldn’t only be additional options for cautious users either. As shown repeatedly with the default effect, a large number of users will often use the default option that’s presented to them, even if it’s considerably flawed compared to an alternative that requires a few extra clicks. It’s therefore important to have mechanisms such as stronger anti-exploitation laws, more resistant technology, and a structure of societal incentives that doesn’t reward abuses (indeed, that is run much less by abuses) anywhere near as much as the current one does.

And from the pharmaceutical corporations that have been shown to have manufactured an opioid crisis through flooding economically downtrodden communities with highly addictive opioids to the labor standards (or lack of them) that allow for the exploitation of many employees, it’s clear that much of current society is built on abusive structures.

For individual users willing to invest some time though, there are valuable anti-exploitation concepts that can be learned quickly. Knowing how to create stronger passwords (linked to here), how to find resources such as sites that quickly analyze terms of service, and how to do threat modeling can be immensely helpful and a good investment for the relatively small amount of time it takes to learn them. It’s part of what’s needed if society is to be improved and if many more people are to stop being the product online.

Understanding the Default Effect and Its Substantial Relevance

The default effect is an observed phenomenon in human psychology describing how many people will usually stick with the default option that’s presented to them. It’s an important and valuable concept to understand because of its widespread use in technology and other consumer products. Google, for example, has paid Firefox hundreds of millions of dollars in the past to have Google’s search engine be the default in Firefox. The Yahoo! corporation has also engaged in a bidding war to have its search engine be the default in Firefox, and the reason these corporations are willing to spend large amounts of money on this basic feature is that their executives understand the power of the default effect in directing substantial amounts of human behavior.

There are plenty of other examples that illustrate the default effect’s relevance, such as Microsoft devoting immense resources to maintaining Windows as the widespread default operating system on many new desktop computers, and Facebook attempting (and fortunately failing) to implement its restrictive Facebook-only Internet service in India. The profits of these corporations are predicated to a significant extent on personal data, and stronger holds over that data allow for potentially higher profits.

Additionally, the default effect’s power is amplified by the human tendency to form habits. Once a habit forms around a given default, the default effect becomes all the stronger.

In Silicon Valley circles there is also talk of “the next billion,” referring to the billions of people who have still not used the Internet. One of the goals of these corporations (whether they admit it or not) is to lock in those users and exclude competitors so that they can take increased advantage of this new share of data. The data (especially the surface data) of much of the already-connected world has been mined and collected by them, so they are now approaching the people who haven’t yet come online. It’s reminiscent of the cigarette industry’s historic and still ongoing attempts to establish brand loyalty among smokers while also hooking people while they’re young.

In sum, the default effect is important to understand in order to more effectively avoid modern technological exploitation. Sharing this sort of insight with others should help them avoid it as well.

Creating Stronger Passwords Easily and Effectively

Making strong passwords is important and will remain so for at least the next several years. Currently, biometric identification and some other forms of authentication usually involve major sacrifices to privacy and/or security and are therefore quite flawed. Police in the U.S. can (under recent court rulings) legally force someone to unlock a device locked with their fingerprint, but they cannot force someone to reveal their password, for example.

There are several ways to create stronger passwords, and before going through them it’s useful to understand what actually makes a password strong.

Passwords are strong primarily based on the degree of randomness they contain. This can be measured through the bits of entropy (or randomness) in the password. Stronger passwords therefore have more bits of entropy, but the problem for humans is that they don’t tend to be very good at generating that randomness on their own.
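As a rough sketch of what entropy means in practice, the standard measure is the base-2 logarithm of the number of equally likely possibilities, which assumes every word or character is chosen uniformly at random. A few lines of Python can compare a dice-generated passphrase with a random-character password:

```python
import math

def passphrase_entropy(num_words, list_size=7776):
    """Bits of entropy for words drawn uniformly from a Diceware-style list."""
    return num_words * math.log2(list_size)

def random_char_entropy(length, alphabet_size=95):
    """Bits of entropy for characters drawn uniformly from the printable keyboard set."""
    return length * math.log2(alphabet_size)

print(passphrase_entropy(7))    # ~90.5 bits for a seven-word passphrase
print(random_char_entropy(8))   # ~52.6 bits for eight truly random characters
```

The key caveat is that these numbers hold only when the words or characters are chosen genuinely at random; a human-picked phrase has far less entropy than its length suggests.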

The Diceware Method

What’s referred to as the Diceware method is a viable way to make strong passwords though. The process is simple: Use a competent pre-selected word list designed to maximize randomness — such as the one from the Diceware site or the one that the EFF maintains — and roll some physical dice. The numbers rolled on the dice correspond to numbered entries on the word list, which determine the words to use.

Rolling 36362 corresponds to the word levy on the Diceware site’s word list and it corresponds to the word lustily on the longer EFF word list, for example. For a strong password, this process should be repeated at minimum six or seven times so that six or seven words are gathered.

So one example could be lustily able jot playmaker those control astute from the EFF word list. This is a strong password, and it could even be made somewhat stronger by placing a space between each of the words, but it’s strong enough if they’re bunched together too. A password like this should be arrived at by rolling physical dice, because the act of rolling the dice actually generates entropy – more entropy than a human would generate by merely selecting words off the list.
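For readers who want the lookup step spelled out, here is a minimal Python sketch of the Diceware procedure. The word list filename and its format are assumptions (the EFF list is distributed as lines of a five-digit roll followed by a word), and the secrets-based roll is only a stand-in; the recommendation of physical dice still stands.

```python
import secrets

def load_wordlist(path="eff_large_wordlist.txt"):
    """Map five-digit dice rolls (e.g. '36362') to words from a downloaded list."""
    words = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                roll, word = parts
                words[roll] = word
    return words

def roll_five_dice():
    """Simulated roll of five six-sided dice; replace with real dice when possible."""
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))

def diceware_passphrase(num_words=7, path="eff_large_wordlist.txt"):
    words = load_wordlist(path)
    return " ".join(words[roll_five_dice()] for _ in range(num_words))

# print(diceware_passphrase())  # seven random words, roughly 90 bits of entropy
```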

The password generated above is also a unique password, and uniqueness complements the strength element of the Diceware method. When there’s a data breach, the passwords of user accounts are often stolen, and stolen passwords are often stored or sold by malicious adversaries. Those stored passwords can then be used to target users who make the mistake of reusing passwords between sites, which of course creates vulnerabilities that users must be cautious about.

There’s also now a website that generates Diceware-type passwords, and it looks legitimate, but I still recommend people roll physical dice when possible instead. Additionally, password managers and other software can also generate strong passwords, and although those might be fine too, I still recommend physically rolling real dice.

Password Best Practices

There’s a variety of incorrect password advice floating around society, such as the myth that passwords need to be changed every 90 days or so for better security. In reality, there’s no need to change passwords every 90 days or even every year as long as they’re strong and the user feels that they haven’t been compromised. In fact, the data shows that forcing people to change passwords every 90 days actually leads to worse password outcomes than if the passwords had merely stayed the same.

It’s also fine to keep passwords written down somewhere as long as they’re in a secure location. Relying on memory alone can cause serious problems if a password is forgotten and an important account or file can no longer be opened. With the state of computer security today, it may actually be superior to have passwords written down instead of stored on computer systems. Someone should also always be especially careful about typing sensitive passwords on computers that aren’t theirs — an unsafe computer could easily contain a keylogger that leads to those passwords being compromised.

And beyond the recommended advice of avoiding the use of the same passwords between sites, it should also be noted that saving passwords in a web browser is a potentially unwise gamble. Browsers often contain sandboxing security features these days and are therefore better than they used to be, but since they have their own share of vulnerabilities, I would at least recommend against saving the most important passwords in browsers. A stolen computer with an unlocked web browser containing valuable passwords is an easy compromise. It should be obvious that the added convenience sometimes isn’t worth the added risk.

Summary and Notes

Use of the Diceware method is thus a viable way to create strong passwords in a world of regular data breaches and often inadequate computer security systems. It can take mere seconds for an attacker to use a brute force program and figure out a typical password, and recent research showing that even basic password guidance can have significant benefits makes enlightening others about creating stronger passwords all the more important.
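To give a sense of scale, here is an illustrative comparison under an assumed attacker speed; the guess rate below is a stand-in for a well-resourced offline cracking setup, not a figure from the research linked above.

```python
# All figures are assumptions for illustration; real attacker speeds vary widely.
GUESSES_PER_SECOND = 1e10

search_spaces = {
    "typical human-chosen password": 1e6,   # effectively a small dictionary
    "8 fully random characters": 95 ** 8,
    "7-word Diceware passphrase": 7776 ** 7,
}

for label, space in search_spaces.items():
    seconds = space / GUESSES_PER_SECOND
    print(f"{label}: about {seconds:.3g} seconds to exhaust")

# The first falls in a fraction of a second, the second within about a week,
# and the third would take on the order of billions of years.
```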

Users who lack high threat models (sophisticated adversaries such as elite government agencies and large corporations) should also consider using password managers. Password managers are software that store other passwords behind one strong master password, and while I am personally ambivalent about them, I recognize that they can be helpful for many users. The saying about keeping all of one’s eggs in one basket should be considered, however, along with individual circumstances.

Also, really strong security practices often require more than passwords. Use of good two-factor authentication can significantly amplify security.

In the interest of avoiding technical jargon and potentially complicated mathematics, this article was simplified to enhance the clarity of the basic ideas. For skeptical users questioning this article’s claims, I have provided extensive hyperlinks here that reveal the sources and data used. Digital security is important and can be tricky, so users should proceed with caution.