Using Spectral Cloaking for Object Invisibility

An example of science fiction becoming science fact. This advance could be used in many different ways, including in digital security, where "out of sight" could also mean "out of mind."


Researchers and engineers have long sought ways to conceal objects by manipulating how light interacts with them. A new study offers the first demonstration of invisibility cloaking based on the manipulation of the frequency (color) of light waves as they pass through an object, a fundamentally new approach that overcomes critical shortcomings of existing cloaking technologies.

The approach could be applicable to securing data transmitted over fiber optic lines and could also help improve technologies for sensing, telecommunications and information processing, researchers say. The concept, theoretically, could be extended to make 3D objects invisible from all directions, a significant step in the development of practical invisibility cloaking technologies.

Most current cloaking devices can fully conceal the object of interest only when the object is illuminated with just one color of light. However, sunlight and most other light sources are broadband, meaning that they contain many colors. The new device, called a spectral invisibility cloak, is designed to completely hide arbitrary objects under broadband illumination.

The spectral cloak operates by selectively transferring energy from certain colors of the light wave to other colors. After the wave has passed through the object, the device restores the light to its original state. Researchers demonstrate the new approach in Optica, The Optical Society’s journal for high impact research.

“Our work represents a breakthrough in the quest for invisibility cloaking,” said José Azaña, National Institute of Scientific Research (INRS), Montréal, Canada. “We have made a target object fully invisible to observation under realistic broadband illumination by propagating the illumination wave through the object with no detectable distortion, exactly as if the object and cloak were not present.”

[…]

While the new design would need further development before it could be translated into a Harry Potter-style, wearable invisibility cloak, the demonstrated spectral cloaking device could be useful for a range of security goals. For example, current telecommunication systems use broadband waves as data signals to transfer and process information. Spectral cloaking could be used to selectively determine which operations are applied to a light wave and which are “made invisible” to it over certain periods of time. This could prevent an eavesdropper from gathering information by probing a fiber optic network with broadband light.

The overall concept of reversible, user-defined spectral energy redistribution could also find applications beyond invisibility cloaking. For example, selectively removing and subsequently reinstating colors in the broadband waves that are used as telecommunication data signals could allow more data to be transmitted over a given link, helping to alleviate logjams as data demands continue to grow. Or, the technique could be used to minimize some key problems in today’s broadband telecommunication links, for example by reorganizing the signal energy spectrum to make it less vulnerable to dispersion, nonlinear phenomena and other undesired effects that impair data signals.

Victory for Privacy as Supreme Court Rules Warrantless Phone Location Tracking Unconstitutional

This is a very important ruling that should serve as a good precedent for technology-based privacy rights in the future.

The Supreme Court handed down a landmark opinion today in Carpenter v. United States, ruling 5-4 that the Fourth Amendment protects cell phone location information. In an opinion by Chief Justice Roberts, the Court recognized that location information, collected by cell providers like Sprint, AT&T, and Verizon, creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” As a result, police must now get a warrant before obtaining this data.

This is a major victory. Cell phones are essential to modern life, but the way that cell phones operate—by constantly connecting to cell towers to exchange data—makes it possible for cell providers to collect information on everywhere that each phone—and by extension, each phone’s owner—has been for years in the past. As the Court noted, not only does access to this kind of information allow the government to achieve “near perfect surveillance, as if it had attached an ankle monitor to the phone’s user,” but, because phone companies collect it for every device, the “police need not even know in advance whether they want to follow a particular individual, or when.”

[…]

Perhaps the most significant part of today’s ruling for the future is its explicit recognition that individuals can maintain an expectation of privacy in information that they provide to third parties. The Court termed that a “rare” case, but it’s clear that other invasive surveillance technologies, particularly those that can track individuals through physical space, are now ripe for challenge in light of Carpenter. Expect to see much more litigation on this subject from EFF and our friends.

Noninvasive Technique to Correct Vision Shows Promise in Early Trials

A potentially safer and more effective solution to a widespread problem.

But, while vision correction surgery has a relatively high success rate, it is an invasive procedure, subject to post-surgical complications and, in rare cases, permanent vision loss. In addition, laser-assisted vision correction surgeries such as laser in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) still use ablative technology, which can thin and in some cases weaken the cornea.

Columbia Engineering researcher Sinisa Vukelic has developed a new non-invasive approach to permanently correct vision that shows great promise in preclinical models. His method uses a femtosecond oscillator, an ultrafast laser that delivers pulses of very low energy at high repetition rate, for selective and localized alteration of the biochemical and biomechanical properties of corneal tissue. The technique, which changes the tissue’s macroscopic geometry, is non-surgical and has fewer side effects and limitations than those seen in refractive surgeries. For instance, patients with thin corneas, dry eyes, and other abnormalities cannot undergo refractive surgery. The study, which could lead to treatment for myopia, hyperopia, astigmatism, and irregular astigmatism, was published May 14 in Nature Photonics.

“We think our study is the first to use this laser output regimen for noninvasive change of corneal curvature or treatment of other clinical problems,” says Vukelic, who is a lecturer in discipline in the department of mechanical engineering. His method uses a femtosecond oscillator to alter biochemical and biomechanical properties of collagenous tissue without causing cellular damage and tissue disruption. The technique allows for enough power to induce a low-density plasma within the set focal volume but does not convey enough energy to cause damage to the tissue within the treatment region.

[…]

“Refractive surgery has been around for many years, and although it is a mature technology, the field has been searching for a viable, less invasive alternative for a long time,” says Leejee H. Suh, Miranda Wong Tang Associate Professor of Ophthalmology at the Columbia University Medical Center, who was not involved with the study. “Vukelic’s next-generation modality shows great promise. This could be a major advance in treating a much larger global population and address the myopia pandemic.”

Vukelic’s group is currently building a clinical prototype and plans to start clinical trials by the end of the year. He is also looking to develop a way to predict corneal behavior as a function of laser irradiation: how the cornea might deform if a small circle or an ellipse, for example, were treated. If researchers know how the cornea will behave, they will be able to personalize the treatment — they could scan a patient’s cornea and then use Vukelic’s algorithm to make patient-specific changes to improve their vision.

“What’s especially exciting is that our technique is not limited to ocular media — it can be used on other collagen-rich tissues,” Vukelic adds. “We’ve also been working with Professor Gerard Ateshian’s lab to treat early osteoarthritis, and the preliminary results are very, very encouraging. We think our non-invasive approach has the potential to open avenues to treat or repair collagenous tissue without causing tissue damage.”

Using Different Phone Notification Settings for Stress Reduction and Productivity Increases

An alternative to the approach usually taken now. This is enough of a problem today to be worth posting about.

After you feel a buzz in your pocket or see a flash on your phone, your attention is already fractured.

You could pick up your phone and see if whatever has called you away is something you really need to address immediately – or you could try to focus on your work, all the while wondering what you’re missing out on.

Since it can take close to 25 minutes to get back on track after a distraction, according to researchers who study productivity, this is obviously a recipe for a distracted day where not much gets done.

Fortunately, we are learning better ways to handle smartphone notifications, according to research being conducted at Duke University’s Center for Advanced Hindsight, which was presented by senior behavioural researcher Nick Fitz at a recent American Psychological Association conference.

The research was conducted in collaboration with the startup Synapse, which is incubated at the Center.

Fitz and collaborators have found that batching notifications into sets that study participants receive three times a day makes them feel happier, less stressed, more productive, and more in control.
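The batching policy can be sketched as a simple scheduler. This is a hypothetical illustration only: the class, the specific delivery times, and the messages below are assumptions for the example, not details from the study.

```python
from datetime import datetime, time

# Three daily delivery slots, modeled on the study's three-times-a-day
# schedule; the specific times are an illustrative assumption.
DELIVERY_TIMES = [time(9, 0), time(13, 0), time(17, 0)]

class NotificationBatcher:
    def __init__(self, delivery_times=DELIVERY_TIMES):
        self.delivery_times = sorted(delivery_times)
        self.queue = []

    def push(self, message):
        # Hold the notification instead of interrupting immediately.
        self.queue.append(message)

    def next_delivery(self, now):
        """Return the next scheduled delivery slot after `now`."""
        for t in self.delivery_times:
            if now.time() < t:
                return t
        return self.delivery_times[0]   # first slot tomorrow

    def flush(self):
        """Deliver the whole batch at once and clear the queue."""
        batch, self.queue = self.queue, []
        return batch

b = NotificationBatcher()
b.push("email: weekly report")
b.push("chat: lunch?")
assert b.next_delivery(datetime(2018, 7, 2, 10, 30)) == time(13, 0)
assert b.flush() == ["email: weekly report", "chat: lunch?"]
assert b.flush() == []
```

The point of the design is that interruptions arrive in a few predictable bursts rather than continuously, which is what the researchers associate with lower stress and a greater sense of control.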

U.S. Military Announces Development of Drones That Decide to Kill Using AI

Drone warfare (with its state terrorism causing numerous civilian casualties) is already horrifying enough — this AI drone development would likely be even worse. This announcement also raises the question of how much accountability those who write the algorithms that determine how the drone functions will face.

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarisation of AI and carry vast legal and ethical implications for wider society.

There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process.

At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

[…]

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war.

The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

And this actually points to one possible military and ethical argument by Ronald Arkin, in support of autonomous killing drones. Perhaps if these drones drop the bombs, psychological problems among crew members can be avoided.

The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it.

Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

[…]

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings.

But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident.

Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use.

Companies like Google, their employees, and their systems could become liable to attack from an enemy state.

For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are even darker issues still.

The whole point of the self-learning algorithms this technology uses – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given.

If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed.

In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning.

New Coating for Devices Would Make Them Much More Resistant to Liquids

Good news for the safety of electronics, especially with regard to their potential exposure to liquids.

Sometimes our phones end up in the toilet bowl, or laptops end up covered in tea. It happens.

But if they were coated with an ‘omniphobic’ material, like the one created by a team of University of Michigan researchers, your devices would be a lot more likely to come out unscathed.

[…]

This everything-proof material works by combining fluorinated polyurethane and fluorodecyl polyhedral oligomeric silsesquioxane (F-POSS).

F-POSS has an extremely low surface energy, which means that things don’t stick to it.

The coating developed by the team stands out from other similar materials because of the clever way these two ingredients work together, forming a more durable product.

“In the past, researchers might have taken a very durable substance and a very repellent substance and mixed them together,” Tuteja said.

“But this doesn’t necessarily yield a durable, repellent coating.”

But these two materials have combined so well that the team ended up with a durable coating that can repel everything – oil, water, or anything else the researchers threw at it.

[…]

Although this all sounds amazing, the coating won’t be available quite yet – F-POSS is rare and expensive right now, although that is changing as manufacturers scale up production, which should lower the cost.

Determining Whether Free on the Internet Makes Someone the Product

“If it’s free on the Internet, you’re the product.” A lot of people have heard that phrase or some variant of it, but few seem to have considered the implications of what it truly means, despite the amount of time they may spend using what’s monetarily free online. Perhaps unlike some other well-known sayings, this one is important for what it represents, and that makes it worth mentioning here.

The phrase implies that something being free online actually carries a cost: the service somehow takes advantage of the user. For example, Facebook’s core services cost no money to use, but using them has always come with the cost of being placed under heavy surveillance by Facebook. This surveillance leaves vast amounts of personal data in the corporation’s control, thereby making it vulnerable to exploitation.

In practice, that abuse of user data has been seen on numerous occasions — recently with the revelations that Cambridge Analytica built psychological profiles on 50 million Facebook users in order to “target their inner demons” and manipulate them with political advertisements. Also relevant are Facebook’s allowing advertisers to unjustly target (discriminate against) people by ethnicity, its experiment manipulating the news feeds of nearly 700,000 users (without their consent) to see how much it could influence user emotions, and the transfer of sensitive Facebook user data to the U.S. government (violating the Fourth Amendment) through the PRISM mass surveillance program, among other corporate misdeeds.

This is of course after Facebook’s CEO and founder said in 2009 that “What the terms say is just, we’re not going to share people’s information except for the people that they’ve asked for it to be shared.” That’s a striking quote considering that the vast majority of people obviously never wanted their information shared with other malicious corporations and the harmful parts of U.S. intelligence agencies.

Thus, avoiding being the product online clearly requires examining what you’re using and whether it’s using you, and if so, then how much. There are times when this is easier to decipher — some services have open source (available for public audit/review) software and others don’t. Even with closed source services though, there’s also more known about some than others — the pervasive surveillance done by Facebook is decently well known, for example.

It should be said, however, that individual users deserve only a limited share of the blame for all of this exploitation. Easily accessible knowledge of the sort in this article should be featured more prominently and put into practice more often, but it’s also important to simply press for the design of systems that limit exploitation far more than is currently the case.

This shouldn’t only be additional options for cautious users, either. As shown repeatedly with the default effect, a large number of users will opt to use the default option that’s open to them, even if it’s considerably flawed compared to an alternative that requires a few extra clicks. It’s therefore important to have mechanisms such as stronger anti-exploitation laws, more resistant technology, and a structure of societal incentives that doesn’t reward abuses (indeed, that is run much less by abuses) anywhere near as much as the current one does.

And from the pharmaceutical corporations that have been shown to have manufactured an opioid crisis through flooding economically downtrodden communities with highly addictive opioids to the labor standards (or lack of them) that allow for the exploitation of many employees, it’s clear that much of current society is built on abusive structures.

For individual users willing to invest some time, though, there are valuable anti-exploitation concepts that can be learned quickly. Knowing how to create stronger passwords (linked to here), how to find resources such as sites that quickly analyze terms of service, and how to do threat modeling can be immensely helpful, and a good investment for the relatively little time it takes to learn. It’s part of what’s needed if society is to be improved and if many more people are to stop being the product online.
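As one small example of such a concept, password strength can be reasoned about with a back-of-the-envelope entropy estimate. This is a rough model that assumes each character or word is chosen uniformly at random; the pool sizes below are illustrative assumptions, not a prescription.

```python
import math

def entropy_bits(pool_size, length):
    """Upper-bound entropy in bits for `length` symbols drawn
    uniformly at random from a pool of `pool_size` choices."""
    return length * math.log2(pool_size)

# An 8-character password drawn from ~95 printable ASCII characters:
short_complex = entropy_bits(95, 8)    # roughly 53 bits

# A 5-word passphrase drawn from a 7776-word Diceware-style list:
passphrase = entropy_bits(7776, 5)     # roughly 65 bits

# A longer random passphrase beats a short "complex" password.
assert passphrase > short_complex
```

The takeaway is that length (of randomly chosen material) matters more than sprinkling in symbols, which is exactly the kind of quick, transferable concept the paragraph above argues is worth the learning time.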