Let the future tell the truth and evaluate each one according to his work and accomplishments. The present is theirs; the future, for which I really worked, is mine. -Nikola Tesla
Skin cancer is the most common cancer in the world, and it’s also the one most easily treated when caught early. Since a blood test is less invasive than a biopsy, this new advance should help convince more people to get tested early on.
It’s a world first. A newly developed blood test is capable of the early detection of melanoma, with over 80 percent accuracy.
It could help save thousands of lives, according to the Australian Edith Cowan University Melanoma Research Group scientists who developed the test.
There’s good news: if caught early, the survival rate for melanoma climbs to 95 percent. But if that early window is missed, survival chances plummet to below 50 percent. This is what the blood test is designed to help prevent.
The blood test, called MelDX, works by detecting the antibodies the body produces as soon as melanoma develops. The team analysed 1,627 different types of antibodies, and narrowed them down to a combination of 10 that indicate the presence of melanoma in the body.
They then took blood from 104 people with melanoma and 105 healthy controls, and found that MelDX was capable of detecting melanoma with 81.5 percent accuracy.
More specifically, it was able to detect the cancer in 79 percent of the patients with melanoma, and it produced false positives in only 16 percent of the healthy participants.
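As a sanity check, the reported overall accuracy follows directly from those figures. A minimal arithmetic sketch (the variable names are mine, not the study’s):

```python
# The study's figures: 104 melanoma patients with 79% detection (sensitivity)
# and 105 healthy controls with a 16% false-positive rate (84% specificity).
n_melanoma, sensitivity = 104, 0.79
n_healthy, false_positive_rate = 105, 0.16

true_positives = sensitivity * n_melanoma                # correctly flagged patients
true_negatives = (1 - false_positive_rate) * n_healthy   # correctly cleared controls
accuracy = (true_positives + true_negatives) / (n_melanoma + n_healthy)
print(f"{accuracy:.1%}")  # → 81.5%
```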
The detection rate may actually be a little higher than the accuracy of skin biopsies, which, according to a 2012 study, was 76 percent in an Australian public hospital.
That’s not a perfect result, but it does provide a starting point before other, more invasive tests are undertaken. In conjunction with current diagnostic techniques, it could improve early diagnosis – and therefore people’s chances of survival.
The next step, the researchers said, will be to take MelDX to clinical trial, which is currently being organised, and which could help refine the test.
“We envision this taking about three years. If this is successful we would hope to be able to have a test ready for use in pathology clinics shortly afterwards,” said Melanoma Research Group head Mel Ziman.
“The ultimate goal is for this blood test to be used to provide greater diagnostic certainty prior to biopsy and for routine screening of people who are at a higher risk of melanoma, such as those with a large number of moles or those with pale skin or a family history of the disease.”
Meanwhile, there are easy ways you can help protect yourself from melanoma and other skin cancers, including wearing sunscreen, staying in the shade during the hottest hours of the day, and avoiding UV tanning beds.
Even as the stock market has boomed over the last decade, a new report finds that these foolish pension fund managers have managed to lose Americans at least $600 billion, an amount roughly equal to $4200 per family.
The associated costs mean that there’s less money to be spent on public goods such as healthcare, libraries, and infrastructure. It would obviously have been much better if these public funds had simply been put into low-cost index funds (instead of hedge funds and private equity firms) that tried to match the market instead of beating it. (That’s usually a better course of action for most people anyway.) Reducing the pay going to those high-income fund managers would have been a clear economic gain for everyone else.
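To illustrate the fee drag that argument rests on, here is a hypothetical sketch. The 7 percent gross return and the 0.05 percent versus 2 percent fee levels are illustrative assumptions of mine, not figures from the report:

```python
# Hypothetical: the same 7% gross annual return over 10 years, eroded by
# a 0.05% management fee (typical of index funds) versus a 2% fee
# (typical of actively managed funds).
def final_value(principal: float, gross_return: float, fee: float, years: int) -> float:
    """Compound the principal at the net (gross minus fee) annual return."""
    return principal * (1 + gross_return - fee) ** years

index_fund = final_value(100_000, 0.07, 0.0005, 10)
active_fund = final_value(100_000, 0.07, 0.02, 10)
print(f"index fund:  ${index_fund:,.0f}")
print(f"active fund: ${active_fund:,.0f}")
print(f"lost to fees: ${index_fund - active_fund:,.0f}")
```

Even a seemingly small fee difference compounds into a large gap over time, which is the core of the case for low-cost index funds.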
The financial industry, though, is mainly an intermediate service, similar to the trucking industry. This means that unlike housing, education, and healthcare, it isn’t really valuable for its own sake. Just as the trucking industry derives its value from its ability to transport goods efficiently, the financial industry derives its value to the general public by allocating capital as efficiently as possible.
There is undeniable evidence that overall, the financial industry has become far less efficient and far more predatory toward most people over the last four decades. There’s good reason to think that the financial sector is currently at least three to four times larger than it should be. Back in the 1970s, the industry accounted for about 0.5 percent of GDP, and it now accounts for about 2.3 percent of GDP. That difference, which diverts money out of the pockets of average workers in wasteful or harmful ways, amounts to at least a few hundred billion dollars a year in the modern economy.
The parallel would be a trucking industry that was (all else equal) three to four times too large: there would be far more trucks than necessary to transport goods, there would be the costs of heavier pollution and of more payroll than necessary, and the people employed in the inefficient parts of the industry could instead be doing more productive work. To keep such intermediate industries from becoming corrupted by excess power, as few resources (labor, oversight, and capital) as possible should be allocated toward them.
The S&L crisis of the 1980s, the stock bubble of the 1990s, and the housing bubble of the 2000s are clear examples of the financial industry allocating capital in ways that were not only inefficient but destructive. All three of those events led to severe economic recessions, the worst being the housing bubble that caused the Great Recession and global economic turmoil. Compare this to the two decades before the 1970s, when stronger New Deal financial regulation was in place and there were no serious crashes, and the difference in efficiency is clear.
In sum, it’s clear that the financial system needs to be seriously reorganized around priorities different from making the wealthiest better off at the expense of everyone else. Even moderate measures such as implementing a relatively minor financial transaction tax, limiting the size of the now-oligopolistic private banks, and expanding cooperative or public banking would be helpful. Until measures like those happen though, the damage will continue, and the world risks that damage eventually compounding yet again into another major disaster.
Apparently this wasn’t thought to be the case 30 years ago. It’s also a bit surprising that some of the differences are driven primarily by repeated experiences.
As with fingerprints, no two people have the same brain anatomy, a study by researchers at the University of Zurich has shown. This uniqueness is the result of a combination of genetic factors and individual life experiences.
The fingerprint is unique in every individual: As no two fingerprints are the same, they have become the go-to method of identity verification for police, immigration authorities and smartphone producers alike. But what about the central switchboard inside our heads? Is it possible to find out who a brain belongs to from certain anatomical features? This is the question posed by the group working with Lutz Jäncke, UZH professor of neuropsychology. In earlier studies, Jäncke had already been able to demonstrate that individual experiences and life circumstances influence the anatomy of the brain.
Experiences make their mark on the brain
Professional musicians, golfers or chess players, for example, have particular characteristics in the regions of the brain which they use the most for their skilled activity. However, events of shorter duration can also leave behind traces in the brain: If, for example, the right arm is kept still for two weeks, the thickness of the brain’s cortex in the areas responsible for controlling the immobilized arm is reduced. “We suspected that those experiences having an effect on the brain interact with the genetic make-up so that over the course of years every person develops a completely individual brain anatomy,” explains Jäncke.
Magnetic resonance imaging provides basis for calculations
To investigate their hypothesis, Jäncke and his research team examined the brains of nearly 200 healthy older people using magnetic resonance imaging three times over a period of two years. Over 450 brain anatomical features were assessed, including very general ones such as total volume of the brain, thickness of the cortex, and volumes of grey and white matter. For each of the 191 people, the researchers were able to identify an individual combination of specific brain anatomical characteristics, whereby the identification accuracy, even for the very general brain anatomical characteristics, was over 90 percent.
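The identification step can be pictured as nearest-neighbour matching of anatomical feature vectors across scanning sessions. The following is a toy simulation with fabricated data, not the study’s actual method or measurements; it only illustrates why a few hundred reasonably stable features are enough to single out an individual:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 191, 450  # mirrors the study's sample and feature count

# Each subject has a stable "anatomical" profile; each scan adds session noise.
profiles = rng.normal(size=(n_subjects, n_features))
scan_a = profiles + 0.3 * rng.normal(size=profiles.shape)
scan_b = profiles + 0.3 * rng.normal(size=profiles.shape)

# Identify each second scan by its nearest neighbour among the first scans.
dists = np.linalg.norm(scan_b[:, None, :] - scan_a[None, :, :], axis=2)
matches = dists.argmin(axis=1)
accuracy = (matches == np.arange(n_subjects)).mean()
print(f"identification accuracy: {accuracy:.0%}")
```

With this many features, within-person scan-to-scan distances are far smaller than between-person distances, so identification succeeds almost every time.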
Combination of circumstances and genetics
“With our study we were able to confirm that the structure of people’s brains is very individual,” says Lutz Jäncke on the findings. “The combination of genetic and non-genetic influences clearly affects not only the functioning of the brain, but also its anatomy.” The replacement of fingerprint sensors with MRI scans in the future is unlikely, however. MRIs are too expensive and time-consuming in comparison to the proven and simple method of taking fingerprints.
Progress in neuroscience
An important aspect of the study’s findings for Jäncke is that they reflect the great developments made in the field in recent years: “Just 30 years ago we thought that the human brain had few or no individual characteristics. Personal identification through brain anatomical characteristics was unimaginable.”
The Federal Reserve is the central bank of the United States. There are many valid criticisms of it as an institution, but it can act as a valuable force for the public interest if used properly.
As is standard for central banks, the Fed has powerful tools that allow it to have a major impact on short-term interest rates. Those interest rates, in turn, are a bread-and-butter matter for workers: they have a major effect on how many people in the economy have jobs.
Raising interest rates has the effect of slowing the economy and keeping people – potentially millions of them, as shown in recent years – from finding jobs. Higher interest rates naturally mean fewer loans to businesses and organizations that could use them to hire more workers. The standard argument for raising interest rates, though, is to control inflation: higher interest rates reduce pressure in the labor market, which leads to workers having less bargaining power for pay increases.
The problem with the inflation-control argument is that inflation has been quite low in recent years, well below the Federal Reserve’s 2.0 percent annual target. The 2.0 percent target is supposed to be an average, and in recent years inflation hasn’t even been near it. The Federal Reserve was set up with a dual mandate: adequately controlling prices and maintaining what’s known as full employment. Full employment basically means a strong labor market with low unemployment, where workers have good access to jobs with fair wages, and it’s quite important as a policy measure. (Due to slow wage growth among most workers, the U.S. economy clearly isn’t at full employment now, contrary to what you’d likely read in the newspapers.)
If unemployment is low, it means that there will be increased demand for labor, which should mean both higher wages for workers and that the economy’s resources are being used decently well. The increased demand for labor raises wages because employers cannot so easily hire other workers if their employees happen to leave the firm. Without enough employees, the firm risks losing profitability to its competition and going out of business. This situation allows an existing employee to say something like “Give me a raise or I will find a job elsewhere,” and it potentially allows a prospective employee to refuse a job unless the wage is adequate.
The increase in worker bargaining power leads to higher wages and then to higher inflation because firms decide to raise prices somewhat after seeing that workers generally can pay more. This description needs nuance, and it should be noted that wages for most workers in the U.S. have been mostly stagnant for decades due to the policy-driven upwards redistribution of income to the wealthy, but it’s a standard point. If policy is actually directed toward the interest of the general public, however, the increases in prices will be more than offset by the increases in wages. There is evidence of this worker-friendly approach doing well in the U.S. from about 1947 to 1973.
It should also be noted that the only times that many American workers have experienced even minor real wage increases in the last four decades have been when there were tighter labor markets. This occurred in the later 1990s and over the past several years, and it points to the immense importance of the Fed keeping interest rates low.
Alan Greenspan was the economist at the head of the Fed in the 1990s, and for whatever reason, he decided against raising interest rates as the unemployment rate fell. This was at a time when it was standard in the economics profession to claim that the unemployment rate couldn’t go below about 6 percent without triggering rapid inflation. Rapid inflation never materialized in the 1990s though, and even if Greenspan deserves serious criticism for failing to contain the housing bubble that was the main cause of the devastating economic crash and Great Recession, he does deserve praise for doing this one thing to help low- and middle-income workers.
Janet Yellen’s tenure as chair of the Fed also saw the institution keep interest rates quite low, which is clearly one of the main reasons that the U.S. economy is doing decently well now in 2018 relative to the last four decades. Driving around America now, someone would see many more help-wanted signs than at almost any other time thus far in the 21st century, and there’s an advantage to this that may not be so obvious: disadvantaged workers (typically those from minority ethnic groups or with disabilities) have an easier time finding jobs. The increased demand for labor means that there’s less room for discrimination against them. As evidence of this, the disability benefits application rate and the unemployment rate for African-Americans have fallen to historically low levels.
The unemployment rate does of course have its flaws. It measures workers looking for jobs, not the number of people who have dropped out of the labor force and are no longer looking for employment. There is plenty of good work that needs to be done, and there are idle hands that want to do it, but the dysfunctional American economy isn’t putting the two together enough. So while the unemployment rate is an important measure, there are other relevant indicators (such as the labor force participation rate among prime-age workers) that should be considered in assessing the economy.
But if the majority of workers benefit most from lower rather than higher interest rates, why does the Fed continue to raise them then? It’s largely because financial institutions exert significant control over the Fed, and their preference is to keep inflation as low as possible. More worker bargaining power via lower interest rates can mean a shift from net corporate profits to wages for workers, and bank loans also stand to depreciate in value with higher inflation.
The after-tax corporate profit share of national income has almost doubled since 2000, and this to a significant extent is because of wages for workers being diverted into corporate profits that are largely pocketed by executives and major shareholders. According to one reputable estimate, if the after-tax corporate profit share was back at its 2000 level, it would translate to nearly $4000 more per U.S. worker in wages, a fact that is undoubtedly quite disturbing.
Since the loans of banks and other financial corporations are typically set at a fixed rate, the repayments of those loans will be worth less to them if inflation rises. For example, if a bank offered a 5 percent home loan while expecting inflation to be 1 percent, the bank would assume it would receive a real interest rate of 4 percent. If the inflation rate actually turns out to be 2 percent, the bank receives only a 3 percent real interest rate, a considerably lower return than it expected.
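Using the numbers from that example, and the simple approximation that the real rate is the nominal rate minus inflation:

```python
# Real interest rate ≈ nominal rate minus inflation (the Fisher approximation).
nominal_rate = 0.05        # the bank's 5% fixed-rate home loan
expected_inflation = 0.01  # what the bank priced in
actual_inflation = 0.02    # what inflation turned out to be

expected_real = nominal_rate - expected_inflation
actual_real = nominal_rate - actual_inflation
print(f"expected real rate: {expected_real:.0%}")  # → 4%
print(f"actual real rate:   {actual_real:.0%}")    # → 3%
```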
In sum though, the issue of the central bank raising interest rates has historically been one that’s favored powerful financial corporations at the expense of the working class, and it’s a very significant issue that should be kept in mind more.
An example of when science fiction becomes science fact. This advance could be used in many different ways, including in digital security, with out of sight possibly meaning out of mind.
Researchers and engineers have long sought ways to conceal objects by manipulating how light interacts with them. A new study offers the first demonstration of invisibility cloaking based on the manipulation of the frequency (color) of light waves as they pass through an object, a fundamentally new approach that overcomes critical shortcomings of existing cloaking technologies.
The approach could be applicable to securing data transmitted over fiber optic lines and could also help improve technologies for sensing, telecommunications and information processing, researchers say. The concept could theoretically be extended to make 3D objects invisible from all directions – a significant step in the development of practical invisibility cloaking technologies.
Most current cloaking devices can fully conceal the object of interest only when the object is illuminated with just one color of light. However, sunlight and most other light sources are broadband, meaning that they contain many colors. The new device, called a spectral invisibility cloak, is designed to completely hide arbitrary objects under broadband illumination.
The spectral cloak operates by selectively transferring energy from certain colors of the light wave to other colors. After the wave has passed through the object, the device restores the light to its original state. Researchers demonstrate the new approach in Optica, The Optical Society’s journal for high impact research.
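A highly simplified, one-dimensional sketch of the idea follows. This is a toy model of the energy bookkeeping only (assumed band edges and a Gaussian spectrum of my choosing), not the actual optics of the device:

```python
import numpy as np

# Toy model: an "object" absorbs one band of a broadband spectrum; the cloak
# moves energy out of that band beforehand and reinstates it afterwards.
wavelengths = np.linspace(400, 700, 301)                  # visible range, nm
spectrum = np.exp(-((wavelengths - 550) / 80) ** 2)       # broadband illumination
blocked = (wavelengths > 520) & (wavelengths < 580)       # band the object absorbs

# Step 1: the cloak shifts the energy in the blocked band aside.
stash = spectrum[blocked].copy()
cloaked = spectrum.copy()
cloaked[blocked] = 0.0

# The object absorbs its band, but there is nothing left there to absorb.
after_object = np.where(blocked, 0.0, cloaked)

# Step 2: the cloak restores the original spectral content.
restored = after_object.copy()
restored[blocked] = stash

print(np.allclose(restored, spectrum))  # → True: no detectable distortion
```

The essential point the toy captures is that the object never interacts with the spectral components it would have distorted, so an observer downstream sees the original wave.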
“Our work represents a breakthrough in the quest for invisibility cloaking,” said José Azaña, National Institute of Scientific Research (INRS), Montréal, Canada. “We have made a target object fully invisible to observation under realistic broadband illumination by propagating the illumination wave through the object with no detectable distortion, exactly as if the object and cloak were not present.”
While the new design would need further development before it could be translated into a Harry Potter-style, wearable invisibility cloak, the demonstrated spectral cloaking device could be useful for a range of security goals. For example, current telecommunication systems use broadband waves as data signals to transfer and process information. Spectral cloaking could be used to selectively determine which operations are applied to a light wave and which are “made invisible” to it over certain periods of time. This could prevent an eavesdropper from gathering information by probing a fiber optic network with broadband light.
The overall concept of reversible, user-defined spectral energy redistribution could also find applications beyond invisibility cloaking. For example, selectively removing and subsequently reinstating colors in the broadband waves that are used as telecommunication data signals could allow more data to be transmitted over a given link, helping to alleviate logjams as data demands continue to grow. Or, the technique could be used to minimize some key problems in today’s broadband telecommunication links, for example by reorganizing the signal energy spectrum to make it less vulnerable to dispersion, nonlinear phenomena and other undesired effects that impair data signals.
The implications of this should be studied more in light of the major antibiotic resistance problem this century. Among other things, the research found that combining vanillin (the compound that gives vanilla its taste) with spectinomycin, an antibiotic that has mostly fallen out of use, increased the antibiotic’s effectiveness.
The effectiveness of antibiotics can be altered by combining them with each other, non-antibiotic drugs or even with food additives. Depending on the bacterial species, some combinations stop antibiotics from working to their full potential whilst others begin to defeat antibiotic resistance, report EMBL researchers and collaborators in Nature on July 4.
In the first large-scale screening of its kind, scientists profiled almost 3000 drug combinations on three different disease-causing bacteria. The research was led by EMBL group leader Nassos Typas.
Overcoming antibiotic resistance
Overuse and misuse of antibiotics has led to widespread antibiotic resistance. Specific combinations of drugs can help in fighting multi-drug resistant bacterial infections, but they are largely unexplored and rarely used in clinics. That is why in the current paper, the team systematically studied the effect of antibiotics paired with each other, as well as with other drugs and food additives in different species.
Whilst many of the investigated drug combinations lessened the antibiotics’ effect, more than 500 combinations improved antibiotic outcomes. A selection of these positive pairings was also tested in multi-drug resistant bacteria isolated from infected hospital patients, and it was found to improve antibiotic effects.
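Interactions in screens like this are commonly scored against the Bliss independence model, where the expected effect of a combination is the product of the single-drug effects. A minimal sketch (a standard approach in such screens; the paper’s exact metric may differ, and the numbers below are hypothetical):

```python
def bliss_score(f_a: float, f_b: float, f_ab: float) -> float:
    """Observed minus expected relative growth under two drugs combined.

    f_a, f_b: growth with each drug alone (1.0 = unaffected, 0.0 = no growth).
    f_ab: growth with both drugs together.
    Negative score -> synergy; positive -> antagonism; near zero -> independence.
    """
    return f_ab - f_a * f_b

# Hypothetical numbers: an antibiotic alone leaves 60% growth, an additive
# alone 90%, but the pair together only 20% -> strong synergy.
print(round(bliss_score(0.6, 0.9, 0.2), 2))  # → -0.34
```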
According to Nassos Typas, combinations of drugs that decrease the effect of antibiotics could also be beneficial to human health. “Antibiotics can lead to collateral damage and side effects because they target healthy bacteria as well. But the effects of these drug combinations are highly selective, and often only affect a few bacterial species. In the future, we could use drug combinations to selectively prevent the harmful effects of antibiotics on healthy bacteria. This would also decrease antibiotic resistance development, as healthy bacteria would not be put under pressure to evolve antibiotic resistance, which can later be transferred to dangerous bacteria.”
This research is the first large-scale screening of drug combinations across different bacterial species in the lab. The compounds used have already been approved for safe use in humans, but investigations in mice and clinical studies are still required to test the effectiveness of particular drug combinations in humans. In addition to identifying novel drug combinations, the size of this investigation allowed the scientists to understand some of the general principles behind drug-drug interactions. This will allow more rational selection of drug pairs in the future and may be broadly applicable to other therapeutic areas.
Who the economies of the world were rigged to benefit most. The latest data confirms the trend of the upwards redistribution of income often seen over the last several decades.
The world’s largest economies have grown at a steady pace and unemployment has consistently fallen in the years following the greed-driven global financial crisis of 2008, but income gains during the so-called recovery have been enjoyed almost exclusively by the top one percent while most workers experience “unprecedented wage stagnation.”
That’s according to the OECD’s 2018 Employment Outlook (pdf) published Wednesday, which examines recent economic trends and finds that wage growth for most citizens in the 35 industrialized nations studied is “missing in action” due to a number of factors, including the rapid rise of temporary low-wage jobs and the relentless corporate assault on unions.
In a statement on Tuesday, OECD Secretary General Angel Gurría said “[t]his trend of wageless growth in the face of a rise in employment highlights the structural changes in our economies that the global crisis has deepened, and it underlines the urgent need for countries to help workers.”