Using a microscope fitted to a bellows camera, Wilson Alwyn Bentley was the first to photograph a single snowflake. http://met.org/2eAdK4e
Tuesday, 27 December 2016
Saturday, 24 December 2016
Humanity Needs Universal Basic Income in Order to Stop Impeding Progress
04/05/2016 01:24 pm ET | Updated Apr 05, 2016
What an interesting place and an interesting time it is for a visit. Earth’s most intelligent primates are busy creating technologies that allow them all to do less work, freeing themselves from millennia of senseless toil and drudgery. Strangely, however, they are using such technologies to force each other to work longer and harder. In one area called the United States, responsible for so much of the world’s technological innovation, at a time when productivity has never been higher, the number of hours spent working for others in exchange for the means to live is now just shy of 50 hours per week, where it was once 40 and was once expected to fall to 20 on its way toward zero.
Humans are even performing work that doesn’t actually need to be done at all, even by a machine. One of the craziest examples of such completely unnecessary work is in Europe, where an entire fake economic universe has been created under the label of “Potemkin companies” like Candelia.
Candelia was doing well. Its revenue that week was outpacing expenses, even counting taxes and salaries... but in this case the entire business is fake. So are Candelia’s customers and suppliers, from the companies ordering the furniture to the trucking operators that make deliveries. Even the bank where Candelia gets its loans is not real. More than 100 Potemkin companies like Candelia are operating today in France, and there are thousands more across Europe... All these companies’ wares are imaginary.
Incredibly, human beings are waking up early in the mornings to drive to offices to perform imaginary business in imaginary markets involving imaginary customers using imaginary money to buy imaginary goods and services instead of simply enjoying their non-imaginary and most definitely real lives with each other.
Another example of humans inventing excuses for more work, one which may come as a surprise, is firefighting, a profession that, thanks to technology, has fewer and fewer fires to fight:
On highways, vehicle fires declined 64 percent from 1980 to 2013. Building fires fell 54 percent during that time. When they break out, sprinkler systems almost always extinguish the flames before firefighters can turn on a hose. But oddly, as the number of fires has dropped, the ranks of firefighters have continued to grow — significantly. There are half as many fires as there were 30 years ago, but about 50 percent more people are paid to fight them.
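To see how stark the mismatch is, consider a back-of-the-envelope check that uses only the two figures quoted above, plus an illustrative baseline of 100 fires and 100 firefighters three decades ago (the baseline numbers are assumptions made for the sake of the arithmetic, not figures from the report):

\[
\frac{1.5 \times 100\ \text{firefighters}}{0.5 \times 100\ \text{fires}} = \frac{150}{50} = 3\ \text{firefighters per fire, versus}\ \frac{100}{100} = 1\ \text{thirty years ago.}
\]

In other words, the number of firefighters per fire has roughly tripled.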
How can this be? If there are far fewer fires, why are there far more firefighters? The short answer is something called labor unions, which at some point just up and stopped fighting to reduce hours worked. But why? The reason labor unions now fight so hard to keep humans laboring is that humans require each other to work in order to obtain the resources required to live happy lives, or even to live at all.
Here lies the greatest obstacle to human progress — the longstanding connection between work and income. As long as everything is owned, and the only way to obtain access to that which is owned is through money, and the only way to obtain money is to be born with it or to do the bidding of someone who owns enough to do the ordering around — what humans call a “job” — then jobs can’t be eliminated. Workers must fight any attempt to eliminate their jobs, and business owners who do eliminate jobs must walk a fine line between greater efficiency and public outcry. The elimination of vast swathes of jobs must be avoided unless seen as absolutely necessary, so as not to anger too many people who may also be customers.
Nowhere is the above more clear than in two recent pieces of news: Google’s announcement that Boston Dynamics is up for sale, and Johnson & Johnson’s announcement that the Sedasys machine would be discontinued.
Atlas Shrugged Off by Google
You probably already saw it, as over ten million others did within days of it being posted to YouTube, but the demonstration video of the new version of Atlas from the robotics team at Boston Dynamics was a stunning display of engineering that shocked the world. Similar to the victory of the AI AlphaGo over world champion human Go player Lee Sedol just weeks later, it dumbfounded people with the realization of how quickly technology is advancing.
People naturally saw with their own eyes how close they are to having robots fully capable of doing physical tasks previously thought to be decades down the road, and the result was a discussion sprinkled with more than a bit of human panic rooted in entirely legitimate fears of income insecurity. This ended up being a discussion Google had no interest in, and so Boston Dynamics is now up for sale. To be fair, Google already wanted to sell BD, but leaked emails do show concerns about negative PR as a direct result of advanced robotics:
In yet more emails wrongly published to wider Google employees, Courtney Hohne, a spokeswoman for Google X, wrote: “There’s excitement from the tech press, but we’re also starting to see some negative threads about it being terrifying, ready to take humans’ jobs ... We’re not going to comment on this video because there’s really not a lot we can add, and we don’t want to answer most of the questions it triggers.”
Google wants to advance technology, but at the same time it doesn’t want to answer the questions those advancements will raise. This is a clear example of a major obstacle to human progress. It’s likely the same reason companies like McDonald’s haven’t dived in with both feet to greatly automate their operations and vastly reduce their labor forces. The technology exists, but they aren’t doing it. Why?
Perhaps it’s because, as long as people need jobs as their sole source of income, companies risk stepping onto a public relations landmine by automating those jobs out of existence, or by being seen as responsible for others doing so. Eliminating jobs means cutting not only employees, but demand itself.
Putting humans out of work should be a public relations win, not a loss, and so mankind needs to make sure no one left without a job, for any amount of time, is ever unable to meet their most basic needs. Everyone needs a non-negotiable guarantee of income security, so that the elimination of jobs breeds not fear, but excitement. The loss of a job should be seen as an opportunity for new real choices. And so some amount of basic income should be guaranteed to everyone — universally — as a starting point upon which all can earn additional income.
However, negative PR is just one obstacle along the road to full automation. Another obstacle is something originally devised to make sure employed humans had some amount of bargaining power, so as to not be walked all over by those who employed them, and that’s the forces of organized labor. In an unfortunate turn of events, that which once helped drive prosperity is beginning to hold it back. Organized labor is organizing to perpetuate the employment that tech labor is working to eliminate.
Organized labor in the form of taxi driver unions has set cars on fire in France in protest of the labor disruptions created by Uber. Fast food workers in the US are busy organizing new unions, the goal of which is not to make sure fast food restaurants invest heavily in automation and thereby free workers from such work. None of this, however, compares to what an organized group of anesthesiologists just did.
Doctors Pulling Plugs
The American Society of Anesthesiologists just killed the first machine to come along capable of eliminating a great deal of the need for anesthesiologists — the Sedasys. It was a machine capable not only of performing the same work, but of doing it at one-tenth the cost. It was a machine that some innovative humans invented to make becoming healthier far less costly for all humans, over 90% less costly in fact. And another group of humans saw that as competition, so they pressed the abort button.
No longer did you need a trained anesthesiologist. And sedation with the Sedasys machine cost $150 to $200 for each procedure, compared to $2,000 for an anesthesiologist, one of healthcare’s best-paid specialties. The machine was seen as the leading lip of an automation wave transforming hospitals. But Johnson & Johnson recently announced it was pulling the plug on Sedasys because of poor sales.
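A quick check of those figures, using only the per-procedure prices quoted in the passage above, confirms the “one-tenth the cost” and “over 90% less costly” claims:

\[
\frac{\$150}{\$2{,}000} = 7.5\%, \qquad \frac{\$200}{\$2{,}000} = 10\%,
\]

so sedation by machine cost roughly 7.5–10% of the price of an anesthesiologist, a saving of about 90–92.5% per procedure.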
So what caused the poor sales if the device could do so much more for so much less?
Sedasys was never welcomed by human anesthesiologists. Before it even hit the market, the American Society of Anesthesiologists campaigned against it, backing down only once the machine’s potential uses were limited to routine procedures such as colonoscopies. The Post’s story back in May provoked an outpouring of messages from anesthesiologists and nurse anesthetists who claimed a machine could never replicate a human’s care or diligence. Many sounded offended at the notion that a machine could do their job.
The proverbial plug was pulled on a life-saving new technology because a well-paid group of humans saw it in their own best interests to fight against its use to do their work for them.
Pretend for a moment that what was invented was a tractor, and that the makers of the tractor had to stop making them because of the power of a group of oxen who were offended by the claim that a tractor could ever replicate an ox’s care or diligence.
Imagine it was an elevator, and the American Society of Elevator Attendants was offended by the idea of everyone simply pushing buttons to operate elevators without the paid help of any attendant. Would all of human society be better off right now with every elevator being operated by a paid attendant?
Or imagine that back in the day, trains were upgraded from coal-based steam engines to today’s diesel engines, and railroad unions fought and won to keep the position of coal-shovelers so that there’d be a job for people on trains doing absolutely nothing for the next 60 years. Believe it or not, that one actually happened.
Such thinking is not progress. It’s regress. Humans have the ideas of work and income so tied up in their minds that even though they’ve now successfully reached the point where toil is no longer necessary to survive on Earth, they are demanding their toil not be lifted off their shoulders.
Humans are actually demanding that machines not do their work for them. Humans are creating work that does not need to be done, and perhaps worst of all, they are continuing extinction-endangering work like coal mining that should have been stopped decades ago for the good of the species.
Cutting the Cord
To put an end to all this nonsense, it seems in humanity’s best interests to finally sever the self-imposed connection between work and access to the common planetary resources required for life. For as long as humans must toil to live, they will toil for life.
Unemployment is not a disease. It’s the opposite. Employment is the malady and automation is the cure. It is the job of machines to handle as much work for humans as possible, so as to free them to pursue that which each and every individual human being most wishes to pursue. That pursuit may be work or it may be leisure. That pursuit may be knowledge or it may be play. That pursuit may be companionship or it may be solitude. Whatever it may be, the goal is happiness and the pursuit itself self-motivated, the journey its own reward.
So when those like Robert Reich say “There are still a lot of jobs” before suggesting mankind may not yet be ready for universal basic income, but soon most definitely will be, perhaps humans should ask if not having a basic income is actually part of the reason there are any jobs still left for humans. Perhaps it’s the insistence on the existence of jobs that creates jobs, whether they need to exist or not.
As humans drive forward into the future, they may just have their foot on the brakes and the accelerator at the same time. If so, is this in the best interests of humanity? Why not instead stop pressing the brakes by adopting basic income immediately, so as to fully accelerate into an increasingly automated future of increasing abundance and victory over scarcity? That seems to make a lot more sense than perpetuating — and even artificially creating — scarcity.
But then again, these are simply the thoughts of a tourist, in observance of life on the third planet from an average yellow star in a somewhat ordinary spiral galaxy. Pay me little mind if you choose. I’m just passing through on the suggestion this place is incredibly entertaining in all its grand backwardness.
________________________________________________
Want to help? You can take this survey about basic income or sign this petition to the President and Congress for a basic income for all, or donate your time or money to Basic Income Action, a non-profit organization founded to transform basic income from idea to reality. You can also support articles like this by sharing them.
Sunday, 16 October 2016
Scientists say Google is changing our brains
The internet has changed every part of our lives. Now scientists say it’s even changing our brains
Image: REUTERS/Darren Staples
Written by Stéphanie Thomson, Editor, World Economic Forum
Published: Thursday 6 October 2016
Back in the pre-internet days, if someone asked you a tricky question, you had a couple of options. You could see if anyone you knew had the answer. You could pull out an encyclopedia. Or you could head down to the library to carry out research. Whichever one you opted for, it was almost certainly more complicated and time-consuming than what you’d do today: Google it.
Thanks to technology – and the internet in particular – we no longer need to depend on our sometimes unreliable memories for random facts and pieces of information. Think about it: when was the last time you bothered to memorize someone’s phone number? And what’s the point in learning the spelling of that long, complicated word when autocorrect will pick it up for you?
But with all the knowledge we could ever need at our fingertips, are we outsourcing our memory to the internet?
Our virtual brain
We are indeed, according to recent research. The latest study, from academics at the universities of California and Illinois, found that our increasing reliance on the internet is transforming the way we think and remember.
In the study, two groups of people were asked to answer a set of trivia questions. Those in the first group were told to use only their memories, while the others had to look up the answers online. Both groups were then asked a set of easier questions and given the option of using the internet. Those who had used the internet the first time round were much more likely to do so again.
Not only were they more likely to refer to the internet, they were quicker to do so, making very little attempt to figure out the answer themselves, even when the questions were relatively simple.
All of this is evidence of a trend the researchers refer to as “cognitive offloading”. It has become so easy to just look something up online, we’re giving up even trying to remember certain things.
“Whereas before we might have tried to recall something on our own, now we don’t bother. As more information becomes available via smartphones and other devices, we become progressively more reliant on it in our daily lives,” Benjamin Storm, the study’s lead author, said.
How the internet changes our brains
This latest study builds on existing research that suggests the internet isn’t just changing how we live and work – it’s actually altering our brains.
For anyone familiar with the work of neuroscientist Michael Merzenich, this won’t come as a surprise. After all, that’s what our brain is made to do. “It’s constructed for change. It’s all about change,” he explains in his popular TED talk.
The more important question, then, is whether or not this is a good thing. “It seems pretty clear that memory is changing,” Storm told us. “But is it changing for the better? At this point, we don’t know.”
Indeed, opinion seems divided as to whether this is a positive or negative development.
Some argue that by removing the need for rote learning – a system under which we were forced to memorize dates, names and facts – the internet has helped free up cognitive resources for other, more important things.
Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, isn’t so optimistic. By relying on the internet as an external hard drive for our memory, we are losing the ability to transfer the facts we hear and read on a daily basis from our working memory to our long-term one – something Carr describes as “essential to the creation of knowledge and wisdom”.
“Dozens of studies by psychologists, neurobiologists and educators point to the same conclusion: when we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning,” he writes.
From post-it notes to iPhones
While much more research into the consequences of this remains to be done, perhaps the change isn’t as significant as we might think. After all, as technology writer Clive Thompson points out, we’ve actually been outsourcing our memory for a long time.
“Humanity has always relied on coping devices to handle the details for us. We’ve long stored knowledge in books and on paper and post-it notes.”
It’s just that today, we turn to more sophisticated tools for that helping hand. “You can stop worrying about your iPhone moving your memory outside your head. It moved out a long time ago,” Thompson says.
And for Storm and the team of researchers behind this latest study, that might not be such a bad thing. “In the end I’m fairly optimistic. I think the internet (and technology more generally) is going to greatly expand the capabilities of the human mind.”
________________________________________________
The views expressed in this article are those of the author alone and not the World Economic Forum
Sunday, 28 August 2016
Forget ideology, liberal democracy’s newest threats come from technology and bioscience: John Naughton in "The Guardian"
A groundbreaking book by historian Yuval Harari claims that artificial intelligence and genetic enhancements will usher in a world of inequality and powerful elites. How real is the threat?
What price humanity when consciousness is no longer required? Photo: Alamy
The BBC Reith Lectures in 1967 were given by Edmund Leach, a Cambridge social anthropologist. “Men have become like gods,” Leach began. “Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid.”
That was nearly half a century ago, and yet Leach’s opening lines could easily apply to today. He was speaking before the internet had been built and long before the human genome had been decoded, and so his claim about men becoming “like gods” seems relatively modest compared with the capabilities that molecular biology and computing have subsequently bestowed upon us. Our science-based culture is the most powerful in history, and it is ceaselessly researching, exploring, developing and growing. But in recent times it seems to have also become plagued with existential angst as the implications of human ingenuity begin to be (dimly) glimpsed.
The title that Leach chose for his Reith Lecture – A Runaway World – captures our zeitgeist too. At any rate, we are also increasingly fretful about a world that seems to be running out of control, largely (but not solely) because of information technology and what the life sciences are making possible. But we seek consolation in the thought that “it was always thus”: people felt alarmed about steam in George Eliot’s time and got worked up about electricity, the telegraph and the telephone as they arrived on the scene. The reassuring implication is that we weathered those technological storms, and so we will weather this one too. Humankind will muddle through.
But in the last five years or so even that cautious, pragmatic optimism has begun to erode. There are several reasons for this loss of confidence. One is the sheer vertiginous pace of technological change. Another is that the new forces loose in our society – particularly information technology and the life sciences – are potentially more far-reaching in their implications than steam or electricity ever were. And, thirdly, we have begun to see startling advances in these fields that have forced us to recalibrate our expectations.
A classic example is the field of artificial intelligence (AI), defined as the quest to enable machines to do things that would require intelligence if performed by a human. For as long as most of us can remember, AI in that sense was always 20 years away from the date of prediction. Maybe it still is. But in the last few years we have seen that the combination of machine learning, powerful algorithms, vast processing power and so-called “Big Data” can enable machines to do very impressive things – real-time language translation, for example, or driving cars safely through complex urban environments – that seemed implausible even a decade ago.
And this, in turn, has led to a renewal of excited speculation about the possibility – and the existential risks – of the “intelligence explosion” that would be caused by inventing a machine that was capable of recursive self-improvement. This possibility was first raised in 1965 by the British cryptographer IJ Good, who famously wrote: “The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” Fifty years later, we find contemporary thinkers like Nick Bostrom and Murray Shanahan taking the idea seriously.
There’s a sense, therefore, that we are approaching another “end of history” moment – but with a difference. In his famous 1989 article, the political scientist Francis Fukuyama argued that the collapse of the Soviet empire meant the end of the great ideological battle between east and west and the “universalisation of western liberal democracy as the final form of human government”. This was a bold, but not implausible, claim at the time. What Fukuyama could not have known is that a new challenge to liberal democracy would eventually materialise, and that its primary roots would lie not in ideology but in bioscience and information technology.
For that, in a nutshell, is the central argument of Yuval Noah Harari’s new book, Homo Deus: A Brief History of Tomorrow. In a way, it’s a logical extension of his previous book, Sapiens: A Brief History of Humankind, which chronicled the entire span of human history, from the evolution of Homo sapiens up to the political and technological revolutions of the 21st century, and deservedly became a world bestseller.
Most writers on the implications of new technology focus too much on the technology and too little on society’s role in shaping it. That’s partly because those who are interested in these things are (like the engineers who create the stuff) determinists: they believe that technology drives history. And, at heart, Harari is a determinist too. “In the early 21st century,” he writes in a striking passage, “the train of progress is again pulling out of the station – and this will probably be the last train ever to leave the station called Homo sapiens. Those who miss this train will never get a second chance. In order to get a seat on it, you need to understand 21st century technology, and in particular the powers of biotechnology and computer algorithms.”
He continues: “These powers are far more potent than steam and the telegraph, and they will not be used mainly for the production of food, textiles, vehicles and weapons. The main products of the 21st century will be bodies, brains and minds, and the gap between those who know how to engineer bodies and brains and those who do not will be wider than the gap between Dickens’s Britain and the Mahdi’s Sudan. Indeed, it will be bigger than the gap between Sapiens and Neanderthals. In the 21st century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.”
This looks like determinism on steroids. What saves it from ridicule is that Harari sets the scientific and technological story within an historically informed analysis of how liberal democracy evolved. And he provides a plausible account of how the defining features of the liberal democratic order might indeed be upended by the astonishing knowledge and tools that we have produced in the last half-century. So while one might, in the end, disagree with his conclusions, one can at least see how he reached them.
In a way, it’s a story about the evolution and nature of modernity. For most of human history, Harari argues, humans believed in a cosmic order. Their world was ruled by omnipotent gods who exercised their power in capricious and incomprehensible ways. The best one could do was to try to placate these terrifying powers and obey (and pay taxes to) the priesthoods who claimed to be the anointed intermediaries between mere humans and gods. It may have been a dog’s life but at least you knew where you stood, and in that sense belief in a transcendental order gave meaning to human lives.
But then came science. Harari argues that the history of modernity is best told as a struggle between science and religion. In theory, both were interested in truth – but in different kinds of truth. Religion was primarily interested in order, whereas science, as it evolved, was primarily interested in power – the power that comes from understanding why and how things happen, and enables us to cure diseases, fight wars and produce food, among other things.
In the end, in some parts of the world at least, science triumphed: belief in a transcendental order was relegated to the sidelines – or even to the dustbin of history. As science progressed, we did indeed start to acquire powers that in pre-modern times were supposed to be possessed only by gods (Edmund Leach’s point). But if God was dead, as Nietzsche famously said, where would humans find meaning? “The modern world,” writes Harari, “promised us unprecedented power – and the promise has been kept. Now what about the price? In exchange for power, the modern deal expects us to give up on meaning. How did humans handle this chilling demand? ... How did morality, beauty and even compassion survive in a world devoid of gods, of heaven or hell?”
The answer, he argues, was in a new kind of religion: humanism – a belief system that “sanctifies the life, happiness and power of Homo sapiens”. So the deal that defined modern society was a covenant between humanism and science in which the latter provided the means for achieving the ends specified by the former.
And our looming existential crisis, as Harari sees it, comes from the fact that this covenant is destined to fall apart in this century. For one of the inescapable implications of bioscience and information technology (he argues) is that they will undermine and ultimately destroy the foundations on which humanism is built. And since liberal democracy is constructed on the worship of humanist goals (“life, liberty and the pursuit of happiness” by citizens who are “created equal”, as the American founders put it), then our new powers are going to tear liberal democracy apart.
How come? Well, modern society is organised round a combination of individualism, human rights, democracy and the free market. And each of these foundations is being eaten away by 21st-century science and technology. The life sciences are undermining the individualism so celebrated by the humanist tradition with research suggesting that “the free individual is just a fictional tale concocted by an assembly of biochemical algorithms”. Similarly with the idea that we have free will. People may have freedom to choose between alternatives but the range of possibilities is determined elsewhere. And that range is increasingly determined by external algorithms as the “surveillance capitalism” practised by Google, Amazon and co becomes ubiquitous – to the point where internet companies will eventually know what your desires are before you do. And so on.
Here Harari ventures into the kind of dystopian territory that Aldous Huxley would recognise. He sees three broad directions.
1. Humans will lose their economic and military usefulness, and the economic system will stop attaching much value to them.
2. The system will still find value in humans collectively but not in unique individuals.
3. The system will, however, find value in some unique individuals, “but these will be a new race of upgraded superhumans rather than the mass of the population”.
By “system”, he means the new kind of society that will evolve as bioscience and information technology progress at their current breakneck pace. As before, this society will be based on a deal between religion and science but this time humanism will be displaced by what Harari calls “dataism” – a belief that the universe consists of data flows, and the value of any entity or phenomenon is determined by its contribution to data processing.
Personally, I’m not convinced by his dataism idea: the technocratic ideology underpinning our current obsession with “Big Data” will eventually collapse under the weight of its own absurdity. But in two other areas, Harari is exceedingly perceptive. The first is that our confident belief that we cannot be superseded by machines – because we have consciousness and they cannot have it – may be naive. Not because machine consciousness will be possible, but because for Harari’s dystopia to arrive, consciousness is not required. What the dystopia requires is machines that are super-intelligent: intelligence is necessary; consciousness is an optional extra which in most cases would simply be a nuisance. Its absence is therefore not a showstopper for AI development.
The second is that I’m sure that his reading of the potential of bioscience is accurate. Even the Economist magazine recently ran a cover story entitled: “Cheating death: the science that can extend your lifespan.” But the exciting new possibilities offered by genetic technology will be expensive and available only to elites. So the long century in which medicine had a “levelling up” effect on human populations, bringing good healthcare within the reach of most people, has come to an end. Even today, rich people live longer and healthier lives. In a couple of decades, that gap will widen into a chasm.
Homo Deus is a remarkable book, full of insights and thoughtful reinterpretations of what we thought we knew about ourselves and our history. In some cases it seems (to me) to be naive about the potential of information technology. But what’s really valuable about it is the way it grounds speculation about sci-tech in the context of how liberal democracy evolved.
One measure of Harari’s achievement is that one has to look a long way back – to 1934, in fact, the year when Lewis Mumford’s Technics and Civilization was published – for a book with comparable ambition and scope. Not bad going for a young historian.