Microsoft pulls back from phone business, announces 7,800 layoffs

Photo of Microsoft logo on former Nokia headquarters

Workers install the logo of U.S. technology giant Microsoft on the wall of Nokia’s former headquarters in Espoo, Finland April 26, 2014. Credit: REUTERS/Mikko Stig/Lehtikuva

Microsoft will write off the entire value of the smartphone business it acquired from Nokia

IDG News Service | July 8, 2015

Microsoft is scaling down its mobile phone activities, writing off the entire value of the former Nokia smartphone business it bought last year and laying off almost one-third of that business’ staff.

The company will no longer try to build a standalone phone business, but instead plans to build a Windows ecosystem that includes its own devices, CEO Satya Nadella told staff in an email announcing the changes.

Up to 7,800 jobs will be cut, most of them in the phone business. The cuts come in addition to the 18,000 layoffs announced last year; those cuts included around half of the 25,000 staff who joined Microsoft from Nokia.

Microsoft also announced plans to write down the value of its Nokia acquisition, recording an impairment charge of US$7.6 billion. It bought the Finnish company’s devices and services business only last year for $7.2 billion.

The company explained the write-down by saying that its restructuring of the phone business meant the unit’s future prospects were below original expectations. Its Lumia phones are languishing with a single-digit market share, and the company has produced no breakout successes to rival flagship phones from Samsung Electronics or Apple.

The layoffs should happen by year end, Microsoft said, adding that it would provide more information about the impairment charge in its fourth-quarter earnings announcement on July 21.

Go to original article…

Will brainwaves control tomorrow’s computers?

By Corinna Lathan, Apr 25 2014

Every human-computer interface is really a brain-computer interface; it’s just a matter of degree. Our intentions may be sent from our brain to the computer through our fingers and a keyboard, through a camera that tracks eye movement, or from sensors that read signals from the surface of the scalp or from individual neurons. It’s a continuum.

However, when we talk about “brain-computer interfaces” (BCIs) today, we mean capturing signals directly from the brain and using them to control an electronic device. This can be done in a few ways, such as through electroencephalography (EEG) sensors that record electrical impulses from the brain, or functional near-infrared spectroscopy (fNIR), which uses light to monitor blood flow in the brain. Sensors can also be implanted, but these less invasive technologies work when the sensors are worn on headsets that keep them in contact with the scalp.

These technologies are not mind readers just yet, but they can be trained to recognize patterns in controlled scenarios. We are still far from a scenario in which I put an EEG cap on your head, you think “red car”, and I can tell that you are thinking about that particular red car. What we can do now, for example, is train the system to recognize a choice among four icons, or to know that you’re thinking “red car” versus “playing tennis”.

Thus, a patient with locked-in syndrome might train a BCI to distinguish between two thoughts, such as “playing tennis” versus “walking down the street”, and these could become their “yes” and “no” signals. To an extent, it doesn’t matter what the two thoughts are. We can’t say which neuron in the brain fires when you think about playing tennis, but we can train a BCI to distinguish between that electrical pattern and another pattern.
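In software terms, training such a two-thought BCI boils down to fitting a binary classifier on features extracted from the recorded signals. Here is a minimal sketch of that pipeline in Python; the per-channel band-power features are synthetic stand-ins rather than real EEG data, and linear discriminant analysis is just one common choice for two-class EEG problems, not anything this article prescribes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic band-power features for two imagined tasks.
# Each row is one trial; columns are per-channel power estimates.
n_trials, n_channels = 100, 8
tennis = rng.normal(loc=1.0, scale=0.5, size=(n_trials, n_channels))
walking = rng.normal(loc=1.4, scale=0.5, size=(n_trials, n_channels))

X = np.vstack([tennis, walking])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = "yes", 1 = "no"

# A simple linear classifier learns to separate the two patterns.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

With real recordings, the feature extraction step (filtering, epoching, band-power estimation) does most of the work; the classifier itself can stay simple.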

Just as everyone walks in much the same way but with differences in gait, pace and so on, we all use the same types of brainwaves for the same kinds of mental activities, though there will still be differences between individuals.

As BCIs have advanced, we have built up a “library” of signals, so that we can create devices that can track three, four, or five patterns. We already have robust technology tools that enable us to obtain clean signals from the brain. It used to be necessary to wear 128 or even 256 leads on your head to get any useful information; now we can get meaningful data with 16, eight, or even four leads, depending on the task and the signal of interest. From here, the magic will be in the software and what it can do with those signals.

In time we will be able to use these devices seamlessly for tasks such as controlling the cursor on a computer screen or interacting, hands-free, with mobile phones. One of the projects that we’re currently working on with the U.S. Navy is how to use both BCIs and physiological sensing to optimize individual and team training.

For example, to make training as effective as possible, you could have a BCI that monitors whether you are paying attention properly. If your attention wanders, the computer could alert you or ask you to explain the material that was just covered. It would be part of an intelligent tutor that paces the learning and content to match your focus and attention.

BCIs could also be used to monitor employees in high-stress environments, such as air traffic controllers, or to identify post-traumatic stress disorder in military personnel or concussion in players of contact sports.

Given the advances in BCIs, it seems crazy that every time you visit the doctor your blood pressure, temperature, height and weight are checked, but not your brain vital signs. You don’t need a sophisticated BCI to track brain health; even simple reaction-time tests can be a good indicator of your brain’s processing speed.
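To illustrate just how low-tech such a check could be, here is a minimal sketch of a terminal-based reaction-time test; a clinical measure would be far more controlled, but the principle is the same.

```python
import random
import time

def reaction_time_trial() -> float:
    """Wait an unpredictable moment, then time the user's response."""
    time.sleep(random.uniform(1.0, 3.0))  # random delay defeats anticipation
    start = time.perf_counter()
    input("Press Enter NOW! ")
    return time.perf_counter() - start

# Average over a few trials to smooth out noise in any single response.
latencies = [reaction_time_trial() for _ in range(5)]
print(f"Mean reaction time: {sum(latencies) / len(latencies):.3f} s")
```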

EEGs are now just passive monitoring, but it’s easy to imagine a future where energy can be directed into the brain. Last year, scientists at MIT used light to activate cells in genetically modified mice to implant a false memory into their brains. We’re a long way from being able to do that with human beings, but we could see an extension of EEG technology to determine when your brain was in a state where it was most receptive to learning.

One profound application for BCIs will be awareness of other people’s emotions and brain states. Scientists at Princeton University have looked at speaker-listener pairs with both EEG and brain imaging and have shown that when two people are communicating, speaking and understanding each other, their brains are literally on the same wavelength. Not only that, the listener’s brain wave patterns start to precede the speaker’s brain wave patterns. You start to actually anticipate the other person’s brain waves.

That type of data will have a profound impact on the way people interact. Imagine going into every meeting knowing exactly who’s paying attention to you, who’s on the same wavelength as you, literally. Imagine having that kind of information. It will change every single dynamic that you encounter.

To learn more about new technology trends sign up for the Global Information Technology Outlook module. Also discover other modules on various topics on Forum Academy, the online professional leadership development platform of the World Economic Forum.

Author: Corinna Lathan, Founder and Chief Executive Officer of AnthroTronix, an engineering research and development company; member of the World Economic Forum’s Global Agenda Council on Robotics & Smart Devices

Image: A woman poses with a brain-computer interface in Hanover, April 22, 2012. REUTERS/Morris Mac Matze

Go to original article…

Earth Day History

Earth Day in Washington, D.C.

Founded in 1970 as a day of education about environmental issues, Earth Day is now a globally celebrated holiday that is sometimes extended into Earth Week, a full seven days of events focused on green awareness. The brainchild of Senator Gaylord Nelson and inspired by the antiwar protests of the late 1960s, Earth Day was originally aimed at creating a mass environmental movement. It began as a “national teach-in on the environment” and was held on April 22 to maximize the number of students that could be reached on university campuses. By raising public awareness of air and water pollution, Nelson hoped to bring environmental causes into the national spotlight.

By the early 1960s Americans were becoming aware of the effects of pollution on the environment. Rachel Carson’s 1962 bestseller “Silent Spring” raised the specter of the dangerous effects of pesticides on America’s countryside. Later in the decade, a 1969 fire on Cleveland’s Cuyahoga River shed light on the problem of chemical waste disposal. Until that time, protecting the planet’s natural resources was not part of the national political agenda, and the number of activists devoted to large-scale issues such as industrial pollution was minimal. Factories pumped pollutants into the air, lakes and rivers with few legal consequences. Big, gas-guzzling cars were considered a sign of prosperity. Only a small portion of the American population was familiar with, let alone practiced, recycling.

Elected to the U.S. Senate in 1962, Senator Gaylord Nelson, a Democrat from Wisconsin, was determined to convince the federal government that the planet was at risk. In 1969, Nelson, considered one of the leaders of the modern environmental movement, developed the idea for Earth Day after being inspired by the anti-Vietnam War “teach-ins” that were taking place on college campuses around the United States. According to Nelson, he envisioned a large-scale, grassroots environmental demonstration “to shake up the political establishment and force this issue onto the national agenda.”

Nelson announced the Earth Day concept at a conference in Seattle in the fall of 1969 and invited the entire nation to get involved. He later recalled, “The wire services carried the story from coast to coast. The response was electric. It took off like gangbusters. Telegrams, letters and telephone inquiries poured in from all across the country. The American people finally had a forum to express its concern about what was happening to the land, rivers, lakes and air—and they did so with spectacular exuberance.” Denis Hayes, a young activist who had served as student president at Stanford University, was selected as Earth Day’s national coordinator, and he worked with an army of student volunteers and several staff members from Nelson’s Senate office to organize the project. According to Nelson, “Earth Day worked because of the spontaneous response at the grassroots level. We had neither the time nor resources to organize 20 million demonstrators and the thousands of schools and local communities that participated. That was the remarkable thing about Earth Day. It organized itself.”

On April 22, 1970, rallies were held in Philadelphia, Chicago, Los Angeles and most other American cities, according to the Environmental Protection Agency. In New York City, Mayor John Lindsay closed off a portion of Fifth Avenue to traffic for several hours and spoke at a rally in Union Square with actors Paul Newman and Ali MacGraw. In Washington, D.C., thousands of people listened to speeches and performances by singer Pete Seeger and others, and Congress went into recess so its members could speak to their constituents at Earth Day events.

The first Earth Day was effective at raising awareness about environmental issues and transforming public attitudes. According to the Environmental Protection Agency, “Public opinion polls indicate that a permanent change in national priorities followed Earth Day 1970. When polled in May 1971, 25 percent of the U.S. public declared protecting the environment to be an important goal, a 2,500 percent increase over 1969.” Earth Day kicked off “the Environmental Decade with a bang,” as Senator Nelson later put it. During the 1970s, a number of important pieces of environmental legislation were passed, among them the Clean Air Act, the Water Quality Improvement Act, the Endangered Species Act, the Toxic Substances Control Act and the Surface Mining Control and Reclamation Act. Another key development was the establishment in December 1970 of the Environmental Protection Agency, which was tasked with protecting human health and safeguarding the natural environment: air, water and land.

Since 1970, Earth Day celebrations have grown. In 1990, Earth Day went global, with 200 million people in over 140 nations participating, according to the Earth Day Network (EDN), a nonprofit organization that coordinates Earth Day activities. In 2000, Earth Day focused on clean energy and involved hundreds of millions of people in 184 countries and 5,000 environmental groups, according to EDN. Activities ranged from a traveling, talking drum chain in Gabon, Africa, to a gathering of hundreds of thousands of people at the National Mall in Washington, D.C. Today, the Earth Day Network collaborates with more than 17,000 partners and organizations in 174 countries. According to EDN, more than 1 billion people are involved in Earth Day activities, making it “the largest secular civic event in the world.”

See original article…

Emerging Tech 2015: Computer chips that mimic the human brain

A 12-inch wafer is displayed at Taiwan Semiconductor Manufacturing Company (TSMC) in Hsinchu, Taiwan

By Bernard Meyerson | Mar 4 2015

Neuromorphic technology is one of 10 emerging technologies for 2015 highlighted by the World Economic Forum’s Meta-Council on Emerging Technologies.

Even today’s best supercomputers cannot rival the sophistication of the human brain. Computers are linear, moving data back and forth between memory chips and a central processor over a high-speed backbone. The brain, on the other hand, is fully interconnected, with logic and memory intimately cross-linked at billions of times the density and diversity of that found in a modern computer. Neuromorphic chips aim to process information in a fundamentally different way from traditional hardware, mimicking the brain’s architecture to deliver a huge increase in a computer’s thinking and responding power.

Miniaturization has delivered massive increases in conventional computing power over the years, but the bottleneck of shifting data constantly between stored memory and central processors uses large amounts of energy and creates unwanted heat, limiting further improvements. In contrast, neuromorphic chips can be more energy efficient and powerful, combining data-storage and data-processing components into the same interconnected modules. In this sense, the system copies the networked neurons that, in their billions, make up the human brain.
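One way to get a feel for this is the leaky integrate-and-fire neuron, a textbook abstraction behind spiking designs of this kind. The Python sketch below uses invented parameters purely for illustration: the neuron keeps its state (its membrane potential) right where the computation happens, and produces output only as sparse spike events rather than a clocked stream of data.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: integrate input, leak toward
    rest, and emit a spike (then reset) whenever the threshold is hit."""
    v, spike_times = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)  # leak + integrate in one update
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = 0.0                  # reset after firing
    return spike_times

drive = np.full(1000, 60.0)  # constant drive for 1 second of 1 ms steps
print(f"{len(simulate_lif(drive))} spikes in 1 s of simulated time")
```

In neuromorphic hardware, vast numbers of such units run in parallel and communicate only when they spike, which is where the energy savings over the conventional load-compute-store cycle come from.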

Neuromorphic technology will be the next stage in powerful computing, enabling vastly more rapid processing of data and a better capacity for machine learning. IBM’s million-neuron TrueNorth chip, revealed in prototype in August 2014, has a power efficiency for certain tasks that is hundreds of times superior to that of a conventional central processing unit (CPU), and for the first time comparable to that of the human cortex. With vastly more compute power available for far less energy and volume, neuromorphic chips should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence.

Potential applications include: drones better able to process and respond to visual cues, much more powerful and intelligent cameras and smartphones, and data-crunching on a scale that may help unlock the secrets of financial markets or climate forecasting. Computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways.

Discover the other emerging technologies on the 2015 list:
Sense and avoid drones
Distributed manufacturing
Digital genome
Additive manufacturing
Zero-emission cars
Computers that learn on the job
Precise genetic engineering
A new kind of plastic to cut landfill waste
Next generation robotics

Go to original article (World Economic Forum)…

Are you ready for the technological revolution?

A robotic tape library used for mass storage of digital data is pictured at the Konrad-Zuse Centre for applied mathematics and computer science (ZIB), in Berlin

By Klaus Schwab

It is time to stop looking backwards. In the years that followed the 2008 financial crisis, we spent a lot of time looking for ways to get back to the days of fast economic expansion. We were living in what I call a “post-crisis world”, certain that the challenges we faced were temporary blips in the system, hopeful that things would soon go back to the way they had been.

But now it has become clear that we have entered a new era: we are living in the “post-post-crisis” world. What does this mean?

It means that almost everything we once knew is changing. For the foreseeable future, we will have to get used to slower growth rates. In the new world, it is not the big fish that eats the small fish; it is the fast fish that eats the slow fish.

One of the defining features of this new era is the rapid pace of technological change. It is so fast that people are even referring to it as a technological revolution. This revolution is unlike any previous one in history, and it will affect us all in ways we cannot even begin to imagine.

A different kind of revolution

The first thing that sets this revolution apart from others is how disruptive it is. In the past we had revolutions (perhaps they would be better described as evolutions) that came at a relatively slow pace, like long waves in the ocean. The impact of the first Industrial Revolution, which began in Britain in the 1780s, was not fully felt until the 1830s and 1840s. Today, technological change happens like a tsunami: you see small signs at the shore, and suddenly the wave sweeps in.

Read full article…

The 20 best countries for fast public wi-fi

By Lisa Eadicicco, Dec 18 2014

Internet LAN cables are pictured in this photo illustration taken in Sydney

Rotten Wi-Fi, a testing service that evaluates public Wi-Fi networks based on speed and customer satisfaction, says it has found the 20 countries in the world that offer the best public Wi-Fi experience.

The company says its user base has measured and evaluated the quality of public Wi-Fi hotspots in 172 countries around the world. It’s unclear exactly how accurate the data is, however, because it has been gathered by various people around the world at different times.

Surprisingly enough, countries like South Korea, China, and Japan haven’t made the list, even though other studies have rated those countries as having super-fast internet speeds. 

Here’s a look at Rotten Wi-Fi’s findings. It seems Lithuania dominated the competition when it comes to the average download and upload speeds found on public Wi-Fi networks.

See interactive version of chart here.

Read full article…