Cybersecurity for critical infrastructure

In a world where hackers can sabotage power plants and impact elections, there has never been a more crucial time to examine cybersecurity for critical infrastructure, most of which is privately owned.

According to MIT experts, over the last 25 years presidents from both parties have paid lip service to the topic while doing little about it, leading to a series of short-term fixes they liken to a losing game of “Whac-a-Mole.” This scattershot approach, they say, endangers national security.

In a new report based on a year of workshops with leaders from industry and government, the MIT team has made a series of recommendations for the Trump administration to develop a coherent cybersecurity plan that coordinates efforts across departments, encourages investment, and removes parts of key infrastructure like the electric grid from the internet.

Coming on the heels of a leak of the new administration’s proposed executive order on cybersecurity, the report also recommends changes in tax law and regulations to incentivize private companies to improve the security of their critical infrastructure. While the administration is focused on federal systems, the MIT team aimed to address what’s left out of that effort: privately owned critical infrastructure.

“The nation will require a coordinated, multi-year effort to address deep strategic weaknesses in the architecture of critical systems, in how those systems are operated, and in the devices that connect to them,” the authors write. “But we must begin now. Our goal is action, both immediate and long-term.”

Titled “Making America Safer: Toward a More Secure Network Environment for Critical Sectors,” the 50-page report outlines seven strategic challenges that, if met, would greatly reduce the risk of cyber attacks in the electricity, finance, communications, and oil/natural gas sectors. The workshops included representatives from major companies in each sector, and focused on recommendations related to immediate incentives, long-term research, and streamlined regulation.

The report was published by MIT’s Internet Policy Research Initiative (IPRI) at the Computer Science and Artificial Intelligence Laboratory (CSAIL), in conjunction with MIT’s Center for International Studies (CIS). Principal author Joel Brenner was formerly inspector general of the National Security Agency and head of U.S. counterintelligence in the Office of the Director of National Intelligence. Other contributors include Hal Abelson, David Clark, Shirley Hung, Kenneth Oye, Richard Samuels, John Tirman and Daniel Weitzner.

To determine what a better security environment would look like, the researchers convened a series of workshops aimed at going beyond the day-to-day tactical challenges to look at deep cyber vulnerabilities.

The workshops highlighted the difficulty of quantifying the level of risk across different sectors and the return on investment for specific cybersecurity measures. In light of facility-directed attacks like the Stuxnet virus and the sabotage of a Saudi oil refinery, attendees expressed deep concern about the security of infrastructure like the electric grid, which depends on public networks.

The artificial-intelligence technique known as deep learning

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
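To make the arithmetic concrete, here is a minimal Python sketch of the node computation just described, chained across the layers from the previous paragraph. The network shape, weights, and thresholds are invented toy values, not anything from a real system.

```python
def node_output(inputs, weights, threshold):
    """One node: a weighted sum of inputs, forwarded only if the node fires."""
    total = sum(x * w for x, w in zip(inputs, weights))
    # Below the threshold, the node passes nothing (0); above it, the node
    # "fires" and sends the weighted sum along its outgoing connections.
    return total if total > threshold else 0.0

def feed_forward(data, layers):
    """Push data through successive layers of threshold nodes.

    `layers` is a list of layers; each layer is a list of
    (weights, threshold) pairs, one pair per node.
    """
    activations = data
    for layer in layers:
        activations = [node_output(activations, w, t) for w, t in layer]
    return activations

# Toy two-layer net with invented weights and thresholds.
net = [
    [([0.6, -0.4, 0.2], 0.1), ([0.3, 0.8, -0.5], 0.2)],  # hidden layer, 2 nodes
    [([1.0, -0.7], 0.0)],                                 # output layer, 1 node
]
print(feed_forward([0.5, 0.9, -0.3], net))
```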

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
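The passage doesn’t say how the adjustment is made; one classic rule for a single threshold node is the perceptron update, sketched below on an invented toy task. (Modern deep networks are instead trained with backpropagation, a gradient-based method, but the flavor is the same: nudge the weights whenever the output is wrong.)

```python
import random

def perceptron_train(examples, n_inputs, epochs=100, lr=0.1):
    """Train one threshold node with the classic perceptron rule.

    `examples` is a list of (inputs, label) pairs with labels 0 or 1.
    Weights and the threshold start at random values, as described above,
    and are nudged whenever the node's output disagrees with the label.
    """
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    threshold = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            total = sum(x * w for x, w in zip(inputs, weights))
            output = 1 if total > threshold else 0
            error = label - output  # -1, 0, or +1
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            threshold -= lr * error  # firing more often means a lower bar
    return weights, threshold

# Invented toy task: fire only when both inputs are 1 (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = perceptron_train(data, n_inputs=2)
for inputs, label in data:
    total = sum(x * w for x, w in zip(inputs, weights))
    print(inputs, "->", 1 if total > threshold else 0, "(label:", label, ")")
```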

Apps that harness “micro-moments” for learning

Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustration with waiting a few extra seconds for our emails to push through doesn’t mean we have to simply stand by.

To help us make the most of these “micro-moments,” researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a series of apps called “WaitSuite” that test you on vocabulary words during idle moments, like when you’re waiting for an instant message or for your phone to connect to WiFi.

Building on micro-learning apps like Duolingo, WaitSuite aims to leverage moments when a person wouldn’t otherwise be doing anything — a practice that its developers call “wait-learning.”

“With stand-alone apps, it can be inconvenient to have to separately open them up to do a learning task,” says MIT PhD student Carrie Cai, who leads the project. “WaitSuite is embedded directly into your existing tasks, so that you can easily learn without leaving what you were already doing.”

WaitSuite covers five common daily tasks: waiting for WiFi to connect, emails to push through, instant messages to be received, an elevator to come, or content on your phone to load. When using the system’s instant messaging app “WaitChatter,” users learned about four new words per day, or 57 words over just two weeks.

Ironically, Cai found that the system actually enabled users to better focus on their primary tasks, since they were less likely to check social media or otherwise leave their app.

WaitSuite was developed in collaboration with MIT Professor Rob Miller and former MIT student Anji Ren. A paper on the system will be presented at ACM’s CHI Conference on Human Factors in Computing Systems next month in Colorado.

WaitSuite’s apps include “WiFiLearner,” which gives users a learning prompt when it detects that their computer is seeking a WiFi connection. Meanwhile, “ElevatorLearner” automatically detects when a person is near an elevator by sensing Bluetooth iBeacons, and then sends users a vocabulary word to translate.
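That elevator trigger is easy to picture in code. Below is a minimal sketch of the detection loop, assuming a hypothetical scan_ibeacons() helper and a made-up beacon UUID; a real app would use a platform Bluetooth API to do the actual scanning.

```python
import time

# Hypothetical helper: a real app would use a platform Bluetooth API
# to scan for nearby iBeacon advertisements; stubbed out here.
def scan_ibeacons():
    """Return the set of iBeacon UUIDs currently in range."""
    return set()

ELEVATOR_BEACONS = {"f7826da6-4fa2-4e98-8024-bc5b71e0893e"}  # made-up UUID
FLASHCARDS = [("la manzana", "the apple"), ("el libro", "the book")]

def wait_learning_loop():
    """Show a flashcard whenever an elevator beacon first comes into range."""
    was_near = False
    card = 0
    while True:
        near = bool(scan_ibeacons() & ELEVATOR_BEACONS)
        if near and not was_near:  # just arrived at the elevator
            word, translation = FLASHCARDS[card % len(FLASHCARDS)]
            print(f"Translate: {word}  (answer: {translation})")
            card += 1
        was_near = near
        time.sleep(1)  # poll roughly once per second
```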

Though the team used WaitSuite to teach vocabulary, Cai says that it could also be used for learning things like math, medical terms, or legal jargon.

“The vast majority of people made use of multiple kinds of waiting within WaitSuite,” says Cai. “By enabling wait-learning during diverse waiting scenarios, WaitSuite gave people more opportunities to learn and practice vocabulary words.”

Still, some types of waiting were more effective than others, making the “switch time” a key factor. For example, users liked that with “ElevatorLearner,” the wait was typically 50 seconds while opening the flashcard app took only 10, leaving time to spare. For others, doing a flashcard while waiting for WiFi didn’t seem worth it if the WiFi connected quickly, but those with slow WiFi felt that doing a flashcard made waiting less frustrating.

Combating media stereotypes of Muslim women

Layla Shaikley SM ’13 began her master’s in architecture at MIT with a hunger to redevelop nations recovering from conflict. When she decided that data and logistics contributed more immediately to development than architecture did, Shaikley switched to the Media Lab to work with Professor Sandy Pentland, and became a cofounder of Wise Systems, which develops routing software that helps companies deliver goods and services.

“There’s nothing more creative than building a company,” Shaikley says. “We plan the most effective routes and optimize them in real time using driver feedback. Better logistics can dramatically reduce the number of late deliveries, increase efficiency, and save fuel.”

But Shaikley is perhaps better known for a viral video, “Muslim Hipsters: #mipsterz,” that she and friends created to combat the media stereotypes of Muslim women. It reached hundreds of thousands of viewers and received vigorous positive and negative feedback.

The video “is a really refreshing, jovial view of an underrepresented identity: young American Muslim women with alternative interests in the arts and culture,” Shaikley says. “The narrow media image is so far from the real fabric of Muslim-American life that we all need to add our pieces to the quilt to create a more accurate image.”

Shaikley’s parents moved from Iraq to California in the 1970s, and she and her five siblings enjoyed a “quintessentially all-American childhood,” she says. “I grew up on a skateboard, and I love to surf and snowboard.” She feels deeply grateful to her parents, who “always put our needs first,” she adds. “When we visited relatives in Iraq, we observed what life is like when people don’t have the privilege of a free society. Those experiences really shaped my understanding of the world and also my sense of responsibility to give back.”

Shaikley says the sum of her diverse life experiences has helped her as a professional with Wise Systems and as a voice for underrepresented Muslim women.

“My work at MIT under [professors] Reinhard Goethert and Sandy Pentland was critical to my career and understanding of data as it relates to developing urban areas,” she says. “And every piece of my disparate experiences, which included the coolest internship of my life with NASA working on robotics for Mars, has played a huge role.”