DIMACS Miniworkshop on Algorithms and Energy

May 21, 2010
DIMACS Center, CoRE Building, Rutgers University, Piscataway, NJ

Organizers:
Komandur Krishnan, Telcordia, krk at research.telcordia.com
Linda Ness, Telcordia, lness at telcordia.com
Tom Reddington, Bell Labs, treddington at bell-labs.com
Fred Roberts, DIMACS, froberts at dimacs.rutgers.edu
Emina Soljanin, Bell Labs, Emina at research.bell-labs.com
Lisa Zhang, Bell Labs, ylz at research.bell-labs.com
Presented under the auspices of the Special Focus on Discrete Random Systems.

Abstracts:


Con Edison

Title: Con Edison Smart Grid Initiatives

Con Edison intends to demonstrate that the reliability of the grid can be improved through a combination of enhanced monitoring and control capabilities along with intelligent analysis tools.

Every year a great deal of money is spent to maintain the highest levels of reliability and meet a constantly increasing demand for electricity. In addition, NYC and its environs need to be ready for the incorporation of electric vehicles into the load profile of the region. Energy efficiency programs, curtailable electric programs, remotely controlled field switches, high tension monitoring, and distributed generation assets already exist and help to keep up with increasing demands while deferring capital expenditures. In Long Island City, Con Edison is already working to expand this infrastructure and incorporate home area networks, advanced metering infrastructure, building management systems, and unique underground switching applications. The proposed project will incorporate photovoltaics, battery storage, and electric vehicle charging stations.

The proposed project will demonstrate monitoring and control capabilities, leveraging controllable field assets (curtailable customers, switches, distributed generation, battery storage, electric vehicle charging stations, building management systems, home area networks, high tension monitoring and advanced metering infrastructure) to shift, balance or reduce load where and when needed in response to system contingencies or emergencies.

One of the key challenges of operating the electric grid today is the huge amount of data available pertaining to system condition, which comes from a variety of disparate systems. This demonstration will provide tools to enable quick decision making by system operators. Decision-aid tools will be incorporated to help operators identify problem areas, prioritize corrective actions and optimize the application of controllable field assets in both normal and contingency operations.

The capabilities of the system will be shown in a series of six demonstrations. There will be individual demonstrations in the Orange and Rockland Control Center, the Manhattan Control Center and the Energy Control Center. In the Brooklyn/Queens Control Center there will be three demonstrations that span the entire length of the project.


Anand Ekbote, Emerson Network Power

Title: Energy Optimization and Efficiency Improvement

Proposal: Developing a general class of energy optimization solutions in multi-system environments, using solutions developed for data centers as a starting point.

The problem of improving data center efficiency has gained prominence in the recent past, driven by the growth of 'mega' data centers that consume megawatts of energy, comparable to small cities. Data centers have multiple systems working together to deliver compute output. These systems include software applications and databases, IT infrastructure (servers, routers, storage, etc.), and power and cooling distribution and supply systems, among others.

The problem can be broken down into two parts:

Emerson Network Power developed a 5,000-square-foot data center model based on real-world technologies and operating parameters. Through the model, Emerson Network Power was able to quantify the savings of various actions at the system level, as well as identify how energy reduction in some systems affects consumption in supporting systems. The model demonstrates that reductions in energy consumption at the IT equipment level have the greatest impact on overall consumption because they cascade across all supporting systems. This insight led to the development of Energy Logic, a vendor-neutral roadmap for optimizing data center energy efficiency that starts with the IT equipment and progresses to the support infrastructure. Implementing Energy Logic strategies can deliver a 50 percent or greater reduction in data center energy consumption without compromising performance or availability.

Next, Emerson Network Power tackled the issue of an output metric for data centers, essential to achieving a true understanding of data center efficiency. The lack of a data center output metric makes it difficult for IT and data center managers to justify much-needed IT investments to management. Emerson Network Power analyzed the available information on IT performance improvement to see what insights could be gained. While there is no universally accepted metric for server and data center output, there is significant industry information available on the increase in the "performance" of servers and chips over the past several years. As a first step, Emerson introduced the concept of CUPS, or Compute Units per Second, as a temporary or placeholder name for what will eventually be the sought-after universal metric for IT and data center output.

Using CUPS leads to powerful insights in three major areas of importance to all stakeholders and end-users in the IT and infrastructure industries. First, while there has been a significant increase in energy consumption in IT and data center environments, these increases are considerably overshadowed by dramatic gains in data center output and efficiency over the last five years. Second, applying the results of the analysis to the 5,000 sq. ft. data center model yields clear strategies for improving data center efficiency. Third, the analysis leads to a clear direction on the criteria to be used for arriving at a universally accepted metric for IT and data center output.

Two white papers provide more detail.

For the DIMACS mini-workshop, the questions to be addressed are:


Young-Jin Kim, Marina Thottan, Vladimir Kolesnikov, Bell Labs, Alcatel-Lucent

Title: Algorithms for SmartGrid Communications

In recent years the power grid has been undergoing transformative changes due to the greater penetration of renewable energy sources and an increased focus on demand-response shaping. These innovative transformations of the grid require an IP-based, decentralized, data-centric information infrastructure that can reliably, securely, and cost-effectively support the operation and innovative applications of the next generation grid. The SmartGrid information infrastructure differs from a typical distributed system since it addresses the specific requirements of power applications such as distributed data sources, latency-sensitive data transactions and real-time event updates.

Some areas to explore are:

  1. Algorithms for Grid Overlay Design
    The grid overlay is used to optimize latency, throughput, and network utilization. There is a trade-off to be made between end-to-end connections and overlay-based connections. The overlay grid needs to be scalable in terms of network size, resources, computation, and reliability. It should also enable functions such as QoS, priority, multicast and data fusion.
  2. Algorithms for Data-centric Middleware Design
    The middleware is designed to provide a scalable, secure, reliable, and decentralized information infrastructure in which space, time, or data synchronization can be decoupled. It can also enable publisher-subscriber transactions for event notifications across multiple domains. In this context, algorithm design is necessary for distributed data-centric storage for pull-based data usage; the storage could be either DHT-based or location-based.
  3. Algorithms for Latency Budgets
    The move to an IP-based network raises primary concerns such as reliability and delivery latency. It is hard to optimally achieve latency budgeting across multiple domains. Cross-domain network design algorithms are necessary.
  4. Algorithms for Authentication and Secure Communication
    Security algorithms are necessary for the overlay architecture, distributed data-centric storage and publisher-subscriber transactions using PKI, Digital Certificates, or AES.
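To make item 2 above concrete, here is a minimal sketch of DHT-style data-centric storage: measurement records are placed on nodes by consistent hashing of a data name, so a subscriber can locate the responsible node without a central index. The node names, record names, and the single-process in-memory "nodes" are illustrative assumptions, not part of the proposed middleware.

```python
import bisect
import hashlib

def h(key: str) -> int:
    # Stable hash of a string onto the ring's integer keyspace.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=50):
        # Each physical node gets `vnodes` positions for better balance.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, data_name: str) -> str:
        # First ring position clockwise from the data name's hash.
        i = bisect.bisect(self.keys, h(data_name)) % len(self.ring)
        return self.ring[i][1]

class DataCentricStore:
    def __init__(self, nodes):
        self.ring = ConsistentHashRing(nodes)
        self.storage = {n: {} for n in nodes}   # per-node key/value store

    def put(self, data_name, value):
        self.storage[self.ring.node_for(data_name)][data_name] = value

    def get(self, data_name):
        return self.storage[self.ring.node_for(data_name)].get(data_name)

store = DataCentricStore(["substation-A", "substation-B", "substation-C"])
store.put("feeder-12/voltage", 118.7)
print(store.get("feeder-12/voltage"))  # -> 118.7
```

Because any party that knows the node list can recompute the placement, pull-based lookups need no directory service, which is the scalability property the overlay design is after.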

Steven Fortune, Bell Labs

Title: The GreenTouch Network Energy-Efficiency Initiative

GreenTouch is a telecommunications industry consortium, initiated by Alcatel-Lucent, whose mission is inventing and promoting ultra-energy-efficient telecommunication networks. The stated goal of GreenTouch is to improve network energy efficiency by a factor of 1000. I will briefly describe the GreenTouch initiative, giving its motivation, the modeling of current and future network energy usage, and a brief outline of some of the outstanding research challenges.


Francisco de Leon, NYU-Poly

Title: The Viewpoint of a Power Systems Engineer - Grid Reliability

Long-duration interruptions (longer than a few minutes) in the supply of electric power do not happen often (not even in small sections of the grid). However, when they do, these events are very disruptive to people and the economy. Very short duration disturbances (under a second) can disrupt certain automatic industrial processes. The first and most important function of a smart grid (SG) should be to maintain current levels of reliability, although the SG is expected to increase reliability. Therefore, the topic that I propose is to discuss grid reliability in the age of the smart grid. Some of the subtopics would be:


Wonsuck Lee, Bell Labs

Title: Smart Grid Economic Ecosystem Modeling

Overall success of the Smart Grid in the real world will certainly depend on technological achievements but possibly, even more so, on highly complicated and interconnected economic factors. Such a complex system of immense scale can only be planned and managed through versatile mathematical and computational models constituting a virtual laboratory of the system. The important components of the Smart Grid system to be modeled include different categories of consumers (industrial, commercial, residential, mobile), power generation companies, power transmission & operation companies, utility service companies, and market participants such as wholesalers and retailers. The first step towards this goal is to understand consumers' behavior as a result of their reactions to perceived information (such as pricing signals) and their needs (such as cooling on hot days or travel). The behavior is ultimately based on perceived utility, or utility functions as termed in economics. It is of critical importance to correctly model various consumers' utility functions for economic analysis of the applications and components of the Smart Grid, such as demand response. An important element envisioned in the model would be the inclusion of the relationship among need, price, and demand. Questions will arise around the magnitude of the elasticity of demand with respect to price, the leading-order components of the utility functions, etc.

The computational agent-based modeling method has received attention as a promising technique to model the complex adaptive socio-economic ecosystem of the Smart Grid. Models of non-consumer agents in the power utility service chain and agents representing policy makers are also needed. The agent model should encompass models of, for instance, power consumption devices (such as electric vehicles), power transmission and distribution elements and networks, and power generation technologies (solar panels, wind turbines, etc.) at a certain level of detail, since the economics of the Smart Grid is directly related to the power flow over the power grid.


Wonsuck Lee and Vladimir Kolesnikov, Bell Labs

Title: Consumer Information Privacy in the Smart Grid

The Advanced Metering Infrastructure (AMI) and smart meters are important constituents of the Smart Grid. Smart meters have become more than an end terminal for Automated Meter Reading (AMR) since the introduction of additional functionality to collect, send and receive data associated with poly-phase power consumption, power quality, power-line error detection, etc. Smart meters can also send commands and data for load-side management and demand response from consumers. More information and data at the consumption side enhance power grid control, reliability, stability, and efficiency from the utility provider's perspective. However, this comes at the cost of possibly unwanted exposure of consumers' information. For instance, Non-Intrusive Load Monitoring (NILM) can reveal the energy consumption of individual appliances and their operation. Consumers' presence or absence in their private premises, their living patterns, and even details of their belongings can be identified using NILM steady-state and transient analysis of variations in current, active and reactive power, etc. Another example relates to the location privacy of Electric Vehicle (EV) operation, in particular charging. For billing purposes, the EV needs to be identified at the charging station. However, in this case the EV's location, time, and charge amount (which allow recovery of the EV's usage patterns and travel) are exposed to the utility.

In these and other scenarios, what can be done to protect the privacy of consumers? Clearly, enough information must be available for the utilities to achieve their grid operation and management goals. At the same time, this information must be handled in a maximally privacy-preserving manner. There are several natural ways to achieve this. First, data can be engineered and handled in a certified privacy-protected environment: only relevant information is sent, and, further, there are cryptographic and security safeguards throughout the entire data lifetime. Second, data can be encrypted on the consumer's end, and the utility operates only on encrypted data, using techniques from secure multiparty computation. We believe the best security/efficiency tradeoff lies between these two approaches.
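As a minimal illustration of the second family of techniques, the sketch below uses additive secret sharing so that the utility learns an aggregate (total neighborhood load) without any single party seeing an individual meter reading. The two-aggregator setup, modulus, and readings are illustrative assumptions, not a production protocol.

```python
import random

MOD = 2 ** 31 - 1                      # arithmetic modulo a large prime

def share(reading):
    # Split a reading into two random-looking shares; each share alone
    # is uniformly distributed and reveals nothing about the reading.
    r = random.randrange(MOD)
    return r, (reading - r) % MOD

readings = [512, 730, 401]             # watt readings from three meters
shares = [share(x) for x in readings]

sum_a = sum(s[0] for s in shares) % MOD    # non-colluding aggregator A
sum_b = sum(s[1] for s in shares) % MOD    # non-colluding aggregator B

# Only the combination of the two partial sums reveals the total load.
print((sum_a + sum_b) % MOD)  # -> 1643
```

The utility gets the billing/balancing aggregate it needs, while NILM-style inference on individual households is blocked as long as the two aggregators do not collude.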


Adam Meyerson, UCLA

Title: Energy Conservation for Mobile Devices

The introduction of the smartphone has been a boon to modern productivity and personal entertainment. These devices provide telephone services, internet connectivity, and a powerful computing platform, and are quickly gaining popularity in the global mobile phone market. The impressive features of these phones come at a high cost in terms of energy, and battery life is a major limiting factor in their usefulness. Energy-saving measures for these devices have substantial value to their users, beyond the usual benefits of energy efficiency.

A major problem created by smartphones is the frequent use of internet connectivity over the cellular network. This is slower and more costly in terms of energy than connecting via WiFi, and the high rate of cellular network traffic reduces performance for all mobile phone users. For this reason, it is advantageous to attempt to schedule network access to occur when WiFi is available, to the degree that this is possible.

While some tasks require a near-immediate response (for example making a phone call), there are also many tasks which should be performed at regular intervals but need not be directly controlled by the user (for example downloading new emails or updated versions of applications). There is a natural tradeoff between immediate performance (the user wants the updates sooner rather than later) and cellular usage; the goal is to maximize the amount of data transfer performed over WiFi while still maintaining reasonable quality in terms of task completion times.

Of course, there are many ways to model such a problem -- the one I propose here is an online scheduling model where tasks arrive one at a time along with two non-negative non-increasing functions relating value to completion time (one function representing WiFi completion and the other representing cellular completion). The goal is to maximize the total value obtained. From a theoretical perspective, this is a generalization of a natural (and well-studied) problem of online packet scheduling with a single valuation function.
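A toy instance of this model, under the simplifying assumptions that one task is served per time slot and that the WiFi availability schedule is known in advance (a perfect predictor), might look like the following greedy sketch. The policy, tasks, and valuation functions are invented for illustration; this is a baseline heuristic, not a competitive algorithm from the literature.

```python
def schedule_with_predictor(tasks, wifi_slots, horizon):
    """tasks: list of (arrival, v_wifi, v_cell); one task served per slot.
    A task is deferred past a cellular slot whenever some predicted
    future WiFi slot would yield more value."""
    done, total, log = set(), 0.0, []
    for t in range(horizon):
        pending = [j for j, (a, _, _) in enumerate(tasks)
                   if a <= t and j not in done]
        if not pending:
            continue
        if t in wifi_slots:
            j = max(pending, key=lambda j: tasks[j][1](t))
            gain = tasks[j][1](t)
        else:
            def cell_gain(j):  # cellular value now minus best deferred WiFi value
                future = [tasks[j][1](s) for s in wifi_slots if s > t] or [0.0]
                return tasks[j][2](t) - max(future)
            j = max(pending, key=cell_gain)
            if cell_gain(j) <= 0:
                continue                  # worth waiting for WiFi
            gain = tasks[j][2](t)
        done.add(j)
        total += gain
        log.append((t, j))
    return total, log

# Task 0: a download whose value decays with delay on either link.
# Task 1: a small sync whose value is the same either way.
tasks = [(0, lambda t: 10 - t, lambda t: 6 - t),
         (0, lambda t: 5.0,    lambda t: 5.0)]
print(schedule_with_predictor(tasks, wifi_slots={2}, horizon=4))
# -> (13.0, [(2, 0), (3, 1)]): both tasks wait out the early cellular slots,
#    the decaying download grabs the WiFi slot, the sync falls back to cellular.
```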

From a practical standpoint, it will be useful to attempt to predict both the future WiFi connectivity and the task arrivals. Some recent results suggest that this is possible (in fact, predicting the movements of cell-phone users is possible to a high degree of accuracy, although there may be privacy concerns). Thus a "semi-online" model where a highly accurate (but occasionally erroneous) predictor is available may be more realistic and lead to improved results.


Warren B. Powell, Princeton University

Title: Approximate Dynamic Programming for Stochastic, Multiscale Energy Policy Modeling

There is a long history of using deterministic linear programming models to simulate rational, economic investment decisions for energy policy planning. These models typically work in one- or even five-year increments, which means they have to use average production from intermittent sources such as wind and solar. They also have to assume that all inputs are known in advance. The SMART model instead uses the modeling and algorithmic framework of approximate dynamic programming to model time in hourly increments (over 250,000 time periods for a 30-year horizon), capturing not only the variability but also the uncertainty of wind, rainfall, demand and prices. We show that the results closely match the optimal solution when the problem is deterministic, and that the method produces robust solutions when the problem is stochastic.
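As a much-reduced illustration of the underlying idea (not the SMART model itself), the following backward dynamic program values one storage device over a short hourly horizon with random prices; the capacity, prices, and probabilities are invented for the example. The point is that hourly decisions made after observing the price capture value that an average-price model would miss.

```python
HOURS, CAP = 6, 2                         # hourly horizon, storage capacity
PRICES = [(20.0, 0.5), (60.0, 0.5)]       # (price, probability) each hour

def solve():
    # V[t][e]: expected future profit at hour t holding e units of energy.
    V = [[0.0] * (CAP + 1) for _ in range(HOURS + 1)]
    for t in range(HOURS - 1, -1, -1):
        for e in range(CAP + 1):
            total = 0.0
            for price, prob in PRICES:    # price is observed, then we act
                best = max(-a * price + V[t + 1][e + a]
                           for a in (-1, 0, 1) if 0 <= e + a <= CAP)
                total += prob * best      # a=+1 buys, a=-1 sells, a=0 holds
            V[t][e] = total
    return V

V = solve()
print(V[0][0] > 0)   # True: buying low and selling high has positive value
```

Scaling this exact recursion to 250,000 time periods and a realistic state space is hopeless, which is precisely why SMART replaces the exact value table with approximate dynamic programming.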


Marina Thottan, Bell Labs, Alcatel-Lucent

Title: Power Efficiency in Routers & Switches

Based on network traffic growth trends, which predict a 50% increase per annum, routers and switches need to roughly double in capacity every two years. However, historic trends show that router capacity per chassis is expected to grow 10x in 10 years, which is only about a 26% increase year-over-year. Therefore we will soon face a critical challenge in scaling existing routers to meet the growing traffic demands on the network. More recently we are seeing Telco providers, faced with large electricity bills, using opex factors such as power consumption to prove-in the acquisition of new technology.

The limiting factors for scaling routers/switches include power consumption, electrical backplanes, and network interfaces. In a typical core router today, 34% of the energy consumption is for switching and routing and 22% for packet processing. Data centers also contribute to the overall power consumption in the Telco sector; 10% of this energy consumption is in storage systems. Servers consume about 500 nJ/bit (10x more than any other equipment). Furthermore, data centers use the same network gear as Telco networks and thus do not leverage any optimizations that will be used for data center routing.

A large component of the power consumption in today's routers and switches is due to multi-rack systems (switch fabric and line cards), excessive buffering and processing, problems at the interfaces (MUX/DEMUX), and the speed of electronic components. Power efficiency is driven by semiconductor technology and optical components. Pure technology evolution alone would suggest only a 20% improvement in power consumption. Some additional gains could be achieved through higher integration in optics (e.g. 4x 10G). However, these gains may require complex modulation and electrical impairment compensations that are likely to decrease power efficiency.

Faced with these challenges, some interesting areas for exploration are:


David L. Waltz, Center for Computational Learning Systems (CCLS), Columbia University

Title: Computation's Role in Building the Smart Grid for a Sustainable Future

To move toward a sustainable future, it is necessary to make the electric power grid more reliable, more efficient and far more flexible. For the foreseeable future the electric power grid will be an essential component of our energy delivery system. Many parts of the country (and the world) will long remain net consumers of electric power, and others net producers, though the patterns of use and production will vary over the seasons, and may shift over the years. Overall electric energy demand is expected to grow, and grow very rapidly in some areas, notably data centers and charging systems for pluggable electric vehicles. In many applications, relatively inefficient internal combustion engines will be replaced with much cleaner and more efficient electrically powered systems.

However, there are many obstacles hindering our movement toward this future. The grid of the future will evolve out of today's grid, retaining most of its current components. Today's grid is inflexible: its topology cannot be changed dynamically to deal with time-varying energy needs or outages. For the most part it lacks storage facilities and the ability to rapidly alter transmission patterns, so it is incapable of dealing well with distributed power generation, such as solar or wind, which will of necessity vary with weather and time of day. The design criterion is to build the system to meet peak load requirements, which are experienced on only a handful of days a year. This means that the system typically operates well below its capacity. During contingency operations, parts of the system can still occasionally operate at or above rated capacity. As demand increases, either 1) grid capacity will have to be increased, or 2) the existing grid will have to be made much more flexible. Increasing grid capacity is difficult due to NIMBY constraints as well as the very high cost and low ROI of building new transmission facilities. The natural conclusion is that we will have to make the grid smarter, not larger.

For the last six years, CCLS has been working with Con Edison and other companies on applying computation -- especially machine learning -- to address power grid needs for New York City. The bulk of our work to date has concentrated on using statistical machine learning tools for preventative maintenance. We have transformed historical electrical grid data into predictive tools for ranking grid systems and components according to the degree to which they are at risk of failure. Separate tools have been developed for systems (e.g. high voltage feeders, low voltage secondary system) and system components (e.g. cables, joints, transformers, separable connectors). ML methods used for these have included SVMs, boosting, random forests, GAs combined with SVM, CART, and Cox proportional hazards. More recently we have been developing learning methods for estimating MTBF (Mean Time Between Failures) models. These tools have been coupled with business management interfaces to allow the prediction capability to be integrated directly into corporate planning and decision support. One such interface provides a Pareto-optimized set of work plans meeting various constraints (e.g. borough, network, total budget, replacement of one or more specific components) ordered by "bang for the buck" -- the greatest increase in MTBF for the dollars spent.

We have also worked on experimental design methodologies for treating systems as if they were patients, generating control and test groups, and answering such questions as whether the system is being "overtreated" (tested too frequently) or whether it is better to replace the worst components here and there or to concentrate on making specific feeders highly reliable.

CCLS's current research focuses on demonstrations of a much smarter grid, including storage, electric vehicles, distributed generation, and demand response systems (where customers -- or their computational surrogates -- can see and respond to energy price signals). Our responsibility in this work is optimal decision-making and control, using adaptive stochastic planning methods that 1) model the overall grid system and customers to a high degree of accuracy, 2) model each decision, and 3) continuously run Monte Carlo simulations of a variety of futures under various stochastic scenarios (heat waves, storms, variations in energy costs, power outages, etc.). The goal is to optimize every decision, from very near-term (should the system be storing energy in a superbattery? Should it be drawing power from the battery?) to long-term (should we purchase a $1M superbattery? If so, where should it be placed?). Models will include novel components not part of the NYC grid today, including fault current limiters, storage facilities, EVs, high power switches, distributed generation, and distributed communication and control. This current work is part of a $92M project, primarily supported by a large DOE grant, being done in conjunction with Con Edison and several other companies.

Thanks to current and former CCLS people: Roger Anderson, Albert Boulanger, Phil Gross, Cynthia Rudin, Becky Passoneau, Ansaf Salleb-Aouissi, Haimonti Dutta, Manoj Poolery, Leon Wu, Axinia Radeva, Marta Arias, Wei Chu, Jiang Chen, Samantha Cook, Rafi Pelossof, Ilia Vovsha, Bert Huang, Phil Long, Hatim Diab, Sam Lee, Fred Seibel, Hubert Delaney; Con Edison people: Artie Kressner, Serena Lee, Maggie Chow, Delphina Isaac, Frank Doherty; Princeton University: Warren Powell, Hugo Simao; CALM Energy: John Johnson; Digimedia: Chuck Lawson, Mark Juvalier, Carolyn Vu.

Work supported by Con Edison; Department of Energy; NYSTAR; NYSERDA; Columbia Center for Advanced Technology (CAT).


Lisa Zhang, Bell Labs

Title: Routing and Scheduling Optimization for Network Energy Efficiency

Energy conservation is emerging as a key issue in computing and networking within the ICT sector (Information and Communications Technologies). As indicated in a study conducted by the US Department of Energy, the current network elements and telecommunication networks are not designed with energy optimization as an objective or a constraint. They are often designed for peak traffic, for reasons such as accommodating future growth, planned maintenance or unexpected failures, or quality-of-service guarantees. At the same time, the energy consumption of network elements is often defined by the peak profile and varies little for typical traffic, which can be a small fraction of the peak. We therefore have an opportunity to optimize energy efficiency in communication and computing networks.

One aspect of optimization comes from re-examining the scheduling protocol within a single network device, e.g. a router, switch or CPU. Scheduling prioritizes the order in which packets leave the device in case of contention. An energy-aware network device necessarily operates at lower than full rate, for example when the device is lightly loaded. Therefore, a tradeoff between energy and performance metrics such as delay is expected. Understanding such a tradeoff not only helps to best utilize existing devices, it may also lead to next-generation devices that are optimized for the energy-performance sweet spot.
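One standard way to make this single-device tradeoff concrete is the speed-scaling abstraction: processing w units of work at speed s costs energy w * s**(alpha - 1) (power s**alpha for time w/s) and incurs delay w/s, so minimizing energy plus beta times delay has the closed form s* = (beta / (alpha - 1)) ** (1 / alpha). The sketch below checks that closed form against a grid search; alpha = 3 (a common assumption for CMOS dynamic power) and the delay weight beta are illustrative choices, not values from the talk.

```python
ALPHA, BETA, WORK = 3.0, 2.0, 1.0     # alpha ~ 3 for CMOS power; beta weights delay

def cost(s):
    # energy WORK * s**(ALPHA-1) plus BETA times delay WORK / s
    return WORK * s ** (ALPHA - 1) + BETA * WORK / s

s_star = (BETA / (ALPHA - 1)) ** (1.0 / ALPHA)                # analytic optimum
s_grid = min((k / 1000.0 for k in range(1, 5001)), key=cost)  # numeric check
print(s_star, s_grid)   # both 1.0 for these parameters
```

Raising beta (delay matters more) pushes the sweet-spot speed up; lowering it pushes the device toward slower, more energy-frugal operation, which is exactly the tradeoff curve the abstract asks us to understand.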

Another aspect of optimization comes from re-examining routing and scheduling in a network environment, where multiple network devices communicate with one another. Routing refers to choosing paths along which information travels from source nodes to destination nodes. To assess energy cost, an initial step is to model the energy consumption of a network device. For example, a common and simplified view is to model the energy cost as a convex function of the carried traffic load. This contrasts with the concave functions that are often used to model equipment cost in network design when energy is not a consideration. Therefore, network-wide energy optimization is expected to be quite a different design problem.
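The convex-versus-concave distinction has a concrete routing consequence, sketched below with illustrative stand-in cost functions: under a convex per-link cost, total cost is minimized by spreading load across parallel links, while under a concave cost it is minimized by consolidating onto one link.

```python
import math

DEMAND = 10.0   # total traffic to route over two parallel links

def total_cost(f, x):            # route x on link 1, DEMAND - x on link 2
    return f(x) + f(DEMAND - x)

def best_split(f, steps=1000):   # brute-force the cheapest split
    return min((DEMAND * k / steps for k in range(steps + 1)),
               key=lambda x: total_cost(f, x))

print(best_split(lambda x: x ** 2))   # convex energy cost -> 5.0: spread load
print(best_split(math.sqrt))          # concave equipment cost -> 0.0: consolidate
```

This is why network design heuristics tuned for concave equipment costs (aggregate traffic onto few links) can be exactly wrong for convex energy costs, and vice versa.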

For both a single device and a network of devices, the goal can be twofold. One is to evaluate the amount of energy saving that can be realized via optimization under realistic scenarios. The other is to devise new protocols with provable worst-case performance bounds. Both angles have a rich set of open problems. The following references list some initial efforts from Bell Labs.

  1. A. Francini and D. Stiliadis, "Performance Bounds of Rate-Adaptation Schemes for Energy-Efficient Routers," To appear in Proceedings of the 11th International Conference on High Performance Switching and Routing (IEEE HPSR 2010), Dallas, TX, June 2010.
  2. M. Andrews, S. Antonakopoulos and L. Zhang, "Minimum-Cost Network Design with (Dis)economies of Scale," Manuscript.
  3. M. Andrews, A. Fernandez Anta, L. Zhang and W. Zhao. "Routing for Energy Minimization in the Speed Scaling Model," Proceedings of IEEE INFOCOM 2010. San Diego, CA, March 2010.
  4. M. Andrews, A. Fernandez Anta, L. Zhang and W. Zhao. "Routing and Scheduling for Energy and Delay Minimization in the Powerdown Model," Proceedings of IEEE INFOCOM, Mini Conference, 2010. San Diego, CA, March 2010.

Document last modified on May 21, 2010.