The workshop had two mechanisms in place to receive input from participants: breakout discussion subgroups and short presentations to the full audience.
The workshop hosted breakout sessions following each of the three workshop sessions. Each breakout group was organized as an open discussion, and many of the same topics arose across separate sessions. Comments that related directly to a point of a workshop speaker have been integrated into Chapters 2-5. However, some of the comments concerned topics not discussed in the workshop presentations, and some spanned topics from multiple presentations. This chapter captures summaries of those topics.
Using Phasor Measurement Units and Smart Meter Data to Build Better Models
The subgroups discussed whether time series data from phasor measurement units (PMUs) can be used to build better (parametric or nonparametric) models of a power system’s transient response (e.g., impulse, step) to large disturbances. Specifically, can small but informative perturbations be identified that would facilitate model development and testing? One participant noted that small perturbations may not offer enough information and that using data from large unplanned perturbations (such as blackouts) may be more helpful. Another participant commented
that the power system is slowly time-varying. Because this may complicate the ability to train or tune reliable models, it will be important to explicitly incorporate the time-varying composition of loads into the system identification procedure.
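The system identification question above can be made concrete with a small sketch. The example below is purely illustrative (synthetic signals standing in for PMU measurements, with hypothetical second-order dynamics): it fits a discrete-time ARX model to perturbation data by least squares.

```python
import numpy as np

# Illustrative sketch only: synthetic data stand in for PMU measurements,
# and the second-order dynamics below are hypothetical.
rng = np.random.default_rng(0)

# "True" system: y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]
a1_true, a2_true, b_true = 1.5, -0.7, 0.5
n = 500
u = rng.standard_normal(n)                  # small perturbation input
y = np.zeros(n)
for k in range(2, n):
    y[k] = a1_true * y[k - 1] + a2_true * y[k - 2] + b_true * u[k - 1]
y_meas = y + 0.01 * rng.standard_normal(n)  # sensor noise

# Least-squares fit of the ARX parameters from the measured time series.
Phi = np.column_stack([y_meas[1:-1], y_meas[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y_meas[2:], rcond=None)
a1_hat, a2_hat, b_hat = theta
```

With informative enough perturbations, the estimated coefficients recover the true ones closely; with too little excitation the least-squares problem becomes ill-conditioned, echoing the participant's concern that small perturbations may not carry enough information.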
Another group discussed the importance of making actual and synthetic power system data sets, and the models that accompany these, available to researchers—potentially via open-source methods. A participant suggested that these data sets should span both transmission and distribution systems and go beyond data from the physical system (e.g., measurements utilized by supervisory control and data acquisition (SCADA) systems and measurements obtained with PMUs) to possibly include financial data and other data used in operations (e.g., weather forecast data).
In terms of making real power system data sets and their companion models available to researchers, a participant said it would be necessary to work with industry stakeholders to clear roadblocks related to confidentiality. One participant noted that a possible solution would be to transform the data sets and their companion models so as to mask the actual system to which they correspond. However, it would be desirable for the resulting masked models to preserve the attributes of a realistic power system. A participant noted that one way to demonstrate to industry the necessity and benefits of making these data sets and models available is to point out that similar efforts have been beneficial in other industries. For example, in the 1990s, the airline industry made data and models available to researchers, which resulted in a prolific period of tool and algorithm development from which the industry as a whole benefited.
A participant noted that there are standard Institute of Electrical and Electronics Engineers (IEEE) synthetic power system test models that have been used by researchers for years. However, a problem with these, as pointed out during the discussion, is that their size is not representative of actual power systems; therefore, it is not clear a priori that many tools being proposed by researchers actually scale up to realistic systems. Also, one participant noted, an open issue is how to create synthetic data sets to accompany these models with similar characteristics to those observed in data gathered from a real power system.
Over-fitting was mentioned as a concern in connection with creating libraries of power system data sets and models for researchers to test new techniques and algorithms. Specifically, if the data sets and models made available are not diverse enough, new techniques and algorithms could be over-fitted: they would work well on these particular data sets and models but generalize poorly to other systems or contexts. Thus, the participant noted, a key issue will be to create libraries that are diverse in terms of the power systems and operating contexts that can be encountered in real settings.
Participants noted that while industry provides models, experiments, and software solutions at the transmission level, doing so for the distribution level would
raise a number of fundamental challenges, specifically owing to limitations in our understanding of the underlying physics.
Real-time pricing (RTP) was discussed in the breakout sessions; specifically, can accurate models of customer response to real-time pricing signals be built? A participant noted that the response would naturally depend on customers’ access to price information and load-shifting enabling technologies. It was noted that most RTP programs in place today rely on day-ahead communication of time-varying prices. This participant questioned whether there is any advantage to moving to true RTP. Another participant commented that many utilities are hesitant to transition to true RTP because of potential customer dissatisfaction. It was noted that many customers may have limited incentive to respond to RTP because of small potential monetary savings.
Using RTP to approach load control was also discussed. A participant noted that it may not lead to a reliable real-time response in demand and might instead increase demand volatility. Another participant commented that RTP can induce instabilities or limit cycles in the aggregate load response. A participant suggested that forward contracts for direct load control might offer a better alternative. Some questions arose from this: How would one structure and price such contracts for small residential customers? Perhaps a pricing approach differentiated by quality of service? Another participant commented that peak-time rebates constitute another incentive design for demand response. This, however, requires a reliable estimate of the customer load baseline that is difficult to produce and susceptible to gaming.
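One simple way to frame the customer-response modeling question raised above is a constant-elasticity demand curve; the elasticity and prices below are hypothetical, and real responses would also depend on enabling technology and information access, as participants noted.

```python
# Hypothetical constant-elasticity model of load response to a real-time
# price signal; the elasticity value and prices are illustrative only.
def responsive_load(baseline_kw, price, reference_price, elasticity=-0.1):
    """Load scales as (price / reference_price) ** elasticity."""
    return baseline_kw * (price / reference_price) ** elasticity

# Doubling the price with elasticity -0.1 trims load by roughly 7 percent.
load_kw = responsive_load(10.0, price=0.30, reference_price=0.15)
```

Small elasticities like this illustrate the comment that many customers may see little monetary incentive to respond; larger or time-varying elasticities can likewise reproduce the volatility and limit-cycle concerns raised for aggregate load.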
Breakout groups discussed how to prevent attacks on a communication and control infrastructure used to manage customer loads remotely. A participant from industry noted that control centers are highly sensitive to cybersecurity issues, which makes using the Internet for power system control highly unlikely. Regarding grid control architecture, a participant noted that while the grid started with controlling a few points of generation, the future grid will have to accommodate millions of points of control on the consumer side. A security breach of the grid would be a serious concern, and the participant wondered if an “intranet” solely for the grid would be helpful.
As a specific mathematical challenge, an academic participant proposed the following question: What is the mathematical problem in considering adversary actions against the grid? Also, what is the appropriate mathematical framework to consider low-probability, high-impact events such as geomagnetic disturbances or
terrorist attacks? One industry member suggested that micro-grids may be coming. Combined heat and power systems could prove much more resilient than the bulk power system that currently exists, especially in a geomagnetic disturbance event. A participant from industry made reference to China’s smart grid activities, which focus more on reliability than just distributed intelligence.
Another major risk is the gas−electricity coupling. Gas pipelines deliver fuel at a speed much slower than the speed at which electricity flows. As the number of natural gas power plants grows, the availability of gas as a fuel could become an increasing source of uncertainty. A participant noted that the National Science Foundation’s Resilient Interdependent Infrastructure Processes and Systems (RIPS) program has begun to fund activities in these areas.
Energy Storage and Renewables
A participant wondered whether electric energy storage in the form of pervasive small-scale distributed batteries would solve the variability problem of renewable energy integration. Another participant noted that plug-in electric vehicles might represent a natural path toward delivering energy storage to the power system. A participant commented that it would be important to understand how such technologies get adopted to the point where they could provide such options. Another participant questioned how to manage widespread integration of plug-in electric vehicles into the power system.
How to redesign electricity markets to facilitate the integration of energy storage into the power system was also discussed. A participant noted that capacity, energy, and ancillary service markets need to evolve. Specifically, well-designed markets should induce efficient operation of storage in the short run and expansion (i.e., placement and sizing) in the long run.
Another topic of discussion was how markets would have to evolve in order to support the large-scale integration of renewables. A participant wondered whether Germany’s Energiewende1 is an example of market failure. This participant noted that Germany has many days of wind oversupply (wind generation exceeding the needed load) during which prices become negative. Another participant commented that the community should be wary of free-market approaches because of adverse effects, such as the California energy crisis in 2000.
Breakout groups discussed what current and future mathematical tools might most naturally apply to emerging problems in power (such as renewables integration and demand response). Participants made the following suggestions:
Game theory and mechanism design. The (re)emergence of demand response as a tool to manage variability in renewable supply requires the delineation of new market constructs to engage residential demand-side resources. Tools from dynamic game theory naturally lend themselves to the treatment of such problems. Participants brought up the following points:
- — Consumer utility functions may be unknown.
- — Market mechanisms and schemes need to be “transparent” (i.e., easily understood by market participants).
- — Market mechanisms need to be robust to gaming, such as attempts at baseline misrepresentation in a variety of demand response programs centered on peak-time rebate incentives.
- Linear programming. Participants noted that linear programming (LP) theory and algorithms are mature and there are types of problems in the power system that LP can handle well. However, participants noted that there are some issues that LP cannot handle well, such as an explicit description of uncertainty. The existing optimization frameworks that can capture uncertainty were discussed (i.e., stochastic optimization and robust optimization), and participants raised the question of whether these are scalable to large power system problems. Several participants stated that while these tools are promising, much more research is needed before they are of practical value to the industry. A participant also commented on the necessity of including constraints that capture certain dynamics requirements.
- Mixed-integer programming. Some comments were made about why mixed-integer programming (MIP) has not yet prevailed in power market software. The concern stated was that MIP is highly sensitive to constraints. Some industry representatives mentioned that operators could not afford
to use MIP because the locational marginal prices were much less stable in MIP. A participant mentioned the difficulty of dealing with multiple solutions.
- High-fidelity grid simulation tools. Available and future high-fidelity grid simulation tools were discussed. A participant commented that reliable simulation tools are essential to informing policy decisions as they relate to the power system. This participant noted that designing a simulator that layers policy, planning, and nominal and contingency operations would be helpful. One question posed was whether there is currently a simulator capable of simulating an entire blackout at the most granular of spatial scales. A participant noted that nearly all blackouts start at the distribution level, which is not typically modeled in simulation tools. Another participant noted that disturbances in the distribution system percolate into the transmission system, and vice versa.
- Power-flow models. Participants discussed the inherent inefficiency of the current DC-optimal power flow and noted that the industry would benefit from moving toward AC-optimal power flow to optimize voltage and to better account for consumer behavior. A participant noted that this is an area where the mathematics community could assist the ISOs’ transition toward next-generation control centers.
- Human-in-the-loop models. The importance of building mathematical models to consider humans-in-the-loop was mentioned.
- Data analysis. Challenges of extracting information from massive amounts of data were discussed. Participants noted that, from a mathematical perspective, numerical integration algorithms for such hybrid dynamic systems are difficult. Also, differential algebraic equation solvers need to provide timely solutions for grid operators. Another question was posed regarding how to leverage data-driven statistical models to improve upon (first-principles) physical models. Participants mentioned the following metrics of interest for new models: their computational scalability, prediction power, and integrability into real-time control schemes.
- Understanding causality. A comment was made about the importance of understanding causality (i.e., the physical principle) when dealing with data and modeling. Also, how can a system learn over time from the measurement data? A participant wondered if this could be done using self-organized data-based swarm control, similar to what is used in Google’s driverless cars.
- Dynamic models for stability analysis and feedback control. Participants discussed the need for revisiting dynamic models commonly used for stability analysis and feedback control design. A participant noted that dynamic stability and control ceased to be a problem of interest for power system
researchers years ago because of redundancy in transmission systems, advanced relaying, and sufficient inertia provided by large generators. However, with the changes in structure and functionality that power systems are undergoing due to integration of renewable-based generation and other technologies, this problem may be of interest again (e.g., renewable-based generation results in decreased inertia in the system). Current centralized closed-loop power system controllers such as automatic generation control were also discussed; specifically, they may not deliver their intended function properly because of latency, delays, and other issues that are not usually included in models used to design these controllers. Several participants noted that there is a need to revisit dynamic modeling in power systems and that it might be necessary to explicitly describe the communication infrastructure for monitoring and control. By doing this, the participants noted, not only will it be possible to understand the impact of delays, packet drops, and latency that naturally arise from normal operation, but it will also be possible to quantify the potential impact of cyberattacks on system operations. Finally, it is important to note that while the discussion mostly focused on transmission systems, many participants mentioned that some of the same problems may arise in distribution systems due to the increased penetration of distributed energy resources and the increased reliance on communications and control for managing assets in these systems.
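To make the LP dispatch and locational-marginal-price discussion above concrete, the sketch below solves a toy single-bus economic dispatch with invented offer data and recovers the marginal price by re-solving with demand increased by 1 MW (a finite-difference stand-in for reading dual variables):

```python
from scipy.optimize import linprog

# Toy single-bus economic dispatch; all offers and limits are invented.
costs = [20.0, 50.0]           # $/MWh offer prices for two generators
bounds = [(0, 100), (0, 100)]  # MW output limits

def dispatch_cost(demand_mw):
    # Minimize total cost subject to generation balancing demand.
    res = linprog(c=costs, A_eq=[[1, 1]], b_eq=[demand_mw],
                  bounds=bounds, method="highs")
    return res.fun

base_cost = dispatch_cost(120.0)        # cheap unit hits its 100 MW cap
lmp = dispatch_cost(121.0) - base_cost  # marginal cost of one more MW
```

Here the cheap unit saturates, the expensive unit serves the remainder, and the next megawatt is priced at the expensive unit's offer. The price instability participants attributed to MIP arises because integer commitment decisions can make this marginal price jump discontinuously.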
Participants noted that mathematical scientists could play a central role in improving the state of the art in power system dynamics analysis in the area of nonlinear and stochastic control and stability. The value of control, a participant noted, needs to be carefully studied in a market setting. One participant from industry commented that there is no payment for primary frequency response. One academic participant asked, Could we do micro-pricing such as is done with Internet advertising? Multiperiod unit commitment is, broadly, a problem of stochastic control. Economic dispatch with renewables amounts to a constrained sequential decision problem under uncertainty.
A Paradigm Shift to a Distributed Power System
During a breakout session discussion, a participant noted that distributed generation is becoming more affordable and efficient—in large part because natural gas is currently inexpensive. The subgroup discussed how the utility could be impacted if distributed generation and small storage were widespread. Other participants posed questions regarding what role distribution system operators would play and how the transition from today to this future could be managed
(e.g., through markets or legislative mandate). Participants suggested three possible solutions to enabling endpoint control of distributed resources:
- Centralized control through the ISO,
- Hierarchical control through aggregators (with a to-be-determined level of aggregation resolution), and
- Fully decentralized control.
A participant noted that one challenge is how to transform optimal power flow to a decentralized approach that is scalable. It was observed that the existing SCADA systems and energy management systems (EMSs) involve information primarily from the generator side. However, multidirectional information exchange between supply and demand sides is needed in the future. Potential future micro-grids were discussed in multiple breakout sessions. A participant noted that micro-grids are touted for their favorable reliability properties; specifically, they are resilient to bulk transmission system failures and easily reconfigured. One participant posed the question of how utilities should evolve and/or what role they should play to enable the proliferation of micro-grids, and whether they should play an active role as a micro-grid operator. A participant commented that this would require the development of distribution automation systems and that distributed control theory would likely play an important role, as centralized control would likely fail to scale with the dimension of the system (e.g., 10^6 to 10^7 end points would need to be controlled). An academic participant posed a new vision for the grid with little or no bulk transmission-level energy production and no central operator and with all distributed systems coordinating and exchanging just like the intelligent periphery in the Internet, including peripheral mass storage. The research question then is how to conceptualize such a system mathematically, including what kind of pricing incentive or other incentives would enable such a vision, and how people’s behavior would change. Nordic power systems are similar to this, one participant noted, with hydro serving as “battery storage” for the Danish wind. An industry participant commented that the question is how to transition from here to there.
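As a minimal illustration of the decentralized coordination idea, the sketch below runs a standard gossip/consensus averaging iteration on a ring of nodes: every node repeatedly averages with its two neighbors, and all local values converge to the global mean with no central operator. Topology, weights, and data are all hypothetical.

```python
import numpy as np

# Hypothetical ring network of nodes holding local measurements.
rng = np.random.default_rng(3)
n = 20
x = rng.uniform(0.0, 10.0, size=n)
global_mean = x.mean()

# Doubly stochastic averaging weights: 1/2 self, 1/4 for each ring neighbor.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

# Each iteration uses only neighbor-to-neighbor communication.
for _ in range(2000):
    x = W @ x

max_deviation = np.abs(x - global_mean).max()
```

Consensus-style schemes like this use only local exchanges, which is why distributed control theory is viewed as a candidate for systems with 10^6 to 10^7 end points, although convergence speed depends strongly on network topology.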
Another participant from industry commented that it is not clear what the business infrastructure will be 15 years from now. For example, how much will consumers have to pay in order to be off the grid? Can the current market optimization software adapt to the future environment? The participant commented that comprehensive output analysis of software solutions is needed.
A participant questioned whether the increasing penetration of rooftop solar alone has the ability to induce hardship within utilities. A participant noted that San Diego Gas and Electric has proposed a network fee for rooftop solar installations to offset lost revenue. Wisconsin and Indiana laws allow utilities to recover their fixed costs by charging rooftop solar owners a fee to cover fixed distribution infrastructure cost. The Hawaiian legislature is discussing a similar approach. Participants wondered whether this solution is temporary or desirable.
There was also some discussion of the different strategies available to combat financial hardship on utilities. A participant suggested unbundling the distribution system but wondered how this would work practically. Local power transmission and distribution was also discussed. A participant noted that it is not currently possible for one neighbor to send physical power to another and wondered what it would take to start small experiments like this. A reference was made to the phone system in the 1980s, which faced similar problems that have since been overcome. A question was raised on the analogy between the Internet and the power grid: Can the power community draw some lessons from the architectural design of the Internet and use them to design the future grid? One comment was made that in contrast to a layered architecture, perhaps “timescale engineering” architecture would be needed for the future grid.
As an attempt to pose a mathematical question, it was suggested that one consider large-scale decentralized control with stiff electrical constraints. Existing large-scale system theory does not lend itself well to many of the problems in power systems. For example, many theorems are based on the assumption of weak interactions, and they address only sufficient, not necessary, conditions. More theories tailored for the case of large power systems need to be developed.
Several breakout groups discussed the issue of managing uncertainty in the grid and the importance of doing so for the future power system. Uncertainty in these discussions related to both modeling and predictive uncertainty and to variability in available power supplies, particularly from renewable energy sources.
Some framing questions were suggested, specifically, where are the uncertainties coming from in the next 10 to 20 years? What are the mathematical challenges? It was suggested that optimization under uncertainty is of paramount importance. How can scenario reduction be done without the loss of critical contingencies? As an example, the progressive hedging algorithm can handle only 100 or so scenarios within hours, but this is too slow for the industry. Another academic representative suggested that a major source of uncertainty comes from demand response—even more than renewables.
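The scenario view of optimization under uncertainty can be sketched in a few lines. The toy problem below (all numbers invented) commits reserve capacity before wind output is known and pays a penalty for any shortfall, with the expectation taken over sampled scenarios, the same structure that algorithms such as progressive hedging decompose at much larger scale:

```python
import numpy as np

# Invented two-stage problem: buy reserve now, pay for shortfall later.
rng = np.random.default_rng(1)
wind_scenarios = rng.uniform(20.0, 80.0, size=100)  # MW, equally likely
demand = 100.0                                      # MW
reserve_cost, shortfall_cost = 10.0, 100.0          # $/MW

def expected_cost(reserve_mw):
    shortfall = np.maximum(demand - wind_scenarios - reserve_mw, 0.0)
    return reserve_cost * reserve_mw + shortfall_cost * shortfall.mean()

# Brute-force the scalar first-stage decision over a grid of candidates.
candidates = np.linspace(0.0, 100.0, 1001)
best_reserve = min(candidates, key=expected_cost)
```

With a single decision variable a grid search suffices; the scalability question participants raised is precisely that realistic problems have thousands of coupled decisions and far more scenarios than this.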
Discussions about the variability of renewable energy included how to redesign electricity markets to correctly dispatch and price renewable supply. Participants wondered what information the market participants need and whether risks are being allocated fairly between suppliers and consumers.
Two sources of uncertainty discussed were weather and age. PJM experienced record heat in 2011 and 2012, followed by Hurricane Sandy in late 2012. A participant
suggested that the power industry examine how the insurance industry manages these risks due to extreme weather to see if any lessons can be learned. The operating capability of old and aging assets, such as mechanical equipment, can introduce uncertainty as well.
A mathematician made the comment that it would be important to differentiate between model uncertainty and the stochastic characterization of model variables. In terms of uncertainty quantification, one academic remarked that both the answer itself and the confidence associated with that answer are important. Combining software packages can also be challenging, one participant noted, and there is currently no single software package that can simulate an entire blackout.
A control theory expert mentioned that the future new technologies would be the biggest source of uncertainty. Perhaps it would be useful to look at studies carried out by the Australian Academy of Technological Sciences and Engineering, which aimed to optimize the grid under a variety of different scenarios.
A participant noted that many parameters in electrical models are uncertain. It is a challenge and, in the meantime, an opportunity to adaptively estimate these parameters using field measurements. For example, load dynamics is a major area for further improvement. One industry participant commented that verification of models works well but validation is difficult. Even if an individual component is well validated, when multiple components are interconnected the coupling and interactions make validation very challenging. Also, are the phenomena that naturally happen in a power system sufficient to validate the models for the wide range of operating conditions that can happen only infrequently?
The need for standards for an interoperable grid was discussed, including whether information exchange for designing such standards is important or whether it can be completely decentralized. It was noted that control and standards previously differentiated between normal and abnormal operating conditions, but more recently the boundary between normal and abnormal has become less clear. Traditional deterministic N–1 approaches in reliability standards would be too costly and unreliable. A participant suggested that a risk-based approach is needed.
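The contrast between deterministic N−1 screening and a risk-based approach can be sketched simply: instead of treating every single-element outage as equally binding, weight each contingency's load-loss impact by its probability. All outage rates and impacts below are invented for illustration.

```python
# Invented contingency data: outage probability and load lost if it occurs.
contingencies = {
    "line_A": {"prob_per_year": 0.50, "load_lost_mw": 10.0},
    "line_B": {"prob_per_year": 0.02, "load_lost_mw": 300.0},
    "xfmr_C": {"prob_per_year": 0.10, "load_lost_mw": 40.0},
}

def expected_risk_mw(c):
    return c["prob_per_year"] * c["load_lost_mw"]

# Rank contingencies by expected load loss rather than worst case alone.
ranked = sorted(contingencies,
                key=lambda name: expected_risk_mw(contingencies[name]),
                reverse=True)
```

Under this metric the rare but severe line_B outage tops the list; a pure N−1 rule would instead secure against all three equally, at potentially much higher cost.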
A power system engineer made the point that the power grid was indeed designed to be plug and play. The automatic generation control was designed to perform such functionalities with control-error information from a very limited area. A participant wondered whether such information simplicity can be retained.
Markets on Shorter Timescales
Another issue discussed was the incentive mechanisms available in closer-to-real-time electricity markets. Participants commented that it is difficult to provide incentives for faster-timescale markets and that the transient behavior of smart appliances will likely have a noticeable impact on maintaining the stability of the grid.
An industry participant mentioned that most energy transactions are committed in day-ahead markets. Only a very small portion of the energy is transacted through real-time markets, which are settled at real-time locational marginal prices.
Another industry participant reiterated the point that power electronics could be smart and fast but that this requires a different paradigm of control in comparison to traditional resistive load. Yet another industry participant spoke of the importance of centralized and distributed optimization methods, in particular, with demand response and an even larger number of decision variables.
High-Voltage Technology and Power Electronics Improvements
Another window of opportunity discussed for the future grid was high-voltage direct-current (HVDC) technologies or, more generally, making the wires more efficient. A participant noted that a more futuristic view would be creating a grid of HVDC. A difficult technical challenge would be how to do protection and control in an HVDC-meshed network. An academic participant followed up and posed a question about network-level control of flexible alternating current transmission systems (FACTS) devices for grid-level performance. It was noted that there will likely be many smaller and smarter switches in the future, and coordinating them may be challenging. A participant noted that it would be important to quantify the economic and technical benefits that introducing HVDC would bring to the grid; China’s deployment of HVDC may serve as an example.
Along this line of discussion, it was mentioned that phase shifters are often not well coordinated in the optimization models. It is a computational challenge to integrate flexible control into the market-clearing software and to account for post-contingency conditions.
Several attendees discussed the fundamental challenge of protection in a power grid that is rich with HVDC and FACTS devices. Unlike AC protection, HVDC protection cannot rely on the current crossing zero;2 therefore, one needs to shut off all equipment in order to provide protection. This is a major challenge for the hardware community. The topic of protection was further discussed. More research is needed to move from protecting equipment to protecting the system. For example, how can the set point and control logic of a relay be learned adaptively? This is a potential area where mathematics could play a central role, as machine learning and intelligent alarm processing will be crucial.

2 Alternating currents oscillate and cross zero every half period, which creates conditions for self-extinguishing of currents in mechanical circuit breakers. In a direct current system, there is no such natural zero crossing, and currents must be forced to zero by external means before mechanical switches can be opened (Cairoli et al., 2010).
Updated Data and Models Needed
Regarding data sharing among utilities, one industry representative commented on the Federal Energy Regulatory Commission Critical Infrastructure Protection cybersecurity reliability standards, which may have kept the data away from most researchers. Another industry member commented on the outdated software codes that underpin much of today’s core software. Many solvers were written decades ago in Fortran code that is no longer upgradable. Much of the software is company proprietary, which makes benchmarking and cross-comparison very difficult. “Open access” is extremely important and will require a change of business models.
Another discrepancy noted was the difference between the market management system and the operating software. It is important to have a uniform and consistent model across different grid operation applications. Modeling loads with human factors would be extremely important, as would the interfaces between different software packages. Providing an integrated suite of software able to solve many problems from one program is a challenge.
An industry representative remarked once more on the importance of natural gas and power co-optimization. The co-optimization of transmission and distribution networks also would be very important.
On one hand, computing tasks would benefit greatly from cloud-based solutions. On the other hand, due to cybersecurity concerns, the power companies would not want to use public cloud computing for power system purposes. In this regard, it was suggested that the DOE national laboratories could play a crucial role in providing secure cloud-based solutions for the power industry.
Workshop participants were given the opportunity to give short presentations during the wrap-up session. Speakers were Yonghong Chen (Midcontinent ISO), Bita Analui (University of Vienna), Judy Cardell (Smith College), Michael (Misha) Chertkov (Los Alamos National Laboratory), Ranjit Kumar (InfSys LLC), Cynthia Rudin (Massachusetts Institute of Technology), and Terry Boston (PJM).
Yonghong Chen discussed the importance of mathematics and computations to companies such as Midcontinent ISO (MISO) and PJM. MISO has a footprint
from Manitoba, Canada, to the Gulf of Mexico, and PJM is running the largest electricity market in the world. The centralized electricity market has brought significant benefits to society through the application of optimization and advanced policy analysis software, which can provide robust market solutions at 5-minute intervals. The industry is changing rapidly, and model complexity continues to increase. These models continue to push solvers (that rely on methods such as MIP) to their limits, and it is important that the mathematics community be engaged to address the performance challenges. While this workshop focused on the future grid, Chen noted that today’s challenges—such as renewable integration, storage, increased demand response participation, increased uncertainty, and more decision variables within models—also require new solutions.
Bita Analui provided an alternative algorithmic approach for the robust integration of renewables into the energy system. Multistage stochastic optimization and modeling is an appropriate tool to combine the dynamic characteristics and stochastic parameters of real-world decision problems. An essential step in solving these problems is modeling reality in such a way that model solutions are appropriate and accurate enough to be used as real-world decisions. Analui explained that there are two ways to approach this modeling: (1) by using probability models, which give a description of the underlying uncertainty as random variables or random processes via probability distributions, or (2) by using scenario models, which are less complex, finite approximations of the probability models. In the real world, the true probability model is not known, and several models can represent the data equally well. This model uncertainty is usually ignored in stochastic optimization, and classical robust optimization considers only the worst-case scenario. Analui noted that a multistage distributionally robust approach could bridge this gap by taking into account the whole ambiguity set defined either parametrically or non-parametrically. The optimization problem can then be robustly solved by incorporating all the models rather than only one baseline model. This would quantify the cost of robustness, accounting for model uncertainty (Analui and Pflug, 2014).
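A minimal numerical sketch of the distributionally robust idea (with invented scenarios and candidate probability models, not Analui's actual formulation) compares the decision obtained under one baseline model with the decision that hedges against the worst model in an ambiguity set:

```python
import numpy as np

# Invented data: three wind outcomes and three candidate probability models.
scenarios = np.array([30.0, 50.0, 70.0])      # MW wind outcomes
models = np.array([[0.2, 0.6, 0.2],           # baseline model
                   [0.5, 0.3, 0.2],           # pessimistic model
                   [0.1, 0.4, 0.5]])          # optimistic model
demand, reserve_cost, shortfall_cost = 100.0, 30.0, 100.0

def cost_per_scenario(reserve_mw):
    shortfall = np.maximum(demand - scenarios - reserve_mw, 0.0)
    return reserve_cost * reserve_mw + shortfall_cost * shortfall

def worst_case_cost(reserve_mw):
    # Expected cost under the worst model in the ambiguity set.
    return float((models @ cost_per_scenario(reserve_mw)).max())

candidates = np.linspace(0.0, 100.0, 1001)
baseline_reserve = min(candidates,
                       key=lambda r: float(models[0] @ cost_per_scenario(r)))
robust_reserve = min(candidates, key=worst_case_cost)
```

With these made-up numbers the baseline model alone commits less reserve than the distributionally robust choice; the gap between the two decisions' worst-case costs quantifies the "cost of robustness" that Analui described.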
Judy Cardell discussed the importance of privacy for demand response data, specifically with respect to the scope and duration of collected data (including the creation of new databases of personal data). She noted that there is and will be a market for these data. Some key considerations are the cultural and generational differences in privacy expectations and how much privacy can truly be protected. In an earlier breakout session, a participant also noted the importance of keeping privacy issues in mind because customers may be wary of handing over control of their appliances to a third party or utility. Another participant noted that designing mathematical models that preserve consumer privacy while utilizing sensitive data is a mathematical challenge.
Cardell described four different approaches to data collection:
- Continue business as usual without any real-time pricing or demand response;
- Allow appliance and HVAC demand response capability with one-way data flow, never collecting data;
- Create privacy-aware design with two-way data flow, processing data at the source; or
- Store all raw demand data flows in an ISO or aggregator, trusting that the data will not be misused.
She observed that the current market is jumping from option 1 to option 4, often overlooking the privacy concerns associated with the transition, and said that focusing on technology to facilitate options 2 and 3 would help address both current and future privacy concerns. If the data are collected and a database is created, there will be a market for those data from government, corporations, and aggregators, so privacy protections need to be designed into the emerging demand response framework. Cardell also noted that the solution space depends heavily on the pricing framework (e.g., real-time pricing versus measuring interval consumption versus baselines).
Cardell concluded by discussing the analytic capabilities needed for one-way communication of reliability and price signals to customers and for data processing at the source. One-way communication can be facilitated by end-use technologies such as smart plugs, GridWise, or home automation, although without two-way communication it is difficult for an operator to measure compliance. Data processing at the source allows private information to be anonymized while maintaining the usefulness of the data to the ISO or aggregator.
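Option 3 (two-way flow with processing at the source) can be illustrated with a hypothetical sketch: the meter evaluates a curtailment event against its own raw readings locally and reports only a single aggregate figure, so the appliance-level trace never leaves the home. The function name, the baseline convention, and all numbers below are illustrative assumptions, not an actual smart-meter API.

```python
# Hypothetical sketch of privacy-aware processing at the source: the raw
# appliance-level trace stays inside the home; only one aggregate is reported.

def local_compliance_report(raw_trace_kw, baseline_kw, interval_hours):
    """Computed inside the home: kWh reduced relative to an agreed baseline."""
    reduction_kw = [max(baseline_kw - kw, 0.0) for kw in raw_trace_kw]
    step = interval_hours / len(raw_trace_kw)   # hours per reading
    return sum(r * step for r in reduction_kw)  # one number, not a trace

# A 1-hour curtailment event, four 15-minute readings, 3 kW agreed baseline.
raw_trace = [2.0, 1.5, 1.5, 2.5]   # appliance-level detail never transmitted
report = local_compliance_report(raw_trace, baseline_kw=3.0, interval_hours=1.0)
print("reported to aggregator: %.3f kWh reduced" % report)
```

Note that even this sketch depends on a baseline convention, which connects to Cardell's point that the solution space is sensitive to the pricing framework chosen.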
Michael (Misha) Chertkov discussed gas-grid reliability, grid sciences, and physics/grid machine learning. Chertkov noted that the power fleet is evolving, gas is becoming significant, and renewables are growing. He noted that the gas system is evolving as well, such as with the addition of the Transco pipeline; the increase in shale gas development; and improvements in flow reversals, liquefied gas, and storage. He noted that gas generation is growing because it is comparatively inexpensive and gas turbines can respond quickly to fluctuations, such as those introduced by more renewables in the system. The current reality is that gas and electricity have two separate markets, but their interdependence limits reliability on both sides.
Chertkov called for new academic studies in operations (such as gas-flow awareness; generation dispatch; and stochastic, chance-constrained, and robust optimal power flow accounting for risks) and planning (such as regulations, reserves, liquefied gas, and gas storage especially for emergencies). He concluded by highlighting a few technical modeling challenges for the gas grid, specifically, line-pack gas dynamics, cost probabilities, mixing physics, statistics, optimization, and control.
Ranjit Kumar discussed the importance of voltage stability in a stable power
grid. He provided the following references for more information: Kumar (2011, 2012); Kumar et al. (2011, 2012); Moslehi and Kumar (2010); and Moslehi et al. (2004, 2006a,b, 2008).
Cynthia Rudin discussed two case studies of machine learning for power grid reliability. The first case study is of reactive point processes (Ertekin et al., 2014), a statistical model derived from the New York City electric grid. Most reliability issues in New York City are due to the low-voltage secondary grid. Reactive point processes aim to provide short-horizon and real-time event prediction. It is a generative model: fit to past data, it can be used to simulate the future and to decide which maintenance policies should be enacted. It is a very general marked point process model. The model works by establishing a baseline vulnerability, defined as the probability of failure at any given time. If an event happens, the probability of another event rises, and then the vulnerability slowly decreases over time. Similarly, if the secondary grid is inspected at a particular place and time, the probability of an event associated with that part of the system decreases, but it slowly rises back to the baseline vulnerability over time. The model has four key characteristics: it is saturating, self-exciting, and self-regulating, and it includes a baseline adjustment. The model incorporates the past history of inspections, the number of cables at each inspection site, the ages of the cables, and related covariates, and it is currently the best tool for predicting power failures in New York City (outperforming the Cox proportional hazards model and the current long-term prediction model).
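The qualitative behavior described above can be illustrated with a simplified conditional-intensity sketch. The actual model in Ertekin et al. (2014) uses specific saturating functional forms fit to grid data; the forms and constants below are illustrative assumptions only.

```python
import math

# Simplified sketch of a reactive point process intensity (illustrative
# constants, not the fitted New York City model).
BASELINE = 0.1      # baseline vulnerability (events per unit time)
A_EXCITE = 0.5      # jump in intensity after each past failure event
A_REGULATE = 0.08   # reduction in intensity after each inspection
DECAY = 1.0         # rate at which both effects fade back toward baseline

def saturate(x, cap):
    """Saturating transform: effects of many events/inspections level off."""
    return cap * (1.0 - math.exp(-x / cap))

def intensity(t, event_times, inspection_times):
    """Conditional intensity at time t given past events and inspections."""
    excite = sum(A_EXCITE * math.exp(-DECAY * (t - te))
                 for te in event_times if te < t)
    regulate = sum(A_REGULATE * math.exp(-DECAY * (t - ti))
                   for ti in inspection_times if ti < t)
    # Self-exciting term saturates; self-regulating term is capped so it
    # cannot push the intensity below zero.
    lam = BASELINE + saturate(excite, 1.0) - saturate(regulate, BASELINE)
    return max(lam, 0.0)

# A failure at t=1.0 spikes the vulnerability, which then decays toward
# baseline; an inspection at t=3.0 lowers it relative to no inspection.
events, inspections = [1.0], [3.0]
for t in (0.5, 1.1, 2.0, 3.1, 6.0):
    print("t=%.1f  intensity=%.3f" % (t, intensity(t, events, inspections)))
```

The four named characteristics all appear: the `saturate` transform (saturating), the event sum (self-exciting), the inspection sum (self-regulating), and the `BASELINE` term (baseline adjustment).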
The second case study Rudin discussed is the latent state hazard model (Moghaddass and Rudin, 2014). This model aims to predict failure by separating the latent (internal) vulnerability from vulnerability due to external sources using wind turbine data.
Terry Boston, from PJM, discussed the broad importance of improving the grid, specifically given the uncertainty that lies ahead. He highlighted the following ways that the grid may evolve over the coming decades:
- High or low future growth. There is a remanufacturing movement within the United States, and petrochemicals are moving into the Midwest through the use of liquid natural gas. Load growth forecasts range from 0.5 to 3 percent, and some forecasts even call for a small decrease. The loads being forecast are different as well: technology is improving to make energy use more efficient. For example, new refrigerators have variable-speed-drive compressors whose load characteristics are very different from what has been seen in the past.
- Concentrated or highly diverse fuel portfolio. The recent boom in construction of combined-cycle natural gas generation facilities implies a concentrated portfolio, but distributed rooftop solar sources are on the rise. With natural gas being so inexpensive, it is difficult for other technologies to compete.
- Distributed self-supply by smaller units or centralized supply by large units. Decentralized self-supply (e.g., solar) needs to be backed up by centralized supply (e.g., gas). Currently, large units provide most generation, but Boston foresees a time when generation and customers are more integrated. An integrated grid, with micro-grids that can operate independently when needed, will help improve the security of the grid.
- Autonomous micro-grid or central grid control. Both will be important to improve stability and security of the grid.
Boston concluded by emphasizing the need for open-source software to improve flexibility, specifically to develop control systems that can focus on the integration of transmission, distribution, and generation. The physics and mathematics to do this exist, but the codes need to be developed.