Brenden Jongman, Hessel C. Winsemius, Stuart A. Fraser, Sanne Muis, and Philip J. Ward
The flooding of rivers and coastlines is the most frequent and damaging of all natural hazards. Between 1980 and 2016, total direct damages exceeded $1.6 trillion, and at least 225,000 people lost their lives. Recent events causing major economic losses include the 2011 river flooding in Thailand ($40 billion) and the 2012 coastal floods in the United States caused by Hurricane Sandy (over $50 billion). Flooding also triggers great humanitarian challenges. The 2015 Malawi floods were the worst in the country’s history and were followed by food shortages across large parts of the country.
Flood losses are increasing rapidly in some world regions, driven by economic development in floodplains and increases in the frequency of extreme precipitation events and global sea level due to climate change. The largest increase in flood losses is seen in low-income countries, where population growth is rapid and many cities are expanding quickly. At the same time, evidence shows that adaptation to flood risk is already happening, and a large proportion of losses can be contained successfully by effective risk management strategies. Such risk management strategies may include floodplain zoning, construction and maintenance of flood defenses, reforestation of land draining into rivers, and use of early warning systems.
To reduce risk effectively, it is important to know the location and impact of potential floods under current and future social and environmental conditions. In a risk assessment, models can be used to map the flow of water over land after an intense rainfall event or storm surge (the hazard). Modeled for many different potential events, this provides estimates of potential inundation depth in flood-prone areas. Such maps can be constructed for various scenarios of climate change based on specific changes in rainfall, temperature, and sea level.
To assess the impact of the modeled hazard (e.g., cost of damage or lives lost), the potential exposure (including buildings, population, and infrastructure) must be mapped using land-use and population density data and construction information. Population growth and urban expansion can be simulated by increasing the density or extent of the urban area in the model. The effects of floods on people and different types of buildings and infrastructure are determined using a vulnerability function. This indicates the damage expected to occur to a structure or group of people as a function of flood intensity (e.g., inundation depth and flow velocity).
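The damage-estimation step described above can be sketched in a few lines of code. This is a minimal illustration, not any specific model: the curve shape, saturation depth, and exposure value are hypothetical, chosen only to show how a depth-damage vulnerability function converts modeled inundation depth and exposed value into a loss estimate.

```python
# Illustrative depth-damage vulnerability function (names and coefficients
# are hypothetical, not taken from a published model).

def damage_fraction(depth_m: float, max_depth_m: float = 6.0) -> float:
    """Fraction of a building's value lost as a function of inundation depth.

    A simple concave curve: damage rises quickly in shallow water and
    saturates at 1.0 once depth reaches max_depth_m.
    """
    if depth_m <= 0:
        return 0.0
    return min(1.0, (depth_m / max_depth_m) ** 0.5)

def expected_loss(depth_m: float, exposure_value: float) -> float:
    """Monetary loss = exposed value times the damage fraction."""
    return exposure_value * damage_fraction(depth_m)

# Example: a building worth 200,000 under 1.5 m of water
loss = expected_loss(1.5, 200_000)
```

In practice such functions are estimated per building class from observed losses; real models may also take flow velocity or flood duration as additional arguments.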
Potential adaptation measures such as land-use change or new flood defenses can be included in the model in order to understand how effective they may be in reducing flood risk. This way, risk assessments can demonstrate the possible approaches available to policymakers to build a less risky future.
Matthias Jakob, Kris Holm, and Scott McDougall
Debris flows are among the most destructive landslide processes worldwide, given their ubiquity in mountainous areas occupied by human settlements or industrial facilities. Because of their episodic nature, these hazards are often unrecognized or under-recognized.
Three fundamental components of debris-flow risk assessment are frequency-magnitude analysis, numerical scenario modeling, and consequence analysis to estimate the severity of damage and loss. Recent advances in frequency-magnitude analysis take advantage of developments in methods to estimate the age of deposits and the size of past and potential future events. Nevertheless, creating reliable frequency-magnitude relationships is often challenged by practical limits on investigating and statistically analyzing past debris-flow events, whose records are often discontinuous as well as temporally and spatially censored. To estimate flow runout and destructive potential, several models are used worldwide. Simple empirical models have been developed based on statistical geometric correlations, and two-dimensional and three-dimensional numerical models are commercially available. Quantitative risk assessment (QRA) methods for assessing public safety were developed for the nuclear industry in the 1970s and have been applied to landslide risk in Hong Kong since 1998. Debris-flow risk analyses estimate the likelihood of a variety of consequences. Quantitative approaches involve prediction of the annual probability of loss of life to individuals or groups and estimates of annualized economic losses. Recent progress in quantitative debris-flow risk analysis includes improved methods to characterize elements at risk within a GIS environment and to estimate their vulnerability to impact. Improvements have also been made in how these risks are communicated to decision makers and stakeholders, including graphic display on conventional and interactive online maps. Substantial limitations remain, including the practical impossibility of estimating every direct and indirect risk associated with debris flows and a shortage of data to estimate vulnerabilities to debris-flow impact.
Despite these limitations, quantitative debris-flow risk assessment is becoming a preferred framework for decision makers in some jurisdictions, to compare risks to defined risk tolerance thresholds, support decisions to reduce risk, and quantify the residual risk remaining following implementation of risk reduction measures.
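The quantitative prediction of annual probability of loss of life mentioned above is conventionally computed as a chain of conditional probabilities. The sketch below shows that multiplicative structure; the factor values are purely illustrative, not drawn from any particular assessment.

```python
# Minimal sketch of the conditional-probability chain commonly used in
# quantitative risk analysis (QRA) for individual loss of life from a
# debris flow. All numbers below are illustrative.

def annual_prob_loss_of_life(p_event: float,
                             p_spatial: float,
                             p_temporal: float,
                             vulnerability: float) -> float:
    """P(loss of life) = P(H) * P(S|H) * P(T|S) * V(D|T)

    p_event       -- annual probability of a debris flow of a given magnitude
    p_spatial     -- probability the flow reaches the element at risk
    p_temporal    -- probability a person is present when it does
    vulnerability -- probability of death given impact
    """
    return p_event * p_spatial * p_temporal * vulnerability

# e.g. a 1-in-100-year event, 50% chance the runout reaches the house,
# occupant home 60% of the time, 30% chance of death given impact:
risk = annual_prob_loss_of_life(0.01, 0.5, 0.6, 0.3)  # 9e-4 per year
```

The resulting annual probability is what gets compared against jurisdictional risk tolerance thresholds; summing over magnitude classes gives the total individual risk.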
Evolution of Strategic Flood Risk Management in Support of Social Justice, Ecosystem Health, and Resilience
Throughout history, flood management practice has evolved in response to flood events. This heuristic approach has yielded some important incremental shifts in both policy and planning (from the need to plan at a catchment scale to the recognition that flooding arises from multiple sources and that defenses, no matter how reliable, fail). Progress, however, has been painfully slow and sporadic, but a new, more strategic, approach is now emerging.
A strategic approach does not, however, simply sustain an acceptable level of flood defense. Strategic Flood Risk Management (SFRM) is an approach that relies upon an adaptable portfolio of measures and policies to deliver outcomes that are socially just (when assessed against egalitarian, utilitarian, and Rawlsian principles), contribute positively to ecosystem services, and promote resilience. In doing so, SFRM offers a practical policy and planning framework to transform our understanding of risk and move toward a flood-resilient society. A strategic approach to flood management involves much more than simply reducing the chance of damage through the provision of “strong” structures and recognizes adaptive management as much more than simply “wait and see.” SFRM is inherently risk based and implemented through a continuous process of review and adaptation that seeks to actively manage future uncertainty, a characteristic that sets it apart from the linear flood defense planning paradigm based upon a more certain view of the future.
In doing so, SFRM accepts there is no silver bullet to flood issues and that people and economies cannot always be protected from flooding. It accepts flooding as an important ecosystem function and that a legitimate ecosystem service is its contribution to flood risk management. Perhaps most importantly, however, SFRM enables the inherent conflicts as well as opportunities that characterize flood management choices to be openly debated, priorities to be set, and difficult investment choices to be made.
Floods affect more people worldwide than any other natural hazard. Flood risk results from the interplay of a range of processes. For river floods, these are the flood-triggering processes in the atmosphere, runoff generation in the catchment, flood waves traveling through the river network, possibly flood defense failure, and finally, inundation and damage processes in the flooded areas. In addition, ripple effects, such as regional or even global supply chain disruptions, may occur.
Effective and efficient flood risk management requires understanding and quantifying the flood risk and its possible future developments. Hence, risk analysis is a key element of flood risk management. Risk assessments can be structured according to three questions: What can go wrong? How likely is it that it will happen? If it goes wrong, what are the consequences? Before answering these questions, the system boundaries, the processes to be included, and the detail of the analysis need to be carefully selected.
One of the greatest challenges in flood risk analyses is the identification of the set of failure or damage scenarios. Often, extreme events beyond the experience of the analyst are missing, which may bias the risk estimate. Another challenge is the estimation of probabilities. There are at most a few observed events where data on the flood situation, such as inundation extent, depth, and loss are available. That means that even in the most optimistic situation there are only a few data points to validate the risk estimates. The situation is even more delicate when the risk has to be quantified for important infrastructure objects, such as breaching of a large dam or flooding of a nuclear power plant. Such events are practically unrepeatable. Hence, the estimation of probabilities needs to be based on all available evidence, using observations whenever possible, but also including theoretical knowledge, modeling, specific investigations, experience, or expert judgment. As a result, flood risk assessments are often associated with large uncertainties. Examples abound where authorities, people at risk, and disaster management have been taken by surprise due to unexpected failure scenarios. This is not only a consequence of the complexity of flood risk systems, but may also be attributed to cognitive biases, such as being overconfident in the risk assessment. Hence, it is essential to ask: How wrong can the risk analysis be and still guarantee that the outcome is acceptable?
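Once scenario probabilities and consequences have been estimated, they are commonly condensed into an expected annual damage (EAD) by integrating loss over annual exceedance probability. The sketch below shows that integration with the trapezoidal rule; the scenario probabilities and losses are hypothetical.

```python
# Hedged sketch: expected annual damage (EAD) as the integral of loss over
# annual exceedance probability, approximated with the trapezoidal rule.
# The scenario values below are illustrative, not real loss data.

def expected_annual_damage(exceedance_probs, losses):
    """Trapezoidal integration of loss over annual exceedance probability.

    exceedance_probs -- decreasing annual exceedance probabilities
    losses           -- loss for each scenario, in the same order
    """
    ead = 0.0
    for i in range(len(losses) - 1):
        dp = exceedance_probs[i] - exceedance_probs[i + 1]
        ead += dp * (losses[i] + losses[i + 1]) / 2.0
    return ead

# Scenarios: 10-, 100-, and 1000-year floods with growing damage (illustrative)
probs = [0.1, 0.01, 0.001]
damages = [1e6, 2e7, 8e7]
ead = expected_annual_damage(probs, damages)
```

Because only a handful of scenario points anchor this curve, the EAD inherits the large uncertainties discussed above; sensitivity runs over the scenario set are a common way to expose that.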
Like any other species, Homo sapiens can potentially go extinct. This risk is an existential risk: a threat to the entire future of the species (and possible descendants). While anthropogenic risks may contribute the most to total extinction risk, natural hazard events can plausibly cause extinction.
Historically, end-of-the-world scenarios have been popular topics in most cultures. In the early modern period, scientific discoveries of changes in the sky, meteors, past catastrophes, evolution, and thermodynamics led to the understanding that Homo sapiens was a species among others and vulnerable to extinction. In the 20th century, anthropogenic risks from nuclear war and environmental degradation made extinction risks more salient and an issue of possible policy. Near the end of the century, an interdisciplinary field of existential risk studies emerged.
Human extinction requires a global hazard that either destroys the ecological niche of the species or harms enough individuals to reduce the population below a minimum viable size. Long-run fertility trends are highly uncertain and could potentially lead to overpopulation or demographic collapse, both contributors to extinction risk.
Astronomical extinction risks include damage to the biosphere due to radiation from supernovas or gamma ray bursts, major asteroid or comet impacts, or hypothesized physical phenomena such as stable strange matter or vacuum decay. The most likely extinction pathway would be a disturbance reducing agricultural productivity due to ozone loss, low temperatures, or lack of sunlight over a long period. The return time of extinction-level impacts is reasonably well characterized and on the order of millions of years. Geophysical risks include supervolcanism and climate change that affects global food security. Multiyear periods of low or high temperature can impair agriculture enough to stress or threaten the species. Sufficiently radical environmental changes that lead to direct extinction are unlikely. Pandemics can cause species extinction, although historical human pandemics have merely killed a fraction of the species.
Extinction risks are amplified by systemic effects, where multiple risk factors and events conspire to increase vulnerability and eventual damage. Human activity plays an important role in aggravating and mitigating these effects.
Estimates from natural extinction rates in other species suggest an overall risk to the species from natural events smaller than 0.15% per century, likely orders of magnitude smaller. However, due to the current situation with an unusually numerous and widely dispersed population the actual probability is hard to estimate. The natural extinction risk is also likely dwarfed by the extinction risk from human activities.
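Per-century probabilities like the one above can be related to the return times mentioned earlier under a simple Poisson assumption. The sketch below shows that conversion; the return periods used are illustrative, not sourced estimates.

```python
import math

# Sketch: converting a mean return period T (years) into the probability of
# at least one occurrence in a 100-year window, assuming events follow a
# Poisson process. Return periods here are illustrative only.

def prob_in_window(return_period_years: float, window_years: float = 100.0) -> float:
    """P(at least one event in window) = 1 - exp(-window / T)."""
    return 1.0 - math.exp(-window_years / return_period_years)

# An impact with a million-year return time is roughly 0.01% per century:
p = prob_in_window(1e6)
```

For return periods much longer than the window, the probability is close to window/T, which is why million-year return times translate almost linearly into small per-century risks.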
Many extinction hazards are at present impossible to prevent or even predict, requiring resilience strategies. Many risks have common pathways that are promising targets for mitigation. Endurance mechanisms against extinction may require creating refuges that can survive the disaster and rebuild. Because mitigating extinction risk is a global and transgenerational public good, and because of cognitive biases, there is a large undersupply of mitigation effort despite strong arguments that such effort is morally imperative.
Marian Muste and Ton Hoitink
With a continuous global increase in flood frequency and intensity, there is an immediate need for new science-based solutions for flood mitigation, resilience, and adaptation that can be quickly deployed in any flood-prone area. An integral part of these solutions is the availability of river discharge measurements delivered in real time with high spatiotemporal density and over large areas. Stream stages and the associated discharges are the most perceivable variables of the water cycle and the ones that eventually determine the levels of hazard during floods. Consequently, the availability of discharge records (a.k.a. streamflows) is paramount for flood-risk management because they provide actionable information for organizing the activities before, during, and after floods, and they supply the data for planning and designing floodplain infrastructure. Moreover, the discharge records represent the ground-truth data for developing and continuously improving the accuracy of the hydrologic models used for forecasting streamflows. Acquiring discharge data for streams is critically important not only for flood forecasting and monitoring but also for many other practical uses, such as monitoring water abstractions for supporting decisions in various socioeconomic activities (from agriculture to industry, transportation, and recreation) and for ensuring healthy ecological flows. All these activities require knowledge of past, current, and future flows in rivers and streams.
Given its importance, an ability to measure the flow in channels has preoccupied water users for millennia. Starting with the simplest volumetric methods to estimate flows, the measurement of discharge has evolved through continued innovation to sophisticated methods so that today we can continuously acquire and communicate the data in real time. There is no essential difference between the instruments and methods used to acquire streamflow data during normal conditions versus during floods. The measurements during floods are, however, complex, hazardous, and of limited accuracy compared with those acquired during normal flows. The essential differences in the configuration and operation of the instruments and methods for discharge estimation stem from the type of measurements they acquire—that is, discrete and autonomous measurements (i.e., measurements that can be taken any time any place) and those acquired continuously (i.e., estimates based on indirect methods developed for fixed locations). Regardless of the measurement situation and approach, the main concern of the data providers for flooding (as well as for other areas of water resource management) is the timely delivery of accurate discharge data at flood-prone locations across river basins.
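The indirect, fixed-location estimates mentioned above typically rest on a stage-discharge rating curve: a power-law relation fitted to discrete gaugings and then applied to the continuous stage record. The coefficients in the sketch below are hypothetical, standing in for values that would be fitted to site-specific gauging data.

```python
# Sketch of the standard power-law stage-discharge rating curve used to turn
# a continuous stage record into discharge estimates: Q = a * (h - h0) ** b.
# The coefficients below are hypothetical, not fitted to a real site.

def rating_curve(stage_m: float, a: float = 5.0, h0: float = 0.2, b: float = 1.6) -> float:
    """Discharge (m^3/s) from stage (m); h0 is the stage of zero flow."""
    effective = max(stage_m - h0, 0.0)
    return a * effective ** b

# A continuous stage record becomes a streamflow record point by point:
stages = [0.5, 1.2, 2.4]
discharges = [rating_curve(h) for h in stages]
```

A key practical limitation, consistent with the accuracy concerns above, is that floods push stages far beyond the gauged range, so the fitted curve must be extrapolated exactly where errors matter most.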
Seth Guikema and Roshanak Nateghi
Natural disasters can have significant widespread impacts on society, and they often lead to loss of electric power for a large number of customers in the most heavily impacted areas. In the United States, severe weather and climate events have been the leading cause of major outages (i.e., more than 50,000 customers affected), leading to significant socioeconomic losses. Natural disaster impacts can be modeled and probabilistically predicted prior to the occurrence of the extreme event, although the accuracy of the predictive models will vary across different types of disasters. These predictions can help utilities plan for and respond to extreme weather and climate events, helping them better balance the costs of disaster responses with the need to restore power quickly. This, in turn, helps society recover from natural disasters such as storms, hurricanes, and earthquakes more efficiently. Modern Bayesian methods may provide an avenue to further improve the prediction of extreme event impacts by allowing first-principles structural reliability models to be integrated with field-observed failure data.
Climate change and climate nonstationarity pose challenges for natural hazards risk assessment, especially for hydrometeorological hazards such as tropical cyclones and floods, although the link between these types of hazards and climate change remains highly uncertain and the topic of many research efforts. A sensitivity-based approach can be taken to understand the potential impacts of climate change-induced alterations in natural hazards such as hurricanes. This approach gives an estimate of the impacts of different potential changes in hazard characteristics, such as hurricane frequency, intensity, and landfall location, on the power system, should they occur. Further research is needed to better understand and probabilistically characterize the relationship between climate change and hurricane intensity, frequency, and landfall location, and to extend the framework to other types of hydroclimatological events.
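The sensitivity-based approach described above can be sketched as a loop over hypothetical changes in hazard intensity, with impacts recomputed for each case. This is an illustrative toy, not the authors' calibrated model: the logistic fragility curve, wind speeds, and pole counts are all invented for the example.

```python
import math

# Illustrative sensitivity sketch (not a calibrated outage model): expected
# number of failed distribution poles under a hurricane wind field,
# recomputed for a range of hypothetical intensity changes.

def failure_prob(wind_ms: float, w50: float = 50.0, k: float = 0.15) -> float:
    """Toy logistic fragility: probability a pole fails at gust speed wind_ms.

    w50 is the speed at which half the poles fail; k sets curve steepness.
    """
    return 1.0 / (1.0 + math.exp(-k * (wind_ms - w50)))

def expected_failures(winds_ms, poles_per_cell: int = 100) -> float:
    """Sum expected pole failures over grid cells with given gust speeds."""
    return sum(poles_per_cell * failure_prob(w) for w in winds_ms)

# Sensitivity analysis: scale a baseline wind field by -10%, 0%, +10% intensity
baseline = [30.0, 42.0, 55.0]
impacts = {s: expected_failures([w * s for w in baseline])
           for s in (0.9, 1.0, 1.1)}
```

The value of the approach is that it bounds the consequences of plausible intensity changes without requiring the uncertain climate-to-hazard link to be resolved first.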
Underlying the reliability of power systems in the United States is a diverse set of regulations, policies, and rules governing electric power system reliability. An overview of these regulations and the challenges associated with current U.S. regulatory structure is provided. Specifically, high-impact, low-frequency events such as hurricanes are handled differently in the regulatory structure; there is a lack of consistency between bulk power and the distribution system in terms of how their reliability is regulated. Moreover, the definition of reliability used by the North American Reliability Corporation (NERC) is at odds with generally accepted definitions of reliability in the broader reliability engineering community. Improvements in the regulatory structure may have substantial benefit to power system customers, though changes are difficult to realize.
Overall, broader implications are raised for modeling other types of natural hazards. Some of the key takeaway messages are the following: (1) the impacts of natural hazards on infrastructure can be modeled with reasonable accuracy given sufficient data and modern risk analysis methods; (2) there are substantial data on the impacts of some types of natural hazards on infrastructure; and (3) appropriate regulatory frameworks are needed to help translate modeling advances and insights into decreased impacts of natural hazards on infrastructure systems.
How big, how often, and where from? This is almost a mantra for researchers trying to understand tsunami hazard and risk. What we do know is that events such as the 2004 Indian Ocean Tsunami (2004 IOT) caught scientists by surprise, largely because there was no “research memory” of past events for that region, and as such, there was no hazard awareness, no planning, no risk assessment, and no disaster risk reduction. Forewarned is forearmed, but to be in that position, we have to be able to understand the evidence left behind by past events—palaeotsunamis—and to have at least some inkling of what generated them.
While the 2004 IOT was a devastating wake-up call for science, we need to bear in mind that palaeotsunami research was still in its infancy at the time. What we now see is still a comparatively new discipline that is practiced worldwide, but as the “new kid on the block,” there are still many unknowns. What we do know is that in many cases, there is clear evidence of multiple palaeotsunamis generated by a variety of source mechanisms. There is a suite of proxy data—a toolbox, if you will—that can be used to identify a palaeotsunami deposit in the sedimentary record. Things are never quite as simple as they sound, though, and there are strong divisions within the research community as to whether one can really differentiate between a palaeotsunami and a palaeostorm deposit, and whether proxies as such are the way to go. As the discipline matures, though, many of these issues are being resolved, and indeed we have now arrived at a point where we have the potential to detect “invisible deposits” laid down by palaeotsunamis once they have run out of sediment to lay down as they move inland. As such, we are on the brink of being able to better understand the full extent of inundation by past events, a valuable tool in gauging the magnitude of palaeotsunamis.
Palaeotsunami research is multidisciplinary, and as such, it is a melting pot of different scientific perspectives, which leads to rapid innovations. Basically, whatever is associated with modern events may be reflected in prehistory. Also, palaeotsunamis are often part of a landscape response pushed beyond an environmental threshold from which it will never fully recover, but that leaves indelible markers for us to read. In some cases, we do not even need to find a palaeotsunami deposit to know that one happened.
Permafrost, or perennially frozen ground, and the processes linked to the water phase change in ground-pore media are sources of specific dangers to infrastructure and economic activity in cold mountainous regions. Additionally, conventional natural hazards (such as earthquakes, floods, and landslides) assume special characteristics in permafrost territories.
Permafrost hazards are created under two conditions. The first is a location with ice-bounded or water-saturated ground, in which the large amount of ice leads to potentially intensive processes of surface settlement or frost heaving. The second is linked with external, natural, and human-made disturbances that change the heat-exchange conditions. The places where ice-bounded ground meets areas that are subject to effective disturbances are the focus of hazard mapping and risk evaluation.
The fundamentals of geohazard evaluation and geohazard mapping in permafrost regions were originally developed by Gunnar Beskow, Vladimir Kudryavtsev, Troy Péwé, Oscar Ferrians, Jerry Brown, and other American, European, and Soviet authors from the 1940s to the 1980s.
Modern knowledge of permafrost hazards was significantly enriched by the publication of the Russian book Permafrost Hazards, part of the six-volume series Natural Hazards in Russia (2000). The book describes, analyzes, and evaluates permafrost-related hazards and includes methods for their modeling and mapping.
Simultaneous work on permafrost hazard evaluation continued in different countries with the active support of the International Permafrost Association. Prominent contributions during this new period of investigation were published by Drozdov, Clarke, Kääb, Pavlov, Koff, and several other thematic groups of researchers. The importance of joint international work became evident. The international project RiskNat: A Cross-Border European Project Taking into Account Permafrost-Related Hazards marked a new phase in this scientific development.
The intensive economic development in China presented new challenges for linear transportation routes and hydrologic infrastructures. A study of active fault lines and geological hazards along the Golmud–Lhasa Railway across the Tibetan plateau is a good example of the achievements by Chinese scientists.
The method for evaluating permafrost hazards is based on survey data, monitoring data, and modeling results. The survey data reflect the current environmental conditions and are usually shown on a permafrost map. The monitoring data are helpful in understanding the current tendencies of permafrost evolution in different landscapes and regions. The modeling data provide a permafrost forecast that takes climate change and its impact on humans into account.
The International Conference on Permafrost in 2016, in Potsdam, Germany, demonstrated new horizons of conventional and special permafrost mapping in offshore and continental areas. Permafrost hazards concern large and diverse aspects of human life. It is necessary to expand the approach to this problem from geology to also include geography, biology, social sciences, engineering, and other spheres of competence in order to synthesize local and regional information. The relevance of this branch of science grows as climate change and the increasing number of natural disasters are taken into account.
Abdelghani Meslem and Dominik H. Lang
In the fields of earthquake engineering and seismic risk reduction, the term “physical vulnerability” defines the component that translates the relationship between seismic shaking intensity, dynamic structural response (physical damage), and cost of repair for a particular class of buildings or infrastructure facilities. The concept of physical vulnerability started with the development of the earthquake damage and loss assessment discipline in the early 1980s, which aimed at predicting the consequences of earthquake shaking for an individual building or a portfolio of buildings. In general, physical vulnerability has become one of the key components used as model input data by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same may apply to the insurance and reinsurance industry in developing catastrophe models (also known as CAT models).
Since the late 1990s, a blossoming of methodologies and procedures for modeling and measuring physical vulnerability can be observed, ranging from empirical to basic and more advanced analytical approaches. These methods differ in level of complexity, calculation effort (in evaluating the seismic demand, structural response, and damage), and the modeling assumptions adopted in the development process. At this stage, one challenge that is often encountered is that some of these assumptions may strongly degrade the reliability and accuracy of the resulting physical vulnerability models, thereby introducing important uncertainties into the estimation and prediction of the inherent risk (i.e., estimated damage and losses).
Other challenges that are commonly encountered when developing physical vulnerability models are the paucity of exposure information and the lack of knowledge due to either technical or nontechnical problems, such as inventory data that would allow for accurate building stock modeling, or economic data that would allow for a better conversion from damage to monetary losses. Hence, these physical vulnerability models will carry different types of intrinsic uncertainties of both aleatory and epistemic character. To come up with appropriate predictions on expected damage and losses of an individual asset (e.g., a building) or a class of assets (e.g., a building typology class, a group of buildings), reliable physical vulnerability models have to be generated considering all these peculiarities and the associated intrinsic uncertainties at each stage of the development process.
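The analytical vulnerability models discussed above are commonly built on lognormal fragility functions, which give the probability of reaching a damage state as a function of ground-motion intensity. The sketch below shows that standard form; the median intensity and dispersion are hypothetical placeholders, not values for any real building class.

```python
import math

# Hedged sketch of a lognormal fragility function, the common analytical form
# for P(damage state reached | ground-motion intensity). The median and
# dispersion values below are hypothetical, not from a published model.

def p_damage_state(im: float, median_im: float = 0.4, beta: float = 0.6) -> float:
    """P(damage >= state | intensity measure im), lognormal CDF form.

    im        -- e.g. peak ground acceleration in g
    median_im -- intensity at which half the building class reaches the state
    beta      -- logarithmic standard deviation, capturing the aleatory and
                 epistemic spread discussed in the text
    """
    if im <= 0:
        return 0.0
    z = math.log(im / median_im) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median intensity, half the building class reaches the damage state:
p = p_damage_state(0.4)
```

Pairing each damage state's fragility with a repair-cost ratio then yields the vulnerability (damage-to-loss) model; the quality of that last step depends directly on the economic data whose scarcity is noted above.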