Earthquakes involve sudden shear sliding between large rock masses across internal contact surfaces called faults. Slip on the fault releases strain energy stored in the surrounding rock, which accumulated because frictional resistance held the fault locked. Most earthquakes are directly caused by plate tectonics and occur in the cool, brittle rock near Earth’s surface. Events with seismic magnitude of 8.0 or greater are called great earthquakes and involve slip of several to tens of meters across faults with lengths of 100 to more than 1,000 kilometers. These huge ruptures tend to occur on or near plate boundaries; the largest are on shallow-dipping plate boundary faults (megathrusts) found in compressional regions called subduction zones, where one tectonic plate thrusts under another. Some great earthquakes occur within bending or detaching plates as they deform seaward of or below a subduction zone. Still others occur on plate boundary strike-slip faults, where two plates shear horizontally past one another, or within deforming plate interiors. Seismologists record and study the elastic wave energy released during fault sliding to determine the fault location, orientation, and sense of sliding motion; the amount of radiated elastic wave energy; and the distribution of slip on the fault during the event (co-seismic slip). Geodetic methods measure elastic strain accumulation before an earthquake, co-seismic slip, afterslip on the fault that occurs without earthquakes, and viscous deformation of the mantle as it responds to the fault offset. Great earthquakes commonly occur under the ocean, where the sudden motion of the seafloor generates tsunamis, gravitational water waves that can be recorded with ocean-floor pressure sensors (these records are also used to determine co-seismic slip). As seismic, geodetic,
and tsunami modeling methods have progressed over the past 50 years, our understanding of great earthquake rupture processes and earthquake interactions has advanced steadily in the context of plate tectonics and improved understanding of rock friction. All faults have heterogeneous frictional properties, inferred from the non-uniform sliding observed in each event: areas of large slip instability, called asperities, exhibit velocity-weakening friction, while other areas exhibit velocity-strengthening friction that produces stable sliding. Because seismic shaking and tsunami waves can cause great devastation, efforts are made to anticipate future earthquake hazards. As plate tectonics steadily moves Earth’s plates, elastic strain around plate boundary faults accumulates and releases in a repeated stick-slip sliding process that imparts a limited degree of regularity to faulting. Given the history of prior earthquakes on a fault, we can identify seismic gaps where future slip events are likely to occur. With geodesy we can also now measure where slip deficit is accumulating relative to plate motions, as well as variations in seismic coupling, which characterizes the fraction of plate motion accounted for by earthquake failure.
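The seismic coupling coefficient described above can be illustrated with a back-of-the-envelope calculation: compare the seismic moment released by a fault's earthquake catalog with the moment that full plate-rate slip would accumulate over the same interval. The sketch below uses the standard moment-magnitude relation; all fault dimensions, rates, and magnitudes are hypothetical values chosen for illustration, not measurements.

```python
MU = 4.0e10        # shear modulus of fault-zone rock, Pa (typical assumed value)
LENGTH_M = 500e3   # hypothetical megathrust segment length, m
WIDTH_M = 150e3    # hypothetical seismogenic width, m
V_PLATE = 0.07     # plate convergence rate, m/yr (7 cm/yr, illustrative)
YEARS = 100.0      # observation window, yr

# Moment that fully coupled slip would accumulate over the window:
# M0 = mu * fault area * plate-rate slip deficit
m0_expected = MU * LENGTH_M * WIDTH_M * V_PLATE * YEARS

def moment_from_mw(mw):
    """Seismic moment (N·m) from moment magnitude, M0 = 10^(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

# Hypothetical catalog of earthquakes on the segment during the window.
magnitudes = [8.5, 7.9, 7.2]
m0_released = sum(moment_from_mw(mw) for mw in magnitudes)

# Coupling chi: fraction of plate motion accounted for by earthquake failure.
coupling = m0_released / m0_expected
print(f"seismic coupling chi = {coupling:.2f}")
```

A coupling value near 1 means nearly all plate motion is released seismically; a value well below 1 implies substantial aseismic (stable) sliding or an accumulating slip deficit.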
Abdelghani Meslem and Dominik Lang
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Natural Hazard Science. Please check back later for the full article.
In the fields of earthquake engineering and seismic risk reduction, the term physical vulnerability denotes the component that translates seismic shaking intensity into dynamic structural response (physical damage) and, in turn, into repair cost for a particular class of buildings or infrastructure facilities. The concept emerged in the early 1980s with the development of the earthquake damage and loss assessment discipline, which aims to predict the consequences of earthquake shaking for an individual building or a portfolio of buildings. Physical vulnerability has since become a key input used by agencies when developing prevention and mitigation actions, code provisions, and guidelines. The same applies to the insurance and re-insurance industry in developing catastrophe models (also known as CAT models).
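One common way to express the intensity-damage-cost relationship described above is through lognormal fragility curves for discrete damage states, combined with damage-to-cost ratios to yield a mean loss ratio at a given shaking intensity. The sketch below shows that structure; the medians, dispersions, and cost ratios are invented placeholder values for a hypothetical building class, not calibrated parameters.

```python
import math

def lognormal_cdf(x, median, beta):
    """P(capacity <= x) for a lognormal with given median and log-std beta."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

# Hypothetical fragility parameters for one building class:
# damage state -> (median PGA in g, dispersion beta, damage-to-cost ratio).
DAMAGE_STATES = {
    "slight":    {"median": 0.15, "beta": 0.6, "cost_ratio": 0.05},
    "moderate":  {"median": 0.30, "beta": 0.6, "cost_ratio": 0.25},
    "extensive": {"median": 0.55, "beta": 0.6, "cost_ratio": 0.60},
    "complete":  {"median": 0.90, "beta": 0.6, "cost_ratio": 1.00},
}

def mean_loss_ratio(pga):
    """Combine the fragility curves into a mean repair-cost ratio at a PGA."""
    states = list(DAMAGE_STATES.values())
    # Probability of reaching or exceeding each damage state.
    exceed = [lognormal_cdf(pga, s["median"], s["beta"]) for s in states]
    loss = 0.0
    for i, s in enumerate(states):
        # P(exactly in state i) = P(>= state i) - P(>= next higher state).
        p_next = exceed[i + 1] if i + 1 < len(states) else 0.0
        loss += (exceed[i] - p_next) * s["cost_ratio"]
    return loss
```

For example, `mean_loss_ratio(0.30)` evaluates the expected loss ratio at 0.30 g; the curve rises monotonically toward 1.0 as shaking intensity grows, which is the vulnerability function a CAT model would consume for this building class.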
Over the past years, a wide range of methodologies and procedures, from empirical to basic and more advanced analytical approaches, has been developed for modeling and measuring physical vulnerability. These methods differ in complexity, in the calculation effort required to evaluate seismic demand, structural response, and damage, and in the modeling assumptions adopted during the development process. One challenge often encountered at this stage is that some of these assumptions can substantially degrade the reliability and accuracy of the resulting physical vulnerability models, introducing significant uncertainty into estimates of the inherent risk (i.e., estimated damage and losses).
Other challenges commonly encountered when developing physical vulnerability models are the unavailability of exposure information and gaps in knowledge caused by technical or nontechnical problems, such as missing inventory data that would allow accurate building stock modeling, or missing economic data that would allow a better conversion from damage to monetary losses. Hence, physical vulnerability models carry intrinsic uncertainties of both aleatory and epistemic character. To arrive at sound predictions of expected damage and losses for an individual asset (e.g., a building) or a class of assets (e.g., a building typology class or a group of buildings), reliable physical vulnerability models must be generated that account for these peculiarities and the associated intrinsic uncertainties at each stage of the development process.