
Performance Assessment of Natural Hazards Governance

Summary and Keywords

Assessment is a necessary and critical component of process improvement. Moreover, there is a strong public expectation that because governance is a public good, it will incorporate demonstrably equitable and efficient processes. As a central tenet of New Public Management (NPM), a widely accepted approach to increasing the efficiency of public sector performance through the introduction of “business” practices, performance assessment has helped improve governance in general. However, employing assessment practices has been problematic at best in the realm of hazards preparedness and response. Notably, the fragmented nature of governance in the disaster response network, which spans both levels of government and the public and private sectors, is not conducive to holistic evaluation. Similarly, the lack of clear goals, available funding, and trained evaluation personnel severely inhibits the ability to comprehensively assess performance in the management of natural hazards. Effective assessment in this area, that is, evaluation that will significantly enhance hazard and vulnerability management in terms of mitigation, preparedness, and response, requires several distinct steps, beginning with an understanding of the dimensions of the natural hazards governance community and of the assessment process itself: (1) identifying the purpose of the review (formative, meaning evaluation intended to improve processes, or summative, meaning evaluation intended as a final examination of processes); (2) identifying clear and concise goals for the program and ensuring these goals are consistent with federal, state, and local policy; and (3) identifying the underlying fragmentation across sectors, levels of governance, and disaster phases in the governance system. Based on these dimensions, the most effective assessments will be those that are incorporated within or developed from the actual governance system.

Keywords: New Public Management, disaster preparedness response, hazards governance, program evaluation, Homeland Security Exercise and Evaluation Program, FEMA

Performance Assessment in Governance

While there has always been some expectation of efficiency in the outcomes of public policy, the 1980s became a hallmark for government performance assessment with the widespread adoption of the New Public Management (NPM) doctrine (Hood, 1995). Touted as an approach to governance that would produce better results with fewer resources, NPM united the concepts of new institutional economics and managerialism into a management framework that would increase outputs and quality through contracting out and competition, while assuring transparency and explicit performance outcomes via evaluation (Hood, 1991). NPM was not an overwhelming success, as various problems arose largely from its implementation (Im, 2010; van Thiel & Leeuw, 2002); nevertheless, program and performance evaluation persists in the US government as a means to demonstrate program value to stakeholders (Council of Economic Advisers, 2005).

However, while performance assessment has become a hallmark of public management, there seems to be little consensus on how it should be most appropriately handled. Importantly, Ingraham, Joyce, and Donahue (2003) point out that exclusive focus on input or outcome measurement may be detrimental to the public good, as these types of measures neglect the complexity of the system they profess to represent. A similar argument is made by Gerber and Robinson (2009) in the case of regional preparedness for disasters.

Performance Assessment in Hazards Preparedness and Response

Performance measurement has been particularly lacking in the field of emergency management (Gerber & Robinson, 2009), and understandably so. Scholars are quick to point out that disaster response in the United States is driven from the local level, yet hazards tend to be regional threats that require a regional response. This mismatch between the level of governance and the area of the event means that local and state entities lack the resources and incentives to adequately prepare for or respond to threats (Gerber & Robinson, 2009). Unfortunately, this is only one of the complexities facing performance assessment of hazards preparedness and response.

First and foremost, there is a dearth of guidance on the goals for preparedness and response. FEMA provides the National Preparedness Goal, a 25-page guide to interpreting a single overarching goal:

A secure and resilient Nation with the capabilities required across the whole community to prevent, protect against, mitigate, respond to, and recover from the threats and hazards that pose the greatest risk.

(FEMA, 2015)

However, even with its accompanying documentation, the National Preparedness Goal (NPG) does not provide much specificity as to what really should be measured to evaluate performance. To begin with, the NPG sets out five mission areas (prevention, protection, mitigation, response, and recovery), each representing a unique part of the preparedness mission, without offering guidance as to how to weigh the inherent trade-offs that come with multiple missions. Mitigation, for example, may reduce the impact of an event and the costs of the response, but while the outcomes are complementary, the processes seldom are. The State of Louisiana allocated $33 million for the construction of three permanent shelters across the state, yet by 2013, $27 million had been spent and the state had completed only one of the shelters, which can hold roughly 2,250 people (KTBS, 2013). This preparedness expense is very similar to the cost of the transportation portion of the Hurricane Gustav evacuation in 2008 (estimated at $21 million) (LDA, 2008); together, these two expenses illustrate the common trade-offs that must be made between mission areas in emergency preparedness. Precious little guidance is available for this type of macro-level decision making, yet these trade-offs may well be among the most important made in emergency preparedness.
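
To make the scale of that trade-off concrete, the figures cited above can be worked through in a few lines of code (a back-of-the-envelope sketch; the derived per-seat rate is illustrative only and appears nowhere in the cited sources):

```python
# Back-of-the-envelope comparison of the preparedness and response figures
# cited above. The derived unit cost is illustrative, not an official estimate.

shelter_spent = 27_000_000     # spent on shelter construction by 2013 (KTBS, 2013)
shelter_capacity = 2_250       # capacity of the one completed shelter
gustav_transport = 21_000_000  # Hurricane Gustav transport cost (LDA, 2008)

cost_per_seat = shelter_spent / shelter_capacity
print(f"Capital cost per permanent shelter seat: ${cost_per_seat:,.0f}")

# One evacuation's transport bill alone approaches the entire shelter
# construction outlay -- the macro-level trade-off discussed in the text.
print(f"Gustav transport as a share of shelter spending: "
      f"{gustav_transport / shelter_spent:.0%}")
```

Even this crude arithmetic makes the point: a single response-phase expense can rival an entire mitigation/preparedness capital program, and no standard guidance tells a manager how to weigh one against the other.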

The second major challenge to disaster preparedness and response comes from the bifurcated nature of the US disaster response mission. Much scholarship in disaster preparedness and response assumes that, nationally, these functions all fall under the umbrella of the Department of Homeland Security (DHS) through FEMA—but this is not the case. US disaster preparedness and response has two departmental homes in the federal government: DHS and the Department of Health and Human Services (DHHS). In 2003, in the aftermath of the September 11th attacks, the National Disaster Medical System (NDMS) was placed in the newly formed DHS under the direction of FEMA. In 2007, NDMS was reorganized under DHHS in response to allegations of mismanagement during the Hurricane Katrina response. On paper, this move seems simply to place the home of Emergency Support Function 8 (ESF-8) under a department better aligned with its core functions, but in practice it separates the critical functions of medical services and public health from core response activities.

A third challenge is the fragmentation of the response system, which is far more problematic than the simple bifurcation of critical tasks. The extreme fragmentation of disaster preparedness and response systems in the United States has been well documented in the literature (Drabek, 1985; Dynes & Tierney, 1994). Similarly, the effects of fragmentation in government have been thoroughly explored (Dolan, 1990; Karvonen & Quenter, 2002; Kaufman, 1956; Rhodes, 1996; M. Schneider, 1986) and extrapolated to the disaster preparedness and response context (Abbasi & Kapucu, 2012; Drabek, 1985; Gerber & Robinson, 2009; Tierney, 1985).

A fourth challenge to disaster preparedness and response in the United States is sparse and uneven funding support. Gerber and Robinson (2009) identify the mismatch between the level of authority of responsible municipalities and the area of effect of natural and manmade disasters, along with the perverse incentives this creates, but the full extent of the funding problem goes beyond a jurisdictional mismatch. First, many areas outside the area of effect of disasters are routinely required to support response and recovery missions. Historically, areas like Birmingham, Alabama; Shreveport, Louisiana; and Braggs, Oklahoma have hosted disaster evacuees, and while they are reimbursed for associated costs by the federal government, they are not provided with resources to plan and prepare for these types of emergencies in service of individuals from outside their tax base. In one interview, a local emergency management (EM) director relayed that a state EM leader had explicit directions from the governor not to plan for or receive evacuees from other states, regardless of the historical record of evacuees coming to that state. These sorts of political pressures require local and regional administrators to make suboptimal implementation decisions at a cost that may be more than performance hindering.

Even in areas where there are no political barriers to resource allocation, simple economies of scale often prevent rural areas from being prepared to support response missions they may become obligated to undertake (Gerber, Ducatman, Fischer, Althouse, & Scotti, 2006). It is important to note as well that an area’s tax base is negatively correlated with social vulnerability—meaning that those likely to be most affected by disaster often have the fewest resources to prepare for it (Adger, 2000). Increased regional vulnerability is not only a function of available resources; there are differences in how rural and poorer areas operate that increase the likelihood of negative outcomes during an emergency event. For instance, the literature is clear with regard to the differential outcomes for first responders in rural versus urban environments (Veitch, 2009) and the differences in risks for volunteer versus professional responders (Kales, Soteriades, Christophi, & Christiani, 2007). Thus, all else being equal, rural areas, which traditionally rely heavily on volunteer responders, will have higher costs associated with disaster response than equivalent urban areas.

A fifth barrier to disaster preparedness and response is that the networked nature of disaster response means that many of the instrumental actors are not part of a chain of accountability to any single individual or organization. Disasters are incidents that overwhelm local capacity to respond and require external support, and by that definition they are inherently boundary spanning. This fact is by no means new, and although management practices like the Incident Command System (ICS), the Hospital Incident Command System (HICS), the National Incident Management System (NIMS), and resource typing serve to ameliorate this in practice, they offer little assistance in planning and evaluation. The necessary reliance on external support for preparedness and response means that any given jurisdiction has no authority over some or many of the assets and resources required for adequate response. The problem is further complicated when the networked resource lies outside the public sector, as nonprofits and the faith-based community have long (and arguably rightly) resisted obligated assignments and standardized typing of response services. For example, the original National Response Plan, which took effect in 2004, tasked the American Red Cross (ARC) as lead for ESF #6—Mass Care, Emergency Assistance, Housing, and Human Services—but by the time it was replaced with the National Response Framework (NRF) in 2008, the ARC had been able to pass that responsibility to FEMA, as it was not possible for the ARC to fill this role as a nonprofit agency. And even though the nonprofit sector steers away from obligated service under legal authority, it is nonetheless a vital part of disaster response. Decades of research document the vital role of nonprofits in disaster response (Comfort, 1999; Dynes & Tierney, 1994; Eller, Gerber, & Branch, 2015; Kapucu, 2007; Simo & Bies, 2007), yet little is known about the cost or value of that contribution.

Current Approaches to Assessing Performance of Hazards Governance

Evaluation is typically thought of in two forms: formative and summative. In general, formative evaluation takes place during the development of a program and summarizes program development up to a fixed point in the process; it is often thought of as internal, process-improvement-focused evaluation. In contrast, summative evaluation refers to the outcomes of a program. The most common form of formative evaluation is that built into the Homeland Security Exercise and Evaluation Program (HSEEP), a capability-based, progressive planning approach that is, on its face, a remarkably structured program. In practice, however, the HSEEP principles are rarely implemented in a way that makes them effective for the organization. This is not to say that the trainings are without merit: research indicates that individuals benefit from training (Perry, 2004); however, individual benefits from training do not necessarily imply organizational benefits. Organizational exercises are generally managed in one of three ways. First, very affluent organizations may have their own training department that develops and tracks organizational trainings—these are rare cases which are becoming less and less prevalent. Second, large organizations may look to outside agencies or contractors to provide HSEEP-compliant exercises for the organization, which has the obvious advantage that the organization does not have to maintain the expertise in-house. Finally, organizations with limited resources may task one individual with developing and implementing exercises as a collateral duty. This is probably the most common model across the United States.

In theory, HSEEP is the formative evaluation portion of the National Preparedness System (NPS). The NPS guides response organizations through a process in which risk identification and assessment are used to develop a specified set of capability requirements necessary to respond to the specific risks of a given area. These capabilities are then developed and later evaluated through HSEEP (FEMA, 2011). HSEEP guides the organization through the identification of capabilities, evaluates the organizational ability to provide those specific capabilities, and provides a plan for improvement (FEMA, 2013a). HSEEP is a remarkable system for those organizations that can implement it fully, but even in the few cases where it can be fully implemented, there are lingering problems. First, a RAND report rightfully identifies the lack of reliability measures in the formative evaluation provided through HSEEP (Jackson, 2008). Similarly, HSEEP does not evaluate the limitations of the Incident Command System (ICS) across different organizational contexts and phases of response (Buck, Trainor, & Aguirre, 2006), nor does it capture the organizational or network properties that may fundamentally affect the application of ICS (Moynihan, 2009).

Summative evaluation of hazards governance is far less structured than the formative evaluation process built into the National Preparedness System. Summative evaluation effectively comes in two forms: academic or professional program evaluation, and political oversight. Academic and professional program evaluations are an eclectic collection with varying degrees of completeness and therefore do not present a single model of program evaluation, while political oversight is usually motivated by perceived negative outcomes of a response event. For example, Hurricane Katrina is frequently identified as a failed disaster response (S. K. Schneider, 2005) and the negative perceptions of the event which motivated the congressional review are clearly evident in the finished product (Bowman, Kapp, & Belasco, 2005; Davis, 2006).

Many attempts have been made to provide a consistent framework for assessing disasters. Some of these frameworks focus on single elements of the preparedness mission (Birkmann, 2006; Bruneau et al., 2003; Cimellaro, Reinhorn, & Bruneau, 2010; Sundnes & Birnbaum, 2003), while others apply a systematic approach (Henstra, 2010); yet none of the attempts thus far incorporate all the elements of preparedness, while addressing the regional preparedness issues identified by Gerber and Robinson (2009).

A Framework for Assessing Performance of Hazards Governance

Given the complexity of assessment in the context of hazards management, we intentionally set aside certain facets of the process. First and most pressing is the notion of assessing the risk of specific hazards. While this is a critical component of hazards management, the scope of that activity is far too broad to include in a single article broadly addressing the assessment of governance. Second, we do not devote time to the concept of summative evaluation. We do this in part because hazards management is an evolving process, and arbitrarily marking endpoints based on events is not beneficial to the process. With these qualifiers in place, we propose that robust evaluation of hazards governance must be a multiphased process, given the multiple stages of governance involved. Additionally, because the focus of evaluation should be formative and not summative, evaluation of hazards governance should strive to be comparative in nature—evaluation should be about process development as opposed to an arbitrary measurement of absolute capacity or preparedness. The framework for assessment should address: (a) the decision process concerning what is being prepared for, (b) the decision process concerning how scarce resources are being distributed throughout the preparedness/response continuum, (c) the level of understanding of available resources throughout the available network, (d) what organizational capabilities are being maximized, and (e) competency in coordination and critical capabilities.
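
For concreteness, these five dimensions can be represented as a simple scoreable checklist (a hypothetical encoding; the labels paraphrase (a)–(e) above, and the 0–5 rating scale is invented for illustration):

```python
# Hypothetical encoding of assessment dimensions (a)-(e) as a scoreable
# checklist. Labels paraphrase the text; the 0-5 scale is invented.
FRAMEWORK_DIMENSIONS = [
    "decision process: what is being prepared for",
    "decision process: distribution of scarce resources across the continuum",
    "understanding of available resources throughout the network",
    "organizational capabilities being maximized",
    "competency in coordination and critical capabilities",
]

def framework_score(ratings):
    """Average the 0-5 rating per dimension; unrated dimensions score 0."""
    return sum(ratings.get(d, 0) for d in FRAMEWORK_DIMENSIONS) / len(FRAMEWORK_DIMENSIONS)

example = {FRAMEWORK_DIMENSIONS[0]: 4, FRAMEWORK_DIMENSIONS[4]: 3}
print(f"Overall framework score: {framework_score(example):.1f} / 5")
```

The point of such an encoding is not the number it produces but that it forces an evaluator to rate all five dimensions rather than only the competency dimension, which the following sections take up in turn.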

Decision Process Concerning What Is Being Prepared For

As we have stated, HSEEP provides a comprehensive evaluation tool; however, even when it is applied correctly it addresses only the competency dimension of hazards governance. And while competency is a beneficial quality, being able to perform unnecessary capabilities well is rarely a beneficial skill for hazards response. For that reason, we suggest that evaluation should always begin with identifying what is being prepared for. As Gerber and Robinson (2009) point out, there are a variety of incentives for response organizations not to prepare for the most likely threat—often, larger regional threats are overlooked in favor of better preparation for local threats, or because the lack of available resources to address regional threats lands them in the “too hard” box. Complicating matters, FEMA (2013b) guides response professionals to “consider only those threats and hazards that would have a significant effect on them” (p. 8). And while some consideration is given to regional and national events, FEMA stresses that communities should focus only on the impacts within their jurisdiction (FEMA, 2013b).

The question of what is not limited to which threats, but extends to which phases of the disaster cycle: how are the leaders (whether in response organizations or in political control) allocating resources among mitigation, preparedness, and response? These questions of “what” are perhaps the most important part of effective preparedness.

Assessing the decision-making process concerning which hazards and threats to prepare for requires comparing state and local threat and hazard identification to a regional Threat and Hazard Identification and Risk Assessment (THIRA), and doing so across time to (a) identify trends in the recognition of threats and hazards at the state and local levels, and (b) identify which threats are driving local decision making around preparedness. Similarly, assessment of governance of the what portion of hazards management must also address which part of the disaster cycle. For disaster response organizations, this will typically be a simple trade-off between preparedness activities and response capacity, but at more macro levels (state offices and state executive offices) it may extend to budgetary priorities across mitigation and recovery efforts as well. This is a particularly interesting area, because it is ripe for academia to make significant contributions. Many academic studies have demonstrated the importance of this trade-off, for example in the realm of earthquake mitigation (Smyth et al., 2004), and yet little systematic information has been amassed to help public managers deal with these trade-offs at the operational level. Cursory evaluations of this stage would compare preparatory activities across two dimensions: first, comparative evaluations examining the revealed preferences of the organization across the geographic levels of threat (local, state, regional); and second, comparisons of activities across the stages of disaster (mitigation, preparedness, response, recovery) and across clusters of capabilities associated with the types of threats affecting the organization. Amassing data of this type would facilitate evaluations examining why response organizations make the choices they do based on political, fiscal, and contextual factors. From an academic standpoint, understanding the factors driving what organizations decide to prepare for is critical for policy development in hazards preparedness, and it is an area with precious little systematic research. From an applied standpoint, this knowledge would better equip decision makers to drive policy outcomes in the hazards preparedness and response policy arena.
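
As a minimal sketch of this kind of comparative, trend-oriented evaluation (the two dimensions follow the text, but the activity records are hypothetical), preparedness activities could be tabulated by year and their shares compared:

```python
# Minimal sketch of a comparative evaluation of revealed preferences, using
# hypothetical counts of preparedness activities. The two dimensions
# (geographic level of threat, disaster phase) follow the text; all records
# are invented for illustration.
from collections import Counter

# One record per activity: (year, geographic_level, disaster_phase)
activities = [
    (2016, "local", "response"), (2016, "local", "preparedness"),
    (2016, "state", "response"), (2017, "local", "response"),
    (2017, "regional", "mitigation"), (2017, "local", "preparedness"),
    (2018, "regional", "response"), (2018, "local", "recovery"),
]

for year in sorted({y for y, _, _ in activities}):
    subset = [(lvl, ph) for y, lvl, ph in activities if y == year]
    total = len(subset)
    level_share = {k: f"{v / total:.0%}"
                   for k, v in Counter(l for l, _ in subset).items()}
    phase_share = {k: f"{v / total:.0%}"
                   for k, v in Counter(p for _, p in subset).items()}
    print(year, "levels:", level_share, "| phases:", phase_share)
```

Year-over-year shifts in these shares are exactly the revealed preferences the evaluation is after: a jurisdiction whose activity drifts from regional mitigation toward purely local response is making a choice, whether or not it is a deliberate one.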

Understanding the Response Network

Hazards preparedness and response is a function of an array of actors spanning the public, private, and not-for-profit sectors. This is because it is a fiscal impossibility for a local or state government, or even the federal government, to maintain all the resources necessary to adequately respond to every potential hazard. Thus hazards management spans all three sectors, and this is a key reason why management evaluation is so problematic in this field. Beyond the political, logistical, and fiscal trade-offs inherent in hazards management, governance in this realm also requires resource provision from sectors outside the direct control of response leadership. On some levels, this is a good thing because it frees nonprofit and private actors to pursue goals that are tangential to those of the response leadership; but because public managers must rely on resources and services from the private and nonprofit sectors, it greatly complicates their ability to meet legal obligations for governance in hazards management.

Disaster and hazards scholars generally agree on the importance of the entire response network or community, yet little work has been done to systematize this research. For example, we know that network centrality (Moore, Eng, & Daniel, 2003); coordination (Kapucu, 2006, 2009; Nolte, Martin, & Boenigk, 2012; Robinson, Eller, Gall, & Gerber, 2013); and ability to distribute resources (Kapucu, Arslan, & Demiroz, 2010) are all critical functions of response networks. Yet the plethora of research regarding response networks in hazard preparedness and response sheds little light on the differences between structures of response networks in relation to types of hazards and critical infrastructure. What we do know about response networks is that:

  1. Many of the players are relatively stable across regions of the country. Specifically, the public sector response structure will have very little variance across jurisdictions in the United States. Some regions may have more robust relations with neighboring jurisdictions than others; however, the basic structure is dictated by law.

  2. Much of the nonprofit and faith-based community activity in disaster preparedness and response is reasonably predictable. The rise of the Voluntary Organizations Active in Disaster (VOAD) movement has created a great deal of predictability in disaster response networks, and while there will always be new organizations and spontaneous volunteers, the core of the response is usually a function of existing nonprofit response organizations and those who have been engaged by public sector leadership in the past.

  3. Aside from notable exceptions like Home Depot, Walmart, and Tide, most support from the private sector will be in the form of supply logistics or specialized product development.

It is fair, then, to assume that the “response network” of any given manager or leader will have a great number of commonalities with those of most others across the nation in a similar organizational role. While there may be differentiation in the capacities of the individuals and organizations within the network, the outward appearance will be similar. For example, a local emergency manager who needs sheltering support will always defer to the American Red Cross (ARC). While the capacity of the ARC to provide shelter may be substantially higher in Baton Rouge, Louisiana, than in Charleston, West Virginia, the network map will still be the same.

We know that in practice, response is improved when the network actors have preexisting relationships (Comfort, Ko, & Zagorecki, 2004; Kapucu, 2008, 2009), and we also know that when people are under stress, the effects of stress impede cognition (Hermans, Henckens, Joëls, & Fernández, 2014; Rei et al., 2015). Furthermore, there is a strong literature indicating that groups function better with previous experience working together (De Jong, Dirks, & Gillespie, 2016; Wang, Waldman, & Zhang, 2014). It makes sense, then, that if we are going to evaluate governance, a critical portion of that evaluation should cover knowledge of, and history of interaction with, the members of the response network. Developing a process for incorporating network partner identification in annual reporting for public response organizations would be a trivial increase in workload, yet would yield a great deal of information about preparedness and response capacity and coordination. It would also provide “signaling” to the organization about the importance of engaging the entire network.
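
A minimal sketch of what such a network-partner record might look like appears below (the record fields, the staleness rule, and the example roster are all hypothetical illustrations, not an established reporting standard):

```python
# Sketch of a network-partner record that annual reporting could capture,
# as suggested above. Field names and the staleness rule are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NetworkPartner:
    name: str
    sector: str                                # "public", "private", or "nonprofit"
    capabilities: List[str] = field(default_factory=list)
    last_joint_exercise: Optional[int] = None  # year; None if never exercised with

def untested_partners(partners: List[NetworkPartner], as_of: int,
                      staleness: int = 3) -> List[str]:
    """Partners never exercised with, or not exercised with recently."""
    return [p.name for p in partners
            if p.last_joint_exercise is None
            or as_of - p.last_joint_exercise > staleness]

roster = [
    NetworkPartner("American Red Cross", "nonprofit", ["sheltering"], 2017),
    NetworkPartner("County Public Works", "public", ["debris removal"]),
]
print(untested_partners(roster, as_of=2018))  # -> ['County Public Works']
```

The design choice worth noting is the last field: recording not just who is in the network but when the organization last worked with them is what turns a contact list into evaluable evidence of relationship maintenance.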

Reported knowledge of the network must also be moderated by the history of working with its various members. Often, emergency managers can name whom to call but have never trained with those groups. Among the information reported within the HSEEP program is a list of players and their affiliations. Evaluation of hazards governance needs to include a review of this information to see who is included (e.g., are they groups critical to the specific capabilities necessary for hazards common to the area?) and how, and whether, the “usual suspects” change over time and with new and ongoing events.

Organizational Capabilities Being Maximized

Two major barriers are embedded in the HSEEP process. First, exercises are often built around familiar circumstances. When exercises are built in-house, they are often developed around extreme cases of what the organization expects—not what is possible or even likely. When contractors are involved, exercises tend to be off-the-shelf plans that are augmented for the specific locale. This leads to organizations simply training on different flavors of the same event and, in some cases, continuing to reinforce the same poor behaviors. Second, exercises are typically developed and implemented with the idea of pushing, but not breaking, the evolution. Stated differently, exercises are designed to stress, but not overwhelm, the participants. For example, a tabletop exercise (TTX) was performed several years ago for a rural community, built around a forest fire ignited by a downed aircraft. As the exercise progressed, the “fire” pushed toward an area with several assisted living centers, and it wasn’t long before the county manager was at a loss, declaring that there was simply nothing that could be done. In this instance, the attendees were placed in a scenario they had not anticipated and were not able to move forward. Unfortunately for the county manager, less than a year later a resident of the county made the mistake of attempting a controlled burn of pasture land on a high-wind day and quickly ignited the exact fire offered for practice a year earlier. The upside for this particular manager was that the scenario had been pushed through against his protests during the TTX, and the group was actually prepared when the event came.

Evaluation of hazards governance needs to include how groups are selecting organizational capabilities for evaluation as well as how those capabilities match the needs of actual events. Starting with the latter, the extant research does little to inform practitioners about which capabilities are going to be necessary during which sorts of events. There are some obvious capabilities, such as the need for sheltering capabilities in hurricane events or interoperable communications in multijurisdictional events; yet precious little macro-level research has been done matching needs and capabilities across types of events. Similarly, there is little research to provide guidance as to the relative trade-offs when balancing capability development—that is, when maximizing resources at the local level, what are the opportunity costs of prioritizing interoperable communications over citizen preparedness? To gain leverage on these decisions, evaluation should identify the trend in capability development by tracking staff development and exercise planning over time, focusing specifically on which capabilities are being exercised. Linear trends identifying the selection of capabilities and the relative improvement in them will reveal both the change in capacity and the expressed preferences of the organization.
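
As a sketch of what such trend tracking could look like (the capability names and evaluator scores are hypothetical, and the trend is a plain least-squares slope), consider:

```python
# Minimal sketch of tracking which capabilities are exercised and whether
# performance trends upward. Scores and capability names are hypothetical;
# the trend is an ordinary least-squares slope over exercise years.

def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# One record per exercised capability: (year, capability, evaluator score 0-100)
history = [
    (2014, "interoperable communications", 55),
    (2015, "interoperable communications", 62),
    (2016, "interoperable communications", 70),
    (2015, "mass care / sheltering", 68),
    (2017, "mass care / sheltering", 66),
]

for cap in sorted({c for _, c, _ in history}):
    pts = [(y, s) for y, c, s in history if c == cap]
    years, scores = zip(*pts)
    print(f"{cap}: exercised {len(pts)}x, trend "
          f"{ols_slope(years, scores):+.1f} points/year")
```

Read together, the exercise counts expose which capabilities the organization chooses to work on, and the slopes expose whether that work is paying off.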

Developing systematic data in this respect would provide a strong start for assessing how the selections are being made. If exercises routinely examine the same set of capabilities, this is possibly a demonstration of a critical deficiency (as is typically the case with communications), but more likely it reveals a strong preference based on professional values or limitations on the part of the leadership. For example, many organizations in areas with a high incidence of hurricanes regularly exercise sheltering and evacuation evolutions, yet they do so without placing any real emphasis on dealing with medical special needs or issues of co-sheltering. This could be a function of bounded rationality on the part of the response organization: perhaps they have not had to lead an actual event involving the transportation and housing of companion animals, or that aspect of the evolution is undervalued because there is no advocate for it in the planning sessions. Similarly, emergency managers may not address medical special needs because that falls outside their scope of operations; given that the task falls to NDMS, it is easy to excuse the complications associated with the population. Either way, the result is a critical failure in planning.

Competency in the Capabilities Being Evaluated

Competency in specific capabilities is probably the easiest and most frequently employed evaluation in hazards preparedness and response—and yet it is still often done in a suboptimal fashion. FEMA provides a plethora of detailed measures across the capabilities recommended in Homeland Security Presidential Directive-8 (HSPD-8). As Gerber and Robinson (2009) indicate, the trend in competency is far more important than the absolute value—at least until such time as there is a better understanding of the costs and benefits associated with each capability. The best organizations are those that have located the Pareto-optimal point among their required capacities, delivering the maximum output in terms of service across the spectrum of capabilities required for a given event.
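
The Pareto idea can be illustrated with a small sketch (the candidate allocations and their output scores are hypothetical): among alternative allocations of a fixed budget across required capabilities, keep only those that are not outperformed on every capability by some other allocation.

```python
# Sketch of the Pareto notion described above. Candidate budget allocations
# and their capability outputs are hypothetical illustrations.

def pareto_front(portfolios):
    """Keep portfolios not dominated (>= on all outputs, > on one) by another."""
    def dominates(b, a):
        return (all(bv >= av for av, bv in zip(a, b))
                and any(bv > av for av, bv in zip(a, b)))
    return [p for p in portfolios
            if not any(dominates(q[1], p[1]) for q in portfolios if q is not p)]

# (label, (sheltering output, communications output, evacuation output))
candidates = [
    ("A", (70, 40, 50)),
    ("B", (60, 60, 60)),
    ("C", (65, 35, 45)),   # dominated by A on every capability
]
for label, outputs in pareto_front(candidates):
    print(label, outputs)  # A and B survive; C is dominated
```

Note that the front rarely collapses to a single winner: allocations A and B above trade sheltering against communications, which is precisely why absolute competency scores alone cannot settle the allocation question.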

Conclusion

Evaluation is both a necessary and an important part of managing the preparation for and response to hazards. Here we have laid out a basic framework for approaching assessment of hazards governance based on two simple principles. The first is that all evaluation should be formative. Given the evolving nature of governance, and the fact that there will never be a time when prevention of and response to hazards will be unnecessary, summative evaluation serves little productive purpose at the organizational level. Importantly, the difference between formative and summative evaluation lies in the intended purpose of the review. This point is not meant to imply that there is no value in assessing response times or monitoring resource delivery—only that the use of those measures for programmatic review is far more valuable as a tool for process improvement. Emergency responders, emergency managers, and the host of nonprofit and private actors that operate in the realm of preparedness and response live under a microscope. The response community in particular routinely faces post hoc evaluation built on unreasonable expectations: it does not deal in terms like “acceptable losses” or “collateral damage”; instead, every loss of life or property is judged as failure. Summative evaluation in this environment has a chilling effect and erects unnecessary barriers to meaningful organizational evaluation. In the larger political context, summative evaluation is an expected evolution, and one that is required by law in the United States.

Our second principle is that of climactic order: addressing concerns in order of increasing importance. Competency is certainly an important factor in the governance of hazards response—but it is certainly not the most important. It does not matter how talented a surgeon you have on hand if the problem you are dealing with is a cracked pushrod in your engine. Optimizing the flow of resources throughout the hazards preparedness and response continuum is a critical function of governance, and it requires broad knowledge of the threats and capabilities in play, collaborative leadership, and an ability to digest remarkable amounts of information. Evaluation has the potential to help develop and strengthen these traits in preparedness and response leaders, but it requires an examination of the operational environment before an assessment of the competency within that sphere.

References

Abbasi, A., & Kapucu, N. (2012). Structural dynamics of organizations during the evolution of interorganizational networks in disaster response. Journal of Homeland Security and Emergency Management, 9(1).

Adger, W. N. (2000). Social and ecological resilience: Are they related? Progress in Human Geography, 24(3), 347–364.

Birkmann, J. (2006). Measuring vulnerability to promote disaster-resilient societies: Conceptual frameworks and definitions. New York: United Nations Press. Retrieved from http://archive.unu.edu/unupress/sample-chapters/1135-MeasuringVulnerabilityToNaturalHazards.pdf.

Bowman, S., Kapp, L., & Belasco, A. (2005). Hurricane Katrina: DOD disaster response. CRS Report for Congress. The Library of Congress. Retrieved from https://fas.org/sgp/crs/natsec/RL33095.pdf.

Bruneau, M., Chang, S. E., Eguchi, R. T., Lee, G. C., O’Rourke, T. D., Reinhorn, A. M., . . . von Winterfeldt, D. (2003). A framework to quantitatively assess and enhance the seismic resilience of communities. Earthquake Spectra, 19(4), 733–752.

Buck, D. A., Trainor, J. E., & Aguirre, B. E. (2006). A critical evaluation of the incident command system and NIMS. Journal of Homeland Security and Emergency Management, 3(3), 1–27.

Cimellaro, G. P., Reinhorn, A. M., & Bruneau, M. (2010). Framework for analytical quantification of disaster resilience. Engineering Structures, 32(11), 3639–3649.

Comfort, L. K. (1999). Shared risk: Complex systems in seismic response. New York: Pergamon.

Comfort, L. K., Ko, K., & Zagorecki, A. (2004). Coordination in rapidly evolving disaster response systems: The role of information. American Behavioral Scientist, 48(3), 295–313.

Council of Economic Advisers. (2005). Economic report of the President. Washington, DC: US Government Printing Office. Retrieved from http://www.presidency.ucsb.edu/economic_reports/2005.pdf.

Davis, T. (2006). A failure of initiative: Final report of the Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina. US House of Representatives.

De Jong, B. A., Dirks, K. T., & Gillespie, N. (2016). Trust and team performance: A meta-analysis of main effects, moderators, and covariates. Journal of Applied Psychology, 101(8), 1134–1150.

Dolan, D. A. (1990). Local government fragmentation: Does it drive up the cost of government? Urban Affairs Review, 26(1), 28–45.

Drabek, T. E. (1985). Managing the emergency response. Public Administration Review, 45, 85–92.

Dynes, R. R., & Tierney, K. J. (1994). Disasters, collective behavior, and social organization. Newark: University of Delaware Press.

Eller, W., Gerber, B. J., & Branch, L. E. (2015). Voluntary nonprofit organizations and disaster management: Identifying the nature of inter‐sector coordination and collaboration in disaster service assistance provision. Risk, Hazards & Crisis in Public Policy, 6(2), 223–238.

FEMA. (2011). National preparedness system. Washington, DC. Retrieved from https://www.fema.gov/national-preparedness-system.

FEMA. (2013a). Homeland Security exercise and evaluation program. Washington, DC. Retrieved from https://www.fema.gov/media-library/assets/documents/32326.

FEMA. (2013b). Threat and hazard identification and risk assessment guide (2nd ed.). Washington, DC: Department of Homeland Security. Retrieved from https://www.fema.gov/media-library-data/8ca0a9e54dc8b037a55b402b2a269e94/CPG201_htirag_2nd_edition.pdf.

FEMA. (2015). National preparedness goal. Washington, DC: Department of Homeland Security. Retrieved from https://www.fema.gov/national-preparedness-goal.

Gerber, B. J., Ducatman, A., Fischer, M., Althouse, R., & Scotti, J. R. (2006). The potential for an uncontrolled mass evacuation of the DC metro area following a terrorist attack: A report of survey findings. West Virginia University, 81. Retrieved from http://digitalarchive.oclc.org/request?id%3Doclcnum%3A191800507.

Gerber, B. J., & Robinson, S. E. (2009). Local government performance and the challenges of regional preparedness for disasters. Public Performance & Management Review, 32(3), 345–371.

Henstra, D. (2010). Evaluating local government emergency management programs: What framework should public managers adopt? Public Administration Review, 70(2), 236–246.

Hermans, E. J., Henckens, M. J., Joëls, M., & Fernández, G. (2014). Dynamic adaptation of large-scale brain networks in response to acute stressors. Trends in Neurosciences, 37(6), 304–314.

Hood, C. (1991). A public management for all seasons? Public Administration, 69(1), 3–19.

Hood, C. (1995). The “New Public Management” in the 1980s: Variations on a theme. Accounting, Organizations and Society, 20(2), 93–109.

Im, T. (2010). What are the problems that NPM reform resulted in? Korean Society & Public Administration, 20(1), 1–27.

Ingraham, P. W., Joyce, P. G., & Donahue, A. K. (2003). Government performance: Why management matters. Abingdon, UK: Taylor & Francis.

Jackson, B. A. (2008). The problem of measuring emergency preparedness. RAND Corporation and Homeland Security. Retrieved from https://www.rand.org/content/dam/rand/pubs/occasional_papers/2008/RAND_OP234.pdf.

Kales, S. N., Soteriades, E. S., Christophi, C. A., & Christiani, D. C. (2007). Emergency duties and deaths from heart disease among firefighters in the United States. New England Journal of Medicine, 356(12), 1207–1215.

Kapucu, N. (2006). Interagency communication networks during emergencies: Boundary spanners in multiagency coordination. The American Review of Public Administration, 36(2), 207–225.

Kapucu, N. (2007). Non-profit response to catastrophic disasters. Disaster Prevention and Management: An International Journal, 16(4), 551–561.

Kapucu, N. (2008). Collaborative emergency management: Better community organising, better public preparedness and response. Disasters, 32(2), 239–262.

Kapucu, N. (2009). Interorganizational coordination in complex environments of disasters: The evolution of intergovernmental disaster response systems. Journal of Homeland Security and Emergency Management, 6(1), 1–26.

Kapucu, N., Arslan, T., & Demiroz, F. (2010). Collaborative emergency management and national emergency management network. Disaster Prevention and Management: An International Journal, 19(4), 452–468.

Karvonen, L., & Quenter, S. (2002). Electoral systems, party system fragmentation, and government instability. In D. Berg-Schlosser & J. Mitchell (Eds.), Authoritarianism and democracy in Europe, 1919–1939 (pp. 131–162). Advances in Political Science: An International Series. London: Palgrave Macmillan.

Kaufman, H. (1956). Emerging conflicts in the doctrines of public administration. American Political Science Review, 50(4), 1057–1073.

KTBS. (2013). Emergency shelters built in Louisiana & Mississippi since Hurricane Katrina [3 Investigates]. Shreveport, LA: KTBS. Retrieved from https://www.ktbs.com/news/update-investigates-emergency-shelters-built-in-louisiana-mississippi-since-hurricane/article_286116c0-a106-5a83-9c89-dea12ca813b9.html.

LDA. (2008). Breakdown of certain costs to state government for Hurricane Gustav [Press release]. Retrieved from http://emergency.louisiana.gov/Releases/091008DOA.html.

Moore, S., Eng, E., & Daniel, M. (2003). International NGOs and the role of network centrality in humanitarian aid operations: A case study of coordination during the 2000 Mozambique floods. Disasters, 27(4), 305–318.

Moynihan, D. P. (2009). The network governance of crisis response: Case studies of incident command systems. Journal of Public Administration Research and Theory, 19(4), 895–915.

Nolte, I. M., Martin, E. C., & Boenigk, S. (2012). Cross-sectoral coordination of disaster relief. Public Management Review, 14(6), 707–730.

Perry, R. W. (2004). Disaster exercise outcomes for professional emergency personnel and citizen volunteers. Journal of Contingencies and Crisis Management, 12(2), 64–75.

Rei, D., Mason, X., Seo, J., Gräff, J., Rudenko, A., Wang, J., . . . Canter, R. G. (2015). Basolateral amygdala bidirectionally modulates stress-induced hippocampal learning and memory deficits through a p25/Cdk5-dependent pathway. Proceedings of the National Academy of Sciences, 112(23), 7291–7296.

Rhodes, R. A. W. (1996). The new governance: Governing without government. Political Studies, 44(4), 652–667.

Robinson, S. E., Eller, W. S., Gall, M., & Gerber, B. J. (2013). The core and periphery of emergency management networks. Public Management Review, 15(3), 344–362.

Schneider, M. (1986). Fragmentation and the growth of local government. Public Choice, 48(3), 255–263.

Schneider, S. K. (2005). Administrative breakdowns in the governmental response to Hurricane Katrina. Public Administration Review, 65(5), 515–516.

Simo, G., & Bies, A. L. (2007). The role of nonprofits in disaster response: An expanded model of cross‐sector collaboration. Public Administration Review, 67(Suppl. 1), 125–142.

Smyth, A. W., Altay, G. L., Deodatis, G., Erdik, M., Franco, G., Gülkan, P., . . . Seeber, N. (2004). Probabilistic benefit-cost analysis for earthquake damage mitigation: Evaluating measures for apartment houses in Turkey. Earthquake Spectra, 20(1), 171–203.

Sundnes, K. O., & Birnbaum, M. L. (2003). Health disaster management: Guidelines for evaluation and research in the Utstein style. Prehospital and Disaster Medicine, 17(Suppl. 3), 1–77.

Tierney, K. J. (1985). Emergency medical preparedness and response in disasters: The need for interorganizational coordination. Public Administration Review, 45, 77–84.

van Thiel, S., & Leeuw, F. L. (2002). The performance paradox in the public sector. Public Performance & Management Review, 25(3), 267–281.

Veitch, C. (2009). Impact of rurality on environmental determinants and hazards. Australian Journal of Rural Health, 17(1), 16–20.

Wang, D., Waldman, D. A., & Zhang, Z. (2014). A meta-analysis of shared leadership and team effectiveness. Journal of Applied Psychology, 99(2), 181–198.