BMC Health Serv Res. 2020; 20: 258.

Peg Allen,1 Rebekah R. Jacob,1 Renee G. Parks,1 Stephanie Mazzucca,1 Hengrui Hu,1 Mackenzie Robinson,1 Maureen Dobbins,2 Debra Dekker,3 Margaret Padek,1 and Ross C. Brownson1,4

Abstract

Background

Public health resources are limited and best used for effective programs. This study explores associations of mis-implementation in public health (ending effective programs or continuing ineffective programs) with organizational supports for evidence-based decision making among U.S. local health departments.

Methods

The national U.S. sample for this cross-sectional study was stratified by local health department jurisdiction population size. One person was invited from each randomly selected local health department: the leader in chronic disease, or the director. Of 600 selected, 579 had valid email addresses; 376 completed the survey (64.9% response). Survey items assessed frequency of and reasons for mis-implementation. Participants indicated agreement with statements on organizational supports for evidence-based decision making (7-point Likert).

Results

Thirty percent (30.0%) reported programs often or always ended that should have continued (inappropriate termination); organizational supports for evidence-based decision making were not associated with the frequency of programs ending. The main reason given for inappropriate termination was grant funding ended (86.0%). Fewer (16.4%) reported programs often or always continued that should have ended (inappropriate continuation). Higher perceived organizational supports for evidence-based decision making were associated with less frequent inappropriate continuation (odds ratio = 0.86, 95% confidence interval 0.79, 0.94). All organizational support factors were negatively associated with inappropriate continuation. Top reasons were sustained funding (55.6%) and support from policymakers (34.0%).

Conclusions

Organizational supports for evidence-based decision making may help local health departments avoid continuing programs that should end. Creative mechanisms of support are needed to avoid inappropriate termination. Understanding what influences mis-implementation can help identify supports for de-implementation of ineffective programs so resources can go towards evidence-based programs.

Keywords: Implementation science, Mis-implementation, De-implementation, Evidence-based decision making, Evidence-based public health, Health departments

Background

Mis-implementation of public health programs, policies, and services can occur in two ways: ending effective programs that should continue (inappropriate termination), or continuing ineffective programs that should end (inappropriate continuation) [1–3]. Here the term program refers to public health policies, environmental or system changes, educational and media activities, and services such as immunizations or screening for disease detection. De-implementation refers to ending ineffective or low-value programs, and is studied more often in medicine than in public health [1, 4–12]. The international Choosing Wisely initiative has recommended numerous medical procedures for de-implementation [13, 14]. McKay and colleagues (2018) recently outlined several public health and social service initiatives that have been discontinued or warrant de-implementation because they are harmful (prone infant sleeping position), ineffective (D.A.R.E. school-based drug prevention program), low value (routine HIV counseling with HIV testing), or the issue dissipated (Ebola) [15]. Evidence suggests these phenomena could have negative impacts on our public health systems [15].

In public health it is necessary to address both types of mis-implementation. Governmental public health departments in the US have experienced budget cuts in the past decade and high staff turnover [16, 17]. Finding ongoing funding is often challenging [15, 18–21]. For example, pressure to deliver programs within funders’ deadlines despite lack of funding for staff led to insufficient planning and incomplete statewide implementation of an evidence-based arthritis program [22]. External politics also influence funding and implementation decisions [21, 23]. Additional aspects influencing sustainment likely vary by program type and include implementation monitoring to improve program adaptation and delivery, partnerships, planning, and communications [18, 24–26]. Therefore, it is important to assess and address both types of mis-implementation in public health practice.

The impact of organizational supports for evidence-based decision making (EBDM) on mis-implementation of public health programs is not yet understood, though organizational structures and processes have been found to affect implementation of medical procedures and mental health services [7, 8, 20, 27, 28]. EBDM fosters implementation of effective programs and prevents mis-implementation through use of the best available surveillance data and intervention evidence in setting priorities and selecting programs, application of systematic prioritization and program planning methods, community engagement, and evaluation to inform adaptation and implementation [29–31]. Capacity building for EBDM includes training to increase skills of individual staff members and management practices to enhance organizational supports for EBDM.

A literature review identified five domains of organizational supports that are associated with agency performance: leadership, workforce development, organizational climate and culture, relationships and partnerships, and financial practices [32]. Leadership support for EBDM includes leadership skills, active modeling of EBDM processes, communication of expectations for use of EBDM processes, and participatory decision-making [32–34]. Workforce development includes in-service training and access to technical assistance. An organizational climate and culture supportive of EBDM has a free flow of information, values evidence and continued learning, and supports methods that may be new to the organization, such as specific prioritization or quality improvement processes [32, 33]. Relationships and partnerships with organizations from different sectors that align their missions and build EBDM capacity are essential, as no public health agency can accomplish complex multi-level interventions in isolation [32]. Supportive financial practices are transparent; incorporate outcomes-based contracting and allocation of funds for quality improvement, EBDM, information access, and staff training; and diversify funding sources [32, 35, 36]. While these management practices support EBDM, little is known about direct relationships of these organizational supports with mis-implementation frequency.

To support use of EBDM and prevent mis-implementation, the Centers for Disease Control and Prevention (CDC) and other federal funders in the US increasingly require use of evidence-based interventions (EBIs) by grantees [37, 38]. In chronic disease prevention, CDC includes evidence-based practice requirements in funding state health departments that then pass on funding to local public health agencies. State health departments vary in the extent to which they in turn require local grantees to implement evidence-based strategies [33], even though they rely on local health departments (LHDs) to lead population health efforts to prevent chronic conditions [29]. Delivery of chronic disease prevention programs was not part of the historical regulatory responsibilities of LHDs and remains highly dependent on flow-through funds from federal and state agencies and private foundations. The Public Health Accreditation Board emphasizes workforce development for and documentation of evidence-based practice in its requirements for accreditation of state and local public health departments [39]. Public health workforce competencies include skills needed to plan and implement effective programs [40].

The purposes of the present study are to: 1) describe self-reported LHD frequency of and reasons for mis-implementation among a national sample of LHD chronic disease directors and 2) explore associations between perceived organizational supports for evidence-based processes and mis-implementation.

Methods

Study design

This national cross-sectional study was part of a larger study that used a stratified random sampling design to invite one individual from each included US LHD to complete an online survey [41]. Eligible LHDs were those that screened for diabetes or body mass index, or conducted or contracted for population-based nutrition and physical activity efforts, according to the 2016 National Association of County and City Health Officials (NACCHO) National Profile survey. Of the 1677 eligible LHDs, a total of 600 were randomly selected, 200 in each of three jurisdiction population size strata: small (< 50,000), medium (50,000-199,999), and large (≥ 200,000). The Washington University in St. Louis Institutional Review Board approved the study.
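The stratified selection described above can be made concrete with a short sketch. This is an illustration only, under assumed data: the eligibility frame, column names (lhd_id, jurisdiction_size), and random seed are hypothetical, not study materials.

```python
import pandas as pd

# Hypothetical frame of the 1677 eligible LHDs; column names are assumptions.
eligible = pd.DataFrame({
    "lhd_id": range(1677),
    "jurisdiction_size": ["small", "medium", "large"] * 559,  # placeholder strata
})

# Draw 200 LHDs per jurisdiction-size stratum, as in the study design.
sample = (
    eligible
    .groupby("jurisdiction_size", group_keys=False)
    .sample(n=200, random_state=2017)
)
print(sample["jurisdiction_size"].value_counts())  # 200 per stratum
```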

Participants and data collection

The person responsible for making decisions about chronic disease prevention and control in each selected LHD was invited by email to complete the online Qualtrics survey. In some LHDs this was the LHD director, while in others it was a division director or program manager, according to lists provided by NACCHO. Invitees with invalid email addresses were deemed ineligible. Email invitations included materials for informed consent, as did the cover letter of the online survey. All participants gave online consent to the survey. Invitees could decline by email, phone, or online. Participants were offered a $20 Amazon.com gift card. To increase response rates, up to three reminder emails were sent and two phone calls were made to non-respondents. Data collection took place in August–September 2017.

Survey development

As described in detail elsewhere, the study team drew from measures developed and tested in its previous national health department studies and other existing instruments identified by the study team [32, 41–43]. The survey included sections on mis-implementation (frequency and reasons), LHD and participant characteristics, skills for EBDM, and organizational supports. After three rounds of study team input and cognitive response testing with 10 chronic disease prevention practitioners, the survey demonstrated acceptable test-retest reliability [41].

Measures

The four mis-implementation survey items assessed the frequency of and reasons for each type of mis-implementation. The frequency items were: “In your opinion, how often do programs end that should have continued (i.e., end without being warranted)?” and “In your opinion, how often do programs continue that should have ended (i.e., continue without being warranted)?” Response options were “never, rarely, sometimes, often, always, I do not know”. To understand reasons for mis-implementation, participants were asked, “When you think about public health programs that have ended when they should have continued, what are the most common reasons for programs ending?” and “When you think about public health programs that continued when they should have ended, what are the most common reasons for their continuation?” Response options included pre-listed reasons, “other,” and “I do not know.” The study team listed reasons found in the literature or commonly selected in pilot studies [2, 3]. In a pilot study with a purposive sample of chronic disease practitioners from LHDs, state health departments, and partnering agencies in multiple states, frequency of mis-implementation had 79% test-retest agreement for programs ending that should have continued and 80% agreement for programs continuing that should have ended [2]. LHD characteristics included items from the 2016 NACCHO National Profile Survey and items within the current study’s survey. Participant demographic items were from the study team’s previous surveys. The supplemental file contains the complete survey instrument.

Survey items also assessed organizational supports for evidence-based decision making (EBDM) using a 7-point agreement Likert scale with 1 = strongly disagree and 7 = strongly agree. Confirmatory factor analyses in M-Plus supported the study’s conceptual framework of six organizational support factors [41, 44]. Factor scores for each participant were calculated in M-Plus. Table 1 lists the factors and items, with exact item wording.

Table 1

Evidence-based decision making (EBDM) support factors and items

Factor^a and item wording:

Awareness of EBDM (3 items)
  1. I am provided the time to identify evidence-based programs and practices.
  2. My direct supervisor recognizes the value of management practices that facilitate EBDM.
  3. My work group/division offers employees opportunities to attend EBDM trainings.

Capacity for EBDM (7 items)
  1. I use EBDM in my work.
  2. My direct supervisor expects me to use EBDM.
  3. My performance is partially evaluated on how well I use EBDM in my work.
  4. My work group/division currently has the resources (e.g. staff, facilities, partners) to support application of EBDM.
  5. The staff in my work group/division has the necessary skills to carry out EBDM.
  6. The majority of my work group/division’s external partners support use of EBDM.
  7. Top leadership in my agency encourages use of EBDM.

Resource availability (3 items)
  1. Informational resources (e.g. academic journals, guidelines, and toolkits) are available to my work group/division to promote the use of EBDM.
  2. My work group/division engages a diverse external network of partners that share resources to facilitate EBDM.
  3. Stable funding is available for EBDM.

Evaluation capacity (3 items)
  1. My work group/division plans for evaluation of interventions prior to implementation.
  2. My work group/division uses evaluation data to monitor and improve interventions.
  3. My work group/division distributes intervention evaluation findings to other organizations that can use our findings.

EBDM climate cultivation (3 items)
  1. Information is widely shared in my work group/division so that everyone who makes decisions has access to all available knowledge.
  2. My agency is committed to hiring people with relevant training or experience in public health core disciplines (e.g. epidemiology, health education, environmental health).
  3. My agency has a culture that supports the processes necessary for EBDM.

Partnerships to support EBDM (3 items)
  1. It is important to my agency to have partners who share resources (money, staff time, space, materials).
  2. It is important to my agency to have partners in healthcare to address population health issues.
  3. It is important to my agency to have partners in other sectors (outside of health) to address population health issues.

^a Factors derived through confirmatory factor analyses by coauthor SM.
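The study computed participant-level factor scores for these six constructs through confirmatory factor analysis in M-Plus. As a rough illustration of how per-factor summaries could be built from 7-point Likert items like those in Table 1, the sketch below simply averages hypothetical item columns within each factor; this mean-composite shortcut and the column names are assumptions for illustration only, not the CFA scoring the authors used.

```python
import pandas as pd

# Hypothetical wide survey data: one column per Likert item (1-7); names assumed.
ITEMS = {
    "awareness": ["aw1", "aw2", "aw3"],
    "capacity": [f"cap{i}" for i in range(1, 8)],
    "resources": ["res1", "res2", "res3"],
    "evaluation": ["ev1", "ev2", "ev3"],
    "climate": ["cl1", "cl2", "cl3"],
    "partnerships": ["pa1", "pa2", "pa3"],
}

def composite_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Mean of each factor's items (illustrative proxy for CFA factor scores)."""
    out = pd.DataFrame(index=df.index)
    for factor, cols in ITEMS.items():
        out[factor] = df[cols].mean(axis=1)
    # Analogous to the "EBDM support overall (sum of 6)" variable in Table 3.
    out["ebdm_support_overall"] = out.sum(axis=1)
    return out
```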

Statistical analysis

Data management, variable recoding, and descriptive and bivariate analyses were conducted in SPSS (IBM SPSS Statistics for Windows, Version 24.0) in 2018. Multivariate logistic regression modeling was conducted in SAS software (SAS Institute Inc., Version 9.4) in 2018. The two dependent variables, frequency of programs ending when they should have continued and frequency of programs continuing when they should have ended, were each dichotomized into 1 = often or always and 0 = never, rarely, or sometimes, after excluding “I do not know” and blank responses. There were too few responses of “never” and “rarely” to analyze these as a separate categorical group. The n’s varied slightly because the number of participants answering “I do not know” differed between items. Separate modeling was conducted for each mis-implementation dependent variable and each organizational support for EBDM (the independent variable of interest in each model). All models were adjusted for LHD jurisdiction population size and state. Because the study design included only one participant per LHD and intra-cluster correlation (ICC) statistics for the six EBDM support factors by state were low (ranging from 0.005 to 0.012), mixed modeling was not conducted. Models were additionally adjusted for LHD or participant characteristics associated with both the mis-implementation frequency (dependent variable) and the organizational support for EBDM (independent variable of interest).
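To make the analytic steps concrete, here is a minimal sketch of the dichotomization and one adjusted logistic regression model. The published analysis was run in SAS 9.4; this sketch uses Python/statsmodels purely for illustration, and the variable names (continue_freq, ebdm_support, pop_size, state) are assumptions rather than the study's actual dataset fields.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_misimplementation_model(df: pd.DataFrame):
    """df is a hypothetical analysis file with one row per participating LHD."""
    # Dichotomize the frequency item: 1 = often/always, 0 = never/rarely/sometimes,
    # after dropping "I do not know" and blank responses.
    keep = df["continue_freq"].isin(["never", "rarely", "sometimes", "often", "always"])
    d = df.loc[keep].copy()
    d["continue_flag"] = d["continue_freq"].isin(["often", "always"]).astype(int)

    # One model per organizational support factor, adjusted for jurisdiction
    # population size and state (plus other covariates in the published models).
    model = smf.logit("continue_flag ~ ebdm_support + C(pop_size) + C(state)", data=d).fit()
    odds_ratios = np.exp(model.params)   # exponentiate coefficients to odds ratios
    conf_int = np.exp(model.conf_int())  # and their 95% confidence limits
    return odds_ratios, conf_int
```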

Results

Participants

Of the 579 eligible LHD chronic disease leads with valid email addresses, 376 (64.9%) completed the online survey. Per the study design, there was only one participant from each LHD. Table 2 shows participant and LHD characteristics of the sample. Most participants were women (83.2%) and had worked in public health 10 or more years (71.4%). Participants had been in their positions an average of 6.5 years (standard deviation 6.5), with a median of 4 years. The majority (58.2%) had a graduate degree; nearly a third (31.8%) had a graduate degree specifically in public health; and 29.1% had a nursing background. As designed, the stratified sample was split roughly in thirds by local health department jurisdiction population size. The sample included LHDs from 44 of the 51 jurisdictions (the 50 states plus the District of Columbia).

Table 2

Participant and local health department characteristics, by perceived mis-implementation, 2017 national survey

END = reported programs END that should have continued; CONTINUE = reported programs CONTINUE that should have ended. Values are column percentages.

| Characteristic | Overall (N = 376^a) % | END often/always (n = 106, 30.0%) % | END sometimes/rarely/never (n = 247, 70.0%) % | END chi-square P-value | CONTINUE often/always (n = 57, 16.4%) % | CONTINUE sometimes/rarely/never (n = 290, 83.6%) % | CONTINUE chi-square P-value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Participants | | | | | | | |
| Position | | | | 0.87 | | | 0.52 |
|   Agency leadership | 46.4 | 46.2 | 48.6 | | 43.9 | 49.0 | |
|   Program manager | 45.6 | 47.2 | 44.1 | | 45.6 | 44.5 | |
|   Technical or other | 8.0 | 6.6 | 7.3 | | 10.5 | 6.6 | |
| Graduate degree in any field | 58.2 | 51.9 | 62.0 | 0.08 | 68.4 | 57.9 | 0.14 |
| Public health graduate degree | 31.8 | 28.3 | 34.3 | 0.27 | 43.9 | 31.2 | 0.07 |
| Nursing degree or license | 29.1 | 34.9 | 26.4 | 0.11 | 21.1 | 29.5 | 0.20 |
| Female | 83.2 | 84.8 | 83.2 | 0.72 | 87.5 | 82.6 | 0.36 |
| Age ≥ 50 years | 43.7 | 41.5 | 45.1 | 0.53 | 31.6 | 45.3 | 0.06 |
| Years worked in current position | | | | 0.76 | | | 0.94 |
|   < 5 years | 54.0 | 50.9 | 52.8 | | 54.4 | 52.9 | |
|   5–9 years | 23.3 | 22.6 | 24.4 | | 24.6 | 23.9 | |
|   ≥ 10 years | 22.7 | 26.4 | 22.8 | | 21.1 | 23.2 | |
| Years worked in public health | | | | 0.16 | | | 0.15 |
|   < 10 years | 28.6 | 32.1 | 23.6 | | 35.1 | 25.6 | |
|   10–19 years | 31.6 | 33.0 | 32.1 | | 35.1 | 31.5 | |
|   ≥ 20 years | 39.8 | 34.9 | 44.3 | | 29.8 | 42.9 | |
| Local health departments | | | | | | | |
| Jurisdiction population size | | | | 0.05 | | | 0.64 |
|   < 50,000 | 31.6 | 40.6 | 27.5 | | 28.1 | 31.0 | |
|   50,000–199,999 | 34.3 | 30.2 | 36.8 | | 40.4 | 33.8 | |
|   ≥ 200,000 | 34.0 | 29.2 | 35.6 | | 31.6 | 35.2 | |
| Accredited^b | 28.0 | 23.6 | 30.0 | 0.22 | 29.8 | 29.0 | 0.90 |
| Has a Local Board of Health | 72.6 | 80.0 | 69.4 | 0.04^d | 73.7 | 72.2 | 0.82 |
| Governance structure | | | | 0.09 | | | 0.26 |
|   Locally governed | 76.3 | 76.2 | 76.1 | | 75.4 | 76.1 | |
|   State governed | 13.9 | 9.5 | 15.8 | | 19.3 | 13.1 | |
|   Shared state/local governance | 9.9 | 14.3 | 8.1 | | 5.3 | 10.7 | |
| Rural jurisdiction | 45.6 | 51.4 | 42.5 | 0.12 | 40.4 | 44.6 | 0.55 |
| Community Guide^c use to support decision-making in past year | | | | 0.73 | | | 0.87 |
|   Used consistently across all relevant program areas | 6.0 | 4.8 | 6.5 | | 6.7 | 6.4 | |
|   Used in some program areas | 63.5 | 62.7 | 65.1 | | 66.7 | 63.0 | |
|   Not used | 30.5 | 32.5 | 28.5 | | 25.7 | 30.6 | |
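The chi-square P-values in Table 2 come from bivariate tests comparing characteristic distributions between respondents who did and did not report each type of mis-implementation. A minimal sketch of one such test follows; the cell counts are hypothetical, since the table reports percentages rather than raw counts.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table: mis-implementation report (rows) by jurisdiction size (cols).
observed = [
    [43, 32, 31],   # reported programs often/always end that should have continued
    [68, 91, 88],   # reported sometimes/rarely/never
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```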

Frequency of mis-implementation

Thirty percent (30.0%) of participants reported inappropriate termination, defined here as reporting that programs often or always end that should have continued (Table 2). While frequency of inappropriate termination was not associated with any participant characteristics, those working in health departments with a jurisdiction population size < 50,000 were more likely to report inappropriate termination than participants in larger jurisdictions (p = .05). Those in health departments with a local board of health were also more likely to report inappropriate termination than those without one (p = .04). Only 16.4% reported inappropriate continuation, defined here as reporting that programs often or always continue when they should have ended (Table 2). Frequency of inappropriate continuation did not differ by characteristics of participants or LHDs.

Table 3 shows adjusted odds ratios of reporting mis-implementation in separate models for each organizational support for EBDM. Organizational supports were not significantly associated with frequency of inappropriate termination in either an unadjusted model or a model adjusted for jurisdiction population size, state, having a local board of health, graduate degree in any field, nursing background, and years worked in public health. All six organizational support factors were negatively associated with inappropriate continuation after adjusting for LHD jurisdiction population size, state, public health graduate degree, and age group. That is, participants who reported a higher presence of organizational supports for EBDM reported less frequent continuation of programs that should have ended.

Table 3

Adjusted odds ratios of reporting mis-implementation by organizational supports, in separate multivariate logistic regression modelsa

END = reported programs often or always END that should have continued, vs. else (N = 353)^b; CONTINUE = reported programs often or always CONTINUE that should have ended (N = 347)^c.

| Perceived organization support factor | END b (SE) | END Wald | END P-value | END Odds Ratio (95% CI) | CONTINUE b (SE) | CONTINUE Wald | CONTINUE P-value | CONTINUE Odds Ratio (95% CI) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Awareness of EBDM | 0.24 | 1.75 | 0.19 | 1.28 (0.89, 1.83) | −0.58 | 6.21 | 0.01 | 0.56 (0.36, 0.88) |
| EBDM Capacity | 0.25 | 2.12 | 0.15 | 1.29 (0.92, 1.81) | −0.59 | 7.26 | 0.007 | 0.55 (0.36, 0.85) |
| Resource Availability | 0.32 | 2.68 | 0.10 | 1.38 (0.94, 2.20) | −0.71 | 8.51 | 0.004 | 0.49 (0.30, 0.79) |
| Evaluation Capacity | 0.27 | 2.98 | 0.08 | 1.31 (0.96, 1.79) | −0.67 | 12.06 | 0.005 | 0.51 (0.35, 0.75) |
| Climate Cultivation | 0.36 | 2.74 | 0.10 | 1.44 (0.94, 2.21) | −0.96 | 12.10 | 0.005 | 0.39 (0.23, 0.66) |
| Partnerships that Support EBDM | 0.15 | 0.57 | 0.45 | 1.16 (0.79, 1.69) | −0.48 | 4.37 | 0.04 | 0.62 (0.39, 0.97) |
| EBDM support overall (sum of 6) | −0.07 (0.50) | 0.02 | 0.89 | 1.06 (0.99, 1.14) | −0.15 (0.05) | 10.93 | < 0.001 | 0.86 (0.79, 0.94) |
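As a quick arithmetic check on how the columns of Table 3 relate, the odds ratio and 95% confidence interval for the overall EBDM support sum can be reconstructed from the reported coefficient and standard error (b = -0.15, SE = 0.05). Because the published values are rounded, the reconstruction only approximately matches the reported interval of 0.86 (0.79, 0.94).

```python
import math

b, se = -0.15, 0.05  # reported coefficient and standard error (rounded)
odds_ratio = math.exp(b)
ci_low = math.exp(b - 1.96 * se)
ci_high = math.exp(b + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
# OR = 0.86 (95% CI 0.78, 0.95), close to the published 0.86 (0.79, 0.94)
```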

Reasons for mis-implementation

Figure 1 shows percentages of participants that selected pre-listed potential reasons for inappropriate termination. The most commonly chosen reasons were the ending of funding, either that grant funding ended (86.0%), or that funding was diverted to a higher priority program (45.7%). Fewer than 25% of participants selected each of the remaining pre-listed reasons. Twelve participants (3.4%) reported “other” reasons, which included lack of staff (n = 4), other funding issues (n = 4), low program participation (n = 2), and single responses of other reasons.

Fig. 1. Reasons for ENDING programs that should have continued in local health departments, n = 350. Total percent does not equal 100% as participants could select more than one option.

As shown in Fig. 2, the most frequently selected reasons for inappropriate continuation were sustained funding (55.6%) and sustained support from policymakers (34.0%). Sustained support from agency leaders (27.7%), ease of maintaining the program (28.6%), lack of program evaluation (28.3%), and absence of alternative program options (27.7%) were also each cited by more than 25% of participants. The 17 (5.2%) “other” responses included resistance to change (n = 8) and continuation being “required”, “mandated”, “requested”, or “supported by others” (n = 6). Resistance to change included “it’s what we’ve always done”, “inertia”, “tradition”, “staff ingrained”, “fear of change”, and “resistance to stopping”. Reasons did not vary by reported frequency of inappropriate continuation (data not shown).

Fig. 2. Reasons for CONTINUING programs that should have ended in local health departments, n = 329. Total percent does not equal 100% as participants could select more than one option.

Discussion

Our study sheds light on two different phenomena of mis-implementation of public health programs in local public health practice: inappropriate termination and inappropriate continuation. For programs ending when they should have continued, the ending of funding was the most commonly reported reason, while support for EBDM within the LHD was not associated with frequency. For programs continuing when they should have ended, the most common reason was that funding was sustained. Furthermore, in this instance there appeared to be an association between a lack of organizational support for EBDM and inappropriate continuation.

Reported frequency of mis-implementation in this study was lower than that found in the study team’s earlier pilot work [2], but still concerning. In pilot data from 2013 to 2014 with identical item wording, 42.0% of LHD directors and program managers reported programs often or always ended that should have continued and 29.4% reported programs often or always continued that should have ended [2], compared to 30.0 and 16.4% respectively in the present study. Sampling methods differed, so findings may not be fully comparable. Nonetheless, the lower reported frequencies in the present study may reflect funders’ increased requirements since the previous study for LHDs to demonstrate use of EBIs in chronic disease prevention. Given current national emphasis on EBIs, there may also have been reluctance to report inappropriate continuation (social desirability bias).

It is encouraging that higher perceived organizational supports for EBDM were associated with less frequent inappropriate continuation of programs, but it is puzzling that several organizational support factors trended toward positive, non-significant associations with inappropriate termination. We can only surmise that managers in LHDs with higher evaluation capacity may be more aware of inappropriate termination. As shown in Fig. 1, only 8% reported lack of evaluation, and only 12% reported lack of program impact, as top reasons for inappropriate termination.

Organizational supports were insufficient in this study to ensure program sustainment, whereas other studies have found that multiple internal and external factors affect sustainment. Reviews found that organizational climate, leadership support, staff commitment and skills, adequate staffing and low staff turnover, organizational resources, and partnerships affect EBI sustainability [18, 45, 46]. The review by Hodge and Turner (2016) found engagement with community leaders and individual implementers to be key to community-based program sustainment in low-resource settings [18]. Engagement involves building relationships with community policy makers and implementers [18, 46]. An important aspect is making decisions together to ensure programs are aligned with community context, cultures, and priorities [18, 46]. Collaborative partnerships across organizations and coalitions are also key to program sustainment [18, 45, 46]. High-functioning organizational partnerships that leverage the capacity of each collaborating organization are more likely to be able to sustain programs [18, 45, 46]. Policy and legislation are associated with sustainment of programs in community, clinical, and social service settings [45]. Engaging community leaders and other policy makers throughout programmatic decision-making can increase the likelihood of program sustainment [18].

Qualitative studies emphasize the importance of leadership support, planning, partnerships, and communication in capacity to sustain public health EBIs [26, 47]. Reassignment of staff, changes in staff workloads, and changes in leadership led to discontinuation of an evidence-based trauma intervention in a large urban school district [48]. Lack of organizational policy to support staff time to attend training led to partial implementation of after school physical activity efforts [49]. But in the present study, ending of funding was by far the most commonly reported reason for inappropriate termination, as found in a recent review [50], and organizational supports were not protective. This reflects lack of ongoing funding sources for EBIs, and points out the need for strong community and policy maker engagement, inter-organizational partnerships, and alternate and diversified funding sources. There is a need for better communication with policy makers and other program decision makers on the importance of and evidence for population-based chronic disease prevention. Communicating evidence to policy makers remains one of the top skill gaps among health department staff [51].

Public health systems are working to scale up EBIs [22, 45, 49], but little is known about strategies to address inappropriate continuation of ineffective approaches. Here public health can learn from medical studies, even though organizational structures and funding sources differ. Healthcare systems are acknowledging that ending low value care is difficult, requires different processes than implementation, and best strategies are not yet known [9]. Supportive organizational climates and training are two organizational supports that may help. A recent review by Colla and colleagues found healthcare systems that provided clinician education and feedback decreased use of low value procedures [7], but other authors viewed provider education and feedback as insufficient [9, 10, 52]. For example, clinician awareness of guidelines to avoid use of low-value tumor markers did not lead to ending such use except in healthcare systems with cultures that emphasized collective decision making in line with guidelines [52].

The present study has several limitations. The cross-sectional survey limits temporal interpretations of the findings. We do not know on what basis public health practitioners judged whether programs should have continued or ended. However, these questions were asked after a detailed definition of EBDM and sections on EBIs and supports for EBDM. Participants may have different interpretations of the terms “evidence-based” and “effectiveness.” We did not define the term “warranted” in the questions “In your opinion, how often do programs end that should have continued (i.e., end without being warranted)?” and “In your opinion, how often do programs continue that should have ended (i.e., continue without being warranted)?” So it is unknown whether participants interpreted “warranted” as evidence-based, which was the context of the survey, or were mentally including other factors such as champion or policy maker preferences or lack of partner support. We also did not specify a time frame for perceived frequency of inappropriate termination or continuation. There was not space in the survey to ask participants to define what they meant by inappropriate program termination or continuation, which would have furthered understanding and interpretation of survey responses. There could be social desirability bias to under-report continuation of programs that should end, given national emphasis on EBIs for chronic disease prevention. Still, the present study provides a glimpse into mis-implementation in local public health. A future study will address many of these limitations [3].

In addition to gaining a deeper understanding of organizational and external influences on mis-implementation, future solutions need to be developed for how best to fund public health practice so that effective programs can be sustained. With high staff turnover and a significant portion of the public health workforce retiring [16, 17], succession planning and ongoing EBDM capacity building efforts are needed.

Conclusions

While improvements have occurred since early pilot data were collected in 2013 [2], the results of this study show that both inappropriate termination and continuation of programs continue, mainly due to funding-related issues. Loss of funding was the main reason reported for inappropriate termination, with organizational supports not protective. Policy maker engagement, strong organizational partnerships, and creative mechanisms of support are needed to avoid inappropriate termination. This study shows organizational supports for EBDM may help LHDs avoid inappropriate continuation, but may be secondary to funding considerations. Public health agency leaders can cultivate organizational climates in which EBDM is valued and supported; ensure staff are trained in skills needed for EBDM, including program implementation and evaluation; provide access to evidence; and stimulate strong partner networks. Further understanding of the local and national influences on mis-implementation among LHDs can help identify financial and other supports so resources can be directed towards programs with the greatest promise of improving health and well-being.

Acknowledgements

We acknowledge the data collection and reporting help of Allison Poehler, and the administrative support of Linda Dix, Mary Adams, and Cheryl Valko at the Prevention Research Center in St. Louis, Brown School, Washington University in St. Louis. We also acknowledge the Centers for Disease Control and Prevention and the Robert Wood Johnson Foundation, which provided funding for the 2016 National Profile study, and the National Association of County and City Health Officials (NACCHO).

Abbreviations

CDC Centers for Disease Control and Prevention
D.A.R.E. Drug Abuse Resistance Education program
EBDM Evidence-based decision making
HIV Human immunodeficiency virus
ICC Intra-cluster correlation
LHD Local health department
NACCHO National Association of County and City Health Officials

Authors’ contributions

Conceptualization and design: RCB, RGP, MD; Measures development and testing: RCB, RGP, PA, SM; Analyses: HH, MR, SM, PA, RRJ, RCB; Writing: PA, RRJ, RCB, RGP; Manuscript content revisions: RCB, RRJ, SM, PA, RGP, MP, MR, MD, DD; All authors read and approved the final manuscript.

Funding

This study is funded by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health (5R01DK109913, 2P30DK092949, and P30DK092950), the National Cancer Institute of the National Institutes of Health (1P50CA244431), and the Centers for Disease Control and Prevention (U48DP006395). The study sponsor was not involved in study design, data collection, analyses, or reporting of findings. The findings and conclusions in this article are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or Centers for Disease Control and Prevention.

Availability of data and materials

Materials related to this study are available on the website of the Prevention Research Center in St. Louis at Washington University in St. Louis, https://prcstl.wustl.edu/. The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate

The Human Research Protections Office at Washington University in St. Louis provided institutional review approval (protocol #201603157). All participants gave online consent to the survey.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Peg Allen, Email: pegallen@wustl.edu.

Rebekah R. Jacob, Email: rebekahjacob@wustl.edu.

Renee G. Parks, Email: .

Stephanie Mazzucca, Email: smazzucca@wustl.edu.

Hengrui Hu, Email: .

Mackenzie Robinson, Email: .

Maureen Dobbins, Email: dobbinsm@mcmaster.ca.

Debra Dekker, Email: DDekker@naccho.org.

Margaret Padek, Email: mpadek@wustl.edu.

Ross C. Brownson, Email: rbrownson@wustl.edu.

Supplementary information

Supplementary information accompanies this paper at 10.1186/s12913-020-05141-5.

References

1. Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. [Google Scholar]

2. Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, et al. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48(5):543–551. doi: 10.1016/j.amepre.2014.11.015. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

3. Padek M, Allen P, Erwin PC, Franco M, Hammond RA, Heuberger B, et al. Toward optimal implementation of cancer prevention and control programs in public health: a study protocol on mis-implementation. Implement Sci. 2018;13(1):49. doi: 10.1186/s13012-018-0742-9. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

4. Norton WE, Kennedy AE, Chambers DA. Studying de-implementation in health: an analysis of funded research grants. Implement Sci. 2017;12(1):144. doi: 10.1186/s13012-017-0655-z. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

5. Niven DJ, Mrklas KJ, Holodinsky JK, Straus SE, Hemmelgarn BR, Jeffs LP, et al. Towards understanding the de-adoption of low-value clinical practices: a scoping review. BMC Med. 2015;13:255. doi: 10.1186/s12916-015-0488-z. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

6. Gnjidic D, Elshaug A. De-adoption and its 43 related terms: harmonizing low-value care terminology. BMC Med. 2015;13(1):273. doi: 10.1186/s12916-015-0511-4. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

7. Colla CH, Mainor AJ, Hargreaves C, Sequist T, Morden N. Interventions aimed at reducing use of low-value health services: a systematic review. Med Care Res Rev. 2017;74(5):507–550. doi: 10.1177/1077558716656970. [PubMed] [CrossRef] [Google Scholar]

8. Montini T, Graham I. Entrenched practices and other biases: unpacking the historical, economic, professional, and social resistance to de-implementation. Implement Sci. 2015;10(1):24. doi: 10.1186/s13012-015-0211-7. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

9. Voorn VMA, Marang-van de Mheen PJ, van der Hout A, Hofstede SN, So-Osman C, van den Akker-van Marle ME, et al. The effectiveness of a de-implementation strategy to reduce low-value blood management techniques in primary hip and knee arthroplasty: a pragmatic cluster-randomized controlled trial. Implement Sci. 2017;12(1):72. [PMC free article] [PubMed]

10. Voorn VMA, van Bodegom-Vos L, So-Osman C. Towards a systematic approach for (de)implementation of patient blood management strategies. Transfus Med. 2018;28(2):158–167. doi: 10.1111/tme.12520. [PubMed] [CrossRef] [Google Scholar]

11. Parsons Leigh J, Niven DJ, Boyd JM, Stelfox HT. Developing a framework to guide the de-adoption of low-value clinical practices in acute care medicine: a study protocol. BMC Health Serv Res. 2017;17:1–9. doi: 10.1186/s12913-017-2005-x. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

12. Harris C, Allen K, Ramsey W, King R, Green S. Sustainability in health care by allocating resources effectively (SHARE) 11: reporting outcomes of an evidence-driven approach to disinvestment in a local healthcare setting. BMC Health Serv Res. 2018;18(1):386. doi: 10.1186/s12913-018-3172-0. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

13. Rosenberg A, Agiro A, Gottlieb M, Barron J, Brady P, Liu Y, et al. Early trends among seven recommendations from the choosing wisely campaign. JAMA Intern Med. 2015;175(12):1913–1920. doi: 10.1001/jamainternmed.2015.5441. [PubMed] [CrossRef] [Google Scholar]

14. Chalmers K, Badgery-Parker T, Pearson SA, Brett J, Scott IA, Elshaug AG. Developing indicators for measuring low-value care: mapping choosing wisely recommendations to hospital data. BMC Res Notes. 2018;11(1):163. doi: 10.1186/s13104-018-3270-4. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

15. McKay VR, Morshed AB, Brownson RC, Proctor EK, Prusaczyk B. Letting Go: conceptualizing intervention de-implementation in public health and social service settings. Am J Community Psychol. 2018;62:189–202. doi: 10.1002/ajcp.12258. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

16. Leider JP, Coronado F, Beck AJ, Harper E. Reconciling supply and demand for state and local public health staff in an era of retiring baby boomers. Am J Prev Med. 2018;54(3):334–340. doi: 10.1016/j.amepre.2017.10.026. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

17. Leider JP, Harper E, Shon JW, Sellers K, Castrucci BC. Job satisfaction and expected turnover among federal, state, and local public health practitioners. Am J Public Health. 2016;106(10):1782–1788. doi: 10.2105/AJPH.2016.303305. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

18. Hodge LM, Turner KM. Sustained implementation of evidence-based programs in disadvantaged communities: a conceptual framework of supporting factors. Am J Community Psychol. 2016;58(1–2):192–210. doi: 10.1002/ajcp.12082. [PubMed] [CrossRef] [Google Scholar]

19. Freedman AM, Kuester SA, Jernigan J. Evaluating public health resources: what happens when funding disappears? Prev Chronic Dis. 2013;10:E190. doi: 10.5888/pcd10.130130. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

20. Massatti RR, Sweeney HA, Panzano PC, Roth D. The de-adoption of innovative mental health practices (IMHP): why organizations choose not to sustain an IMHP. Admin Pol Ment Health. 2008;35(1–2):50–65. doi: 10.1007/s10488-007-0141-z. [PubMed] [CrossRef] [Google Scholar]

21. Furtado KS, Budd EL, Ying X, de Ruyter AJ, Armstrong RL, Pettman TL, et al. Exploring political influences on evidence-based non-communicable disease prevention across four countries. Health Educ Res. 2018;33(2):89–103. doi: 10.1093/her/cyy005. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

22. Conte KP, Marie Harvey S, Turner GR. “During early implementation you just muddle through”: factors that impacted a statewide arthritis program’s implementation. Transl Behav Med. 2017;7(4):804–815. doi: 10.1007/s13142-017-0478-0. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

23. Johns DM, Bayer R, Fairchild AL. Evidence and the politics of deimplementation: the rise and decline of the “counseling and testing” paradigm for HIV prevention at the US Centers for Disease Control and Prevention. Milbank Q. 2016;94(1):126–162. doi: 10.1111/1468-0009.12183. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

24. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8:15. doi: 10.1186/1748-5908-8-15. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

25. Scheirer MA. Linking sustainability research to intervention types. Am J Public Health. 2013;103(4):e73–e80. doi: 10.2105/AJPH.2012.300976. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

26. Tabak RG, Duggan K, Smith C, Aisaka K, Moreland-Russell S, Brownson RC. Assessing capacity for sustainability of effective programs and policies in local health departments. J Public Health Manage Pract. 2016;22(2):129–137. doi: 10.1097/PHH.0000000000000254. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

27. Panzano PC, Sweeney HA, Seffrin B, Massatti R, Knudsen KJ. The assimilation of evidence-based healthcare innovations: a management-based perspective. J Behav Health Serv Res. 2012;39(4):397–416. doi: 10.1007/s11414-012-9294-y. [PubMed] [CrossRef] [Google Scholar]

28. Gold R, Bunce AE, Cohen DJ, Hollombe C, Nelson CA, Proctor EK, et al. Reporting on the strategies needed to implement proven interventions: an example from a “real-world” cross-setting implementation study. Mayo Clin Proc. 2016;91(8):1074–1083. doi: 10.1016/j.mayocp.2016.03.014. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

29. Brownson RC, Baker EA, Deshpande AD, Gillespie KN. Evidence-based public health. 3rd ed. New York: Oxford University Press; 2018. [Google Scholar]

30. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:3.1–3.27. doi: 10.1146/annurev-publhealth-040617-014746. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

31. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27(5):417–421. [PubMed] [Google Scholar]

32. Brownson RC, Allen P, Duggan K, Stamatakis KA, Erwin PC. Fostering more-effective public health by identifying administrative evidence-based practices: a review of the literature. Am J Prev Med. 2012;43(3):309–319. doi: 10.1016/j.amepre.2012.06.006. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

33. Allen P, Jacob RR, Lakshman M, Best LA, Bass K, Brownson RC. Lessons learned in promoting evidence-based public health: perspectives from managers in state public health departments. J Community Health. 2018;43(5):856–863. doi: 10.1007/s10900-018-0494-0. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

34. Aarons GA, Ehrhart MG, Farahnak LR, Sklar M. Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annu Rev Public Health. 2014;35:255–274. doi: 10.1146/annurev-publhealth-032013-182447. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

35. Honore PA, Clarke RL, Mead DM, Menditto SM. Creating financial transparency in public health: examining best practices of system partners. J Public Health Manage Pract. 2007;13(2):121–129. doi: 10.1097/00124784-200703000-00007. [PubMed] [CrossRef] [Google Scholar]

36. Honore PA, Simoes EJ, Moonesinghe R, Kirbey HC, Renner M. Applying principles for outcomes-based contracting in a public health program. J Public Health Manage Pract. 2004;10(5):451–457. doi: 10.1097/00124784-200409000-00013. [PubMed] [CrossRef] [Google Scholar]

37. Steele CB, Rose JM, Townsend JS, Fonseka J, Richardson LC, Chovnick G. Comprehensive cancer control partners’ use of and attitudes about evidence-based practices. Prev Chronic Dis. 2015;12:E113. doi: 10.5888/pcd12.150095. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

38. DeGroff A, Carter A, Kenney K, Myles Z, Melillo S, Royalty J, et al. Using evidence-based interventions to improve cancer screening in the National Breast and cervical cancer early detection program. J Public Health Manage Pract. 2016;22(5):442–449. doi: 10.1097/PHH.0000000000000369. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

39. Public Health Accreditation Board. Guide to National Public Health Department Initial Accreditation. Alexandria, VA; 2015. [Google Scholar]

40. Public Health Foundation . Modified version of the core competencies for public health professionals. 2017. [Google Scholar]

41. Parks RG, Tabak RG, Allen P, Baker EA, Stamatakis KA, Poehler AR, et al. Enhancing evidence-based diabetes and chronic disease control among local health departments: a multi-phase dissemination study with a stepped-wedge cluster randomized trial component. Implement Sci. 2017;12(1):122. doi: 10.1186/s13012-017-0650-4. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

42. Reis RS, Duggan K, Allen P, Stamatakis KA, Erwin PC, Brownson RC. Developing a tool to assess administrative evidence-based practices in local public health departments. Front Public Health Serv Syst Res. 2014;3(3):Article 2. [Google Scholar]

43. Allen P, Sequeira S, Jacob RR, Hino AA, Stamatakis KA, Harris JK, et al. Promoting state health department evidence-based cancer and chronic disease prevention: a multi-phase dissemination study with a cluster randomized trial component. Implement Sci. 2013;8:141. doi: 10.1186/1748-5908-8-141. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

44. Mazzucca S, Parks RG, Tabak RG, Allen P, Dobbins M, Stamatakis KA, Brownson RC. Assessing organizational supports for evidence-based decision making in local public health departments in the United States: development and psychometric properties of a new measure. J Public Health Manag Pract. 2019;25(5):454–463. doi: 10.1097/PHH.0000000000000952. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

45. Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions and practices in public health and health care. Annu Rev Public Health. 2018;39:55–76. doi: 10.1146/annurev-publhealth-040617-014731. [PubMed] [CrossRef] [Google Scholar]

46. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17. doi: 10.1186/1748-5908-7-17. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

47. Dreisinger ML, Boland EM, Filler CD, Baker EA, Hessel AS, Brownson RC. Contextual factors influencing readiness for dissemination of obesity prevention programs and policies. Health Educ Res. 2012;27(2):292–306. doi: 10.1093/her/cyr063. [PubMed] [CrossRef] [Google Scholar]

48. Nadeem E, Ringle V. De-adoption of an evidence-based trauma intervention in schools: a retrospective report from an urban school district. Sch Ment Heal. 2016;8(1):132–143. doi: 10.1007/s12310-016-9179-y. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

49. Beets MW, Glenn Weaver R, Turner-McGrievy G, Saunders RP, Webster CA, Moore JB, et al. Evaluation of a statewide dissemination and implementation of physical activity intervention in afterschool programs: a nonrandomized trial. Transl Behav Med. 2017;7(4):690–701. doi: 10.1007/s13142-017-0484-2. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

50. Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14(1):57. doi: 10.1186/s13012-019-0910-6. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

51. Jacob RR, Baker EA, Allen P, Dodson EA, Duggan K, Fields R, et al. Training needs and supports for evidence-based decision making among the public health workforce in the United States. BMC Health Serv Res. 2014;14:564. doi: 10.1186/s12913-014-0564-7. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

52. Hahn EE, Munoz-Plaza C, Wang J, Garcia Delgadillo J, Schottinger JE, Mittman BS, et al. Anxiety, culture, and expectations: oncologist-perceived factors associated with use of nonrecommended serum tumor marker tests for surveillance of early-stage breast cancer. J Oncol Pract. 2017;13(1):e77–e90. doi: 10.1200/JOP.2016.014076. [PubMed] [CrossRef] [Google Scholar]



