
12.2 Disseminating your findings

Learning objectives.

  • Define dissemination
  • Describe how audience impacts the content and purpose of dissemination
  • Identify the options for formally presenting your work to other scholars
  • Explain the role of stakeholders in dissemination

Dissemination refers to “a planned process that involves consideration of target audiences and the settings in which research findings are to be received and, where appropriate, communicating and interacting with wider policy and…service audiences in ways that will facilitate research uptake in decision-making processes and practice” (Wilson, Petticrew, Calnan, & Nazareth, 2010, p. 91). In other words, dissemination of research findings involves careful planning, thought, consideration of target audiences, and communication with those audiences. Writing up results from your research and having others take notice are two entirely different propositions. In fact, the general rule of thumb is that people will not take notice unless you help and encourage them to do so.


Disseminating your findings successfully requires determining who your audience is, where your audience is, and how to reach them. When considering who your audience is, think about who is likely to take interest in your work. Your audience might include those who do not express enthusiastic interest but might nevertheless benefit from an awareness of your research. Your research participants and those who share some characteristics in common with your participants are likely to have some interest in what you’ve discovered in the course of your research. Other scholars who study similar topics are another obvious audience for your work. Perhaps there are policymakers who should take note of your work. Organizations that do work in an area related to the topic of your research are another possibility. Finally, any and all inquisitive and engaged members of the public represent a possible audience for your work.

Where your audience is should be fairly obvious. You know where your research participants are because you’ve studied them. You can find interested scholars on your campus, at professional conferences, and via publications such as professional organizations’ newsletters and scholarly journals. Policymakers include your state and federal representatives who, at least in theory, should be available to hear a constituent speak on matters of policy interest. Perhaps you’re already aware of organizations that do work in an area related to your research topic, but if not, a simple web search should help you identify possible organizational audiences for your work. Disseminating your findings to the public more generally could take any number of forms: a letter to the editor of the local newspaper, a blog, or even a post or two on your social media channels.

Finally, determining how to reach your audiences will vary according to which audience you wish to reach. Your strategy should be determined by the norms of the audience. For example, scholarly journals provide author submission instructions that clearly define requirements for anyone wishing to disseminate their work via a particular journal. The same is true for newspaper editorials; check your newspaper’s website for details about how to format and submit letters to the editor. If you wish to reach out to your political representatives, a call to their offices or a simple web search should tell you how to do so.

Disseminating findings involves the following three steps:

  • Determine who your audience is
  • Identify where your audience is
  • Discover how best to reach your audience

Tailoring your message to your audience

Once you are able to articulate with whom you wish to share your research, you must decide what to share. While you would never alter your actual findings for different audiences, understanding who your audience is will help you frame your research in a way that is most meaningful to that audience. Certainly, the most obvious candidates with whom you’ll share your work are other social scientists. If you are conducting research for a class project, your main “audience” will probably be your professor. Perhaps you’ll also share your work with other students in the class.

What is more challenging, and possibly a little scary, is sharing your research with the wider world. Sharing with professional audiences is designed to bring your work to the attention of other social scientists and academics, but also other social workers or professionals who practice in areas related to your research. If you are sharing with other scientists, they are probably interested in your study’s methods, particularly statistical tests or data analysis frameworks. Sharing your work with this audience will require you to talk about your methods and data in a different way than you would with other audiences.  Professional social workers are more likely to want to hear about the practice and policy implications of your research.


Scholars take extraordinary care not to commit plagiarism. Presenting someone else’s words or ideas as if they are your own is among the most egregious transgressions a scholar can commit. Indeed, plagiarism has ended many careers (Maffly, 2011) [1] and many students’ opportunities to pursue degrees (Go, 2008). [2] Take this very seriously. If you feel a little afraid and paranoid after reading this warning, consider it a good thing, and let it motivate you to take extra care to ensure that you are not plagiarizing the work of others.

Peer-reviewed journal articles

Researchers commonly submit manuscripts to peer-reviewed academic journals, which are read by other researchers, students, and practitioners. Peer review is a formal process in which other scholars review a manuscript to ensure it is of high quality before publication. A manuscript may be rejected by a journal after being submitted. Often, this is an opportunity for the researchers to correct problems with the manuscript or to find a journal that is a better fit for their findings. Even when a manuscript is accepted for publication, the peer reviewers will usually request improvements before it can be published. The process of peer review helps improve the quality of journal articles and research.

Formal presentations

Getting your work published in a journal is challenging and time-consuming, as journals receive many submissions but have limited room to publish. Researchers often supplement their publications with formal presentations, which are held to rigorous standards but offer more frequent and accessible opportunities to share research. Presenting your research is also an excellent way to get feedback on your work. Professional social workers often present to their peers as preparation for more formal writing and publishing of their work. Presentations might be formal talks, delivered individually or as part of a panel at a professional conference; less formal roundtable discussions, another common conference format; or posters displayed in a specially designated area.


Presentations to stakeholders

While it is important to let academics and scientists know about the results of your research, it is equally important to identify stakeholders who would benefit from knowing them. Stakeholders are individuals or groups who have an interest in the outcome of the study you conduct. Instead of the formal presentations or journal articles you may use to engage academics or fellow researchers, stakeholders will expect a presentation that is engaging, understandable, and immediately relevant to their lives and practice. Informal presentations are no less rigorous than formal presentations, but they do not follow a strict format.

Disseminating to the general public

While there are a seemingly infinite number of informal audiences, there is one more that is worth mentioning—the general public. Part of our job as social workers is to shine a light on areas of social injustice and raise the consciousness of the public as a whole. Researchers commonly share their results with popular media outlets to reach a broader audience with their study’s conclusions. Unfortunately, journalism about scientific results can sometimes overstate the degree of certainty researchers have in their conclusions. Consequently, it is important to review the journalistic standards of the media outlet and reporter you approach by examining their previous work, and to clarify how much control you will have over the final product.


Reports written for public consumption differ from those written for scholarly consumption. As noted elsewhere in this chapter, knowing your audience is crucial when preparing a report of your research. What are they likely to want to hear about? What portions of the research do you feel are crucial to share, regardless of the audience? What level of knowledge do they have about your topic? Answering these questions will help you determine how to shape any written reports you plan to produce. In fact, some outlets answer these questions for you, as in the case of newspaper editorials where rules of style, presentation, and length will dictate the shape of your written report.

Whoever your audience, don’t forget what it is that you are reporting: social scientific evidence. Take seriously your role as a social scientist and your place among peers in your discipline. Present your findings as clearly and as honestly as you possibly can; pay appropriate homage to the scholars who have come before you, even while you raise questions about their work; and aim to engage your readers in a discussion about your work and about avenues for further inquiry. Even if you won’t ever meet your readers face-to-face, imagine what they might ask you upon reading your report, imagine your response, and provide some of those details in your written report.

Key Takeaways

  • Disseminating findings takes planning and careful consideration of your audiences.
  • The dissemination process includes determining the who, where, and how of reaching your audiences.
  • Plagiarism is among the most egregious academic transgressions a scholar can commit.
  • In formal presentations, include your research question, methodological approach, major findings, and a few final takeaways.
  • Reports for public consumption usually contain fewer details than reports for scholarly consumption.
  • Keep your role and obligations as a social scientist in mind as you write research reports.
  • Dissemination- “a planned process that involves consideration of target audiences and the settings in which research findings are to be received and, where appropriate, communicating and interacting with wider policy and…service audiences in ways that will facilitate research uptake in decision-making processes and practice” (Wilson, Petticrew, Calnan, & Nazareth, 2010, p. 91)
  • Plagiarism- presenting someone else’s words or ideas as if they are your own

Image attributions

microphone by Skitterphoto CC-0

woman man teamwork by rawpixel CC-0

audience by MariSmithPix CC-0

feedback by surdumihail CC-0

  • As just a single example, take note of this story about the pattern of plagiarism that cost a University of Utah scholar his job. ↵
  • As a single example (of many) of the consequences for students committing plagiarism, see this article about two students kicked off Semester at Sea for plagiarism. ↵

Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Matthew DeCarlo

Chapter Outline

  • Ethical responsibility and cultural respectfulness (8 minute read)
  • Critical considerations (5 minute read)
  • Informing your dissemination plan (11 minute read)
  • Final product taking shape (10 minute read)

Content warning: Examples in this chapter contain references to research as a potential tool to stigmatize or oppress vulnerable groups, mistreatment and inequalities experienced by Native American tribes, sibling relationships, caregiving, child welfare, criminal justice and recidivism, first generation college students, Covid-19, school culture and race, health (in)equity, physical and sensory abilities, and transgender youth.

Your sweat and hard work have paid off! You’ve planned your study, collected your data, and completed your analysis. But alas, no rest for the weary student researcher. Now you need to share your findings. As researchers, we generally have some ideas where and with whom we desire to share our findings, but these plans may evolve and change during our research process. Communicating our findings with a broader audience is a critical step in the research process, so make sure not to treat this like an afterthought. Remember, research is about making a contribution to collective knowledge-building in the area of study that you are interested in. Indeed, research is of no value if there is no audience to receive it. You worked hard…get those findings out there!


In planning for this phase of research, we can consider a variety of methods for sharing our study findings. Among other options, we may choose to write our findings up as an article in a professional journal, provide a report to an organization, give testimony to a legislative group, or create a presentation for a community event. We will explore these options in a bit more detail below in section 21.4 where we talk more about different types of qualitative research products. We also want to think about our intended audience.

For your research, answer these two key questions as you are planning for dissemination:

  • Who are you targeting to communicate your findings to? In other words, who needs to hear the results of your study?
  • What do you hope your audience will take away after learning about your study?

Of course, we can’t know how our findings will be received, but we will want to anticipate what expectations our target audience might have and use this information to tailor our message in a way that is clear, thorough, and consistent with our data (keep it honest!). We will further discuss different audiences you might want to reach with your research and a few considerations for each type of audience that may shape your dissemination plan. Before we tackle these sections, we are going to take some time to think through ethical and critical considerations that should be at the forefront of your mind as you plan for the dissemination of your research. Remember, your research is NOT a static, inanimate object. It is a reflection of your participants. Furthermore, others will review, interpret, and potentially form opinions based on your research. Your research has real implications for individuals, groups, communities, agencies, programs, etc.

21.1 Ethical responsibility and cultural respectfulness

Learning objectives.

Learners will be able to…

  • Identify key ethical considerations in developing their qualitative research dissemination plan
  • Conceptualize how research dissemination may impact diverse groups, both presently and into the future

Have you ever been misrepresented or portrayed in a negative light? It doesn’t feel good. It especially doesn’t feel good when the person portraying us has power, control and influence. While you might not feel powerful, research can be a powerful tool, and can be used and abused for many ends. Once research is out in the world, it is largely out of our control, so we need to approach dissemination with care. Be thoughtful about how you represent your work and take time to think through the potential implications it may have, both intended and unintended, for the people it represents.

As alluded to in the paragraph above, research comes with hefty responsibilities. You aren’t off the hook if you are conducting quantitative research. While quantitative research deals with numbers, these numbers still represent people and their relationships to social problems. However, with qualitative research, we are often dealing with a smaller sample and trying to learn more from them. As such, our job often carries additional weight as we think about how we will represent our findings and the people they reflect. Furthermore, we probably hope that our research has an impact; that in some way, it leads to change around some issue. This is especially true as social work researchers. Our research often deals with oppressed groups, social problems, and inequality. However, it’s hard to predict the implications that our research may have. This suggests that we need to be especially thoughtful about how we present our research to others.

Among the core values of social work are respecting the inherent dignity and worth of each person, practicing with integrity, and behaving in a trustworthy manner. [1] As social work researchers, to uphold these values, we need to consider how we are representing the people we are researching. Our work needs to honestly and accurately reflect our findings, but it also needs to be sensitive and respectful to the people it represents. In Chapter 6 we discussed research ethics and introduced the concept of beneficence, or the idea that research needs to support the welfare of participants. Beneficence is particularly important as we think about our findings becoming public and how the public will receive, interpret, and use this information. Thus, both as social workers and researchers, we need to be conscientious of how dissemination of our findings takes place.

As you think about the people in your sample and the communities or groups to which they belong, consider some of these questions:

  • How are participants being portrayed in my research?
  • What characteristics or findings are being shared or highlighted in my research that may directly or indirectly be associated with participants?
  • Have the groups that I am researching been stigmatized, stereotyped, and/or misrepresented in the past? If so, how does my research potentially reinforce or challenge these representations?
  • How might my research be perceived or interpreted by members of the community or group it represents?
  • In what ways does my research honor the dignity and worth of participants?


Qualitative research often has a voyeuristic quality to it, as we are seeking a window into participants’ lives by exploring their experiences, beliefs, and values. As qualitative researchers, we have a role as stewards or caretakers of data. We need to be mindful of how data are gathered, maintained, and most germane to our conversation here, how data are used. We need to craft research products that honor and respect individual participants (micro), our collective sample as a whole (meso), and the communities that our research may represent (macro).

As we prepare to disseminate our findings, our ethical responsibilities as researchers also involve honoring the commitments we have made during the research process. We need to think back to our early phases of the research process, including our initial conversations with research partners and other stakeholders who helped us to coordinate our research activities. If we made any promises along the way about how the findings would be presented or used, we need to uphold them here. Additionally, we need to abide by what we committed to in our informed consent. Part of our informed consent involves letting participants know how findings may be used. We need to present our findings according to these commitments. We of course also have a commitment to represent our research honestly.

As an extension of our ethical responsibilities as researchers, we need to consider the impact that our findings may have, as well as our need to be socially conscientious researchers. Scouts are taught to leave their campsite in a better state than when they arrived, and I think it is helpful to think of research in these terms. Think about the group(s) that may be represented by your research; what impact might your findings have for the lives of members of this group? Will it leave their lives in a better state than before you conducted your research? As a responsible researcher, you need to be thoughtful, aware, and realistic about how your research findings might be interpreted and used by others. As social workers, while we hope that findings will be used to improve the lives of our clients, we can’t ignore that findings can also be used to further oppress or stigmatize vulnerable groups; research is not apolitical, and we should not be naive about this.

It is worth mentioning the concept of sustainable research here. Sustainable research involves conducting research projects that have a long-term, sustainable impact for the social groups we work with. As researchers, this means that we need to actively plan for how our research will continue to benefit the communities we work with into the future. This can be supported by staying involved with these communities, routinely checking in and seeking input from community members, and making sure to share our findings in ways that community members can access, understand, and utilize. Nate Olson provides a very inspiring TED Talk about the importance of building resilient communities. As you consider your research project, think about it in these terms.

Key Takeaways

  • As you think about how best to share your qualitative findings, remember that these findings represent people. As such, we have a responsibility as social work researchers to ensure that our findings are presented in honest, respectful, and culturally sensitive ways.
  • Since this phase of research deals with how we are going to share our findings with the public, we need to actively consider the potential implications of our research and how it may be interpreted and used.
  • Is your work, in some way, helping to contribute to a resilient and sustainable community? It may not be a big tangible project as described in Olson’s TED Talk, but is it providing a resource for change and growth to a group of people, either directly or indirectly? Does it promote sustainability amongst the social networks that might be impacted by the research you are conducting?

21.2 Critical considerations

Learning objectives.

Learners will be able to…

  • Identify how issues of power and control are present in the dissemination of qualitative research findings
  • Begin to examine and account for their own role in the qualitative research process, and address this in their findings

This is the part of our research that is shared with the public, and because of this, issues like reciprocity, ownership, and transparency are relevant. We need to think about who will have access to the tangible products of our research and how that research will get used. As researchers, we likely benefit directly from research products; perhaps they help us to advance our career, obtain a good grade, or secure funding. Our research participants often benefit indirectly, by advancing knowledge about a topic that may be relevant or important to them, but often don’t experience the same direct, tangible benefits that we do. However, a participatory perspective challenges us to involve community members from the outset in discussions about what changes would be most meaningful to their communities and what research products would be most helpful in accomplishing those changes. This is especially important as it relates to the role of research as a tool to support empowerment.

Ownership of research products is also important as an issue of power and control. We will discuss a range of venues for presenting your qualitative research, some of which are more amenable to shared ownership than others. For instance, if you are publishing your findings in an academic journal, you will need to sign an agreement with that publisher about how the information in the article can be used and who has access to it. Similarly, if you are presenting findings at a national conference, travel and other conference-related expenses and requirements may make access to these research products prohibitive. In these instances, the researcher and the organization(s) they negotiate with (e.g. the publishing company, the conference organizing body) share control. However, disseminating qualitative findings in a public space, public record, or community-owned resource means that more equitable ownership might be negotiated. An equitable or reciprocal arrangement might not always be possible, however. Transparency about who owns the products of research is important if you are working with community partners. To support this, it is important to establish a memorandum of understanding (MOU) or memorandum of agreement (MOA) early in the research process. This document should clearly articulate roles, responsibilities, and a number of other details, such as ownership of research products, between the researcher and the partnering group(s).

Resources for learning more about MOUs and MOAs

Center for Community Health and Development, University of Kansas. (n.d.). Section 9. Understanding and writing contracts and memoranda of agreement.

Collaborative Center for Health Equity, University of Wisconsin-Madison. (n.d.). Standard agreement for research with community organizations.

Office of Research, UC Davis. (n.d.). Research MOUs.

Office of Research, The University of Texas at Dallas. (n.d.). Types of agreements.

In our discussion about qualitative research, we have also frequently identified the need for the qualitative researcher to account for their role throughout the research process. Part of this accounting can specifically apply to qualitative research products. This is our opportunity to demonstrate to our audience that we have been reflective throughout the course of the study and to show how this has influenced the work we did. Some qualitative research studies include a positionality statement within the final product. This is often toward the beginning of the report or presentation and includes information about the researcher(s)’ identity and worldview, particularly details relevant to the topic being studied. This can include why you are invested in the study, what experiences have shaped how you have come to think about the topic, and any positions or assumptions you make with respect to the topic. This is another way to encourage transparency. It can also be a means of relinquishing, or at least acknowledging, some of our power in the research process, as it provides one modest way for us, as the researcher, to be a bit more exposed or vulnerable, although this is a far cry from making the risks of research equitable between the researcher and the researched.

The positionality statement can also be a place to integrate our identities: who we are as an individual, a researcher, and a social work practitioner. Granted, for some of us that might be volumes, but we need to condense this down to a brief but informative statement—don’t let it eclipse the research! It should be just enough to inform the audience and allow them to draw their own conclusions about who is telling the story of this research and how well they can be trusted. This student provides a helpful discussion of the positionality statement that she developed for her study.
Reviewing your reflexive journal (discussed in Chapter 20 as a tool to enhance qualitative rigor) can help in identifying underlying assumptions and positions you might have grounded in your reactions throughout the research process. These insights can be integrated into your positionality statement. Please take a few minutes to watch this informative video of a student further explaining what a positionality statement is and providing a good example of one.

  • The products of qualitative research often benefit the researcher disproportionately when compared to research participants or the communities they represent. Whenever possible, we can seek out ways to disseminate research in ways that addresses this imbalance and supports more tangible and direct benefits to community members.
  • Openly positioning ourselves in our dissemination plans can be an important way for qualitative researchers to be transparent and account for our role.

21.3 Informing your dissemination plan

Learning objectives.

Learners will be able to…

  • Appraise important dimensions of planning that will inform their research dissemination plan, including audience, purpose, context, and content
  • Apply this appraisal to key decisions they will need to make when designing their qualitative research product(s)

This section will offer you a general overview of points to consider as you form the dissemination plan for your research. We will start with considerations regarding your audience, then turn our attention to the purpose of your research, and finally consider the importance of attending to both content and context as you plan for your final research product(s).

Perhaps the most important consideration you have as you plan how to present your work is your audience. Research is a product that is meant to be consumed, and because of this, we need to be conscious of our consumers. We will speak more extensively about knowing your audience in Chapter 24, which is devoted to both sharing and consuming research. Regardless of who your audience is (e.g. community members, classmates, research colleagues, practicing social workers, a state legislator), there will be common elements that will be important to convey. While the way you present them will vary greatly according to who is listening, Table 21.1 offers a brief review of the elements that you will want your audience to leave with.

Table 21.1 Elements to consider when planning for your audience
Aim: What did my research aim to accomplish, and why is it important?
Process: How did I go about conducting my research, and what did I do to ensure quality?
Concepts: What are the main ideas someone needs to know to make sense of my topic, and how are they integrated into my research plan?
Findings: What were my results, and under what circumstances are they valid?
Connection: How are my findings connected to what else we know about this topic, and why are they important?

Once we determine who our audience is, we can further tailor our dissemination plan to that specific group. Of course, we may be presenting our findings in more than one venue, and in that case, we will have multiple plans that will meet the needs of each specific audience.

It’s a good idea to pitch your plan first. However you plan to present your findings, you will want to have someone preview it before you share it with a wider audience. Ideally, the previewer will be a person from your target audience, or at least someone who knows that audience well. Getting feedback can go a long way in helping us with the clarity with which we convey our ideas and the impact they have on our audience. This might involve giving a practice speech, having someone review your article or report, or practicing a one-on-one discussion of your research, as you would have at a poster presentation. Let’s talk about some specific audiences that you may be targeting and their unique needs or expectations.

Below I will go through some brief considerations for each of these different audiences. I have tried to focus this discussion on elements specific to qualitative studies, since we revisit this topic in Chapter 24.


Research community

When presenting your findings to an academic audience or other research-related community, it is probably safe to make a few assumptions. This audience is likely to have a general understanding of the research process and what it entails. For this reason, you will have to do less explaining of research-related terms and concepts. However, compared to other audiences, you will probably have to provide a bit more detail about the steps you took in your research process, especially as they relate to qualitative rigor, because this group will want to know how your research was carried out and how you arrived at your decisions throughout the research process. Additionally, you will want to make a clear connection between the qualitative design you chose and your research question: a methodological justification. Researchers will also want to have a good idea of how your study fits within the wider body of scientific knowledge it relates to and what future studies you feel are needed based on your findings. You are likely to encounter this audience if you are disseminating through a peer-reviewed journal article, presenting at a research conference, or giving an invited talk in an academic setting.

Professional community

We often find ourselves presenting our research to other professionals, such as social workers in the field. While this group may have some working knowledge of research, they are likely to be much more focused on how your research relates to the work they do and the clients they serve. While you will need to convey your design accurately, this audience is most likely to be invested in what you learned and what it means (especially for practice). You will want to set the stage for the discussion by clearly expressing your connection to and passion for the topic (a positionality statement might be particularly helpful here), what we know about the issue, and why it is important to their professional lives. You will want to give good contextual information for your qualitative findings so that practitioners can judge whether these findings might apply to the people they work with. Also, since social work practitioners generally place emphasis on person-centered practice, hearing the direct words of participants (quotes) whenever possible is likely to be impactful as we present qualitative results. Where academics and researchers will want to know about implications for future research, professionals will want to know how this information could help transform services in the future or help them understand the clients they serve.

Lay community

The lay community consists of people who don’t necessarily have specialized training or knowledge of the subject, but who may be interested or invested for some other reason; perhaps the issue you are studying affects them or a loved one. Since this is the general public, you should expect to spend the most time explaining scientific knowledge, research processes, and terminology in accessible terms. Furthermore, you will want to invest some time establishing a personal connection to the topic (as I discussed for the professional community). They will likely want to know why you are interested and why you are a credible source for this information. While this group may not be experts on research, as potential members of the group(s) that you may be researching, you do want to remember that they are experts in their own community. As such, you will want to be especially mindful of approaching how you present findings with a sense of cultural humility (although hopefully you have this in mind across all audiences). It will be good to discuss what steps you took to ensure that your findings accurately reflect what participants shared with you (rigor). You will want to be most clear with this group about what they should take away, without overstating your findings.

Regardless of who your audience is, remember that you are an ambassador. You may represent a topic, a population, an organization, or the whole institution of research, or any combination of these. Make sure to present your findings honestly, ethically, and clearly. Furthermore, I’m assuming that the research you are conducting is important because you have spent a lot of time and energy to arrive at your findings. Make sure that this importance comes through in your dissemination. Tell a compelling story with your research!  

Who needs to hear the message of your qualitative research?

What will they know in advance?

  • Example. If you are presenting your research about caregiver fatigue to a caregiver support group, you won’t need to spend time describing the role of caregivers because your audience will have lived experience.

What terms will they likely already be familiar with and which ones will require additional explanation?

  • Example. If you are presenting your research findings to a group of academics, you wouldn’t have to explain what a sampling frame is, but if you are sharing it with a group of community members from a local housing coalition, you will need to help them understand what this is (or maybe use a phrase that is more meaningful to them).

What will your audience be looking for?

  • Example. If you are speaking to a group of child welfare workers about your study examining trauma-informed communication strategies, they are probably going to want to know how these strategies might impact the work that they do.

What is most impactful or valued by your audience?

  • Example. If you are sharing your findings at a meeting with a council member, it may be especially meaningful to share direct quotes from constituents.

Being clear about the purpose of your research from the outset is immeasurably helpful. What are you hoping to accomplish with your study? We can certainly look to the overarching purpose of qualitative research, that being to develop/expand/challenge/explore understanding of some topic. But, what are you specifically attempting to accomplish with your study? Two of the main reasons we conduct research are to raise awareness about a topic and to create change around some issue. Let’s say you are conducting a study to better understand the experience of recidivism in the criminal justice system. This is an example of a study whose main purpose is to better understand and raise awareness around a particular social phenomenon (recidivism). On the other hand, you could also conduct a study that examines the use of strengths-based strategies by probation officers to reduce recidivism. This would fall into the category of research promoting a specific change (the use of strengths-based strategies among probation officers). I would wager that your research topic falls into one of these two very broad categories. If this is the case, how would you answer the corresponding questions below?

Are you seeking to raise awareness of a particular issue with your research? If so,

  • Whose awareness needs raising? 
  • What will “speak” most effectively to this group? 
  • How can you frame your research so that it has the most impact?

Are you seeking to create a specific change with your research? If so,

  • What will that change look like? 
  • How can your research best support that change occurring? 
  • Who has the power to create that change and what will be most compelling in reaching them?

How you answer these questions will help to inform your dissemination plan. For instance, your dissemination plan will likely look very different if you are trying to persuade a group of legislators to pass a bill versus trying to share a new model or theory with academic colleagues. Considering your purposes will help you to convey the message of your research most effectively and efficiently. We invest a lot of ourselves in our research, so make sure to keep your sights focused on what you hope to accomplish with it!

Content and context

As a reminder, qualitative research often has a dual responsibility for conveying both content and context. You can think of content as the actual data that is shared with us or that we obtain, while context is the circumstances under which that data sharing occurs. Content conveys the message and context provides us the clues with which we can decode and make sense of that message.

While quantitative research may provide some contextual information, especially in regard to describing its sample, context rarely receives as much attention or detail as it does in qualitative studies. Because of this, you will want to plan for how you will attend to both the content and the context of your study as you plan your dissemination.

  • Research is an intentional act; you are trying to accomplish something with it. To be successful, you need to approach dissemination planfully.
  • Planning the most effective way of sharing our qualitative findings requires looking beyond what is convenient or even conventional, and requires us to consider a number of factors, including our audience, the purpose or intent of our research, and the nature of both the content and the context that we are trying to convey.

21.4 Final product taking shape

  • Evaluate the various means of disseminating research and consider their applicability for your research project
  • Determine appropriate building blocks for designing your qualitative research product

As we have discussed, qualitative research takes many forms. It should then come as no surprise that qualitative research products also come in many different packages. To help guide you as the final products of your research take shape, we will discuss some of the building blocks or elements that you are likely to include as tools in sharing your qualitative findings. These are the elements that will allow you to flesh out the details of your dissemination plan.

Building blocks

There are many building blocks that are at our disposal as we formulate our qualitative research product(s). Quantitative researchers have charts, graphs, tables, and narrative descriptions of numerical output. These tools allow the quantitative researcher to tell the story of their research with numbers. As qualitative researchers, we are tasked with telling the story of our research findings as well, but our tools look different. While this isn’t an exhaustive list of tools that are at our disposal as qualitative researchers, a number of commonly used elements in sharing qualitative findings are discussed here. Depending on your study design and the type of data you are working with, you may use one or some combination of the building blocks discussed below.

Themes are a very common element when presenting qualitative research findings. They may be called themes, but they may also go by other names: categories, dimensions, main ideas, etc. Themes offer the qualitative researcher a way to share ideas that emerged from the analysis and were shared by multiple participants or across multiple sources of data. They help us to distill the large amounts of qualitative data that we might be working with into more concise and manageable pieces of information that are more consumable for our audience. When integrating themes into your qualitative research product, you will want to offer your audience: the title of the theme (try to make this as specific and meaningful as possible), a brief description or definition of the theme, any accompanying dimensions or sub-themes that may be relevant, and examples (when appropriate).

Quotes offer you the opportunity to share participants’ exact words with your audience. Of course, we can’t only rely on quotes, because we need to knit the information that is shared into one cohesive description of our findings and an endless list of quotes is unlikely to support this. Because of this, you will want to be judicious in selecting your quotes. Choose quotes that can stand on their own, best reflect the sentiment that is being captured by the theme or category of findings that you are discussing, and are likely to speak to and be understood by your audience. Quotes are a great way to help your findings come alive or to give them greater depth and significance. If you are using quotes, be sure to do so in a balanced manner—don’t only use them in some sections but not others, or use a large number to support one theme and only one or two for another. Finally, we often provide some brief demographic information in a parenthetical reference following a quote so our reader knows a little bit about the person who shared the information. This helps to provide some context for the quote.

Kohli and Pizarro (2016) [2] provide a good example of a qualitative study using quotes to exemplify their themes. In their study, they gathered data through short-answer questionnaires and in-depth interviews from racial-justice oriented teachers of Color. Their study explored the experiences and motivations of these teachers and the environments in which they worked. As you might guess, the words of the teacher-participants were especially powerful and the quotes provided in the results section were very informative and important in helping to fulfill the aim of the research study. Take a few minutes to review this article. Note how the authors provide a good amount of detail as to what each of the themes meant and how they used the quotes to demonstrate and support each theme. The quotes help bring the themes to life and anchor the results in the actual words of the participants (suggesting greater trustworthiness in the findings). 

Figure 21.1 below offers a more extensive example of a theme being reported along with supporting quotes, from a study conducted by Karabanow, Gurman, and Naylor (2012). [3] This study focused on the role of work activities in the lives of “Guatemalan street youth”. One of the important themes had to do with the intersection of work and identity for this group. In this example, brief quotes are used within the body of the theme’s description, along with longer quotes (full sentences) that demonstrate important aspects of the description.

Figure 21.1. Example theme from Karabanow et al. 2012

Work, be it formal or informal, is beneficial for street youth not only for its financial benefits; but it helps youth to develop and rebuild their sense of self, to break away from destructive patterns, and ultimately contributes to any goals of exiting street life. Although many of the participants were aware that society viewed them as “lazy” or “useless,” they tended to see themselves as contributing members of society earning a valid and honest living. One participant said, “Well, a lot of people say, right? ‘The kid doesn’t want to do anything. Lazy kid’ right? And I wouldn’t like for people to say that about me, I’d rather do something so that they don’t say that I’m lazy. I want to be someone important in life.” This youth makes an interesting and important connection in this statement: he intrinsically associates “being someone” with “doing something” – he accepts the work-based identity that characterizes much of contemporary capitalist society. Many of the interviews subtly enforced this idea that in the informal economy, as in the formal economy, “who one is” is largely dependent on “what one does.” This demonstrates two important ideas: that street youth working in the informal sector are surprisingly ‘mainstream’ in their underlying beliefs and ambitions, and that work – be it formal or informal – plays a crucial role in allowing street youth, who have often dealt with trauma, isolation and low self-esteem, to rebuild a sense of self-worth.

Many of the youth involved in this study dream of futures that echo traditional ideals: “to have my family all together…to have a home, or rather to have a nice house…to have a good job.” Several explained that this future is unattainable without hard work; many viewed those who “do nothing” as people who “waste their time” and think that “your life isn’t important to you.” On the other hand, those who value their lives and wish to attain a future of “peace and tranquility” must “look for work, that’s what needs to be done to have that future because if God allows it, in the future maybe you can find a partner, form a family and live peacefully.” For these youth, working – be it in the formal or informal sector – is essential to a feeling of “moving forward.” This movement forward begins with self-esteem. Although the focus of this study was not the troubled pasts of the participants, many alluded to the difficulties they have faced and the various traumas that forced them onto the streets. Several of the youth noted that working was a catalyst in rebuilding positive feelings about oneself: one explained, “[When I’m working,] I feel happy, powerful…Sometimes when I go out to sell, I feel happy.” Another said:

For me, when I’m working I feel free because I know that I’m earning my money in an honest way, not stealing right. Because when you’re stealing, you don’t feel free, right? Now when you’re working, you’re free, they can’t arrest you or anything because you’re selling. Now if you’re stealing and everything, you don’t feel free. But when you’re selling you feel free, out of danger.

This feeling of being “free” or “powerful” rests on the idea that money is “earned” and not stolen; being able to earn money is associated with being “someone,” with being a valid and contributing member of society.

In addition, work helps street youth to break away from destructive patterns. One participant spoke of her experience working full time at a café:

For me, working means to be busy, to not just be there….It helps us meet other people, like new people and not to be always in the same scene. Because if you’re not busy, you feel really bored and you might want to, I don’t know, go back to the same thing you were in before…you even forget your problems because you’re keeping busy, you’re talking to other people, people who don’t know you.

For this participant, a formal job was beneficial in that it supplied her with a daily routine and allowed her to interact with non-street people – these factors helped to separate her from the destructive lifestyle of the street, and helped her to “move forward.” Although these benefits are indeed most obvious with formal employment, many participants spoke of the positive effects of informal work as well, although to varying degrees. In Guatemala, since the informal economy accounts for over half of the country’s GNP, there is a wide range of under-the-table informal work available. These jobs frequently bring youth out of the street context and, therefore, provide similar benefits to a formal job, as described by the above participant. As to informal work that takes place on the street, such as hawking or car watching, the benefits of work are present, although to a different degree. Even hawking, for example, gives young workers a routine and a chance to interact with non-street people. As one young man continuously emphasized throughout his interview, “work helps you to keep your mind busy, to be in another mind-set, right? To not be thinking the same thing all the time: ‘Oh, drugs, drugs, drugs…’” As explained earlier, the code of the hawking world dictates that vendors cannot sell while high – just like a formal job, hawking helps to distance youth workers from some of their destructive street habits. However, as one participant thoughtfully noted, it is difficult to break these habits when one is still highly embroiled in street culture; “it depended on who was around me because if they were in the same problems as I was, I stopped working and I started doing the same as they did. And if I was surrounded by serious people, then I got my act together.” While certain types of informal work, like cleaning or waitressing, can help youth to distance themselves from destructive patterns, others, such as car watching and selling, may not do enough to separate youth from their peers. While the routine and activity do have positive effects, they often are not sufficient.

Among some of the participants, there was the sentiment that informal work could function as a transition stage towards exiting the street; it could “change your life.” One participant said “there are lots of vendors who’ve gotten off the streets, if you make an effort, you go out to sell, you can get off the street. Like myself, when I was selling, I mean working, I got off the street, I went home and I managed to stay there quite a long time.” One might credit this success to several factors: first, the money the seller may have been able to save and accumulate; second, the routine of selling may have helped the seller to break from destructive patterns, such as drug use, and also prepared the seller for the demands of formal sector employment; and, thirdly, selling may have enabled the seller to develop the necessary confidence and sense of self to attempt exiting the street.

Pictures or videos

If our data collection involves the use of photographs, drawings, videos, or other artistic expressions of participants, or the collection of artifacts, we may very well include selections of these in our dissemination of qualitative findings. In fact, if we failed to include them, it would seem a bit inauthentic. For the same reason we include quotes as direct representations of participants’ contributions, it is a good idea to provide direct reference to other visual forms of data that support or demonstrate our findings. We might incorporate narrative descriptions of these elements or quotes from participants that help to interpret their meaning. Integrating pictures and quotes is especially common if we are conducting a study using a Photovoice approach, as discussed in Chapter 17, where a main goal of the research technique is to bring together participant-generated visuals with collaborative interpretation.

Take some time to explore the website linked here. It is the webpage for The Philadelphia Collaborative for Health Equity’s PhotoVoice Exhibit Gallery and offers a good demonstration of research that brings together pictures and text.

Graphic or figure

Qualitative researchers will often create a graphic or figure to visually reflect how the various pieces of their findings come together or relate to each other. Using a visual representation can be especially compelling for people who are visual learners. When you are using a visual representation, you will want to: label all elements clearly; include all the components or themes that are part of your findings; pay close attention to where you place and how you orient each element (as their spatial arrangement carries meaning); and finally, offer a brief but informative explanation that helps your reader interpret your representation. A special subcategory of visual representation is the process diagram or model. These are especially helpful for laying out a sequential relationship within your findings or a model that has emerged from your analysis. A process or model will show the ‘flow’ of ideas or knowledge in our findings: the logic of how one concept proceeds to the next and what each step of the model entails.

Noonan and colleagues (2004) [4] conducted a qualitative study that examined the career development of high-achieving women with physical and sensory disabilities. Through the analysis of their interviews, they built a model of career development based on these women’s experiences, along with a figure that helps to conceptually illustrate the model. They placed the ‘dynamic self’ in the center, surrounded by a dotted (permeable) line, with a number of influences outside the line (i.e., family influences, disability impact, career attitudes and behaviors, sociopolitical context, developmental opportunities, and social support) and arrows directed inward and outward between each influence and the dynamic self to demonstrate mutual influence/exchange between them. The image is included in the results section of their study; it brings together “core categories” and demonstrates how they work together in the emergent theory and how they relate to each other. Because so many of our findings are dynamic like these, showing interaction and exchange between ideas, figures can be especially helpful in conveying this as we share our results.

Figure description: a process model titled “Restructuring at work” (from Ede & Starrin, 2014). A series of boxes connected by arrows leading from one to the next: “unresolved work-related conflicts”; “shaming process” (interpersonal shaming; intrapersonal shaming); “making efforts to please” (increased work intensity; overtime; sickness presenteeism); “mental overload” (chronic tiredness and fatigue; social withdrawal; estrangement from self and others); and finally “sick leave.”

Going one step further than the graphic or figure discussed above, qualitative researchers may decide to combine and synthesize findings into one integrated representation. In the case of the graphic or figure, the individual elements still maintain their distinctiveness but are brought together to reflect how they are related. In a composite, however, rather than just showing that they are related (static), the audience actually gets to ‘see’ the elements interacting (dynamic). The integrated and interactive findings of a composite can take many forms. It might be a written narrative, such as a fictionalized case study that reflects or highlights the many aspects that emerged during analysis. It could be a poem, dance, painting, or any other performance or medium. Ultimately, a composite offers an audience a meaningful and comprehensive expression of our findings. If you are choosing to utilize a composite, there is an underlying assumption that is conveyed: you are suggesting that the findings of your study are best understood holistically. By discussing each finding individually, they lose some of their potency or significance, so a composite is required to bring them together. As an example of a composite, consider that you are conducting research with a number of First Nations Peoples in Canada. After consulting with a number of Elders and learning about the importance of oral traditions and the significance of storytelling, you collaboratively determine that the best way to disseminate your findings will be to create and share a story as a means of presenting your research findings. The use of composites also assumes that the ‘truths’ revealed in our data can take many forms. The Transgender Youth Project, hosted by the Mandala Center for Change, is an example of legislative theatre combining research, artistic expression, and political advocacy, and a good example of action-oriented research.

While you haven’t heard much about numbers in our qualitative chapters, I’m going to break with tradition and speak briefly about them here. For many qualitative projects we do include some numeric information in our final product(s), mostly in the form of counts. Counts usually show up as frequencies of demographic characteristics of our sample (or characteristics of our artifacts, if our data sources aren’t people). These may be included as a table or integrated into the narrative we provide, but in either case, our goal in including this information is to help the reader better understand who or what our sample represents. The other time we sometimes include count information is with respect to the frequency and coverage of the themes or categories represented in our data. Frequency information about a theme can help the reader know how often an idea came up in our analysis, while coverage can help them know how widely dispersed this idea was (e.g., did nearly everyone mention this, or was it a small group of participants?).

  • There are a wide variety of means by which you can deliver your qualitative research to the public. Choose one that takes into account the various considerations that we have discussed above and also honors the ethical commitments that we outlined early in this chapter.
  • Presenting qualitative research requires some amount of creativity. Utilize the building blocks discussed in this chapter to help you consider how to most authentically and effectively convey your message to a wider audience.
  • What means of delivery will you be choosing for your dissemination plan?
  • What building blocks will best convey your qualitative results to your audience?
  • National Association of Social Workers. (2017). NASW code of ethics. Retrieved from https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English ↵
  • Kohli, R., & Pizarro, M. (2016). Fighting to educate our own: Teachers of Color, relational accountability, and the struggle for racial justice. Equity & Excellence in Education, 49 (1), 72-84. ↵
  • Karabanow, J., Gurman, E., & Naylor, T. (2012). Street youth labor as an Expression of survival and self-worth. Critical Social Work, 13 (2). ↵
  • Noonan, B. M., Gallor, S. M., Hensler-McGinnis, N. F., Fassinger, R. E., Wang, S., & Goodman, J. (2004). Challenge and success: A Qualitative study of the career development of highly achieving women with physical and sensory disabilities. Journal of Counseling Psychology, 51 (1), 68. ↵
  • Ede, L., & Starrin, B. (2014). Unresolved conflicts and shaming processes: risk factors for long-term sick leave for mental-health reasons. Nordic Journal of Social Research, 5 , 39-54. ↵



Content warning: examples in this chapter include references to sexual harassment, domestic violence, gender-based violence, the child welfare system, substance use disorders, neonatal abstinence syndrome, child abuse, racism, and sexism.

11.1 Developing your theoretical framework

Learners will be able to...

  • Differentiate between theories that explain specific parts of the social world versus those that are more broad and sweeping in their conclusions
  • Identify the theoretical perspectives that are relevant to your project and inform your thinking about it
  • Define key concepts in your working question and develop a theoretical framework for how you understand your topic.

Theories provide a way of looking at the world and of understanding human interaction. Paradigms are grounded in big assumptions about the world—what is real, how do we create knowledge—whereas theories describe more specific phenomena. Well, we are still oversimplifying a bit. Some theories try to explain the whole world, while others only try to explain a small part. Some theories can be grouped together based on common ideas but retain their own individual and unique features. Our goal is to help you find a theoretical framework that helps you understand your topic more deeply and answer your working question.

Theories: Big and small

In your human behavior and the social environment (HBSE) class, you were introduced to the major theoretical perspectives that are commonly used in social work. These are what we like to call big-T Theories. When you read about systems theory, you are actually reading a synthesis of decades of distinct, overlapping, and conflicting theories that can be broadly classified within systems theory. For example, within systems theory, some approaches focus more on family systems while others focus on environmental systems, though the core concepts remain similar.

Different theorists define concepts in their own way, and as a result, their theories may explore different relationships with those concepts. For example, Deci and Ryan's (1985) [1] self-determination theory discusses motivation and establishes that it is contingent on meeting one's needs for autonomy, competency, and relatedness. By contrast, ecological self-determination theory, as written by Abery & Stancliffe (1996), [2] argues that self-determination is the amount of control exercised by an individual over aspects of their lives they deem important across the micro, meso, and macro levels. If self-determination were an important concept in your study, you would need to figure out which of the many theories related to self-determination helps you address your working question.

Theories can provide a broad perspective on the key concepts and relationships in the world or more specific and applied concepts and perspectives. Table 7.2 summarizes two commonly used lists of big-T Theoretical perspectives in social work. See if you can locate some of the theories that might inform your project.

Table 7.2: Broad theoretical perspectives in social work

Payne's theoretical perspectives:

  • Psychodynamic
  • Crisis and task-centered
  • Cognitive-behavioral
  • Systems/ecological
  • Macro practice/social development/social pedagogy
  • Strengths/solution/narrative
  • Humanistic/existential/spiritual
  • Critical
  • Feminist
  • Anti-discriminatory/multi-cultural sensitivity

Hutchison's theoretical perspectives:

  • Systems
  • Conflict
  • Exchange and choice
  • Social constructionist
  • Psychodynamic
  • Developmental
  • Social behavioral
  • Humanistic


Competing theoretical explanations

Within each area of specialization in social work, there are many other theories that aim to explain more specific types of interactions. For example, within the study of sexual harassment, different theories posit different explanations for why harassment occurs.

One theory, first developed by criminologists, is called routine activities theory. It posits that sexual harassment is most likely to occur when a workplace lacks unified groups and when potentially vulnerable targets and motivated offenders are both present (DeCoster, Estes, & Mueller, 1999). [5]

Other theories of sexual harassment, called relational theories, suggest that one's existing relationships are the key to understanding why and how workplace sexual harassment occurs and how people will respond when it does occur (Morgan, 1999). [6] Relational theories focus on the power that different social relationships provide (e.g., married people who have supportive partners at home might be more likely than those who lack support at home to report sexual harassment when it occurs).

Finally, feminist theories of sexual harassment take a different stance. These theories posit that the organization of our current gender system, wherein those who are the most masculine have the most power, best explains the occurrence of workplace sexual harassment (MacKinnon, 1979). [7] As you might imagine, which theory a researcher uses to examine the topic of sexual harassment will shape the questions asked about harassment. It will also shape the explanations the researcher provides for why harassment occurs.

For a graduate student beginning their study of a new topic, it may be intimidating to learn that there are so many theories beyond what you’ve learned in your theory classes. What’s worse is that there is no central database of theories on your topic. However, as you review the literature in your area, you will learn more about the theories scientists have created to explain how your topic works in the real world. There are other good sources for theories, in addition to journal articles. Books often contain works of theoretical and philosophical importance that are beyond the scope of an academic journal. Do a search in your university library for books on your topic, and you are likely to find theorists talking about how to make sense of your topic. You don't necessarily have to agree with the prevailing theories about your topic, but you do need to be aware of them so you can apply theoretical ideas to your project.

Applying big-T theories to your topic

The key to applying theories to your topic is learning the key concepts associated with that theory and the relationships between those concepts, or propositions. Again, your HBSE class should have prepared you with some of the most important concepts from the theoretical perspectives listed in Table 7.2. For example, the conflict perspective sees the world as divided into dominant and oppressed groups who engage in conflict over resources. If you were applying these theoretical ideas to your project, you would need to identify which groups in your project are considered dominant or oppressed groups, and which resources they were struggling over. This is a very general example. Challenge yourself to find small-t theories about your topic that will help you understand it in much greater detail and specificity. If you have chosen a topic that is relevant to your life and future practice, you will be doing valuable work shaping your ideas towards social work practice.

Integrating theory into your project can be easy, or it can take a bit more effort. Some people have a strong and explicit theoretical perspective that they carry with them at all times. For me, you'll probably see my work drawing from exchange and choice, social constructionist, and critical theory. Maybe you have theoretical perspectives you naturally employ, like Afrocentric theory or person-centered practice. If so, that's a great place to start since you might already be using that theory (even subconsciously) to inform your understanding of your topic. But if you aren't aware of whether you are using a theoretical perspective when you think about your topic, try writing a paragraph off the top of your head or talking with a friend explaining what you think about that topic. Try matching it with some of the ideas from the broad theoretical perspectives from Table 7.2. This can ground you as you search for more specific theories. Some studies are designed to test whether theories apply to the real world, while others are designed to create new theories or variations on existing theories. Consider which feels more appropriate for your project and what you want to know.

Another way to easily identify the theories associated with your topic is to look at the concepts in your working question. Are these concepts commonly found in any of the theoretical perspectives in Table 7.2? Take a look at the Payne and Hutchison texts and see if any of those look like the concepts and relationships in your working question or if any of them match with how you think about your topic. Even if they don't possess the exact same wording, similar theories can help serve as a starting point to finding other theories that can inform your project. Remember, HBSE textbooks will give you not only the broad statements of theories but also sources from specific theorists and sub-theories that might be more applicable to your topic. Skim the references and suggestions for further reading once you find something that applies well.

Choose a theoretical perspective from Hutchison, Payne, or another theory textbook that is relevant to your project. Using their textbooks or other reputable sources, identify:

  • At least five important concepts from the theory
  • What relationships the theory establishes between these important concepts (e.g., as x increases, y decreases)
  • How you can use this theory to better understand the concepts and variables in your project

Developing your own theoretical framework

Hutchison's and Payne's frameworks are helpful for surveying the whole body of literature relevant to social work, which is why they are so widely used. Each offers one framework, or way of thinking, about all of the theories social workers will encounter that are relevant to practice. Social work researchers should delve further and develop a theoretical or conceptual framework of their own based on their reading of the literature. In Chapter 8, we will develop your theoretical framework further, identifying the cause-and-effect relationships that answer your working question. Developing a theoretical framework is also instructive for revising and clarifying your working question and identifying concepts that serve as keywords for additional literature searching. The greater clarity you have with your theoretical perspective, the easier each subsequent step in the research process will be.

Getting acquainted with the important theoretical concepts in a new area can be challenging. While social work education provides a broad overview of social theory, you will find much greater fulfillment from reading about the theories related to your topic area. We discussed some strategies for finding theoretical information in Chapter 3 as part of literature searching. To extend that conversation a bit, some strategies for searching for theories in the literature include:

  • Searching for the name of a theory or its key concepts as keywords, specifically in the title or abstract
  • Looking at the references and "cited by" links within theoretical articles and textbooks
  • Looking at books, edited volumes, and textbooks that discuss theory
  • Talking with a scholar on your topic, or asking a professor if they can help connect you to someone
  • Reading the introduction and discussion sections of empirical articles, where careful authors are clear about how they use theory to inform their research project

For example, from the broad umbrella of systems theory, you might pick out family systems theory if you want to understand the effectiveness of a family counseling program.
It's important to remember that knowledge arises within disciplines, and that disciplines have different theoretical frameworks for explaining the same topic. While it is certainly important for the social work perspective to be a part of your analysis, social workers benefit from searching across disciplines to come to a more comprehensive understanding of the topic. Reaching across disciplines can provide uncommon insights during conceptualization, and once the study is completed, a multidisciplinary researcher will be able to share results in a way that speaks to a variety of audiences. A study by An and colleagues (2015) [8] uses game theory from the discipline of economics to understand problems in the Temporary Assistance for Needy Families (TANF) program. In order to receive TANF benefits, mothers must cooperate with paternity and child support requirements unless they have "good cause," as in cases of domestic violence, in which providing that information would put the mother at greater risk of violence. Game theory can help us understand how TANF recipients and caseworkers respond to the incentives in their environment, and highlight why the design of the "good cause" waiver program may not achieve its intended outcome of increasing access to benefits for survivors of family abuse.

Of course, there are natural limits on the depth with which student researchers can and should engage in a search for theory about their topic. At minimum, you should be able to draw connections across studies and be able to assess the relative importance of each theory within the literature. Just because you found one article applying your theory (like game theory, in our example above) does not mean it is important or often used in the domestic violence literature. Indeed, it would be much more common in the family violence literature to find psychological theories of trauma, feminist theories of power and control, and similar theoretical perspectives used to inform research projects rather than game theory, which is equally applicable to workers and bosses at a corporation as it is to survivors of family violence. Consider using the Cited By feature to identify articles, books, and other sources of theoretical information that are seminal or well-cited in the literature. Similarly, by using the name of a theory in the keywords of a search query (along with keywords related to your topic), you can get a sense of how often the theory is used in your topic area. You should have a sense of what theories are commonly used to analyze your topic, even if you end up choosing a different one to inform your project.


Theories that are not cited or used as often are still immensely valuable. As we saw before with TANF and "good cause" waivers, using theories from other disciplines can produce uncommon insights and help you make a new contribution to the social work literature. Given the privileged position that the social work curriculum places on theories developed by white men, students may want to explore Afrocentricity as a social work practice theory (Pellebon, 2007) [9] or abolitionist social work (Jacobs et al., 2021) [10] when deciding on a theoretical framework for their research project that addresses concepts of racial justice. Start with your working question, and explain how each theory helps you answer your question. Some explanations are going to feel right, and some concepts will feel more salient to you than others. Keep in mind that this is an iterative process. Your theoretical framework will likely change as you continue to conceptualize your research project, revise your research question, and design your study.

By trying on many different theoretical explanations for your topic area, you can better clarify your own theoretical framework. Some of you may be fortunate enough to find theories that match perfectly with how you think about your topic, are used often in the literature, and are therefore relatively straightforward to apply. However, many of you may find that a combination of theoretical perspectives is most helpful for you to investigate your project. For example, maybe the group counseling program for which you are evaluating client outcomes draws from both motivational interviewing and cognitive behavioral therapy. In order to understand the change happening in the client population, you would need to know each theory separately as well as how they work in tandem with one another. Because theoretical explanations and even the definitions of concepts are debated by scientists, it may be helpful to find a specific social scientist or group of scientists whose perspective on the topic you find matches with your understanding of the topic. Of course, it is also perfectly acceptable to develop your own theoretical framework, though you should be able to articulate how your framework fills a gap within the literature.

If you are adapting theoretical perspectives in your study, it is important to clarify the original authors' definitions of each concept. Jabareen (2009) [11] argues that conceptual frameworks are not merely collections of concepts but, rather, constructs in which each concept plays an integral role. [12] A conceptual framework is a network of linked concepts that together provide a comprehensive understanding of a phenomenon. Each concept in a conceptual framework plays an ontological or epistemological role in the framework, and it is important to assess whether the concepts and relationships in your framework make sense together. As your framework takes shape, you will find yourself integrating and grouping together concepts, thinking about the most important or least important concepts, and how each concept is causally related to others.

Much like paradigms, theories play a supporting role in the conceptualization of your research project. Recall the ice float from Figure 7.1. Theoretical explanations support the design and methods you use to answer your research question. In student projects that lack a theoretical framework, I often see the biases and errors in reasoning that we discussed in Chapter 1 get in the way of good social science. That's because theories mark which concepts are important, provide a framework for understanding them, and map their interrelationships. If you are missing this foundation, you will operate on informal observation, messages from authority, and other forms of unsystematic and unscientific thinking we reviewed in Chapter 1.

Theory-informed inquiry is incredibly helpful for identifying key concepts and how to measure them in your research project, but there is a risk in aligning research too closely with theory. The theory-ladenness of facts and observations produced by social science research means that we may be making our ideas real through research. This is a potential source of confirmation bias in social science. Moreover, as Tan (2016) [13] demonstrates, social science often proceeds by adopting as true the perspective of Western and Global North countries, and cross-cultural research is often when ethnocentric and biased ideas are most visible. In her example, a researcher from the West studying teacher-centric classrooms in China that rely partially on rote memorization may view them as less advanced than student-centered classrooms developed in a Western country simply because of Western philosophical assumptions about the importance of individualism and self-determination. Developing a clear theoretical framework is a way to guard against biased research, and it will establish a firm foundation on which you will develop the design and methods for your study.

  • Just as empirical evidence is important for conceptualizing a research project, so too are the key concepts and relationships identified by social work theory.
  • Your theory textbook will provide you with a sense of the broad theoretical perspectives in social work that might be relevant to your project.
  • Try to find small-t theories that are more specific to your topic area and relevant to your working question.
  • In Chapter 2, you developed a concept map for your proposal. Take a moment to revisit your concept map now as your theoretical framework is taking shape, and make any updates to the key concepts and relationships. If you need a refresher, we have embedded a short how-to video from the University of Guelph Library (CC-BY-NC-SA 4.0) that we also used in Chapter 2.

11.2 Conceptual definitions

Learners will be able to...

  • Define measurement and conceptualization
  • Apply Kaplan’s three categories to determine the complexity of measuring a given variable
  • Identify the role previous research and theory play in defining concepts
  • Distinguish between unidimensional and multidimensional concepts
  • Critically apply reification to how you conceptualize the key variables in your research project

In social science, when we use the term  measurement , we mean the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. At its core, measurement is about defining one’s terms in as clear and precise a way as possible. Of course, measurement in social science isn’t quite as simple as using a measuring cup or spoon, but there are some basic tenets on which most social scientists agree when it comes to measurement. We’ll explore those, as well as some of the ways that measurement might vary depending on your unique approach to the study of your topic.

An important point here is that measurement does not require any particular instruments or procedures. What it does require is a systematic procedure for assigning scores, meanings, and descriptions to individuals or objects so that those scores represent the characteristic of interest. You can measure phenomena in many different ways, but you must be sure that how you choose to measure gives you information and data that lets you answer your research question. If you're looking for information about a person's income, but your main points of measurement have to do with the money they have in the bank, you're not really going to find the information you're looking for!

The question of what social scientists measure can be answered by asking yourself what social scientists study. Think about the topics you’ve learned about in other social work classes you’ve taken or the topics you’ve considered investigating yourself. Let’s consider Melissa Milkie and Catharine Warner’s study (2011) [14] of first graders’ mental health. In order to conduct that study, Milkie and Warner needed to have some idea about how they were going to measure mental health. What does mental health mean, exactly? And how do we know when we’re observing someone whose mental health is good and when we see someone whose mental health is compromised? Understanding how measurement works in research methods helps us answer these sorts of questions.

As you might have guessed, social scientists will measure just about anything that they have an interest in investigating. For example, those who are interested in learning something about the correlation between social class and levels of happiness must develop some way to measure both social class and happiness. Those who wish to understand how well immigrants cope in their new locations must measure immigrant status and coping. Those who wish to understand how a person’s gender shapes their workplace experiences must measure gender and workplace experiences (and get more specific about which experiences are under examination). You get the idea. Social scientists can and do measure just about anything you can imagine observing or wanting to study. Of course, some things are easier to observe or measure than others.


Observing your variables

Philosopher Abraham Kaplan (1964) [15] wrote The Conduct of Inquiry, which has since become a classic work in research methodology (Babbie, 2010). [16] In his text, Kaplan describes different categories of things that behavioral scientists observe. One of those categories, which Kaplan called “observational terms,” is probably the simplest to measure in social science. Observational terms are the sorts of things that we can see with the naked eye simply by looking at them. Kaplan roughly defines them as conditions that are easy to identify and verify through direct observation. If, for example, we wanted to know how the conditions of playgrounds differ across different neighborhoods, we could directly observe the variety, amount, and condition of equipment at various playgrounds.

Indirect observables, on the other hand, are less straightforward to assess. In Kaplan's framework, they are subtle and complex conditions that we must define using existing knowledge and intuition. If we conducted a study for which we wished to know a person’s income, we’d probably have to ask them their income, perhaps in an interview or a survey. Thus, we have observed income, even if it has only been observed indirectly. Birthplace might be another indirect observable. We can ask study participants where they were born, but chances are good we won’t have directly observed any of those people being born in the locations they report.

Sometimes the measures that we are interested in are more complex and more abstract than observational terms or indirect observables. Think about some of the concepts you’ve learned about in other social work classes—for example, ethnocentrism. What is ethnocentrism? Well, from completing an introduction to social work class you might know that it has something to do with the way a person judges another’s culture. But how would you  measure  it? Here’s another construct: bureaucracy. We know this term has something to do with organizations and how they operate but measuring such a construct is trickier than measuring something like a person’s income. The theoretical concepts of ethnocentrism and bureaucracy represent ideas whose meanings we have come to agree on. Though we may not be able to observe these abstractions directly, we can observe their components.

Kaplan referred to these more abstract things that behavioral scientists measure as constructs.  Constructs  are “not observational either directly or indirectly” (Kaplan, 1964, p. 55), [17] but they can be defined based on observables. For example, the construct of bureaucracy could be measured by counting the number of supervisors that need to approve routine spending by public administrators. The greater the number of administrators that must sign off on routine matters, the greater the degree of bureaucracy. Similarly, we might be able to ask a person the degree to which they trust people from different cultures around the world and then assess the ethnocentrism inherent in their answers. We can measure constructs like bureaucracy and ethnocentrism by defining them in terms of what we can observe. [18]

The idea of coming up with your own measurement tool might sound pretty intimidating at this point. The good news is that if you find something in the literature that works for you, you can use it (with proper attribution, of course). If there are only pieces of it that you like, you can reuse those pieces (with proper attribution and describing/justifying any changes). You don't always have to start from scratch!

Look at the variables in your research question.

  • Classify them as direct observables, indirect observables, or constructs.
  • Do you think measuring them will be easy or hard?
  • What are your first thoughts about how to measure each variable? No wrong answers here, just write down a thought about each variable.


Measurement starts with conceptualization

In order to measure the concepts in your research question, we first have to understand what we think about them. As an aside, the word concept has come up quite a bit, and it is important to be sure we have a shared understanding of that term. A concept is the notion or image that we conjure up when we think of some cluster of related observations or ideas. For example, masculinity is a concept. What do you think of when you hear that word? Presumably, you imagine some set of behaviors and perhaps even a particular style of self-presentation. Of course, we can’t necessarily assume that everyone conjures up the same set of ideas or images when they hear the word masculinity. While there are many possible ways to define the term and some may be more common or have more support than others, there is no universal definition of masculinity. What counts as masculine may shift over time, from culture to culture, and even from individual to individual (Kimmel, 2008). This is why defining our concepts is so important.

Not all researchers clearly explain their theoretical or conceptual framework for their study, but they should! Without understanding how a researcher has defined their key concepts, it would be nearly impossible to understand the meaning of that researcher’s findings and conclusions. Back in Chapter 7, you developed a theoretical framework for your study based on a survey of the theoretical literature in your topic area. If you haven't done that yet, consider flipping back to that section to familiarize yourself with some of the techniques for finding and using theories relevant to your research question. Continuing with our example on masculinity, we would need to survey the literature on theories of masculinity. After a few queries on masculinity, I found a wonderful article by Wong (2010) [19] that reviewed eight years of the journal Psychology of Men & Masculinity and analyzed how often different theories of masculinity were used. Not only can I get a sense of which theories are more accepted and which are more marginal in the social science on masculinity, I am also able to identify a range of options from which I can find the theory or theories that will inform my project.

Identify a specific theory (or more than one theory) and how it helps you understand...

  • Your independent variable(s).
  • Your dependent variable(s).
  • The relationship between your independent and dependent variables.

Rather than completing this exercise from scratch, build from your theoretical or conceptual framework developed in previous chapters.

In quantitative methods, conceptualization involves writing out clear, concise definitions for our key concepts. These are the kind of definitions you are used to, like the ones in a dictionary. A conceptual definition involves defining a concept in terms of other concepts, usually by making reference to how other social scientists and theorists have defined those concepts in the past. Of course, new conceptual definitions are created all the time because our conceptual understanding of the world is always evolving.

Conceptualization is deceptively challenging—spelling out exactly what the concepts in your research question mean to you. Following along with our example, think about what comes to mind when you read the term masculinity. How do you know masculinity when you see it? Does it have something to do with men or with social norms? If so, perhaps we could define masculinity as the social norms that men are expected to follow. That seems like a reasonable start, and at this early stage of conceptualization, brainstorming about the images conjured up by concepts and playing around with possible definitions is appropriate. However, this is just the first step. At this point, you should be beyond brainstorming for your key variables because you have read a good amount of research about them.

In addition, we should consult previous research and theory to understand the definitions that other scholars have already given for the concepts we are interested in. This doesn’t mean we must use their definitions, but understanding how concepts have been defined in the past will help us to compare our conceptualizations with how other scholars define and relate concepts. Understanding prior definitions of our key concepts will also help us decide whether we plan to challenge those conceptualizations or rely on them for our own work. Finally, working on conceptualization is likely to help in the process of refining your research question to one that is specific and clear in what it asks. Conceptualization and operationalization (next section) are where "the rubber meets the road," so to speak, and you have to specify what you mean by the question you are asking. As your conceptualization deepens, you will often find that your research question becomes more specific and clear.

If we turn to the literature on masculinity, we will surely come across work by Michael Kimmel, one of the preeminent masculinity scholars in the United States. After consulting Kimmel’s prior work (2000; 2008), [20] we might tweak our initial definition of masculinity. Rather than defining masculinity as “the social norms that men are expected to follow,” perhaps instead we’ll define it as “the social roles, behaviors, and meanings prescribed for men in any given society at any one time” (Kimmel & Aronson, 2004, p. 503). [21] Our revised definition is more precise and complex because it goes beyond addressing one aspect of men’s lives (norms), and addresses three aspects: roles, behaviors, and meanings. It also implies that roles, behaviors, and meanings may vary across societies and over time. Using definitions developed by theorists and scholars is a good idea, though you may find that you want to define things your own way.

As you can see, conceptualization isn’t as simple as applying any random definition that we come up with to a term. Defining our terms may involve some brainstorming at the very beginning. But conceptualization must go beyond that, to engage with or critique existing definitions and conceptualizations in the literature. Once we’ve brainstormed about the images associated with a particular word, we should also consult prior work to understand how others define the term in question. After we’ve identified a clear definition that we’re happy with, we should make sure that every term used in our definition will make sense to others. Are there terms used within our definition that also need to be defined? If so, our conceptualization is not yet complete. Our definition includes the concept of "social roles," so we should have a definition for what those mean and become familiar with role theory to help us with our conceptualization. If we don't know what roles are, how can we study them?

Let's say we do all of that. We have a clear definition of the term masculinity with reference to previous literature, and we also have a good understanding of the terms in our conceptual definition...then we're done, right? Not so fast. You've likely met more than one man in your life, and you've probably noticed that they are not the same, even if they live in the same society during the same historical time period. This could mean there are dimensions of masculinity. In terms of social scientific measurement, concepts can be said to have multiple dimensions when there are multiple elements that make up a single concept. With respect to the term masculinity, dimensions could be based on gender identity, gender performance, sexual orientation, and so on. In any of these cases, the concept of masculinity would be considered to have multiple dimensions.

While you do not need to spell out every possible dimension of the concepts you wish to measure, it is important to identify whether your concepts are unidimensional (and therefore relatively easy to define and measure) or multidimensional (and therefore require multi-part definitions and measures). In this way, how you conceptualize your variables determines how you will measure them in your study. Unidimensional concepts are those that are expected to have a single underlying dimension. These concepts can be measured using a single measure or test. Examples include simple concepts such as a person’s weight, time spent sleeping, and so forth. 
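The distinction between unidimensional and multidimensional concepts can be sketched in code. The following is a minimal, hypothetical illustration: the dimension names for masculinity are invented for the example and are not drawn from any validated instrument.

```python
# A hypothetical sketch of unidimensional vs. multidimensional concepts
# as measurement records. Dimension names are illustrative assumptions.

unidimensional = {
    "weight_kg": 72.5,  # one underlying dimension, one measure suffices
}

multidimensional = {
    "masculinity": {  # multiple elements make up a single concept
        "gender_identity": 4,
        "gender_performance": 2,
        "attitudes_toward_roles": 5,
    }
}

def n_dimensions(concept_value):
    """Count measured dimensions: a nested mapping signals a
    multidimensional concept; a single value signals one dimension."""
    if isinstance(concept_value, dict):
        return len(concept_value)
    return 1

print(n_dimensions(unidimensional["weight_kg"]))      # 1
print(n_dimensions(multidimensional["masculinity"]))  # 3
```

A multidimensional concept like the one above would require a multi-part measure, one element per dimension, while the unidimensional concept needs only a single measure.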

One frustrating thing is that there is no clear demarcation between concepts that are inherently unidimensional and those that are multidimensional. Even something as simple as age could be broken down into multiple dimensions, including mental age and chronological age, so where does conceptualization stop? How far down the dimensional rabbit hole do we have to go? Researchers should consider two things. First, how important is this variable in your study? If age is not important in your study (maybe it is a control variable), it seems like a waste of time to do a lot of work drawing from developmental theory to conceptualize this variable. A unidimensional measure from zero to dead is all the detail we need. On the other hand, if we were measuring the impact of age on masculinity, conceptualizing our independent variable (age) as multidimensional may provide a richer understanding of its impact on masculinity. Second, your conceptualization will lead directly to your operationalization of the variable, and once your operationalization is complete, make sure someone reading your study could follow how your conceptual definitions informed the measures you chose for your variables.

Write a conceptual definition for your independent and dependent variables.

  • Cite and attribute definitions to other scholars, if you use their words.
  • Describe how your definitions are informed by your theoretical framework.
  • Place your definition in conversation with other theories and conceptual definitions commonly used in the literature.
  • Are there multiple dimensions of your variables?
  • Are any of these dimensions important for you to measure?


Do researchers actually know what we're talking about?

Conceptualization proceeds differently in qualitative research compared to quantitative research. Since qualitative researchers are interested in the understandings and experiences of their participants, it is less important for them to find one fixed definition for a concept before starting to interview or interact with participants. The researcher’s job is to accurately and completely represent how their participants understand a concept, not to test their own definition of that concept.

If you were conducting qualitative research on masculinity, you would likely consult previous literature like Kimmel’s work mentioned above. From your literature review, you may come up with a  working definition  for the terms you plan to use in your study, which can change over the course of the investigation. However, the definition that matters is the definition that your participants share during data collection. A working definition is merely a place to start, and researchers should take care not to think it is the only or best definition out there.

In qualitative inquiry, your participants are the experts (sound familiar, social workers?) on the concepts that arise during the research study. Your job as the researcher is to accurately and reliably collect and interpret their understanding of the concepts they describe while answering your questions. Your conceptualization is likely to change over the course of qualitative inquiry as you learn more from your participants. Indeed, getting participants to comment on, extend, or challenge the definitions and understandings of other participants is a hallmark of qualitative research. This is the opposite of quantitative research, in which definitions must be completely set in stone before the inquiry can begin.

The contrast between qualitative and quantitative conceptualization is instructive for understanding how quantitative methods (and positivist research in general) privilege the knowledge of the researcher over the knowledge of study participants and community members. Positivism holds that the researcher is the "expert," and can define concepts based on their expert knowledge of the scientific literature. This knowledge is in contrast to the lived experience that participants possess from experiencing the topic under examination day-in, day-out. For this reason, it would be wise to remind ourselves not to take our definitions too seriously and be critical about the limitations of our knowledge.

Conceptualization must be open to revisions, even radical revisions, as scientific knowledge progresses. While I've suggested consulting prior scholarly definitions of our concepts, you should not assume that prior, scholarly definitions are more real than the definitions we create. Likewise, we should not think that our own made-up definitions are any more real than any other definition. It would also be wrong to assume that just because definitions exist for some concept, the concept itself exists beyond some abstract idea in our heads. Building on the paradigmatic ideas behind interpretivism and the critical paradigm, the assumption that our abstract concepts exist in some concrete, tangible way is known as reification. Thinking critically about reification draws attention to the power dynamics behind how we create reality by how we define it.

Let's return again to our example of masculinity. Think about how our notions of masculinity have developed over the past few decades, and how different and yet so similar they are to patriarchal definitions throughout history. Conceptual definitions become more or less popular based on the power arrangements inside of social science and the broader world. Western knowledge systems are privileged, while others are viewed as unscientific and marginal. The historical domination of social science by white men from WEIRD (Western, educated, industrialized, rich, and democratic) countries meant that definitions of masculinity were imbued with their cultural biases and were designed, explicitly and implicitly, to preserve their power. This has inspired movements for cognitive justice as we seek to use social science to achieve global development.

  • Measurement is the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating.
  • Kaplan identified three categories of things that social scientists measure: observational terms, indirect observables, and constructs.
  • Some concepts have multiple elements or dimensions.
  • Researchers often use measures previously developed and studied by other researchers.
  • Conceptualization is a process that involves coming up with clear, concise definitions.
  • Conceptual definitions are based on the theoretical framework you are using for your study (and the paradigmatic assumptions underlying those theories).
  • Whether your conceptual definitions come from your own ideas or the literature, you should be able to situate them in terms of other commonly used conceptual definitions.
  • Researchers should acknowledge the limited explanatory power of their definitions for concepts and how oppression can shape what explanations are considered true or scientific.

Think historically about the variables in your research question.

  • How has the conceptual definition of your topic changed over time?
  • What scholars or social forces were responsible for this change?

Take a critical look at your conceptual definitions.

  • How might participants define terms for themselves differently, in terms of their daily experience?
  • On what cultural assumptions are your conceptual definitions based?
  • Are your conceptual definitions applicable across all cultures that will be represented in your sample?

11.3 Inductive and deductive reasoning

  • Describe inductive and deductive reasoning and provide examples of each
  • Identify how inductive and deductive reasoning are complementary

Congratulations! You survived the chapter on theories and paradigms. My experience has been that many students have a difficult time thinking about theories and paradigms because they perceive them as "intangible" and thereby hard to connect to social work research. I even had one student who said she got frustrated just reading the word "philosophy."

Rest assured, you do not need to become a theorist or philosopher to be an effective social worker or researcher. However, you should have a good sense of what theory or theories will be relevant to your project, as well as how this theory, along with your working question, fit within the three broad research paradigms we reviewed. If you don't have a good idea about those at this point, it may be a good opportunity to pause and read more about the theories related to your topic area.

Theories structure and inform social work research. The converse is also true: research can structure and inform theory. The reciprocal relationship between theory and research often becomes evident to students when they consider the relationships between theory and research in inductive and deductive approaches to research. In both cases, theory is crucial. But the relationship between theory and research differs for each approach.

While inductive and deductive approaches to research are quite different, they can also be complementary. Let’s start by looking at each one and how they differ from one another. Then we’ll move on to thinking about how they complement one another.

Inductive reasoning

A researcher using inductive reasoning begins by collecting data that is relevant to their topic of interest. Once a substantial amount of data have been collected, the researcher will then step back from data collection to get a bird’s eye view of their data. At this stage, the researcher looks for patterns in the data, working to develop a theory that could explain those patterns. Thus, when researchers take an inductive approach, they start with a particular set of observations and move to a more general set of propositions about those experiences. In other words, they move from data to theory, or from the specific to the general. Figure 8.1 outlines the steps involved with an inductive approach to research.

A researcher moving from a more particular focus on data to a more general focus on theory by looking for patterns

There are many good examples of inductive research, but we’ll look at just a few here. One fascinating study in which the researchers took an inductive approach is Katherine Allen, Christine Kaestle, and Abbie Goldberg’s (2011) [22] study of how boys and young men learn about menstruation. To understand this process, Allen and her colleagues analyzed the written narratives of 23 young cisgender men in which the men described how they learned about menstruation, what they thought of it when they first learned about it, and what they think of it now. By looking for patterns across all 23 cisgender men’s narratives, the researchers were able to develop a general theory of how boys and young men learn about this aspect of girls’ and women’s biology. They conclude that sisters play an important role in boys’ early understanding of menstruation, that menstruation makes boys feel somewhat separated from girls, and that as they enter young adulthood and form romantic relationships, young men develop more mature attitudes about menstruation. Note how this study began with the data—men’s narratives of learning about menstruation—and worked to develop a theory.

In another inductive study, Kristin Ferguson and colleagues (Ferguson, Kim, & McCoy, 2011) [23] analyzed empirical data to better understand how to meet the needs of young people who are homeless. The authors analyzed focus group data from 20 youth at a homeless shelter. From these data they developed a set of recommendations for those interested in applied interventions that serve homeless youth. The researchers also developed hypotheses for others who might wish to conduct further investigation of the topic. Though Ferguson and her colleagues did not test their hypotheses, their study ends where most deductive investigations begin: with a theory and a hypothesis derived from that theory. Section 8.4 discusses the use of mixed methods research as a way for researchers to test hypotheses created in a previous component of the same research project.

You will notice from both of these examples that inductive reasoning is most commonly found in studies using qualitative methods, such as focus groups and interviews. Because inductive reasoning involves the creation of a new theory, researchers need very nuanced data on how the key concepts in their working question operate in the real world. Qualitative data is often drawn from lengthy interactions and observations with the individuals and phenomena under examination. For this reason, inductive reasoning is most often associated with qualitative methods, though it is used in both quantitative and qualitative research.

Deductive reasoning

If inductive reasoning is about creating theories from raw data, deductive reasoning is about testing theories using data. Researchers using deductive reasoning take the steps described earlier for inductive research and reverse their order. They start with a compelling social theory, create a hypothesis about how the world should work, collect raw data, and analyze whether their hypothesis was confirmed or not. That is, deductive approaches move from a more general level (theory) to a more specific one (data), whereas inductive approaches move from the specific (data) to the general (theory).

A deductive approach to research is the one that people typically associate with scientific investigation. Students in English-dominant countries who may be confused by inductive vs. deductive research can rest part of the blame on Sir Arthur Conan Doyle, creator of the Sherlock Holmes character. As Craig Vasey points out in his breezy introductory logic chapter, Sherlock Holmes more often used inductive rather than deductive reasoning (despite claiming to use the powers of deduction to solve crimes). By noticing subtle details in how people act, behave, and dress, Holmes finds patterns that others miss. Using those patterns, he creates a theory of how the crime occurred, dramatically revealed to the authorities just in time to arrest the suspect. Indeed, it is these flashes of insight into the patterns of data that make Holmes such a keen inductive reasoner. In social work practice, rather than detective work, inductive reasoning is supported by the intuitions and practice wisdom of social workers, just as Holmes' reasoning is sharpened by his experience as a detective.

So, if deductive reasoning isn't Sherlock Holmes' observation and pattern-finding, how does it work? It starts with what you have already done in Chapters 3 and 4, reading and evaluating what others have done to study your topic. It continues with Chapter 5, discovering what theories already try to explain how the concepts in your working question operate in the real world. Tapping into this foundation of knowledge on their topic, the researcher studies what others have done, reads existing theories of whatever phenomenon they are studying, and then tests hypotheses that emerge from those theories. Figure 8.2 outlines the steps involved with a deductive approach to research.

Moving from general to specific using deductive reasoning

While not all researchers follow a deductive approach, many do. We'll now take a look at a couple of excellent recent examples of deductive research.

In a study of US law enforcement responses to hate crimes, Ryan King and colleagues (King, Messner, & Baller, 2009) [24] hypothesized that law enforcement’s response would be less vigorous in areas of the country that had a stronger history of racial violence. The authors developed their hypothesis from prior research and theories on the topic. They tested the hypothesis by analyzing data on states’ lynching histories and hate crime responses. Overall, the authors found support for their hypothesis and illustrated an important application of critical race theory.

In another recent deductive study, Melissa Milkie and Catharine Warner (2011) [25] studied the effects of different classroom environments on first graders’ mental health. Based on prior research and theory, Milkie and Warner hypothesized that negative classroom features, such as a lack of basic supplies and heat, would be associated with emotional and behavioral problems in children. One might associate this research with Maslow's hierarchy of needs or systems theory. The researchers found support for their hypothesis, demonstrating that policymakers should be paying more attention to the mental health outcomes of children’s school experiences, just as they track academic outcomes (American Sociological Association, 2011). [26]

Complementary approaches

While inductive and deductive approaches to research seem quite different, they can actually be rather complementary. In some cases, researchers will plan for their study to include multiple components, one inductive and the other deductive. In other cases, a researcher might begin a study with the plan to conduct either inductive or deductive research, but then discovers along the way that the other approach is needed to help illuminate findings. Here is an example of each such case.

Dr. Amy Blackstone (n.d.), author of Principles of sociological inquiry: Qualitative and quantitative methods , relates a story about her mixed methods research on sexual harassment.

We began the study knowing that we would like to take both a deductive and an inductive approach in our work. We therefore administered a quantitative survey, the responses to which we could analyze in order to test hypotheses, and also conducted qualitative interviews with a number of the survey participants. The survey data were well suited to a deductive approach; we could analyze those data to test hypotheses that were generated based on theories of harassment. The interview data were well suited to an inductive approach; we looked for patterns across the interviews and then tried to make sense of those patterns by theorizing about them. For one paper (Uggen & Blackstone, 2004) [27] , we began with a prominent feminist theory of the sexual harassment of adult women and developed a set of hypotheses outlining how we expected the theory to apply in the case of younger women’s and men’s harassment experiences. We then tested our hypotheses by analyzing the survey data. In general, we found support for the theory that posited that the current gender system, in which heteronormative men wield the most power in the workplace, explained workplace sexual harassment—not just of adult women but of younger women and men as well. In a more recent paper (Blackstone, Houle, & Uggen, 2006), [28] we did not hypothesize about what we might find but instead inductively analyzed interview data, looking for patterns that might tell us something about how or whether workers’ perceptions of harassment change as they age and gain workplace experience. From this analysis, we determined that workers’ perceptions of harassment did indeed shift as they gained experience and that their later definitions of harassment were more stringent than those they held during adolescence. 
Overall, our desire to understand young workers’ harassment experiences fully—in terms of their objective workplace experiences, their perceptions of those experiences, and their stories of their experiences—led us to adopt both deductive and inductive approaches in the work. (Blackstone, n.d., p. 21) [29]

Researchers may not always set out to employ both approaches in their work but sometimes find that their use of one approach leads them to the other. One such example is described eloquently in Russell Schutt's Investigating the Social World (2006). [30] As Schutt describes, researchers Sherman and Berk (1984) [31] conducted an experiment to test two competing theories of the effects of punishment on deterring deviance (in this case, domestic violence). Specifically, Sherman and Berk hypothesized that deterrence theory (see Williams, 2005 [32] for more information on that theory) would provide a better explanation of the effects of arresting accused batterers than labeling theory. Deterrence theory predicts that arresting an accused spouse batterer will reduce future incidents of violence. Conversely, labeling theory predicts that arresting accused spouse batterers will increase future incidents (see Policastro & Payne, 2013 [33] for more information on that theory). Figure 8.3 summarizes the two competing theories and the hypotheses Sherman and Berk set out to test.


Results from these follow-up studies were mixed. In some cases, arrest deterred future incidents of violence. In other cases, it did not. This left the researchers with new data that they needed to explain. The researchers therefore took an inductive approach in an effort to make sense of their latest empirical observations. The new studies revealed that arrest seemed to have a deterrent effect for those who were married and employed, but that it led to increased offenses for those who were unmarried and unemployed. Researchers thus turned to control theory, which posits that having some stake in conformity through the social ties provided by marriage and employment deters deviance, as the better explanation (see Davis et al., 2000 [35] for more information on this theory).

Predictions of control theory on incidents of domestic violence

What the original Sherman and Berk study, along with the follow-up studies, show us is that we might start with a deductive approach to research, but then, if confronted by new data we must make sense of, we may move to an inductive approach. We will expand on these possibilities in section 8.4 when we discuss mixed methods research.

Ethical and critical considerations

Deductive and inductive reasoning, just like other components of the research process, come with ethical and cultural considerations for researchers. Specifically, deductive research is limited by existing theory. Because scientific inquiry has been shaped by oppressive forces such as sexism, racism, and colonialism, what is considered theory is largely based in Western, white-male-dominant culture. Thus, researchers doing deductive research may artificially limit themselves to ideas that were derived from this context. Non-Western researchers, international social workers, and practitioners working with non-dominant groups may find deductive reasoning of limited help if theories do not adequately describe other cultures.

While these flaws in deductive research may make inductive reasoning seem more appealing, on closer inspection you'll find that similar issues apply. A researcher using inductive reasoning applies their intuition and lived experience when analyzing participant data. They will take note of particular themes, conceptualize their definitions, and frame the project using their unique psychology. Since everyone's internal world is shaped by their cultural and environmental context, inductive reasoning conducted by Western researchers may unintentionally reinforce lines of inquiry that derive from cultural oppression.

Inductive reasoning is also shaped by those invited to provide the data to be analyzed. For example, I recently worked with a student who wanted to understand the impact of child welfare supervision on children born dependent on opiates and methamphetamine. Due to the potential harm that could come from interviewing families and children who are in foster care or under child welfare supervision, the researcher decided to use inductive reasoning and to only interview child welfare workers.

Talking to practitioners is a good idea for feasibility, as they are less vulnerable than clients. However, any theory that emerges out of these observations will be substantially limited, as it would be devoid of the perspectives of parents, children, and other community members who could provide a more comprehensive picture of the impact of child welfare involvement on children. Notice that each of these groups has less power than child welfare workers in the service relationship. Attending to which groups were used to inform the creation of a theory and the power of those groups is an important critical consideration for social work researchers.

As you can see, when researchers apply theory to research they must wrestle with the history and hierarchy around knowledge creation in that area. In deductive studies, the researcher is positioned as the expert, similar to the positivist paradigm presented in Chapter 5. We've discussed a few of the limitations on the knowledge of researchers in this subsection, but the position of the "researcher as expert" is inherently problematic. However, it should also not be taken to an extreme. A researcher who approaches inductive inquiry as a naïve learner is also inherently problematic. Just as competence in social work practice requires a baseline of knowledge prior to entering practice, so does competence in social work research. Because a truly naïve intellectual position is impossible—we all have preexisting ways we view the world and are not fully aware of how they may impact our thoughts—researchers should be well-read in the topic area of their research study but humble enough to know that there is always much more to learn.

  • Inductive reasoning begins with a set of empirical observations, seeking patterns in those observations, and then theorizing about those patterns.
  • Deductive reasoning begins with a theory, developing hypotheses from that theory, and then collecting and analyzing data to test the truth of those hypotheses.
  • Inductive and deductive reasoning can be employed together for a more complete understanding of the research topic.
  • Though researchers don’t always set out to use both inductive and deductive reasoning in their work, they sometimes find that new questions arise in the course of an investigation that can best be answered by employing both approaches.
  • Identify one theory and how it helps you understand your topic and working question.

I encourage you to find a specific theory from your topic area, rather than relying only on the broad theoretical perspectives like systems theory or the strengths perspective. Those broad theoretical perspectives are okay...but I promise that searching for theories about your topic will help you conceptualize and design your research project.

  • Using the theory you identified, describe what you expect the answer to be to your working question.
  • Define and provide an example of idiographic causal relationships
  • Describe the role of causality in quantitative research as compared to qualitative research
  • Identify, define, and describe each of the main criteria for nomothetic causal relationships
  • Describe the difference between and provide examples of independent, dependent, and control variables
  • Define hypothesis, state a clear hypothesis, and discuss the respective roles of quantitative and qualitative research when it comes to hypotheses

Causality  refers to the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief. In other words, it is about cause and effect. It seems simple, but you may be surprised to learn there is more than one way to explain how one thing causes another. How can that be? How could there be many ways to understand causality?

Think back to our discussion in Section 5.3 on paradigms [insert chapter link plus link to section 1.2]. You’ll remember the positivist paradigm as the one that believes in objectivity. Positivists look for causal explanations that are universally true for everyone, everywhere  because they seek objective truth. Interpretivists, on the other hand, look for causal explanations that are true for individuals or groups in a specific time and place because they seek subjective truths. Remember that for interpretivists, there is not one singular truth that is true for everyone, but many truths created and shared by others.

"Are you trying to generalize or nah?"

One of my favorite classroom moments occurred in the early days of my teaching career. Students were providing peer feedback on their working questions. I overheard one group who was helping someone rephrase their research question. A student asked, “Are you trying to generalize or nah?” Teaching is full of fun moments like that one. Answering that one question can help you understand how to conceptualize and design your research project.

Nomothetic causal explanations are incredibly powerful. They allow scientists to make predictions about what will happen in the future, with a certain margin of error. Moreover, they allow scientists to generalize —that is, make claims about a large population based on a smaller sample of people or items. Generalizing is important. We clearly do not have time to ask everyone their opinion on a topic or test a new intervention on every person. We need a type of causal explanation that helps us predict and estimate truth in all situations.

Generally, nomothetic causal relationships work best for explanatory research projects [INSERT SECTION LINK]. They also tend to use quantitative research: by boiling things down to numbers, one can use the universal language of mathematics to use statistics to explore those relationships. On the other hand, descriptive and exploratory projects often fit better with idiographic causality. These projects do not usually try to generalize, but instead investigate what is true for individuals, small groups, or communities at a specific point in time. You will learn about this type of causality in the next section. Here, we will assume you have an explanatory working question. For example, you may want to know about the risk and protective factors for a specific diagnosis or how a specific therapy impacts client outcomes.

What do nomothetic causal explanations look like?

Nomothetic causal explanations express relationships between variables. The term variable has a scientific definition. Consider this one from Gillespie & Wagner (2018): "a logical grouping of attributes that can be observed and measured and is expected to vary from person to person in a population" (p. 9). [36] More practically, variables are the key concepts in your working question: the things you plan to observe when you actually do your research project, conduct your surveys, complete your interviews, etc. These things have two key properties. First, they vary; they do not remain constant. "Age" varies by number. "Gender" varies by category. But they both vary. Second, they have attributes. So the variable "health professions" has attributes or categories, such as social worker, nurse, counselor, etc.

It's also worth reviewing what is  not a variable. Well, things that don't change (or vary) aren't variables. If you planned to do a study on how gender impacts earnings but your study only contained women, that concept would not vary . Instead, it would be a constant . Another common mistake I see in students' explanatory questions is mistaking an attribute for a variable. "Men" is not a variable. "Gender" is a variable. "Virginia" is not a variable. The variable is the "state or territory" in which someone or something is physically located.

When one variable causes another, we have what researchers call independent and dependent variables. For example, in a study investigating the impact of spanking on aggressive behavior, spanking would be the independent variable and aggressive behavior would be the dependent variable. An independent variable is the cause, and a  dependent variable  is the effect. Why are they called that? Dependent variables  depend on independent variables. If all of that gets confusing, just remember the graphical relationship in Figure 8.5.

Figure 8.5: The letters IV on the left side with an arrow pointing to the letters DV on the right

Write out your working question, as it exists now. As we said previously in the subsection, we assume you have an explanatory research question for learning this section.

  • Write out a diagram similar to Figure 8.5.
  • Put your independent variable on the left and the dependent variable on the right.
  • Can your variables vary?
  • Do they have different attributes or categories that vary from person to person?
  • How does the theory you identified in section 8.1 help you understand this causal relationship?

If the theory you've identified isn't much help to you or seems unrelated, it's a good indication that you need to read more literature about the theories related to your topic.

For some students, your working question may not be specific enough to list an independent or dependent variable clearly. You may have "risk factors" in place of an independent variable, for example. Or "effects" as a dependent variable. If that applies to your research question, get specific for a minute even if you have to revise this later. Think about which specific risk factors or effects you are interested in. Consider a few options for your independent and dependent variable and create diagrams similar to Figure 8.5.

Finally, you are likely to revisit your working question so you may have to come back to this exercise to clarify the causal relationship you want to investigate.

For a ten-cent word like "nomothetic," these causal relationships should look pretty basic to you. They should look like "x causes y." Indeed, you may be looking at your causal explanation and thinking, "wow, there are so many other things I'm missing in here." In fact, maybe my dependent variable sometimes causes changes in my independent variable! For example, a working question asking about poverty and education might ask how poverty makes it more difficult to graduate college or how high college debt impacts income inequality after graduation. Nomothetic causal relationships are slices of reality. They boil things down to two (or often more) key variables and assert a one-way causal explanation between them. This is by design, as they are trying to generalize across all people to all situations. The more complicated, circular, and often contradictory causal explanations are idiographic, which we will cover in the next section of this chapter.

Developing a hypothesis

A hypothesis   is a statement describing a researcher’s expectation regarding what they anticipate finding. In quantitative research, a hypothesis expresses the nomothetic causal relationship that the researcher expects to find to be true or false. A hypothesis is written to describe the expected relationship between the independent and dependent variables. In other words, write the answer to your working question using your variables. That's your hypothesis! Make sure you haven't introduced new variables into your hypothesis that are not in your research question. Once you have your hypothesis, diagram it as in Figure 8.5.

A good hypothesis should be testable using social science research methods. That is, you can use a social science research project (like a survey or experiment) to test whether it is true or not. A good hypothesis is also  specific about the relationship it explores. For example, a student project that hypothesizes, "families involved with child welfare agencies will benefit from Early Intervention programs," is not specific about which benefits it plans to investigate. I advised this student to review the empirical literature and theory about Early Intervention and see what outcomes are associated with these programs. This way, she could  more clearly state the dependent variable in her hypothesis, perhaps looking at reunification, attachment, or developmental milestone achievement in children and families under child welfare supervision.

Your hypothesis should be an informed prediction based on a theory or model of the social world. For example, you may hypothesize that treating mental health clients with warmth and positive regard is likely to help them achieve their therapeutic goals. That hypothesis would be based on the humanistic practice models of Carl Rogers. Using previous theories to generate hypotheses is an example of deductive research. If Rogers’ theory of unconditional positive regard is accurate, a study comparing clinicians who used it versus those who did not would show more favorable treatment outcomes for clients receiving unconditional positive regard.

Let’s consider a couple of examples. In research on sexual harassment (Uggen & Blackstone, 2004), [37] one might hypothesize, based on feminist theories of sexual harassment, that more females than males will experience specific sexually harassing behaviors. What is the causal relationship being predicted here? Which is the independent and which is the dependent variable? In this case, researchers hypothesized that a person’s sex (independent variable) would predict their likelihood to experience sexual harassment (dependent variable).


Sometimes researchers will hypothesize that a relationship will take a specific direction. As a result, an increase or decrease in one area might be said to cause an increase or decrease in another. For example, you might choose to study the relationship between age and support for legalization of marijuana. Perhaps you’ve taken a sociology class and, based on the theories you’ve read, you hypothesize that age is negatively related to support for marijuana legalization. [38] What have you just hypothesized?

You have hypothesized that as people get older, the likelihood of their supporting marijuana legalization decreases. Thus, as age (your independent variable) moves in one direction (up), support for marijuana legalization (your dependent variable) moves in another direction (down). So, a direct relationship (or positive correlation) involves two variables moving in the same direction, while an inverse relationship (or negative correlation) involves two variables moving in opposite directions. If writing hypotheses feels tricky, it sometimes helps to draw them out and depict each of the two hypotheses we have just discussed.

Figure 8.7: As age increases, support for marijuana legalization decreases

It’s important to note that once a study starts, it is unethical to change your hypothesis to match the data you find. For example, what happens if you conduct a study to test the hypothesis from Figure 8.7 on support for marijuana legalization, but you find no relationship between age and support for legalization? It means that your hypothesis was incorrect, but that’s still valuable information. It would challenge what the existing literature says on your topic, demonstrating that more research needs to be done to figure out the factors that impact support for marijuana legalization. Don’t be embarrassed by negative results, and definitely don’t change your hypothesis to make it appear correct all along!

Criteria for establishing a nomothetic causal relationship

Let’s say you conduct your study and you find evidence that supports your hypothesis: as age increases, support for marijuana legalization decreases. Success! Causal explanation complete, right? Not quite.

You’ve only established one of the criteria for causality. The criteria for causality include all of the following: covariation, plausibility, temporality, and nonspuriousness. In our example from Figure 8.7, we have established only one criterion: covariation. When variables covary , they vary together. Both age and support for marijuana legalization vary in our study. Our sample contains people of varying ages and varying levels of support for marijuana legalization. If, for example, we only included 16-year-olds in our study, age would be a  constant , not a variable.

Just because there might be some correlation between two variables does not mean that a causal relationship between the two is really plausible. Plausibility means that in order to make the claim that one event, behavior, or belief causes another, the claim has to make sense. It makes sense that people from previous generations would have different attitudes towards marijuana than younger generations. People who grew up in the time of Reefer Madness or the hippies may hold different views than those raised in an era of legalized medicinal and recreational use of marijuana. Plausibility is of course helped by basing your causal explanation in existing theoretical and empirical findings.

Once we’ve established that there is a plausible relationship between the two variables, we also need to establish whether the cause occurred before the effect, the criterion of temporality . A person’s age is a quality that appears long before any opinions on drug policy, so temporally the cause comes before the effect. It wouldn’t make any sense to say that support for marijuana legalization makes a person’s age increase. Even if you could predict someone’s age based on their support for marijuana legalization, you couldn’t say someone’s age was caused by their support for legalization of marijuana.

Finally, scientists must establish nonspuriousness. A spurious relationship is one in which an association between two variables appears to be causal but can in fact be explained by some third variable. This third variable is often called a confound or confounding variable because it clouds and confuses the relationship between your independent and dependent variables, making it difficult to discern what the true causal relationship is.


Continuing with our example, we could point to the fact that older adults are less likely to have used marijuana recreationally. Maybe it is actually recreational use of marijuana that leads people to be more open to legalization, not their age. In this case, our confounding variable would be recreational marijuana use. Perhaps the relationship between age and attitudes towards legalization is a spurious relationship that is accounted for by previous use. This is also referred to as the third variable problem , where a seemingly true causal relationship is actually caused by a third variable not in the hypothesis. In this example, the relationship between age and support for legalization could be more about having tried marijuana than the age of the person.

Quantitative researchers are sensitive to the effects of potentially spurious relationships. As a result, they will often measure these third variables in their study, so they can control for their effects in their statistical analysis. These are called  control variables , and they refer to potentially confounding variables whose effects are controlled for mathematically in the data analysis process. Control variables can be a bit confusing, and we will discuss them more in Chapter 10, but think about it as an argument between you, the researcher, and a critic.

Researcher: “The older a person is, the less likely they are to support marijuana legalization.” Critic: “Actually, it’s more about whether a person has used marijuana before. That is what truly determines whether someone supports marijuana legalization.” Researcher: “Well, I measured previous marijuana use in my study and mathematically controlled for its effects in my analysis. Age explains most of the variation in attitudes towards marijuana legalization.”
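The researcher's rebuttal above can be sketched with simulated data. This is a toy illustration under invented assumptions (the data-generating rule, coefficients, and sample are all hypothetical): we simulate a world where both age and prior marijuana use affect support, make prior use correlate with age, and then compare a naive regression of support on age alone against a regression that also controls for prior use.

```python
# A toy simulation (hypothetical numbers, not real data) of controlling
# for a confound with multiple regression.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

age = rng.uniform(18, 80, n)
# Hypothetical confound: younger people are more likely to have used marijuana.
used = (rng.uniform(0, 1, n) < (80 - age) / 80).astype(float)
# In this simulated world, support depends on BOTH age and prior use.
support = 90 - 0.5 * age + 10 * used + rng.normal(0, 5, n)

# Naive model: regress support on age alone (confound omitted).
X_naive = np.column_stack([np.ones(n), age])
b_naive = np.linalg.lstsq(X_naive, support, rcond=None)[0]

# Controlled model: include prior use as a control variable.
X_ctrl = np.column_stack([np.ones(n), age, used])
b_ctrl = np.linalg.lstsq(X_ctrl, support, rcond=None)[0]

# b_naive[1] overstates the age effect because age is entangled with
# prior use; b_ctrl[1] recovers the true age coefficient (-0.5).
```

Mathematically controlling for the third variable separates how much of the association belongs to age itself and how much was smuggled in by the confound, which is exactly the move the researcher makes in the dialogue.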

Let’s consider a few additional, real-world examples of spuriousness. Did you know, for example, that high rates of ice cream sales have been shown to cause drowning? Of course, that’s not really true, but there is a positive relationship between the two. In this case, the third variable that causes both high ice cream sales and increased deaths by drowning is time of year, as the summer season sees increases in both (Babbie, 2010). [39]

Here’s another good one: it is true that as the salaries of Presbyterian ministers in Massachusetts rise, so too does the price of rum in Havana, Cuba. Well, duh, you might be saying to yourself. Everyone knows how much ministers in Massachusetts love their rum, right? Not so fast. Both salaries and rum prices have increased, true, but so has the price of just about everything else (Huff & Geis, 1993). [40]

Finally, research shows that the more firefighters present at a fire, the more damage is done at the scene. What this statement leaves out, of course, is that as the size of a fire increases so too does the amount of damage caused as does the number of firefighters called on to help (Frankfort-Nachmias & Leon-Guerrero, 2011). [41] In each of these examples, it is the presence of a confounding variable that explains the apparent relationship between the two original variables.

In sum, the following criteria must be met for a nomothetic causal relationship:

  • The two variables must vary together.
  • The relationship must be plausible.
  • The cause must precede the effect in time.
  • The relationship must be nonspurious (not due to a confounding variable).

The hypothetico-deductive method

The primary way that researchers in the positivist paradigm use theories is sometimes called the hypothetico-deductive method (although this term is much more likely to be used by philosophers of science than by scientists themselves). Researchers choose an existing theory. Then, they make a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researchers then conduct an empirical study to test the hypothesis. Finally, they reevaluate the theory in light of the new results and revise it if necessary.

This process is usually conceptualized as a cycle because the researchers can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As Figure 8.8 shows, this approach meshes nicely with the process of conducting a research project, creating a more detailed model of “theoretically motivated” or “theory-driven” research.


Keep in mind the hypothetico-deductive method is only one way of using social theory to inform social science research. It starts with describing one or more existing theories, deriving a hypothesis from one of those theories, testing your hypothesis in a new study, and finally reevaluating the theory based on the results of your data analysis. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

But what if your research question is more interpretive? What if it is less about theory-testing and more about theory-building? This is what our next chapters will cover: the process of inductively deriving theory from people's stories and experiences. This process looks different than that depicted in Figure 8.8. It still starts with your research question and answering that question by conducting a research study. But instead of testing a hypothesis you created based on a theory, you will create a theory of your own that explains the data you collected. This format works well for qualitative research questions and for research questions that existing theories do not address.

  • In positivist and quantitative studies, the goal is often to understand the more general causes of some phenomenon rather than the idiosyncrasies of one particular instance, as in an idiographic causal relationship.
  • Nomothetic causal explanations focus on objectivity, prediction, and generalization.
  • Criteria for nomothetic causal relationships require that the variables covary, that the relationship be plausible and nonspurious, and that the cause precede the effect in time.
  • In a nomothetic causal relationship, the independent variable causes changes in the dependent variable.
  • Hypotheses are statements, drawn from theory, which describe a researcher’s expectation about a relationship between two or more variables.
  • Write out your working question and hypothesis.
  • Defend your hypothesis in a short paragraph, using arguments based on the theory you identified in section 8.1.
  • Review the criteria for a nomothetic causal relationship. Critique your short paragraph about your hypothesis using these criteria.
  • Are there potentially confounding variables, issues with time order, or other problems you can identify in your reasoning?


  • Operational definitions (36 minute read)
  • Writing effective questions and questionnaires (38 minute read)
  • Measurement quality (21 minute read)

Content warning: examples in this chapter contain references to ethnocentrism, toxic masculinity, racism in science, drug use, mental health and depression, psychiatric inpatient care, poverty and basic needs insecurity, pregnancy, and racism and sexism in the workplace and higher education.

11.1 Operational definitions

  • Define and give an example of indicators and attributes for a variable
  • Apply the three components of an operational definition to a variable
  • Distinguish between levels of measurement for a variable and how those differences relate to measurement
  • Describe the purpose of composite measures like scales and indices

Last chapter, we discussed conceptualizing your project. Conceptual definitions are like dictionary definitions. They tell you what a concept means by defining it using other concepts. In this section we will move from the abstract realm (conceptualization) to the real world (measurement).

Operationalization is the process by which researchers spell out precisely how a concept will be measured in their study. It involves identifying the specific research procedures we will use to gather data about our concepts. If conceptually defining your terms means looking at theory, how do you operationally define your terms? By looking for indicators of when your variable is present or not, more or less intense, and so forth. Operationalization is probably the most challenging part of quantitative research, but once it's done, the design and implementation of your study will be straightforward.


Operationalization works by identifying specific  indicators that will be taken to represent the ideas we are interested in studying. If we are interested in studying masculinity, then the indicators for that concept might include some of the social roles prescribed to men in society such as breadwinning or fatherhood. Being a breadwinner or a father might therefore be considered indicators  of a person’s masculinity. The extent to which a man fulfills either, or both, of these roles might be understood as clues (or indicators) about the extent to which he is viewed as masculine.

Let’s look at another example of indicators. Each day, Gallup researchers poll 1,000 randomly selected Americans to ask them about their well-being. To measure well-being, Gallup asks these people to respond to questions covering six broad areas: physical health, emotional health, work environment, life evaluation, healthy behaviors, and access to basic necessities. Gallup uses these six factors as indicators of the concept that they are really interested in, which is well-being .
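One way multiple indicators get combined is into a composite score. The sketch below is a toy illustration only, not Gallup's actual methodology or weighting: it simply averages six hypothetical 0-100 indicator scores into one well-being index, after checking that no indicator is missing.

```python
# A toy composite index (NOT Gallup's actual method): average six
# hypothetical 0-100 indicator scores into a single well-being score.
REQUIRED = {"physical health", "emotional health", "work environment",
            "life evaluation", "healthy behaviors", "basic necessities"}

def wellbeing_index(scores):
    """scores: dict mapping indicator name -> 0-100 score."""
    missing = REQUIRED - scores.keys()
    if missing:
        raise ValueError(f"missing indicators: {sorted(missing)}")
    return sum(scores[k] for k in REQUIRED) / len(REQUIRED)

person = {"physical health": 80, "emotional health": 70,
          "work environment": 60, "life evaluation": 75,
          "healthy behaviors": 65, "basic necessities": 90}
index = wellbeing_index(person)
```

Real composite measures often weight indicators differently or standardize them first; a simple average is just the most transparent starting point.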

Identifying indicators can be even simpler than the examples described thus far. Political party affiliation is another relatively easy concept for which to identify indicators. If you asked a person what party they voted for in the last national election (or gained access to their voting records), you would get a good indication of their party affiliation. Of course, some voters split tickets between multiple parties when they vote and others swing from party to party each election, so our indicator is not perfect. Indeed, if our study were about political identity as a key concept, operationalizing it solely in terms of who they voted for in the previous election leaves out a lot of information about identity that is relevant to that concept. Nevertheless, it's a pretty good indicator of political party affiliation.

Choosing indicators is not an arbitrary process. As described earlier, utilizing prior theoretical and empirical work in your area of interest is a great way to identify indicators in a scholarly manner. And your conceptual definitions will point you in the direction of relevant indicators. Empirical work will give you some very specific examples of how the important concepts in an area have been measured in the past and what sorts of indicators have been used. Often, it makes sense to use the same indicators as previous researchers; however, you may find that some previous measures have potential weaknesses that your own study will improve upon.

All of the examples in this chapter have dealt with questions you might ask a research participant on a survey or in a quantitative interview. If you plan to collect data from other sources, such as through direct observation or the analysis of available records, think practically about what the design of your study might look like and how you can collect data on various indicators feasibly. If your study asks about whether the participant regularly changes the oil in their car, you will likely not observe them directly doing so. Instead, you will likely need to rely on a survey question that asks them the frequency with which they change their oil or ask to see their car maintenance records.

  • What indicators are commonly used to measure the variables in your research question?
  • How can you feasibly collect data on these indicators?
  • Are you planning to collect your own data using a questionnaire or interview? Or are you planning to analyze available data like client files or raw data shared from another researcher's project?

Remember, you need raw data . Your research project cannot rely solely on the results reported by other researchers or the arguments you read in the literature. A literature review is only the first part of a research project, and your review of the literature should inform the indicators you end up choosing when you measure the variables in your research question.

Unlike conceptual definitions, which are built from other concepts, an operational definition consists of three components: (1) the variable being measured and its attributes, (2) the measure you will use, and (3) how you plan to interpret the data collected from that measure to draw conclusions about the variable you are measuring.

Step 1: Specifying variables and attributes

The first component, the variable, should be the easiest part. At this point in quantitative research, you should have a research question that has at least one independent and at least one dependent variable. Remember that variables must be able to vary. For example, the United States is not a variable. Country of residence is a variable, as is patriotism. Similarly, if your sample only includes men, gender is a constant in your study, not a variable. A  constant is a characteristic that does not change in your study.

When social scientists measure concepts, they sometimes use the language of variables and attributes. A  variable refers to a quality or quantity that varies across people or situations. Attributes  are the characteristics that make up a variable. For example, the variable hair color would contain attributes like blonde, brown, black, red, gray, etc. A variable’s attributes determine its level of measurement. There are four possible levels of measurement: nominal, ordinal, interval, and ratio. The first two levels of measurement are  categorical , meaning their attributes are categories rather than numbers. The latter two levels of measurement are  continuous , meaning their attributes are numbers.


Levels of measurement

Hair color is an example of a nominal level of measurement.  Nominal measures are categorical, and those categories cannot be mathematically ranked. As a brown-haired person (with some gray), I can’t say for sure that brown-haired people are better than blonde-haired people. As with all nominal levels of measurement, there is no ranking order between hair colors; they are simply different. That is what constitutes a nominal level of measurement. Gender and race are also measured at the nominal level.

What attributes are contained in the variable  hair color ? While blonde, brown, black, and red are common colors, some people may not fit into these categories if we only list these attributes. My wife, who currently has purple hair, wouldn’t fit anywhere. This means that our attributes were not exhaustive. Exhaustiveness  means that all possible attributes are listed. We may have to list a lot of colors before we can meet the criteria of exhaustiveness. Clearly, there is a point at which exhaustiveness has been reasonably met. If a person insists that their hair color is  light burnt sienna , it is not your responsibility to list that as an option. Rather, that person would reasonably be described as brown-haired. Perhaps listing a category for  other color  would suffice to make our list of colors exhaustive.

What about a person who has multiple hair colors at the same time, such as red and black? They would fall into multiple attributes. This violates the rule of  mutual exclusivity , in which a person cannot fall into two different attributes. Instead of listing all of the possible combinations of colors, perhaps you might include a  multi-color  attribute to describe people with more than one hair color.

Making sure researchers provide mutually exclusive and exhaustive attributes is about making sure all people are represented in the data record. For many years, the attributes for gender were only male or female. Now, our understanding of gender has evolved to encompass more attributes that better reflect the diversity in the world. Children of parents from different races were often classified as one race or another, even if they identified with both cultures. The option for bi-racial or multi-racial on a survey not only more accurately reflects the racial diversity in the real world but validates and acknowledges people who identify in that manner. If we did not measure race in this way, we would leave empty the data record for people who identify as biracial or multiracial, impairing our search for truth.
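The two rules above can be checked mechanically. The sketch below is a minimal illustration, with a hypothetical coding scheme for hair color: a small function flags responses that no category accepts (a violation of exhaustiveness) or that more than one category accepts (a violation of mutual exclusivity).

```python
# A minimal sketch: check that a coding scheme's attributes are
# exhaustive (every response fits some category) and mutually
# exclusive (no response fits more than one). The category sets
# below are hypothetical, not from a real instrument.
def check_attributes(responses, categories):
    """categories: dict mapping attribute name -> set of accepted values."""
    problems = []
    for r in responses:
        matches = [name for name, values in categories.items() if r in values]
        if len(matches) == 0:
            problems.append((r, "not exhaustive: no category fits"))
        elif len(matches) > 1:
            problems.append((r, "not mutually exclusive"))
    return problems

hair = {"blonde": {"blonde"}, "brown": {"brown"},
        "black": {"black"}, "red": {"red"}}
issues = check_attributes(["brown", "purple"], hair)
# "purple" is flagged: the scheme needs an "other color" attribute.
```

Adding an "other color" (and a "multi-color") attribute, as the text suggests, would make the scheme pass both checks.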

Unlike nominal-level measures, attributes at the  ordinal  level can be rank ordered. For example, someone’s degree of satisfaction in their romantic relationship can be ordered by rank. That is, you could say you are not at all satisfied, a little satisfied, moderately satisfied, or highly satisfied. Note that even though these have a rank order to them (not at all satisfied is certainly worse than highly satisfied), we cannot calculate a mathematical distance between those attributes. We can simply say that one attribute of an ordinal-level variable is more or less than another attribute.

This can get a little confusing when using rating scales . If you have ever taken a customer satisfaction survey or completed a course evaluation for school, you are familiar with rating scales. “On a scale of 1-5, with 1 being the lowest and 5 being the highest, how likely are you to recommend our company to other people?” That surely sounds familiar. Rating scales use numbers, but only as a shorthand, to indicate what attribute (highly likely, somewhat likely, etc.) the person feels describes them best. You wouldn’t say you are “2” likely to recommend the company, but you would say you are not very likely to recommend the company. Ordinal-level attributes must also be exhaustive and mutually exclusive, as with nominal-level variables.

At the  interval   level, attributes must also be exhaustive and mutually exclusive and there is equal distance between attributes. Interval measures are also continuous, meaning their attributes are numbers, rather than categories. IQ scores are interval level, as are temperatures in Fahrenheit and Celsius. Their defining characteristic is that we can say how much more or less one attribute differs from another. We cannot, however, say with certainty what the ratio of one attribute is in comparison to another. For example, it would not make sense to say that a person with an IQ score of 140 has twice the IQ of a person with a score of 70. However, the difference between IQ scores of 80 and 100 is the same as the difference between IQ scores of 120 and 140.

While we cannot say that someone with an IQ of 140 is twice as intelligent as someone with an IQ of 70 because IQ is measured at the interval level, we can say that someone with six siblings has twice as many as someone with three because number of siblings is measured at the ratio level. Finally, at the ratio   level, attributes are mutually exclusive and exhaustive, attributes can be rank ordered, the distance between attributes is equal, and attributes have a true zero point. Thus, with these variables, we can  say what the ratio of one attribute is in comparison to another. Examples of ratio-level variables include age and years of education. We know that a person who is 12 years old is twice as old as someone who is 6 years old. Height measured in meters and weight measured in kilograms are good examples. So are counts of discrete objects or events such as the number of siblings one has or the number of questions a student answers correctly on an exam. The differences between each level of measurement are visualized in Table 11.1.

Table 11.1 Criteria for Different Levels of Measurement

                                     Nominal   Ordinal   Interval   Ratio
  Exhaustive                            X         X         X         X
  Mutually exclusive                    X         X         X         X
  Rank-ordered                                    X         X         X
  Equal distance between attributes                         X         X
  True zero point                                                     X

Levels of measurement = levels of specificity

We have spent time learning how to determine our data's level of measurement. Now what? How could we use this information to help us as we measure concepts and develop measurement tools? First, the types of statistical tests that we are able to use depend on our data's level of measurement. With nominal-level measurement, for example, the only available measure of central tendency is the mode. With ordinal-level measurement, the median or mode can be used as indicators of central tendency. Interval- and ratio-level measurement are typically considered the most desirable because they permit any measure of central tendency to be computed (i.e., mean, median, or mode). Also, ratio-level measurement is the only level that allows meaningful statements about ratios of scores. The higher the level of measurement, the more complex the statistical tests we are able to conduct. This knowledge may help us decide what kind of data we need to gather, and how.
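The pairing of levels of measurement with measures of central tendency can be shown in a few lines. This is a toy sketch with invented example values, using Python's standard statistics module.

```python
# Which central-tendency measures make sense at each level of
# measurement (example values are hypothetical).
import statistics

hair_color = ["brown", "blonde", "brown", "black"]  # nominal
satisfaction = [1, 2, 2, 3, 4]  # ordinal codes (1 = not at all satisfied)
iq_scores = [85, 100, 100, 115, 130]                # interval

mode_hair = statistics.mode(hair_color)      # nominal: mode is the only option
median_sat = statistics.median(satisfaction)  # ordinal: median (or mode)
mean_iq = statistics.mean(iq_scores)          # interval/ratio: mean is meaningful
```

Note that a mean of the nominal list would be meaningless (you cannot average hair colors), which is exactly why level of measurement constrains your analysis.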

That said, we have to balance this knowledge with the understanding that collecting data at a higher level of measurement can sometimes negatively impact our studies. For instance, providing answers in ranges may make prospective participants feel more comfortable responding to sensitive items. Imagine that you were interested in collecting information on topics such as income, number of sexual partners, or number of times someone used illicit drugs. You would have to think about the sensitivity of these items and determine whether it would make more sense to collect some data at a lower level of measurement (e.g., asking whether someone is sexually active (nominal) versus their total number of sexual partners (ratio)).

Finally, when analyzing data, researchers sometimes find a need to change a variable's level of measurement. For example, a few years ago, a student of mine was interested in studying the relationship between mental health and life satisfaction, using a variety of measures. One item asked about the number of mental health symptoms, reported as the actual number. When analyzing the data, the student examined the mental health symptom variable and noticed that she had two groups: those with none or one symptom and those with many symptoms. Instead of using the ratio-level data (actual number of mental health symptoms), she collapsed her cases into two categories, few and many, and used this variable in her analyses. It is important to note that you can move data from a higher level of measurement to a lower level; however, you cannot move from a lower level to a higher one.
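Collapsing a ratio-level variable into categories, as the student did, is a one-way recoding that can be sketched like this (the symptom counts and the cutoff of two symptoms are invented for illustration):

```python
# Recode a ratio-level symptom count into a two-category (lower-level) variable.
symptom_counts = [0, 1, 0, 7, 9, 1, 8]  # hypothetical raw data

def collapse(count, cutoff=2):
    """Label counts below the cutoff 'few' and the rest 'many'."""
    return "few" if count < cutoff else "many"

groups = [collapse(c) for c in symptom_counts]
print(groups)  # ['few', 'few', 'few', 'many', 'many', 'few', 'many']
# The reverse move is impossible: 'many' alone cannot recover the count 7 or 9.
```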

  • Check that the variables in your research question can vary, and that they are not constants or one of many potential attributes of a variable.
  • Think about the attributes your variables have. Are they categorical or continuous? What level of measurement seems most appropriate?


Step 2: Specifying measures for each variable

Let’s pick a social work research question and walk through the process of operationalizing variables to see how specific we need to get. I’m going to hypothesize that residents of a psychiatric unit who are more depressed are less likely to be satisfied with care. Remember, this would be an inverse relationship—as depression increases, satisfaction decreases. In this question, depression is my independent variable (the cause) and satisfaction with care is my dependent variable (the effect). Now that we have identified our variables, their attributes, and levels of measurement, we move on to the second component: the measure itself.

So, how would you measure my key variables, depression and satisfaction? What indicators would you look for? Some students might say that depression could be measured by observing a participant’s body language, or that a depressed person will often express feelings of sadness or hopelessness. Similarly, a satisfied person might seem happy around service providers and often express gratitude. While these factors may indicate that the variables are present, they lack specificity. What this “measure” is actually saying is “I know depression and satisfaction when I see them.” While you are likely a decent judge of depression and satisfaction, a research study requires more information about how you plan to measure your variables. Your judgments are subjective, based on your own idiosyncratic experiences with depression and satisfaction; they could not be replicated by another researcher, nor applied consistently to a large group of people. Operationalization requires that you come up with a specific and rigorous measure for determining who is depressed or satisfied.

Finding a good measure for your variable depends on the kind of variable it is. Variables that are directly observable don't come up very often in my students' classroom projects, but they might include things like taking someone's blood pressure, marking attendance or participation in a group, and so forth. To measure an indirectly observable variable like age, you would probably put a question on a survey that asked, “How old are you?” Measuring a variable like income might require some more thought, though. Are you interested in this person’s individual income or the income of their family unit? This might matter if your participant does not work or is dependent on other family members for income. Do you count income from social welfare programs? Are you interested in their income per month or per year? Even though indirect observables are relatively easy to measure, the measures you use must be clear in what they are asking, and operationalization is all about figuring out the specifics of what you want to know. For more complicated constructs, you will need compound measures (that use multiple indicators to measure a single variable).

How you plan to collect your data also influences how you will measure your variables. For social work researchers using secondary data like client records as a data source, you are limited by what information is in the data sources you can access. If your organization uses a given measurement for a mental health outcome, that is the one you will use in your study. Similarly, if you plan to study how long a client was housed after an intervention using client visit records, you are limited by how their caseworker recorded their housing status in the chart. One of the benefits of collecting your own data is being able to select the measures you feel best exemplify your understanding of the topic.

Measuring unidimensional concepts

The previous section mentioned two important considerations: how complicated the variable is and how you plan to collect your data. With these in hand, we can use the level of measurement to further specify how you will measure your variables and consider specialized rating scales developed by social science researchers.

Measurement at each level

Nominal measures assess categorical variables. These measures are used for variables or indicators that have mutually exclusive attributes, but that cannot be rank-ordered. Nominal measures ask about the variable and provide names or labels for different attribute values like social work, counseling, and nursing for the variable profession. Nominal measures are relatively straightforward.

Ordinal measures often use a rating scale: an ordered set of responses that participants must choose from. Figure 11.1 shows several examples. The number of response options on a typical rating scale is usually five or seven, though it can range from three to eleven. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them the relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). [42] Although you often see scales with numerical labels, it is best to present only verbal labels to respondents and convert them to numerical values in the analyses. Avoid partial labels and lengthy or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 11.1 is a visual-analog scale, on which participants make a mark somewhere along a horizontal line to indicate the magnitude of their response.


Interval measures are those where the values measured are not only rank-ordered, but are also equidistant from adjacent attributes. For example, on the temperature scale (in Fahrenheit or Celsius), the difference between 30 and 40 degrees Fahrenheit is the same as that between 80 and 90 degrees Fahrenheit. Likewise, if you have a scale that asks about respondents’ annual income using the following ranges as attributes: $0 to 10,000, $10,000 to 20,000, $20,000 to 30,000, and so forth, this is also an interval measure, because the mid-points of each range (i.e., $5,000, $15,000, $25,000, etc.) are equidistant from each other. The intelligence quotient (IQ) scale is also an interval measure, because the measure is designed such that the difference between IQ scores 100 and 110 is supposed to be the same as between 110 and 120 (although we do not really know whether that is truly the case). Interval measures allow us to examine “how much more” one attribute is when compared to another, which is not possible with nominal or ordinal measures. You may find researchers who “pretend” (incorrectly) that ordinal rating scales are actually interval measures so that they can use different statistical techniques for analyzing them. As we will discuss in the latter part of the chapter, this is a mistake because there is no way to know whether the difference between a 3 and a 4 on a rating scale is the same as the difference between a 2 and a 3. Those numbers are just placeholders for categories.

Ratio measures are those that have all the qualities of nominal, ordinal, and interval scales, and in addition, also have a “true zero” point (where the value zero implies lack or non-availability of the underlying construct). Think about how to measure the number of people working in human resources at a social work agency. It could be one, several, or none (if the company contracts out for those services). Measuring interval and ratio data is relatively easy, as people either select or input a number for their answer. If you ask a person how many eggs they purchased last week, they can simply tell you they purchased a dozen eggs at the store, two at breakfast on Wednesday, or none at all.

Commonly used rating scales in questionnaires

The level of measurement will give you the basic information you need, but social scientists have also developed specialized instruments for use in questionnaires, a common tool in quantitative research. As we mentioned before, if you plan to source your data from client files or previously published results, you are limited to the measures that appear in those sources.

Although Likert scale is a term colloquially used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning. In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people’s attitudes (Likert, 1932). [43] It involves presenting people with several statements—including both favorable and unfavorable statements—about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree. Numbers are assigned to each response and then summed across all items to produce a score representing the attitude toward the person, group, or idea. For items that are phrased in the opposite direction (e.g., negatively worded statements instead of positively worded statements), reverse coding is used so that the numerical scoring of those statements also runs in the opposite direction. The entire set of items came to be called a Likert scale, as indicated in Table 11.2 below.

Unless you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a rating scale. Likert scales allow for more granularity (more finely tuned response) than yes/no items, including whether respondents are neutral to the statement. Below is an example of how we might use a Likert scale to assess your attitudes about research as you work your way through this textbook.

Table 11.2 Likert scale
Statements (each rated Strongly Agree / Agree / Neither Agree nor Disagree / Disagree / Strongly Disagree):

  • I like research more now than when I started reading this book.
  • This textbook is easy to use.
  • I feel confident about how well I understand levels of measurement.
  • This textbook is helping me plan my research proposal.
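The scoring procedure described above, including reverse coding, might look like this in code. The responses and the choice of which item is negatively worded are hypothetical:

```python
# Score one respondent's answers on a 5-point Likert scale
# (5 = Strongly Agree ... 1 = Strongly Disagree).
responses = [4, 2, 5, 3]   # hypothetical answers to four statements
negatively_worded = {1}    # zero-based positions of reverse-coded items

def likert_score(responses, negatively_worded, low=1, high=5):
    total = 0
    for i, value in enumerate(responses):
        if i in negatively_worded:
            value = high + low - value  # reverse code: 1<->5, 2<->4, 3 stays 3
        total += value
    return total

print(likert_score(responses, negatively_worded))  # 4 + 4 + 5 + 3 = 16
```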

Semantic differential scales are composite (multi-item) scales in which respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites. Whereas in the above Likert scale the participant is asked how much they agree or disagree with a statement, in a semantic differential scale the participant is asked to indicate how they feel about a specific item. This makes the semantic differential scale an excellent technique for measuring people’s attitudes or feelings toward objects, events, or behaviors. Table 11.3 is an example of a semantic differential scale that was created to assess participants’ feelings about this textbook.

Table 11.3 Semantic differential scale

             Very much   Somewhat   Neither   Somewhat   Very much
Boring                                                               Exciting
Useless                                                              Useful
Hard                                                                 Easy
Irrelevant                                                           Applicable

The Guttman scale, a composite scale designed by Louis Guttman, uses a series of items arranged in increasing order of intensity (least intense to most intense) of the concept. This type of scale allows us to understand the intensity of beliefs or feelings. Each item in a Guttman scale has a weight (not indicated on the tool itself) that varies with the intensity of that item, and the weighted combination of the responses is used as an aggregate measure of an observation.

Example Guttman Scale Items

  • I often felt the material was not engaging                 Yes/No
  • I was often thinking about other things in class           Yes/No
  • I was often working on other tasks during class            Yes/No
  • I will work to abolish research from the curriculum        Yes/No

Notice how the items move from lower intensity to higher intensity. A researcher reviews the yes answers and creates a score for each participant.
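A minimal sketch of that scoring step, assuming for simplicity that items are unweighted (the text above notes that real Guttman scales attach a weight to each item; the weights shown are invented):

```python
# Items are ordered from least to most intense; tally the 'Yes' answers.
answers = ["Yes", "Yes", "No", "No"]  # one hypothetical respondent

score = sum(1 for a in answers if a == "Yes")
print(score)  # 2: endorsed only the two least intense items

# With item weights (values invented), the aggregate is a weighted sum instead:
weights = [1, 2, 3, 4]
weighted_score = sum(w for w, a in zip(weights, answers) if a == "Yes")
print(weighted_score)  # 1 + 2 = 3
```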

Composite measures: Scales and indices

Depending on your research design, your measure may be something you put on a survey or pre/post-test that you give to your participants. For a variable like age or income, one well-worded question may suffice. Unfortunately, most variables in the social world are not so simple. Depression and satisfaction are multidimensional concepts. Relying on a single indicator, like a question that asks "Yes or no, are you depressed?", does not encompass the complexity of depression, including issues with mood, sleeping, eating, relationships, and happiness. There is no easy way to delineate between multidimensional and unidimensional concepts, as it's all in how you think about your variable. Satisfaction could be validly measured using a unidimensional ordinal rating scale. However, if satisfaction were a key variable in our study, we would need a theoretical framework and conceptual definition for it. That means we'd probably have more indicators to ask about, like timeliness, respect, sensitivity, and many others, and we would want our study to say something about what satisfaction truly means in terms of our other key variables. However, if satisfaction is not a key variable in your conceptual framework, it makes sense to operationalize it as a unidimensional concept.

For more complicated measures, researchers use scales and indices (sometimes called indexes) to measure their variables because they assess multiple indicators to develop a composite (or total) score. Composite scores provide a much greater understanding of concepts than a single item could. Although we won't delve too deeply into the process of scale development, we will cover some important topics so you understand how scales and indices developed by other researchers can be used in your project.

Although scales and indices exhibit differences (which we will discuss later), they have several factors in common.

  • Both are ordinal measures of variables.
  • Both can order the units of analysis in terms of specific variables.
  • Both are composite measures.


The previous section discussed how to measure respondents’ responses to predesigned items or indicators belonging to an underlying construct. But how do we create the indicators themselves? The process of creating the indicators is called scaling. More formally, scaling is a branch of measurement that involves the construction of measures by associating qualitative judgments about unobservable constructs with quantitative, measurable metric units. Stevens (1946) [44] said, “Scaling is the assignment of objects to numbers according to a rule.” This process of measuring abstract concepts in concrete terms remains one of the most difficult tasks in empirical social science research.

The outcome of a scaling process is a scale, which is an empirical structure for measuring items or indicators of a given construct. Understand that multidimensional “scales,” as discussed in this section, are a little different from the “rating scales” discussed in the previous section. A rating scale is used to capture a respondent’s reaction to a given item on a questionnaire. For example, an ordinally scaled item captures a value from “strongly disagree” to “strongly agree.” Attaching a rating scale to a statement or instrument is not scaling. Rather, scaling is the formal process of developing scale items, before rating scales can be attached to those items.

If creating your own scale sounds painful, don’t worry! For most multidimensional variables, you would likely be duplicating work that has already been done by other researchers. Specifically, this is a branch of science called psychometrics. You do not need to create a scale for depression because scales such as the Patient Health Questionnaire (PHQ-9), the Center for Epidemiologic Studies Depression Scale (CES-D), and Beck’s Depression Inventory (BDI) have been developed and refined over dozens of years to measure variables like depression. Similarly, scales such as the Patient Satisfaction Questionnaire (PSQ-18) have been developed to measure satisfaction with medical care. As we will discuss in the next section, these scales have been shown to be reliable and valid. While you could create a new scale to measure depression or satisfaction, a study with rigor would pilot test and refine that new scale over time to make sure it measures the concept accurately and consistently. This high level of rigor is often unachievable in student research projects because of the cost and time involved in pilot testing and validating, so using existing scales is recommended.

Unfortunately, there is no good one-stop shop for psychometric scales. The Mental Measurements Yearbook provides a searchable database of measures for social science variables, though it is woefully incomplete and often does not contain the full documentation for scales in its database. You can access it from a university library’s list of databases. If you can’t find anything there, your next stop should be the methods sections of the articles in your literature review. The methods section of each article will detail how the researchers measured their variables, and often the results section is instructive for understanding more about the measures. In a quantitative study, researchers may have used a scale to measure key variables and will provide a brief description of that scale, its name, and maybe a few example questions. If you need more information, look at the results section and the tables discussing the scale to get a better idea of how the measure works. Looking beyond the articles in your literature review, searching Google Scholar using queries like “depression scale” or “satisfaction scale” should also provide some relevant results. For example, searching for documentation for the Rosenberg Self-Esteem Scale (which we will discuss in the next section), I found a report from researchers investigating acceptance and commitment therapy which details this scale and many others used to assess mental health outcomes. If you find the name of the scale somewhere but cannot find the documentation (all questions and answers plus how to interpret the scale), a general web search with the name of the scale and “.pdf” may bring you to what you need. Or, to get professional help with finding information, always ask a librarian!

Unfortunately, these approaches do not guarantee that you will be able to view the scale itself or get information on how it is interpreted. Many scales cost money to use and may require training to properly administer. You may also find scales that are related to your variable but would need to be slightly modified to match your study’s needs. You could adapt a scale to fit your study; however, changing even small parts of a scale can influence its accuracy and consistency. While it is perfectly acceptable in student projects to adapt a scale without testing it first (time may not allow you to do so), pilot testing is always recommended for adapted scales, and researchers seeking to draw valid conclusions and publish their results must take this additional step.

An index is a composite score derived from aggregating measures of multiple concepts (called components) using a set of rules and formulas. It is different from a scale. Scales also aggregate measures; however, these measures examine different dimensions or the same dimension of a single construct. A well-known example of an index is the consumer price index (CPI), which is computed every month by the Bureau of Labor Statistics of the U.S. Department of Labor. The CPI is a measure of how much consumers have to pay for goods and services (in general) and is divided into eight major categories (food and beverages, housing, apparel, transportation, healthcare, recreation, education and communication, and “other goods and services”), which are further subdivided into more than 200 smaller items. Each month, government employees call all over the country to get the current prices of more than 80,000 items. Using a complicated weighting scheme that takes into account the location and probability of purchase for each item, analysts then combine these prices into an overall index score using a series of formulas and rules.

Another example of an index is the Duncan Socioeconomic Index (SEI). This index is used to quantify a person's socioeconomic status (SES) and is a combination of three concepts: income, education, and occupation. Income is measured in dollars, education in years or degrees achieved, and occupation is classified into categories or levels by status. These very different measures are combined to create an overall SES index score. However, SES index measurement has generated a lot of controversy and disagreement among researchers.

The process of creating an index is similar to that of a scale. First, conceptualize (define) the index and its constituent components. Though this appears simple, there may be a lot of disagreement on which components (concepts/constructs) should be included in or excluded from an index. For instance, in the SES index, isn’t income correlated with education and occupation? And if so, should we include one component only or all three? Reviewing the literature, using theories, and/or interviewing experts or key stakeholders may help resolve this issue. Second, operationalize and measure each component. For instance, how will you categorize occupations, particularly since some occupations may have changed with time (e.g., there were no Web developers before the Internet)? Third, create a rule or formula for calculating the index score. This process may involve a lot of subjectivity. Finally, validate the index score using existing or new data.
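As a toy illustration of the second and third steps, here is an invented SES-style index. The normalizations, codings, and weights below are made up for the sketch; they are not the Duncan SEI's actual formula:

```python
def ses_index(income_dollars, education_years, occupation_level):
    """Combine three differently measured components into one index score."""
    # Step 2: operationalize each component on a comparable 0-1 range
    # (caps and divisors are arbitrary choices for this illustration).
    income_part = min(income_dollars / 200_000, 1.0)
    education_part = min(education_years / 20, 1.0)
    occupation_part = occupation_level / 5  # occupation coded 1 (low) to 5 (high)
    # Step 3: an explicit (and debatable) weighting rule produces the score.
    return round(0.4 * income_part + 0.3 * education_part + 0.3 * occupation_part, 2)

print(ses_index(income_dollars=50_000, education_years=16, occupation_level=3))  # 0.52
```

Notice how much subjectivity hides in the divisors and weights; this is exactly why validating an index against existing or new data matters.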

Scale and index development are often taught in their own course in doctoral education, so it is unreasonable to expect to develop a consistently accurate measure within the span of a week or two. Using available indices and scales is recommended for this reason.

Differences between scales and indices

Though indices and scales yield a single numerical score or value representing a concept of interest, they are different in many ways. First, indices often comprise components that are very different from each other (e.g., income, education, and occupation in the SES index) and are measured in different ways. Conversely, scales typically involve a set of similar items that use the same rating scale (such as a five-point Likert scale about customer satisfaction).

Second, indices often combine objectively measurable values such as prices or income, while scales are designed to assess subjective or judgmental constructs such as attitude, prejudice, or self-esteem. Some argue that the sophistication of the scaling methodology makes scales different from indices, while others suggest that indexing methodology can be equally sophisticated. Nevertheless, indices and scales are both essential tools in social science research.

Scales and indices seem like clean, convenient ways to measure different phenomena in social science, but just like with a lot of research, we have to be mindful of the assumptions and biases underneath. What if a scale or an index was developed using only White women as research participants? Is it going to be useful for other groups? It very well might be, but when using a scale or index on a group for whom it hasn't been tested, it will be very important to evaluate the validity and reliability of the instrument, which we address in the rest of the chapter.

Finally, it's important to note that while scales and indices are often made up of nominal- or ordinal-level items, when we combine those items into composite scores, we typically treat the composite scores as interval/ratio variables.

  • Look back at your work from the previous section: are your variables unidimensional or multidimensional?
  • Describe the specific measures you will use (actual questions and response options you will use with participants) for each variable in your research question.
  • If you are using a measure developed by another researcher but do not have all of the questions, response options, and instructions needed to implement it, put it on your to-do list to get them.


Step 3: How you will interpret your measures

The final stage of operationalization involves setting the rules for how the measure works and how the researcher should interpret the results. Sometimes, interpreting a measure can be incredibly easy. If you ask someone their age, you’ll probably interpret the results by noting the raw number (e.g., 22) someone provides and whether it is lower or higher than other people's ages. However, you could also recode that person into age categories (e.g., under 25, 20-29 years old, Generation Z, etc.). Even scales may be simple to interpret. If there is a scale of problem behaviors, one might simply add up the number of behaviors checked off, with a range from 1-5 indicating low risk of delinquent behavior, 6-10 indicating moderate risk, and so on. How you choose to interpret your measures should be guided by how they were designed, how you conceptualize your variables, the data sources you used, and your plan for analyzing your data statistically. Whatever measure you use, you need a set of rules for how to take any valid answer a respondent provides to your measure and interpret it in terms of the variable being measured.

For more complicated measures like scales, refer to the information provided by the author for how to interpret the scale. If you can’t find enough information from the scale’s creator, look at how the results of that scale are reported in the results section of research articles. For example, Beck’s Depression Inventory (BDI-II) uses 21 statements to measure depression and respondents rate their level of agreement on a scale of 0-3. The results for each question are added up, and the respondent is put into one of three categories: low levels of depression (1-16), moderate levels of depression (17-30), or severe levels of depression (31 and over).
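Using the cutoff ranges just described, a sketch of that interpretation rule might look like this (the individual item ratings are invented):

```python
def interpret_depression_score(total):
    """Map a summed BDI-II-style score onto the categories described above:
    low (1-16), moderate (17-30), severe (31 and over)."""
    if total <= 16:
        return "low"
    elif total <= 30:
        return "moderate"
    return "severe"

# 21 hypothetical item ratings, each on the 0-3 agreement scale.
item_ratings = [1, 0, 2, 3, 1, 2, 0, 1, 2, 3, 1, 0, 2, 1, 0, 1, 2, 0, 1, 2, 1]
total = sum(item_ratings)
print(total, interpret_depression_score(total))  # 26 moderate
```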

One common mistake I often see is that students will introduce another variable into their operational definition. This is incorrect. Your operational definition should mention only one variable—the variable being defined. While your study will certainly draw conclusions about the relationships between variables, that's not what operationalization is. Operationalization specifies which instrument you will use to measure your variable and how you plan to interpret the data collected using that measure.

Operationalization is probably the trickiest component of basic research methods, so please don’t get frustrated if it takes a few drafts and a lot of feedback to get to a workable definition. At the time of this writing, I am in the process of operationalizing the concept of “attitudes towards research methods.” Originally, I thought that I could gauge students’ attitudes toward research methods by looking at their end-of-semester course evaluations. As I became aware of the potential methodological issues with student course evaluations, I opted to use focus groups of students to measure their common beliefs about research. You may recall some of these opinions from Chapter 1, such as the common beliefs that research is boring, useless, and too difficult. After the focus group, I created a scale based on the opinions I gathered, and I plan to pilot test it with another group of students. After the pilot test, I expect that I will have to revise the scale again before I can implement the measure in a real social work research project. At the time I’m writing this, I’m still not completely done operationalizing this concept.

  • Operationalization involves spelling out precisely how a concept will be measured.
  • Operational definitions must include the variable, the measure, and how you plan to interpret the measure.
  • There are four different levels of measurement: nominal, ordinal, interval, and ratio (in increasing order of specificity).
  • Scales and indices are common ways to collect information and involve using multiple indicators in measurement.
  • A key difference between a scale and an index is that a scale contains multiple indicators for one concept, whereas an index combines measures of multiple concepts (components).
  • Using scales developed and refined by other researchers can improve the rigor of a quantitative study.

Use the research question that you developed in the previous chapters and find a related scale or index that researchers have used. If you have trouble finding the exact phenomenon you want to study, get as close as you can.

  • What is the level of measurement for each item on each tool? Take a second and think about why the tool's creator decided to include these levels of measurement. Identify any levels of measurement you would change and why.
  • If these tools don't exist for what you are interested in studying, why do you think that is?

12.3 Writing effective questions and questionnaires

Learning objectives.

  • Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
  • Create mutually exclusive, exhaustive, and balanced response options
  • Define fence-sitting and floating
  • Describe the considerations involved in constructing a well-designed questionnaire
  • Discuss why pilot testing is important

In the previous section, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this section, consider how your method of delivery impacts the type of questionnaire you will design. Because most student projects use paper or online surveys, this section will detail how to construct self-administered questionnaires to minimize the potential for bias and error.


Start with operationalization

The first thing you need to do to write effective survey questions is identify exactly what you wish to know. As obvious as that sounds, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables in Chapter 11. You should have a firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers' methods, found established scales and indices for your measures, or created your own questions and answer options.

STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let's make sure you have what you need from Chapter 11 to begin writing your questionnaire.

List all of the dependent and independent variables in your research question.

  • It's normal to have one dependent or independent variable. It's also normal to have more than one of either.
  • Make sure that your research question (and this list) contains all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.

For each variable in your list:

  • If you don't have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
  • If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
  • For example, an interpretation might be "there are five 7-point Likert scale questions...point values are added across all five items for each participant...and scores below 10 indicate the participant has low self-esteem"
  • Don't introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
  • Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.
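The interpretation step in an operational definition can be made concrete in code. This sketch scores the hypothetical five-item, 7-point self-esteem measure from the example above; the cutoff of 10 comes from that example, while the function name and validation logic are our own illustration:

```python
# Hypothetical scoring rule for a five-item, 7-point Likert self-esteem scale.
# The cutoff of 10 follows the example interpretation above; everything else
# here is an illustrative assumption, not an established instrument.

def score_self_esteem(responses):
    """Sum five 7-point Likert responses and interpret the total."""
    if len(responses) != 5 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("Expected five responses, each between 1 and 7")
    total = sum(responses)
    interpretation = "low self-esteem" if total < 10 else "not low self-esteem"
    return total, interpretation

print(score_self_esteem([1, 2, 1, 2, 1]))  # (7, 'low self-esteem')
```

Writing the interpretation out this way forces you to account for every possible combination of responses, which is exactly what a complete operational definition requires.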

If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it's useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report. As we will discuss in the final section of this chapter, triangulating across data sources (e.g., measuring variables using client files or student records) can avoid some of the common sources of bias in survey research.

Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants' responses.


Writing questions

So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As we’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients' feelings, asking teachers about students' feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant's perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.

If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns, and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or took the bachelor's-level exam. If you decide to include this question that speaks to a minority of participants' experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.

Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter's insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question, designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 12.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.
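The skip logic behind a filter question can be sketched as a simple branch. The question wording and numbering below are illustrative assumptions, not taken from Figure 12.1:

```python
# Sketch of filter-question skip logic. A renter sees the renter-only item;
# everyone else is routed past it. Question numbers and wording are invented
# for illustration.

def next_question(housing_status):
    """Route respondents past items that do not apply to them."""
    if housing_status == "rent":
        return "Q2: Do you carry renter's insurance?"
    # Homeowners (and others) skip the renter-only item entirely.
    return "Q3: How long have you lived at your current address?"

print(next_question("rent"))
print(next_question("own"))
```

Online survey platforms implement this same branching for you; on a paper survey, the branch becomes a written instruction ("If you own your home, skip to Question 3").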


To minimize confusion, researchers should eliminate questions that ask about things participants don't know. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording can be a source of potential confusion. Taking the question from Figure 12.1 about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This is a double negative, and it's not clear how to answer the question accurately. It is a good idea to avoid negative phrasing when possible. For example, "did you not drink alcohol during your first semester of college?" is less clear than "did you drink alcohol your first semester of college?"

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused. A similar issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience , they would find it difficult to link those words to the concepts from David Elkind’s theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.

Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question. Figure 12.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

Double-barreled question asking about more than one topic at a time.

Another thing to avoid when constructing survey questions is the problem of social desirability. We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11.)

Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college for our research project. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [45] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:

  •   Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book and provided a response scale ranging from “not at all” to “extremely well,” what would it mean if that person selected “extremely well”? Instead, ask more specific behavioral questions, such as "Will you recommend this book to others?" or "Do you plan to read other books by the same author?"
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, "what do you think the benefits of a tax cut would be?" you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? A popular question on many television game shows is “if you won a million dollars on this show, how would you plan to spend it?” Most participants have never been faced with this large an amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this "imaginary" situation, participants may not have the background information necessary to provide a meaningful response.

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify how each question measures an independent, dependent, or control variable in their study.
  • Keep questions clear and succinct.
  • Make sure respondents have relevant lived experience to provide informed answers to your questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Let's complete a first draft of your questions. In the previous exercise, you listed all of the questions and answers you will use to measure the variables in your research question. 

  • In the previous exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
  • Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.

Now, let's revise any questions that do not meet your standards!

  •  Use the BRUSO model in Table 12.2 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise, and it will take a few drafts and revisions before your questions are ready to distribute to participants.
Table 12.2 The BRUSO model of writing effective questionnaire items, with examples from a perceptions of gun ownership questionnaire

  • Brief: “Are you now or have you ever been the possessor of a firearm?” → “Have you ever possessed a firearm?”
  • Relevant: “Who did you vote for in the last election?” → Only include items that are relevant to your study.
  • Unambiguous: “Are you a gun person?” → “Do you currently own a gun?”
  • Specific: “How much have you read about the new gun control measure and sales tax?” → “How much have you read about the new sales tax on firearm purchases?”
  • Objective: “How much do you support the beneficial new gun control measure?” → “What is your view of the new gun control measure?”


Writing response options

While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions. Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended responses quantitatively using content analysis (i.e., counting how often a theme is represented in a transcript and looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. We will address qualitative data analysis in greater detail in Chapter 19.
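As a rough illustration of quantitative content analysis on open-ended responses, the sketch below counts how many answers touch each theme. The responses and keyword lists are invented for illustration; real content analysis involves systematic human coding with a codebook, not simple keyword matching:

```python
# Toy content analysis: count how many open-ended answers mention each theme.
# Responses, theme names, and keyword lists are all illustrative assumptions.

responses = [
    "I joined clubs to make friends and build my resume",
    "Mostly to make friends",
    "It looked good on my resume",
]

themes = {
    "social": ["friends", "belong"],
    "career": ["resume", "job"],
}

counts = {theme: 0 for theme in themes}
for answer in responses:
    text = answer.lower()
    for theme, keywords in themes.items():
        # Count each answer at most once per theme.
        if any(word in text for word in keywords):
            counts[theme] += 1

print(counts)  # {'social': 2, 'career': 2}
```

Once themes are counted this way, the tallies become nominal-level data you can include in statistical analysis alongside your closed-ended items.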

To keep things simple, we encourage you to use only closed-ended response options in your study. While open-ended questions are not wrong, in our classrooms they are often a sign that students have not fully thought through how to operationally define and measure their key variables. Open-ended questions cannot be operationally defined because you don't know what responses you will get. Instead, you will need to analyze the qualitative data using one of the techniques we discuss in Chapter 19 to interpret your participants' responses.

To write effective response options for closed-ended questions, there are a few guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 12.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 12.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.
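One way to see why mutually exclusive and exhaustive options matter: if numeric response options are defined as half-open intervals, every value falls into exactly one bin. The bin edges below are illustrative, not the actual options from Figure 12.1:

```python
# Sketch: numeric response bins defined as half-open intervals [low, high).
# With edges like these, bins cannot overlap (mutually exclusive) and cover
# every non-negative value (exhaustive). Edges are illustrative only.

bins = [(0, 1), (1, 3), (3, 5), (5, 7), (7, float("inf"))]

def choose_bin(drinks_per_week):
    """Every non-negative value matches exactly one half-open bin."""
    matches = [b for b in bins if b[0] <= drinks_per_week < b[1]]
    assert len(matches) == 1, "bins overlap or leave a gap"
    return matches[0]

print(choose_bin(3))  # (3, 5): the boundary value 3 falls in only one bin
```

Overlapping options like "1–3" and "3–5" fail this test: a respondent who drinks 3 times per week would fit both, and you could not tell which one they "should" choose.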

Earlier in this section, we discussed double-barreled questions. Response options can also be double barreled, and this should be avoided. Figure 12.3 is an example of a question that uses double-barreled response options. Other tips about questions are also relevant to response options, including that participants should be knowledgeable enough to select or decline a response option as well as avoiding jargon and cultural idioms.

Double-barreled response options providing more than one answer for each option

Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward "likely" rather than "unlikely." A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options even if they have an opinion. Some people will be drawn to respond “no opinion” even if they have an opinion, particularly if their true opinion is not a socially desirable one. Floaters, on the other hand, are those who choose a substantive answer to a question when, really, they don’t understand the question or don’t have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree) or to not include an option like "don't know enough to say" or "not applicable." There is no always-correct solution to either problem. But in general, including a middle option in a response set provides a more exhaustive set of response options than excluding one.

The most important check before you finalize your response options is to align them with your operational definitions. As we've discussed before, your operational definitions include your measures (questions and response options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret all response options to a question based on your operational definition of the variable it measures. If you wanted to measure the variable "social class," you might ask one question about a participant's annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research, though they include a few more questions in their definition.

To drill down a bit more, as Pew specifies in the section titled "how the income calculator works," the interval/ratio data respondents enter are interpreted using a formula that combines a participant's four responses to the questions posed by Pew and categorizes their household into one of three categories: upper, middle, or lower class. So, the operational definition includes the four questions comprising the measure and the formula, or interpretation, which converts responses into the three final categories we are familiar with: lower, middle, and upper class.
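A hypothetical version of such a formula might look like the sketch below. The thresholds and the household-size adjustment are invented for illustration and are NOT Pew's actual calculation:

```python
# Hypothetical operational definition converting interval/ratio inputs
# (income, household size) into an ordinal category (lower/middle/upper).
# The square-root size adjustment and dollar thresholds are invented.

def social_class(household_income, household_size):
    """Interpret size-adjusted income as lower, middle, or upper class."""
    adjusted = household_income / (household_size ** 0.5)  # assumed adjustment
    if adjusted < 30_000:
        return "lower"
    elif adjusted < 90_000:
        return "middle"
    return "upper"

print(social_class(60_000, 4))  # adjusted income 30,000 → "middle"
```

The point of the sketch is that the function accepts any combination of inputs and always returns exactly one category, which is what a complete operational definition must do.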

It is interesting to note that even though participants receive a final answer at an ordinal level of measurement, Pew asks four questions that use an interval or ratio level of measurement (depending on the question). This means that respondents provide numerical responses, rather than choosing categories like lower, middle, and upper class. It's perfectly normal for operational definitions to change levels of measurement, and it's also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will sort you into one of those three social class categories.

Unlike Pew's definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring "income" instead of "social class," you could simply operationalize the term by asking people to list their total household income before taxes are taken out. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or more complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just like we want to avoid an everything-but-the-kitchen-sink approach to questions on our questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.

One note of caution on interpretation (sorry for repeating this). We want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say "a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity." That last clause about food insecurity may well be true, but it's not a part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan. We will discuss how to create your data analysis plan beginning in Chapter 14. For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should provide the level of measurement required by the specific statistical tests in your data analysis plan. Once you finalize that plan, return to your questionnaire to check that each item's level of measurement matches the statistical test you've chosen.

In summary, to write effective response options researchers should do the following:

  • Avoid wording that is likely to confuse respondents—including double negatives, culturally specific terms or jargon, and double-barreled response options.
  • Ensure response options are relevant to participants' knowledge and experience so they can make an informed and accurate choice.
  • Present mutually exclusive and exhaustive response options.
  • Consider fence-sitters and floaters, and the use of neutral or "not applicable" response options.
  • Define how response options are interpreted as part of an operational definition of a variable.
  • Check that the level of measurement matches your operational definitions and the statistical tests in your data analysis plan (once you develop one).

Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.

  • Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
  • Revise your questions and response options until you have a complete first draft.
  • Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.
  • Look ahead to Chapter 14 and consider how each item on your questionnaire will inform your data analysis plan.

From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won't fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.


Designing questionnaires

Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now, you’ll also need to think about how to present your written questions and response options to survey respondents. It's time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the route of administration for your survey. What we cover in this section will apply equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the survey software you will use.

Informed consent & instructions

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000). [46] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent . Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6 . Written consent forms are not always used in survey research; when the research poses minimal risk, the IRB often accepts completion of the survey instrument as evidence of consent to participate. For that reason, it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

Organizing items to be easy and intuitive to follow

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.

Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents' income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give respondents a bit of whiplash and may make them less likely to complete your questionnaire.

Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 12.4.

[Figure 12.4: A sample matrix question, with response options ranging from agree to disagree, asking for opinions about a class]

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [47] In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive topic, such as child sexual abuse or criminal convictions, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If it is not feasible to do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.

Questionnaire length

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their limited free time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during down-time between classes when there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times that your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants would reasonably have to complete a survey presented to them during this time. The more you know about your population (such as what weeks have less work and more free time), the better you can target questionnaire length.

The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie, 2010), [48] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [49] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.

When designing a questionnaire, a researcher should consider:

  • Weighing the strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
  • Grouping together items that ask about the same thing.
  • Placing questions about sensitive topics near the end of the questionnaire, so as not to scare respondents off.
  • Placing questions that engage the respondent at the beginning of the questionnaire, so as not to bore them.
  • Keeping the questionnaire to a length of time you can reasonably ask of your participants.
  • Dedicating time to visual design and ensuring the questionnaire looks professional.

Type out a final draft of your questionnaire in a word processor or online survey tool.

  • Evaluate your questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.


Pilot testing and revising questionnaires

A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good sense of the completion time to quote respondents when you administer your survey, and of whether you have some wiggle room to add items or need to cut a few.

Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of the visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment to achieve a polished layout.

Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties together each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your professor or the IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done. Revisions to IRB protocols are common, and it takes less time for the IRB to review a few changes to questions and answers than to review the entire study, so submit your proposal as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their okay.

  • A questionnaire is comprised of self-report measures of variables in a research study.
  • Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
  • The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
  • Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.

It's a myth that researchers work alone! Get together with a few of your fellow students and swap questionnaires for pilot testing.

  • Use the criteria in each section above (questions, response options, questionnaires) and provide your peers with the strengths and weaknesses of their questionnaires.
  • See if you can guess their research question and hypothesis based on the questionnaire alone.

11.3 Measurement quality

  • Define and describe the types of validity and reliability
  • Assess for systematic error

The previous chapter provided insight into measuring concepts in social work research. We discussed the importance of identifying concepts and their corresponding indicators as a way to help us operationalize them. In essence, we now understand that when we think about our measurement process, we must be intentional and thoughtful in the choices that we make. This section is all about how to judge the quality of the measures you've chosen for the key variables in your research question.

Reliability

First, let’s say we’ve decided to measure alcoholism by asking people to respond to the following question: Have you ever had a problem with alcohol? If we measure alcoholism this way, then it is likely that anyone who identifies as an alcoholic would respond “yes.” This may seem like a good way to identify our group of interest, but think about how you and your peer group may respond to this question. Would participants respond differently after a wild night out, compared to any other night? Could an infrequent drinker’s current headache from last night’s glass of wine influence how they answer the question this morning? How would that same person respond to the question before consuming the wine? In each case, the same person might respond differently to the same question at different points, so it is possible that our measure of alcoholism has a reliability problem. Reliability in measurement is about consistency.

One common problem of reliability with social scientific measures is memory. If we ask research participants to recall some aspect of their own past behavior, we should try to make the recollection process as simple and straightforward for them as possible. Sticking with the topic of alcohol intake, if we ask respondents how much wine, beer, and liquor they’ve consumed each day over the course of the past 3 months, how likely are we to get accurate responses? Unless a person keeps a journal documenting their intake, there will very likely be some inaccuracies in their responses. On the other hand, we might get more accurate responses if we ask a participant how many drinks of any kind they have consumed in the past week.

Reliability can be an issue even when we’re not reliant on others to accurately report their behaviors. Perhaps a researcher is interested in observing how alcohol intake influences interactions in public locations. They may decide to conduct observations at a local pub by noting how many drinks patrons consume and how their behavior changes as their intake changes. What if the researcher has to use the restroom, and the patron next to them takes three shots of tequila during the brief period the researcher is away from their seat? The reliability of this researcher’s measure of alcohol intake depends on their ability to physically observe every instance of patrons consuming drinks. If they are unlikely to be able to observe every such instance, then perhaps their mechanism for measuring this concept is not reliable.

The following subsections describe the types of reliability that are important for you to know about, but keep in mind that you may see other approaches to judging reliability mentioned in the empirical literature.

Test-retest reliability

When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.

Assessing test-retest reliability requires using the measure on a group of people at one time and then using it again on the same group of people at a later time. Unlike an experiment, you aren't giving participants an intervention; you are trying to establish a reliable baseline of the variable you are measuring. Once you have these two measurements, you then look at the correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing the correlation coefficient. Figure 11.2 shows the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale, administered two times, a week apart. The correlation coefficient for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.

[Figure 11.2: Scatterplot of university students’ scores on the Rosenberg Self-Esteem Scale from two administrations one week apart, showing a test-retest correlation of +.95]

Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for concern.
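The arithmetic behind a test-retest correlation is straightforward to verify by hand. The sketch below, using invented scores for illustration, computes a Pearson correlation coefficient between two administrations of the same measure:

```python
import math

def pearson_r(scores_time1, scores_time2):
    """Pearson correlation between two sets of scores from the same people."""
    n = len(scores_time1)
    mean1 = sum(scores_time1) / n
    mean2 = sum(scores_time2) / n
    # Numerator: how the two sets of scores vary together
    cov = sum((a - mean1) * (b - mean2)
              for a, b in zip(scores_time1, scores_time2))
    # Denominator: product of the two sets' spreads
    sd1 = math.sqrt(sum((a - mean1) ** 2 for a in scores_time1))
    sd2 = math.sqrt(sum((b - mean2) ** 2 for b in scores_time2))
    return cov / (sd1 * sd2)

# Hypothetical self-esteem scores for five respondents, a week apart
week1 = [1, 2, 3, 4, 5]
week2 = [2, 1, 4, 3, 5]
r = pearson_r(week1, week2)  # 0.80, just at the conventional threshold
```

In practice researchers use statistical software for this, but the calculation itself is no more than the formula above applied to the two columns of scores.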

Internal consistency

Another kind of reliability is internal consistency , which is the consistency of people’s responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people’s responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioral and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants’ bets were consistently high or low across trials. A specific statistical test known as Cronbach’s Alpha provides a way to measure how well each question of a scale is related to the others.
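As a rough sketch of how Cronbach's alpha behaves, the example below (with invented ratings) compares the variance of each individual item to the variance of respondents' total scores; when items move together, total-score variance is large relative to the item variances and alpha approaches 1:

```python
def variance(values):
    """Population variance; alpha is unchanged if sample variance is used throughout."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def cronbach_alpha(responses):
    """responses: one row per participant, one column per scale item."""
    k = len(responses[0])              # number of items on the scale
    items = list(zip(*responses))      # transpose to per-item score lists
    item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Four participants answering a hypothetical 3-item scale (1-5 ratings)
data = [[1, 1, 2],
        [2, 2, 2],
        [3, 3, 3],
        [4, 4, 5]]
alpha = cronbach_alpha(data)  # about 0.98: responses are highly consistent
```

Here the three items rise and fall together across participants, so alpha is very high; responses scattered independently across items would drive it toward zero.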

Interrater reliability

Many behavioral measures involve significant judgment on the part of an observer or a rater. Interrater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested in measuring university students’ social skills, you could make video recordings of them as they interacted with another student whom they are meeting for the first time. Then you could have two or more observers watch the videos and rate each student’s level of social skills. To the extent that each participant does, in fact, have some level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly correlated with each other.
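When ratings are categorical rather than numeric (say, classifying each student's social skills as "high" or "low"), a common agreement statistic is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance alone. This is a sketch using invented ratings:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(rater1)
    # Observed agreement: proportion of items given identical labels
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal label proportions
    counts1, counts2 = Counter(rater1), Counter(rater2)
    chance = sum((counts1[label] / n) * (counts2[label] / n)
                 for label in counts1.keys() | counts2.keys())
    return (observed - chance) / (1 - chance)

# Two observers rating ten recorded interactions (hypothetical data)
rater_a = ["hi", "hi", "hi", "lo", "lo", "hi", "lo", "lo", "hi", "lo"]
rater_b = ["hi", "hi", "lo", "lo", "lo", "hi", "lo", "hi", "hi", "lo"]
kappa = cohens_kappa(rater_a, rater_b)  # 0.60: agreement well beyond chance
```

The two raters agree on 8 of 10 videos (80%), but since each labels half the videos "high," they would agree 50% of the time by chance; kappa reports only the agreement beyond that baseline.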


Validity

Validity, another key element of assessing measurement quality, is the extent to which the scores from a measure represent the variable they are intended to measure. But how do researchers make this judgment? We have already considered one factor that they take into account—reliability. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person’s index finger is a centimeter longer than another’s would indicate nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging the validity of a measure.

Face validity

Face validity is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. One reason is that it is based on people’s intuitions about human behavior, which are frequently wrong. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them—where many of the statements do not have any obvious relationship to the construct that they measure. For example, the items “I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches those of individuals who tend to suppress their aggression.

Content validity

Content validity is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then their measure of test anxiety should include items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that they think positive thoughts about exercising, feel good about exercising, and actually exercise. So to have good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion validity

Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. For example, people’s scores on a new measure of test anxiety should be negatively correlated with their performance on an important school exam. If it were found that people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them. For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of broken bones they have had over the years. When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity ; however, when the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity (because scores on the measure have “predicted” a future outcome).

Discriminant validity

Discriminant validity, on the other hand, is the extent to which scores on a measure are not  correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead.

Increasing the reliability and validity of measures

We have reviewed the types of errors and how to evaluate our measures based on reliability and validity considerations. However, what can we do while selecting or creating our tool so that we minimize the potential for errors? Many of our options were covered in our discussion about reliability and validity. Nevertheless, the following table provides a quick summary of things that you should do when creating or selecting a measurement tool. While not all of these will be feasible in your project, it is important to implement the ones that are practical in your research context.

Make sure that you engage in a rigorous literature review so that you understand the concept that you are studying. This means understanding the different ways that your concept may manifest itself. This review should include a search for existing instruments. [50]

  • Do you understand all the dimensions of your concept? Do you have a good understanding of the content dimensions of your concept(s)?
  • What instruments exist? How many items are on the existing instruments? Are these instruments appropriate for your population?
  • Are these instruments standardized? Note: If an instrument is standardized, that means it has been rigorously studied and tested.

Consult content experts to review your instrument. This is a good way to check the face validity of your items. Additionally, content experts can also help you understand the content validity. [51]

  • Do you have access to a reasonable number of content experts? If not, how can you locate them?
  • Did you provide a list of critical questions for your content reviewers to use in the reviewing process?

Pilot test your instrument on a sufficient number of people and get detailed feedback. [52] Ask your group to provide feedback on the wording and clarity of items. Keep detailed notes and make adjustments BEFORE you administer your final tool.

  • How many people will you use in your pilot testing?
  • How will you set up your pilot testing so that it mimics the actual process of administering your tool?
  • How will you receive feedback from your pilot testing group? Have you provided a list of questions for your group to think about?

Provide training for anyone collecting data for your project. [53] You should provide those helping you with a written research protocol that explains all of the steps of the project. You should also problem solve and answer any questions that those helping you may have. This will increase the chances that your tool will be administered in a consistent manner.

  • How will you conduct your orientation/training? How long will it be? What modality?
  • How will you select those who will administer your tool? What qualifications do they need?

When thinking of items, use a higher level of measurement, if possible. [54] This will provide more information and you can always downgrade to a lower level of measurement later.

  • Have you examined your items and the levels of measurement?
  • Have you thought about whether you need to modify the type of data you are collecting? Specifically, are you asking for information that is too specific (at a higher level of measurement), which may reduce participants' willingness to participate?

Use multiple indicators for a variable. [55] Think about the number of items that you will include in your tool.

  • Do you have enough items? Enough indicators? The correct indicators?

Conduct an item-by-item assessment of multiple-item measures. [56] When you do this assessment, think about each word and how it changes the meaning of your item.

  • Are there items that are redundant? Do you need to modify, delete, or add items?


Types of error

As you can see, measures never perfectly describe what exists in the real world. Good measures demonstrate validity and reliability but will always have some degree of error. Systematic error (also called bias) causes our measures to consistently output incorrect data in one direction or another, usually due to an identifiable process. Imagine you created a measure of height, but you didn’t put an option for anyone over six feet tall. If you gave that measure to your local college or university, some of the taller students might not be measured accurately. In fact, you would be under the mistaken impression that the tallest person at your school was six feet tall, when in actuality there are likely people taller than six feet at your school. This error seems innocent, but if you were using that measure to help you build a new building, those people might hit their heads!

A less innocent form of error arises when researchers word questions in a way that might cause participants to think one answer choice is preferable to another. For example, if I were to ask you “Do you think global warming is caused by human activity?” you would probably feel comfortable answering honestly. But what if I asked you “Do you agree with 99% of scientists that global warming is caused by human activity?” Would you feel comfortable saying no, if that’s what you honestly felt? I doubt it. That is an example of a  leading question , a question with wording that influences how a participant responds. We’ll discuss leading questions and other problems in question wording in greater detail in Chapter 12 .

In addition to error created by the researcher, your participants can cause error in measurement. Some people will respond without fully understanding a question, particularly if the question is worded in a confusing way. Let’s consider another potential source of error. If we asked people if they always washed their hands after using the bathroom, would we expect them to be perfectly honest? Polling people about whether they wash their hands after using the bathroom might only elicit what people would like others to think they do, rather than what they actually do. This is an example of social desirability bias, in which participants in a research study want to present themselves in a positive, socially desirable way to the researcher. People in your study will want to seem tolerant, open-minded, and intelligent, but their true feelings may be closed-minded, simple, and biased. Participants may lie in this situation. This occurs often in political polling, which may show greater support for a candidate from a minority race, gender, or political party than actually exists in the electorate.

A related form of bias is called acquiescence bias, also known as “yea-saying.” It occurs when people say yes to whatever the researcher asks, even when doing so contradicts previous answers. For example, a person might say yes to both “I am a confident leader in group discussions” and “I feel anxious interacting in group discussions.” Those two responses are unlikely to both be true for the same person. Why would someone do this? Similar to social desirability bias, people may want to be agreeable and nice to the researcher asking them questions, or they might ignore contradictory feelings when responding to each question. You could interpret this as someone saying “yeah, I guess.” Respondents may also have cultural reasons for acquiescing, such as trying to “save face” for themselves or the person asking the questions. Regardless of the reason, the results of your measure don’t match what the person truly feels.

So far, we have discussed sources of error that come from choices made by respondents or researchers. Systematic errors will result in responses that are incorrect in one direction or another. For example, social desirability bias usually means that the number of people who say they will vote for a third party in an election is greater than the number of people who actually vote for that party. Systematic errors such as these can be reduced, but random error can never be eliminated. Unlike systematic error, which biases responses consistently in one direction or another, random error is unpredictable and does not push scores consistently higher or lower on a given measure. Instead, random error is more like statistical noise, which will likely average out across participants.

Random error is present in any measurement. If you’ve ever stepped on a bathroom scale twice and gotten two slightly different results, maybe a difference of a tenth of a pound, then you’ve experienced random error. Maybe you were standing slightly differently or had a fraction of your foot off of the scale the first time. If you were to take enough measures of your weight on the same scale, you’d be able to figure out your true weight. In social science, if you gave someone a scale measuring depression on a day after they lost their job, they would likely score differently than if they had just gotten a promotion and a raise. Even if the person were clinically depressed, our measure is subject to influence by the random occurrences of life. Thus, social scientists speak with humility about our measures. We are reasonably confident that what we found is true, but we must always acknowledge that our measures are only an approximation of reality.
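A small simulation can make the distinction between the two kinds of error concrete. In this sketch, the true weight, the noise range, and the two-pound bias are all invented numbers for illustration: averaging many readings washes out random noise, but a miscalibrated scale stays wrong no matter how many times you step on it.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_WEIGHT = 150.0  # the quantity we are trying to measure (hypothetical)

# Random error: each reading is off by unpredictable noise in either direction.
random_readings = [TRUE_WEIGHT + random.uniform(-0.5, 0.5) for _ in range(10_000)]

# Systematic error: the scale is miscalibrated and always reads 2 pounds heavy,
# on top of the same random noise.
biased_readings = [TRUE_WEIGHT + 2.0 + random.uniform(-0.5, 0.5) for _ in range(10_000)]

mean_random = sum(random_readings) / len(random_readings)
mean_biased = sum(biased_readings) / len(biased_readings)

print(round(mean_random, 1))  # close to 150.0: random error averages out
print(round(mean_biased, 1))  # close to 152.0: the bias does not average out
```

This is why repeated measurement helps with random error but is useless against systematic error; the latter has to be found and corrected at the source.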

Humility is important in scientific measurement, as errors can have real consequences. At the time I'm writing this, my wife and I are expecting our first child. Like most people, we used a pregnancy test from the pharmacy. If the test said my wife was pregnant when she was not pregnant, that would be a false positive. On the other hand, if the test indicated that she was not pregnant when she was in fact pregnant, that would be a false negative. Even if the test is 99% accurate, that means that one in a hundred women will get an erroneous result when they use a home pregnancy test. For us, a false positive would have been initially exciting, then devastating when we found out we were not having a child. A false negative would have been disappointing at first and then quite shocking when we found out we were indeed having a child. While both false positives and false negatives are not very likely for home pregnancy tests (when taken correctly), measurement error can have consequences for the people being measured.
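The arithmetic behind that "one in a hundred" figure is worth making explicit. In this hypothetical sketch (the accuracy figure and the number of tests are assumptions for illustration, not data about any real product), even a highly accurate test produces a substantial absolute number of wrong results once enough people take it:

```python
accuracy = 0.99   # hypothetical accuracy of a home test (illustration only)
n_tests = 10_000  # hypothetical number of people taking the test

# With 99% accuracy, 1% of results are wrong. Whether a given wrong result
# is a false positive or a false negative depends on the person's true status.
expected_errors = n_tests * (1 - accuracy)
print(round(expected_errors))  # 100 erroneous results out of 10,000 tests
```

The same logic applies to social science instruments: a measure that is "usually right" will still mislabel real people when administered at scale, which is one more reason to interpret scores with humility.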

  • Reliability is a matter of consistency.
  • Validity is a matter of accuracy.
  • There are many types of validity and reliability.
  • Systematic error may arise from the researcher, participant, or measurement instrument.
  • Systematic error biases results in a particular direction, whereas random error can be in any direction.
  • All measures are prone to error and should be interpreted with humility.

Use the measurement tools you located in the previous exercise. Evaluate the reliability and validity of these tools. Hint: You will need to go into the literature to "research" these tools.

  • Provide a clear statement regarding the reliability and validity of these tools. What strengths did you notice? What were the limitations?
  • Think about your target population . Are there changes that need to be made in order for one of these tools to be appropriate for your population?
  • If you decide to create your own tool, how will you assess its validity and reliability?

  • Choosing a research topic (10 minute read)
  • Your research proposal (14 minute read)
  • Evaluating online resources (11 minute read)

Content warning: Examples in this chapter discuss substance use disorders, mental health disorders and therapies, obesity, poverty, gun violence, gang violence, school discipline, racism and hate groups, domestic violence, trauma and triggers, incarceration, child neglect and abuse, bullying, self-harm and suicide, racial discrimination in housing, burnout in helping professions, and sex trafficking of indigenous women.

2.1 Choosing a research topic

  • Brainstorm topics you may want to investigate as part of a research project
  • Explore your feelings and existing knowledge about the topic
  • Develop a working question

Research methods is a unique class in that you get to decide what you want to learn about. Perhaps you came to your MSW program with a specific issue you were passionate about. In my MSW program, I wanted to learn about the best interventions to use with people who have substance use disorders. This was in line with my future career plans, which included working in a clinical setting with clients with co-occurring mental health and substance use issues. I suggest you start by thinking about your future practice goals and create a research project that addresses a topic that represents an area of social work you are passionate about.

For those of you without a specific direction, don't worry. Many people enter their MSW program without an exact topic in mind they want to study. Throughout the program, you will be exposed to different populations, theories, practice interventions, and policies that will spark your interest. Think back to papers you enjoyed researching and writing in other classes. You may want to continue studying the same topic. Research methods will enable you to gain a deeper, more nuanced understanding of a topic or issue. If you haven't found an interesting topic yet, here are some other suggestions for seeking inspiration for a research project:

  • If you already have practice experience in social work through employment, an internship, or volunteer work, think about practice issues you noticed in the placement. Do you have any idea of how to better address client needs? Do you need to learn more about existing interventions or the programs that fund your agency? Use this class as an opportunity to engage with your previous field experience in greater detail. Begin with “what” and “why” questions and then expand on those. For example, what are the most effective methods of treating severe depression among a specific population? Or why are people receiving food assistance more likely to be obese? 
  • You could also ask a professor at your school about possible topics. Read departmental information on faculty research interests, which may surprise you. Most departmental websites post the curriculum vitae (CV) of faculty, which lists their publications, credentials, and interests. For those of you interested in doctoral study, this process is particularly important. Students often pick schools based on professors they want to learn from or research initiatives they want to join.

Once you have a potential idea, start reading! A simple web search should bring you some basic information about your topic. News articles can reveal new or controversial information. You may also want to identify and browse academic journals related to your research interests. Faculty and librarians can help you identify relevant journals in your field and specific areas of interest. We'll also review more detailed strategies for searching the literature in Chapter 3. As you read, look for what’s missing. These may be “gaps in the literature” that you might explore in your own study.

It's a good idea to keep it simple when you're starting your project. Choose a topic that can be easily defined and explored. Your study cannot focus on everything that is important about your topic. A study on gun violence might address only one system, for example schools, while only briefly mentioning other systems that impact gun violence. That doesn't mean it's a bad study! Every study presents only a small picture of a larger, more complex and multifaceted issue. The sooner you can arrive at something specific and clear that you want to study, the better off your project will be.

Writing a working question

There are lots of great research topics. Perhaps your topic is a client population—for example, youth who identify as LGBTQ+ or visitors to a local health clinic. In other cases, your topic may be a social problem, such as gang violence, or a social policy or program, such as zero-tolerance policies in schools. Alternately, maybe there are interventions such as dialectical behavioral therapy or applied behavior analysis that interest you.

Whatever your topic idea, begin to think about it in terms of a question. What do you really want to know about the topic? As a warm-up exercise, try dropping a possible topic idea into one of the blank spaces below. The questions may help bring your subject into sharper focus and bring you closer towards developing your topic.

  • What does ___ mean?
  • What are the causes of ___?
  • What are the consequences of ___?
  • What are the component parts of ___?
  • How does ___ impact ___?
  • What is it like to experience ___?
  • What is the relationship between _____ and the outcome of ____?
  • What case can be made for or against ___?
  • What are the risk/protective factors for ___?
  • How do people think about ___?

Take a minute right now and write down a question you want to answer. Even if it doesn’t seem perfect, it is important to start somewhere. Make sure your research topic is relevant to social work. You’d be surprised how much of the world that encompasses. It’s not just research on mental health treatment or child welfare services. Social workers can study things like the pollution of irrigation systems and entrepreneurship in women, among other topics. The only requirement is your research must inform action to fight social problems faced by target populations.

Because research is an iterative process, one in which you revise your work over and over, your question will continue to evolve. As you progress through this textbook, you’ll learn how to refine your question and include the necessary components for proper qualitative and quantitative research questions. Your question will also likely change as you engage with the literature on your topic. You will learn new and important concepts that may shift your focus or clarify your original ideas. Trust that a strong question will emerge from this process. A good researcher must be comfortable with altering their question as a result of scientific inquiry.

Very often, our students will email us in the first few weeks of class and ask if they have a good research topic. We love student emails! But just to reassure you if you're about to send a panicked email to your professor, as long as you are interested in dedicating a semester or two learning about your topic, it will make a good research topic. That's why we would advise you to focus on how much you like this topic, so that three months from now you are still motivated to complete your project. Your project should have meaning to you.

How do you feel about your topic?

Now that you have an idea of what you might want to study, it's time to consider what you think and feel about that topic. Your motivation for choosing a topic does not have to be objective. Because social work is a value-based profession, scholars often find themselves motivated to conduct research that furthers social justice or fights oppression. Just because you think a policy is wrong or a group is being marginalized, for example, does not mean that your research will be biased. It means you must understand what you feel, why you feel that way, and what would cause you to feel differently about your topic.

Start by asking yourself how you feel about your topic. Sometimes the best topics to research are those about which you feel strongly. What better way to stay engaged with your research project than to study something you are passionate about? However, you must be able to accept that people may have a different perspective, and you must represent their viewpoints fairly in the research report you produce. If you feel prepared to accept all findings, even those that may be unflattering or distinct from your personal perspective, then perhaps you should begin your research project by intentionally studying a topic about which you have strong feelings.

Kathleen Blee (2002) [57] has taken this route in her research. Blee studies groups whose racist ideologies may be different than her own. You can listen to her lecture Women in Organized Racism that details some of her findings. Her scientific research is so impactful because she was willing to report her findings and observations honestly, even those contrary to her beliefs and feelings. If you believe that you may have personal difficulty sharing findings with which you disagree, then you may want to study a different topic. Knowing your own hot-button issues is an important part of self-knowledge and reflection in social work, and there is nothing wrong with avoiding topics that are likely to cause you unnecessary stress.

Social workers often use personal experience as a starting point to identify topics of interest. As we’ve discussed here, personal experience can be a powerful motivator to learn more about a topic. However, social work researchers should be mindful of their own mental health during the research process. A social worker who has experienced a mental health crisis or traumatic event should approach researching related topics cautiously. There is no need to trigger yourself or jeopardize your mental health for a research project. For example, a student who has just experienced domestic violence may want to know about Eye Movement Desensitization and Reprocessing (EMDR) therapy. While the student might gain some knowledge about potential treatments for domestic violence, they will likely have to read through many stories and reports about domestic violence as part of the research process. Unless the student’s trauma has been processed in therapy, conducting a research project on this topic may negatively impact the student’s mental health.

What do you think about your topic?

Once you figure out what you feel about your topic, consider what you think about it. There are many ways we know what we know. Perhaps your mother told you something is so. Perhaps it came to you in a dream. Perhaps you took a class last semester and learned something about your topic there. Or you may have read something about your topic in your local newspaper. We discussed the strengths and weaknesses associated with some of these different sources of knowledge in Chapter 1 , and we’ll talk about other scientific sources of knowledge in Chapter 3 and Chapter 4 . For now, take some time to think of everything you know about your topic. Thinking about what you already know will help you identify any biases you may have, and it will help as you begin to frame a question about your topic.

You might consider creating a concept map, just to get your thoughts and ideas on paper and begin organizing them. Consider this video from the University of Guelph Library (CC-BY-NC-SA 4.0).

  • You should pick a topic for your research proposal that you are interested in, since you will be working with it for several months.
  • Investigate your own feelings and thoughts about a topic, and make sure you can be objective and fair in your investigation.
  • Research projects are guided by a working question that develops and changes as you learn more about your topic.

Just as a reminder, exercises are designed to help you create your individual research proposal. We designed these activities to break down your proposal into small but manageable chunks. We suggest completing each exercise so you can apply what you are learning to your individual research project, as the exercises in each section and each chapter build on one another.

If you haven't done so already, you can create a document in a word processor on your computer or in a written notebook with your answers to each exercise.

Brainstorm at least 4-5 topics of interest to you and pick the one you think is the most promising for a research project.

  • For your chosen topic, outline what you currently know about the topic and your feelings towards the topic. Make sure you are able to be objective and fair in your research.
  • Formulate at least one working question to guide your inquiry. It is common for topics to change and develop over the first few weeks of a project, but think of your working question as a place to start. Use the 10 examples we provided in this chapter if you need some help getting started.

2.2 Your research proposal

  • Describe the stages of a research project
  • Define your target population and describe how your study will impact that population
  • Identify the aim of your study
  • Classify your project as descriptive, exploratory, explanatory, or evaluative

Most research methods courses are designed to help students propose a research project. But what is a research project? Figure 2.1 indicates the steps of the research project. Right now, we are in the top right corner, using your informal observations from your practice experience and lived experience to form a working draft of your research question. In the next three chapters, you'll learn how to find and evaluate scholarly literature on your topic. After thoroughly evaluating the literature, you'll conceptualize an empirical study based on a research question you create. In many courses, students will have to carry out these designs and make a contribution to the research literature in their topic area.

Figure 2.1: The research process is circular, looping between the research literature and your research question, then moving to an empirical study, data analysis, and conclusions.

This book uses a project-based approach because it mimics the real-world process of inquiry on a social work topic. In an introductory research methods course, students often have to create a research proposal, followed by a more advanced research class in which they conduct quantitative and qualitative data analysis. The research proposal is a document produced by researchers that reviews the literature relevant to their topic and describes the methods they will use to conduct their study. Part 1 of this textbook is designed to help you with your literature review. Part 2 is designed to help you figure out which methods you will use in your study.

Check with your professor on whether you are required to carry out the project you propose to do in your research proposal. Some of you may only need to propose a hypothetical project. If you are planning to complete your project, you will have to pay more attention to the practical and ethical considerations in this chapter.

A research proposal is focused on a question. Right now, this is your working question from Section 2.1. If you haven't created one yet, this is a good time to pause and complete the exercises from section 2.1. [58] It is likely you will revise your working question many times as you read more literature about your topic. Consider yourself in the cycle between (re)creating your research question and reviewing the research literature for Part 1 of the textbook.

Student research proposals

Student research projects are a big undertaking, but they are well within your capability as a graduate student. Let's start with the research proposal. Think about the research proposal as a communication device. You are telling the reader (your professor, usually) everything they need to know in order to understand your topic and the research study you plan to do. You are also demonstrating to the reader that you are competent and informed enough to conduct the study.

You can think of a research proposal like creating a recipe. If you are a chef trying to cook a new dish from scratch, you would probably start by looking at other recipes. You might cook a few of them and come up with ideas about how to create your own version of the dish. Writing your recipe is a process of trial and error, and you will likely revise your proposal many times over the course of the semester. This textbook and its exercises are designed to get you working on your project little by little, so that by the time you turn in your final research proposal, you'll be confident it represents the best way to answer your question. Of course, as with any recipe, you never quite know how the dish will turn out. What matters for scientists in the end isn't whether your data proves your ideas right or wrong, or whether your data collection goes off perfectly or doesn't work as planned. Instead, what matters is that you report your results (warts and all) as honestly and openly as possible to inform others engaged in scholarly inquiry.

Is writing a research proposal a useful skill for a social worker? On one hand, you probably won't be writing research proposals for a living. But the same structure of a research proposal (literature review + methods) is used in grant applications. Writing grant proposals is often a part of practice, particularly in agency-based and policy practice. Instead of finding a gap in the literature to study, practitioners write grant proposals describing a program they will use to address an issue in their community, as well as the research methods they will use to evaluate whether it worked. Similarly, a policy advocate or public administrator might sketch out a proposed program and its evaluation as part of a proposal. Proposal writing may differ somewhat in practice, but the general idea is the same.

Focusing your project

Based on your work in Section 2.1, you should have a working question—a place to start. Think about what you hope to accomplish with your study. This is the aim of your research project. Often, social work researchers begin with a target population in mind.  As you will recall from section 1.4, social work research is research for action . Social workers engage in research to help people. Think about your working question. Why do you want to answer it? What impact would answering your question have?

In my MSW program, I began my research by looking at ways to intervene with people who have substance use disorders. My foundation year placement was in an inpatient drug treatment facility that used 12-step facilitation as its primary treatment modality. I observed that this approach differed significantly from others I had been exposed to, especially the idea of powerlessness over drugs and drug use. My working question started as "what are the alternatives to 12-step treatment for people with substance use issues and are they more effective?"  The aim of my project was to determine whether different treatment approaches might be more effective, and I suspected that self-determination and powerlessness were important.

It's important to note that my working question contained a target population—people with substance use disorders. A target population is the group of people that will benefit most from your research. I envisioned I would help the field of social work think through how to better meet clients where they are, specific to the problem of substance use. I was studying to be a clinical social worker, so naturally, I formulated a micro-level question. Yet, the question also has implications for meso- and macro-level practice. If other treatment methods are more effective than 12-step facilitation, then we should direct more public money towards providing more effective therapies for people who use substances. We may also need to train substance use professionals in new treatment methodologies.

Think about your working question.

  • Is it more oriented towards micro-, meso-, or macro-level practice?
  • What implications would answering your question have at each level of the ecosystem?

Asking yourself whether your project is more micro, meso, or macro is a good check to see if your project is well-focused. A project that seems like it could be all of those might have too many components or try to study too much. Consider identifying one ecosystemic level your project will focus on, and you can interpret and contextualize your findings at the other levels of analysis.

Exploration, description, and explanation

Social science is a big place. Looking at the various empirical studies in the literature, there is a lot of diversity—from focus groups with clients and families to multivariate statistical analysis of large population surveys conducted online. Ultimately, all of social science can be described as one of three basic types of research studies. As you develop your research question, consider which of the following types of research studies fits best with what you want to learn about your topic. In subsequent chapters, we will use these broad frameworks to help craft your study's final research question and choose quantitative and qualitative research methods to answer it.

Exploratory research

Researchers conducting  exploratory research are typically at the early stages of examining their topics. Exploratory research projects are carried out to test the feasibility of conducting a more extensive study and to figure out the “lay of the land” with respect to the particular topic. Usually, very little prior research has been conducted on this topic. For this reason, a researcher may wish to do some exploratory work to learn what method to use in collecting data, how best to approach research subjects, or even what sorts of questions are reasonable to ask.

Often, student projects begin as exploratory research. Because students don't know as much about the topic area yet, their working questions can be general and vague. That's a great place to start! An exploratory question is great for delving into the literature and learning more about your topic. For example, the question “what are common social work interventions for parents who neglect their children?” is a good place to start when looking at articles and textbooks to understand what interventions are commonly used with this population. However, it is important for a student research project to progress beyond exploration unless the topic truly has very little existing research. 

In my classes, I often read papers where students say there is not a lot of literature on a topic, but a quick search of library databases shows a deep body of literature on the topic. The skills you develop in Chapter 3 and Chapter 4 should assist you with finding relevant research, and working with a librarian can definitely help with finding information for your research project. That said, there are a few students each year who pick a topic for which there is in fact little existing research. Perhaps, if you were looking at child neglect interventions for parents who identify as transgender or parents who are refugees from the Syrian civil war, less would be known about child neglect for those specific populations. In that case, an exploratory design would make sense as there is little, if any, literature about your specific topic.

Descriptive research

Another purpose of a research project is to describe or define a particular phenomenon. This is called descriptive research . For example, researchers at the Princeton Review conduct descriptive research each year when they set out to provide students and their parents with information about colleges and universities around the United States. They describe the social life at a school, the cost of admission, and student-to-faculty ratios (to name just a few of the categories reported). If our topic were child neglect, we might seek to know the number of people arrested for child neglect in our community and whether they are more likely to have other problems, such as poverty, mental health issues, or substance use.

Social workers often rely on descriptive research to tell them about their service area. Keeping track of the number of parents receiving child neglect interventions, their demographic makeup (e.g., race, sex, age), and length of time in care are excellent examples of descriptive research. On a more macro-level, the Centers for Disease Control provides a remarkable amount of descriptive research on mental and physical health conditions. In fact, descriptive research has many useful applications, and you probably rely on such findings without realizing you are reading descriptive research.

Explanatory research

Lastly, social work researchers often aim to explain why particular phenomena operate in the way that they do. Research that answers “why” questions is referred to as explanatory research. Asking "why" means the researcher is trying to identify cause-and-effect relationships in their topic. For example, explanatory research may try to identify risk and protective factors for parents who neglect their children. Explanatory research may also attempt to understand how religious affiliation impacts views on immigration. All explanatory research tries to study cause-and-effect relationships between two or more variables. A specific offshoot of explanatory research that comes up often is evaluation research, which investigates the impact of an intervention, program, or policy on a group of people. Evaluation research is commonly practiced in agency-based social work settings, and later chapters will discuss some of the basics for conducting a program evaluation.

There are numerous examples of explanatory social scientific investigations. For example, Dominique Simons and Sandy Wurtele (2010) [59] sought to understand whether receiving corporal punishment from parents led children to turn to violence in solving their interpersonal conflicts with other children. In their study of 102 families with children between the ages of 3 and 7, the authors found that experiencing frequent spanking did in fact result in children being more likely to accept aggressive problem-solving techniques. Another example of explanatory research can be seen in Robert Faris and Diane Felmlee’s (2011) [60] research study on the connections between popularity and bullying. From their study of 8th, 9th, and 10th graders in nineteen North Carolina schools, they found that aggression increased as adolescents’ popularity increased. [61]

  • Think back to your working question from section 2.1. Which type of research—exploratory, descriptive, or explanatory—best describes your working question?
  • Try writing a question about your topic that fits with each type of research.

Important things are more rewarding to do

Another consideration in starting a research project is whether the question is important enough to answer. For the researcher, answering the question should be important enough to put in the effort and time required to complete a research project. As we discussed in section 2.1, you should choose a topic that is important to you—one you wouldn’t mind learning about for at least a few months, if not a few years. Time is your most precious resource as a student. Make sure you dedicate it to topics and projects you consider genuinely important.

Your research question should also contribute to the larger expanse of research in that area. For example, if your research question is "does cognitive behavioral therapy (CBT) effectively treat depression?" you are a few decades late to be asking that question. Hundreds of scientists have published articles demonstrating its effectiveness in treating depression. However, a student interested in learning more about CBT can still find new areas to research. Perhaps there is a new population—for example, older adults in a nursing home—or a new problem—like mobile phone addiction—for which there is little research on the impact of CBT.

Your research project should contribute something new to social science. It should address a gap in what we know and what is written in the literature. This can seem intimidating for students whose projects involve learning a totally new topic. How could I add something new when other researchers have studied this for decades? Trust us, by thoroughly reviewing the existing literature, you can find new and unresolved research questions to answer. Google Scholar’s motto at the bottom of their search page is “stand on the shoulders of giants.” Social science research rests on the work of previous scholars and builds on what they discovered to learn more about the social world. Ensure that your question will bring our scientific understanding of your topic to new heights.

Finally, your research question should be of import to the social world. Social workers conduct research on behalf of individuals, groups, and communities to promote change as part of their mission to advance human rights and further social and economic justice. Your research should matter to the people you are trying to help. Your research project should aim to improve the lives of people in your target population by helping the world understand their needs more holistically.

Research projects, obviously, do not need to address all aspects of a problem. As social workers, our goal in enacting social justice isn't to accomplish it all in one semester (or even one lifetime). Our goal is to move the world in the right direction and make small, incremental progress. We encourage all students to think about how they will make their work accessible and relevant to the broader public and use their results to promote change. 

  • Research exists in a cycle. Your research project will follow this cycle, beginning from reading literature (where you are now), to proposing a study, to completing a research project, and finally, to publishing the results.
  • Social work researchers should identify a target population and understand how their project will impact them.
  • Research projects can be exploratory, descriptive, or explanatory, or a combination thereof. While you are likely still exploring your topic, you may settle on another type of research, particularly if your topic has been previously addressed extensively in the literature.
  • Your research project should be important to you, fill a gap or address a controversy in the scientific literature, and make a difference for your target population and broader society.
  • State why your working question is an important one to answer, keeping in mind that your statement should address the scientific literature, target population, and the social world.

2.3 Critical considerations

  • Critique the traditional role of researchers and identify how action research addresses these issues

So far in this chapter, we have presented the steps of student research projects as follows:

  • Find a topic that is important to you and read about it.
  • Pose a question that is important to the literature and to your community.
  • Propose to use specific research methods to answer your question.
  • Carry out your project and report the results.

These were depicted in Figure 2.1 earlier in this chapter. There are important limitations to this approach. This section examines those problems and how to address them.

Whose knowledge is privileged?

First, let's critically examine your role as the researcher. Following along with the steps in a research project, you start studying the literature on your topic, find a place where you can add to scientific knowledge, and conduct your study. But why are you the person who gets to decide what is important? Just as clients are the experts on their lives, members of your target population are the experts on theirs. What does it mean for a group of people to be researched on, rather than researched with? How can we better respect the knowledge and self-determination of community members?


A different way of approaching your research project is to start by talking with members of the target population and those who are knowledgeable about that community. Perhaps there is a community-led organization you can partner with on a research project. The researcher's role in this case would be more similar to a consultant, someone with specialized knowledge about research who can help communities study problems they consider to be important. The social worker is a co-investigator, and community members are equal partners in the research project. Each has a type of knowledge—scientific expertise vs. lived experience—that should inform the research process.

The community focus highlights something important about student projects: they are localized. Student projects can dedicate themselves to issues at a single agency or within a service area. With a local scope, student researchers can bring about change in their community. This is the purpose behind action research.

Action research

Action research is research that is conducted for the purpose of creating social change. When engaging in action research, scholars collaborate with community stakeholders to conduct research that will be relevant to the community. Social workers who engage in action research don't just go it alone; instead, they collaborate with the people who are affected by the research at each stage in the process. Stakeholders, particularly those with the least power, should be consulted on the purpose of the research project, research questions, design, and reporting of results.

Action research also distinguishes itself from other research in that its purpose is to create change on an individual and community level. Kristin Esterberg puts it quite eloquently when she says, “At heart, all action researchers are concerned that research not simply contribute to knowledge but also lead to positive changes in people’s lives” (2002, p. 137). [62] Action research has multiple origins across the globe, including Kurt Lewin’s psychological experiments in the US and Paulo Freire’s literacy and education programs (Adelman, 1993; Reason, 1994). [63] Over the years, action research has become increasingly popular among scholars who wish for their work to have tangible outcomes that benefit the groups they study.

A traditional scientist might look at the literature or use their practice wisdom to formulate a question for quantitative or qualitative research, as we suggested earlier in this chapter. An action researcher, on the other hand, would consult with people in the target population and community to see what they believe the most pressing issues are and what their proposed solutions may be. In this way, action research flips traditional research on its head. Scientists are not the experts on the research topic. Instead, they are more like consultants who provide the tools and resources necessary for a target population to achieve their goals and to address social problems using social science research.

According to Healy (2001), [64] the assumptions of participatory-action research are that (a) oppression is caused by macro-level structures such as patriarchy and capitalism; (b) research should expose and confront the powerful; (c) researcher and participant relationships should be equal, with equitable distribution of research tasks and roles; and (d) research should result in consciousness-raising and collective action. Consistent with social work values, action research supports the self-determination of oppressed groups and privileges their voice and understanding through the conceptualization, design, data collection, data analysis, and dissemination processes of research. We will return to similar ideas in Part 4 of the textbook when we discuss qualitative research methods, though action research can certainly be used with quantitative research methods, as well.

Student projects can make a difference!

One last thing. We've told you all to think small and simple with your projects. The adage that "a good project is a done project" is true. At the same time, this advice might unnecessarily limit an ambitious and diligent student who wants to investigate something more complex. For example, here is a Vice News article about MSW student Christine Stark's work on sex trafficking of indigenous women. Student projects have the potential to address sensitive and politically charged topics. With support from faculty and community partners, student projects can become more comprehensive. The results of your project should accomplish something. Social work research is about creating change, and you will find the work of completing a research project more rewarding and engaging if you can envision the change your project will create.

In addition to broader community and agency impacts, student research projects can have an impact on a university or academic program. Consider this resource on how to research your institution by Rine Vieth. As a student, you are one of the groups on campus with the least power (others include custodial staff, administrative staff, and contingent and adjunct faculty). It is often necessary to organize within your cohort of MSW students for change within the program. Not only is it an excellent learning opportunity to practice your advocacy skills, but you can also use raw data that is publicly available (such as the data linked in the guide) or create your own raw data to inform change. The collaborative and transformative focus of student research projects like these can make them impactful learning experiences, and students should consider projects that will lead to some small change in both themselves and their communities.

  • Traditionally, researchers did not consult target populations and communities prior to formulating a research question. Action research proposes a more community-engaged model in which researchers are consultants who help communities research topics of import to them.
  • Just because we’ve advised you to keep your project simple and small doesn’t mean you must do so! There are excellent examples of student research projects that have created real change in the world.
  • Apply the key concepts of action research to your project. How might you incorporate the perspectives and expertise of community members in your project? How can your project create real change?

2.4 Evaluating internet resources

  • Apply the SIFT technique to find better coverage of scientific information and current events.

When first learning about a new topic, a natural first place to look is an internet search engine (e.g., Google, Bing, or DuckDuckGo). Before diving into the academic literature (which we will do in the next chapter), let's explore how to use research methods to find scientific information that is intended for non-expert audiences. Take some time to learn the basics of your topic before you dive into more advanced literature, which may present a more nuanced (or jargon-filled and confusing) study of the topic. Generally, scholarly literature is specialized, in that it does not try to provide a broad overview of a topic for a non-expert audience. When you are looking at journal articles, you are looking at literature intended for other scientists and researchers.

While you can be assured that articles in reputable journals have passed peer review, that does not always mean they contain accurate information. Articles are often debated on social media or in journalistic outlets. For example, here is a news story debunking a journal article which erroneously found Safe Consumption Sites for people who use drugs were moderately associated with crime increases. After multiple scholars evaluated the article's data, they realized there were flaws in the design and the conclusions were not supported, which led the journal to retract the article. You can find these controversies in the literature by using a Google Scholar feature we've talked about before—'Cited By'. Click the 'Cited By' link to see which articles cited the article you are evaluating. If you see critical commentary on the article by other scholars, it is likely an area of active scientific debate. You should investigate the controversy further prior to using the source in your literature review.


If your literature search contains sources other than academic journal articles (and almost all of them do), you'll need to do a bit more work to assess whether the source is reputable enough to include in your review. Let's say you find a report from a Google Scholar search or a Bing search. Without peer review or a journal's approval, how do you know the information you are reading is any good?

The SIFT method

Mike Caulfield, a Washington State University digital literacy expert, has helpfully condensed key fact-checking strategies into a short list of four moves, or things to do to quickly make a decision about whether or not a source is worthy of your attention. It is referred to as the “SIFT” method, and it stands for Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.

Stop

When you initially encounter a source of information and start to read it—stop. Ask yourself whether you know and trust the author, publisher, publication, or website. If you don’t, use the other fact-checking moves that follow to get a better sense of what you’re looking at. In other words, don’t read, share, or use the source in your research until you know what it is and can verify it is reliable.

This is a particularly important step, considering what we know about the attention economy—social media, news organizations, and other digital platforms purposely promote sensational, divisive, and outrage-inducing content that emotionally hijacks our attention in order to keep us “engaged” with their sites (clicking, liking, commenting, sharing). Stop and check your emotions before engaging! What about this website is driving your engagement?

Investigate the source

You don’t have to do a three-hour investigation into a source to determine its truth. But if you’re reading a piece on economics, and the author is a Nobel prize-winning economist, that would be useful information. Likewise, if you’re watching a video on the many benefits of milk consumption, you would want to be aware if the video was produced by the dairy industry. This doesn’t mean the Nobel economist will always be right and that the dairy industry can’t ever be trusted. But knowing the expertise and agenda of the person who created the source is crucial to your interpretation of the information provided.

When investigating a source, fact-checkers read “laterally” across many websites, rather than digging deep (reading “vertically”) into the one source they are evaluating. That is, they don’t spend much time on the source itself, but instead they quickly get off the page and see what others have said about the source. Indeed, one study cited in the video below found that academic historians are actually less able to tell the difference between reputable and bogus internet sources because they do not read laterally but instead check references and credentials. Those are certainly a good idea to check when reading a source in detail, but fact checkers instead ask what other sources on the web say about it rather than what the source says about itself. They open up many tabs in their browser, piecing together different bits of information from across the web to get a better picture of the source they’re investigating. Not only is this faster, but it harnesses the collected knowledge of the web to more accurately determine whether a source is reputable or not.

We recommend watching this short video [2:44] for a demonstration of how to investigate online sources . Pay particular attention to how Wikipedia can be used to quickly get useful information about publications, organizations, and authors. Note: Turn on closed captions with the “CC” button or use the text transcript if you prefer to read.

Find better coverage

What if the source you find is low-quality, or you can’t determine if it is reliable or not? Perhaps you don’t really care about the source—you care about the claim that source is making. You want to know if it is true or false. You want to know if it represents a consensus viewpoint, or if it is the subject of much disagreement. A common example of this is a meme you might encounter on social media. The random person or group who posted the meme may be less important than the quote or claim the meme makes.

Your best strategy in this case might actually be to find a better source altogether, to look for other coverage that includes trusted reporting or analysis on that same claim. Rather than relying on the source that you initially found, you can trade up for a higher quality source. The point is that you’re not wedded to using that initial source. We have the internet! You can go out and find a better source, and invest your time there.

We recommend watching this short video [4:10] that demonstrates how to find better coverage and notes how fact-checkers build a library of trusted sources they can rely on to provide better coverage. Note: Turn on closed captions with the “CC” button or use the text transcript if you prefer to read.

Trace claims, quotes, and media to the original context

Much of what we find on the internet has been stripped of context. Maybe there’s a video of a fight between two people with Person A as the aggressor. But what happened before that? What was clipped out of the video and what stayed in? Maybe there’s a picture that seems real but the caption could be misleading. Maybe a claim is made about a new medical treatment based on a research finding—but you’re not certain if the cited research paper actually said that. The people who re-report these stories either get things wrong by mistake, or, in some cases, they are intentionally misleading us.

In these cases you will want to trace the claim, quote, or media back to its source, so you can see it in its original context and get a sense of whether the version you saw was accurately presented. We will talk about this more in Chapter 3 when we distinguish between primary sources and secondary sources. Secondary and tertiary sources are great for getting started with a topic, but researchers want to rely on the most highly informed source available. If you see a news article about a research study, cite the journal article written by the researchers who performed the study rather than the story by a journalist who is unaffiliated with the project.

We recommend watching this short video [1:33] that discusses re-reporting vs. original reporting and demonstrates a quick tip: going “upstream” to find the original reporting source. Researchers must follow the thread of information from where they first read it to where it originated in order to understand its truth and value. Social workers who fail to check their sources can spread misinformation within our practice context or come to ill-informed conclusions that hurt clients or communities.

Once you have limited your search to trustworthy sources, ask yourself the following questions when evaluating which of these sources to download:

  • Does this source help me answer my working question?
  • Does this source help me revise and focus my working question?
  • Does this source help me address what my professor expects in a literature review?
  • Is this the best source I can find? Is this a primary or secondary source?
  • What is the original context of this information?
  • Is there controversy surrounding this source?
  • Are the publisher and author reputable and unbiased?

Reflect and plan for the future

As you search the literature, you will learn more about your topic area. You will learn new concepts that become new keywords in new queries. You will continue to come up with search queries and download articles throughout the research process. While we present this material at the beginning of the textbook, that is a bit misleading: you will return to search the literature often during the research process. As such, it is important to keep notes about what you did at each stage. I usually keep a "working notes" document in the same folder as the PDFs of articles I download. There, I can write down which categories different articles fall into (e.g., theoretical articles, empirical articles), reflect on how my question may need to change, or highlight important unresolved questions or gaps revealed in my search.

Creating and refining your working question will help you identify the key concepts your study will address. Once you identify those concepts, you’ll need to decide how to define them and how to measure them when it comes time to collect your data. As you are reading articles, note how other researchers who study your topic define concepts theoretically in the introduction and measure them in their methods section. Tuck these notes away for the future, when you will have to define and measure these concepts.

You need to be able to speak intelligently about the target population you want to study, so finding literature about their strengths, challenges, and how they have been impacted by historical and cultural oppression is a good idea. Last, but certainly not least, you should consider any potential ethical concerns that could arise during the course of carrying out your research project. These concerns might come up during your data collection, but they may also arise when you get to the point of analyzing data or disseminating results.

Decisions about the various research components do not necessarily occur in sequential order. For example, you may have to think about potential ethical concerns before changing your working question. In summary, the following list shows some of the major components you’ll need to consider as you design your research project. Make sure you have information that will inform how you think about each component.

  • Research question
  • Literature review
  • Theories and causal relationships
  • Unit of analysis and unit of observation
  • Key concepts (conceptual definitions and operational definitions)
  • Method of data collection
  • Research participants (sample and population)
  • Ethical concerns

Carve some time out each week during the beginning of the research process to revisit your working question. As you write notes on the articles you find, reflect on how that knowledge would impact your working question and the purpose of your research. You still have some time to figure it out. We'll work on turning your working question into a full-fledged research question in Chapter 9 .

  • Research requires fact-checking. The SIFT technique is an easy approach to critically investigating internet resources about your topic.
  • Investigate the source of the information you find on the web and look for better coverage.
  • Search for internet resources that help you address your working question and write your research proposal.
  • Look at your professor's prompt for a literature review and sketch out how you might answer those questions using your present level of knowledge. Search for sources that support or challenge what you think is true about your topic.
  • Find a news article reporting about topics similar to your working question. Identify whether it is a primary or secondary source. If it is a secondary source, trace any claims to their primary sources. Provide the URLs.

This resource draws from the existing body of open educational resources on social science research methods. This index provides information about which sources were adapted in each chapter. This index does not include attributions for images, which are provided in the Media Attributions section at the end of each chapter before the footnotes.

I should note that this textbook draws heavily from Scientific inquiry in social work by Matt DeCarlo and published by Open Social Work under a  CC-BY-NC-SA 4.0 license which adapted a large chunk of content from Principles of sociological inquiry: Quantitative and qualitative methods by Amy Blackstone and published by the Saylor Foundation under a CC-BY-NC-SA license. In many ways, this project is indebted to Dr. Blackstone's seminal scholarly gift.

Chapter 1 adapted content from:

  • Chapter 1 of Scientific inquiry in social work by Matt DeCarlo and published by Open Social Work under a  CC-BY-NC-SA 4.0 license.
  • Cultural Humility: People, Principles and Practices—Part 1 of 4 published by Vivian Chavez under a CC BY-NC-ND 3.0 license. 

Chapter 2 adapted content from:

  • Sections 2.1 and 15.1 of Scientific inquiry in social work by Matt DeCarlo and published by Open Social Work under a  CC-BY-NC-SA 4.0 license.
  • How to create a concept map published by the University of Guelph Library under a CC-BY 4.0 license.

Chapters 3, 4, and 5 adapted content from:

  • Sections 2.2, 2.3, 3.1, and 3.2 of Scientific inquiry in social work by Matt DeCarlo and published by Open Social Work  under a  CC-BY-NC-SA 4.0 license.
  • Bonanno & Veselak's article A matter of trust: Parents' attitudes towards child mental health information sources, published in Advances in Social Work under a CC-BY 4.0 license.
  • The section on SIFT from Introduction to College Research by Walter D. Butler, Aloha Sargent, and Kelsey Smith and published under a CC-BY 4.0 license.
  • Information privilege by Steely Library NKU and published under a CC-BY 4.0 license.

Chapter 6 adapted content from:

  • Chapter 5 of  Scientific inquiry in social work by Matt DeCarlo and published by Open Social Work under a  CC-BY-NC-SA 4.0 license.

Chapter 7 adapted content from:

  • Section 6.2 of Scientific inquiry in social work by Matt DeCarlo CC-BY-NC-SA 4.0 published by Open Social Work  under a  CC-BY-NC-SA 4.0 license.
  • The GO-GN research methods handbook by Farrow, R., Iniesto, F., Weller, M. & Pitt., R. published by the Global OER Graduate Network (GO-GN) under a CC-BY 4.0 license.
  • A small amount of content from the Wikipedia entry for postpositivism published under a CC-BY-SA 3.0 license as well as Yosef Jabarin's article on conceptual frameworks published by the International Journal of Qualitative Methods under a CC-BY 4.0 license.

Chapter 8 adapted content from:

  • Sections 6.3, 7.2, and 7.3 of Scientific inquiry in social work by Matt DeCarlo CC-BY-NC-SA 4.0 published by Open Social Work which is adapted from Principles of sociological inquiry: Quantitative and qualitative methods by Amy Blackstone CC-BY-NC-SA published by the Saylor Foundation .
  • Chapter 10 of Research methods in psychology (4th edition) by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, and Dana C. Leighton CC-BY-NC-SA 4.0 from Kwantlen Polytechnic University Library.

Chapter 9 adapted content from:

  • Chapter 8 of Scientific inquiry in social work by Matt DeCarlo CC-BY-NC-SA 4.0 published by Open Social Work  under a  CC-BY-NC-SA 4.0 license.
  • Many figures and ideas from Principles of sociological inquiry: Quantitative and qualitative methods by Amy Blackstone CC-BY-NC-SA published by the Saylor Foundation .

Chapter 10 adapted content from:

  • Chapter 10 of Scientific inquiry in social work by Matt DeCarlo and published by Open Social Work under a  CC-BY-NC-SA 4.0 license.

Chapter 11 adapted content from:

  • Chapter 9 of Scientific inquiry in social work by Matt DeCarlo CC-BY-NC-SA 4.0 published by Open Social Work  under a  CC-BY-NC-SA 4.0 license.
  • Chapter 6 of Social Science Research: Principles, Methods, and Practices published by Anol Bhattacherjee under a CC-BY-NC-SA 3.0 Unported license.
  • Chapter 35 of  Research methods in psychology (4th edition) by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, and Dana C. Leighton CC-BY-NC-SA 4.0 from Kwantlen Polytechnic University Library.

Chapter 12 adapted content from:

  • Chapter 35 of Research methods in psychology (4th edition) by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, and Dana C. Leighton CC-BY-NC-SA 4.0 from Kwantlen Polytechnic University Library.
  • Chapter 9 of Social Science Research: Principles, Methods, and Practices published by Anol Bhattacherjee under a CC-BY-NC-SA 3.0 Unported license.
  • The final section on cultural bias adapts a large amount of content from Navigating cross-cultural research: methodological and ethical considerations , an open access article by Tanya Broesch, Alyssa N. Crittenden, Bret A. Beheim, Aaron D. Blackwell, John A. Bunce, Heidi Colleran, Kristin Hagel, Michelle Kline, Richard McElreath, Robin G. Nelson, Anne C. Pisor, Sean Prall, Ilaria Pretelli, Benjamin Purzycki, Elizabeth A. Quinn, Cody Ross, Brooke Scelza, Kathrine Starkweather, Jonathan Stieglitz and Monique Borgerhoff Mulder published by Proceedings of the Royal Academy of Biological Sciences under a CC-BY 4.0 license.

Chapter 13 adapted content from:

  • Chapter 7 of Research methods in psychology (3rd American edition) by Paul C. Price, Rajiv S. Jhangiani, I-Chant A. Chiang, Dana C. Leighton, and Carrie Cuttler published by Washington State University Press under a  CC-BY-NC-SA 4.0 license.

Chapter 15 adapted content from:

  • Chapter 12 of Research methods in psychology (3rd American edition) by Paul C. Price, Rajiv S. Jhangiani, I-Chant A. Chiang, Dana C. Leighton, and Carrie Cuttler CC-BY-NC-SA 4.0 published by Washington State University Press under a  CC-BY-NC-SA 4.0 license.

Chapter 19 adapted content from:

  • Figures 19.4 and 19.5 are from Growing up in New York City: A Generational Memoir (1941–1960) by Howard R. Wolf, published in American Studies Journal under a CC-BY-SA 4.0 license.

Chapter 21 adapted content from:

  • Figure 21.1 is from Street youth labor as an expression of survival and self-worth by J. Karabanow and E. Gurman, published in Critical Social Work under a CC-BY-NC-ND 4.0 license.

Chapter 24 adapted content from:

  • Chapter 16 of Scientific inquiry in social work by Matt DeCarlo, published by Open Social Work under a CC-BY-NC-SA 4.0 license.

Content warning: examples in this chapter contain references to school discipline, mental health, gender-based discrimination, police shootings, ableism, autism and anti-vaccination conspiracy theories, children’s mental health, child abuse, poverty, substance use disorders and parenting/pregnancy, and tobacco use.

3.1 The scholarly literature

Learning objectives

  • Define what we mean by "the literature"
  • Differentiate between primary and secondary sources and summarize why researchers use primary sources
  • Distinguish between different types of resources and how they may be useful for a research project
  • Identify different types of journal articles

In the last chapter, we discussed how looking at the literature on your topic is the first thing you will need to do as part of a student research project. A literature review makes up the first half of a research proposal. But what do we mean by "the literature?" The literature consists of the published works that document a scholarly conversation on a specific topic within and between disciplines. You will find in “the literature” documents that track the progression of the research and background of your topic. You will also find controversies and unresolved questions that can inspire your own project. By this point in your social work academic career, you’ve probably heard that you need to get “peer-reviewed journal articles.” But what are those exactly? How do they differ from news articles or encyclopedias? That is the focus of this section of the text—different types of literature.

Disciplines

Information does not exist in the environment like some kind of raw material. It is produced by people working within a discipline: a particular field of knowledge with specific methods for generating new information. Disciplines consume, produce, and share knowledge. Looking through a university’s course catalog gives clues to disciplinary structure. Fields such as political science, biology, history, and mathematics are unique disciplines, as is social work. We have topics we study often, like social welfare policy or mental health interventions, and a particular theoretical and ethical framework through which we view the world.

Social work researchers must take an interdisciplinary perspective, engaging not just with social work literature but with literature from many disciplines to gain a comprehensive and accurate understanding of a topic. Consider how nursing and social work would differ in studying sports in schools. A nursing researcher might study how sports affect individuals’ health and well-being, how to assess and treat sports injuries, or the physical conditioning required for athletics. A social work researcher might study how schools privilege or punish student athletes, how athletics shape social relationships and hierarchies, or differences in participation in organized sports between children of different genders. People in some disciplines are much more likely to publish on sports in schools than those in others, like political science or religious studies. You can probably find a bit of scholarship on sports in schools in most disciplines, though you will want to target disciplines that address the topic often as part of the core purpose of the profession (like sports medicine or sports social work). Sometimes disciplines overlap in their focus, as with clinical social work and counseling psychology or urban planning and public administration.

You will need to become comfortable with identifying the disciplines that might contribute information to any literature search. When you do this, you will also learn how to decode the way people talk about a topic within a discipline. This will be useful to you when you begin a review of the literature in your area of study.

Think about your research topic and working question. What disciplines are likely to publish about your topic?

For each discipline, consider:

  • What is important about the topic to scholars in that discipline?
  • What is most likely to be the focus of their study about the topic?
  • What perspective are they most likely to have on the topic?

Periodicals

First, let’s discuss periodicals. Periodicals include trade publications, magazines, and newspapers. While they may appear similar, particularly those that are found online, each of these periodicals has unique features designed for a specific purpose. Magazine and newspaper articles are usually written by journalists, are intended to be short and understandable for the average adult, contain color images and advertisements, and are designed as commodities to be sold to an audience. An example of a useful magazine for social workers is the New Social Worker.

Primary sources

Periodicals may contain primary or secondary literature, depending on the article in question. An article that is a primary source gathers information as an event happens (e.g., an interview with a victim of a local fire), or it may relay original research reported by journalists (e.g., the Guardian newspaper’s The Counted webpage, which tracked how many people were killed by police officers in the United States from 2015 to 2016). [65] Primary sources are based on raw data, which we discussed in Chapter 2.


Is it okay to use a magazine or newspaper as a source in your research methods class? If you were in my class, the answer is “probably not.” There are some exceptions, such as the Guardian page mentioned above (which is a great example of data journalism ) or breaking news about a policy or community, but most of what newspapers and magazines publish is secondary information. Researchers should look for primary sources—the original source of information—whenever possible.

Secondary sources

Secondary sources interpret, discuss, and summarize primary sources. Examples of secondary sources include literature reviews written by researchers, book reviews, as well as news articles or videos about recently published research. Your job in this course is to read the original source of the information , the primary source. When you read a secondary source, you are relying on another's interpretation of the primary source. That person might be wrong in how they interpret the primary source, or they may simply focus on a small part of what the primary source says.

Most often, I see the distinction between primary and secondary sources in the references pages students submit for the first few assignments in my research class. Students are used to citing the summaries of scientific findings from periodicals like the New York Times or Washington Post. Instead, students should read the journal article written by the researchers, as it is the primary source. Journalists are not scientists. If you have seen articles about how chocolate cures cancer or how drinking whiskey can extend your life, you should understand how journalists can exaggerate or misinterpret scientific results. See this cartoon from Saturday Morning Breakfast Cereal for a hilarious illustration of sensationalizing findings from scientific research in journalism. Even when news outlets do a very good job of interpreting scientific findings, and that is certainly what most journalistic outlets strive for, you are still likely to only get a short summary of the key results.

When you are getting started, newspaper and magazine articles are excellent places to begin your journey into the literature, as they do not require specialized knowledge and may inspire deeper inquiry. Just make sure you are reading news, not opinion pieces, which may exclude facts that contradict the author's viewpoint. As we will discuss in the next chapter, the one type of secondary source that is highly valuable for student researchers is the review article published in academic journals, as review articles summarize much of the relevant literature on a topic.

Trade publications

Unlike magazines and newspapers, trade publications may take some specialized knowledge to understand. Trade publications or trade journals are periodicals directed to members of a specific profession. They often include information about industry trends and practical information for people working in the field. Because of this, trade publications are somewhat more reputable than newspapers or magazines, as the authors are specialists in their field.

NASW News, published by the National Association of Social Workers (NASW), is a good example of a trade publication in social work. Its intended audience is social work practitioners who want to learn about important practice issues. They report news and trends in a particular field but not scholarly research. They may also provide product or service reviews, job listings, and advertisements.

So, can you use trade publications in a formal research proposal? Again, if you’re in my class, the answer would be “probably not.” A main shortcoming of trade publications is the lack of peer review. Peer review refers to a formal process in which other esteemed researchers and experts ensure your work meets the standards and expectations of the professional field. Peer review is part of the cycle of publication for journal articles and provides a gatekeeping service, so to speak, which (in theory) ensures that only top-quality articles are published.

In reality, there are a number of problems with our present system of peer review. Briefly, critiques of the current peer review system include unreliable feedback across reviewers; inconsistent error and fraud detection; prolonged evaluation periods; and biases against non-dominant identities, perspectives, and methods. It can take years for a manuscript to complete peer review, likely with one or more rounds of revisions between the author and the reviewers or editors at the journal. See Kelly and colleagues (2014) [66] for an overview of the strengths and limitations of peer review .

A trade publication isn't as reputable as a journal article because it does not include expert peer review. While trade publications do have a staff of editors, the level of review is not as stringent as with academic journals. On the other hand, if you are completing a study about practitioners, then trade publications may be highly relevant sources of information for your proposal.

In summary, newspapers and other popular press publications are useful for getting general topic ideas. Trade publications are useful for practical application in a profession and may also be a good source of keywords for future searching.

Journal articles

As you’ve probably heard by now, academic journal articles are considered to be the most reputable sources of information, particularly in research methods courses. Journal articles are written by scholars with the intended audience of other scholars (like you!). The articles are often long and contain extensive references that support the author's arguments. The journals themselves are often dedicated to a single topic, like violence or child welfare, and include articles that seek to advance the body of knowledge about a particular topic. We included a list of the 25 most popular social work journals in Chapter 1 .


Peer review

Most journals are peer-reviewed (also known as refereed), which means a panel of scholars reviews each article to recommend whether it should be accepted into that specific journal. Scholarly journals make available articles of interest to experts or researchers. An editorial board of respected scholars (i.e., peers) reviews all articles submitted to a journal. Editors and volunteer reviewers decide if the article provides a noteworthy contribution to the field and should be published. For this reason, journal articles are the main source of information for researchers as well as for literature reviews. Usually, peer review is done confidentially, with the reviewer's name hidden from the author and vice versa. This confidentiality is mediated by the editorial staff at an academic journal and is termed blinded review, though this ableist language should be revised to confidential review (Ades, 2020). [67]

You can usually tell whether a journal is peer reviewed by going to its website. Under the “About Us” or "Author Submission" sections, the website should list the procedures for peer review. If a journal does not provide such information, you may have found a “predatory journal.” You may want to check with your professor or a librarian, as the websites for some reputable journals are not straightforward. It is important not to use articles from predatory journals. These journals will publish any article—no matter how bad it is—as long as the author pays them. If a journal appears suspicious and may be a predatory publication, search for it in the COPE member list. Another predatory publishing resource, Beall's List, has been critiqued as discriminatory towards some open access publishing models and non-Western publishers, so we do not recommend its use (Berger & Cirasella, 2015). [68]

Even reputable, non-predatory journals publish articles that are later shown to be incorrect, unethically manipulated, or otherwise faulty. In Chapter 1, we talked about Andrew Wakefield's fabricated data linking autism and vaccines. His study was published in The Lancet, one of the most reputable journals in medicine, which formally retracted the article years later. Retraction Watch is a scholarly project dedicated to highlighting when scientific papers are withdrawn and why. It is important to note that peer review is not a perfect process and is subject to error. There is a growing movement to make the peer review process faster and more transparent. For example, 17 life science journals moved to a model of publishing the unreviewed draft, the reviewers' comments, and how they informed the final draft (Brainard, 2019). [69]

Seminal articles

Peers don't just review work prior to publication. The whole point of publishing a journal article is to ensure that your peers will read it and use it in their work. A seminal article is “a classic work of research literature that is more than 5 years old and is marked by its uniqueness and contribution to professional knowledge” (Houser, 2018, p. 112). [70] Basically, it is a really important article. This article by Hodge, Lacasse, and Benson (2012) [71] reviews the most cited social work publications from the previous decade. These would all be considered seminal articles.

There are far more than just these 100 seminal articles in the literature. How do you know if you are looking at a seminal article? Seminal articles are cited a lot in the literature. You can see how many authors have cited an article using Google Scholar’s citation count feature when you search for the article, as depicted in Figure 3.1. Generally speaking, articles that have been cited more often are considered more reputable. There is nothing wrong with citing an article with a low citation count, but it is an indicator that not many other scholars have found the source to be useful or important, at least not yet.

Figure 3.1 A Google Scholar results page with the "Cited by" link circled, showing the article has been cited 4,030 times by other authors

Figure 3.1 shows a citation count for an article that is obviously seminal, with over 4,000 citations. There is no exact number of citations at which an article is considered seminal. A low citation count for a recently published article is not a reliable indicator of its importance to the literature, as it takes time for people to read articles and incorporate them into their scholarship. And an article from five years ago with only 20 citations can still be a good quality resource to use. The purpose here is to recognize when articles are clearly influential in the literature, and are therefore more likely to be important to your review of the literature in that topic area.

Empirical articles

Journal articles can fall into one of several categories. Empirical articles report the results of a quantitative or qualitative analysis of raw data conducted by the author. Empirical articles also integrate theory and previous research. However, just because an article includes quantitative or qualitative results does not necessarily mean it is an empirical journal article. Since most articles contain a literature review with findings from empirical studies, you need to make sure the findings reported in the study stem from the author’s analysis of raw data.

Fortunately, empirical articles follow a similar structure and include the following sections: introduction, methods, results, and discussion—in that order. While the exact wording in the headings may differ slightly from publication to publication and other sections may be added, this general structure applies to nearly all empirical journal articles. If you see this general structure, you are reading an empirical journal article. You can see this structure in a recent article from the open access social work journal, Advances in Social Work. Bonnano & Vesalak (2019) [72] conducted interviews with parents of children diagnosed with mental health issues, uncovering that trust was central to which sources parents used to inform themselves about their child's diagnosis. While there is no "Introduction" heading, the article begins on page 397 with the abstract and introduction. The "Methods" heading appears on page 400, after the introduction ends. The "Results" section begins on page 402. Because this is a qualitative empirical article, the results section is full of themes and quotes based on what participants said in their interviews with researchers. If it were a quantitative empirical article, like this one from Kelly, Kubart, and Freed (2020) [73] evaluating the impact of a social work intervention on depression in teenagers, the results section would be full of statistical data, tables, and figures indicating mathematical relationships between the key variables in the research question. The "Discussion" section of the Bonnano & Vesalak article begins on page 407, and it contextualizes the findings of the "Results" section within the scholarly literature.

While this article uses the traditional headings, you may see other journal articles that use headings like "Background" instead of "Introduction" or "Findings" instead of "Results". You may also find headings like "Conclusions", "Implications", "Limitations", and so forth in other journal articles. Regardless of the specific wording, empirical journal articles follow the structure of introduction, methods, results, and discussion. If it has a methods and results section, it's almost certainly an empirical journal article. However, if the article does not have these specific sections, it's likely you've found a non-empirical article because the author did not analyze any raw data.

Review articles

Along with empirical articles, review articles are the most important type of article you will read as part of conducting your review of the literature. We will discuss their importance more in Chapter 4, but briefly, review articles are journal articles that summarize the findings of other researchers and establish the state of the literature in a given topic area. They are so helpful because they synthesize a wide body of research information in a (relatively) short journal article. Review articles include literature reviews, like this one from Reynolds & Bacon (2018) [74] which summarizes the literature on integrating refugee children in schools. You may find literature reviews with a special focus, like a critical review of the literature, which may apply a perspective like critical theory or feminist theory while surveying the scholarly literature on a given topic.

Review articles also include systematic reviews, which are like literature reviews but more clearly specify the criteria by which the authors search for and include scholarly literature in the review (Uman, 2011). [75] For example, this recent systematic review by Leonard, Hafford-Letchfield, and Couchman (2018) [76] discusses how to use strategies from arts education to inform social work education. As you can see in Figure 1 of this article, systematic reviews are very specific about which articles they include in their analysis. Because systematic reviews try to address all scholarly literature published on a given topic, researchers specify how the literature search was conducted, how many articles were included or excluded, and the reasoning for their decisions. This way, researchers make sure no relevant source is excluded from their analysis.

The two final kinds of review articles, the meta-analysis and the meta-synthesis, go even further than systematic reviews in that they analyze the raw data from all of the articles published in a given topic area, not just the published results. A meta-analysis is a study that combines raw data from multiple sources and analyzes the pooled data using statistics. Meta-analyses are particularly helpful for intervention studies, as they pool together the raw data from multiple samples and studies to create a super-study of thousands of people, which has greater explanatory power. For example, this recent meta-analysis by Kennedy and colleagues (2016) [77] analyzes pooled data from six separate studies on parent-child interaction therapy to see if it is effective.

A meta-synthesis is similar to a meta-analysis but it pools qualitative results from multiple studies for analysis. For example, this meta-synthesis by Hodge and Horvath (2011) [78] pooled data from 11 qualitative studies that addressed spiritual needs in healthcare settings. While meta-analyses and meta-syntheses are the most methodologically robust type of review article, any recently published review article that is highly relevant to your topic area is a good place to start reading literature. Because review articles synthesize the results of multiple studies, they can give you a broad sense of the overall literature on your topic. We'll review a few kinds of non-empirical articles next.

Theoretical articles

Theoretical articles discuss a theory, conceptual model, or framework for understanding a problem. They may delve into philosophical or ethical analysis as well. For example, this theoretical article by Carrillo & Grady (2018) [79] discusses structural social work theory and anti-oppressive practice approaches. While most students know they need to have statistics to back up what they think is true, it's also important to use theory to inform what you think about your topic. Theoretical articles can help you understand how to think about a topic and may help you make sense of the results of empirical studies.

Practical articles

Practical articles describe “how things are done" from the perspective of on-the-ground practitioners. They are usually shorter than other types of articles and are intended to inform practitioners of a discipline on current issues. They may also reflect on a “hot topic” in the practice domain, a complex client situation, or an issue that may affect the profession as a whole. This practical article by Curry-Stevens and colleagues (2019) [80] was written by social work faculty for other social work faculty. It helps faculty learn how to increase the number of graduate social work students concentrating in macro practice, based on what faculty in one program did.

Table 3.1 Identifying and using different types of journal articles
  • Peer-reviewed article — How to identify: Go to the journal's website and look for information describing peer review, usually in the instructions for authors or about-the-journal sections. How to use: To ensure other respected scholars have reviewed, provided feedback on, and approved the article as containing honest and important scientific scholarship.
  • Article in a predatory journal — How to identify: Use the SIFT method. Search for the publisher and journal on Wikipedia and in a search engine, and check the COPE member list to see if your journal appears there. How to use: They are not useful. Articles in predatory journals are published for a fee and have not undergone serious review. They should not be cited.
  • Not a journal article — How to identify: Does not provide the name of the journal it is published in. You may want to google the name of the article or report and its author to see if you can find a published version. How to use: Journal articles aren't everything, but if your instructor asks for one, make sure you haven't mistakenly used a dissertation, thesis, or government report.
  • Empirical article — How to identify: Most likely, if it has a methods and results section, it is an empirical article. How to use: Once you have a more specific idea in mind for your study, you can adapt the measures, design, and sampling approach of empirical articles (quantitative or qualitative). The introduction and results sections can also provide important information for your literature review. Make sure not to rely too heavily on a single empirical study, though, and try to build connections across the studies you read.
  • Quantitative article — How to identify: The methods section contains numerical measures and instruments, and the results section provides outcomes of statistical analyses.
  • Qualitative article — How to identify: The methods section describes interviews, focus groups, or other qualitative techniques, and the results section has a lot of quotes from study participants.
  • Review article — How to identify: The title or abstract includes terms like literature review, systematic review, meta-analysis, or meta-synthesis. How to use: Getting a comprehensive but generalized understanding of the topic and identifying common and controversial findings across studies.
  • Theoretical article — How to identify: The article does not have a methods or results section and talks about theory a lot. How to use: Developing your theoretical understanding of your topic.
  • Practical article — How to identify: Usually in a section at the front or back of the journal, separate from full articles; addresses hot topics in the profession from the standpoint of a practitioner. How to use: Identifying how practitioners in the real world think about your topic.
  • Book reviews, notes, and commentary — How to identify: Much shorter, usually only a page or two; should state clearly what kind of note or review they are in the title, abstract, or keywords. How to use: Can point towards sources that provide more substance on the topic you are studying or point out emerging or controversial aspects of your topic.

What kind of articles should you read?

No one type of article is better than the other, as each serves a different purpose. Seminal articles relevant to your topic area are important to read because the ideas in them are deeply influential in the literature. Theoretical articles will help you understand the social theory behind your topic. Empirical articles should test those theories quantitatively or create those theories qualitatively, a process we will discuss in greater detail in Chapter 8 . Review articles help you get a broad survey of the literature by summarizing the results of multiple studies, which is particularly important at the beginning of a literature search. And finally, practical articles will help you understand a practitioner’s on-the-ground experience. Pick the kind of article that gives you the kind of information you need.

Other sources of information

As I mentioned previously, newspaper and magazine articles are good places to start your search (though they should not be the end of your search!). Another source students go to almost immediately is Wikipedia. Wikipedia is a marvel of human knowledge: a digital encyclopedia to which anyone can contribute. The entries for each Wikipedia article are overseen by skilled and specialized editors who volunteer their time and knowledge to keep their articles correct and up to date. That said, Wikipedia has been criticized for gender and racial bias as well as inequities in its treatment of the Global South (Montgomery et al., 2021). [81]

Encyclopedias

Wikipedia is an example of a tertiary source. We reviewed primary and secondary sources in the previous section. Tertiary sources review primary and secondary sources. Examples of tertiary sources include encyclopedias, directories, dictionaries, and textbooks like this one. Tertiary sources are an excellent place to start (but not a good place to end) your search. A student might consult Wikipedia, the Open Education Sociology Dictionary, or the Encyclopedia of Social Work to get a general idea of a social work topic. Encyclopedias can also be excellent resources for their citations, which will often point to seminal articles or important primary sources.

The difference between secondary and tertiary sources is not exact, and as we’ve discussed, using one or both at the beginning of a project is a good idea. As your study of the topic progresses, you will naturally have to transition away from secondary and tertiary sources and towards primary sources. We’ve already talked about one particular kind of primary source—empirical journal articles. We will spend more time on this primary source than any other in this textbook. However, it is important to understand how other types of sources can be used as well.

Books contain important scholarly information. They are particularly helpful for theoretical, philosophical, and historical inquiry. For example, in my research on self-determination for individuals with intellectual and developmental disabilities, I needed to define and explore the concept of self-determination. I learned how to define it from the philosophical literature on self-determination and the advocacy literature contained in books. You can use books to learn key concepts, theories, and keywords you can use to find more up-to-date sources. They may also help you understand the scope and foundations of a topic and how it has changed over time.

Some books contain chapters that look like academic journal articles. These books are called edited volumes. They contain chapters similar to journal articles, but the content is reviewed by an editor rather than by peer reviewers. Edited volumes are considered somewhat less reputable than journal articles, as confidential peer review is still considered more objective than editorial review. However, articles in social science journals will often include references to books and edited volumes.

Conference proceedings and presentations

Conferences provide a great source of information. At conferences such as the Council on Social Work Education’s (CSWE) Annual Program Meeting (APM) or your state’s NASW conference, researchers present papers on their most recent research. The papers presented at conferences are sometimes published in a volume called a conference proceeding, though these are not common in social work. Conference proceedings highlight the current discourse in a discipline and can lead you to scholars who are interested in specific research areas. For a list of social work conferences, with a focus on the United States, see Chapter 24 .

A word about conference papers: several factors may render these documents difficult to find. It is not unusual that papers delivered at professional conferences are not published in print or electronic form, although an abstract may be available. In these cases, the full paper may only be available from the author or authors. The most important thing to remember is that if you have any difficulty finding a conference proceeding or paper, ask a librarian for assistance.

Gray literature

Another source of information is the gray literature, which consists of research reports released by non-commercial publishers, such as government agencies, policy organizations, and think tanks. If you have already taken a policy class, perhaps you've come across the Center on Budget and Policy Priorities (CBPP). The CBPP is a think tank, a group of scholars that conducts research and advocates for political, economic, and social change. Similarly, students often find the Centers for Disease Control and Prevention (CDC) website helpful for understanding the prevalence of social problems like mental illness and child abuse. If you find fact sheets, news items, blog posts, or other content from think tanks or advocacy organizations in your topic area, look at the references cited to find the primary source, rather than citing the secondary source (the fact sheet). While you might be persuaded by their arguments, make sure to account for the author's potential bias.

Think tanks and policy organizations often have a specific viewpoint they support. For example, there are conservative, liberal, and libertarian think tanks, and policy organizations may be funded by donors to push a given message to the public. Government agencies are generally more objective, though they may be less critical of government programs in the reports they share with the public than other sources might be. The main shortcoming of gray literature is that it lacks the peer review found in academic journal articles. Even so, government reports are often cited in research proposals, along with some reports from policy organizations.

Dissertations and theses

Dissertations and theses can be rich sources of information and include extensive reference lists you can use to scan for resources. However, dissertations and theses are considered gray literature because they are not peer reviewed. Dissertations are approved by committees of scholars, so they are more reputable than sources that receive no expert review prior to publication, but the data analysis they contain is still considered less reputable because it has not passed through peer review. The accuracy and validity of the document may also depend on the school that awarded the doctoral or master's degree to the author. Having completed a dissertation, I can attest to the length of time they take to write as well as to read. If you come across a relevant dissertation, read the literature review and note the sources the author used so you can incorporate them into your own literature review. Also consider searching for journal articles by the dissertation's author to see whether any of the results have been peer reviewed and published. You will be thankful, too, because journal articles are much shorter than dissertations and theses!

Websites, blogs, and social media

The final source of information we are going to talk about is webpages. My graduate research focused on substance abuse and drugs, and I was fond of reading Drug War Rant, a blog about drug policy. It provided me with breaking news about drug policy and editorial opinion about the drug war. I still do the same thing with Twitter, as I follow policy organizations and noted scholars to learn about developments in my interest areas. Podcasts (like the Social Work Podcast or Doin' the Work) are also great sources of news, interviews, and long-form journalism. I would almost never cite a tweet or podcast in a research proposal, but they are excellent places to find primary sources I can read and cite.

Webpages will also help you locate professional organizations and human service agencies that address your problem. Looking at their social media feeds, reports, publications, or “news” sections on an organization’s webpage can clue you into important topics to study. Because anyone can develop their own webpage, webpages are usually not considered scholarly sources for formal writing, but they may be useful as you are first learning about a topic. Additionally, many advocacy webpages will provide references for the facts they cite, pointing you to the primary source. Keep in mind, too, that webpages can be tricky to quote and cite, as they can be updated or moved at any time without notice; some reputable sources will include editing notes, but others may not.

Use quality sources

We will discuss more specific criteria for evaluating the quality of the sources you choose in section 4.1 . For now, as you access and read each source you encounter, remember:

All information sources are not created equal. Sources can vary greatly in terms of how carefully they are researched, written, edited, and reviewed for accuracy. Common sense will help you identify obviously questionable sources, such as tabloids that feature tales of alien abductions, or personal websites with glaring typos. Sometimes, however, a source’s reliability—or lack of it—is not so obvious. You should consider criteria such as the type of source, its intended purpose and audience, the author’s (or authors’) qualifications, the publication’s reputation, any indications of bias or hidden agendas, how current the source is, and the overall quality of the writing, thinking, and design. (Writing for Success, 2015, p. 448)

While each of these sources provides an important facet of our learning process, your research should focus on finding academic journal articles about your topic. These are the primary sources of the research world. While it may be acceptable and necessary to use other primary sources—like books, government reports, or an investigative article by a newspaper or magazine—academic journal articles are preferred. Finding these journal articles is the topic of the next section.

Key takeaways

  • News articles, Wikipedia, and webpages are a good place to start learning about a new topic, but you should rely on more reputable, peer-reviewed sources when reviewing “the literature” on your topic.
  • Researchers should read and cite primary sources, tracing a fact or idea to its original source. If you read a news article or webpage that talks about a study or report (i.e., a secondary source), read the primary source to get all the facts.
  • Empirical articles report quantitative or qualitative analysis completed by the author on raw data.
  • You must critically assess the quality and bias of any source you read. Just because it’s published doesn’t mean it’s true.

Exercises

Conduct a general Google search on your topic.

  • Identify whether the first 10 sources that come up are primary or secondary sources.
  • Optionally, ask a friend to conduct the same Google search and compare your results. Google's search engine customizes the results you see based on your browsing history and other data the company has collected about you. This is called a filter bubble, and you can learn more about it in this TED Talk about filter bubbles.

Find a secondary or tertiary source about your topic.

  • Trace one to the original, primary source.

Find a review article in your topic area.

  • Identify which kind it is (literature review, systematic review, meta-analysis, etc.)

Find a think tank, advocacy organization, or website that addresses your topic.

  • Using their information, identify primary sources that might be of use to you. Consider adding them to your social media feed or joining an email newsletter.

3.2 Identifying your information needs

  • Identify a need for information about a subject area
  • Identify a search topic or question and define it using simple terminology
  • Articulate current knowledge on a topic
  • Recognize a need for information and data to achieve a specific end and define limits to the information need
  • Identify the key concepts relevant to your research question and inquiry into the literature

Norm Allknow was having trouble. He had been using computers since he was five years old and thought he knew all there was to know about them. So, when he was given an assignment to write about the impact of the Internet on society, he thought it would be a breeze. He would just write what he knew, and in no time the paper would be finished. In fact, Norm thought the paper would probably be much longer than the required ten pages. He spent a few minutes imagining how impressed his teacher was going to be, and then sat down to start writing.

He wrote about how the Internet had helped him to play online games with his friends, and to keep in touch with distant relatives, and even to do some homework once in a while. Soon he leaned back in his chair and looked over what he had written. It was just half a page long and he was out of ideas.

Identifying a Personal Need for Information

One of the first things you need to do when beginning any information-based project is to identify your personal need for information. This may seem obvious, but it is something many of us take for granted. We may mistakenly assume, as Norm did in the above example, that we already know enough to proceed. Such an assumption can lead us to waste valuable time working with incomplete or outdated information. Information literacy addresses a number of abilities and concepts that can help us to determine exactly what our information needs are in various circumstances. These are discussed below, and are followed by exercises to help develop your fluency in this area.

Information literacy skills are necessary because the landscape of scientific information is constantly evolving. New information is always being added to what is known about your topic. Trained experts, informed amateurs, and opinionated laypeople are publishing in traditional and emerging formats; there is always something new to find out. The scale of information available varies according to topic, but in general it’s safe to say that there is more information accessible now than ever before.

Due to the extensive amount of information available, part of becoming more information literate is developing habits of mind and of practice that enable you to continually seek new information and to adapt your understanding of topics according to what you find. Because of the widely varying quality of new information, evaluation is also a key element of information literacy, and will be addressed in the next chapter of this book.

Finally, while you are busy searching for information on your current topic, be sure to keep your mind open for new avenues or angles of research that you haven’t yet considered. Often the information you found for your initial need will turn out to be the pathway to a rich vein of information that can serve as raw material for many subsequent projects.

When you understand the information environment where your information need is situated, you can begin to define the topic more clearly and you can begin to understand where your research fits in with related work that precedes it. Your information literacy skills will develop against this changing background as you use the same underlying principles to do research on a variety of topics.

What do you already know?

Part of identifying your own information need is giving yourself credit for what you already know about your topic. It can be helpful to visualize your information need using a chart. Construct a chart using the following format to list whatever you already know about the topic.

Table with three columns. The cells below the headers are empty, indicating the chart should be filled out. The headings are as follows: (1) What do you know? (2) How do you know it? (3) How confident are you in this knowledge?

A simple tracking chart can be seen in Figure 3.2. In the first column, list what you know about your topic. In the second column, briefly explain how you know this (heard it from the professor, read it in the textbook, saw it on a blog, etc.). In the last column, rate your confidence in that knowledge. Are you 100% sure of this bit of knowledge, or did you just hear it somewhere and assume it was right?

When you’ve looked at everything you think you know about the topic and why, step back and look at the chart as a whole. How much do you know about the topic, and how confident are you about it? You may be surprised at how little or how much you already know, but either way you will be aware of your own background on the topic. This self-awareness is key to becoming more information literate.

This exercise gives you a simple way to gauge your starting point, and may help you identify specific gaps in your knowledge of your topic that you will need to fill as you proceed with your research. It can also be useful to revisit the chart as you work on your project to see how far you’ve progressed, as well as to double check that you haven’t forgotten an area of weakness.

Once you’ve clearly stated what you do know, it should be easier to state what you don’t know. Keep in mind that you are not attempting to state everything you don’t know. You are only stating what you don’t know in terms of your current information need. This is where you define the limits of what you are searching for. These limits enable you to meet both size requirements and time deadlines for a project. If you state them clearly, they can help to keep you on track as you proceed with your research. You can learn more about this in the Scope chapter of this book.

One useful way to keep your research on track is with a “KWHL” chart. This type of chart enables you to state both what you know and what you want to know, as well as providing space where you can track your planning, searching and evaluation progress. For now, just fill out the first column, but start thinking about the gaps in your knowledge and how they might inform your research questions. You will learn more about developing these questions and the research activities that follow from them as you work through this book.

Table with four columns. The cells below the headers are empty, indicating the chart should be filled out. The headings are as follows: (1) What do you already know about your topic? (2) What do you want to know about your topic? (3) How will you find information on your topic? (4) What have you learned about your topic?

Revising your working question

Once you have identified your own need for knowledge, investigated the existing information on the topic, and set some limits on your research based on your current information need, write out your research question or state your thesis. The next exercise will help you conceptualize your question and the potential answers you are considering.

You’ll find that it’s not uncommon to revise your question or thesis statement several times in the course of a research project. As you become more and more knowledgeable about the topic, you will be able to state your ideas more clearly and precisely, until they almost perfectly reflect the information you have found.

  • State your working question. It doesn’t have to be perfect at this point.
  • Based on your current understanding of your topic, state what you expect or hope to find as the answer to the question you asked.

Look at your question and your thesis/hypothesis, and make a list of the terms common to both (excluding “the,” “and,” “a,” etc.). These common terms are likely the important concepts you will need to research to support your thesis/hypothesis. They may be the most useful search terms overall, or they may only be a starting point.

If none of the terms from your question and thesis/hypothesis lists overlap at all, you might want to take a closer look and see if your thesis/hypothesis really answers your research question. If not, you may have arrived at your first opportunity for revision. If your question asks about self-esteem and your hypothesis talks about self-concept, the answers you seek will not address your working question.

  • Does your question really ask what you’re trying to find out?
  • Does your proposed answer really answer that question?

You may find that you need to change one or both, or to add something to one or both to really get at what you’re interested in. This is part of the process, and you will likely discover that as you gather more information about your topic, you will find other ways that you want to change your question or thesis to align with the facts, even if they are different from what you hoped.

Defining a research question can be more difficult than it seems. Your initial questions may be too broad or too narrow. You may not be familiar with specialized terminology used in the field you are researching. You may not know if your question is worth investigating at all.

These problems can often be solved by a preliminary investigation of scholarly information on the topic. As previously discussed, gaining a general understanding of the information environment helps you to situate your information need in the relevant context and can also make you aware of possible alternative directions for your research. On a more practical note, however, reading through some of the existing information can also provide you with commonly used terminology, which you can then use to state your own research question, as well as in searches for additional information. Don’t try to reinvent the wheel, but rely on the experts who have laid the groundwork for you to build upon. Find good, argumentative writing from scholars, experts, and people with lived experience that you find insightful.

  • Researchers continually revise their working question and the answers they are considering throughout the literature search process.
  • Your existing knowledge on a topic is a resource you can rely upon, but you will be required as social work practitioners to show evidence that what you say is scientifically true.
  • As you learn more about your topic (theories, key terms, history, etc.), you should revise your working question and the answers you are considering to it.

3.3 Searching basics

  • Brainstorm potential keywords and search queries for your topic
  • Apply basic Boolean search operators to your search queries
  • Identify relevant databases in which to search for journal articles
  • Target your search queries in databases to the most relevant results

One of the drawbacks (or joys, depending on your perspective) of being a researcher in the 21st century is that we can do much of our work without ever leaving the comfort of our recliners. This is certainly true of familiarizing yourself with the literature. Most libraries offer incredible online search options and access to all of the literature you'll ever need on your topic.

Unfortunately, even though many college classes require students to find and use academic journal articles, there is little training for students on how to do so. The purpose of this section and the rest of part 1 of the textbook is to build information literacy skills, which are the basic building blocks of any research project. While we describe the process here as linear, with one step following neatly after another, in reality you will likely return to Step 1 often as your topic gets more specific and your working question becomes clearer. Recall our diagram in the last chapter, Figure 2.1: you cycle between reformulating your working question and searching for and reading the literature on your topic.

The process of searching databases and revising your working question can be time-consuming, so it is a good idea to write notes to yourself. Remembering why you included certain search terms, searched different databases, or other decisions in the literature search process can help you keep track of what you've done. You can then recreate searches to download articles you may have missed or remember reflections you had weeks ago when you last conducted a literature search. When I search the literature, I create a "working document" in a word processor where I write notes and reflections.

Step 1: Build search queries using keywords

What do you type when you are searching for something in a search engine like Google or Bing? Are you a question-asker? Do you type in full sentences or just a few keywords? What you type into a database or search engine is called a query. Well-constructed queries get you to the information you need quickly, while unclear queries force you to sift through dozens of irrelevant articles before you find the ones you want. The danger in being a question-asker or typing like you talk is that including too many extraneous words will confuse the search engine. In this section, you will learn to use keywords rather than natural language to find scholarly information.


Keywords are the words or phrases you use in your search query, and they inform the relevance and quantity of results you get. Unfortunately, different studies often use different terms to mean the same thing. A study may describe its topic as substance abuse, rather than addiction (for example). Think about what keywords are important to answering your working question. Are there other ways of expressing the same idea?

Often in social work research, there is a bit of jargon to become familiar with when crafting your search queries. If you wanted to learn more about individuals of low SES who do not have access to a bank account, you may need to learn the term “unbanked,” which refers to people without bank accounts, and thereby include the word “unbanked” in your search query. If you wanted to learn about children who take on parental roles in families, you may need to include the word “parentification” as part of your search query.

As graduate researchers, you are not expected to know these terms ahead of time. Instead, start with the keywords you already know. As you read more about your topic and become familiar with its jargon, begin including these new keywords in your search queries, as they may return more relevant results.

  • List all of the keywords you can think of that are relevant to your working question. Use your list of keywords from the previous section as a starting point.
  • Add new keywords to this list as you search the literature and learn more about your topic.
  • Start a document in which you can put your keywords as well as your notes and reflections on the search process.

Alternate keywords

Think of synonyms or related terms for each concept. Doing so gives you more flexibility when searching, in case your first search term doesn’t produce any (or enough) results. This may sound strange, since a Web search engine almost always returns too many results. Databases, however, contain fewer items, and having alternative search terms may lead you to useful sources. Even in a search engine like Google, having terms you can combine thoughtfully will yield better results.

The following worksheet is an example of a process you can use to come up with search terms. It illustrates how you might think about the topic of violence in high schools. Notice that this exact phrase is not what will be used for the search. Rather, it is a starting point for identifying the terms that will eventually be used.

Topic: Violence in high schools.
Concept 1: Violence. Possible alternate terms, separated by OR: bullying OR guns OR knives OR gangs.
Concept 2: High school. Possible alternate terms, separated by OR: secondary school OR 12th grade.
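To make the worksheet's logic concrete, here is a small, hypothetical Python sketch (the function name and terms are illustrative, not part of any database): each concept's alternate terms are joined with OR, and the concept groups are then joined with AND, producing a single query string you could paste into a database search box.

```python
def build_query(concept_groups):
    """Turn lists of alternate terms into one Boolean query string.

    Terms within a concept are joined with OR; concept groups are
    joined with AND. Multi-word terms are quoted so many databases
    treat them as exact phrases.
    """
    groups = []
    for terms in concept_groups:
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

violence_terms = ["violence", "bullying", "guns", "knives", "gangs"]
school_terms = ["high school", "secondary school", "12th grade"]

print(build_query([violence_terms, school_terms]))
# (violence OR bullying OR guns OR knives OR gangs) AND ("high school" OR "secondary school" OR "12th grade")
```

The same pattern works for any worksheet you fill out: one OR group per concept column, AND between columns.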

Now, use a clean copy of the same worksheet to think about the topic Sarah’s team is working on. How might you divide their topic into concepts and then search terms? Keep in mind that the number of concepts will depend on what you are searching for, and that the search terms may be synonyms or narrower terms. Occasionally, you may be searching for something very specific, and in those cases you may need to use broader terms as well. Jot down your ideas, then compare what you have written to the information on the second, completed worksheet and identify three differences.

Topic and Concepts chart. The topic is “The involvement of women painters in the Impressionist movement had an effect upon the subjects portrayed.” The concepts are (1) Women (2) Painters (3) Impressionist Movement and (4) Subjects. There are 8 blanks below each concept, each separated with the word OR. These blanks are for alternate terms for each concept.

Boolean searching: Learn to talk like a robot

Google is a “natural language” search engine, which means it tries to use its knowledge of how people talk to better understand your query. Google’s academic database, Google Scholar, uses the same approach. However, other databases that are important for social work research, such as Academic Search Complete, PsycINFO, and PubMed, will not return useful results if you ask a question or type a sentence or phrase as your search query. Instead, these databases are best used by typing in keywords. Instead of typing “the effects of cocaine addiction on the quality of parenting,” you might type “cocaine AND parenting” or “addiction AND child development.” [Note: you would not actually use the quotation marks in your search query for the examples in this subsection.]

These operators (AND, OR, NOT) are part of what is called Boolean searching. Boolean searching works like a simple computer program. Your search query is made up of words connected by operators. Searching for “cocaine AND parenting” returns articles that mention both cocaine and parenting. There are lots of articles on cocaine and lots of articles on parenting, but far fewer that address both topics. In this way, the AND operator reduces the number of results you get from your search query, because both terms must be present.

The NOT operator also reduces the number of results you get from your query. For example, perhaps you wanted to exclude issues related to pregnancy. Searching for “cocaine AND parenting NOT pregnancy” would exclude articles that mentioned pregnancy from your results. Conversely, the OR operator would increase the number of results you get from your query. For example, searching for “cocaine OR parenting” would return not only articles that mentioned both words but also those that mentioned only one of your two search terms. This relationship is visualized in Figure 3.2 below.
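To see why AND and NOT narrow a search while OR widens it, it can help to model the articles matching each keyword as a set. The sketch below uses made-up article IDs purely for illustration; AND, OR, and NOT then behave exactly like set intersection, union, and difference.

```python
# Hypothetical result sets: which articles mention each keyword.
cocaine = {"a1", "a2", "a3", "a5"}      # articles mentioning "cocaine"
parenting = {"a2", "a3", "a4", "a6"}    # articles mentioning "parenting"
pregnancy = {"a3", "a7"}                # articles mentioning "pregnancy"

# "cocaine AND parenting": intersection -> only articles in both sets.
both = cocaine & parenting

# "cocaine OR parenting": union -> articles in either set.
either = cocaine | parenting

# "cocaine AND parenting NOT pregnancy": difference -> drop pregnancy articles.
excluded = (cocaine & parenting) - pregnancy

print(sorted(both))      # ['a2', 'a3']
print(sorted(either))    # ['a1', 'a2', 'a3', 'a4', 'a5', 'a6']
print(sorted(excluded))  # ['a2']
```

Note how AND returned two results, OR returned six, and adding NOT trimmed the AND results further, which is the same narrowing and widening you will see in a real database.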

Figure 3.2: The relationship between the Boolean operators AND, OR, and NOT.

  • Build a simple Boolean search query, appropriately using AND, OR, and NOT.
  • Experiment with different Boolean search queries by swapping in different keywords.
  • Write down which Boolean queries you create in your notes.

Step 2: Find the right databases

Queries are entered into a database, a searchable collection of information. In research methods, we are usually looking for databases that contain academic journal articles. Each database has a unique focus and presents distinct advantages and disadvantages. It is important to try multiple databases and to experiment with different keywords and Boolean operators to find the most relevant results for your working question.

Google Scholar

The most commonly used public database of scholarly literature is Google Scholar, and there are many benefits to using it. Most importantly, Google Scholar is free and is the only database to which you will have access after you graduate. Building competence with it now will help you as you engage in the evidence-based practice process post-graduation. Because Google Scholar is a natural language search engine, it can seem less intimidating. However, Google Scholar also understands Boolean search operators and contains some advanced search features, like searching within one specific author's body of work or one journal, or searching for keywords in the title of an article. We have previously mentioned the 'Cited by' field, which counts the number of other publications that cite the article and links you to those citing articles so you can explore and search within them. This can be helpful when you need to find more recent articles on the same topic.

I recommend linking your personal Google account and your university login, which provides one-click access to journal articles available at your institution. I also often use Google Scholar's citation generator, though I have to fix a few things on most entries before using them in my references. You can also set email alerts for a specific search query, which can help you stay current on the literature over time, and organize journal articles into folders. If these advanced features sound useful to you, consult Google Scholar Search Tips for a walkthrough.

While these features are great, there are some limitations to Google Scholar. The sources in Google Scholar are not curated by a professional body like the American Psychological Association (which curates PsycINFO). Instead, Google crawls the internet for what it thinks are scholarly works and indexes them in its database. When you search in Google Scholar, the results will include not only journal articles but also books, government and foundation reports, gray literature, graduate theses, and articles that have not undergone peer review. There is no way to limit a search to a specific type of source, so you need to look closely and identify what kind of source you are reading. Not every source on Google Scholar is reputable or will match what you need for an assignment.

With that broad focus, Google Scholar ranks the results from your query by the number of times each source was cited by other scholars and the number of times your keywords are mentioned in the source. This biases the search results toward older, more seminal articles, so you should consider limiting your results to only those published in the last few years. Unfortunately, Google Scholar also lacks the advanced search features of other databases, like searching by subject area or within the abstract of an article.

  • Type your queries into Google Scholar. See which queries provide you with the most relevant results.
  • Look for new keywords in the results you find, particularly any jargon used in that topic area you may not have known before.
  • Write down notes on which queries and keywords work best. Reflect on what you read in the abstracts and titles of the search results in Google Scholar.

Databases accessed via an institution

Although learning Google Scholar is important, you should take advantage of the databases to which your institution purchases access. The first places to look for social work scholarship are two databases, Social Service Abstracts and Social Work Abstracts, which are indexed specifically for the discipline of social work. As you are aware, social work is interdisciplinary by nature, so it is a good idea not to limit yourself solely to the social work literature on your topic. If your study requires information from the disciplines of psychology, education, or medicine, you will likely find the databases PsycINFO, ERIC, and PubMed (respectively) to be helpful in your search.

There are also less specialized databases that contain articles from across social work and other disciplines. Academic Search Complete is similar to Google Scholar in its breadth, as it contains a number of smaller databases from a variety of social science disciplines. Within Academic Search Complete, click Choose Databases to select databases that apply to different disciplines or topic areas.

It is worth mentioning that many university libraries have a meta-search engine which searches some of the databases to which they have access, including both disciplinary and interdisciplinary databases. Meta-search engines are fantastic, but you must be careful to narrow your results down to the type of resource you are looking for, as they will include all types of media (e.g., books, movies, games). Unfortunately, not every database is included in these meta-search engines, so you will want to try other databases, as well.

You can find the databases we mentioned here and others on the Databases page of your university library (at my institution, the page is titled A-Z Databases). A university login is required for you to access these databases because you pay for access as part of the tuition and fees at your university. This means that sometime after graduation, you will lose access to these databases and would need to physically travel to a university library to access them. Make the most of your database access during your graduate degree program. We will review in Chapter 4 how to get around paywalls after you graduate.

The databases we described here can be a bit more intimidating, as they have limited natural language support and rely mostly on Boolean searching. However, they contain more advanced search features which can help you narrow down your search to only the most relevant sources, our next step in the research process.
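To make Boolean searching concrete, here is a hypothetical query (the topic and keywords are invented for illustration, and exact syntax varies slightly between databases). Parentheses group synonyms joined with OR, AND requires both concepts to appear, and quotation marks keep phrases intact:

```
("housing instability" OR "housing insecurity" OR homeless*)
AND (veteran* OR "military service members")
```

The asterisk is a truncation symbol in many databases, matching homeless, homelessness, and so on. Check your database's help page before relying on it, as not every platform uses the same symbol.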

  • Explore different databases you can access via your university's library website and searching using your keywords.
  • Write notes on which databases and keywords provide you with the most relevant results and the disciplines you are likely to find in each database.
  • Look for any new keywords in your results that will help you target your search, and experiment with new search queries.

Step 3: Reduce irrelevant search results

At this point, you should have worked on a few different search queries, swapped in and out different keywords, and explored a few different databases of academic journal articles relevant to your topic. Next, you have to deal with the most common frustration for students: narrowing down your search and reducing the number of results you have to look at. I'm sure at some point you've typed a query into Google Scholar or your library's search engine and seen hundreds of thousands or millions of sources in your search results. Don’t worry! You don’t have to read all those articles to know enough about your topic area to produce a good research project.

While reading millions of articles is clearly ridiculous, there is no magic number of search results to reach. A good search will return results which are highly relevant. In my experience, student search queries tend to be too general (and have too many irrelevant results), rather than too specific (and have too few relevant results). Don't worry! You will not need to read all of the articles you find, but reducing the number of results will save you time by eliminating irrelevant articles.

You should have two goals in reducing the number of results:

  • Reduce the number of sources until you could reasonably skim through the title and abstract of each source. Generally, you want a hundred or a thousand results, rather than a hundred thousand results.
  • Reduce the number of irrelevant sources in your search results until you are much more likely to encounter relevant, rather than irrelevant, articles. If, for example, only one of every ten results in your search is relevant to your topic, you are wasting time. You would be better served by using the tips below to better target your search query.

Here are some tips for reducing the number of irrelevant sources when you search in databases.

  • Use quotation marks to indicate exact phrases, like “mental health” or “substance abuse.”
  • Search for your keywords in the ABSTRACT. A lot of your results may be from articles about irrelevant topics that mention your search term only once. If your topic isn’t in the abstract, chances are the article isn’t relevant. You can be even more restrictive and search for your keywords in the TITLE, but that may be too restrictive, and exclude relevant articles with titles that use slightly different phrasing. Academic databases provide these options in their advanced search tools.
  • Use SUBJECT headings to find relevant articles. These tags are added by the organization that provides the database, like the American Psychological Association, which curates PsycINFO, to help classify and categorize information for easier browsing.
  • Narrow down the years of publication. Unless you are gathering historical information about a topic, you are unlikely to find articles older than 10-15 years to be useful, as they no longer reflect the current state of knowledge on a topic. All databases have options to narrow your results down by year. Check with your professor to see if they have specific guidelines for when an article is "too old."
  • Talk to a librarian. They are professional knowledge-gatherers, and there is often a librarian assigned to your department. Their job is to help you find what you need to know, and they are extensively trained in how to help you!
  • Talk to your stakeholders or someone knowledgeable about your target population. They have lived experience with your topic which can help you understand the literature through a different lens. Moreover, the words they use to describe their experiences can also be useful sources of keywords, theories, studies, or jargon.
  • Using the techniques described in this subsection, reduce the number of irrelevant results in the database queries you have performed so far. Pay particular attention to searching within the abstract, using quotation marks to indicate keyword phrases, and exploring subject headings relevant to your topic.
  • Reduce your database query results down to a number (a) where you could reasonably skim through the titles and abstracts to identify relevant articles and (b) where you are much more likely to encounter relevant, rather than irrelevant articles. Repeat this process for your search of each database relevant to your project.
  • Write down your queries or save them, so you can recreate them later if you need to return to them. Also, write down any reflections you have on the search process.
  • Look for any new keywords in your results that will help you target your search better, and experiment with new search queries.
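Putting several of these tips together, a single field-restricted query might look like the one below. This is a hypothetical example using EBSCOhost-style field codes, where AB restricts a term to the abstract field; other platforms use different codes or dropdown menus, so check your database's advanced search screen:

```
AB ("substance use" OR "substance abuse") AND AB (adolescen* OR teen*)
```

You would typically pair a query like this with the database's publication-date filter, limiting results to roughly the last 10-15 years.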

Step 4: Conduct targeted searches

Another way to save time when searching for literature is to look for articles that synthesize the results of other articles: review articles. Systematic reviews provide a summary of the existing literature on a topic. If you find one on your topic, you will be able to read one person’s summary of the literature and go deeper by reviewing and reading the articles cited in its references. Other types of reviews, such as critical reviews, may offer a viewpoint along with a bird's-eye view of the literature.

Similarly, meta-analyses and meta-syntheses have long reference lists that are useful for finding additional sources on a topic. They use data from each article to run their own quantitative or qualitative data analysis. In this way, meta-analyses and meta-syntheses provide a more comprehensive overview of a topic. To find these kinds of articles, include the term “meta-analysis,” “meta-synthesis,” or “systematic review” as keywords in your search queries.
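For example, a targeted search for synthesis articles (the topic here is invented for illustration) might simply add those terms to your existing keywords:

```
("systematic review" OR "meta-analysis" OR "meta-synthesis")
AND "trauma-informed care"
```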

My advice is to read a review article first, before any other type of article. The purpose of review articles is to summarize an entire body of literature in as short an article as possible. This is exactly what students who are learning a new area need. Time is a precious resource for graduate students, and review articles provide the most knowledge in the shortest time. Even if they are a few years old, they should help you understand what else you need to find in your literature search.

As you look through abstracts of articles in your search results, you should begin to notice that certain authors keep appearing. If you find an author that has written multiple articles on your topic, consider searching the AUTHOR field for that particular author. You can also search the web for that author’s Curriculum Vitae or CV (an academic resume) that will list their publications. Many authors maintain personal websites or host their CV on their university department’s webpage. Just type in their name and “CV” into a search engine. For example, you may find Michael Sherraden’s name often if your search query is about assets and poverty. You can find Michael Sherraden's CV on the Washington University in St. Louis website.

A similar process can be used for journal names. As you are going through search results, you may also notice that many of the articles you’ve skimmed come from the same journals. Searching with that journal name in the JOURNAL field will allow you to narrow down your results to just that journal. For example, if you are searching for articles related to values and ethics in social work, you might want to search for articles in the Journal of Social Work Values and Ethics. You can do so within any database that indexes the journal, like Google Scholar, or through the search feature on the journal’s webpage. Browse the abstracts and download the full-text PDF of any article you think is relevant.

Step 5: Explore references and citations

The last thing you'll need to do to make sure you didn't miss anything is to look at the references in each article you cite. If there are articles that seem relevant to you, you may want to download them. Unfortunately, a reference list is fixed at publication and only looks backward in time. As a result, the reference section of an article published in 2014 will only include references from 2014 and earlier.

To find research published after 2014 on the same topic, you can use Google Scholar’s 'Cited By' feature to do a future-looking search. Look up your article on Google Scholar and click the 'Cited By' link. The results are a list of all the sources that cite the article you just read. Google Scholar also allows you to search within these articles—check the box below the search bar to do so—and it will search within the 'Cited By' articles for your keywords.

  • Examine the search results from various databases and look for authors and journals that publish often on your topic. Conduct targeted searches of these authors and journals to make sure you are not missing any relevant sources.
  • Identify at least one relevant article, preferably a review article. Look in the references for any other sources relevant to your working question. Search for the article in Google Scholar and use the 'Cited By' feature to look for additional sources relevant to your working question.
  • Search for review articles that will help you get a broad sense of the literature. Depending on your topic, you may also want to look for other types of articles as well.
  • Write notes on your targeted searches, so you can recreate them later and remember your reflections on the search process.

Getting the right results

I read a lot of literature review drafts that say "this topic has not been studied in detail." Before you come to that conclusion, make an appointment with a librarian. It is their job to help you find material that is relevant to your academic interests. My first project as a graduate research assistant involved writing a literature review for a professor, and the time I spent working with a librarian on that project brought home how important and impactful they can be. But there is a lot you can do on your own!

Let’s walk through an example. Imagine a local university wherein smoking was recently banned, much to the chagrin of a group of student smokers. Students were so upset by the idea that they would no longer be allowed to smoke on university grounds that they staged several smoke-outs during which they gathered in populated areas around campus and enjoyed a puff or two. Their activism was surprising. They were advocating for the freedoms of people committing a deviant act—smoking—that is highly disapproved of. Were the protesters otherwise politically active? How much effort and coordination had it taken to organize the smoke-outs?

The student researcher began their research by attempting to familiarize themselves with the literature. They started by looking to see if there were any other student smoke-outs using a simple Google search. When they turned to the scholarly literature, their search in Google Scholar for “college student activist smoke-outs” yielded no results. Concluding there was no prior research on their topic, they informed the professor that they would not be able to write the required literature review since there was no literature for them to review. What went wrong here?

The student had been too narrow in their search for articles in their first attempt. They went back to Google Scholar and searched again using queries with different combinations of keywords. Rather than searching for “college student activist smoke-outs,” they tried “college student activism,” “smoke-out,” “protest,” and other keywords. This time their search yielded many articles. Of course, they were not all focused on pro-smoking activist efforts, but the results addressed their population of interest, college students, and their broad topic of interest, activism. Experimenting with different keywords across different databases helped them get a comprehensive and multi-disciplinary view of the topic.

Reading articles on college student activism might give them some idea about what other researchers have found in terms of what motivates college students to become involved in activist efforts. They could also play around with their search terms and look for research on activism centered on other sorts of activities that are perceived by some to be deviant, such as marijuana use or veganism. In other words, they needed to broaden their search about college student activism to include more than just smoke-outs and understand the theory and empirical evidence around student activism.

Once they searched for literature about student activism, they could link it to their specific research focus: smoke-outs organized by college student activists. What is true about activism broadly is not necessarily true of smoke-outs, as advocacy for smokers differs from advocacy on other topics like environmental conservation. Engaging with the sociological literature about their target population, college students who smoke cigarettes, for example, helped to link the broader literature on advocacy to their specific topic area.

Revise your working question often

Throughout the process of creating and refining search queries, it is important to revisit your working question. In this example, trying to understand how and why the smoke-out took place is a good start to a working question, but it will likely need to be developed further to be more specific and concrete. Perhaps the researcher wants to understand how the protest was organized using social media and how social media impacted how students perceived the protests when they happened. This is a more specific question than "how and why did the smoke-out take place?", though you can see how the researcher started with a broad question and made it more specific by identifying one aspect of the topic to investigate in detail. You should find your working question shifting as you learn more about your topic and read more of the literature. This is an important sign that you are critically engaging with the literature and making progress. Though it can often feel like you are going in circles, there is no shortcut to figuring out what you need to know to study what you want to study.

  • The quality of your search query determines the quality of your search results. If you don't work on your queries, you’ll get a million results, only a small percentage of which are relevant to your project.
  • Keywords and queries should be updated as you learn more about your topic.
  • Your two goals in targeting your database searches should be reducing the number of results for each search query to a number you can feasibly skim and making those results more relevant to your project.
  • Techniques to increase the number of relevant results in a search include using Boolean operators, quotation marks, searching in the abstract or title, and using subject headings.
  • Use the databases that are most relevant to your topic. For general searches, Google Scholar has some strengths (ease of use, Cited By) and limitations (cannot search in abstract, includes resources other than journal articles). You can access other databases through your university's library.
  • Don’t be afraid to reach out to a librarian or your professor for help with searching. It is their job to help you.
  • Reflect on your working question. Consider changes that would make it clearer and more specific based on the literature you have skimmed during your search.
  • Describe how your search queries (across different databases) address your working question and provide a comprehensive view of the topic.

21. Qualitative research dissemination Copyright © 2020 by Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



  • Subject List
  • Take a Tour
  • For Authors
  • Subscriber Services
  • Publications
  • African American Studies
  • African Studies
  • American Literature
  • Anthropology
  • Architecture Planning and Preservation
  • Art History
  • Atlantic History
  • Biblical Studies
  • British and Irish Literature
  • Childhood Studies
  • Chinese Studies
  • Cinema and Media Studies
  • Communication
  • Criminology
  • Environmental Science
  • Evolutionary Biology
  • International Law
  • International Relations
  • Islamic Studies
  • Jewish Studies
  • Latin American Studies
  • Latino Studies
  • Linguistics
  • Literary and Critical Theory
  • Medieval Studies
  • Military History
  • Political Science
  • Public Health
  • Renaissance and Reformation

Social Work

  • Urban Studies
  • Victorian Literature
  • Browse All Subjects

How to Subscribe

  • Free Trials

In This Article: Social Work Research Methods

  • Introduction

  • History of Social Work Research Methods
  • Feasibility Issues Influencing the Research Process
  • Measurement Methods
  • Existing Scales
  • Group Experimental and Quasi-Experimental Designs for Evaluating Outcome
  • Single-System Designs for Evaluating Outcome
  • Program Evaluation
  • Surveys and Sampling
  • Introductory Statistics Texts
  • Advanced Aspects of Inferential Statistics
  • Qualitative Research Methods
  • Qualitative Data Analysis
  • Historical Research Methods
  • Meta-Analysis and Systematic Reviews
  • Research Ethics
  • Culturally Competent Research Methods
  • Teaching Social Work Research Methods


Social Work Research Methods by Allen Rubin LAST REVIEWED: 14 December 2009 LAST MODIFIED: 14 December 2009 DOI: 10.1093/obo/9780195389678-0008

Social work research means conducting an investigation in accordance with the scientific method. The aim of social work research is to build the social work knowledge base in order to solve practical problems in social work practice or social policy. Investigating phenomena in accordance with the scientific method requires maximal adherence to empirical principles, such as basing conclusions on observations that have been gathered in a systematic, comprehensive, and objective fashion. The resources in this entry discuss how to do that as well as how to utilize and teach research methods in social work. Other professions and disciplines commonly produce applied research that can guide social policy or social work practice. Yet no commonly accepted distinction exists at this time between social work research methods and research methods in allied fields relevant to social work. Consequently, useful references pertaining to research methods in allied fields that can be applied to social work research are included in this entry.

This section includes basic textbooks that are used in courses on social work research methods. Considerable variation exists among textbooks on the broad topic of social work research methods. Some are comprehensive and delve into topics deeply and at a more advanced level than others. That variation is due in part to the different needs of instructors at the undergraduate and graduate levels of social work education. Most instructors at the undergraduate level prefer shorter and relatively simplified texts; however, some instructors teaching introductory master’s courses on research prefer such texts too. The texts in this section that might best fit their preferences are Yegidis and Weinbach 2009 and Rubin and Babbie 2007. The remaining books might fit the needs of instructors at both levels who prefer a more comprehensive and deeper coverage of research methods. Among them, Rubin and Babbie 2008 is perhaps the most extensive and is often used at the doctoral level as well as the master’s and undergraduate levels. Also extensive are Drake and Jonson-Reid 2007, Grinnell and Unrau 2007, Kreuger and Neuman 2006, and Thyer 2001. What distinguishes Drake and Jonson-Reid 2007 is its heavy inclusion of statistical and Statistical Package for the Social Sciences (SPSS) content integrated with each chapter. Grinnell and Unrau 2007 and Thyer 2001 are unique in that they are edited volumes with different authors for each chapter. Kreuger and Neuman 2006 takes Neuman’s social sciences research text and adapts it to social work. The Practitioner’s Guide to Using Research for Evidence-based Practice (Rubin 2007) emphasizes the critical appraisal of research, covering basic research methods content in a relatively simplified format for instructors who want to teach research methods as part of the evidence-based practice process instead of with the aim of teaching students how to produce research.

Drake, Brett, and Melissa Jonson-Reid. 2007. Social work research methods: From conceptualization to dissemination. Boston: Allyn and Bacon.

This introductory text is distinguished by its use of many evidence-based practice examples and its heavy coverage of statistical and computer analysis of data.

Grinnell, Richard M., and Yvonne A. Unrau, eds. 2007. Social work research and evaluation: Quantitative and qualitative approaches. 8th ed. New York: Oxford Univ. Press.

Contains chapters written by different authors, each focusing on a comprehensive range of social work research topics.

Kreuger, Larry W., and W. Lawrence Neuman. 2006. Social work research methods: Qualitative and quantitative applications. Boston: Pearson, Allyn, and Bacon.

An adaptation to social work of Neuman's social sciences research methods text. Its framework emphasizes comparing quantitative and qualitative approaches. Despite its title, quantitative methods receive more attention than qualitative methods, although it does contain considerable qualitative content.

Rubin, Allen. 2007. Practitioner’s guide to using research for evidence-based practice . Hoboken, NJ: Wiley.

This text focuses on understanding quantitative and qualitative research methods and designs for the purpose of appraising research as part of the evidence-based practice process. It also includes chapters on instruments for assessment and monitoring practice outcomes. It can be used at the graduate or undergraduate level.

Rubin, Allen, and Earl R. Babbie. 2007. Essential research methods for social work. Belmont, CA: Thomson Brooks Cole.

This is a shorter and less advanced version of Rubin and Babbie 2008. It can be used for research methods courses at the undergraduate or master’s levels of social work education.

Rubin, Allen, and Earl R. Babbie. 2008. Research methods for social work. 6th ed. Belmont, CA: Thomson Brooks Cole.

This comprehensive text focuses on producing quantitative and qualitative research as well as utilizing such research as part of the evidence-based practice process. It is widely used for teaching research methods courses at the undergraduate, master’s, and doctoral levels of social work education.

Thyer, Bruce A., ed. 2001. The handbook of social work research methods. Thousand Oaks, CA: Sage.

This comprehensive compendium includes twenty-nine chapters written by esteemed leaders in social work research. It covers quantitative and qualitative methods as well as general issues.

Yegidis, Bonnie L., and Robert W. Weinbach. 2009. Research methods for social workers. 6th ed. Boston: Allyn and Bacon.

This introductory paperback text covers a broad range of social work research methods and does so in a briefer fashion than most lengthier, hardcover introductory research methods texts.



A partnership between our Nation's largest voluntary mental health social service organization and its oldest school of social work has resulted in a unique research center, the Center for the Study of Social Work Practice. The Center is the only endowed research organization focused solely on the development and dissemination of social work practice knowledge. Its endowment of approximately $3,000,000 is supplemented by gifts and grants from public and voluntary sources. These financial resources, combined with the formidable human resources available to the Center, create an unmatched capacity for social work practice research.

The Center's human resources flow from the unique partnerships and collaborations between the faculty and staff of the two sponsoring institutions. The Center is a joint program of the Columbia University School of Social Work (CUSSW) and the Jewish Board of Family and Children's Services (JBFCS). With a full-time CUSSW faculty of over 40 and a professional JBFCS staff of over 1,000, a wide array of social work practice research interests are ever present. Through this "town and gown" partnership, practice research conducted through the Center is grounded firmly in the realities of practice while at the same time focused on significant contributions to practice theory.

The studies conducted by the Center have spanned a wide range of populations and problem areas: children and adolescents receiving outpatient mental health services, residential treatment services, school-based counseling services, and services for trauma; women who have suffered from domestic violence, with studies focusing on HIV-positive battered Latina women, Asian immigrant women, and Jewish women; elderly Japanese men and women in need of social support and mental health services; and multicultural competence among social work practitioners and students. Past studies have examined suicidality among preadolescents, group interventions for grandparents raising grandchildren, and the effectiveness of early prevention programs for parents who are likely to engage in child abuse. Other studies have examined service system issues such as the impact of managed care on services and the utilization of outcomes measurement.

Hand in hand with its knowledge development mission, the Center also promotes knowledge dissemination and utilization. Its research findings are disseminated internationally through extensive publications by Center affiliates. In addition, the Center's journal, Practice & Research, reaches thousands of individuals and organizations around the globe through its Internet distribution as well as through its paper edition. Periodic conferences, forums, and symposia bring together experts in specialized areas of practice research in order to exchange ideas, discuss research programs, and disseminate findings.

The Center provides a secure and exciting environment for faculty, staff and students from the sponsoring organizations who wish to advance its mission. The Center welcomes collaborations with other individuals and organizations interested in the advancement of social work knowledge.

The Center's founding director was Dr. Shirley Jenkins. Its director from 1992 through mid-2002 was Dr. Edward Mullen. The current director is Dr. Ronald A. Feldman, and Constantino Chito Trillana is the associate director.

Individuals or organizations wishing to be placed on the Center's mailing list, and those wishing to attend Center events should contact the Center at:

Center for the Study of Social Work Practice 1255 Amsterdam Avenue, New York, NY 10027 Phone 212-851-2266; Fax 212-851-2268 E-mail: [email protected]

J Clin Transl Sci. 2020 Jun; 4(3).

Communicating and disseminating research findings to study participants: Formative assessment of participant and researcher expectations and preferences

Cathy L. Melvin

1 College of Medicine, Medical University of South Carolina, Charleston, SC, USA

Jillian Harvey

2 College of Health Professions/Healthcare Leadership & Management, Medical University of South Carolina, Charleston, SC, USA

Tara Pittman

3 South Carolina Clinical & Translational Research Institute (CTSA), Medical University of South Carolina, Charleston, SC, USA

Stephanie Gentilin

Dana Burshell

4 SOGI-SES Add Health Study Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Teresa Kelechi

5 College of Nursing, Medical University of South Carolina, Charleston, SC, USA

Introduction:

Translating research findings into practice requires understanding how to meet communication and dissemination needs and preferences of intended audiences including past research participants (PSPs) who want, but seldom receive, information on research findings during or after participating in research studies. Most researchers want to let others, including PSP, know about their findings but lack knowledge about how to effectively communicate findings to a lay audience.

Methods:

We designed a two-phase, mixed methods pilot study to understand the experiences, expectations, concerns, preferences, and capacities of researchers and PSP in two age groups (adolescents/young adults (AYA) and older adults) and to test communication prototypes for sharing, receiving, and using information on research study findings.

Principal Results:

PSP and researchers agreed that study findings should be shared and that doing so could improve participant recruitment and enrollment, increase the use of research findings to improve health and health-care delivery, and build community support for research. Some differences and similarities in communication preferences and message format were identified between the PSP groups, reinforcing the best practice of customizing communication channels and messaging. Researchers wanted specific training and/or time and resources to help them prepare messages in formats that meet PSP needs and preferences but were unaware of resources to help them do so.

Conclusions:

Our findings offer insight into how to engage both PSP and researchers in the design and use of strategies to share research findings and highlight the need to develop services and support for researchers as they aim to bridge this translational barrier.

Introduction

Since 2006, the National Institutes of Health Clinical and Translational Science Awards (CTSA) have aimed to advance science and translate knowledge into evidence that, if implemented, helps patients and providers make more informed decisions with the potential to improve health care and health outcomes [1, 2]. This aim responded to calls by leaders in the fields of comparative effectiveness research, clinical trials, research ethics, and community engagement to assure that results of clinical trials were made available to participants, with the suggestion that providing participants with both positive and negative results should be the “ethical norm” [1, 3]. Others noted that

on the surface, the concept of providing clinical trial results might seem straightforward but putting such a plan into action will be much more complicated. Communication with patients following participation in a clinical trial represents an important and often overlooked aspect of the patient-physician relationship. Careful exploration of this issue, both from the patient and clinician-researcher perspective, is warranted [4].

Authors also noted that no systematic approach to operationalizing this “ethical norm” existed and that evidence was lacking to describe either positive or negative outcomes of sharing clinical trial results with study participants and the community [4]. It was generally assumed, but not supported by research, that sharing would result in better patient–physician/researcher communication, improvement in patient care and satisfaction with care, better patient/participant understanding of clinical trials, and enhanced clinical trial accrual [4].

More recent literature informs these processes but also raises unresolved concerns about the communication and dissemination of research results. A 2008 narrative review of available data on the effects of communicating aggregate and individual research results showed that

  • research participants want aggregate and clinically significant individual study results made available to them despite the transient distress that communication of results sometimes elicits [3, 5]. While differing in their preferences for specific channels of communication, they indicated that not sharing results fostered lack of participant trust in the health-care system, providers, and researchers [6] and an adverse impact on trial participation [5];
  • investigators recognized their ethical obligation to at least offer to share research findings with recipients and the nonacademic community but differed on whether they should proactively re-contact participants, the type of results to be offered to participants, the need for clinical relevance before disclosure, and the stage at which research results should be offered [5]. They also reported not being well versed in communication and dissemination strategies known to be effective and not having funding sources to implement proven strategies for sharing with specific audiences [5];
  • members of the research enterprise noted that while public opinion regarding participation in clinical trials is positive, clinical trial accrual remains low and that the failure to provide information about study results may be one of many factors negatively affecting accrual. They also called for better understanding of physician–researcher and patient attitudes and preferences and posited that development of effective mechanisms to share trial results with study participants should enhance patient–physician communication and improve clinical care and research processes [5].

A 2010 survey of CTSAs found that while professional and scientific audiences are currently the primary focus for communicating and disseminating research findings, it is equally vital to develop approaches for sharing research findings with other audiences, including individuals who participate in clinical trials [1, 5]. Effective communication and dissemination strategies are documented in the literature [6, 7], but most are designed to promote adoption of evidence-based interventions and lack applicability to participants overall, especially participants who are members of special populations and underrepresented minorities, who have fewer opportunities to participate in research and whose preferences for receiving research findings are unknown [7].

Researchers often have limited exposure to methods that offer them guidance in communicating and disseminating study findings in ways likely to improve awareness, adoption, and use of their findings [7]. Researchers also lack expertise in using communication channels such as traditional journalism platforms, live or face-to-face events such as public festivals, lectures, and panels, and online interactions [8]. Few strategies provide guidance for researchers about how to develop communications that are patient-centered, contain plain language, create awareness of the influence of findings on participant or population health, and increase the likelihood of enrollment in future studies.

Consequently, researchers often rely on traditional methods (e.g., presentations at scientific meetings and publication of study findings in peer-reviewed journals) despite evidence suggesting their limited reach and/or impact among professional/scientific and/or lay audiences [9, 10].

Input from stakeholders can enhance our understanding of how to assure that participants will receive understandable, useful information about research findings and, as appropriate, interpret and use this information to inform their decisions about changing health behaviors, interacting with their health-care providers, enrolling in future research studies, sharing their study experiences with others, or recommending to others that they participate in studies.

Purpose and Goal

This pilot project was undertaken to address the issues cited above and in response to expressed concerns of community members in our area about not receiving information on research studies in which they participated. The project design, a two-phase, mixed-methods pilot study, was informed by these community members' subsequent participation in a committee of community-academic representatives convened to identify options for improving the communication and dissemination of study results to both study participants and the community at large.

Our goals were to understand the experiences, expectations, concerns, preferences, and capacities of researchers and past research participants (PSP) in two age groups (adolescents/young adults (AYA) aged 15–25 years and older adults aged 50 years or older) and to test communication prototypes for sharing, receiving, and using information on research study findings. Our long-term objectives are to stimulate new, interdisciplinary collaborative research and to develop resources to meet PSP and researcher needs.

This study was conducted in an academic medical center located in southeastern South Carolina. Phase one consisted of surveying PSP and researchers. In phase two, in-person focus groups were conducted among PSP who completed the survey, and one-on-one interviews were conducted among researchers. Participants in either the interviews or focus groups responded to a set of questions from a discussion guide developed by the study team and reviewed three prototypes for communicating and disseminating study results, developed by the study team in response to PSP and researcher survey responses: a study results letter, a study results email, and a web-based communication built in MailChimp (Figs. 1–3).

Fig. 1. Prototype 1: study results email prototype. MUSC, Medical University of South Carolina.

Fig. 2. Prototype 2: study results letter prototype.

Fig. 3. Prototype 3: study results MailChimp prototypes 1 and 2. MUSC, Medical University of South Carolina.

PSP and researcher surveys

A 42-item survey questionnaire representing seven domains was developed by a multidisciplinary team of clinicians, researchers, and PSP that evaluated the questions for content, ease of understanding, usefulness, and comprehensiveness [ 11 ]. Project principal investigators reviewed questions for content and clarity [ 11 ]. The PSP and researcher surveys contained screening and demographic questions to determine participant eligibility and participant characteristics. The PSP survey assessed prior experience with research, receipt of study information from the research team, intention to participate in future research, and preferences and opinions about receipt of information about study findings and next steps. Specific questions for PSP elicited their preferences for communication channels such as phone call, email, social or mass media, and public forum, and included channels unique to South Carolina, such as billboards. PSP were asked to rank their preferences and experiences regarding receipt of study results using a Likert scale with the following anchors: “not at all interested” (0), “not very interested” (1), “neutral” (2), “somewhat interested” (3), and “very interested” (4).

The researcher survey contained questions about researcher decisions, plans, and actions regarding communication and dissemination of research results for a recently completed study. Items included knowledge and opinions about how to communicate and disseminate research findings, resources used and needed to develop communication strategies, and awareness and use of dissemination channels, message development, and presentation format.

A research team member administered the survey to PSP and researchers either in person or via phone. Researchers could also complete the survey online through Research Electronic Data Capture (REDCap©).

Focus groups and discussion guide content

The PSP focus group discussion guide contained questions to assess participants’ past experiences with receiving information about research findings; identify participant preferences for receiving research findings whether negative, positive, or equivocal; gather information to improve communication of research results back to participants; assess participant intention to enroll in future research studies, to share their study experiences with others, and to refer others to our institution for study participation; and provide comments and suggestions on prototypes developed for communication and dissemination of study results. Five AYA participated in one focus group, and 11 older adults participated in one focus group. Focus groups were conducted in an off-campus location with convenient parking and at times convenient for participants. Snacks and beverages were provided.

The researcher interview guide was designed to understand researchers’ perspectives on communicating and disseminating research findings to participants; explore researchers’ past experiences, if any, with communication and dissemination of research findings to study participants; document any approaches researchers may have used or intend to use to communicate and disseminate research findings to study participants; assess researcher expectations of benefits associated with sharing findings with participants, as well as perceived and actual barriers to sharing findings; and provide comments and suggestions on prototypes developed for communication and dissemination of study results.

Prototype materials

Three prototypes were presented to focus group participants: (1) a formal letter on hospital letterhead designed to be delivered by standard mail, describing the purpose and findings of a fictional study and thanking the individual for his/her participation, (2) a text-only email including a brief thank you and a summary of major findings, with a link to a study website for more information, and (3) an email formatted like a newsletter with detailed information on study purpose, method, and findings, with graphics to help convey results. A mock study website was also shown and included information about study background, purpose, methods, and results, as well as links to other research and health resources. Prototypes were presented in either paper or PowerPoint format during the focus groups and explained by a study team member, who then elicited participant input using the focus group guide. Researchers also reviewed and commented on prototype content and format in one-on-one interviews with a study team member.

Protection of Human Subjects

The study protocol (No. Pro00067659) was submitted to and approved by the Institutional Review Board at the Medical University of South Carolina in 2017. PSP (or the caretakers of PSP under age 18) and researchers provided verbal informed consent prior to completing the survey or participating in either a focus group or interview. Participants received a verbal introduction prior to participating in each phase.

Recruitment and Interview Procedures

Past study participants.

A study team member reviewed study participant logs from five recently completed studies at our institution involving AYA or older adults to identify individuals who provided consent for contact regarding future studies. Subsequent PSP recruitment efforts based on these searches were consistent with previous contact preferences recorded in each study participant’s consent indicating desire to be re-contacted. The primary modes of contact were phone/SMS and email.

Efforts to recruit other PSP were made through placement of flyers in frequented public locations such as coffee shops, recreation complexes, and college campuses and through social media, Yammer, and newsletters. ResearchMatch, a web-based recruitment tool, was used to alert its subscribers about the study. Potential participants reached by these methods contacted our study team to learn more about the study, and if interested and pre-screened eligible, volunteered and were consented for the study. PSP completing the survey indicated willingness to share experiences with the study team in a focus group and were re-contacted to participate in focus groups.

Researcher recruitment

Researchers were identified through informal outreach by study investigators and staff, a flyer distributed on campus, use of Yammer and other institutional social media platforms, and internal electronic newsletters. Researchers responding to these recruitment efforts were invited to participate in the researcher survey and/or interview.

Incentives for participation

Researchers and PSP received a $25 gift card for completing the survey and $75 for completing the interview (researcher) or focus group (PSP) (up to $100 per researcher or PSP).

Data tables displaying demographic and other data from the PSP surveys (Table 1) were prepared from the REDCap© database, and responses were reported as the number and percent of respondents choosing each response option.
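As a minimal sketch of this kind of tabulation (not the authors' actual REDCap-based workflow; the category labels below are taken from Table 1, but the dataset is a toy reconstruction), the "n (percent)" summaries can be produced like so:

```python
# Illustrative sketch only: reproduces the "n (percent)" summary style used
# in Table 1 for a toy list of categorical survey responses.
from collections import Counter

def n_percent_table(responses):
    """Return {option: "n (pct%)"} for a list of categorical responses."""
    total = len(responses)
    counts = Counter(responses)
    return {opt: f"{n} ({round(100 * n / total)}%)" for opt, n in counts.items()}

# Toy data mirroring the race distribution reported for all 48 respondents
race = (["White"] * 37
        + ["Black/African American"] * 10
        + ["More than one race"])
summary = n_percent_table(race)
```

Each cell of a table like Table 1 is one entry of such a summary, computed within the relevant subgroup (AYA, older adult, or ALL).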

Table 1. Past study participant (PSP) characteristics by adolescents/young adults (AYA), older adults, and ALL (all participants regardless of age)

| Characteristic | AYA (age 15–24.99 years) (n = 15) | Older adult (age 50 years or more) (n = 33) | ALL (n = 48) |
| --- | --- | --- | --- |
| Race | | | |
|   Black/African American | 2 (13%) | 8 (24%) | 10 (21%) |
|   White | 12 (80%) | 25 (76%) | 37 (77%) |
|   More than one race | 1 (7%) | – | 1 (2%) |
| Gender | | | |
|   Female | 12 (80%) | 25 (76%) | 37 (77%) |
|   Male | 3 (20%) | 8 (24%) | 11 (23%) |
| Education | | | |
|   Grade 9–12 | – | – | – |
|   High-school graduate | 2 (13%) | 8 (24%) | 10 (21%) |
|   Some college | 2 (13%) | 12 (36%) | 14 (29%) |
|   Associate degree | – | 1 (3%) | 1 (2%) |
|   Bachelor’s degree | 9 (60%) | 7 (21%) | 16 (33%) |
|   Master’s degree | 1 (7%) | 5 (16%) | 6 (13%) |
|   Professional degree | 1 (7%) | – | 1 (2%) |
| Ethnicity | | | |
|   Not Hispanic/Latino | 14 (93%) | 32 (97%) | 46 (96%) |
|   Hispanic/Latino | 1 (7%) | 1 (3%) | 2 (4%) |

Age mean (SD) = 49.7 (18.6).

Focus group and researcher interview data were recorded (either via audio recording and/or notes taken by research staff) and analyzed via a general inductive qualitative approach, a method appropriate for program evaluation studies and aimed at condensing large amounts of textual data into frameworks that describe the underlying process and experiences under study [ 12 ]. Data were analyzed by our team’s qualitative expert who read the textual data multiple times, developed a coding scheme to identify themes in the textual data, and used group consensus methods with other team members to identify unique, key themes.

Sixty-one of sixty-five PSP who volunteered to participate in the PSP survey were screened eligible, fifty were consented, and forty-eight completed the survey questionnaire. Of the 48 PSP completing the survey, 15 (32%) were AYA and 33 (68%) were older adults. The mean age of survey respondents was 49.7 years overall, 23.5 for AYA, and 61.6 for older adults. Survey respondents were predominantly White, non-Hispanic/Latino, and female, with some college or a college degree (Table 1). The percentage of participants never or rarely needing any help with reading/interpreting written materials was above 93% in both groups.

Over 90% of PSP responded that they would participate in another research study, and more than 75% of PSP indicated that study participants should know about study results. Most (68.8%) respondents indicated that they did not receive any communications from study staff after they finished a study.

PSP preferences for communication channel are summarized in Table 2 and based on responses to the question “How do you want to receive information?” Both AYA and older adults agreed or completely agreed that they preferred email to other communication channels and that billboards did not apply to them. Older adult preferences for communication channels, as indicated by agreeing or completely agreeing, were, in ranked order from highest to lowest: mailed letters/postcards, newsletter, and phone. A majority (over 50%) of older adults completely disagreed or disagreed with texting and social media as options and had only a slight preference for mass media, public forums, and wellness fairs or expos.
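The ranked ordering described above amounts to sorting channels by their combined "agree" and "completely agree" counts. A minimal sketch, using the older adult counts reported in Table 2 for the three top-ranked channels (the dictionary below is an illustrative subset, not the full table):

```python
# Combined "agree" + "completely agree" counts for older adults (n = 33),
# taken from Table 2 for the three channels ranked in the text.
older_adult_agree = {
    "mailed letters/postcards": 7 + 16,
    "newsletter": 6 + 13,
    "phone": 2 + 14,
}

# Rank channels from most to least preferred by agreement count.
ranked = sorted(older_adult_agree, key=older_adult_agree.get, reverse=True)
```

This reproduces the ranked order reported in the text (mailed letters/postcards, newsletter, phone).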

Table 2. Communication preference by group: AYA*, older adult**, and ALL (n = 48)

| Communication format | Completely disagree | Disagree | Neutral | Agree | Completely agree | Don’t know | Not applicable |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Phone | | | | | | | |
|   AYA | 4 (26.7) | 3 (20.0) | 6 (40.0) | 1 (6.7) | 1 (6.7) | – | – |
|   Older adult | 10 (30.3) | 1 (3.0) | 6 (18.2) | 2 (6.1) | 14 (42.4) | – | – |
|   ALL | 14 (29.2) | 4 (8.3) | 12 (25.0) | 3 (9.1) | 15 (31.3) | – | – |
| Mailed letters, postcards | | | | | | | |
|   AYA | 5 (33.3) | 4 (26.7) | 2 (13.3) | 2 (13.3) | 2 (13.3) | – | – |
|   Older adult | 3 (9.1) | 2 (6.1) | 5 (15.2) | 7 (21.2) | 16 (48.5) | – | – |
|   ALL | 8 (16.7) | 6 (12.5) | 7 (14.6) | 9 (18.8) | 18 (37.5) | – | – |
| Email | | | | | | | |
|   AYA | – | – | – | 3 (20.0) | 12 (80.0) | – | – |
|   Older adult | 5 (15.2) | 1 (3.0) | 2 (6.1) | 2 (6.1) | 21 (63.6) | – | – |
|   ALL | 5 (10.4) | 1 (2.1) | 2 (4.2) | 5 (10.4) | 33 (68.8) | – | – |
| Texting | | | | | | | |
|   AYA | 5 (33.3) | 2 (13.3) | 2 (13.3) | 4 (26.7) | 2 (13.3) | – | – |
|   Older adult | 17 (51.5) | 1 (3.0) | 4 (12.1) | 3 (9.1) | 4 (12.1) | – | – |
|   ALL | 22 (45.8) | 3 (6.3) | 6 (12.5) | 7 (14.6) | 6 (12.5) | – | – |
| Newsletter | | | | | | | |
|   AYA | 5 (33.3) | 3 (20.0) | 4 (26.7) | 1 (6.7) | 2 (13.3) | – | – |
|   Older adult | 4 (12.1) | 2 (6.1) | 8 (24.2) | 6 (18.2) | 13 (39.4) | – | – |
|   ALL | 9 (18.8) | 5 (10.4) | 12 (25.0) | 7 (14.6) | 15 (31.3) | – | – |
| Social media | | | | | | | |
|   AYA | 5 (33.3) | 5 (33.3) | 4 (26.7) | – | 1 (6.7) | – | – |
|   Older adult | 20 (60.6) | – | 4 (12.1) | 1 (3.0) | 6 (21.2) | – | – |
|   ALL | 25 (52.1) | 5 (10.4) | 8 (16.7) | 1 (2.1) | 7 (14.6) | – | – |
| Mass media | | | | | | | |
|   AYA | 3 (20.0) | 6 (40.0) | 6 (40.0) | – | – | – | – |
|   Older adult | 14 (42.4) | 2 (6.1) | 7 (21.2) | 4 (12.1) | 6 (18.2) | – | – |
|   ALL | 17 (35.4) | 8 (16.7) | 13 (27.1) | 4 (8.3) | 6 (12.5) | – | – |
| Public forum | | | | | | | |
|   AYA | 5 (33.3) | 2 (13.3) | 6 (40.0) | 1 (6.7) | 1 (6.7) | – | – |
|   Older adult | 12 (36.4) | 4 (12.1) | 5 (15.2) | 6 (18.2) | 6 (18.2) | – | – |
|   ALL | 17 (35.4) | 6 (12.5) | 11 (22.9) | 7 (14.6) | 7 (14.6) | – | – |
| Wellness fair/expo | | | | | | | |
|   AYA | 4 (26.7) | 1 (6.7) | 5 (33.3) | 5 (33.3) | – | – | – |
|   Older adult | 12 (36.4) | 3 (9.1) | 9 (27.3) | 2 (6.1) | 7 (21.2) | – | – |
|   ALL | 16 (33.3) | 4 (8.3) | 14 (29.4) | 7 (14.6) | 7 (14.6) | – | – |
| Other (billboard) | | | | | | | |
|   AYA | – | – | – | – | 1 (6.7) | 3 (20.0) | 11 (73.3) |
|   Older adult | 2 (6.1) | – | 1 (3.0) | – | 1 (3.0) | 8 (24.2) | – |
|   ALL | 2 (4.2) | – | – | 1 (2.1) | 1 (2.1) | 4 (8.3) | 39 (81.3) |

ALL, total per column.

While AYA preferred email over all other options, they completely disagreed/disagreed with mailed letters/postcards, social media, and mass media options.

When communication formats were ranked overall by each group and by both groups combined, the ranking from most to least preferred was: written materials; opportunities to interact with study teams and ask questions; visual materials such as charts, graphs, and pictures; and videos, audio recordings, and podcasts.

PSP Focus Groups

PSP want to receive and share information on the findings of studies in which they participated. Furthermore, participants stated their desire to share study results across social networks and highlighted opportunities to share communicated study results with their health-care providers, family members, friends, and other acquaintances with similar medical conditions.

Because of the things I was in a study for, it’s a condition I knew three other people who had the same condition, so as soon as it worked for me, I put the word out, this is great stuff. I would forward the email with the link, this is where you can go to also get in on this study, or I’d also tell them, you know, for me, like the medication. Here’s the medication. Here’s the name of it. Tell your doctor. I would definitely share. I’d just tell everyone without a doubt. Right when I get home, as soon as I walk in the door, and say Renee-that’s my daughter-I’ve got to tell you this.

Communication of study information could happen through several channels including social media, verbal communication, sharing of written documents, and forwarding emails containing a range of content in a range of formats (e.g., reports and pamphlets).

Word of mouth and I have no shame in saying I had head to toe psoriasis, and I used the drug being studied, and so I would just go to people, hey, look. So, if you had it in paper form, like a pamphlet or something, yeah I’d pass it on to them.

PSP prefer clear, simple messaging and highlighted multiple, preferred communication modalities for receiving information on study findings including emails, letters, newsletters, social media, and websites.

The wording is really simple, which I like. It’s to the point and clear. I really like the bullet points, because it’s quick and to the point. I think the [long] paragraphs-you get lost, especially when you are reading on your phone.

They indicated a clear preference for colorful, simple, easy-to-read communication. PSP also expressed some concern about difficulty opening emails with pictures and disliked lengthy written text: “I don’t read long emails. I tend to delete them.”

PSP indicated some confusion about common research language. For example, one participant felt that the word “estimate” suggests the research findings were an approximation: “When I hear those words, I just think you’re guessing, estimate, you know? It sounds like an estimate, not a definite answer.”

Researcher Survey

Twenty-three of the thirty-two researchers who volunteered for the researcher survey were screened eligible; two declined to participate, and 19 provided consent and completed the survey. The mean age of survey respondents was 51.8 years. Respondents were predominantly White, non-Hispanic/Latino, and female, and all held either a professional school degree or a doctoral degree. When asked whether it is important to inform study participants of study results, 94.8% of responding researchers agreed that it was extremely important or important. Most researchers had disseminated findings to study participants or planned to do so.

Researchers listed a variety of reasons for their rating of the importance of informing study participants of study results including “to promote feelings of inclusion by participants and other community members”, “maintaining participant interest and engagement in the subject study and in research generally”, “allowing participants to benefit somewhat from their participation in research and especially if personal health data are collected”, “increasing transparency and opportunities for learning”, and “helping in understanding the impact of the research on the health issue under study”.

Some researchers view sharing study findings as an “ethical responsibility and/or a tenet of volunteerism for a research study”. For example, “if we (researchers) are obligated to inform participants about anything that comes up during the conduct of the study, we should feel compelled to equally give the results at the end of the study”.

One researcher “thought it a good idea to ask participants if they would like an overview of findings at the end of the study that they could share with others who would like to see the information”.

Two researchers said that sharing research results “depends on the study” and that providing “general findings to the participants” might be “sufficient for a treatment outcome study”.

Researchers indicated that despite their willingness to share study results, they face resource challenges such as a lack of funding and/or staff to support communication and dissemination activities and need assistance in developing these materials. One researcher remarked “I would really like to learn what are (sic) the best ways to share research findings. I am truly ignorant about this other than what I have casually observed. I would enjoy attending a workshop on the topic with suggested templates and communication strategies that work best” and that this survey “reminds me how important this is and it is promising that our CTSA seems to plan to take this on and help researchers with this important study element.”

Another researcher commented on a list of potential types of assistance that could be made available to assist with communicating and disseminating results, that “Training on developing lay friendly messaging is especially critically important and would translate across so many different aspects of what we do, not just dissemination of findings. But I’ve noticed that it is a skill that very few people have, and some people never can seem to develop. For that reason, I find as a principal investigator that I am spending a lot of my time working on these types of materials when I’d really prefer research assistant level folks having the ability to get me 99% of the way there.”

Most researchers indicated that they provide participants with personal tests or assessments from the study (60%, n = 6) and final study results (72.7%, n = 8), but not other information such as recruitment and retention updates, interim updates or results, information on the impact of the study on either the health topic or the community, information on other studies, or tips and resources related to the health topic and self-help. Sixty percent (n = 6) of researcher respondents reported sharing the study team’s planned next steps and information on how the study results would be used.

When asked about how they communicated results, phone calls were mentioned most frequently followed by newsletters, email, webpages, public forums, journal article, mailed letter or postcard, mass media, wellness fairs/expos, texting, or social media.

Researchers used a variety of communication formats to communicate with study participants. Written descriptions of study findings were most frequently reported followed by visual depictions, opportunities to interact with study staff and ask questions or provide feedback, and videos/audio/podcasts.

Seventy-three percent of researchers reported that they made efforts to make study findings information available to those with low levels of literacy, health literacy, or other possible limitations such as non-English-speaking populations.

In open-ended responses, most researchers reported wanting to increase their awareness and use of on-campus training and other resources to support communication and dissemination of study results, including how to get resources and budgets to support their use.

Researcher Interviews

One-on-one interviews with researchers identified two themes.

Researchers may struggle to see the utility of communicating small findings

Some researchers indicated hesitancy in communicating preliminary findings, findings from small studies, or highly summarized information. In addition, in comparison to research participants, researchers seemed to place a higher value on specific details of the study.

“I probably wouldn’t put it up [on social media] until the actual manuscript was out with the graphs and the figures, because I think that’s what people ultimately would be interested in.”

Researchers face resource and time limitations in communication and dissemination of study findings

Researchers expressed interest in communicating research results to study participants. However, they highlighted several challenges including difficulties in tracking current email and physical addresses for participants; compliance with literacy and visual impairment regulations; and the number of products already required in research that consume a considerable amount of a research team’s time. Researchers expressed a desire to have additional resources and templates to facilitate sharing study findings. According to one respondent, “For every grant there is (sic) 4-10 papers and 3-5 presentations, already doing 10-20 products.” Researchers do not want to “reinvent the wheel” and would like to pull from existing papers and presentations on how to share with participants and have boilerplate, writing templates, and other logistical information available for their use.

Researchers would also like training in the form of lunch-n-learns, podcasts, or easily accessible online tools on how to develop materials and approaches. Researchers are interested in understanding the “do’s and don’ts” of communicating and disseminating study findings and any regulatory requirements that should be considered when communicating with research participants following a completed study. For example, one researcher asked, “From beginning to end – the do’s and don’ts – are stamps allowed as a direct cost? or can indirect costs include paper for printing newsletters, how about designing a website, a checklist for pulling together a newsletter?”

The purpose of this pilot study was to explore the current experiences, expectations, concerns, preferences, and capacities of PSP, including youth/young adult and older adult populations, and researchers for sharing, receiving, and using information on research study findings. PSP and researchers agreed, as shown in earlier work [ 3 , 5 ], that sharing information with participants upon study completion was something that should be done and that had value for both PSP and researchers. As in prior studies [ 3 , 5 ], both groups also agreed that sharing study findings could improve ancillary outcomes such as participant recruitment and enrollment and the use of research findings to improve health and health-care delivery, and could build overall community support for research. In addition, communicating results acknowledges study participants’ contributions to research, a principle rooted in respect for participants as persons rather than merely a means to further scientific investigation [ 5 ].

The majority of PSP indicated that they did not receive research findings from studies they participated in, that they would like to receive such information, and that they preferred specific communication methods, such as email and phone calls, for receiving it. While our sample was small, we did identify preferences for communication channels and for message format. Some differences and similarities in preferences for communication channels and message format were identified between AYA and older adults, reinforcing the best practice of customizing communication channel and messaging to each specific group. However, the preference for email and the similar rank ordering of messaging formats suggest that some overall communication preferences may apply to most populations of PSP. It remains unclear whether participants prefer individual or aggregate study results; the answer likely depends on the type of study, for example, individual genotype results versus aggregate results of epidemiological studies [ 13 ]. A study by Miller et al. suggests that the impact of receiving aggregate results, whether clinically relevant or not, may equal that of receiving individual results [ 14 ]. Further investigation is warranted to evaluate whether, when, and how researchers should communicate different types of results to study participants, considering the influence of demographic factors such as age and ethnicity on preferences.

While researchers acknowledged that PSP would like to hear from them regarding research results and that they wanted to meet this expectation, they indicated needing specific training and/or time and resources to provide this information to PSP in a way that meets PSP needs and preferences. Costs associated with producing reports of findings were a concern of researchers in our study, similar to findings from a study by Di Blasi and colleagues in which 15% (8 of 53) of investigators indicated that they wanted to avoid extra costs and administrative work associated with the conduct of their studies [ 15 ]. In that study, the most common reason for not informing participants about study results, cited by forty percent of investigators, was that they had never considered the option. Researchers were also unaware of resources available on existing platforms at their home institution or elsewhere to help them with communication and dissemination efforts [ 10 ].

Addressing Barriers to Implementation

Information from academic and other organizations on how to best communicate research findings in plain language is available and could be shared with researchers and their teams. The Cochrane Collaborative [ 16 ], the Centers for Disease Control and Prevention [ 17 ], and the Patient-Centered Outcomes Research Institute [ 18 ] have resources to help researchers develop plain language summaries using proven approaches to overcome literacy and other issues that limit participant access to study findings. Some academic institutions have electronic systems in place to confidentially share templated laboratory and other personal study information with participants and, if appropriate, with their health-care providers.

Limitations

Findings from the study are limited by several study and respondent characteristics. The sample was drawn from research records at one university engaging in research in a relatively defined geographic area and among two special populations: AYA and older adults. As such, participants were not representative of either the general population in the area, the population of PSP or researchers available in the area, or the racial and ethnic diversity of potential and/or actual participants in the geographic area. The small number of researcher participants did not represent the pool of researchers at the university, and the research studies from which participants were drawn were not representative of the broad range of clinical and translational research undertaken by our institution or within the geographic community it serves. The number of survey and focus group participants was insufficient to allow robust analysis of findings specific to participants’ race, ethnicity, gender, or membership in the target age groups of AYA or older adult. However, these data will inform a future trial with adequate representations from underrepresented and special population groups.

Since all PSP had participated in research, they may have been biased in favor of wanting to know more about study results and/or supportive/nonsupportive of the method of communication/dissemination they were exposed to through their participation in these studies.

Conclusions

Our findings provide information from PSP and researchers on their expectations about sharing study findings, preferences for how to communicate and disseminate study findings, and need for greater assistance in removing roadblocks to using proven communication and dissemination approaches. This information illustrates the potential to engage both PSP and researchers in the design and use of communication and dissemination strategies and materials to share research findings, engage in efforts to more broadly disseminate research findings, and inform our understanding of how to interpret and communicate research findings for members of special population groups. While several initial prototypes were developed in response to this feedback and shared for review by participants in this study, future research will focus on finalizing and testing specific communication and dissemination prototypes aimed at these special population groups.

Findings from our study support a major goal of the National Center for Advancing Translational Science Recruitment Innovation Center to engage and collaborate with patients and their communities to advance translation science. In response to the increased awareness of the importance of sharing results with study participants or the general public, a template for dissemination of research results is available in the Recruitment and Retention Toolbox through the CTSA Trial Innovation Network (TIN: trialinnovationnetwork.org ). We believe that our findings will inform resources for use in special populations through collaborations within the TIN.

Acknowledgment

This pilot project was supported, in part, by the National Center for Advancing Translational Sciences of the NIH under Grant Number UL1 TR001450. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Disclosures

The authors have no conflicts of interest to declare.

Ethical Approval

This study was reviewed, approved, and continuously overseen by the IRB at the Medical University of South Carolina (ID: Pro00067659). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Social Work Research and Its Relevance to Practice: “The Gap Between Research and Practice Continues to be Wide”

Journal of Social Service Research, 43(1), 1-19

Barbra Teater, City University of New York - College of Staten Island

Strategies for effective dissemination of research to United States policymakers: a systematic review

Laura Ellen Ashcraft, Deirdre A. Quinn & Ross C. Brownson

Implementation Science, volume 15, Article number: 89 (2020)


Research has the potential to influence US social policy; however, existing research in this area lacks a coherent message. The Model for Dissemination of Research provides a framework through which to synthesize lessons learned from research to date on the process of translating research to US policymakers.

The peer-reviewed and grey literature was systematically reviewed to understand common strategies for disseminating social policy research to policymakers in the United States. We searched Academic Search Premier, PolicyFile, SocINDEX, Social Work Abstracts, and Web of Science from January 1980 through December 2019. Articles were independently reviewed and thematically analyzed by two investigators and organized using the Model for Dissemination of Research.

The search yielded 5225 titles and abstracts for inclusion consideration. Of these, 303 full-text articles were reviewed, with 27 meeting inclusion criteria. Common sources of research dissemination included government, academic researchers, the peer-reviewed literature, and independent organizations. The most frequently disseminated research topics were health-related, and legislators and executive branch administrators were the most common target audiences. Print materials and personal communication were the most common channels for disseminating research to policymakers. Dissemination channels varied by level of government (e.g., a more formal legislative process at the federal level than at other levels). Findings from this work suggest that dissemination is most effective when it starts early, galvanizes support, uses champions and brokers, considers contextual factors, is timely, relevant, and accessible, and knows the players and process.

Conclusions

Effective dissemination of research to US policymakers exists; yet, rigorous quantitative evaluation is rare. A number of cross-cutting strategies appear to enhance the translation of research evidence into policy.

Registration

Not registered.

Contributions to the literature

This is one of the first systematic reviews to synthesize how social policy research evidence is disseminated to US policymakers.

Print materials and personal communications were the most commonly used channels to disseminate social policy research to policymakers.

Several cross-cutting strategies (e.g., start early, use evidence “champions,” make research products more timely, relevant, and accessible) were identified that are likely to lead to more effective translation of research evidence into the policymaking process in the United States.

In recent years, social scientists have sought to understand how research may influence policy [ 1 , 2 ]. Interest in this area of investigation has grown with the increased availability of funding for policy-specific research (e.g., dissemination and implementation research) [ 3 ]. However, because of variation in the content of public policy, this emerging area of scholarship lacks a coherent message that specifically addresses social policy in the United States (US). While other studies have examined the use of evidence in policymaking globally [ 4 , 5 , 6 , 7 ], the current review focuses on US social policy; for the purposes of this study, social policy includes policies which focus on antipoverty, economic security, health, education, and social services [ 8 , 9 , 10 ].

Significant international research exists on barriers and facilitators to the dissemination and use of research evidence by policymakers [ 4 , 5 ]. Common themes include the importance of personal relationships, the timeliness of evidence, and resource availability [ 4 , 5 ]. Previous work demonstrates the importance of understanding policymakers’ perceptions and how evidence is disseminated. The current review builds on this existing knowledge to examine how research evidence reaches policymakers and to understand what strategies are likely to be effective in overcoming identified barriers.

Theoretical frameworks offer a necessary foundation for identifying and assessing strategies for disseminating research to policymakers. The Model for Dissemination of Research integrates Diffusion of Innovations Theory and Social Marketing Theory with the Mathematical Theory of Communication [ 11 , 12 ] and the Matrix of Persuasive Communication [ 13 , 14 ] to address the translation gap between research and policy. The purpose of the Model for Dissemination of Research is to highlight the gaps between research and target audiences (e.g., policymakers) and to improve dissemination through the use of a theoretical foundation and review of the literature [ 15 ]. Diffusion of Innovations Theory describes the spread and adoption of novel interventions through an “s-curve,” an ordered process, and characteristics of the message and audience [ 16 ]. Additional theoretical contributions for dissemination research come from Social Marketing Theory, which applies commercial marketing strategies summarized by the four P’s (product, price, place, and promotion) and holds that communication of the message alone will not change behavior [ 17 ].

The Model for Dissemination of Research includes the four key components described by Shannon and Weaver [ 11 , 12 ] and later McGuire [ 13 , 14 ] of the research translation process: the source, message, audience, and channel (Fig. 1 ). The source includes researchers who generate evidence. The message includes relevant information sent by the source on a policy topic. The audience includes those receiving the message via the channel [ 15 ]. The channel is how the message gets from the source to the audience [ 15 ].

figure 1

The Model for Dissemination of Research. The Model for Dissemination of Research integrates Diffusion of Innovations Theory, the Mathematical Theory of Communication, and Social Marketing Theory to develop a framework for conceptualizing how information moves from source to audience. Originally published by Brownson et al. in the Journal of Public Health Management and Practice in 2018

While the Model for Dissemination of Research and its origins (i.e., the Mathematical Theory of Communication and Diffusion of Innovations Theory) appear linear in their presentation, Shannon and Weaver [ 11 , 12 ] and Rogers [ 16 ] clearly acknowledge that the dissemination of information is not a linear process and is affected by the environment within which it occurs. This approach aligns with the systems model, or knowledge-to-action approach, proposed by Best and Holmes [ 18 ]. The systems model accounts for the influence of the environment on a process and for the complexity of the system [ 18 ]. Therefore, while some theoretical depictions appear linear in their presentation, it is important to acknowledge the critical role of systems thinking.

To date, lessons learned from dissemination and implementation science about the ways in which research influences policy are scattered across diverse disciplines and bodies of literature. These disparate lessons highlight the critical need to integrate knowledge across disciplines. The current study aims to make sense of and distill these lessons by conducting a systematic review of scientific literature on the role of research in shaping social policy in the United States. The results of this systematic review are synthesized in a preliminary conceptual model (organized around the Model for Dissemination of Research) with the goal of improving dissemination strategies for the translation of scientific research to policymakers and guiding future research in this area.

This systematic review aims to synthesize existing evidence about how research has been used to influence social policy and is guided by the following research questions:

What are common strategies for using research to influence social policy in the United States?

What is the effectiveness of these strategies?

We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA-P) model [ 19 , 20 ] to examine and distill existing studies on strategies for using research evidence to influence social policy.

Eligibility criteria

Studies were eligible for this review if they met the following inclusion criteria: (1) occurred in the United States; (2) reported in English; (3) systematically evaluated the impact of research on social policy (this typically excluded studies focusing on policymaker dissemination preferences); (4) discussed domestic social policy (as defined above); and (5) were published in the peer reviewed literature or the grey literature (e.g., think tank research briefs, foundation research publications).

We chose to focus our review on the United States to capture the strengths and challenges of its unique, multi-level policy and political environment. The decentralized structure of government in the United States allows significant decision-making authority at the state and local levels, with wide variation in capacity and the availability of resources across the country [ 21 ]. For example, some states have full-time legislatures while other states have part-time legislatures. In total, these factors create a fitting and complex environment in which to examine the dissemination of research to policymakers. The influence of lobbying in the United States also differs from other western countries. In the United States, there is more likely to be a “winner-take-all” process in which some advocates (often corporations and trade associations) have disproportionate influence [ 22 ]. In addition, the role of evidence differs in the US compared with other countries: the US tends to take a narrower focus on intervention impact, with less emphasis on system-level issues (e.g., implementation, cost) [ 23 ].

Studies were excluded if they were not in English or occurred outside of the United States. We also excluded non-research sources, such as editorials, opinion pieces, and narrative stories that contain descriptions of dissemination strategies without systematic evaluation. Further, studies were excluded if the results focused on practitioners (e.g., case managers, local health department workers) and/or if results for practitioners could not be parsed from results for policymakers.

To identify studies that systematically evaluated the impact of research on social policy, we reviewed the research questions and results of each study to determine whether or not they examined how research evidence reaches policymakers (as opposed to policymaker preferences for disseminated research). For example, we would not include a research study that only describes different types of policy briefs without also evaluating how the briefs are used by policymakers to inform policy decisions. We used the Model for Dissemination of Research, as defined above, to see if and how the studies describe and test the channels of dissemination. We built on the Model for Dissemination of Research by also considering passive forms of knowledge, such as peer-reviewed literature or research briefs, as potential sources of knowledge and not just as channels in and of themselves.

Information sources

We took a three-pronged approach to develop a comprehensive understanding of existing knowledge in this area. First, we searched the peer-reviewed literature using the following databases: Academic Search Premier, PolicyFile, SocINDEX, Social Work Abstracts, and Web of Science. Second, we expanded the inquiry by searching the grey literature through PolicyFile. Third, we included recommendations from experts in the field of dissemination of research evidence to policymakers, resulting in 137 recommended publications.

Search strategy

Our search strategy included the following terms: [research OR study OR studies OR knowledge] AND [policy OR policies OR law OR laws OR legislation] AND [use OR utilization OR utilisation] OR [disseminate OR dissemination OR disseminating] OR [implementation OR implementing OR implement] OR [translate OR translation OR translating]. Our search was limited to studies in the United States between 1980 and 2019. We selected this timeframe based on historical context: the 1950s through the 1970s saw the development of the modern welfare state, which was (relatively) complete by 1980. However, shifting political agendas in the 1980s saw the demand for evidence increase to provide support for social programs [ 24 ]; we hoped to capture this increase in evidence use in policy.
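As a minimal sketch, the term groups reported above can be assembled into the full Boolean string programmatically. This is an illustrative reconstruction only; actual query syntax and operator precedence differ by database platform.

```python
# Reconstruct the reported Boolean search string from its term groups.
# Illustrative only: real databases (e.g., Web of Science, EBSCO) use
# platform-specific syntax, and AND/OR precedence rules vary.

def or_group(terms):
    """Join synonyms with OR and bracket them, as in the reported strategy."""
    return "[" + " OR ".join(terms) + "]"

groups = {
    "research":    ["research", "study", "studies", "knowledge"],
    "policy":      ["policy", "policies", "law", "laws", "legislation"],
    "use":         ["use", "utilization", "utilisation"],
    "disseminate": ["disseminate", "dissemination", "disseminating"],
    "implement":   ["implementation", "implementing", "implement"],
    "translate":   ["translate", "translation", "translating"],
}

query = (
    f"{or_group(groups['research'])} AND {or_group(groups['policy'])} "
    f"AND {or_group(groups['use'])} OR {or_group(groups['disseminate'])} "
    f"OR {or_group(groups['implement'])} OR {or_group(groups['translate'])}"
)
print(query)
```

Grouping synonyms first and combining the groups afterward makes it easy to port the same strategy across databases with different Boolean syntax.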

Selection process

All titles and abstracts were screened by the principal investigator (LEA) with 20% reviewed at random by a co-investigator (DAQ) with total agreement post-training. Studies remaining after abstract screening moved to full text review. The full text of each study was considered for inclusion (LEA and DAQ) with conflicts resolved by consensus. The data abstraction form was developed by the principal investigator (LEA) based on previous research [ 25 , 26 ] and with feedback from co-authors. Data were independently abstracted from each reference in duplicate with conflicts resolved by consensus (LEA and DAQ). We completed reliability checks on 20% of the final studies, selected at random, to ensure accurate data abstraction.

Data synthesis

Abstracted data were qualitatively analyzed using thematic analysis (LEA and DAQ), guided by the Model for Dissemination of Research. The goal of the preliminary conceptual model was to synthesize components of dissemination across studies that evaluate the dissemination of social policy research to policymakers.

Descriptive results

The literature search returned 5675 articles, plus 137 articles recommended by content experts, leaving 5225 titles and abstracts to screen after duplicates were removed. Of those, 4922 were excluded for not meeting inclusion criteria. The remaining 303 full-text articles were reviewed, with 276 excluded for not meeting inclusion criteria. Twenty-seven articles met inclusion criteria (see Fig. 2 for the PRISMA flow diagram).
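The reported flow counts are internally consistent, as a quick arithmetic check shows (the duplicate count is implied by the text rather than stated):

```python
# PRISMA flow arithmetic using the counts reported in the text.
identified = 5675 + 137        # database search hits + expert recommendations
screened = 5225                # titles/abstracts screened after de-duplication
duplicates = identified - screened            # implied duplicate count
excluded_at_screening = 4922
full_text = screened - excluded_at_screening  # full-text articles reviewed
excluded_at_full_text = 276
included = full_text - excluded_at_full_text  # articles meeting criteria

print(duplicates, full_text, included)  # 587 303 27
```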

figure 2

PRISMA flowchart. The preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram reports included and excluded articles in the systematic review

Included studies are listed in Table 1 . The 27 included studies comprised 6 using quantitative methods, 18 employing qualitative methods, and 3 using a mixed-methods approach. The qualitative studies mostly employed interviews ( n = 10), while others used case studies ( n = 6) or focus groups ( n = 3). Most studies examined state-level policy ( n = 18) and nine examined federal-level policy, with some studies spanning multiple levels of government. Included studies focused on the executive and legislative branches; none examined the judicial branch.

We examined dissemination based on geographic regions and/or political boundaries (i.e., regions or states). Sixteen of the 27 studies (about 59%) used national samples or multiple states and did not provide geographic-specific results [ 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 ]. Two studies (about 7%) did not specify the geographic region or state in which the study took place [ 43 , 44 ]. Of the remaining studies, four examined policymaking in the Northeastern United States [ 45 , 46 , 47 , 48 ], four in the Western US [ 49 , 50 , 51 , 52 ], and one in the South [ 53 ]. The geographic regional groups used similar channels to disseminate evidence to policymakers, including publications and presentations.

We also analyzed whether dissemination at different levels of government (i.e., local, state, and federal) used unique channels. Six of the included studies (about 22%) examined multiple levels of government and did not separate results based on specific levels of government [ 27 , 28 , 29 , 30 , 31 , 53 ]. One study did not specifically identify the level of government examined [ 46 ]. While there is considerable overlap in dissemination channels used at each level of government, there are some unique characteristics.

Five studies (about 18.5%) examined dissemination at the federal level [ 32 , 33 , 34 , 35 , 36 ]. At the federal level, dissemination channels tended to be more formal, such as congressional committee hearings [ 36 ] and legislative development [ 35 ]. Twelve studies (about 44%) evaluated dissemination at the state level [ 38 , 39 , 40 , 41 , 42 , 43 , 44 , 47 , 48 , 50 , 51 , 52 ]. State-level dissemination relied heavily on printed materials, including mental health care disparity report cards [ 41 ], policy briefs [ 38 ], and effectiveness reports [ 50 ]. Another common channel was in-person communication, such as one-on-one meetings [ 44 ] and presentations to stakeholders [ 51 ]. Three studies (about 11%) focused on local-level government. Dissemination channels at the local level showed little consistency across the three studies, with channels including public education [ 45 ], reports [ 37 ], and print materials [ 49 ].
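The approximate percentages quoted in this subsection all follow from the denominator of 27 included studies, as a quick check confirms (counts taken from the text):

```python
# Verify the "about X%" figures against n = 27 included studies.
N = 27
counts = {
    "national or multi-state samples": 16,  # reported as "about 59%"
    "region not specified":            2,   # reported as "about 7%"
    "multiple levels of government":   6,   # reported as "about 22%"
    "federal level":                   5,   # reported as "about 18.5%"
    "state level":                     12,  # reported as "about 44%"
    "local level":                     3,   # reported as "about 11%"
}
for label, k in counts.items():
    print(f"{label}: {k}/{N} = {100 * k / N:.1f}%")
```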

Roughly half of studies were atheoretical ( n = 13). Four studies used the Weiss Typology [ 29 , 36 , 54 , 55 ], two studies used the operationalization framework [ 45 , 53 ], and two studies used the advocacy coalition framework [ 53 , 56 ].

Model for dissemination of research

We used the Model for Dissemination of Research to summarize the findings from the included studies into the themes of source, message, audience, and channel (i.e., strategies). We integrated themes from the studies into the Model (see Fig. 3 ).

figure 3

A conceptual model for dissemination of research to policymakers. The populated conceptual model builds on the Model for Dissemination of Research by organizing findings from the current systematic review to build an understanding of how research is disseminated to policymakers in the United States

The sources of knowledge varied across studies, with some studies including multiple sources of social policy information. The most common sources of knowledge were research, whether as peer-reviewed literature ( n = 7) [ 30 , 33 , 38 , 42 , 43 , 49 , 54 ], researchers ( n = 5) [ 27 , 31 , 32 , 34 , 56 ], or research broadly defined ( n = 5) [ 36 , 39 , 47 , 48 , 55 ]; the government ( n = 11) [ 29 , 36 , 41 , 42 , 43 , 44 , 47 , 50 , 54 , 56 , 57 ]; and organizations ( n = 7) [ 33 , 36 , 46 , 52 , 53 , 54 , 56 ].

The majority of studies focused on health topics ( n = 12) [ 29 , 30 , 33 , 34 , 38 , 41 , 42 , 45 , 47 , 55 , 56 , 58 ] and child and family well-being ( n = 6) [ 27 , 36 , 46 , 49 , 52 , 57 ]. The remaining studies covered the topics of education ( n = 4) [ 39 , 43 , 53 , 54 ], guns [ 56 ], veterans [ 44 ], and general social research ( n = 3) [ 31 , 32 , 48 ]. Multiple studies offered specific recommendations for message framing, suggesting that the packaging of information is as critical as the information itself [ 27 ]. One study piloted multiple styles of policy briefs and found staffers preferred to use and share narrative or story-based briefs while legislators were more likely to use and share statistical, data-based briefs [ 38 ]. This finding was mirrored in two studies that found testimonial or descriptive evidence to be as effective as data-driven research [ 34 , 52 ], particularly in the context of sympathetic populations [ 52 ]. Three studies highlighted the reliance of effective message delivery on the message’s ability to capture audience interest (e.g., what the research means to the policymaker, specifically and if possible, personally) [ 27 , 34 , 41 ]. Finally, two studies emphasized creating a sense of urgency or even shock-value within the message in order to capture policymakers’ interest [ 36 , 57 ].

The audience included executive branch policymakers [ 49 ], administrators ( n = 9) [ 27 , 31 , 38 , 39 , 41 , 43 , 53 , 55 , 57 ], and staff [ 42 ]. Studies which focused on the legislative branch examined legislators ( n = 12) [ 27 , 32 , 36 , 38 , 44 , 45 , 46 , 47 , 50 , 52 , 53 , 58 ] and staff ( n = 3) [ 32 , 34 , 36 ]. Three studies examined broadly defined policymakers [ 33 , 54 , 56 ] and generalized staff [ 54 ] without indication for specific branch of government.

Included studies examined a variety of channels, with many including multiple channels. Print materials were the most commonly used channel, including reports ( n = 10) [ 27 , 30 , 33 , 41 , 46 , 50 , 53 , 55 , 57 , 58 ] and policy briefs ( n = 3) [ 31 , 34 , 38 ]. Researchers examined in-person meetings and communications as a channel to disseminate research ( n = 9) [ 30 , 32 , 33 , 39 , 44 , 48 , 53 , 56 , 57 ]. Research and research summaries were also studied ( n = 7) [ 30 , 31 , 42 , 47 , 49 , 52 , 54 ]. Both traditional ( n = 6) [ 31 , 33 , 47 , 52 , 53 , 54 ] and social media ( n = 2) [ 47 , 53 ] were examined as channels to disseminate research to policymakers. Other channels included conferences and presentations ( n = 4) [ 33 , 34 , 49 , 57 ], electronic communication ( n = 2) [ 27 , 57 ], online resources ( n = 3) [ 34 , 49 , 58 ], and personal testimony ( n = 2) [ 42 , 52 ].

Effectiveness and lessons learned

The majority of studies employed qualitative research methods (e.g., interviews, case studies, focus groups) to evaluate the impact of scientific research on domestic social policy. Our review of the literature also identified nine quantitative and mixed-methods studies [ 31 , 32 , 38 , 39 , 42 , 43 , 44 , 49 , 58 ]. We identified a series of cross-cutting dissemination strategies for engaging policymakers including recommendations for and barriers to research-to-policy (see Table 2 ).

Start early

Four studies highlighted the importance of early and ongoing engagement with policymakers throughout the research process in order to maximize interest and applicability. Researchers are encouraged to take the initiative to contact policymakers as early as possible in the research process. Many policymakers may be interested in accessing and using research but uncertain with whom, or how, to make connections in the academic or research community [ 27 ]. Involving policymakers when designing projects and framing initial research questions increases the likelihood that key policy stakeholders will remain invested in the work by allowing their individual research interests to shine [ 34 , 41 ]. Early engagement also ensures that research products (e.g., reports, policy briefs, factsheets) will have strategic usefulness for policymakers [ 30 ].

Drum up support

In addition to early policymaker engagement, three studies highlighted the need for researchers to garner outside support for their work, ideally involving a broad pool of experts and cultivating a broader coalition of supporters than typical academic endeavors [ 47 ]. Often, policymakers appear unwilling or uninterested in considering the application of evidence to their work [ 45 , 53 ]; when researchers can demonstrate the value and relevance of their work [ 58 ], policymakers may be more likely to engage.

Use research evidence “champions” or “brokers”

A common strategy for garnering support (as recommended above) is the use of evidence champions or brokers ; these are intermediary individuals or organizations who connect research suppliers (e.g., individual researchers, academic institutions) to research demand (e.g., policymakers) [ 53 ]. These champions can broker important connections; however, researchers and policymakers alike must remember that these intermediaries are not neutral carriers of information, and may spin research in support of personal agendas [ 45 , 52 , 53 ]. Individual biases may also present a barrier in research-to-policy translation, as individuals or organizations are empowered to select the “best” research evidence to share with policymakers [ 29 ]. One study found that nearly half of state policymakers named professional associations as trusted sources for research information, specifically because the organization is perceived not to have a stake in the final policy outcome [ 58 ].

Two studies specifically addressed the role of intermediary organizations or brokers in the translation of research evidence to policy. Hopkins et al. [ 39 ] explored the exchange of research evidence among state education agency (SEA) leaders, while Massell et al. [ 43 ] examined more broadly the origins of research evidence use in three SEAs. Both studies found that external brokers played a role in connecting SEA policymakers to relevant research, as well as in the conceptualization and development of policy.

Focus on context

Multiple studies stressed the importance of research evidence being contextually relevant to the specific policy audience [ 29 , 54 , 55 , 57 ]. For some policymakers, the needs and interests of local constituents will drive the use of research and the specifics of the policy agenda; for others, discussions that integrate research evidence into the broader sociopolitical context will be more effective [ 45 ]. For state- and local-level policymakers, policies may be most effective when based on the evidence-based understanding of local stakeholders, rather than imposed from the federal level without local contextual details [ 29 ].

The ideology of external advisors and brokers (as discussed above), policymakers’ own personal beliefs and experiences [ 54 ], and the prevailing political ideology of a particular geographic region [ 55 ] are critical components of context. Ideological beliefs, often deeply held and personal, may create a barrier between researchers and policymakers [ 41 ], though differentiating ideology from other factors that affect individual position-taking is difficult in most situations [ 44 ]. McGinty et al. [ 56 ] suggest that in polarized contexts involving strong ideological beliefs, research may add legitimacy to a particular viewpoint, though as with brokers, that research is likely to be carefully curated to support the desired message. Purtle et al. [ 55 ] concur, reporting that some county health officials were wary of the potential to spin research findings to make a case for certain programs over others and noted the need to avoid distorting evidence. Two studies recommend positional neutrality as a researcher’s best approach to handling potential ideological differences, suggesting that presenting research findings as simple fact, rather than making specific recommendations for action, may help avoid conflict and also help researchers gain credibility across the ideological spectrum [ 27 , 50 ].

Make research products timely, relevant, and accessible

As with all research endeavors, timeliness and relevance are paramount. However, the typical timeline for academic research (years) is often too long for policymakers whose window for championing a policy action is much shorter (weeks or months) [ 27 , 52 ]. A frequently reported barrier in research-to-policy translation is the complexity of research and concerns about the quality of research evidence [ 29 , 41 , 56 ]; one strategy for combating this concern is the use of clear, careful language [ 27 ], and tailored, audience-specific products that meet the needs of a diverse population of end users [ 27 , 34 , 58 ]. Research that is presented in commonly used, accessible formats (e.g., briefs, factsheets, videos) [ 48 ] may also be more effective, though one study found that use of these formats was dependent on job type, with legislators and staffers preferring different formats [ 58 ].

Multiple studies engaged with policymakers in an effort to determine how they receive research evidence and what strategies or formats are most desirable or effective [ 38 ]. After piloting four different styles of policy briefs (on the same research topic) with state-level policymakers, Brownson et al. [ 38 ] found that while all styles of brief were considered understandable and credible, opinions on the usefulness of the brief varied by the style of the brief and by the level of policymaker (e.g., legislative staff, legislators, and executive branch administrators). These findings suggest that targeted, audience-specific research evidence materials may be more likely to be used by policymakers than generic research evidence. One study explored the usefulness of electronic vs. printed research material and again found differences by type of policymaker—legislators were more likely to read hard copy printed material, while staffers gave higher ratings to online content. Not surprisingly, the age of the policymaker also played a role in the choice to access electronic or printed material, with younger policymakers much more likely to read electronic copy than were their older peers [ 58 ].

A study on state policymakers’ perceptions of comparative effectiveness research (CER) found that the most useful research is that which is consistent and specific to the needs of the policymakers [ 42 ]. The same study identified related barriers to the use of CER in policy decision-making, citing a lack of relevant high quality or conclusive research [ 42 ].

Finally, two studies described pilot projects focused on the delivery of research evidence directly to policymakers. The first cultivated researchers’ capacity to accelerate the translation of research evidence into useable knowledge for policymakers through a rapid response researcher network [ 32 ]. This model was shown to be effective for both researchers (in mobilizing) and policymakers (in eliciting requests for research evidence to bolster a policy conversation or debate) [ 32 ]. The second implementation study reported on a field experiment in which state legislators randomly received relevant research about pending policy proposals [ 44 ]. Findings from this study suggest that having relevant research information increases policymakers’ co-sponsorship of proposals by 60% and highlights the importance of research access in the policy process [ 44 ].

Know the players and the process

Policymakers are as much experts in their arena as researchers are in their academic fields. In order to build lasting working relationships with a target policymaking audience and maximize the relevance of research products for policy work, researchers must first understand the policy process [ 27 , 30 , 34 ]. One study examined the role of researchers themselves in disseminating findings to policymakers and identified individual- and organizational-level facilitators and barriers to the process [ 31 ]. Researchers’ familiarity with the policy process, the relevance of policy dissemination to individual programs of research, and the expectation of dissemination (from higher institutional or funding bodies) facilitated the research-to-policy exchange, while lack of familiarity with effective dissemination strategies and lack of financial and institutional support for dissemination emerged as primary barriers in the research-to-policy exchange [ 31 ].

Public policy, whether legislative, executive, or judicial, affects all areas of daily life in both obvious and subtle ways. The policy process (i.e., the steps from an idea to policy enactment) does not exist in a vacuum; it is influenced by many factors, including public opinion [ 59 , 60 ], special interest groups [ 61 ], personal narratives [ 62 ], expressed needs of constituents [ 1 ], the media [ 63 , 64 , 65 ], and corporations [ 66 , 67 ]. Research may also play a role in shaping policy and has the potential to add objectivity and evidence to these other forces [ 1 , 2 , 68 ]. The current study synthesizes existing knowledge about strategies for disseminating social policy research to policymakers in the United States.

Many channels exist to disseminate evidence to policymakers, with the most common being print materials (i.e., reports and policy briefs). This finding is surprising in our current digital age, as print materials are necessarily time-bound and rapidly evolving technology has created more channels (e.g., social media, videos) which may be preferred by policymakers. This shift creates an opportunity to optimize the content of print materials to disseminate in new mediums; it also offers a chance for authors to improve the accessibility of their work for broader audiences (e.g., via more visual presentation formats) [ 15 , 69 , 70 , 71 ].

Our review found that strategies to increase the effectiveness of research dissemination to policymakers include starting early, drumming up support, using champions and brokers, understanding the context, ensuring the timeliness, relevance, and accessibility of research products, and knowing the players and the process. These themes align with existing knowledge about policymaker preferences, including face-to-face engagement [ 72 , 73 ], contextual considerations (e.g., timeliness and budget) [ 2 , 72 ], and existing barriers and facilitators to research evidence use [ 4 , 5 ]. Our study adds to what we already know about policymakers’ desire for research evidence and their varying preferences as to the context and form of that knowledge [ 2 , 72 , 74 ] and supports existing efforts to bridge the gap between researchers and policymakers.

Many of the barriers and facilitators to research dissemination that we identified in this review mirror those cited by policymakers as barriers and facilitators to evidence use; this overlap suggests that efforts to improve research dissemination may also improve evidence use. Particularly relevant lessons from the evidence use literature that also emerged from our review include the benefit of building personal relationships between researchers and policymakers [ 5 , 75 , 76 ], narrowing the perceived gap between the two groups [ 77 , 78 ], and changing the culture of decision making to increase appreciation for the value of research in policy development [ 5 , 75 , 76 , 77 ]. Considering the multiple pathways through which research evidence is used in policy, from providing direct evidence of a program’s effectiveness to informing or orienting policymakers about relevant issues [ 23 ], these shared lessons around barriers and facilitators may better inform researchers, policymakers, and staff as to best practices for future communication and collaboration.

Our findings also highlight several unique elements of the US policy landscape, wherein significant power is reserved from the federal government and afforded to state governments. In some states, this power is further distributed to county and local governments. This system creates major variation across the country both in policy decisions and in resource availability for social policy implementation. Despite this distinctive government structure, however, many of the effective dissemination strategies we identified mirror strategies found in other countries [ 79 , 80 ].

Studies that focused on a specific level of government had some unique characteristics, such as degree of formality and reliance on print materials. For example, federal dissemination relied more heavily on formal legislative testimony, while state-level dissemination relied on written policy materials (e.g., policy briefs, report cards). However, these results are limited by small sample sizes and limited evidence about effectiveness.

A wide range of contextual variables may influence policy dissemination in the US at different levels of government. In the federal legislative context alone, multiple committees and subcommittees of both the U.S. House of Representatives and the U.S. Senate may exercise some control over programs and policies related to a single social policy issue (e.g., child and family services) [ 81 ]. At the federal level, the Congressional Research Service (CRS) provides non-partisan research support to legislators in multiple formats, including reports on major policy issues, expert testimony, and responses to individual inquiries; the Domestic Social Policy Division offers Congress interdisciplinary research and analysis on social policy issues [ 82 ]. While there may be fewer decision-makers for each issue on the state level, policymaking is further complicated by the extensive rules and reporting requirements attached to state use of federal funding, as well as competing priorities or needs at the local level within each state [ 83 , 84 ]. Geographic proximity may also influence dissemination; for example, it may increase the likelihood of university-industry partnerships [ 85 ].

Infrastructure may also differ in important ways between the US social policy context and that of other developed nations. Each country has a distinct and perhaps unique policy context, given available resources, political rules and regulations, and priorities. While models for infrastructure and dissemination interventions may be shared across policy contexts, it may be difficult to directly compare dissemination strategies in one country with those in another.

Several examples across western countries contribute to a stronger nexus between research evidence and the policy-making process. In the United States, the Wisconsin Family Impact Seminars ( www.wisfamilyimpact.org ) are an example of long-standing initiatives that provide the opportunity for researchers and policymakers to come together to discuss unbiased policy-relevant evidence [ 86 ]. As exemplified by Friese and Bogenschneider [ 27 ], these forums continue to be perceived as objective, relevant, and useful by policymakers and have succeeded at bringing attention to social policy [ 86 ]. Researchers and policymakers in Canada have sought to bridge the research-to-policy gap. For example, the Canadian Foundation for Healthcare Improvement (formerly the Canadian Health Services Research Foundation), funded by the Canadian federal government, brings together researchers and policymakers early and throughout the research development process to discuss, prioritize, and evaluate opportunities for research and dissemination [ 79 ]. In the UK, infrastructure at the national level includes the National Institute for Health Research Policy Research Programme, which funds health research with the explicit goal of informing national policy decisions in health and social care [ 87 ]. These efforts include open calls for research proposals as well as 15 dedicated Policy Research Units located at leading academic institutions around the country. Another resource is the EPPI-Centre at University College London, which provides policymakers support for finding and using research to inform policy decisions through its Research Advisory Service. This allows researchers to work alongside policymakers to reach their goals in addressing educational needs with evidence-informed policy [ 80 ].

Limitations

The current study has several limitations—these illustrate opportunities for future research. First, we attempted to cast a wide net when searching for studies that examined the influence of research on social policy by including a broad search of the peer-reviewed literature, think tanks, and content experts. However, it is possible we missed some studies that examine how research influences policy. Second, we focused on US studies for the reasons given above, and our findings may not be generalizable to other countries. Third, we were unable to assess the risk of bias for individual studies, as current standards note difficulties in assessing quality and bias in qualitative research [ 88 ]. Fourth, many studies examined multiple channels or strategies for how research influences policy, so the parsing of singular strategies (e.g., policy brief, in-person meeting) as an effective approach should be interpreted with caution. Additional investigation is needed to explore and test causal pathways in how these channels can best influence social policy. Fifth, the majority of studies did not use any theory or framework as a foundation or guide for exploration. This gap may indicate a space to use frameworks such as the Model for Dissemination of Research to guide future research. Finally, the dearth of mixed-methods studies that systematically evaluate the impact of research evidence on domestic social policy (this review identified only three) presents an opportunity for future work in this field to integrate quantitative and qualitative methodologies.

One significant challenge to increasing the rigor of dissemination research studies is the difficulty in choosing and then measuring an outcome. Many of the studies included in this review are either case studies or descriptive, making it difficult to determine what, if any, impact the given research had on policy. Bogenschneider and Corbett discuss this at length as one of the primary challenges to furthering this research [ 72 ], imploring researchers not to focus solely on the outcome of whether or not a piece of legislation passes but rather to examine whether research influenced one of the proposed policy options [ 72 ]. However, this information can be difficult both to operationalize and to collect. That said, some researchers have already begun to think beyond the passage of legislation, as evidenced by Zelizer [ 44 ], who examined bill co-sponsorship rather than passage. A recent review of health policy implementation measurement found that validated quantitative measures are underutilized and recommends further development and testing of such measures [ 89 ]. Difficulties in identifying robust outcomes and high-quality scales to operationalize them present opportunities for additional exploration in this area.

Dissemination and implementation are often described together; not surprisingly, there is overlap in effective strategies for each. The current review identified six dissemination strategies and described their reported effectiveness, while the Expert Recommendations for Implementing Change (ERIC) Project identified 73 implementation strategies [ 90 ]. One similarity is obvious: the dissemination strategy of using champions and brokers mirrors the ERIC implementation strategy of identifying and preparing champions. The difference between the number of implementation strategies and dissemination strategies is striking and highlights the gap in research. Future work should further explore the degree to which dissemination strategies and implementation strategies overlap or are distinct.

Finally, the dissemination of research to policymakers may raise certain ethical issues. It is imperative for researchers to critically assess when and how to disseminate research findings to policymakers, keeping in mind that promoting a specific policy agenda may result in a perceived or real loss of objectivity [ 91 ]. Syntheses of policy-relevant evidence can be useful, particularly when researchers work in partnership with non-governmental organizations to inform the policy process.

Conclusions

We summarize strategies and illuminate potential barriers to the research-to-policy dissemination process. Key findings are drawn from multiple disciplines and suggest that lessons learned may cut across both research topics and levels of government. The most frequently referenced channel for dissemination to policymakers was print materials, with personal communication (including both in-person and electronic meetings and individual communications) a close second. Corresponding strategies for effective dissemination to policymakers included starting early, drumming up support, using champions and brokers, understanding the context, ensuring the timeliness, relevance, and accessibility of research products, and knowing the players and the process. A shared feature of these strategies is the distillation of complex research findings into accessible pieces of relevant information that can then be delivered via multiple avenues.

Interdisciplinary collaboration is a common practice in scientific research [ 92 ]. Our findings provide leads on how to engage more effectively with policymakers, increasing the likelihood of translating research evidence into policy action. Engaging policymakers early as contributing members of the research team, maintaining communication during the research process, and presenting relevant findings in a clear, concise manner may empower both researchers and policymakers to further apply scientific evidence to improve social policy in the United States.

Availability of data and materials

Raw search results, citations, and abstracts are available upon request.

Abbreviations

US: United States

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

CER: Comparative Effectiveness Research

ERIC: Expert Recommendations for Implementing Change

References

Dodson EA, Geary NA, Brownson RC. State legislators’ sources and use of information: bridging the gap between research and policy. Health Educ Res. 2015;30:840–8.


Purtle J, Dodson EA, Nelson K, Meisel ZF, Brownson RC. Legislators’ sources of behavioral Health research and preferences for dissemination: variations by political party. Psychiatr Serv. 2018; appi.ps.201800153.

Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007-2014. Implement Sci. 2016;11.

Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.


Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44.


El-Jardali F, Lavis JN, Ataya N, Jamal D. Use of health systems and policy research evidence in the health policymaking in eastern Mediterranean countries: views and practices of researchers. Implement Sci IS. 2012;7:2.

Campbell DM, Redman S, Rychetnik L, Cooke M, Zwi AB, Jorm L. Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy. 2009;6.

Dean H. Social Policy. Cambridge: Polity; 2012.


Gilbert N, Terrell P. Dimensions of social welfare policy. 8th ed. Boston: Pearson; 2012.

Marshall TH. Citizenship and social class. Cambridge: Cambridge University Press; 1950.

Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948;27:379–423.


Weaver W, Shannon CE. The mathematical theory of communication. Champaign: University of Illinois Press; 1963.

McGuire W. The nature of attitudes and attitude change. Vol. 3. Reading: Addison-Wesley Pub. Co; 1969.

McGuire WJ, Rice R, Atkin C. Input and output variables currently promising for constructing persuasive communications. Public Commun Campaigns. 2001;3:22–48.

Brownson RC, Eyler AA, Harris JK, Moore JB, Tabak RG. Getting the Word Out: New Approaches for Disseminating Public Health Science. J Public Health Manag Pract JPHMP. 2018;24:102–11.

Rogers EM. Diffusion of Innovations, 5th Edition: Simon and Schuster; 2003.

Kotler P, Zaltman G. Social marketing: an approach to planned social change. J Mark. 1971;35:3–12.


Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy J Res Debate Pract Policy Press. 2010;6:145–59.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647.

Political party - Two-party systems. Encycl. Br. 2020 [cited 2020 Aug 10]. Available from: https://www.britannica.com/topic/political-party .

Mahoney C. Why lobbying in America is different [Internet]. POLITICO. 2009 [cited 2020 Aug 10]. Available from: https://www.politico.eu/article/why-lobbying-in-america-is-different/ .

Tseng V, Coburn C. Using evidence in the US. What Works Evid-Inf Policy Pract. Bristol: Policy Press; 2019. p. 351–68.

Stern MJ. Social Policy: History (1950–1980). Encycl Soc Work. 2013; [cited 2020 Apr 17]. Available from: https://oxfordre.com/socialwork/view/10.1093/acrefore/9780199975839.001.0001/acrefore-9780199975839-e-610 .

Ashcraft LE, Asato M, Houtrow AJ, Kavalieratos D, Miller E, Ray KN. Parent empowerment in pediatric healthcare settings: A systematic review of observational studies. Patient Patient Centered Outcomes Res. 2019;12:199–212.

Jacobs LA, Ashcraft LE, Sewall CJR, Folb BL, Mair C. Ecologies of juvenile reoffending: a systematic review of risk factors. J Crim Just. 2020;66:101638.

Friese B, Bogenschneider K. The voice of experience: how social scientists communicate family research to policymakers. Fam Relat. 2009;58:229–43.

Nelson JW, Scammell MK, Altman RG, Webster TF, Ozonoff DM. A new spin on research translation: the Boston Consensus Conference on Human Biomonitoring. Environ Health Perspect. 2009;117:495–9.

Weiss CH, Murphy-Graham E, Petrosino A, Gandhi AG. The fairy godmother—and her warts. Am J Eval. 2008;29:29–47.

Meisel ZF, Mitchell J, Polsky D, Boualam N, McGeoch E, Weiner J, et al. Strengthening partnerships between substance use researchers and policy makers to take advantage of a window of opportunity. Subst Abuse Treat Prev Policy. 2019;14:12.

McVay AB, Stamatakis KA, Jacobs JA, Tabak RG, Brownson RC. The role of researchers in disseminating evidence to public health practice settings: a cross-sectional study. Health Res Policy Syst. 2016;14:1–9.

Crowley M, Scott JTB, Fishbein D. Translating prevention research for evidence-based policymaking: results from the research-to-policy collaboration pilot. Prev Sci. 2018;19:260–70.

Lane JP, Rogers JD. Engaging national organizations for knowledge translation: Comparative case studies in knowledge value mapping. Implement Sci. 2011;6.

McBride T, Coburn A, MacKinney C, Mueller K, Slifkin R, Wakefield M. Bridging health research and policy: effective dissemination strategies. J Public Health Manag Pract. 2008;14:150–4.

McGinty EE, Siddiqi S, Linden S, Horwitz J, Frattaroli S. Improving the use of evidence in public health policy development, enactment and implementation: a multiple-case study. Health Educ Res. 2019;34:129–44.

Yanovitzky I, Weber M. Analysing use of evidence in public policymaking processes: a theory-grounded content analysis methodology. 2019 [cited 2019 May 8]. Available from: https://www.ingentaconnect.com/content/tpp/ep/pre-prints/content-ppevidpold1700095 .

Purtle J, Lê-Scherban F, Wang XI, Shattuck PT, Proctor EK, Brownson RC. State Legislators’ Support for Behavioral Health Parity Laws: The Influence of Mutable and Fixed Factors at Multiple Levels. Milbank Q. 2019;97:1200–32.


Brownson RC, Dodson EA, Stamatakis KA, Casey CM, Elliott MB, Luke DA, et al. Communicating evidence-based information on cancer prevention to state-level policy makers. J Natl Cancer Inst. 2011;103:306–16.

Hopkins M, Wiley KE, Penuel WR, Farrell CC. Brokering research in science education policy implementation: the case of a professional association. Evid Policy. 2018;14:459–76.

Sorian R, Baugh T. Power of information: closing the gap between research and policy. Health Aff (Millwood). 2002;21:264–73.

Valentine A, DeAngelo D, Alegria M, Cook BL. Translating disparities research to policy: a qualitative study of state mental health policymakers’ perceptions of mental health care disparities report cards. Psychol Serv. 2014;11:377–87.

Weissman JS, Westrich K, Hargraves JL, Pearson SD, Dubois R, Emond S, et al. Translating comparative effectiveness research into Medicaid payment policy: views from medical and pharmacy directors. J Comp Eff Res. 2015;4:79–88.

Massell D, Goertz ME, Barnes CA. State education agencies’ acquisition and use of research knowledge for school improvement. Peabody J Educ. 2012;87:609–26.

Zelizer A. How Responsive Are Legislators to Policy Information? Evidence from a field experiment in a state legislature. Legis Stud Q. 2018;43:595–618.

Allen ST, Ruiz MS, O’Rourke A. The evidence does not speak for itself: The role of research evidence in shaping policy change for the implementation of publicly funded syringe exchange programs in three US cities. Int J Drug Policy. 2015;26:688–95.

Brim OG Jr, Dustan J, Brim OG Jr. Translating research into policy for children: the private foundation experience. Am Psychol. 1983;38:85–90.

Austin SB, Yu K, Tran A, Mayer B. Research-to-policy translation for prevention of disordered weight and shape control behaviors: a case example targeting dietary supplements sold for weight loss and muscle building. Eat Behav. 2017;25:9–14.

Bumbarger B, Campbell E. A state agency-university partnership for translational research and the dissemination of evidence-based prevention and intervention. Adm Policy Ment Health Ment Health Serv Res. 2012;39:268–77.

Garcia AR, Kim M, Palinkas LA, Snowden L, Landsverk J. Socio-contextual determinants of research evidence use in public-youth systems of care. Adm Policy Ment Health Ment Health Serv Res. 2016;43:569–78.

Coffman JM, Hong M-K, Aubry WM, Luft HS, Yelin E. Translating medical effectiveness research into policy: lessons from the California health benefits review program. Milbank Q. 2009;87:863–902.

Jamieson M, Bodonyi JM. Data-driven child welfare policy and practice in the next century. Child Welfare. 1999;78:15–30.

Mosley JE, Courtney ME. Partnership and the politics of care: advocates’ role in passing and implementing California’s law to extend foster care: Chapin Hall Center for Children; 2012.

Jabbar H, Goel La Londe P, Debray E, Scott J, Lubienski C. How Policymakers Define ‘Evidence’: The Politics of Research Use in New Orleans. Policy Futur Educ. 2015;12:286–303.

Nelson SR, Leffler JC, Hansen BA. Toward a research agenda for understanding and improving the use of research evidence. Northwest Reg Educ Lab NWREL. 2009.

Purtle J, Peters R, Kolker J, Diez Roux AV. Uses of population health rankings in local policy contexts: A multisite case study. Med Care Res Rev. 2019;76:478–96.

McGinty B, Siddiqi S, Linden S. Academic research-policy translation strategies to improve the use of evidence in health policy development, enactment and implementation: a 3-part embedded case study. Implement Sci. 2018;13.

Jamieson M, Bodonyi JM. Data-driven child welfare policy and practice in the next century. Fam Foster Care Century. 2001;13.

Sorian R, Baugh T. Power of information: closing the gap between research and policy. Health Aff (Millwood). 2002;21:264–73.

Burstein P. The impact of public opinion on public policy: a review and an agenda. Polit Res Q. 2003;56:29–40.

Pacheco J, Maltby E. The role of public opinion—does it influence the diffusion of ACA decisions? J Health Polit Policy Law. 2017;42:309–40.

Wolton S. Lobbying, Inside and Out: How special interest groups influence policy choices. Rochester, NY: Social Science Research Network; 2017 Mar. Report No.: ID 2190685. Available from: https://papers.ssrn.com/abstract=2190685 .

Stamatakis KA, McBride TD, Brownson RC. Communicating prevention messages to policy makers: the role of stories in promoting physical activity. J Phys Act Health. 2010;7:S99–107.

Otten AL. The influence of the mass media on health policy. Health Aff (Millwood). 1992;11:111–8.


Robinson P. Theorizing the influence of media on world politics: models of media influence on foreign policy. Eur J Commun. 2001;16:523–44.

Shanahan EA, McBeth MK, Hathaway PL. Narrative policy framework: the influence of media policy narratives on public opinion. Polit Policy. 2011;39:373–400.

Salamon LM, Siegfried JJ. Economic power and political influence: the impact of industry structure on public policy. Am Polit Sci Rev. 1977;71:1026–43.

Shelley D, Ogedegbe G, Elbel B. Same strategy different industry: corporate influence on public policy. Am J Public Health. 2014;104:e9–11.

Hird JA. Policy Analysis for What? The effectiveness of nonpartisan policy research organizations. Policy Stud J. 2005;33:83–105.

Trost MJ, Webber EC, Wilson KM. Getting the word out: disseminating scholarly work in the technology age. Acad Pediatr. 2017;17:223–4.

Kapp JM, Hensel B, Schnoring KT. Is Twitter a forum for disseminating research to health policy makers? Ann Epidemiol. 2015;25:883–7.

Dodson EA, Eyler AA, Chalifour S, Wintrode CG. A review of obesity-themed policy briefs. Am J Prev Med. 2012;43:S143–8.

Bogenschneider K, Corbett TJ. Evidence-based policymaking: Insights from policy-minded researchers and research-minded policymakers: Routledge; 2011.

Mitton C, Adair CE, Mckenzie E, Patten SB, Perry BW. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68.

Purtle J, Lê-Scherban F, Shattuck P, Proctor EK, Brownson RC. An audience research study to disseminate evidence about comprehensive state mental health parity legislation to US State policymakers: protocol. Implement Sci. 2017;12:1–13.

Van der Arend J. Bridging the research/policy gap: policy officials’ perspectives on the barriers and facilitators to effective links between academic and policy worlds. Policy Stud. 2014;35:611–30.

Langer L, Tripney J, Gough DA. The science of using science: researching the use of research evidence in decision-making: UCL Institute of Education, EPPI-Centre; 2016.

Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One. 2011;6:e21704.


Hennink M, Stephenson R. Using research to inform health policy: barriers and strategies in developing countries. J Health Commun. 2005;10:163–80.

Lomas J. Essay: Using ‘Linkage And Exchange’ to move research into policy at a Canadian foundation. Health Aff (Millwood). 2000;19:236–40.

EPPI-Centre. Research Advisory Service. 2019 [cited 2020 Aug 4]. Available from: https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3421 .

Kerner JF, Hall KL. Research dissemination and diffusion: translation within science and society. Res Soc Work Pract. 2009;19:519–30.

Library of Congress. About CRS - congressional research service (Library of Congress). 2020 [cited 2020 Aug 26]. Available from: https://www.loc.gov/crsinfo/about/ .

Edwards C. Complexity in State Government. 2020 [cited 2020 Aug 31]. Available from: https://www.cato.org/blog/complexity-state-government .

Urban Institute. State and Local Expenditures [Internet]. Urban Inst. 2020 [cited 2020 Aug 25]. Available from: https://www.urban.org/policy-centers/cross-center-initiatives/state-and-local-finance-initiative/state-and-local-backgrounders/state-and-local-expenditures .

D’Este P, Guy F, Iammarino S. Shaping the formation of university–industry research collaborations: what type of proximity does really matter? J Econ Geogr. 2013;13:537–58.

Owen JW, Larson AM. Researcher-policymaker partnerships: Strategies for launching and sustaining successful collaborations: Routledge; 2017.

National Institute for Health Research. Policy research. 2020 [cited 2020 Sep 4]. Available from: https://www.nihr.ac.uk/explore-nihr/funding-programmes/policy-research.htm .

Higgins J, Green S. Cochrane Handbook for systematic reviews of interventions. 2011 [cited 2018 Sep 20]. Available from: /handbook.

Allen P, Pilar M, Walsh-Baily C, Hooley C, Mazzucca S, Lewis C, et al. Quantitative measures of health policy implementation determinants and outcomes: a systematic review. Implement Sci. 2020.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

Brownson RC, Hartge P, Samet JM, Ness RB. From epidemiology to policy: toward more effective practice. Ann Epidemiol. 2010;20:409.

Carroll L, Ali MK, Cuff P, Huffman MD, Kelly BB, Kishore SP, et al. Envisioning a transdisciplinary university. J Law Med Ethics. 2014;42:17–25.


Acknowledgements

The views expressed herein are those of the authors and do not reflect those of the Department of Veterans Affairs, the Centers for Disease Control and Prevention, or the National Institutes of Health.

LEA is supported by a pre-doctoral Clinical and Translational Science Fellowship (NIH TL1 TR001858 (PI: Kraemer)). DAQ is supported by a postdoctoral fellowship through the Department of Veterans Affairs (VA) Office of Academic Affiliations and the Center for Health Equity Research and Promotion at the VA Pittsburgh Healthcare System. RCB is supported by the National Cancer Institute (P50CA244431) and the Centers for Disease Control and Prevention (U48DP006395). The funding entities had no role in the development, data collection, analysis, reporting, or publication of this work. Article processing charges for this article were fully paid by the University Library System, University of Pittsburgh.

Author information

Authors and affiliations

University of Pittsburgh School of Social Work, 2117 Cathedral of Learning, 4200 Fifth Avenue, Pittsburgh, PA, 15260, USA

Laura Ellen Ashcraft

Center for Health Equity Research and Promotion (CHERP), VA Pittsburgh Healthcare System, University Drive C, Building 30, Pittsburgh, PA, 15240, USA

Deirdre A. Quinn

Prevention Research Center, Brown School, Washington University in St. Louis, One Brookings Drive, Campus Box 1196, St. Louis, MO, 63130, USA

Ross C. Brownson

Department of Surgery, Division of Public Health Sciences, and Alvin J. Siteman Cancer Center, Washington University School of Medicine, 660 South Euclid Avenue, Saint Louis, MO, 63110, USA


Contributions

Review methodology: LEA, DAQ, RCB; eligibility criteria: LEA, DAQ, RCB; search strings and terms: LEA, DAQ; abstract screening: LEA, DAQ; full text screening: LEA, DAQ; pilot extraction: LEA, DAQ; data extraction: LEA, DAQ; data aggregation: LEA, DAQ; writing: LEA, DAQ; editing: LEA, DAQ, RCB. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Laura Ellen Ashcraft.

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Not applicable.

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1..

PRISMA Checklist.

Additional File 2.

Search Strategy.

Additional File 3.

Data Abstraction Form.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Ashcraft, L. E., Quinn, D. A., & Brownson, R. C. Strategies for effective dissemination of research to United States policymakers: a systematic review. Implementation Science, 15, 89 (2020). https://doi.org/10.1186/s13012-020-01046-3




Using Theory in Practice – An Intervention Supporting Research Dissemination in Social Work

https://doi.org/10.1080/23303131.2021.1935376


This guest editorial explores how theory-informed and evidence-informed practice can be strengthened in human service organizations. This exploration involves the description of a Practice and Theory group intervention model. Based on a three-case study of pilot intervention groups provided to social workers, the short-term and intermediate outcomes, as well as the expected intermediate and long-term outcomes, are presented and illustrated by a logic model. The shared conversations help overcome the difficulties practitioners and managers may have in understanding the role of theory or research in practice. Discussing theories in the context of everyday practice can provide practitioners with concrete tools for decision-making. Applying and experimenting with theories opens new perspectives on the problem-solving process, in which the practitioner experiments, reflects, and seeks to improve practice. Thus, shared reflection on theories and research can promote adaptive and developmental workplace learning and enhance an individual's sense of epistemic agency.

  • Dissemination
  • evidence-informed practice
  • intervention
  • theory-informed practice

Practitioners often lack the access and time to read research publications, and many lack the critical thinking skills needed to interpret research. They also often need to overcome organizationally hostile attitudes toward research, inadequate supervision, and/or a lack of autonomy to implement research (Gray, Joy, & Plath, 2013; Nutley, Walter, & Davies, 2007). All of these factors are taking on increased importance within the current context of implementation science and the introduction of evidence-informed practices (Bunger & Lengnick-Hall, 2019).

The goal of this editorial is to explore the process of translating research knowledge in order to apply it to contemporary practice. This exploration involves the description of a group intervention model for disseminating research and supporting the problem-solving process that underlies evidence-informed practice. In addition to noting the outcomes of the group model, recommendations are provided for strengthening theory-informed practice in human service organizations.

Specifically, we emphasize the importance of supportive organizational structures for evidence-informed and theory-informed practice. Based on the findings of the Practice and Theory group, the editorial proposes providing professionals with hands-on guidance on how to integrate research into practice and decision-making. We also propose that reflecting on the relevance of theories and research with colleagues, and deliberately experimenting in practice, supports adaptive and developmental learning. This can encourage professionals to develop personal and organizational practices and even to conduct practice research. Finally, we contend that gaining new perspectives from research and contributing to shared knowledge creation can improve a work-related sense of well-being.

Our discussion begins with a description of the context that shaped the authors' work, followed by an introduction of the three pilot interventions. Next, we discuss the role that research plays in the Practice and Theory group and how the research was selected. We then present the short-term and intermediate outcomes identified in our research, as well as the expected intermediate and long-term outcomes, illustrated by a logic model (Gugiua & Rodríguez-Campos, 2007). We conclude with a set of recommendations.

The immediate context

Both authors have been involved with social work practice research at the Heikki Waris Institute, funded by the Helsinki metropolitan municipalities and the University of Helsinki, Finland (Muurinen & Satka, 2020). Both worked at the Institute, Aino Kääriäinen as a university lecturer and Heidi Muurinen as a researcher social worker, and both frequently recognized the challenges of disseminating results beyond the active practice communities. Both authors have been inspired by John Dewey's (1920/1950, p. 121) writings about the relationship between theory and practice, in which concepts, theories, and systems of thought are seen as tools to inform practice. Dewey's ideas are used to support social workers in their efforts to utilize research in practice and underlie the design of our exploratory study of a theory-informed group intervention (Kääriäinen & Muurinen, 2019).

During 2015–2017, we conducted three pilot studies of the intervention groups to examine how participating social workers reflected upon and utilized theories when reviewing qualitative research (Muurinen & Kääriäinen, 2020). The first group was in a social work agency serving adults where Heidi Muurinen worked as a team manager. The next two groups were with social workers in child protection agencies. Of the three groups, two were jointly facilitated by the authors, and the third was facilitated by Aino Kääriäinen and a development planner at the City of Helsinki. The three groups had a total of 16 participants, all master's-level social workers (M.Sc.Sc.). Participation was voluntary, and the social workers were recruited from the organizations by e-mail.

The Practice and Theory group meets five to six times. In each session, the group chooses a research summary prepared by the group facilitator. During the two weeks between group meetings, each participant applies the chosen piece of research by analyzing their practice using the theoretical concepts described in the research summary. The participants' observations are then discussed in the next group meeting. More detailed information about the group facilitation process is available in a guidebook (see Kääriäinen & Muurinen, 2019).

Before the pilot groups began, and without knowing specifically which social workers would participate or which questions they might find interesting, the group facilitators chose the research to be discussed within the group. Given the pilot nature of the group intervention model, the process and criteria for selecting the research topics were not very systematic. However, the criteria for selecting theories and research findings included: 1) whether the selected theory could support decision-making or provide a substantial explanation of various client situations (Forte, 2014, p. 109), and 2) whether the selection took into account topical questions in the participants' field of practice (e.g., child welfare services, adult and aging services, mental health services). Based on the notion that theories can be useful in analyzing a client or organizational situation (Dewey, 1920/1950, p. 128), the purpose of the theory and research selection process was to strengthen theory-informed and evidence-informed practice using qualitative research, with less attention to quantitative research (based, in part, on the anticipated limited research skills of the group participants).

Figure 1. Criteria for selecting research with two example studies.

Lessons learned in research selection

Based on the use of research in three different pilot intervention groups, it became apparent that more effort was needed to capture the range of participant research interests, even though we received limited responses from the participants when asked to propose research questions or identify relevant theories. At the same time, the participants greatly appreciated the preselected topics because, according to them, they did not feel confident identifying interesting research or answerable questions in the group sessions. This issue could have been addressed more adequately by surveying each participant in advance of each group session while guaranteeing anonymity.

Another approach to selecting research would be to conduct a systematic literature review based on the preferences of the group participants. As part of the pilot project, we simply searched for and selected publications that were familiar to us and that we thought were inspiring and relevant. However, instead of summarizing a single research article or theory description, a more comprehensive summary could have been written based on a number of relevant publications. It is not clear whether the time needed for such wide-ranging preparation would complicate the implementation of the group model.

Given the limited time available in the group for busy practitioners, only a few concepts or research findings can be covered, suggesting that one set of research findings or one theory per session is most feasible. Larger research studies or theories would need to be spread over more than one session. For example, in one of our groups it was proposed that one theme per group session could focus on the implementation of the Finnish Systemic Practice Model (Isokuortti & Aaltio, 2020).

In the next section, we provide two examples to demonstrate how social workers applied research in practice. We then summarize the short-term and intermediate outcomes envisioned by the group participants.

We named the group ’Practice and Theory’ because we wanted to emphasize the bridging of the perceived gap between practice and theory, where theory is often viewed as speculation separate from practice (Payne, 2014, p. 4). The discussion of theory in the pilot groups was always linked to selected publications of qualitative research in order to promote evidence-informed or evidence-based practice alongside theory-informed practice (Austin, 2020, p. 26).

Malcolm Payne (2014) defines theory as “a generalized set of ideas that describes and explains our knowledge of the world around us in an organized way” (p. 5). Social work theories help practitioners understand the nature of social work practice along with the perspectives of the clients being served (ibid., p. 6). In the group, theory discussions included explanatory generalizations and conceptualizations based on research about the client world, as well as research resulting in implications for social work practice.

The group discussed various social work practice theories (e.g., narrative practice); social science theories of facework, specifically the concept of “face,” which describes how a positive self-image is created, maintained, and guarded in interaction with others (Goffman, 1955/2016); and philosophical theories related to the I–Thou relationship, which propose that dialogical interaction in human relationships can take place when the other person is acknowledged and respected as another “I” rather than objectified and treated as “it” (Buber, 1923/2008). Theories about the client world included the conceptualization of having-to, which describes the construction of adolescents’ agency from the viewpoint of cultural expectations in discussions with professionals (Juvonen, 2014), and Actor-Network Theory, a theoretical and methodological approach for analyzing symmetrically how human actors and non-human entities participate in and influence the construction of social situations or systems (Latour, 2005). In the group sessions, the focus was mostly on empirical generalizations or single concepts that could be grasped within one session, so that wider theories or frameworks could be understood one concept at a time.

In the intervention groups, theories or conceptualizations were used to analyze situations, social problems, or practice phenomena within the problem-solving process in which knowledge is acquired, created, tested, and evaluated. The evidence-informed decision-making process begins with defining an answerable question; the best available evidence is then located and critically appraised, clients are informed, and the intervention is evaluated (Gambrill, 2001).

Many different types of explanatory and interventive theories are intertwined in social work practice. For example, an explanatory feminist perspective or a systems theory framework can be applied alongside interventive theories such as task-centered casework, motivational interviewing, or cognitive-behavioral therapy (Payne, 2014, p. 5). If relevant evidence-based models are lacking, an explanatory theory perspective can still provide practitioners with frameworks to guide interventions.

Along with practice theories, research on client populations (e.g., children, the elderly, domestic violence survivors) can inform social workers about the needs, behaviors, and relevant experiences of service users. If a practitioner reads a qualitative study about a client population and reflects upon how this research relates to one’s own practice, the application and analysis can lead to something surprising or contradictory. This form of abductive reasoning (drawing a probable conclusion from what you know) can lead to preliminary hypotheses as well as answerable questions, prompting a search for the best available evidence on a variety of interventions (Peirce, 1903/1934, p. 117). Research on client populations can also include the identification of evidence-informed practices or the need for such practices. Finally, the search for qualitative research might also lead to practice recommendations and guidelines that can be applied in decision-making, as noted in two examples later in this discussion.

A distinction between an independent, practitioner-focused understanding of evidence-informed practice and the group approach to reflecting upon research findings and their applications is that the group enables participants to allocate time to considering research and to find the courage to share, in a safe space, their understandings of how research applies to their own practice. Previous research has likewise emphasized the importance of interactive group processes and supportive organizational structures for promoting evidence-informed practice (Austin & Carnochan, 2020; Austin, Dal Santo, & Lee, 2012; Carnochan, McBeath, & Austin, 2017).

Figure 2. Two examples of the consequences of applying research to practice.

Findings from group participants

The findings on the experiences of group participants are based on a thematic analysis of reflective discussions during the last group sessions and follow-up group interviews of the three pilot intervention groups in 2015–2017 (Muurinen & Kääriäinen, 2020). The short-term and intermediate outcomes below were identified by the participants.

It was significant that the group activities could be fitted into the busy schedules and practice of the participants. The group experiences provided the participants with an opportunity to see how research knowledge could be connected to practice, given their limited prior experience with this connection. Perhaps the most significant consequence of participating in the Practice and Theory group was that it lowered the perceived barriers to applying research as a way to reflect upon one’s own practice. By engaging in group discussions about theory and research, participants gained a new perspective on social work practice, and by reflecting upon their professional experiences they were able to make new interpretations of their actions and their practice. According to the participants, the discussions of research knowledge and theoretical frameworks gave them a way to step back from daily practice and examine their decision-making and the actions taken.

Through personal and shared reflection, the practitioners became more aware of their own reasoning. They were able to use the research knowledge to recognize, improve, and appreciate their argumentation skills in decision-making. Participating in the discussion groups was professionally empowering, helping them develop new ways of operating and enhance practice skills related to increased productivity and effectiveness. With all these new perspectives and understandings, participants reported that they felt inspired and excited about their work and noted that the group experience of engaging with theory and research would improve their work-related sense of well-being.

Identifying the outcomes of the group intervention model

Figure 3. Logic model for the Practice and Theory pilot group intervention (based on Gugiua & Rodríguez-Campos, 2007).

Even though the logic model helps to illustrate expected outcomes, learning is a complex process that does not always proceed in a linear and rational manner. The relational aspects of learning were evident when group members analyzed the theories together. Listening to each other provided new perspectives for interpreting practice situations and for seeing themselves as professionals engaging with research and theory. For example, the theory-based conversations not only provided participants with new understanding of theories but also increased their sense of agency in making deliberate and conscious decisions and in explicating the reasons for taking action. Group members saw not only how theories could become tools for practice but also how the shared experience of learning together could lead to shared reflection and knowledge creation.

The learning challenges inherent in engaging in evidence-informed practice call for both adaptive and developmental learning (Nilsen, Neher, Ellström, & Gardner, 2020). Adaptive learning involves transforming explicit knowledge found in research and theories into implicit or tacit knowledge, linking explanatory theory with the interventive theories of practice as well as with research findings and knowledge about client populations that inform practice. Developmental learning builds upon prior knowledge and practice experience; it involves transforming implicit knowledge acquired over years of practice into explicit knowledge that takes into account personal thoughts or habits and deliberate actions based upon articulated decision-making processes.

Developmental learning can be well supported through reflective discussions based on research. For example, Nilsen, Nordström, and Ellström (2012) provided managers in Sweden with opportunities to discuss research in reflection groups in order to support the use of research as part of the managers’ developmental learning. Participation enhanced the managers’ self-efficacy concerning their role as leaders, supported them in handling different dilemmas, and increased their understanding of their work (Nilsen et al., 2012).

In the Practice and Theory group, adaptive and developmental learning were also present. Adaptive learning took place, for example, when the participants gained new understanding of how research is connected to practice. Another example of adaptive learning is how the social workers in the above case integrated the ethical decision-making model into their practice. The participants also gained new understanding of phenomena related to clients’ lives which, as one participant described, increased understanding of where the clients “are coming from, what their experience is of everything, and in a good way this [theory] brings the background”.

Meanwhile, developmental learning was present when the participants used the theories or research to step back from their professional practice to reflect upon their assumptions or to deliberately explore new ways of operating. For example, SW2 in the above case was able to make the reasons for a custody care decision more explicit by considering the six stages of ethical decision-making. This also increased SW2’s understanding of the importance of making tacit knowledge more explicit. Thus, the Practice and Theory group model allowed for the combining of adaptive and developmental learning. It also provided hands-on guidance in integrating knowledge about explanatory and interventive theories into professional practice, one of the most challenging aspects of evidence-informed practice (Nilsen et al., 2020, p. 413).

We conclude this editorial by identifying implications for human service organizations and further research. Organizational strategies are needed to overcome well-known barriers (e.g., lack of time, access, and skills, or negative attitudes). Examples include the Practice and Theory group model for staff (Muurinen & Kääriäinen, 2020) and reflective groups for managers (Nilsen et al., 2012), both of which can enhance theory-based and evidence-based practice in human service organizations.

To address the persistent lack of staff time and access to research, the group sessions can be easily incorporated into the busy schedules of practitioners. When research and theory are shared with practitioners, there are opportunities for immediate application in the form of small experiments carried out within the context of everyday practice. However, organizations need to support the efforts of the group facilitators beyond the actual sessions with staff to account for the time needed for preparation. In addition, group facilitators need to understand the core idea of group learning as well as the concepts of theory-based and evidence-based practice. A facilitator’s guidebook includes key references for this type of staff facilitation (see Kääriäinen & Muurinen, 2019).

One of the significant outcomes for participants in the Practice and Theory group was an increased sense of work-related well-being. Short-term outcomes reported by the participants included: 1) a new appreciation of one’s personal skills, 2) feeling inspired about one’s own work, and 3) being professionally empowered. Supporting work-related well-being is especially significant among social workers, who face a higher probability of burnout (Rantonen et al., 2019).

How can theory-based and evidence-based practice be strengthened in human service organizations? Shared conversations in everyday practice help overcome the difficulties practitioners and managers may have in understanding the role of theory or research in practice. Discussing theories in the context of everyday practice can provide practitioners with concrete tools for decision-making. This means acknowledging and utilizing theories, perspectives, frameworks, and conceptualizations when: a) analyzing situations, b) forming answerable questions, c) searching for and selecting relevant research, and d) informing interventions, especially when evidence-based practice findings are not available. Applying and experimenting with theories opens new perspectives on the problem-solving process, in which the practitioner experiments, reflects, and seeks to improve practice.

In addition to incorporating theories into decision-making, the reflective process itself can support a sense of agency among staff and managers. When practitioners have the opportunity to reflect upon the use and relevance of theories and thereby deliberately engage in experimentation, they become contributors to knowledge creation (Dewey, 1920/1950, p. 89). A sense of epistemic agency is derived from what one knows or does not know (Reed, 2001, p. 522) when seeking to increase one’s ability to set a goal, motivate oneself, make a long-term plan, and evaluate one’s own actions (Scardamalia, 2002). Gaining a stronger sense of epistemic agency also strengthens the capacity of practitioners to make their reasoning explicit beyond their experiential knowledge, legislation, or organizational protocols by actively using research on client populations as well as theories about human behavior and the social environment.

Conclusions for future research

Facilitating the Practice and Theory groups has demonstrated to us how a short intervention can enhance theory-informed and evidence-informed practice. Creating a safe space for discussing, sharing personal experiences, and exploring ideas also supports organizational learning (see also Austin, 2020; Carnochan et al., 2017). However, evaluation research is still needed on the outcomes of this group model, as is implementation research on how the model could be used in different environments.

The pilot intervention can also lead to qualitative research or to theory development. First, shared reflection around existing research can lead to new research questions and to conducting practice research, as in our example of SW2 above. Second, the group discussions generate interesting qualitative data that practice researchers could use, as a less traditional method of data collection, to develop practice-related concepts or theories based on recorded practical reflections. Third, the group model could be expanded to promote shared practice research projects with the group participants as co-researchers, so that the data could be analyzed together within the group.

A group model such as the one we have described can be used in human service organizations to enhance implementation science by acknowledging the role of theory-based and evidence-informed practice. Through shared critical thinking and reflection, theories and research findings can enhance the understanding and promotion of different perspectives in both clinical and managerial work in organizational settings. Shared reflection on theories and research can promote both adaptive and developmental workplace learning as a way of enhancing an individual’s sense of agency.

Acknowledgments

We thank Professor Michael J. Austin for helpful comments on earlier drafts of this guest editorial.

No potential conflict of interest was reported by the author(s).

  • Austin, M. (2020). Identifying the conceptual foundations of practice research. In L. Joubert & M. Webber (Eds.), The Routledge handbook of social work practice research (pp. 15–31). Milton, Oxon, UK: Routledge.
  • Austin, M. J., & Carnochan, S. (2020). Practice research in the human services: A university-agency partnership model. Oxford, UK: Oxford University Press.
  • Austin, M. J., Dal Santo, T., & Lee, C. (2012). Building organizational supports for research-minded practitioners. Journal of Evidence-Based Social Work, 9(1–2), 174–211. https://doi.org/10.1080/15433714.2012.636327
  • Buber, M. (1923/2008). I and Thou (W. Kaufman, Trans.). Brentwood, USA: Howard Books. (Original work: Ich und Du)
  • Bunger, A. C., & Lengnick-Hall, R. (2019). Implementation science and human service organizations research: Opportunities and challenges for building on complementary strengths. Human Service Organizations: Management, Leadership & Governance, 43(4), 258–268.
  • Carnochan, M., McBeath, B., & Austin, M. (2017). Managerial and frontline perspectives on the process of evidence-informed practice within human service organizations. Human Service Organizations: Management, Leadership & Governance, 41(4), 346–358. https://doi.org/10.1080/23303131.2017.1279095
  • Dewey, J. (1920/1950). Reconstruction in philosophy. New York, USA: The New American Library.
  • Forte, J. A. (2014). Skills for using theory in social work: 32 lessons for evidence-informed practice. New York, USA: Routledge.
  • Gambrill, E. (2001). Social work: An authority-based profession. Research on Social Work Practice, 11(2), 166–175. https://doi.org/10.1177/104973150101100203
  • Goffman, E. (1955/2016). On face-work: An analysis of ritual elements in social interaction. Psychiatry: Interpersonal and Biological Processes, 18(3), 213–231. https://doi.org/10.1080/00332747.1955.11023008
  • Gray, M., Joy, E., & Plath, D. (2013). Implementing evidence-based practice: A review of the empirical research literature. Research on Social Work Practice, 23(2), 157–166. https://doi.org/10.1177/1049731512467072
  • Gugiua, C., & Rodríguez-Campos, L. (2007). Semi-structured interview protocol for constructing logic models. Evaluation and Program Planning, 30(4), 339–350. https://doi.org/10.1016/j.evalprogplan.2007.08.004
  • Isokuortti, N., & Aaltio, E. (2020). Fidelity and influencing factors in the systemic practice model of children’s social care in Finland. Children and Youth Services Review, 119, 105647. https://doi.org/10.1016/j.childyouth.2020.105647
  • Juvonen, T. (2014). Kulttuurisesti määrittynyt täytyminen osana nuorten aikuisten toimijuutta [Culturally determined having-to as part of young adults’ agency]. Nuorisotutkimus, 32(3), 3–16.
  • Kääriäinen, A., & Muurinen, H. (2019). Combining practice and theory in professional fieldwork: A guidebook to facilitate practice and theory groups. Helsinki, Finland: University of Helsinki, Publications of the Faculty of Social Sciences 136. https://helda.helsinki.fi//bitstream/handle/10138/315681/Combining_Practice_and_Theory_in_Professional_Fieldwork_web.pdf?sequence=1
  • Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford: Oxford University Press.
  • Lonne, B., Harries, M., & Featherstone, B. (2016). Working ethically in child protection. London: Routledge.
  • Muurinen, H., & Kääriäinen, A. (2020). Integrating theory and practice in social work: An intervention with practitioners in Finland. Qualitative Social Work, 19(5–6), 1200–1218. https://doi.org/10.1177/1473325019900287
  • Muurinen, H., & Satka, M. (2020). Pragmatist knowledge production in practice research. In L. Joubert & M. Webber (Eds.), The Routledge handbook of social work practice research (pp. 126–136). Milton Park, Oxon, UK: Routledge.
  • Nilsen, P., Neher, M., Ellström, P., & Gardner, B. (2020). Implementation from a learning perspective. In P. Nilsen & S. A. Birken (Eds.), Handbook on implementation science (pp. 409–421). Cheltenham, UK: Edward Elgar Publishing.
  • Nilsen, P., Nordström, G., & Ellström, P. E. (2012). Integrating research-based and practice-based knowledge through workplace reflection. Journal of Workplace Learning, 24(6), 403–415. https://doi.org/10.1108/13665621211250306
  • Nutley, S. M., Walter, I., & Davies, H. T. O. (2007). Using evidence: How research can inform public services. Bristol, UK: Policy Press.
  • Payne, M. (2014). Modern social work theory. London, UK: Macmillan Education.
  • Peirce, C. S. (1903/1934). Lecture VII: Pragmatism and abduction. In C. S. Peirce, C. Hartshorne, & P. Weiss (Eds.), Collected papers of Charles Sanders Peirce: 5–6, pragmatism and pragmatism; scientific metaphysics (pp. 112–131). Cambridge, USA: Harvard University.   Google Scholar
  • Rantonen, O., Alexanderson, K., Clark, A. J., Aalto, V., Sónden, A., Brønnum-Hansen, H., … Salo, P. (2019). Antidepressant treatment among social workers, human service professionals, and non-human service professionals: A multi-cohort study in Finland, Sweden and Denmark. Journal of Affective Disorders , 250 ( 10 ), 153–162. doi: https://doi.org/10.1016/j.jad.2019.03.037   PubMed Google Scholar
  • Reed, B. (2001). Epistemic agency and the intellectual virtues. The Southern Journal of Philosophy , 39 ( 4 ), 507–525. doi: https://doi.org/10.1111/j.2041-6962.2001.tb01831.x   Google Scholar
  • Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 67–98). Chicago, ILL, USA: Open Court.   Google Scholar
  • Back to Top



Social Work Research Methods: From Conceptualization to Dissemination



Brett Drake

Social Work Research Methods: From Conceptualization to Dissemination 1st Edition

Social Work Research Methods is a stand-alone "how-to" social research text that covers conceptualization, design, implementation, data management, and statistical analysis through comprehensive, detailed examples.

It provides students with everything they need to learn about social science research and to complete a research project from start to finish. The research process is covered sequentially, in a straightforward "how-to" format: values and ethics, conceptualization, design, familiarization with key computer programs (Microsoft Excel, SAS, SPSS, and NVivo), analysis, implementation, and dissemination. The tone is intentionally light-hearted to increase student interest and comfort, and often-overlooked aspects of conducting research (e.g., data management, IRB clearance, and grant development) are addressed in detail.

Key highlights of the text include five example research projects. These example projects are presented in their entirety; including how the researchers chose their areas of interest, how they executed their literature reviews (annotated citation lists are given), and how they designed, implemented, and disseminated (poster, article, agency report, or PowerPoint presentation) their work.

  • ISBN-10 0205460976
  • ISBN-13 978-0205460977
  • Edition 1st
  • Publisher Pearson
  • Publication date December 1, 2019
  • Language English
  • Dimensions 11.2 x 8.69 x 0.9 inches
  • Print length 460 pages

Editorial Reviews

From the back cover

Social Work Research Methods: From Conceptualization to Dissemination

Brett Drake

Melissa Jonson-Reid

Basic Approach:

Written in an intentionally light-hearted tone, Social Work Research Methods is a stand-alone "how-to" text that provides everything necessary to learn about social science research and to complete a research project from start to finish. It covers conceptualization, design, implementation, data management, and statistical analysis through rich examples designed to increase interest and comfort level.

  • Five example research projects are integrated throughout the text and are presented in their entirety –including how the researchers chose their area of interest, how they executed literature reviews, and how they designed, implemented and disseminated their work.
  • Concepts are presented in a step-by-step format so students can do research projects with tangible results.
  • Content on values and ethics was developed to ensure adherence to CSWE guidelines .
  • Research as a part of a clinical practice (both single subject and multi-subject) is included for students interested in doing research as part of their clinical work.

What the reviewers are saying…

“There are no superfluous side trips into abstract theory or explanations of ‘why’ [students] should be interested in research. It just teaches them how to do it. It cuts to the chase from the beginning, and this prevents students from being turned off before they begin.”

Ted R. Watkins, Texas State University–San Marcos

“[ Social Work Research Methods ] includes many aspects of the research process (e.g., pilot testing an instrument, getting access to subjects, monitoring data collection) that are simply overlooked by the average research methods text, yet are crucial to a solid research study. The authors visit issues that come up in real research, yet are not considered very much in standard texts. It is an actual guidebook in carrying out research.”

Carolyn Turturro, University of Arkansas–Little Rock

“The writing is crisp, fresh, and clear…Students will likely find this text a fun read.”

Dean F. Duncan, III, University of North Carolina–Chapel Hill

Value-Packaged at No Additional Charge:

MyHelpingKit is a one-stop online portal that provides students with chapter-by-chapter Learning Objectives, Chapter Summaries, Practice Tests, Flashcards, Research Activities and Research Navigator™.

Value-Packaged at Minimal Charge (Also Available Separately):

…A workbook to accompany Social Work Research Methods. It includes exercises for each chapter, evidence-based practice modules, and step-by-step instructions for students working on their own research projects.





Open Access

Ten simple rules for innovative dissemination of research

Tony Ross-Hellauer, Jonathan P. Tennant, Viltė Banelytė, Edit Gorogh, Daniela Luzi, Peter Kraker, Lucio Pisacane, Roberta Ruggieri, Electra Sifacaki, and Michela Vignoli

Affiliations: Open and Reproducible Research Group, Institute of Interactive Systems and Data Science, Graz University of Technology and Know-Center GmbH, Graz, Austria; Center for Research and Interdisciplinarity, University of Paris, Paris, France; Freelance Researcher, Vilnius, Lithuania; University and National Library, University of Debrecen, Debrecen, Hungary; Institute for Research on Population and Social Policies, National Research Council, Rome, Italy; Open Knowledge Maps, Vienna, Austria; National and Kapodistrian University of Athens, Athens, Greece; Center for Digital Safety and Security, AIT Austrian Institute of Technology, Vienna, Austria


Published: April 16, 2020

https://doi.org/10.1371/journal.pcbi.1007704


Author summary

How we communicate research is changing because of new (especially digital) possibilities. This article sets out 10 easy steps researchers can take to disseminate their work in novel and engaging ways, and hence increase the impact of their research on science and society.

Citation: Ross-Hellauer T, Tennant JP, Banelytė V, Gorogh E, Luzi D, Kraker P, et al. (2020) Ten simple rules for innovative dissemination of research. PLoS Comput Biol 16(4): e1007704. https://doi.org/10.1371/journal.pcbi.1007704

Editor: Russell Schwartz, Carnegie Mellon University, UNITED STATES

Copyright: © 2020 Ross-Hellauer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was partly funded by the OpenUP project, which received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 710722. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: We have read the journal's policy and have the following conflicts: TR-H is Editor-in-Chief of the open access journal Publications . JT is the founder of the Open Science MOOC, and a former employee of ScienceOpen.

Introduction

As with virtually all areas of life, research dissemination has been disrupted by the internet and digitally networked technologies. The last two decades have seen the majority of scholarly journals move online, and scholarly books are increasingly found online as well as in print. However, these traditional communication vehicles have largely retained similar functions and formats during this transition. But digital dissemination can happen in a variety of ways beyond the traditional modes: social media have become more widely used among researchers [ 1 , 2 , 3 ], and the use of blogs and wikis as a specific form of ‘open notebook science’ has been popular for more than a decade [ 4 ].

Professional academic social networks such as ResearchGate and Academia.edu boast millions of users. New online formats for interaction with the wider public, such as TED talks broadcast via YouTube, often receive millions of views. Some researchers have even decided to make all of their research findings public in real time by keeping open notebooks [ 5 , 6 ]. In particular, digital technologies invoke new ways of reaching and involving audiences beyond their usual primary dissemination targets (i.e., other scholars) to actively involve peers or citizens who would otherwise remain out of reach for traditional methods of communication [ 7 ]. Adoption of these outlets and methods can also lead to new cross-disciplinary collaborations, helping to create new research, publication, and funding opportunities [ 8 ].

Beyond the increase in the use of web-based and computational technologies, other trends in research cultures have had a profound effect on dissemination. The push towards greater public understanding of science and research since the 1980s, and an emphasis on engagement and participation of non-research audiences have brought about new forms of dissemination [ 9 ]. These approaches include popular science magazines and science shows on television and the radio. In recent years, new types of events have emerged that aim at involving the general public within the research process itself, including science slams and open lab days. With science cafés and hackerspaces, novel, participatory spaces for research production and dissemination are emerging—both online and offline. Powerful trends towards responsible research and innovation, the increasing globalisation of research, and the emergence and inclusion of new or previously excluded stakeholders or communities are also reshaping the purposes of dissemination as well as the scope and nature of its audiences.

Many now view wider dissemination and public engagement with science to be a fundamental element of open science [ 10 ]. However, there is a paradox at play here, for while there have never been more avenues for the widespread dissemination of research, researchers tend nonetheless to value and focus upon just a few traditional outputs: journal articles, books, and conference presentations [ 11 ].

Following Wilson and colleagues [ 12 ], we here define research dissemination as a planned process that involves consideration of target audiences, consideration of the settings in which research findings are to be received, and communicating and interacting with wider audiences in ways that will facilitate research uptake and understanding. Innovative dissemination, then, means dissemination that goes beyond traditional academic publishing (e.g., academic journals, books, or monographs) and meetings (conferences and workshops) to achieve more widespread research uptake and understanding. Hence, a citizen science project, which involves citizens in data collection but does not otherwise educate them about the research, is not here considered innovative dissemination.

We here present 10 steps researchers can take to embrace innovative dissemination practices in their research, either as individuals or groups ( Fig 1 ). They represent the synthesis of multidimensional research activities undertaken within the OpenUP project ( https://www.openuphub.eu/ ). This European Coordination and Support Action grant award addressed key aspects and challenges of the currently transforming science landscape and proposed recommendations and solutions addressing the needs of researchers, innovators, the public, and funding bodies. The goal is to provide stakeholders (primarily researchers but also intermediaries) with an entry point to innovative dissemination, so that they can choose methods and tools based on their audience, their skills, and their requirements. The advice is directed towards both individual researchers and research teams or projects. It is similar to other entries in the Ten Simple Rules series (e.g., [ 13 , 14 ]). Ultimately, the benefit here for researchers is increased recognition and social impact of their work.

Fig 1. ( https://doi.org/10.1371/journal.pcbi.1007704.g001 )

Rule 1: Get the basics right

Despite changes in communication technologies and models, there are some basic organisational aspects of dissemination that remain important: to define objectives, map potential target audience(s), target messages, define mode of communication/engagement, and create a dissemination plan. These might seem a bit obvious or laborious but are critical first steps towards strategically planning a project.

Define objectives

The motivation to disseminate research can come in many forms. You might want to share your findings with wider nonacademic audiences to raise awareness of particular issues or invite audience engagement, participation, and feedback. Start by asking yourself what you want to achieve with your dissemination. This first strategic step will make all other subsequent steps much simpler, as well as guide how you define the success of your activities.

Map your audience

Specify who exactly you want your research results to reach, for which purposes, and what their general characteristics might be (e.g., policy makers, patient groups, non-governmental organisations). Individuals are not just ‘empty vessels’ to be filled with new knowledge, and having a deeper contextual understanding of your audience can make a real difference to the success of your engagement practices. Who is most affected by your research? Who might find it most valuable? What is it that you want them to take away? Get to know your target audiences, their needs and expectations of the research outcomes, as well as their preferred communication channels to develop a detailed understanding of their interests and align your messages and media with their needs and priorities. Keep in mind, too, that intermediaries such as journalists or science communication organisations can support or mediate the dissemination process.

Target/frame your messages

Target and frame the key messages that you want to communicate to specific groups. Think first from the perspective of what they might want or need to hear from you, rather than what you want to tell them. The choice of media and format depends strongly on your communication objectives, i.e., on what you want to achieve, and there are many ways to communicate your research: direct messages, blog or vlog posts, tweets, or posts on Instagram. Form and content go hand in hand. Engage intermediaries and leverage any relevant existing networks to help amplify your messages.

Create a dissemination plan

Many funded research projects require a dissemination plan. Even if yours does not, the formal exercise of creating a plan at the outset, organised around distinct milestones in the research life cycle, will help you to assign roles, structure activities, and budget funds for dissemination. This will ultimately save you time and make future work easier. If working in a group, distribute tasks and effort to ensure regular updates of content targeted to different communities. Engage those with specific skills in the use and/or development of appropriate communication tools, to help you use the right language and find suitable occasions to reach your identified audience. Research is not linear, however, so treat the plan as a living document to be flexibly adapted as the direction of the research changes.
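By way of illustration, the planning steps of Rule 1 (objectives, audiences, messages, channels, milestones) lend themselves to a lightweight structured checklist. The sketch below models one in Python; the field names and example entries are our own invention, not part of any standard or of the OpenUP guidance.

```python
from dataclasses import dataclass, field

@dataclass
class DisseminationItem:
    """One planned dissemination activity (illustrative field names)."""
    audience: str   # who should this reach?
    objective: str  # what should it achieve?
    message: str    # key message, framed for this audience
    channel: str    # e.g. blog post, policy brief, conference talk
    milestone: str  # research-lifecycle stage that triggers it

@dataclass
class DisseminationPlan:
    project: str
    items: list = field(default_factory=list)

    def channels_for(self, audience: str) -> list:
        """List the channels planned for a given audience."""
        return [i.channel for i in self.items if i.audience == audience]

# Example plan with two audiences (placeholder content).
plan = DisseminationPlan(project="Example project")
plan.items.append(DisseminationItem(
    audience="policy makers",
    objective="inform open science policy",
    message="open dissemination increases research uptake",
    channel="policy brief",
    milestone="final results",
))
plan.items.append(DisseminationItem(
    audience="researchers",
    objective="invite early feedback",
    message="preliminary findings are open for comment",
    channel="preprint + blog post",
    milestone="first analysis",
))
```

Treating the plan as data rather than prose makes it easy to review per audience and to update as the research changes direction.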

Rule 2: Keep the right profile

Whether communicating as an individual researcher, a research project, or a research organisation, establishing a prominent and unique identity online and offline is essential for communicating. Use personal websites, social media accounts, researcher identifiers, and academic social networks to help make you and your research visible. When doing this, try to avoid any explicit self-promotion—your personal profile naturally will develop based on your ability to be an effective and impactful communicator.

Academia is a prestige economy, where individual researchers are often evaluated based on their perceived esteem or standing within their communities [ 15 ]. Remaining visible is an essential part of accumulating esteem. An online presence maintained via personal websites, social media accounts (e.g., Facebook, Twitter, LinkedIn), researcher identifiers (e.g., ORCID), and academic social networks (e.g., ResearchGate, institutional researcher profiles) can be a personal calling card, where you can highlight experience and demonstrate your expertise in certain topics. Being active on important mailing lists, forums, and social media is not only a good chance to disseminate your findings to those communities but also offers you the chance to engage with your community and potentially spark new ideas and collaborations.

Using researcher identifiers like ORCID when disseminating outputs ensures that those outputs are unambiguously linked back to the individual researcher (and can even be automatically added to their ORCID profile). In the OpenUP survey, 41% of respondents reported using academic social networks as a medium to disseminate their research, and 26% said that these networks informed their professional work [ 16 ].
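One practical detail worth knowing when handling ORCID iDs in your own tooling: the final character is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so mistyped identifiers can be caught locally. A minimal validator sketch (the example iDs in the usage note are structurally valid/invalid test values, not endorsements):

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the ISO 7064 MOD 11-2 check digit of an ORCID iD
    given as '0000-0000-0000-0000' (last character may be 'X')."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    total = 0
    for ch in digits[:15]:
        # ISO 7064 MOD 11-2: fold each digit in, doubling the running total.
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[15] == expected
```

For instance, `orcid_checksum_ok("0000-0002-1825-0097")` (a well-known example iD from ORCID's documentation) returns `True`, while changing any digit makes it fail.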

Create a brand by giving your project a unique name, ideally with some intuitive relation to the issue you are investigating. Create a striking visual identity, with a compelling logo, core colours, and a project slogan. Build a website that leverages this visual identity and is as simple and intuitive as possible, both in its layout and in the way content is formulated (limit insider jargon). Create appropriate associated social media accounts (e.g., Twitter, Facebook, LinkedIn, SlideShare, YouTube) and link to them from the project website, along with a rolling feed of updates if possible.

Aim for a sustained presence with new and engaging content that reinforces project messaging; this can help to establish a core following or user base on different platforms. Consider including a blog to disseminate core findings or give important project updates, and a periodical newsletter to provide project updates and other news, keeping the community informed and engaged. Depending on the size of your project and budget, you might also produce hard-copy material such as leaflets or fact sheets, as well as branded giveaways to increase awareness of your project.

Finally, and perhaps most importantly, try not to come across as a 'scientific robot', and make sure to communicate the more human side of research.

Rule 3: Encourage participation

In the age of open research, don’t just broadcast. Invite and engage others to foster participation and collaboration with research audiences. Scholarship is a collective endeavour, and so we should not expect its dissemination to be unidirectional, especially not in the digital age. Dissemination is increasingly done at earlier stages of the research life cycle, and such wider and more interactive engagement is becoming an integral part of the whole research workflow.

Such participative activities can be as creative as you wish; for example, through games, such as Foldit for protein folding ( https://fold.it/portal/ ). You might even find it useful to actively engage ‘citizen scientists’ in research projects; for example, to collect data or analyse findings. Initiatives such as Zooniverse ( https://www.zooniverse.org/ ) serve as great examples of allowing anyone to freely participate in cutting-edge ‘people-powered research’.

Disseminating early and often showcases the progress of your work and demonstrates productivity and engagement as part of an agile development workflow. People like to see progress and react positively to narrative, so give regular updates to followers on social media, for example, blogging or tweeting early research findings for early feedback. Alternatively, involving businesses early on can align research to industry requirements and expectations, thus potentially increasing commercial impact. In any case, active involvement of citizens and other target audiences beyond academia can help increase the societal impact of your research [ 17 ].

Rule 4: Open science for impact

Open science is ‘transparent and accessible knowledge that is shared and developed through collaborative networks’, as defined by one systematic review [ 18 ]. It encompasses a variety of practices covering a range of research processes and outputs, including areas like open access (OA) to publications, open research data, open source software/tools, open workflows, citizen science, open educational resources, and alternative methods for research evaluation including open peer review [ 19 ]. Open science is rooted in principles of equitable participation and transparency, enabling others to collaborate in, contribute to, scrutinise and reuse research, and spread knowledge as widely as possible [ 20 ]. As such, innovative dissemination is a core element of open science.

Embracing open science principles can boost the impact of research. Firstly, OA publications seem to accrue more citations than their closed counterparts, as well as having a variety of possible wider economic and societal benefits [ 21 ]. There are a number of ways to make research papers OA, including at the journal site itself, or self-archiving an accepted manuscript in a repository or personal website.

Disseminating publications as preprints in advance of or parallel to journal submission can increase impact, as measured by relative citation counts [ 22 ]. Traditional publishing often takes a long time, with the wait between submission and acceptance of a paper frequently exceeding 100 days [ 23 ]. Preprinting speeds up dissemination, meaning that findings are available sooner for sharing and reuse. Potential platforms for disseminating preprints include the Open Science Framework, bioRxiv, and arXiv.

Dissemination of other open science outputs that would usually remain hidden also not only helps to ensure the transparency and increased reproducibility of research [ 24 ], but also means that more research elements are released that can potentially impact upon others by creating network effects through reuse. Making FAIR (Findable, Accessible, Interoperable, Reusable) research data and code available enables reuse and remixing of core research outputs, which can also lead to further citations for projects [ 25 , 26 , 27 ]. Published research proposals, protocols, and open notebooks act as advertisements for ongoing research and enable others to reuse methods, exposing the continuous and collaborative nature of scholarship.

To enable reuse, embrace open licenses. When it comes to innovative dissemination, the goal is usually to make materials accessible to as large an audience as possible; without an appropriate open license, materials may be free to access but cannot be widely used, modified, or shared. The best choices here are the widely adopted Creative Commons licenses CC BY or CC0. Less permissive variants of these licenses constrain reuse for commercial or derivative purposes, which prevents the use of materials in many forms of (open) educational resources and other open projects, including Wikipedia. Give careful consideration to licensing, depending on your intended outcomes for the project (see Rule 1). Research institutes and funding bodies typically have policies and guidance on the use and licensing of such materials and should be consulted before any materials are released.
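The license choice can also be made machine-readable, which helps search engines and aggregators surface the data. As a purely illustrative sketch using schema.org Dataset terms in JSON-LD (the dataset name, creator, and identifier below are invented placeholders):

```python
import json

# Minimal schema.org Dataset description with an explicit CC BY license.
# All names, descriptions, and identifiers are invented placeholders.
dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example survey responses",
    "description": "Anonymised survey data accompanying a research article.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {"@type": "Person", "name": "Jane Researcher"},
}

# Serialised JSON-LD, suitable for embedding in a web page or repository record.
jsonld = json.dumps(dataset_metadata, indent=2)
```

Embedding a block like this alongside the dataset states the reuse terms explicitly, rather than leaving would-be reusers to guess.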

Rule 5: Remix traditional outputs

Traditional research outputs like research articles and books can be complemented with innovative dissemination to boost impact; for example, by preparing accompanying nonspecialist summaries, press releases, blog posts, and visual/video abstracts to better reach your target audiences. Free media coverage can be an easy way to get results out to as many people as possible. There are countless media outlets interested in science-related stories. Most universities and large research organisations have an office for public affairs or communication: liaise with these experts to disseminate research findings widely through public media. Consider writing a press release for manuscripts that have been accepted for publication in journals or books and use sample forms and tools available online to assist you in the process. Some journals also have dedicated press teams that might be able to help you with this.

Another useful tool to disseminate traditional research outputs is to release a research summary document. This one- or two-page document clearly and concisely summarises the key conclusions from a research initiative. It can combine several studies by the same investigator or by a research group and should integrate two main components: key findings and fact sheets (preferably with graphical images to illustrate your point). This can be published on your institutional website as well as on research blogs, thematic hubs, or simply posted on your social media profiles. Other platforms such as ScienceOpen and Kudos allow authors to attach nonspecialist summaries to each of their research papers.

To maximise the impact of your conference presentations or posters, there are several steps that can be taken. For instance, you can upload your slides to a general-purpose repository such as Figshare or Zenodo and add a digital object identifier (DOI) to your presentation. This also makes it easier to integrate such outputs with other services like ORCID. You can also schedule tweets before and during any conferences, and use the conference hashtag to publicise your talk or poster. Finally, you can also add information about your contributions to email signatures or out-of-office messages [ 28 ].

Rule 6: Go live

In-person dissemination does not just have to be at stuffy conferences. With research moving beyond the walls of universities, there are several types of places for more participatory events. Next to classic scientific conferences, different types of events addressing wider audiences have emerged. It is possible to hit the road and take part in science festivals, science slams, TEDx talks, or road shows.

Science slams are short talks in which researchers explain a scientific topic to a typically nonexpert audience. Similar to other short talk formats like TED talks, they lend themselves to being spread over YouTube and other video channels. A prominent example from the German-speaking area is Giulia Enders, who won the first prize in a science slam that took place in 2012 in Berlin. The YouTube video of her fascinating talk about the gut has received over 1 million views. After this success, she got an offer to write a book about the gut and the digestive system, which has since been published and translated into many languages. You never know how these small steps might end up having a wider impact on your research and career.

Another option is Science Shops, small entities that provide independent, participatory research support to civil society; they are usually linked to universities. Hacker and maker spaces, by contrast, tend to be community-run locations where people with an interest in science, engineering, and art meet and collaborate on projects. Science festivals are community-based showcases of science and technology that take place over large areas for several days or weeks and directly involve researchers and practitioners in public outreach. Less formally, Science Cafés or similar events like Pint of Science are public engagement events in casual settings like pubs and coffeehouses.

Alternatively, for a more personal approach, consider reaching out to key stakeholders who might be affected by your research and requesting a meeting, or participating in relevant calls for policy consultations. Such an approach can be especially powerful in getting the message across to decision-makers and thought-leaders, although the resources required to schedule and potentially travel to such meetings mean you should target these activities very carefully. And don’t forget the value of serendipity—who knows who you’ll meet in the course of your everyday meetings and travels. Always be prepared with a 30-second ‘elevator pitch’ that sums up your project in a confident and concise manner—such encounters may be gateways to greater engagement or opportunities.

Rule 7: Think visual

Dissemination of research is still largely ruled by the written or spoken word. However, there are many ways to introduce visual elements that can act as attractive means to help your audience understand and interpret your research. Disseminate findings through art or multimedia interpretations. Let your artistic side loose or use new visualisation techniques to produce intuitive, attractive data displays. Of course, not everyone is a trained artist, and this will be dependent on your personal skills.

Most obviously, this could take the form of data visualisation. Graphic representation of quantitative information reaches back to ‘earliest map-making and visual depiction’ [ 29 ]. As technologies have advanced, so have our means of visually representing data.

If your data visualisations could be considered too technical and not easily understandable by a nonexpert reader, consider creating an ad hoc image for this document; sometimes this can also take the form of a graphical abstract or infographic. Use online tools to upload a sample of your data and develop smart graphs and infographics (e.g., Infogr.am, Datawrapper, Easel.ly, or Venngage).

Science comics can be used, in the words of McDermott, Partridge, and Bromberg [ 30 ], to ‘communicate difficult ideas efficiently, illuminate obscure concepts, and create a metaphor that can be much more memorable than a straightforward description of the concept itself’. McDermott and colleagues continue that comics can be used to punctuate or introduce papers or presentations and to capture and share the content of conference talks, and that some journals even have a ‘cartoon’ publication category. They advise that such content has a high chance of being ‘virally’ spread via social media.

As previously discussed, you may also consider creating a video abstract for a paper or project. However, as with all possible methods, it is worth considering the relative costs versus benefits of such an approach. Creating a high-quality video might have more impact than, say, a blog post but could be more costly to produce.

Projects have even successfully disseminated scientific findings through art. For example, The Civilians, a New York–based investigative theatre company, received a three-year grant to develop The Great Immensity, a play addressing the complexity of climate change. AstroDance tells the story of the search for gravitational waves through a combination of dance, multimedia, sound, and computer simulations. The annual Dance Your PhD contest, which began in 2007 and is sponsored by Science magazine, even asks scientists to interpret their PhD research as dance. This initiative receives approximately 50 submissions a year, demonstrating the popularity of novel forms of research dissemination.

Rule 8: Respect diversity

The academic discourse on diversity has always included discussions of gender, ethnic and cultural backgrounds, digital literacy, and epistemic, ideological, or economic diversity. A common approach is to include as many diverse groups in research teams as possible; for example, more women, underrepresented minorities, or persons from developing countries. For science communication, projects should consider from the outset not only raising awareness of diversity issues but also increasing the visibility of underrepresented minorities in research and including more women in science communication teams. Another important aspect is assessing how communication messages are framed, and whether the chosen format and content are appropriate to address and respect all audiences.

Research should reach all who might be affected by it. Respect inclusion in scientific dissemination by creating messages that reflect and respect diversity regarding factors like gender, demography, and ability. Overcoming geographic barriers is also important, as is considering differences in time zones and the other commitments that participants might have. As part of this, it is a key responsibility to create a healthy and welcoming environment for participation; a code of conduct, diversity statement, and contributing guidelines can really help provide this for projects.

The 2017 Progression Framework benchmarking report of the Scientific Council made several recommendations on how to make progress on diversity and inclusion in science: (1) a strategy and action plan for diversity should be developed that requires action from all members, and (2) diversity should be built into a wide range of scientific activities, such as prizes and awards, guidance on including a range of demographic groups in communications, and education and training.

Rule 9: Find the right tools

Innovative dissemination practices often require different resources and skills than traditional dissemination methods, and some aspects may therefore carry higher costs. A wide range of tools can be found via sources such as the OpenUP Hub, which lists a catalogue of innovative dissemination services organised according to the following categories, with some suggested tools:

  • Visualising data: tools to help create innovative visual representations of data (e.g., Nodegoat, DataHero, Plot.ly)
  • Sharing notebooks, protocols, and workflows: ways to share outputs that document and share research processes, including notebooks, protocols, and workflows (e.g., HiveBench, Protocols.io, Open Notebook Science Network)
  • Crowdsourcing and collaboration: platforms that help researchers and those outside academia to come together to perform research and share ideas (e.g., Thinklab, Linknovate, Just One Giant Lab)
  • Profiles and networking: platforms to raise academic profile and find collaboration and funding opportunities with new partners (e.g., Humanities Commons, ORCID, ImpactStory)
  • Organising events: tools to help plan, facilitate, and publicise academic events (e.g., Open Conference Systems, Sched, ConfTool)
  • Outreach to wider public: channels to help broadcast your research to audiences beyond academia, including policy makers, young people, industry, and broader society (e.g., Famelab, Kudos, Pint of Science)
  • Publishing: platforms, tools, and services to help you publish your research (e.g., Open Science Framework, dokieli, ScienceMatters)
  • Archive and share: preprint servers and repositories to help you archive and share your texts, data, software, posters, and more (e.g., BitBucket, GitHub, RunMyCode)

The Hub here represents just one attempt to create a registry of resources related to scholarly communication. A similar project is the 101 Innovations in Scholarly Communication project, which contains different tools and services for all parts of a generalised research workflow, including dissemination and outreach. This can be broadly broken down into services for communication through social media (e.g., Twitter), as well as those designed for sharing of scholarly outputs, including posters and presentations (e.g., Zenodo or Figshare). The Open Science MOOC has also curated a list of resources for its module on Public Engagement with Science, and includes key research articles, organisations, and services to help with wider scientific engagement.

Rule 10: Evaluate, evaluate, evaluate

Assess your dissemination activities. Are they having the right impact? If not, why not? Evaluation of dissemination efforts is an essential part of the process. To know what worked and which strategies did not generate the desired outcomes, all dissemination activities should be rigorously assessed. Such evaluation should combine quantitative and qualitative indicators (which should already be foreseen in the planning stage of dissemination; see Rule 1). Questionnaires, interviews, observations, and assessments can also be used to measure impact. Identifying the most successful practices will give you evidence for the most effective strategies to reach your audience. In addition, evaluation can help you plan future budgets and minimise spending and effort on ineffective dissemination methods.

Some examples of quantitative indicators include the following:

  • Citations of publications;
  • alternative metrics related to websites and social media platforms (updates, visits, interactions, likes, and reposts);
  • numbers of events held for specific audiences;
  • numbers of participants in those events;
  • production and circulation of printed materials;
  • media coverage (articles in specialised press, newsletters, press releases, interviews, etc.); and
  • how much time and effort were spent on activities.

Some examples of qualitative indicators include the following:

  • Visibility in the social media and attractiveness of website;
  • newly established contacts with networks and partners and the outcomes of these contacts;
  • feedback from the target groups; and
  • feedback within your group on which dissemination strategies seemed most effective in conveying your messages and reaching your target audiences.

We recognise that researchers are usually already very busy, and we do not seek to pressurise them further by increasing their burdens. Our recommendations, however, come at a time of shifting norms in how researchers are expected to engage with society through new technologies. Researchers are now often partially evaluated on such engagement, or expected to include dissemination plans in grant applications. We also do not want to encourage the further fragmentation of scholarship across different platforms and ‘silos’, and therefore we strongly encourage researchers to be highly strategic in how they engage with different methods of innovative dissemination. We hope that these simple rules provide guidance for researchers and their future projects, especially as the available tools and services evolve over time. Some of these suggestions or platforms might not work across all project types, and it is important for researchers to find which methods work best for them.

Acknowledgments

Many thanks to everyone who engaged with the workshops we conducted as part of this grant award.

  • 19. Pontika N, Knoth P, Cancellieri M, Pearce S. Fostering Open Science to Research Using a Taxonomy and an ELearning Portal. In: iKnow: 15th International Conference on Knowledge Technologies and Data Driven Business; 2015 Oct 21–22; Graz, Austria [cited 2020 Mar 23]. Available from: http://oro.open.ac.uk/44719/
  • 29. Friendly M. A Brief History of Data Visualization. In: Chen C, Härdle W, Unwin A, editors. Handbook of Data Visualization. Springer Handbooks of Computational Statistics. Berlin, Heidelberg: Springer; 2008. pp. 15–56. https://doi.org/10.1007/978-3-540-33037-0_2


Become a social work leader and advance your professional practice through UB's Doctor of Social Work (DSW) in Social Welfare program. Our part-time, online DSW program is the first and only DSW program in SUNY and one of about 40 programs across the United States.


About the Program

Our 39-credit DSW program is aimed at experienced social workers with a desire to advance their professional practice and address research-to-practice gaps through the application of implementation science. DSW students will learn to effectively integrate evidence into the practice setting to promote diversity, equity and inclusion. Our program incorporates the signature strengths of the school’s trauma-informed and human rights (TI-HR) perspective into the implementation and evaluation of evidence-based treatments with vulnerable populations.

As a part-time, fully online program, the DSW capitalizes on the latest digital technologies to innovate social work education and practice. Graduates are prepared to assume leadership in the implementation and dissemination of advanced social work practice and contribute to advancing equity and bringing a TI-HR perspective into practice.  

Based on coursework and knowledge gained within the program, students implement a Capstone Project in their agency with a target population.

Three fundamental aspects make our online DSW program unique:

  • Utilizes implementation science to translate research into best practice interventions and identify strategies to address barriers to effective service delivery and program uptake. This is applicable at all levels of social work practice, and in both clinical and non-clinical settings.
  • Incorporates a trauma-informed and human rights perspective to truly transform social work practice.
  • Integrates state-of-the-art digital technologies to innovate social work education and practice, as our students use virtual reality and develop a global professional collaborative network.

  • Credit hours: 39
  • Program length: 3 years (7 semesters), part-time
  • Program start: Fall semester only

Fall Semester

  • SW 620 Digital Technology and Professional Collaboration Networks for Social Work Practice
  • SW 626 Doctoral Seminar in Trauma and Human Rights

Spring Semester

  • SW 621 Concepts in Implementation Science
  • SW 622 Evaluating & Utilizing Evidence-Based Practices in Social Work
  • SW 624 Methods in Implementation Science I
  • SW 627 Organizational Characteristics & Implementing Evidence-Based Practice
  • SW 631 DSW Capstone I: Identifying Evidence-Based Interventions for Translation
  • SW 635 Methods in Implementation Science II

Summer Semester

  • SW 632 DSW Capstone II: Designing Implementation Strategies
  • SW XXX Elective*
  • SW 633 DSW Capstone III: Testing Implementation Strategies
  • SW 629 Disseminating, Spreading/Scaling, and Sustaining Interventions
  • SW 634 DSW Capstone IV: Evaluating & Disseminating Interventions

* Electives:  Students must complete one 3-credit elective throughout the DSW program. The elective will be selected in consultation with the DSW Program Director. The elective should have a direct relevance to the student’s substantive area and/or enable the student to expand pedagogical skills. In any case, students should have a clear rationale for electives they select.

Suggested elective:

SW 628 Teaching and Pedagogy

In general, electives should be taken at the graduate level and from within the UBSSW DSW or Master of Social Work (MSW) program. In order to take an MSW course for DSW elective credit:

  • The course must be taught by a faculty member holding a doctorate.
  • The instructor must agree to adjust assignments and expectations to qualify as doctoral-level work.
  • Course content may not duplicate a previous course (including from the student’s MSW program).
  • The student must submit a DSW Elective Permission form, including clear justification, for review and approval by their advisor and the DSW Program Director prior to enrolling in the course.

Capstone Project


The Capstone Project is a four-semester process incorporating implementation science principles and strategies to promote the adoption and integration of evidence-based interventions in direct practice settings and advance the trauma-informed human rights perspective. Under guidance from DSW program faculty, students independently identify and pursue a problem, intervention, and implementation research question(s) of their choice. They then implement the intervention at their agency with a target population. The target population can include clients, practitioners, administrators, volunteers, and/or other individuals associated with or served by the student’s agency.

Capstone Project Proposal

During the first two semesters of the four-semester Capstone Project, students will develop an Implementation Research Project Proposal. The proposal must include a literature review; theory, model and/or conceptual framework; implementation science strategies; methodology; analytic strategy; and timeline for completion of work. Students will also need to complete the requisite university Institutional Review Board (IRB) steps as well as obtain any agency-required IRB approval (if applicable).

Capstone Project Implementation, Evaluation, and Dissemination

Provided faculty approval is granted, students will move forward with the implementation, evaluation, and dissemination phase of their Capstone Project during their last two semesters. With faculty support, students implement their proposed project, collect and analyze data, and critically evaluate their results. Students must develop and submit a Capstone Project Final Paper which is a written description of their project. The assignment is designed to provide students with a product that will provide the basis for a manuscript for publication or other means of dissemination.   

Online Course Delivery Methods

  • The DSW will require a mandatory orientation session in August before the start of the program. 
  • The DSW program can be completed without physically coming to campus. Online courses can have both synchronous and asynchronous components. Synchronous sessions require students to log on at a specific day and time for live video sessions, while asynchronous components give students the flexibility to complete required work within a prescribed time frame and deadline. Faculty members will advise students on the number of synchronous sessions required in any course.

Receive NY Social Work Contact Hours

Select DSW courses are approved for NYSED social work contact hours to renew your license registration. If you have a license in another state, check with your state regulatory board to determine if NY hours will be accepted in your state. Each of the 3-credit online DSW courses listed on the form is approved for 45 NYSED live online social work contact hours.

Use the form below to request a contact hours certificate upon completion of DSW coursework.

Tuition and Cost of Attendance

Approved tuition rates for fall 2024 are as follows. 

Tuition (per credit hour):

  • New York residents: $800
  • Out-of-state/international: $1,092
DSW students will be required to purchase a stand-alone virtual reality headset that will be used throughout the DSW program. The estimated cost is $300-$400. Details on the specific headset to be purchased will be provided upon acceptance into the program.

DSW students can apply for financial aid via the FAFSA.

To qualify for the in-state tuition rate, admitted students must provide proof of New York State residency. Visit the  accepted student information page for details. 


Contact us: [email protected]

Michelle Fortunato-Kewin, DSW '22


“We need culturally responsive interventions that work now.”


Social Workers’ Perceived Barriers and Facilitators to Social Work Practice in Schools: A Scoping Review


Sarah Binks, Lyndal Hickey, Airin Heath, Anna Bornemisza, Lauren Goulding, Arno Parolini, Social Workers’ Perceived Barriers and Facilitators to Social Work Practice in Schools: A Scoping Review, The British Journal of Social Work , Volume 54, Issue 6, September 2024, Pages 2661–2680, https://doi.org/10.1093/bjsw/bcae046


The aim of this scoping review was to establish the breadth of the academic literature regarding the barriers and facilitators to social work practice in schools as perceived by School Social Workers (SSWs). Following the PRISMA-ScR Scoping Review Framework, 42 articles were identified as meeting the inclusion criteria. Five interrelated themes related to the barriers and facilitators to SSW practice were identified: (1) Inadequacy of service delivery infrastructure; (2) SSWs’ role ambiguities and expectations; (3) SSWs’ competency, knowledge and support; (4) School climate and context; and (5) Cultivating relationships and engagement. This scoping review found that social workers perceive far greater barriers than facilitators when delivering services in school settings, with limited evidence related to the facilitators that enhance School Social Work (SSW) practice. Further research regarding the facilitators of SSW practice is needed, specifically in countries where research on this topic is emergent.

Within the field of Social Work (SW), School Social Work (SSW) practice is a unique specialization that is committed to supporting students to thrive and reach their full educational potential. There is a growing need for school-based mental health services due to the changing political, economic, cultural and environmental contexts and challenges of the last ten years, which have seen an increase in xenophobia; racism; and social, economic and health inequalities (Phillippo et al., 2017; Capp et al., 2021; Kelly et al., 2021; Daftary, 2022; Villarreal Sosa, 2022). School Social Workers (SSWs) have been instrumental in providing effective psychosocial and mental health interventions to students and their families to overcome educational barriers and inequities related to homelessness, family violence, bullying, school violence, sexuality, grief and loss, disabilities and school attendance, and in response to the coronavirus disease 2019 (COVID-19) pandemic (Reid, 2006; Sawyer et al., 2006; Allen-Meares et al., 2013; Quinn-Lee, 2014; Rueda et al., 2014; Miller et al., 2015; Webber, 2018; Smith-Millman et al., 2019; Johnson and Barsky, 2020; Karikari et al., 2020; Capp et al., 2021; Daftary, 2022). However, for SSWs to respond effectively to the increased need and demand for services, they must overcome barriers to effective SSW practice such as resource restrictions, unmanageable workloads, ambiguous roles and responsibilities, professional isolation, and limited supervision and training (Agresta, 2006; Teasley et al., 2012; Whittlesey-Jerome, 2012; Phillippo et al., 2017; Beddoe, 2019; Capp et al., 2021).

Failure to address barriers to SSW practice can increase job-related stress, job dissatisfaction, compassion fatigue, vicarious trauma, burnout, absenteeism and attrition, which in turn can have a detrimental impact on the provision of effective SSW services addressing the mental health and wellbeing needs of students, families and the school system (Lloyd et al., 2002; Agresta, 2006; Caselman and Brandt, 2017). Moreover, the existing research regarding the barriers and facilitators to SSW practice is substantially deficit-focused and provides limited understanding of how SSWs respond to these practice challenges and how they facilitate effective SSW practice. The dearth of evidence regarding SSWs’ perspectives makes it challenging to assess the impact that these barriers may have on SSWs’ wellbeing and may hinder evidence-informed approaches to enhancing practitioner wellbeing. Consequently, a growing evidence base emphasises that understanding how SSWs perceive the barriers and facilitators to SSW practice is essential to ensuring continuity of care in student–SSW relationships, contributing to the improvement of student, family and school outcomes (Caselman and Brandt, 2017).

Despite the available evidence highlighting the importance of understanding the barriers and facilitators of SSW practice and the emergence of National SSW Practice Models in the USA ( Frey et al. , 2013 ) and Australia ( Australian Association of Social Workers (AASW), 2011 ), there is a lack of synthesis of the existing literature examining the barriers and facilitators that influence the successful integration of these SSW practice standards in real-world settings and across international perspectives.

The present study addresses this shortcoming by synthesising the existing research to identify themes related to the barriers and facilitators to SSW practice. Understanding how SSWs resolve these practice-based challenges can enhance evidence-informed best practice for SSWs, better inform social workers’ education and preparation to enter the field, and strengthen the linkages between research and SW practice. Using a scoping review methodology, this study aims to answer the following research question: What evidence exists in the academic literature regarding the perceived barriers and facilitators to SW practice experienced by social workers (SWs) in schools in Australia, Canada, Aotearoa NZ, the UK and USA?

This review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) framework (Tricco et al., 2018). A scoping review was chosen as the appropriate method to synthesise the existing research regarding the barriers and facilitators to SSW practice, to map the relevant literature and to identify key concepts and knowledge gaps (Arksey and O’Malley, 2005; Levac et al., 2010; Munn et al., 2018). This study uses the population, concept and context (PCC) approach outlined by Peters et al. (2015).

The study focused on Social Workers working in school settings. For this scoping review, a Social Worker is defined as a graduate of a SW education program at the bachelor’s or master’s degree level or is eligible for accreditation with the SW governing body in their location of practice.

SSWs are trained mental health professionals who provide SW services in a school setting, with the primary goal of supporting students’ learning potential and facilitating successful learning outcomes and full participation, in consultation with school staff, parents and communities (AASW, 2011; National Association of Social Workers (NASW), 2012; Frey et al., 2013; Constable, 2016). While a variety of SSW models are used internationally, SSW practice broadly encompasses: (1) evidence-based, educationally relevant behaviour and mental health services with students, families and school personnel; (2) promoting school climate, culture and system change to foster academic achievement; (3) facilitating access and coordination with school and community resources; and (4) research, education and professional development (AASW, 2011; NASW, 2012; Frey et al., 2013). For this study, we defined barriers as impediments to the implementation of SSW practice and facilitators as enablers that enhance SSW practice interventions and efficacy (Teasley et al., 2010; IGI Global, 2023).

This study focused on SW practice in primary and secondary schools, which comprise students in grades Kindergarten/Prep to 12. The study included all schools that fit this category, regardless of funding or religious affiliation. To ensure that the identified evidence is comparable and manageable in scope, the inclusion criteria were restricted to published studies of SSWs’ perspectives in Australia, Canada, Aotearoa NZ, the UK and USA. These countries were selected given their similarities in language, governance structures, school systems, colonial histories and the historical development of the social work profession in response to industrialization, urbanization and social inequalities, while still allowing a meaningful comparative analysis that recognises the diversity of cultural, historical, political and socioeconomic factors. Given the contextual differences across the five countries, such as the existence or absence of SSW practice models, the variety of SSW roles and responsibilities, the availability of SSW-specific tertiary education, licencing, accreditation and professional representation, and specific legislation and funding guiding SSW practice (e.g. the No Child Left Behind Act (2002) and Individuals with Disabilities Education Act (2004) in the USA), this study has, where relevant, specified the context-specific barriers and facilitators in the results section (Slovak et al., 2006; NASW, 2012).

Eligibility criteria

This study focused on academic, peer-reviewed literature written in English and published between January 2000 and February 2022. We limited the search to post-2000 given the societal and mental health service system changes that have increased the focus on students' social and emotional wellbeing, so as to reflect the contemporary educational landscape. For the full inclusion and exclusion criteria for this scoping review, see Table 1.

Inclusion and exclusion criteria.

Population
Inclusion: Social workers (SWs) providing services in school settings.
Exclusion: SWs not providing services in schools, or not during the school day.

Concept
Inclusion: Social work (SW) services in school settings provided by SWs.
Exclusion: Articles not regarding SWs' perspectives; barriers or facilitators not experienced by the SWs; barriers not related to the delivery of SW services in school settings.

Context
Inclusion: Australia, Canada, Aotearoa NZ, USA, UK; schools (Kindergarten/Prep to 12).
Exclusion: Optional education settings (post-secondary education, pre-school or kindergarten when not compulsory); field or professional education; School(s) of SW or SW Education.

Sources
Inclusion: English language; dated 2000 to February 2022; academic peer-reviewed literature and articles; research chapters in edited books.
Exclusion: Conceptual studies, grey literature, dissertations, non-research books, editorials.


Search strategy

The search strategy and databases were selected in consultation with an expert librarian and authors (SB, AP and LH). In February 2022, the lead author searched seven academic databases: PsycInfo (OVID), CINAHL (EBSCO), ERIC (EBSCO), Medline (OVID), INFORMIT ('A+ education' and 'humanities and social sciences'), ASSIA (ProQuest) and SocINDEX (EBSCO). These databases were chosen for their broad coverage of the SW and social care fields and their ability to focus results on those most relevant.

Titles and abstracts were searched in all databases. The following search terms were intentionally broad to capture the relevant literature, given the multiple possible terms representing the perspectives of SSWs: 'Social Work*' AND (School* OR Education*) AND (Perception* or perceive* or attitude* or perspective* or view* or belie* or opinion* or impression* or experience* or encounter* or identif*). Where available, the search was expanded to subject headings. We included peer-reviewed articles and research chapters in edited books to ensure we captured the depth and breadth of empirical evidence that matched the inclusion criteria. We did not include search terms related to 'barrier' or 'facilitator' to practice, given we found that this limit introduced bias into the search and excluded some results that would otherwise have been included.

Selection of sources of evidence

All articles were screened by two independent reviewers to minimise potential reviewer bias. The lead author (SB) reviewed all 14,317 articles for title and abstract screening and 285 for full-text review, while other authors (LH, AH, AB, LK) were second reviewers during both screening stages. During the title and abstract phase, a third reviewer (AP) resolved any conflicts. After the full-text review, a third reviewer (AP) resolved conflicts related to exclusion reasons (e.g. wrong concept or setting); any remaining conflicts were resolved by consensus.

Data charting process

The authors developed a data charting form specifying which variables to extract. Authors (SB, LH and AP) independently charted the data, discussed the results and updated the data charting form in an iterative process. Disagreements on data charting were resolved by consensus and, if required, discussion with other authors. Results were reviewed by all authors. The research aim and question guided the data synthesis process, with relevant data charted into the following categories: study characteristics (e.g. authors, title, publication year, country, aims/purpose, population, sample size, methodology/methods) and relevant outcomes/findings and recommendations (by study authors). Data regarding the barriers and facilitators to SSW practice were charted based on whether SSWs perceived each factor to be a barrier or a facilitator to SSW practice. Key findings were synthesised into common themes, guided by the research question, with a focus on the barriers and facilitators to SSW practice (Hsieh and Shannon, 2005). Initially, the data were coded under the broad themes of 'barriers' or 'facilitators' and were subsequently grouped into themes following the conventional content analysis approach, where all data were sorted into categories based on how the different codes were related and linked, and then organised into meaningful clusters (Hsieh and Shannon, 2005). See Supplementary Table S1 for an overview of the data synthesised.

Study selection

The final search results (n = 25,865) from seven academic databases were exported into ENDNOTE (version X9) bibliographic software (Clarivate Analytics, 2018) and duplicates (n = 11,492) were removed (Figure 1). Screening of articles (n = 14,373) was conducted using COVIDENCE software (Veritas Health Innovation, 2022). Following the title and abstract screening, 14,032 articles were excluded. A total of 285 full-text articles were retrieved and screened, with 242 excluded; one full-text article could not be retrieved. Overall, 42 articles met the inclusion criteria.

PRISMA flow chart of article selection process.


The majority of studies (n = 36) were from the USA, with the remaining six divided between Australia (n = 2), New Zealand (n = 2), Canada (n = 1) and the UK (England and Wales) (n = 1). There appears to be increasing research interest in this area, with over 75% of studies published after 2010 (n = 32); almost half published after 2015 (n = 19); and 23.8% published since 2020 (n = 10).

Characteristics of sources of evidence

The included studies employed a range of methodologies: qualitative (n = 16), quantitative (n = 10) and mixed methods (n = 16). Methods included interviews, focus groups, questionnaires and analysis of archival document records. Ten studies did not disaggregate, or only partially disaggregated, their data, preventing delineation of SSWs' responses from those of other professionals (Crepeau-Hobson et al., 2005; Reid, 2006; Sawyer et al., 2006; Peabody, 2014; Avant and Lindsey, 2016; Smith-Millman et al., 2019; Sweifach, 2019; Johnson and Barsky, 2020; Heberle et al., 2021; Goodcase et al., 2022). Smith-Millman et al. (2019) noted SSWs' perceived 'barriers' without defining the term further.

Synthesis of results: SSW practice barrier and facilitator themes

The analysis of available evidence identified five interrelated themes: (1) Inadequacy of service delivery infrastructure; (2) SSW role ambiguities and expectations; (3) SSWs’ competency, knowledge and support; (4) School climate and context and (5) Cultivating relationships and engagement.

Inadequacy of service delivery infrastructure

SSWs reported that adequate service delivery infrastructure, such as the availability of human and material resources, was an essential requirement for meeting student needs, and that a lack of such resources hampered their ability to do their job and increased their stress levels and job dissatisfaction (Agresta, 2006). Of the 42 included papers, 81% (n = 34) identified SSW practice barriers and facilitators related to an inadequacy of service delivery infrastructure. Of these, 67% (n = 28) reported on barriers only, no articles reported on facilitators only and 14% (n = 6) identified both barriers and facilitators to SSW practice. SSWs perceived restrictive funding requirements (e.g. limited school resources, scarcity of funding for SSW positions and SSW salaries) as a barrier to SSW service delivery, and some SSWs expressed concerns for the future of SSW practice and job security, particularly during economic hardship (Crepeau-Hobson et al., 2005; Raines, 2006; Teasley et al., 2010, 2012; Bronstein et al., 2011; Lee, 2012; Whittlesey-Jerome, 2012; Peckover et al., 2013; Peabody, 2014; Rueda et al., 2014; Miller et al., 2015; Avant and Swerdlik, 2016; Johnson and Barsky, 2020; Capp et al., 2021; Drew and Gonzalez, 2021; Heberle et al., 2021). SSWs reported that insufficient SSW staffing levels impacted their ability to meet student needs and resulted in unmanageable caseloads, unrealistic SSW-to-student ratios, serving multiple schools and the inability to provide services in some areas (Crepeau-Hobson et al., 2005; Raines, 2006; Teasley et al., 2010, 2012; Bronstein et al., 2011; Lee, 2012; Whittlesey-Jerome, 2012; Peckover et al., 2013; Peabody, 2014; Rueda et al., 2014; Miller et al., 2015; Avant and Swerdlik, 2016; Johnson and Barsky, 2020; Capp et al., 2021; Drew and Gonzalez, 2021; Heberle et al., 2021).

Time and logistics were consistently mentioned as resource barriers to SSW practice when inadequate, but were seen as facilitators when SSWs were able to access appropriate and confidential work and meeting space and were able to be informally present and visible in schools and in the community (Blair, 2002; Crepeau-Hobson et al., 2005; Teasley, 2005; Agresta, 2006; Mann, 2008; Chanmugam, 2009; Teasley et al., 2010; Lee, 2012; Peckover et al., 2013; Avant, 2014; Peabody, 2014; Quinn-Lee, 2014; Rueda et al., 2014; Miller et al., 2015; Avant and Lindsey, 2016; Avant and Swerdlik, 2016; Beddoe, 2019; Johnson and Barsky, 2020; Drew and Gonzalez, 2021; Elswick and Cuellar, 2021; Kelly et al., 2021; Goodcase et al., 2022).

SSWs reported that a lack of material resources (e.g. specialised curricula or evidence-based practice (EBP) resources) to guide SSW practice, and documentation and reporting requirements, were barriers to SSW practice (Agresta, 2006; Bates, 2006; Reid, 2006; Sawyer et al., 2006; Chanmugam, 2009; Garrett, 2012; Lee, 2012; Quinn-Lee, 2014; Avant and Lindsey, 2016; Phillippo et al., 2017; Elswick and Cuellar, 2021; Heberle et al., 2021). During the COVID-19 school shutdowns, SSWs reported barriers regarding insufficient access to the internet, technology and school-based resources and the associated software skills, knowledge and support (Capp et al., 2021; Kelly et al., 2021; Daftary, 2022). In only three studies did SSWs identify resources that facilitated SSW practice, such as an electronic records system for data tracking; sharing information and resources; and, during the COVID-19 school shutdowns, the availability of and access to tele-health curricula and activities (Johnson and Barsky, 2020; Capp et al., 2021; Daftary, 2022).

SSW role ambiguities and expectations

SSWs reported numerous challenges regarding the roles, responsibilities and expectations of the social worker role within the school setting. In thirty-five studies (83%), SSWs reported barriers and facilitators to SSW practice regarding SSW role ambiguities and expectations. Of these, 45% (n = 19) listed barriers only, 10% (n = 4) reported facilitators only and 29% (n = 10) covered both. SSWs reported barriers resulting from insufficient understanding of the SSW role; role ambiguities and conflicts; and an absence of respect or recognition for SSW perspectives and scope of practice (Blair, 2002; Teasley, 2005; Raines, 2006; Reid, 2006; Teasley et al., 2010; Bronstein et al., 2011; Lee, 2012; Whittlesey-Jerome, 2012; Peckover et al., 2013; Avant, 2014; Rueda et al., 2014; Miller et al., 2015; Avant and Swerdlik, 2016; Phillippo et al., 2017; Webber, 2018; Beddoe, 2019; Gherardi and Whittlesey-Jerome, 2019; Karikari et al., 2020; Capp et al., 2021; Drew and Gonzalez, 2021; Elswick and Cuellar, 2021; Heberle et al., 2021). SSWs identified roles dominated by reactionary working conditions and crisis-driven work, overwhelmed by competing demands, expectations and interruptions (Blair, 2002; Sawyer et al., 2006; Chanmugam, 2009; Lee, 2012; Avant, 2014; Peabody, 2014; Miller et al., 2015; Avant and Swerdlik, 2016; Phillippo et al., 2017; Beddoe, 2019; Gherardi and Whittlesey-Jerome, 2019; Elswick and Cuellar, 2021; Heberle et al., 2021; Goodcase et al., 2022). SSWs reported insufficient professional autonomy and professional identity as barriers to practice, and some SSWs identified challenges with maintaining boundaries during crisis responses and when providing services remotely during the pandemic (Blair, 2002; Chanmugam, 2009; Lee, 2012; Peckover et al., 2013; Webber, 2018; Capp et al., 2021; Kelly et al., 2021; Goodcase et al., 2022).
Environments that facilitated SSW practice featured low professional role discrepancy, appreciation of the SSW role and expertise, high professional autonomy and support for clinical interventions and special programs; such environments empowered SSWs to balance the complexity of the role with making meaningful contributions (Agresta, 2006; Teasley et al., 2010, 2012; Lee, 2012; Peckover et al., 2013; Peabody, 2014; Johnson and Barsky, 2020; Heberle et al., 2021).

There was evidence that SSWs perceived tensions regarding EBP outcome measure reporting and the adaptation of EBP to the school context; SSWs also noted that the dearth of research, and the failure by SSWs to report practice outcomes, negatively impacted cases and the justification for the SSW role (Bates, 2006; Raines, 2006; Phillippo et al., 2017). Some SSWs identified successfully adapting EBPs and utilizing data tracking to improve measurement strategies and guide implementation decisions, which increased their ability to meet students' needs, provide school-wide capacity and support, demonstrate program effectiveness and professional credibility and justify SSW funding (Bates, 2006; Avant, 2014; Avant and Lindsey, 2016; Avant and Swerdlik, 2016; Webber, 2018; Elswick and Cuellar, 2021; Heberle et al., 2021).

SSWs’ competency, knowledge and support

SSWs perceived barriers and facilitators regarding the competencies, knowledge, training and support they require to respond effectively to the needs of students, families and the school community while maintaining their own mental health and wellbeing (Teasley et al., 2010; Bronstein et al., 2011). Twenty-eight studies (67%) examined SSWs' competency, knowledge and support as barriers and facilitators to SSW practice. Of these, 36% (n = 15) reported barriers only, 14% (n = 6) facilitators only and 17% (n = 7) both. SSWs felt their SW skills, attitudes and compassion were facilitators to SSW practice (Teasley et al., 2010, 2012). Some SSWs reported inadequate preparation, training or required skills and knowledge from their generalist SW education, specifically lacking school-specific practice knowledge and an understanding of relevant legislation, special education policies and practices, interdisciplinary teams and EBPs specific to SSW practice (Bates, 2006; Reid, 2006; Sawyer et al., 2006; Bronstein et al., 2011; Lee, 2012; Phillippo et al., 2017; Beddoe, 2019; Elswick and Cuellar, 2021). SSWs who had completed a school-based field placement, who had professors with practice experience, who felt knowledgeable about diversity issues and who felt culturally competent felt better prepared (Teasley et al., 2010, 2012; Phillippo et al., 2017; Beddoe, 2019). During the COVID-19 school shutdowns, some SSWs were overwhelmed by student/family difficulties, felt unprepared and unsupported to deliver online services, struggled to balance home/work boundaries and felt that remote SSW service delivery was ineffective or unfair as a modality (Capp et al., 2021; Kelly et al., 2021; Daftary, 2022).

There was evidence that SSWs perceived that barriers resulted from insufficient professional development, training opportunities, support and guidance, which impacted SSW services (Agresta, 2006; Teasley et al., 2010; Lee, 2012; Peckover et al., 2013; Avant, 2014; Peabody, 2014; Avant and Lindsey, 2016; Avant and Swerdlik, 2016; Phillippo et al., 2017; Elswick and Cuellar, 2021; Kelly et al., 2021). SSWs reported increased competence, knowledge and awareness when supported to attend, or provided with, training and professional development (Teasley, 2005; Teasley et al., 2008, 2010, 2012; Lee, 2012; Peckover et al., 2013; Peabody, 2014).

SSWs repeatedly mentioned the scarcity of professional support and/or clinical or SW supervision as a barrier to SSW practice (Teasley et al., 2010; Lee, 2012; Peckover et al., 2013; Peabody, 2014; Quinn-Lee, 2014; Rueda et al., 2014; Phillippo et al., 2017; Webber, 2018; Sweifach, 2019; Capp et al., 2021). SSWs reported that consultation and support from SW supervisors and peers facilitated SSW practice, and SSWs also valued constructive consultation and relational support from non-SW administrators on non-counselling topics (Chanmugam, 2009; Peabody, 2014; Phillippo et al., 2017; Sweifach, 2019; Heberle et al., 2021). Some SSWs noted that SSW association membership and SW licensure were facilitators to SSW practice (Raines, 2006; Teasley et al., 2012).

School climate and context

SSWs reported adapting their SW practice to work successfully within a 'host setting' guided by educational policy and processes (Beddoe, 2019). Almost half (n = 19, 45%) of the included studies examined barriers and/or facilitators related to school climate and context. Of these, 38% (n = 16) reported barriers only, 2% (n = 1) facilitators only and 5% (n = 2) both. Some SSWs reported a fundamental conflict at times between student needs and the organizational needs of the school, prompting the question 'Who is the client?' (Phillippo et al., 2017; Webber, 2018). There was evidence that SSWs identified barriers where SW ethics and values pertaining to confidentiality, privacy and the best interest of the client conflicted with school district policies (e.g. on sexual health and religion) (Sawyer et al., 2006; Chanmugam, 2009; Quinn-Lee, 2014; Rueda et al., 2014; Miller et al., 2015; Phillippo et al., 2017; Webber, 2018; Daftary, 2022; Goodcase et al., 2022). SSWs reported barriers related to the school context and their location within the school landscape as a 'guest' in the 'host setting', noting that school climate and internal dynamics shaped interprofessional relationships and collaboration and a school's response to issues (e.g. bullying) (Sawyer et al., 2006; Testa, 2012; Peabody, 2014; Miller et al., 2015; Phillippo et al., 2017; Beddoe, 2019; Goodcase et al., 2022). Some SSWs reported limited ability to meet with students during the school day given the prioritisation of academics over student wellbeing (Blair, 2002; Peabody, 2014; Quinn-Lee, 2014). Some SSWs reported policy and bureaucracy barriers arising from district and administrative policies that were inflexible or inadequate in addressing the needs of vulnerable students (Crepeau-Hobson et al., 2005; Reid, 2006; Sawyer et al., 2006; Teasley et al., 2010; Lee, 2012; Oades, 2021).
Some SSWs felt that a system-wide united purpose and commitment, with rules and policies that supported SSW practice and addressed barriers to learning, facilitated SSW practice (Teasley et al., 2010, 2012; Miller et al., 2015).

Cultivating relationships and engagement

SSWs perceived that cultivating effective relationships through consultation and collaboration with students, families, staff and the community was an essential facilitator of effective SSW practice (Beddoe, 2019; Daftary, 2022). Thirty-three (79%) of the included studies identified barriers and/or facilitators to SSW practice regarding cultivating relationships and engagement. Of these, 38% (n = 16) listed barriers only, 14% (n = 6) facilitators only and 26% (n = 11) both. SSWs reported that power imbalances and dynamics with school administrators were barriers to SSW practice (Chanmugam, 2009; Webber, 2018; Beddoe, 2019; Karikari et al., 2020). SSWs perceived a lack of agency and marginalization, and that relational dynamics influence a school's culture and norms, which in turn impact SSW referrals and collaboration (Chanmugam, 2009; Testa, 2012; Miller et al., 2015; Beddoe, 2019; Karikari et al., 2020).

SSWs noted that interprofessional relationships, consultation and collaboration became barriers to SSW practice due to staff attitudes and expectations and disrespect for SSWs' perspectives (Blair, 2002; Teasley, 2005; Sawyer et al., 2006; Mann, 2008; Teasley et al., 2008; Lee, 2012; Testa, 2012; Whittlesey-Jerome, 2012; Avant, 2014; Quinn-Lee, 2014; Gherardi and Whittlesey-Jerome, 2019; Elswick and Cuellar, 2021; Goodcase et al., 2022). Some SSWs experienced barriers resulting from inaccessible support or idiosyncratic relationships between multidisciplinary staff (e.g. school psychologists, school counsellors) (Reid, 2006; Lee, 2012; Peckover et al., 2013; Webber, 2018).

SSWs noted the importance of utilising relationship-based strategies in response to challenging relational and power dynamics, and reported that strong and positive system-wide relationships with open communication and consultation facilitated SSW practice (Teasley, 2005; Mann, 2008; Chanmugam, 2009; Teasley et al., 2010; Lee, 2012; Peckover et al., 2013; Avant, 2014; Peabody, 2014; Rueda et al., 2014; Miller et al., 2015; Avant and Lindsey, 2016; Beddoe, 2019; Johnson and Barsky, 2020; Heberle et al., 2021; Daftary, 2022).

Some SSWs were concerned about their ability to engage and cultivate relationships with students, families and the community. SSWs identified barriers associated with a lack of student and family engagement (Teasley, 2005; Reid, 2006; Sawyer et al., 2006; Teasley et al., 2008, 2010, 2012; Lee, 2012; Quinn-Lee, 2014; Rueda et al., 2014; Miller et al., 2015; Johnson and Barsky, 2020; Capp et al., 2021; Kelly et al., 2021; Goodcase et al., 2022). Some SSWs found barriers to community engagement due to the limited accessibility of community supports/services, community misperceptions of the SSW role, strained school relationships and community and environmental risk factors (Teasley, 2005; Reid, 2006; Sawyer et al., 2006; Teasley et al., 2010, 2012; Lee, 2012; Peckover et al., 2013; Rueda et al., 2014; Goodcase et al., 2022). There was evidence that SSWs perceived that positive formal and informal relationship building and collaboration, and the availability of community resources and referrals, facilitated SSW practice (Teasley, 2005; Mann, 2008; Teasley et al., 2010; Quinn-Lee, 2014; Heberle et al., 2021; Oades, 2021; Daftary, 2022).

By examining and synthesizing the barriers and facilitators to SSW practice, this study demonstrates the challenges that SSWs experience and highlights the facilitators that support effective SSW practice. This review found that barriers to SSW practice were reported in greater detail than facilitators, with a particular dearth of facilitators reported under the themes of Inadequacy of service delivery infrastructure, SSW role ambiguities and expectations, SSWs' competency, knowledge and support and School climate and context. There was evidence that SSWs perceived that the barriers related to SSW role expectations and ambiguities resulted in unrealistic workloads and significantly impacted SSWs' ability to provide effective services. These findings support existing evidence that such barriers impact SSWs' job satisfaction and intent to stay, and can result in burnout, compassion fatigue, vicarious trauma, absenteeism and attrition, which reduces SSW practice effectiveness and negatively impacts student and family outcomes in schools (Agresta, 2006; Caselman and Brandt, 2017). However, it is important to note that only a few included studies specifically referenced burnout in their results, with little to no discussion of its implications. With so little research regarding the barriers and facilitators to SSW practice related to compassion fatigue and burnout, this scoping review has identified a potentially important area for future research.

This study demonstrates the importance for SSW practice of cultivating effective interprofessional relationships and collaboration amongst school staff, students, families and the community to improve student and school outcomes. SSWs perceived that the barriers to establishing strong interprofessional relationships were related to significant SSW staff turnover, insufficient funding and time, schedule conflicts, high caseloads and servicing multiple schools (Bronstein et al., 2011; Lee, 2012; Miller et al., 2015; Drew and Gonzalez, 2021). These findings highlight the need to mitigate these barriers in order to ensure effective interprofessional relationships and collaboration and to facilitate effective SSW practice that helps students thrive.

An interesting finding is that SSWs in Daftary (2022) reported increased time for planning and preventative work during the COVID-19 school shutdowns: the absence of school crises and interruptions increased accessibility to students and improved SSWs' ability to meet the needs of students and their families. However, there is little research identifying which facilitators support SSWs to overcome the barriers that prevent effective engagement, consultation and collaboration, or what facilitates their ability to respond to challenges such as role conflicts, competing demands, unrealistic workloads and crisis-driven, reactive environments. Future research is warranted to explore whether the facilitators that were effective in supporting SSW practice during the COVID-19 school shutdowns have continued with the return to in-person learning. Such research may inform school leaders and SSW practitioners in developing practices that integrate these facilitators into ongoing SSW services, and may provide important contextual information for SW educators to include in their preparation of new SSW graduates. It is also notable that the impact of natural disasters was not discussed in any of the included studies, which provides an opportunity for future research contributions in this area.

This study highlighted the importance of school-context-specific training, education and support as facilitators of SSW practice, which can inform policy makers, school leaders and SW educators in better supporting SSWs to develop the competencies and knowledge required to enhance students' educational and wellbeing outcomes. Furthermore, the absence of any discussion in the included studies of the specialist SSW education programs available in the USA is itself interesting. Given that the findings from this study highlight the importance of supervision and SW consultation in facilitating SSW practice, school leaders and SSWs must ensure that appropriate supervision and supports are available so that SSWs can respond effectively to the diverse needs of students and their families. However, given the paucity of information in the included articles regarding how SSWs navigate the barriers to accessing supervision and support, further research into how SSWs engaged creatively to overcome these barriers is warranted. Furthermore, the little attention paid to membership of an SSW-specific association or to SW licensure as facilitators of SSW practice indicates that further research into their role as barriers and/or facilitators is warranted.

This review also highlighted the disconnect between SW professional ethics/values and SSWs' experiences in practice within school settings. The lack of attention given to SSWs' perspectives on engaging with SSW practice standards (where available) or SW Codes of Ethics to align their day-to-day practice, support their professional autonomy, decrease role discrepancy, resolve ethical dilemmas and support their interprofessional relationships is an interesting finding in itself. This is important given that SSW practice standards and codes of ethics provide a framework for effective SSW practice based on SW values and principles, and are an important tool in legitimising the SSW profession (Altshuler and Webb, 2009; AASW, 2011).

As a methodology, the scoping review mapped the existing academic literature; as a result, the quality of the included evidence was not assessed. Limiting the context to empirical evidence from the USA, UK, Australia, Canada and Aotearoa New Zealand, and excluding grey literature and non-English-language articles, risks excluding a broader international perspective on SSWs. By including articles that aggregated responses from interprofessional staff, which prevented delineation of SSWs' responses from those of other professionals, the findings may not purely reflect SSWs' perspectives. However, the inclusion of these studies was preferred given the risk of omitting important evidence had the included articles been reduced to only those that solely focused on SSWs in the study sample. While this scoping review compared five countries with similar education structures, political institutions and colonial histories, it does not take into consideration all contextual differences that exist within each country. It was also beyond the scope of this study to focus specifically on specialised SSW programming or on barriers and facilitators regarding SW practice with specialised populations. These limitations highlight important areas for further consideration.

This scoping review examined the existing academic SW literature regarding the barriers and facilitators to SSW practice. The five main themes provide an extensive summary of the factors that inhibit or enable SSWs to provide effective services that meet the diverse needs of students, families and the school community. With so little evidence regarding the facilitators to SSW practice, specifically regarding how SSWs operationalise practice-based strategies and skills to overcome barriers, this scoping review has identified an important area for further research, particularly in countries where this research is emerging. This article furthers the understanding of the barriers to effective SSW practice, providing important contextual information to inform the development of policies and practices that social workers, school leaders, SW educators and policy makers can consider to facilitate effective SSW practice and enhance students' wellbeing and ability to thrive in school.

This research was supported by an Australian Government Research Training Program (RTP) Scholarship.

Conflict of interest statement. None declared.

Supplementary material is available at British Journal of Social Work Journal online.


Supplementary data

Month: Total Views:
April 2024 390
May 2024 596
June 2024 326
July 2024 234
August 2024 273
September 2024 214

Email alerts

Citing articles via.

  • Recommend to your Library

Affiliations

  • Online ISSN 1468-263X
  • Print ISSN 0045-3102
  • Copyright © 2024 British Association of Social Workers
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

IMAGES

  1. Social Work Research Methods: From Conceptualization to Dissemination

    social work research dissemination

  2. 21. Qualitative research dissemination

    social work research dissemination

  3. Types of Information Disseminated by Sample Professional Social Workers

    social work research dissemination

  4. Full article: Using Theory in Practice

    social work research dissemination

  5. (PDF) SOCIAL WORK RESEARCH: IMPLICATIONS FOR GROWTH AND DEVELOPMENT OF

    social work research dissemination

  6. 21. Qualitative research dissemination

    social work research dissemination

VIDEO

  1. social work research : MEANING , DEFINITION AND OBJECTIVES OF SOCIAL WORK RESEARCH

  2. social work research

  3. Social Work Research: Steps of Research #researchstudy #socialresearch #BSW #MSW #UGC-NET

  4. Documentation of Research work I Dissemination of Data

  5. PMS: Social Work Lecture-10 Paper-2 ll Social Research

  6. Social media in transplantation

COMMENTS

  1. 12.2 Disseminating your findings

    Disseminating findings takes planning and careful consideration of your audiences. The dissemination process includes determining the who, where, and how of reaching your audiences. Plagiarism is among the most egregious academic transgressions a scholar can commit. In formal presentations, include your research question, methodological ...

  2. Ten simple rules for innovative dissemination of research

    Rule 6: Go live. In-person dissemination does not just have to be at stuffy conferences. With research moving beyond the walls of universities, there are several types of places for more participatory events. Next to classic scientific conferences, different types of events addressing wider audiences have emerged.

  3. Disseminating research findings: what should researchers do? A

    Describes an integrated dissemination model for social work and provides an example to illustrate its practical application (OutPatient Treatment In ONtario Services -OPTIONS project) Argues that diffusion of innovations and social marketing address the important question of how to put the products of research where they will do the most good ...

  4. Strategies for effective dissemination of research to United States

    Background. In recent years, social scientists have sought to understand how research may influence policy [1, 2].Interest in this area of investigation has grown with the increased availability of funding for policy-specific research (e.g., dissemination and implementation research) [].However, because of variation in the content of public policy, this emerging area of scholarship lacks a ...

  5. 16.2 Disseminating your findings

    Dissemination refers to "a planned process that involves consideration of target audiences and the settings in which research findings are to be received and, where appropriate, communicating and interacting with wider policy and…service audiences in ways that will facilitate research uptake in decision-making processes and practice" (Wilson, Petticrew, Calnan, & Natareth, 2010, p. 91).

  6. 21. Qualitative research dissemination

    Chapter Outline. Ethical responsibility and cultural respectfulness (8 minute read); Critical considerations (5 minute read); Informing your dissemination plan (11 minute read); Final product taking shape (10 minute read); Content warning: Examples in this chapter contain references to research as a potential tool to stigmatize or oppress vulnerable groups, mistreatment and inequalities ...

  7. Dissemination and Implementation Research

    Fidelity measures are often specific to a particular intervention. Social-work research has emphasized the implementation outcome of provider attitudes toward evidence-based interventions or the acceptability of evidence-based service. Aarons' Evidence-Based Practice Attitudes scale is a widely used, standardized measure for this construct ...

  8. Graduate research methods in social work

    We designed our book to help graduate social work students through every step of the research process, from conceptualization to dissemination. Our textbook centers cultural humility, information literacy, pragmatism, and an equal emphasis on quantitative and qualitative methods. It includes extensive content on literature reviews, cultural bias and respectfulness, and qualitative methods, in ...

  9. Social Work Research Methods

    Social work research methods: From conceptualization to dissemination. Boston: Allyn and Bacon. Boston: Allyn and Bacon. This introductory text is distinguished by its use of many evidence-based practice examples and its heavy coverage of statistical and computer analysis of data.

  10. PDF Ten simple rules for innovative dissemination of research

    Rule 3: Encourage participation. In the age of open research, don't just broadcast. Invite and engage others to foster participa-tion and collaboration with research audiences. Scholarship is a collective endeavour, and so we should not expect its dissemination to be unidirectional, especially not in the digital age.

  11. The Center For The Study Of Social Work Practice

    The Center is the only endowed research organization focused solely on the development and dissemination of social work practice knowledge. Its endowment of approximately $3,000,000 is supplemented by gifts and grants from public and voluntary sources.

  12. Communicating and disseminating research findings to study participants

    The researcher interview guide was designed to understand researchers' perspectives on communicating and disseminating research findings to participants; explore past experiences, if any, of researchers with communication and dissemination of research findings to study participants; document any approaches researchers may have used or intend ...

  13. (PDF) Social Work Research and Its Relevance to Practice: "The Gap

    Second, studies of practice research in social work remain scant in Asia and Singapore Webber 2020 Ho et al. 2023;Teo, Koh, and Kwan 2023), with limited insights into organisational contexts ...

  14. Social Work Research Methods: From Conceptualization to Dissemination

    Social Work Research Methods is a stand alone "how-to" social research text that covers conceptualization, design, implementation, data management, and statistical analysis with comprehensively detailed examples. ... SPSS and NVIVO), analyses, implementation and dissemination. It is also written in a tone that is intentionally light-hearted to ...

  15. Ethical considerations in social work research

    Sobočan, A. M. (2010). Ethics in/after social work research: Deliberation on the meaning of knowledge dissemination in social work research. In D. Zaviršek, B. Rommelspacher, & S. Staub (Eds.), Ethical dilemmas in social work: International perspective (pp. 169-187). Ljubljana: Faculty of Social Work, University of Ljubljana.

  16. Strategies for effective dissemination of research to United States

    Background Research has the potential to influence US social policy; however, existing research in this area lacks a coherent message. The Model for Dissemination of Research provides a framework through which to synthesize lessons learned from research to date on the process of translating research to US policymakers. Methods The peer-reviewed and grey literature was systematically reviewed ...

  17. Professional Collaboration Networks as a Social Work Research Practice

    They allow social workers to contribute their unique knowledge of social systems across interdisciplinary contexts and contribute to conversations about social. This article explores the development of PCNs as a tool for social work researchers, practitioners, and students.

  18. Using Theory in Practice

    The immediate context. Both authors have been involved with social work practice research at the Heikki Waris Institute funded by the Helsinki Metropolitan municipalities and University of Helsinki, Finland (Muurinen & Satka, Citation 2020).Both authors worked at the Institute, Aino Kääriäinen as a university lecturer and Heidi Muurinen as a researcher social worker and frequently ...

  19. Social Work Research Methods: From Conceptualization to Dissemination

    Social Work Research Methods: From Conceptualization to Dissemination. Brett Drake. Melissa Jonson-Reid . Basic Approach: Written in an intentionally light-hearted tone, Social Work Research Methods is a stand alone "how-to" text that provides everything necessary to learn about social science research and complete a research project from start ...

  20. Ten simple rules for innovative dissemination of research

    Rule 3: Encourage participation. In the age of open research, don't just broadcast. Invite and engage others to foster participation and collaboration with research audiences. Scholarship is a collective endeavour, and so we should not expect its dissemination to be unidirectional, especially not in the digital age.

  21. DSW Online

    Three fundamental aspects make our online DSW program unique: Utilizes implementation science to translate research into best practice interventions and identify strategies to address barriers to effective service delivery and program uptake. This is applicable at all levels of social work practice, and in both clinical and non-clinical settings.

  22. Social Work as a Human Rights Profession: An Action Framework

    Social workers indicate that they should be much more concerned with their self-critical role. Their own actions as social workers should also be scrutinised in some form of 'self-politicisation'. Conclusion. Our qualitative research on how social work acts when aiming to realise human rights reveals five building blocks.

  23. Social Workers' Perceived Barriers and Facilitators to Social Work

    This scoping review found that social workers perceive far greater barriers than facilitators when delivering services in school settings, with limited evidence related to the facilitators that enhance School Social Work (SSW) practice. Further research regarding the facilitators of SSW practice is needed, specifically in countries where ...

  24. PDF Research on Social Work Practice

    Research on Social Work Practice 2009 19: 503 originally published online 5 June 2009 James W. Dearing ... Research about dissemination is a response to a gen-eral acknowledgment that successful, effective prac-tices, programs, and policies resulting from clinical