Writing a Research Paper Conclusion | Step-by-Step Guide

Published on October 30, 2022 by Jack Caulfield. Revised on April 13, 2023.

A research paper conclusion typically does three things:

  • Restate the problem statement addressed in the paper
  • Summarize your overall arguments or findings
  • Suggest the key takeaways from your paper


The content of the conclusion varies depending on whether your paper presents the results of original empirical research or constructs an argument through engagement with sources.

Table of contents

  • Step 1: Restate the problem
  • Step 2: Sum up the paper
  • Step 3: Discuss the implications
  • Research paper conclusion examples
  • Frequently asked questions about research paper conclusions

Step 1: Restate the problem

The first task of your conclusion is to remind the reader of your research problem. You will have discussed this problem in depth throughout the body, but now the point is to zoom back out from the details to the bigger picture.

While you are restating a problem you’ve already introduced, you should avoid phrasing it identically to how it appeared in the introduction. Ideally, you’ll find a novel way to circle back to the problem from the more detailed ideas discussed in the body.

For example, an argumentative paper advocating new measures to reduce the environmental impact of agriculture might return to the urgency of confronting climate change, while an empirical paper studying the relationship of Instagram use with body image issues might reframe the problem in terms of social media’s growing role in young people’s lives. The full example conclusions below show both approaches.

“In conclusion …”

Avoid starting your conclusion with phrases like “In conclusion” or “To conclude,” as this can come across as too obvious and make your writing seem unsophisticated. The content and placement of your conclusion should make its function clear without the need for additional signposting.


Step 2: Sum up the paper

Having zoomed back out to the problem, it’s time to summarize how the body of the paper went about addressing it, and what conclusions this approach led to.

Depending on the nature of your research paper, this might mean restating your thesis and arguments, or summarizing your overall findings.

Argumentative paper: Restate your thesis and arguments

In an argumentative paper, you will have presented a thesis statement in your introduction, expressing the overall claim your paper argues for. In the conclusion, you should restate the thesis and show how it has been developed through the body of the paper.

Briefly summarize the key arguments made in the body, showing how each of them contributes to proving your thesis. You may also mention any counterarguments you addressed, emphasizing why your thesis holds up against them, particularly if your argument is a controversial one.

Don’t go into the details of your evidence or present new ideas; focus on outlining in broad strokes the argument you have made.

Empirical paper: Summarize your findings

In an empirical paper, this is the time to summarize your key findings. Don’t go into great detail here (you will have presented your in-depth results and discussion already), but do clearly express the answers to the research questions you investigated.

Describe your main findings, even if they weren’t necessarily the ones you expected or hoped for, and explain the overall conclusion they led you to.

Step 3: Discuss the implications

Having summed up your key arguments or findings, the conclusion ends by considering the broader implications of your research. This means expressing the key takeaways, practical or theoretical, from your paper—often in the form of a call for action or suggestions for future research.

Argumentative paper: Strong closing statement

An argumentative paper generally ends with a strong closing statement. In the case of a practical argument, make a call for action: What actions do you think should be taken by the people or organizations concerned in response to your argument?

If your topic is more theoretical and unsuitable for a call for action, your closing statement should express the significance of your argument—for example, in proposing a new understanding of a topic or laying the groundwork for future research.

Empirical paper: Future research directions

In a more empirical paper, you can close by either making recommendations for practice (for example, in clinical or policy papers), or suggesting directions for future research.

Whatever the scope of your own research, there will always be room for further investigation of related topics, and you’ll often discover new questions and problems during the research process.

Finish your paper on a forward-looking note by suggesting how you or other researchers might build on this topic in the future and address any limitations of the current paper.

Research paper conclusion examples

Full examples of research paper conclusions are shown below: first for an argumentative paper, then for an empirical paper.

While the role of cattle in climate change is by now common knowledge, countries like the Netherlands continually fail to confront this issue with the urgency it deserves. The evidence is clear: To create a truly futureproof agricultural sector, Dutch farmers must be incentivized to transition from livestock farming to sustainable vegetable farming. As well as dramatically lowering emissions, plant-based agriculture, if approached in the right way, can produce more food with less land, providing opportunities for nature regeneration areas that will themselves contribute to climate targets. Although this approach would have economic ramifications, from a long-term perspective, it would represent a significant step towards a more sustainable and resilient national economy. Transitioning to sustainable vegetable farming will make the Netherlands greener and healthier, setting an example for other European governments. Farmers, policymakers, and consumers must focus on the future, not just on their own short-term interests, and work to implement this transition now.

As social media becomes increasingly central to young people’s everyday lives, it is important to understand how different platforms affect their developing self-conception. By testing the effect of daily Instagram use among teenage girls, this study established that highly visual social media does indeed have a significant effect on body image concerns, with a strong correlation between the amount of time spent on the platform and participants’ self-reported dissatisfaction with their appearance. However, the strength of this effect was moderated by pre-test self-esteem ratings: Participants with higher self-esteem were less likely to experience an increase in body image concerns after using Instagram. This suggests that, while Instagram does impact body image, it is also important to consider the wider social and psychological context in which this usage occurs: Teenagers who are already predisposed to self-esteem issues may be at greater risk of experiencing negative effects. Future research into Instagram and other highly visual social media should focus on establishing a clearer picture of how self-esteem and related constructs influence young people’s experiences of these platforms. Furthermore, while this experiment measured Instagram usage in terms of time spent on the platform, observational studies are required to gain more insight into different patterns of usage—to investigate, for instance, whether active posting is associated with different effects than passive consumption of social media content.

If you’re unsure about the conclusion, it can be helpful to ask a friend or fellow student to read your conclusion and summarize the main takeaways.

  • Do they understand from your conclusion what your research was about?
  • Are they able to summarize the implications of your findings?
  • Can they answer your research question based on your conclusion?

You can also have an expert proofread and give feedback on your paper through a paper editing service.


Frequently asked questions about research paper conclusions

The conclusion of a research paper has several key elements you should make sure to include:

  • A restatement of the research problem
  • A summary of your key arguments and/or findings
  • A short discussion of the implications of your research

No, it’s not appropriate to present new arguments or evidence in the conclusion . While you might be tempted to save a striking argument for last, research papers follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically in the results and discussion sections if you are following a scientific structure). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.



Methods in Urban Analysis, pp. 65–85

Conducting Survey Research

  • Heather Shearer
  • First Online: 06 June 2021


Part of the Cities Research Series book series (CRS)

This chapter is aimed at students and researchers who will use questionnaire surveys in their research. It describes the basics of conducting questionnaire surveys. It begins by exploring some reasons why a researcher (or other party) may want to conduct a survey. It then describes different types of surveys, explaining how they can be differentiated by time and delivery method, and notes that most are cross-sectional (one-off) and self-administered. It then describes the necessary steps in survey research, particularly aligning the survey with the research question and identifying the audience at which the survey is aimed (sample selection). The chapter then details the various aspects of survey and question design (closed and open questions), including how not to write survey questions. Good question design is integral to a successful survey, and this is where most surveys fall short. The different types of closed survey questions, such as dichotomous, nominal, rank order, and Likert scale, are discussed, with examples of each. Some logistics of distributing surveys are covered, concentrating on online distribution, as this is now the most common method. Finally, the chapter concludes with a brief discussion of survey analysis.
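To make the distinction between closed-question types concrete, here is a minimal sketch, not from the chapter, of how three of the question types it names (dichotomous, nominal, and Likert) are typically encoded for analysis; it assumes Python with pandas, and the column names and response values are hypothetical.

    # Hypothetical encoding of three closed-question types with pandas.
    import pandas as pd

    responses = pd.DataFrame({
        "owns_home": ["yes", "no", "yes"],            # dichotomous: two options
        "transport_mode": ["car", "bus", "bicycle"],  # nominal: unordered categories
        "satisfaction": ["agree", "neutral", "strongly agree"],  # Likert item
    })

    # A Likert item is ordinal: its level order must be declared explicitly.
    likert_levels = ["strongly disagree", "disagree", "neutral",
                     "agree", "strongly agree"]
    responses["satisfaction"] = pd.Categorical(
        responses["satisfaction"], categories=likert_levels, ordered=True
    )

    # Nominal data supports only tabulation; ordinal data also supports
    # order-aware summaries such as a median over the level codes.
    print(responses["transport_mode"].value_counts())
    print(responses["satisfaction"].cat.codes.median())

Declaring the Likert column as ordered is what licenses order-aware statistics later in the analysis; the same summaries would be meaningless for the nominal column.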


Note: if you are a student or work for a university, all survey research requires ethics approval.




Organizing Your Social Sciences Research Paper

9. The Conclusion

The conclusion is intended to help the reader understand why your research should matter to them after they have finished reading the paper. A conclusion is not merely a summary of the main topics covered or a restatement of your research problem, but a synthesis of key points and, if applicable, a place to recommend new areas for future research. For most college-level research papers, one or two well-developed paragraphs are sufficient for a conclusion, although in some cases more paragraphs may be needed to summarize key findings and their significance.

Conclusions. The Writing Center. University of North Carolina; Conclusions. The Writing Lab and The OWL. Purdue University.

Importance of a Good Conclusion

A well-written conclusion provides you with important opportunities to demonstrate to the reader your understanding of the research problem. These include:

  • Presenting the last word on the issues you raised in your paper. Just as the introduction gives a first impression to your reader, the conclusion offers a chance to leave a lasting impression. Do this, for example, by highlighting key findings in your analysis that advance new understanding about the research problem, that are unusual or unexpected, or that have important implications applied to practice.
  • Summarizing your thoughts and conveying the larger significance of your study. The conclusion is an opportunity to succinctly re-emphasize the "So What?" question by placing the study within the context of how your research advances past research about the topic.
  • Identifying how a gap in the literature has been addressed. The conclusion can be where you describe how a previously identified gap in the literature [described in your literature review section] has been filled by your research.
  • Demonstrating the importance of your ideas. Don't be shy. The conclusion offers you the opportunity to elaborate on the impact and significance of your findings. This is particularly important if your study approached examining the research problem from an unusual or innovative perspective.
  • Introducing possible new or expanded ways of thinking about the research problem. This does not refer to introducing new information [which should be avoided], but to offering new insight and creative approaches for framing or contextualizing the research problem based on the results of your study.

Bunton, David. “The Structure of PhD Conclusion Chapters.” Journal of English for Academic Purposes 4 (July 2005): 207–224; Conclusions. The Writing Center. University of North Carolina; Kretchmer, Paul. Twelve Steps to Writing an Effective Conclusion. San Francisco Edit, 2003-2008; Conclusions. The Writing Lab and The OWL. Purdue University; Assan, Joseph. "Writing the Conclusion Chapter: The Good, the Bad and the Missing." Liverpool: Development Studies Association (2009): 1-8.

Structure and Writing Style

I.  General Rules

The function of your paper's conclusion is to restate the main argument . It reminds the reader of the strengths of your main argument(s) and reiterates the most important evidence supporting those argument(s). Do this by stating clearly the context, background, and necessity of pursuing the research problem you investigated in relation to an issue, controversy, or a gap found in the literature. Make sure, however, that your conclusion is not simply a repetitive summary of the findings. This reduces the impact of the argument(s) you have developed in your essay.

When writing the conclusion to your paper, follow these general rules:

  • Present your conclusions in clear, simple language. Re-state the purpose of your study, then describe how your findings differ or support those of other studies and why [i.e., what were the unique or new contributions your study made to the overall research about your topic?].
  • Do not simply reiterate your findings or the discussion of your results. Provide a synthesis of arguments presented in the paper to show how these converge to address the research problem and the overall objectives of your study.
  • Indicate opportunities for future research if you haven't already done so in the discussion section of your paper. Highlighting the need for further research provides the reader with evidence that you have an in-depth awareness of the research problem and that further investigations should take place.

Consider the following points to help ensure your conclusion is presented well:

  • If the argument or purpose of your paper is complex, you may need to summarize the argument for your reader.
  • If, prior to your conclusion, you have not yet explained the significance of your findings or if you are proceeding inductively, use the end of your paper to describe your main points and explain their significance.
  • Move from a detailed to a general level of consideration that returns the topic to the context provided by the introduction or within a new context that emerges from the data. 

The conclusion also provides a place for you to persuasively and succinctly restate the research problem, given that the reader has now been presented with all the information about the topic . Depending on the discipline you are writing in, the concluding paragraph may contain your reflections on the evidence presented. However, the nature of being introspective about the research you have conducted will depend on the topic and whether your professor wants you to express your observations in this way.

NOTE: If asked to think introspectively about the topics, do not delve into idle speculation. Being introspective means looking within yourself as an author to try and understand an issue more deeply, not to guess at possible outcomes or make up scenarios not supported by the evidence.

II.  Developing a Compelling Conclusion

Although an effective conclusion needs to be clear and succinct, it does not need to be written passively or lack a compelling narrative. Strategies to help you move beyond merely summarizing the key points of your research paper include the following:

  • If your essay deals with a critical, contemporary problem, warn readers of the possible consequences of not attending to the problem proactively.
  • Recommend a specific course or courses of action that, if adopted, could address a specific problem in practice or in the development of new knowledge.
  • Cite a relevant quotation or expert opinion already noted in your paper in order to lend authority and support to the conclusion(s) you have reached [a good place to look is research from your literature review].
  • Explain the consequences of your research in a way that elicits action or demonstrates urgency in seeking change.
  • Restate a key statistic, fact, or visual image to emphasize the most important finding of your paper.
  • If your discipline encourages personal reflection, illustrate your concluding point by drawing from your own life experiences.
  • Return to an anecdote, an example, or a quotation that you presented in your introduction, but add further insight derived from the findings of your study; use your interpretation of results to recast it in new or important ways.
  • Provide a "take-home" message in the form of a succinct, declarative statement that you want the reader to remember about your study.

III. Problems to Avoid

Failure to be concise

Your conclusion section should be concise and to the point. Conclusions that are too lengthy often have unnecessary information in them. The conclusion is not the place for details about your methodology or results. Although you should give a summary of what was learned from your research, this summary should be relatively brief, since the emphasis in the conclusion is on the implications, evaluations, insights, and other forms of analysis that you make.

Failure to comment on larger, more significant issues

In the introduction, your task was to move from the general [the field of study] to the specific [the research problem]. However, in the conclusion, your task is to move from a specific discussion [your research problem] back to a general discussion [i.e., how your research contributes new understanding or fills an important gap in the literature]. In short, the conclusion is where you should place your research within a larger context [visualize your paper as an hourglass: start with a broad introduction and review of the literature, move to the specific analysis and discussion, and conclude with a broad summary of the study's implications and significance].

Failure to reveal problems and negative results

Negative aspects of the research process should never be ignored. Problems, deficiencies, or challenges encountered during your study should be summarized as a way of qualifying your overall conclusions. If you encountered negative or unintended results [i.e., findings that are validated outside the research context in which they were generated], you must report them in the results section and discuss their implications in the discussion section of your paper. In the conclusion, use your summary of the negative results as an opportunity to explain their possible significance and/or how they may form the basis for future research.

Failure to provide a clear summary of what was learned

In order to be able to discuss how your research fits within your field of study [and possibly the world at large], you need to summarize briefly and succinctly how it contributes to new knowledge or a new understanding about the research problem. This element of your conclusion may be only a few sentences long.

Failure to match the objectives of your research

Often research objectives in the social sciences change while the research is being carried out. This is not a problem unless you forget to go back and refine the original objectives in your introduction. As these changes emerge, they must be documented so that they accurately reflect what you were trying to accomplish in your research [not what you thought you might accomplish when you began].

Resist the urge to apologize

If you've immersed yourself in studying the research problem, you presumably should know a good deal about it [perhaps even more than your professor!]. Nevertheless, by the time you have finished writing, you may be having some doubts about what you have produced. Repress those doubts! Don't undermine your authority by saying something like, "This is just one approach to examining this problem; there may be other, much better approaches that...." The overall tone of your conclusion should convey confidence to the reader.

Assan, Joseph. "Writing the Conclusion Chapter: The Good, the Bad and the Missing." Liverpool: Development Studies Association (2009): 1-8; Concluding Paragraphs. College Writing Center at Meramec. St. Louis Community College; Conclusions. The Writing Center. University of North Carolina; Conclusions. The Writing Lab and The OWL. Purdue University; Freedman, Leora  and Jerry Plotnick. Introductions and Conclusions. The Lab Report. University College Writing Centre. University of Toronto; Leibensperger, Summer. Draft Your Conclusion. Academic Center, the University of Houston-Victoria, 2003; Make Your Last Words Count. The Writer’s Handbook. Writing Center. University of Wisconsin Madison; Miquel, Fuster-Marquez and Carmen Gregori-Signes. “Chapter Six: ‘Last but Not Least:’ Writing the Conclusion of Your Paper.” In Writing an Applied Linguistics Thesis or Dissertation: A Guide to Presenting Empirical Research . John Bitchener, editor. (Basingstoke,UK: Palgrave Macmillan, 2010), pp. 93-105; Tips for Writing a Good Conclusion. Writing@CSU. Colorado State University; Kretchmer, Paul. Twelve Steps to Writing an Effective Conclusion. San Francisco Edit, 2003-2008; Writing Conclusions. Writing Tutorial Services, Center for Innovative Teaching and Learning. Indiana University; Writing: Considering Structure and Organization. Institute for Writing Rhetoric. Dartmouth College.

Writing Tip

Don't Belabor the Obvious!

Avoid phrases like "in conclusion...," "in summary...," or "in closing...." These phrases can be useful, even welcome, in oral presentations. But readers can see, by the tell-tale section heading and the number of pages remaining, when an essay is about to end. You'll irritate your readers if you belabor the obvious.

Assan, Joseph. "Writing the Conclusion Chapter: The Good, the Bad and the Missing." Liverpool: Development Studies Association (2009): 1-8.

Another Writing Tip

New Insight, Not New Information!

Don't surprise the reader with new information in your conclusion that was never referenced anywhere else in the paper; as a rule, then, the conclusion should rarely contain citations to sources. If you have new information to present, add it to the discussion or other appropriate section of the paper. Note that, although no actual new information is introduced, the conclusion, along with the discussion section, is where you offer your most "original" contributions in the paper; the conclusion is where you describe the value of your research, demonstrate that you understand the material that you’ve presented, and locate your findings within the larger context of scholarship on the topic, including describing how your research contributes new or valuable insights to that scholarship.

Assan, Joseph. "Writing the Conclusion Chapter: The Good, the Bad and the Missing." Liverpool: Development Studies Association (2009): 1-8; Conclusions. The Writing Center. University of North Carolina.


How to Write a Conclusion for Research Papers (with Examples)

The conclusion of a research paper is a crucial section that plays a significant role in the overall impact and effectiveness of your research paper. However, this is also the section that typically receives less attention compared to the introduction and the body of the paper. The conclusion serves to provide a concise summary of the key findings, their significance, their implications, and a sense of closure to the study. Discussing how the findings can be applied in real-world scenarios or inform policy, practice, or decision-making is especially valuable to practitioners and policymakers. The research paper conclusion also provides researchers with clear insights and valuable information for their own work, which they can then build on to contribute to the advancement of knowledge in the field.

The research paper conclusion should explain the significance of your findings within the broader context of your field. It should state how your results contribute to the existing body of knowledge and whether they confirm or challenge existing theories or hypotheses. Identifying unanswered questions or areas requiring further investigation also demonstrates your awareness of the broader research landscape.

Remember to tailor the research paper conclusion to the specific needs and interests of your intended audience, which may include researchers, practitioners, policymakers, or a combination of these.

Table of Contents

  • What is a conclusion in a research paper
  • Summarizing conclusion
  • Editorial conclusion
  • Externalizing conclusion
  • Importance of a good research paper conclusion
  • How to write a conclusion for your research paper
  • Research paper conclusion examples
  • Frequently asked questions

What is a conclusion in a research paper?

A conclusion in a research paper is the final section where you summarize and wrap up your research, presenting the key findings and insights derived from your study. The research paper conclusion is not the place to introduce new information or data that was not discussed in the main body of the paper. When working on how to conclude a research paper, remember to stick to summarizing and interpreting existing content. The research paper conclusion serves the following purposes [1]:

  • Warn readers of the possible consequences of not attending to the problem.
  • Recommend specific course(s) of action.
  • Restate key ideas to drive home the ultimate point of your research paper.
  • Provide a “take-home” message that you want the readers to remember about your study.


Types of conclusions for research papers

In research papers, the conclusion provides closure to the reader. The type of research paper conclusion you choose depends on the nature of your study, your goals, and your target audience. Three common types are described below.

Summarizing conclusion

A summarizing conclusion is the most common type of conclusion in research papers. It involves summarizing the main points, reiterating the research question, and restating the significance of the findings. This type of conclusion is used across different disciplines.

Editorial conclusion

An editorial conclusion is less common but can be used in research papers that are focused on proposing or advocating for a particular viewpoint or policy. It involves presenting a strong editorial or opinion based on the research findings and offering recommendations or calls to action.

Externalizing conclusion

An externalizing conclusion extends the research beyond the scope of the paper by suggesting potential future research directions or discussing the broader implications of the findings. This type of conclusion is often used in more theoretical or exploratory research papers.

Importance of a good research paper conclusion

The conclusion in a research paper serves several important purposes:

  • Offers Implications and Recommendations: Your research paper conclusion is an excellent place to discuss the broader implications of your research and suggest potential areas for further study. It’s also an opportunity to offer practical recommendations based on your findings.
  • Provides Closure: A good research paper conclusion provides a sense of closure to your paper. It should leave the reader with a feeling that they have reached the end of a well-structured and thought-provoking research project.
  • Leaves a Lasting Impression: Writing a well-crafted research paper conclusion leaves a lasting impression on your readers. It’s your final opportunity to leave them with a new idea, a call to action, or a memorable quote.


How to write a conclusion for your research paper

Writing a strong conclusion for your research paper is essential to leave a lasting impression on your readers. Here’s a step-by-step process to help you decide what to put in the conclusion of a research paper [2]:

  • Research Statement: Begin your research paper conclusion by restating your research statement. This reminds the reader of the main point you’ve been trying to prove throughout your paper. Keep it concise and clear.
  • Key Points: Summarize the main arguments and key points you’ve made in your paper. Avoid introducing new information in the research paper conclusion. Instead, provide a concise overview of what you’ve discussed in the body of your paper.
  • Address the Research Questions: If your research paper is based on specific research questions or hypotheses, briefly address whether you’ve answered them or achieved your research goals. Discuss the significance of your findings in this context.
  • Significance: Highlight the importance of your research and its relevance in the broader context. Explain why your findings matter and how they contribute to the existing knowledge in your field.
  • Implications: Explore the practical or theoretical implications of your research. How might your findings impact future research, policy, or real-world applications? Consider the “so what?” question.
  • Future Research: Offer suggestions for future research in your area. What questions or aspects remain unanswered or warrant further investigation? This shows that your work opens the door for future exploration.
  • Closing Thought: Conclude your research paper conclusion with a thought-provoking or memorable statement. This can leave a lasting impression on your readers and wrap up your paper effectively. Avoid introducing new information or arguments here.
  • Proofread and Revise: Carefully proofread your conclusion for grammar, spelling, and clarity. Ensure that your ideas flow smoothly and that your conclusion is coherent and well-structured.

Remember that a well-crafted research paper conclusion is a reflection of the strength of your research and your ability to communicate its significance effectively. It should leave a lasting impression on your readers and tie together all the threads of your paper. Now that you know how to start the conclusion of a research paper and what elements to include to make it impactful, let’s look at some frequently asked questions.


Frequently asked questions

What should be included in a research paper conclusion?

The research paper conclusion is a crucial part of your paper as it provides the final opportunity to leave a strong impression on your readers. In the research paper conclusion, summarize the main points of your research paper by restating your research statement, highlighting the most important findings, addressing the research questions or objectives, explaining the broader context of the study, discussing the significance of your findings, providing recommendations if applicable, and emphasizing the takeaway message. The main purpose of the conclusion is to remind the reader of the main point or argument of your paper and to provide a clear and concise summary of the key findings and their implications. All these elements should feature on your list of what to put in the conclusion of a research paper to create a strong final statement for your work.

What are the key elements of a strong research paper conclusion?

A strong conclusion is a critical component of a research paper, as it provides an opportunity to wrap up your arguments, reiterate your main points, and leave a lasting impression on your readers. Here are the key elements of a strong research paper conclusion:

1. Conciseness: A research paper conclusion should be concise and to the point. It should not introduce new information or ideas that were not discussed in the body of the paper.
2. Summarization: The research paper conclusion should be comprehensive enough to give the reader a clear understanding of the research’s main contributions.
3. Relevance: Ensure that the information included in the research paper conclusion is directly relevant to the research paper’s main topic and objectives; avoid unnecessary details.
4. Connection to the Introduction: A well-structured research paper conclusion often revisits the key points made in the introduction and shows how the research has addressed the initial questions or objectives.
5. Emphasis: Highlight the significance and implications of your research. Why is your study important? What are the broader implications or applications of your findings?
6. Call to Action: Include a call to action or a recommendation for future research or action based on your findings.

How long should a research paper conclusion be?

The length of a research paper conclusion can vary depending on several factors, including the overall length of the paper, the complexity of the research, and the specific journal requirements. While there is no strict rule for the length of a conclusion, it’s generally advisable to keep it relatively short. A typical research paper conclusion might be around 5-10% of the paper’s total length. For example, if your paper is 10 pages long, the conclusion might be roughly half a page to one page in length.

Do I need to include citations in my research paper conclusion?

In general, you do not need to include citations in the research paper conclusion. Citations are typically reserved for the body of the paper to support your arguments and provide evidence for your claims. However, there may be some exceptions to this rule:

1. If you are drawing a direct quote or paraphrasing a specific source in your research paper conclusion, you should include a citation to give proper credit to the original author.
2. If your conclusion refers to or discusses specific research, data, or sources that are crucial to the overall argument, citations can be included to reinforce your conclusion’s validity.

What is the purpose of the conclusion in a research paper?

The conclusion of a research paper serves several important purposes:

1. Summarize the Key Points
2. Reinforce the Main Argument
3. Provide Closure
4. Offer Insights or Implications
5. Engage the Reader
6. Reflect on Limitations

Remember that the primary purpose of the research paper conclusion is to leave a lasting impression on the reader, reinforcing the key points and providing closure to your research. It’s often the last part of the paper that the reader will see, so it should be strong and well-crafted.

References

1. Makar, G., Foltz, C., Lendner, M., & Vaccaro, A. R. (2018). How to write effective discussion and conclusion sections. Clinical Spine Surgery, 31(8), 345-346.
2. Bunton, D. (2005). The structure of PhD conclusion chapters. Journal of English for Academic Purposes, 4(3), 207-224.




National Research Council (US) Panel on Collecting, Storing, Accessing, and Protecting Biological Specimens and Biodata in Social Surveys; Hauser RM, Weinstein M, Pool R, et al., editors. Conducting Biosocial Surveys: Collecting, Storing, Accessing, and Protecting Biospecimens and Biodata. Washington (DC): National Academies Press (US); 2010.


5 Findings, Conclusions, and Recommendations

As the preceding chapters have made clear, incorporating biological specimens into social science surveys holds great scientific potential, but also adds a variety of complications to the tasks of both individual researchers and institutions. These complications arise in a number of areas, including collecting, storing, using, and distributing biospecimens; sharing data while protecting privacy; obtaining informed consent from participants; and engaging with Institutional Review Boards (IRBs). Any effort to make such research easier and more effective will need to address the issues in these areas.

In considering its recommendations, the panel found it useful to think of two categories: (1) recommendations that apply to individual investigators, and (2) recommendations that are addressed to the National Institute on Aging ( NIA ) or other institutions, particularly funding agencies. Researchers who wish to collect biological specimens with social science data will need to develop new skills in a variety of areas, such as the logistics of specimen storage and management, the development of more diverse informed consent forms, and ways of dealing with the disclosure risks associated with sharing biogenetic data. At the same time, NIA and other funding agencies must provide researchers the tools they need to succeed. These tools include such things as biorepositories for maintaining and distributing specimens, better guidance on informed consent policies, and better ways to share data without risking confidentiality.

TAKING ADVANTAGE OF EXISTING EXPERTISE

Although working with biological specimens will be new and unfamiliar to many social scientists, it is an area in which biomedical researchers have a great deal of expertise and experience. Many existing documents describe recommended procedures and laboratory practices for the handling of biospecimens. These documents provide an excellent starting point for any social scientist who is interested in adding biospecimens to survey research.

Recommendation 1: Social scientists who are planning to add biological specimens to their survey research should familiarize themselves with existing best practices for the collection, storage, use, and distribution of biospecimens. First and foremost, the design of the protocol for collection must ensure the safety of both participants and survey staff (data and specimen collectors and handlers).

Although existing best-practice documents were not developed with social science surveys in mind, their guidelines have been field-tested and approved by numerous IRBs and ethical oversight committees. The most useful best-practice documents are updated frequently to reflect growing knowledge and changing opinions about the best ways to collect, store, use, and distribute biological specimens. At the same time, however, many issues arising from the inclusion of biospecimens in social science surveys are not fully addressed in the best-practice documents intended for biomedical researchers. For guidance on these issues, it will be necessary to seek out information aimed more specifically at researchers at the intersection of social science and biomedicine.

COLLECTING, STORING, USING, AND DISTRIBUTING BIOSPECIMENS

As described in Chapter 2 , the collection, storage, use, and distribution of biospecimens and biodata are tasks that are likely to be unfamiliar to many social scientists and that raise a number of issues with which even specialists are still grappling. For example, which biospecimens in a repository should be shared, given that in most cases the amount of each specimen is limited? And given that the available technology for cost-efficient analysis of biospecimens, particularly genetic analysis, is rapidly improving, how much of any specimen should be used for immediate research and analysis, and how much should be stored for analysis at a later date? Collecting, storing, using, and distributing biological specimens also present significant practical and financial challenges for social scientists. Many of the questions they must address, such as exactly what should be held, where it should be held, and what should be shared or distributed, have not yet been resolved.

Developing Data Sharing Plans

An important decision concerns who has access to any leftover biospecimens. This is a problem more for biospecimens than for biodata because in most cases, biospecimens can be exhausted. Should access be determined according to the principle of first funded, first served? Should there be a formal application process for reviewing the scientific merits of a particular investigation? For studies that involve international collaboration, should foreign investigators have access? And how exactly should these decisions be made? Recognizing that some proposed analyses may lie beyond the competence of the original investigators, as well as the possibility that principal investigators may have a conflict of interest in deciding how to use any remaining biospecimens, one option is for a principal investigator to assemble a small scientific committee to judge the merits of each application, including the relevance of the proposed study to the parent study and the capacities of the investigators. Such committees should publish their review criteria to help prospective applicants. A potential problem with such an approach, however, is that many projects may not have adequate funding to carry out such tasks.

Recommendation 2: Early in the planning process, principal investigators who will be collecting biospecimens as part of a social science survey should develop a complete data sharing plan.

This plan should spell out the criteria for allowing other researchers to use (and therefore deplete) the available stock of biospecimens, as well as to gain access to any data derived therefrom. To avoid any appearance of self-interest, a project might empower an external advisory board to make decisions about access to its data. The data sharing plan should also include provisions for the storage and retrieval of biospecimens and clarify how the succession of responsibility for and control of the biospecimens will be handled at the conclusion of the project.

Recommendation 3: NIA (or preferably the National Institutes of Health [ NIH ]) should publish guidelines for principal investigators containing a list of points that need to be considered for an acceptable data sharing plan. In addition to staff review, Scientific Review Panels should read and comment on all proposed data sharing plans. In much the same way as an unacceptable human subjects plan, an inadequate data sharing plan should hold up an otherwise acceptable proposal.

Supporting Social Scientists in the Storage of Biospecimens

The panel believes that many social scientists who decide to add the collection of biospecimens to their surveys may be ill equipped to provide for the storage and distribution of the specimens.

Conclusion: The issues related to the storage and distribution of biospecimens are too complex and involve too many hidden costs to assume that social scientists without suitable knowledge, experience, and resources can handle them without assistance.

Investigators should therefore have the option of delegating the storage and distribution of biospecimens collected as part of social science surveys to a centralized biorepository. Depending on the circumstances, a project might choose to utilize such a facility for immediate use, long-term or archival storage, or not at all.

Recommendation 4: NIA and other relevant funding agencies should support at least one central facility for the storage and distribution of biospecimens collected as part of the research they support.

PROTECTING PRIVACY AND CONFIDENTIALITY: SHARING DIGITAL REPRESENTATIONS OF BIOLOGICAL AND SOCIAL DATA

Several different types of data must be kept confidential: survey data, data derived from biospecimens, and all administrative and operational data. In the discussion of protecting confidentiality and privacy, this report has focused on biodata, but the panel believes it is important to protect all the data collected from survey participants. For many participants, for example, data on wealth, earnings, or sexual behavior can be as sensitive as, or more sensitive than, genetic data.

Conclusion: Although biodata tend to receive more attention in discussions of privacy and confidentiality, social science and operational data can be sensitive in their own right and deserve similar attention in such discussions.

Protecting the participants in a social science survey that collects biospecimens requires securing the data, but data are most valuable when they are made available to researchers as widely as possible. Thus there is an inherent tension between the desire to protect the privacy of the participants and the desire to derive as much scientific value from the data as possible, particularly since the costs of data collection and analysis are so high. The following recommendations regarding confidentiality are made in the spirit of balancing these equally important needs.

Genomic data present a particular challenge. Several researchers have demonstrated that it is possible to identify individuals with even modest amounts of such data. When combined with social science data, genomic data may pose an even greater risk to confidentiality. It is difficult to know how much or which genomic data, when combined with social science data, could become critical identifiers in the future. Although the problem is most significant with genomic data, similar challenges can arise with other kinds of data derived from biospecimens.

Conclusion: Unrestricted distribution of genetic and other biodata risks violating promises of confidentiality made to research participants.

There are two basic approaches to protecting confidentiality: restricting data and restricting access. Restricting data—for example, by stripping individual and spatial identifiers and modifying the data to make it difficult or impossible to trace them back to their source—usually makes it possible to release social science data widely. In the case of biodata, however, there is no answer to how little data is required to make a participant uniquely identifiable. Consequently, any release of biodata must be carefully managed to protect confidentiality.

Recommendation 5: No individual-level data containing uniquely identifying variables, such as genomic data, should be publicly released without explicit informed consent.
Recommendation 6: Genomic data and other individual-level data containing uniquely identifying variables that are stored or in active use by investigators on their institutional or personal computers should be encrypted at all times.
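To illustrate what "encrypted at all times" might look like in practice, here is a minimal sketch, not from the report, of encrypting a data file at rest. It assumes Python with the third-party cryptography package; the file names are hypothetical, and a real deployment would also need key management, access controls, and institutional security policy.

    # Hypothetical at-rest encryption of a sensitive file (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, store the key separately from the data
    fernet = Fernet(key)

    # Encrypt the raw file and keep only the ciphertext on disk.
    with open("genomic_data.csv", "rb") as f:  # hypothetical file name
        ciphertext = fernet.encrypt(f.read())
    with open("genomic_data.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Decrypt transiently, in memory, only when an analysis needs the data.
    with open("genomic_data.csv.enc", "rb") as f:
        plaintext = fernet.decrypt(f.read())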

Even if specific identifying variables, such as names and addresses, are stripped from data, it is still often possible to identify the individuals associated with the data by other means, such as using the variables that remain (age, sex, marital status, family income, etc.) to zero in on possible candidates. In the case of biodata that do not uniquely identify individuals and can change with time, such as blood pressure and physical measurements, it may be possible to share the data with no more protection than stripping identifying variables. Even these data, however, if known to intruders, can increase identification disclosure risk when combined with enough other data. With sufficient characteristics to match, intruders can uniquely identify individuals in shared data if given access to another data source that contains the same information plus identifiers.
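This matching logic is easy to demonstrate. The following sketch, not from the report, counts how many records share each combination of a few quasi-identifiers; any combination held by exactly one record could be matched against an outside source that carries the same variables plus direct identifiers. It assumes Python with pandas, and the variables and values are hypothetical.

    # Hypothetical uniqueness check on quasi-identifiers (a k-anonymity count).
    import pandas as pd

    survey = pd.DataFrame({
        "age":            [34, 34, 71, 52, 52],
        "sex":            ["F", "F", "M", "M", "M"],
        "marital_status": ["married", "married", "widowed", "single", "married"],
    })

    quasi_identifiers = ["age", "sex", "marital_status"]

    # k = number of records sharing each quasi-identifier combination.
    k = survey.groupby(quasi_identifiers).size().rename("k").reset_index()
    survey = survey.merge(k, on=quasi_identifiers)

    # Records with k == 1 are unique in the released data and therefore
    # vulnerable to linkage with an external data source.
    print(survey[survey["k"] == 1])

In this toy data, three of the five records are already unique on just three ordinary variables, which is exactly the exposure described above.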

Conclusion: Even nonunique biodata, if combined with social science data, may pose a serious risk of reidentification.
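
To make this reidentification risk concrete, the sketch below estimates how many records in a "stripped" survey file remain unique on a handful of quasi-identifiers, which is the intuition behind k-anonymity. The input file and column names are hypothetical; the report does not specify such a procedure.

```python
# Sketch: estimating reidentification risk in de-identified survey data by
# counting records that are unique on a set of quasi-identifiers.
import pandas as pd

df = pd.read_csv("deidentified_survey.csv")          # hypothetical file
quasi_identifiers = ["age", "sex", "marital_status", "family_income_band"]

# Size of each quasi-identifier group; a group size of 1 means the record
# could be matched exactly against an external source that has identifiers.
group_sizes = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
unique_share = (group_sizes == 1).mean()

print(f"{unique_share:.1%} of records are unique on {len(quasi_identifiers)} variables")
```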

In the case of high-dimensional genomic data, standard disclosure limitation techniques, such as data perturbation, are not effective at preserving the utility of the data: the alterations required are so extreme that they would severely distort analyses aimed at determining gene-gene and gene-environment interactions. Standard disclosure limitation methods could be used to generate public-use data sets that would enable low-dimensional analyses involving genes, for example, one gene at a time. With several such public releases, however, it may be possible for a key match to be used to construct a data set with higher-dimensional genomic data.

Conclusion: At present, no data restriction strategy has been demonstrated to protect confidentiality while preserving the usefulness of the data for drawing inferences involving high-dimensional interactions among genomic and social science variables, which are increasingly the target of research. Providing public-use genomic data requires such intense data masking to protect confidentiality that the masking would distort the very high-dimensional analyses that could yield ground-breaking research progress.
Recommendation 7: Both rich genomic data acquired for research and sensitive and potentially identifiable social science data that do not change (or change very little) with time should be shared only under restricted circumstances, such as licensing and (actual or virtual) data enclaves.
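
The distortion described in the conclusion above can be seen in a small simulation: perturbing simulated genotype data progressively destroys the ability to recover a gene-gene interaction effect. This is an illustrative sketch with synthetic data, not a method drawn from the report.

```python
# Sketch: heavy perturbation attenuates an interaction effect in simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
g1 = rng.integers(0, 3, n)                    # genotypes coded 0/1/2
g2 = rng.integers(0, 3, n)
y = 0.5 * g1 * g2 + rng.normal(0, 1, n)       # outcome driven by the interaction

def interaction_coef(a, b, y):
    """Least-squares estimate of the a*b interaction coefficient."""
    X = np.column_stack([np.ones(n), a, b, a * b])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]

def perturb(g, rate):
    """Randomly reassign a fraction `rate` of genotype values."""
    mask = rng.random(n) < rate
    g = g.copy()
    g[mask] = rng.integers(0, 3, mask.sum())
    return g

print("no perturbation:", round(interaction_coef(g1, g2, y), 3))
for rate in (0.3, 0.6, 0.9):
    est = interaction_coef(perturb(g1, rate), perturb(g2, rate), y)
    print(f"perturbation {rate:.0%}:", round(est, 3))
```

The true coefficient is 0.5; as the perturbation rate rises, the estimate shrinks toward zero, which is exactly the loss of analytic utility the conclusion describes.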

As discussed in Chapter 3, the four basic ways to restrict access to data are licensing, remote execution centers, data enclaves, and virtual data enclaves. Each has its advantages and disadvantages.[1] Licensing, for example, is the least restrictive for a researcher in terms of access to the data, but the licensing process itself can be lengthy and burdensome. Thus it would be useful if the licensing process could be facilitated.

Recommendation 8: NIA (or, preferably, NIH) should develop new standards and procedures for licensing confidential data in ways that will maximize timely access while maintaining security and that can be used by data repositories and by projects that distribute data.

Ways to improve the other approaches to restricted access are needed as well. For example, improving the convenience and availability of virtual data enclaves could increase the use of combined social science and biodata without a significant increase in risk to confidentiality. The panel notes that much of the discussion of the confidentiality risk posed by the various approaches is theoretical; no one has a clear idea of exactly what disclosure risks are associated with the various ways of sharing data. It is important to learn more about these disclosure risks for a variety of reasons: to determine how to minimize them, for instance, or to know which approaches to sharing data pose the least risk. It would also be useful to be able to describe disclosure risks more accurately to survey participants.

Recommendation 9: NIA and other funding agencies should assess the strength of confidentiality protections through periodic expert audits of confidentiality and computer security. Willingness to participate in such audits should be a condition for receipt of NIA support. Beyond enforcement, the purpose of such audits would be to identify challenges and solutions.

Evaluating risks and applying protection methods, whether they involve restricted access or restricted data, is a complex process requiring expertise in disclosure protection that exceeds what individual principal investigators and their institutions usually possess. Currently, not enough is known to represent these risks either fully or accurately. The NIH requirement for data sharing necessitates a large investment of resources to anticipate which variables are potentially available to intruders and to alter data in ways that reduce disclosure risks while maintaining the utility of the data. Such resources are better spent by principal investigators on collecting and analyzing the data.

Recommendation 10: NIH should consider funding Centers of Excellence to explore new ways of protecting digital representations of data and to assist principal investigators wishing to share data with others. NIH should also support research on disclosure risks and limitations.

Principal investigators could send digital data to these centers, which would organize and manage any restricted access or restricted data policies or provide advisory services to investigators. NIH would maintain the authority to penalize those who violated any confidentiality agreements, for example, by denying them or their home institution NIH funding. Models for these centers include the Inter-university Consortium for Political and Social Research (ICPSR) and its projects supported by NIH and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), as well as the UK Data Archive. The centers would take the data-sharing burden that the NIH mandate places on principal investigators and move it into expert hands. However, excellence in the design of data access and control systems is likely to require intimate knowledge of each specific data resource, so data producers should be involved in the systems’ development.

INFORMED CONSENT

As described in Chapter 4, informed consent is a complex subject involving many issues that are still being debated; the growing power of genetic analysis techniques and bioinformatics has only added to this complexity. Given the rapid pace of advances in scientific knowledge and in the technology used to analyze biological materials, it is impossible to predict what information might be gleaned from biological specimens just a few years hence; accordingly, it is impossible, even in theory, to talk about perfectly informed consent. The best one can hope for is relatively well-informed consent from a study’s participants, but knowing precisely what that means is difficult. Determining the scope of informed consent adds another layer of complexity. Will new analyses be covered under the existing consent, for example? There are no clear guidelines on such questions, yet specific details on the scope of consent will likely affect an IRB’s reaction to a study proposal.

What Individual Researchers Need to Know and Do Regarding Informed Consent

To be sure, there is a wide range of views about the practicality of providing adequate protection to participants while proceeding with the scientific enterprise: they range from assertions that adequate protection is simply not possible to offers of numerous procedural safeguards with no iron-clad guarantees. This report takes the latter position—that investigators should do their best to communicate adequately and accurately with participants, to provide procedural safeguards to the extent possible, and not to promise what is not possible.[2] Social science researchers need to know that adding the collection of biospecimens to social science surveys changes the nature of informed consent. Informed consent for a traditional social science survey may entail little more than reading a short script over the phone and asking whether the participant is willing to continue; obtaining informed consent for the collection and use of biospecimens and biodata is generally a much more involved process.

Conclusion: Social scientists should be made aware that the process of obtaining informed consent for the use of biospecimens and biodata typically differs from social science norms.

If participants are to provide truly informed consent to taking part in any study, they must be given a certain minimum amount of information. They should be told, for example, what the purpose of the study is, how it is to be carried out, and what participants’ roles are. In addition, because of the unique risks associated with providing biospecimens, participants in a social science survey that involves the collection of such specimens should be provided with other types of information as well. In particular, they should be given details on the storage and use of the specimens that relate to those risks and can assist them in deciding whether to take part in the study.

Recommendation 11: In designing a consent form for the collection of biospecimens, in addition to those elements that are common to social science and biomedical research, investigators should ensure that participants are told:

  • how long researchers intend to retain their biospecimens and the genomic and other biodata that may be derived from them;
  • both the risks associated with genomic data and the limits of what they can reveal;
  • which other researchers will have access to their specimens, to the data derived therefrom, and to information collected in a survey questionnaire;
  • the limits on researchers’ ability to maintain confidentiality;
  • any potential limits on participants’ ability to withdraw their specimens or data from the research;
  • the penalties[3] that may be imposed on researchers for various types of breaches of confidentiality; and
  • what plans have been put in place to return to them any medically relevant findings.

Researchers who fail to plan properly for and handle all of these issues before proceeding with a study are in essence compromising the assurances given under informed consent. The literature on informed consent emphasizes the importance of ensuring that participants understand reasonably well what they are consenting to. This understanding cannot be taken for granted, particularly as it pertains to the use of biological specimens and the data derived therefrom. While it is not possible to guarantee that participants have a complete understanding of the scientific uses of their specimens or all the possible risks of their participation, they should be able to make a relatively well-informed decision about whether to take part in the study. Thus the ability of various participants to understand the research and the informed consent process must be considered. Even impaired individuals may be able to participate in research if their interests are protected, even if they can do so only through proxy consent.[4]

Recommendation 12: NIA should locate and publicize positive examples of the documentation of consent processes for the collection of biospecimens. In particular, these examples should take into account the special needs of certain individuals, such as those with sensory problems and the cognitively impaired.

Participants in a biosocial survey are likely to have different levels of comfort concerning how their biospecimens and data will be used. Some may be willing to provide only answers to questions, for example, while others may both answer questions and provide specimens. Among those who provide specimens, some may be willing for the specimens to be used only for the current study, while others may consent to their use in future studies. One effective way to deal with these different comfort levels is to offer a tiered approach to consent that allows participants to determine just how their specimens and data will be used. Tiers might include participating in the survey, providing specimens for genetic and/or nongenetic analysis in a particular study, and allowing the specimens and data to be stored for future uses (genetic and/or nongenetic). For those participants who are willing to have their specimens and data used in future studies, researchers should tell them what sort of approval will be obtained for such use. For example, an IRB may demand reconsent, in which case participants may have to be contacted again before their specimens and data can be used. Ideally, researchers should design their consent forms to avoid the possibility that an IRB will demand a costly or infeasible reconsent process.

Recommendation 13: Researchers should consider adopting a tiered approach to obtaining consent. Participants who are willing to have their specimens and data used in future studies should be informed about the process that will be used to obtain approval for such uses.

What Institutions Should Do Regarding Informed Consent

Because the details of informed consent vary from study to study, individual investigators must bear ultimate responsibility for determining the details of informed consent for any particular study. Thus researchers must understand the various issues and concerns surrounding informed consent and be prepared to make decisions about the appropriate approach for their research in consultation with staff of survey organizations. These decisions should be addressed in the training of survey interviewers. As noted above, however, the issues surrounding informed consent are complex and not completely resolved, and researchers have few options for learning about informed consent as it applies to social science studies that collect biospecimens. Thus it makes sense for agencies funding this research, the Office for Human Research Protections (OHRP), or other appropriate organizations (for example, Public Responsibility in Medicine and Research [PRIM&R]) to provide opportunities for such learning, taking into account the fact that the issues arising in biosocial research do not arise in the standard informed consent situations encountered in social science research. It should also be made clear that the researchers’ institution is usually deemed (e.g., in the courts) to bear much of the responsibility for informed consent.

Recommendation 14: NIA, OHRP, and other appropriate organizations should sponsor training programs, create training modules, and hold informational workshops on informed consent for investigators; staff of survey organizations, including field staff; administrators; and members of IRBs who oversee surveys that collect social science data and biospecimens.

The Return of Medically Relevant Information

An issue related to informed consent is how much information to provide to survey participants once their biological specimens have been analyzed, and in particular how to deal with medically relevant information that may arise from the analysis. What, for example, should a researcher do if a survey participant is found to have a genetic disease that does not appear until later in life? Should the participant be notified? Should participants be asked as part of the initial interview whether they wish to be notified about such a discovery? At this time, there are no generally agreed-upon answers to such questions, but researchers should expect to have to deal with these issues as they analyze the data derived from biological specimens.

Recommendation 15: NIH should direct investigators to formulate a plan in advance concerning the return of any medically relevant findings to survey participants and to implement that plan in the design and conduct of their informed consent procedures.
INSTITUTIONAL REVIEW BOARDS

Investigators seeking IRB approval for biosocial research face a number of challenges. Few IRBs are familiar with both social and biological science; thus, investigators may find themselves trying to justify standard social science protocols to a biologically oriented IRB, explaining standard biological protocols to an IRB that is used to dealing with social science, or sometimes both. Researchers can expect these obstacles, which arise from the interdisciplinary nature of their work, to be exacerbated by a number of other factors that are characteristic of IRBs in general (see Chapter 4).

Recommendation 16: In institutions that have separate biomedical and social science IRBs, mechanisms should be created for sharing expertise during the review of biosocial protocols.[5]

What Individual Researchers Need to Do Regarding IRBs

Because the collection of biospecimens as part of social science surveys is still relatively unfamiliar to many IRBs, researchers planning such a study can expect their interactions with the IRB overseeing the research to involve a certain learning curve. The IRB may need extra time to become familiar and comfortable with the proposed practices of the survey, and conversely, the researchers will need time to learn what the IRB will require. Thus it will be advantageous if researchers conducting such studies plan from the beginning to devote additional time to working with their IRBs.

Recommendation 17: Investigators considering collecting biospecimens as part of a social science survey should consult with their IRBs early and often.

What Research Agencies Should Do Regarding IRBs

One way to improve the IRB process would be to give members of IRBs an opportunity to learn more about biosocial research and the risks it entails. This could be done by individual institutions, but it would be more effective if a national funding agency took the lead (see Recommendation 14).

It is the panel’s hope that its recommendations will support the incorporation of social science and biological data into empirical models, allowing researchers to better document the linkages among social, behavioral, and biological processes that affect health and other measures of well-being while avoiding or minimizing many of the challenges that may arise. Implementing these recommendations will require the combined efforts of both individual investigators and the agencies that support them.

[1] See the discussion on “Choosing a Data Sharing Strategy” in Chapter 3.

[2] In a few cases, it may be necessary to deceive participants about the purposes of a study (for example, in field tests of labor market discrimination), but these situations are unlikely to occur in biosocial studies. However, the Common Rule (45 CFR 46: 46.116.c.2, 46.116.d.3) explicitly permits such exceptions when they are scientifically necessary.

[3] Penalties might include NIH eliminating researchers’ eligibility for funding and institutions eliminating the research privileges of faculty.

[4] Note that this report does not address the issue of obtaining informed consent from children.

[5] Sharing expertise between biomedical and social science IRBs does not require a return to the days when there was only one IRB at each institution, a situation that still exists at many small institutions. For example, the Social and Behavioral Science IRB at the University of Wisconsin–Madison has asked a geneticist to serve as an ex officio member of the IRB when it considers protocols that use genetic data.

Source: National Research Council (US) Panel on Collecting, Storing, Accessing, and Protecting Biological Specimens and Biodata in Social Surveys; Hauser RM, Weinstein M, Pool R, et al., editors. Conducting Biosocial Surveys: Collecting, Storing, Accessing, and Protecting Biospecimens and Biodata. Washington (DC): National Academies Press (US); 2010. Chapter 5, Findings, Conclusions, and Recommendations.
Survey Research: Definition, Examples and Methods


Survey research is a quantitative research method used for collecting data from a set of respondents. It has been one of the most widely used methodologies in industry for years because of the benefits it offers for collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. Every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate participants to respond. Credible survey research can give a business access to a vast bank of information. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, and the collection and analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step toward obtaining quick information about mainstream topics; it can be followed by more rigorous, detailed quantitative research methods like surveys/polls or qualitative research methods like focus groups/on-call interviews. In many situations, researchers combine both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified according to two critical factors: the medium used to conduct the survey and the time involved. Based on the medium, there are three main survey research methods:

  • Online/Email:  Online survey research is one of the most popular survey research methods today. The cost involved in online survey research is minimal, and the responses gathered are highly accurate.
  • Phone:  Survey research conducted over the telephone (CATI survey) can be useful for collecting data from a more extensive section of the target population. However, phone surveys tend to require more money and time than other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting surveys over a continuum of time, spread across years and decades. The data collected using this method from one time period to another may be qualitative or quantitative. Respondent behavior, preferences, and attitudes are observed continuously over time to analyze reasons for changes in behavior or preferences. For example, if a researcher intends to learn about the eating habits of teenagers, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling method used to form samples: probability and non-probability sampling. Ideally, every individual in the population should have an equal chance of becoming part of the survey research sample. Probability sampling is a sampling method in which the researcher chooses elements based on probability theory; the various probability sampling methods include simple random sampling, systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method in which the researcher uses his/her knowledge and experience to form samples. (A short code sketch after the list below illustrates drawing probability samples in practice.)

LEARN ABOUT: Survey Sample Sizes

The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
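
As a rough illustration of the probability sampling methods named above, here is a minimal pandas sketch. It assumes a hypothetical respondent frame stored in population_frame.csv with a region column; none of these names come from the article.

```python
# Sketch: drawing probability samples from a respondent frame with pandas.
import random

import pandas as pd

frame = pd.read_csv("population_frame.csv")   # one row per population member

# Simple random sampling: every member has an equal chance of selection
srs = frame.sample(n=500, random_state=42)

# Stratified random sampling: sample proportionally within each region
stratified = (
    frame.groupby("region", group_keys=False)
         .apply(lambda g: g.sample(frac=0.05, random_state=42))
)

# Systematic sampling: every k-th member after a random start
k = max(1, len(frame) // 500)
start = random.randrange(k)
systematic = frame.iloc[start::k]
```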

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. In many surveys, the details of responses matter less than insights about what customers prefer from the provided options; in such situations, a researcher can include multiple-choice or closed-ended questions. If researchers need details about specific issues, they can include open-ended questions in the questionnaire. Ideally, a survey should include a smart balance of open-ended and closed-ended questions. Use question formats like the Likert Scale, Semantic Scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Survey research is most instrumental when the sample is drawn from the defined target population; that way, results match the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real time and identify patterns in the responses that might lead to a much-needed breakthrough for your organization. GAP analysis, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior (see the cross-tabulation sketch below). Researchers can use the results to implement corrective measures that improve customer/employee satisfaction.
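
As a concrete example of the analysis step, and of cross tabulation in particular, here is a minimal pandas sketch. The data file and column names are hypothetical stand-ins, not part of the article.

```python
# Sketch: a basic cross tabulation of survey responses.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Satisfaction broken down by age group, shown as row percentages
table = pd.crosstab(
    responses["age_group"],
    responses["satisfaction"],
    normalize="index",
)
print(table.round(2))
```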

Reasons to conduct survey research

The most crucial reason for conducting market research using surveys is that you can collect answers to specific, essential questions. You can ask these questions in multiple survey formats depending on the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying it out so that the study can be structured, planned, and executed well.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to make based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very clear about how secure their responses will be and how you will utilize the answers. This will push them to be fully honest in their feedback, opinions, and comments. Online and mobile surveys have proved to protect respondent privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics, like product quality or quality of customer service, can be put on the table for discussion. One way to do this is to include open-ended questions where respondents can write their thoughts. This will make it easy for you to correlate your survey with what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables (a short code sketch after the list shows one way to represent each):

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale; its variables are labeled, ordered, and have a calculated difference between them. In addition to the properties of the interval scale, this scale has a fixed starting point, i.e., the actual zero value is present.
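
For readers working in code, the four scales map naturally onto data types. The sketch below, using illustrative values only, shows one way to represent each scale in pandas so that only scale-appropriate operations are applied.

```python
# Sketch: representing the four measurement scales in pandas.
import pandas as pd

df = pd.DataFrame({
    # Nominal: labels with no order -- an unordered categorical
    "brand": pd.Categorical(["A", "B", "A", "C"]),
    # Ordinal: labeled AND ordered, but gaps between levels are undefined
    "satisfaction": pd.Categorical(
        ["low", "high", "medium", "high"],
        categories=["low", "medium", "high"], ordered=True),
    # Interval: meaningful differences, no true zero (e.g., temperature in C)
    "temperature_c": [21.5, 19.0, 23.2, 20.1],
    # Ratio: meaningful differences AND a true zero (e.g., weekly spend)
    "weekly_spend": [84.0, 0.0, 129.9, 45.5],
})

print(df["brand"].mode())         # nominal: only counts/modes are meaningful
print(df["satisfaction"].max())   # ordinal: order comparisons are allowed
print(df["temperature_c"].diff()) # interval: differences are meaningful
print(df["weekly_spend"].mean())  # ratio: means and ratios are fully valid
```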

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers gain useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile and online surveys require minimal investment per respondent. Even with the gifts and other incentives provided to participants, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums, like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. Thanks to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity, which makes data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure, as respondent details and responses are safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design when the available budget is limited and details need to be gathered easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted one.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide on a purpose for the research. This is the primary stage of survey research, as it molds the entire path of the survey and shapes its results.
  • Filter the sample from the target population:  “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results depends on who the members of the sample are and how relevant their opinions are; the quality of respondents in a sample matters more than the quantity. If a researcher seeks to understand whether a product feature will work well with the target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher must answer this question to design the survey effectively. What will the content of the cover letter be? What are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets the sample and gains insights about the survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for your research. It is essential to choose the right topic, the right question types, and a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather. Create a survey using different types of questions, such as multiple-choice, rating scale, and open-ended. Look at more survey examples and the four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can share it as handouts or via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study, and answers to questions such as: Has the product or service been used/preferred or not? Do respondents prefer one product to another? Any recommendations?

Having a tool that helps you carry out all the necessary steps for this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world carry out data collection simply and effectively, in addition to offering a wide range of solutions for making the best use of that data.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!




What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall .

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from: face-to-face interviews, telephone surveys, focus groups (though these are more interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample, your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard; the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys were popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or when you want to ask multiple-choice and open-ended questions.

The downsides: phone surveys can take a long time to complete depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest. Instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and designing , but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods due to being cost-effective, enabling researchers to accurately survey a large population quickly.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be the workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population, giving you balance across criteria such as age, gender, and background.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives (e.g., discounts, coupons, money), respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer, and analyze. As long as the planning and survey design are thorough and you target the right audience, data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost-effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest and most effective way to measure survey results is to use a dedicated research tool that puts all of your survey results in one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value, because you’re asking respondents to prefer one option over another.

Ratio scale

Ratio scales can be used to judge both the order of and the difference between responses, and they have a true zero point: for example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful difference between values, but there is no true zero point: for example, measuring temperature in degrees or measuring a credit score between one value and another.

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answered to test your theories or beliefs. Be wary of framing questions that could lead respondents or inadvertently create biased responses.

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data with some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple-choice, rank in order, etc.) not only increases the reliability of your data but also reduces survey fatigue and keeps respondents from simply answering questions quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g., have a random internal group take the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.
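
A minimal sketch of the response-rate check described above; the counts and the target threshold are illustrative assumptions, not figures from this article.

```python
# Sketch: monitoring response rate after deployment and flagging a shortfall.
invited = 4_000
completed = 612

response_rate = completed / invited
target_rate = 0.25          # assumed planning target

print(f"Response rate: {response_rate:.1%}")
if response_rate < target_rate:
    shortfall = int(target_rate * invited) - completed
    print(f"Below target: need about {shortfall} more completes; "
          "send a reminder or add a second sample group.")
```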

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data against your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.
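
As one example of testing an assumption against survey results, the sketch below runs a chi-square test of independence between a customer segment and a satisfaction rating. It assumes pandas and SciPy; the data file and column names are hypothetical stand-ins.

```python
# Sketch: testing whether satisfaction is independent of customer segment.
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.read_csv("survey_results.csv")
observed = pd.crosstab(responses["segment"], responses["satisfaction"])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Satisfaction differs across segments; examine the breakdown.")
```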

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn’t be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.

Ready to find out more about Qualtrics CoreXM?

Get started with our free survey maker tool today



A Survey of Attitudes and Actions on Dual Use Research in the Life Sciences: A Collaborative Effort of the National Research Council and the American Association for the Advancement of Science (2009)

Chapter 4: Conclusions and Recommendations


4 Conclusions and Recommendations

INTRODUCTION

The results of the survey provide some of the first empirical data about the perceptions of a sample of U.S. life scientists of the potential risks of misuse of legitimate scientific research for malicious purposes. The survey obtained information from a diverse group of academic, government, and industry researchers. The survey data provide evidence about how the respondents perceive the sources of risk related to dual use research, the actions that some scientists are taking to reduce the risk of misuse of science, and the prospects for acceptance of various policy proposals aimed at reducing the risks of misuse of legitimate life science research.

While useful, the results of the survey must be viewed with caution because of the low response rate and possible response bias. Scientists who may be involved in biodefense or select agent research, for example, may be more aware of the dual use dilemma and thus more likely to have responded to the survey. In addition, a few of the questions could have been interpreted in multiple ways. Despite the limitations, which are discussed in detail in Chapter 2, the committee believes that the data obtained in this study offer valuable insights and new information.

Overall, the survey findings suggest that there is considerable support for models of oversight that rely on the responsible conduct of research and self-governance by the scientific community. The responses also suggest, however, that there is a critical need to clarify the scope of research activities of high concern and to determine the appropriate actions that members of the life sciences community can take to reduce the risk of misuse of science for bioweapons development or bioterrorism.

The rest of the chapter provides a summary of the survey findings. Following a brief summary of the perceptions of risks of the scientists who responded to the survey, three key areas of current and potential activities and policies are highlighted: actions that life scientists have already taken to address dual use concerns, mechanisms for the oversight of research, and issues related to education and outreach. The chapter closes with the committee’s recommendations for furthering education and outreach activities, which are based on the findings of the survey and its own judgments and analysis.

PERCEPTIONS OF RISK

The findings suggest that, on average, the scientists who responded to the survey perceive a potential, but not overwhelming, risk of bioterrorism and that the risk is greater outside the United States. On average, the respondents believed that there is a 51 percent chance that there will be an act of bioterrorism somewhere in the world in the next 5 years and a 35 percent chance that there will be an act of bioterrorism in the United States in the next 5 years. Three-quarters of the respondents believe that a preference for other means of attack is the primary reason why there have been only a few acts of bioterrorism to date; overwhelmingly, 87 percent of respondents said that they believe that terrorists are not deterred by the threat of being caught and punished. Fewer scientists considered a lack of knowledge (46 percent) or access to equipment (51 percent) or agents (36 percent) to be significant barriers. It may be that one’s perceived risk of such an attack is related to one’s support for taking measures to reduce the risks that life sciences research might be misused.

With regard to the chance that the knowledge, tools, or techniques from dual use research will facilitate bioterrorism, the respondents perceive a 28 percent chance, on average, of such a bioterror attack within the next 5 years. Half of the respondents thought that if someone wanted to create a harmful biological agent, the Internet would be the most likely place to provide sufficient information for life scientists with college-level training. Other sources of information—articles in scientific journals (40 percent), personal communications (38 percent), and presentations at professional meetings (18 percent)—were considered relatively less likely sources, although on average 45 percent of respondents answered “Don’t Know” to these questions.

ACTIONS TAKEN BY LIFE SCIENTISTS IN RESPONSE TO DUAL USE CONCERNS

Although the responses to the survey indicate that bioterrorism probably is not perceived to present a serious immediate risk to U.S. or global security, the survey results also indicate that there is already concern about dual use issues among some of the life scientists who responded. Fifteen percent of the respondents (260 individuals out of 1,744) indicated that they are so concerned about dual use research that they have taken actions, even in the absence of guidelines or mandatory regulations from the U.S. government. Some respondents reported that they had broken collaborations, not conducted some research projects, or not communicated research results. The results indicate that more scientists have modified their research activities than some members of the committee expected on the basis of previous reports of manuscripts that have been modified or not published because of dual use concerns.

Interestingly, many of the actions that the respondents reported taking to mitigate concerns occurred before the publication stage; much of the behavior change occurred during the research design, collaboration, and early communication stages. Of particular interest and concern to the committee, a few respondents commented on their concerns about foreigners as potential security risks, which may be reflected in the reported avoidance of some collaborations.

The survey results suggest that: (1) some life scientists in the United States may be willing to consider self-governance aimed at the responsible scientific conduct of dual use research, and (2) some life scientists in the United States are already acting, even in the absence of government regulations and guidance, to protect against the perceived risk of misuse of dual use research.

OVERSIGHT MECHANISMS

With a proposed oversight framework for dual use research of concern proposed by NSABB in June 2007 now under consideration within the U.S. government, the survey was an opportunity to assess scientists’ attitudes toward specific policy options. Many of the respondents indicated that they believe that personal responsibility, including measures such as codes of conduct, could foster a positive culture within the scientific community to evaluate the potential consequences of their research for public safety and national security. They also indicated that they believe that individual researchers, professional scientific societies, institutions, and scientific journals should be responsible for evaluating the dual use potential of research and/or fostering the culture of scientific responsibility.

A majority of those who responded to the survey favored self-governance mechanisms for dealing with dual use research of concern, such as those proposed by the Fink report (NRC 2004a), rather than additional mandatory government regulations. In addition to the low level of support for greater federal oversight (26 percent), the individual comments indicated a belief that increased government oversight of dual use research would be counterproductive by inhibiting the research needed to combat emerging infectious diseases and bioterrorism as well as being potentially harmful to the scientific enterprise more generally.

The survey suggests that most of the respondents (82 percent) favor their professional societies’ prescribing a code of responsible conduct to help prevent misuse of life sciences research. However, many respondents (66 percent) did not know whether the societies to which they belonged already had codes that address dual use issues, and some of the societies most frequently cited do not in fact have a code. There was substantially less support (38 percent agree or strongly agree) for a Hippocratic-style oath.

The results also indicate potential support for journals having biosecurity policies. Yet, most of the respondents did not know if any of the journals in which they have published or to which they have submitted manuscripts have those policies. Moreover, more than half of those who responded to the survey strongly disagreed or disagreed with restrictions on personal communication, altering or removing methods or findings from scientific publications, or limiting publication itself.

The survey points to a likely preference for self-governance measures to provide oversight of dual use research. There was substantially less support for mandatory measures that might be imposed by regulation, although the results varied for different policy measures. The results indicate that there may be greater support for restrictions on access to biological agents (just under 50 percent of the respondents said they agree or strongly agree) and certifications of researchers (just over 40 percent of the respondents said they agree or strongly agree) than for any control of scientific knowledge generated from the research or through information exchange (only 20 to 30 percent of respondents supported these measures). Table 4-1 provides a list of the level of support for the various measures addressed in the survey.

TABLE 4-1 Summary of Results Regarding Support for Measures of Personal and Institutional Responsibility (percentage of respondents who strongly agree or agree, or who responded yes to items marked *)

  • Principal investigators should be responsible for the initial evaluation of the dual use potential of their life sciences research: 87
  • Principal investigators should be responsible for training lab staff, students, and visiting scientists about dual use research: 86
  • Should professional science societies have codes for the responsible conduct of dual use life sciences research? 82*
  • University and college students should receive educational lectures and materials on dual use life sciences research: 68
  • Scientists should provide formal assurance to their institution that they are assessing their work for dual use potential: 67
  • Funding agencies should require grantees to attest on grant applications that they have considered dual use implications of their proposed research: 60
  • Should scientific journals have policies regarding publication of dual use research? 57*
  • Institutions should provide mandatory training for scientists regarding dual use life sciences research: 55
  • Greater restrictions should be placed on access to specific biological agents or toxins: 47
  • Researchers conducting dual use research should be certified: 42
  • All grant proposals for life sciences research with dual use potential should be reviewed by a researcher’s institution prior to submission for funding: 41
  • Scientists conducting or managing research should take an oath: 38
  • Research findings should be classified based on their dual use potential: 28
  • Dual use research needs greater federal oversight: 26
  • Certain experimental methods or findings should be altered or removed prior to publication or presentation: 22
  • Certain biological equipment that is commonly used in life science research should be licensed: 21
  • There should be restrictions on disclosure of details about the research or its findings through personal communication: 21
  • There should be restrictions on publication of findings based on their dual use potential: 21

SOURCE: NRC/AAAS Survey of Life Scientists; data analysis by staff.

The survey results suggest there is support for:

1. Greater oversight that is not federally mandated;
2. Self-governance mechanisms as an approach for preventing misuse of life science research and knowledge;
3. Professional and scientific societies adopting codes of conduct that include dual use research as suggested in the Fink report (NRC 2004a);
4. Establishing and implementing policies for authors and reviewers to consider the dual use potential of research manuscripts submitted to journals.

The survey results suggest there is opposition to:

1. Mandatory government regulations to govern the conduct of dual use research and the communication of knowledge from that research;
2. Other mandatory oversight actions, such as oaths or licensing of scientists.

Based on the survey results and its own analysis, the committee believes that a basis of support exists within the U.S. scientific community for measures that, taken together, could lead to the development of a system of self-governance for the oversight of key aspects of dual use research.

EDUCATION AND OUTREACH

A major reason for conducting the survey was to inform efforts for education and awareness-raising about dual use research by providing empirical data on the attitudes of a sample of the life sciences community. In general, the respondents to this survey would likely support educational and outreach activities aimed at raising awareness of the dual use dilemma. The respondents indicated that they supported educational materials and lectures on dual use research for students. They also supported mandatory training by institutions for practicing life scientists regarding dual use research of concern.

The survey results also highlight the need to better define the scope of dual use research of concern. Fewer than half of the respondents who indicated that they were carrying out dual use research activities felt that their research fell into one of the seven categories of research of concern specified by the NSABB. The dual use experiments of concern as listed in the Fink report (NRC 2004a) and by the NSABB are all based on microbial research, but other relevant research, such as theoretical research, scenario development, or applied research (e.g., pharmaceutical formulations or neuroscience research), can be of dual use concern. In their individual comments, a number of respondents stressed the difficulties of defining dual use, as did participants in the focus groups used to develop the survey. Clearly a better understanding of the scope of dual use research of real concern would help any educational or outreach activities aimed at raising the awareness of life scientists so that appropriate actions can be taken.

Based on the survey results and its own analysis, the committee believes that there is support for mandatory education and training about dual use issues, most likely as part of ethics and responsible conduct of research training.

RECOMMENDATIONS

The committee believes that the survey raises several hypotheses that merit further research about the views of life scientists on oversight policies and education and outreach efforts to address concerns about dual use issues in the life sciences. In particular, based on the survey results and its own deliberations, the committee offers the following recommendations:

Oversight, Education, and Outreach

1. Explore how to continue and to expand the dialogue within the life sciences community about dual use research of concern.
2. Explore ways to provide guidance to the life sciences community about appropriate actions that can be taken to protect against the misuse of dual use research.
3. Seek to better define the scope of knowledge in the life sciences that may be at greatest risk for misuse and to provide the life sciences community with criteria for recognizing dual use research of concern.
4. Encourage journals that have biosecurity policies or plan to adopt them in the future, and the professional and scientific societies that have or plan to develop codes of conduct, to communicate those policies more effectively.

Further Research

1. Examine the effectiveness of existing educational programs and how they can be enhanced and focused.
2. Seek to extend educational and awareness-raising efforts being conducted in the United States to the broad international scientific community.
3. Examine how education and outreach activities can be developed to guide the life science community’s response to concerns about dual use research so as to ensure that actions taken by the community are appropriate and contribute to advancing scientific knowledge while protecting national security.
4. Conduct additional surveys, interviews, or focus groups of U.S. life scientists that better represent the full community, with higher response rates than the current study was able to achieve, and with the ability to assess potential bias, in order to gain:
   i. a better understanding of the awareness of a broader range of U.S. life scientists about dual use research of concern and the measures that they would support to reduce the threat that research in the life sciences could be subverted to do harm;
   ii. a better understanding of the types of behavioral changes being made in response to dual use concerns, to determine whether actions by life scientists are contributing to national security or harming scientific research; such research is critical given the actions that the current survey suggests are being taken;
   iii. more detailed information about the types of changes scientists are making or scientists’ thoughts about dual use issues, experiments of concern, and select agents;
   iv. a better understanding of scientists’ experiences with education on this topic and their views about the content and delivery of educational and training materials.
5. Conduct additional surveys of life scientists outside the United States that would enable comparisons of attitudes toward dual use research of concern and inform educational and outreach programs so that they can be effective on a global scale. Such knowledge could also facilitate international discussions of potential measures to address dual use concerns.

The same technologies that fuel scientific advances also pose potential risks—that the knowledge, tools, and techniques gained through legitimate biotechnology research could be misused to create biological weapons or for bioterrorism. This is often called the dual use dilemma of the life sciences. Yet even research with the greatest potential for misuse may offer significant benefits. Determining how to constrain the danger without harming essential scientific research is critical for national security as well as prosperity and well-being.

This book discusses a 2007 survey of American Association for the Advancement of Science (AAAS) members in the life sciences about their knowledge of dual use issues and attitudes about their responsibilities to help mitigate the risks of misuse of their research.

Overall, the results suggest that there may be considerable support for approaches to oversight that rely on measures that are developed and implemented by the scientific community itself. The responses also suggest that there is a need to clarify the scope of research activities of concern and to provide guidance about what actions scientists can take to reduce the risk that their research will be misused by those with malicious intent.



Survey Research – Types, Methods, Examples

Survey Research

Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Survey Research Methods are as follows:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, survey research is divided into two types: quantitative survey research and qualitative survey research.

Quantitative Survey Research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.
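To make the idea of standardized, statistically analyzable responses concrete, here is a minimal sketch (in Python, with invented answers) of how closed-ended Likert responses are typically coded as numbers before analysis.

```python
# Map closed-ended Likert answers to numerical codes (a common convention)
likert = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

raw = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]
coded = [likert[answer] for answer in raw]

print(coded)                    # [4, 5, 3, 4, 2]
print(sum(coded) / len(coded))  # mean score: 3.6
```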

Qualitative Survey Research

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several data analysis methods that survey researchers may use, including the following (a short worked example appears after the list):

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
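As a minimal illustration of the first two methods in the list, the sketch below computes descriptive statistics and a simple inferential test on invented Likert-scale data. It assumes pandas and SciPy are installed; the data and group labels are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical 5-point Likert responses to "How satisfied are you?"
responses = pd.DataFrame({
    "satisfaction": [5, 4, 4, 3, 5, 2, 4, 5, 3, 4],
    "group":        ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Descriptive statistics: central tendency and spread
print(responses["satisfaction"].mean())    # mean
print(responses["satisfaction"].median())  # median
print(responses["satisfaction"].std())     # standard deviation

# Frequency distribution of the response options
print(responses["satisfaction"].value_counts().sort_index())

# A simple inferential step: compare the two groups with Welch's t-test
a = responses.loc[responses["group"] == "A", "satisfaction"]
b = responses.loc[responses["group"] == "B", "satisfaction"]
print(stats.ttest_ind(a, b, equal_var=False))
```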

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where it may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity (see the sample-size sketch after this list).
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
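For the sampling step, a common rule of thumb for estimating a proportion is n = z²·p(1−p)/e², where p is the expected proportion (0.5 is the most conservative choice), e is the margin of error, and z is the critical value for the desired confidence level. The sketch below is one standard way to compute it; treat it as a planning aid rather than a substitute for a proper power analysis.

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Sample size needed to estimate a proportion p to within +/- margin
    at the confidence level implied by z (1.96 for roughly 95%)."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

print(sample_size_for_proportion())             # 385 respondents for a 5% margin
print(sample_size_for_proportion(margin=0.03))  # 1068 respondents for a 3% margin
```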

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

Following is an example of a Survey Sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  •  A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

________________________________________

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Methods, strategies, and incentives to increase response to mental health surveys among adolescents: a systematic review

  • Julia Bidonde (ORCID: orcid.org/0000-0001-7535-678X)
  • Jose F. Meneses-Echavez (ORCID: orcid.org/0000-0003-4312-6909)
  • Elisabet Hafstad (ORCID: orcid.org/0009-0001-6296-410X)
  • Geir Scott Brunborg (ORCID: orcid.org/0000-0002-1382-2922)
  • Lasse Bang (ORCID: orcid.org/0000-0002-3548-5234)

BMC Medical Research Methodology, volume 23, Article number: 270 (2023). Published: 16 November 2023 (open access).

Abstract

Background

This systematic review aimed to identify effective methods to increase adolescents’ response to surveys about mental health and substance use, in order to improve the quality of survey information.

Methods

We followed a protocol and searched for studies that compared different survey delivery modes for adolescents. Eligible studies reported response rates, mental health score variation per survey mode, and participant variations in mental health scores. We searched CENTRAL, PsycINFO, MEDLINE and Scopus in May 2022, and conducted citation searches in June 2022. Two reviewers independently undertook study selection, data extraction, and risk of bias assessments. Following the assessment of heterogeneity, some studies were pooled using meta-analysis.

Results

Fifteen studies were identified, reporting six comparisons related to survey methods and strategies. Results indicate that response rates do not differ between survey modes (e.g., web versus paper-and-pencil) delivered in classroom settings. However, web surveys may yield higher response rates outside classroom settings. The largest effects on response rates were achieved using unconditional monetary incentives and obtaining passive parental consent. Survey mode influenced mental health scores in certain comparisons.

Conclusions

Despite the mixed quality of the studies, the low volume of evidence for some comparisons, and the restriction to studies in high-income countries, several effective methods and strategies to improve adolescents’ response rates to mental health surveys were identified.


Background

Globally, one in seven adolescents (aged 10–19 years) experiences a mental disorder, accounting for 13% of the health burden in this age group [1]. The Global Burden of Diseases Study reports that anxiety disorders, depressive disorders and self-harm are among the top ten leading causes of adolescent health loss [2]. Understanding the magnitude and determinants of mental health problems among adolescents may inform initiatives to improve their health.

Survey research methods are often used to investigate the prevalence and incidence of mental health problems and associated risk factors and outcomes [3, 4, 5]. Prevalence estimates are based on responses from a sample of the target population. A major priority is to ensure that invited adolescents participate in the survey. In survey research, the response rate (also known as the completion rate or return rate) is a crucial metric: the number of individuals who completed the survey divided by the total number of people in the selected sample. For example, if 600 of 1,000 invited adolescents complete a survey, the response rate is 60 percent. Non-response reduces the sample size and statistical precision of the estimates and may also induce non-response bias [6, 7]. Consequently, survey response rate is often considered an indicator of the quality and representativeness of the obtained data [6, 8].

Non-response is a particular concern in surveys of adolescents, as this age group is hard to reach and motivate to participate in research. Furthermore, response rates for health-related surveys are declining [3, 5]. For example, the response rate for a repeated household survey conducted in the US dropped by 35 percentage points between 1971 and 2017 [9]. Similarly, response rates for the National Health and Nutrition Examination Survey (NHANES) dropped by 15 percentage points from 2011/2012 to 2017/2018 [10]. There is an increasing need for surveys to be designed and administered in ways that maximise response rates. Multiple published reviews [11, 12, 13] provide evidence of methods and strategies to increase response rates (primarily among adults). These point to several factors associated with increased response rates, including the use of monetary incentives, short questionnaires and notifying participants before sending questionnaires. However, none of these focuses specifically on adolescent samples. Survey characteristics may impact response rates differently in adult and adolescent samples due to age-specific attitudes. For example, adolescents may find web surveys more acceptable and appealing than telephone or postal surveys. Attitudes towards incentives or the topic of surveys (e.g., mental health) may also differ between adults and adolescents. Furthermore, surveys of adolescents are often conducted in classroom settings, which exert a strong contextual influence on response rates. Such contextual factors may moderate the effect of methods and strategies that have been shown to influence response rates among adults.

Features that boost response rates may also influence the mental health outcomes obtained. For example, web-based surveys may improve response rates due to the relative ease of participation when compared with in-person surveys. But they may also impact mental health scores, leading to higher or lower estimates of the prevalence of mental health problems. For example, this can occur because of reluctance to disclose mental health problems to an interviewer, or because web-surveys elicit careless responses. Some studies suggest that mental health indicators differ according to the mode of data collection [ 14 , 15 , 16 ]. Consequently, we need to know which strategies and methods improve adolescents' response rates to mental health surveys and how these might impact mental health scores.

Many factors may positively affect response rates in surveys, including how potential participants are approached and informed about the survey (e.g., pre-notifications), incentives (e.g., financial compensation), data collection mode (e.g., web-based vs. paper-and-pencil), survey measure composition and design (e.g., questionnaire length), using follow-up reminders, and practical issues such as time and location [ 11 , 16 ].

This review aims to identify effective methods and strategies to increase adolescents’ response rates (which may improve the quality of information gathered) to surveys that include questions about mental health, alcohol, and substance use. It also explores how different modes of survey delivery may impact on mental health scores. To accommodate recent trends in technological improvements and attitudes we focus on studies that have been published after 2007. By choosing 2007 we covered advances in technology since the development of the smart phone, and the literature after a previous review [ 13 ] whose search was completed in 2008. Furthermore, to provide the best quality evidence we focus on studies with randomised controlled designs.

Methods

This systematic review used the Cochrane approach to methodology reviews [17]. The full protocol was peer reviewed and is publicly available [18], but was not registered. The review is reported according to the PRISMA guidelines [19]. Amendments to the protocol can be found in Additional file 7: Appendix G.

Eligibility criteria

This review evaluates the effectiveness of survey methods, strategies, and incentives (hereafter “survey mode”) to improve adolescents’ response rates for surveys containing mental health, alcohol, and substance use questions. Adolescents were defined as those aged 12–19 years. It focuses on research conducted in a community setting published since 2007 (when smart phones were introduced). The outcome measures are:

Survey response rates: the percentage of individuals who returned a completed survey, by survey mode;

Mental health variation (i.e., self-reported prevalence) by survey mode. For example, depression scores or alcohol use rates reported for survey modes;

Participant variations (e.g., gender differences) in self-reported mental health scores by survey mode.

Additional file 1: Appendix A presents the review’s eligibility criteria and a glossary of definitions.

Search strategy

One information specialist (EH) developed the search strategy, and a second peer reviewed it using the six domains of the PRESS guidelines [ 20 ]. Following a pilot search in the Cochrane Central Database of Controlled Clinical Trials (Wiley), an adapted search strategy was run in APA PsycINFO (Ovid), MEDLINE (Ovid) and Scopus (Elsevier) on May 13, 2022. Backwards and forwards citation searching were undertaken with last searches undertaken on June 28, 2022. Full searches are presented in Additional file 2 : Appendix B.

Study selection

We deduplicated records in EndNote and screened records in EPPI Reviewer 4 [ 21 ]. Two reviewers (JB, JFME) independently piloted the screening, using machine learning functions in EPPI-Reviewer combined with human assessment (see Additional file 2 : Appendix B). Randomised controlled trials (RCTs) and non-randomised studies of interventions were screened first, and once we identified more than five (pre-specified) RCTs, screening for other study designs was stopped. The two reviewers screened titles and abstracts, and then each relevant full text, independently against the eligibility criteria. A third reviewer adjudicated disagreements. Figure  1 shows the search and screening, and Additional file 2 : Appendix B lists the excluded studies.

[Figure 1: PRISMA diagram for the study identification and selection]

For studies reported in several documents, all related documents were identified and grouped together to ensure participants were only counted once.

Data extraction

The two reviewers conducted double independent data extraction into Excel forms. A third reviewer adjudicated disagreements. We piloted data extraction on five studies (see Additional file 3 : Appendix C).

Risk of bias (quality assessment)

The two reviewers assessed studies’ risk of bias (RoB) independently using Cochrane’s RoB 2.0 [ 22 ]. Any financial and non-financial conflicts of interest reported in the studies were collected as a separate bias category outside of RoB 2.0 (see Additional file 3 : Appendix C).

Data synthesis

The protocol provides full details of the planned data synthesis [ 18 ]. We present a summary here.

We grouped studies by the type of survey modes. When two or more studies reported the same outcome and survey modes were deemed sufficiently homogeneous, we checked that the data direction permitted pooling. Where necessary to make the values meaningful, we arithmetically reversed scales. We included studies in the meta-analyses regardless of their RoB rating.

To assess statistical heterogeneity, we first checked our data for mistakes and then used the Chi² test (threshold P < 0.10) and the I² statistic following Cochrane Handbook recommendations [23]. In cases of considerable statistical heterogeneity (I² > 70%) we did not conduct meta-analysis. Where there was less heterogeneity (I² ≤ 70%), we performed random effects meta-analysis using Review Manager 5.4.1. We also assessed studies' clinical and methodological heterogeneity (participants, survey processes, outcomes, and other study characteristics) to determine whether meta-analysis was appropriate.

Where statistical pooling was not feasible, we followed the Synthesis Without Meta-analysis guideline to report the results narratively [ 24 ]. For dichotomous outcomes (e.g., response rates and adolescents’ self-reported alcohol use) we calculated odds ratios (ORs) and their 95% confidence intervals (CIs) to estimate between-mode differences. We used the default weighting technique (e.g., Mantel–Haenszel) for dichotomous outcomes in RevMan software. For continuous outcomes, we estimated the difference between survey modes using Mean Differences (MDs) or Standardized Mean Differences (SMDs) if the same outcome was measured with different questionnaires. The standard deviation was not modified [ 25 ]. We planned subgroup analyses and a GRADE assessment [ 18 ]. Amendments to the protocol are in Additional file 7 : Appendix G.
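To make the synthesis steps concrete, the sketch below shows how a between-mode odds ratio with its 95% CI, and the I² statistic across studies, can be computed. It is a minimal illustration using hypothetical response counts, not the review's actual RevMan workflow.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """OR and 95% CI for a 2x2 table: a/b are responders/non-responders
    in one survey mode, c/d in the comparator mode."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

def i_squared(log_ors, variances):
    """I^2 from Cochran's Q, using fixed-effect inverse-variance weights."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, log_ors)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, log_ors))
    df = len(log_ors) - 1
    return max(0.0, 100 * (q - df) / q) if q > 0 else 0.0

# Hypothetical counts (responders, non-responders) per mode for two studies
studies = [(559, 1438, 1600, 400), (900, 300, 850, 350)]
log_ors, variances = [], []
for a, b, c, d in studies:
    or_, lo, hi = odds_ratio_ci(a, b, c, d)
    log_ors.append(math.log(or_))
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)
    print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
print(f"I^2 = {i_squared(log_ors, variances):.0f}%")
```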

Search and screening results

Database searches retrieved 12,054 records. We removed 1,892 duplicates. EPPI-Reviewer 4 marked 6,841 records as ineligible (see Additional file 2: Appendix B). The team screened the titles and abstracts of 3,321 records and the full text of 48 documents, identifying ten eligible documents. Citation searches on the ten eligible documents retrieved a further 740 records, which yielded six eligible documents. We identified one further document from reference lists. In total, this review included 15 studies (17 documents). Additional file 2: Appendix B shows the excluded studies. We did not identify any studies in languages we could not translate.

Figure  1 shows the PRISMA diagram.

Details of included studies

Table 1 provides details of the included studies and Additional file 3: Appendix C shows the data extraction tables. The age distribution of participants in the studies varied, but most were aged 14 to 16 years. A smaller proportion of participants were aged < 14 years or > 16 years. The sex distribution in the studies was generally even, ranging from 32% [26] to 58% [27]. Studies were conducted in both rural and urban areas and included a range of national and racial/ethnic representation. Although most studies took place within school settings, four of them [26, 28, 29, 30] were conducted in non-school environments. All the studies involved community (i.e., non-clinical) samples, but we note that Pejtersen's study [26] focused on a group of vulnerable children and youth.

The fifteen studies investigated six comparisons:

Paper-and-pencil (PAPI) survey administration versus web administration ( n  = 9 in 10 documents)

Telephone interviews versus postal questionnaires ( n  = 2)

Active versus passive parental consent ( n  = 1)

Web first versus in-person first interviews ( n  = 1)

Vouchers versus no vouchers ( n  = 1 in 2 documents)

Internal supervision versus external supervision ( n  = 1)

Risk of bias

Overall, study authors provided little information on their research methods, resulting in several unclear domains that raised concerns about risk of bias. The main issues related to the randomisation process, measurement of the outcomes, and selective reporting of results. We classified three cluster RCTs [31, 32, 38, 40] and three parallel RCTs [26, 35, 37, 39] as high RoB. There were some concerns with nine parallel RCTs [14, 16, 27, 28, 29, 30, 33, 34, 36] (see Additional file 4: Appendix D). RoB for each study is presented below.

Results of the syntheses

This section presents the study results and the meta-analyses. Additional file 6: Appendix F contains additional forest plots. We describe the results narratively without prioritization or hierarchy. We did not contact study authors for missing/additional data. Caution is advised when interpreting the meta-analyses because of the studies' quality/RoB and imprecision.

The considerable statistical heterogeneity (I² > 70%) in the data for the two largest comparisons (1 and 2) precluded a meta-analysis of response rates. The studies showed divergent effect estimates, which may be explained by their different outcome measures. There were differences inherent to the study designs, with cluster RCTs adjusting for clustering. There were also important differences in the survey implementation procedures, including different interfaces, skipped questions, confidentiality measures, and different degrees of supervision. Ignoring these considerations would have produced pooled analyses prone to misleading inferences.

Comparison 1: paper-and-pencil versus web-based administration mode

Nine studies (ten documents) compared PAPI surveys to web-based surveys [ 14 , 16 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 ]. The studies included one cluster RCT with high RoB, three RCTs with high RoB and five RCTs with RoB concerns.

Response rate

Five studies reported response rates [16, 30, 31, 32, 34, 37]. Three studies reported between-group differences [30, 31, 32, 34], but because of considerable heterogeneity (I² > 90%) we present the effect estimates for each study separately (Fig. 2). Van de Looij-Jansen [16] reported a narrative summary rather than outcome data. Trapl [37] reported a 100% response rate.

Figure 2: Odds ratios for various survey delivery mode comparisons: Adolescents' response rates (results not pooled)

Denniston [31] reported a cluster RCT in two documents [31, 32] and accounted for clustering in the analyses; we therefore did not apply a design effect adjustment [41]. The odds of response were nearly 80% lower for the web mode overall compared with PAPI (OR 0.22, 95% CI 0.19 to 0.26; n = 7747). Participants could skip questions in some of the modes ("with skip patterns"). Treated as an independent intervention arm, the "on your own" web group without skip patterns had the lowest response rate (28%; 559/1997) of the web formats (the others being in-class web with and without skips) and markedly lower odds of response relative to PAPI (OR 0.04, 95% CI 0.03 to 0.04). This arm's low response pulls down the pooled rate across the web survey modes: the pooled response rate for the two in-class web modes (with and without skips) was 90.7%, no different from the PAPI response rate (OR 0.94, 95% CI 0.78 to 1.14; n = 5750).

Mauz [ 30 ] explored three survey modes that we combined into an “overall web mode”. Each mode included varying proportions of participants receiving PAPI surveys or web surveys (see Table 1 ), but separate data for web participants were not reported. The odds of response decreased by nearly 70% when using PAPI compared with a web mode (OR 0.29, 95% CI 0.23 to 0.38; n  = 1195) [ 30 ].

Miech [ 34 ] found evidence of no effect on response rates for PAPI compared with web mode (electronic tablets) (OR 1.03, 95% CI 0.97 to 1.08; n  = 41,514).

Van de Looij-Jansen [ 16 ] reported an overall response rate of 90%, with no difference between PAPI or web modes (data not reported) and Trapl [ 37 ] reported 100% response rate.

Mental health variation by mode of survey delivery

Nine studies (ten documents) reported between-modes variations in point estimates for various mental health and substance use scores at the time of survey completion [ 14 , 16 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 ].

Two studies (considerable heterogeneity: I² = 82%) of Dutch adolescents from secondary schools in rural and urban areas reported between-mode variations in adolescents' mental health scores (Fig. 3) [16, 35]. Raat [35] reported that, for the mental health subscale of the Child Health Questionnaire (CHQ-CF), PAPI mode participants had slightly lower scores than web users (MD -1.90, 95% CI -3.84 to 0.04; n = 933). Conversely, van de Looij-Jansen [16] reported no between-mode variations in self-reported total scores on the Strengths and Difficulties Questionnaire (SDQ); boys tended to report better mental health scores when completing surveys using PAPI than the web (MD 1.0, 95% CI -0.10 to 2.10; n = 279).

Figure 3: Mean differences for paper-and-pencil versus web administration survey delivery modes: Adolescents' self-reported mental health

Two studies estimated between-mode variations in adolescents' self-reported psychological wellbeing scores [16, 30]. Mauz [30] reported the number of adolescents experiencing favourable psychological wellbeing, expressed as t values, using the KIDSCREEN (the Health-Related Quality of Life Questionnaire for Children and Adolescents aged 8 to 18 years; kidscreen.org). The narrative findings indicated that psychological wellbeing was the same for both PAPI and web-based questionnaire modes (PAPI 50.5% vs web 49.3%; n = 1194; P = 0.07, Bonferroni-adjusted). Similarly, van de Looij-Jansen [16] reported no between-mode variations in mean scores of adolescents' self-reported psychological wellbeing obtained from nine items about feelings and moods from the CHQ-CF (MD pooled for boys and girls -0.97, 95% CI -3.21 to 1.28; n = 531) (Fig. 4).

Figure 4: Mean differences for paper-and-pencil versus web administration survey delivery modes: Adolescents' psychological wellbeing (nine items about feelings and moods derived from the CHQ-CF)

Denniston [ 31 ] found evidence of no between-mode estimate variations for adolescents’ self-reported sadness (OR 1.02, 95% CI 0.90 to 1.15; n  = 5786) or suicide attempts (OR 1.01, 95% CI 0.83 to 1.24; n  = 5786) measured using the Youth Risk Behavior Surveys [ 31 , 32 ].

Hamann [ 33 ] found evidence of no between-mode estimate variations for adolescents’ self-reported anxiety (MD 1.65, 95% CI -5.18 to 8.48; n  = 56) or depression (MD 0.78, 95% CI -1.54 to 3.10; n  = 56) measured using the Spence Children’s Anxiety Scale (SCAS) and the German version of the Children’s Depression Inventory (CDI) [ 33 ].

Six studies (seven documents) reported adolescents' self-reported lifetime alcohol use [14, 30, 31, 32, 34, 36, 37]. Lygidakis [14] reported on adolescents who said they "have been drunk", and we therefore did not pool this study with studies of lifetime use. In Lygidakis [14], estimates of self-reported alcohol use were 11% lower in the PAPI group compared with the web survey group (OR 0.89, 95% CI 0.79 to 1.00; n = 190). A pooled analysis of five studies [30, 31, 32, 34, 36, 37] suggested that the odds of lifetime alcohol use were 13% higher among adolescents completing the web survey compared with those using PAPI (OR 1.13, 95% CI 1.00 to 1.28; n = 49,554); substantial heterogeneity was observed (I² = 59%) (Fig. 5).

Figure 5: Odds ratios for paper-and-pencil versus web administration of surveys: Adolescents' self-reported lifetime alcohol use

A pooled analysis of two studies, Denniston [ 31 ] and Trapl [ 37 ], showed evidence of no between-mode estimate variations for adolescents’ self-reported marijuana use (OR 1.05, 95% CI 0.93 to 1.18; n  = 6,061) (Fig.  6 ).

Figure 6: Pooled estimate variations for paper-and-pencil versus web administration of surveys: Adolescents' self-reported lifetime marijuana use

Participant variation by mode of survey delivery

Gender was the only participant characteristic for which the included studies reported disaggregated data. We calculated estimate variations by gender within studies rather than between survey mode comparisons.

In van de Looij-Jansen [16], boys tended to report better scores than girls for total mental health, emotional symptoms, and psychological wellbeing. The largest and most precise difference was for emotional symptoms (pooled MD for both survey modes -1.31, 95% CI -1.64 to -0.98; n = 531), whereas the mental health total scores reported with the PAPI version of the SDQ were the least precise (MD -0.30, 95% CI -1.54 to 0.94; n = 261). The absence of statistical heterogeneity in the results for emotional symptoms and psychological wellbeing suggests that boys reported better scores than girls regardless of survey mode (Fig. 7).

Figure 7: Mean difference by gender for paper-and-pencil and web administration of surveys: Adolescents' self-reported mental health outcomes

In Raghupathy [36], the odds of reporting lifetime alcohol use were more than 50% higher among girls (OR 1.61, 95% CI 0.99 to 2.62; n = 339), although the gender estimates were less precise when comparing the PAPI and web modes (Fig. 8).

Figure 8: Odds ratios for gender variations for paper-and-pencil and web administration of surveys: Adolescents' self-reported lifetime alcohol use

Comparison 2: telephone interview vs postal questionnaires

Two studies reported outcome data for this comparison ( n  = 2322) [ 28 , 29 ]. Trained interviewers performed the telephone interviews in both studies. Interviewers in Erhart [ 29 ] used computer-assisted telephone interviews whereas in Wettergren [ 28 ] interviewers were trained to read the questions aloud and record participants’ answers. There were concerns for RoB for both studies.

We did not pool the response rates due to considerable heterogeneity (I² > 90%); the studies are presented separately [28, 29]. The studies reported opposing results (Fig. 2). Erhart [29] reported a 41% completion rate for telephone interviews compared with 46% for postal questionnaires (OR 0.82, 95% CI 0.68 to 1.00; n = 1,737), whereas Wettergren [28] reported a response rate of 77% for telephone interviews and 64% for postal questionnaires (OR 1.89, 95% CI 1.32 to 2.72; n = 585).

The studies evaluated the effect of differences in survey mode on estimate variations of adolescents' self-reported mental health measured by the SDQ total score [29] and the mental health component of the RAND 36-Item Short Form Health Survey (SF-36) [28]. We converted the data in Wettergren [28] to a zero-to-10 scale to obtain a more homogeneous pooled analysis. In the meta-analysis, adolescents reported 1.06 points better mental health when a telephone interview was used (MD 1.06, 95% CI 0.81 to 1.30; n = 1,609) (Fig. 9).

Figure 9: Pooled mean difference for survey delivery by telephone interview versus postal questionnaires: Adolescents' self-reported mental health
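The conversion applied to Wettergren's data is a simple linear rescaling: mapping a score from its original range onto 0–10 divides both the mean and the standard deviation by the same factor, so pooled mean differences become comparable across instruments. A minimal sketch (the 0–100 input range is assumed here for illustration):

```python
def rescale(mean, sd, old_min, old_max, new_min=0.0, new_max=10.0):
    """Linearly map a score from [old_min, old_max] to [new_min, new_max]."""
    factor = (new_max - new_min) / (old_max - old_min)
    return new_min + (mean - old_min) * factor, sd * factor

# e.g., a 0-100 scale score with mean 68 and SD 18 becomes mean 6.8, SD 1.8
print(rescale(68.0, 18.0, 0, 100))
```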

Wettergren [ 28 ] found evidence of no estimate variation for adolescents’ self-reported anxiety (MD -0.60, 95% CI -1.21 to 0.01; n  = 580) and a small estimate variation for self-reported depression on the Hospital Anxiety and Depression Scale (HADS) favouring telephone interviews relative to postal questionnaires (MD -0.50, 95% CI -0.94 to -0.06; n  = 585).

Wettergren [28] reported participants' gender differences in self-reported estimate variations of mental health (SF-36) alongside anxiety and depression (both measured with the HADS). Boys tended to report better mental health (SF-36) and anxiety (HADS) scores than girls, with the largest gender difference in anxiety (MD -1.85, 95% CI -2.42 to -1.28; n = 585) [28]. Postal questionnaires seemed to result in a larger gender difference in self-reported mental health scores than telephone interviews (I² = 53%). No differences between survey modes were observed for anxiety scores (I² = 0%). Boys and girls reported similar depression scores (MD -0.07, 95% CI -0.49 to 0.35; I² = 0%) for both survey modes (Fig. 10).

Figure 10: Pooled mean differences by gender for survey delivery by post and telephone: Adolescents' self-reported mental health

Comparison 3: active vs passive parental consent

One cluster RCT compared schools randomised into groups where adolescents required active parental consent to undertake the survey or where passive parental consent was accepted [ 38 ]. The study had high RoB.

District schools assigned to passive parental consent achieved a response rate of 79% compared to 29% achieved by schools assigned to active consent mode ( p  = 0.001, number of participants per mode not reported) [ 38 ].

Courser [ 38 ] did not report any mental health variation or participant variations by survey mode.

Comparison 4: web first vs in-person first survey versions

One RCT [27] investigated the order of survey delivery. One group of students was offered an in-person survey, with a web follow-up in case of non-response; a second group was asked to complete a web survey first, with an in-person survey in case of non-response. There were some concerns over the study's RoB.

McMorris [ 27 ] found evidence of no difference in response rates between adolescents completing a web survey first or an in-person survey first (OR 0.57, 95% CI 0.24 to 1.31; n  = 386) (Fig.  2 ).

McMorris [ 27 ] found evidence of no difference on adolescents’ self-reported lifetime alcohol use (OR 0.84, 95% CI 0.55 to 1.27; n  = 359) or lifetime marijuana use (OR 0.65, 95% CI 0.41 to 1.01; n  = 359) between the two survey modes. McMorris [ 27 ] did not report on participant variations by survey mode.

Comparison 5: voucher vs no voucher

One RCT [ 26 ] (reported in two documents) investigated whether an unconditional monetary incentive (a supermarket voucher) increases the response rate among vulnerable children and youths receiving a postal questionnaire [ 26 , 39 ]. The study was classified as high RoB.

Pejtersen [ 26 ] found that the monetary incentive yielded a response rate of 76% versus 43% without the incentive (OR 4.11, 95% CI 2.43 to 6.97; n  = 262) (Fig.  2 ).

The study also found that offering a voucher made no difference to adolescents’ self-reported emotional symptoms compared with no voucher (MD -0.70, 95% CI -1.58 to 0.18; n  = 156) measured using the emotional symptoms subscale of the SDQ [ 26 , 39 ]. Pejtersen [ 26 ] did not report on participant variations by survey mode.

Comparison 6: internal versus external supervision

One Swiss cluster RCT evaluated the effect of external supervision (by a senior student or researcher) compared to internal supervision (by the teacher) when students completed online questionnaires [40]. The study was classified as high RoB.

Walser [ 40 ] only reported outcomes relevant to mental health variations, finding evidence of no variations in adolescents’ self-reported lifetime alcohol use according to the survey mode (OR 1.08, 95% CI 0.79 to 1.47; n  = 1,197).

Subgroup and sensitivity analyses

There were too few studies, and no quasi-RCTs, to complete the planned subgroup and sensitivity analyses.

Reporting bias assessment

We could not assess reporting biases because too few studies (fewer than 10) were available for each comparison [23].

Certainty assessment

We opted not to perform a GRADE assessment due to the limited quantity of studies for each comparison under consideration and the mixed quality of studies.

Discussion

This review identified fifteen RCTs that investigated six different comparisons among adolescents. Although the included studies were of mixed quality, several effective methods and strategies to improve adolescents' response rates to mental health surveys were identified. Findings show that response rates varied with survey mode, consent type, and incentives.

Comparisons of web versus PAPI mode yielded discrepant findings that must be interpreted in relation to the survey delivery context. One study showed that postal invitations to a web survey were associated with higher response rates than the PAPI mode [30], possibly due to the additional effort required to return the completed PAPI survey by post. In contrast, there were no significant differences in response rates between web and PAPI modes conducted in classrooms during school hours [16, 31, 32, 34]. However, one study showed that inviting adolescents to complete a web survey on their own (at home within 2–3 weeks following the invitation) dramatically decreased response rates compared with completing PAPI or web surveys at school (28% vs. ~90%) [31, 32]. These findings show that response rates may vary according to both delivery mode and context. A previous meta-analysis showed that web surveys yield lower response rates (on average 12 percentage points) than other modes [12]; however, that review did not focus specifically on adolescents. More studies are needed to determine whether response rates among adolescents differ between web and PAPI surveys delivered outside school.

Conflicting evidence was found for telephone interview surveys compared with postal PAPI surveys. One study found significantly higher response rates (77% vs. 64%) for telephone interview surveys [28], while another found significantly but marginally (46% vs. 41%) higher response rates for postal PAPI surveys [29]. The reasons for these opposing findings are unclear, but contextual factors may play a role, such as the age of the studies (both conducted before 2010), which may reflect time-related differences in attitudes towards telephone interviews and postal PAPI surveys. One study [27] found that response rates did not differ significantly between a web survey with in-person follow-up for non-responders and an in-person interview with web follow-up for non-responders. Administering a web survey first is a cost-saving approach that is unlikely to adversely impact adolescents' response rates.

One study showed that an unconditional monetary incentive (a voucher) increased response rates by 33 percentage points [26], supporting a prior review on postal surveys [42]. Interestingly, the evidence favours monetary incentives that are unconditional on response over similar incentives conditional on response for improving response rates [11, 42]. In contrast, a recent meta-analysis [12] concluded that incentives had no effect on response rates in web surveys. These discrepant findings may indicate that incentives matter less for response rates in web surveys than in other modes. Our review also identified one study showing that passive parental consent achieved more than double the response rate of active consent (79% vs. 29%) [38]. A prior meta-analysis found similar evidence in favour of passive parental consent [43]. If ethical and data protection considerations permit, using passive parental consent may boost response rates substantially.

Survey mode influenced mental health scores in certain comparisons. We found no evidence of an effect on self-reported mental health scores (across a range of measures) between PAPI and web surveys [16, 30, 31, 32, 34, 35, 36, 37]. However, our pooled analysis of lifetime alcohol use showed 13% higher reported use when a web mode was used compared with a PAPI mode. This could possibly be attributed to differential response rates, for example if heavy drinkers are less likely to respond to a PAPI survey than to a web survey. In contrast, two studies indicated that lifetime marijuana use did not differ between web and PAPI survey modes [31, 32, 37]. The reasons for such differences are unclear and should be further researched.

Telephone interviews compared with postal PAPI surveys were associated with slightly better mental health scores [28, 29]. These differences were quite small and probably of limited practical significance [28]. Nonetheless, survey designers should be aware that adolescents may report fewer mental health problems in telephone interviews. Such findings may be due to differential response rates, as already mentioned, for example if those with mental health problems are less likely to respond to telephone surveys than to PAPI surveys. Another reason may be that adolescents are less willing to report such problems directly to another person; the added anonymity of non-telephone surveys may encourage adolescents to provide more genuine responses to sensitive questions concerning their mental health. A study that compared supervision by either teachers or researchers during an in-class web survey [40] found no significant differences in mental health scores, which suggests that the choice of supervision personnel does not impact responses.

There was little evidence that survey characteristics affected mental health scores differently by gender. While several studies highlighted that males report better mental health than females [16, 28], there was no indication that specific survey modes impacted males' and females' mental health scores differentially (i.e., no interaction effect). Many studies did not report mental health scores separately for males and females.

Our review complements earlier reviews of factors that influence response rates [11, 12, 42, 43, 44]. Together, these reviews provide useful information on how to design surveys to maximise response rates, although the extent to which their findings are generalizable to adolescents in recent decades is unclear. Our own review shows that relatively few studies have focused specifically on adolescents. Nevertheless, many of our findings are in line with those outlined in previous reviews. One outstanding question is whether web surveys also yield lower response rates than other modes for adolescents. The studies included in our review highlight the need to consider contextual factors when comparing response rates between surveys; for example, survey mode may have less impact on response rates in classroom settings. Our findings highlight the need for more studies to provide high-quality evidence of methods and strategies to ensure adequate response rates in mental health surveys of adolescents. This is particularly important given the present worldwide focus on adolescent mental health and the decreasing response rates in surveys.

Strengths and limitations

Although we found relevant RCTs, they were of insufficient quality to draw firm conclusions. The studies in some comparisons showed considerable heterogeneity, and meta-analysis was not feasible for most comparisons. For several comparisons, only one or two studies were available. In RCTs where one survey mode was superior to another, the results need to be confirmed with better conducted (and/or reported) studies.

The studies differed in ways that reduce their comparability and the generalisability and strength of our findings. Various questionnaires were used, differing greatly in content, length, and appearance. Questionnaires were managed in different ways: for example, some used skips to ensure confidentiality, and some did not permit the questions to be read aloud during interviews. Different methods were used to deliver questionnaires: postal, in the classroom, or sent to parents. The studies investigated a mix of outcomes using a range of tools, with study-specific adaptations in some cases.

The median publication year of the studies is 2010. In a world of high internet and smartphone usage, the inclusion of these older RCTs may weaken the applicability of their findings.

Key strengths of this review include the team’s expertise in synthesis methods, topic area, information retrieval, and machine learning. We identified a substantial number of RCTs in adolescent populations, some with many participants, using an extensive search in databases augmented by forwards and backwards citation searching.

Although it is not common practice to include outcomes in literature searches for reviews of intervention effects [45], given the challenges of searching for this review topic, we considered it necessary to reduce the screening burden by including the concept of outcomes in our search. This approach may have lowered the search sensitivity where authors did not mention outcomes of interest in the abstract [46] and may also have introduced publication bias, because outcomes with positive results might be more likely to be reported in the abstract than negative results [47]. Our citation searches should have mitigated both issues somewhat, since they rely on publications citing each other rather than containing specific words.

The review used machine learning for study selection, reducing the screening workload by 95%. Our experience confirms the widely documented potential of automated and semi-automated methods to improve systematic review efficiency [48, 49]. The workload savings enabled us to spend more time in discussions with content experts.

The review results are affected by statistical heterogeneity in the analyses, which may be due to methodological and clinical heterogeneity in the variables, as well as the large variability in the design and conduct of the studies. There were not enough studies to explore heterogeneity using subgroup and sensitivity analyses, nor to test for publication bias. In many instances, results come from a single study, which greatly reduces the applicability of the findings considering none of the studies had low RoB.

We limited eligible studies to those undertaken in high-income countries; as a result, we cannot generalize our findings to low- or middle-income countries. The body of evidence comes from nationwide surveys in schools in the USA and Europe.

Implications for research

There is a need for more evidence on how best to identify records which report research into modes of data collection.

Some of the analyses showed unexpected results that might merit further research. These include lifetime alcohol use being higher when a web survey was used compared with PAPI, although there was no difference for lifetime marijuana use. The evidence of differences in reported mental health for telephone compared with postal surveys also merits further investigation. Whether, and in what situations, web surveys yield poorer response rates than other modes among adolescents should also be investigated in future studies.

The absence of research evidence on the impact of survey mode on mental health scores by gender or other demographic characteristics suggests this area could merit research.

There is a need for research that could better reflect the current situation where adolescents’ use of the internet and smart phones is widespread.

Implications for practice

Survey designers must balance practical concerns against the sampling, non-response, and measurement error associated with specific design features. This review, and others, highlight methods and strategies that may improve survey response rates among adolescents with minimal impact on the assessment of mental health status [11, 12, 42]. Given the poor reporting in the included studies, authors should be encouraged to register their trials and make their protocols publicly available. Authors and journal editors should follow the CONSORT reporting guidelines [50].

Despite the absence of low-RoB studies, the small number of studies for some comparisons, and the focus on research undertaken in high-income countries, there are methods and strategies that could be considered for improving survey response rates among adolescents being surveyed about mental health and substance use. For example, the use of monetary incentives may lead to higher response rates. Our review shows that survey mode has limited impact on response rates in surveys delivered in school settings. Outside school settings, web surveys may be superior to other modes, but more research is needed to determine this. More studies using controlled designs are needed to further identify effective methods and strategies to ensure adequate response rates among adolescents. Some studies indicate that mental health scores may differ between certain survey modes. Finally, there was limited evidence on differences between gender and survey characteristics on mental health scores.

Availability of data and materials

The templates for data collection, the extracted data and the data used for all of the analyses are available from the main author upon reasonable request.

Abbreviations

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: Randomised controlled trial
SMD: Standardized mean difference
GRADE: Grading of Recommendations, Assessment, Development, and Evaluations
MD: Mean difference
PAPI: Paper-and-pencil
CHQ: Child Health Questionnaire
CDI: Children's Depression Inventory
SCAS: Spence Children's Anxiety Scale
SF-36: RAND 36-Item Short Form Health Survey
HADS: Hospital Anxiety and Depression Scale

References

1. World Health Organization. Adolescent mental health [web page]. Geneva: World Health Organization; 2021 [updated 17.11.2021]. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-mental-health.
2. Vos T, Lim SS, Abbafati C, Abbas KM, Abbasi M, Abbasifard M, et al. Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396(10258):1204–22.
3. Jackson SL. Research Methods and Statistics: A Critical Approach. 4th ed. Andover: Cengage Learning; 2011.
4. Kalb LG, Cohen C, Lehmann H, Law P. Survey non-response in an internet-mediated, longitudinal autism research study. J Am Med Inform Assoc. 2012;19(4):668–73.
5. Lallukka T, Pietiläinen O, Jäppinen S, Laaksonen M, Lahti J, Rahkonen O. Factors associated with health survey response among young employees: a register-based study using online, mailed and telephone interview data collection methods. BMC Public Health. 2020;20(1):184.
6. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin Q. 2008;72(2):167–89.
7. Volken T. Second-stage non-response in the Swiss health survey: determinants and bias in outcomes. BMC Public Health. 2013;13(1):167.
8. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. JAMA. 2012;307(17):1805–6.
9. Stedman RC, Connelly NA, Heberlein TA, Decker DJ, Allred SB. The end of the (research) world as we know it? Understanding and coping with declining response rates to mail surveys. Soc Nat Resour. 2019;32(10):1139–54.
10. McQuillan G, Kruszon-Moran D, Di H, Schaar D, Lukacs S, Fakhouri T, et al. Assessing consent for and response to health survey components in an era of falling response rates: National Health and Nutrition Examination Survey, 2011–2018. Survey Research Methods. 2021;15(3):257–68.
11. Fan W, Yan Z. Factors affecting response rates of the web survey: a systematic review. Comput Hum Behav. 2010;26(2):132–9.
12. Daikeler J, Bošnjak M, Lozar MK. Web versus other survey modes: an updated and extended meta-analysis comparing response rates. J Surv Stat Methodol. 2020;8(3):513–39.
13. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;2009(3):MR000008.
14. Lygidakis C, Rigon S, Cambiaso S, Bottoli E, Cuozzo F, Bonetti S, et al. A web-based versus paper questionnaire on alcohol and tobacco in adolescents. Telemed J E Health. 2010;16(9):925–30.
15. Townsend L, Kobak K, Kearney C, Milham M, Andreotti C, Escalera J, et al. Development of three web-based computerized versions of the Kiddie Schedule for Affective Disorders and Schizophrenia child psychiatric diagnostic interview: preliminary validity data. J Am Acad Child Adolesc Psychiatry. 2020;59(2):309–25.
16. Van De Looij-Jansen PM, De Wilde EJ. Comparison of web-based versus paper-and-pencil self-administered questionnaire: effects on health indicators in Dutch adolescents. Health Serv Res. 2008;43(5 Pt 1):1708–21.
17. Clarke M, Oxman AD, Paulsen E, Higgins JPT, Green S. Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). Cochrane Collaboration; 2011.
18. Bidonde MJ, Bang L, Brunborg GS, Hafstad EV, Meneses Echavez JF. Methods, strategies and incentives to increase response to questionnaires and surveys among adolescents: protocol for a methodological systematic review [project description]. Oslo: Norwegian Institute of Public Health; 2022.
19. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
20. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.
21. Thomas J, Brunton J, Graziosi S. EPPI-Reviewer 4.0: software for research synthesis. London, UK: Social Science Research Unit, Institute of Education, University of London; 2010.
22. Sterne JAC, Savovic J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898.
23. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane Handbook for Systematic Reviews of Interventions. Chichester: Wiley; 2019.
24. Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368:l6890.
25. Higgins JPT, Deeks JJ. Chapter 7: Selecting studies and collecting data. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). Cochrane Collaboration; 2011.
26. Pejtersen JH. The effect of monetary incentive on survey response for vulnerable children and youths: a randomized controlled trial. PLoS One. 2020;15(5):e0233025.
27. McMorris BJ, Petrie RS, Catalano RF, Fleming CB, Haggerty KP, Abbott RD. Use of web and in-person survey modes to gather data from young adults on sex and drug use: an evaluation of cost, time, and survey error based on a randomized mixed-mode design. Eval Rev. 2009;33(2):138–58.
28. Wettergren L, Mattsson E, von Essen L. Mode of administration only has a small effect on data quality and self-reported health status and emotional distress among Swedish adolescents and young adults. J Clin Nurs. 2011;20(11–12):1568–77.
29. Erhart M, Wetzel RM, Krugel A, Ravens-Sieberer U. Effects of phone versus mail survey methods on the measurement of health-related quality of life and emotional and behavioural problems in adolescents. BMC Public Health. 2009;9:491.
30. Mauz E, Hoffmann R, Houben R, Krause L, Kamtsiuris P, Goswald A. Mode equivalence of health indicators between data collection modes and mixed-mode survey designs in population-based health interview surveys for children and adolescents: methodological study. J Med Internet Res. 2018;20(3):e64.
31. Denniston MM, Brener ND, Kann L, Eaton DK, McManus T, Kyle TM, et al. Comparison of paper-and-pencil versus web administration of the Youth Risk Behavior Survey (YRBS): participation, data quality, and perceived privacy and anonymity. Comput Hum Behav. 2010;26(5):1054–60.
32. Eaton DK, Brener ND, Kann L, Denniston MM, McManus T, Kyle TM, et al. Comparison of paper-and-pencil versus web administration of the Youth Risk Behavior Survey (YRBS): risk behavior prevalence estimates. Eval Rev. 2010;34(2):137–53.
33. Hamann C, Schultze-Lutter F, Tarokh L. Web-based assessment of mental well-being in early adolescence: a reliability study. J Med Internet Res. 2016;18(6):e138.
34. Miech RA, Couper MP, Heeringa SG, Patrick ME. The impact of survey mode on US national estimates of adolescent drug prevalence: results from a randomized controlled study. Addiction. 2021;116(5):1144–51.
35. Raat H, Mangunkusumo RT, Landgraf JM, Kloek G, Brug J. Feasibility, reliability, and validity of adolescent health status measurement by the Child Health Questionnaire Child Form (CHQ-CF): internet administration compared with the standard paper version. Qual Life Res. 2007;16(4):675–85.
36. Raghupathy S, Hahn-Smith S. The effect of survey mode on high school risk behavior data: a comparison between web and paper-based surveys. Curr Issues Educ. 2013;16(2):1–11.
37. Trapl ES. Understanding adolescent survey responses: impact of mode and other characteristics on data outcomes and quality [doctoral dissertation, Case Western Reserve University]. OhioLINK Electronic Theses and Dissertations Center; 2007.
38. Courser MW, Shamblen SR, Lavrakas PJ, Collins D, Ditterline P. The impact of active consent procedures on nonresponse and nonresponse error in youth survey data: evidence from a new experiment. Eval Rev. 2009;33(4):370–95.
39. National Library of Medicine. The effect of monetary incentive on survey response for vulnerable children and youth [NCT01741675]. Bethesda, MD: National Library of Medicine; 2014. Available from: https://clinicaltrials.gov/ct2/show/NCT01741675.
40. Walser S, Killias M. Who should supervise students during self-report interviews? A controlled experiment on response behavior in online questionnaires. J Exp Criminol. 2012;8(1):17–28.
41. Higgins JPT, Eldridge S, Li T. Chapter 23: Including variants on randomized trials. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane; 2022.
42. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324(7347):1183.
43. Liu C, Cox RB Jr, Washburn IJ, Croff JM, Crethar HC. The effects of requiring parental consent for research on adolescents' risk behaviors: a meta-analysis. J Adolesc Health. 2017;61(1):45–52.
44. Daikeler J, Silber H, Bošnjak M. A meta-analysis of how country-level factors affect web survey response rates. Int J Mark Res. 2021;64(3):306–33.
45. Lefebvre C, Glanville J, Briscoe S, Featherstone R, Littlewood A, Marshall C, et al. Chapter 4: Searching for and selecting studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane; 2022.
46. Frandsen TF, Bruun Nielsen MF, Lindhardt CL, Eriksen MB. Using the full PICO model as a search tool for systematic reviews resulted in lower recall for some PICO elements. J Clin Epidemiol. 2020;127:69–75.
47. Duyx B, Swaen GMH, Urlings MJE, Bouter LM, Zeegers MP. The strong focus on positive results in abstracts may cause bias in systematic reviews: a case study on abstract reporting bias. Syst Rev. 2019;8(1):174.
48. Miwa M, Thomas J, O'Mara-Eves A, Ananiadou S. Reducing systematic review workload through certainty-based screening. J Biomed Inform. 2014;51:242–53.
49. van de Schoot R, de Bruin J, Schram R, Zahedi P, de Boer J, Weijdema F, et al. An open source machine learning framework for efficient and transparent systematic reviews. Nat Mach Intell. 2021;3(2):125–33.
50. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.


Acknowledgements

The Department of Child Health and Development and the Division of Health Services within the Norwegian Institute of Public Health funded this project. We thank our colleagues Dr. Simon Lewin and Dr. Chris Ross for their time and Ingvild Kirkehei for reviewing the search strategy.

Funding

Open access funding provided by the Norwegian Institute of Public Health (FHI). The authors report no external funding sources.

Author information

Authors and affiliations

Division of Health Services, Norwegian Institute of Public Health, Oslo, Norway

Julia Bidonde, Jose F. Meneses-Echavez & Elisabet Hafstad

Facultad de Cultura Física, Deporte, y Recreación, Universidad Santo Tomás, Bogotá, Colombia

Jose F. Meneses-Echavez

Department of Child Health and Development, Norwegian Institute of Public Health, Oslo, Norway

Geir Scott Brunborg & Lasse Bang

Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden

Geir Scott Brunborg


Contributions

L.B: Conceptualization (equal); Formal Analysis (equal); Writing – Original Draft Preparation (equal); Writing – Review & Editing (equal). J.B: Conceptualization (lead); Data Curation (lead); Formal Analysis (lead); Investigation (lead); Methodology (lead); Project Administration (lead); Supervision (lead); Validation (lead); Visualization (equal); Writing – Original Draft Preparation (lead); Writing – Review & Editing (lead). G.S.B: Conceptualization (equal); Formal Analysis (equal); Writing – Original Draft Preparation (equal); Writing – Review & Editing (equal). E.H: Conceptualization (equal); Investigation (equal); Methodology (equal); Writing – Original Draft Preparation (equal); Writing – Review & Editing (equal). J.F.M-E: Conceptualization (equal); Data Curation (equal); Formal Analysis (equal); Investigation (equal); Methodology (equal); Validation (equal); Visualization (lead); Writing – Original Draft Preparation (equal); Writing – Review & Editing (equal). All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lasse Bang.

Ethics declarations

Ethics approval and consent to participate

This study pooled anonymized data from existing studies. No ethical approval was therefore required for the present study. The original studies that collected the data acquired ethical approvals and consents from participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Eligibility criteria and Glossary.

Additional file 2.

Search strategies and lists of excluded studies.

Additional file 3.

Detailed data extraction for the included studies.

Additional file 4.

Risk of bias assessment.

Additional file 5.

PRISMA checklist.

Additional file 6.

Additional Forest plots.

Additional file 7.

Protocol changes.


About this article


Bidonde, J., Meneses-Echavez, J.F., Hafstad, E. et al. Methods, strategies, and incentives to increase response to mental health surveys among adolescents: a systematic review. BMC Med Res Methodol 23, 270 (2023). https://doi.org/10.1186/s12874-023-02096-z


Received: 04 April 2023

Accepted: 06 November 2023

Published: 16 November 2023



Keywords

  • Adolescents
  • Mental health
  • Surveys and questionnaires
  • Systematic review



Written by Anja Kilibarda, Ph.D.

November 21, 2023 at 1:00 pm ET


Van Westendorp Price Sensitivity Analysis is currently one of the most common approaches to understanding price preferences in market research. Survey respondents are asked to report the prices at which they would feel a given product is (1) so cheap that they would doubt its quality, (2) a good buy for the money, (3) getting expensive but not out of the question, and (4) too expensive; the resulting data are then used to develop estimates of 'acceptable' and 'optimal' price ranges for the product. Though easy to implement, Van Westendorp has serious drawbacks in terms of the assumptions it requires and the kinds of inferences it can support. To circumvent these limitations, we propose a framework that leverages conjoint experimentation to deliver more rigorous pricing insights with fewer assumptions.
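For readers unfamiliar with the mechanics, the sketch below shows how the standard VW cumulative curves and their intersections are typically computed. The synthetic responses and the grid-based crossing search are purely illustrative, not our production code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, grid = 2000, np.arange(10, 151)

# One reported price per respondent for each of the four VW questions
too_cheap = rng.normal(25, 8, n)
bargain = rng.normal(33, 8, n)
expensive = rng.normal(42, 10, n)
too_expensive = rng.normal(55, 12, n)

# Cumulative curves: the share calling a grid price 'too cheap' or 'a bargain'
# falls as price rises; the share calling it '(too) expensive' rises.
pct_too_cheap = np.array([(too_cheap >= p).mean() for p in grid])
pct_bargain = np.array([(bargain >= p).mean() for p in grid])
pct_expensive = np.array([(expensive <= p).mean() for p in grid])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in grid])

def crossing(curve_a, curve_b):
    """Grid price at which two monotone curves come closest (their crossing)."""
    return grid[np.argmin(np.abs(curve_a - curve_b))]

print("Optimal price point:", crossing(pct_too_cheap, pct_too_expensive))
print("Indifference point:", crossing(pct_bargain, pct_expensive))
```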

This post walks through an original survey experiment comparing Van Westendorp to conjoint and offers insight into how to implement a conjoint-based pricing approach, either on its own or in conjunction with Van Westendorp. Using a well-known, mid-market denim retailer as the putative company, we randomly assigned 2,000 people in the U.S. to complete either a Van Westendorp question set about the company's jeans or a series of conjoint tasks evaluating profiles of those jeans, where price is varied alongside other characteristics. Below, we report the findings from this exercise and highlight the strengths of conjoint vis-à-vis Van Westendorp.

Limitations of Van Westendorp

Though the Van Westendorp (VW) framework is simple in that it relies on a limited set of intuitive questions, it is unlikely to accurately reflect how consumers actually respond to price in the market, for the following reasons:

  • It makes strong assumptions about respondent comprehension and behavior, among them: that respondents understand price levels and accurately report price preferences; that price preferences are linear, transitive, and converge to meaningful ranges; and that all products have a 'too cheap' point.
  • It is unclear that VW maps onto real-world consumer behavior. In real life, people operate in information- and product-attribute-rich environments and make tradeoffs with respect to price. In VW, price is the main, if not only, information available, and we observe neither choices over price nor tradeoffs. The approach thus has limited external validity and also hamstrings our capacity to estimate price elasticity.
  • 'Optimal' price ranges under VW are only optimal conditional on company strategy and cost/revenue assessments. At best, VW offers a very broad sense of the entire market.
  • There is no visibility into, or control over, the features people impute to a product described primarily (or only) by price, and such features may vary extensively from person to person. Whether respondents have a similar product in mind is unclear.
  • There is neither theoretical nor mathematical justification for interpreting the intersections of VW's distribution curves as price limits or optima.

Figure 1 below plots the results from the VW arm of our experiment. The derived range of acceptable prices for a putative pair of the company's jeans runs from $30 to $42, and the optimal price point is located at $33. The point of indifference is at $36, where the same proportion of customers feel the product is getting too expensive as feel it is a bargain. These prices fall far below the standard range for a pair of the company's jeans, which tend to sell between $70 and $130 at full price. Nearly all respondents in our survey consider the lower bound of the real price range, $70, so expensive that they would not consider purchasing these jeans. And yet, the company has operated at this price point (adjusting for inflation) profitably for decades. Our data are weighted to Census values along a series of dimensions and do not skew markedly lower-income, so these low values cannot be attributed to a sampling artifact.

Figure 1: Van Westendorp Price Sensitivity Meter

These price ranges indeed cast doubt on respondents' capacity to understand and accurately and honestly articulate price preferences. While people will, of course, try to go as low as they can on price in an unconstrained setting, that is precisely the point: in addition to VW's myriad other shortcomings, it also does not establish the constraints businesses face in pricing, including price floors. Moreover, a meaningful proportion of our sample assigned to the VW condition reported intransitive price preferences. Though these respondents are not included in the analysis above, they reveal that without forcing transitivity in the survey programming, reported price preferences are wont to be inconsistent and may not converge to reasonable distributions. It's important to note that outside of this behavior, the respondents were otherwise attentive and offered high-quality responses, certainly in part due to our proprietary screening procedures. Given these limitations, we turn to demonstrating the comparative advantages of using conjoint experiments to understand price.
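A programmatic transitivity screen of the kind that can be built into survey logic, or applied post hoc, only needs to check that each respondent's four reported prices are weakly increasing. A sketch with made-up responses:

```python
import numpy as np

def is_transitive(too_cheap, bargain, expensive, too_expensive):
    """True where a respondent's four VW prices are weakly increasing."""
    return ((too_cheap <= bargain) & (bargain <= expensive)
            & (expensive <= too_expensive))

# Flag respondents whose answers violate the expected ordering
tc = np.array([20, 40, 25])
bg = np.array([30, 35, 30])
ex = np.array([45, 50, 28])
te = np.array([60, 55, 70])
print(is_transitive(tc, bg, ex, te))  # [ True False False ]
```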

Leveraging randomization via conjoint experiments

Conjoint analysis measures the preference and importance that respondents place on the different elements of a product or service, represented in (typically) pairs of hypothetical profiles, among which respondents choose. Pricing analysis can be done via conjoint experiments through the inclusion of a price characteristic that varies across profiles. The conjoint framework does not make the same strong assumptions that Van Westendorp does about how people perceive, understand, and report price sensitivity. Below is a non-exhaustive list of its advantages:

  • Realism : Because conjoints present people with profiles of a product or service and ask them to make a choice, they more closely mimic real decision-making environments where people must consider several features of an item, of which price is one, and make tradeoffs. This can lead to more accurate and actionable insights.
  • Strategic insights: Because businesses have full control over the price inputs to a conjoint, the ranges can reflect strategic objectives and constraints. 
  • Observed choices: A conjoint design allows us to observe respondents’ behavior with respect to price by asking people to actually make choices at different price points. VW only allows us to infer choice behavior, and only under the assumption that reports of ‘too cheap’ or ‘too expensive’ map onto rejection. In a conjoint, we can also offer a ‘Neither’ option to explicitly observe rejection behavior. 
  • Elasticity : Unlike VW, conjoints allow for the estimation of price elasticity by systematically varying price levels and observing respondents’ choices. 
  • Multidimensional analysis : Conjoints allow us to estimate the impact of price conditional on other features, offering insight into how different pricing strategies interact with other features and how they collectively influence preferences and purchase decisions. 
  • Predictive power: Conjoints have been shown to have a meaningful ability to predict real-world behavior (e.g., Hainmueller et al. 2014a). They can thus be used to generate predictions of market share and customer preferences. By contrast, the external validity of VW has not been clearly demonstrated. 

The price levels in the conjoint arm of our experiment were $50, $70, $90, and $110, reflecting the most common current price points based on a review of the company's website across both the men's and women's sections. Additionally, each profile included information about the fit of the jeans (slim fit, straight leg, wide leg), their color (black, dark blue, light blue), and whether or not the jeans had rips as part of the style. These, of course, are common characteristics one would know about a pair of jeans in addition to their price.

Figure 2 below plots the average marginal component effects (AMCEs) of the jean attributes randomized in the conjoint condition. AMCEs represent the marginal effect of an attribute averaged over the joint distribution of the remaining attributes and are causally identified by virtue of the fact that attribute levels are randomly assigned to profiles (Hainmueller et al., 2014b). That is to say, randomization allows us to directly infer the causal effect of price on selection decisions – an impossibility in non-randomized pricing frameworks like VW. We can interpret each AMCE as the change in the probability of a profile being selected if a given attribute level is present in a profile, compared to a baseline attribute level. 
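As a minimal sketch of the estimation itself: because levels are randomized, AMCEs can be recovered with a linear regression of the choice indicator on dummy-coded attribute levels, clustering standard errors by respondent. The simulated data below stand in for our actual survey responses; the column names and effect sizes are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = [
    {"resp_id": r,
     "price": rng.choice([50, 70, 90, 110]),
     "fit": rng.choice(["slim fit", "straight leg", "wide leg"]),
     "color": rng.choice(["black", "dark blue", "light blue"]),
     "rips": rng.choice(["no rips", "rips"])}
    for r in range(500) for _ in range(6 * 2)  # 500 respondents, 6 pair tasks
]
df = pd.DataFrame(rows)

# Synthetic choices: the higher-utility profile within each pair is chosen
pair_id = df.index // 2
util = (-0.01 * df["price"] + 0.2 * (df["rips"] == "no rips")
        + rng.normal(0, 0.5, len(df)))
df["chosen"] = (util == util.groupby(pair_id).transform("max")).astype(int)

# AMCEs: OLS of choice on attribute-level dummies, SEs clustered by respondent
model = smf.ols("chosen ~ C(price) + C(fit) + C(color) + C(rips)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["resp_id"]})
print(model.params.filter(like="price"))  # effects relative to the $50 baseline
```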

Of all attributes, differences across price levels are most notable. Compared to a $50 price point, a $110 price point leads to a 25 percentage point decrease in the probability of a pair of jeans being selected; $90 and $70 price points lead to decreases of 14 and 8 percentage points, respectively. Perhaps most interestingly, a $70 price point produces only a relatively small decrease in demand compared to $50, especially given the Van Westendorp model’s suggestion that $70 is unacceptable to nearly all respondents. The other attributes can be interpreted in the same way and may offer insight into the dimensions underpinning demand beyond price.

Figure 2: Average Marginal Effects of Conjoint Attributes

We can also evaluate the marginal means of the conjoint attributes instead of effects relative to an attribute’s baseline value, as in Figure 3. Marginal means (sometimes called ‘win rates’) have a direct interpretation as probabilities: a marginal mean of 0 means respondents choose profiles with that feature level with probability 0, and a marginal mean of 1 means they choose such profiles with probability 1.¹ This offers a straightforward way to evaluate price elasticity, or how demand changes as price changes, something we cannot do with a Van Westendorp analysis.

Figure 3: Marginal Means of Conjoint Attributes
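
Computed directly, a level’s marginal mean is simply the share of profiles carrying that level that were chosen. Continuing the simulated data frame from the AMCE sketch above:

```python
# Marginal mean of each attribute level: the probability that a profile
# showing that level is chosen, averaging over all other attributes.
for attr in ["price", "fit", "color", "style"]:
    print(df.groupby(attr)["chosen"].mean(), end="\n\n")
```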

We can likewise use the conjoint framework to evaluate how the effects of price change as other features of the product change, or how the effects of those features change with price. Suppose the company were interested in how the effects of price vary by whether jeans have rips as a style feature, specifically whether people are less likely to pay higher prices for ripped jeans. We plot the effects of attributes, including price, by style in Figure 4 below. Evidently, whether jeans have rips has little bearing on preferences over price: the effects of price are consistent across both attribute levels, and any slight differences between them are not statistically significant, as the right-most pane of the figure shows. Importantly, these feature-by-feature interactions have causal interpretations because all features are randomly assigned to profiles; a sketch of such an interaction model follows the figure. In the same way, we could subset conjoint analyses by respondent segments and evaluate whether meaningful heterogeneity in price preferences exists across segments; while respondent-by-attribute interactions do not yield causal inferences, they offer insight into which respondent characteristics predict certain price preferences.

Figure 4: Attribute Effects by Style
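
Estimating such an interaction is a one-line change to the AMCE model above; interaction coefficients near zero would match what Figure 4 shows (again on the simulated data, so purely illustrative):

```python
# Interact price with style: the price x style coefficients tell us
# whether price effects differ between ripped and non-ripped jeans.
interacted = smf.ols(
    "chosen ~ C(price, Treatment('$50')) * C(style) + C(fit) + C(color)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["resp_id"]})
print(interacted.params.filter(like=":"))  # interaction terms only
```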

We can also integrate a conjoint approach with a VW analysis, which may be especially useful when introducing new products. We can use a VW analysis to elicit unprompted and unconstrained price evaluations in a first survey, and then pipe these values, either as is or adjusted based on strategic considerations, into the price attribute of a conjoint experiment in a second survey. Even if the elicited price ranges are left unaltered, the conjoint framework still confers all of its aforementioned benefits, including the unique capacity to observe choices and estimate price elasticity.
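
One way this piping might be operationalized, sketched with hypothetical wave-one data: take spread-out quantiles of the VW ‘getting expensive’ responses, round them to clean price points, and use them as the wave-two conjoint price levels.

```python
import numpy as np

# Hypothetical wave-one responses to "getting expensive, but not out
# of the question", in dollars.
getting_expensive = np.array([35, 40, 45, 50, 55, 60, 65, 70, 80, 95])

# Spread-out quantiles as candidate conjoint price levels, rounded to
# the nearest $5; adjust further for strategic floors and ceilings.
levels = np.quantile(getting_expensive, [0.1, 0.4, 0.7, 0.9])
levels = (np.round(levels / 5) * 5).astype(int)
print(levels)  # [40 55 65 80] for this toy data
```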

Similarly, we can use values from the third VW statement (“getting expensive, but not out of the question”) to derive a pseudo-demand curve to complement our conjoint estimates. A typical demand curve illustrates the proportion of people willing to pay $X or more for a product, and moving down the curve shows how many additional people we can bring into the market by reducing price. By plotting the cumulative distribution of responses to that third statement, we can make similar inferences. Figure 5 below plots an example using the VW arm of our experiment: around 25% of respondents say they would be willing to pay $50 or more for the jeans.

Figure 5: Pseudo-Demand Curve
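
Concretely, the pseudo-demand curve is one minus the empirical CDF of those responses, i.e., the share of respondents whose ‘getting expensive’ point is at or above each price. A sketch with hypothetical values:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical "getting expensive" responses from the VW arm, in dollars.
getting_expensive = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 75])

prices = np.sort(getting_expensive)
# Share of respondents whose threshold is >= each price: the proportion
# plausibly willing to pay at least that much.
share_willing = 1.0 - np.arange(len(prices)) / len(prices)

plt.step(prices, share_willing, where="post")
plt.xlabel("Price ($)")
plt.ylabel("Share willing to pay at least this price")
plt.title("Pseudo-demand curve from VW responses")
plt.show()
```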

Motivated by an original survey experiment, we have outlined the theoretical and empirical shortcomings of a Van Westendorp approach to pricing and the ways in which a conjoint design overcomes them. By randomly assigning attributes, including price, to product profiles, conjoint experiments are unique among pricing frameworks in allowing us to estimate the causal effects of price, while offering added realism, predictive validity, the capacity to estimate elasticity, and insight into critical heterogeneities, whether in the effect of price by other attributes, by respondent segments, or both. A conjoint pricing framework can also make space for integration with Van Westendorp: the latter can be used to elicit unconstrained initial price ranges, which are then piped into the former so their performance can be observed in a more powerful setting.

Hainmueller, J., Hangartner, D., & Yamamoto, T. (2014a). Do survey experiments capture real-world behavior? External validation of conjoint and vignette analyses with a natural experiment. Proceedings of the National Academy of Sciences, 112(8), 2395-2400.

Hainmueller, J., Hopkins, D. J., & Yamamoto, T. (2014b). Causal inference in conjoint analysis: Understanding multidimensional choices via stated preference experiments. Political Analysis, 22(1), 1-30.

¹Because our four price levels can co-occur across a pair of profiles (for instance, it is possible to see $50 in both profiles in one task), the observable range for this attribute is bounded between (¼)×(¼) ≈ 0.06 and 1 − 0.06 ≈ 0.94.


Anja Kilibarda, Ph.D.

Anja Kilibarda, Ph.D. is a research scientist at Morning Consult, focusing on survey research using experimental methods for causal inference. She has more than a decade of experience designing and analyzing surveys and experiments to identify complex dynamics with published work in Public Opinion Quarterly, the International Journal of Public Opinion Research and more. Anja earned her doctorate from Columbia University, her master’s degree at the Université de Montréal, and her bachelor’s degree from the University of Toronto.


ORIGINAL RESEARCH article

This article is part of the research topic: Teaching controversial issues in Secondary Education

Evaluating the Impact of Conceptual Change Pedagogy on Student Attitudes and Behaviors Toward Controversial Topics in Iraq

  • 1 Walden University, United States
  • 2 Hardwired Global, United States

This study assesses the effect of conceptual change pedagogy on students' attitudes toward pluralism and related rights within culturally sensitive contexts. Global efforts to address the spread of intolerant ideologies that foment radicalization, discrimination, and violence are fraught with controversy. Prior research on the Middle East and North Africa region has found that efforts to address these challenges in the field of education, including reform to curricula, the promotion of narratives inclusive of religious diversity, and civics education initiatives, have had varied levels of success. Absent from these efforts is the development of an effective pedagogy and the training of teachers to identify and address ideologies and behaviors that foment intolerance and conflict among students. Hardwired Global developed a teacher training program based on conceptual change theory and pedagogy to fill these needs. Conceptual change refers to the development of new ways of thinking about and understanding concepts, beliefs, and attitudes. Hardwired Global implemented the program in partnership with the regional Directorate of Education for Mosul and the Nineveh Plains region of Iraq from 2019 to 2023. From 2021 to 2023, Hardwired trained 485 teachers in 40 schools across the region. Following the training, teachers implemented two lessons. A mixed-methods research study, with a primary focus on the quantitative data collected, was conducted to determine the effect of the program on student perceptions, understanding, and behavior toward key concepts inherent to pluralism. Quantitative data consisted of a pre-post survey with four multiple-choice questions; scores on pre-surveys were compared to post-surveys using a paired-samples t-test. We documented statistically significant developments in students' conceptual understanding of key concepts inherent to pluralism and associated rights, including respect for diversity in expression, inclusion of diverse religious and/or ethnic communities, gender equality, and violent versus non-violent approaches to conflict. Qualitative data consisted of semi-structured interviews with teachers and students, implemented at the conclusion of the program, and observations reported by Master Trainers and teachers during training and activity implementation. Findings suggest conceptual change pedagogy on pluralism and associated rights is a promising approach to education about controversial topics in conflict-affected and culturally sensitive environments.
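
For reference, the pre-post comparison described above corresponds to a paired-samples t-test; a minimal sketch with hypothetical scores (the study’s actual data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-training scores for the same six students.
pre = np.array([2.1, 2.4, 1.9, 2.8, 2.2, 2.5])
post = np.array([2.9, 2.7, 2.6, 3.1, 2.8, 3.0])

# Paired-samples t-test: tests whether the mean within-student change
# differs from zero.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```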

Keywords: pluralism education, controversial topics in education, conceptual change, human rights education, gender equality

Received: 16 Aug 2023; Accepted: 23 Nov 2023.

Copyright: © 2023 Rea-Ramirez, Abboud and Ramirez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Lena Abboud, Hardwired Global, Richmond, California, United States

Bizcommunity.com

Understanding the rise of food delivery services in South Africa – KLA consumer insights

Posted: 22 November 2023 | Last updated: 23 November 2023

Consumer insights agency KLA shares results from a research survey on the food delivery industry (including groceries), conducted using the YourView panel.

The food delivery industry in South Africa is experiencing unprecedented growth, driven by the increasing prevalence and convenience of food delivery apps. Consumers are embracing the ease of having meals and groceries delivered to their doorstep, leading to a surge in the popularity of food delivery services. In an effort to shed light on the motivations, needs, and preferences of South African food delivery users, this consumer insights article presents key findings from the YourView research survey.

Factors influencing the decision to use food delivery services

1. Convenience takes the lead (20.5%)

A total of 20.5% of respondents highlight convenience as the primary factor influencing their decision to use food delivery services. In a fast-paced society, time-saving benefits are highly valued, and the ability to order food with just a few taps on a smartphone is a significant draw for users.

2. Managing a busy schedule (15.4%)

For 15.4% of respondents, the decision to use food delivery is driven by the challenge of managing a busy schedule. With multiple responsibilities, including work, family, and social commitments, finding time to cook becomes a hurdle that food delivery services conveniently overcome.

3. Attractive pricing (14.5%)

Competitive pricing and deals influence the decision for 14.5% of respondents. In a price-sensitive market, discounts, promotions, and loyalty programmes offered by food delivery services make eating out more affordable and appealing to consumers seeking value for money.

4. Craving variety (12%)

Variety is a significant factor for 12% of respondents, who appreciate the diverse range of restaurant options available for delivery. Food delivery apps cater to different culinary preferences, allowing consumers to explore new tastes without leaving their homes.

5. Need for speed (8.9%)

Quick delivery times matter for 8.9% of respondents, highlighting the importance of prompt service in a fast-paced society. Food delivery services that prioritise speed and accuracy stand out in attracting and retaining customers.

Industry insights

Checkers’ Xtra Savings Plus subscription service

In response to evolving consumer needs, Checkers has launched the Xtra Savings Plus subscription service, combining the best of the Xtra Savings rewards programme with the convenience of their Sixty60 delivery service. Subscribers enjoy unlimited free Sixty60 deliveries, a 10% in-store discount, and double personalised offers. Neil Schreuder, chief of strategy at Checkers, emphasised the company's commitment to providing excellent value for both food deliveries and in-store purchases.

Pick n Pay’s On-Demand delivery app relaunch

Pick n Pay has relaunched its on-demand delivery app, offering a month of free delivery without a subscription. This move aligns with the growing demand for on-demand solutions, providing customers with a taste of the convenience and efficiency of their delivery service.

The South African food delivery industry continues to evolve as consumers prioritise convenience, time-saving solutions, competitive pricing, variety, and speedy delivery. Key players such as Checkers and Pick n Pay are actively responding to these trends, demonstrating a commitment to enhancing the customer experience and offering greater value. By leveraging insights from consumer research, businesses can tailor their strategies to meet the evolving demands of South African consumers, ensuring sustained growth and success in this dynamic industry. For more information, visit www.kla.co.za.


