Correlational Research Vs Experimental Research

Correlational research and experimental research are two different research approaches used in social sciences and other fields of research.

Correlational Research

Correlational Research is a research approach that examines the relationship between two or more variables. It involves measuring the degree of association or correlation between the variables without manipulating them. The goal of correlational research is to identify whether there is a relationship between the variables and the strength of that relationship. Correlational research is typically conducted through surveys, observational studies, or secondary data analysis.
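As a rough illustration of measuring association without manipulation, the sketch below computes a Pearson correlation coefficient. The data and scoring scheme are invented for this example, not taken from any real study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented survey data: cigarettes per day vs. a lung-function score.
smoking = [0, 1, 2, 3, 4, 5]
lung_score = [95, 90, 84, 80, 71, 65]

r = pearson_r(smoking, lung_score)  # close to -1: strong negative association
```

An r near -1 or +1 signals a strong association, but, as the section stresses, it says nothing about which variable causes which, or whether a third variable drives both.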

Experimental Research

Experimental Research, on the other hand, is a research approach that involves manipulating one or more variables to observe the effect on another variable. The goal of experimental research is to establish a cause-and-effect relationship between the variables. Experimental research is typically conducted in a controlled environment and involves random assignment of participants to different groups to ensure that the groups are equivalent. Data are collected through measurements and observations, and statistical analysis is used to test the hypotheses.

Difference Between Correlational Research and Experimental Research

Here’s a comparison table that highlights the differences between correlational research and experimental research:

Aspect | Correlational Research | Experimental Research
Definition | Examines the relationship between two or more variables without manipulating them | Involves the manipulation of one or more variables to observe the effect on another variable
Goal | To identify the strength and direction of the relationship between variables | To establish a cause-and-effect relationship between variables
Methods | Surveys, observational studies, or secondary data analysis | Controlled experiments with random assignment of participants
Analysis | Correlation coefficients, regression analysis | Inferential statistics, analysis of variance (ANOVA)
Conclusions | Association between variables | Causality between variables
Example | Examining the relationship between smoking and lung cancer | Testing the effect of a new medication on a particular disease

About the author

Muhammad Hassan, Researcher, Academic Writer, Web developer

Experimental and Quasi-Experimental Research

Guide Title: Experimental and Quasi-Experimental Research (Guide ID: 64)

You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, which alone creates the effect Y. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one caused the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.
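The knob example can be made concrete with a least-squares slope, which estimates how much change in the dependent variable each unit of the independent variable produces. The numbers below are invented for illustration:

```python
def least_squares_slope(x, y):
    """Slope of the best-fit line: change in y per unit change in x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

knob_turn = [0, 10, 20, 30, 40]   # degrees turned: the independent variable
volume_db = [50, 54, 58, 62, 66]  # measured loudness: the dependent variable

slope = least_squares_slope(knob_turn, volume_db)  # 0.4 dB per degree turned
```

The slope answers the "how much cause produces how much effect" question: here, each degree of turning adds about 0.4 dB of loudness.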

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.

Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning example. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants: one treated with a fertilizer named MegaGro, another treated with a fertilizer named Plant!, and a third that receives no fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects to the groups of an experiment in such a way that the treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight, and whether they are blooming. This involves distributing the plants so that each plant in one group exactly matches the characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution: differences between groups will average out, making the groups comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
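A minimal sketch of random assignment, using thirty hypothetical plants and the three groups from the MegaGro example (the subject names and group sizes are invented):

```python
import random

def randomize_groups(subjects, n_groups=3, seed=7):
    """Shuffle subjects and deal them into n_groups equal groups."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)  # fixed seed only for reproducibility
    return [pool[i::n_groups] for i in range(n_groups)]

plants = [f"plant-{i:02d}" for i in range(30)]
megagro, plant_brand, control = randomize_groups(plants)
# With enough subjects, chance differences between the groups average out.
```

Because assignment depends only on the shuffle, no characteristic of a plant can systematically favor one group, which is exactly what matching struggles to guarantee.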

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might mess up the experiment and obscure the causal relationship; and
  • to have larger groups with a carefully sorted constituency, preferably randomized, in order to keep accidental differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability or an increase in creativity or reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies for handling subjective measures, such as rating data, testing, surveying, and content analysis.

Rating essentially involves developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Covariance) tests to measure differences between control and experimental groups, as well as different correlations between groups.
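As a sketch of what a one-way ANOVA computes, the F statistic compares variation between groups to variation within groups. The growth numbers below are invented for the guide's three plant groups:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: mean square between groups
    divided by mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented growth in cm for the three groups.
control = [4.1, 3.9, 4.0, 4.2]
plant_brand = [4.8, 5.1, 4.9, 5.0]
megagro = [6.0, 6.3, 5.9, 6.2]

f_stat = one_way_anova_f(control, plant_brand, megagro)
```

A large F means the groups differ far more than the chance variation within each group would suggest; in practice the statistic is compared against an F distribution to obtain a p-value.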

Since we're on the subject of statistics, note that experimental or quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. It can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure in statistical analysis and in experimental research.

Example: Causality

Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by getting a plant to go with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, using no fertilizer at all on it, to compare how much change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that both groups use the same kind of plant; that both are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers; and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only the variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one plant receives more shade than the other, and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods will affect a solution. Then you choose the method you want to test, and formulate a hypothesis to predict the outcome of the test.

For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that some possible methods for writing improvement include peer workshopping, or reading more example essays. Favoring the former, your experiment would try to determine if peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.

Analyzing the data

The fourth step is to collect and analyze the data. This is not solely a step where you collect the papers, read them, and say your methods were a success. You must show how successful. You must devise a scale by which to evaluate the data you receive; therefore, you must decide which indicators will, and will not, be important.

Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. These papers usually follow the format below, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit your paper.

  • Abstract : Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction : Set the context of the experiment.
  • Review of Literature : Provide a review of the literature in the specific area of study to show what work has been done. This review should lead directly to the author's purpose for the study.
  • Statement of Purpose : Present the problem to be studied.
  • Participants : Describe in detail the participants involved in the study (e.g., how many there were). Provide as much information as possible.
  • Materials and Procedures : Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much information that it becomes unreadable. Include how participants were chosen, the tasks assigned to them, how the tasks were conducted, how data were evaluated, etc.
  • Results : Present the data in an organized fashion. If it is quantifiable, it is analyzed through statistical means. Avoid interpretation at this time.
  • Discussion : After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and as objective an interpretation as possible. Hypothesizing is possible here.
  • Limitations : Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely, transferability, is possible based on results. This section is important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. You would discuss what variables you could not control.
  • Conclusion : Synthesize all of the above sections.
  • References : Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting causal relationships in human settings.

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism has to do with rigor. Rigor is established through close, proper attention to randomizing groups, time spent on a study, and questioning techniques. This allows more effective application of standards of quantitative research to qualitative research.

Often, teachers cannot wait for piles of experimental data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

Situations in English studies that might encourage use of experimental methods.

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom, and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze the results and determine whether this particular variable alone causes increased participation.

Transferability: Applying Results

Experimentation and quasi-experimentation allow for generating transferable results, with acceptance of those results dependent upon experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading the results of experiments with a critical eye, ultimately decide whether and how results will be implemented. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These results will strengthen the study or discredit its findings.

Concerns English Scholars Express about Experiments

Researchers should carefully consider if a particular method is feasible in humanities studies, and whether it will yield the desired information. Some researchers recommend addressing pertinent issues combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue that can be explored through experimentation and an examination of causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through the scientific method is free of human inconsistencies. But since the scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue may be compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit the ability to be reflective. An ethical researcher thinks critically about results and reports them after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample may not be representative of a population, because the researcher does not have an opportunity to ensure a representative sample. For example, subjects could be limited to one location, limited in number, studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of determining the individual effects of each variable more precisely. Determining interactions between variables also becomes more feasible.

Even so, experiments may produce artificial results. It can be argued that variables are manipulated so the experiment measures what researchers want to examine; therefore, the results are merely contrived products with no bearing on material reality. Artificial results are difficult to apply in practical situations, making generalization from the results of a controlled study questionable. Experimental research essentially decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. Results may be difficult to replicate.

Groups in an experiment may not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class that meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from that in the study. Long-term studies are expensive and hard to reproduce. And although the same hypotheses are often tested by different researchers, various factors complicate attempts to compare or synthesize them. It is nearly impossible to be as rigorous as the natural sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.

When a human population is involved, experimental research becomes concerned with whether behavior can be predicted or studied with validity. Human response can be difficult to measure, because human behavior is dependent on individual responses. Rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see whether this behavior will result in fewer cavities. We are relying on previous experimentation and transferring it to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Qualitative methods such as case study, ethnography, observational research, and interviews can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally, who screamed "I love writing!" ten times before she wrote her essay and then produced a quality paper? Should the other faculty members hear this anecdote and conclude that all other students should employ the same technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. Readers may not be aware of these biases and should approach experimental results with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

Advantages | Disadvantages
--- | ---
gain insight into methods of instruction | subject to human error
intuitive practice shaped by research | personal bias of researcher may intrude
teachers have bias but can be reflective | sample may not be representative
researcher can have control over variables | can produce artificial results
humans perform experiments anyway | results may only apply to one situation and may be difficult to replicate
can be combined with other research methods for rigor | groups may not be comparable
use to determine what is best for population | human response can be difficult to measure
provides for greater transferability than anecdotal research | political pressure may skew results

Ethical Concerns

Experimental research may be manipulated at both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research, faced with naive readers, encounter ethical concerns. While creating an experiment, certain objectives and intended uses of the results might drive and skew it. Looking for specific results, researchers may ask only the questions and examine only the data that support desired conclusions, dismissing conflicting findings.

Editors and journals do not publish only trouble-free material. And as readers of experiments, members of the press might report selected and isolated parts of a study to the public, in effect transferring that data to the general population in ways the researcher never intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high blood pressure by reducing cholesterol. But that bit of information was taken out of context. The actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.
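
The oat bran episode is a classic confounding pattern, and a tiny simulation can make it concrete. This is an illustrative sketch with made-up numbers (the variable names and coefficients are assumptions, not data from the study): in the model, oat bran never directly affects cholesterol, yet the two are strongly correlated because bran intake displaces saturated fat.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient (population formula)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

rng = random.Random(1)
oat_bran, cholesterol = [], []
for _ in range(5000):
    bran = rng.uniform(0, 10)                    # grams of oat bran per day
    # Confounder: eating more bran displaces saturated fat in the diet.
    sat_fat = max(0.0, 20 - 1.5 * bran + rng.gauss(0, 2))
    # Cholesterol depends on saturated fat ONLY -- bran has no direct effect.
    chol = 150 + 3 * sat_fat + rng.gauss(0, 5)
    oat_bran.append(bran)
    cholesterol.append(chol)

r = pearson(oat_bran, cholesterol)
print(f"r(oat bran, cholesterol) = {r:.2f}")
# Strongly negative, even though bran never enters the cholesterol
# equation: the association runs entirely through the fat confounder.
```

A reader who sees only the correlation has no way to distinguish this situation from a direct causal effect; that distinction is exactly what the press report erased.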

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers. Readers, in turn, must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project that produces no significant results, it may be tempting to manipulate the data to show significant results in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that gain an air of validity simply by being published in a journal. Research that lacks credibility often is not published; consequently, researchers who fail to publish risk being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise when researchers do not report all results, or otherwise alter them. This phenomenon is counterbalanced, however, by the fact that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. Experimental researchers who hope to make an impact on the community of professionals in their field must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. Includes description of study, rationale for conducting the study, results and implications of the study.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28) The Times Educational Supplement . 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

In this paper, the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluation, guided self-evaluation (using prepared guidelines), and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational Research: an Introduction . (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. In seven parts, it covers measurement, experimental design, applied social experimentation, interpretive social science, and the epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation, with an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminacies of rationality. Curriculum Inquiry, 26 , 181-92.

Places Eisenberg's theories in relation to the death of foundationalism, arguing that he distorts rational studies into a form of relativism. Daniels examines Eisenberg's ideas on indeterminacy, methods, and evidence; what Eisenberg is against; and how we should weigh what he says.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danziger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.

Researchers Dudley-Marling and Rhodes address some problems they encountered in their experimental approach to a study of reading comprehension. This article discusses the limitations of experimental research and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.

Edgington explores ways in which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.

A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.

Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point by arguing for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.

Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. Floden places high value on teacher discrepancy and on the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14 , 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.

The aims of classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27 , 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., and J. P. Tierney. (1993, October). The fallibility of comparison groups. Evaluation Review , 556-71.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe. (Eds.), Critical Perspectives on Computers and Composition Instruction . (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G. Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16 , 261-278.

Hillocks conducted a study using three treatments: observational or data-collecting activities prior to writing, use of revisions or absence of same, and either brief or lengthy teacher comments, to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing . Newton, Massachusetts: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research . Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: a researcher's handbook . Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8 , 123-31.

The researchers looked at one teacher candidate who participated in a class in which students designed their own research projects corresponding to questions they wanted answered in the teaching world. The goal of the study was to see whether preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of sisyphus? Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8 , 155-60.

An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.

Lauer. J.M. & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on the major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanities. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49 , 1-19.

Contextual importance has been largely ignored by traditional research approaches in the social and behavioral sciences and in their application to education. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly , 235-60.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25 , 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are producing the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs . San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10 (4), 257-66.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., Et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.). Reading empirical research studies: The rhetoric of research . (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy . Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation . London: Aspen Systems Corporation.

The lack of research in written expression is addressed, and an application of the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: Becoming qualitative researchers and reflective practitioners. Teaching Education, 8 , 109-19.

An education professor recounts his own problematic research when he attempted to introduce an educational research component into a teacher preparation program. He encountered dissent from students and cooperating professionals but was ultimately rewarded with excitement toward research and a recognized connection to practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26 .

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology . (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22 , 19-21.

In his paper, Reiser starts by stating the importance of research in advancing the field of education, and points out that graduate students in instructional design lack the proper skills to conduct research. The paper then goes on to outline the practicum in the Instructional Systems Program at Florida State University which includes: 1) Planning and conducting an experimental research study; 2) writing the manuscript describing the study; 3) giving an oral presentation in which they describe their research findings.

Report on education research . (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This independent bi-weekly newsletter on research in education and learning has been published since September 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presupposition that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article but suggests the concept of scientific should not be regarded in absolute terms and recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two teacher-chosen methods of classroom instruction is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, (5), 5-8.

The controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.

Recapitulates the main features of an ongoing debate between advocates for using the vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate over traditional research methodology and qualitative methods and vocabularies. Definitely worth a read by graduates.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal , 251-55.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research Designs. Newbury Park, California: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78 , 356-363.

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both . (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14 .

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (1969, March). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math , 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B.J. (1971). Statistical principles in experimental design , (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind the statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics; bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Citation Information

Luann Barnes, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, and Mike Palmquist. (1994-2024). Experimental and Quasi-Experimental Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.

Copyright Information

Copyright © 1994-2024 Colorado State University and/or this site's authors, developers, and contributors. Some material displayed on this site is used with permission.

Correlational Research vs. Experimental Research

What's the Difference?

Correlational research and experimental research are two different approaches used in scientific studies. Correlational research aims to identify relationships or associations between variables without manipulating them. It examines the extent to which changes in one variable are related to changes in another variable. On the other hand, experimental research involves manipulating variables to determine cause-and-effect relationships. It includes the use of control groups, random assignment, and independent and dependent variables. While correlational research can provide valuable insights into relationships between variables, experimental research allows researchers to establish causal relationships and draw more definitive conclusions.

| Attribute | Correlational Research | Experimental Research |
|---|---|---|
| Definition | Examines the relationship between variables without manipulating them. | Manipulates variables to establish cause-and-effect relationships. |
| Research Design | Non-experimental | Experimental |
| Control over Variables | Little to no control over variables | High control over variables |
| Manipulation of Variables | Variables are not manipulated | Variables are manipulated |
| Causality | Cannot establish causality | Can establish causality |
| Random Assignment | Not used | Used to assign participants to groups |
| Independent Variable | Not manipulated | Manipulated by the researcher |
| Dependent Variable | Measured and analyzed for relationships | Measured and analyzed for cause-and-effect |
| Sample Size | Can be large | Usually smaller |

Further Detail

Introduction

Research is a fundamental aspect of advancing knowledge and understanding in various fields. Two common research methods used in scientific studies are correlational research and experimental research. While both methods aim to explore relationships between variables, they differ in their approach, design, and the level of control over variables. This article will compare and contrast the attributes of correlational research and experimental research, highlighting their strengths and limitations.

Correlational Research

Correlational research is a non-experimental method used to examine the relationship between two or more variables. It focuses on measuring the degree of association or correlation between variables without manipulating them. In this type of research, researchers collect data on the variables of interest and analyze the statistical relationship between them. The strength and direction of the relationship are typically expressed through correlation coefficients.
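A correlation coefficient of this kind can be computed in a few lines; the study-hours and exam-score figures below are invented purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: weekly study hours and exam scores for ten students.
# Neither variable is manipulated; we only measure their association.
hours = np.array([1, 2, 2, 3, 4, 4, 5, 6, 7, 8])
scores = np.array([52, 55, 60, 58, 65, 70, 68, 75, 80, 85])

# Pearson's r expresses the strength and direction of the relationship.
r, p_value = stats.pearsonr(hours, scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```

A strong positive r here would show only that the variables change together; it cannot, by itself, tell us whether studying causes higher scores or whether a third variable drives both.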

One of the key advantages of correlational research is its ability to study naturally occurring phenomena in real-world settings. It allows researchers to observe and analyze relationships between variables that may be difficult or unethical to manipulate in an experimental setting. For example, studying the relationship between smoking and lung cancer would be more feasible using correlational research, as it would be unethical to assign individuals to smoke for an experimental study.

However, correlational research has limitations. It cannot establish causality or determine the direction of the relationship between variables. While a correlation may exist between two variables, it does not necessarily mean that one variable causes the other. It could be due to a third variable, known as a confounding variable, that influences both variables simultaneously. Additionally, correlational research is susceptible to issues such as selection bias, measurement error, and the inability to control extraneous variables.

Experimental Research

Experimental research, on the other hand, is a method that involves manipulating variables to determine cause-and-effect relationships. In experimental research, researchers carefully design and control the conditions under which the study takes place. They manipulate the independent variable(s) and measure the effects on the dependent variable(s). The goal is to establish a cause-and-effect relationship by systematically varying the independent variable(s) and controlling for potential confounding variables.

One of the main strengths of experimental research is its ability to establish causality. By manipulating variables and controlling extraneous factors, researchers can confidently attribute any observed changes in the dependent variable(s) to the manipulation of the independent variable(s). This allows for a more definitive understanding of the relationship between variables. Experimental research also provides a high level of control, which increases the internal validity of the study.
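The logic of random assignment plus manipulation can be sketched with simulated data; the participant count, the built-in treatment effect of +10 points, and the t-test are all illustrative choices, not a prescription:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Randomly assign 40 hypothetical participants to two equal groups.
ids = np.arange(40)
rng.shuffle(ids)
treatment, control = ids[:20], ids[20:]

# Simulated outcomes: the manipulation adds a true effect of +10 points.
outcome = rng.normal(50, 5, size=40)
outcome[treatment] += 10

# Because assignment was random, a group difference can be attributed
# to the manipulation rather than to preexisting differences.
t_stat, p_value = stats.ttest_ind(outcome[treatment], outcome[control])
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```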

However, experimental research also has limitations. It may not always be feasible or ethical to manipulate certain variables. For example, it would be unethical to assign individuals to smoke for an experimental study on the effects of smoking. Additionally, experimental research often takes place in controlled laboratory settings, which may limit the generalizability of the findings to real-world situations. The high level of control in experimental research can also lead to artificiality, as it may not fully capture the complexity and variability of natural settings.

Comparison of Attributes

While correlational research and experimental research differ in their approach and design, they share some common attributes. Both methods involve collecting and analyzing data to explore relationships between variables. They also rely on statistical techniques to examine the strength and direction of these relationships. Furthermore, both methods contribute to the advancement of knowledge in their respective fields.

However, the key differences lie in the level of control over variables and the ability to establish causality. Correlational research lacks the ability to manipulate variables and establish cause-and-effect relationships. It focuses on observing and analyzing existing relationships between variables. On the other hand, experimental research allows for the manipulation of variables, providing a higher level of control and the ability to establish causality.

Correlational research is often used in exploratory studies or when it is not feasible or ethical to manipulate variables. It can provide valuable insights into the relationships between variables and generate hypotheses for further investigation. Experimental research, on the other hand, is more suitable for testing specific hypotheses and establishing causal relationships. It allows researchers to control extraneous variables and systematically manipulate independent variables to observe their effects on dependent variables.

Correlational research and experimental research are two distinct methods used in scientific studies. While correlational research focuses on observing and analyzing relationships between variables without manipulation, experimental research involves manipulating variables to establish cause-and-effect relationships. Both methods have their strengths and limitations, and their choice depends on the research question, feasibility, and ethical considerations. By understanding the attributes of correlational research and experimental research, researchers can make informed decisions about the most appropriate method to use in their studies, ultimately contributing to the advancement of knowledge in their respective fields.



Statistics By Jim

Making statistics intuitive

Quasi Experimental Design Overview & Examples

By Jim Frost

What is a Quasi Experimental Design?

A quasi experimental design is a method for identifying causal relationships that does not randomly assign participants to the experimental groups. Instead, researchers use a non-random process. For example, they might use an eligibility cutoff score or preexisting groups to determine who receives the treatment.

[Image illustrating a quasi-experimental design.]

Quasi-experimental research is a design that closely resembles experimental research but is different. The term “quasi” means “resembling,” so you can think of it as a cousin to actual experiments. In these studies, researchers can manipulate an independent variable — that is, they change one factor to see what effect it has. However, unlike true experimental research, participants are not randomly assigned to different groups.

Learn more about Experimental Designs: Definition & Types .

When to Use Quasi-Experimental Design

Researchers typically use a quasi-experimental design because they can’t randomize due to practical or ethical concerns. For example:

  • Practical Constraints : A school interested in testing a new teaching method can only implement it in preexisting classes and cannot randomly assign students.
  • Ethical Concerns : A medical study might not be able to randomly assign participants to a treatment group for an experimental medication when they are already taking a proven drug.

Quasi-experimental designs also come in handy when researchers want to study the effects of naturally occurring events, like policy changes or environmental shifts, where they can’t control who is exposed to the treatment.

Quasi-experimental designs occupy a unique position in the spectrum of research methodologies, sitting between observational studies and true experiments. This middle ground offers a blend of both worlds, addressing some limitations of purely observational studies while navigating the constraints often accompanying true experiments.

A significant advantage of quasi-experimental research over purely observational studies and correlational research is that it addresses the issue of directionality, determining which variable is the cause and which is the effect. In quasi-experiments, an intervention typically occurs during the investigation, and the researchers record outcomes before and after it, increasing the confidence that it causes the observed changes.

However, it’s crucial to recognize its limitations as well. Controlling confounding variables is a larger concern for a quasi-experimental design than a true experiment because it lacks random assignment.

In sum, quasi-experimental designs offer a valuable research approach when random assignment is not feasible, providing a more structured and controlled framework than observational studies while acknowledging and attempting to address potential confounders.

Types of Quasi-Experimental Designs and Examples

Quasi-experimental studies use various methods, depending on the scenario.

Natural Experiments

This design uses naturally occurring events or changes to create the treatment and control groups. Researchers compare outcomes between those whom the event affected and those it did not affect. Analysts use statistical controls to account for confounders that the researchers must also measure.

Natural experiments are related to observational studies, but they allow for a clearer causality inference because the external event or policy change provides both a form of quasi-random group assignment and a definite start date for the intervention.

For example, in a natural experiment utilizing a quasi-experimental design, researchers study the impact of a significant economic policy change on small business growth. The policy is implemented in one state but not in neighboring states. This scenario creates an unplanned experimental setup, where the state with the new policy serves as the treatment group, and the neighboring states act as the control group.

Researchers are primarily interested in small business growth rates but need to record various confounders that can impact growth rates. Hence, they record state economic indicators, investment levels, and employment figures. By recording these metrics across the states, they can include them in the model as covariates and control them statistically. This method allows researchers to estimate differences in small business growth due to the policy itself, separate from the various confounders.
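The statistical-control idea in this example can be sketched as follows. Everything here is simulated: the growth figures, the confounder, and the true policy effect of 2.0 are invented, and plain least squares stands in for whatever model a real study would use:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
policy = rng.integers(0, 2, size=n)        # 1 = state with the new policy

# Investment is a measured confounder: higher where the policy passed.
investment = rng.normal(10 + 2 * policy, 2)
employment = rng.normal(60, 5, size=n)

# Simulated growth with a true policy effect of 2.0.
growth = 1.0 + 2.0 * policy + 0.5 * investment + 0.1 * employment \
         + rng.normal(0, 1, size=n)

# Naive comparison of means is biased upward by the confounder.
naive = growth[policy == 1].mean() - growth[policy == 0].mean()

# Including the confounders as covariates recovers the policy effect.
X = np.column_stack([np.ones(n), policy, investment, employment])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(f"naive difference: {naive:.2f}")
print(f"adjusted policy effect: {coef[1]:.2f}")  # close to the true 2.0
```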

Nonequivalent Groups Design

This method involves matching existing groups that are similar but not identical. Researchers attempt to find groups that are as equivalent as possible, particularly for factors likely to affect the outcome.

For instance, researchers use a nonequivalent groups quasi-experimental design to evaluate the effectiveness of a new teaching method in improving students’ mathematics performance. A school district considering the teaching method is planning the study. Students are already divided into schools, preventing random assignment.

The researchers matched two schools with similar demographics, baseline academic performance, and resources. The school using the traditional methodology is the control, while the other uses the new approach. Researchers are evaluating differences in educational outcomes between the two methods.

They perform a pretest to identify differences between the schools that might affect the outcome and include them as covariates to control for confounding. They also record outcomes before and after the intervention to have a larger context for the changes they observe.
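One simple way to use those pre- and post-test measurements is to compare each school's change rather than its raw post-test scores, so a preexisting gap does not masquerade as a treatment effect. The numbers below are invented, with a built-in 10-point starting gap and a true 3-point method effect:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100                                    # students per school

# School A (new method) starts 10 points ahead; the method adds 3 more.
pre_a = rng.normal(80, 8, size=n)
post_a = pre_a + 3 + rng.normal(0, 4, size=n)

# School B (traditional method) shows no systematic change.
pre_b = rng.normal(70, 8, size=n)
post_b = pre_b + rng.normal(0, 4, size=n)

naive = post_a.mean() - post_b.mean()                       # gap + effect
change = (post_a - pre_a).mean() - (post_b - pre_b).mean()  # effect only
print(f"naive post-test difference: {naive:.1f}")
print(f"difference in pre-to-post change: {change:.1f}")
```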

Regression Discontinuity

This process assigns subjects to a treatment or control group based on a predetermined cutoff point (e.g., a test score). The analysis primarily focuses on participants near the cutoff point, as they are likely similar except for the treatment received. By comparing participants just above and below the cutoff, the design controls for confounders that vary smoothly around the cutoff.

For example, in a regression discontinuity quasi-experimental design focusing on a new medical treatment for depression, researchers use depression scores as the cutoff point. Individuals with depression scores just above a certain threshold are assigned to receive the latest treatment, while those just below the threshold do not receive it. This method creates two closely matched groups: one that barely qualifies for treatment and one that barely misses out.

By comparing the mental health outcomes of these two groups over time, researchers can assess the effectiveness of the new treatment. The assumption is that the only significant difference between the groups is whether they received the treatment, thereby isolating its impact on depression outcomes.
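The core comparison can be sketched in a few lines; the depression scores, the cutoff of 20, the bandwidth, and the true treatment effect of -4 points are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
score = rng.uniform(0, 40, size=n)         # baseline depression score
treated = score >= 20                      # cutoff determines assignment

# Simulated follow-up: outcome rises smoothly with baseline score,
# plus a true treatment effect of -4 (lower = less depressed).
outcome = 0.5 * score - 4.0 * treated + rng.normal(0, 2, size=n)

# Compare only subjects within a narrow bandwidth of the cutoff,
# where treated and untreated participants are closely matched.
bandwidth = 2.0
near = np.abs(score - 20) <= bandwidth
effect = outcome[near & treated].mean() - outcome[near & ~treated].mean()
print(f"estimated treatment effect near cutoff: {effect:.2f}")
```

A raw comparison of local means still carries a little bias from the underlying trend in scores, which is why real regression discontinuity analyses typically fit a regression on each side of the cutoff rather than simply averaging.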

Controlling Confounders in a Quasi-Experimental Design

Accounting for confounding variables is a challenging but essential task for a quasi-experimental design.

In a true experiment, the random assignment process equalizes confounders across the groups to nullify their overall effect. It’s the gold standard because it works on all confounders, known and unknown.

Unfortunately, the lack of random assignment can allow differences between the groups to exist before the intervention. These confounding factors might ultimately explain the results rather than the intervention.

Consequently, researchers must use other methods to equalize the groups roughly using matching and cutoff values or statistically adjust for preexisting differences they measure to reduce the impact of confounders.

A key strength of quasi-experiments is their frequent use of “pre-post testing.” This approach involves conducting initial tests before collecting data to check for preexisting differences between groups that could impact the study’s outcome. By identifying these variables early on and including them as covariates, researchers can more effectively control potential confounders in their statistical analysis.

Additionally, researchers frequently track outcomes before and after the intervention to better understand the context for changes they observe.

Statisticians consider these methods to be less effective than randomization. Hence, quasi-experiments fall somewhere in the middle when it comes to internal validity , or how well the study can identify causal relationships versus mere correlation . They’re more conclusive than correlational studies but not as solid as true experiments.

In conclusion, quasi-experimental designs offer researchers a versatile and practical approach when random assignment is not feasible. This methodology bridges the gap between controlled experiments and observational studies, providing a valuable tool for investigating cause-and-effect relationships in real-world settings. Researchers can address ethical and logistical constraints by understanding and leveraging the different types of quasi-experimental designs while still obtaining insightful and meaningful results.

Cook, T. D., & Campbell, D. T. (1979).  Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin


Frequently asked questions

What’s the difference between correlational and experimental research?

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

Frequently asked questions: Methodology

Quantitative observations involve measuring or counting something and expressing the result in numerical form, while qualitative observations involve describing something in non-numerical terms, such as its appearance, texture, or color.

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation . A scope is needed for all types of research: quantitative , qualitative , and mixed methods .

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size , and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control , extraneous , or confounding variables that could bias your research if not accounted for properly.

Inclusion and exclusion criteria are predominantly used in non-probability sampling . In purposive sampling and snowball sampling , restrictions apply as to who can be included in the sample .

Inclusion and exclusion criteria are typically presented and discussed in the methodology section of your thesis or dissertation .

The purpose of theory-testing mode is to find evidence in order to disprove, refine, or support a theory. As such, generalisability is not the aim of theory-testing mode.

Due to this, the priority of researchers in theory-testing mode is to eliminate alternative causes for relationships between variables . In other words, they prioritise internal validity over external validity , including ecological validity .

Convergent validity shows how much a measure of one construct aligns with other measures of the same or related constructs .

On the other hand, concurrent validity is about how a measure matches up to some known criterion or gold standard, which can be another measure.

Although both types of validity are established by calculating the association or correlation between a test score and another variable , they represent distinct validation methods.

Validity tells you how accurately a method measures what it was designed to measure. There are 4 main types of validity :

  • Construct validity : Does the test measure the construct it was designed to measure?
  • Face validity : Does the test appear to be suitable for its objectives ?
  • Content validity : Does the test cover all relevant parts of the construct it aims to measure?
  • Criterion validity : Do the results accurately measure the concrete outcome they are designed to measure?

Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.

Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:

  • Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time
  • Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test

Attrition refers to participants leaving a study. It always happens to some extent – for example, in randomised controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analysing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Construct validity refers to how well a test measures the concept (or construct) it was designed to measure. Assessing construct validity is especially important when you’re researching concepts that can’t be quantified and/or are intangible, like introversion. To ensure construct validity your test should be based on known indicators of introversion ( operationalisation ).

On the other hand, content validity assesses how well the test represents all aspects of the construct. If some aspects are missing or irrelevant parts are included, the test has low content validity.

Construct validity has convergent and discriminant subtypes. Together, they help determine whether a test measures the intended construct.

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.
  • Reproducing research entails reanalysing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalisations – often the goal of quantitative research . As such, a snowball sample is not representative of the target population, and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones. 

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extra-marital affairs)

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
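The contrast can be sketched as follows, with a hypothetical population of 1,000 people split into urban and rural strata; the quota version models convenience-style recruitment by simply taking the first people encountered in each stratum:

```python
import random

random.seed(3)
# Hypothetical population: 600 urban and 400 rural members.
population = [{"id": i, "stratum": "urban" if i < 600 else "rural"}
              for i in range(1000)]

def stratified_sample(pop, per_stratum):
    # Probability sampling: random draw within each stratum.
    sample = []
    for stratum in ["urban", "rural"]:
        members = [p for p in pop if p["stratum"] == stratum]
        sample += random.sample(members, per_stratum)
    return sample

def quota_sample(pop, per_stratum):
    # Non-probability sampling: take whoever comes first until
    # each quota fills, so later members have no chance of selection.
    counts = {"urban": 0, "rural": 0}
    sample = []
    for p in pop:
        if counts[p["stratum"]] < per_stratum:
            sample.append(p)
            counts[p["stratum"]] += 1
    return sample

print(len(stratified_sample(population, 50)),
      len(quota_sample(population, 50)))      # 100 100
```

Both samples match the 50/50 quota, but only the stratified one gives every member of each stratum an equal chance of inclusion.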

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection , using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

When your population is large in size, geographically dispersed, or difficult to contact, it’s necessary to use a sampling method .

This allows you to gather information from a smaller part of the population, i.e. the sample, and make accurate statements by using statistical analysis. A few sampling methods include simple random sampling , convenience sampling , and snowball sampling .

The two main types of social desirability bias are:

  • Self-deceptive enhancement (self-deception): The tendency to see oneself in a favorable light without realizing it.
  • Impression management (other-deception): The tendency to inflate one’s abilities or achievements in order to make a good impression on other people.

Response bias refers to conditions or factors that take place during the process of responding to surveys, affecting the responses. One type of response bias is social desirability bias .

Demand characteristics are aspects of experiments that may give away the research objective to participants. Social desirability bias occurs when participants automatically try to respond in ways that make them seem likeable in a study, even if it means misrepresenting how they truly feel.

Participants may use demand characteristics to infer social norms or experimenter expectancies and act in socially desirable ways, so you should try to control for demand characteristics wherever possible.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor decides either to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Then, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered highly credible sources due to the stringent process they go through before publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analysing the data.

Blinding is important to reduce bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behaviour in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
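The numbered-list-plus-generator procedure above can be sketched in a few lines of Python. The sample size and seed are hypothetical; shuffling the numbered list and splitting it in half is one simple way to implement the lottery method.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical sample: unique numbers 1-20, one per participant
participants = list(range(1, 21))

# Shuffle the numbers, then split the list in half:
# first half -> control group, second half -> experimental group
random.shuffle(participants)
control = sorted(participants[:10])
treatment = sorted(participants[10:])
```

Every participant lands in exactly one group, and the split is driven purely by chance rather than by any participant characteristic.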

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimise the amount of data cleaning you’ll need to do.

After data collection, you can use data standardisation and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
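The post-collection steps just described (duplicates, missing values, implausible outliers) can be sketched with hypothetical survey records in Python. The records and the plausible-age range are invented for illustration.

```python
# Hypothetical raw survey records with common "dirty data" problems
raw = [
    {"id": 1, "age": 29},
    {"id": 2, "age": None},   # missing value
    {"id": 3, "age": 31},
    {"id": 3, "age": 31},     # duplicate entry
    {"id": 4, "age": 290},    # implausible value, likely a typo
]

# 1. Remove duplicate records (same id)
seen, deduped = set(), []
for record in raw:
    if record["id"] not in seen:
        seen.add(record["id"])
        deduped.append(record)

# 2. Drop records with missing values
complete = [r for r in deduped if r["age"] is not None]

# 3. Validate against a plausible range (a simple form of outlier handling)
clean = [r for r in complete if 0 < r["age"] < 120]
```

In practice you might impute missing values or investigate outliers rather than dropping them, but the order of operations (dedupe, handle missingness, validate) is the same.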

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyse, detect, modify, or remove ‘dirty’ data to make your dataset ‘clean’. Data cleaning is also called data cleansing or data scrubbing.

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimise or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can have serious practical consequences, leading to misplaced investments or missed opportunities.

Observer bias occurs when a researcher’s expectations, opinions, or prejudices influence what they perceive or record in a study. It usually affects studies when observers are aware of the research aims or hypotheses. This type of research bias is also called detection bias or ascertainment bias .

The observer-expectancy effect occurs when researchers influence the results of their own study through interactions with participants.

Researchers’ own beliefs and expectations about the study results may unintentionally influence participants through demand characteristics .

You can use several tactics to minimise observer bias .

  • Use masking (blinding) to hide the purpose of your study from all observers.
  • Triangulate your data with different data collection methods or sources.
  • Use multiple observers and ensure inter-rater reliability.
  • Train your observers to make sure data is consistently recorded between them.
  • Standardise your observation procedures to make sure they are structured and clear.

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviours of your research subjects in real-world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as ‘people watching’ with a purpose.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment
  • Random assignment of participants to ensure the groups are equivalent

Depending on your study topic, there are various other methods of controlling variables .

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

A true experiment (aka a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
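Combining item scores into an overall scale score can be sketched briefly. The items, responses, and the choice of which item is reverse-coded are hypothetical; the key step is flipping negatively worded items so that a high score always means the same direction of attitude.

```python
# Hypothetical 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree)
# measuring one attitude with four items; item2 is negatively worded.
responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 4}
reverse_coded = {"item2"}

def likert_score(resp, reverse, points=5):
    # Reverse-code negatively worded items (on a 5-point scale, 2 becomes 4)
    return sum((points + 1 - v) if item in reverse else v
               for item, v in resp.items())

total = likert_score(responses, reverse_coded)  # 4 + 4 + 5 + 4 = 17
```

With four items on a 5-point scale, the combined score can range from 4 to 20, which is the kind of summed score sometimes treated as interval data.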

The type of data determines what statistical tests you should use to analyse your data.

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data are available for analysis; other times your research question may only require a cross-sectional study to answer it.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyse behaviour over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study                          Cross-sectional study
Repeated observations                       Observations at a single point in time
Observes the same sample multiple times     Observes different samples (a ‘cross-section’) of the population
Follows changes in participants over time   Provides a snapshot of society at a given point

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.
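Pearson’s r can be computed directly from its definition: the covariance of the two variables divided by the product of their standard deviations. The data below (hours studied vs. exam score) are hypothetical, chosen to show a strong positive linear relationship.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 70, 72]
r = pearson_r(hours, scores)  # close to +1: a strong positive correlation
```

An r near +1 or −1 indicates a strong linear relationship, while an r near 0 indicates little or no linear relationship.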

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups . Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with ‘yes’ or ‘no’ (questions that start with ‘why’ or ‘how’ are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

Social desirability bias is the tendency for interview participants to give responses that will be viewed favourably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias in research can also occur in observations if the participants know they’re being observed. They might alter their behaviour accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews .

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualise your initial thoughts and hypotheses
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing high-quality interview questions.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyse your data quickly and efficiently
  • Your research question depends on strong parity between participants, with environmental conditions held constant

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

If something is a mediating variable :

  • It’s caused by the independent variable
  • It influences the dependent variable
  • When it’s statistically accounted for, the correlation between the independent and dependent variables becomes weaker than when it isn’t considered

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g., the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g., water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

You want to find out how blood sugar levels are affected by drinking diet cola and regular cola, so you conduct an experiment .

  • The type of cola – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of cola.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control, and randomisation.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomisation , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
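Of the four methods above, matching is straightforward to sketch. In this hypothetical example, each treated subject is paired with an untreated subject from a comparison pool who has the same value of a potential confounder (here, an invented age band); all names and data are illustrative.

```python
# Hypothetical subjects: match each treated subject with an untreated
# counterpart who shares the same value of the confounder (age band).
treated = [{"id": "T1", "age_band": "20s"}, {"id": "T2", "age_band": "30s"}]
pool = [{"id": "C1", "age_band": "30s"},
        {"id": "C2", "age_band": "20s"},
        {"id": "C3", "age_band": "40s"}]

def match_controls(treated, pool, key):
    available = list(pool)
    pairs = []
    for subject in treated:
        for candidate in available:
            if candidate[key] == subject[key]:
                pairs.append((subject["id"], candidate["id"]))
                available.remove(candidate)  # each control is used once
                break
    return pairs

pairs = match_controls(treated, pool, "age_band")
```

Because the matched pairs share the confounder’s value, any difference in outcomes within a pair cannot be attributed to that confounder.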

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalisation .

In statistics, ordinal and nominal variables are both considered categorical variables .

Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

‘Controlling for a variable’ means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

There are 4 main types of extraneous variables :

  • Demand characteristics : Environmental cues that encourage participants to conform to researchers’ expectations
  • Experimenter effects : Unintentional actions by researchers that influence study outcomes
  • Situational variables : Environmental variables that alter participants’ behaviours
  • Participant variables : Any characteristic or aspect of a participant’s background that could affect study results

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

The term ‘explanatory variable’ is sometimes preferred over ‘independent variable’ because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so ‘explanatory variables’ is a more appropriate term.

On graphs, the explanatory variable is conventionally placed on the x -axis, while the response variable is placed on the y -axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called ‘independent’ because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation)

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it ‘depends’ on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalisation : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalisation: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity ; the others are face validity , content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Attrition bias can skew your sample so that your final sample differs significantly from your original sample. Your sample is biased because some groups from your population are underrepresented.

With a biased final sample, you may not be able to generalise your findings to the original population that you sampled from, so your external validity is compromised.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalise to other groups of people) and ecological validity (whether you can generalise to other situations and settings).

The external validity of a study is the extent to which you can generalise your findings to different groups of people, situations, and measures.

Attrition bias is a threat to internal validity . In experiments, differential rates of attrition between treatment and control groups can skew results.

This bias can affect the relationship between your independent and dependent variables . It can make variables appear to be correlated when they are not, or vice versa.

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction, and attrition .

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
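A small simulation with made-up numbers can illustrate the distinction: the population mean below is a parameter, and the mean of a random sample drawn from it is a statistic.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical population of 1,000 incomes; its mean is a parameter
population = [random.gauss(50_000, 8_000) for _ in range(1_000)]
parameter = mean(population)

# The mean of a random sample from it is a statistic
sample = random.sample(population, 100)
statistic = mean(sample)

# Sampling error: the difference between the parameter and the statistic
sampling_error = parameter - statistic
```

Rerunning this with a different seed gives a different sample, a different statistic, and therefore a different sampling error, while the parameter stays fixed.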

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population by your target sample size.
  • Choose every k th member of the population as your sample.
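The three steps above can be sketched in a few lines of Python; the population list and target sample size are hypothetical:

```python
import random

random.seed(0)

# Step 1: a hypothetical population list of 100 people, in no cyclical order
population = [f"person_{i}" for i in range(1, 101)]

# Step 2: calculate the interval k from the target sample size
target_sample_size = 20
k = len(population) // target_sample_size  # 100 // 20 = 5

# Step 3: choose every k-th member, starting at a random point
# within the first interval
start = random.randrange(k)
sample = population[start::k]
```

Starting at a random point within the first interval is what lets systematic sampling imitate simple random sampling when the list is in random order.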

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 × 5 = 15 subgroups.
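That multiplication can be checked with a short sketch using `itertools.product`, which enumerates every combination of subgroups:

```python
from itertools import product

# The two stratification characteristics from the example above
location = ["urban", "rural", "suburban"]
marital_status = ["single", "divorced", "widowed", "married", "partnered"]

# Each participant belongs to exactly one (location, marital status) stratum
strata = list(product(location, marital_status))
n_strata = len(strata)  # 3 * 5 = 15 subgroups
```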

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method .

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

In multistage sampling , you can use probability or non-probability sampling methods.

For a probability sample, you have to use probability sampling at every stage. You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
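The first two types can be sketched with a hypothetical population of ten schools with 30 students each (all names are invented):

```python
import random

random.seed(1)

# Hypothetical population grouped into 10 clusters (schools) of 30 students
clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(30)] for i in range(10)}

# All types start the same way: randomly select some clusters
chosen = random.sample(list(clusters), 3)

# Single-stage: collect data from every unit in the selected clusters
single_stage = [unit for c in chosen for unit in clusters[c]]

# Double-stage: randomly sample units from within the selected clusters
double_stage = [unit for c in chosen for unit in random.sample(clusters[c], 10)]
```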

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data are then collected from as large a percentage as possible of this random subset.

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from county to city to neighbourhood) to create a sample that’s less expensive and time-consuming to collect data from.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling , and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
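A minimal way to implement random assignment, assuming a simple two-group design with hypothetical participant IDs, is to shuffle the sample and split it:

```python
import random

random.seed(7)

# Hypothetical sample of 10 participant IDs
participants = [f"p{i}" for i in range(10)]

# Shuffle, then split: every participant has an equal chance of
# ending up in the control or the experimental group
random.shuffle(participants)
control = participants[:5]
experimental = participants[5:]
```

Shuffling before splitting is what gives every member of the sample an equal chance of being placed in either group.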

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference between this and a true experiment is that the groups are not randomly assigned.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Triangulation can help:

  • Reduce bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labour-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analysing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

An observational study could be a good fit for your research if your research question is based on things you observe. If you have ethical, logistical, or practical concerns that make an experimental design challenging, consider an observational study. Remember that in an observational study, it is critical that there be no interference or manipulation of the research subjects. Since it’s not an experiment, there are no control or treatment groups either.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analysed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualise your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analysed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
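One self-contained way to illustrate this logic is a permutation test on hypothetical group scores (all numbers invented): it estimates how often a group difference at least as large as the observed one would arise by chance alone.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical scores for a treatment group and a control group
treatment = [12, 14, 11, 15, 13, 16]
control = [10, 9, 11, 8, 10, 12]

observed = mean(treatment) - mean(control)

# Permutation test: reshuffle the pooled scores many times and count
# how often a difference at least as extreme arises by chance
pooled = treatment + control
extreme = 0
n_perms = 5_000
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / n_perms  # small p-value: unlikely under chance alone
```

A small p-value means the observed pattern would rarely arise if group membership were irrelevant, which is the core idea behind hypothesis testing.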

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organisation to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organise your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Our support team is here to help you daily via chat, WhatsApp, email, or phone between 9:00 a.m. to 11:00 p.m. CET.

Our APA experts default to APA 7 for editing and formatting. For the Citation Editing Service you are able to choose between APA 6 and 7.

Yes, if your document is longer than 20,000 words, you will get a sample of approximately 2,000 words. This sample edit gives you a first impression of the editor’s editing style and a chance to ask questions and give feedback.

How does the sample edit work?

You will receive the sample edit within 24 hours after placing your order. You then have 24 hours to let us know if you’re happy with the sample or if there’s something you would like the editor to do differently.


Yes, you can upload your document in sections.

We try our best to ensure that the same editor checks all the different sections of your document. When you upload a new file, our system recognises you as a returning customer, and we immediately contact the editor who helped you before.

However, we cannot guarantee that the same editor will be available. Your chances are higher if

  • You send us your text as soon as possible and
  • You can be flexible about the deadline.

Please note that the shorter your deadline is, the lower the chance that your previous editor is available.

If your previous editor isn’t available, then we will inform you immediately and look for another qualified editor. Fear not! Every Scribbr editor follows the  Scribbr Improvement Model  and will deliver high-quality work.

Yes, our editors also work during the weekends and holidays.

Because we have many editors available, we can check your document 24 hours per day and 7 days per week, all year round.

If you choose a 72 hour deadline and upload your document on a Thursday evening, you’ll have your thesis back by Sunday evening!

Yes! Our editors are all native speakers, and they have lots of experience editing texts written by ESL students. They will make sure your grammar is perfect and point out any sentences that are difficult to understand. They’ll also notice your most common mistakes, and give you personal feedback to improve your writing in English.

Every Scribbr order comes with our award-winning Proofreading & Editing service , which combines two important stages of the revision process.

For a more comprehensive edit, you can add a Structure Check or Clarity Check to your order. With these building blocks, you can customize the kind of feedback you receive.

You might be familiar with a different set of editing terms. To help you understand what you can expect at Scribbr, we created this table:

Types of editing and whether they are available at Scribbr:

  • Proofreading: Yes. This is the “proofreading” in Scribbr’s standard service. It can only be selected in combination with editing.
  • Copy editing: Yes. This is the “editing” in Scribbr’s standard service. It can only be selected in combination with proofreading.
  • Line editing: Yes. Select the Structure Check and Clarity Check to receive a comprehensive edit equivalent to a line edit.
  • Developmental editing: No. This kind of editing involves heavy rewriting and restructuring. Our editors cannot help with this.


When you place an order, you can specify your field of study and we’ll match you with an editor who has familiarity with this area.

However, our editors are language specialists, not academic experts in your field. Your editor’s job is not to comment on the content of your dissertation, but to improve your language and help you express your ideas as clearly and fluently as possible.

This means that your editor will understand your text well enough to give feedback on its clarity, logic and structure, but not on the accuracy or originality of its content.

Good academic writing should be understandable to a non-expert reader, and we believe that academic editing is a discipline in itself. The research, ideas and arguments are all yours – we’re here to make sure they shine!

After your document has been edited, you will receive an email with a link to download the document.

The editor has made changes to your document using ‘Track Changes’ in Word. This means that you only have to accept or ignore the changes that are made in the text one by one.

It is also possible to accept all changes at once. However, we strongly advise you not to do so for the following reasons:

  • You can learn a lot by looking at the mistakes you made.
  • The editors don’t only change the text – they also place comments when sentences or sometimes even entire paragraphs are unclear. You should read through these comments and take into account your editor’s tips and suggestions.
  • With a final read-through, you can make sure you’re 100% happy with your text before you submit!

You choose the turnaround time when ordering. We can return your dissertation within 24 hours , 3 days or 1 week . These timescales include weekends and holidays. As soon as you’ve paid, the deadline is set, and we guarantee to meet it! We’ll notify you by text and email when your editor has completed the job.

Very large orders might not be possible to complete in 24 hours. On average, our editors can complete around 13,000 words in a day while maintaining our high quality standards. If your order is longer than this and urgent, contact us to discuss possibilities.

Always leave yourself enough time to check through the document and accept the changes before your submission deadline.

Scribbr specialises in editing study-related documents. We check:

  • Graduation projects
  • Dissertations
  • Admissions essays
  • College essays
  • Application essays
  • Personal statements
  • Process reports
  • Reflections
  • Internship reports
  • Academic papers
  • Research proposals
  • Prospectuses


The fastest turnaround time is 24 hours.

You can upload your document at any time and choose between four deadlines:

At Scribbr, we promise to make every customer 100% happy with the service we offer. Our philosophy: Your complaint is always justified – no denial, no doubts.

Our customer support team is here to find the solution that helps you the most, whether that’s a free new edit or a refund for the service.

Yes, in the order process you can indicate your preference for American, British, or Australian English .

If you don’t choose one, your editor will follow the style of English you currently use. If your editor has any questions about this, we will contact you.


7.2 Correlational Research

Learning Objectives

  • Define correlational research and give several examples.
  • Explain why a researcher might choose to conduct correlational research rather than experimental research or another type of nonexperimental research.

What Is Correlational Research?

Correlational research is a type of nonexperimental research in which the researcher measures two variables and assesses the statistical relationship (i.e., the correlation) between them with little or no effort to control extraneous variables. There are essentially two reasons that researchers interested in statistical relationships between variables would choose to conduct a correlational study rather than an experiment. The first is that they do not believe that the statistical relationship is a causal one. For example, a researcher might evaluate the validity of a brief extraversion test by administering it to a large group of participants along with a longer extraversion test that has already been shown to be valid. This researcher might then check to see whether participants’ scores on the brief test are strongly correlated with their scores on the longer one. Neither test score is thought to cause the other, so there is no independent variable to manipulate. In fact, the terms independent variable and dependent variable do not apply to this kind of research.

The other reason that researchers would choose to use a correlational study rather than an experiment is that the statistical relationship of interest is thought to be causal, but the researcher cannot manipulate the independent variable because it is impossible, impractical, or unethical. For example, Allen Kanner and his colleagues thought that the number of “daily hassles” (e.g., rude salespeople, heavy traffic) that people experience affects the number of physical and psychological symptoms they have (Kanner, Coyne, Schaefer, & Lazarus, 1981). But because they could not manipulate the number of daily hassles their participants experienced, they had to settle for measuring the number of daily hassles—along with the number of symptoms—using self-report questionnaires. Although the strong positive relationship they found between these two variables is consistent with their idea that hassles cause symptoms, it is also consistent with the idea that symptoms cause hassles or that some third variable (e.g., neuroticism) causes both.

A common misconception among beginning researchers is that correlational research must involve two quantitative variables, such as scores on two extraversion tests or the number of hassles and number of symptoms people have experienced. However, the defining feature of correlational research is that the two variables are measured—neither one is manipulated—and this is true regardless of whether the variables are quantitative or categorical. Imagine, for example, that a researcher administers the Rosenberg Self-Esteem Scale to 50 American college students and 50 Japanese college students. Although this “feels” like a between-subjects experiment, it is a correlational study because the researcher did not manipulate the students’ nationalities. The same is true of the study by Cacioppo and Petty comparing college faculty and factory workers in terms of their need for cognition. It is a correlational study because the researchers did not manipulate the participants’ occupations.

Figure 7.2 “Results of a Hypothetical Study on Whether People Who Make Daily To-Do Lists Experience Less Stress Than People Who Do Not Make Such Lists” shows data from a hypothetical study on the relationship between whether people make a daily list of things to do (a “to-do list”) and stress. Notice that it is unclear whether this is an experiment or a correlational study because it is unclear whether the independent variable was manipulated. If the researcher randomly assigned some participants to make daily to-do lists and others not to, then it is an experiment. If the researcher simply asked participants whether they made daily to-do lists, then it is a correlational study. The distinction is important because if the study was an experiment, then it could be concluded that making the daily to-do lists reduced participants’ stress. But if it was a correlational study, it could only be concluded that these variables are statistically related. Perhaps being stressed has a negative effect on people’s ability to plan ahead (the directionality problem). Or perhaps people who are more conscientious are more likely to make to-do lists and less likely to be stressed (the third-variable problem). The crucial point is that what defines a study as experimental or correlational is not the variables being studied, nor whether the variables are quantitative or categorical, nor the type of graph or statistics used to analyze the data. It is how the study is conducted.

Figure 7.2 Results of a Hypothetical Study on Whether People Who Make Daily To-Do Lists Experience Less Stress Than People Who Do Not Make Such Lists


Data Collection in Correlational Research

Again, the defining feature of correlational research is that neither variable is manipulated. It does not matter how or where the variables are measured. A researcher could have participants come to a laboratory to complete a computerized backward digit span task and a computerized risky decision-making task and then assess the relationship between participants’ scores on the two tasks. Or a researcher could go to a shopping mall to ask people about their attitudes toward the environment and their shopping habits and then assess the relationship between these two variables. Both of these studies would be correlational because no independent variable is manipulated. However, because some approaches to data collection are strongly associated with correlational research, it makes sense to discuss them here. The two we will focus on are naturalistic observation and archival data. A third, survey research, is discussed in its own chapter.

Naturalistic Observation

Naturalistic observation is an approach to data collection that involves observing people’s behavior in the environment in which it typically occurs. Thus naturalistic observation is a type of field research (as opposed to a type of laboratory research). It could involve observing shoppers in a grocery store, children on a school playground, or psychiatric inpatients in their wards. Researchers engaged in naturalistic observation usually make their observations as unobtrusively as possible so that participants are often not aware that they are being studied. Ethically, this is considered to be acceptable if the participants remain anonymous and the behavior occurs in a public setting where people would not normally have an expectation of privacy. Grocery shoppers putting items into their shopping carts, for example, are engaged in public behavior that is easily observable by store employees and other shoppers. For this reason, most researchers would consider it ethically acceptable to observe them for a study. On the other hand, one of the arguments against the ethicality of the naturalistic observation of “bathroom behavior” discussed earlier in the book is that people have a reasonable expectation of privacy even in a public restroom and that this expectation was violated.

Researchers Robert Levine and Ara Norenzayan used naturalistic observation to study differences in the “pace of life” across countries (Levine & Norenzayan, 1999). One of their measures involved observing pedestrians in a large city to see how long it took them to walk 60 feet. They found that people in some countries walked reliably faster than people in other countries. For example, people in the United States and Japan covered 60 feet in about 12 seconds on average, while people in Brazil and Romania took close to 17 seconds.

Because naturalistic observation takes place in the complex and even chaotic “real world,” there are two closely related issues that researchers must deal with before collecting data. The first is sampling. When, where, and under what conditions will the observations be made, and who exactly will be observed? Levine and Norenzayan described their sampling process as follows:

Male and female walking speed over a distance of 60 feet was measured in at least two locations in main downtown areas in each city. Measurements were taken during main business hours on clear summer days. All locations were flat, unobstructed, had broad sidewalks, and were sufficiently uncrowded to allow pedestrians to move at potentially maximum speeds. To control for the effects of socializing, only pedestrians walking alone were used. Children, individuals with obvious physical handicaps, and window-shoppers were not timed. Thirty-five men and 35 women were timed in most cities. (p. 186)

Precise specification of the sampling process in this way makes data collection manageable for the observers, and it also provides some control over important extraneous variables. For example, by making their observations on clear summer days in all countries, Levine and Norenzayan controlled for effects of the weather on people’s walking speeds.

The second issue is measurement. What specific behaviors will be observed? In Levine and Norenzayan’s study, measurement was relatively straightforward. They simply measured out a 60-foot distance along a city sidewalk and then used a stopwatch to time participants as they walked over that distance. Often, however, the behaviors of interest are not so obvious or objective. For example, researchers Robert Kraut and Robert Johnston wanted to study bowlers’ reactions to their shots, both when they were facing the pins and then when they turned toward their companions (Kraut & Johnston, 1979). But what “reactions” should they observe? Based on previous research and their own pilot testing, Kraut and Johnston created a list of reactions that included “closed smile,” “open smile,” “laugh,” “neutral face,” “look down,” “look away,” and “face cover” (covering one’s face with one’s hands). The observers committed this list to memory and then practiced by coding the reactions of bowlers who had been videotaped. During the actual study, the observers spoke into an audio recorder, describing the reactions they observed. Among the most interesting results of this study was that bowlers rarely smiled while they still faced the pins. They were much more likely to smile after they turned toward their companions, suggesting that smiling is not purely an expression of happiness but also a form of social communication.


Naturalistic observation has revealed that bowlers tend to smile when they turn away from the pins and toward their companions, suggesting that smiling is not purely an expression of happiness but also a form of social communication.

sieneke toering – bowling big lebowski style – CC BY-NC-ND 2.0.

When the observations require a judgment on the part of the observers—as in Kraut and Johnston’s study—this process is often described as coding . Coding generally requires clearly defining a set of target behaviors. The observers then categorize participants individually in terms of which behavior they have engaged in and the number of times they engaged in each behavior. The observers might even record the duration of each behavior. The target behaviors must be defined in such a way that different observers code them in the same way. This is the issue of interrater reliability. Researchers are expected to demonstrate the interrater reliability of their coding procedure by having multiple raters code the same behaviors independently and then showing that the different observers are in close agreement. Kraut and Johnston, for example, video recorded a subset of their participants’ reactions and had two observers independently code them. The two observers showed that they agreed on the reactions that were exhibited 97% of the time, indicating good interrater reliability.
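Percent agreement of the kind Kraut and Johnston reported is simple to compute: the proportion of observations on which two coders assigned the same code. The sketch below uses invented codes for ten hypothetical shots:

```python
# Hypothetical reaction codes assigned by two independent observers
# to the same ten bowling shots (categories from Kraut & Johnston's list).
coder_a = ["open smile", "neutral face", "laugh", "closed smile", "look down",
           "open smile", "neutral face", "face cover", "laugh", "look away"]
coder_b = ["open smile", "neutral face", "laugh", "closed smile", "look away",
           "open smile", "neutral face", "face cover", "laugh", "look away"]

def percent_agreement(a, b):
    """Proportion of observations on which both coders agree."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

print(percent_agreement(coder_a, coder_b))  # → 0.9 (9 of 10 codes match)
```

Because some agreement is expected by chance alone, researchers often also report a chance-corrected index such as Cohen's kappa rather than raw percent agreement.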

Archival Data

Another approach to correlational research is the use of archival data , which are data that have already been collected for some other purpose. An example is a study by Brett Pelham and his colleagues on “implicit egotism”—the tendency for people to prefer people, places, and things that are similar to themselves (Pelham, Carvallo, & Jones, 2005). In one study, they examined Social Security records to show that women with the names Virginia, Georgia, Louise, and Florence were especially likely to have moved to the states of Virginia, Georgia, Louisiana, and Florida, respectively.

As with naturalistic observation, measurement can be more or less straightforward when working with archival data. For example, counting the number of people named Virginia who live in various states based on Social Security records is relatively straightforward. But consider a study by Christopher Peterson and his colleagues on the relationship between optimism and health using data that had been collected many years before for a study on adult development (Peterson, Seligman, & Vaillant, 1988). In the 1940s, healthy male college students had completed an open-ended questionnaire about difficult wartime experiences. In the late 1980s, Peterson and his colleagues reviewed the men’s questionnaire responses to obtain a measure of explanatory style—their habitual ways of explaining bad events that happen to them. More pessimistic people tend to blame themselves and expect long-term negative consequences that affect many aspects of their lives, while more optimistic people tend to blame outside forces and expect limited negative consequences. To obtain a measure of explanatory style for each participant, the researchers used a procedure in which all negative events mentioned in the questionnaire responses, and any causal explanations for them, were identified and written on index cards. These were given to a separate group of raters who rated each explanation in terms of three separate dimensions of optimism-pessimism. These ratings were then averaged to produce an explanatory style score for each participant. The researchers then assessed the statistical relationship between the men’s explanatory style as college students and archival measures of their health at approximately 60 years of age. The primary result was that the more optimistic the men were as college students, the healthier they were as older men. Pearson’s r was +.25.

This is an example of content analysis —a family of systematic approaches to measurement using complex archival data. Just as naturalistic observation requires specifying the behaviors of interest and then noting them as they occur, content analysis requires specifying keywords, phrases, or ideas and then finding all occurrences of them in the data. These occurrences can then be counted, timed (e.g., the amount of time devoted to entertainment topics on the nightly news show), or analyzed in a variety of other ways.
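The counting step of content analysis can be sketched in a few lines (the records and keywords here are invented; real coding schemes are far more elaborate and usually involve trained raters, not just string matching):

```python
# Minimal content-analysis sketch: count occurrences of specified
# keywords across a set of archival text records.
records = [
    "The traffic was terrible and I blamed myself for leaving late.",
    "Bad luck with the weather ruined the trip.",
    "I blamed myself again for the failed exam.",
]
keywords = ["blamed myself", "bad luck"]

def keyword_counts(texts, terms):
    """Total occurrences of each keyword across all records (case-insensitive)."""
    return {term: sum(t.lower().count(term) for t in texts) for term in terms}

print(keyword_counts(records, keywords))  # → {'blamed myself': 2, 'bad luck': 1}
```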

Key Takeaways

  • Correlational research involves measuring two variables and assessing the relationship between them, with no manipulation of an independent variable.
  • Correlational research is not defined by where or how the data are collected. However, some approaches to data collection are strongly associated with correlational research. These include naturalistic observation (in which researchers observe people’s behavior in the context in which it normally occurs) and the use of archival data that were already collected for some other purpose.

Discussion: For each of the following, decide whether it is most likely that the study described is experimental or correlational and explain why.

  • An educational researcher compares the academic performance of students from the “rich” side of town with that of students from the “poor” side of town.
  • A cognitive psychologist compares the ability of people to recall words that they were instructed to “read” with their ability to recall words that they were instructed to “imagine.”
  • A manager studies the correlation between new employees’ college grade point averages and their first-year performance reports.
  • An automotive engineer installs different stick shifts in a new car prototype, each time asking several people to rate how comfortable the stick shift feels.
  • A food scientist studies the relationship between the temperature inside people’s refrigerators and the amount of bacteria on their food.
  • A social psychologist tells some research participants that they need to hurry over to the next building to complete a study. She tells others that they can take their time. Then she observes whether they stop to help a research assistant who is pretending to be hurt.

Kanner, A. D., Coyne, J. C., Schaefer, C., & Lazarus, R. S. (1981). Comparison of two modes of stress measurement: Daily hassles and uplifts versus major life events. Journal of Behavioral Medicine, 4 , 1–39.

Kraut, R. E., & Johnston, R. E. (1979). Social and emotional messages of smiling: An ethological approach. Journal of Personality and Social Psychology, 37 , 1539–1553.

Levine, R. V., & Norenzayan, A. (1999). The pace of life in 31 countries. Journal of Cross-Cultural Psychology, 30 , 178–205.

Pelham, B. W., Carvallo, M., & Jones, J. T. (2005). Implicit egotism. Current Directions in Psychological Science, 14 , 106–110.

Peterson, C., Seligman, M. E. P., & Vaillant, G. E. (1988). Pessimistic explanatory style is a risk factor for physical illness: A thirty-five year longitudinal study. Journal of Personality and Social Psychology, 55 , 23–27.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

A Modern Guide to Understanding and Conducting Research in Psychology

Chapter 7: Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions ( Cook et al., 1979 ) . Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here, focusing first on nonequivalent groups, pretest-posttest, interrupted time series, and combination designs before turning to single-subject designs (including reversal and multiple-baseline designs).

7.1 Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students (say, Ms. Williams’s class) and a control group consisting of another (Mr. Jones’s class). This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.

7.2 Pretest-Posttest Design

In a pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of a STEM education program on elementary school students’ attitudes toward science, technology, engineering, and math. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the STEM program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history . Other things might have happened between the pretest and the posttest. Perhaps a science program aired on television and many of the students watched it, or perhaps a major scientific discovery occurred and many of the students heard about it. Another category of alternative explanations goes under the name of maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might simply have been exposed to more STEM content in their regular classes or might have become better reasoners, and this, rather than the program, might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean . This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission . This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all ( Posternak & Miller, 2001 ) . 
Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
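Regression to the mean falls directly out of the arithmetic of noisy measurement. The simulation below (a sketch with arbitrary parameters) treats each observed score as a stable "true" ability plus random error, selects the lowest scorers on a first test, and shows that their average rises on a retest even though nothing about them has changed:

```python
import random

random.seed(1)  # fixed seed so the simulation is reproducible

# Each person has a stable true ability; each test adds independent noise.
true_ability = [random.gauss(100, 10) for _ in range(1000)]
test1 = [t + random.gauss(0, 15) for t in true_ability]
test2 = [t + random.gauss(0, 15) for t in true_ability]

# Select the 100 people with the lowest test-1 scores, as if choosing
# poor performers for a (nonexistent) training program.
lowest = sorted(range(1000), key=lambda i: test1[i])[:100]
mean1 = sum(test1[i] for i in lowest) / 100
mean2 = sum(test2[i] for i in lowest) / 100
print(round(mean1, 1), round(mean2, 1))  # the test-2 mean is markedly higher
```

The selected group improves with no treatment at all, because people chosen for extreme scores were partly chosen for their unlucky noise, which does not repeat on the retest.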

Finally, it is possible that the act of taking a pretest can sensitize participants to the measurement process or heighten their awareness of the variable under investigation. This heightened sensitivity, called a testing effect , can subsequently lead to changes in their posttest responses, even in the absence of any external intervention effect.

7.3 Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design . A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this is “interrupted” by a treatment. In a recent COVID-19 study, the intervention involved the implementation of state-issued mask mandates and restrictions on on-premises restaurant dining. The researchers examined the impact of these measures on COVID-19 cases and deaths (Guy Jr et al., 2021). Since there was a rapid reduction in daily case and death growth rates following the implementation of mask mandates, and this effect persisted for an extended period, the researchers concluded that the implementation of mask mandates was the cause of the decrease in COVID-19 transmission. This study employed an interrupted time-series design, similar to a pretest-posttest design, as it involved measuring the outcomes before and after the intervention. However, unlike the pretest-posttest design, it incorporated multiple measurements before and after the intervention, providing a more comprehensive analysis of the policy impacts.

Figure 7.1 shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.1 shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.1 shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Two line graphs. The x-axes on both are labeled Week and range from 0 to 14. The y-axes on both are labeled Absences and range from 0 to 8. Between weeks 7 and 8 a vertical dotted line indicates when a treatment was introduced. Both graphs show generally high levels of absences from weeks 1 through 7 (before the treatment) and only 2 absences in week 8 (the first observation after the treatment). The top graph shows the absence level staying low from weeks 9 to 14. The bottom graph shows the absence level for weeks 9 to 14 bouncing around at the same high levels as before the treatment.

Figure 7.1: Hypothetical interrupted time-series design. The top panel shows data that suggest that the treatment caused a reduction in absences. The bottom panel shows data that suggest that it did not.
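The logic of the design can be illustrated with a simple level-change summary, using invented absence counts patterned after the top panel of Figure 7.1. (Real interrupted time-series analyses, such as segmented regression, also model trends within each phase, which this sketch ignores.)

```python
# Hypothetical weekly absence counts: seven weeks before the instructor
# begins publicly taking attendance, and seven weeks after.
pre = [6, 7, 5, 8, 6, 7, 6]    # weeks 1-7, before the treatment
post = [2, 3, 2, 1, 2, 3, 2]   # weeks 8-14, after the treatment

def mean(xs):
    return sum(xs) / len(xs)

# A simple summary of the interruption: the change in mean level.
change = mean(post) - mean(pre)
print(round(mean(pre), 2), round(mean(post), 2), round(change, 2))
# → 6.43 2.14 -4.29
```

Having seven observations on each side of the interruption is what lets us see that the drop is sustained rather than ordinary week-to-week variation, which is the advantage over a single pretest and posttest.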

7.4 Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their current level of engagement in pro-environmental behaviors (i.e., recycling, eating less red meat, abstaining from single-use plastics, etc.), then are exposed to a pro-environmental program in which they learn about the effects of human-caused climate change on the planet, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to a pro-environmental program, and finally are given a posttest. Again, if students in the treatment condition become more involved in pro-environmental behaviors, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should engage in more pro-environmental behaviors than students in the control condition. But if it is a matter of history (e.g., news of a forest fire or drought) or maturation (e.g., improved reasoning or sense of responsibility), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a local heat wave with record high temperatures), so students at the first school would be affected by it while students at the other school would not.

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, this kind of design has now been conducted many times, for example, to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two college professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

7.5 Single-Subject Research

LEARNING OBJECTIVES

  • Explain what single-subject research is, including how it differs from other types of psychological research and who uses single-subject research and why.
  • Design simple single-subject studies using reversal and multiple-baseline designs.
  • Explain how single-subject research designs address the issue of internal validity.
  • Interpret the results of simple single-subject studies based on the visual inspection of graphed data.
  • Explain some of the points of disagreement between advocates of single-subject research and advocates of group research.

Researcher Vance Hall and his colleagues were faced with the challenge of increasing the extent to which six disruptive elementary school students stayed focused on their schoolwork ( Hall et al., 1968 ) . For each of several days, the researchers carefully recorded whether or not each student was doing schoolwork every 10 seconds during a 30-minute period. Once they had established this baseline, they introduced a treatment. The treatment was that when the student was doing schoolwork, the teacher gave him or her positive attention in the form of a comment like “good work” or a pat on the shoulder. The result was that all of the students dramatically increased their time spent on schoolwork and decreased their disruptive behavior during this treatment phase. For example, a student named Robbie originally spent 25% of his time on schoolwork and the other 75% “snapping rubber bands, playing with toys from his pocket, and talking and laughing with peers” (p. 3). During the treatment phase, however, he spent 71% of his time on schoolwork and only 29% on other activities. Finally, when the researchers had the teacher stop giving positive attention, the students all decreased their studying and increased their disruptive behavior. This was consistent with the claim that it was, in fact, the positive attention that was responsible for the increase in studying. This was one of the first studies to show that attending to positive behavior—and ignoring negative behavior—could be a quick and effective way to deal with problem behavior in an applied setting.


Figure 7.2: Single-subject research has shown that positive attention from a teacher for studying can increase studying and decrease disruptive behavior. Photo by Jerry Wang on Unsplash.

Most of this book is about what can be called group research, which typically involves studying a large number of participants and combining their data to draw general conclusions about human behavior. The study by Hall and his colleagues, in contrast, is an example of single-subject research, which typically involves studying a small number of participants and focusing closely on each individual. In this section, we consider this alternative approach. We begin with an overview of single-subject research, including some assumptions on which it is based, who conducts it, and why they do. We then look at some basic single-subject research designs and how the data from those designs are analyzed. Finally, we consider some of the strengths and weaknesses of single-subject research as compared with group research and see how these two approaches can complement each other.

Overview of Single-Subject Research

What Is Single-Subject Research?

Single-subject research is a type of quantitative, quasi-experimental research that involves studying in detail the behavior of each of a small number of participants. Note that the term single-subject does not mean that only one participant is studied; it is more typical for there to be somewhere between two and ten participants. (This is why single-subject research designs are sometimes called small-n designs, where n is the statistical symbol for the sample size.) Single-subject research can be contrasted with group research , which typically involves studying large numbers of participants and examining their behavior primarily in terms of group means, standard deviations, and so on. The majority of this book is devoted to understanding group research, which is the most common approach in psychology. But single-subject research is an important alternative, and it is the primary approach in some areas of psychology.

Before continuing, it is important to distinguish single-subject research from two other approaches, both of which involve studying in detail a small number of participants. One is qualitative research, which focuses on understanding people’s subjective experience by collecting relatively unstructured data (e.g., detailed interviews) and analyzing those data using narrative rather than quantitative techniques. Single-subject research, in contrast, focuses on understanding objective behavior through experimental manipulation and control, collecting highly structured data, and analyzing those data quantitatively.

It is also important to distinguish single-subject research from case studies. A case study is a detailed description of an individual, which can include both qualitative and quantitative analyses. (Case studies that include only qualitative analyses can be considered a type of qualitative research.) The history of psychology is filled with influential case studies, such as Sigmund Freud’s description of “Anna O.” (see box “The Case of ‘Anna O.’”) and John Watson and Rosalie Rayner’s description of Little Albert ( Watson & Rayner, 1920 ) , who learned to fear a white rat—along with other furry objects—when the researchers made a loud noise while he was playing with the rat. Case studies can be useful for suggesting new research questions and for illustrating general principles. They can also help researchers understand rare phenomena, such as the effects of damage to a specific part of the human brain. As a general rule, however, case studies cannot substitute for carefully designed group or single-subject research studies. One reason is that case studies usually do not allow researchers to determine whether specific events are causally related, or even related at all. For example, if a patient is described in a case study as having been sexually abused as a child and then as having developed an eating disorder as a teenager, there is no way to determine whether these two events had anything to do with each other. A second reason is that an individual case can always be unusual in some way and therefore be unrepresentative of people more generally. Thus case studies have serious problems with both internal and external validity.

The Case of “Anna O.”

Sigmund Freud used the case of a young woman he called “Anna O.” to illustrate many principles of his theory of psychoanalysis ( Freud, 1957 ) . (Her real name was Bertha Pappenheim, and she was an early feminist who went on to make important contributions to the field of social work.) Anna had come to Freud’s colleague Josef Breuer around 1880 with a variety of odd physical and psychological symptoms. One of them was that for several weeks she was unable to drink any fluids. According to Freud,

She would take up the glass of water that she longed for, but as soon as it touched her lips she would push it away like someone suffering from hydrophobia.…She lived only on fruit, such as melons, etc., so as to lessen her tormenting thirst (p. 9).

But according to Freud, a breakthrough came one day while Anna was under hypnosis.

[S]he grumbled about her English “lady-companion,” whom she did not care for, and went on to describe, with every sign of disgust, how she had once gone into this lady’s room and how her little dog—horrid creature!—had drunk out of a glass there. The patient had said nothing, as she had wanted to be polite. After giving further energetic expression to the anger she had held back, she asked for something to drink, drank a large quantity of water without any difficulty, and awoke from her hypnosis with the glass at her lips; and thereupon the disturbance vanished, never to return.

Freud’s interpretation was that Anna had repressed the memory of this incident along with the emotion that it triggered and that this was what had caused her inability to drink. Furthermore, her recollection of the incident, along with her expression of the emotion she had repressed, caused the symptom to go away.

As an illustration of Freud’s theory, the case study of Anna O. is quite effective. As evidence for the theory, however, it is essentially worthless. The description provides no way of knowing whether Anna had really repressed the memory of the dog drinking from the glass, whether this repression had caused her inability to drink, or whether recalling this “trauma” relieved the symptom. It is also unclear from this case study how typical or atypical Anna’s experience was.


Figure 7.3: “Anna O.” was the subject of a famous case study used by Freud to illustrate the principles of psychoanalysis. Source: Wikimedia Commons

Assumptions of Single-Subject Research

Again, single-subject research involves studying a small number of participants and focusing intensively on the behavior of each one. But why take this approach instead of the group approach? There are two important assumptions underlying single-subject research, and it will help to consider them now.

First and foremost is the assumption that it is important to focus intensively on the behavior of individual participants. One reason for this is that group research can hide individual differences and generate results that do not represent the behavior of any individual. For example, a treatment that has a positive effect for half the people exposed to it but a negative effect for the other half would, on average, appear to have no effect at all. Single-subject research, however, would likely reveal these individual differences. A second reason to focus intensively on individuals is that sometimes it is the behavior of a particular individual that is primarily of interest. A school psychologist, for example, might be interested in changing the behavior of a particular disruptive student. Although previous published research (both single-subject and group research) is likely to provide some guidance on how to do this, conducting a study on this student would be more direct and probably more effective.

Another assumption of single-subject research is that it is important to study strong and consistent effects that have biological or social importance. Applied researchers, in particular, are interested in treatments that have substantial effects on important behaviors and that can be implemented reliably in the real-world contexts in which they occur. This is sometimes referred to as social validity ( Wolf, 1978 ) . The study by Hall and his colleagues, for example, had good social validity because it showed strong and consistent effects of positive teacher attention on a behavior that is of obvious importance to teachers, parents, and students. Furthermore, the teachers found the treatment easy to implement, even in their often chaotic elementary school classrooms.

Who Uses Single-Subject Research?

Single-subject research has been around as long as the field of psychology itself. In the late 1800s, one of psychology’s founders, Wilhelm Wundt, studied sensation and consciousness by focusing intensively on each of a small number of research participants. Hermann Ebbinghaus’s research on memory and Ivan Pavlov’s research on classical conditioning are other early examples, both of which are still described in almost every introductory psychology textbook.

In the middle of the 20th century, B. F. Skinner clarified many of the assumptions underlying single-subject research and refined many of its techniques ( Skinner, 1938 ) . He and other researchers then used it to describe how rewards, punishments, and other external factors affect behavior over time. This work was carried out primarily using nonhuman subjects—mostly rats and pigeons. This approach, which Skinner called the experimental analysis of behavior, remains an important subfield of psychology and continues to rely almost exclusively on single-subject research. For examples of this work, look at any issue of the Journal of the Experimental Analysis of Behavior . By the 1960s, many researchers were interested in using this approach to conduct applied research primarily with humans—a subfield now called applied behavior analysis ( Baer et al., 1968 ) . Applied behavior analysis plays a significant role in contemporary research on developmental disabilities, education, organizational behavior, and health, among many other areas. Examples of this work (including the study by Hall and his colleagues) can be found in the Journal of Applied Behavior Analysis . The single-subject approach can also be used by clinicians who take any theoretical perspective—behavioral, cognitive, psychodynamic, or humanistic—to study processes of therapeutic change with individual clients and to document their clients’ improvement ( Kazdin, 2019 ) .

Single-Subject Research Designs

General Features of Single-Subject Designs

Before looking at any specific single-subject research designs, it will be helpful to consider some features that are common to most of them. Many of these features are illustrated in Figure 7.4 , which shows the results of a generic single-subject study. First, the dependent variable (represented on the y-axis of the graph) is measured repeatedly over time (represented by the x-axis) at regular intervals. Second, the study is divided into distinct phases, and the participant is tested under one condition per phase. The conditions are often designated by capital letters: A, B, C, and so on. Thus Figure 7.4 represents a design in which the participant was tested first in one condition (A), then tested in another condition (B), and finally retested in the original condition (A). (This is called a reversal design and will be discussed in more detail shortly.)


Figure 7.4: Results of a generic single-subject study illustrating several principles of single-subject research.

Another important aspect of single-subject research is that the change from one condition to the next does not usually occur after a fixed amount of time or number of observations. Instead, it depends on the participant’s behavior. Specifically, the researcher waits until the participant’s behavior in one condition becomes fairly consistent from observation to observation before changing conditions. This is sometimes referred to as the steady state strategy ( Sidman, 1960 ) . The idea is that when the dependent variable has reached a steady state, then any change across conditions will be relatively easy to detect. Recall that we encountered this same principle when discussing experimental research more generally. The effect of an independent variable is easier to detect when the “noise” in the data is minimized.
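The steady state idea can be sketched as a simple stability check. The sketch below is illustrative only: in practice, researchers usually judge stability by inspecting the plotted data, and the `window` and `tolerance` values here are arbitrary choices, not standard criteria.

```python
def is_steady_state(observations, window=5, tolerance=0.10):
    """Treat the last `window` observations as steady if their range is
    within `tolerance` as a fraction of their mean. Both parameters are
    arbitrary illustrative choices, not standard values."""
    if len(observations) < window:
        return False
    recent = observations[-window:]
    mean = sum(recent) / window
    if mean == 0:
        return max(recent) - min(recent) == 0
    return (max(recent) - min(recent)) / abs(mean) <= tolerance

# A noisy baseline that settles down over repeated observations:
baseline = [40, 55, 30, 48, 50, 49, 51, 50, 52]
print(is_steady_state(baseline))  # True: the last five values vary little
```

Under a rule like this, the researcher would keep observing until the check passes, then change conditions, so that any change across conditions stands out against the minimized "noise."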

Reversal Designs

The most basic single-subject research design is the reversal design , also called the ABA design . During the first phase, A, a baseline is established for the dependent variable. This is the level of responding before any treatment is introduced, and therefore the baseline phase is a kind of control condition. When steady state responding is reached, phase B begins as the researcher introduces the treatment. Again, the researcher waits until the dependent variable reaches a steady state so that it is clear whether and how much it has changed. Finally, the researcher removes the treatment and again waits until the dependent variable reaches a steady state. This basic reversal design can also be extended with the reintroduction of the treatment (ABAB), another return to baseline (ABABA), and so on. The study by Hall and his colleagues was an ABAB reversal design (Figure 7.5 ).


Figure 7.5: An approximation of the results for Hall and colleagues’ participant Robbie in their ABAB reversal design. The percentage of time he spent studying (the dependent variable) was low during the first baseline phase, increased during the first treatment phase until it leveled off, decreased during the second baseline phase, and again increased during the second treatment phase.

Why is the reversal—the removal of the treatment—considered to be necessary in this type of design? If the dependent variable changes after the treatment is introduced, it is not always clear that the treatment was responsible for the change. It is possible that something else changed at around the same time and that this extraneous variable is responsible for the change in the dependent variable. But if the dependent variable changes with the introduction of the treatment and then changes back with the removal of the treatment, it is much clearer that the treatment (and removal of the treatment) is the cause. In other words, the reversal greatly increases the internal validity of the study.

Multiple-Baseline Designs

There are two potential problems with the reversal design—both of which have to do with the removal of the treatment. One is that if a treatment is working, it may be unethical to remove it. For example, if a treatment seemed to reduce the incidence of self-injury in a developmentally disabled child, it would be unethical to remove that treatment just to show that the incidence of self-injury increases. The second problem is that the dependent variable may not return to baseline when the treatment is removed. For example, when positive attention for studying is removed, a student might continue to study at an increased rate. This could mean that the positive attention had a lasting effect on the student’s studying, which of course would be good, but it could also mean that the positive attention was not really the cause of the increased studying in the first place.

One solution to these problems is to use a multiple-baseline design , which is represented in Figure 7.6 . In one version of the design, a baseline is established for each of several participants, and the treatment is then introduced for each one. In essence, each participant is tested in an AB design. The key to this design is that the treatment is introduced at a different time for each participant. The idea is that if the dependent variable changes when the treatment is introduced for one participant, it might be a coincidence. But if the dependent variable changes when the treatment is introduced for multiple participants—especially when the treatment is introduced at different times for the different participants—then it is less likely to be a coincidence.


Figure 7.6: Results of a generic multiple-baseline study. The multiple baselines can be for different participants, dependent variables, or settings. The treatment is introduced at a different time on each baseline.

As an example, consider a study by Scott Ross and Robert Horner ( Ross et al., 2009 ) . They were interested in how a school-wide bullying prevention program affected the bullying behavior of particular problem students. At each of three different schools, the researchers studied two students who had regularly engaged in bullying. During the baseline phase, they observed the students for 10-minute periods each day during lunch recess and counted the number of aggressive behaviors they exhibited toward their peers. (The researchers used handheld computers to help record the data.) After 2 weeks, they implemented the program at one school. After 2 more weeks, they implemented it at the second school. And after 2 more weeks, they implemented it at the third school. They found that the number of aggressive behaviors exhibited by each student dropped shortly after the program was implemented at his or her school. Notice that if the researchers had only studied one school or if they had introduced the treatment at the same time at all three schools, then it would be unclear whether the reduction in aggressive behaviors was due to the bullying program or something else that happened at about the same time it was introduced (e.g., a holiday, a television program, a change in the weather). But with their multiple-baseline design, this kind of coincidence would have to happen three separate times—an unlikely occurrence—to explain their results.

Data Analysis in Single-Subject Research

In addition to its focus on individual participants, single-subject research differs from group research in the way the data are typically analyzed. As we have seen throughout the book, group research involves combining data across participants. Inferential statistics are used to help decide whether the result for the sample is likely to generalize to the population. Single-subject research, by contrast, relies heavily on a very different approach called visual inspection . This means plotting individual participants’ data as shown throughout this chapter, looking carefully at those data, and making judgments about whether and to what extent the independent variable had an effect on the dependent variable. Inferential statistics are typically not used.

In visually inspecting their data, single-subject researchers take several factors into account. One of them is changes in the level of the dependent variable from condition to condition. If the dependent variable is much higher or much lower in one condition than another, this suggests that the treatment had an effect. A second factor is trend , which refers to gradual increases or decreases in the dependent variable across observations. If the dependent variable begins increasing or decreasing with a change in conditions, then again this suggests that the treatment had an effect. It can be especially telling when a trend changes directions—for example, when an unwanted behavior is increasing during baseline but then begins to decrease with the introduction of the treatment. A third factor is latency , which is the time it takes for the dependent variable to begin changing after a change in conditions. In general, if a change in the dependent variable begins shortly after a change in conditions, this suggests that the treatment was responsible.
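These three factors can be given rough numeric counterparts. The sketch below is a hypothetical illustration, assuming simple stand-in computations of my own (a difference in phase means for level, a least-squares slope for trend, and a count of observations before the baseline maximum is exceeded for latency); none of these are standard single-subject formulas.

```python
def summarize_phase_change(baseline, treatment):
    """Illustrative numeric companions to visual inspection:
    level  = change in mean from baseline to treatment phase,
    trend  = least-squares slope within the treatment phase,
    latency = index of the first treatment observation exceeding
              the baseline maximum (None if never exceeded)."""
    level_change = sum(treatment) / len(treatment) - sum(baseline) / len(baseline)

    # Least-squares slope of treatment values against observation number.
    n = len(treatment)
    x_mean = (n - 1) / 2
    y_mean = sum(treatment) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(treatment))
             / sum((x - x_mean) ** 2 for x in range(n)))

    cutoff = max(baseline)
    latency = next((i for i, y in enumerate(treatment) if y > cutoff), None)
    return level_change, slope, latency

level, trend, latency = summarize_phase_change([20, 25, 22], [50, 60, 70])
print(round(level, 1), trend, latency)  # 37.7 10.0 0
```

Here the large level change, rising trend, and zero latency together correspond to the pattern that would strongly suggest a treatment effect under visual inspection.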

In the top panel of Figure 7.7 , there are fairly obvious changes in the level and trend of the dependent variable from condition to condition. Furthermore, the latencies of these changes are short; the change happens immediately. This pattern of results strongly suggests that the treatment was responsible for the changes in the dependent variable. In the bottom panel of Figure 7.7 , however, the changes in level are fairly small. And although there appears to be an increasing trend in the treatment condition, it looks as though it might be a continuation of a trend that had already begun during baseline. This pattern of results strongly suggests that the treatment was not responsible for any changes in the dependent variable—at least not to the extent that single-subject researchers typically hope to see.


Figure 7.7: Visual inspection of the data suggests an effective treatment in the top panel but an ineffective treatment in the bottom panel.

The results of single-subject research can also be analyzed using statistical procedures—and this is becoming more common. There are many different approaches, and single-subject researchers continue to debate which are the most useful. One approach parallels what is typically done in group research. The mean and standard deviation of each participant’s responses under each condition are computed and compared, and inferential statistical tests such as the t test or analysis of variance are applied ( Fisch, 2001 ) . (Note that averaging across participants is less common.) Another approach is to compute the percentage of nonoverlapping data (PND) for each participant ( Scruggs & Mastropieri, 2021 ) . This is the percentage of responses in the treatment condition that are more extreme than the most extreme response in a relevant control condition. In the study by Hall and his colleagues, for example, all measures of Robbie’s study time in the first treatment condition were greater than the highest measure in the first baseline, for a PND of 100%. The greater the percentage of nonoverlapping data, the stronger the treatment effect. Still, formal statistical approaches to data analysis in single-subject research are generally considered a supplement to visual inspection, not a replacement for it.
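The PND computation itself is straightforward. A minimal sketch, using hypothetical numbers loosely in the spirit of Robbie's data (not the actual values reported by Hall and colleagues):

```python
def percentage_nonoverlapping_data(baseline, treatment, higher_is_better=True):
    """Percentage of treatment-phase data points more extreme than the
    most extreme baseline point (a sketch of the PND index)."""
    if higher_is_better:
        cutoff = max(baseline)
        nonoverlapping = [y for y in treatment if y > cutoff]
    else:
        cutoff = min(baseline)
        nonoverlapping = [y for y in treatment if y < cutoff]
    return 100 * len(nonoverlapping) / len(treatment)

# Hypothetical study-time percentages for a baseline and a treatment phase:
baseline_phase = [25, 20, 30, 22, 28]
treatment_phase = [55, 62, 70, 68, 71]
print(percentage_nonoverlapping_data(baseline_phase, treatment_phase))  # 100.0
```

Every treatment observation exceeds the highest baseline observation (30), so the PND is 100%, the strongest possible value on this index.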

The Single-Subject Versus Group “Debate”

Single-subject research is similar to group research—especially experimental group research—in many ways. They are both quantitative approaches that try to establish causal relationships by manipulating an independent variable, measuring a dependent variable, and controlling extraneous variables. There are some important differences of opinion between their advocates, however. As we will see, single-subject research and group research are probably best conceptualized as complementary approaches.

Data Analysis

One set of disagreements revolves around the issue of data analysis. Some advocates of group research worry that visual inspection is inadequate for deciding whether and to what extent a treatment has affected a dependent variable. One specific concern is that visual inspection is not sensitive enough to detect weak effects. A second is that visual inspection can be unreliable, with different researchers reaching different conclusions about the same set of data ( Danov & Symons, 2008 ) . A third is that the results of visual inspection—an overall judgment of whether or not a treatment was effective—cannot be clearly and efficiently summarized or compared across studies (unlike the measures of relationship strength typically used in group research).

In general, single-subject researchers share these concerns. However, they also argue that their use of the steady state strategy, combined with their focus on strong and consistent effects, minimizes most of them. If the effect of a treatment is difficult to detect by visual inspection because the effect is weak or the data are noisy, then single-subject researchers look for ways to increase the strength of the effect or reduce the noise in the data by controlling extraneous variables (e.g., by administering the treatment more consistently). If the effect is still difficult to detect, then they are likely to consider it neither strong enough nor consistent enough to be of further interest. Many single-subject researchers also point out that statistical analysis is becoming increasingly common and that many of them are using it as a supplement to visual inspection—especially for the purpose of comparing results across studies ( Scruggs & Mastropieri, 2021 ) .

Turning the tables, some advocates of single-subject research worry about the way that group researchers analyze their data. Specifically, they point out that focusing on group means can be highly misleading. Again, imagine that a treatment has a strong positive effect on half the people exposed to it and an equally strong negative effect on the other half. In a traditional between-subjects experiment, the positive effect on half the participants in the treatment condition would be statistically cancelled out by the negative effect on the other half. The mean for the treatment group would then be the same as the mean for the control group, making it seem as though the treatment had no effect when in fact it had a strong effect on every single participant!

But again, group researchers share this concern. Although they do focus on group statistics, they also emphasize the importance of examining distributions of individual scores. For example, if some participants were positively affected by a treatment and others negatively affected by it, this would produce a bimodal distribution of scores and could be detected by looking at a histogram of the data. The use of within-subjects designs is another strategy that allows group researchers to observe effects at the individual level and even to specify what percentage of individuals exhibit strong, medium, weak, and even negative effects.
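The mean-cancellation point is easy to demonstrate with made-up numbers: if a treatment helps half the participants and harms the other half by the same amount, the treatment group's mean change matches the control group's, while the spread of individual change scores reveals that something strong happened to everyone.

```python
import statistics

# Hypothetical pretest-to-posttest change scores for ten participants:
# half show a strong positive effect, half an equally strong negative one.
treatment_changes = [8, 9, 10, 8, 9, -8, -9, -10, -8, -9]
control_changes = [0, 1, -1, 0, 1, -1, 0, 0, 1, -1]

# The group means are identical, as though the treatment did nothing...
print(statistics.mean(treatment_changes))  # 0
print(statistics.mean(control_changes))    # 0

# ...but the spread of individual scores tells a different story: every
# treated participant changed substantially, in one direction or the other.
print(statistics.stdev(treatment_changes))  # about 9.3
print(statistics.stdev(control_changes))    # about 0.82
```

A histogram of the treatment group's scores would show the bimodal distribution described above, which is why group researchers emphasize examining distributions of individual scores and not just means.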

External Validity

The second issue about which single-subject and group researchers sometimes disagree has to do with external validity—the ability to generalize the results of a study beyond the people and situation actually studied. In particular, advocates of group research point out the difficulty in knowing whether results for just a few participants are likely to generalize to others in the population. Imagine, for example, that in a single-subject study, a treatment has been shown to reduce self-injury for each of two developmentally disabled children. Even if the effect is strong for these two children, how can one know whether this treatment is likely to work for other developmentally disabled children?

Again, single-subject researchers share this concern. In response, they note that the strong and consistent effects they are typically interested in—even when observed in small samples—are likely to generalize to others in the population. Single-subject researchers also note that they place a strong emphasis on replicating their research results. When they observe an effect with a small sample of participants, they typically try to replicate it with another small sample—perhaps with a slightly different type of participant or under slightly different conditions. Each time they observe similar results, they rightfully become more confident in the generality of those results. Single-subject researchers can also point to the fact that the principles of classical and operant conditioning—most of which were discovered using the single-subject approach—have been successfully generalized across an incredibly wide range of species and situations.

And again turning the tables, single-subject researchers have concerns of their own about the external validity of group research. One extremely important point they make is that studying large groups of participants does not entirely solve the problem of generalizing to other individuals. Imagine, for example, a treatment that has been shown to have a small positive effect on average in a large group study. It is likely that although many participants exhibited a small positive effect, others exhibited a large positive effect, and still others exhibited a small negative effect. When it comes to applying this treatment to another large group, we can be fairly sure that it will have a small effect on average. But when it comes to applying this treatment to another individual, we cannot be sure whether it will have a small, a large, or even a negative effect. Another point that single-subject researchers make is that group researchers also face a similar problem when they study a single situation and then generalize their results to other situations. For example, researchers who conduct a study on the effect of cell phone use on drivers on a closed oval track probably want to apply their results to drivers in many other real-world driving situations. But notice that this requires generalizing from a single situation to a population of situations. Thus the ability to generalize is based on much more than just the sheer number of participants one has studied. It requires a careful consideration of the similarity of the participants and situations studied to the population of participants and situations that one wants to generalize to ( Shadish et al., 2002 ) .

Single-Subject and Group Research as Complementary Methods

As with quantitative and qualitative research, it is probably best to conceptualize single-subject research and group research as complementary methods that have different strengths and weaknesses and that are appropriate for answering different kinds of research questions (Kazdin, 2019). Single-subject research is particularly good for testing the effectiveness of treatments on individuals when the focus is on strong, consistent, and biologically or socially important effects. It is especially useful when the behavior of particular individuals is of interest. Clinicians who work with only one individual at a time may find that it is their only option for doing systematic quantitative research.

Group research, on the other hand, is good for testing the effectiveness of treatments at the group level. Among the advantages of this approach is that it allows researchers to detect weak effects, which can be of interest for many reasons. For example, finding a weak treatment effect might lead to refinements of the treatment that eventually produce a larger and more meaningful effect. Group research is also good for studying interactions between treatments and participant characteristics. For example, if a treatment is effective for those who are high in motivation to change and ineffective for those who are low in motivation to change, then a group design can detect this much more efficiently than a single-subject design. Group research is also necessary to answer questions that cannot be addressed using the single-subject approach, including questions about independent variables that cannot be manipulated (e.g., number of siblings, extroversion, culture).

  • Single-subject research—which involves testing a small number of participants and focusing intensively on the behavior of each individual—is an important alternative to group research in psychology.
  • Single-subject studies must be distinguished from case studies, in which an individual case is described in detail. Case studies can be useful for generating new research questions, for studying rare phenomena, and for illustrating general principles. However, they cannot substitute for carefully controlled experimental or correlational studies because they are low in internal and external validity.
  • Single-subject research designs typically involve measuring the dependent variable repeatedly over time and changing conditions (e.g., from baseline to treatment) when the dependent variable has reached a steady state. This approach allows the researcher to see whether changes in the independent variable are causing changes in the dependent variable.
  • Single-subject researchers typically analyze their data by graphing them and making judgments about whether the independent variable is affecting the dependent variable based on level, trend, and latency.
  • Differences between single-subject research and group research sometimes lead to disagreements between single-subject and group researchers. These disagreements center on the issues of data analysis and external validity (especially generalization to other people). Single-subject research and group research are probably best seen as complementary methods, with different strengths and weaknesses, that are appropriate for answering different kinds of research questions.
  • Practice: Design a single-subject study to answer one of the following questions:
  • Does positive attention from a parent increase a child’s toothbrushing behavior?
  • Does self-testing while studying improve a student’s performance on weekly spelling tests?
  • Does regular exercise help relieve depression?
  • Practice: Create a graph that displays the hypothetical results for the study you designed in Exercise 1. Write a paragraph in which you describe what the results show. Be sure to comment on level, trend, and latency.
  • Discussion: Imagine you have conducted a single-subject study showing a positive effect of a treatment on the behavior of a man with social anxiety disorder. Your research has been criticized on the grounds that it cannot be generalized to others. How could you respond to this criticism?
  • Discussion: Imagine you have conducted a group study showing a positive effect of a treatment on the behavior of a group of people with social anxiety disorder, but your research has been criticized on the grounds that “average” effects cannot be generalized to individuals. How could you respond to this criticism?

7.6 Glossary

ABA design

The simplest reversal design, in which there is a baseline condition (A), followed by a treatment condition (B), followed by a return to baseline (A).

applied behavior analysis

A subfield of psychology that uses single-subject research and applies the principles of behavior analysis to real-world problems in areas that include education, developmental disabilities, organizational behavior, and health behavior.

baseline

A condition in a single-subject research design in which the dependent variable is measured repeatedly in the absence of any treatment. Most designs begin with a baseline condition, and many return to the baseline condition at least once.

case study

A detailed description of an individual case.

experimental analysis of behavior

A subfield of psychology founded by B. F. Skinner that uses single-subject research—often with nonhuman animals—to study relationships primarily between environmental conditions and objectively observable behaviors.

group research

A type of quantitative research that involves studying a large number of participants and examining their behavior in terms of means, standard deviations, and other group-level statistics.

interrupted time-series design

A research design in which a series of measurements of the dependent variable are taken both before and after a treatment.

item-order effect

The effect of responding to one survey item on responses to a later survey item.

maturation

Refers collectively to extraneous developmental changes in participants that can occur between a pretest and posttest or between the first and last measurements in a time series. It can provide an alternative explanation for an observed change in the dependent variable.

multiple-baseline design

A single-subject research design in which multiple baselines are established for different participants, different dependent variables, or different contexts and the treatment is introduced at a different time for each baseline.

naturalistic observation

An approach to data collection in which the behavior of interest is observed in the environment in which it typically occurs.

nonequivalent groups design

A between-subjects research design in which participants are not randomly assigned to conditions, usually because participants are in preexisting groups (e.g., students at different schools).

nonexperimental research

Research that lacks the manipulation of an independent variable or the random assignment of participants to conditions or orders of conditions.

open-ended item

A questionnaire item that asks a question and allows respondents to respond in whatever way they want.

percentage of nonoverlapping data

A statistic sometimes used in single-subject research. The percentage of observations in a treatment condition that are more extreme than the most extreme observation in a relevant baseline condition.
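As an illustration, for a treatment expected to increase a behavior, this statistic can be sketched in a few lines of Python (the data are hypothetical, and `percentage_of_nonoverlapping_data` is an illustrative helper, not a standard library function):

```python
def percentage_of_nonoverlapping_data(baseline, treatment):
    """PND for a treatment expected to INCREASE the behavior:
    the percentage of treatment observations that exceed the
    single most extreme (highest) baseline observation."""
    ceiling = max(baseline)
    nonoverlapping = sum(1 for obs in treatment if obs > ceiling)
    return 100.0 * nonoverlapping / len(treatment)

# Hypothetical data: the baseline peaks at 4, and 3 of the 4
# treatment observations exceed that peak, so PND = 75%.
baseline = [2, 3, 4, 3]
treatment = [5, 6, 4, 7]
print(percentage_of_nonoverlapping_data(baseline, treatment))  # 75.0
```

For a treatment expected to decrease a behavior, the comparison would be reversed (treatment observations below the lowest baseline observation).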

pretest-posttest design

A research design in which the dependent variable is measured (the pretest), a treatment is given, and the dependent variable is measured again (the posttest) to see if there is a change in the dependent variable from pretest to posttest.

quasi-experimental research

Research that involves the manipulation of an independent variable but lacks the random assignment of participants to conditions or orders of conditions. It is generally used in field settings to test the effectiveness of a treatment.

rating scale

An ordered set of response options to a closed-ended questionnaire item.

regression to the mean

The statistical fact that an individual who scores extremely on one occasion will tend to score less extremely on the next occasion.

respondent

A term often used to refer to a participant in survey research.

reversal design

A single-subject research design that begins with a baseline condition with no treatment, followed by the introduction of a treatment, and after that a return to the baseline condition. It can include additional treatment conditions and returns to baseline.

single-subject research

A type of quantitative research that involves examining in detail the behavior of each of a small number of participants.

single-variable research

Research that focuses on a single variable rather than on a statistical relationship between variables.

social validity

The extent to which a single-subject study focuses on an intervention that has a substantial effect on an important behavior and can be implemented reliably in the real-world contexts (e.g., by teachers in a classroom) in which that behavior occurs.

spontaneous remission

Improvement in a psychological or medical problem over time without any treatment.

steady state strategy

In single-subject research, allowing behavior to become fairly consistent from one observation to the next before changing conditions. This makes any effect of the treatment easier to detect.

survey research

A quantitative research approach that uses self-report measures and large, carefully selected samples.

testing effect

A bias in participants’ responses in which scores on the posttest are influenced by simple exposure to the pretest.

visual inspection

The primary approach to data analysis in single-subject research, which involves graphing the data and making a judgment as to whether and to what extent the independent variable affected the dependent variable.


Quasi-experimental Research: What It Is, Types & Examples

Quasi-experimental research is research that appears to be experimental but is not.

Much like a true experiment, quasi-experimental research tries to demonstrate a cause-and-effect link between a dependent and an independent variable. Unlike a true experiment, however, a quasi-experiment does not rely on random assignment: the subjects are sorted into groups based on non-random criteria.

What is Quasi-Experimental Research?

“Quasi” means “resembling.” In quasi-experimental research, the independent variable is manipulated, but individuals are not randomly allocated to conditions or orders of conditions. As a result, quasi-experimental research is research that resembles experimental research but is not true experimental research.

The directionality problem is avoided in quasi-experimental research because the independent variable is manipulated before the dependent variable is measured. However, because individuals are not randomly assigned, there are likely to be other differences across conditions in quasi-experimental research.

As a result, in terms of internal validity, quasi-experiments fall somewhere between correlational research and true experiments.

The key component of a true experiment is randomly assigned groups. This means that each person has an equal chance of being assigned to the experimental group (which receives the manipulation) or the control group (which does not).

Simply put, a quasi-experiment is not a true experiment because it lacks randomly assigned groups. Why is random assignment so crucial, given that it is the main distinction between quasi-experimental and true  experimental research ?

Let’s use an example to illustrate the point. Suppose we want to discover how a new psychological therapy affects depressed patients. In a true experiment, you would randomly split the psych ward into two groups, with half receiving the new psychotherapy and the other half receiving standard  depression treatment .

The physicians would then compare the outcomes of the new treatment to the results of the standard treatment to see whether it is more effective. Doctors, however, may refuse to run such an experiment if they believe it is unethical to treat one group while leaving the other untreated.

A quasi-experimental study will be useful in this case. Instead of allocating these patients at random, you uncover pre-existing psychotherapist groups in the hospitals. Clearly, there’ll be counselors who are eager to undertake these trials as well as others who prefer to stick to the old ways.

These pre-existing groups can be used to compare the symptom development of individuals who received the novel therapy with those who received the normal course of treatment, even though the groups weren’t chosen at random.

If other differences between the groups can be ruled out or well explained, you can be fairly confident that any differences in outcomes are attributable to the treatment and not to extraneous variables.

As mentioned before, quasi-experimental research entails manipulating an independent variable without randomly assigning people to conditions or sequences of conditions. Non-equivalent groups designs, pretest-posttest designs, and regression discontinuity designs are a few of the essential types.

What are quasi-experimental research designs?

Quasi-experimental research designs are a type of research design that is similar to experimental designs but doesn’t give full control over the independent variable(s) like true experimental designs do.

In a quasi-experimental design, the researcher changes or watches an independent variable, but the participants are not put into groups at random. Instead, people are put into groups based on things they already have in common, like their age, gender, or how many times they have seen a certain stimulus.

Because the assignments are not random, it is harder to draw conclusions about cause and effect than in a real experiment. However, quasi-experimental designs are still useful when randomization is not possible or ethical.

The true experimental design may be impossible to accomplish or just too expensive, especially for researchers with few resources. Quasi-experimental designs enable you to investigate an issue by utilizing data that has already been paid for or gathered by others (often the government). 

Quasi-experiments often have higher external validity than most true experiments because they can draw on real-world data, and they have higher  internal validity  than other non-experimental research (though lower than true experiments) because they allow better control of confounding variables.

Is quasi-experimental research quantitative or qualitative?

Quasi-experimental research is a quantitative research method. It involves numerical data collection and statistical analysis. Quasi-experimental research compares groups with different circumstances or treatments to find cause-and-effect links. 

It draws statistical conclusions from quantitative data. Qualitative data can enhance quasi-experimental research by revealing participants’ experiences and opinions, but quantitative data is the method’s foundation.

Quasi-experimental research types

There are many different types of quasi-experimental designs. Three of the most popular are described below: the nonequivalent groups design, regression discontinuity, and natural experiments.

Natural Experiments

In a natural experiment, the researcher does not control assignment to conditions; instead, an outside event or policy, such as a lottery or a law change, determines who receives the treatment.

For example, suppose a government program could not afford to enroll everyone who qualified, so it used a random lottery to distribute slots.

Researchers were able to investigate the program’s impact by using enrolled people as a treatment group and those who were qualified but did not win the lottery as a comparison group.

How does QuestionPro help in quasi-experimental research?

QuestionPro can be a useful tool in quasi-experimental research because it includes features that can assist you in designing and analyzing your research study. Here are some ways in which QuestionPro can help in quasi-experimental research:

  • Design surveys
  • Randomize participants
  • Collect data over time
  • Analyze data
  • Collaborate with your team

With QuestionPro, you have access to a mature market research platform that helps you collect and analyze the insights that matter most. By leveraging InsightsHub, the unified hub for data management, you can organize, explore, search, and discover your  research data  in one organized data repository.

Optimize your quasi-experimental research with QuestionPro. Get started now!



Experimental vs Quasi-Experimental Design: Which to Choose?

Here’s a table that summarizes the similarities and differences between an experimental and a quasi-experimental study design:

| | Experimental Study (a.k.a. Randomized Controlled Trial) | Quasi-Experimental Study |
| --- | --- | --- |
| Objective | Evaluate the effect of an intervention or a treatment | Evaluate the effect of an intervention or a treatment |
| How do participants get assigned to groups? | Random assignment | Non-random assignment (participants get assigned according to their choosing or that of the researcher) |
| Is there a control group? | Yes | Not always (although, if present, a control group will provide better evidence for the study results) |
| Is there any room for confounding? | No (although post-randomization confounding can still occur in randomized controlled trials) | Yes (however, statistical techniques can be used to study causal relationships in quasi-experiments) |
| Level of evidence | A randomized trial is at the highest level in the hierarchy of evidence | A quasi-experiment is one level below the experimental study in the hierarchy of evidence |
| Advantages | Minimizes bias and confounding | Can be used in situations where an experiment is not ethically or practically feasible; can work with smaller sample sizes than randomized trials |
| Limitations | High cost (as it generally requires a large sample size); ethical limitations; generalizability issues; sometimes practically infeasible | Lower ranking in the hierarchy of evidence, as losing the power of randomization makes the study more susceptible to bias and confounding |

What is a quasi-experimental design?

A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.

Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.

Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.

(For more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example.)

Examples of quasi-experimental designs include:

  • One-Group Posttest Only Design
  • Static-Group Comparison Design
  • One-Group Pretest-Posttest Design
  • Separate-Sample Pretest-Posttest Design

What is an experimental design?

An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:

  • A treatment group: where participants receive the new intervention which effect we want to study.
  • A control or comparison group: where participants do not receive any intervention at all (or receive some standard intervention).

Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to equalize the two groups, so that any observed difference in the study outcome afterwards can be attributed only to the intervention, i.e. it removes confounding.

(For more information, I recommend my other article: Purpose and Limitations of Random Assignment.)
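In its simplest form, random assignment is a shuffle-and-split. A minimal sketch (the participant IDs and the `randomly_assign` helper are hypothetical, for illustration only):

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participant list and split it into two
    (near-)equal halves: a treatment group and a control group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"P{i:02d}" for i in range(1, 21)]
treatment, control = randomly_assign(participants, seed=42)
print(len(treatment), len(control))  # 10 10
```

Because every permutation of the list is equally likely, each participant has the same chance of landing in either group, which is exactly what equalizes the groups in expectation.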

Examples of experimental designs include:

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Solomon Four-Group Design
  • Matched Pairs Design
  • Randomized Block Design

When to choose an experimental design over a quasi-experimental design?

Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.

Another problem with quasi-experiments is the natural progression of the disease or condition under study. When studying the effect of an intervention over time, one should account for natural changes, because these can be mistaken for changes in outcome that are caused by the intervention. Having a well-chosen control group helps deal with this issue.

So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?

This is what we’re going to discuss next.

When to choose a quasi-experimental design over a true experiment?

The issue with randomization is that it is not always achievable.

So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:

  • If being in one group is believed to be harmful for the participants , either because the intervention is harmful (ex. randomizing people to smoking), or the intervention has questionable efficacy, or, on the contrary, it is believed to be so beneficial that it would be unethical to put people in the control group (ex. randomizing people to receiving an operation).
  • In cases where interventions act on a group of people in a given location , it becomes difficult to adequately randomize subjects (ex. an intervention that reduces pollution in a given area).
  • When working with small sample sizes , as randomized controlled trials require a large sample size to account for heterogeneity among subjects (i.e. to evenly distribute confounding variables between the intervention and control groups).

Further reading

  • Statistical Software Popularity in 40,582 Research Papers
  • Checking the Popularity of 125 Statistical Tests and Models
  • Objectives of Epidemiology (With Examples)
  • 12 Famous Epidemiologists and Why

Correlational Research | When & How to Use

Published on July 7, 2021 by Pritha Bhandari . Revised on June 22, 2023.

A correlational research design investigates relationships between variables without the researcher controlling or manipulating any of them.

A correlation reflects the strength and/or direction of the relationship between two (or more) variables. The direction of a correlation can be either positive or negative.

| Correlation | Meaning | Example |
| --- | --- | --- |
| Positive correlation | Both variables change in the same direction | As height increases, weight also increases |
| Negative correlation | The variables change in opposite directions | As coffee consumption increases, tiredness decreases |
| Zero correlation | There is no relationship between the variables | Coffee consumption is not correlated with height |

Table of contents

  • Correlational vs. experimental research
  • When to use correlational research
  • How to collect correlational data
  • How to analyze correlational data
  • Correlation and causation
  • Other interesting articles
  • Frequently asked questions about correlational research

Correlational and experimental research both use quantitative methods to investigate relationships between variables. But there are important differences in data collection methods and the types of conclusions you can draw.

| | Correlational research | Experimental research |
| --- | --- | --- |
| Purpose | Used to test the strength of association between variables | Used to test cause-and-effect relationships between variables |
| Variables | Variables are only observed, with no manipulation or intervention by researchers | An independent variable is manipulated and a dependent variable is observed |
| Control | Limited control is used, so other variables may play a role in the relationship | Extraneous variables are controlled so that they can’t impact your variables of interest |
| Validity | High external validity: you can confidently generalize your conclusions to other populations or settings | High internal validity: you can confidently draw conclusions about causation |


Correlational research is ideal for gathering data quickly from natural settings. That helps you generalize your findings to real-life situations in an externally valid way.

There are a few situations where correlational research is an appropriate choice.

To investigate non-causal relationships

You want to find out if there is an association between two variables, but you don’t expect to find a causal relationship between them.

Correlational research can provide insights into complex real-world relationships, helping researchers develop theories and make predictions.

To explore causal relationships between variables

You think there is a causal relationship between two variables, but it is impractical, unethical, or too costly to conduct experimental research that manipulates one of the variables.

Correlational research can provide initial indications or additional support for theories about causal relationships.

To test new measurement tools

You have developed a new instrument for measuring your variable, and you need to test its reliability or validity.

Correlational research can be used to assess whether a tool consistently or accurately captures the concept it aims to measure.

There are many different methods you can use in correlational research. In the social and behavioral sciences, the most common data collection methods for this type of research include surveys, observations, and secondary data.

It’s important to carefully choose and plan your methods to ensure the reliability and validity of your results. You should carefully select a representative sample so that your data reflects the population you’re interested in without research bias.

In survey research, you can use questionnaires to measure your variables of interest. You can conduct surveys online, by mail, by phone, or in person.

Surveys are a quick, flexible way to collect standardized data from many participants, but it’s important to ensure that your questions are worded in an unbiased way and capture relevant insights.

Naturalistic observation

Naturalistic observation is a type of field research where you gather data about a behavior or phenomenon in its natural environment.

This method often involves recording, counting, describing, and categorizing actions and events. Naturalistic observation can include both qualitative and quantitative elements, but to assess correlation, you collect data that can be analyzed quantitatively (e.g., frequencies, durations, scales, and amounts).

Naturalistic observation lets you easily generalize your results to real-world contexts, and you can study experiences that aren’t replicable in lab settings. But data analysis can be time-consuming and unpredictable, and researcher bias may skew the interpretations.

Secondary data

Instead of collecting original data, you can also use data that has already been collected for a different purpose, such as official records, polls, or previous studies.

Using secondary data is inexpensive and fast, because data collection is complete. However, the data may be unreliable, incomplete or not entirely relevant, and you have no control over the reliability or validity of the data collection procedures.

After collecting data, you can statistically analyze the relationship between variables using correlation or regression analyses, or both. You can also visualize the relationships between variables with a scatterplot.

Different types of correlation coefficients and regression analyses are appropriate for your data based on their levels of measurement and distributions.

Correlation analysis

Using a correlation analysis, you can summarize the relationship between variables into a correlation coefficient : a single number that describes the strength and direction of the relationship between variables. With this number, you’ll quantify the degree of the relationship between variables.

The Pearson product-moment correlation coefficient, also known as Pearson’s r, is commonly used for assessing a linear relationship between two quantitative variables.

Correlation coefficients are usually found for two variables at a time, but you can use a multiple correlation coefficient for three or more variables.
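Pearson’s r can be computed directly from its definition: the covariance of the two variables divided by the product of their standard deviations. A minimal sketch with hypothetical height and weight data (in practice you would typically use a statistics package such as `scipy.stats.pearsonr`):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient:
    covariance of x and y divided by the product of
    their standard deviations."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data echoing the example above: as height
# increases, weight also increases (a strong positive r).
height = [150, 160, 165, 170, 180]
weight = [50, 58, 63, 66, 75]
print(round(pearson_r(height, weight), 3))  # 0.999
```

A value near +1 indicates a strong positive linear relationship, a value near −1 a strong negative one, and a value near 0 no linear relationship.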

Regression analysis

With a regression analysis, you can predict how much a change in one variable will be associated with a change in the other variable. The result is a regression equation that describes the line on a graph of your variables.

You can use this equation to predict the value of one variable based on the given value(s) of the other variable(s). It’s best to perform a regression analysis after testing for a correlation between your variables.
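For a single predictor, the regression line can be fitted with ordinary least squares in a few lines. A sketch using hypothetical study-time data (the `fit_line` helper and the numbers are illustrative, not from the article):

```python
def fit_line(x, y):
    """Ordinary least squares fit for y = intercept + slope * x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
             / sum((a - mean_x) ** 2 for a in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

hours_studied = [1, 2, 3, 4, 5]
test_score = [55, 60, 68, 71, 80]
intercept, slope = fit_line(hours_studied, test_score)

# Use the regression equation to predict the score of a
# student who studies for 6 hours.
predicted = intercept + slope * 6
print(round(predicted, 1))  # 85.1
```

The slope quantifies the predicted change in the outcome for a one-unit change in the predictor, which is exactly the "how much" that regression adds over a bare correlation coefficient.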


It’s important to remember that correlation does not imply causation. Just because you find a correlation between two things doesn’t mean you can conclude that one of them causes the other, for a few reasons.

Directionality problem

If two variables are correlated, it could be because one of them is a cause and the other is an effect. But the correlational research design doesn’t allow you to infer which is which. To err on the side of caution, researchers don’t conclude causality from correlational studies.

Third variable problem

A confounding variable is a third variable that influences other variables to make them seem causally related even though they are not. Instead, there are separate causal links between the confounder and each variable.

In correlational research, there’s limited or no researcher control over extraneous variables . Even if you statistically control for some potential confounders, there may still be other hidden variables that disguise the relationship between your study variables.

Although a correlational study can’t demonstrate causation on its own, it can help you develop a causal hypothesis that’s tested in controlled experiments.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

Bhandari, P. (2023, June 22). Correlational Research | When & How to Use. Scribbr. Retrieved October 9, 2024, from https://www.scribbr.com/methodology/correlational-research/



Chapter 7: Nonexperimental Research

Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix  quasi  means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). [1] Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A  nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This design would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.

Pretest-Posttest Design

In a  pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of  history . Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of  maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean. This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect.

A closely related concept—and an extremely important one in psychological research—is spontaneous remission. This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001) [2] . Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
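Regression to the mean is easy to demonstrate by simulation. The sketch below uses invented parameters (no real data): each simulated "student" has a stable ability plus independent test-day noise; we select the lowest scorers on a first test, retest them with no intervention at all, and their average still rises:

```python
import random

rng = random.Random(42)  # fixed seed so the run is reproducible

# observed score = stable ability + independent test-day noise (hypothetical scales)
abilities = [rng.gauss(100, 10) for _ in range(1000)]
test1 = [a + rng.gauss(0, 10) for a in abilities]
test2 = [a + rng.gauss(0, 10) for a in abilities]

# pick the 100 students who scored lowest on the first test,
# as if selecting them for a remedial program
lowest = sorted(range(1000), key=lambda i: test1[i])[:100]
mean1 = sum(test1[i] for i in lowest) / 100
mean2 = sum(test2[i] for i in lowest) / 100

# the retest mean is higher even though no "treatment" occurred between tests
print(round(mean1, 1), round(mean2, 1))
```

Extreme first-test scores are partly bad luck, and the luck does not repeat, so the selected group's retest average drifts back toward its true mean. Any real training effect has to be measured against this built-in improvement, which is one reason a control group matters.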

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952) [3] . But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate  without  receiving psychotherapy. This parallel suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here: Classics in the History of Psychology .

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980) [4] . They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.

Interrupted Time Series Design

A variant of the pretest-posttest design is the  interrupted time-series design . A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this one is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979) [5] . Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.3 shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of  Figure 7.3 shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of  Figure 7.3 shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Figure 7.3. Hypothetical interrupted time-series data on absences before and after the treatment. [Image description available below.]
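The logic of the "treatment did not work" panel can be reproduced with a few hypothetical weekly counts (the numbers are invented to mimic the figure): a lone week 7 vs. week 8 comparison suggests a drop, while the full series shows only noise:

```python
# hypothetical weekly absence counts, mimicking the bottom panel of Figure 7.3
before = [6, 4, 7, 5, 8, 6, 7]  # weeks 1-7, before the instructor takes attendance
after = [5, 7, 4, 6, 8, 5, 7]   # weeks 8-14, after

# a single pretest-posttest comparison (week 7 vs. week 8) looks like an effect
print(before[-1], "->", after[0])  # 7 -> 5

# but the series means are nearly identical: just week-to-week variation
mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(round(mean_before, 2), round(mean_after, 2))  # 6.14 6.0
```

This is the advantage of multiple measurements: the apparent drop between weeks 7 and 8 is indistinguishable from ordinary fluctuation once the whole series is in view.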

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does  not  receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve  more  than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this change in attitude could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Image Descriptions

Figure 7.3 image description: Two line graphs charting the number of absences per week over 14 weeks. The first 7 weeks are without treatment and the last 7 weeks are with treatment. In the first line graph, there are between 4 to 8 absences each week. After the treatment, the absences drop to 0 to 3 each week, which suggests the treatment worked. In the second line graph, there is no noticeable change in the number of absences per week after the treatment, which suggests the treatment did not work. [Return to Figure 7.3]

  • Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin. ↵
  • Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66 , 139–146. ↵
  • Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16 , 319–324. ↵
  • Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy . Baltimore, MD: Johns Hopkins University Press. ↵

Glossary

  • Nonequivalent groups design: A between-subjects design in which participants have not been randomly assigned to conditions.
  • Pretest-posttest design: A design in which the dependent variable is measured once before the treatment is implemented and once after it is implemented.
  • History: A category of alternative explanations for differences between scores, such as events that happened between the pretest and posttest, unrelated to the study.
  • Maturation: An alternative explanation that refers to how the participants might have changed between the pretest and posttest in ways that they were going to anyway because they are growing and learning.
  • Regression to the mean: The statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion.
  • Spontaneous remission: The tendency for many medical and psychological problems to improve over time without any form of treatment.
  • Interrupted time-series design: A set of measurements taken at intervals over a period of time that are interrupted by a treatment.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Quantitative Research Designs: Non-Experimental vs. Experimental


While there are many types of quantitative research designs, they generally fall under one of three umbrellas: experimental research, quasi-experimental research, and non-experimental research.

Experimental research designs are what many people think of when they think of research; they typically involve the manipulation of variables and random assignment of participants to conditions. A traditional experiment may involve the comparison of a control group to an experimental group that receives a treatment (i.e., a variable is manipulated). When done correctly, experimental designs can provide evidence for cause and effect. Because of their ability to determine causation, experimental designs are the gold standard for research in medicine, biology, and so on. However, such designs can also be used in the “soft sciences,” like social science. Experimental research has strict standards for control within the research design and for establishing validity. These designs may also be very resource- and labor-intensive. Additionally, it can be hard to justify the generalizability of the results from a very tightly controlled or artificial experimental setting. However, if done well, experimental research methods can lead to some very convincing and interesting results.


Non-experimental research, on the other hand, can be just as interesting, but you cannot draw the same conclusions from it as you can with experimental research. Non-experimental research is usually descriptive or correlational, which means that you are either describing a situation or phenomenon simply as it stands, or you are describing a relationship between two or more variables, all without any interference from the researcher. This means that you do not manipulate any variables (e.g., change the conditions that an experimental group undergoes) or randomly assign participants to a control or treatment group. Without this level of control, you cannot determine any causal effects. While validity is still a concern in non-experimental research, the concerns are more about the validity of the measurements, rather than the validity of the effects.

Finally, a quasi-experimental design is a combination of the two designs described above. In a quasi-experimental design you still manipulate a variable in the experimental group, but there is no random assignment into groups. Quasi-experimental designs are most common when the researcher uses a convenience sample to recruit participants. For example, let’s say you were interested in studying the effect of stress on student test scores at the school that you work for. You teach two separate classes, so you decide to use each class as a different group. Class A becomes the experimental group, which experiences the stressor manipulation, and class B becomes the control group. Because you are sampling from two different pre-existing groups, without any random assignment, this is known as a quasi-experimental design. These designs are very useful when you want to find a causal relationship between variables but cannot randomly assign people to groups for practical or ethical reasons, such as working with a population of clinically depressed people or looking for gender differences (we can’t randomly assign people to be clinically depressed or to be a different gender). While these studies sometimes have higher external validity than a true experimental design, since they involve real-world interventions and groups rather than a laboratory setting, the lack of random assignment limits their internal validity: you cannot rule out pre-existing differences between the groups.

So, how do you choose between these designs? This will depend on your topic, your available resources, and your desired goal. For example, do you want to see if a particular intervention relieves feelings of anxiety? The most convincing results for that would come from a true experimental design with random sampling and random assignment to groups. Ultimately, this is a decision that should be made in close collaboration with your advisor. Therefore, I recommend discussing the pros and cons of each type of research, what each might mean for your personal dissertation process, and what is required of each design before making a decision.


Understanding Nursing Research


Experimental Design



Correlational, or non-experimental, research is research where subjects are not acted upon, but where research questions can be answered merely by observing subjects.

An example of a correlational research question could be, "What is the relationship between parents who make their children wash their hands at home and hand washing at school?" This is a question that researchers could answer without acting upon the students or their parents.

Quasi-Experimental Research is research where an independent variable is manipulated, but the subjects of a study are not randomly assigned to an action (or a lack of action).

An example of quasi-experimental research would be to ask, "What is the effect of hand-washing posters in school bathrooms?" Researchers might put posters in the same place in all of the bathrooms of a single high school and measure how often students washed their hands. The study is quasi-experimental because the students are not randomly assigned to a poster or no-poster condition; they participate simply because their school is receiving the intervention (posters in the bathroom).

Experimental Research is research that randomly assigns subjects to conditions in a study that includes some kind of intervention, or action intended to have an effect on the participants.

An example of an experimental design would be randomly selecting all of the schools participating in the hand washing poster campaign. The schools would then randomly be assigned to either the poster-group or the control group, which would receive no posters in their bathroom. Having a control group allows researchers to compare the group of students who received an intervention to those who did not.

How to tell:

The only way to tell what kind of experimental design an article uses is to read the Methodologies section of the article. This section should describe how participants were selected and how they were assigned to either a control or intervention group.

Random Selection means subjects are randomly selected to participate in a study that involves an intervention.

Random Assignment means subjects are randomly assigned to whether they will be in a control group or a group that receives an intervention.

Controlled Trials are trials or studies that include a "control" group. If you were researching whether hand-washing posters were effective in getting students to wash their hands, you would put the posters in all of the bathrooms of one high school and in none of the bathrooms in another high school with similar demographic make up. The high school without the posters would be the control group. The control group allows you to see just how effective or ineffective your intervention was when you compare data at the end of your study.

Randomized Controlled Trials (RCTs) are also sometimes called Randomized Clinical Trials. These are studies where the participants are not necessarily randomly selected, but they are sorted into either an intervention group or a control group randomly. So in the example above, the researchers might select twenty high schools in South Texas that were relatively similar (demographic makeup, household incomes, size, etc.) and randomly decide which schools received hand-washing posters and which did not.
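Random assignment of intact units like schools is straightforward to sketch in code. The school names, the helper function, and the seed below are all made up for illustration:

```python
import random

def randomly_assign(units, seed=None):
    """Shuffle units and split them evenly into intervention and control groups."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

schools = [f"School {i}" for i in range(1, 21)]  # twenty similar high schools
posters, control = randomly_assign(schools, seed=7)

print(len(posters), len(control))  # 10 10
```

Because each school's assignment is determined by the shuffle alone, any pre-existing differences between schools are spread across the two groups by chance, which is exactly what a quasi-experiment lacks.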

Telling whether an article is a Randomized Controlled Trial (RCT) is relatively simple.

First, check the article's publication information. Sometimes even before you open an article, the publication information will identify it as a randomized controlled trial.

If you can't find the information in the article's publication information, the next step is to read the article's Abstract and Methodologies. In at least one of these sections, the researchers will state whether or not they used a control group in their study and whether or not the control and the intervention groups were assigned randomly.

The Methodologies section in particular should clearly explain how the participants were sorted into groups. If the author states that participants were randomly assigned to groups, then that study is a Randomized Controlled Trial (RCT). If nothing about randomization is mentioned, it is safe to assume the article is not an RCT.


If you know when you begin your research that you're interested in just Randomized Control Trials (RCTs), you can tell the database to just show you results that include Randomized Control Trials (RCTs).

In CINAHL, you can do that by scrolling down on the homepage and checking the box next to "Randomized Control Trials"


If you keep scrolling, you'll get to a box that says "Publication Type." You can also scroll through those options and select "Randomized Control Trials." 


If you're in PubMed, then enter your search terms and hit "Search." Then, when you're on the results page, click "Randomized Controlled Trial" under "Article types."

If you don't see a "Randomized Controlled Trial" option, click "Customize...," check the box next to "Randomized Controlled Trial," click the blue "show" button, and then click on "Randomized Controlled Trial" to make sure you've selected it.


This is a really helpful way to limit your search results to just the kinds of articles you're interested in, but you should always double check that an article is in fact about a Randomized Control Trial (RCT) by reading the article's Methodologies section thoroughly.

  • Last Updated: Aug 21, 2024 10:40 AM
  • URL: https://guides.library.tamucc.edu/nursingresearch


J Korean Med Sci. v.38(37); 2023 Sep 18. PMC10506897

Conducting and Writing Quantitative and Qualitative Research

Edward Barroga

1 Department of Medical Education, Showa University School of Medicine, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

Atsuko Furuta, Makiko Arima, Shizuma Tsuchiya, Chikako Kawahara, Yusuke Takamiya

Comprehensive knowledge of quantitative and qualitative research systematizes scholarly research and enhances the quality of research output. Scientific researchers must be familiar with both and skilled enough to conduct their investigation within the frame of their chosen research type. When conducting quantitative research, scientific researchers should describe an existing theory, generate a hypothesis from the theory, test their hypothesis in novel research, and re-evaluate the theory. Thereafter, they should take a deductive approach in writing the testing of the established theory based on experiments. When conducting qualitative research, scientific researchers raise a question, answer the question by performing a novel study, and propose a new theory to clarify and interpret the obtained results. Afterwards, they should take an inductive approach to writing the formulation of concepts based on collected data. When scientific researchers combine the whole spectrum of inductive and deductive research approaches using both quantitative and qualitative research methodologies, they apply mixed-method research. Familiarity and proficiency with these research aspects facilitate the construction of novel hypotheses, development of theories, or refinement of concepts.

Graphical Abstract

[Graphical abstract image: jkms-38-e291-abf001.jpg]

INTRODUCTION

Novel research studies are conceptualized by scientific researchers first by asking excellent research questions and developing hypotheses, then answering these questions by testing their hypotheses in ethical research. 1 , 2 , 3 Before they conduct novel research studies, scientific researchers must possess considerable knowledge of both quantitative and qualitative research. 2

In quantitative research, researchers describe existing theories, generate and test a hypothesis in novel research, and re-evaluate existing theories deductively based on their experimental results. 1 , 4 , 5 In qualitative research, scientific researchers raise and answer research questions by performing a novel study, then propose new theories by clarifying their results inductively. 1 , 6

RATIONALE OF THIS ARTICLE

When researchers have a limited knowledge of both research types and how to conduct them, this can result in substandard investigation. Researchers must be familiar with both types of research and skilled to conduct their investigations within the frames of their chosen type of research. Thus, meticulous care is needed when planning quantitative and qualitative research studies to avoid unethical research and poor outcomes.

Understanding the methodological and writing assumptions 7 , 8 underpinning quantitative and qualitative research, especially by non-Anglophone researchers, is essential for their successful conduct. Scientific researchers, especially in the academe, face pressure to publish in international journals 9 where English is the language of scientific communication. 10 , 11 In particular, non-Anglophone researchers face challenges related to linguistic, stylistic, and discourse differences. 11 , 12 Knowing the assumptions of the different types of research will help clarify research questions and methodologies, easing these challenges.

SEARCH FOR RELEVANT ARTICLES

To identify articles relevant to this topic, we adhered to the search strategy recommended by Gasparyan et al. 7 We searched through PubMed, Scopus, Directory of Open Access Journals, and Google Scholar databases using the following keywords: quantitative research, qualitative research, mixed-method research, deductive reasoning, inductive reasoning, study design, descriptive research, correlational research, experimental research, causal-comparative research, quasi-experimental research, historical research, ethnographic research, meta-analysis, narrative research, grounded theory, phenomenology, case study, and field research.

AIMS OF THIS ARTICLE

This article aims to provide a comparative appraisal of qualitative and quantitative research for scientific researchers. At present, there is still a need to define the scope of qualitative research, especially its essential elements. 13 Consensus on the critical appraisal tools to assess the methodological quality of qualitative research remains lacking. 14 Framing and testing research questions can be challenging in qualitative research. 2 In the healthcare system, it is essential that research questions address increasingly complex situations. Therefore, research has to be driven by the kinds of questions asked and the corresponding methodologies to answer these questions. 15 The mixed-method approach also needs to be clarified as this would appear to arise from different philosophical underpinnings. 16

This article also aims to discuss how particular types of research should be conducted and how they should be written in adherence to international standards. In the US, Europe, and other countries, responsible research and innovation was conceptualized and promoted with six key action points: engagement, gender equality, science education, open access, ethics, and governance. 17 , 18 International ethics standards in research 19 as well as academic integrity during doctoral trainings are now integral to the research process. 20

POTENTIAL BENEFITS FROM THIS ARTICLE

This article would be beneficial for researchers in further enhancing their understanding of the theoretical, methodological, and writing aspects of qualitative and quantitative research, and their combination.

Moreover, this article reviews the basic features of both research types and overviews the rationale for their conduct. It imparts information on the most common forms of quantitative and qualitative research, and how they are carried out. These aspects would be helpful for selecting the optimal methodology to use for research based on the researcher’s objectives and topic.

This article also provides information on the strengths and weaknesses of quantitative and qualitative research. Such information would help researchers appreciate the roles and applications of both research types and how to gain from each or their combination. As different research questions require different types of research and analyses, this article is anticipated to assist researchers better recognize the questions answered by quantitative and qualitative research.

Finally, this article would help researchers to have a balanced perspective of qualitative and quantitative research without considering one as superior to the other.

TYPES OF RESEARCH

Research can be classified into two general types, quantitative and qualitative. 21 Both types of research entail writing a research question and developing a hypothesis. 22 Quantitative research involves a deductive approach to prove or disprove the hypothesis that was developed, whereas qualitative research involves an inductive approach to create a hypothesis. 23 , 24 , 25 , 26

In quantitative research, the hypothesis is stated before testing. In qualitative research, the hypothesis is developed through inductive reasoning based on the data collected. 27 , 28 For types of data and their analysis, qualitative research usually includes data in the form of words instead of numbers more commonly used in quantitative research. 29

Quantitative research usually includes descriptive, correlational, causal-comparative / quasi-experimental, and experimental research. 21 On the other hand, qualitative research usually encompasses historical, ethnographic, meta-analysis, narrative, grounded theory, phenomenology, case study, and field research. 23 , 25 , 28 , 30 A summary of the features, writing approach, and examples of published articles for each type of qualitative and quantitative research is shown in Table 1 . 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43

Table 1. Types of quantitative and qualitative research: methodology features, research writing pointers, and examples of published articles

Quantitative research

Descriptive research
- Methodology feature: Describes the status of an identified variable to provide systematic information about a phenomenon.
- Writing pointers: Explain how a situation, sample, or variable was examined or observed as it occurred, without investigator interference.
- Example: Östlund AS, Kristofferzon ML, Häggström E, Wadensten B. Primary care nurses’ performance in motivational interviewing: a quantitative descriptive study. 2015;16(1):89.

Correlational research
- Methodology feature: Determines and interprets the extent of the relationship between two or more variables using statistical data.
- Writing pointers: Describe the establishment of reliability and validity, converging evidence, relationships, and predictions based on statistical data.
- Example: Díaz-García O, Herranz Aguayo I, Fernández de Castro P, Ramos JL. Lifestyles of Spanish elders from supervened SARS-CoV-2 variant onwards: A correlational research on life satisfaction and social-relational praxes. 2022;13:948745.

Causal-comparative/quasi-experimental research
- Methodology feature: Establishes cause-effect relationships among variables; uses non-randomly assigned groups where it is not logically feasible to conduct a randomized controlled trial.
- Writing pointers: Write about comparisons of the identified control groups exposed to the treatment variable with unexposed groups; provide clear descriptions of the causes determined after making data analyses and conclusions, and of known and unknown variables that could potentially affect the outcome.
- Example (causal-comparative design): Sharma MK, Adhikari R. Effect of school water, sanitation, and hygiene on health status among basic level students in Nepal. Environ Health Insights 2022;16:11786302221095030.
- Example (quasi-experimental design): Tuna F, Tunçer B, Can HB, Süt N, Tuna H. Immediate effect of Kinesio taping® on deep cervical flexor endurance: a non-controlled, quasi-experimental pre-post quantitative study. 2022;40(6):528-35.

Experimental research
- Methodology feature: Establishes a cause-effect relationship among a group of variables making up a study using the scientific method.
- Writing pointers: Describe how an independent variable was manipulated to determine its effects on dependent variables; explain the random assignments of subjects to experimental treatments.
- Example: Hyun C, Kim K, Lee S, Lee HH, Lee J. Quantitative evaluation of the consciousness level of patients in a vegetative state using virtual reality and an eye-tracking system: a single-case experimental design study. 2022;32(10):2628-45.

Qualitative research

Historical research
- Methodology feature: Describes past events, problems, issues, and facts.
- Writing pointers: Write the research based on historical reports.
- Example: Silva Lima R, Silva MA, de Andrade LS, Mello MA, Goncalves MF. Construction of professional identity in nursing students: qualitative research from the historical-cultural perspective. 2020;28:e3284.

Ethnographic research
- Methodology feature: Develops in-depth analytical descriptions of current systems, processes, and phenomena or understandings of the shared beliefs and practices of groups or cultures.
- Writing pointers: Compose a detailed report of the interpreted data.
- Example: Gammeltoft TM, Huyền Diệu BT, Kim Dung VT, Đức Anh V, Minh Hiếu L, Thị Ái N. Existential vulnerability: an ethnographic study of everyday lives with diabetes in Vietnam. 2022;29(3):271-88.

Meta-analysis
- Methodology feature: Accumulates experimental and correlational results across independent studies using a statistical method.
- Writing pointers: Specify the topic, follow reporting guidelines, describe the inclusion criteria, identify key variables, explain the systematic search of databases, and detail the data extraction.
- Example: Oeljeklaus L, Schmid HL, Kornfeld Z, Hornberg C, Norra C, Zerbe S, et al. Therapeutic landscapes and psychiatric care facilities: a qualitative meta-analysis. 2022;19(3):1490.

Narrative research
- Methodology feature: Studies an individual and gathers data by collecting stories for constructing a narrative about the individual’s experiences and their meanings.
- Writing pointers: Write an in-depth narration of events or situations focused on the participants.
- Example: Anderson H, Stocker R, Russell S, Robinson L, Hanratty B, Robinson L, et al. Identity construction in the very old: a qualitative narrative study. 2022;17(12):e0279098.

Grounded theory
- Methodology feature: Engages in an inductive ground-up or bottom-up process of generating theory from data.
- Writing pointers: Write the research as a theory and a theoretical model; describe the data analysis procedure for theoretical coding used to develop hypotheses based on what the participants say.
- Example: Amini R, Shahboulaghi FM, Tabrizi KN, Forouzan AS. Social participation among Iranian community-dwelling older adults: a grounded theory study. 2022;11(6):2311-9.

Phenomenology
- Methodology feature: Attempts to understand subjects’ perspectives.
- Writing pointers: Write the research report by contextualizing and reporting the subjects’ experiences.
- Example: Green G, Sharon C, Gendler Y. The communication challenges and strength of nurses’ intensive corona care during the two first pandemic waves: a qualitative descriptive phenomenology study. 2022;10(5):837.

Case study
- Methodology feature: Analyzes collected data by detailed identification of themes and development of narratives.
- Writing pointers: Write the report as an in-depth study of possible lessons learned from the case.
- Example: Horton A, Nugus P, Fortin MC, Landsberg D, Cantarovich M, Sandal S. Health system barriers and facilitators to living donor kidney transplantation: a qualitative case study in British Columbia. 2022;10(2):E348-56.

Field research
- Methodology feature: Directly investigates and extensively observes a social phenomenon in its natural environment without implantation of controls or experimental conditions.
- Writing pointers: Describe the phenomenon under the natural environment over time.
- Example: Buus N, Moensted M. Collectively learning to talk about personal concerns in a peer-led youth program: a field study of a community of practice. 2022;30(6):e4425-32.

QUANTITATIVE RESEARCH

Deductive approach

The deductive approach is used to prove or disprove the hypothesis in quantitative research. 21 , 25 Using this approach, researchers 1) make observations about an unclear or new phenomenon, 2) investigate the current theory surrounding the phenomenon, and 3) hypothesize an explanation for the observations. Afterwards, researchers will 4) predict outcomes based on the hypotheses, 5) formulate a plan to test the prediction, and 6) collect and process the data (or revise the hypothesis if the original hypothesis was false). Finally, researchers will then 7) verify the results, 8) make the final conclusions, and 9) present and disseminate their findings ( Fig. 1A ).

[Figure 1 image: jkms-38-e291-g001.jpg]

Types of quantitative research

The common types of quantitative research include (a) descriptive, (b) correlational, (c) experimental, and (d) causal-comparative/quasi-experimental research. 21

Descriptive research is conducted and written by describing the status of an identified variable to provide systematic information about a phenomenon. A hypothesis is developed and tested after data collection, analysis, and synthesis. This type of research attempts to factually present comparisons and interpretations of findings based on analyses of the characteristics, progression, or relationships of a certain phenomenon without manipulating the employed variables or controlling the involved conditions. 44 Here, the researcher examines, observes, and describes a situation, sample, or variable as it occurs without investigator interference. 31 , 45 To be meaningful, the systematic collection of information requires careful selection of study units by precise measurement of individual variables 21 often expressed as ranges, means, frequencies, and/or percentages. 31 , 45 Descriptive statistical analysis using ANOVA, Student’s t -test, or the Pearson coefficient method has been used to analyze descriptive research data. 46
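
As a toy illustration of the descriptive summaries mentioned above (ranges, means, frequencies, percentages), the sketch below uses only Python's standard library; the variable and its values are entirely hypothetical, not drawn from any cited study.

```python
from statistics import mean
from collections import Counter

# Hypothetical sample: self-reported exercise days per week for 10 participants
days = [0, 2, 3, 3, 5, 1, 2, 3, 4, 2]

print("range:", (min(days), max(days)))  # spread of observed values
print("mean:", mean(days))               # average days per week

# Frequency and percentage of each observed value
freq = Counter(days)
for value, count in sorted(freq.items()):
    print(f"{value} days: n={count} ({100 * count / len(days):.0f}%)")
```

Summaries like these describe the sample as it is; no variable is manipulated, matching the "no investigator interference" feature of descriptive designs.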

Correlational research is performed by determining and interpreting the extent of a relationship between two or more variables using statistical data. This involves recognizing data trends and patterns without necessarily proving their causes. The researcher studies only the data, relationships, and distributions of variables in a natural setting, but does not manipulate them. 21 , 45 Afterwards, the researcher establishes reliability and validity, provides converging evidence, describes relationship, and makes predictions. 47
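
The strength of association in correlational research is often summarized with Pearson's correlation coefficient. Below is a minimal sketch computing Pearson's r from first principles; the paired variables and their values are invented for illustration only.

```python
from math import sqrt

# Hypothetical paired observations: hours of sleep vs. a fatigue score
sleep   = [5, 6, 7, 8, 9]
fatigue = [9, 8, 6, 5, 3]

n = len(sleep)
mx, my = sum(sleep) / n, sum(fatigue) / n
cov = sum((x - mx) * (y - my) for x, y in zip(sleep, fatigue))
sx = sqrt(sum((x - mx) ** 2 for x in sleep))
sy = sqrt(sum((y - my) ** 2 for y in fatigue))
r = cov / (sx * sy)  # Pearson's r, between -1 and 1

print(f"Pearson r = {r:.3f}")
```

Even a strong r like this one only describes an association; as the surrounding text stresses, a correlational design cannot by itself establish causation.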

Experimental research is usually referred to as true experimentation. The researcher establishes the cause-effect relationship among a group of variables making up a study using the scientific method or process. This type of research attempts to identify the causal relationships between variables through experiments by arbitrarily controlling the conditions or manipulating the variables used. 44 The scientific manuscript would include an explanation of how the independent variable was manipulated to determine its effects on the dependent variables. The write-up would also describe the random assignments of subjects to experimental treatments. 21
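
The random assignment of subjects to experimental treatments described above can be sketched in a few lines of Python; the participant IDs, group sizes, and seed here are hypothetical.

```python
import random

random.seed(42)  # fixed seed so the assignment is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.shuffle(participants)                         # randomize the order

# Split the shuffled list evenly into control and treatment arms
half = len(participants) // 2
control, treatment = participants[:half], participants[half:]

print("control:  ", sorted(control))
print("treatment:", sorted(treatment))
```

Because every participant has the same chance of landing in either arm, the groups are expected to be equivalent on both known and unknown variables, which is what licenses the cause-effect interpretation.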

Causal-comparative/quasi-experimental research closely resembles true experimentation but is conducted by establishing the cause-effect relationships among variables. It may also be conducted to establish the cause or consequences of differences that already exist between, or among groups of individuals. 48 This type of research compares outcomes between the intervention groups in which participants are not randomized to their respective interventions because of ethics- or feasibility-related reasons. 49 As in true experiments, the researcher identifies and measures the effects of the independent variable on the dependent variable. However, unlike true experiments, the researchers do not manipulate the independent variable.

In quasi-experimental research, naturally formed or pre-existing groups that are not randomly assigned are used, particularly when an ethical randomized controlled trial is not feasible or logical. 50 The researcher identifies the groups that have been exposed to the treatment variable and then compares these with the unexposed groups. The causes are determined and described after data analysis, after which conclusions are made. The known and unknown variables that could still affect the outcome are also included. 7

QUALITATIVE RESEARCH

Inductive approach

Qualitative research involves an inductive approach to develop a hypothesis. 21 , 25 Using this approach, researchers answer research questions and develop new theories, but they do not test hypotheses or previous theories. The researcher seldom examines the effectiveness of an intervention, but rather explores the perceptions, actions, and feelings of participants using interviews, content analysis, observations, or focus groups. 25 , 45 , 51

Distinctive features of qualitative research

Qualitative research seeks to elucidate the lives of people, including their lived experiences, behaviors, attitudes, beliefs, personality characteristics, emotions, and feelings. 27 , 30 It also explores societal, organizational, and cultural issues. 30 This type of research provides a good story mimicking an adventure which results in a “thick” description that puts readers in the research setting. 52

The qualitative research questions are open-ended, evolving, and non-directional. 26 The research design is usually flexible and iterative, commonly employing purposive sampling. The sample size depends on theoretical saturation, and data is collected using in-depth interviews, focus groups, and observations. 27

In various instances, excellent qualitative research may offer insights that quantitative research cannot. Moreover, qualitative research approaches can describe the ‘lived experience’ perspectives of patients, practitioners, and the public. 53 Interestingly, recent developments have looked into the use of technology in shaping qualitative research protocol development, data collection, and analysis phases. 54

Qualitative research employs various techniques, including conversational and discourse analysis, biographies, interviews, case-studies, oral history, surveys, documentary and archival research, audiovisual analysis, and participant observations. 26

Conducting qualitative research

To conduct qualitative research, investigators 1) identify a general research question, 2) choose the main methods, sites, and subjects, and 3) determine methods of data documentation and of access to subjects. Researchers also 4) decide on the various aspects of collecting data (e.g., questions to ask, behaviors to observe, issues to look for in documents, and how much data to gather (number of questions, interviews, or observations)), 5) clarify researchers’ roles, and 6) evaluate the study’s ethical implications in terms of confidentiality and sensitivity. Afterwards, researchers 7) collect data until saturation, 8) interpret data by identifying concepts and theories, and 9) revise the research question if necessary and form hypotheses. In the final stages of the research, investigators 10) collect and verify data to address revisions, 11) complete the conceptual and theoretical framework to finalize their findings, and 12) present and disseminate findings ( Fig. 1B ).

Types of qualitative research

The different types of qualitative research include (a) historical research, (b) ethnographic research, (c) meta-analysis, (d) narrative research, (e) grounded theory, (f) phenomenology, (g) case study, and (h) field research. 23 , 25 , 28 , 30

Historical research is conducted by describing past events, problems, issues, and facts. The researcher gathers data from written or oral descriptions of past events and attempts to recreate the past without interpreting the events and their influence on the present. 6 Data is collected using documents, interviews, and surveys. 55 The researcher analyzes these data by describing the development of events and writes the research based on historical reports. 2

Ethnographic research is performed by observing everyday life details as they naturally unfold. 2 It can also be conducted by developing in-depth analytical descriptions of current systems, processes, and phenomena or by understanding the shared beliefs and practices of a particular group or culture. 21 The researcher collects extensive narrative non-numerical data based on many variables over an extended period, in a natural setting within a specific context. To do this, the researcher uses interviews, observations, and active participation. These data are analyzed by describing and interpreting them and developing themes. A detailed report of the interpreted data is then provided. 2 The researcher immerses himself/herself into the study population and describes the actions, behaviors, and events from the perspective of someone involved in the population. 23 As examples of its application, ethnographic research has helped to understand a cultural model of family and community nursing during the coronavirus disease 2019 outbreak. 56 It has also been used to observe the organization of people’s environment in relation to cardiovascular disease management in order to clarify people’s real expectations during follow-up consultations, possibly contributing to the development of innovative solutions in care practices. 57

Meta-analysis is carried out by accumulating experimental and correlational results across independent studies using a statistical method. 21 The report is written by specifying the topic and meta-analysis type. In the write-up, reporting guidelines are followed, which include description of inclusion criteria and key variables, explanation of the systematic search of databases, and details of data extraction. Meta-analysis offers in-depth data gathering and analysis to achieve deeper inner reflection and phenomenon examination. 58
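
One common statistical device for accumulating results across independent studies is fixed-effect inverse-variance pooling. The sketch below uses invented per-study effect estimates and standard errors; it shows one simple pooling scheme, not the only one used in meta-analyses.

```python
from math import sqrt

# Hypothetical per-study effect estimates and their standard errors
effects = [0.30, 0.45, 0.25]
ses     = [0.10, 0.15, 0.08]

# Each study is weighted by the inverse of its variance, so more
# precise studies contribute more to the pooled estimate
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the pooled standard error is smaller than any single study's, which is the statistical payoff of accumulating results; real meta-analyses would also assess heterogeneity before trusting a fixed-effect model.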

Narrative research is performed by collecting stories for constructing a narrative about an individual’s experiences and the meanings attributed to them by the individual. 9 It aims to hear the voice of individuals through their account or experiences. 17 The researcher usually conducts interviews and analyzes data by storytelling, content review, and theme development. The report is written as an in-depth narration of events or situations focused on the participants. 2 , 59 Narrative research weaves together sequential events from one or two individuals to create a “thick” description of a cohesive story or narrative. 23 It facilitates understanding of individuals’ lives based on their own actions and interpretations. 60

Grounded theory is conducted by engaging in an inductive ground-up or bottom-up strategy of generating a theory from data. 24 The researcher incorporates deductive reasoning when using constant comparisons. Patterns are detected in observations and then a working hypothesis is created which directs the progression of inquiry. The researcher collects data using interviews and questionnaires. These data are analyzed by coding the data, categorizing themes, and describing implications. The research is written as a theory and theoretical models. 2 In the write-up, the researcher describes the data analysis procedure (i.e., theoretical coding used) for developing hypotheses based on what the participants say. 61 As an example, a qualitative approach has been used to understand the process of skill development of a nurse preceptor in clinical teaching. 62 A researcher can also develop a theory using the grounded theory approach to explain the phenomena of interest by observing a population. 23

Phenomenology is carried out by attempting to understand the subjects’ perspectives. This approach is pertinent in social work research where empathy and perspective are keys to success. 21 Phenomenology studies an individual’s lived experience in the world. 63 The researcher collects data by interviews, observations, and surveys. 16 These data are analyzed by describing experiences, examining meanings, and developing themes. The researcher writes the report by contextualizing and reporting the subjects’ experience. This research approach describes and explains an event or phenomenon from the perspective of those who have experienced it. 23 Phenomenology understands the participants’ experiences as conditioned by their worldviews. 52 It is suitable for a deeper understanding of non-measurable aspects related to the meanings and senses attributed by individuals’ lived experiences. 60

Case study is conducted by collecting data through interviews, observations, document content examination, and physical inspections. The researcher analyzes the data through a detailed identification of themes and the development of narratives. The report is written as an in-depth study of possible lessons learned from the case. 2

Field research is performed using a group of methodologies for undertaking qualitative inquiries. The researcher goes directly to the social phenomenon being studied and observes it extensively. In the write-up, the researcher describes the phenomenon under the natural environment over time with no implantation of controls or experimental conditions. 45

DIFFERENCES BETWEEN QUANTITATIVE AND QUALITATIVE RESEARCH

Scientific researchers must be aware of the differences between quantitative and qualitative research in terms of their working mechanisms to better understand their specific applications. This knowledge will be of significant benefit to researchers, especially during the planning process, to ensure that the appropriate type of research is undertaken to fulfill the research aims.

In terms of quantitative research data evaluation, four well-established criteria are used: internal validity, external validity, reliability, and objectivity. 23 The respective correlating concepts in qualitative research data evaluation are credibility, transferability, dependability, and confirmability. 30 Regarding the write-up, quantitative research papers are usually shorter than their qualitative counterparts, which allows the latter to pursue a deeper understanding and thus produce the so-called “thick” description. 29

Interestingly, a major characteristic of qualitative research is that the research process is reversible and the research methods can be modified. This is in contrast to quantitative research in which hypothesis setting and testing take place unidirectionally. This means that in qualitative research, the research topic and question may change during literature analysis, and that the theoretical and analytical methods could be altered during data collection. 44

Quantitative research focuses on natural, quantitative, and objective phenomena, whereas qualitative research focuses on social, qualitative, and subjective phenomena. 26 Quantitative research answers the questions “what?” and “when?,” whereas qualitative research answers the questions “why?,” “how?,” and “how come?.” 64

Perhaps the most important distinction between quantitative and qualitative research lies in the nature of the data being investigated and analyzed. Quantitative research focuses on statistical, numerical, and quantitative aspects of phenomena and employs these in its data collection and analysis, whereas qualitative research focuses on the humanistic, descriptive, and qualitative aspects of phenomena. 26 , 28

Structured versus unstructured processes

The aims and types of inquiries determine the difference between quantitative and qualitative research. In quantitative research, statistical data and a structured process are usually employed by the researcher. Quantitative research usually suggests quantities (i.e., numbers). 65 On the other hand, researchers typically use opinions, reasons, verbal statements, and an unstructured process in qualitative research. 63 Qualitative research is more related to quality or kind. 65

In quantitative research, the researcher employs a structured process for collecting quantifiable data. Often, a close-ended questionnaire is used wherein the response categories for each question are designed in which values can be assigned and analyzed quantitatively using a common scale. 66 Quantitative research data is processed consecutively from data management, then data analysis, and finally to data interpretation. Data should be free from errors and missing values. In data management, variables are defined and coded. In data analysis, statistics (e.g., descriptive, inferential) as well as central tendency (i.e., mean, median, mode), spread (standard deviation), and parameter estimation (confidence intervals) measures are used. 67
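
As a concrete sketch of the measures this paragraph names (central tendency, spread, and parameter estimation), the following uses Python's standard statistics module on hypothetical scores; the 95% confidence interval uses a simple normal approximation (z = 1.96), which real analyses might replace with a t-based interval for small samples.

```python
import statistics as st
from math import sqrt

scores = [72, 75, 78, 78, 81, 84, 86, 90]  # hypothetical scale scores

mean   = st.mean(scores)    # central tendency
median = st.median(scores)
mode   = st.mode(scores)
sd     = st.stdev(scores)   # spread (sample standard deviation)

# 95% confidence interval for the mean (normal approximation)
se = sd / sqrt(len(scores))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean={mean}, median={median}, mode={mode}, sd={sd:.2f}")
print(f"95% CI for the mean: ({ci[0]:.2f}, {ci[1]:.2f})")
```

These computations presuppose the structured, close-ended data collection the text describes: every response already sits on a common numeric scale.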

In qualitative research, the researcher uses an unstructured process for collecting data. These non-statistical data may be in the form of statements, stories, or long explanations. Various responses according to respondents may not be easily quantified using a common scale. 66

Composing a qualitative research paper resembles writing a quantitative research paper. Both papers consist of a title, an abstract, an introduction, objectives, methods, findings, and discussion. However, a qualitative research paper is less regimented than a quantitative research paper. 27

Quantitative research as a deductive hypothesis-testing design

Quantitative research can be considered as a hypothesis-testing design as it involves quantification, statistics, and explanations. It flows from theory to data (i.e., deductive), focuses on objective data, and applies theories to address problems. 45 , 68 It collects numerical or statistical data; answers questions such as how many, how often, how much; uses questionnaires, structured interview schedules, or surveys 55 as data collection tools; analyzes quantitative data in terms of percentages, frequencies, statistical comparisons, graphs, and tables showing statistical values; and reports the final findings in the form of statistical information. 66 It uses variable-based models from individual cases and findings are stated in quantified sentences derived by deductive reasoning. 24

In quantitative research, a phenomenon is investigated in terms of the relationship between an independent variable and a dependent variable which are numerically measurable. The research objective is to statistically test whether the hypothesized relationship is true. 68 Here, the researcher studies what others have performed, examines current theories of the phenomenon being investigated, and then tests hypotheses that emerge from those theories. 4

Quantitative hypothesis-testing research has certain limitations. These limitations include (a) problems with selection of meaningful independent and dependent variables, (b) the inability to reflect subjective experiences as variables since variables are usually defined numerically, and (c) the need to state a hypothesis before the investigation starts. 61

Qualitative research as an inductive hypothesis-generating design

Qualitative research can be considered as a hypothesis-generating design since it involves understanding and descriptions in terms of context. It flows from data to theory (i.e., inductive), focuses on observation, and examines what happens in specific situations with the aim of developing new theories based on the situation. 45 , 68 This type of research (a) collects qualitative data (e.g., ideas, statements, reasons, characteristics, qualities), (b) answers questions such as what, why, and how, (c) uses interviews, observations, or focused-group discussions as data collection tools, (d) analyzes data by discovering patterns of changes, causal relationships, or themes in the data; and (e) reports the final findings as descriptive information. 61 Qualitative research favors case-based models from individual characteristics, and findings are stated using context-dependent existential sentences that are justifiable by inductive reasoning. 24

In qualitative research, texts and interviews are analyzed and interpreted to discover meaningful patterns characteristic of a particular phenomenon. 61 Here, the researcher starts with a set of observations and then moves from particular experiences to a more general set of propositions about those experiences. 4

Qualitative hypothesis-generating research involves collecting interview data from study participants regarding a phenomenon of interest, and then using what they say to develop hypotheses. It involves the process of questioning more than obtaining measurements; it generates hypotheses using theoretical coding. 61 When using large interview teams, the key to promoting high-level qualitative research and cohesion in large team methods and successful research outcomes is the balance between autonomy and collaboration. 69

Qualitative data may also include observed behavior, participant observation, media accounts, and cultural artifacts. 61 Focus group interviews are usually conducted, audiotaped or videotaped, and transcribed. Afterwards, the transcript is analyzed by several researchers.

Qualitative research also involves scientific narratives and the analysis and interpretation of textual or numerical data (or both), mostly from conversations and discussions. Such approach uncovers meaningful patterns that describe a particular phenomenon. 2 Thus, qualitative research requires skills in grasping and contextualizing data, as well as communicating data analysis and results in a scientific manner. The reflective process of the inquiry underscores the strengths of a qualitative research approach. 2

Combination of quantitative and qualitative research

When both quantitative and qualitative research methods are used in the same research, mixed-method research is applied. 25 This combination provides a complete view of the research problem and achieves triangulation to corroborate findings, complementarity to clarify results, expansion to extend the study’s breadth, and explanation to elucidate unexpected results. 29

Moreover, quantitative and qualitative findings are integrated to address the weakness of both research methods 29 , 66 and to have a more comprehensive understanding of the phenomenon spectrum. 66

For data analysis in mixed-method research, real non-quantitized qualitative data and quantitative data must both be analyzed. 70 The data obtained from quantitative analysis can be further expanded and deepened by qualitative analysis. 23

In terms of assessment criteria, Hammersley 71 opined that qualitative and quantitative findings should be judged using the same standards of validity and value-relevance. Both approaches can be mutually supportive. 52

Quantitative and qualitative research must be carefully studied and conducted by scientific researchers to avoid unethical research and inadequate outcomes. Quantitative research involves a deductive process wherein a research question is answered with a hypothesis that describes the relationship between independent and dependent variables, and the testing of the hypothesis. This investigation can be aptly termed as hypothesis-testing research involving the analysis of hypothesis-driven experimental studies resulting in a test of significance. Qualitative research involves an inductive process wherein a research question is explored to generate a hypothesis, which then leads to the development of a theory. This investigation can be aptly termed as hypothesis-generating research. When the whole spectrum of inductive and deductive research approaches is combined using both quantitative and qualitative research methodologies, mixed-method research is applied, and this can facilitate the construction of novel hypotheses, development of theories, or refinement of concepts.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Data curation: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Formal analysis: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C.
  • Investigation: Barroga E, Matanguihan GJ, Takamiya Y, Izumi M.
  • Methodology: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Project administration: Barroga E, Matanguihan GJ.
  • Resources: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Supervision: Barroga E.
  • Validation: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Visualization: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.

Difference Between Correlational and Experimental Research

Correlational research is a non-experimental method used to examine relationships between two or more variables. A positive correlation means that as one variable rises, the other rises too; a negative correlation means that as one rises, the other falls. Experimental research, by contrast, manipulates one or more variables to study their cause-and-effect relationship with an outcome. This manipulation lets researchers observe directly how changing one variable influences another, which is why experiments offer the strongest evidence about the causes of variation in data.

What is Correlational Research?

Correlational research is a non-experimental technique used to explore associations among two or more variables in their natural state. The correlation coefficient captures both the strength and the direction of the association: a positive coefficient indicates that the variables move together, while a negative coefficient indicates that they move in opposite directions. Because no variable is manipulated, a correlational study can describe how variables are related but cannot, by itself, establish that one causes the other.

What is Experimental Research?

Experimental research verifies a concept or theory by deliberately manipulating variables and observing the outcome. Data are collected and the results evaluated in order to draw inferences and test the concept. Experimentation is vital to scientific investigation because it enables researchers to understand how various factors affect the result of a specific experiment. It is widely used in physics, biology, psychology, and many other fields.

Similarities Between Correlational & Experimental Research

Both correlational and experimental research are used to identify relationships between variables and to test theories against systematically collected data. The main difference is that in a correlational study the researcher does not control the variables but simply measures how they relate, whereas an experiment changes one variable while keeping the others constant to find out how that change affects the outcome.

Types of Correlational Research

 Correlational studies fall into three forms, distinguished by the direction of the relationship between the variables:

  • Positive Correlational Research: the variables move in the same direction, so an increase in one is accompanied by an increase in the other. For example, increasing employee wages may be accompanied by higher product costs.
  • Zero Correlational Research: there is no systematic relationship between the variables; a change in one produces no predictable change in the other. For example, a person's income and physical stamina would typically show no correlation.
  • Negative Correlational Research: the variables move in opposite directions, so a rise in one is accompanied by a fall in the other. For example, as the price of a good or service rises, the quantity demanded typically falls.
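These three patterns can be made concrete with Pearson's correlation coefficient r, which ranges from −1 to +1. The sketch below computes r from its definition on toy wage, cost, price, and stamina figures (all numbers invented for illustration):

```python
# Pearson's r: covariance of x and y divided by the product of their
# standard deviations (the sample-size terms cancel out).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

wages   = [10, 20, 30, 40, 50]
costs   = [11, 19, 33, 41, 48]   # rises with wages  -> positive correlation
prices  = [50, 41, 28, 22, 10]   # falls as wages rise -> negative correlation
stamina = [2, 1, 3, 1, 2]        # unrelated to wages  -> zero correlation

print(pearson_r(wages, costs))    # close to +1
print(pearson_r(wages, prices))   # close to -1
print(pearson_r(wages, stamina))  # approximately 0
```

A coefficient near ±1 indicates a strong linear relationship; a coefficient near 0 indicates none, matching the three forms described above.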

Types of Experimental Research

  Experimental research is also of three primary types:

  • Pre-experimental Research: a single group (or several groups) is observed after a treatment presumed to cause change has been applied. A pre-experimental design helps the researcher decide whether the groups under observation warrant further, more rigorous investigation.
  • True Experimental Research: A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group.
  • Quasi-experimental Research: The word “quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.

Difference Between Correlational & Experimental Research

  
  • Definition: a correlation describes the strength and/or direction of the relationship between two or more variables, whereas experimental research manipulates sets of variables to test a theory.
  • Data and control: correlational research allows researchers to collect much more data than experiments, but experimental researchers keep firm control over variables to obtain their results.
  • Example: the relationship between paddy yield and fertilizer use is a simple correlation, in which the presence of one variable is associated with change in another. Combining various chemical elements to observe how one element affects another is experimental research.

Correlational studies examine variables in a mostly natural setting, identifying them and establishing relationships between them. However, these relationships cannot imply a cause-and-effect connection between the variables. Experiments single out certain independent variables and manipulate them to determine their cause-and-effect relationship with dependent variables. That is the main difference between correlational and experimental studies. Each has its uses, depending on the circumstances and scope of each individual study.

Difference Between Correlational and Experimental-Research – FAQs

What relationship exists in correlational research?

A correlation has direction and can be either positive or negative.

What is a real-life example of correlation?

The variables time spent watching TV and exam score have a negative correlation: as time spent watching TV increases, exam scores decrease.

What are the variables in the experiment “Will seeds soaked in sugar water sprout sooner than seeds soaked in plain water?”

The independent variable is the type of water (sugar or plain), and the dependent variable is the time the seeds take to sprout.

Are correlational studies more accurate than experiments?

In general, correlational research has high external validity, while experimental research has high internal validity.


  • Open access
  • Published: 11 October 2024

The effect of flipped learning on students’ basic psychological needs and its association with self-esteem

  • Esma I. Avakyan   ORCID: orcid.org/0000-0002-9509-4278 1 , 2 &
  • David C. M Taylor   ORCID: orcid.org/0000-0002-3296-2963 2  

BMC Medical Education volume  24 , Article number:  1127 ( 2024 ) Cite this article


Modification of the learning environment enhances academic performance and meta-motivational skills, yet it is largely unknown which underlying cause potentiates these effects. The goal of this study was to analyse the effect of the flipped classroom (FC) on basic psychological needs and self-esteem.

Forty undergraduate medical students participated in a single-site, two-phase study. In Phase I, students attended a traditional lecture-based classroom (TC). In Phase II, the same group attended FC. Upon completion of each phase, students completed two questionnaires: the Basic Psychological Need Satisfaction and Frustration Scale and the Rosenberg self-esteem scale.

Autonomy satisfaction was significantly higher in FC (n = 40, z = 5.520, p < .001), and the same tendency was seen for competence satisfaction in FC (n = 40, z = 5.122, p < .001). As for frustration of the three needs, statistically significant differences between TC and FC were observed on all three subscales: in FC, autonomy (n = 40, z = −5.370, p < .001), relatedness (n = 40, z = 4.187, p < .001), and competence (n = 40, z = −5.323, p < .001) frustration were significantly lower. Self-esteem was significantly higher in FC (n = 40, z = 5.528, p < .001). In TC, self-esteem negatively correlated with autonomy frustration (r(38) = −0.430, p < .01) and competence frustration (r(38) = −0.379, p < .05). In FC, self-esteem positively correlated with autonomy satisfaction (r(38) = 0.316, p < .05) and competence satisfaction (r(38) = 0.429, p < .01).

Conclusions

FC fulfils students’ basic psychological needs, specifically the needs for autonomy and competence, and supports self-esteem better than TC. Collaborative work and academic scaffolding contribute to the behavioural engagement of students in the learning process. FC, with its main focus on students’ active involvement, may better meet millennials’ needs. Implementing validated questionnaires to measure students’ psychological needs should become regular practice in medical schools, specifically during curriculum redesign.

Peer Review reports

Introduction

Despite a large body of literature, current knowledge is unexpectedly scarce when it comes to analysing the effect of flipped learning on self-esteem and basic psychological needs, such as autonomy, relatedness and competence [ 1 , 2 ]. Learning is often regarded as a social process [ 3 ]. Moreover, social learning might occur in different contexts, from formal workplace training to informal online communities and social networks [ 4 ]. It allows students to learn not only from experts but also from their peers. Previous research showed that modification of the learning environment towards a more student-centred approach enhances positive student relationships with peers and faculty [ 5 ]. However, learning is a complex process that along with cognitive elements involves motivation, and meta-motivational skills [ 6 ]. Several studies showed the positive effect of the student-centred approach on internal student motivation, which among other variables, proved to be a strong predictor of academic performance and general well-being (e.g., self-esteem) [ 7 , 8 ]. The scarce evidence on medical students hasn’t examined the underlying causes that might potentiate these effects [ 9 ]. Therefore, the aim of the study was to explore the effect of a relatively new methodology of teaching on basic psychological needs and self-esteem among medical students.

Background of the study

Due to increasing pressure for Higher Education institutions to meet the conceptual needs of the time, medical schools are transforming their curriculum to promote interaction between students and their peers, as well as with faculty [ 10 ].

One of these active learning setups –flipped learning also known as Flipped Classroom (FC) - received many accolades as an approach that best reflects students’ needs [ 11 ] and became popular among faculty and students [ 12 ]. FC was found to be effective in developing skills needed to function effectively in the 21st century. Among them are the ability to work in groups [ 13 ], apply knowledge in practice [ 14 , 15 , 16 , 17 ], and analyse and synthesise information [ 18 , 19 ].

Numerous studies investigated the impact of FC on a particular set of dimensions, mostly overall motivation and cognitive learning outcomes [ 20 , 21 ]. To illustrate a few research examples, Hew and Lo in their meta-analysis of 28 comparative studies demonstrated that FC was more effective in improving learning performance than a lecture-based traditional classroom (TC) [ 22 ]. In the context of medical education, Chowdhury et al. reported that in FC students “feel more engaged and active in the learning process.” [ 23 ]. Additionally, Lundin et al. showed that most studies are related to local context and research is “quite scattered”, while systematic evidence based on empirical data is still limited [ 24 ]. Nevertheless, a number of critical appraisals of FC concluded that students in FC may learn more than in TC [ 25 , 26 ]; that FC is more beneficial for learning higher cognition skills [ 27 ]; and that learners are more engaged in FC, although satisfaction largely depended on how teachers prepared instructions [ 28 ].

Some research works explored FC impact on students’ motivation and satisfaction. For example, Aksoy and Pasli Gurdogan reported that FC significantly benefited students’ knowledge and motivation by measuring their self-efficacy and lower scores in test anxiety [ 29 ]. Finally, Sergis, Sampson and Pelliccione explored whether FC contributed to enhancing students’ basic psychological needs satisfaction and showed promising results [ 30 ]. However, the study was performed in the context of K-12 education.

Despite a large body of publications, the current knowledge is unexpectedly scarce when it comes to analysing the effect of FC on basic psychological needs, the satisfaction of which can be the underlying cause of the positive impact of flipped learning on cognitive and meta-cognitive skills. According to self-determination theory (SDT), a learning environment that fosters basic psychological needs will facilitate autonomous or internal motivation, needed for engagement in the learning process and overall improvement in academic performance [ 31 , 32 ]. On the contrary, thwarting of those needs can devitalize the learning process, resulting in maladaptive functioning and procrastination among students [ 33 , 34 ].

SDT is a theory that highlights the significance of inner “needs” development among individuals for personality development, behavioural self-regulation, and performance in a certain situation [ 35 ].

The theory implies that an individual’s psychological well-being is closely related to the fulfilment of basic psychological needs, such as the need for Autonomy , Relatedness , and Competence [ 36 ].

Need for autonomy

It is the expression of the self and fosters the ability to act in alignment with the individual’s values. Teaching that supports autonomy makes students feel free as opposed to controlling teaching style or behaviour [ 37 , 38 ]. Moreover, it stimulates intrinsic motivation and is associated with deep learning and better performance [ 39 ].

Need for relatedness

The need refers to the inner desire to feel related or connected to others. It highlights the importance of being valued in a society and the need to feel cared for and supported by others. The need is satisfied when individuals experience affiliation with significant others and thus may develop trusted relationships [ 40 ].

Need for competence

In accordance with SDT when individuals don’t feel capable it can affect their motivation to pursue whatever activities they are involved in. On the contrary, the experience of mastery and the ability to do things leads to satisfaction and well-being [ 41 ]. This existing positive link between competence and greater well-being indicates that it is a precondition for psychological health and personal growth through mastering the environment [ 42 ].

SDT also evaluates how contextual factors affect individuals’ needs satisfaction. Hence, it can be stated that need satisfaction is to be expected to shift along with the changes in the environment or perception of those changes [ 43 , 44 ].

Summing up, SDT suggests that when three basic psychological needs are satisfied, individuals are more likely to experience greater well-being [ 45 , 46 ]. On the contrary, when these needs are not met, individuals may experience negative consequences such as poor well-being and psychological distress [ 47 , 48 ]. Besides, the theory argues that all three needs are universal in the way that their relationship with well-being and optimal functioning shall remain robust regardless of the cultural context [ 49 , 50 ].

Therefore, finding the answers to what lies behind increased satisfaction and overall motivation in FC from the perspective of SDT, as a theoretical framework of our research, could provide new valuable data.

There is also the research gap on the possible effect of FC on students’ well-being which can be further divided into academic well-being and general well-being, such as self-esteem [ 51 ]. Although the definition of self-esteem is inconsistent, it can be figuratively defined as an “underground foundation” of a skyscraper building [ 52 ]. According to Rosenberg’s theory of self-esteem, individuals may experience negative or positive attitudes toward themselves and their perception of their thoughts and feelings [ 53 ]. Various studies have shown that low self-esteem may have a detrimental effect on motivation and learning [ 54 , 55 ]. Self-esteem can fluctuate among medical students as they tend to experience long-standing stress [ 56 , 57 ]. Baumeister et al. reported that high self-esteem has a positive impact on students’ motivation, and academic achievement [ 58 ]. In addition, it was also demonstrated that the authoritarian style of management of individuals promotes silence, obedience, and acceptance of information with no critical approach, and therefore may contribute to low self-esteem [ 59 ]. Conversely, education that involves active participation of students, and life skill training improves the feeling of self-esteem [ 60 ]. Research has found that learning engagement is closely related to academic performance and has a positive correlation with self-esteem [ 61 ]. Moreover, students with low self-esteem do not consider themselves competent unlike those with high self-esteem showing resilience towards academic failures [ 62 ].

Epstein also showed that self-esteem is one of the important factors for learning, motivation and confidence that may result in academic improvements and performance [ 63 ]. In the context of self-esteem, little is known whether FC benefits our students or puts them at a disadvantage [ 64 ]. Individuals can be classified as introverts and extraverts in terms of the way they interact with each other [ 65 ]. In the discussion-emphasised approach, verbal contribution, as an engagement marker, is highly rewarded by teachers; however, Reeve and Lee demonstrated that along with verbal engagement, behavioural and emotional constructs should not be underestimated [ 66 ]. Several studies demonstrated that introverts are prone to have lower self-esteem in comparison to more socially engaged students [ 67 , 68 ]. This may indicate that quiet students might experience difficulties through coursework which implies active participation. As a consequence, some students felt overshadowed by more vocal participants and found it hard to benefit from the learning activities [ 69 ]. Thus, exploring the effect of FC on self-esteem in comparison to conventional lecture-based learning environment from the perspective of Rosenberg theory of self-esteem, as a theoretical framework of our research, can provide useful information to better meet students’ needs.

Research purpose and questions

The purpose of this research is to analyse the effect of FC on students’ basic psychological needs: Autonomy , Relatedness and Competence and its association with Self-esteem .

Particularly, we aimed to find the answers to the following questions:

Does FC have a positive effect on fulfilling students’ basic psychological needs in comparison to their prior experience with TC?

Does FC have a positive effect on students’ Self-esteem in comparison to their prior experience with a TC?

Does satisfaction of basic psychological needs positively correlate with Self-esteem in the context of FC?

Methodology

Participants.

This was a quasi-experimental, quantitative observational study with an experimental group of undergraduate medical students. Randomisation per se was not performed, as we were dealing with an existing tutorial group. The inclusion criteria were international medical students ( N  = 40) in their third year, taking a 12-week course of Internal Medicine in the Department of Faculty Therapy, for whom English was a second language. The exclusion criteria were individuals who met the inclusion criteria but were in their fourth year of taking the course. The mean age of the students was 21.68 years (SD = 1.25), and the majority of students were from China, Iran, and Bahrain. The study is based on previously collected anonymised data, and all respondents gave informed consent. The Institutional Review Board’s Health Professions Education committee of the Gulf Medical University approved the research protocol (reference number IRB-COM-MHPE-STD-64-APRIL-2023).

FC methodology was designed and implemented for the first time at the University. The lessons were held weekly for three consecutive months. For the initial six weeks of the experimental study, the students were taught in TC, and for the last six weeks, the same group of students attended FC. Two online questionnaires were used to tap into satisfaction of students’ basic psychological needs and self-esteem at the end of TC and after exposure to FC. The group was taught by the same professor practitioner. However, in TC lectures were delivered by different faculty members.

Questionnaires

To collect data concerning three dimensions of Self-Determination Theory, students were asked to complete an English version of the Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS-Domain-specific measures), specified for training, before and after the exposure to FC [ 70 ]. The scale consists of 24 affirmative statements (items) grouped in six subscales measuring both satisfaction and frustration of basic psychological needs. Examples of the statements are: “I felt a sense of choice and freedom in the things I thought and did,” “ I had doubts about whether I could apply the proposed strategies,” “ I had the impression that the other participants had less respect for my opinion,” “ I experienced a good bond with the other participants,” “I felt like a failure because of the opinion I had of the mistakes I made.” The answers are rated on a 5-point Likert scale from 1 - Absolutely Wrong to 5 - Completely True and tap into both satisfaction and frustration with the feelings of Autonomy, Relatedness and Competence . To measure students’ self-esteem, all students were asked to complete the Rosenberg scale in the same timeframe as BPNSFS-Domain-specific [ 71 ]. Although the original scale consists of 10 items, the data used in our study contains only five negatively (reversed) worded items to tap into the negative dimension of self-image [ 72 ]. Examples of the items are: “I do not have much to be proud of,” “I wish I could have more respect for myself,” “All in all, I am inclined to feel that I am a failure.” The answers were rated on a 5-point Likert scale from 1 - Strongly Agree to 5 - Strongly Disagree . The higher the scores the higher self-esteem. Full versions of both questionnaires are presented in the section “Additional materials.”
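As a concrete illustration of how the 24 BPNSFS items map onto subscale scores, the sketch below averages four items per subscale. The item-to-subscale index mapping shown here is hypothetical and for illustration only; the actual item ordering is defined by the published instrument:

```python
# Illustrative (NOT the instrument's real) mapping of 24 Likert items
# (1-5) to the six BPNSFS subscales, four items each.
SUBSCALES = {
    "autonomy_satisfaction":    [0, 1, 2, 3],
    "autonomy_frustration":     [4, 5, 6, 7],
    "relatedness_satisfaction": [8, 9, 10, 11],
    "relatedness_frustration":  [12, 13, 14, 15],
    "competence_satisfaction":  [16, 17, 18, 19],
    "competence_frustration":   [20, 21, 22, 23],
}

def score_bpnsfs(answers):
    """answers: 24 Likert responses (each 1-5). Returns mean per subscale."""
    assert len(answers) == 24 and all(1 <= a <= 5 for a in answers)
    return {name: sum(answers[i] for i in idx) / len(idx)
            for name, idx in SUBSCALES.items()}
```

With satisfaction and frustration scored as separate subscales, low frustration is not simply the mirror image of high satisfaction, which is why both are analysed in the study.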

Educational design and delivery

Educational design in the TC and FC environment was created with the highest level of similarity to minimise biases. Both TC and FC consisted of three stages ( Fig.  1 ). The main difference referred mainly to the way the learning materials were delivered in Stage 1 , and the time students spent in the classroom during Stage 2 . In TC, the learning content was distributed during face-to-face classroom sessions, in which the lecturer presented the new material. Whereas in FC, learning activities included “home-based” sessions prior to face-to-face classroom sessions. In TC, the distribution of contact time was shorter in Stage 2 compared to FC, as the students spent the time by attending a face-to-face lecture in Stage 1. In FC, the distribution of contact time in Stage 2 was longer as the lecture time was added to the group learning in class.

Figure 1

Educational design of FC and TC. 1 In general, students had no time limits for class preparation; however, deadlines were set to help students prioritise their tasks before the face-to-face seminars; 2 MCQs – multiple-choice questions

Statistical analysis

The analyses were performed using IBM SPSS Statistics, Base edition. Descriptive statistics of the items, such as means and standard deviations of the variables, were checked. For the distribution of the scores, the values of skewness and kurtosis were examined. Considering the relatively small sample size, the Shapiro-Wilk test was used to evaluate the assumption of normality. The non-parametric Wilcoxon signed-rank test was applied to compare differences in the means of the variables within a group before and after the exposure to FC. Reliability was assessed with Cronbach’s alpha, for which values > 0.7 are typically accepted as satisfactory [ 73 ], and, to assess validity, Spearman’s rho correlations were computed for the Autonomy, Relatedness and Competence subscales before and after the exposure to FC. Correlation analysis was used to examine the relations between self-esteem and the basic psychological needs in both FC and TC.
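Although the study ran these tests in SPSS, the same non-parametric procedures can be sketched in Python with scipy; the paired scores below are simulated, and every variable name is illustrative rather than taken from the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores: one subscale rated by the same 40 students
# before (TC) and after (FC) the intervention, on a 1-5 Likert scale.
tc = rng.integers(1, 6, size=40).astype(float)
fc = np.clip(tc + rng.integers(0, 3, size=40), 1, 5).astype(float)

# Normality screen (Shapiro-Wilk), as used to justify non-parametric tests.
w_tc, p_shapiro = stats.shapiro(tc)

# Paired comparison without a normality assumption: Wilcoxon signed-rank.
stat, p_wilcoxon = stats.wilcoxon(tc, fc)

# Monotonic association between two score sets: Spearman's rho.
rho, p_rho = stats.spearmanr(tc, fc)
```

With Likert data the Shapiro-Wilk test frequently rejects normality, which is why the Wilcoxon signed-rank test and Spearman's rho are the natural non-parametric choices here.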

Cohen’s d z was calculated to measure the effect size.
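For a paired (within-subject) design, Cohen's d_z is the mean of the paired differences divided by the standard deviation of those differences. A minimal sketch with made-up numbers (not the study's data):

```python
import numpy as np

def cohens_dz(before, after):
    """Cohen's d_z for paired samples: mean of the differences
    divided by the sample standard deviation of the differences."""
    diff = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Illustrative values only:
d = cohens_dz([2, 3, 3, 2, 4], [4, 4, 5, 3, 5])
print(round(d, 2))  # → 2.56
```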

Among the 40 international students, 40% were male and 60% female. The mean age of the participants was 21.68 years (range = 20–25 years, SD = 1.25).

Descriptive statistics of the 24 items of the Basic Psychological Need Satisfaction and Frustration Scale and the 5 items of the Rosenberg self-esteem scale were evaluated. To assess normality, the Shapiro-Wilk test was performed together with inspection of skewness and kurtosis, which identified that most items across the sample violated the assumption of normality. For that reason, a factor loading analysis was conducted to assess and validate the measurement structure of the set of observed variables.

Taking into consideration the eigenvalue criterion (> 1.0), two factors were retained in a factor-loading analysis involving the 8 autonomy items. In particular, the four autonomy satisfaction items loaded on one factor, and the four autonomy frustration items loaded on the other (Table  1 ). The eigenvalues for these two retained factors were 2.67 and 1.39, and together they explained 50.80% of the variance.
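The eigenvalue (Kaiser) criterion applied here can be sketched as follows; the item responses are simulated, and the 40 × 8 shape merely mirrors the study's autonomy subscale, so the retained count and explained variance will not match the reported values.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical item responses: 40 students x 8 autonomy items (1-5 Likert).
X = rng.integers(1, 6, size=(40, 8)).astype(float)

# Eigenvalues of the item correlation matrix (Kaiser criterion: keep > 1.0).
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
retained = eigvals[eigvals > 1.0]

# Proportion of total variance explained by the retained factors
# (the trace of a correlation matrix equals the number of items).
explained = retained.sum() / eigvals.sum()
```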

An analogous 2-factor pattern was seen for the 8 relatedness items, the 8 competence items, and the 5 Rosenberg self-esteem items (Table  2 ), explaining 40.39% of the variance for relatedness, 42.44% for competence, and 63.50% for self-esteem. The extracted communalities were above 0.5 for all variables (a factor loading of 0.5 or higher is considered a rule of thumb in SPSS), so all items were retained.

Internal consistency between items was assessed with Cronbach’s alpha. To avoid a negative alpha, positively and negatively worded questions were not mixed. Negatively worded items were reverse-scored before the sum score of the five items of the Rosenberg self-esteem scale was calculated.
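Reverse-scoring and Cronbach's alpha can be sketched as below; the responses are simulated, and both helper functions are illustrative rather than the study's code.

```python
import numpy as np

def reverse_likert(scores, scale_max=5):
    """Reverse-score a negatively worded item on a 1..scale_max Likert scale."""
    return (scale_max + 1) - np.asarray(scores)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical: five reversed Rosenberg items from 40 students, built from a
# shared "true score" plus noise so the items are positively correlated.
rng = np.random.default_rng(2)
base = rng.integers(1, 6, size=(40, 1))
raw = np.clip(base + rng.integers(-1, 2, size=(40, 5)), 1, 5)
rev = reverse_likert(raw)            # higher score = higher self-esteem
alpha = cronbach_alpha(rev)
```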

The Cronbach’s alpha values for the whole sample were 0.72 for autonomy, 0.75 for relatedness and 0.70 for competence in TC, and slightly higher in FC: 0.73, 0.79 and 0.75, respectively. The Cronbach’s alpha for the Rosenberg self-esteem scale was 0.73 in both TC and FC.

The study’s first aim was to examine the effect of FC on fulfilling students’ needs for Autonomy, Relatedness and Competence in comparison to their prior experience with TC. As a preliminary step, descriptive statistics and a cumulative mean comparison (mean as a measure of central tendency) of the BPNSFS-Domain-specific subscales and self-esteem between TC and FC were performed (Table  3 ). Autonomy satisfaction was significantly higher in FC ( n  = 40, z = 5.520, p  < .001, Cohen’s d z = 0.9), and the same tendency was seen for competence satisfaction in FC ( n  = 40, z = 5.122, p  < .001, Cohen’s d z = 0.98). Although the cumulative mean for Relatedness satisfaction was slightly higher in FC (3.61 vs. 3.38), the difference was not statistically significant. As for the frustration of the three needs, statistically significant differences between TC and FC were observed for all three subscales. In FC, autonomy ( n  = 40, z = − 5.370, p  < .001, Cohen’s d z = 0.9), relatedness ( n  = 40, z = 4.187, p  < .001, Cohen’s d z = 0.89), and competence ( n  = 40, z = − 5.323, p  < .001, Cohen’s d z = 0.98) frustration were significantly lower.

The study’s second aim was to examine the effect of the FC methodology on students’ Self-esteem in comparison to their prior experience with the traditional classroom (TC) environment. A descriptive and cumulative mean comparison of self-esteem between TC and FC is presented in Table  3 . Self-esteem was significantly higher in FC than in TC ( n  = 40, z = 5.528, p  < .001). Figure  2 graphically displays a box plot of self-esteem in the TC and FC settings. The middle 50% of participants’ self-esteem scores ranged between 2.6 and 3.3 in TC, and between 3.9 and 4.3 in FC. The median self-esteem score was 2.8 for TC and 4.0 for FC.

Figure 2

Comparison of students’ Self-Esteem in TC and FC settings. Self-esteem was significantly higher among students in the FC setting

The study’s third aim was to evaluate whether satisfaction of Autonomy, Relatedness and Competence positively correlated with Self-esteem in the context of the FC methodology versus TC. Nonparametric Spearman’s correlations were obtained for all the variables in TC and FC. In TC, self-esteem correlated negatively with autonomy frustration (r(38) = − 0.430, p  < .01) and competence frustration (r(38) = − 0.379, p  < .05) (Table  4 ). The correlations between autonomy, relatedness and competence satisfaction and self-esteem were not significant ( p  > .05). Competence satisfaction correlated positively with autonomy satisfaction (r(38) = 0.471, p  < .01).

In FC, self-esteem correlated positively with autonomy satisfaction (r(38) = 0.316, p  < .05) (Table  5 ) and competence satisfaction (r(38) = 0.429, p  < .01). The correlations of self-esteem with autonomy, relatedness and competence frustration in FC were not significant ( p  > .05). Competence satisfaction correlated positively with autonomy satisfaction (r(38) = 0.471, p  < .01).

Millennials are considered to be tech-savvy and often prefer to acquire knowledge in real-life settings by making mistakes without the fear of being judged, which can be seen as a major characteristic of FC [ 74 ]. Therefore, it was worthwhile examining how a relatively new methodology focused on a student-centred approach would fulfil students’ “self-determination” needs in comparison to TC. The findings of our research demonstrated a consistent pattern. Specifically, students’ satisfaction of the needs for autonomy and competence was significantly higher in the FC setting. Autonomy satisfaction in FC was presumably achieved through collaborative work, which was quite often led by the students under the supervision of their teaching professor. It is also argued that the instructional and academic scaffolding provided by a teacher, along with hands-on activities, contributes to an enhanced feeling of competence, which makes students feel more confident and, most importantly, unafraid of making mistakes in the classroom [ 75 ]. Although both TC and FC shared identical teaching instructions during face-to-face classroom sessions, students in TC experienced a greater lack of autonomy. Peer interaction, as well as peer-professor interaction, is not always supported during a lecture. Moreover, all the lectures were delivered early in the morning, and “not everyone is a morning bird” [ 68 , 76 ].

In terms of relatedness satisfaction, no statistically significant difference between FC and TC was found. This may be because relatedness is a much larger construct and can be linked to maladaptive social and interpersonal interactions [ 77 ]. Moreover, it should be noted that the group of students was quite heterogeneous, with cultural backgrounds ranging from Iran, China and South Africa to Bahrain, Mozambique and Brazil. While in Western cultures positive social interactions with a certain level of openness are preferred, diverse Eastern cultures may have social skills specifically rooted in upbringing and practised societal norms [ 78 ]. However, it is important to note that relatedness frustration was significantly lower in the FC environment. This may indicate that students felt more secure and perceived less threat in the positive and flexible environment of FC. Teachers should consider such specific constraints when working with students from diverse cultural contexts, and should organise their classroom sessions to be more encouraging of social and academic interaction among students.

The second aim of the study was to evaluate whether FC fulfilled students’ self-esteem. Self-esteem is one of the key factors that influence academic achievement [ 79 ]. It is also closely related to academic performance through the affective domain [ 80 ]. Therefore, examining the effect of FC on self-esteem was considered valuable, as it provides empirical evidence that universities can take into consideration in their curriculum design. Self-esteem, along with other constructs such as motivation and sufficient feedback, is still an undervalued factor in curriculum development [ 81 ]. The findings of our research again demonstrated a consistent pattern. In particular, self-esteem was significantly higher in the FC environment, which can be explained by emotional and behavioural engagement in more extensive collaborative work. This suggests that teachers should set up a socially supportive environment that helps promote the personal worth of the students. The active student involvement and collaborative concepts implemented in FC can teach students important skills, such as understanding that there are different personalities in a group and showing a respectful attitude toward each other. All these skills help build socially desirable behaviour that enhances self-esteem. Apart from academic achievement, such social competence at university can lead to other advantages in life.

The third aim of the study was to examine the correlation between needs satisfaction and self-esteem. Our results indicate that autonomy and competence satisfaction correlated positively with self-esteem in the FC environment. Conversely, self-esteem correlated negatively with autonomy frustration and competence frustration in TC. Relatedness satisfaction and frustration did not correlate with self-esteem in either TC or FC. The socio-cultural context of the study may have contributed to this result, which can be explored further.
Together these results underline the possible role of the learning environment in the satisfaction or frustration of students’ basic psychological needs, which in turn correlated with self-esteem. The learning environment is a multifaceted term; however, it can be broadly described as the environment “in which students’ learning process is embedded.” To further this idea, we address the role of the teacher as a leading factor in creating a high-quality lesson aimed at developing critical thinking, supported by effective instruction, active student involvement and feedback.

Limitations and future perspectives

The following limitations should be taken into consideration when the results of our study are evaluated. First, it should be noted that this was a one-group, non-randomised, pre-test-post-test quasi-experiment, in which outcomes were measured twice: once before and once after the exposure to the flipped learning environment. Second, there was no control group in the research, which would have allowed the use of more complex statistical analyses, such as a multivariate analysis of variance. The correlation analysis used in the study does not establish cause and effect. Another limitation is the student-teacher familiarity effect among our participants: basic psychological needs and self-esteem may change over the course of study, specifically when students are taught by the same teacher [ 82 ]. Related to this is a further limitation, the possible biasing effect of the teacher’s likability; these factors should be considered in future research.

Another limitation of our research is the assumed universality of SDT, which does not explain cultural and individual differences in the way students get their needs satisfied. Again, this may require more exploration in future research. It should also be noted that although we investigated students with diverse cultural backgrounds, the representation of cultural populations was limited, and we were therefore unable to evaluate cultural markers such as the values of independence, freedom, openness and trust. Hence, generalisation of the findings to broader populations should be made with caution.

Practical application

FC, with its main focus on students’ active involvement in class discussion, may better meet millennials’ needs. On the micro level, implementing a new teaching methodology may have a positive impact on students’ self-esteem, self-regulation and personal growth. Using validated questionnaires to measure students’ psychological constructs should become a regular practice in medical schools, specifically during the process of curriculum planning and redesign. Regardless of the existing trend toward a student-centred approach in education, it is the faculty who play a pivotal role in providing students with a quality education. Hence, on the macro level, university administrators and leadership should not underestimate the importance of faculty development and the role of teacher evaluation in improving the quality of teaching and the integrity of teachers. Therefore, faculty leadership should implement best practices of Health Professions Education Development to prepare faculty for positive change in the affective, intellectual, and social aspects of academic life.

The present research found a positive role of FC in the satisfaction of the basic psychological needs for autonomy and competence, and in the correlation of that satisfaction with self-esteem, for students from diverse cultural backgrounds. These findings highlight the significance of needs satisfaction in a more flexible and socially friendly learning environment as a pivotal factor in enhancing students’ self-esteem.

Data availability

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BPNSFS: Basic Psychological Need Satisfaction and Frustration Scale

F: Frustration

FC: Flipped classroom

S: Satisfaction

SD: Standard deviation

SDT: Self-determination theory

TC: Traditional classroom

Ryan RM, Deci EL, Vansteenkiste M, Soenens B. Building a science of motivated persons: self-determination theory’s empirical approach to human experience and the regulation of behavior. Motiv Sci. 2021;7:97–110. https://doi.org/10.1037/mot0000194 .


Vansteenkiste M, Ryan RM, Soenens B. Basic psychological need theory: advancements, critical themes, and future directions. Motiv Emot. 2020;44:1–31. https://doi.org/10.1007/s11031-019-09818-1 .

Vygotsky LS. Mind in society: the development of higher psychological processes. Cambridge, MA: Harvard University Press; 1978.

Thammasitboon S, Brand PLP. The physiology of learning: strategies clinical teachers can adopt to facilitate learning. Eur J Pediatr. 2022;181:429–33. https://doi.org/10.1007/s00431-021-04054-7 .

Kassab SE, El-Sayed W, Hamdy H. Student engagement in undergraduate medical education: a scoping review. Med Educ. 2022;56:703–15. https://doi.org/10.1111/medu.14799 .

Alizadeh M, Parmelee D, Taylor D, Norouzi S, Norouzi A. Keeping motivation on track by metamotivational knowledge: AMEE Guide No. 160. Med Teach. 2023;1–9. https://doi.org/10.1080/0142159X.2023.2190482 .

Saeed S, Zyngier D. How motivation influences Student Engagement: a qualitative case study. J Educ Learn. 2012;1:252. https://doi.org/10.5539/jel.v1n2p252 .

Orsini C, Binnie VI, Wilson SL. Determinants and outcomes of motivation in health professions education: a systematic review based on self-determination theory. J Educ Eval Health Prof. 2016;13:19. https://doi.org/10.3352/jeehp.2016.13.19 .

Valdemoros San Emeterio MÁ, Alonso Ruiz RA, Gallardo A, Garrachón. Inclusion of physical education in the hospital classroom from the Service-Learning perspective [Inclusión de la educación física en el aula hospitalaria en clave de Aprendizaje y Servicio]. ESPIRAL Cuad Profr. 2024;17:1–16. https://doi.org/10.25115/ecp.v17i35.9686 .

van Laar E, van Deursen AJAM, van Dijk JAGM, de Haan J. Determinants of 21st-century skills and 21st-century digital skills for workers: a systematic literature review. SAGE Open. 2020;10:215824401990176. https://doi.org/10.1177/2158244019900176 .

Murillo-Zamorano LR, López Sánchez JÁ, Godoy-Caballero AL. How the flipped classroom affects knowledge, skills, and engagement in higher education: effects on students’ satisfaction. Comput Educ. 2019;141:103608. https://doi.org/10.1016/j.compedu.2019.103608 .

Mehta NB, Hull AL, Young JB, Stoller JK. Just imagine: New paradigms for Medical Education. Acad Med. 2013;88:1418–23. https://doi.org/10.1097/ACM.0b013e3182a36a07 .

O’Flaherty J, Phillips C. The use of flipped classrooms in higher education: a scoping review. Internet High Educ. 2015;25:85–95. https://doi.org/10.1016/j.iheduc.2015.02.002 .

Betihavas V, Bridgman H, Kornhaber R, Cross M. The evidence for ‘flipping out’: a systematic review of the flipped classroom in nursing education. Nurse Educ Today. 2016;38:15–21. https://doi.org/10.1016/j.nedt.2015.12.010 .

Nouri J. The flipped classroom: for active, effective and increased learning – especially for low achievers. Int J Educ Technol High Educ. 2016;13:33. https://doi.org/10.1186/s41239-016-0032-z .

Badyal D, Singh T. Learning theories: the basics to learn in medical education. Int J Appl Basic Med Res. 2017;7:1. https://doi.org/10.4103/ijabmr.IJABMR_385_17 .

Bergmann J, Sams A. Flip your classroom: reach every student in every class every day. Eugene, Or: International Society for Technology in Education; 2012.


Campillo-Ferrer JM, Miralles-Martínez P. Effectiveness of the flipped classroom model on students’ self-reported motivation and learning during the COVID-19 pandemic. Humanit Soc Sci Commun. 2021;8:176. https://doi.org/10.1057/s41599-021-00860-4 .

Chang S-C, Hwang G-J. Impacts of an augmented reality-based flipped learning guiding approach on students’ scientific project performance and perceptions. Comput Educ. 2018;125:226–39. https://doi.org/10.1016/j.compedu.2018.06.007 .

Elzainy A, Sadik AE. The impact of flipped classroom: evaluation of cognitive level and attitude of undergraduate medical students. Ann Anat - Anat Anz. 2022;243:151952. https://doi.org/10.1016/j.aanat.2022.151952 .

Ji M, Luo Z, Feng D, Xiang Y, Xu J. Short- and long-term influences of flipped Classroom Teaching in Physiology Course on Medical Students’ learning effectiveness. Front Public Health. 2022;10:835810. https://doi.org/10.3389/fpubh.2022.835810 .

Hew KF, Lo CK. Flipped classroom improves student learning in health professions education: a meta-analysis. BMC Med Educ. 2018;18:38. https://doi.org/10.1186/s12909-018-1144-z .

Chowdhury TA, Khan H, Druce MR, Drake WM, Rajakariar R, Thuraisingham R, Dobbie H, Parvanta L, Chinegwundoh F, Almushatat A, Warrens A, Alstead EM. Flipped learning: turning medical education upside down. Future Healthc J. 2019;6:192–5. https://doi.org/10.7861/fhj.2018-0017 .

Lundin M, Bergviken Rensfeldt A, Hillman T, Lantz-Andersson A, Peterson L. Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. Int J Educ Technol High Educ. 2018;15:20. https://doi.org/10.1186/s41239-018-0101-6 .

Kraut AS, Omron R, Caretta-Weyer H, Jordan J, Manthey D, Wolf SJ, Yarris LM, Johnson S, Kornegay J. The flipped Classroom: a critical Appraisal. West J Emerg Med. 2019;20:527–36. https://doi.org/10.5811/westjem.2019.2.40979 .

Liebert CA, Mazer L, Bereknyei Merrell S, Lin DT, Lau JN. Student perceptions of a simulation-based flipped classroom for the surgery clerkship: a mixed-methods study. Surgery. 2016;160:591–8. https://doi.org/10.1016/j.surg.2016.03.034 .

Morton DA, Colbert-Getz JM. Measuring the impact of the flipped anatomy classroom: the importance of categorizing an assessment by Bloom’s taxonomy: impact of the flipped anatomy Classroom. Anat Sci Educ. 2017;10:170–5. https://doi.org/10.1002/ase.1635 .

Khanova J, Roth MT, Rodgers JE, McLaughlin JE. Student experiences across multiple flipped courses in a single curriculum. Med Educ. 2015;49:1038–48. https://doi.org/10.1111/medu.12807 .

Aksoy B, Pasli E, Gurdogan. Examining effects of the flipped classroom approach on motivation, learning strategies, urinary system knowledge, and urinary catheterization skills of first-year nursing students. Jpn J Nurs Sci. 2022;19. https://doi.org/10.1111/jjns.12469 .

Sergis S, Sampson DG, Pelliccione L. Investigating the impact of flipped Classroom on students’ learning experiences: a self-determination theory approach. Comput Hum Behav. 2018;78:368–78. https://doi.org/10.1016/j.chb.2017.08.011 .

Deci EL, Ryan RM. The what and why of goal pursuits: human needs and the self-determination of behavior. Psychol Inq. 2000;11:227–68. https://doi.org/10.1207/S15327965PLI1104_01 .

Gondal SA, Khan AQ, Cheema EU, Dehele IS. Impact of the flipped classroom on students’ academic performance and satisfaction in Pharmacy education: a quasi-experimental study. Cogent Educ. 2024;11:2378246. https://doi.org/10.1080/2331186X.2024.2378246 .

Ahmadi A, Noetel M, Parker P, Ryan RM, Ntoumanis N, Reeve J, Beauchamp M, Dicke T, Yeung A, Ahmadi M, Bartholomew K, Chiu TKF, Curran T, Erturan G, Flunger B, Frederick C, Froiland JM, González-Cutre D, Haerens L, Jeno LM, Koka A, Krijgsman C, Langdon J, White RL, Litalien D, Lubans D, Mahoney J, Nalipay MJN, Patall E, Perlman D, Quested E, Schneider S, Standage M, Stroet K, Tessier D, Thogersen-Ntoumani C, Tilga H, Vasconcellos D, Lonsdale C. A classification system for teachers’ motivational behaviors recommended in self-determination theory interventions. J Educ Psychol. 2023;115:1158–76. https://doi.org/10.1037/edu0000783 .

Opdenakker M-C. Need-supportive and need-thwarting teacher behavior: their importance to boys’ and girls’ academic engagement and procrastination behavior. Front Psychol. 2021;12:628064. https://doi.org/10.3389/fpsyg.2021.628064 .

Orsini CA, Tricio JA, Segura C, Tapia D. Exploring teachers’ motivation to teach: a multisite study on the associations with the work climate, students’ motivation, and teaching approaches. J Dent Educ. 2020;84:429–37. https://doi.org/10.1002/jdd.12050 .

Deci EL, Ryan RM. Self-determination theory: a macrotheory of human motivation, development, and health. Can Psychol Psychol Can. 2008;49:182–5. https://doi.org/10.1037/a0012801 .

Amoura C, Berjot S, Gillet N, Caruana S, Cohen J, Finez L. Autonomy-supportive and Controlling styles of Teaching. Swiss J Psychol. 2015;74:141–58. https://doi.org/10.1024/1421-0185/a000156 .

Gilboy MB, Heinerichs S, Pazzaglia G. Enhancing Student Engagement using the flipped Classroom. J Nutr Educ Behav. 2015;47:109–14. https://doi.org/10.1016/j.jneb.2014.08.008 .

Van Den Broeck A, Vansteenkiste M, De Witte H, Lens W. Explaining the relationships between job characteristics, burnout, and engagement: the role of basic psychological need satisfaction. Work Stress. 2008;22:277–94. https://doi.org/10.1080/02678370802393672 .

Escandon-Barbosa D, Salas-Paramo J. Need for relatedness and eating behaviour in millennials. Bus Theory Pract. 2024;25:73–82. https://doi.org/10.3846/btp.2024.16755 .

Deci EL, Ryan RM. Self-determination theory. In: Handbook of theories of social psychology, Vol. 1. London: SAGE Publications Ltd; 2012. pp. 416–37. https://doi.org/10.4135/9781446249215.n21

Legault L. Self-determination theory. In: Zeigler-Hill V, Shackelford TK, editors. Encyclopedia of personality and individual differences. Cham: Springer International Publishing; 2017. pp. 1–9. https://doi.org/10.1007/978-3-319-28099-8_1162-1 .


Petrou P, Bakker AB. Crafting one’s leisure time in response to high job strain. Hum Relat. 2016;69:507–29. https://doi.org/10.1177/0018726715590453 .

Bidee J, Vantilborgh T, Pepermans R, Willems J, Jegers M, Hofmans J. Daily motivation of volunteers in healthcare organizations: relating team inclusion and intrinsic motivation using self-determination theory. Eur J Work Organ Psychol. 2017;26:325–36. https://doi.org/10.1080/1359432X.2016.1277206 .

Ten Cate OTJ, Kusurkar RA, Williams GC. How self-determination theory can assist our understanding of the teaching and learning processes in medical education: AMEE Guide No. 59. Med Teach. 2011;33:961–73. https://doi.org/10.3109/0142159X.2011.595435 .

Martela F, Riekki TJJ. Autonomy, competence, relatedness, and beneficence: a multicultural comparison of the Four pathways to Meaningful Work. Front Psychol. 2018;9:1157. https://doi.org/10.3389/fpsyg.2018.01157 .

Deci EL, Olafsen AH, Ryan RM. Self-determination theory in Work organizations: the state of a science. Annu Rev Organ Psychol Organ Behav. 2017;4:19–43. https://doi.org/10.1146/annurev-orgpsych-032516-113108 .

Coxen L, Van Der Vaart L, Van Den Broeck A, Rothmann S. Basic Psychological needs in the work context: a systematic literature review of Diary studies. Front Psychol. 2021;12:698526. https://doi.org/10.3389/fpsyg.2021.698526 .

Church AT, Katigbak MS, Locke KD, Zhang H, Shen J, De Jesús J, Vargas-Flores J, Ibáñez-Reyes J, Tanaka-Matsumi GJ, Curtis HF, Cabrera KA, Mastor JM, Alvarez FA, Ortiz J-YR, Simon CM, Ching. Need satisfaction and Well-Being: testing self-determination theory in eight cultures. J Cross-Cult Psychol. 2013;44:507–34. https://doi.org/10.1177/0022022112466590 .

Chen B, Vansteenkiste M, Beyers W, Boone L, Deci EL, Van Der Kaap-Deeder J, Duriez B, Lens W, Matos L, Mouratidis A, Ryan RM, Sheldon KM, Soenens B, Van Petegem S, Verstuyf J. Basic psychological need satisfaction, need frustration, and need strength across four cultures. Motiv Emot. 2015;39:216–36. https://doi.org/10.1007/s11031-014-9450-1 .

Diener E. Subjective well-being. In: Diener E, editor. Sci. Well-being. Dordrecht: Springer Netherlands; 2009. pp. 11–58. https://doi.org/10.1007/978-90-481-2350-6_2 .

Bailey JA. The foundation of self-esteem. J Natl Med Assoc. 2003;95:388–93.

Rosenberg M. Society and the adolescent self-image. Princeton, NJ: Princeton University Press; 1965.


Orth U, Robins RW. Is high self-esteem beneficial? Revisiting a classic question. Am Psychol. 2022;77:5–17. https://doi.org/10.1037/amp0000922 .

Zhao Y, Zheng Z, Pan C, Zhou L. Self-Esteem and Academic Engagement among adolescents: a Moderated Mediation Model. Front Psychol. 2021;12:690828. https://doi.org/10.3389/fpsyg.2021.690828 .

Radeef AS, Faisal GG. Low self-esteem and its relation with psychological distress among Dental Students. Eur J Med Health Sci. 2019;1. https://doi.org/10.24018/ejmed.2019.1.1.21 .

Qadeer T, Javed MK, Manzoor A, Wu M, Zaman SI. The experience of International Students and Institutional recommendations: a comparison between the students from the developing and developed regions. Front Psychol. 2021;12:667230. https://doi.org/10.3389/fpsyg.2021.667230 .

Baumeister RF, Campbell JD, Krueger JI, Vohs KD. Does High Self-Esteem cause better performance, interpersonal success, happiness, or healthier lifestyles? Psychol Sci Public Interest. 2003;4:1–44. https://doi.org/10.1111/1529-1006.01431 .

Rudy D, Grusec JE. Authoritarian parenting in individualist and collectivist groups: associations with maternal emotion and cognition and children’s self-esteem. J Fam Psychol. 2006;20:68–78. https://doi.org/10.1037/0893-3200.20.1.68 .

Srikala B, Kishore KKV. Empowering adolescents with life skills education in schools - school mental health program: does it work? Indian J Psychiatry. 2010;52:344–9. https://doi.org/10.4103/0019-5545.74310 .

Virtanen TE, Kiuru N, Lerkkanen M-K, Poikkeus A-M, Kuorelahti M. Assessment of student engagement among junior high school students and associations with self-esteem, burnout, and academic achievement. 2016. https://doi.org/10.25656/01:12430

Park LE, Crocker J, Kiefer AK. Contingencies of Self-Worth, academic failure, and goal pursuit. Pers Soc Psychol Bull. 2007;33:1503–17. https://doi.org/10.1177/0146167207305538 .

Epstein S. The self-concept revisited: or a theory of a theory. Am Psychol. 1973;28:404–16. https://doi.org/10.1037/h0034679 .

Tuovinen S, Tang X, Salmela-Aro K. Introversion and Social Engagement: Scale Validation, their Interaction, and positive Association with Self-Esteem. Front Psychol. 2020;11:590748. https://doi.org/10.3389/fpsyg.2020.590748 .

Reeve J, Lee W. Students’ classroom engagement produces longitudinal changes in classroom motivation. J Educ Psychol. 2014;106:527–40. https://doi.org/10.1037/a0034934 .

Cheng H, Furnham A. Personality, self-esteem, and demographic predictions of happiness and depression. Personal Individ Differ. 2003;34:921–42. https://doi.org/10.1016/S0191-8869(02)00078-8 .

Murberg TA. The role of personal attributes and social support factors on passive behaviour in classroom among secondary school students: a prospective study. Soc Psychol Educ. 2010;13:511–22. https://doi.org/10.1007/s11218-010-9123-1 .

Pettit R, McCoy L, Kinney M. What millennial medical students say about flipped learning. Adv Med Educ Pract. 2017;8:487–97. https://doi.org/10.2147/AMEP.S139569 .

Rosheim KC. A cautionary Tale about using the Word Shy : an Action Research Study of how three quiet learners demonstrated participation beyond Speech. J Adolesc Adult Lit. 2018;61:663–70. https://doi.org/10.1002/jaal.729 .

Van der Kaap-Deeder J, Soenens B, Ryan RM, Vansteenkiste M. Manual of the Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS). Ghent, Belgium: Ghent University; 2020.

Ruddell RJ. Validity and reliability evidence for the Rosenberg self-esteem scale with adults in Canada and the United States, (2020). https://doi.org/10.14288/1.0394068

Greenberger E, Chen C, Dmitrieva J, Farruggia SP. Rosenberg Self-Esteem Scale–Revised-Negative Version, (2012). https://doi.org/10.1037/t12469-000

Olafsen AH, Halvari H, Frølund CW. The Basic Psychological need satisfaction and need frustration at work scale: a validation study. Front Psychol. 2021;12:697306. https://doi.org/10.3389/fpsyg.2021.697306 .

Twenge JM. Generational changes and their impact in the classroom: teaching Generation Me. Med Educ. 2009;43:398–405. https://doi.org/10.1111/j.1365-2923.2009.03310.x .

Van Tonder GP, Kloppers MM, Grosser MM. Enabling Self-Directed Academic and Personal Wellbeing through Cognitive Education. Front Psychol. 2022;12:789194. https://doi.org/10.3389/fpsyg.2021.789194 .

Yao MZ, He J, Ko DM, Pang K. The influence of personality, parental behaviors, and self-esteem on internet addiction: a study of Chinese college students. Cyberpsychol Behav Soc Netw. 2014;17:104–10. https://doi.org/10.1089/cyber.2012.0710 .

Xiao M, Wang Z, Kong X, Ao X, Song J, Zhang P. Relatedness need satisfaction and the Dark Triad: the role of Depression and Prevention Focus. Front Psychol. 2021;12:677906. https://doi.org/10.3389/fpsyg.2021.677906 .

Baumeister RF, Leary MR. The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychol Bull. 1995;117:497–529. https://doi.org/10.1037/0033-2909.117.3.497 .

Bailey TH, Phillips LJ. The influence of motivation and adaptation on students’ subjective well-being, meaning in life and academic performance. High Educ Res Dev. 2016;35:201–16. https://doi.org/10.1080/07294360.2015.1087474 .

Acosta-Gonzaga E. The effects of Self-Esteem and Academic Engagement on University Students’ performance. Behav Sci. 2023;13:348. https://doi.org/10.3390/bs13040348 .

Kusurkar RA, Croiset G, Mann KV, Custers E, Cate Oten. Have motivation theories guided the Development and Reform of Medical Education Curricula? A review of the literature. Acad Med. 2012;87:735–43. https://doi.org/10.1097/ACM.0b013e318253cc0e .

Hwang N. The benefits of Familiarity: the effects of repeated student-teacher matching on School Discipline. J Res Educ Eff. 2024;1–19. https://doi.org/10.1080/19345747.2024.2322673 .


Acknowledgements

Not applicable.

Funding

This study was not supported by any sponsor or funder.

Author information

Authors and affiliations

Curriculum&Co - Consulting in Education, Clinical Director of Biocorp, Los Angeles, USA

Esma I. Avakyan

Professor of Medical Education and Physiology, Gulf Medical University, Ajman, UAE

Esma I. Avakyan & David C. M Taylor


Contributions

Dr. E. A. made substantial contributions to the conception, design, analysis, and interpretation of data. Professor D.T. made substantial contributions to the conception, design, and revision of the paper, and approved the submitted version. All authors reviewed the manuscript.

Corresponding author

Correspondence to Esma I. Avakyan .

Ethics declarations

Ethics approval and consent to participate

The study is based on previously collected anonymised data, and all respondents gave informed consent. The Institutional Review Board’s Health Professions Education committee of the Gulf Medical University approved the research protocol (reference number IRB-COM-MHPE-STD-64-APRIL-2023).

Consent for publication

Each author agreed with the content and gave consent to submit and publish the work.

Competing interests

We declare that we have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Avakyan, E.I., Taylor, D.C.M. The effect of flipped learning on students’ basic psychological needs and its association with self-esteem. BMC Med Educ 24, 1127 (2024). https://doi.org/10.1186/s12909-024-06113-7


Received: 28 July 2024

Accepted: 01 October 2024

Published: 11 October 2024

DOI: https://doi.org/10.1186/s12909-024-06113-7


Keywords

  • Rosenberg self-esteem theory
  • Relatedness

BMC Medical Education

ISSN: 1472-6920

Correlational research vs quasi-experimental research




  1. What's the difference between correlational and experimental research?

    Controlled experiments establish causality, whereas correlational studies only show associations between variables. In an experimental design, you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can't impact the results. In a correlational design, you measure variables ...

  2. Correlational Research Vs Experimental Research

    The goal of correlational research is to identify whether there is a relationship between the variables and the strength of that relationship. Correlational research is typically conducted through surveys, observational studies, or secondary data analysis. Experimental Research. Experimental Research, on the other hand, is a research approach ...

  3. Experimental and Quasi-Experimental Research

You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up.

  4. 7.3 Quasi-Experimental Research

Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one. The prefix quasi means "resembling." Thus quasi-experimental research is research that resembles experimental research but is not true experimental research.

  5. Correlational Research vs. Experimental Research

    Correlational research lacks the ability to manipulate variables and establish cause-and-effect relationships. It focuses on observing and analyzing existing relationships between variables. On the other hand, experimental research allows for the manipulation of variables, providing a higher level of control and the ability to establish causality.

  6. Quasi Experimental Design Overview & Examples

    A significant advantage of quasi-experimental research over purely observational studies and correlational research is that it addresses the issue of directionality, determining which variable is the cause and which is the effect. In quasi-experiments, an intervention typically occurs during the investigation, and the researchers record outcomes before and after it, increasing the confidence ...

  7. Types of Research Designs Compared

    You can also create a mixed methods research design that has elements of both. Descriptive research vs experimental research. Descriptive research gathers data without controlling any variables, while experimental research manipulates and controls variables to determine cause and effect.

  8. Quasi-Experimental Design

    Revised on January 22, 2024. Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable. However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

  9. What's the difference between correlational and experimental research?

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level. When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure. For example, looking at a 4th grade math test ...

  10. 7.2 Correlational Research

    7.3 Quasi-Experimental Research. 7.4 Qualitative Research. Chapter 8: Complex Research Designs. 8.1 Multiple Dependent Variables. ... Correlational research is a type of nonexperimental research in which the researcher measures two variables and assesses the statistical relationship (i.e., the correlation) between them with little or no effort ...

  11. Chapter 7 Quasi-Experimental Research

The prefix quasi means "resembling." Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook et al., 1979). Because the independent variable is manipulated before the dependent variable is ...

  12. Quasi-experimental Research: What It Is, Types & Examples

Quasi-experimental research designs are a type of research design that is similar to experimental designs but doesn't give full control over the independent variable(s) like true experimental designs do. In a quasi-experimental design, the researcher changes or watches an independent variable, but the participants are not put into groups at ...

  13. Experimental vs Quasi-Experimental Design: Which to Choose?

    A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment. Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn't is not randomized.

  14. PDF Distinguishing Correlational vs. Experimental Research

2. Students are given instructions to decide whether the research is correlational or experimental. a. If correlational, they should predict a sample coefficient (r-value) or they should draw a sample scatter plot. b. If experimental, they should illustrate a bar graph and label the axes. The y-axis represents the dependent variable.

  15. Correlational Research

    A correlational research design investigates relationships between variables without the researcher controlling or manipulating any of them. A correlation reflects the strength and/or direction of the relationship between two (or more) variables. The direction of a correlation can be either positive or negative. Positive correlation.

  16. Research Designs: Quasi-Experimental, Case Studies & Correlational

    Quasi-experimental means that the research will include features of a true experiment but some elements may be missing. The most common experimental element to be missing is a random sample.

  17. PDF Quasi-experiments and Correlational Studies

researchers classify differential research as a variation of correlational research. We believe that differential research designs can employ control procedures not available in straight correlational research and therefore should be conceptualized as somewhere between quasi-experimental and correlational designs.

  18. Quasi-Experimental Research

The prefix quasi means "resembling." Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable ...

  19. Planning and Conducting Clinical Research: The Whole Process

    The pinnacle of non-experimental research is the comparative effectiveness study, which is grouped with other non-experimental study designs such as cross-sectional, ... An experimental study design without randomization is referred to as a quasi-experimental study. Experimental studies try to determine the efficacy of a new intervention on a ...

  20. Quantitative Research Designs: Non-Experimental vs. Experimental

    Without this level of control, you cannot determine any causal effects. While validity is still a concern in non-experimental research, the concerns are more about the validity of the measurements, rather than the validity of the effects. Finally, a quasi-experimental design is a combination of the two designs described above.

  21. Experimental Design

    Correlational, or non-experimental, research is research where subjects are not acted upon, but where research questions can be answered merely by observing subjects. ... An example of quasi-experimental research would be to ask "What is the effect of hand-washing posters in school bathrooms?" If researchers put posters in the same place in all ...

  22. Conducting and Writing Quantitative and Qualitative Research

Quantitative research usually includes descriptive, correlational, causal-comparative/quasi-experimental, and experimental research. On the other hand, qualitative research usually encompasses historical, ethnographic, meta-analysis, narrative, grounded theory, phenomenology, case study, and field research. A summary of the ...

  23. Difference Between Correlational and Experimental-Research

    Quasi-experimental Research: The word "quasi" means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned.

  24. The effect of flipped learning on students' basic psychological needs

    This was a quasi-experimental quantitative observational research with an experimental group of undergraduate medical students. Randomisation per se was not performed as we were dealing with the existing tutorial group. ... Correlation analysis was implemented to define the relations between self-esteem and basic psychological needs autonomy in ...
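Several of the excerpts above draw the same technical contrast: a correlational design only measures the strength and direction of an association (e.g. a Pearson r), a true experiment randomly assigns participants to conditions, and a quasi-experiment assigns by a non-random criterion such as a cutoff. A minimal pure-Python sketch of all three ideas, using made-up illustrative data (the variable names and numbers are hypothetical, not drawn from any study cited above):

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: the strength and direction of the
    linear association between two measured variables."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlational design: both variables are only measured, never manipulated,
# so r describes an association but cannot establish cause and effect.
hours_studied = [2, 4, 5, 7, 9]            # hypothetical survey responses
exam_scores = [55, 60, 70, 80, 90]
r = pearson_r(hours_studied, exam_scores)   # close to +1: strong positive

# True experimental design: participants are randomly assigned to
# conditions, which is what licenses a causal interpretation.
participants = list(range(20))
random.seed(42)
random.shuffle(participants)                # random assignment
treatment, control = participants[:10], participants[10:]

# A quasi-experiment instead assigns by a non-random criterion,
# e.g. a cutoff on a pre-existing attribute:
quasi_treatment = [p for p in participants if p >= 10]
```

Even a strong positive r here says nothing about causation; only the randomly assigned design supports a causal claim, and cutoff-based assignment sits in between, exactly as the snippets above note.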