19+ Experimental Design Examples (Methods + Types)
Ever wondered how scientists discover new medicines, psychologists learn about behavior, or even how marketers figure out what kind of ads you like? Well, they all have something in common: they use a special plan or recipe called an "experimental design."
Imagine you're baking cookies. You can't just throw random amounts of flour, sugar, and chocolate chips into a bowl and hope for the best. You follow a recipe, right? Scientists and researchers do something similar. They follow a "recipe" called an experimental design to make sure their experiments are set up in a way that the answers they find are meaningful and reliable.
Experimental design is the roadmap researchers use to answer questions. It's a set of rules and steps that researchers follow to collect information, or "data," in a way that is fair, accurate, and makes sense.
Long ago, people didn't have detailed game plans for experiments. They often just tried things out and saw what happened. But over time, people got smarter about this. They started creating structured plans—what we now call experimental designs—to get clearer, more trustworthy answers to their questions.
In this article, we'll take you on a journey through the world of experimental designs. We'll talk about the different types, or "flavors," of experimental designs, where they're used, and even give you a peek into how they came to be.
What Is Experimental Design?
Alright, before we dive into the different types of experimental designs, let's get crystal clear on what experimental design actually is.
Imagine you're a detective trying to solve a mystery. You need clues, right? Well, in the world of research, experimental design is like the roadmap that helps you find those clues. It's like the game plan in sports or the blueprint when you're building a house. Just like you wouldn't start building without a good blueprint, researchers won't start their studies without a strong experimental design.
So, why do we need experimental design? Think about baking a cake. If you toss ingredients into a bowl without measuring, you'll end up with a mess instead of a tasty dessert.
Similarly, in research, if you don't have a solid plan, you might get confusing or incorrect results. A good experimental design helps you ask the right questions, decide what to measure, and figure out how to measure it. It also helps you account for things that might mess up your results, like outside influences you hadn't thought of.
For example, let's say you want to find out if listening to music helps people focus better. Your experimental design would help you decide things like: Who are you going to test? What kind of music will you use? How will you measure focus? And, importantly, how will you make sure that it's really the music affecting focus and not something else, like the time of day or whether someone had a good breakfast?
In short, experimental design is the master plan that guides researchers through the process of collecting data, so they can answer questions in the most reliable way possible. It's like the GPS for the journey of discovery!
History of Experimental Design
Around 350 BCE, people like Aristotle were trying to figure out how the world works, but they mostly just thought really hard about things. They didn't test their ideas much. So while they were super smart, their methods weren't always the best for finding out the truth.
Fast forward to the Renaissance (14th to 17th centuries), a time of big changes and lots of curiosity. People like Galileo started to experiment by actually doing tests, like rolling balls down inclined planes to study motion. Galileo's work was cool because he combined thinking with doing. He'd have an idea, test it, look at the results, and then think some more. This approach was a lot more reliable than just sitting around and thinking.
Now, let's zoom ahead to the 19th century. This is when people like Francis Galton, an English polymath, started to get really systematic about experimentation. Galton was obsessed with measuring things. Seriously, he even tried to measure how good-looking people were! His work helped create the foundations for a more organized approach to experiments.
Next stop: the early 20th century. Enter Ronald A. Fisher, a brilliant British statistician. Fisher was a game-changer. He came up with ideas that are like the bread and butter of modern experimental design.
Fisher championed the use of the "control group"—that's a group of people or things that don't get the treatment you're testing, so you can compare them to those who do. He also stressed the importance of "randomization," which means assigning people or things to different groups by chance, like drawing names out of a hat. This makes sure the experiment is fair and the results are trustworthy.
Around the same time, American psychologists like John B. Watson and B.F. Skinner were developing "behaviorism." They focused on studying things that they could directly observe and measure, like actions and reactions.
Skinner even built boxes—called Skinner Boxes—to test how animals like pigeons and rats learn. Their work helped shape how psychologists design experiments today. Watson ran a very controversial study, the Little Albert experiment, which showed how behavior can be shaped through conditioning—in other words, how people learn to behave the way they do.
In the later part of the 20th century and into our time, computers have totally shaken things up. Researchers now use super powerful software to help design their experiments and crunch the numbers.
With computers, they can simulate complex experiments before they even start, which helps them predict what might happen. This is especially helpful in fields like medicine, where getting things right can be a matter of life and death.
Also, did you know that experimental designs aren't just for scientists in labs? They're used by people in all sorts of jobs, like marketing, education, and even video game design! Yes, someone probably ran an experiment to figure out what makes a game super fun to play.
So there you have it—a quick tour through the history of experimental design, from Aristotle's deep thoughts to Fisher's groundbreaking ideas, and all the way to today's computer-powered research. These designs are the recipes that help people from all walks of life find answers to their big questions.
Key Terms in Experimental Design
Before we dig into the different types of experimental designs, let's get comfy with some key terms. Understanding these terms will make it easier for us to explore the various types of experimental designs that researchers use to answer their big questions.
Independent Variable : This is what you change or control in your experiment to see what effect it has. Think of it as the "cause" in a cause-and-effect relationship. For example, if you're studying whether different types of music help people focus, the kind of music is the independent variable.
Dependent Variable : This is what you're measuring to see the effect of your independent variable. In our music and focus experiment, how well people focus is the dependent variable—it's what "depends" on the kind of music played.
Control Group : This is a group of people who don't get the special treatment or change you're testing. They help you see what happens when the independent variable is not applied. If you're testing whether a new medicine works, the control group would take a fake pill, called a placebo, instead of the real medicine.
Experimental Group : This is the group that gets the special treatment or change you're interested in. Going back to our medicine example, this group would get the actual medicine to see if it has any effect.
Randomization : This is like shaking things up in a fair way. You randomly put people into the control or experimental group so that each group is a good mix of different kinds of people. This helps make the results more reliable.
Sample : This is the group of people you're studying. They're a "sample" of a larger group that you're interested in. For instance, if you want to know how teenagers feel about a new video game, you might study a sample of 100 teenagers.
Bias : This is anything that might tilt your experiment one way or another without you realizing it. Like if you're testing a new kind of dog food and you only test it on poodles, that could create a bias because maybe poodles just really like that food and other breeds don't.
Data : This is the information you collect during the experiment. It's like the treasure you find on your journey of discovery!
Replication : This means doing the experiment more than once to make sure your findings hold up. It's like double-checking your answers on a test.
Hypothesis : This is your educated guess about what will happen in the experiment. It's like predicting the end of a movie based on the first half.
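Several of these terms—sample, randomization, control group, experimental group—come together in one basic move: splitting a sample into groups by chance. Here's a minimal Python sketch of that idea (the participant names and group sizes are made up for illustration):

```python
import random

# Hypothetical sample of participants (names invented for illustration)
participants = ["Ava", "Ben", "Caro", "Dev", "Ena", "Finn", "Gia", "Hugo"]

random.seed(42)               # fixed seed so the split is reproducible
random.shuffle(participants)  # randomization: the order is now pure chance

half = len(participants) // 2
control_group = participants[:half]       # no treatment (or a placebo)
experimental_group = participants[half:]  # gets the treatment

print("Control:", control_group)
print("Experimental:", experimental_group)
```

Because the shuffle is random, neither the researcher nor the participants choose who lands in which group—which is exactly the fairness that randomization is meant to buy.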
Steps of Experimental Design
Alright, let's say you're all fired up and ready to run your own experiment. Cool! But where do you start? Well, designing an experiment is a bit like planning a road trip. There are some key steps you've got to take to make sure you reach your destination. Let's break it down:
- Ask a Question : Before you hit the road, you've got to know where you're going. Same with experiments. You start with a question you want to answer, like "Does eating breakfast really make you do better in school?"
- Do Some Homework : Before you pack your bags, you look up the best places to visit, right? In science, this means reading up on what other people have already discovered about your topic.
- Form a Hypothesis : This is your educated guess about what you think will happen. It's like saying, "I bet this route will get us there faster."
- Plan the Details : Now you decide what kind of car you're driving (your experimental design), who's coming with you (your sample), and what snacks to bring (your variables).
- Randomize : Remember, this is like shuffling a deck of cards. You want to mix up who goes into your control and experimental groups to make sure it's a fair test.
- Run the Experiment : Finally, the rubber hits the road! You carry out your plan, making sure to collect your data carefully.
- Analyze the Data : Once the trip's over, you look at your photos and decide which ones are keepers. In science, this means looking at your data to see what it tells you.
- Draw Conclusions : Based on your data, did you find an answer to your question? This is like saying, "Yep, that route was faster," or "Nope, we hit a ton of traffic."
- Share Your Findings : After a great trip, you want to tell everyone about it, right? Scientists do the same by publishing their results so others can learn from them.
- Do It Again? : Sometimes one road trip just isn't enough. In the same way, scientists often repeat their experiments to make sure their findings are solid.
So there you have it! Those are the basic steps you need to follow when you're designing an experiment. Each step helps make sure that you're setting up a fair and reliable way to find answers to your big questions.
Let's get into examples of experimental designs.
1) True Experimental Design
In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.
Researchers carefully pick an independent variable to manipulate (remember, that's the thing they're changing on purpose) and measure the dependent variable (the effect they're studying). Then comes the magic trick—randomization. By randomly putting participants into either the control or experimental group, scientists make sure their experiment is as fair as possible.
No sneaky biases here!
True Experimental Design Pros
The pros of True Experimental Design are like the perks of a VIP ticket at a concert: you get the best and most trustworthy results. Because everything is controlled and randomized, you can feel pretty confident that the results aren't just a fluke.
True Experimental Design Cons
However, there's a catch. Sometimes, it's really tough to set up these experiments in a real-world situation. Imagine trying to control every single detail of your day, from the food you eat to the air you breathe. Not so easy, right?
True Experimental Design Uses
The fields that get the most out of True Experimental Designs are those that need super reliable results, like medical research.
When scientists were developing COVID-19 vaccines, they used this design to run clinical trials. They had control groups that received a placebo (a harmless substance with no effect) and experimental groups that got the actual vaccine. Then they measured how many people in each group got sick. By comparing the two, they could say, "Yep, this vaccine works!"
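The comparison behind a result like that is simple arithmetic. Here's a Python sketch using the standard relative-risk-reduction formula for efficacy—all the counts below are invented for illustration, not real trial data:

```python
# Hypothetical trial counts (invented for illustration)
control_sick, control_total = 90, 10_000            # placebo group
experimental_sick, experimental_total = 9, 10_000   # vaccine group

control_rate = control_sick / control_total                   # 0.009
experimental_rate = experimental_sick / experimental_total    # 0.0009

# Efficacy = how much the treatment cuts the risk of getting sick,
# relative to the control group.
efficacy = 1 - (experimental_rate / control_rate)

print(f"Efficacy: {efficacy:.0%}")  # → Efficacy: 90%
```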
So next time you read about a groundbreaking discovery in medicine or technology, chances are a True Experimental Design was the VIP behind the scenes, making sure everything was on point. It's been the go-to for rigorous scientific inquiry for nearly a century, and it's not stepping off the stage anytime soon.
2) Quasi-Experimental Design
So, let's talk about the Quasi-Experimental Design. Think of this one as the cool cousin of True Experimental Design. It wants to be just like its famous relative, but it's a bit more laid-back and flexible. You'll find quasi-experimental designs when it's tricky to set up a full-blown True Experimental Design with all the bells and whistles.
Quasi-experiments still play with an independent variable, just like their stricter cousins. The big difference? They don't use randomization. It's like wanting to divide a bag of jelly beans equally between your friends, but you can't quite do it perfectly.
In real life, it's often not possible or ethical to randomly assign people to different groups, especially when dealing with sensitive topics like education or social issues. And that's where quasi-experiments come in.
Quasi-Experimental Design Pros
Even though they lack full randomization, quasi-experimental designs are like the Swiss Army knives of research: versatile and practical. They're especially popular in fields like education, sociology, and public policy.
For instance, when researchers wanted to figure out if the Head Start program, aimed at giving young kids a "head start" in school, was effective, they used a quasi-experimental design. They couldn't randomly assign kids to go or not go to preschool, but they could compare kids who did with kids who didn't.
Quasi-Experimental Design Cons
Of course, quasi-experiments come with their own bag of pros and cons. On the plus side, they're easier to set up and often cheaper than true experiments. But the flip side is that they're not as rock-solid in their conclusions. Because the groups aren't randomly assigned, there's always that little voice saying, "Hey, are we missing something here?"
Quasi-Experimental Design Uses
Quasi-Experimental Design gained traction in the mid-20th century. Researchers were grappling with real-world problems that didn't fit neatly into a laboratory setting. Plus, as society became more aware of ethical considerations, the need for flexible designs increased. So, the quasi-experimental approach was like a breath of fresh air for scientists wanting to study complex issues without a laundry list of restrictions.
In short, if True Experimental Design is the superstar quarterback, Quasi-Experimental Design is the versatile player who can adapt and still make significant contributions to the game.
3) Pre-Experimental Design
Now, let's talk about the Pre-Experimental Design. Imagine it as the beginner's skateboard you get before you try out for all the cool tricks. It has wheels, it rolls, but it's not built for the professional skatepark.
Similarly, pre-experimental designs give researchers a starting point. They let you dip your toes in the water of scientific research without diving in head-first.
So, what's the deal with pre-experimental designs?
Pre-Experimental Designs are the basic, no-frills versions of experiments. Researchers still mess around with an independent variable and measure a dependent variable, but they skip over the whole randomization thing and often don't even have a control group.
It's like baking a cake but forgetting the frosting and sprinkles; you'll get some results, but they might not be as complete or reliable as you'd like.
Pre-Experimental Design Pros
Why use such a simple setup? Because sometimes, you just need to get the ball rolling. Pre-experimental designs are great for quick-and-dirty research when you're short on time or resources. They give you a rough idea of what's happening, which you can use to plan more detailed studies later.
A good example of this is early studies on the effects of screen time on kids. Researchers couldn't control every aspect of a child's life, but they could easily ask parents to track how much time their kids spent in front of screens and then look for trends in behavior or school performance.
Pre-Experimental Design Cons
But here's the catch: pre-experimental designs are like that first draft of an essay. It helps you get your ideas down, but you wouldn't want to turn it in for a grade. Because these designs lack the rigorous structure of true or quasi-experimental setups, they can't give you rock-solid conclusions. They're more like clues or signposts pointing you in a certain direction.
Pre-Experimental Design Uses
This type of design became popular in the early stages of various scientific fields. Researchers used them to scratch the surface of a topic, generate some initial data, and then decide if it's worth exploring further. In other words, pre-experimental designs were the stepping stones that led to more complex, thorough investigations.
So, while Pre-Experimental Design may not be the star player on the team, it's like the practice squad that helps everyone get better. It's the starting point that can lead to bigger and better things.
4) Factorial Design
Now, buckle up, because we're moving into the world of Factorial Design, the multi-tasker of the experimental universe.
Imagine juggling not just one, but multiple balls in the air—that's what researchers do in a factorial design.
In Factorial Design, researchers are not satisfied with just studying one independent variable. Nope, they want to study two or more at the same time to see how they interact.
It's like cooking with several spices to see how they blend together to create unique flavors.
Factorial Design became the talk of the town with the rise of computers. Why? Because this design produces a lot of data, and computers are the number crunchers that help make sense of it all. So, thanks to our silicon friends, researchers can study complicated questions like, "How do diet AND exercise together affect weight loss?" instead of looking at just one of those factors.
Factorial Design Pros
This design's main selling point is its ability to explore interactions between variables. For instance, maybe a new study drug works really well for young people but not so great for older adults. A factorial design could reveal that age is a crucial factor, something you might miss if you only studied the drug's effectiveness in general. It's like being a detective who looks for clues not just in one room but throughout the entire house.
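That kind of interaction becomes visible once you compute the average outcome for each combination of factors. A small Python sketch, with invented scores for a hypothetical drug-by-age study:

```python
# Hypothetical improvement scores in a 2x2 factorial study
# (factor 1: drug vs placebo, factor 2: young vs older) — numbers invented.
scores = {
    ("drug", "young"):    [8, 9, 10],
    ("drug", "older"):    [3, 4, 5],
    ("placebo", "young"): [2, 3, 4],
    ("placebo", "older"): [2, 3, 4],
}

# Average outcome in each "cell" of the design
means = {cell: sum(vals) / len(vals) for cell, vals in scores.items()}

# The drug's effect within each age group (drug mean minus placebo mean)
effect_young = means[("drug", "young")] - means[("placebo", "young")]
effect_older = means[("drug", "older")] - means[("placebo", "older")]

# A big gap between these two effects is the "interaction":
# here the drug helps young participants far more than older ones.
print(effect_young, effect_older)
```

If you studied the drug alone, averaging over everyone, you'd see a modest overall effect and miss that it's really an age story—exactly the clue a factorial design is built to catch.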
Factorial Design Cons
However, factorial designs have their own bag of challenges. First off, they can be pretty complicated to set up and run. Imagine coordinating a four-way intersection with lots of cars coming from all directions—you've got to make sure everything runs smoothly, or you'll end up with a traffic jam. Similarly, researchers need to carefully plan how they'll measure and analyze all the different variables.
Factorial Design Uses
Factorial designs are widely used in psychology to untangle the web of factors that influence human behavior. They're also popular in fields like marketing, where companies want to understand how different aspects like price, packaging, and advertising influence a product's success.
And speaking of success, the factorial design has been a hit since statisticians like Ronald A. Fisher (yep, him again!) expanded on it in the early-to-mid 20th century. It offered a more nuanced way of understanding the world, proving that sometimes, to get the full picture, you've got to juggle more than one ball at a time.
So, if True Experimental Design is the quarterback and Quasi-Experimental Design is the versatile player, Factorial Design is the strategist who sees the entire game board and makes moves accordingly.
5) Longitudinal Design
Alright, let's take a step into the world of Longitudinal Design. Picture it as the grand storyteller, the kind who doesn't just tell you about a single event but spins an epic tale that stretches over years or even decades. This design isn't about quick snapshots; it's about capturing the whole movie of someone's life or a long-running process.
You know how you might take a photo every year on your birthday to see how you've changed? Longitudinal Design is kind of like that, but for scientific research.
With Longitudinal Design, instead of measuring something just once, researchers come back again and again, sometimes over many years, to see how things are going. This helps them understand not just what's happening, but why it's happening and how it changes over time.
This design really started to shine in the latter half of the 20th century, when researchers began to realize that some questions can't be answered in a hurry. Think about studies that look at how kids grow up, or research on how a certain medicine affects you over a long period. These aren't things you can rush.
The famous Framingham Heart Study, started in 1948, is a prime example. It's been studying heart health in a small town in Massachusetts for decades, and the findings have shaped what we know about heart disease.
Longitudinal Design Pros
So, what's to love about Longitudinal Design? First off, it's the go-to for studying change over time, whether that's how people age or how a forest recovers from a fire.
Longitudinal Design Cons
But it's not all sunshine and rainbows. Longitudinal studies take a lot of patience and resources. Plus, keeping track of participants over many years can be like herding cats—difficult and full of surprises.
Longitudinal Design Uses
Despite these challenges, longitudinal studies have been key in fields like psychology, sociology, and medicine. They provide the kind of deep, long-term insights that other designs just can't match.
So, if the True Experimental Design is the superstar quarterback, and the Quasi-Experimental Design is the flexible athlete, then the Factorial Design is the strategist, and the Longitudinal Design is the wise elder who has seen it all and has stories to tell.
6) Cross-Sectional Design
Now, let's flip the script and talk about Cross-Sectional Design, the polar opposite of the Longitudinal Design. If Longitudinal is the grand storyteller, think of Cross-Sectional as the snapshot photographer. It captures a single moment in time, like a selfie that you take to remember a fun day. Researchers using this design collect all their data at one point, providing a kind of "snapshot" of whatever they're studying.
In a Cross-Sectional Design, researchers look at multiple groups all at the same time to see how they're different or similar.
This design rose to popularity in the mid-20th century, mainly because it's so quick and efficient. Imagine wanting to know how people of different ages feel about a new video game. Instead of waiting for years to see how opinions change, you could just ask people of all ages what they think right now. That's Cross-Sectional Design for you—fast and straightforward.
You'll find this type of research everywhere from marketing studies to healthcare. For instance, you might have heard about surveys asking people what they think about a new product or political issue. Those are usually cross-sectional studies, aimed at getting a quick read on public opinion.
Cross-Sectional Design Pros
So, what's the big deal with Cross-Sectional Design? Well, it's the go-to when you need answers fast and don't have the time or resources for a more complicated setup.
Cross-Sectional Design Cons
Remember, speed comes with trade-offs. While you get your results quickly, those results are stuck in time. They can't tell you how things change or why they're changing, just what's happening right now.
Cross-Sectional Design Uses
Also, because they're so quick and simple, cross-sectional studies often serve as the first step in research. They give scientists an idea of what's going on so they can decide if it's worth digging deeper. In that way, they're a bit like a movie trailer, giving you a taste of the action to see if you're interested in seeing the whole film.
So, in our lineup of experimental designs, if True Experimental Design is the superstar quarterback and Longitudinal Design is the wise elder, then Cross-Sectional Design is like the speedy running back—fast, agile, but not designed for long, drawn-out plays.
7) Correlational Design
Next on our roster is the Correlational Design, the keen observer of the experimental world. Imagine this design as the person at a party who loves people-watching. They don't interfere or get involved; they just observe and take mental notes about what's going on.
In a correlational study, researchers don't change or control anything; they simply observe and measure how two variables relate to each other.
The correlational design has roots in the early days of psychology and sociology. Pioneers like Sir Francis Galton used it to study how qualities like intelligence or height could be related within families.
This design is all about asking, "Hey, when this thing happens, does that other thing usually happen too?" For example, researchers might study whether students who have more study time get better grades or whether people who exercise more have lower stress levels.
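The usual way to put a number on "when this goes up, does that go up too?" is the Pearson correlation coefficient, which runs from -1 to +1. Here's a Python sketch computed from scratch, with invented study-time data:

```python
# Hypothetical data: weekly study hours and test grades (numbers invented)
hours  = [1, 2, 3, 4, 5, 6]
grades = [52, 60, 63, 71, 74, 82]

n = len(hours)
mean_h = sum(hours) / n
mean_g = sum(grades) / n

# Pearson's r: covariance divided by the product of standard deviations
cov   = sum((h - mean_h) * (g - mean_g) for h, g in zip(hours, grades))
var_h = sum((h - mean_h) ** 2 for h in hours)
var_g = sum((g - mean_g) ** 2 for g in grades)
r = cov / (var_h * var_g) ** 0.5

print(round(r, 3))  # close to +1: strongly related (not proof of cause!)
```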
One of the most famous correlational studies you might have heard of is the link between smoking and lung cancer. Back in the mid-20th century, researchers started noticing that people who smoked a lot also seemed to get lung cancer more often. They couldn't say smoking caused cancer—that would require a true experiment—but the strong correlation was a red flag that led to more research and eventually, health warnings.
Correlational Design Pros
This design is great at showing that two (or more) things can be related. A strong correlation can signal that more detailed research is needed on a topic, and it can reveal patterns or possible causes that we otherwise might not have noticed.
Correlational Design Cons
But here's where you need to be careful: correlational designs can be tricky. Just because two things are related doesn't mean one causes the other. That's like saying, "Every time I wear my lucky socks, my team wins." Well, it's a fun thought, but those socks aren't really controlling the game.
Correlational Design Uses
Despite this limitation, correlational designs are popular in psychology, economics, and epidemiology, to name a few fields. They're often the first step in exploring a possible relationship between variables. Once a strong correlation is found, researchers may decide to conduct more rigorous experimental studies to examine cause and effect.
So, if the True Experimental Design is the superstar quarterback and the Longitudinal Design is the wise elder, the Factorial Design is the strategist, and the Cross-Sectional Design is the speedster, then the Correlational Design is the clever scout, identifying interesting patterns but leaving the heavy lifting of proving cause and effect to the other types of designs.
8) Meta-Analysis
Last but not least, let's talk about Meta-Analysis, the librarian of experimental designs.
If other designs are all about creating new research, Meta-Analysis is about gathering up everyone else's research, sorting it, and figuring out what it all means when you put it together.
Imagine a jigsaw puzzle where each piece is a different study. Meta-Analysis is the process of fitting all those pieces together to see the big picture.
The concept of Meta-Analysis started to take shape in the late 20th century, when computers became powerful enough to handle massive amounts of data. It was like someone handed researchers a super-powered magnifying glass, letting them examine multiple studies at the same time to find common trends or results.
You might have heard of the Cochrane Reviews in healthcare. These are big collections of meta-analyses that help doctors and policymakers figure out what treatments work best based on all the research that's been done.
For example, if ten different studies show that a certain medicine helps lower blood pressure, a meta-analysis would pull all that information together to give a more accurate answer.
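A bare-bones version of that pooling step is just a weighted average, with bigger studies counting for more. Here's a Python sketch with invented numbers (real meta-analyses typically weight each study by the inverse of its variance, which is more involved):

```python
# Hypothetical results from three small studies of the same medicine:
# drop in blood pressure (mmHg) and each study's sample size — invented.
studies = [
    {"effect": 5.0, "n": 40},
    {"effect": 7.5, "n": 120},
    {"effect": 6.0, "n": 80},
]

# Sample-size-weighted average: a study of 120 people
# pulls the pooled estimate harder than a study of 40.
total_n = sum(s["n"] for s in studies)
pooled = sum(s["effect"] * s["n"] for s in studies) / total_n

print(round(pooled, 2))  # pooled effect across all three studies
```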
Meta-Analysis Pros
The beauty of Meta-Analysis is that it can provide really strong evidence. Instead of relying on one study, you're looking at the whole landscape of research on a topic.
Meta-Analysis Cons
However, it does have some downsides. For one, Meta-Analysis is only as good as the studies it includes. If those studies are flawed, the meta-analysis will be too. It's like baking a cake: if you use bad ingredients, it doesn't matter how good your recipe is—the cake won't turn out well.
Meta-Analysis Uses
Despite these challenges, meta-analyses are highly respected and widely used in many fields like medicine, psychology, and education. They help us make sense of a world that's bursting with information by showing us the big picture drawn from many smaller snapshots.
So, in our all-star lineup, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, the Factorial Design is the strategist, the Cross-Sectional Design is the speedster, and the Correlational Design is the scout, then the Meta-Analysis is like the coach, using insights from everyone else's plays to come up with the best game plan.
9) Non-Experimental Design
Now, let's talk about a player who's a bit of an outsider on this team of experimental designs—the Non-Experimental Design. Think of this design as the commentator or the journalist who covers the game but doesn't actually play.
In a Non-Experimental Design, researchers are like reporters gathering facts, but they don't interfere or change anything. They're simply there to describe and analyze.
Non-Experimental Design Pros
So, what's the deal with Non-Experimental Design? Its strength is in description and exploration. It's really good for studying things as they are in the real world, without changing any conditions.
Non-Experimental Design Cons
Because a non-experimental design doesn't manipulate variables, it can't prove cause and effect. It's like a weather reporter: they can tell you it's raining, but they can't tell you why it's raining.
The downside? Since researchers aren't controlling variables, it's hard to rule out other explanations for what they observe. It's like hearing one side of a story—you get an idea of what happened, but it might not be the complete picture.
Non-Experimental Design Uses
Non-Experimental Design has always been a part of research, especially in fields like anthropology, sociology, and some areas of psychology.
For instance, if you've ever heard of studies that describe how people behave in different cultures or what teens like to do in their free time, that's often Non-Experimental Design at work. These studies aim to capture the essence of a situation, like painting a portrait instead of taking a snapshot.
One well-known example you might have heard about is the Kinsey Reports from the 1940s and 1950s, which described sexual behavior in men and women. Researchers interviewed thousands of people but didn't manipulate any variables like you would in a true experiment. They simply collected data to create a comprehensive picture of the subject matter.
So, in our metaphorical team of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, and Meta-Analysis is the coach, then Non-Experimental Design is the sports journalist—always present, capturing the game, but not part of the action itself.
10) Repeated Measures Design
Time to meet the Repeated Measures Design, the time traveler of our research team. If this design were a player in a sports game, it would be the one who keeps revisiting past plays to figure out how to improve the next one.
Repeated Measures Design is all about studying the same people or subjects multiple times to see how they change or react under different conditions.
The idea behind Repeated Measures Design isn't new; it's been around since the early days of psychology and medicine. You could say it's a cousin to the Longitudinal Design, but instead of looking at how things naturally change over time, it focuses on how the same group reacts to different things.
Imagine a study looking at how a new energy drink affects people's running speed. Instead of comparing one group that drank the energy drink to another group that didn't, a Repeated Measures Design would have the same group of people run multiple times—once with the energy drink, and once without. This way, you're really zeroing in on the effect of that energy drink, making the results more reliable.
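If you like to see the arithmetic, here's a tiny Python sketch of the within-person comparison this design makes possible. All the running times are invented purely for illustration:

```python
# Hypothetical running times (seconds) for the same five runners,
# once without the energy drink and once with it (made-up numbers).
baseline = [62.0, 58.5, 71.2, 65.0, 60.3]
with_drink = [60.1, 57.9, 69.8, 64.2, 59.0]

# Because the same people appear in both conditions, we look at each
# person's own change rather than comparing two separate groups.
diffs = [before - after for before, after in zip(baseline, with_drink)]
mean_diff = sum(diffs) / len(diffs)
print(f"Average within-runner improvement: {mean_diff:.2f} seconds")
```

Analyzing each runner's own difference is the whole point: person-to-person variation in baseline speed cancels out instead of drowning the effect.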
Repeated Measures Design Pros
The strong point of Repeated Measures Design is that it's super focused. Because it uses the same subjects, you don't have to worry about differences between groups messing up your results.
Repeated Measures Design Cons
But the downside? Well, people can get tired or bored if they're tested too many times, which might affect how they respond.
Repeated Measures Design Uses
A famous example of this design is the "Little Albert" experiment, conducted by John B. Watson and Rosalie Rayner in 1920. In this study, a young boy was exposed to a white rat and other stimuli several times to see how his emotional responses changed. Though the experiment is widely criticized today on ethical grounds, it was groundbreaking for the study of conditioned emotional responses.
In our metaphorical lineup of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, and Non-Experimental Design is the journalist, then Repeated Measures Design is the time traveler—always looping back to fine-tune the game plan.
11) Crossover Design
Next up is Crossover Design, the switch-hitter of the research world. If you're familiar with baseball, you'll know a switch-hitter is someone who can bat both right-handed and left-handed.
In a similar way, Crossover Design allows subjects to experience multiple conditions, flipping them around so that everyone gets a turn in each role.
This design is like the utility player on our team—versatile, flexible, and really good at adapting.
The Crossover Design has its roots in medical research and has been popular since the mid-20th century. It's often used in clinical trials to test the effectiveness of different treatments.
Crossover Design Pros
The neat thing about this design is that each participant serves as their own control, which cuts out much of the "noise" that comes from individual differences. Imagine you're testing two new kinds of headache medicine. Instead of giving one type to one group and another type to a different group, you'd give both kinds to the same people, but at different times.
Crossover Design Cons
The catch? Crossover Design assumes there's no lasting effect from the first condition when you switch to the second one. That isn't always true: if the first treatment has a long-lasting "carryover" effect, it can muddy the results for the second treatment. That's why researchers often add a "washout" period between conditions to let the first treatment wear off.
Crossover Design Uses
A well-known example of Crossover Design is in studies that look at the effects of different types of diets—like low-carb vs. low-fat diets. Researchers might have participants follow a low-carb diet for a few weeks, then switch them to a low-fat diet. By doing this, they can more accurately measure how each diet affects the same group of people.
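The key housekeeping step in a crossover study is counterbalancing: making sure the order of conditions is split across participants. Here's a minimal Python sketch of that step, using made-up participant labels and the diet example above:

```python
import random

# Hypothetical counterbalancing for a two-diet crossover study: half the
# participants follow low-carb then low-fat, the other half the reverse,
# so order effects are balanced out across the group.
participants = ["p1", "p2", "p3", "p4", "p5", "p6"]
rng = random.Random(42)  # fixed seed just so the sketch is repeatable

shuffled = participants[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2

order = {}
for person in shuffled[:half]:
    order[person] = ("low-carb", "low-fat")
for person in shuffled[half:]:
    order[person] = ("low-fat", "low-carb")

for person in sorted(order):
    print(person, "->", " then ".join(order[person]))
```

Because each sequence is used equally often, anything caused purely by going first or second affects both diets the same way.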
In our team of experimental designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, and Repeated Measures Design is the time traveler, then Crossover Design is the versatile utility player—always ready to adapt and play multiple roles to get the most accurate results.
12) Cluster Randomized Design
Meet the Cluster Randomized Design, the team captain of group-focused research. In our imaginary lineup of experimental designs, if other designs focus on individual players, then Cluster Randomized Design is looking at how the entire team functions.
This approach is especially common in educational and community-based research, and it's been gaining traction since the late 20th century.
Here's how Cluster Randomized Design works: Instead of assigning individual people to different conditions, researchers assign entire groups, or "clusters." These could be schools, neighborhoods, or even entire towns. This helps you see how the new method works in a real-world setting.
Imagine you want to see if a new anti-bullying program really works. Instead of selecting individual students, you'd introduce the program to a whole school or maybe even several schools, and then compare the results to schools without the program.
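The mechanical difference from ordinary randomization is simply *what* gets shuffled. A short Python sketch, with invented school names, shows clusters being assigned instead of individual students:

```python
import random

# Hypothetical cluster randomization: whole schools, not individual
# students, are assigned to the anti-bullying program or to control.
schools = ["North HS", "South HS", "East HS", "West HS", "Central HS", "Lakeside HS"]
rng = random.Random(7)

shuffled = schools[:]
rng.shuffle(shuffled)
program = sorted(shuffled[: len(shuffled) // 2])
control = sorted(shuffled[len(shuffled) // 2:])

print("Program schools:", program)
print("Control schools:", control)
```

Every student then simply inherits the condition of their school, which is what keeps the rollout practical.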
Cluster Randomized Design Pros
Why use Cluster Randomized Design? Well, sometimes it's just not practical to assign conditions at the individual level. For example, you can't really have half a school following a new reading program while the other half sticks with the old one; that would be way too confusing! Cluster Randomization helps get around this problem by treating each "cluster" as its own mini-experiment.
Cluster Randomized Design Cons
There's a downside, too. Because entire groups are assigned to each condition, there's a risk that the groups might be different in some important way that the researchers didn't account for. That's like having one sports team that's full of veterans playing against a team of rookies; the match wouldn't be fair.
Cluster Randomized Design Uses
A famous example is the research conducted to test the effectiveness of different public health interventions, like vaccination programs. Researchers might roll out a vaccination program in one community but not in another, then compare the rates of disease in both.
In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, and Crossover Design is the utility player, then Cluster Randomized Design is the team captain—always looking out for the group as a whole.
13) Mixed-Methods Design
Say hello to Mixed-Methods Design, the all-rounder or the "Renaissance player" of our research team.
Mixed-Methods Design uses a blend of both qualitative and quantitative methods to get a more complete picture, just like a Renaissance person who's good at lots of different things. It's like being good at both offense and defense in a sport; you've got all your bases covered!
Mixed-Methods Design is a fairly new kid on the block, becoming more popular in the late 20th and early 21st centuries as researchers began to see the value in using multiple approaches to tackle complex questions. It's the Swiss Army knife in our research toolkit, combining the best parts of other designs to be more versatile.
Here's how it could work: Imagine you're studying the effects of a new educational app on students' math skills. You might use quantitative methods like tests and grades to measure how much the students improve—that's the 'numbers part.'
But you also want to know how the students feel about math now, or why they think they got better or worse. For that, you could conduct interviews or have students fill out journals—that's the 'story part.'
Mixed-Methods Design Pros
So, what's the scoop on Mixed-Methods Design? The strength is its versatility and depth; you're not just getting numbers or stories, you're getting both, which gives a fuller picture.
Mixed-Methods Design Cons
But, it's also more challenging. Imagine trying to play two sports at the same time! You have to be skilled in different research methods and know how to combine them effectively.
Mixed-Methods Design Uses
A high-profile example of Mixed-Methods Design is research on climate change. Scientists use numbers and data to show temperature changes (quantitative), but they also interview people to understand how these changes are affecting communities (qualitative).
In our team of experimental designs, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, and Cluster Randomized Design is the team captain, then Mixed-Methods Design is the Renaissance player—skilled in multiple areas and able to bring them all together for a winning strategy.
14) Multivariate Design
Now, let's turn our attention to Multivariate Design, the multitasker of the research world.
If our lineup of research designs were like players on a basketball court, Multivariate Design would be the player dribbling, passing, and shooting all at once. This design doesn't just look at one or two things; it looks at several variables simultaneously to see how they interact and affect each other.
Multivariate Design is like baking a cake with many ingredients. Instead of just looking at how flour affects the cake, you also consider sugar, eggs, and milk all at once. This way, you understand how everything works together to make the cake taste good or bad.
Multivariate Design has been a go-to method in psychology, economics, and social sciences since the latter half of the 20th century. With the advent of computers and advanced statistical software, analyzing multiple variables at once became a lot easier, and Multivariate Design soared in popularity.
Multivariate Design Pros
So, what's the benefit of using Multivariate Design? Its power lies in its complexity. By studying multiple variables at the same time, you can get a really rich, detailed understanding of what's going on.
Multivariate Design Cons
But that complexity can also be a drawback. With so many variables, it can be tough to tell which ones are really making a difference and which ones are just along for the ride.
Multivariate Design Uses
Imagine you're a coach trying to figure out the best strategy to win games. You wouldn't just look at how many points your star player scores; you'd also consider assists, rebounds, turnovers, and maybe even how loud the crowd is. A Multivariate Design would help you understand how all these factors work together to determine whether you win or lose.
A well-known example of Multivariate Design is in market research. Companies often use this approach to figure out how different factors—like price, packaging, and advertising—affect sales. By studying multiple variables at once, they can find the best combination to boost profits.
In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, Cluster Randomized Design is the team captain, and Mixed-Methods Design is the Renaissance player, then Multivariate Design is the multitasker—juggling many variables at once to get a fuller picture of what's happening.
15) Pretest-Posttest Design
Let's introduce Pretest-Posttest Design, the "Before and After" superstar of our research team. You've probably seen those before-and-after pictures in ads for weight loss programs or home renovations, right?
Well, this design is like that, but for science! Pretest-Posttest Design checks out what things are like before the experiment starts and then compares that to what things are like after the experiment ends.
This design is one of the classics, a staple in research for decades across various fields like psychology, education, and healthcare. It's so simple and straightforward that it has stayed popular for a long time.
In Pretest-Posttest Design, you measure your subject's behavior or condition before you introduce any changes—that's your "before" or "pretest." Then you do your experiment, and after it's done, you measure the same thing again—that's your "after" or "posttest."
Pretest-Posttest Design Pros
What makes Pretest-Posttest Design special? It's pretty easy to understand and doesn't require fancy statistics.
Pretest-Posttest Design Cons
But there are some pitfalls. Suppose you're testing a new math program: what if the kids get better at multiplication simply because they're older by the posttest, or because they've already seen the same test once? Without a control group, it's hard to tell whether the program itself is really responsible for the improvement.
Pretest-Posttest Design Uses
Let's say you're a teacher and you want to know if a new math program helps kids get better at multiplication. First, you'd give all the kids a multiplication test—that's your pretest. Then you'd teach them using the new math program. At the end, you'd give them the same test again—that's your posttest. If the kids do better on the second test, you might conclude that the program works.
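The whole analysis boils down to "after minus before." Here's a tiny Python sketch of that gain-score calculation, using invented student names and scores:

```python
# Hypothetical multiplication scores (out of 20) for the same five
# students before and after the new math program.
pretest = {"Ana": 11, "Ben": 14, "Cal": 9, "Dee": 16, "Eli": 12}
posttest = {"Ana": 15, "Ben": 15, "Cal": 13, "Dee": 18, "Eli": 14}

gains = {name: posttest[name] - pretest[name] for name in pretest}
avg_gain = sum(gains.values()) / len(gains)
print(f"Average gain: {avg_gain:.1f} points")
```

A positive average gain is encouraging, but as the Cons section warns, it can't by itself rule out maturation or practice effects.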
One famous use of Pretest-Posttest Design is in evaluating the effectiveness of driver's education courses. Researchers will measure people's driving skills before and after the course to see if they've improved.
16) Solomon Four-Group Design
Next up is the Solomon Four-Group Design, the "chess master" of our research team. This design is all about strategy and careful planning. Named after Richard L. Solomon, who introduced it in the 1940s, this method tries to correct some of the weaknesses of simpler designs, like the Pretest-Posttest Design.
Here's how it rolls: The Solomon Four-Group Design uses four different groups to test a hypothesis. Two groups get a pretest, then one of them receives the treatment or intervention, and both get a posttest. The other two groups skip the pretest, and only one of them receives the treatment before they both get a posttest.
Sound complicated? It's like playing 4D chess; you're thinking several moves ahead!
Solomon Four-Group Design Pros
What's the pro and con of the Solomon Four-Group Design? On the plus side, it provides really robust results because it accounts for so many variables.
Solomon Four-Group Design Cons
The downside? It's a lot of work and requires a lot of participants, making it more time-consuming and costly.
Solomon Four-Group Design Uses
Let's say you want to figure out if a new way of teaching history helps students remember facts better. Two classes take a history quiz (pretest), then one class uses the new teaching method while the other sticks with the old way. Both classes take another quiz afterward (posttest).
Meanwhile, two more classes skip the initial quiz, and then one uses the new method before both take the final quiz. Comparing all four groups will give you a much clearer picture of whether the new teaching method works and whether the pretest itself affects the outcome.
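One simple way to eyeball the two questions the four groups answer (does the method work, and does the pretest itself change scores?) is with a pair of rough contrasts. The quiz means below are invented for illustration:

```python
# Hypothetical posttest quiz means (out of 20) for the four Solomon groups:
#   G1 = pretest + new method     G2 = pretest + old method
#   G3 = new method only          G4 = old method only
post = {"G1": 16.0, "G2": 12.0, "G3": 15.0, "G4": 11.5}

# Rough contrasts: average the treatment comparison with and without a
# pretest, and compare pretested vs. non-pretested groups overall.
treatment_effect = ((post["G1"] - post["G2"]) + (post["G3"] - post["G4"])) / 2
pretest_effect = ((post["G1"] + post["G2"]) - (post["G3"] + post["G4"])) / 2
print(f"Estimated teaching-method effect: {treatment_effect:.2f}")
print(f"Estimated effect of taking the pretest: {pretest_effect:.2f}")
```

If the pretest contrast came out large, you'd know that merely taking the first quiz was boosting scores, which simpler designs can't detect.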
The Solomon Four-Group Design is less commonly used than simpler designs but is highly respected for its ability to control for more variables. It's a favorite in educational and psychological research where you really want to dig deep and figure out what's actually causing changes.
17) Adaptive Designs
Now, let's talk about Adaptive Designs, the chameleons of the experimental world.
Imagine you're a detective, and halfway through solving a case, you find a clue that changes everything. You wouldn't just stick to your old plan; you'd adapt and change your approach, right? That's exactly what Adaptive Designs allow researchers to do.
In an Adaptive Design, researchers can make changes to the study as it's happening, based on early results. In a traditional study, once you set your plan, you stick to it from start to finish.
Adaptive Design Pros
This method is particularly useful in fast-paced or high-stakes situations, like developing a new vaccine in the middle of a pandemic. The ability to adapt can save both time and resources, and more importantly, it can save lives by getting effective treatments out faster.
Adaptive Design Cons
But Adaptive Designs aren't without their drawbacks. They can be very complex to plan and carry out, and there's always a risk that the changes made during the study could introduce bias or errors.
Adaptive Design Uses
Adaptive Designs are most often seen in clinical trials, particularly in the medical and pharmaceutical fields.
For instance, if a new drug is showing really promising results, the study might be adjusted to give more participants the new treatment instead of a placebo. Or if one dose level is showing bad side effects, it might be dropped from the study.
The best part is, these changes are pre-planned. Researchers lay out in advance what changes might be made and under what conditions, which helps keep everything scientific and above board.
In terms of applications, besides their heavy usage in medical and pharmaceutical research, Adaptive Designs are also becoming increasingly popular in software testing and market research. In these fields, being able to quickly adjust to early results can give companies a significant advantage.
Adaptive Designs are like the agile startups of the research world—quick to pivot, keen to learn from ongoing results, and focused on rapid, efficient progress. However, they require a great deal of expertise and careful planning to ensure that the adaptability doesn't compromise the integrity of the research.
18) Bayesian Designs
Next, let's dive into Bayesian Designs, the data detectives of the research universe. Named after Thomas Bayes, an 18th-century statistician and minister, this design doesn't just look at what's happening now; it also takes into account what's happened before.
Imagine if you were a detective who not only looked at the evidence in front of you but also used your past cases to make better guesses about your current one. That's the essence of Bayesian Designs.
Bayesian Designs are like detective work in science. As you gather more clues (or data), you update your best guess on what's really happening. This way, your experiment gets smarter as it goes along.
In the world of research, Bayesian Designs are most notably used in areas where you have some prior knowledge that can inform your current study. For example, if earlier research shows that a certain type of medicine usually works well for a specific illness, a Bayesian Design would include that information when studying a new group of patients with the same illness.
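Here's a minimal sketch of that idea in Python, using a standard Beta-Binomial update with made-up numbers. (Real Bayesian trial designs are far more elaborate, but the "prior plus new evidence" bookkeeping looks like this.)

```python
# Toy Bayesian update using a Beta-Binomial model (all numbers made up).
# Prior: earlier studies hint the medicine helps ~70% of patients,
# which we encode as a Beta(7, 3) prior on the success rate.
prior_successes, prior_failures = 7, 3

# New study: 18 of 20 patients improve.
new_successes, new_failures = 18, 2

# The Beta posterior simply adds the new counts to the prior counts.
post_successes = prior_successes + new_successes
post_failures = prior_failures + new_failures
posterior_mean = post_successes / (post_successes + post_failures)
print(f"Updated estimate of the success rate: {posterior_mean:.2f}")
```

Notice how the prior acts like ten "virtual patients" from past research: the new data pulls the estimate upward, but the prior keeps it from jumping straight to 18/20.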
Bayesian Design Pros
One of the major advantages of Bayesian Designs is their efficiency. Because they use existing data to inform the current experiment, often fewer resources are needed to reach a reliable conclusion.
Bayesian Design Cons
However, they can be quite complicated to set up and require a deep understanding of both statistics and the subject matter at hand.
Bayesian Design Uses
Bayesian Designs are highly valued in medical research, finance, environmental science, and even in Internet search algorithms. Their ability to continually update and refine hypotheses based on new evidence makes them particularly useful in fields where data is constantly evolving and where quick, informed decisions are crucial.
Here's a real-world example: In the development of personalized medicine, where treatments are tailored to individual patients, Bayesian Designs are invaluable. If a treatment has been effective for patients with similar genetics or symptoms in the past, a Bayesian approach can use that data to predict how well it might work for a new patient.
This type of design is also increasingly popular in machine learning and artificial intelligence. In these fields, Bayesian Designs help algorithms "learn" from past data to make better predictions or decisions in new situations. It's like teaching a computer to be a detective that gets better and better at solving puzzles the more puzzles it sees.
19) Covariate Adaptive Randomization
Now let's turn our attention to Covariate Adaptive Randomization, which you can think of as the "matchmaker" of experimental designs.
Picture a soccer coach trying to create the most balanced teams for a friendly match. They wouldn't just randomly assign players; they'd take into account each player's skills, experience, and other traits.
Covariate Adaptive Randomization is all about creating the most evenly matched groups possible for an experiment.
In traditional randomization, participants are allocated to different groups purely by chance. This is a pretty fair way to do things, but it can sometimes lead to unbalanced groups.
Imagine if all the professional-level players ended up on one soccer team and all the beginners on another; that wouldn't be a very informative match! Covariate Adaptive Randomization fixes this by using important traits or characteristics (called "covariates") to guide the randomization process.
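A stripped-down version of this balancing act is sometimes called "minimization." The Python sketch below is a toy illustration with one covariate (age group) and made-up arm names; real trials balance several covariates and keep a random element in every assignment, not just ties:

```python
import random

# Toy "minimization" sketch: each new participant goes to whichever arm
# keeps the age-group counts most balanced, with ties broken at random.
rng = random.Random(0)
counts = {"A": {"older": 0, "younger": 0}, "B": {"older": 0, "younger": 0}}

def assign(age_group):
    imbalance = counts["A"][age_group] - counts["B"][age_group]
    if imbalance > 0:
        arm = "B"                     # arm A already has more of this group
    elif imbalance < 0:
        arm = "A"
    else:
        arm = rng.choice(["A", "B"])  # tied: pick at random
    counts[arm][age_group] += 1
    return arm

for age in ["older", "older", "younger", "older", "younger", "younger"]:
    assign(age)
print(counts)
```

However the random ties fall, the two arms can never differ by more than one participant within an age group, which is exactly the balance the design is after.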
Covariate Adaptive Randomization Pros
The benefits of this design are pretty clear: it aims for balance and fairness, making the final results more trustworthy.
Covariate Adaptive Randomization Cons
But it's not perfect. It can be complex to implement and requires a deep understanding of which characteristics are most important to balance.
Covariate Adaptive Randomization Uses
This design is particularly useful in medical trials. Let's say researchers are testing a new medication for high blood pressure. Participants might have different ages, weights, or pre-existing conditions that could affect the results.
Covariate Adaptive Randomization would make sure that each treatment group has a similar mix of these characteristics, making the results more reliable and easier to interpret.
In practical terms, this design is often seen in clinical trials for new drugs or therapies, but its principles are also applicable in fields like psychology, education, and social sciences.
For instance, in educational research, it might be used to ensure that classrooms being compared have similar distributions of students in terms of academic ability, socioeconomic status, and other factors.
Covariate Adaptive Randomization is like the matchmaker of the group, ensuring that every arm of the study gets a fair mix of participants, thereby making the collective results as reliable as possible.
20) Stepped Wedge Design
Let's now focus on the Stepped Wedge Design, a thoughtful and cautious member of the experimental design family.
Imagine you're trying out a new gardening technique, but you're not sure how well it will work. You decide to apply it to one section of your garden first, watch how it performs, and then gradually extend the technique to other sections. This way, you get to see its effects over time and across different conditions. That's basically how Stepped Wedge Design works.
In a Stepped Wedge Design, all participants or clusters start off in the control group, and then, at different times, they 'step' over to the intervention or treatment group. This creates a wedge-like pattern over time where more and more participants receive the treatment as the study progresses. It's like rolling out a new policy in phases, monitoring its impact at each stage before extending it to more people.
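You can see the "wedge" by laying out the rollout schedule. Here's a small Python sketch with invented hospital wards, where one more cluster crosses over in each period:

```python
# Stepped-wedge rollout sketch: every ward starts in the control
# condition, and one more ward "steps" into the intervention each period.
clusters = ["Ward A", "Ward B", "Ward C", "Ward D"]
periods = 5  # period 0: all control; period 4: all on the intervention

schedule = {}
for i, cluster in enumerate(clusters):
    # cluster i crosses over to the intervention at period i + 1
    schedule[cluster] = ["control" if t <= i else "treatment" for t in range(periods)]

for cluster in clusters:
    print(cluster, schedule[cluster])
```

Printed as rows, the treatment cells form a staircase (the wedge): everyone starts in control and everyone ends up treated, just at different times.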
Stepped Wedge Design Pros
The Stepped Wedge Design offers several advantages. Firstly, it allows for the study of interventions that are expected to do more good than harm, which makes it ethically appealing.
Secondly, it's useful when resources are limited and it's not feasible to roll out a new treatment to everyone at once. Lastly, because everyone eventually receives the treatment, it can be easier to get buy-in from participants or organizations involved in the study.
Stepped Wedge Design Cons
However, this design can be complex to analyze because it has to account for both the time factor and the changing conditions in each 'step' of the wedge. And like any study where participants know they're receiving an intervention, there's the potential for the results to be influenced by the placebo effect or other biases.
Stepped Wedge Design Uses
This design is particularly useful in health and social care research. For instance, if a hospital wants to implement a new hygiene protocol, it might start in one department, assess its impact, and then roll it out to other departments over time. This allows the hospital to adjust and refine the new protocol based on real-world data before it's fully implemented.
In terms of applications, Stepped Wedge Designs are commonly used in public health initiatives, organizational changes in healthcare settings, and social policy trials. They are particularly useful in situations where an intervention is being rolled out gradually and it's important to understand its impacts at each stage.
21) Sequential Design
Next up is Sequential Design, the dynamic and flexible member of our experimental design family.
Imagine you're playing a video game where you can choose different paths. If you take one path and find a treasure chest, you might decide to continue in that direction. If you hit a dead end, you might backtrack and try a different route. Sequential Design operates in a similar fashion, allowing researchers to make decisions at different stages based on what they've learned so far.
In a Sequential Design, the experiment is broken down into smaller parts, or "sequences." After each sequence, researchers pause to look at the data they've collected. Based on those findings, they then decide whether to stop the experiment because they've got enough information, or to continue and perhaps even modify the next sequence.
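The "look at the data, then decide" loop can be sketched in a few lines of Python. The batches of results and the stopping threshold below are entirely made up; real sequential trials use carefully pre-specified statistical boundaries, not a simple rate cutoff:

```python
# Sequential-look sketch: pause after each batch of results and stop
# early if a (made-up) efficacy threshold is crossed.
batches = [[1, 1, 1, 1], [1, 1, 1, 0], [0, 1, 1, 1]]  # 1 = success

successes = trials = 0
stopped_early = False
for batch in batches:
    successes += sum(batch)
    trials += len(batch)
    rate = successes / trials
    if trials >= 8 and rate >= 0.85:  # toy "stop for efficacy" rule
        stopped_early = True
        break

print(f"Stopped after {trials} trials; success rate {rate:.2f}")
```

Here the third batch never gets run: the evidence after two batches is already strong enough, which is exactly the resource saving the design promises.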
Sequential Design Pros
One of the great things about Sequential Design is its efficiency. Because you're making data-driven decisions along the way, you only keep the experiment running while the data suggests it's worth doing, so you can often reach conclusions more quickly and with fewer resources.
Sequential Design Cons
However, it requires careful planning and expertise to ensure that these "stop or go" decisions are made correctly and without bias.
Sequential Design Uses
This design is often used in clinical trials involving new medications or treatments. For example, if early results show that a new drug has significant side effects, the trial can be stopped before more people are exposed to it.
On the flip side, if the drug is showing promising results, the trial might be expanded to include more participants or to extend the testing period.
Beyond healthcare and medicine, Sequential Design is also popular in quality control in manufacturing, environmental monitoring, and financial modeling. In these areas, being able to make quick decisions based on incoming data can be a big advantage.
Think of Sequential Design as the nimble athlete of experimental designs, capable of quick pivots and adjustments to reach the finish line in the most effective way possible. But just like an athlete needs a good coach, this design requires expert oversight to make sure it stays on the right track.
22) Field Experiments
Last but certainly not least, let's explore Field Experiments—the adventurers of the experimental design world.
Picture a scientist leaving the controlled environment of a lab to test a theory in the real world, like a biologist studying animals in their natural habitat or a social scientist observing people in a real community. These are Field Experiments, and they're all about getting out there and gathering data in real-world settings.
Field Experiments embrace the messiness of the real world, unlike laboratory experiments, where everything is controlled down to the smallest detail. This makes them both exciting and challenging.
Field Experiment Pros
The big advantage is real-world relevance: because the data comes from actual settings rather than a lab, the results often give us a better understanding of how things work outside controlled conditions.
Field Experiment Cons
On the other hand, the lack of control can make it harder to tell exactly what's causing what. There are also ethical considerations when intervening in people's lives, sometimes without their knowledge. Yet despite these challenges, Field Experiments remain a valuable tool for researchers who want to understand how theories play out in the real world.
Field Experiment Uses
Let's say a school wants to improve student performance. In a Field Experiment, they might change the school's daily schedule for one semester and keep track of how students perform compared to another school where the schedule remained the same.
Because the study is happening in a real school with real students, the results could be very useful for understanding how the change might work in other schools. But since it's the real world, lots of other factors—like changes in teachers or even the weather—could affect the results.
Field Experiments are widely used in economics, psychology, education, and public policy. For example, you might have heard of the famous "broken windows" idea from the 1980s, which drew on field observations and experiments showing how small signs of disorder, like broken windows or graffiti, can encourage more serious crime in neighborhoods. That work had a big impact on how cities think about crime prevention.
From the foundational concepts of control groups and independent variables to the sophisticated layouts like Covariate Adaptive Randomization and Sequential Design, it's clear that the realm of experimental design is as varied as it is fascinating.
We've seen that each design has its own special talents, ideal for specific situations. Some designs, like the Classic Controlled Experiment, are like reliable old friends you can always count on.
Others, like Sequential Design, are flexible and adaptable, making quick changes based on what they learn. And let's not forget the adventurous Field Experiments, which take us out of the lab and into the real world to discover things we might not see otherwise.
Choosing the right experimental design is like picking the right tool for the job. The method you choose can make a big difference in how reliable your results are and how much people will trust what you've discovered. And as we've learned, there's a design to suit just about every question, every problem, and every curiosity.
So the next time you read about a new discovery in medicine, psychology, or any other field, you'll have a better understanding of the thought and planning that went into figuring things out. Experimental design is more than just a set of rules; it's a structured way to explore the unknown and answer questions that can change the world.
Experimental Design: Types, Examples & Methods
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.
Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.
The researcher must decide how they will allocate their sample to the different experimental groups. For example, if there are 10 participants, will all 10 participate in both groups (e.g., repeated measures), or will the participants be split in half and take part in only one group each?
Three types of experimental designs are commonly used:
1. Independent Measures
Independent measures design, also known as between-groups design, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.
This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to either group.
Independent measures involve using two separate groups of participants, one in each condition. For example:
- Con: More people are needed than with the repeated measures design (i.e., more time-consuming).
- Pro: Avoids order effects (such as practice or fatigue), as people participate in one condition only. If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition, or become wise to the requirements of the experiment!
- Con: Differences between participants in the groups may affect results, for example, variations in age, gender, or social background. These differences are known as participant variables (i.e., a type of extraneous variable).
- Control: After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).
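As a concrete sketch, random allocation to two independent groups can be done in a few lines of Python (the participant IDs and group names here are purely illustrative):

```python
import random

def randomly_allocate(participants, seed=None):
    """Shuffle the recruited participants, then split the pool in half so
    each person has an equal chance of landing in either condition."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return {"condition_A": pool[:half], "condition_B": pool[half:]}

# Allocate 10 participants: 5 per condition, no overlap.
groups = randomly_allocate(range(1, 11), seed=42)
```

Fixing the seed is only for reproducibility of the example; in a real study the shuffle would be genuinely random.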
2. Repeated Measures Design
Repeated Measures design is an experimental design where the same participants participate in each independent variable condition. This means that each experiment condition includes the same group of participants.
Repeated measures design is also known as within-groups or within-subjects design.
- Pro: As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
- Con: There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior. Performance in the second condition may be better because the participants know what to do (i.e., a practice effect). Or their performance might be worse in the second condition because they are tired (i.e., a fatigue effect). This limitation can be controlled using counterbalancing.
- Pro: Fewer people are needed, as they participate in all conditions (i.e., saves time).
- Control: To combat order effects, the researcher counterbalances the order of the conditions across participants, alternating the order in which participants complete the different conditions of the experiment.
Counterbalancing
Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”
We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.
The sample would be split into two groups that differ only in the order of the conditions: group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.
Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
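The counterbalancing scheme above can be sketched in Python (the condition names follow the noise example; everything else is illustrative):

```python
import random

def counterbalance(participants, conditions=("loud noise", "no noise"), seed=None):
    """Assign half the sample the order A-then-B and the other half
    B-then-A, so order effects cancel out across the two halves."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    a_first = list(conditions)
    b_first = list(reversed(conditions))
    orders = {p: a_first for p in pool[:half]}
    orders.update({p: b_first for p in pool[half:]})
    return orders

# 8 participants: 4 do "loud noise" first, 4 do "no noise" first.
orders = counterbalance(range(8), seed=1)
```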
3. Matched Pairs Design
A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each matched pair is then randomly assigned to the experimental group and the other to the control group.
- Con: If one participant drops out, you lose two participants’ data, because their partner’s data can no longer be paired.
- Pro: Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
- Con: Very time-consuming trying to find closely matched pairs.
- Pro: It avoids order effects, so counterbalancing is not necessary.
- Con: Impossible to match people exactly unless they are identical twins!
- Control: Members of each pair should be randomly assigned to conditions. However, this does not solve all of these problems.
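A minimal sketch of matched-pairs allocation in Python, assuming participants have already been scored on the matching variable (the names and scores below are made up):

```python
import random

def matched_pairs(scores, seed=None):
    """Pair participants with the most similar scores (e.g., on a
    standardized pretest), then randomly send one member of each pair
    to the experimental group and the other to the control group."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)  # participants ordered by score
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]   # adjacent scores = closest match
        rng.shuffle(pair)                   # random assignment within the pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

scores = {"p1": 12, "p2": 30, "p3": 14, "p4": 28, "p5": 19, "p6": 21}
exp_group, ctrl_group = matched_pairs(scores, seed=7)
```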
Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:
1. Independent measures / between-groups: Different participants are used in each condition of the independent variable.
2. Repeated measures / within-groups: The same participants take part in each condition of the independent variable.
3. Matched pairs: Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.
Learning Check
Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.
1. To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.
The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.
2. To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.
3. To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.
At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.
4. To assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions.
Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.
Experiment Terminology
Ecological validity
The degree to which an investigation represents real-life experiences.
Experimenter effects
These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.
Demand characteristics
Clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).
Independent variable (IV)
The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.
Dependent variable (DV)
Variable the experimenter measures. This is the outcome (i.e., the result) of a study.
Extraneous variables (EV)
All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.
Confounding variables
Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.
Random Allocation
Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.
The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
Order effects
Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:
(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;
(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.
8.1 Experimental design: What is it and when should it be used?
Learning objectives
- Define experiment
- Identify the core features of true experimental designs
- Describe the difference between an experimental group and a control group
- Identify and describe the various types of true experimental designs
Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.
Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.
Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:
- random assignment of participants into experimental and control groups
- a “treatment” (or intervention) provided to the experimental group
- measurement of the effects of the treatment in a post-test administered to both groups
Some true experiments are more complex. Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.
Experimental and control groups
In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group , also known as the treatment group) and another that does not receive the intervention (the control group ). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.
Treatment or intervention
In an experiment, the independent variable is receiving the intervention being tested—for example, a therapeutic technique, prevention program, or access to some service or support. It is less common in social work research, but social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.
In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.
The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects her intervention to decrease the number of binge eating episodes reported by participants. Thus, she must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test . In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.
Types of experimental design
Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.
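The chronological steps above can be simulated end to end. The sketch below uses made-up score distributions and a hypothetical intervention effect purely for illustration:

```python
import random

def run_classic_experiment(participants, intervention_effect=5, seed=0):
    """Walk through the classic design: random assignment, pretest for
    both groups, intervention for the experimental group only, then a
    post-test for everyone. All scores are simulated."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    groups = {"experimental": pool[:half], "control": pool[half:]}

    # Pretest: measure the dependent variable before the intervention.
    pretest = {p: rng.gauss(50, 5) for p in pool}

    # Intervention shifts the experimental group's scores; both groups
    # get ordinary measurement noise on the post-test.
    posttest = {}
    for name, members in groups.items():
        boost = intervention_effect if name == "experimental" else 0
        for p in members:
            posttest[p] = pretest[p] + boost + rng.gauss(0, 1)

    def mean_change(members):
        return sum(posttest[p] - pretest[p] for p in members) / len(members)

    return mean_change(groups["experimental"]), mean_change(groups["control"])

exp_change, ctrl_change = run_classic_experiment(range(40))
```

Because assignment is random and the only systematic difference between the groups is the intervention, the gap between the two mean change scores estimates the intervention's effect.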
An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.
In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963). The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects , in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.
Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design , the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.
Table 8.1. The Solomon four-group design

|         | Pretest | Treatment | Posttest |
| ------- | ------- | --------- | -------- |
| Group 1 | X       | X         | X        |
| Group 2 | X       |           | X        |
| Group 3 |         | X         | X        |
| Group 4 |         |           | X        |
Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.
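A short sketch of how the four Solomon groups might be laid out in code. The group labels follow the design described above; the participant pool and seed are illustrative:

```python
import random

def solomon_four_groups(participants, seed=None):
    """Randomly split the sample into the four Solomon groups: only
    groups 1 and 2 are pretested, only groups 1 and 3 receive the
    treatment, and every group is post-tested."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    quarter = len(pool) // 4
    specs = [("group1", True, True),    # pretest, treatment
             ("group2", True, False),   # pretest, no treatment
             ("group3", False, True),   # no pretest, treatment
             ("group4", False, False)]  # posttest only
    plan = {}
    for i, (name, pretest, treatment) in enumerate(specs):
        plan[name] = {"members": pool[i * quarter:(i + 1) * quarter],
                      "pretest": pretest,
                      "treatment": treatment,
                      "posttest": True}
    return plan

plan = solomon_four_groups(range(40), seed=3)
```

Comparing groups 1 vs. 3 (and 2 vs. 4) then isolates any testing effect introduced by the pretest itself.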
Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs–which we will discuss in the next section–can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.
Experimental design in macro-level research
You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to implement decriminalization of recreational marijuana and some states not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. In the Oregon Medicaid experiment, the wait list for Oregon's Medicaid program was so long that state officials conducted a lottery to see who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers used the lottery as a natural experiment that included random assignment. People selected to be a part of Medicaid were the experimental group and those on the wait list were in the control group. There are some practical complications with macro-level experiments, just as with other experiments. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.
Key Takeaways
- True experimental designs require random assignment.
- Control groups do not receive an intervention, and experimental groups receive an intervention.
- The basic components of a true experiment include a pretest, posttest, control group, and experimental group.
- Testing effects may cause researchers to use variations on the classic experimental design.
Glossary
- Classic experimental design: uses random assignment, an experimental and control group, as well as pre- and posttesting
- Control group: the group in an experiment that does not receive the intervention
- Experiment: a method of data collection designed to test hypotheses under controlled conditions
- Experimental group: the group in an experiment that receives the intervention
- Posttest: a measurement taken after the intervention
- Posttest-only control group design: a type of experimental design that uses random assignment and an experimental and control group, but does not use a pretest
- Pretest: a measurement taken prior to the intervention
- Random assignment: using a random process to assign people into experimental and control groups
- Solomon four-group design: uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
- Testing effects: when a participant’s scores on a measure change because they have already been exposed to it
- True experiments: a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups
Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Experimental Design – Types, Methods, Guide
Experimental Design
Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.
Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.
Types of Experimental Design
Here are the different types of experimental design:
Completely Randomized Design
In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.
Randomized Block Design
This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
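A minimal Python sketch of randomized block assignment, using age bands as the (hypothetical) blocking characteristic:

```python
import random
from collections import defaultdict

def randomized_block(participants, block_of, treatments=("A", "B"), seed=None):
    """Group participants into blocks by a shared characteristic, then
    randomize to treatments *within* each block, so every block
    contributes to every treatment."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for p in participants:
        blocks[block_of[p]].append(p)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        for i, p in enumerate(members):
            assignment[p] = treatments[i % len(treatments)]
    return assignment

ages = {"p1": "18-30", "p2": "18-30", "p3": "31-50", "p4": "31-50"}
assignment = randomized_block(list(ages), ages, seed=2)
```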
Factorial Design
In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.
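Enumerating the conditions of a factorial design is mechanical. This sketch builds a hypothetical 2x3 design (the factor names and levels are made up) with Python's `itertools.product`:

```python
from itertools import product

# A hypothetical 2x3 factorial design: every combination of levels
# across the two factors becomes one experimental condition.
factors = {
    "caffeine": ["none", "100mg"],
    "sleep":    ["4h", "6h", "8h"],
}

conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
# 2 levels x 3 levels -> 6 distinct conditions to which participants
# would then be randomly assigned.
```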
Repeated Measures Design
In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.
Crossover Design
This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.
Split-plot Design
In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.
Nested Design
This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.
Laboratory Experiment
Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.
Field Experiment
Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.
Experimental Design Methods
Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:
Randomization
This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.
Control Group
The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.
Blinding
Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.
Counterbalancing
This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
Replication
Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.
Factorial Design
This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.
Blocking
This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.
Data Collection Method
Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:
Direct Observation
This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.
Self-report Measures
Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.
Behavioral Measures
Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.
Physiological Measures
Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.
Archival Data
Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.
Computerized Measures
Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.
Video Recording
Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.
Data Analysis Method
Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:
Descriptive Statistics
Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
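These summary statistics are all available in Python's standard `statistics` module; the scores below are made up for illustration:

```python
import statistics

scores = [4, 8, 6, 5, 3, 7, 8, 9]  # e.g., post-test scores for one group

summary = {
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),
    "range": max(scores) - min(scores),
    "stdev": statistics.stdev(scores),  # sample standard deviation
}
```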
Inferential Statistics
Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.
Analysis of Variance (ANOVA)
ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
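The one-way ANOVA F statistic can be computed from scratch as the ratio of between-group variance to within-group variance; a minimal sketch with toy data (in practice a statistics library would also supply the p-value):

```python
def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: mean square between groups divided
    by mean square within groups."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova_F([1, 2, 3], [2, 3, 4], [6, 7, 8])
```

A large F means the group means differ by more than the within-group noise would predict.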
Regression Analysis
Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
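For simple linear regression, the least-squares slope and intercept follow directly from the definitions; a from-scratch sketch with toy data:

```python
def linear_regression(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data lying exactly on the line y = 2x + 1.
slope, intercept = linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```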
Factor Analysis
Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.
Structural Equation Modeling (SEM)
SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.
Cluster Analysis
Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.
Time Series Analysis
Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.
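A minimal sketch of one common time-series technique, a moving average that smooths short-term fluctuation to expose a trend (the monthly values are invented):

```python
import numpy as np

# Hypothetical monthly measurements with an upward trend plus noise
series = np.array([10, 12, 11, 14, 13, 16, 15, 18, 17, 20, 19, 22], dtype=float)

# 3-point moving average: each output value averages three consecutive months
window = 3
trend = np.convolve(series, np.ones(window) / window, mode="valid")
print(np.round(trend, 2))
```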
Multilevel Modeling
Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.
Applications of Experimental Design
Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:
- Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
- Agriculture: Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
- Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
- Psychology: Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
- Engineering: Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
- Education: Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
- Marketing: Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.
Examples of Experimental Design
Here are some examples of experimental design in different fields:
- Example in Medical research: A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
- Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
- Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
- Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
- Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.
When to use Experimental Research Design
Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.
Here are some situations where experimental research design may be appropriate:
- When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
- When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
- When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
- When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
- When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.
How to Conduct Experimental Research
Here are the steps to conduct Experimental Research:
- Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
- Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
- Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
- Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
- Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
- Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
- Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
- Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is retained; if they do not, it is rejected.
- Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
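The assignment-to-analysis steps above can be sketched end-to-end for a hypothetical between-subjects study. All numbers here are simulated, and an independent-samples t-test is just one common choice for comparing two groups:

```python
import random

import numpy as np
from scipy.stats import ttest_ind

random.seed(42)
rng = np.random.default_rng(42)

# Randomly assign 40 hypothetical participants to two equal groups
participants = list(range(40))
random.shuffle(participants)
treatment_group, control_group = participants[:20], participants[20:]

# Simulate measuring the dependent variable (e.g., a symptom score)
treatment_scores = rng.normal(loc=42, scale=8, size=len(treatment_group))
control_scores = rng.normal(loc=50, scale=8, size=len(control_group))

# Analyze with an independent-samples t-test
t_stat, p_value = ttest_ind(treatment_scores, control_scores)

# Draw a conclusion at the 0.05 significance level
conclusion = "significant difference" if p_value < 0.05 else "no significant difference"
print(f"t = {t_stat:.2f}, p = {p_value:.4f} -> {conclusion}")
```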
Purpose of Experimental Design
The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.
Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.
Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.
Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.
Advantages of Experimental Design
Experimental design offers several advantages in research. Here are some of the main advantages:
- Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
- Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
- Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
- Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
- Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
- Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.
Limitations of Experimental Design
Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:
- Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
- Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
- Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
- Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
- Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
- Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
- Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.
About the author
Muhammad Hassan
Researcher, Academic Writer, Web developer
A Quick Guide to Experimental Design | 5 Steps & Examples
Published on 11 April 2022 by Rebecca Bevans. Revised on 5 December 2022.
Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.
Experimental design means creating a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.
There are five key steps in designing an experiment:
- Consider your variables and how they are related
- Write a specific, testable hypothesis
- Design experimental treatments to manipulate your independent variable
- Assign subjects to groups, either between-subjects or within-subjects
- Plan how you will measure your dependent variable
For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.
Table of contents
- Step 1: Define your variables
- Step 2: Write your hypothesis
- Step 3: Design your experimental treatments
- Step 4: Assign your subjects to treatment groups
- Step 5: Measure your dependent variable
- Frequently asked questions about experimental design
You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:
To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.
Start by simply listing the independent and dependent variables.
| Research question | Independent variable | Dependent variable |
|---|---|---|
| Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night |
| Temperature and soil respiration | Air temperature just above the soil surface | CO2 respired from soil |
Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.
| Research question | Extraneous variable | How to control it |
|---|---|---|
| Phone use and sleep | Variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group |
| Temperature and soil respiration | Soil moisture also affects respiration, and moisture can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots |
Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.
Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.
Prevent plagiarism, run a free check.
Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.
| Research question | Null hypothesis (H0) | Alternate hypothesis (H1) |
|---|---|---|
| Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets. | Increasing phone use before sleep leads to a decrease in sleep. |
| Temperature and soil respiration | Air temperature does not correlate with soil respiration. | Increased air temperature leads to increased soil respiration. |
The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:
- Systematically and precisely manipulate the independent variable(s).
- Precisely measure the dependent variable(s).
- Control any potential confounding variables.
If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.
How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.
First, you may need to decide how widely to vary your independent variable. In the soil-warming example, for instance, you could raise air temperature:
- just slightly above the natural range for your study region.
- over a wider range of temperatures to mimic future warming.
- over an extreme range that is beyond any possible natural variation.
Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone-use example, you could treat phone use as:
- a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
- a continuous variable (minutes of phone use measured every night).
How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.
First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).
You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.
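As a rough sketch of how study size can be estimated before assigning subjects, the standard normal-approximation formula for a two-group comparison gives the sample needed per group. The effect size, alpha, and power below are conventional illustrative choices, not values from this guide:

```python
from scipy.stats import norm

effect_size = 0.5   # assumed standardized mean difference (Cohen's d)
alpha = 0.05        # two-sided significance level
power = 0.80        # desired statistical power

# Normal-approximation sample size per group for a two-sample comparison:
# n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_power) / effect_size) ** 2
print(f"~{n_per_group:.0f} participants per group")
```

For these inputs the formula gives roughly 63 per group; an exact t-test calculation would be slightly larger.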
When assigning your subjects to groups, there are two main choices you need to make:
- A completely randomised design vs a randomised block design.
- A between-subjects design vs a within-subjects design.
Randomisation
An experiment can be completely randomised or randomised within blocks (aka strata):
- In a completely randomised design, every subject is assigned to a treatment group at random.
- In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
| Research question | Completely randomised design | Randomised block design |
|---|---|---|
| Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups. |
Sometimes randomisation isn’t practical or ethical, so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.
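Both assignment schemes can be sketched in a few lines of Python. The subjects, the age-group characteristic, and the treatment labels are invented for illustration:

```python
import random

random.seed(7)

treatments = ["none", "low", "high"]
# Hypothetical subjects with an age-group characteristic to block on
subjects = [{"id": i, "age": "young" if i % 2 == 0 else "older"} for i in range(12)]

def completely_randomized(subjects, treatments):
    """Assign every subject to a treatment purely at random (equal group sizes)."""
    pool = subjects[:]
    random.shuffle(pool)
    size = len(pool) // len(treatments)
    return {t: pool[i * size:(i + 1) * size] for i, t in enumerate(treatments)}

def randomized_block(subjects, treatments, key):
    """Group subjects by a shared characteristic, then randomize within each block."""
    blocks = {}
    for s in subjects:
        blocks.setdefault(s[key], []).append(s)
    assignment = {t: [] for t in treatments}
    for block in blocks.values():
        random.shuffle(block)
        for i, s in enumerate(block):
            assignment[treatments[i % len(treatments)]].append(s)
    return assignment

crd = completely_randomized(subjects, treatments)
rbd = randomized_block(subjects, treatments, key="age")
```

In the block version, each age group contributes equally to every treatment, so age cannot end up concentrated in one group by chance.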
Between-subjects vs within-subjects
In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.
In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.
In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.
Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.
Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
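A small sketch of counterbalancing using the phone-use levels from the examples; the participant IDs are invented:

```python
import itertools

treatments = ["none", "low", "high"]
# All 6 possible orders of the three treatments
orders = list(itertools.permutations(treatments))

# Counterbalancing: cycle through the orders so each sequence is used equally often
participants = [f"P{i:02d}" for i in range(12)]
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for participant, order in schedule.items():
    print(participant, "->", " then ".join(order))
```

With 12 participants and 6 possible orders, each order is used exactly twice, so any order effect is spread evenly across treatments.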
| Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design |
|---|---|---|
| Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperature) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised. |
Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.
Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations. To measure sleep in the phone-use example, you could:
- Ask participants to record what time they go to sleep and get up each day.
- Ask participants to wear a sleep tracker.
How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.
Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.
Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.
To design a successful experiment, first identify:
- A testable hypothesis
- One or more independent variables that you will manipulate
- One or more dependent variables that you will measure
When designing the experiment, first decide:
- How your variable(s) will be manipulated
- How you will control for any potential confounding or lurking variables
- How many subjects you will include
- How you will assign treatments to your subjects
The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.
A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.
A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.
In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.
In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.
In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.
The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.
Cite this Scribbr article
Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 27 September 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/
Lesson 1: Introduction to Design of Experiments

Overview
In this course we will pretty much cover the textbook - all of the concepts and designs included. I think we will have plenty of examples to look at and experience to draw from.
Please note: the main topics listed in the syllabus follow the chapters in the book.
A word of advice regarding the analyses. The prerequisites for this course are STAT 501 - Regression Methods and STAT 502 - Analysis of Variance. However, the focus of the course is on the design and not on the analysis. Thus, one can successfully complete this course without these prerequisites, with just STAT 500 - Applied Statistics for instance, but it will require much more work, and you will gain less appreciation of the subtleties involved in the analysis. You might say the course is more conceptual than math oriented.
Text Reference: Montgomery, D. C. (2019). Design and Analysis of Experiments , 10th Edition, John Wiley & Sons. ISBN 978-1-119-59340-9
What is the Scientific Method? Section
Do you remember learning about this back in high school or junior high even? What were those steps again?
Decide what phenomenon you wish to investigate. Specify how you can manipulate the factor and hold all other conditions fixed, to ensure that these extraneous conditions aren't influencing the response you plan to measure.
Then measure your chosen response variable at several (at least two) settings of the factor under study. If changing the factor causes the phenomenon to change, then you conclude that there is indeed a cause-and-effect relationship at work.
How many factors are involved when you do an experiment? Some say two - perhaps this is a comparative experiment? Perhaps there is a treatment group and a control group? If you have a treatment group and a control group then, in this case, you probably only have one factor with two levels.
How many of you have baked a cake? What are the factors involved to ensure a successful cake? Factors might include preheating the oven, baking time, ingredients, amount of moisture, baking temperature, etc.-- what else? You probably follow a recipe so there are many additional factors that control the ingredients - i.e., a mixture. In other words, someone did the experiment in advance! What parts of the recipe did they vary to make the recipe a success? Probably many factors, temperature and moisture, various ratios of ingredients, and presence or absence of many additives. Now, should one keep all the factors involved in the experiment at a constant level and just vary one to see what would happen? This is a strategy that works but is not very efficient. This is one of the concepts that we will address in this course.
Upon completing this lesson, you should be able to:

- understand the issues and principles of Design of Experiments (DOE),
- understand experimentation is a process,
- list the guidelines for designing experiments, and
- recognize the key historical figures in DOE.
Experimental design: Guide, steps, examples
Last updated 27 April 2023. Reviewed by Miroslav Damyanov.
Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment.
When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations.
This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design.
What is experimental research design?
You can determine the relationship between each of the variables by:
Manipulating one or more independent variables (i.e., stimuli or treatments)
Measuring the resulting changes to one or more dependent variables (i.e., test groups or outcomes)
With the ability to analyze the relationship between variables and using measurable data, you can increase the accuracy of the result.
What is a good experimental design?
A good experimental design requires:
Significant planning to ensure control over the testing environment
Sound experimental treatments
Properly assigning subjects to treatment groups
Without proper planning, unexpected external variables can alter an experiment's outcome.
To meet your research goals, your experimental design should include these characteristics:
Provide unbiased estimates of inputs and associated uncertainties
Enable the researcher to detect differences caused by independent variables
Include a plan for analysis and reporting of the results
Provide easily interpretable results with specific conclusions
What's the difference between experimental and quasi-experimental design?
The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups.
A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups.
However, these conditions are unethical or impossible to achieve in some situations.
When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in.
This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria.
Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.
When can a researcher conduct experimental research?
Various settings and professions can use experimental research to gather information and observe behavior in controlled settings.
Basically, a researcher can conduct experimental research any time they want to test a theory by controlling independent and dependent variables.
Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect.
- The importance of experimental research design
Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses.
Researchers can test independent variables in controlled settings to:
Test the effectiveness of a new medication
Design better products for consumers
Answer questions about human health and behavior
Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, they can draw definitive conclusions about the effect of the independent variable.
Types of experimental research designs
There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations.
Pre-experimental research design
A pre-experimental research study is a basic observational study that monitors independent variables’ effects.
During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change.
The three subtypes of pre-experimental research design are:
One-shot case study research design
This research method introduces a single test group to a single stimulus to study the results at the end of the application.
After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects.
One-group pretest-posttest design
This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus.
Static group comparison design
This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static.
A posttest study compares the results among groups.
True experimental research design
A true experiment is the most common research method. It involves statistical analysis to prove or disprove a specific hypothesis.
Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli.
Random assignment reduces the potential for bias, providing more reliable results.
These are the three main sub-groups of true experimental research design:
Posttest-only control group design
This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.
Researchers perform a test at the end of the experiment to observe the stimuli exposure results.
Pretest-posttest control group design
This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus.
The pretest enables additional comparisons. For instance, if the control group's results also change between tests, it reveals that taking the test twice affects the outcome.
Solomon four-group design
This structure divides subjects into four groups, two of which are control groups. Researchers assign the first control group a posttest only and the second control group both a pretest and a posttest.
The two variable groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions.
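The four-group structure can be summarized in a small data structure. This is a sketch of the design's layout, not an analysis procedure; the group names are illustrative:

```python
# Each group records whether it receives a pretest, the stimulus
# (treatment), and a posttest. All four groups receive the posttest.
solomon_groups = {
    "control_1":   {"pretest": False, "treatment": False, "posttest": True},
    "control_2":   {"pretest": True,  "treatment": False, "posttest": True},
    "treatment_1": {"pretest": False, "treatment": True,  "posttest": True},
    "treatment_2": {"pretest": True,  "treatment": True,  "posttest": True},
}

# Comparing control_2 with control_1 isolates the effect of taking the
# pretest itself, while each treatment group mirrors a control group
# so the effect of the stimulus can be separated out.
pretest_only = [name for name, cond in solomon_groups.items()
                if cond["pretest"] and not cond["treatment"]]
print(pretest_only)  # ['control_2']
```

Laying the design out this way makes it easy to see which pairwise comparisons each group enables.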
Quasi-experimental research design
Although closely related to a true experiment, quasi-experimental research design differs in approach and scope.
Quasi-experimental research design doesn’t have randomly selected participants. Researchers typically divide the groups in this research by pre-existing differences.
Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.
- 5 steps for designing an experiment
Experimental research requires a clearly defined plan to outline the research parameters and expected goals.
Here are five key steps in designing a successful experiment:
Step 1: Define variables and their relationship
Your experiment should begin with a question: What are you hoping to learn through your experiment?
The relationship between variables in your study will determine your answer.
Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these groups, consider how you might control them in your experiment.
Could natural variations affect your research? If so, your experiment should include a pretest and posttest.
Step 2: Develop a specific, testable hypothesis
With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis.
What is the expected outcome of your study?
Develop a prediction about how the independent variable will affect the dependent variable.
How will the stimuli in your experiment affect your test subjects?
Your hypothesis should provide a prediction of the answer to your research question.
Step 3: Design experimental treatments to manipulate your independent variable
Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs).
Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli.
Step 4: Assign subjects to groups
When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study.
When choosing your study groups, consider:
The size of your experiment
Whether you can select groups randomly
Your target audience for the outcome of the study
You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables.
Step 5: Plan how to measure your dependent variable
This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error.
You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.
- Advantages of experimental research
Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions.
While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:
Researchers can determine cause and effect by manipulating variables.
It gives researchers a high level of control.
Researchers can test multiple variables within a single experiment.
All industries and fields of knowledge can use it.
Researchers can duplicate results to promote the validity of the study.
Researchers can replicate natural settings rapidly, enabling timely research.
Researchers can combine it with other research methods.
It provides specific conclusions about the validity of a product, theory, or idea.
- Disadvantages (or limitations) of experimental research
Unfortunately, no research type yields ideal conditions or perfect results.
While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous.
Before conducting experimental research, consider these disadvantages and limitations:
Required professional qualification
Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This helps ensure results are unbiased and valid.
Limited scope
Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.
Resource-intensive
Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.
Limited generalizability
The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.
Practical or ethical concerns
Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines.
Researchers must ensure their experiments do not cause harm or discomfort to participants.
Sometimes, recruiting a sample of people to randomly assign may be difficult.
- Experimental research design example
Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses.
Product design testing is an excellent example of experimental research.
A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype.
When groups experience different product designs, the company can assess which option most appeals to potential customers.
Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect.
Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.
Exploring the Art of Experimental Design: A Step-by-Step Guide for Students and Educators
Experimental Design for Students
Experimental design is a key method used in subjects like biology, chemistry, physics, psychology, and social sciences. It helps us figure out how different factors affect what we're studying, whether it's plants, chemicals, physical laws, human behavior, or how society works. Basically, it's a way to set up experiments so we can test ideas, see what happens, and make sense of our results. It's essential for students and researchers who want to answer big questions in science and understand the world better. Experimental design skills apply to situations ranging from problem solving to data analysis; they are wide reaching and can frequently be applied outside the classroom.

Teaching these skills is a very important part of science education, but it is often overlooked when the focus is on teaching content. As science educators, we have all seen the benefits practical work has for student engagement and understanding. However, with the time constraints placed on the curriculum, the time needed for students to develop these experimental design and investigative skills can get squeezed out. Too often students get a 'recipe' to follow, which doesn't allow them to take ownership of their practical work.

From a very young age, children think about the world around them. They ask questions, then use observations and evidence to answer them. Students tend to have intelligent, interesting, and testable questions that they love to ask. As educators, we should encourage these questions and, in turn, nurture this natural curiosity about the world around them.
Teaching the design of experiments and letting students develop their own questions and hypotheses takes time. These materials have been created to scaffold and structure the process to allow teachers to focus on improving the key ideas in experimental design. Allowing students to ask their own questions, write their own hypotheses, and plan and carry out their own investigations is a valuable experience for them. This will lead to students having more ownership of their work. When students carry out the experimental method for their own questions, they reflect on how scientists have historically come to understand how the universe works.
Take a look at the printer-friendly pages and worksheet templates below!
What are the Steps of Experimental Design?
Embarking on the journey of scientific discovery begins with mastering the steps of experimental design. This foundational process is essential for formulating experiments that yield reliable and insightful results, guiding researchers and students through the detailed planning and execution of their studies. Using an experimental design template can help ensure the integrity and validity of their findings. Whether designing a scientific experiment or engaging in experimental design activities, the aim is to foster a deep understanding of the fundamentals: How should experiments be designed? What are the key steps? How can you design your own experiment?

Below, we explore the key steps of the experimental method, share experimental design ideas, and suggest ways to integrate the design of experiments into student projects. We will also provide resources, such as worksheets, aimed at teaching experimental design effectively. Let's dive into the essential stages that underpin the process of designing an experiment, equipping learners with the tools to explore their scientific curiosity.
1. Question
This is a key part of the scientific method and the experimental design process. Students enjoy coming up with questions. Formulating questions is a deep and meaningful activity that can give students ownership over their work. A great way of getting students to think of how to visualize their research question is using a mind map storyboard.
Ask students to think of any questions they want to answer about the universe or get them to think about questions they have about a particular topic. All questions are good questions, but some are easier to test than others.
2. Hypothesis
A hypothesis is often described as an educated guess: a statement that can be tested scientifically. At the end of the experiment, look back to see whether the results support the hypothesis or not.
Forming good hypotheses can be challenging for students to grasp. It is important to remember that the hypothesis is not a research question; it is a testable statement. One way of forming a hypothesis is as an "if... then..." statement. This certainly isn't the only or best way to form a hypothesis, but it can be a very easy formula for students to use when first starting out.
An “if... then...” statement requires students to identify the variables first, and that may change the order in which they complete the stages of the visual organizer. After identifying the dependent and independent variables, the hypothesis then takes the form if [change in independent variable], then [change in dependent variable].
For example, if an experiment were looking for the effect of caffeine on reaction time, the independent variable would be amount of caffeine and the dependent variable would be reaction time. The “if, then” hypothesis could be: If you increase the amount of caffeine taken, then the reaction time will decrease.
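The caffeine example can be turned into a quick check once data is in hand. The sketch below uses invented reaction times purely for illustration; a real study would apply a proper significance test rather than just comparing means:

```python
from statistics import mean

# Hypothetical reaction times in milliseconds (invented for illustration).
no_caffeine = [310, 295, 320, 305, 300]
with_caffeine = [270, 260, 285, 275, 265]

# If the caffeine group reacts faster, the mean difference is positive,
# which is the direction the "if... then..." hypothesis predicts.
difference = mean(no_caffeine) - mean(with_caffeine)
supports_hypothesis = difference > 0
print(f"Mean difference: {difference:.0f} ms; "
      f"direction supports hypothesis: {supports_hypothesis}")
```

A comparison like this only shows the direction of the effect; whether the difference is large enough to be meaningful is a separate statistical question.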
3. Explanation of Hypothesis
What led you to this hypothesis? What is the scientific background behind your hypothesis? Depending on age and ability, students use their prior knowledge to explain why they have chosen their hypotheses, or alternatively, research using books or the internet. This could also be a good time to discuss with students what a reliable source is.
For example, students may reference previous studies showing the alertness effects of caffeine to explain why they hypothesize caffeine intake will reduce reaction time.
4. Prediction
The prediction is slightly different to the hypothesis. A hypothesis is a testable statement, whereas the prediction is more specific to the experiment. In the discovery of the structure of DNA, the hypothesis proposed that DNA has a helical structure. The prediction was that the X-ray diffraction pattern of DNA would be an X shape.
Students should formulate a prediction that is a specific, measurable outcome based on their hypothesis. Rather than just stating "caffeine will decrease reaction time," students could predict that "drinking 2 cans of soda (90mg caffeine) will reduce average reaction time by 50 milliseconds compared to drinking no caffeine."
5. Identification of Variables
Below is an example of a Discussion Storyboard that can be used to get your students talking about variables in experimental design.
The three types of variables you will need to discuss with your students are dependent, independent, and controlled variables. To keep this simple, refer to these as "what you are going to measure", "what you are going to change", and "what you are going to keep the same". With more advanced students, you should encourage them to use the correct vocabulary.
Dependent variables are what the scientist measures or observes. These measurements are often repeated because repeated measurements make your data more reliable.
The independent variable is the variable scientists decide to change to see what effect it has on the dependent variable. Only one is chosen because it would be difficult to determine which variable is causing any change you observe.
Controlled variables are quantities or factors that scientists want to remain the same throughout the experiment. They are controlled to remain constant, so as to not affect the dependent variable. Controlling these allows scientists to see how the independent variable affects the dependent variable within the experimental group.
Use this example below in your lessons, or delete the answers and set it as an activity for students to complete on Storyboard That.
| How temperature affects the amount of sugar able to be dissolved in water | |
|---|---|
| Independent Variable | Water temperature (five samples at 10°C, 20°C, 30°C, 40°C, and 50°C) |
| Dependent Variable | The amount of sugar that can be dissolved in the water, measured in teaspoons |
| Controlled Variables | |
6. Risk Assessment
Ultimately this must be signed off on by a responsible adult, but it is important to get students to think about how they will keep themselves safe. In this part, students should identify potential risks and then explain how they are going to minimize them. An activity to help students develop these skills is to get them to identify and manage risks in different situations. Using the storyboard below, get students to complete the second column of the T-chart by identifying each risk, then explaining how they could manage it. This storyboard could also be projected for a class discussion.
7. Materials
In this section, students will list the materials they need for the experiments, including any safety equipment that they have highlighted as needing in the risk assessment section. This is a great time to talk to students about choosing tools that are suitable for the job. You are going to use a different tool to measure the width of a hair than to measure the width of a football field!
8. General Plan and Diagram
It is important to talk to students about reproducibility. They should write a procedure that would allow their experimental method to be reproduced easily by another scientist. The easiest and most concise way for students to do this is by making a numbered list of instructions. A useful activity here could be getting students to explain how to make a cup of tea or a sandwich. Act out the process, pointing out any steps they’ve missed.
For English Language Learners and students who struggle with written English, students can describe the steps in their experiment visually using Storyboard That.
Not every experiment will need a diagram, but some plans will be greatly improved by including one. Have students focus on producing clear and easy-to-understand diagrams that illustrate the experimental setup.
For example, a procedure to test the effect of sunlight on plant growth using a completely randomized design could detail:
- Select 10 similar seedlings of the same age and variety
- Prepare 2 identical trays with the same soil mixture
- Place 5 plants in each tray; label one set "sunlight" and one set "shade"
- Position sunlight tray by a south-facing window, and shade tray in a dark closet
- Water both trays with 50 mL water every 2 days
- After 3 weeks, remove plants and measure heights in cm
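The "completely randomized" part of this design means each seedling has an equal chance of ending up in either tray. A minimal sketch of that assignment step, using hypothetical seedling labels:

```python
import random

# Hypothetical labels for the 10 similar seedlings in the procedure above.
seedlings = [f"seedling_{i}" for i in range(1, 11)]

random.seed(7)  # fixed seed so the illustration is reproducible
shuffled = random.sample(seedlings, k=len(seedlings))
sunlight_tray = sorted(shuffled[:5])   # 5 plants labeled "sunlight"
shade_tray = sorted(shuffled[5:])      # 5 plants labeled "shade"

print("sunlight:", sunlight_tray)
print("shade:", shade_tray)
```

Randomizing the assignment, rather than, say, putting the tallest seedlings in one tray, prevents pre-existing differences between plants from being confused with the effect of sunlight.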
9. Carry Out Experiment
Once their procedure is approved, students should carefully carry out their planned experiment, following their written instructions. As data is collected, students should organize the raw results in tables, graphs, photos or drawings. This creates clear documentation for analyzing trends.
Some best practices for data collection include:
- Record quantitative data numerically with units
- Note qualitative observations with detailed descriptions
- Capture the setup through illustrations or photos
- Write observations of unexpected events
- Identify data outliers and sources of error
For example, in the plant growth experiment, students could record:
Group | Sunlight | Sunlight | Sunlight | Shade | Shade |
---|---|---|---|---|---|
Plant ID | 1 | 2 | 3 | 1 | 2 |
Start Height | 5 cm | 4 cm | 5 cm | 6 cm | 4 cm |
End Height | 18 cm | 17 cm | 19 cm | 9 cm | 8 cm |
They would also describe observations like leaf color change or directional bending visually or in writing.
It is crucial that students practice safe science procedures. Adult supervision is required for experimentation, along with proper risk assessment.
Well-documented data collection allows for deeper analysis after experiment completion to determine whether hypotheses and predictions were supported.
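As a simple analysis step, students could compute the average growth of each group from a table like the one above. This sketch transcribes the example heights shown earlier:

```python
from statistics import mean

# Heights (cm) from the example table: three sunlight plants, two shade plants.
start = {"sunlight": [5, 4, 5], "shade": [6, 4]}
end = {"sunlight": [18, 17, 19], "shade": [9, 8]}

# Average growth per group: mean end height minus mean start height.
growth = {group: mean(end[group]) - mean(start[group]) for group in start}

print(growth)
```

Comparing the two growth figures gives a concrete, numerical answer to whether the data support the hypothesis about sunlight.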
Completed Examples
Resources and Experimental Design Examples
Using visual organizers is an effective way to get your students working as scientists in the classroom.
There are many ways to use these investigation planning tools to scaffold and structure students' work while they are working as scientists. Students can complete the planning stage on Storyboard That using the text boxes and diagrams, or you could print them off and have students complete them by hand. Another great way to use them is to project the planning sheet onto an interactive whiteboard and work through how to complete the planning materials as a group. Project it onto a screen and have students write their answers on sticky notes and put their ideas in the correct section of the planning document.
Very young learners can still start to think as scientists! They have loads of questions about the world around them and you can start to make a note of these in a mind map. Sometimes you can even start to ‘investigate’ these questions through play.
The foundation resource is intended for elementary students or students who need more support. It is designed to follow exactly the same process as the higher resources, but made slightly easier. The key difference between the two resources is the level of detail students are required to think about and the technical vocabulary used. For example, it is important that students identify variables when they are designing their investigations. In the higher version, students not only have to identify the variables, but also make other decisions, such as how they are going to measure the dependent variable or whether to use a completely randomized design. As well as the difference in scaffolding between the two levels of resources, you may want to further differentiate by how the learners are supported by teachers and assistants in the room.
Students could also be encouraged to make their experimental plan easier to understand by using graphics, and this could also be used to support ELLs.
Students need to be assessed on their science inquiry skills alongside the assessment of their knowledge. Not only will that let students focus on developing their skills, but it will also allow them to use their assessment information in a way that helps them improve their science skills. Using Quick Rubric, you can create a quick and easy assessment framework and share it with students so they know how to succeed at every stage. As well as providing formative assessment that will drive learning, this can also be used to assess student work at the end of an investigation and set targets for the next time they plan their own investigation. The rubrics have been written in a way that allows students to access them easily. This way, they can be shared with students as they work through the planning process, so they know what a good experimental design looks like.
(Two rubric tables with Proficient, Emerging, and Beginning levels, worth 13/7/0 and 11/5/0 points respectively.)
Printable Resources
Related Activities
Additional Worksheets
If you're looking to add additional projects or continue to customize worksheets, take a look at several template pages we've compiled for you below. Each worksheet can be copied and tailored to your projects or students! Students can also be encouraged to create their own if they want to try organizing information in an easy to understand way.
- Lab Worksheets
- Discussion Worksheets
- Checklist Worksheets
Related Resources
- Scientific Method Steps
- Science Discussion Storyboards
- Developing Critical Thinking Skills
How to Teach Students the Design of Experiments
Encourage questioning and curiosity
Foster a culture of inquiry by encouraging students to ask questions about the world around them.
Formulate testable hypotheses
Teach students how to develop hypotheses that can be scientifically tested. Help them understand the difference between a hypothesis and a question.
Provide scientific background
Help students understand the scientific principles and concepts relevant to their hypotheses. Encourage them to draw on prior knowledge or conduct research to support their hypotheses.
Identify variables
Teach students about the three types of variables (dependent, independent, and controlled) and how they relate to experimental design. Emphasize the importance of controlling variables and measuring the dependent variable accurately.
Plan and diagram the experiment
Guide students in developing a clear and reproducible experimental procedure. Encourage them to create a step-by-step plan or use visual diagrams to illustrate the process.
Carry out the experiment and analyze data
Support students as they conduct the experiment according to their plan. Guide them in collecting data in a meaningful and organized manner. Assist them in analyzing the data and drawing conclusions based on their findings.
Frequently Asked Questions about Experimental Design for Students
What are some common experimental design tools and techniques that students can use?
Common experimental design tools and techniques that students can use include random assignment, control groups, blinding, replication, and statistical analysis. Students can also use observational studies, surveys, and experiments with natural or quasi-experimental designs. They can also use data visualization tools to analyze and present their results.
How can experimental design help students develop critical thinking skills?
Experimental design helps students develop critical thinking skills by encouraging them to think systematically and logically about scientific problems. It requires students to analyze data, identify patterns, and draw conclusions based on evidence. It also helps students to develop problem-solving skills by providing opportunities to design and conduct experiments to test hypotheses.
How can experimental design be used to address real-world problems?
Experimental design can be used to address real-world problems by identifying variables that contribute to a particular problem and testing interventions to see if they are effective in addressing the problem. For example, experimental design can be used to test the effectiveness of new medical treatments or to evaluate the impact of social interventions on reducing poverty or improving educational outcomes.
What are some common experimental design pitfalls that students should avoid?
Common experimental design pitfalls that students should avoid include failing to control variables, using biased samples, relying on anecdotal evidence, and failing to measure dependent variables accurately. Students should also be aware of ethical considerations when conducting experiments, such as obtaining informed consent and protecting the privacy of research subjects.
Statistics By Jim
Making statistics intuitive
Experimental Design: Definition and Types
By Jim Frost
What is Experimental Design?
An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions.
An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental design as the design of experiments (DOE). Both terms are synonymous.
Ultimately, the design of experiments helps ensure that your procedures and data will evaluate your research question effectively. Without an experimental design, you might waste your efforts in a process that, for many potential reasons, can’t answer your research question. In short, it helps you trust your results.
Learn more about Independent and Dependent Variables.
Design of Experiments: Goals & Settings
Experiments occur in many settings, ranging from psychology, the social sciences, and medicine to physics, engineering, and the industrial and service sectors. Typically, experimental goals are to discover a previously unknown effect, confirm a known effect, or test a hypothesis.
Effects represent causal relationships between variables. For example, in a medical experiment, does the new medicine cause an improvement in health outcomes? If so, the medicine has a causal effect on the outcome.
An experimental design’s focus depends on the subject area and can include the following goals:
- Understanding the relationships between variables.
- Identifying the variables that have the largest impact on the outcomes.
- Finding the input variable settings that produce an optimal result.
For example, psychologists have conducted experiments to understand how conformity affects decision-making. Sociologists have performed experiments to determine whether ethnicity affects the public reaction to staged bike thefts. These experiments map out the causal relationships between variables, and their primary goal is to understand the role of various factors.
Conversely, in a manufacturing environment, the researchers might use an experimental design to find the factors that most effectively improve their product’s strength, identify the optimal manufacturing settings, and do all that while accounting for various constraints. In short, a manufacturer’s goal is often to use experiments to improve their products cost-effectively.
In a medical experiment, the goal might be to quantify the medicine’s effect and find the optimum dosage.
Developing an Experimental Design
Developing an experimental design involves planning that maximizes the potential to collect data that is both trustworthy and able to detect causal relationships. Specifically, these studies aim to see effects when they exist in the population the researchers are studying, preferentially favor causal effects, isolate each factor’s true effect from potential confounders, and produce conclusions that you can generalize to the real world.
To accomplish these goals, experimental designs carefully manage data validity and reliability, and internal and external experimental validity. When your experiment is valid and reliable, you can expect your procedures and data to produce trustworthy results.
An excellent experimental design involves the following:
- Lots of preplanning.
- Developing experimental treatments.
- Determining how to assign subjects to treatment groups.
The remainder of this article focuses on how experimental designs incorporate these essential items to accomplish their research goals.
Learn more about Data Reliability vs. Validity and Internal and External Experimental Validity.
Preplanning, Defining, and Operationalizing for Design of Experiments
This phase of the design of experiments starts with a literature review, which helps you identify critical variables, know how to measure them while ensuring reliability and validity, and understand the relationships between them. The review can also help you find ways to reduce sources of variability, which increases your ability to detect treatment effects. Notably, the literature review allows you to learn how similar studies designed their experiments and the challenges they faced.
Operationalizing a study involves taking your research question, using the background information you gathered, and formulating an actionable plan.
This process should produce a specific and testable hypothesis using data that you can reasonably collect given the resources available to the experiment. For example, in a study of whether a jumping exercise intervention increases bone density, the hypotheses might be:
- Null hypothesis : The jumping exercise intervention does not affect bone density.
- Alternative hypothesis : The jumping exercise intervention affects bone density.
To learn more about this early phase, read Five Steps for Conducting Scientific Studies with Statistical Analyses.
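To make the bone density hypotheses concrete, here is a minimal sketch of how the eventual comparison might be tested. The data are invented for illustration, and the Welch t statistic is computed by hand rather than with a statistics package:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic:
    (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    n_a, n_b = len(sample_a), len(sample_b)
    v_a, v_b = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / ((v_a / n_a + v_b / n_b) ** 0.5)

# Hypothetical changes in bone density (g/cm^2) over the study
control = [0.01, -0.02, 0.00, 0.03, -0.01, 0.02]
jumping = [0.05, 0.08, 0.03, 0.06, 0.04, 0.07]

t = welch_t(jumping, control)  # a large positive t favors the alternative
```

A real analysis would also compute degrees of freedom and a p-value, but the core test of the null hypothesis rests on this statistic.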
Formulating Treatments in Experimental Designs
In an experimental design, treatments are variables that the researchers control. They are the primary independent variables of interest. Researchers administer the treatment to the subjects or items in the experiment and want to know whether it causes changes in the outcome.
As the name implies, a treatment can be medical in nature, such as a new medicine or vaccine. But it’s a general term that applies to other things such as training programs, manufacturing settings, teaching methods, and types of fertilizers. I helped run an experiment where the treatment was a jumping exercise intervention that we hoped would increase bone density. All these treatment examples are things that potentially influence a measurable outcome.
Even when you know your treatment generally, you must carefully consider the amount. How large of a dose? If you’re comparing three different temperatures in a manufacturing process, how far apart are they? For my bone mineral density study, we had to determine how frequently the exercise sessions would occur and how long each lasted.
How you define the treatments in the design of experiments can affect your findings and the generalizability of your results.
Assigning Subjects to Experimental Groups
A crucial decision for all experimental designs is determining how researchers assign subjects to the experimental conditions—the treatment and control groups. The control group is often, but not always, the lack of a treatment. It serves as a basis for comparison by showing outcomes for subjects who don’t receive a treatment. Learn more about Control Groups.
How your experimental design assigns subjects to the groups affects how confident you can be that the findings represent true causal effects rather than mere correlation caused by confounders. Indeed, the assignment method influences how you control for confounding variables. This is the difference between correlation and causation.
Imagine a study finds that vitamin consumption correlates with better health outcomes. As a researcher, you want to be able to say that vitamin consumption causes the improvements. However, with the wrong experimental design, you might only be able to say there is an association. A confounder, and not the vitamins, might actually cause the health benefits.
Let’s explore some of the ways to assign subjects in design of experiments.
Completely Randomized Designs
A completely randomized experimental design randomly assigns all subjects to the treatment and control groups. You simply take each participant and use a random process to determine their group assignment. You can flip coins, roll a die, or use a computer. Randomized experiments must be prospective studies because they need to be able to control group assignment.
Random assignment in the design of experiments helps ensure that the groups are roughly equivalent at the beginning of the study. This equivalence at the start increases your confidence that any differences you see at the end were caused by the treatments. The randomization tends to equalize confounders between the experimental groups and, thereby, cancels out their effects, leaving only the treatment effects.
For example, in a vitamin study, the researchers can randomly assign participants to either the control or vitamin group. Because the groups are approximately equal when the experiment starts, if the health outcomes are different at the end of the study, the researchers can be confident that the vitamins caused those improvements.
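A completely randomized design is straightforward to sketch in code. This hypothetical example shuffles the participant list and then deals it round-robin into the groups:

```python
import random

def randomize(subjects, groups=("control", "treatment"), seed=None):
    """Completely randomized design: shuffle the subjects, then deal
    them round-robin into the experimental groups."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

# Twenty invented participant IDs, split 10/10 by chance alone
assignment = randomize([f"P{i:02d}" for i in range(1, 21)], seed=42)
```

The fixed seed is only there to make the illustration reproducible; a real study would use a fresh random process.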
Statisticians consider randomized experimental designs to be the best for identifying causal relationships.
If you can’t randomly assign subjects but want to draw causal conclusions about an intervention, consider using a quasi-experimental design.
Learn more about Randomized Controlled Trials and Random Assignment in Experiments.
Randomized Block Designs
Nuisance factors are variables that can affect the outcome, but they are not the researcher’s primary interest. Unfortunately, they can hide or distort the treatment results. When experimenters know about specific nuisance factors, they can use a randomized block design to minimize their impact.
This experimental design takes subjects with a shared “nuisance” characteristic and groups them into blocks. The participants in each block are then randomly assigned to the experimental groups. This process allows the experiment to control for known nuisance factors.
Blocking in the design of experiments reduces the impact of nuisance factors on experimental error. The analysis assesses the effects of the treatment within each block, which removes the variability between blocks. The result is that blocked experimental designs can reduce the impact of nuisance variables, increasing the ability to detect treatment effects accurately.
Suppose you’re testing various teaching methods. Because grade level likely affects educational outcomes, you might use grade level as a blocking factor. To use a randomized block design for this scenario, divide the participants by grade level and then randomly assign the members of each grade level to the experimental groups.
A standard guideline for an experimental design is to “Block what you can, randomize what you cannot.” Use blocking for a few primary nuisance factors. Then use random assignment to distribute the unblocked nuisance factors equally between the experimental conditions.
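The grade-level scenario can be sketched as follows. The grade labels and subject IDs are invented; the point is that randomization happens separately within each block:

```python
import random

def randomized_block(subjects_by_block, groups=("control", "treatment"), seed=None):
    """Randomized block design: randomize separately within each block
    so every block contributes subjects to every experimental group."""
    rng = random.Random(seed)
    design = {g: [] for g in groups}
    for block, members in subjects_by_block.items():
        shuffled = members[:]
        rng.shuffle(shuffled)
        for i, g in enumerate(groups):
            design[g].extend((block, s) for s in shuffled[i::len(groups)])
    return design

# Hypothetical students blocked by grade level
grades = {"grade3": [f"G3-{i}" for i in range(6)],
          "grade4": [f"G4-{i}" for i in range(6)]}
design = randomized_block(grades, seed=7)
```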
You can also use covariates to control nuisance factors. Learn about Covariates: Definition and Uses.
Observational Studies
In some experimental designs, randomly assigning subjects to the experimental conditions is impossible or unethical. The researchers simply can’t assign participants to the experimental groups. However, they can observe them in their natural groupings, measure the essential variables, and look for correlations. These observational studies are also known as quasi-experimental designs. Retrospective studies must be observational in nature because they look back at past events.
Imagine you’re studying the effects of depression on an activity. Clearly, you can’t randomly assign participants to the depression and control groups. But you can observe participants with and without depression and see how their task performance differs.
Observational studies let you perform research when you can’t control the treatment. However, quasi-experimental designs increase the problem of confounding variables. For this design of experiments, correlation does not necessarily imply causation. While special procedures can help control confounders in an observational study, you’re ultimately less confident that the results represent causal findings.
Learn more about Observational Studies.
For a good comparison, learn about the differences and tradeoffs between Observational Studies and Randomized Experiments.
Between-Subjects vs. Within-Subjects Experimental Designs
When you think of the design of experiments, you probably picture a treatment and control group. Researchers assign participants to only one of these groups, so each group contains entirely different subjects than the other groups. Analysts compare the groups at the end of the experiment. Statisticians refer to this method as a between-subjects, or independent measures, experimental design.
In a between-subjects design, you can have more than one treatment group, but each subject is exposed to only one condition, the control group or one of the treatment groups.
A potential downside to this approach is that differences between groups at the beginning can affect the results at the end. As you’ve read earlier, random assignment can reduce those differences, but it is imperfect. There will always be some variability between the groups.
In a within-subjects experimental design, also known as repeated measures, subjects experience all treatment conditions and are measured for each. Each subject acts as their own control, which reduces variability and increases the statistical power to detect effects.
In this experimental design, you minimize pre-existing differences between the experimental conditions because they all contain the same subjects. However, the order of treatments can affect the results. Beware of practice and fatigue effects. Learn more about Repeated Measures Designs.
| Between-Subjects Design | Within-Subjects Design |
| --- | --- |
| Each subject is assigned to one experimental condition | Each subject participates in all experimental conditions |
| Requires more subjects | Requires fewer subjects |
| Differences between subjects in the groups can affect the results | Uses the same subjects in all conditions |
| No treatment order effects | Order of treatments can affect results |
Design of Experiments Examples
For example, a bone density study has three experimental groups—a control group, a stretching exercise group, and a jumping exercise group.
In a between-subjects experimental design, scientists randomly assign each participant to one of the three groups.
In a within-subjects design, all subjects experience the three conditions sequentially while the researchers measure bone density repeatedly. The procedure can switch the order of treatments for the participants to help reduce order effects.
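Switching the order of treatments can be sketched with a simple counterbalancing scheme. This illustration cycles each new subject through the next of the six possible orderings of the three conditions:

```python
from itertools import permutations, cycle

def counterbalance(subjects, conditions):
    """Within-subjects design: give each subject every condition,
    cycling through all possible condition orders to reduce order effects."""
    orders = cycle(permutations(conditions))
    return {s: list(next(orders)) for s in subjects}

# Six invented participants and the three bone density study conditions
schedule = counterbalance([f"P{i}" for i in range(1, 7)],
                          ["control", "stretching", "jumping"])
```

With six subjects and six possible orders, each condition appears first for exactly two subjects.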
Matched Pairs Experimental Design
A matched pairs experimental design is a between-subjects study that uses pairs of similar subjects. Researchers use this approach to reduce pre-existing differences between experimental groups. It’s yet another design of experiments method for reducing sources of variability.
Researchers identify variables likely to affect the outcome, such as demographics. When they pick a subject with a set of characteristics, they try to locate another participant with similar attributes to create a matched pair. Scientists randomly assign one member of a pair to the treatment group and the other to the control group.
On the plus side, this process creates two similar groups, and it doesn’t create treatment order effects. While matched pairs do not produce the perfectly matched groups of a within-subjects design (which uses the same subjects in all conditions), the approach aims to reduce variability between groups relative to a standard between-subjects study.
On the downside, finding matched pairs is very time-consuming. Additionally, if one member of a matched pair drops out, the other subject must leave the study too.
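As an illustration, here is one simple way to build matched pairs on a single matching variable (age, in this invented example): sort the subjects, pair neighbors, then randomize within each pair:

```python
import random

def matched_pairs(subjects, key, seed=None):
    """Matched pairs design: sort subjects on a matching variable, pair
    adjacent (most similar) subjects, then randomly split each pair
    between treatment and control. An odd subject out is left unassigned."""
    rng = random.Random(seed)
    ordered = sorted(subjects, key=key)
    treatment, control = [], []
    for a, b in zip(ordered[0::2], ordered[1::2]):
        pair = [a, b]
        rng.shuffle(pair)
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

# Hypothetical (name, age) volunteers matched on age
volunteers = [("Ana", 23), ("Ben", 31), ("Cal", 24), ("Dee", 35),
              ("Eli", 30), ("Fay", 22)]
treatment, control = matched_pairs(volunteers, key=lambda s: s[1], seed=1)
```

Real studies usually match on several demographics at once, which is exactly why finding pairs is so time-consuming.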
Learn more about Matched Pairs Design: Uses & Examples.
Another consideration is whether you’ll use a cross-sectional design (one point in time) or use a longitudinal study to track changes over time.
A case study is a research method that often serves as a precursor to a more rigorous experimental design by identifying research questions, variables, and hypotheses to test. Learn more about What is a Case Study? Definition & Examples.
In conclusion, the design of experiments is extremely sensitive to subject area concerns and the time and resources available to the researchers. Developing a suitable experimental design requires balancing a multitude of considerations. A successful design is necessary to obtain trustworthy answers to your research question and to have a reasonable chance of detecting treatment effects when they exist.
Implementing Design of Experiments (DOE): A practical example
You may have an understanding of what Design of Experiments (DOE) is in theory. But what happens when DOE collides with the real world?
Implementing DOE in a busy laboratory is, of course, a nuanced topic—and there are plenty of ways to approach it.
DOE implementation with a practical example: 7 elements to consider
Let’s jump straight in with a real-life example. Imagine that we want to optimize the expression of a target protein in bacterial cell culture.
Based on our experience with DOE campaigns, here are the most important elements to consider, and how we’d approach them in this scenario:
1. Using DOE tools vs doing it manually
In theory, you can create, execute, and analyze this DOE example (and any other DOE, for that matter) with little more than a pipette, pen, and paper.
But it’ll be hard to do more than scratch the surface without the proper tools. For something as complex as protein expression, you’re going to need a hefty toolbox to help you with each stage.
Software for DOE
Let’s begin with DOE software.
DOE rests on a well-established and robust mathematical foundation. Technically, you can do the math by hand. But it’s hard work, error-prone, and requires specialized mathematical knowledge. Using DOE software helps reduce the risk of a mathematical slip.
And thankfully, over the last few years, DOE software has become more accessible to scientists—which lowers the barriers to entry for non-statisticians.
By creating and assessing different designs, analyzing the data, and building models with software, you’ll also find it easier to decide your next action or iteration.
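To make this concrete, even the simplest design a DOE package produces—a full factorial—is just every combination of factor levels. The factors and levels below are hypothetical stand-ins for the protein-expression example:

```python
from itertools import product

# Invented two-level factors for the protein-expression example
factors = {
    "temperature_C": [25, 37],
    "inducer_mM": [0.1, 1.0],
    "promoter": ["T7", "tac"],
}

# Full factorial: every combination of factor levels becomes one run
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Real DOE software goes far beyond this—fractional, optimal, and response-surface designs, plus the analysis—but the run table it emits has exactly this shape.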
Automation hardware for DOE
Biological DOEs—including our example of optimizing protein production—typically involve liquid handling and analytics.
Manually handling small quantities of liquid is feasible.
That being said, DOEs are more complex than most protocols. They typically employ dozens or hundreds of runs—and the variations between runs are minute. For DOEs that surpass a certain scale, you’d end up driving yourself mad trying to manually pipette 10 or more liquids into wells that are millimeters apart. All at variable volumes, in an unpredictable layout.
At the scale where DOE reaches its full potential, working by hand without making any errors would be nothing short of a miracle, even for the most practiced pipettor.
Automation hardware would instantly relieve you of the burden, and radically speed up time to insight.
Worth noting: If you go down the automation route, you will, of course, need to integrate the output of your DOE software with the software that controls your lab automation. Automation engineers can help make the transition from manual to automated liquid handling and ease DOE implementation, though we know that this can create a new bottleneck. And DOE can be complex enough without worrying about shifting toward fully automated experimentation.
That’s why at Synthace, we’ve created a more accessible kind of DOE software—the kind that doesn’t require an automation engineer’s specialist scripting or coding knowledge.
But we digress...
2. Framing your question as a hypothesis
A large part of the power of DOE resides in the process. It’s a campaign approach encompassing screening, refinement and iteration, optimization, and assessing robustness.
So before you begin, you need to sketch out a plan for your campaign. And as with every scientific experiment, you always start by framing your question as a hypothesis.
Returning to our growth experiment: Producing our target protein in bacteria depends on a complex interaction between genetics and environment.
So our hypothesis would be: By varying aspects of genetics and environment, we will discover what’s important, and how they affect one another. This will help us optimize production.
3. Choosing factors based on what you know...
After forming a hypothesis, the next stage is to start thinking about which factors to investigate, and how to change them.
By factors, we mean variables in your experiment. There can be genetic factors and environmental factors. In our protein optimization with DOE, an example of a genetic factor could be which promoter we use, and an environmental one could be the overnight growth temperature.
To avoid spending too much effort re-learning things that are already known, we recommend using all of the knowledge you can get.
For instance, if you know which growth media achieve high yields when you’re trying to optimize protein production with DOE, there’s usually no need to confirm this experimentally.
However, you can investigate a biologically plausible change to the media (e.g., zinc availability may be limiting) alongside other media, genetic and process factors, and interactions (e.g., between zinc and manganese).
4. ... Without relying on your knowledge completely
Having said that, familiarity should not breed complacency. There’s a fine line to walk here: It’s all too easy to develop experiments that confirm, rather than test, hypotheses. Without a well-developed and robust theoretical framework for your experiment, you won’t get to grips with the complexity of your system.
So, be open-minded. Don’t assume you know everything. DOE helps you investigate your system in an unbiased way, which often reveals new insights and generates novel hypotheses.
For instance, the formulae for many cell growth media are handed down and used unquestioningly by generations of scientists. After all, why would you risk taking something out if your cells might not grow properly?
But calculated risks are part of science.
Cell growth is complex and there’s no perfect medium that gives excellent results in every possible case. It’s likely that many ingredients aren’t necessary for specific applications or may even be harmful: High levels of zinc may inhibit the growth of certain bacteria, for example.
Investigating the composition of such apparently standard parts of the workflow can be useful: Some “unnecessary” components of the media can be very expensive, while others are actively harmful for the specific application.
5. Getting your measurements right
Results for your DOE are only as good as the quality of your measurement data. So for your DOE to work, your measurements have to be in order.
What can go wrong? There are two related problems: Noise and sensitivity.
Noise is about how reproducible the signal is. If you measure the same thing 3 times, how much do the results vary? This will define the resolution of your experiment. Noisier assays make it harder to distinguish between real changes and random variations. Noise is often something to watch out for during the earlier stages, where many runs will produce low or no signal. Distinguishing these to inform the next iteration will be critical.
Sensitivity is more about the range of signals that you can detect. This usually comes down to a device’s upper and lower detection limits. If you don’t take these into account, you risk losing a lot of information on signals outside those limits, which is a big problem when it comes to modeling DOE data.
Sensitivity could come up in our working DOE example as a side-effect of the assay protocol. The simplest way to detect protein expression might be using crude lysates with a Bradford assay. But you’d need to ensure that the dynamic range of the plate reader doesn’t restrict sensitivity. Testing multiple dilutions is one common way to mitigate this. Mitigating noise issues from background expression of non-target proteins using a proper negative control strategy is also something you’d want to consider.
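Both checks can be sketched in a few lines. The 15% coefficient-of-variation threshold and the detection limits here are arbitrary illustrations, not recommendations:

```python
from statistics import mean, stdev

def assay_check(replicates, lower, upper):
    """Flag replicate measurements that are too noisy (high coefficient
    of variation) or that fall outside the instrument's detection limits."""
    cv = stdev(replicates) / mean(replicates)  # noise: relative spread
    out_of_range = [x for x in replicates if not lower <= x <= upper]
    return {"cv": cv, "noisy": cv > 0.15, "out_of_range": out_of_range}

# Invented triplicate absorbance readings against invented limits
result = assay_check([1.0, 1.1, 0.9], lower=0.1, upper=2.0)
```

Runs that trip either flag are candidates for re-measurement (or dilution) before the data feed into a model.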
6. Avoiding the "big bang"—and breaking up your experiment into stages
DOE lets you investigate lots of factors at once—so naturally, you’ll have plenty of factors to choose from. Though you can’t test them all at once. You’ll need to avoid the temptation of creating a “big bang”. In other words, trying to investigate all your factors in depth with 1 massive experiment. This would be impractical, if not impossible.
When thinking about what influences the optimal expression of a target protein, for example, you’ll have to choose between dozens of factors: variations in the genetic payload (plasmid type; coding, promoter, or terminator sequences), the molecular biology techniques used to assemble and transform the payload, the host strain, and the growth conditions, such as temperatures and times.
Most of the possible combinations will have little if any effect on the expression profile. The problem is you don’t know which!
Thankfully, the solution is simple: It’s best to do your experiment in stages. Begin your DOE campaign by investigating a broad set of factors in limited detail; this eliminates dead ends and produces a smaller, more interesting, and more influential set of factors. Later experiments can fill in the missing details.
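Screening a broad set of factors in limited detail is what fractional factorial designs are for. As a sketch, a half-fraction of a four-factor, two-level design runs the full factorial on three factors and derives the fourth from the generator D = A·B·C, covering four factors in eight runs instead of sixteen:

```python
from itertools import product

# Half-fraction 2^(4-1) screening design: full 2^3 factorial on A, B, C
# (levels coded as -1/+1), with the fourth factor set by the generator
# D = A*B*C. Main effects stay estimable; D is aliased with the ABC
# interaction, which is usually an acceptable trade at the screening stage.
base = list(product([-1, 1], repeat=3))
design = [(a, b, c, a * b * c) for a, b, c in base]
```

Each coded column would then be mapped onto real settings (for example, -1 = 25 °C and +1 = 37 °C for a hypothetical temperature factor).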
7. Giving your DOE campaign a sanity check
Before you start, look over your DOE campaign. Make sure that you understand exactly what you’re proposing to do in each stage, and whether it makes biological sense.
Will all your runs be biologically plausible?
When you’re looking at the early stages of your DOE campaign, remember that the aim is to investigate high and low levels of continuous factors. For our protein optimization example, we’d want to focus on things like concentrations of media components, to establish ranges to investigate.
And while each of the highest and lowest levels for your factors may make sense in isolation, the combination may not be possible. For instance, investigating the effect of several carbon sources on bacterial growth could involve a low level or zero for each source individually. Bacteria may, however, thrive on more than one carbon source. But giving bacteria no carbon would obviously prevent growth. Equally, large amounts of different carbon sources could overwhelm the bacteria. So, you may want to set limits for total carbon.
Biologically implausible runs waste time and resources and can compromise the overall results, especially if they occur multiple times. Trying to understand how a combination of levels would influence the system is critical: It will make a huge difference to the success of your runs. No DOE design package or statistician can give you these answers.
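Constraints like a cap on total carbon can be applied by filtering candidate runs before execution. The sources, levels, and limit below are invented for illustration:

```python
from itertools import product

# Candidate levels (g/L) for two hypothetical carbon sources
glucose_levels = [0, 5, 10]
glycerol_levels = [0, 5, 10]

def plausible(glucose, glycerol, total_max=12):
    """A run must supply some carbon, but not so much that it
    overwhelms the culture."""
    total = glucose + glycerol
    return 0 < total <= total_max

# Keep only the biologically plausible combinations
runs = [(g, gl) for g, gl in product(glucose_levels, glycerol_levels)
        if plausible(g, gl)]
```

This drops the zero-carbon run and the carbon-overload corners while keeping the rest of the design space intact.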
Have you also thought about your positive and negative controls?
It's good scientific practice to include positive and negative controls. But DOE designs don't include them automatically, so they're important to think about.

The experimental design contains only the points required to estimate the effects and interactions you are investigating, and DOE assumes that you can easily measure the response for each run.

These design points are all experimental runs. While some of them could happen to function as controls (e.g., the zero-carbon example above), that isn't their purpose. This means you need to make sure that you add the required controls and replicate runs separately.
Can you make iterating easier by making some of your runs identical?
We also advocate, particularly when iterating, for including a few repeated runs from earlier stages to help understand if your system is behaving the same way.
Otherwise, you could end up in a situation where all your runs look suspiciously different from what you'd expect given earlier experiments. And because your runs have little to nothing in common, it can be difficult to identify errors that affect large sets of runs, such as a machine not functioning correctly.
What did we learn from this example? For DOE, the scientist holds the key
If these 7 elements are too much for you to take in all in one go, just remember this: Software and automation, as well as experts in statistics and lab automation, are all valuable allies.
But your greatest ally is your scientific knowledge and instincts: It's up to you to make sure that your experiments ask the right questions in the right way.
Just remember to temper this with open-mindedness: Be critical of what you think you already know. After all, you have nothing to lose but your cognitive bias.
Interested in learning more about DOE? Make sure to check out our other DOE blogs, download our DOE for biologists ebook, or watch our DOE Masterclass webinar series.
Michael "Sid" Sadowski, PhD
Michael Sadowski, aka Sid, is the Director of Scientific Software at Synthace, where he leads the company’s DOE product development. In his 10 years at the company he has consulted on dozens of DOE campaigns, many of which included aspects of QbD.
Experimental Research Design — 6 mistakes you should never make!
Since their school days, students have performed scientific experiments whose results define and prove the laws and theorems of science. These experiments rest on a strong foundation of experimental research designs.
An experimental research design helps researchers execute their research objectives with more clarity and transparency.
In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.
What Is Experimental Research Design?
Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach, using two sets of variables. The first set acts as a constant, against which differences in the second set are measured. Quantitative research is the best-known example of experimental research methods.
Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.
When Can a Researcher Conduct Experimental Research?
A researcher can conduct experimental research in the following situations —
- When time is an important factor in establishing a relationship between the cause and effect.
- When there is an invariable or never-changing behavior between the cause and effect.
- Finally, when the researcher wishes to understand the importance of the cause and effect.
Importance of Experimental Research Design
To publish significant results, choosing a quality research design forms the foundation of the research study. An effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.
By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.
Types of Experimental Research Designs
Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:
1. Pre-experimental Research Design
Researchers use a pre-experimental research design when a group, or multiple groups, is observed after factors of cause and effect have been applied. A pre-experimental design helps researchers understand whether further investigation of the observed groups is necessary.
Pre-experimental research is of three types —
- One-shot Case Study Research Design
- One-group Pretest-posttest Research Design
- Static-group Comparison
2. True Experimental Research Design
A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy three conditions —
- There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
- A variable that can be manipulated by the researcher
- Random assignment of subjects to the groups
This type of experimental research is commonly observed in the physical sciences.
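The random-assignment condition can be sketched in a few lines; this is a minimal illustration assuming a simple 50/50 split of hypothetical subjects, not a standard library routine:

```python
import random

def randomly_assign(subjects, seed=0):
    """Randomly split subjects into control and experimental groups."""
    rng = random.Random(seed)      # seeded for a reproducible illustration
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

control, experimental = randomly_assign([f"subject_{i}" for i in range(20)])
```

Because the split is random rather than chosen by the researcher, pre-existing differences between subjects are spread across both groups on average, which is what lets a true experiment support cause-effect claims.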
3. Quasi-experimental Research Design
The word “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design; the difference between the two lies in assignment to the control group. In this research design, an independent variable is manipulated, but the participants are not randomly assigned to groups. This type of research design is used in field settings where random assignment is either irrelevant or not feasible.
The classification of the research subjects, conditions, or groups determines the type of research design to be used.
Advantages of Experimental Research
Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:
- Researchers have firm control over variables to obtain results.
- The method is not limited to a particular subject; researchers in any field can implement it.
- The results are specific.
- After the results are analyzed, findings from the same dataset can be repurposed for similar research questions.
- Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
- Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.
6 Mistakes to Avoid While Designing Your Research
There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.
1. Invalid Theoretical Framework
Researchers often neglect to check whether their hypothesis can be tested logically. If your research design lacks basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.
2. Inadequate Literature Study
Without a comprehensive research literature review, it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the field, either by adding value to the pertinent literature or by challenging previous findings and assumptions.
3. Insufficient or Incorrect Statistical Analysis
Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to obtain valid and sustainable evidence; incorrect statistical analysis therefore undermines the quality of any quantitative research.
4. Undefined Research Problem
This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that, you must set a framework for developing research questions that address the core problems.
5. Research Limitations
Every study has limitations of some kind. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.
6. Ethical Implications
The most important yet least discussed topic is ethics. Your research design must include ways to minimize risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.
Experimental Research Design Example
In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).
By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
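The logic of this example can be sketched in a few lines. The readings below are simulated stand-ins for real assay data, so only the assignment-and-comparison structure is meaningful:

```python
import random
import statistics

random.seed(42)  # seeded so the illustration is reproducible

# Randomly assign 40 hypothetical plants to the two conditions
plants = [f"plant_{i}" for i in range(40)]
random.shuffle(plants)
sunlight_group, dark_group = plants[:20], plants[20:]

# Simulated biochemical readings (e.g., starch content); real values
# would come from lab assays, not a random number generator.
sunlight_scores = [random.gauss(8.0, 1.0) for _ in sunlight_group]
dark_scores = [random.gauss(3.0, 1.0) for _ in dark_group]

# A formal analysis would apply a significance test (e.g., a t-test);
# here we just report the observed difference in group means.
diff = statistics.mean(sunlight_scores) - statistics.mean(dark_scores)
print(round(diff, 2))
```

Because all other variables are held constant and assignment is random, a clear difference in group means can be attributed to sunlight rather than to the other variables.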
Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it demands substantial resources, time, and money, and it is not easy to conduct without a solid research foundation. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.
Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!
Frequently Asked Questions
Randomization is important in experimental research because it ensures unbiased results. It also strengthens the cause-effect inference for the group of interest.
An experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.
There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi experimental research design.
The differences between an experimental and a quasi-experimental design are: 1. In quasi-experimental research, assignment to the control group is non-random, unlike in true experimental design, where it is random. 2. Experimental research always has a control group, whereas quasi-experimental research may not.
Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.
Frequently asked questions
What is experimental design?
Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:
- A testable hypothesis
- At least one independent variable that can be precisely manipulated
- At least one dependent variable that can be precisely measured
When designing the experiment, you decide:
- How you will manipulate the variable(s)
- How you will control for any potential confounding variables
- How many subjects or samples will be included in the study
- How subjects will be assigned to treatment levels
Experimental design is essential to the internal and external validity of your experiment.
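The design decisions listed above can be captured as a simple record before data collection begins; every field name here is illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class ExperimentDesign:
    """Illustrative record of core design decisions (field names assumed)."""
    hypothesis: str
    independent_vars: list   # what you will manipulate
    dependent_vars: list     # what you will measure
    confound_controls: list  # potential confounds held constant
    sample_size: int         # how many subjects or samples
    assignment: str          # e.g., "random", "matched", "blocked"

design = ExperimentDesign(
    hypothesis="Sunlight exposure increases photosynthetic output",
    independent_vars=["light exposure"],
    dependent_vars=["starch content"],
    confound_controls=["water", "soil", "nutrients"],
    sample_size=40,
    assignment="random",
)
```

Writing the plan down in this structured form makes it easy to spot a missing piece (no control for a known confound, no assignment rule) before the experiment starts.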
Frequently asked questions: Methodology
Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.
Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group. As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased.
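A quick way to check for differential attrition is to compare dropout rates between study arms; the enrollment numbers below are hypothetical:

```python
def attrition_rate(enrolled, completed):
    """Fraction of participants who dropped out of one study arm."""
    return (enrolled - completed) / enrolled

# Hypothetical trial numbers
intervention_dropout = attrition_rate(enrolled=100, completed=70)  # 0.30
control_dropout = attrition_rate(enrolled=100, completed=92)       # 0.08

# A large gap between arms signals differential attrition and possible bias
differential = round(abs(intervention_dropout - control_dropout), 2)
print(differential)  # 0.22
```

There is no universal threshold for how large a gap is "too large"; the point is that a sizable, systematic difference between arms should prompt a closer look at who dropped out and why.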
Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.
Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.
Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.
A cycle of inquiry is another name for action research. It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”
To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.
Criterion validity and construct validity are both types of measurement validity. In other words, they both show you how accurately a method measures something.
While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.
Construct validity is often considered the overarching type of measurement validity. You need to have face validity, content validity, and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity. Together, they help you evaluate whether a test measures the concept it was designed to measure.
- Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
- Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity.
You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.
Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.
In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.
The higher the content validity, the more accurate the measurement of the construct.
If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.
Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.
When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.
For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).
On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would evaluate the content validity by comparing the test to the learning objectives.
Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), the initial individuals selected to be studied are the ones who recruit new participants.
Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.
Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.

This means that you cannot use inferential statistics and make generalizations (often the goal of quantitative research). As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research.
Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.
Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias.
Snowball sampling is best used in the following cases:
- If there is no sampling frame available (e.g., people with a rare disease)
- If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
- If the research focuses on a sensitive topic (e.g., extramarital affairs)
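The referral mechanism described above can be sketched as a small simulation; the network, the number of waves, and the per-person referral limit are all invented for illustration:

```python
import random

def snowball_sample(referrals, seeds, waves=2, per_person=2, rng_seed=0):
    """Recruit via referrals: each participant names up to `per_person` contacts."""
    rng = random.Random(rng_seed)
    recruited = list(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            # Only consider contacts not already in the sample
            contacts = [c for c in referrals.get(person, []) if c not in recruited]
            for contact in rng.sample(contacts, min(per_person, len(contacts))):
                recruited.append(contact)
                next_frontier.append(contact)
        frontier = next_frontier
    return recruited

# Hypothetical referral network among hard-to-reach participants
network = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
sample = snowball_sample(network, seeds=["A"])
print(sorted(sample))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

The simulation makes the bias visible: everyone in the sample is reachable from the seed participant, so people outside that referral network have zero chance of inclusion.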
The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.
Reproducibility and replicability are related terms.
- Reproducing research entails reanalyzing the existing data in the same manner.
- Replicating (or repeating) the research entails reconducting the entire analysis, including the collection of new data.
- A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
- A successful replication shows that the reliability of the results is high.
Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.
The main difference is that in stratified sampling, you draw a random sample from each subgroup (probability sampling). In quota sampling, you select a predetermined number or proportion of units in a non-random manner (non-probability sampling).
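A minimal sketch of the stratified case, drawing the same fraction at random from each stratum (the strata and population are hypothetical):

```python
import random

def stratified_sample(strata, fraction, seed=0):
    """Draw a random sample of the same fraction from each stratum."""
    rng = random.Random(seed)  # seeded for a reproducible illustration
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))  # random within the stratum
    return sample

# Hypothetical student population split into two strata
population = {
    "undergrad": [f"u{i}" for i in range(80)],
    "postgrad":  [f"p{i}" for i in range(20)],
}
sample = stratified_sample(population, fraction=0.1)
print(len(sample))  # 8 undergrads + 2 postgrads = 10
```

In quota sampling the subgroup counts would be the same, but the units within each subgroup would be recruited by convenience rather than by `rng.sample`.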
Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.
A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.
The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.
Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.
On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.
Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.
However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.
In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
A sampling frame is a list of every member of the entire population. It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous, so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous, as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units from all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population.
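The cluster-sampling side of this contrast can be sketched directly: whole clusters are drawn at random, and every unit inside a chosen cluster is kept (the school names and rosters are hypothetical):

```python
import random

def cluster_sample(clusters, n_clusters, seed=0):
    """Randomly pick whole clusters, then include every unit in each."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)  # random clusters
    return [unit for name in chosen for unit in clusters[name]]

# Hypothetical clusters: each school is sampled (or not) as a whole
schools = {
    "school_a": ["a1", "a2", "a3"],
    "school_b": ["b1", "b2"],
    "school_c": ["c1", "c2", "c3", "c4"],
}
sample = cluster_sample(schools, n_clusters=2)
```

A stratified design would instead take a random subset of students from every school; here the randomness applies to the schools themselves.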
A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.
The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment.

An observational study is a great choice if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment, an observational study may be a good choice. In an observational study, there is no interference with or manipulation of the research subjects, and no control or treatment groups.
It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.
While experts have a deep understanding of research methods, the people you’re studying can provide you with valuable insights you may have missed otherwise.
Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.
Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.
Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.
Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.
You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.
Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity, content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity, alongside face validity, content validity, and criterion validity.
There are two subtypes of construct validity.
- Convergent validity : The extent to which your measure corresponds to measures of related constructs
- Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs
Naturalistic observation is a valuable tool because of its flexibility, external validity, and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control, ethical considerations, and potential for bias from observers and subjects.
Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.
You can think of naturalistic observation as “people watching” with a purpose.
A dependent variable is what changes as a result of the independent variable manipulation in experiments. It’s what you’re interested in measuring, and it “depends” on your independent variable.
In statistics, dependent variables are also called:
- Response variables (they respond to a change in another variable)
- Outcome variables (they represent the outcome you want to measure)
- Left-hand-side variables (they appear on the left-hand side of a regression equation)
An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.
Independent variables are also called:
- Explanatory variables (they explain an event or outcome)
- Predictor variables (they can be used to predict the value of a dependent variable)
- Right-hand-side variables (they appear on the right-hand side of a regression equation).
As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions, which can bias your responses.
Overall, your focus group questions should be:
- Open-ended and flexible
- Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
- Unambiguous, getting straight to the point while still stimulating discussion
- Unbiased and neutral
A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:
- You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
- You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
- Your research question depends on strong parity between participants, with environmental conditions held constant.
More flexible interview options include semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.
Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.
This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.
The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.
There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.
A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:
- You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
- Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.
An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.
Unstructured interviews are best used when:
- You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
- Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
- You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
- Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.
The four most common types of interviews are:
- Structured interviews: The questions are predetermined in both topic and order.
- Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
- Unstructured interviews: None of the questions are predetermined.
- Focus group interviews: The questions are presented to a group instead of one individual.
Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research.
In research, you might have come across something called the hypothetico-deductive method. It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.
Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning, where you start with specific observations and form general conclusions.
Deductive reasoning is also called deductive logic.
There are many different types of inductive reasoning that people use formally or informally.
Here are a few common types:
- Inductive generalization: You use observations about a sample to come to a conclusion about the population it came from.
- Statistical generalization: You use specific numbers about samples to make statements about populations.
- Causal reasoning: You make cause-and-effect links between different things.
- Sign reasoning: You make a conclusion about a correlational relationship between different things.
- Analogical reasoning: You make a conclusion about something based on its similarities to something else.
Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.
Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.
In inductive research, you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.
Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.
Inductive reasoning is also called inductive logic or bottom-up reasoning.
A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.
A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).
Triangulation can help:
- Reduce research bias that comes from using a single method, theory, or investigator
- Enhance validity by approaching the same topic with different tools
- Establish credibility by giving you a complete picture of the research problem
But triangulation can also pose problems:
- It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
- Your results may be inconsistent or even contradictory.
There are four main types of triangulation:
- Data triangulation: Using data from different times, spaces, and people
- Investigator triangulation: Involving multiple researchers in collecting or analyzing data
- Theory triangulation: Using varying theoretical perspectives in your research
- Methodological triangulation: Using different methodologies to approach the same topic
Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.
However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.
Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.
Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.
Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.
In general, the peer review process follows the following steps:
- First, the author submits the manuscript to the editor.
- The editor then decides either to:
  - Reject the manuscript and send it back to the author, or
  - Send it onward to the selected peer reviewer(s)
- Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
- Lastly, the edited manuscript is sent back to the author. The author incorporates the suggested edits and resubmits the manuscript to the editor for publication.
Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.
You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.
Exploratory research is a methodological approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or when the data collection process is challenging in some way.
Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.
Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.
Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.
Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.
Dirty data can come from any part of the research process, including poor research design, inappropriate measurement materials, or flawed data entry.
Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.
For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.
After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
Every dataset requires different techniques to clean dirty data, but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.
These data points might be missing, outlying, duplicated, incorrectly formatted, or irrelevant. You’ll start by screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors, but cleaning your data helps you minimize or resolve these.
Without data cleaning, you could end up with a Type I or II error in your conclusions. These erroneous conclusions can have important practical consequences, as they may lead to misplaced investments or missed opportunities.
Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.
In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.
Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.
These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.
Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.
You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.
You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.
Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.
Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.
These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.
In multistage sampling, you can use probability or non-probability sampling methods.
For a probability sample, you have to conduct probability sampling at every stage.
You can mix it up by using simple random sampling, systematic sampling, or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.
Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.
But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples.
These are four of the most common mixed methods designs:
- Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions.
- Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
- Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
- Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.
Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.
Triangulation is mainly used in qualitative research, but it’s also commonly applied in quantitative research. Mixed methods research always uses triangulation.
In multistage sampling, or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.
This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
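The hierarchical idea can be sketched in a few lines. This is a minimal illustration in plain Python, not a production sampling procedure; the states, cities, and households are invented examples:

```python
import random

random.seed(11)  # fixed seed so the sketch is reproducible

# Hypothetical hierarchy: state -> city -> households
states = {
    "State A": {"City A1": ["hh1", "hh2", "hh3"], "City A2": ["hh4", "hh5"]},
    "State B": {"City B1": ["hh6", "hh7"], "City B2": ["hh8", "hh9", "hh10"]},
}

# Stage 1: sample states; stage 2: sample cities; stage 3: sample households
sample = []
for state in random.sample(list(states), 1):
    for city in random.sample(list(states[state]), 1):
        sample.extend(random.sample(states[state][city], 2))

print(sample)  # two households from one city in one randomly chosen state
```

Because you only ever need a list of states, then cities within the chosen states, then households within the chosen cities, no complete sampling frame of all households is required.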
No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.
To find the slope of the line, you’ll need to perform a regression analysis.
Correlation coefficients always range between -1 and 1.
The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.
The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.
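The slope point can be checked numerically. Below is a minimal sketch in plain Python (the `pearson_r` helper and the datasets are invented for illustration): two datasets that sit perfectly on lines with very different slopes both give a correlation of 1.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
steep = [2 * v for v in x]      # line with slope 2
shallow = [0.1 * v for v in x]  # line with slope 0.1

# Both datasets fit their lines perfectly, so r is 1 in both cases,
# even though the slopes differ by a factor of 20.
print(round(pearson_r(x, steep), 10))
print(round(pearson_r(x, shallow), 10))
```

A negative slope would flip the sign of r to -1 while its absolute value, the strength of the correlation, stayed the same.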
These are the assumptions your data must meet if you want to use Pearson’s r:
- Both variables are on an interval or ratio level of measurement
- Data from both variables follow normal distributions
- Your data have no outliers
- Your data are from a random or representative sample
- You expect a linear relationship between the two variables
Quantitative research designs can be divided into two main categories:
- Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
- Experimental and quasi-experimental designs are used to test causal relationships.
Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.
A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.
The priorities of a research design can vary depending on the field, but you usually have to specify:
- Your research questions and/or hypotheses
- Your overall approach (e.g., qualitative or quantitative)
- The type of design you’re using (e.g., a survey, experiment, or case study)
- Your sampling methods or criteria for selecting subjects
- Your data collection methods (e.g., questionnaires, observations)
- Your data collection procedures (e.g., operationalization, timing, and data management)
- Your data analysis methods (e.g., statistical tests or thematic analysis)
A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.
Questionnaires can be self-administered or researcher-administered.
Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.
Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.
You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.
Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.
Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.
A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.
The third variable and directionality problems are two main reasons why correlation isn’t causation.
The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.
The directionality problem occurs when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.
Correlation describes an association between variables: when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.
Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.
While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B, but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy.
Controlled experiments establish causality, whereas correlational studies only show associations between variables.
- In an experimental design, you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
- In a correlational design, you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.
In general, correlational research is high in external validity while experimental research is high in internal validity.
A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.
A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.
Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions. The Pearson product-moment correlation coefficient (Pearson’s r) is commonly used to assess a linear relationship between two quantitative variables.
A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research.
A correlation reflects the strength and/or direction of the association between two or more variables.
- A positive correlation means that both variables change in the same direction.
- A negative correlation means that the variables change in opposite directions.
- A zero correlation means there’s no relationship between the variables.
Random error is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables.
You can avoid systematic error through careful design of your sampling, data collection, and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment; and apply masking (blinding) where possible.
Systematic error is generally a bigger problem in research.
With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample, the errors in different directions will cancel each other out.
Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions (Type I and II errors) about the relationship between the variables you’re studying.
Random and systematic error are two types of measurement error.
Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).
Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
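The difference between the two error types is easy to simulate. This is a minimal sketch in plain Python (the true weight of 70 kg, the noise level, and the 2 kg calibration bias are all invented for illustration): random noise averages out across many readings, while a miscalibrated scale’s offset does not.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
TRUE_WEIGHT = 70.0  # hypothetical true value, in kg

def measure(n, bias=0.0, noise=0.5):
    """Simulate n scale readings with random noise and an optional systematic bias."""
    return [TRUE_WEIGHT + bias + random.gauss(0, noise) for _ in range(n)]

random_only = measure(10_000)              # random error only
miscalibrated = measure(10_000, bias=2.0)  # scale consistently reads 2 kg too high

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(random_only), 1))    # close to 70.0: random errors cancel out
print(round(mean(miscalibrated), 1))  # close to 72.0: the bias survives any sample size
```

No matter how many readings you take from the miscalibrated scale, the average stays about 2 kg off; only recalibrating the instrument fixes it.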
On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.
- If you have quantitative variables, use a scatterplot or a line graph.
- If your response variable is categorical, use a scatterplot or a line graph.
- If your explanatory variable is categorical, use a bar graph.
The term “explanatory variable” is sometimes preferred over “independent variable” because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.
Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.
The difference between explanatory and response variables is simple:
- An explanatory variable is the expected cause, and it explains the results.
- A response variable is the expected effect, and it responds to other variables.
In a controlled experiment, all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:
- A control group that receives a standard treatment, a fake treatment, or no treatment.
- Random assignment of participants to ensure the groups are equivalent.
Depending on your study topic, there are various other methods of controlling variables.
There are four main types of extraneous variables:
- Demand characteristics: environmental cues that encourage participants to conform to researchers’ expectations.
- Experimenter effects: unintentional actions by researchers that influence study outcomes.
- Situational variables: environmental variables that alter participants’ behaviors.
- Participant variables: any characteristic or aspect of a participant’s background that could affect study results.
An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.
A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.
In a factorial design, multiple independent variables are tested.
If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
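Crossing every level of one variable with every level of the other is just a Cartesian product. A minimal sketch in plain Python (the two variables, caffeine and sleep, and their levels are invented examples): a 2 × 3 factorial design yields six conditions.

```python
from itertools import product

# Hypothetical independent variables, each with its own levels
caffeine = ["no caffeine", "caffeine"]     # 2 levels
sleep = ["4 hours", "6 hours", "8 hours"]  # 3 levels

# Each level of one variable is combined with each level of the other
conditions = list(product(caffeine, sleep))

print(len(conditions))  # 2 x 3 = 6 conditions
for condition in conditions:
    print(condition)
```

Adding a third independent variable with two levels would double the count again, to 12 conditions, which is why fully crossed designs grow quickly.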
Within-subjects designs have many potential threats to internal validity, but they are also very statistically powerful.
Advantages:
- Only requires small samples
- Statistically powerful
- Removes the effects of individual differences on the outcomes
Disadvantages:
- Internal validity threats reduce the likelihood of establishing a direct relationship between variables
- Time-related effects, such as growth, can influence the outcomes
- Carryover effects mean that the specific order of different treatments affects the outcomes
While a between-subjects design has fewer threats to internal validity, it also requires more participants for high statistical power than a within-subjects design.
Advantages:
- Prevents carryover effects of learning and fatigue.
- Shorter study duration.
Disadvantages:
- Needs larger samples for high power.
- Uses more resources to recruit participants, administer sessions, cover costs, etc.
- Individual differences may be an alternative explanation for results.
Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.
In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.
In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.
The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.
Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.
In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.
To implement random assignment, assign a unique number to every member of your study’s sample.
Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
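The lottery method can be sketched in a few lines of plain Python (the six participant labels are a made-up sample): shuffle the list, then split it in half, so every participant has an equal chance of ending up in either group.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]  # hypothetical sample

# Shuffle a copy, then split it down the middle
shuffled = participants[:]
random.shuffle(shuffled)
control = shuffled[: len(shuffled) // 2]
experimental = shuffled[len(shuffled) // 2 :]

print("Control:", control)
print("Experimental:", experimental)
```

Every participant lands in exactly one group, and the groups are equal in size, which is the usual goal of simple random assignment.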
Random selection, or random sampling, is a way of selecting members of a population for your study’s sample.
In contrast, random assignment is a way of sorting the sample into control and experimental groups.
Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.
In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.
Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs. That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity.
If you don’t control relevant extraneous variables, they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable.
A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.
Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.
Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.
If something is a mediating variable:
- It’s caused by the independent variable.
- It influences the dependent variable.
- When it’s statistically controlled for, the relationship between the independent and dependent variables weakens compared to when it isn’t considered.
A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.
A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.
There are three key steps in systematic sampling:
- Define and list your population, ensuring that it is not ordered in a cyclical or periodic way.
- Decide on your sample size and calculate your interval, k, by dividing your population size by your target sample size.
- Choose every kth member of the population as your sample.
Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling.
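The three steps can be sketched in plain Python (the population of 100 people and the target sample of 20 are invented numbers):

```python
# Step 1: define and list the population (hypothetical example)
population = [f"person_{i}" for i in range(1, 101)]  # 100 people

# Step 2: compute the interval k = population size / target sample size
sample_size = 20
k = len(population) // sample_size  # 100 / 20 = 5

# Step 3: choose every kth member, starting from the first
sample = population[::k]

print(k)            # interval of 5
print(len(sample))  # 20 members selected
```

In practice, researchers often pick a random starting point within the first interval (e.g., `random.randrange(k)`) rather than always starting from the first member, which avoids a predictable pattern.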
Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the number of subgroups for each characteristic to get the total number of groups.
For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.
You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.
Using stratified sampling will allow you to obtain more precise (lower variance) statistical estimates of whatever you are trying to measure.
For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.
In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).
Once divided, each subgroup is randomly sampled using another probability sampling method.
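A minimal sketch of proportionate stratified sampling in Python, using simple random sampling within each stratum. The population, strata labels, and 10% sampling fraction are hypothetical:

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, fraction):
    """Divide the population into strata, then randomly sample
    the same fraction from each stratum."""
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, round(len(members) * fraction)))
    return sample

# Example: stratify by educational attainment, sampling 10% of each stratum
population = [{"id": i, "education": level}
              for i, level in enumerate(["HS"] * 100 + ["BA"] * 60 + ["MA"] * 40)]
sample = stratified_sample(population, lambda u: u["education"], 0.10)
```

Because each stratum is sampled at the same rate, the sample preserves the population's proportions of each subgroup.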
Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.
However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.
There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.
- In single-stage sampling , you collect data from every unit within the selected clusters.
- In double-stage sampling , you select a random sample of units from within the clusters.
- In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.
The clusters should ideally each be mini-representations of the population as a whole.
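The single-stage and double-stage variants can be sketched as follows; the schools-as-clusters setup and all sizes are made up for illustration:

```python
import random

def single_stage_cluster_sample(clusters, n_clusters):
    """Randomly select clusters, then collect every unit in each chosen cluster."""
    chosen = random.sample(sorted(clusters), n_clusters)
    return [unit for name in chosen for unit in clusters[name]]

def double_stage_cluster_sample(clusters, n_clusters, units_per_cluster):
    """Randomly select clusters, then randomly sample units within each one."""
    chosen = random.sample(sorted(clusters), n_clusters)
    return [unit for name in chosen
            for unit in random.sample(clusters[name], units_per_cluster)]

# Example: 10 hypothetical schools, each a cluster of 30 students
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(30)] for i in range(10)}
whole_clusters = single_stage_cluster_sample(schools, 3)    # all 90 students in 3 schools
subsampled = double_stage_cluster_sample(schools, 3, 5)     # 5 students from each of 3 schools
```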
If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity. However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.
If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.
The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.
Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
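Given a complete list of the population, simple random sampling reduces to a single draw without replacement. A sketch with hypothetical ID numbers:

```python
import random

# A population frame: a list of every member (here, hypothetical ID numbers)
population = list(range(1, 10001))

# Simple random sampling without replacement: every member has
# an equal chance of being selected
sample = random.sample(population, 350)
```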
Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .
Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.
A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.
Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .
If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.
- In a single-blind study , only the participants are blinded.
- In a double-blind study , both participants and experimenters are blinded.
- In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.
Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .
A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.
However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).
For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.
An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.
Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.
Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
The type of data determines what statistical tests you should use to analyze your data.
A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.
To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
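Combining item responses into an overall scale score can be sketched as below. The `likert_score` function, the 5-item scale, and the reverse-worded item are all hypothetical; reverse-scoring conventions vary by instrument:

```python
def likert_score(responses, reverse_items=(), points=5):
    """Combine item responses into one scale score.
    Reverse-worded items are flipped before summing."""
    return sum((points + 1 - r) if i in reverse_items else r
               for i, r in enumerate(responses))

# One participant's answers to a hypothetical 5-item scale
# (1 = strongly disagree ... 5 = strongly agree); item 3 is reverse-worded
answers = [4, 5, 2, 4, 3]
score = likert_score(answers, reverse_items={2})  # 4 + 5 + 4 + 4 + 3 = 20
```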
In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).
The process of turning abstract concepts into measurable variables and indicators is called operationalization .
There are various approaches to qualitative data analysis , but they all share five steps in common:
- Prepare and organize your data.
- Review and explore your data.
- Develop a data coding system.
- Assign codes to the data.
- Identify recurring themes.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
There are five common approaches to qualitative research :
- Grounded theory involves collecting data in order to develop new theories.
- Ethnography involves immersing yourself in a group or organization to understand its culture.
- Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
- Phenomenological research involves investigating phenomena through people’s lived experiences.
- Action research links theory and practice in several cycles to drive innovative changes.
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
Operationalization means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.
When conducting research, collecting original data has significant advantages:
- You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
- You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )
However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.
In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.
In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .
In statistical control , you include potential confounders as variables in your regression .
In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
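Of these methods, matching is the easiest to illustrate concretely. Below is a minimal sketch of exact matching: every name, field, and subject in it is hypothetical, and real studies often use more sophisticated approaches (e.g., propensity score matching):

```python
def match(treatment_group, comparison_pool, confounders):
    """Pair each treated subject with an unused comparison subject that has
    identical values on every potential confounding variable."""
    pairs, pool = [], list(comparison_pool)
    for treated in treatment_group:
        for candidate in pool:
            if all(treated[c] == candidate[c] for c in confounders):
                pairs.append((treated, candidate))
                pool.remove(candidate)  # each comparison subject is used once
                break
    return pairs

treated = [{"id": 1, "age_group": "30s", "sex": "F"},
           {"id": 2, "age_group": "40s", "sex": "M"}]
pool = [{"id": 3, "age_group": "40s", "sex": "M"},
        {"id": 4, "age_group": "30s", "sex": "F"}]
pairs = match(treated, pool, confounders=("age_group", "sex"))
```

After matching, each pair differs (ideally) only in the independent variable, so the confounders cannot explain differences in outcomes within pairs.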
A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.
Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.
To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.
Yes, but including more than one of either type requires multiple research questions .
For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.
You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .
To ensure the internal validity of an experiment , you should only change one independent variable at a time.
No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!
You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .
- The type of soda – diet or regular – is the independent variable .
- The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.
Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.
In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.
Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .
Probability sampling means that every member of the target population has a known chance of being included in the sample.
Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .
Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .
Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.
Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.
A sampling error is the difference between a population parameter and a sample statistic .
A statistic refers to measures about the sample , while a parameter refers to measures about the population .
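The distinction can be made concrete with a small simulation, where the population, sample size, and height distribution are entirely made up:

```python
import random
import statistics

# A simulated population of 100,000 heights in cm
population = [random.gauss(170, 10) for _ in range(100_000)]
parameter = statistics.mean(population)     # a measure of the population

sample = random.sample(population, 100)
statistic = statistics.mean(sample)         # a measure of the sample

sampling_error = statistic - parameter      # difference between the two
```

Drawing a larger sample shrinks the typical sampling error, but any single sample's statistic will still differ somewhat from the population parameter.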
Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.
Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.
There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.
The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).
The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.
Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .
Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.
Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.
Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.
The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .
Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.
Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.
| Longitudinal study | Cross-sectional study |
|---|---|
| Repeated observations | Observations at a single point in time |
| Observes the same sample multiple times | Observes different samples (a “cross-section”) in the population |
| Follows changes in participants over time | Provides a snapshot of society at a given point |
There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .
Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
The research methods you use depend on the type of data you need to answer your research question .
- If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
- If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
- If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.
A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.
A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.
In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.
Discrete and continuous variables are two types of quantitative variables :
- Discrete variables represent counts (e.g. the number of objects in a collection).
- Continuous variables represent measurable amounts (e.g. water volume or weight).
Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).
Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).
You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .
You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .
In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:
- The independent variable is the amount of nutrients added to the crop field.
- The dependent variable is the biomass of the crops at harvest time.
Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .
Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables.
External validity is the extent to which your results can be generalized to other contexts.
The validity of your experiment depends on your experimental design .
Reliability and validity are both about how well a method measures something:
- Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
- Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.
Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).
In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .
In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.
8.1 Experimental design: What is it and when should it be used?
Learning objectives.
- Define experiment
- Identify the core features of true experimental designs
- Describe the difference between an experimental group and a control group
- Identify and describe the various types of true experimental designs
Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.
Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.
Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:
- random assignment of participants into experimental and control groups
- a “treatment” (or intervention) provided to the experimental group
- measurement of the effects of the treatment in a post-test administered to both groups
Some true experiments are more complex. Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.
Experimental and control groups
In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group , also known as the treatment group) and another that does not receive the intervention (the control group ). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.
Treatment or intervention
In an experiment, the independent variable is receipt of the intervention being tested—for example, a therapeutic technique, prevention program, or access to some service or support. Less commonly in social work research, social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.
In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.
The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects her intervention to decrease the number of binge eating episodes reported by participants. Thus, she must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test . In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.
Types of experimental design
Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.
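The chronological steps above can be sketched as a small simulation. Every number here is hypothetical, including the assumed 8-point treatment effect; the point is only to show the order of operations in a classical design:

```python
import random
import statistics

# Step 1: random assignment of a sample of 40 participants
participants = list(range(40))
random.shuffle(participants)
experimental, control = participants[:20], participants[20:]

# Step 2: pretest both groups on the dependent variable
pre_e = {p: random.gauss(50, 5) for p in experimental}
pre_c = {p: random.gauss(50, 5) for p in control}

# Step 3: intervention for the experimental group only; we assume
# (hypothetically) it raises scores by about 8 points
post_e = {p: s + random.gauss(8, 2) for p, s in pre_e.items()}
post_c = {p: s + random.gauss(0, 2) for p, s in pre_c.items()}

# Step 4: post-test both groups and compare the change in each
effect = (statistics.mean(post_e.values()) - statistics.mean(pre_e.values())) - \
         (statistics.mean(post_c.values()) - statistics.mean(pre_c.values()))
```

Comparing the change in the experimental group against the change in the control group (rather than post-test scores alone) is what lets the design attribute the difference to the intervention.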
An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.
In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963). The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects , in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.
Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design , the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.
| | Pretest | Intervention | Posttest |
|---|---|---|---|
| Group 1 | X | X | X |
| Group 2 | X | | X |
| Group 3 | | X | X |
| Group 4 | | | X |
Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.
Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs–which we will discuss in the next section–can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.
Experimental design in macro-level research
You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to implement decriminalization of recreational marijuana and some states not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. In that study, the wait list for Medicaid in Oregon was so long that state officials conducted a lottery to determine who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers used the lottery as a natural experiment that included random assignment. People selected to receive Medicaid were the experimental group, and those who remained on the wait list were the control group. There are some practical complications with macro-level experiments, just as with other experiments. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.
Key Takeaways
- True experimental designs require random assignment.
- Control groups do not receive an intervention, and experimental groups receive an intervention.
- The basic components of a true experiment include a pretest, posttest, control group, and experimental group.
- Testing effects may cause researchers to use variations on the classic experimental design.
- Classic experimental design: uses random assignment, an experimental and a control group, and pre- and posttesting
- Control group: the group in an experiment that does not receive the intervention
- Experiment: a method of data collection designed to test hypotheses under controlled conditions
- Experimental group: the group in an experiment that receives the intervention
- Posttest: a measurement taken after the intervention
- Posttest-only control group design: a type of experimental design that uses random assignment and an experimental and control group, but does not use a pretest
- Pretest: a measurement taken prior to the intervention
- Random assignment: using a random process to assign people into experimental and control groups
- Solomon four-group design: uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
- Testing effects: when a participant's scores on a measure change because they have already been exposed to it
- True experiments: a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups
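The classic design in these definitions can be sketched in a few lines of code. Below is a minimal, illustrative Python simulation (all scores, effect sizes, and group sizes are hypothetical): participants are randomly assigned to two groups, both groups are pretested and posttested, and only the experimental group receives the intervention.

```python
import random
import statistics

def classic_experiment(pretest_scores, intervention_effect, seed=42):
    """Sketch of a classic experimental design: random assignment,
    then pretest/posttest for an experimental and a control group."""
    rng = random.Random(seed)
    shuffled = pretest_scores[:]
    rng.shuffle(shuffled)                      # random assignment
    half = len(shuffled) // 2
    experimental, control = shuffled[:half], shuffled[half:]

    def mean_gain(group, receives_intervention):
        gains = []
        for pretest in group:
            # posttest = pretest + measurement noise (+ effect if treated)
            posttest = pretest + rng.gauss(0, 1)
            if receives_intervention:
                posttest += intervention_effect
            gains.append(posttest - pretest)   # gain score
        return statistics.mean(gains)

    return mean_gain(experimental, True), mean_gain(control, False)

# Hypothetical pretest scores for 20 participants
scores = [50 + i for i in range(20)]
exp_gain, ctrl_gain = classic_experiment(scores, intervention_effect=5.0)
```

Because assignment is random, any systematic difference between the two mean gain scores can be attributed to the intervention rather than to pre-existing group differences.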
Image attributions
exam scientific experiment by mohamed_hassan CC-0
Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Share This Book
How to Craft Effective Product Experiments
Experimentation is a powerful tool in product development. When done right, it allows teams to test assumptions, validate ideas, and gather crucial data before fully committing to a feature or change. But what does it take to design an effective experiment? In this blog post, we’ll explore the key steps to crafting product experiments and delve into various techniques you can use to ensure your experiments lead to actionable insights.
Why Crafting Experiments Matters
At the heart of experimentation lies learning. By running structured tests, you can make data-driven decisions and reduce uncertainty in product development. Experiments are particularly useful when:
- You want to validate a new feature or concept.
- You’re unsure how users will respond to changes.
- You’re exploring ways to optimize or enhance an existing product.
- You have been unable to make a decision for a long time.
- You cannot reach agreement with stakeholders.
However, simply running experiments without careful design can lead to flawed results, wasted resources, and incorrect conclusions. That’s why crafting experiments thoughtfully is essential.
Key Steps in Crafting an Effective Experiment
1. Define Your Objective
Before jumping into experimentation, it’s critical to define what you’re trying to learn or achieve. Do you want to validate a new feature idea? Or maybe understand user preferences for a specific function? Being clear about the objective helps ensure that the experiment aligns with your product goals and prevents scope creep. At this stage, it’s also essential to decide if running an experiment is necessary. In some cases, it might be more cost-effective to build and roll out a feature directly rather than run a separate experiment. Experiments cost money too, so it’s important to weigh the cost-benefit carefully.
Example Objective: “We want to increase the number of registered users.”
2. Formulate a Hypothesis
A well-defined hypothesis gives direction to your experiment. A hypothesis should be specific, measurable, and grounded in prior knowledge. It outlines what you expect to happen and serves as a benchmark for analyzing the outcome.
Example Hypothesis: “Changing the button text to ‘Get Started’ will lead to a 15% increase in sign-ups.”
3. Choose the Right Experimentation Technique
Choosing the right technique depends on the kind of insights you’re after. Some techniques are ideal for validating early ideas with minimal investment, while others provide more in-depth user data but require more resources. We’ll explore these techniques below.
4. Identify Metrics and Success Criteria
Experiments need measurable outcomes. Identify the key metrics that will help you determine if the experiment is successful. Your success criteria should be tied to your hypothesis and objective. You need to know how to judge the results objectively without bias before you start.
Example Metrics: Sign-up conversion rate, click-through rate.
5. Design the Experiment
This step involves deciding on the structure and scope of your experiment. You’ll need to determine:
- The sample size (how many users will be part of the experiment).
- The duration (how long you’ll run the experiment).
- The control group (if applicable).
- The variables you’ll test.
For example, if you’re running an A/B test, your control group will see the existing version, while the experiment group will see the new variation.
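A common design question at this step is how large each group needs to be. As a rough sketch using the standard normal-approximation formula for comparing two proportions (the baseline rate and target lift below are hypothetical, as are the default z-values for a two-sided 5% significance level and 80% power):

```python
import math

def sample_size_per_arm(p_base, lift, alpha_z=1.96, power_z=0.84):
    """Approximate per-arm sample size to detect an absolute `lift`
    over a baseline conversion rate `p_base` (normal approximation)."""
    p_new = p_base + lift
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((alpha_z + power_z) ** 2 * variance / lift ** 2)

# e.g. a 10% baseline sign-up rate, hoping for +1.5 percentage points
# (roughly the 15% relative increase from the hypothesis example)
n = sample_size_per_arm(0.10, 0.015)
```

Note how quickly the required sample grows as the expected lift shrinks; this is often the deciding factor in whether an experiment is worth running at all.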
6. Isolate Variables
It’s essential to control as many variables as possible, so the changes you make are the only factors influencing the results. This can involve randomizing user groups, ensuring representative samples, or eliminating external factors that could skew the data.
7. Run the Experiment
Launch the experiment, monitor it carefully, and make sure that your system is tracking the data correctly. It’s important not to intervene in the middle unless absolutely necessary, as that could introduce bias.
8. Analyze the Results
After the experiment, analyze the results against your success criteria. Look at whether your hypothesis was supported or disproved. Document the findings and evaluate whether further iterations are needed.
9. Learn and Iterate
One experiment often leads to the next. Even if the results weren't what you expected, they still provide valuable insights. Maybe you noticed something unexpected or uncovered a new insight. Use these findings to craft new experiments, form a new hypothesis, or refine the product based on what you've learned. The result of a successful experiment can also be a green light for developing an MVP version of a product or feature. Remember, experiments cost resources and can lengthen the lead time to actually delivering value.
Different Techniques for Product Experimentation
Now that you understand the general process of crafting an experiment, let’s look at the various techniques you can use depending on your product stage, budget, and the type of insights you seek.
1. A/B Testing (Split Testing)
What it is: A/B testing is a simple yet powerful technique where two versions of a product or feature are tested against each other to see which performs better.
- Best For: Optimizing existing features, testing UI changes, or improving conversion rates.
- Example: Testing two different headlines on a landing page to see which one drives more clicks.
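To judge whether the winning variant's lift is real rather than noise, a two-proportion z-test is a common choice for analyzing A/B results. The sketch below uses hypothetical conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for comparing two conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: variant B ("Get Started" button) vs. control A
z = two_proportion_z(conv_a=480, n_a=5000, conv_b=560, n_b=5000)
significant = abs(z) > 1.96   # two-sided test at the 5% level
```

Deciding the significance threshold before the experiment starts (step 4 above) is what keeps the analysis free of hindsight bias.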
2. Multivariate Testing
What it is: Multivariate testing involves testing multiple variables at once to see how different combinations affect the outcome. It’s more complex than A/B testing but can provide deeper insights.
- Best For: Complex feature optimizations where multiple elements are changing at once (e.g., different combinations of images, headlines, and CTAs).
- Example: Testing various combinations of product images and descriptions to see which combination leads to the most purchases.
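Enumerating the test cells of a full-factorial multivariate test is straightforward; this sketch, with hypothetical variant names, shows how quickly combinations multiply (2 × 2 × 3 = 12 cells), which is why multivariate tests need much more traffic than simple A/B tests:

```python
from itertools import product

# Hypothetical test dimensions for a product page
variants = {
    "image":    ["lifestyle", "studio"],
    "headline": ["benefit-led", "feature-led"],
    "cta":      ["Buy now", "Learn more", "Add to cart"],
}

# Full-factorial design: every combination becomes one test cell
cells = [dict(zip(variants, combo)) for combo in product(*variants.values())]
```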
3. Usability Testing
What it is: Usability testing involves observing real users as they interact with your product to identify pain points, frustrations, and opportunities for improvement.
- Best For: Understanding user behavior and improving the user experience (UX).
- Example: Watching users navigate your website to identify confusing navigation flows or unclear elements.
4. Paper Prototyping
What it is: A low-fidelity prototype using paper sketches to simulate a digital interface. It’s an excellent early-stage technique to quickly gather feedback without developing anything.
- Best For: Early-stage concept validation, especially for new features or product ideas.
- Example: Presenting a paper version of a new app layout to potential users and gathering their feedback on usability before development.
5. Landing Page Testing
What it is: This technique involves creating a landing page for a product or feature and driving traffic to it to gauge user interest or validate a value proposition.
- Best For: Testing market demand before full product development.
- Example: Building a landing page for a new service offering and measuring sign-up intent before building the service.
6. Wizard of Oz
What it is: In a Wizard of Oz experiment, users interact with what they think is a fully functioning system, but behind the scenes, a person is manually executing the tasks.
- Best For: Testing complex product features that are not fully built yet.
- Example: Presenting a chatbot interface where, instead of AI, a team member responds manually, allowing you to test user interest before developing the actual AI.
7. Concierge MVP
What it is: The Concierge MVP involves offering a service manually rather than through an automated product. It’s used to validate whether users are interested in a feature before you invest in automating it.
- Best For: Validating demand for services or features without developing the tech upfront.
- Example: Manually helping users find product recommendations instead of developing an AI-powered recommendation engine, just to see if users value the service.
8. Pre-Order Page
What it is: A pre-order page gauges interest in a product before it’s available. Users can place an order or sign up to express interest.
- Best For: Testing the demand for a new product or feature without fully developing it.
- Example: Creating a pre-order page for a new gadget to see if there’s enough interest before mass production.
9. Feature Fake / Feature Stub (Fake Door)
What it is: A fake door test involves offering a feature or product on your website or app that doesn’t exist yet. When users click to use it, they’re informed that the feature isn’t available and are invited to leave feedback.
- Best For: Testing demand for a new feature without building it.
- Example: Placing a “Try Our New Feature” button on a website to gauge interest before committing development resources.
- Caution: Overuse can damage your product's reputation, and some users may avoid the real feature later because they assume it is still fake.
Experiments are the foundation of evidence-based product development. By carefully crafting experiments and choosing the right techniques, you can validate ideas, optimize features, and minimize risk while ensuring you're building the right product for your users.
Different experimentation techniques allow you to test at various stages of the product lifecycle—from early-stage prototypes to fully developed features. The key is to balance cost, complexity, and insight depth as you move through each phase of development. The more experiments you run, the smarter and more efficient your product evolution becomes.
So whether you’re tweaking a small UI element or exploring a major feature, remember to experiment, measure, learn, and iterate—continuously improving your product with every test.
Why properly designed experiments are critical for animal research and advancing public health.
Good research practices are critical to ensuring rigorous, reproducible, and relevant results. When experiments are designed properly, the results are more likely to be replicated in future studies and relevant for human health.
Properly designing experiments means:
- Thinking about and planning for the appropriate number of animals necessary for the research
- Understanding the health of the animals and how they are cared for, such as their housing and other environmental factors
- Clearly explaining, identifying, and sharing the study methods when discussing the research to ensure it can be repeated by others
- Applying the 3Rs when conducting research to reduce, replace, and refine the use of animals when scientifically appropriate
NIH is committed to ensuring the research it supports is of the highest quality, is efficient, and the results can be explained and understood. This includes ensuring that studies involving animals are rigorous and reproducible.
NIH and the wider research community are actively working to identify, develop, and share research methods that improve the quality and transparency of animal research. A group of independent experts has even provided recommendations for NIH to consider going forward.
NIH is actively listening and participating throughout this process.
- The National Institute of Neurological Disorders and Stroke funds the development of educational resources to advance rigor in animal research and build a greater emphasis on rigor at universities around the country. They also held a workshop that brought together a diverse cross-section of individuals who promote rigor and transparency in biomedical research and are invested in catalyzing change.
- The National Institute on Aging developed a publicly available, searchable database, the AlzPED program, to increase the transparency, reproducibility, and translatability of preclinical studies of possible treatments for Alzheimer's disease.
- NIH also provides many resources for researchers to address rigor in their grant applications. For example, we encourage the use of a free online tool that guides researchers through the design of their experiments, helping to ensure that they use as few animals as possible. This webinar explains how and why this tool is used.
- NIH encourages recipient organizations and researchers to include the ARRIVE Essential 10 (essential elements of study design) in all NIH-supported publications describing vertebrate animal and cephalopod (such as octopus) research. The ARRIVE Essential 10 is a checklist that explains the most basic information to report in a scientific paper that includes animal research, which will help readers assess the reliability of the findings. You can watch this webinar to learn more.
NIH will continue to devise strategies that minimize the numbers of animals needed in NIH-supported studies, while remaining committed to promoting rigorous and transparent research in all areas of science.
Related NIH Leadership Statements
- February 10, 2023: Dr. Michael Lauer (NIH Deputy Director for Extramural Research) - Take Advantage of Our Many Resources for Enhancing the Rigor of Animal Research
- December 8, 2022: Dr. Lyric Jorgenson (NIH Associate Director for Science Policy) - Catalyzing Research with Novel Alternative Methods (which complements animal research to minimize animal use where possible and strengthen the reproducibility of results when animal models are necessary)
- June 11, 2021: Dr. Francis Collins (former NIH Director) - Statement on enhancing rigor, transparency, and translatability in animal research
- May 4, 2020: Dr. Carrie Wolinetz (former NIH Associate Director for Science Policy) - Summary of "NIH Workshop on Optimizing Reproducibility in Nonhuman Primate Research Studies by Enhancing Rigor and Transparency" Now Available.
- Open access
- Published: 27 September 2024
Atomic-scale strain engineering of atomically resolved Pt clusters transcending natural enzymes
- Ke Chen 1 na1 ,
- Guo Li 2 na1 ,
- Xiaoqun Gong 3 na1 ,
- Qinjuan Ren 2 ,
- Junying Wang 4 ,
- Shuang Zhao 3 ,
- Ling Liu 1 ,
- Yuxing Yan 1 ,
- Qingshan Liu 1 ,
- Yang Cao 1 ,
- Yaoyao Ren 5 ,
- Qiong Qin 5 ,
- Shu-Lin Liu ORCID: orcid.org/0000-0002-1043-4238 6 ,
- Peiyu Yao 6 ,
- Bo Zhang ORCID: orcid.org/0000-0001-8745-2752 7 ,
- Jingkai Yang 7 ,
- Ruoli Zhao 2 ,
- Yuan Li 2 ,
- Ran Luo 3 ,
- Yikai Fu 2 ,
- Yonghui Li ORCID: orcid.org/0000-0002-6105-6484 2 ,
- Wei Long 8 ,
- Shu Zhang 5 ,
- Haitao Dai 2 ,
- Changlong Liu 2 ,
- Jianning Zhang ORCID: orcid.org/0000-0003-3627-863X 5 ,
- Jin Chang 3 ,
- Xiaoyu Mu ORCID: orcid.org/0000-0002-6053-9775 1 , 2 &
- Xiao-Dong Zhang ORCID: orcid.org/0000-0002-7212-0138 1 , 2
Nature Communications, volume 15, Article number: 8346 (2024)
- Biocatalysis
- Biomedical materials
Strain engineering plays an important role in tuning the electronic structure and improving the catalytic capability of biocatalysts, but it remains challenging to modify atomic-scale strain for specific enzyme-like reactions. Here, we systematically design Pt single atoms (Pt1), several Pt atoms (Ptn), and atomically resolved Pt clusters (Ptc) on PdAu biocatalysts to investigate the correlation between atomic strain and enzyme-like catalytic activity, using experimental techniques and in-depth Density Functional Theory calculations. We find that Ptc on PdAu (Ptc-PA) with reasonable atomic strain upshifts the d-band center and exposes a high-potential surface, indicating sufficient active sites to achieve superior biocatalytic performance. In addition, the Pd shell and Au core serve as storage layers providing abundant energetic charge carriers. Ptc-PA exhibits prominent peroxidase (POD)-like activity with a catalytic efficiency (Kcat/Km) of 1.50 × 10^9 mM^−1 min^−1, about four orders of magnitude higher than natural horseradish peroxidase (HRP), while its catalase (CAT)-like and superoxide dismutase (SOD)-like activities are also comparable to those of natural enzymes. Biological experiments demonstrate that the detection limit of the Ptc-PA-based catalytic detection system exceeds that of visual inspection by 132-fold in clinical cancer diagnosis. Ptc-PA can also reduce multi-organ acute inflammatory damage and mitigate oxidative stress disorder.
Introduction
Noble metal platinum (Pt) has been recognized as one of the most efficient biocatalysts for mimicking natural enzymes in many fields 1,2,3,4,5. Biocatalyst development pursues the desired catalytic activity, high substrate selectivity, and durable stability, which depend sensitively on the rational design of both geometric and electronic structure due to the prominent quantum effect 6,7,8,9. A powerful approach to manipulating the electronic structure of Pt-based catalysts, especially shifting the d-band center, is lattice strain engineering 10,11,12,13,14,15,16,17. Generally, alloying Pt with transition metals M (M = Pd, Fe, Ru, and so on) 18,19,20,21,22 provides a different dimension for modulating the geometric and electronic properties via the characteristic strain effect and/or ligand effect for high-performance catalysis. Moreover, the use of typical core-shell structures and elastic substrates has also been proposed to control the lattice strain of catalysts through lattice mismatch 7,10,23,24. In addition, strain effects can be generated through vacancy defects 25,26,27,28, stacking faults 29,30, and in-plane amorphous-crystalline phase boundaries, drastically improving catalytic performance 31.
However, lattice strain introduced by lattice mismatch or amorphous-crystalline phase boundaries occurs extensively at the interface of two metals, at metal-substrate interfaces, or at in-plane phase boundaries, and gradually weakens from the interface toward the outermost surface, which severely limits the development of high-performing Pt-based biocatalysts 31,32,33. The catalytic performance therefore depends strongly on the shell thickness in core-shell structures or the metal size in metal-substrate structures. For example, ultrathin Pt shells (~2 nm) deposited on palladium-based nanocubes with compressive or tensile strain achieved efficient electrocatalytic activity via phosphorization and dephosphorization at high temperature 10. Ultrasmall Pt clusters a few atomic layers thick may be an alternative way to exploit the strain effect from interfacial lattice mismatch to boost biocatalytic activity 34. Meanwhile, single Pt atoms on noble metal substrates have been proposed to tune the electronic structure and achieve efficient catalytic performance 35,36,37,38. To investigate the strain-activity correlations of single Pt atoms or ultrasmall Pt clusters, strain-distribution analyses must be performed at the atomic scale, which is completely distinct from the geometric phase analysis adopted for conventional lattice strain evolution in nanoparticles (10–100 nm). At the atomic scale, strain values can be accurately calculated from the precise positions of the atomic columns 39,40. However, it is still challenging to modify atomic-scale strain for specific enzyme-like reactions.
In this work, we design Pt single atoms (Pt1), several Pt atoms (Ptn), and atomically resolved Pt clusters (Ptc) laminated on PdAu nanocrystals with a polyvinyl pyrrolidone (PVP) anti-aggregating ligand (denoted as Ptc-PA) by a simple and mild method at room temperature. Strain analyses at the atomic scale show that Ptc possesses reasonable tensile strain along the [110] direction (Supplementary Fig. 1). Density Functional Theory (DFT) calculations reveal that tensile strain upshifts the d-band center and exposes a high-potential surface, signifying more catalytically active sites and stronger adsorption energy, which contribute to the biocatalytic performance (Fig. 1). In addition, the Au core and Pd shell in Ptc-PA serve as electron donor and electron storage pool, respectively, further improving the enzyme-like catalytic properties. In particular, Ptc-PA exhibits excellent peroxidase (POD)-like activity with a catalytic efficiency (Kcat/Km) of 1.50 × 10^9 mM^−1 min^−1, about four orders of magnitude higher than natural horseradish peroxidase (HRP), as well as FeN3P SAzyme 41, FeN5 SA/CNF 42, RhN4 SAzyme 43, and conventional nanozymes 44,45,46,47. In addition, Ptc-PA possesses 61.5 times the catalase (CAT)-like capacity of natural CAT and comparable superoxide dismutase (SOD)-like activity to natural SOD. Owing to its remarkable POD-like activity, Ptc-PA enables a detection limit down to 0.7525 pg mL^−1 for clinical cancer diagnosis based on catalytic enhancement, a 132-fold improvement over visual detection methods. Ptc-PA can inhibit acute inflammation in multi-organ damage and mitigate oxidative stress disorders, resulting in a significant improvement of the irradiated mice survival rate from 20% to 80%.
Firstly, the evolution process from Pt single atom (Pt 1 ), several Pt atoms (Pt n ) to Pt cluster (Ptc) on the PdAu is accompanied by a reduction in tensile strain. Strain can prompt the formation of Pt vacancies, and further modulate the electronic structure, especially the upshift of d -band center. Meanwhile, Pd and Au substrates serve as storage layer and generation source of energetic charge carriers, respectively, which contributes to the formation of high potential surface. Thus, Ptc-PA exhibits ~10 4 times and 61.5 times higher POD-like and CAT-like capacities than natural HRP and CAT, respectively, and comparable SOD-like activity with natural SOD.
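As a side note on the Kcat/Km figure of merit used above: catalytic efficiency is simply the turnover number divided by the Michaelis constant. The minimal Python sketch below uses hypothetical kcat and Km values, chosen only to reproduce the reported 1.50 × 10^9 mM^−1 min^−1 efficiency and the ~10^4-fold gap versus HRP; they are not measured values from this paper.

```python
def catalytic_efficiency(k_cat_per_min, k_m_mM):
    """Catalytic efficiency Kcat/Km (mM^-1 min^-1), the standard figure
    of merit for comparing enzyme-like catalysts."""
    return k_cat_per_min / k_m_mM

# Hypothetical kcat and Km chosen to match the reported efficiency
nanozyme = catalytic_efficiency(k_cat_per_min=3.75e8, k_m_mM=0.25)

# Hypothetical HRP efficiency chosen to illustrate the ~10^4-fold gap
hrp = 1.5e5            # mM^-1 min^-1 (illustrative only)
fold = nanozyme / hrp  # "four orders of magnitude" means a ~10^4-fold ratio
```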
Structural properties
Surface strain can regulate the electrical properties of Pt catalysts, typically by shifting the d -band center, thereby altering the catalytic activity of metal nanomaterials 9 , 10 , 31 . Generally, surface strain results from lattice mismatch 7 , 13 . Since strain is spatially dependent, exposing active strain surfaces is challenging and requires the regulation of geometric structures with surface strain at atomic scale or a few of atomic-layers level. Herein, we synthesized Pt single atom (Pt 1 ), several Pt atoms (Pt n ) and atomic resolution Pt clusters (Ptc) laminated on Au nanocrystals with thickness of ~2.5 nm Pd transition layer (labeled as Pt 1 -PA, Pt n -PA and Ptc-PA, respectively) to investigate the strain-activity correlations (Fig. 2a ). In the absence of the nano-transition layer of Pd or AuPd core, Pt clusters would tend to accumulate and stack rather than existing at the several atomic-layers level (Supplementary Figs. 2 and 3 ).
a Schematic evolution of structure and strain at the atomic level. b – d HAADF-STEM images of Pt 1 -PA (the yellow circle represents isolated Pt atom), Pt n -PA (the yellow circle represents several Pt atoms), and Ptc-PA (the yellow circle represents Pt clusters), respectively. The experiments were repeated independently three times with similar results. e – g HAADF-STEM-EDS mapping images of Pt 1 -PA, Pt n -PA and Ptc-PA, respectively. h – j High-magnification HAADF-STEM images of Pt 1 -PA, Pt n -PA and Ptc-PA, respectively. The experiments were repeated independently three times with similar results. k HAADF-STEM image of Pt cluster from Ptc-PA, showing the emergence of Pt atom and vacancies. l Corresponding intensity profile along the green lines from ( k ). ‘arb. units’ represents arbitrary units. m HAADF-STEM image of Pt cluster from Ptc-PA, showing screw dislocations. Atomic strain mappings ( n – p ) and the corresponding box-charts of the measured strain values ( q – s ) along [110] direction for Pt 1 -PA, Pt n -PA and Ptc-PA, respectively ( n = 67, 40 and 51 for ( q ), ( r ) and ( s ), respectively. Boxes represent the median and IQR and the upper and lower whiskers extending to the values that are within 1.5 × IQR). t Intensity profiles of lattice space from Pt cluster in Ptc-PA and pure Pt. ‘arb. units’ represents arbitrary units.
To demonstrate the geometric and strain characteristics of Pt1-PA, Ptn-PA, and Ptc-PA, transmission electron microscopy (TEM) and high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) analyses were performed. As shown in Fig. 2b and Supplementary Fig. 4a, single Pt atoms are dispersed on the surface of the Pd shell or doped into lattice gaps at a reactant molar ratio of 0.1:1:1. As the proportion of Pt precursor increases, multiple Pt atoms gather on the surface at a molar ratio of 0.2:1:1 (Fig. 2c and Supplementary Fig. 4b) and become clustered at molar ratios of 0.4:1:1, 0.5:1:1, and 1:1:1 (Fig. 2d and Supplementary Fig. 4c–e). The full morphology of Ptc-PA (Pt:Pd:Au = 0.4:1:1, 0.5:1:1, and 1:1:1) from TEM images (Supplementary Figs. 4 and 5) shows dimensional uniformity and Pt clusters with an average diameter of ~2.45 nm; HAADF-STEM images of the Pt clusters also reveal a thickness of several atomic layers (Fig. 2d), and the calculated atom number of the Pt clusters is 509 ± 2 according to the formula reported in the literature 48. Furthermore, scanning transmission electron microscopy-energy dispersive spectrometry (STEM-EDS) was conducted to analyze the element distribution, visually showing that Pt is uniformly dispersed on the material surface in the form of single atoms, several atoms, and up to clusters (Fig. 2e–g and Supplementary Fig. 6a, b). Representative surface structures of Pt1-PA and Ptn-PA were examined at the atomic scale by high-magnification HAADF-STEM (Fig. 2h, i), revealing crystal spacings of 0.152 nm and 0.1437 nm, respectively. The corresponding fast Fourier transform (FFT) patterns (Supplementary Fig. 7a, b) represent the Pd face-centered cubic (fcc) structure. Thus, the aforementioned crystal spacings correspond to the Pd (220) facet and are larger than those of pure Pd, likely owing to Pt atom doping.
Similarly, atomic-resolution HAADF-STEM images and corresponding FFT patterns of Pt clusters for Ptc-PA at molar ratios of 0.4:1:1, 0.5:1:1, and 1:1:1 demonstrate the Pt fcc structure and Pt (220) facet spacings of 0.1436 nm, 0.144 nm, and 0.138 nm, respectively (Fig. 2j, Supplementary Figs. 6c, d, 7c–e). The X-ray diffraction (XRD) pattern of Ptc-PA at a molar ratio of 0.4:1:1 also reveals randomly oriented fcc crystals in which four peaks are assigned to the (111), (200), (220), and (311) facets (Supplementary Fig. 8), while the magnified peaks still cannot demonstrate the lattice spacing change because the Pt and Pd peaks nearly overlap. In addition, intensity profiles of the atomic HAADF-STEM images (Fig. 2k–m) show that the Pt clusters in Ptc-PA contain a few Pt vacancies, isolated Pt atoms, screw dislocations, and a high density of stepped surface atoms.
Importantly, to evaluate the atomic-strain characteristics of representative surface structures in Pt1-PA, Ptn-PA, and Ptc-PA, we pinpointed the precise atom-column positions in the atomic-resolution HAADF-STEM images (Supplementary Fig. 9) using the StatSTEM software 49. Note that StatSTEM employs a model-based estimation algorithm and accounts for overlap between neighboring columns to quantify atomic column positions with high precision and accuracy. Inspired by strain analysis methods in the literature 40,50, we measured the nearest-neighbor distances along the crystallographic [110] direction and compared them with standard reference values to generate 2D visualized atomic strain maps (Fig. 2n–p and Supplementary Fig. 10a, b). Further, as shown in Fig. 2q–s and Supplementary Fig. 10c, d, the strain values within the representative regions are summarized statistically. The statistics reveal tensile strain at the surfaces of Pt1-PA and Ptn-PA, with average strain values of about 10.86% and 5.03% along the [110] direction, respectively. For Ptc-PA at molar ratios of 0.4:1:1, 0.5:1:1, and 1:1:1, the average strain values are 3.61%, 2.61%, and −1.72% along the [110] direction, respectively. Despite possessing similar structures, the Ptc-PAs with different molar ratios exhibit gradually decreasing strain values with increasing Pt input, which could be ascribed to continuous stress release 51. For Pt1-PA and Ptn-PA, the strain can be attributed to Pt atom doping; in particular, lattice-gap doping in Pt1-PA plays a vital role in the tensile strain. For Ptc-PA, Pt vacancies and isolated Pt atoms could contribute to the opportune strain, which increases atomic arrangement disorder and low-coordination sites and further benefits catalytic performance 10,52.
Furthermore, unlike conventional nanocrystals (10–100 nm), atomically resolved Pt clusters exhibit a finite periodic structure and a high density of stepped surface atoms, which could also be responsible for the observed strain. Meanwhile, quantitative analysis of the intensity profile along the lattice spacing (Fig. 2t) shows a larger lattice expansion of the Pt cluster in Ptc-PA at a molar ratio of 0.4:1:1 than in pure Pt, which is attributed to the tensile surface strain. In addition, Ptc-PA at a molar ratio of 0.4:1:1 exhibits strong stability in PBS and BSA, facilitating broader application (Supplementary Figs. 11 and 12). Overall, Ptc-PA possesses atomically resolved Pt clusters with sufficient exposure of active surface strain sites, as well as Pt vacancies.
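The strain metric used in these maps is simple to express: the fractional deviation of a measured nearest-neighbor spacing from an unstrained reference, with positive values indicating tensile strain and negative values compressive strain. A minimal Python sketch, where the reference and measured spacings are purely illustrative numbers rather than data from this paper:

```python
def strain_percent(measured_nm, reference_nm):
    """Lattice strain (%) from a measured interatomic spacing relative to
    an unstrained reference: positive = tensile, negative = compressive."""
    return (measured_nm - reference_nm) / reference_nm * 100.0

# Hypothetical column-to-column spacings (nm) along [110] and a nominal
# reference spacing; values are illustrative only
reference = 0.277
spacings = [0.291, 0.288, 0.285, 0.280]

strains = [strain_percent(d, reference) for d in spacings]
mean_strain = sum(strains) / len(strains)   # average strain over the region
```

Averaging many such per-column values over a region is what produces the box-chart strain statistics described above.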
Since the three types of Ptc-PA possess similar structures, we focused on the 0.4:1:1 molar ratio. To evaluate the electronic structure of Ptc-PA at this ratio, X-ray photoelectron spectroscopy (XPS) and X-ray absorption fine structure (XAFS) measurements were performed. The XPS analyses of Ptc-PA show that the three metals predominantly hold zero-valence states, while Pt 4f is accompanied by a certain amount of Pt(II) (Supplementary Fig. 13), indicating that the Pt electronic states are modified by the PdAu core or surface strain via electron transfer. In the normalized X-ray absorption near-edge structure (XANES) spectra of the Au L3-edge (Fig. 3a), the absorption edge of Ptc-PA shifts positively to higher energy than that of Au foil, indicating an increase in the average valence state and a decrease in the electron number of Au; it is conjectured that electrons are transferred from Au to Pd. As shown in the Fourier-transformed extended X-ray absorption fine structure (EXAFS) spectra (Fig. 3b), the maximum peak of Ptc-PA, at about 2.58 Å, is shifted slightly to the left of that of Au foil, revealing that Au-Au bonds dominate, accompanied by a small amount of Au-Pd bonds. To corroborate this result, wavelet transform (WT) analysis of the Au L3-edge oscillation was carried out (Fig. 3c, d). Besides the intensity at around 8.72 Å−1, Ptc-PA shows a secondary intensity at around 5.9 Å−1, assigned to Au-Pd coordination. In the XANES spectra of the Pt L3-edge (Fig. 3e), the absorption edge of Ptc-PA shifts positively compared with that of Pt foil, revealing electron transfer from Pt to Pd and an increase in the average valence state, consistent with the XPS analysis of Pt 4f in Supplementary Fig. 13. Meanwhile, the white-line intensity of the Pt L3-edge in Ptc-PA increases significantly, suggesting a high Pt-vacancy density, in agreement with the STEM results (Fig. 2k, l).
Furthermore, the corresponding Fourier-transformed EXAFS spectra are shown in Fig. 3f : Ptc-PA has a peak at ~2.4 Å similar to that of Pt foil, which is mainly ascribed to the Pt-Pt bond. To further distinguish the local configurations of Ptc-PA, WT maps were provided. As shown in Fig. 3g, h , the maximum intensity at around 9.1 Å −1 in Pt foil is slightly shifted to 9.2 Å −1 in Ptc-PA, which is assigned to elongated Pt-Pt bond lengths. Ptc-PA also shows a secondary intensity at around 7.8 Å −1 , which could be associated with the formation of Pt-Pd coordination. In addition, curve-fitting analysis of the EXAFS in k-space was performed to further analyze the coordination configurations of Au and Pt in Ptc-PA (Supplementary Fig. 14 ). The coordination number of the Pt-Pt bond in Ptc-PA is lower than that of Pt foil (Supplementary Table 1 ), which could be attributed to the formation of Pt vacancies introduced by lattice strain and also favors catalytic performance 26 , 53 .
a XANES and ( b ) Fourier-transformed EXAFS spectra of the Au L 3 -edge in Ptc-PA and Au foil. c , d Wavelet transform spectra obtained at the Au L 3 -edge in Au foil and Ptc-PA, respectively. e XANES and ( f ) Fourier-transformed EXAFS spectra of the Pt L 3 -edge in Ptc-PA and Pt foil. g , h Wavelet transform spectra obtained at the Pt L 3 -edge in Pt foil and Ptc-PA, respectively. R represents the radial distribution function.
Enzyme-like activity
POD-like activity
To investigate the peroxidase (POD)-like activity of Pt single atoms (Pt 1 ), several Pt atoms (Pt n ) and atomically resolved Pt clusters (Ptc) on PdAu biocatalysts, a colorimetric assay using 3,3′,5,5′-tetramethylbenzidine (TMB) in the presence of hydrogen peroxide (H 2 O 2 ) was performed (Fig. 4a ). To explore the effect of different Pt atomic numbers and surface strain on the POD-like activity, Fig. 4b shows that Ptc-PA exhibits the highest catalytic efficiency ( K cat / K m ) at the reactant molar ratio of 0.4:1:1, demonstrating that the activity of Ptc-PA varies nonmonotonically with the quantity of Pt atoms and possesses an optimal molar ratio and appropriate strain. In particular, we quantified the POD-like activity of pure Pt NPs and Ptc-PA (Pt:Pd:Au=0.4:1:1) at identical Pt mass concentrations using inductively coupled plasma mass spectrometry (ICP-MS), and compared the strain and POD-like activity of Ptc-PA and pure Pt NPs. As shown in Fig. 4c , Ptc-PA (Pt:Pd:Au=0.4:1:1) with 3.61% tensile strain exhibits a POD-like activity ~45.3-fold higher than that of strain-free pure Pt NPs, which could be due to the atomically resolved Pt clusters in Ptc-PA exposing more active catalytic sites and offering a larger specific surface area. Above all, these results underscore the enormous impact of atomic-level strain on the POD-like activity of nanomaterials. Furthermore, we systematically compared the activities of all NP species composed of the Au, Pd, and Pt elements, synthesized by the same method and measured at the same total mass concentration. Ptc-PA (Pt:Pd:Au=0.4:1:1) possesses the highest activity, indicating that both the Au and Pd elements play a key role in enhancing the catalytic performance (Supplementary Figs. 15 and 16 ). Subsequently, the POD-like activity of Ptc-PA (Pt:Pd:Au=0.4:1:1) was explored in detail, showing concentration dependence (Fig. 4d and Supplementary Fig. 17 ). In contrast, Ptc-PA (Pt:Pd:Au=0.4:1:1) shows scarcely any oxidase (OXD)-like activity, indicating the catalytic specificity of its POD-like activity (Supplementary Fig. 18 ). Besides, Ptc-PA (Pt:Pd:Au=0.4:1:1) maintained ultrahigh POD-like activity over five rounds of catalytic reaction (Fig. 4e and Supplementary Fig. 19 ), suggesting enduring catalytic stability.
a Schematic diagram for the POD-like activity of Ptc-PA. b Strain and K cat / K m of Au@Pd@Pt with different molar ratios. c POD-like activity of Pt and Ptc-PA (Pt:Pd:Au=0.4:1:1). d Time course of the absorbance from the POD-like activity of Ptc-PA at different concentrations ( n = 3 independent experiments, data are presented as mean ± SD). e Cyclic stability of TMB oxidation catalyzed by Ptc-PA in the presence of H 2 O 2 . Michaelis–Menten plots at various concentrations of ( f ) H 2 O 2 and ( g ) TMB for Ptc-PA (Inset: corresponding Lineweaver–Burk plots, n = 3 independent experiments, data are presented as mean ± SD). h Comparison of strain, Pt atoms from the Pt cluster, and kinetics for Pt 1 -PA, Pt n -PA, Ptc-PA and HRP. [E] represents the particle concentration of Ptc-PA used in the catalytic reaction. K m is the Michaelis–Menten constant. V max is the maximal reaction velocity. K cat is the catalytic rate constant, where K cat = V max / [E]. The value of K cat / K m represents the catalytic efficiency. i Changes in the POD-like activity of Ptc-PA and HRP with time ( n = 3 independent experiments, data are presented as mean ± SD). j Temperature stability of the POD-like activity of Ptc-PA ( n = 3 independent experiments, data are presented as mean ± SD). All ‘arb. units’ on the axes represent arbitrary units.
To quantitatively evaluate the POD-like activity of Ptc-PA, enzyme kinetic tests were performed on Ptc-PA and natural HRP (Fig. 4f, g and Supplementary Fig. 20 ). The particle concentration of Ptc-PA was obtained by conversion from the elemental content measured by ICP-MS. The corresponding Michaelis constants ( K m ) of Ptc-PA (Pt:Pd:Au=0.4:1:1) for the TMB and H 2 O 2 substrates are 0.044 mM and 2.09 mM, respectively. It is noteworthy that the K m of Ptc-PA (Pt:Pd:Au=0.4:1:1) for the TMB substrate is lower than that of natural HRP (Fig. 4h ), indicating that Ptc-PA exhibits a higher affinity for the TMB substrate. Accordingly, the catalytic rate constant K cat (also known as the turnover number) of Ptc-PA (Pt:Pd:Au=0.4:1:1) for TMB and H 2 O 2 is 4.05 × 10 3 times and 326 times higher than that of natural HRP, respectively. Meanwhile, the K cat / K m of Ptc-PA (Pt:Pd:Au = 0.4:1:1) for TMB is 1.50 × 10 9 mM −1 min −1 , about 10 4 -fold, 1.07 × 10 4 -fold, 8.02 × 10 6 -fold, 7.98 × 10 5 -fold, 3.4-fold, and 100-fold higher than those of natural HRP, FeN 3 P SAzyme 54 , FeN 5 SA/CNF 42 , RhN 4 SAzyme 43 , Ptc-PA (Pt:Pd:Au=0.5:1:1) and Ptc-PA (Pt:Pd:Au=1:1:1), respectively (Fig. 4h , Supplementary Fig. 21 and Supplementary Table 2 ). In addition, the catalytic efficiency of Ptc-PA (Pt:Pd:Au=0.4:1:1) also far surpasses that of other conventional nanozymes (Supplementary Table 2 ). The catalytic efficiency of Ptc-PA (Pt:Pd:Au=0.4:1:1) for H 2 O 2 is almost 2.43 × 10 3 -fold higher than that of natural HRP. In addition, natural HRP shows nearly no catalytic activity after 3 days, while Ptc-PA (Pt:Pd:Au=0.4:1:1) maintains relatively stable catalytic activity for 50 days (Fig. 4i and Supplementary Fig. 22 ). Performance stability under different storage conditions, especially temperature, is a very important indicator for the practical application of nanomaterials; thus, the POD-like activity of Ptc-PA (Pt:Pd:Au=0.4:1:1) was evaluated at different temperatures.
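The kinetic quantities above follow from a standard Michaelis–Menten analysis. A minimal sketch, using synthetic rate data rather than the measured values, of how K m and V max can be extracted from a Lineweaver–Burk fit and converted to K cat = V max / [E] and the efficiency K cat / K m (the substrate concentrations, rates and [E] below are illustrative placeholders):

```python
import numpy as np

# Minimal sketch (synthetic data, not the paper's measurements) of the
# kinetic analysis: fit K_m and V_max from initial rates via a
# Lineweaver-Burk (double-reciprocal) plot, then derive the turnover
# number K_cat = V_max / [E] and the efficiency K_cat / K_m.

S = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.4])   # substrate conc., mM (hypothetical)
Km_true, Vmax_true = 0.044, 1.2e-3                # mM and mM/min, illustrative only
v = Vmax_true * S / (Km_true + S)                 # noise-free Michaelis-Menten rates

# Lineweaver-Burk: 1/v = (K_m / V_max) * (1/[S]) + 1/V_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept                            # mM/min
Km = slope * Vmax                                 # mM

E = 1e-12                                         # particle conc., mM (hypothetical)
Kcat = Vmax / E                                   # turnover number, min^-1
efficiency = Kcat / Km                            # mM^-1 min^-1
print(f"K_m = {Km:.3f} mM, V_max = {Vmax:.2e} mM/min, K_cat/K_m = {efficiency:.2e}")
```

In practice the fit is performed on replicate measured initial rates; the double-reciprocal form is used here because it mirrors the Lineweaver–Burk insets of Fig. 4f, g.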
Figure 4j shows that the catalytic performance of Ptc-PA (Pt:Pd:Au=0.4:1:1) is efficiently sustained from 0 °C to 50 °C, facilitating broader applications of Ptc-PA (Pt:Pd:Au=0.4:1:1). The above results demonstrate that Ptc-PA (Pt:Pd:Au=0.4:1:1) exhibits robust and stable POD-like capability.
To ascertain whether the superior POD-like activity of Ptc-PA mainly stems from atomic-level strain on its surface, we compared the strain and POD-like activity of Ptc-PA (Pt:Pd:Au=0.4:1:1), Ptc-PA (Pt:Pd:Au=0.5:1:1) and Ptc-PA (Pt:Pd:Au=1:1:1). Despite only a 20% difference in Pt composition between Ptc-PA (Pt:Pd:Au=0.4:1:1) and Ptc-PA (Pt:Pd:Au=0.5:1:1), the POD-like catalytic efficiency of Ptc-PA (Pt:Pd:Au=0.4:1:1) with 3.61% tensile strain was 3.4-fold higher than that of Ptc-PA (Pt:Pd:Au=0.5:1:1) with 2.61% tensile strain, suggesting that a proper strain value facilitates catalytic efficiency 10 . Moreover, the catalytic efficiency was further reduced when the strain became negative (compressive strain) in Ptc-PA (Pt:Pd:Au=1:1:1), even with the increased exposure of Pt atoms. Therefore, these results demonstrate that a suitable tensile strain can significantly enhance the POD-like catalytic efficiency.
CAT-like activity
Subsequent tests utilized Ptc-PA with a molar ratio of Pt:Pd:Au=0.4:1:1; for simplicity, the molar ratio is omitted hereafter. As shown in Fig. 5a, b , Ptc-PA exhibits favorable concentration-dependent CAT-like activity, which is also visualized by different degrees of bubbling after the decomposition of H 2 O 2 . To further investigate and quantify the CAT-like activity, we similarly performed enzyme kinetic tests on Ptc-PA and compared it with natural CAT (Fig. 5c and Supplementary Fig. 23 ). The K m and K cat of Ptc-PA (4.49 mM and 1.66 × 10 7 min −1 ) are significantly better than those of natural CAT (28.8 mM and 1.73 × 10 6 min −1 ), suggesting that Ptc-PA has a stronger affinity and higher catalytic rate for the H 2 O 2 substrate (Fig. 5d ). In particular, the K cat / K m of Ptc-PA is 3.69 × 10 6 mM −1 min −1 , which is 61.5 times higher than that of natural CAT. Together, these results indicate that Ptc-PA possesses excellent CAT-like activity beyond the natural enzyme and is more efficient in catalyzing the conversion of H 2 O 2 into non-toxic O 2 and H 2 O.
a Time-dependent absorbance at 240 nm of H 2 O 2 treated with different concentrations of Ptc-PA ( n = 3 independent experiments, data are presented as mean ± SD). b Efficiency of H 2 O 2 scavenging by Ptc-PA in 180 s ( n = 3 independent experiments, data are presented as mean ± SD). The inset shows the physical decomposition of H 2 O 2 . c Michaelis–Menten and corresponding Lineweaver–Burk plots at various concentrations of H 2 O 2 for Ptc-PA ( n = 3 independent experiments, data are presented as mean ± SD). d Comparison of kinetics for Ptc-PA and CAT. e UV–vis spectra of Ptc-PA catalyzing SOD-like reactions. ‘arb. units’ represents arbitrary units. f Inhibition rate of SOD-like activity for Ptc-PA and natural SOD. g Radar map of the enzymatic activities and schematic illustration of Ptc-PA mimicking the enzymatic antioxidant defense system.
SOD and GPx-like activity
SOD is an important antioxidant enzyme; thus, the SOD-like activity of Ptc-PA was assessed using a SOD assay kit. The absorbance of the reaction system at 450 nm shows a strong dependence on the concentration of Ptc-PA (Fig. 5e ). Ptc-PA exhibits a high SOD inhibition rate of nearly 80% at a concentration of 10 μg/mL. A serial comparison of the SOD-like activity of Ptc-PA with natural SOD was carried out. As shown in Fig. 5f and Supplementary Fig. 24 , the SOD-like activity of Ptc-PA is nearly equivalent to that of natural SOD. In addition, Ptc-PA is able to maintain high SOD-like activity after almost 50 days (Supplementary Fig. 24d ), suggesting the durable stability of its SOD-like activity. Besides, Ptc-PA also exhibits some degree of GPx-like activity (Supplementary Fig. 25 ). Taken together, the results indicate that Ptc-PA possesses excellent multispecies enzyme-like activities, especially notable POD-like, CAT-like and SOD-like activity (Fig. 5g ). Combined with the structural analyses, the superior catalytic activity of Ptc-PA may arise because the ultrathin Pd shell and Au core serve as the storage layer and generation source of energetic charge carriers, respectively 55 , 56 , 57 , 58 , as well as from the atomically resolved Pt clusters with surface strain and Pt vacancies 59 . In addition, XANES also shows that the coordination number of the active center in Ptc-PA is lower than that of pure Pt, which further favors the overall catalytic properties 53 , 60 , 61 .
Scavenging free radicals and electrocatalysis
The scavenging activity toward •OH and O 2 •− free radicals was investigated using electron spin resonance (ESR), illustrating that Ptc-PA exhibits favorable radical-scavenging ability via the migration of energetic charge carriers (Supplementary Fig. 26 ). Moreover, the electrocatalytic performance for H 2 O 2 reduction and the oxygen reduction reaction (ORR) was also evaluated with a Ptc-PA-modified glassy carbon (GC) electrode. In the presence of H 2 O 2 , the reduction current of the Ptc-PA-modified GC electrode is significantly higher than that of the unmodified electrode (Supplementary Fig. 27a ). The current density of Ptc-PA reaches −1.4 mA/cm 2 at a bias of −0.8 V, about 7 times higher than that of a pure GC electrode (Supplementary Fig. 27b ), indicating that Ptc-PA exhibits efficient catalytic activity for H 2 O 2 reduction. The ORR performance of the Ptc-PA-modified GC electrode was evaluated by measuring cyclic voltammetry (CV) curves in 0.01 M PBS saturated with O 2 . The current density of Ptc-PA is about 8 times higher than that of the unmodified GC electrode (Supplementary Fig. 27c, d ), suggesting high catalytic performance for O 2 reduction. Therefore, Ptc-PA exhibits excellent scavenging activity toward oxygen free radicals owing to its high catalytic performance, suggesting that such artificial nanozymes can effectively block the oxygen cascade reaction, regulate redox imbalance and protect the organism from oxidative damage.
Mechanism of catalytic activity
DFT calculations were carried out to uncover the possible catalytic mechanism and the effects of strain, the ultrathin Pd shell and the Au core on the excellent catalytic activity of Ptc-PA. Based on the geometrical characteristics of Ptc-PA, a simplified atomic structure model was constructed. As shown in the surface electron density difference analysis (Fig. 6a ), increased electron density exists at the Au-Pd and Pt-Pd interfaces due to electron transfer among the three metals, which could be attributed to the modulation of the electronic structure by the strain effect. To gain insight into the electron transport among the three metals, we analyzed the Bader charges of different atoms in Ptc-PA (Supplementary Tables 3 and 4 ). Pt and Au possess negative Bader charges of −0.292 and −0.445, respectively, while Pd exhibits a positive Bader charge of 0.737 (Fig. 6b ). This suggests that electrons transfer from Pt and Au to Pd, and that the ultrathin Pd shell serves as an electron storage pool and intermediate transfer station, in accord with the XANES results. Moreover, compared with compressive strain, Pt in Ptc-PA loses more electrons under tensile strain (Supplementary Tables 3 and 4 ), thereby enhancing its ability to capture foreign electrons. However, the absolute value of the Bader charge of the Pt atom becomes lower without Au, clearly indicating that the introduction of the Au atomic layer contributes to electron transfer from Pt to Pd. Given this electron transfer, Pt 0 would be transformed into Pt 2+ , as demonstrated by the XPS results; Pt 2+ traps electrons from external surface radicals more readily. On this basis, we propose a plausible reaction mechanism for the POD-like and CAT-like processes of Ptc-PA:
a The deformation charge density analysis of Ptc-PA model (blue and red represent charge depletion and charge accumulation, respectively. Isosurfaces correspond to 0.002 e/Bohr 3 ). b Bader charge of Ptc-PA model. c TDOS and d -band center of Ptc-PA at different strains. The total ESP for the surface layer of ( d ) Pt 1 -PA and ( e ) Ptc-PA. f Schematic illustrating that electron transfer during different enzyme-like reaction and the d -band center upshift of Ptc-PA at different strains. g Variation of Pt-O bond length and adsorption energy for H 2 O 2 in Pt surface for pure Pt, Ptc-P (Pt atomic layer on the Pd atomic layer without Au atomic layer) and Ptc-PA at different strains.
A plausible reaction mechanism for the SOD-like process of Ptc-PA is as follows:
The d -band center serves as a valuable descriptor for evaluating the adsorption potential of metallic materials toward small molecules 62 . An upshift of the d -band center raises the energy level of the antibonding band, decreasing the electron occupancy of the antibonding orbitals and thereby enhancing bond strength. Consequently, this phenomenon facilitates the adsorption of small molecules by metallic materials. Due to the strain effect, the d -band center of Ptc-PA shifts and in turn affects the overall activity 12 , 63 . As shown in Fig. 6c , compared with compressive strain, tensile strain leads to an upshift of the d -band center of Ptc-PA. This could enhance the ability to adsorb oxygen-containing intermediates 64 and, in turn, improve the catalytic activity of Ptc-PA. To further investigate the catalytic enhancement mechanism of Ptc-PA compared to Pt 1 -PA, we computed and visualized the total electrostatic potential (ESP) based on the DFT simulations, focusing on the surface by plotting the ESP on a plane passing through the topmost layer of atoms of the slab. The total ESP, which is the sum of the nuclear potential and the Hartree potential, can be used to describe the impact of net charge approximately. The contour plots of the ESP indicate positive ESP values for all atoms in Pt 1 -PA and Ptc-PA, suggesting their electrophilic nature (Fig. 6d, e ). Moreover, the Pt atoms display higher peak ESP values than the Pd atoms, suggesting a stronger electron-trapping ability. Notably, the larger peaks on the Pt atoms of Pt 1 -PA indicate an electric field directed towards the surrounding Pd atoms. Consequently, this electric field may drive small molecules near the surface of Pt 1 -PA towards the Pt atoms, implying that Pt serves as the active center in catalytic reactions.
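The d -band center invoked above is simply the first moment of the projected d density of states. A small numerical sketch using synthetic Gaussian DOS curves (illustrative stand-ins, not the computed TDOS of Fig. 6c) to show how a narrower, higher-lying band, as expected under tensile strain, yields an upshifted center:

```python
import numpy as np

# Synthetic-Gaussian sketch of the d-band-center descriptor:
#   eps_d = integral(E * rho(E) dE) / integral(rho(E) dE)
# The DOS curves below are illustrative stand-ins, not the computed TDOS.

def d_band_center(E, rho):
    """First moment (centroid) of a density of states rho(E)."""
    return np.trapz(E * rho, E) / np.trapz(rho, E)

E = np.linspace(-10.0, 5.0, 3001)   # energy grid relative to E_F, eV

# Tensile strain narrows the d band; at fixed filling its centroid moves up.
rho_compressive = np.exp(-((E + 2.8) ** 2) / (2 * 1.5 ** 2))   # wider, lower band
rho_tensile     = np.exp(-((E + 2.3) ** 2) / (2 * 1.1 ** 2))   # narrower, higher band

assert d_band_center(E, rho_tensile) > d_band_center(E, rho_compressive)
print(d_band_center(E, rho_tensile) - d_band_center(E, rho_compressive))  # upshift, eV
```

In an actual DFT workflow the same centroid is evaluated on the Pt-projected DOS; the Gaussian widths and centers here merely encode the band-narrowing argument of the text.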
The higher potential at the surface of Ptc-PA relative to Pt 1 -PA signifies its enhanced capability for adsorbing small molecules. In brief, the schematic in Fig. 6f illustrates that the plausible reaction mechanism during the different enzyme-like activities is governed by the conversion between Pt 0 and Pt 2+ , and that the d -band center upshift of Ptc-PA under tensile strain could be critical for the excellent enzyme-like catalytic activity. To further verify the strain effect on the catalytic activity of Ptc-PA, the adsorption energy and Pt-O bond length for H 2 O 2 at different strains were calculated (Fig. 6g ). H 2 O 2 is rapidly decomposed by pure Pt, Ptc-PA and Ptc-P to generate OH* species when it approaches the surface, which is favorable for the downstream reaction. Compared with the Pt-O bond lengths on the Ptc-P surface (1.985 Å) and the pure Pt surface (1.966 Å), the Ptc-PA surface exhibits a shorter Pt-O bond length (1.945 Å), indicating stronger bonding between OH* and Ptc-PA. Furthermore, as the strain transitions from negative to positive, the Pt-O bond length decreases by about 0.03 Å on the surface of Ptc-PA, suggesting that a certain amount of tensile strain is favorable for Ptc-PA to adsorb H 2 O 2 and catalyze its decomposition. In addition, the H 2 O 2 adsorption energy on the surface of Ptc-PA under tensile strain (−2.041 eV) is more favorable than those on pure Pt and Ptc-P (Fig. 6g ), demonstrating the vital roles of the strain, the Au layer and the Pd layer in enhancing the catalytic performance of Ptc-PA. Therefore, these findings highlight that tensile strain can effectively improve the adsorption capacity for H 2 O 2 substrates and in turn boost the catalytic activity of Ptc-PA, which can guide the design of high-performance catalysts.
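The adsorption energies compared above follow the usual convention E_ads = E(surface + adsorbate) − E(surface) − E(adsorbate), with more negative values indicating stronger binding. A trivial sketch of that bookkeeping with hypothetical total energies (not the DFT values behind Fig. 6g):

```python
# Sketch of the adsorption-energy convention used in the comparison above:
#   E_ads = E(surface + H2O2) - E(surface) - E(H2O2),
# where more negative values mean stronger binding. All total energies
# below are hypothetical placeholders, not the DFT results.

def adsorption_energy(e_combined: float, e_surface: float, e_molecule: float) -> float:
    """Adsorption energy in eV; negative = favorable adsorption."""
    return e_combined - e_surface - e_molecule

e_surf = -350.00      # eV, hypothetical clean-slab total energy
e_h2o2 = -18.00       # eV, hypothetical isolated-H2O2 total energy
e_sys  = -370.04      # eV, hypothetical slab + adsorbed H2O2

print(f"E_ads = {adsorption_energy(e_sys, e_surf, e_h2o2):.3f} eV")  # → E_ads = -2.040 eV
```

All three total energies must be computed with the same functional, basis and k-point settings for the differences to be meaningful.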
Ptc-PA-based LFA for cancer patient detection
The development of sensitive detection tools facilitates early clinical monitoring of major disease progression. Taking advantage of the prominent POD-like activity and robust stability of Ptc-PA, we further introduced Ptc-PA as a probe in the lateral flow assay (LFA). LFAs are widely used for testing at home, in the community and in the field, but are often limited by their detection sensitivity. Ptc-PA probes may effectively overcome this hurdle through two kinds of signal readout: visual readout and catalytic readout (Fig. 7a ). Owing to the POD-like activity of Ptc-PA, the catalytic signal was measured by recording the absorbance with a microplate reader, allowing for accurate quantitative detection of low-abundance analytes and meeting different application requirements (Supplementary Fig. 28 ).
a Schematic illustration of the Ptc-PA-based LFA for sensitive detection of PSA and CEA. Specificity analysis against different nontarget proteins under colorimetric and catalytic modes with the Ptc-PA-based LFA for ( b ) PSA and ( c ) CEA. Colorimetric and catalytic photographs of visually recognizable color changes in the reaction system for ( d ) PSA and ( e ) CEA. Numbers 1−15 represent 6.4, 3.2, 1.6, 0.8, 0.4, 0.2, 0.1, 0.05, 0.025, 0.0125, 0.00625, 0.00313, 0.00155, 0.0007525 and 0 ng/mL of PSA or CEA, respectively. Relationships of the T-line intensity and the absorbance at 652 nm with different concentrations of ( f ) PSA or ( g ) CEA (Inset: calibration curves of the colorimetric and catalytic signals versus concentration of PSA or CEA, n = 3 independent experiments, data are presented as mean ± SD). Recovery rates of ( h ) PSA and ( i ) CEA spiked in serum samples under colorimetric and catalytic modes. j Detection results of PSA in 50 clinical serum samples. k Heat map showing the assay results of the 50 clinical serum samples. l Correlation analysis between the Ptc-PA-based LFA and the electrochemical assay for the detection of PSA in serum samples. m Detection results of CEA in 30 clinical serum samples. All ‘arb. units’ on the axes represent arbitrary units.
Prostate-specific antigen (PSA) and carcinoembryonic antigen (CEA), which are considered crucial biomarkers for the diagnosis of prostate cancer and colorectal cancer, were selected as the model analytes in clinical trials 65 , 66 . Sensitive detection of PSA or CEA facilitates early diagnosis of cancer and monitoring of treatment progress. To verify the specificity of our assay system for the PSA and CEA antigens, we used two strips to detect a variety of protein markers. Figure 7b, c indicates that this system possesses excellent inclusiveness and exclusiveness for the detection of PSA or CEA. The visual detection limit of the Ptc-PA-based LFA without the catalytic enhancement mode reached an ultralow level of 0.1 ng/mL for PSA or CEA, about an order of magnitude lower than the detection limits reported in the literature (Fig. 7d, e ). Notably, the visual detection limit of these two biosensors with the catalytic enhancement mode (0.7525 pg/mL) was 132 times lower than the naked-eye detection limit, approaching the sensitivity of electrochemical detection (Fig. 7d, e ). As expected, the T-line color intensities and the catalytic signal increased as the concentration of PSA or CEA increased (Fig. 7f, g ). For PSA, the limit of detection (LOD) with the catalytic enhancement system reached 0.33 pg/mL, while that of the mode without enhancement was 11.08 pg/mL (Supplementary Table 5 ). For CEA, the LODs were calculated to be 0.063 ng/mL and 0.46 pg/mL for the Ptc-PA-based LFA without and with catalytic enhancement, respectively (Supplementary Table 6 ). Different concentrations of PSA or CEA standards were added to four healthy serum samples and the spiked recoveries were calculated. The average recoveries of the standard additions for PSA ranged from 89.26% to 113.13%, and the coefficient of variation (CV, n = 3) values were 0.47%–9.78%, indicating that the developed method was not impacted by biological matrices (Fig. 7h and Supplementary Table 7 ). Figure 7i and Supplementary Table 8 also demonstrate the specificity of the assay in detecting CEA, and spiked standard samples confirmed its stability against interference from biological matrices.
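The LODs reported above are conventionally estimated from the calibration slope and the blank noise (LOD = 3σ_blank / slope). A minimal sketch with synthetic calibration and blank data, not the assay's measured signals:

```python
import numpy as np

# Sketch of the conventional LOD estimate LOD = 3 * sigma_blank / slope,
# applied to a synthetic linear calibration (not the assay's real signals).

conc = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])       # analyte, ng/mL (hypothetical)
signal = 0.02 + 2.5 * conc                             # noise-free linear response
blank = np.array([0.019, 0.021, 0.020, 0.022, 0.018])  # blank replicates (hypothetical)

slope, _ = np.polyfit(conc, signal, 1)                 # calibration slope
sigma_blank = blank.std(ddof=1)                        # sample std of the blanks
lod_ng_ml = 3 * sigma_blank / slope
print(f"LOD = {lod_ng_ml * 1000:.2f} pg/mL")           # → LOD = 1.90 pg/mL
```

The same recipe applies to either readout mode; a steeper calibration slope (e.g., from catalytic signal amplification) directly lowers the estimated LOD.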
Finally, the clinical applicability of the Ptc-PA-based LFA was evaluated by detecting PSA or CEA in human serum samples. The accuracy and practicability of the signal-enhanced Ptc-PA-based LFA were further verified by detecting PSA in 50 clinical serum samples (Fig. 7j ). The detection results, summarized in Supplementary Fig. 29 , were consistent with the electrochemical detection results, with correlation coefficients of 0.99 and 0.97 before and after catalytic enhancement, respectively (Fig. 7k, l and Supplementary Fig. 30 ). Besides, we also validated the accuracy and practicality by testing 30 clinical serum samples with varying levels of CEA (Fig. 7m and Supplementary Fig. 31 ). Taken together, these results highlight the excellent capability of the Ptc-PA-based LFA for cancer marker detection. Notably, with the addition of the catalytic enhancement system based on the POD-like enzymatic reaction, the sensitivity of this biodetector can exceed that of currently known detection systems, showing great potential for point-of-care cancer diagnosis.
Modulation of radiation oxidation
Artificial enzymes, as highly stable and inexpensive biocatalysts, have been widely applied in the field of biomedicine 45 , 67 , 68 , 69 , 70 , 71 , 72 . Benefiting from the excellent catalytic activity of Ptc-PA, its inhibition of oxidative damage was further investigated in biological systems. Ptc-PA showed almost no toxicity to several common cell types (CHO, HT22, BV2, MA-c, and NCM460 cells) at concentrations up to 300 μg/mL, indicating favorable biosafety (Fig. 8a and Supplementary Fig. 32 ). The survival of H 2 O 2 - or LPS-treated CHO cells was sharply reduced to 65%–68% (Fig. 8b and Supplementary Fig. 33 ), while the survival of Ptc-PA-treated cells was remarkably increased to 90%. In particular, the survival rate of cells treated with Ptc-PA after exposure to 4 Gy gamma rays was more than 90%, whereas the viability in the radiation group was significantly reduced to 66% (Fig. 8b ). This was attributed to the superior enzyme-like catalytic activity of Ptc-PA 73 , 74 . Fluorescence staining at the cellular level and the corresponding flow cytometry results also demonstrated the ability of Ptc-PA to effectively scavenge free radicals (Fig. 8c–g and Supplementary Fig. 34 ). Exposure to high levels of H 2 O 2 and LPS can cause cytotoxicity and impair normal cell function, triggering apoptosis and necrosis. To examine the influence of Ptc-PA on the functionality of damaged cells, we conducted cell cycle and apoptosis assessments. Our findings indicated that Ptc-PA mitigates the accumulation of S-phase-arrested cells induced by H 2 O 2 or LPS, suggesting relief of the inhibition of DNA replication (Supplementary Fig. 35 ). Concurrently, Ptc-PA intervention also ameliorated apoptosis (Fig. 8h, i ). These results suggest that Ptc-PA can rescue cells from oxidative damage and has potential applications in the oxidative regulation of biological systems.
a Survival rate of CHO cells after coculture of Ptc-PA with different concentrations (3, 6, 12, 25, 50, 100, 200, 300 μg/mL) for 24 h and 48 h ( n = 6 per group). b In vitro protection effect of Ptc-PA with different concentrations ( n = 3 per group). Data are presented as mean ± SEM, analyzed by one-way ANOVA with one-sided Tukey’s multiple comparisons test (the p values are shown). Fluorescence microscopic images of intracellular ( c ) ROS and ( d ) O 2 ·− levels under different conditions. Experiments were repeated independently five times with similar results. e Fluorescence quantification of CHO cells staining for ROS and O 2 •− ( n = 5 per group). Data are presented as mean ± SEM, analyzed by one-way ANOVA with one-sided Tukey’s multiple comparisons test (the p values are shown). Quantitative analysis of intracellular ( f ) ROS and ( g ) O 2 ·− levels under different conditions by flow cytometry. h Cell apoptosis of CHO cells stimulated by 800 μM H 2 O 2 and 1 mg/mL LPS with and without Ptc-PA treatment. i Statistical diagram of early apoptosis and late apoptosis obtained from cell apoptosis test ( n = 3 per group). Data are presented as mean ±SEM, analyzed by one-way ANOVA with one-sided Tukey’s multiple comparisons test (the p values are shown). j Schematic diagram of Ptc-PA modulated radiation-induced oxidative damage (Created with BioRender.com). k Survival percentage of irradiated mice with or without treatment of Ptc-PA, as well as Amifostine as a control. l The level of WBC and PLT of mice with or without the treatment of Ptc-PA ( n = 3 per group). Data are presented as mean ±SEM, analyzed by one-way ANOVA with one-sided Tukey’s multiple comparisons test (the p values are shown). m The level of DNA and BMNC of mice with or without the treatment of Ptc-PA ( n = 5 per group). Data are presented as mean ± SEM, analyzed by one-way ANOVA with one-sided Tukey’s multiple comparisons test (the p values are shown). 
The level of ( n ) Lung SOD, Liver SOD, ( o ) Lung MDA and Liver MDA of mice with or without the treatment of Ptc-PA ( n = 3 per group). Data are presented as mean ±SEM, analyzed by one-way ANOVA with one-sided Tukey’s multiple comparisons test (the p values are shown).
With the application of nuclear technology, health problems caused by radiation are attracting rising concern. Therefore, the biological effect of Ptc-PA on radiation-induced oxidation was investigated (Fig. 8j ). Toxicological assessments are indispensable for the biological application of nanomaterials, offering valuable insights into their biosafety profiles. The body weight of mice increased steadily within 28 days after intraperitoneal injection of Ptc-PA (Supplementary Fig. 36a ). The major organ indices and routine blood parameters of the mice were at normal levels (Supplementary Figs. 36b and 37 ). Pathological studies showed that intraperitoneal injection of Ptc-PA did not cause toxicity to the major organs of mice (Supplementary Fig. 38 ). In conclusion, Ptc-PA possesses favorable biosafety and potential for biomedical application. In terms of the gold-standard index of radioprotective effect, the survival rate of irradiated mice treated with Ptc-PA increased from 20% to 80% compared with the untreated group, reaching the level of the clinical adjuvant Amifostine (Fig. 8k ). This indicates that the radioprotective effect of Ptc-PA is comparable to that of clinical adjuvants. In addition, the corresponding routine blood indices were significantly restored (Fig. 8l and Supplementary Fig. 39 ). The changes induced by Ptc-PA in the characteristic radiation markers, total bone marrow DNA and bone marrow mononuclear cells (BMNC), reflect exceptional in vivo regulation of radiation-induced oxidation 75 , 76 . Ptc-PA effectively protected the hematopoietic system of mice and improved the hematopoietic function of irradiated mice (Fig. 8m ). Variations in biomolecules such as SOD and malondialdehyde (MDA) reflect the degree of oxidative damage in the body 43 , 54 , 74 , 77 . The SOD and MDA contents in the liver and lung of irradiated mice were likewise restored to normal levels, demonstrating the protective effect of Ptc-PA in irradiated mice (Fig. 8n, o ).
In brief, Ptc-PA, as an excellent biocatalyst, has shown notable modulation of oxidative damage in the organism, revealing its great potential for oxidative stress-related biological applications.
In summary, we have systematically investigated the atomic strain-activity correlation during the atomic structure evolution from Pt 1 and Pt n to Ptc, elucidating the efficient enzyme-like catalytic mechanism. Surface Pt clusters of atomic precision maximize the exposure of catalytically active sites. Combining structural analysis and quantum mechanical calculations of Ptc-PA suggests that tensile strain effectively promotes the emergence of Pt vacancies and active low-coordination sites. Besides, the upshift of the d -band center and the generation of high-energy charge carriers play an equally important role in regulating the enzyme-like catalytic activity. Ptc-PA exhibits ~10 4 -fold and 61.5-fold higher POD-like and CAT-like activities than natural POD and CAT, respectively. In addition, the SOD-like activity of Ptc-PA is equivalent to that of the natural SOD enzyme. Biological results reveal that, owing to the notable POD-like activity of Ptc-PA, the catalytic colorimetric assay achieves an LOD 132 times lower than the naked-eye limit, enabling sensitive clinical diagnosis of cancer in patients. Moreover, Ptc-PA significantly modulates radiation-mediated oxidative damage and improves survival. Our work provides a reliable strategy for designing biocatalysts with ultrahigh catalytic activity.
Chemicals
All chemicals were commercially available at the highest purity and used without further treatment. Gold chloride (HAuCl 4 ·3H 2 O, ≥99.9%), potassium tetrachloroplatinate (K 2 PtCl 4 , ≥99.9%), sodium tetrachloropalladate (Na 2 PdCl 4 , ≥99.99%), glutathione (GSH, ≥98%), cysteine (Cys, ≥99%), ascorbic acid (AA, ≥99.99%) and PVP (K30, Mw = 40,000, ≥99.99%) were purchased from Aladdin. Phosphate-buffered saline (PBS) buffer (0.01 M, pH = 7) and bovine serum albumin (BSA, ≥98%) were purchased from Solarbio. The PBS or BSA solution was adjusted to pH 5 with 0.1 M HCl (≥99.99%) before use. Ultrapure water (18.2 MΩ·cm) was used for all the experiments. Nitrocellulose (NC) membrane, glass fiber membrane, polyvinyl chloride (PVC) backing card, and absorbent pad were obtained from Shanghai Kinbio Technology Co., Ltd (Shanghai, China). TMB (99%) single-component substrate solution and 30% H 2 O 2 (99.8%) were purchased from Sigma-Aldrich Chemical Co., Ltd (St. Louis, USA). Goat anti-rabbit IgG antibody (bs-0295G, 0.5 mg/mL), goat anti-mouse IgG antibody (bs-0296G, 0.5 mg/mL), PSA, and CEA were all obtained from Beijing Bioss Biotechnology Ltd. PSA capture antibody (Ab1, SDT-195-50, 1 mg/mL) and PSA labeled antibody (Ab2, SDT-195-69, 1 mg/mL) were obtained from Starter Bioscience Co., Ltd. CEA capture antibody (Ab1, 3CEA-23, 0.5 mg/mL) and CEA labeled antibody (Ab2, CEA-100, 1 mg/mL) were obtained from Fapon Biotech. Human serum samples with different PSA or CEA concentrations were obtained with the patients' permission and collected by the Tianjin Medical University Affiliated General Hospital.
Materials preparation
The Ptc-PA was synthesized according to the previous literature 78 . In a typical synthesis, aqueous solutions of HAuCl 4 (20 mM, 2.5 mL), K 2 PtCl 4 (20 mM, 2.5 mL) and Na 2 PdCl 4 (20 mM, 2.5 mL), together with PVP (0.1 g), were homogenized by ultrasound, followed by the addition of AA solution (0.4 M, 1 mL) under sonication. The only difference in the synthesis of Pt single atoms (Pt 1 ), several Pt atoms (Pt n ) and atomically resolved Pt clusters (Ptc) on (Au core)@(Pd thin shell) is that the input of K 2 PtCl 4 was 10%, 20% and 40% of the original molar amount, respectively; all other conditions were identical. The whole solution was then left to stand for 6 h. The product was collected by centrifugation at 13,400 × g for 20 min, and the final product was redispersed in PBS. Pt, Au, Pd, AuPt, AuPd and PtPd were synthesized by the same method.
Materials characterization
TEM and high-resolution TEM (HRTEM) characterizations were carried out using a JEOL JEM-2100F operated at 200 kV equipped with energy-dispersive spectrometry. Aberration-corrected scanning transmission electron microscopy (AC-STEM, ARM200F, JEOL, Japan) was adopted to observe the atomic and cluster information of the NPs. The number of Pt atoms was calculated from the TEM measurements and the following equation:

N = (4/3)πr³ × N A / V 2
where N is the number of Pt atoms in Ptc-PA and r is the radius of the Pt cluster in Ptc-PA. For Ptc-PA (Pt:Pd:Au = 0.4:1:1), r = 1.225 nm; for Ptc-PA (Pt:Pd:Au = 0.5:1:1), r = 1.224 nm; for Ptc-PA (Pt:Pd:Au = 1:1:1), r = 1.228 nm. V 2 is the molar volume of Pt, V 2 = 9.1 cm 3 /mol. N A is the Avogadro constant, N A = 6.02214076 × 10 23 mol −1 .
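The atom-count estimate above can be sketched numerically. This is a minimal illustration, assuming the cluster is a solid sphere of bulk-density Pt; only the radii and the molar volume come from the text.

```python
import math

AVOGADRO = 6.02214076e23   # Avogadro constant, atoms/mol
V_MOLAR_PT = 9.1           # molar volume of Pt used in the text, cm^3/mol

def pt_atom_count(r_nm: float) -> float:
    """Estimate the number of Pt atoms in a spherical cluster of radius r_nm:
    N = (4/3) * pi * r^3 * N_A / V_molar, with r converted from nm to cm."""
    r_cm = r_nm * 1e-7
    cluster_volume = (4.0 / 3.0) * math.pi * r_cm ** 3   # cm^3
    return cluster_volume * AVOGADRO / V_MOLAR_PT

# radii reported for the three Pt:Pd:Au feed ratios
for ratio, r in [("0.4:1:1", 1.225), ("0.5:1:1", 1.224), ("1:1:1", 1.228)]:
    print(f"Pt:Pd:Au = {ratio}: r = {r} nm -> N ≈ {pt_atom_count(r):.0f} atoms")
```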
The XAFS spectra of the Au L 3 -edge and Pt L 3 -edge in Ptc-PA were collected at the Beijing Synchrotron Radiation Facility. The XAFS results were analyzed with the ATHENA and ARTEMIS modules of the IFEFFIT software package. XRD patterns were recorded with a Rigaku Rint 2500 diffractometer with monochromated Cu Kα radiation. UV–vis absorption spectra were recorded on a Shimadzu 3600 UV–Vis–NIR spectrophotometer. XPS analyses were carried out using a ThermoFisher Scientific spectrometer with a monochromatic Al Kα X-ray source at 300 W operating power. The content of metallic elements in the NPs was determined by 7900 ICP-MS (Agilent, US).
Enzyme-like activity test
POD-like activity test
POD-like activity of Ptc-PA was determined by a colorimetric method. First, the working solution of the TMB substrate color development kit (Sbjbio, Nanjing, China) was prepared according to the provided instructions. Subsequently, Ptc-PA NPs (20 μL) and working solution (180 μL) were mixed in a 96-well plate, and the absorbance at 652 nm (characteristic absorption peak of oxidized TMB) was measured at different NP concentrations and reaction times. The testing procedure for HRP followed the same methodology. For the kinetic tests, the reaction rates of the samples were measured using UV–vis spectroscopy at different concentrations of TMB or H 2 O 2 . In brief, Ptc-PA (2 μL, 2.5 mg/mL), TMB (2 μL, 0–800 mM), H 2 O 2 (2 μL, 50 mM), and acetic acid buffer solution (198 μL, pH = 4.5) were added to a 96-well plate. The absorbance of the reaction mixture at 652 nm was measured immediately using UV–vis spectroscopy and recorded over time. To determine the reaction rate at different concentrations of H 2 O 2 (0–25 mM), the TMB concentration was set to 800 mM. The initial reaction rate was calculated in Microsoft Excel. The maximum reaction velocity ( V max ) and Michaelis–Menten constant ( K m ) were calculated using the Michaelis–Menten equation:

V = V max [S] / ( K m + [S])
The kinetic constant ( K cat ) was calculated based on the equation:

K cat = V max / [ E ]
where V represents the reaction rate and V max represents the reaction rate when the catalyst is saturated with substrate. K m , the Michaelis–Menten constant, is the substrate concentration at which the reaction rate is half of V max . [ E ] represents the particle concentration of Ptc-PA used in the catalytic reaction.
The [ E ] was calculated based on the equation:

[ E ] = Au ICP-MS / ( m Au × N A )
where [ E ] is the particle concentration of Ptc-PA used in the catalytic reaction and Au ICP-MS is the mass concentration of Au in the reaction system measured by ICP-MS. N A is the Avogadro constant, N A = 6.02214076 × 10 23 mol −1 . m Au is the mass of an individual gold particle, calculated based on the equation:

m Au = (4/3)πr³ρ
where r is the radius of an individual gold particle and ρ = 19.3 g/cm 3 is the density of gold.
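The kinetic workflow above can be sketched in a few lines. This is a dependency-free illustration, not the authors' Excel/Origin pipeline: it recovers V max and K m from noiseless synthetic rate data via the Lineweaver–Burk linearization (nonlinear least squares on the Michaelis–Menten equation is preferable for real, noisy data), then converts an assumed ICP-MS Au mass concentration into the particle concentration [E] and K cat. All numeric inputs below are hypothetical.

```python
import math

def lineweaver_burk_fit(S, v):
    """Recover Vmax and Km from (substrate, rate) pairs using the
    double-reciprocal linearization  1/v = (Km/Vmax)*(1/[S]) + 1/Vmax."""
    x = [1.0 / s for s in S]
    y = [1.0 / r for r in v]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    v_max = 1.0 / intercept
    k_m = slope * v_max
    return v_max, k_m

def particle_concentration(c_au_g_per_L, r_cm, rho=19.3, n_a=6.02214076e23):
    """[E] in mol/L from the Au mass concentration:
    m_Au = (4/3)*pi*r^3*rho ;  [E] = c_Au / (m_Au * N_A)."""
    m_au = (4.0 / 3.0) * math.pi * r_cm ** 3 * rho   # grams per particle
    return c_au_g_per_L / (m_au * n_a)

# synthetic, noiseless data generated from assumed Vmax = 2.0, Km = 0.5 mM
S = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
v = [2.0 * s / (0.5 + s) for s in S]
v_max, k_m = lineweaver_burk_fit(S, v)          # recovers 2.0 and 0.5
k_cat = v_max / particle_concentration(1e-3, 15e-7)  # hypothetical inputs
print(f"Vmax = {v_max:.3f}, Km = {k_m:.3f} mM, Kcat = {k_cat:.3g}")
```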
CAT-like activity test
Ptc-PA at different concentrations (0–10 ng/μL) was added to H 2 O 2 (40 mM), and the resulting absorbance changes at 240 nm were measured using a UV–vis spectrometer to evaluate its CAT-like activity. For kinetic tests, the reaction rates of Ptc-PA for the decomposition of H 2 O 2 at different concentrations (0–50 mM) were measured using the kinetic mode of the UV–vis spectrophotometer. The initial reaction rate was calculated in Microsoft Excel, and V max and K m were then determined from the curve fitted in Origin.
SOD-like activity test
According to the protocol provided with the SOD Assay Kit (Dojindo, Japan), appropriate volumes of the assay working solution and enzyme working solution were prepared. For the experimental group, Ptc-PA at different concentrations (0–0.5 mg/mL, 20 μL), working solution (200 μL) and enzyme working solution (20 μL) were added sequentially to a 96-well plate. After incubation at 37 °C for 20 min, the absorbance at 450 nm (A 450 ) was measured using a microplate spectrophotometer (CMax Plus, Molecular Devices). The elimination rate of superoxide was calculated by quantifying the decrease in A 450 .
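The elimination-rate calculation can be sketched as below. This is a simplified form of the WST-8 readout (the fractional decrease in A 450 relative to a Ptc-PA-free blank); the kit's full formula also subtracts reagent blanks, and all readings here are hypothetical.

```python
def superoxide_elimination_rate(a450_blank: float, a450_sample: float) -> float:
    """Simplified WST-8 inhibition rate (%): formazan absorbance at 450 nm
    falls as superoxide is scavenged, so the elimination rate is the
    fractional decrease relative to the scavenger-free blank."""
    return (a450_blank - a450_sample) / a450_blank * 100.0

# hypothetical readings: blank without Ptc-PA vs. wells with increasing Ptc-PA
blank = 0.82
for conc, a450 in [(0.05, 0.61), (0.1, 0.44), (0.5, 0.20)]:
    rate = superoxide_elimination_rate(blank, a450)
    print(f"{conc} mg/mL Ptc-PA: elimination = {rate:.1f}%")
```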
GPx-like activity test
The GPx-like activity of Ptc-PA was determined following the previous literature. Briefly, a reaction mixture containing 200 μM H 2 O 2 , 2 mM GSH, 200 μM NADPH, 1.7 units mL −1 GR, and Ptc-PA (0–10 ng/μL) was prepared in 200 μL of neutral PBS buffer. The change in absorbance at 340 nm, which reflects the concentration of NADPH, was recorded by the UV–vis spectrometer.
•OH scavenging test
The •OH scavenging process was investigated using an ESR spectrometer (Bruker EMX plus, Germany). Initially, a mixture containing 5 mM H 2 O 2 and 20 mM FeSO 4 was employed to generate •OH radicals, which were then captured by 50 mM 5-tert-butoxycarbonyl-5-methyl-1-pyrroline N-oxide (BMPO) to produce the spin adduct (BMPO/•OH), which gives four peaks in the ESR spectrum. The ability to scavenge •OH was determined by detecting the change in peak intensity before and after the addition of Ptc-PA.
O 2 •– scavenging test
The O 2 •– scavenging process was also examined with the ESR spectrometer (Bruker EMX plus, Germany). We first used 2.5 mM KO 2 and 3.5 mM 18-crown-6 to generate stable O 2 •– , which was then captured by 25 mM 5-(diethoxyphosphoryl)-5-methyl-1-pyrroline-N-oxide (DEPMPO) to produce the spin adduct (DEPMPO/O 2 •– ), which gives six peaks in the ESR spectrum. The ability to scavenge O 2 •– was determined by detecting the change in peak intensity before and after the addition of Ptc-PA.
Electrochemical test
The electrochemical properties of the materials were characterized mainly by cyclic voltammetry on an electrochemical workstation (Shanghai Chenhua), employing a three-electrode system (GC electrode as the working electrode, Pt wire as the counter electrode, and saturated calomel electrode as the reference electrode). Ptc-PA (0.5 mg/mL, 20 μL) was dropped onto the surface of the GC electrode and dried at room temperature. After that, 3 μL of perfluorinated sulfonic acid (5% w/w) was added to serve as a waterproof protective layer and dried further at room temperature. During the drying period, N 2 was bubbled uniformly through the electrolyte (0.1 M PBS) for 30 min. For the H 2 O 2 reduction reaction, 50 µL of 30% H 2 O 2 stock solution was added to the N 2 -saturated electrolytic cell. Subsequently, the electrode assembly was connected and CV scans were performed at room temperature at a scan rate of 50 mV/s. The electrochemical test for the ORR required the same sample preparation procedure, except that O 2 -saturated electrolyte without H 2 O 2 replaced the N 2 -saturated electrolyte. The CV scans were performed directly, and the control group was tested using bare platinum-carbon electrodes.
DFT calculations
The DFT calculations were performed using the Vienna ab initio Simulation Package (VASP 6.1.2) 79 , 80 . The projector augmented wave method was employed to represent the elemental core and valence electrons 81 . To estimate the exchange-correlation potential energy, the Perdew–Burke–Ernzerhof generalized gradient approximation (GGA-PBE) functional was utilized, and the cutoff energy was set to 520 eV for the plane-wave basis 82 . The electron-ion interaction was described using a norm-conserving pseudopotential, where the valence-electron configurations for Au, Pt and Pd atoms were 5 d 10 6 s 1 , 5 d 9 6 s 1 and 4 d 10 , respectively. The simulation employed a seven-layer supercell arranged along the (111) crystal plane, comprising three layers of Au, two layers of Pd, and two layers of Pt. The supercell had dimensions of 8.55 Å × 5.70 Å, accommodating a total of 42 atoms. To avoid interaction between layers caused by periodicity and to enhance the reliability and accuracy of the calculations, a vacuum distance of 35 Å was implemented along the z-direction, perpendicular to the surface. In addition, the Monkhorst–Pack scheme was employed for all Brillouin zone integrations 83 , and a 5 × 7 × 1 k-mesh was utilized for relaxation calculations. For structural relaxation, the convergence criteria were set to 1 × 10 −6 eV for energy and 0.02 eV/Å for force. Dipole corrections were applied in all the DFT calculations. The calculations, with or without strain, allowed for uniform expansion (or contraction) of the (111) lattice in all three Cartesian directions while excluding corrections to the interlayer distance. Some post-processing wave function analyses, including ESP, were performed using Multiwfn software 84 . In this work, the binding energies of different structures were calculated by the following equation:

E b = E A+B − E A − E B
where A is Ptc-PA or Pt and B is H 2 O 2 ; E A+B , E A and E B are the total energies of the combined A–B system, species A and species B, respectively.
The d-band center of different structures was calculated by the following equation:

ε d = ∫ x ρ( x ) d x / ∫ ρ( x ) d x (integrated up to the Fermi level)
where ρ( x ) represents the distribution of the electronic density of states as a function of energy x .
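The d-band-center integral can be sketched numerically. This is a toy illustration on a synthetic Gaussian d-band; in practice ρ(x) would come from the projected DOS in the VASP output, and the grid and band parameters below are assumptions.

```python
import math

def d_band_center(energies, dos, e_fermi=0.0):
    """d-band center as the DOS-weighted mean energy of occupied states:
    eps_d = ∫ x*rho(x) dx / ∫ rho(x) dx, integrated up to the Fermi level
    with the trapezoidal rule."""
    num = den = 0.0
    for i in range(len(energies) - 1):
        if energies[i + 1] > e_fermi:
            break  # stop at the Fermi level
        dx = energies[i + 1] - energies[i]
        num += 0.5 * (energies[i] * dos[i] + energies[i + 1] * dos[i + 1]) * dx
        den += 0.5 * (dos[i] + dos[i + 1]) * dx
    return num / den

# toy DOS: a Gaussian d-band centered at -2.0 eV, well below E_F = 0
grid = [-8.0 + 0.01 * i for i in range(801)]          # -8 .. 0 eV
rho = [math.exp(-((e + 2.0) ** 2) / 0.5) for e in grid]
print(f"d-band center ≈ {d_band_center(grid, rho):.2f} eV")
```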
The study by He et al. demonstrated that Pt clusters exhibit enhanced catalytic activity when subjected to structural strains within the range of ±(2–4)% 10 , which is consistent with our experimental findings. Considering that different structures and strains can significantly influence the electronic structure and catalytic performance of materials, we chose to simplify the variables by setting both tensile and compressive strains to 3%. All the structural strains were applied in the [110] direction. Notably, strain was applied to the optimized structure after the original modeling in order to investigate its impact on the electronic structure and catalytic activity of these clusters.
PSA and CEA detection
Conjugation of Ab2 to Ptc-PA
The conjugation of Ab2 to the surface of Ptc-PA was achieved via a simple physical adsorption procedure. First, 10 μL of Ab2 (1 mg/mL) was added to 1 mL of the prepared Ptc-PA solution (0.4 mg/mL) and incubated for 1 h at room temperature. Subsequently, BSA solution (100 μL, 10 mg/mL) was added to the mixture as a blocking agent. After another 1 h, centrifugation (10,621 × g, 10 min) was performed to recover the final product (Ptc-PA-Ab2 probe), which was then redispersed in PBS (100 μL, pH 7.4). Finally, the Ptc-PA-Ab2 probe was stored at 4 °C for further experiments.
Fabrication of the Ptc-PA-based LFA test strip
The Ptc-PA-based LFA test strip (3.5 mm width) contains five main parts: an NC membrane with a test line (T-line) and a control line (C-line), a sample pad, a conjugate pad, a PVC backing card, and an absorbent pad. The conjugate pad was pre-blocked with 4 mL of immune buffer (5 mM PVP-10000, 0.15 M sucrose, 0.45 mM BSA, 2% Tween-20 in 0.01 M PBS) and dried at 37 °C for 8 h. Ab1 (0.5 mg/mL) and goat anti-rabbit IgG antibody/anti-mouse IgG antibody (0.5 mg/mL) were dispensed onto the porous NC membrane at a jetting rate of 1.0 μL/cm. The two lines were 4 mm apart, and the membrane was dried completely at ambient temperature. Finally, the different pads were assembled on the PVC backing card with an overlap of ~2 mm before being cut into 3-mm strips with a guillotine cutter. These fabricated strips were stored in a refrigerator at 4 °C for the subsequent assays.
Protocols for the Ptc-PA-based LFA
A series of PSA or CEA standard solutions with different concentrations (6.4, 3.2, 1.6, 0.8, 0.4, 0.2, 0.1, 0.05, 0.025, 0.0125, 0.00625, 0.00313, 0.00155, 0.0007525 and 0 ng/mL) were prepared in assay buffer (0.01 M PBS, pH 7.4). Briefly, 3 μL of the Ptc-PA-Ab2 probe was incubated with 60 μL of PSA or CEA solution and dispensed onto the sample pad to migrate under capillary action for 20 min. Optical images of the strips were then recorded directly using an iPhone 13 Pro smartphone-based imaging device, and the pixel intensity within the T-line regions was calculated using ImageJ software. In the catalytic enhancement mode, the T-lines of the strips were clipped and transferred to substrate solution (a mixture of TMB single-component substrate solution and 3% H 2 O 2 ). Owing to its POD-like catalytic activity, Ptc-PA provided a catalytic signal, obtained by recording the absorbance at 652 nm with a multimode plate reader after incubation at room temperature for 5 min. The specificity experiment for the Ptc-PA-LFA involved testing PSA or CEA (3.4 ng/mL) alongside other proteins. All reactions were repeated three times.
Detection of the clinical samples
Human serum samples with different concentrations of PSA or CEA were obtained with the patients' permission and collected by the Tianjin Medical University Affiliated General Hospital. Serum samples from 20 healthy men, 40 male patients with prostate cancer, and 20 patients with colorectal cancer, aged between 30 and 65 years, were included in this project. This study was approved by the Tianjin Medical University Affiliated General Hospital, and all blood donors signed informed consent (IRB2020-KY-097). All assays were conducted in adherence to legal requirements and ethical guidelines. Different concentrations of PSA or CEA standards (3.2, 1.6, and 0.8 ng/mL) were introduced into four healthy serum samples by the standard addition method. The serum samples obtained from patients with varying concentrations were analyzed using the designed biosensor and compared with the results obtained from a standard electrochemical sensor.
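The standard-addition check above amounts to a spike-recovery calculation. The sketch below is illustrative only; the serum readings are hypothetical, and only the spiked concentrations (3.2, 1.6, 0.8 ng/mL) come from the text.

```python
def recovery_percent(measured_spiked: float, measured_base: float,
                     added: float) -> float:
    """Spike recovery for a standard-addition check:
    recovery (%) = (C_spiked - C_base) / C_added * 100,
    where C_base is the analyte level measured in the unspiked serum."""
    return (measured_spiked - measured_base) / added * 100.0

# hypothetical serum readings (ng/mL): unspiked baseline, then three spikes
base = 0.42
for added, measured in [(3.2, 3.55), (1.6, 1.98), (0.8, 1.25)]:
    rec = recovery_percent(measured, base, added)
    print(f"added {added} ng/mL -> recovery {rec:.1f}%")
```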
In vitro experiments
Chinese hamster ovary CHO cells and mouse hippocampal neuronal HT22 cells were obtained from the Institute of Radiation Medicine, Chinese Academy of Medical Sciences and Peking Union Medical College. Mouse microglia BV2 cells and mouse astrocytes-cerebellar MA-c cells were obtained from Tianjin Huanhu Hospital. Human colonic epithelium NCM460 cells were obtained from Tianjin Medical University General Hospital. NCM460 cells were cultured in McCoy's 5A (Gibco) with 10% fetal bovine serum (FBS, BI) at 37 °C with 5% CO 2 . The other cells were cultured in Dulbecco's Modified Eagle Medium (DMEM, Gibco) with 10% fetal bovine serum (FBS, BI) at 37 °C with 5% CO 2 . Penicillin (100 U/mL) and streptomycin sulfate (100 mg/mL) were added as required for cell growth.
Cytotoxicity assay
The above cells in the logarithmic growth phase were seeded into sterile 96-well plates at a density of 4 × 10 3 cells per well. Sterile PBS was added around the perimeter of the plates to replenish water evaporated during culture. After seeding, the cells were incubated overnight in a sterile incubator (37 °C, 5% CO 2 ) until fully attached. When the cell density reached 60%, the cells were co-incubated with Ptc-PA at different concentrations for another 24 h or 48 h. The cells were then washed with sterile PBS, and serum-free medium was added. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) at a concentration of 5 mg/mL was added to each well and incubated for 2.5 h. After discarding the supernatant, dimethyl sulfoxide was added to dissolve the formazan precipitate, and the absorbance at 490 nm was measured for each well.
Cell viability
CHO cells (4 × 10 3 ), HT22 cells (4 × 10 3 ), BV2 cells (4 × 10 3 ) and MA-c cells (4 × 10 3 ) were cultured in 96-well plates. 1 mg/mL LPS or 500 µM H 2 O 2 was added to cells at ~60% confluence, and stimulation continued for 4 h. The original medium was discarded and replaced with medium containing Ptc-PA at different concentrations for overnight incubation. The wells were washed with PBS to remove serum, and the cells were then incubated with 5 mg/mL MTT for 2.5 h. Cell viability was compared between the different treatment groups by analyzing the absorbance at 490 nm. To assess the effect of Ptc-PA on cell viability in response to radiation, CHO cells were seeded into 96-well plates at a density of 4 × 10³ per well and cultured overnight. The cells were co-incubated with Ptc-PA at various concentrations (6–200 μg/mL) for 1 h, after which the plates were exposed to 4 Gy gamma rays and incubated for 24 h at 37 °C. The medium was then removed and the cells were washed twice with PBS. Cell viability was assayed according to the instructions of the Cell Counting Kit-8 (Beyotime, C0037).
Measurement of intracellular oxidative stress
CHO cells (2 × 10 5 ) were seeded into 6-well plates and cultured overnight in an incubator until fully attached. 0.5 mg/mL LPS and 200 µM H 2 O 2 were then used to stimulate the cells for 6 h, followed by intervention with Ptc-PA (25 μg/mL). The cells were then placed in an incubator for 24 h and washed 3 times with PBS. The culture medium was replaced with 5 µM DCFH (Beyotime, S0033S) or 25 µM DHE (Beyotime, S0063) probe solution and incubated for 25 min at 37 °C in the dark to determine the total ROS level or O 2 •– level, respectively. The probe solution was removed, and the cells were washed again 3 times with PBS. Fluorescence images of cells from the different treatment groups were captured using a fluorescence microscope (EVOS, AMG), and quantitative analysis was conducted with a flow cytometer (BD Accuri C6).
Cell apoptosis
The measurement of apoptosis was conducted using the FITC Annexin V Apoptosis Detection Kit I (BD, No. 556547). CHO cells were cultured overnight in 6-well plates. When growth reached 60% confluence, cells were stimulated with 800 μM H 2 O 2 or 1 mg/mL LPS overnight and then cultured in medium containing 25 µg/mL Ptc-PA for another 24 h. Cells were detached with trypsin, collected into centrifuge tubes, centrifuged at 1000 × g for 5 min, and the supernatant was discarded. The cells were washed twice with PBS, centrifuged at 1000 × g for 5 min, and the supernatant was removed. The cells were then resuspended in an appropriate volume of 1× binding buffer to adjust the cell count to 10 6 cells/mL. 5 μL of FITC Annexin V was added to 100 μL of cell suspension and stained in the dark for 5 min. Afterward, 5 μL of propidium iodide (PI) was added, and the cells were stained for 15 min. The final assay was performed using flow cytometry (FACSAria III).
Cell cycle
The cell cycle was determined with a quantitative DNA content assay (Solarbio, CA1510). CHO cells were cultured overnight in 6-well plates. When growth reached 60% confluence, cells were stimulated with 800 μM H 2 O 2 or 1 mg/mL LPS overnight and then cultured in medium containing 25 µg/mL Ptc-PA for another 24 h. Cells were washed with PBS and collected to prepare single-cell suspensions. Cells were fixed with 70% ethanol overnight and washed once with PBS. The cells were centrifuged at 1000 × g for 5 min, and the supernatant was removed. After that, the cells were resuspended in RNase solution and incubated at 37 °C for 30 min. After cooling, PI dye solution (400 μL) was added to the sample, and the mixture was incubated at 37 °C in the dark for another 30 min. Flow cytometry analysis (BD Accuri C6) was performed on the sample.
In vivo treatment
All animal procedures were approved by the Institute of Radiation Medicine, Chinese Academy of Medical Sciences and Peking Union Medical College (IRM-DWLI-2021107). Efforts were made to reduce the number of animals used and minimize their suffering.
Toxicological studies
SPF male C57BL/6J mice (6–8 weeks) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. The mice were housed under controlled conditions at a constant temperature of 21–23 °C, relative humidity between 45 and 60%, and a 12-h light-dark cycle. Food and water were available ad libitum. The mice were randomly divided into 2 groups (control group and Ptc-PA group, n = 3 per group). Each mouse was injected with 200 µL of saline or Ptc-PA solution (5 mg/mL), and body weights were recorded daily post-injection. On day 28, blood was collected for hematological analysis. The major organs (heart, liver, spleen, lung, and kidney) were collected, weighed, fixed in 10% neutral buffered formalin for 24–48 h, embedded in paraffin, sliced into 4-μm-thick sections, and subjected to pathological hematoxylin and eosin (H&E) staining.
Animal models
SPF male C57BL/6J mice (7–9 weeks, 21–23 g) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. The mice were randomly assigned to four groups ( n = 25 per group). Both the Con group and the Rad group were intraperitoneally injected with 200 µL of saline, while the Rad + Ptc-PA groups were intraperitoneally injected with 50 mg/kg Ptc-PA. Amifostine was injected intravenously into mice as a control. Thirty minutes post-injection, mice were anesthetized with sodium pentobarbital (1%, 50 mg/kg) by intraperitoneal injection (i.p.). All groups of mice were then exposed to 7.5 Gy whole-body gamma irradiation. The irradiated mice were housed in a standard feeding environment with clean feed and water. Survival rates and weight changes were recorded daily. Blood samples were collected for routine blood tests on day 30, and serum was separated by centrifugation for blood biochemistry analysis.
BMNC and bone marrow DNA measurements
The mice were euthanized 7 days after irradiation and all organs were harvested ( n = 5 per group). Both femurs were resected and cleaned of connective tissue. The bone marrow cells were flushed into PBS with a syringe. To prevent interference from tissue debris and bone fragments, the samples were filtered through a 200-mesh nylon filter and then counted with a blood cell counter to compare the number of BMNC between the groups of mice. For bone marrow DNA measurements, the bone marrow cells were rinsed with 5 mM calcium chloride solution, and the mixture was gently stirred to obtain a single-cell suspension. The suspension was then refrigerated at 4 °C for 2 h. The samples were centrifuged at 700 × g for 15 min. After centrifugation, the supernatant was removed and the pellet was resuspended in 5 mL of 0.2 M perchloric acid solution and heated in a 90 °C water bath for 15 min. The samples were cooled to room temperature and filtered. Finally, the absorbance of the filtrate was measured at 268 nm using a UV–vis spectrophotometer.
Oxidative stress
Liver and lung samples were collected from mice on day 7 after radiation exposure. Tissue homogenates were prepared by adding PBS to the samples and grinding until no tissue mass remained. The homogenate was centrifuged at 10,000 × g for 10 min. The supernatant was collected and stored at −80 °C. The protein concentration of the supernatant was measured using the Enhanced BCA Protein Assay Kit (Beyotime, P0010) according to the manufacturer's instructions. The levels of MDA and SOD in the tissue samples were measured using the Lipid Peroxidation MDA Assay Kit (Beyotime, S0131S) and the Total SOD Activity Assay Kit (WST-8, Beyotime, S0101M), respectively.
Statistical methods
Data are presented as mean ± standard deviation (SD) or standard error of the mean (SEM). For multiple comparisons, one-way analysis of variance (ANOVA) with one-sided Tukey's multiple comparisons test was used to assess differences in means among groups.
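The ANOVA step above can be sketched as follows. This is a dependency-free illustration of the one-way ANOVA F statistic on hypothetical group data; in practice the post hoc Tukey comparisons would be run with a statistics package (e.g. scipy.stats.tukey_hsd).

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic:
    F = MS_between / MS_within, where
    MS_between = SS_between / (k - 1) and MS_within = SS_within / (N - k)
    for k groups and N total observations."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# three hypothetical treatment groups
f_stat = one_way_anova_f([1, 2, 3], [2, 3, 4], [5, 6, 7])
print(f"F = {f_stat:.2f}")  # F = 13.00
```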
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The data that support the findings of this study are available from the corresponding author upon request. Source data are provided with this paper.
Li, F. et al. A nuclease-mimetic platinum nanozyme induces concurrent DNA platination and oxidative cleavage to overcome cancer drug resistance. Nat. Commun. 13 , 7361 (2022).
Broto, M. et al. Nanozyme-catalysed CRISPR assay for preamplification-free detection of non-coding RNAs. Nat. Nanotechnol. 17 , 1120–1126 (2022).
Yan, R. et al. Nanozyme-based bandage with single-atom catalysis for brain trauma. ACS Nano 13 , 11552–11560 (2019).
Wang, J.-Y. et al. Hollow PtPdRh nanocubes with enhanced catalytic activities for in vivo clearance of radiation-induced ROS via surface-mediated bond breaking. Small 14 , 1703736 (2018).
Mu, X. et al. Redox trimetallic nanozyme with neutral environment preference for brain injury. ACS Nano 13 , 1870–1884 (2019).
Zhang, X. et al. Conjugated dual size effect of core-shell particles synergizes bimetallic catalysis. Nat. Commun. 14 , 530 (2023).
Strasser, P. et al. Lattice-strain control of the activity in dealloyed core–shell fuel cell catalysts. Nat. Chem. 2 , 454–460 (2010).
Rodriguez, J. A. & Goodman, D. W. The nature of the metal-metal bond in bimetallic surfaces. Science 257 , 897–903 (1992).
Hu, Q. et al. Subnanometric Ru clusters with upshifted d band center improve performance for alkaline hydrogen evolution reaction. Nat. Commun. 13 , 3958 (2022).
He, T. et al. Mastering the surface strain of platinum catalysts for efficient electrocatalysis. Nature 598 , 76–81 (2021).
Koh, S. & Strasser, P. Electrocatalysis on bimetallic surfaces: Modifying catalytic reactivity for oxygen reduction by voltammetric surface dealloying. J. Am. Chem. Soc. 129 , 12624–12625 (2007).
Mavrikakis, M., Hammer, B. & Nørskov, J. K. Effect of strain on the reactivity of metal surfaces. Phys. Rev. Lett. 81 , 2819–2822 (1998).
Wang, H. et al. Direct and continuous strain control of catalysts with tunable battery electrode materials. Science 354 , 1031–1036 (2016).
Wu, J. et al. Surface lattice-engineered bimetallic nanoparticles and their catalytic properties. Chem. Soc. Rev. 41 , 8066–8074 (2012).
Zhang, J., Yin, S. & Yin, H.-M. Strain engineering to enhance the oxidation reduction reaction performance of atomic-layer Pt on nanoporous gold. ACS Appl. Energy Mater. 3 , 11956–11963 (2020).
Atlan, C. et al. Imaging the strain evolution of a platinum nanoparticle under electrochemical control. Nat. Mater. 22 , 754–761 (2023).
Zhang, S. et al. Tuning nanoparticle structure and surface strain for catalysis optimization. J. Am. Chem. Soc. 136 , 7734–7739 (2014).
Escudero-Escribano, M. et al. Tuning the activity of Pt alloy electrocatalysts by means of the lanthanide contraction. Science 352 , 73–76 (2016).
Guan, Q. et al. Bimetallic monolayer catalyst breaks the activity–selectivity trade-off on metal particle size for efficient chemoselective hydrogenations. Nat. Catal. 4 , 840–849 (2021).
Li, P. et al. Hydrogen bond network connectivity in the electric double layer dominates the kinetic pH effect in hydrogen electrocatalysis on Pt. Nat. Catal. 5 , 900–911 (2022).
Stamenkovic, V. R. et al. Trends in electrocatalysis on extended and nanoscale Pt-bimetallic alloy surfaces. Nat. Mater. 6 , 241–247 (2007).
Zhang, L. et al. Platinum-based nanocages with subnanometer-thick walls and well-defined, controllable facets. Science 349 , 412–416 (2015).
Bu, L. et al. Biaxially strained PtPb/Pt core/shell nanoplate boosts oxygen reduction catalysis. Science 354 , 1410–1414 (2016).
Wang, X. et al. Palladium–platinum core-shell icosahedra with substantially enhanced activity and durability towards oxygen reduction. Nat. Commun. 6 , 7594 (2015).
Ping, X. et al. Activating a two-dimensional PtSe 2 basal plane for the hydrogen evolution reaction through the simultaneous generation of atomic vacancies and Pt clusters. Nano Lett. 21 , 3857–3863 (2021).
Xiong, L. et al. Octahedral gold-silver nanoframes with rich crystalline defects for efficient methanol oxidation manifesting a co-promoting effect. Nat. Commun. 10 , 3782 (2019).
Xie, J. et al. Defect-rich MoS 2 ultrathin nanosheets with additional active edge sites for enhanced electrocatalytic hydrogen evolution. Adv. Mater. 25 , 5807–5813 (2013).
Chattot, R. et al. Surface distortion as a unifying concept and descriptor in oxygen reduction reaction electrocatalysis. Nat. Mater. 17 , 827–833 (2018).
Li, Z. et al. A silver catalyst activated by stacking faults for the hydrogen evolution reaction. Nat. Catal. 2 , 1107–1114 (2019).
Willhammar, T. et al. Structure and catalytic properties of the most complex intergrown zeolite ITQ-39 determined by electron crystallography. Nat. Chem. 4 , 188–194 (2012).
Wu, G. et al. In-plane strain engineering in ultrathin noble metal nanosheets boosts the intrinsic electrocatalytic hydrogen evolution activity. Nat. Commun. 13 , 4200 (2022).
Fang, P.-P. et al. Tailoring Au-core Pd-shell Pt-cluster nanoparticles for enhanced electrocatalytic activity. Chem. Sci. 2 , 531–539 (2011).
Xi, Z. et al. Strain effect in palladium nanostructures as nanozymes. Nano Lett. 20 , 272–277 (2020).
Zhang, J. et al. Tin-assisted fully exposed platinum clusters stabilized on defect-rich graphene for dehydrogenation reaction. ACS Catal. 9 , 5998–6005 (2019).
Poerwoprajitno, A. R. et al. A single-Pt-atom-on-Ru-nanoparticle electrocatalyst for co-resilient methanol oxidation. Nat. Catal. 5 , 231–237 (2022).
Yang, S., Kim, J., Tak, Y. J., Soon, A. & Lee, H. Single-atom catalyst of platinum supported on titanium nitride for selective electrochemical reactions. Angew. Chem. Int. Ed. 55 , 2058–2062 (2016).
Yang, S. & Lee, H. Atomically dispersed platinum on gold nano-octahedra with high catalytic activity on formic acid oxidation. ACS Catal. 3 , 437–443 (2013).
Ma, H. et al. Atomically precise ag clusters for intelligent nir-ii imaging. Matter 7 , 1660–1676 (2024).
Wang, L. et al. Tracking the sliding of grain boundaries at the atomic scale. Science 375 , 1261–1265 (2022).
Zhang, B. et al. Atomic-scale insights on hydrogen trapping and exclusion at incoherent interfaces of nanoprecipitates in martensitic steels. Nat. Commun. 13 , 3858 (2022).
Ji, S. et al. Matching the kinetics of natural enzymes with a single-atom iron nanozyme. Nat. Catal. 4 , 407–417 (2021).
Huang, L., Chen, J., Gan, L., Wang, J. & Dong, S. Single-atom nanozymes. Sci. Adv. 5 , eaav5490 (2019).
Zhang, S. et al. Single-atom nanozymes catalytically surpassing naturally occurring enzymes as sustained stitching for brain trauma. Nat. Commun. 13 , 4744 (2022).
Davidson, E., Xi, Z., Gao, Z. & Xia, X. Ultrafast and sensitive colorimetric detection of ascorbic acid with Pd-Pt core-shell nanostructure as peroxidase mimic. Sens. Int. 1 , 100031 (2020).
Gao, L. et al. Intrinsic peroxidase-like activity of ferromagnetic nanoparticles. Nat. Nanotechnol. 2 , 577–583 (2007).
Xi, Z. et al. Nickel–platinum nanoparticles as peroxidase mimics with a record high catalytic efficiency. J. Am. Chem. Soc. 143 , 2660–2664 (2021).
Xia, X. et al. Pd–Ir core–shell nanocubes: a type of highly efficient and versatile peroxidase mimic. ACS Nano 9 , 9994–10004 (2015).
Nishida, N. et al. Fluorescent gold nanoparticle superlattices. Adv. Mater. 20 , 4719–4723 (2008).
De Backer, A., van den Bos, K. H. W., Van den Broek, W., Sijbers, J. & Van Aert, S. StatSTEM: an efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images. Ultramicroscopy 171 , 104–116 (2016).
Article PubMed Google Scholar
Du, K. et al. Manipulating topological transformations of polar structures through real-time observation of the dynamic polarization evolution. Nat. Commun. 10 , 4864 (2019).
Liu, G. et al. Site-specific reactivity of stepped Pt surfaces driven by stress release. Nature 626 , 1005–1010 (2024).
Li, X. et al. Ordered clustering of single atomic te vacancies in atomically thin ptte 2 promotes hydrogen evolution catalysis. Nat. Commun. 12 , 2351 (2021).
Li, H. et al. Synergetic interaction between neighbouring platinum monomers in CO 2 hydrogenation. Nat. Nanotechnol. 13 , 411–417 (2018).
Liu, H. et al. Catalytically potent and selective clusterzymes for modulation of neuroinflammation through single-atom substitutions. Nat. Commun. 12 , 114 (2021).
Khorshidi, A., Violet, J., Hashemi, J. & Peterson, A. A. How strain can break the scaling relations of catalysis. Nat. Catal. 1 , 263–268 (2018).
Aslam, U., Chavez, S. & Linic, S. Controlling energy flow in multimetallic nanostructures for plasmonic catalysis. Nat. Nanotechnol. 12 , 1000–1005 (2017).
Sundararaman, R., Narang, P., Jermyn, A. S., Goddard Iii, W. A. & Atwater, H. A. Theoretical predictions for hot-carrier generation from surface plasmon decay. Nat. Commun. 5 , 5788 (2014).
Rao, V. G., Aslam, U. & Linic, S. Chemical requirement for extracting energetic charge carriers from plasmonic metal nanoparticles to perform electron-transfer reactions. J. Am. Chem. Soc. 141 , 643–647 (2019).
Escudero-Escribano, M. et al. Pt 5 Gd as a highly active and stable catalyst for oxygen electroreduction. J. Am. Chem. Soc. 134 , 16476–16479 (2012).
Mistry, H., Varela, A. S., Kühl, S., Strasser, P. & Cuenya, B. R. Nanostructured electrocatalysts with tunable activity and selectivity. Nat. Rev. Mater. 1 , 16009 (2016).
Article ADS CAS Google Scholar
Liu, L. & Corma, A. Identification of the active sites in supported subnanometric metal catalysts. Nat. Catal. 4 , 453–456 (2021).
Vojvodic, A., Nørskov, J. K. & Abild-Pedersen, F. Electronic structure effects in transition metal surface chemistry. Top. Catal. 57 , 25–32 (2014).
Mukherjee, D., Gamler, J. T. L., Skrabalak, S. E. & Unocic, R. R. Lattice strain measurement of core@shell electrocatalysts with 4d scanning transmission electron microscopy nanobeam electron diffraction. ACS Catal. 10 , 5529–5541 (2020).
Hammer, B. & Nørskov, J. K. Electronic factors determining the reactivity of metal surfaces. Surf. Sci. 343 , 211–220 (1995).
Dhanasekaran, S. M. et al. Delineation of prognostic biomarkers in prostate cancer. Nature 412 , 822–826 (2001).
Lilja, H., Ulmert, D. & Vickers, A. J. Prostate-specific antigen and prostate cancer: prediction, detection and monitoring. Nat. Rev. Cancer 8 , 268–278 (2008).
Wang, X. et al. Eg occupancy as an effective descriptor for the catalytic activity of perovskite oxide-based peroxidase mimics. Nat. Commun. 10 , 704 (2019).
Wang, Z. et al. Accelerated discovery of superoxide-dismutase nanozymes via high-throughput computational screening. Nat. Commun. 12 , 6866 (2021).
Cao, S. et al. A library of ros-catalytic metalloenzyme mimics with atomic metal centers. Adv. Mater. 34 , 2200255 (2022).
Fan, K. et al. In vivo guiding nitrogen-doped carbon nanozyme for tumor catalytic therapy. Nat. Commun. 9 , 1440 (2018).
Huang, Y., Ren, J. & Qu, X. Nanozymes: classification, catalytic mechanisms, activity regulation, and applications. Chem. Rev. 119 , 4357–4412 (2019).
Liu, Y. et al. Integrated cascade nanozyme catalyzes in vivo ros scavenging for anti-inflammatory therapy. Sci. Adv. 6 , eabb2695 (2020).
Zhang, W. et al. Prussian blue nanoparticles as multienzyme mimetics and reactive oxygen species scavengers. J. Am. Chem. Soc. 138 , 5860–5865 (2016).
Mu, X. et al. An oligomeric semiconducting nanozyme with ultrafast electron transfers alleviates acute brain injury. Sci. Adv. 7 , eabk1210 (2021).
Zhang, X.-D. et al. Highly catalytic nanodots with renal clearance for radiation protection. ACS Nano 10 , 4511–4519 (2016).
Wang, H., Mu, X., He, H. & Zhang, X.-D. Cancer radiosensitizers. Trends Pharmacol. Sci. 39 , 24–48 (2018).
Liu, G. et al. High-throughput preparation of radioprotective polymers via Hantzsch’s reaction for in vivo x-ray damage determination. Nat. Commun. 11 , 6214 (2020).
Wang, L. & Yamauchi, Y. Strategic synthesis of trimetallic Au@Pd@Pt core−shell nanoparticles from poly(vinylpyrrolidone)-based aqueous solution toward highly active electrocatalysts. Chem. Mater. 23 , 2457–2465 (2011).
Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6 , 15–50 (1996).
Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54 , 11169–11186 (1996).
Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50 , 17953–17979 (1994).
Perdew, J. P. et al. Atoms, molecules, solids, and surfaces: applications of the generalized gradient approximation for exchange and correlation. Phys. Rev. B 46 , 6671–6687 (1992).
Fan, K. et al. Magnetoferritin nanoparticles for targeting and visualizing tumour tissues. Nat. Nanotechnol. 7 , 459–464 (2012).
Lu, T. & Chen, F. Multiwfn: a multifunctional wavefunction analyzer. J. Comput. Chem. 33 , 580–592 (2012).
Download references
Acknowledgements
This work was financially supported by the National Key Research and Development Program of China (2021YFF1200700, X.-D. Z.), the National Natural Science Foundation of China (Grant Nos. 91859101, X.-D. Z., 81971744, X.-D. Z., U1932107, X.-D. Z., 82001952, X. M., and 11804248, Y. L.), Outstanding Youth Funds of Tianjin (2021FJ-0009, X.-D. Z.), STI 2030-Major Projects (2022ZD0210200, X. M.), National Natural Science Foundation of Tianjin (Nos. 20JCYBJC00940 and 21JCYBJC00550, X. M.), the Innovation Foundation of Tianjin University, and CAS Interdisciplinary Innovation Team (JCTD-2020-08, X.-D. Z.).
Author information
These authors contributed equally: Ke Chen, Guo Li, Xiaoqun Gong.
Authors and Affiliations
Tianjin Key Laboratory of Brain Science and Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
Ke Chen, Ling Liu, Yuxing Yan, Qingshan Liu, Yang Cao, Qi Xin, Xiaoyu Mu & Xiao-Dong Zhang
Department of Physics and Tianjin Key Laboratory of Low Dimensional Materials Physics and Preparing Technology, School of Sciences, Tianjin University, Tianjin, China
Guo Li, Qinjuan Ren, Ruoli Zhao, Yuan Li, Yikai Fu, Yonghui Li, Haitao Dai, Changlong Liu, Xiaoyu Mu & Xiao-Dong Zhang
School of Life Sciences, Tianjin Engineering Center of Micro-Nano Biomaterials and Detection-Treatment Technology, Tianjin University, Tianjin, China
Xiaoqun Gong, Shuang Zhao, Ran Luo & Jin Chang
Lineberger Comprehensive Cancer Center, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
Junying Wang
Tianjin Neurological Institute, Department of Neurosurgery, Tianjin Medical University General Hospital, Tianjin, China
Yaoyao Ren, Qiong Qin, Shu Zhang & Jianning Zhang
State Key Laboratory of Medicinal Chemical Biology, Frontiers Science Centre for New Organic Matter, Tianjin Key Laboratory of Biosensing and Molecular Recognition, Research Centre for Analytical Sciences, College of Chemistry, School of Medicine and Frontiers Science Center for Cell Responses, Nankai University, Tianjin, China
Shu-Lin Liu & Peiyu Yao
Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China
Bo Zhang & Jingkai Yang
Tianjin Key Laboratory of Molecular Nuclear Medicine, Institute of Radiation Medicine Chinese Academy of Medical, Sciences and Peking Union Medical College, Tianjin, China
Contributions
X.-D. Z. and X. M. conceived and designed the experiments. X. M., Q. R., and K. C. contributed to materials synthesis and physical and chemical measurements; R. Z. and Y. L. contributed to the construction of the schematic diagram of the structure; G. L., L. L., Y. F., H. D., C. L., and Y. L. contributed to the theoretical calculations; J. W., K. C., Y. Y., and W. L. contributed to the radiation oxidation modulation experiment; S. Z., R. L., Q. L., Y. C., P. Y., S.-L. L., B. Z., J. Y., X. G., and J. C. contributed to the detection of cancer; Q. X., Y. R., Q. Q., S. Z., and J. Z. provided clinical serum samples. X. M., K. C., G. L., S. Z., and X. G. analyzed the data and prepared the manuscript. All authors discussed the results and commented on the manuscript.
Corresponding authors
Correspondence to Xiaoyu Mu or Xiao-Dong Zhang .
Ethics declarations
Competing interests.
The authors declare no competing interests.
Peer review
Peer review information.
Nature Communications thanks Hong-Kang Tian, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article.
Chen, K., Li, G., Gong, X. et al. Atomic-scale strain engineering of atomically resolved Pt clusters transcending natural enzymes. Nat. Commun. 15, 8346 (2024). https://doi.org/10.1038/s41467-024-52684-w
Received : 05 July 2023
Accepted : 19 September 2024
Published : 27 September 2024
Experimental Estimation of Sudden Torque Changes in a PMSG Using a Luenberger Time-Varying Observer
- Published: 24 September 2024
- Younes Azelhak (ORCID: 0000-0003-0518-6230)
- Damien Voyer
- Hicham Medromi
This paper proposes a new Luenberger observer designed to estimate all mechanical parameters required for controlling a permanent magnet synchronous generator (PMSG): angular position, rotational speed, mechanical torque, and jerk (the derivative of acceleration). The design is based on a fourth-order model that uses the relationship between currents and voltages to improve accuracy and robustness against noise. The observer can track sudden variations of the mechanical torque, which is not possible with a conventional Luenberger observer, Kalman filter, or sliding-mode observer. For a salient-pole machine, the system is time-varying: the observer's stability and convergence are proved using the theory of linear time-varying systems. MATLAB Simulink simulations and experiments with a 1.5 kW machine were carried out to validate the observer's performance against conventional observers. The results show that the proposed observer can accurately estimate mechanical torque variations greater than 5 N·m for jerks lasting less than 0.1 s.
Abo-Khalil, A. G., Eltamaly, A. M., Alsaud, M. S., Sayed, K., & Alghamdi, A. S. (2021). Sensorless control for pmsm using model reference adaptive system. International Transactions on Electrical Energy Systems, 31 , 12733. https://doi.org/10.1002/2050-7038.12733
Antonysamy, R. P., & Joo, Y. H. (2023). Power maximization and regulation of the super-large wind turbine system using generalized predictive approach-based torque and pitch control. International Journal of Electrical Power and Energy Systems . https://doi.org/10.1016/j.ijepes.2023.109416
Aubrée, R., Auger, F., & Dai, P. (2012). A new low-cost sensorless MPPT algorithm for small wind turbines. In 2012 1st international conference on renewable energies and vehicular technology (pp. 305–311). https://doi.org/10.1109/REVET.2012.6195288
Aubree, R., Auger, F., Macé, M., & Loron, L. (2016). Design of an efficient small wind-energy conversion system with an adaptive sensorless mppt strategy. Renewable Energy, 280–291.
Auger, F., Toudert, O. M., & Chibah, A. (2011). Design of advanced resolver - to - digital converters. Electrimacs, 6–8.
Baratieri, C. L., & Pinheiro, H. (2016). New variable gain super-twisting sliding mode observer for sensorless vector control of nonsinusoidal back-emf pmsm. Control Engineering Practice, 52 , 59–69. https://doi.org/10.1016/J.CONENGPRAC.2016.04.003
Batista, P., Petit, N., Silvestre, C., & Oliveira, P. (2017). Relaxed conditions for uniform complete observability and controllability of ltv systems with bounded realizations. IFAC-PapersOnLine, 50 , 3598–3605. https://doi.org/10.1016/J.IFACOL.2017.08.701
Bristeau, P. J., Petit, N. & Praly, L. (2010). Design of a navigation filter by analysis of local observability. In Proceedings of the IEEE conference on decision and control (pp. 1298–1305). https://doi.org/10.1109/CDC.2010.5717848
Chaoui, H., Khayamy, M., & Aljarboua, A. A. (2017). Adaptive interval type-2 fuzzy logic control for pmsm drives with a modified reference frame. IEEE Transactions on Industrial Electronics, 64 , 3786–3797. https://doi.org/10.1109/TIE.2017.2650858
Chatri, C., Labbadi, M., & Ouassaid, M. (2023). Improved high-order integral fast terminal sliding mode-based disturbance-observer for the tracking problem of pmsg in wecs. International Journal of Electrical Power and Energy Systems . https://doi.org/10.1016/j.ijepes.2022.108514
Chen, K. Y., Hu, J. S., Tang, C. H., & Shen, T. Y. (2012). A novel switching strategy for foc motor drive using multi-dimensional feedback quantization. Control Engineering Practice, 20 , 196–204. https://doi.org/10.1016/J.CONENGPRAC.2011.10.013
Chen, J., Yao, W., Ren, Y., Wang, R., Zhang, L., & Jiang, L. (2019). Nonlinear adaptive speed control of a permanent magnet synchronous motor: A perturbation estimation approach. Control Engineering Practice, 85 , 163–175. https://doi.org/10.1016/J.CONENGPRAC.2019.01.019
Comanescu, M. (2016). Speed, rotor position and load torque estimation of the PMSM using an extended dynamic model and cascaded sliding mode observers. In 2016 International symposium on power electronics, electrical drives, automation and motion, SPEEDAM 2016 (pp. 98–103). https://doi.org/10.1109/SPEEDAM.2016.7525806
Elmas, C., & Ustun, O. (2008). A hybrid controller for the speed control of a permanent magnet synchronous motor drive. Control Engineering Practice, 16 , 260–270. https://doi.org/10.1016/J.CONENGPRAC.2007.04.016
Genduso, F., Miceli, R., Rando, C., & Galluzzo, G. R. (2010). Back emf sensorless-control algorithm for high-dynamic performance pmsm. IEEE Transactions on Industrial Electronics, 57 , 2092–2100. https://doi.org/10.1109/TIE.2009.2034182
Hou, Q., & Ding, S. (2021). Gpio based super-twisting sliding mode control for pmsm. IEEE Transactions on Circuits and Systems II: Express Briefs, 68 , 747–751. https://doi.org/10.1109/TCSII.2020.3008188
Kia, S. H., Henao, H., & Capolino, G. A. (2010). Torsional vibration assessment using induction machine electromagnetic torque estimation. IEEE Transactions on Industrial Electronics, 57 , 209–219. https://doi.org/10.1109/TIE.2009.2034181
Kyslan, K., Slapak, V., Fedak, V., Durovsky, F. & Horvath, K. (2017). Design of load torque and mechanical speed estimator of PMSM with unscented Kalman filter-an engineering guide. In International conference on electical drives and power electronics (vol. 2017-Octob, pp. 297–302) . https://doi.org/10.1109/EDPE.2017.8123249
Lian, C., Xiao, F., Gao, S., & Liu, J. (2019). Load torque and moment of inertia identification for permanent magnet synchronous motor drives based on sliding mode observer. IEEE Transactions on Power Electronics, 34 , 5675–5683. https://doi.org/10.1109/TPEL.2018.2870078
Liu, M., Chan, K. W., Hu, J., Xu, W., & Rodriguez, J. (2019). Model predictive direct speed control with torque oscillation reduction for pmsm drives. IEEE Transactions on Industrial Informatics, 15 , 4944–4956. https://doi.org/10.1109/tii.2019.2898004
Lu, W., Tang, B., Ji, K., Lu, K., Wang, D., & Yu, Z. (2021). A new load adaptive identification method based on an improved sliding mode observer for pmsm position servo system. IEEE Transactions on Power Electronics, 36 , 3211–3223. https://doi.org/10.1109/TPEL.2020.3016713
Nguyen, P. T. H., Stüdli, S., Braslavsky, J. H., & Middleton, R. H. (2022). Lyapunov stability of grid-connected wind turbines with permanent magnet synchronous generator. European Journal of Control, 65 , 100615. https://doi.org/10.1016/j.ejcon.2022.100615
Nicola, M., Nicola, C. I., & Duta, M. (2020). Sensorless control of PMSM using FOC strategy based on multiple ANN and load torque observer. In 2020 15th international conference on development and application systems, DAS 2020—Proceedings (pp. 32–37). https://doi.org/10.1109/DAS49615.2020.9108914
Safaeinejad, A., Rahimi, M., Zhou, D., & Blaabjerg, F. (2024). A sensorless active control approach to mitigate fatigue loads arising from the torsional and blade edgewise vibrations in pmsg-based wind turbine system. International Journal of Electrical Power and Energy Systems . https://doi.org/10.1016/j.ijepes.2023.109525
Silva, F. L., Silva, L. C. A., Eckert, J. J., Yamashita, R. Y., & Lourenço, M. A. M. (2022). Parameter influence analysis in an optimized fuzzy stability control for a four-wheel independent-drive electric vehicle. Control Engineering Practice, 120 , 105000. https://doi.org/10.1016/J.CONENGPRAC.2021.105000
Solo, V. (1994). On the stability of slowly time-varying linear systems. Mathematics of Control, Signals and Systems, 7 , 331–350.
Suman, S. K., Gautam, M. K., Srivastava, R., & Giri, V. K. (2016). Novel approach of speed control of PMSM drive using neural network controller. In International conference on electrical, electronics, and optimization techniques, ICEEOT 2016 (pp. 2780–2783 ). https://doi.org/10.1109/ICEEOT.2016.7755202
Sun, X., Yu, H., Yu, J., & Liu, X. (2019). Design and implementation of a novel adaptive backstepping control scheme for a pmsm with unknown load torque. IET Electric Power Applications, 13 , 445–455. https://doi.org/10.1049/IET-EPA.2018.5656
Suryakant, Sreejeth, M., & Singh, M. (2018). Performance analysis of PMSM drive using hysteresis current controller and PWM current controller. In 2018 IEEE international students’ conference on electrical, electronics and computer science, SCEECS 2018 . https://doi.org/10.1109/SCEECS.2018.8546862
Wen, D., Wang, W., & Zhang, Y. (2022). Sensorless control of permanent magnet synchronous motor in full speed range. Chinese Journal of Electrical Engineering, 8 , 97–107. https://doi.org/10.23919/CJEE.2022.000018
Zhang, C., Jia, L., & He, J. (2013). Load torque observer based sliding mode control method for permanent magnet synchronous motor. In Proceedings of the 25th Chinese control and decision conference (pp. 550–555). https://doi.org/10.1109/CCDC.2013.6560985
Zhang, X., & Zhao, Z. (2021). Model predictive control for pmsm drives with variable dead-zone time. IEEE Transactions on Power Electronics, 36 , 10514–10525. https://doi.org/10.1109/TPEL.2021.3066636
Acknowledgements
This project was cofinanced by the Interreg Atlantic Area Program through the European Regional Development Fund and the PORTOS project. The authors thank Julie Devlies for the work on the PMSG characterization (data in Table 1 ).
Author information
Authors and Affiliations
Electrical Department, Laboratory of Research in Engineering (LRI) System Architecture Team (EAS), ENSEM, Hassan II University of Casablanca, Route El jadida, 8118, Casablanca, Morocco
Younes Azelhak & Hicham Medromi
EIGSI La Rochelle, 17000, La Rochelle, France
Damien Voyer
Corresponding author
Correspondence to Younes Azelhak .
Kalman Filter
The Kalman filter addresses the problem of estimating the state of a discrete-time process. In our case, discretizing the state-space model ( 2 ) leads to the system:
where \(X_{3,k}\) (respectively \(\kappa_k\), \(y_k\) and \(w_k\)) is the sample of \(X_3\) (respectively \(\kappa\), \(y\) and \(w\)) obtained at time \(t = k T_s\), with \(T_s\) the time step; \(A_{3,T_s} = A_3 + T_s I\), with \(I\) the identity matrix; and \(G_{3,T_s} = T_s G_3\).
A Kalman filter can be implemented directly because system ( 19 ) is linear, unlike most PMSG-related problems, which are nonlinear and require the design of extended or unscented Kalman filters (Kyslan et al., 2017 ). The Kalman filter estimates the process state at time \(t = k T_s\) (prediction) and then incorporates feedback from noisy measurements (correction). For a given iteration \(k\), the algorithm is:
Prediction:
- \(\hat{X}_{3,k} \leftarrow A_{3,T_s} \hat{X}_{3,k-1}\) (a priori predicted state)
- \(P \leftarrow A_{3,T_s} P {A_{3,T_s}}^T + G_{3,T_s} \Gamma_{\kappa} {G_{3,T_s}}^T\) (a priori predicted covariance)

Correction:
- \(K \leftarrow P {C_3}^T \left( C_3 P {C_3}^T + \Gamma_w \right)^{-1}\) (optimal gain)
- \(e \leftarrow y_k - C_3 \hat{X}_{3,k}\) (measurement residual)
- \(\hat{X}_{3,k} \leftarrow \hat{X}_{3,k} + K e\) (a posteriori state estimate)
- \(P \leftarrow \left( I - K C_3 \right) P\) (a posteriori covariance)
where \(\Gamma _{\kappa }\) is the variance of the process due to the jerk \(\kappa \) , and \(\Gamma _w \) the variance of the measurement noise w .
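The prediction/correction loop above can be sketched in code. To keep the sketch dependency-free, the matrices are collapsed to scalars (a one-dimensional state), so \(A_{3,T_s}\), \(G_{3,T_s}\), \(\Gamma_{\kappa}\) and \(\Gamma_w\) become the scalars `a`, `g`, `q`, `r`, and \(C_3\) collapses to 1; all values are illustrative, not the paper's:

```python
import random

def kalman_scalar(measurements, a=1.0, g=1.0, q=1e-3, r=0.25):
    """Scalar analogue of the predict/correct loop above.

    a ~ A_{3,Ts}, g ~ G_{3,Ts}, q ~ Gamma_kappa (process variance),
    r ~ Gamma_w (measurement variance); C_3 collapses to 1.
    """
    x_hat, p = 0.0, 1.0                  # initial state estimate and covariance
    for y in measurements:
        # Prediction (a priori)
        x_hat = a * x_hat
        p = a * p * a + g * q * g
        # Correction (a posteriori)
        k = p / (p + r)                  # optimal gain
        x_hat = x_hat + k * (y - x_hat)  # correct with measurement residual
        p = (1.0 - k) * p
    return x_hat

random.seed(0)
true_state = 5.0
noisy = [true_state + random.gauss(0.0, 0.5) for _ in range(200)]
estimate = kalman_scalar(noisy)
print(estimate)  # converges near the true state
```

The same loop generalizes to the matrix form by replacing the scalar products with matrix multiplications and the division by a matrix inverse.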
Sliding Mode Observer
Sliding-mode observation consists in ensuring zero tracking error through a sliding variable and its first derivative. For the estimation of the shaft angle, a first-order sliding-mode observer can be designed as follows (Zhang et al., 2013 ):
where \(s_1\) is the sliding surface, with \(L_1 > 0\). The angle \(\theta\) at time \(t = k T_s\) is provided by the measurement \(y_k\).
Consider the Lyapunov function \(V_1 = \frac{1}{2} s_1^2\). Its time derivative is given by:
\(V_1\left( t\right)\) is clearly positive-definite and \(\dot{V}_1\left( t\right)\) is negative-definite when \(K_1 \ge \left| e_2 \right|\). Under this condition, the equilibrium at the origin \(s_1\left( t\right) = 0\) is globally asymptotically stable: all trajectories starting off the sliding surface \(s_1\left( t\right) = 0\) reach it in finite time and then remain on it. Then, since \(L_1 > 0\), the tracking error \(e_1\left( t\right)\) converges to zero exponentially.
Similarly, angular speed and acceleration are estimated as follows:
where \(s_2\) is the sliding surface, with \(L_2 > 0 \) and \(K_2 \ge \left| e_3 \right| \) .
where \(s_3\) is the sliding surface, with \(L_3 > 0 \) and \(K_3 \ge \left| \kappa \right| \) .
The value of \(\Omega_k\) is computed from the approximation of the first-order derivative, \(\left( y_{k}-y_{k-1}\right) /T_s\). The value of \(\alpha_k\) is computed similarly, from the approximation of the second-order derivative, \(\left( -y_{k}+2y_{k-1}-y_{k-2}\right) /{T_s}^2\).
To reduce chattering, the function \(\textit{sign}\left( s\right)\) was replaced by \(\tanh\left( \xi s\right)\), with \(\xi\) a parameter to be tuned.
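The benefit of the \(\tanh\) smoothing can be illustrated on a toy first-order error dynamic driven by a switching term; the gains, time step, and chattering metric below are invented for illustration and are not taken from the paper:

```python
import math

def track(smooth, steps=400, dt=0.01, K=3.0, xi=50.0):
    """Drive a tracking error e toward zero with a switching term,
    e_dot = -K * switch(e), integrated with a forward Euler step.
    """
    e = 1.0
    history = []
    for _ in range(steps):
        u = math.tanh(xi * e) if smooth else (1.0 if e > 0 else -1.0 if e < 0 else 0.0)
        e = e - dt * K * u
        history.append(e)
    tail = history[-100:]
    # Total variation of the error near the surface: a crude chattering metric.
    chatter = sum(abs(b - a) for a, b in zip(tail, tail[1:]))
    return abs(e), chatter

err_sign, chat_sign = track(smooth=False)
err_tanh, chat_tanh = track(smooth=True)
print(chat_sign, chat_tanh)  # tanh smoothing leaves far less residual chattering
```

With the hard sign function the error overshoots the surface every step and oscillates; the saturated \(\tanh\) term shrinks as the error approaches zero, so the oscillation dies out.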
About this article
Azelhak, Y., Voyer, D. & Medromi, H. Experimental Estimation of Sudden Torque Changes in a PMSG Using a Luenberger Time-Varying Observer. J. Control Autom. Electr. Syst. (2024). https://doi.org/10.1007/s40313-024-01123-8
Received : 20 September 2023
Revised : 17 July 2024
Accepted : 04 September 2024
Published : 24 September 2024
- Permanent magnet synchronous generator
- Salient-pole machine
- Luenberger observer
- Linear time-varying system
- Kalman filter
- Sliding mode
Table of contents:
- Step 1: Define your variables.
- Step 2: Write your hypothesis.
- Step 3: Design your experimental treatments.
- Step 4: Assign your subjects to treatment groups.
- Step 5: Measure your dependent variable.
- Other interesting articles.
- Frequently asked questions about experiments.
1) True Experimental Design. In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.
Three types of experimental designs are commonly used. The first is independent measures: independent measures design, also known as between-groups design, uses different participants in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.
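Random assignment is what makes an independent-measures design fair: each participant should end up in exactly one condition, by chance. A minimal sketch with hypothetical participant IDs (the IDs and group sizes are invented for illustration):

```python
import random

# Hypothetical participant IDs: in an independent-measures (between-groups)
# design, each participant experiences exactly one condition.
participants = [f"P{i:02d}" for i in range(1, 21)]
conditions = ["control", "treatment"]

random.seed(42)  # fixed seed so the sketch is reproducible
random.shuffle(participants)

# Deal the shuffled participants round-robin into equally sized groups.
groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}

print({c: len(g) for c, g in groups.items()})
```

Shuffling before dealing guarantees every participant had the same chance of landing in either condition, which is the defining property of random assignment.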
An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies. ... Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of ...
Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results. Experimental design typically includes identifying the variables that ...
A good experimental design requires a strong understanding of the system you are studying. There are five key steps in designing an experiment: Consider your variables and how they are related. Write a specific, testable hypothesis. Design experimental treatments to manipulate your independent variable.
A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.
Specify how you can manipulate the factor and hold all other conditions fixed, to insure that these extraneous conditions aren't influencing the response you plan to measure. Then measure your chosen response variable at several (at least two) settings of the factor under study. If changing the factor causes the phenomenon to change, then you ...
Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause ...
1. Prepare 2 identical trays with the same soil mixture.
2. Place 5 plants in each tray; label one set "sunlight" and one set "shade".
3. Position the sunlight tray by a south-facing window, and the shade tray in a dark closet.
4. Water both trays with 50 mL of water every 2 days.
5. After 3 weeks, remove the plants and measure their heights in cm.
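Analyzing the plant protocol above comes down to comparing the two trays' mean heights. A minimal sketch, assuming made-up height measurements (the numbers below are illustrative, not real data):

```python
import statistics

# Hypothetical heights (cm) after 3 weeks, 5 plants per tray,
# following the protocol above (same soil, same watering).
sunlight = [14.2, 15.1, 13.8, 14.9, 15.4]
shade    = [6.1, 5.8, 6.5, 5.9, 6.2]

# Light exposure was the only condition that differed between
# trays, so a gap in mean height points to light as the cause.
print(f"sunlight mean: {statistics.mean(sunlight):.2f} cm")
print(f"shade mean:    {statistics.mean(shade):.2f} cm")
```

With everything else held constant, the difference in means is attributable to the manipulated variable, light.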
Design of experiments means deciding how the observations or measurements should be obtained so that a question can be answered in a valid, efficient, and economical way. The design of the experiment and the analysis of the resulting data are inseparable: if the experiment is designed properly with the question in mind, the data it produces can answer that question cleanly.
An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments gives your data collection efforts a reasonable chance of detecting effects and testing hypotheses that answer your research questions. An experiment is a data collection procedure conducted under controlled conditions to identify causal relationships.
DOE lets you investigate many factors at once, so you'll have plenty of factors to choose from. But you can't test them all at once. Resist the temptation of a "big bang": trying to investigate all your factors in depth with one massive experiment.
There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental. The key difference lies in assignment: in quasi-experimental research the control group is assigned non-randomly, whereas in true experimental design subjects are randomly assigned.
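The assignment distinction above is easy to show in code. This sketch contrasts random assignment (true experimental) with grouping by a pre-existing attribute (quasi-experimental); the participant labels and "site" grouping are hypothetical.

```python
import random

random.seed(0)

participants = [f"P{i}" for i in range(1, 11)]

# True experimental design: every participant has an equal chance
# of landing in either group, via random assignment.
shuffled = participants[:]
random.shuffle(shuffled)
treatment = shuffled[:5]
control   = shuffled[5:]

# Quasi-experimental design: groups follow a pre-existing attribute
# (here, a hypothetical "site" each participant already belongs to),
# so assignment is non-random and the groups may differ systematically.
site_a = participants[:5]   # e.g., students at school A
site_b = participants[5:]   # e.g., students at school B

print("randomized treatment group:", treatment)
print("quasi-experimental group (site A):", site_a)
```

Randomization is what lets a true experiment support causal claims: on average it balances out every characteristic, measured or not, across the groups.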
Design of experiments (DOE) is defined as a branch of applied statistics that deals with planning, conducting, analyzing, and interpreting controlled tests to evaluate the factors that control the value of a parameter or group of parameters. DOE is a powerful data collection and analysis tool that can be used in a variety of experimental settings.
Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

- A testable hypothesis.
- At least one independent variable that can be precisely manipulated.
- At least one dependent variable that can be precisely measured.
How to design an experiment. To design your own experiment, consider following these steps and examples: 1. Determine your specific research question. To begin, craft a specific research question. A research question is a topic you are hoping to learn more about. In order to create the best possible results, try to make your topic as specific as possible.
The six steps of the scientific method are:

1. Ask a question about something you observe.
2. Do background research to learn what is already known about the topic.
3. Construct a hypothesis.
4. Experiment to test the hypothesis.
5. Analyze the data from the experiment and draw conclusions.
6. Communicate the results.
Experimental design concerns the validity and efficiency of the experiment. Box et al. (1978) picture an experimental design as a movable window through which certain aspects of the true state of nature, more or less distorted by noise, may be observed.
Design the experiment. This step involves deciding on the structure and scope of your experiment. You'll need to determine:

- The sample size (how many users will be part of the experiment).
- The duration (how long you'll run the experiment).
- The control group (if applicable).
- The variables you'll test.
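Sample size is the one item above you can estimate with a formula. The sketch below uses the standard normal-approximation rule for comparing two group means, with the conventional defaults of a two-sided alpha of 0.05 and 80% power baked in; the effect sizes passed in are illustrative.

```python
import math

def sample_size_per_group(effect_size):
    """Rough per-group sample size for a two-group comparison,
    using the normal approximation
        n = 2 * ((z_alpha + z_beta) / d) ** 2
    where d is the standardized effect size (Cohen's d).
    Alpha = 0.05 (two-sided) and power = 0.80 are fixed here."""
    z_alpha = 1.96   # standard normal quantile for two-sided alpha = 0.05
    z_beta  = 0.84   # standard normal quantile for power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) needs about 63 users per group;
# a larger effect (d = 0.8) needs far fewer.
print(sample_size_per_group(0.5))   # 63
print(sample_size_per_group(0.8))   # 25
```

The formula makes the trade-off concrete: halving the effect size you want to detect roughly quadruples the sample you need, which is why the sample-size decision comes before the duration decision.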
When experiments are designed properly, the results are more likely to be replicated in future studies and relevant for human health. For example, the NIH expects the ARRIVE Essential 10 (essential elements of study design) to be reported in all NIH-supported publications describing vertebrate animal and cephalopod (such as octopus) research.