Experimental Designs
Inside Research: Travis Seymour, Department of Psychology, University of California, Santa Cruz
Media Matters: The “Sugar Pill” Knee Surgery

The Uniqueness of Experimental Methodology
Experimental designs allow researchers to fully control the variables of interest and allow for causal conclusions to be drawn.

Experimental Control
Experimental designs allow researchers to fully control the independent variable so as to create a situation where the independent variable is the only explanation for any observed change in the dependent variable.

Determination of Causality
In experimental designs, participants are randomly assigned to conditions that are manipulated by the researcher, allowing causal conclusions to be drawn. By isolating the variables of interest and controlling the temporal ordering, researchers may conclude that disparities in observed behavior between the two groups were likely caused by the independent variable. Of course, this would be true only if they could be sure that other uncontrolled variables were not also in play.

Internal versus External Validity

Another advantage of a well-designed experimental method is its high level of internal validity. A design that has high internal validity allows you to conclude that a particular variable is the direct cause of a particular outcome. In contrast, external validity is often seen as a challenge for experimental work. External validity is the degree to which conclusions drawn from a particular set of results can be generalized to other samples and situations. The sample in a particular experiment may not represent the larger population of interest, and the experimental situation may not resemble the real-world context it is designed to model because of its artificiality. The concern about artificiality is controversial and not shared by everyone who does psychological research.

Key Constructs of Experimental Methods
This section introduces the key concepts that are crucial to understanding how experimental methods work.

Independent and Dependent Variables
Independent and dependent variables are central to experimental designs. Quasi-independent variables are preexisting factors of the participants that cannot be altered and to which one cannot be randomly assigned.

Experimental and Control Groups
In experimental designs, researchers assign participants to at least two different groups and compare outcomes across groups. When two groups are compared, as in most classic designs, they are known as the experimental group and the control group. The experimental group receives the intervention or treatment. The control group serves as a direct comparison for the experimental group and receives either an inert version of the treatment or no treatment at all. The reasoning is that, all other things being equal, if the experimental group performs differently from the control group on the relevant dependent variable, there is evidence for an effect of the independent variable. Ideally, the control group is given an experience as close as possible to that of the experimental group, with the only difference being the level of the independent variable.

Placebo Effect
A placebo effect is an effect of treatment that can be attributed to participants’ expectations about the treatment rather than to any property of the treatment itself. The benefits of placebos are real and measurable. In studies designed to test the beneficial effects of an intervention, the comparison between an experimental and a placebo control group is essential, so that you can determine whether your treatment is more effective than a placebo alone.

Random Assignment
Random assignment is the procedure by which researchers place participants in different experimental groups using chance procedures. Random assignment is usually effective at equalizing the many other factors on which the experimental and control groups might differ besides their assignment to groups. However, random assignment is likely to equalize groups only when they are relatively large. In a relatively small study comprising only a few participants per group, the likelihood of the experimental and control groups differing on some dimension increases. In such cases, a quasi-random assignment might be used to equalize the participants in each condition on factors that could influence the research outcome.
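The chance procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not a standard library for running studies; the function name and the even dealing into groups are my own choices.

```python
import random

def randomly_assign(participants, conditions=("experimental", "control"), seed=None):
    """Shuffle participants and deal them into conditions by chance alone."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    assignment = {c: [] for c in conditions}
    # Dealing round-robin after a shuffle keeps group sizes balanced
    for i, person in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append(person)
    return assignment

groups = randomly_assign(range(20), seed=42)
```

With 20 participants and two conditions, each group receives 10 people, but which people end up in which group is determined entirely by the shuffle.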

Types of Experimental Designs
Experiments take two basic designs, between-subjects and within-subjects, along with a third, hybrid design called the matched-group design.

Between-Subjects Designs
In a between-subjects design, researchers expose two (or more) groups of individuals to different conditions and then measure and compare differences between groups on the variable(s) of interest. In such a design, the researcher is looking for differences between individual participants or groups of participants, with each group exposed to a separate condition.

Advantages of Between-Subjects Designs
Major advantages of between-subjects designs include simplicity of setup, their intuitive structure, and the relative ease of statistical analyses that they permit.

Disadvantages of Between-Subjects Designs
The disadvantages of between-subjects designs are cost and variability due to individual differences. The resources required to gather a sample can be considerable, depending on the number of conditions and participants. Furthermore, variability from individual differences between participants makes detection of an effect due to the variable(s) of interest more difficult.

Within-Subjects Designs
A within-subjects design (also called a within-group or repeated-measures design) assigns each participant to all possible conditions.

Advantages of Within-Subjects Designs
Advantages of within-subjects designs are the relatively lower cost and the elimination of variability due to individual differences. Fewer participants than in a between-subjects design are needed because each individual participates in all the conditions. Furthermore, since the same participants are used in all conditions, you can proceed with the assumption that variability across conditions is due to the factor of interest rather than to differences between participants.

Disadvantages of Within-Subjects Designs
Within-subjects designs have several drawbacks. First, they require a more complex set of statistical assumptions. Second, within-subjects designs are vulnerable to order effects because the order in which participants receive different experimental conditions may influence the outcome. A simple order effect occurs when the particular order of the conditions influences the results. As a result of repeated exposure to experimental conditions in a within-subjects design, participants may show a fatigue (or boredom) effect and begin to perform more poorly as the experiment goes on. A carryover effect occurs when a participant’s performance in one experimental condition inevitably affects his or her performance in a subsequent condition. In some cases, potential order and carryover effects will rule out the use of a within-subjects design.

Matched Group Designs
A matched-group design has separate groups of participants in each condition and involves “twinning” each participant in one group with a participant in another group.
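One common way to implement this twinning is to rank participants on a matching dimension, pair adjacent participants, and then randomly assign one member of each pair to each condition. The sketch below assumes a hypothetical pretest score as the matching dimension; it is an illustration of the idea, not a complete matching procedure.

```python
import random

def matched_pairs(participants, score, seed=None):
    """Sort by the matching dimension, pair adjacent participants,
    then randomly assign one member of each pair to each condition."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=score)
    experimental, control = [], []
    # With an odd number of participants, the last one is left unassigned here
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)  # chance decides which twin gets which condition
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

# Hypothetical pretest scores used as the matching dimension
scores = {"a": 10, "b": 11, "c": 20, "d": 21, "e": 30, "f": 31}
exp, ctl = matched_pairs(list(scores), score=scores.get, seed=1)
```

Each experimental participant now has a control “twin” with a nearly identical pretest score, so pretest ability cannot easily account for a group difference on the outcome.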

Advantages of Matched Group Designs
As long as you have matched participants properly on dimensions of relevance to the dependent variable, you do not need to worry about the unwanted variability of individual differences. This results in a greater probability of being able to detect an effect that is present. Additionally, order and carryover effects are not a concern in a matched-group design.

Disadvantages of Matched Group Designs
Matched-group designs require a more complex set of statistical assumptions, like within-subjects designs. The process of matching can prove quite difficult. It can be hard to know on which dimensions you should match your participants, and if you cannot identify those dimensions correctly, your matching will be ineffective. Recruiting matched samples may also be difficult and expensive.

Confounding Factors and Extraneous Variables
Confounds, also known as extraneous variables, are uncontrolled variables that vary along with the independent variable in your experimental design and could account for effects that you find. When an experimenter fails to account for confounds, the validity of the findings comes into question. Several types of confounds are discussed in this section.

Participant Characteristics
Of particular concern for researchers is the possibility that experimental groups may differ systematically in their participant characteristics. Group differences that are unaccounted for affect the internal validity of the study. If you have a large enough sample, randomization is likely to be effective in minimizing group differences.

The Hawthorne Effect
The Hawthorne effect, or observer effect, acknowledges that the act of observation can alter the behavior being observed. Participants’ expectations, which are an unavoidable part of the research process, may sometimes drive effects in unanticipated ways.

Demand Characteristics
Demand characteristics are features of the experimental design itself that lead participants to make certain conclusions about the purpose of the experiment, and then adjust their behavior accordingly, either consciously or unconsciously. In grappling with the challenge posed by such characteristics, Orne (1962) argued that any participant must be recognized as an active participant in the experiment and, as such, the possibility of demand characteristics should always be considered.

Other Confounds
There are so many possible confounds for any given experimental design that no researcher can be expected to anticipate them all. You should consider all dimensions that may affect an experimental design, even something that initially seems insignificant.

Strategies for Dealing with Confounds
A carefully designed experiment anticipates possible confounds and ensures that the design either eliminates those confounds altogether, or deals with them in other ways.

Hold Potential Confounding Variables Constant
By holding potential confounding variables constant, you can minimize the influence of potential confounds.

Vary Test Items and Tasks
If carryover effects are a concern, then the design should include a range of tests or tasks that vary enough such that practice alone would not lead to improvement.

Use Blind and Double-Blind Designs
Blind and double-blind designs address observer effects: the experimenter measuring the behavior of interest does not know what intervention (if any) the individuals being observed have received. In many experiments, either the experimenters or the participants are unaware of the experimental condition. If only one of these groups, usually the participants, is “blind” to the intervention, the study is said to have a single-blind design. In a double-blind design, often considered the gold standard because it implements blinding most rigorously, both the experimenter doing the rating and the participant receiving the intervention are unaware of the condition to which the participant has been assigned.
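One practical way to keep the rater blind is to replace condition labels with opaque codes and leave the code-to-condition key with someone outside the rating process. This is a hypothetical sketch of that bookkeeping; the function and code format are invented for illustration.

```python
import random

def blind_coding(participants, conditions=("treatment", "placebo"), seed=None):
    """Assign each participant a condition under an opaque code.
    The rater sees only codes; the key stays with a third party
    until the data are collected."""
    rng = random.Random(seed)
    key = {}
    for person in participants:
        code = f"P{rng.randrange(10_000):04d}"
        while code in key:  # avoid the rare code collision
            code = f"P{rng.randrange(10_000):04d}"
        key[code] = (person, rng.choice(conditions))
    return key

key = blind_coding(["ann", "ben", "cho", "dev", "eli"], seed=0)
```

During the study, raters work only with the `P####` codes; the key is consulted only when conditions are unblinded for analysis.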

Statistically Control for Variables that Can’t be Experimentally Controlled
In analyzing your study results, you can make a statistical adjustment that will account for the influence of a specified third variable and allow you to analyze the results with the influence of that third variable eliminated. Statistical control requires you to know what your confound is, to measure it systematically, and to include these measurements in your statistical analysis.
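A bare-bones version of this adjustment is to regress the outcome on the measured confound and analyze the residuals, which carry the outcome with the confound's linear influence removed. The simple regression below is a sketch of the idea under that assumption; real analyses would typically use ANCOVA or multiple regression.

```python
from statistics import mean

def residualize(y, confound):
    """Remove the linear influence of a measured confound from y
    by subtracting the simple-regression prediction."""
    mx, my = mean(confound), mean(y)
    sxx = sum((x - mx) ** 2 for x in confound)
    sxy = sum((x - mx) * (v - my) for x, v in zip(confound, y))
    slope = sxy / sxx
    # Residual = observed value minus the value predicted from the confound
    return [v - (my + slope * (x - mx)) for x, v in zip(confound, y)]

# Hypothetical data: the outcome rises with age, the measured confound
age = [20, 25, 30, 35, 40]
outcome = [2.1, 2.4, 3.0, 3.5, 3.9]
adjusted = residualize(outcome, age)
```

The adjusted scores can then be compared across groups without the confound's linear contribution; note that this works only because age was identified and measured in advance.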

Use Randomization and Counterbalancing
Randomization and counterbalancing address confounds due to order effects. In randomization, you simply randomize the order of presentation of conditions/stimuli for each participant so that you can assume that, across all of your participants, no one particular order influenced the results. Counterbalancing involves calculating all the possible orders of your interventions and ensuring that you distribute the different order combinations evenly across your participants.
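The counterbalancing bookkeeping is mechanical enough to sketch directly: enumerate every possible order of the conditions, then cycle through the orders as participants enroll. The function name and scheduling scheme are illustrative choices.

```python
from itertools import permutations

def counterbalanced_orders(conditions, n_participants):
    """Cycle through every possible condition order so each order
    is used (approximately) equally often across participants."""
    orders = list(permutations(conditions))
    return [orders[i % len(orders)] for i in range(n_participants)]

# 3 conditions -> 6 possible orders; 12 participants use each order twice
schedule = counterbalanced_orders(["A", "B", "C"], n_participants=12)
```

Full counterbalancing like this grows quickly (k conditions yield k! orders), which is why larger designs often fall back on partial schemes such as Latin squares.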

Ceiling and Floor Effects
Ceiling effects occur when scores cluster at the upper end of the measurement scale. Floor effects occur when the scores cluster at the lower end. Ceiling and floor effects remind researchers that, as much as you need to carefully construct your design to minimize confounds, your measurement tools also need to be appropriately sensitive for the purposes for which you will use them.
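A quick screen for these effects is to check what fraction of scores lands near the ends of the scale. The 10% band below is an illustrative threshold, not a standard diagnostic rule.

```python
def clustering_at_ends(scores, scale_min, scale_max, band=0.1):
    """Fraction of scores within the top and bottom `band` of the
    scale range; a rough screen for ceiling and floor effects."""
    span = scale_max - scale_min
    ceiling = sum(s >= scale_max - band * span for s in scores) / len(scores)
    floor = sum(s <= scale_min + band * span for s in scores) / len(scores)
    return ceiling, floor

# Hypothetical test that was too easy: most scores pile up near 100
scores = [98, 99, 100, 100, 97, 95, 60, 99, 100, 96]
ceiling, floor = clustering_at_ends(scores, 0, 100)
```

Here 9 of 10 scores sit in the top 10% of the scale, a strong hint that the measure cannot distinguish the better performers and a more difficult instrument is needed.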

What Steele and Aronson Found
Throughout this chapter, we have used the classic work of Steele and Aronson (1995) on stereotype threat to demonstrate various aspects of experimental design. Steele and Aronson (1995) aimed to demonstrate that individuals would be at risk of self-confirming a commonly held stereotype about their own group if that stereotype was activated. They found that Black participants did, in fact, underperform on a set of challenging GRE verbal items when compared to their White counterparts in the threat condition, while the non-threat condition showed no such difference. Work on stereotype threat in recent years has provided additional validation for the concept and has expanded our understanding of the so-called achievement gap in standardized testing.

Ethical Considerations in Experimental Design
Experimental designs raise ethical issues related to the use of a placebo/control group and the use of confederates and deception.

Placebo/Control Group and Denial of Treatment
Use of a placebo or control group in a study becomes problematic when there is reason to believe that your treatment group will receive some therapeutic benefit. Researchers must grapple with the ethical concerns of denying treatment to the placebo/control group. One way to handle this issue is to provide the control group with the treatment following the completion of the experiment.

Confederates and Deceit
A confederate is an actor who is part of an experiment and plays a specific role in setting up the experimental situation. Participants are generally unaware of the role of the confederate, believing them to be another participant in the study. Participants may be upset or angry about having been deceived and may even behave aggressively towards the confederate and researcher. It is important to consider not only the safety of the participant and research team, but also the impact of deception on the participant’s self-worth.
