Why it happens

Anchoring bias is one of the most robust effects in psychology. Many studies have confirmed its effects, and shown that we can often become anchored by values that aren't even relevant to the task at hand. In one study, for example, people were asked for the last two digits of their social security number. Next, they were shown a number of different products, including things like computer equipment, bottles of wine, and boxes of chocolate. For each item, participants indicated whether they would be willing to pay the amount of money formed by their two digits: if somebody's number ended in 34, for example, they said whether or not they would pay $34 for each item. After that, the researchers asked for the maximum amount each participant would be willing to pay.

Even though somebody's social security number is nothing more than a random series of digits, those numbers had an effect on their decision making. People whose digits formed a higher number were willing to pay significantly more for the same products, compared to those with lower numbers.9 Anchoring bias also holds up when anchors are obtained by rolling dice or spinning a wheel, and even when researchers remind people that the anchor is irrelevant.4

Given its ubiquity, anchoring appears to be deeply rooted in human cognition. Its causes are still being debated, but the most recent evidence suggests that the bias arises through different mechanisms depending on where the anchoring information comes from: we can become anchored to all kinds of values or pieces of information, whether we came up with them ourselves or were provided with them.4

When we come up with anchors ourselves: The anchor-and-adjust hypothesis

The original explanation for anchoring bias comes from Amos Tversky and Daniel Kahneman, two of the most influential figures in behavioral economics. In a 1974 paper called “Judgment under Uncertainty: Heuristics and Biases,” Tversky and Kahneman theorized that, when people try to make estimates or predictions, they begin with some initial value, or starting point, and then adjust from there. Anchoring bias happens because the adjustments usually aren’t big enough, leading us to incorrect decisions. This has become known as the anchor-and-adjust hypothesis.
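
To make the hypothesis concrete, here is a minimal toy sketch in Python (a hypothetical illustration, not a model from Tversky and Kahneman's paper; the true value of 100 and the 0.6 adjustment rate are invented for illustration). Because the adjustment covers only part of the distance from the anchor to the true value, high anchors leave estimates too high and low anchors leave them too low:

def anchor_and_adjust(anchor, true_value, adjustment=0.6):
    # Toy model: start at the anchor and move toward the true value,
    # but cover only part of the distance (insufficient adjustment).
    return anchor + adjustment * (true_value - anchor)

TRUE_VALUE = 100
for anchor in (20, 180):
    print(f"anchor={anchor:>3} -> estimate={anchor_and_adjust(anchor, TRUE_VALUE):.0f}")
# anchor= 20 -> estimate=68   (pulled down toward the low anchor)
# anchor=180 -> estimate=132  (pulled up toward the high anchor)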

To back up their account of anchoring, Tversky and Kahneman ran a study in which high school students had to estimate the answer to a multiplication problem in a very short period of time. Within five seconds, one group of students was asked to estimate the product:

8 x 7 x 6 x 5 x 4 x 3 x 2 x 1

Another group was given the same sequence, but in reverse:

1 x 2 x 3 x 4 x 5 x 6 x 7 x 8

The median estimate for the first problem was 2,250, while the median estimate for the second was 512. (The correct answer is 40,320.) Tversky and Kahneman argued that this difference arose because the students were doing partial calculations in their heads, and then trying to adjust these values to get to an answer. The group given the descending sequence was working with larger numbers to start with, so their partial calculations brought them to a larger starting point, which they became anchored to (and vice versa for the other group).5
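
To see why the two orderings produce such different anchors, it helps to look at the running products a student might reach before the five seconds run out. A quick Python sketch (an illustration, not part of the original study):

from itertools import accumulate
from operator import mul

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

# Partial products after each step: the intermediate values a student
# might have computed before time ran out.
print(list(accumulate(descending, mul)))  # [8, 56, 336, 1680, 6720, 20160, 40320, 40320]
print(list(accumulate(ascending, mul)))   # [1, 2, 6, 24, 120, 720, 5040, 40320]

After four steps, the descending sequence has already reached 1,680 while the ascending sequence is only at 24, consistent with the much higher median estimate in the descending condition.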

Tversky and Kahneman’s explanation works well to explain anchoring bias in situations where people generate an anchor on their own.6 However, in cases where an anchor is provided by some external source, the anchor-and-adjust hypothesis is not so well supported. In these situations, the literature favors a phenomenon known as selective accessibility.

The selective accessibility hypothesis

This theory relies on priming, another prevalent effect in psychology. When people are exposed to a given concept, it is said to become primed, meaning that the areas of the brain related to that concept remain activated at some level. This makes the concept more easily accessible, and more able to influence people's behavior without their realizing it.

Just like anchoring, priming is a robust and ubiquitous phenomenon that plays a role in many other biases and heuristics, and as it turns out, anchoring might be one of them. According to this theory, when we are first presented with an anchoring piece of information, we begin by mentally testing whether it is a plausible value for whatever target object or situation we are considering. We do this by building a mental representation of the target. For example, if I were to ask you whether the Mississippi River is longer or shorter than 3,000 miles, you might try to imagine the north-south extent of the United States, and use that to try to figure out the answer.7

As we’re building our mental model and testing out the anchor on it, we end up activating other pieces of information that are consistent with the anchor. As a result, all of this information becomes primed, and more likely to affect our decision making. However, because the activated information lives within our mental model for a specific concept, anchoring bias should be stronger when the primed information is applicable to the task at hand. So, after you answered my first Mississippi question, if I were to follow it up by asking how wide the river is, the anchor I gave you (3,000 miles) shouldn’t affect your answer as much, because in your mental model, this figure was only related to length.
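
A rough Python sketch can illustrate this prediction (a hypothetical toy, not a model from the literature; the primed dictionary and the 0.5 pull factor are invented for illustration). The anchor pulls estimates only on the dimension it primed:

# Toy sketch of selective accessibility: the anchor primes information
# about one dimension only, so it pulls estimates on that dimension
# while leaving other dimensions untouched.
primed = {"length_miles": 3000}  # the 3,000-mile anchor primes length info

def estimate(dimension, unanchored_guess, pull=0.5):
    anchor = primed.get(dimension)
    if anchor is None:  # no anchor-consistent information was primed
        return unanchored_guess
    return unanchored_guess + pull * (anchor - unanchored_guess)

print(estimate("length_miles", 2000))  # 2500.0: pulled toward the anchor
print(estimate("width_miles", 1))      # 1: unaffected by the length anchor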

To test this idea, Strack and Mussweiler (1997) had participants fill out a questionnaire. First, they made a comparative judgment, meaning they were asked to guess whether some value of a target object was higher or lower than an anchor. For example, they might have been asked whether the Brandenburg Gate (the target) is taller or shorter than 150 meters (the anchor). After this, they made an absolute judgment about the target, such as being asked to guess how tall the Brandenburg Gate is. For some participants, however, the absolute judgment involved a different dimension than the comparative judgment—for example, asking about a structure’s width instead of its height.

The results showed that the anchoring effect was much stronger when both questions probed the same dimension of the target,7 lending support to the theory of selective accessibility. This does not mean that the anchor-and-adjust hypothesis is incorrect, however. Instead, it means that anchoring bias relies on multiple mechanisms, and arises for different reasons depending on the circumstances.

Bad moods weigh us down

The research on anchoring has turned up a number of other factors that influence anchoring bias. One of these is mood: evidence shows that people in sad moods are more susceptible to anchoring than people in happy moods. This result is surprising because experiments have usually found the opposite: happy moods result in more biased processing, whereas sadness causes people to think things through more carefully.4

This finding makes sense in the context of the selective accessibility theory. If sadness makes people more thorough processors, that would mean that they activate more anchor-consistent information, which would then enhance anchoring bias.8

How to avoid it

Avoiding anchoring bias entirely probably isn’t possible, given how ubiquitous and how powerful it is. Like all cognitive biases, anchoring bias happens subconsciously, and when one isn’t aware something is happening, it’s difficult to interrupt it. Even more frustrating, some of the strategies that intuitively sound like good ways to avoid bias might not work with anchoring. For example, it’s usually a good idea to take one’s time with making a decision, and think it through carefully—but, as discussed above, thinking more about an anchor might actually make this effect stronger, because it results in more anchor-consistent information being activated.

One evidence-based, fairly straightforward strategy to combat anchoring bias is to come up with reasons why the anchor is inappropriate for the situation. In one study, car experts were asked to judge whether the resale price of a certain car (the anchor) was too high or too low, after which they were asked to provide a better estimate. Before giving their own price, however, half of the experts were also asked to come up with arguments against the anchor price. These participants showed a weaker anchoring effect than those who hadn't come up with counterarguments.10

Considering alternative options is a good way to aid decision making more generally. This strategy is similar to red teaming, which involves designating people to oppose and challenge the ideas of a group.11 By building a step into the decision making process that is specifically dedicated to exposing the weaknesses of a plan and considering alternatives, it might be possible to reduce the influence of an anchor.

Example 1 - Anchors in the courtroom

In the criminal justice system, prosecutors and attorneys typically demand a certain length of prison sentence for those convicted of a crime. In other cases, a sentence might be recommended by a probation officer. Technically speaking, the judge in a case still has the freedom to sentence a person as they see fit—but research shows that these demands can serve as anchors, influencing the final judgment.

In one study, criminal judges were given a hypothetical criminal case, including what the prosecutor in the case demanded as a prison sentence. For some of the judges, the recommended sentence was 2 months; for others, it was 34 months. First, the judges rated whether they thought the demand was too low, too high, or adequate. After that, they indicated how long a sentence they would assign if they were presiding over the case.

As the researchers expected, the anchor had a significant effect on the length of the sentence prescribed. On average, the judges who had been given the higher anchor gave a sentence of 28.7 months, while the group given the lower anchor averaged 18.78 months.12 These results show how sentencing demands might color a judge's perception of a criminal case, and could seriously skew their judgment. Even people who are seen as experts in their fields aren't immune to anchoring bias.

Summary

What it is

Anchoring bias is a pervasive cognitive bias that causes us to rely too heavily on information that we received early on in the decision making process. Because we use this “anchoring” information as a point of reference, our perception of the situation can become skewed.

Why it happens

There are two dominant theories behind anchoring bias. The first one, the anchor-and-adjust hypothesis, says that when we make decisions under uncertainty, we start by calculating some initial value and adjusting it, but our adjustments are usually insufficient. The second one, the selective accessibility theory, says that anchoring bias happens because we are primed to recall and notice anchor-consistent information.

Example 1 - Anchors in the courtroom

In criminal court cases, prosecutors often demand a certain length of sentence for the accused. Research shows these demands can become anchors that bias the judge’s decision making.

Example 2 - Anchoring and portion sizes

The common tendency to eat more when faced with a larger portion might be explained by anchoring. In one study, participants’ estimates of how much they would eat were influenced by an anchoring portion size (large or small) they had been told to imagine previously.

How to avoid it

The anchoring effect is difficult (if not impossible) to completely avoid, but research shows that it can be reduced by considering reasons why the anchor doesn’t fit the situation well.
