Which schedule of reinforcement is a ratio schedule stating a ratio of responses to reinforcements?

The arrangement of reinforcement delivery is vital in any learning process that involves reinforcement techniques. Such an arrangement is termed a schedule of reinforcement.

Table of Contents

  • Types of Schedule of Reinforcement
  • Continuous Reinforcement Schedule (CRF)
  • Partial or Intermittent Schedule of Reinforcement
  • How to Choose a Schedule?
  • What is a Schedule of Reinforcement?
  • Continuous Schedule of Reinforcement (CRF)
  • Intermittent Schedules of Reinforcement
  • Fixed-Ratio Schedule (FR)
  • Variable-Ratio Schedule (VR)
  • Fixed-Interval Schedule (FI)
  • Variable-Interval Schedule (VI)
  • Interval Schedules: A Tip
  • Interval Schedules with a Limited Hold
  • Thinner and Thicker Schedules of Reinforcement
  • Combining Schedules of Reinforcement

A schedule of reinforcement is a tactic used in operant conditioning that is critical in shaping behavior. The objective of this fundamental concept of operant conditioning is to specify how and when reinforcement follows a desired behavior.

The rate at which a behavior occurs is increased by the use of reinforcers and decreased by the use of punishers. Withdrawing reinforcement altogether, however, leads to extinction of the behavior.

A schedule of reinforcement, as the term suggests, encourages a behavior by arranging the timing or ratio between responses and reinforcers.

Schedules range from simple ratio- and interval-based schedules to more complicated compound schedules that combine multiple strategies. Whether simple or compound, all of them are aimed at shaping behavior.

Types of Schedule of Reinforcement

There are two major types of Schedule of Reinforcement:

  • Continuous Reinforcement Schedule (CRF)
  • Partial or Intermittent Reinforcement Schedule

Both kinds of schedule have certain advantages, and the effectiveness of each method depends on the situation and the individual in question.

Continuous Reinforcement Schedule (CRF)

A schedule in which every correct response is reinforced is called a Continuous Reinforcement Schedule (CRF). Examples of providing a reward for every instance of an activity include:

  • Giving a child a chocolate every day when they finish their math homework
  • Praising a student every time they receive an A+ grade

Partial or Intermittent Schedule of Reinforcement

An alternative way to maintain responses is intermittent or partial reinforcement.

A schedule in which not every correct response is reinforced is termed an intermittent reinforcement schedule. Reinforcers are presented at certain intervals or ratios. This type of reinforcement is regarded as more powerful in maintaining and shaping behavior.

There are four basic types of partial reinforcement schedule:

  1. Fixed Interval (FI) Schedule
    A fixed amount of time must elapse before reinforcement is presented; the number of responses or trials is irrelevant.
  2. Variable Interval (VI) Schedule
    Reinforcement is contingent on the passage of time, but the interval varies in random order.
  3. Fixed Ratio (FR) Schedule
    Reinforcement is provided after a fixed number of correct responses have been made.
  4. Variable Ratio (VR) Schedule
    Reinforcement is provided after a variable number of correct responses.

How to Choose a Schedule?

Choosing a schedule is essential, as it has a major influence on how successfully a behavior is learned. A continuous schedule works best when teaching a new behavior, whereas switching to a partial schedule is usually preferred once the behavior has been learned.

Realistically speaking, it might not always be feasible to reinforce a behavior every single time it occurs, as doing so requires an extensive amount of attention and resources. Partial schedules might prove more useful in such cases. Partial schedules also make the subject more disciplined, and behaviors learned with this technique have been found to be more resistant to extinction.

Different situations might require different forms of reinforcement schedule, and the training activity might also require you to switch from one to another.

Schedules of reinforcement are the precise rules that are used to present (or to remove) reinforcers (or punishers) following a specified operant behavior. These rules are defined in terms of the time and/or the number of responses required in order to present (or to remove) a reinforcer (or a punisher). Different schedules of reinforcement produce distinctive effects on operant behavior.

Interval Schedule

Interval schedules require a minimum amount of time that must pass between successive reinforced responses (e.g. 5 minutes). Responses which are made before this time has elapsed are not reinforced. Interval schedules may specify a fixed time period between reinforcers (Fixed Interval schedule) or a variable time period between reinforcers (Variable Interval schedule).

Fixed Interval schedules produce an accelerated rate of response as the time of reinforcement approaches. Students' visits to the university library show a decided increase in rate as the time of final examinations approaches.

Variable Interval schedules produce a steady rate of response. Presses of the "redial" button on the telephone are sustained at a steady rate when you are trying to reach your parents and get a "busy" signal on the other end of the line.

Ratio Schedule

Ratio schedules require a certain number of operant responses (e.g., 10 responses) to produce the next reinforcer. The required number of responses may be fixed from one reinforcer to the next (Fixed Ratio schedule) or it may vary from one reinforcer to the next (Variable Ratio schedule).

Fixed Ratio schedules support a high rate of response until a reinforcer is received, after which a discernible pause in responding may be seen, especially with large ratios. Salespeople who are paid on a "commission" basis may work feverishly to reach their sales quota, after which they take a break from sales for a few days.

Variable Ratio schedules support a high and steady rate of response. The power of this schedule of reinforcement is illustrated by the gambler who persistently inserts coins and pulls the handle of a "one-armed bandit."

Extinction

A special and important schedule of reinforcement is extinction, in which the reinforcement of a response is discontinued. Discontinuation of reinforcement leads to the progressive decline in the occurrence of a previously reinforced response.


"The schedule of reinforcement for a particular behaviour specifies whether every response is followed by reinforcement or whether only some responses are followed by reinforcement"

- Miltenberger (2007, p.86)

What is a Schedule of Reinforcement?

A schedule of reinforcement is a protocol or set of rules that a teacher will follow when delivering reinforcers (e.g. tokens when using a token economy). The “rules” might state that reinforcement is given after every correct response to a question; or for every 2 correct responses; or for every 100 correct responses; or when a certain amount of time has elapsed.

Broadly speaking, there are two categories of reinforcement schedule, the first being a "continuous" schedule and the other being an "intermittent" schedule.

A continuous schedule of reinforcement (sometimes abbreviated into CRF) occurs when reinforcement is delivered after every single target behaviour whereas an intermittent schedule of reinforcement (INT) means reinforcement is delivered after some behaviours or responses but never after each one.

Continuous reinforcement schedules are more often used when teaching new behaviours, while intermittent reinforcement schedules are used when maintaining previously learned behaviours (Cooper et al. 2007).

Continuous Schedule of Reinforcement (CRF)

Within an educational setting, a CRF would mean that the teacher would deliver reinforcement after every correct response from their students. For example, if you were teaching a student to read the letters A, B, C, and D, then every time you presented one of these letters to your student and they correctly read the letter, you would deliver reinforcement.

For an everyday example, every time you press the number 9 button on your television remote control your TV changes to channel 9; or every time you turn on your kettle it heats up the water inside it; or every time you turn on your kitchen tap (faucet) water flows out of it (unless any of these are broken of course).

Intermittent Schedules of Reinforcement

There are four basic types of intermittent schedules of reinforcement and these are:

  • Fixed-Ratio (FR) Schedule
  • Variable-Ratio (VR) Schedule
  • Fixed-Interval (FI) Schedule
  • Variable-Interval (VI) Schedule

Fixed-Ratio Schedule (FR)

A fixed-ratio schedule of reinforcement means that reinforcement should be delivered after a constant or “fixed” number of correct responses. For example, a fixed ratio schedule of 2 means reinforcement is delivered after every 2 correct responses. The chosen number could be 5, 10, 20 or it could be 100 or more; there is no limit but the number must be defined.

Generally, when writing out a fixed-ratio schedule into the discrete trial script it is shortened into just “FR” with the number of required correct responses stated after it (Malott & Trojan-Suarez, 2006). For example, choosing to reinforce for every second correct response would be written as “FR2”; reinforcing for every fifth correct response would be an “FR5”; for every 100 correct responses would be an “FR100” and so on.

Note that when running an ABA programme, you may see the reinforcement schedule defined as “FR1”. Technically this is a continuous reinforcement schedule (CRF) but to keep in line with how other ratio schedules are defined it is written using the “FR” abbreviation and so is written as “FR1”.

Comparing an FR1 and an FR2 schedule of reinforcement.


Variable-Ratio Schedule (VR)

When using a variable-ratio (VR) schedule of reinforcement the delivery of reinforcement will “vary” but must average out at a specific number. Just like a fixed-ratio schedule, a variable-ratio schedule can be any number but must be defined.

For example, a teacher following a “VR2” schedule of reinforcement might give reinforcement after 1 correct response, then after 3 more correct responses, then 2 more, then 1 more and finally after 3 more correct responses.

Overall there were a total of 10 correct responses (1 + 3 + 2 + 1 + 3 = 10), reinforcement was delivered 5 times and so reinforcement was delivered for every 2 correct responses on average (10 ÷ 5 = 2). As can be seen in the image below, reinforcement did not follow a constant or fixed number of correct responses and instead “varied” and hence the name “variable-ratio” schedule of reinforcement.
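The averaging in this example can be checked mechanically. This snippet simply restates the calculation above; the variable names are illustrative:

```python
# Runs of correct responses between reinforcer deliveries in the VR2 example
runs = [1, 3, 2, 1, 3]

total_responses = sum(runs)             # 10 correct responses in total
deliveries = len(runs)                  # reinforcement delivered 5 times
average = total_responses / deliveries  # 10 / 5 = 2.0, hence "VR2"

print(total_responses, deliveries, average)
# 10 5 2.0
```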

A variable-ratio schedule of reinforcement. Specifically, a VR2 schedule.


Fixed-Interval Schedule (FI)

A fixed-interval schedule means that reinforcement becomes available after a specific period of time. The schedule is abbreviated into “FI” followed by the amount of time that must pass before reinforcement becomes available, e.g. an FI2 would mean reinforcement becomes available after 2 minutes have passed; an FI20 means 20 minutes must pass, and so on.

A common misunderstanding is that reinforcement is automatically delivered at the end of this interval but this is not the case. Reinforcement only becomes available to be delivered and would only be given if the target behaviour is emitted at some stage after the time interval has ended.

To better explain this, say the target behaviour is for a child to sit upright at his desk, and an FI2 schedule of reinforcement is chosen. If the child sits upright during the 2-minute fixed-interval, no reinforcement would be given, because reinforcement for the target behaviour is not available during the fixed-interval.

If the child is slumped in his seat when the 2-minute interval elapses, reinforcement would still not be given: reinforcement is now available, but it is only delivered when the target behaviour occurs. Just because he emitted the target behaviour (sitting upright) during the interval does not mean reinforcement is delivered at the end of the interval.

Say 10 more minutes pass before the boy sits upright; only now, when he has emitted the target behaviour and the interval is over, would reinforcement be delivered. Once reinforcement is delivered, the 2-minute fixed-interval starts again. After the 2-minute fixed-interval had elapsed, it could have taken 2 seconds, 10 minutes, 20 minutes, 200 minutes or more until the boy sat upright, but no matter how long it took, no reinforcement would be delivered until he did.
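The sequence of events just described can be modelled as a small state machine. This is a sketch under my own naming (with time in seconds), not code from any source:

```python
class FixedInterval:
    """FI schedule: after `interval_s` seconds, reinforcement becomes
    *available*; it is *delivered* only when the target behaviour is
    then emitted, after which the interval restarts."""

    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.interval_start = 0.0

    def behaviour_emitted(self, now_s):
        """Target behaviour emitted at time now_s; True if reinforced."""
        if now_s - self.interval_start >= self.interval_s:
            self.interval_start = now_s  # restart the fixed interval
            return True
        return False  # interval still running: no reinforcement

# The FI2 (2-minute) example from the text:
fi2 = FixedInterval(120)
print(fi2.behaviour_emitted(60))   # sits upright during the interval -> False
print(fi2.behaviour_emitted(720))  # sits upright 10 min after it ends -> True
```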

Variable-Interval Schedule (VI)

The variable-interval (VI) schedule of reinforcement means the time periods that must pass before reinforcement becomes available will “vary” but must average out at a specific time interval. Again the time interval can be any number but must be defined.

Following a “VI3” schedule of reinforcement, a teacher could make reinforcement available after 2 minutes, then 5 minutes, then 3 minutes, then 4 minutes and finally 1 minute. In this example, reinforcement became available 5 times over a total interval period of 15 minutes. On average then, three minutes had to pass before reinforcement became available (2 + 5 + 3 + 4 + 1 = 15; 15 ÷ 5 = 3) and so this was a VI3 schedule.

Just like a fixed-interval (FI) schedule, reinforcement is only available to be delivered after the time interval has ended. Reinforcement is not delivered straight after the interval ends; the child must emit the target behaviour after the time interval has ended for the reinforcement to be delivered.

Interval Schedules: A Tip

A helpful way to think of the interval schedules of reinforcement (both fixed and variable) is to think of the chosen time period as a period of time where no reinforcement would be given for the target behaviour.

Interval Schedules with a Limited Hold

Both fixed-interval (FI) and variable-interval (VI) schedules of reinforcement might have what is called a “limited hold” placed on them. When a limited hold is applied to either interval schedule then reinforcement is only available for a set time period after the time intervals have ended.

For example, using an FI2 schedule with a limited hold of 10 seconds means that when the 2 minute time interval has ended the child must engage in the target behaviour within 10 seconds or the fixed-interval of 2 minutes will start again and no reinforcement would be delivered. The limited hold is abbreviated into “LH” so the example above would be written as “FI2-minutes LH10-seconds” or sometimes maybe “FI2min LH10sec”.
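A limited hold adds one window check to the fixed-interval rule. The sketch below is an illustration under my own assumptions (in particular, it assumes the interval restarts from the moment a missed hold is detected, which matches the text's statement that the interval "will start again"):

```python
class FixedIntervalLH:
    """FI schedule with a limited hold: reinforcement is available only
    for `hold_s` seconds after the `interval_s`-second interval ends."""

    def __init__(self, interval_s, hold_s):
        self.interval_s = interval_s
        self.hold_s = hold_s
        self.interval_start = 0.0

    def behaviour_emitted(self, now_s):
        elapsed = now_s - self.interval_start
        if elapsed < self.interval_s:
            return False                  # interval still running
        if elapsed <= self.interval_s + self.hold_s:
            self.interval_start = now_s   # within the hold: reinforce
            return True
        self.interval_start = now_s       # hold missed: restart, no reinforcer
        return False

# "FI2-minutes LH10-seconds" from the text:
sched = FixedIntervalLH(120, 10)
print(sched.behaviour_emitted(125))  # 5 s into the 10 s hold -> True
print(sched.behaviour_emitted(260))  # hold long since missed -> False
```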

Thinner and Thicker Schedules of Reinforcement

Sometimes you might hear the term “thicker schedule of reinforcement” or “thinner schedule of reinforcement”. These terms are used to describe a change that may be made to a schedule of reinforcement already being used.

For example, if a teaching programme was using an FR10 schedule (reinforcement delivered after every 10 correct responses), then a “thinner” schedule would mean increasing the number of correct responses needed to earn reinforcement, so the amount of reinforcement is reduced or “thinned”. Think of “thinner” in terms of “less” reinforcement. So for example, a thinner schedule than an FR10 schedule might be an FR15 schedule, so the child would now have to get 15 correct responses before earning reinforcement.

A “thicker” schedule would mean decreasing the number of correct responses needed to earn reinforcement, so the amount of reinforcement is increased. Think of “thicker” in terms of “more” reinforcement. So a thicker schedule than an FR10 might be an FR5 schedule, so the child would now have to get only 5 correct responses before earning reinforcement. Sometimes the term “denser” schedule of reinforcement might be used to denote a thicker schedule – these terms mean the same thing.

Thinner and thicker schedules of reinforcement.


Combining Schedules of Reinforcement

Say a teacher is working through a spelling programme with a child and is using a token economy as positive reinforcement on an FR2 schedule of reinforcement; one token (reinforcement) is being delivered for every second correct spelling. So for the first trial, the teacher says “Spell apple”, the child correctly spells the word and the teacher does not give a token…but what does the teacher do? How does the child know if he’s right or wrong?

To combat this, combinations of reinforcement schedules may be used where “verbal praise” is on a continuous (or an FR1) schedule of reinforcement while the token economy is on the FR2 schedule.

So for every correct spelling, the teacher would say something like “great job!” or “brilliant!” or “you’re right!” and then every second correct spelling is reinforced with a token as well as verbal praise. In these cases, you would likely see “FR1 praise, FR2 token” written out in the discrete trial script to specify which schedules of reinforcement are being used.
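One way to sketch the “FR1 praise, FR2 token” arrangement is to run two independent response counters, one per reinforcer. This minimal Python illustration uses hypothetical names of my own:

```python
class FixedRatioCounter:
    """Deliver reinforcement after every n-th correct response."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def correct_response(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

praise = FixedRatioCounter(1)  # FR1: praise every correct spelling
token = FixedRatioCounter(2)   # FR2: a token every second correct spelling

for trial in range(1, 5):
    earned = ["praise"] if praise.correct_response() else []
    if token.correct_response():
        earned.append("token")
    print(trial, earned)
# 1 ['praise']
# 2 ['praise', 'token']
# 3 ['praise']
# 4 ['praise', 'token']
```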

Combining fixed ratio schedules of reinforcement to deliver both tokens and verbal praise for correct responding.

There are also “compound” schedules of reinforcement, where different types of reinforcement schedules are combined in various ways. There is a lot that can be said to describe these schedules, and for the sake of brevity this article will not go into that detail.


Related Content

  • Discrete Trial Training
  • Positive Reinforcement
  • Token Economy


References

  • Cooper, J., Heron, T., & Heward, W. (2007). Applied Behaviour Analysis. New Jersey: Pearson Education.
  • Malott, R. & Trojan-Suarez, E. (2004). Principles of Behaviour. New Jersey: Pearson Prentice Hall.
  • Miltenberger, R. (2008). Behaviour Modification. Belmont, CA: Wadsworth Publishing.

What is a ratio schedule of reinforcement?

Ratio schedules involve reinforcement after a certain number of responses have been emitted. A fixed ratio schedule uses a constant number of responses. For example, if a rabbit is reinforced each time it pulls a lever exactly five times, it is being reinforced on an FR 5 schedule.

What is the schedule of reinforcement?

A continuous reinforcement schedule (CRF) presents the reinforcer after every performance of the desired behavior. This schedule reinforces target behavior every single time it occurs, and is the quickest in teaching a new behavior.

What are the 3 schedules of reinforcement?

Table 1: Different types of reinforcement schedules, including fixed interval and variable interval schedules.

In which schedule of reinforcement appropriate movements are reinforced after varying number of responses?

In the variable-ratio reinforcement schedule, appropriate movements are reinforced after a varying number of responses.