Stimulus Control: Salience, Masking and Overshadowing

Stimulus salience refers to how obvious or prominent a stimulus is in a person’s environment.  If a person has visual deficits, then a visual stimulus will not have as much salience as an auditory stimulus, for example.  In order to notice a stimulus, and for that stimulus to have salience, a learner must possess the pre-attending skills necessary for the setting.  The pre-attending skills for kindergarten, for example, include looking at the instructional materials, listening to instructions and to the teacher, and sitting quietly while instruction is happening.

Masking occurs when the salience of a stimulus is decreased.  A competing stimulus blocks the evocative power of the stimulus, decreasing its effectiveness.  For example, a teenager may follow directions when alone with a parent but have a more difficult time when peers are present.  This example illustrates competition between different contingencies of reinforcement, which makes it more difficult for the learner to attend to the discriminative stimulus.

Overshadowing occurs when a competing stimulus interferes with an existing stimulus, so that the original stimulus no longer exerts stimulus control.  An example is a teenager who can study in a classroom, but not in front of a group of cheerleaders.

In order to reduce the effects of overshadowing and masking, we must apply antecedent interventions such as: arranging the environment to reduce “noise” from unwanted stimuli, making the instructional stimuli more intense, and consistently reinforcing behavior in the presence of the desired stimulus.

Interobserver Agreement (IOA)

Interobserver Agreement (IOA) refers to the degree to which two or more independent observers report the same observed values after measuring the same events.

4 Benefits of IOA

  1. Determines the competence of new observers (when IOA is low)
  2. Detects observer drift over the course of a study (when IOA is low)
  3. Increases confidence that the target behavior was clearly defined (when IOA is high)
  4. Confirms that changes in the data are due to changes in behavior and not in data collection (when IOA is high)

4 Methods for collecting IOA

  1. Total count IOA – this is the simplest and least exact method.  IOA = smaller count / larger count * 100.  Caution must be used because there is no guarantee that the observers are recording the same instances of the behavior.
  2. Mean count-per-interval IOA – a more accurate representation of IOA is obtained by
    1. dividing the total observation period into a series of smaller counting intervals,
    2. having the observers record the number of occurrences of behavior within each interval,
    3. calculating the agreement between the observer counts within each interval, and
    4. using the agreements per interval as the basis for calculating the IOA for the total observation period
    5. IOA = (int 1 IOA + int 2 IOA + … + int n IOA) / n intervals * 100
  3. Exact Count-per-interval IOA – is the most exact way to count IOA.  This is the percent of intervals in which observers record the same count.  IOA = # of intervals at 100% IOA / n intervals * 100
  4. Trial-by-trial IOA – # of trials with agreement / total # of trials * 100
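The four formulas above are simple arithmetic, so they can be sketched directly in code. This is a minimal illustration with made-up observer counts; the function names are my own, not standard terminology. Note how total count IOA comes out at 100% even though the observers disagree interval by interval, which is exactly the caution raised above.

```python
# Sketches of the four count-based IOA formulas (hypothetical data).

def total_count_ioa(count_a, count_b):
    """Total count IOA = smaller total count / larger total count * 100."""
    return min(count_a, count_b) / max(count_a, count_b) * 100

def mean_count_per_interval_ioa(counts_a, counts_b):
    """Average the per-interval IOAs (each = smaller count / larger count * 100)."""
    per_interval = [
        100.0 if a == b == 0 else min(a, b) / max(a, b) * 100
        for a, b in zip(counts_a, counts_b)
    ]
    return sum(per_interval) / len(per_interval)

def exact_count_per_interval_ioa(counts_a, counts_b):
    """Percent of intervals in which both observers record the same count."""
    exact = sum(a == b for a, b in zip(counts_a, counts_b))
    return exact / len(counts_a) * 100

def trial_by_trial_ioa(trials_a, trials_b):
    """Percent of discrete trials on which both observers agree."""
    agreements = sum(a == b for a, b in zip(trials_a, trials_b))
    return agreements / len(trials_a) * 100

# Two observers count the same behavior across five intervals.
obs_a = [2, 1, 0, 3, 2]   # total = 8
obs_b = [2, 2, 0, 3, 1]   # total = 8

print(total_count_ioa(sum(obs_a), sum(obs_b)))     # 100.0 (totals match, masking disagreement)
print(mean_count_per_interval_ioa(obs_a, obs_b))   # 80.0
print(exact_count_per_interval_ioa(obs_a, obs_b))  # 60.0 (same count in 3 of 5 intervals)
```

The same data yield three very different IOA values, which is why the interval-based methods are considered more conservative than total count IOA.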

Interval IOA

  1. In scored-interval IOA, consider only the intervals in which at least one observer recorded a “yes” (an occurrence).  Divide the number of those intervals on which the observers agree by the total number of those intervals.
  2. In unscored-interval IOA, you do the same as for scored-interval IOA, except you consider the intervals in which at least one observer recorded a “no” (a nonoccurrence).
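These two definitions can also be sketched in code. This is a minimal illustration with made-up yes/no records (the function names are my own): scored-interval IOA ignores intervals both observers left blank, and unscored-interval IOA ignores intervals both observers scored.

```python
# Sketch of scored- and unscored-interval IOA (hypothetical yes/no records).
# True = behavior scored ("yes"), False = not scored ("no").

def scored_interval_ioa(obs_a, obs_b):
    """Use only intervals where at least one observer scored an occurrence."""
    relevant = [(a, b) for a, b in zip(obs_a, obs_b) if a or b]
    agreements = sum(a == b for a, b in relevant)
    return agreements / len(relevant) * 100

def unscored_interval_ioa(obs_a, obs_b):
    """Use only intervals where at least one observer scored a nonoccurrence."""
    relevant = [(a, b) for a, b in zip(obs_a, obs_b) if not a or not b]
    agreements = sum(a == b for a, b in relevant)
    return agreements / len(relevant) * 100

a = [True, True, False, False, True, False]
b = [True, False, False, True, True, False]

print(scored_interval_ioa(a, b))    # intervals 1, 2, 4, 5 count; agreement on 1 and 5 -> 50.0
print(unscored_interval_ioa(a, b))  # intervals 2, 3, 4, 6 count; agreement on 3 and 6 -> 50.0
```

Scored-interval IOA is the stricter measure for low-rate behavior, and unscored-interval IOA for high-rate behavior, because each discards the intervals on which agreement is almost guaranteed by chance.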

Reliable vs. Accurate Data

Reliable data is data that gives the same results each time you measure it.

Accurate data is data that is correct.

Reliable data is not always accurate, but accurate data is always reliable.

3 Conditioned Motivating Operations – CMOs

What are conditioned motivating operations (CMOs)?  First we need to discuss motivating operations (MOs).  An MO (sometimes called an establishing operation, or EO, when it increases the value of a consequence) is a state that changes the value of consequences and elevates their status as reinforcers.  For example, not having eaten lunch in a while creates a state of hunger, which is a motivating operation that elevates the value of food as a reward for doing work.  If someone is told that after they finish their homework they will get a snack, they will work very hard to finish if they are sufficiently hungry.  The MO creates a state of value for the reward of eating the snack.  Conversely, someone who has just eaten a big lunch may not be very motivated to work for a snack: they are satiated on food, so it does not momentarily serve as a reinforcer.

Unconditioned motivating operations (UMOs) are the MOs that one has naturally, without being taught their value.  These are unlearned motivating states and include being tired, hungry, thirsty, and wanting activity.

Conditioned motivating operations (CMOs) are the MOs whose value one has learned.  These are otherwise neutral states that now have value because they have been paired with a UMO, another CMO, or with reinforcement or punishment.  There are 3 types of CMOs: surrogate CMOs (CMO-S), reflexive CMOs (CMO-R), and transitive CMOs (CMO-T).

CMO-S (SURROGATE)
A stimulus that has acquired its effectiveness by accompanying some other MO and has come to have the same value-altering and behavior-altering effects as the MO that it has accompanied.  A pairing process has to take place here with another MO.

Example: Mom usually puts the baby to sleep. One day, dad tries to put the baby to sleep, but the baby doesn’t fall asleep.  Mom usually wears a certain fuzzy house robe that the baby has paired with sleep.  Dad wears mom’s house robe, and the pairing of the robe with dad helps the baby fall asleep.

CMO-R (REFLEXIVE)
A condition or object that acquires its effectiveness as an MO by preceding a situation that is either worsening or improving.  It signals that an aversive event may be coming soon.  It is exemplified by the warning stimulus in a typical escape-avoidance procedure, which establishes its own offset as reinforcement and evokes all behavior that has accomplished that offset.

Example: The punishing coworker. In the presence of this person you “can’t seem to do anything right” and are constantly punished. She is always finding fault with you.  Because of this, you want to spend less time with this person and you avoid her. Soon the office associated with her takes on these aversive qualities and you avoid going anywhere near where she might be. Even hearing her voice down the hallway may signal you to take an early lunch and avoid running into her (and therefore avoid possible punishment).

CMO-T (TRANSITIVE)
An environmental variable that establishes (or abolishes) the reinforcing effectiveness of another stimulus and thereby evokes (or abates) the behavior that has been reinforced by that other stimulus.  You CANNOT have access to the stimulus you want until you solve the problem.

Example: Someone puts a lock on the fridge.  This establishes the reinforcing value of a key (key becomes the CMO-T) when access to food is valuable as a source of reinforcement.


Interval Recording in ABA

Time Sampling: Refers to a variety of methods for recording behavior at specific moments.  One divides the observation period into intervals and then records either the presence or absence of the behavior within, or at the end of, each interval.

Partial Interval Recording: Record whether the behavior happened at any time during the interval.  Tends to underestimate high-frequency behavior and overestimate duration.

When the goal is to increase behavior – use whole-interval recording because it underestimates the duration of the behavior

When the goal is to decrease behavior – use partial-interval recording because it overestimates the duration of the behavior

Whole Interval Recording:  At the end of each interval, it is recorded whether the behavior occurred throughout the entire interval.  The longer the interval, the more whole-interval recording will underestimate the occurrence of the behavior.

Momentary Time Sampling: Recorder notes whether the behavior happens at the moment each interval ends.  Not recommended for low frequency, short duration behaviors.

PLACHECK (planned activity check) is momentary time sampling for group engagement.

3 Types of Discontinuous Measurement (aka Time Sampling)

Momentary time sampling and partial- and whole-interval recording are discontinuous methods.  These methods all either over- or underestimate the rate of the target behavior because of the way the behavior is measured.  The resulting distortion is called an artifact.

Time sampling methods are suited for behaviors that do not have a discrete start and end (for example, crying).

A measurement artifact is data that appear to exist, but only because of the way they were measured.  Discontinuous measurement procedures, especially poorly chosen interval sizes, may produce artifacts.

Interresponse Time (IRT): measurement of the elapsed time between two consecutive responses.
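To make the over- and underestimation artifacts concrete, here is a minimal sketch (with made-up data; the function names are my own) applying the three discontinuous methods to the same second-by-second record of behavior, plus an IRT calculation. The behavior actually occurs in 14 of 30 seconds, yet partial-interval recording scores 100% of intervals and whole-interval recording scores only 33%.

```python
# Sketch: three discontinuous methods applied to one behavior record (hypothetical data).
# `record` marks each second the behavior occurred (1) or did not (0).

def chunks(record, size):
    """Split the second-by-second record into fixed-size intervals."""
    return [record[i:i + size] for i in range(0, len(record), size)]

def partial_interval(record, size):
    """Score an interval if the behavior occurred at ANY point during it."""
    return [int(any(c)) for c in chunks(record, size)]

def whole_interval(record, size):
    """Score an interval only if the behavior occurred THROUGHOUT it."""
    return [int(all(c)) for c in chunks(record, size)]

def momentary_time_sample(record, size):
    """Score an interval by whether the behavior is occurring at its final moment."""
    return [c[-1] for c in chunks(record, size)]

# 30 seconds of data in 10-second intervals; behavior occurs 14/30 seconds (~47%).
behavior = [1,1,0,0,0,1,0,0,0,0,  1,1,1,1,1,1,1,1,1,1,  0,0,0,0,0,0,0,1,0,0]

print(partial_interval(behavior, 10))       # [1, 1, 1] -> scores all intervals (overestimate)
print(whole_interval(behavior, 10))         # [0, 1, 0] -> scores 1 of 3 (underestimate)
print(momentary_time_sample(behavior, 10))  # [0, 1, 0] -> depends on the sampled moments

# Interresponse time (IRT): elapsed time between consecutive responses.
response_times = [3, 8, 15, 27]             # seconds at which responses occurred
irts = [b - a for a, b in zip(response_times, response_times[1:])]
print(irts)                                 # [5, 7, 12]
```

The same underlying behavior yields different pictures depending on the method, which is why the method is matched to the goal (whole-interval when increasing behavior, partial-interval when decreasing it).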

Behavioral Contrast

Behavioral contrast occurs in a multiple schedule of reinforcement or punishment and describes what happens when a change in the schedule in one component changes behavior in the opposite direction in the other component of the schedule. An example given in Applied Behavior Analysis is that of a child who eats cookies at the same rate whether his grandmother is present or absent.  One day, the grandmother punishes the child for eating cookies when she is present. This reduces the cookie eating when the grandmother is in the kitchen, but increases the cookie eating in the alternate condition, when the grandmother is absent (Cooper et al., 2007, p. 337).

Behavioral contrast is associated with multiple schedules of reinforcement and generally occurs between separate settings.

3 Ways to mitigate behavioral contrast effects:

  1. Teaching replacement behaviors
  2. Punishing all occurrences of the target behavior (all settings, all stimulus conditions, etc.)
  3. Eliminating or minimizing access to reinforcement for the problem behavior

Echoics, Mands, Tacts

The echoic is a verbal operant in which a person verbally repeats what another person says.  The echoic has point-to-point correspondence, meaning that the verbal stimulus and the response product match in their entirety. Motor imitation is related to echoics and can be a stepping stone to learning echoic behavior.  Echoics are a precursor to other verbal operants, such as the tact and the mand, and are an essential component of a learner’s verbal behavior (Cooper, Heron & Heward, 2007, p. 531).

The mand is verbal behavior in which a speaker asks for something that he or she wants.  Mands occur when there is a motivating operation (MO) for something, and the reinforcement is the acquisition of the thing directly related to that MO.  Mands are one of the first verbal operants acquired by a child and are essential to behavior management, as learning to mand for an item can decrease undesired behaviors used to acquire that item (Cooper et al., 2007, p. 530).

Mand training involves moving from stimulus control to motivating operation control.

Tacts are a verbal operant in which the speaker labels things in the environment.  Tacts occur when a nonverbal stimulus is presented that becomes a discriminative stimulus (Sd) via discrimination training.  When the tact produces generalized conditioned reinforcement, it comes under the functional control of the nonverbal discriminative stimulus (Cooper et al., 2007, p. 530).

References
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd ed.).
Upper Saddle River, NJ: Pearson Merrill Prentice Hall.

Motivating Operations

An establishing operation (EO) is a motivating operation that increases the value of a reinforcer and increases the frequency of behavior that provides access to that reinforcer (Cooper, Heron & Heward, 2007, p. 695).  An example of an EO is skipping lunch and having an empty stomach: being hungry increases the value of food and increases the behaviors that gain access to food.

An abolishing operation (AO) is a motivating operation that decreases the value of a reinforcer (Cooper et al., 2007, p. 263). For example, after having juice, the value of juice as a reinforcer could potentially decrease. Another example of an AO: after running a marathon, the value of running has decreased for the athlete.

A conditioned motivating operation (CMO) is when an item or an event has been trained to have reinforcing value due to a previously learned association (Cooper et al., 2007, p. 384). An example of a CMO would be needing a car key to turn on a car. The relationship between a car key and a car is a function of past learning.

An unconditioned motivating operation (UMO) is when an item or situation has reinforcement value that does not depend on previous learning (Cooper et al., 2007, p. 707). A clear example: if you are stranded on an island without food, satisfying hunger would be reinforcing. This association would not have to have been previously learned.

A value-altering effect is when a reinforcing stimulus, event, or tangible object is made more or less effective as the result of a motivating operation (Cooper et al., 2007, p. 707). An example of this is access to the internet: the reinforcing effectiveness of internet access changes based on overindulgence or complete restriction of access.

A behavior-altering effect occurs when the frequency of a behavior is changed by the same motivating operation that alters a reinforcer’s effectiveness (Cooper et al., 2007, p. 375). For example, if someone bathes every day, this is their current frequency of bathing behavior. If they hurt their back and realize that bathing reduces their back pain, a behavior-altering effect would be that the bathing behavior increases because of the reinforcement of pain relief. Similarly, if the person hurt their back and realized that climbing down into the bathtub caused more pain, a behavior-altering effect might be that their bathing behavior decreases because of the positive punishment of additional back pain.

A surrogate CMO (CMO-S) has the same effect as the MO with which it was paired (Cooper et al., 2007, p. 384).  For example, snow is usually paired with cold weather; if it starts to snow on a warm day, the sight of the snow may cause a person to put on more clothes even though the temperature has not significantly changed.

A reflexive CMO (CMO-R) makes its own removal reinforcing.  This can happen when a stimulus has preceded either a worsening or an improvement of a situation (Cooper et al., 2007, p. 384).  CMO-Rs can be thought of as warning signals.  For example, if a student’s experience in a classroom is that a teacher saying, “Let’s get to work,” is followed by difficult work, the phrase serves as a warning signal and can evoke escape behavior.

A transitive CMO (CMO-T) makes something else effective as reinforcement but does not change its own value (Cooper et al., 2007, p. 384). For example, the value of having a pen is greater when someone gives you a piece of paper and tells you to fill it out.  The paper has not changed in this example, but it serves to increase the value of something else, in this case a writing utensil.

Contingencies

A contingency can be either reinforcement or punishment that occurs after a behavior has been emitted by an individual or group.  A naturally existing contingency, in layman’s terms a “natural consequence,” happens without manipulation by the behavior analyst.  For example, hitting the snooze button makes you late for work, which causes you to leave your house without the opportunity for breakfast.  The punishment of not having access to breakfast is the naturally existing contingency for hitting the snooze button.

A contrived contingency is a reinforcement or punishment that is implemented by a teacher in order to encourage behavior change or skill acquisition.  An example of this would be giving a child access to a preferred toy after finishing a puzzle.

Non-contingent reinforcement (NCR) involves giving reinforcement on a regular schedule regardless of behavior.  The attempt is to reduce the problem behavior by providing the reinforcer freely, without contingencies, so that the motivation to engage in the problem behavior is reduced. NCR is a manipulation of motivating operations, creating an abolishing operation (AO) and a situation of reinforcer satiation. If one has identified that the function of a behavior is attention, for example, we could try to reduce the behavior by giving attention on a regular schedule, reducing the motivation to get it through the behavior.

An indiscriminable contingency is one in which the learner cannot tell when reinforcement will be available, as when the reinforcement rate is thinned toward the rate that is naturally available.  In order to test for generalization, one would reduce the reinforcement rate to an indiscriminable contingency.  The learner should not need any prompts or continuous reinforcement during generalization, and the indiscriminable contingency should be enough to maintain the behavior.

The 4-term contingency is EO -> Sd -> Response -> Reinforcement

Generalization

Baer, Wolf, and Risley (1968) included “generality of behavior change” as one of their 7 dimensions of applied behavior analysis.  Generalization of behavior change occurs when the behavior occurs outside of the learning environment.  Generalization can happen across 1) settings, 2) time, and 3) people, and exists when the behavior occurs in these various dimensions without relearning.

Response Maintenance is the continuation of a learned behavior after the intervention has been removed.  The learned behavior is still in place after some or all of the intervention is no longer present.  For example, take Jimmy, who learns to complete his homework via the use of a token board after each correct answer.  If, when the token board is no longer used, Jimmy continues to complete his homework at the criterion level, then response maintenance has occurred.  It is important to probe, or test, for maintenance over time to ensure that the learner has retained the new skill.

Setting / Situation Generalization has occurred when the behavior of interest occurs in a setting other than in the one that it was taught (instructional setting).

Response Generalization is the extent to which the learner can emit a behavior that is functionally equivalent to the behavior that was taught; this is the case of stimuli occasioning novel responses. For example, if Sally learned to pick up a phone and talk on it with a friend, she shows response generalization if she can also pick up a walkie-talkie and use it to talk to a friend.

Over-generalization is a layman’s term for stimulus control that is too broad.  It can also be referred to as “undesired response generalization” and results when training produces generalization that causes poor performance or undesired results.  For example, if a child learns to open the door when the doorbell rings, but also opens the door any time the phone makes a ringing sound similar to the door chime, that is over-generalization.

Stimulus Generalization refers to when different but physically similar stimuli evoke the same response. It is also known as a loose degree of stimulus control. For example, if a child says, “Josie,” in the presence of his black and white cat, stimulus generalization would be observed if he said, “Josie,” in the presence of his neighbor’s black and white dog or in the presence of a skunk. However, stimulus generalization doesn’t always mean a greater degree of stimulus control is needed. For example, if a child was taught how to use the potty on only one toilet, his ability to go potty on a different toilet in another environment would demonstrate stimulus generalization.

Stimulus Discrimination occurs when different stimuli evoke different responses.  For example, not all snakes are poisonous.  My husband knows how to tell the difference between poisonous snakes and non-poisonous ones; he has discriminated these stimuli and will catch a non-poisonous snake but avoid a poisonous one.

It is important to note that a behavior analyst should always program explicitly for generalization and never assume that generalization comes “for free.”  Programming for generalization consists of 1) targeting the behavior and identifying natural reinforcement and 2) indicating the different environments, people and times that create an opportunity for generalization.