Don Baer

Don Baer is one of the “fathers of generalization.”  In 1999, he wrote:

“Learning one aspect of anything never means that you know the rest of it.  Doing something skillfully now never means that you will always do it well.  Resisting one temptation consistently never means that you now have character, strength, and discipline.  Thus, it is not the learner who is dull, learning disabled, or immature, because all learners are alike in this regard: no one learns a generalized lesson unless a generalized lesson is taught.”

He also established the effectiveness of “withdrawal of positive reinforcement” as a means of reducing behavior.

Links

Adkins, V. (2002). In Memoriam: Don Baer, 1931-2002. Behavior and Social Issues, 12(9), 9.

Cataldo, M. (2002). A tribute to Don Baer. Journal of Applied Behavior Analysis, 35, 319-321.

Degen Horowitz, F. (2002). Donald M. Baer remembered. Journal of Applied Behavior Analysis, 35, 313-314.

Poulson, C. L. (2002). In Memoriam: Donald M. Baer (1931-2002): A man of intelligence, integrity, courtesy, and humor. The Behavior Analyst, 25(2), 129-134.

Wesolowski, M. D. (2002). Pioneer Profiles: An interview with Don Baer. The Behavior Analyst, 25(2), 135-150.

Published Papers

Baer, D. M. (1960). Escape and avoidance response of preschool children to two schedules of reinforcement withdrawal. Journal of the Experimental Analysis of Behavior, 3, 155-159.

Baer, D. M. (1961). The effect of withdrawal of positive reinforcement on an extinguishing response in young children. Child Development, 32, 67-74.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 20, 313-327.

Morris, E. K., Baer, D. M., Favell, J. E., Glenn, S. S., Hineline, P. N., Malott, M. E., & Michael, J. (2001). Some reflections on 25 years of The Association for Behavior Analysis: Past, present, and future. The Behavior Analyst, 24(2), 125-146.

Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10, 349-367.

Students

Jesús Rosales-Ruiz

Instructional Design that Promotes Generalization

Multiple exemplar training: A teaching strategy that promotes generalization by using a variety of stimulus and response examples. For example, teaching a child to respond to a greeting either by saying “hello” or by waving, whether the other person waves or says “howdy.”

General Case Analysis: A systematic way of selecting teaching examples that represent the full range of both stimulus conditions and response requirements. For example, teaching a student to purchase milk at the grocery store with a credit card and to buy a magazine with cash at a kiosk.

Teaching Non-examples: It is also important to teach a student examples of when not to respond. For example, making a sandwich is fine with fresh bread, but if the bread is moldy, then don’t do it. The “don’t do it” stimulus (moldy bread for a sandwich) is an S-delta (SΔ).

Programming Common Stimuli: Bringing stimuli from the natural, generalized setting into the teaching environment. For example, when teaching a learner to count money, don’t use only pictures of money; use real dollar bills.

Teach Loosely: Varying non-critical features of the teaching environment to encourage generalization. This includes varying the time of day, temperature, teacher, choice of words, etc. Being somewhat unpredictable prevents any single incidental feature from controlling the behavior and encourages the learning to carry over across settings.

Contingencies

A contingency is a reinforcing or punishing consequence that occurs after a behavior has been emitted by an individual or group. A naturally existing contingency, in layman’s terms a “natural consequence,” happens without any manipulation by the behavior analyst. For example, hitting the snooze button makes you late for work, which causes you to leave the house without the opportunity for breakfast. The punishment of not having access to breakfast is the naturally existing contingency for hitting the snooze button.

A contrived contingency is a reinforcer or punisher implemented by a teacher or practitioner in order to encourage behavior change or skill acquisition. An example of this would be giving a child access to a preferred toy after finishing a puzzle.

Non-contingent reinforcement (NCR) involves delivering reinforcement on a regular time-based schedule regardless of behavior. The goal is to reduce a problem behavior by providing the reinforcer freely, without contingencies, so that the motivation to engage in the problem behavior is reduced. NCR is a manipulation of motivating operations, creating an abolishing operation (AO) and a state of satiation with respect to the reinforcer. If the function of a behavior has been identified as attention, for example, the behavior might be reduced by giving attention on a regular schedule, which lowers the motivation to obtain it through the problem behavior.

An indiscriminable contingency is one in which the learner cannot predict exactly which responses will produce reinforcement, for example when reinforcement is intermittent or delayed. To test and program for generalization, the reinforcement rate is thinned toward the rate that is naturally available. The learner should not need prompts or continuous reinforcement during generalization; the indiscriminable, naturally available contingency should be enough to maintain the behavior.

The four-term contingency is: motivating operation (EO) → discriminative stimulus (SD) → response → reinforcement.

Generalization

Baer, Wolf, and Risley (1968) included “generality of behavior change” as one of their seven dimensions of applied behavior analysis. Generalization of behavior change occurs when the behavior occurs outside of the learning environment. Generalization can happen across 1) settings, 2) time, and 3) people, and it exists when the behavior occurs across these dimensions without having to be retaught.

Response Maintenance is the continuation of a learned behavior after the intervention has been removed; the learned behavior remains in place after part or all of the intervention is no longer present. For example, take Jimmy, who learns to complete his homework via the use of a token board after each correct answer. If, when the token board is no longer used, Jimmy continues to complete his homework at the criterion level, then response maintenance has occurred. It is important to probe, or test, for maintenance over time to ensure that the learner has retained the new skill.

Setting / Situation Generalization has occurred when the behavior of interest occurs in a setting other than the one in which it was taught (the instructional setting).

Response Generalization is the extent to which the learner can emit a behavior that is functionally equivalent to the behavior that was taught; the trained stimulus comes to occasion novel, untrained responses. For example, if Sally learned to pick up a phone and talk on it with a friend, she shows response generalization if she can also pick up a walkie-talkie and use it to talk to a friend.

Over-generalization is a layman’s term for stimulus control that has become too broad. This can also be referred to as “undesired response generalization” and results when training produces generalization that causes poor performance or undesired results. For example, if a child learns to open the door when the doorbell rings and then also opens the door any time the phone makes a ringing sound similar to the door chime, that would be over-generalization.

Stimulus Generalization refers to when different, but physically similar, stimuli evoke the same response. It is also known as a loose degree of stimulus control. For example, if a child says “Josie” in the presence of his black and white cat, stimulus generalization would be observed if he also said “Josie” in the presence of his neighbor’s black and white dog or in the presence of a skunk. However, stimulus generalization doesn’t always mean a greater degree of stimulus control is needed. For example, if a child was taught to use the potty on only one toilet, his ability to go potty on a different toilet in another environment would demonstrate desirable stimulus generalization.

Stimulus Discrimination occurs when different stimuli evoke different responses. For example, not all snakes are poisonous. My husband knows how to tell the difference between poisonous snakes and non-poisonous ones: he has discriminated between these stimuli and will catch a non-poisonous snake but avoid a poisonous one.

It is important to note that a behavior analyst should always program explicitly for generalization and never assume that generalization comes “for free.” Programming for generalization consists of 1) targeting the behavior and identifying the natural reinforcement available for it, and 2) identifying the different environments, people, and times that create opportunities for generalization.


Sticking to Your New Year’s Goals

The New Year is upon us and along with it linger our New Year’s resolutions. Have you already considered discarding your goals this year due to lack of progress? Maybe you haven’t even set any yet because they haven’t worked in the past. It is not too late to set a few solid New Year’s resolutions for this year. Sticking to your goals simply requires some fine-tuning.

Most people set goals for the New Year that are health-related, seeking to improve their physical well-being by improving their eating habits or increasing their exercise. Others may be in search of improving their emotional health by addressing unresolved emotional concerns or improving their preventative mental health practices. If you want to be firm in your resolve, then have a look at your goals and determine whether or not they are possible.

Refining New Year’s resolutions is not a new topic. Dr. Meredith Brinster previously posted a blog about exercising self-compassion. Dr. Mike Brooks has also posted many related blogs including a couple with specific tips for weight loss hacks, and overcoming inertia using the 5-minute rule.

Setting S.M.A.R.T. Goals

One acronym that helps you decide whether or not your goals are actionable is S.M.A.R.T. It stands for the following:

  • Specific – The more detail you use to describe your goal, the better. Consider exactly what you want to achieve and then work out the details (e.g., what, when, how, and why).
  • Measurable – Identify exactly how you will know when you have reached your goal: what you will see, hear, and feel.
  • Achievable – Is your goal realistic given your current obligations and life circumstances? Consider what you need in order to reach the goal. If the goal is impossible to attain, then you need to reevaluate and choose something else.
  • Relevant – How motivated are you to achieve the goal? Ask yourself if the goal is worthwhile and whether or not it is the right goal for you.
  • Time-Bound – It is important to set a realistic timeframe for accomplishing your goal. Setting up smaller goals along the way will help you determine whether you are on track for meeting your ultimate goal.

Accountability

Once you have devised your SMART goal, the last step is holding yourself accountable. Share your goal with someone else. If others know about your goal, then you will have someone else checking in to see whether or not you have made progress. Your accountability partners will be able to offer you encouragement and you will be more motivated to not disappoint them.

The Takeaway

Now you know how to develop goals that are Specific, Measurable, Achievable, Relevant, and Time-Bound. Take some time to write down your new goals and develop your plan for checking on your progress. If you follow these steps, then you will have more success sticking to your goals this year and will experience a sense of accomplishment.

Schedules of Reinforcement

A schedule of reinforcement is a rule that describes which occurrences of a behavior will be reinforced. At the two ends of the spectrum of schedules of reinforcement are continuous reinforcement (CRF) and extinction (EXT).

Continuous reinforcement provides reinforcement each and every time the behavior is emitted. If every time you hear the doorbell ring there is someone on the other side of the door with a package for you, that would be continuous reinforcement.

With extinction, a previously reinforced behavior is no longer reinforced at all; all reinforcement is withdrawn. An example of this is if every time you go to the grocery store with your child and they ask for a treat, you give it to them. One day, you decide to put this behavior on extinction and try to reduce the “asking for candy” behavior by no longer giving the candy. You are now putting the behavior into extinction, which can have the effect of temporarily increasing the behavior (an extinction burst) and producing aggressive behavior as a side effect.

Intermittent schedules of reinforcement (INT) are schedules in which some, but not all, instances of a behavior are reinforced. An intermittent schedule can be described as either a ratio or an interval schedule. Ratio schedules require a certain number of responses to be emitted before reinforcement. Interval schedules reinforce the first response after a certain amount of time has passed since the last reinforcement. The ratio or interval can be either fixed or variable: a fixed schedule is one in which the number of responses or the amount of time remains constant, while a variable schedule is one in which the number or time varies around an average.
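
The difference between ratio and interval schedules, and between fixed and variable requirements, can be made concrete with a small simulation. The following sketch is purely illustrative (the class names and parameters are invented for this example and are not part of the original material or of any ABA software); it simply encodes the rule each basic schedule uses to decide whether a given response earns reinforcement.

    import random

    # Purely illustrative classes (names and parameters are invented for this
    # sketch); each one answers a single question: does this response earn
    # reinforcement under this schedule?

    class FixedRatio:
        """FR n: reinforce every nth response."""
        def __init__(self, n):
            self.n = n
            self.count = 0

        def record_response(self):
            self.count += 1
            if self.count >= self.n:
                self.count = 0
                return True   # deliver reinforcement
            return False

    class VariableRatio:
        """VR n: reinforce after a number of responses that varies around an average of n."""
        def __init__(self, n):
            self.n = n
            self.count = 0
            self.requirement = random.randint(1, 2 * n - 1)

        def record_response(self):
            self.count += 1
            if self.count >= self.requirement:
                self.count = 0
                self.requirement = random.randint(1, 2 * self.n - 1)
                return True
            return False

    class FixedInterval:
        """FI t: reinforce the first response made after t seconds have elapsed since the last reinforcement."""
        def __init__(self, t):
            self.t = t
            self.last_reinforcement = 0.0

        def record_response(self, now):
            if now - self.last_reinforcement >= self.t:
                self.last_reinforcement = now
                return True   # deliver reinforcement
            return False

    # A variable-interval (VI t) schedule works like FixedInterval, except the
    # required elapsed time is redrawn around an average of t after each
    # reinforcement.

    # Example: an FR 5 schedule reinforces the 5th, 10th, 15th, ... response.
    fr5 = FixedRatio(5)
    print([fr5.record_response() for _ in range(10)])
    # [False, False, False, False, True, False, False, False, False, True]

Read this way, a ratio schedule counts responses, an interval schedule watches the clock, and the variable versions simply redraw their requirement around an average after each reinforcement.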

Post-reinforcement pauses are associated with fixed schedules of reinforcement. Both fixed-ratio and fixed-interval schedules show a post-reinforcement pause, but the fixed-ratio schedule then produces a high, steady rate of responding, whereas the fixed-interval schedule shows a scalloped effect when graphed: immediately after reinforcement is delivered there is little responding, and responding accelerates as the next scheduled opportunity for reinforcement approaches. Post-reinforcement pauses and scalloped response patterns are not present with variable schedules and conjunctive schedules of reinforcement.

Compound schedule of reinforcement

Concurrent schedule (conc)
  • Occurs when two or more contingencies of reinforcement operate independently and simultaneously for two or more behaviors.
  • Involves choice making.
  • Response allocation is described by the Matching Law (see the formula after the list below).
  • Three types of interactions associated with concurrent schedules are:

  1. the frequency of reinforcement (i.e. the more frequently a behavior receives reinforcement, the higher the likelihood that responding will increase),
  2. reinforcement vs. punishment (i.e. the behaviors associated with the punishment schedule will decrease, while the behaviors associated with reinforcement schedule will increase), and
  3. reinforcement vs. aversive stimuli (i.e. rate of avoidance responding to the aversive stimuli will increase with the intensity and frequency of the aversive stimulus schedule).
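
For reference, the Matching Law mentioned above describes how responding is distributed under concurrent schedules: the relative rate of responding on each alternative tends to match the relative rate of reinforcement obtained from it. In its simplest two-alternative form,

B_1 / (B_1 + B_2) = R_1 / (R_1 + R_2)

where B_1 and B_2 are the response rates on the two alternatives and R_1 and R_2 are the reinforcement rates they produce. For example, if alternative 1 produces three times as much reinforcement as alternative 2, roughly three quarters of the responding will be allocated to alternative 1.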

Multiple schedule (mult):

  1. alternates two or more component schedules of reinforcement for a single response
  2. only one schedule is in effect at any time
  3. uses a discriminative stimulus (SD) to signal which schedule is in effect

Chained schedule (chain): Presents the component schedules in a specific order, may use the same or different behaviors for the elements of the chain, and signals each component with its own SD.

Mixed schedule (mix)

  1. alternates two or more component schedules of reinforcement for a single response
  2. only one schedule is in effect at any time
  3. NO discriminative stimulus (SD) signals which schedule is in effect

Tandem schedule (tand): Identical to a chained schedule (the components are completed in a specific order), except that no SD signals which component is in effect.

Alternative schedule (alt): Reinforcement is delivered when either a ratio requirement or an interval requirement is met, whichever is satisfied first.

Conjunctive schedule (conj): Reinforcement is delivered when both a ratio requirement and an interval requirement have been met.

Progressive Schedule: The schedule requirement is systematically thinned at each successive reinforcement opportunity, regardless of the learner’s behavior.

Reinforcement 101

The concept of reinforcement is one of the most important and most widely used principles in applied behavior analysis.  The most basic definition of reinforcement is that when a behavior (R) is followed by a reinforcing stimulus (S^R), there is an increase in the future frequency of that behavior.  Reinforcers can be categorized into types or classes, and they are identified through a process called preference assessment.  Reinforcement occurs when a behavior increases because a consequence either adds something to or removes something from the environment.

Some important attributes of reinforcement:

  • The time between the behavior and the consequence.  Immediacy of reinforcement is critical to create the relationship between the behavior and the consequence.
  • The conditions present when the behavior occurs
  • The motivation present for obtaining the consequence

If a consequence does not closely follow a behavior, it is not acting as reinforcement.  If a behavior increases because of a long-delayed consequence, then the increase is due to rule-governed behavior or instructional control rather than direct reinforcement.  An example of rule-governed behavior is when a child receives a reward for a semester of straight As.  If this increases the child’s studying behavior, it is not due to direct reinforcement but to the rule “if I study more, I will receive future awards.”  Instead, if a child received immediate social praise from a parent or teacher after a study session and this increased the child’s studying behavior, this would be an example of positive reinforcement at work.

2 Term Contingency

Reinforcement is a function of the relation between a behavior and the consequence that immediately follows it, which increases the odds that the behavior will occur again in the future.

3 Term Contingency

When a consequence is paired with a behavior, that relationship, in combination with the antecedent stimulus, creates a three-term contingency: the antecedent is followed by the behavior, which is followed by the consequence, and a contingent relationship exists among the three.

A discriminative stimulus (S^D) is an antecedent stimulus (something that happens before the behavior) that is correlated with the availability of reinforcement for that behavior.  A stimulus delta (S^delta) is an antecedent condition in which the behavior will not produce reinforcement.  For example, given the S^D “cell phone ringing,” the behavior of answering the phone and saying “hi” results in the consequence (reinforcement) of having a conversation.  If the phone is not ringing (stimulus delta) and you pick up the phone and say “hi,” you will not receive the reinforcing consequence of a conversation.
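
The phone example can also be written out as a tiny, purely illustrative sketch (the function and variable names below are hypothetical, chosen only to mirror the example above): reinforcement is available only when the response occurs in the presence of the S^D.

    # Hypothetical sketch of the phone example: the consequence depends on whether
    # the response occurs in the presence of the S^D (phone ringing) or the
    # stimulus delta (phone not ringing).

    def consequence(phone_is_ringing: bool, answers_and_says_hi: bool) -> str:
        if not answers_and_says_hi:
            return "no response emitted, so no programmed consequence"
        if phone_is_ringing:  # S^D present: reinforcement is available
            return "conversation (reinforcement)"
        return "no conversation (no reinforcement)"  # stimulus delta

    print(consequence(phone_is_ringing=True, answers_and_says_hi=True))   # reinforced
    print(consequence(phone_is_ringing=False, answers_and_says_hi=True))  # not reinforced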

It’s all about motivation

Motivating operations (MOs) are environmental variables that alter the value of a consequence, making it more or less effective as a reinforcer.  For example, after you have mowed the lawn on a hot day, a cold glass of lemonade is much more motivating and rewarding than it is on a cold, snowy day when you are sitting in your pajamas near a fire.