What Is Behaviorism and How Does It Influence the Work of a Behavior Analyst?


Do you ever wonder why we do what we do? You might wonder why you tend to behave in certain ways, why you get stuck in some hard-to-break habits, or how some of your habits have contributed to your success.

While we often hear theories from the field of psychology on how our thoughts influence our behavior, there are other scientific ways to understand human behaviors. So what is this science of behavior and how can we use it to modify our behavior?

Behaviorism is the study of observable and measurable human behaviors. Otherwise known as behavioral psychology, behaviorism emphasizes the role of environmental factors in influencing behavior.

“Behaviorism is understanding how the environment works so that we can make ourselves smarter, more organized, more responsible; so we can encounter fewer punishments and fewer disappointments. Behavior Analysis is a science of studying how we can arrange our environments so they make very likely the behaviors we want to be probable enough, and they make unlikely the behaviors we want to be improbable.” (Cooper et al., 2007, p. 15).

Behavior analysts (BCBAs) are professionals who seek to understand human behavior by examining a person’s environment and identifying changes that can improve the quality of life of individuals, groups, and society.

There’s a lot to cover, so make sure to bookmark this page in case you run out of time!

History of Behaviorism and Behavior Analysis 

Behaviorism dates back to the early 20th century and Thorndike’s Law of Effect. Through studying animal behavior, Thorndike found that if a behavior is followed by a desirable outcome, the likelihood of engaging in that behavior again is greater.

Around the same time, Ivan Pavlov discovered classical conditioning through his seminal dog experiments, in which a previously neutral stimulus could be conditioned to elicit a reflex response.

The neutral stimulus of a ringing bell, when paired repeatedly with the presentation of food, eventually led to the dog salivating when the bell was rung, despite food not being present. Pavlov’s research in classical conditioning added to the body of work that paved the way for behaviorism to really begin to take shape in the early 20th century.

Later, Thorndike’s Law of Effect was further developed through the work of John B. Watson. He argued that psychology needed to shift away from looking at mental processes to explain behavior as they could not be objectively measured or observed. 

He suggested a shift toward the direct observation of how environmental factors (or stimuli) influence the behavioral responses of living organisms (e.g. animals and humans). Initially, this was called Stimulus-Response Psychology, and became the precursor of modern Behavior Analysis (Cooper et al., p. 9). 

Using the work of Thorndike, Pavlov and Watson, B.F. Skinner further developed behaviorist theory and concepts through his empirical research, becoming the father of modern Behavior Analysis.

Skinner’s research in the 1930s highlighted two types of behavior: respondent and operant. 

Respondent Behavior

These are what we call ‘reflexes’ (Cooper et al., p. 10). Animals do not have control over these behaviors; they are the result of whatever immediate stimulus precedes them. For example: seeing or smelling appetizing food leads to salivation, feeling cold leads to shivering, and a touch to the cheek elicits an infant’s rooting reflex.

Operant Behavior

When considering why many behaviors could not be explained as reflexes, Skinner began to look at observable environmental factors rather than hypothetical reasons, such as free will or mental processes, that could not be understood or observed. 

He discovered that behavior is shaped by the results, outcomes, or consequences that follow it, rather than by the stimulus that precedes it. This is what he called operant behavior: behavior learned through the outcomes that follow it once it has been emitted. These outcomes predict whether the behavior is likely to occur again in the future.

Initially, Skinner experimented with animal behavior, using pigeons and rats in what became known as a ‘Skinner Box.’ 

The Skinner Box Experiment

In the Skinner Box experiment, Skinner delivered food to an animal if it pressed a specific lever. The initial responses seemed to not have an impact on the following behavior but, after the animals had experienced the food coming after the lever-press a number of times, their rate of response greatly increased (Cooper et al., p. 11). 

By tracking their rate of response, Skinner was able to demonstrate they had ‘learned’ what would occur if the lever was pressed. He continued on to include other environmental stimuli or conditions in which food was available (e.g., a colored light was turned on or off). 

Skinner extended Pavlov’s early understanding of conditioning by developing the more robust concept of stimulus control. Through his research on animal behavior, he learned that a previously neutral stimulus (e.g., the light) could, through a series of learning experiences, come to signal the availability of food and evoke responding (e.g., the animal is more likely to press the lever when the light is on than when it is off).

This is the basis of operant conditioning, later leading to behavior modification with people.

Outward and Internal Behaviors

There was a clear bifurcation within behaviorism regarding mental states and mental processes, often referred to as ‘private events’. This was because mental processes like thoughts and feelings are not outwardly observable behaviors. 

Skinner believed that these should not be ignored but should be considered in the analysis of overt behaviors. While human cognition is harder to measure, he believed there are ways to integrate these internal factors when studying overt, measurable behavior.

With this in mind, Skinner created radical behaviorism which became the basis of behavior analysis. Those that adhered to a behaviorist perspective, but did not think private events were worth paying attention to in the analysis of behavior, adhered to what became known as methodological behaviorism (Cooper et al., p. 13).

Radical behaviorism is “a thoroughgoing form of behaviorism that attempts to understand all human behavior, including private events such as thoughts and feelings, in terms of controlling variables in the history of the person (ontogeny) and the species (phylogeny)” (p. 702). 

Skinner realized that without acknowledging and accounting for behavior that occurs ‘within the skin,’ we are missing out on accounting for feelings, thoughts, and sensations. 

He believed that these internal behaviors and stimuli are just as important as observable behaviors and stimuli. Internal stimuli are variables that can influence outward, observable behaviors (Cooper et al., p. 13). This video by Behavior Analyst Ryan O’Donnell explains radical behaviorism in more detail.

Skinner said that these internal ‘behaviors’ also serve a function for the individual and are sensitive to the same contingencies as external behaviors, therefore, worthy of being studied. 

For example, having a migraine headache is an internal stimulus that cannot be ignored. A behaviorist views it as an internal stimulus that makes certain outward behaviors more or less likely to occur. A migraine means you’re likely to take medication and less likely to go to work. This is not something we can easily observe in the person, but it still influences their behavior.

What Is Behavior?

Behavior is anything a person does. According to ontogenic selectionism (i.e., selection by consequences), behavior is shaped by the responses (i.e., consequences) we experience from our environment after we engage in a behavior.

Responses include those from other humans, internal physiological stimulation, and other aspects of our physical environment. This means that the behavior of living things evolves over time as a result of the consequences the organism has experienced.

Parallel to Darwin’s natural selection for the physical evolution of living things (known as phylogeny), selectionism results in the evolution of new behaviors due to their function as a result of the consequences experienced by the living thing (known as ontogeny). 

Behavior Analysts examine the patterns of behavior that are typical in the client’s environment. They observe patterns that typically precede and follow a behavior to determine why a behavior is happening or what function it serves the person. 

Three-Term Contingency

The discovery of operant learning shifted the basis for predicting behavior from the stimulus that precedes it to the consequence or outcome that follows it. The three-term contingency includes the antecedent (A), behavior or response (B), and consequence or outcome (C).

Behavior scientists consider the patterns in the consequences following a behavior to predict if that behavior will increase or decrease in the future. 

The question is, what purpose is this behavior serving for this person? What are the outcomes for them? Let’s look at the outcomes that help predict whether a behavior will occur again in the future or not. 
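If it helps to make this concrete, the A-B-C structure is essentially a repeated observation record. Here is a minimal sketch in Python (the class name, fields, and example data are my own illustration, not a standard data-collection format):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class ABCRecord:
    """One hypothetical antecedent-behavior-consequence observation."""
    antecedent: str   # what happened right before the behavior
    behavior: str     # the observable response
    consequence: str  # what followed the behavior

# A behavior analyst might log repeated observations and look for patterns:
observations = [
    ABCRecord("asked to do a chore", "tantrum", "chore withdrawn"),
    ABCRecord("asked to do a chore", "tantrum", "chore withdrawn"),
    ABCRecord("asked to tidy toys", "tantrum", "chore withdrawn"),
]

# If the same consequence reliably follows the behavior, it hints at a function.
consequence_counts = Counter(o.consequence for o in observations)
print(consequence_counts.most_common(1))  # [('chore withdrawn', 3)]
```

When one consequence reliably follows the behavior across observations, that pattern is the analyst’s first clue about the behavior’s function.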

Reinforcement

Reinforcement is a central principle in applied behavior analysis. 

Reinforcement is when an outcome following a behavior increases the likelihood that the behavior will occur again in the future. Something can only be defined as a reinforcer if it causes a behavior to increase or be strengthened in the future.

Whether something functions as a reinforcer can only be confirmed by its effect on the individual’s future behavior, not by what others hypothesize might act as a reinforcer for that person. There are two types of reinforcement you might already have some understanding of:

Positive reinforcement

Positive reinforcement is when a stimulus is added after a behavior is emitted, making the behavior more likely to occur in the future. Examples of this include social interaction, a tangible reward like a delicious dessert, or a pleasurable physical sensation like sexual arousal.

What acts as positive reinforcement to one person might not have the same effect on another. This has to do with one’s preferences and how the stimuli influence one’s future behavior.

For example, I might create a workout program for myself and decide to reward myself with getting my nails done if I meet my goals for the week. However, when it comes down to it I’m not that motivated by this and it has no influence on my working out behavior. In fact, I stop meeting my daily goals.

Perhaps I’d rather reward myself with a latte at the end of the week instead. When I switch my reward and see my working out behavior increase, it’s clear that the latte is functioning as a reinforcer but getting my nails done was not. 

Negative reinforcement

Negative reinforcement is when something is removed after a behavior is emitted and makes the behavior more likely to occur in the future. This often has to do with escaping from a situation. 

For example, when the buzzer goes off in my car because my seatbelt is not on, I put my seatbelt on. Phew! I have escaped the annoyance of the buzzer. In the future, I’ll put on my seatbelt sooner when I start the car to avoid the annoyance of the buzzer.

Something can only be deemed a reinforcer for a person if the stimulus being added or removed results in them emitting that behavior more often in the future (e.g., putting the seatbelt on sooner).

This has a lot to do with personal preferences, tolerance level, pet peeves, and sensory needs. For example, if I choose to share my idea in a staff meeting and it gives me a lot of positive social attention, I might never speak in a staff meeting again since I do not like social attention in group settings.

On the other hand, if I am someone who values public accolades and attention from my colleagues, and sharing my idea in a staff meeting gains this for me, then I will be more likely to share my ideas in a staff meeting again. The attention functions as positive reinforcement. Something that is reinforcing for one person might not function as a reinforcer for another. 

Using the same example as above, my partner might not find the buzzing sound in the car as aversive as I do. This might result in him delaying putting on his seatbelt, as he doesn’t find the buzzer annoying. I start putting on my seatbelt right away, as I find the buzzer quite annoying. Its removal has served as negative reinforcement for me, but not for him.

Punishment

The word punishment can itself seem aversive. We might automatically associate this term with all sorts of traumatic and negative connotations. While punishment can include things that are aversive and inappropriate in modern behavioral treatment, let’s look at what the behavioral definition says.

By definition, punishment is only defined by whether the stimulus added or removed decreases the future frequency of a behavior. This is in contrast to reinforcement, which increases a behavior in the future. 

Positive punishment is when “a behavior is followed immediately by the presentation of a stimulus that decreases the future frequency of the behavior” (Cooper et al., p. 701). 

Negative punishment is when “a behavior is followed by the removal of a stimulus (or a decrease in the intensity of the stimulus), that decreases the future frequency of similar responses under similar conditions” (p. 700).
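Notice that these four terms reduce to two yes/no questions: was a stimulus added or removed, and did the behavior become more or less frequent in the future? A tiny decision function can sketch this (an illustrative model of the definitions above; the function and label strings are my own, not a standard tool):

```python
def classify_consequence(stimulus_added: bool, behavior_increases: bool) -> str:
    """Return the behavioral label for a consequence.

    stimulus_added: was a stimulus added (True) or removed (False)?
    behavior_increases: does the behavior become more frequent in the future?
    """
    if behavior_increases:
        # Any consequence that strengthens future behavior is reinforcement.
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    # Any consequence that weakens future behavior is punishment.
    return "positive punishment" if stimulus_added else "negative punishment"

# The seatbelt buzzer stops (stimulus removed) and buckling-up increases:
print(classify_consequence(stimulus_added=False, behavior_increases=True))
# negative reinforcement
```

Note that the “positive/negative” half of the label only records whether something was added or removed; it says nothing about whether the experience was pleasant.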

Let’s look at some common examples: 

Positive Punishment

You ask your roommate to do their dishes more often. They respond in a whiny tone of voice, get defensive and it turns into an argument. You find this whining and arguing aversive. Your behavior of asking your roommate to do their dishes happens less and less often in the future as you want to avoid that aversive situation of whining and arguing.

The whining and arguing is the stimulus that follows your asking. It results in the asking behavior decreasing in the future.

Negative Punishment

A classic example for many families is when a child is acting in a way that a parent doesn’t like. As things escalate, the parent starts taking away privileges. If in the future the child engages in that behavior less often to avoid having privileges taken away, the removal of privileges is acting as a negative punisher.

A stimulus was removed (the privilege) in response to the undesirable behavior, resulting in that behavior being less frequent in the future. However, please see other articles on this site, including the one about parenting children with ODD, about why relying on punishment is not fruitful. 

Modern behavior analysis primarily focuses on the use of positive reinforcement to teach new and adaptive skills, as there are many negative side-effects and questionable ethics of using punishment strategies.

Extinction

This is a third behavioral principle related to reinforcement and punishment. If a behavior typically results in reinforcement, but then reinforcement is withheld and the behavior decreases in frequency, extinction is in place. The behavior that once resulted in specific reinforcement no longer produces that same reinforcement. 

Here is an example:

You often enter a nearby grocery store through an automatic door that opens when you press a button with your elbow.

For weeks, this door has reliably opened for you so you can enter the store. In other words, you have been repeatedly reinforced for pressing the button, by the door opening over and over again. Today, however, you press the button at the grocery store and nothing happens.

You quickly press it again and maybe a third time. You look inside to see if the store is open. It appears there are other patrons inside so you press it again twice a little more firmly. Nothing. You are no longer being reinforced for the behavior that you once were. 

At this point, you give up pressing the button and try to wave down an employee through the door to come and investigate from the inside. Your button-pressing behavior has stopped by being placed on extinction. What once was reinforced is no longer being reinforced.
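The button-pressing story can be captured in a toy simulation (a deliberate oversimplification of my own, not a formal learning model): each reinforced press nudges the chance of pressing again upward, and each unreinforced press nudges it down.

```python
def simulate(presses: int, reinforced: bool, chance: int) -> int:
    """Return the updated chance (in %) of pressing the button next time.

    Toy update rule (my own assumption): +10 points per reinforced press,
    -15 points per unreinforced press, clamped to the 0-100 range.
    """
    for _ in range(presses):
        if reinforced:
            chance = min(100, chance + 10)  # door opens: behavior strengthened
        else:
            chance = max(0, chance - 15)    # door broken: extinction weakens it
    return chance

chance = simulate(10, reinforced=True, chance=50)       # weeks of the door opening
print(chance)  # 100: pressing is firmly established
chance = simulate(10, reinforced=False, chance=chance)  # today the button fails
print(chance)  # 0: the behavior has been extinguished
```

This simple model leaves out the brief burst of harder, faster pressing that often appears at the start of extinction, as in the example above.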

Functions Of Behavior

Behavior Analysts seek to understand the function, purpose, or ‘why’ behind a behavior. Given the concepts of reinforcement and punishment, any behavior that is being maintained must be producing some kind of reinforcement.

It is the job of a Behavior Analyst to observe, measure and analyze behaviors and be somewhat of a detective to figure out the function of the behavior of concern. 

There are four functions of behavior, and they often work in tandem with each other, but sometimes one will stand out as the clear primary function. This is especially true for very young children. 

  • Automatic: One gains a pleasurable sensory experience from engaging in a behavior; the reinforcement is not the result of another person being involved (i.e. it is not socially mediated). Engaging in the behavior just feels good. 
    • Example: If you’re someone who engages in exercise regularly, you likely enjoy the physiological feeling you get during and after exercising. Therefore your exercising behavior is being reinforced and you continue to exercise regularly.
  • Escape: Engaging in the behavior results in an escape from or delay of something aversive to the individual. 
    • Example: A child may tantrum when asked to do a chore because, in the past, the parent will usually retract the instructions in response to the tantrum. In the past, the tantrum has resulted in an escape from the chores. It serves as an ‘escape from chores’ function for the child.  
  • Tangible: One gains a physical item or activity as a result of engaging in the behavior of concern.
    • Example: A child may learn that if they begin to whine and yell when asked to give up the iPad, they are usually then allowed to continue playing on the iPad. The tangible reinforcement they receive for whining and yelling is more time on the iPad. 
    • On the flip side, the parent gives in because they find it hard to tolerate the whining and yelling. They want to escape their child’s aversive behavior so they give in and allow more time on the iPad. This might make it more likely for the parent to continue giving in to the whining in the future, as giving in serves as an escape function from the whining behavior.
  • Attention/Social:  A behavior is maintained by social attention from another human. Just to be crystal clear, humans have social needs. It is not bad to need social attention from others. It is simply part of being human. The challenge can come in when behaviors that are not safe or prosocial become the primary way a person meets their social/attention needs. 

The Science of Behavior and Learning

Behavior Analysts are in the business of teaching new skills. The goal of behavior analysis is to create meaningful changes for an individual to improve their quality of life.

Sometimes this means trying to reduce a problematic behavior, but this will always mean that the individual is also being taught useful and meaningful new skills and behaviors that will improve their quality of life. 

The early discoveries by Skinner influenced learning theory. By the late 1940s and 1950s, scientists began applying operant conditioning to people, including preschoolers, people with developmental disabilities, children with autism, adults with schizophrenia, and typically-developing adults.

The discovery that the principles of operant behavior applied to humans paved the way for modern applied behavior analysis in which these principles are applied to influence socially significant behavior and improve the quality of life for humans on small and large scales. 

This includes learning and education. You may have heard about some of the unsavory history of behavior analysis including methods used in early behavior modification or Ivar Lovaas and his work with individuals with autism.

However, the field has developed significantly in recent years and is shifting toward a compassionate, empowering, inclusive field truly devoted to making the world a better place through the thoughtful application of behavioral science. Like many other sciences, there have been things done in the past that today’s practitioners are not proud of but seek to change how things are done with a focus on equity and the betterment of society.

Here is an interesting video by Ryan O’Donnell about various applications of Behavior Analysis from small to large scale. It gives you a better idea of how it can be applied to groups or a societal level. Let’s look a little closer at the principles of behavior and learning.

Stimulus Control and Learning

A behaviorist theory of learning is centered around stimulus control. This is one of the most exciting principles in behavior analysis as it is the foundation of learning.

Stimulus control is when the presence or absence of a stimulus changes behavior in some way. This might include a change in latency (delay to onset), magnitude/intensity, frequency, or duration.

Through the principle of reinforcement (and sometimes punishment and extinction) we learn to respond to certain stimuli in specific ways. By learning to discriminate or discern which stimuli will produce reinforcement for us, we learn to behave in certain ways under specific conditions. 

Through the same processes we learn stimulus generalization, which is understanding which related or similar stimuli will also produce reinforcement for us.

When the balance between generalizing and discriminating is found, we have learned a new concept. In other words, a concept is the result of both stimulus generalization and stimulus discrimination between different groups of stimuli. 

For example, let’s think of the color blue. When we learn the color ‘blue’ we learn to discriminate blue from red, yellow, green etc. However, there are shades of blue that are all still considered ‘blue.’ We also learn to generalize what is still within the category of ‘blue’ and would label royal blue, baby blue, cobalt etc. all ‘blue.’

If stimulus control is too loose, we would perhaps call shades of purple ‘blue.’ If stimulus control is too tight, then we might only label one shade of blue as ‘blue.’ 
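One way to picture tight versus loose stimulus control is to model ‘blue’ as a band of hue angles on the color wheel (the numbers below are rough assumptions for illustration, not colorimetric standards):

```python
BLUE_CENTER = 220  # rough hue angle of 'blue' in degrees (an assumption)

def labels_as_blue(hue: float, tolerance: float) -> bool:
    """Does a learner under this stimulus control call the hue 'blue'?"""
    return abs(hue - BLUE_CENTER) <= tolerance

# Approximate hue angles, chosen for illustration only:
royal_blue, baby_blue, purple = 225, 200, 270

# Balanced control: generalizes across shades, discriminates purple.
print([labels_as_blue(h, 40) for h in (royal_blue, baby_blue, purple)])
# [True, True, False]

# Too-tight control: only one narrow shade counts as 'blue'.
print([labels_as_blue(h, 5) for h in (royal_blue, baby_blue, purple)])
# [True, False, False]

# Too-loose control: even purple gets called 'blue'.
print([labels_as_blue(h, 60) for h in (royal_blue, baby_blue, purple)])
# [True, True, True]
```

The tolerance plays the role of the learning history: reinforcement for correct labels and its absence for incorrect ones gradually tunes the band to the right width.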

Verbal Behavior And Relational Frame Theory

Theorists from various fields have long debated the mechanisms that result in language acquisition and language learning. One of the original behaviorist theories about this is called verbal behavior (VB).

The term verbal behavior was developed by B.F. Skinner, and is defined as “behavior whose reinforcement is mediated by a listener; includes both vocal-verbal behavior and nonvocal-verbal behavior. Encompasses the subject matter usually treated as language and topics such as thinking, grammar, composition, and understanding” (Cooper et al., p. 708). 

Skinner put forward that language is verbal behavior and is shaped by the same behavioral processes that shape non-language behavior (e.g., reinforcement, extinction, stimulus control etc.). 

Skinner defined verbal behavior by its function rather than its form. He developed an environmental account of language acquisition, stemming from the same principles of behavior established in behavioral science.

In contrast, Noam Chomsky’s biological account of language acquisition states that humans are born with innate language abilities (Cooper et al., p. 527). He pointed out in a critique that Skinner’s verbal behavior approach did not account for the way in which humans gain language in a generative or exponential manner.

A toddler is not explicitly taught every single word they begin to speak. They might be directly taught some words, but others are learned indirectly. In short, the verbal behavior approach has been critiqued for failing to account for complex language development, for falling short of providing empirical support, and for explaining language acquisition only through direct learning (contingencies of reinforcement and other behavioral processes).

Relational frame theory (RFT) was developed in response to Skinner’s verbal behavior approach but from within the behavioral sciences. 

RFT relies on operant learning and derived relational responding, which means humans can learn things without direct teaching, training, or experience (Torneke, 2010, p. x). When taught some concepts through operant learning (i.e., reinforcement, stimulus control, etc.), humans can derive relations among other concepts, which explains why we don’t need to be directly taught every single word we use.

If you’re curious to learn more about RFT, watch BCBA Ryan O’Donnell explain it further. 

Summary

And there you have it! Behaviorist theory has a long history dating back to the early 20th century, stemming out of the field of psychology. Following the early findings of B.F. Skinner, modern Behavior Analysts seek to understand why a behavior is occurring by identifying its function (i.e., what purpose is this behavior serving this person?).

This is done through understanding the functions of behavior. New skills are taught primarily through the principle of positive reinforcement. These behavioral processes result in learning via stimulus control as we learn to respond to specific stimuli but also generalize to other similar stimuli. 

The debate between the Verbal Behavior approach and Relational Frame Theory continues on in the behavioral sciences.

References

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd ed.). Pearson Education.
Torneke, N. (2010). Learning RFT. New Harbinger Publications.
