Saturday, July 30, 2011

Moral righteousness in trying times

A few years ago, I went to see the movie Valkyrie at a local theater chain called the Alamo Drafthouse.  One great aspect of the theater is that you can order food and drinks from your seat as you watch the film.  In addition, for about 30 minutes before the film, they run quirky programming that is somehow related to the film you are about to see.  For Valkyrie, they played a series of clips from documentary films and newsreels about Hitler’s rise to power. 

One in particular caught my attention.  The film described how Hitler, after falling out of favor politically, was able to return to power because the German economy had gone sour.  Hitler capitalized on people’s malaise to drive home his message of Aryan superiority and to cast groups such as the Jews as the source of evil in the world.

There is growing evidence that when people feel unsettled, they try to regain their psychological balance by striving to make their world feel more coherent.  One way that people achieve this end is to cling more strongly to the moral norms of their culture.  And when they cling to those norms, they tend to punish transgressors more heavily than they would when they feel in balance.

For example, in previous posts, I have talked about influences of the fear of death on behavior.  One thing that happens when people are reminded of their own mortality is that they become more punitive toward people who have transgressed morally.  In one common technique, participants play the role of a judge setting bail for someone accused of prostitution.  Studies using this technique demonstrate that people who have been reminded of their own mortality set higher bail than people who have not.

Anything that makes an individual feel unsettled can create this effect.  A particularly ingenious version of this effect was obtained by Travis Proulx and Steven Heine in a paper published in the December, 2008 issue of Psychological Science. 

They capitalized on an intriguing study done by Dan Simons and Dan Levin in 1998.  Simons and Levin had an experimenter approach people on the street with a map and ask for directions.  As the person was giving directions, workmen carrying a door cut in between them.  The experimenter then switched places with the person holding the rear of the door so that the person on the street was now giving directions to a new person.  About 80% of people in this situation never noticed that they were talking to a different person. 

Even though people don’t consciously notice the switch, there is evidence that they do feel somewhat unsettled by the experience.  That is, they have a feeling that something is wrong, but they don’t consciously recognize the source of the feeling.  Proulx and Heine had people come to the lab to participate in a study.  They were greeted by an experimenter.  After beginning the study, the experimenter left to get more materials and was replaced by another experimenter dressed the same way.

As in the Simons and Levin study, few people consciously recognized the switch.  Afterward, people responded to the vignette in which they set bail for a prostitute.  The group for whom the experimenter switched set bail higher than a control group with no switch.  In subsequent studies, the researchers also used some clever manipulations to demonstrate that this effect really was driven by people feeling unsettled by the switch of experimenters.

So, what does this have to do with the rise of the Nazis in Germany?

In difficult times, people strive for psychological balance.  When they cannot control their circumstances, they control their interpretation of the circumstances to help them feel like the world makes sense.  Clinging more strongly to social and cultural norms is one way that people try to make sense of their world.  This point seems particularly important now as we enter a difficult economic period.   People suffering economic hardships are particularly vulnerable to individuals who want to capitalize on people’s desire to make their world coherent.

Tuesday, July 26, 2011

Categories, essentialism, race, and culture.

Placing something in a category and describing its properties have very different effects on the way we think about things.  In various blog entries, I have pointed out that calling someone a musician makes playing music seem much more central to their being—more essential—than just saying that they play music.  What about categorizing people by their race?

Throughout the world, racial, cultural, and ethnic differences are used to place people into different categories.  Once we categorize people in this way, we automatically assume that they have the essence of this category.  For example, in 1994, Richard Herrnstein and Charles Murray wrote a book called The Bell Curve in which they documented racial differences in IQ test scores.  An implicit assumption of this book was that it was meaningful to classify people by race and that these racial categories reflected something essential about the people who were categorized.

How do racial categories develop?  This issue was addressed by Marjorie Rhodes and Susan Gelman in a 2009 paper in Cognitive Psychology.  They looked at two factors:  age and cultural background.  The participants in their study were primarily White.  They came either from a mid-sized city that was politically liberal or from a rural area that was politically conservative.  The participants ranged in age from 5 to 18.

The younger children played a game with a puppet.  They were told that the puppet came from another place where people do some things wrong, but do other things differently from the way we do them without being wrong.  After some practice with the game, children were shown one object or person, then a second, and were told that the puppet thought the two were the same kind of thing; they were asked whether the puppet was right.  For example, they might be shown a wolf and a lion and told that the puppet thought they were the same kind of thing.  Over the course of the study, the puppet classified animals and artifacts (like cars, forks, and dresses).  The puppet also classified people based on gender and race.

The older kids did a similar task, but without the puppet.  The oldest kids in this task (who were about 17) were asked these questions in a pencil-and-paper test.

So, what happened?

For simplicity, I’ll just focus on the animal and racial categories.  For the animals, kids of all ages tended to say that the puppet was wrong when it put animals of different kinds together.  That is, from age 5 up through age 17, children felt that it was not correct to put different kinds of animals in the same category.

The data for race were much more complex.

As an example, the participant might see a White girl and then an Asian girl and be told that the puppet thought that they were both the same kind of person. 

The youngest children (5- and 7-year-olds) showed no strong preference for saying that the puppet was right or wrong when putting together people of different races.  About half the time they said the puppet was right and half the time they said the puppet was wrong. 

For the older children (10-year-olds and 17-year-olds), the answer depended on where they grew up.  Older children who grew up in the politically liberal area said that the puppet could be right to group people of different races together.  Those who grew up in the politically conservative area said that the puppet was wrong to group people of different races together.

The first thing to notice about these data is that the belief that race is a possible basis for classifying people emerges late.  This observation is similar to what anthropologist Lawrence Hirschfeld has observed in his research. 

The second thing to see is that beliefs about whether it is necessary to classify people based on their race depend on what other members of your culture suggest.  You are much more likely to think it is necessary to classify people based on race if you grow up in a politically conservative environment than if you grow up in a politically liberal environment.

The reason that this type of classification matters is that classifying people into a group brings along the belief that the members of that group share some essential characteristics.  Consistent with that, Rhodes and Gelman asked the 17-year-olds to fill out scales about how strongly they believe that members of the same race share deep underlying characteristics not shared by other races.  Those kids who were most likely to think that it was necessary to classify people based on race were also the ones most likely to think that racial categories reflect something deeply similar about the members of that race.

For each of us, I think, it is worth reflecting on how likely we are to treat people differently because of the way we categorize them.

Friday, July 22, 2011

Seek and ye shall find

In my last post, I discussed the concept of probability matching in choice.  A simple case of probability matching is one in which you are given a choice between two cups, only one of which will have an M&M in it.  If the M&M is in the left cup 80% of the time, and the right cup 20% of the time, you can maximize the number of M&Ms you are likely to receive by always selecting the left cup.  That way, you will get M&Ms 80% of the time.  People (and most other animals) tend not to make this optimal response.  Instead, we tend to probability match.  That is, if the left cup has the M&M 80% of the time, then we pick the left cup 80% of the time. 
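The arithmetic behind that claim is easy to check with a short simulation.  This is my own sketch, not from the original post; the 80/20 split and the trial count are just illustrative:

```python
import random

random.seed(0)
TRIALS = 10_000
P_LEFT = 0.8  # the M&M is in the left cup 80% of the time

def run(strategy):
    """Return the fraction of trials on which a strategy finds the M&M."""
    wins = 0
    for _ in range(TRIALS):
        mm_on_left = random.random() < P_LEFT
        if strategy == "maximize":
            pick_left = True                      # always take the better cup
        else:  # probability matching
            pick_left = random.random() < P_LEFT  # pick left 80% of the time
        wins += (pick_left == mm_on_left)
    return wins / TRIALS

print(f"maximizing: {run('maximize'):.2f}")  # ~0.80
print(f"matching:   {run('match'):.2f}")     # ~0.68 (0.8*0.8 + 0.2*0.2)
```

The matcher wins only when its random choice happens to coincide with the random location of the M&M, which is why its hit rate drops to about 68%.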

There are many reasons why we carry out this behavior, and a paper by Wolfgang Gaissmaier and Lael Schooler in the December, 2008 issue of Cognition suggests a new reason.  They find that people who tend to probability match are better able to detect changes in the environment than people who find the option that is most highly rewarded and stick with it. 

One way to think about this is that there is always a tradeoff between exploring the world and exploiting it.  Exploration is the process of searching for new things.  The potential benefit of exploration is that you may discover rich new sources of reward.  The danger of exploration is that you may spend a lot of time and energy and come up empty-handed.  Exploitation is the process of drawing rewards from the world in known places.  The benefit of exploitation is that you have a good idea of what you are going to get.  The danger is that you may miss out on other opportunities that are more rewarding than the one you are currently exploiting.

This exploration-exploitation tradeoff occurs in almost every facet of our lives.  If you watch the same TV show routinely, you are exploiting that show.  If you sample different restaurants in the town you live in, you are exploring.  If you play a musical instrument and stick to the same set of songs you have already learned, you are exploiting.  If you deliberately schedule your vacations so that you always visit new places, you are exploring.

It has always been somewhat puzzling that people continued to explore in experiments demonstrating probability matching.  The optimal behavior is to exploit the option that pays off most often.  And narrowly within the context of the experiment, it is true that exploiting the best option in the study is the best thing to do.  However, the world is dynamic.  Things in the world change.  A restaurant that used to be terrible might get a new chef and suddenly be excellent.  A TV show that started off edgy and radical may slip into mediocrity. 

If you evaluate the world at one time and then exploit after that, you run the risk of missing changes in the world.  The cognitive system is structured to find a reasonable way to resolve the tradeoff between exploration and exploitation.  If there is one option that is far superior to all of the others, then you will tend to pick it most of the time and to select other options every once in a while, just to make sure that the world hasn’t changed radically.  If one option is only slightly better than another, then you sample the better option only slightly more often than the worse one.  That behavior is useful, because a small decrease in the quality of the better option (or increase in the quality of the worse option) could flip the relative goodness of the options.  And because you are doing a good job of managing the tradeoff between exploration and exploitation, you will notice.  So, the unpredictability of human behavior really is a virtue. 
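That reasoning can be sketched in a small simulation.  This is my own illustration, not the Gaissmaier and Schooler task: a two-option world whose better option flips halfway through, a "commit once, then exploit" strategy, and a probability matcher that keeps sampling.  The payoff rates, trial counts, and the recency-weighted estimates are all arbitrary choices:

```python
import random

random.seed(2)
TRIALS = 4000
FLIP = TRIALS // 2

def payoff(arm, t):
    """Arm 0 pays off 80% of the time at first; halfway through,
    the world changes and arm 1 becomes the better option."""
    p = [0.8, 0.2] if t < FLIP else [0.2, 0.8]
    return random.random() < p[arm]

def commit_then_exploit():
    """Sample both arms briefly, then exploit the winner forever.
    Returns the win rate after the world has changed."""
    wins, picks, second_half = [0, 0], [0, 0], 0
    for t in range(200):               # evaluation phase
        arm = t % 2
        won = payoff(arm, t)
        wins[arm] += won
        picks[arm] += 1
    best = 0 if wins[0] / picks[0] >= wins[1] / picks[1] else 1
    for t in range(200, TRIALS):       # exploit without re-evaluating
        won = payoff(best, t)
        if t >= FLIP:
            second_half += won
    return second_half / (TRIALS - FLIP)

def probability_matcher():
    """Track a recency-weighted estimate of each arm and pick arms in
    proportion to those estimates (probability matching).
    Returns the win rate after the world has changed."""
    est, second_half = [0.5, 0.5], 0
    for t in range(TRIALS):
        arm = 0 if random.random() < est[0] / (est[0] + est[1]) else 1
        won = payoff(arm, t)
        est[arm] += 0.1 * (won - est[arm])   # exponential moving average
        if t >= FLIP:
            second_half += won
    return second_half / (TRIALS - FLIP)

print(f"commit-then-exploit, after the change: {commit_then_exploit():.2f}")
print(f"probability matcher, after the change: {probability_matcher():.2f}")
```

Because the committed exploiter never re-samples the other option, it keeps pressing the formerly good button after the flip, while the matcher's continued sampling lets its estimates, and its behavior, track the change.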

Tuesday, July 19, 2011

Unpredictability is in our nature.

In order to get around the world, we make predictions about what other people do.  We guess that a friend will remember to make a reservation for a lunch date next week.  We assume that a spouse will pick up a bottle of wine to bring to a friend’s party.  We expect that our children will choose the same item from the menu at a restaurant that they always do.  These predictions work well enough.  Often, our predictions are borne out.

Yet frustratingly, sometimes they are not.  We make predictions based on our beliefs about other people’s behavior, and sometimes people do not do what we expect.  We come to rely on other people’s predictability to the point where their failures to be predictable often lead us to wonder what is wrong with them.

As it turns out, though, unpredictability is a fundamental part of human nature. 

Consider a simple case.  Imagine we play a game.  Every 10 seconds, I will flash a light, and then you have to press one of two buttons in front of you.  If you pick the ‘right’ button on that trial of the game, you get $5, and if you pick the wrong button, you get nothing.  You will get to play the game for 20 minutes.  There are 120 trials of the game in 20 minutes, so you could win as much as $600.  Not bad for 20 minutes’ work.

Now, the way the game is set up, the computer controlling the game picks a button randomly so that 70% of the time, the button on your left is the winner and 30% of the time, the button on the right is the winner. 

What will you do in this game?

Chances are, by the last few minutes of the game, you will end up picking the button on the left about 70% of the time and the button on the right about 30% of the time.  This behavior is called “probability matching” because the proportion of selections you make of each button is about equal to the probability that the button will be the winner. 

Unfortunately for you, this strategy is not the best one you could choose.  The best strategy is called “maximizing.”  In this strategy, you start by sampling the buttons to figure out which one is more likely to be the winner.  After that, you always pick that button.  If you always pick the button that is rewarded more frequently, you’ll win 70% of the time.  If, as with probability matching, you pick the worse button 30% of the time, you will win only 30% of those trials, and so your overall payoff will be lower.
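Using the numbers above, the expected winnings under each strategy are easy to compute (assuming your choices under matching are independent of which button wins on a given trial):

```python
trials, prize = 120, 5
p_best = 0.7

# Maximizing: always pick the 70% button.
maximize = trials * p_best * prize

# Probability matching: pick each button as often as it wins,
# so you win only when your choice and the winning button coincide.
match_rate = p_best**2 + (1 - p_best)**2   # 0.49 + 0.09 = 0.58
matching = trials * match_rate * prize

print(f"expected winnings, maximizing: ${maximize:.0f}")   # $420
print(f"expected winnings, matching:   ${matching:.0f}")   # $348
```

So over the 20-minute session, matching costs you about $72 in expectation relative to maximizing.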

That means you’d be best off if your behavior was completely predictable based on which choice is currently the best one.  Yet, your cognitive system is designed to make your behavior more unpredictable.  On each trial of the game, there is only a probability that you’ll pick the thing that looks like the best thing to do.  That means that your cognitive system is willing to pay a price in the short term in order to be unpredictable.  So, there must be some value in it.

Why are you unpredictable?  There are lots of reasons, but let’s just talk about two for now.

First, life is rarely like these simple games.  In the game, I set up a payment schedule for the buttons, and it doesn’t change.  Most of the time, though, the state of the world does change over time.  If you only shop at one store all the time, then you might miss out on a good bargain at a competing store.  If you only go to one restaurant, you might not notice the slow steady decline in the quality of the food there, and lose out on the chance to eat at a new restaurant whose quality is improving. 

That is, our world presents a constant tradeoff between exploiting choices that have been good to us in the past and exploring new options.  If we only exploit, then we run the risk that we will not notice changes in the world that make the option we are choosing worse than it was, or that make other options better than they were when we first explored.

Second, if our behavior is completely predictable, then it is easier for other people to take advantage of us.  Now, in the modern world, there may not be lots of people lurking to take advantage of us.  But in the evolutionary environment in which we (and basically all other species on Earth) evolved, there was always a danger that someone (or some thing) would be out there taking advantage of us.  So, some unpredictability has served us well, and the mechanisms that make us unpredictable are still a fundamental part of our cognitive system.

So, the next time that someone does something unpredictable, remember that they are wired to do it.  And remember that if they weren’t unpredictable, they would be losing out in the search for the best things that life has to offer.

Friday, July 15, 2011

Take two Tylenol for heartache too

Last summer, I was playing badminton with my kids, and I tore a calf muscle.  It hurt.  A lot.  Language has lots of ways to express pain.  In the case of my calf, the pain was intense.  The pain shot through my entire leg.  And when the muscle would spasm, I would feel a burning pain.

It is interesting that people also use the language of pain to talk about social pain.  We talk about the pain of a breakup.  Musicians sing about their heart aching for someone they miss.  When people recall being teased as a child, they invariably talk about how much it hurt. 

Is this just a metaphor?

This question was examined in a clever paper by Nathan DeWall, Dianne Tice, Roy Baumeister, and their colleagues in the July, 2010 issue of Psychological Science.

They reasoned that if we really feel pain from social difficulties, then the strength of that pain might be relieved by taking a pain relieving drug that works on the way the brain processes pain.  One such drug is acetaminophen (the active ingredient in Tylenol). 

In one study, participants in an experimental group took two acetaminophen pills each day, while the control group took a placebo.  Each day, people rated themselves on a scale designed to measure hurt feelings.  At the start of the study, the two groups had similar levels of hurt feelings.  By the end of the study 3 weeks later, people taking acetaminophen rated themselves as having a lower level of hurt feelings than people taking the placebo.  In fact, there was no placebo effect in this study at all:  people taking the placebo experienced about the same level of hurt feelings throughout the study.

A second study used functional Magnetic Resonance Imaging (fMRI) to look at the brain’s response to pain.  Participants either took acetaminophen or a placebo for 3 weeks before the imaging study.  Then, while in the MRI scanner, people played a game in which they thought they were passing a virtual ball with a group of two other participants.  In one round of the game, the participant had the ball thrown to them frequently.  In another round, the participant was excluded, and the other two players (who were actually computer opponents) threw the ball only to each other. 

Functional Magnetic Resonance Imaging gives a measure of the amount of blood flowing to different areas of the brain.  Because active brain tissue consumes a lot of glucose, regions of the brain that are heavily engaged when people do some task experience an increase in blood flow.  So, blood flow is a rough marker of the activity of the brain.

In this study, the authors looked at two regions of the brain that are involved in the perception of pain (the dorsal Anterior Cingulate Cortex and the Anterior Insula, for those of you who like your brain areas…).  People who took a placebo showed higher levels of activity in these brain regions when being excluded from the game than when being included.  In contrast, the people who took acetaminophen showed about the same level of activity in these pain-related regions whether they were being excluded or included in the game, suggesting that they did not experience an increase in physical pain when being socially excluded.

These findings suggest that the words to the old song, “Sticks and stones may break my bones, but words will never hurt me” are not quite right.  We really do experience social pain as physical pain.

It is not surprising that the brain would use the mechanisms of pain for social exclusion and other social difficulties.  As humans, we are social creatures.  We rely on our social relationships to survive.  Pain is used as a signal of damage to our bodies, because that helps us to protect ourselves.  It should be no surprise that potential damage to our social relationships is also marked by pain.

Tuesday, July 12, 2011

I like you, because I always feel good around you and I don’t know why

How do you know that you like someone or something?  Often, seeing a person you like gives you a good feeling inside or makes you smile.  You have that reaction far before you could say exactly why you like that person.  Indeed, you might find it hard to put into words exactly why you like them, but you know you do. 

There is a lot of work in Psychology showing that you can come to like someone (or some thing for that matter) not because of anything they have done, but just because you tend to feel good when you are around them.  There is a procedure called evaluative conditioning that shows how this can happen.

As one example, Michael Olson and Russell Fazio presented studies in the journal Psychological Science in 2001.  They had people stare at a computer screen while images were presented to them very rapidly (at a rate of 1.5 seconds per image).  They told people that they were studying people’s ability to do surveillance in a complex environment.  The images consisted of pictures (of different Pokemon characters) as well as words.  Sometimes, more than one word or picture appeared on the same screen.  In fact, one Pokemon character was repeatedly paired with positive words and images (like the word excellent or a picture of a sundae).  A second character was repeatedly paired with negative words and images (like the word terrible or a picture of a cockroach).  Later, people were asked to rate how much they liked each character.  People consistently gave higher ratings to the character that was paired with positive words and pictures than to the character that was paired with negative words and pictures.  This difference occurred even though the participants in the study were not aware of which words and images had appeared with the characters.

So, what is going on here?  In a May, 2009 paper in the Journal of Personality and Social Psychology, Christopher Jones, along with Russell Fazio and Michael Olson argue that this change in evaluation of the objects occurs because of a mis-attribution of the good feeling to the object.  That is, in these kinds of experiments, the positive words and pictures make the person feel good.  They are not sure why they feel good, so the good feeling is attached to the Pokemon character that is consistently associated with feeling good.  Likewise, the negative words and pictures make them feel bad.  They are not sure why they feel bad, so they attach the negative feeling to the Pokemon character that is consistently associated with feeling bad. 

Often, of course, this strategy is a pretty good one.  If there is a person in the world, and you usually feel good around that person, chances are that person is making you feel good.  If there is a person and you usually feel bad around them, chances are that person is making you feel bad.  However, this strategy can lead to the wrong outcome too.  You may end up liking people and things you encounter in positive situations more than perhaps you should.  Similarly, you may end up disliking people and things you encounter in negative situations more than you should.

Friday, July 8, 2011

Conspiracy theories are easier to maintain from a distance

I have always scratched my head over conspiracy theories.  It is hard enough to get a group of reasonably intelligent people organized to run their own fantasy football league.  It would require true organizational geniuses to have carried out truly big conspiracies like the alleged conspiracy to kill JFK without somebody finding out about it at some point.  But many of the characters who are supposed to be deeply involved in these conspiracy theories hardly seem like organizational geniuses.  Yet, conspiracy theories abound.

Some research by Marlone Henderson in the October, 2009 issue of Personality and Social Psychology Bulletin helps to explain why. 

He capitalizes on an observation I have talked about before in this blog that people think about things more abstractly when they are far away in space or time than when they are close in space or time.  In order for a group to be united to a common purpose, they all have to have the same goal.  If you think about a group close up, then you can start to think about all of the different people involved, and it is easy to realize that lots of the group members are going to have somewhat different motivations.  But if you think of the group from a distance, then these individual differences in motivation get fuzzy.  Instead, you tend to focus abstractly on the group’s mission. 

To test this prediction, Henderson asked participants to think about a group consisting of a variety of different types of people who had come together to work on a project.  Some of the participants in the study were told that the group was meeting in New York (where the study was conducted).  Other participants were told that the group was meeting in San Francisco.  The participants were asked to judge how united the group was toward their common project.  The faraway group was consistently judged to be more united in its efforts than the group meeting close by.  That is, people really did consider groups that are far away to be more coherent than groups that are close by.

And that is one psychological mechanism that can support conspiracy theories.  You would be hard-pressed to believe that you or your friends, or people you have met could carry out a diabolical plan without someone messing it up or letting a key piece of information slip out.  But at the same time, it seems easier to swallow that a shadowy group operating far away can somehow keep focused on a common goal and then vanish without a trace.