Wednesday, September 21, 2016

You Have More Influence on Others Than You Think


From an early age, people hear about the positive and negative influences of peer pressure.  On the negative side, drug education programs talk about the effect of social groups on whether a particular individual will take drugs.  On the positive side, Austin, Texas, has a highly successful day of giving in which members of the community urge others to donate to their favorite charities.
But, how much influence do you really have on the actions of other people?  Are you aware of the effect you have on others?
This issue was explored in an interesting paper by Vanessa Bohns, Mahdi Roghanizad, and Amy Xu in the March 2014 issue of Personality and Social Psychology Bulletin.
This paper focused on peer pressure to do negative things.  In one study, the participants were college students who went around campus asking other students to commit a white lie.  Specifically, they asked other students to sign a sheet acknowledging that the participant had described a new class at the university to them, even though the participant was not going to describe the class because he or she “didn’t really want to do it.”
Before starting this task, participants estimated how many people they would have to ask in order to get three students to sign the forms.  They also rated how comfortable they thought others would be saying “no” to the request.  Then, they went out and solicited white lies.
Participants predicted they would have to ask an average of 8.5 people in order to get three signatures.  In fact, they only had to ask an average of 4.4 people to get those three signatures.  Generally speaking, participants felt that others would be comfortable saying “no” to them.  The more comfortable they thought others would be saying “no,” the more people they thought they would have to ask before getting the required signatures.
A second study replicated this finding using a situation in which participants asked others to write the word “pickle” in a library book in pen.  Once again, participants believed they would have to ask twice as many people as they actually did need to ask in order to get three people to commit this small act of vandalism.
Two other studies looked at why this effect emerged.  These studies used vignettes in which people imagined small unethical acts like reading someone’s private Facebook messages if their account was left open or calling in sick to work in order to go to a baseball game.
Some people read scenarios in which they were going to perform the act themselves after receiving advice.  Others read scenarios in which they watched someone else perform the act and gave that person advice.  In each scenario, the advice was either to do the unethical thing or the ethical thing.  Participants then rated how comfortable the person performing the act would feel doing the ethical thing.
Participants who played the role of advisor did not feel their advice would have much impact.  They felt that other people would be reasonably willing to do the ethical thing whether the advice was to do the right thing or to do the unethical thing.
In fact, though, participants playing the role of the actor were much less comfortable doing the ethical thing when they got advice to do the unethical thing than when they got advice to do the ethical thing.  That is, people were highly swayed to do the wrong thing by the advice they got.
Other studies by Vanessa Bohns and her colleagues have demonstrated similar findings looking at ethical behavior. 
Putting all of this work together, then, it seems that we have a hard time saying “no” to other people.  Social pressure has a huge influence on our behavior.  At some level, that may not seem surprising, but we systematically underestimate the social pressure that we ourselves exert on other people.
One more reason why we should try to help other people to do the right thing.

Wednesday, September 14, 2016

The Two Competing Selves Inside You


Sitting up late talking with friends, you may spend a lot of time thinking about who you would like to be ideally.  You focus on the people you would help, the good you could do for society, and your dreams.  In your day-to-day life, though, you spend a lot of time just doing what has to be done to get ahead. 
So, are we just lying to ourselves and others when we have these idealistic conversations?
In a paper in the May 2014 issue of the Journal of Personality and Social Psychology, Jeremy Frimer, Nicola Schaefer, and Harrison Oakes suggest that each of us has two distinct conceptions of self.
First, there is the actor.  The actor is your public-facing self, the one that you bring out when other people are watching.  The actor often has some focus on being prosocial, that is, on doing things that benefit society.
Second, there is the agent.  The agent is your doing self.  When you pursue your daily goals, you typically act more selfishly, in your own interest.
As evidence for this split between actor and agent, participants from the United States (which is a relatively individualistic society) and India (which is a relatively collectivist society) were asked to perform one of two tasks.  
One task involved rating the importance of a variety of selfish and prosocial goals.  This task was designed to get people to think about themselves as actors.  The other task involved having people describe four of their own most important goals.  This task was designed to get people to think of themselves as agents. 
After this initial task, everyone rated how strongly their own goals were about helping themselves and how much they were about helping others. 
Participants who were asked to think about a variety of prosocial and selfish goals rated their own goals as equally strongly about helping themselves and helping others.  Those who were asked to list only their own goals rated their goals as more strongly focused on themselves than on others.  This pattern held both for Americans and Indians, suggesting that the agent is fairly selfish across cultures.
A second study demonstrated that when people are primed to think about the variety of goals they might pursue, they act more like someone who is told to role-play a prosocial person, while those who are primed to think about their own goals act like someone told to role-play a selfish person.
One final study found that when people were primed to think of themselves as actors, they felt their goals were more idealistic, but when people were primed to think of themselves as agents, they felt their goals were more realistic.
This split between two self-concepts may explain why people often act differently when they believe that others are watching them.  We want our public-facing self to act consistently with our ideals.  Each of us wants to be seen as the kind of person who does things to help society.   But, when left to our own devices, the pressures of life often push us to do what is in our own self-interest.
This work has interesting implications for how to get people to do more public service.  Lots of charities and nonprofits need volunteers to help them carry out their work.  People who do volunteer work also report feeling better about themselves after doing it.  Yet, few people actually volunteer their time.  Perhaps prompting people to think about themselves as actors rather than agents could help to promote more engagement with volunteer organizations.

Thursday, September 8, 2016

Switching Languages Affects Accents


If you spend time in any large city in the United States, chances are you will hear English spoken in a variety of accents.  Some of these accents are just native speakers of English from different regions of the country, while others reflect the speech patterns of people who learned another language as a child and then learned English later.
A foreign accent matters in social situations.  The accent immediately marks someone as an outsider, which can lead to distrust.  In addition, some native speakers find accents hard to understand, and so the accent can also create difficulties in communication.
Are accents a fixed part of a person’s speech pattern in their non-native language?  This question was explored by Matthew Goldrick, Elin Runnqvist, and Albert Costa in a paper in the April 2014 issue of Psychological Science.
These researchers were interested in whether switching back and forth between languages would affect the strength of a person’s accent.  To test this possibility, they ran a study in Barcelona, Spain.  All of the participants were native speakers of Spanish who began learning English by the age of 4.
In this study, participants saw pictures of simple objects on a computer screen.  The first sound of the word for each object began with a d (as in desk or doce) or a t (as in tin or taza).  Each picture was surrounded by a colored frame, and participants were instructed to use the English word for one frame color and the Spanish word for the other.  The key question was whether participants would have a stronger accent on trials in which they switched languages from the previous trial than on trials in which they used the same language on consecutive trials.
It can be difficult to measure the strength of an accent by ear, so the researchers used a clever acoustic method.  When Spanish speakers produce the sounds ‘d’ and ‘t’, they engage their vocal cords earlier than English speakers do when producing these same sounds.  If you look at a digital recording of speech, you can actually see the burst of noise when the vocal cords are engaged.  The researchers did an acoustic analysis of the ‘d’ and ‘t’ sounds at the start of each word to measure when the vocal cords engaged.
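The timing the researchers measured, the lag between the release of the consonant and the moment the vocal cords start vibrating, is what phoneticians call voice onset time.  As a purely illustrative sketch of how such a measurement might be automated (this is not the authors’ analysis pipeline; the thresholds, frame sizes, and function names below are my own assumptions), one could locate the release burst and the first clearly voiced frame in a recording like this:

```python
import numpy as np

def frame_signal(x, sr, frame_ms=25.0, hop_ms=5.0):
    """Slice a mono waveform (1-D float array) into overlapping frames."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    return np.stack([x[i * hop:i * hop + frame] for i in range(n)]), hop

def estimate_vot_ms(x, sr, burst_thresh=0.1, voicing_thresh=0.3):
    """Very rough, positive-only voice onset time estimate in milliseconds.

    Assumes the clip starts just before the stop release and is amplitude-
    normalized.  Prevoicing (negative VOT, common for Spanish 'd') would need
    a separate check for voicing before the burst.
    """
    frames, hop = frame_signal(x, sr)
    rms = np.sqrt((frames ** 2).mean(axis=1))

    # Release burst: the first frame whose energy rises above a fraction of the peak.
    burst = int(np.argmax(rms > burst_thresh * rms.max()))

    # Voicing onset: the first frame after the burst with a strong periodic
    # component at lags corresponding to roughly 75-300 Hz.
    lo, hi = int(sr / 300), int(sr / 75)
    for i in range(burst, len(frames)):
        f = frames[i] - frames[i].mean()
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]
        if ac[0] > 0 and ac[lo:hi].max() / ac[0] > voicing_thresh:
            return (i - burst) * hop / sr * 1000.0
    return None  # no voicing detected in the clip
```

Comparing such estimates for English words produced right after a Spanish word versus after another English word is, in spirit, the comparison described in the next paragraph.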
Overall, speakers did differentiate between the languages.  They engaged their vocal cords later when speaking English words than when speaking Spanish words.  The language of the previous trial did not affect pronunciation of Spanish words, but it did affect pronunciation of English words.  Participants engaged their vocal cords a bit earlier when saying an English word if the previous word had been spoken in Spanish than if the previous word had been spoken in English.  That is, a person’s Spanish accent in English was stronger if they had just said a Spanish word than if they had just said an English word.
One interesting aspect of this study is that some of the words used were cognates, that is, words that are similar in Spanish and English.  For example, the Spanish word for dentist is dentista.  The switching effect was particularly strong for these cognates: when participants had just spoken a Spanish word and then had to produce the English member of a cognate pair, their accent was stronger than when they had just spoken an English word.
This result suggests that when people speak multiple languages, they learn both which words are used in each language and how to pronounce those words.  For cognates in particular, people have experience speaking similar words in both languages.  Speaking Spanish activates the Spanish pronunciation of the word, while speaking English activates the English pronunciation.  When speakers switch languages, they get a blend of these pronunciations, which ultimately affects how they say those words.

Thursday, September 1, 2016

Young Children Are Primed to Learn about Eating Plants


Humans are much more flexible in their behavior than most other animals.  For example, we figure out what to eat in every environment where we find ourselves.  Other animals are not so lucky.  If they find themselves outside of the environment in which they evolved, they can have great difficulty finding food.

The flexibility of human behavior comes at a cost.  Ultimately, we have to learn how to navigate our environment rather than having a lot of that information pre-wired into the system.  That learning is effortful and potentially dangerous.

Consider the problem of eating plants.  Many plants are edible and are important sources of nutrition.  But, some plants are not things we can digest and—worse yet—some are poisonous. 

A fascinating paper by Annie Wertz and Karen Wynn in the April 2014 issue of Psychological Science examines infants’ ability to learn about which plants are edible.  Infants clearly don’t come wired to know which plants are edible, but this research suggests that they may come wired to pay attention to the edibility of plants.

In one experiment, 18-month-olds watched an experimenter perform a series of actions.  The experimenter first took a fruit (say, a dried apricot) off a realistic-looking plant, placed the tip of it in his mouth, and said “Hmmmmmm.”  Then, he took a different fruit (say, a dried plum) off an object shaped like a plant that was painted silver and housed in a glass case and did the same thing.  So, one object looked like a plant, while the other did not.  (Other children in this study saw the experimenter act on the object first and then the plant, so the order in which the actions were performed did not affect the results.)

After seeing these actions, the experimenter took other fruits off the plant and the object.  Then, a second experimenter came in and asked the child which one they could eat.  Children overwhelmingly chose the fruit that came from the plant. 

The experimenters also ran three control conditions.  In one, when the experimenter took the fruit off, he put it behind his ear rather than in his mouth.  In the test, the infants were asked which object they could use.  In this case, the children had no preference for the fruit from the plant over the fruit from the object.

Of course, it could just be that the plant was more familiar than the object.  In another control condition, the plant was compared to a set of shelves.  Most infants are used to seeing food taken from shelves in their home.  In this condition, after seeing the fruits from the plant and the shelf put in the experimenter’s mouth, the infants strongly preferred to choose the fruit that came from the plant.

In a third condition, the infants saw the experimenter just look at the plant and say “Hmmmmmmm” and then look at the object and say “Hmmmmmmmm.”  This condition was designed to test whether children simply had a preference for fruits that come from a plant over fruits that come from an object.  In this case, the infants were equally likely to choose the fruits that came from the plant or the object.  This condition is important, because it would be risky for infants to learn that anything growing on a plant is edible; some plants are poisonous.

Finally, the researchers also examined whether even younger infants might show this preference.  In a final study, these same actions were shown to six-month-old infants.  Six-month-olds are too young to choose for themselves.  So, after the first experimenter took the fruits off the plant and the object, a second experimenter put each fruit in his mouth in turn and held it there.  The experimenters measured how long the infants looked at these events.  Lots of work with infants shows that, in unfamiliar situations, infants look longer at surprising events than at unsurprising events.

In this study, when the infants saw the first experimenter put the fruits in his mouth, they looked longer when the second experimenter put the fruit from the object in his mouth than when the experimenter put the fruit from the plant in his mouth.  But, when the first experimenter put the fruits behind his ear, the infants looked for the same amount of time when the second experimenter put the fruits behind his ear, regardless of whether they came from the object or the plant.

This set of results suggests that by six months of age, infants are ready to learn about which plants are edible.  Evolution has not pre-wired humans with knowledge of the specific plants we can eat.  Instead, we are wired to learn about plants from the adults around us.  That mechanism is important for helping us to survive in a wide variety of environments.

Thursday, August 25, 2016

Trust of Strangers Requires Effort (Sometimes)


Trust is important.  Without the ability to trust strangers, society would fall apart.  You have to trust that people will generally deal with you honestly, and that they will follow through on their commitments.  After all, you do not know all the people who grow your food, make your clothes, and take care of your money in the bank.  You do not have the time to do all of these things for yourself.
Of course, most of this trust is implicit.  You do not often think about the number of strangers you rely on to get through your daily life. 
Sometimes, though, you have to place your trust in a stranger more explicitly.  Not long ago, I was sitting at an airport by a bank of outlets.  A woman walked up, plugged in a cell phone, and asked two of us sitting by the outlets to watch her phone for a few minutes while she went to check her flight.  She had to trust that we would not steal her phone, and we had to trust that she was not leaving us sitting next to a dangerous device.  And in the end, our mutual trust was rewarded.
An interesting paper by Sarah Ainsworth, Roy Baumeister, Kathleen Vohs, and Dan Ariely, accepted for publication in the Journal of Experimental Social Psychology, examines whether this kind of trust among strangers requires mental effort.
The measure of trust they used in these studies was a behavioral economics game called the Trust Game.  In the Trust Game, participants are given $10.  They are told that they can give as much of that money as they want to their partner.  The experimenter will then triple the amount of money given to the partner, and the partner can return as much of that money as he or she chooses to the participant.  So, if the participant elects to give $3 to the partner, the partner will receive $9 from the experimenter.
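To make the arithmetic concrete, here is a minimal sketch of the payoffs (the $10 endowment and the tripling rule come from the description above; the function name and the partner’s 50% return rate are just illustrative assumptions):

```python
def trust_game(endowment=10.0, sent=3.0, returned_fraction=0.5):
    """Payoffs for one round of the Trust Game as described above.

    sent: amount the participant transfers (0 <= sent <= endowment).
    returned_fraction: share of the tripled transfer the partner sends back
    (the partner's choice; 0.5 is just an illustrative assumption).
    """
    tripled = 3 * sent                       # the experimenter triples the transfer
    returned = returned_fraction * tripled   # the partner decides how much to return
    participant_payoff = endowment - sent + returned
    partner_payoff = tripled - returned
    return participant_payoff, partner_payoff

# The example from the post: sending $3 means the partner receives $9.
print(trust_game(sent=3))    # (11.5, 4.5) if the partner returns half of the $9
# Sending everything and splitting evenly is the best joint outcome: $15 each.
print(trust_game(sent=10))   # (15.0, 15.0)
# Keeping everything is the "no trust" outcome: the participant keeps the $10.
print(trust_game(sent=0))    # (10.0, 0.0)
```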
This game requires trust.  The best joint outcome for the players in this game requires that the participant give all of the money to the partner and requires the partner to split the proceeds.  If the participant does not trust the partner, then the participant can choose to keep all of the money.
These researchers suggest that trusting a stranger in this game requires overcoming a natural tendency to avoid risk.  To explore this possibility, they overlaid an ego depletion manipulation on this study.  The concept behind ego depletion is that when people engage in a period of effortful self-regulation, they have difficulty overcoming their habitual tendencies in the future.  So, if trust requires some amount of effort, people who first do a task that requires effort will trust the stranger less than those who do not do this task.
In one study, participants watched a silent video of a woman being interviewed.  Periodically, words appeared in the lower right corner of the screen.  One group just watched the video, while a second group was told to ignore the words and to return their attention to the woman as soon as they noticed themselves looking at the words.  This task has been used in previous research on ego depletion.
After watching the video, participants played the Trust Game and were told they were playing with a partner in another room.  Participants also filled out a scale that measured the personality characteristic of neuroticism.  Neuroticism is the degree to which people tend to focus on negative outcomes and to experience high-arousal emotions like anxiety and anger.
In this study, participants with low levels of neuroticism were not strongly influenced by the ego depletion manipulation.  However, those with a high level of neuroticism gave much less money to their partner when they had to avoid looking at the words on the video than when they did not. 
The idea here is that people with a high level of neuroticism (and particularly the aspect of neuroticism that focuses on the strength of their negative emotions) have a tendency to fear risk.  This group wants to avoid giving money to their partner.  Only when they have enough motivational energy will they be able to overcome that tendency. 
Two other experiments examined two other factors that also influence people’s likelihood of trusting another person.  In one study, some participants were told they would meet their partner after the game.  In a second study, participants were given a (fake) EEG measurement at the start of the task.  Some participants were told that their partner had a very similar EEG measurement, of the sort you would only expect among siblings, relatives, and close friends. 
The ego depletion manipulation did not influence the amount of money people were willing to give to their partner when they believed they would meet their partner or when they believed they were very similar to their partner.  It did influence the amount of money that highly neurotic individuals were willing to give in these studies when they did not think they would meet their partner or did not think they were similar to their partner.
Putting all of this together, then, trusting strangers sometimes requires effort.  In particular, when you believe you will never meet someone and you have no particular similarity to them, you believe there is a risk to trusting them.  The more strongly you react to that kind of risk, the more effort you need to put in to trust a stranger. 

Wednesday, August 17, 2016

Learning and Sleep in Toddlers


Quite a bit of research has begun to explore influences of sleep on cognitive processes.  In adults, sleep has a huge influence on memory.  Sleep speeds learning of new skills.  It also helps to separate the information being learned from the situation in which it was learned, which can make it easier to use that knowledge in other circumstances.

Young children spend a tremendous amount of time asleep, and so research is also beginning to explore the influence of sleep on the things children learn.  An interesting study in the March/April 2014 issue of Child Development by Denise Werchan and Rebecca Gomez examined how sleep influences toddlers’ ability to learn new words.

When a child learns a new noun, for example, it is important for the child to be able to apply that word to the object (or objects) for which they have seen it used, but also to apply that word to other objects that come from the same category.  For example, a child may sit in the family minivan and hear it called a car.  She may see a neighbor’s sedan and hear that called a car as well.  She might also be given a four-wheeled toy and hear that called a car, too.  To be a successful language-user, this child ultimately has to be able to recognize which other objects should be called a car and which ones should not.

This process requires generalization.  That is, the child has to go from the limited number of instances of the category they have seen to figure out which other objects share the same label.  This process also requires some amount of forgetting.  After all, the child will observe many characteristics of these objects, like their shape, size, color, and parts.  Some of these characteristics (like shape and some parts) matter a lot in deciding whether to call something a car, and others (like color) matter less.  So, it is helpful for the child to forget some of what they have seen in order to begin to generalize the new word to other objects.

Werchan and Gomez suggested that sleep might actually interfere with toddlers’ ability to learn new words.  These researchers argued that sleep helps to solidify memories, and so if children associate too much information with a label, they might not learn to generalize it to new objects.

To test this possibility, 30-month-old toddlers were taught labels for three types of novel objects (which were constructed by the researchers).  The labels were words like dax or tiv that are not used as words in English.  During the training, children saw three examples of each object.  They were also exposed to several other novel objects that were not labeled and that would be used as distractors later.

One group of children was tested about an hour before their normal nap time.  They napped, and then came to a psychology lab to be tested four hours after the training.  A second group was tested far from their normal nap time.  They were also tested in the lab four hours after training, but they had not napped.  A third group was trained and then tested immediately.

During the test, children saw four objects:  a new example from one of the categories they learned, an object they saw during training that had not been named, an unfamiliar object, and a familiar object (like a toy duck).  They were told the label and were asked to point to the object.  For example, if they saw the object that had been called a dax during training, they would be asked “Which one’s a dax? Can you point to the one that’s a dax?”

The children who were tested immediately and those who napped got about 40% of the test questions correct.  The children who did not nap got over 80% of the test questions correct.

This study suggests that when toddlers are learning words, their ability to generalize those words to new objects requires them to forget some of what they saw.  More of this forgetting happens when children remain awake than when they sleep.  So, this kind of word learning is enhanced when children stay awake after learning the words.

As the researchers point out, this finding differs from what is usually observed with adults.  Adults often generalize their learning better after sleep.  The difference is that adults are better than toddlers at focusing on the most important information when learning something new.  So, for adults, the most important part of generalizing is separating the content of what was learned from the situation in which it was learned, and sleep helps with that separation.  Toddlers need to forget some of the content of what they learned in order to generalize effectively, and so for them it is staying awake that helps.

Wednesday, August 3, 2016

Guilt and Shame and Crime


When people do something wrong, there are two distinct emotions that they commonly experience:  guilt and shame.  These emotions differ based on what people feel bad about.  When people feel bad about the action they performed, then they experience guilt.  When they feel bad about themselves for having done something bad, then they experience shame.

How do these emotions influence future behavior?

An interesting paper in the March 2014 issue of Psychological Science by June Tangney, Jeffrey Stuewig, and Andres Martinez explored this question with people who had served time in prison for a felony conviction. 

The participants were nearly 500 individuals.  While they were still in prison, they were given an assessment of their tendency to experience guilt and shame following bad behaviors.  They were also given a measure that examined whether they tended to blame circumstances for their actions rather than themselves.  Blaming the circumstances is called externalizing blame, and it is often associated with continued bad behavior.  That is, people who do not accept responsibility for their actions are less likely to change their behavior in ways that reduce bad behavior than people who do accept responsibility.

The participants were also contacted a year after being released from prison.  They were asked to report whether they had been arrested for crimes in that year and whether they had participated in crimes for which they were not arrested.  The researchers also looked up arrest records in the FBI database. 

The researchers then looked at statistical relationships between guilt, shame, the tendency to externalize blame, and the likelihood of continuing to commit crimes.

Guilt and shame had very different influences on future behavior.  Guilt had a negative relationship with future crime.  People with a strong tendency to experience guilt were less likely to commit additional crimes than those with a weak tendency to experience guilt.

Shame had a more complicated relationship to future behavior. 

Shame was positively related to people’s tendency to externalize blame.  So, people who feel bad about themselves after performing a bad action will often try to blame the circumstances rather than themselves in order to repair the damage to their self-esteem.  Statistically, the more people externalized blame, the more they tended to continue to commit crimes after being released from prison.

However, once the researchers accounted for the influence of shame on externalizing blame, shame tended to decrease future bad behaviors.    
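In statistical terms, this is a suppression (mediation) pattern: shame pushes recidivism up indirectly by increasing externalized blame, while its direct effect, once externalizing is held constant, is negative.  A toy simulation (the coefficients and data below are made up for illustration, not taken from the study) shows how the two opposing paths can hide each other in a simple regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # roughly the sample size mentioned above; the data here are simulated

shame = rng.normal(size=n)
externalizing = 0.6 * shame + rng.normal(size=n)                   # shame -> blaming circumstances
reoffend = 0.5 * externalizing - 0.3 * shame + rng.normal(size=n)  # opposing direct effect of shame

def ols(y, *predictors):
    """Ordinary least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Regressing reoffending on shame alone: the positive indirect path and the
# negative direct path roughly cancel, so shame looks nearly unrelated to crime.
print(ols(reoffend, shame))
# Adding externalizing blame to the model reveals the negative direct effect of shame.
print(ols(reoffend, shame, externalizing))
```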

What does this mean? 

The problem with shame is that it causes people to feel bad about themselves.  People who deal with shame by externalizing blame will not work to change their behavior.

However, if people experience shame without externalizing blame, then they will act more like people who feel guilty.  Both shame and guilt are negative emotions, and so people will work to find ways to avoid feeling bad.  One good way to keep from experiencing guilt or shame is to change behavior. 

This research also helps to demonstrate why the way we categorize the world is so important.  People experience shame when they use bad actions they have performed to categorize themselves as bad people.  People experience guilt when they think of themselves as people who happened to perform a bad action.  It feels easier to change your behavior when you are focused on changing an action than when you feel like you have to change who you are at your core.