Wednesday, December 28, 2016

If You Want to Focus on the Long Term, Be Grateful


A common observation about human behavior is that people are biased toward what is best in the short term.  That does not mean that people always pursue short-term pleasures over long-term gains.  It just means that the value of the long-term option has to be much larger than what people will get right now in order for them to choose to delay the benefit.
Economists call this idea temporal discounting.  To use a money example, imagine that I was willing to give you $100 next month, or a smaller amount of money right now.  If I offered you $10 right now, you would probably wait a month to get the $100.  If I offered you $90 right now, you would probably take that rather than waiting.  But, where is your dividing line?  What is the smallest amount of money that you would take to wait a month to get $100?
The smaller the amount of money you would take now, the less you value future experience compared to present experience.  If you would be willing to take $45 now as opposed to $100 in a month, then you are saying that $100 in a month is only worth $45 in today’s dollars.
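To make the arithmetic concrete, the implied discount factor is just the ratio of the two amounts.  Here is a minimal sketch using the dollar figures from the example above (the function name is my own):

```python
# Implied discount factor from an indifference point: if $45 now feels
# equal to $100 in a month, a dollar in a month is worth 45 cents today.
def discount_factor(amount_now: float, amount_later: float) -> float:
    """Fraction of a delayed reward's value that survives the delay."""
    return amount_now / amount_later

print(discount_factor(45, 100))  # 0.45
```

The smaller this ratio, the more steeply a person discounts the future.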
In many situations, we want people to value the future more than they do now, so that they are willing to engage in activities that create future value.  A paper in the June, 2014 issue of Psychological Science by David DeSteno, Ye Li, Leah Dickens, and Jennifer Lerner suggests that when people experience gratitude, they give more value to future events compared to present ones.
In this study, participants evaluated many choices like the prospect of getting $20 now or $50 in a week.  These problems allowed the researchers to estimate how much people valued future events compared to present ones.  Participants were told that some of them would actually receive an amount of money based on one of their choices, so they should choose carefully.
The participants were divided into three groups.  A control group was just asked to recall the events of a typical day.  A second group was asked to recall situations that made them happy.  A third group recalled situations that made them feel grateful.   The idea behind the last two groups was to help distinguish between gratitude and more general positive feeling.
The group that thought about gratitude valued the future more than those who thought about either happy events or a normal day.  This finding suggests that there is something about gratitude (above and beyond being positive) that leads people to be more focused on the long-term rather than the short-term. 
It is not completely clear why gratitude should have this effect.  One possibility is that gratitude makes people feel more connected to those around them.  Social connection influences people’s sense that they are part of something larger and more permanent than themselves.  That may make it feel less difficult to wait for a future reward.
Another possibility is that engaging in acts of kindness (which creates gratitude) often requires some degree of altruism on the part of the performer.  So, thinking about these altruistic acts may make people feel like they can give up something in the present in order to get a future reward.
Clearly, though, more work needs to be done to understand why gratitude has this influence on the way people value the future.

Thursday, December 15, 2016

If You Are Going to Take Notes, Do It By Hand


I am in the middle of my 25th year of teaching at universities.  There have been several changes in the way students approach their classes in that time.  The most noticeable is that when I started teaching, students took notes in notebooks, but now almost every desk has a laptop on it when I give a lecture. 
There seem to be a lot of obvious benefits to taking notes on a computer.  For one, it is easy to save the notes in a place where you can find them later.  For another, typed notes are legible.  My own handwriting is terrible, so it is nice to have a tool that guarantees I can read my notes later.
Before we go out and encourage every student to bring a laptop to class, though, it is worth checking out a study by Pam Mueller and Danny Oppenheimer in the June, 2014 issue of Psychological Science. 
They compared college students’ performance on tests following exposure to material.  The students were assigned either to take notes longhand or using a laptop.  In these studies, the laptops were set up so that students could only take notes on them.  Of course, in the real world laptops can provide a variety of distractions.
In the first study, students watched a TED talk.  (For those of you who have been living under a rock, TED talks are lectures on a variety of topics that last about 15 minutes.)  They took notes during the talk.  Then, they engaged in other activities for about 30 minutes.  Finally, they were given a quiz about the lecture.  The quiz contained both factual questions and conceptual questions that required some understanding of the subject matter.
Students did about equally well on the factual questions regardless of how they took notes.  However, the students did much better on the conceptual questions when they took notes longhand than when they took them using the laptop.
The experimenters compared the content of people’s notes to the transcript of the lecture the students heard.  When people typed their notes on a laptop, they were much more likely to copy what the speaker said verbatim rather than writing their own impressions of it.  In other words, people writing out their notes by hand had to think more deeply about the content of what they heard than those who were just typing.
The experimenters expanded on this finding in two other studies.  In one study, they instructed people using the laptops to take good notes rather than just transcribing what they heard.  Even when people were given these instructions, they still had a greater tendency to type what they heard than people who were taking notes longhand.  As before, the people who used the laptops did more poorly on a test of conceptual knowledge than those who took notes by hand.
In a third study, students were tested one week after hearing the initial lecture.  In this study, students had a chance to read over their notes before taking the test.  The idea was that if students took really detailed notes on the laptop, then perhaps those notes would be more valuable a week after the lecture than they were immediately afterward. 
In this study, participants who reviewed their notes still did better if they took notes longhand than if they took notes on the laptop.  Interestingly, in this study, the students did equally poorly regardless of the type of notes they took if they were not able to study their notes before taking the test.
Putting all of this together, it suggests that there is real value in having to think about the material in the process of taking notes.  It is because handwriting is slow and effortful that people have to think more clearly about what they want to write down rather than copying down what is being said by rote.  In addition, there is real value to studying later.  Just taking good notes is not enough to be able to remember the information later.  It is also important to go back over your notes and make sure that you think about the information again after being exposed to it the first time.

Monday, December 5, 2016

Learning to Converse Is Learning to Interact


It is hard to study how children really start to use language.  Part of the problem is that we treat language itself as a thing to be studied independent of how it is used.  So, we focus on the words kids learn or the way they structure those words into simple and (eventually) more complex sentences.
Another problem, though, is that when language is really being used, the whole situation is messy.  Early on, a parent or caretaker is interacting with the child.  They are trying to do some activity together.  Originally, the parent may use some words, which the child may or may not understand.  There is also some pointing and holding of objects.  Eventually, language comes to play more of a role in this process.
That means that really studying the development of the use of language requires looking not just at the words kids are using, but also the developing complexity of the interactions between children and the people around them.
An interesting paper in the June, 2014 issue of Child Development by Lauren Adamson, Roger Bakeman, Deborah Deckner, and Brooke Nelson looked at a group of children over several years to begin to map out how these interactions change over time. 
They observed children interacting with their mothers starting at a year and a half old and continuing until they were about five and a half.  It is worth recognizing up front that this kind of research is hard to do.  Most researchers focus on tasks that can be done in one session that take an hour or less.  For a group to follow up with the same children over a period of four years is a tremendous amount of work.
At each visit, the mother and child played a game together in which the experimenter played the role of the director of a play.  The mother was supposed to be a supporting cast member, and the child was the “star” of the play.  Then, the experimenter set up several scenes for the child to play, in which the parent had to help the child achieve some goal.  Over time, the actions got more complicated as the child’s abilities grew.
For example, in one scene, the experimenter brought several objects into the room, put them in a cabinet, and left the room.  The mother was then supposed to get her child to hide the objects in a different spot and then talk to the child about where the experimenter would think the objects would be when she got back to the room.
The researchers looked at video of these interactions to examine how the nature of the interaction changed over time, as well as how language use entered into the interactions. 
Some of the results are fairly obvious.  For example, at a year and a half, the parent and child interact with each other a lot, but there is very little language being used.  Mostly, the parent is directing the child’s actions and occasionally using some words.  By the time the child is 3, though, language is deeply embedded in the interactions.  Almost every action taken by either the parent or child is accompanied by words.
An interesting change over time is that at younger ages, the mothers are really directing the interaction.  They are setting up a structure for how the task should be accomplished by moving objects around and asking leading questions.  By the time the child is five, the interaction is much more balanced.  The parent still leads, but the child also offers more suggestions and recommendations.
Another change over time is the type of things that language is being used to describe.  At three, much of the language is focused on single objects and observable elements in the world.  By the age of five, there is also a lot more discussion about relationships among objects and not just about the objects themselves.
One surprising aspect of the data is that at the age of 2 and a half, there is a lot of variability between kids in how much language they are using when interacting with their mothers.  Some children use language in nearly every interaction, while others look like the 18-month-olds, where very little language is being used.  But, by the age of 3 and a half, just about every child is using language in all of their interactions with their mother.
That means that as soon as children learn to speak reasonably well, their interactions shift immediately to the use of language, because it is such an important tool for communicating. 
A study like this is largely descriptive.  It focuses on what happens at different points in a child’s life as they start to converse with other people.  What is nice about this work is that it focuses both on the use of words and sentences, but also on the kinds of interactions that children are having with others.  Ultimately, an understanding of how language develops is going to require connecting the use of language to the situations in which language is used.

Tuesday, November 29, 2016

The Value of Believing that People Can Change


We constantly label people by the characteristics they show.  We think of a particular person as being a bully, a nerd, a musician, or an athlete.  The label may be a reasonable reflection of who they are right now, but it also carries a belief that the behavior reflects a person’s essence.
When you say that someone is a bully, you not only mean that they tend to bully other people, but also that—at their core—they are the kind of person who bullies others.  I have a cartoon on my office door of two prisoners sitting in a cell.  One says to the other, “You’re not a murderer, you’re just a person who happened to murder someone.”   This cartoon works, because being called a murderer feels like it carries something essential about the individual.
If you use terms to describe people and you believe that they cannot change, then life can be stressful.  Every time that someone treats you badly, you take that as evidence that they are a bad person and not just that they did a bad thing.  So, if you are able to think about people’s personalities in a less fixed way, perhaps that would decrease your overall stress.
This question was explored in a paper in the June, 2014 issue of the Journal of Personality and Social Psychology by David Yeager, Rebecca Johnson, Brian Spitzer, Kali Trzesniewski, Joseph Powers, and Carol Dweck. 
One study looked at simple correlations between beliefs and stress in high school students over the course of a school year.  At the start of the school year, ninth graders were given a brief questionnaire about whether they thought people’s personality could change.  They were also given a test of their reaction to social exclusion.  This test is called Cyberball.  In this game, participants sit at a computer and think they are passing a ball along with two classmates playing at other computers.  After the ball is initially passed to everyone, the participant is excluded for several minutes as the other players pass the ball only back and forth to each other.  After this exclusion, participants rated how stressful they found the game to be.  Finally, at the end of the school year, the students provided information about their stress level and their physical health.  The researchers also looked at the students’ grades at the end of the year.
The more participants believed that personality can change, the less affected they were by being excluded while playing Cyberball.  In addition, the more that participants believed that others can change, the lower their stress, the better their health, and the higher their grades at the end of the year.
This result raises the possibility that if people were trained to think that personality characteristics can change, then they might do better in school.  In two additional studies, the researchers used an intervention of this type.  One study was done in a fairly wealthy school district, while the other was done in a very poor district.  In each study, participants were ninth-grade students who were at risk for failing out of school.
At the start of the school year, participants in an experimental intervention condition read an article about how personality can change.  They also read stories that were supposed to have come from upperclassmen talking about how this knowledge helped them.  Then, students wrote their own stories that they were told would be used by future students.  Students in the control condition read about how athletic ability can be changed.  As in the study just described, all participants then played the Cyberball game.  In addition, their stress, health, and grades at the end of the year were measured.
Even though this intervention was brief, it had a significant and lasting impact on participants.  Compared to students in the control condition, those who got the intervention reacted less strongly to the Cyberball game.  At the end of the year, they experienced less stress, had fewer health problems, and had higher grades than those in the control condition.  This effect was strongest for those students who did not already believe that personality could change over time.
Why does this intervention work?  Statistical analyses suggest that believing that personality can change leads to a smaller reaction to social exclusion (as measured by the Cyberball game).  Reacting less strongly to social exclusion has a cascade effect over time, and lowers stress levels while also having a positive impact on performance in school.
These studies fit with a growing body of evidence by Carol Dweck and her colleagues demonstrating that the belief that people can change has many benefits.  Students who believe their own behavior and performance can change work harder in school to overcome academic difficulty.  People who believe that others can change are more likely to work with them to regain trust after they have a bad experience. 
Ultimately, it is important to realize that you should not completely define the people in your life by their current behavior.

Wednesday, November 16, 2016

What Distracted Driving Teaches Us About Attention


The message is finally getting out there that smart phones cause real problems while driving.  Texting while on the road is extremely dangerous, because it requires the driver to look away from the road and also soaks up precious mental resources.  Even talking on the cell phone can be dangerous.
But, if cell phones are so obviously dangerous, then why do we continue to talk on the phone and drive?  Why do so many people think that they are actually pretty good at multitasking while they drive?
This question was addressed in an interesting study by Nathan Medeiros-Ward, Joel Cooper, and David Strayer in the June, 2014 issue of the Journal of Experimental Psychology: General. 
As they point out, recent theories of attention suggest that when we perform complex tasks, we use two circuits of behavior.  One circuit focuses on task performance, while the other focuses on the strategy for the task we are performing.  When driving, the lower-level circuit (called the inner loop of attention) is involved in aspects of driving like keeping the car in the proper lane.  The higher-level circuit (called the outer loop of attention) is involved in aspects of driving like dealing with unpredictable elements of the environment (cars, wind, and pedestrians).  In tasks like typing at the computer, the inner loop controls the typing of letters on the keyboard, while the outer loop controls the selection of words in a sentence.
To explore these aspects of attention, the researchers had participants drive in a simulator.  Participants were driving down a straight highway.  The difficulty of the task was manipulated by changing the wind.  The more unpredictable the wind, the harder it was to keep the car in the lane.
The researchers also manipulated the complexity of a second task that participants had to perform.  The secondary task interferes with the outer loop.  The more complex the second task, the more that the outer loop is focused on that task rather than on driving.
Sometimes, participants did no secondary task.  Sometimes, they performed a 0-back test in which they heard digits between zero and nine, and had to repeat back the digit they just heard.  This task is fairly easy to do.  Sometimes, they did a 2-back test.  In a 2-back test, participants hear digits and they have to repeat the one they heard 2 digits ago.  In order to keep doing the task, then, participants have to remember each new digit and then say back the one they heard two digits before.  This task is hard to do.
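The logic of these n-back tasks is simple to state precisely.  Here is a minimal sketch of the correct responses for the 0-back and 2-back variants described above (the function name is my own):

```python
def n_back_responses(digits, n):
    """For each digit heard, the correct response is the digit heard n
    steps earlier (None until n digits have accumulated)."""
    return [digits[i - n] if i >= n else None for i in range(len(digits))]

heard = [3, 7, 1, 4, 9]
print(n_back_responses(heard, 0))  # 0-back: repeat what you just heard
print(n_back_responses(heard, 2))  # 2-back: [None, None, 3, 7, 1]
```

The 0-back version places almost no load on memory, while the 2-back version forces you to hold and update two digits at all times, which is what makes it hard.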
Participants drove down the highway in each combination of wind while doing either no second task, the 0-back task, or the 2-back task.  The researchers measured how well people were able to stay in their lane as they drove.
When participants were not doing any secondary task at all, they were equally good at staying in their lane regardless of the level of the wind.  When the wind was highly unpredictable, participants got much worse as the secondary task got harder.  That is the typical finding in multitasking.
Interestingly, when the wind was only moderately unpredictable, people were not strongly affected by the secondary task.  They were reasonably good at staying in their lane regardless of how difficult the secondary task got.  And when the wind was highly predictable, participants actually got better as the secondary task got harder.
What is going on here? 
When the driving task is very easy, then the inner loop guides driving, but the outer loop does not have much to do.  So, it tends to monitor how the inner loop is doing.  Unfortunately, paying attention to a skilled task can actually make performance of that task worse.  That is one reason why skilled golfers and tennis players have trouble with their swings when they pay attention to the mechanics of their swing.  In this case, the complex secondary task occupies the outer loop, and lets the inner loop do its job.
When the driving task is very hard, though, the inner loop guides driving, while the outer loop handles the disruptions caused by the wind.  These two systems function well together.  When the outer loop is kept busy by the difficult secondary task, then it cannot monitor the unpredictable wind as carefully, and driving suffers.
What does this mean for driving?
Most of the time, driving is fairly easy.  There are few unpredictable events.  As a result, most people actually drive reasonably well while they are talking on the cell phone.  Unfortunately, it is impossible to know when unpredictable events will happen (by definition), and so when performance suffers while driving, it can be disastrous.  That is why it is important to avoid distracted driving.
The fact that participants in this study actually improved when they were distracted is not a good excuse to multitask when you are driving.  Remember that the easy driving task in this study just required staying in a straight lane with no other cars, pedestrians, or wind.  Real driving has many more potentially unpredictable aspects than that.  As a result, your outer loop has plenty to do most of the time when you are driving.

Thursday, October 27, 2016

What Makes You Open to Conversations With The Opposition?


There is a lot of conflict in the world these days, and it seems like it is getting harder than ever to find compromises.  In the United States, Democrats stake out a position, and Republicans immediately claim the opposite.  The Middle East is a constant source of tension.  Israelis and Palestinians cannot find common ground to support a peaceful settlement of a conflict that has raged for decades.
What would be required to open up the possibility of a dialogue?
This question was addressed in a fascinating set of studies by Tamar Saguy and Eran Halperin in the June, 2014 issue of Personality and Social Psychology Bulletin.
They used the conflict between Israelis and Palestinians as a starting point.  They suggested that when someone hears a member of an opposing group criticizing their own group, it increases their hope that the conflict might be resolved, which in turn makes them more open to discussion.
In one study, Israelis read a copy of a (fictitious) report discussing the conflict between Israelis and Palestinians.  One group read a passage that also included a quote from a Palestinian official criticizing Palestinians for the violence.  The other group read a passage without this quote.  After reading the passage, participants rated their hope and optimism for the future and their openness to considering the opposition’s point of view.  Those Israelis who read the passage with this self-critical quote were more hopeful for the future and more willing to consider the opposition view. 
A second study obtained the same effect, but this time the self-critical quote by the Palestinian official was unrelated to the violence.  The official was criticizing the Palestinians’ attention to education.  Again, those who read the passage with the self-critical quote were more hopeful for the future and more open to considering the opposition view.
A third study also included measures of people’s beliefs about change.  The work of Carol Dweck and her colleagues (which I have written about several times in this blog) suggests that people are more likely to trust others who have hurt them in the past when they believe that people can change their behavior than when they believe that people can’t change. 
In this third study, people’s tendency to be hopeful for the future and to be willing to consider an opponent’s message after hearing self-criticism was influenced by their beliefs about change. Those who believe that others can change were more hopeful for the future and more willing to listen to the opposition when they heard self-criticism by the opposition than when they didn’t.  Those who believe that people cannot change were not influenced significantly by self-criticism by the opposition.
One final study extended this finding by demonstrating that after people hear self-criticism by a member of the opposition, they are also more interested in compromise.  Essentially, people who read an opponent’s self-criticism and who also believe that other people can change were more hopeful about the future, which led to a greater openness to consider the opposition viewpoint, which related to a greater willingness to consider a political compromise.
What does all this mean?  There are a variety of signals that people send during conflicts.  When people criticize themselves, they send a signal that they do not agree with everything that their group has done.  That opens the door to thinking about ways to move beyond the conflict.  In general, resolving conflicts does require some degree of compromise. 
Ultimately, when you are engaged in some kind of conflict (which presumably is less thorny and longstanding than the conflict between Israelis and Palestinians), you are also sending signals about your willingness to settle the disagreement.  It is worthwhile thinking about the signals you are sending to see whether you are reinforcing people’s opposition or whether you are opening doors to resolving your conflicts.

Thursday, October 13, 2016

Do You Want the Good News or the Bad News?


Many situations in life involve a double-edged sword that carries good news and bad news.  A promotion at work may come along with an increase in salary as well as more responsibilities and longer work hours.  A workplace evaluation may involve both praise for jobs well done as well as suggestions for improvement.
When you are about to get a shot of good and bad news, what is your preference for getting that news?  What should your preference be?
This issue was explored in an interesting paper in the March, 2014 issue of Personality and Social Psychology Bulletin by Angela Legg and Kate Sweeny.   
In an initial study, participants filled out a personality inventory.  One group was told that they were going to get feedback, some of which was good and some of which was bad, and were asked which they wanted to hear first.  A second group was told that they were going to give someone else feedback about their personality inventory and that some of the news was good and some was bad.  They were asked what news they wanted to deliver first.
Most people (78%) wanted to hear the bad news first followed by the good news, because they believed they would feel better if they got the bad news first and ended on the good news.  People delivering news were split.  People who imagined what a recipient would want to hear tended to want to give the bad news first.  Those who focused on themselves tended to want to give the good news first, because they felt it would be easier to start by giving good news.
A second study focused on participants delivering news.  In this study, participants who were encouraged through instructions to think about how the other person would feel when getting the news were more prone to give the bad news first and then the good news than those in a control condition who were not given specific instructions.
An important question, though, is whether we should get the bad news first followed by the good news.  A third study suggests that the answer to this question depends on whether you are focused on your mood or on changing your behavior.
In this study, participants filled out a personality inventory and then were given bogus feedback about their results.  The feedback consisted both of good news (some positive personality traits like being a good leader) as well as bad news (some traits that are not so positive like being low in creativity). 
The study varied the order in which participants got this feedback.  Before and after getting the feedback, participants rated their degree of worry as well as their mood.  After getting the feedback, participants rated how committed they were to learning to change the negative aspects of their personality.  At the end of the study, participants had the option to watch some videos to help them make personality changes or to help the experimenter by stapling some packets together.
Participants who got the bad news first followed by the good news were in a better mood and were less worried overall than those who got the good news first then the bad news.  However, participants who got the bad news first followed by the good news were less interested in changing their behavior and were less likely to elect to watch videos to improve their behavior than those who got the good news first followed by the bad news.
Overall, then, you like to get improving sequences of news, because the last thing you hear affects your mood.  However, it turns out that being a little unsettled is motivating.  So, if you are hoping to make changes in your behavior, it is better to focus on what is wrong than on what is right.

Wednesday, September 21, 2016

You Have More Influence on Others Than You Think


From an early age, we talk to people about the positive and negative influences of peer pressure.  On the negative side, drug education programs talk about the effect of social groups on whether a particular individual will take drugs.   On the positive side, Austin, Texas has a highly successful day of giving in which members of the community urge others to donate to their favorite charities.
But, how much influence do you really have on the actions of other people?  Are you aware of the effect you have on others?
This issue was explored in an interesting paper by Vanessa Bohns, Mahdi Roghanizad, and Amy Xu in the March, 2014 issue of Personality and Social Psychology Bulletin. 
This paper focused on peer pressure to do negative things.  In one study, college students went on campus and asked other students to commit a white lie.  Participants approached other students and asked them to sign a sheet acknowledging that the participant had described a new class at the university to them, even though the participant was not going to describe the class because he/she “didn’t really want to do it.”
Before starting this task, participants estimated how many people they would have to ask in order to get three students to sign the forms.  They also estimated how comfortable other people would be in saying “no” to the request.  Then, they went out and solicited white lies. 
Participants predicted they would have to ask an average of 8.5 people in order to get three signatures.  In fact, they only had to ask an average of 4.4 people in order to get those three signatures.  Generally speaking, people felt that others would be comfortable saying “no” to them.  The more comfortable they thought others would be saying “no,” the more people they thought they would have to ask before getting the required signatures.
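Put as compliance rates, those averages imply that participants expected only about a third of the people they approached to agree, when in fact about two-thirds did.  A quick back-of-the-envelope calculation using the figures above:

```python
# Implied compliance rates from the average number of people asked
# to collect three signatures (averages reported in the study).
def compliance_rate(signatures_needed: int, people_asked: float) -> float:
    """Fraction of approached people who agree to the request."""
    return signatures_needed / people_asked

print(round(compliance_rate(3, 8.5), 2))  # predicted: 0.35
print(round(compliance_rate(3, 4.4), 2))  # actual: 0.68
```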
A second study replicated this finding using a situation in which participants asked others to write the word “pickle” in a library book in pen.  Once again, participants believed they would have to ask twice as many people to get three people to commit the small act of vandalism as they actually did need to ask.
Two other studies looked at why this effect emerged.  These studies used vignettes in which people imagined small unethical acts like reading someone’s private Facebook messages if their account was left open or calling in sick to work in order to go to a baseball game.
Some people read scenarios in which they were going to perform the act themselves.  Others read scenarios in which they were watching someone else perform the act and could give that person advice, either to do the unethical thing or to do the ethical thing.  Participants rated how comfortable they would feel doing the ethical thing in these scenarios.
Participants who played the role of advisor did not feel their advice would have much impact on others.  They felt that other people would be reasonably willing to do the ethical thing whether they were giving other people advice to do the right thing or to do the unethical thing.
In fact, though, participants playing the role of the actor were much less comfortable doing the ethical thing when they got advice to do the unethical thing than when they got advice to do the ethical thing.  That is, people were highly swayed to do the wrong thing by the advice they got.
Other studies by Vanessa Bohns and her colleagues have demonstrated similar findings looking at ethical behavior. 
Putting all of this work together, then, it seems that we have a hard time saying “no” to other people.  Social pressure has a huge influence on our behavior.  At some level, that may not seem surprising to us, but we systematically underestimate the influence that our social pressure has on other people. 
One more reason why we should try to help other people to do the right thing.

Wednesday, September 14, 2016

The Two Competing Selves Inside You


Sitting up late talking with friends, you may spend a lot of time thinking about who you would like to be ideally.  You focus on the people you would help, the good you could do for society, and your dreams.  In your day-to-day life, though, you spend a lot of time just doing what has to be done to get ahead. 
So, are we just lying to ourselves and others when we have these idealistic conversations?
In a paper in the May, 2014 issue of the Journal of Personality and Social Psychology, Jeremy Frimer, Nicola Schaefer, and Harrison Oakes suggest that each of us has two distinct conceptions of self.
First, there is the actor.  The actor is your public-facing self, the one you bring out when other people are watching.  The actor often has some focus on being prosocial, that is, doing things that are for society’s benefit.
Second, there is the agent.  The agent is your doing self.  When you pursue your daily goals, you typically act more selfishly in your own interests. 
As evidence for this split between actor and agent, participants from the United States (which is a relatively individualistic society) and India (which is a relatively collectivist society) were asked to perform one of two tasks.  
One task involved rating the importance of a variety of selfish and prosocial goals.  This task was designed to get people to think about themselves as actors.  The other task involved having people describe four of their own most important goals.  This task was designed to get people to think of themselves as agents. 
After this initial task, everyone rated how strongly their own goals were about helping themselves and how much they were about helping others. 
Participants who were asked to think about a variety of prosocial and selfish goals rated that their own goals were equally strongly about helping themselves and others.  Those who were asked to list only their own goals rated that their goals were more strongly focused on themselves than others.  This pattern held both for Americans and Indians, suggesting that the agent is fairly selfish across cultures.
A second study demonstrated that when people are primed to think about the variety of goals they might pursue, they act more like someone who is told to role-play a prosocial person, while those who are primed to think about their own goals act like someone told to role-play a selfish person.
One final study found that when people were primed to think of themselves as actors, they felt their goals were more idealistic, but when people were primed to think of themselves as agents, they felt their goals were more realistic.
This split between two self-concepts may explain why people often act differently when they believe that others are watching them.  We want our public-facing self to act consistently with our ideals.  Each of us wants to be seen as the kind of person who does things to help society.   But, when left to our own devices, the pressures of life often push us to do what is in our own self-interest.
This work has interesting implications for how to get people to do more public service.  Lots of charities and nonprofits need volunteers to help them carry out their work.  People who do volunteer work also report feeling better about themselves after doing it.  Yet, few people actually volunteer their time.  Perhaps prompting people to think about themselves as actors rather than agents could help to promote more engagement with volunteer organizations.

Thursday, September 8, 2016

Switching Languages Affects Accents


If you spend time in any large city in the United States, chances are you will hear English spoken in a variety of accents.  Some of these accents are just native speakers of English from different regions of the country, while others reflect the speech patterns of people who learned another language as a child and then learned English later.
A foreign accent matters in social situations.  The accent immediately marks someone as an outsider, which can lead to distrust.  In addition, some native speakers find accents hard to understand, and so the accent can also create difficulties in communication.
Are accents a fixed part of a person’s speech pattern in their non-native language?  This question was explored by Matthew Goldrick, Elin Runnqvist, and Albert Costa in a paper in the April, 2014 issue of Psychological Science. 
These researchers were interested in whether switching back-and-forth between languages would affect the strength of a person’s accent.  To test this possibility, participants were run in Barcelona, Spain.  All of them were native speakers of Spanish who began learning English by the age of 4. 
In this study, participants saw pictures of simple objects on a computer screen whose names began with a d (as in desk or doce) or a t (as in tin or taza).  The picture was surrounded by a colored frame.  Participants were instructed to use the English word for one color and the Spanish word for the other color.  The key question was whether participants would have a stronger accent on trials on which they switched languages from the previous trial than on trials on which they used the same language on consecutive trials.
It can be difficult to measure the strength of an accent by ear, so the researchers used a clever method.  When Spanish speakers produce the sounds ‘d’ and ‘t’, they engage their vocal cords earlier than English speakers do when producing these same sounds.  This timing, known as voice onset time, can be measured directly from a digital recording of speech, where you can actually see the burst of noise when the vocal cords engage.  The researchers did an acoustic analysis of the ‘d’ and ‘t’ sounds at the start of each word to measure when the vocal cords engaged.
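To make the measurement concrete, here is a toy Python sketch of this kind of analysis run on a synthetic signal.  The thresholds, frame size, and signal are illustrative assumptions for the sketch, not the authors' actual procedure:

```python
import numpy as np

def voice_onset_time_ms(signal, sample_rate, burst_thresh=0.1,
                        frame_len=100, max_crossings=10):
    """Toy voice-onset-time estimate: milliseconds between the release
    burst (first loud sample) and the onset of voicing, detected here as
    the first frame with few zero crossings (low-frequency periodic sound)."""
    burst = np.argmax(np.abs(signal) > burst_thresh)  # first loud sample
    for start in range(burst, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len]
        crossings = np.count_nonzero(np.diff(np.sign(frame)))
        if crossings < max_crossings:                 # periodic, not noisy
            return (start - burst) * 1000.0 / sample_rate
    return None

# Synthetic word onset: 50 ms silence, 50 ms noisy aspiration, then voicing.
sr = 10000
silence = np.zeros(500)
aspiration = 0.5 * (-1.0) ** np.arange(500)   # rapidly alternating "noise"
voicing = 0.8 * np.sin(2 * np.pi * 100 * np.arange(2000) / sr)
word = np.concatenate([silence, aspiration, voicing])

print(voice_onset_time_ms(word, sr))  # about 50 ms
```

A longer lag between the burst and voicing corresponds to the English-like pronunciation; a shorter lag corresponds to the Spanish-like one.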
Overall, speakers did differentiate between the languages.  They engaged their vocal cords later when speaking English words than when speaking Spanish words.  The language of the previous trial did not affect pronunciation of Spanish words, but it did affect pronunciation of English words.  Overall, participants engaged their vocal cords a bit earlier when saying an English word if the previous word had been spoken in Spanish than if the previous word had been spoken in English.  That is, the person’s Spanish accent was stronger if they had just said a Spanish word than if they had just said an English word.
One interesting aspect of this study is that some of the words used were cognates.  That is, the words for the object were similar in Spanish and English.  For example, the Spanish word for dentist is dentista.  The effect was particularly strong for these cognates: when participants had just spoken a Spanish word and then had to produce the English member of a cognate pair, their accent was stronger than when they had just spoken an English word.
This result reflects the fact that when people speak multiple languages, they learn both the words that are used in each language and how to pronounce those words.  For cognates in particular, people have experience speaking similar words in both languages.  Speaking Spanish activates the Spanish pronunciation of the word, while speaking English activates the English pronunciation.  When participants switch languages, they get a combination of these pronunciations, which ultimately affects how they speak those words.

Thursday, September 1, 2016

Young Children are Primed to Learn about Eating Plants


Humans are much more flexible in their behavior than most other animals.  For example, we figure out what to eat in every environment where we find ourselves.  Other animals are not so lucky.  If they find themselves outside of the environment in which they evolved, they can have great difficulty finding food.

The flexibility of human behavior comes at a cost.  Ultimately, we have to learn how to navigate our environment rather than having a lot of that information pre-wired into the system.  That learning is effortful and potentially dangerous.

Consider the problem of eating plants.  Many plants are edible and are important sources of nutrition.  But, some plants are not things we can digest and—worse yet—some are poisonous. 

A fascinating paper by Annie Wertz and Karen Wynn in the April, 2014 issue of Psychological Science examines infants’ ability to learn about what plants are edible.  Infants clearly don’t come wired to know which plants are edible, but their research suggests that infants may come wired to pay attention to the edibility of plants.

In one experiment, 18-month-olds watched an experimenter perform a series of actions.  The experimenter first took a fruit (say a dried apricot) off a realistic-looking plant, placed the tip of it in his mouth, and said “Hmmmmmm.”  Then, he took a different fruit (say a dried plum) off an object shaped like a plant that was painted silver and housed in a glass case and did the same thing.  So, one object looked like a plant, while the other did not.  (Other children in this study saw the experimenter do the action on the object first and then the plant, so the order in which the actions were performed did not affect the results.) 

After seeing these actions, the experimenter took other fruits off the plant and the object.  Then, a second experimenter came in and asked the child which one they could eat.  Children overwhelmingly chose the fruit that came from the plant. 

The experimenters also ran three control conditions.  In one, when the experimenter took the fruit off, he put it behind his ear rather than in his mouth.  In the test, the infants were asked which object they could use.  In this case, the children had no preference for the fruit from the plant over the fruit from the object.

Of course, it could just be that the plant was more familiar than the object.  In another control condition, the plant was compared to a set of shelves.  Most infants are used to seeing food taken from shelves in their home.  In this condition, after seeing the fruits from the plant and the shelf put in the experimenter’s mouth, the infants strongly preferred to choose the fruit that came from the plant.

In a third condition, the infants saw the experimenter just look at the plant and say “Hmmmmmmm” and then look at the object and say “Hmmmmmmmm.”  This condition was designed to test whether children simply had a preference for fruits that come from a plant rather than fruits that come from an object.  In this case, the infants were equally likely to choose the fruits that came from the plant or the object.  This condition is important, because it would be dangerous for infants to learn that all plants are edible, since some are poisonous.

Finally, the researchers also examined whether even younger infants might show this preference.  In a final study, these same actions were shown to six-month-old infants.  Six-month-olds are too young to choose for themselves.  So, after the first experimenter took the fruits off the plant and the object, a second experimenter put each fruit in his mouth in turn and held it there.  The experimenters measured how long the infants looked at these events.  Lots of work with infants shows that for unfamiliar situations, infants look longer at surprising events than at unsurprising events. 

In this study, when the infants saw the first experimenter put the fruits in his mouth, they looked longer when the second experimenter put the fruit from the object in his mouth than when the experimenter put the fruit from the plant in his mouth.  But, when the first experimenter put the fruits behind his ear, the infants looked for the same amount of time when the second experimenter put the fruits behind his ear, regardless of whether they came from the object or the plant.

This set of results suggests that by six months of age, infants are ready to learn about which plants are edible.  Evolution has not pre-wired humans with knowledge of specific plants that we can eat.  Instead, we are wired to learn about plants from other adults.  That mechanism is important for helping us to survive in a wide variety of environments. 

Thursday, August 25, 2016

Trust of Strangers Requires Effort (Sometimes)


Trust is important.  Without the ability to trust strangers, society would fall apart.  You have to trust that people will generally deal with you honestly, and that they will follow through on their commitments.  After all, you do not know all the people who grow your food, make your clothes, and take care of your money in the bank.  You do not have the time to do all of these things for yourself.
Of course, most of this trust is implicit.  You do not often think about the number of strangers you rely on to get through your daily life. 
Sometimes, though, you have to place your trust in a stranger more explicitly.  Not long ago, I was sitting at an airport by a bank of outlets.  A woman walked up, plugged in a cell phone, and asked two of us sitting by the outlets to watch her phone for a few minutes while she went to check her flight.  She had to trust that we would not steal her phone, and we had to trust that she was not leaving us sitting next to a dangerous device.  And in the end, our mutual trust was rewarded.
An interesting paper that has been accepted for publication in the Journal of Experimental Social Psychology by Sarah Ainsworth, Roy Baumeister, Kathleen Vohs, and Dan Ariely examines whether this kind of trust among strangers requires mental effort.
The measure of trust they used in these studies was a behavioral economics game called the Trust Game.  In the Trust Game, participants are given $10.  They are told that they can give as much of that money as they want to their partner.  The experimenter will then triple the amount of money given to the partner, and the partner can return as much of that money as he or she chooses to the participant.  So, if the participant elects to give $3 to the partner, the partner will receive $9 from the experimenter.
This game requires trust.  The best joint outcome for the players in this game requires that the participant give all of the money to the partner and requires the partner to split the proceeds.  If the participant does not trust the partner, then the participant can choose to keep all of the money.
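The payoff structure can be sketched in a few lines of Python, assuming the $10 endowment and tripling rule described above.  The function name and the even-split return strategy are illustrative, not part of the study:

```python
def trust_game(endowment, sent, returned):
    """Compute final payoffs for the participant and the partner.

    sent: amount the participant gives away (0 to endowment)
    returned: amount the partner sends back (0 to 3 * sent)
    """
    pot = 3 * sent                              # experimenter triples the transfer
    participant = endowment - sent + returned   # keeps the rest, plus any return
    partner = pot - returned                    # keeps whatever is not returned
    return participant, partner

# No trust: keep everything, joint total stays at $10.
print(trust_game(10, 0, 0))     # (10, 0)

# Full trust with an even split of the tripled pot: joint total grows to $30.
print(trust_game(10, 10, 15))   # (15, 15)
```

The arithmetic makes the dilemma visible: the joint payoff is maximized only by sending everything, but the participant who does so is entirely at the partner's mercy.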
These researchers suggest that trusting a stranger in this game requires overcoming a natural tendency to avoid risk.  To explore this possibility, they overlaid an ego depletion manipulation on this study.  The concept behind ego depletion is that when people engage in a period of effortful self-regulation, they have difficulty overcoming their habitual tendencies in the future.  So, if trust requires some amount of effort, people who first do a task that requires effort will trust the stranger less than those who do not do this task.
In one study, participants watched a silent video of a woman being interviewed.  Periodically, words appeared in the lower right corner of the screen.  One group just watched the video, while a second group was told to ignore the words and to return their attention to the woman as soon as they noticed themselves looking at the words.  This task has been used in previous research on ego depletion.
After watching the video, participants were given the trust game and were told they were playing with a partner in another room.  Participants also filled out a scale that measured the personality characteristic of neuroticism.  Neuroticism is the degree to which people tend to focus on negative outcomes and also the degree to which they tend to experience high-arousal emotions like anxiety and anger.
In this study, participants with low levels of neuroticism were not strongly influenced by the ego depletion manipulation.  However, those with a high level of neuroticism gave much less money to their partner when they had to avoid looking at the words on the video than when they did not. 
The idea here is that people with a high level of neuroticism (and particularly the aspect of neuroticism that focuses on the strength of their negative emotions) have a tendency to fear risk.  This group wants to avoid giving money to their partner.  Only when they have enough motivational energy will they be able to overcome that tendency. 
Two other experiments examined two other factors that also influence people’s likelihood of trusting another person.  In one study, some participants were told they would meet their partner after the game.  In a second study, participants were given a (fake) EEG measurement at the start of the task.  Some participants were told that their partner had a very similar EEG measurement, of the sort you would only expect among siblings, relatives, and close friends. 
The ego depletion manipulation did not influence the amount of money people were willing to give to their partner when they believed they would meet their partner or when they believed they were very similar to their partner.  It did influence the amount of money that highly neurotic individuals were willing to give in these studies when they did not think they would meet their partner or did not think they were similar to their partner.
Putting all of this together, then, trusting strangers sometimes requires effort.  In particular, when you believe you will never meet someone and you have no particular similarity to them, you believe there is a risk to trusting them.  The more strongly you react to that kind of risk, the more effort you need to put in to trust a stranger.