Monday, December 11, 2017

Morality and the Focus on Outcomes


In many public situations, we make judgments about people’s commitment to carry through on their stated views.  Politicians express commitments to issues ranging from immigration to gay marriage.  Corporate leaders give their views on fair labor practices and innovation. 
After hearing these views expressed, we have to make judgments about how likely these people are to follow through on their commitments.  These expectations influence our support of politicians and companies.  They also help us to predict what will happen in the future.
When making statements about difficult issues, there are often two different types of justifications people may give for their beliefs.  One type of justification is consequentialist: it focuses on the outcomes related to a position.  For example, a business leader might be opposed to child labor because it harms children.  The second type of justification is deontological: it focuses on broad rights and responsibilities.  A second business leader might be opposed to child labor because forcing children to work long hours is unjust.
A fascinating paper by Tamar Kreps and Benoit Monin in the November, 2014 issue of Personality and Social Psychology Bulletin examined how these views influence people’s perception of the moral commitment of the speaker. 
In one study, participants read actual statements from State of the Union addresses given by Bill Clinton and George W. Bush.  Participants did not know which president spoke these words, only that they came from presidential speeches.  The statements took positions and then defended them either because of the positive outcome associated with the position (a consequentialist defense) or because of the rights or justice associated with it (a deontological defense).  A control group of statements had no justification for the position.  After reading each statement, participants rated whether the issue was a moral issue for the speaker.
Participants felt that statements justified by rights and justice were more strongly based in morality than those statements justified by their outcomes.  Indeed, statements justified by their outcomes were judged as less strongly based in morality than those with no justification at all.
This result suggests that positions that are based on beneficial outcomes are seen as pragmatic positions rather than moral ones. 
Another study in this paper explored this phenomenon further.  In this study, participants read statements that were said to have been made by a manager at a company.  In addition to rating whether the speaker had a moral basis for the position, participants also rated the speaker's authenticity in holding that position, their commitment to the position, and how general their support for that issue was.
As before, when the speaker justified a position by appealing to rights and justice, participants saw that position as having a stronger moral basis than when the speaker justified it by its outcomes.  In addition, participants felt that positions based on rights and justice were more authentic, more strongly held, and reflected a more general commitment than those based on outcomes.
Why does this happen?
When people justify a position by the benefit of its outcome, it suggests that if the bad outcome could be avoided, their judgment would flip.  For example, it seems reasonable that a business leader who opposes child labor because it is bad for children’s long-term education might be convinced to support child labor if accommodations were made that gave the children more education.  The consequentialist view suggests that the leader does not have a broad moral argument against the practice, but rather a narrow pragmatic one.
These findings also have implications for people who are trying to express a position.  If you want other people to believe that your support for an issue is ironclad, then you should justify it based on broad principles of justice and rights.  If you want to signal that you might be willing to compromise on an issue, then you should frame your justification based on outcomes.

Monday, November 27, 2017

Free Will and Gratitude


There are lots of psychological benefits to gratitude.  Feeling grateful to others can lift your mood.  It enhances your feeling of connection to other people.  Gratitude can also motivate you to do work for others.
When you feel gratitude toward another person, you are feeling appreciation that the person has done something for you that required some effort on their part and that was ultimately designed to be helpful to you.  When there was no effort or cost to someone’s actions, then you may feel fortunate that there was a positive outcome, but you are not necessarily grateful to them for engaging in that action.
For example, suppose an electric cable comes loose on your car while you’re driving, and the car stops by the side of the road.  A driver stops, looks under the hood, and reconnects the cable, allowing you to get home.  You are grateful that the driver sacrificed the time to help you.  If, instead, a driver sped by and the resulting vibration in the road caused the cable to reconnect, you would feel lucky that it happened, but not grateful to that driver.
This analysis of gratitude suggests that we need to make some assessment of whether the action of another person came at a cost to them in order to feel grateful.  A paper in the November, 2014 issue of Personality and Social Psychology Bulletin by Michael MacKenzie, Kathleen Vohs, and Roy Baumeister, suggests that people’s beliefs in free will may influence the perception of cost, which may in turn affect the feeling of gratitude.
The idea is that if you believe that people have free will, then you believe that the actions they take are intentional.  Those intentions signal that the person has deliberately chosen to do things that help you.  That increases your sense of gratitude toward them.
In one set of studies, the researchers simply measured people’s beliefs in free will and also their tendency to be grateful.  As you would expect if beliefs in free will affect gratitude, these measures were positively correlated.  The more that people believed in free will, the more that they tended to experience gratitude in their lives.
Of course, it is hard to draw strong conclusions from correlational studies like this.  In another experiment, the researchers manipulated beliefs in free will by having people reflect on sentences that suggested that there is free will or that there is not.  This induced a temporary difference between groups in the strength of their belief in free will.  Then, participants thought about events of their lives in which someone did something for them.  Participants were more grateful for these events if they were induced to believe in free will than if they were induced to believe that free will does not exist.  A control group who did not think about free will before the task behaved similarly to those induced to believe in free will, suggesting that most participants from this population of undergraduates tend to believe in free will.
A third experiment also induced differences in the belief in free will by using passages that argued that free will does or does not exist.  After that, participants were led to believe that they were going to do a rather boring experiment for another experimenter.  After walking to another room, that experimenter told them that the study could be completed without their help and they did not have to do the boring task.  Participants returned to the first room, where they were asked a few questions about the first experimenter, including questions about whether they were grateful to the experimenter for letting them go and whether the experimenter was sincere about the motivations for letting them out of the experiment.
Participants induced to believe in free will were more grateful to the experimenter than those induced to believe that free will does not exist.  In addition, participants induced to believe in free will felt that the experimenter was more sincere than those who were induced to believe that free will does not exist.  The belief that the experimenter was sincere was able to statistically explain the relationship between belief in free will and gratitude.
Putting all of this together, then, in order to feel gratitude, you have to believe that the person who has done something for you actually wants to help you.  One factor that affects the sense that someone wants to help is whether they have free will.  After all, without free will, you are destined to act the way you do. 
This research also has implications for companies that provide customer service.  If companies want people to feel grateful for the service they get, it is useful for customer service agents to let customers know they have some autonomy in the actions they take.  That way, customers will believe that agents have chosen to help them, rather than believing that company policy forced them to be helpful.

Thursday, November 9, 2017

The Value of Having A Transcendent Purpose for Learning


School is the ultimate marshmallow test.  I’m sure you all remember Walter Mischel’s marshmallow test, in which an experimenter gives a child one marshmallow and leaves the room, saying that the child will get two marshmallows if he or she doesn’t eat the first one while the experimenter is out.  Resisting the temptation to eat the one marshmallow is taken as a measure of self-control.
School requires doing lots of things in the short-term that are less fun than what you could be doing, but lead to better long-term outcomes.  Studying for an exam is less fun than watching YouTube videos or playing video games.  But, people who get a college education typically make more money and have more satisfying careers than those whose education stops at high school.
A paper by David Yeager, Marlone Henderson, David Paunesku, Gregory Walton, Sidney D’Mello, Brian Spitzer, and Angela Duckworth in the October, 2014 issue of the Journal of Personality and Social Psychology explored motivations that would focus students on schoolwork rather than tempting alternatives. 
These researchers distinguish between two kinds of motivations:  self-interested and self-transcendent.  Almost every student has a self-interested motive for education.  They want to learn to make themselves smarter or to help them get a job.  Only a subset of students, though, has a self-transcendent motive in which they also want their education to allow them to help make the broader world a better place and to help others.  The question is whether this added “purpose” would influence students’ motivation to study.
In a field study, over 1,000 high-school seniors from low-SES backgrounds were studied.  These students were all planning to go to college the following fall.  The participants were given questionnaires to assess whether they had a self-transcendent motive for their education or just a self-interested one.  They were also asked about extrinsic motivations for going to college, such as being able to move out of their parents’ house or to make new friends.  Participants filled out a self-control measure that examined their perceptions of how well they control their behavior.  They participated in a task in which they had the chance either to do math problems (which they were told would strengthen their basic skills and help them academically) or to do something tempting like watch videos or play a video game.  Finally, the experimenters measured how many of these students were enrolled in college the next fall.
Having a transcendent purpose for their learning predicted participants’ scores on the self-control measures.  It also predicted the likelihood that students would do math problems rather than watch videos or play a game.  Finally, the more that students had a purpose, the more likely they were to be enrolled in college in the fall.
Of course, it is hard to draw strong conclusions just based on a correlational field study like this.  In a second study, ninth-grade students were given an exercise to get them to think about having a broader purpose to their education.  This exercise had them write about how their education in high school would allow them to help others and make the world a better place.  A control group wrote about differences between middle school and high school.  The researchers then measured the students’ grades in science and math classes at the end of the term.  The students who did the purpose intervention had higher grades at the end of the term than those who did the control manipulation.  This manipulation was most effective for the students with the lowest GPA before the intervention was done.
Two other studies used college students.  These studies encouraged participants to develop a purpose for their education.  One study demonstrated that participants spent more time on study questions if they were told to think about the transcendent purpose for their education than if they were not.  A second study found that students were better able to resist tempting alternatives to work when they thought about the transcendent purpose for their education than when they did not. 
What does all of this mean?
Success in school requires pushing off many enjoyable moments for the future in order to focus on learning.  Certainly, many learning activities are enjoyable.  But, learning new skills and facts also requires a lot of tedious and frustrating activities.  These desirable difficulties are a part of the learning process.  To stay motivated to engage in these tasks for the long-term, it is crucial to have a broader purpose for education.  It is not enough just to want a job or to make money.  It is also important to want to do things for others and to make the world a better place.  As humans, we find these transcendent goals highly motivating.
And, of course, this works for things beyond school.  Work is another aspect of life that can often be tedious and frustrating.  People who succeed in the workplace are also those people who see their work as having a higher calling to help others and to improve the world. 

Monday, October 30, 2017

Children Decide Who They Should Learn From Based On Experience


Another theme in this blog has been the way children learn to learn. Humans are able to survive in almost any environment in large part because we are able to learn so effectively from other people.  Each generation adapts to the culture and technology of the time.  Although this process takes a lot of time compared to other animals, it supports our ability to create cultures of ever-increasing complexity. 
Of course, not every other person is one that a child should learn from.  Some people know quite a bit about what is going on in the world around them, while others provide unreliable information.  If children get bad information early on, that could hurt their ability to learn more complicated things later.
So, it would be valuable for children to be able to determine the best people to learn from.  A paper by Kathleen Corriveau and Katelyn Kurkel in the October, 2014 issue of Child Development examined whether children can use the quality of the explanations people give to determine who they should learn from.
They studied 3- and 5-year-old children.  In one experiment, the children were introduced to two girls.  The girls each gave short explanations for how the world works.  One girl always gave good explanations, while the other one gave circular explanations.  A circular explanation is one that involves the phenomenon itself in the explanation.  For example, the good explanation for why it rains was “It rains because the clouds fill with water and get too heavy.”  The circular explanation was “It rains because water falls from the sky and gets us wet.”
After getting these explanations, the children heard explanations for novel objects given by each girl.  They also heard labels for novel objects given by each girl.  In these tasks, the explanations and names were equally good.  So, the question is whether children were biased to agree with the girl who gave the better explanations earlier in the study. 
The five-year-olds were strongly biased to listen to the girl who gave the better explanations.  They agreed with the explanation for the new object given by the girl who had given good explanations.  They also used the label given by that girl rather than the label given by the girl who gave circular explanations.  The three-year-olds were influenced, but to a lesser degree.  They accepted the explanation given by the girl who had given good explanations before, but they did not show a bias to use the labels she gave.
The five-year-olds were also better able to say explicitly that the girl who gave good explanations was a better explainer than the girl who gave circular explanations.  The three-year-olds were not able to make this judgment.
This study suggests that by the time children are five years old, they are able to make good judgments about what people they should learn from.  They use the quality of the explanations people give them to determine who is a reliable teacher.  And, they use this knowledge to influence a variety of things they learn from them.  At the age of three, children can do this to some extent, but they are still learning how to judge which people are good teachers.
This ability is crucial, because it helps children to avoid bad knowledge.  Human memory does not allow us to erase facts that turn out to be false.  Instead, when we learn that something is false, we have to mark it as being untrue so that we explicitly ignore it later.  That is one reason why we often continue to be influenced by information that we have been told in the past was not true.  Ultimately, the better the quality of the information we can learn, the fewer memories that we will have to explicitly ignore in the future.

Monday, October 16, 2017

Are Teens Really Prone to Take Risks?


If you read the local news section of a newspaper, you are bound to come across the story of a tragic death or injury to a teen.  They might be texting, drinking and driving, or skateboarding in a precarious spot.  Reading these stories may reinforce a general belief that teens simply take too many risks.
So, are teens really bigger risk-takers than adults?
A fascinating paper by Erik de Water, Antonius Cillessen, and Anouk Scheres in the October, 2014 issue of Child Development examines this question.
A lot of the behaviors in teens that we think of as risky are really impulsive.  Being impulsive means doing something that feels good right now rather than waiting in order to do something even better in the future. 
There are two parts to impulsivity.
The first is risk.  When you are impulsive, you might choose to do riskier things now without thinking through the long-term consequences.
The second is value.  If you value things in the present more than things in the future, then you may choose to get things right now.  Everyone values the present more than the future to some degree.  I would rather get $10 right now than get $10 in a week.  But how much more would I need to get in a week in order to forgo the $10 right now?  I might still prefer $10 now to $10.01 in a week.  But perhaps I would take $12 next week rather than $10 today.
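To make “valuing the present” concrete, researchers in this area often summarize choices like these with a discounting model.  Here is a small sketch in Python (my own illustration, using the hyperbolic discounting formula that is common in this literature rather than anything taken from this particular paper; the numbers come from the $10-versus-$12 example above):

# Hyperbolic delay discounting: the subjective value of a delayed reward is
# V = A / (1 + k * D), where A is the amount, D is the delay, and k is a
# discount rate.  A bigger k means the future is valued less.
# These numbers are illustrative, not data from the study.

def discounted_value(amount, delay_days, k):
    """Subjective present value of `amount` received after `delay_days` days."""
    return amount / (1.0 + k * delay_days)

# If I am indifferent between $10 today and $12 in a week, then
# 10 = 12 / (1 + k * 7), which implies k = (12 / 10 - 1) / 7.
k = (12 / 10 - 1) / 7
print(f"implied discount rate k = {k:.4f} per day")
print(f"value today of $12 in a week: ${discounted_value(12, 7, k):.2f}")

The more someone values the present, the larger their k, and the more extra money it takes before waiting seems worthwhile.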
These researchers tested teens ranging in age from 12-17 and young adults ranging in age from 18-27.  All participants were given a test of risk and a test of value.  They were also given a test of fluid intelligence (the Raven’s Progressive Matrices test) and a test of how much they value different amounts of money right now. 
The test they used to measure risk was a simple gambling task.  Participants saw a pie cut into 6 pieces, where 4 pieces were in one color and 2 were in the other.  This pie represented a gamble in which there was a 2/3 chance of winning one prize and a 1/3 chance of winning the other.  The prizes were set up so that the prize with the higher probability was always half the size of the prize with the lower probability.  So, a gamble might offer a 2/3 chance of winning $4 and a 1/3 chance of winning $8.  The more often someone selects the low probability gamble, the more risk they are taking.
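One useful feature of gambles set up this way (worth spelling out, although the exact payoffs in the paper may differ from my example) is that the two options have the same expected value, so choosing the long shot reflects a willingness to take risk rather than an attempt to win more on average.  A quick check in Python with the numbers above:

# The two options in the example gamble have equal expected value, so picking
# the low-probability prize measures risk-taking, not a smarter bet.
p_safe, prize_safe = 2 / 3, 4.0    # 2/3 chance of winning $4
p_risky, prize_risky = 1 / 3, 8.0  # 1/3 chance of winning $8

ev_safe = p_safe * prize_safe
ev_risky = p_risky * prize_risky
print(ev_safe, ev_risky)           # both about 2.67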
The test for value involved having people make a series of decisions about whether they would prefer a particular amount of money right now or a larger amount of money some time in the future.  The time period ranged from 2 days to a year. 
The results suggested that teens were not riskier than adults, but they did value the present more than the adults did.  That is, both adults and teens selected the riskier gamble at about the same rate.  However, compared to the adults, teens needed to be offered more money in the future before they were willing to wait rather than take the smaller amount now.  Other measures in this study examined how much teens value particular amounts of money.  Teens definitely find small amounts of money to be more valuable than adults do.  But, even taking that difference into account, teens still valued money in the present more than money in the future.
What does this mean?
We tend to think of teens as taking more risks than adults.  But, findings like this suggest that teens are more impulsive not because they are willing to take more risks, but because they value the present so highly.  The difficult thing for teens is to recognize that future experiences may be more valuable than their options in the moment. 
It is important to tease apart these factors, because the distinction influences the kinds of information we want to give to teens to help them take a long-term perspective on life events.  If teens were big risk takers, then we might want to educate them better about the risks associated with their behaviors.
But, because teens value the present so strongly, it is important to help them see the value of the future.  One reason for the viral success of the “It Gets Better” videos is that they aimed to help LGBT teens see that the problems they face now are not as big as they seem and that the future holds valuable things in store.  This strategy can be generally effective in helping teens to delay impulsive behavior right now in favor of the long term.

Wednesday, October 11, 2017

Sleep and False Memories


When you remember a past event, you are not just playing back a video or audio file of a previous encounter.  Instead, memories are reconstructed.  That means that many sources of information can be combined to influence what you remember about the past.
Most of the time, of course, that is a good thing.  When you are having a discussion about World War II, for example, it does not matter if the information about the war that you talk about came from a single lecture you attended or from years of classes and books you read.  What is important is just that the information is organized around the topic of discussion.
Of course, the specific events and the order of those events do matter a lot in eyewitness situations.  However, quite a bit of research demonstrates that eyewitness accounts are also reconstructed, and that means that information encountered after the initial event can influence later memory.
Does the amount of sleep you get affect how likely it is that you will mix together different sources of information when thinking about an eyewitness event?  This question was explored in a paper by Steven Frenda, Lawrence Patihis, Elizabeth Loftus, Holly Lewis, and Kimberly Fenn in the September, 2014 issue of Psychological Science.
They used a typical misinformation procedure in this study.  First, participants saw a sequence of photographs of two crimes (a car break-in and a thief stealing a woman’s wallet).  At some point after seeing the photo sequences, participants read text stories that described the events in the photographs.  However, three of the facts in the story differed from what was shown in the photos.  For example, a photo might show the thief putting the stolen wallet in a jacket pocket, while the story might describe him putting it in his pants pocket.  About 20 minutes after reading the text, participants then got a test about the events.  The critical questions on this test focused on the misinformation parts of the event.  The key measure is whether participants recall what happened in the photographs or whether they use the information from the text of the story to answer the question.  For each question on the test, participants were also asked the reason for their response.  The strictest measure of a false memory is when participants choose the information they read in the story, but state that they saw it in the photographs.
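To make that scoring rule concrete, here is a minimal sketch in Python (the function and field names are my own invention, not the authors’ coding scheme): a response counts as a strict false memory only when the participant both picks the detail from the story and claims to have seen it in the photographs.

# Illustrative scoring of one test question in a misinformation study.
# Names and response options here are hypothetical, not the study's materials.

def is_strict_false_memory(chosen_detail, story_detail, claimed_source):
    chose_misinformation = (chosen_detail == story_detail)
    attributed_to_photos = (claimed_source == "saw it in the photos")
    return chose_misinformation and attributed_to_photos

# Example: the photos showed the wallet going into a jacket pocket,
# but the story said pants pocket.
print(is_strict_false_memory("pants pocket", "pants pocket", "saw it in the photos"))  # True
print(is_strict_false_memory("pants pocket", "pants pocket", "read it in the story"))  # False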
To explore the influence of sleep deprivation, some participants were kept awake for a full night, while others were allowed to sleep as much as they wanted.  Half the participants saw the pictures in the morning, followed by the stories with the misinformation about 40 minutes later.  The other half saw the pictures the evening before, and then encountered the misinformation the following morning.
When participants see the pictures in the morning and then the misinformation soon after, sleep deprivation influences their tendency to have false memories.  The sleep-deprived participants remember more of the misinformation than the rested participants.
When participants see the pictures the evening before seeing the misinformation, though, sleep deprivation has no reliable impact on false memories.  The likelihood of incorporating information from the stories in their recall is low for all participants who saw the photos the night before.
What is going on here?
Think about how people could respond accurately on this test.  When they encounter the initial event, they have to remember both what they saw as well as when they saw it.  That is, they have to keep track of the source of the memory.  That way, when they read about the event later, they can separate what they saw from what they read.
People who are sleep deprived seem to have more trouble than those who are rested keeping track of the source of the information they get.  They are able to remember facts about the events, but they are more likely to combine different sources of information.
This result suggests that we might want to be careful about how much we trust the details of memories of events that happened when we were sleep deprived.  The lack of sleep may make it difficult for us to remember the source of the information we have encountered.

Monday, September 11, 2017

The Thinking and Doing Mindsets Affect What You See


At any given moment, you can be focused on thinking about what is going on in the world around you, or you can be motivated to act in the world.  For example, you might contemplate taking an art class.  In the thinking mindset, you would consider the various pros and cons of taking this class.  In the doing mindset, you would generate and then execute a plan for registering and attending the class.  Psychologists have used different terms to describe these orientations, but I will call them the thinking mindset and the doing mindset. 
Most of the research on these mindsets has focused on the behaviors that are associated with them.  An interesting question, though, is whether these motivational states affect what you see.  This issue was explored by Oliver Buttner, Frank Wieber, Anna Maria Schultz, Ute Bayer, Arnd Florack, and Peter Gollwitzer in a paper in the October, 2014 issue of Personality and Social Psychology Bulletin.
The idea is that when you are focused on thinking, you are open to lots of possibilities, and so you are willing to be expansive in the information you take in.  When you are focused on doing, then you narrow your attention to the information you think is most important. 
To test this possibility, the researchers first manipulated people’s mindset.  To put people in a thinking mindset, they asked people to spend some time reflecting on an unsolved personal problem.  To put people in a doing mindset, they asked people to create a plan to complete a personal project.  These tasks have been used in previous studies as well.
Then, participants were asked to look at a series of pictures on a computer monitor.  The pictures showed an object or animal against a background (such as a cow against a farm field).  Participants were told to look at the pictures and then rate how much they liked them.
While participants looked at the pictures, their eyes were being tracked.  Though you may not realize it, your eyes are constantly in motion.  You have only a small amount of very clear vision in each eye.  If you hold your arm out and stick up your thumb, the area of clear vision (which reflects the location of a densely packed group of cells in the back of your eye) is about the size of your thumbnail.  In order to build up a view of what you are looking at, you have to move your eyes around.  The information you point your eyes toward is a good indicator of what you are paying attention to. 
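As a rough illustration of how looking data like these can be summarized (my own sketch with invented fixation records, not the authors’ analysis pipeline), you can ask what share of fixation time fell inside the region of the picture occupied by the central object:

# Hypothetical eye-tracking summary: fraction of fixation time on the central
# object versus the background.  The fixations and bounding box are made up.

def dwell_share_on_object(fixations, object_box):
    """fixations: list of (x, y, duration_ms); object_box: (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = object_box
    on_object = sum(d for x, y, d in fixations
                    if x_min <= x <= x_max and y_min <= y <= y_max)
    total = sum(d for _, _, d in fixations)
    return on_object / total if total else 0.0

fixations = [(310, 240, 180), (520, 400, 220), (335, 260, 200), (90, 120, 150)]
cow_box = (280, 200, 420, 320)  # bounding box around the central object (e.g., the cow)
print(f"{dwell_share_on_object(fixations, cow_box):.0%} of fixation time on the object")

The higher this share, the more attention was concentrated on the central object rather than the background.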
Participants who were put in a thinking mindset looked about equally at the central object in the picture and the background.  Those who were put in a doing mindset looked much more at the central object than at the background.  This result is consistent with the idea that the doing mindset narrows attention to what seems important, while the thinking mindset enables people to take in a lot of information.
One reason this finding is important is that our modern world promotes getting things done.  We feel as though we should always be doing something.  However, if we want to take a broad view of a situation, then we need to be willing to take a step back and put ourselves in a mindset of thinking rather than doing.  These mindsets actually influence what we see in the world around us.