There are many situations in life in which you have to
follow other people’s advice. Doctors
recommend medications to treat problems.
Mechanics suggest ways to maintain your car more effectively. Financial advisers indicate the investments
they think you should consider.
Because companies know that advisers play a big role in the
choices people make, those companies will often create incentives for advisers
to make specific recommendations. Drug
companies wine-and-dine doctors in the hope that the doctors will recommend
their medications to patients. Mechanics
often have higher profit margins on some kinds of maintenance, which gives them
a reason to suggest those procedures.
These incentives create a conflict of interest. When you get advice from someone, you hope
that you are getting the advice that is best for you. When advisers have an incentive to recommend
a particular option, then they may suggest that option to you even when it is
not ideal.
To help protect you, there are often regulations that
require the disclosure of conflicts of interest.
In those cases, an adviser has to tell you in advance that they have
reasons to recommend particular options.
The idea is that if you know that an adviser may be biased, you can use
that information to help you make a choice.
Do these disclosures work?
This question was addressed in a February 2013
paper in the Journal of Personality
and Social Psychology by Sunita Sah, George Loewenstein, and Daylian
Cain. Their studies suggest that these disclosures
may actually increase the likelihood that people make choices that are in the
adviser's best interest rather than their own.
In their studies, participants were pairs of people who had never
met. They were recruited
in public places, so they were generally not college students. One participant played the role of the
adviser, while the other played the role of the chooser.
The options in these studies were opportunities to win a
prize. In a given opportunity, there
were six possible prizes. The prize a
participant would actually get depended on the roll of a die. Choosers were faced with two sets of prizes,
and they had to select which one they wanted.
Then, the experimenter would roll a die, and they would get the prize
that matched the number for that set.
The two sets were designed so that one was better than the other. For example, the better set might have the chance
for prizes like a $50 gift card to Amazon, a candy bar, or a can of Coke. The worse set might have the chance for
prizes like a $20 gift card, a can of generic soda, or a candy bar. When people were given a free choice between
the two sets, they almost always chose the better one.
The second participant played the role of an adviser. The adviser always saw the options first and
made a recommendation for what the chooser should pick. In some cases, there was no
conflict of interest. Both the chooser
and the adviser would get prizes after the study regardless of what the adviser
recommended and what the chooser selected.
In those cases, the adviser generally recommended the better option, and
the chooser generally picked it.
In some cases, though, the experimenters created a
conflict. In these cases, the adviser
would get a chance to win a prize only if the chooser picked the worse set of
prizes. So, there was an incentive for the
adviser to recommend the worse option.
If the chooser was not told about the conflict, then they
took the adviser's recommendation only about 30% of the time. Ratings they gave after the study suggested
that they felt the adviser gave a bad recommendation.
In the disclosure condition, though, the advisers had to
write down that they made this recommendation because they would only receive a
prize if the chooser selected the worse option.
In the conditions in which the conflict was disclosed, choosers felt
that the adviser was not trustworthy, but they took the adviser’s recommendation
about 75% of the time.
What is going on here?
Choosers in these studies seem to have been motivated to
make a selection that helped both themselves and the adviser. That is, choosers picked something that was
worse for them overall just to help the adviser to get something as well.
Other studies in this series found that choosers would pick
the worse option about half the time even when the disclosure was made by the
experimenter rather than by the adviser. That is, having an independent person
disclose the conflict did not stop people from picking the option that was
worse for them just to help the adviser.
These studies demonstrate that your willingness to be
cooperative with others can work against you.
After all, an adviser’s job is to help you get what you want. Yet, when your advisers have a reason to
recommend something that is better for them than it is for you, there is still
a desire to help them out at your own expense.
In the real world (as opposed to these experimental situations),
the problem is probably even worse. In
these studies, participants could generally evaluate the options perfectly well
for themselves. In everyday life, you go to
advisers (doctors, mechanics, financial planners), because they have expertise
that you don’t have. So, even if you do
not completely trust the advice you are given, that advice is still better than
what you might be able to come up with on your own.
Ultimately, your best defense against biased recommendations
is to get several independent opinions.
Hopefully, the various advisers you consult will have different
conflicts. If you can find some
consensus among several advisers, that agreement may reflect what is best for
you.