by Cheryl K. Olson, Sc.D.

Adapted from a plenary presentation at the ENDS (Electronic Nicotine Delivery Systems) 2019 conference in Washington, DC.

Download the full paper with stories and practical guidance


One evening last summer, I attended a reunion event for Harvard public health alumni near where I live, in San Francisco.

I listened as a young recent graduate talked to me about vaping: how bad it was, how diacetyl causes “popcorn lung,” and so on. She didn’t know what she was talking about.

But I had a little epiphany about changing minds. Since we’re here at a Harvard alumni event, she knows I’m a member of the same tribe; she can assume we share values and goals. If I offer information contradicting her beliefs, she’ll listen.

But what if she knows I’ve consulted to industry–how do I show her I haven’t defected, that I’m still a member of our tribe in good standing, so she’ll still trust what I say?

And what might someone from industry say to correct her misinformation that would actually get through? She might become indignant and self-righteous. If data collected by industry were introduced, it might be dismissed as tainted.

The narrative around vaping, initially so full of hope and promise, has taken a wrong turn.

Industry and public health people can seem like mistrusting, warring tribes.

How can we open up cracks in hardening misperceptions and start real conversations?


One obstacle is how “public health people” view “industry people.”

Not long ago, for a consulting client, I reviewed the transcripts of meetings that the FDA’s Tobacco Products Scientific Advisory Committee (TPSAC) held with several companies seeking modified risk labeling for products such as IQOS and Camel Snus. I was looking for patterns: lessons from past failures that might increase the odds of future success. In particular, I looked at what went right and wrong when industry presented behavioral science information to the TPSAC advisors.

It was much more interesting, even entertaining, than I expected. There was so much emotion in some of these pages; I felt like I was reading a play.

I saw that the quality of the data in many ways took a back seat to issues of trust, with a very tribal feel. Now that “Big Tobacco” is moving toward harm reduction–a revered concept for many in public health–are we on the same side?

Many industry folks I meet have worked on reduced-harm products their entire careers. But the legacy of dysfunctional interactions between the tobacco industry and health-related government agencies has a long tail. Over and over, these meetings highlighted concerns that industry would try to sneak something nefarious past the FDA’s regulatory process.


In the TPSAC transcripts, examples of past deception were repeatedly raised–usually in the form of stories that had probably been shared many times with colleagues. One reviewer mentioned reading decades-old industry documents, as part of his FDA work, that showed plans to use flavors to attract young smokers. Others told of skepticism bred by supposed harm-reduction innovations that weren’t. Such as: “I’m 35 years past cigarettes…but I relapsed a couple of times because I thought ‘light’ cigarettes were safer, and we know now that that’s not true.”

One example really gave me that sense of opposing tribes: Partway through the IQOS presentation to TPSAC [transcript page 160], a physician on the panel said roughly, “When I see the word ‘significantly,’ statistically we have a whole idea of what that means. When you say harm was significantly reduced, what do you guys mean?”

It struck me that this was about “we researchers” versus “you industry guys.” And that mode of thinking was coloring everything the TPSAC reviewers heard. Are we on the same side of the table, or are we members of warring tribes? Are you fellow scientists or are you Big Tobacco?

The emotional hangover from decades of demonizing the tobacco industry, especially combined with all the recent media fear mongering about vaping, creates a major obstacle to working together toward harm reduction.

Another barrier is the way we members of the public health tribe view ourselves.

We assume that, because we are trained to conduct research and evaluate data, all of our health-related opinions are driven by data.

On my flight to give this talk, I listened to an episode of the excellent NPR podcast Hidden Brain called “Facts Aren’t Enough.” It discussed research on the “social spread of beliefs”–that is, we get basically all our beliefs from our social channels, from other people. For most of what we know–such as the fact that the earth revolves around the sun and not the other way around–we have no direct evidence. We have to trust others who gathered that evidence. We decide which people we trust to tell us the truth, and which beliefs we’ll take up.


It reminded me of my experience doing research at Harvard Medical School on video game violence–basically, a congressman with budget authority was worried about the effects of Grand Theft Auto games on society, and I got a seven-figure grant. I was disappointed to find that most of the research on media effects on youth was, frankly, shoddily done, and often appeared to be driven by bias. And that this research had sort of infected my otherwise-intelligent colleagues, who would say things to me like, “Violent video games…they cause aggressive behavior!” Or, “Don’t video games cause school shootings?” Actually, no. (If you’re interested, I can say more about this over a drink sometime.)

The point is, I came to realize that they got this information from academic social sources: stories from casual conversations with colleagues, article abstracts, and news reports. That recent public health grad at the alumni event, who talked about popcorn lung…she probably heard it over coffee from a colleague, who read it someplace.

Because of our self-concept as researchers, we believe that what we think is based on science. And we often fail to recognize that we really haven’t reviewed the data–that our opinions may have rickety supports.

Studies of how opinions form on issues like vaccines and climate change–and even on video game violence–repeatedly show one thing. When you expose people to new information, if that information supports their existing beliefs, it strengthens those beliefs. But if that new information conflicts with their beliefs, people will ignore it or discount it.

This is confirmation bias. We take in confirmatory data. When data doesn’t conform to what we already believe, we find a reason to discredit it–for example, that it comes from an industry study.

There was an article on Scientific American’s website called “How does the public’s view of science go so wrong?” The author wrote that, “When their misbeliefs are challenged, laypeople take it not as correction but as a direct attack on their identity.”

But the author missed the point that this is also true of scientists, with their own identities and biases. I remember when one of my former professors at Harvard, a noted expert on diet and weight loss, went ballistic over a government meta-analysis finding that being moderately overweight might be fine, even healthy. He was quoted calling the JAMA paper “really a pile of rubbish,” saying it would be exploited by snack food makers, and suggesting that doctors basically should not share the data with their patients. My old professor clearly felt attacked by that research.

In short, facts are not enough to change minds.

Another kind of barrier is just lack of information and empathy.

At the IQOS TPSAC presentation, the first presenter tried to set an empathic tone by saying, “I’m sure that most people in this room know someone who smokes. It could be a friend, colleague, or family member….”

No. Most of us don’t know any smokers.

Smoking has become sort of like military service in this country: it used to be a common experience, but it’s now segregated into subgroups of society. Why did the recent media coverage of dozens of deaths linked to vaping not mention the hundreds of thousands of smokers dying each year in this country? Because in our minds, those smokers don’t have faces, whereas it’s easy to imagine high school kids vaping.

Most public health people don’t identify with smokers. Not once during my MPH training at the University of Minnesota, or in my doctoral courses at the Harvard School of Public Health, did we ever discuss why people like or continue to use tobacco products. It was a given that smokers start because of peer pressure or family example, and they continue because they’re hooked, or can’t grasp the risks.

That’s one reason reviewers focused so intensely on any youth or nonsmoker uptake of modified risk products, even when those numbers are likely to be small compared with the lives saved by smokers switching to those products.


In the IQOS TPSAC meeting, one reviewer got clearly frustrated, saying, “You have the charger and the sticks, how is it all packaged? And where would the labels go, and what do they carry with them when–you know, what does this look like?” The reviewers didn’t understand the reality of this technology they were being asked to vote on.

You can’t go wrong assuming public health people have zero clue about nicotine-delivering products, whether old or new. Things like snus pouches are exotic to my tribe. It’s embarrassing how little we know. It puts me in mind of politicians who rail against the evils of violent video games without ever having played one.

The companies seeking modified risk labeling may not have thought about what’s at stake for the researchers and clinicians on the other side. The TPSAC panelists have what might be described as “asymmetrical personal risk.” In other words: if a TPSAC reviewer is right, if she correctly identifies a lower-risk product as such, she gets little professional benefit. But if she’s wrong, and identifies a high-risk product as lower-risk, that could kill her career or cause huge personal embarrassment.

If TPSAC panel members don’t feel scientifically convinced by, and emotionally comfortable with, both the contents of the application and the people making it, they will default to the safest response: rejection. That may explain why modified risk applications that looked like no-brainers to industry folks kept getting rejected.

An ordinary-sounding phrase from an application, like “the level of exposure to this constituent is below the level of concern,” comes across very differently in this light. Remember, the nightmare of TPSAC members is to say, in effect, “Yes, industry misled us for years about safer products that weren’t, but this one really is safer!”–only to see headlines about novel carcinogens in the product they voted yes on.

All of this is made more difficult by what’s sometimes called “conformist bias.” In the US, skepticism of vaping has become the norm, in contrast to the UK. It’s risky to defy the norms of your tribe. You risk being labeled a collaborator or traitor.

Hope is not lost.

There are some things you can do to help another tribe hear your story or consider your data.

In the TPSAC meetings, it was clear that industry terminology, such as “consumers” and “taste preferences,” was off-putting to the reviewers. That’s not the way their tribe talks.

The way you approach a topic, and even key phrases you use, can affect whether someone sees you as a colleague with shared goals and truly listens to you. Whether their mind opens to let some light in, or snaps closed and labels you as the opposition.

  • Don’t generalize beyond the data. One TPSAC reviewer complained about “very sweeping comments about the data,” sarcastically asking whether the industry presenter could “please share your reasons for that confidence.” All studies have limitations; you gain credibility by acknowledging those limits and what’s not yet known.
  • Be mindful of their fear of being led down a path. Discuss other approaches you considered. If you’re modeling population effects of a modified risk product, tell the story about how you chose your assumptions and about different ways the data could have come out.
  • Walk them through your thinking. Don’t “persuade.”
  • Be mindful of tribal values, such as an extra emphasis on protecting vulnerable populations: children and teens, low-income or low-literacy groups, and historically disadvantaged minorities. (That’s why menthol can be a touchy issue.)
  • Public health people in particular value evidence of practical real-world significance–not just statistical significance, which can be manipulated. They also like examples of effects on real people in real-world situations, not just in controlled trials. That makes this a credible way to frame anecdotal information. Research demonstrates the power of stories, especially stories that elicit emotion, to change minds; data then serves to confirm the new belief. Even so, it’s wise to speak in ways that support the tribal self-image: that they make their decisions solely based on science.
  • Walk them through your experiences and your motivations. Keep in mind the fear of losing face with the tribe. A story provides cover as well as perspective. What will the TPSAC reviewer, the day after her vote supporting a modified risk product, say to a colleague over coffee to justify that vote?
  • Collaborations between groups with different beliefs and backgrounds (such as industry and academia) can help to humanize adversaries, to reduce bias and blind spots in research design, and to credibly spread information beyond the same closed circulating pools.

Read the full presentation from the 2019 ENDS US conference