Conflict of Interest - Redefined to Mean Only Industry
In tobacco control, “conflict of interest” has quietly undergone a semantic narrowing so severe that it now means one thing and one thing only: proximity to industry. Not proximity to power, not proximity to government, not proximity to advocacy organisations, not proximity to one’s own past statements or professional brand, just industry. Funding from a tobacco or nicotine company is treated not merely as a potential source of bias, but as a moral contaminant that disqualifies arguments in advance. Meanwhile, other forms of interest (ideological, reputational, institutional, and career-based) are waved through as if they were morally neutral, or worse, invisible.
This redefinition matters because it shapes what can be said, who is allowed to say it, and which kinds of evidence are treated as legitimate. It also creates a peculiar ethical asymmetry. One set of interests is assumed to corrupt absolutely; another is assumed not to count as an interest at all.
Start with the obvious example. A researcher who has ever accepted funding from industry must disclose it prominently, often repeatedly, and is routinely dismissed regardless of the content or quality of their work. Their conclusions are treated as suspect by default. This scrutiny is not inherently unreasonable. Financial incentives can distort judgment. Disclosure exists for a reason. The problem is not that industry funding is treated as a conflict; it is that it is treated as the only conflict worth naming.
But now consider the mirror image, which is so normalised it barely registers as an ethical issue. A researcher whose entire career, funding stream, public reputation, and institutional role depend on maintaining a particular narrative (say, that nicotine products other than cigarettes are unambiguously harmful, that harm reduction is a corporate trick, or that prohibition is the only ethical policy stance) is not regarded as conflicted at all. Their incentives are treated as irrelevant background conditions, not as active pressures shaping interpretation.
This is strange. Career incentives are among the strongest incentives humans experience. Entire professional identities are built on being “right” about a particular problem. In tobacco control, whole research centres, endowed chairs, long-running grants, and media-facing expert roles are constructed around a single interpretive frame. Grants are renewed when findings align with funder priorities. Media profiles grow when researchers deliver clear moral messaging rather than nuance. Advisory positions, invitations, and honours accrue to those who stay within the accepted frame. None of this requires conscious dishonesty. It simply shapes what questions are asked, which uncertainties are emphasised, and which findings are quietly downplayed.
Ideology functions similarly, though it is even less likely to be acknowledged. If one begins from the moral conviction that any non-pharmaceutical nicotine use is inherently wrong, or that any association with tobacco companies is beyond redemption, then evidence supporting harm reduction will always feel suspect, regardless of its empirical strength. Data do not arrive in a vacuum; they are interpreted through priors. Yet in tobacco control, strong ideological priors are treated not as potential biases but as signs of moral clarity.
There is also institutional conflict, which is perhaps the most carefully ignored of all. Public health agencies and advocacy organisations often lobby aggressively for specific policies, publicly claim moral ownership of those policies, and then position themselves as neutral evaluators of the outcomes of those same policies. When results are mixed or negative, there is an obvious incentive to reframe, delay, or reinterpret the data to preserve credibility. Admitting error is not just intellectually difficult; it can threaten budgets, authority, and political influence. Still, these conflicts are rarely disclosed, let alone interrogated.
What makes this asymmetry so corrosive is that it collapses the distinction between argument and affiliation and does so intentionally. Instead of asking whether a claim is true, critics ask where the claimant sits in relation to the industry. Once that question is answered, the substantive discussion often ends. This is not scepticism; it is a heuristic for dismissal.
The result is a moral monoculture that is enforced not through open debate but through soft power: peer review norms, funding criteria, ethics committee language, conference invitations, and media amplification.
One of the most effective mechanisms sustaining this monoculture is expert rotation. The same relatively small group of figures circulates continuously through the system: serving on government advisory committees, authoring or overseeing key reports, acting as expert commentators in mainstream media, reviewing each other’s papers, and sitting on the editorial boards of the journals that define what counts as acceptable evidence. At no point does this look like corruption in the crude sense. It looks like experience. It looks like credibility. It looks like continuity.
But rotation creates a closed epistemic loop. When the same individuals help set the research questions, evaluate the findings, advise policymakers on interpretation, and then explain those interpretations to the public, dissent does not need to be actively suppressed. It simply never quite qualifies. Unwelcome findings are framed as methodologically weak, contextually misleading, or ethically suspect. Alternative interpretations are treated as confusion rather than disagreement. New voices struggle to enter not because they are wrong, but because they are unfamiliar.
This dynamic is reinforced by journal practices that rely heavily on a narrow pool of reviewers drawn from the same intellectual community. Peer review becomes less a test of rigour than a test of alignment. Work that challenges foundational assumptions is scrutinised for every possible flaw, while work that confirms them is granted interpretive generosity. Again, no conspiracy is required. Shared priors do the work on their own.
Media participation completes the circuit. Journalists, under deadline pressure, return to the same dependable experts who can deliver clear moral narratives and authoritative soundbites. Those experts, in turn, become more visible, more cited, and more indispensable, further entrenching their status as neutral arbiters rather than invested participants. Over time, the distinction between expert and advocate erodes, while the language of neutrality remains firmly in place. Entire categories of evidence (consumer experience, market data, and real-world substitution effects) are treated with suspicion because they do not emerge from approved institutional pathways. Conversely, weak or ambiguous findings that support the dominant narrative are amplified and moralised. Conflict of interest becomes less an ethical safeguard than a boundary-policing tool.
None of this is an argument for trusting industry uncritically. It is an argument for applying the same ethical lens to everyone else. Industry has its own incentives, and history provides ample reasons for caution. But ethical consistency demands symmetry. If financial interests can bias judgment, so can career dependence. If industry funding must be disclosed, so should advocacy roles, policy commitments, and institutional stakes. If we worry about motivated reasoning, we should worry about it everywhere, not only where it is politically convenient.
A more honest framework would treat conflict of interest as a universal human condition rather than a sin uniquely attributed to one group while quietly excused in all others. Everyone brings interests to the table. The ethical task is not to pretend otherwise, but to surface those interests and then evaluate arguments on their merits.
Until that happens, “conflict of interest” will remain less a principle than a weapon used selectively, rhetorically, and often to avoid engaging with uncomfortable evidence. And a field that prides itself on protecting public health will continue to shield itself from the very scrutiny it demands of others.


Another brilliant piece, Alan... very well stated. It reminds me of Upton Sinclair:
“It is difficult to get a man to understand something when his salary depends on his not understanding it.”
Excellent article! There's such a big difference between how tobacco control treats industry research vs., say, how medicine treats research funded by pharma. Not saying there are no problems with that approach, either, but any industry that wants to comply with regulations must do its own science, and it's best for everyone if that science is made fully available with transparent disclosures.
Anecdote: several years back I was at a small local conference. One presenter shared their nutrition research and disclosed funding from a local meat producer association, but then said "I have no conflicts of interest." The presenter didn't get any pushback on that, that I saw. Don't get me wrong, I'm about as pro-beef for nutrition as someone can be, but it was jarring to see them claim no COI. I guess my point with this anecdote is that there's a general amnesia about what COIs even are.