On 2017 Nov 15, David Keller commented:
Thank you for your thoughtful reply to my comment. In response, I would first remark that, while "it is sometimes difficult to disentangle the jargon", it is always worthwhile to clearly define our terms.
Commonly, an "analysis" is a method of processing experimental data, while an "effect" is a property of experimental data, observed after subjecting it to an "analysis".
As applied to our discussion, the "per protocol effect" would be observed by applying "per protocol analysis" to the experimental data.
My usage of "per protocol results" was meant to refer to the "per protocol effect" observed in a particular trial.
The above commonly understood definitions may be one reason that causal inference terminology "can get quite confusing to others who may not be used to reading this literature", for example, when it defines "per protocol effect" as something other than "the effect observed in a per protocol analysis".
Nevertheless, clinicians are interested in how to analyze clinical study data such that the resulting observed effects are most relevant to individual patients, especially those motivated to gain the maximal benefit from an intervention. For such patients, I want to know the average causal effect of the intervention protocol, assuming perfect adherence to protocol, and no intolerable side-effects or unacceptable toxicities. This tells the patient how much he can expect to benefit if he can adhere fully to the treatment protocol.
Of course, the patient must understand that his benefits will be diminished if he fails to adhere fully to treatment, or terminates it for any reason. Still, this "average expected benefit of treatment under ideal conditions" remains a useful goal-post and benchmark of therapy, despite any inherent bias it may harbor compared with the results of intention-to-treat analysis.
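To make that bias concrete, here is a minimal, purely illustrative Python simulation that was not part of the original exchange. It assumes a hypothetical trial in which frailer patients both fare worse and adhere less, and compares the intention-to-treat estimate and a naive per-protocol estimate with the true effect of treatment under full adherence; every variable name and number below is an invented assumption, not data from any actual study.

# Hypothetical simulation: ITT vs. naive per-protocol when adherence is confounded.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed baseline frailty: frailer patients have worse outcomes AND are
# less likely to adhere to the assigned protocol.
frailty = rng.normal(0.0, 1.0, n)

# Randomized assignment to treatment (1) or control (0).
assigned = rng.integers(0, 2, n)

# Adherence probability falls as frailty rises (an assumed relationship).
adherence_prob = 1.0 / (1.0 + np.exp(frailty))
adhered = rng.random(n) < adherence_prob

# Outcome: an assumed true benefit of 5 units, received only by treated patients
# who adhere; frailty lowers the outcome regardless of treatment.
true_effect = 5.0
outcome = 50.0 - 3.0 * frailty + true_effect * (assigned * adhered) + rng.normal(0.0, 2.0, n)

# Intention-to-treat: compare groups as randomized (comparison preserved by
# randomization, but the effect is diluted by non-adherence).
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Naive per-protocol: adherent treated patients vs. all controls.
# This mixes the treatment effect with the better prognosis of adherers.
naive_pp = outcome[(assigned == 1) & adhered].mean() - outcome[assigned == 0].mean()

print(f"true effect under full adherence: {true_effect:.2f}")
print(f"intention-to-treat estimate:      {itt:.2f}")
print(f"naive per-protocol estimate:      {naive_pp:.2f}")

Under these assumptions, the intention-to-treat estimate is diluted toward zero by non-adherence, while the naive per-protocol estimate overstates the true effect because adherers are healthier at baseline; that gap is the "inherent bias" referred to above, and it is what more careful per-protocol-effect methods attempt to adjust away.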
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.