Spring 2017 - Current Topic
FINDING THE GOOD IN GOOD SCIENCE
Note: This document expresses Dr. Jenke’s personal views and opinions and is not a position established and supported by Triad Scientific Solutions, LLC. This document is not professional advice and does not constitute a professional service provided by either Dr. Jenke or Triad Scientific Solutions.
​
When members of the E&L community gather to develop standards, guidelines and best demonstrated practice recommendations, there are three principles they should obey:
- The standards must be based on “good science”,
- The standards must be effective and efficient, and
- The standards must fit every conceivable circumstance well.
It is the inability to achieve these principles that makes the generation of standards, guidelines and recommendations so frustratingly challenging.
Consider the last principle, for example. It is intuitively obvious in a field as diverse as pharmaceuticals that this principle is impossible to achieve, as a rigorous standard (which is a specified set of tests coupled to a specified set of acceptable test results) clearly cannot fit all the diverse circumstances equally well. It is the same problem as trying to design a glove that fits every human being. If the underlying purpose of the glove is “to keep one’s hands warm”, then a standardized glove can be designed that will address this requirement, to some degree, for most people. However, because the glove must keep everybody’s hands warm, it is logical that there will be design trade-offs, which means that while the glove keeps everyone’s hands warm, it does not keep everyone’s hands as warm as they would personally like. Furthermore, there may be other trade-offs, such as “these gloves are not very sexy”, “these gloves do not match my coat” or “these gloves make my hands itch”.
While the challenges in making standards that are generally applicable in the greatest number of circumstances are considerable, this is not the point that I want to address in this discussion. Rather, I want to address the challenges associated with good science.
Good science suffers from the same problem as designing a standardized glove. As good scientists, we learned and we understand that there are very few universal scientific truths; rather, a scientific truth is a truth only under the rigorously defined set of circumstances upon which it is based. When we perform an experiment and draw a conclusion from that experiment, we understand the conclusion is only perfectly valid for the set of defined experimental circumstances we started out with. Extension of that same conclusion to other circumstances involves a certain measure of risk, specifically the risk that in changing the circumstances we have invalidated (knowingly or unknowingly) some fundamental principle that defines the applicability of our conclusions. Thus, we understand that there is an inherent trade-off when we make scientific generalizations and put them into standards, guidances and recommendations. That is, we sacrifice some of the good in good science for the sake of providing a direction that is generally right in the greatest number of circumstances.
The challenge we face as practitioners of good science is not in recognizing good science per se but in recognizing the boundaries that differentiate between good science properly applied and good science improperly applied. When we are tempted to use a standard, or leverage a “rule of thumb”, or “do this because everybody else is doing it”, as good scientists we must ask ourselves “am I taking a good idea in certain circumstances and applying it to the wrong circumstances?”. If the answer is yes, then surely this is as bad as using “bad” science in the first place.
Let me illustrate this with an example. I use this as an example not because it is a particularly bad practice but because it effectively illustrates my point. The following, taken from the PQRI OINDP Best Practice recommendations, is well known and commonly applied in the E&L community.
- The Working Group recommends that analytical uncertainty be evaluated in order to establish a Final AET for any technique/method used for detecting and identifying unknown extractables/leachables.
- The Working Group proposes and recommends that analytical uncertainty in the estimated AET be defined as one (1) standard deviation in an appropriately constituted and acquired Response Factor database OR a factor of 50% of the estimated AET, whichever is greater.
The question I would ask you to consider is “where is the good science and the not so good science in these recommendations?”
Here is my answer. It is well known that response factors will vary among the universe of compounds that could be extractables and leachables. Thus, it is good science that a general concept such as the AET, which presumably is applicable to all possible extractables/leachables, take this variation into account. Furthermore, we all understand that basing actions on relevant and sufficient data is the cornerstone of good science, and thus that the requirement to consider “an appropriately constituted and acquired Response Factor database” is a requirement to do good science. However, it must be obvious that the direction to universally “use a factor of 50%” is not necessarily good science. While the derivation of the 50% is itself good science, as it was based on a response factor database (which is somewhat small in the context of the databases available today), it is obvious that the 50% is only relevant for the compounds in that database and the analytical method with which the data was generated. Universal and unquestioned application of the factor of 2 rule to compounds that were not in the original database and to analytical methods other than the method used to generate the data is not the best science; rather, it is poor science, not because the science itself is bad but because the good science aspects are being applied out of context.
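To make the arithmetic concrete, here is a minimal sketch of one literal reading of the recommendation quoted above. The function name `final_aet` and the response-factor values are hypothetical, and practitioners differ on exactly how the uncertainty should be applied, which is rather the point of this discussion.

```python
# Illustrative sketch only -- not a definitive implementation of the
# PQRI recommendation. One literal reading: analytical uncertainty is
# one relative standard deviation (RSD) of the response-factor
# database, or 50% of the estimated AET, whichever is greater.
from statistics import mean, stdev

def final_aet(estimated_aet, response_factors=None):
    """Adjust an estimated AET (any concentration unit) downward
    for analytical uncertainty."""
    if response_factors:
        # RSD of the response-factor database in hand
        rsd = stdev(response_factors) / mean(response_factors)
    else:
        rsd = 0.0
    uncertainty = max(rsd, 0.50)  # "whichever is greater"
    # Note: this literal form breaks down as the RSD approaches 100%,
    # which underscores that the right adjustment depends on the
    # database and method actually used.
    return estimated_aet * (1.0 - uncertainty)

# With no database, the default 50% applies: the "factor of 2"
print(final_aet(1.0))  # 0.5
```

Observe that the default branch simply halves the estimated AET, which is exactly the “factor of 2” discussed above; a database whose response factors scatter more widely than 50% RSD would drive the Final AET lower still.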
To a good scientist, arguments such as “it is better than nothing” or “everybody else is doing it” are inexcusable. Certainly, the idea that “it is better than nothing” has to be examined objectively and harshly; the improper application of science is not guaranteed to be better than doing nothing, because it is not the case that misapplied science always makes things better. In fact, history is littered with examples of bad outcomes derived from applying good science incorrectly.
Listen, nobody said doing good science was easy. We understand that part of the driving force for recommending that the factor of 2 be universally applied was that, at the time, few people could access a response factor database. Thus, it was nearly impossible to practice the good science required in the recommendation, and people, rather than do nothing, gravitated to the other part of the recommendation. However, today it is virtually impossible to find a reputable E&L laboratory that is not eager to talk about its database. Thus, in this case, our ability to do good science has finally caught up with our responsibility to do good science. It is proper that we accept that responsibility and be held accountable for meeting it.
This is true not only for adjusting the AET for analytical uncertainty but also in numerous other places where our current capabilities enable us to practice, and preach, a higher degree of good science than has ever been possible. Currently applied recommendations, standards, guidelines and practices must be adjusted, as appropriate, to leverage this new and higher degree of good science, and new recommendations, standards and guidelines must be drafted to reflect it. We aspire to better science because we are capable of better science. More importantly, if we are going to talk the talk, we had best start walking the walk.
​
TRIAD FOURTH YEAR IN REVIEW
​
January 2021
When one works for one company for many years, as I did, one can develop a world view that is rather focused and not particularly worldly at all. It is remarkable how quickly this becomes apparent when one hangs out one’s own shingle and takes on clients from all corners of the E&L world.
To put this in context, I remember the parable of the blind men and the elephant. In this parable, several blind men are asked to describe an elephant based on the body part that they can touch. One blind man, touching the elephant’s trunk, exclaimed that the elephant is like a thick snake. Another, touching the elephant’s tail, described the elephant as a rope. A third blind man, who touched the elephant’s ear, suggested that the elephant was like a fan. And so it went, with each blind man in turn touching a different body part and therefore describing the elephant in a different manner.
Ultimately, none of the blind men described the elephant correctly. And yet not one of them was wrong; their individual viewpoints were simply too narrow for them to visualize “the bigger picture”.
I looked back at what I wrote in the review of Triad’s third year and found the text to be a little gloomier than I think I intended it to be when I noted how hard it is to drive towards consensus on issues fundamental to the technical but practical discipline we collectively call E&L. Nevertheless, another year’s experience has done little to change the essence of my opinion on this subject. I do not see the E&L community of practice getting closer to consensus. Yes, it is true that some issues have been resolved (I think). For example, when manufacturing equipment first appeared on the E&L radar, driven by the emergence of Single-use Systems (SUS), people who suggested that “maybe downstream processes clear the process stream of extractables and the risk is not as great as one might think” were patted on the head and told to “come back when you have some data”. Well, they went back to the lab and to their processes, and they got the data. Equally importantly, they published the data so that everyone had access to it and could review it. So today the preponderance of available data is arguably compelling enough that the concept of “it truly is not as bad as we thought it could be” can perhaps be translated into actionable guidance and required/recommended practice.
Here I am going to use another analogy. I find progress in the science and practice of E&L to be like the game “Whack-A-Mole”. No sooner do you whack one mole back into its hole than two other moles pop out of two other holes. The same thing is true in E&L: “solve” one problem and two others jump in to take its place.
So, what is it that makes it so hard to reach a consensus? I could offer any number of causes, but there is one I would like to specifically address: the lack of a common foundation of knowledge. Let me pose a scenario. Three E&L scientists walk into a bar. Scientists A and B went to the same graduate school and have worked in the same Big Pharma company since they graduated. They routinely talk shop and share studies, data and experiences. Scientist C has the same educational background but attended a different university. After working a few years as an “LC/MS jockey”, Scientist C is now the E&L SME at a start-up going into Phase 3 clinicals with its first candidate molecule. The scientists, who know each other from meetings, workshops and the like, start to talk about the selection of simulating extraction solvents. After a period of animated debate, two of the three scientists come to an agreement in principle while the third is convinced the other two haven’t a clue in terms of what they think they know. So, let me ask you: which of the three scientists are the two that agree?
This is not a trick question, and logic suggests that the most likely answer is that Scientists A and B have come to an agreement. Why? Because they share the same knowledge, the same background and the same experience!
You see, if we do not have a common understanding, are we not like the blind men trying to describe the elephant?
Interesting point, Dennis, but so what? We can’t make all E&L scientists go to the same school, work for the same company, or share their data and experiences.
Of course, that is true. But we can do the next best thing. It’s called publication. It’s what scientists do. In fact, I would argue it is one of the responsibilities that come with the title.
Let’s be fair, it is a burden for industrial and regulatory scientists to publish. You don’t have to tell me that. I have always thought it a shame that there are no academic organizations with an active research program in pharma E&L, because you can be darn sure they would publish. The closest I can come to an academic research group in pharma E&L is the team at the National Institute for Bioprocessing Research and Training (NIBRT), and if you follow E&L as applied to manufacturing you know how well published they are.
But I digress. It is through publication that the community of practice gains a common foundation of information, data and insight. And it is on that foundation that consensus will be built, and from that consensus that guidance will be developed.
So, let’s get to writing! I look forward to reading about your experiments, reviewing your data, debating your conclusions and moving our profession forward. If there is anything that I can do to help, you know how to get a hold of me.
Considering the small portion of the E&L world that is Triad Scientific Solutions, I again want to thank the growing number of past and current clients who have come to me with their issues, challenged me with their projects, and trusted me with their products, and I celebrate the successes that we have had. I apologize that there is not always an easy and obvious answer and that the solution is not always simple, but I note that there has always been, and there will always be, a path forward. I look forward to new and ongoing collaborations and to working together making continued progress towards the worthy goal of sustaining and saving lives. To potential future clients I would say that “help is only a phone call away”, although to be perfectly honest I answer e-mails much more quickly than I answer the phone. If you seek a partner whose commitment to the patient is as strong as your own and who sees “good science practically applied” as two sides of the same coin, then I do hope you will contact me.
To my friends, colleagues and all who continue to selflessly give their time and talent in service to our profession and in the pursuit of “the greater good”, I wish you boundless energy, uncommon insight, the wisdom of Solomon, an infinite budget, a thick skin, and continued success.
With humility, regards, respect, and gratitude, and yours in science and in practice,
Dennis Jenke
Chief Executive Scientist
Triad Scientific Solutions, LLC
dennisjenke@triadscientificsolutions.com