

The content of this page may not be copied or used in any manner without permission from Triad Scientific Solutions, LLC.

January, 2021 – Current Topic

Note:  This document expresses Dr. Jenke’s personal views and opinions and is not a position established and supported by Triad Scientific Solutions, LLC.  This document is not professional advice and does not constitute a professional service provided by either Dr. Jenke or Triad Scientific Solutions.      

Meeting Our Obligations as E&L Analytical Chemists

The chemical assessment of pharmaceutical packaging systems, manufacturing components and medical devices involves two major activities: generation of information (data) and interpretation of that information.  When chemical assessment is used to address the safe use expectations for pharmaceutical and medical items, it is the responsibility of analytical chemists to generate the data and the responsibility of toxicologists to perform the interpretation.  Chemical assessment is a team sport, not an individual one, and it is true that the greater the collaboration between chemist and toxicologist, the more efficient and effective the process.  Nevertheless, analytical chemistry and toxicology are two different disciplines, and it is the rare individual indeed who excels in both worlds and is therefore able to perform both functions.

As I am poorly equipped by knowledge and experience to speak to the toxicological aspects of chemical assessment, let me instead turn my attention to the analytical chemistry aspects. The analytical chemist has four responsibilities, whether it is testing an extract for extractables, or a drug product, manufacturing process stream, or medical device for leachables:

  1. The analytical chemist must find every relevant substance, extractable or leachable, in a sample that has a reasonable chance of adversely affecting patient health should the patient be exposed to the substance during the prescribed clinical use of a medical product.

  2. The analytical chemist must secure, with a reasonable degree of certainty, the exact and correct identity of each relevant substance that is found.

  3. The analytical chemist must determine, with a reasonable degree of accuracy, the concentration of each relevant substance that is found. 

  4. The analytical chemist must report the findings in a manner that facilitates the findings’ toxicological interpretation.


Focusing on the first three of these responsibilities, I note the common theme of “reasonable”.  In the ideal world of infinite time, infinite resources, infinite wisdom and infinite capabilities, the concept of reasonableness would be unnecessary.  Analytical methods would be powerful enough that no substance would escape detection.  All analytical responses would be information-rich and readily interpretable, all analytical chemists would be infinitely knowledgeable and insightful, and all possible substances would be readily available for use as reference standards, resulting in confirmed identities for all reported substances.  Each substance’s response would be comparable on a per-unit-concentration basis and linearly relatable to concentration over a wide dynamic range, resulting in reported concentrations that are both accurate and relatively easy to secure.

But we do not live in an ideal world.  In the real world, substances that must be found elude detection even when state of the art analytical processes are performed by highly qualified analytical experts – particularly in the complex samples that one encounters in extractables or leachables studies.  In the real world, analytical responses are confounding and confusing, presenting even the most accomplished analytical sleuths with unsolvable puzzles.  And in the real world, seemingly with little rhyme or reason, analytical responses are “all over the map”, relegating simplicity and accuracy to opposite sides of the coin.  In the real world, there is never enough money, precious little time and limited resources.


So, we do our best under the circumstances that we are obliged to endure and the realities we are forced to accept.  We use multiple orthogonal and complementary analytical methods, often employing multiple detectors, to cast as broad a net as possible.  We harness the power of artificial intelligence (broadly defined) so that the computer can “see” things, process inferences and draw conclusions that we can’t.  We use tools such as the analytical evaluation threshold (AET) to place limits on how low we have to go and still be able to say “I think we got them all”.  Furthermore, we adjust the AET so that it remains protective even in the face of analytical complications such as variable response factors.  Since we cannot secure definitive and correct identities in all cases, we use identification classes that at least communicate the level of confidence we have in the identities we report. To balance the unreconcilable and unrelenting demons of limited resources and accuracy, we make assumptions, use generalizations, employ approximations and “do the best that we can”.
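The AET adjustment mentioned above can be sketched numerically.  The following Python sketch uses entirely hypothetical numbers and one of several published conventions (lowering the estimated AET by an uncertainty factor derived from the spread of relative response factors); it is an illustration of the idea, not a prescribed calculation:

```python
import statistics

def estimated_aet_ug_per_g(threshold_ug_per_day, doses_per_day, item_mass_g):
    """Estimated AET: a dose-based safety threshold converted to a
    concentration in the tested item (simplified form; hypothetical
    inputs and units for illustration only)."""
    return threshold_ug_per_day / (doses_per_day * item_mass_g)

def adjusted_aet(est_aet, response_factors):
    """Lower the estimated AET by an uncertainty factor (UF) reflecting
    response-factor variability, so that weakly responding substances
    are still captured.  Here UF = mean RF / (mean RF - 1 SD), one
    common convention among several; a sketch, not a prescription."""
    mean_rf = statistics.mean(response_factors)
    sd_rf = statistics.stdev(response_factors)
    uf = mean_rf / (mean_rf - sd_rf)   # UF > 1 whenever RFs vary
    return est_aet / uf

# Hypothetical example: 1.5 ug/day threshold, 1 dose/day, 10 g item
est = estimated_aet_ug_per_g(1.5, 1, 10)   # 0.15 ug/g
rfs = [0.6, 0.8, 1.0, 1.2, 1.4]            # hypothetical relative RFs
print(round(est, 3), round(adjusted_aet(est, rfs), 3))
```

With these hypothetical response factors, the adjusted AET lands roughly 30% below the estimated AET, which is the sense in which the adjustment "remains protective" against weak responders.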


And we sleep well at night, secure in the knowledge that we have discharged our responsibilities to the best of our abilities and with the highest attainable regard for the critical role that we play.


That is, until our best is no longer good enough.  Although such failures are uncommon, the analytical process still has gaps, and potential bad actors sometimes fall through the cracks.  The adjusted AETs may still be too high, leaving potential bad actors hiding under the line.  Too frequently, reported identities are secured with too little confidence and with procedures that place convenience above science.  Too often, so-called “semi-quantitative” reported concentrations would be more accurate if one secured them by throwing darts at a dartboard, because they are based on largely false premises and overly simplistic approximations.


We understand, recognize and acknowledge these shortcomings because they are based on science, or, more accurately, shortcomings in science.  We understand that in an ideal world, the shortcomings can largely be overcome so that gaps can be closed, “lines drawn in the sand” can be made more solid and less dashed, and “good science” can persevere.


And although we do not use the real world as an excuse, we nevertheless understand that realities will always limit our ability to achieve the ideal state.


The minute that we recognize, acknowledge and accept the shortcomings of our work is the same minute that we dedicate efforts to improving that work.  But for those improvements to become achievable, we must quantify what we mean by “reasonable”; that is, we must establish a reasonable goal against which success can be measured.  To do so, we must remind ourselves what the word reasonable means.  As in so many cases in the English language, a word is not defined in absolute terms but rather in the context of other words that convey the same or a similar concept.  So, when I look up “reasonable” in the various dictionaries, I see words like moderate, fair, not excessive or extreme, achievable with an acceptable level of effort, rational, sensible, well-founded, just, ordinary or usual in the circumstances.


These words help me with context, but they do not provide specifics.  For example, it is reasonable to say “I want the AET to be adjusted low enough so that it is sufficiently protective”.  But the question is “what is the measure of being sufficiently protective?”  If you tell me “the AET must be sufficiently low that it captures 95% of all extractables regardless of their individual response factors”, then at least I now have a goal I can work towards, and I can present an objective argument when I think I have achieved the goal.  If you tell me “a reported concentration is considered to be adequately semi-quantitative when the reported value is within a factor of 2 of the true value on either side (i.e., 50% - 200%)”, then I can adapt my practices to achieve this objective.  But until I know what the finish line looks like, I can never know when I have finished the race; indeed, I do not know whether I can finish the race at all, or whether I even want to enter it in the first place.
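Goals stated this concretely can be checked mechanically.  The sketch below, using entirely hypothetical numbers, tests the two example criteria from the paragraph above: an AET scaling factor chosen so that 95% of simulated response factors would still be captured, and the factor-of-2 (50% - 200%) accuracy check on a reported concentration:

```python
def aet_scaling_for_capture(response_factors, capture_fraction=0.95):
    """Return the factor by which the estimated AET must be lowered so
    that `capture_fraction` of analytes (ranked by relative response
    factor) would still produce a response above the threshold.  An
    analyte responding with RF = r needs the AET lowered by 1/r."""
    rfs = sorted(response_factors)
    # index of the weakest responder we still insist on capturing
    idx = int((1.0 - capture_fraction) * len(rfs))
    weakest_captured = rfs[idx]
    return 1.0 / weakest_captured

def is_semi_quantitative(reported, true_value):
    """Factor-of-2 criterion: reported value within 50% - 200% of true."""
    return 0.5 * true_value <= reported <= 2.0 * true_value

# Hypothetical relative response factors for ten extractables
rfs = [0.3, 0.5, 0.7, 0.9, 1.0, 1.1, 1.3, 1.5, 1.8, 2.2]
print(aet_scaling_for_capture(rfs))   # divide the estimated AET by this
print(is_semi_quantitative(12.0, 10.0))
print(is_semi_quantitative(25.0, 10.0))
```

The point of the sketch is the one made in the text: once the finish line is quantified, "have we achieved it?" becomes an objective question rather than a matter of opinion.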


Let us stop putting the cart before the horse, trying to “fix” these problems before we even know what the fix looks like.  Rather, if we are going to jump into action, let the action that we jump into be an action of evaluating the problem and establishing a reasonable outcome and not just dictating “this is what you should (or must) do”.  Let us, as a community of practice and not just as individual practitioners, take the time and effort necessary to define what constitutes reasonable outcomes, establish the appropriate and reasonable metrics and specifications associated with those outcomes, and then relentlessly pursue the outcomes until they are achieved.  Let us use the finish line to help us establish a reasonable race whose end can be achieved and whose completion can be readily verified.


In this way, we ensure that the improvements we devise and adopt will be achievable, will produce the desired outcome (actually fixing the problem without creating more problems), will be acceptable to all stakeholders and thus will be embraced and adopted by all stakeholders.                      
