
 
The entire content of this archive cannot be copied or used in any manner without express written permission from Triad Scientific Solutions, LLC.

 

January, 2021 – Current Topic;

Meeting Our Obligations as E&L Analytical Chemists

 

 

Note:  This document expresses Dr. Jenke’s personal views and opinions and is not a position established and supported by Triad Scientific Solutions, LLC.  This document is not professional advice and does not constitute a professional service provided by either Dr. Jenke or Triad Scientific Solutions.     

 

The chemical assessment of pharmaceutical packaging systems, manufacturing components and medical devices involves two major activities: generation of information (data) and interpretation of that information.  When chemical assessment is used to address the safe use expectations for pharmaceutical and medical items, it is the responsibility of analytical chemists to generate the data and the responsibility of toxicologists to perform the interpretation.  Although chemical assessment is a team sport rather than an individual one (the greater the collaboration between chemist and toxicologist, the more efficient and effective the process), the fact remains that analytical chemistry and toxicology are two different disciplines, and it is the rare individual indeed who excels in both worlds and is therefore able to perform both functions.

 

As I am poorly equipped by knowledge and experience to speak to the toxicological aspects of chemical assessment, let me instead turn my attention to the analytical chemistry aspects. The analytical chemist has four responsibilities, whether it is testing an extract for extractables, or a drug product, manufacturing process stream, or medical device for leachables:

 

  • The analytical chemist must find every relevant substance, extractable or leachable, in a sample that has a reasonable chance of adversely affecting patient health should the patient be exposed to the substance during the prescribed clinical use of a medical product.

  • The analytical chemist must secure, with a reasonable degree of certainty, the exact and correct identity of each relevant substance that is found.

  • The analytical chemist must determine, with a reasonable degree of accuracy, the concentration of each relevant substance that is found.

  • The analytical chemist must report the findings in a manner that facilitates the findings’ toxicological interpretation.

 

Focusing on the first three of these responsibilities, I note the common theme of “reasonable”.  In the ideal world of infinite time, infinite resources, infinite wisdom and infinite capabilities, the concept of reasonableness would be unnecessary.  Analytical methods would be powerful enough that no substance would escape detection.  All analytical responses would be information-rich and readily interpretable, all analytical chemists would be infinitely knowledgeable and insightful, and all possible substances would be readily available for use as reference standards, resulting in confirmed identities for all reported substances.  Each substance’s response would be comparable, on a per unit concentration basis, and would be linearly relatable to concentration over a wide dynamic range, resulting in accurate and relatively easy to secure reported concentrations. 

But we do not live in an ideal world.  In the real world, substances that must be found elude detection even when state-of-the-art analytical processes are performed by highly qualified analytical experts – particularly in the complex samples that one encounters in extractables or leachables studies.  In the real world, analytical responses are confounding and confusing, presenting even the most accomplished analytical sleuths with unsolvable puzzles.  And in the real world, seemingly with little rhyme or reason, analytical responses are “all over the map”, relegating simplicity and accuracy to opposite sides of the coin.  In the real world, there is never enough money, precious little time and limited resources.

 

 So, we do our best under the circumstances that we are obliged to endure and the realities we are forced to accept.  We use multiple orthogonal and complementary analytical methods, often employing multiple detectors, to cast as broad a net as possible.  We harness the power of artificial intelligence (broadly defined) so that the computer can “see” things, process inferences and draw conclusions that we can’t.  We use tools such as the analytical evaluation threshold (AET) to place limits on how low we have to go and still be able to say “I think we got them all”.  Furthermore, we adjust the AET so that it remains protective even in the face of analytical complications such as variable response factors.  Since we cannot secure definitive and correct identities in all cases, we use identification classes that at least communicate the level of confidence we have in the identities we report. To balance the unreconcilable and unrelenting demons of limited resources and accuracy, we make assumptions, use generalizations, employ approximations and “do the best that we can”.
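The AET logic described above can be sketched in a few lines of code.  This is a hypothetical illustration only: the threshold, dosing parameters and the conventional factor-of-2 adjustment below are invented or generic placeholders, not values from any particular guideline or study.

```python
# Hypothetical sketch of deriving an AET from a dose-based safety threshold
# and then adjusting it downward to stay protective despite variable
# analytical response factors. All numeric values are illustrative only.

def estimated_aet_ug_per_ml(threshold_ug_per_day: float,
                            doses_per_day: float,
                            dose_volume_ml: float) -> float:
    """Convert a dose-based threshold (ug/day) into a solution
    concentration (ug/mL) for the extract or product being tested."""
    return threshold_ug_per_day / (doses_per_day * dose_volume_ml)

def adjusted_aet(estimated: float, uncertainty_factor: float = 2.0) -> float:
    """Lower the estimated AET so weak responders are not missed; the
    conventional factor of 2 reflects ~50% analytical uncertainty."""
    return estimated / uncertainty_factor

# Invented example: a 1.5 ug/day threshold, one 10 mL dose per day.
est = estimated_aet_ug_per_ml(1.5, 1.0, 10.0)   # 0.15 ug/mL
final = adjusted_aet(est)                        # 0.075 ug/mL
```

The division by the uncertainty factor is what makes the threshold protective "in the face of analytical complications such as variable response factors": a compound responding at half the assumed sensitivity is still flagged.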

 

 

And we sleep well at night, secure in the knowledge that we have discharged our responsibilities to the best of our abilities and with the highest attainable regard for the critical role that we play.

 

 That is, until our best is no longer good enough.  Although uncommon, the analytical process still has gaps, and potential bad actors sometimes fall through the cracks.  The adjusted AETs may still be too high, and potential bad actors can be hiding under the line.  Too frequently, reported identities are secured with too little confidence and with procedures that place convenience above science.  Too often, so-called “semi-quantitative” reported concentrations would be more accurate if one secured them by throwing darts at a dartboard, as they are based on largely false premises and overly simplistic approximations.

 

 We understand, recognize, and acknowledge these shortcomings because they are based on science, or, more accurately, shortcomings in science.  We understand that in an ideal world, the shortcomings can largely be overcome so that gaps can be closed, “lines drawn in the sand” can be made more solid and less dashed, and “good science” can persevere.

 

 And although we do not use the real world as an excuse, we nevertheless understand that realities will always limit our ability to achieve the ideal state.

 

 The minute that we recognize, acknowledge and accept the shortcomings of our work is the same minute that we dedicate efforts to improving that work.  But for those improvements to become achievable, we must quantify what we mean by “reasonable”; that is, we must establish a reasonable goal against which success can be measured.  To do so, we must remind ourselves what the word reasonable means.  As in so many cases in the English language, a word is not defined in absolute terms but rather in the context of other words that convey the same or a similar concept.  So, when I look up “reasonable” in the various dictionaries, I see words like moderate, fair, not excessive or extreme, achievable with an acceptable level of effort, rational, sensible, well-founded, just, ordinary or usual in the circumstances.

 

 These words help me with context but they do not provide specifics.  For example, it is reasonable to say “I want the AET to be adjusted low enough so that it is sufficiently protective”.  But the question is “what is the measure of being sufficiently protective?”  For example, if you tell me “the AET must be sufficiently low that it captures 95% of all extractables regardless of their individual response factors” then at least I have a goal I can work towards, and I can present an objective argument when I think I have achieved the goal.  If you tell me “a reported concentration is considered to be adequately semi-quantitative when the reported value is within a factor of 2 of the true value on either side (i.e., 50% - 200%)” then I can adapt my practices to achieve this objective.  But until I know what the finish line looks like, I can never know when I have finished the race, whether I can finish it at all, or whether I even want to enter it in the first place.
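Both of the example goals above are directly testable once they are stated numerically.  The sketch below, using invented relative response factor (RRF) values, shows how one might check them: what AET adjustment is needed to capture a stated fraction of compounds, and whether a reported concentration falls within a factor of 2 of the true value.

```python
# Illustrative only: RRF values are invented, and with a small database the
# "capture 95%" criterion reduces to covering the worst (lowest) responder.

def required_adjustment(rrfs, capture_fraction=0.95):
    """Adjustment factor to divide the AET by so that compounds down to the
    low tail of the RRF distribution would still exceed the adjusted AET.
    A compound with RRF < 1 under-responds by a factor of 1/RRF."""
    ranked = sorted(rrfs)
    cut = int((1.0 - capture_fraction) * len(ranked))  # index into low tail
    return 1.0 / ranked[cut]

def within_factor_of_two(reported, true_value):
    """The 'adequately semi-quantitative' criterion: 50% - 200% of truth."""
    return 0.5 <= reported / true_value <= 2.0

rrfs = [0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.0, 1.1, 1.3, 2.0]
factor = required_adjustment(rrfs)   # here: 1/0.3, the worst responder
```

The point is not the specific numbers but that, once a finish line like "95% capture" or "within a factor of 2" is declared, success becomes an objective computation rather than a judgment call.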

 

 Let us stop putting the cart before the horse, trying to “fix” these problems before we even know what the fix looks like.  Rather, if we are going to jump into action, let the action that we jump into be an action of evaluating the problem and establishing a reasonable outcome and not just dictating “this is what you should (or must) do”.  Let us, as a community of practice and not just as individual practitioners, take the time and effort necessary to define what constitutes reasonable outcomes, establish the appropriate and reasonable metrics and specifications associated with those outcomes, and then relentlessly pursue the outcomes until they are achieved.  Let us use the finish line to help us establish a reasonable race whose end can be achieved and whose completion can be readily verified.

 

 In this way, we ensure that the improvements we devise and adopt will be achievable, will produce the desired outcome (actually fixing the problem without creating more problems), will be acceptable to all stakeholders and thus will be embraced and adopted by all stakeholders.

 

 

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX   

 

 


January, 2019 – Current Topic;

Thoughts on What Constitutes Good E&L Science in 2019

 

 

Note:  This document expresses Dr. Jenke’s personal views and opinions and is not a position established and supported by Triad Scientific Solutions, LLC.  This document is not professional advice and does not constitute a professional service provided by either Dr. Jenke or Triad Scientific Solutions.

 

Although it seems that the world of E&L is in constant and continuing flux, I believe that we have reached the point where our community of science can adopt standard practices that once seemed to be more of a dream than a reality, specifically in the area of analytical screening for organic extractables or leachables.  

It was not too long ago that objective observers of E&L practices would note three areas where screening extracts for extractables (or drug products for leachables) was sub-optimal:

 

  1. Too often, screening strategies failed to provide useful information about extractables/leachables either because:

    • The strategies failed to respond to all extractables (or leachables), or

    • The strategies failed to produce information from which the extractable’s identity could be inferred or the extractable’s concentration could be estimated.

  2. The identifications secured by screening strategies were predominantly tentative.

  3. The concentrations secured by screening strategies were more estimated and less quantitative. 

 

We are now at a point where we are capable of better and in fact where we are expected to be better.  In this “new reality”:

 

  1. It will be rare that screening strategies fail to provide useful information about extractables/leachables either because:

    • The strategies will respond to the vast majority of the most commonly encountered extractables (or leachables), and

    • The strategies will almost always produce information from which the extractable’s identity can be inferred or the extractable’s concentration can be estimated.

  2. The identifications secured by screening strategies will be predominantly confident and often-times confirmed.

  3. The concentrations secured by screening strategies will be predominantly semi-quantitative and, increasingly, fully quantitative. 

 

In considering the details of the transformation from past to future practice, it is appropriate to consider what served as the transformation’s catalyst.  Yes, there have been advances in analytical practices, better individual methods, and better use of orthogonal and complementary methods to “fill in the gaps” that exist in even the best individual methods.  Yes, instrument vendors have produced better, more powerful instruments with greater information content, greater sensitivity and greater selectivity.  But these are not the catalyst; they are merely enablers.  The true catalyst has been … experience.  Each time we got a “bad TOC reconciliation” (for example) and used alternate methods to find the missing extractables, we went back to the screening approach and “filled in the gap”.  Each time we encountered an unidentified substance, secured an identity and then collected the information necessary to support or confirm the identity, we added one more compound to the “list of confident or confirmed IDs” and shortened the list of “tentatively identified (or unidentified) substances”.  Each time we injected an authentic standard and obtained a relative response factor, we transitioned from a “concentration estimate” to a more accurate and precise “semi-quantitative concentration”.  The more studies we did and the more times we took these actions, the closer we came to being able to embrace the future state.

However, the mere generation of this information is an inefficient enabler of the “new reality”.  The true transition from past to future is possible only when the information is collected, collated and archived in a format that facilitates its use in routine analytical practice. This format is termed, for lack of a better word, a database.
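As a minimal sketch of what such a database enables in routine practice: when a compound carries an empirically determined relative response factor (RRF), a peak can be reported semi-quantitatively; otherwise one falls back to assuming the surrogate standard's response (RRF = 1), i.e., a concentration estimate.  The compound names below are common extractables, but the RRF values and identification-basis labels are invented placeholders, not real database entries.

```python
# Toy E&L "database": entries pair an empirically determined relative
# response factor (RRF) with the basis of the identification.
# All values here are invented for illustration.
DATABASE = {
    "Irganox 1010": {"rrf": 0.45, "id_basis": "confirmed (authentic standard)"},
    "erucamide":    {"rrf": 1.20, "id_basis": "confident (library + RT match)"},
}

def concentration(peak_area, surrogate_area, surrogate_conc, compound=None):
    """Report a concentration and its quality class: database RRF available
    -> semi-quantitative; no entry -> estimate assuming RRF = 1."""
    entry = DATABASE.get(compound)
    rrf = entry["rrf"] if entry else 1.0
    quality = "semi-quantitative" if entry else "estimated"
    conc = (peak_area / surrogate_area) * surrogate_conc / rrf
    return conc, quality
```

The lookup is trivial; the value lies entirely in the accumulated, credible content behind it, which is exactly why the database, and not the instrument, is the real measure of capability.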

Again, it was not too long ago that laboratories, both internal and external to traditional pharmaceutical companies, could establish their “degree of competence” based on whether they even possessed some sort of E&L “database”, regardless of its size, contents and form (how many of you remember the individual “cheat sheets” we used to have on IDs and response factors?).  Today, it is the size, the contents and the format of the database, both in terms of information content and information credibility, that are the measure of a “state of the science” analytical capability.

It stands to reason that the assessment of the impact of a leachable (or the potential impact of an extractable as a leachable) is “better” the more confident we are in the leachable’s experimentally determined identity and concentration.  Some organizations have the information and the tool(s) to provide more and better identities and concentrations with a higher degree of confidence.  As more studies supported by this information and these tools are performed and reported, what was once “nice to have” will become “necessary to have”.  This transition from “nice to have” to “necessary to have” will itself serve as a further catalyst for the development, population, proliferation and application of larger, more information-rich and more robust E&L databases.  And the industry we support, the drug products and medical devices we produce and the patients we serve will be better off because of it. 

   


 

 

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX   

 

 

January, 2018 – Current Topic;

Thoughts on Extractable Metals from Packaging

 

 

Note:  This document expresses Dr. Jenke’s personal views and opinions and is not a position established and supported by Triad Scientific Solutions, LLC.  This document is not professional advice and does not constitute a professional service provided by either Dr. Jenke or Triad Scientific Solutions.     

The safety implications of elemental impurities in finished drug products are a “hot topic” in the pharmaceutical community, and guidelines such as ICH Q3D and USP <232> provide direction on how to assess finished drug products for such impurities.  Although a drug product’s packaging system is noted as a potential source of elemental impurities, guidelines have not been established for assessing packaging systems, or their materials and components of construction, for their potential to contribute leached elements in general, and metals in particular, to packaged drug products. 

 

In considering the issue of extractable metals from packaging, the first necessary activity is to expand the playing field.  By this I mean that both ICH Q3D and USP <232> address only one aspect of the potential product impact of extracted metals: patient safety.  It has been well established that extracted metals can also impact the quality of a packaged drug product, affecting such product properties as potency, efficacy, stability and the ability to meet quality specifications.  While the permitted daily exposure (PDE) values provided in both Q3D and <232> adequately address potential patient safety issues, they are meaningless as limits or thresholds for potential quality effects.  And while the lists of elements in both Q3D and <232> reflect those elements of greatest toxicological concern, they are not comprehensive in terms of elements that could or do adversely affect product quality.  Thus, if one is to address the full risk that extracted metals represent, one must go beyond the guidance and insight provided by both Q3D and <232>.
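Because PDE values are daily-dose limits (µg/day), comparing them to a measured level in a product requires the product's daily intake.  The sketch below mirrors, in spirit, the kind of PDE-to-concentration conversion described in Q3D; the specific numbers are illustrative, not taken from any guideline table.

```python
# Illustrative conversion between a daily-dose PDE and a product-level
# concentration limit. Numeric inputs below are examples, not official values.

def permitted_concentration_ug_per_g(pde_ug_per_day: float,
                                     daily_intake_g: float) -> float:
    """Concentration at which consuming the full daily intake of product
    just reaches the PDE."""
    return pde_ug_per_day / daily_intake_g

def daily_exposure_ug(measured_conc_ug_per_g: float,
                      daily_intake_g: float) -> float:
    """Worst-case daily elemental exposure from a measured concentration."""
    return measured_conc_ug_per_g * daily_intake_g

# e.g., a hypothetical PDE of 5 ug/day for a product taken at 10 g/day:
limit = permitted_concentration_ug_per_g(5.0, 10.0)   # 0.5 ug/g
```

Note that no analogous arithmetic exists for quality effects: a concentration that is toxicologically acceptable can still be high enough to degrade potency or stability, which is the gap the paragraph above describes.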

 

At a high level, two circumstances must exist for a packaging system to be a relevant source of elemental impurities in a packaged drug product:

 

  1. Packaging systems must contain sources of elemental impurities in the first place, and

  2. Those elemental impurities that are present in the packaging must leach out of the packaging and into the drug product during system/product contact.

 

Considering these circumstances, a team of authors representing the Extractables and Leachables Safety Information Exchange (ELSIE) and International Pharmaceutical Aerosol Consortium on Regulation and Science (IPAC-RS) published a review of published extractable metals data for plastic and glass packaging systems and concluded that:

 

  1. Unless the elemental entities are parts of the materials themselves (for example, SiO2 in glass) or intentionally added to the materials (for example, metal stearates in polymers), their incidental amounts in the materials are generally low.

  2. When elemental entities are present in materials and systems, generally only a very small fraction of the total available amount of the entity can be leached under conditions that are relevant to packaged drug products.

 

These conclusions reinforce what I believe is a common opinion among E&L experts, which is that “there is generally a low risk of adverse product effects arising from metals leached from packaging systems” and thus that “monitoring or qualifying packaging systems with respect to extractable metals is, in many circumstances, unwarranted and unnecessary, especially if the packaged drug product is going to be assessed for elemental impurities anyway”.   While I would not necessarily disagree with this point, I would make two counterpoints.  Firstly, it is my opinion that while the dataset supporting this conclusion was comprehensive, it reflected only that information which had been published at the time the review was written.  Personally, I am not 100% convinced that the information contained in the article is sufficient to be the sole basis of such an impactful policy as “no extractable metals testing required for packaging”.  Perhaps a larger and more complete database of information will allow the scientific community to draw a conclusion, one way or the other, with respect to extractable metals testing of packaging.

 

The second point I would like to make is that “low risk is not no risk”.  In fact, certain elements in certain circumstances can leach from packaging and can have an undesirable impact on a key product attribute.  I do not know how to properly respond to individuals who ask “in the absence of testing, how are you going to reveal known or currently unknown and unanticipated product – leached metal interactions?”.  But I know this: no one wants to be the next case study that everybody is talking about on the E&L circuit in terms of “we do E&L testing to prevent this from happening”.

 

Let us imagine the situation where it has been established that extractable metals from packaging are such a low risk that packaging systems and/or their materials and components of construction do not have to be screened for extractable metals.  The question then comes up: how will this conclusion be captured in the “official” guidelines, guidances and standards?  For example, as it currently stands, USP <661.1> contains mandatory extractable metals testing for polymeric materials used in packaging.  Would it be proper for the USP, in either <661.1> or its companion document <1661>, to state that “extractable metals testing is not necessary”, either directly by making this statement or indirectly by removing extractable metals testing?  My opinion is that this would not be proper, because it is not <661.1> that establishes the need to consider metals leached from packaging in the context of elemental impurities.  Rather, it is Q3D and <232> that establish the packaging system as a potential source of elemental impurities and require that packaging systems be assessed to control elemental impurities.  Thus, if it were ever decided that the risk of extracted metals from packaging (or packaging materials and components of construction) is so low that testing packaging (or materials or components) for extractable metals is not required, the proper place to capture this point would be the very documents which raise the issue in the first place, meaning Q3D and <232>.  In such a circumstance, while it might be proper for both Q3D and <232> to mention packaging as a potential or theoretical source of elemental impurities, it would also be necessary for the same documents to note that the risk is low and thus that routine screening is not required.

 

Alternatively, let us imagine a situation where it has been established that the risk of extracted metals becoming elemental impurities is sufficiently high that packaging systems and/or their materials and components of construction must be screened for extractable metals.  In this case, four questions are relevant:

 

1.    What articles should be tested? (e.g., materials, components, and/or packaging),

2.    How should the articles be tested? (e.g., digestion or extraction and under what conditions),

3.    What elements should be targeted in the testing?  (e.g., “The Big Four”, the “entire periodic table” or something in-between),

4.    How should the results of the testing be reported and interpreted?  (e.g., against a specification limit or a reporting threshold, and at what level).

 

At times I think it would truly require the wisdom of Solomon to provide answers to these four questions that are (a) scientifically valid, (b) practically implementable and (c) acceptable to the many and varied stakeholders in this subject.  For those individuals and organizations who are trying to find that wisdom, I say good luck and Godspeed, and I thank them for their efforts.

 

 

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX   

 

 

January, 2017 – Current Topic;

Finding the Good in Good Science

 

 

Note:  This document expresses Dr. Jenke’s personal views and opinions and is not a position established and supported by Triad Scientific Solutions, LLC.  This document is not professional advice and does not constitute a professional service provided by either Dr. Jenke or Triad Scientific Solutions.

 

When members of the E&L community gather to develop standards, guidelines and best demonstrated practice recommendations, there are three principles they should obey:

 

  1. The standards must be based on “good science”,

  2. The standards must be effective and efficient, and

  3. The standards must fit every conceivable circumstance well.

 

It is the inability to achieve these principles that makes the generation of standards, guidelines and recommendations so frustratingly challenging.

 

Consider the last principle, for example.  It is intuitively obvious that in a field as diverse as pharmaceuticals this principle is impossible to achieve, as a rigorous standard (which is a specified set of tests coupled to a specified set of acceptable test results) cannot fit all the diverse circumstances equally well.  It is the same problem as trying to design a glove that fits every human being.  If the underlying purpose of the glove is “to keep one’s hands warm”, then a standardized glove can be designed that will address this requirement, to some degree, for most people.  However, because the glove must keep everybody’s hands warm, it is logical that there will be design tradeoffs, which will mean that while it keeps everyone’s hands warm, it does not keep everyone’s hands as warm as they would personally like.  Furthermore, there may be other trade-offs, such as “these gloves are not very sexy”, “these gloves do not match my coat” or “these gloves make my hands itch”.

 

While the challenges in making standards that are generally applicable in the greatest number of circumstances are considerable, this is not the point that I want to address in this discussion.  Rather, I want to address the challenges associated with good science.

 

Good science suffers from the same problem as designing a standardized glove.  As good scientists, we learned and we understand that there are very few universal scientific truths; rather, a scientific truth is a truth only under the rigorously defined set of circumstances upon which it is based.  When we perform an experiment and draw a conclusion from that experiment, we understand that the conclusion is only perfectly valid for the set of defined experimental circumstances we started out with.  Extension of that same conclusion to other circumstances involves a certain measure of risk, specifically the risk that in changing the circumstances we have invalidated (knowingly or unknowingly) some fundamental principle that defines the applicability of our conclusions.  Thus, we understand that there is an inherent trade-off when we make scientific generalizations and put them into standards, guidances and recommendations.  That is, we sacrifice some of the good in good science for the sake of providing a direction that is generally right in the greatest number of circumstances.

 

The challenge we face as practitioners of good science is not in recognizing good science per se but in recognizing the boundaries that differentiate good science properly applied from good science improperly applied.  When we are tempted to use a standard, leverage a “rule of thumb”, or “do this because everybody else is doing it”, as good scientists we must ask ourselves “am I taking an idea that is good in certain circumstances and applying it to the wrong circumstances?”.   If the answer is yes, then surely this is as bad as using “bad” science in the first place.

 

Let me illustrate this with an example.  I use it not because it is a particularly bad practice but because it effectively illustrates my point.  The following recommendations, taken from the PQRI OINDP Best Practice recommendations, are well known and commonly applied in the E&L community.

 

  1. The Working Group recommends that analytical uncertainty be evaluated in order to establish a Final AET for any technique/method used for detecting and identifying unknown extractables/leachables.

  2. The Working Group proposes and recommends that analytical uncertainty in the estimated AET be defined as one (1) standard deviation in an appropriately constituted and acquired Response Factor database OR a factor of 50% of the estimated AET, whichever is greater.

       

The question I would ask you to consider is “where is the good science and the not so good science in these recommendations?”

 

Here is my answer.  It is well known that response factors vary across the universe of compounds that could be extractables and leachables.   Thus, it is good science that a general concept such as the AET, which presumably applies to all possible extractables/leachables, take this variation into account.  Furthermore, we all understand that basing actions on relevant and sufficient data is the cornerstone of good science, and thus that the requirement to consider “an appropriately constituted and acquired Response Factor database” is a requirement to do good science.  However, it must be obvious that the direction to universally “use a factor of 50%” (that is, to halve the estimated AET, the familiar “factor of 2”) is not necessarily good science.  While the derivation of the 50% was itself good science, as it was based on a response factor database (albeit one that is small in the context of the databases available today), that 50% is only relevant for the compounds in that database and for the analytical method with which the data were generated.   Universal and unquestioned application of the factor of 2 to compounds that were not in the original database, and to analytical methods other than the one used to generate the data, is not the best science; rather, it is poor science, not because the science itself is bad but because good science is being applied out of context.  
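To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical numbers) of how a Final AET might be derived under the recommendation quoted earlier: the analytical uncertainty is taken as one relative standard deviation of a response factor database or 50%, whichever is greater, and the estimated AET is reduced by that fraction.  The function name, the example response factors, and the multiplicative form of the adjustment are my illustrative assumptions, not a prescribed calculation.

```python
import statistics

def final_aet(estimated_aet, response_factors, floor_uncertainty=0.5):
    """Adjust an estimated AET for analytical uncertainty (illustrative sketch).

    Assumptions (mine, for illustration): uncertainty is the relative
    standard deviation (one SD divided by the mean) of the response
    factor database, or the 50% floor, whichever is greater, and the
    estimated AET is reduced by that fraction.  The 50% default halves
    the AET, i.e. the familiar "factor of 2".
    """
    mean_rf = statistics.mean(response_factors)
    rsd = statistics.stdev(response_factors) / mean_rf  # one SD, relative to the mean
    uncertainty = max(rsd, floor_uncertainty)
    return estimated_aet * (1.0 - uncertainty)

# Hypothetical database with modest spread: its RSD is below 50%,
# so the 50% floor applies and the estimated AET is simply halved.
rfs = [0.8, 1.0, 1.1, 0.9, 1.2]
print(final_aet(1.0, rfs))  # → 0.5
```

Note how the sketch makes the article's point visible: with a broadly scattered database (RSD above 50%), the database-driven adjustment is larger than the factor of 2, so defaulting to the 50% floor understates the uncertainty for methods and compounds that were not behind the original derivation.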

 

To a good scientist, arguments such as “it is better than nothing” or “everybody else is doing it” are inexcusable.  Certainly, the claim that “it is better than nothing” must be examined objectively and harshly.  The improper application of science is not guaranteed to be better than doing nothing; indeed, the history of science is littered with examples of bad outcomes derived from applying good science incorrectly. 

 

Listen, nobody said doing good science was easy.  We understand that part of the driving force for recommending that the factor of 2 be universally applied was that, at the time, few laboratories had access to a response factor database.  Thus, it was nearly impossible to practice the good science required by the recommendation, and people, rather than do nothing, gravitated to the other part of the recommendation.  Today, however, it is virtually impossible to find a reputable E&L laboratory that is not eager to talk about its database.  Thus, in this case, our ability to do good science has finally caught up with our responsibility to do good science.  It is proper that we accept that responsibility and be held accountable for meeting it.

 

This is true not only of adjusting the AET for analytical uncertainty but in numerous other areas where our current capabilities enable, and obligate, us to practice and preach a higher degree of good science than has ever been possible.  Currently applied recommendations, standards, guidelines, and practices must be adjusted, as appropriate, to leverage this new and higher degree of good science, and new recommendations, standards and guidelines must be drafted to reflect it.  We aspire to better science because we are capable of better science.  More importantly, if we are going to talk the talk, we had best start walking the walk.
