Erring on the Side of Science

Michael A. Kamrin, Ph.D.

Institute for Environmental Toxicology

Michigan State University

C-231 Holden Hall

East Lansing, Michigan 48824

Phone: 517/353-6469

Fax: 517/355-4603

Email: kamrin@pilot.msu.edu

INTRODUCTION

While the questions posed by the editor are rather specific, they get at the heart of how science can best contribute to the protection of human health - a key goal of risk assessment. Thus, I will attempt to answer these questions in a general context and to show that careful examination of these issues suggests that a fundamental change in the current approach to risk assessment can improve the protection of human health. I propose that the focus be shifted from erring on the side of safety to erring on the side of science.

The difference between these approaches was brought into sharp focus when I co-developed and taught the first risk assessment course for graduate students at Michigan State University. The reaction of the students in the natural sciences, such as toxicology, biochemistry, and crop and soil sciences, was one of confusion. What they were being taught about toxicity and exposure assessment was so foreign to what they had learned about the way science is done in their disciplinary courses and laboratory work that they could not comprehend it. After about six weeks, some students experienced the "aha" phenomenon and could understand what was going on. They realized that thinking about risk assessment was quite different from thinking about science.

ERRING ON THE SIDE OF SCIENCE

To illustrate how different they are, consider a thought experiment that relates only to the toxicity (dose-response) assessment part of risk assessment. Imagine that you are an academic scientist who has been given the problem of trying to predict as accurately as possible the adverse effects of Chemical X on humans exposed at environmentally relevant levels. Let us examine the decisions that you would make in attacking this problem presuming that neither the desires of the funding source nor the total amount of funding posed any barriers.

The first decision would be to select an experimental system to study. You would undoubtedly want a well-characterized system but, at least as important, you would want a system that provided the most accurate model for human responses. To select the species to study, it would be helpful to know the biological target of Chemical X, the type of effect, and the toxicokinetics and toxicodynamics of the chemical in the test system and in humans. Not all of this information may be needed in making a choice; for example, if the chemical is rapidly and totally metabolized in humans but not metabolized at all in the test system under consideration, that system is probably not the best choice. In no case is it likely that all of this information will be available. However, there is often enough to make an informed choice.

A second choice is how to administer Chemical X to the test animals. Presumably, if the problem deals with the effects of Chemical X in drinking water, then water would logically be the vehicle of administration. If for some reason another vehicle is considered, e.g., corn oil, it could be used if it could be demonstrated that administration in this vehicle would mimic what happens with water, i.e., that the toxicokinetics would be the same, or well enough understood to accurately predict the response to the chemical in one vehicle from the response in the other.

A third decision relates to the rate and duration of administration of the test compound. If dealing with drinking water, then daily administration over a significant fraction of a lifetime may be appropriate. If, however, exposure varies temporally, e.g., differing between weekdays and weekends or from season to season, consideration needs to be given as to how best to simulate this exposure. Again, some information about the toxicokinetics of Chemical X in the test species and in humans would be very helpful in making these choices.

Fourth is the choice of dose range. To most accurately simulate the situation of interest, a range of doses that provides exposures similar to those expected in human populations is most appropriate. Since it is possible that the dose-response curve differs at the extremes of dose, providing experimental points at these extremes may yield useful information. It is particularly important to consider the possibility that effects at environmentally relevant or lower doses can be beneficial as well as adverse. Another critical decision is how the doses will be spaced, e.g., in what multiples of the lowest dose selected. All of these choices are likely to be strongly influenced by mechanistic considerations.

There are a number of other decisions about experimental design that have to be made that will not be addressed in detail here. For example, there are decisions as to how many animals of each sex to study; how many control animals to use and whether these are concurrent or historical; how often and at what time intervals to collect data and, of course, what data to collect at each time point.

Once the data have been collected, there are decisions that need to be made about data interpretation. For example, what observations are going to be interpreted as adverse effects? Depending on how the data were collected, will the results be reported in terms of time to effect or in terms of effect at termination of the study? Knowledge of the toxicokinetics and toxicodynamics of the chemical of concern would be very helpful in answering these questions.

Once the data are interpreted and reported, there is the question of how these results are to be quantitatively extrapolated to humans. For example, if doses outside of the range of environmental interest were administered, how are these going to be extrapolated to the exposure levels of concern? How are the levels at which adverse effects occurred in the test system going to be extrapolated to humans? As before, information about the toxicokinetics and toxicodynamics is crucial. For example, chemicals that show exactly the same dose-response in test systems could have vastly different behaviors in humans because of these mechanistic considerations.
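The stakes of this extrapolation step are easy to sketch. The fragment below shows the default linear low-dose extrapolation commonly contrasted with mechanism-based alternatives: risk observed at a high experimental dose is scaled down proportionally to the exposure level of concern. All numbers are hypothetical, and the point of the sketch is precisely that this proportionality is an assumption that mechanistic information may overturn.

```python
# Illustrative sketch (hypothetical numbers): default linear extrapolation
# from a high experimental dose down to an environmentally relevant one.
# Two chemicals with identical high-dose results would get identical
# low-dose estimates here, even if their mechanisms differ in humans.

def linear_low_dose_risk(high_dose, high_dose_risk, low_dose):
    """Assume risk scales linearly through the origin: risk = slope * dose."""
    slope = high_dose_risk / high_dose
    return slope * low_dose

# Hypothetical: 10% excess risk observed at 100 mg/kg/day, extrapolated
# to an environmental exposure of 0.001 mg/kg/day.
risk = linear_low_dose_risk(100.0, 0.1, 0.001)
print(risk)  # about 1e-06
```

The same inputs run through a mechanism-informed model (e.g., one with a threshold or saturable metabolism) could yield an answer orders of magnitude different, which is the author's point about toxicokinetic and toxicodynamic information being crucial.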

ERRING ON THE SIDE OF SAFETY

If this is the ideal, what is the practice? Fortunately, this is well documented, since government risk assessment/risk management documents, e.g., Drinking Water Health Advisories1, usually contain extensive information about what studies the final values are based on, how these studies were conducted, and how they were interpreted. While general guidance documents governing toxicity assessments, e.g., the Proposed Guidelines for Carcinogen Risk Assessment2, generally provide the opportunity for overriding defaults with "adequate evidence", in practice the default assumptions are almost always followed. This can be seen, for example, by examining the way that evidence is used in establishing drinking water health advisory values.

Looking at such risk assessment/risk management documents, it can be seen, for example, that instead of using studies that provide the best models for human responses as the bases for conclusions, experiments utilizing test systems that are most sensitive to the chemical are most often selected. Instead of selecting studies in which the route of administration and the vehicle simulate the real situation most closely, studies in which the route and vehicle are chosen on the basis of previous practice, or the highest likelihood that an effect can be produced, are chosen. Rather than choosing a study that reflects an administration schedule based on relevance, it seems that this choice is most often made on the basis of precedent. Rather than selecting experiments that utilize dose ranges most relevant to exposures of interest or to the human situation, doses often appear to be chosen based on other criteria.

With respect to interpretation of the results, a change in the status of the animal is generally equated with an adverse effect - often based on the rationale that it is impossible to tell with absolute certainty the difference between a compensatory response and a toxic one. Effects seen in animals are assumed to also occur in humans. It is also assumed that extrapolation from higher to lower doses can be done without consideration of mechanistic information, unless that information is conclusive. Many other similar examples could be provided.

The justification for these choices is that we must err on the side of safety. If we are not absolutely sure, we must make assumptions that lead to conclusions that, if anything, overstate the toxicity and protect human and environmental health. Indeed, the numbers resulting from these types of toxicity assessments are often couched in terms reflecting this bias; e.g., acceptable daily intake. While this sounds reasonable, and may have been the best approach when less scientific information was available, it is important to carefully evaluate whether this is any longer the case.
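The arithmetic behind such numbers is worth making explicit. The sketch below shows the conventional derivation of an acceptable daily intake: a no-observed-adverse-effect level (NOAEL) is divided by a product of stacked safety/uncertainty factors. The NOAEL and the particular factors are hypothetical, chosen only to show how several "prudent" factors of ten compound.

```python
# Illustrative sketch (not any agency's algorithm): deriving an acceptable
# daily intake (ADI) by dividing a NOAEL by the product of safety factors.
# The NOAEL and factor choices below are hypothetical.

def acceptable_daily_intake(noael_mg_per_kg_day, uncertainty_factors):
    """Divide the NOAEL by the product of all uncertainty factors."""
    product = 1
    for uf in uncertainty_factors:
        product *= uf
    return noael_mg_per_kg_day / product

# Hypothetical NOAEL of 10 mg/kg/day; factors of 10 each for interspecies
# extrapolation, human variability, and database insufficiency.
adi = acceptable_daily_intake(10.0, [10, 10, 10])
print(adi)  # 0.01 mg/kg/day - three factors of 10 compound to 1000-fold
```

Each additional tenfold factor pushes the result another order of magnitude below the observed no-effect level, which is how conservatism accumulates, largely invisibly, across the steps of an assessment.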

DISCUSSION

The pitfalls of current practice have been brought into sharp focus by the requirements of the new Food Quality Protection Act (FQPA). For example, the requirement to incorporate an additional safety factor to account for the greater sensitivity of children is based not on scientific evidence but on the idea that the safety factors used to calculate the existing risk numbers did not take this into account. This does not seem to be the case; indeed it seems very unlikely that the scientists and risk managers who developed the existing values were unaware of the existence of children or that children may have different sensitivities to environmental insults. Instead, while an additional safety factor for children was not explicitly included, it was, in essence, hidden in the multiple "prudent decisions" that were taken in the risk assessment process; e.g., most sensitive animal, most sensitive endpoint, and safety/uncertainty factors of other kinds. In this case, erring on the side of safety obscured both the science and the assumptions that were made in the blending of science and policy at every step.

Another way that FQPA illuminates these issues is in its requirement that toxicities of compounds with similar mechanisms of actions be added together in establishing tolerances. Under the current system, this is very difficult because the comparable data are generally not available for all compounds in a class; e.g., experimental doses administered and endpoints measured may be different for different compounds. Also, toxicity values may represent different policy choices for different compounds. These choices may involve decisions as to the critical studies as well as to which safety factors should be applied.
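The additivity requirement can be illustrated with the standard hazard-index convention, in which exposure/reference-dose ratios for compounds assumed to share a mechanism are summed. The compounds and numbers below are hypothetical, and, as argued above, the sum is only as meaningful as the comparability of the underlying toxicity values.

```python
# Illustrative sketch of mechanism-based additivity: a hazard index summing
# exposure/reference-dose ratios across compounds assumed to share a
# mechanism of action. All values are hypothetical; if each RfD embodies
# different policy choices, the sum mixes science with inconsistent policy.

def hazard_index(exposures_and_rfds):
    """Sum of exposure/RfD ratios; a result above 1 flags potential concern."""
    return sum(exposure / rfd for exposure, rfd in exposures_and_rfds)

# Three hypothetical compounds, each as (exposure, RfD) in mg/kg/day.
hi = hazard_index([(0.002, 0.01), (0.001, 0.005), (0.003, 0.02)])
print(round(hi, 2))  # 0.55
```

The calculation itself is trivial; the difficulty the FQPA exposes is that the inputs were generated under different default assumptions, so the ratios being added are not on a common scientific footing.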

The FQPA requirement of adding together toxic effects due to multiple routes of exposure; e.g., air and water in addition to food, in calculating tolerances is also very complex. Many of the same issues mentioned in the previous paragraph also hold here but differences in choices may be even greater because the responsibilities for risks from exposures by these different routes lie with a variety of governmental units whose risk assessment approaches and, thus, resultant toxicity assessment values, may vary significantly.

The discussion of the FQPA is presented in some detail because it shows that the fundamental issues raised here have not been resolved despite many attempts by experts in the field and the issuance of a number of important documents, such as the recent EPA Proposed Guidelines for Carcinogen Risk Assessment2. Although these are small steps in the right direction, they are incremental improvements that do not address the need to fundamentally change the approach. To put it bluntly, the current mantra is still to err on the side of safety rather than to err on the side of science.

It is my contention that erring on the side of science is a more effective way of protecting public health than erring on the side of safety. How is this possible? First, and foremost, the incorporation of the best data, and of more data, will lead to better distinctions among chemicals. When data are ignored, or poorer data are substituted for better, it becomes very difficult to tell which of two chemicals poses the greater risk. It is very clear, for example, that chemicals assigned the same Reference Dose or the same potency factor do not necessarily pose the same risk to humans. This is evidenced by regulators' very slow recognition, after much discussion, that some effects in animals are not relevant to humans, although the chemicals involved have been managed for many years as if they were3. This suggests that limited resources have been devoted to risks of lower significance and, in a world of finite resources, have not been as available to manage more significant risks.

In addition, starting from a scientific framework provides a much more comprehensive approach to risk, one that can incorporate new issues without the necessity of approaching each as a problem requiring a new approach and new defaults. For example, the incorporation of low-dose beneficial effects does not require a new risk assessment approach; only the application of accepted scientific techniques for interpreting data. Thus, for chemicals in which such effects are documented, it is possible to incorporate low-dose hormesis immediately, without waiting for a new set of assumptions, defaults, etc. to be developed. Of course, risk managers will probably need to spend some time considering how they will apply the scientific information provided.

Further, erring on the side of science fits in well with the approach advocated in recent risk management documents such as the report of the Commission on Risk Assessment and Risk Management4. This approach can be applied qualitatively to provide more accurate estimates of the relative risks of various chemicals. These qualitative assessments can then form the scientific bases for the public health approach promoted in this and other recent reports. This approach also aids in democratization of risk decisions since it provides information to citizens that is less influenced by the risk management philosophy of any particular agency.

If my contention is valid, what must be done to move towards adopting the err on the side of science approach? The thought experiment described in the first part of this article describes how things might be done if it were possible to start over again and perform new studies on all of the chemicals of concern. This is clearly not feasible in terms of both the time and the resources needed. Thus, in most cases, it will be necessary to attack toxicity problems largely on the basis of data that are currently available. However, while the body of data is the same, it is possible to change the selection and interpretation of these previously published experiments so as to approach the scientific ideal most closely. Mechanistic information is obviously critical in these choices.

The reader may well be thinking that the suggestion of re-interpreting the available data is not practical when there are over 500 chemicals in the IRIS database5 and other compounds that are likely to be of importance. In addition, recent history suggests that the resources EPA is likely to devote to this task will allow the review of only a small number of these chemicals each year. The answer to this dilemma is to prioritize the chemicals that should be re-evaluated and attack these first.

On what basis should priorities be established? Considering that the objective of the process is the protection of human and environmental health, the degree to which a re-evaluation contributes to this goal should be the primary criterion. This suggests that the chemicals currently consuming the most resources should be addressed first. These include compounds that drive expenditures at hazardous waste sites and at sources of air emissions. Perhaps simultaneously, panels should be convened to identify chemicals for which there is the strongest evidence that the current approach significantly over- or understates the best scientific estimates of the risk. In this way, mismatches between risk and resources can be most effectively identified and resources re-allocated to better promote human and environmental health. It should be noted that this re-allocation need not occur exclusively among chemical threats but may also be between chemicals and other threats to human and environmental health.

In sum, in addition to better use of available data, adopting the approach of erring on the side of science will encourage toxicologists to conduct mechanistic studies of high quality and to conduct these and all other studies in a way that reflects the best science. Having the results of these types of studies available will improve the accuracy of risk estimates and thus further enhance the ability to make distinctions among risks. This, in turn, will also lead to better risk management and greater protection of health and the environment within the limitations of available resources.

REFERENCES

1. U.S. Environmental Protection Agency. Office of Water. Health Advisories for 50 Pesticides. Washington, D.C. 1988.

2. U.S. Environmental Protection Agency. Office of Research and Development. Proposed Guidelines for Carcinogen Risk Assessment. EPA/600/P-92/003C. Washington, D.C. 1996.

3. U.S. Environmental Protection Agency. Risk Assessment Forum. Alpha-2u-globulin: Association with chemically induced renal toxicity and neoplasia in the male rat. EPA/625/3-91/020. Washington, D.C.

4. Presidential/Congressional Commission on Risk Assessment and Risk Management. Final Report. Volume 1. Framework for Environmental Health Risk Management. Washington, D.C. 1997.

5. U.S. Environmental Protection Agency. Office of Research and Development. National Center for Environmental Assessment. Integrated Risk Information System. Washington, D.C. 1998.