Friday, August 12, 2005

Policy Making & Making Errors

Walter Williams discusses error making:

We're not omniscient. That means making errors is unavoidable. Understanding the nature of errors is vital to our well-being. Let's look at it.

There are two types of errors, nicely named the type I error and the type II error. The type I error is when we reject a true hypothesis when we should accept it. The type II error is when we accept a false hypothesis when we should reject it. In decision-making, there's always a non-zero probability of making one error or the other. That means we're confronted with asking the question: Which error is least costly?
[ . . . ]

. . . Food and Drug Administration (FDA) officials, in their drug approval process, can essentially make two errors. They can approve a drug that has unanticipated dangerous side effects (type II). Or, they can disapprove, or hold up approval of, a drug that's perfectly safe and effective (type I). In other words, they can err on the side of under-caution or err on the side of over-caution. Which error do FDA officials have the greater incentive to make?

If an FDA official errs by approving a drug that has unanticipated, dangerous side effects, he risks congressional hearings, disgrace and termination. Erring on the side of under-caution produces visible, sick victims who are represented by counsel and whose plight is hyped by the media.

Erring on the side of over-caution is another matter. A classic example was beta-blockers, which an American Heart Association study said will 'lengthen the lives of people at risk of sudden death due to irregular heartbeats.' The beta-blockers in question were available in Europe in 1967, yet the FDA didn't approve them for use in the U.S. until 1976. In 1979, Dr. William Wardell, a professor of pharmacology, toxicology and medicine at the University of Rochester, estimated that a single beta-blocker, alprenolol, which had already been sold for three years in Europe, but not approved for use in the U.S., could have saved more than 10,000 lives a year. The type I error, erring on the side of over-caution, has little or no cost to FDA officials. Grieving survivors of those 10,000 people who unnecessarily died each year don't know why their loved one died, and surely they don't connect the death to FDA over-caution. For FDA officials, these are the best kind of victims -- invisible ones. When an FDA official holds a press conference to announce its approval of a new life-saving drug, I'd like to see just one reporter ask: How many lives would have been saved had the FDA not delayed the drug's approval?

The bottom line is, we humans are not perfect. We will make errors. Rationality requires that we recognize and weigh the cost of one error against the other.

The economic models we use in class to discuss public policy typically assume perfect information. Actual policy making, of course, is subject to the sort of errors Professor Williams discusses. The concepts of Type I and Type II errors are useful for evaluating policy alternatives once we recognize that our own analysis can be wrong. Does it make sense to say that, in general, we want our policy choices to minimize the expected costs of our errors?
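
To make that cost-weighing concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the error probabilities and the cost figures are made-up numbers for illustration, not estimates from Professor Williams's column or from any FDA data.

# Hypothetical illustration: all probabilities and costs are made-up numbers,
# not estimates from the column or from FDA data.

def expected_error_cost(p_type1, cost_type1, p_type2, cost_type2):
    """Weight each kind of error by its probability and its cost."""
    return p_type1 * cost_type1 + p_type2 * cost_type2

# Type I here: delaying a drug that is actually safe and effective (over-caution).
# Type II here: approving a drug that is actually dangerous (under-caution).
# Costs are stated in lives lost per year, purely for illustration.
cautious_review = expected_error_cost(p_type1=0.30, cost_type1=10000,
                                      p_type2=0.01, cost_type2=5000)
faster_review = expected_error_cost(p_type1=0.05, cost_type1=10000,
                                    p_type2=0.05, cost_type2=5000)

print("long review:  expected cost =", cautious_review)   # 3050.0
print("short review: expected cost =", faster_review)     # 750.0

With these invented numbers the cautious regime comes out worse once the invisible Type I cost is priced in, which is exactly the comparison the column says never gets made. With different numbers the ranking could flip; the point is to weigh one error against the other rather than react only to the visible one.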
