We've also talked about what to do when they begin to fail. Note that I said "when," not "if." We know that, in the face of the trends above, our antibiotics will become less and less effective until, at least for some uses, they cease to work at all. In fact, it's already happening.
A recent article in the Chicago Tribune highlights that failure, its high cost, and the key difficulties in addressing the problem, focusing especially on the tension between the FDA and the pharmaceutical companies.

In particular, the article examines one specific issue in the development of antibiotics: against what standard should a new antibiotic be tested? Is it enough for the new antibiotic to be roughly as good as an existing antibiotic for the same disease? After all, “roughly as good” may well mean “not quite as good,” without enough data to be certain whether it’s as good and, if not, by how much it falls short.
Or, do you test that it’s at least as good as, and perhaps better than, an antibiotic currently in use? The two ways to accomplish that would be a double-blinded comparison study or a double-blinded placebo-controlled study. The blinding (“double-blinded” means neither doctor nor patient knows which pill the patient receives) would make the results dependable. These would set higher standards that would prove (or disprove) clear effectiveness and even superiority.
There are, of course, two issues with the more rigorous testing. The first is this: even for sinus infections and other non-life-threatening bacterial infections, how many patients would consent to a comparison trial, much less a placebo-controlled trial? The second is that the more rigorous testing takes more time and costs more money.
We need to take the second issue seriously. According to the article, “Drug companies are abandoning the antibacterial business, citing high development costs, low return on investment and, increasingly, a nearly decade-long stalemate with the Food and Drug Administration over how to bring new antibiotics to market.” While the article is focused on the FDA approval process, development costs and low return are significant challenges in their own right. This is part of the challenge of for-profit health care – and pharmaceutical companies are certainly for-profit health care. I’ve commented before that the drug companies are especially interested in medicines for chronic diseases. Patients take such drugs regularly for years, providing a steady revenue stream. Antibiotics aren’t quite as low return as vaccines, which, if things go right, are taken only once or a few times in a lifetime. However, they aren’t like the chronic meds either. The patient will take them for ten days, or a week, or less. Indeed, for some infections the patient still receives the drug just once, as many of us received one injection of penicillin (my memory is of one painful injection, but I was small) for strep throat.
At the same time, the development costs are as high as with any other medication. The axiom is that “The first pill costs a million dollars. Every pill after that costs 35 cents.” (And that axiom has been around for a while; a million dollars is, by today’s standards, blessedly cheap.)
Still, I find myself wondering if there isn’t reason to temper the requirement for profit, especially for antibiotics. You see, pharmaceutical companies spend their time and money on applied research. That is, they put most of their efforts into taking the research of others and developing specific applications. Certainly, they do some basic research – gathering new knowledge without a specific application in mind – but most basic research is done in academic and clinical settings. Basic research is, for example, the Ph.D. candidate wandering through the jungle collecting plant samples, analyzing back in the lab the proteins and chemicals those plants produce, and testing what effects those proteins and chemicals might have. Applied research is taking one of those chemicals because of an effect it has shown, developing it into a specific drug, and completing the tests that show whether it’s safe and effective.
The thing is that, while pharmaceutical companies do pay a lot to complete the applied research, they don’t pay nearly as much for the basic research they base the applied research on. Instead, we pay a lot for it. That is, much of it is paid for with tax revenues and fees paid to government and distributed through such agencies as the National Institutes of Health. Much of it is also paid for with charitable contributions, whether large contributions from private foundations or smaller, individual contributions to the latest telethon. The research is available to the pharmaceutical companies because it’s publicly available, published in peer-reviewed journals. Sometimes it makes it possible for researchers to start their own companies, taking their own basic research and developing the applications.
My question is what the for-profit pharmaceutical companies owe to the larger society in compensation for the basic research that they don’t pay for. Oh, they certainly do pay taxes and contribute to charities; but, then, so do we, and I bet they don’t pay anywhere near the percentages of their income that most of us do. On top of that, while they take some clear risks in applied research, they don’t have nearly the risk that we do paying for basic research. Sure, some applications don’t work out, but they don’t make the effort without some likelihood of success. Basic research, on the other hand, is knowledge for its own sake, whether there’s money in it or not. So, arguably it’s mostly risk. That is, while a plant sample or a new animal may produce lots of chemicals, there’s no reason to expect that any one of them, or even most of them, will be useful, or, at any rate, more useful than chemicals already identified.