One of the report’s main conclusions is that the majority of medical errors do not result from individual recklessness or the actions of a particular group--this is not a “bad apple” problem. More commonly, errors are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them. For example, stocking patient-care units in hospitals with certain full-strength drugs, even though they are toxic unless diluted, has resulted in deadly mistakes.
Thus, mistakes can best be prevented by designing the health system at all levels to make it safer--to make it harder for people to do something wrong and easier for them to do it right. Of course, this does not mean that individuals can be careless. People still must be vigilant and held responsible for their actions. But when an error occurs, blaming an individual does little to make the system safer and prevent someone else from committing the same error.
The report has been very influential as we’ve sought to improve the safety and the experiences of patients. Of particular importance has been the idea of a “non-punitive workplace.” Improving the system requires continuing improvement, and continuing improvement requires data--more specifically, it requires identifying problems and mistakes, because that information gives the best direction for improving the system. That, in turn, requires that staff actually report problems, and especially errors. To encourage such reporting, healthcare institutions elected to establish non-punitive workplaces: staff were told that they could report mistakes without fear of being fired, so that the system could be improved to prevent those mistakes from recurring.
And unfortunately, it appears that the reporting isn’t happening. A new report released last week by the Office of the Inspector General of the Department of Health and Human Services is titled “Hospital Incident Reporting Systems Do Not Capture Most Patient Harm.” (You can link to the report from here, and to a New York Times article here.) Worse, it appears that the reporting systems fail precisely because the people on whom they rest--the professionals who make or see the problems--don’t know what to report. As the Executive Summary states:
Hospital staff did not report 86 percent of events to incident reporting systems, partly because of staff misperceptions about what constitutes patient harm. Of the events experienced by Medicare beneficiaries discharged in October 2008, hospital incident reporting systems captured only an estimated 14 percent. In the absence of clear event reporting requirements, administrators classified 86 percent of unreported events as either events that staff did not perceive as reportable (62 percent of all events) or that staff commonly reported but did not report in this case (25 percent).
In a way, this finding concerned me more:
For the 62 percent of events not reported because staff did not perceive them as reportable, administrators indicated that staff likely did not recognize that the event caused harm or realize that they should complete a report. The most common reason administrators gave for staff underreporting was that no perceptible error occurred (12 percent), indicating that staff commonly equate the need to complete incident reports with medical errors. Other reasons for underreporting include staff becoming accustomed to common occurrences and therefore not submitting reports, such as events that were expected side effects (12 percent) or occurred frequently (8 percent). (Emphasis mine)
Or as the report says in another paragraph, “Although administrators indicated that they want staff to report all instances of harm, when asked about specific events administrators conceded that staff may often be confused about what constitutes harm and is, therefore, reportable.” (Again, emphasis mine)
In one sense, that seems troubling. Harm, it would seem, should be determined by what did happen, or could have happened, to the patient. But that becomes one of the points of decision: if what could happen didn’t happen, was the patient harmed? Or, if it’s an expected or frequent side effect, does the fact (or the assumption) that benefits to the patient exceeded risks mean that the patient isn’t (“isn’t really”) harmed? While we might think that harm would be easy to identify, it may not always be so clear to the professional in the circumstance at the time.
The OIG report recommends the development of a list of reportable events. That could certainly be helpful. The risk, of course, is that what we see as harmful today may not be an issue in the future, and that the list may soon become dated and inadequate.
At the same time, we do need to pay close attention. Reporting of incidents that harm patients, or come close to doing so, depends on the professionals serving them. I know that they want patients to do well, and to have things go right. It is important that they also recognize what to report when things don’t go right. Once again, that’s the only way we’ll know where the issues lie, and what we need to improve.