Thursday, January 12, 2012

First Do No Harm; Nor Allow It To Pass Unnoticed

For more than ten years now a central theme in the lives of hospitals and health care professionals has been patient safety. It has always been a concern, but a significant step came with a report from the National Academy of Sciences titled, "To Err Is Human: Building a Safer Health System" (linked from here). That study noted how often patients were put at risk, and then sought to identify the causes. According to the Report Brief,

One of the report’s main conclusions is that the majority of medical errors do not result from individual recklessness or the actions of a particular group--this is not a “bad apple” problem. More commonly, errors are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them. For example, stocking patient-care units in hospitals with certain full-strength drugs, even though they are toxic unless diluted, has resulted in deadly mistakes.
Thus, mistakes can best be prevented by designing the health system at all levels to make it safer--to make it harder for people to do something wrong and easier for them to do it right. Of course, this does not mean that individuals can be careless. People still must be vigilant and held responsible for their actions. But when an error occurs, blaming an individual does little to make the system safer and prevent someone else from committing the same error.

The report has been very influential as we've sought to improve the safety and the experiences of patients. Of particular importance has been the idea of a "non-punitive workplace." Making the system safer requires continuing improvement, and continuing improvement requires data; more specifically, it requires identifying problems and mistakes, because that information provides the best direction for improving the system. That, in turn, requires that staff report problems, and especially errors. To encourage such reporting, healthcare institutions elected to establish a non-punitive workplace: staff have been told that they can report mistakes without fear of being fired, so that the system can be improved to prevent those mistakes from recurring. In short, efforts to improve the system depend on staff reporting their errors.

And unfortunately it appears that the reporting isn't happening. A new report was released last week from the Office of the Inspector General of the Department of Health and Human Services, titled "Hospital Incident Reporting Systems Do Not Capture Most Patient Harm" (you can link to the report from here, and to a New York Times article here). Worse, it appears that the reporting systems fail precisely because the people on whom they rest, the professionals who make or see the problems, don't know what to report. As the Executive Summary reports:

Hospital staff did not report 86 percent of events to incident reporting systems, partly because of staff misperceptions about what constitutes patient harm. Of the events experienced by Medicare beneficiaries discharged in October 2008, hospital incident reporting systems captured only an estimated 14 percent. In the absence of clear event reporting requirements, administrators classified 86 percent of unreported events as either events that staff did not perceive as reportable (62 percent of all events) or that staff commonly reported but did not report in this case (25 percent).

In a way, this finding concerned me more:

For the 62 percent of events not reported because staff did not perceive them as reportable, administrators indicated that staff likely did not recognize that the event caused harm or realize that they should complete a report. The most common reason administrators gave for staff underreporting was that no perceptible error occurred (12 percent), indicating that staff commonly equate the need to complete incident reports with medical errors. Other reasons for underreporting include staff becoming accustomed to common occurrences and therefore not submitting reports, such as events that were expected side effects (12 percent) or occurred frequently (8 percent). (Emphasis mine)

Or as the report says in another paragraph, “Although administrators indicated that they want staff to report all instances of harm, when asked about specific events administrators conceded that staff may often be confused about what constitutes harm and is, therefore, reportable.” (Again, emphasis mine)

In one sense, that seems troubling. The determination of harm would seem to rest on what did happen, or what could have happened, to the patient. But that becomes one of the points of decision: if what could have happened didn't happen, was the patient harmed? Or, if it's an expected or frequent side effect, does the fact (or the assumption) that the benefits to the patient exceeded the risks mean that the patient isn't ("isn't really") harmed? While we would think harm would be pretty easy to identify, it may not always be so clear to the professional in the circumstance at the time.

The OIG report recommends developing a list of reportable events. That could certainly be helpful. The risk, of course, is that what we see as harmful today may not be an issue in the future, and that the list may soon become dated and inadequate.

At the same time, we do need to pay close attention. Reporting incidents that harm patients, or come close to harming them, depends on the professionals serving them. I know that they want patients to do well and to have things go right. It is important that they also recognize what to report when things don't go right. Once again, that's the only way we'll know where the issues lie, and what we need to improve.

3 comments:

Kirkepiscatoid said...

Excellent post on a confusing topic. I totally agree about the confusion at an institutional level of "what's reportable and why it's reportable" and the muddy definition of "harm."

The troublesome part for me is that the "non-punitive" part of the 1999 report largely goes ignored. Most of the time, I see "incident reports" being used specifically in the hope "someone gets in trouble."

I wish I had a dollar for every time someone got angry at the lab for something that the lab supposedly "did" or "didn't do" and filed a bogus incident report. I'd say the bogus to legit ratio runs 10 to 1, and it runs weekends/nights to days 10 to 1 also. Likewise, I have had to really discuss with lab personnel that incident reports are not for "getting back" at other depts. They are about patient safety and nothing else.

PamBG said...

This is very interesting.

I'm currently working a temp job that is highly process-oriented. The end-users are not medical but legal; however, clerical mistakes could potentially cause anguish and difficulty for the parties involved.

What I have been very impressed by - and noticed right away - is that the office maintains a "no blame culture". I would love to know how that has been achieved, but the attitude of the people in charge is certainly part of it.

I was told during my interview that it was vital to bring mistakes to the attention of a supervisor. I am new and still learning these processes, but it has been refreshing to be able to do that and have supervisors respond very matter-of-factly: "OK, let's go fix that."

I was asked to flag up my errors before they created a problem; I did so (with some cynicism, I must admit), and the supervisors responded as promised.

I appreciate that, in a medical context, lives may very well be at stake, but I think that it would be excellent to be able to establish a culture where medical staff really could flag up mistakes before harm was done.

Marshall Scott said...

Kirkepiscatoid, I appreciate what you're saying. "Write it up" is said, as often as not, with some level of frustration and/or bitterness. Sometimes it's the perception (true or not) that efforts to talk about the issue didn't have the desired effect. I've certainly seen problems written up vindictively (and at least as often received that way), and all too rarely to bring a systems approach to addressing the issue.

Pam, it's long been preached in Total Quality Management/Continuous Quality Improvement (TQM/CQI), if not always observed. It's one of the principles originally described by Dr. W. Edwards Deming, the American statistician who saved Japanese manufacturing (after being largely rejected in the United States). If it's really observed, it can make a big difference. If it's preached but not observed, it only adds to a culture of distrust.