Of what real value are query rate benchmarks?
In reading the previous posts re: query rates, I pose the following questions:
1) Do you feel that these "benchmarks", created by outside consultants, actually measure performance (how well a CDS does their job)?
2) Do you feel that a program's actual performance is dependent on physician queries?
3) If a team maintains a query rate equal to the benchmarks, what does this say about the CDI team's ability to educate physicians?
4) Does anyone feel that this is a benchmark with real value in reporting, or are other measures more valid?
a) Change in CMI over time?
b) Change in SOI/ROM?
c) Improvement in documentation as measured by targeted reports, e.g., a reduction in the number of 428.0 (CHF, unspecified) codes?
I believe that query benchmarks, as set by outside financial firms, are artificial measures of CDI performance. Too many facilities are using these figures to correlate with potential increases in revenue and rebuking CDS when this "revenue" drops.
This set-up forces teams to query before a workup is complete, just to justify meeting "benchmarks", or counting queries as "successful" without validating the impact against the coding summary. This approach seems to encourage unethical behaviors in order to make it look like the team is "successful".
It is valuable to measure physician participation in the process (response rates), as this indicates if additional physician education is warranted. But these response rates should be limited to "yes" or "no". After all, it's the physicians' prerogative to disagree.
The role of the CDS is to obtain additional specificity in order to facilitate the appropriate codes - and these can be measured with code-specific reports.
Focusing on the financial impact of a program is an imperfect process at best. Many facilities measure the financial impact of a query based on the difference between the starting and the final coded DRG. Unless a report accurately includes the discharge status (home, LTC, acute rehab, etc.) of each case, the actual reimbursement is variable and shouldn't be reported. Many discharges are paid based on transfer DRGs or other methodologies.
Until the focus is taken off revenue and put back where it belongs - accurate, specific data reporting - the role of the CDS will continue to be viewed with skepticism by those responsible for the quality of the data - physicians and coders.
Comments
was getting a greater than 80% response rate from physicians and then a
positive agreement rate. We then measured our CMI from the start of the
program to about the 2 year mark and noted a significant increase. We
are now focusing on the SOI/ROM since the APR system has gone into
effect and have also begun concentrating on assisting with the core
measures of CHF and AMI. So, I think that it is nice to have a goal but
that is not what the program is all about.
Deanna Holowczak, BSN, RN
Clinical Documentation Specialist
St. John's Riverside Hospital
914-964-4580
dholowczak@riversidehealth.org
permanently? Also, if a physician gives no response to a query, what
methods do you use to differentiate between a "no response" and a
disagreement? This question makes sense in my head, I promise!
Robert
Robert S. Hodges, BSN, MSN, RN
Clinical Documentation Improvement Specialist
Aleda E. Lutz VAMC
Mail Code 136
1500 Weiss Street
Saginaw MI 48602
Kari L. Eskens, RHIA
BryanLGH Medical Center
Coding & Clinical Documentation Manager
2. 'No response' - I have the responsibility of getting my queries answered (verbal follow-up, fax, etc.). I have 48 hours post discharge to accomplish this. If I cannot get it answered, the coder reviews and (if they feel it appropriate) sends a fax, and the query is added as a deficiency.
'Disagree' we hardly ever use - if the physician documents anything as a response to the query (we have many standardized queries, which really puts me in a corner), we say he has 'documented a response'.
If I have a verbal conversation about a question and they indicate they do not see the clinical picture as such, I flip the query over and write on the bottom of the form what the physician said for the coder to review when they get the record.
Charlene
daily for documentation. If the documentation is made, I remove the
query and update our database.
Once the patient is discharged - and the need for the query is still
there - the Coder will "continue the Query". This query, once answered,
will remain permanently in the medical record.
All CDI queries are returned to the appropriate CDS. The Coder returns
the query to the CDS w/"no response", "another CC/MCC documented", or
"will continue query".
2.) Our software has 3 choices for query responses:
Agree
Disagree
Unanswered
I do wish there were a couple more choices - definitely an "Other"
option.
I have had physicians outright tell me "no - I do not agree with that
diagnosis". That's a definite disagree. But we have many "unanswered"
which I believe are their way of "disagreeing" with the query.
N. Brunson, RHIA
Clinical Documentation Specialist
Bay Medical Center
2) Our software program has 4 possibilities for a response: "awaiting" (when the query is first placed), "agree" (MD gave some type of answer - even if not a positive-impact CC or MCC), "disagree" (ex: no evidence of pneumonia),
and "no response" - which our system defaults to after 15 days, or which we can change to after the patient is discharged with no answer to our query. Our coders will also use this category when a CDS query is still relevant and the coder does a retrospective query - that way two queries are not shown as "awaiting"; the CDS query is "closed" with a "no response".
Becky Mann, RN, CDS
Queen of the Valley Medical Center
2. The MD responses that we log are:
No response
Positive response - no DRG impact (this just means the physician answered the query)
Positive response - DRG Impact
Post D/C response - no DRG impact
Post D/C response - DRG impact
No response - no post d/c follow up
No response to post d/c query
The only time we do not f/u post d/c is if the query would have no significant impact on the case (e.g., CKD 3).
We only have "No response to post d/c query" on less than 1% of our post d/c queries. We make them answer if at all possible.
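For anyone tracking these categories in a spreadsheet or database export, the response rate and DRG-impact rate can be tallied with a few lines of code. This is only an illustrative sketch - the log entries below are hypothetical, and the category names are taken from the list above, not from any particular vendor's system:

```python
from collections import Counter

# Hypothetical query log using the response categories listed above;
# the entries are made up for illustration only.
QUERY_LOG = [
    "Positive response - DRG impact",
    "Positive response - no DRG impact",
    "No response",
    "Post D/C response - DRG impact",
    "Positive response - DRG impact",
    "No response to post d/c query",
]

# Categories that count as unanswered vs. those that moved the DRG.
UNANSWERED = {
    "No response",
    "No response - no post d/c follow up",
    "No response to post d/c query",
}
IMPACT = {"Positive response - DRG impact", "Post D/C response - DRG impact"}

def query_metrics(log):
    """Return the overall response rate and DRG-impact rate for a query log."""
    counts = Counter(log)
    total = sum(counts.values())
    answered = sum(n for cat, n in counts.items() if cat not in UNANSWERED)
    impact = sum(n for cat, n in counts.items() if cat in IMPACT)
    return {
        "response_rate": answered / total,
        "drg_impact_rate": impact / total,
    }

print(query_metrics(QUERY_LOG))
```

Keeping the "no DRG impact" categories separate from the unanswered ones is what lets response rate and financial impact be reported independently, as several posters here recommend.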
that putting a check box on the query for them to tell me they have seen
it and have either documented or disagreed, would turn into them just
checking the box and ignoring it altogether. I am new to this role, so I
am still trying to find my way. This forum has really helped me.
Agree
Disagree
No response
We have a CV surgeon who disagrees 100% of the time. But his response
rate is 100%, and that's what's on the scorecard. I want to include agree
rate with that, but I'm still trying to make the case to the VP of Med
Staff/CMO.
Sandy Beatty, RN, BSN, C-CDI
Clinical Documentation Specialist
Columbus Regional Hospital
Columbus, IN
(812) 376-5652
sbeatty@crh.org
"The most important thing in communication is to hear what isn't being
said." Peter F. Drucker
2 -- Program performance is dependent on many more variables than simple query activity: education, changing of habits, changes to the guidelines, support at the administrative and medical executive levels, etc. A significant portion of program performance is not measurable - as physicians learn, some change their documentation habits with the desired outcome, but there is no query activity to record!
3 -- Exactly - if that is the only measure, then the program is failing! The education and partnership are much more important than "getting" a query.
4 -- The measures you mention (as well as a variety of others), which really measure the actual quality of documentation and coding, would be much better for tracking one's success. It is just that obtaining the data, analysis, and measurement is complex and time-consuming, so some of the current common measures IMHO are much easier to use and are reasonable as stand-ins, as long as one remembers what the TRUE goals are!
Don
Donald A. Butler, RN, BSN
Manager, Clinical Documentation
PCMH, Greenville NC
dbutler@pcmh.com
2) To some extent, the queries are important. But just as important is my reconciliation with the coders - often I find a missing code, or I get them to upgrade a diagnosis based on my clinical knowledge or my knowledge of our physicians. If we were really doing a perfect job as CDSs, we'd never have to query, because the documentation would be perfect. The better we educate, the fewer queries we should need to generate. But like the cop walking the beat, we always have to be there, watching vigilantly.
3) I think it depends on the hospital. With a stable physician staff, the query rate should go down, but if there is a constant turnover (e.g., new housestaff, etc.), then there is a constant new pool of physicians to be educated and queried.
4) CMI has some value, but so much of it is out of our hands. I can't control what surgery a patient has or when a doctor goes out of town and his caseload goes with him. Reimbursement impact matters, but again, an educated physician isn't going to need a query, so the impact is there but not measurable. Right now I am auditing old charts for validity of single MCCs. It's something the "outside consultant" never trained me for or prepared me for, but it's incredibly valuable to our hospital when RAC could be here any time. I think as the only CDS in the building, it's my job to find ways to make myself needed. I've reviewed denial letters and written appeals, too, some predating my tenure in this job.
1) yep!
2) yep!
3) We are a tertiary care, residency, med school facility...
4) Some very relevant observations about activities for which many consultants don't yet seem to have much to offer. I've also heard of at least one case where it sounds like the hospital may have been led into hot water by the training and advice of its consultant.
These additional activities have the potential to significantly expand the activity, involvement, and expertise that a good CDS brings to the organization.
Don
Especially regarding consultant benchmarks and CMI! It can be
frustrating when you see the results of educating physicians, working
with them on documentation - trying to catch them at the right time -
getting that query answered - and smiling like a proud parent with a
tear in your eye when you read a perfect H&P with "Ac on Chr Systolic
Heart Failure", "Acute Renal Failure" and "E. coli Sepsis due to Urinary
Tract Infection" - and they call wanting to know why CMI is up this
month.
I think we, as CDS, bring a LOT more value to organizations. So, as a professional organization, it's probably up to us to develop standards of practice and those tools necessary to measure success.
We know what we do - why should we have an absentee landlord writing our evaluations? Because, in essence, that's just what's happening in many programs.
If anyone would like to volunteer to serve on a work group for any of these issues, please contact me:
Lynne Spryszak, RN, CCDS, CPC-A
CDI Education Director
ACDIS
lspryszak at cdiassociation dot com (written out to prevent spam)
to tailor our program for our facility, not national data presented by our
consultants. It has given the CDSs permission to do the job they wanted
in the first place... assist our physicians to document a more
complete medical record, which in turn ultimately provides better care to
our patients. I still use the consultants, primarily for education and
their software. But we do not take their advice as gospel!
Thank you,
Susan Tiffany RN, CDS
Supervisor
Clinical Documentation Program
Tiffany_Susan@guthrie.org
I would be interested in joining the work group.
Linnea Thennes, RN, BS, CCDS
Clinical Documentation Specialist
Clinical Resource Management
Northwest Community Hospital
847-618-3089
lthennes@nch.org
We would like to participate in this since we are not currently with a consultant. We are in the process of developing our own reporting system and would like to hear other ideas on how to measure our CDI Program.
Eileen Pracz, RN
Clinical Documentation Specialist
Oregon Health Science University
503-418-4023
fax 503-494-8439
pracze@ohsu.edu
Robert
Robert S. Hodges, BSN, MSN, RN
Clinical Documentation Improvement Specialist
Aleda E. Lutz VAMC
Mail Code 136
1500 Weiss Street
Saginaw MI 48602
P: 989-497-2500 x13101
F: 989-321-4912
E: Robert.Hodges2@va.gov
"Anyone who has never made a mistake has never tried anything new."
-Albert Einstein