Of what real value are query rate benchmarks?

In reading the previous posts re: query rates, I pose the following questions:

1) Do you feel that these "benchmarks", created by outside consultants, actually measure performance (how well a CDS does their job)?
2) Do you feel that a program's actual performance is dependent on physician queries?
3) If a team maintains a query rate equal to the benchmarks, what does that say about the CDI team's ability to educate physicians?
4) Does anyone feel that this is a benchmark with real value in reporting, or are other measures more valid?
a) Change in CMI over time?
b) Change in SOI/ROM?
c) Improvement in documentation as measured by targeted reports, e.g., a reduction in the number of 428.0 (CHF, unspecified) codes?

I believe that query benchmarks, as set by outside financial firms, are artificial measures of CDI performance. Too many facilities are using these figures to project potential increases in revenue and rebuking the CDS when this "revenue" drops.

This set-up forces teams to query before a workup is complete, just to justify meeting "benchmarks", or to count queries as "successful" without validating the impact against the coding summary. This approach seems to encourage unethical behaviors in order to make the team look "successful".

It is valuable to measure physician participation in the process (response rates), as this indicates if additional physician education is warranted. But these response rates should be limited to "yes" or "no". After all, it's the physicians' prerogative to disagree.
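To make the distinction concrete, here is a minimal sketch of how response rate differs from agreement rate when tallying logged query outcomes. The category names ("agree", "disagree", "no_response") are illustrative only, not taken from any particular CDI software:

```python
from collections import Counter

def query_metrics(outcomes):
    """Tally logged query outcomes into response and agreement rates.

    `outcomes` is a list of outcome strings; the category names used
    here are hypothetical. A disagreement still counts as a response,
    since answering "no" is the physician's prerogative.
    """
    counts = Counter(outcomes)
    total = len(outcomes)
    responded = counts["agree"] + counts["disagree"]  # any answer counts
    return {
        "response_rate": responded / total,
        "agreement_rate": counts["agree"] / total,
    }

# A physician can answer every query yet disagree every time:
metrics = query_metrics(["disagree"] * 4 + ["no_response"])
# response_rate is 0.8, agreement_rate is 0.0
```

Tracked separately this way, a high response rate with a low agreement rate points to an education or query-quality issue rather than physician non-participation.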

The role of the CDS is to obtain additional specificity in order to facilitate the appropriate codes - and these can be measured with code-specific reports.

Focusing on the financial impact of a program is an imperfect process at best. Many facilities measure the financial impact of a query based on the difference between the starting and the final coded DRG. Unless a report is accurately including the discharge status (home, LTC, acute rehab, etc.) of each case, the actual reimbursement is variable and shouldn't be reported. Many discharges are paid based on transfer DRGs or other methodologies.
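To illustrate why discharge status matters, here is a simplified sketch of the Medicare post-acute transfer per-diem method (per diem = full DRG payment / GMLOS; payment = the lesser of the full payment and per diem × (LOS + 1)). Real payment logic includes wage-index, outlier, and special 50%-DRG adjustments omitted here, and the dollar figures below are made up:

```python
def transfer_drg_payment(full_payment, gmlos, los, transfer_case):
    """Simplified Medicare post-acute transfer per-diem calculation.

    Short-stay cases discharged to qualifying post-acute settings are
    paid a per diem rather than the full DRG rate; many real-world
    adjustments are deliberately omitted from this sketch.
    """
    if not transfer_case:
        return full_payment
    per_diem = full_payment / gmlos
    return min(full_payment, per_diem * (los + 1))

# Hypothetical case: a query moves the DRG payment from $8,000 to
# $11,000, a reported "impact" of $3,000 -- but if the patient went to
# acute rehab after a 2-day stay against a 5-day GMLOS, the case pays
# per diem:
paid = transfer_drg_payment(11_000, gmlos=5, los=2, transfer_case=True)
# paid is 6600.0, far less than the full $11,000 the DRG shift implies
```

This is why a report that ignores discharge status can credit a query with "revenue" the facility never actually receives.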

Until the focus is taken off revenue and put back where it belongs - accurate, specific data reporting - the role of the CDS will continue to be viewed with skepticism by those responsible for the quality of the data - physicians and coders.

Comments

  • edited May 2016
    Our query benchmark with Navigant was lower than 80%; their focus with
    us was getting a greater than 80% response rate from physicians, and
    then a positive agreement rate. We then measured our CMI from the start
    of the program to about the two-year mark and noted a significant
    increase. We are now focusing on SOI/ROM since the APR system has gone
    into effect, and have also begun concentrating on assisting with the
    core measures for CHF and AMI. So, I think that it is nice to have a
    goal, but that is not what the program is all about.

    Deanna Holowczak, BSN, RN
    Clinical Documentation Specialist
    St. John's Riverside Hospital
    914-964-4580
    dholowczak@riversidehealth.org




  • edited May 2016
    Two questions, how many of you leave your queries on the charts
    permanently? Also, if a physician gives no response to a query, what
    methods do you use to differentiate between a "no response" or him
    disagreeing with it? This question makes sense in my head, I promise!


  • Makes sense to me too. I use a "No Response" and a "No Change"; the "No Change" is for when they reply but disagree.

    Robert

    Robert S. Hodges, BSN, MSN, RN
    Clinical Documentation Improvement Specialist
    Aleda E. Lutz VAMC
    Mail Code 136
    1500 Weiss Street
    Saginaw MI 48602
  • edited May 2016
    What method do they use to let you know they disagree? Are your queries left on the chart after final code?


  • edited May 2016
    We leave the queries on the chart until we get a response or the patient is dismissed. Your no response vs. disagree makes perfect sense. We dealt with that at our facility. In working with the physicians, we decided to put a box at the bottom of the page where they could put their initials. If they used this box, we knew they saw the query but did not agree, so there wouldn't be any add'l documentation in the chart. Thus we give them credit for responding. It has worked well for us.

    Kari L. Eskens, RHIA
    BryanLGH Medical Center
    Coding & Clinical Documentation Manager


  • edited May 2016
    1. Our queries are a permanent part of the medical record.
    2. "No response" - I have the responsibility of getting my queries answered (verbal follow-up, fax, etc.). I have 48 hours post discharge to accomplish this. If I cannot get it answered, the coder reviews and (if deemed appropriate) sends a fax, and the query is added as a deficiency.
    "Disagree" we hardly ever use - if the physician documents anything as a response to the query (we have many standardized queries, which really puts me in a corner), we say he has "documented a response".
    If I have a verbal conversation regarding a question and they indicate they do not see the clinical picture as such, I flip the query over and write at the bottom of the form what the physician said for the coder to review when they get the record.


    Charlene



  • edited May 2016
    1.) Our CDI Queries remain on the chart until answered. I check them
    daily for documentation. If the documentation is made I remove the
    Query and update our database.

    Once the patient is discharged - and the need for the query is still
    there - the Coder will "continue the Query". This query, once answered,
    will remain permanently in the medical record.

    All CDI queries are returned to the appropriate CDS. The Coder returns
    the query to the CDS w/"no response", "another CC/MCC documented", or
    "will continue query".

    2.) Our software has 3 choices for query responses:

    Agree
    Disagree
    Unanswered

    I do wish there were a couple more choices - definitely an "Other"
    option.

    I have had physicians outright tell me "no - I do not agree with that
    diagnosis". That's a definite disagree. But we have many "unanswered"
    which I believe are their way of "disagreeing" with the query.




    N. Brunson, RHIA
    Clinical Documentation Specialist
    Bay Medical Center

  • edited May 2016
    1) Our CDS queries are not part of the permanent record. They are left in the progress notes until the MD answers or the pt is discharged. Our MD progress notes are still paper - that is the last part of the medical record to go electronic - ~ sometime in 2011 (unless it gets changed again). Once the CDS feels we have an answer, we remove the query from the chart and attach it to our CDS worksheet, to keep for future audits by our consulting company - and possibly by RAC? **Does anyone know for certain whether RAC could have access to/ask to see CDS queries? We have gotten mixed messages on this from our Coding manager and our consultants**

    2) Our software program has 4 possibilities for a response: "awaiting" (when the query is first placed), "agree" (MD gave some type of answer - even if not a positive-impact CC or MCC), "disagree" (ex: no evidence of pneumonia),
    and "no response" - which our system defaults to after 15 days - or we can change it to after the pt is discharged with no answer to our query. Our coders will also use this category when a CDS query is still relevant and the Coder does a retrospective query---that way two queries are not shown as "awaiting"; the CDS query is "closed" with a "no response".

    Becky Mann, RN, CDS
    Queen of the Valley Medical Center


  • 1. We have some query forms that are a permanent part of the record - the MD must sign, date and time them. Currently they are for CHF, CKD, Anemia, Skin Ulcers and Debridements. All other queries are removed after MD responds or at discharge.
    2. The MD responses that we log are:
    No response
    Positive response - no DRG impact (this just means the physician answered the query)
    Positive response - DRG Impact
    Post D/C response - no DRG impact
    Post D/C response - DRG impact
    No response - no post d/c follow up
    No response to post d/c query

    The only time we do not f/u post d/c is if the query would have no significant impact on the case (e.g., CKD 3).
    We only have "no response to post d/c query" on less than 1% of our post d/c queries. We make them answer if at all possible.




  • edited May 2016
    I believe that some of our no responses are disagrees as well. I worry
    that putting a check box on the query for them to tell me they have seen
    it and have either documented or disagreed, would turn into them just
    checking the box and ignoring it altogether. I am new to this role, so I
    am still trying to find my way. This forum has really helped me.


  • edited May 2016
    We record:
    Agree
    Disagree
    No response

    We have a CV surgeon who disagrees 100% of the time. But his response
    rate is 100%, and that's what's on the scorecard. I want to include
    agree rate with that, but I'm still trying to make the case to the VP
    of Med Staff/CMO.


    Sandy Beatty, RN, BSN, C-CDI
    Clinical Documentation Specialist
    Columbus Regional Hospital
    Columbus, IN
    (812) 376-5652
    sbeatty@crh.org

    "The most important thing in communication is to hear what isn't being
    said." Peter F. Drucker


  • edited May 2016
    1 -- Based on our experience, I feel one needs to develop the experience to carefully analyze one's own results against the consultant's "benchmark". I found the answer to be split: our performance was not as good as what the consultant predicted; there were some unique, specific reasons that we could not affect, and there were also reasons we could address through performance improvement methods to raise our performance.

    2 -- Program performance is dependent on many more variables than simple query activity: education, changing of habits, changes in the guidelines, support at the administrative and medical executive levels, etc. A significant portion of program performance is not measurable -- as physicians learn, some change their documentation habits with the desired outcome, but there was no query activity to record!

    3 -- Exactly -- if that is the only measure, then the program is failing! The education and the partnership are much more important than "getting" a query.

    4 -- The measures you mention (as well as a variety of others), which really measure the actual quality of documentation and coding, would be much better for tracking one's success. It is just that obtaining the data, analysis, and measurement is complex and time consuming, so some of the current common measures IMHO are much easier to use and are reasonable as stand-ins, as long as one remembers what the TRUE goals are!

    Don

    Donald A. Butler, RN, BSN
    Manager, Clinical Documentation
    PCMH, Greenville NC
    dbutler@pcmh.com



  • 1) the key word is "outside consultants." I have finally decided I'm no longer going to tailor my program to keep the outside consultant happy. She works for us, not the other way around, and my boss knows the value of what I do. It's the consultant's benchmark, not mine, and it's how they keep themselves in business.

    2) to some extent, the queries are important. But just as important is my reconciliation with the coders--often I find a missing code or I get them to upgrade a diagnosis based on my clinical knowledge or my knowledge of our physicians. If we were really doing a perfect job as CDS, we'd never have to query, because the documentation would be perfect. The better we educate, the fewer queries we should need to generate. But like the cop walking the beat, we always have to be there, watching vigilantly.

    3) I think it depends on the hospital. With a stable physician staff, the query rate should go down, but if there is a constant turnover (e.g., new housestaff, etc.), then there is a constant new pool of physicians to be educated and queried.

    4) CMI has some value, but so much of it is out of our hands. I can't control what surgery a patient has or when a doctor goes out of town and his caseload goes with him. Reimbursement impact matters, but again, an educated physician isn't going to need a query, so the impact is there but not measurable. Right now I am auditing old charts for validity of single MCCs. It's something the "outside consultant" never trained me for or prepared me for, but it's incredibly valuable to our hospital when RAC could be here any time. I think as the only CDS in the building, it's my job to find ways to make myself needed. I've reviewed denial letters and written appeals, too, some predating my tenure in this job.
  • edited May 2016
    Good comments.

    1) yep!
    2) yep!
    3) we are a tertiary care facility with residencies and a med school...
    4) some very relevant observations about activities for which many consultants don't yet seem to have much to offer. I've also heard of at least one case where it sounds like the hospital might have been led into hot water by the training and advice of the consultant.
    These additional activities have the potential to significantly expand the involvement and expertise that a good CDI brings to the organization.

    Don


  • edited May 2016
    I just wanted to reply to your "reply" with a resounding "Right on!"



    Especially regarding consultant benchmarks and CMI! It can be
    frustrating when you see the results of educating physicians, working
    with them on documentation - trying to catch them at the right time -
    getting that query answered - and smiling like a proud parent with a
    tear in your eye when you read a perfect H&P with "Ac on Chr Systolic
    Heart Failure", "Acute Renal Failure" and "E. coli Sepsis due to Urinary
    Tract Infection" - and they call wanting to know why CMI is up this
    month.




  • Thanks to everyone who replied. One project I will be working on is establishing realistic dashboard metrics for CDI Programs. Many of the metrics on current CDI reports/dashboards are only there to measure revenue, in other words, is this program delivering the return on investment that got the consulting firm the contract?

    I think we, as CDS, bring a LOT more value to organizations. So, as a professional organization, it's probably up to us to develop standards of practice and those tools necessary to measure success.

    We know what we do - so why should we have an absentee landlord writing our evaluations? Because, in essence, that's just what's happening in many programs.

    If anyone would like to volunteer to serve on a work group for any of these issues, please contact me:

    Lynne Spryszak, RN, CCDS, CPC-A
    CDI Education Director
    ACDIS

    lspryszak at cdiassociation dot com (written out to prevent spam)
  • edited May 2016
    I agree completely!!!! About two years ago I got permission from our new
    CMO to tailor our program for our facility, not the national data
    presented by our consultants. It has given the CDSs permission to do the
    job they wanted in the first place... assist our physicians to document
    a more complete medical record, which in turn ultimately provides better
    care to our patients. I still use the consultants, primarily for
    education and their software. But we do not take their advice as gospel!

    Thank you,
    Susan Tiffany RN, CDS
    Supervisor
    Clinical Documentation Program
    Tiffany_Susan@guthrie.org





  • edited May 2016
    Hi Lynne,

    I would be interested in joining the work group.


    Linnea Thennes, RN, BS, CCDS

    Clinical Documentation Specialist

    Clinical Resource Management

    Northwest Community Hospital

    847.618-3089

    lthennes@nch.org



  • edited May 2016
    Lynne,

    We would like to participate in this since we are not currently with a consultant. We are in the process of developing our own reporting system and would like to hear other ideas on how to measure our CDI Program.


    Eileen Pracz, RN
    Clinical Documentation Specialist
    Oregon Health Science University
    503-418-4023
    fax 503-494-8439
    pracze@ohsu.edu



  • edited May 2016
    Count me in too please.



    Robert



    Robert S. Hodges, BSN, MSN, RN

    Clinical Documentation Improvement Specialist

    Aleda E. Lutz VAMC

    Mail Code 136

    1500 Weiss Street

    Saginaw MI 48602



    P: 989-497-2500 x13101

    F: 989-321-4912

    E: Robert.Hodges2@va.gov



    "Anyone who has never made a mistake has never tried anything new."
    -Albert Einstein



