CDS Productivity Report

I'm looking for CDS Productivity Report examples. Would appreciate any examples attached. Currently we look at the number of admissions and continued stays (follow-ups) per CDS assigned area. We also look at physician interaction and non-productive time.

Thank you,

Norma T. Brunson, RHIA,CDIP,CCDS

Comments

  • I think this might be related to your question.

    Attached is a draft (for next year) of what I use for evals.
    Major difference in the draft vs this year's document is incorporation of some benchmarking data.

    We've recently had discussion about what metrics should / should not be applied to the individual CDS -- such as physician agreement. IMHO, the key is the degree to which the individual CDS has significant influence or control over the outcome. We have not reached a consensus on these questions.

    These are interpreted when appropriate -- for example, if the program as a whole is missing the response metric (and there are external factors), then I would rate the individual more against the program average than against these (somewhat arbitrary?) metrics. Similarly, when there is a long-established pattern with a specific service line, adjustments are made.

    Final caveat -- the specific numbers shown are far from settled, and some of the metrics may be more "monitor" than directly applied toward the eval rating.

    I've also attached a document that I use for sharing individual metrics and comparing them to team averages.

    See this blog post: CDI Productivity Benchmarks (A CDI Talk topic)
    http://blogs.hcpro.com/acdis/2011/11/cdi-productivity-benchmarks-a-cdi-talk-topic/

    Don

  • edited May 2016
    Thanks Don! Those are great tools! I will keep them handy.

    However, I'm looking for something more day-to-day.

    Currently each CDS fills out a weekly productivity sheet with the number of admissions reviewed, number of follow-ups, number of queries submitted, etc. This is broken down further into Financial classes as we are not an "all-payer" review program, and by nursing station.

    It's a little convoluted, and although we actually have a procedure for filling out the report (yes), each CDS fills it out differently. So we would like something more streamlined.

    It's more a way to keep up with what each CDS does with their day.
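    For what it's worth, a fixed schema is one way to keep every CDS filling the sheet out the same way. This is only a hypothetical sketch in Python -- the field names, station, and payer class are invented examples, not your actual report columns:

```python
# Hypothetical sketch of a streamlined weekly productivity sheet: one row per
# CDS per day with a fixed set of columns, so everyone records the same fields
# the same way. The field names and sample values are invented examples.
import csv
import io

FIELDS = ["date", "cds", "nursing_station", "financial_class",
          "admissions_reviewed", "follow_ups", "queries_submitted"]

def to_csv(rows):
    """Serialize the daily rows into a single CSV report."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

sample = [
    {"date": "2016-05-02", "cds": "NTB", "nursing_station": "4E",
     "financial_class": "Medicare", "admissions_reviewed": 5,
     "follow_ups": 3, "queries_submitted": 2},
]
report = to_csv(sample)
```

    Because every row carries the same columns, weekly totals per CDS or per financial class become a simple filter-and-sum instead of re-reading free-form sheets.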

    NTB



  • Certainly.

    This sounds like it may not be different from what you are already using. I find it useful with a new CDS.

    Don

  • edited May 2016
    YES! It almost looks like the one we are currently working with, but I like how you have divided out the Query section. And it looks very neat and uncomplicated.

    Thank you so much!

    Norma T. Brunson, RHIA,CDIP,CCDS


  • A short clarification -- the first several columns are meant to count the number of NEW cases that appear on a worklist needing review for each of the units the CDS is responsible for. The cases actually reviewed are then counted under initial reviews. Generally that number should be equal to or less than the total number of new cases, unless the CDS is picking up missed cases from a previous day or cross-covering to assist a colleague.
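    To make that counting rule concrete, here is a minimal Python sketch of the per-unit tally. The unit names and counts are invented; this isn't the actual worklist software:

```python
# Hypothetical sketch of the per-unit tally: new cases on the worklist vs.
# initial reviews completed. Unit names and numbers are invented examples.
from dataclasses import dataclass

@dataclass
class UnitDay:
    unit: str
    new_cases: int        # NEW cases appearing on the worklist for this unit
    initial_reviews: int  # cases actually reviewed that day

def daily_summary(rows):
    """Total new cases and initial reviews across units, and flag any unit
    where reviews exceed new cases (carry-over or cross-coverage)."""
    total_new = sum(r.new_cases for r in rows)
    total_reviews = sum(r.initial_reviews for r in rows)
    carryover = [r.unit for r in rows if r.initial_reviews > r.new_cases]
    return total_new, total_reviews, carryover

day = [UnitDay("4 East", 6, 5), UnitDay("5 West", 4, 6), UnitDay("ICU", 3, 3)]
new, reviews, flagged = daily_summary(day)
# "5 West" is flagged: more reviews than new cases, which per the rule above
# suggests missed cases from a previous day or cross-covering for a colleague.
```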

    Don

  • Someone asked privately what SE stood for, as well as what the 1:1, 1:2, 1:3 re-reviews meant... figured someone else might be wondering.

    The 5 levels of eval are: unacceptable, below, meets, exceeds, and substantially exceeds (SE).

    On the re-reviews, it's the ratio of initial reviews (# of cases) to re-reviews done.
    Unfortunately, we don't have the software technology at this point to count re-reviews...

    I'll also add that some of the metrics don't have defined measures -- there may not be a way to measure them, but they are included as a (philosophical) statement of what is valued and important. These factors still influence the overall eval rating (based on 'stories' and trended impressions).

    Don


  • edited May 2016
    Yes, that's the way we also count them: total admits, breakdown by unit, total admits "reviewed", follow-ups, queries. We do not break out the responses, which is what I like most about your report.

    We split out Blue Cross, Tricare, and Medicaid. Our software will not allow us to add them into our reviews -- they are reviewed "manually". I feel we need to capture the data somehow. Some floors carry a higher incidence of those admits, so there needs to be a way to factor that data into assignment splits. That's where the report becomes a bit convoluted.

    Thank you for your input and examples. They are greatly appreciated!

    Norma T. Brunson, RHIA, CDIP, CCDS


  • Norma -- what software do you use?
    You may have to 'pretend' those cases are Medicare, but also be able to flag them in some way so you can manually pull them out when calculating impacts? In part, it depends on what fields in the software you can customize.
    I would have a similar challenge with my current software, but I believe I could figure out a way to work around it (we are currently using an older version of JA Thomas software).

    Contact me directly if you'd like.

    Don

    Donald A. Butler, RN, BSN
    Manager, Clinical Documentation
    Vidant Medical Center, Greenville NC
    DButler@vidanthealth.com

