CDS Productivity Report
I'm looking for CDS Productivity Report examples and would appreciate any examples attached. Currently we look at the number of admissions and continued stays (follow-ups) per CDS assigned area. We also look at physician interaction and non-productive time.
Thank you,
Norma T. Brunson, RHIA, CDIP, CCDS
Comments
Attached is a draft (for next year) of what I use for evals.
The major difference between the draft and this year's document is the incorporation of some benchmarking data.
We've recently had discussion about what metrics should / should not be applied to the individual CDS -- such as physician agreement. IMHO, the key is the degree to which the individual CDS has significant influence or control over the outcome. We have not reached a consensus on these questions.
These are interpreted when appropriate -- for example, if the program as a whole is missing a response metric (and there are external factors), then the individual would be rated more against the program average than against these (somewhat arbitrary?) metrics. Similarly, when there is a long-established pattern with a specific service line, adjustments are made.
Final caveat -- the specific numbers shown are far from settled, and some of the metrics may be more "monitor" than directly applied toward the eval rating.
I've also attached a document that I use for sharing individual metrics and comparing them to team averages.
See this blog post: CDI Productivity Benchmarks (A CDI Talk topic)
http://blogs.hcpro.com/acdis/2011/11/cdi-productivity-benchmarks-a-cdi-talk-topic/
Don
However, I'm looking for something more day-to-day.
Currently each CDS fills out a weekly productivity sheet with the number of admissions reviewed, number of follow-ups, number of queries submitted, etc. This is broken down further by financial class (as we are not an "all-payer" review program) and by nursing station.
It's a little convoluted -- and although we actually have a procedure for filling out the report (yes!), each CDS fills it out differently. So we would like something more streamlined.
It's more a way to keep up with what each CDS does with their day.
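A streamlined sheet like the one described above could be reduced to logging each review once and letting the totals fall out of the data. The sketch below is purely illustrative -- the field names (`cds`, `financial_class`, `station`) and sample values are assumptions based on the breakdowns mentioned in this thread, not any actual CDI tool:

```python
from collections import Counter

# Each review is logged once, with the breakdowns described above.
# Field names and values are illustrative, not from any specific software.
reviews = [
    {"cds": "Smith", "type": "admission", "financial_class": "Medicare", "station": "4W"},
    {"cds": "Smith", "type": "follow_up", "financial_class": "Medicaid", "station": "4W"},
    {"cds": "Jones", "type": "admission", "financial_class": "Tricare",  "station": "5E"},
]

# Weekly totals per CDS, by review type
totals = Counter((r["cds"], r["type"]) for r in reviews)

# Breakdown by financial class for one CDS
smith_by_class = Counter(
    r["financial_class"] for r in reviews if r["cds"] == "Smith"
)

print(totals[("Smith", "admission")])  # 1
print(smith_by_class["Medicare"])      # 1
```

The point of logging per-review rather than per-sheet is that every CDS records the same raw event, so the report can't be "filled out differently" -- the rollups are computed the same way for everyone.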
NTB
This sounds like it may not be much different from what you are already using. I find it useful with a new CDS.
Don
Thank you so much!
Norma T. Brunson, RHIA, CDIP, CCDS
Don
The 5 levels of eval are: unacceptable, below, meets, exceeds & substantially exceeds (SE).
On the re-reviews: the ratio of initial reviews (# of cases) to re-reviews done.
Unfortunately, we don't have the software technology at this point to count re-reviews...
I'll also add that some of the metrics don't have defined measures -- there may not be a way to measure them, but they are included as a (philosophical) statement of what is valued and important. These factors still influence the overall eval rating (based on 'stories' and trended impressions).
Don
We split out Blue Cross, Tricare, and Medicaid. Our software will not allow us to add them into our reviews; they are reviewed "manually". I feel we need to capture the data somehow. Some floors carry a higher incidence of those admits, so there needs to be a way to capture that data in assignment splits. That's where the report becomes a bit convoluted.
Thank you for your input and examples. They are greatly appreciated!
Norma T. Brunson, RHIA, CDIP, CCDS
You may have to 'pretend' those cases are Medicare, but also be able to flag them in some way so you can manually pull them out when calculating impacts. In part, it depends on what fields in the software you can customize.
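The workaround above could look something like this on an exported case list. Everything here is hypothetical -- `payer_flag` stands in for whatever customizable field a given CDI system actually offers, and the records are made-up examples:

```python
# Hypothetical exported case records. "payer_flag" stands in for whatever
# customizable field the software lets you repurpose; None means a true
# Medicare case, a value means "entered as Medicare but actually another payer".
cases = [
    {"id": 1, "payer_entered": "Medicare", "payer_flag": None,             "impact": 1200.0},
    {"id": 2, "payer_entered": "Medicare", "payer_flag": "manual-BC",      "impact": 800.0},
    {"id": 3, "payer_entered": "Medicare", "payer_flag": "manual-Tricare", "impact": 500.0},
]

# Pull the flagged cases back out before calculating the Medicare-only impact
medicare_impact = sum(c["impact"] for c in cases if c["payer_flag"] is None)
manual_case_ids = [c["id"] for c in cases if c["payer_flag"] is not None]

print(medicare_impact)   # 1200.0
print(manual_case_ids)   # [2, 3]
```

So the flag does double duty: the cases still flow through the normal review workflow "as Medicare", but any impact calculation can exclude (or separately total) the manually reviewed payers.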
I would have a similar challenge with my current software, but I believe I could figure out a way to work around it (currently we are using an older version of JA Thomas software).
Contact me directly if you'd like.
Don
Donald A. Butler, RN, BSN
Manager, Clinical Documentation
Vidant Medical Center, Greenville NC
DButler@vidanthealth.com