RE: number of queries/evaluation
We only use the calculations in our JATA software. I appreciate the information so far, but does anyone else have information they would be willing to share about what data is collected or used for the yearly CDI evaluation? I am asking only about the CDI part.
Mary A. Hosler RN, MSN CDS
Alumnus CCRN
McLaren Bay Region
1900 Columbus Ave.
Bay City, Michigan 48708
(989) 891-8072
mary.hosler@mclaren.org
I share these with my CDI team and discuss them, and I provide quarterly
data/metrics reports that let each team member see where they stand.
Query rate = # of cases with one or more queries / total # of cases
reviewed
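In code, that formula works out to something like the following minimal Python sketch; the field names (case_id, query_count) are hypothetical stand-ins, not fields from JATA or any other CDI product:

# Minimal sketch of the query-rate calculation described above.
# Field names are hypothetical, not from any specific CDI software.

def query_rate(cases):
    """Query rate = cases with >= 1 query / total cases reviewed."""
    if not cases:
        return 0.0
    queried = sum(1 for c in cases if c["query_count"] >= 1)
    return queried / len(cases)

reviews = [
    {"case_id": "A1", "query_count": 2},
    {"case_id": "A2", "query_count": 0},
    {"case_id": "A3", "query_count": 1},
]
print(f"Query rate: {query_rate(reviews):.1%}")  # -> 66.7%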
Don
Donald A. Butler, RN, BSN
Manager, Clinical Documentation
Vidant Medical Center, Greenville NC
DButler@vidanthealth.com
Paul Evans, RHIA, CCS, CCS-P, CCDS
Manager, Regional Clinical Documentation & Coding Integrity
Sutter West Bay
633 Folsom St., 7th Floor, Office 7-044
San Francisco, CA 94107
Cell: 415.637.9002
Fax: 415.600.1325
Ofc: 415.600.3739
evanspx@sutterhealth.org
However, I realized I have an updated version, now attached.
I first put this together about four years ago at the request of my
team -- they were looking for objective measures to help them anticipate
annual evaluations.
I've been able to incorporate both external benchmarks and long-term
internal trends. The external benchmarks include ACDIS surveys, research
from the Advisory Board, and conversations with at least two
consultants.
Please note the changes from the version I shared earlier.
There was quite a bit of conversation behind this current version, much
of it focused on which metrics the CDS has almost complete control over
(# of chart reviews), which ones the CDS strongly influences (obtaining
a response), and which ones the CDS has no significant direct influence
over (how the provider responds).
As a result, we shifted some elements to being "monitored" metrics --
the metric does not directly affect the evaluation, but it can trigger
closer examination if it falls outside the expected range (and that
examination could, and likely would, affect the evaluation).
The KR column refers to Key Responsibilities, which are the specific
areas evaluated; metrics are aligned with those KRs on the evaluation
form, and the KRs are also outlined in the position description.
Data accuracy is a calculation pulled from our CDI data repository,
where I calculate the accuracy of query-outcome data -- the expectation
is that the only queries scored as potential financial impact with an
'agreed' response are those that actually demonstrated a financial
impact. Ideally, this value should be 100%.
Case impact -- the % of all cases reviewed that achieved a financial
impact.
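For anyone wanting to reproduce those two figures, here is a minimal Python sketch; the record fields (agreed, scored_financial, actual_financial_impact, financial_impact) are hypothetical stand-ins for whatever your CDI repository exports:

# Sketch of the data-accuracy and case-impact calculations above.
# Field names are hypothetical.

def data_accuracy(queries):
    # Of queries scored as potential financial impact with an 'agreed'
    # response, what fraction actually demonstrated a financial impact?
    scored = [q for q in queries
              if q["agreed"] and q["scored_financial"]]
    if not scored:
        return 1.0  # nothing scored, nothing inaccurate
    confirmed = sum(1 for q in scored if q["actual_financial_impact"])
    return confirmed / len(scored)  # ideally 1.0 (100%)

def case_impact(cases):
    # % of all cases reviewed that achieved a financial impact.
    if not cases:
        return 0.0
    return sum(1 for c in cases if c["financial_impact"]) / len(cases)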
A comment on the volume metrics: the O/E is an observed-to-expected
ratio. The "meets" level of about 1,850 discharged cases reviewed
annually is a raw number. For individuals who have an incidence of
FMLA, for example, actual worked time will be considerably less.
Historically, my team works about 85% of possible annual hours, and the
1,850 matches up with that. The O/E compares case volume to actual
worked hours (it works out to 1.03 cases per hour). It is particularly
useful when looking at monthly or quarterly metrics, when normal (i.e.,
non-FMLA) variations in time off among team members may occur.
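As a rough illustration of the O/E math (the 1.03 cases-per-hour expectation comes from the post above; the example numbers are made up):

# Sketch of the observed-to-expected volume ratio described above.
# Expected volume = actual worked hours * 1.03 cases per hour.

EXPECTED_CASES_PER_HOUR = 1.03

def oe_ratio(cases_reviewed, worked_hours):
    expected = worked_hours * EXPECTED_CASES_PER_HOUR
    return cases_reviewed / expected if expected else 0.0

# A CDS who worked 150 hours in a month and reviewed 160 cases:
print(round(oe_ratio(160, 150), 2))  # ~1.04, just above expected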
I also added ranges for lower performance.
The levels of performance are:
U -- unacceptable
B -- below expected
M -- meets expected
E -- exceeds expected
SE -- substantially exceeds
Reading the comments above, it may seem that we place a large emphasis
on financial impact. That is not entirely correct. We also have a
significant focus on mortality and LOS predictive profiling, accuracy
of the record, appropriately capturing complications, etc.;
unfortunately, the software tools currently available to me are limited
in their ability to capture and measure those activities and outcomes.
Don
Vanessa Falkoff RN
Clinical Documentation Coordinator
University Medical Center
Las Vegas, NV
vanessa.falkoff@umcsn.com
office 702-383-7322
cell 702-204-0054