Question on AI-generated documentation and CDI validation
I’ve been following some of the recent discussions around AI-generated notes and wanted to ask a broader question from a CDI workflow perspective.
It seems like we’re starting to see a shift from under-documentation to something different: documentation that looks complete on the surface but isn’t always fully supported by the underlying clinical evidence.
For those working in CDI day-to-day, how is this actually showing up? Are you seeing more time spent validating what’s already documented versus identifying what’s missing? And how are teams adapting to that?
I’m trying to better understand where the real friction is before going any further.
My background is more on the data and interoperability side (HL7/FHIR), so I’m trying to learn where this shows up in real CDI workflows.
Appreciate any perspective.


Comments
From my experience, a great deal of time is spent validating AI-generated documentation and assessing unnecessary or incorrect "flagging"—and at this point, there should be. I think we are at a pivotal moment for the use of AI in CDI, and only time will tell whether CDI-human review proves more valuable than AI validation/surfacing alone. Interested to see how others feel.