Question on AI-generated documentation and CDI validation
I’ve been following some of the recent discussions around AI-generated notes and wanted to ask a broader question from a CDI (clinical documentation integrity) workflow perspective.
It seems like we’re starting to see a shift from under-documentation to a different problem: documentation that looks complete on the surface but isn’t always fully supported when you compare it against the clinical evidence.
For those working in CDI day-to-day, how is this actually showing up? Are you seeing more time spent validating what’s already documented versus identifying what’s missing? And how are teams adapting to that?
My background is more on the data and interoperability side (HL7/FHIR) than in CDI itself, so before going any further I’m trying to understand where the real friction actually shows up in day-to-day workflows.
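
To ground where I’m coming from on the data side, here’s a rough sketch of the kind of naive evidence check I can imagine someone building against a FHIR R4 server: pull a patient’s lactate observations and see whether any of them would plausibly support a documented sepsis diagnosis. To be clear, everything in it is illustrative. The server URL is a placeholder, the LOINC code and the 2.0 mmol/L threshold are my own assumptions, and I know real CDI validation is far more nuanced than a single lab lookup.

```python
# Hypothetical sketch only: flag whether a documented sepsis diagnosis
# has at least some supporting lab evidence, via plain FHIR R4 REST
# queries. Server URL, code choice, and threshold are assumptions.
import requests

FHIR_BASE = "https://example.org/fhir"  # placeholder FHIR server


def lactate_observations(patient_id: str) -> list[dict]:
    """Fetch serum lactate observations (LOINC 2524-7) for a patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": "http://loinc.org|2524-7"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]


def sepsis_has_lab_support(patient_id: str) -> bool:
    """Very rough proxy: does any lactate value exceed 2.0 mmol/L?"""
    for obs in lactate_observations(patient_id):
        value = obs.get("valueQuantity", {}).get("value")
        if value is not None and value > 2.0:
            return True
    return False
```

Even writing that out, I can see the gap: a lab value existing in the record is not the same as the diagnosis being clinically validated, and that gap is exactly the friction I’m asking about.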
Appreciate any perspective.

