At a glance
A major US healthcare system needed to identify which older adult patients were most likely to be admitted as inpatients in the coming months — so care managers could prioritize outreach, allocate care-management resources, and support value-based-care contracts. UNVEIL was retained as a direct external contractor to deliver the underlying statistical analyses and predictive risk models, and to document them for the client’s internal team to deploy and monitor.
The situation
The client’s population-health analytics group had a clear strategic goal — proactively identify high-risk, impactable patients within their value-based-care population — but the in-house team did not have spare modeling capacity to develop the analyses and risk models from scratch. Earlier exploratory work had been started but needed to be matured into a production-ready model with documentation suitable for handoff.
The patient population was large and heterogeneous, and the team specifically wanted to focus on adults aged 50 and older, where the clinical and operational stakes around avoidable admissions, long lengths of stay, and readmissions are highest.
The challenge
Three things made this harder than a textbook risk-modeling exercise:
- Impactability, not just risk. Not every high-risk patient is amenable to intervention. The model had to support identifying patients whose admissions could plausibly be prevented through better outpatient management — a more nuanced framing than naive risk prediction.
- Healthcare-grade rigor. The model had to survive clinical, compliance, and analytics review — meaning calibration, fairness checks across subgroups, and feature-leakage controls had to be designed in, not bolted on.
- Handoff, not lock-in. The client wanted the work documented and transferred to their internal team for ongoing deployment and monitoring. There was no value in delivering a black-box artifact only we could maintain.
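The leakage controls mentioned above often come down to a simple temporal rule: every feature must be observed strictly before the prediction date, or the model gets to "see the future" and validation metrics inflate. A minimal sketch of such a guard, assuming hypothetical column names (`feature_ts`, `index_date`) rather than the client's actual schema:

```python
import pandas as pd

def assert_no_temporal_leakage(features: pd.DataFrame,
                               ts_col: str = "feature_ts",
                               index_col: str = "index_date") -> None:
    """Raise if any feature row was observed on or after the prediction
    index date. Such rows would leak future information into training."""
    leaking = features[features[ts_col] >= features[index_col]]
    if not leaking.empty:
        raise ValueError(
            f"{len(leaking)} feature row(s) observed on or after the index date"
        )

# Usage: run on every feature table before it enters training.
df = pd.DataFrame({
    "feature_ts": pd.to_datetime(["2023-01-01", "2023-06-01"]),
    "index_date": pd.to_datetime(["2023-03-01", "2023-03-01"]),
})
try:
    assert_no_temporal_leakage(df)
except ValueError as e:
    print(e)  # the second row leaks: 2023-06-01 is after the index date
```

Designing this in from the start, rather than bolting it on, means the check runs on every candidate feature table rather than being a one-off audit.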
Our approach
We worked inside the client’s secure cloud ML environment, using their data and infrastructure standards.
- Evidence review and feature scoping. Surveyed the published literature on impactability and avoidable-utilization modeling, then translated it into a candidate feature list grounded in the client’s data assets.
- Cohort definition and data preparation. Defined the 50-and-older cohort and worked with a client developer to curate and prepare training, validation, and out-of-time test sets, keeping our data scientist focused on modeling rather than ETL.
- Risk modeling. Trained, calibrated, and validated a predictive model for inpatient-admission risk over a forward-looking window. Performance was evaluated on out-of-time data with calibration plots, subgroup performance breakdowns, and feature-leakage diagnostics.
- Documentation and handoff. Produced a model document covering data lineage, feature definitions, training methodology, validation results, known limitations, and operational guidance — sized for the in-house team to take over deployment, monitoring, and retraining.
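The validation step above combined out-of-time evaluation with calibration plots and subgroup performance breakdowns. As an illustration of that pattern (a minimal scikit-learn sketch with hypothetical inputs, not the client's actual pipeline):

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score

def evaluate_out_of_time(y_true, y_prob, subgroup):
    """Evaluate held-out (out-of-time) predictions three ways:
    overall discrimination, calibration, and per-subgroup discrimination."""
    overall_auc = roc_auc_score(y_true, y_prob)

    # Calibration: in each predicted-probability bin, does the observed
    # admission rate match the mean predicted probability?
    frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)

    # Subgroup breakdown: AUC within each group (e.g. age band, sex),
    # skipping groups where only one outcome class is present.
    subgroup_auc = {}
    for g in np.unique(subgroup):
        mask = subgroup == g
        if len(np.unique(y_true[mask])) == 2:
            subgroup_auc[g] = roc_auc_score(y_true[mask], y_prob[mask])

    return overall_auc, list(zip(mean_pred, frac_pos)), subgroup_auc
```

Because the test set is out-of-time (drawn from a later period than training), these numbers approximate how the model will behave when deployed prospectively, which is what clinical and compliance reviewers actually care about.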
The outcome
- A documented, validated inpatient-admission risk model for adults 50+, ready for the client’s internal team to deploy in their existing MLOps pipeline.
- Statistical analyses that gave the population-health group an updated, evidence-grounded view of their cohort — useful beyond the model itself for value-based-care strategy and care-management prioritization.
- A clean handoff: the client owned every artifact (code, documentation, validation reports), with no dependency on UNVEIL for ongoing operation.
What this means for you
If you serve a population, manage a portfolio, or run a customer base where some segment is high-cost or high-risk, and you want to focus interventions on the people who will actually benefit, we can:
- Bring healthcare-grade statistical rigor to your risk and propensity work — calibration, fairness, leakage controls, out-of-time validation.
- Deliver work as documented, transferable models rather than black-box services — your team owns the code and the playbook.
- Work inside your existing cloud ML environment (Azure, AWS) and to your existing data-governance and compliance standards.
- Augment your in-house team for bounded engagements — define the model, validate it, document it, hand it off — rather than embedding indefinitely.
Want to talk about a similar problem in your organization? Contact us.