The challenge: Accessing meaningful, high-quality data

To make the most informed diagnostic and treatment decisions, clinicians need a complete longitudinal health record populated with trusted, high-quality data.

For most healthcare providers, networks, and health systems, data about the patient population is spread across a range of electronic systems – both internal and external to the organisation. To make sense of that data, or to gain valid insights from it, it must be combined and made accessible in one place. This is the first critical step toward delivering advanced population health.

Many years of effort promoting the benefits of interoperability have resulted in a fair degree of standardisation in the interfacing approaches used by most electronic health record (EHR) systems. The latest FHIR standards offer the best strategic technical approach currently available. Even so, the on-the-ground reality is that existing clinical systems still vary hugely in how they support the sharing of data. The challenges grow as we better understand the importance of non-traditional data in determining patient outcomes.

New and emerging data types

The types of data that should be considered part of a complete patient record are many, and the potential volume is staggering. Clinical data drawn from across a health region or jurisdiction is just the first step. We can build on that base by adding behavioural information, social determinants of health, claims data, patient-generated data and genomic data. Patient-generated data has expanded to include data from medical devices, and potentially from consumer-marketed tests, wellness devices, exercise trackers and more. Another massive category of potentially relevant data is genomic data and the other ‘-omics’ data, including transcriptomics, proteomics, epigenomics and metabolomics. Still another group is environmental data and the exposome.

These new types are as significant to an individual’s health as traditional data types. As such, we must accommodate them in our approach to aggregation.

Non-standardised data across disparate systems

But what about the range of disparate systems in which this data resides? Valuable data can be found in large data repositories – often housed on legacy technical platforms – that are still operational and handle high transaction volumes. What should we do with that data?

A solution to this challenge is to normalise and index this data so it can be easily incorporated into a shared record or health information exchange (HIE). This is a complex, though entirely achievable, task – and one that must be engineered to deliver scrupulous data quality, which is crucial for clinical purposes.

Clinical terminology services, translators and natural language processing engines are essential to ensure an accurate semantic layer that can normalise both free text and structured data of all types as they are expressed across many different systems.  
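To make the normalisation step concrete, here is a minimal sketch of code-level semantic mapping. The mapping table, system names and function are purely illustrative assumptions – a production deployment would query a dedicated terminology service with full SNOMED CT or LOINC content rather than a hand-made dictionary.

```python
# Minimal sketch of a terminology-normalisation step. The local systems
# and codes below are hypothetical; the LOINC targets are real codes,
# shown only to illustrate mapping two local codes to one shared concept.
LOCAL_TO_LOINC = {
    ("LAB_SYS_A", "GLU"): "2345-7",    # serum/plasma glucose
    ("LAB_SYS_B", "GLUC"): "2345-7",   # same concept, different local code
    ("LAB_SYS_A", "HBA1C"): "4548-4",  # haemoglobin A1c
}

def normalise_code(source_system: str, local_code: str):
    """Map a source-system-specific code to a shared vocabulary.

    Returns None when no mapping exists, so unmapped codes can be
    queued for human review rather than silently dropped.
    """
    return LOCAL_TO_LOINC.get((source_system, local_code.upper()))
```

The key design point is that two different local codes resolve to the same standard concept, which is what allows results from disparate systems to be compared side by side in an aggregated record.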

Data sharing capabilities are crucial

Data sharing is key to enabling high-quality data aggregation. As already outlined, the reality is that there are many formats and available standards. To aggregate data centrally, the aggregating mechanism must be able to work with all of those formats and standards, including HL7 V2.x, CDA, C-CDA and FHIR as a bare minimum.
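As a rough illustration of what handling multiple standards implies at ingestion time, the sketch below classifies an inbound payload by inspecting its shape: HL7 v2.x messages begin with an MSH segment, CDA/C-CDA documents are XML with a ClinicalDocument root, and FHIR JSON resources carry a resourceType field. The function name is an assumption, and real routing logic would be considerably more robust.

```python
import json

def detect_format(raw: str) -> str:
    """Best-effort classification of an inbound payload by format.

    Checks the cheap structural signals first (HL7 v2 MSH header,
    CDA's ClinicalDocument XML root), then falls back to parsing
    JSON and looking for FHIR's resourceType field. Anything else
    is flagged for manual triage.
    """
    text = raw.lstrip()
    if text.startswith("MSH|"):
        return "hl7v2"
    if text.startswith("<") and "ClinicalDocument" in text:
        return "cda"
    try:
        body = json.loads(text)
    except ValueError:
        return "unknown"
    return "fhir" if isinstance(body, dict) and "resourceType" in body else "unknown"
```

A dispatcher like this would sit in front of format-specific parsers, so each message is transformed by the right pipeline before its content is normalised and indexed.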

Rich, aggregated patient data is the backbone of advanced population health. An open ecosystem that facilitates a complete patient record enables clinicians to make better-informed diagnoses and to manage an entire population of patients efficiently. Indeed, the global trend is to pass legislation that promotes greater interoperability and data sharing and requires the use of FHIR APIs. All of this is done with the intent of improving individual patient care as well as the care of an entire population.

Data aggregation is crucial to delivering the right care to the right patient, at the right time and in the right place. It is the first – and arguably most important – step on the path to advanced population health.

Interested in learning how Orion Health can help with comprehensive data aggregation?