This is part one of a 3-part series on designing future clinical information systems. This post covers the importance of data models. Part 2 covers clinical modelling in relation to process and workflow. Part 3 covers patient portals and the internet-of-things.


There are two fundamental principles that we must consider when designing clinical information systems for the future:

  1. We can’t build perfect systems straightaway; if we wait until everything is perfect, we’ll never get anything done.
  2. We shouldn’t give up on creating perfect systems. We’ll never achieve perfection of course, because there will always be something even better to implement, so we need a roadmap for what we need to solve now, what to solve tomorrow and what to solve in the future.

The conclusion is, therefore, we must get technology in the hands of users, but plan to iterate and try to design systems that are future-proof.

As such, information technology (IT) is not something that one can buy, deploy and then sit back and bask in the glory of improved efficiency and streamlined processes. Instead, we need to consider how our clinical processes work now, how they might, could or should work in the future and build systems that support those processes. At the very least, our designs should not unnecessarily limit our ability to adapt to new ways of working.

None of us has all of the answers at this point in time, so what can we do to create future-proof systems that solve problems now without unnecessarily limiting our future scope or preventing future innovation?

The answer is to develop clinical models of data and the processes and workflows relating to those data.

We start with our data.

Clinical modelling

Perhaps I can draw a parallel between software engineering and a famous Danish interlocking brick toy company?

If that toy company creates a model with blocks that are specific to that model, only usable in the construction of that car or boat or train, then it might be possible to try to re-purpose those blocks to build an aeroplane, but it is much less likely. Conversely, creating a wide-range of general purpose blocks means that those blocks can be re-used in hundreds of different applications right now and in the future.

As I opined in my Domain-driven clinical design document, the same is true in software. The best way of creating systems that solve problems right now and support our future iterative development efforts is to start with open and standard data structures with a core focus on interoperability. I cannot overstate the importance of supporting interoperability, even in the unlikely event that you think you don’t need to interoperate with any other system right now. Baking in interoperability forces you and your team to make the right decisions when it comes to designing an architecture to support your software. I have heard people argue that interoperability is something “nice to have” but we’ve “got to get something done” as if designing systems properly will somehow slow or impede development.

Now, the designers of the interlocking brick systems have a problem. They can’t know what bricks to make until they start thinking about the toys they want to be able to build with those bricks, but they can’t design the toys until they know which bricks they have available. So they need to ensure their teams work iteratively, bringing people together with different skills, using some common basic bricks and adding more specialised bricks for certain use-cases. Likewise, in software, modelling data and process requires an understanding of both and how they might be used now and changed in the future.

However, we can make it easier for ourselves. We can begin with modelling the easy things: won’t a blood pressure recorded in millimetres of mercury always be a blood pressure recorded in millimetres of mercury? Well yes, but don’t we also need to codify who took it, when they took it, how they took it, and what the patient was doing at the time? How can we create a representation in a computer that is re-usable and understandable by multiple systems? So now we need something to represent “120/70 mmHg taken in the right arm by Dr Wardle in the sitting position at rest using a manual sphygmomanometer”. But what about a blood pressure taken using a transducer from an arterial line in the intensive care unit?

So suddenly we need to start thinking not just about representing a numeric value in specified units but all of the additional metadata. We are building an information model to represent a real-life concept.
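To make this concrete, here is a minimal sketch of such an information model in Python. The field names are illustrative only, invented for this example; a real model such as openEHR’s blood pressure archetype (discussed below) is far richer and more rigorously specified.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BloodPressure:
    """A toy information model for a blood pressure measurement.

    The numeric values alone are not enough; the surrounding
    metadata is what makes the record re-usable by other systems.
    """
    systolic_mmhg: int              # e.g. 120
    diastolic_mmhg: int             # e.g. 70
    taken_at: datetime              # when it was taken
    taken_by: str                   # who took it, e.g. "Dr Wardle"
    site: str                       # where, e.g. "right arm"
    position: str                   # patient position, e.g. "sitting"
    method: str                     # e.g. "manual sphygmomanometer" or "arterial line transducer"
    activity: Optional[str] = None  # what the patient was doing, e.g. "at rest"

# "120/70 mmHg taken in the right arm by Dr Wardle in the sitting
# position at rest using a manual sphygmomanometer"
bp = BloodPressure(120, 70, datetime(2017, 5, 1, 9, 30),
                   "Dr Wardle", "right arm", "sitting",
                   "manual sphygmomanometer", "at rest")
```

Note how the same structure accommodates the intensive-care case simply by changing the `method` field, rather than requiring a new model.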

Fortunately, someone has done most of the work for us already. Read Dr Heather Leslie’s (@omowizard) blog post about blood pressure to see how openEHR provides a rich model of the concept of blood pressure. Similarly, the HL7 FHIR observation resource provides a simpler but similar representation, with the metadata represented as a choice of options in openEHR or, in FHIR, using a standardised terminology such as SNOMED-CT.
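For a flavour of the FHIR approach, here is a simplified Observation resource for a blood pressure reading, expressed as a Python dict (FHIR resources are typically exchanged as JSON). This is a sketch, not a complete or validated resource; the LOINC codes shown (85354-9 for the blood pressure panel, 8480-6 systolic, 8462-4 diastolic) are the standard ones, but consult the FHIR specification for the full structure.

```python
# A simplified HL7 FHIR Observation for a blood pressure reading.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "effectiveDateTime": "2017-05-01T09:30:00Z",
    "bodySite": {"text": "Right arm"},
    "component": [
        # Systolic and diastolic values are components of one observation.
        {"code": {"coding": [{"system": "http://loinc.org",
                              "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 120, "unit": "mmHg",
                           "system": "http://unitsofmeasure.org",
                           "code": "mm[Hg]"}},
        {"code": {"coding": [{"system": "http://loinc.org",
                              "code": "8462-4",
                              "display": "Diastolic blood pressure"}]},
         "valueQuantity": {"value": 70, "unit": "mmHg",
                           "system": "http://unitsofmeasure.org",
                           "code": "mm[Hg]"}},
    ],
}
```

Because the codes and units come from shared terminologies (LOINC, UCUM), any FHIR-aware system can interpret this record without bespoke integration work.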

Indeed, there are already a large number of archetypes defined within openEHR representing a wide-range of clinical concepts. Similarly, HL7 FHIR already supports exposing a great variety of (similar) clinical concepts as part of an API.
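Exposing such concepts through an API is then a matter of standard FHIR search requests. The sketch below builds a query URL for a patient’s blood pressure observations; the server base URL and patient identifier are hypothetical, but `patient` and `code` are standard FHIR search parameters.

```python
from urllib.parse import urlencode

# Hypothetical FHIR server endpoint; any FHIR-conformant server
# exposes Observation resources at a path like this.
base = "https://fhir.example.org/Observation"

# Standard FHIR search parameters: filter by patient and by the
# LOINC code for the blood pressure panel (system|code syntax).
params = {"patient": "123", "code": "http://loinc.org|85354-9"}

url = base + "?" + urlencode(params)
# A GET request to this URL would return a Bundle of matching
# Observation resources.
```

The point is that the *query*, like the data it returns, is defined by an open standard, so a consuming application need not know anything about the system that holds the record.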

Next steps

The future of healthcare systems is based on open-standards and semantic interoperability in which multiple systems can share information about patients. We can use vendor-neutral archives for data persistence of openEHR archetypes or expose proprietary legacy information to other systems using a standardised HL7 FHIR API. Whatever the case, starting with our data allows us to solve our immediate problems and permits ongoing iterative development to use that information in new and innovative ways in order to support the care of patients.

Part 2 of this series covers clinical modelling in relation to process and workflow.