This is part two of a three-part series on why and how we should be building an open digital health platform in Wales.

Part one was a high-level overview of how standards and interoperability are critical in building a digital ecosystem. This part assesses our current messaging-based architecture and highlights the benefits of an API-first enterprise architecture together with the new technologies that have facilitated the move to cloud computing in other industries. Part three ties together these strands into a manifesto for phase 1 of building our open platform.

So, what is the most effective overall enterprise architecture that facilitates modern interoperability and an ecosystem, as described in part one?

Any answer to that question must consider the legacy systems already supporting clinical care within and between our organisations. There are many such services, including patient administration systems (which typically exchange HL7 v2 messages), laboratory information management systems, and user identity and authentication services; in Wales, we have an enterprise service bus (ESB)-based messaging architecture linking them together. In addition, each application and service has reference data, rules and heuristics built into it, sometimes explicitly but often implicitly.

The first consideration is whether those services and applications currently deployed can support both robust interoperability and our ecosystem project. The second is how, if they cannot, we can most effectively use what we already have to get there.

Can our current architecture support interoperability and an ecosystem?

An ESB architecture is one example of an event-driven architecture connecting disparate software without tight coupling. An ESB is the typical technology used in the service-oriented architectures (SOA) that were most popular several years ago. For example, there is a set of services and applications already in Wales supporting the care of patients on dialysis units across the country; these take a live, real-time feed of laboratory result messages and persist those data into a repository used in the care of those patients.

The ESB has supported integration, but it has required work for all parties, internal and external, to perform that integration. Normally, an ESB will adopt a standards-based messaging protocol and so data are sent within a data “envelope”, usually SOAP, reflecting the technology of the time. This solves interoperability from a technical standpoint, but semantic and legal interoperability are out-of-scope. The ESB is therefore a component of an event-driven SOA.
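
To make the idea of an “envelope” concrete, here is a minimal sketch in Python of a SOAP-wrapped laboratory result and how a consuming service unwraps it to reach the payload. The message structure, namespaces and field names are illustrative assumptions, not the actual messages carried on the Welsh ESB.

```python
# Illustrative only: unwrap a SOAP "envelope" to reach the laboratory result inside.
import xml.etree.ElementTree as ET

soap_message = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <labResult xmlns="urn:example:lab">
      <patientId>1234567890</patientId>
      <test>Creatinine</test>
      <value units="umol/L">182</value>
    </labResult>
  </soap:Body>
</soap:Envelope>"""

ns = {"soap": "http://www.w3.org/2003/05/soap-envelope", "lab": "urn:example:lab"}
body = ET.fromstring(soap_message).find("soap:Body", ns)
result = body.find("lab:labResult", ns)
print(result.find("lab:test", ns).text, result.find("lab:value", ns).text)
```

Both parties have to agree on this structure in advance, which is precisely the integration work described above.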

In an event-driven architecture, it can be difficult to handle long-running business processes and to orchestrate workflow across services. In addition, while applications and services are decoupled, separated by the ESB, the overall architecture behaves much like a monolith, but with the added complexity of a distributed system in which every component is part of a larger stream of messages.

This makes testing individual components difficult, because changes in one system have consequences for other systems that are hard to predict. A test environment might need to replicate the entire ecosystem, making it harder to automate testing and to have confidence that any component, or the whole ‘system’, is deterministic.

In short, an ESB is an internal solution to enterprise architecture integration. It is poorly suited to creating a truly open platform, although, of course, it can remain an important constituent part. We need to look elsewhere to see how technological trends could transform the way we build healthcare platforms.

What can we learn from cloud computing and modern architectural design?

We have seen a revolution in both how data can be stored and how we can write software that operates on those data; we have cloud computing.

Cloud computing brings together on-demand scalability and efficiency allowing organisations to consume storage and computing power to meet demand. Instead of running your own database in your own data centre, you can delegate its operation, backup and replication to a commercial supplier, normally on a pay-as-you-go basis. Example providers include Amazon AWS, Microsoft Azure and Google Cloud.

The move to a cloud-based architecture has been predicated upon technological advances and new approaches to the design, creation, testing and deployment of modern applications. However, these advances are not limited to deployment in the cloud: technologies such as “containers” can be used in local data centres as well. As such, software can be built as lightweight, portable services deployed in-house or to an external hosting provider anywhere. Indeed, this vendor-neutrality means that, in essence, we are moving to a future in which hosting data and software services is a commodity. As a result of this flexibility, many enterprises adopt a hybrid approach in which their existing infrastructure continues to run on private networks in private data centres, supplemented by new container-based services deployed in-house, by a cloud provider, or in combination.

Modern approaches to application development and operations (such as testing and deployment into live environments) are known as “DevOps” and support greater agility at every step of the software lifecycle: development becomes more modular, modules become more isolated and independently testable, and changes to those modules can be committed, tested and even deployed with little or no manual intervention. We know that modern technology companies like Google deploy billions of containers per week across their cloud infrastructure; to do so requires automation in deployment and scaling to meet their ever-increasing demands for computing power.
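
As an illustration of what “independently testable” means in practice, here is a minimal sketch: a small, self-contained Python module with a unit test that an automated pipeline can run on every change, without standing up any of the surrounding systems. The function and its banding are illustrative and not drawn from any existing Welsh service.

```python
# Illustrative only: a self-contained module that an automated pipeline can test in isolation.

def egfr_category(egfr: float) -> str:
    """Band an estimated glomerular filtration rate into a CKD 'G' category."""
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"


def test_egfr_category():
    # runs automatically, e.g. via pytest, with no other systems needed
    assert egfr_category(95) == "G1"
    assert egfr_category(50) == "G3a"
    assert egfr_category(10) == "G5"
```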

A new type of cloud-based architecture - Serverless - is increasingly popular, recognising that application developers wish to focus on their core business of creating value by writing software and not be concerned with the infrastructure needed to run that software. Serverless allows developers to write a function, which could perform any imagined well-defined computing task, and deploy it easily. Such a function can be used by other software components, but the provider is responsible for automatically scaling the service to meet demand, expanding or contracting the computing resources in response to user need.
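
As a sketch of what such a function might look like, the example below follows the AWS Lambda handler convention (other providers have close equivalents); the task chosen and the shape of the incoming event are illustrative assumptions.

```python
# Illustrative serverless function: a single well-defined task, deployed as a handler
# that the cloud provider invokes and scales automatically on demand.
import json

def handler(event, context):
    """Convert a serum creatinine value from mg/dL to umol/L."""
    body = json.loads(event.get("body", "{}"))        # event shape is an assumption
    creatinine_mg_dl = float(body["creatinine_mg_dl"])
    umol_l = round(creatinine_mg_dl * 88.42, 1)       # standard unit conversion factor
    return {"statusCode": 200, "body": json.dumps({"creatinine_umol_l": umol_l})}
```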

We are now seeing a new market for software code, in which re-usable, independent modules of code, deployed as containers (“microservices”) or as “serverless” functions, can be knitted together to create functionality for end-users. These modules might look simple on the surface but hide (abstract) a great deal of complexity to deliver their results. Have a look at the speech recognition services now available from Amazon and Google: developers using those services simply send an audio file and receive back a text file containing the transcribed text.
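
From the developer’s point of view, consuming such a module can be as little as one call over HTTP. The sketch below is purely illustrative: the endpoint URL and response shape are hypothetical, and real providers each have their own SDKs and request formats.

```python
# Hypothetical client for a hosted speech-to-text service: send audio, get text back.
import requests

def transcribe(audio_path: str) -> str:
    with open(audio_path, "rb") as f:
        response = requests.post(
            "https://speech.example.com/v1/transcribe",   # hypothetical endpoint
            files={"audio": f},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()["transcript"]                  # hypothetical response field
```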

It is easy to see that such an architecture permits vendor-neutrality, in which we could substitute one service for another based on quality and price, with the only change being how we locate the resources used to knit together an end-to-end solution for our users. Substitutability is an important architectural principle for future-proofing and requires us to adopt standardised and open application programming interfaces (APIs).
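
A minimal sketch of that principle, assuming a hypothetical registry of interchangeable providers: the client is written against the standard interface, and the concrete provider is selected by configuration at runtime, so switching supplier does not require changing client code.

```python
# Illustrative resource location: choose a provider by configuration, not by code change.
import os

SERVICE_REGISTRY = {
    "transcription": {
        "provider-a": "https://speech.provider-a.example/v1",   # hypothetical providers
        "provider-b": "https://speech.provider-b.example/v1",
    }
}

def locate(service: str) -> str:
    """Return the base URL for a service, selected by an environment variable."""
    provider = os.environ.get(f"{service.upper()}_PROVIDER", "provider-a")
    return SERVICE_REGISTRY[service][provider]

print(locate("transcription"))
```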

I am convinced that we will see an increasingly sophisticated marketplace providing on-demand computing services in the next five years; for example, see the new Amazon Serverless Application Repository.

As such, we should recognise how cloud computing can inform our architectural designs for healthcare:

  • Virtually unlimited computing power on an as-required, scalable basis.
  • Openness, modularisation and abstraction of software to appropriately ‘decouple’ components and thus facilitate evolutionary change.
  • Sophisticated support for “DevOps” in the develop-test-deploy software lifecycle, with a focus on automation and rapid ‘agile’ development.

An API-first approach for an open architecture

It is not difficult to see that healthcare has lagged behind other industries in adopting a more “open” approach to information technology. The healthcare software market has traditionally been dominated by vendors who supply whole-organisation-scale products that are usually ‘closed’ systems.

Open systems provide interoperability so that different components from different vendors can communicate and exchange meaningful information. In addition, open systems support portability of applications and services. Portability means that software can be written once and deployed without change irrespective of the environment in which it is deployed. Portability requires standardisation and usually some form of runtime configuration and resource location. For example, if an application expects to use HL7 FHIR to fetch blood results and perform analytics, it should be possible to deploy that application within any ecosystem that provides those HL7 FHIR endpoints. These endpoints are what we mean by a standard “API”.
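
To illustrate, here is a minimal sketch assuming a hypothetical FHIR base URL and patient identifier. Nothing in it depends on who hosts the server, so the same code should work against any ecosystem exposing standard HL7 FHIR endpoints.

```python
# Portable client: fetch laboratory Observation resources from any standard FHIR server.
import requests

FHIR_BASE_URL = "https://fhir.example.nhs.wales"   # hypothetical; supplied by the hosting ecosystem

def fetch_lab_results(patient_id: str) -> list:
    response = requests.get(
        f"{FHIR_BASE_URL}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```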

So what is an “API”? An API is an application programming interface. Crudely, it is a set of usually related software functions that, for our purposes, can be called by one system to execute software functionality in another. Essentially, an API is an abstraction: a way of decoupling how a software service is used from how it is implemented. In the example above, you do not need to know how Amazon or Google have implemented their speech transcription service; you send an audio file and get back text. For all you know, they have thousands of transcriptionists sitting at computers listening with headphones and typing! In addition, you do not really care whether they change how they implement the transcription service, so long as you get back the results you expect. They could switch over to a new implementation and you wouldn’t know. The use of APIs means that different modules of software are isolated from one another, permitting work on each to continue independently.
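
That decoupling can be expressed directly in code. In this minimal sketch, with illustrative class names, callers depend only on the interface, so the implementation behind it (a cloud service or, indeed, rooms full of transcriptionists) can change without the caller ever knowing.

```python
# Illustrative abstraction: the caller sees an interface, never an implementation.
from abc import ABC, abstractmethod

class TranscriptionService(ABC):
    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...

class CloudTranscriptionService(TranscriptionService):
    def transcribe(self, audio: bytes) -> str:
        return "..."   # call out to a hosted speech-to-text API (omitted)

class HumanTranscriptionService(TranscriptionService):
    def transcribe(self, audio: bytes) -> str:
        return "..."   # thousands of transcriptionists with headphones, for all the caller knows

def dictate_letter(service: TranscriptionService, audio: bytes) -> str:
    # works identically whichever implementation it is given
    return service.transcribe(audio)
```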

For example, we might expect to be able to make use of a module to identify acute kidney injury (AKI). Currently, this could simply implement the national AKI algorithm, but it could also be used to inform and develop the next generation of algorithms that might identify patients before there is kidney injury, and later be replaced by more sophisticated algorithms with different, and hopefully improved, receiver operating characteristic curves for the patient group at hand.
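
A minimal sketch of such a module behind a stable interface is shown below. The detection rule is a deliberate over-simplification for illustration and is not the national AKI algorithm; the point is that a more sophisticated implementation could later be substituted behind the same interface without its callers changing.

```python
# Illustrative only: a substitutable AKI detection module behind a stable interface.
from abc import ABC, abstractmethod

class AkiDetector(ABC):
    @abstractmethod
    def assess(self, current_creatinine: float, baseline_creatinine: float) -> bool:
        """Return True if the result pattern suggests acute kidney injury."""

class SimpleRatioAkiDetector(AkiDetector):
    """Over-simplified rule for illustration: flags a 1.5-fold rise over baseline."""
    def assess(self, current_creatinine: float, baseline_creatinine: float) -> bool:
        return current_creatinine >= 1.5 * baseline_creatinine
```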

Not all APIs are “standard”. Proprietary APIs might allow interoperability, but at the expense of new work, perhaps on both sides, to perform integration. Similarly, proprietary interfaces usually expose implementation details so that, rather than being appropriately decoupled, the two software systems become entwined and dependent - more like a limpet clinging to a rock than the flexible, interchangeable plugs and sockets we all use for our electrical devices at home!

A focus on open and standard APIs is key to creating a powerful ecosystem so that both internal and external developers can contribute and work independently. Our ecosystem project in Wales must not simply cater to external users but be the platform on which new innovation can occur both internally and in partnership with others. This is a paradigm shift from the closed approach we have seen in healthcare in the past and treats external developers as first-class citizens in our digital health ecosystem.

Our efforts in Wales on standards, interoperability and the digital ecosystem must therefore build a ‘connected business platform’ usable by developers, internal and external, on behalf of users across health and care - and, most importantly, the patient and their delegates - to support a transformation in how we deliver healthcare.

Privacy and trust

It is important not to confuse “openness” with a lack of data security. Openness, as a principle in software architecture, relates to interoperability. For example, TCP/IP, the technical underpinning of the Internet, is an open protocol, but that does not stop your bank using it to offer online banking and to authorise and execute financial transactions.

An open architecture does not mean that confidential patient information is made accessible to all. Any design for a target architecture for Wales must take into account the need for authentication (‘you are who you say you are’), authorisation (‘you are allowed to do what you are asking to do’) and verifiable logging (‘we are watching what you do’), as well as the need to handle different levels of sensitive information. For example, it would be expected that sexual health, genomic, mental health and social care data might be treated differently to general health data.
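
As a minimal sketch of how those three checks might wrap every request in an API-based service, the example below uses an illustrative user model, clearance levels and sensitivity tiers; they are assumptions for the purpose of the example, not a proposed information governance policy.

```python
# Illustrative request handling: authenticate, authorise against data sensitivity, and log.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

SENSITIVITY = {"general": 1, "mental_health": 2, "sexual_health": 3, "genomic": 3}   # illustrative tiers

def handle_request(user: dict, resource: str, category: str) -> str:
    if not user.get("authenticated"):                      # 'you are who you say you are'
        raise PermissionError("authentication required")
    if user.get("clearance", 0) < SENSITIVITY[category]:   # 'you are allowed to do this'
        audit_log.warning("denied: %s requested %s (%s)", user["id"], resource, category)
        raise PermissionError("not authorised for this category of data")
    audit_log.info("granted: %s accessed %s (%s)", user["id"], resource, category)    # 'we are watching'
    return f"contents of {resource}"
```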

Similarly, accessibility might depend not only on the type of data but also on the intended use of those data. For example, data made available for direct patient care might depend on the direct care relationship between citizen and professional, while aggregated data used by a clinical service or organisation for planning and performance management, and data made available to researchers, would be handled quite differently.

It is difficult to ensure privacy and trust when one uses a ‘push’-based messaging fabric like an ESB, because data are sent to services which then consume or discard those messages. An API-based approach is usually a ‘pull’ service, in which a client requests something from a service; the service can decide whether to satisfy that request on the basis of authentication and authorisation, and can record such access via a logging service. Of course, an API-based approach can adapt to a ‘push’ model, usually via ‘publish-and-subscribe’, in which interested clients subscribe to notifications from a service.
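
As a minimal sketch of ‘publish-and-subscribe’, the example below uses a simple in-memory broker standing in for real infrastructure: clients that care about an event register for it, and the service notifies only those clients when it occurs.

```python
# Illustrative publish-and-subscribe: clients opt in to notifications rather than receiving pushed messages.
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    """An in-memory stand-in for a publish-and-subscribe service."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)    # an interested client registers for a topic

    def publish(self, topic: str, event: dict) -> None:
        for callback in self._subscribers[topic]:    # only subscribers are notified
            callback(event)

broker = Broker()
broker.subscribe("lab-results", lambda event: print("new result for", event["patient_id"]))
broker.publish("lab-results", {"patient_id": "1234567890", "test": "Creatinine"})
```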

So what next?

The next question to ask is, what are the next steps that we need to take to build an open platform in Wales using these architectural design principles? I have tried to answer this in my next post.

Part three ties together these strands into a manifesto for phase 1 of building our open platform.