Scope

: defines the requirements, objectives and deliverables of a project.

Framing

: a way of structuring or presenting a problem or issue

Clinical design

Scope and framing are critical in designing valuable health information technology.

If you ask for a system to request laboratory tests electronically, then that’s what you’ll get.

If your initial framing of the problem is wrong, you won’t get a solution that solves the true user problem. The user problem is defined by the user, with support to understand and explore the problem in its entirety.

When one creates a model - whether the model represents a train, a new building or software - there is a balance between over-complexity and over-simplification. As I wrote in my ‘Domain-driven design’ document:

A model is an abstraction of the real world, distilling knowledge into core concepts and processes. A model is a careful balance between complexity and over-simplification: models that are too complex provide brittle abstractions that generalise poorly, and models that are too simple do not support all of the demands placed upon them in real-life clinical scenarios.

See the full document.

User-centred design

Clinical information systems should solve real-life user problems. The user may be the patient, a health professional, a member of the laboratory staff or anyone else involved either directly or indirectly in patient care. Information systems are therefore inherently bound to process and workflow; to be more specific, the processes and workflows for that user or those groups of users.

Our aim, always, is to create solutions that valuably support the care of patients. While we must not lose sight of this overarching aim, an effective information technology strategy must be explicit about how it will effect change to reach it. It is my opinion that we spend too little time understanding and refining the problems we are trying to solve, resulting in poorly framed problems with ill-defined scope.

As I wrote in my “Once for Wales” speech in which I pitched for interoperability and data standards for NHS Wales:

It is almost always the case that when building complex solutions to real-life problems, no one person can conceive of all of the problems in one go. The natural solution to such complexity is to break down a problem into manageable chunks. These chunks, much like “lego bricks”, can be joined together to form something greater than the sum of the parts. Importantly, it is more likely that we can vouch that a system is deterministic.

We need to spend time working on the problems we are trying to solve and break them apart into manageable chunks. Such a process requires skill and experience spanning both the domain and information technology, together with a large amount of pragmatism to get things done.

Importantly, core design principles such as using a domain-driven design to build a layered architecture protect those involved and force an approach which, while not impervious to changes in workflow and process, can more easily adapt to those changes than a design built as a monolith.
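As a minimal sketch of this layered approach (all names are hypothetical, not taken from WCP or any real NHS Wales API), the domain logic depends only on an abstraction of the infrastructure beneath it, so a change of laboratory integration does not ripple upwards:

```python
from typing import Protocol

class LabGateway(Protocol):
    """Infrastructure layer: how a request physically reaches a laboratory."""
    def send(self, payload: dict) -> str: ...

class TestRequesting:
    """Domain layer: depends only on the LabGateway abstraction, so swapping
    one laboratory integration for another does not change this code."""
    def __init__(self, gateway: LabGateway) -> None:
        self._gateway = gateway

    def request(self, patient_id: str, test_code: str) -> str:
        payload = {"patient": patient_id, "test": test_code}
        return self._gateway.send(payload)

# A fake gateway: useful in tests, and as a stand-in during development.
class FakeGateway:
    def __init__(self) -> None:
        self.sent: list[dict] = []

    def send(self, payload: dict) -> str:
        self.sent.append(payload)
        return f"req-{len(self.sent)}"
```

The point of the sketch is the seam, not the detail: the monolithic alternative would call the laboratory integration directly from the user interface code, and any change to workflow or integration would then touch everything.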

Test requesting

We currently have a working test requesting module within NHS Wales’ Welsh Clinical Portal (WCP) software for requesting laboratory tests. Essentially, one navigates to a patient’s record and selects REQUEST->Laboratory test.

On many levels, this is a workable solution to the problem as framed: “allow clinical staff to request laboratory investigations”. The design team, given that brief, designed a multi-page form that provided that functionality. There are some minor glitches in the user interface, but behind the scenes there is considerable work in dealing with different laboratories and actually performing the test request.

The team should have visited a variety of care settings in which tests are requested and arrived at the following conclusions and questions:

  • In many settings, laboratory tests are requested at the same time as other tests, such as radiology tests. Should the solution work in a more generic way to suit this workflow? How can the life of a clinician sitting in an outpatient clinic be improved? How can we create solutions that work better than the paper systems?
  • In many settings, a health professional is caring for a number of patients at the same time. How can we support the life of a junior doctor when trying to manage all of their inpatients?
  • In many situations, a request for a test is prompted by seeing a result. Should we support requesting tests from the “results” page when we see something that needs acting upon?

I’d like to crowdsource the framing of our problem. Are we only requesting laboratory investigations here or will the same system work for radiology? Can we expand our scope by reframing our problem? What about the future?

  • What about routine monitoring according to protocol, for patients on certain drugs? My PatientCare EPR has this information, and given the right APIs, it could monitor those patients and request tests semi-autonomously. Would that functionality form part of the test requesting module within WCP or be separate?
  • We want to implement care plans that can be activated either manually or automatically for groups of patients. For example, a liver or renal failure patient may have a standard regime of investigations performed at regular intervals for monitoring. How can we support this?
  • What about other suggestions for clinical staff based on the information-at-hand, perhaps using machine-learning?

And what about interaction with other systems?

  • How is this going to work when the health professional filling in the request is going to take the blood and send it to the laboratory themselves? How is this going to work when the blood sampling will be done by a phlebotomist?

Finally, what is in scope here? Do we need to design all of this functionality right away, or can it evolve iteratively? Are there natural ‘fault lines’ which guide us as to how to break this up into manageable parts?

The current architecture: a monolith

The current solution tries to be everything to everyone. It supports requesting tests for an arbitrary date or set of dates. It is accessible only within the WCP application itself; its user interface, its functionality and its underlying services are not available outside of WCP.

Advocates of a monolithic approach say that standardisation of user interfaces makes it easier for users, reduces the need for training and avoids duplication. I think this is wrong. I will vouch that you can operate any email client ever made, whether web-based, on a mobile device or on a desktop computer, and from any vendor, because the nouns and verbs are the same. You might have a ‘draft email’ waiting to be ‘sent’, an ‘outbox’, an ‘inbox’, ‘folders’, ‘addresses’ in a ‘to’ field and so on.

Standardisation should be at the level of the nouns we use and the verbs we use to operate on those nouns. Thus we develop or adopt standardised information models in which our data and the processes/workflow in which we work are modelled.
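To illustrate standardising nouns and verbs rather than screens, here is a hedged sketch (the model below is invented for illustration, not a published NHS Wales standard): the noun is a test request with a defined lifecycle, and the verbs are the state transitions that any conformant user interface may invoke.

```python
from dataclasses import dataclass
from enum import Enum

class RequestState(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"
    COMPLETED = "completed"
    CANCELLED = "cancelled"

# The "verbs": the permitted transitions of the test-request lifecycle.
# States absent from this map (completed, cancelled) are terminal.
ALLOWED = {
    RequestState.DRAFT: {RequestState.SUBMITTED, RequestState.CANCELLED},
    RequestState.SUBMITTED: {RequestState.COMPLETED, RequestState.CANCELLED},
}

@dataclass
class TestRequest:
    patient_id: str
    test_code: str  # e.g. a code drawn from a standard terminology
    state: RequestState = RequestState.DRAFT

    def transition(self, new_state: RequestState) -> None:
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(
                f"cannot move from {self.state.value} to {new_state.value}")
        self.state = new_state
```

Any number of user interfaces, on any device, can be built over this shared model; what is standardised is the lifecycle, not the screens.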

In addition, a monolithic or “product”-based approach makes it difficult to adapt to change. If we want to start work on extending the scope to include some of the functionality outlined above, we’ll need to make considerable changes to the WCP application. But we must be careful; every change that we make, every button we add, adds complexity to an already overwhelming user interface. To support every new feature, we are potentially extending our list of responsibilities and scope.

‘Scope creep’ is the term used to describe continuous expansion of a project’s scope, usually a result of a variety of factors including poor initial framing of the problem to be solved, poor initial scoping and lack of flexibility and versatility in any resulting solution.

Adopting a service-orientated architecture

What if we pick apart the problem we are trying to solve?

The diagram shows an abstract view of what we are trying to solve with test requesting:

[Figure: Test requesting]

The test requesting service wraps an integration layer to provide generic test requesting. The separation between laboratory tests and other types of investigation is an artefact of how health services are implemented; abstractly, a test request is a test request, irrespective of which service will perform the test.

However, it is likely that different tests will require different information to be provided at the time of request. Rather than embedding this important business logic in an application, this architecture forces an approach that explicitly recognises the requirement and models it appropriately: dynamic runtime configuration can modify the behaviour of the system.
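One way to model this, sketched under the assumption that per-test requirements can be expressed as named fields (the test codes and field names below are invented), is to drive validation from runtime configuration rather than hard-coding it per test:

```python
# Hypothetical runtime configuration mapping each test to the extra
# information it demands. In a real deployment this would be loaded from
# a database or configuration service, not a literal.
REQUIRED_FIELDS = {
    "GLUCOSE": {"fasting_status"},
    "DRUG_LEVEL": {"drug_name", "time_of_last_dose"},
    "FBC": set(),  # full blood count needs no extra information
}

def validate_request(test_code: str, details: dict) -> list[str]:
    """Return the missing fields for this test; an empty list means valid."""
    required = REQUIRED_FIELDS.get(test_code, set())
    return sorted(required - details.keys())
```

Adding a new test, or changing what a test requires, then becomes a configuration change rather than an application release.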

A formal contract between the responsibilities and scope of each component must be defined. While the scope of the overall solution may change to meet new requirements, a modular layered approach means that scope creep of individual components should be minimal.

Applications, such as the web-based front-end application server or applications running on mobile devices, do not directly use a requesting service but use a service provided at the application layer. This layer provides further abstractions and while it will pass on requests to the test request service, it will have responsibility to apply business rules and domain logic such as audit trails and permissions. Those rules may be recorded declaratively in a rules engine rather than hard-coded at the domain layer allowing considerable flexibility in the future to add or change business rules to suit business need.
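A declarative rule might be sketched as a predicate plus an outcome, evaluated by the application layer; everything here is illustrative, and a production rules engine would be far richer:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]  # predicate over the request context
    outcome: str                     # e.g. "deny", "warn"

# Rules are data: they can be added or changed without touching domain code.
RULES = [
    Rule("students may not request",
         lambda ctx: ctx.get("role") == "student", "deny"),
    Rule("warn on duplicate recent request",
         lambda ctx: ctx.get("duplicate_within_24h", False), "warn"),
]

def evaluate(context: dict) -> list[str]:
    """Return the outcomes of every rule that applies to this request."""
    return [r.outcome for r in RULES if r.applies(context)]
```

Because the rules live outside the domain code, a change in business policy becomes an edit to the rule set rather than a change to the requesting service itself.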

The core requesting service has no responsibility for domain logic; its only responsibility is to make the request against the end-point laboratory service(s).

Designing for the future

With such an architecture, we have considerable flexibility to suit future requirements. If we wish to implement decision support, this could be performed at the domain logic layer and so decision support would be available to all client applications. At project initiation, we may have little or no decision support but our domain layer service must be designed with the hooks to enable this in the future. Over time, should more sophisticated decision support be available, client applications need not know - it is not their responsibility - because our systems are flexible enough to support this new functionality.
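Those “hooks” might take the form of a registry of advisers consulted by the domain layer, with clients seeing only the aggregated advice; the names here are hypothetical:

```python
from typing import Callable, Optional

# Each adviser inspects a proposed request and may return a piece of advice.
Adviser = Callable[[dict], Optional[str]]

ADVISERS: list[Adviser] = []

def register(adviser: Adviser) -> None:
    ADVISERS.append(adviser)

def advise(request: dict) -> list[str]:
    """Domain-layer hook: clients call this without knowing which
    advisers, if any, are registered behind it."""
    return [a for a in (adv(request) for adv in ADVISERS) if a is not None]

# At project initiation the registry may be empty; advisers arrive later
# without any change to client applications.
register(lambda r: "recent result already available"
         if r.get("recent_result") else None)
```

Client applications call `advise` from day one; whether the answer comes from a single heuristic or, in time, a machine-learning service is invisible to them.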

The final design

Our final design for redefining the user interface of test requesting within WCP becomes much simpler once there is an underlying robust architectural platform.

It is my opinion that no single user interface will suit all use-cases. I am not concerned if there are multiple versions of the user interface as long as each version fits into the workflow and processes of a specific clinical domain. For instance, it is a long time since I was a junior doctor, and I’d want to sit down with a team of them to build the most intuitive system possible to support their processes. Our user interface and logic would be only a thin wrapper around our more complex architecture, but I’d envisage test requesting as part of a jobbing ward round, inpatient list and communication application. The nouns and verbs, the domain model of the test requesting lifecycle, would be standardised across the applications, not the user interface.

Similarly, what I need in a busy outpatient clinic is somewhat different. It may make sense to share a lot of user interface code or, indeed, with today’s responsive technologies, use the same user interface on the same platform. However, this would be a decision made for pragmatic business purposes, taken only if the shared user-facing components were appropriate in those different clinical contexts. Such a decision would not be made because we are slavishly following a directive to create systems that are “Once-for-Wales”; different devices may need different user interface components to suit their form factor and, critically, the workflow in which they are to be used.

Conclusions

  • Frame the problem appropriately.
  • Get early involvement of groups of disparate domain, information technology and usability experts.
  • Extend the scope of the overall problem appropriately, designing in versatility to make solutions future-proof.
  • Limit the scope and responsibility of individual components: work to break up a problem and solution into manageable parts, and enforce limits on the responsibility and scope of those component parts.