One of the reasons I haven't been posting much is that I've been nose-deep in my "new" job learning about the new domain, nursing homes. We're getting our arms around the problem space to the point where we're coming up with some (IMO) really innovative ways to combine voice-directed work (what our core business does) with self-directed-by-voice work (more applicable to healthcare and other domains). But I'm not going to talk about that, it's kinda sensitive ;-)
More recently I've been nose-deep in thinking about how to architect the new server-side system. For various reasons it needs a rethink + rewrite rather than some polishing. And yes, I'm very aware of the second system effect that can plague such efforts.
When you get into healthcare IT (even tangentially with an area like long-term care) one of the things you quickly realize is the tension in being "interoperable".[1] Users want it because they want all information about a patient or her care to be authoritative, and they don't want to enter it twice or have to translate it to different formats (paper) for different departments. Vendors want it because... well, actually AFAICT they don't want it. They only say they want it because users want it. But interoperability, like obscenity, means different things to different people. Vendor A claims to be interoperable because they support web services for basic patient information; Vendor B claims to be interoperable because they support an ancient data exchange standard that nobody actually uses anymore; Vendor C because they "use XML".
At least mainstream healthcare has the federal government breathing down its neck to force the issue, although it's uncertain if that's a hollow mandate. But long-term care seems to be even further behind in standard interoperability, which creates a sticky situation for your humble developer: how do you create a system that must integrate with other systems without knowing in advance what those other systems are?
One way is to join a standards body (like HL7), influence the standard and then beat everyone over the head with it. It's possible if you're Microsoft or IBM, but probably not if you're us. And even if we were a big player we'd have to start from near ground zero (see any "Long-Term Care" SIG here?). And even if we weren't starting from scratch, standards move at a glacial pace (particularly in healthcare) and have the annoying property of having other people involved.
Another way is to make integration as simple as possible. Using a few basic principles, try to model your data as easily grokked resources and figure out how they can be retrieved and manipulated. If external systems can do the small amount of work to talk to your system directly, great! If they can't, at least your integration task is a small amount of glue instead of starting from scratch with every system -- that is, what you have to change with every new integration is much smaller because most of the hard work is done for you.
Thus, HTTP + REST.
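To make that a little more concrete, here's a rough sketch of what "easily grokked resources" over plain HTTP might look like. The resource names and fields (residents, assessments, room numbers) are made-up examples for illustration, not our actual data model.

```typescript
// Hypothetical resource layout -- plain URIs plus the standard HTTP verbs.
//
//   GET    /residents                  -> list residents
//   GET    /residents/42               -> one resident
//   PUT    /residents/42               -> update that resident
//   GET    /residents/42/assessments   -> that resident's assessments
//   POST   /residents/42/assessments   -> record a new assessment

// The representation stays close to the data model: a resident is just a
// handful of named fields, not a generic envelope.
interface Resident {
  id: number;
  name: string;
  roomNumber: string;
  admittedOn: string; // ISO 8601 date
}
```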
Every modern system can speak HTTP -- I bet even Excel spreadsheets can issue HTTP calls. And I think REST can be a nice companion to a good data model decomposition. (I'm not sure about that since I'm right in the middle of doing it.) It's true that by going this route vs. SOAP/XML-RPC we're giving up the built-in infrastructure for service discovery and automatic code generation systems based on those services (UDDI + WSDL + your-favorite-IDE). But despite its first initial, SOAP isn't simple, and I think keeping your data messages as close to your data model as possible (vs. using generic representations) can be a good idea. (Mapping documents are something else to keep in sync.)
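As a sketch of how small the glue can be, here's roughly what an external system (or our own code) would need to pull one resource. The endpoint and field names are hypothetical, and I'm assuming a JSON representation here; an XML one would work the same way.

```typescript
// Minimal client-side glue for one resource. Endpoint and field names are
// hypothetical; assumes the server returns a JSON representation.
type Resident = { id: number; name: string; roomNumber: string };

async function getResident(baseUrl: string, id: number): Promise<Resident> {
  const response = await fetch(`${baseUrl}/residents/${id}`, {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`GET /residents/${id} failed: HTTP ${response.status}`);
  }
  return (await response.json()) as Resident;
}
```

No generated stubs, no envelope to unwrap -- the message is the data model.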
Another part of making integration simple is to not make it separate. If you have one service layer for external users and another layer for your own applications, the external layer will always get out of date. Inevitably, you'll only find out it's out of date when you have a quick-turnaround integration to do, and you'll have to scramble double-time to get it back in sync.
So why not use your own external data access for everything? The mainstreaming of AJAX and the slew of frameworks available mean you've got higher-level building blocks to work with. So why not get out of the business of merging (X)HTML templates + data on the server and do it all on the browser?
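Here's a sketch of what "do it all on the browser" might look like, written with today's fetch API standing in for the XMLHttpRequest-style AJAX calls; the endpoint, fields, and element id are made up for illustration.

```typescript
// Browser-side merge of data + markup, hitting the same resource any
// external integrator would use. Endpoint, fields, and element id are
// hypothetical.
type Resident = { id: number; name: string; roomNumber: string };

async function showResident(id: number): Promise<void> {
  const response = await fetch(`/residents/${id}`, {
    headers: { Accept: "application/json" },
  });
  const resident = (await response.json()) as Resident;

  // No server-side (X)HTML templating: the browser builds the markup.
  const container = document.getElementById("resident-detail");
  if (container) {
    container.innerHTML =
      `<h2>${resident.name}</h2><p>Room ${resident.roomNumber}</p>`;
  }
}
```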
(I was going to add some stuff here about HTTP plus publish/subscribe messaging, but this is long enough already and I think that subject deserves its own post.)
I haven't made up my mind about this architecture yet and I need to bounce it off some more brains here, but the more I think about it and the more I think of use scenarios, the more the simplicity works in its favor. We'll see if it holds up in some quick prototyping.
[1] I am a rank newbie in this area and even I recognize the tension, so I figure everyone else does too. But I could be misreading the situation.