CHAPTER 8 Dreaming about the universal

Introduction

Most information infrastructure standardization work is based on a set of beliefs and assumptions about what a good standard is. These beliefs are strong -- but not based on any empirical evidence concerning their soundness. They have strong implications for what kinds of standards are defined, their characteristics, as well as the choice of strategies for developing them. Beliefs of this kind are often called ideologies. Hence, the focus of this chapter is on the dominant standardization ideology, which we dub "universalism": its content, its history, how attempts are made to apply it, what really happens, and its shortcomings. We will argue that these shortcomings are serious. In fact, dominant standardization approaches do not work for the development of future information infrastructures. New approaches based on different ideologies must be followed to succeed in the implementation of the envisioned networks. Basically, the heterogeneous nature of information infrastructures pointed out in chapter 3 needs to be acknowledged. Furthermore, the way the different components enable and hamper flexibility through their inscriptions needs to be emphasised much more strongly. To this end, we discuss how visions, patterns of use and strategies for change are inscribed into standardisation efforts dominated by the ideology of universalism.

What is universalism?

Universalism is the dream of the perfect, "universal solution," the "seamless web" where everything fits together perfectly and where any information may be exchanged between anybody connected without any loss of meaning or other obstacles. This seamless web is believed to be realized through the implementation of consistent and non-redundant standards. Dreaming about such a technology seems perfectly sound. These seem like innocent and desirable characteristics of any well designed information system, including information infrastructures. This is, of course, true in a sense. It is a goal to strive for in most design efforts, including design related to IIs. But striving for this goal will often also cause serious trouble. Exactly because universalism is immediately appealing, it is an extremely strong rhetorical device, and it is correspondingly difficult to see what is wrong with it. After all, who would rather have a messy, complex solution than an elegant and coherent one? It does not, after all, make much sense to invent a lot of wheels in parallel. A much better idea is to just make one and let everybody use it.

There is only one major drawback with this scheme: it does not work this way. Universal solutions of this kind presuppose an overarching design comprising the information infrastructure as a whole, which is not attainable. They are based on a "closed world" set of assumptions. As the world of infrastructures is an open one without limits, universalism implies trying to make a complete, consistent formal specification of the whole unlimited world - which is simply impossible. Universalism also implies homogeneity, as opposed to the heterogeneous character of information infrastructures emphasized in this book (see chapter 3).

In practice universalism leads to big, complex, incomprehensible and unmanageable standards and infrastructures. There cannot be any tidy, overarching design. An infrastructure is a bricolage of components developed for different purposes, at different times. Hence, the inclination towards universal solutions, although understandable, needs closer scrutiny. Its benefits are greatly exaggerated and its problems vastly down-played (Graham et al. 1996; UN 1996; Williams 1997).

Universalism is not unique to standardization work. In fact, it is a strong ideal for virtually all technical and scientific work. In this chapter we will look at how universalism is imprinted on health care standardization work in particular, and on other information infrastructures and technical and scientific work in general. We will further look at the arguments given for the ideals of universalism, how attempts are made to implement them, and what really happens when the ideology is followed. Finally we will analyse the experiences and identify its shortcomings.

Universalism at large: a few illustrations

Universalism is expressed in a variety of ways and in numerous situations. Our ambition is neither to be systematic nor comprehensive, but merely to provide enough illustrations to make our point, namely that the bulk of standardisation has been dominated by this ideology.

Health care

CEN TC 251 clearly expresses universalism as its dominant ideology. Hence, despite the fact that an information infrastructure will evolve, we have to pay the price of "freezing" it into one, given, universal solution:

"in case of moving technologies ... standards could possibly impede the development. On the other hand, it may be desirable to make sure that unsuitable circumstances (e.g. proliferation of incompatible solutions) are not allowed to take root and in that case, standardization must be started as soon as possible in order to set the development on the right track" (De Moor 1993, p. 4).

The assumed need for a coherent, consistent and non-redundant set of global standards is even more clearly expressed by HISPP, 1 the US coordination committee collaborating closely with CEN. The great fear of universalism -- fragmentation and hence mess -- is warned against:

"the efforts of these groups have been somewhat fragmented and redundant. Parallel efforts in Europe and the Pacific Rim threaten to further fragment standardization efforts" (McDonald 1993).

"If we could eliminate the overlap and differences, we could greatly magnify the utility and accelerate the usage of these standards." (McDonald 1993).

The danger, it is maintained, is that of barriers to the free flow of commodities and services:

"The establishment of a `Fortress Europe' and the creations of barriers of trade have to be avoided" (De Moor 1993, p. 6).

Universalism goes deep. In the very statement of principles for the work of the health care standardization committee, CEN TC 251, it is explicitly pointed out that redundancy is not acceptable because "duplication of work must be avoided" (De Moor 1993, p. 6).

Similar concerns are voiced by the work coordinated in the US by HISPP, which holds that there is:

"too much emphasis on short term benefits while ignoring the long term benefits that would come through the analysis of large data bases collected in uniform fashion over large patient populations" (McDonald 1993).

The belief in one universal standard is explicitly expressed:

"[The goal is] achieving ... a unified set of non-redundant, non-conflicting standard" (McDonald 1993, p. 16, emphasis added).

This is a quite strong expression as it refers to the work of all groups developing health care information infrastructure standards in the US as well as in the rest of the world, and to standards outside the health care sector such as EDI standards in general.

Striving for maximally applicable solutions in a world without clear borders has its implications. As different use areas are linked together and overlap, developing coherent solutions for subfields implies making a coherent solution for everything. The same inscriptions and patterns of use are accordingly assumed to be equally reasonable everywhere.

The difficulties in defining strict borders and hence the need for all-encompassing solutions were acknowledged in the Medix effort. The objective of this effort was defined as the development of one single global standard, or one coherent set of standards, covering any need for information exchange within health care:

"The eventual scope of P1157 is all of healthcare communications, both in the medical centre, between medical centres, and between individual providers and medical centres" (ibid.).

This was carried over to CEN. CEN considers standards the key to successful health care II development, and holds that standards should be developed at an international, hopefully global, level:

"More alarming is the ascertainment that many of these telematics-experiments [in US, Europe (typically projects sponsored by EU), ..] in Health Care have been conducted on regional scales [i.e. Europe, US, ..] and inevitably have resulted in the proliferation of incompatible solutions, hence the importance of standardization now" (De Moor 1993, p. 1).

which implies that

"Consensus standards are needed urgently" (De Moor 1993, p. 2).

Computer communications

Universalism has an equally strong position within telecommunication and computer communication in general. The OSI model and the OSI protocols are clear examples of this. The protocols are defined under the assumption that they will be accepted by everybody, and accordingly be the only ones in use. Not addressing how they should interoperate with already existing network protocols, i.e. their "installed base hostility," has been put forth as the major explanatory factor behind their lack of adoption.

The Internet has often been presented as the opposite of OSI. It may serve that role here as well (Abbate 1995).

The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture".

In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

In this way the Internet was developed under the assumption that there would be no single, universal network, but rather a heterogeneous infrastructure composed of any number of networks and network technologies. These assumptions are expressed even more explicitly in the four ground rules that were critical to Robert Kahn's early thinking and later led to the design of TCP/IP: each network would have to stand on its own, with no internal changes required in order to connect it to the Internet; communication would be on a best-effort basis, with lost packets retransmitted from the source; black boxes (later called gateways and routers) would connect the networks and would retain no information about the individual packet flows passing through them; and there would be no global control at the operations level.

Whether an information infrastructure should be based on a universal network design or allow the degree of heterogeneity underlying the Internet technology was the source of heated debates throughout the seventies (and eighties) in the discussions about standards for such infrastructures. In this debate, one alternative was the Internet and its TCP/IP suite, the other the X.25 standard proposed by CCITT and the telecom operators (Abbate 1995). X.25 was put forth as the universalistic alternative to TCP/IP.

The telecom operators preferred X.25 as it was designed according to the ideology of universalism, which has had a strong tradition since the very early days of telecommunication. However, such a homogeneous, universalistic solution was also well aligned with their interest in extending their monopoly from telephone communication into computer communication (ibid.).

However, the Internet community is far from consistent in its thinking related to universalism. The supporters of the Internet technology, including TCP/IP, have always argued for TCP/IP as the universal network/transport level protocol. The rationale behind TCP/IP builds on basic assumptions about a heterogeneous, pluralistic world with diverging needs, requiring different network technologies (like radio, satellite and telephone lines). However, this description of a heterogeneous, open world is put forth as an argument for one specific universal solution, TCP/IP. From the level of TCP/IP and up there is no heterogeneous world any more, only consistent and coherent user needs to be satisfied by universal solutions. This view is perhaps most strongly and explicitly expressed by Einar Stefferud when arguing against e-mail gateways (Stefferud 1994). He argues that gateways translating between different e-mail (address and content) formats and protocols should not be "allowed" because such translations cannot in principle be perfect: there are, at least in principle, possibilities for loss of information.

Stefferud proposes another solution for integrating enclaves of users using separate e-mail systems. He calls this principle tunnelling, meaning that an e-mail message generated by one e-mail system, say cc:mail, might be tunnelled through an enclave of e-mail systems of another kind, say X.400 systems, by enveloping the original e-mail in an e-mail handled by the second e-mail system. However, this technique can only be used to send an e-mail between users of the same kind of e-mail system. A user connected to a cc:mail system might send a mail to another cc:mail user through, for instance, an X.400 network. However, it will not allow communication between a user of a cc:mail system and an X.400 user. If tunnelling is a universal solution, as Stefferud seems to believe, it presupposes a world built up of separate sub-worlds between which there is no need for communication. Universalism and closed world thinking thus also have strong positions in the Internet community.
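The contrast between the two integration principles can be sketched in a few lines of code. The sketch below is purely illustrative: the message representations and field names are hypothetical simplifications, not the actual cc:mail or X.400 formats.

```python
# Minimal sketch contrasting gateway translation with tunnelling.
# All formats and field names are hypothetical simplifications.

def gateway_translate(ccmail_msg: dict) -> dict:
    """Translate a cc:mail-style message into an X.400-style message.
    Fields with no counterpart in the target format are silently dropped -
    the (in principle unavoidable) loss of information Stefferud objects to."""
    return {
        "originator": ccmail_msg["from"],
        "recipient": ccmail_msg["to"],
        "subject": ccmail_msg["subject"],
        "body": ccmail_msg["text"],
        # e.g. priority flags or proprietary attachments may have no mapping
    }

def tunnel(ccmail_msg: dict, exit_gateway: str) -> dict:
    """Envelope the original message unchanged inside an X.400-style message.
    Nothing is lost, but only another cc:mail system, reached via the exit
    gateway, can unpack and read the payload."""
    return {
        "originator": "tunnel-entry@gateway",
        "recipient": exit_gateway,
        "body": ccmail_msg,   # the original message travels intact as payload
    }

msg = {"from": "alice", "to": "bob", "subject": "lab report",
       "text": "...", "priority": "urgent"}
print(gateway_translate(msg))              # 'priority' is lost in translation
print(tunnel(msg, "tunnel-exit@gateway"))  # intact, but opaque to X.400 users
```

The sketch also makes Stefferud's limitation visible: the tunnelled message is only useful to a receiver running the same kind of system as the sender, which is exactly why tunnelling presupposes separate sub-worlds.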

Science: universal facts and laws

Universalism has its strongest position in science. Universalism in technological development is usually the result of seeing scientific ideals as the ultimate ones, and accordingly of trying to apply scientific methods in technological development.

The traditional view of science is that it is simply the discovery of objective, universal facts, laws and theories about nature (and possibly other worlds, like the social). In this sense, universalism corresponds to a kind of heliocentrism, namely the (implicit) assumption that your own position is privileged, that you are the origo around which everyone circles. It would carry us well beyond the scope of this book to pursue universalism within science. For our purposes it suffices to observe the many forms and appearances of universalism as well as its far-reaching influence.

Standards are found everywhere, and as such they have been a focus of study. Standards - in a wide sense - are indeed an issue addressed in science and technology studies (STS). Not specifically technological standards, but rather standards in the form of universal scientific facts and theories. These studies too, we believe, have something to tell us about information infrastructure standards.

Constructing scientific facts, theories, technologies, standards

Universality, actor network theorists have argued, is not a transcendent, a priori quality of a body of knowledge or a set of procedures. Rather, it is an acquired quality; it is the effect produced through binding heterogeneous elements together into a tightly coupled, widely extended network. In his elegant study of the creation of universality, Joseph O'Connell discusses the history of electrical units. Laboratory scientists, US war planes and consumers buying new TVs do not simply plug into some pre-given, natural Universal called the Volt. Rather, the Volt is a complex historical construct, whose maintenance has required and still requires legions of technicians, Acts of Congress, a Bureau of Standards, cooling devices, precisely designed portable batteries, and so forth.

Other theoretical traditions within STS likewise question these rhetorics. Social constructivist analyses, for example, also argue that the universality of technology or knowledge is an emergent property: to them, a fact or a technology becomes universal when the relevant social actors defining it share a common definition.

Obtaining universality

Perhaps the most basic finding within STS is the local and situated nature of all knowledge - including scientific knowledge. Latour and Woolgar (1986) describe how scientific results are obtained within specific local contexts and how the context is deleted as the results are constructed as universal. Universals in general (theories, facts, technologies) are constructed as the context is deleted, basically by being taken as given. This construction process has its opposite in a deconstruction process when universals are found not to be true. In such cases the universal is deconstructed by re-introducing its context to explain why it is not valid in the context at hand (Latour and Woolgar 1979).

In spite of the fact that the context of origin and the interests of the originators are "deleted" when universals are created, these elements are still embedded in the universals. They are shaped by their history and do not just objectively reflect some reality (in the case of scientific facts or theories) or act as neutral tools (in the case of universal technologies). They embed social and political elements.

In the same way as other universals, infrastructure standards are in fact "local" (Bowker and Star 1994, Timmermans and Berg 1997). They are not pure technical artifacts, but complex heterogeneous actor-networks (Star and Ruhleder 1996, Hanseth and Monteiro 1997). When a classification and coding system like ICD (International Classification of Diseases) 2 is used, it is embedded into local practices. The meaning of the codes "in use" depends on that practice (Bowker and Star 1994). The ICD classification system, developed and maintained by WHO in order to enable a uniform registration of causes of death globally (to enable the generation of statistics for research and health care management), reflects its origin in the modern Western world. "Values, opinions, and rhetoric are frozen into codes" (Bowker and Star 1994, p. 187). Common diseases in the third world are less well covered, and the coding system is badly suited for the needs of a third world health care system.

Implementation - making it work

In parallel with showing how universals are constructed, STS studies have addressed, perhaps even more extensively, how they are used, i.e. how they are made to work when applied in spite of the seemingly paradoxical fact that all knowledge is local. This is explained by describing how the construction of universals, the process of universalization, also has its opposite, the process of localization. The meaning of universals in specific situations, and within specific fields, is not given. It is rather something that has to be worked out, a problem to be solved, in each situation and context. If we want to apply a universal (theory) in a new field, how to do so properly may often be a rather difficult problem to solve; in other words, working out the relations between the universal and the local setting is a challenging design issue. As a universal is used repeatedly within a field (i.e. a community of practice), a shared practice is established within which the meaning and use of the universal is taken as given.

Just as the development of universals is not a neutral activity, social and political issues are involved in the use of universals. As their use is not given, "designing" (or "constructing") the use of universals is a social activity like any other, taking place within a local context where social and political issues are involved.

Marc Berg and Stefan Timmermans argue that studies in the STS field tend to reject the whole notion of universals (Timmermans and Berg 1997, Berg and Timmermans 1998). They disagree, saying that universals exist, but that they are always embedded into local networks and infrastructures. Universals exist - but as local universals. "The core of universalities lies in the changes built up on local infrastructures." They argue further that there are always multiplicities of universalities, and some of these will be in conflict. Each universal primarily defines an order it is meant to establish. Implicitly it at the same time defines dis-order - that which does not match the standard. When a multiplicity of standards is involved in an area - which is "always" the case - one standard's order will be another's dis-order. Further, Berg and Timmermans show how a standard even contains, builds upon, and presupposes dis-order.

These theories and concepts can also be used to discuss the definition, implementation and use of corporate infrastructure standards, such as the corporate standard called Hydro Bridge: first how the need for a universal solution - a standard - was constructed, then the decision to define and implement the standard, and the definition of its content.

The most characteristic aspect of the implementation process is the repeated discovery of the incompleteness of the standard, in spite of all efforts to extend it to solve this very incompleteness problem. It is a continuous process of enrolling new actors and technological solutions to stabilize the network constituting the standard. This stabilization process never terminates - partly due to the open nature of infrastructures, but maybe more importantly because the standard creates disorder within exactly the domain it is designed and implemented to bring into order.

The origin of universalism

Constructing the need for universal standards

It is not at all obvious that solutions should be maximally applicable, so where does the idea stem from? We illustrate this by drawing on the experience with developing a Norwegian health information infrastructure and tracing its initial, international efforts.

The choice of a standardisation model was not given from the outset. The general Zeitgeist, however, was that of working out standards as universal and open as possible, as explained earlier for lab communication (see chapters 2 and 7). Adopting EDIFACT as the basis for electronic prescriptions seemed inevitable even though alternatives were proposed. These alternatives inscribe quite different interests and delegate completely different roles and competencies to the involved actors, especially the EDIFACT mafia.

There were several, alternative standardisation and information infrastructure development strategies, or models, promoted originally. These models are all more or less based on deep-seated convictions about how technology development takes place. They inscribe quite different spheres of authoritative competence and steps to proceed in the design. The range of technically feasible standardisation models was practically unlimited. This implied that deciding on one model was less a question of technical superiority of any one model and more a question of who should be allowed to function as a gatekeeper in defining the problem.

The development of electronic information exchange between health care institutions in Norway started when a private lab, Dr. Fürst's Medisinske Laboratorium in Oslo, developed a system for lab report transmission to general practitioners in 1987. The system was very simple -- the development time was only 3 weeks for one person. The interest of Dr. Fürst's laboratory was simply to make profit by attracting new customers. It was based on the assumption that the system would help general practitioners save much time otherwise spent on manually registering lab reports, and that the general practitioners would find this attractive. Each general practitioner receives on average approximately 20 reports a day, which take quite some time to register manually in their medical record system.

The system proved to be a commercial success and brought them lots of general practitioners as new customers. This implied less profit for the other labs. Within a couple of years, several non-private labs (in hospitals) developed or bought systems with similar functionality in order to be competitive. Although these systems were more or less blue-prints of that of Dr. Fürst's laboratory, there were differences which inscribed extra work for the vendors of electronic medical record systems for the general practitioners. This gave these vendors incentives for working out one, shared solution.

Alongside the growing number of labs adopting systems for exchange of reports, an increasing number of actors saw a wider range of applications of similar technology in other areas. These actors were represented within the health sector as well as among possible vendors of such technology. For all of them it was perceived as important that the technologies should be shared among as many groups as possible in order to reduce costs and enable interconnection of a wide range of institutions.

Telenor (the former Norwegian Telecom) had strong economic interests in promoting extensive use of tele- and data communication based services. As telecommunication technology became more integrated with IT, Telenor searched for candidates for extensive use of new and advanced services. The health sector was selected as the potentially most promising one. After an initial experiment, Telenor launched the project "Telemedicine in Northern Norway" in 1987, which ran until 1993. Although Telenor realised that the services and products developed for a specific sector like health care could never be as general as the telephone, Telenor had a strong economic incentive to make their market as large as possible. This strategy presupposes that the standards are as general as possible in order to cover as many sectors as possible.

Standardisation has always been considered important within the telecommunication sector. Hence, Telenor took it for granted that the new health information infrastructure standards should be like any other telecommunication standard: "open" and developed according to the procedures of formal standardisation bodies. Telenor effectively acted as a standardisation "partisan". Their perceived neutrality together with the investments in the telemedicine project made Telenor a very influential actor within information infrastructure standardisation in Norway in the 80s.

The historical legacy of telecom

Universalism within telecom has a long-standing history. The term was first coined in 1907 by the president of AT&T, Theodore Vail, and amounted to "one policy, one system and universal service" (Mueller 1993, cited in Taylor and Webster 1996, p. 219). The notion of universalism in telecom is not well defined. It started out as a tidiness principle, namely the principle of a unified, non-fragmented service. It has since come to include also issues of coverage and reach. The heritage of universalism in telecom mirrors the deeply felt obligation of early telecom providers to avoid fragmentation and inequity. It thus inscribed clear, political goals. Universalism in telecom was heavily influenced by -- arguably even crucially dependent upon -- the prevailing monopoly situation (Taylor and Webster 1996, p. 220):

"The key to this construction of universal service was that it linked political goals, such as universal service, to a particular system of economic organisation, a monopoly which sustained itself through revenue-pooling arrangements."

The question, then, is how universalism may, or indeed should, unfold in the current situation with increasing deregulation.

Within a region controlled by a telecom operator, the ideal of equity was pronounced (XXREF soc hist of tele). Telephone service should be as general as possible, and everyone should be granted the same opportunities of access and service. Abbate (1995) shows how this biases the telecom world towards centralization as a means to achieve coherence, consistency and non-redundancy.

In the currently ongoing deregulation of the telecom sector, "universal service" is a key term. The term means that the deregulation needs to happen in a way guaranteeing universal service, i.e. all services provided to anybody in a country should be provided to everybody (at the same price) all over the country (OECD).

How to create universals

Having sketched the background for universalism as well as some of its expressions, let us explore next how this maps onto practical design efforts. How, then, do designers proceed when (implicitly) influenced by universalism?

Making universal health care standards

There is a strong tendency to aim at solutions that, implicitly or explicitly, are to be fairly stable. The drive is to get it right once and for all, to really capture "it":

"The complete model... will be very big, very complex and expensive to make. In the process of coming there, a collection of smaller, internally consistent sub-models, coordinated via the most common medical objects, and specialized for states and countries, have a value of their own" (MedixINFO).

Object oriented techniques were supposed to provide the tools necessary to develop this all-encompassing information model while all the time keeping it consistent and backwards compatible (Harrington 1993). The requirements for the model were described as follows:

"The MEDIX framework requires a medical information model. It must be a conceptual model, describing how actual people and medical systems share and communicate information. It is not a model describing how an automated version of the health care environment distributes and communicates technical representations of medical documents. No computers or communication nodes will be present in the model, only real world objects like patients, physicians, beds, and tests.

Making a common information model for use in MEDIX is a necessary task, involving many people and work-months. On one hand, the model must be precise enough to be used as the basis of operational, communicating health care systems. On the other hand, it must capture the reality as it is perceived by many people" (ibid., p 5).

The information model was intended to be developed and specified according to Coad and Yourdon's "Object-Oriented Analysis" (Coad and Yourdon 1991).

The computational/communication model is an implementation of the information model; this implementation is assumed to be derived more or less automatically from the specification of the information model. An automated health care IT system is assumed to represent "the underlying health care reality in terms of a computational model of that reality" (Harrington 1990a).

"The information model serves two purposes. First, it represents the information flow patterns in the health care environment. In this representation, there are no computers or communicating computer processes, only real world entities like patients, physicians and service requests...

Once the information model has been defined and validated, however, it takes on a new mission in which the model must relate to information technology, and in particular to communication standards and profiles. For the model to serve this mission, it must be translated into precise languages which can be used by automated processes to define the messages and trigger events of the computer systems. Therefore a set of translating (or mapping) rules and methods has been developed for translating selected aspects of the real world model into communication profiles" (ibid.)

"By extracting a small subset of the model, and relating it to a single medical scenario, it is possible to "prove" the model's suitability and correctness with respect to this tiny subset, if a set of medical experts agree that it is a "correct" representation of the real world" (ibid., p. 12).

The MEDIX framework is a strict and straight-forward application of the classical information modeling approach to information systems development. Some weaknesses of this approach are mentioned. However, these weaknesses are implicitly assumed to be irrelevant as there is no attempt to deal with them.
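To make concrete the kind of conceptual information model this approach calls for, a minimal, purely hypothetical sketch is given below. The classes and attributes are invented for illustration; they are far simpler than anything in MEDIX or CEN and are not taken from those specifications.

```python
# Purely illustrative sketch of a "real world" information model of the kind
# the MEDIX framework calls for: only domain objects such as patients,
# physicians and tests - no computers or communication nodes. All names and
# attributes here are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    name: str

@dataclass
class Physician:
    physician_id: str
    name: str

@dataclass
class TestRequest:
    patient: Patient
    ordered_by: Physician
    test_code: str          # which code list to use is left open by the model

@dataclass
class LabReport:
    request: TestRequest
    result_value: float
    unit: str

# The universalist assumption is that, once such a model is agreed upon and
# validated, mapping rules can derive the concrete messages ("communication
# profiles") from it more or less mechanically.
```

Even this toy model leaves choices open (for instance which code list test_code refers to), and it is exactly such open choices that resurface as the incompleteness discussed later in this chapter.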

The key tool and strategy for creating universal standards is information modelling. This tool and strategy directly mirrors a naive realist position (Hanseth and Monteiro 1994 (XXSJIS)) within the theory of science, believing that the objective, true world is discovered if we use proper methods, and that the world discovered using such a method is a consistent one of manageable complexity. These assumptions are used extensively in the rhetoric of information modelling, in particular when arguing its advantages over alternative positions.

As noted, some weaknesses of the approach are mentioned. However, these are treated as no serious weaknesses - either they are not that serious, or they can be solved. The complexity of the models searched for is one such possible problem. However, this is presented as no real problem: it is a problem only to the extent it can be solved, i.e. the only problems seen are problems that really aren't problems. Or rather, problems are only seen when the solution appears. The complexity problems exist only to the extent that object-oriented techniques can solve them (Harrington 1993).

Universals in practice

So far our analysis and critique of universalism has been of a conceptual nature. If we bracket this theoretically biased view for a moment, what is the practical experience with standardisation dominated by universalism? Is the critique of universalism but high-flying, theoretical mumbling void of any practical implications?

Evaluation at a quick glance

The massively dominant approach to date has been met with surprisingly few objections. The heritage from telecommunication standardisation and information modelling (see above) is evident in the thinking and actions of the EDIFACT mafia. It was, for instance, simply "obvious" that the problem of developing lab messages in Norway should be translated from acquiring practical experience from situations of use in Norway to aligning the specification with perceived European requirements. The EDIFACT mafia had a gatekeeping role which allowed them to define the problem. And their definition of the problem was accepted. Proponents of alternatives (for instance, Profdoc's bar codes) were unable to market their solutions to users. The statement from EDIFACT cited in (Graham et al. 1996, p. 10, emphasis added) illustrates how problems are down-played and benefits are exaggerated: "It should be understood that the benefits of having a single international standard outweigh the drawbacks of the occasional compromise".

The diffusion, in line with (Graham et al. 1996), has been very slow. The non-standardised lab message systems developed and adopted by users in the period 1987 to 1992 are still in use, although their further diffusion has stopped. The installations of systems based on standardised lab messages seem to be used as described by the scenarios worked out as part of the standardisation work. Similarly, the EDIFACT messages implemented adhere to the practice inscribed into the actor-network constituting EDIFACT technology. There are no implementations of the standards based on re-interpretations of some of the design assumptions. Dr. Fürst's lab considers implementing a system providing services beyond what can be offered by a standardised one as too difficult at the moment. This would require cooperation with other actors, and establishing such an arrangement is too difficult as it is based on an anti-program compared to that inscribed into the standards.

Learning from experience?

CEN's own judgement is that "CEN/TC 251 has so far been a successful Technical Committee." (CEN 1996, p. 3). This is obviously true from a purely political point of view in the sense that it has established itself as the most authoritative standardization committee on the European level within the health care area. Looking at the implementation (diffusion) of the standards defined, the judgement may be different. So far, ten years of standardization work within Medix and CEN has hardly had any effect on information exchange within health care. Maybe the most significant effect has been that the standardization work has made everybody wait for the ultimate standards rather than implementing simple, useful solutions. Within the health care sector in Norway, the simple solutions for lab report exchange diffused very fast around 1990. The standardization efforts seem to have stopped rather than accelerated the development and diffusion of IIs.

The CEN approach is an example of those criticized for being all too slow and complex and for not satisfying user needs. The Medix work started in a period with limited experience of the kind of work it was undertaking. In the CEN case, however, more experience is available. This experience does not seem to influence CEN's approach, and neither has the discussion about strategies for implementing the NII and the Bangemann plan.

CEN considers the development of consistent, non-redundant global standards most important for building IIs. User influence is also considered mandatory. However, it believes user participation in the information modelling work will do the job. It does not consider changing the standards to adapt to future needs to be of any importance. The issue is mentioned but not addressed. Even the brief comments made in Medix about the need for evolution and how object-oriented methods would enable it have disappeared. CEN's view on its own work is in strong contrast to how it looks at ongoing local activities implementing IIs for specific needs. This work is considered a great danger as it will lead to permanently incompatible IIs. Accordingly, CEN's position rests on a belief that these smaller and simpler IIs cannot be changed and evolve into larger interconnected networks. How their own standards, and the IIs implementing them, are going to avoid this very problem, i.e. the difficulty of being changed to accommodate future needs, is hard to see.

The view on standards found in the health care standardization communities mentioned above is quite common among those involved in standardization. For instance, within the CEC's Telematics Programme a similar method for engineering trans-European telematics applications has been developed (Howard 1995).

The illusion of universalism

Universalism is an illusion, at least in the form of universal information infrastructure standards. This will be illustrated by the openness of the use and use areas of infrastructures, the unavoidable duplication, and the incompleteness of any information infrastructure standard.

Openness: However universal the solutions are intended to be, there are still more integration and interconnection needs that are not addressed. At the moment of writing this, a popular Norwegian IT journal raises criticism against the government for lack of control ensuring necessary compatibility (ComputerWorld Norge 1997). The problem is, it is said, the fact that two projects are developing solutions for overlapping areas without being coordinated. One is developing a system, including a module for digital signatures, for transmission of GPs' invoices. The (digital signature) security system is intended to be used in all areas for communication between social insurance offices and the health sector as well as for internal health sector communication. It is designed according to CEN specifications as far as possible, to be compatible with future European II needs within health care. The other project develops a solution for document exchange with the government, including a system for digital signatures supposed to be the "universal system" for the government sector - which the social insurance offices are parts of.

This example illustrates a dilemma which cannot be solved, neither can one escape from it: Should CEN specify security systems standards not only for the health care sector in Europe, but the whole public sector as well? Or should the development of a security system for the government sector also include the solution for the whole health sector as well - including trans-national information exchange and accordingly being compatible with the needs in all European countries? And what about the other sectors anybody involved communicates with?

Incompleteness: When developing a "universal solution," irrespective of its completeness it will always be incomplete in the sense that its specification has to be extended and made more specific when it is implemented. For instance, when two partners agree on using a specific EDIFACT message, they must specify exactly how to use it. Although a standard EDIFACT message is specified, and so is the use of X.400 as the standard for the transport infrastructure, other parts are missing. Such parts may include security systems. If a security system is specified, for instance one requiring a TTP (i.e. a "trusted third party"), the security system has to be adapted to the services supported by the TTP, and so on. All these "missing links" introduce seams into the web. This makes the implemented solution locally situated and specific - not universal. The universal solution is not universal any more when it is implemented in reality.
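As a concrete illustration, consider the following hypothetical sketch of two sites that both implement "the same" standard message but resolve the open choices differently. The profile fields, code list names and security options are invented for the purpose of the example.

```python
# Hypothetical illustration of incompleteness: both sites claim to follow the
# same standard message, but the standard leaves choices open that each site
# resolves locally (code lists, optional segments, security arrangements).
site_a_profile = {
    "message": "MEDRPT",                     # hypothetical message name
    "result_codes": "local-lab-codes",
    "optional_comment_segment": True,
    "security": {"signature_via": "TTP-X", "encryption": None},
}

site_b_profile = {
    "message": "MEDRPT",
    "result_codes": "national-codes",
    "optional_comment_segment": False,
    "security": {"signature_via": "TTP-Y", "encryption": "3DES"},
}

# Both are valid instances of the "universal" standard, yet the sites cannot
# interoperate until every open choice has been negotiated bilaterally.
still_to_agree = [k for k in site_a_profile if site_a_profile[k] != site_b_profile[k]]
print(still_to_agree)   # ['result_codes', 'optional_comment_segment', 'security']
```

The bilateral agreements needed to close these choices are precisely the seams that make the implemented solution local rather than universal.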

Duplication: When implementing a solution, "real" systems are required, not just abstract specifications. Usually the nodes in an II will be based on commercially available implementations of the standards chosen. However, to work together, commercial products have to be adapted to each other. In an EDIFACT based solution requiring security services (encryption, digital signatures, etc.), the so-called EDIFACT converter and the security system must fit together. Usually each converter manufacturer adapts its product to one or a few security systems providing some security functions. This means that when an organization is running two different EDIFACT based services with different security systems, it will often have to install not only two security systems, but two EDIFACT converters as well, since the security systems are not integrated with the same EDIFACT converter. This problem is found in the implementation of the EDIFACT solutions for lab reports and GPs' invoices respectively. The GPs using both kinds of services must install two separate systems, duplicating each other's functionality completely: two EDIFACT converters, two security systems, two X.400 application clients and even access to two separate X.400 systems and networks! (Stenvik 1996).

The problems the development of "universal solutions" meets are basically exactly those the believers in universal solutions associate with heterogeneous solutions, and which the idea of the "universal solution" is proposed to solve. In some cases the solutions developed are even worse than "non-standard" ones, because the supposedly standardized solutions are equally heterogeneous and in addition much more complex.

This kind of inconsistency is also found within CEN concerning installed base issues. Those involved appear to be well aware of the irreversibility phenomenon as far as existing local solutions are concerned (De Moor 1993, McClement 1993). The irreversibility of these incompatible solutions is one of CEN's most important arguments in favour of universal, consistent and non-redundant standards. However, it is a bit strange that they apparently believe that the phenomenon does not apply to their own technology.

Linking variants

Universalism faces serious and greatly under-estimated problems. In practice, especially in working solutions, the design of consistent, non-redundant solutions inherent in universalism has been abandoned. Instead, duplication, redundancy and inconsistency are allowed. The situation is kept at a manageable level by linking networks through gateway mechanisms.

Successful information infrastructure building efforts have not followed the ideology of universalism, but rather an opposite "small is beautiful" approach, concentrating on solving more urgent needs and limited problems. The system developed by Fürst and later copied by lots of other labs is an example of this. To our knowledge, there is no larger effort in the health sector explicitly trying to implement larger IIs based on explicitly chosen transition and interconnection strategies and gateway technologies. However, smaller networks are connected based on shorter term perspectives and practical approaches. Fürst, for instance, has connected its system for lab report transmission to GPs in Norway to the system used by three UK and US based pharmaceutical companies (which receive copies of lab reports for patients using any of their drugs being tested out). The network interconnections are based on "dual stack" solutions. Interfacing a new network and message format takes about one man-week of work (Fiskerud 1996).

The Fürst experience indicates that building larger IIs by linking together smaller networks is rather plain and simple. The difference in complexity between the Fürst solution and the CEN effort is striking. The specification of the data format representing lab reports used in the Fürst system covers one page. The CEN specification of the EDIFACT message for lab reports covers 500 pages, and its specification work lasted for 4 to 5 years. According to one Norwegian manufacturer of medical record systems for GPs, for two partners that want to start using a standardized solution, ensuring that they are interpreting the CEN standardized message consistently demands just as much work as developing a complete system like Fürst's from scratch.

If universalism is dead, what then?

Based on the experiences mentioned above, two kinds of gateways seem to be particularly relevant. One is gateways linking different heterogeneous transport infrastructures together into a seamless web. The other is "dual stack" solutions for using different message formats when communicating with different partners. This is pursued further in chapter 11.
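A "dual stack" arrangement of the kind used by Fürst can be sketched roughly as follows. The formatters and the partner registry are hypothetical stand-ins introduced only for illustration; they are not Fürst's or CEN's actual software.

```python
# Rough sketch of a "dual stack" sender: the node keeps one formatter per
# message format and picks the right one per receiving partner. All names
# and formats here are hypothetical.

def format_proprietary(report: dict) -> str:
    # stand-in for a simple, one-page in-house format
    return f"{report['patient']}|{report['test']}|{report['value']}"

def format_standardized(report: dict) -> str:
    # stand-in for a standardized (EDIFACT-like) encoding of the same report
    return f"UNH+1+MEDRPT'PAT+{report['patient']}'RES+{report['test']}+{report['value']}'UNT'"

PARTNER_FORMAT = {
    "gp-oslo": format_proprietary,      # older, non-standardized link
    "pharma-uk": format_standardized,   # partner requiring the standard message
}

def send(report: dict, partner: str) -> str:
    """Choose the encoding stack the partner understands. Adding a new partner
    means registering one more formatter, not replacing the existing ones."""
    return PARTNER_FORMAT[partner](report)

report = {"patient": "12345", "test": "HB", "value": 13.2}
print(send(report, "gp-oslo"))
print(send(report, "pharma-uk"))
```

The sketch also suggests why interfacing one more network and message format can remain a job of days rather than years: the work is localized to one new formatter and one registry entry, leaving the existing links untouched.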

Experiences so far indicate that implementing and running dual stack solutions is a viable strategy. If a strategy like the one sketched here is followed, implementing tools for "gateway-building" seems to be a task of manageable complexity.

In the chapter that follows, we explore how the fallacy of universalism can be avoided by appropriating the inscriptions of the existing bits and pieces of an infrastructure.

 


1. HISPP (...) is coordinating the ongoing standardization activities in different standardization bodies in the US.

2. ICD has been defined under the authority of WHO, and is used in health care institutions in most of the world. ......
