From systems and tools to networks and infrastructures - from design to cultivation. Towards a theory of ICT solutions and its design methodology implications.

Ole Hanseth

Abstract

I argue in this article that the kind of IT solutions we are developing today and in the years to come, which integrate large numbers of systems across organizational and geographical borders, are in many respects significantly different from traditional information systems. To succeed in establishing such solutions, new understandings, development strategies, and approaches are needed. These new understandings, strategies, and approaches should be based on a perspective that sees such solutions as information infrastructures - not systems.

An information infrastructure, like other infrastructures, evolves over a long time. New infrastructures are designed as extensions and improvements of existing ones - not from scratch. The new or improved elements have to fit into the old. In this process the existing infrastructure, the installed base, heavily influences how the new elements can be designed. As the installed base grows, its development and further growth become self-reinforcing. Successful development of infrastructures requires, first, the creation of such a self-reinforcing process and, second, managing its direction. Strategies for creating and managing such processes are here called cultivation. Gateways are important tools in such cultivation processes.

Introduction

In a recent article Wanda Orlikowski and Suzanne Iacono (2001) survey ten volumes of ISR searching for theoretical conceptions of IT - without finding any! What they argue should be the core of the IS discipline - a theoretical understanding of the key object (if not the constituting object) of our field - is virtually nonexistent. Based on this finding they conclude that research aiming at developing such theories is "desperately" needed. Outlining some elements of a proposal for such a theory is the first aim of this article. The second aim is to present a design approach based on this theory.

Such a theory should be in line with more general theories of technology (Feenberg, Borgman, ..), but it should also account for what is specific about IT, as well as being able to distinguish between different kinds of IT solutions (Monteiro and Hanseth, 1995). The theory presented here is worked out on the basis of evidence suggesting that most well-established approaches to IS development and software engineering are implicitly built upon "theories" of IS which may have been appropriate when the IS field was established, but which are not appropriate for the kind of IS solutions we are building today and which are widely believed to be in focus in the years to come. This change in the nature of IT is reflected in public discourses about technology where the term IT has been replaced by ICT to reflect the so-called convergence between information and communication technologies. This convergence process is an extension and enhancement of change processes related to the nature of information systems. From the times when organizations developed and implemented their first systems, the number and types of systems in use have increased. We are now developing solutions to support communication, collaboration and information exchange between any units (people, organizations, information systems) globally. In parallel, as the number of systems grows, so does their integration.

In spite of all the hype often included in discourses about "convergence," the change is real - and important! In academic communities, however, we still talk about information systems and information systems design - not information and communication systems or information and communication systems design, for instance. I will in this article argue that the technological changes reflected in the substitution of "IT" with "ICT" should also be reflected in the way we see our "information systems" and our approaches to their design. In fact, I see the technological changes as so profound that we cannot simply build new theories on top of the concepts we are using today. Such theories require that we change some of our most fundamental concepts - in many cases we should give up the notion of (information) system and replace it with (information) infrastructure. This change also implies that we need new methodologies - methodologies that are appropriate for the design of infrastructures rather than systems (and which at the same time account for what is specific to information infrastructures compared to other infrastructures).

A key characteristic of infrastructures is that they evolve over a long time, during which the existing infrastructure - the installed base - strongly influences how they can be improved. The concept of the installed base is the core of the theory that will be presented. The design approach will reflect this, seeing the installed base both as a material to be shaped (improved and extended) and as an actor that often appears to live a life of its own, outside the control of designers and users. The larger the installed base grows, the more powerful it becomes. For this reason I prefer to see the installed base as a sort of living organism that can be cultivated, rather than some dead material to be designed. The proposed guidelines for cultivation are as follows:

I. Bootstrap a self-reinforcing installed base

  1. The infrastructure should first be targeted at a group of users for whom it is useful even with a small number (i.e. installed base) of other users.
  2. Design the solution so that it is useful for some users even without an existing installed base.
  3. The infrastructure should be designed as simply as possible. The simpler and cheaper the infrastructure is, the more willing users will be to adopt it and the easier it will be to build the required installed base of users (i.e. reaching "critical mass").
  4. Build on existing installed bases as far as possible - both supporting and neighbouring infrastructures.
  5. The usefulness of a new infrastructure should be increased by establishing gateways to already existing infrastructures.
  6. Jump on bandwagons which are moving in the right direction, if there are any.
  7. In cases where an infrastructure may be used for many purposes (as most infrastructures can), one should start building infrastructures for services where one (or a few) actors provide information which is accessed by many, and build infrastructures where "everybody" both provides and accesses information later on.

II. Managing lock-ins

  1. Make the infrastructures small and simple so that they are easy to change.
  2. Implement new versions of standards using gateways.
  3. Split the infrastructure into independent sub-infrastructures.

III. Specific guidelines for supporting infrastructures

  1. Use existing infrastructures as transport infrastructures as far as possible.
  2. Design the application infrastructure independent of the transport infrastructure.
  3. Build the first version of the infrastructure without the need for specific service infrastructures.
  4. Build required service infrastructures when the application infrastructure gains momentum and more advanced service infrastructures are required.

The structure of the article is as follows: In the next section the theory of ICT solutions as infrastructure will be worked out. This includes pointing to the limitations of existing notions and methodologies, more extensive argumentation for the importance of seeing ICT solutions as infrastructure, and a discussion of the nature of the installed base. In section 3 I will present and discuss the design guidelines derived from this. In section 4 three examples of infrastructures will be presented in order to give richer illustrations of their characteristics, and to give empirical support for the argument in the article. In section 5 some ideas for future research will be presented.

Towards a theory of ICT as infrastructure/installed base

Existing "theories": IT solutions as tools and systems

In their survey of how IT was conceptualized or theorized in the ten volumes of ISR (vol. 1 to 10, i.e. covering the years 1990 to 1999), Orlikowski and Iacono (2001) found that 24.8% of the articles had a nominal view on IT, which means that IT was absent - only mentioned by its name. 24.3% of the articles were based on a computational view (i.e. they were focusing on aspects of specific algorithms or models, or on modelling as a part of a design process or as a part of a simulation task). 20.3% of the articles were based on a tool view. This included several kinds of tools, among these tools for labour substitution, productivity improvements, information processing, and social relations. Further, 18.1% of the articles had a proxy view (i.e. seeing IT as a substitute for something else like perception or diffusion processes, or capital), and 12.5% an ensemble view, seeing IT as a socio-technical development project (4%), as social structure (3.4%), as a system embedded in a larger social context (3.4%), and as enmeshed within a network of agents and alliances (1.1%). Orlikowski and Iacono (ibid., pp. 129-130) comment that given the high visibility of Kling and Scacchi's and Markus and Robey's work in articulating versions of the ensemble view in the 1980s, they were surprised to see the low number of articles adopting such a view in the 1990s: "Given the kind of emergent IS phenomena we are witnessing today (open source software, electronic commerce, virtual teams, globally-distributed work, ..,etc.) there clearly is scope for more work to be done from an ensemble view" (ibid., p. 130). This comment is perfectly in line with the argument I will make here. The theory I am proposing is a sort of ensemble view, and more specifically a view seeing IT solutions as "enmeshed in a network of agents and alliances." The theory has many similarities with Kling and Scacchi's web model. A crucial difference is the role of the installed base and the implications this has for design strategies.

This review shows that the tool view has a dominant position in IS research addressing the relationships between IT solutions and their users or the tasks the solutions are supposed to support. Inherent in the notion of a tool are assumptions about a user who uses the tool to achieve specific goals. The user (if she is a competent practitioner of her task, at least) uses the tool as she likes. She is in total control of the tool. 1 Rosenbrock (197x) contrasts this notion of a tool with that of a machine. A machine is characterized as something that determines how a production process unfolds; its user (operator) has to carry out specific operations as required by the machine. 2 The notion of a tool definitely has many qualities as a design ideal to strive for. But real IT solutions do indeed also have some machine-like characteristics.

In IS and SE design methodologies, we usually see the IT solutions that we believe will become tools for their users as systems. The notion of tools addresses how the solutions are supposed to be used by their users, while the notion of systems captures how we look at the tools' internal structure and technical aspects during the design process. However, some important assumptions underlie both terms. Just as the tool metaphor makes us believe that the users are in complete control, so the notion of systems makes us believe that we, through our IS design methodologies, are in complete control of the design process, and accordingly that we can design an IT solution exactly as we (and the users) want.

The early history of information systems was mostly about in-house development of isolated systems from scratch. The first IS methodologies were designed to support this kind of development. At that time the assumptions underlying the methodologies and the concept of system (as it is used in the field) were mostly valid. Unfortunately that is not the case any more. Since then, the situation for IS developers and users has changed significantly. The most obvious change is the growth in the number of systems in use within most organizations. This reflects the evolution and growth of different kinds of information technologies (from the old batch processing mainframe technology to PCs, networks, databases, GUI packages, etc.), and the growth in areas where various applications are used (from a few niches to virtually any activity in an organization). Each new application introduced is integrated with numerous existing ones, most applications are delivered by vendors rather than developed in-house, the services required to run the applications are often outsourced, etc.

Traditionally, IS design starts by uncovering and specifying user needs, and then the technical solutions are derived from them. The design process is supposed to follow its plan, carefully controlled by the project managers. Uncovering and understanding user needs is no doubt of the highest importance. And the whole "IS experience" shows that specifying user needs and designing systems satisfying them are indeed hard tasks. This is due to the complexity of the users' working practices which the systems are intended to support and improve, their variety across work sites and communities, and their dynamics and unpredictability.

Starting the development of information systems with user needs is linked to the general assumption that the systems to be developed should be, or are, designed from scratch. This is indeed an inherent assumption in virtually all IS design methodologies. As already mentioned, existing IS methodologies focus on the development of single, isolated, and stand-alone systems. 3 The development is taken care of by a project organization (managed by a project leader) developing the system for a customer - often an organizational unit with one person (the manager of the unit) in charge of the IS development and implementation on behalf of the customer. The design project is assumed to have well-defined start and end times - it is an event, not an ongoing process (Orlikowski 1996). In short: IS design methodologies aim at developing a closed system by a closed project organization for a closed customer organization within a closed time frame.

The IS methodologies field has also changed - to some extent. But the methodologies in use - or at least as they are presented in textbooks - are remarkably close to those emerging in the early era of the IS field. Two significant changes reflecting the changes mentioned here are "information systems planning" and methodologies for dealing with "legacy systems." Information systems planning has addressed the problem of how to deal with the collection of computer applications in an organization as a whole. But the approach adopted in this field is close to the traditional one. The difference is that information systems planning aims at designing a collection of applications rather than just one: first a shared architecture is designed, then each application is designed based on this. (See (Lederer and Salmela 1996) for a representative example and a review of much of the literature in the field.) The legacy systems field relates to the long-term evolution of applications. But this field tends to look only at one isolated system and how to migrate it from one platform or architecture to another (Sommerville 2001). 4

Three examples - three kinds of infrastructures

I will here briefly point to three kinds of convergent technologies, or information infrastructures, ranging from the more basic and generic to the narrower and more specific. Information infrastructures are emerging partly through the development of new solutions like the Internet and infrastructures for specific business sectors, and partly through the growth in scale of information systems and their integration.

Global "universal service" infrastructures: The Internet

The paradigm example and core of this technological development is - of course - the Internet. It is in itself both a telecommunication and an information system. The Internet is a shared resource for all its millions of users distributed across most countries of the world, and a foundation upon which large parts of their activities are based. It is widely believed to become the infrastructure of the information society. At the same time it is the most important foundation - or infrastructure - for other technological solutions representing the convergence process. It is used as (or is supposed to become) the new technological foundation for more classical telecommunication services like telephony (IP telephony), TV broadcasting, and mobile phone services. It is also a common basis for the development of other ICT solutions like those mentioned below. The global reach of the Internet and its number of users and developers certainly make it significantly different from the traditional image of information systems. I will in this article in particular treat the Internet as a success story - "best practice" - to be learned from when developing other kinds of information infrastructures.

EDI and "business sector" infrastructures

Telecommunication was introduced into the world of IS through the development and use of so-called interorganizational systems quite some time ago. Later on, the idea of integrating systems and exchange of information across organizational boundaries expanded into the development of EDI networks, and more recently into a variety of solutions shared by organizations within some kind of business sector or larger communities of organizations. This includes solutions for e-commerce, so-called extranets, telemedicine networks, etc. (Such solutions have just as much in common with the Internet as with traditional information systems.)

Corporate infrastructures

The development of information systems inside individual organizations has also changed. First of all, telecommunications have been used to give users distributed across large geographical areas access to the same information and services. Through this change, the number of users and use areas supported by the same information system and database has grown. Further, the integration of telecommunication and information technologies has enabled the integration of information systems across any organizational and geographical borders. To improve their competitiveness within the increasingly globalized business world, organizations are integrating their systems with those of their customers, suppliers, and strategic partners around the world. This changes the situation inside organizations with regard to their information systems. The systems are no longer a limited collection of individual ones, but a huge and tightly knit web of technological solutions distributed across organizational and geographical borders, use areas, and user communities.

So-called ERP systems have become very popular. These systems include numerous integrated modules supporting virtually any activity in an organization. An installation of an ERP system often replaces a huge number of existing systems in an organization - in the Norwegian oil company Statoil, for instance, one single SAP installation is planned to replace more than one hundred existing applications (Hepsø et al., submitted). In addition, just like other applications, ERP systems are also integrated with others (Hanseth et al., in print).

Neither the Internet, the various kinds of business sector networks, nor the webs of integrated systems within corporations fit the notion of information systems underlying existing IS design methodologies and strategies as presented above. They should rather be seen as infrastructures, and the strategies for building them should be derived from the key characteristics of infrastructures. I will now turn to the identification of these key characteristics.

Infrastructures

In the early days of computing, only one application (or at most just a few) was running on the same computer. As organizations adopted more applications and computers, it became convenient to make a split between the applications on the one hand and the computer hardware and its basic software on the other. This split became widely described as one between applications and infrastructure. This kind of infrastructure has grown in size and importance, but it is not the kind primarily addressed in this article. My basic argument is that the scale of the applications mentioned above, and the number of applications being integrated, imply that even applications should be seen as infrastructures. (There will still, of course, be applications which are rather small and isolated and which accordingly still fit the definition of information systems.)

Our concept of system should not be replaced by that of infrastructure. Rather, the infrastructure concept is needed in addition to that of system. The notion of system, and the planning and control oriented strategies associated with it, will still be useful - and even required - in the development of new components that are going to be included in infrastructures. But the concept of infrastructure and its associated development strategies will redefine those of systems. Systems have to be seen as parts of larger infrastructures, and the strategies for developing them have to be implemented within the context of strategies for developing the infrastructures the systems are becoming parts of.

A shared, ...

The term information infrastructure was made widely known through the publication of the Clinton/Gore plan on the National Information Infrastructure. This plan described visions for the use of the Internet and various services built on top of it for different areas of society like health care, education, business, entertainment, etc. Seeing such technological solutions as infrastructures is certainly in line with the common use of the term, which is defined in Webster's dictionary as

"a substructure or underlying foundation; esp., the basic installations and facilities on which the continuance and growth of a community, state, etc. depends as roads, schools, power plants, transportation and communication systems, etc." (Guralnik 1970).

This definition describes an infrastructure as a shared resource, or a foundation, for a community. This is opposed to the traditional view on information systems (applications) as individual tools, which are developed for very specific purposes (like an accounting system), and which are used by a clearly defined and limited group (like the accounting department in an organization). It should be easy to see that the Internet, as well as EDI networks in sectors like health care and large e-commerce networks, are such shared resources. It might, however, be less obvious that larger collections of integrated applications, or single applications like (large) ERP installations, in fact are as well. When one application is integrated with others through information exchange (i.e. the other applications get access to the data initially registered by means of, and owned by, the first application), these other applications become dependent on the data they receive from the first. The first application and its data then become a shared resource, a foundation, upon which the other applications and the activities they support depend. So, as the number of applications a specific application is integrated with grows, the application changes character: from an ordinary application supporting a specific set of activities towards an infrastructure for a larger set of activities within a larger community.

Although one may argue this way that an individual application is turned into an infrastructure in itself, it may be more convenient to focus on the web of integrated applications as an infrastructure, i.e. a foundation underlying all the activities in a community that any of them supports.

... evolving, ...

A key characteristic of infrastructures is the fact that they evolve continuously. Telecommunication infrastructures, for instance, have been evolving continuously since the first telecommunication links were set up. More switches are added, more users adopt the technology, and its use areas grow. The same is true for roads. The global road infrastructure has been evolving, i.e. extended and improved, since the very first roads - or paths - were "built." Any new road is an improvement of the existing road infrastructure. So also with information infrastructures like the Internet. Networks of applications, where each one is integrated with at least one of the others, also evolve in the sense that one application can be integrated with still more, and new applications appear and are included in the network.

...open, ...

The continuous growth and evolution of infrastructures leads us to the next characteristic: openness. Openness in this context means lack of borders. For an infrastructure there is no border regarding the number of elements it may include (applications being integrated, computers linked to the Internet, etc.), the number of users that may use it, or the number of use areas that it may support. 5 Further, an infrastructure is also open in the sense that there is no limit to who might participate and contribute to its design and deployment. Lastly, its development has no beginning or ending - its development time is open. 6

... standardized, ...

Traditionally, the term open has been closely associated with the term standard. 7 And standards are indeed a crucial aspect of open infrastructures. This is the case for the following reasons, at least:

  • The alternative to standards is a set of bilateral agreements between the individual users and designers. This alternative scales very badly, making the design and maintenance of a larger number of links between computers or applications extremely expensive compared to one based on shared standards (see the sketch after this list). This is the primary argument in favour of standards. But a couple more could be mentioned:
  • A large infrastructure involves many users and designers. All of them cannot come together and agree upon the requirements or design of the whole infrastructure. To make the whole enterprise manageable, they have to identify the minimum set of functionality that all of them have to conform to in order to make the infrastructure work. 8
  • In many, if not all, cases the number of users and designers of an infrastructure is so high that they cannot set up any agreements between them at all. In such cases adopters or implementers of an infrastructure simply relate to the standard. If you design or buy a system following the standard, you can integrate it with others without any further agreements (in theory, at least).
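
The scaling claim in the first bullet can be checked with simple arithmetic. The sketch below is my own illustration (not from the cited literature): fully interconnecting n parties bilaterally requires n(n-1)/2 agreements, whereas a shared standard requires only n conforming implementations.

```python
# Back-of-the-envelope comparison: bilateral agreements vs. a shared standard.
# Connecting n parties pairwise needs n*(n-1)/2 agreements/links;
# a shared standard needs only n conforming implementations.

def bilateral_agreements(n: int) -> int:
    """Point-to-point agreements needed to connect n parties pairwise."""
    return n * (n - 1) // 2

def standard_implementations(n: int) -> int:
    """Implementations needed when all n parties conform to one standard."""
    return n

for n in (5, 50, 500):
    print(f"n={n:3d}: bilateral={bilateral_agreements(n):6d}, "
          f"standard={standard_implementations(n):3d}")
# n=  5: bilateral=    10, standard=  5
# n= 50: bilateral=  1225, standard= 50
# n=500: bilateral=124750, standard=500
```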

I see standards and infrastructures as the flip sides of the same coin. Standards describe the structure of an infrastructure whether they are deliberately designed or emergent.

... and heterogeneous ...

An infrastructure is standardized, but it is also heterogeneous - along many dimensions. Infrastructures are heterogeneous in the sense that they include components of different kinds - technological as well as non-technological (human, social, organizational, etc.). 9 For instance, infrastructure service providers are organizations whose support personnel are absolutely mandatory to make the infrastructures work. Further, layers of infrastructures are built upon each other (the basic TCP/IP services of the Internet are built upon a wide range of more basic telecom infrastructures like the ordinary telephone service, mobile phone services, and satellite communication; the email and web infrastructures are built upon the TCP/IP based infrastructure; e-commerce infrastructures are built on top of email and web infrastructures; and so on). But an infrastructure is also heterogeneous in the sense that it includes sub-infrastructures based on different versions of the same standard (for instance during a transition period - which may be very long - from one version to another) or different standards covering the same area in terms of functionality (for instance different infrastructures running different e-mail protocols, electricity infrastructures linking together AC and DC based networks, a computing infrastructure of both Windows and Linux PCs, etc.).

...installed base.

The fact that infrastructures are open and evolve over a long time has important implications for how this evolution unfolds and what kind of strategies may be adopted in order to manage or control it. When an infrastructure is changed or improved, each new feature added to it, or each new version of a component replacing an existing one, has to fit with the infrastructure as it is at that moment. This means that the existing infrastructure - the installed base - heavily limits and influences how the new can be designed, and, in fact, how it can evolve.

To summarize: an infrastructure is a shared, evolving, open, standardized, and heterogeneous installed base.

Decomposing heterogeneous infrastructures

An important conceptual tool or strategy in all technological design (and analytical activity in general) is to decompose a complex phenomenon into simpler ones. We decompose systems into sub-systems, and we also need to decompose infrastructures into sub-infrastructures. In discussing design guidelines for infrastructures I will decompose infrastructures in terms of layering. This layering corresponds to the traditional split between applications and infrastructures. In this case, however, the applications are also infrastructures. To describe this I will use two terms: application infrastructures and support infrastructures. These concepts are relative and apply recursively, in the sense that any infrastructure may be split into its top layer - the application infrastructure - and the support infrastructure upon which it is implemented. In the discussion of design guidelines I will further split support infrastructures into two categories: transport and service infrastructures. The transport infrastructures are used to carry the information between the partners; for instance the basic TCP/IP based infrastructures of the Internet underlying the other Internet services. Service infrastructures provide additional support, like the Domain Name Service of the Internet which is used by virtually all other Internet services to map textual identifiers (host names, URLs, e-mail addresses, etc.) to numerical IP addresses.
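
As a rough illustration of this recursive decomposition, the following sketch (my own; the class and example names are hypothetical, only the vocabulary of application, transport, and service infrastructures comes from the text) models an infrastructure as a top layer plus the support layers it rests on:

```python
# Minimal sketch of the recursive application/support decomposition.
# Any infrastructure splits into its top layer plus the support
# infrastructure it runs on; support splits into transport and services.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Infrastructure:
    name: str
    transport: Optional["Infrastructure"] = None  # carries the information
    services: List["Infrastructure"] = field(default_factory=list)  # e.g. DNS

    def show(self, depth: int = 0) -> None:
        """Print this layer and, recursively, the support layers below it."""
        print("  " * depth + self.name)
        for svc in self.services:
            print("  " * (depth + 1) + "[service] " + svc.name)
        if self.transport is not None:
            self.transport.show(depth + 1)

# The layering mentioned in the text: e-commerce on the web, which runs on
# TCP/IP, with DNS as a service infrastructure used by the other services.
tcp_ip = Infrastructure("TCP/IP transport infrastructure")
dns = Infrastructure("DNS (maps textual names to numerical IP addresses)")
web = Infrastructure("Web infrastructure", transport=tcp_ip, services=[dns])
ecommerce = Infrastructure("E-commerce application infrastructure", transport=web)
ecommerce.show()
```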

As mentioned above, infrastructures are also heterogeneous in the sense that they usually implement several versions of the same standard or several standards serving the same purpose. Two infrastructures which provide the same kind of services based on different protocols/standards, and which are linked together (by means of a gateway), will be called neighbouring infrastructures.
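
A gateway in this sense is simply a translator between the conventions of two neighbouring infrastructures. A hedged illustration (the message formats and field names below are invented for the example, borrowing the lab report scenario used later in the article):

```python
# Hypothetical gateway between two neighbouring infrastructures carrying the
# same kind of message (a lab report) under incompatible standards.
# All field names are invented for illustration.

OLD_TO_NEW = {"pat_id": "patient_id", "res": "result", "lab": "laboratory"}

def gateway_old_to_new(old_msg: dict) -> dict:
    """Translate a lab report from the old standard to the new one."""
    return {OLD_TO_NEW[key]: value for key, value in old_msg.items()}

old_report = {"pat_id": "12345", "res": "negative", "lab": "Central Lab"}
print(gateway_old_to_new(old_report))
# {'patient_id': '12345', 'result': 'negative', 'laboratory': 'Central Lab'}
```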

Ecologies of infrastructures

The structure of infrastructure

Existing information infrastructure "theories"

Previously, research related to this topic was primarily about how protocols supporting various forms of information exchange should be designed. After the launch of the Clinton/Gore plan, strategies for how to make all the standards required to build the envisioned infrastructures also emerged as an issue deserving attention from researchers (see for instance (Branscomb and Kahin 1995), (Forster and King 1995), (Wood 1994)). So far, the outcome of this research seems to be directed more towards policy making than towards more engineering oriented activities (Kahin and Nesson 1997, Branscomb and Keller 1996, Kahin and Keller, 1997).

Peter Weill and Marianne Broadbent (1998) have done extensive research on information infrastructures in large corporations. They are, however, maintaining the split between application and infrastructure, i.e. they are not addressing the infrastructural aspects of large scale applications or the integration of such. Their contribution has basically been the development of models for estimating the value of infrastructures, and strategies for how to utilize them, from a top management perspective. This is reflected in their analogy seeing an infrastructure as an IT portfolio which should be treated like any other investment portfolio. Using this investment portfolio metaphor can certainly be useful for understanding some aspects of IT infrastructures. But it is a metaphor that may also be very misleading. Investment portfolios are usually very flexible and easy to change, manage, and control. Elements of such portfolios may be sold at almost any time, and individual elements might be sold or bought independently (although portfolios should be balanced to minimize risks, etc.). Infrastructures are the exact opposite of this. The individual elements are highly interdependent, and their size and complexity make them extremely difficult to control and manage.

Weill and Broadbent's perspective on infrastructures is also reflected in the fact that they apparently take standards and standardization for granted. Standards are mentioned only once in their book (ibid.), in an appendix listing a number of recommended guidelines. One such guideline says "Define and enforce IT standards." This article is based on the belief that standards and standardization processes are anything but a given. Understanding how to manage infrastructures can be seen as understanding how to manage standards and standardization processes. This includes the definition, implementation and use of standards, as well as the interactions and interdependencies between these processes.

The installed base

Having characterized an infrastructure as a shared, evolving, open, standardized and heterogeneous installed base I will now inquire a bit deeper into the nature of the installed base and how it operates.

Infrastructures of all kinds have been studied within the so-called Large Technical Systems field. One of the most important and influential works in this field is Thomas Hughes' (1983) study of electricity in Western societies in the period 1880-1930. His key contribution is his description of how infrastructures gain momentum, i.e. how the installed base gains force through a self-reinforcing process as it grows "larger and more complex" (Hughes 1987, p. 108). Major changes which seriously interfere with the momentum are, according to Hughes, only conceivable in extraordinary instances: "Only a historic event of large proportions could deflect or break the momentum [of the example he refers to], the Great Depression being a case in point" (ibid., 108) or, in a different example, the "oil crises" (ibid., 112).

Such self-reinforcing processes in relation to standards and infrastructures are more extensively researched and theorized within a branch of economics called network economics. The main concepts within the economics of standards and networks that should attract our attention are: increasing returns and positive feedback, network externalities, path dependency, and lock-in.

Increasing returns mean that the more a particular product is produced, sold, or used, the more valuable or profitable it becomes. Infrastructures and their standards are paradigm examples of products having this characteristic (Arthur 1994).

A communication standard's value is to a large extent determined by the number of users using it - that is, the number of users you can communicate with if you adopt the standard. The basic mechanism is that a large installed base attracts complementary products and makes the standard cumulatively more attractive. A larger base with more complementary products also increases the credibility of the standard. Together these make a standard more attractive to new users. This brings in more adoptions, which further increases the size of the installed base, and so on (Grindley 1995: 27).

Figure: The standards reinforcement mechanism (Grindley 1995).
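
The self-reinforcing loop Grindley describes can be caricatured in a few lines of code. The toy model below is entirely my own sketch (no such model appears in the article or in Grindley): each period, the probability that a remaining non-user adopts grows with the share of the population already on board.

```python
# Toy model of the standards reinforcement mechanism: adoption probability
# grows with the installed base, so growth feeds further growth.
# All parameters are invented for illustration.

def simulate_adoption(population: int, standalone_appeal: float,
                      network_weight: float, periods: int) -> list:
    """Installed-base size per period under a simple positive-feedback rule."""
    installed = 0
    history = []
    for _ in range(periods):
        share = installed / population
        p_adopt = min(1.0, standalone_appeal + network_weight * share)
        installed += round((population - installed) * p_adopt)
        history.append(installed)
    return history

# A small intrinsic appeal lets the base bootstrap and then snowball;
# with no intrinsic appeal the feedback loop never starts.
print(simulate_adoption(1000, standalone_appeal=0.02, network_weight=0.5, periods=8))
print(simulate_adoption(1000, standalone_appeal=0.00, network_weight=0.5, periods=8))
```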

Increasing returns are created by network externalities. Externalities arise when one market participant affects others without compensation being paid. In general, network externalities may cause negative as well as positive effects. The classic example of negative externalities is pollution: my sewage ruins your swimming or drinking water. Positive externalities give rise to positive feedback - standards being the paradigm example (ibid.).

Network externalities and positive feedback give rise to a number of more specific effects. One such is path dependence (Arthur 1988). This means that past events have large impacts on future development. In principle, irrelevant events may turn out to have tremendous effects (David 1986). We may distinguish between two forms of path dependence. The first appears when a standard builds up an installed base ahead of its competitors and becomes cumulatively more attractive. In such a case the choice of standard becomes 'path dependent' and highly influenced by a small advantage gained in the early stages (Grindley 1995: 2). The classical and most widely known example of this phenomenon is the design and evolution of keyboard layouts, leading to the development and de facto standardization of QWERTY (David 1986).

Another form of path dependency is related to the fact that early decisions concerning the design of a technology will influence future design decisions. When, for instance, a technology is established as a standard, new versions of the technology must be designed in a way that is compatible (in one way or another) with the existing installed base. This implies that design decisions made early in the history of a technology will often live with the technology as long as it exists. Typical examples of this are the various technologies struggling with the backward compatibility problem. Well known in this respect are the different generations of Intel's microprocessors, where all later versions are compatible with the 8086 processor, which was introduced into the market in 1978. 10

Increasing returns and path dependency may lead to yet another effect: lock-in. Lock-in means that, once a technology has been adopted, it will be very hard or impossible for competing technologies to displace it.

Lock-in is about more than cost. As the community using the same technology or standard grows, switching to a new technology or standard becomes an increasingly larger coordination challenge. The lock-in represented by QWERTY, for instance, is most of all a coordination issue. It has been shown that the individual costs of switching are marginal (David 1986), but, as long as we expect others to stick to the standard, it is best that we do so ourselves as well.

Many lock-in situations, and certainly those related to infrastructures, are of such a character that getting out of them requires both huge switching costs and coordination efforts.

Related concepts and theoretical positions

The interest in infrastructures motivating this article stems not just from the fact that we are now designing large scale networks for information exchange in addition to traditional information systems. It is also motivated by the observation of a broader trend of building more complex and integrated IT solutions of apparently all kinds, at the same time as all parts of society are becoming more integrated and complex - globalization being a buzzword capturing much of this. This trend is reflected in a move from perspectives based on a systems (or system-like) concept towards ones based on networks (for instance the interest in social networks, organizational networks, actor networks, computer networks, neural networks, network economics, etc.). The concept of network underlying this switch is different from a systems concept in the sense that the borders between what is inside and what is outside are blurred (more in line with open systems), and the network may include elements of a more heterogeneous character which are more loosely coupled than what is normally the case when one talks about systems. The concept of infrastructure in this article is compatible with this notion of network; it is a part of this "movement", and the work here has been inspired by "network thinking" in many different areas.

The assumed increased interdependence and complexity in our world just mentioned has led some scholars to choose the notion of complexity as the focus of their research. This is the case, for instance, for W. Brian Arthur's research mentioned above. Another researcher, Paul Cilliers (1998), defines complex systems in a way that is very close to the definition of infrastructure given above.

There are also other variants of the systems concept than the one attached to IS methodologies in section 2, some of which are quite close to my definition of infrastructure. These include "open" and "autonomous" systems. The reason for my choice of infrastructure rather than open, autonomous, or complex system, or network, is twofold. First, I want a term which indicates a more radical break with the past (although this may seem paradoxical - the IS community is also an installed base which is hard to change ...) than just adding an adjective to the systems concept. The term "complex systems" may indicate that we are talking about systems which are exactly like the good old ones, except that they are a little more complex. The term network satisfies this requirement. I prefer infrastructure, however, because it is a richer concept which more easily communicates that information infrastructures share important characteristics with other infrastructures in our society. An infrastructure is a network, but it is more than that. It is also a shared resource for a society and a foundation upon which its activities depend. It is big and "heavy." Complex and autonomous systems do not have these characteristics. The term autonomous systems, for instance, is primarily used to analyse biological systems.

History 11 and the installed base play a dominant role in the way organizational change and organizational design are seen within the so-called "new institutionalism" (North 1990, Powell and DiMaggio 1991, March and Olsen 1989). Within this field, new institutions are designed by improving existing ones.

A comprehensive philosophical approach to design in a broad sense, with a focus on the role of history, is developed by Spinosa, Flores and Dreyfus (1997). They criticize the conventional way of understanding design, which they call detached, and argue for an alternative which they call historical design (or history-making). They see the former as a part of our Cartesian tradition, while the latter is developed on the basis of Heidegger's phenomenology. They present three forms of historical design which they call articulation, reconfiguration, and cross-appropriation. They see history-making as an ideal - what we are doing "when we are living our lives at the best" (ibid.) - and demonstrate how this form of design may take place within engineering, politics, and culture. They see design in these areas as entrepreneurship, democratic action, and the cultivation of solidarity respectively.

Cultivating infrastructures

Having sketched a theory of ICT solutions as infrastructures, I will now turn to approaches and guidelines for the design of infrastructures derived from it.

From design to cultivation

Concepts

We most often describe the making of ICT solutions as design. This implicitly presumes that we make the technology exactly as we want it to be. Lots of alternative concepts have been proposed to capture the "real nature" of IS development and to overcome the limits of traditional development models. One such is improvisation, introduced by Claudio Ciborra (1996) and Wanda Orlikowski (1996). Both see IS development and implementation as part of organizational transformation. This model rests on two major assumptions which differentiate it from traditional models of change: first, that the changes associated with technology implementations constitute an ongoing process rather than an event with an end point after which the organization can expect to return to a reasonably steady state; and second, that the various technological and organizational changes made during the ongoing process cannot, by definition, all be anticipated ahead of time.

These assumptions are also valid for infrastructure development, as described above. There are, however, important differences: the speed of the process and the role ascribed to the technology in the design process. Orlikowski is aware of this. She notes that "more research is needed to investigate how the nature of the technology used influences the change process and shapes the possibilities for ongoing organizational change" (ibid.). Contributing to such research is exactly what this article aims at.

Jaana Porra (1999) has proposed another concept: colonial systems. She also sees the design process as a kind of improvisation, but as opposed to Ciborra and Orlikowski she focuses on design as a collective enterprise, and she particularly underscores the role of history in design. The latter aspect makes her model of design closer to the one proposed here. Porra, drawing in particular on Heidegger, focuses on how history plays an active role in terms of how our past experiences shape our future actions. But, just like Ciborra and Orlikowski, she does not address the role of existing technology in the design process.

Bo Dahlbom and Lars Erik Janlert (1996) have proposed an alternative notion for understanding the making of technologies: cultivation. They contrast this with the notion of construction, which they see as a more general concept including design as well as engineering. I find that this concept quite nicely captures the way infrastructures are developing and how we can influence this process. They characterize the two concepts in the following way:

"[When we] engage in cultivation, we interfere with, support and control, a natural process. [When] we are doing construction, [we are] selecting, putting together, and arranging, a number of objects to form a system.....

[Cultivation means that] ...we .. have to rely on a process in the material: the tomatoes themselves must grow, just as the wound itself must heal." (ibid. p. 6-7)

The concept of cultivation turns our focus to the limits of rational, human control. Considering technological systems as organisms with a life of their own implies that we focus on the role of the existing technology itself, i.e. the installed base, as an actor in the development process. This perspective on technology is developed within actor-network theory (see for instance (Latour 1991, 1999) and (Callon 1991)). This theory focuses on socio-technical networks where objects usually considered social or technological are linked together into networks.

The installed base acts as "designer" in two ways. It may be considered an actor involved in each single information infrastructure development activity, but perhaps more important, it plays a crucial role as mediator and coordinator between the independent non-technological actors and development activities.

Actors and material - cultivators and cultivated

Before discussing design, i.e. cultivation, strategies in more detail, I will take a brief look at the actors involved - the cultivators for whom the strategies are relevant. The main kinds of actors related to infrastructures are designers (being active in the design and specification of standards); product manufacturers (implementing products which follow standards and which will be used as components to build infrastructures); service providers (or infrastructure operators) implementing larger parts of the infrastructures; and, finally, users. Actors of any of these kinds may be of different "natures." An actor might be an individual (for instance an individual researcher actively participating in standardization activities, or an individual user adopting a service); an organization (a product manufacturer, a service provider or a user organization); or a formal institution like standardization bodies, governments, the EU, the UN, or the G7(8). All these actors are involved in designing infrastructures; they are all cultivators.

It is especially worth stressing the role of users - individuals as well as user organizations. Normally, a piece of technology or a product is improved when a new feature is added or an existing one is improved. In the case of infrastructures, however, the value is (as argued extensively above) to a large extent determined by the number of users. Accordingly, a user is improving (i.e. changing - i.e. designing) the infrastructure just by using it. This makes users designers as well. In fact, a user cannot avoid being a designer. Similarly, if designers (i.e. the kind of people we normally denote by this term) want to design infrastructures that are useful and have value for users, they have to make users use them - they have to "design" and "build" a community of users using the technology just as much as the pure technology itself.

Some of the actors mentioned appear much "bigger" and more powerful than others. But each has limited influence, being just one among a huge number of players, or one member of a large community. This is obviously true for individuals representing only themselves. But it is also true for high level actors like the EU. The EU is in a position to make decisions for a large area. But in spite of this, infrastructures within the EU will be tightly integrated with other infrastructures. And the EU can only make decisions on a high level and cannot instruct 12 its citizens and companies regarding how they should design their infrastructures in detail. The failure of the OSI effort (which will be presented in more detail in section 5.1) illustrates this. Accordingly, the EU is also only one member of a large and open community of infrastructure designers and users. The same is true for standardization bodies. This implies that all actors have limited influence over their infrastructures. They can only shape smaller parts of them, at the same time as a huge number of others are shaping other parts. They can at best try to cultivate what appears to be a living organism.

Cultivation: Managing two dilemmas

The challenges regarding infrastructure design may be seen as a couple of dilemmas. The first is the fact that many proposed infrastructures never take off. Because infrastructures obtain their value, and their development and growth, from the size of their user community, they are initially of no value. Accordingly, no users find it profitable to adopt them, and, accordingly, an installed base never starts growing. Infrastructures become self-reinforcing, or gain momentum, as they grow. In fact, to succeed in building an infrastructure, one has to get such a self-reinforcing process started. This is the most important dilemma to be managed by infrastructure developers.

When an infrastructure starts growing, it might lead us into a lock-in. There are two slightly different kinds of lock-in situations. First, there is a risk that different users adopt different standards and that incompatible infrastructures get established. In such a situation, it will often be considered beneficial if all users agree on and adopt one shared standard. But each user might find the costs of switching too high (arguing that the others should switch to their protocol), and a lock-in situation has appeared - a lock-in in chaos. This is a situation those involved in standardization work are well aware of. They see it as a key argument for agreeing on one shared, universal standard before infrastructure development starts (De Moor 1993).

If all users agree on and adopt a shared standard, this standard will over time turn out to be inappropriate to new circumstances. In this situation there will be a lock-in - this time a lock-in in order, but an inappropriate order. 13

I will now turn to the discussion of specific design guidelines. These will focus on the management of these dilemmas, i.e. strategies for starting to build a new infrastructure 14 on the one hand and changing existing ones on the other. These two challenges are, however, closely related. Changing an infrastructure means building a new one in the sense that the new features also obtain their value from the size of their installed base. In spite of this, I will in the following sections make a distinction between making a new and changing an existing infrastructure. Making a new one means building an information (or electronic) infrastructure which is not supposed to replace an existing one, like building e-commerce infrastructures 15.

Bootstrapping a self-reinforcing installed base

To make an infrastructure start growing, one must attract users without offering them the benefits of communication with a large group. They must be attracted for other reasons. This may be achieved by making the first version of a new infrastructure tailored to the specific needs of the first users, so that they get some benefits from using it other than those created by network externalities (1,2) 16. In addition, being a first adopter of an infrastructure entails higher costs and risks than being a later one. Accordingly, the version adopted by the first users must be as cheap as possible in order to make the investments profitable within a reasonable time span (3). The infrastructure may never be implemented at full scale; accordingly, the investments may prove to be of less value in the future because of the lack of a large user community and the benefits created by network externalities. Being cheap usually means that its functions are implemented by a small piece of software. This gives further advantages: when the software is small and simple, it will usually be easier and simpler to implement in the user organizations (low learning costs, easy to integrate with existing software, etc.), and it will be easier to change in order to adapt it to future needs. (The latter issue will be discussed in the next section.) As the number of users grows, the infrastructure needs to be changed. The specific needs of the first adopters become less relevant and the more general needs of the larger user community more so. And as more users are attracted and the installed base grows, the more useful the infrastructure becomes, and the more complex software the users are willing to pay for.
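
One hedged way to see why the bootstrapping guidelines (1-3 above) work is to model a single user's adoption decision. This is my sketch, not a model from the article: a user adopts when standalone benefit plus network benefit exceeds adoption cost, so with an empty installed base only tailoring and a cheap first version can carry the first adopters.

```python
# Toy adoption decision behind bootstrapping guidelines 1-3.
# All numbers are invented for illustration.

def adopts(standalone_benefit: float, benefit_per_peer: float,
           installed_base: int, adoption_cost: float) -> bool:
    """One prospective user's decision: adopt if total benefit exceeds cost."""
    return standalone_benefit + benefit_per_peer * installed_base > adoption_cost

# First adopter (empty base): needs tailoring (1,2) and a cheap version (3).
print(adopts(standalone_benefit=5.0, benefit_per_peer=0.1,
             installed_base=0, adoption_cost=4.0))    # True: bootstraps
print(adopts(standalone_benefit=0.0, benefit_per_peer=0.1,
             installed_base=0, adoption_cost=4.0))    # False: never starts
# Later adopter: network externalities alone now justify the cost.
print(adopts(standalone_benefit=0.0, benefit_per_peer=0.1,
             installed_base=100, adoption_cost=4.0))  # True
```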

One important strategy for making a new infrastructure easy to adopt is to utilize existing ones. This means, first, selecting a supporting infrastructure that large parts of the group of potential users are already using (as illustrated by the successful diffusion of the web). Second, the new infrastructure should - if possible - be linked through gateways to neighbouring infrastructures, giving the adopters the benefits of communication with the users of those infrastructures or access to their information (4,5). (More on gateways in the next section.) When linking the new infrastructure to existing ones, one should also take into account the speed and direction of their development in order to capitalize on bandwagon effects (6).

Information infrastructures are developed to support communication and information exchange along a dimension varying from communities where "everybody" communicates with "everybody" (like e-mail) at one end, to communities where only one user sends information and all others receive what the single sender provides at the other. An example of the latter type of communication is the transmission of lab reports from one lab to its general practitioner (GP) "customers." In this case, the GPs receive reports independently of each other. The usefulness of the system, as perceived by the GPs, is not affected by the size of the installed base in terms of the number of GPs connected. Infrastructures for distribution of information from public authorities have the same characteristic (7).

The nature of the information exchange service to be supported is a major factor determining the installed base requirements of an infrastructure. There will, however, always be space left for different design alternatives - alternatives with different installed base requirements. Mangematin and Callon (1994) illustrate how two alternative designs of an information infrastructure for vehicle guidance generate different installed base requirements. The information infrastructure gives drivers advice about the fastest route to their destination. Computers in cars connected to the infrastructure continuously receive information about traffic density, and the computers give advice based on this. One of the alternatives was based on information about traffic density being broadcast by an already existing service. The other alternative was based on the idea that a number of ground installations would register cars having the guidance system installed, using this as a measure of traffic density. In the latter case, a large number of cars need to have the system installed in order to generate the information needed for giving guidance advice.

Flexibility

Flexibility needs

While the need for standardization is widely accepted, the need for flexibility is largely neglected. 17 Flexibility in relation to infrastructures and standards is crucial for several reasons:

  1. Enable learning. For all new technologies it is a matter of fact that the first versions developed are poor in quality compared to later ones. They are improved as users gain experience with them and discover what is needed, as well as how the technology may be adapted to improved ways of working. For larger systems it is also the case that it is impossible to foresee all relevant issues and problems; they are discovered as we go along, and the technology must be changed accordingly. For users it is impossible to tell in advance what kind of technology will suit their needs best. User influence is an illusion unless it is based on substantial use experience.
  2. New requirements due to changing environments. For ISs it is a basic fact that their requirements change over time because their environment (including the user organizations) changes. The same is the case for information infrastructures and standards.
  3. Growth of an information infrastructure in itself generates (in some cases) needs for change. A typical example is the redesign of IP due to the current version's limited address space.
  4. As separate information infrastructures/networks develop and grow, there will be a need for linking them together, or integrating them into one network.

There are different types of flexibility. One is change flexibility, i.e. the ease with which an infrastructure can be changed by replacing one version of a standard with another. In the case of information infrastructures, it may be difficult to change the design of one version due to its complexity. The major difficulty, however, may be to replace one working version with another working one, as the change will introduce some kind of incompatibility which may cause a lock-in situation.

Another type of flexibility is use flexibility. This means that an information infrastructure/standard may be used in many different ways, serving different purposes. Use and change flexibility are linked in the sense that increased use flexibility decreases the need for change flexibility and vice versa.

Exit from lock-ins: revolution or evolution?

It is very important that infrastructures are designed in ways that help avoid the lock-in trap. An important strategy in that respect is to make the infrastructure as flexible as possible - in terms of use as well as change flexibility. I will here look a bit closer at the latter. Use flexibility makes it less likely that you will approach a lock-in (or you will do so less often), while change flexibility makes it easier to get out of a lock-in when it appears.

There are, in principle, two strategies to choose between to get out of a lock-in: an evolution strategy of backward compatibility or a revolution strategy of compelling performance. These strategies reflect an underlying tension when the forces of innovation meet network externalities: is it better to wipe the slate clean and come up with the best product possible (revolution), or to give up some performance to ensure compatibility and thus ease consumer adoption (evolution) (Shapiro and Varian 1999)?

The key to the evolution strategy is to build a new network by linking it to the old one. The technical obstacles faced have to do with the need to develop a technology that is at the same time compatible with, and yet superior to, existing products.

The revolution strategy is inherently risky. It cannot work on a small scale and usually requires powerful allies. Worse yet, it is devilishly difficult to tell early on whether your technology will take off or crash and burn. Even successful technologies start off slowly and accelerate from there.

Radical changes are often advocated - for instance, within the business process re-engineering literature. Empirically, however, such radical changes of larger networks are rather rare. Hughes (1987) found, as mentioned above, that large networks change only in the chaos of dramatic crises (such as the oil crises in the early 1970s) or in the case of some external shock.

Managing lock-ins - steering the monster: enabling flexibility through modularization and gateways

If you succeed in creating an installed base which starts growing through a self-reinforcing process, you have created a monster with momentum. It takes on a life of its own, developing in directions which you may not be able to alter. In other words: the infrastructure has reached a lock-in situation which you may not be able to bring it out of.

On a general level, two elements are necessary for developing flexible information infrastructures. First, the standards and information infrastructures themselves must be flexible and easy to adapt to new requirements. Second, strategies for changing the existing information infrastructure into the new one must be developed together with the necessary gateway technologies linking the old and the new. These elements are often interdependent.

The basic principles for providing flexibility are modularization and encapsulation (Parnas 1972). Another important principle is leanness, meaning that any module should be as simple as possible, based on the simple fact that it is easier to change something small and simple than something large and complex (8).
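
To make the principle concrete, here is a minimal sketch of encapsulation in Parnas' sense; the registry and its method names are my own invented illustration:

    # Information hiding (Parnas 1972): users of the registry depend only on
    # the two public methods, not on how identifiers are stored, so the
    # representation can be replaced without touching those users.
    class IdentifierRegistry:
        def __init__(self):
            self._codes = {}              # hidden representation: a plain dict

        def register(self, code, name):
            self._codes[code] = name

        def lookup(self, code):
            return self._codes.get(code)  # returns None for unknown codes

    # Swapping the dict for a file or a database changes only this class,
    # not the (possibly many) modules calling register() and lookup().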

We can distinguish between two variants of the evolutionary strategy: slow evolution based on backward compatibility, and fast evolution based on gateways linking the new and the old network. Gateways are perhaps the most important remedy for overcoming the negative effects of positive feedback and network externalities, i.e. lock-in and inefficiency (Katz and Shapiro 1985, David and Bunn 1988). Gateways may connect heterogeneous networks, built independently or based on different versions of the same standards (9).
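
In software terms, such a gateway is essentially a translator sitting between two networks that speak different formats. A minimal sketch (both message formats and all field names are invented for the illustration):

    # A gateway bridging two incompatible, hypothetical message formats.
    def old_to_new(msg):
        """Translate a message from the old network's format to the new one's."""
        return {"sender": msg["from"], "recipient": msg["to"],
                "payload": msg["body"]}

    def new_to_old(msg):
        """Translate in the opposite direction."""
        return {"from": msg["sender"], "to": msg["recipient"],
                "body": msg["payload"]}

    # As long as the gateway relays and translates traffic, users on either
    # side can ignore that the other side runs a different protocol - much as
    # the AC/DC converter discussed below let two electricity networks coexist.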

A well-known and important example of gateways, which is also analysed in the economics of standards and LTS literature, is the alternating/direct current (AC/DC) adapter (David and Bunn 1988; Hughes 1983). At the beginning of the twentieth century, it was still an open and controversial issue whether electricity supply should be based on AC or DC. The two alternatives were incompatible and a 'battle of systems' unfolded. As a user of electrical lighting, you would have had to choose between the two. There were strong proponents and interests behind both. Both had their distinct technical virtues. AC was more cost effective for long-distance transportation (because the voltage level could be higher) whereas a DC-based electrical motor preceded the AC-based one by many years. As described by Hughes (1983) and emphasized by David and Bunn (1988), the introduction of the converter made it possible to couple the two networks. It accordingly became feasible to combine the two networks and hence draw upon their respective virtues.

Gateways fill important roles in a number of situations during all phases of information infrastructure development. The key effect of traditional converters is that they sidestep - either by postponing or by altogether avoiding - a confrontation. The AC/DC adapter is a classic example. The adapter bought time so that the battle between AC and DC could be postponed; hence, it avoided a premature decision. Instead, the two alternatives were able to coexist and the decision could be delayed until more experience had been acquired. AC proved to be the preferable technology in most situations, and is today the dominant one. But DC is superior in some situations and is used in smaller niches where it smoothly interoperates with AC.

Sidestepping a confrontation is particularly important during the early phases of an infrastructure's development, as there is still a considerable amount of uncertainty about how the infrastructure will evolve. And this uncertainty cannot be settled up front; it has to unfold gradually. Gateways may prevent those in the position of making decisions from acting like 'blind giants' by making early decisions easier to reverse.

It is not only during the early phases that sidestepping confrontation is vital. It is also important in a situation where there are already a number of alternatives, none of which is strong enough to 'conquer' the others. In the case of e-mail systems, for instance, many different proprietary systems and protocols were developed before the Internet or other standards were available. On this basis, it has been considered more convenient to develop the different protocols separately and link the networks together through gateways.

A more neglected role of gateways is the way they support modularization. The modularization of an information infrastructure is intimately linked to its heterogeneous character. The impossibility of developing an information infrastructure monolithically forces a more patch-like and dynamic approach. In terms of actual design, this entails decomposition and modularization. The role of a gateway, then, is to encourage this required decomposition by decoupling the efforts to develop the different elements of the infrastructure. This allows a maximum of independence and autonomy (10).

Supporting infrastructures

As mentioned above, infrastructures on one level can only work if supported by infrastructures on the levels below. Building a transport infrastructure from scratch will often require a tremendous effort and may spoil the whole infrastructure building effort. This will be illustrated by the EDI in health care example in the next section. On the other hand, if the new infrastructure can run on an already existing infrastructure, building the new one might be surprisingly simple. As already mentioned, the success of the web demonstrates this.

This leads to two guidelines for building transport infrastructures. The first, and most important, one is simply: don't do it if you can avoid it - build on an existing installed base (11). 18 The second one is: design the application infrastructure independently of the transport infrastructures chosen, so that new transport infrastructures might be utilized in the future (12).

Most (application) infrastructures are also supported by various service infrastructures, which play many different roles. An example of a service infrastructure supporting the Internet is the Domain Name Service (DNS). EDI infrastructures in health care also need support from various infrastructures providing shared unique identifiers of various sorts (like drug identifiers, diagnostic codes, codes for specifying the results of lab tests, etc.). These service infrastructures will be presented quite extensively in section 5. I will here mention two others.

Globally available video-on-demand services enabling anybody to watch any movie or TV program anywhere at any time are assumed to be a major part of the information infrastructures of the future. Video transmission demands high bandwidth, and one cable and one video server can provide services to just a few users concurrently. Accordingly, a general large-scale video-on-demand information infrastructure needs services implemented by a large network of servers on which the movies and programs are stored. Typically, the most popular ones are stored on all servers, the less popular ones on just a few. All information may be stored on a central server, which distributes copies to the others so that they work as a network of caches. Similar service infrastructures may also be required if large numbers of users are going to use infrastructures providing on-line multimedia-based instruction programs.
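
The replication logic sketched here can be expressed as a simple placement rule; the thresholds and server names below are invented for the illustration:

    # Hypothetical popularity-based replication across video servers.
    def placement(view_count, servers):
        """Decide which servers should hold a copy of a title."""
        if view_count > 10_000:        # very popular: replicate everywhere
            return servers
        if view_count > 100:           # moderately popular: a few caches
            return servers[:2]
        return [servers[0]]            # rare: keep on the central server only

    servers = ["central", "cache-eu", "cache-us", "cache-asia"]
    print(placement(50_000, servers))  # all four servers
    print(placement(500, servers))     # ['central', 'cache-eu']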

E-commerce infrastructures will, it is assumed, require shared authentication and certification services, for instance digital signature services offered by so-called trusted third parties. The services of trusted third parties have to be integrated into a shared global infrastructure. Further, these services build upon yet another layer of service infrastructures to enable automatic verification of the certification of the trusted third parties, and so on. Such security functions are also assumed to be required in EDI infrastructures for health care. For instance, the use of digital signatures in the transfer of physicians' invoices is required in Norway.

An important difference between transport and service infrastructures is that transport infrastructures are absolutely necessary for moving data between the users of an infrastructure, while the role of service infrastructures depends heavily on the infrastructure's size and use. You do not need them when the infrastructure is small, but they become very important as the infrastructure scales. This has some important implications for design. To make the infrastructure easy for the first users to adopt, you should design it on the basis of the simplest possible service infrastructures, if any at all (13). Service infrastructures should then be implemented and enhanced as the infrastructure grows (14). This is illustrated by DNS. This service was not included in the early versions of the Internet; the need for it was discovered as the Internet was growing. To enhance the Internet with this service at that point turned out to be quite straightforward.

Examples

I will here present three cases, one from each kind of infrastructure mentioned in section 2, which illustrate various infrastructures, the role of the installed base in their evolution, and the relevance of the guidelines presented above. This includes illustrating the failures that traditional IS strategies lead to and the successes which are achieved when an installed base cultivation strategy is followed.

The Internet - successful installed base cultivation

Because of its success, the Internet is in most current discussions the paradigm example of how to develop a large-scale information infrastructure successfully. 19 It is also the best illustration of the importance of the installed base - the challenges involved as well as possible strategies for cultivating it. The de facto development strategy behind the Internet is definitely a "best practice" to be replicated. Such replication is certainly not trivial: each infrastructure to be built has its unique features and context to be accounted for, and the success of the Internet also depends on specific historical circumstances which cannot be reconstructed.

The Internet - its installed base - has continuously grown and changed for more than 20 years. With its millions of users across the globe and its numerous Internet Service Providers and other kinds of Internet developers, it is definitely an infrastructure according to the definition given in this article as well as to more common popular definitions. The trajectory it has followed can quite accurately be described by the two-legged model presented above: an installed base has been bootstrapped while at the same time being kept flexible, and seriously disadvantageous lock-ins have been avoided. 20 I will in the following subsections describe the evolution of the Internet, showing that the guidelines presented in the introduction have been followed and have played an important role in this evolution. (The guidelines will be referred to by their numbers in parentheses.)

Bootstrapping a self-reinforcing installed base
  1. Bootstrapping versus specifying and "big bang"

For a long period there was a fight between OSI and the Internet - sometimes called a religious war (Drake 1993). Now the war is over and OSI is the loser. The characteristics of the Internet's bootstrapping-oriented development process can perhaps be most clearly illustrated by contrasting it with that of OSI. The Internet's success is dependent on the experimental, bottom-up, and evolutionary bootstrapping-oriented strategy adopted (Abbate 1999, Leiner et al. 1997). The OSI approach was the conventional one, adopting a closed systems perspective on infrastructure building: first one should agree upon requirements, then specify the standards needed, and, finally, different actors can build the infrastructure by implementing the standards. Einar Stefferud (1992, 1994) and Marshall Rose (1992) claimed that OSI would be a failure due to its "installed base hostility." The OSI protocols were designed without paying attention to existing networks. They were specified in a way that caused great difficulties when trying to make them interoperate with corresponding Internet services, or when linking them to other (existing or future) infrastructures. The OSI protocols were designed following a closed systems approach - they were to be used only within a closed world where there would be no other protocols.

The "religious war" can also be interpreted as a fight between an installed base and improved design alternatives. As OSI protocols were discussed, specified, and pushed through the committees in ISO, the Internet protocols were implemented, deployed, and used. As the installed base of Internet protocols was growing, its use value increased and its position was strengthened relative to the "enemy." The installed base won, in spite of OSI's tremendous support from numerous gov­ernments and EU. I will in the following paragraphs describe the evolution of the Internet's installed base in more detail.

  2. Starting with simple solutions for specific users' needs (1, 2, 3). 21

Over the years the number of services provided by the Internet has grown extensively. Most new services - if not all - have been established by first developing simple solutions satisfying the specific needs of small user groups. The e-mail service, for instance, was developed (by adapting a computer conferencing system running on a mainframe computer connected to the network) in order to support the required collaboration between the people responsible for the local nodes of the Internet (or rather, the ARPANET) at a time when only four computers were linked to the net. As this service was used, its general usefulness was discovered, and the protocols and other standards were adapted to fulfil the requirements of a large-scale universal e-mail infrastructure (Abbate 1999). The web technology emerged through a remarkably parallel story: it was originally developed to support the collaboration among researchers attached to CERN.

  3. Build on existing installed bases (4)

The Internet's successful strategy has been experimental, bottom-up, and evolutionary in the sense that one layer or service has been established at a time, and when this service or layer is working satisfactorily, it has served as a platform for the development of new and more advanced or specific services (Abbate 1999). Currently, infrastructures providing the necessary services for electronic commerce, etc. are being explored and built as yet another layer on top of the existing installed base.

This bottom-up strategy has also been followed in the development of each layer or service, first in the development of the basic packet switching technology and an infrastructure based on this, and later on in the development of the required protocols and infrastructures providing services like e-mail, remote login, file transfer, the web, etc.

  4. Linking to existing infrastructures through gateways (5)

The Internet has also received parts of its use value through links to other infrastructures. This is in particular true for the web. The web has become useful through the numerous servers containing HTML-formatted files. But a crucial part of its use value is created by the information in ordinary databases or applications that have been linked to the web through gateways. Many such gateways are implemented as scripts invoked through the Common Gateway Interface (perhaps more widely known simply as CGI).
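
Such a gateway is typically a small script which the web server runs once per request: the script queries the back-end system and renders the result as HTML. A minimal, hypothetical sketch, with a dict standing in for the legacy database:

    #!/usr/bin/env python3
    # Sketch of a CGI gateway. The web server invokes the script and passes
    # the query string via the environment; the script looks the answer up
    # in a back-end (here a dict standing in for a real database) and
    # writes an HTML page to stdout.
    import os
    from urllib.parse import parse_qs

    DATABASE = {"42": "Example record fetched from the legacy system"}

    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    record_id = params.get("id", [""])[0]
    body = DATABASE.get(record_id, "not found")

    print("Content-Type: text/html")   # CGI header, then a mandatory blank line
    print()
    print("<html><body><p>%s</p></body></html>" % body)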

Managing lock-ins - enabling flexibility
  1. Modularization and leanness (8)

The Internet has proved to be remarkably flexible. Its protocols are lean and simple (in particular compared to their OSI counterparts). This fact can also to a large extent be explained by the experimental and prototype-oriented strategy. When developing a prototype of a product or service, one always develops a simple version with only some basic functions. This means that design ambiguities and errors are removed in the implementation process; the technology built is lean and simple and accordingly easy to change. At the same time, when the first version is put into use, new needs and requirements, which the technology has to adapt to, are generated. Accordingly, a consequence of the experimental strategy was that the development community was confronted with information infrastructure change challenges at an early stage. These challenges were then taken into account when designing new technologies as well as in future design strategies. This central element of the Internet approach is nicely expressed by the slogan "we believe in rough consensus and running code."

The dynamics of the Internet is reflected in the fact that a special report is issued approximately quarterly to keep track of all the changes (RFC 1995). The need for an information infrastructure to continue to change alongside its diffusion is expressed in a document describing the organization of the Internet standardization process:

"From its conception, the Internet has been, and is expected to remain, an evolving system whose participants regularly factor new requirements and technology into its design and implementation" (RFC 1994, 6).

The rules of the Internet standardization process require that for each step a protocol advances from proposed to full Internet standard, the required number of implementations and users grows.

Currently a series of working groups are working on how to adapt the Internet to new needs. Some needs stem from new services or applications; examples are (real-time) video and audio transmission, mobile computers, high-speed networks (ATM), and financial transactions and electronic commerce. Other problems, for instance routing, addressing, and net topology, are intrinsically linked to and fuelled by the diffusion of the Internet itself (RFC 1995). There is nothing to suggest that the need for flexibility to change the Internet will cease - quite the contrary (Smarr and Catlett 1992; RFC 1994, 1995).

  2. Gateways (9)

As the infrastructure grows, so do the challenges related to the installed base. And the current size and "weight" of the Internet's installed base definitely make changes more challenging. The best illustration of this is perhaps the development and diffusion of the new version of IP. This case also illustrates well the need for change, the difficulties in achieving it, and possible strategies and solutions (Hanseth et al. 1996, RFC 1994, Monteiro 1998). During the period between 1974 and 1978, four versions of the bottom-most layer of the Internet, i.e. the IP protocol, were developed and tested (Kahn 1994). For almost 15 years after that it was practically stable. Based on the above-mentioned needs, the work on a new version of IP started in the early 1990s. A detailed requirements list was worked out, and several new protocol alternatives were proposed. However, agreement was reached on a new IP, called IPng (new generation), or IP version 6, which fulfilled just a few of the requirements. The most important one is a new addressing scheme, which significantly extends the address space. One of the most important design criteria turned out to be solutions and strategies for a stepwise introduction of the new version into the existing Internet. Initially this criterion was hardly mentioned, but the deeper the designers inquired into the matter, the more important it was considered. Towards the end it was considered the most important one, together with the extension of the addressing scheme.

This criterion, of course, constrains the possible solutions to the various registered needs (RFC 1995). Most other requirements were dropped because the designers did not find ways to fulfil them while maintaining compatibility with version 4. The number of Internet users and user organizations now involved in the standardization process also increases the difficulty of reaching agreement about added and changed functionality (Steinberg 1995). It is assumed that the transition from the old to the new version will take years. After ten years, the first commercial implementations of the new protocol have just become available and the transition of the net itself has hardly started (van Best 2001). 22
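
The scale of the change, and one of the transition mechanisms, can be demonstrated with Python's standard ipaddress module (my illustration; the address used is a documentation example address):

    import ipaddress

    # Address space sizes of the two versions.
    print(2 ** 32)    # 4294967296 possible IPv4 addresses
    print(2 ** 128)   # roughly 3.4e38 possible IPv6 addresses

    # One transition mechanism: IPv4 addresses embedded in IPv6 addresses,
    # letting dual-stack software handle both uniformly.
    mapped = ipaddress.ip_address("::ffff:192.0.2.1")
    print(mapped.version)      # 6
    print(mapped.ipv4_mapped)  # 192.0.2.1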

Service infrastructures
  1. Use existing infrastructures as transport infrastructures as far as possible (11)

Exploiting existing infrastructures as transport infrastructures and not building new ones is, as mentioned above, a key issue in the successful development of the Internet.

  2. Build service infrastructures as needed (13, 14)

The Internet is based on and includes several service infrastructures - among them the Domain Name Service (DNS). DNS servers are connected and work together such that a query from an Internet application to its closest name server is propagated to other servers until the query can be answered. DNS is a global network of servers: an infrastructure in its own right, running on the Internet's TCP/IP 23 based infrastructure as its transport infrastructure.
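
The propagation of queries can be sketched as a walk through a hierarchy of name servers, each knowing only its own zone data and whom to refer to next. All zone data below is invented for the illustration:

    # Toy sketch of iterative DNS resolution: each server either answers or
    # refers the resolver to a server one level closer to the answer.
    SERVERS = {
        "root":      {"no.": "ns.no"},
        "ns.no":     {"uio.no.": "ns.uio.no"},
        "ns.uio.no": {"www.uio.no.": "129.240.13.1"},
    }

    def resolve(name):
        server = "root"
        while True:
            zone = SERVERS[server]
            if name in zone:           # this server holds the answer
                return zone[name]
            # otherwise follow the longest matching delegation downwards
            delegation = max((z for z in zone if name.endswith(z)), key=len)
            server = zone[delegation]

    print(resolve("www.uio.no."))      # -> 129.240.13.1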

DNS is an important service infrastructure which the whole Internet is based upon. It was not, however, a part of the Internet (or ARPANET) from the beginning. At that time the need for such a service had not yet been discovered. If it had been, however, it would have made the building of the Internet more difficult. At that early stage, the novelty of the technology made the whole project complex and risky as it was. More functions would have implied greater complexity, higher risks, and a higher likelihood of failure. The need for a support infrastructure like DNS was discovered when the Internet started to grow. At that point it was fairly easy to design and implement the service and link it to the otherwise stable and well-known technology. And from that time, DNS has been crucial in giving the Internet its scalability and later success.

The Internet also illustrates the heterogeneous nature of infrastructures. In August 2001 there were approximately 740 Internet standards: 67 Full, 76 Draft, and approximately 600 Proposed standards (RFC 2001a). Further, individual standards are not isolated from each other: a large number of other technical components depend on IP. An internal report assesses the situation more precisely:

"Many current IETF standards are affected by [the next version of] IP. At least 27 of the 51 full Internet Standards must be revised (...) along with at least 6 of the 20 Draft Standards and at least 25 of the 130 Proposed Standards." (RFC 1995, p. 38). 24

Business sector infrastructures: EDI in health care

The development of EDI infrastructures for health care in Norway (and the rest of Europe) has followed - except for its first phase - a strategy which is very different from the Internet's. And so is the result. 25 I will first present the strategies pursued to build such an infrastructure and then discuss their relation to the installed base cultivation strategy presented above.

The beginning: Simple solutions targeted for specific user groups' needs

An important event in the development of an information infrastructure for information exchange in the health care sector in Norway took place when a private lab developed a technological solution enabling electronic transmission of reports from the lab to General Practitioners (GPs). The lab developed the solution because it wanted to attract more customers, believing that GPs would appreciate receiving reports electronically so that they could be stored in their patients' medical records automatically. The solution was cheap and simple: it was implemented using a terminal emulator software package operating across telephone lines, and it was developed within three weeks. The lab gave it to GPs for free, together with modems. The solution was highly appreciated by the existing "customers," and it attracted lots of new ones. The success of this solution caused many other labs to copy it.

The continuation: Big bang - specifying the "final solution" based on universal and eternal standards
  1. Specifying universal and eternal standards, big bang

The positive experience with these lab report transmission systems made lots of people (doctors, health care authorities, software vendors, etc.) believe that this kind of technology could be successfully utilized within a wide range of areas inside health care. This included first of all the transfer of documents having much in common with lab reports: lab orders, admission and discharge letters, orders and reports to/from other kinds of labs, various reports to social security offices, prescriptions, etc. Each of these information types involves several different institutions. Accordingly, exchanging this information electronically would require a shared and open infrastructure for the whole health care sector.

When inquiring into the design of such an infrastructure, those involved also discovered the openness of the information exchanged, in the sense that each piece of information was included in several transactions (or messages - corresponding to today's paper forms), and that there were many links and interdependencies between the different information elements. During this process, consensus was also established about how to develop the solutions that would make the potential benefits real: defining a coherent set of standards. The key part of these standards should be an information model defining all information in one coherent way. Separate messages should be defined as "views" into this model. The model should be developed following traditional IS methodologies, i.e. based on user requirements and modelling of the "real world." Further, the standards should be global, because patients are increasingly travelling all over the world and the health care sector should, more or less, be able to buy products from manufacturers all over the world (De Moor 1993). From 1990 the focus in Norway and most European countries has been on participation in the specification of shared European standards organized by CEN TC/251. 26 It was taken for granted that once the standards were defined, they would diffuse automatically and be implemented in a "big bang" like process. This strategy implicitly assumed that there was no need for experimentation and learning, and no need to build and cultivate an installed base.
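
The idea of messages as "views" into one shared information model can be sketched as follows; the model and its fields are invented for the illustration and are far simpler than anything CEN TC/251 worked on:

    # Hypothetical sketch of "messages as views into one information model".
    from dataclasses import dataclass

    @dataclass
    class CareEpisode:                 # the one coherent model of "everything"
        patient_id: str
        gp_id: str
        lab_result: str
        prescription: str

    def lab_report_message(e):
        """One message type: a view selecting only the lab-related fields."""
        return {"patient": e.patient_id, "to": e.gp_id, "result": e.lab_result}

    def prescription_message(e):
        """Another message type: a different view over the same model."""
        return {"patient": e.patient_id, "drug": e.prescription}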

  2. Transport infrastructures

The "design from scratch" way of thinking and disregard of the importance of the installed base is also well illustrated by the fact that it was decided that information like lab reports should be exchanged using an underlying (transport) infrastructure based on X.400. 27 When the first standards were specified and ready for implementation, there was in practical terms no available X.400 based infrastructure. This implied that such an infrastructure had to be built first (or in parallel) with the specific infrastructure for exchange of the medical information. The need to build a (complex and expensive) X.400 infrastructure increased the implementation problems significantly. In spite of the political backing of this choice, user organizations were reluctant. This reluctance is due to X.400 implementations complexity, price, and - lack of installed base. Many organizations had other e-mail systems which were interoperating with most others.' They had difficulties in accepting that they could not use these. Accordingly everybody was waiting to see whether the others really bought X.400.

  3. Service infrastructures

EDI-based information infrastructures usually need various service infrastructures, in particular identification services. Some of these have been attempted designed as part of the standardization work, others as parts of the efforts to build infrastructures. Lab report messages must identify their order, the ordering unit (GP, hospital department, another lab, etc.), and the patient; prescription messages must identify the prescribed drug; and so on. Registers of such identifiers must be available to the information infrastructure users. This requires an information infrastructure for maintaining and distributing updated versions of the registers. Such information infrastructures may be rather complex, technologically as well as organizationally. At the European level, there has been much discussion about a European standardized system of codes for identifying the object analyzed (blood, skin, urine, etc.), where on the body it is taken, the tests performed, and their results. It has so far turned out to be too difficult to reach agreement. The problem is partly to find a system serving all needs. But it is maybe even more difficult to find a new system which can replace the installed base of existing ones at reasonable costs.

Drug identifiers have received much attention in the building of an information infrastructure for transmission of prescriptions from GPs to pharmacies in Norway (Hanseth and Monteiro 1997). The solution agreed upon was to use the same identifier codes in the prescriptions as those the pharmacies use in their existing inventory control and ordering systems. The fact that this solution required an infrastructure for generating updated versions of the identifier lists, distributing new versions to the GPs, installing the new versions as part of the medical record system (which had to be changed to support the code lists), etc. came as a big surprise to those involved, and at the same time establishing such an infrastructure was far beyond their reach. The establishment of new registers and their required services and infrastructure will often be of this level of complexity, I am afraid. For this reason, it was decided that in the first version of the infrastructure for exchange of prescriptions, drugs should be identified using the name of the drug. Until the infrastructure is used quite extensively, this should be a perfectly adequate solution, I am convinced.

Another example of a service infrastructure for health care infrastructures is the ICD system for classification and coding of diseases, whose infrastructural character is discussed in great depth by Geof Bowker and Leigh Star (1999). 28

Discussion

The de facto strategy followed when developing the "first generation" solutions was very successful, and it was indeed a perfect approach for starting to build an infrastructure and growing an installed base. These actions were all in line with rules 1 to 3 (and to some extent even 4). 29

The strategy in the next phase was different. Then the traditional "closed system" and specification driven approach to IS development and software engineering was adopted. This was exactly the same as the one followed in the ISO/OSI effort - and with the same unsuccessful result.

Since 1990, huge efforts have been spent on standardization in health care. Some standards have been specified, and some of them implemented in some small and isolated networks. But, by and large, no infrastructure has been built. The lack of progress after about ten years of standardization work can be explained by the complexity - not to say the impossibility - of working out specifications of the information to be exchanged which satisfy the needs of the whole of Europe, and by the complexity and costs involved in the implementation of the specified standards. This means that the challenges involved in the development of such infrastructures cannot be attacked with traditional IS methodologies and their inherent assumptions about closed systems. This approach pays no attention to the need for bootstrapping an installed base. The strategy followed implied doing exactly the opposite of what rules 1 to 7 say (maybe with an exception for rule 6 - those involved believed the OSI effort was a sort of bandwagon). Further, no attention was paid to the fact that the standards they tried to define would have to change after their implementation. 30 They also tried to define and build the service infrastructures needed in the future in parallel with the application infrastructures, and they decided to build new transport infrastructures from scratch rather than drawing upon existing ones. In sum: their approach was to do exactly the opposite of all the rules advocated by this article!

Would "cultivation" make a difference?

The question is, then: would it have made a difference if the actors involved had pursued a cultivation strategy? Of course, we cannot be sure about this. First, any general strategy might be implemented in a way that seems correct but turns out to be stupid. Second, we can never make a strategy that will work for all cases. But I do think that if more actors had followed a cultivation strategy, better results would have been achieved. A cultivation strategy for continuing the infrastructure building activities in Norway after the first phase could have proceeded based on the following principles:

  1. Improve and extend the existing infrastructure.
  2. Harmonize the message formats into more general standards.
  3. Link the different transport infrastructures.
  4. Make similar infrastructures for other areas (other kinds of lab reports, then other kinds of forms).
  5. Improve the solutions based on experience gained about how the technology can enable better and/or more efficient health care services.

I believe many problems would have been solved by simply focusing on the development of national standards rather than European or global ones. The need for information exchange across national borders is limited, while the variety among different health care systems makes the design of shared standards much, much more challenging. When the need for information exchange across national borders emerges, it should be quite straightforward to set up gateways between the national infrastructures. The lab pioneering electronic transmission of reports in Norway, for instance, has set up gateways to several non-Norwegian pharmaceutical companies (which receive the lab reports related to testing of new drugs). On average, the design and implementation of such a gateway took approximately one man-week (Hanseth and Monteiro 1997).
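
Such a gateway does little more than map field names and local codes between the national formats. A minimal sketch (the two record layouts and all codes are invented, not taken from the Norwegian case):

    # Hypothetical cross-border gateway mapping one national lab report
    # layout onto another.
    FIELD_MAP = {"pasient": "patient", "analyse": "test", "resultat": "result"}
    CODE_MAP = {"GLU": "glucose", "HB": "haemoglobin"}   # local -> shared codes

    def translate(report):
        out = {FIELD_MAP[k]: v for k, v in report.items() if k in FIELD_MAP}
        out["test"] = CODE_MAP.get(out["test"], out["test"])
        return out

    print(translate({"pasient": "12345", "analyse": "GLU", "resultat": "5.1"}))
    # {'patient': '12345', 'test': 'glucose', 'result': '5.1'}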

More cultivation-oriented approaches are currently emerging in the health care sector as well. These are inspired by the success of the Internet, which has also become visible to the actors in this sector. Lots of initiatives focusing on the use of the Internet and Internet technology are now being launched. These initiatives are rather experimental and prototype-oriented in their approach. This approach is chosen partly because many actors find it appropriate to experiment with this technology to see what it can deliver, and partly because they adopt the "Internet approach" due to its demonstrated success (Lundberg and Hanseth, 2001).

Corporate infrastructures: IS for ship classification

The last example I will present is about the design and implementation of a solution which has much in common with traditional information systems. 31

Case description

The designers definitely saw the task as developing a complex but otherwise traditional IS. During implementation, several installed base issues and challenges emerged, and the designers had to adopt some cultivation tactics, but without changing their espoused model of the solution or their design and implementation strategy.

This system has been developed in-house (with assistance from consultants) for a Scandinavia-based maritime classification company, here called MCC. The company was established in 1864 and has a long tradition of operating worldwide as an independent foundation with the objective of "safeguarding life, property and the environment." The ship classification work is done by employees located at more than 300 offices in more than 100 countries. The total number of employees is approximately 5,500. Most of the work is done in major harbours around the world and in the most important shipbuilding countries.

The work of the ship surveyors has been paper-based. Paper documents have been sent to the headquarters, where the final reports are produced and sent to the ship owners together with the certificates. Information about the ships was stored in a database running on a mainframe computer to which only personnel at the headquarters had access.

To improve its performance and competitiveness, MCC decided in the early 1990s to develop a new solution supporting its activities. The solution should support the surveyors' work all over the world as well as the staff at the headquarters. Surveyors usually do their inspection work when ships are in harbours. This may take some time, which may cause delays for the ships. To avoid this, MCC wanted to be able to split one survey job into parts carried out in different harbours, say Hong Kong and London. This required a common IT solution for all surveyors, and it also required a standardization of the survey jobs. The solution should support the surveys of a ship during its entire life, from the moment engineers start specifying it on the drawing board until it is scrapped. This requires information sharing with shipyards during the building of the ships. The solution should also support the management of surveys, generate statistics to help identify best practices, etc. In addition, MCC wanted to give shipowners access to information about their ships, and harbour authorities access to information about ships approaching.

The development project started in 1994. The key idea was to integrate all relevant information for classification of vessels around a common product model, i.e. a standardized description of all parts of a ship and their relationships. The solution can be seen as an ERP system for ship classification companies. It was planned to be designed from scratch and implemented in a "big-bang" like process.

As the design project proceeded, problems and challenges appeared - of course. Many of them can be explained by the high ambitions and complexity of such an undertaking. But they can also be explained by the infrastructural character of the solution they planned to build and the necessity of adopting an installed-base-cultivation strategy.

The envisioned solution could very well be considered an infrastructure, because it should support a large open community and an open range of activities carried out by its members. It is open because it includes - in principle - all shipowners, shipyards, and harbour authorities of the world. But the users inside MCC which the solution should support are also a quite large and open community, involving a wide range of personnel categories. And among the surveyors there is a huge variety of practices depending upon the kinds of vessels classified, the size of the offices, local/national contexts of the work, etc. The variety, complexity, and knowledge intensity of this work make the surveyors quite autonomous. Their work cannot be designed by central managers - and definitely not by software designers.

The complexity of the users' practices was discovered as the design work unfolded. For this reason, the designers gave up making one system supporting all users - at least in the first version. Several functions were supported by off-the-shelf products, for instance an electronic document archiving system, which were integrated with the system under design. When the organizational implementation started, some user groups found parts of it totally unacceptable and replaced these with simple ad hoc solutions based on document templates defined in Microsoft Word.

The designers also had to give up some of their objectives because they discovered that the new solution had to be integrated with and adapted to existing installed bases. These included, among others, the database running on the mainframe computer at the headquarters and a smaller system generating a set of paper-based checklists used by the surveyors.

The solution required a supporting infrastructure. One such - which included a shared network linking all offices together, a standardized infrastructure of computers and operating systems, and a standardized package of ordinary applications like word processing, spreadsheet, e-mail, etc. - was specified and successfully implemented. The envisioned solution would also require various service infrastructures, for instance for coding all the data to be included in the generation of statistics to help improve the practices.

The first version of the solution was implemented in 1999 - five years after the design work started. This was several years behind schedule, and the functionality was very limited compared to the original specification. Currently (March 2002), this version is in operation and running satisfactorily at many offices. But none of the objectives related to improved organizational performance has been attained so far.

Discussion

This case, like the previous one, demonstrates the limits of the traditional perspective on IS development. The complexity and dynamics of a case like this seem far beyond what can be approached with traditional strategies. To make any progress, the designers had to let some of the integration ambitions go and split the unified system into a number of rather independent ones. And the design of these was just as much determined by existing systems - the installed base - as by future user needs. In parallel, they discovered the degree to which the modules of the system that would give external organizations (shipyards, shipowners, port authorities, etc.) access had to be developed in collaboration with these organizations. The system, as it was initially conceived, has to be realized as something having much more the flavour of an infrastructure than of a traditional system. And the strategy followed so far has drifted significantly from the initial IS design strategy towards one which is closer to installed base cultivation. This has happened in the development of the "internal" systems. To build an infrastructure enabling the envisioned kind of communication between MCC and external actors, they are now working on the establishment of a consortium that will do this.

At the moment, discussions about how to proceed are going on. The original objectives are still very much alive, but opinions about how to reach them diverge. Key designers argue for the original strategy, i.e. trying to reach the objectives by "cleaning up" the product model and then integrating all modules around it. Others argue for a more evolutionary approach, improving the existing solution step by step. High-level managers are becoming worried because of the existing solution's lack of flexibility. They see the need for continuous organizational change, but such changes require the solution to be changed as well. This has already proved to take a long time, and the costs are high. Accordingly, a more flexible solution is needed - one split into more independent modules rather than a more integrated one.

The original goals may still seem worth striving for, but there is a long, long way to go to reach them. As I see it, there is no alternative to an evolutionary approach where the existing portfolio of integrated systems - or infrastructure - is improved and extended step by step, in collaboration between different personnel categories inside MCC and the external community. A cultivation approach seems even more important for achieving one of the important goals: a standard system for coding all data registered, in order to enable comparison of the way tasks are done and the results achieved. Such a system is supposed to support continuous improvement processes by letting individuals compare their own work to others', and by the aggregation of data into statistics that will serve as improvement tools for managers. The development of such a system will be a rather challenging bootstrapping process: users will not use such a coding system unless it has proved appropriate for their primary tasks and they see the benefits for themselves. But such a coding system can only be designed through an iterative or evolutionary process in which it is improved based on practical experience.

Future research

Acknowledgement

References

Abbate, J. (1999). Inventing the Internet. MIT Press, Cambridge, Ma.

Arrow, K. J. (1994) Foreword. In (Arthur 1994)

Arthur, W. B. (1994) Increasing returns and Path Dependence in the Economy. Ann Arbor: The University of Michigan Press.

Arthur, W. B. (1988), Competing Technologies: an Overview. In Technical Change and Economic Theory, Ed.: Giovanni Dosi et al., p. 590-607, Pinter Publishers, New York.

van Best, J.-P. (2001). IPv6 Standardization issues. In K. Dittrich and T. M. Egyedi (eds.) (2001). Standards, Compatibility and Infrastructure Development. Proceedings of the 6th EURAS Workshop, Delft University of Technology, Faculty of Technology, Policy, and Management, Delft, the Netherlands.

Bowker, G. and Star, S.L. (1999). Sorting Things Out. Classification and its Consequences. MIT Press, Cambridge, Ma.

Branscomb, L.M. and Kahin, B. (1995) Standards Processes and Objectives for the National Information Infrastructure. In (Kahin and Abbate 1995).

Bud-Friedman, L. (1994) (ed.) Information acumen. The understanding and use of knowledge in modern business, pages 187-213. Routledge.

Callon, M. Techno-economic networks and irreversibility. In Law, J., editor, A sociology of monsters. Essays on power, technology and domination, pages 132-161. Routledge, 1991.

CEN TC251/PT03-025.(1995). Methods for the development of Healthcare messages. CEN, Brussels.

CEN TC251. (1996) Standards in Health Care and Telematics. CEN, Brussels.

Ciborra, C. U. Introduction: What Does Groupware Mean for the Organizations Hosting It? In C. Ciborra (Ed.), Groupware and Teamwork. New York: John Wiley & Sons. pages 1 - 19, 1996.

Cilliers, P. (1998). Complexity & Postmodernism. Understanding Complex Systems. Routledge, London, UK.

Dahlbom, B. and Janlert, L. E. (1996) Computer Future. Manuscript.

David, P.A., (1986) Understanding the Economics of QWERTY. In Economic History and the Modern Economist, edited by W. N. Parker. Basil Blackwell.

David, P.A., and Bunn, J.A. (1988), The Economics of Gateway Technologies and Network Evolution. Information Economics and Policy, vol. 3, pp. 165-202.

De Moor, G.D.E. (1993) Standardisation in Health Care Informatics and Telematics in Europe: CEN TC 251 activities. In (De Moor et al. 1993).

De Moor, G.D.E., McDonald, C.J., and Noothoven van Goor, J. (eds.). (1993) Progress in Standardization in Health Care Informatics. IOS Press, Amsterdam.

Drake, W.J. (1993). The Internet religious war. Telecommunications Policy, Vol. 17 No. 9.

Eidnes, H. (1994). Practical considerations for network addressing using CIDR. Communications of the ACM, 37(8):46-53. Special issue on Internet technology.

Fitzgerald, B. (2000). System development methodologies: the problem of tenses. Information Technology & People, Vol. 13, No. 3, pp. 174-185.

Forster, P.W., and King, J.L. (1995) Information Infrastructure Standards in Heterogeneous Sectors: Lessons from the Worldwide Air Cargo Community. In (Kahin and Abbate 1995).

Grindley, P. (1995) Standards, Strategy, and Politics. Cases and Stories. Oxford University Press, New York.

Guralnik, D.B. (ed.) (1970) Webster's New World Dictionary of the American Language. The World Publishing Company, New York.

Hannemyr, G. (in print). The Internet as Hyperbole. A Critical Examination of Adoption Rates. The Information Society.

Hanseth, O., Ciborra, C., Braa, K. (in print) The Control Devolution: ERP and the Side-effects of Globalization. The DATA BASE for Advances in Information Systems, Fall 2001/Winter 2002.

Hanseth, O. and Lundberg, N. (Forthcoming) Designing Work Oriented Infrastructures. Computer Supported Cooperative Work.

Hanseth, O., and Monteiro, E. (1996). Information Infrastructure Development: The Tension between Standardization and Flexibility. In Science, Technology & Human Values, Vol. 21 No 4, Fall 1996, 407-426.

Hepsø, I., Monteiro, E., Schiefloe, P. M. (Submitted) Implementing multi-site ERP projects: centralization and decentralization revisited.

Hughes, T.P. (1983) Networks of power. Electrification in Western society 1880-1930. The Johns Hopkins University Press.

Hughes, T.P. (1987) The evolution of large technical systems. In Bijker, W.E., Hughes, T. P., and Pinch, T., eds., The social construction of technological systems. Cambridge, MA: MIT Press.

Irmer, T. (1994) Shaping Future Telecommunications: The Challenge of Global Standardization. In IEEE Communications Magazine. Special Issue on Standards: Their Global Impact. Vol. 32 No. 1.

Kahin, B., and Abbate, J. (1995) Standards Policy for Information Infrastructure. MIT Press, Cambridge Massachusetts, 1995.

Kahn, R.E. (1994) The role of government in the evolution of the Internet. Communications of the ACM, 37(8):15-19. Special issue on Internet technology.

Katz, M., and Shapiro, C. (1985), Network Externalities, Competition and Compatibility. American Economic Review, vol. 75 (3), pp. 424-440.

Latour, B. (1991) Technology is society made durable. In J. Law, editor, A sociology of monsters. Essays on power, technology and domination, pages 103-131. Routledge.

Latour, B. (1999). Pandora's Hope. Essays on the Reality of Science Studies. Harvard University Press.

Leiner, B.M., Cerf, V.C., Clark, D.D., Kahn, R.E., Kleinrock, L., Lynch, D.C., Postel, J., Roberts, L.G., and Wolff, S.S. (1997). The past and future history of the Internet. Commun. ACM, Vol. 40, No. 2, Pages 102-108.

Lundberg, N. and Hanseth, O. (2001). Standardization strategies in practice. Examples from healthcare. In Stegwee, R., and Spil, T. (eds.) Strategies for Healthcare Information Systems. Idea Group Publishing.

Mangematin, V. and Callon, M. (1995) Technological competition, strategies of the firms and the choice of the first users: the case of road guidance technologies. Research Policy, Vol. 24, p. 441-458.

March, J.G., and Olsen, J.P. (1989) Rediscovering Institutions. The Organizational Basis of Politics. Free Press, New York.

Monteiro, E. (1998). Scaling information infrastructure: the case of the next generation IP in Internet. The Information Society, 14(3).

North, D.C. (1990) Institutions, Institutional Change and Economic Performance. Cambridge University Press.

Orlikowski, W. J. (1996) Improvising organizational transformation over time: a situated change perspective. Information Systems Research, 7(1):63-92.

Orlikowski, W. J., and Iacono, C. S. (2001). Research Commentary: Desperately Seeking the "IT" in IT Research -- A Call to Theorizing the IT Artifact. Information Systems Research, 12(2):121-134.

Parnas, D.L. (1972). A Technique for Software Module Specification with Examples. Communications of the ACM, Vol. 15 No. 5, pp. 330-336.

Porra, J. Colonial Systems. Information Systems Research, Vol. 10, No. 1, March 1999.

Powell, W.W. and P. J. DiMaggio (eds.). (1991) The New Institutionalism in Organizational Analysis. The University of Chicago Press.

RFC (1994). The Internet standards process - revision 2. RFC 1602, IAB and IESG.

RFC (1994b). SMTP service extension for 8bit-MIME transport. RFC 1652, IAB and IESG.

RFC (1995). The recommendation for the IP next generation protocol. RFC 1752, IAB and IESG.

RFC (2001a). Internet Official Protocol Standards, May 2001. IETF Standard #1 STANDARD. http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc2800.html

RFC (2001b) A SOCKS-based IPv6/IPv4 Gateway Mechanism. RFC 3089. http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc3089.html

Rolland, K. and Monteiro, E. (Forthcoming) Balancing the local and the global in infrastructural information systems. Forthcoming, The Information Society.

Rose, M. T. (1992) The future of OSI: a modest prediction. In Proc. of the Usenix conference 1992. USENIX Association, Berkeley, USA.

Shapiro, C., and Varian, H. R. (1999) Information rules: a strategic guide to the network economy. Boston, Mass.: Harvard Business School Press.

Sommerville, I. (2001) Software Engineering. Addison-Wesley. 6th Edition.

Spinosa. C., Flores, F., and Dreyfus, H. L. (1997). Disclosing new worlds: Entrepreneurship, democratic action, and the cultivation of solidarity. MIT Press, Cambridge, MA.

Stefferud, E. (1992) E-mail sent to Eva Kuiper with copy to IETF's mailing list 12 May 1992.

Stefferud, E. (1994) Paradigms Lost. Connexions: The Interoperability Report, 8(1).

Steinberg, S.G. (1995) Addressing the Future of the Net. WIRED, May: 141-144.

Weill, P., and Broadbent, M. (1998) Leveraging the New Infrastructure. Harvard Business School Press, Boston, MA.


1. This view on IT as a design ideal is spelled out in detail by Pelle Ehn (198x) and by Terry Winograd and Fernando Flores (198y).

2. An assembly line, as portrayed in Charlie Chaplin's movie "Modern Times", is a paradigm case.

3. Although most methodologies mention interfaces to other systems as an issue, it is not treated as an important one, and how to deal with it is not integrated into the methodologies.

4. Brian Fitzgerald (2000) makes an argument which is close to the one I am presenting here. He shows that most key concepts and ideas (like systems development life cycle, prototyping, user participation, structured development, information hiding, etc.) of current IS methodologies came to prominence in the period from the mid-sixties to the mid-seventies in order to deal with the issues most challenging in IS development at that time. He also shows, through extensive survey studies, that there are profound differences between the development environment currently faced and that which prevailed when these methodologies were first promoted (ibid.).

5. Open, as opposed to proprietary, means that an infrastructure (or system or standard) is open with regard to who can participate in the design, implementation and use of the technology. But solutions often declared to be open, like the ISO OSI suite of protocols, are closed in the sense that they are designed under the assumption that OSI protocols should only communicate with other implementations of the same protocols, without allowing (or supporting) gateways to other existing networks using different protocols.

6. When arguing for the replacement of the concept of system with that of infrastructure, it would be more precise to say that the term system should be replaced by network when describing the change in our view of the inner structure of our ICT solutions, and by infrastructure when describing how the solutions appear from a user perspective: more as an infrastructure than as a tool. I have, however, found it more convenient to talk simply about infrastructures, which are also open networks.

7. See footnote 6.

8. This brief description is intended to cover emergent standards as well as imposed ones.

9. The definition of infrastructures in the NII document is an excellent illustration of the heterogeneous character of infrastructures.

10. A richer and better illustration of this is presented in footnote 14.

11. The general validity of this point is nicely captured by an email circulated on the Internet (origin unknown): Who says it is not worthwhile to study history? Isn't the item below invaluable? ... The US standard railroad gauge (distance between the rails) is 4 feet 8.5 inches. That's an exceedingly odd number. Why was that gauge used? Because that's the way they built them in England, and English expatriates built the US railroads. Why did the English build them like that? Because the first rail lines were built by the same people who built the pre-railroad tramways, and that's the gauge they used. Why did 'they' use that gauge then? Because the people who built the tramways used the same jigs and tools that they used for building wagons, which used that wheel spacing. Okay! Why did the wagons have that particular odd wheel spacing? Well, if they tried to use any other spacing, the wagon wheels would break on some of the old, long distance roads in England, because that's the spacing of the wheel ruts.

So who built those old rutted roads? The first long distance roads in Europe (and England) were built by Imperial Rome for their legions. The roads have been used ever since. And the ruts? Roman war chariots first made the initial ruts, which everyone else had to match for fear of destroying their wagon wheels and wagons. Since the chariots were made for, or by, Imperial Rome, they were all alike in the matter of wheel spacing. Thus, we have the answer to the original question: the United States standard railroad gauge of 4 feet 8.5 inches derives from the original specification for an Imperial Roman war chariot. Specifications and bureaucracies live forever. So, the next time you are handed a specification and wonder which horse's ass came up with it, you may be exactly right, because the Imperial Roman war chariots were made just wide enough to accommodate the back ends of two war-horses.

Now there's an interesting extension to the story about railroad gauges and horses' behinds. When we see a Space Shuttle sitting on its launch pad, there are two big booster rockets attached to the sides of the main fuel tank. These are solid rocket boosters, or SRBs. Thiokol makes the SRBs at their factory in Utah. The engineers who designed the SRBs might have preferred to make them a bit fatter, but the SRBs had to be shipped by train from the factory to the launch site. The railroad line from the factory had to run through a tunnel in the mountains, and the SRBs had to fit through that tunnel. The tunnel is slightly wider than the railroad track, and the railroad track is about as wide as two horses' behinds. So, the major design feature of what is arguably the world's most advanced transportation system was determined by the width of a horse's ass!

12. Even if they have the formal right to do so, they do not have the required competence or work capacity.

13. David (1987) points out three more strategy dilemmas related to lock-in situations:

Narrow policy window. There may be only brief and uncertain "windows in time," during which effective public policy interventions can be made at moderate resource costs.

Blind giants. Powerful actors like governmental agencies are likely to have the greatest power to influence the future trajectories of network technologies just when the informational basis on which to make socially optimal choices among alternatives is most lacking. The actors in question, then, resemble "blind giants" - whose vision we would wish to improve before their power dissipates.

Angry orphans. Some groups of users will be left "orphaned": they will have sunk investments in systems whose maintenance and further elaboration are going to be discontinued. (This dilemma is related to lock-in in chaos, while the first two are related to lock-in in order.)

14. Talking about designing a new infrastructure might appear to contradict my repeated claim that infrastructures are never built from scratch, only by enhancing and extending existing ones. There is no contradiction: the point is that any infrastructure requires a supporting one, and the availability of such supporting infrastructures is an important success factor. The fewer modifications and extensions of existing infrastructures required to make the new one, the easier it is to build.

15. One might argue, however, that such e-commerce infrastructures are replacing non-electronic infrastructures and that these also have to interoperate. Such integration will not be explicitly discussed here; that is done, for instance, in (Lundberg and Hanseth, 2001).

16. The numbers in parentheses in this and the next two sections refer to the numbered list of guidelines presented in the introduction.

17. It is a paradox that proponents of specific standards use the problems related to the irreversibility of not fully interoperable standards as a key argument in favour of the standards they are proposing, while at the same time completely neglecting the problems and challenges related to the irreversible installed base that their own standards may build up (see, for instance, De Moor 1993).

18. This does not imply that one should not build what is commonly seen as some kind of transport infrastructure. They definitely need to be built - otherwise they would not be there when you need them. The point is that when you are developing an (application) infrastructure to serve some kind of purpose, you should avoid a situation (or decisions bringing you into a situation) where you have to design the infrastructures underlying the application infrastructure.

19. Research method:

20. There are, however, good reasons to ask whether the Internet has grown beyond the limits of its organizational and standardization model (see Eidnes 1994, Steinberg 1995, van Best 2001).

21. These numbers refer to the guidelines in the introduction.

22. This illustrates that the Internet is not at all changing as fast as the hype tries to make you believe. For more on this issue, see (Hannemyr, in print).

23. DNS in fact uses UDP rather than TCP for ordinary queries; the same holds for the talk service (while ping runs directly over ICMP, and finger over TCP). UDP is a simpler, connectionless alternative to TCP that offers no delivery guarantees.
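To make the point concrete, here is a minimal sketch (my illustration, not code from any of the systems discussed in this article) that sends a hand-built DNS query for an A record as a single UDP datagram. The query id, the name example.com, and the resolver address 8.8.8.8 (one example of a public DNS server) are arbitrary example values.

    import socket
    import struct

    def build_query(name):
        # 12-byte DNS header: query id, flags (0x0100 = recursion desired),
        # one question, and zero answer/authority/additional records.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # QNAME is a sequence of length-prefixed labels ending in a zero byte.
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        # QTYPE=1 (A record), QCLASS=1 (Internet).
        return header + qname + struct.pack(">HH", 1, 1)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: no connection set-up
    sock.settimeout(2)
    sock.sendto(build_query("example.com"), ("8.8.8.8", 53))  # one datagram out...
    reply, _ = sock.recvfrom(512)                             # ...one back (classic 512-byte limit)
    print(len(reply), "bytes received")

The whole exchange is one datagram each way; an equivalent TCP-based service would first have to set up, and later tear down, a connection - exactly the overhead UDP avoids.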

24. Note the growth in the number of Internet standards from 1995 onwards.

25. Research method: ....

26. CEN (Comité Européen de Normalisation) is the European counterpart of the International Organization for Standardization (ISO).

27. X.400 is the ISO/OSI standard protocol for e-mail.

28. Several discussions of registers of this kind can be found in (Bud-Friedman 1994).

29. Nobody, however, really seemed to pay attention to the relationship between the solutions developed first and the more general infrastructures that could be developed in the future.

30. This is not absolutely true. The need for new versions of EDIFACT messages was taken into account by giving each version a unique version id, as illustrated in the sketch below. But beyond this trivial mechanism, the issue was completely overlooked.
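As a concrete illustration of that mechanism, the following sketch (mine, not taken from the EDIFACT specifications) reads the version id out of a message's UNH header segment. The sample segment uses the real ORDERS message type from the D.96A directory; the message reference number is an arbitrary example.

    def edifact_version(unh_segment):
        # UNH layout in the UN/EDIFACT syntax:
        # UNH+<message ref>+<type>:<version>:<release>:<controlling agency>'
        elements = unh_segment.rstrip("'").split("+")
        msg_type, version, release = elements[2].split(":")[:3]
        return msg_type, version, release

    print(edifact_version("UNH+1+ORDERS:D:96A:UN'"))   # -> ('ORDERS', 'D', '96A')

The version and release ("D", "96A") identify the directory a message conforms to, but nothing in the mechanism helps two parties bridge between versions - which is the sense in which the issue was overlooked.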

31. This presentation is based on Knut Rolland's research and fieldwork in this organization (Rolland 2000, Rolland and Monteiro 2002, Rolland forthcoming). In the period from .. to .. he interviewed xx persons involved in the case and visited yy sites where the system is in use (Rolland forthcoming).