CHAPTER 11 Changing networks: Strategies and tools

THIS CHAPTER IS IN A PRE-PRE-PRE-DRAFT STATUS

Introduction

Our analysis of information infrastructures has led to a rephrasing of the notion of design. The "design" of an infrastructure is the purposeful intervention in, and cultivation of, an already existing, well-aligned actor-network. This immediately prompts the question of what strategies exist for such purposeful intervention. One may distinguish between three generic strategies:

an evolutionary one: a slow, incremental process where each step is short and conservative;

a more daring one: a faster process where each step is longer and more daring;

a radical one: fast changes representing a radical break with the past.

We have elaborated the first of these conceptually (chapter 9) as well as empirically (chapter 10). Here, change means a (limited) number of new features added to the existing ones; the new is just an extension of the existing. Its successful application within the Internet is in itself a warrant for its relevance. It is what corresponds to "backwards compatibility" - a well known phenomenon in the world of products (Grindley 1995).

More daring changes imply "jumping" between disconnected networks: a new network is built from scratch, unrelated (that is, completely un-aligned) to the existing one, and users have to jump from the one to the other. This kind of change is illustrated by e-mail users subscribing to America Online who "jumped" to the Internet.

Changing a network through this kind of abrupt change is, however, often difficult due to the important role of the installed base. Connection to the first network gives access to a large community of communicating partners, while the new one initially gives access to none, making it unattractive to be among the first movers. In spite of this fact, this jumping strategy is really the one implicitly assumed in the definition of the OSI protocols, and it is claimed to be the main explanation of their failure (Stefferud 1994; Hanseth, Monteiro, and Hatling 1996). Such a jumping strategy might, however, be made more realistic if combined with organization and coordination activities. A simple strategy is to decide on a so-called "flag day" when everybody is expected to jump from the old to the new. Still, this strategy requires that the communicating community has a well defined, central authority and that the change is simple enough to be made in one single step. Changing the Norwegian telephone system in 1994 from six- to eight-digit numbers was done in this way by the Norwegian Telecom, which at that time enjoyed a monopoly status.

Radical changes are often advocated, for instance within the business process reengineering (BPR) literature (ref.). Empirically, however, such radical changes of larger networks are rather rare. Hughes (1987) concluded that large networks only change in the chaos of dramatic crises (like the oil crisis in the early 70s) or in the case of some external shock.

As a strategy for changing information infrastructures of the kind we discuss in this book, relying on abrupt changes is ill-suited and will not be pursued further. Still, the first approach, the evolutionary one, needs to be supplemented. The slow, evolutionary approach involves only modest changes to a well aligned network, and there are important and partly neglected situations where this strategy simply is not sufficient: what we (for lack of a better name) dub "gateway-based" strategies are required for more radical changes.

This chapter explains the background for and contents of these supplementary strategies to the evolutionary approach. We will first illustrate such a strategy "in action" through an example, namely the establishment of NORDUnet, a research network in Scandinavia in the 80s. Afterwards, we turn to a more general analysis of the notion of gateways and their role in future information infrastructures.

NORDUnet

Status 83 - 85: networks (and) actors

In the late seventies and early eighties, most Nordic universities started to build computer networks. Different groups at the universities got involved in various international network building efforts. Around 1984 many fragmented solutions were in use and the level of use was growing. Obtaining interoperable services between the universities was emerging as desirable - and (technologically) possible.

The networks already in use - including their designers, users, and operating personnel - were influential actors and stakeholders in the design of and negotiations about future networks - including NORDUnet. We briefly describe some of them here.

The EARN network was established based on proprietary IBM technology, using the RSCS protocols. (Later on these protocols were redesigned and became well known as the SNA protocol suite.) The network was built and operated by IBM. It was connected to BITnet in the US. Most large European universities were connected.

The EARN network linked the EDP centres and was based on a "star" topology. The Nordic countries were linked to the European network through a node at the Royal Technical University in Stockholm. In Norway, the central node was located in Trondheim. The network was based on 2.4-4.8 kbit/s lines in the Nordic countries.

The EARN network was used by many groups within the universities in their collaboration with colleagues at other universities. The main services were e-mail and file transfer. A chat service was also used to some extent.

HEPnet was established to support collaboration among physics researchers around the world (?), and in particular among researchers collaborating with CERN outside Geneva in Switzerland. This network was based on DECnet protocols. This community represented "big science"; they had lots of money and were a strong and influential group in the discussions about future academic networks. The EDP department at CERN was also very active in developing systems that were established as regular services for this community.

EUnet was a network of Unix computers based on the UUCP protocols. EUnet was always weak in Norway, but was used to some extent in computer science communities. Its main node was located at Kongsberg until 1986, when it was moved to USIT.

EUnet was mostly used by Unix users (doing software development), both within academic institutions and in private IT enterprises.

Norway was the first country outside the US to be linked to ARPANET. A node was set up at NDRE (Norwegian Defence Research Establishment) at Kjeller outside Oslo by Pål Spilling when he returned in 198? from a research visit at .... The second node was established by Tor Sverre Lande at the department of informatics at the University of Oslo in .... This happened when he, too, returned from a one year (??) research visit at ... Lande brought with him a copy of the Berkeley Unix operating system, which included software implementing all the ARPANET protocols. The software was installed on a VAX 11/780 computer and linked to ARPANET through a connection to the node at Kjeller. Later on more ARPANET nodes were set up.

NDRE was using the net for research within computer communications in collaboration with ARPA. Lande was working within hardware design, and wanted to use the net to continue the collaboration with the people he had visited in the US, all of whom used VLSI design software on Unix machines linked to ARPANET.

At that time (??), ARPANET was widely used among computer science researchers in the US, and computer science researchers in Norway very much wanted access to the same network to strengthen their ties to the US research communities.

At this time Unix was diffusing rapidly. All Unix systems contained the ARPANET protocols, and most Unix computers were in fact communicating using these protocols in the local area networks they were connected to. Accordingly there were many isolated IP islands in Norway and the other Nordic countries. Linking these IP islands would create a huge network.

In Norway the development of one network connecting all universities started in the early eighties. The objective was one network linking every user and providing the same services to all. With this goal at hand, it was felt quite natural to link up with the OSI standardization effort and build a network based on what would come out of that. Those involved tried to set up an X.25 network. The first attempt was based on an X.25 product developed by a Spanish company. The quality of this product was low, and it seemed out of reach to get the network up and running. 1 This product was given up and replaced by an English product called Camtech. Running DECnet over X.25 was considered. Based on the English product one managed to keep the network running, and an e-mail service was established in 84/85 based on the EAN system.

Universal solutions

As the networks described above grew, the need for communication between users of different networks appeared. The same was happening "everywhere," leading to a generally acknowledged need for one universal network providing the same universal services to everybody. Such a universal network required universal standards. So far so good - everybody agreed on this. But what the universal standards should look like was another issue.

This was a time of ideologies, and the strongest ideology seems to have been the ISO/OSI model, its protocols, and its approach. In general there was a religious atmosphere. Everybody agreed that proprietary protocols were bad, and that "open systems" were mandatory. The Americans pushed IP based technologies. They did so because they already had an extensive IP based network running, and extensive experience from the design, operation, and use of this network. The network worked very well (at least compared to others), and lots of application protocols were already developed and in use (ftp, telnet, e-mail, ...).

As the IP based network (ARPANET, later the Internet) was growing, the protocols were improved and tuned. New ones were developed, either because they were discovered to be urgently needed to make the network work smoothly, or because new ideas emerged as the existing services were used. An example of the first is the development of the Domain Name System, DNS, mapping symbolic names to numeric IP addresses. This service made the network scalable. Further, the decision to build the network on a connectionless transport service made the network flexible, robust, and simple, as no management of connections was required during communication sessions.
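The contrast between the two transport models is easy to demonstrate with their present-day descendants. Here is a minimal sketch of a connectionless (datagram) send, using Python's standard socket API (the address is illustrative):

    import socket

    # Connectionless: each datagram is self-contained. No connection is
    # set up, maintained, or torn down, and no state lives in the network.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"self-contained datagram", ("192.0.2.1", 9999))
    sock.close()

A connection-oriented (TCP-style) exchange would instead require explicit setup and teardown around the data transfer - exactly the session management the connectionless design avoids.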

American research and university communities pushed IP, while both European researchers within the computer communications field and the telecom operators pushed ISO. The role of the telecom operators had the effect that the whole of OSI is based on telephone thinking. 2 The Europeans wanted a non-IP based solution, believing that this would close the technological gap between Europe and the US.

The OSI idea was to first specify complete, universal standards through international committees and then implement them. The IP (Internet) idea was the opposite: start from running, deployed technology and let the standards evolve with use and experience.

Alliances were formed linking technology and non-technology; for instance, the telecom operators' intention to expand their monopoly was embedded into the design of X.25 (Abbate 1995).

NORDUnet

The NORDUnet initiative was taken by the top managers at the EDP centres at the universities in the capitals of the Nordic countries. They had met at least once a year for some time to discuss experiences and ideas. Most of them had a strong belief in computer communication. In Norway the director of the EDP department at the University of Oslo, Rolf Nordhagen, was a strong believer in the importance of computer network technology. He had pushed the development of the BRU network at the university, linking all terminals to all computers. He also worked eagerly to establish new projects with wider scopes, and he was an important actor in the events leading to the conception of the idea of building a network linking together all Nordic universities. When the idea was accepted, funding was the next issue. The Ministry of the Nordic Council was considered the proper funding organization. It had money; an application was written and funding was granted.

Arild Jansen, a former employee at the EDP department in Oslo, was now working at the Ministry for Public Affairs in Norway and played the role of bridge between the technical community on the one hand and the political and funding communities on the other. He was also the one who wrote the application for funding. Later he became a member of the steering group.

The idea was one shared network for research and education for all the Nordic countries. This objective almost automatically led to "openness" as a primary objective. "Openness" was also important for the politicians.

Strategy 1: Universal solution, i.e. OSI

The NORDUnet project was established in 1985. Einar Løvdal and Mats Brunell were appointed project coordinators. When the project started, they had hardly the slightest idea about what to do. Just as in the larger computer communications community, those involved in the project easily agreed about the need for a universal solution - agreeing on what this should look like was a different matter.

The people from the EDP centres, who had the idea for the project, all believed in the OSI "religion." Next they made an alliance with the public authorities responsible for the field that computer networks for research and education would fall into, and with the funding institution (which was also closely linked to the authorities). Obtaining "universal service" was an important objective for them; accordingly they all supported the ideas behind OSI. This alliance easily agreed that an important element in the strategy was to unify all forces, i.e. enrolling the computer communications researchers into the project. And so it happened. As these researchers were already involved in OSI related activities, they were already committed to the "universal solution" objective and the OSI strategy for reaching it.

However, products implementing OSI protocols were lacking, so the choice of strategy, and in particular of short term plans, was not at all obvious. Løvdal was indeed a true believer in the OSI religion. Mats Brunell, on the other hand, believed in EARN. To provide a proper basis for taking decisions, a number of studies looking at alternative technologies for building a Nordic network were carried out:

  1. IP and other ARPANET protocols like SMTP (e-mail), ftp, and telnet.
  2. Calibux protocols used in JANET in the UK.
  3. EAN, an X.400 system developed in Canada.

All these technologies were considered only as possible candidates for intermediate solutions. The main rationale behind the studies was to find the best currently available technology. The most important criterion was the number of platforms (computers and operating systems) the protocols could run on.

Neither IP (and the ARPANET protocols) nor the Calibux protocols were found acceptable. The arguments against IP and ARPANET were in general that the technology had all too limited functionality. Ftp had limited functionality compared to OSI's FTAM protocol (and also compared to the Calibux file transfer protocol, on which FTAM's design to a large extent was based). The NORDUnet project group, in line with the rest of the OSI community, found the IP alternative "ridiculous," considering the technology all too simple and not offering the required services. There were in particular hard discussions about whether the transport level services should be based on connection-oriented or connectionless services 3 . The OSI camp argued that connection oriented services were the most important. IP is based on a connectionless datagram service, which the IP camp considered one of the strengths of the ARPANET technology.

JANET was at that time a large and heavily used network linking almost all English universities. The network was based on X.25. In addition it provided e-mail, file transfer, and remote job entry services. The protocols were developed and implemented by academic communities in the UK. That this large network was built in the UK was to a large extent due to the fact that the institution funding UK universities required them to buy computers that could run these protocols. JANET was also linked to ARPANET through gateways. The gateways were implemented between service/application protocols. The people involved in the development of the Calibux protocols were also active in, and had significant influence on, the definition of the OSI protocols. The main argument against Calibux was that the protocols did not run on all required platforms (computers and operating systems).

One important constraint put on the NORDUnet project was that the solutions should be developed in close cooperation with similar European activities. This made it almost impossible to go for the ARPANET protocols, and also ruled out Calibux, even though the latter were closer to the OSI protocols unanimously preferred by those building academic networks and doing computer communications research throughout Europe.

The IP camp believed that IP (and the other ARPANET protocols) was the universal solution needed, and that the success of ARPANET had proved this.

The users were not directly involved in the project, but their views were important for making the project legitimate. They were mostly concerned about services. They wanted better services - now! In line with this they also argued that more effort should be put into extensions and improvements of the networks and services they were already using, and less into the long term objectives. The HEPnet users expressed this most clearly. They were using DECnet protocols and DEC computers (in particular VAX). DEC computers were popular at most Nordic universities; accordingly these users argued that a larger DECnet could easily be established and that this would be very useful for large groups. The physicists argued for a DEC solution, and so did Norsk Romsenter. Nobody argued for a "clean" DECnet solution as a long term objective.

On the Nordic as well as on the global scene (Abbate 1995), the main fight was between the IP and OSI camps. This fight involved several elements and reached far beyond technical considerations related to computer communications. At all universities there was a fight and deep mistrust between EDP centres and computer science departments. The EDP centres were concerned about delivering ("universal") services to the whole university as efficiently as possible. They thought this could best be done through one shared and centralized service. The computer science departments most of the time found the services provided by the EDP centres lagging behind and unsatisfactory in relation to their requirements. They saw themselves as rather different from the other departments, as computers were their subject. They had different requirements and would be much better off if they were allowed to run their own computers. But the EDP centres were very afraid of losing control if there were any computers outside the domain they ruled.

The computer science departments also disagreed with the EDP centres about what should be in focus when building communication services and networks. The EDP departments focused first on their own territory, then on the neighboring area. This meant first establishing networks across the university, then extending and enhancing these so that they became linked to the networks at the other universities in Norway, and then in the Nordic countries. The computer science departments, however, were not interested in communicating with other departments at the same university. They wanted to communicate and collaborate with fellow researchers at other computer science departments - not primarily in Norway or the other Nordic countries, but in the US. They wanted Unix computers to run the same software as their colleagues in the US, and they wanted a connection to ARPANET to communicate with them.

The EDP centres would not support Unix as long as it was not considered feasible as the single, "universal" operating system for the whole university. They would not support IP for the same reason. And thirdly, they wanted complete control and would not let the computer science departments do it on their own either. To get money to buy their own computers, the computer science departments had to hide this in applications for funding of research projects within VLSI and other fields. The fight over OSI (X.25) and IP was deeply embedded in networks of people, institutions, and technologies like these.

Tor Sverre Lande and Spilling participated in some meetings in the early phase of the project. They were sceptical about Calibux and wanted an IP based solution. They did not have much influence on the NORDUnet project and decided to establish the IP connections they wanted outside the NORDUnet project. Most of those involved in the NORDUnet project were happy not to have to deal with the IP camp. At this time there was a war going on between the camps, with lots of bad feelings.

As all intermediate solutions were dismissed, it was decided to go directly for an OSI based solution. The first version of the network was to be built on X.25, with the EAN system providing e-mail services. This solution was very expensive, and the project leaders soon realized that it did not scale. X.25 was full of trouble. The problems were mostly related to the fact that the X.25 protocol specification is quite extensive, and accordingly easily leads to incompatible implementations. Computers from several vendors were used within the NORDUnet community, and there were several incompatibilities among the vendors' implementations. But maybe more trouble was caused by the fact that a large number of parameters have to be set when installing and configuring an X.25 protocol. To make the protocol implementations interoperate smoothly, the parameter settings have to be coordinated. In fact, the protocols required coordination beyond what turned out to be possible.
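A deliberately simplified sketch may make this coordination burden concrete. The parameter set below is illustrative, not a faithful X.25 configuration; the point is only that every pair of communicating sites had to agree on many such values, and a single mismatch could prevent interoperation.

    # Illustrative sketch of the X.25 coordination problem. The parameter
    # names echo real X.25 concepts, but the configuration is hypothetical.

    oslo = {"packet_size": 128, "window_size": 2, "modulo": 8,
            "reverse_charging": False, "throughput_class": 9600}
    stockholm = {"packet_size": 256, "window_size": 2, "modulo": 8,
                 "reverse_charging": False, "throughput_class": 9600}

    def mismatches(a, b):
        """Return the parameters on which two endpoints disagree."""
        return {k: (a[k], b[k]) for k in a if a[k] != b[k]}

    # One divergent value is enough to break the link:
    print(mismatches(oslo, stockholm))   # {'packet_size': (128, 256)}

With dozens of parameters, several vendors, and many sites, keeping all pairs consistent becomes an administrative rather than a technical problem - which is the coordination failure described above.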

The project worked on the implementation of the network as specified for about a year without any significant progress. The standardization of OSI protocols was also (constantly) discovered to be more difficult, and its progress slower, than expected, pushing the long term objectives further into the future. The Ministry of the Nordic Council seriously discussed stopping the project because there were no results. New approaches were desperately needed.

Strategy 2: Intermediate, short-term solutions

At the same time other things happened. IBM wanted to transfer the operation of its EARN network to the universities. Over some time, Einar Løvdal and Mats Brunell together developed the idea of using EARN as the backbone of a multi protocol network. They started to realize that OSI would take a long time - one had to provide services before that. OSI remained ideologically important all the time, but one had to become more (and more) pragmatic. The idea of "The NORDUNET Plug" was developed. The idea was that there should be one "plug," common to everybody, for hooking up to the NORDUnet network. The plug should have four "pins," one for each of the network protocols to be supported: X.25, EARN, DEC, and IP. The idea was presented as if the plug implemented a gateway between all the networks, as illustrated by figure 1. That was, however, not the case.

Figure 1: The NORDUNET Plug, seen as a gateway

The plug only provided access to a shared backbone network, as illustrated by figure 2. An IBM computer running EARN/RSCS protocols could communicate only with another computer running the same protocols. There was no gateway enabling communication between, say, an RSCS based and an IP based network.

 

Figure 2: The NORDUNET Plug, as a shared backbone
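A tiny, purely illustrative sketch of this distinction: the plug attaches protocol stacks to a shared backbone, but only endpoints speaking the same protocol can interoperate.

    # Illustrative model of the NORDUNET Plug as a shared backbone
    # (not a gateway): traffic passes only between like protocol stacks.

    BACKBONE_PINS = {"X.25", "EARN/RSCS", "DECnet", "IP"}

    def can_communicate(node_a, node_b):
        """Two hosts interoperate only if they share a supported stack."""
        return bool(node_a & node_b & BACKBONE_PINS)

    print(can_communicate({"IP"}, {"IP"}))            # True: same stack
    print(can_communicate({"EARN/RSCS"}, {"IP"}))     # False: no conversion
    print(can_communicate({"DECnet", "IP"}, {"IP"}))  # True: a dual stack helps

Note how the last case anticipates the "dual stack" solutions discussed later in this chapter: interoperability comes from hosts carrying several stacks, not from the backbone converting between them.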

The EARN idea received strong support. Løvdal and Brunell got hold of the EARN lines through a "coup," and the implementation of a Nordic network based on the "NORDUNET plug" idea started. They succeeded in finding products that made the implementation of the "plug" quite straightforward. First, Vitalink Ethernet bridges were connected to the EARN lines. This meant that NORDUnet was essentially an Ethernet. To these Vitalink boxes the project linked IP routers, X.25 switches, and EARN "routers." For all these protocols there were high quality products available that could be linked to the Vitalink Ethernet bridges.

This solution had implications beyond enabling communication across the backbone. The EARN network was originally designed by a centralized IBM unit and was based on a coherent line structure and network topology. Such a coherent topology would have been difficult to design in an organization containing as many conflicting interests as the NORDUnet project. The EARN topology thus meant that the NORDUnet network was designed in a way well prepared for further growth.

Further, the EARN backbone also included a connection to the rest of the global EARN network. A shared Nordic line to ARPANET was established and connected to the central EARN node in Stockholm. 64 kbit/s lines to CERN for HEPnet were also connected.

The NORDUnet topology

 

Having established a shared backbone, the important next step was of course the establishment of higher level services like e-mail, file transfer, remote job entry (considered very, very important at that time for sharing computing resources for number crunching), etc. As most of the networks in use had such services based on proprietary protocols, the task for the NORDUnet project was to establish gateways between these. A large activity aiming at exactly that was set up. With gateways at the application level established, interoperability would be achieved. A gateway at the transport level would have done the job only if products implementing the same application level protocols (e-mail, file transfer, etc.) had been available on all platforms. Such products did not exist.

Before this, users in the Nordic countries used gateways in the US to transfer e-mail between computers running different e-mail systems. That meant that an e-mail sent between two computers standing next to each other had to be transferred across the Atlantic, converted by the gateway in the US, and finally transferred back again. NORDUnet established a gateway service between the major e-mail systems in use. The service was based on gateway software developed at CERN.

File transfer gateways are difficult to develop, as they require conversion on the fly. CERN had a file transfer gateway, called GIFT (General Interface for File Transfer), running on VAX/VMS computers. An operational service was established at CERN. It linked the file transfer services of Calibux (Blue Book), DECnet, the Internet (ftp), and EARN. The gateway worked very well at CERN. Within NORDUnet the Finnish partners were delegated the task of establishing an operational gateway service based on the same software. This effort was, however, given up as the negotiations about conditions for getting access to the software failed.

A close collaboration emerged between the NORDUnet project and the CERN people. They were "friends in spirit" ("åndsfrender") - having OSI as the primary long term objective, but at the same time concentrating on delivering operational services to the users.

From intermediate to permanent solution

When the "Nordunet Plug" was in operation, a new situation was created. The network services had to be maintained and operated. Users started to use the network. And users' experiences and interests had to be accounted for when making decisions about the future changes to the network. Both the maintenance and operation work as well as the use am the network was influenced by the way the network- and in particular the "plug" as its core - was designed. The "plug also became an actor playing a central role in the future of the network.

Most design activities were directed towards the minor, but important and necessary, improvements of the net that its use disclosed. Fewer resources were left for working on long term issues. In the NORDUnet community, however, this topic was still considered important, and the researchers involved continued their work on OSI protocols and their standardization. The war between IP and X.25 continued. The OSI "priests" believed as strongly as ever that OSI, including X.25, was the ultimate solution. Among these were Bringsrud at the EDP centre in Oslo, Alf Hansen and Olav Kvittem in Trondheim, and Terje Grimstad at NR. Einar Løvdal was fighting to build bridges to the IP communities, having meetings with Spilling.

Perhaps the most important task in this period was the definition and implementation of a unified address structure for the whole of NORDUnet. This task was carried out successfully.

In parallel with the implementation and early use of the "Plug," Unix diffused fast in academic institutions, ARPANET grew fast, and the ARPANET protocols were implemented on more platforms, creating more local IP communities (in LANs), while there was in practical terms no progress within the OSI project.

The increased availability of IP on more platforms led to increased use of "dual stack" solutions, i.e. installing more than one protocol stack on a computer, linking it to more than one network. Each protocol stack is then used to communicate with specific communities. This phenomenon was particularly common among users of DEC computers. Initially they used DECnet protocols to communicate with locals or, for instance, fellow researchers using HEPnet, and IP to communicate with ARPANET users.
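The same idea survives in today's IPv4/IPv6 "dual stack" hosts. As a minimal modern sketch, using Python's standard socket API: a host carrying both stacks simply tries each protocol a peer supports until one works.

    import socket

    # getaddrinfo returns one entry per protocol stack the peer supports;
    # a dual-stack host can try each in turn until one connects.
    def connect_dual_stack(host, port):
        last_error = None
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(addr)
                return sock          # first stack that works wins
            except OSError as error:
                last_error = error
        raise last_error

    # conn = connect_dual_stack("example.org", 80)   # illustrative host

As in the NORDUnet case, interoperability here is achieved in the end nodes, which speak several protocols, rather than by converting traffic somewhere in the network.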

The shared backbone, the e-mail gateway, and the "dual stack" solutions created a high degree of interoperability among NORDUnet users. Individual users could, for most purposes, choose which protocols they preferred - they could switch from one to another based on personal preferences. And as IP and ARPANET diffused fast, more and more users found it most convenient to use IP. This led to a smooth, unplanned, and uncoordinated transition of NORDUnet into an IP based network.

One important element behind the rapid growth in the use of IP inside NORDUnet was the fact that ARPANET's DNS service made it easy to scale up an IP network. A new computer can be added by just giving it an address, hooking it up, and entering its name and connection point into DNS. No change is required in the rest of the network. All that the network needs to know about the existence of the new node is taken care of by DNS. For this reason, the IP network could grow without requiring any work by the network operators. And the OSI enthusiasts could not do anything to stop it either.
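In modern terms: once the new node's name is in DNS, every other host can find it through an ordinary lookup, with no reconfiguration anywhere else. A one-line sketch using Python's standard library (the hostname is illustrative):

    import socket

    # All a sender needs is the name; DNS supplies the address.
    print(socket.gethostbyname("www.example.org"))   # e.g. '93.184.216.34'

The scaling argument in the text is precisely this: adding node number 10,001 costs one DNS entry, not 10,000 updates elsewhere in the network.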

The coherent network topology and the unified addressing structure that had been implemented also helped make the network scalable.

Nordunet and Europe

From the very beginning, participating in European activities was important for the NORDUnet project. The NORDUnet project also meant that the Nordic countries acted as one actor at the European level. This also helped them come into an influential position. They were considered a "great power" alongside the UK, France, and (at that time West) Germany. However, the relationships changed when NORDUnet decided to implement the "plug." This meant that the project was no longer going for the pure OSI strategy, and for this reason the Nordic countries were seen as traitors in Europe.

This made the collaboration difficult for some time. But as OSI continued not to deliver and the pragmatic NORDUnet strategy proved very successful, more people got interested in similar pragmatic approaches. The collaboration with the CERN community has already been mentioned. Further, the academic network communities in the Netherlands and Switzerland moved towards the same approach.

Through its pragmatic strategy and practical success, NORDUnet had significant influence on what happened in Europe as a whole. The project thus contributed in important ways to the diffusion of IP and the ARPANET/Internet in Europe - and reduced the possibilities for OSI to succeed.

On the notion of a gateway

The term "gateway" has a strong connotation. It has traditionally been used in a technical context to denote an artefact that is able to translate back and forth between two different communication networks (Saleh and Jaragh 1998). A gateway in this sense is also called a "converter" and operates by inputting data in one format and converting it to another. In this way a gateway may translate between two, different communication protocols that would otherwise be incompatible as a protocol converter "accepts messages from either protocol, interprets them and delivers appropriate messages to the other protocol" (ibid., p. 106).

The role of a gateway, as known from infrastructure technologies for communication and transport, is to translate back and forth between networks which would otherwise be incompatible. A well-known and important example is the AC/DC adapter (Dunn xxx; Hughes 1983). At the turn of the century, it was still an open and controversial issue whether electricity supply should be based on AC or DC. The two alternatives were incompatible and the "battle of systems" unfolded. As a user of electrical lighting, you would have to choose between the two. There were strong proponents and interests behind both, and both had their distinct technical virtues. AC was more cost-effective for long-distance transportation (because the voltage level could be higher), whereas the DC based electrical motor preceded the AC based one by many years. As described by Hughes (1983) and emphasized by Dunn (198xx), the introduction of the converter made it possible to couple the two networks. It accordingly became feasible to combine the two networks and hence draw upon their respective virtues.

Potentially confusing perhaps, but we generalise this technically biased notion of a gateway as an artefact that converts between incompatible formats. In line with ANT, we subsequently use "gateway" to denote the coupling or linking of two distinct actor-networks. Compared to the conventional use of the term, our use of the term gateway is a generalization along two dimensions:

the coupling is not restricted to be an artefact but may more generally be an actor-network itself, e.g. a manual work routine;

the coupling is between actor-networks, not only communication networks.

This needs some unpacking to see that it is not just a play with words. To show that this generalized notion of a gateway may actually contribute anything substantial, we will spell out the roles gateways play.

Other scholars have developed notions related to this notion of a gateway. Star and Griesemer's (1992) concept of boundary objects may also be seen as gateways enabling communication between different communities of practice. The same is the case for Cussins' (1996) objectification strategies. These strategies may be seen as constituting different networks, each of them connected to the networks constituted by the different practices through gateways translating the relevant information according to the needs of the "objectification networks."

The roles and functions of (generalized) gateways

Generalized gateways (or simply "gateways" from now on) fill important roles in a number of situations during all phases of an information infrastructure development. The listing of these roles should be recognised as an analytic vehicle. In practice, a gateway may perform several of these roles simultaneously.

Side-stepping confrontation

The key effect of traditional converters is that they side-step -- either by postponing or by altogether avoiding -- a confrontation. The AC/DC adapter is a classic example. The adapter bought time so that the battle between AC and DC could be postponed. Hence, the adapter avoided a premature decision. Instead, the two alternatives could co-exist and the decision be delayed until more experience had been acquired.

Side-stepping a confrontation is particularly important during the early phases of an infrastructure development, as there is still a considerable amount of uncertainty about how the infrastructure will evolve. And this uncertainty cannot be settled up front; it has to unfold gradually.

But side-stepping confrontation is not only vital during the early phases. It is also important in a situation where there already exists a number of alternatives, none of which is strong enough to "conquer" the others. We illustrate this further below, drawing upon e-mail gateways.

When one of the networks is larger than the other, this strategy may be used to buy time to expand and mobilise the smaller network in a fairly sheltered environment (David and Bunn 1988).

Modularisation and decomposition

A more neglected role of gateways is the way they support modularisation. The modularisation of an information infrastructure is intimately linked to its heterogeneous character (see chapter 5). As we argued in chapter 8, the impossibility of developing an information infrastructure monolithically forces a more patch-like and dynamic approach. In terms of actual design, this entails decomposition and modularization. The role of a gateway, then, is that it encourages this required decomposition by decoupling the efforts of developing the different elements of the infrastructure and only coupling them in the end. This allows a maximum of independence and autonomy.

Modularisation, primarily through black-boxing and interface specification, is of course an old and acknowledged design virtue for all kinds of information systems, including information infrastructures (refs: Dijkstra, Parnas). But the modularisation of an information infrastructure supported by gateways has another, essential driving force that is less obvious. As the development is more likely to take ten years than one, the content is bound to evolve or "drift" (see chapters 5 and 9). This entails that previously unrelated features and functions need to be aligned as a result of this "drifting." The coupling of two (or more) of these might be the result of a highly contingent, techno-economical process, a process which is difficult to design and cater for. Figure XXbyplan illustrates this. Cable TV and telephone have a long-standing history as distinctly different networks. They were conceived of, designed, and appropriated in quite distinct ways. Only as a result of technological development (XXX) and legislative de-regulation has it become reasonable to link them (ref. Mansell, Mulgan). This gives rise to an ecology of networks that later may be linked together by gateways.

Broadbent XXX??? describes an information infrastructure along two dimensions, reach and range, implying that changes can be made along the same dimensions. Changes along the reach dimension amount to adding nodes (or users) to the network, while changes to the range amount to adding new functions. Changes to the latter, the range, often take place through the drifting and subsequent coupling of two initially independent networks. Further below we use the case of MIME (Multipurpose Internet Mail Extension, RFC 1341) to highlight this.

Forging compromises

Gateways may play a crucial, political role in forging a compromise in an otherwise locked situation. This is due to the way a gateway may allow alternative interests to be translated and subsequently inscribed into the same (gateway-based) solution. The key thing is that the initial interests may be faithfully translated and inscribed into one and the same material, namely the gateway. In this way all the alternatives are enrolled and sufficient support is mobilized to stabilise a solution. This is important in deadlock situations where no alternative is able to "win." Mobilising the support of two or more alternatives through the use of gateways could very well be what it takes to tip the balance.


The polyvalent role of gateways

The way a gateway allows interaction with multiple actor-networks makes the context of use more robust, in the sense that a user may move fluently between different actor-networks.


Examples include the "dual stack" solutions described above and the transition mechanisms planned for IPng (see chapter 10).

This polyvalent character of the gateway provides a flexibility that adds to the robustness of the use of the infrastructure.


Illustrating the roles of gateways

E-mail

E-mail is one of the oldest services provided by the Internet. The Internet e-mail service consists of two standards, both dating from 1982 and developed through revisions spanning three years: one specifying the format of a single e-mail message (RFC 822) and one protocol for the transmission of e-mails (Simple Mail Transfer Protocol, SMTP, RFC 821). An earlier version of the e-mail format goes back to 1977.
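The division of labour between the two standards is still visible in today's tooling: RFC 822 (and its successors) governs the message format, SMTP its transmission. A minimal sketch using Python's standard library (the relay host is hypothetical):

    from email.message import EmailMessage
    import smtplib

    # RFC 822 side: the message format (headers plus body).
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Hello"
    msg.set_content("Plain US-ASCII text, as in the 1982 standard.")

    # SMTP (RFC 821) side: the transmission protocol.
    with smtplib.SMTP("mail.example.org") as smtp:   # hypothetical relay
        smtp.send_message(msg)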

The e-mail boom in the US preceded that of Europe and the rest of the world by many years. Already in the 70s, there was a considerable amount of e-mail traffic in the US. There existed several independent e-mail services in addition to the Internet one, the most important being UUCP (a Unix-based e-mail service) and NJE within BITNET (RFC 1506, p. 3). The problem, however, was that all of these were mutually incompatible. Accordingly there was a growing awareness of the need to develop a uniform standard. This recognition spawned CCITT and ISO efforts to work out a shared e-mail standard that could cater for all by providing a "superset of the existing systems" (ibid., p. 3). (Note, then, that the first response was a universalist solution; gateways came only as the fallback.) These efforts are known as the X.400 standards.

The X.400 initiative enjoyed heavy backing as it aligned and allied with the official, international standardization bodies (recall chapter 4). Especially the lobbying by West Germany was influential (ibid., p. 3). Promoting X.400 in Europe made a lot more sense than a corresponding move in the US, because the installed base of (Internet and other) e-mail services in Europe was insignificant (see chapter 9). X.400 based e-mail in Europe was fuelled by the free distribution of the EAN e-mail product to research and university institutions.

During the 80s, this created a situation where there were really two candidates for e-mail, namely Internet e-mail and X.400. The large and growing installed base of Internet e-mail in the US (and elsewhere) implied that one would need to live with both for many years to come. After the overwhelming diffusion of the Internet in the last few years, it is easily forgotten that during the 80s even the US Department of Defense anticipated a migration to ISO standards. As a result, the Internet community was very eager to develop gateway solutions between the ISO world and the Internet.


An e-mail gateway between X.400 and Internet mail has accordingly been perceived as important within the Internet. It provides an excellent illustration of the underlying motivation and challenges of gatewaying. Even today, though, "mail gatewaying remains a complicated subject" (RFC 1506, p. 34). The fact that X.400 is really two different standards complicated matters even more. The X.400 from 1984 (written X.400(84)) was originally developed within IFIP Working Group 6.5 and adopted by CCITT. Only in 1988 did CCITT and ISO align their efforts in a revised X.400(88) version.

The challenge, then, for an e-mail gateway is to receive a mail from one world, translate it into the formats of the other world, and send it out again using the routing rules and protocols of that other world. There are two principal difficulties with this scheme.

First, there is the problem of translating between basically incompatible formats, that is, a thing from one world that simply has no counterpart in the other world. Second, there is the problem of coordinating different, independent e-mail gateways. In principle, e-mail "gatewaying" can only function perfectly if all gateways operate according to the same translation rules, that is, the different gateways need to synchronize and coordinate their operations. We comment on both of these problems in turn.

With X.400(88), an e-mail may be confirmed upon receipt. In other words, the feature is intended to reassure the sender that the e-mail did indeed reach its destination. This has no corresponding feature in Internet e-mail. Hence, it is impossible to translate. The solution, necessarily imperfect, is to interpret the X.400/Internet gateway as the final destination, that is, the receipt is generated as the mail reaches the gateway, not the intended recipient. A bigger and more complicated example of essential incompatibilities between X.400 and Internet e-mail is the translation of addresses. This is in practice the most important and pressing problem for e-mail gatewaying, because the logical structures of the two addressing schemes differ. The details of this we leave out, but interested readers may consult (RFC 1506, pp. 11, 14-29).
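A toy sketch may make the translation step concrete. The field names below are hypothetical and deliberately simplified; the real X.400/RFC 822 mapping rules fill many pages of the relevant RFCs.

    # Toy X.400 -> RFC 822 header translation (hypothetical and
    # deliberately simplified; real mapping rules are far more involved).

    def x400_to_rfc822(x400):
        headers = {
            # X.400 address attributes flattened into an RFC 822 mailbox.
            "From": "{g}.{s}@{o}.{p}".format(
                g=x400["given_name"], s=x400["surname"],
                o=x400["organization"], p=x400["prmd"]).lower(),
            "Subject": x400["subject"],
        }
        if x400.get("receipt_requested"):
            # No RFC 822 counterpart exists: the gateway itself must
            # generate the receipt, so "delivered" now means "reached
            # the gateway," not "reached the recipient."
            headers["X-Gateway-Note"] = "receipt generated at gateway"
        return headers

    print(x400_to_rfc822({
        "given_name": "Ola", "surname": "Nordmann",
        "organization": "uio", "prmd": "uninett",
        "subject": "test", "receipt_requested": True}))

The second difficulty in the text also shows up here: unless every gateway flattens addresses by the same rule, a reply routed through a different gateway will not find its way back.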

The coordination of translation rules for e-mail gatewaying is attempted through the establishment of a special institution, the Message Handling System Co-ordination Service, located in Switzerland. This institution registers, updates, and distributes translation rules. As far as we know, there exists no survey of the penetration of these rules in currently operating gateways.

MIME

Independent actor-networks that have evolved gradually in different settings, serving different purposes, may need to be aligned and linked through a gateway because they somehow have "drifted" together. An illustration of this is the current developments around e-mail.

The conceptually self-contained function of providing an e-mail service gets increasingly caught up in and entangled with an array of previously unrelated issues. An illustration is the current discussion about how to support new multi-media applications, which require other kinds of data than just the plain text of an ordinary e-mail message.

Conceptually -- as well as historically -- e-mail functionality and multi-media file types belong to quite distinct actor-networks. It was anything but obvious or "natural" that the two needed to be more closely aligned.

There were three underlying reasons for the pressure to somehow allow the 1982 version of Internet e-mail to cater for more than unstructured, US-ASCII text e-mails. First and foremost, the growing interest in multi-media applications -- storing, manipulating, and communicating video, audio, graphics, bit maps, and voice -- increased the relevance of a decent handling of the corresponding file formats for these data types. Secondly, the growth and spread of the Internet prompted the need for a richer alphabet than US-ASCII. The majority of European languages, for instance, require a richer alphabet. Thirdly, the ISO and CCITT e-mail standard X.400 allows for non-text e-mails. With an increasing concern for smooth X.400/Internet gatewaying, there was a growing need for non-text Internet e-mail.

The problem, of course, was the immense installed base of text-based Internet e-mail (RFC 822). As has always been the Internet policy, "compatibility was always favored over elegance" (RFC 1341, p. 2). The gateway or link between text-based e-mail on the one hand and multi-media data types and rich alphabets on the other was carefully designed as an extension of, not a substitute for, the 1982 e-mail standard. The designers happily agreed that the solution was "ugly" (Alvestrand 1995).

The gateway is MIME, Multipurpose Internet Mail Extension (RFC 1341), and dates back to 1992, ten years after Internet e-mail. What MIME does is fairly straightforward. The relevant information about the multi-media data types included in the e-mail is encoded in US-ASCII. Basically, MIME adds two fields to the e-mail header: one specifying the data type (from a given set of available options including video, audio, and image) and one specifying the encoding of the data (again, from a given set of encoding rules). Exactly because the need to include different data types in e-mails is recognized to be open-ended, the list of available options for data types and encoding rules is continuously updated. A specific institution, the Internet Assigned Numbers Authority, keeps a central archive of these lists.
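The two extension fields can be inspected directly in any MIME message. A small sketch with Python's standard library (the addresses and attachment bytes are illustrative):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "An image, by extension rather than substitution"
    msg.set_content("The old US-ASCII body still works unchanged.")

    # MIME's contribution: a declared data type and a declared encoding,
    # both themselves expressed as plain US-ASCII header fields.
    msg.add_attachment(b"\x89PNG...", maintype="image", subtype="png",
                       filename="diagram.png")

    print(msg["MIME-Version"])               # 1.0
    for part in msg.walk():
        print(part.get_content_type(),
              part.get("Content-Transfer-Encoding"))

The text part is still sent as 7bit US-ASCII, while the image part is declared image/png and base64 encoded - illustrating how MIME extends RFC 822 without replacing it.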

The political incorrectness of gateways

No one likes gateways. They are regarded as second-class citizens that are only tolerated for a little while, as they "should be considered as a short to mid-term solution in contrast to the long term solution involving the standardization of network interfaces over which value-added services can be provided" (Saleh and Jaragh 1998, p. 105). A similar sentiment dominates within the Internet 4 . An intriguing question -- beyond the scope of our analysis -- is where these negative reactions are grounded. One reason is that gateways lose information and hence are "imperfect." Infrastructure design, also within the Internet, seems to be driven towards "purity" (Eidnes 1996). As this purity is likely to become increasingly difficult to maintain in the future, it would be interesting to investigate more closely the role of and attitudes towards gateways within the Internet.

 

Based on the experiences outlined in earlier chapters, two kinds of gateways seem to be particularly relevant in health care information infrastructures. One is gateways linking different heterogeneous transport infrastructures together into a seamless web. The other is "dual stack" solutions for using different message formats when communicating with different partners.

E-mail is considered the best carrier of EDI messages. There exist gateways between most available products and protocols. These gateways work fine in most cases. However, they cause trouble when features specific to one product or protocol are used. When X.400 systems are used in the way specified by most GOSIPs, which say that X.400's unique notification mechanisms shall be used, one cannot use gateways between X.400 systems and systems lacking compatible mechanisms (Hanseth 1996b).

Experience so far indicates that implementing and running dual stack solutions is a viable strategy. If a strategy like the one sketched here is followed, implementing tools for "gateway-building" seems to be a task of manageable complexity (ibid.).



1. The product got the nick-name SPANTAX, after the Spanish airline of the same name. At that time "Spantax" had become an almost generic term for low quality services in Norway, due to many tourists having bad experiences with that airline when going to Spanish tourist resorts, combined with one specific event where a flight nearly crashed because the pilot mistook a large area with lots of football fields for the airport.

2. For more on this, see (Abbate, 1995).

3. Connection oriented means that a connection is established before any data is exchanged and is maintained for the duration of the session, modeled on telephone communication. Connectionless means that each message is sent independently, with no connection set up or torn down, modeled on ordinary mail (or telegram) services.

4. The notion of a gateway is, perhaps surprisingly, not clear; it is used in different ways. In particular, a gateway may be used as a mechanism to implement a transition strategy (Stefferud and Pliskin 1994). It is then crucial that the gateway translates back and forth between two infrastructures in such a way that no information is lost. Dual-stack nodes and "tunneling" (see chapter 10) are illustrations of such gateways. But gateways more generally might lose information, as does, for instance, the gateway between the ISO X.400 e-mail protocol and the e-mail protocol in the Internet. Within the Internet community, however, only gateways of the latter type are referred to as "gateways"; the former type is regarded as a transition mechanism. And it is the latter type of gateway which is not seriously considered within the Internet community.
