CHAPTER 10 Changing infrastructures: The case of IPv6

Introduction

As expectations and patterns of use of an information infrastructure tend to evolve during its life span (see chapter 5), changes are called for, both minor and major. This creates a dilemma. The pressure for making changes has to be pragmatically negotiated against the conservative forces of the economic, technical and organisational investments in the existing information infrastructure, the installed base (see chapter 9). A feasible way to deal with this is for the information infrastructure to evolve in a small-step, near-continuous fashion respecting the inertia of the installed base (Grindley 1995; Hanseth, Monteiro and Hatling 1996; Neumann and Star 1996; Star and Ruhleder 1996). Between each of these evolutionary steps there has to be a transition strategy, a plan which outlines how to evolve from one stage to another. The controversies over a transition strategy are negotiations about how big the changes can -- or have to -- be, where to make them, and when and in which sequence to deploy them.

A transition strategy is a conservative strategy. Rather than trying anything adventurous, it plays it safe: only modest changes are possible within a transition strategy. A transition strategy is an instance of a cultivation strategy for information infrastructures. In the next chapter we explore other, non-cultivation based approaches to establishing information infrastructures. These approaches facilitate more radical changes to the infrastructure than a cultivation-based one like the transition strategy described in this chapter.

The IP protocol

The revision of the Internet protocol (IP) was a direct response to the problems of scaling the Internet: "Growth is the basic issue that created the need for a next-generation IP" (Hinden 1996, p. 62). The IP protocol forms the core of the Internet in the sense that most services, including the World Wide Web, e-mail, ftp, telnet, Archie and WAIS, build upon and presuppose IP.

It is fair to say that it has never been more difficult to make changes to the Internet than in the revision of IP. This is because the dilemma outlined above has never been more pressing. The explosive growth of the Internet is generating a tremendous pressure for changes, changes which are so fundamental that they need to be made at the core, that is, in IP. At the same time, these changes are likely to have repercussions on an Internet which has never been as huge, and has never exhibited a stronger inertia of the installed base. Revising IP is the most difficult and involved change ever made to the Internet during its nearly 30 years of existence. Accordingly, it provides a critical case when studying the problems of changing large information infrastructures.

Our intention is to spell out some of the pragmatics played out within a transition strategy. Adopting a transition strategy is not straightforward. There are a number of socio-technical negotiations that need to be settled. Learning how to adopt transition strategies accordingly extends well beyond merely stating a conservative attitude; it is necessary to inquire more closely into what such an attitude amounts to. Changes are always relative. When you are close, pressing your nose against them, all changes seem big. What, or indeed who, is to tell "small" (and safe) changes from "big" (and daring) ones?

Late 80s - July 1992

Framing the problem

There was during the late 80s a growing concern that the success of the Internet -- its accelerating adoption, diffusion and development -- was generating a problem (RFC 1995, p. 4). No one had anticipated the growth rate of the Internet, and its design was not capable of handling this kind of growth for very long.

The Internet is designed so that every node (for instance, a server, PC, printer or router) has a unique address. The core of the problem was considered to be that IPv4 has a 32 bit, fixed-length address. Even though 32 bits might theoretically produce 2^32 (roughly 4.3 billion) different identifiers, the actual number of available identifiers is dramatically lower. This is because the address space is hierarchically structured: users, organisations or geographical regions wanting to hook onto the Internet are assigned a set of unique identifiers (a subnetwork) of predetermined size. There are only three available sizes to choose from, so-called class A, B or C networks. The problem, then, is that class B networks are too popular. For a large group of users, class C is too small. Even though a few class C networks would suffice, they are assigned the next size, class B, which is 256 times larger than class C.
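
To make the arithmetic concrete, consider the following small Python sketch (illustrative only; the figures follow directly from the classful scheme described above, and the example demand of 2,000 addresses is hypothetical):

    # Host addresses available per network under the classful scheme.
    for cls, host_bits in (("A", 24), ("B", 16), ("C", 8)):
        print(f"Class {cls}: {2 ** host_bits:>10,} addresses per network")

    # A site needing, say, 2,000 addresses overshoots class C (256)
    # eightfold, yet a class B assignment wastes about 97% of its space.
    needed = 2000
    print(f"Class B utilisation for {needed} hosts: {needed / 2 ** 16:.1%}")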

In this way, the problem of fixed-length IPv4 addresses gradually got reframed as the problem of exhausting class B networks. At the August 1990 IETF meeting it was projected that the class B space would be exhausted by 1994, that is, fairly soon (ibid., p. 4). This scenario produced a profound sense of urgency: something had to be done quickly. The easy solution of simply assigning several class C networks to users requiring somewhat more than class C size but much less than class B was immediately recognised to cause another, equally troublesome, problem. As the backbone routers in the Internet, the nodes which decide which node to forward traffic to next, need to keep tables of the subnets, this explosion of the number of class C networks would dramatically increase the size of the routing tables, tables which were already growing disturbingly fast (ibid.). Even without this explosion of class C networks, the size of routing tables was causing severe problems, as they grew 50% faster than the hardware advances in memory technology.

During the early 1990s, there was a growing awareness of the problems associated with the continued growth of the Internet. It was also recognised that this was not an isolated problem but rather involved issues including assignment policies for networks, routing algorithms and addressing schemes. There was accordingly a fairly clear conception that there was a problem complex, but a poor sense of how the different problems related to each other, not to mention their relative importance or urgency. In response, the IETF in November 1991 formed a working group called "Routing and addressing (ROAD)" to inquire more closely into these matters.

Appropriating the problem

The ROAD group had by November 1992 identified two of the problems (class B exhaustion, routing table explosion) as the most pressing and IP address exhaustion as less urgent:

Therefore, we will consider interim measures to deal with Class B address exhaustion and routing table explosion (together), and to deal with IP address exhaustion (separately).

(RFC 1992, p. 10)

The two most pressing problems required quick action. But the ROAD group recognised that for swift action to be feasible, changes had to be limited, as the total installed base cannot change quickly. This exemplifies a, if not the, core dilemma when extending infrastructure technologies. There is pressure for changes -- some immediate, others more long-term, some well understood, others less so -- which needs to be pragmatically balanced against the conservative influence of the inertia of the installed base. This dilemma is intrinsic to the development of infrastructure technology and is accordingly impossible to resolve once and for all. On the one hand, one wants to explore a number of different approaches to make sure the potential problems are uncovered, but on the other hand one needs at some stage to settle for a solution in order to make further progress. It makes more sense to study specific instances of the dilemma and see how it is pragmatically negotiated in each case. A necessary prerequisite for this kind of judgement is a deep appreciation and understanding of exactly how the inertia of the installed base operates.

In the discussions around IPng, the Internet community exhibited a rich understanding of the inertia of the installed base. It was clearly stated that the installed base was not only technical but included "systems, software, training, etc." (Crocker 1992) and that:

The large and growing installed base of IP systems comprises people, as well as software and machines. The proposal should describe changes in understanding and procedures that are used by the people involved in internetworking. This should include new and/or changes in concepts, terminology, and organization.

(RFC 1992, p. 19)

Furthermore, the need to order the required changes in a sequence was repeatedly stated. To be realistic, only small changes can be deployed quickly. More substantial ones need to be phased in through a gradual transition.

The [currently unknown] long-term solution will require replacement and/or extension of the Internet layer. This will be a significant trauma for vendors, operators, and for users. Therefore, it is particularly important that we either minimize the trauma involved in deploying the short- and mid-term solutions, or we need to assure that the short- and mid-term solutions will provide a smooth transition path for the long-term solutions.

(RFC 1992, p. 11)

So much for the problem in general. How does this unfold in specific instances? Is it always clear-cut what a "small" as opposed to "large" change is, or what a "short-term" rather than "mid-" or "long-term" solution is? The controversy over CIDR and C# illustrates the problem.

CIDR vs. C#

Instead of rigid network sizes (such as class A, B and C), the ROAD working group proposed employing CIDR ("Class-less Inter-Domain Routing"). CIDR supports variable-sized networks (Eidnes 1994). It was argued that CIDR solved many of the problems and that the disruptions to the installed base were known:

CIDR solves the routing table explosion problem (for the current IP addressing scheme), makes the Class B exhaustion problem less important, and buys time for the crucial address exhaustion problem.

(...) CIDR will require policy changes, protocol specification changes, implementation, and deployment of new router software, but it does not call for changes to host software.

(RFC 1992, p. 12)
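
For readers unfamiliar with CIDR, a brief sketch using Python's standard ipaddress module may help (the addresses are arbitrary examples). It shows both the finer-grained network sizes and the route aggregation CIDR enables:

    import ipaddress

    # A site needing ~2,000 addresses can be given a /21 (2,048
    # addresses) instead of a whole class B (65,536 addresses).
    site = ipaddress.ip_network("10.8.0.0/21")
    print(site, "->", site.num_addresses, "addresses")

    # The same prefix illustrates route aggregation: eight consecutive
    # /24 networks collapse into a single routing-table entry.
    subnets = [ipaddress.ip_network(f"10.8.{i}.0/24") for i in range(8)]
    print(list(ipaddress.collapse_addresses(subnets)))
    # -> [IPv4Network('10.8.0.0/21')]

It is this aggregation of many small networks under one routing-table entry that addresses the routing table explosion discussed above.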

At this stage, the CIDR solution to the most pressing problems was not well known, as Fuller's (1992) question to the big-internet mailing list illustrates: "but what is `CIDR'?". Nor was support for it unanimous (Chiappa 1992).

Furthermore, alternatives to CIDR existed that had several proponents. One was C#, which supported a different kind of variable-sized networks. The thrust of the argument for C#, perfectly in line with fidelity to the installed base, was that it required fewer changes:

I feel strongly that we should be doing C# right now. It's not new, and it's not great, but its very easy - there's nothing involved that takes any research, any developments, or any agreements not made already - just say "go" and the developers can start getting this into the production systems, and out into the field. I don't think that CIDR can be done quite that quickly.

(Elz 1992)

The discussions surrounding the different short-term solutions to the IP related problems show broad consensus for paying respect to the installed base. The CIDR vs. C# debate amounts to a judgement about exactly how much change to the installed base is feasible within a certain time-frame. This judgement varied, producing disagreement and personal frustration. At the same time, the closing down of the controversy and the decision for CIDR illustrate the widespread belief that the need to move on overrides "smaller" disagreements:

I do feel strongly that it is far more important that we decide on one, and *DO IT*, than continue to debate the merits for an extended period. Leadtimes are long, even for the simplest fix, and needs are becoming pressing. So, I want to see us *quickly* decide (agreement is probably too much to ask for :-) on *one* of the three options and *get on with it*!

(...) I will say that I am extremely, deeply, personally, upset with the process that encouraged the creation of the C# effort, then stalled it for months while the Road group educated themselves, leaving the C# workers in the dark, etc., etc.

(Chiappa 1992)

The immediate steps, including the deployment of CIDR, were to buy some badly needed time to address the big problem of IP address exhaustion. How to solve that problem was a lot less clear, and the consequences were expected to be a lot bigger and cause "significant trauma for vendors, operators, and for users" (RFC 1992, p. 11).

The big heat

At this stage in late 1992, four solutions to the problem had already been proposed. One solution, called CLNP, was acknowledged to have a certain amount of support but was not accepted (RFC 1992, p. 13). Unable to vouch for any one specific solution, the IESG only outlined a process of exploration which, hopefully, would lead to a solution. Central to this decision was a judgement about exactly how urgent it was to find a solution. As will become clear further below, this was a highly controversial issue. The IESG position was that there still was some time:

[T]he IESG felt that if a decision had to be made *immediately*, then "Simple CLNP" might be their choice. However, they would feel much more comfortable if more detailed information was part of the decision.

The IESG felt there needed to be an open and thorough evaluation of any proposed new routing and addressing architecture. The Internet community must have a thorough understanding of the impact of changing from the current IP architecture to a new one. The community needs to be confident that we all understand which approach has the most benefits for long-term internet growth and evolution, and the least impact on the current Internet.

(RFC 1992, p. 14)

In parallel with the work of the ROAD group, and apparently poorly aligned with it, the IAB proposed its own plan for the next generation IP (IAB 1992). It was dubbed version 7, written IPv7. This plan of July 1992 opposed the recommendations of the ROAD group and the IESG regarding the long-term problem of exhausting the IPv4 address space. It produced an unprecedentedly heated debate during the summer of 1992. The debate focused both on the contents of the IAB's solution and on the decision process producing the plan.

The crucial element of the IAB plan for IPv7 was the endorsement of one of the four available solutions, namely CLNP. The thrust of the argument appealed to the ideals of Internet design: CLNP existed and people had some experience with it, so why not build upon it? Again, the controversy is not about abstract principles -- they are unanimously accepted -- but about how to apply the principles to a difficult situation. Hence, the IAB (1992, p. 14) argues that:

Delaying by a few more months in order to gather more information would be very unlikely to help us make a decision, and would encourage people to spend their time crafting arguments for why CLNP is or is not a better solution than some alternative, rather than working on the detailed specification of how CLNP can be used as the basis for IPv7 (...).

The IAB plan for IPv7 thus makes a different judgement about the available time for the Internet community to search for alternatives than the IESG IPng plan (RFC 1992).

The decisive measures taken by the IAB, settling for a solution rather than keeping up the quarrel, were praised by a number of people (Braun 1992; Rekhter and Knopper 1992), particularly those close to the commercial interests of the Internet. This support for swift action rather than smooth talk was mixed with a discontent with leaving the fate of the Internet to designers with little or no interest in or insight into "reality". A particularly crisp formulation of this position was submitted to the big-internet mailing list shortly after the IAB's decision (Rekhter and Knopper 1992):

We would like to express our strong support for the decision made by the IAB with respect to adopting CLNP as the basis for V7 of the Internet Protocol.

It is high time to acknowledge that the Internet involves significant investment from the computer industry (both within the US and abroad), and provides production services to an extremely large and diverse population of users. Such an environment dictates that decisions about critical aspects of the Internet should lean towards conservatism, and should clearly steer away from proposals whose success is predicated on some future research.

While other than CLNP proposals may on the surface sound tempting, the Internet community should not close its eyes to plain reality -- namely that at the present moment these proposals are nothing more than just proposals; with no implementations, no experience, and in a few cases strong dependencies on future research and funding. Resting the Internet future on such a foundation creates an unjustifiable risk for the whole Internet community.

The decision made by the IAB clearly demonstrated that the IAB was able to go beyond parochial arguments (TCP/IP vs. CLNP), and make its judgements based on practical and pragmatic considerations.

Yakov Rekhter (IBM Corporation)

Mark Knopper (Merit Network)

One of the founding fathers of the Internet, Vint Cerf (1992), initially agreed with the IAB that in this case one should organise the efforts rather than fragment them:

The CLNP specification is proposed as the starting point for the IPv7 both to lend concreteness to the ensuing discussion (I hope this does NOT result in concrete brickbats being hurled through MIME mail....!!) and to take advantage of whatever has already been learned by use of this particular packet format.

But the majority of the Internet community was appalled. In the heated debate on the big-internet mailing list, a number of people spoke of "shocked disbelief", "a disastrous idea", "shocked", "dismayed", "strongly disagree" and "irresponsible". The general feeling was clear. The frustration with the decision was obviously very much influenced by the oblique way the IAB had reached it, thus breaching deep-seated concerns for participatory, quasi-democratic decision processes in the Internet.

Bracketing the frustration about the decision process itself, the controversies circled around different views and interpretations of praised design principles. In other words, even though there can be said to be near full consensus among the Internet community regarding concerns about continuity, installed base, transition etc. (see above), the application to specific contexts is regularly contested. The debate over the IAB's IPv7 illustrates this in a striking way.

Abstract design principles meet the real world

The main reason, the IAB argued, why it favoured CLNP was that it was necessary for the Internet to find a solution very soon (IAB 1992, p. 14). CLNP is a protocol which "is already specified, and several implementations exist", so it "will avoid design of a new protocol from scratch, a process that would consume valuable time and delay testing and deployment" (ibid., p. 10).

The concern for practical experience runs deep, and the IAB's CLNP solution appealed to this. Furthermore, it paved the way for interoperability, another key principle of the Internet. Interoperability is recognised to be the end-result of a process of stabilisation:

I think that relying on highly independent and distributed development and support groups (i.e., a competitive product environment) means that we need a production, multi-vendor environment operating for awhile, before interoperability can be highly stable. It simply takes time for the engineering, operations and support infrastructure to develop a common understanding of a technology.

(Crocker 1992)

While acknowledging this design principle, the IAB (1992b) in its Kobe declaration of June 1992 explained its IPv7 decision and argued that for IP an exception had to be made:

[W]e believe that the normal IETF process of "let a thousand (proposals) bloom", in which the "right choice" emerges gradually and naturally from a dialectic of deployment and experimentation, would in this case expose the community to too great a risk that the Internet will drown in its own explosive success before the process had run its course.

The principal difference was the pragmatic judgement of the amount of time and resources available to work out a revised IP protocol. The IESG's judgement was a head-on disagreement with the IAB's. In addition, more indirect strategies for challenging the IAB were employed. One important line of argument aimed at questioning the experience with CLNP: did it really represent a sufficiently rich source of experience?

There does exist some pieces of a CLNP infrastructure, but not only is it much smaller than the IP infrastructure (by several orders of magnitude), but important pieces of that infrastructure are not deployed. For example the CLNP routing protocols IS-IS and IDRP are not widely deployed. ISIS (Intra-Domain routing protocol) is starting to become available from vendors, but IDRP (the ISO inter-domain routing protocol) is just coming out of ANSI. As far as I know there aren't any implementations yet.

(Tsuchiya 1992)

And more specifically, whether the amount and types of experience were enough to ensure interoperability:

While there certainly are some implementations and some people using [CLNP], I have no feel for the scale of the usage or -- more importantly -- the amount of multi-vendor interoperability that is part of production-level usage. Since we have recently been hearing repeated reference to the reliance upon and the benefits of CLNP's installed base, I'd like to hear much more concrete information about the nature of the system-level shakeout that it has _already_ received. Discussion about deployment history, network configuration and operation experience, and assorted user-level items would also seem appropriate to flesh out the assertion that CLNP has a stable installed base upon which the Internet can rely.

(Crocker 1992)

Interoperability resulting from experience in stable environments presupposes a variety of vendors. CLNP was associated with one specific vendor, DEC, as succinctly coined by Crowcroft (1992): "IPv7 = DECNET Phase 5?" (DECNET is DEC's proprietary suite of communication protocols). Hence, the substance of the experience with CLNP was undermined, as Crocker (1992) illustrates:

So, when we start looking at making changes to the Internet, I hope we constantly ask about the _real_ experience that is already widely available and the _real_ effort it will take to make each and every piece of every change we require. (...) [R]eferences to the stability of CLNP leave me somewhat confused.

Gaining experience from keeping certain parts stable is a design principle (see above). But some started challenging the very notion of stability. They started questioning exactly what it took for some part to be considered "stable". An important and relevant instance of this dispute was IPv4. Seemingly, IPv4 had been stable for a number of years, as the protocol was passed as an Internet Standard in 1981 without subsequent changes. But even if the isolated protocol itself had been unchanged for 15 years, had there not been a number of changes in associated and tightly coupled elements? Is it, then, reasonable to maintain that IPv4 has been stable?

How long do we think IP has been stable? It turns out that one can give honestly different answers. The base spec hasn't changed in a very long time. On the other hand, people got different implementations of some of the options and it was not until relatively recently that things stabilized. (TCP Urgent Pointer handling was another prize. I think we got stable, interoperable implementations universally somewhere around 1988 or 89.)

(Crocker 1992)

I still don't see how you can say things have been stable that long. There are still algorithms and systems that don't do variable length subnets. When were variable length subnets finally decided on? Are they in the previous router requirements? (...). So things are STILL unstable.

(Tsuchiya 1992)

This is an important argument, and it will be addressed again later. In effect, it states that the IP protocol cannot be considered an isolated artefact. It is but one element of a tightly intertwined collection of artefacts. It is this collection of artefacts -- this infrastructure -- which is to be changed. A shift of focus from the artefact to the infrastructure has far-reaching repercussions on what design is all about.

A highly contested issue was exactly which problems CLNP allegedly solved and whether these were in fact the right ones. A well-known figure in the Internet (and OSI) community, Marshall Rose, was among those voicing concern that it "is less clear that IPv7 will be able to achieve route-aggregation without significant administrative overhead and/or total deployment" (Rose 1992a).

After the storm of protests against the IAB, combining objections against CLNP with objections against the IAB's decision process, one of the Internet's grand old men, Vint Cerf, reversed the IAB decision at the IETF meeting in July 1992:

Vint Cerf Monday morning basically retracted the IAB position. They are now supporting the IESG position, and he said that the IAB has learned not to try and enforce stuff from above. (...) Apparently Vint did a strip tease until he took off his shirt to reveal an "IP over everything" T-shirt underneath.

(Medin 1992)

The overall result of the hot summer of 1992 was that a plan to explore and evaluate proposals was worked out (RFC 1992). By this time it was clear that "forcing premature closure of a healthy debate, in the name of `getting things done', is *exactly* the mistake the IAB made." (Chiappa 1992).

July 1992 - July 1994

Let the thousand blossoms bloom, or: negotiating the available time

The situation by July 1992 was this. The IESG recommendation (RFC 1992) of June 1992 calling for proposals drowned in the subsequent controversy over IAB's IPv7 plan. As the dramatic July 1992 IETF meeting led by Vint Cerf decided to reject the IAB plan, the IESG plan (RFC 1992) was accepted and so a call for proposals for IPng was made at the meeting itself.

The problem now was to organise the effort. Central to this was, again, the issue of time: how urgent were the changes, how many different approaches should be pursued, at which stage should one move towards a closing?

The plan by the IESG, formulated in June 1992 and revised a month later at the IETF meeting, was shaped by a definite sense of urgency. But it was far from panic. The IESG declined to accept the problem as one merely of timing. So even though "[a]t first the question seemed to be one of timing" (RFC 1992, p. 14), the IESG was calm enough to hold that "additional information and criteria were needed to choose between approaches" (ibid., p. 14). Still, the suggested timetables and milestones clearly mirror a sense of urgency. The plan outlines phases of exploring alternatives, elaborating requirements for IPng and a pluralistic decision process -- all to be completed within 5 months, by December 1992 (ibid., p. 15). As it turned out, this timetable underestimated the effort by a factor of more than four. It eventually took more than two years to reach the milestone the IESG had originally scheduled for late 1992.

The IESG feared fragmenting the effort too much by spending an excessive amount of time exploring many different proposals. This argument, as illustrated in the section "Abstract design principles meet the real world" above, was what led Vint Cerf initially to go along with the IAB IPv7 plan which focused on CLNP. At this stage in July 1992, four proposals existed (called "CNAT", "IP Encaps", "Nimrod" and "Simple CLNP", see RFC 1995, p. 11). This was, according to the IESG, more than sufficient, as "in fact, our biggest problem is having too many possible solutions rather than too few" (RFC 1992, p. 2).

Following the call for proposals in July, three additional proposals were submitted during the autumn of 1992, namely "The P Internet Protocol (PIP)", "The Simple Internet Protocol (SIP)" and "TP/IX" (RFC 1995, p. 11). So by the time the IESG had planned to close down on a single solution, the Internet community was facing a wider variety of proposals than ever: seven proposed solutions existed by December 1992.

Preparing selection criteria

In parallel with, and fuelled by, the submission of proposals, there were efforts and discussions about the criteria for selecting among them. As it was evident that there would be several to choose from, there was a natural need to identify a set of criteria which, ideally, would function as a vehicle for making a reasonable and open decision.

The process of working out these criteria evolved in conjunction with, rather than prior to, the elaboration of the solutions themselves. From the early sketch in 1992, the set of criteria did not stabilise into its final form as an RFC until the IPng decision was already made in July 1994 (RFC 1994c). It accordingly makes better sense to view the process of defining a set of selection criteria as an expression of the gradual understanding and articulation of the challenges of an evolving infrastructure technology like the Internet.

Neither working on the proposals themselves nor settling the selection criteria was straightforward. The efforts spanned more than two years and involved a significant number of people. The work and discussions took place in a variety of forms and arenas including IETF meetings and BOFs, several e-mail lists, working groups and teleconferencing. In tandem with the escalating debate and discussion, the institutional organisation of the efforts was changed. This underscores an important but neglected aspect of developing infrastructure technology, namely that there has to be significant flexibility in the institutional framework, not only (the better-known challenge of) flexibility in the technology. It would carry us well beyond the scope of this chapter to pursue this issue in any detail, but let me indicate a few aspects. The Internet establishes and dismantles working groups dynamically. To establish a working group, the group only has to have its charter mandated by the IETF. In relation to IPng, several working groups were established (including ALE, ROAD, SIPP, TUBA, TACIT and NGTRANS, see ftp://Hsdndev.harvard.edu/pub/ipng/archive/). As the explorative process unfolded during 1993, there was a sense of a diminishing rather than escalating degree of clarity:

The [IPDECIDE] BOF [about criteria at the July 1993 IETF] was held in a productive atmosphere, but did not achieve what could be called a clear consensus among the assembled attendees. In fact, despite its generally productive spirit, it did more to highlight the lack of a firm direction than to create it.

(RFC 1994b, p. 2)

In response to this situation, Gross, chair of the IESG, called for the establishment of an IPng "area", an ad-hoc constellation of the relevant working groups with a directorate (whose leaders he himself suggested). At a critical time of escalating diversity, the IESG thus institutionalised a concerting of efforts. The changes in the institutional framework for the design of the Internet are elaborated further below.

Returning to the heart of the matter, the contents of the solutions and the criteria, there was much variation. The rich and varied set of criteria mirrors the fact that many participants in the Internet community felt that they were at a critical point in time, that important and consequential decisions had to be made in response to a rapidly changing outside world. Hence, the natural first aim of formulating a tight and orderly set of criteria was not attainable:

This set of criteria originally began as an ordered list, with the goal of ranking the importance of various criteria. Eventually, (...) each criterion was presented without weighting (...)

(RFC 1994c, p.2)

The goal was to provide a yardstick against which the various proposals could be objectively measured to point up their relative strengths and weaknesses. Needless to say, this goal was far too ambitious to actually be achievable (...)

(SELECT 1992)

To get a feeling for the kinds of considerations, the types of arguments and the level of reflection about the problem, a small selection of issues is elaborated below, all relating to this chapter's core question of how to make changes to infrastructure technology in order to scale.

Market-orienting Internet

One issue concerned the role of, and the extent to which, market forces, big organisations and user groups should be involved. Of course, no one objected to their legitimate role. But exactly how influential these concerns should be was debated. Partly, this issue had to do with the fact that historically the Internet has been dominated by individuals with a primary interest in design. Until fairly recently there has not been much attention to the commercial potential of the Internet within the community itself. This is clearly changing now (Hinden 1996). The economic and commercial repercussions of the Internet were debated as, for instance, the IPDECIDE BOF at the July 1993 IETF confirmed that "IETF decisions now have an enormous potential economic impact on suppliers of equipment and services." (IPDECIDE 1993). There was widespread agreement that the (near) future would witness a number of influential actors, both in terms of new markets as well as participants in the future development of the Internet:

Remember, we are at the threshold of a market driven environment. (...) Large scale phone companies, international PTTs and such, for example, as they discover that there is enough money in data networking worth their attention. A major point here is that the combination of the IETF and the IAB really has to deliver here, in order to survive.

(Braun 1992)

Market forces were recognised to play an important, complementary role:

[The] potential time frame of transition, coexistence and testing processes will be greatly influenced through the interplay of market forces within the Internet, and that any IPng transition plan should recognize these motivations (...)

(AREA 1994)

Still, there was broad consensus that the Internet community should take the lead. At one of the earliest broad, open hearings regarding selection criteria, the IPDECIDE BOF at the July 1993 IETF, it was forcefully stated that "`letting the market decide' (whatever that may mean) was criticised on several grounds [including the fact that the] decision was too complicated for a rational market-led solution." (IPDECIDE 1993).

Nevertheless, the increasing tension between the traditional Internet community of designers and commercial interests surfaced. Several pointed out that the Internet designers were not in close enough contact with the "real" world. The "Internet community should not close its eyes to plain reality" (Rekhter and Knopper 1992). This tension between users, broadly conceived, and designers did not die out. It was repeatedly voiced:

Concerns were expressed by several service providers that the developers had little appreciation of the real-world networking complexities that transition would force people to cope with.

(IPDECIDE 1993)

More bluntly, I find it rather peculiar to be an end user saying: we end user's desperately need [a certain feature] and then sitting back and hearing non-end-users saying "No you don't".

(Fleichman 1993)

Stick or carrot?

Still, the core problem with IPng concerned how large changes could (or ought to) be made, where, how and when to make them -- in other words, the transition strategy broadly conceived.

On the one hand, there were good reasons for making substantial changes to IPv4. A number of new services and patterns of use were expected, including real-time traffic, multimedia, Asynchronous Transfer Mode, routing policy and mobile computing. On the other hand, there was the pressure for playing it reasonably safe by focusing only on what was absolutely required, namely solving the address space and routing problems. This was recognised as a dilemma:

There was no consensus about how to resolve this dilemma, since both smooth transition and [new services like for instance] multimedia support are musts.

(IPDECIDE 1993)

It was pointed out above that balancing the pressure for changes against the need to protect the installed base is an intrinsic dilemma of infrastructure technology. In the case of IPng, this was amplified by the fact that the core requirements for IPng, namely solving the routing and address space problems, were invisible to most users. They were taken for granted. Hence, there were few incentives for users to change. Why would anyone bother to change to something with little perceived, added value?

In the final version of the selection criteria, this dilemma is used to guide all other requirements:

[W]e have had two guiding principles. First, IPng must offer an internetwork service akin to that of IPv4, but improved to handle the well-known and widely-understood problems of scaling the Internet architecture to more end-points and an ever increasing range of bandwidths. Second, it must be desirable for users and network managers to upgrade their equipment to support IPng. At a minimum, this second point implies that there must be a straightforward way to transition systems from IPv4 to IPng. But it also strongly suggests that IPng should offer features that IPv4 does not; new features provide a motivation to deploy IPng more quickly.

(RFC 1994c, pp. 3-4)

It was argued that the incentives should be easily recognisable for important user groups. Hence, it was pointed out that network operators were so vital that they should be offered tempting features such as control of "load-shedding and balancing, switching to backup routers" (NGREQS 1994). Similarly, the deep-seated aversion to Application Platform Interfaces, that is, tailor-made interfaces for specific platforms, was questioned. Despite the fact that "the IETF does not `do' [Application Platform Interfaces]" (RFC 1995, p. 39), the IESG finally recommended that an exception be made in the case of IPng, because it met the pressing need for tangible incentives for a transition to IPng (ibid., p. 5).

Internet is an infrastructure, not an artefact

A large number of requirements were suggested and debated. They include: topological flexibility, mobile communication, security, architectural simplicity, unique identifiers, risk assessment, network management, variable-length addresses and performance (RFC 1994c). Besides addressing perceived and anticipated needs, the requirements might have repercussions on the whole infrastructure, not only on IPng.

It was repeatedly pointed out that IPng was not only about revising one self-contained element of the Internet. It was about changing a core element of an infrastructure with tight and oblique couplings to a host of other elements of the infrastructure:

Matt Mathis pointed out that different proposals may differ in how the pain of deployment is allocated among the levels of the networking food chain (backbones, midlevels, campus nets, end users) (...).

(SELECT 1992)

I would strongly urge the customer/user community to think about costs, training efforts, and operational impacts of the various proposals and PLEASE contribute those thoughts to the technical process.

(Crocker 1992)

This well-developed sense of trying to grasp how one component, here IPng, relates to the surrounding components of the information infrastructure is a principal reason for the Internet's success up till now.

New features are included to tempt key users to change. But the drive towards conservatism is linked to one of the most important design principles of Internet, namely to protect the installed base. It is of overriding importance:

[T]he transition and interoperation aspects of any IPng is *the* key first element, without which any other significant advantage won't be able to be integrated into the user's network environment.

(e-mail from B. Fink to sipp mailing list, cited by Hinden 1996)

This appeal for conservatism is repeated ad nauseam. The very first sentence of (RFC 1996) describing the transition mechanisms of IPv6, reads: "The key to a successful IPv6 transition is compatibility with the large installed base of IPv4 hosts and routers" (ibid., p. 1). The pressure for holding back and declining features which might disturb the installed base is tremendous.

"Applying" the principles

A rich and varied set of proposed requirements was worked out. Still, it is not reasonable to hold that the decision was made by simply "applying" the abstract selection criteria to the different proposals for IPng. Despite the fact that the resulting 17 criteria (RFC 1994c) were "presented without weighting" (ibid., p. 3), a few themes were of overriding importance (IPDECIDE 1993). At this stage, draft requirements had been circulating for more than one year and seven candidates existed, but the requirements were "too general to support a defensible choice on the grounds of technical adequacy" and "had so far not gelled enough to eliminate any candidate" (ibid.). The concern for sharper criteria prevailed. It was repeated as late as March 1994, only two months before the decision was made:

One important improvement that seemed to have great support from the community was that the requirements should be strengthened and made firmer -- fewer "should allows" and the like and more "musts."

(AREA 1994)

The core concern focused on making the transition from IPv4 to IPv6 as smooth, simple and inexpensive as possible. A few carrots were considered crucial as incentives for a transition, primarily security:

What is the trade-off between time (getting the protocol done quickly) versus getting autoconfiguration and security into the protocol? Autoconfiguration and security are important carrots to get people to use IPng. The trade-off between making IPng better than IP (so people will use it) versus keeping IPv4 to be as good as it can be.

(NGDIR 1994)

Other requirements were to a large extent subordinate or related to these. For instance, autoconfiguration, that is, "plug and play" functionality, may be viewed as an incentive for transition.

The collection of proposed IPng solutions had evolved, joined forces or died. As explained earlier, there was a tight interplay between the development of the solutions and the criteria. The real closing down on one solution took place during May-July 1994. In this period, there were extensive e-mail discussions but, more importantly, the IPng Directorate organised a two-day retreat on 19-20 May 1994 at BigTen with the aim of evaluating and reworking the proposals (Knopper 1994). Through this retreat and the subsequent IETF meeting in July 1994, an IPng solution was decided upon.

Showdown

By the spring of 1994, three candidates for IPng existed, namely "CATNIP" (evolving from TP/IX), "SIPP" (an alliance between IPAE, SIP and PIP) and "TUBA" (evolving from Simple CLNP). A fourth proposal, Nimrod, was more or less immediately rejected for being too unfinished and too much of a research project.

CATNIP was "to provide common ground between the Internet, OSI, and the Novell protocols" (RFC 1995, p. 12). The basic idea of CATNIP for ensuring this was to have the Internet, OSI and Novell transport layer protocols (for instance, TCP, TP4 and SPX) run on top of any of the network layer protocols (IPv4, CLNP, IPX -- or CATNIP). The addressing scheme was borrowed from OSI.

A primary objection against CATNIP, which surfaced during the BigTen retreat, was that it was not completely specified (Knopper 1994; RFC 1995, pp. 14-15). Beyond the obvious problems with evaluating an incomplete proposal, this illustrates a more general point made earlier and illustrated by Alvestrand (1996), an area director within the IETF: "The way to get something done in the Internet is to work and write down the proposal". Despite appreciation for the "innovative" solution, there was scepticism towards the "complexity of trying to be the union of a number of existing network protocols" (RFC 1995, p. 15).

The TUBA solution was explicitly conservative. Its principal aim was to "minimize the risk associated with the migration to a new IP address space" (ibid., p. 13). This would mean "only replacing IP with CLNP" (ibid., p. 13) and letting "existing Internet transport and application protocols continue to operate unchanged, except for the replacement of 32-bit IP[v4] addresses with larger addresses" (ibid., p. 13). CLNP is, as outlined above, OSI's already existing network layer protocol. Hence, the core idea is simply to encapsulate, that is, wrap up, TCP in CLNP packets.

The evaluation of TUBA acknowledged the benefits of a solution making use of the "significant deployment of CLNP-routers throughout the Internet" (ibid., p. 16), that is, a solution paying respect to an installed base. But, similar to the arguments outlined above (see "The big heat") regarding the IAB's IPv7 plan to build IPng on CLNP, "[t]here was considerably less agreement that there was significant deployment of CLNP-capable hosts or actual networks running CLNP." (RFC 1995, p. 16). The worries -- "including prejudice in a few cases" (ibid., p. 16) -- about the prospects of losing control of the Internet by aligning IPng with an OSI protocol were deep-seated.

SIPP was to be "an evolutionary step from IPv4 (...) not (...) a radical step" (ibid., p. 12). SIPP doubles the address size of IP from 32 to 64 bits to support more levels of addressing hierarchy and a much greater number of addressable nodes. SIPP does not, in the same way as CATNIP or TUBA, relate to non-Internet protocols.

The reviews of SIPP were favourable. SIPP was praised as an "aesthetically beautiful protocol well tailored to compactly satisfy today's known network requirement" (ibid., p. 15). It was furthermore pointed out that the SIPP working group had been the most dynamic one during the previous year, producing close to a complete specification.

Still, it was definitely not a satisfactory solution. In particular, the transition plan (based on the encapsulation suggestion originally in IPAE) was viewed as "fatally flawed" (Knopper 1994). A number of reviewers also felt that the routing problems were not really addressed, partly because there was no way to deal with topological information and the aggregation of information about areas of the network.

In sum, there were significant problems with all three proposals. Because CATNIP was so incomplete, the real choice was between TUBA and SIPP. Following the BigTen evaluation retreat, Deering and Francis (1994), co-chairs of the SIPP working group, summarised the retreat to the sipp e-mail list and proposed to build upon suggestions which came out of it. Most importantly, they suggested to "change address size from 8 bytes [=64 bits, the original SIPP proposal] to 16 bytes [=128 bits] (fixed-length)" (ibid.). This increase in address length would buy the flexibility to find better solutions for autoconfiguration, more akin to the TUBA solution. These suggestions were accepted by the SIPP working group, who submitted the revised SIPP (version 128 bits) to the IPng Directorate together with a new but incomplete transition plan inspired by TUBA. This was accepted in July 1994 as the solution for IPng, finally ready to be put on the ordinary standards track of the Internet.
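
The raw arithmetic behind these address sizes indicates what was at stake; a tiny Python illustration (ours, not taken from the proposals):

    # Number of distinct identifiers for the address lengths discussed:
    # IPv4 (32 bits), the original SIPP (64 bits), and the revised SIPP
    # and hence IPv6 (128 bits).
    for bits in (32, 64, 128):
        print(f"{bits:>3}-bit addresses: {float(2 ** bits):.3e} identifiers")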

July 1994 - today

Finished at last -- or are we?

By the summer of 1994, a recommended candidate for IPng had been found. It was called IPv6. It has been put on the standards track (see chapter 4) and was made a Proposed Standard in November 1994. One could accordingly be tempted to think that it was all over, that a way had been found which secured the future of the Internet. This, however, is not quite the case, not even today. There is a fairly well-founded doubt "whether IPv6 is in fact the right solution to the right problem" (Eidnes 1996). There are two reasons for this, both elaborated below:

Full-scale testing

A core element of the Internet design principles, what could be said to be the realisation of Internet pragmatism, is the emphasis on practical experience and the testing of any solution (RFC 1994). Although this principle is universally accepted within the Internet community, the point is that as the installed base of the Internet expands, so do the difficulties of actually accomplishing large-scale, realistic testing. So again, how should the principle of realistic testing be implemented for IPng? This worry was voiced fairly early on:

It is unclear how to prove that any proposal truly scales to a billion nodes. (...) Concern was expressed about the feasibility of conducting reasonably-sized trials of more than one selected protocol and of the confusing signals this would send the market.

(IPDECIDE 1993)

The problem of insufficient testing is important because it undermines the possibility of establishing interoperability (ibid.):

It is also difficult to estimate the time taken to implement, test and then deploy any chosen solution: it was not clear who was best placed to do this.

Current deployment of IPv6 is very slow. Implementations of IPv6 segments, even on an experimental basis, hardly exist (Eidnes 1996). Even though the phases a standard undergoes before becoming a full Internet Standard may take as little as 10 months, a more realistic projection for IPv6 is 5 years (Alvestrand 1996). The upgrading of IPv6 to a Draft Standard requires testing well beyond what has so far been conducted.

As the Internet expands, full-scale testing becomes more cumbersome. Some within the IETF see an increasingly important role for non-commercial actors, for instance research networks, to function as early test-beds for future Internet Standards (Alvestrand 1996). The US Naval Research Laboratory had implemented an experimental IPv6 segment by 1 June 1996 as part of its internetworking research. The Norwegian research network, which has traditionally been fairly far up front, expects to start deployment of IPv6 during 1997.

Unresolved issues

At the time when the IPng protocol was accepted on the standards track, several crucial issues were still not settled. At the November 1994 IETF, immediately following the IPng decision, it was estimated that 10-20 specifications were required (AREA 1994b). Most importantly, a transition strategy was not in place. This illustrates the point made earlier, namely that the actual design decisions are not derived in any straightforward sense from abstract principles. Besides a transition strategy, the security mechanisms related to key management were not -- and, indeed, still are not -- completed.

A core requirement for IPng was to have a clear transition strategy (RFC 1995). SIPP (version 128 bits) was accepted as IPng without a clear transition strategy formally having been produced, because the concerns for facilitating a smooth transition were interwoven with the whole process, as outlined earlier (see "July 1992 - July 1994"). There was a feeling that it would be feasible to work out the details of the transition mechanisms based on the IPng protocol. It was accordingly decided by the IPng Directorate, just prior to the BigTen retreat, to separate transition from the protocol.

In response to the lack of a complete transition strategy, informal BOFs (NGTRANS and TACIT) were held at the November 1994 IETF. TACIT was a working group formed during the spring of 1994; NGTRANS was established as a working group shortly after the November 1994 IETF. Both TACIT and NGTRANS were to address the issue of a transition strategy, but with slightly different foci. NGTRANS was to develop and specify the actual, short-term transition mechanisms, leaving TACIT to deal with deployment plans and operational policies (NGTRANS 1994). The transition was to be "complete before IPv4 routing and addressing break down" (Hinden 1996, p. 62). As a result of the deployment of CIDR, it was now estimated that "IPv4 addresses would be depleted around 2008, give or take three years" (AREA 1994b).

From drafts sketched prior to the establishment of NGTRANS and TACIT, the work on the transition strategy only reached the stage of an RFC by April 1996 (RFC 1996).

The transition mechanisms evolved gradually. It was recognised early on that a cornerstone of the transition strategy was the "dual-stack" node, that is, a host or router which implements both IPv4 and IPv6 and thus functions as a gateway between IPv4 and IPv6 segments. Dual-stack nodes have the capability to send and receive both IPv4 and IPv6 packets. They enforce no special ordering on the sequence of nodes to be upgraded to IPv6, as dual-stack nodes "can directly interoperate with IPv4 nodes using IPv4 packets, and also directly interoperate with IPv6 nodes using IPv6 packets" (RFC 1996, p. 4).
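
As a minimal sketch of the dual-stack idea, the following Python fragment (hypothetical; the port number and wildcard addresses are arbitrary examples) opens both an IPv4 and an IPv6 listening socket on the same node, letting it interoperate with either kind of peer:

    import socket

    PORT = 8080  # arbitrary example port

    # IPv4 side of the dual stack.
    v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    v4.bind(("0.0.0.0", PORT))   # all IPv4 interfaces
    v4.listen()

    # IPv6 side; kept IPv6-only so the two stacks stay distinct.
    v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    v6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    v6.bind(("::", PORT))        # all IPv6 interfaces
    v6.listen()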

Progress was also made on closely related elements of an IPv6 infrastructure. The bulk of the IPv4 routing algorithms were reported to work also for IPv6 routers, a piece of pleasant news in November 1994 (AREA 1994a, p. 4).

The additional key transition mechanism besides dual-stack nodes was IPv6 over IPv4 "tunnelling". This is the encapsulation, or wrapping up, of an IPv6 packet within an IPv4 header in order to carry it across IPv4 segments of the infrastructure. A key element facilitating this is to assign IPv6 addresses which are compatible with IPv4 addresses in a special way: the IPv4-compatible IPv6 address has its first 96 bits set to zero and the remaining 32 bits equal to the IPv4 address.
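
The format is easy to see with Python's ipaddress module (a small sketch; 192.0.2.1 is a documentation address used purely as an example):

    import ipaddress

    # An IPv4-compatible IPv6 address: 96 zero bits followed by the
    # 32-bit IPv4 address.
    v4 = ipaddress.IPv4Address("192.0.2.1")
    compatible = ipaddress.IPv6Address(int(v4))
    print(compatible)             # ::c000:201, i.e. ::192.0.2.1
    print(int(compatible) >> 32)  # 0 -> the first 96 bits are zero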

Discussion

Rule following vs. reflective practitioners

A striking aspect of the IPng effort is the difference between abstract design principles and the application of these to situated contexts. A considerable body of literature has, on both theoretical and empirical grounds, pointed out how human action always involves a significant element of situated interpretation extending well beyond predefined rules, procedures, methods or principles (Suchman 1987). That designers deviate from codified methods and text-books is likewise not news (Curtis, Krasner and Iscoe 1988; Vincenti 1990). Still, the manner in which deviations from, applications of, or exceptions to design principles are made the subject of a fairly open and pluralistic discussion is rare. It is not merely the case that the actual design of the Internet does not adhere strictly to any design principles; this should not surprise anyone. More surprising is the extent to which the situated interpretations of the design principles are openly and explicitly discussed among a significant portion of the community of designers.

When outlining different approaches to systems design or interdisciplinarity, the engineering or technically inclined approach is commonly portrayed as quite narrow-minded (Lyytinen 1987). The Internet community is massively dominated by designers with a background, experience and identity stemming from technically inclined systems design. The design process of IPng, however, illustrates an impressively high degree of reflection among the designers. It is not at all narrow-minded. As outlined earlier, there are numerous examples of this, including crucial ones such as: how the installed base constrains and facilitates further changes, the new role of market forces, and the balance between exploring alternatives and closing down.

Aligning actor-networks

The majority of the Internet community has a well-developed sense of what they are designing. They are not designing artefacts but tightly related collections of artefacts, that is, an infrastructure. When changes are called for (and they often are), they do not change isolated elements of the infrastructure. They facilitate a transition of the infrastructure from one state to another.

Key to understanding the notion of transition and coexistence is the idea that any scheme has associated with it a cost-distribution. That is, some parts of the system are going to be affected more than other parts. Sometimes there will be a lot of changes; sometimes a few. Sometimes the changes will be spread out; sometimes they will be concentrated. In order to compare transition schemes, you *must* compare their respective cost-distribution and then balance that against their benefits.

(Rose 1992b)

In the vocabulary of actor-network theory (Callon 1991; Latour 1992), this insight corresponds to recognising that the huge actor-network of Internet -- the immense installed base of routers, users' experience and practice, backbones, hosts, software and specifications -- is well-aligned and to a large extent irreversible. To change it, one must change it into another equally well-aligned actor-network. To do this, only one (or very few) components of the actor-network can be changed at a time. This component then has to be aligned with the rest of the actor-network before anything else can be changed. This gives rise to an alternation over time between stability and change for the various components of the information infrastructure (Hanseth, Monteiro and Hatling 1996).

This crucial but often neglected insight of infrastructure design is well developed in the Internet community, as several illustrations from the IPng case show: the difference between short-term and long-term solutions, the debate over CIDR vs. C# and the concerns regarding transition mechanisms. The failure to really appreciate this is probably the key reason why the otherwise similar and heavily sponsored OSI efforts have yet to produce anything close to an information infrastructure of Internet's character (Rose 1992). Hanseth, Monteiro and Hatling (1996) compare the OSI and Internet efforts more closely.

An actor-network may become almost impossible to change when its components accumulate too much irreversibility and become too well aligned with each other (Hughes 1983). The components of the actor-network become, so to speak, locked into one another in a deadly dance from which none succeeds in breaking out. This is often the case with infrastructure technologies. Grindley (1995) describes the collapse of closed operating systems along these lines, without employing the language of actor-network theory. The operating systems were too conservative: they were locked into each other by the insistence that new versions be backwards compatible with earlier ones and by the tailoring of a large family of applications to run only on one operating system. The danger that something similar will happen to Internet increases as the infrastructure expands, because the "longer it takes to reach a decision, the more costly the process of transition and the more difficult it is to undertake" (IPDECIDE 1993).

Obviously, there are no generic answers to how much one should open an infrastructure technology to further changes, when to close down on a solution which addresses at least fairly well-understood problems, or when simply to keep the old solution unchanged for the time being. Internet has pursued and developed what seems a reasonably sound, pragmatic sense of this problem:

Making a reasonable well-founded decision earlier was preferred over taking longer to decide and allowing major deployment of competing proposals.

(IPDECIDE 1993)

Striking a balance between stability and change has to date been fairly successful. Whether this level of openness and willingness to be innovative suffice to meet future challenges remains to be seen. It is anything but obvious.

But what about the future?

The institutionalised framework of Internet is under a tremendous -- and a completely new kind of -- pressure. This is partly due to the fact that the majority of users now come from sectors other than the traditional ones. The crucial challenge is to preserve the relatively pluralistic decision process involving a significant fraction of the community when confronted with situations calling for pragmatic judgement.

So there it is: politics, compromise, struggle, technical problems to solve, personality clashes to overcome, no guarantee that we'll get the best result, no guarantee that we'll get any result. The worst decision making system in the world except for all the others

(Smart 1992)

But only a minority of today's Internet community has acquired this required sense of pragmatism. There are signs which indicate a growing gulf between the traditional design culture and the more commercially motivated ones (Rekhter and Knopper 1992).

The core institutions of Internet are the IETF, the IESG and the IAB. Despite the fact that the IAB members are appointed from the IETF, the IAB was -- especially during the heated debate over the Kobe declaration -- poorly aligned with the IESG and the IETF. How, then, can the interests of the IAB seemingly differ so much from those of the IESG and the IETF? We point out a couple of issues we believe are relevant to working out an explanation.

Even if the IAB today is recruited from basically the same population as the IESG and the IETF, this has not always been the case (Kahn 1994). The bulk of the current members of the IAB -- eight of them -- come from the computer and telecommunications industry; two come from universities, one from a research institute and one from manufacturing industry. Seven are based in the United States and one in each of Australia, Britain, Canada, the Netherlands and Switzerland (Carpenter 1996). The IAB struggled until fairly recently, however, with a reputation of being too closed (IAB 1990). The minutes of the IAB were not published until 1990. In addition, the IAB was for some time "regarded as a closed body dominated by representatives of the United States Government" rather than by the traditional designers of the IETF and the IESG (Carpenter 1996). In connection with the Kobe declaration, this legacy of the IAB was made rhetorical use of and hence kept alive: "Let's face it: in general, these guys [from IAB] do little design, they don't code, they don't deploy, they don't deal with users, etc., etc., etc." (Rose 1992b).

The programmatically stated role of the IAB -- to advise and stimulate action rather than to direct -- has to be constantly adjusted. As Carpenter (1996), the IAB chair, states: "the IAB has often discussed what this means (...) and how to implement it". It seems that the IAB during recent years has become more careful when extending advice, in order not to have it misconstrued as direction. The controversy over the Kobe declaration was an important adjustment of what it means for the IAB to provide advice: "the most important thing about the IAB IPv7 controversy [in the summer of 1992] was not to skip CLNP. It was to move the power from the IAB to the IESG and the IETF" (Alvestrand 1996).

The last few years have witnessed a manyfold increase in IETF attendance, even if it seems to have stabilised during the last year or so. Many important elements of the future Internet, most notably those related to Web technology, are developed outside the Internet community in industrial consortia dealing with the HTML protocol family, HTTP, web-browsers and electronic payment. It is not clear that all of the standards these consortia develop will ever get on the Internet standards track; the consortia might decide to keep them proprietary. Still, a key consortium like the WorldWideWeb consortium led by Tim Berners-Lee has gained widespread respect within the Internet community for the way its standardisation process mimics that of Internet (see http://www.w3.org/pub/WWW). As the organisation of Internet standardisation activities grows, so does the perceived need to introduce more formal, bureaucratic procedures closer to those employed within OSI: "the IETF might be able to learn from ISO about how to run a large organization: `mutual cultural infection' might be positive" (IAB 1993).

An important design principle within Internet is the iterative development of standards, which combines practical testing and deployment with the standardisation process. This principle is becoming increasingly difficult to meet, as the IP revision makes painfully clear. There is a growing danger that the Internet standardisation process may degenerate into a more traditional, specification-driven approach. Non-commercial actors, for instance research networks, have an important role to play as testbeds for future standards (Alvestrand 1996).

Conclusion

To learn about the problems of scaling information infrastructure, we should study Internet. With the escalating use of Internet, making the changes required for scaling becomes increasingly difficult. Internet has never faced a more challenging task regarding scaling than its revision of IP. After years of hard work, most people reckon that IPv6 will enhance further scaling of Internet. But even today there is a reasonably well-founded doubt about this: we have yet to see documented testing of IPv6 segments.

The real asset of Internet is its institutionalised practice of pragmatically and fairly pluralistically negotiating design issues. Whether this will survive the increasing pressure from new users, interest groups, commercial actors and industrial consortia remains to be seen.

Having argued conceptually for cultivation based strategies for the establishment of information infrastructures (chapter 9), and having illustrated an instance of such a strategy in some detail in the case of IPv6, we now turn to alternative, more radical approaches to the "design" of information infrastructures. Together with the cultivation based approach, these alternative approaches make up the repertoire of strategies available when establishing information infrastructures.

1. Alvestrand (1996) suggests that had it not been for the clumsy way the IAB announced its decision, many more would probably have gone along with the CLNP solution.
