We have outlined how the elements of an information infrastructure inscribe future patterns of use (chapters 6 and 7) and how the approach that has dominated to date inscribes completely unrealistic scenarios of use (chapter 8). This leads us to explore more realistic ways to intervene in, i.e. design, an information infrastructure. This implies a closer analysis of the way behaviour is inscribed in the already existing elements of an infrastructure -- the installed base. The direction of our analysis will be a radically different approach to the "design" of infrastructure, an approach dubbed "cultivation".
The building of large infrastructures takes time. All elements are connected. As time passes, new requirements appear which the infrastructure has to adapt to, as explained in chapter 5. The whole infrastructure cannot be changed instantly - the new has to be connected to the old. The new version must be designed in a way that links the old and the new together and makes them "interoperable" in one way or another. In this way the old - the installed base - heavily influences how the new can be designed. Infrastructures develop through extending and improving the installed base.
The focus on infrastructure as "installed base" implies that infrastructures are considered as always already existing; they are NEVER developed from scratch. When "designing" a "new" infrastructure, it will always be integrated into and thereby extend others, or it will replace one part of another infrastructure. This has been the case in the building of all transport infrastructures: Every single road - even the first one, if it makes sense to speak of such a thing - has been built in this way; when air traffic infrastructures have been built, they have been tightly interwoven with road and railway networks - one needed these other infrastructures to travel between airports and the journeys' end points. Or even more strongly - air traffic infrastructures can only be used for one leg of a journey, and without infrastructures supporting the rest, isolated air traffic infrastructures would be useless.
Information infrastructures are large actor-networks including: systems architectures, message definitions, individual data elements, standardisation bodies, existing implementations of the technology being included in a standard, users and user organisations, software vendors, text books and specifications. Programs of action are inscribed into every element of such networks. To reach agreement and succeed in the implementation of a standard, its whole actor-network must be aligned.
In the vocabulary of actor-network theory (presented in chapter 6), this insight corresponds to recognising that the huge actor-network of Internet -- the immense installed base of routers, users' experience and practice, backbones, hosts, software and specifications -- is well-aligned and to a large extent irreversible. To change it, one must change it into another equally well-aligned actor-network. To do this, only one (or very few) components of the actor-network can be changed at a time. This component then has to be aligned with the rest of the actor-network before anything else can be changed. This gives rise to an alternation over time between stability and change for the various components of the information infrastructure (Hanseth, Monteiro and Hatling 1996).
"I would strongly urge the customer/user community to think about costs, training efforts, and operational impacts of the various proposals and PLEASE contribute those thoughts to the technical process."
"Key to understanding the notion of transition and coexistence is the idea that any scheme has associated with it a cost-distribution. That is, some parts of the system are going to be affected more than other parts. Sometimes there will be a lot of changes; sometimes a few. Sometimes the changes will be spread out; sometimes they will be concentrated. In order to compare transition schemes, you "must" compare their respective cost-distribution and then balance that against their benefits."
Within the field of institutional economics, some scholars have studied standards as part of a more general phenomenon labelled "self-reinforcing mechanisms" (Arthur 1988, 1989, 1990) and "network externalities" (Katz and Shapiro 1986). We will here briefly review these scholars' conception of standards and their self-reinforcing character as the installed base grows. We will look at the cause of this phenomenon as well as its effects.
Self-reinforcing mechanisms appear when the value of a particular product or technology for individual adopters increases as the number of adopters increases. The term "network externalities" denotes the fact that such a phenomenon appears when the value of a product or technology also depends on aspects external to the product or technology itself.
A standard which builds up an installed base ahead of its competitors becomes cumulatively more attractive, making the choice of standards "path dependent" and highly influenced by a small advantage gained in the early stages (Grindley 1995, p. 2). The development and diffusion of standards and "infrastructural" technologies are determined by such self-reinforcing mechanisms.
The basic mechanism is that the large installed base attracts complementary production and makes the standard cumulatively more attractive. A larger base with more complementary products also increases the credibility of the standard. Together these make a standard more attractive to new users. This brings in more adoptions, which further increases the size of the installed base, etc. (ibid., p. 27).
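This feedback loop is easy to illustrate with a small simulation, loosely in the spirit of Arthur's adoption models. The sketch below is hypothetical (the quality and network-weight parameters are invented for illustration, not taken from the literature): each adopter picks the standard whose value, intrinsic quality plus a bonus proportional to the installed base, is highest for them. A standard with a small initial installed-base advantage tends to pull ahead and lock in, even when the rival is intrinsically superior.

```python
import random

def simulate_adoption(n_adopters=1000, base_a=10, base_b=0, seed=42):
    """Sequential adoption of two competing standards, A and B.

    Each adopter's perceived value of a standard is its intrinsic quality
    plus a network-externality bonus proportional to the installed base,
    plus a noise term standing in for idiosyncratic preferences.
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    installed = {"A": base_a, "B": base_b}   # A starts with a small head start
    quality = {"A": 1.0, "B": 1.2}           # B is intrinsically superior
    network_weight = 0.1                     # value added per existing adopter
    for _ in range(n_adopters):
        value = {s: quality[s] + network_weight * installed[s] + rng.gauss(0, 1)
                 for s in installed}
        winner = max(value, key=value.get)
        installed[winner] += 1               # positive feedback: base grows
    return installed

shares = simulate_adoption()
print(shares)   # one standard ends up with the overwhelming share
```

Running this repeatedly with different seeds shows the "path dependent" character Grindley describes: which standard wins is settled by early, essentially random events, after which the installed base makes the outcome practically irreversible.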
Self-reinforcing mechanisms are, according to Arthur (1990), outside the scope of traditional, neo-classical economics, which focuses on diminishing returns on investment. Typical examples of this phenomenon are found within resource-based economics. Assuming that the sources (of, for instance, hydro power or oil) most easily available are used first, the costs increase as more is consumed (Arthur 1990).
In general, the parts of the economy that are knowledge-based are largely subject to increasing returns: they require large investments in research while incremental production is relatively cheap, increased production means more experience and a greater understanding of how to produce additional units even more cheaply, and the benefits of using the products increase (ibid.).
Arthur (1988) identifies four sources of self-reinforcing processes: large set-up or fixed costs; learning effects (improvement through experience); coordination effects (advantages in going along with others); and adaptive expectations. Katz and Shapiro (1985) present three possible sources of network externalities: the direct physical effect of the number of purchases on the quality of the product (for instance, telephone); indirect effects, for instance that the more users buy a particular computer, the more software will be available; and the dependence of post-purchase service on the experience and size of the service network.
The self-reinforcing effects of the installed base correspond to Thomas Hughes' concept of momentum. This concept is one of the important results of Hughes' study of electricity in Western societies in the period 1880-1930 (Hughes 1983). This study is certainly the most important and influential within the field of LTS studies.
Hughes describes momentum as very much a self-reinforcing process gaining force as the technical system grows "larger and more complex" (Hughes 1987, 108). Major changes which seriously interfere with the momentum are, according to Hughes, only conceivable in extraordinary instances: "Only a historic event of large proportions could deflect or break the momentum [of the example he refers to], the Great Depression being a case in point" (ibid., 108) or, in a different example, the "oil crises" (ibid., 112). As Hughes describes it, momentum is the result of a larger actor-network including the technology and its users, manufacturers, educational institutions, etc. In particular, the establishment of professions, educational institutions and programs is crucial in this respect. We will give a more comprehensive presentation of Hughes' study later in this chapter as an example of an evolving infrastructure.
The most widely known example of this phenomenon is the QWERTY layout of typewriter and computer keyboards (David 1986). Other examples documented by economists include the VHS standard for VCRs and water-cooling systems in nuclear power plants as well as FORTRAN. Technological standards in general tend to become locked-in by positive feedback (Farrel and Saloner 1985, 1986, 1992).
In the cases of VCR standards and nuclear power plant cooling, Betamax and gas cooling respectively were considered technologically superior but became losers in the "competition" because the alternatives, VHS and water cooling, gained an early "advantage" that was reinforced through positive feedback.
IIs and communication technologies are paradigm examples of phenomena where "network externalities" and positive feedback (increasing returns on adoption) are crucial, and where technologies accordingly easily become "locked-in" and turn irreversible. All factors mentioned above apply. The positive feedback from new adopters (users) is strong. The usefulness is not only dependent on the number of users; in the case of e-mail, for instance, the usefulness to a large extent is its number of users. The technology becomes hard to change as successful changes need to be compatible with the installed base. As the number of users grows, reaching agreement about new features as well as coordinating transitions becomes increasingly difficult. Vendors develop products implementing a standard, and new technologies are built on top of it. As the installed base grows, institutions like standardization bodies are established, and the interests vested in the technology grow.
An actor-network becomes irreversible when it is practically impossible to change it into another aligned one. At the moment, Internet appears to be approaching a state of irreversibility. Consider the development of a new version of IP mentioned earlier. One reason for the difficulty of developing a new version of IP is the size of the installed base of IP protocols, which must be replaced while the network is running. Another major difficulty stems from the inter-connectivity of standards: a large number of other technical components depend on IP. An internal report assesses the situation more precisely as: "Many current IETF standards are affected by [the next version of] IP. At least 27 of the 51 full Internet Standards must be revised (...) along with at least 6 of the 20 Draft Standards and at least 25 of the 130 Proposed Standards." (RFC 1995, 38).
The irreversibility of an II has not only a technical basis. An II turns irreversible as it grows due to the number of, and relations between, the actors, organisations and institutions involved. In the case of Internet, this is perhaps most evident in relation to new, commercial services promoted by organisations with different interests and backgrounds. The transition to the new version of IP will require coordinated actions from all of these parties. There is a risk that "everybody" will await "the others," making it hard to be an early adopter. As the number of users as well as the types of users grow, reaching agreement on changes becomes more difficult (Steinberg 1995).
For a long period there was a fight between OSI and Internet - sometimes called a religious war (Drake 1993). Now the war is obviously over, and OSI is the loser. Einar Stefferud (1992, 1994) and Marshall Rose (1992) claimed that OSI would be a failure due to its "installed base hostility." They argued that OSI was a failure as the protocols did not pay any attention to existing networks. They were specified in a way causing great difficulties when trying to make them interwork with corresponding Internet services, i.e. link them to any existing installed bases. The religious war can just as well be interpreted as a fight between an installed base and improved design alternatives. While OSI protocols were being discussed, specified and pushed through the committees in ISO, Internet protocols were implemented and deployed. The installed base won, in spite of tremendous support for OSI from numerous governments and the CEC specifying and enforcing GOSIPs (Government OSI Profiles).
The Internet has been a rapidly growing installed base. The larger the installed base, the more rapid the growth. The rapid diffusion of the WorldWideWeb is an example of how to successfully build new services on an existing installed base.
The growth of the Internet is a contrast to the failed OSI effort. However, there are other examples providing contrasts as well. Leigh Star and Karen Ruhleder (1994, 1996) document how a nicely tailored system (i.e. specialized, tailored to users' needs) supporting a world-wide research community "lost" to the much less sophisticated gopher and later Web services. As we see this case, the specialized system lost due to its requirement for an installed base of new technological infrastructure. Also important was the lack of an installed base of knowledge and systems support infrastructure.
One strategy David (ibid.) finds worth considering is that of "counter-action" - i.e. preventing the "policy window" from slamming shut before the policy makers are better able to perceive the shape of their relevant future options. This requires positive action to maintain leverage over the systems rivalry, preventing any of the presently available variants from becoming too deeply entrenched as a standard, and thus gathering more information about technological opportunities even at the cost of immediate losses in operational efficiency. In "the race for the installed base," governments could subsidize only the second-place system.
According to Arthur (1988), exit from an inferior lock-in in economics depends very much on the source of the self-reinforcing mechanism. It depends on the degree to which the advantages accrued by the inferior "equilibrium" are reversible or transferable to an alternative one. Arthur (1988) claims that when learning effects and specialized fixed costs are the source of reinforcement, advantages are not transferable to an alternative equilibrium. Where coordination effects are the source of lock-in, however, he says that advantages are often transferable. As an illustration he mentions that users of a particular technological standard may agree that an alternative would be superior, provided everybody switched. If the current standard is not embodied in specialized equipment and its advantage-in-use is mainly that of convention (for instance the use of colors in traffic lights), then a negotiated or mandated changeover to a superior collective choice can provide exit into the new equilibrium at negligible cost.
Perhaps the most important remedy to help overcome the negative effects of positive feedback and network externalities, i.e. lock-in and inefficiency, is the construction of gateways and adapters (Katz and Shapiro 1985; David and Bunn 1988). Gateways may connect heterogeneous networks built independently or based on different versions of the same standards.
Based on an analysis of the circumstances and consequences of the development of the rotary converter which permitted conversion between alternating and direct current (and vice versa), David and Bunn (1988) argue that ".. in addition to short-run resource saving effects, the evolution of a network technology can be strongly influenced by the availability of a gateway innovation." In this case, this gateway technology enabled the interconnection of previously incompatible networks, and enabled the evolution from a dominating inferior technology into a superior alternative.
On a general level, two elements are necessary for developing flexible IIs. First, the standards and IIs themselves must be flexible and easy to adapt to new requirements. Second, strategies for changing the existing II into the new one must be developed together with the necessary gateway technologies linking the old and the new. These elements are often interdependent.
The basic principles for providing flexibility are modularization and encapsulation (Parnas 1972). Another important principle is leanness, meaning that any module should be as simple as possible, based on the simple fact that it is easier to change something small and simple than something large and complex. Formalization increases complexity; accordingly, less formalization means greater flexibility.
Techniques that may be used include designing a new version as an extension of the existing one, guaranteeing backward compatibility, and "dual stacks," meaning that a user who wants to communicate with users on different networks is connected to both. This is explored further in chapter 11.
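The dual-stack idea can be made concrete with a minimal sketch (all names here are hypothetical, and the protocol labels merely echo the IP example): a node that speaks both the old and the new protocol version can reach peers on either network, and so can adopt the new version early without losing connectivity to the installed base.

```python
class Node:
    """A network node and the set of protocol versions it speaks."""

    def __init__(self, name, protocols):
        self.name = name
        self.protocols = set(protocols)   # e.g. {"IPv4"} or {"IPv4", "IPv6"}

    def can_reach(self, peer):
        # Two nodes can communicate if they share at least one protocol.
        return bool(self.protocols & peer.protocols)

legacy   = Node("legacy-host", {"IPv4"})
new_only = Node("new-host", {"IPv6"})
dual     = Node("dual-stack-host", {"IPv4", "IPv6"})

print(dual.can_reach(legacy), dual.can_reach(new_only))   # True True
print(legacy.can_reach(new_only))                          # False
```

The dual-stack host pays the cost of running both protocols, but in return the transition needs no flag day: the installed base and the new network coexist for as long as the migration takes.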
Accepting the general thrust of our argument, that the elements of an infrastructure inscribe behaviour and that there always already exist such elements, implies a radical rethinking of the very notion of design. Traditional design (implicitly) assumes a degree of freedom that simply does not exist (cf. chapter 8 on design dominated by universalism). This leads us to explore alternative notions of design.
When describing our efforts and strategies for developing technological systems, we usually characterize these efforts as design or engineering. Bo Dahlbom and Lars Erik Janlert (1996) use the notion of construction to denote a more general concept including design as well as engineering. They further use the notion of cultivation to characterize a fundamentally different approach to shaping technology. They characterize the two concepts in the following way:
[When we] "engage in cultivation, we interfere with, support and control, a natural process." [When] "we are doing construction, [we are] selecting, putting together, and arranging, a number of objects to form a system. ...
Construction and cultivation give us two different versions of systems thinking. Construction is a radical belief in our power to, once and for all, shape the world in accordance with our rationally founded goals. Cultivation is a conservative belief in the power of natural systems to withstand our effort at design, either by disarming them or by ruining them by breakdown."
The concept of cultivation turns our focus to the limits of rational, human control. Considering technological systems as organisms with a life of their own implies that we focus on the role of the existing technology itself as an actor in the development process. This perspective focuses on socio-technical networks where objects usually considered social or technological are linked together into networks. The "development organization" as well as the "product" being developed are considered unified socio-technical networks.
Claudio Ciborra proposes "improvisation" as a concept for understanding what goes on in organizations when they adopt information technology. He holds that this concept is much more grounded in individual and organizational processes than planned decision making (XXCiborra ICIS96). He describes improvisation as situated performance where thinking and action seem to occur simultaneously and on the spur of the moment. It is purposeful human behavior which seems to be ruled at the same time by chance, intuition, competence and outright design.
Wanda Orlikowski uses the same concept in her "Improvisational Model for Managing Change" (XXOrlikowski Sloan, ISR). In this model, organizational transformation is seen as an ongoing improvisation enacted by organizational actors trying to make sense of, and act coherently in, the world. The model rests on two major assumptions which differentiate it from traditional models of change: first, changes associated with technology implementations constitute an ongoing process rather than an event with an end point after which the organization can expect to return to a reasonably steady state; and second, the various technological and organizational changes made during the ongoing process cannot, by definition, all be anticipated ahead of time.
Through a series of ongoing and situated accommodations, adaptations, and alterations (that draw on previous variations and mediate future ones), sufficient modifications may be enacted over time that fundamental changes are achieved. There is no deliberate orchestration of change here, no technological inevitability, no dramatic discontinuity, just recurrent and reciprocal variations of practice over time. Each shift in practice creates the conditions for further breakdowns, unanticipated outcomes, and innovations, which in turn are responded to with more variations. And such variations are ongoing; there is no beginning or end point in this change process.
Given these assumptions, the improvisational change model recognizes three different types of change: anticipated, emergent, and opportunity-based. Orlikowski distinguishes between anticipated changes -- changes that are planned ahead of time and occur as intended -- and emergent changes -- changes that arise spontaneously out of local innovation and which are not originally anticipated or intended. An example of an anticipated change would be the implementation of electronic mail software which accomplishes its intended aim to facilitate increased and quicker communication among organizational members. An example of an emergent change would be the use of the electronic mail network as an informal grapevine disseminating rumors throughout an organization. This use of e-mail is typically not planned or anticipated when the network is implemented, but often emerges tacitly over time in particular organizational contexts.
Orlikowski further differentiates these two types of changes from opportunity-based changes -- changes that are not anticipated ahead of time but are introduced purposefully and intentionally during the change process in response to an unexpected opportunity, event, or breakdown. For example, as companies gain experience with the World Wide Web, they are finding opportunities to apply and leverage its capabilities in ways that were not anticipated or planned before the introduction of the Web. Both anticipated and opportunity-based changes involve deliberate action, in contrast to emergent changes which arise spontaneously and usually tacitly out of people's practices with the technology over time (Orlikowski, 1996).
These three types of change build on each other over time in an iterative fashion (see Figure 1). While there is no predefined sequence in which the different types of change occur, the deployment of new technology often entails an initial anticipated organizational change associated with the installation of the new hardware/software. Over time, however, use of the new technology will typically involve a series of opportunity-based, emergent, and further anticipated changes, the order of which cannot be determined in advance because the changes interact with each other in response to outcomes, events, and conditions arising through experimentation and use.
Similarly, an improvisational model for managing technological change in organizations is not a predefined program of change charted by management ahead of time. Rather, it recognizes that technological change is an iterative series of different changes, many unpredictable at the start, that evolve out of practical experience with the new technologies. Using such a model to manage change requires a set of processes and mechanisms to recognize the different types of change as they occur and to respond effectively to them. The illustrative case presented below suggests that where an organization is open to the capabilities offered by a new technological platform and willing to embrace an improvisational change model, innovative organizational changes can be achieved.
Emergent change also covers cases where a series of smaller changes adds up to a larger whole rather different from what one was striving for. Some researchers see organizational structures as well as computerized information systems as emergent rather than designed (Ngwenjama 1997). Defined in this way, the concept of emergent change comes close to what Claudio Ciborra (1996a, 1996b, 1997) calls "drifting". When studying groupware, he found that the technology tends to drift when put to use. By drifting he means a slight or significant shift in the role and function that the technology is called to play in concrete situations of usage, compared to the predefined and assigned objectives and requirements (irrespective of who plans or defines them: users, sponsors, specialists, vendors or consultants). The drifting phenomenon also captures the sequence of ad hoc adjustments. Drifting can be looked at as the outcome of two intertwined processes. One is given by the openness of the technology, its plasticity in response to the re-inventions carried out by the users and specialists, who gradually learn to discover and exploit features and potentials of groupware. On the other hand, there is the sheer unfolding of the actors' "being-in-the-workflow" and the continuous stream of bricolage and improvisations that "color" the entire system lifecycle.
Marc Berg (1997) describes drifting in terms of actor-network theory. He sees the drifting of actor-networks as a ubiquitous phenomenon. A network drifts by being unconsciously changed as the effect of a conscious change to another network whose elements are also parts of others.
In most discussions about conceptions of and approaches to technological development and change, a key issue is to what extent and how humans can control this process. The concepts of design and construction implicitly assume that the technology is under complete human control. Improvisation also sees humans as being in control, although less so, as the design process cannot be planned. When Orlikowski sees improvisation as "situated change," one might say that "situations" influence the design. However, she does not discuss (or describe) the "nature" of "situations" or whatever might determine what "situations" we will meet in a design or change activity. She notes, however, that "more research is needed to investigate how the nature of the technology used influences the change process and shapes the possibilities for ongoing organizational change" (Orlikowski xx, p. yy), implying that she assumes that the technology may be an influential actor.
The notion of "drifting" implies that there is no conscious human control over the change process at all. Ciborra does not say anything about whether there is any other kind of "control" or whether what happens is completely random. However, he uses the notion of technologies being "out of control," which is often used as a synonym for "autonomous technology," i.e. technological determinism (Winner 1977). Technological determinism is the opposite extreme of engineers' conception of design and engineering, assuming that it is not humans that design the technology, but rather the technology and its inner logic that determine, i.e. "design," its own use - implying, in the end, the whole society, including its own future development. In this extreme position the technology is the only actor.
These two extreme positions are an example of the old dichotomy and discussions within the social sciences between agency and structure. The notion of cultivation is here seen as a middle position where technology is considered shaped by humans although the humans are not in complete control. The technology is also changed by "others," including the technology itself.
Acknowledging the importance of the installed base implies that traditional notions of design have to be rejected. However, denying humans any role at all is at least equally wrong. Cultivation as a middle position captures quite nicely the role of both humans and technology. It is the concept providing us the best basis for developing strategies for infrastructure development. The installed base is a powerful actor. Its future cannot be consciously designed, but designers do have influence - they might cultivate it.
The installed base acts in two ways. It may be considered an actor involved in each single II development activity, but perhaps more important, it plays a crucial role as mediator and coordinator between the independent non-technological actors and development activities.
If humans strive for control, i.e. making our world appropriate for engineering tasks, strategies for cultivating IIs may be considered strategies for fighting against the power of the installed base.
Technological determinism is an extreme position where the installed base is the only actor having power. An actor-network perspective where II development is seen as installed base cultivation is a middle position between technological determinism and social reductionism.
Cultivation might be related to the discussion about the limits of, and relationship between, design and maintenance, and someone might be tempted to see cultivation as just another term for maintenance. This is not the case. First, maintenance is related to design in the way that what has come into being through design is maintained for a while, i.e. until being replaced by a newly designed system. Cultivation replaces design in the sense that there is nothing being designed which later on will be maintained. Technology comes about and is changed by being cultivated. However, it might still be useful to talk about maintenance as minor changes of minor parts of a larger system or infrastructure.
In terms of actor-network theory, a transition strategy for an information infrastructure corresponds to a situation where one well-aligned actor-network is modified into another well-aligned actor-network. Only one (or a few) of the nodes of the actor-network is modified at a time. At each step in a sequence of modifications, the actor-network is well-aligned. In the more conventional vocabulary of the product world, a transition strategy corresponds to "backwards compatibility" (Grindley 1995). Backwards compatibility denotes the case when a new version of a product -- an application, a protocol, a module, a piece of hardware -- functions also in conjunction with older versions of associated products. Intel's microprocessor chips, the 80x86 processor family, are examples of backward compatible products. An Intel 80486 processor may run any program that an 80386 processor may run, and an 80386 processor may run any program an 80286 may run. Microsoft's Word application is another example of a backwards compatible product: newer versions of Word may read files produced by older versions of Word (but not the other way around - there is no forward compatibility).
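The asymmetry between backwards and forwards compatibility can be illustrated with a small, entirely hypothetical file-format sketch (the format, field names and reader functions are invented for illustration): version 2 adds a field, the v2 reader still accepts v1 files, but the v1 reader rejects v2 files.

```python
def read_v1(data: dict) -> dict:
    """The old reader: only understands version 1 files."""
    if data.get("version") != 1:
        raise ValueError("v1 reader cannot read this file")
    return {"title": data["title"]}

def read_v2(data: dict) -> dict:
    """The new reader: understands version 2 and, for backwards
    compatibility, still accepts version 1 files."""
    version = data.get("version")
    if version == 1:                       # backwards-compatible path
        return {"title": data["title"], "author": None}
    if version == 2:
        return {"title": data["title"], "author": data["author"]}
    raise ValueError("unknown version")

old_file = {"version": 1, "title": "report"}
new_file = {"version": 2, "title": "report", "author": "ole"}

print(read_v2(old_file))   # the new reader handles the installed base of old files
```

The cost of the transition falls on the new version, which must carry the compatibility code; the installed base of old files and old readers is left untouched, which is exactly what lets one node of the network be changed at a time.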
To appreciate what changing an information infrastructure is all about, it is necessary to identify underlying assumptions, assumptions which go back to the essentially open character of an information infrastructure outlined earlier in chapter 5. It is illuminating to do so by contrasting it with the way changes are made in another infrastructure technology, namely the telephone system. Despite a number of similarities between a telephone infrastructure and an information infrastructure, the issue of how changes are made underlines vital and easily forgotten differences.
During the early 90s the Norwegian Telecom implemented a dramatic change in their telephone infrastructure: all telephones were assigned a new 8 digit number instead of the old 6 digit one. The unique digit sequence of a telephone is a core element of any telephone infrastructure. It is the key to identifying the geographical and logical location of the phone. Both routing of traffic and billing rely on the unique number sequence. Hence changing from 6 to 8 digits was a major change. The way it was implemented, however, was simple. At a given time and date the old numbers ceased to function and the new ones came into effect. The crucial aspect of this example of a changing infrastructure technology is the strong assumption about the existence of a central authority (the Norwegian Telecom) with absolute powers. No one was allowed to hang on to the old numbers longer than the others; absolutely everyone had to move in concert. This situation might prevail in the world of telecommunication, but it is fundamentally antagonistic to the kind of openness hardwired into the notion of an information infrastructure. Making changes in the style of telecommunication is simply not an option for an information infrastructure. There is simply no way to accomplish abrupt changes to the whole information infrastructure requiring any kind of overall coordination (for instance, so-called flag-days) because it is "too large for any kind of controlled rollout to be successful" (Hinden 1996, p. 63). It is accordingly vital to explore alternatives. Transition strategies are one of the most important of these alternatives. Developing a firmer understanding of exactly how large changes can be made, when they are appropriate, and where and in which sequence they are to be implemented is of vital concern when establishing a National Information Infrastructure (IITA 1995).
Strategies for scaling information infrastructures, which necessarily include strategies for changes more generally, are recognized as central to the NII initiative. In a report by the Information Infrastructure Technology and Application working group, the highest-level NII technical committee, it is pointed out:
We don't know how to approach scaling as a research question, other than to build upon experience with the Internet. However, attention to scaling as a research theme is essential and may help in further clarifying infrastructure needs and priorities (...). It is clear that limited deployment of prototype systems will not suffice (...) (IITA 1995, emphasis added)
Contributing to the problems of making changes to an information infrastructure is the fact that it is not a self-contained artefact. It is a huge, tightly interconnected yet geographically dispersed collection of both technical and non-technical elements. Because the different elements of an information infrastructure are so tightly interconnected, it becomes increasingly difficult to make changes as it expands. The inertia of the installed base increases as the information infrastructure scales, as is the case with Internet: "The fact that the Internet is doubling in size every 11 months means that the cost of transition (...) (in terms of equipment and manpower) is also increasing." (IPDECIDE 1993). But changes, also significant ones, are called for.
The scaling of an information infrastructure is accordingly caught in a dilemma. It is a process where the pressure for making changes which ensure the scaling has to be pragmatically negotiated against the conservative forces of the economical, technical and organisational investments in the existing information infrastructure, the installed base. A feasible way to deal with this is for the information infrastructure to evolve in a small-step, near-continuous fashion respecting the inertia of the installed base (Grindley 1995; Hanseth, Monteiro and Hatling 1996; Neumann and Star 1996; Star and Ruhleder 1996). Between each of these evolutionary steps there has to be a transition strategy, a plan which outlines how to evolve from one stage to another.
In the next chapter, we empirically describe an effort to design according to the principles of cultivating the installed base, namely the revision of the IP protocol in Internet. That chapter fleshes out some of the programmatically stated principles outlined in this chapter.