
Niels Ole Finnemann: Thought, Sign and Machine, Chapter 10 © 1999 by Niels Ole Finnemann.

10. Epilogue

10.1 What is a computer?

The analysis of the computer's symbolic properties given here makes it possible to distinguish this machine from all other machines, whether these be clocks, steam engines, thermostats or automatic calculating machines; from all other symbolic media such as the telephone, the telegraph, the radio, the television and the VCR; and from all other symbolic languages, whether these be written languages and other visual languages, speech and other auditive languages, music, or formal symbolic languages, just as it is also possible to distinguish the symbolic properties of this machine from those of the human mind.

The description hereby fulfils a basic demand which must be made on any description of the computer, as the idea itself of describing the computer assumes that it exists as a distinct phenomenon.

As the computer possesses properties related to those of the machine, of other symbolic media and of other symbolic languages, and can be used to execute a great number of mental processes mechanically, the description of these properties raises a number of questions which are also connected with previous views, not only of the computer, but also of these more or less related phenomena.

This holds true in particular of the understanding of the relationship between the mechanical and the symbolic, the relationship between the symbolic expression and the content and the relationship between the rule and its execution.

It is not my purpose to provide a complete answer to these problems, though it has not been possible to ignore them either. The conclusions of the book are therefore divided into two parts: this section summarizes the analysis of the computer, while the following sections give a short account of the theoretical and cultural perspectives.

The most obvious place to start a description of the computer's symbolic properties is in relationship to the automatic calculating machine. There are historical reasons to do so, as the computer was a product of attempts to build a calculating machine which could execute any calculable operation mechanically. The comparison is even more informative, however, because it leads directly to the basic principles which give the computer its unique characteristics.

By what could resemble a historical accident, Alan Turing presented the first theoretical description of the principles of the modern computer almost at the same time as the German engineer, Konrad Zuse, built the hitherto most perfect automatic calculating machine. While Zuse's machine, however, used a mechanical calculator and thereby assumed that the rules of calculation were incorporated into the machine's physical architecture, Turing's theoretical analysis showed that a universal calculating machine assumed that the rules of calculation were not incorporated into the invariant physical architecture. Where Zuse's machine could and should only be fed with the data for calculation, the Turing machine could and should also be fed with data which could produce the rules of calculation that were to be executed.

There is a world of difference between these two construction principles because the demand that the machine must be fed with data which produce the rules of calculation means that the rules must not only be specified, they must also be expressed in the same notation units as the data for calculation.
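
This difference can be illustrated with a minimal sketch in Python - a hypothetical toy stored-program machine, not Turing's own 1936 construction - in which the rules of calculation and the data for calculation occupy the same memory and are expressed in the same notation units (here: small integers), so that the rules arrive in the machine as data.

```python
# A toy stored-program machine: rules (instructions) and data for
# calculation share one memory and one notation (small integers).
# Illustrative sketch only, not Turing's 1936 construction.

def run(memory, pc=0):
    """Execute instructions encoded as integers in `memory`.
    Instruction format: [opcode, operand1, operand2, target].
    0 = HALT, 1 = ADD (mem[target] = mem[op1] + mem[op2]),
    2 = COPY (mem[target] = mem[op1])."""
    while True:
        opcode = memory[pc]
        if opcode == 0:                      # HALT
            return memory
        a, b, t = memory[pc + 1:pc + 4]
        if opcode == 1:                      # ADD
            memory[t] = memory[a] + memory[b]
        elif opcode == 2:                    # COPY
            memory[t] = memory[a]
        pc += 4                              # step to the next instruction

# Cells 0-7 hold the rules, cells 8-10 hold the data for calculation.
memory = [1, 8, 9, 10,   # ADD: mem[10] = mem[8] + mem[9]
          0, 0, 0, 0,    # HALT (padded to fixed instruction width)
          2, 3, 0]       # data: 2 and 3; result cell
print(run(memory)[10])   # -> 5
```

Because the rules are manifested in the same notation units as the data, the program itself can be fed into the machine, stored, inspected and rewritten like any other data - which is the point of the construction principle described above.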

As a consequence of this, the Turing machine cannot operate with formal notation systems because formal notation contains no explicit description of the rules which the notation refers to and does not permit rules and data to be expressed in the same notation units.

The epoch-making leap forward from the automatic calculating machine to the universal symbol handler was thus brought about in and through the development of a new notation system. This event occurred, by and large, in a couple of pages of Turing's article On Computable Numbers, where he converted the formal expression to the notation form necessary for mechanical execution.

Turing himself saw this conversion as an operation which was necessary from a purely technical point of view, as the new notation could be read as completely defined by the original formal expression.

It was nevertheless a question of a new notation system with a number of new properties. Those features which make the Turing machine a universal calculating machine also make the machine a universal symbol handler, as the new notation can contain not only formal symbolic procedures, but any symbolic expression which can be formulated in a discrete notation system with a finite number of previously defined notation units. The demand on this definition is primarily a demand that the notation must have a physical form capable of producing a simple mechanical effect.

The demands made on this notation can be summarized in three points, which also express the necessary and sufficient condition allowing both symbolic and non-symbolic processes to be represented or simulated in a computer:

In addition to this - as a kind of negative condition - comes a fourth condition: the demand that there be a purpose which is not represented in the system.

This condition stems from the demand on the physically defined notation system. As the notation is solely defined on the basis of physical (mechanically active) values, it can also be manifested as a purely physical form which activates the same mechanical effects in the system without being intended. In other words, the machine cannot decide whether a given physical value is simply a physical value which is produced as a noise effect, or whether it is the physical expression of an intended notation unit. Any definition of notation systems thus contains an intentional element, but this element cannot be implemented in a mechanical machine. The problem can be solved in practice by using control codes whereby each signal's validity as a notation is determined by the surrounding signals.
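
One simple way to illustrate this practical solution is a hypothetical framing scheme (the format below is invented for illustration, not taken from any particular machine): a raw physical value only counts as a valid notation unit when the surrounding signals - a start marker, a declared length and a checksum - confirm it, and is otherwise treated as noise.

```python
# Illustrative sketch of a control-code scheme: a value only counts as
# valid notation if the surrounding signals (frame marker, length and
# checksum) confirm it; otherwise it is treated as noise. Hypothetical format.

START = 0x7E  # frame start marker

def decode(signal):
    """Return the payload of a frame [START, length, payload..., checksum],
    or None if the surrounding signals do not validate it as notation."""
    if len(signal) < 3 or signal[0] != START:
        return None                      # no frame: treat as noise
    length = signal[1]
    if len(signal) < 2 + length + 1:
        return None                      # truncated: treat as noise
    payload = signal[2:2 + length]
    checksum = signal[2 + length]
    if sum(payload) % 256 != checksum:
        return None                      # corrupted: treat as noise
    return payload

print(decode([0x7E, 2, 65, 66, (65 + 66) % 256]))  # [65, 66] - valid notation
print(decode([65, 66, 67]))                        # None - bare values read as noise
```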

With this description of the notation system it is possible to provide an initial, elementary description of the computer, partly as distinct from other machines and partly as distinct from other symbolic media, as:

This simultaneous dissolution of and connection between the mechanical and the symbolic procedures represents an innovation both in the history of mechanical and symbolic theory and in the history of machine technology and symbolic media.

Now, the use of informational notation is determined by the algorithmic linking of shorter or longer sequences of notation units, and it might therefore be asked whether the notation system's multisemantic openness is limited by the algorithmic condition. A closer look at the algorithmic procedure, however, shows that this is not the case: first, because the algorithmic structure itself has polysemic properties; second, because when the algorithm is implemented in a computer it is represented in a notation system which permits an arbitrary modification or suspension of the algorithmic structure, and it is this which creates the basis for the machine's multisemantic potential.

The first argument can be expressed in the following points:

The algorithmic expression can be described on this basis as a deterministic, syntactic structure with polysemic potential. In linguistic terms it could be said that the algorithmic procedure represents an empty expression system, a syntactic structure, which is emancipated from the content form. This emancipation is only relative, however, because the algorithmic expression is produced through a linguistically articulated definition of premises, just as the interpretation of the procedure and its result depend upon the re-establishment of a sign function which links the expression form with a content form.

The features of the algorithmic procedure to which attention is drawn here are perhaps not the most important features when we work with algorithms ourselves, but they are central when it comes to understanding what happens when the algorithmic procedure is converted to a mechanically executable form, because this conversion takes its point of departure solely in the algorithmic expression form. This conversion also implies a dissolution of semantic determination, and the result can be summarized in the following points:

While the automatically executed procedure can be described as an intervention in which the semantic intervention plane is maintained over a sequence, for each new intervention we can choose to vary the intervention plane "up and down", or between different semantic regimes, whether these be formal regimes which can be mechanically executed or informal regimes, where it is the user who effectuates the semantic regime through his choice of input.

If this description of the symbolic properties of the algorithmic procedure is correct, we can draw the conclusion that the algorithmic procedure does not place any limitation on utilizing the multisemantic potential which is contained in the informational notation system.

While there are still sharp restrictions regarding which rules can be executed mechanically, there is only a single restriction regarding which symbolic and non-symbolic expressions can be represented and handled in a computer. With respect to the latter, this restriction is constituted solely by the demand that it must be possible to express the given content in a finite notation system with a finite number of empty notation units. With respect to the former, the question as to which rule systems can be implemented in a computer, it is still the case that the rule system must be characterized by well-defined start and stop conditions, that several rules cannot be used simultaneously, that there must be no unclarified overlapping between the extent of different rules (no over-determination, such as in common languages), that there must be no part of the total expression which is not subject to a given rule (no under-determination) and that the rules (or rules for creating new rules) must be declared in advance.
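
A minimal sketch of a rule system meeting these demands (an illustrative finite-state recognizer, not an example drawn from the book): every rule is declared in advance, exactly one rule applies to each combination of state and symbol (no over- or under-determination), rules are applied one at a time, and the start and stop conditions are explicit.

```python
# A minimal rule system meeting the demands listed above: all rules are
# declared in advance, exactly one rule applies to each (state, symbol)
# pair, and the start and stop conditions are explicit. Illustrative only.

RULES = {                      # deterministic transition table over {'0', '1'}
    ('even', '0'): 'even',
    ('even', '1'): 'odd',
    ('odd', '0'): 'odd',
    ('odd', '1'): 'even',
}
START = 'even'                 # well-defined start condition
ACCEPT = {'even'}              # well-defined stop condition

def has_even_parity(bits):
    """Mechanically apply the declared rules to a string of '0'/'1'."""
    state = START
    for symbol in bits:
        state = RULES[(state, symbol)]   # one and only one rule applies
    return state in ACCEPT

print(has_even_parity('1001'))  # True: an even number of 1s
print(has_even_parity('1101'))  # False: three 1s
```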

As Turing showed, these demands can be fulfilled for all formal procedures which can be executed through a finite number of steps. Whereas there has since been an explosive development in the number of procedures which fulfil these demands, no rule system has hitherto emerged which fulfils both these demands and at the same time completely covers the description of a specific subject area, except that of abstract, formal systems. The explanation for this is to be found in the circumstance that we are not capable of fulfilling the demand for a precise definition of start and stop conditions in the description of non-symbolic relationships and are only able to fulfil this demand for a very limited set of artefacts produced by humans, including theoretically delimited, finite physical or logical "spaces".

As the computer is a symbolic machine, a semantic dimension is included in all uses and as it can be subjected to a plurality of semantic regimes, it is consequently described as a multisemantic machine.

By a semantic regime we understand that set of codes we use to produce and read a symbolic expression, whether we are capable of formulating these codes in a complete or incomplete form or not. In this terminology, written and spoken languages comprise two semantic regimes, which again distinguish themselves from formal regimes because they are based on different codes. In addition, there are a number of other semantic regimes, some of which are pictorial, others auditive. The concept is used both of symbolic expressions which are available as distinct notation units and of symbolic expressions (such as pictures) which are not - or need not necessarily be.

It follows from this that the different semantic regimes need not necessarily build upon one and the same sign function, and a description is therefore given of the way in which the relationship between the expression form and the content form is formed in different symbolic languages, with the emphasis placed on the function of the notation forms.

The general results of the comparative analysis can be summarized in the following points:

Although this description is not exhaustive with regard to each symbolic language, it is sufficient to show that they use different expression forms and substances and that these differences provide a basis for the use of different reading codes. The comparative analysis thereby also makes it possible to amplify and go into greater detail in the description of the special relationship between the expression form and the reading code which characterizes informational notation.

In formal and common languages the definition of the semantic component of the notations is thus closely connected with the given, superior semantic regime. In these cases there is a fixed bond which connects a given expression form with (a set of) reading codes. While such a bond appears to be a precondition for the use of other notation systems, it is not a precondition for the use of informational notation, as the semantic component, which is included in the definition of the notation unit, is defined through a formal semantic which is independent of the superior semantic regime. The background for this difference lies in the circumstance that informational notation is not directly defined relative to human sense and meaning recognition, but on the contrary, relative to the demand for mechanical effectiveness, which implies that the semantic component must always be manifested in a physical expression.

This is thus a question of a difference which justifies speaking of a symbol system of a new type. The absence of the fixed bond between the expression form and the reading code gives this symbol system a central property, as the absence is a precondition for the fact that we can represent all these other symbolic expressions in the informational notation system. In other words, it is the precondition for the multisemantic properties of the machine.

By multisemantic properties, the three following circumstances should be understood:

With this description it now becomes possible to add yet another criterion both to the distinction between a computer and other machines and to the distinction between the computer and other symbolic expression media.

While other machines can be described as mono-semantic machines, in which a given, invariant rule set establishing the machine's mode of operation has been implemented in the machine's physical architecture, the computer is a multisemantic machine based on an informational architecture which is established by the materials the machine processes.

While other symbolic expression forms can be described as mono-semantic regimes with rule sets which connect the semantic regime with notation and syntax, the computer is a multisemantic symbolic medium in which it is possible to simulate both formal and informal symbolic languages as well as non-symbolic processes, just as this simulation can be carried out through formal and informal semantic regimes.

Together, these two delimitations contain a third, important criterion for the definition of the computer, as a computer can be defined as a medium in which there is no invariant threshold between the information which is implemented in the machine's architecture and the information which is processed by that architecture.

On the basis of this analysis of the properties of the computer it is possible to draw the conclusion that the computer, seen as a medium for the representation of knowledge, not only has the same general properties as written language, but also properties which create a new historical yardstick both for the concept of a mechanical machine and for the symbolic representation of knowledge.

Although this thesis hereby follows the research traditions which hold that it is possible to provide an unambiguous answer to the question as to whether the computer sets new historical standards, the interpretation given here deviates in its understanding of both the computer's mechanical properties and its symbolic properties. It is therefore reasonable to round off this section by characterizing and motivating this deviation.

When the computer is considered in continuation of the history of the mechanical technologies, the discussion has particularly centred on the extent to which and how this machine contributes to the transition from an industrial society to an information society.

Within this descriptive framework, the computer is seen as a technology which makes it possible to reduce the industrial production sector and control the industrial functions through information processes. It seems, however, to lead to the paradox of controlling industry by industrial means of control.[1] It could, therefore, be claimed with equal justification, that this is also a question of a machine which can contribute to an expansion of industrialization, as it permits both a) mechanization of control functions which were formerly handled (or not handled) with other means; this holds true of many administrative functions, for example, b) the use of mechanical methods in new areas, for example in biology and psychology, but also in handling purely physical material and c) the use of mechanical registration and processing of data in connection with phenomena not accessible to the senses (including macrocosmic, micro-physical and molecular-biological phenomena). Whether the historical result can actually best be described as a transition from an industrial to an informational social paradigm, or as a qualitative renewal and extension of the industrial paradigm can hardly be considered as decided.[2]

We can, however, establish that the mechanical procedure can now be dissolved (or subdivided) into "atomistic" components and manipulated and organized as sequences of individual steps. In this perspective, the question is one of an extension of the mechanical handling potential through an analytical dissolution of the mechanical procedure and thereby the operative intervention plane.

This new handling potential not only permits a much greater differentiation between various kinds of industrial use, but also provides the possibility of choosing other uses which fall outside both old as well as renewed mechanical-industrial paradigms. The computer can be used as an industrialization machine, but it can also, as such, be used in several ways, although even together these do not constitute the only possibility. It offers a choice (or a combination of several choices) which, in the social scale, have the same multisemantic dimensions as the machine itself.

The concept of the computer, on which the idea of a transition from industrial society to information society is based, is highly debatable, but the description given here also gives occasion to consider whether the industrial society and the mechanical-industrial paradigms are the right parameter for a description of the properties of the new technology and its implications.

There is one circumstance in particular which gives occasion to raise this problem, namely that with the computer we have obtained a symbol-controlled, mechanical machine in which we can represent all the forms of knowledge which were developed in the industrial society in one and the same symbolic system, where in the industrial society we represented different forms of knowledge in different symbolic expression systems. This means that the computer possesses a set of properties which make it a new, general medium for the representation of knowledge.

Although as yet we can only have vague ideas of what this implies, it is certain that this technology will bring about a change in the possibilities we have for producing, processing, storing, reproducing and distributing knowledge. In other words, this is a question of a change at the level of knowledge technology, which forms an infrastructural basis of the industrial society.

Although the industrial societies have produced a great number of new, largely electrical and electronic symbolic media - including the telephone, the telegraph, the radio, the magnetic tape, the television and the VCR - writing and the printed book have maintained their position as the most important knowledge media with regard to the functioning of society. The computer, however, shakes this knowledge technological foundation.

It is therefore also reasonable to assert that it is writing and the printed book - and not industrial mechanics, the calculating machine or the former use of mechanical energy systems for symbolic purposes - which are the most important parameters for comparison. This implies that the horizon within which we relate to the cultural implications of the computer cannot be less broad than the horizon delimited by the role of writing and the printed book as media for knowledge in modern Euro-American history from the Renaissance up until today. The postulate is not that we can take in this field at a glance; it is simply that the computer revolution has a range which will affect all the themes inherent in the history of modernity since the Renaissance - in other words, an extremely comprehensive and in many respects probably new history of modernization, for the present only vaguely outlined.

Like other views, the view of the computer presented in the preceding pages naturally cannot be used to predict the future. This is particularly so because, according to this view, it is a technology which offers many possible choices and variations with very few invariant features. The development of this so-called prediction machine has itself hitherto confounded all predictions. Regarding factors such as speed and capacity, all predictions have been overtaken by reality; the same goes for the differentiation of potential use, whereas the introduction of this technology has often produced results completely different from the greater efficiency, breadth of perspective and control that were expected. Whereas 20-30 years ago in Denmark it was expected that very few mainframe machines would be sufficient to cover Danish society's need for calculating power - and nobody imagined that the machine would be used for very much else - today there is still a need which has not been catered for, in spite of an enormously expanded calculating capacity. Where, only ten years ago, these machines could be marketed in the name of the "paper-less" society, they have instead created even higher stacks of paper.

The fact that the predictions which describe the meaning of the computer as a means for achieving some definite purpose have often been completely wrong can to a great degree be explained on the basis of the machine's multisemantic properties, as these imply that the machine is not determined by or bound to the purposes which are implemented in the same way as other machines. In respect of this point too, it is more relevant to compare the computer with other knowledge media, as such a comparison reveals that it is not possible to draw direct conclusions from a description of the medium to the content of that which is expressed in the medium. The individual book does not decrease the need for new books; it increases it.

Like the book, the computer is a medium of knowledge, and both provide a set of - mutually different - conditions for the articulation of knowledge with regard to form. While the medium's form is thus probably part of the message, the content of the individual book, its effect or significance, cannot be predicted on the basis of this form.

The comparison with other knowledge media, however, shows not only the dubious aspect of a certain type of prediction, it also contains a point of departure for another type, as the description of the computer as a knowledge medium also indicates the cultural plane, that sphere in society which is undergoing change, notwithstanding the way in which the medium is used.

It is also possible, on the basis of the description of the machine presented in the preceding, to suggest some of the structural features which characterize this new knowledge medium.

The first link in this sketch concerns the structural changes in the organization of knowledge as a whole, while the second concerns the changes which come into play at each link in the chain.

Where structural changes are concerned, at least three main points can be indicated, as the computer:

In itself, the integration of all these functions, which were formerly distributed between different media and functions, is epoch-making, but in addition to this comes the fact that the computer's properties also change the conditions and possibilities in each of these individual areas. Although these cannot be described under the same heading, they have, however, a common background in the general properties of the machine. It is possible here to point out three important aspects which will be of significance in all areas:

10.2 A new technology for textual representation.

Although the symbolic properties of the computer go far beyond the capacities of any previously known means of representation, there are two basic limitations.

First, that any representation in computers is conditioned by a series of sequentially processed notational units. No matter what the specific function or semantic format used, and no matter what the specific purpose, any use of computers is conditioned by a representation in a new type of alphabet, implying that the content is manifested in an invisible, textual form, which can be edited at the level of this alphabet.
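
This can be illustrated with a minimal sketch (in Python, purely for illustration): whatever the perceptible format - a word, a row of pixels - the stored content is a sequence of notation units in the same alphabet and can be edited at that level.

```python
# Whatever the perceptible format - text, image, sound - the stored
# content is a sequence of notation units (here: bytes) and can be
# edited at the level of this alphabet. Illustrative sketch.

text = "computer".encode("utf-8")        # a word as a byte sequence
pixels = bytes([0, 64, 128, 255])        # four grey-scale pixel values

print(list(text))                        # [99, 111, 109, 112, 117, 116, 101, 114]
print(list(pixels))                      # [0, 64, 128, 255]

# Editing at the level of the alphabet: change one notation unit.
edited = bytearray(text)
edited[0] = ord("C")                     # 99 -> 67
print(edited.decode("utf-8"))            # "Computer"
```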

Second, that the global reach is conditioned and limited by the actual presence of and access to the machinery.

Taken together, these limits delineate a system for knowledge representation which is most properly conceived of as a new global archive of knowledge in which anything represented is manifested and processed sequentially as a permanently editable text.

Hence, the computer is basically a technology for textual representation, but as such it changes the structures and principles of textual representation as known from written and printed texts, whether they belong to common or formal languages.

The character of this structural change, however, goes far beyond the internal structure of textual representations because - due to the integration of linguistic, formal, visual and auditive formats of knowledge - it widens the range and logic of textual representation and - due to the integration of globally distributed archives in one system - widens the social and cultural reach of any kind of textual representation.

We can therefore say that, as an agent of change, the computer provides a new textual infrastructure for the social organization of knowledge.

The basic principle in this change is inherent in the structural relation between the hidden text and its visible representation. While the informational notation shares linear sequencing with other kinds of textual representation, it is always randomly accessible as a synchronic manifestation from which a plenitude of "hypertexts" can be derived independently of previous sequential constraints. What is at stake here, however, is not a change from seriality to non-seriality, but a change in which any sequential constraint can be overcome by the help of other sequences, as anything represented in the computer is represented in a serially processed substructure.
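
The point can be sketched in a few lines (an illustrative toy example, not a description of any actual hypertext system): one serially stored sequence of units is randomly accessible, and several different reading sequences can be derived from it without altering the stored order.

```python
# One serially stored sequence of units, randomly accessible, from which
# several different reading sequences ("hypertexts") can be derived.
# Illustrative toy example.

segments = ["intro", "method", "example", "discussion", "notes"]

# The stored order is one linear sequence ...
linear_reading = [segments[i] for i in range(len(segments))]

# ... but random access permits other reading paths over the same store.
skim_path = [segments[i] for i in (0, 2, 4)]           # intro, example, notes
reverse_path = [segments[i] for i in (4, 3, 2, 1, 0)]  # reversed order

print(linear_reading)
print(skim_path)
print(reverse_path)
```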

One of the significant implications is that sequences defined by a sender can be separated from - and rearranged and reinterpreted together with - sequences defined by any receiver, while the position of the receiver is in the same act changed to a more active role as "writer", "co-writer" or simply as user. Hence, interactivity becomes a property inherent in the serial substructure and available as an optional choice for the user, limited only by his or her skills and intentions.

Seriality persists, even in the case of non-serial expressions such as photographs and paintings, since non-serial representation is only the result of an iteration of a selected set of serially processed sequences. The same is true of the representation of any stable expression, whether of a certain state or of a dynamically processed repetitive structure and even in those cases where one or another binary sequence is made perceptible for editing as a first order representation.

As an interplay between the textual substructure and any superstructure (whether textual or not) is indispensable in any computer process, this is the core of the structural change in the principles of textual representation.

10.3 Computerization of visual representation as a triumph of modern textual culture.

The inclusion of pictorial representation seems to be one of the most significant indicators of the new range and logic of textual representation, as now, for the first time in history, we have an alphabet in which any picture can be represented as a sequential text.

Textual representation is a feature common to all computer-based pictures, and defines their specificity by contrast with other pictures. Since any picture in a computer has to be processed in the identical - binary - alphabet, it follows that any picture can be edited at this level, implying that any computer-based picture can be transformed into any other picture in this alphabet. Morphing may perhaps in many cases be only a curiosity, but the basic principle that any computerized picture is always the result of an editable textualized process performed in time is far from a curiosity since it changes the very notion of a picture as a synchronously and not serially manifested whole.
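
The claim that any computer-based picture can be transformed into any other at this level can be sketched as follows (a toy example with tiny grey-scale "pictures", not a description of any actual morphing software): since both pictures are sequences in the same binary alphabet, a stepwise rewriting of one sequence into the other produces the intermediate pictures of a morph.

```python
# Two tiny grey-scale "pictures" stored in the same binary alphabet can be
# transformed into one another by editing at the level of that alphabet.
# Toy sketch of the principle behind morphing, not real imaging code.

picture_a = [  0,   0, 255, 255]   # a 2x2 image, flattened: half black, half white
picture_b = [255, 128,  64,   0]   # another 2x2 image in the same alphabet

def blend(a, b, t):
    """Intermediate picture between a and b at step t in [0, 1]."""
    return [round((1 - t) * x + t * y) for x, y in zip(a, b)]

for t in (0.0, 0.5, 1.0):
    print(t, blend(picture_a, picture_b, t))
# 0.0 -> picture_a, 1.0 -> picture_b, 0.5 -> an in-between picture
```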

Seriality and time are not only introduced into the notion of pictures as an invisible background condition, they are also introduced at the semantic and perceptible levels, since the textualized basis allows the representation of - editable - time to be introduced at both these levels. While the synchronously manifested whole is an axiomatic property of a painting or a photograph - even though they are produced serially in time - the same property in the computer has to be specified and declared as a variable at the same level as any other feature, whether it belongs to the motif, to the compositional structure or to the relation between foreground and background. Variability and invariance become free and equal options on the same scale, applicable to any pictorial element, which implies that there is no element of the picture whatsoever which is not optionally defined and permanently editable.

There is of course a price to be paid for this new triumph of textualization, as the textual representation presupposes a coding of the picture into an alphabet. The basic principle in this coding is the substitution of physically defined notational units for physical substance, implying a definition of a fixed set of legitimate physical differences (i.e. differences in colours) which are allowed to be taken into account. Since we cannot go back to the original if we only have a digitized version, the coding is irreversible, and the possible secondary codings and transformations will therefore always be constrained by the primary coding.
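
This primary coding can be sketched with a few hypothetical values (illustrative only): a fixed set of legitimate colour levels is imposed, every measured value is mapped to the nearest level, and distinct originals can collapse onto the same coded value, which is why the coding cannot be reversed.

```python
# Primary coding: only a fixed set of "legitimate" colour levels is kept.
# Each value is mapped to the nearest level, so the coding is irreversible:
# distinct originals can map to the same coded value. Illustrative sketch.

LEVELS = [0, 85, 170, 255]               # the legitimate differences kept

def code(value):
    """Map a measured colour value to the nearest legitimate level."""
    return min(LEVELS, key=lambda level: abs(level - value))

originals = [12, 40, 43, 200]
coded = [code(v) for v in originals]
print(coded)   # [0, 0, 85, 170] - 12 and 40 both become 0: their difference is lost
```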

The relevance and weight of this constraint is itself a variable which has to be taken into account in the use of computer-based representations, but in general there are two main aspects.

First, that some of the substance qualities of the original will always be missing since there is a change of expression substance. There will therefore always be some doubt about the validity of the reference to the original. This is obviously a serious constraint on the scholarly study of art.

Second, that the definition of a fixed set of legitimate physical differences at the time of the original coding may later prove to be misleading, in that physical differences which are not taken into account may be of significance. Since the computer-based picture is conditioned by an invariant distinction between differences in noise and information in the substance, there may be cases - in medical diagnostics, for instance - in which a reinterpretation of this distinction is needed.

The constraint here is directly related to the logical interrelation between noise and information, which implies that information can only be defined by the delimitation and treatment of potential information as noise, since information is always manifested in one or another kind of substance.

While missing information concerning some qualities of substance cannot be completely avoided, computerization at the same time allows a broad repertoire of possible enrichments concerning global accessibility, as well as analytical and interpretational procedures.

Since the constraints on informational representation are basically those of notation and process time, it is not possible to define any other invariant semantic or syntactic limitations to these enrichments. That this is itself a significant property can be seen by comparison with previously known pictorial representations for which there exists one kind or another of textual representation, such as those described in Euclidean geometry, by the analytical geometry of Descartes, or in the various other forms of syntactically defined pictures, whether based on a well-defined perspective or on a well-defined iconic or diagrammatic system.

The basic and general change in representational form relative to any of these representations can be described as a transition from representation at a syntactic level to representation at the level of letters (those of the new alphabet). The textual representation of geometrical figures defines a naked syntactic structure, whether two-dimensional or three-dimensional, without regard to substance qualities such as colours, while any syntactic structure in a computer-based representation of a picture can be dissolved into a series of notation units, including the representation of some kind of substance. Although this is a change from a higher to a lower level of stable organization, it is for the same reason a change from a more restricted to a more elaborate set of variation potentialities, in which the higher-level structures become accessible to manipulation at the lower level. In the first case the picture is defined by a stable syntactic structure, to which certain rules for variation can be added, while in the latter, stability is defined solely at the level of notational representation, to which it is possible to ascribe a plenitude of - editable - syntactic and compositional structures, as well as to integrate representations (only partially, however) of substance qualities such as colours and backgrounds at the same textual level. Form, structure and rule become editable on the same scale as substance. The representation of substance is necessary, but it need not be a simulation of the substance of the original; the representation of an arbitrarily defined and itself editable background on the screen will suffice.
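
The contrast can be sketched in a few lines (illustrative only, with an invented toy figure): a circle given as a syntactic structure - centre and radius - versus the same circle dissolved into a grid of notation units, where every unit, and hence every higher-level structure, is individually editable.

```python
# A circle as a syntactic structure (centre, radius) versus the same circle
# dissolved into notation units (a pixel grid), where every unit - and thus
# every higher-level structure - is individually editable. Illustrative sketch.

# Syntactic level: three numbers define the whole figure.
cx, cy, r = 4, 4, 3

# Notational level: the figure rasterized into a grid of 0/1 units.
SIZE = 9
grid = [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else 0
         for x in range(SIZE)]
        for y in range(SIZE)]

grid[4][0] = 1   # edit a single notation unit: the "circle" syntax no longer holds
for row in grid:
    print("".join("#" if cell else "." for cell in row))
```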

Moreover, informational notation is a common denominator in which some substance qualities, the syntax as well as the motif, are manifested on a par with each other. As any sequence representing one or another element of a picture can be selected and related to other sequences in various ways, and possibly ascribed various functions as well (e.g. adding a referential function, itself editable, to other sequences), it follows that any fragment of a picture, or a picture as a whole, can be integrated into a still increasing - or decreasing - syntactic and semantic hierarchy completely independently of the original form and source. The insecurity in the referential relation to the original is thus complementary to the enrichment of possible hierarchies and frames of reference.

Perspective becomes optional and variable and so do other kinds of representational structures such as representation based on the size and positioning of motifs and the choice of colours in accordance with semantic importance, as was often used during the Middle Ages. The resurrection of - or a return to - the Middle Ages, however, is not on the agenda of computerization, since no single, non-optional hierarchy of values can be established.

When seen from the cognitive point of view this is a radical extension of the ways in which cognitive content can be manifested in pictorial representations, whether in iconic, diagrammatic or geometrical form. When seen from the pictorial point of view, it is a radical extension of the ways in which the representation of both physical objects and pictures can be made subject to cognitive treatment.

Much of this is a result of the fact that the computer-based representation of stable structures has to be "played" in time, but since time has already been represented in the film and on the television screen, the proposition must be qualified accordingly.

In the case of film-making the basic difference is that the definition or selection of perspective is constrained by the optical artefacts used - the lenses of the camera - while the definition of perspective in the computer has to be defined as a - still editable - part of the same text as the motif, which implies that the very division between the optical constraints and the motif becomes editable. With regard to freedom of choice, the computerized picture thus more closely resembles the animated cartoon than the film.

In the case of television the difference is primarily the result of the notational definition of the signals, as the stable picture on the TV screen is only the - perceptible - result of serial processes. As will be familiar, a basic constraint on real-time digital television is the enormous number of binary letters needed to represent what was formerly an analogue signal. This is a constraint, however, which at the same time transgresses a series of other constraints which characterize the old-fashioned television of the 20th century. The most far-reaching of these is probably the possible breakdown of one-way transmission and communication. Since a receiving computer can also be a sender, the receiver can also become the editor of the editors, able to decide what and when he will receive from whom. And since the computer is not only a medium for communication but also for storage in a completely editable form, the new medium transgresses the documentation monopoly of senders too.
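
The order of magnitude can be illustrated with a rough calculation (the figures below - standard-definition resolution, 24 bits per pixel, 25 frames per second - are assumed for illustration only):

```python
# Rough, illustrative calculation of the number of binary letters needed
# for uncompressed standard-definition digital video (assumed figures).

width, height = 720, 576        # assumed pixels per frame
bits_per_pixel = 24             # assumed 8 bits for each of three colour channels
frames_per_second = 25

bits_per_second = width * height * bits_per_pixel * frames_per_second
print(f"{bits_per_second / 1e6:.0f} megabits per second")  # about 249 Mbit/s
```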

If, as has often been argued in media studies, other modern electronic media contribute to a revitalization of visual and oral culture - although in a mediated, secondary form, as claimed by Walter J. Ong - at the expense of the hegemonic regime of the "typographic culture", as claimed by Marshall McLuhan, the computer can more properly be understood as a medium by which the reach of modern discursive culture is extended to embrace visuality and pictorial expressions through the textualization of electronics, which at the same time allows the representation of other media as genres within this medium.

It is not only the picture or any other visual object which can now be embraced by a text. Just as the author of a discursive text is able to represent himself in the text, so the observer or spectator - given the appropriate paraphernalia of "virtual reality" - is now able to represent himself as an interacting part of any picture. In both cases, however, only as a fragmentary representation.

Under any circumstances, computerization implies that some physical and organizational constraints and invariants (whether substantial, structural or conventional) are converted to text and hence become optional variables.

10.4 One world, one archive.

That the computer - due to the properties described - has the potential to become a new general and globally distributed network medium for representing knowledge does not necessarily imply that it will actually do so.

There are, however, strong indications that it will.

First of all it seems beyond reasonable doubt that the use of computers will spread almost everywhere, whether this is rational or not, due to a widespread, powerful human fascination. The spread of computers into a still growing number of fields - and throughout the world - indicates that a profound change in the basic infrastructural level of all societies has already begun.

Although we are not able to predict what will happen in the future, there are very few reasons to believe that this process can be stopped and the only argument which should not be marginalized seems to be the risk of a breakdown due to inadequate electricity supplies.

Computerization in general need not be argued for, and arguments given in the past have often turned out to be wrong, or have had no particular impact.

If we are only able to guess at what may happen anyway, we might ask why we should bother about this matter at all. In this connection I should therefore like to mention two arguments which could indicate that the process of computerization carries a high degree of social and cultural necessity.

The first argument is closely related to changes in the global reach of modernity. While the global perspective - inherent both in the claim of universality for human rights and western rationality in general, as well as in the process of colonization - is as old as modernity itself, most decisions in modern societies have until recently depended mainly on knowledge based on a more limited, locally restricted scale. Today, however, a rapidly increasing number of local decisions on local issues depend on knowledge based on global considerations. This is true of economic, political and military information, and especially of ecological information; in consequence, there is also a need for a global scale for cultural issues. While some might argue that it would be better to attempt to re-establish a local economy and local political and military government, there no longer appears to be any room left for the idea of a locally restricted ecology.

Given that an increasing number of local decisions concerning ecological issues need to be based on a corpus of knowledge of global dimensions, there is no real alternative to the computer.

While this is an argument concerning the natural conditions for cultural survival, the second argument comes from within culture and is a consequence of the exponential growth in the production of knowledge anticipated by J. D. Bernal in the 1930s and Vannevar Bush in the 1940s and later described in the steadily growing number of books, papers and articles which have appeared since the pioneering work of Derek de Solla Price, among others,[3] in the early 1960s. Whether measured in the number of universities, academic journals, published articles, or the number of scientists and scholars in the world, or the number of reports prepared for politicians for making decisions etc., the overall tendency is the same. Limits to the growth of knowledge production are in sight - whether seen from an economic or organizational point of view, or as a general perspective on a chaotic system in which nobody can keep abreast of what is known even within his or her own specialized field.

Basic structural changes are inevitable, whether in the form of a cultural collapse or a cultural reorganization. The computer is obviously not the solution to the handling and reorganization of this exponential growth, but is an inevitable part of any viable solution, since any cultural reorganization must include a repertoire of remedies for storing, editing, compressing, searching, retrieving, communicating etc., which can only be provided by computers.

The computer may widen some cultural gaps, but if it were not used there might be no cultural gaps to bridge, since there might be no culture.

10.5 Modernity modernized.

It should be evident from this that the computer is in the process of becoming a new platform for the social organization of knowledge and communication based on textual representation. While the very idea of a universal computer was the outcome of a short-circuiting of the modern dualism between mind and mechanical nature and hence represented a rupture in the principles of discursive representation in modernity, it became at the same time a means with which to expand modern discursive representation, but in a new form as a hidden second-order representation beneath perceptible first-order representations.

Although it may seem odd from previous modern viewpoints, it is a change which is in complete accordance with one of the most stable and persistent principles of modernity, i.e. that of placing former axioms on the agenda as objects for investigation, description and thereby textual representation.[4]

The principle of transgressing a former conceptual framework by placing the axioms on the agenda can be found at work throughout the history of modernity, but although it is a general principle, the outcome naturally depends on the conceptual structure of the specific axioms to which the principle is applied. For this reason the same principle may cause different effects, which implies that modernity can only exist as a history of permanent self-transgression. In consequence, a conceptual rupture related to the transgression of axioms becomes a basic principle of continuity in modern culture. If this is the case, modernity cannot exist without a history in which progressive expansion is based on theoretical regression, i.e. the theoretical undermining of previous theories.

There would be no modern history, however, if continuity were only represented in the form of conceptual ruptures. On the contrary, they can only exist in the distinct modern form as conceptual ruptures at the level of axiomatics because they are always manifested in and bound to discursive textualization.

Since computerization is completely in accordance with both these modern principles of continuity, it can most properly be seen as a genuine modern phenomenon contributing to the ongoing process of modernizing modernity.

The main impact of computerization on this process is beyond doubt the modernization of the modern textual infrastructure, which implies that the process of modernization has now come to embrace the primary medium of truth in modern societies. If discursive textual representation formed the basis for the modern secularization of the human relationship to nature, the very same process has now come to include the textual representation itself.

There seems to be a kind of logic in this process of secularization, which takes its point of departure in the notion of inanimate and external nature - initially conceived of as materially well-defined entities moved by immaterial forces, later as well-defined material entities and energy processes - and has expanded to include biological processes, leading towards the inclusion of mental processes and symbolic representation, which implies that the observer is observed and included in the very same world as any observed phenomenon.

It may seem that this is only a form of logic concerning the movement towards an all-embracing inclusion of subject matter, as the story of theoretical and epistemological developments is in many respects one of increasing divergence, in spite of many vigorous efforts to create a unified, scientifically based corpus of knowledge. Even though this may be true, it is also true that there is a logic in theoretical and epistemological developments as the movement towards the inclusion of all subject matter, whether physical, biological or mental, is related as a main cause to the history of axiomatic transgressions.

While the theories of the 16th and 17th centuries relate to the axioms of a static universe based on fixed entities and substance defined by form, 19th century theories relate to - various - axioms of dynamic and developmental systems based on variable entities, while 20th century theories predominantly differ from both of these, in that the notion of form is now separated from the notion of substance and is hence seen as a self-reliant structure or pattern which can organize arbitrary substances. In these theories substance does not matter. The computer is one of the fruits of this development, caused among other things by inner tensions in mechanical theories and most theories relating to the computer are still based today on the same type of axiomatics.

So how, then, is it possible to predict that computerization will bring about a new transgression of axioms? A new textual infrastructure as such would not, if it were not at the same time based on the very fact that substance does matter - and does so because, contrary to the main axiomatics of the 20th century and contrary to the ideas necessary for the invention, substance can neither be identified with form nor reduced to amorphous matter without affecting form.

This being so, we can predict that computerization will necessarily return substance to the theoretical and epistemological agenda, from which it was removed in late 19th and early 20th century theories. It will not, however, return as it was when removed. The return of substance will not take the shape of a notion defined by - extensional form - nor will it return leaving the notion of self-reliant forms untouched. On the contrary, it will return primarily as a resource which will force a change in the notion of form, as the same substance can be the carrier - itself transformed - of various forms, patterns and repetitive or unique structures.

Just as in modern physics, where energy under certain circumstances is converted into corpuscular matter, which implies a complete substitution of properties (from those of interfering waves to those of colliding particles), material substance seems to need an interpretation as a generic resource or material which allows the formation and change of various forms and structures.

In the case of complete substitution, there seems to be nothing left for further description except the curious fact that two completely different sets of properties are ascribed to the very same "phenomenon". To say that energy is completely transformed into matter implies that a specific amount of energy is identical with a specific amount of matter, although they have no common properties except the rule of exchange.

Now, since there is a physical process taking place in time and space before as well as after the conversion, we may wonder how it could be possible to maintain that the process of exchange is not itself a process which takes place in time and space. And since there is substance before as well as after, it would seem that there must also necessarily be substance in between. Whether or not there is a way to get around this question in physics, it is impossible to say that substance can be identified with only a single invariant form.

Thus the break with early modern concepts of form as something which defines substance must be maintained, while a break with late modern concepts of self-reliant forms is placed on the agenda.

A most intriguing aspect of this is that the very notion of rules and coding procedures must now be included as processes taking place in time, space and substance, in a world identical to that of the coded substances.

The logic of this process is the logic of progressive secularization. It moves from the idea of the transcendental, cosmological rules of the Middle Ages and the Renaissance, passing through the Enlightenment reinterpretation of rules as natural laws immanently given in the world - but still seen as axiomatically given invariants, functioning as transcendentally given with respect to the phenomena ruled (as the rules of language were still described in 20th century structuralism) - while we are now confronted with a third step in the transition from transcendentally to immanently given rules: the breakdown of the idea that rules are functionally transcendental invariants relative to the ruled.

Even if the notion of a rule or code must imply a dividing borderline between the code and the coded, there is no way to maintain that the borderline is invariant, as it can only be established through the very same process as the coding itself.

Although it may be convenient to assume that some codes have existed since the very origin of the universe, this does not tell us much, as the very idea of such an origin can only refer to the idea of some divinely given invariant codes. There may or may not be such transcendentally given codes which are not the result of processes taking place in time, space and substance, but there is certainly a multitude of codes which can only exist as the result of coding procedures which actually do take place in time, space and substance.

A basic conceptual inversion implicitly comes from the need to explain how stability is possible at any level, including the question as to how levels come into existence. The very question implies that the notion of stability, rule, code and invariance must be moved from the field of axiomatics to the field of what is to be explained.

This is exactly the type of question posed by computerization, as the rules governing the processes in the computer must come in the same package as the governed, ready to be processed and edited in exactly the same way. To the notion of rule based systems we must now add the notion of rule-generating systems. Among the properties of such systems, redundancy functions seem to be one of the most important, as these functions can provide stability, although they allow existing rules and levels to be suspended or modified and new rules and levels to be created in ways which are not possible in strictly rule based systems.

Although the computer is not a rule-generating system such as we - in some respects - are ourselves, it transgresses the constraints which define strictly rule based systems, placing the very notion of rules on the agenda and thereby removing this notion from its sacred position of axiomatically given phenomena. A position in which nothing now appears to remain.

What has been said here about notions of substance, rules and codes is parallel to what could be said about the notion of the observer, the brain-mind relationship, the notion of human subjectivity and so forth. The process of modernization has come to embrace exactly those notions on which the process itself has been based in previous epochs. If this is the case, we are heading towards a secularization of the relationship to the secularizing mind or a transition from modernizing on a first-order scale to modernizing on a second-order scale. A continuation of modernity both through the integrative transgression of former axioms and through the extension of global reach, whether in the form of second-order textualization of such things as visual representation or second-order integration on a global scale. A continuation, however, which is only possible because the principles of modernity are not those of rule based systems, but those of rule-generative systems based on redundancy functions which allow any specific axiom or rule to be modified, suspended or transgressed.



Notes, chapter 10

  1. A paradox because the need to control industrial processes reflects the fact that industrial processes do not provide control by themselves.
  2. Present-day society is sometimes described as an information society with reference to the fact that more than half the people employed work with information services. If we use this definition of the information society, the computer may well be the instrument for a transition from this to a new industrial society, as many of these information services can be executed mechanically. On the other hand, however, all societies could be described as information societies because knowledge and the organized exchange of information are necessary conditions for any society. Under any circumstances, it is therefore necessary to differentiate between the "information" society we are familiar with today and the possible social forms which can be created with the computer as the basic information technology in society. Such a distinction can be established with the point of departure in a conceptual distinction between information technologies which have no informational architecture and information technologies such as the computer, which have.
  3. J. D. Bernal 1939; Vannevar Bush (1945) 1989; Derek de Solla Price (1961) 1975.
  4. Previous examples which could be mentioned are: the transgression of the Newtonian distinction between physical matter and immaterial forces manifested in the new concept of material energy in 19th century physics; the transgression of the absolute distinction between matter and energy inherent in Einstein's theories of the early 20th century; the transgression of the definition of substance as form inherent in 20th century theories of structuralism, information theory, functionalism and pragmatism, among others; the transgression of formalist axiomatics inherent in Gödel's theory; the inclusion of human emotionalism and sexuality in the concept of man inherent in late 18th century philosophy and Romantic literature. Or in more general terms the transgression of the concept of a static universe inherent in 19th century theories of evolution, development and growth, the transgression of 19th century materialistic dynamics inherent in 20th century functional dynamics, such as that manifested in Chomsky's generative grammar theory, for instance.