
The article's URL: www.hum.au.dk/ckulturf/pages/publications/nof/modernity.htm
This is the electronic edition of Niels Ole Finnemann, "Modernity Modernised - The cultural impact of computerisation". Workpaper 50-97, Center for Cultural Research, University of Aarhus, 1997.

The pagination of the printed edition is indicated by red numbers marking the beginning of the page.

ISBN: 87-7725-232-2

Electronically published: November 12, 1997

©Niels Ole Finnemann 1999. All rights reserved. This text may be copied freely and distributed either electronically or in printed form under the following conditions. You may not copy or distribute it in any other fashion without express written permission from me. Otherwise I encourage you to share this work widely and to link freely to it. 

Conditions

You keep this copyright notice and list of conditions with any copy you make of the text. 

You keep the preface and all chapters intact. 

You do not charge money for the text or for access to reading or copying it. 

That is, you may not include it in any collection, compendium, database, ftp site, CD ROM, etc. which requires payment, or any world wide web site which requires payment or registration. You may not charge money for shipping the text or distributing it. If you give it away, these conditions must remain intact.

For permission to copy or distribute in any other fashion, contact cfk@au.dk

Niels Ole Finnemann:

Modernity Modernised - The Cultural Impact of Computerisation

Guest lecture, Tampere University, Finland, January 16th 1997.

It is not a matter of discussion whether the computer is part of the social and cultural transformations we are witnessing today, both in modern, western societies and in every other society in the world. Although there are many different interpretations of these transformations, one can hardly find any which does not ascribe an important role to the computer - whether as a main cause or as an important means of contemporary cultural changes.

If nothing else unites these various theories, they are uniform in their reference to the significance of the computer. This being so, one might expect it to be easy to find a basic and commonly acknowledged description of the properties of the computer on which such predictions concerning the cultural effects were founded. Despite the plenitude of descriptions to be found, I have not succeeded in finding one able to meet the demand, on the one hand, of being valid for any possible use of computers and, on the other, of taking into account the fact that the computer is a mechanical machine, a medium, and is based on a specific set of principles for the representation of symbolic content.

Consequently, I had to make such a description, and since it deviates from previous descriptions I shall start with a short summary of my answer to the question, what is a computer?

It might be helpful to point out a few of the previously given answers, as for instance:

  • the computer is a computer (i.e., it is a calculating machine);
  • the computer is a machine by which data are processed by means of a programme;
  • the computer is only an artefact comparable to any other kind of artefact.

-2-

I shall not give a detailed account of the various mistakes inherent in these answers, but only say the following:

1. Strictly speaking a computer is not a computer, since it does not operate like other known calculating devices. In fact, it does not calculate at all. The computer is not a calculating machine but a machine in which we can simulate calculating machines, as well as many other kinds of devices and processes, in a mechanical way.

2. The computer is not a machine in which data are processed with the help of a programme. Of course we do use programmes. But we should always keep in mind that the programmes themselves need to be represented and processed in exactly the same way as any other kind of data - that is, as sequences or strings of bits, each of which can be processed and edited. In the machine there is no difference between data and programme (see the sketch after point 3 below).

3. The computer is not an artefact on a par with other artefacts, since it can only function as an artefact - i.e. simulate artefacts - with the help of symbolic representations.
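Here is the sketch referred to under point 2 - a minimal Python illustration, not part of the original argument; the toy instruction set, the memory layout and the opcode values are invented for the example. The "programme" and the data it operates on live in one and the same byte sequence, and editing the programme is nothing other than editing bytes:

    # A toy machine: instructions and data share one undifferentiated byte sequence.
    def run(memory):
        """Interpret bytes in memory as instructions; memory also holds the data."""
        acc, pc = 0, 0
        while True:
            op = memory[pc]
            if op == 0x01:                 # LOAD: copy the value stored at the given address
                acc = memory[memory[pc + 1]]
                pc += 2
            elif op == 0x02:               # ADD: add the value stored at the given address
                acc += memory[memory[pc + 1]]
                pc += 2
            elif op == 0x00:               # HALT: return the accumulator
                return acc

    # Bytes 0-4 happen to be read as a programme, bytes 6-7 as data.
    memory = bytearray([0x01, 6, 0x02, 7, 0x00, 0, 40, 2])
    print(run(memory))                     # -> 42

    # Editing the "programme" is just editing bytes: turn the ADD into a second LOAD.
    memory[2] = 0x01
    print(run(memory))                     # -> 2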

I shall return to the central points in these statements very soon, but first I will give a general definition of the computer and use it to explain why the question of having a definition does matter.

As mentioned above, if we want to discuss the cultural impact of the computer, we need a concept of the computer that is valid for any possible kind of use. We cannot take any specific and current use as the point of departure. The reason for this is very simple. If we don't use a definition which is valid for any possible use, we have no way to determine what the cultural impact will be, since this may depend on and vary with as yet unknown ways of using the machine which may evolve in the years to come. What we need is a description that is valid not just for all currently existing kinds of use but also for any kind of possible use; that is, we need to describe the invariant properties of the computer - in other words, the constraints manifested regardless of the purpose for which the machine is used.

There are three such constraints inherent in all kinds of use:

1. Any process which is to be performed in a computer needs to be represented and processed by means of a mechanically effective notation system - or what I will describe as a specific and new type of alphabet.

2. Any process performed needs to be governed by means of some algorithmic syntax.

3. Finally, any process performed is performed by means of an interface determining the semantic content of the algorithmic (or syntactical) processes.    

-3-

I will now describe some of the implications. First, it is worth emphasizing that these three conditions need to be met in any kind of use. There will always be a notational representation, an algorithmic syntax, and an interface defining the semantic content of the syntax. But there is also an important difference between the first condition and the other two conditions. While all processes performed in the computer need to be performed in the very same alphabet - there is only one alphabet available in the machine - there are no definable limits either for the variation of algorithms used to govern the process or for the definition of the semantical content of the syntactical processes. The need for a syntax and a semantics will always exist, but both syntax and semantics are open for free variation according to our own choices and ideas.

We can specify the basic principles for the possible syntactical variation in four aspects:

1. We are free to use any of the existing algorithms or to create new algorithms, under the sole condition that the algorithm can be processed through a finite set of steps (as described by Alan Turing in his definition of a finite, mechanical procedure).

No single formal procedure exists which is necessarily a part of any computer process.

2. We are free to use a given algorithmic procedure for different purposes - i.e. giving the same formal procedure new semantic content. The very same algorithm can be used for various purposes by redefining the semantic content (and it can be done on the level of the interface).

3. The same purposes might be achieved by using various/different algorithms.    

-4-

4. It is always possible to modify any syntactical structure as well as its function and semantic content, implying that no non-optional, syntactical determination exists by which previous steps determine following steps.

For the semantic level we can specify the freedom of choice in two aspects:

1. First, we can choose between a range of various formal interpretations (some of which can be used to calculate or to perform logical operations) and a range of informal semantic regimes - as we do when we use the computer as a typewriter or to represent pictures.

2. Second, we are free to use a combination of more than one semantic regime simultaneously, as we do when using the computer to simulate a typewriter, in that we control the process both by an iconographic semantics and by the semantics of ordinary language.

Since we are not able to predict or point out specific limitations for the future development either of new algorithms or of possible semantics, it follows that predictions of the cultural impact cannot be predictions concerning the semantic content of the effects of computerisation (just as knowledge of the medium of the book does not allow us to say anything about the content of the next book to be written). What we are able to predict, however, is that any present as well as any future use will be constrained by the demand that the content represented has to be manifested in the above-mentioned alphabet.

One may wonder why the binary notation is denoted as an alphabet, a point I shall now explain. One should first of all be aware that the binary notation system used in the computer is not defined according to the principles for the use of notations in formal symbol systems. In formal symbol systems each notational unit needs to be defined as representing a semantic value, either as a data value or as a rule, while the binary - or informational - notation units used in the computer are defined independently of any semantic content. Since they are in themselves always semantically empty units, they can never have any semantic value of their own. Semantic content in the computer is always related to a sequence of units, never to a single unit, as it always is in formal notation systems.

-5-

The reason for this can be found in the principles of the universal computing machine. As was shown in principle by the English mathematician Alan Turing in 1936, such a machine had to function independently of any specific formal rule or programme (any specific semantic content), since it should be able to perform any rule or programme. (If the machine were determined by some specific formal rule, it would be deprived of its universality.)

We can illustrate the basic difference between the principles of formal and informational notation by comparing the use of binary notation as a formal notation system (the binary number system) and the use of the very same two units in the computer. Used in a formal system, the two units are always defined by their data value according to their position in the expression. They can never represent a rule for addition or subtraction (for this purpose other notation units are necessary), while in the computer they must represent both data (numbers) and various rules, such as the rules of addition for instance - and they do not only represent the rule, they are also used to carry out the process of addition in a mechanical way. While a formal notation unit is defined as a physical representation of a semantic value, informational units are defined as physical forms which are legitimate units but without any semantic value of their own. On the other hand, they need to be defined as mechanically operative units in the physical machine.
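The contrast can be made concrete with a small Python sketch of my own; the truth-table encoding of the rule is one arbitrary choice among many. In (a) the two symbols carry a value by their position, as in the binary number system; in (b) the same two symbols, empty in themselves, are used both to encode the rule of addition and to carry the addition out step by step:

    # (a) Formal notation: the bits' semantic value is fixed by their position,
    #     e.g. '101' denotes the number five.
    print(int("101", 2))                    # -> 5

    # (b) Informational notation: bits with no value of their own also encode the
    #     *rule* of addition - here as a full-adder truth table - and the rule is
    #     then used to carry out the addition mechanically, bit by bit.
    FULL_ADDER = {                          # (bit_a, bit_b, carry_in) -> (sum, carry_out)
        (0, 0, 0): (0, 0), (0, 0, 1): (1, 0),
        (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
        (1, 0, 0): (1, 0), (1, 0, 1): (0, 1),
        (1, 1, 0): (0, 1), (1, 1, 1): (1, 1),
    }

    def add_bits(a, b):
        """Add two equally long bit sequences (most significant bit first)."""
        carry, out = 0, []
        for x, y in zip(reversed(a), reversed(b)):
            s, carry = FULL_ADDER[(x, y, carry)]
            out.append(s)
        out.append(carry)
        return list(reversed(out))

    print(add_bits([1, 0, 1], [0, 1, 1]))   # 5 + 3 -> [1, 0, 0, 0] = 8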

Consequently, there is also another important difference. While we are always free to introduce new units in formal systems - by defining the semantic content of the unit - we are never able to introduce new units in the informational notation system. The number of units needs to be finite (actually we use two, but this is arbitrary, although practical; theoretically we could use 17, for instance), since it must be defined at the time the machine is built - as part of the hardware. The number of notation units cannot be modified since it cannot be part of the editable software.

Hence, informational notation systems can be defined as notation systems consisting of a finite number of members, each of which is defined by an unambiguous physical form and a lack of any semantic content of its own. On the other hand, this is exactly why the very same notation system can be used to represent formal expressions, whether data or rules, the alphabet of ordinary language, pictorial and musical expressions, as well as a huge number of non-symbolic phenomena and processes - due only to our own choices.

-6-

Since the units are the physical operative units - performing the process step by step in the machine - we are always able to manipulate computer processes on the level of the individual units, that is, on a level lower than that of the semantic content and lower than the formal level of rules, and hence independently of the semantic content and the syntactical rules. This is why we can say that previous processes or programmes never predetermine the later process in a non-optional way.

Of course, most of us would never use the machine if we were to use it on the level of binary notation. It is not very convenient and we do use programmes to determine many processes. But we do it for our own convenience. Programmes are not a constraint determining what can be done. Accordingly, we can state that the notation system (and the presence of a computer) represents the only invariant constraints for any computer process. If something can be represented in this system it can be processed in a computer, and thus we can say that the computer is basically defined by a new kind of alphabet. Some of the characteristics of this alphabet are similar to the characteristics of the alphabet of written language, others are not.

Both similarities and differences are described at length in my book, "Thought, Sign and Machine", and here I shall mention only that both notation systems are "subsemantic" (based on double articulation, i.e. letters beneath the level of the smallest semantic unit) and both consist of a limited set of units. But while the units in the linguistic alphabet do have certain qualities - they are either vowels or consonants, and in some cases they do have semantic content of their own (e.g. in the grammatical endings and in the words 'a' and 'I') - this is never the case in the informational system. The letters in this alphabet are completely empty of any specifying qualities. We could say that the new alphabet is totally clean and in this respect the perfect alphabet, also allowing us to represent any other known alphabet in it.

For our purposes, the most important difference, however, is that the linguistic alphabet can only be used as an alphabet of linguistic representations, while the informational alphabet can be used to represent a multitude of semantic regimes. For example: linguistic regimes, whether phonetically or alphabetically represented, formal regimes, pictorial and musical regimes, and these can even be represented simultaneously.

For this reason the computer can be defined as a multisemantic machine. Its multisemantic properties can be specified in the three following points:

  • It is possible to use this machine to handle symbolic expressions which belong to different semantic regimes (linguistic, formal - including mechanical, mathematical and logical - as well as pictorial and auditive regimes, and so on) with the sole restriction that the expression which is handled can be represented in a notation system comprising a finite number of expression units.

-7-

  • It is also possible to control the machine (or the computational process) with different semantic regimes, with the same restriction, although this control can only be performed mechanically for a limited class of procedures; for others it requires the semantic regime to be exercised through continuous human intervention.
  • Any process executed in the machine runs as a relationship between at least two semantic regimes, namely one laid down in the system and one contained in the use. The two regimes may coincide, as can happen when a programmer is editing a programme, or when a closed semantic procedure is executed, such as in the form of the automatic execution of a mathematical proof. Usually, however, this is more a question of a plurality of semantic regimes, but always at least two.

With this description it now becomes possible to add yet another criterion both to the distinction between the computer and other machines and to the distinction between the computer and other symbolic means of expression. While other machines can be described as mono-semantic machines in which a given, invariant rule set is physically implemented and defines the functional architecture as an invariant architecture, the computer is a multisemantic machine based on informational architecture which is established by the materials the machine processes. While other symbolic expression systems can be described as mono-semantic regimes in which the semantic regime is closely connected to a given notation and syntax, the computer is a multisemantic symbolic medium in which it is possible to simulate both formal and informal symbolic languages as well as non-symbolic processes, just as this simulation can be carried out through formal and informal semantic regimes.

Together, these two delimitations contain a third, important criterion for the definition of the computer, in that the computer, contrary to other machines and media, can be defined as a medium in which there are no invariant thresholds between:

  • the machine and the material processed by the machine. If there is no software there is only a heating device.

-8-

  • the programme and the data, since any programme or structure of rules is to be represented and executed in the binary notation system in exactly the same way and form as any other data.
  • the information which is implemented in the functional architecture of the machine and the information processed by that architecture.

On the basis of this description of the properties of the computer it is possible to conclude that the computer is a medium for the representation of knowledge in general, whether formally coded or not, and that it not only has the same general properties as written language, but also properties providing a new historical yardstick both for the concept of a mechanical machine and for the concept of the representation of knowledge and any other kind of symbolic content.

We can use this description to delineate a whole range of "first-time-in-history" features, according both to the functions we can perform with computers and to the symbolic formats we can represent. Concerning functional innovations, at least three main points can be indicated:

  • First, it is a medium for producing, editing, processing, storing, copying, distributing and retrieving knowledge, thereby integrating for instance the production of knowledge, the production and selling of books, and the library into a single system.
  • Second, it is a medium for presenting linguistically (spoken and written), formally, pictorially and auditorily expressed knowledge, thereby integrating all stable forms of knowledge in modern society into the same medium and the same symbolic system of representation.
  • Third, it is a medium for communication, thereby integrating the most important previous means of communication, such as mail, telegraph, radio, telephone, television, etc., whether one-to-one, one-to-many or many-to-many, and allowing both close to real-time interactive communication and rapid communication independent of the presence of the receiver.

In itself, the integration of all these functions, which were formerly distributed between different media and functions, is epoch-making, but in addition to this comes the fact that the properties of the computer also change the conditions and possibilities in each of these individual areas. Although these cannot be described under the same heading, they have, however, a common background in the general properties of the machine.    

-9-

There is one important aspect which will be of significance in all areas: a great number of the restrictions which were formerly connected with the physically bound architecture of the symbolic media are here transformed into facultative symbolic restrictions which are implemented in a physically variable (energy-based) and serialized textual form. Symbolic representation is thus available in a permanently editable form.

I shall concentrate on three of these unprecedented features:

  • First, we have a new alphabet in which we are able to represent knowledge in any of the formats used in the prior history of modern societies.
  • Second, we now have a means of textualised - serial - representation of pictures.
  • Third, we are on the way to having one globally distributed, electronically integrated archive of knowledge.

For this reason we can conclude that the multisemantic machine represents a revolution in the technology of knowledge representation, a revolution based on a new alphabet, a new format for written, textualised - sequentially manifested and processed - representation. Since the computer provides new ways to produce, represent and organise knowledge in general as well as new ways of communication, it also provides a change in the societal infrastructure, in so far as society is defined by its methods of knowledge representation and communication.

Basic constraints of the new medium - seriality and machinery

Although the symbolic properties of the computer go far beyond the capacities of any previously known means of representation, there are two basic limitations. First, as previously mentioned, any representation in computers is conditioned by a series of sequentially processed notational units. No matter the specific function and semantic format used, and no matter the specific purpose, any use of computers is conditioned by a representation in a new type of alphabet, implying that the content is manifested in an invisible, textual form, which can be edited at the level of this alphabet. Second, the global scope is conditioned and limited by the actual presence of and access to the machinery.    

-10-

Taken together, these limits delineate a system for the representation of knowledge which is most properly conceived of as a new electronically integrated, globally distributed archive of knowledge, in which anything represented is manifested and processed sequentially as a permanently editable text. Hence, the computer is basically a technology for textual representation, but as such it changes the structures and principles of textual representation as known from written and printed texts, whether they belong to common or formal languages. The character of this structural change, however, goes far beyond the internal structure of textual representations, because - due to the integration of linguistic, formal, visual and auditive formats of knowledge - it widens the range and logic of textual representation and - due to the integration of globally distributed archives in one system - widens the social and cultural scope of any kind of textual representation.

Therefore, we can say that as an agent of change the computer provides a new textual infrastructure for the social organization of knowledge. The basic principle in this change is inherent in the structural relation between the hidden text and its visible representation. While the informational notation shares linear sequencing with other kinds of textual representation, it is always randomly accessible. The whole "text" is synchronically manifested in the storage from which a plenitude of "hypertexts" can be selected independently of previous sequential constraints. What is at stake here, however, is not - as often supposed - a change from seriality to non-seriality, but a change in which any sequential constraint can be overcome with the help of other sequences, since anything represented in the computer is represented in a serially processed substructure.
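As a small illustration of this point (my own; the example sentence and the selection of slices are invented), a serially stored text is nonetheless randomly accessible, so a reader can assemble a new sequence from arbitrary parts of the store, independently of the order in which it was written:

    # A serially written and stored "text"...
    stored = "All processes in the computer are serial, yet the store is randomly accessible."

    # ...is randomly accessible: any slice can be read without traversing what
    # precedes it, and a new, reader-defined sequence can be assembled freely.
    def slice_of(phrase):
        start = stored.index(phrase)        # jump directly to the relevant offset
        return stored[start:start + len(phrase)]

    rearranged = " / ".join(slice_of(p) for p in
                            ["randomly accessible", "All processes", "serial"])
    print(rearranged)                       # -> randomly accessible / All processes / serial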

One of the significant implications is that sequences defined by a sender can be separated from - and rearranged and reinterpreted together with - sequences defined by any receiver, while the position of the receiver in the same act is changed to the more active role of "writer", "co-writer", or simply of user. Interactivity thus becomes a property inherent in the serial substructure and available as an optional choice for the user, limited only by his or her skills and intentions.

Seriality persists, even in the case of non-serial expressions such as photographs and paintings, since non-serial representation is only the result of an iteration of a selected set of serially processed sequences. The same is true for the representation of any stable expression, whether of a certain state or of a dynamically processed repetitive structure and even in those cases where some binary sequence is made perceptible for editing as a first order representation. Thus, since an interplay between the textual substructure and any superstructure (whether textual or not) is indispensable in any computer process, this interaction is the core of the structural change in the principles of textual representation.    

-11-

2. The Textualisation of Visual Representations

A triumph for the culture of the text

The inclusion of pictorial representation seems to be one of the most significant indicators of the new range and logic of textual representation, as now, for the first time in history, we have an alphabet in which any picture can be represented as a sequential text.

Textual representation is a feature common to all computer-based pictures, and defines their specificity compared to other pictures. Since any picture in a computer has to be processed in the very same - binary - alphabet, it follows that any picture can be edited at this level, implying that any computer-based picture can be transformed into any other picture in this alphabet.
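A minimal sketch of this (my own; the 4x4 grey-scale grid is invented, and real image formats add headers and compression on top of this) shows what it means that a picture is editable at the level of its notational units and can be rewritten, unit by unit, into any other picture:

    # A tiny "picture" is nothing but a sequence of numbers.
    WIDTH, HEIGHT = 4, 4
    picture = bytearray([0] * (WIDTH * HEIGHT))     # an all-black 4x4 grey-scale image

    # Editing single units of the underlying sequence edits the picture itself.
    picture[1 * WIDTH + 2] = 255                    # one white pixel at column 2, row 1

    # Transforming this picture into any other picture of the same size is just
    # a matter of rewriting the sequence, unit by unit.
    other = bytearray([255] * (WIDTH * HEIGHT))     # an all-white image
    for i in range(len(picture)):
        picture[i] = other[i]

    print(picture == other)                         # -> True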

In many cases, morphing may perhaps be only a curiosity, but the basic principle that any computerized picture is always the result of an editable textualised process performed in time is far from a curiosity, since it changes the very notion of a picture as a synchronously and not serially manifested whole.

Seriality and time are not only introduced into the notion of pictures as an invisible background condition, they are also introduced at the semantic and perceptible levels, since the textualised basis allows the representation of - editable - time to be introduced at both these levels.

The pictures on the screen are always played in time - even if there is no change in the picture.

While the synchronously manifested whole is an axiomatic property of a painting or a photograph - even though they are produced and perceived serially in time - the same property in the computer has to be specified and declared as a variable at the same level as any other feature, whether it belongs to the motif, to the compositional structure, or to the relation between foreground and background. Variability and invariance become free and equal options on the same scale, applicable to any pictorial element, which implies that there is no element of the picture whatsoever which is not optionally defined and permanently editable.

-12-

There is of course a price to be paid for this new triumph of modern typographic and textual culture, as the textual representation presupposes a coding of the picture into an alphabet. The basic principle in this coding is the substitution of physically defined notational units for physical substance, implying a definition of a fixed set of legitimate physical differences (i.e.: differences in colours) which are allowed to be taken into account. Since we cannot go back to the original if we only have a digitized version, the coding is irreversible and the possible secondary codings and transformations will therefore always be constrained by the primary coding which always defines a specific borderline between informational and noisy aspects of the original.
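The primary coding can be sketched as follows (a hedged toy example of my own; the number of levels and the sample values are invented). Continuous physical differences are mapped onto a fixed, finite set of legitimate values, and every finer difference is thereby defined as noise and lost for good:

    LEVELS = 4                                      # the "legitimate differences" allowed

    def digitise(intensity):
        """Map a continuous intensity in [0, 1) onto one of LEVELS discrete values."""
        return int(intensity * LEVELS) / LEVELS

    original  = [0.12, 0.13, 0.49, 0.51, 0.88]      # physical measurements of the original
    digitised = [digitise(v) for v in original]

    print(digitised)                                # -> [0.0, 0.0, 0.25, 0.5, 0.75]
    # 0.12 and 0.13 have become indistinguishable: the difference between them was
    # defined as noise by the primary coding, and no later processing can restore it.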

Various aspects of these constraints

The relevance and weight of this constraint is itself a variable which has to be taken into account in the use of computer-based representations, but in general there are two main aspects. First, some of the substance qualities of the original will always be missing since there is a change of expression substance. There will therefore always be some doubt about the validity of the reference to the original. This is obviously a serious constraint on the scholarly study of art represented in computers. Second, the definition of a fixed set of legitimate physical differences at the time of the original coding may later prove to be misleading, in that physical differences which are not taken into account may be of significance. Since the digitized picture is conditioned by the definition of a - later invariant - distinction between differences which are regarded as either noise or information in the substance, there may be cases - in medical diagnostics, for instance - in which a reinterpretation of this distinction is needed but not possible.

The constraint here is directly related to the logical interrelation between noise and information, which implies that information can only be defined by treating potential information as noise, since information is always manifested in some kind of substance. While missing information concerning some qualities of substance cannot be completely avoided, at the same time computerization allows a broad repertoire of possible enrichments concerning global accessibility, as well as analytical and interpretational procedures. Since the constraints on informational representation are basically those of notation and processing time, it is not possible to define any other invariant semantic or syntactic limitations for the digitized representation of pictures.    

-13-

The significance of the textualisation of the picture can be seen by comparing it with previously known pictorial representations for which some kind of textual representation exists, such as those described in Euclidean geometry, in the analytical geometry of Descartes, or in the various other forms of syntactically defined pictures, whether based on a well-defined perspective (like linear perspective) or a well-defined iconic or diagrammatic system.

The basic and general change in representational form relative to any of these representations can be described as a transition from representation at a syntactic level to representation at the level of letters (those of the new alphabet). The textual representation of geometrical figures defines a naked syntactic structure, whether two-dimensional or three-dimensional, without regard to substance qualities such as colours and so forth, while any syntactic structure in a computer-based representation of a picture can be dissolved into a series of notation units, including the representation of some kind of substance.

Although this is a change from a higher to a lower level of stable organization, for this very reason it is a change from a more restricted to a more elaborate set of variation potentialities, in which the higher level structures become accessible to manipulation at the lower level. In the first case the picture is defined by a stable syntactic structure - to which can be added certain rules for variation; in the latter (the digitized version), stability is defined solely at the level of notational representation - to which it is possible to ascribe a plenitude of - editable - syntactic and compositional structures, as well as to integrate representations (only partially, however) of substance qualities such as colours and backgrounds at the same textual level. Form, structure and rule become editable on the same scale as substance. The representation of substance is necessary, but need not be a simulation of the substance of the original. The representation of an arbitrarily defined and itself editable background on the screen will do the job.

Moreover, informational notation is a common denominator in which some substance qualities, the syntax as well as the motif, are manifested on a par with each other. As any sequence representing one or another element of a picture can be selected and related to other sequences in various ways, and possibly ascribed various functions as well (i.e. adding a referential function, which is itself editable, to other sequences), it follows that any fragment of a picture, or a picture as a whole, can be integrated into a continuously increasing - or decreasing - syntactic and semantic hierarchy completely independently of the original form and source. The insecurity in the referential relation to the original is thus complementary to the enrichment of possible hierarchies and frames of reference.

-14-

Perspective becomes optional and variable and so do other kinds of representational structures such as representation based on the size and positioning of motifs and the choice of colours in accordance with semantic importance, as was often the case during the Middle Ages. The resurrection of - or return to - the Middle Ages, however, is not on the agenda of computerization, since no single, non-optional hierarchy of values can be established.

Considered from a cognitive point of view, this is a radical extension of the ways in which cognitive content can be manifested in pictorial representations, whether in iconic, diagrammatic or geometrical form. Considered from a pictorial point of view, it is a radical extension of the ways in which the representation of both physical objects and pictures can be made subject to cognitive treatment.

Much of this is a result of the fact that the computer-based representation of stable structures has to be "played" in time, but since time has already been represented in film and on the television screen, the proposition must be qualified accordingly. In the case of film making, the basic difference is that the definition or selection of perspective is constrained by the optical artefacts used - the lenses of the camera - while the definition of perspective in the computer has to be defined as a - still editable - part of the same text as the motif, which implies that the very division between optical constraints and motif becomes editable. So with regard to freedom of choice the computerized picture more closely resembles the animated cartoon than the film.

In the case of television the difference is primarily the result of the notational definition of the signals, as the stable picture on the TV screen is only the - perceptible - result of serial processes. As will be familiar, a basic constraint on real time digital television is the enormous amount of binary letters needed to represent what was formerly an analogue signal.

This is a constraint, however, which at the same time transgresses a series of other constraints characterizing the old-fashioned television of the 20th century. The most far-reaching of these is probably the possible breakdown of one-way transmission and communication. Since a receiving computer can also be a sender, the receiver can also become the editor of the editors, able to decide what and when he will receive from whom. And since the computer is not only a medium for communication but also for storing in a completely editable form, the new medium transgresses the documentation monopoly of senders too.    

-15-

Summary

If, as has often been argued in media studies, other modern electronic media contribute to a revitalization of visual and oral culture - although in a mediated, secondary form, as claimed by Walter J. Ong - at the expense of the hegemonic regime of "typographic culture", as claimed by Marshall McLuhan, the computer can more properly be understood as a medium by which the scope of modern discursive culture is extended to embrace vision and pictorial expressions through the textualisation of electronics, which at the same time allows the representation of other media, but now as optional genres within this medium.

Not only can the picture, or any other visual object, now be embraced by a text. Just as the author of a discursive text is able to represent himself in the text, so the observer or spectator is now able to represent himself as an interacting part of any picture - given the appropriate paraphernalia of "virtual reality" - in both cases, however, only as a fragmentary representation. Under any circumstances, computerization implies that some physical and organizational constraints and invariants (whether substantial, structural or conventional) are converted to text and hence become optional variables.

3. On the Global Archive: One World, One Archive

The fact that the computer - due to the properties described - has the potential to become a new general and globally distributed network medium for the representation of knowledge does not necessarily imply that it will actually become such a medium.

There are, however, strong indications that it will.

First of all, it seems beyond reasonable doubt that the use of computers will spread almost everywhere - whether this is rational or not - due to a widespread, powerful human fascination. The spread of computers into an ever-increasing number of fields - throughout the world - indicates that a profound change in the basic infrastructural level of all societies has already begun.    

-16-

Although we are not able to predict what will happen in the future, there are very few reasons to believe that this process can be stopped, and the only argument which should not be marginalized seems to be the risk of a breakdown due to inadequate supplies of electricity. Computerization in general need not be argued for, and arguments given in the past have often turned out to be wrong or have had no particular impact. If we are only able to guess at what may happen anyway, we might ask why we should bother about this matter at all. In this connection I should therefore like to mention two arguments which indicate a high degree of social and cultural relevance and which might provide grounds for the process of computerization.

The first argument is closely related to changes in the global scope of modernity. While the global perspective - inherent both in the claim of universality for human rights and western rationality in general, and in the process of colonization - is as old as modernity itself, most decisions in modern societies have until recently depended mainly on knowledge based on a more limited - locally restricted - scale. Today, however, a rapidly increasing number of local decisions on local issues depend on knowledge based on global considerations. This is true for economic, political, military and especially ecological information and, in consequence, there is also a need for a global scale for cultural issues. While some might argue that it would be better to attempt to re-establish a local economy and local political and military government, there no longer appears to be any room left for the idea of a locally restricted ecology.

Given that an increasing number of local decisions concerning ecological issues need to be based on a corpus of knowledge of global dimensions, there is no real alternative to the computer. While this is an argument based on the natural conditions for cultural survival, the second argument comes from within culture and is a consequence of the exponential growth in the production of knowledge (anticipated by the English scientist J. D. Bernal in the 1930s and the US scientist Vannevar Bush in the 1940s, and later described in the steadily growing number of books, papers and articles which have appeared since the pioneering work of Derek de Solla Price, among others [1], in the early 1960s).

Whether measured as the number of universities, academic journals or published articles, the number of scientists and scholars in the world, or the number of reports prepared for politicians making decisions, etc., the overall tendency is the same. Limits to the growth of paper-based knowledge production are in sight - whether seen from an economic or organizational point of view, or in a more general perspective: as a chaotic system in which nobody can keep abreast of what is known even within his or her own specialized field.

-17-

Basic structural changes are inevitable, be they in the form of a cultural collapse or a cultural reorganization. The computer is obviously not the only solution to the handling and reorganization of this exponential growth, but is an inevitable part of any viable solution, since any cultural reorganization must include a repertoire of remedies for storing, editing, compressing, searching, retrieving, communication and so forth, which can only be provided by computers. The computer may widen some cultural gaps, but if it were not used there might not be any cultural gaps to bridge, since there might not be any culture.

4. Modernity modernised

I shall now address my last and perhaps rather more intriguing question concerning the general cultural impact of computerization, and I will do so by placing the computer in the history of media in modern societies.

The basic idea is that we are able to delineate three main epochs in the history of the media, and that we can relate these epochs to a set of more general concepts which forms a history of modern thinking as well. The concepts chosen will be: the notion of the self and the relation between the observer and the observed; the notion of perspective; and the notion of laws and rules.

In the following the model of this history is presented along with a few words to lend it some plausibility.

Three epochs of the media history of modernity.

1) The epoch of the printed text (orality + script + print)

Dominant from the Renaissance to the Enlightenment (1500-1800)

Main ideas: the world (nature) is seen as a reversible mechanical machine (static) consisting of passive particles - "atomic" and "indivisible" entities held together by immaterial forces.

The human observer is seen as an ideal observer able to conceive the world as if positioned outside it (in a divine perspective). The main argument for this was the idea that God has written not one but two books: the Holy Bible and the book of nature. The book of nature can be read because God has "written" nature according to a set of eternal natural laws which we can recognize by our human reason. Since nature is a readable book, man can look into it and detect and describe the laws of nature. We can discover divine will directly by studying nature and describing what we see, without asking the priests or the pope.

-18-

Although there was no denial of God, there was a secularization of the relation to physical nature - made possible by God himself, so to speak, as he created this nature according to a set of rational natural laws. Consequently, the idea of truth is directly identified with the notion of eternal and universally given laws. Truth is defined as knowledge of the rules governing the system, and these rules are established as given prior to and from outside the system they govern. As a consequence of this axiom, the laws are not accessible to human intervention. Since the laws are universal and eternal they are also detectable from any place and at any time. They are independent of human subjectivity. Truth is objective knowledge, and objective knowledge is knowledge of universal laws manifested in nature and described in the printed texts of the philosophers and scientists.

Thus the printed text became the new medium of truth by which the secularization of the relation to physical nature took place. There was, however, also a dichotomy established between the observer as observer of the natural laws and the human self as a free and reasoning individual, leading to the dichotomy between the idea of a deterministic nature and the free will of man.

Among the ideas related to this general frame we also find the idea of linear perspective (representing a model for the formalized, repeatable and mechanical production of pictorial representations), the idea of the author as an independent creator writing on his own authority, and the idea of the sovereignty of human reason - which in the course of history leads to the development of public discourse, enlightenment and democracy - and the fragmentation of reason into descriptive (scientific) reason, normative (ethical) reason and evaluative (aesthetic) reason (concerning nature, society and culture/art/taste respectively).

This is of course a very broad overview in which many different positions are reduced to a common and maybe somewhat reductive denominator. But it seems to me that it actually depicts some very important notions, since they are all changed and reinterpreted during the 19th century, which we can see as a transformed and second edition of modernity.

2) The epoch of analog electronic media: from 1843 until the late 20th century (speech + script + print + analog electric media)

-19-

Considered in the perspective of media history, the 19th century is characterized by the emergence of analog, electronic media, starting with the electrical telegraph in 1843 and continuing ever since, including devices such as the telephone, the camera, the radio, the tape recorder, the television, the video, and a huge amount of other media and measuring instruments all based on the use of energy and especially electricity for symbolic purposes.

Compared to the static printed text, the new media are all dynamic; they are means of communication more than of storing knowledge, and not least because of that the printed text maintains its sacred position as the basic means of representing true knowledge in modern society. However, the new media contribute to a far-reaching development of the whole social media matrix, and we need to take all the media present - and their interrelations - into consideration if we want to study the function of any single medium, since new media also contribute to changing the function of older media.

So a new media matrix is formed around both the old and new media, including speech + script + print + telegraph/radio/movie and later television and video, etc. The new energy-based media add dynamic properties and imply a revolution in pictorial representation (photo, film, television, etc.) and in the means of global communication, as simultaneous communication now exists on a global scale.

Basic concepts in science, philosophy and culture are changed as well. This is not caused by the new media - but probably the new ideas co-evolved in a kind of interdependency. New paradigms are emerging in physics, biology, psychology, sociology and the humanities. In spite of many differences they are all concerned with the interpretation of dynamic processes, development, evolution, change (and in some cases even irreversibility), and they see the world not as the traditional Newtonian - and reversible - machine but as a dynamic and all embracing, integrated energy system, in which the observer is also included in one form or another. For example, as in the Hegelian idea of history as the unfolding of the universal weltgeist, in the pantheistic ideas of Romanticism, or in the historical materialism of Marx, and most remarkably in the various new physical theories developed around the study of energy in general and - at the end of the century - resulting in the paradigms of thermodynamics, soon after the turn of the century followed by the theories of relativity and quantum physics.

In spite of the differences and contradictions between these new physical theories they all take the relation between the observer and the observed into account, as they include and localise the observer in the same universe as the observed - in one way or another. Still, many of the previous axioms are maintained during the 19th century, as for instance the idea that the truth about nature and society can be found in a set of universal laws or rules, which can assure that we still have access to universal knowledge - a divine perspective.    

-20-

Although the notion of a rational and rule-based universe is maintained, the status of the rules themselves is often reinterpreted: according to Kant, for instance, they should be regarded as a set of logical or rational principles for the functioning of human reason rather than as natural laws as such. In both cases the rules are still given as the axiomatic and invariant basis for human cognition, and they are inaccessible to human intervention, but in the latter case they are now given within the universe as rules for mental processes. The notion of truth is still identified with the notion of universal laws, but the observer and the human self are brought into play as dynamic figures in the very same dynamic world they inhabit, as is the notion of laws.

Accordingly, new concepts of visual representation emerge too, as manifested for instance in the attempts to describe how the human sense apparatus influences our sensations, and in the development of new kinds of visual representations such as the panorama, the panopticon and other similar attempts to create what we could call an all-encompassing system perspective, either included in one mechanism or in a combination of many mechanisms which have to be added together to visualize the totality. The world is seen as one integrated dynamic system or as a set of many different dynamic systems - but they are all seen as rule-based. However, the interrelations between the various systems remain a basically unsolved mystery.

Again, much could be added to this picture, but it still includes some of the important notions, which again are changed and reinterpreted during the 20th century, in which we see a transition into a third epoch of modern thinking.

3) The epoch of digital electronic media: from 1936 (speech + script + print + analog electric media + digital media)

From the notions of the world as a mechanical and reversible machine and as an all-embracing dynamic energy system (leading to pragmatic functionalism), we are now on the way to regarding the world as an ecological information system - a system in which it seems that we need to reinterpret the notion of the self, the observer, the idea of universal knowledge, the notion of rules and laws as being given outside and independently of the systems they regulate, as well as basic ideas on visual representation and perspective. These are of course vast questions which deserve many years of analysis and discussion, so I will only give a rough sketch of a few aspects implied in contemporary conceptual changes.

-21-

Concerning the notion of laws and rule-based systems, it seems that we need to acknowledge the existence of systems in which the rules are processed in time and space as part of, and on a par with, the ruled system, implying that there are systems in which the rules can be changed, modified, suspended or ascribed new functions during the process, influenced by any component part of the system or according to new inputs, whether intended or not. In such systems the rules are not able to provide stability, and thus we need to explain how stability can exist in such systems.

I see basically three arguments in favour of the need for a concept for this kind of system:

  • First, the idea of rule-based systems does not allow or explain the existence of individual cases, or such cases are treated as irrelevant noise.
  • Second, the idea of rule-based systems does not allow or explain the existence of deviations from and modifications of the rules if the deviations cannot be traced back to some other rules.
  • Third, rule-based systems only allow the formation of new rules as the result of previously existing rules.

The computer is not the only thing which does not fit the notion of rule-based systems; the same is true of many - if not most - human and cultural phenomena, such as ordinary language, and even of biological phenomena. As a consequence of these arguments, it seems reasonable - or even inevitable - to introduce a concept for what one could label rule-generating systems, as distinct from rule-based systems.

Rule-based systems can be defined as systems in which the processes are governed by a set of previously given rules outside the system (and inaccessible from within the ruled system). The rules govern the system and guarantee its stability. Contrary to this, rule-generating systems can be defined as systems in which the rules are the result of processes within the system, and hence open to influence from other processes in the same system as well as from higher or lower levels and the surroundings. Such systems are to some extent, but not completely, governed by the rules, as these are themselves open to modification, change and suspension. Since they are not completely governed by the rules, stability needs to be provided in other ways, and this is obtained by using redundancy functions in some way, as is the case in ordinary language, computers, and many other areas of culture.
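The distinction can be caricatured in a short sketch (entirely my own toy construction; the "voting" mechanism is merely one way of standing in for a redundancy function). In the first system the rule is given outside the process and untouchable; in the second the rule is part of the system's own state, open to modification by the process, with stability provided only by redundant pressure from the inputs:

    # Rule-based: the rule is fixed outside the process and cannot be touched by it.
    def rule_based(values):
        rule = lambda x: x + 1                   # given in advance, invariant
        return [rule(v) for v in values]

    # Rule-generating: the rule is part of the system's own state and may be
    # modified as the process runs; stability comes not from the rule itself but
    # from redundancy (here: the rule only changes when several inputs agree).
    def rule_generating(values):
        state = {"rule": lambda x: x + 1, "votes": 0}
        out = []
        for v in values:
            out.append(state["rule"](v))
            if v > 10:                           # inputs can push for a new rule...
                state["votes"] += 1
            if state["votes"] >= 3:              # ...but only redundant pressure changes it
                state["rule"] = lambda x: x * 2
        return out

    print(rule_based([1, 2, 3]))                 # -> [2, 3, 4]
    print(rule_generating([11, 12, 13, 4, 5]))   # -> [12, 13, 14, 8, 10]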

-22-

A main point here is that objective knowledge need not be universal knowledge. Individual phenomena, the existence of singularities and so-called exceptions are as real as the existence of universals. Truth should not be identified with universally valid laws. Another important aspect concerning the notion of knowledge and truth follows from the fact that we are always and only concerned with fragmentary representations selected among a number of possibilities. Any phenomenon in the world may have more than one representation. The implication is that we cannot stick to the notion of complete representation as was done in the former stages of modern thinking.

Parallel to this we are also able to recognize new ideas concerning visual representation and the notion of perspective. While both linear perspective and the various system perspectives in previous epochs were seen as models for universal representation, we are now heading towards a new concept of fragmented perspective, which could be labelled a scanning perspective, since scanning processes do not give the complete representation but a picture of some selected indicators.

An interesting point is that more or less parallel conceptual changes to those mentioned here can be found in many other contemporary sources (recursive systems, Luhmann, autopoiesis, Giddens, temporal logic). It seems that they all come as a logical consequence of previous modern thinking, as a further elaboration of modern thinking, and - in my view most importantly - as a transgression of the basic axioms of the previous epoch, in that the axioms are moved from the field of axioms to the field of analysis, moved from the area of invariance to the area of variability - as a broadening of the potential of modern thinking.

This applies to the notion of rules, which were first moved into the system, then regarded as processes in the system and later as the resultant effects of various other processes, bringing us from the notion of homogeneous rule-based systems to the notion of heterogeneous and reciprocally interfering systems - and, as I would suggest, to the notion of rule-generating systems. It also applies to the notion of the observer, who was originally conceived of as positioned outside the system, then brought into the system, and later regarded as an interactive component.

-23-

Since these evolving concepts can be seen as a consequence of attempts to answer unsolved questions in previous modern thinking, we may assume that we are still in the modern tradition. We are so in many other respects too, but in a manner which might be called a process of modernizing modernity, that is, a process in which former axioms are placed on the agenda and made into objects for analysis and description. This is actually the way the process of secularisation has always worked. We are only taking a small new step in continuation of this process.

If, as I assume, the previous axioms, which are now placed on the agenda, are the axioms concerning the concepts of the human mind, human reason, knowledge, symbolic representation, language, and the text as the medium of truth, we can state that the secularisation of the relation to nature has now come to include and embrace the very means of the same process. A secularisation in the relation to the (symbolic) means by which the previous secularisation towards other parts of nature was carried out. If so, we may say that we are facing a secularising process of a second order.



Notes:

[1] J. D. Bernal 1939; Vannevar Bush (1945) 1989; Derek de Solla Price (1961) 1975.