
Niels Ole Finnemann: Thought, Sign and Machine, Chapter 1 © 1999 by Niels Ole Finnemann.

1. Overview

Framing the question

Throughout what is now the more than 50-year history of the computer a great number of theories have been advanced regarding the contribution this machine would make to changes both in the structure of society and in ways of thinking.

Like other theories regarding the future, these should also be taken with a pinch of salt. The history of the development of computer technology contains many predictions which have failed to come true and many applications which have not been foreseen.

While we must reserve judgement as to the question of the impact on the structure of society and human thought, there is no reason to wait for history when it comes to the question: what are the properties which could give the computer such far-reaching importance?

The present book is intended as an answer to this question.

The fact that this is a theoretical analysis is due to the nature of the subject. No other possibilities are available because such a description of the properties of the computer must be valid for any kind of application. An additional demand is that the description should be capable of providing an account of the properties which permit and limit these possible applications, just as it must make it possible to characterize a computer as distinct from a) other machines whether clocks, steam engines, thermostats, or mechanical and automatic calculating machines, b) other symbolic media whether printed, mechanical, or electronic and c) other symbolic languages whether ordinary languages, spoken or written, or formal languages.

This triple limitation, however, (with regard to other machines, symbolic media and symbolic languages) raises a theoretical question as it implies a meeting between concepts of mechanical-deterministic systems, which stem from mathematical physics, and concepts of symbolic systems which stem from the description of symbolic activities common to the humanities. The relationship between science and the humanities has traditionally been seen from a dualistic perspective, as a relationship between two clearly separate subject areas, each studied on its own set of premises and using its own methods. In the present case, however, this perspective cannot be maintained since there is both a common subject area and a new - and specific - kind of interaction between physical and symbolic processes.

It immediately becomes obvious that such a description of an interaction between physical and symbolic processes can be of significance for theories of consciousness and the way this problem presents itself in existing research has also given rise to the formulation of hypotheses regarding cognition and consciousness. The question as to the significance of theories of consciousness, however, is not simply whether we are considering a form of interaction which can be regarded as a model of human consciousness - or the other way around, whether the machine can think. It is also a question of the conceptualization of physical and symbolic phenomena which have been of significance as preconditions for the discovery and development of computer technology and, perhaps most decisively with regard to the result, of the conceptualizations used in the hypotheses on consciousness and thereby in the definition of what is interacting. The description must therefore also include a theoretical and historical account of the concepts used in describing the physical, symbolic and conscious.

In consequence the book takes its point of departure in a description of the theoretical preconditions for the modern computer with emphasis on two separate, yet parallel tracks.

One of them runs from Ludwig Boltzmann's statistical thermodynamics from the latter part of the last century to Claude Shannon's definition of the information concept in his mathematical communication theory from 1948. The other originates in mathematical logic from the first third of this century with Gödel's proof as the theoretical turning point from which the English mathematician Alan Turing started in 1936 when he described the principles of a universal computing machine by showing how any finite formal procedure can be carried out as a sequence of very few and simple, mechanical processes.

While these innovations in the history of mechanical theory are remarkable in themselves and are regarded as necessary preconditions for the development of the modern computer, the analysis leads to the conclusion that mechanically based symbol theories are neither adequate to describe the symbolic properties of consciousness nor those of the machine.

The basic argument for this position - as far as consciousness is concerned - is to be found in the fact that the concept of human consciousness and intelligence must at least include the ability to generate its own symbolic units of expression, while the precondition for mechanical theory is an already given set of invariant units.

If there is a similarity between the computer and human consciousness, it will thus consist in the fact that neither of them is subject to a definite, invariant set of rules for the representation of meaning.

While consciousness can be described as a rule-creating system possessing the ability to produce symbolic rules, the computer can be described as a rule-free system which, by virtue of this, can be used to represent and process an indeterminately large number of symbolic representations and a certain class of rules.

Where the machine is concerned the basic argument can be found in the condition that any rule whatsoever which must be carried out by a computer must appear in the same notational form and be treated in exactly the same way as all other data. It is therefore not possible - as is a precondition in mechanical theory - to define any invariant borderline between the machine and the material processed in the machine, between the rule and the regulated, between programme and data and between the knowledge implemented in the functional architecture of the machine and the knowledge processed in this architecture.
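The condition that any rule carried out by a computer must appear in the same notational form as the data it operates on can be made concrete with a small sketch. The following Python fragment is purely my illustration, not the author's (the function name `double` is invented): the "programme" is first handled as ordinary data, then executed as a rule.

```python
# A "programme" is itself just data: here, a string of source text.
source = "def double(x):\n    return 2 * x\n"

# The same material can be inspected as data...
assert isinstance(source, str)
assert source.count("x") == 2

# ...or executed as a rule: compile()/exec() turn the data into behaviour.
namespace = {}
exec(compile(source, "<sketch>", "exec"), namespace)
assert namespace["double"](21) == 42
```

Nothing in the machine fixes a borderline between the two readings; the same string is "rule" or "material" depending only on how it is used.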

As a description of these characteristics cannot be carried out on a mechanical or formal basis, the point of departure will be taken in sign-theoretical concepts.

As sign theories - just like mechanical theories - are anchored in a dualistic thesis regarding the relationship between the physical and the symbolic, they do not provide a complete conceptual basis either. They have, however, two advantages which appear incompatible with a mechanical theory. First, because the existence of a once-and-for-all given set of rules for creating signs is not a precondition for the sign concept, whereas a mechanical system can only be imagined with the precondition of an invariant and preordained system of rules. Second, because a definite a priori assumption of the relationship between the physical and the symbolic is not a precondition for the sign concept either, whereas a given mechanical theory cannot be imagined without some sort of a priori assumption regarding this. In other words, by taking the point of departure in sign theories it becomes possible to include in the analysis the axioms which are a precondition for a mechanical theory. As sign theories on the other hand do not exclude the description of mechanical and other formal symbol systems in advance, they allow the theoretical openness which the subject demands.

The main thesis of this book is that the properties which characterize any use of computers and also characterize the computer as distinct from other mechanical technologies, other symbolic media and other symbolic languages, are determined by the symbolic notation form. This is primarily defined by the demand for mechanical execution, but - by virtue of this - it acquires a number of properties which justify referring to it as a new, independent notation system - called informational notation in the following - which differs both from formal and common language notation systems.

As this thesis implies an assertion to the effect that a computer is defined by this - unique - notation system, it also implies a negative assertion to the effect that it is impossible to provide a description of the computer's properties at a higher logical or semantic level (e.g. as a logical or thinking machine), if the description must both be valid for any application and be capable of characterizing this machine as distinct from other machines, media and languages.

I am thus claiming that a description of the computer as a logical machine is a description of a dedicated machine without the property of universality, while a description of a computer as a thinking machine is rejected because a computer - unlike a human being - does not possess the ability, so decisive for human intelligence, to produce its own notation system.

On the other hand I am claiming that a computer can be defined as a multi-semantic machine, by which I mean:

In continuation of this definition the conclusion will be drawn that the computer represents a new, general medium for representing knowledge, as it:

As a new, general medium for representing knowledge the computer is characterized by - what is in itself - an epoch-making integration of physical, social and symbolic functions which were formerly distributed among separate machines, institutions, media and symbolic languages. As this is not just a question of integration in one and the same medium (such as television, for example), but in one and the same notation form, which is defined by the demand for mechanical execution, this medium for knowledge representation has in addition a set of independent properties which also change the conditions and possibilities in each of the possible areas of use. These conditions and possibilities cannot be described under one heading and are therefore outside the framework of this book, but it is possible to point out at least four aspects of significance in all areas, namely:

The following pages contain a presentation of the relationship of this thesis to previous theories, a broader description of the content of the thesis and an account of the construction of the book.

Earlier theories

If the existing scientific literature is grouped in accordance with its approach, it is possible to point out four different main sources which have made their mark on the understanding of computer technology.

First, there is a large group of sociological theories concerned with the transition from the industrial society to the post-industrial information and knowledge society. While the term information society itself appears to have been used for the first time in a Japanese futurological study,[1] the basic conceptualization stems from Daniel Bell, 1973, who emphasizes three overall features in the development: first, the growing extent of information work, second, the use of theoretical knowledge as a "strategic resource" and third, the development of new "intellectual technologies" such as the computer. The two last features, according to Bell, make a social diagnostics possible which can also be used to predict and hence prevent crises.

Where Bell in 1973 wrote cautiously of "axial principles" for future developments, only 13 years later James R. Beniger could show that it had now become almost trivial to refer to existing society as an information society (Beniger, 1986). Unlike Bell, who defined the new society in contrast to the industrial society, Beniger also stresses continuity, in that he sees computer technology as the latest step in the series of - energy-based - control technologies which have been created as a part of the establishment and stabilization of the modern industrial societies.[2]

Second, there is a group of cultural and philosophical analyses, partly linked to the concept of the postmodern, partly to concepts of thinking machines. As an exponent of postmodern theory, mention can be made of Jean-François Lyotard's description (Lyotard, 1979) of information as a radical break-up of the conditions for knowledge structures - a new, postmodern scene where hope is linked to the sublime, beyond the rational, deterministic islands in the postmodern ocean.

Where the postmodern understanding alludes to a contrast between controlling, mechanizeable rationalism and human thought, the theories of thinking machines are built up around the idea that it is also possible to describe consciousness as a finite, reproducible information or symbol system. There is thus clear agreement regarding the understanding of the machine, but opposite views with regard to human thought.

The theories of thinking machines can be traced back to Turing, 1950, but are given a more elaborate and ambitious formulation by Newell, Shaw & Simon, 1961, and Newell & Simon, (1976) 1989. The philosophical aspects are discussed on the basis of different perspectives by such authors as Bruce Mazlish (1967) 1989, Hubert Dreyfus, (1972) 1979, H. & S. Dreyfus, 1986, Pamela McCorduck, 1979, Douglas Hofstadter, 1979, John Searle, 1980, David J. Bolter, 1984, John Haugeland, 1985 and Theodore Roszak, 1986.

A third approach to the computer can be found in the literature on the history of technology, but this is particularly concerned with the development of hardware and consists largely of descriptions in which the computer is seen as a further development of the automatic calculating machine, such as in Herman Goldstine, 1972, N. Metropolis et al. (eds.), 1980 (with a number of contributions from computer pioneers), René Moreau (1981) 1984, Brian Randell, 1983, Michael R. Williams, 1985.

Although this literature provides widely differing descriptions and evaluations of the significance of the computer, there is a general consensus in seeing it as a key technology which - for better and/or worse - allows an epoch-making leap forward concerning the possibilities for social regulation and control. Despite all other disagreement, the computer appears as the almost perfect - perhaps not fully developed - automatic calculation, control and prediction machine.

This common, and basically control-theoretical, understanding of the computer is not completely unfounded; on the contrary, it is clearly in harmony with the ideas which dominated the fourth group of main sources regarding the understanding of the computer up to the 1980's, namely those theories which created the basis for computer development research.

Among the earliest exponents, mention can be made of Alan Turing's theoretical description of the universal computer (1936) and John von Neumann's and others' description of what has since become known as the von Neumann machine (Neumann 1945), (Goldstine & Neumann, 1947-48). But the first general formulation of a control-theoretical understanding makes its appearance in Norbert Wiener's interpretation of the computer as a cybernetic system, (Wiener, 1948 and 1950). Wiener also laid the foundation for the later discussion regarding social implications in raising the question as to whether the machine could be used as a centralistic, bureaucratic administration instrument which would make Thomas Hobbes' Leviathan look like a pleasant joke.

The control-theoretical understanding can be rediscovered in new forms in the classic AI description (Allen Newell, Cliff Shaw & Herbert A. Simon, 1961), in the reformulated AI descriptions which appear in Cognitive Science (e.g. Zenon Pylyshyn, 1984) and in a number of accounts of information theory (e.g. Børje Langefors, 1966), where the machine is defined by its computational process, which is described as an independent, finite, mechanically performed symbolic procedure operating on the basis of a previously established rule structure.

The core of this literature was created around the basic symbol-theoretical thesis of classic AI, according to which a "physical symbol system" comprises a set of physical units of expression which can be joined together in sequences and of a set of rules which can transform a given sequence to another.[3] But the group also includes theories which transfer concepts which were developed to describe other linguistic media (whether general or formal languages) to the description of the computer and theories which consider concepts developed to describe computational processes as general symbol concepts. All these theories assume, implicitly or explicitly, that informational notation builds upon the principles of formal notation.

Loosely speaking, the control-theoretical descriptions cover what happens in the time that elapses from the moment a programme is started until it has been carried out as an automatic - and here that also means mechanical - procedure. They are founded upon the basic assumption that the programmer and the user can be ignored in the description of the symbolic properties and thus see the machine as an autonomous, linguistic or cognitive agent.

In this respect the control-theoretical understanding also includes the "connectionist" theories of Cognitive Science (e.g. J. L. McClelland & D. E. Rumelhart (eds.), 1986) as the symbolic process is also described here as a finite, mechanically performed procedure. But as these relinquish the essential control-theoretical demand for a rational description of the symbolic rule structure, the latter group of theories can also be seen as a phase in the break with the control-theoretical understanding which, for the past ten years, has also been the subject of growing criticism from other quarters.

In continuation of this a number of other theoretical descriptions of the computer have emerged in which the idea of describing the machine as an independent and automatic manipulator of symbols has been abandoned in favour of a description of various forms of relationships between system and use. Where the machine was formerly understood as an automatic calculating machine, a mathematical and/or logical manipulator of symbols, or literally as a thinking machine, it is now also understood as a tool, as a plastic, freely designable material or as a (communicative and interactive) medium. Exponents of these views include Alan Kay & Adele Goldberg, 1977, the American Human Computer Interaction tradition, such as Norman & Draper, 1986, Terry Winograd & Fernando Flores, 1986, Scandinavian system development theory, for example Pelle Ehn, 1988, while P. Bøgh Andersen, 1991 and Andersen, Holmqvist and Jensen (eds.) 1993, describe the computer, on a semiotic basis, as a medium.

This development in the theoretical description of the computer can be regarded as a differentiation between an increasing number of competing descriptions, but can also be seen as a theoretical expression of a differentiation of possible kinds of use, not least promoted by the appearance of small, inexpensive personal computers which at one blow made a broad range of previously poorly exploited applications accessible to a much greater group of potential users.

While the control-theoretical approaches correspond to uses which emphasize automatic procedures (numerical control of other machines, the performance of complex calculation and control tasks, mechanical pattern recognition etc.), the tool and medium-oriented approaches correspond rather to uses based on continuous human interaction (whether text and image processing, database retrieval, the use of decision supporting systems, virtual reality etc.).

Both points of view imply, however, that it is a question of a differentiation in the understanding of the computer which raises doubts regarding that understanding of the machine on which the analysis of its social and cultural implications has been based.

In recent years a number of analyses have appeared which place considerably more emphasis on the many human choices which can have a significant influence on these implications - thus, for example, Shoshana Zuboff, 1990, who in addition to the automatic perspective emphasizes the informative perspective, as well as Andrew Feenberg, 1991. A similar tendency is evident in a number of detailed studies of the use of computers in companies, including analyses which stress the social and constructive elements in technological development.

By stressing human choice, the understanding of computer technology becomes linked to the question of the relationship between the respective competence of machines and humans and the relationship between control and democracy in the business community and in society.

Even if we subscribe to the - good - intentions in these confrontations with a deterministic understanding of technology, we still lack a description of the computer which will account for the properties which are common to every possible type of use and will explain how these properties can be exploited for the many - both good and less good - possible applications. The machine cannot simply be understood on the basis of the intentions implied in its use, it is also necessary to take into account the form these intentions will receive when implemented in this machine.

In other words we need a description that provides an account of the common platform which is the condition for the use of the computer, both as an automatic control and calculating machine, as a logical manipulator of symbols, as a tool, as a plastic, freely designable material, as a communicative and/or interactive medium (whether for word processing or virtual reality) and also describes the characteristic differences between these uses.

In its simplest form the problem is to explain how it is possible to use this machine both as a calculator and a typewriter. But the question must be treated subject to the condition that we can also use the machine to re-present an indeterminately large number of other both symbolic and non-symbolic processes. The description can therefore not take its point of departure in one or another specific use. Although the computer was created as a further development of the automatic calculating machine, it can no longer be understood by using the calculating machine as a model. We must rather say the opposite, because that which separates the computer from the automatic calculating machine is precisely that which also makes it possible to use it as a typewriter.

While the computer and the calculating machine are both machines which can be used for calculation purposes, the computer can also be used to represent and perform other symbolic processes and a great number of non-symbolic processes. It is therefore necessary to describe how this machine differs both from automatic calculating machines and from other symbolic media and languages.

Mechanical procedures have also formerly been used for symbolic purposes (e.g. in the form of machines such as the calculating machine and the clock, or in the form of organized energy processes such as the telegraph, telephone and television). In all these cases, however, we are considering applications which are characterized by a single - or a limited set of - finite, invariant mechanical procedure(s) which establish the functional structure of the machine or tool in a repetitive process. The individual machines and tools can correspondingly be characterized on the basis of these finite procedures and these are again linked to a limited set of possible applications. A calculating machine cannot be used as a typewriter, a clock as a telephone and so on. Where the telephone, the telegraph, the typewriter, the clock and the television are concerned the mechanical procedure is completely independent of the symbolic content, whether this be the meaning or the symbolic rules. Where the calculating machine is concerned the symbolic rules (rules of arithmetic) are implemented in the invariant physical structure of the machine. In all these cases we can therefore speak of a clear, invariant division between the mechanical and the symbolic, between the physical apparatus and the symbolic material which is handled by this apparatus. In the computer, on the other hand, the mechanical procedure which establishes the machine's functionality is defined by the symbolic material which is processed.

This difference has sometimes been cited as a reason for describing the computer as a machine which is not defined by its physical organization but on the contrary by its - symbolic - programmability. Although this definition is both suitable and perhaps even necessary, it is inadequate for many constructive purposes. As a description of the machine's basic features it is also misleading, because the computer as mentioned can only carry out a programme by representing and treating it in exactly the same way as all other data.

While a decisive factor in the use of other mechanical technologies has been to avoid or minimize the material's effect on the machine's organization and mode of operation - or in some cases - to define invariant physical limits for such effects - the use of computers is based on continuous interference between material and machine.

It has sometimes been claimed that this property is not peculiar to the computer and reference has been made to such areas as the cybernetic feedback procedure used in physical thermostats. The comparison is excellent because it can contribute to a more precise definition of the difference. While a precondition for informational feedback in a thermostat is that the same physical state - for example the temperature - always has the same informational meaning and mechanical effect, the computer on the contrary is characterized by the fact that the same physical state - in the electronic circuit - can have changing informational meaning and be connected with changing effects. While the thermostat is defined by an invariant and closed body of information which has been implemented once and for all, the computer is defined by a variable and open body of information as there is no invariant borderline for interference between the knowledge which is part of the machine's construction and the knowledge which is part of its use.
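The contrast with the thermostat can be made concrete. In the following sketch (a modern Python illustration of my own, not an example from the text) one and the same bit pattern in memory yields entirely different values depending on the convention under which it is read, whereas the thermostat's temperature reading always carries one fixed meaning.

```python
import struct

# One and the same 32-bit pattern in memory...
bits = struct.pack("<I", 0x41200000)

# ...read under one convention is an unsigned integer,
as_int = struct.unpack("<I", bits)[0]
# ...read under another convention is a floating-point number.
as_float = struct.unpack("<f", bits)[0]

print(as_int)    # 1092616192
print(as_float)  # 10.0
```

The physical state is invariant; its informational meaning is not. A thermostat, by contrast, is built so that each physical state has exactly one informational meaning and one mechanical effect.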

The computer, however, is not the only tool which is characterized by this type of interference between tool and material. The same is also true of common languages and this characteristic thereby links these two media for the expression of knowledge.

The structure of the book

The general sequence of this book moves from a description of the development of mechanical theory on local, finite systems, partly in mechanical physics (chapters 2-3 and 6) and partly in mathematical logic (chapters 4-5) to a description of the informational sign's physical, notational, algorithmic-syntactic and semantic levels (chapters 6-9). In the book's penultimate chapter (chapter 9) the analysis is outlined in relation to more recent, semiotically-based descriptions of the computer, one an American, Peirce-inspired, the other a European, Hjelmslev-inspired description, namely those of James H. Fetzer (1990) and Peter Bøgh Andersen (1990). The final chapter, the epilogue, contains an account of the theoretical considerations on the nature of symbolization which have been of significance for the present analysis.

As the book concerns subjects which are traditionally classed as mutually separate areas the following contains a short summary intended to provide an overall perspective of its sequence.

It has generally been accepted that the various post-war information theories have their roots in theories of physics and particularly in the Austrian physicist Ludwig Boltzmann's statistical formulation of thermodynamics from the end of the last century. In interpreting this connection authors have often been content to supply a rather short summary of Boltzmann's work emphasizing his mathematical-statistical definition of entropy as a yardstick for the degree of "disorganization" in a closed physical system. The many references to, but few expositions of, Boltzmann's deliberations have motivated a more extensive treatment. This treatment drew my attention to another area which has been overlooked in discussions of information theories, namely the break-up of the physical theories on mechanical processes, which is a central theme in Boltzmann's theoretical and philosophical considerations regarding mechanical theory. Although Boltzmann has had no influence on the reinterpretation of the mechanical theory contained in Alan Turing's theory on the universal computer (which is discussed in chapter 5), he nevertheless anticipated many of the questions that arise in this connection, just as he established a theoretical model for describing local and closed systems based on an arbitrary subdivision of an - imaginary - finite space.
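Shannon's later mathematical-statistical measure, which is formally analogous to Boltzmann's statistical entropy, can be stated compactly. The following Python sketch is my illustration, not part of the original account:

```python
from math import log2

def shannon_entropy(probs):
    """Shannon's 1948 measure, H = -sum(p * log2(p)), formally
    analogous to Boltzmann's statistical entropy S = k * ln W."""
    return sum(-p * log2(p) for p in probs if p > 0)

# A maximally "disorganized" (uniform) distribution has maximal entropy...
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# ...while a fully determined state has none.
print(shannon_entropy([1.0]))  # 0.0
```

The point of the analogy is exactly the one at issue here: the measure is defined over an abstract distribution of states, detached from any particular physical substratum.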

Where nature was understood in classical physics as a huge, coherent machine, Boltzmann's view is rather a question of an understanding of nature as a number of small, finite machines and, as perhaps the most far-reaching point considered in retrospect, of the germ of a break with classical physics' definition of matter on the basis of its - outer - extent and form. While this definition binds form to its material substratum, (expressed, among other things, in the demand that physics should supply a mathematical abstraction corresponding to physical reality), Boltzmann's statistical description model paved the way for an emancipation of the form concept which would become the point of departure for what - considered as a whole - can be described as a neo-Cartesian paradigm of information theory.

The paradigm of information theory, which has been of decisive importance for the emergence and development of computer technology, takes over the mechanical and dynamic process perspective formulated in the energy theories of 19th century physics, but at the same time releases the understanding of the mechanical process from the physical binding to matter with the resulting development of an abstract, mechanical description model which can be applied to an arbitrary area of matter - whether physical, biological or psychological.

Whereas the mechanically based information theory follows Descartes in the sense that it describes informational processes in the way Descartes would describe the external, physically extended world, for the same reason it breaks with the Cartesian construction because it now includes the - for Descartes detached - consciousness in the same world of time and space.

Chapter 3 provides an overview of the development of the physically-based information concept - from the physical to the symbolic - up to Claude Shannon, while in chapter 4 there is an overview of a parallel line of development - but now from the symbolic to the mechanical - in mathematical logic which leads to Alan Turing's theory of the universal computer - with a glance at the almost contemporary sign theories of Ferdinand de Saussure and Charles Peirce.

Turing's theoretical description of the principles of a universal computer is discussed in chapter 5. This theory, which occupies a central position in any discussion of the theory of computers, is treated here with particular emphasis on its new interpretation of 1) mechanical theory, 2) the informational notation form and 3) the use of algorithmic procedures for the mechanical linking of mutually separate physical-mechanical individual states, in that his contribution regarding these three points is central to the description of the physical and algorithmic levels of the informational sign system. The point of view taken gives rise to a partial reinterpretation of the theory as emphasis is placed on features which Turing himself did not accord the same weight and because the conclusions which are drawn are of a nature he would hardly have been able to imagine. This is first and foremost true of the description of the notation form which is necessary for mechanical performance and of the character of the universality of the machine.
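Turing's reduction of any finite formal procedure to a sequence of very few, simple mechanical operations can be suggested by a toy simulation. The function and rule table below are my own hypothetical illustration, not Turing's formulation: each step does only three elementary things (write a symbol, move the head one cell, change state), yet the rule table alone determines what the machine computes.

```python
def run_turing(rules, tape, state="start", steps=100):
    """Minimal Turing-machine sketch. Each step performs only three
    elementary mechanical operations, dictated by the rule table:
    write a symbol, move the head one cell, change state."""
    tape = dict(enumerate(tape))  # sparse tape; blank cell = "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A hypothetical rule table that inverts a binary string, then halts.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}
print(run_turing(rules, "1011"))  # 0100
```

Because the rule table is itself just a finite list of symbol manipulations, it can in turn be written on the tape of another machine of the same kind; this is the germ of universality.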

This re-reading of Turing can naturally be discussed. But the choice of Turing's theory as a point of departure for the description of the physical basis of the informational sign system can also be discussed - a) because the "Turing machine" is not subject to the same finite conditions as actual, physical computers - b) because Turing did not exploit the properties connected with the separation of programme from control unit - c) because he was unable to take into consideration the later developed random access memory - d) because he worked within the image of a traditional, physical-mechanical machine - and e) because he worked on the presupposition that all symbols were perceptually identifiable. The analysis of Turing's work, however, provides several important results; among them:

In later chapters it will be claimed that the possibility of choice is decisive for an understanding of what is called here the computer's multisemantic potential.

Turing's description of the computer, however, lacks two significant features. One is a description of the properties related to the separation of programme and control unit. This separation was explicitly described for the first time in 1945 by John von Neumann and Herman Goldstine and implies that any part of a programme whatsoever can become an object for processing, just as any data element can be utilized in a programme function. That a programme can only be carried out when it functions as data, however, was first clearly formulated at the end of the 1950s by John McCarthy in his description of a programme as a simple list of instructions and his creation, on this basis, of the programming language LISP. In the present work the theme is treated in connection with the more general development of algorithmic handling competence, which is described as a transition to a second-order handling, or algorithmic handling of algorithms.
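McCarthy's insight that a programme is a list of instructions - and hence itself data - can be sketched as follows. The names and the tiny instruction set are illustrative, not drawn from LISP itself: a "programme" is an ordinary list which one function executes and another function rewrites, the latter being a simple case of algorithmic handling of algorithms.

```python
# A programme as data: a list of (operation, operand) instructions
# that can both be executed and be transformed by another programme.

def execute(program, value):
    """First-order handling: carry out the instructions in sequence."""
    for op, n in program:
        if op == "add":
            value += n
        elif op == "mul":
            value *= n
    return value

def double_additions(program):
    """Second-order handling: a programme that rewrites another
    programme, doubling every addition operand."""
    return [("add", n * 2) if op == "add" else (op, n) for op, n in program]

prog = [("add", 3), ("mul", 2)]
print(execute(prog, 1))                    # (1 + 3) * 2 = 8
print(execute(double_additions(prog), 1))  # (1 + 6) * 2 = 14
```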

The second feature lacking in Turing's theory is the physical definition of the machine's notation system which is independent of human perception. Although Turing mentions this aspect in a footnote - and makes it clear that mechanical "reading" depends entirely on the definition of the symbols' physical form - he fails to take into account the possibility of completely ignoring the demand for perceptual recognition and the possibility of utilizing an entirely arbitrary definition of physical values and, as already mentioned, failed to note any qualitative difference between formal and informational notation either.

The definition of notation units independent of perceptual recognition was, on the other hand, familiar in the technical sphere, where for half a century work had been performed on invisible information transport in connection with such media as the telephone, radio and television. It was also an engineer, Claude Shannon of Bell Telephone Laboratories, who formulated the first theory on invisible informational entities, defined solely on mathematical-physical criteria. Shannon's theory has also had great influence in other ways, both on later information and computer theory, but is used in the present context particularly as a primary source for describing the informational notation system and the redundancy functions which belong to it.

This understanding of Shannon's information concept as a theoretical definition of a - new - informational notation system breaks with Shannon's own, more general understanding of the information concept, but it also differs from much criticism - not least linguistic criticism - of Shannon's a-semantic information concept, because the concept, seen as a contribution to the construction of a new notation system, is maintained here as an extremely useful theoretical and operational asset.

These deviations from former interpretations of Shannon's information theory have set their stamp on the following part of this account because they raise several questions regarding the theoretical basis for describing informational signs. This is true with regard to Ferdinand Saussure's and Louis Hjelmslev's distinction between the concepts of expression substance and expression form and the relationship of the expression substance to the sign function, as well as with regard to Umberto Eco's distinction between "signals" and "signs".

From a linguistic point of view it would perhaps be tempting to keep to an established, theoretical foundation for as long as possible and thereby make a point of maintaining or adjusting the individual concept in rigorous accordance with the existing conceptual inventory. But with the point of departure given here it might be more appropriate to see the relationship between the informational sign system and existing linguistic theory as a contrapuntal relationship where the concepts used to describe the informational sign system have, on the one hand, roots in linguistic theory, but on the other have their meaning established in relative freedom in order to prevent that which is new from drowning in old meanings.

The problems which emerge in connection with a linguistic description of the computer can hardly be collected in a general form as they not only depend on the computer's properties, but also on the linguistic theory chosen. The only practicable course has therefore been to include the linguistic theory on the basis of its relevance to the description of the informational sign system.

These sources (primarily Ferdinand Saussure, 1916, Louis Hjelmslev, 1943, Umberto Eco, 1968 and 1976 and Eric A. Havelock, 1982) were not chosen to ensure linguistic representativeness, but because they were considered suitable for illustrating various aspects of the relationship between common language, speech and writing, and the informational sign system.

The linguistic material is primarily included as part of a comparative analysis of various forms of the use of notation systems (spoken and written language and formal language) and the relationship between various forms of redundancy used in these systems.

Redundancy is understood in a broad sense as a sounding board which makes distinctive expressions possible. The concept is used in music theory to describe such things as recurrent patterns which are varied. It is evident from this that the sounding board itself is part of the manifested musical expression, which can be separated from the physical background noise. While the concept on the one hand is thus defined by the demarcation between the symbolic sounds (of music) and other sounds, on the other it is defined by the demarcation between the "more" distinctive and the "less" distinctive musical symbols.

In a sense, the two definitions are circular because the musicality as such is manifested in distinctive musical symbols. A given musical sequence can in one sense belong to the redundant sounding board for other distinct musical expressions, while in the other sense it manifests itself as such a distinct expression. Due to this structure a given element in an expression system can therefore also manifest itself as redundant and distinctive at one and the same time.

Although - or precisely because - it is impossible to define redundancy as a concept with an invariant feature, it may well fill the bill in the description of all symbolic expression forms. However, as redundancy is regarded as a key concept in describing structural differences between expression systems, a more precise definition is also given according to which redundancy is understood as repeatable patterns, structures or systems which:

The concept of redundancy is used here as an alternative to the concept of linguistic structure. At a theoretical level the most important purpose is to dissolve the conceptual borderline between the concepts of linguistic structure and usage, a borderline which has had axiomatic status in many areas of linguistics in the 20th century. This dissolution is motivated first and foremost by the fact that the rule structure of language can itself become the object of semantically motivated changes - including the creation of new rules which are not defined by the established rule structure - but also by the relationship to the non-linguistic substances, whether the expression substance or the meaning.

It may be possible to claim that this loss of conceptual precision, which will necessarily be transmitted to other concepts, is an expression of a more precise picture of the relationship between language rules and usage. In any case the redundancy concept allows a better understanding of the relationship between the different levels of the symbolic expression - from the physical, through notation, to the syntactic and semantic - as the common question at all these levels is how to bring about expression distinctiveness in a given symbolic language, partly relative to the underlying level and partly relative to other distinct expressions at the same level.

As the primary aim has been the analysis of the informational sign system, emphasis has been placed on a comparative description of structural differences from other symbolic redundancy structures at the level of notation forms. The intention was thus not to fulfil the need for a more exhaustive analysis of the redundancy structures of different symbolic languages.

The overall result of this analysis is that the different symbolic expression media, spoken and written language, figurative and formal representation, are characterized by the differences in redundancy structure at all levels, in the physical articulation, in the notation system, at the "syntactic" and semantic levels.

Although there are considerable differences between the redundancy structures of spoken and written language, they do have a common feature which separates these symbolic expression forms from both figurative and formal formats, as the smallest expression units in the former languages manifest themselves as redundant and distinctive expressions at the same time. This double articulation is closely connected with the fact that the smallest expression units, which are also the smallest semantic variation mechanisms, are smaller than the smallest content units.

On the other hand figurative and formal expressions are characterized by the absence of specific, redundant expression manifestations.

Where pictures are concerned this absence is described as a consequence of the fact that there is no fixed, pictorial notation structure in the form of a limited set of expression units, as the creation of pictures depends on the creation of form through an indeterminately large number of possible colour variations.

Where the formal expression is concerned, on the other hand, the absence is described as the result of a semantic operation: The formal expression depends on the declaration of prescriptive rules or values which establish invariant, semantically distinctive values for each individual expression unit. The formal expression has a fixed notation structure and an arbitrary number of expression units each demanding a specific declaration as a member of the notation system. The smallest unit of the formal expression cannot be manifested as redundant and distinctive at the same time. The prescriptive declaration thereby allows the intended elimination of the linguistic redundancy structure and takes the place of the linguistic redundancy by manifesting itself as a stabilizer of meaning.

The meaning of the redundancy concept in the informational notation was demonstrated for the first time in Shannon's theoretical analysis of physical information transport, as the physical definition of informational entities includes both a definition of the informational entity relative to the physical medium and relative to other informational entities.

Shannon, however, confined himself to an analysis of redundancy at the notation level with the result that while he could certainly define a physical scale for informational entities in the form of fixed, recurrent physical signals which appeared with a calculable, statistical probability, he could not separate distinct, meaning-carrying physical notation forms from the occurrence of noise in the same physical form.

Although his intention was to formulate an a-semantic theory Shannon assumed - apparently unconsciously - that this distinction would be carried out on semantic lines.

He thereby overlooked the fact that the semantic level is not only of significance for the choice of distinctive notation elements, but also for the redundancy structure which is a condition for semantic distinctiveness. Shannon's redundancy concept cannot therefore be used in the analysis of the syntactic and semantic structures which characterize different uses of a given notation system. Nevertheless he indicates a method by which the semantic legitimacy of informational notation can be ensured by adding an extra coding, which is independent of (and has no disturbing effect on) the semantic content of the message, to the informational expression. Shannon therefore refers to this procedure as a means of ensuring the content of the message by increasing the redundancy.

Shannon's analysis thus shows not only that the redundancy function plays a central role for the stability of the informational notation system - which is not the case with formal notation - but also that the redundancy function is completely different to linguistic redundancy functions because informational redundancy can, on the one hand, be defined independently of the semantic regime in which the message appears and, on the other, must be expressed as an independent sequence of notation units which is added to the given message. This is thus also solely a question of redundancy in relationship to the meaning content of the message and not in relationship to the notational expression. This use of a formal semantic as a redundancy function is unique to informational notation. As the formal coding which is added does not change the meaning of the message, Shannon's analysis shows in addition that the content of a formal procedure can be a function - variable to the point of weakness of content - of other semantic regimes.
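This procedure of securing a message by an added, content-independent coding can be sketched with the simplest such scheme, a parity bit. The parity scheme is my illustration - Shannon discusses redundant coding in general - but it shows the principle: an extra notation unit is appended which adds nothing to the meaning of the message, yet lets the receiver detect a single corrupted bit.

```python
# Redundancy added to a message as an independent sequence of notation
# units: the parity bit carries no new content, but stabilizes the
# message against noise.

def add_parity(bits):
    """Append one redundant bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(coded):
    """True if the coded message is consistent, i.e. has even parity."""
    return sum(coded) % 2 == 0

message = [1, 0, 1, 1]
coded = add_parity(message)        # [1, 0, 1, 1, 1]
print(check_parity(coded))         # True: message intact

corrupted = coded.copy()
corrupted[2] ^= 1                  # noise flips one bit
print(check_parity(corrupted))     # False: error detected
```

Note that the check operates on the notation alone: it can reject a corrupted sequence without any access to what the message means, which is precisely the independence from the semantic regime described above.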

While the mechanical performance of the algorithmic procedure presupposes informational notation, the algorithmic procedure is itself a precondition for the simultaneous, mechanical and symbolic use of informational notation. It is this relationship between informational notation and algorithmic syntax which differentiates the computer from other dynamic media such as the telephone, radio and television and gives the machine its unique symbolic properties. Implemented in this machine, however, the algorithm takes on new properties at the same time because - due to the synchronically manifested representation - it becomes possible to work systematically with the algorithmic handling of algorithms.

The description of algorithmic syntax therefore includes a general description of the dynamic and arbitrary second-order handling of algorithms and a description of the linguistic dependency of the algorithmic structure.

The fact that the term algorithmic second-order handling is used here instead of the commonly used notion of algorithmic complexity is due to three factors in particular.

First, the term points directly towards the new qualitative moment which is linked to the self-referential aspect: the algorithmic expression is handled with the help of - other - algorithmic expressions, while the notion of complexity primarily refers to a more complicated algorithmic handling of something which is non-algorithmic.

Second, the term points, albeit indirectly, towards an underlying connection to more comprehensive developments within the history of ideas, often referred to as "the linguistic turn" characterized by the assimilation of linguistic representation in the subject area of a number of disciplines.

Third, the term "second-order handling" is a more precise expression for the dynamic procedure as it is realized in the computer, as every step here is defined as a relationship between two elements. Although each of these elements is related to a multiple of algorithmic structures, there can be only one relationship at each step, where one element from one informational sequence appears in one relationship to one element from another.

This definition of the algorithmic second-order procedure is not exhaustive, but makes it possible to point out two invariant features which differentiate it from the algorithmic first-order procedure.

As the synchronically manifested expression contains all rules, any rule can become the object of a semantically motivated modification, alteration or suspension. The synchronic structure, however, can only be handled through a diachronically organized process which is subject to the demand for step-by-step transition which is defined by the relationship between the total system's actual state and the next - binary - notation unit.

We can therefore conclude that the informational sign system at the syntactic level is characterized both by a synchronic and a diachronic redundancy structure. The synchronic redundancy structure comprises the total system as manifested in a given state, excluding the notation which defines the next step. As this notation can consist in a new input, the diachronic "syntax" includes not only the internal computational structure, but also the chosen input structure. Within the diachronic structure, the synchronic structure does not therefore appear as an ordinary syntactic structure either, but rather as a - complexly composed - singular notation unit which can be subordinated to another, complexly organized input structure, which again can be an expression of different semantic regimes or purposes, because the input structure not only allows formally finite - calculative or logical - regimes, but also informal regimes. In the diachronic sequence the smallest expression unit (and smallest semantic variation mechanism) consists of the actual state of the total system plus the next notation unit.

In continuation of this description, I claim finally that the only semantic restriction of the informational sign system is contained in the demand that a given semantic expression be present in a notation system with a finite, established number of expression units.

In addition to this general restriction there is also a technological and historical one, which lies in the relationship between the time taken by physical processing and the time taken by human perception, because different semantic articulation forms demand different degrees of dissolution and rebuilding in order to be represented in a discrete notation system. It is thus insufficient to subdivide a picture into informational entities. Pictorial representations also presuppose that the machine can operate sufficiently rapidly to transpose the serial representation into what is to us a simultaneous, visually recognizable form. As the time occupied by physical processing is not restricted by the speed of human perception, this restriction is relative to technological competence and not to the speed of human perception.

The informational sign system can therefore not only be subordinated to all the semantic regimes which already use fixed notation systems, but also - through a suitable subdivision of the expression form - semantic systems which do not. It is thus characterized by the fact that not only the notation system, but also the syntactic structure have a multi-semantic potential.

Finally I claim that the multi-semantic potential of the syntactic structure differentiates the computer from other machines and expression media and confers on it its far-reaching civilizing significance, just as this structure also guarantees that the medium always retains that form of unpredictability which holds good for speech, writing, arithmetic and pictorial art. Although it is possible to describe the properties of these symbolic media and describe certain restrictions on the type of knowledge which can be expressed through them, it is impossible to predict the knowledge content expressed. Unlike other mechanical media it is true of the computer, as also previously mentioned, that there is no invariant borderline between the knowledge which is included in the functional architecture of the medium and the knowledge which can be expressed.

In the penultimate chapter there is a more detailed description of the informational sign system compared with James H. Fetzer's Peirce-inspired description of the computational process as a formal symbol process - characterized by the absence of the referential and interpretational functions of Peirce's sign concept - and with Peter Bøgh Andersen's Hjelmslev-inspired theory of computer-based signs.

With regard to Fetzer's theory, which was formulated in opposition to the classic AI concept, the central objection is that with his acceptance of Newell and Simon's symbol definition as an adequate definition of the computational processes (but not of the semiotic) he has in fact excluded a semiotic understanding of the computer medium.

Where Fetzer's theory is formulated in opposition to consciousness-theoretical elements in the classical AI concept, but not to the idea of the computer as an autonomous symbol machine, Bøgh Andersen's theory is formulated as a contribution to the development of new areas of application (e.g. "narrative systems") based on a semiotic analysis, as he takes his point of departure in the interaction between the programmer, the machine and its user. Although the latter work was an important source of inspiration for the present work, the emphasis has been placed on differences and deviations.

In relationship to the description provided here, the most important divergence is that Bøgh Andersen (with reference to the linguistic definition of the expression form as the perceptible expression) assumes that the computer-based sign can be described at the interface level, while the underlying processes are regarded as expression substance or sign candidates which can be utilized in sign production. While this emphasis contributes to the development and analysis of the visually expressed semantic potentialities, it also creates an obstacle to the utilization of the non-visually expressed aspect of the informational sign.

The theoretical criticism of this definition of the borderline between sign and non-sign is based on the fact that the borderline between "system" and "interface" itself is manifested as a result of sign work, namely the programmer's. As the programmer, who creates the system and selects an interface, is himself a user and any sufficiently competent user can also take the programmer's place and alter the programme, the entire existing system must be regarded as part of the informational sign. The relationship between the programmer and the user must correspondingly be regarded as a relationship between several different - and always at least two - semantic relationships - for the same expression form and this expression form is, unlike other familiar symbolic languages, not defined by the demand for perceptibility, but by the demand for mechanical execution.


Notes, chapter 1

  1. The Plan for Information Society - a National Goal toward the Year 2000. Japan Computer Usage Development Institute, Tokyo 1972. Source: Göranzon & Josefson (eds.) 1988: 5.
  2. A great number of corresponding works could be mentioned. In a list of this literature Beniger, 1985: 4-5, mentions more than 80 different suggested descriptions for that state of society which is now generally referred to as the information society. A few examples will illustrate common features and breadth: Posthistoric Man (R. Seidenburg 1950); Postcapitalist Society (Dahrendorf, 1959); End of Ideology (Bell 1960); Computer Revolution (Edmund C. Berkeley, 1962); Knowledge Economy (Machlup, 1962); Postbourgeois Society (Lichtheim 1963); The Global Village (McLuhan, 1964); The Scientific-technological Revolution (Radovan Richta et al., 1967); Neocapitalism (Gorz, 1968); The Age of Information (Helvey, 1971); Limits to Growth (Meadows et al., 1972); Post-industrial Society (Touraine, 1971, Bell, 1973); The Third Industrial Revolution (Stine, 1975, Stonier 1979); Telematic Society (Nora & Minc, 1978); The Gene Age (Sylvester & Klotz, 1983).
  3. Newell & Simon, (1976) 1989: 112-113. The thesis is quoted and discussed in chapter 5, 9 and the epilogue.