
Niels Ole Finnemann: Thought, Sign and Machine, Chapter 9 © 1999 by Niels Ole Finnemann.

9. The informational sign function

9.1 The algorithm in the machine

If the algorithmic revolution is characterized by abstraction, block structuring and hierarchical division, with its centre of gravity in the distinction between code instruction and control unit, as well as between programme and data structure - with second-order handling of algorithms as a consequence - it is undoubtedly a far-reaching methodological break in the history of algorithmic management. But it is not immediately obvious that it should also occasion new linguistic considerations, as no element in it touches, in itself, on the relationship to the semantic surroundings. It is a revolution inside a concluded parenthesis.

At the very moment, however, that we take this second-order handling into account as it is performed in a computer, the picture changes, as was theoretically anticipated by Turing.

What has changed is first and foremost the possibility of exploiting, at the notational level, the access to a step-by-step choice of a new instruction. While the algorithmic first-order procedure was formerly characterized by a diachronic determination stretching from beginning to end, the algorithm in the computer is available in a form which dissolves this diachronic determination, as all determination in the computer is locally limited: it is only valid for the transition from one step to the next.

The diachronic, algorithmic expression is not only available as a sequence of informational notation units. It is available in a synchronic form in which an arbitrary notation unit can become the object of the next operation, independently of its position in the preceding diachronic sequence. As the synchronic manifestation is produced as a result of previous states, those states may be contained in it, but they need not determine the next step.
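The point may be pictured in a minimal sketch (my illustration, not the author's): once a sequence of notation units is present as a simultaneous whole in memory, the next operation may address any unit of it, regardless of the diachronic order in which the units were produced.

```python
# Illustrative sketch: a state produced by some earlier diachronic process
# is simply present, all at once, as a synchronic whole.
state = list(b"produced by earlier steps")

# The next operation is free to address any unit of that state; nothing
# forces it to continue where the producing sequence "left off".
state[3], state[-1] = state[-1], state[3]    # operate on arbitrary positions
state[0] ^= 0b0000_0001                      # or change a single unit's bits
```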

It is difficult to decide whether the access to the step-by-step choice of new rules should be seen as a result of a new conceptualization of algorithmic procedures, or as a product of the new possibility for the mechanical handling of these procedures. In any case, both factors work in the same direction. With mechanical handling the algorithm appears in a secularized form, distinct from any specific overall semantics and accessible to a step-by-step handling with the help of other algorithms which must themselves be dissolved into individual steps at the level of informational notation.

In the following I shall argue that the step-by-step procedure and the synchronic representation of the algorithmic structure imply that the algorithmic revolution inside the concluded parenthesis stretches beyond this parenthesis with the formation of a new sign system, the informational sign system, as a consequence.

Herman H. Goldstine and John von Neumann were the first to diagnose the crux of the matter in this process. They referred to it as a transition from a static to a dynamic decision procedure, but it was evident from their description that this was only a half-truth. The new procedure contained not one, but two dynamic agents. The dynamic procedure runs as an exchange between a coded instruction and the machine's control organ, which implies the possibility of continuous modification of codes during the process. It could not be assumed, they wrote, that the code's instructions simply stood for an actually defined content at a certain point in the process. A given code can acquire a changing content, as it can be called to operate on a content which is modified during the process, or can itself be modified as a consequence of other instructions which can in turn be modified in the same way.

Hence, it will not be possible in general to foresee in advance and completely the actual course of C [the control organ], its character and the sequence of its omissions on one hand and of its multiple passages over the same place on the other as well as the actual instructions it finds along this course, and their changes through various successive occasions at the same place, if that place is multiply traversed by the course of C. These circumstances develop in their actually assumed forms only during the process (the calculation) itself, i.e. while C actually runs through its gradually unfolding course.[1]

That the computational process, seen from the control organ's point of view, is manifested as an unpredictable process, follows from the fact that the control organ can change the codes which control its own operations. The decisive aspect is not the relationship between programme and data, but the division of the controlling function in code and control organ as two distinct features which control each other step by step.
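A minimal sketch (my own illustration, not Goldstine and von Neumann's notation) may make this concrete: code and data share one store, and an instruction may overwrite another instruction before the control loop - the "control organ" - reaches it, so the actual course cannot be read off the initial code.

```python
# Toy stored-program machine: `mem` holds instructions and data in one
# and the same notation; the loop plays the role of the control organ C.
def run(mem):
    pc = 0                                # the control organ's position
    while pc < len(mem):
        op, addr, value = mem[pc]
        if op == "SET":                   # write `value` into cell `addr`
            mem[addr] = value             # ...which may itself hold code
        elif op == "HALT":
            break
        pc += 1
    return mem

program = [
    ("SET", 1, ("HALT", 0, 0)),   # step 1 rewrites the *next* instruction
    ("SET", 2, "never written"),  # original step 2: replaced before it runs
    ("HALT", 0, 0),
]
run(program)  # halts at cell 1; the initially written course was never followed
```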

Just as for Goldstine and von Neumann, a main motive for later computer architects - easily understandable and well-founded - was to develop methods which could control this unpredictable process. The computer has therefore often been defined on the basis of the programme, emphasizing the overall logical structure as the basic characteristic of the automatic, computational procedure, whether these architects worked with arbitrary, imperative functions (such as the assignment function and the go-to function) or with syntactic or logical programming methodologies, in which the use of arbitrary steps is often described as "dirty tricks".[2]

The very existence of these two conflicting views of programming not only reflects the fact that programming is a necessary condition for the use of a computer, but also that this necessity is not determined by the machine, but by the human use of it. The differences are not a question of what is possible, but of what is considered correct. The programme expresses a conceptualization at a semantic level which concerns human interpretation and use for specific purposes. The machine will work with any programme, notwithstanding its semantic structure, as long as it is expressed in an informational notation system which is in accordance with the physical structure of the machine.

Nor does the programme therefore constitute the most basic conceptual frame for a description of the computer. The necessity of the programme stems, on the contrary, from the fact that the computer, due to local determination and the distinction between the stored programme and the control mechanism, possesses a more basic and anarchic property, whereby each next step is open - in principle - to a free choice.

We would not make much progress if we were to attempt to exploit this possibility of choice to its full extent. Any use depends on a semantically motivated choice which is utilized in regulating the diachronic sequence. Local determination is nevertheless of central importance because it implies that the former states exercise no determination over the subsequent states. Although the system's actual state is produced by a predetermined rule structure, the next step can not only be executed independently of these rules; the actual state can also create the starting point for new steps which build upon other semantic interpretations of the actual state. As a consequence, any rule can be modified, suspended and/or have a new function and meaning ascribed to it.

This conflicts with the understanding of the computational process as a sequence determined by an algorithm or a programme.

It also conflicts with the experience we have of handling linguistic and formal notations which build upon - mutually different sets of - stable syntactic rules for the sequential organization of notation units. The most incalculable element, however, is probably that this dissolution of the rule structure conflicts with the basic ideas of the relationship between the rules and that which is regulated. Whether we think here of the idea of natural laws or of social conventions, in both cases we employ the idea of a precept or inherent structure which cannot be influenced by the system the precept regulates.

As described in chapter 5, it was a similar idea which created the foundation for Allen Newell's and Herbert Simon's theory of "physical symbol systems", where the rules are given outside the regulated (physical) system. Where the computer is concerned we know that the programmer can formulate such rules (including rule systems which can generate new rules), but also that they must be dissolved and converted to another notation system in which the rules are produced as the effect of a mechanical process which is not bound by the symbolic rules. The individual, mechanically effective symbolic entities, as described in chapters 7 and 8, have no definite content value and there is no directly compelling equivalence between a certain symbolic content and its mechanical execution.

In other words, Newell and Simon's symbol theory gives an incorrect description of the computational process, as the description ignores those features which characterize informational notation as distinct from formal notation systems.

The same idea of a rule system given from outside which controls the process also underlies the use of concepts such as operating systems and programmes. These concepts are often highly functional because they indicate a delimitation according to purpose and task, but at the same time they give a misleading idea of the semantic freedom of choice which is connected with the diachronic process, because they describe the sequence as a process in which the former states determine the following states. In the computer, however, every symbolic rule effect appears as the result of the process the rules are supposed to regulate.

In order to describe the diachronic process, it is also necessary to include another conceptual difficulty which appears in the implementation of algorithmic procedures in the machine, as this implementation implies at the same time that the algorithmic expression (whether the prescribed programme or a given data structure) is converted from a sequentially organized, diachronic structure into a synchronic manifestation of the total expression.

While the algorithmic expression - just like the alphabetical-linguistic expression - is manifested as sequences of successive notations in which the individual notations are defined relative to the preceding and subsequent notations in a linearly organized sequence of relationships, local determination in the computer implies that the functional value of each notation unit is exclusively defined by the total system's actual - simultaneous - state, or in Turing's words: by the relationship between the machine configuration qn and the actual symbol (the instruction) - described by Turing as S(r) - as the pair qn, S(r) comprises the total configuration which determines the machine's possible behaviour.[3]
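Turing's point can be restated in a small sketch (my formulation, not Turing's notation): the transition table is keyed by nothing but the pair of machine configuration and scanned symbol, so this pair alone fixes the next behaviour.

```python
# The pair (q_n, S(r)) alone determines what to write, where to move the
# head, and which configuration comes next.
table = {
    ("q1", "0"): ("1", +1, "q2"),
    ("q1", "1"): ("0", +1, "q1"),
    ("q2", "0"): ("1", -1, "q1"),
    ("q2", "1"): ("1", +1, "q2"),
}

def step(tape, head, q):
    s = tape.get(head, "0")               # S(r): the actual scanned symbol
    write, move, q_next = table[(q, s)]   # behaviour fixed by (q, S(r))
    tape[head] = write
    return tape, head + move, q_next
```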

The concept of the synchronically manifested notation corresponds to Turing's concept of the system's actual state, the machine configuration, but with the emphasis on the fact that this configuration is available as a notation structure which is not subject to any specific diachronic determination.

The concept of a synchronic structure is itself derived from linguistics where, however, it creates great theoretical problems. On the other hand, it is an extremely apposite term for the circumstance that, at any given moment, informational notation is available as a simultaneously manifested whole. Its use in linguistics will be discussed in more detail in the next section, but there is reason to point out that in linguistics the concept refers to an underlying, invariant rule structure: Saussure, for example, applies it to a presumed stable linguistic state, and Hjelmslev to a language system, while it is applied here to a manifest notation structure.

The circumstance that every next step is determined solely by the relationship between the actual state of the system and the actual notation implies that the informational expression has a unique character, because while the individual step only embraces the relationship between two bits, every individual step at the same time implies a change in the state of the total system.

While the smallest expression unit in the synchronic expression is constituted by the informational notation unit, there is no invariant, smallest expression unit which corresponds to the bit, the grapheme or phoneme in the diachronic sequence. The smallest expression unit here is constituted by a complex expression, namely the constellation of bits which comprises the total system's actual state. The relationship between the total system's actual state and the next individual step thus comprises the smallest semantic expression unit and it therefore represents the basic form of the informational sign.

The circumstance that the smallest diachronic expression unit is itself a complex, synchronic notation structure which coincides with the smallest sign function distinguishes the informational sign system from other symbolic sign systems.

The expression form of this sign structure can be described with complete precision, but this can only be done by describing it as a sequence of successive system states which are not connected by a general, underlying rule structure. Although all computational procedures assume a specific syntactic and semantic composition of the sequence structure, there is no general syntax for the diachronic sequence. The semantic restrictions are determined solely by the demand on the notation form and not by the demand for a specific syntactic and semantic regime, as is the case with the linguistic utilization of the alphabet and the use of formal notation.

The choice of syntactic structure and the interpretation of its significance is, on the contrary, a semantically motivated choice. While other sign systems are characterized (among other things) by syntactic stabilization rules for the use of notation elements, the informational sign system is characterized by syntactic freedom. Here, the notation structure is the stable element for syntactic variation. For the same reason, the development of - new - syntactic structures is a general - and inexhaustible - source of innovation. The rapidly growing number of different programming and system development theories could also be described as a huge reservoir of syntactic structures or models relative to different uses. Finally, the synchronically manifested notation implies that there is no invariant relationship between a certain syntactic structure and a certain semantic regime.

Compared with other notation systems, the risk structure of informational notation is also different with regard to semantic breakdown and correspondingly requires other control and redundancy structures. The smallest synchronic expression unit can thus bring about a more radical semantic variation than the smallest expression units in other notation systems, as a single incorrect bit can imply the complete dissolution of the expression. Conversely, it has a weaker intrinsic meaning because its notation value is completely fixed relative to the system's actual state. Informational notation has, as we saw in chapter 8, no independent qualities over and above its physical value and notational legitimacy.
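This risk structure can be illustrated with a small example of my own: a single flipped bit in a length prefix does not change one local detail of the expression but dissolves the reading of everything that follows it.

```python
# The first byte states how many bytes of content follow.
msg = bytes([5]) + b"hello world"

def parse(buf):
    n = buf[0]               # the reading of the rest depends on this value
    return buf[1:1 + n]

parse(msg)                   # b'hello'

# Flip a single bit in the prefix (5 -> 1): the whole expression is re-read,
# not just one character altered.
corrupt = bytes([msg[0] ^ 0b100]) + msg[1:]
parse(corrupt)               # b'h'
```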

The synchronic manifestation creates the foundation for an incalculable expansion of the potential choice which is connected with the step-by-step procedure and syntactic freedom in the choice of the diachronic sequence.

There are certain cases where it would be quite correct to say that a synchronically manifested notation represents a programme for the execution of a process, namely those cases where we do not utilize the possibility of making new choices during the process, as we use a given synchronic starting state as a determinant for the following diachronic process.

In these cases we are not describing a universal Turing machine, but a dedicated machine for performing a limited set of tasks. Such a description is not a description of the computational procedure; it is, on the contrary, a description of a given step in the performance of a pre-established task where we do not utilize the potential choice. Here, all that is necessary is simply to turn on the machine.

In all other cases the synchronic states are on the contrary subject to a diachronic determination which is not bound to any definite rule structure. The diachronic sequence cannot be described through the concept of a programme which determines the process.

The objections to the view of the computer as a machine which is determined by a programme can be summarized in two points.

First, it is difficult, if not impossible, to provide a clear definition of the concept 'programme', as we have no criterion which can determine when a given data sequence can be referred to as a programme and when a sequence must be referred to as something else.

If, for example, by a programme we understand a collection of precepts which control a sequence from beginning to end, the concept will include neither operating systems nor application programmes, such as word-processing programmes. Using this definition we must say, on the contrary, that we start a programme when we open the system, a new programme when we open the word-processing programme, and yet another when we strike a key in order to produce a letter, change a typeface or adjust a margin. The word-processing programme therefore does not contain a set of precepts which determine which data sequences are used in which order, just as this type of programme is not defined by any invariant set of functions either.

In practice, the programme concept is not used in any consistent sense. It is used, on the contrary, as a pragmatic, common name which covers different forms of semantic organization of data sequences. Some programmes are based on a purely mathematical or logical, formal, closed semantic. Others are based on an informal semantic, and the kind (or level) of semantic is determined by the user. The user is thus not bound to intervene at only one, e.g. logical or linguistic, level. It is possible to intervene at the level of the informational notation unit; at the level where a sequence of bits is used as a representation of a notation unit in another symbolic language (for example, in the form of the ASCII code); at the syntactic level, as a sequence of bits can be used as a syntactic structure which performs a rule of calculation; and at a semantic level, as a sequence of bits can represent a mathematical or formal way of presenting a problem, a logical retrieval procedure, a text, a picture and so on.
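These levels of intervention can be sketched with two bytes (my illustration; only the ASCII convention is taken from the text): the same informational notation can be read as raw bits, as alphabetic characters or as a number, and nothing in the notation itself dictates the choice.

```python
raw = bytes([0b0100_0001, 0b0100_0010])       # two informational sequences

as_bits = " ".join(f"{b:08b}" for b in raw)   # notation level: '01000001 01000010'
as_text = raw.decode("ascii")                 # alphabetic level: 'AB'
as_int  = int.from_bytes(raw, "big")          # arithmetic level: 16706
```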

That which is referred to as a programme is a freely selected number of notation sequences, but what makes these sequences a programme has nothing to do with the specific sequences, but on the contrary with the circumstance that there is some purpose which could, in general, be fulfilled by completely different sequences, just as the given sequences could well have been used for other purposes.

The second, and most decisive, objection to describing the computer as a machine which handles data with the help of a programme is that the computer can only execute a programme by treating it in exactly the same way as all other data and that it can only handle data which are represented in the informational notation system.

Here, every rule and all data values are present in the same notation and manifested in a synchronic form which makes it possible to handle each individual notation unit independently of the previous values, whether as part of a rule or of data.

The concept 'programme' can therefore only be distinguished from the concept 'algorithm' by connecting a given purpose to a given quantity of informational sequences. It is not the programme which organizes the notational structure, but the notational structure which creates the foundation for programmatic variation.

The synchronic structure not only permits an absolute division between the preceding and subsequent states, but also provides the possibility of a facultative utilization and interpretation of arbitrary elements which are produced through the previous states. This dividing line determines that it is both possible to implement the assignment and go-to functions with the arbitrary definition of the next step relative to the preceding steps and to change operation mode, for example, from process execution to programme changes. Both can be seen as specific uses of the more general possibility of choosing the next step without taking the preceding diachronic sequence into account. As both the execution of the preceding steps and the result of the process are only available as a synchronically manifested notation structure, this independence holds true not only of the choice of new data or the possibility of switching between programmes, it also holds true of the choice of the semantic regime for further handling.

This arbitrariness is not limited to the free choice of the fragments we will use, it also includes the possibility of choosing the syntactic functions and semantic values of the fragments used, because all functions and values are only available as a set of synchronically manifested notation units. The synchronic expression constellation thereby constitutes a redundancy structure, as defined in chapter 7, for the diachronic use.

It is precisely this redundancy function which makes it possible for a user to respect, modify or suspend the precepts the programmer has laid down in the system. From the programmer's point of view the informational expression form is an expression of a semantic purpose, i.e. an expression of a content form. The user can - hopefully - understand the message, but is only bound to take over the expression form, and even this bond is only valid for the user's starting point, as the user can change the expression form and/or its interpretation, because the expression is available in the informational notation structure.

The diachronic structure is thus a semantically open structure which is congruent neither with the idea of a programme which is executed, nor with the synchronic structure. A congruence between these structures only occurs when the machine is used as a dedicated automaton. For this purpose, it will moreover often be an advantage to exclude a number of symbolic choices by incorporating a number of procedures in the hardware. In all other cases, the programme, the synchronic manifestation and the diachronic sequence will require three different descriptions, of which the last takes precedence.

The crux of the matter here is that the possible semantic regimes not only embrace formal and closed regimes, but also informal and open regimes, as the only semantic restriction on the process lies in the demand that it must be possible to represent the semantic content in a discrete notation system with a finite number of members.

It is this circumstance which explains how it is possible to use the machine as a typewriter, where notation is subject to such elements as linguistic syntax and semantics; as a calculating machine, where it is subject to the syntax of the rules of calculation; and as a picture processing machine, where the notation is subject to a pictorial semantic regime; just as it is a precondition for the use of graphic interfaces[4] as well as Virtual Reality systems in which the user can represent selected fragments of his or her own behaviour and interact with symbols generated by a programme.[5]

This peculiar circumstance can be illustrated by comparing the letters of written notation with the corresponding informational representation of the letters we see on the screen. While a letter - for example an /a/ - in writing constitutes the smallest notation unit, an /a/ on the monitor screen is the result of a - rapidly executed - but extremely large number of individual steps comprising a series of changing synchronic states.

This sequence can in itself be described as the execution of a closed algorithmic procedure, or a little programme, but it is clear at the same time that there is no fixed relationship between this programme and the diachronic sequence in which the programme is utilized. The programme for executing such an /a/ works as a - composite - notation unit when using a word-processor. In this case the diachronic process is linguistically defined. In other cases, such as when used for calculation, such computer programmes can act as syntactic structures, and in yet others - for example in performing logical demonstrations - as semantic structures. In the case of Virtual Reality the diachronic process is a result of the interaction between a programme and the behaviour of the user, which again may be determined by a variety of motives. The informational notation structure prescribes no interpretation plane. Nor does the algorithmic linking of the individual notations into longer sequences.
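A toy sketch (mine, with a hand-made bitmap) of the /a/ example: the one perceived letter is the outcome of many individual steps, each of which changes the total state of a small framebuffer.

```python
GLYPH_A = ["0110",
           "1001",
           "0111",
           "1001",
           "0111"]                    # a crude 4x5 bitmap standing in for /a/

frame = [[0] * 4 for _ in range(5)]  # the screen's synchronic state
steps = 0
for y, row in enumerate(GLYPH_A):
    for x, bit in enumerate(row):
        frame[y][x] = int(bit)       # one step; one new total state
        steps += 1
# steps == 20: twenty state changes behind one perceived notation unit
```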

This special syntactic and semantic freedom when interpreting the binary, synchronic representation is determined by the physical definition of the notation system, which once again thus appears as a vital, central element in understanding the symbolic properties of the symbolic machine.

The facultative handling of the synchronic notation includes the possibility of replacing, re-interpreting or suspending the syntactic rules and/or semantic values. It is this structure which makes it possible to regulate computational processes with linguistic and pictorial semantics and/or bodily behaviour, even though we are not capable of formulating these semantic regimes in the form of programmes.

The use of a computer for word-processing - which, in the course of less than ten years, has changed from being almost unknown to being an almost everyday occurrence - is a good example of how the computer's multisemantic potential can be used.

If we take our starting point in the image on the screen, it can be described as a combination of a pictorial semantic and linguistic control of the computational procedure. The pictorial semantic control, which is a precondition for the linguistic (because it is the precondition for the visually simultaneous representation of a serial process), is at the same time subject to the linguistic semantic, which, as mentioned above, exploits a number of algorithmic sequences each of which corresponds to a single unit in alphabetical notation.

Word-processing is thus an excellent example of the fact that the semantic use of informational signs is not bound to formal, closed semantic regimes. The same goes for picture processing, as here, the formal procedure alone defines the elementary particles of the picture and a sequential procedure for constructing the picture in a given output medium. Here, there is only a physical-mechanical connection between the symbolic precept and the picture content it represents. While the formal picture construction elapses in time, the reading off of the picture is bound to the possibility of perceiving the whole picture simultaneously.

The semantic restriction lies neither in the binary form nor in the demand for finite algorithmic procedures, but solely in the demand that the semantic regime can be expressed in a discrete notation system with a finite number of members.

On the other hand, this demand implies a sharp restriction on the kind of rule structures which can be implemented as automatic procedures, as it only holds true of finite, formal rule systems. For symbols which are not expressed in a notation system - such as pictures - another restriction holds true, namely that they cannot be represented without loss of information, since they can only be represented with the help of a coding which defines certain selected physical values as legitimate informational units, while other physical traits are ignored. The coding is irreversible: there is no path from the informational representation back to the original.
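The irreversibility of such a coding can be sketched with a one-line quantizer (my example, not the author's): only a finite set of levels counts as legitimate notation, so the original physical value cannot be recovered from its code.

```python
def encode(intensity, levels=4):
    # Map a continuous value in [0, 1) to one of `levels` legitimate codes;
    # every other physical trait of the original is discarded.
    return min(int(intensity * levels), levels - 1)

def decode(code, levels=4):
    # Best reconstruction: the midpoint of the band. The path back to the
    # original value is gone.
    return (code + 0.5) / levels

encode(0.62)            # 2
decode(encode(0.62))    # 0.625, not 0.62: the coding is irreversible
```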

Although the user, in the case of word-processing, controls the computational process with a linguistically rooted semantic in a way similar to that in which a typewriter is used, there are also several important differences, since a number of mechanical typewriter functions have been replaced by a series of small programmes. The use of a computer for word-processing not only requires that the letters are available as programmed notation sequences; the paper that is written on must also be available in a symbolic representation in the same notation system. This symbolic representation can either be a precept for the background of the screen image, or a precept for a printing routine. It is naturally preferable to have both.

The peculiar conceptual consequence of this circumstance is that here writing is represented in the same symbolic notation as its background and that both parts are available at all times in the same synchronically manifested form. The same goes for a number of other physical-mechanical typewriter functions, such as margin and correction functions which can be simulated with the help of iconographic control.

Whereas the word-processing programme can be described as a symbolic representation of the mechanical typewriter and regulated with the same semantic, the two apparatuses produce the "same" text in two different symbol systems with very different editing and handling possibilities. These differences are connected with the underlying informational notation, which is characterized partly by being independent of the demand for direct perceptual recognition, partly by the fact that all rules are contained in the same notation as that which is regulated, partly by the fact that the text - or any other simulated phenomenon/process - is represented in a synchronically manifested notation and thereby within another time structure, and finally by the fact that the simulation of the typewriter presupposes a transformation of - at least some - physical constraints into symbolic constraints, whereby the physically invariant constraints become optional.

Each of these elements constitutes a distinct and unique feature which, together with the dynamic properties, characterizes the informational sign system as distinct from other sign systems.

Word-processing programmes use only a small fraction of these options, but they show that it is possible to control the computational process with the help of several - co-ordinated - semantic regimes.

If the more user-oriented design architecture which made its breakthrough in the 1980s is a marked expression of the possibilities which lie in the use of the informational sign system's synchronic structure and the radical, step-by-step freedom of choice - as is claimed here - it must be added that this use also has a regressive character, as the algorithmic and formal semantics which were formerly dominant have been replaced by more traditional semantic regimes.

User orientation has generally been utilized in metaphorical imitation - whether in the form of the typewriter, keyboard, paper, pencil, drawing board, tape recorder, filing cabinet or some other more closely delimited area of the existing working processes. This conservatism has also been the object of increasing discussion and criticism in several of the design-theoretical reflections of recent years.[6]

Metaphors cannot and should not be avoided in developmental work. The arbitrary synchronism of the informational sign system is not only a basic structure, but also one that is difficult to handle and which can only be used through self-chosen semantic restrictions which are not only significant for the purpose, but also for the construction of the syntactic organization. As the informational system's syntax, however, is not related to a specific semantic regime, the user-oriented perspective, whether utilized in one or another metaphorical model, can hardly be understood as more than a first step in the direction of a more radical leap from the mono-semantic to the multisemantic machine.

One of the next steps is a question of releasing the user perspective from the visually bound handling of the informational signs at the interface level, because this understanding of the user perspective cuts the user off from the potentialities which lie in the non-visually represented, underlying notation structure. As this question, which will be discussed further in section 9.5, also concerns human competence, developments here will presumably be the result of a slow and creeping process of change which is far removed from the common idea of rapid technological changes in society.

The multisemantic potential is perhaps that element which, more than any other, can motivate a comparison with human consciousness, while at the same time it distinguishes the computer from other symbolic representation media because it is connected with the specific, arbitrary and synchronic structure which makes it possible to store any input and retrieve any stored element whatsoever.

It is nevertheless more relevant to regard the informational sign system on the basis of its differences from other expression systems, because the combination of synchronic determination and diachronic freedom of choice assumes explicit, descriptive declarations.

The multisemantic potential also exists exclusively as a human relationship to the system.

9.2 The informational sign system

As informational signs are based on a synchronically manifested structure, it might be imagined that the linguistic concept of the synchronic language system would come into its own precisely in the description of these signs, although - as claimed in chapter 7 - it is not suitable for describing the common languages.

In linguistic theory, the concept of the synchronic structure appears as a concept of the invariant language system at a given time which not only organizes the linguistic sequence (usage), but also creates the framework for diachronic changes in the language system itself. The synchronic structure is thus seen as the superior instance at all times. Basically, the concept serves to establish a sharp distinction between two forms of diachrony, namely that of usage and that manifested as changes in the language system.

Hjelmslev, who takes over and tightens up Saussure's concept of the synchronic structure, thus acknowledged at a very early stage that the concept assumes a postulate to the effect that changes in usage and language norms can never bring about any change in the system. He therefore proposes the thesis that changes in the language system can only occur as the result of (tensions in) the synchronic system's structure, as this can thereby be regarded "as a self-sufficient totality, a structure sui generis", or what is today called a self-regulating or 'autopoietic' system.[7] The dynamic forces which are incorporated in this system are not described further, but Hjelmslev presumes that they have an algebraic form.

The interesting point in the present connection is not the theory's lack of validity for linguistic analysis, but rather that Hjelmslev's idea of an invariant synchronic structure forces him to distinguish between two completely separate types of diachronic processes: those changing the synchronic structure itself and those manifested in the actual usage, although the rules for - or constraints on - both types are given in the synchronic structure.

In the informational sign system, however, the synchronic structure contains no invariant rules for diachronic sequences. On the contrary, it is itself included as a redundancy structure through which the former states and sequences - including all rule structures - become accessible to change through use. Not only can the system be changed through use; use is also the only means both of constructing it and of changing what has already been constructed.

The informational sign structure, which is available in a distinct synchronic state at all times, is thus an excellent example of how synchronic structures can be included in a redundancy system in which the rules can be modified and changed through the use they regulate.

This example also shows how such a system can contain algorithmic procedures.

It is not possible, however, to maintain the concept of an asemantic - algorithmic - structure in the description of the diachronic sequence. These structures act as stabilizing elements through semantic codings which include both the composition of the algorithmic sequence and the possibility of variation, suspension and/or dissolution of the algorithmic procedure and/or its meaning.

The diachronic sequence is established by separating an individual element (a notation unit or a synchronically manifested sequence of notations) step by step into a series of synchronic states. In the given state, the element which is separated constitutes the semantically distinctive element, whereas the actual configuration constitutes the actual redundancy structure in a given state. The diachronic redundancy structure, by contrast, does not have the same unambiguous and delimited character. The individual bits in a sequence of states can alternately act as semantically distinctive and redundant, and the function of each bit is determined by the total sequence. Here, there is no semantically independent, constant structure. Any element in the informational sign can act both as redundant and as semantically distinctive, but not - as in common languages - simultaneously.

It can thus be noted that the structuralistic interpretation of the concept of the synchronic structure falls short in the description of informational signs at exactly the point where it fell short in the description of common languages, although the informational sign system is distinct from these. It is not possible to describe either the linguistic or the informational sign structure without taking the semantic content into account, and this is manifested in both sign systems in - mutually different - continuous modulations of a redundancy structure which, for the same reason, can have no delimited, distinct form.

As Hjelmslev assumes that the synchronic structure creates definite and restrictive rules for diachronic succession it is clear that using his theory to describe the informational sign requires the theory to be greatly modified, partly because the synchronic construction here is produced as a manifested notation structure, partly because the informational sign system is characterized by a free, arbitrary and step-by-step choice, precisely at the point Hjelmslev places all linguistic determination.

A modification of Hjelmslev's theory is thus also the starting point for Peter Bøgh Andersen's theory of computer semiotics, which, together with James H. Fetzer's theory, constitutes one of two significant attempts to analyse, on the basis of sign theory, what Bøgh Andersen calls computer-based signs. Fetzer, taking his starting point in a critique of the Cognitive Science/AI approaches to the analysis of informational symbol systems, dismisses the idea that informational symbol manipulation - with Newell and Simon's definition as a prototype - can be regarded as a semiotic process.[8]

While both theories formulate the semiotic approach as an alternative to the Cognitive Science computer paradigm, (in the classical version which Haugeland dubbed the GOFAI version),[9] they thus lead to two diametrically opposed conclusions. Where Bøgh Andersen would introduce the sign concept, Fetzer would exclude it.

This disagreement becomes no less striking when we add that Bøgh Andersen takes his point of departure in the semiotics of Saussure-Hjelmslev, which contains no concept concerning the structural relationship between the sign system and human use, while his analysis has this use relationship as its cardinal point. Fetzer, on the other hand, takes his point of departure in Peirce's triadic semiotics, but completely ignores the relationship between the informational system and its human interpreter(s).

The difference between the two analyses leads to one of the central problems in the semiotic description of the informational sign system, namely the relationship between the (chosen) semiotic theory and the analytical results the theory can produce when brought into play in the analysis of a never previously described sign system which is radically different from the sign systems which created the foundation for the formulation of the theory.

The two theories respond to this question in pointedly different ways. Fetzer's strength lies in his theoretical analysis of Newell and Simon's, in many respects well-defined, symbol-theoretical basis for the AI paradigm, in which he also admits that the problem presents itself in a number of new ways.[10] Fetzer, however, completely avoids the question as to whether there is a new sign system at all. This - taking the semiotic starting point into account - must be considered quite remarkable. On the face of it, the explanation is quite simple: Fetzer assumes in advance that the symbol-theoretical paradigm constitutes an adequate description of the informational sign system (or at least of its most advanced or "intelligent" forms).

This assumption is not only an expression of a praiseworthy effort to avoid misrepresenting the symbolic-theoretical paradigm, it is also motivated by Fetzer's more general enterprise, which is not concerned with the analysis of different sign systems, but on the contrary with replacing the symbol theory with semiotics as the rightful interpreter of human consciousness, as he disputes that the symbol theory can constitute a stable basis for an understanding of genuine semiotic processes, including the human sign production which, according to him, can be described on the basis of Peirce's triadic sign concept.

...the evidence that has been assembled here would appear to support the conclusion that the semiotic-system approach clarifies connections between mental activity as semiotic activity and behavioral tendencies as deliberate behavior - connections which, by virtue of its restricted range of applicability, the system-symbol approach cannot accommodate. By combining distinctions between different kinds (or types) of mental activity together with psychological criteria concerning the sorts of capacities distinctive of systems of the different kinds (or types), the semiotic approach provides a powerful combination of (explanatory and predictive) principles, an account that, at least in relation to human and non-human animals, the symbolic-system hypothesis cannot begin to rival. From this point of view, the semiotic system-conception, but not the symbol system conception, appears to qualify as a theory of mind.[11]

The first victim of the struggle for the right to occupy the place as the interpreter of consciousness is thus the analysis of that sign system which is the starting point for the struggle.

The next victim, however - with due respect to Fetzer's other merits - is semiotic theory's claim to be the adequate and general theory of human sign production, as the semiotic theory - certainly in Fetzer's Peircian form - does not include the sign production humans carry out with the computer.

The question is whether this omission is connected with Fetzer's interpretation of the theory - so that all that is lacking is an application of the theory to the computer - or whether it is a necessary consequence of the theory's structure.

In any case, it is remarkable that the semiotic theory can divert attention from its own subject area to such a degree and, apparently, completely lack concepts for delimiting different sign systems and guidelines for the way in which it can be applied to the analysis of specific sign systems.

The central argument for claiming semiotics' primacy as a paradigm for a theory of consciousness, according to Fetzer, is that semiotic theory provides the space for different forms of sign formation and sign production which cannot be described with the symbol-theoretical paradigm. This thereby raises the question as to what constitutes the common and constitutive feature of semiotic systems as distinct from other systems. Fetzer answers - with a slant in the direction of Eco's intriguing dictum that signs are "everything which can be used in order to lie"[12] - that the most apposite criterion for identifying a semiotic system is: "the capacity to make a mistake."[13]

It would be wrong to deny that a theory of human consciousness must make room for the ability to make a mistake. But when precisely this ability is made the distinctive criterion of semiotic systems, it is no longer sufficient to refer to fallibility in general; what is required instead is a clear definition of what a mistake is. Fetzer accordingly defines the possibility of a mistake as follows:

In order to make a mistake, something must take something to stand for something other than that for which it stands, a reliable evidential indicator that something has the capacity to take something to stand for something, which is the right result... to mis-take something for other than that for which it stands appears to afford conclusive evidence that something has a mind.[14]

It is difficult to see how it would be possible to decide whether the semiotic definition of the possibility of a mistake is exhaustive; on the other hand, it is not difficult to see that a semiotic definition of a mistake excludes the possibility that the mistake can at the same time define semiotics.

This circular semiotic conclusion also conceals the problem that semiotics has no criterion at all for deciding whether something is understood as an "expression of something other than that it stands for". Although the mistake criterion has its roots in a justified opposition to ontological truth criteria, when used as a distinct theoretical concept it stumbles over the same problem. Deciding what is false contains exactly the same problems as deciding what is true. The decision regarding the one is also the decision regarding the other. It is therefore advisable to take care in introducing references to decisions of this character into the epistemological foundations of science, which can rather be motivated by referring to the undecided.[15]

As mentioned, Bøgh Andersen, unlike Fetzer, takes his point of departure in Saussure's sign definition rather than in Peirce's. Neither, however, provides any reason for his respective choice of theoretical starting point, nor does this choice appear to be particularly motivated by the respective subjects. There appears to be nothing wrong with using Saussure's sign theory to arrive at Fetzer's conclusions - as the distinction between the symbol theory and semiotics is drawn primarily between the syntactic structure of the symbol-theoretical paradigm and the semantic structure of the semiotic paradigm. Nor, on the face of it, does there appear to be anything to prevent Bøgh Andersen from using a triadic sign concept, as he attempts to add a third dimension to the Saussure-Hjelmslevian concept, one which certainly bears a family resemblance to Peirce's interpretant.

The most important difference between the two theories rather has its roots in the different purposes which motivate them. Where Fetzer aims at a general theory of conscious, human sign production, Bøgh Andersen's goal is to develop a semiotic conceptual inventory with special reference to the computer as a communicative medium.

Seen in relationship to the symbol-theoretical paradigm, the medium perspective is an inversion of the relationship between the theory's original subject area and the new area of use. In the symbol-theoretical paradigm (AI and the later Cognitive Science),[16] the symbol definition has been used as a theoretical foundation for the description of what have been referred to, with an unfortunate term, as "natural languages".

The opposite path is taken with regard to the medium perspective, as here the linguistic theory which was developed in the description of spoken and written languages is transferred to the description of a different symbol system. In justifying this inversion, Bøgh Andersen introduces four objections to the formal symbol theory.

The first objection is that symbol-theoretical approaches to language description are based on logical or psychological symbol theories rather than linguistic theories. As language is treated as an expression of something else and not as language, the central linguistic insights are simply overlooked.

The second objection is directed towards a general assumption in the AI/CS tradition, namely that it is possible to describe consciousness and language as a well-delimited - individually borne - system, whereas the linguistic viewpoint emphasizes the fact that language is a basic cultural and social phenomenon which exists in the relationship between individuals.

The third objection concerns the more or less explicit mimetic or naturalistic view of representation which in particular lacks the ability to describe that variability which exists between the signifier and the signified due to the arbitrary character of the relationship.

As a corollary to this, the fourth objection is introduced as a criticism of the general leitmotif in AI research, namely the idea of imitating human consciousness, which is seen partly as a false analogy, not least with regard to language competence, partly as an expression of an effort to replace people with machines, where it would be both more correct and better to look at computers from the point of view of their meaning to those who use them.

These delimitations serve two purposes in particular. One is to motivate a return to Hjelmslev's theory in particular. The other is to include the sign production which stems from the symbol theory tradition in the description of computer-based signs, by viewing the AI/CS tradition as a producer of a special type of computer-based sign - to the extent that the results produced can actually be implemented.

However, a position made up of negative statements like "AI is nothing but...", "AI is not..." effectively discourages one from working seriously with AI. This is unfortunate since AI techniques are both exciting and potentially useful.

A more fruitful attitude in the present framework would be to describe AI as a special mode of sign production. Instead of describing a question-answering system as a case of machine-intelligence, one could describe the question-answering pairs as a special kind of computer-based signs. This would imply moving AI-questions from the "language as knowledge" box to [the] "language as art(ifacts)" box, reinterpreting AI as a discipline concerned with [the] invention of a new kind of signs.[17]

The combination of these two purposes provides a double advantage. By demanding of semiotic theory that it also include the - new - forms of sign production which are carried out with computers, it becomes clear at the same time that the semiotic theory cannot be expected to be available in an adequate form either.

Computer-based sign[s] are new, very few systematic descriptions exist, and... the glossematic procedure only gives advice for presenting scientific descriptions of well-known domains.

The problem related to working with little known symbol-systems was not recognized in the earlier stages of glossematics where the analytical procedure was mixed up with the discovery procedure.[18]

The theory - like any programme in relation to data - is on the same agenda as its subject.

9.3 The computer-based sign

To isolate the concept of the computer-based sign, Bøgh Andersen takes his point of departure in a sign model which includes four possible perspectives in the consideration and analysis of signs, namely the semiotic, the psychological, the sociological and the aesthetic.

The primary purpose of the model is to place the semiotic description in relationship to other approaches by pointing out the advantages of the approach. When semiotics, in a graphic illustration, is thus placed in the centre from which the other approaches branch out like the legs on a milking stool, this does not express - at least in advance - a postulate to the effect that semiotics should or can create the foundation for other approaches. The reason, on the contrary, is that semiotics is regarded as the most suitable theory for describing the sign system which is the main theme of the book.

Semiotic theory is thus seen as a specific perspective which views "the subject area through a particular pair of glasses", relative to other perspectives, as the semiotic perspective can only include "a subset of phenomena in the field".[20]

In spite of this delimitation, the semiotic point of view is applied in principle to all computer systems and use contexts, as the presence of the sign function is the ultimate criterion for delimiting the subject area of semiotics.

Hereby, the borderlines just established become fluid again, as the psychological, sociological and aesthetic approaches are all based on sign functions. If a border between these perspectives is to be maintained, we must therefore assume that the semiotic theory is not seen as an exhaustive theory of signs.

Whether this is a limitation in principle or the expression of an evaluation of the as yet incomplete character of the semiotic description is not made quite clear in Bøgh Andersen's exposition. The missing answer, however, is not necessarily a weakness or a flaw, but rather one of the productive questions which motivate the exposition. The relationship between the linguistic and the non-linguistic must therefore also be seen as one of the central, unsolved theoretical problems in semiotic theory, as the theory on the one hand concerns all sign formation and thereby becomes a factor in the self-reflection of other sciences, while on the other hand it indicates the sign function as a specific subject area which can be studied separately from the knowledge content expressed in the sign function.

In addition to the - perhaps provisional - borderlines which are initially drawn in order to place the computer-semiotic theory, there is another borderline, however, which is drawn rather more sharply, namely the borderline between the semiotic description of the computer as a sign system and the AI/CS descriptions of the computer as a symbol system.

While the four different perspectives regarding the sign concept previously mentioned can be understood as different - complementary or competing - suggestions for the interpretation of the computational system's relationships to non-linguistic contexts, the relationship between the semiotic and symbol-theoretical descriptions is more a dispute regarding the way the system is included in a sign function.

Where the symbol-theoretical views regard the system as a depiction of the external world - and, if it is consciousness which is being depicted, also therefore as an autonomous or self-dependent, sign producing system - Bøgh Andersen sees the system as an expression substance which can be used in human sign production.

As the system itself thus contains no signs, it cannot be part of a communicative process. On the other hand, it can be included as a medium for communication between users.[21]

This critique is partly inspired by Winograd and Flores (1986), who denied that the computer system itself had any form of semantic content.

There is nothing in the design of the machine or the operation of the program that depends in any way on the fact that the symbol structures are viewed as representing anything at all. [22]

The description of the machine processes as symbolic processes requires a motivation which qualifies this description as distinct from a description of the computer process as a purely physical process. As this motivation cannot be produced by any known machine, just as there is not even the hint of an idea as to how such a machine could be built, it is not difficult to follow Winograd and Flores' main point of view: "the significance of what is stored in the machine is externally attributed".

Hereby, however, all that is produced is a new problem, as a description of how this attribution can take place is still lacking. There is a certain vagueness in Winograd and Flores' argumentation on this point. While, on the one hand, they insist that the system as a whole must be seen in relation to an outside interpreter - and thereby as part of a sign function - on the other they are inclined to believe that this sign function can be defined solely from the use perspective and independently of the system.

While Bøgh Andersen joins Winograd and Flores on this aspect of their thinking, which goes against the symbol-theoretical understanding of the system as an independent and semantically closed system which possesses a meaning content independent of the interpreter, he deviates in his view of how the system can be described, as he uses the linguistic distinction between the expression form and content form to describe the system as:

a calculus of empty expression units, some of which can be part of the sign system that emerges when the system is used and interpreted by humans. [23]

Where Winograd and Flores attempted to subsume the view of the system under the use perspective, Bøgh Andersen emphasized the description of how - part of - the system can be included in a - use-motivated - sign function. As a consequence of this, the critique of the symbol-theoretical description of the system as an autonomous system is not directed towards the idea that meaning can be ascribed to the system, but towards the idea that only one meaning can be ascribed to it. The system is a - semantically empty - expression system to which several meanings can be ascribed.

The symbol-theoretical descriptions are thus rejected because they lack the semiotic distinction between expression form and content form. This distinction can be avoided, it is claimed, by assuming that the same form - the system perspective - can describe both the expression and content planes.[24]

According to Bøgh Andersen, such homomorphism is certainly not always unimaginable, but it places unnecessary - and often also incorrect - restrictions on the understanding of the informational potential, as it can be shown that the practical use of computers introduces structures into the system process which are not contained in the system's own structure.

Since content and interface are not properties that can be assigned to the system in itself, but are a relation between system and use, it follows that the system should not be viewed as a semiotic schema in which content and expression planes are homomorphic, but rather as a mechanism for generating the expression substance for one or more interfaces.[25]

The symbol theoretical description thus constitutes a valid description of the system only in those cases where the interpreter allows the system to determine the use completely. It is not the system itself, however, which contains the meaning, but the interpreter who establishes the semantic content by using the system as a means of expression in a sign relationship. Computers are correspondingly described as "sign vehicles that can only exist as real signs in situations where users interpret them".[26]

There are good reasons to accept - and emphasize - the possibility of using the same system to create different interfaces where different parts of the system are used in different sign functions. But the description of the system as a semantically empty expression substance is not without its problems. One of these problems appears directly if we put a programmer in the user's place, as it will hereby become evident that the semantically empty expression substance itself is produced through a sign production in which the programmer expresses a meaning content.

Bearing the programmer in mind, Bøgh Andersen himself also takes a step towards modifying the description of the system as a semantically empty expression substance, as he moves the expression elements in the system closer to an independent sign function by describing them as "sign candidates" which almost represent an intentional meaning content.

To say that the computer itself is an empty expression system is only a half truth: by relating it to other semiotic systems, e.g. the existing work language, the designer can strongly invite certain interpretations and a certain content system. I will say that the computer system generates sign candidates, reflecting the view and intention of the designers.[27]

The same, however, can be said of the relationship between an author, his book and its reader. But the relationship between the programmer's sign production and the user's differs from the relationship between an author and a reader because it is possible for the user to process each individual notation unit in the notation structure which comprises the communicative link.

The vagueness which appears here is due to the fact that Bøgh Andersen, fully in line with Winograd and Flores, treats the interpreter function as an occasional function which only commences when the system is being used. As the system itself, however, is the result of a sign production, the description of the use must consequently include at least two interpreters whose mutual relationship is distinct from the classical relationship between sender and receiver.

The central question here is not so much the relationship between the two semantic objectives which meet in use, but rather the question as to how they interfere at the expression level.

Although Bøgh Andersen accepts that the programmer has supplied the system with a hint of a semantic relationship, he still maintains the overall understanding of the system as an expression substance, as he connects up with Hjelmslev's description of the asemantic language system as a set of rules for using the figures of language.

Here, Bøgh Andersen utilizes Hjelmslev's view of the system as an asemantic structure which does not represent a content, but unlike Hjelmslev, he does not see the system as a determinant of the sequence. On the contrary, the sequence is bound to an interpreter function which only becomes manifest in use.

This adaptation of Hjelmslev's theory for analysing the informational system raises a problem, however, because the informational system, unlike Hjelmslev's language system, is available as an explicitly expressed notation system which itself can become the object of interpretation and which additionally contains the rules Hjelmslev considered as invariant. As Hjelmslev's concept of an asemantic language system stands or falls with the demand that the rules which comprise the system are not themselves an explicit part of the linguistic expression, because in such a case they would be accessible to semantically motivated variation, it is not possible to transfer it to the description of the informational system.

Bøgh Andersen's attempt also leads to an untenable distinction between one part of the manifested expression substance (or the sign candidates), which is assigned to the "system" because it presumably does not enter into a sign function, and the other part, which does. Of the manifested expression elements there are thus only some which can be utilized in a sign function.

But what, we may ask, about the part that is not used? Could this be dispensed with? Or how many possible uses should be taken into account in order to be able to establish such a borderline between notation sequences which are included in a sign function and sequences which are "only" unused substance? How long, for example, must use be observed in order to delimit what is used?

These questions concern not only the plethora of - often unused, yet usable - possible instructions which nowadays characterizes any good programme, but also the relationship between those parts of a programme which are necessary for the system and are described as underlying instructions (e.g. the operating system and many of the other automatic sequences which can enter into the performance of a programme) and those which are included as expression elements in sign production.

Nor is this a question of the extent to which it is both meaningful and necessary to work with hierarchic structures which prevent a large number of instructions from being used operationally in a given use, but on the contrary of the extent to which such an operative borderline between the concept of use and the concept of system is a borderline between sign and non-sign.

As the manifest "unused" notation elements which comprise a necessary condition for the functionality of the system are the result of a sign production and are accessible to potential use in new sign functions - depending solely on the user's competence and the purpose of the use - it is difficult to see how it is possible to exclude part of the informational notation from the sign-theoretical description. The definition of borderlines between the sign candidates which are at the user's disposal and those which are not constitutes not only a - significant - part of the programmer's work, it is also included in the actual, implemented system which is accessible to the user's processing.

As the user's possibility of using all parts of the system to regenerate it in other expression forms is not limited by the system, but by his own competence, it is not possible to eliminate any part of the system from the description of the sign function. And as the sign function does not enter the picture only when a system is used, but already with the construction of the system, the relationship between the system and the user must rather be seen as a meeting place where two sign functions - the one included in the system and the one included in the use - must interfere. The possibility of such an interference is due to the fact that the synchronically manifested notational structure can be used as a redundancy structure.

That the user can use the programmer's work as a tool for his own purpose - without thinking for a second of the programmer's sign production - does not mean that the programmer does not produce signs, but on the contrary that the user turns these signs into tools by accepting the programmer's symbolic definition.
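The double relationship can be made concrete with a minimal sketch - mine, in Python, and deliberately crude: the programmer's sign work is delivered as notation which the user can either run as a tool, accepting its symbolic definition, or re-edit as signs for a new purpose.

```python
# The programmer's sign work, delivered as editable notation:
system = {"greet": "lambda name: 'Hello, ' + name"}

# The user as tool user: the programmer's symbolic definition is accepted.
greet = eval(system["greet"])
print(greet("world"))  # Hello, world

# The user as sign producer: the same notation unit is re-edited,
# i.e. the user performs the programmer's kind of sign work himself.
system["greet"] = "lambda name: name.upper() + '!'"
greet = eval(system["greet"])
print(greet("world"))  # WORLD!
```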

If the distinction between user and programmer (or the corresponding "internal" distinction between system and interface) constitutes a relevant distinction at all, this is not because it coincides with the borderline between sign and non-sign; the distinction first and foremost indicates a difference between purposes of use and between competences in sign handling.

That it is possible to connect these two different ways of sign handling at all in the computer based "communication" is due to the fact that the synchronous manifestation implies that the informational sign system can be subject to two different semantic regimes at one and the same time.

The problem which emerges here is a question - in linguistic terminology - of the description of a communicative process where the same notation and syntactic structure contains several possibilities for semantic organization and interference between several semantic regimes.

This, however, also indicates a limitation on the applicability of linguistic terminology, because in spite of the sharp distinction between the semantic and syntactic planes, linguistics assumes that a given semantic potential corresponds to a given syntactic structure. Syntax parallels semantics.

Since other communicative media - such as the book, the film, television and the telephone - build upon a semantic regime which is common to the sender and the receiver, it is also possible to conclude that the simultaneous, multisemantic regime constitutes one of this medium's specific communicative properties.

While the conformity between semantic regimes is a basic and general condition for other communicative media and languages, the possibility of conformity in the computer medium simply constitutes a specific threshold case. The programmer cannot, in Bøgh Andersen's words, control the user's utilization.

Although a skilled programmer can be said to have full control over the computational processes that manifest the interface, he can only partially control which of its features the user exploits in his interpretation and how he exploits them.[28]

In the terminology used in the preceding chapters, we can say that Bøgh Andersen rightfully criticizes the symbol-theoretical views which regard the diachronic process as a function of the synchronic state, as he points out that the diachronic process permits semantic regimes which are not bound to the semantic description of the synchronic representation.

It thereby appears at the same time that the semantic interference between programmer and user has its characteristic form precisely because the programmer's total sign work is in the form of a synchronic re-presentation.

The synchronic form can be described as a meeting place between two different diachronic - and individually semantically determined - sequences; that which is determined by the programmer and that which is determined by the user. It is thus the user who decides the extent to which - and at which semantic level - he will subject himself to the diachronic bond the programmer has prepared. The limit to this independence lies, as previously described, exclusively in the demand that the semantic regime must be expressed in a physically defined, synchronically manifested notation system which can be used as a redundancy system.

Although the user does not re-interpret the system as a whole, it is part of his sign activity. That it will often be purposeless, or directly contrafunctional, to re-interpret large parts of the system does not change this structural relationship. If the process is regarded simply as the transition between two steps, it is quite true that it is possible to isolate part of the system as the unused part. The unused part acts here as a chosen, completely passive redundancy. As soon as there is a question of a sequence involving several transitions, however, the possibility of making a sharp distinction between the redundant and the distinctive parts of the system is lost. Those parts which are redundant in one state may become distinctive in the next.
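A toy sketch - my own, in Python - of a stepwise machine shows the point: at each single transition only a few cells of the store are distinctive, the rest acting as passive redundancy, but over a sequence of transitions the division shifts, so that no fixed part of the notation can be set aside in advance as "unused".

```python
# A toy store of notation units; 'd' happens never to be touched here,
# but nothing in the store itself marks it as dispensable.
store = {"a": 3, "b": 4, "c": 0, "d": 99}

program = [
    ("c", lambda s: s["a"] + s["b"]),  # step 1: a, b, c distinctive; d redundant
    ("a", lambda s: s["c"] * 2),       # step 2: c, a distinctive; b, d redundant
]

for target, operation in program:
    store[target] = operation(store)   # parts redundant in one state
                                       # become distinctive in the next

print(store)  # {'a': 14, 'b': 4, 'c': 7, 'd': 99}
```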

The synchronic redundancy structure which is manifested in the transition between two states does not therefore coincide with the diachronic. The synchronic redundancy structure can be described precisely, but only at the level of notation, where it includes the entire system except the current instruction and the entity this instruction handles. As the diachronic sequence comprises the transitions between different synchronic states, where alternating parts of the notation system are used distinctively, it is also characterized by a variable, semantically defined redundancy structure which does not comprise a specially delimited part of the notation system. Redundancy in the diachronic sequence cannot be described at the level of notation; here it is a function of the syntactic and semantic choices.

The synchronic structure is thus the condition for the exchange between the programmer's and the user's two different semantic expressions, which individually have a diachronic structure.

When Bøgh Andersen draws the untenable conclusion that only part of the synchronic notation structure is included in the diachronic sign function, it is not simply a consequence of the fact that language theory has no concepts with which to describe the relationship between the synchronic and diachronic processes. The explanation must also be found in the design theoretical purpose which motivates the semiotic description of the computer system, as the point of departure here is the design strategic distinction between a given system and the abundance of different possible interfaces between system and user.

It is thus the description of the interface as a mediation between system and user which forms the foundation of the definition of the concept of the computer-based sign, which does not include all the system processes it is based on.

While the system's processes are defined without recourse to the sign concept - which "permits all the computer processes and the system's structure" - all computer based signs are defined as:

a sign whose expression plane is manifested in the processes changing the substance of the input and output media of the computer (screen, loudspeaker, keyboard, mouse, printer etc.).[29]

This definition again creates a foundation for a linguistic definition of the interface concept as a collection of computer-based signs which include all the parts of the system process which can be seen, heard, used and interpreted by users.

The important thing in this definition of interface is that it denotes a relation between the perceptible parts of a computer system and its users. The system processes are substances that can be turned into expressions of computer based signs in an interpretative process that simultaneously establishes their content. The definition is one more example of a structuralist shift from focus on objects to their interrelation.[30]

It is also, however, an example of how the description of the interface structure in Bøgh Andersen is limited by yet another premise of linguistic theory. The interface, defined as a collection of computer-based signs, is delimited by the perceptibility criterion, which may well be a valid criterion for the definition of linguistic expressions, but is not valid for the computational expression system, which is precisely distinguished by the use of a physically defined notation system that is not bound to the senses.

Bøgh Andersen himself also refers directly to linguistic theory in his justification of the point of view, as he introduces the perceptibility criterion as the first of six important characteristics for the semiotic view:

The default requirement for a sign is that a human interpreter must be able to perceive it. Without expression, no content.[31]

Although it is correct that the sign function, defined by the relationship between a content and an expression plane, must necessarily have an expression, and even though it is correct that the sign function has to be perceptually accessible, it is not correct that the two requirements justify each other. The demand for perceptibility need not necessarily be valid for the notation. It may, as is the case with the computer, be fulfilled through mechanically executed transformations to an output medium with the help of physical notation which is not accessible to the senses. Nor, conversely, does the demand for notation always serve the need for perceptual recognition. It may, as is also the case with the computer, serve a non-perceptible, physically-mechanically organized, but semantically controlled manipulation of the notation system.

That the perceptibility criterion is not suitable for delimiting the sign function is also indirectly indicated by Bøgh Andersen's own analysis, as this includes invisible sign manifestations - c.f. next section - just as he also introduces a special class of invisible "ghost signs" which are defined as:

... signs that lack both permanent and transient features [which are visible]. They are not represented by icons or other identifiable graphical elements, and they cannot be manipulated directly. However, they do have a function to other [visible] signs. Like controller signs they show their existence by influencing the behaviour of other non-ghost signs.[32]

The visual criterion for the definition of the expression form appears here as a filter which conceals the unique property of the informational sign system, namely that any informational sign element, unlike those of other sign systems, always has an invisible manifestation. The possibilities for mechanically transforming these expressions into a visually recognizable expression are always limited; a complete representation of the notation during the process is not possible - and certainly not desirable.

That the visual criterion for the definition of the sign function's expression plane is misleading is finally also indicated indirectly by Bøgh Andersen's presentation, as he motivates the analysis of the visual representation as a design-strategic goal. The visual expression, the interface structure, is produced as a result of sign work which uses non-visible notation forms. It is also indirectly evident from this that the criterion of perceptibility has its relevance because the informational sign system is not available in a form accessible to the senses, as visualization is seen as a means to make the user's handling of the notation system easier.

Because of the supremacy of the interface and its functions regarding the work context, a good system structure is one that makes it easy for the designer to experiment with the different effects for achieving a given communicative purpose, and makes visible the role of the different system parts in the creation of meaning.[33]

The concept of the computer based sign is thus defined as a - visual - mediation - a symbolic interface - between system and use.

9.4 The properties of computer-based signs

The concept of the computer-based sign is described here on the basis of a productive contradiction between two linguistic theories, related to system and use respectively. The productive aspect in this is, among other things, that it locates the relationship between the two poles as a centre of gravity, whereas the linguistic tradition has to a great degree been formed in the struggle between theories which give prominence to one aspect at the expense of the other.

The background for this accentuation is correspondingly clear: the system does not play the same explicitly preordained role in spoken and written languages as it plays in the informational processes.

The concept of the computer-based sign, however, is not only interesting because it accentuates the relationship between system and use, but also because it gives rise to a classification of a number of informational sign properties - described at the interface level.

In Bøgh Andersen's classification the - prototypical - computer-based sign is created as a composition of three features: a handling feature, a permanent feature and a transient feature.

The handling feature embraces the possibilities available to the user for influencing the system with given, physical input mechanisms. Permanent features, on the other hand, are features generated by the system; they are constant during a given sign expression's "lifetime", serve to identify the sign and represent the system's components. Transient features are also generated by the computer system but, unlike the permanent features, are subject to variation during use and therefore represent changes in the system state.[34]
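Read as a data structure, the classification might be sketched as follows - a free paraphrase in Python, not Bøgh Andersen's own formalization, and all names are mine:

```python
from dataclasses import dataclass

@dataclass
class ComputerBasedSign:
    # Permanent feature: constant during the sign's "lifetime",
    # identifying the sign and representing a system component.
    icon: str
    # Transient features: generated by the system, varying during use,
    # representing changes in the system state.
    position: tuple = (0, 0)
    highlighted: bool = False

    # Handling feature: the user's possibility of influencing the
    # system through a physical input mechanism (here: dragging).
    def drag(self, dx: int, dy: int) -> None:
        self.position = (self.position[0] + dx, self.position[1] + dy)

folder = ComputerBasedSign(icon="folder.png")
folder.drag(40, 25)      # a user action regulating a transient feature
print(folder.position)   # (40, 25)
```

The drag operation already hints at the interactive sign discussed below, where precisely such user-defined actions regulate the variable features.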

That computer-based sign expressions can have permanent features is only surprising inasmuch as this feature has not previously been specified in the description of sign systems. That it now acts as a specified feature is not only because it has become relativized and specified relative to the two other features, but also because the permanent features of the computer-based sign possess, in spite of everything, no more permanency than lies in the fact that they are defined, facultative and editable features. The same naturally also goes for the so-called transient features. The permanent features are thus not parts of an invariant language system "behind" the sign function, but are on the contrary established in a manifested sign function. The "permanence" itself is defined as a specific symbolic value and is part of the expression.

That which characterizes the sign features which are "generated by the system" is therefore the structural possibility of operating with the combination of features which are maintained and the features which are varied. This is also an expression structure which is unique to informational systems. That Bøgh Andersen places great emphasis on the unique aspects of handling features which are generated by the user is probably connected with the general attempt to extend the potential use of the medium.

The characteristic feature of computer systems is the availability of handling features. The active hand movements of the "reader" are an essential ingredient of computer-based signs... Because of the handling features, the computer medium differs from the older ones in having properties also known from tools. This shows up in the interpretation of the signs... what we see are not objects, tools and actions - we see and use signs signifying these phenomena.[35]

If, however, there is to be an advantage in emphasizing the difference between the tool and the sign for the tool, this must be due to the fact that the sign function is not bound to be maintained. It is also a quite banal experience that we can use the same machine both to simulate and/or execute many different tool functions, whereby we once again come to the conclusion that there is no expression element in the system which is external to the informational sign system.

This also means that the features Bøgh Andersen connects with the interface structure must rather be seen as a more specific utilization of the general properties which are connected with the informational sign system as such.

If we similarly maintain that the sign function is connected with human use - there is nobody else who can point out the referent - whether this is a question of the design of the physical circuit, the programming of the machine, the preparation or adaptation of applications, or the end user's utilization for a given purpose, we can conclude that the handling feature is not just a new, marginal sign property in the informational sign system, but on the contrary the basic property whereby permanent features, variable features and new handling features alike are defined. All computational processes begin with a user-defined command which produces a physically organized effect in the machine.

While the handling feature at the interface level appears as determined by the system, a more general view of the informational sign system shows that it is not simply a secondary or derived sign feature, but on the contrary the feature which defines the informational sign system. The sign theoretical definition of the handling feature simply constitutes the semiotic concept of the programming process which constitutes the informational sign as distinct from all other known sign systems.

Unlike the more established programming concepts which are connected with the idea of a semantically closed whole, the sign theoretical definition of the handling feature, however, points both to the semantic and compositional freedom of choice in the construction of handling precepts and leaves no theoretical gulf between the system, the interface structure and the use context. On the other hand, it actualizes, as Bøgh Andersen moreover discusses in detail in his analysis of the work language's relationship to the non-linguistic, the theoretical and practical problems in our understanding of the relationship between symbolic and non-symbolic actions.

The concept of the symbolic handling feature not only reveals that often overlooked semantic freedom of choice in the programmed composition, it also reveals that feature which makes it possible to use Turing's choice machine, as this feature creates a foundation for yet another unique sign function, namely the interactive sign.

Bøgh Andersen defines the interactive sign as a composite sign which, unlike other composite signs such as actors, object signs and controllers, is formed in a compositional structure of system- and user-generated instructions.

The interactive sign possesses both permanent and variable features, but is distinct from other sign compositions because the variable features can be regulated by user-defined actions. Typical examples of this interactive sign function are the hero figure in innumerable games, the scroll function and a number of other tool functions from ordinary application programmes.[36]

As the interactive sign is defined as a specific utilization of the action component, it appears - if only indirectly - that it is not simply a sign function at the interface level, it also occupies a central place in the general informational sign concept.

While such a generalization is necessary, on the one hand, because all elements in the informational system both emanate from and can be included in a sign production, on the other it raises the question as to how it is possible to describe the symbolic dimensions of the interface level.

9.5 The interface between the internal and the external

As it stands, the theory of the computer-based sign is motivated in particular by design theoretical considerations which are profiled partly in relationship to other design strategies and partly in relationship to linguistic sign theories.

The main emphasis in the theoretical profiling is placed on the difference to symbol theoretical views characterized by the description of the computational process as a symbolic imitation, either of mental processes (such as Simon and Newell's neo-Cartesian AI paradigm) or of processes in the surrounding world (represented, among other things, by model and object oriented programming strategies which, with regard to the representation theory, operate with a mimetic relationship between system and external reality).

The basic reservation of Bøgh Andersen towards these theories concerns the idea that the computational system has any representational content at all which can be described independently of a human interpreter. On this point, Bøgh Andersen is completely in accord with Fetzer's critique and other Peirce inspired critiques of the symbol theoretical paradigm. The theoretical objection, however, is utilized in a different way, as Bøgh Andersen attaches himself to tool-oriented design strategies, primarily those of the American Human Computer Interaction tradition and the Scandinavian activity and work-oriented design tradition.[37]

While these strategies, and with them also those of Bøgh Andersen, have a common focus at the interface level, which is seen as a strategic key point for the integration of the system into a use context, they diverge in the theoretical description of the connection between the system's "text" and the context. It was this difference which motivated Bøgh Andersen to distinguish between a semiotic, psychological and aesthetic approach to the interpretation of what in linguistic terminology can be described as the contextual referent.

The semiotic approach, however, also implies an opposition to one of the principal design ideals in the use and work-oriented strategies, namely the idea of the "transparent" interface which does not attract the user's attention because such attention would disturb the execution of the tasks the tool is to be used for.

This opposition is a direct consequence of the element which constitutes the merit of semiotic theory, namely the focus on the possible interplay between the expression form and the content form. From the semiotic point of view, the demand for transparency with regard to the tool is an expression of a one-sided concentration on the content side of the sign function which leads to the loss of the semantic variation possibilities which are connected with the sign relation between content and expression form.

If the idea of the invisible or transparent screen, the screen as a window on the world, is a central element in the use-oriented strategies, semiotic theory is concerned with the visible screen, the screen as a pictorial or symbolic construction. The difference which is manifested through the different views of the screen, however, emerges because what is to appear on the screen must be obtained from two different places. Whereas the use and work oriented strategies regard the screen as a medium for semantic regimes in the surrounding world, in semiotic theory the screen is regarded to a higher degree as a medium for articulating a selected part of the semantic potential of the internal informational system.

If the two different views of the screen emerge because the screen is approached from two different directions, they need not necessarily conflict with each other. The difference can also be viewed as a result of the double determination of the interface level itself.

In Bøgh Andersen's definition, the interface comprises a collection of perceptually accessible computer-based signs, where the signs are used and interpreted in a given use context. Like other definitions of the interface concept, this definition was formulated with regard to the development of design strategies. The interface concept is thus defined as a working area from the point of view of the designer, as it serves to thematize the question as to how the designer can meet the user's needs. There is no reason to deny that such needs exist, but there are reasons to consider why a professional management of this need is necessary at all.

Perhaps the most obvious answer is that the need to design good interfaces stems from the fact that the lay user does not possess - and should not have to possess - professional programming competence. A good interface can thus be seen as a means of maintaining an appropriate division of labour.

This sounds like a plausible reason. But it cannot explain precisely why the interface concept originates, and how it acquires its special significance for the efficient division of labour, in connection with computer technology, when in many other cases the division of labour can be established without correspondingly specialized and professional mediation between different areas of competence.

If the consideration regarding efficiency is the correct reason for working with the design of interface structures, this must therefore be conditioned by the fact that a special kind of incongruity exists in this area.

Most interface theories ascribe the need to the many different areas of use, each of which requires its own specific interface structure, which must thus be modified relative to the different use contexts. No matter which special use is in question, however, they all have one thing in common, namely the need for an interface. While the answers to the problem differ from case to case, the source is always the same. The need to design the many different interface structures does not stem from one or another of the use contexts, or the special features of working competence, but from the character of the computer and the informational representation.

It would therefore appear most obvious to define the interface concept relative to the informational system.

If we take our point of departure in the lay user's standpoint, it is natural to point to the formal descriptive languages, which have often been used to handle the informational process, as a central competence barrier. This, however, can be compensated for through training without removing the need for an interface. An interface structure is also required in order to utilize formal languages to control the informational process. The need for an interface thus does not stem from the formal language; on the contrary, it stems from the mechanical form of the informational process, which is not accessible to the senses.

The demand for perceptual accessibility is therefore rightfully included as a basic criterion in Bøgh Andersen's definition of the interface concept as "the relation between the perceptible parts of a computer system and its users".[38]

In opposition to the older system theoretical definitions, which describe the interface as part of the system, he connects the criterion of perceptibility to the needs of the lay user, as the perceptible part of the system is seen at the same time as a set of restrictions placed on the lay user through the system.

Both the programmer and the lay user, however, must always handle the informational process through some kind of interface which uses perceptible expressions to handle the internal process in the machine which is inaccessible to the senses, just as all operations involve a change in the total state of the system, no matter which parts are accessible to the senses. The interface must therefore be described as a medium which permits the necessary exchange between a perceptible expression form and the non-perceptible informational notation. In its general form the concept therefore embraces any kind of input or output medium, which is also in accordance with the postulate that the interface is not necessary because of the user's - lack of - competence, but due to the character of the technology involved.

Although we may theoretically be able to imagine that the conversion from perceptible to non-perceptible expression forms takes place as a complete conversion - for example with the use of the binary notation of input and output - such a complete conversion would in reality imply that the computer could not be used as a computer. Pure binary notation contains no syntactic or semantic structures. Consequently these structures can only come to expression at the interface level which, for exactly the same reason, must be designed as a selective - perceptible - compression of an internal notation structure which is not accessible to the senses.
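The contrast can be sketched as follows - a Python illustration of my own: a complete binary rendering of an internal state is available in principle, but carries no syntactic or semantic structure, whereas the interface presents a selective, perceptible compression of the same notation.

```python
# One internal state of five notation units (bytes):
state = bytes([0x48, 0x65, 0x6C, 0x6C, 0x6F])

# (a) Complete conversion to perceptible binary notation:
# exhaustive, but without syntactic or semantic structure.
raw = " ".join(f"{byte:08b}" for byte in state)

# (b) A selective, semantically motivated compression of the same state:
compressed = state.decode("ascii")

print(raw)         # 01001000 01100101 01101100 01101100 01101111
print(compressed)  # Hello
```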

The informational sign system is thus characterized by a double expression structure, whether the machine is used to control another machine, to simulate a calculating machine, a logical procedure, a drawing apparatus, a typewriter, or as a medium for storing and processing information.

If the machine is used as a dedicated machine which must always execute the same set of repetitive procedures notwithstanding their complexity, designing the interface constitutes a one-off problem. If the syntactic and semantic structure required for executing these procedures has been discovered, the machine can work as an automaton and the demand for perceptibility will only be in evidence before, after and in the case of disturbances. This borderline case at the same time reveals that the demand for perceptibility is closely connected with the utilization of the computer's syntactic and semantic potential and that this potential can only be expressed at the interface level, while it is effectuated at the internal notation level.

This background also makes it possible to understand the use of the screen as a central interface medium. The screen, as will be familiar, is not a necessary part of a computer system and even though the first screen was made use of in the middle of the 1950's, a quarter of a century would elapse before the comprehensive syntactic and semantic control potential made possible by the use of the screen was taken up in earnest.[39]

Looked at from the lay user's point of view, these possibilities lie especially in the introduction of graphic and linguistic means of control which redress the formal description barrier. From the designer's point of view, the same possibilities offer the opportunity to include information on later use in system development.

The result was a significant breakthrough, a new epoch, both in computer technology and in the history of society. There are nevertheless reasons to see the convergence between the two attempts as a provisional convergence, with the two parties each taking their own direction, which in both cases raises a more general problem of competence.

Where the designer is on his way out into the world, the user is on his way into the system. A good interface does not therefore help to remove the barrier, neither for the designer nor for the user; on the contrary, it extends it, because it implies that both parties will find it necessary to acquire more knowledge of an unfamiliar area of competence.

To the immediate and in itself far-reaching advantage which lay in the use-oriented definition of the interface can thus be added another, which may also have far-reaching effects, namely the advantage that lies in the fact that the same interface has both a semantic component which is determined through the system and one which is determined through use. This implies that the definition of the interface - including the screen - as a delimited meeting place between two distinct components must be abandoned. The actual meeting between these areas of competence does not take place at the screen's interface level, but between two different interpreters who regard the screen in different ways.

In order to describe the relationship between these interpreters it is necessary to describe the interface as a synchronic transitional state between two - or more - different diachronic, semantic sequences.

Seen in relationship to the internal notation structure, the interface reproduces only a segment which originates as a semantically motivated selection carried out by the system's designer. There is thus no question of a complete representation of the system's synchronic structure at a given stage, but of a semantically motivated compression which distinguishes a sequence of diachronic transitions at the level of internal notation as a perceptually accessible semantic structure. The synchronic re-presentation on the screen, seen from inside the notation, is a fiction; the image on the screen is only synchronic if it is seen as a semantic structure, without taking into account the flicker which reveals that the constant image itself is diachronically constructed.

While the screen image, seen from the system's side, appears as an output, from the user's side it is at the same time the point of departure for an input, where it is not only possible to utilize the perceptible output, but the entire system. Screen representation, therefore, permits a transition between output and input without loss of the informational notation.

Even though the synchronic interface is produced as a set of semantically defined restrictions which are meant to help the user to handle the internal process, the synchronic form means that the user also gains semantic freedom with regard to these restrictions. Not only can he choose between what is offered, he can also - depending solely on his competence - choose to redefine the semantic structure by ignoring what is offered or by using it for other purposes.

While this possibility appears, on the face of it, contrafunctional from the use-oriented design viewpoint, it is not necessarily so from the user's. The central question here is the degree to which it is relevant for the user also to acquire areas of competence which make him capable of utilizing this potential of the informational sign system.

As poles in this area, we have on the one hand the fully developed, finalized application system which, for this very reason, approaches a functional use similar to that of a traditional machine and, on the other, a machine which only works as a heater. The interesting area, however, is all the possible intermediate forms between these two mechanical poles, as it is only these which make the machine a computer, determined by its symbolic properties.

If the idea that we must all become programmers is untenable, which there is at present good reason to accept, because it unnecessarily disregards the advantages of the division of labour, the idea that most of us should only be innocent users is equally so. The informational medium has its own properties which can only be used by those who learn to express themselves through them.

Just like the picture of the automatic machine, the picture of the perfect interface is also the picture of a computer which is not a computer. In these cases, the machine wins out over the sign. In all other cases, the thought wins out over the sign and the machine because there is a functional relationship between at least two semantic regimes which require two different forms of sign work, the programmer's and the user's. In order to edit the programme, the user himself must execute the programmer's sign work.

If, however, we wish to use this for a purpose, we must necessarily perform another sign work - namely the user's. The informational sign system's syntactic structure is always subject to both semantic regimes, which coincide only in certain borderline cases. The informational sign system, relative to other sign systems, thus implies a structural doubling of the sign work.



Notes, chapter 9

  1. Goldstine and von Neumann, (1947-48), quoted here after the excerpt in Goldstine, 1972: 269.
  2. C.f. Trakhtenbrot, 1988: 620, who argues that certainly some of the imperative programming features can be contained in a structured programming language based on Church's lambda calculus.
  3. Turing, (1936) 1965: 117.
  4. A comprehensive sign theoretical analysis of graphic screen communication can be found in P. Bøgh Andersen, 1990, which is discussed in more detail in sections 9.3-9.5.
  5. In spite of the name Virtual Reality, there is neither more "virtuality" nor "reality" in those systems than in any other symbolic system such as ordinary language, for instance. In both cases we use a part of our own body to produce symbols, whether as a symbolic expression of a movement we make in the actual situation or as an expression of something which is not present such as when we talk of cows, for instance, without having one at hand.
  6. Thus in Ehn, 1988, Bannon, 1990, Bannon and Schmidt, 1990 and P. Bøgh Andersen, 1990.
  7. Hjelmslev (1934) 1972: 38 and (1943) 1961: 6.
  8. Peter Bøgh Andersen, 1990 and James H. Fetzer, 1990.
  9. Haugeland, 1985: 112. GOFAI is an acronym for Good Old-Fashioned Artificial Intelligence.
  10. Newell and Simon's symbol definition is quoted in chapter 5 and is also discussed in the epilogue.
  11. Fetzer, 1990: 52.
  12. "Semiotics is concerned with everything that can be taken as a sign. A sign is everything which can be taken as significantly substituting for something else. This something else does not necessarily have to exist or to actually be somewhere at the moment in which the sign stands in for it. Thus semiotics is in principle the discipline studying everything which can be used in order to lie. If something cannot be used to tell a lie, conversely it cannot be used to tell the truth: it cannot in fact be used 'to tell' at all." Eco (1976) 1979: 7.
  13. Fetzer, 1990: 40, with a discussion of the possible fallibility of purely syntactic systems, p. 56 ff.
  14. Fetzer, 1990: 40.
  15. C.f. Finnemann, 1990b.
  16. As separate terms, AI refers primarily to the rationalistic symbol theories of the 1950's and 1960's (among them those of Newell and Simon) and Cognitive Science to the 1970's and 1980's (including Fodor and Pylyshyn). The journal "Cognitive Science" was founded in 1977. AI is also used as a common, general concept for both directions and sometimes also includes the empirical network theories. The latter is also true of Cognitive Science. This terminological sliding reflects an increasing tendency to define areas of research on the basis of methodological criteria, although a definition based on the subject area cannot completely be abandoned. A permanent discipline, however, must increasingly emancipate itself from purely methodological definitory criteria, as otherwise it will end as a victim of its own dogmatism. On the other hand, a direct binding of the method to the subject area would block the investigation of the subject area, not least when it comes to investigating those areas where disciplines draw their mutual borderlines.
  17. Bøgh Andersen, 1990: 24. C.f. also the use of this viewpoint for developing "narrative systems", Bøgh Andersen and Berit Holmqvist, 1990.
  18. Bøgh Andersen, 1990: 16.
  19. Bøgh Andersen, 1990: 18-20.
  20. Bøgh Andersen, 1990: 20.
  21. Bøgh Andersen, 1990: 120.
  22. Winograd and Flores 1986: 86.
  23. Bøgh Andersen, 1990: 120.
  24. Bøgh Andersen, 1990: 128.
  25. Bøgh Andersen, 1990: 130.
  26. Bøgh Andersen, 1990: 23.
  27. Bøgh Andersen, 1990: 131.
  28. Bøgh Andersen, 1990: 183. One might object here that it is always possible for the receiver to interpret any message independently of the intentions of the sender - and hence not under his control, but the synchronous manifestation still provides the computer with a multisemantic potential of its own, since it both allows the receiver to take the position of the sender as editor of the message (the programme), to use the message as intended in a variety of ways, and to use the message in a variety of ways which were not foreseen, by reinterpreting various features.
  29. Bøgh Andersen, 1990: 129. By defining the figures without recourse to the sign concept, Bøgh Andersen moreover breaks with Hjelmslev's premise, as the figures here can only be distinguished through a sign analytical dissolution of the expression into the smallest units.
  30. Bøgh Andersen, 1990: 129.
  31. Bøgh Andersen, 1990: 188.
  32. Bøgh Andersen, 1990: 221.
  33. Bøgh Andersen, 1990: 175.
  34. Bøgh Andersen, 1990: 176 ff. with examples and a more detailed analysis.
  35. Bøgh Andersen, 1990: 311.
  36. Bøgh Andersen, 1990: 199 ff., where the typology is described in more detail.
  37. The "Scandinavian tradition" is usually traced back to the Norwegian computer scientist, Kresten Nygård. The label 'activity-oriented' "human activity approach" is taken from Susanne Bødker, 1987, who provides an analysis of the interface concept. The label "work-oriented" is taken from the title of Pelle Ehn's book, 1988, where he discusses a number of the Scandinavian tradition's projects as an approach to an extension and renewal of the design concept.
  38. Bøgh Andersen, 1990: 129.
  39. René Moreau, (1981) 1984: 86. The first screen used as a medium for the operator's intervention in the process is believed to have been used for the first time in 1954 in a machine built by IBM (NORC, or Naval Ordnance Research Calculator, which was inaugurated by John von Neumann on 2 December 1954). Cathode ray tubes and radar had formerly been used for more specific purposes where visual access was required to particularly critical parts of the process. Visual representation, however, was only regarded as an auxiliary function in monitoring the system process and the screen image was usually defined in very few parameters, for example a fixed number of lines with a fixed number of signs per line, whereas the graphic screen image is typically defined by dots.