
Niels Ole Finnemann: Thought, Sign and Machine, Chapter 8 © 1999 by Niels Ole Finnemann.

8. Informational notation and the algorithmic revolution

8.1 The problem of noise theory

As appeared from chapter 6, Shannon used the information concept as a concept for an expression unit which could be defined independently of the language in which it was included, as the individual notation was primarily defined as a physical value. The exact definition of the physical form of the notation, however, was not sufficient to define the individual notation unit because any physically defined notation form can also exist as a physical form without being a notation. In other words, a semantic component is also included in the definition of informational notation.

As this holds true in general of all notation forms, the conclusion was drawn in chapter 7 that the use of notation systems assumes a double coding of the individual notation unit, as there must both be a coding of the physical form - relative to the physical background noise and to the physical forms of other notation units - and a coding relative to the occurrence of the same physical form as an unintentional, illegitimate form.

While the first coding appears as a solution to a purely physical noise problem, the second coding appears as the solution of a semantic noise problem. This appears, in other words, to be a question of two mutually independent code procedures which answer two clearly distinct noise problems.

The relationship, however, is more complicated, as Shannon's analysis also showed that it is possible to compensate for an elimination of redundancy in the physical notation structure with a semantically determined redundancy. There is thus an inner and variable relationship between the two codings.

This conclusion therefore gave rise in chapter 7 to a more detailed investigation of how noise problems are solved in other notation systems - primarily those of common language and formal language. It appeared from this that there is always an internal connection in the solution of the two noise problems in the individual notation system and that this internal connection differs in different notation systems.

While some differences concern the use of the physical properties (i.e. some of the properties) of a given expression substance in different ways, others concern the utilization of different properties of the same expression substance, and others again are connected with the mutual differences between expression substances. To this comes, finally, the fact that different properties (of the same or of different expression substances) can be utilized to solve the same problem, and that the different notation systems each have a set of possibilities for substituting the use of semantic content criteria for the use of physical form criteria.

At the same time, the criteria used to establish the limits of physical variation establish a set of conditions for the semantic exploitation of the physical forms. The solution to the two noise problems is thus always a solution which establishes a set of semantic variation possibilities which can be used in a given notation system.

Although the comparative analysis took its point of departure in Shannon's utilization of the redundancy function to solve the noise problem, it was not possible to use any of his mutually inconsistent notions of redundancy to describe the different redundancy structures of common language. Moreover, a general definition of the redundancy concept was given and it was shown that notational redundancy plays a central role for the properties of common languages, as different semantic potentialities are connected with the notational redundancy structures of written and spoken languages respectively, just as it was shown that the use of notational redundancy structures distinguishes the common languages from formal languages, which are characterized by the elimination of notational redundancy.

The intention now is to resume and pursue the analysis of informational notation with the point of departure in the results of the comparative analysis.

In section 8.2, I show that the redundancy functions used by Shannon contradict his own definition of the redundancy concept, whereas it is possible to describe these functions with the help of the definition given in chapter 7.

In 8.3 - 8.5 there is a description of the semantic variation mechanisms of informational notation relative to linguistic and formal notation. In 8.3 the emphasis is placed on the informational use of properties which are also used in other notation systems, while the emphasis in 8.4 and 8.5 is on properties which are only used in informational notation, namely 1) the notation's independence of the demand for sensory recognition, 2) its mechanical effect and 3) multisemantic potential.

As the treatment of informational notation is based on the use of algorithmic procedures, the significance of algorithmic procedures for the semantic properties of informational notation is treated in sections 8.6 - 8.8, while the synthetic description of the informational sign function as a whole is the subject of chapter 9.

8.2 The redundancy concept in information theory

The central problem of noise theory in Shannon's analysis, as we saw in chapter 6, was the question of how to decide whether a given physical form appears as part of a message or as a result of an unintentional noise effect.

It is obvious that the way this problem presents itself is of particular interest in connection with work on electrical signals, because here the signal values are expressed through threshold values of varying amperage and duration in a continuous medium, and because the notation - as the most decisive factor - is handled (transmitted) independently of the human interpreter.

Shannon thus had good reason to make the notation form the object of a separate consideration independent of human sensory and meaning recognition. He also had good reason to speak of the general character of the way the problem presents itself, just as he found the right means to handle the technical problem, in that he suggested that transmission could be stabilized by increasing the redundancy of the message.

If we consider Shannon's own redundancy concept here, however, the suggestion is meaningless, as he used the concept to cover all forms of repetitive structure which - due to the repetitive element - are regarded as superfluous and without significance for the content of the message. In addition, he assumes that this concept also embraces the rule structures which are valid for the given symbolic language, and that meaning is contained only in the signals which occur quite arbitrarily, as deviations from any form of repetitive structure. On the basis of this redundancy concept the idea of stabilizing the message by increasing its redundancy is a waste of time: if redundancy is completely superfluous, it will naturally not help to add more of it to the message.

While Shannon starts by defining redundancy as that which is without importance for the meaning, he continues with two mutually different definitions of redundancy in contrast to the meaning (the one equal to the system-determined part, the other equal to the alternative, possible, but unused choices). The redundancy concept he uses in the statistical description, however, is a fourth, as redundancy is defined here without any regard to meaning at all. With this definition, redundancy is determined solely by the statistical procedures used.

This redundancy - in accordance with Shannon's asemantic approach, but contrary to his supposition that meaning content is only manifested in the random variations - is thus completely independent of the meaning content and rule structure of the message. For example, Shannon would not be able to decide, on the basis of the determination of this redundancy in a message, whether the message existed in a common language or in formal notation. The redundancy function is solely determined here in relationship to the physical manifestation of the expression - i.e. the form of the expression substance.

As a consequence of this any message contains redundancy of this form, simply if a given notation occurs more than once, or simply if a single notation comprises some quantity of repeatable, smaller physical units.

With this definition, the idea of increasing redundancy in order to stabilize the message immediately becomes more understandable, because the elimination of the thus determined redundancy will unavoidably come to affect the content. It is therefore all the more peculiar that the method Shannon proposes for increasing redundancy builds upon yet another, fifth, definition, as he suggests that redundancy can be increased by adding a set of control codes so that the validity of the individual signal or signal sequence is conditioned by preceding and subsequent signals. This condition could be fulfilled by describing the notation system with the help of a formal semantics in which a numerical value is ascribed to the individual notation units. He thereby showed that it was possible to solve the semantic noise problem independently of the language in which the message appeared - and in this sense without regard to meaning.
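One classic instance of such a control code is a simple parity check, in which the value of an added signal is conditioned by all the preceding signals in the block. The following sketch is a minimal Python illustration of this principle (an assumed example of a standard technique, not a reconstruction of Shannon's own codes):

    def add_parity(bits):
        # Append one control bit so that the number of 1s in the block is even.
        # The bit is redundant with respect to the content of the message,
        # but its value is conditioned by the preceding signals.
        return bits + [sum(bits) % 2]

    def is_legitimate(block):
        # A received block is accepted only if the parity condition still holds.
        return sum(block) % 2 == 0

    message = [1, 0, 1, 1]
    sent = add_parity(message)       # [1, 0, 1, 1, 1]
    noisy = sent.copy()
    noisy[2] ^= 1                    # a single signal altered by noise
    print(is_legitimate(sent))       # True  -> accepted as a legitimate form
    print(is_legitimate(noisy))      # False -> rejected as noise

The check is carried out without any regard to what the message means; the procedure ascribes a numerical value to the notation units and tests a purely formal condition.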

Here, it is no longer a question of a purely statistically determined redundancy structure which can be described at the level of notation, nor of a redundancy which can be defined relative to the physical form, but of a determination of redundancy relative to a - formal - semantic interpretation of the notation's value.

The codes Shannon used to increase redundancy can be derived neither from an analysis of the physical notation nor from the stochastic procedure used. They can only be derived from a semantic interpretation of the given message, because the asemantic consideration - whether founded on the notation's physical form or on the statistical repetition structure - contains no criterion for distinguishing the random variation emitted by the source of the message from the random variation emitted by the noise source. Shannon's use of the redundancy concept in connection with these control codes only has meaning if the codes are seen in relationship to the original meaning of the message. They are redundant only in this relationship, because with regard to the code procedure and the physical structure of the notation they are just as distinctive expression units as the "meaning-bearing" signals.

That Shannon did not attach much importance to the difference between these concepts can probably be explained by the fact that he used a formal semantics which was independent of the semantic structure of the original message. To this comes the point that the asemantic redundancy concept (in the form in which it was defined relative to the expression substance) could be used on any notation system, just as it was this concept which provided the economical advantage in transmission, whereas the semantic concept was an economical liability - albeit a very small one.

Shannon, however, not only used semantically determined redundancy because it was possible or economical, but because it was necessary, as the asemantic approach was not sufficient. It was only possible to eliminate the one redundancy structure by establishing another. Shannon's analysis therefore provides yet another important result, as it demonstrates that a variation of redundancy at one level can be compensated for by a variation at the other.

Shannon's analysis thereby also confirms - contraintentionally - that the redundancy structure is necessary in order to establish the symbolic legitimacy of the notation units and that the physical and semantic determination of the notation units constitutes two mutually connected variation axes.

Shannon's demonstration of the importance of the redundancy structure for positive physical-mechanical recognition is therefore not connected with his mistaken idea that any form of redundancy can be described with the help of a stochastic procedure. His own analysis shows, on the contrary, that a semantic component is always included in the stabilization of the expression unit in the physical expression substance and that expression redundancy enters into an internal relationship with content redundancy.

While the description of redundancy as a meaning independent "system function" must be abandoned, Shannon's use of a meaning related redundancy structure confirms first that the redundancy function is a precondition for stabilizing the expression form in the expression substance and, second, that it is also a precondition for the establishment of the sign function as a link between the expression form and the content form.

8.3 Linguistic, formal and informational mediation between the expression substance and meaning

While Shannon's analysis on the one hand - directly contrary to its main purpose - leads to the conclusion that it is not possible to provide a purely physical or algorithmic - or other form of asemantic - description of notation forms, it also reveals on the other that the "physics" of notation forms, the manifestation of notation forms in the expression substance, plays an important role for the semantic use of the notation system.

This too only appears indirectly because Shannon uses the electrical signal as a prototype of the concept of notation. He thereby assumes, 1) that the informational form has an unambiguous physical value, 2) that a notation unit in an arbitrary notation system is defined by a set of - very few - invariant physical values (signal strength and duration), 3) that the same yardstick is used for the definition of the different notations, and 4) that the individual notations follow each other in a single-stringed serial order or in synchronized parallel series.

It appears to be possible to fulfil these four conditions with the necessary precision as far as energy-based mechanical transmission systems are concerned, where communication is understood solely as a question of reproducing the same physical manifestation:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.[1]

While these four assumptions, on the one hand, exhaust the possibilities of a precise physical definition - it is not possible to solve the problem of physical noise with more precise physical criteria for the definition of notation units - they give, on the other, far too narrow a picture of the possibilities we have for utilizing the physical expression substance for symbolic purposes.

While all notation systems are based on critical thresholds which delimit the notation system relative to the physical medium and the other notation units, the distinction of physical forms can be brought about in several ways, each of which is connected with a set of semantic variation mechanisms, as the solution of the physical noise problem is connected with the solution of the semantic noise problem.

As appeared from chapter 7, spoken language, which uses far less precise physical criteria, contains such possibilities as sound variations, tonality, stress, dialectal and sociolectal characteristics etc., which can be used for distinctive purposes. Spoken language thus permits the physical values to be varied during use, while the demand of informational notation for an exact physical definition implies that the possibility of variation is excluded. The different relationship to the expression substance gives the two expression systems different semantic potentialities.

This is not only true in the sense that different stability criteria are connected with the different physical expression substances, but also in the sense that the same problem can be solved in different ways - also within the same notation system - as the semantic component which is included in the definition of the individual notation unit can be included in several ways.

Shannon's own analysis also provides examples of both, as he is concerned with the differences between analogue and discrete signal systems, which are based on different forms of symbolic use of the "same" physical expression substance, and - as we saw in 8.2 - with both expression-determined and content-determined redundancy as means of stabilizing a message in a given notation system.

In these cases, the redundancy function serves first and foremost as a means of stabilizing the message's expression form in the expression substance, as the redundancy function helps to distinguish the legitimate physical forms from the identical - as well as the non-identical - illegitimate forms. But the effect relative to the expression substance can on the other hand only occur because the redundancy function also has a semantic component, so that the stabilization downward is connected with the stabilization of a superjacent level, whether this is the notation level or the semantic content level.

Redundancy thus also serves to distinguish and stabilize a level above an underlying level and to make possible the formation of new superjacent levels. The precondition for this duality is that the underlying notation system as a whole is included as a redundancy potential for a semantic utilization at a superjacent level.

There are examples of this in connection with linguistic notation in chapter 7, where we saw that the legitimacy of the notations could be founded on habitual conventions for notation sequences, on syntactic rule structures and on the semantic context. In addition to this comes the fact - compared with the number notation system, informational notation and the Morse alphabet - that a reasonably large number of different notation units is used, which makes it possible to use conventions for illegal, but possible combinations, while any combination of numbers, for example, can be legitimate. As examples of illegitimate combinations in Danish we can mention /dn/ in the same syllable and the occurrence of a number of consonants (e.g. b, d, f, g, h) before /s + vowel/ in word roots. Over and above the illegitimate combinations there are also many combinations which are not used, but which are nevertheless legitimate.

Although the different methods for solving the noise problem of written language notation are often used simultaneously, with over-determination as a consequence, these provide no complete guarantee. Over-determination contributes to an increase in the stability of language, but does not exclude such things as the occurrence of printer's errors which completely alter the sense of what has been written.

The solution in written language of the problem of noise, however, is not only concerned with the establishment of criteria for legitimate occurrences. The methods used simultaneously establish a framework and possibilities for utilizing the notation units as semantic variation mechanisms.

The habitually established conventions, the semantic context and the syntactic rule structures not only each make their own contribution to a stabilization of linguistic notation, they also each provide their own set of conditions for the use of the smallest semantic variation mechanisms, while over-determination also provides considerable room for semantically motivated deviation from rules and conventions.

The linguistic solution of the noise problem is thus connected with 1) the use of semantically empty notation units, 2) the limited use of rule determined notation sequences, 3) a relatively large latitude for semantically motivated rule, norm and convention deviation.

The extent of the semantic variation potential was shown, among other things, by the fact that the rule structures play a much smaller role for the stability of the notation systems than the convention determined norms and the semantic context. The rule structure thus works mainly on the relationship between whole words (rules of word order) and suffixes (and possibly from there back to the root).

The use of rule determined notations plays a limited role in the linguistic solution to the noise problem and there are no overall rule structures for the use of the various means of stabilization either. The stability of linguistic notation depends on a plurality of different mechanisms, each of which is accessible to semantically motivated variation. Conversely, over-determination permits the notation forms to undergo change without meaning being affected. Such changes, which necessarily arise as - individual - variations, can later emerge as new rules of expression.

In contrast to this situation, formal notation systems operate only with semantically determined notation units, as the individual notations are determined either by a rule function or by a content value.

While written language notation is stabilized through the use of a great number of conventional notation sequences, stable rule structures, over-determination and the meaning of the context, formal notation has few built-in stabilization mechanisms. Formal notation gains its precision by replacing the redundancy structure of written language with a semantic definition of the value of each notation unit. The content form is unambiguously connected with the expression form through this definition. The formal expression therefore not only has a different and greater vulnerability with regard to printer's errors - i.e. the occurrence of unintended, physically legitimate notation forms - it also has a different potential for semantic variation.

While the linguistic notation unit is a semantic variation mechanism which can only work through the context - having no independent semantic value - the formal notation unit is a semantic variation mechanism which only influences the context through (a change in) its own semantic value. As the declaration of the value of the individual notation (as a referent to a general rule or to the content which is regulated) is at the same time a declaration of the physically legitimate notations, formal language permits the use of an indeterminately large number of "local" notations, whereas common languages operate with a limited (although modifiable) number of general notation units. Formal notation substitutes linguistic notation redundancy with a rule determined notation, as the definition of semantic value is always connected with the definition of a certain physical form.

In spite of these differences, the notations in both systems are basically determined through their function as semantic variation mechanisms, while the physical definition is primarily based on criteria of identity and difference which can be registered by the senses. The sensory criterion is not distinctive in the relationship between these notation systems, but on the contrary, to a great extent determines the use of linguistic notation units in formal notation. While formal notation can use any linguistic notation unit for its own purposes, language, on the other hand, can express any formal content.

Informational notation has a number of features in common with the common languages, a number of other features in common with formal language and, finally, a number of features which are unique to this notation system.

The kinship with written language notation finds expression first and foremost in the fact that both notation systems are based on the use of semantically empty notation units, that no separate rule notations are used and that a limited set of notation units is used. But even on these points there is still no complete identity, partly because written language in some cases permits a notation unit to have an independent semantic value, which is never the case with informational notation, partly because written language permits the introduction of "locally valid" notation units, which is not possible in informational notation either.

The two notation systems, however, are also distinguished by a number of other points. First, written language notation is subject to the demand for sensory recognition, where informational notation is subject to the demand for mechanical efficacy. Second, written notation - as described in chapter 7 - uses a number of different forms of notational redundancy which cannot be used in informational notation, just as written language uses qualitatively different notation units (vowels versus consonants, punctuation marks etc.) while it is not possible to qualitatively determine the informational notation units. Third, the entire inventory is used in all informational expressions, no matter how short they are, whereas common language uses a variable range.

In addition to this come differences in the relationship between the notation system and the expression substance. Both spoken and written language and formal notation thus allow considerable latitude with regard to the physical form of the "same notation units". Informational notation, on the other hand, is subject to the demand for an unambiguous, invariant definition of the physical form of the notation units and this form cannot be varied during use.

The unambiguous definition of the physical form permits the emancipation of the notation from the demand for direct, sensory recognition, while conversely we must say that this demand, which holds true of the human recognition of letters, is not based on an unambiguous invariance with regard to form. It is thus not possible to use the physical form of letters as the smallest physical expression units in energy based media, while it is conversely extremely difficult to use the human sensory apparatus to handle a notation which is defined by physical criteria that are not subject to the demand for sensory recognition.

Different discrimination procedures are thus used in distinguishing the physical features of notation systems. Distinguishing physical characteristics occurs in different ways and these differences are of exceptionally far-reaching importance for the possible uses of the physical form for symbolic purposes.

The difference in the physical definition of different notations is not dissolved by the fact that it is possible to convert an expression which appears in one form into another. The different forms still provide the possibility of different kinds of use. The most striking difference is that the informational notations lack any kind of quality in their definition, implying that any possible quality can therefore be ascribed to a constellation of the same two units.

In spite of these differences, the mutual kinship between these notation systems is considerably stronger than the kinship between informational and formal notation systems. First, there are simply fewer similarities between informational and formal notation. As shown in the table (p. 276) of the typical characteristics of selected notation systems below, there are only two common features (namely single-stringed seriality and no utilization of redundant notation sequences), of which the first must still be modified, as we shall see in section 8.4. Second, the differences between informational and formal notation are of fundamental importance for the properties of the two notation systems. Whereas formal notation operates only with semantically determined notation qualities, as all notations are either rule or data notations, the ascription of meaning in informational notation is connected with sequences of notation units in which rule and data values - as in language - are manifested in the same notation units. The individual unit is used as a notation unit both in sequences which can represent a rule and in sequences which can represent a content value.

Finally, there is the additional fact that formal and informational notation are also distinguished by the way the semantic value is bound to the physical manifestation. Although a semantic component is part of the definition of the physical form of informational notation, this semantic component can be expressed independently of the semantic content of the informational sequence because - as was evident from Shannon's analysis - it can be brought about through an appropriate code procedure which does not affect the semantic content of the sequence. Informational notation is thus characterized by the possibility of distinguishing the definition both of the individual notation unit and of the notation system from the ascription of semantic content value, and can therefore also be used as an expression system for a plurality of semantic regimes.

In formal notation systems it is also possible to ascribe a new value to an already given notation, but the relationship between the notational expression form and the content form is wholly rule determined for each individual notation unit and there is always a direct equivalence between the physical form of the notation unit and its semantic value.

Formal notation systems mainly use the expression substance as a means of stabilizing the expression, whereas the properties of the expression substance, both in common languages and in informational notation, are used as semantic variation mechanisms, but in a mutually very different way.

In informational notation the exact physical definition is used as a basis for the notation's mechanical effect. In spoken language, which has physically weakly - or broadly - defined means of expression, the physical variation is used for a multiplicity of semantic purposes (dialectal and sociolectal characteristics, distortion, irony, stylistic choice etc.). In handwriting the physical variation has a mainly individual stamp, while physical variation in printed matter is mainly an aesthetic means, which, however, from time to time - such as with italicization - is also used semantically distinctively.

Although the schematic arrangement cannot contain any description of the meaning of these characteristics, either individually or as a whole, it nevertheless shows a number of interesting connections and differences - if we initially ignore the Morse alphabet.

The table exaggerates the similarities in three respects.

First, those features which are reproduced as common features cover very different variants. This holds true, as mentioned, of the qualitative differences in formal and common language notation, the non-single-stringed seriality in speech and informational notation, the demand for sensory recognition in speech and writing, the - low - physical stability of speech and informational notation.

Second, the table does not reproduce the differences which may be connected with the function of the individual qualities in the respective notation systems, as the same quality can possess a different function not only in itself, but also through its relationship to the other features.

The different demands made on the physical definition (more or less exact, more or fewer different physical criteria etc.) thus simultaneously contain a set of restrictions on the use of the expression substance as a semantic variation potential. The notational rule system thus enters into a connection with the overall semantic regime which is characteristic of each notation system.

Third, the table does not reproduce the different forms of definition of the relationship between and use of the properties of the expression substance. Where linguistic and formal notation are primarily subject to the demand for sensory recognition, informational notation is primarily subject to the demand that the individual notation unit must be able to appear as a mechanically effective entity in a machine.

With this definition of the informational form relative to the physical medium and other informational forms there appears - for the first time in history - a non-sensorily determined, discrete, mechanically effective and semantically open notation system.

8.4 The unique characteristics of informational notation

Although a semantic component is always contained in the definition of a notation system, there were two important historical innovations behind Shannon's idea of an asemantic, purely physically defined notation system. One of these concerned the exact physical definition which is conditioned by the demand for the mechanical effectiveness of the notation units. The other concerned the definition of the semantic component of the notation.

While the semantic component in linguistic and formal notation systems is defined through the semantic regime in which the message is produced, the semantic component which is included in the definition of the informational notation unit is produced with the help of a formal code procedure which is independent of the semantic regime of the message. This coding can take place no matter whether a given data sequence represents a rule, a set of data, a text, a sound, a picture, or a physical machine.

It is this circumstance which makes it possible to represent both formal and informal semantic regimes in the informational notation system, or conversely: informational notation can be used as a notation for a numerical expression, for a rule of calculation and for a logical expression - where it is a question of using informational notation to represent a formal semantic regime - or as an expression of a linguistic message, where a given notation sequence can represent a - semantically empty - linguistic notation unit, or as an expression of a picture or a sound subject to pictorial or auditive semantic regimes. At the same time, it also appears from this that it is possible to represent linguistically, formally, pictorially and auditively expressed information - each of which was formerly represented in its own notation system (or in no notation at all) - in one and the same notation system.
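How expressions belonging to different semantic regimes are reduced to sequences drawn from one and the same set of notation units can be sketched as follows (an assumed Python illustration, with arbitrarily chosen values):

    # A letter, a number and one sample of a sound wave are all manifested
    # as sequences of the same two notation units.
    letter = format(ord("A"), "08b")    # linguistic notation unit -> '01000001'
    number = format(42, "08b")          # numerical value          -> '00101010'
    sample = format(200, "08b")         # amplitude of a sound     -> '11001000'
    print(letter, number, sample)

Nothing in the bit sequences themselves reveals which semantic regime they belong to; that determination lies outside the notation.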

In other words, informational notation is not subject to one specific, overall semantic regime in the same way as other notation systems. Shannon's "asemantic" consideration thus contains a description of a notation system which is relative not to a single, but to several semantic regimes. Shannon himself also prepared a list - under the term "information sources" - of the different semantic regimes which can utilize informational notation.[3]

Now it is quite true that it is also possible to use the letters of the alphabet, for example, in formal expressions, but this use assumes that the letters are subject to the criteria which are valid for formal notation. Only the physical form can be transferred, not the linguistic qualities and functions which are connected with the form - whether this be the distinction between vowel and consonant, or conventions for notation sequences - and thereby not the semantic variation mechanisms which are connected with the form and its quality either. When the grapheme is used in a formal expression it no longer belongs to the alphabet of common language.

That which characterizes informational notation as a special and unique feature is thus the complete lack of quality in the definition of the individual notation unit. This complete lack of quality in the definition distinguishes informational notation equally sharply from linguistic and formal notation, each of which operates with its own form of quality determination, and this lack is identical with a complete openness to contextual determination. Informational notation is the closest we can get to a perfect, "pure alphabet", and it can contain any form of symbolic content with the single - but not unimportant - restriction that it must be possible for the symbolic content to be manifested in a sequence of notation units belonging to a notation system with a finite number of mutually different expression units. The decisive point is not the number itself, but the condition that the number is established in advance. Whether we use 2, 5, 27 or 117 notation units is theoretically of no importance, but we can only use a definite, previously established number if we wish to utilize the mechanical properties of the notation system.[4]
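The point that the size of the inventory is theoretically indifferent, so long as it is fixed in advance, can be illustrated with a small sketch (an assumed Python example):

    def express(n, size):
        # Express the integer n in an alphabet with a previously established
        # number of notation units (the 'digits' 0 .. size-1).
        digits = []
        while n:
            digits.append(n % size)
            n //= size
        return digits[::-1] or [0]

    for size in (2, 5, 27, 117):
        print(size, express(117, size))
    # 2 -> [1, 1, 1, 0, 1, 0, 1]   5 -> [4, 3, 2]   27 -> [4, 9]   117 -> [1, 0]

The same content can be carried by alphabets of any finite size; what matters mechanically is only that the number of admissible units is established beforehand.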

Although any notation system has a built-in semantic dimension, the value of the reductive idea of asemantic notation which lay behind the establishment of informational notation should neither be rejected nor underestimated. That the reach of the idea is considerably expanded because its effect is increased by the semantic dimension can be illustrated by a related historical precedent.

It was thus precisely such an asemantic handling of the alphabetical figures as singular, physical entities which comprised the pioneering innovation in Johann Gutenberg's typographical revolution.

This is not only a convincing, but in this connection also a central historical example. Although opinion is divided as to the correct interpretation of Gutenberg's typographical revolution, nobody disputes that it has had far-reaching cultural and historical implications. Here, we will simply consider a couple of the aspects which are of particular interest in relation to an understanding of informational notation.[5]

In itself, Gutenberg's use of movable type first and foremost made text reproduction more efficient with regard to time and money, as the individual type could be reused for producing texts with a different content. A direct consequence of this was that books became cheaper and it became possible to increase the extent of book distribution.

The new technique, however, also implied a considerable improvement in the reliability of the copied texts, as through proof-reading it became possible to emancipate the text from the semantic bond which lay in the manual copying techniques of the Middle Ages where the reproduction of texts was subject both to the individual writer's interpretations and errors. The technique permitted - at least in principle - an asemantic proof-reading. Proof-reading itself could also be reduced to the proof-reading of the original proof rather than of individual copies. This advantage had become partly available with the introduction of wooden block printing, but block printed books were roughly as expensive as hand-written copies and a single mistake could mean the loss of a whole block, whereas Gutenberg's technique permitted the making up of a single or a few lines.

At the same time the conditions for semantic control were changed. Where this control in the Middle Ages could be exercised directly in the semantically rooted copying process, and be done efficiently because the number of possible copies was limited, the faster and a-semantic reproduction technique implied that semantic control had to be exercised externally - through a separate and visibly censoring hand. The technical form thus contained a quite obvious secular potential for posterity.

Last, but not least, it can also be mentioned here that Gutenberg's technique made it possible to store and manage a great body of knowledge which could not be managed, or managed only with difficulty, using the existing technology. This holds true first and foremost of all technically, mathematically and numerically expressed knowledge, where the demand on accuracy of detail and the individual notation is particularly rigorous - and highly limiting for the validity and use of manual reproduction.

The technique involved a manual trade, but as a medium for the representation of knowledge it fulfilled one of the necessary conditions for the entire industrialization process which followed later.

It is difficult to indicate precisely when printed knowledge became decisive for the technical development of modern society. It was not a precondition for the development of the early mechanical technologies - including that of printing. An epoch-making effect can perhaps first be noticed in the energy technologies of the 17th and 18th centuries, which were founded on technical, physical and mathematical knowledge which could not be produced, verified and managed without the printed book.

As a general medium for representing knowledge the printed book not only released a new technological and theoretical potential, it also became - as a medium for stabilized, generally objectified and theoretical knowledge which is available without respect of persons and power - one of the preconditions for the development of modern society from enlightened absolutism and the Age of Enlightenment to democratic movements and the constitutional division of power. When we consider that the idea of a free market and modern man's personality emerged as results of a comprehensive theoretical work of construction which - again through the medium of written knowledge - brought about far-reaching strategic developments and educational initiatives, it becomes clear that the printed book as a common, typical and distinctive medium for representing knowledge in modern society forms an essential part of the infrastructure of these societies.

It is thus the book rather than the computer which has made possible the transition to a society based on the utilization of theoretical knowledge as a strategic resource. Nor, therefore, does this transition - as Daniel Bell claimed - characterize the relationship between industrial and post-industrial society; it rather characterizes the transition to modern society.[6]

It is quite true that the printed book is not a condition for any production of theoretical and technical knowledge since it is only a means of reproduction, but it is to a great degree a condition for the articulation of some types of knowledge and for the dissemination and use of existing knowledge. It is difficult if not impossible to imagine that theoretical knowledge can become a strategic resource in society without this or another medium with similar properties.

While Gutenberg's typographical inventiveness lay in the asemantic consideration and handling of alphabetical notation, the cultural and historical significance of the invention is due to the fact that this consideration paved the way for a number of previously unknown or unexploited semantic potentialities.

Viewed from this perspective the question therefore is not only whether the asemantic consideration of the informational notation system was right or wrong, but also which potentialities are embraced by this revolution in the technology of textual representation.

8.5 A notation that is not accessible to sense perception

In using printing to emphasize the cultural and historical perspectives which may be connected with an asemantic view of notation, it is necessary to add that this is a question of effects which first emerged during the course of a long period of time and as part of other cultural processes. Gutenberg developed his technique in the 1430's, but the printed book only became the most important, socially supporting knowledge and script technology several hundred years later and this was naturally not because of the medium, but of the knowledge expressed in the medium. When we read a boring book it is not the handling of the alphabet we criticize, it is the meaning and style.

Discussing informational notation in the same historical perspective is, in the nature of the case, not yet possible. If informational notation permits new ways of expressing forms of knowledge (not to anticipate the question as to whether it could also permit the development of new forms of knowledge) to the same extent as Gutenberg's printing technique - and this is not an unreasonable expectation - an attempt to discount the cultural implications in advance would be a foolhardy undertaking.

On the other hand, it is not impossible to discuss whether informational notation contains new semantic potentialities, as in such a case they would be connected with those features which distinguish informational notation from alphabetical and other familiar sequential notation systems which, as we saw in 8.4, include the semantically empty, quality-less notation, the finite number limitation and the mechanical effect potential, as well as the special form of rule determination which will be considered in 8.6 - 8.9.

Another modification, however, must be introduced here, because informational notation does possess one quality common to all notation systems, as it has a physical value. But this quality is not only defined in another way, it also serves a different purpose.

While the physical manifestation in alphabetical writing and formal notation serves to ensure perceptual recognition, the informational entity is not bound to any criterion of perceptual recognition.

This independence is ensured by the precise physical value and implies, first, that it is possible to employ the mechanical processing of informational structures, second, that it is possible to work with semantically distinctive physical entities of a completely different, small size and a correspondingly high process speed and, third, that it is possible to implement the notation in energy-based substances.

It is quite true that there is nothing wrong in defining the smallest informational units with threshold values registrable by the senses, but this is not simply an unnecessary restriction, it is also a contra-functional restriction. It is only possible to utilize sensory registration if - as occurs in the alphabet and the Morse system - we have in advance bound any given perceived entity to an invariant - recognizable - place in the notation system. If we wish to utilize the perceivable manifestation, we must also abandon the advantages which lie in the unambiguous definition of the physical form of the symbols.

At the same moment an informational process is made accessible to human understanding, it is therefore also made accessible through another expression system, as informational notation is then used as a means of mechanically producing an output recognizable by the senses. As such a transformation is both a necessary starting point and a necessary finishing point for any use of informational notation, this notation system is limited in that it can only exist as a means of non-perceptible re-presentation of other, perceptible expression systems. On the other hand, this mechanical transformation implies that informational notation can also represent - for example, pictorial - expressions which cannot themselves be expressed in the same sequential form.[7]

Informational notation thus has, as a distinct characteristic, the fact that due to its definition it can be realized in a machine in a form which is not accessible to the senses. Where Hjelmslev at the time was puzzled by his own statement that "it is in the nature of language to be overlooked"[8] - that is, to be concealed behind its auditive or alphabetical clothing - we are puzzled today by the fact that, as far as informational notation is concerned, it is the clothing, the expression form, which cannot be seen. The little boy in the fairy-tale may still be right, but now the tailors are too.

That the form of the information cannot be seen does not mean that it cannot be made visible, or that it is of an immaterial or transcendental character, it means on the contrary that special demands are made on the threads which are used in sewing the clothes.

It is now already clear that informational notation possesses properties which are not only new, but also more profound than those of Gutenberg's invention. Where Gutenberg made a contribution to a new use of an existing notation system, informational notation, seen as a physical expression system, is a completely new system. It was - and could only be - developed in connection with the development of new technical and semantic handling methods.

Shannon developed his theory primarily with the aim of improving a number of existing communication technologies, but it rapidly became evident that informational notation came into its own particularly in connection with the realization of Turing's theory. Turing had already discovered the - algorithmic - thread which was necessary to utilize the new potentialities of the informational notation system.

8.6 The algorithmic thread

When the informational entity has no solid physical form there is naturally a great risk that the informational structure will collapse. In Turing's theory, the informational structure was supported by well-defined, permanent physical sign manifestations, where a physical-mechanical operation, which also stood for a symbolic operation, could be allocated to a given manifest form.

Turing did not use informational notation units, but notation units which could be recognized by the senses, as he regarded the - necessary - physical definition of the expressions of these entities as a purely technical question and saw the notation system as a formal notation system.

On the other hand he showed how - by regarding a physical-mechanical procedure as a relationship between one step and the next - it was possible to organize a physical-mechanical system in such a way that it could perform any symbolic operation which could be described step by step. Whether the next step was established in advance, or had to be defined during the process, made no decisive difference.

Although Turing worked with fixed, well-defined physical symbol manifestations, the input might well comprise new definitions of their value. The demand was only that any change should be carried out step by step as the result of an unambiguous declaration, whether this was given in advance in the form of a programme or in the form of a continuous input of new instructions.

The Turing machine was thus not only defined by physical mechanics, or through physically determined, symbolic expressions or given symbol values, but also through the algorithmic procedure which simultaneously organizes the physical and symbolic process.
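The step-by-step organization described here can be sketched as a rule table that maps the current state and the symbol under the head to a new symbol, a movement and a new state, so that every symbolic operation is at the same time a well-defined mechanical step. The following is a minimal, assumed Python illustration of such a Turing-style machine, not a rendering of Turing's own notation:

    def run(tape, rules, state="start"):
        cells = dict(enumerate(tape))                    # discrete, well-defined notation units
        head = 0
        while state != "halt":
            symbol = cells.get(head, "_")                # blank outside the written region
            write, move, state = rules[(state, symbol)]  # one unambiguous step
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example rule table: invert every bit and halt at the first blank.
    rules = {
        ("start", "1"): ("0", "R", "start"),
        ("start", "0"): ("1", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run("1011", rules))    # -> '0100_'

Whether the rule table is fixed in advance or extended with new instructions during the process makes no difference to the form of the individual step.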

While the ability to maintain an informational structure in a physically fluid substance depends on the definition of critical threshold values for the legitimate physical forms, the ability to vary the informational notation structure is based on the algorithmic treatment of the relationship between the physical and the symbolic structures.

Herewith, Turing had also discovered the means which could be used to utilize the properties of informational notation independently of human recognition.

As informational notation and the algorithmic procedure together constitute the necessary and sufficient foundation for the mechanical execution of computational processes, they are also included as distinctive basic elements in all informational signs.

As the connection between informational notation and the algorithmic procedure not only implies that it is now possible to "electrify" the algorithm, but also indicates a deep conceptual change in the understanding and handling of algorithmic processes, it becomes necessary to include this advance in algorithmic management before describing the informational signs.

8.7 The multisemantic potential of the algorithmic structure

In mathematics, the term algorithm is understood generally as any precise precept for the execution of a procedure. The algorithm defines a set of procedural rules through which it is possible to unambiguously transform a given set of numerical values to another, or in Trakhtenbrot's formulation:

A list of instructions specifying a sequence of operations which will give the answer to any problem of a given type.[9]

In its basic form the algorithm thus represents a system of invariant rules for handling a set of variable data appropriate to the rules. The algorithmic structure ensures that a calculation involving the same data will always lead to the same result and that a calculation involving different data must be performed in accordance with the same rules of calculation, whereas a calculation involving different data does not necessarily lead to different results, as 3 + 5, 4 + 4 and 14 - 6 all give 8. When only the algorithmic result is given, there is thus no algorithmic path back, either to the process or to the point of departure. The algorithmic result itself is empty.
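The asymmetry between procedure and result can be shown with a trivial sketch (an assumed Python example):

    def rule(a, b):                # an invariant rule of calculation
        return a + b

    print(rule(3, 5), rule(4, 4), rule(2, 6))    # 8 8 8
    print(3 + 5, 4 + 4, 14 - 6)                  # 8 8 8 (different rules, same result)
    # From the bare result 8 there is no algorithmic path back to the data
    # or to the procedure that produced it.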

The central part of the algorithmic procedure is connected with the distinction between rule and data. But the distinction is not absolute. Although the rule system is available independently of the data, it is not possible to handle any set of data with a given algorithm. The algorithmic structure makes demands on the structure of the data set and there may also sometimes be a demand on the permissible data values, which we are familiar with from the rule that the value zero may not occupy the place of the denominator in a fraction, just as there is often an indication of upper and lower limits to the variation of data values included in the definition of an algorithm.

Although the algorithmic procedure is not limited to handling numerical values, but may often include symbolic and logical values and relationships which have the character of complex semantic structures, there is a fundamental demand that not only the procedure, but also the data structure must be available in the form of mono-semic values.

The algorithmic structure thus does not permit "the cumulative acquisition of new dimensions of meaning" which, according to Ricoeur, is a characteristic feature of linguistic expressions, nor the complementary and equally characteristic possibility of storing meaning for an unspecified period (including the risk that it will be forgotten if not actualized) which is also contained in the linguistic redundancy structure.

The algorithmic procedure's unambiguousness is connected with the demand for well-defined starting conditions and for a well-defined, step-by-step sequencing.

As a consequence of this unambiguousness, the formulation of the algorithmic expression has often been seen as a goal for a scientific description of a given problem and scientific attention has then been directed towards other problems if this goal was achieved. The relationship between language and algorithmic representation is seen from this perspective as a relationship between a problem and its solution. From a linguistic point of view, however, there is a different result, as the relationship between problem and solution is manifested as a relationship between polysemic and mono-semic language - and not as a transition from a problem to a solution.

That it is a relationship between two linguistic articulation systems rather than a relationship between problem and solution appears experientially from the circumstance that even the purest mathematical exposition must both be introduced and concluded with a linguistic account. This familiar experience is not only due to a - good or bad - habit, it is on the contrary the unavoidable result of the fact that an algorithmic expression by definition assumes an establishment of start conditions and an interpretation of results which have no algorithmic expression.

As we saw in chapter 7, mono-semic expressions are formed with a starting point in a definition of a specific linguistic redundancy structure. In the linguistic representation this bond is expressed in the declaration of unambiguous statements which again can create the starting point for an algorithmic procedure.

The transition from an unambiguous linguistic expression to an algorithmic procedure, however, is not a simple matter, as it is also a question of a complete transition from one - linguistic - rule system to another - algorithmic - rule system. This replacement of the rule system does not leave the mono-semic expression untouched. At the same moment a mono-semic expression is subjected to an algorithmic rule structure, it loses its referential meanings.[10] Whether we multiply apples by pears, metres by kilograms, or the height of the Eiffel tower by the sound of a thunderclap makes no difference to the algorithmic sequence.
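A minimal illustration of this indifference to the referent (an assumed Python example; the quantities are hypothetical):

    apples, pears = 3, 4                  # 'apples times pears'
    metres, kilograms = 300, 7            # 'metres times kilograms'
    print(apples * pears)                 # 12
    print(metres * kilograms)             # 2100
    # The algorithmic sequence operates on the bare mono-semic values;
    # the referents (apples, metres, the Eiffel tower) play no part in it.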

While the transition from polysemic to mono-semic articulation depends on a fixed definition of the redundancy structure, the transition from mono-semic, linguistic articulation to the algorithmic handling of mono-semic values depends on the elimination of the expression's referent.

The elimination, however, only holds true during the algorithmic sequence, as it is only possible to refer to the procedure's result as a result if it is assumed that the mono-semic expression's referent remains the same throughout the sequence. This is thus a question of an abstraction procedure where the referent is assumed or, more correctly: placed in parenthesis. This construction of the relationship to the referent is distinctive for algorithmic expressions and determines the relative autonomy of the algorithmic procedure, i.e.: its existence as an expression system, which can stand for itself without standing for anything else.

But the same construction simultaneously places the algorithmic procedure in a position of semantic dependence on the linguistic expression. When the parenthesis is closed, the expression must again be handled in linguistic form. The algorithmic procedure cannot stand alone because the mono-semic values lack their referent.

That which distinguishes the algorithmic from the linguistic procedure at the same time places it in a one-sided relationship of dependency on the linguistic procedure.[11]

As the relationship to the linguistic referent is placed in parenthesis, however, it becomes possible to change the referent and transfer an algorithmic procedure from one linguistic reference system to another without, for this reason, bringing about any identity or connection between the referents (formal polysemy). It was this property that Boltzmann saw as a frequently used and characteristic feature of mathematical physics, and this appears to support the view of the algorithmic procedure as an immanent, purely formally defined procedure which runs in accordance with its own rules. The relationship to the linguistic referent, however, is more complicated.

While there are rigorous demands on the definition of the individual rule, on the sequential linking of rules and on the relationship between rule structure and data structure, there are no general demands on the choice and combination of rules. Although any algorithmic expression is completely deterministic, there are no general syntactic rules for the composition of the algorithmic expression. We can multiply, divide, integrate and differentiate as much as we wish, as long as the sequence of each individual operation has been established. As the rules are individually established, the choice of rules is therefore a semantic choice which cannot be made without a linguistic referent.

This naturally does not mean that we have a free hand in the referential interpretation of any algorithm, but only that a given algorithmic procedure constitutes a compositional whole which does not itself have an algorithmic form. The composition is therefore not determined by the algorithmic functions either.

The algorithmic function cannot motivate the choice of itself and the algorithmic procedure cannot motivate its own continuance. While all algorithmic procedures are completely deterministic, each procedure is based on a series of arbitrary, semantically motivated choices. The algorithmic procedure can therefore be described rather as a rule system for co-ordinating linguistically motivated entities where both the individual part alone and the total expression as a whole have placed the linguistic referents in parenthesis.
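
A brief sketch of this point (my illustration, with hypothetical names): two compositions of the same elementary operations are each fully deterministic once written down, yet nothing in the operations themselves dictates which composition to choose - that choice is motivated by what the values are taken to mean.

    from math import pi

    # Two fully deterministic compositions of the same elementary operations.
    def area_then_scale(r, k):
        return (pi * r * r) * k        # area of a circle, then scaled by k

    def scale_then_area(r, k):
        return pi * (r * k) * (r * k)  # radius scaled by k, then the area

    # Each sequence is completely determined, but the choice between them is
    # not given by any algorithmic rule; it depends on the (linguistic)
    # interpretation of r and k.
    print(area_then_scale(2.0, 3.0), scale_then_area(2.0, 3.0))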

The algorithmic form's semantic bond, however, not only embraces the parenthetical relationship to the linguistic expression's referents; the form is also - seen as a detached expression form - subject to structural limitations of a semantic nature. Any algorithmic expression is both determined by its relation to one or more sets of general rules (here we can rightfully speak of a language system) and a specific realization in the form of a correspondingly distinct usage. As any algorithm can be described as a relationship between a precept and a data structure, and as a given precept also establishes the conditions for adequate data structures, it is evident that the relationship between precept and data (i.e. language use) is a semantically distinct relationship. The semantically motivated choice of precept is also a semantic choice of data structure. The same holds true of the relationship between a given precept and the established set of possible calculational or procedural rules, as the precept can be seen as a choice of calculational rules from a formal language.

The semantics of algorithms thus includes at least three levels. First, the individual expression unit is always defined as a unit which connects the notation's form with a mono-semic value. Second, the specific relationship between precept and data structure is semantically distinctive in the sense that a given precept - a defined rule system - can only handle a certain range of structurally uniform data sets. Third, the total composition of a given algorithmic expression is based on a semantically motivated choice among the possible rules of procedure which are contained in the "language system". If this is a calculation algorithm, the language system is thus constituted by the existing rules of calculation. As the term language system here may remind us of Hjelmslev's terminology, it must be added that the rules of a language system are only included in an algorithmic expression if they are declared as a referent for a specific notation unit, such as is the case, for example, when we refer to the rules of addition with the notation +.
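
The second level - the solidarity between precept and data structure - can be sketched as follows (the example and its names are mine): a precept formulated for one kind of data structure simply has no defined area of effect for data organized differently.

    def column_sums(table):
        # A precept for summing the columns of a rectangular table of numbers.
        # The rule system (repeated addition) presupposes a specific data
        # structure: a list of rows of equal length.
        width = len(table[0])
        if any(len(row) != width for row in table):
            raise ValueError("data structure not adequate for this precept")
        return [sum(row[i] for row in table) for i in range(width)]

    print(column_sums([[1, 2], [3, 4]]))   # [4, 6]
    # column_sums([1, 2, 3]) would fail: the data falls outside the
    # precept's area of effect.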

Unlike in common languages, these different semantic choices are kept apart as a series of distinct semantic choices, because all choices are subject to the mono-semic restriction and to the demand for a delimited area of effect for the chosen rules. The algorithmic procedure, unlike the linguistic expression, thus contains no semantic interference between expression elements; on the other hand, the algorithmic language demands that rules are manifested as distinct expression elements.

The algorithmic language not only has a polynomial semantic structure relative to common language, but is also itself structured at several formal levels. Whereas the language system is constituted by the available set of procedural rules, usage is constituted partly by a precept and partly by a data structure. It is not difficult to distinguish the system from the usage, as the system is constituted by the legitimate rules of operation.

This clear distinction means that it is possible both to construct new general rules and to choose rules freely for specific purposes. As the rules of formal language systems are fully deterministic, it is quite true that they place certain limitations on the possible combinations, but these limitations can be avoided by delimiting the areas of use of the individual rules.

While a formal language system as a whole can be described as an independent and total system of well-defined procedural rules, the usage, which is constituted by the chosen combination of rules and mono-semic value sets, is rather more difficult to describe.

Seen in relationship to a given data structure, it is not possible to choose just any rule structure - although perhaps several are possible - and seen in relationship to a given rule structure, it is not possible to handle just any data structure. As the relationship between rule and data structure is thus characterized by a mutual bond of solidarity, it is not possible, on the face of it, to identify the rule structure, the precept, as a superior interpreter relative to the data structure and the data it permits. The relationship between rule and data certainly possesses certain features which could perhaps be seen as reminiscent of the linguistic relationship between expression and referent, as the relationship between rule and data is derived from the linguistic definition of the mono-semic referents.[12]

In a certain sense, it might also be possible to claim that the solidarity between programme structure and data structure implies that the algorithmic expression - over and above the parenthetic relationship to the linguistic referent - also itself contains an immanent referential function between programme and data structure and that the linguistic referent is thus not the sole referent. On the other hand, the relationship between rule structure and data also possesses features which clearly distinguish themselves from the relationship between word and referent, as both parts are manifested. Together they comprise the basic syntactic structure, which is why they are rather equivalent to the nexus relationship of the sentence.

Where a sentence, however, produces a meaning, the algorithmic procedure instead produces the transformation of one expression into another. While both the construction as a whole and each individual element are semantically motivated, the transformation procedure which leads from a given input to a given output is asemantic, as the connection between input and output is accessible to - and completely dependent on - an interpretation which is independent of the procedure. The procedure guarantees that there can be a connection, but says nothing about what that connection consists in.

As the choice of referent and semantic regime can thus be distinguished from algorithmic syntax, the syntactic structure itself is open to several semantic regimes.[13]

This multisemantic property not only makes it possible to develop different algorithmic procedures for different semantic regimes (whether logical, numerical, linguistic, pictorial or auditive), it also permits one and the same algorithmic structure to serve as a basis for any semantic regime.
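
A brief sketch of this multisemantic openness (the illustration and its names are mine): one and the same algorithmic structure - here a simple averaging procedure - can be placed under a numerical, a pictorial or an auditive regime without any change to the procedure itself.

    def average(values):
        # One algorithmic structure; the semantic regime is chosen outside it.
        return sum(values) / len(values)

    pixel_row    = [12, 40, 200, 180]    # read under a pictorial regime: grey levels
    samples      = [0.1, -0.3, 0.2]      # read under an auditive regime: sound samples
    measurements = [9.81, 9.79, 9.82]    # read under a numerical regime: measured values

    for data in (pixel_row, samples, measurements):
        print(average(data))   # the procedure itself is indifferent to the regime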

The description given here of the algorithmic expression can be summarized as follows: each notation unit is bound to a mono-semic value; the linguistic referent is placed in parenthesis for the duration of the procedure; the choice and combination of rules is semantically motivated and not itself algorithmically determined; precept and data structure stand in a relationship of mutual solidarity; and the transformation from input to output is itself asemantic, which leaves the syntactic structure open to several semantic regimes.

8.8 The algorithmic revolution

With his description of the mechanical process as an algorithmic sequence of local, step-by-step determinations, Turing was apparently the first to discover the unique syntactic properties of algorithms, just as he was also the first to see how it was possible to bring mechanical procedures into a logical regime by first reducing finite, formal procedures to mechanical procedures.

Although this was an epoch-making breakthrough, the construction contained a decisive obstacle to the description of the syntactic potential he had discovered.

Within the logical regime the dissolution of mechanical procedures into individual steps could immediately be interpreted on the basis of a classical understanding of determination, as the superior, general determination was now ensured by the logical and not the mechanical totality. He was not aware that the local determination could also form the basis of other and, not least, non-deterministic semantic regimes - nor, apparently, did it interest him, as he saw the choice machine as a less successful version of the automatic machine.

This limitation is not particularly surprising and naturally does not detract from Turing's original efforts.

Turing's limitation on this point, however, cannot simply be explained as a result of the dominant currents of contemporary science; it can also be understood as a consequence of the fact that it only became possible to illustrate and handle the new syntactic potential once the machine Turing had thought out came into existence.

Although in many respects there are good reasons to regard the human brain as superior relative to any existing - conception of - computers, all computers are superior to the human brain when it comes to handling exactly this type of step-by-step process which creates the foundation for algorithmic syntax.

If the algorithmic handling of symbols created the basis for Turing's idea of the universal computer, the later computer technology also created the basis for a revolution in algorithmic management.

The concept of revolution may perhaps appear rather hackneyed and produce meagre descriptive associations, but in this connection it is an apposite concept, because it unites a reference to a definite, fundamental change with a reference to the change's equally fundamental indefiniteness and incalculability. It is thus hardly possible to provide a total picture of the course of this development as it is still an ongoing - and uncontrollable - process which runs along a multiplicity of mutually unconnected paths.

As far as I am aware, no general investigation of this development up to the present exists and naturally even less an informed opinion as to how it will develop further. How far it has progressed today and how it will develop tomorrow must remain open questions.

A picture of the point of departure and some of the lines of development which issue from it has, however, gradually begun to delineate itself through a number of scattered, sometimes sporadic, contributions in various available sources. As the subject, both in extent and with regard to the demand on insight, considerably exceeds my competence, the reader must be content with a more summary account based on a relatively limited choice of sources.

On the face of it, the most eye-catching feature is without doubt the tremendous explosion in the sheer number of available, written algorithms. In a standard textbook on algorithms from 1983, Robert Sedgewick thus starts by stating that nearly all the algorithms mentioned in the book are less than 25 years old, while a few have been known for a couple of thousand years - although under a different designation, as algorithms owe their name to the mathematician Mohammed ibn Al-Khowarizmi, who published an - epoch-making - arithmetical textbook around the year 850.[14]

The quantitative growth in the production of new algorithms includes both the development of new algorithms for old purposes and the development of algorithms for new purposes. Among the new purposes the development of computer technology is one of the most important and advances within this area led during the 1950's to the introduction of computer science as an independent subject distinct from mathematics.

In addition, there is another remarkable new departure, as algorithmic models were developed in a number of new areas. Where the algorithmic procedure had hitherto only occupied a central place in mathematics, logic, physics and economics, it now began to occupy a central position in areas of biology, psychology, linguistics and a wider range of social sciences.[15] Subject areas which still hesitate, such as a number of disciplines within the humanities, appear to be correspondingly losing esteem.

The technological potential of this expansion is immediately visible. Where mechanical manipulation had hitherto had its centre of gravity in the handling of knowledge extracted from studies of inorganic, physical nature, the way is now clear for a corresponding, algorithmically based mechanical handling of knowledge extracted from the study of living organisms, mental and social processes.[16]

There appears to be little doubt that the two new departures developed in close mutual interplay. The sources provide a multiplicity of examples of intersecting lines of inspiration, and none of those involved appear to have any precise, not to mention concurrent, picture of these lines. Nevertheless, the two lines of development also contain an inner conflict.

On the one hand, the computationally oriented development of algorithms is necessarily and strongly bound to and determined by the way the technical problems present themselves, and the understanding of algorithmic functions is characterized by the abstract and arbitrary functionalism of algorithms. The internal algorithmic functionality is central, and the understanding of the algorithm is closely connected with its effectuation as a process which elapses in time. Algorithmic process time, which had played no role in mathematics and logic, has thus become a central element in computer science.

On the other hand, the use of algorithms in a growing number of disciplines is rather understood as a goal for scientific description in a more or less explicit analogy to classical mathematical physics, but naturally with the addition that it is now a question of handling algorithms at a higher, more complex level.

There are a number of reasons why this conflict in the conceptualization of the algorithmic procedure has been under-emphasized. First, the fact that there was a common root in the classical mathematical-physical tradition, so that the new departure was seen as an expansion of the potential of this tradition. Second, the fact that many of the divergences appeared as divergences between the special ways problems present themselves within different disciplines. Third, the fact that there was also common ground in a general automatization perspective and last, but not least, the fact that as a whole this was a question of a development where trying out the many new - immeasurable - possibilities necessarily came to occupy a dominant position as a guiding principle.

That it is reasonable to describe this expansion in quantity and areas of use as a revolution proper in algorithmic management competence is also due to the fact that the quantitative growth in algorithmic procedures and areas of use is closely connected with a fundamental leap in the history of algorithmic methodology. This leap also has its centre of gravity in the new computer technology and began in the 1940's.

According to Wells[17] this change is expressed as a growing clarification, structuring and abstraction in the formulation of algorithmic procedures. Where previously algorithms had been seen and worked with as short sequences related to specific problems in a given context, they now came to be regarded as detached, independent expressions which could be utilized in long sequences, structured in blocks and released from the specific data connected with the given subjects.

The same view leads to a more systematic use of the distinction between procedure and data - manifested among other things in the introduction of such terms as data and data structures - as references to data are now (solely and completely) established as a definition of input parameters.

The mechanical execution of these procedures at the same time produced a number of other methodological innovations, among them the calculation of process times, problem-solving times and expression complexity.

An important fulcrum in this development, according to Knuth and Trabb Pardo, was the appearance of the assignment function as distinct from the mathematical "equal to" expression. The assignment function was first used by Konrad Zuse in 1945 in his "Plankalkül", which at the same time was the first general programming language.[18] The Plankalkül, however, was first published in its entirety in 1972. Although shorter excerpts appeared in 1948 and 1959, his work had no demonstrable significance for the new trend in algorithmic development.

According to Knuth and Trabb Pardo, the first significant step towards distinguishing the assignment function was taken instead by Herman Goldstine and John von Neumann with their suggestion of the - graphic - representation of algorithmic procedures as flow diagrams - or flow charts - from 1946-1947.[19]

Although Goldstine and von Neumann do not define the assignment function here, it was - certainly in retrospect, say Knuth and Trabb Pardo - waiting in the wings, as the block-divided algorithmic sequence is connected with directional, irreversible transitions (marked in the diagrams by arrows), whereas the mathematical "equal to" designates reversible transitions. The assignment function, moreover, is distinct from the mathematical equal-to function in that it replaces an automatic or determined connection with a semantic and facultative connection.
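
A small illustration of the distinction (mine, not Knuth and Trabb Pardo's): read as a mathematical equation, x = x + 1 holds for no value of x; read as an assignment, it is a directional, irreversible instruction which replaces the old value of x with a new one.

    x = 5
    x = x + 1   # assignment: a directional, irreversible transition; x now denotes 6
    print(x)    # 6

    # Read as a mathematical statement, by contrast, "x = x + 1" is false for
    # every value of x: the equality sign designates a reversible relation,
    # the assignment a facultative, semantically chosen transition.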

As a whole, flow diagrams in any case represented a pioneering innovation with their procedural and functional view of the algorithmic sequence.

If the flow diagram - and Wiener's cybernetic feedback mechanism - represent the first step in the transformation of the concept of algorithms, the next step is the transition to an understanding of the algorithmic procedure as a programme which can be designed with arbitrary, formal-logical symbols. Hamming describes this development as a conceptual transition from the 'number crunching' metaphor to an understanding of programming as a - logical - symbol manipulation and believes that this change made its breakthrough among those involved - himself among them - in 1952-1953, coinciding in this case with the appearance of the first compilers, which gradually emancipated programming from the built-in machine language.

Hamming admits that Turing perhaps developed this symbol understanding rather earlier, but believes that it is still the idea of the number cruncher that underlies Burks', Goldstine's and von Neumann's pioneering work on the logical construction of electronic computers from 1946, where they formulated the basic principles of the modern serial computer (von Neumann architecture) with a stored programme.[20] Hamming's view is indirectly confirmed by Goldstine who, in referring to the ideas of the 1940's, exclusively mentions the mathematical perspectives for use, although a logical description of the computer, the idea of the stored programme and a control system which made it possible for the machine to alter its own programme structure had all been developed.[21]

While it is thus possible to date the emergence of a new perspective on algorithmic representation to the 1940's, the more systematic utilization in the form of fixed programme functions and a programming language proper stretches over a rather longer period. According to Wells it is only possible to speak of a general algorithmic language with the emergence of block structuring, structural control, data structuring, data abstraction - and two-dimensional and set-theoretic notation - in the 1960's. The programming language ALGOL, which was completed by Peter Naur in 1960, is indicated by many sources as the first fully developed programming language with a general and consistent syntactic notation.[22]

As the final part of this summary account of the algorithmic revolution we must also include the fact that the practical developments created the foundation for the appearance of the first algorithmic theories proper with the Russian mathematician A. A. Markov's Theory of Algorithms from 1954 as the first. Here, Markov made a direct connection with Gödel's, Church's and Turing's work from the 1930's, but aimed at a more precise, mathematical analysis of the computability of various algorithmic systems. As, according to Markov, it was possible to show that there is a series of mathematical problems which demonstrably could not be solved through algorithmic means, he naturally also rejected the idea that it would be possible to design a machine capable of solving problems of the same type.

If an algorithm solving every isolated problem of a given class is impossible, then a machine solving every such problem is likewise impossible.

This deprives of their very foundations the stories published in foreign (especially American) literature concerning machines capable of solving any problem, and automata replacing the scholar... Therefore the conative research enterprise in mathematics (as well as in any other branch of learning) will never be transferred to machines, capable only of assisting man but not replacing him.[23]

Against this it is often claimed, especially in areas of the American tradition, that it is not possible to generalize over and above the existing algorithmic competence - that we can never say never - and that we therefore cannot exclude the possibility of new algorithmic revolutions.[24] What remains is the fact that the algorithmic revolution has, up to the present, not brought about such an automated, general problem-solving method, either in mathematics or in any other area.

The control and automation perspective has played a central role as a motivating and driving force in the algorithmic revolution, but is not suitable for describing the result of the process. The explanation of this circumstance is of a linguistic character. The automatic procedure assumes that the semantic value of symbols is first frozen and then placed in parenthesis. As the automatic procedure therefore cannot contain its own preconditions, it cannot describe its own results either.

It can hardly be disputed that the algorithmic revolution implies a considerable expansion of the possibilities for designing and executing automatic processes. This automatization, however, includes only problems that have already been unambiguously formulated, and automatization moreover describes only one part of the potential which lies in the transition from physically bound to programmed mechanical operation. At the same time, with programming comes a complete dissolution of the automatic bond, as each individual step in any sequence can be made the object of choice, because the stored programme is distinct from the machine's control unit, which can thus be used to control and alter the stored instructions.[25] As the algorithmic expression is available in informational form, these changes can be executed at the level of the individual notations, quite independently of the original algorithmic expression's rule and data structure.

This property appears to a great degree to contradict the properties normally ascribed to an algorithmic procedure and it also apparently contradicts another of the properties which motivated the use of the algorithmic procedure in computers, namely to guarantee the reliable, automatic handling of the - otherwise inaccessible and unmanageable - informational processes.

The explanation for this apparent paradox lies in the fact that the computational programme structure not only permits the algorithmically controlled, automatic handling of data, but also the algorithmic handling of algorithmic expressions which are available in the informational notation form. Although it may be possible to find older examples of such a second-order handling of algorithms, there has never previously been an operative procedure which was independent of the task, not to mention any mechanical apparatus by which such a second-order handling could be performed relative to any informational notation unit, whether this is included as part of the expression of a rule or a data value. The methodological leap from first to second order handling of algorithms is therefore the central element in the algorithmic revolution.
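
A deliberately simple sketch of this second-order handling (the example is mine and only gestures at the principle): because a stored programme is itself available as informational notation, another procedure can read and alter it at the level of individual notation units, without regard to the rule and data structure it originally expressed.

    # A first-order algorithmic expression, stored as notation (a string of code).
    stored_programme = "def double(x):\n    return x * 2"

    # A second-order procedure: it operates on the programme itself as data,
    # altering a single notation unit ('2' -> '3') and thereby altering the rule.
    edited_programme = stored_programme.replace("* 2", "* 3")

    namespace = {}
    exec(edited_programme, namespace)   # re-instate the edited expression as a rule
    print(namespace["double"](5))       # 15: the altered rule is now executed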



Notes, chapter 8

  1. Shannon, (1949) 1961: 31.
  2. If we include the Morse alphabet, the latter point is modified, as the Morse alphabet also possesses two of informational notation's otherwise unique properties (invariant number limitation and no qualitatively different notations). Finally, the Morse alphabet is related to written language in the sense that it too has no unique characteristics which are not shared with at least one other notation system.
  3. C.f. 6.4 where this list is given.
  4. In the 1940's, the binary form was the object of much discussion and the choice was made on functional, pragmatic grounds which included such elements as the physical layout of the machine, process efficiency and simplicity, although von Neumann also referred to the binary character of logic as an argument for emphasizing the computer's logical rather than arithmetical functionality. Goldstine, 1972: 260.
  5. See Eisenstein, 1979 for further details. Gutenberg's personal role in the development of the new method of printing is still unclear, but as the method was developed at a printing house under his management, his name can still reasonably be used as a code.
  6. Bell, 1973. Beniger, 1985 traces the strategic use of theoretical knowledge as a foundation for the development of American society back to the building of new infrastructures around the beginning of the 19th century, but only sporadically touches upon the script-technological preconditions for the development strategies. It should perhaps be added that what has been written here should not be considered as a suggestion of any form of causality between the technical media and the exploitation of their potentialities. Many other circumstances are included in the same processes and the existence of a potential is not the cause of anything. The exploitation of technical potential has also, surprisingly often - perhaps almost as a rule - completely unforeseen and unforeseeable consequences.
  7. The demand for re-presentation in a form recognizable by the senses is handled in modern computers by an interface. The same internal procedure can thus be transformed into an arbitrary quantity of different expression forms which depend solely on the organization of the chosen output medium. A "picture" transmitted to a loudspeaker will thus produce a sequence of sounds which will largely be completely meaningless. We can therefore draw the conclusion that the meaning of the informational procedure is formed in relation to the interface and is not immanent in the procedure.
  8. Hjelmslev (1943) 1961: 5. The Danish original has "at sproget vil overses", that is: "language insists on being overlooked", implying that language has a will of its own.
  9. Trakhtenbrot, (1960) 1989: 203.
  10. The loss includes not only the reference to phenomena in the world, but also the linguistic meaning relationships between words and between sentences and - naturally also - the linguistic syntax.
  11. In Hjelmslev's terminology, this relationship is called a determination, i.e. a relationship between a functive, (the linguistic antecedent) which is necessary for the occurrence of another functive (the algorithmic procedure). Hjelmslev (1943) 1961: 34-35. In Hjelmslev's rather awkward use of the term determination it would be expressed: that the algorithmic procedure determines the linguistic sentence.
  12. It might perhaps be possible to describe a definition as a specific linguistic form different to other linguistic expressions - as explicit designations of chosen referents, as the referent is normally taken as given and is therefore not included in the linguistic expression.
  13. Since an algorithm is a formal expression, it may appear that this contradicts the statement in chapter 7: that formal systems can be polysemic but not multisemantic systems. Only a modification, however, is necessary, since the multisemantic potential of algorithms presupposes that the algorithm is itself conceived as a purely syntactic structure, without any reference to or dependence on a semantic interpretation. But even so, there is still a significant difference between the multisemantic potential of algorithms expressed in formal and informational notation systems, since the latter, as will be described in chapter 9, allows a wider range of possible variations.
  14. Sedgewick, 1983: 7. Williams, 1985: 21-24.
  15. Where psychology is concerned, the developments of the 1950's are described in Miller, Galanter and Pribram, 1960, among others, in an attempt to create a foundation for algorithmic psychology in a break with behaviourist psychology. Within linguistics, Hjelmslev is one of the pioneers of an algorithmically oriented theory, but the dynamic process perspective was first formulated in Chomsky's generative grammar. In the biological field, corresponding ways of presenting the problem are discussed by such authors as Emmeche, 1990, from a Peirce-inspired viewpoint.
  16. That there is also a great potential in this for an industrial expansion based on the development and use of industrial methods for handling biological and mental natural resources, is moreover often overlooked or underestimated in the many theories on "post-industrial" society.
  17. Wells, 1980: 276 ff.
  18. Knuth and Trabb Pardo, (1977) 1980: 202 ff. According to Williams 1985: 225, the Plankalkül contains by far the greater number of basic programming functions, among them variable functions, conditional sentences, loops, subscripts and parameter-determined procedures, but not recursive functions, i.e. procedures which contain themselves.
  19. Published in H.H. Goldstine and John von Neumann (1947-1948).
  20. R.W. Hamming, 1980: 7-8. The work referred to is by Burks, Goldstine and von Neumann (1946) 1989. von Neumann had formulated the logical principles of a machine with a stored programme in First Draft of a Report on the EDVAC from June 1945 and in Memorandum on the Program of the High-Speed Computer Project from 8 November 1945. They contained, however, no description of the coding and programming process. C.f. Goldstine, 1972: 191-203, 242, 253, 266.
  21. Goldstine, 1983. Especially chapter 2 and Beeson, 1988: 200.
  22. Wells, 1980: 277-283. A more detailed account of various aspects of the development of programming language can be found in, among others, Goldstine, 1972, Simon and Newell, 1972, Metropolis et al, 1980, Herken (ed.), 1988 and, to a lesser extent also Williams, 1985, who mainly discusses the development of hardware.
  23. Markov, (1954) 1961: 441.
  24. A view which is taken as a basis in Haugeland, (1985) 1987, among others.
  25. This property was not clearly in evidence in Turing's theoretical model from 1936 although he actually allowed the machine to alter its instructions and moreover stored both the programme and data in the same medium. But this was a central element in von Neumann's logical description from 1945. Goldstine, 1972: 259.