Niels Ole Finnemann: Thought, Sign and Machine, Chapter 5 © 1999 by Niels Ole Finnemann.


5.1 The demand for physically defined symbolic forms

Turing presented his answer to Hilbert's Entscheidungsproblem in the article *On Computable Numbers, with an Application to the Entscheidungsproblem* in 1936.[1] The answer was negative. It could be demonstrated that there is no general calculation procedure which can decide whether an arbitrary, well-defined arithmetical or formal logical problem can be solved in a finite number of operations. It is true that this piece of news was already a month old, as Alonzo Church had shortly before published a similar demonstration. Turing's demonstration, however, was formulated in a different way and contained two original results.

One was that Turing's definition of finite, formal procedures did not - unlike Church/Kleene's and Gödel/Herbrand's - depend on a specific set of formal axioms. This meant, maintained Alonzo Church in 1937, that Turing's definition had...

*...the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately - i.e. without the necessity of proving preliminary theorems.*[2]

Church therefore viewed his own thesis as a theoretical definition which was proved by Turing's analysis. It was ostensibly also this independence of specific formal axioms which convinced Gödel that his own and Church's definitions were not simply heuristic theorems.[3] He certainly emphasized a number of years later that Turing's definition had a distinct epistemological value.

*With this concept, one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e. one not depending on the formalism chosen.*[4]

This theoretical gain and the evaluation of its implications belong to mathematical logic.[5]

The second main result in Turing's article was contained in the tool he developed to carry out his demonstration. This tool comprised a description of what he himself called "the universal computing machine", later often referred to as a Turing machine. The key to this also lay in his definition of the finite, formal procedure, in that he showed that any such procedure could be divided into a series of step-by-step operations which could be performed mechanically.

Turing himself understood the two results as being mutually connected, just as the later literature also exhibits a tendency to view the Turing machine solely in a mathematical-logical perspective.

Although the tool was developed as part of a mathematical-logical demonstration, it has an independent character which can be described - and used - independently of mathematical logic.

This postulate implies on the one hand that mathematical-logical interpretations are regarded as valid descriptions of certain delimited classes of computational processes. On the other, the thesis implies that Turing's - and later others' - mathematical-logical descriptions of the computer contain restrictions which are not conditioned by the properties of the tool, but on the contrary by the mathematical-logical interpretation, which can consequently only be seen as a special case within a more general description.

Mathematical-logical descriptions of the Turing machine can thus be understood as descriptions of dedicated machines where the mechanical procedure is subordinated to a closed, formal - mathematical or logical - semantics. The Turing machine, however, is not defined by any demand on a formal semantics. It is, on the contrary, defined by the demand that the symbolic expression must be available in a physically and mechanically executable form.

The demands made on the physical form are, as will become evident, not only independent of the meaning and semantic organization of the expression, they also imply that any symbolic expression which is to be handled must be available in a notation system which is not subordinated to the semantic restrictions which hold true for formal notation systems.

The analysis which follows in sections 5.2-5.4 will thus result in three connected conclusions, namely that:

- A Turing machine is distinct from other known machines because the rules which establish its functional architecture are not defined as part of the machine, but on the contrary are included and defined in the description of the task. A Turing machine can thus only be used as a calculating machine (or to carry out a formal procedure) because it is not itself subordinated to the restrictions which are contained in the rules of arithmetic (or in the formal procedure).
- The physical demand for mechanical performance implies that both the rules which define the functional architecture of the Turing machine and the data that are to be processed must be represented in a notation system comprising an invariant, finite number of notation units, each of which is individually semantically empty. The central leap from the automatic calculating machine to the universal computer is brought about in and by the construction of a notation system which differs in principle from formal notation systems. This notation system will be designated informational notation in the following.
- Informational notation, which makes it possible to use a Turing machine to simulate an automatic calculating machine, also makes it possible to use this machine to simulate an indeterminately large quantity of both formal and informal symbolic expressions, as well as a multiplicity of non-symbolic processes and phenomena.

As will be shown, the unique properties of the Turing machine are founded on a new form of exchange between physical-mechanical and symbolic procedures. As Turing discovered the foundation for this construction in a number of assumptions connected with human cognition, which have also played a central part in later interpretations, they will be discussed in sections 5.6-5.9.

The Turing machine in its basic form is a quite simple model for performing mechanical calculation procedures. The leading idea is that any such calculation consists in running through a possibly large but, for a given task, finite number of repetitions of very few and individually simple operations. The demands on such a machine can be specified in the following points:

- It must perform its operations *step by step*.
- It must be able to *receive instructions in the form of symbols* on a *tape* divided into a number of *squares*, where a given square can contain *one symbol* (or be empty). This tape should in principle be endless, but it will at all times contain only a finite number of squares and symbols.
- Each symbol must have a physically well-defined form, as it must be able to produce a physical-mechanical effect. The number of permissible symbolic units must be finite, as they must comprise an invariant part of the machine.
- It must be able to "scan" the squares on the tape one by one, either by *moving forwards or backwards*, but always only one step at a time.
- A scanning must result in - similarly very few - different effects, either on the tape or on the state of the machine:
  - It must be able to write symbols in empty squares, delete a symbol which has been read in, leave it there, or change it.
  - It must be able to move the tape forwards or backwards to the next square.
  - It must be able to change the machine's "figuration".

- Finally, it must be possible to describe the machine - not the tape - using a finite number of distinct states, and each individual state, each "machine figuration", must be addressable.[6]
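The demands listed above can be sketched directly in code. The following is a minimal sketch, assuming a rule table of the form (figuration, scanned symbol) → (new symbol, movement, next figuration); the function and variable names are illustrative, not Turing's. The example rules are Turing's first machine, which prints 0 and 1 on alternate squares.

```python
# A minimal sketch of a Turing machine interpreter. Each rule maps
# (figuration, scanned symbol) to (symbol to write, movement, next
# figuration), in line with the demands listed above.

def run(rules, tape, state, steps):
    """Run the machine for a fixed number of steps.

    rules: dict {(state, symbol): (write, move, next_state)}
    tape:  dict {position: symbol} -- 'endless' in both directions,
           but only finitely many squares are ever used.
    """
    pos = 0
    for _ in range(steps):
        scanned = tape.get(pos, ' ')        # an empty square
        if (state, scanned) not in rules:   # no applicable rule: halt
            break
        write, move, state = rules[(state, scanned)]
        tape[pos] = write                   # write, erase or leave the symbol
        pos += {'L': -1, 'R': 1, 'N': 0}[move]   # move one step at most
    return tape

# Turing's example machine: prints 0 1 0 1 ... on alternate squares.
rules = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): (' ', 'R', 'e'),
    ('e', ' '): ('1', 'R', 'f'),
    ('f', ' '): (' ', 'R', 'b'),
}
tape = run(rules, {}, 'b', 8)
print(''.join(tape[i] for i in sorted(tape)).rstrip())  # -> 0 1 0 1
```

Note that the rules are ordinary data handed to the interpreter, not part of the interpreter itself; this is the separation of machine and task that the following sections turn on.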

The description can be compressed, as any Turing machine can be described as a finite set of sequences, each with the form:

F α β M G

as the form expresses that a machine in a given figuration, F, with α in the actually scanned square will replace this α with a β, move the tape, marked by M, one place to the left or the right, or remain in the same place, and change its figuration to G.[7]

As the machine's behaviour is in this way determined by its actual state (machine figuration) and the actual scanned symbol, this specification is adequate to describe the behaviour of the total system (the configuration) at any given moment.

Using this inventory, Turing then went on to show how it was possible to draw up a table for any arbitrary computable sequence which would indicate the necessary configuration in a standard form so that the calculation could be performed solely with the help of the operations indicated by the table. As the machine can begin by reading a description of the standard form for a given computable sequence, there lies in this a new possibility, subsequently referred to as Turing's thesis or theorem:

*It is possible to invent a single machine which can be used to compute any computable sequence.*[8]

The idea is thus not only that any - possible - calculating procedure can be performed mechanically, but that with suitable programming it can be performed by one and the same machine, "the universal computing machine".[9]

Looked at from Turing's - and mathematical logic's - point of view this description of the machine is exhaustive. What remained was to give an account of its possible uses which, as far as Turing was concerned, primarily involved two questions. One was the question of which calculation tasks such a machine could carry out. The other was the meta-theoretical question of how the theoretical model could be exploited to clear up Hilbert's Entscheidungsproblem and eventually other meta-theoretical problems in mathematical logic as well.

Neither of these questions gave occasion to regard the machine's physical method of functioning as a central element in understanding its basic form. In later literature it has often been claimed that among the merits of the purely formal description of the machine was that it was not determined by specific physical properties. It is generally acknowledged, however, that a physical realization contains certain restrictions, among them that which lies in the difference between Turing's infinite tape and the finite capacity which characterizes any real machine, just as formal procedures in real machines are subject to the restrictions of time.

But the physical realization not only plays a central role for an understanding of modern computers, it also plays a somewhat overlooked, but nevertheless fundamental role for an understanding of the functionality of the Turing machine.

5.2 The demand for universality and the dissolution of mechanical and symbolic procedures

If Turing had been asked how he would describe the relationship between the symbolic and the physical process, he might have answered that the physical-mechanical processes were simply divided into a series of individual steps which were regulated by a finite and deterministic symbolic procedure. Such an answer would be in agreement both with the classical understanding of physical-mechanical processes and fulfil the purpose that lay in using the machine to solve finite calculation tasks, just as it incidentally places the Turing machine in the company of other, already familiar calculating machines.

There can be little doubt either that Turing himself saw the universal computer as a calculating machine and considered the main point to be the arithmetical analysis of finite procedures, as this analysis implied a considerable increase in the types of problem which could be made the object of automatic calculation. With this overstepping of the hitherto known limits for calculating machines, Turing took the idea of the automatic calculating machine to its theoretical completion.

There were others who were going in a similar direction. The German engineer, Konrad Zuse, thus built the first automatic calculating machine (with memory, control unit and a punched tape as input medium) during the years 1936-1938, while the American engineer, Claude Shannon, published a thesis in 1938 in which he used George Boole's logical algebra to describe and organize physical relay systems as logical functions.[10]

Turing's description, however, contained a far more radical innovation, as he not only described how to construct an automatic calculating machine, or a logically controlled relay system, but also how *any* finite formal procedure could be performed by *one and the same machine*. He thereby also took the fundamental theoretical step which led from the automatic calculating machine to the universal computer.

The principal difference between these machines stems from the demand for universality, as this implies that the operational rules of the machine are described together with the task. They can therefore not be built into the machine's invariant physical architecture, which must on the contrary function completely independently of any definite rule of arithmetic; if it does not, the machine will be limited to a finite set of built-in rules/formal axioms.

It is precisely at this point that Turing's theoretical machine differs from Zuse's, which used a built-in mechanical calculator while the punched tape only contained the calculating task itself.

Now this was not simply a question of separating the definition of the machine from the definition of the rules of calculation it was to follow. Or rather, this separation immediately raises a very radical question, namely whether and how any finite formal procedure can be described in such a way that it can be carried out by a machine which is only capable of repeating the same few, very simple mechanical operations again and again?

The answer to this can be obtained by looking more closely at the tape where the dividing line is drawn between machine and task and where the transition from the formal expression to the physical-mechanical performance takes place. It appears from this that the mechanical performance of a finite formal procedure requires that the formal expression - the task as well as the rules which are to be effectuated - be converted to a notation system which consists of a finite number of notation units individually empty of meaning.

That Turing himself understood the mechanically executable notation form as absolutely equivalent to formal notation, makes it necessary to look more closely at his conversion procedure and the "machine language" which is produced as a result.[11]

In the examples he provides he uses a quite arbitrary notation. In some cases he uses the two symbols of the binary number system (0 and 1) as a notation for the number system (and only for that), the letters L (left), R (right) and N (none) for rules of movement, P (print) for the writing operation, E (erase) for the deletion operation and a number of auxiliary signs for addressing and other functions. In other cases both number values and functions are expressed by letters - all in accordance with the principles of formal notation. The quantity and types of sign are directly derived from what is necessary for the given calculation procedure and each symbol has its own semantic value. As the symbols and functions can also be described physically, they can as such perfectly well be implemented in a machine.

This machine, however, is not a Turing machine, but a calculating machine, as it operates with a limited number of functions/rules and with a specific semantic content connected to the individual physical units of expression. The universal machine, on the other hand, demands that this formal notation be converted to a standard notation which, in its form, is quite independent of the rules of calculation and the meaning of the symbols.

Turing described the conversion with a starting point in the previously mentioned description of finite calculation procedures as a list of the total number of operations of the form

F α β M G

as each of these is given a number which indicates the sequence, so that the complete list comprises a set in the form of

F1 α1 β1 M1 G1; F2 α2 β2 M2 G2; ... ; Fn αn βn Mn Gn

to which is added a punctuation mark between the individual sequences.[12]

To enable the machine to identify a given figuration (sequence no. *i*) the expression is converted to a new form. The machine figuration (before: F and G) is represented by the letter D, while the figuration's number in the list (*i*) is represented by another letter, A, appearing *i* times.

The actual symbol is similarly defined partly by the same letter, D, partly by a number (*j*) which is similarly represented by allowing another letter, C, to appear (*j*) times. There is a differentiation between (*i*) and (*j*) because the symbol in a given square can change during the procedure. The combination DA...A thus indicates the actual figuration, while the following DC...C indicates the actually scanned symbol which is again followed by a new sequence DC...C, which indicates the new symbol value in the actual square.

Then follows one of the symbols for the next movement (L, R, N) and finally a new DA...A sequence which contains the address of the next figuration to be processed. To differentiate between the two functions covered by DA...A sequences (which partly mark the beginning of a figuration, partly the next actual figuration) a punctuation mark, the semicolon, is introduced to the left of the first and to the right of the second DA...A.

This produces an unambiguous and serial representation which can be performed mechanically step by step in the form of a long list with very few different letters (A, C, D, L, R, N) and the separating character which indicates the beginning of a new figuration. As an example he presents the standard form

DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA;

which can produce the expression 0 1 0 1 as a result.
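The conversion described above can be sketched as a short program. This is a hedged illustration, assuming the letter scheme just described (D followed by *i* A's for figuration no. *i*, D followed by *j* C's for symbol no. *j*); the function names are mine, not Turing's. With the example machine numbered so that the blank is symbol 0, "0" is symbol 1 and "1" is symbol 2, it reproduces the standard form quoted above.

```python
# A sketch of the conversion to standard form: each quintuple
# (figuration i, scanned symbol j, written symbol k, move, next
# figuration m) becomes a sequence of the letters A, C, D and a
# movement letter, with ';' separating the sequences.

def encode(quintuple):
    i, j, k, move, m = quintuple
    return ('D' + 'A' * i        # actual figuration
            + 'D' + 'C' * j      # actually scanned symbol
            + 'D' + 'C' * k      # new symbol in the actual square
            + move               # L, R or N
            + 'D' + 'A' * m)     # address of the next figuration

def standard_description(quintuples):
    # the semicolon separates (and here terminates) the sequences
    return ';'.join(encode(q) for q in quintuples) + ';'

# The 0 1 0 1 machine in numbered form (blank = symbol 0,
# '0' = symbol 1, '1' = symbol 2).
machine = [(1, 0, 1, 'R', 2), (2, 0, 0, 'R', 3),
           (3, 0, 2, 'R', 4), (4, 0, 0, 'R', 1)]
print(standard_description(machine))
# -> DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA;
```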

By converting the letters to numbers in accordance with an established code (1 for A, 2 for C... and 7 for the separating character) a corresponding description number is produced which can stand as an unambiguous description of the sequence of machine figurations which can perform a given calculation task.
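The step from standard form to description number is a plain digit substitution. A minimal sketch follows, assuming Turing's full code (the text quotes only its first and last entries): 1 for A, 2 for C, 3 for D, 4 for L, 5 for R, 6 for N and 7 for the separating character.

```python
# Digit code for the letters of the standard form. The values for
# D, L, R and N are supplied as an assumption; the text above quotes
# only the code for A, C and the separating character.
CODE = {'A': '1', 'C': '2', 'D': '3', 'L': '4', 'R': '5', 'N': '6', ';': '7'}

def description_number(standard_form):
    # substitute each letter by its digit and read the whole as one integer
    return int(''.join(CODE[ch] for ch in standard_form))

sd = 'DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA;'
dn = description_number(sd)
print(str(dn)[:10])  # -> 3133253117
```

Because the substitution is one-to-one, the number can be decoded back into exactly one standard form, which is the unambiguity the following quotation asserts.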

Turing's point with this description is to show that it will always be possible to express a given calculation procedure in (at least) one such standard form with an accompanying, unambiguous description number which, conversely, will also correspond to only one given calculation procedure:

*To each computable sequence there corresponds at least one description number, while to no description number does there correspond more than one computable sequence.*[13]

This claim must be qualified, however, because it presupposes that the sequence is seen from the point of view of the given formal task. Turing thereby overlooked the principal difference between the notation units of the standard form, which are defined on the basis of their mechanically effective, physical form, and the formal notation units, which are defined by a semantic value determined in relation to the task.

The explanation is naturally that this difference played no part in the mathematical-logical perspective, where the whole point was to show that the formal expression could be converted to the mechanically executable form.

The demands the mechanical performance makes on notation, however, imply that the expression appears in a form which can be processed independently of the semantic content, and this provides the universal computer with a number of properties which are beyond the scope of even the most sophisticated automatic calculating machine. These demands constitute the only principal restriction on the universal computer and they can be described by taking yet another look at Turing's tape.

Now looking at this tape is not exactly a straightforward matter, as Turing claimed that it had to be infinite and thereby possess an abstract, physically unrealizable property. He did this because it is impossible to define an upper limit to the number of squares which may be necessary in order to construct a machine that must be able to perform all imaginable finite calculation tasks. But he also made the provision that the tape is a thoroughly concrete and mechanically effective physical entity.

Turing defended this dualism in the conceptualization of the tape by saying that the machine would always only carry out a finite number of operations and therefore also only needs a finite number of squares on the tape. The infinite tape simply indicated that the necessary number of squares varied with the given task. Since a finite formal procedure is defined as one that can be performed through a finite number of steps, it would be possible to do so with a machine equipped with the corresponding number of squares.

The answer implied that, as such, there is no demand for an infinite tape, but only for a tape with an indefinitely large number of squares. It is therefore rather surprising that Turing raised the theoretical problem of the infinite tape at all.

He does so, however, because the problem exists, even though it has no significance for the machine's construction and mode of operation. The reason for this is that it was impossible to determine beforehand whether an arbitrary task could actually be performed through a finite number of steps and therefore impossible to decide how long it would be necessary to continue to supply the machine with more squares.

Mechanical theory offers no solution to Turing's problem as it does not allow a material representation of the infinite, or physically active bodies to have an infinite extent. Turing's tape cannot be imagined on the basis of classical mechanical physics and his re-interpretation of mechanical procedures would hardly have a purpose if it merely concerned building ordinary physical machines. The reward, on the contrary, was the possibility of building a machine which could be regulated through mechanical procedures which were not built into the invariant physical structure of the machine.

Nor did he derive the unique ideas of the tape and the step-by-step procedure from mechanical physics. They were derived as a result of a phenomenological analysis of a certain type of human symbol manipulation, namely practical arithmetic as carried out with a pencil and paper.

Here he also found another theoretical argument which, it is true, did not solve the problem of the infinite tape, but which provided a completely new dimension, as he used the physical image of a closed, finite world to describe consciousness as a closed, finite system. He then concluded that the universal computer would be capable of performing any calculation which could be performed mechanically by man. The problem of the infinite tape fades behind the limitations of human consciousness. It only emerges beyond our own reach.

Turing used the contemporary - human - computers not only as an illustrative analogy, but also as a starting point for a more detailed analysis of the arithmetical process.

The argument regarding theories of consciousness, which will be taken up in sections 5.6-5.9, thus plays a central role for Turing, but none at all for the Turing machine's actual mode of operation. Turing uses the hypothetical assumptions regarding human calculation as a source of inspiration - and draws a veil across the problem of the impossible tape.

What remains is the activity carried out on the physical and finite part of the tape. Looked at from a physical perspective this is a question of two process levels. First, the tape as a whole is moved step by step, bringing a new square into the reading mechanism. The movement halts here while the reading mechanism reacts - mechanically - to the physical form, a symbol, manifested in the given square. A mechanical effect is now produced: the physical form may remain unaltered, be deleted or be replaced by another symbol, after which the tape is moved another step so that a new square reaches the reading mechanism.

In any given state the relationship between a given square and the symbol it contains is bound and fixed, but the bond is only local. The next step may be an alteration of the symbol in a given square, or an alteration of the square, and thereby also of the place of the - in this case invariant - symbol. The relationship between the square and the symbol itself is variable, whether the symbol is actually altered or not. This functional property is possible because the square and the symbol comprise two separate physical levels which are not part of a physically bound determination, even though the symbol can only exist as a physical manifestation in a square. The connection is subject to an optional regulation which makes it possible to have the symbol in a given place changed into another.

It is evident that all these mechanical procedures depend on physical forms which we ourselves understand as symbols, but which appear and work here as purely physical-mechanical entities, and that these symbols therefore belong to the physical, invariant part of the Turing machine. The physical symbol forms must therefore be defined prior to and independently of the task to be performed. It is also clear, for the same reasons, that there must be a predetermined, finite number of permissible physical symbolic units - which are independent of the task to be performed.

The mechanical process is thus independent of the symbolic interpretation of the physical forms and depends entirely on the effects created by the physical form of these symbols. Whether the symbol symbolizes something outside the system (and if so, how) is of no importance for the machine's physical mode of operation.

It is also clear that not all these mechanical parts of the machine can be included in its construction, as the - changing - location, sequence and mechanical effects of the individual mechanically effective symbol on the tape and on other symbols are first defined by the task the machine is to perform. The tape and the symbols on it are at all stages both part of the machine and part of the material that is to be processed. As far as the tape is concerned, the necessary number of squares is determined by the task, while as far as the symbols are concerned it is the task which determines their sequence and the semantic value of the total procedure. The machine, on the other hand, determines the structural division of the tape into squares and the permissible number of physically determined, semantically empty symbols.

The central point is thus the distinction between the definition of the physical and semantic value of the symbols, as the one definition is part of the machine and the other part of the task.

The physical symbol definition was not in itself a theoretical innovation. This type of definition had long been familiar in such areas as the Morse alphabet used in telegraphy. Here, however, the physical definition still went hand in hand with the declaration of unambiguous and invariant semantic values. Nor did the physical definition particularly interest Turing. He touches upon it only in a footnote, where he remarks that it is possible to describe a symbol as a - measurable - set of points corresponding to the form of the ink within each square, and thereby defines the criteria necessary to make the machine capable of differentiating between the individual symbols.[14] This definition indicated not only a general theoretical solution of the problem of mechanical reading for a few chosen symbols, it also allowed the use of an arbitrarily large number of symbols with a single restriction - that the number be finite, because the difference between symbols approaches zero as their number increases.

Turing thus appears to have imagined that a very large number of notation units might be necessary for solving complicated tasks. The decisive point for him was that there must be a finite number because the permissible symbols had to be included in the building of the physically invariant machine. This is also a reflection of the fact that he still understood the number of notation units as a function of the task and the individual notation unit as loaded with a semantic content of its own.

Notwithstanding this he could not avoid the conversion from formally and semantically defined to physically defined symbol sequences, as the conversion to "machine language" is the condition for mechanical, step-by-step performance and thereby also the Turing machine's *conditio sine qua non*.

The demand for universality not only implies that there must be a predetermined and therefore limited number of notation units defined by the mechanically effective form, the same - few or many - units must also be capable of manifesting themselves as expressions with different meanings as they must both be able to represent an arbitrary quantity of changing data and an arbitrary number of changing rules. As no distinct limits can be defined for this demand on the possible semantic variation of the notation units, there can consequently be no definite semantic value in the definition of the individual notation unit. This cannot represent anything definite as it must be able to successively play a part in representing everything.

Turing thus passes over the crucial point in the conversion of the deterministic symbol procedure to the mechanically executable form, as he reads the two forms as equivalent. The moment a given formal procedure is available in the standard form, it is available in a form where the individual notation unit is accessible to manipulation and where its meaning is solely determined by the - optional - preceding and succeeding units.

The universal computer not only requires the mechanical procedure to be carried out as a series of semantically empty single steps, it also requires a corresponding subdivision of the task as well as of the rules of calculation that are to be used.

Turing's theoretical description thus shows not only that it is an advantage to specify the rules of calculation together with the task rather than to incorporate them into the machine, it also shows that this advantage, which is a necessary precondition for the universality of the machine, implies that these rules must be expressed in a form in which they can be processed independently of their semantic content. This means that the rules of calculation can become an object of calculation in themselves and that at each stage of the process they can be modified or suspended independently of the previous steps and of the original rule structure.
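The point that the rules of calculation can themselves become an object of calculation can be illustrated in the same hypothetical encoding used earlier: once the rule structure is a sequence of semantically empty notation units, an ordinary data operation suffices to modify it.

```python
# The standard form of a machine is an ordinary string of notation
# units and can therefore be edited by the same operations that edit
# any other data. Here one rule of the 0 1 0 1 example machine is
# rewritten so that figuration 3 prints a (hypothetical) third symbol,
# DCCC, instead of DCC - a modification of the rule structure itself.
sd = 'DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA;'
modified = sd.replace('DAAADDCCR', 'DAAADDCCCR')
print(modified != sd)  # -> True: the programme was edited as data
```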

The universal computer can only be universal because it is not defined by the symbolic logic of the task it is to perform. The universal properties of the machine are on the contrary contained in and determined by the demand that it must be possible to re-present the task in a notation system which is defined by the notation's physical - mechanically effective - form, independent of the symbolic meaning and logical structure of the notation.

Where the automatic calculating machine builds on the mechanical execution of deterministic arithmetical rules, the Turing machine builds on a dissolution or breakdown of the deterministic rule structure into separate mechanical steps.

While the machine is subordinated to the demand for a well-defined alphabet, it is not subordinated to any demand for a specific syntax or semantics. It thereby allows a treatment of symbols which completely lacks the determination which defines the calculation procedure.

This difference also appears when we look at what Turing calls the machine's memory: the total memory, whose content can be described and calculated on the basis of the symbolic description, cannot be contained in the Turing machine, because the machine continuously erases or changes some of the symbols and continuously increases the number of used squares.

Although it is true that the system's memory at any given time can be described on the basis of the machine's (i.e. the tape's) total state at a given stage in the process, this definition does not contain all the existing and erased information which has been or will be on the tape's other squares. Thus no finite representation of the whole system's total memory exists. The deterministic character of the system is only local.[15] As erased and changed symbols are also included, the determination is at the same time irreversible.

The point is naturally that the machine has no use for such a memory, as long as the necessary instructions are present at the time they are to be used. What this demand implies can only be established, however, through a symbolic reading; it cannot be decided through a reading in the mechanical form to which the machine is bound.

The paradoxical result of Turing's analysis is therefore that the symbolic rule determination he used to dissolve the physical-mechanical process into facultative individual steps must itself be dissolved before it can be performed mechanically. This reduction at the same time constitutes the decisive dividing line which separates it from all earlier attempts to create a universal calculating machine or logical symbol manipulator - from Raymond Lull through Leibniz to Charles Babbage.

The independent physical definition of the form of symbols is thus not simply a technical detail connected with the mechanical performance; it is also the foundation of the previously mentioned difference of principle which separates the universal computer from all calculating machines, as it determines:

- That any given sequence of individual steps can be performed independently of its symbolic meaning.
- That one and the same sequence of notation units can, in principle, represent facultative variable symbolic values and/or logical structures.
- That the symbolic procedure, "the programme", which is used to control the mechanical process, must be explicitly expressed and converted to the form of the standard description, as a series of individually manipulable notation units, just like all other kinds of data.

Turing's postulate that a given sequence which is available in a standard form can only correspond to one definite computational process thus primarily reflects the fact that he interpreted the universal machine on the basis of a deterministic (mathematical-logical) understanding of symbols which allowed no room for describing these three properties. These properties are, conversely, necessary preconditions for the ability of a computer to solve a multiplicity of tasks, such as word processing, represented by the present work, for example, although there is no formal mathematical-logical description of these tasks.

5.3 Formal and informational notation

As will be evident from the preceding, the demand for computational universality implies that the machine must work independently of any specific rule of arithmetic or formal procedure. The price for this necessary freedom is that the formal expression must be converted to a mechanically executable form which demands a notation with a predetermined, finite set of physically defined notation units which are individually empty of meaning.

With these two demands, the conversion of a given task to the mechanically executable form becomes identical with a complete conversion from a formally defined to a physically defined notation system. The latter depends upon other principles for meaning attribution and allows several forms of meaning representation, as this notation is not subordinated to the demand for a complete formal description of the meaning represented.

Turing provides no complete description of these notation conditions, as he only makes explicit the demand for a finite number of physically defined notation units, not the demand that the individual notation units be defined without any intrinsic semantic value. This second demand is an implicit, never explicit, yet necessary precondition in Turing's analysis.

Together, however, these two demands imply the use of a notation system which does not build upon formal notation principles. As this also - as will appear from chapters 7-8 - differs from linguistic and other previously described notation systems, it will be regarded in the following as a new, independent notation system.

As Turing's notation is distinct from the - binary - notation which is used in modern computers, there is reason to include the latter already at this point. This will also provide the opportunity to illustrate the difference between formal and informational notation principles with the same (binary) notation set as an example.

In the binary number system both notation units always have a definite numerical value determined by their position in the expression. If the same unit appears in another position, it has a correspondingly different, predetermined numerical value. All arithmetical notation systems presuppose a set of general, invariant rules for attributing values to each individual notation, just as formal notation presupposes that the individual notation units are connected with a definite value, which is either a data value or a rule value. Rule and data values are thus each expressed through their own distinct set of notation units (or rules of positioning), and any change of a single notation unit is connected with a change - determined by the semantic value of the new notation - in the total content of the expression.
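This positional principle can be shown in a short sketch (Python is used here purely for illustration; the notation system itself is the point):

```python
# In positional binary notation the value contributed by a digit is fixed
# by its place in the expression: the same unit '1' stands for 8, 2 and 1
# below, never for a freely assignable value.
digits = '1011'
value = sum(int(d) * 2**i for i, d in enumerate(reversed(digits)))
print(value)  # 11, i.e. 1*8 + 0*4 + 1*2 + 1*1
```

The attribution of value is thus fully determined by a general, invariant rule given in advance of any particular expression.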

There are only two notation units in the binary number system, but as it is also necessary to use an - arbitrary - number of rule notations, binary number notation can only be used in connection with a more comprehensive notation system in which, in principle, new notation units for operators can be introduced arbitrarily, on the single condition that each individual notation unit is ascribed a certain content value at the moment it is introduced. There is no definite, invariant limit to the number of notation units in formal notation systems; on the other hand, a notation can only become a member of a formal notation system through a declaration of its semantic value.

This also holds true of formal expressions which use notations with variable values, as this makes it necessary to indicate well-defined, formal rules for value variation. A variable value, *x*, can only appear in connection with a declaration of variation thresholds and it cannot at one moment appear as a variable numerical value and at the next as a rule of arithmetic.

Arithmetical notation systems are, like all mathematical and formal notation systems, based on explicit and unambiguous declarations of the individual content value of the notation units. These values are again determined in relationship to an overall set of rules for semantic variation within the given formal system.

None of these conditions is valid for the use of binary notation as informational notation in computers. On the contrary, here, as previously shown, it is a question of a notation system comprising semantically empty notation units without general rules for the values which can be attributed to the individual units. In the binary version only two notation units are used, which is not, however, a necessary condition, as long as their number has been predetermined. Nor can any differentiation be made between separate rule and data notations. The same two units must represent parts of numbers, parts of arithmetical rules and logical relationships alike. In some cases they must act as parts of an address in the system, at other times as parts of a procedure for producing an output. In other words, they appear with changing values in the same sequence.

These values are never bound to the individual notation unit, but only to the given sequence as a whole. Thus no separate rule notations appear, nor can new notation units be introduced during a given procedure. Rule and data must on the contrary be expressed with the same notation units, and the rule can only be effectuated through a sequence of individual steps carried out at the level of the notation units, which means that the rule can only be effectuated by being represented and processed exactly as all other data.
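That one and the same sequence of notation units carries no value of its own, but only acquires a value through the reading applied to the sequence as a whole, can be sketched as follows (a hedged illustration in Python; the byte values are arbitrary):

```python
import struct

raw = b'\x01\x02'                     # one and the same pair of notation units
as_le = struct.unpack('<H', raw)[0]   # read as a little-endian 16-bit integer
as_be = struct.unpack('>H', raw)[0]   # read as a big-endian 16-bit integer
as_cells = list(raw)                  # read as two separate small numbers
print(as_le, as_be, as_cells)  # 513 258 [1, 2]
```

Nothing in the sequence itself announces which reading applies; the value lies in the constellation plus the procedure brought to bear on it.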

It is also evident from this that the concept `data' is not an adequate concept for the operationally active notation unit, if by this we imply an equivalence between the minimum unit of expression and the minimum content value.

While such an equivalence is the foundation of formal notation, informational notation is defined by non-equivalence which - as will appear from chapter 7 - is a property informational notation shares with common language notation. The principle of such notation systems is expressed by the concept `double articulation', by which is understood notation systems where the minimum expression unit is a semantic variation mechanism which is smaller than the minimum content value.

It is natural to illustrate this relationship by starting with a well-established representation standard such as the ASCII code, which establishes a convention-determined (freely chosen) binary representation of up to 256 notations derived from other notation systems in a constellation of 8 bits, each of which can assume one of the two values 0 or 1, corresponding to 2^{8} bit patterns. As long as we concentrate solely on the ASCII code itself there is a clear equivalence: each individual represented notation unit has an unambiguous binary equivalent. The point of informational notation, however, is that it must not only be possible to represent letters, numbers and operators, but also to effectuate the operations mentioned. Where *we* can simply write 1+1, the *machine* must express the two numerical values and the operator, as well as effectuate a mechanical process which produces the ASCII value for the sum, with the help of two and only two notation units. These two units thus appear in this process as partial elements both in the binary expression of the number /1/, the letter /a/ and the notation /+/ and in the arithmetical rule of addition. The ASCII code also shows that the individual notation unit never has an intrinsic semantic value. The meaning is only connected with the total constellation - in this case of 8 bits - but the meaning variation is nevertheless manifested through the variation of a single bit.
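The single-bit variation mechanism can be shown directly on the ASCII code (Python is used only to display the bit patterns):

```python
# 'a' and 'c' differ in exactly one bit of their 8-bit ASCII patterns;
# the bit itself carries no meaning of its own -- only the whole
# constellation of 8 bits does.
a, c = ord('a'), ord('c')
print(format(a, '08b'))   # 01100001
print(format(c, '08b'))   # 01100011
flipped = a ^ 0b00000010  # vary a single bit
print(chr(flipped))       # 'c'
```

The variation of one unit of expression thus shifts the represented meaning, although no individual bit can be assigned a content value.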

This difference between formal and informational notation also holds true in the cases where binary notation is interpreted as a logical relationship between two possible alternatives. Although the alteration of a single bit in the informational notation sequence can have semantic effects, these cannot be described by interpreting the binarity as an alternative between the two semantic content values, yes or no. The individual bit is a unit of expression which is smaller than the smallest content value, as the smallest content value (a numerical value or a logical yes, for example) always requires a sequence of bits before it can be performed in the machine. While the rules in a formal system are always defined outside the system, the rules must be explicitly contained in an informational expression, and they can only work as rules if they are themselves performed through a number of step-by-step mechanical stages. The rule effects appear here as an integral part of the very process we say they regulate.

While it is thus possible to describe formal (symbolic or mechanical) processes as rule-determined processes where the rules are predetermined and given outside the regulated system, the process in the Turing machine must be described as a process in which rule formation and execution are an integral part of the result of the process.

The difference between formal and informational notation is finally emphasized by the fact that it is not the physical form of the formal notations, e.g. the binary numbers, but their numerical value which determines the effect on the calculation process, whereas informational, e.g. binary, notation works solely by virtue of the physical form of the notation, no matter whether the entire sequence has been imagined as a logical value, a rule structure, or a numerical value.

When we take these differences into account there appears to be no possibility of understanding informational notation within the framework of formal notation principles. The conversion of the formal expression to a mechanically executable form implies that the structure of the formal expression can only be retained in a form in which the determination of the structure assumes a resoluble, freely editable and variable form in line with the material that is structured.

It is also evident that this structural dissolution of the expression form not only goes much further than the aims which motivated it, but also exceeds the understanding of formal notation. It was not by chance that Turing overlooked the demand for semantically empty notation units and that he failed to arrive at the binary notation form.

It is therefore doubtful whether Gandy is right in presuming that Turing relinquished the idea of suggesting a purely binary notation out of regard for the reader.[16] It is true that nobody can be certain what unspoken considerations Turing may have taken into account, but there is no indication that in 1936 he would have been able to imagine such a complete binary representation.

If he had, it would have complicated the explanation and produced problems in the description of the machine. He would not only have had to specify how the machine could decide which symbols were numbers, which were arithmetical rules and which were rules for movement, instead of keeping the intuitive advantage of a more arbitrary choice of easily recognizable symbols whose physical form was still directly connected with functional semantic values; it would also have created a conceptual break for which there was, at the time, no motive.

5.4 The automatic, the circular and the choice machine

The lack of a distinction between formal and informational notation principles was of no direct significance to Turing's project and reflects his formal perspective regarding the machine.

The limitations of the perspective, however, became evident in his formal definition of the mechanical procedure, as he introduced here two central modifications which, each in its own way, showed that he was unable to demonstrate the necessary theoretical distinction between the universal machine and the specific tasks.

The first modification finds expression in his distinction between what he refers to as the automatic machine - which in his eyes was the genuine universal machine - and what he called "the choice machine". While the automatic machine is assumed to be completely determined by the given configuration, the choice machine is characterized by the fact that in certain states a choice must be made by an external operator.

*When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator.*[17]

States thus occur in this machine where the next step is not determined by the current configuration.

Turing clearly understood the choice machine as a less interesting and more limited version of the automatic machine. The choice machine is dealt with only in a single footnote which mentions that it can be simulated on the automatic machine.

The interesting point here, however, is that Turing's distinction between the automatic machine and the choice machine has nothing whatever to do with the properties of the universal computer.

He introduced the distinction not because the two machines work in different ways, but on the contrary because they are given different tasks. The two machines are identical. The introduction of the distinction rested on a theoretical problem connected with the question of whether all formal systems could be represented in a set of distinct, finite operations.

For Turing the potential of the choice machine lay exclusively in the need for the intervention of an external operator in the handling of certain formal systems. Although he described the necessity of the choice as a consequence of the fact that the next step was not determined, he quite naturally assumed that the possibility of choice alone was of relevance for the handling of - a certain group - of formal symbol systems.

As this possibility of choice is not connected with a machine which differs in any way from the automatic machine and as the possible choices are solely limited by the demand that the symbolic meaning must be expressed in a finite notation system, it becomes clear that Turing is only capable of defining the universal computer as an automaton by defining the machine on the basis of a certain class of task. He thereby draws a veil across the properties which make the machine universal, whether this universality is seen in its specific mathematical-logical meaning, or in a more general symbolic way.

Turing saw the difference between the automatic machine and the choice machine in the light of different classes of deterministic symbol processes, but the potential of the choice machine goes further than this difference, because the machine itself makes no demand that the formal expression must represent a closed or unambiguous semantic message.[18] It only makes a demand on the form of the notation. The potential of the choice machine is therefore only limited by the external operator's ability to express a message in a finite set of distinct expression elements. It is also perfectly possible - as is shown by any word-processing programme today - to control the computational process with a symbol system where no unambiguous deterministic relations exist between the individual symbols used by the external operator.

The automatic Turing machine represents only a special case of a more universal symbol manipulating machine. It realizes only a limited spectrum of the machine's potential. This spectrum is characterized by the operator allowing the mechanical procedure to be controlled by a precept which contains a deterministic description of a given problem area. In other words the automatic machine is a dedicated machine devoted to a previously limited set of tasks which determine all its operations. In this form it approaches the classical machine, but also in this case there is a basic difference, as the automatic procedure's determination is symbolic - not physical - and thereby accessible to new choices.

Turing's distinction between the choice machine and the automatic machine emerged as a consequence of the theoretical problem which was his starting point, namely the question of whether it is possible to break down formal procedures into a finite number of distinct mechanical steps. It was possible in many cases, but not all.

The other central modification came, on the other hand, from the result. Turing settled the Entscheidungsproblem by demonstrating that there is no algorithm which can determine whether an arbitrary formal procedure fulfils the demand that it can reach a conclusion with the help of a finite number of operations. This proof of what later became known as the halting problem not only demonstrated that there are formal procedures which cannot be carried out with finite means, it also demonstrated that there is no general method for determining whether an arbitrary, given procedure has such a finite solution.
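The structure of this demonstration can be sketched as the classical diagonal argument (a hedged sketch: the decider `halts` is hypothetical and cannot in fact be implemented, which is exactly the point of the proof):

```python
def halts(program_source: str, argument: str) -> bool:
    """Hypothetical total decider, assumed -- for contradiction -- to
    answer whether the given program halts on the given argument."""
    raise NotImplementedError("no such general decider exists")

def diagonal(program_source: str) -> None:
    # Loops forever precisely when `halts` claims the program halts
    # when run on its own source. Applied to its own source, `diagonal`
    # therefore halts if and only if it does not halt -- so no total
    # `halts` can exist.
    if halts(program_source, program_source):
        while True:
            pass
```

Any assumed implementation of `halts` is refuted by feeding `diagonal` its own text; hence the stop condition cannot be built into the machine.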

Turing himself hereby supplied a theoretical proof that it is not possible to limit a universal calculating machine to the status of an automatic calculating machine.

If it is not possible to determine in advance whether an arbitrary formal procedure has a finite solution, a machine capable of carrying out any finite calculating procedure must work independently of this criterion. The stop condition cannot be built into the machine. This also explains how Turing "happened to" break down the concept of mechanical and symbolic determination into facultative decisions which can be made step by step, in conflict with his own basic theoretical assumptions.

He bypassed this problem by introducing a distinction between what he called respectively "circular" and "circle-free" machines. These machines are also identical, however; the difference lies exclusively in the character of the task presented. If the task can be carried out with a finite number of operations, it is a circle-free machine; if not, it is a circular machine which either comes to a standstill, runs in a circle or continues without yielding new information.

From Turing's mathematical-logical perspective such a "circular machine" would have no purpose, but this is solely due to the mathematical-logical perspective, as he understood circularity as an expression of the fact that the machine had come to a standstill (ran in a circle) in a calculating process. That a circular Turing machine could be used to simulate other machines was entirely outside the sphere of his attention and interest. Furthermore, the term circular is used - rather confusingly - both of the sequences where the machine runs indefinitely in a circle without yielding new information and of the sequences where it comes to a standstill and demands new input (as a variant of the choice machine).

That the Turing machine can simulate the structure of the classical machine, including that of the calculating machine, does not reduce the difference between them. It increases it, as the possibility of simulating the classical machine depends on a property which the classical machine does not possess. The property which makes the simulation possible must thus also be regarded as more basic than the phenomenon simulated, whether this be a machine, or in Turing's case, a formal calculation procedure processed in the circle-free machine, which is defined by procedures which bring the machine to a stop when it has reached the result of the calculation.

Turing's distinction between the circular and the circle-free machine is still justified when seen in the light of his purpose and in connection with the performance of automatic calculating procedures. It has considerable mathematical-logical relevance, but is also central because Turing demonstrates that it is incapable of playing any part in the construction of his machine. As there is no general method for deciding whether a calculation can actually be completed, the demand for non-circularity cannot be built into the physical layout of the machine. On the contrary, the physical layout must be independent of this criterion.

Turing saw the circle-free machine as the genuine universal computer, once again because of his interests and aims. But he overlooked the fact that the circle-free machine's freedom from circularity depends on the character of the task and not on that of the machine, whereby he robs the machine of its universal properties. He also overlooked the fact that his own definition of the formal procedure implies that it can only be described as a finite, deterministic procedure if a purposeful task is included in the description. Without such a specified purpose the procedure breaks down into random steps which are devoid of meaning. The Turing machine can only function as an intentional machine. Its prerequisite is an intention, but if the task is included in the definition of the machine, it is no longer universal. If instead the symbolic level is left out entirely, it will hardly be a machine at all, but simply an imperfect radiator.

It is not the purposefulness itself which separates this machine from other physical machines, as they are similarly characterized by the fact that their finite character is brought about through the implementation of a purpose. The difference appears on the contrary because the Turing machine does not demand - nor allow - these purposes to be built into the invariant structure of the machine. It is thus not only the stop conditions which are not incorporated, the starting conditions are not incorporated either.

As the machine itself can neither define the starting conditions nor the stop conditions, in its general form it is always a choice machine and never an automaton. The closest that this machine can come to a completely automatic procedure is when it runs in a closed loop where it never encounters any stop condition.

5.5 The universal computer as an innovation in the history of the machine and of mechanical theory

Turing's theoretical description of mechanical procedure as a step-by-step procedure where the physical determination is limited to the relationship between two steps, together with his description of the way in which the individual steps could be connected by linking them to corresponding, step-by-step symbolic choices, represents a far-reaching innovation in mechanical theory.

This appears directly from Turing's own account, but its scope appears with even greater clarity if we look at Turing's description in relation to the physical-mechanical process, which prior to Turing had been understood in one of two basic forms. A mechanical process could either be understood as a regular, universal and deterministic natural process (first clearly and generally formulated by Pierre Laplace) or as a sequence of a definite number of finite physical operations which comprise a mutually connected and outwardly delimited whole, as exists in actual physical machines, in well-defined laboratory experimental arrangements and in the concept - Ludwig Boltzmann's and that of later physics - of a local, completely delimited, finite space.

The two points of view were usually interwoven to a greater or lesser degree in spite of the theoretical contradiction between the image of the universal, deterministic system (nature as a whole) which permits no kind of intervention (man is smaller than the system and is within it) and the image of the finite, local system which can both be interrupted and produced as a selective and constructive choice and combination of mechanical processes with limited and local effect (man is greater than the system and stands - as before God in front of the huge machine - outside it).

The Turing machine partly represents a polarization between and partly a break with these conceptualizations.

In classical mechanical theory the determination between the individual steps is seen as an effect of the general laws which operate throughout the system. The difference between two steps is a simple function of a time variation in a system where each possible state is established on the basis of a set of predetermined, well-defined starting conditions. The individual steps in the process are bound together so that each step is only an intermediate link to the next, and its effect on the following step is completely determined by the starting conditions. The individual step cannot itself influence the previous and following steps.

In the Turing machine the mechanical determination is clearly of a different character, as its physical movement is defined solely by the relationship between the current state and the current symbol. The physical determination is local and never includes more than one step at a time. A *new* instruction *can* be called for at each step and the transition to the next step *must* be specified for each individual step.
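This step-by-step determination can be sketched as a minimal Turing-style machine (a simplified illustration, not Turing's own 1936 notation): the whole mechanical behaviour is fixed by a table mapping (current state, current symbol) to (symbol to write, head movement, next state), and nothing beyond the single current step is ever determined in advance.

```python
# A minimal machine that increments a binary number on the tape.
# Each step is determined solely by the current state and current symbol.
RULES = {
    ('right', '0'): ('0', +1, 'right'),  # scan to the end of the number
    ('right', '1'): ('1', +1, 'right'),
    ('right', ' '): (' ', -1, 'carry'),  # blank found: turn back
    ('carry', '1'): ('0', -1, 'carry'),  # propagate the carry leftwards
    ('carry', '0'): ('1', -1, 'halt'),
    ('carry', ' '): ('1',  0, 'halt'),   # carried past the leftmost digit
}

def run(tape_str, max_steps=1000):
    tape = dict(enumerate(tape_str))     # unbounded tape as a sparse dict
    head, state, steps = 0, 'right', 0
    while state != 'halt':
        if steps >= max_steps:           # a "circular" run never halts
            return None
        symbol = tape.get(head, ' ')
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
        steps += 1
    return ''.join(tape[i] for i in sorted(tape)).strip()

print(run('1011'))  # 1100
```

The `max_steps` guard also illustrates the earlier point about circular machines: from outside the table there is no general way to know in advance whether a given rule set and tape will ever reach `'halt'`.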

Turing's contribution to mechanical theory thereby comprises a proof that it is possible to break down any finite mechanical system into a sequence of step-by-step, facultative operations in which physical determination is limited to the relationship between one step and the next and, conversely, that every further step can be made accessible to a free choice.

The Turing machine, however, cannot be described within the framework expressed in the description of classical machines as a sequence of a finite number of delimited physical operations which comprise a mutually connected and outwardly delimited whole.

Although traditional machines are often based on the use of many different - physical - laws, each governing a fraction of the operations, the functionality depends at the same time on the fact that the various mechanical effects on the individual steps are connected in a repetitive system in a pre-established and physically bound way.

It is quite true that it is theoretically possible to describe the Turing machine's repetitive physical operations step by step, but this description only includes movement from square to square and the mechanical operation on the actual physical notation. Such a description, however, is not a description of a Turing machine as it does not include the continuous changes in the physical notation units and thereby the effect of the mechanical operation on the individual square. At the same moment the notation units and their mechanical effect are included, the limit to the description of the invariant mechanical process has been reached, because this is a machine in which the sequence of steps has not been pre-established, is not repetitive and not unambiguously bound by the physical layout of the machine.

The physical process of the machine - the number of steps and the continuous mechanical changes in the location and sequencing of the physical notation units - depends on and varies with the task to be performed.

Rules are not simply allocated step by step in the Turing machine; they are also allocated in another way, which appears from the fact that the - symbolic - rules determining the individual steps can not only be made conditional - as is familiar from such equipment as the thermostat - but can also be modified. The - mechanically effective - instruction which controls a given step must thus either be produced by a previous step or by a new input, but can also be re-activated and thereby altered by a subsequent step.
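The possibility that one step alters the instruction governing a later step can be sketched with a toy stored-program interpreter (the instruction set here is invented purely for illustration):

```python
def run(program):
    program = list(program)  # the programme is itself manipulable data
    mem = {}
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == 'set':               # ('set', address, value)
            mem[op[1]] = op[2]
        elif op[0] == 'rewrite':         # ('rewrite', index, new_instruction)
            program[op[1]] = op[2]       # one step modifies a later step
        pc += 1
    return mem

result = run([
    ('set', 0, 5),
    ('rewrite', 2, ('set', 1, 99)),  # overwrite the instruction below
    ('set', 1, 7),                   # never executed in its original form
])
print(result)  # {0: 5, 1: 99}
```

Because the programme and its data share one notation and one store, the rule in force at a given step is itself a product of the process it regulates.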

The key to the machine lies in the double character of the tape, which is at one and the same time part of the machine and the material and the place of exchange between the physical-mechanical and the symbolic procedures.

As a consequence of this the invariant borderline between the machine and the material processed, which constitutes classical physical machines, is broken down. As the material not only *can* but *must* contain the rules which control the machine's operations, this is a radical extension of the concept of machine.

The Turing machine thus differs from previously known physical machines at two central points. One point is contained in the step-by-step procedure which limits physical determination to a simple relationship between two steps, by which the classical machine is broken down into its "atomistic" components. The other point is contained in the breaking down of the borderline between machine and material, as the machine not only demands that the rules governing the sequencing of the physical steps must be contained in the material, but also that they must be contained in a form in which they can be effectuated as a chain of - variable and facultative - individual steps in the course of the process they regulate.

The two machines, therefore, do not differ because the rules which are incorporated in the classical physical machine originate in physics, while the rules governing the symbolic machine originate in mathematics or logic; they differ because the rules are implemented in two different ways. While a traditional machine can be described as a machine in which a number of causal processes are collected and ordered under a single, overall final intention which is implemented in the machine's invariant physical architecture, the Turing machine can be described as a mechanical apparatus in which an arbitrarily large number of different final intentions can be implemented continuously in arbitrarily small portions.

Turing's theoretical analysis of the principles for a universal computer thus contains marked renewals of mechanical theory and of the history of the machine and leads finally to a radically new way of presenting the problems regarding the concepts of rules and regularity.

The renewals appear in direct connection to the description of the machine, as this description assumes 1) that mechanical theory is understood and formulated as an abstract, theoretical model and not as a model of the physical world and 2) that the abstract rules of procedure are effectuated through a physically performed process. Where mechanical theory previously represented physical nature, it is now seen as a model for the physical execution of formal, symbolic procedures.

The first link in this conversion consisted of emancipating mechanical theory from physics. It appeared when Turing simply transferred the mechanical model of nature as a universal and deterministic system to the understanding of the computer as a deterministic, finite, formal system. In a later article he draws a direct parallel to Laplace's formulation of the ideal ambition of mechanical theory - to predict all previous and later states on the basis of a single, given state - remarking that this ambition is closer to fulfilment in the computer than in the physical world, where infinitesimal inaccuracies in the starting conditions create huge disturbances.[19]

With the abstract re-interpretation of mechanical theory the conflict between universal and local mechanical theory assumes a less contradictory character, as it is now expressed in the distinction between infinite and finite procedures, of which only the last can be performed mechanically, as it is only here that it is possible to speak of a complete establishment of the conditions for starting and stopping. In return for this freedom from contradiction, however, the theory only concerns formal systems which are defined on the basis of axiomatic criteria of validity. The inner consistency with regard to meaning and validity is thus achieved by abandoning all demands on referential validity.

The second renewal lies in the demand for physical execution of the formal rules, as this demand implies that the rules must be made explicit in a form in which they become regulable themselves. In other words, here the rules assume the character of freely defined, chosen and variable laws or conventions. They no longer stand outside the regulated system as transcendentally preordained and invariant laws, but are included on the contrary as step-by-step performed sequences which can be influenced through intervention at the - lower - level of physical notation, i.e. independently of the content of the given rules.

In this respect the Turing machine represents a model of a mechanical system in which outside impulses can make an appearance at any moment - impulses capable of changing not only the further sequence, but also any previously given rule.
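The point can be made concrete with a small sketch. In the toy stepwise machine below (a modern illustration of the argument, not Turing's own notation), the rule table is ordinary, mutable data held in the same store the machine manipulates, so an "outside impulse" can rewrite a rule between steps and change the machine's further behaviour without touching its mechanism:

```python
# Illustrative sketch: the rule table of a simple stepwise machine is stored
# as ordinary, mutable data, so an intervention at the level of notation can
# replace a rule mid-run - independently of the content of the given rules.

def run(tape, rules, state="start", pos=0, steps=10):
    """Execute up to `steps` locally determined steps in place."""
    for _ in range(steps):
        key = (state, tape[pos])
        if key not in rules:                  # no applicable rule: halt
            break
        write, move, state = rules[key]       # step fully determined by key
        tape[pos] = write
        pos += move
    return tape

# A rule that writes 1s while moving right.
rules = {("start", 0): (1, 1, "start")}
tape = [0, 0, 0, 0]

run(tape, rules, steps=2)                     # two steps under the old rule
rules[("start", 0)] = (9, 1, "start")         # "outside impulse": rule rewritten
run(tape, rules, pos=2, steps=2)              # same machine, new behaviour
print(tape)  # [1, 1, 9, 9]
```

The design choice mirrors the text: nothing in the mechanism itself fixes the rules; they are simply more notation, open to intervention at any moment.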

While formal, deterministic symbol theory was a necessary prerequisite for constructing an idea - and the first description - of the universal computer, it not only fails to describe the result, it is also undermined, because mechanical performance implies that the symbolic procedure is emancipated from the concept of determination, so that their connection expresses only a semantic choice tied to specific tasks and purposes.

There are several reasons why Turing did not pay attention to these aspects. The most obvious lay in his mathematical approach and the mathematical-logical purposes he had in view. These purposes meant that he had to overlook this aspect, because the whole point of his work was to show how it would be possible to carry out any finite and deterministic symbolic procedure with an "ordinary" mechanical machine. This meant, so to speak, that he had cut himself off in advance from pursuing the dissolution of the fusion between the concepts of mechanics and determination.

5.6 Written down by a machine

In the previous sections the Turing machine has been described with the emphasis on the new type of exchange between physical-mechanical and symbolic procedures and it has been demonstrated that the notation Turing used to provide the formal expression with a mechanically executable form was not simply - as he believed - a practical notation technique, but comprised an independent notation system with properties that separated it from formal notation systems.

While Turing saw the new notation technique as an - almost trivial - equivalent to formal notation because it was possible to derive the informational form of the formal expression from simple, unambiguous procedures, on the other hand he used some less trivial assumptions from theories of consciousness as a starting point for the new construction of the relationship between mechanical and symbolic procedures. Although Turing's universal calculating machine worked because of its mechanical properties and therefore quite independently of our understanding of the organization of human consciousness, certain ideas on this are included in the theoretical assumptions Turing used as a precondition for the construction of the machine. As these assumptions also played a central role both in Turing's and others' later interpretations of the machine, they will be discussed in the following sections before the analysis of informational notation is continued in chapters 6-9.[20]

One thing can be established immediately however. The universal Turing machine does not work in the same way as Turing's consciousness did. His article from 1936 provides excellent documentation of this, as it unites stringent theoretical analysis with a presentation containing a number of errors due to sheer carelessness in the details. Martin Davis thus introduces his 1965 reprint of Turing's article with a well-meant warning:

*This is a brilliant paper, but the reader should be warned that many of the technical details are incorrect as given.*[21]

A few years later Gandy supplements this with more imagery and greater tolerance:

*The approach is novel, the style refreshing in its directness and simplicity. The bare-hands, do-it-yourself approach does lead to clumsiness and error. But the way in which he uses concrete objects such as exercise books and printer's ink to illustrate and control the argument is typical of his insight and originality.*[22]

It is only reasonable that well-informed colleagues are ready to make such allowances. This shows not only that the erroneous, mechanical procedure is regarded as a far less significant part of human thinking than the originality, brilliance, simplicity and imagination necessary to transcend previous conceptual frameworks; it also shows that human consciousness is capable of working in ways which would bring any Turing machine to a standstill.

Although later analyses based on theories of consciousness are mistaken in ignoring or underrating this difference, they are correct in placing the implications of theories of consciousness on the agenda in connection with the Turing machine. This is the case because Turing's theory confirms that there is no path from classical, universal mechanical theory regarding the organization of nature to the machine which does not pass through human consciousness.

It was therefore not by chance that Turing himself - in the middle of the busy road between mathematics and physics - had to make an epistemological leap by initially using mechanical theory in the area occupied by theories of consciousness.

It was the problem of the stop condition which made this leap necessary, as he showed that it is impossible to determine in advance whether a formal or mechanical procedure will reach a conclusion. As any classical machine is characterized by constituting a delimited and closed system, no solution to this stop problem could be found within the world of the machine. Nor, conversely, was it possible to derive the construction of the machine directly from universal mechanical theory, which admittedly contains no precondition regarding a stop condition, but which for the same reason represents a system that is impossible to delimit and is infinite, whether we believe that it reflects the order of the universe or simply a mental picture.

A theoretical leap was necessary in order to find a solution which was possible in practice.

The reader first receives notice of this when Turing, almost in passing, concludes the introductory survey of the general content of the article with his definition of computable numbers which can be written down by a machine. The article begins:

*The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject matter of this paper is ostensibly the computable *numbers*, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique. I hope shortly to give an account of the relations of the computable numbers, functions and so forth to one another. This will include a development of the theory of functions of a real variable expressed in terms of computable numbers. According to my definition, a number is computable if its decimal can be written down by a machine.*[23]

Simple and neat. Everybody knows how a machine works. But the abrupt introduction of the machine is not due to the familiarity of the image and its obvious pedagogical advantages. What Turing is introducing here with the concept of a machine is not the mechanical writing function, but the demand that through mechanical means a result of a calculation must be produced in the course of a finite number of procedures, the stop condition, which separates the finite from the infinite formal procedure.
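Turing's definition - a number is computable if its expansion can be written down by a machine, each digit produced by finite means - can be sketched in modern terms. The long-division routine below is an illustrative stand-in for a machine table, not Turing's own construction: it writes down the binary digits of 1/3 = 0.010101..., each digit reached in a bounded number of steps, so the procedure satisfies the stop condition for every digit even though the expansion as a whole is infinite:

```python
# Hedged illustration of a "computable number": each digit of the binary
# expansion of 1/3 is produced mechanically, by long division, in a finite
# number of steps - the expansion itself never terminates.

def digits_of_third(n):
    """Return the first n binary digits of 1/3 after the point."""
    remainder, out = 1, []
    for _ in range(n):            # every digit costs finitely many steps
        remainder *= 2
        out.append(remainder // 3)
        remainder %= 3
    return out

print(digits_of_third(8))  # [0, 1, 0, 1, 0, 1, 0, 1]
```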

This is not exactly an obvious evocation of the idea of a machine; nor is its source the physical proficiency in writing, but rather the mental mechanics of the proficiency in arithmetic.

As with the last sentence of the introduction, so with the first, apparently even more trustworthy sentence, in which the computable numbers are defined as numbers which can be calculated by finite means.

Underlying this idea are two assumptions derived from theories of consciousness, one a general theory of human memory as a finite system and the other a more detailed and specific idea of how humans calculate.

Turing does not appear to have paid any great attention to this not very obvious introduction of the machine, but when it comes to the unusual assumptions derived from theories of consciousness, he is perfectly clear. It is from here that he takes his point of departure:

*We have said that the computable numbers are those whose decimals are calculable by finite means. This requires a rather more explicit definition. No real attempt will be made to justify the definitions given until we reach [[section]] 9. For the present I shall only say that the justification lies in the fact that the human memory is necessarily limited.*[24]

Turing has thus, in the first and last sentences of the introduction, carefully placed, but not developed, the two conceptual frames from which he obtains the ingredients for his definition of the finite, formal procedure, namely classical, universal mechanics as formulated by Laplace and human proficiency in calculation as analysed by - Turing.

In his introductory resumé, for some reason, Turing introduces the two images, the mental and the physical-mechanical, as though they were two poles which delimit the article's space. In the continuation they become completely fused:

*We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions.*[25]

There is a paradoxical point in this construction, as it is not possible to derive the idea of human consciousness as a finitely delimited system from universal mechanical theory (whose preconditions are that the universe is one great, cohesive machine and that a given system which is left to itself can continue indefinitely) or from an analysis of formal arithmetical procedures (as Turing's own analysis showed that infinite, formal procedures did exist).

Nor does the structure of the classical machine correspond to a functioning - circle-free - Turing machine, but rather to Turing's circular machine, which runs in a circle without yielding any output.

Turing also equipped his machine with an infinite tape and limited the use of the idea of finite consciousness to the finite arithmetical procedures. The infinite tape is not part of Turing's model of consciousness, which is more reminiscent of Boltzmann's theoretical model of finite, thermodynamic space and of Hilbert's idea of a completely closed, formal system.

Paradoxical or not, by transferring mechanical theory's tension-filled combination of the idea of the single, great universal machine and the many small, specific machines from the domain of physics to that of consciousness, Alan Turing got the idea that it must be possible to construct a machine which would be capable of carrying out any finite mathematical symbol manipulation.

Taking the result into account, this must be considered an extremely productive exploitation of the theoretical contradiction, but there is no marked confirmation of its relevance to theories of consciousness. On the contrary, the idea of a finite, step-by-step operating consciousness is a less plausible part of Turing's theory. As it is also an idea which has in particular given rise to later schools of theories of consciousness, his formulation and exploitation of the idea deserve a more detailed investigation.

Turing's idea of finite consciousness served two more specific purposes - over and above possible motives drawn from theories of cognition. One was to create a basis for the idea that it might be possible to reduce a large part of logic and mathematics to the premises of mechanical physics. It is highly probable that this idea played a pioneering and necessary role in the development of his theoretical description, just as it is clear that he did not maintain the idea in this form, as he equipped his machine with an infinite tape.

As will be evident from the following, Turing never - neither in 1936 nor in his later work - subscribed to the idea that his own theoretical machine or the later computers worked in the same way as human consciousness. On the contrary, he distanced himself increasingly from the idea of describing consciousness as a closed system characterized by a finite set of discrete states.

Nevertheless, this idea serves yet another important purpose in Turing's theory, as he also uses it in his phenomenological analysis of arithmetical procedure. He thereby made the theoretical leap which definitively and rightfully takes the use of mechanical thinking into the area of theories of consciousness.

In Turing's analysis of arithmetical procedure the idea of finite - calculating - consciousness is utilized in five different elements, which comprise:

- The idea of consciousness as physically processed in time and space.
- The idea of finite consciousness as a set of distinct, mechanically connected states.
- The idea of step-by-step, locally determined procedure.
- The criterion of readability, i.e. the demand for breaking down into simple expressions which can be recognized "immediately" and the demand for a limited number of actually possible "readable" squares.
- The idea of a completely explicit representation of the contents of the calculation which is developed on the basis of the observation that an arithmetical procedure can be interrupted and notes taken which contain all the information necessary for a later continuation.

The question now is whether this model of conscious processes can be seen as a general model of the way consciousness works, as a model of certain types of conscious processes, such as arithmetical processes and other mechanical inference procedures, or rather as a model which describes how, with the help of outside aids, we can arrive at the same results that we could reach ourselves in other ways.

It is reasonable to take our point of departure in the arithmetical procedure itself, as we can perform many arithmetical procedures with a calculating machine, with a Turing machine, or without outside aids at all. And it is also here in particular that Turing utilizes the assumptions of theories of consciousness in more specific criteria, especially connected with the concept of memory and with concrete arithmetical procedure.

The question is not yet whether *all* forms of thinking can be broken down into simple, step-by-step sequences, but conversely whether human consciousness performs *some* of its activities by breaking down complex, formal expressions in the same way that they are broken down in a Turing machine.

There is no doubt that certainly in 1936 Turing believed that there was a clear relationship here, as he simply gives the grounds for the breaking down procedure by referring to the empirical experience that we are unable to differentiate large numbers which resemble one another without breaking them down into smaller units.

*The difference from our point of view between the single and compound symbols is that the compound symbols, if they are too lengthy, cannot be observed at one glance. This is in accordance with experience. We cannot tell at a glance whether 99999999999999 and 999999999999999 are the same.*[26]

When we make a calculation we work serially and step-by-step forward through a number of discrete states and must therefore break down any more complex symbol into a finite sequence through subdivision. The fact that we do not lose our way in this process may similarly be because the individual step is determined by the relationship between the actual memory state and the single symbol observed at the given stage:

*The behaviour of the *[human] *computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols... which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be "arbitrarily close" and will be confused. Again, the restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape.*[27]
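The model in the quotation - behaviour at every step fully determined by the pair (state of mind, observed symbol) - is exactly what a Turing machine's transition table formalizes. The minimal simulator below is a hedged sketch of that scheme; the example machine, which flips the bits of a binary string one square at a time, is an illustrative assumption, not drawn from Turing's paper:

```python
# Minimal sketch of the (state, observed symbol) -> action scheme: each step
# is locally determined, one square is observed at a time, and "more
# complicated states of mind" are replaced by writing more symbols on the tape.
from collections import defaultdict

def compute(tape, table, state="s"):
    """Run the machine until no rule applies; return the final tape contents."""
    tape = defaultdict(lambda: " ", enumerate(tape))   # unbounded tape of squares
    pos = 0
    while (state, tape[pos]) in table:
        symbol, move, state = table[(state, tape[pos])]
        tape[pos] = symbol                             # write
        pos += move                                    # shift the observed square
    return "".join(tape[i] for i in sorted(tape)).strip()

# (current state, observed symbol) -> (symbol to write, head move, next state)
flip = {
    ("s", "0"): ("1", +1, "s"),
    ("s", "1"): ("0", +1, "s"),
}

print(compute("0110", flip))  # 1001
```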

That this description of arithmetical procedure can actually create a basis for the performance of calculations is not in doubt. If we accept that what is referred to here as "state of mind" only includes the relevant information for the specific arithmetical procedure, it also appears, on the face of it, as a quite plausible model for a description of how human beings *can* do arithmetic.

A more detailed consideration, however, gives rise to considerable doubt. It is not only improbable that the description corresponds to the way in which we perform calculations ourselves, it is also doubtful whether we will be able to perform many calculations in this way.

The central point in Turing's description of the arithmetical process consists in breaking the task down into its smallest components - as such a classical and familiar analytical procedure, which could hardly find a more suitable area of use.

The critical points in this procedure are similarly familiar. They lie partly - as mentioned previously - in the establishment of premises, i.e. the starting conditions, and partly in the question of how we can define the optimum or maximum degree of breaking down.

It is also at these two points that Turing's model of the arithmetical process differs most markedly - from both human calculation and the way in which the Turing machine works.

With regard to arithmetical procedure carried out by a human being, the most striking difference is that Turing's ideal model can *only* function if the task is actually broken down into its smallest expression components. This condition is not ultimately binding on human calculation. Even though we *can* break down large numbers into smaller components, there is no evidence that we break down these numbers into their *smallest* expression components. On the contrary, it is far more characteristic - normally - that we find it both difficult to handle large numbers and to reduce them to their smallest expression components. While many people for the same - or other - reasons completely give up doing arithmetic, others, at precisely this point, begin to use external aids such as counting boards, abacuses, pencils and paper.

Turing's analysis actually helps to illustrate one of the reasons, as it is evident that the radical breaking down of the expression into its individual components requires a dramatic expansion of a stable and reliable memory. The number of symbols which must be remembered is not only increased, they must also be located in precisely defined places on a tape where the access to each square is unambiguously established as a mechanical procedure and where the values in the individual places can be varied.

We are quite simply incapable of handling the simple, serial manifestations of the complex expression which comprise the core of Turing's model. Not only do we find it extremely difficult to reduce an arithmetical task into its smallest components - with the exception of quite simple operations involving whole numerical values from perhaps -100 to +100, or thereabouts - we are not bound to do so either.

Turing's model does not reveal much - if anything at all - about mental arithmetical processes. Nor is his model a model of a human being doing mental arithmetic, but of someone working with paper and pencil and, moreover, he immediately replaces ordinary squared - two-dimensional - paper and the decimal system with a one-dimensional tape and the binary representation of numbers, while the other symbols and rules of arithmetic are not made explicit at all. He assumes that we have them in our heads.

What he showed in this respect was that it was possible to arrive at the same results in other ways and particularly that it was possible in this way to perform calculations which we can only perform ourselves with the greatest difficulty - if at all - with other aids.

Nor has it subsequently been possible to identify an equivalent to the Turing tape and a mechanical reading unit, either in the human brain or in the mind.

The human arithmetical procedure thus approaches the Turing model in the area of very simple tasks where the model is least relevant, while Turing's model comes into its own exactly in connection with arithmetical tasks we are unable to perform without the use of aids, of which Turing's machine is undoubtedly the most perfect hitherto.

That this is the case, however, is due to the fact that it does not work in the way Turing describes in his model either.

While Turing's arithmetical model is based on the breaking down of the arithmetical procedure into its smallest components of formal notation units, the Turing machine is based on this expression being further broken down and subdivided into components which no longer possess any intrinsic semantic value.

As will be evident from the preceding sections of this chapter, this state of affairs is closely connected with the demand that both the description of the task and of the rules for its performance must be contained in the same notation, which must itself have a form independent of the task. As will also be evident, this demand was fulfilled by separating the physical definition of the symbols from the definition of their value.
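The demand that rules and task share one notation can be sketched concretely: a transition table can be flattened into the same kind of symbol string the machine reads, so that rules can in principle lie on the tape like any other data. The encoding scheme below is my own illustrative choice, not Turing's "standard description":

```python
# Sketch: rules serialized into the same notation as data, and recovered
# again - the rule table is itself just a string of physically defined
# symbols whose values are assigned separately.

def encode(table):
    """Serialize a rule table into a plain string of symbols."""
    return ";".join(f"{s},{r}->{w},{m:+d},{t}"
                    for (s, r), (w, m, t) in sorted(table.items()))

def decode(text):
    """Recover the rule table from its string form."""
    table = {}
    for item in text.split(";"):
        left, right = item.split("->")
        s, r = left.split(",")
        w, m, t = right.split(",")
        table[(s, r)] = (w, int(m), t)
    return table

flip = {("s", "0"): ("1", +1, "s"), ("s", "1"): ("0", +1, "s")}
assert decode(encode(flip)) == flip   # rules survive the round trip as data
print(encode(flip))  # s,0->1,+1,s;s,1->0,+1,s
```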

That this is far from being a banal condition, however, appears not only from its significance for an understanding of the Turing machine, but also from the fact *that the ability to construct such definitions of the physical forms of symbols is a unique human ability, beyond any computational competence, as this competence itself assumes one or another minimum number of such previous definitions.*[28]

The demand for a definition of the physical form of the symbol, independent of its function and value, can also be seen as a distinctive criterion in understanding the relationship between the Turing machine's processes and human intelligence. As human intelligence includes the ability to create and define delimited, physically-defined symbolic forms, we possess a mental competence which the Turing machine cannot possess.

As this more far-reaching conclusion affects both Turing's point of view and a general main assumption in later theories of information and cognition, it will be considered in more detail in the following section.

5.7 Turing's machine, consciousness and the Turing test

According to Turing's biographer, Andrew Hodges, it was Gödel's demonstration in particular and the problem of description in quantum mechanics which inspired Turing to describe consciousness as a finite system, because he saw both elements as a manifestation of the fact that human consciousness was subject to decisive limitations: Although humans are living organisms, which apparently possess free will, they must, at a more fundamental level, be subjected to deterministic restrictions. Consciousness itself must be a "machine", although much more complex than other physical, chemical or biological "machines".[29]

Hodges thus understood Turing's description of human consciousness as an abstract generalization of the deterministic limitations common to mechanical physics, formal logic and effective calculation procedure - or what Turing calls computability.

It can certainly be taken for granted that Turing gave a new turn to the old dream of reproducing the human thinking process in an appliance by starting with the idea of a limited, finite consciousness rather than with its sovereignty. A radical expansion of mechanical handling competence lay in his proof that it was possible to reduce all finite logical, formal and mathematical procedures to pure mechanics.

With this new turn he became the first person capable of showing how it was possible to construct a machine which could perform all the finite arithmetical and logical procedures which humans can perform with the brain.[30]

If the machine were able to carry out this reduction itself, we would be able to use it to free humankind from a great civilizing burden, as it would immediately become possible to remove arithmetic, a large part of mathematics and logic from the necessary repertoire of human competence and therefore also from the obligatory curriculum in schools.

Whatever we might otherwise think about such a possibility, it will never under any circumstances be furthered by Turing's machine, which, on the contrary, produces a growing need for increased human competence in interpreting, handling and producing algorithmic procedures. The explanation is naturally that the machine is not subject to the same limitations as human consciousness.

What Turing himself thought about these questions in 1936 is not clear, but it is absolutely clear that his view of thinking, including mathematical thinking, took exactly the same direction in 1939 when he wrote:

*Mathematical reasoning may be regarded rather schematically as the exercise of a combination of two faculties, which we may call *intuition* and *ingenuity*. The activity of the intuition consists in making spontaneous judgments which are not the result of conscious trains of reasoning... I shall not attempt to explain this idea of "intuition" any more explicitly.*

*The exercise of ingenuity in mathematics consists in aiding the intuition through suitable arrangements of propositions, and perhaps geometrical figures and drawings.*[31]

For Turing, "intuition" and "ingenuity" are the two typical mathematical tools. The fact that he mentions them is not because he believes that thinking can be reduced - nor mathematical thinking either - to these two concepts, it is rather because he is taking stock of the programme of logical formalism.

Where, *in pre-Gödel times*, as he writes, the goal had been to replace all the intuitive judgements of mathematics with a limited set of formal rules of inference, thereby making intuition superfluous, great progress had now been made in the direction of the diametrically opposite result. It was not intuition, but organizing reason, ingenuity, that was being replaced, as it was the reasoning, systematic endeavour which could now, to a great degree, be reduced to a mechanical procedure:

*We are always able to obtain from the rules of formal logic a method of enumerating the propositions proved by its means. We then imagine that all proofs take the form of a search through this enumeration for the theorem for which a proof is desired. In this way ingenuity is replaced by patience. In... heuristic discussions, however, it is better not to make this reduction.*[32]

There is nothing here to provide any indication that Turing had mistaken the Turing machine's formal procedure for the general form of human thinking. Mechanical symbol procedure is a specific thinking (and proof) procedure derived from formal logic. Not only does intuition remain unchallenged, Turing also indicates - in an introductory footnote - that he is here completely ignoring "*that most important faculty which distinguishes topics of interest from others*".

Here, in the description of mathematical thinking, Turing establishes a - quite traditional - concept of consciousness with no visible trace of the model he used in 1936. It might appear as though he had completely abandoned the question of the relationship between the Turing machine and human intelligence.

He had undeniably abandoned one thing, namely the idea that human beings and machines think in the same *way*. When, some years later, he returned to the question in his now classical article *Computing Machinery and Intelligence*,[33] he began by rejecting the question because it is impossible to provide a precise definition of the concepts "intelligence" and "machine". Instead he suggested that the question "can a machine think?" be replaced by the question: can a human being differentiate between an answer received from a machine and one received from another human being? - with the precondition that the subject of the experiment be kept in ignorance of everything except the content of the answer, which is passed on in a neutral, technical form.

Although Turing proposed his experimental test criterion because there was no clear definition of the concepts of intelligence and machine, the test itself relied on such definitions. The most decisive definition of human intelligence lay in the assumption that it is an advantage to differentiate between a human being's physical and intellectual capacities:

*The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man*.[34]

This definition provided Turing with a reason for placing the subject in another room without direct sensory contact with the test arrangement. Turing also admitted that the machine could possibly perform something which must be described as thinking, even though performed differently to human thinking. He claimed that such limitations, however, were only problematical if the machine failed to live up to the demand on intelligence presented by the test. If it could pass the test, i.e. produce the impression in the subject that he was communicating with another person, there would be no need to take these differences into consideration.

As far as the machine is concerned, the most important definitions are that it may be constructed using any technique, that the constructors need not necessarily be able to describe its mode of operation, as they must be allowed to use experimental methods, and, finally, that the concept "machine" does not include humans "born in the usual manner". The three criteria cannot all be completely fulfilled, because the possibility of constructing a human being from a single cell cannot be ruled out. As it would not be possible in this case to claim that a thinking machine had been constructed - the thinking mechanism would perhaps already be contained in the cell - Turing only accepts digital computers.

The Turing test has no meaning, however, if we assume that the relationship between the computer and human thinking is connected with a more or less common way of functioning. The test is exclusively based on the result, the process is regarded as an irrelevant black box.

There is thus no support for the assumption that Turing believed that the structure of human thinking could be described in terms of computational processes. On the contrary, in 1950 he only stated that it was possible to imagine a machine which would be capable of producing the same results that humans could arrive at and that this could be taken as an argument for saying that it could think "like" a human.

As the Turing test assumes that there is someone withholding relevant information from the subject of the test, it is first and foremost suitable for testing the human art of illusion.

5.8 Consciousness in Turing's hall of mirrors

Turing's purpose with the experimental test was not to produce a new theory of human consciousness or intelligence in the form of a philosophically consistent summary of an empirical material. His purpose was to pose the question of whether a machine could think in such a way as to create a basis for a new research project or programme, one in which human consciousness could be used as a model from which ideas for the mechanical imitation of human thought processes could be extracted.

He did not imagine, however, that this programme would lead to any serious answer to the question of whether machines could think. He believed, on the contrary, that in the course of fifty years it would be possible to design machines which - in the given test arrangement - would often be confused with humans and that this would imply such a change in the ordinary use of language that a contradiction would seldom follow the assertion that machines think. This clearly formulated, but often overlooked, perspective deserves to be given in his own words:

*I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of the greatest importance since they suggest useful lines of research*.[35]

It is remarkable not only that Turing completely rejected the possibility of - seriously - discussing the relationship between the computer and human intelligence, but also that he formulated the much vaguer cultural expectation that the computers of the future would bring about a state of affairs in which this distinction would simply disappear from the language. He thus did not possess the imagination necessary to conceive that the attempt to imitate human thinking could possibly lead to new arguments for differentiating between the computational process and human thinking. The explanation may be that he would not have dreamt of claiming that humans think in the same way as machines.

It is certainly striking that in the same passage Turing reveals a characteristic of scientific thinking containing an aspect which cannot be accommodated in his picture of the computational process. The latter corresponds exactly to the popular, but according to Turing inaccurate, view of scientific processes as systematic, step-by-step procedures.

Turing also utilized this difference in a specific criterion connected with human intelligence, as he claimed that this included the ability to differentiate between surmise and fact. But he failed to formulate any criterion by which it would be possible to decide whether the computer possessed such a discriminative competence, just as he also failed to consider "that most important faculty which distinguishes topics of interest from others", to which he had called attention in 1939. It is quite true that the test concerns the ability to differentiate, but it is not the computer's ability which is being tested, it is that of the test person. The result of the test shows exclusively the degree to which he can decide whether he is talking to a man or a machine.

It is not difficult to find a pattern in this picture. That intelligence which Turing makes the object is different from that intelligence which makes the intelligence the object. It is neither his own intelligence, nor scientific thinking in a broader sense, nor the ability to formulate new thoughts and pose questions which is produced here as a model for imitation; it is a much more narrowly conceived intelligence. There is no demand here to differentiate between surmise and fact.

It was therefore on good grounds that Turing formulated his expectations for a development in common-sense understanding which lay fifty years in the future as a matter of personal belief. The question, however, is what justified the inclusion of such a declaration in a highly esteemed scientific journal and its later incorporation in the basic documents of an entire field of research.

The most obvious reason would probably be - taking the scientific context into account - to provide a well-formulated research programme which, on the basis of a relatively consolidated or clear theoretical foundation, defined a number of more distinct ways of presenting the problem which could become the objects of investigation. But here too, Turing is remarkably clear. There are no particularly convincing arguments for the idea:

*The reader will have anticipated that I have no very convincing arguments of a positive nature to support my views. If I had I should not have taken such pains to point out the fallacies in contrary views.*[36]

Another obvious possibility might be that his expectations corresponded so closely to general contemporary expectations with regard to science. But this is not the case either. The idea was epoch-making and contrary to time-honoured scientific trains of thought. Turing himself introduced a great many of the objections which presented themselves from various philosophical, theological and scientific points of view and - in under 12 pages - he touched upon virtually all the themes which have since been included in the discussion. There is one central point in particular which is repeated in Turing's answers to these objections: his general argument is not - as it is in almost all later discussions - that there may be an answer to the question of whether a computer can think, but on the contrary that the objections to this possibility are just as illusory as the postulate.

The identification of this wide-open, undecidable question undoubtedly comprises one of the two reasons for the later significance of the article. Turing hereby staked out a new research-political Utopia where the dream of reproducing human thinking ability was connected with - it appears - a correspondingly open technological potential. The second reason lay in the rather more prosaic suggestion for the first steps. In the final section of the article Turing described two possible strategies, namely the reproduction of the logical-deductive procedure, the logic of chess, and the reproduction of human perceptual and learning competence.

Here, on the other hand, all the problems which were ignored in the Turing test make an appearance.[37] First and foremost, the fact that any step which can be taken as part of the construction of a machine capable of passing the Turing test contains a specification of human thinking which is in conflict with the point of departure: that no well-defined and meaningful description of the concept `intelligence' as opposed to the concept `machine' can be given. This paradox is not due to a careless mistake which can easily be corrected. If the latter postulate is abandoned we are faced with the demand that we make the concept of intelligence explicit, so that we can no longer simply point out that the concept of intelligence is unclear, and we can therefore not content ourselves with the Turing test of human illusion. If, instead, we abandon the first, we have on the other hand no possibility of pointing out any specific step as a step towards such a machine.

Turing's suggestion for a strategy, however, does not contain a single, internally consistent view of human intelligence; it contains several mutually incompatible models which individually have their roots in older, more traditional and therefore, on the face of it, reasonably plausible assumptions. The three most important models are 1) the description of consciousness as a result of a Darwinian process of development, 2) the description of - the child's - consciousness as a well-delimited and blank page and 3) the description of consciousness as a logical-deductive symbol machine.

There is no discussion in Turing's article of the connection between these three different models. They are referred to individually only in different connections, but it should be noted that Turing clearly separates the logical-deductive procedure as the object of a specific development project, while the learning project is built up around the two other models. This line of demarcation has since been maintained and further developed in two different and - especially in the 1980s - competing research strategies within Cognitive Science.

That Turing with no further ado could juxtapose the two strategies as equally reasonable and explicitly refrain from weighing them mutually was not an expression of a later misplaced clarity, it was rather an expression of the fact that he failed to see that they were based on incompatible premises. That this holds true of the relationship between the logical-deductive strategy and the learning process strategy is documented in the later development. But there is a corresponding conflict hidden in the two models Turing suggests as starting points for the development of the learning machine. The Darwinian model cannot easily be reconciled with the image of the individual organism's consciousness as a blank page.

The image of the - child's - consciousness as a blank page at birth serves in Turing's argument as an instance of a differentiation between a very simple, innate mechanism, the programme, and the subsequent experience, data. But the page is not completely blank. While learning and other experience is assumed capable of producing a more complex programme structure, the basic programme is given in advance, it is invariant and independent of data.

*Presumably the child brain is something like a note-book as one buys it from the stationers. Rather little mechanism, and lots of blank sheets. ... Our hope is that there is so little mechanism in the child-brain that something like it can easily be programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.*[38]

The background is naturally that the computer requires such a programme. Turing overlooks the fact that precisely this programme cannot be an "inborn" part of the machine, if the machine is to have universal properties. He also overlooks the fact that the programme in the computer must be available in exactly the same form as all other data.

The distinction between a preordained programme and data is incidentally not reconcilable with the Darwinian model either. It is quite true that we can imagine that the individual child has an inborn mental capacity, but it also has parents who have parents who, at some stage or another of prehistory, descended from organisms without this inborn capacity. The Darwinian theory not only requires that we allow the development of increasingly complex organizations of elements which already exist, it also requires that we assume that biological and mental processes have their origins in a physical universe in which these processes were not found before these origins. If there is anything that can be described as a mental programme, it must not only have the property of developing into a more comprehensive and complex programme, it must also have the "property" that it has originated from something which is not a mental programme.

It will be of no help here to supplement with a description of reality as a realization of a potential which has existed since the dawn of time, because in such a case the potential will be more comprehensive than the realization, which will not become more invariant on these grounds. The possible genetic potential for consciousness must also have a history of origin and development if we refuse to explain its existence as the result of divine creation.

Turing side-steps this problem, as he only uses the Darwinian model as a metaphorical analogy without asking himself the question regarding the relationship between the biological and the mental. The child's mental programme is equated with the hereditary material, the changes in the programme made by scientists are equated with mutations and their evaluation of which improvements in the programme they will use are equated with natural selection.[39]

*The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up. Equally important is the fact that he is not restricted to random mutations. If he can trace a cause for some weakness he can probably think of the kind of mutation which will improve it.*[40]

It is no longer the ideal observer who is here attempting to fill the position once assigned to the divine creator, but neither is it - as for Niels Bohr - the participating observer who appears; it is the ideal constructor. The paradox, however, resides in the fact that he can only appear in this place because he at the same time assumes that it is, and will remain, empty.

*I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it.*[41]

As with consciousness, so with an onion, wrote Turing. We can peel off one (mechanically functioning) layer after another until we possibly have nothing left in our hand,[42] but he does not draw the conclusion that this emptiness is the result of analytical subdivision and that what the onion and consciousness have in common is their joint - but mutually different - corporeality.

The conventional and problematic distinction between the biological and mental processes is matched by a strikingly loose treatment of the human perceptual apparatus. Turing assumes without further consideration that the perceptual processes can be replaced by learning through a symbolic language. This implies a postulate to the effect that the biological and neurophysiological level has no independent significance for the understanding of intelligence. This can naturally be discussed, but in the given case it challenges his own use of biological theories in his definition of the mental machine. Turing also claims elsewhere in the article that the human nerve system is definitely not a digital computer, but rather a continuous machine.

This highly contradictory account rests on an underlying assumption which Turing never explicitly discussed, namely that it is possible to produce a description of all natural phenomena in a mathematical-algorithmic form. The problem he raised with the continuous nerve machine can therefore be reduced to the relationship between continuous and discontinuous mathematical functions.

Turing does not, however, claim that the digital computer can provide exactly the same answer as a continuous calculation, he simply claims that it can provide an answer which is so similar that a test person would not be able to decide what kind of a machine had calculated the result.[43]
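This approximation argument can be sketched numerically. The sketch below is an illustrative assumption, not Turing's own example: the sine function stands in for an arbitrary continuous process, and 0.001 is an assumed limit of the observer's discrimination. A discrete machine that rounds every value to finitely many digits stays below that limit, so an observer comparing the two outputs could not decide which kind of machine produced them.

```python
import math

# Sketch: a "continuous" quantity imitated by a discrete machine that
# rounds every value to k decimal places, i.e. to finitely many
# representable values. If k keeps the rounding error below the
# observer's discrimination threshold, the outputs are indistinguishable.
def continuous(x):
    return math.sin(x)                     # stand-in continuous process

def discrete(x, k=6):
    return round(math.sin(x), k)           # discrete-state imitation

threshold = 0.001                          # assumed limit of discrimination
worst = max(abs(continuous(x / 100) - discrete(x / 100))
            for x in range(628))
print(worst < threshold)                   # -> True
```

Raising k shrinks the worst-case error further; the discrete machine never reproduces the continuous value exactly, only to within a tolerance chosen in advance.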

He once again uses the limitation of human consciousness as an argument for ignoring a difference he acknowledges as valid himself.

We may, but need not, wait for the future results of biological science. Although Turing's thesis in its general form consists of a debatable postulate on the immateriality of consciousness, the elimination of the biological and perceptual dimensions can be accepted. If they mean anything, this meaning must also be manifested at the symbolic level.

As symbolic representation is at the same time an indispensable condition for computational processes, this level comprises not only a necessary, but also an adequate basis for an understanding of both intelligence and machine, given that we accept the idea that we can think ourselves.

5.9 Symbol generative competence as a criterion of intelligence

There can be no doubt that the "consciousness machine" that Turing modelled in 1936 can contain a deterministic calculating machine. Nor can there be any doubt that this machine - thanks to its memory function, the division of its procedure into single mechanical steps, and the possibility of programming (and mechanically exchanging) instructions - represented a machine of a new type, both with regard to mechanical functionality and areas of use. Although it can hardly be claimed that Turing played a decisive role in the development of the early computers, he was the first to provide a theoretical description and definition of this type of machine and this description did have considerable influence on later developments.
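The machine type described here can be given a minimal sketch. The tape alphabet, state names and the sample program below are illustrative assumptions, not Turing's own notation; they show only the three features just mentioned: a memory (the tape), single-step mechanical operations, and an exchangeable table of instructions.

```python
# Minimal sketch of a Turing machine: a finite control (the transition
# table), a tape that can be extended on demand, and a head that reads
# and writes one symbol per mechanical step.
def run_turing_machine(program, tape, state="start", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Illustrative program: invert a string of 0s and 1s, halting on blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(invert, "0110"))  # -> 1001
```

Exchanging the `invert` table for another one reprograms the same mechanism for a different task; this exchangeability of instructions is what gives the machine its new areas of use.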

Some of his predictions have also been fulfilled. It would be unreasonable not to acknowledge that it is possible today to build computers which can compete with humans, when it comes to chess, and it would be equally unreasonable to claim that it is not possible to build computers which can be trained to carry out many other thinking procedures which are very much in keeping with the perspectives he drew in 1950.

There are also strong indications in favour of accepting Turing's break with the Cartesian construction of consciousness and its many unreasonable dualistic implications. We do not *know* of any mental, spiritual or psychic processes which are not corporeally realized in the physically extended world. This is true of all experiences of timelessness, weightlessness, all perceptual experiences, all hallucinations, all revelations, exactly as it is true of any articulation of the idea of a god, of eternity, immateriality and immortality. We are always able to give a time and date to any human experience.

Finally, there are also strong indications - along with Turing - that we cannot draw the conclusion from this that human consciousness and thinking can be described solely through a description of consciousness as a physical or physiological system. A theory of consciousness must include a dimension which attempts to provide an account of the course of the thought process as a process of thought.

The remaining question, however, if we follow Turing this far, is whether human consciousness under these preconditions can be described as a finite system with a finite number of distinct possible states and whether the relationship between these possible states can be described with the help of the concept of a mechanical process.

Turing's answer to this question comprises an ingenious combination of two arguments. On the one hand, he claims that we cannot exclude that there is a basic equivalence between consciousness and the universal computer, because the question cannot be formulated in a meaningful, i.e. precise form. On the other, he claims that it seems possible to design machines which can answer questions in such a way that people would normally be unable to decide whether they are receiving an answer from a machine or from a human being. Among the arguments for this view, he mentions that it is possible to make the machine capable of answering incorrectly and thereby increasing the similarity to a human answer.

It may well be the case that Turing is correct, both in claiming that it is impossible to provide a precise description of consciousness and that it is possible to build machines which can pass the Turing test. But he cannot be correct when he claims that these two states are compatible with the postulate that it is impossible to exclude an equivalence between consciousness and a Turing machine. He hints at this himself when he maintains that it is necessary to allow the constructor to work with experimental methods which are not predefined, which simply means that the idea of equivalence is an idea of equivalence between two completely unknown entities.

The Turing test, however, can only be carried out when a machine has been built, which again implies that we can give an account of the way in which it is built. The question is therefore not one of a relationship between two quite indefinite phenomena, but of a relationship between the non-defined consciousness and a definite, specific machine which works on the basis of a finite number of discrete states. Any equivalence is thereby excluded, as it is possible to provide a precise description of how such a machine works, while it is impossible to describe consciousness with the same precision.

The objection could now be raised that a description of the machine which can pass the Turing test will therefore also provide a description of consciousness and thereby solve the original problem. This objection will not hold, however, as the Turing test can only reveal whether we can confuse the content of the answers, not the way in which they are produced. It was precisely because it is impossible to conclude from the result to the sequence which produced the result that Turing constructed the test as he did.

This is the reason why he did not predict that machines would be built which would work in the same way as humans think, but only that it would be possible to build machines which produced results which resembled the results of human thinking to the point where the two were indistinguishable and that he believed this would encourage people to accept the idea of thinking machines as a natural part of common sense and common language usage.

The greatest problem in Turing's construction, however, is not that equivalence between consciousness and the discrete-state-machine must be abandoned because we do not know how consciousness works, but do know how the machine works. The greatest problem is that his own description of the universal computer also makes it possible to describe the difference between the machine and consciousness with a hitherto unknown precision, which shows that consciousness quite simply cannot function as a Turing machine.

While Turing uses the idea of the indescribable consciousness to keep open a place for the idea that it can be described as a finite, discrete system, his definition of mechanical procedure provides the possibility of drawing the opposite conclusion.

It is not only possible with this definition to 1) exclude any possibility that consciousness can only operate as a discrete, mechanical symbol system, it is also possible 2) to exclude the idea that there may be a single, even if tiny or bizarre mechanical procedure which itself can produce some form of symbolic activity, if it falls within Turing's definition. It is finally also possible with this definition to prove 3) that human consciousness cannot be completely manifested in a discrete physical system.

The first proof follows immediately if, instead of starting with the mystic consciousness and the human art of illusion, we start with Turing's description of mechanical procedure.

It is evident from this definition that human beings can both formulate procedures which can be executed - and produced - as a result of a finite number of step-by-step operations, and procedures which cannot be executed - and therefore cannot be formulated through - a finite number of step-by-step mechanical operations either. If an attempt were made to get the machine to execute these incomplete procedures mechanically, it would be unable to conclude the process, as it cannot itself produce a stop condition independently of the process. The machine is thus incapable of formulating both the start and stop conditions of the formal procedure. These limitations are not applicable to human consciousness which, on the contrary, also has the ability to formulate such conditions, just as we are capable of working with undetermined conditions, undecidable questions and of interrupting a process which has no built-in stop conditions.
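The point about stop conditions can be given a minimal sketch, assuming a toy state-transition machine (the state names and the step bound are illustrative): the halting condition is part of the program text fixed before the run, and a program whose rules never reach it can only be interrupted from outside.

```python
# Sketch: the stop condition is part of the program, fixed before the run.
# A mechanical step is a bare lookup of the next state; a program whose
# rules never reach "halt" runs until an external bound interrupts it.
def run(program, state="start", max_steps=10_000):
    steps = 0
    while state != "halt" and steps < max_steps:
        state = program[state]             # one mechanical step
        steps += 1
    return state, steps

# This program carries its own stop condition and reaches it by itself...
terminating = {"start": "work", "work": "halt"}
print(run(terminating))                    # -> ('halt', 2)

# ...while this one cycles forever; the interruption comes from outside.
looping = {"start": "work", "work": "start"}
print(run(looping))                        # -> ('start', 10000)
```

The bound `max_steps` is supplied by whoever starts the run, not produced by the process itself: the decision to break off a cycling procedure lies outside the mechanical sequence of steps.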

These abilities are explained in mechanical symbol theories by looking at consciousness as a more comprehensive system, where a procedure at one level can be interrupted by a procedure at a higher symbolic level. If we take into account the capacity of consciousness, it is also possible for us to imagine a very great number of finite states.

Turing's analysis, however, also provides the possibility of rejecting this model with greater certainty, as he shows that any mechanical execution of symbolic procedures depends on a physical definition of the individual symbols which are included in the process.

This definition cannot be carried out by the machine itself and cannot in general be produced as a result of a mechanical symbol procedure because any such procedure is based on a previous definition of the symbols in which the procedure is expressed and through which it is carried out.

It is obvious that the physical symbol cannot be explained as the result of a process which presupposes that it already exists.

In other words, any mechanically performed symbolic procedure depends on a symbolic activity which cannot be explained on the basis of a mechanical symbol theory.

The physical definition must therefore be the result of a symbolic process which is not itself bound to the use of discrete symbols, it must be produced by a physical system which possesses the ability to create discrete symbols. This system cannot, in the given case, be a mechanical system which works step by step because such a system is bound to and limited by the precondition that all effects can be derived mechanically from the given start conditions.

If the system is defined as a mechanical system it is thus bound to comprise a given set of physical entities and a certain set of rules regarding movement. It cannot, at some later step in the process, move in such a way as to make it capable of distinguishing some of the physical entities as symbolic, as the rules governing its movement cannot provide the individual physical entities with new qualities. Nor does the concept of the finite mechanical system allow the introduction of new, physically effective symbols during the process.

If physical-mechanical systems possess a symbolic content this is because it is given outside the system and if this symbolic content manifests itself as an independently operating force which can be distinguished from the given physical rules of movement, the system is no longer mechanical.

No matter whether we explain the ability to define the distinct symbols on the basis of a symbolic competence which is not itself bound to operate with distinct symbols, or as a property of the physical system in which the symbol is created, the result will be that the symbol-creating activity is rooted in a system which is not itself limited to working in distinct, mechanical steps.

This hereby completely excludes the possibility that a Turing machine or any other machine, which only operates with distinct symbolic entities and step-by-step defined, physical movements, itself can possess symbol-creating competence.

As, on the other hand, we know that human consciousness exists in a physical system which can distinguish certain physical forms as symbolic from other, non-symbolic, physical forms in the same system, it is impossible for symbolic competence, consciousness or human intelligence to be contained in a discrete mechanical system, notwithstanding the incalculable number of physical and symbolic possible states in this system.[44]

If we therefore wish to maintain the idea that consciousness is a finitely extended system, we must abandon the demand that it can only operate in distinct, mechanical steps and if, on the other hand, we wish to maintain this demand we must - just as Turing allowed the infinite tape - accept that consciousness is not subject to the finiteness of the physical world, while at the same time admitting the ability of this transcendental force to continually intervene in the physical world.

The attempt to apply mechanical description to human consciousness thus leads unavoidably to a complete dissolution of the constitutional premises of this thinking. The explanation is, in fact, not particularly surprising, as mechanical thinking assumes both the concept of infinite consciousness and the divine creation of immaterial force as well as of material particles.

While it is easy to explain how it is possible to perform calculation processes on a machine, namely by referring to our consciousness which is capable of defining both the physical and semantic value of symbols, it is more difficult - or as Turing claimed perhaps quite impossible - to explain how we ourselves are capable of carrying out these symbolic operations. But it is not particularly difficult to see that a mechanical description of physical systems, which are capable of creating symbols, must describe this ability as a transcendentally given, metaphysical precondition by which the entire mechanical point is dissolved.

Even the simplest step-by-step symbol procedure presupposes a symbolic competence the machine cannot possess, whereas human consciousness possesses the ability to 1) establish symbols in its own physical system and in the physical surroundings, 2) formulate start and stop conditions for finite procedures, 3) handle undecided and undecidable states, 4) formulate expressions which cannot be produced through a finite number of simple mechanical steps, and 5) operate - possibly not at all, and certainly not exclusively - with a single, delimited and physically defined notation system in its brain.

As Turing's machine cannot describe its own physical system, it cannot itself produce any distinction between physical forms which are symbolic and physical forms which are not, either.

This competence must conversely be seen as a basic condition for intelligence and must therefore necessarily be included in a theory of human consciousness and thinking.

Exactly the same holds true of the relationship of the Turing machine to the content of thought. It cannot itself formulate the concepts which must be defined in order to make it functional. Turing's model does not include the ability to establish the symbolic meaning of a symbol, just as any attribution of functional, syntactic or semantic content depends on a symbol activity produced by a human being.

As the ability to create symbols depends on distinguishing between physical processes which are not symbolic, relative to physical processes which are manifested as symbolic, it is tempting to propose the thesis that the human ability to make such distinctions is connected with the fact that our conceptual competence is not bound to a well-defined relationship between the corporeal realization and a certain symbolic structure. It appears as if the human brain - the physical, neurophysiological and mental system - can only possess symbolic competence because the symbolic structure does not coincide, or is not congruent with the physical structure in which it is embodied.

The informational, mental or conscious level cannot, however, be completely separated from the physical-physiological as an absolutely separate level with its own delimited, stable structure, because consciousness must also be understood as the process in which the definition of symbols as symbols takes place. Such a definition can in many cases be derived from already established symbols, but it cannot hold for all symbols. We must also include in the concept of consciousness the physical-physiological "system's" ability to crystallize symbolic forms as well as symbolic meaning.

While it is true of human cognition that it takes place in a physical system which in one or another - unknown - way has become capable of producing a critical threshold for creating symbols itself, it is true of all known physical machines, including the computer, that they cannot of themselves produce such a critical threshold.

Which physical explanation is necessary in order to introduce human consciousness into the extended world can hardly be decided at present, but it is difficult to see how it is possible to avoid concepts of indefinite transitions and other non-mechanical concepts.

The demand for both formally and physically well-defined symbols comprises the computational start condition, but not that of consciousness. In itself it comprises an irreducible and distinctive criterion for distinguishing between human consciousness and all types of mechanical calculation procedures.

Turing incidentally also had the idea of transferring the concept of a critical threshold from quantum physics to an understanding of consciousness. While the consciousness of animals appears to be sub-critical, he writes, it is perhaps possible to assume that human consciousness contains a special super-critical threshold where a certain mental input produces an "explosive" chain reaction which takes the form of a production of new ideas.[45]

He limited himself, however, to discussing the critical threshold as an analogy between the physical and the conscious planes. He overlooked the fact that the first critical threshold which characterizes human consciousness is that threshold which enabled the physiological system to create symbols by itself at one stage or another in the history of development.

Although we cannot provide an exhaustive description of consciousness, we are perfectly able to localize symbol creating competence as the common minimum condition of consciousness, thinking and language.

It is therefore not only obvious and necessary to reject the intelligence criterion of the Turing test (that which resembles something else is probably the same as that which it resembles). It is also possible to formulate a more precise test criterion, as we can make this as yet unexplained, but evident, physiological property a central test criterion for a scientifically well-defined use of the term `thinking machine'. No ingenious experimental arrangement is necessary to carry out the test. The demand is solely that we build a physical apparatus which possesses the ability to develop symbol generative competence with the help of components which do not possess such competence.

With this criterion we not only avoid the unbecoming reference to a more or less widespread terminological confusion, we can also emancipate the understanding of the immanent character of symbolic processes from mechanical reductionism.

We may wonder that Turing himself did not discover this criterion, as he was perfectly at home in the borderland between the physical, the biological and the psychological. The explanation can perhaps be found in the conceptual tradition of developing separate conceptual apparatuses for each area and discipline, apparatuses which thereby become jointly responsible for placing the borderlines between areas and disciplines as a given precondition - one which thus falls outside the area and scope of each individual discipline.

Turing did not actually consider the physical manifestation of consciousness; on the contrary, he used - and reinterpreted - the mechanical model of physics because it turned out that it was possible to exploit this model to represent a considerable part of mathematical logic, which undeniably has its place within the concept of human intelligence.

A similar - rather less subtle - figure can be found later in Newell and Simon's formulation of the theoretical basis for the idea of artificial intelligence, although they actually describe their, possibly universal, symbol theory as a theory of *physical* symbol systems, because:

*... such systems clearly obey the laws of physics - they are realizable by engineered systems made of engineered components... A physical symbol system consists of a set of entities called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure). Thus a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way. Besides... the system also contains a collection of processes that operate on expressions to produce other expressions.*[46]

A physical symbol system is thus defined here as

- A given set of symbolic entities, each with a well-defined, finite physical manifestation.
- A given set of sentences created by a constellation of such entities and
- A given set of rules for how one sentence can be transformed into another.
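The three-part definition above can be sketched in a few lines of code. This is a hypothetical illustration only, not Newell and Simon's own formalism: symbols are discrete tokens, expressions are tuples of symbols, and processes are rules transforming one expression into another. All names and rules are invented for the sketch.

```python
# Hypothetical sketch of the three components of a "physical symbol system":
# symbols (discrete tokens), expressions (structures of tokens), and
# processes (operations producing new expressions from old ones).

def rewrite(expr, rules):
    """Apply the first matching rule: a 'process' producing a new expression."""
    for pattern, result in rules:
        if expr == pattern:
            return result
    return expr  # no rule applies: the expression is left unchanged

# Symbols: "A", "B", "C"; expressions: tuples of symbols.
rules = [
    (("A", "B"), ("B", "A")),  # one expression is transformed into another
    (("B", "A"), ("C",)),
]

e = ("A", "B")
e = rewrite(e, rules)  # ("B", "A")
e = rewrite(e, rules)  # ("C",)
```

Note that in this sketch, as in the definition it illustrates, the rules stand outside the set of expressions they operate on - the very point taken up in the criticism that follows.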

This model is generally consistent with formal logic and the basic theses of formal linguistic theories, with the single addition that Newell and Simon assume that the symbols are also physically manifested and that the entire system obeys physical-mechanical laws. It can also be noted that its structure is completely equivalent to Newtonian mechanics, as it is a system of physical entities corresponding to Newton's particles which are moved by an immaterial or transcendental set of rules without a physical manifestation and not themselves processed in time and space, corresponding to his transcendental concept of force.

It is only the first two points, however, which are understood as physically manifest, whereas the governing rules are understood as given outside the system.

Newell and Simon clearly attempt to bridge the gap between physical process and mechanical theory by defining the symbols as physical "atoms", which together create "sentences" which again can be transformed with the help of a number of "processes" which are apparently elevated above the physical system as a Newtonian transcendental force that regulates the physical symbol particles. Although they - naturally - consider which properties are also necessary for such a system to be regarded as intelligent, they take as little trouble as Turing to explain why certain symbolic entities are physical while others are not and why certain physical entities are symbolic and others are not.

The quasi-physical symbol theory which was the basis of classical AI research thus begins by ignoring exactly that which delimits its subject area from other areas, namely the symbol generative competence which separates certain physical systems, including human consciousness, from others - among them all known machines.

That Newell and Simon formulate their physical symbol theory in the image of formal notation is not only inadequate, because it identifies the physically defined expression form with an equivalent content form, it also gives a completely incorrect description of how the symbolic rule structure is produced as the result of a physical-mechanical process in a Turing machine as well as in an electronic computer. Like Turing they overlook the fact that rules can only be effectuated if they are themselves represented as a sequence of individually and freely editable, physically manifested notation units - in exactly the same form as all other data.

These inadequacies and errors presumably partially explain the discrepancy between the many proclamations of the theory's explanatory force and the actual results which have since been arrived at. But they do not explain the considerable impact of the theory, which is surprising when we take into consideration that - far into the 1970's and 1980's - classical mechanics in the manner of Laplace was understood in these theories as a universal physical paradigm, simply utilized as though the great machine were the sum of many small machines.

Turing was not only more cautious, he was more precise because he maintained a distinction between the indefinable basic category `consciousness' (or intelligence) and the definite machine. Although he therefore had no illusions that the Turing test could be used to draw conclusions regarding the way in which humans think - the inability to distinguish between, or the similarity of two phenomena does not mean that they are produced in the same way - he still became enmeshed in his own net. There is no room in the world of mechanical concepts, neither in Turing's nor Simon's version, to describe how symbol generative competence can arise in a physical system. In fact, there is no room at all for events of this character. But it nevertheless explicitly assumes such human symbol activity, without which there is no machine.

There has been much criticism both of the Turing test and of the implications for theories of consciousness formulated as its corollary. The criticism can be divided into three main positions.

First there are objections which start with a traditional, idealistic understanding of consciousness. These oppose the idea of describing the content of consciousness as processes which are expressed in time and space, as this presentation is understood as a mechanical-reductionist and materialistic idea which cannot give an account of values and content. It is characteristic that from this point of view there is no concern with the creation and development of consciousness, but consciousness is taken as axiomatically given - usually from an introspective viewpoint.[47]

Second come the objections which accept the idea that the brain's physiological activities can be described on the basis of mechanics, whereas the possibility of deriving a description of consciousness - intentionality - from the physiological description is contested. An exponent of this view is John Searle, who formulated it briefly and clearly:

*The brain, as far as its intrinsic operations are concerned, does not do information processing. It is a specific biological organ, and its specific, neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.*[48]

It is characteristic here that the biological anchoring of consciousness is acknowledged, while the idea that the relationship between the biological and the conscious is accessible (or necessary to take into account) is rejected.

Third come the objections which accept the idea that the brain and consciousness both operate as a physically realized, finite system, but which dispute that the - informational - processes of consciousness are rule determined.[49] It is characteristic that here it is accepted that the processes of both the brain and consciousness are realized either in a single system, or in two homologous systems which operate exclusively with a finite number of discrete - physically definable - states.

The following criticism has connections with elements from all three positions.

From the first and second positions I accept that there is no possibility of deriving the content of consciousness from the underlying neurophysiological processes, or of describing it as homologous or homomorphic to the physical realization, certainly not if this realization is identical or analogous to computational processes.

From the third position I accept that the content of consciousness is always processed in a physical form manifested in time and space, but not the idea that consciousness can be regarded as a system which operates exclusively with discrete and finite states.

The decisive difference from previous criticisms comes when we accept the idea that consciousness is physically processed and manifested in time and space, as a consistent implementation of this idea leads to the conclusion that consciousness must possess symbol generative competence, which includes the ability to produce the symbolic forms that are taken as axiomatically given in the view of consciousness as a finite and discrete system.

Although it is not possible on the basis of our present knowledge to give an account of the origin of this property in the physical and biological universe, it has a character which implies that it is possible to draw the conclusion that consciousness must necessarily possess at least one property which is incompatible with the idea of a discrete, finite formal or physical system.

But this is not a question of rejecting the formal theories of cognition on the basis of older assumptions of theories of consciousness. On the contrary, it is a question of a criticism which appears when we take up the consideration of Cognitive Science and follow it to its conclusion, which here means back to the problem of the indefinite beginning.

While the models of Cognitive Science fall short as models of consciousness, the attempt to apply the conceptual world of mechanical physics to consciousness gave rise to a remarkable transformation in the understanding of the mechanical system itself, as the reference of classical mechanics to the physical universe is replaced with a functionalistic idea of a separate, distinctly delimited symbolic world which is realized in (different or similar) human and mechanical forms.

While the mechanical procedure in Newtonian theory is controlled by an immaterial force, its effects are purely material. Conditions are reversed in the symbolic interpretation of mechanical theory, as the relationship between material entities is seen as the cause of symbolic effects. Although the mechanical symbol theories do not provide an account of the force concept, they apparently allow it to be understood both as an immaterial Newtonian concept (as a kind of symbolic force) and as a concept of physical energy, as long as the physical energy is described as pure form.

It was Boltzmann who took the first step in this transformation by interpreting mechanical theory as an abstract, finite model of description. While he still viewed the mechanical model as a physical model, however, it was interpreted in other disciplines as a model of their respective domains. The idea of the finite space is thus interpreted in mathematical logic as a logical space, and from here Turing could take the final step in this transformation by bringing the logical space back into a mechanical-physical form, showing that it is possible to reduce all finite mathematical and logical procedures to simple, mechanically executed steps. On this footbridge between logic and physics the mechanical procedure apparently becomes equipped with a built-in symbolic meaning, and the way is paved for the opposite movement: from symbolic mechanics, where physical materiality no longer means anything, to logic which, as the highest expression of human intelligence, perhaps also contains its essence.

The result was a neo-Cartesian research paradigm which supplies the Cartesian subject with finite and delimited physical-dynamic properties derived from 18th century materialistic theories of energy and instinct, as the concept of physical and/or biological forces is replaced by an immaterial process concept.

This abstraction results in a new version of the Cartesian dualism between consciousness and corporeality, as corporeality is removed from the space under consideration and the form concepts derived from the study of corporeality are applied to consciousness. In this way the concepts of the mental subject and the physical object disappear into a formal - often mathematical or information theoretical - transcendence. The relationship to the Cartesian tradition is thus not characterized by a confrontation with its dualism, but on the contrary by a confrontation with the use of this dualism in the polarization between an immaterial, non-extended subject and a material, extended object.

In the neo-Cartesian paradigm everything is extended in time and space and nothing is material. Matter is no longer accepted as a source of meaning, it is manifested only as a tiresome restriction or a completely passive and arbitrary medium. The system is at the same time deterministic and allows no room for either will or instinct as potential sources of disturbance of a given structure. But it is itself a theoretical system and was thought of in opposition to other theories. In other words it is itself a product of tension and will. But its existence must still be taken into account, just as for some time yet we must probably come to terms with the fact that the source of this will is not only reason, but also instinct.

The fact that this paradigm can provide new knowledge, and there is no reason to deny this, is connected both with the theoretical, levelling one-sidedness, which contributes to making a number of problems of cognitive theory more acute, and with the well-documented circumstance that it is far from the case that only valid theories are capable of providing knowledge of the world.

It is hardly by chance that the neo-Cartesian paradigm takes the shape of a highly speculative cognitive paradigm, as human cognition, unlike all other areas of research, occupies a doubly exceptional position. First, the process of cognition is invisible and therefore inaccessible to direct observation. In this it resembles much of modern physics, which must also use instrumentally mediated measurement procedures, the meaning of which can hardly be separated from the phenomena measured. In both cases these are areas where any type of observation is determined by complex, hypothetical assumptions which are implemented in experimental arrangements, measuring and testing apparatuses. Second, cognitive science is by definition self-referential: the extent and properties of the research subject are identical with those of the object.

The study of cognitive processes perhaps resembles, as Zenon Pylyshyn wrote, the attempt of a blind man to study elephants, but more closely resembles a blind man's attempt to study blindness.[50]

These circumstances not only make the circular conclusion of neo-Cartesianism understandable, they also show that cognitive science could only achieve its modern breakthrough from the moment that an advanced set of theoretical and hypothetical assumptions became available which would make the necessary objectivization possible through externalized test apparatuses.

While the idea of utilizing the description of conscious processes in the construction of computers together with the idea of using computer models to test empirical and theoretical material on mental processes may be two individually well-reasoned, scientific paths, the interweaving of these ideas in the idea of a constitutive similarity is a blind alley which creates an obstacle for an understanding of the properties of consciousness and the computer alike. A scientifically responsible comparison must start by conceptualizing the differences which are the precondition for a comparison.

Turing's mistake was not that he suggested a strategy for constructing an apparatus which could serve as a tool for human thinking and also as a tool for research into human thinking - in which case civilization as such is an error; it was that he made the symbolic start and stop conditions, which are a precondition for the machine, a precondition for that consciousness which is the only known producer of such conditions. It is with this constitutive mistake that the information theoretical paradigms of the 20th century - with all due respect to their many other merits - take leave of the energy-theoretical paradigms of the 19th century.

Turing was perfectly aware of this. There are, he wrote, no systems which are characterized by discrete states:

*(The discrete state machines) are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machines which can profitably be **thought of** as being discrete state machines.*[51]

Everything moves continuously.

The discrete state is only a - rewarding - mental idea. For exactly the same reason the double reflection of the idea of the discrete consciousness and the discrete machine is a circular short-circuit which by definition ignores the insoluble, research motivating basic problem - the relationship of the discrete representation to the continuous phenomenon.

We could say that this is particularly an ecological error, but the correction of this error must be located in symbolic theory.
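A discrete state machine in Turing's sense - one definite jump per input, with no continuous intermediate states - can be sketched as a simple transition table. The state and input names below are invented for illustration:

```python
# Hypothetical two-state discrete state machine in Turing's 1950 sense:
# each (state, input) pair jumps to exactly one quite definite next state.
transitions = {
    ("q0", 0): "q0",
    ("q0", 1): "q1",
    ("q1", 0): "q1",
    ("q1", 1): "q0",  # input 1 toggles the state
}

def run(inputs, state="q0"):
    """Step through the inputs by discrete jumps between definite states."""
    for i in inputs:
        state = transitions[(state, i)]
    return state

print(run([1, 0, 1]))  # two toggles: back to "q0"
```

The table leaves no room for the continuous movement Turing insists on: every physical process between two entries has, by construction, been idealized away.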

A related conclusion, which however, is limited to providing an account of the difference between the physical-mechanical and the formal logical procedure, has also been presented by Robert Rosen in pointing out that there is no formal method which can produce a statement on the congruence between causal relationships in physical systems and the logical conclusion procedures which define any formal simulation of a physical system:

*Thus in formal systems, we already find that a purely syntactical encoding will in some sense **lose information**. The information lost must then pertain to an irreducible, unformalizable **semantic component** in the original inferential structure. By changing the encodings, we can shift to some extent where this semantic information resides, but we cannot eliminate it.*[52]

Rosen's objection is thus not simply that the physical "causation" and the logical "implication" represent two different logical structures, but also that the formal procedure is of a syntactic nature and is produced by an elimination of the information on the system which is to be simulated (or represented).

As will be shown in chapters 6 and 7, it is possible to provide an even more precise definition of the information which is lost, as the formal procedure is not produced as a selective sorting of more relevant information from less relevant, but as a principled and constitutive elimination of referentiality to the non-symbolic.

***

The general theme of Turing's thinking is the relationship between matter and consciousness. To this he added two fine distinctions, both closely connected with the inversion which lay in the understanding of consciousness in the image of matter.

The first distinction - clearest in its difference from a classical physical-mechanical understanding - was that his materialistic model included memory, self-control and development mechanisms. At this point he was completely in line with the contemporary efforts of the behaviourist psychologist Clark L. Hull to design mechanical robots.[53] Unlike these, however, Turing's machine was not simply characterized by containing a memory function, a control unit and a feedback mechanism - the three elements then considered as the decisive obstacles to a mechanical description of biological and mental processes; it was also characterized by a complete dissolution both of mechanical determination and of the invariant bond between the mechanical function and its symbolic meaning.

The second distinction - clearest in relationship to classical mentalism - was the use of the condition of finiteness as a question of physical-mechanical execution. Turing believed that through a combination of these two trains of thought it would be possible to place the significance of physical corporeality in - a mathematical - parenthesis. Corporeality itself becomes an external, arbitrary and replaceable vehicle.

In this way Turing eliminated the materialistic dimension from materialistic thinking and thereby concluded an epoch in the history of materialistic thinking, as he also opened the way for a new, at once "symbolic" and practical technological epoch in the mentalist tradition.

He led the old dream of a mathematically perfect solution to the enigmatic relationship between matter and consciousness to new limits: as a mathematician; as one of the leading English cryptographers during and after World War II, with access to knowledge so secret that even its existence was a secret, and subject to military regulations of secrecy; sentenced to hormone treatment for a homosexual relationship with a young man of proletarian background at a time when the cold war was breeding a paranoid fear of sexual perversion and communism; until, finally, it is believed - perhaps, perhaps not in accordance with his own basic beliefs - he committed suicide on 7 June 1954.[54]

**Notes, chapter 5**

- Turing, (1936) 1965: 115-154.
- Quoted here after Gandy, 1988: 85.
- Feferman, 1988: 117 f.
- Kurt Gödel, (1946) 1965: 84.
- C.f. Michael J. Beeson, 1988: 194-198. There appears to be agreement that Turing's result accords with Church's thesis: that effectively calculable functions in general are recursive. The thesis is sometimes referred to as Church's and sometimes as the Church-Turing thesis. C.f. Kleene, 1988. Haugeland, (1985) 1987 assumes that the two analyses are equivalent. Church, as mentioned, believed that his thesis had been demonstrated by Turing's analysis, but its status is still under discussion. Gandy thus objects that it cannot be excluded that it may be possible in the future to formulate non-recursive mathematical-logical algorithms and demonstrations and that the thesis can therefore not be regarded as having been proved. Gandy, 1988: 78-79. Gandy also emphasizes that Turing's analysis also contains another independent thesis, usually called Turing's theorem: any calculable function (in Church's sense) which can be performed by a human can also be performed by a machine.
- C.f. Kleene, 1988: 23 and Gandy, 1988: 81 for slightly varied specifications of the operational structure of the Turing machine. On practical grounds, Turing introduced on the way several operational mechanisms, among them a division of the tape so that every other square was reserved for auxiliary signs which could be deleted and which were used during the operations.
- This specification is from Martin Davis, 1988: 155, who presents it in three variants - one for each of the three possible movements. It should be noted that [[alpha]] and [[beta]] can have the same value, so that the result of the operation will be that the value remains unchanged, corresponding to nothing being written. In this form [[alpha]] and [[beta]] cannot be replaced by 0 and 1. Turing uses the form as a starting point for a conversion of the programme to "machine language", Turing (1936): 126-127.
- Turing, (1936) 1965: 127.
- Turing uses the term computer of a person who performs calculations, in accordance with its then ordinary meaning. The machine is called a "computing machine".
- Zuse's first and subsequent machines are described in Williams, 1985: 216-224.
- The relationship between Turing's notation and binary notation as used in modern computers will be discussed later in this chapter.
- In Turing the expression is given in the form qi Sj SkRq m, where R (right) can be replaced by L (left) or N (none).
- Turing, (1936) 1965: 127.
- Turing, (1936) 1965: 135 (footnote).
- C.f. Kleene, 1988: 30. Turing, (1936) 1965: 118.
- Gandy, 1988: 90 note 38.
- Turing (1936) 1965: 118.
- Turing's view of the choice machine is incidentally too narrow, even if only the machine's ability to simulate calculations is taken into account. By allowing the operator to provide new input it also becomes possible to use new or unforeseen information. It may not only be difficult or impossible to realize the ideal dream of including these possibilities in an automatic process, it may also be inappropriate.
- Turing 1950: 440. This equally pioneering article is discussed in greater detail in sections 5.6-5.8.
- Among them, cybernetics, classical AI research and Cognitive Science research.
- Davis, 1965: 115.
- Gandy, 1988: 85.
- Turing, (1936) 1965: 116.
- Turing, (1936) 1965: 117.
- Ibid: 117.
- Turing, (1936) 1965: 136.
- Turing, (1936) 1965: 136.
- The separation of the definition of physical form from the definition of the symbolic value also has linguistic implications, as it appears difficult to reconcile with the linguistic description of the sign relationship as a unity of expression and content. C.f. chapters 7-9.
- Hodges, 1983: 96 ff.
- Hodges, 1983: 96.
- Turing, (1936) 1965: 208-209.
- Turing, (1939) 1965: 209.
- Turing, 1950: 433-460.
- Turing, 1950: 434.
- Turing, 1950: 442.
- Turing, 1950: 454.
- This last section of Turing's article from 1950, which according to Turing contained the positive, concrete and debatable evidence is omitted from the reprint in Hofstadter and Dennett, 1981: 53-67. The whole article (apart from some cross-references) has been reprinted in Bannon and Pylyshyn, 1989: 85-109.
- Turing, 1950: 456.
- Turing, 1950: 456.
- Turing, 1950: 456.
- Turing, 1950: 447.
- Turing, 1950: 454-455.
- Turing, 1950: 451-452.
- This conclusion is strengthened as it can also be shown - as will appear from chapter 6 - that a conscious system must necessarily possess the ability to decide whether a given physical form in an arbitrary situation is only a physical form or whether it is also a symbolic form. In other words, it is not possible to provide a purely physical or mechanical definition of the concepts of (and distinction between) `noise' and `information'.
- Turing, 1950: 454.
- Newell and Simon (1976) 1989: 112-113.
- Theodor Roszak can be mentioned as an exponent of this view, (1986) 1988.
- John Searle, 1990: 19.
- Exponents of this view can be represented by Hubert and Stuart Dreyfus, (1986) 1991.
- Pylyshyn, (1984) 1985: *Computation and Cognition: Toward a Foundation for Cognitive Science.*
- Turing, 1950: 439.
- Rosen, 1988: 533.
- C.f. Roberto Cordeschi, 1991, who discusses Hull's work in relation to Cognitive Science.
- Hodges, 1983.