Article URL: www.hum.au.dk/ckulturf/pages/publications/
Electronically published: November 12, 1997
© Niels Ole Finnemann 1999. All rights reserved. This text may be copied freely and distributed either electronically or in printed form under the following conditions. You may not copy or distribute it in any other fashion without express written permission from me. Otherwise I encourage you to share this work widely and to link freely to it.
You keep this copyright notice and list of conditions with any copy you make of the text.
You keep the preface and all chapters intact.
You do not charge money for the text or for access to reading or copying it.
That is, you may not include it in any collection, compendium, database, ftp site, CD-ROM, etc. which requires payment, or any World Wide Web site which requires payment or registration. You may not charge money for shipping the text or distributing it. If you give it away, these conditions must remain intact.
For permission to copy or distribute in any other fashion, contact firstname.lastname@example.org
Niels Ole Finnemann
Redundancy and Codes
Paper for the symposium "The emergence of codes and intentions" held at Odense University, Oct. 1995.
Many theories seem to entail that symbolic processes emerge in a pre-existing nonsymbolic world and take place only in time and space, but this assumption raises difficult questions about the concepts of codes and rules. Although many symbolic processes can be derived from pre-existing codes, there is no way to derive the first symbols from pre-existing codes, since the coding procedure itself is a symbolic activity. Hence the first coding - the capacity to create codes and symbols - cannot itself be explained as the result of a coded procedure, but has to be seen as an - as yet? - unexplained fait accompli. Since the emergence of coding procedures is the result of a symbolic activity, presupposing some kind of intentionality or consciousness, it is as unlikely as it is unprovable that this capacity itself is rule-governed. On the contrary, it can be shown that we have symbolic systems (e.g. linguistic and informational ones) which are not rule-based, although they can contain and produce codes. Evidence can be found in the way these systems exploit redundant patterns as the basis for generating new codes - in some cases independently of previously established codes - ruling out any possibility of giving a rule-based account of these systems. As a consequence it is necessary to distinguish between rule-based systems - characterized by programmes and codes prescribing the structure - and redundant systems, in which new codes can be established either by changing the function of existing patterns, by the creation of new patterns, and/or by variation of a pattern. While formal systems belong to the first type, linguistic systems and informational systems as carried out in computers belong to the second. Here we may also expect to find the mind - implying that the mind possesses the capacity to use the neurophysiological system (the brain) as a redundant system.
I think everybody here can agree that we are in need of a theory of coding which can provide a basis for our understanding of both biological and mental systems.
However, I am not sure that we are actually looking for the same thing or asking the same questions, so I shall start by presenting what I am looking for and specifying the questions I have in mind.
The first thing to say, then, is that a theory of coding should allow us to address questions about the relation between physical, biological and mental phenomena.
This, I think, can basically be done from two different perspectives. On the one hand it can be done in a diachronic perspective, in which the question takes the form: how can we explain the origin and development of physical, biological and mental phenomena? On the other hand it can be done in a synchronic perspective, in which the question takes the form: how do mental, biological and physical phenomena interrelate as different - but interfering - levels of the same processes?
As exponents of these two perspectives we can mention the Darwinist - and neo-Darwinist - models of evolution as paradigms for the diachronic model, and the theories of artificial intelligence and the like as a paradigm for the synchronic model.
Now, one might of course hope that it would be - if not easy, then at least possible - to combine these two kinds of paradigms, but since I do not think it is, I shall try to explain why.
There are in fact several reasons to doubt the possibility. One is that the Darwinist model limits itself to relations between living organisms and their surroundings, while the model of artificial intelligence, on the other hand, does not contain an independent biological level at all - or only does so by assuming that it can be reduced to mechanical - or perhaps chemical - processes.
Other reasons could be given too, but I will restrict myself to mentioning the one I see as the most basic: that both types of theories presuppose the existence of a coding procedure - however, of two different types.
In Darwinist theories this presupposition is implied in the assumption that the very existence of living organisms (the basic code of life) can be taken for granted, and in neo-Darwinist theories in the assumption that the genetic code can be taken as such.
In the theories of artificial intelligence a similar assumption is implied in the basic definition of a physical symbol system, in which the very existence of physically defined symbols, as well as the basic rules or codes for symbol processing, are taken for granted.
It is exactly these presuppositions we now want to get hold of, implying that the coding procedure - or, as I will name it, the coding competence - is moved from the set of axioms into the field of analyzed processes.
However, there are some basic limits to this - questions to which real answers still seem to be out of reach. I do not think, for instance, that we are able to provide a theory of coding which can actually explain the origin of life or the origin of human consciousness and symbol processing.
(See Appendix 1 on the four questions of origin)
I think we need to be aware of these limits, not only because it is always wise to remember what we do not know; we also need to be aware that the lack of knowledge about the origin of these phenomena does not allow us to ignore that they actually have originated in nature.
This is a very important point, because it reveals that the existence of symbolic codes has to be seen as the result of a process in which these codes come into existence. They have not always been there.
So, while we can explain many biological and symbolic codes as produced by previously existing codes, there is no way to derive the first codes from such previously existing ones.
Hence we need to say that the first codes - whether biological or symbolic - presuppose the existence of a capacity to create codes, which cannot itself be the result of such a code but has to be seen as a yet unexplained fait accompli. It is not the existence of codes which should be taken for granted, but the existence of a coding capacity.
It is not easy to see how this capacity to code originated in nature, but we need to accept that it actually happened. At least it seems easier to accept the emergence of a coding competence than to accept the emergence of a complete and fully developed system of codes, as assumed in the current theories of DNA and AI.
On the other hand, I agree with the AI theories in asserting that the creation and use of symbols is a basic condition for mental phenomena, and that we need to accept that these symbols and patterns do have a physical or physiological manifestation.
(See Appendix B on Simon & Newell's "physical symbol theory".)
I hold this to be true both for externalized symbols in the form of speech, writing, formal representation, pictorial representation etc., and for internal, mental processes.
But this is only one of the implications inherent in the question of origin. If we accept the idea of the emergence of a capacity to create codes which cannot be derived from previously existing codes, we can also assume that we still have a capacity to create new codes which are not the results of former codes.
Instead of theories presupposing that human symbol production is based on a given or even well-defined set of basic codes or rules, from which we can derive, for instance, the functioning of the human mind, I will suggest a theory of mind in which the capacity to code is basically seen as a capacity to use the neurophysiological system as a stabilizing, but not determining, basis for performing coding activities. That is: the brain (and body) does not contain the rules for the operations of the mind (and mental processes performed in the rest of the body, if any). However, the brain (and body) does serve a purpose for the functioning of the mind (and mental processes), insofar as it provides a set of possible patterns, which I will describe as a system of redundant patterns, which can be used and at certain levels perhaps even created by the mind.
I shall now try to describe the way this might be done. But since we are not able to go into the mind and see how it uses the neurophysiological system or the physical patterns of different symbols and codes, we have to go some other way to clarify our understanding of the relation between physical, neurophysiological and mental processes.
There are several possible ways. One would be to analyze the relationships between physical forms and symbolic content in our different externalized symbol systems. This can be done both for linguistic and formal languages and for pictorial representations, and - at this point still following the AI theories - it can also be done by analyzing the computational model of physical symbol processing. The result, however, would be different from theirs, since they do not take the very existence of the capacity to create codes into their account of physical symbol systems.
(See Appendix 3 for an account of basic differences between binary notation as used in computers and formal notation principles.)
I shall now explain what this capacity to code is about.
As the first example I will take a common experience, namely our own ability to distinguish between the occurrence of a physical pattern which we do accept as a legitimate symbol, and the occurrence of an identical physical pattern which we do not accept as a symbol.
One might recognize here the well-known problem of noise, first treated in Shannon's theory of communication. See Appendix 4 on the three problems of noise, and the paper copy of overheads on the concepts of redundancy exploited by Shannon.
I think nobody will doubt that we are often able to make this distinction, although some cases may leave us in doubt whether a manifested symbol is actually a symbol or only a physical pattern. In the same way we know for sure that we are able to conceive many different physical patterns as expressions of the very same symbolic form, as we do when we recognize different (handwritten or computer-generated) forms of the letter (a) as an (a).
Hence we can conclude that we are able to categorize physical distinctions according to importance, both between more and less important, and between important and unimportant.
We can describe this ability or competence as a capacity to decide - and ascribe - symbolic or semantic content to a physical pattern by selecting between important and unimportant physical differences, and even between the legitimacy or illegitimacy of identical physical forms. And we can do it without being aware of it.
Actually we can also use this capacity to change the borderlines between these instances, by taking in new physical patterns and distinctions which were not formerly legitimate symbolic forms. This capacity, I think, is a basic component of and condition for any kind of coding (while none of this can take place in a computer, nor is it allowed in the computational model of the mind).
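To make the point concrete, here is a toy sketch - my own construction, not part of the original argument - in which physical patterns are reduced to feature tuples and a threshold decides which physical differences count as symbolically important. The prototype symbols, the two "shape features", and the threshold are all invented for illustration.

```python
# Each known symbol type is represented by a prototype pattern
# (hypothetical shape features; the numbers are arbitrary).
PROTOTYPES = {
    "a": (1.0, 0.2),
    "o": (0.9, 0.9),
}

def distance(p, q):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

def classify(pattern, threshold=0.3):
    """Map a physical pattern to a symbol, or reject it as mere noise.

    Differences smaller than `threshold` are treated as unimportant
    (variants of the same symbol); larger ones disqualify the pattern
    as a legitimate symbolic form.
    """
    symbol, d = min(((s, distance(pattern, proto))
                     for s, proto in PROTOTYPES.items()),
                    key=lambda pair: pair[1])
    return symbol if d <= threshold else None

print(classify((1.05, 0.25)))  # a slight variant still counts as "a"
print(classify((0.0, 5.0)))    # too different: rejected as noise -> None
```

Note that the threshold itself encodes a coding decision that the program cannot make for itself - which is exactly the competence the text locates outside any such rule.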
Since the physical form itself cannot provide a guarantee that it is actually a legitimate symbolic expression, we are forced to conclude that such a guarantee can only be provided by a coding competence which is not itself definable as a distinct physical symbol.
The case illustrates that we possess the ability to code and recode the symbolic content ascribed to physical patterns. But it does not tell us much about how this is possible or how it is done.
I shall now give another example, which will show, first, how new codes can be generated in systems independently of pre-existing rules but dependent on redundancy functions, and secondly that systems based on redundancy (as linguistic and informational systems are) are more closely related to the functioning of the human brain and mind than any kind of formal symbol system (in which rules take the place of redundancy).
As the first step I will now give a linguistic example demonstrating the basic mechanism for generating new rules in redundant systems. The example is taken from a recent innovation in ordinary Danish, but I think it is nevertheless well suited for the purpose. It concerns a group of compounds which some years ago, rather suddenly, were changed to a new form, as listed:
Børneren for Børnehaven (Nursery class)
Døgneren for Døgnkiosken (Small grocery)
Fritteren for Fritidsinstitutionen (After school recreation centre)
Trykkeren for Fjernbetjeningen (Remote control unit)
There is a rule of reduction in this, and we can describe it by saying: a weak ending is substituted for the second part of some compounds.
The rule is applied to a limited group of compounds, i.e. compounds referring to central - new - institutions in daily family life in Denmark from the 1970s onwards. However, the conditions and restrictions on the use of the rule are extralinguistic and not rule-based, but based on familiarity (which is a case of redundancy). The familiarity is actually part of the message contained in the new form, and it is also the general precondition both for the change and for the selection of the range of application.
Perhaps one should also take into account that the compounds, except "børnehaven", refer to quite new - and at the time not culturally internalized - institutions, making language usage in the area less stabilized. But in any case the example can be seen as a general paradigm, which can be called the mechanism for rule formation (or the principle for the generation of rules) in redundant systems.
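The reduction pattern itself - first element plus the weak ending "-eren" - can be sketched in a few lines of Python. This is a hypothetical reconstruction: the consonant-doubling heuristic is mine, not a full account of Danish spelling, and the real conditions for applying the rule are, as just noted, extralinguistic.

```python
# Sketch of the reduction: first compound element + weak ending "-eren".
# The doubling of a final consonant after a vowel (Frit -> Fritteren,
# Tryk -> Trykkeren) is a rough orthographic heuristic for this example.
VOWELS = set("aeiouyæøå")

def reduce_compound(stem):
    """Return the reduced form: stem + '-eren', doubling a final
    consonant that directly follows a vowel (short-vowel spelling)."""
    if (len(stem) >= 2
            and stem[-1].lower() not in VOWELS
            and stem[-2].lower() in VOWELS):
        stem += stem[-1]
    return stem + "eren"

for stem in ["Børn", "Døgn", "Frit", "Tryk"]:
    print(reduce_compound(stem))
# -> Børneren, Døgneren, Fritteren, Trykkeren
```

The point of the sketch is what it cannot express: nothing in the function says *which* compounds the rule may be applied to - that selection rests on familiarity, not on any formal condition.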
We can describe this mechanism as a process developing through three steps (general principles for the formation of rules in redundant systems):
1: The establishment of a new expression form, a new pattern.
In some cases new forms can be established by legitimating formerly - noisy - varieties as independent forms. The ultimate limit to the establishment of new forms can be given in the physical substance used, and/or in a set of more or less well-defined physical and/or constitutional criteria for legitimate forms in a given symbol system. A main point is that new forms can be legitimated as such with or without a specific semantic content.
2: The repetition of the form - changing it from new meaning (e.g. theme) to an expression form for redundant information (e.g. rheme).
3: The use of the form as a rule, i.e. connecting the form with a regulative function.
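The three steps above can be caricatured in code. The sketch below is entirely my own schematic, with arbitrary thresholds: a form is "new" at its first occurrences, becomes "redundant" through repetition, and after sufficient repetition acquires a regulative status.

```python
from collections import Counter

class RedundantSystem:
    """Track how often each expression form has occurred and report
    which of the three steps it has reached. The thresholds are
    arbitrary illustration values, not empirical claims."""

    def __init__(self, redundant_after=3, rule_after=10):
        self.counts = Counter()
        self.redundant_after = redundant_after
        self.rule_after = rule_after

    def observe(self, form):
        self.counts[form] += 1
        n = self.counts[form]
        if n < self.redundant_after:
            return "new"        # step 1: a new pattern is established
        if n < self.rule_after:
            return "redundant"  # step 2: repetition; the form carries familiarity
        return "rule"           # step 3: the form acquires a regulative function

system = RedundantSystem()
for _ in range(10):
    status = system.observe("Døgneren")
print(status)  # -> rule
```

What the caricature deliberately leaves out is the decisive part: in the text, the transition from step 2 to step 3 is a matter of usage and acceptance, not of a counter crossing a threshold.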
This should be an obvious demonstration of the basic mechanism in redundant symbol systems. I do not think it proves that language as such is not a rule-based system but one based on redundancy structures of this type (although it contains many rules); but since time is scarce, I will only state that we can find the same mechanism at work at every level of language, whether the notational and syllabic, the syntactic or the semantic, including the stylistic level.
This is one of the reasons why I see ordinary language as based on redundancy - and redundancy as a precondition or resource for generating meanings as well as new rules.
Another - but connected - reason would be the existence of over- and underdetermination, interferences between rules, the lack of rules for regulating relationships between overlapping rules, and so forth - phenomena often described as marginal, and expressed for instance in the phrase: no rule without an exception. This is itself a rule which can be applied to a very high degree in all linguistic matters.
However, it should be stressed that redundant systems do allow the formation of rules, as a means of stabilization. But the point is that the description of language (and other symbol systems) as based on redundancy implies that the establishing of rules is seen as part of the usage, including its acceptance, i.e. that the formation of rules is an integral part of the use - contrary to a description of language as a rule-based system, in which the rules are supposed to be given as invariants, somehow given from the outside.
In a broader perspective we could say that one of the basic reasons why language has to be based on redundancy is inherent in the functioning of language as a mediator between senders and receivers who are not - and cannot be - fully synchronized with each other. (One could also ask: why communicate at all if they were synchronized beforehand?)
Instead of going further into this I shall now give a general definition of the concept of redundant systems - stressing the generative potential which is often overlooked, if not totally excluded (as is the case, for instance, in Shannon's use of the concept).
In common use, redundancy denotes the repetitive occurrence of patterns which have no function or meaning - and hence patterns which could just as well be left out. That is: a passive, more or less irrational phenomenon.
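Shannon's quantitative version of this passive notion can be stated compactly: redundancy is one minus the ratio of the observed entropy to the maximal entropy of the alphabet in use. A minimal sketch, offered only to fix the concept being contrasted here:

```python
from collections import Counter
from math import log2

def redundancy(text):
    """Shannon redundancy: 1 - H / H_max, where H is the entropy of the
    observed symbol frequencies and H_max the entropy of a uniform
    distribution over the symbols actually used."""
    counts = Counter(text)
    if len(counts) < 2:
        return 1.0  # one repeated symbol: maximally redundant
    n = len(text)
    h = -sum(c / n * log2(c / n) for c in counts.values())
    return 1 - h / log2(len(counts))

print(round(redundancy("abababab"), 2))  # 0.0 - both symbols equally frequent
print(round(redundancy("aaaaaaab"), 2))  # 0.46 - "a" dominates
```

On this measure redundancy is purely a deficit of information - precisely the view of redundancy as a passive surplus that the following paragraphs argue against.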
Contrary to this, it can be shown that redundant structures have important functions and are used for many purposes, not only in ordinary language but also in computers and in any other known use of physical patterns as carriers of symbolic content.
The basic reason, it seems, is that systems based on the use of redundancy possess a set of mechanisms for semantic variation which cannot be found in strictly rule-based systems. This set of mechanisms consists basically of four axes of variation, as specified in the following overhead:
Redundant systems (IV): The four axes of variation:
1) The axis of variation of physical form as legitimate physical form - relative to the substance (new forms, variation of existing forms). For instance: the level of basic notation (in symbol systems using notations), whether alphabetical, binary or other forms.
2) The axis of variation of structural relations between legitimate forms or patterns. The levels of constellations in syllables and syntax in language; the level of the ASCII codes and algorithms in computers.
3) The first axis of variation at the level of semantic content: the level of weakness-strength of a given content expressed. Variation on this axis can be both continuous and discrete in ordinary (oral) language. These variations are not expressed (but presupposed) in written manifestations, while only discrete variation (according to selection on a scale) is possible in computers. However, discrete variations can be approximated to nearly continuous ones, at least to the human sense organs.
4) The second axis of variation at the level of semantic content:
- as change of the content of a given form (different from change of the semantic strength)
- as the transition from a first manifestation as a legitimate form with a new meaning, to the repetitive use of the new form - either as a change of meaning or in a regulative function
- changing the content of the form from new meaning to conventional rule (e.g. syntactically stored content)
I would like to give the more basic and general argument for the case of computers and informational processes too. I think it can be done by recalling that any rule processed in a computer has to be represented as a string of bits, implying that the rule is represented in exactly the same physical and symbolic form, and treated in exactly the same way, as all other kinds of data.
In the daily use of computers we do of course depend on the fact that the rules are actually followed, but the main point is that they can only be followed and executed because they are represented in a form which is itself independent of the content or function of the rule (i.e. as binary strings in which any single bit can be manipulated independently). Hence we are able to change the rule, the way it is executed, and the effects (the meaning and function) of the rule quite independently of the original intentions, of the programmatic structure, and of any other previously determined element.
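The point that a rule is stored in the same form as data, and can therefore be manipulated as data, can be illustrated schematically - here in Python source text rather than at the bit level, purely for readability; the rule and the edit applied to it are invented for the example.

```python
# A rule is represented in the same form as any other data (here a string
# of source text, standing in for a string of bits) and can therefore be
# edited by the same operations that edit data, independently of what the
# rule means or was intended to do.
rule = "def apply(x): return x + 1"

ns = {}
exec(rule, ns)
print(ns["apply"](41))   # 42: the rule is followed as given

# Manipulate the rule's representation exactly as one manipulates data:
changed = rule.replace("+ 1", "* 2")
exec(changed, ns)
print(ns["apply"](41))   # 82: same machinery, altered rule
```

Nothing in the editing step "knows" that it is changing a rule rather than any other string - which is the sense in which the rule's representation is independent of its function.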
As in the case of ordinary language this is not a marginal phenomenon; it is the basic principle of the universal computing machine, because the universality of the machine depends on the fact that the machine is not itself determined by any symbolic rule, since that would deprive the machine of its universality and restrict its use to a limited set of rule-based formal procedures.
Hence we can say that the computer is a machine in which there exists no invariant border between the machine and the material which can be processed in this machine, between programme and data, or between the knowledge implemented in the functional - symbolic - architecture and the knowledge processed.
Since the rules are here given in the same terms as the ruled, and the rules are the result of a process which proceeds through series of singular and distinct (optional) steps, they can also be changed during the process. They can be changed, modified or even suspended in form as well as function - subject only to our own mental capacities and to the restriction contained in the necessity of using an alphabet of semantic units without any semantic value of their own.
I shall now specify the main capacity of redundant systems in these respects: rule variation. A basic principle in this is the need for a concept which can allow stability without breakdown in cases where there is no rule.
I will now move on to the interpretation of the functioning of our brains and minds.
The reason for doing so is that I believe we need a concept of our mental capacities which allows us to perform symbolic activities that are difficult, if not impossible, to comprehend if we understand the relation between brain and mind as rule-based.
In the case of computers we know that the machine cannot handle such situations itself. If no instruction is given, the process stops immediately, waiting for an external operator to provide the next instruction. Here we have a basic difference between computers and humans, since we are able to handle such indeterminate situations. In fact we are not able to stop at all, although we know of cases in which it seems as if we are only moving in circles.
This is not the only difference between computers and human consciousness. There is another difference which has not been discussed very much, in spite of many years of arguing around the concept of artificial intelligence. That is the difference between humans, who at some point in history have been able to establish their own symbolic units (i.e. to code a physical form as symbolic), and computers, which are not able to do so, since a computer is not able to create or define its own units of expression. These have to be defined before the machine can be built.
There has been much debate on the question whether computers possess the ability to generate new symbols from existing ones - as we know we ourselves do - but there has been no mention of the basic criterion: the ability to create the first symbol. While we can explain many of our own symbols as generated from older symbols, we cannot explain all our symbols in this way, and especially not the first ones.
Hence we can say that the ability to create our own means of expression - even if it is itself unexplained, as is the case - has to be part of our concept of mind and brain. But if so, we can also add that this ability cannot itself be derived from any symbolic rule, since the establishing of such rules is preconditioned by this very ability.
I have pointed out three cases of indetermination concerning the competence to create symbols: the ability to create the first symbols; the ability to handle indeterminate situations in which there is no rule for proceeding; and, finally, the fact that we are able to recode the relation between physical form and symbolic content at any known level.
Why, then, should this not also be true in the - as yet unknown - case of the mind?
Traditionally, these cases have been marginalized by saying that it is only because we do not know the functioning of the neurophysiological system, and that when we know it, we will be able to give it a rule-based explanation. This, however, cannot be true, because we actually do know that it is not possible to give a purely physiological or physical definition of symbolic units without referring to a competence to create symbols - that is, the competence to decide whether a physical or neurophysiological form (a physical difference, continuous or discrete) actually carries a symbolic/mental content or not.
On this basis, I think it is reasonable to suggest that the relation between the biological (the physical and neurophysiological levels of existence) and the mental is to be understood as an interface in which the neurophysiological system provides the basic physical and physiological features and structures which are exploited by the mind as a redundancy system.
This implies, among other things, that the relationship between these levels allows
Much of this, I believe, depends on a difference whose existence I cannot prove, but which I find strongly supported by indications: namely, that our own consciousness is probably not incorporated in one overall synchronized system, as is the case in computers. While a computer is only able to function because it is synchronized at the basic physical and symbolic level - that of the bits - it seems that we ourselves are only able to function - and overcome indeterminate situations - because we are not restricted to the use of one such system of basic synchronized forms. One of the reasons for this could be that we are able to create such units - probably in many different and unsynchronized forms and relations.
So if there should be something like an artificial thinking machine with competences matching our own, it should - I think - be able to handle indeterminate situations, and I believe this would imply that it was able to create its own new symbolic units in a deliberate way.
At least we can say for sure that the existing models of rule-based machines and minds are candidates neither for consistent models of our own minds nor for the creation of intelligent machines.
Appendix 1: On four different forms of the question of origin:
1) as the question of the possible origin of the universe - i.e. the question whether there is something which can be called the origin of the universe in any reasonable sense.
Many scientists today seem to believe in the idea of an origin, but the most reasonable interpretation would be that the big bang - or any other idea of an origin - only marks a beginning of what can be observed, since the concepts of time and space, mass and energy (and information) run out at that point.
2) as the question of the origin of life, which according to what we know seems to have taken place many years later than the possible origin of the physically observable universe. Hence the origin of life cannot be taken for granted in the same way as the origin of the universe, since the former has taken place in the course of physically observable time and space. The origin of life might be directly connected with, or even identical to, the origin of a - probably very elementary - coding competence.
3) as the question of the origin of human consciousness, symbolic communication, human languages, culture and society, which have taken place much later than the assumed origin of life.
These are the three basic questions of origin, related to different domains, to which we traditionally apply different kinds of explanations. The reason for this can be found in 19th-century theories (of physics, geology and biology), according to which the Christian idea of the unity of creation "in the six days of the beginning" could not be maintained.
However, in 20th-century theory we have seen a new framing of these questions, in which they are brought into a new kind of unified theory - or at least framed in a unified way:
4) i.e. as the question of the origin of any form, structure, pattern and rule at any level in nature - whether physical, biological or mental - including the formation of these levels as relevant concepts.
My purpose in giving these four specifications of the problem of origin is primarily to pose the question whether the latter can be understood as the most general framing, within which we can also specify the other framings as specific manifestations. In doing so, I am in accordance with a major tendency in 20th-century science, i.e. the formalist epistemological trend towards specifying abstract models independent of the material substance of the subject area (which is assumed to be an amorphous substance).
Among the theories forwarding these ideas we find a group of neomechanical theories (cybernetics, AI, cognitive science, and the use of mechanical theories in biology and sociology), other formalistic theories in logic and mathematics (including some kinds of information theory), structuralist theories in the humanities, etc.
However, I am only following these theories in an attempt to go beyond the basic model, because
a) it does not allow us to conceive the question of the formation of form in a way which can explain the emergence of new levels. Although the existence of different levels is accepted, they are at the same time taken for granted as coexisting - and seen in retrospect;
b) it leaves us with a model of the universe as an aggregation of infinitely many distinct, internally closed systems - each based on its own autonomous set of rules. Although the different systems are sometimes seen as systematically connected, they are only seen so as fully developed systems, in a synchronic and synchronized perspective;
c) and finally, the model does not give an account of the differentia specifica of the formation of forms at the different levels, i.e. physical, biological and mental. As a consequence it cannot explain why there are different levels at all, or describe these differences - why a certain structure, for instance an algorithm, should in some cases be seen as a physical, in others as a biological or a mental structure.
One could also state these objections in another way, i.e. as the question: how can we bring the idea that biological and mental systems are produced in a physical universe together with the idea that the physical, the biological and the mental are distinct, internally regulated levels, each functioning on its own conditions?
Appendix B: On Simon & Newell's "physical symbol theory", or the computational model of mind.
The main idea of this theory is to explain how the mind - interpreted as goal-directed consciousness - can execute its own intentions and goals as a physical process, which in their terms means a mechanical one.
As they write themselves, such physical symbol systems
... clearly obey the laws of physics - they are realizable by engineered systems made of engineered components ...
And they define such systems by saying that a physical symbol system consists of
This is a nice and simple definition - very similar to the mechanical theory of Newton, the only difference being that the physical "atoms", the basic movable entities and patterns, are now provided with some kind of meaning and content. They are symbolic entities.
According to Simon and Newell, this symbolic character is a property belonging to the system itself - and the system can generate new symbolic sentences with the help of the rules.
But although we can derive many symbols and new rules from other and former symbols and rules, we cannot derive the first ones in this way.
Simon and Newell have no answer to this. They take the existence of physically manifested symbols and the general rules for granted as an unexplained precondition in their computational model of the mind.
In the case of the computer it is not difficult to see that these preconditions are provided by the human creators of the system. As a consequence we can conclude that the human creators possess a capacity external to the computational model, that is, the capacity to define the first symbols - ie: the physical units used in the mechanical performance - and the rules of the very same model.
It should also be noted that the physical symbol theory is only physical in the sense that the basic units are defined as physical forms. The rules governing the system are not only given outside the system, they do not have a physical expression themselves.
This is contrary to the situation in computers, since the rules in this case actually need to be expressed in the notational system in exactly the same physical form as the data, implying that rules can only be carried out as the result of the mechanically performed process. But it is very much in accordance with older, more traditional ideas of rules and laws - in the theory of Newton these were given by God. When we leave out such preconditions - and we are forced to do so when we are going to explain the origin of life and consciousness as processes which have taken place in the history of physically observable nature - we also have to ask ourselves how such rules and codes can arise, ie: we need to ask how we can explain the existence of goals, intentions and rules for symbolic activity and other mental phenomena.
The basic shortcomings of the fourth and abstract (or formal) framing of the question of origin - as manifested in the computational model of the mind - can be shown by asking the following questions.
1. How does the computational model of the mind explain the difference between a physically manifested symbol and a physically identical form which is not symbolic?
2. How does the computational model explain the origin of the first symbols? And the origin of the first symbolic rules?
3. How does the computational model explain that humans are able to handle ambiguous situations to which there are no solutions through a series of finite steps?
4. How does the computational model explain relations between systems which are not synchronized at the level of mechanical execution?
Since none of the different models and theories of the computational mind gives an answer to any of these questions, one should of course keep in mind that the reason could be that they don't accept the questions as real problems - or that they see them as problems which cannot be given a reasonable answer.
Appendix 3: Binary notation as used in computers compared to the principles of formal notation systems.
While many people seem to believe that the binary notation system is a kind of formal notation system - for instance a specific utilization of the binary number system - it is easy to show that this is not the case, since formal notation systems can be characterized by the following features:
1) any notation in a formal system is defined by a basic semantic value, which is kept invariant unless a new value is defined and specified. The definition holds even for notations defined as variables, since such definitions imply a definition of the range or type of allowed variation.
2) the value of a notation needs to be either the value of a functional rule or the value of data.
3) In a formal system it is possible to bring in new (properly defined) notations at any time if necessary.
4) A formal system is defined by the existence of a given (selected and specified) set of general rules.
5) Formal systems are polysemic insofar as the same syntactical structure can be interpreted as the expression of different and distinct semantic contents.
We can illustrate these features in the use of the binary notation as a number system:
Here the basic values of /0/ and /1/ are those of the ciphers according to the general rules for positioning ciphers in the binary number system.
The two ciphers in the system have always and only the value of data, while the operators of such expressions are expressed by the use of other notations (or presupposed and not explicated, or manifested in the graphical positioning). There are no specific limitations on the choice of operators and hence on the number of allowed notation units in such expressions.
Whenever the two digits are manifested, they are manifested with a rule-based semantic value. The same expression (eg: an addition) can be interpreted as polysemic, since the result can be an expression of quite different phenomena (the addition of amounts, whether physical or mental, depending on the chosen context).
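The positional rules referred to here can be sketched in a few lines of code (the use of Python is my illustration only, not part of the original argument): each cipher carries a rule-based semantic value determined by its position in the expression.

```python
# Illustration: in the binary number system the semantic value of each
# cipher is fixed by general positional rules.
def binary_value(digits: str) -> int:
    total = 0
    for position, cipher in enumerate(reversed(digits)):
        total += int(cipher) * 2 ** position  # value depends on position
    return total

print(binary_value("101"))  # 1*4 + 0*2 + 1*1 = 5
```

The same two ciphers thus always appear with a semantic value prescribed in advance by the rules of the system.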
None of this holds true for the use of binary notation in computers.
In this case the same two units are exploited in a quite different way.
1) The two units are defined as physical forms without any kind of semantic content except their general legitimacy as the basic notational units. There can always and only be two (or another fixed number) of expressional units.
2) the same two units are both used as parts of expressions of data and as parts of functional rules. Anything represented in a computer has to be represented with the help of these two units and those alone. Accordingly, meaning is always and only ascribed to strings of bits and never to any single bit. However, the change of a single bit can change the meaning of the whole expression, as is the case in the ASCII code.
3) it is not possible to bring in new notational units during the operations - whether during a single process or during the whole lifetime of the machine.
4) There are no general rules for the use of the notations except the demand that they function as mechanical agents in a purely physical way.
ie: there are no general syntactical rules which cannot be suspended or substituted by quite different syntactical rules for the use of the two notation units.
5) The binary notation as used in computers is not only polysemic in the same sense as the binary number system, but also in the sense that any new expression can be interpreted independently of the former steps.
So, the informational notation system comes closer to the alphabetic system than to any kind of formal notation system, and the informational notation represents at the same time a system which can be changed both from beneath (the physical level of impulses, whether intended or not) and from the top (governed by symbolic purposes).
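Points 2) and 5) above can be illustrated with a small sketch (the choice of Python and of the ASCII interpretation is mine, for illustration only): meaning is ascribed to the string of bits as a whole, the change of a single bit changes that meaning, and the same string can be read under different interpretations.

```python
# Meaning belongs to the whole string of bits, never to a single bit,
# and depends on the interpretation chosen - here: number vs. ASCII.
bits = "01000001"
as_number = int(bits, 2)   # read as a binary number: 65
as_char = chr(as_number)   # read as an ASCII code: 'A'

# Changing one bit changes the meaning of the whole expression:
flipped = "01000011"       # a single bit changed
print(as_number, as_char, chr(int(flipped, 2)))
```

The same physical string of units is thus open to distinct semantic ascriptions, none of which is prescribed by the units themselves.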
Appendix 4: Three kinds of noise.
In the above description (app. 3) of the differences between the use of binary notation as a formal notation system (binary numbers) and in computers, I only hinted at the two different ways of organizing the relation between the physical and the symbolic in the two systems, saying that formal notations are always defined by semantic values, while the informational notation units are defined by physical values (which need to be combined with a criterion for legitimate occurrences of a given form as distinct from illegitimate occurrences of the same physical form).
As a consequence, it is of no importance whether we define a formal notation in a physically precise form; we can use a lot of different physical forms to express the same semantic content. We do not even need to use totally identical forms for the same notation, the limit of variation being our competence to identify a set of different physical patterns as expressions of the same semantic content - as we know it from differences in handwriting.
The effect of this, of course, is that the identification of the semantic values depends on our own mental capacities and knowledge of possible intentions. As a consequence, we can say that we possess the competence to categorize physical differences according to a distinction between the importance or non-importance of such differences. What we see here is a capacity to decide - and ascribe semantic content to some physical forms - by selecting between important and nonimportant physical differences. But this is not the only thing we can see, since this capacity can also be used to change the borderline by decision - ie: taking in physical differences which were not formerly used as new symbolic patterns.
I think this is rather obvious, but what about the relation in the computer? Here we have another situation, since the functioning of the computer is based on the restriction that it can only use two physical patterns, those of the two digits, and they must be defined as part of the invariant machine.
But as said, they are also defined independently of semantic content, and hence we need to ask how it is possible to define a physical form as a symbolic form in this case.
We can divide the question in two, first asking whether it is possible to do so from the outside, with the help of our own mental capacities, and secondly, whether it is also possible to do so without the help of these.
The first question is rather simple, since we do have computers in which we can ascribe semantic content to strings consisting of these two digits. And as we know, we can also use these digits to perform symbolic processes in a purely mechanical way.
The second question is more complicated, but to see this, I will reformulate the question to the form: Is it possible to define a symbolic form on purely physical criteria, eg: by defining a physical value/form or pattern?
The answer to this question is no, it is not possible. The reason can be found in Claude Shannon's mathematical theory of information, in which he treats the problem of noise in mechanical symbol systems.
As can be seen in his theory, there are three different types of noise, or rather the problem of noise needs to be considered and treated in three distinct perspectives.
The first is the need to define physical values which allow the identification of a symbol relative to the background. That is, the specification of identifiable physical forms or patterns in the substance used. Different substances allow different patterns; it is not easy to keep symbolic structures permanent in the medium of water. Nor should it be easy in electricity, yet it is possible, and we could no doubt also organize water in the same way.
The second is the need to define the physical values in a way which makes them distinctive relative to each other - in the same substance. We do not need to use exact physical criteria to do this ourselves, but when it comes to the computer we need to do so, because the units must operate as mechanical triggers in a totally predictable way.
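This second aspect can be sketched as follows (the threshold values below are hypothetical, not taken from any actual machine): the two units are defined by exact physical criteria, for instance voltage ranges, so that each occurrence triggers mechanically in a predictable way.

```python
# Hypothetical physical criteria for the two notational units:
# voltages at or below LOW_MAX count as /0/, at or above HIGH_MIN as /1/.
LOW_MAX = 0.8    # assumed threshold (volts)
HIGH_MIN = 2.0   # assumed threshold (volts)

def classify(voltage: float) -> str:
    if voltage <= LOW_MAX:
        return "0"
    if voltage >= HIGH_MIN:
        return "1"
    return "undefined"  # physically ambiguous region between the two

print(classify(0.3), classify(3.1), classify(1.5))
```

The point is only that the criteria are purely physical: nothing in the thresholds themselves carries semantic content.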
Both these aspects are well known, and they seem to indicate that we are free to define symbols on a scale from the purely physically defined to the purely semantically defined. If so, it would also be possible to maintain the idea that the development of symbols could be explained as a gradual development leading from originally physically defined symbols to more elaborate and semantically defined ones. The only question left would then be the origin of the first physically defined symbol.
However, this cannot be so, which becomes clear when we take the third aspect of noise into account. That is the question: how is it possible to distinguish between a physically defined symbol and the occurrence of a physically identical form as a mere physical form?
As far as I know, this problem was first addressed in Shannon, 1949. He did so because he needed to find a way to decide whether a received bit - let's say a /1/ - actually was intended - and part of the message sent - or whether it was changed from a /0/ to a /1/ during transmission, as a result of influencing noise in the channel.
In this case of mechanical transmission Shannon had no way to rely directly on our own mental capacities - especially because the main idea was to reduce the transmitted messages by eliminating as many parts of the message as possible, including those parts we ourselves would use to interpret the message.
Although Shannon found a way to solve this problem - by adding a formally defined control sequence to the message - he could not do so by a physical definition of symbolic units; he had to use semantic criteria to establish the distinction between legitimate and illegitimate occurrences of the same physical forms (or values).
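A minimal sketch of such a control sequence - far simpler than Shannon's actual coding schemes, and offered here only as an illustration - is a single parity bit: a formally calculated value appended to the message, by which the receiver can test the legitimacy of the received bits.

```python
# Minimal sketch: a parity bit as a formally defined control sequence.
# This is a simplification of Shannon's approach, for illustration only.
def add_parity(bits: str) -> str:
    parity = bits.count("1") % 2        # calculated from the message
    return bits + str(parity)

def is_legitimate(received: str) -> bool:
    # A legitimate transmission has an even number of 1-bits in total.
    return received.count("1") % 2 == 0

sent = add_parity("0110")   # "01100"
noisy = "01110"             # one bit flipped by noise in the channel
print(is_legitimate(sent), is_legitimate(noisy))
```

Note that the test is semantic, not physical: the flipped bit is a perfectly well-formed physical /1/; only the formally ascribed relation between the units reveals it as illegitimate.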
I shall not go further into the detailed analysis of this, but only stress that it is not a specific problem for a single class of symbol transmission systems, but a general problem which has to be solved in any kind of symbol system.
The reason is that we can only use physical forms as symbols if these forms can exist without being symbols - whether they are provided by nature without human intervention or with its help.
Appendix 5: Redundancy in information systems.
Different kinds are used, although not explicitly described, in Shannon:
I: As opposed to meaning.
a) Repetitive patterns without meaning/function
Meaning = the whole message including rules necessary for the interpretation.
b) Patterns belonging to the system (constants)
Meaning = the new information, defined as the part of the message which is distinct from other possible messages in the same language. (Ie: excluding expressions of rules belonging to the language structure).
c) The amount of possible alternatives to a specific message in a given language.
Meaning: as in b).
II: As independent of meaning.
Recurrent patterns (statistically defined) whether part of the system or of the specific information/meaning.
III: As meaning.
Calculated information added to a message to verify the legitimacy of any symbol - contrary to the possible, non-intended occurrence of the same physical form as noise. Meaning = The calculated - redundant - information is calculated by interpreting a given sequence of notation units as a formal expression (ie: ascribing a formally defined semantic content by which the legitimacy of the individual units can be verified by comparison with the ascribed "values" of other units). Redundancy of this type is only redundant at the semantic level of the original message, since it is a specific sequence which has to be added to the message transmitted (instead of being eliminated) and is necessary for the verification. It forms a distinct or specific part of the transmitted message, ie = information.
Elimination of redundancy type II is only possible by adding redundancy type III. Ie: by substituting semantically defined redundancy for statistically defined redundancy - the latter being defined at the level of notation units.
Since redundancy type III is defined in a formal semantic, it can be defined independently of the semantic content of the message. Ie: it can be added before and eliminated after the transmission without influencing the content of the message at all.
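This property can be sketched as follows (a simple modular checksum stands in for the formally defined redundancy; the scheme is illustrative, not Shannon's): the calculated sequence is appended before transmission and stripped after verification, leaving the content of the message untouched.

```python
# Sketch: type III redundancy is added before and removed after
# transmission without influencing the content of the message.
def encode(message: bytes) -> bytes:
    checksum = sum(message) % 256       # formally calculated redundancy
    return message + bytes([checksum])

def decode(received: bytes) -> bytes:
    message, checksum = received[:-1], received[-1]
    if sum(message) % 256 != checksum:
        raise ValueError("illegitimate occurrence - noise detected")
    return message                      # content unchanged

original = b"message"
assert decode(encode(original)) == original
```

The checksum is defined independently of what the message says, which is exactly why it can be discarded after use.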
Conclusions concerning Shannon
1) It is possible to substitute formally defined redundancy on the semantic level for (statistically) defined redundancy on the notational level.
The (economical and effective) point being that the former can be shorter than the latter.
2) Some kinds of redundancy are always needed for the communication of messages - even in the case of physically precisely (nonambiguously) defined expressions.
The basic necessity stems from the fact that any physical form which can be used as a symbol/notation unit (of type information) can always exist as a mere physical form (of type noise).
3) It is possible to substitute semantically defined redundancy for redundancy in the physically manifested expression.
Redundancy (according to Greimas and Courtès)
(In linguistic systems:) Recursive patterns of some - not yet defined - importance for the internal organization of meaning.
Recursions (redundancy) as a mechanism for variation in language are in use on all levels: notational, syllabic, syntactic and semantic, including stylistic variance. Redundant patterns on one level can be used in different ways:
I: As a means to stabilize a level relative to another level, eg: syllables to stabilize the use of letters, or syntactical forms to stabilize meaning on the semantic level etc.
II: As a repertoire of forms from which new varieties can be created (pattern deviation)
III: As a repertoire of forms which can be taken into use - to express a new meaning or new aspects of meaning, or to ascribe a new regulative function.
Finnemann, Niels Ole, 1994. Tanke, sprog og maskine. En teoretisk analyse af computerens symbolske egenskaber. Akademisk Forlag, Aarhus, Denmark. English edition Thought, Sign and Machine - the computer reconsidered forthcoming.
Greimas, A. J. & Courtès, J. (1979) 1982. Semiotics and Language. An Analytical Dictionary. Indiana University Press, Bloomington. French orig.: Sémiotique. Dictionnaire raisonné de la théorie du langage.
Newell, A. & Simon, H. (1976) 1989. "Computer Science as Empirical Inquiry: Symbols and Search". Reprinted in Bannon & Pylyshyn (eds.) 1989. Perspectives on the Computer Revolution. 2. ed. Ablex, Norwood, New Jersey.
Shannon, Claude, (1949) 1969. The Mathematical Theory of Communication. Univ. of Illinois Press, Urbana.
Concerning the concept of redundancy, one might add that it is always a phenomenon presupposing an observing and interpreting mind to whom something can be redundant, implying that redundancy is also always relative to something more distinctive. That is: as a difference which is minor to another in some respect. Hence one might conclude that if there is distinct meaning, there is also redundancy of some kind. It might also be noted that the only difference between a redundant pattern and a "structure" is the function of the recurrent pattern; if redundant, it might have no function at all, except that of potential functions in the past or in the future, while "structures" means patterns which actually have an organizing function.
It can be doubted, I think, that it is possible to define the brain without recourse to properties belonging to the mind, since it is not easy to see what constitutes the brain as a phenomenon (a separate entity or system) in itself, independent of the mental processes. Mental processes seem to be those which allow the separation of brain functions (or neurophysiological processes) from other bodily processes.