From rwhit@cs.umu.se Sat Mar 27 13:41:36 1993
Received: from jupiter.cs.umu.se by world.std.com (5.65c/Spike-2.0)
	id AA03404; Sat, 27 Mar 1993 13:41:33 -0500
Received: by jupiter.cs.umu.se (5.61-bind 1.5+ida/91-02-01)
	id AA18686; Sat, 27 Mar 93 19:41:12 +0100
Return-Path:
Date: Sat, 27 Mar 93 19:41:12 +0100
From: rwhit@cs.umu.se (Randall Whitaker)
Message-Id: <9303271841.AA18686@jupiter.cs.umu.se>
To: ONAVARRO@UCRVM2.BITNET, omarti@tifton.cpes.peachnet.edu,
	ormohrbh@ubvms.cc.buffalo.edu, palmer@world.std.com
Status: RO
X-Status:

==========================--------------------============================
[{{{{{{{{{{{{{{{{{{{{{(  T H E   O B S E R V E R  )}}}}}}}}}}}}}}}}}}}}}}]
==========================================================================
|   --------<< An Electronic Forum for Those Interested in: >>--------   |
|           << Autopoiesis & Enactive Cognitive Science >>               |
__________________________________________________________________________
[ _____________  Number 5:  Issue date = 28 March, 1993  ______________ ]
[ CONTACT ADDRESS for subscriptions, submissions, etc.:                  ]
[   RANDY WHITAKER                                                       ]
[   Informationsbehandling / ADB, Umea University, 901 87 Umea, Sweden.  ]
[   Telephone: (+46) 90 16 61 77  /  Fax: (+46) 90 16 61 26              ]
[   Email: rwhit@cs.umu.se                                               ]
==========================================================================

{{{{{{{{{{{{{{{{{{{{{{{{( NOTES FROM THE EDITOR )}}}}}}}}}}}}}}}}}}}}}}}}}

Welcome to issue no. 5 of _The Observer_. In this issue, we'll get started with responses to Barry McMullin's questions from issue 3, meet David Vernon, and air some questions.

First, though, I am pleased to report that in the wake of remarks in a Net news group and a couple of targeted announcements in related mailing lists, there has been a surge in new subscribers to our autopoiesis / enaction forum. Last week alone, our 'population' more than doubled. The new subscribers range around the planet and across many research fields. Many report prior work or interests in our focal area, and I would like to welcome their experience as well as solicit their contributions to the forum. Others are newcomers to autopoiesis and/or enactive cognitive science, and I would like to welcome their interest and express my hope that they will find the forum informative and useful.

Finally, an unexpectedly large proportion of the new subscribers have commented that they thought they were the only one(s) interested in this, that they were working in isolation, etc. I have known that feeling well. I guess that's why I brought up the subject of an ongoing autopoiesis forum at the 1992 Dublin conference on Autopoiesis & Perception. When Francisco Varela said there wasn't (nor had there ever been) such a forum, it confirmed my fear that isolation was the norm for the autopoiesis aficionado. His suggestion that I try doing something about it reminded me of the dangers inherent in opening one's big mouth [ ;-) ] and made me question the wisdom of having brought it up. Now that _The Observer_ is gathering momentum, I trust that isolation need no longer prevail. And now that the isolation is dissipating, I'm glad I opened my big mouth.

-- R.

{{{{{{{{{{{{{{{{{{{{{{{{{{{{( INTRODUCTIONS )}}}}}}}}}}}}}}}}}}}}}}}}}}}}}

EDITOR'S NOTE: Personal / professional introductions are a good way of getting to know each other and of outlining the range of interests in this forum. All such introductions are welcome. Today's subject is:

=== DAVID VERNON ===

Background.
-----------
Having completed a Ph.D.
on robot vision in 1985, I could not help but feel somewhat disenchanted with the emerging computational theories of vision at the time. This disenchantment arose not from a frustration with the _usefulness_ of the discipline but with its (apparent) shallowness: we could do simple things well, but the approaches did not scale well when attempting to deal with much more complex things (flexible objects, ill-defined environments, uncertainty, natural variability, and so on). In the same year, I was given a copy of Varela's "Principles of Biological Autonomy", and there began a lengthy and enjoyable, if sometimes confounding, quest to understand, work with, and develop the theory.

What happened next.
-------------------
Although I didn't know it at the time (or didn't have the wherewithal to articulate it), it was the representationalism inherent in conventional computer vision which jarred so much. Autopoiesis seemed to offer not so much an alternative as an approach which was _premised_ on a sounder foundation: specifically, the concept of self-organization. This foundation is, if nothing else, less pejorative in that it seems to assume less about the domain of discourse (the system environment) than did (and does) representational vision.

Since then, my research has proceeded along two parallel paths. On the one hand, I have been developing an understanding of the philosophical (ontological) foundations of autopoiesis, and this has led me directly (almost) to phenomenology and away from idealism and realism. On the other hand, I have been attempting to develop a computational simulator for autopoietic systems which is grounded in the "real" world, i.e. it should interact with the environment with which you and I are familiar. In more specific terms, my work is concerned with identifying, in a prescriptive manner, the requirements for any instantiation of autopoietic organization, i.e., with specifying the structural conditions necessary for the actualization of an autopoietic system. Significantly, it is the symbiosis of the two paths that has been the most satisfying (and potentially fruitful) aspect of the work, in that I wish to see what "additional" considerations must be addressed if we are to go beyond autopoiesis to more sophisticated autonomous systems. All of the work has been founded squarely on Varela's theories; I have tried to exploit Spencer Brown's Calculus of Indications and Varela's extensions as my working formalism, and I have tried to incorporate Bennett's "Systematics" of multi-term systems to develop the work. So far, I have made no fundamental breakthroughs, but at least now I think I know what I am trying to do!

Collaboration.
--------------
All of the work I outline above has been done with Dermot Furlong in the Department of Microelectronics and Electrical Engineering, Trinity College, Dublin.

Who I am.
---------
I have been a lecturer in the Department of Computer Science, Trinity College, Dublin since 1983, and I am at present on a career break in the Commission of the European Communities (DGXIII). My e-mail address is dve@dg13.cec.be

-- David Vernon

{{{{{{{{{{{{{{{( AUTOPOIESIS AND ARTIFICIAL LIFE (A-LIFE) )}}}}}}}}}}}}}}}

SOME RESPONSES TO BARRY McMULLIN'S QUESTIONS

In issue no. 3 of _The Observer_, Barry McMullin (Dublin City University: personal summary in issue no. 2) offered some questions to get a thread started on how to apply the principles of autopoiesis to self-* (* = organizing; reproducing) automata realised in software.
I have rearranged and blended the responses so far into the following. If any respondent feels I have damaged his contribution, please contact me for a heartfelt apology. -- R.

A BRIEF RECAP OF BARRY'S QUERIES (for the full account, see issue 3):

The notions of *organization* and *structure* are fundamental to autopoietic theory; yet I find I am not always clear on their meaning. So I should like to consider a simple framework in which I feel unsure of how these terms should be interpreted [John Conway's so-called *Game-of-Life* (C-Life)], and ask you for your views.

In the C-Life universe we can recognise and identify a variety of entities (unities? systems?). There are the individual cells, or cell-automata. [...*Barry's description of these cells deleted*...] Now, for each of these kinds of entity or system, I would like to know the answers to the following questions:

  (i)   What is its *structure*?
  (ii)  What is its *organization*?
  (iii) What is its *boundary*?
  (iv)  Is it *organizationally closed*?
  (v)   Is it *autopoietic*?

I should also be interested in a more general prior question: do these questions have definitive answers at all? And if not, why not?

=========================================================================

[I] SPECIFIC RESPONSES TO BARRY'S LINE OF QUESTIONING:

[[ CONOR DOHERTY ]] offers the following direct responses to Barry's questions (i) - (v):

** BARRY:
> For the moment I shall consider only three very simple such patterns:
>   (a) Block: this is a completely static pattern on a fixed
>       set of cells.

** CONOR ANSWERS:
  (i)   structure: spatial organisation of cells
  (ii)  organisation: reproduction rules not dependent on neighbouring states
  (iii) boundary: neighbour cells
  (iv)  organisational closure: Yes
  (v)   autopoietic: Not really. No equivalent of "metabolism". More like a rock.

** BARRY:
>   (b) Blinker: this is a dynamic pattern, but the total set
>       of cells involved is fixed.

** CONOR ANSWERS:
  (i)   structure: set of patterns involved in reproduction of the blinker.
  (ii)  organisation: again, state transition rules
  (iii) boundary: set of neighbouring cells for any given pattern.
  (iv)  organisational closure: Yes
  (v)   autopoietic: yes, informatically

** BARRY:
>   (c) Glider: this is a dynamic pattern which "moves", i.e. the set
>       of cells involved changes continuously, and without limit
>       (in principle at least).

** CONOR ANSWERS:
  (i)   structure: patterns involved in reproduction of the glider as it moves across the tessellation.
  (ii)  organisation: transition rules
  (iii) boundary: neighbour cells
  (iv)  organisational closure: yes
  (v)   autopoietic: yes, informatically
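[EDITOR'S NOTE: For new subscribers who have never met C-Life, here is a minimal sketch of Conway's transition rules and Barry's three patterns, in Python. The rules themselves (birth on exactly 3 live neighbours; survival on 2 or 3) are the standard ones; the function and variable names are mine, not Barry's. -- R.]

    from collections import Counter

    # A minimal C-Life sketch: the "organization" is the fixed transition
    # rule; the "structure" is whichever set of live cells it is currently
    # manifested in.
    def step(live):
        """One tick: live is a set of (x, y) cells; returns the next set."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    block   = {(0, 0), (0, 1), (1, 0), (1, 1)}          # (a) static
    blinker = {(0, 1), (1, 1), (2, 1)}                  # (b) period-2
    glider  = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # (c) "moves"

    # E.g. step(blinker) yields the vertical phase; step(step(blinker))
    # restores the original: one organization, two structures.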
[[ RANDY WHITAKER ]] responds to Conor on Barry's queries:

I've long grappled with how to apply autopoietic concepts to computer software. It seems to me that the computer is itself a structure-determined, operationally closed system in the physical space. By this I mean that its functionality (including its "logical" or "programmed" functions) is reducible to the configuration of the hardware plus the electromagnetic states of subregions therein. Bits, bytes, data structures, etc., are our _explanations_ for this system's functional regularities. To address the "logical" aspects of software (e.g., rules or data structures) requires extending the "domain of consideration" to encompass a logical / conceptual dimension _OR_ adding the observer so as to delineate a composite system considered as a whole. To address the "visible" aspects of software (e.g., the C-Life cells as displayed graphically at the interface) similarly requires shifting to / incorporating this graphical or display "space", or else blending the observer into a joint system.

An observer may ascribe "systemhood" to a manifested structure which is only a subregion within a subsuming organizational whole. This may be due to the observer's limited ability to couple with the whole (e.g., a limit to the intersection of the domains in which (a) the observer operates as such; and (b) the whole manifests its organization). It may also result from the observer's ontogeny (e.g., the bias of prior "learned" categories). I grant this is a sloppy summary, but it leads to the point: systems are ascribed by observers. The systems delineated depend on both (1) the intersection of the observer's cognitive domain and the domain of manifestation for any system; and (2) the manner in which the observer "slices up" that domain of intersection.

Now, Conor's responses rely on organization being mapped onto the transition rules (the regularities of occurrence of the graphical cells), and structure being mapped onto the graphical cells (singly, or as a set of conventionalized composites). Given that the system of interest includes the graphical component / aspect, I go along with this.

Now let me disconnect the monitor. I no longer see the patterns visibly, but the "program" continues operating in a regular fashion. Assume some alternative means of inspection, e.g. a numerical printout, as assurance of continued operation. I can still apply Conor's mappings, by shifting my application of "structure" from the screen to the printer.

Now let's shut off _all_ the input / output peripherals. Something's still going on in the circuitry, and it presumably still manifests regularity. The observer now has to shift the "horizon" for discerning structure to (e.g.) RAM, registers, etc. Now we're getting down to the "minimal case" -- paring away the structural (display) extensions to (hopefully) leave only the most basic kernel of this C-Life beastie for further analysis. It's still the same beastie Barry offered up for inspection, but now stripped down to its "innards".

This reduction does not, however, permit me to "pin down" the C-Life "system". I have managed to restrict the _scope_ of the space in which it is manifested (by removing the graphical / display extensions), but I have still not determined _which_ space to address it in. I can address the system in an abstract, "conceptual" space as a network of (e.g.) data structures for the cells and finite state transition networks for the "program" itself. I can also address it in the physical space as a network of electrical states (e.g., states of the registers and RAM locations) manifested in a particular physical architecture (e.g., the busses; the connectivity / transition constraints of the circuitry).

[Intermediate pause to note: I don't mean to seem needlessly pedantic -- I'm just trying to suggest that Barry's queries are more complex than they might initially seem. Furthermore, I don't think either of these (or any of many) alternative "interpretations" is necessarily "correct" in any absolute sense.]

Now, in both the stated cases, the manifestation (and continuance) of the network of relations is itself dependent on a subsuming "system" -- my cognitive domain in the "conceptual" case, and the computer in the "physical" case.
Interruption or forgetfulness disintegrates the first version, while a fault in the underlying hardware or operating system disintegrates the second. In both cases, there is an implied agency which ensures the persistence of _both_ the network of relations (organization) and its specific manifestations (structure). As such, I would dispose of Barry's question (v) by saying that in _neither_ case is the C-Life beastie (distinguished from its supportive agency) autopoietic, because it does not reproduce its components. Furthermore, I would claim that in neither case is the C-Life beastie _autonomous_ (in Varela's specific usage, cf. _Principles of Biological Autonomy_), because it does not (in and of itself) maintain its defining network of relations. [NOTE: Since autopoiesis is a special case of autonomy, I suggest that the more general case be the focus for further analysis and discussion.]

For all 3 instances (Barry's a, b, c), taken with regard to the "stripped down" or "minimal" C-Life beastie, I would guess the following:

CASE I: "CONCEPTUAL" SPACE

(i) What is its *structure*? The set of elements (e.g., "mental images"; "predicative / logical units"; data structures) whose characteristics are determined by (1) the specific "space" in which I delineate them and (2) the agency of the supporting framework in that "space" (my cognitive domain if the elements are taken as "ideas"; a logical schema if taken as abstract "logical variables"; the machine's logical architecture if taken as instantiated "data variables").

(ii) What is its *organization*? The pattern of relations determining the configuration (locational; logical; procedural) of the elements.

(iii) What is its *boundary*? Without organizational closure (see iv below), I don't see that the C-Life beastie can manifest a closed "boundary". The "conceptual" case is further complicated by the fact that the program (as a finite state automaton) can have any number of superfluous transitions tossed into its definition without degrading the apparent consistency of the structure's manifestation (a quick sketch of this point follows Case II below). This would seem to make the "boundary" somewhat arbitrary. In the absence of autopoiesis (and even of organizational closure), it would seem to me that the system does not generate a boundary as an immanent characteristic linked to its organization.

(iv) Is it *organizationally closed*? I don't think so. I think the C-Life beastie (taken independently of its supportive substrate) fails to achieve "closure" because of its dependence on the aforementioned agency.

CASE II: "PHYSICAL" SPACE

(i) What is its *structure*? The set of physico-electrical components participative in the minimal C-Life beastie. The _persistent_ structure includes circuitry, registers, data busses, and RAM. The _momentary_ structure includes the specific set of electrical states manifested at a given time.

(ii) What is its *organization*? The pattern of relations (physico-electrical transformations) through which the minimal state of the C-Life "program" is manifested.

(iii) What is its *boundary*? On a persistent physical basis, any "boundary" encloses the range of hardware components (circuits, RAM space) within which the minimal structure is manifested. On a momentary basis, the "boundary" encloses the electrical states manifest in the hardware.

(iv) Is it *organizationally closed*? Possibly.
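[The promised sketch of the "superfluous transitions" point from Case I (iii), again in Python; the toy automaton and its state names are invented purely for illustration:]

    # Two finite state automata: fsa2 carries an extra, unreachable state
    # and transition, yet no observation of running behaviour can
    # distinguish it from fsa1.  Which transitions lie "inside" the
    # beastie's boundary is therefore somewhat arbitrary.
    fsa1 = {("on",  "tick"): "off",
            ("off", "tick"): "on"}

    fsa2 = dict(fsa1)
    fsa2[("limbo", "tick")] = "limbo"   # superfluous: "limbo" is unreachable

    def run(fsa, state, inputs):
        for symbol in inputs:
            state = fsa[(state, symbol)]
        return state

    assert run(fsa1, "on", ["tick"] * 5) == run(fsa2, "on", ["tick"] * 5)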
In the "conceptual" case, I didn't merge the "supportive agency" into the C-Life beastie itself, because Barry initially seemed to be focusing on the operating software (and, of course, its display manifestations). In the "physical" case, I think I can obtain this merger by taking the whole thing as a specific configuration of the structure-determined, operationally closed computer. If we consider the electrical supply referentially neutral with respect to identifying / delineating the resultant composite, something like organizational closure seems feasible.

Closing Remarks: I never have been able to convince myself on these issues when it comes to software. If I had written this some other day, I probably would have come up with marginally different answers. I hope at least to have illustrated the necessity of trying to pinpoint the combination of space and observer involved. Perhaps more importantly, I've never come up with a satisfactory account of what to do with the "supportive agency" which influences software's manifested "structure". Perhaps I've created my own problems here, but I keep coming back to this issue again and again. More specifically, I wonder:

* Should such "supportive agency" be stripped of its own structural attributions and somehow subsumed under organization? For example, is it reasonable to incorporate the regularities of the hardware into a pattern of relations defined solely with regard to the software?

* Should it be taken as a causal character of the space itself? Living systems (M & V 1980) were defined as being "autopoietic in the physical space". I always assumed this implied that causal relations (e.g., physical / chemical laws) explaining ordered transformations in matter were included therein. Does this imply that operations / logic, etc., explaining ordered transformations in (e.g.) data structures can be reasonably subsumed as a part of the "character" of the space in which a software system is delineated?

* Or is it preferable to manipulate the scope of consideration to subsume the "supportive agency" (cf. my reduction of the software into physico-electrical states)?

I guess all this has more to do with Barry's last, general questions (Do these questions have definitive answers at all? And if not, why not?) than with C-Life per se.

** HANS-ERIK NISSEN (in a comment which seems pertinent at this point):

I feel surprised to see someone who looks upon human concepts as constructs phrasing questions of the kind: "What is X?" I would prefer to ask, e.g., "What *structure* and *organization* should I assign to it?" Maybe I should supplement such a question with "By what interventions could I arrive at a *structure* and an *organization* to assign to it?"

** CONOR DOHERTY ASKS:

It would be nice if someone could clarify exactly what a pattern of relations of structural reproduction is. What's its relationship to physical structural implementation? Is Autopoiesis a general theory of patterns? If not, is there one out there?

** RANDY RESPONDS:

I think Conor is pointing to one or many _very_ important issues here. In _Autopoiesis and Cognition_, Maturana and Varela repeatedly invoke "relations", "patterns", etc., as the fundament for the theory. At one point, there is a claim that relations are "the stuff of systems" (p. 63 -- glad I compiled that index ;-) ). As such, I would say that autopoietic theory is heavily dependent on "patterns", but does not elucidate that concept beyond its generic usage. I, too, have been wondering about this, and I'm currently researching what all this "relation" stuff is, anyway. Our _descriptions_ of cognition invoke a lot of "relations", but there doesn't seem to be much out there in terms of anyone having taken "relations" as a subject of interest in its own right. For example, in mathematics, "relations" are defined by default, via sets of ordered "objects". This approach "silhouettes" a relation, delineating it solely in terms of its referents.
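[To make the mathematical point concrete: in set theory a relation is nothing but its extension -- a set of ordered tuples -- with no content beyond the objects it relates. A two-line illustration in Python; the example relation is mine:]

    # "is_parent_of" reduced to its extension: the relation itself has no
    # properties of its own; it is "silhouetted" by its referents.
    is_parent_of = {("alice", "bob"), ("bob", "carol")}
    print(("alice", "bob") in is_parent_of)    # True
    print(("alice", "carol") in is_parent_of)  # False -- and nothing in the
                                               # set says what "parenthood"
                                               # itself consists of.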
Conor, I can't find where you got the phrase "pattern of relations of structural reproduction". Please clarify what you're pointing at; I think it's something I'd like to point at, too.

=========================================================================

[II] GENERAL RESPONSES TO BARRY'S LINE OF QUESTIONING:

** HANS-ERIK NISSEN (with respect to artificial life):

When it comes to using a word such as "life" in a new context, I recall what Norbert Wiener wrote about such endeavors in his _The Human Use of Human Beings_, Houghton Mifflin Co., Boston, 1954 (first edition in 1950):

"Here I want to interject the semantic point that such words as life, purpose, and soul are grossly inadequate to precise scientific thinking. These terms have gained their significance through our recognition of the unity of a certain group of phenomena, and do not in fact furnish us with any adequate basis to characterize this unity. Whenever we find a new phenomenon which partakes to some degree of the nature of those which we have already termed 'living phenomena', but does not conform to all the associated aspects which define the term 'life', we are faced with the problem whether to enlarge the word 'life' so as to include them, or to define it in a more restrictive way so as to exclude them. We have encountered this problem in the past in considering viruses, which show some of the tendencies of life - to persist, to multiply, and to organize - but do not express these tendencies in a fully-developed form. Now that certain analogies of behavior are being observed between the machine and the living organism, the problem as to whether the machine is alive or not is, for our purposes, semantic and we are at liberty to answer it one way or the other as best suits our convenience. ... It is in my opinion, therefore, best to avoid all question-begging epithets such as 'life', 'soul', 'vitalism', and the like, and say merely in connection with machines that there is no reason why they may not resemble human beings in representing pockets of decreasing entropy in a framework in which the large entropy tends to increase." (ibid. pp. 31-32)

** RANDY WHITAKER (with respect to artificial life):

Hans-Erik's reference to Wiener is very relevant at the outset of this thread into a-life -- a research area "...devoted to the creation and study of lifelike organisms and systems built by humans", where life's "essence is information." (Levy*, p. 5) Fascinating stuff, but I frankly feel an ache (as from an old injury) at certain resemblances between a-life and AI (artificial intelligence) -- another suboptimally labeled endeavor (and my prior specialty). AI also assumed the essence of its focus (intelligence) was "information", and that information-processing was _the_ model for human intelligence (a faculty then, and still, undefined).
The problem was that: (1) AI researchers came to subordinate explanations for the phenomenon of interest (intelligence) to the "interesting" aspects of their simulators; and (2) those with the most research interest in "intelligence" (e.g., the psychologists) were beguiled away from the messy state of their own field toward the more orderly, if less illuminating, view of humans as computer-analogues. AI's early apparent successes fueled a cross-disciplinary migration of information-processing metaphors from which the cognitive sciences still bear the scars. To be sure, there was a diversity of opinions (and ambitions), leading to the distinction between "soft" and "hard" AI -- the former claiming only to use computer models to understand the real thing, and the latter claiming the computerized version _was_ the real thing. In neither case did (or could) AI researchers claim that they (or anyone else) had an adequate grasp of the "intelligence" or "cognition" they sought to replicate.

Imagine the pain of recognition when I read of "weak" versus "strong" a-life -- the former seeking "...to illuminate and understand more clearly the life that exists on earth...", and the latter aiming "...toward the long-term development of actual living organisms whose essence is information." (Levy*, pp. 5-6) Deja vu in the extreme!

AI's problem was a lack of balance between a theoretical understanding of intelligence and its attempted simulation in software. Preventing a similar imbalance in a-life research requires attention to theoretical understanding of the "real thing". Autopoiesis originated as such a theory -- a systemic framework for delineating those entities to which we attribute life. The questions Barry raises will hopefully serve as a beginning toward matching the ambitions of a-life research with the understanding of living systems afforded by autopoietic theory. Then let's go back and straighten out what's left of AI. ;-)

* Levy, S., _Artificial Life: The Quest for a New Creation_, New York: Pantheon, 1992.

** CONOR DOHERTY:

I am unsure about the relationship of CA [cellular automata -- ed.] to autopoiesis. My understanding is that biological autopoietic systems are characterised by, amongst other things, causal reproduction of the internal relations which characterise the system. This raises the question of whether biological patterns can be specified independently of the implementation substrate, i.e. as a computer program. In the case of CA, at what level does organisational closure operate? If we have non-adaptive or non-stochastic transition rules, then by definition it would seem that if this is the organisational level, it is closed. I have a feeling that I'm probably completely on the wrong wicket here.

** HANS-ERIK NISSEN (speaking generally about Barry's specific example being Conway's Game of Life):

...Now to the mathematical games you refer to. I feel somewhat surprised by the choice. Why do you not start by discussing the application of concepts by means of the tessellation example of autopoiesis given by Varela in _Principles of Biological Autonomy_, Elsevier North Holland, New York, 1979, chapter 3, pp. 19-23? My reason for guessing that the task you take on might get into difficulties through your very choice of example is grounded in the following quotation from the chapter just mentioned:

"...It (the tessellation example in chapter 3) is fundamentally distinct from other tessellation models, such as Conway's well-known game of 'life' (Gardner, 71) and other lucid games proposed by Eigen and Winkler (1976), because in these models the essential property studied is that of reproduction and evolution, and not that of individual self-maintenance. In other words, the process by which a unity maintains itself is fundamentally different from the process by which it can duplicate itself in some form or another. Production does not entail reproduction, but reproduction does entail some form of self-maintenance or identity. In the case of von Neumann, Conway, and Eigen, the question of the identity or self-maintenance of the unities they observe in the process of reproducing and evolving is left aside and taken for granted; it is not the question these authors are asking at all." (ibid. p. 22)

At least it might prove worthwhile to investigate your questions with this example in mind.

Kind regards, Hans-Erik
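[EDITOR'S NOTE: For readers without _Principles..._ at hand: the tessellation model Hans-Erik refers to (first published by Varela, Maturana & Uribe in 1974) has three kinds of rules -- a catalyst composes two substrate elements into a link; free links concatenate into chains, which can close into a "membrane" around the catalyst; and links spontaneously disintegrate back into substrate. Below is a drastically simplified, from-memory sketch in Python: all names and the probability value are mine, and the real model's 2-D spatial and motion rules are omitted entirely. -- R.]

    import random

    # Counting version only: tracks how the three rule types keep
    # regenerating the system's own components.
    substrate, free_links, bonded_links = 100, 0, 10

    def tick():
        global substrate, free_links, bonded_links
        if substrate >= 2:               # composition: K + 2S -> K + L
            substrate -= 2
            free_links += 1
        if free_links > 0:               # concatenation: L joins the chain
            free_links -= 1
            bonded_links += 1
        for _ in range(bonded_links):    # disintegration: L -> 2S
            if random.random() < 0.05:
                bonded_links -= 1
                substrate += 2

    for _ in range(50):
        tick()
    # Unlike a glider, the "membrane" here persists only through ongoing
    # production of the very components that compose it -- Varela's point
    # about self-maintenance versus reproduction.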
** RANDY COMMENTS:

OOPS! Hans-Erik, the Eagle-Eyed Emeritus, has come up with the kind of question that gives dissertators nightmares before their thesis defense! In addition to Varela's _Principles..._, the following citations apply:

Zeleny, Milan, and Norbert A. Pierre, "Simulation of Self-Renewing Systems", in Jantsch, Erich, and Conrad H. Waddington (eds.), _Evolution and Consciousness: Human Systems in Transition_, Reading MA: Addison-Wesley, 1976.

Zeleny, Milan, "Self-Organization of Living Systems: A Formal Model of Autopoiesis", _International Journal of General Systems_, Vol. 4 (1977), pp. 13-28.

===========================================================================

CLOSING COMMENTS

Well, that's about it for issue no. 5. The forum is growing, and we're starting to get some discursive "momentum". The foregoing responses to Barry's queries certainly do not exhaust the topic. Many more of you no doubt have ideas, comments, answers, suggestions, etc. -- share them with everyone else. In addition, this issue has generated at least as many questions as it has answered -- meaning there should be a corresponding expansion of the volume and scope of discussion.

COMING ATTRACTIONS: This'n'that on: _The Embodied Mind_, Spencer Brown and his "calculus of indications", the conundrums of social systems, etc.