Tuesday, January 30, 2024

The M-M-M-Megamachine

We're working backwards here.  The argument in Autonomous Technology  was based on a 'materialist' reconstruction of the techno-criticism that people like Mumford gave voice to in the sixties.  Basically, Winner wanted to demonstrate that you needn't subscribe to some paranoid fear of what 'they' (which could be anyone from political elites to sentient machines) are doing with technology to legitimately worry about what's happening with technology.  Instead, we can simply argue that the systematic interaction of certain aspects of modern technology (inter-connectivity, reverse adaptation, technical imperative) with certain characteristics of human political, social, and psychological organization (cognitive limits, group think, desire for security) leads to a world increasingly dominated by goal-seeking machines run amok.  Last time I discussed how this idea opens up a new set of questions that Winner appears unprepared for, but generally I think this systematic and structural type of analysis is very useful.  And in tracing backwards to Winner's predecessors like Mumford, it's clear why he took this approach.

Because Mumford was certainly not afraid to think of the machine as The Man.  The reason he's comfortable doing this, however, has less to do with the embrace of some vague techno-vitalism than with the fact that he has a very concrete Man in mind -- Pharaoh.  Our situation today is Pharaoh's fault.  That is the basic story of Volume 1 of The Myth of the Machine -- Technics and Human Development.  In this book, Mumford tries to trace the history of human technical development all the way back to the point at which our species disappears back into the apes.  Most of the prehistory we encounter in this volume is a frankly speculative endeavor.  In fact, history proper begins with Pharaoh's wrong turn.  But first let's recap the early years.  

Humans were developing just fine, and had invented all kinds of sophisticated 'technologies' like ritual, language, omnivorous hunting and foraging strategies, and domestic horticulture.  Of course, we don't often think of any of these as technologies because they don't look like simple machines or tools.  Nevertheless, they are all learned behavior patterns that serve to enhance εὐδαιμονία, for lack of a better word.  They open new possibilities for life, expand human experience in many dimensions, and increase our power in Deleuze and Spinoza's sense.  Developing this type of 'technology' is precisely what, in Mumford's view, separated us from the animals.  So, while one of Mumford's goals is to reject the idea that humans are just tool-using apes, he's not really opposed to the idea that we are a technical species right from the outset.

Then along comes Pharaoh.  And he builds a proper machine, a tool for increasing his personal individual power.  Mumford calls this divine tool the megamachine, because it's so large that individual humans just serve as its component parts.  It's a machine of enormous power and precision, capable of erecting pyramids that have stood for 5,000 years already.  At first, calling the structure of ancient Egypt a machine might seem like a somewhat loose and metaphorical extension of the concept.  But by considering everything that went into the production of these monuments, Mumford would like to convince us that the truth is almost the reverse -- our latter-day concept of a machine is actually patterned on Pharaoh's breakthrough technology.  What did it take to get the pyramids built?  For one, it required the production of a huge agricultural surplus.  This had to support not only a massive diversion of human labor into pushing around big stone blocks, but it also had to feed the army of scribes who organized this labor.  This large informational bureaucracy itself had to be designed and constructed to introduce the first standardized units of space, time, and labor.  And all of this newly massed and organized human power needed to be put at the service of the dream of immortality of one allegedly divine individual.  The whole apparatus was a clockwork mechanism that sucked in everything around it in pursuit of an insane and life-denying goal.  For how can the quest for immortality be anything but the denial that life is real?  In a sense, this unquenchable thirst for personal power is almost Mumford's definition of a machine.  For him, it connotes a goal-oriented mechanism that breaks free of its human context, a sort of teleology gone mad.  In the pyramids, Mumford sees the root of our current obsession with more, more, more ... something ... that technology will somehow magically provide for each of us.  
All that has happened between Pharaoh's time and our own is that the machines have been miniaturized and the number of individuals expanded.  But we still imagine that organizing and optimizing the productivity of the world will allow us to live forever.

This is the illusion that Mumford hopes to dispel for us.  Because while Pharaoh's megamachine undoubtedly increased his power, it was at the cost of enslaving all the rest of us.  While we'll have to wait till volume 2 to discover how Mumford extends this slavery analogy into the modern era, its application to ancient Egypt is pretty obvious.  Pharaoh quite literally made humans the cogs of his divine machine.  He took full living human individuals and turned them into standardized parts in a larger system.  In Simondon's language, he reduced humans to technical elements, and constituted himself as the first true technical individual.

This raises an obvious question: why did the people go along with it?  Mumford's style is long on suggestion and analogy, and short on careful causal analysis.  He mentions the terror and violence of Pharaoh's regime, though this leaves open the question of how violence alone would allow king and court to sustainably control a large population.  Ultimately, his answer is that Pharaoh enslaved us by exploiting a much older human 'technology' -- myth and ritual.  It's the myth of the machine that keeps us from seeing its perverse and life-denying reality.  Mechanistically speaking, this is almost no answer at all.  The Egyptians were simply hoodwinked into enslaving themselves?  And yet there's something that rings true here.  There's some way in which technology lends itself to a certain forgetfulness.  We seem to become so enthralled by its prodigious efficacy that we lose track of what we really wanted to accomplish with it in the first place.  Somehow the servant turns into the master, or perhaps more accurately, the master falls asleep while the servant continues his mechanical march along a predetermined course.  We seem to be susceptible to a sort of willful ignorance, a tendency to rely on magical thinking, especially when it consecrates our little life to a higher purpose.  We'll see next time how Mumford develops these themes in volume 2.  For now, I'll just leave you with my brief individual chapter notes.

2) Suddenly humans got big brains that they didn't know what to do with.  The growth of the brain was not an adaptive evolutionary response. 

3) Big brains dream up all kinds of crazy shit.  The unconscious becomes a chaos that can only be tamed by the repetitiveness of ritual.

4) Language extends ritual and is really the first technical operation.  Standardized sounds are introduced that can be recombined for expressive production.  Language is not primarily about communicating information, but about structuring the chaos of a big brain.  It's a form of magical control over our growing internal life.

5) Foraging develops technical intelligence in a way that predated tool making.  But the necessities of ice-age hunting introduced the first concept of work with the repetitive actions needed for fashioning stone tools.  The first true machine is the bow and arrow because it is not just a better hand. 

6) The agricultural revolution is misnamed.  There was a long slow transition from horticulture to agriculture.  At this point everything becomes more settled.  Domesticated plants and animals arrive with the domestication of human lifestyle.  The big innovations are ground stone tools and ground cereals, which begin to make life a daily grind.  Containers of surplus become important for the first time -- the basket as a technology.

7) The neolithic synthesis of foraging and agriculture (basically, gardening) strikes the perfect balance between grinding work and ritual play.  Every person, plant, and animal is integrated into a domestic situation.  But this paradise easily becomes too stable and insular.

8) Kingship fuses divine religious power with administrative control of violence to create the first megamachine formed of human parts.  People begin looking to astronomical, rather than biological regularity as a model of order. 

9) The king's megamachine captured and coordinated human labor to both productive (pyramids) and destructive (war) ends.  It functioned through an elaborate bureaucracy that relied on conspicuous consumption to reinforce class divisions.

10) The megamachine must consume the surplus it produces either through elite fantasy or through war.  The rulers become neurotically anxious, and the collective human sacrifice of war appeases this the way individual human sacrifice used to appease a neolithic community's anxiety.  Cities manage to be partially exempt from machine conscription, and the synagogue is a form of organized resistance to its dehumanization.

11) The megamachine is not the only kind of technology.  There has always been a small scale 'democratic' technology that serves human scale life and aesthetics.  The prophets of the Axial Age actively resist the megamachine -- they preach a movement centered on the importance of the smallest.

12) The Benedictines invented the minimachine.  A small scale 'democratic' technics (based on the water wheel) was placed in service of a regimented spiritual life.  Small machines were invented on the fringes of great empires because they are more valuable where there are fewer slaves.  The middle ages generally strike a balance by placing machine power in service to human life.  This balance is destroyed when the religious aspiration of the Benedictines is replaced by the infinitely unsatisfiable monetary aspirations of early capitalism.  Mammon becomes the new pharaoh.


Friday, January 12, 2024

Which one is autonomous?

Next up on the bookshelf in the philosophy of technology project is Langdon Winner's Autonomous Technology: Technics-out-of-control as a Theme in Political Thought.  Since Winner is a professor of political science, the tone of this book differs markedly from the frankly speculative efforts of Simondon and Arthur.  While it's both accessible and fairly well written, it still suffers from the long-windedness of someone who constantly feels the need to justify their academic existence.  So, for example, we only come to understand the overall design of the book at the very end.

We began our inquiry with the simple recognition that ideas and images of technology-out-of-control have been a persistent obsession in modern thought. Rather than dismiss this notion out of hand, I asked the reader to think through some ways in which the idea could be given reasonable form. The hope was that such an enterprise could help us reexamine and revise our conceptions about the place of technology in the world. In offering this perspective, I have tried to indicate that many of our present conceptions about technics are highly questionable, misleading, and sometimes positively destructive. I have also tried to lay some of the early groundwork for a new philosophy of technology, one that begins in criticism of existing forms but aspires to the eventual articulation of genuine, practical alternatives. (AT, 306)

This is a nice summary, and explains the rationale for what was an otherwise puzzling duplication of material between the first third of the book and its remaining two parts.  Winner spends quite a lot of time establishing that 1) we're worried about technology and 2) while some of these worries don't make a lot of rational sense, there is some real reason to worry.  I think both were important points to make, especially at the time of publication in 1977; it's clear that even today not everyone would agree with them.  And in making them, Winner addresses some interesting questions that often go unnoticed in our thinking about technology.  For example, asking if we should be worried about technology allows us to back up and suspend judgement for a moment so that we can see what's at stake in the question itself.  If we're afraid that technology is now somehow "out of control", then we clearly need to interrogate our notions of control and power.  With this question we immediately enter a political realm. 

But in the end, Winner's answer to this question is sufficiently not-earth-shattering (we thought we were the goal-setting master and technology the goal-obeying slave), and our perspective sufficiently jaded (by the century long failure of the modern, technocratic, or internet revolutions to bring about anything resembling utopia), that this first third of the book now reads as an elaborate and unnecessary justification of why you should think about these issues at all.  The attempt to maintain a facade of neutrality throughout this section -- "some people have said that modern technology qualitatively changes human experience" -- when it will later become clear that Winner is himself almost entirely a techno-critic who agrees with 'some people', introduces a lot of duplicate material.  So, for example, we see Jacques Ellul presented first as potentially just a wild-eyed prophet of techno-doom, only to find these same basic ideas re-presented, in the chapter on "Technological Politics", as compelling observations on the changing nature of human autonomy in a technological society.  It's enough to make us wonder whether Winner devoted a third of the book to shadow boxing a straw man just to keep in top academic condition, or whether political science was backwards enough (in 1977) that this approach was necessary to keep your job.

As the summary quoted above indicates though, once Winner finally legitimizes criticism of technology, he goes on to articulate two basic approaches to the problem of improving things.  He dedicates one chapter to "Technocracy" as illustrated by the theories of Don Price and John Kenneth Galbraith.  While the details differ, these theories hold that any political problems caused by the unintended side effects of technology can be resolved with better political solutions.  Both solutions may require an elite new interest group to be implemented, but they still fit squarely within our understanding of politics as the process of human self-determination.  A second chapter illustrates a much more radical take on the problem that Winner calls "Technological Politics".  This is the view associated with figures like Ellul, Mumford, and Marcuse.  They argue that modern technology so permeates human experience that to imagine we are free to direct it as we see fit is pure hubristic fantasy.  Technology now has a logic of its own, together with its own autonomous momentum.  And we can neither control it in any straightforward sense, nor even get off the train.  Winner summarizes two positions here:

   The first, the utilitarian-pluralist approach, sees that technology is problematic in the sense that it now requires legislation. An ever increasing array of rules, regulations, and administrative personnel is needed to maximize the benefits of technological practice while limiting its unwanted maladies. Politics is seen as the process in representative government and interest group interplay whereby such legislation takes shape.
   The second approach, disjointed and feeble though it still may be, begins with the crucial awareness that technology in a true sense is legislation. It recognizes that technical forms do, to a large extent, shape the basic pattern and content of human activity in our time. Thus, politics becomes (among other things) an active encounter with the specific forms and processes contained in technology.
   Along several lines of analysis this book has tried to advance the idea central to all thinking in the second domain - that technology is itself a political phenomenon. A crucial turning point comes when one is able to acknowledge that modern technics, much more than politics as conventionally understood, now legislates the conditions of human existence. (AT, 323)

As you can see, Winner himself falls squarely into the latter, "technological politics", approach.  But because he conceives of his work as overcoming the objections of some putative technocratically optimistic Chad, he spends a great deal of time trying to specify exactly how the (assumed to be muddle-headed) idea that technology already is politics can be made to fit with a naively dualistic view of human agency.  This is a very useful exercise.  However, it's also an exercise that tends to dissolve its own starting point in a way that Winner doesn't quite appreciate.  Fortunately, this undermining actually takes us to the heart of the issue, and carries us right back into the terrain that Simondon explored  -- what does it even mean to be an 'autonomous' individual?  So in what remains, I'll trace Winner's careful dualistic reconstruction of how the sixties critics of Technological Politics might well have been onto something, and then look at how the means of this reconstruction paint him into a corner that he could only have gotten out of with the help of Simondon's concept of reciprocal causality.  

---------

Winner's central contention is that technology has changed how power functions in human society.  Whereas previously the power to steer society towards particular values may have rested with kings, or capitalists, or even 'the people', now this power increasingly rests with 'technology'.  This is the conclusion of the school of Technological Politics that Winner wants to reconstruct.

In twentieth-century social philosophy the conception of a self-maintaining technological society has recurred in a number of interesting and disturbing books - Oswald Spengler's Man and Technics, Friedrich Georg Juenger's The Failure of Technology, Karl Jasper's Man in the Modern Age, Lewis Mumford's The Myth of the Machine, Herbert Marcuse's One-Dimensional Man, Siegfried Giedion's Mechanization Takes Command, and Jacques Ellul's The Technological Society. These works contain a widely diverse collection of arguments and conclusions, but in them one finds a roughly shared notion of society and politics, a common set of observations, assumptions, modes of thinking and sense of the whole, which, I believe, unites them as an identifiable tradition. Taken together they express an inchoate theory which takes modern technology as its domain. (AT, 174)

But what is 'technology'?  Isn't it just a neutral collection of means for achieving what are always ultimately human ends?  How could these means, when we loosely group them together as a whole, themselves gain the power to determine the ends of human society?  Aren't we just anthropomorphizing 'technology' by treating it as an entity or organism or subject in its own right, and then turning this fiction into a political actor?  Does this represent progress in political theory, or its mystification?  To reconstruct what it might mean for technology to be a political actor, Winner first has to rescue the idea from those hard-headed critics who still believe that only human individuals can have political agency.  In other words, he has to rephrase what sounds like the description of an organism -- "a self-maintaining technological society" -- as nothing more than some peculiar thing that happens when humans have a whole lot of tools at their disposal.  

Are there certain conditions, constraints, necessities, requirements, or imperatives effectively governing how an advanced technological society operates? Do such conditions predominate regardless of the specific character of the men who ostensibly hold power? This, it seems to me, is the most crucial problem raised by the conjunction of politics and technics. It is certainly the point at which the idea of autonomous technology has its broadest significance. (AT, 173)

What characteristics allow us to confuse technology with a self-maintaining organism?  Well, one is Winner's very broad use of the term 'technology'.  He doesn't simply mean the various apparatus that we commonly refer to as machines.  He also includes the entire body of human "technical activities", that is, all goal seeking rational behavior, which he refers to as "technique" (following the French use of the term).  Finally, he also means to address the various organizations, or networks, that function like machines, regardless of whether they consist of entirely human or artificial parts, or some mix of the two.  So it's clear from the outset (AT, 11-12) that there is no clear dividing line between human society and technology.  The latter is being construed not only to include much of human social behavior, but even the parts of individual behavior that conform to a 'rational' logic of means and ends.  Already, we can see that the question of which one is an autonomous organism is complex and messy.  We might be talking about technology, or society, or even, if we stop taking it for granted for a moment, a human person.  In fact, most of the time Winner's term "autonomous technology" means the complicated systemic interaction of these three.  One of the secrets of thinking about technology, which to me was not apparent when I began with Simondon, is the way it reveals how badly we need a better definition of 'organism' or 'individual', along with the closely related notions of 'control' and 'autonomy'.

Winner, however, is a social scientist, not a philosopher, so he doesn't try to give us a definition of an organism.  Instead, he appeals again and again to the various ways that technology appears to acquire a life of its own by calling into question humans' ability to control and limit it.  In other words, the proof of technology's autonomy lies in the way we lose our own.  For example, modern technology seems to constantly expand its reach, and at an ever accelerating rate that hairless chimps find difficult to keep up with.  In the process, it also seems to suck in everything in its environment and reshape it as raw material for technological growth.  And this logic applies not only to natural resources, but even tends to reshape human beings into suitably standardized and machined parts, as if technology were incorporating or disciplining us.  Technical systems seem to inevitably become megatechnical systems, to use Mumford's term.  The scale of an individual apparatus or organization tends to increase in an endless quest for efficiency, and somehow each tends towards a complex interaction with others.  In addition, technical development doesn't always seem to respond to human goals.  If it's really just working for us, then why are there so many unintended consequences that don't seem to benefit any human?  We wanted universally accessible knowledge and instead we got TikTok.  We want to get from A to B, and somehow we got an endless parade of SUVs.  Technological means seem to become ends in themselves with surprising regularity.  Finally, technology doesn't ever seem to want to be turned off.  A decade ago we might have illustrated this with the mushrooming of highly redundant data center bunkers.  Today we can imagine adding an AI whose job is to motivate those notoriously fickle humans to continue supplying fuel and replacing servers.  
But Winner has in mind something more systemic that has long involved the interaction of human desire and technical means, which he calls the threat of apraxia.  Imagine a politician who suggests shutting down the internet and you'll immediately comprehend the idea.  Even if technology seems to require human participation in it as both efficient means and final justifying end, and even if we could theoretically abandon it and survive just fine, somehow the idea of stopping is always made unthinkably apocalyptic.  For Winner, all of these examples serve to illustrate that we are not in control of our modern technology the way we imagine ourselves in control of, say, a hammer. 

In its centrality to the daily activity and consciousness of the "employee," the function-serving human component, the technical order is more properly thought of as a way of life. Whatever else it may be, a way of life is certainly not neutral. Opportunities for "use" or "control" that the human components have within this system are minimal, for what kind of "control" is it that at every step requires strict obedience to technique or the necessities of technical organization? One can say that the "control" is exercised from the center or apex of the system; this is true, although we shall soon see that even this has a paradoxical character. But in terms of the functioning of individual components and the complex social interconnections, "control" in the sense of autonomous individuals directing technical means to predetermined ends has virtually no significance. "Control" and "use" simply do not describe anything about relationships of this kind. The direction of governance flows from the technical conditions to people and their social arrangements, not the other way around. What we find, then, is not a tool waiting passively to be used but a technical ensemble that demands routinized behavior.
   In this way of seeing, therefore, the tool-use model is a source of illusions and misleading cues. We do not use technologies so much as live them. (AT, 201)

But if humans aren't in charge of technology, then what is?  Has Winner given us any reason to think that technology is in charge of itself, as the word autonomy suggests?  And if modern technology constitutes a new organism, when was it born and above all, what does it want?  Here, Winner's methodology begins to create problems.  On the one hand, he's sought to be hard-headed and ignore the idea that there's anything mystical or vital involved here.  'Technology' is really just a short-hand for the systemic interaction of mechanical apparatus, human technique, and organizational networks that we see in the world around us.  Humans are the only real agents in this system, and the problem is simply that we have lost this agency, this control.  The idea that it has thereby passed to some other entity is just an expression of our Frankensteinian fear.  On the other hand, Winner has actually described something that walks and talks like an organism in its own right.  He hasn't emphasized this emergence, but as I suggested earlier, as the book progresses he increasingly talks as if there is a real "self-maintaining technological society".  He has even sketched what this system or organism wants.  It seems to want to expand and incorporate everything, and it seems to want to turn its ever-evolving means into ends in themselves. 

In fact, Winner will even go on to talk about how these tendencies to expansion ("technical imperative") and self justification ("reverse adaptation") actually serve to meld together the two perspectives of Technocracy and Technological Politics we discussed earlier.  Does technology require legislation or is it legislation?  Apparently the answer is yes.

  On the basis of the revised notions of technological politics presented here, however, I want to suggest an alternative conception of technocracy. It is one which avoids the pitfalls of the centralist-elitist notion and which, I believe, comes closer to defining the crucial problem that writers on this question have been aiming at for some time. The conception, although difficult to formulate in terms of precise indexes of measurement, could certainly be tested.
   I offer it as follows. Technocracy is a manifestation of two influences upon public life, which we have dealt with at some length: the technological imperative and reverse adaptation as they appear to a whole society with the force of overwhelming necessity. From this point of view it matters little who in specific obeys the imperative or enacts the adaptation. It is of little consequence what the nature of the education, technical training, or specialized position of such persons may be; indeed, they need not be technically trained at all. Similarly, the issue has little to do with whether there is a "center" of decisions, who occupies it, or what their personal, professional, or class interests may be. (AT, 258)

Here Winner reinterprets the technocrats themselves as mere dispensable tools of technology.  Technocracy becomes a description of how the system works, not who controls it.  Technology gives itself the legislation it requires, though this legislation only serves to unleash more technology rather than allow some human elite to limit it.  This view seems perfectly accurate, and is very close to the critique of Marxism that we saw in Simondon.  The elites are slaves of the machines just as much as the workers, notwithstanding their more comfortable yachts.  Placed in the same situation, anyone would act as they do, because the system has made any other behavior unthinkable.

However, this insistence that the system runs itself, that it doesn't require an elite who understands it, or even a center from which power is constructed poses a tricky question.  What leads us to call it a single system at all?  Winner has described something that we can only recognize because it has a coherent logic of its own that resists human efforts at control.  But then he claims that this same thing is so decentralized that each megatechnical system charts its own course.

Finally, the view I am taking here does not find the essence of technocracy at the center of all centers. If what we have seen is correct, then one would expect a dispersion of power into the functionally specific large-scale systems of the technological order. The systems do on occasion appeal to the central decision-making organs of the state for support and assistance, but it is incorrect to say that in so doing they necessarily yield control of their affairs. Their tendency is, in fact, to resist the final centralization Ellul predicted. The direction of governance by technological imperatives and reverse adaptation runs from megatechnical systems to the state. (AT, 261)

I think Winner sees this idea of a decentralized technocracy as his crucial contribution to the theory of Technological Politics.  He thinks his more rigorous materialistic methodology allows him to reach the same type of conclusions as Mumford and Ellul, but without the whiff of vitalism or human essentialism that (supposedly) clings to those authors.  This is why he emphasizes here that his theory requires none of the state centralization that the others apparently see as inevitable. 

But this idea won't work as it stands.  If each megatechnical system has its own "affairs" to control, and these are not organized by a central state, then we really don't have a system at all.  Why would all these individual megatechnical systems share common goals, and why would the resulting overall behavior have any coherent logic that we can recognize?  And if megatechnical systems resist final integration by the mega-mega-technical state, then why doesn't this logic apply inductively, and so prevent technical systems from ever forming mega-technical systems in the first place?  Where's the secret sauce that allows certain technical systems to come together so that they act like distinct centers that can control their affairs and resist centralization?  Conversely, if these systems are the only real agents, then how is it that we've identified a characteristic expansion and self-justification of 'technology' as a whole?  Was this just a sloppy shorthand for describing the combined effect of distinct megatechnical systems?  But then why do they all seem to have the same combined effect, where they don't just individually expand but also constantly produce new megatechnical systems?  Winner appears to be arguing that we are slaves to many machines which are not necessarily compatible.  This is probably true, but doesn't help us see how the individual machines formed nor why they seem to come together to form a system. 

What we need here is the idea of causal feedback loops.  This was the core of Simondon's theory of the technical individual.  We have to distinguish the individual from the element and ensemble levels of organization, and the key to that is examining where self-reinforcing causal loops get formed.  At this point, it feels like an anti-climax to admit that I don't completely understand how to construct the theory we need here.  It seems that Simondon developed the idea in his book on individuation, and merely applied it in On the Mode of Existence of Technical Objects.  So presumably we'll return to this point.  For now, I can only sketch a few things that our discussion of Winner has gotten me thinking about.  I think we need to think about something like an ecology of feedback loops.  On the one hand, the existence of individuals clearly requires a positive feedback loop.  I'm imagining something like an autocatalytic set of causal relations.  If the internal functioning of the individual doesn't cause the individual to continue functioning as it does then we're not going to see that individual around for very long.  On the other hand, it seems individuals also require negative feedback loops to maintain a separate identity.  We need a condition of closure to create a distinct self that can be maintained.  Left to itself, a self-reinforcing autocatalytic set wouldn't maintain the equilibrium we associate with individuals, but would simply expand without limit.  It wouldn't make a copy of itself but would simply make more of itself.  This is the kind of behavior we associate with crystals, not living individuals. 

This is about as far as I've gotten with the idea so far.  It's not obvious to me that one of these loops should operate exclusively internally and the other externally, between the individual and the environment.  This was one of the interesting, though confusing, points in Simondon.  The individual and its associated milieu are held together like two poles of a teepee.  The individual can't be what it is without a suitable associated milieu, but this in turn can't be as it is without the actions of the individual.  Winner describes a similar phenomenon when he talks about the way technology reshapes both the natural and human environment in a way that leads to more technology.  At the same time, the individual is not just the same thing as its associated milieu, so there has to be some mechanism that creates this distinctness.
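To make the contrast between the two kinds of loops concrete, here's a toy numerical sketch (entirely my own illustration, not anything found in Simondon or Winner, with the growth rate and carrying-capacity numbers chosen arbitrarily): a pure positive feedback loop grows without bound, crystal-style, while coupling it to a negative feedback "closure" term produces the stable equilibrium we'd associate with a persisting individual.

```python
# Toy sketch: positive feedback alone vs. positive plus negative feedback.
# Not Simondon's model -- just the standard exponential-vs-logistic contrast,
# used here as an analogy for autocatalysis with and without closure.

def simulate(rate, limit=None, x0=1.0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = rate*x, optionally damped by (1 - x/limit)."""
    x = x0
    for _ in range(steps):
        growth = rate * x  # positive feedback: more of x makes more of x
        if limit is not None:
            # negative feedback ("closure"): growth shuts off as x nears limit
            growth *= (1.0 - x / limit)
        x += growth * dt
    return x

runaway = simulate(rate=0.5)              # autocatalysis alone: expands without limit
bounded = simulate(rate=0.5, limit=10.0)  # both loops: settles at the limit
print(f"runaway: {runaway:.1f}, bounded: {bounded:.2f}")
```

The runaway case blows up exponentially, while the bounded case converges to a stable value and stays there; the interesting (and unresolved) question in the text is what plays the role of the `limit` term for a technical individual, and whether it sits inside the individual or in its associated milieu.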