Wednesday, November 22, 2023

The Secret Life of Machines

What exactly does it mean to live amongst the machines, to be involved in the creation of technical finality even as we recreate our own human finality along the way?  And how are these finalities related?  On the one hand, Simondon seems to want us to reprise our earlier, artisanal, role as the technical individual.  We are not to let ourselves be reduced to the level of elements or elevated to the level of ensembles.  We are meant to insert ourselves into the workings of the machines, join them together as elements in a concrete new technical individual, perhaps creating some sort of new informational mega-machine.  In this case it appears that we once again machine ourselves.  On the other hand, our new regulation (not control) of the machines seems to occur at a level removed from them.  Simondon sees our interaction with them as a means of discovering or rediscovering the construction of our own finality, but also as a way to appreciate that finality itself is merely one aspect of living things.  In that case, it would seem that the new technical individual would just be one aspect of a larger sense of what it means to be a human individual.  So then, which is it?  Are humans themselves machines, or somehow essentially different?  

In retrospect, I see that this ambiguity runs through the remainder of Part 2 (2.2.2-2.2.4) and is what forced me to read it a couple of times before I felt like I had some grip on his argument.  Here, Simondon attempts to explain in some detail how the new "self-determination" we ended with last time is supposed to work.  It's a fairly complicated story that will take us to the foundations of information theory.  

First, though, the short version -- the new cybernetic technical individual that Simondon hopes to help create is like a virtual version of what was previously the actual craftsman or artisan as technical individual.  It's as if there's another phase shift in the definition of the individual, which moves it from being an integrator of elements to an integrator of other individuals.  This new role differs from the capitalist's management of ensembles, which assumed that each machine had a distinct and fixed purpose that served his invariant end, an arrangement which effectively turns the old "man of ensembles" into a "machine of machines".  Instead, the new cybernetic individual investigates what tendencies machines have, asking what they might want to do and what problems they could be made to solve.  Such an investigation explores the individual's self-direction at the same time as it virtually integrates the machines.  In a sense, this takes us back to our earlier role as craftsman or tool user, but our tools are now directly aimed at fabricating new human possibilities and not simply some specific material object.

Man as witness to machines is responsible for their relation; the individual machine represents man, but man represents the ensemble of machines, for there is not one machine of all machines, whereas there can be a thought that encompasses all machines. (METO, 157)

The long version is considerably longer and begins with some observations on the difference between human and machine memory that at first might seem irrelevant.  Simondon notes that humans only remember certain things, whereas machines remember everything.  A human who remembered everything would (famously) be pathological.  Likewise, a computer's omnivorous memory is, without any classification system, completely useless (for example, without a functioning file system to interface with humans, my external hard drive is just a sea of varying magnetic charge).  Memory is only useful when it has both context and content.  The computer keeps these two aspects of memory as separate as file name and file data.  Simondon is way ahead of his time in observing that human memory blurs these.  We don't remember what happened shortly after noon on November 22, 1963.  We remember the content only together with the context: we were trading cow lips for alien technology when Kennedy was shot.
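(To make that contrast concrete, here's a toy sketch in Python -- my own illustration, not anything from METO -- of a machine memory that keeps context and content as separate as file name and file data, next to a human-style memory that can only reach the content through whatever context it was stored with.  The 'memories' and the recall function are entirely made up for the example.)

```python
# Machine memory: context (the key) and content (the data) are kept rigidly apart.
machine_memory = {
    "1963-11-22.txt": "Kennedy was shot.",   # content filed under an exact, arbitrary key
}

# Human-style memory: content and context are stored together as a single episode.
episodic_memory = [
    {"context": "trading cow lips for alien technology", "content": "Kennedy was shot"},
]

# The machine needs the exact key; a near-miss retrieves nothing at all.
print(machine_memory.get("Nov 22 1963"))   # -> None

# The human-style lookup works from any fragment of the surrounding context.
def recall(cue):
    return [ep["content"] for ep in episodic_memory if cue in ep["context"]]

print(recall("cow lips"))                  # -> ['Kennedy was shot']
```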

Noting this mixing of context and content in human memory is just the tip of a profound iceberg of reexamining how the living mind operates.  Simondon is ahead of his time here because he's groping towards a predictive processing Bayesian inference model of cognition.  He even seems to think of this as a sort of feedback in time (mirroring the feedback in neural processing that was discovered much later).

Content introduced into human memory will superimpose itself on prior content and take form on it: the living is that in which the a posteriori becomes a priori; memory is the function by which a posteriori matters become a priori. (METO, 138)

Normally, of course, we think of memory as the faculty by which the prior (the a priori) comes back as subsequent experience (the a posteriori).  In reversing this direction, Simondon means to suggest that perception is a form of memory, in which our formed experience of what we perceive becomes what we are 'given' (the a priori).  Our experience of the world is not given as raw data, but as a series of models interpreting how this data might fit together based on our past experience.  It's this 'perceiving the past' that makes life possible, that allows us to select what's relevant from the world and act on it without ending up as paralyzed as Funes.  Which means that it's not just the past that gets involved in the present, but also the future of possible actions that gets implicated in this passing present.  The error-correcting feedback of predictive processing leads us to construct and reconstruct 'what happened' on the basis of what we're going to do about it.  The temporality of all this is interesting to think about but hard to describe.  The present is actually the past (statistical likelihood of certain forms).  But the past is actually the future (likelihood of the forms relevant to our organism).  And now what in God's holy name am I blathering about?  The point here is just that for Simondon, life is inherently virtual.  Living is the art of possibility, where the 'not real' somehow intervenes in the real and opens it up.  Time is the perfect symbol of this fact, and this is why he begins with a discussion of memory.
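(For the technically inclined, here's a minimal Python sketch of the predictive-processing reading I'm giving Simondon -- my own illustration with made-up numbers, not his -- in which each round's posterior becomes the prior for the next observation, so that the a posteriori quite literally becomes the a priori.)

```python
# A Bayesian observer: yesterday's posterior is today's prior.

def update(prior, likelihoods, observation):
    """One Bayesian update: posterior is proportional to prior times likelihood."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about the world: a coin is fair, or it is biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"H": 0.5, "T": 0.5},
    "biased": {"H": 0.9, "T": 0.1},
}

for observation in ["H", "H", "T", "H", "H"]:
    prior = update(prior, likelihoods, observation)   # the a posteriori becomes the a priori
    print(observation, {h: round(p, 3) for h, p in prior.items()})
```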

Machines, by contrast, live only in the present.  No matter how sophisticated we make them, no matter how much of the past they store up or how many simulations of the future they incorporate into their functioning, all of their goals are laid out in advance by their creator.  They simply execute.  Their execution may be enormously complex and impossible to predict.  It may even contain elements of randomness (like Ashby's homeostat) or some other limited ability to reprogram itself.  Try as it might though, the machine cannot ask itself, "what should I do?" in general, but only with reference to the specific task assigned to it.  This general direction is always supplied to the machine from outside, and never generated from within.  While an intelligent machine may solve many subsidiary problems along the way, it is incapable of grappling with the problem of self definition. This is precisely what makes something like Bostrom's paperclip maximizer so scary.  The problem with a completely autonomous machine is that there is no way to interact with it at all.  It just does what it was programmed to do, regardless.  Simondon explicitly likens the situation to Leibniz's monad, whose actions are entirely determined in advance from its internal wiring, so to speak, so that it never communicates with the outside world (Proposition 7: monads have no windows).  If you think about it, you'll see that this makes our fully autonomous and self-perpetuating dream machine kinda useless.  By hypothesis, you can't turn it off, nor can you modify it.  So unless its programmed goal happens to be your own unique goal for all of eternity, you two will inevitably part ways at some point.

At this point I'm sure it's obvious that Simondon has set up this contrast between the machine and the living in order to point out that it's the living, and most relevantly the human, that gives the machine its goal and alters it when need be.  Ultimately though, this is not so much an observation as a definition.  Having a temporality, having access to the virtual, having a capacity for self definition -- these are just what it means to be 'living'.  Likewise, executing operations in the present and lacking a capacity for self redefinition are what it means to be a 'dead' machine.  Seeing these as definitions rather than descriptions goes a long way to clearing up the ambiguity I mentioned at the outset.  

Recall that the problem with capitalism was that it took the finality of the machines for granted.  By contrast, the craftsman was directly involved in the recurrent causality that created the finality of the technical individual he represented.  The point of Simondon's Techno-Marxist history was to show us that there is a 'living' and a 'dead' mode of interacting with our technology.  In the first case we interact with our technology on its own level, by providing the ongoing feedback between different elements and regulating their interaction as an artisan working alongside a material that has its own structure and potentialities, and not according to some goal fixed forever in advance.  An analogy might be the manner in which a sculptor interacts with her stone -- beginning with a general plan, but letting the outcome depend on the reaction of the material, thus allowing the combined system of sculptor and stone together to access possibilities that were not apparent at the outset.  In the second, capitalist, mode, we interact with technology either from above or below, forcing it to serve our pre-determined agenda or vice versa.  Here there is no creative feedback loop between ourselves and our technology.  The analogy would no longer be sculpting a recalcitrant material with 'a life of its own', but casting or moulding a pre-determined shape using a material presumed to be perfectly malleable.  It's all just a bunch of dead machines that execute their function, despite the fact that we claim in this instance that (certain) humans are in charge of it all.  This is to say that the role of the same biological individual completely changes depending on the mode of interaction with our technology.  Despite the fact that we earlier described the craftsman as a sort of machine and the capitalist as a purely human overseer, it's actually the former that is alive and the latter dead in Simondon's sense.  In short, life is not an inherent property of biological organisms that is forever denied to other causal networks like machines -- it's a mode of interaction between something and its environment.  When we interact with technology by integrating the virtual into its operation, we literally bring it to life.

There is something alive in a technical ensemble, and the integrative function of life can be ensured only by human beings; the human being has the capacity to understand the functioning of the machine, on the one hand, and the capacity to live, on the other: one can speak of technical life as being that which actualizes this relation between these two functions in man. (METO, 140)

Hopefully, I've elucidated the root cause of Simondon's tendency to flip between seeing opposition and seeing continuity between man and machine.  But this really only sets the stage for a more detailed discussion of how the living mode operates.  For instance, at this point you might still feel there's a vague whiff of vitalism about Simondon's notion of technical life.  Just how exactly does the living thing incorporate the virtual into the technical ensemble?  Or, for that matter, how does the organism, interpreted as a physical entity, incorporate the virtual into itself?  How is the 'not real' made real?  Fortunately, Simondon's discussion of memory has already given us the key to understanding this.  We can actually just restate those ideas in information-theoretic terms to see how humans could give meaning and life to the machine -- it's all a matter of expectations.

Simondon begins by distinguishing between thermodynamic and informational coupling of machines, and seems to consistently use the term "regulation" for the latter.  Of course, Maxwell's Demon assures us that these two can't be completely divorced, and the example he starts with blurs them.  Consider how a centrifugal governor works.  It is designed to maintain a constant engine speed under a load.  When the engine speed starts to slow down, the rotating balls coupled to it also slow down, and thus drop, which opens the throttle and increases the speed.  In short, the governor couples to the engine to create a more self-regulating engine.  In this case, we could say that the position of the balls provides 'information' which regulates the throttle.  This sounds kinda funny though, since the governor just seems to be part of the machine; for instance, we wouldn't really say that the pressure in the combustion chamber 'regulates' the position of the piston, though it's hard to see much physical difference between these situations.  Simondon clarifies this confusion when he points out that this governor's regulation is not as perfect as it could be.  In this scheme, the engine has to actually start to slow down appreciably before it can speed back up.  In other words, the regulation operates on the same time scale as the machine it regulates.  It would be better (i.e. the engine RPM would remain more constant) if the feedback from the regulation happened much faster than the variation in the engine's thermodynamic variables.  At the limit, it would be nice if feedback from internal operating variables were made instantaneous.  Though actually, since instantaneity is only a limit, it would be best if the regulator knew what was about to happen to the engine.  For example, if we had a sensor that looked ahead and saw we were about to go up a hill, it could rev the engine just in time to compensate for this.  This would enable us to track our target RPM much more closely.  In that case, we would intuitively say that the sensor provides 'information' used to regulate the engine.  This makes sense for two reasons.  First, the time scale of this regulation is much faster than the time scale of relevant fluctuations in the thing we're trying to regulate.  Second, true information actually comes from outside the machine.  It coordinates the machine's potential interactions with its environment.
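(Here's a toy simulation of that difference -- my own sketch, not anything in METO.  The 'engine', the gains, the actuator lag, and the 'hill' are all invented for illustration, and the look-ahead regulator is assumed to see the coming load exactly as far in advance as the throttle takes to act.  The point is just that regulation which reacts on the machine's own timescale must let the error appear before correcting it, while regulation fed with information from outside can cancel the disturbance before it ever registers.)

```python
# Feedback on the machine's own timescale vs. anticipatory ('look-ahead') regulation.

def simulate(lookahead, steps=60, delay=2):
    speed, target, kp = 100.0, 100.0, 0.5
    load = [0.0] * 30 + [5.0] * (steps - 30)   # a 'hill' appears at step 30
    pending = [0.0] * delay                    # throttle commands in flight (actuator lag)
    worst = 0.0
    for t in range(steps):
        feedback = kp * (target - speed)       # reacts only to error already present
        feedforward = load[min(t + lookahead, steps - 1)] if lookahead else 0.0
        pending.append(feedback + feedforward)
        applied = pending.pop(0)               # the command issued `delay` steps ago acts now
        speed += applied - load[t]             # crude engine dynamics, purely illustrative
        worst = max(worst, abs(target - speed))
    return worst

print("feedback only:   worst deviation =", simulate(lookahead=0))   # must slow down first
print("with look-ahead: worst deviation =", simulate(lookahead=2))   # the hill never registers
```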

So when Simondon talks about the cybernetic individual living amongst the machines and regulating them, he's talking about creating an informational coupling that goes beyond the thermodynamic coupling.  In fact, he points out that the difference between these types leads directly to differences in our technology.  Thermodynamic machines tend to be very large and very isolated from their environment, since this leads to the highest thermodynamic efficiency (defined as energy-out/energy-in).  On the other hand, informational machines tend to get smaller and smaller, since we'd like the information to travel as fast, and consume as little energy, as possible (a pretty prescient observation to make in 1958, seven years ahead of Moore's Law).

Ultimately, we dream of making the information instantaneous and completely disembodied.  Alas, this is not possible.  If we send the information down an electrical wire, we run into the limits of the speed of light and quantum fluctuations (aka noise).  Even if we consider the problem in general thermodynamic terms (irrespective of the physical substrate used to transmit the information) we run into Maxwell's Demon.  And Simondon adds a philosophical reason -- information can never be 'pure' because it is parasitic upon the forms that it in-forms.  All of these arguments converge on the same idea -- if we make information too disembodied, we end up confusing it with random variation.  We see the consequences of this every day.  To transmit information more efficiently in the thermodynamic sense (faster and with less use of energy) we compress it.  And the better we compress it, the more it ends up looking like a random string of numbers, because we've wrung all the redundancy out of it.  At first, there's something paradoxical about the fact that information, when it is perfected, seems to turn into its exact opposite -- randomness.  But then you realize that this almost follows from its definition as "surprisal".  For data to be informative, you have to have some expectation about what the data could have been.  Information depends on the existence of a set of background possibilities.  Without knowing what might happen, and what we might do about it, data we take in from the environment doesn't constitute information.  Information, as Gregory Bateson famously said, is a difference that makes a difference.  Consider all the data we take in about the speed of air molecules colliding with our skin; it isn't information until we decide the room is too hot or too cold, meanings which are obviously relative to our body's attempt to maintain an equilibrium form.  Apart from the attempt to maintain this form, we could either see the stream of collision data as utterly meaningless thermal noise, or as a wealth of information about the speed of individual air molecules (if we adopt the demonic perspective).  The point is simply that information is variability with respect to a set of possible forms.

Information is thus halfway between pure chance and absolute regularity. One can say that form, conceived as absolute spatial as well as temporal regularity, is not information but a condition of information; it is what receives information, the a priori that receives information. Form has a function of selectivity. But information is not form, nor is it a collection [ensemble] of forms; it is the variability of forms, the influx of variation with respect to a form. It is the unpredictability of a variation of form, not pure unpredictability of all variation. (METO, 150)
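(A last toy sketch, mine and not Simondon's: the same symbol carries more or fewer bits of 'surprisal' depending on the receiver's prior expectations.  With a flat prior -- no expectations at all -- every symbol is equally and maximally surprising, which is exactly what well-compressed data looks like: noise.)

```python
import math

def surprisal(p):
    """Shannon surprisal in bits: -log2 of the probability the receiver assigned in advance."""
    return -math.log2(p)

# A receiver who expects mostly 'A' is barely informed by an 'A' and startled by a 'Z'...
skewed_prior  = {"A": 0.97, "Z": 0.03}
# ...while a receiver with no expectations finds every symbol equally 'random-looking'.
uniform_prior = {"A": 0.5,  "Z": 0.5}

for symbol in ["A", "Z"]:
    print(symbol,
          round(surprisal(skewed_prior[symbol]), 2), "bits vs",
          round(surprisal(uniform_prior[symbol]), 2), "bits")
```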

Just as information cannot be totally disembodied, the regulation of machines cannot dispense with the living.  Of course we can imagine a thermodynamic coupling of machines into some mega-machine (though Simondon's careful causal analysis would distinguish between whether this larger entity is actually a machine per se, or better described as an ensemble of machines, or even just a collection of elements).  And we can even imagine a direct 'informational' coupling of machines where the operations of one machine change the behavior of another.  In this case though, we are simply seeing the goals of life baked directly into the machine ensemble, because a machine's functioning can only truly be information for another machine when we have some notion of what the first machine should do.  It's only against this background of human-derived expectations that the variations in the operation of a machine can become meaningful for other machines.  What the machine actually does needs to be compared to what it should do.  Humans program these informational evaluations into machines and humans re-program them all the time.  Life (really by definition) is the only thing that can invent and evaluate the new possibilities for what a set of machines could do, which means that it's the only force that can change the inter-machine wiring diagram in order to make it carry out a new goal.

For Simondon, the informational regulation of machines means incorporating the living, the virtual, into their operation.  And with this piece, we can finally bring the whole scope of his METO project into view.  His goal is for our culture to incorporate the technical knowledge of how machines and humans are actually causally coupled.  We need to create people who understand how the machines are hooked up, how they influence us and how we influence them, what tendencies they have and what goals they can serve -- a sociologist of machines, a mechanologist, a cybernetician.  As I mentioned at the beginning, these would be new "technical individuals" in Simondon's causal definition.  Their evaluation of the possibilities afforded by different configurations of the ensemble of machines makes them a necessary element in machine function, one that can close the causal loop by which a new mega-machine defines its own finality.  Their thought becomes an essential node in the exact type of feedback system that defines a technical individual.  This makes them the virtual, informational, counterpart of the traditional artisan.  

However, the role of this new technical individual is not, as Wiener dreamed, to serve as a technocratic philosopher king.  Simondon wants many of these individuals to grow from a general culture.  Ultimately, he'd like to see a technical understanding so ubiquitous that everyone can think about how technology shapes their aims and how they might, in turn, repurpose that technology to new ends.  Which is to say that he's not interested in creating a new technocratic class of cyberneticians dedicated to optimizing society according to some fixed, homeostatic, metric.  Instead, he's interested in giving us the knowledge to effectively govern ourselves, that is, to define what's important, and change our minds about it.  So finally, the goal of Simondon's new culture is not a better realization of a particular homeostasis, but a realization that all goals are constructed, and that finality is just one mode of approaching life, a technical one.
