Tuesday, October 24, 2023

What is a Machine?

This is the question that animates the first section of Gilbert Simondon's On the Mode of Existence of Technical Objects.  Simondon's answer to this question is complex and original, especially considering that he laid his theory out in 1958.  In fact, anyone critically pondering the current breathlessness around generative AI could benefit from considering his 65-year-old critique of early cybernetics' facile identification of man with autonomous machine.  His ideas are so clarifying because he refuses our habitual way of looking at the machine; it is poorly understood if we reduce it to a functional black box (whether a physical or an algorithmic input/output table) that fulfills some human-appointed task.  Instead, Simondon claims that to understand the machine, we have to look at it in itself, and not just as an intermediary that serves another purpose.  We have to look inside and see how it actually works.  And he backs up this striking change in perspective with a wealth of technical examples detailed enough to have sent me scrambling to Wikipedia to keep up.  For instance, we learn in detail about the diode (and its evolution through the triode to the pentode), the Guimbal turbine, and the audiometer, to cite a few examples, as well as why it's called an internal combustion engine (in retrospect kinda obvious, but not something I ever considered -- in a steam engine, the combustion happens outside the chamber).  Simondon discusses such details at length primarily in order to show us the actual causal interactions of energy and information that make each machine work.  But the examples also enable him to examine how these interactions change over time as technology evolves.  In fact, the history of the machine's development plays an integral role in Simondon's theory of the precise way in which technical objects exist as such.

This probably all sounds pretty abstract.  And you can see a confusion in the last sentence between "machine" and "technical object".  So let me start by giving a rough-and-ready definition of Simondon's machine, and clarify some of his terminology.  For Simondon, the machine is a specific type of technical object, which he calls the technical individual.  There are also technical elements, which compose the parts of machines, and technical ensembles, which consist of several machines.  Each of these technical objects is both a material thing and a functional thing.  But what makes them "technical" is essentially the intersection of these two worlds.  Roughly speaking, a technical object is a material object that has some human thought baked into it.  It's this "designedness" that distinguishes it from a natural object (or perhaps more accurately, that gives it a technical mode of existence, alongside its natural mode).  Conversely, a technical object is not just a thought -- an abstract idea or diagrammatic function -- but an idea that has been made concrete, that has been materialized in a particular way.

In a sense, this is kinda obvious.  At least at this moment in history, how would we talk about the products of technology without talking about humanity?  But Simondon's point is not simply that humans fabricate technical objects, which then exist somewhere between the ideal and the natural world.  He sees technical objects as evolving from the abstract realm of thought towards the concrete realm of the natural world.  That is, their mode of existence also implies a trajectory of technical progress.  However, this definition of progress shouldn't be confused with our everyday notion of better, faster, stronger, and cheaper.  The two concepts may often overlap, but Simondon's definition considers progress from the point of view of the machine itself, irrespective of whether today's machine fulfills its human-appointed task in a way we value more highly than yesterday's.  "Progress" is concretization.

How exactly does this work though?  To really understand him, it's important to study Simondon's examples in detail.  Unfortunately, the details would take us too far afield.  So instead, let me describe the process generally, but with reference to a single example.  Consider an engine.  The abstract idea of an engine is pretty straightforward.  You heat something and it expands and pushes on something.  Then you let it cool and it contracts and pulls.  After that you repeat the cycle.  An abstract engine has various functional parts we can represent in a diagram.  We need a chamber with a moving piston.  We need a source of heat and a cooling reservoir held at different temperatures.  But as the Feynman Lectures admirably demonstrate with their example of the rubber-band engine, we have a lot of leeway in choosing what material structures will fill these roles.  This indeterminacy is obviously what makes the physicist's black-box functional diagram of the engine "abstract".  The ideal engine doesn't have any friction, it's reversible, and it's constructed of ideal materials that don't deform under pressure or store heat in themselves, or ... do anything other than what they're supposed to do.  It's a spherical cow.  In this abstract engine, each part has only one role, and it plays that role perfectly, without any unintended side effects or consequences for how the other parts complete their task.  Obviously, no concrete engine fulfills these criteria.  But when we start building engines, we start off trying to build them to match the abstract idea we have of them.  We make each material piece of the concrete machine correspond as closely as possible to a single function in the abstract machine.  And we try to isolate the operation of these pieces so that they only interact with one another in the simple ways specified in the abstract machine (e.g. the piston should move, but not heat up, or if it heats up, it should not transfer this heat to the chamber).  However, as we build more sophisticated engines from different materials, made to serve different applications, we have to start taking into account the various "side effects" of any particular materialization of our design.  These side effects are really just the inescapable consequences of the full laws of physics being applied to the concrete materials of our machine.  None of our actual parts can really be nothing but the Platonic Form of their functional role.  Because they are material structures, they always end up containing something more than they were designed for.  They are subject to gravity and friction, and have some chemical composition and structural integrity that does not bear infinite cycles of heating and cooling, etc.

Of course, we can try to improve our concretization of each part so that it more closely matches our abstract diagram.  But Simondon observes that this is not actually how technical objects progress (i.e. become more concrete).  Changing the particular way an abstract part is materialized leaves the technical object at the same level of overall concreteness.  The only way the object can become more concrete is through creatively using, rather than minimizing, these side effects.  In other words, progress happens when the pieces of the machine begin to leave behind the single abstract function they originally played in the designer's notebook and have their concrete properties directly incorporated into the design in a positive and productive way.  A more concrete machine synthesizes functions held distinct at the level of the abstract machine.  Each piece of the concrete machine is designed to play more than one abstract functional role, and conversely, any given function will be carried out by several pieces at once.  In this way, the margin of indeterminacy that applied to the materialization of each abstract functional role is reduced; if neither of two functions can occur without this particular material part, then the part is rendered more concrete, more necessary to the machine's functioning, and less abstract.  The example Simondon gives in the case of the internal combustion engine is the way the cooling fins on the chamber also come to serve as structural reinforcement, strengthening its capacity to contain the explosion happening inside it.
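
To make the contrast a little more tangible, here's a toy model in Python.  To be clear, this is my own illustration, not anything in Simondon's text: I treat a machine as a mapping from material parts to the abstract functions they serve, and measure "concreteness" as the average degree of functional synthesis per part.

```python
# Toy illustration (mine, not Simondon's formalism): a machine as a
# mapping from material parts to the abstract functions each serves.

def concreteness(machine: dict[str, set[str]]) -> float:
    """Average number of abstract functions served by each part."""
    return sum(len(funcs) for funcs in machine.values()) / len(machine)

# Abstract design: one part per function, interactions minimized.
abstract_engine = {
    "piston": {"transmit_force"},
    "cylinder_head": {"contain_explosion"},
    "cooling_fins": {"dissipate_heat"},
}

# Concretized design: the fins on the head also contain the explosion
# (Simondon's own example of functional synthesis).
concrete_engine = {
    "piston": {"transmit_force"},
    "finned_cylinder_head": {"contain_explosion", "dissipate_heat"},
}

print(concreteness(abstract_engine))  # 1.0 -- each part plays one role
print(concreteness(concrete_engine))  # 1.5 -- cooling and containment
                                      # now converge on a single part
```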

At the limit of this process of concretization, the technical object converges with the natural object.  Every piece becomes so tightly related to all the others that all the possible interactions between pieces become necessary for it to function.  No material piece can be considered in isolation, unless it be the entirety of the machine.  The technical individual (aka the machine) comes together as an indissoluble individualized unit precisely when the causal interactions between all its pieces form a self-stabilizing feedback loop.  Each part becomes so co-adapted to the others that none would function in isolation.  But at the same time, the effect of their mutual functioning guarantees their mutual conditions of functioning, like a snail that secretes its own environment.  In short: the technical individual arises in the manner of a vortex.  It stands up like a tipi.  At that point, all the causal effects described by all the various sciences that apply to the functioning of the machine have been designed into it from the beginning, so that the abstract functional diagram of the machine also converges with our full scientific understanding of its materialization.  While this is clearly a limit case we never reach with our technology, Simondon claims that machines naturally progress in the direction of greater concreteness when they can.
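
The shape of this feedback loop is easy to caricature in code.  Here's a minimal simulation -- my own toy numbers, loosely inspired by Simondon's Guimbal turbine, where the water that drives the turbine also carries away the generator's heat -- showing how a machine's own functioning can settle into a stable regime of functioning.

```python
# Toy feedback loop (my own numbers): operation produces waste heat,
# but the same flow that powers the machine also carries heat away.

def step(temp: float) -> float:
    waste_heat = 10.0      # heat added per cycle of operation
    cooling = 0.3 * temp   # heat removed by the working flow; a hotter
                           # machine sheds proportionally more heat
    return temp + waste_heat - cooling

temp = 90.0                # start far from the operating point
for _ in range(40):
    temp = step(temp)
print(round(temp, 1))      # 33.3 -- the loop settles into a fixed point
```

The point of the cartoon is just the loop's structure: the condition of functioning (a tolerable temperature) is itself an effect of the functioning.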

Progress, even in Simondon's technical sense, is not inevitable.  Under what conditions does the technical object become more concrete?  That is, how do machines evolve?  It's really to fully answer this question that Simondon builds his whole tripartite theory of the technical object (element, individual, ensemble).  First, he discusses technical evolution at the level of the individual (the machine).  His idea is that a machine placed in a relatively static environment won't evolve or progress in his sense of the term.  Such a machine might change -- in human terms it might even improve dramatically -- but it will not exhibit the type of creative evolution powered by a structural synthesis of functions that defines Simondon's idea of progress.  In a static and controlled environment, the machine will only become over-adapted to its conditions (he calls this hypertelic -- though he may be using this word in its root sense of hyper-telos).  The functioning of each part might be improved, but there won't be a discontinuous and creative jump to a new functional synthesis.  In order to create such a jump in concreteness, the machine must confront an inherently dynamic environment, one defined by the fact that it reacts back upon the machine.  This can only come about when the machine's human-defined task collides with the continuous variability of the world in which it is supposed to carry this task out.  Such unstable conditions prevent the specialized, over-adapted machine from functioning and require that any successful machine create a sort of buffer around itself that is stable enough to allow it to function.  In other words, the functioning of the machine will create the environment in which it is able to function.

Here we're clearly looking at the same vortical structure we discussed earlier, from a slightly different angle.  Simondon refers to what I've called a buffer as an "associated milieu" that surrounds the machine.  Each true technical individual requires such a milieu to function, and conversely, each associated milieu supports only a single technical individual.  The symbiosis between machine and milieu becomes a tight, self-regulating feedback loop that allows the machine to persist in the unstable environment at the intersection of human purpose and real-world variability.  And it's the existence of this milieu that allows the various functional parts of the machine to interact in the global and synergistic way we talked about earlier.  So technical progress depends on the stabilization of an unstable environment through the actions of something operating immanently within that environment.  Simondon connects the idea of the associated milieu to the "ground" in gestalt theory, as well as to the Freudian unconscious (and the Deleuzian virtual, we might add).  But his discussion of this is very compressed and tangential to his main point about technical evolution.  So, despite the fact that the existence of this milieu, and its distinction from the machine itself, bring up a host of interesting philosophical questions related to the BwO, I'm going to skip over them for now.

So, technical elements evolve by interacting with a variable material world until they come together in a moment of concrescence to form a technical individual.  Is that the end of the story then?  Do these individuals go on to produce other individuals like themselves, with perhaps some variation?  Have we turned Darwin loose among the machines?  Simondon's answer is no -- individual machines do not reproduce themselves.  In his view it takes an entire ensemble of machines to produce the elements needed to construct an individual.  On the one hand, this seems like common sense.  My computer does not spontaneously produce another version of itself (Apple shareholders would be very disappointed if it did).  In fact, while I can design a new computer with this one, actually materializing it requires a globe-spanning supply chain of truly enormous complexity that produces everything from its silicon to the metal in its USB dongles.  On the other hand, it seems like this might be a matter of semantics.  I mean, couldn't we just call the collection of machines a machine itself?  Maybe that mega-machine doesn't exactly reproduce itself, but it sure seems to grow at least, metastasizing like capital itself.  Perhaps the ensemble is a kind of evolving, quasi-organic individual in its own right?

Simondon, however, is having none of this vague thinking by analogy.  One of the great contributions of his theory is his insistence that we need to examine the actual causal connections that allow a machine to function.  This is, after all, how he came to his definition of the machine to begin with; it is defined by a set of reciprocal causal relations between elements, and between the individual machine and its associated environment, that lead to a homeostatic feedback loop.  In other words, the pieces of an individual technical object are functionally inseparable.  If we were to cut into those connections, the integrity of the machine would fall apart (or at least change qualitatively).  This is not true of an ensemble.  An ensemble contains several individuals that are maintained as distinct.  It limits the causal interactions between them precisely so that no associated milieu develops amongst them that would weld them into a single machine.  The example Simondon gives is the scientific laboratory -- we try to make everything in it a separate and isolated variable that doesn't couple to any of the other parts.  Likewise, in the opposite direction, a technical element is, by itself, inert.  It is stable in the way a seed is stable, precisely because it contains all its possibilities inside itself and doesn't interact with the outside world much at all.  In short, the levels are defined by the density and distribution of their causal connections.  [The modern analogy for the same idea would be Tononi and Koch's Integrated Information Theory of consciousness, with the machine playing the role of the high-Φ conscious entity.]
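
If you want the analogy spelled out naively, here's a sketch.  The metric below is just my gloss, not Tononi and Koch's actual Φ: I represent causal couplings as directed edges and compare how densely connected each level of technical object is.

```python
# Naive gloss on the analogy (not actual IIT): causal couplings as
# directed edges; the levels differ in the density of those couplings.

from itertools import permutations

def coupling_density(parts: list[str], edges: set[tuple[str, str]]) -> float:
    """Fraction of possible directed causal links actually present."""
    possible = len(list(permutations(parts, 2)))
    present = len([e for e in edges if e[0] in parts and e[1] in parts])
    return present / possible

# An individual: every piece reciprocally conditions every other.
machine_edges = {("piston", "head"), ("head", "piston"),
                 ("head", "fins"), ("fins", "head"),
                 ("piston", "fins"), ("fins", "piston")}
print(coupling_density(["piston", "head", "fins"], machine_edges))  # 1.0

# An ensemble: individuals deliberately held apart, like lab equipment.
lab_edges = {("power_supply", "oscilloscope")}
print(coupling_density(["power_supply", "oscilloscope", "laser"],
                       lab_edges))  # ~0.17 -- sparse by design
```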

This rigorous separation of element, individual, and ensemble is the foundation of Simondon's theory of technical evolution.  Individuals don't directly produce new individuals (which, given their definition, could seemingly only happen as some sort of asexual budding?).  However, when integrated into ensembles, they do produce new elements that have all sorts of potentialities latent within them.  One simple example would be something like steel.  It takes a pretty serious ensemble to produce steel.  You need supplies of iron and carbon produced by various machines, as well as a complicated foundry machine.  But what you get out of this process is not just a hunk of metal.  The result is a special material with a certain "technicity" built into it.  By itself, the steel won't do anything, but, because of all the properties engineered into it by the ensemble that goes into its production, it can be integrated into all kinds of new machines that could not be constructed in any other way.  It takes a village to raise a skyscraper.  Thus, technology evolves through element-individual-ensemble-element cycles that gradually bootstrap whole new types of technology into existence.
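
We could caricature this bootstrapping cycle too.  The "recipes" below are hypothetical and drastically simplified (the "dynamo" is just my placeholder for whatever new machine steel makes possible), but they show the shape of the loop: each generation of ensembles produces elements that enable machines no prior generation could have built.

```python
# Toy bootstrap cycle (my own schematic; "dynamo" is a hypothetical
# placeholder): ensembles turn existing elements into new elements,
# whose latent technicity enables new machines in the next generation.

def evolve(elements: set[str], generations: int) -> set[str]:
    recipes = {
        frozenset({"iron", "carbon"}): "steel",
        frozenset({"steel", "copper"}): "dynamo",
    }
    for _ in range(generations):
        # Snapshot semantics: one full ensemble cycle per generation.
        produced = {out for ins, out in recipes.items() if ins <= elements}
        elements = elements | produced
    return elements

print(sorted(evolve({"iron", "carbon", "copper"}, 1)))
# ['carbon', 'copper', 'iron', 'steel'] -- steel, but nothing made of it yet
print(sorted(evolve({"iron", "carbon", "copper"}, 2)))
# ['carbon', 'copper', 'dynamo', 'iron', 'steel'] -- steel's latent
# potential actualized in a new machine
```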

The clear distinction between levels of technical objects also allows us to understand where the confusing analogy between man and machine comes from.  As Simondon repeatedly points out, humans and machines are only superficially similar, and coincide only when we consider them solely from a functional, utilitarian standpoint.  Since the underlying causal structures that accomplish these goals are totally different, it's only under very narrow and artificial conditions that the two will function identically.  Consider the contemporary case of "artificial general intelligence" in this light.  When we call a large language model like ChatGPT "intelligent", we are actually comparing two very dissimilar objects.  On the one side, we have a statistical model of human language encoded in silicon.  When we give it a set of plain English input symbols, it will give us another set of plain English output symbols that we deem relevant or satisfactory in some sense.  On the other side, we have an adult hairless chimp whose neural network is capable of performing a similar input-output task.  And all we know about what these two have in common is that a hairless chimp named Turing cannot tell which input-output relation belongs to which side, no matter how much time he spends typing into a chat window.  Is this result surprising though?  Turing is testing the difference between these objects in the most narrow possible way.  I won't say it's uninteresting or irrelevant to look at a human being as solely a language-processing algorithm, but you'll have to grant that it is a tad reductive, no?  Is human intelligence really reducible to statistically likely language processing?  Are humans really reducible to their intelligence?  In this comparison, we're treating humans in a way that Simondon would refuse to treat even a machine!
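
To put the narrowness of the test in programmer's terms (my framing, not Simondon's): the Turing test types both contestants as the same black-box function from strings to strings, and can only ever probe that interface, never the causal structure behind it.

```python
# Sketch of the point (my framing): the test sees only the interface
# str -> str, so wildly different causal structures can pass as equal.

from typing import Callable

Interlocutor = Callable[[str], str]

def turing_probe(a: Interlocutor, b: Interlocutor, prompts: list[str]) -> bool:
    """True iff the two are indistinguishable on these prompts."""
    return all(a(p) == b(p) for p in prompts)

def statistical_model(p: str) -> str:      # silicon side, crudely
    return {"Hello": "Hi there"}.get(p, "Interesting, tell me more.")

def hairless_chimp(p: str) -> str:         # carbon side, just as crudely
    return "Hi there" if p == "Hello" else "Interesting, tell me more."

print(turing_probe(statistical_model, hairless_chimp, ["Hello", "Why?"]))  # True
```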

Why would we ever take up such a narrow mode of analysis of either human or machine?  Simondon answers this question by pointing to the history of technology -- we confuse man with machine because, historically, man machined himself.  Consider the technology involved in pre-industrial, artisanal production.  At this level of technology, humans used discrete tools that were mostly incapable of directly interacting with one another.  How did these technical elements come together into a self-stabilizing technical individual in this era?  They only came together as a machine in Simondon's sense through the human individual, the artisan or craftsman.  It was the craftsman who understood the potential of his tools and materials and who provided for their mutual causal interaction in his workshop.  Thus the craftsman was a technical individual at the center of an associated milieu.  That is, he was a machine.  Of course, in some cases, it was a group of human individuals who coordinated their actions to form a single machine (consider, for example, the rhythm of harvesting with a scythe).  In either case the point is the same.  Humans made themselves coextensive with the causal network that accomplished the task.  Naturally, then, when ensembles of these various human machines began producing new technical elements that would later coalesce into new industrial machines, the human individual found his role as a technical individual usurped by the material machine.  In short, we can only imagine the machine replacing us if we have already "machined" ourselves.  And when this replacement happens, the human individual is forced to move either up or down the technical hierarchy.  We either move down to become elements of the new machine, ancillary parts serving a single role in its functioning, or we move up to become overseers of the ensemble of machines, coordinators of information produced by a system of machines held as distinct.  I assume that it's something like this latter role that Simondon will commend to us as the book unfolds, but that's the subject of the next post.



Wednesday, October 11, 2023

Some Preliminary Comments on The Fold and a New Reading List

After dipping my toe into the source material, I proceeded on to Deleuze's discussion of Leibniz in The Fold.  Not surprisingly, this is a pretty difficult book.  Not only does Deleuze try to make a consistent system out of Leibniz's scattered and fragmentary work, but he also tries to connect it to Baroque painting, architecture, and music.  The first task alone would be monumental, since Leibniz not only wrote quite a lot, but wrote in a number of different contexts -- pure mathematics, physics, religion, philosophy -- which we would now consider incompatible, but which Leibniz clearly saw as harmoniously related.  The second part could likewise be a study in itself, especially since Deleuze does not confine himself to historical Baroque figures such as El Greco, but extends the concept to include "neo-Baroque" artists like Simon Hantai, Dubuffet, and others.  And then the basic thesis of the book is that these two sides, while they are completely distinct and each possess a complicated logic of their own, are nevertheless so inseparable as to be two sides of the same coin.  This, after all, is the concept of the fold -- two things that are doubled or folded over one another to form the non-dual.  So it's a lot to pack into a 137-page book.

Early on in reading the book, I decided to take a different approach than I usually have to reading Deleuze.  Instead of live-blogging it line by line, I read the whole thing through relatively quickly and superficially (i.e. in under a month).  The plan was to immediately begin again with a more detailed second reading.  Along the way, however, I began to feel like I hadn't read quite enough Leibniz source material to fully appreciate what was going on.  Since Deleuze makes frequent reference to Leibniz's New Essays on Human Understanding, and since this was one of only two complete books Leibniz ever wrote, I thought perhaps I should go through it before returning to Deleuze.  Unfortunately, what makes the essays New is that they are a commentary on Locke's mammoth An Essay Concerning Human Understanding.  And Lord, you can imagine where it goes from here.  I obviously enjoy reading in the history of philosophy, but 800 pages of Locke is a bit much even for me (though one interesting shortcut of sorts might be to read Dewey's commentary on the two works together).

Anyhow, after pondering the extent of this detour, I decided against it.  Instead, I've planned an even longer background reading list, but of exclusively modern authors.  There's a group of contemporary authors that Deleuze refers to frequently throughout his work, particularly when the context involves the question of individuation, which is one of the clear themes of The Fold.  Perhaps the most interesting aspect of Leibniz's monad is the way that each individual somehow contains the entire world.  In fact, containing everything is almost synonymous with being a true or unique or perfected or completely determined individual.  But how do you pack the infinite world into the limits of a finite being?  Obviously, you fold it up -- infinitely many times.  Not only is this folding a way of packing many things into a small space, but the process of folding itself creates an inside and an outside from what was previously a continuous fabric.

Whenever Deleuze discusses this process of folding, which we might equally call a process of differentiating, or of selecting, ordering, and filtering the world, three names inevitably appear -- von Uexküll, Ruyer, and Simondon.  I already have Simondon's On the Mode of Existence of Technical Objects sitting on the shelf.  I've been intending to read von Uexküll's A Foray into the Worlds of Animals and Humans: with A Theory of Meaning for at least a decade now because, well, how can you not want to read something with that title?  And while Ruyer is a more recent addition to the list, my interest in his work was piqued after reading Daniel W. Smith's review of Neofinalism.  So first I'll try to deepen my understanding of the concept of individuation by reading those three texts.  After that, I'd like to check out Bernard Cache, an architect whom Deleuze mentions repeatedly in The Fold, as well as to finally read Daniel W. Smith's collected Essays on Deleuze.  From what I've gathered over the years, Smith seems to be the best English-language commentator on Deleuze's philosophy.  Though given that Deleuze's US reception skews towards wanky literature department types, this is admittedly a low bar to hurdle.