Current state of the humanities in 3 minutes

Eva von Dassow clearly articulates the academic reality of the humanities today. I would imagine that as a professor of classical and Near Eastern studies her discipline is on the potential list of programs to be cut. Regardless, what she says about the plight of U of Minn could be applied to most state institutions. At my own institution, for example, we just cut the French program, and this happens to be at a university in Louisiana.

Questions of Substance

To treat Spinoza’s understanding of substance and how substance in turn relates to the attributes, God, and the modes, is far beyond the scope of a single post – perhaps even an entire manuscript – but a few suggestions will be offered that follow through on arguments made in previous posts (here and here). As usual, feel free to jump in with a comment (or email me if you prefer), no matter how far after the post date it might be. I’ll no doubt still be dealing with questions of substance and can use all the help I can get!

Before addressing Spinoza’s unique and truly radical understanding of substance it will be helpful to turn to Aristotle’s. Aristotle, like Spinoza, understands substance as that which individuates something and determines what it is to be that thing; that is, the essence of the thing. Substance is also not to be confused with matter, for Aristotle, since as pure potentiality matter can assume contrary forms (see Metaphysics 1050b28, ‘the same thing [as matter] can be potentially both contraries at the same time’), whereas substance determines what it is to be a particular thing and it cannot be other than that thing. Spinoza argues along very similar lines. Substances, as attributive substances, cannot be conceived in any other way than through themselves. This latter point is crucial since the attributes, on Spinoza’s reading, are radically distinct from one another and can be understood solely through themselves and not in their relations, whether relations of compatibility or incompatibility, with any other attributes. The attributes are thus not to be understood in the manner of conceptual determinations, whereby what it is to be this determinate attribute involves a relationship to what it is not. It is indeed true that Spinoza famously claimed that ‘all determination is negation,’ but this form of determination is what characterizes, as an earlier post sketched (here), the actuality of modes rather than the reality of the attributes as substance. Aristotle, by contrast, did understand substance as a conceptual determination, and hence in understanding this determination one can subsequently affirm, for example, that a dog, as dog and unlike a human being, cannot be happy since a dog is not rational. Spinoza’s ontology of substance is therefore a truly affirmative ontology of immanence since substance cannot be conceived by way of anything other than itself (hence not by anything transcendent) nor does it entail any negation.
It is no wonder then that Deleuze frequently referred to himself as a Spinozist.

But what then is the relationship between substance and the attributes if it is not one of conceptual determination? I’ll make two passes, two arguments, to attempt to answer this question. The first will be Deleuze’s largely Gueroult-inspired answer. The second I’ll attempt to tease out of Spinoza’s texts alone. In Deleuze’s review essay of Gueroult’s first volume on Spinoza’s Ethics, Deleuze argues that what is important about Gueroult’s approach is that it doesn’t begin with the idea of God (God enters the scene with the sixth definition and the ninth, tenth and eleventh propositions). Does this mean that the first six definitions and eight propositions are inessential to Spinoza’s project – mere preliminary work Spinoza simply had to get out of the way before the real work began? For Gueroult and for Deleuze the answer is a definitive ‘no’. When the answer is yes, Deleuze argues, we get

…two misreadings of the attribute: 1) the Kantian illusion that makes attributes forms or concepts of the understanding, and 2) the neo-Platonic vertigo that makes attributes already degraded emanations or manifestations.

It is at this point where the nature of the attributes as conceived through themselves, or the ‘logic of real distinction’ in contrast to the logic of numerical distinction, comes into play. The attributes are indeed really distinct from one another but they are not numerically distinct. We have difference without negation, or with the attributes we have what Deleuze will call a substantive multiplicity:

The logic of real distinction is a logic of purely affirmative difference and without negation. Attributes indeed constitute an irreducible multiplicity, but the whole question is what type of multiplicity. The problem is erased if the substantive ‘multiplicity’ is transformed into two opposed adjectives (multiple attributes and one substance).

We are back with the problematic, with a substantive multiplicity, and thus to understand God as absolutely infinite substance we need also to understand how God is related to the problematic, to substantive multiplicity. We gain a sense of how Deleuze and Guattari understand this relationship when they claim, in A Thousand Plateaus, that God is a lobster, a double articulation. It is all too easy to underestimate the philosophical importance of this claim. We see it at work in the context of Deleuze’s essay on Gueroult, for example, where the first eight propositions correspond to the first articulation; or, as Deleuze puts it, ‘the first eight propositions represent a first series through which we ascend to the differential constitutive elements’ – the attributes. As Deleuze had stressed earlier in the essay, there is ‘no ascension from attributes to substance…to absolutely infinite substance’; rather, there is an ascension through a ‘regressive analytic process’ to the ‘differential constitutive elements’ themselves, to the substantive multiplicity. 
Then there is the second articulation, the second series found in the ninth through eleventh propositions ‘through which,’ Deleuze argues, ‘the idea of God integrates these elements and makes clear it can be constituted only by all these elements together.’ The attributes, as a multiplicity of incommensurable and really distinct entities, come to be integrated by the power of causa sui whereby ‘essence is the cause of the existence of substance and the cause of the other things that derive from it.’ To clarify (I hope), the ‘regressive analytic process’ arrives at the attributes as constitutive elements by deriving them from the affirmation of infinite substance as conceived only through itself, showing that any determinate modification or affection of substance is not conceived through itself but through another, and hence the really distinct multiplicity of attributes; and then the integration of these attributive substances constitutes the existence of an absolutely infinite substance – God. Understood in this way, God as the power of causa sui is both the condition that enables the regressive analytic process that leads to a multiplicity of really distinct attributes – first articulation – and the conditioned that is the integration of this multiplicity – second articulation. God is self-caused, as Spinoza argues, or God is a lobster, a double articulation, as Deleuze and Guattari argue.

I now want to make the second pass, the second articulation so to speak, and in doing so hopefully clarify my take on Deleuze’s reading of Spinoza. First, I must admit a fondness for H.F. Hallett’s interpretation of substance as ‘absolutely indeterminate,’ or, to recall an earlier post, objectively indetermined. Hallett’s reading is by no means the consensus reading, but there are two important things going for it. First, since God is defined as absolutely infinite (1D6), God can in no way be limited or be in any way determinate, for reasons mentioned above. This is also why God is absolutely infinite rather than infinite in its own kind, as the attributes are, since the latter would require being a determinate form of the infinite and hence a form that could (when understood conceptually by way of the understanding – namely the infinite mode of understanding) be related to what it is not, what is other than it. Our second reason follows from a claim Spinoza makes in a letter to Jelles (letter 50) that anyone who ‘calls God one or single has no true idea of God’ because, as we’ve already noted, all determination is negation. With this in place let’s turn to the scholium to 2P7 – the proposition that sets forth the famous parallelism of ideas and things.
In the scholium to this proposition Spinoza says that ‘the thinking substance and the extended substance are one and the same substance, which is now comprehended under this attribute, now under that.’ To clarify by way of an example, Spinoza claims that a ‘circle existing in nature and the idea of the existing circle, which is also in God, are one and the same thing, which is explained through different attributes.’ Though the circle can be conceived in two ways, as an extended circle actually existing in nature or as the idea of this circle, each way reflects ‘one and the same order, or one and the same connection of causes.’ Spinoza then reminds the reader that the idea we have of the circle is only a mode of thought, a mode caused by another mode, and so on to infinity, and the circle as extension is caused by another mode, the drawing hand, and so on. He concludes this scholium by stating that ‘God is really the cause [of the parallel order of causes] insofar as he consists of infinite attributes. For the present, I cannot explain these matters more clearly.’ Gueroult will argue that this missing explanation is to be found in 2P21S and in 3P2S, but what one finds there is simply a reference to 2P7S and not an explanation of the manner in which God is ‘really the cause’ of the parallel order of mental and physical causes. Needless to say, a large body of literature has been devoted to trying to make sense of 2P7 and to providing the explanation Spinoza doesn’t offer.

It is at this point where Deleuze’s emphasis upon God’s essence being God’s power as self-cause, as double articulation, or what I would call the power of self-ordering becoming, comes in as a possible explanation. As absolutely indeterminate substance, God as the power to exist is, in the first articulation, the power to exist in infinitely many ways, and hence the absolutely indeterminate is drawn into an infinite number of ways of actualizing the absolutely indeterminate – that is, the multiplicity of attributes that are neither one nor multiple. In the second articulation these ways are actualized as a series of infinite causation, whereby determinate existents require the existence of another determinate existent, and so on – for example, the series of the modes of thought and the series of the modes of extension.

To bring this already long post to an end I want briefly to tie some of the points to what was said in earlier posts by addressing a few questions (I’m not being exhaustive here of course):

  • Is God a being, or can we read Heidegger’s ontological difference into Spinoza whereby God is the Being that is not to be confused with any beings?

Put bluntly, no, God is a being. However, as the double articulation makes clear, coupled with Hallett’s reading of Spinoza, God is a being whose essence is the power to exist (see 1P11S), and this power is absolutely indeterminate. This is in sharp contrast to Aristotle, for whom essence is not an absolutely indeterminate power to exist but rather a determinate form of existence. There is no place for causa sui in Aristotle’s thought. Therefore, while God is the singular and unique being whose essence entails ‘an absolutely infinite power of existing,’ this power of existing, as Deleuze notes, is neither predetermined by ideas or models in the understanding nor separate and distinct from ways of existing, from attributive substances. God’s being contains no other reality than the attributes, and yet God’s being exceeds our everyday understanding of beings insofar as it consists of an infinite number of attributes while we are only aware of two (thought and extension). This brings me to the second question.

  • Is God the anhypothetical absolute that serves as the foundation for Spinoza’s deductive method (his Ordine Geometrico Demonstrata)?

Yes, but here too the axiomatic method that begins with God as an anhypothetical absolute supervenes upon the regressive analytic that resulted in a multiplicity of attributes that are neither one nor multiple. To restate this in earlier terms, the second articulation that gives us God as an integration of the multiplicity of attributes does indeed give us a foundation for the axiomatic method, but since it is a foundation that supervenes upon the problematic and objectively indeterminate multiplicity of attributes, this axiomatic method that follows will be both necessary for and insufficient to the task of determining the objectively indetermined (or the absolutely indeterminate). To restate this point we could take the title of this post, questions of substance. A question of substance is not exhausted by the answers it receives – these answers supervene upon the question, and hence they are not arbitrary answers, but they do so without eliminating the question, the problematic, itself. And finally,

  • What is the role of the common notions?

A full answer to this question would entail addressing Spinoza’s three kinds of knowledge (and Deleuze’s essay “Spinoza and the three ‘Ethics’” if we are to continue to track his reading of Spinoza), but it is brought up at a crucial point early on in the Ethics, in the long scholium to 1P8 (“Every substance is necessarily infinite”). Spinoza argues in reference to 1P7 (“It pertains to the nature of substance to exist”) that ‘if men would attend to the nature of substance, they would have no doubt at all of the truth of P7. Indeed, this proposition would be an axiom for everyone, and would be numbered among the common notions.’ If we would only attend to the nature of substance, but we don’t! Instead, we attend to the random encounters of our everyday lives, to the casual relations between modes and the haphazard patterns of our experiences. In short, we are too focused upon the singular and determinate aspects of our particular lives to ‘attend to the nature of substance’ itself. It was for a very similar reason that Spinoza abandoned the TIE, as the previous post argued, when he ‘discovers and invents’ the common notions. Recalling that the ethical project of the TIE was to acquire an eternal knowledge by way of the knowledge of our determinate minds and any singular, determinate truth, we can now see why this failed and why the common notions were seen as key to a solution. Any effort to attain knowledge of the eternal based upon the axiomatic and determinate alone will fail precisely because it supervenes upon the problematic multiplicity of attributes. One cannot axiomatically deduce an answer to a question of substance!
With the common notions in hand, however, the first kind of knowledge, the determinate and singular knowledge of our bodily lives in the world (the knowledge that keeps us from seeing that ‘it pertains to the nature of substance to exist’) is drawn into a problematic common knowledge (second kind of knowledge) that then comes to be actualized as the third kind of knowledge, the knowledge that finally releases us from things which are ‘liable to many variations and which we can never fully possess.’ Much more is needed here, I realize, but hopefully the general idea of how this would go is clear enough.

Bye Bye Middle Class

22 Statistics That Prove The Middle Class Is Being Systematically Wiped Out Of Existence In America.

Although this link doesn’t do much more for me than to validate what I already know, it nicely puts all the stats into one spot so that I can come back to them. Not coincidentally, I would add, the number of tenure and tenure-track jobs has been cut in half over the same 35+ year period these statistics track. The decline of tenure was pointed out in this Chronicle of Higher Ed story a few weeks back.

Ordine Geometrico Demonstrata

In the next few posts I’d like to develop a few arguments concerning Spinoza’s method, hence the title of this post, then move on to Spinoza’s notion of substance as a radical aberrant monism, and finally touch upon the third kind of knowledge as the solution to the problems with which Spinoza began his Treatise on the Emendation of the Intellect. These will be sketches of arguments, or trial runs so to speak, and I will not address the voluminous secondary literature to the extent a published argument would need to do so. This blog is for me a working blog, in the same vein as Shaviro, and not a depository for finished work, so feel free to point out the dead ends I’m venturing into, or point out secondary sources, etc., that should not be ignored, or that have already said what I’m saying here.

As I pointed out in Philosophy at the Edge of Chaos, Spinoza begins his Treatise on the Emendation of the Intellect (hereafter TIE) and the latter half of part 5 of the Ethics with the same concern: namely, to show how we can overcome suffering and misery and live a good life. The first paragraph of the TIE reads as follows:

After experience had taught me the hollowness and futility of everything that is ordinarily encountered in daily life, and I realized that all things which were the source and object of my anxiety held nothing of good or evil in themselves in so far as the mind was influenced by them, I resolved at length to enquire whether there existed a true good, one which was capable of communicating itself and could alone affect the mind to the exclusion of all else, whether, in fact, there was something whose discovery and acquisition would afford me a continuous and supreme joy to all eternity.

As Spinoza moves into the latter half of part 5 of the Ethics, at 5P20S, a similar concern is expressed:

‘…it should be noted that sickness of mind and misfortunes take their origin especially from too much love toward a thing which is liable to many variations and which we can never fully possess.’

There are two points to make right off the bat. First, in both cases what concerns Spinoza is how the mind is influenced by things, its attachment to things, especially things which are ‘liable to many variations and which we can never fully possess.’ Secondly, Spinoza begins the TIE with a classical ethical concern – how to live a good life – which then becomes the very title of his masterpiece. This fact should not be overlooked, and as we come to an understanding of Spinoza’s metaphysics we should remember to situate it within Spinoza’s broader ethical concerns. The ethical claims at the end of the Ethics are not to be understood as an addendum to Spinoza’s metaphysical project, an addendum Spinoza would have done better to leave out of the work entirely (as Jonathan Bennett has argued); to the contrary, if the Ethics is to be interpreted as an effort to realize the task with which Spinoza began the TIE, then the ethical claims ought instead to be placed at the center of Spinoza’s project in the Ethics.

But is Spinoza continuing in the Ethics with an effort to realize the task he set for himself in beginning the TIE? I have argued that this is indeed what Spinoza is doing in the Ethics. The subsequent question then is why Spinoza abandoned the TIE and started over with the Ethics, developing the arguments this time by way of an axiomatic, geometric method (Ordine Geometrico Demonstrata). To begin to answer this question involves understanding how the mind itself is related to things, and in particular to the eternal truths that will eventually emerge as the way to move beyond the ‘sickness of mind’ that results when we become overly attached to that ‘which we can never fully possess.’ In the TIE Spinoza’s effort was to demonstrate how a finite, discrete mind could come to fully know and possess an eternal, timeless truth. When Spinoza comes to an understanding of truth itself he claims that it ‘is nothing but the objective essence itself, i.e., the mode by which we are aware of the formal essence is certainty itself. And from this, again, it is clear that, for the certainty of the truth, no other sign is needed than having a true idea’ (§35, II/15). In other words, and to avoid the skeptical problem of the criterion, Spinoza argues that the truth of an idea does not depend upon some independent criterion or method which will verify and justify this truth, which would lead to the skeptical question of what justifies this independent criterion, and so on; instead, the very mode in which a true idea is grasped is the truth and certainty of this idea. But how are we to know whether the ‘very mode in which a true idea is grasped is the truth and certainty of this idea’? Key here for Spinoza is to begin with true definitions.
As Spinoza puts it, ‘that Method will be good which shows how the mind is to be directed according to the standard of a given true idea’ (§38, II/15), and these are to be the ‘true and legitimate definitions.’ Thus, late in the TIE Spinoza returns to the knowledge of eternal things and claims that

When the mind attends to a thought—to weigh it, and deduce from it, in good order, the things legitimately to be deduced from it – if it is false, the mind will uncover the falsity; but if it is true, the mind will continue successfully, without any interruption, to deduce true things from it. (§104, II/37-8).

It is therefore the activity of the mind itself, whether unimpeded or impeded from true, legitimate definitions, that is the only foundation for Spinoza upon which the truth of our thoughts is to be determined. But this is precisely where problems begin. If the axiomatic method is to succeed on the basis of true and legitimate definitions, it will be because of the power of the mind to proceed, ‘without any interruption, to deduce true things’ from these definitions; and yet Spinoza admits to lacking a clear understanding of the powers of the mind, and hence the proper place for the mind to begin upon its axiomatic path:

But so far we have had no rules for discovering definitions. And because we cannot give them unless the nature, or definition, of the intellect, and its power are known, it follows that either the definition of the intellect must be clear through itself, or else we can understand nothing. It is not, however, absolutely clear through itself… (§107, II/38).

In the final paragraphs of the TIE Spinoza attempts to work through this problem, to provide a way for understanding the powers of the mind. He begins first with an effort to understand the mind by way of the properties of the mind. Early in the TIE, however, Spinoza ruled out this approach. When Spinoza contrasts knowing something through itself with knowing it through its proximate cause, such as its properties, he favors the former and criticizes Descartes for understanding the mind in terms of its proximate, transcendent cause (i.e., God), and thus one can see he would resist reverting to that solution himself. As Spinoza would claim later in the Short Treatise, properties, or ‘Propria’, do ‘indeed belong to a thing, but never explain what it is.’ (ST 1.vi.6). What Spinoza needs, therefore, and what was lacking for him in the TIE, is a way of understanding how the knowledge of the eternal and infinite could be founded upon the essence of our singular, finite mind rather than upon something that transcends this mind (Propria, for example). Dissatisfied with the alternatives he had before him in the TIE, he would abandon this work and then, in the Ethics, approach these problems from a different perspective.

Of the few commentators to attempt to nail down precisely why Spinoza abandoned the TIE and moved on to the Ethics, Deleuze, in his Spinoza: Practical Philosophy, offers a simple explanation: ‘when he discovers and invents the common notions, Spinoza realizes that the positions of the Treatise on the Intellect are inadequate in several respects, and that the whole work would have to be revised and rewritten.’ (pp. 120-1). We can understand the implications of this discovery, and the resultant axiomatic method that emerges in the Ethics, if we recall the previous post on Deleuzian supervenience (a now slightly modified and corrected post). In his effort to understand the powers of the mind and its ability to move through an axiomatic process from true and legitimate definitions to further truths, and hence to escape the mind’s attachment to things which are ‘liable to many variations and which we can never fully possess,’ Spinoza encountered a problem similar to the one Lewis faced as he sought to use the tools of modal logic and semantics to move beyond the correlationist trap (though of course Lewis would not have used this terminology). In short, the common notions are neither to be understood as the clearly defined truths and definitions with which the axiomatic method begins, nor are they the truths one arrives at after successfully moving through the processes of deduction. They are, instead, to use again the terminology of the previous post, a ‘zone of objective indetermination’ (the problematic) upon which the axiomatic method supervenes but which remains irreducible to it. This accounts for another aspect of Spinoza’s method that Deleuze also stresses; namely, the role the scholia play in the midst of the axiomatic deductions.
For Deleuze ‘the use of the geometric method involves no problems at all’ (DR 323, n.21), and it is for this reason that Spinoza, on Deleuze’s reading, interspersed the scholia throughout the axiomatic deductions of the Ethics: the deductions fuel their necessity and inventiveness by supervening upon the problems raised in the scholia. And it is for this reason as well that Spinoza begins his Ethics not with a stated ethical concern as he did in the TIE, but with six definitions that lead to the definition of God as substance: ‘By God I mean an absolutely infinite being; that is, substance consisting of infinite attributes, each of which expresses eternal and infinite essence.’ (1D6). Substance, in other words, is to be understood not as an axiomatic given from which the remaining deductions follow, but rather as the problematic upon which the axiomatic deductions supervene. And it is with this approach in hand that Spinoza will attempt to address the ethical concerns with which he began the TIE. The next post will begin to sketch how this works.

Deleuzian Supervenience

In his account of necessity David Lewis proposes that if two worlds, W and W*, are exactly alike at time1 and the same natural laws apply in both, then at any later time these two worlds will continue to be exactly alike. As a good Humean, however, Lewis encountered what he claimed to be a damning problem, the problem of undermining futures. On Lewis’s reading of Hume, any claims or truths we make regarding the world, including claims concerning necessary laws, supervene upon a given distribution of qualities. There cannot be a change in this distribution without there also being a change in the claims or truths that supervene upon them. Given the laws of probability, the chance of a die coming up six is one in six. Three or four sixes may show up in a row, but given a large enough number of throws the relative frequency of sixes approaches one in six. These laws of probability therefore supervene upon a given distribution of qualities in the world up to and including time1. If there is a non-zero chance, however, that after time1 sixes come up every time, then that would affect the chance distribution at W at time1—it would be something higher than one in six, but this contradicts Humean supervenience. Given the case of an undermining future we would assign contradictory probability values, x and not-x, to the throw of the die at time1. In his response to this problem, Lewis proposes modifying the laws, but many have been unhappy with Lewis’s proposal. In his analysis of Hume in After Finitude, Meillassoux, following Badiou, would argue that the very laws themselves presuppose a totalized whole, an All, in order for there to be the regularities upon which these necessary laws supervene. If mathematics thinks the not-All, however, then there is no reason why Humean supervenience needs to stay the same or be different at a later time. The notion of an undermining future would be vacated of sense.
We may continue to axiomatize and mathematize the distribution of qualities in the world, but there is no All which assures their necessity, and hence no necessity to be undermined. The only necessity for Meillassoux is the necessity of contingency. In his response to the problem of undermining futures, John Roberts argues that what gives rise to the problem is the idealization of our knowledge of chance at time1. It is only under the assumption that we can assign a particular value to the chance of a particular event happening that we are led to a contradictory belief when the undermining future entails a different value and result. ‘But real evidence,’ Roberts claims, ‘never constrains these credences by specifying the objective chances of such events.’ (“Undermining Undermined,” p. 104) As Roberts clarifies, ‘if HS is correct, there could be such evidence only if there were no problem of induction,’ meaning that this evidence would have to ‘entail…contingent information about the future, something no evidence in principle available to creatures like us could ever do.’ (ibid.). By constraining evidence concerning chance and supervenience to ‘finite empirical cognizers,’ Roberts is able to block the reductio that results from undermining futures. In other words, people like us, finite cognizers, are simply unable to process all the evidence necessary to get the appropriate value, much less the undermining futures which we cannot even access, and thus we would be unable to generate the contradictory value. In doing this, however, Roberts fails to avoid the central critique of Meillassoux’s book. By calling upon the mathematical thinking of the not-All associated with Cantorian set theory, Meillassoux sought to escape the correlationist trap that has been in place since Kant—namely, that we cannot know an object as it is in itself, only as it appears in its relationship to us as finite empirical cognizers.
This is precisely what Roberts does, however, and the fact that Lewis himself was not attracted to the solution Roberts offers should give us pause. The reason for this is that Lewis sought, with the tools of modal logic, to do much of what Meillassoux and Badiou would like to do—break free from the limiting cages of finite cognizers and arrive at truths about an autonomous reality that is not correlated to a finite cognizer. As a good Humean, however, Lewis would no doubt not accept Meillassoux’s rejection of the problem of undermining futures, the problem of induction, and similarly Meillassoux would reject Lewis’s approach since Humean supervenience continues the correlationist legacy—knowledge of reality in itself is correlated to the various qualities of the world as they are related to a presupposed totality. This last claim is, I think, debatable, since Lewis’s metaphysics may well involve a reality of possible worlds that are non-totalizable. Lewis nonetheless continued to believe in the necessity of natural law; thus even if there is a transfinite set of possible worlds (a not-All), the laws, whatever they are in each world, would hold as a consequence of the totality of that world, and hence Meillassoux’s argument would resurface.
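As an aside, the frequency behavior in the dice example is easy to see in a short simulation. This is my own illustrative sketch, of course (the function name is mine; nothing here comes from Lewis or Roberts): short runs can wander, but over many throws the relative frequency of sixes settles toward one in six, precisely the sort of regularity in the distribution of qualities upon which, on the Humean picture, the probabilistic law supervenes.

```python
import random

def six_frequency(num_rolls: int, seed: int = 0) -> float:
    """Relative frequency of sixes over num_rolls throws of a fair die."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    sixes = sum(1 for _ in range(num_rolls) if rng.randint(1, 6) == 6)
    return sixes / num_rolls

# Small samples fluctuate (three or four sixes in a row are possible),
# but as the number of throws grows the frequency approaches 1/6 ≈ 0.1667.
for n in (60, 6_000, 600_000):
    print(n, round(six_frequency(n), 4))
```

An undermining future is then the (Humean-admissible) possibility that the run after time1 looks nothing like the run before it, which is what makes the assignment of a single objective chance at time1 problematic.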

At this point we can turn to Deleuze’s understanding of mathematics to clarify our take on Lewis’s position. As Daniel Smith has shown in his essay on the importance of mathematics for understanding Deleuze’s theory of multiplicities, and on how this theory in turn differs from Badiou’s theory of the multiple-without-One, Deleuze’s theory is informed by a tradition of problematics in mathematics, in contrast to the axiomatic approach favored by Badiou. (“Mathematics and the Theory of Multiplicities”). The difference becomes clear in Deleuze and Guattari’s very definition of the nondenumerable: ‘What characterizes the nondenumerable is neither the set nor its elements; rather, it is the connection, the “and” produced between elements, between sets, and which belongs to neither, which eludes them and constitutes a line of flight.’ (TP 518). The nondenumerable is problematic, for Deleuze, precisely because it constitutes problems that have, as Smith puts it, ‘an objectively determined structure, apart from its solution,’ and this objectively determined structure entails ‘a zone of objective indetermination’ that precludes its being reduced to demonstrative and axiomatic methods in mathematics. The ‘genetic and problematic aspect of mathematics…remains inaccessible to set theoretical axiomatics,’ and yet, through continual movements and translations, the problematic in mathematics gives way to axiomatic innovations and recodings (Smith offers the example of the translation of infinitesimals and the idea of approaching a limit in the calculus [an example of the problematics tradition] into the axiomatic epsilon-delta method developed by Weierstrass). We have, in short, what one might call Deleuzian supervenience, whereby the discretization of the axiomatic maps onto, or supervenes upon, the continuity of the problematic, yet the problematic forever exceeds the axiomatic: it is the ‘power of the continuum, tied to the axiomatic but exceeding it.’ (ibid. 466).
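Smith’s Weierstrass example can be made concrete. In the standard formulation (my gloss, not Smith’s notation), the problematic, dynamic idea of a quantity ‘approaching’ a limit is recoded into a static definition in which all reference to infinitesimals, or to any process of approach, disappears:

```latex
% Weierstrass's static recoding of the dynamic notion of a limit:
% "f(x) approaches L as x approaches a" becomes a claim about
% quantified inequalities, with no infinitesimals and no motion.
\[
  \lim_{x \to a} f(x) = L
  \quad\Longleftrightarrow\quad
  \forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x \;
  \bigl( 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon \bigr)
\]
```

Only discrete, quantified relations among inequalities remain on the right-hand side, which is why this definition counts as an axiomatic recoding of the problematic continuum rather than a mere restatement of it.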
Axiomatics, or what Deleuze will also call major or royal science, thus draws from problematics the necessity of inventing and innovating in response to the ‘objectively determined structure’ of the problem. Similarly, problematics, or minor or nomad science, calls upon axiomatics to actualize the solutions it lays out, if only indeterminately so. Deleuze and Guattari are clear on this point: ‘Major science has a perpetual need for the inspiration of the minor; but the minor would be nothing if it did not confront and conform to the highest scientific requirements.’ (ibid. 486). We can thus rethink Lewis’s problem of induction not as a problem intrinsic to the relationship between a finite cognizer and the distribution of qualities as they relate to this cognizer, but rather as an ‘objectively determined’ problem that exceeds the tools of modal realism and axiomatic logic. Lewis, in short, is encountering the necessity and insufficiency of invention. In the next post we’ll see that, on one interpretation, Spinoza abandoned his Treatise on the Emendation of the Intellect precisely because he, like Lewis, encountered the necessity and insufficiency of invention.

Latour on factishes and belief

Before moving on to work on Spinoza and the concept of aberrant monism, I want to add one more post on Latour. I hope that between this and previous posts there may emerge a relatively coherent picture of my reading of Latour. I also hope to indicate how Latour’s thought can become, and ought to become, an effective tool in countering a number of the presuppositions of contemporary neoliberal politics, or what Mark Fisher aptly calls capitalist realism. As usual, feel free to point out the errors of my ways.

As contradictory as it may seem, Latour argues that ‘construction’ and ‘autonomous reality’ are to be understood as synonymous. Among the neologisms Latour uses to elaborate this point is ‘factish,’ a combination of fact and fetish. A fact, traditionally understood, is autonomous and unconstructed. When Pasteur discovered the role microorganisms play in the process of fermentation he simply, on this view, came to recognize an autonomous fact that was already there, independent of the historical events involved in Pasteur’s efforts to locate these microorganisms. A fetish, by contrast, involves the projection of beliefs upon a mute, passive object. In both cases, according to Latour, what is maintained is the subject-object dichotomy. In the case of facts, objects are ultimately responsible for the success of Pasteur’s experiments; in the case of fetishes, subjects are the ones responsible for projecting beliefs onto objects. A factish, for Latour, is a type of action that does ‘not fall into the comminatory choice between fact and belief.’ (Pandora’s Hope, 306). Rather, a factish entails events; or, as Latour puts it, ‘I never act; I am always slightly surprised by what I do. That which acts through me is also surprised by what I do, by the chance to mutate, to change, and to bifurcate…’ (ibid. 281). In a nod to Deleuze, Latour claims that factishes are ‘rhizomelike,’ warning that ‘one should always be aware of factishes…[because] their consequences are unforeseen, the moral order fragile, the social one unstable.’ (ibid. 288).

A factish is thus neither an independent reality that comes to be discovered after a successful scientific experiment, nor is it merely the projection of human beliefs onto an inert object. A factish involves both human and nonhuman actors, and scientific experiments, as events, involve relations between actors that return more than any of the actors contributed singly (neoliberal understandings of the market ignore this fact, but the recurrence of crises, as David Harvey has argued, justifies Latour’s point). It is for this reason that Latour claims ‘an experiment is an event which offers slightly more than its inputs…no one, and nothing at all, is in command, not even an anonymous field of force.’ (ibid. 298)(Nor, for Latour, would an anonymous, impersonal market be in command). Latour acknowledges the influence of Whitehead when he uses the term event, most notably the use Whitehead makes of the term to replace ‘the notion of discovery’ and its attendant assumption concerning the ahistorical nature of objects and the historicity of human endeavors. When Latour defines an experiment as an event, therefore, he intends precisely to argue that this ‘event has consequences for the historicity of all the ingredients, including nonhumans, that are the circumstances of that experiment.’ (ibid. 306).

Despite all the apparent differences between modernism and postmodernism, Latour argues that they have each ‘left belief, the untouchable center of their courageous enterprises, untouched.’ (ibid. 275). In particular, the modernists, on Latour’s reading, felt the task of philosophy and science was to track down which of our beliefs are justified true beliefs. To this end, science comes in armed with facts to hammer away at any beliefs that are not in line with the facts. With postmodernism, on the other hand, science itself comes to be seen as nothing but beliefs that construct a reality, or ‘construction and reality are the same thing; everything is just so much illusion, storytelling, and make believe…’ (ibid.). And what we have then, ‘when science itself is transformed into a belief’ is ‘postmodern virtuality—the nadir, the absolute zero of politics, aesthetics, and metaphysics.’ ‘Virtuality,’ in short, is for Latour ‘what everything else turns into when belief in belief has run amok.’ (ibid. 287). When Latour argues that it is because factishes are constructed that they are so very real, therefore, he is saying something quite different from the postmodernist who claims that ‘construction and reality are the same thing.’ The difference has to do with who or what is acting. In the case of factishes, there is a ‘rhizomelike’ network of both human and nonhuman actors, and no one actor is in command, ‘not even an anonymous field of force.’ For the postmoderns, it is the power of belief that is in command, or the virtuality of the One as Badiou interprets Deleuze, and the autonomous nature of reality is nothing but a mode of belief or virtuality.

If Badiou’s critique of Deleuze is correct (as an earlier post indicated it was not), then it would seem that Latour would echo Badiou’s criticisms, at least on this point. As we unpack precisely what Latour is arguing for when he claims that it is ‘because it [a factish] is constructed that it is so very real,’ we will find that his position bears much in common with Deleuze’s project, as Latour himself recognizes, and thus that Badiou’s criticisms are directed at a ‘postmodern’ Deleuze that never was. Key to understanding how ‘construction’ and ‘autonomous reality’ are synonymous is what Latour calls ‘historical realism,’ whereby the reality of what is is inseparable from processes that increase or diminish the number of associations accumulated over time. The existence of entities is not all or nothing; rather, we ought to speak of ‘relative existence.’ As Latour puts it, ‘An entity gains in reality if it is associated with many others that are viewed as collaborating with it. It loses in reality if, on the contrary, it has to shed associates or collaborators (humans and nonhumans).’ (ibid. 257). For Deleuze the heterogeneous network of human and nonhuman associations that constitutes the relative existence of an entity is just what he calls a multiplicity, or what I have called historical ontology, the double articulation and movement that is inseparable from the reality of entities. Moreover, we could say that the reality of entities is a reality-effect of historical ontology, though an autonomous effect whereby historical ontology is a quasi-cause (as Deleuze understands this term). Using Deleuze’s terminology, the reality of an entity is inseparable from a double articulation, with the first articulation drawing a number of associations and links between human and nonhuman elements into a plane of consistency, and the second articulation actualizing this plane of consistency as a real, autonomous entity.
As Latour and Woolgar state it in Laboratory Life, ‘“reality” cannot be used to explain why a statement becomes a fact, since it is only after it has become a fact that the effect of reality is obtained.’ (p. 180). For Deleuze the multiplicity is the reality of the virtual that is the quasi-cause inseparable from the reality of entities that are its autonomous effects. A further and related concern of Deleuze is what he calls counter-actualization, or a problematizing history as I call it, whereby the virtual as quasi-cause becomes tapped, thereby problematizing the actual so as to allow for its possible transformation. Latour’s use of history in science studies shares a similar concern. In particular, by problematizing traditional understandings of science and the presuppositions it entails concerning the relation between beliefs and things, it could be argued that Latour is attempting to problematize and hence move beyond belief. Rather than attempt to justify beliefs through scientific facts or unmask beliefs as mere fancy and fetish, Latour sees ‘the role of the intellectual’ as the task of ‘protect[ing] the diversity of ontological status against the threat of its transformation into facts and fetishes, beliefs and things.’ (Pandora’s Hope, 291). In short, Latour, as with Deleuze, sees the ‘role of the intellectual’ as one of affirming multiplicity in order to counter the identification of multiplicity with the one or multiple of beliefs and things. And it is with this move that Latour’s thought has begun to engage more actively and critically with a number of the presuppositions of contemporary neoliberal politics, as I’ve tried to indicate briefly here and in previous posts.

Quick response to a quick response

Harman responded quite quickly to my post, and I just want to add a few things that I perhaps should have included in the initial post (I’m new to blogging – I suppose it’s obvious now). First, my post was intended as praise for, and an attempt to think through, aspects of Latour’s thought; it was not intended as a critique of Harman or OOO. I have the greatest admiration for the work Harman, Levi, and others are doing, and Harman especially is to be applauded for shining a light on Latour’s work. So if I came across in my post as unduly harsh or critical, rather than as trying to explain and understand why Latour may hold the positions he does, then I apologize and welcome all criticisms that point this out. My aim is to generate discussion rather than cut it off with a condescending ‘this is the way it is’ approach. I did not address Harman’s work in detail here simply because I was trying to lay out a reading of Latour rather than of Harman. I am more familiar with Levi’s work, and what he is doing certainly was not characterized by what I wrote; nor, I suspect, was Harman’s. In fact, I believe Levi can account for the autonomy of objects while avoiding what I see as Latour’s Humean skepticism concerning causation (though I’ll let Levi speak for himself). I was thus attempting to spell out why I think Latour argues in the way he does, and again I apologize if it came across as a harsh attack on work I admire. I frequently criticize the overly polemical nature of philosophical discourse, and so the last thing I want is to engage in it myself. When I turned to Nietzsche’s critique of Descartes, then, my intention was simply to highlight Nietzsche’s Humean skepticism of the cause-effect duality in order to shed light on why I think Latour may be inclined to reduce objects to their relations and to develop an alternative, Whiteheadian approach to understanding the underdetermination of theories by facts, or of objects by their relations.
I do appreciate Harman’s point about Latour and Whitehead’s anti-Bergsonianism – and the occasionalism that goes with this. That was an excellent point and I’m thankful to him for pointing it out to me.

As for the move from symmetry to asymmetry between humans and nonhumans that I wrote of: that may simply be where we have a disagreement, and I may well be on the wrong end. I attempted to lay out, in this and an earlier post, my thoughts as to why I think one needs to focus on Latour’s understanding of the stabilization of the event. And as for the philosophy of the event, I agree with Harman that the last thing philosophy needs is an institutionalized dogma whereby with heads Deleuze wins and tails you lose. If anything, at the moment I am more inclined to argue that Latour is right rather than Deleuze on a number of issues. I felt the same way Harman did when Derrida was in this institutionalized position, and I wouldn’t want Deleuze, or Latour for that matter, to end up there as well.

As for the third criticism, I did not have Harman in mind but a couple of other blogs where I had read this criticism (and I link to those sites), and so I was addressing criticisms other than his (though he accepts this one as well). Thus my use of the word ‘normative’ was not in reference to his work, and I’m sorry my post forced him to use it in his reply.

In Defense of Latour (and his neo-Humean ways)

As most know, especially those who would be reading this post in the first place, interest in Latour has increased dramatically. No longer an eccentric novelty – an anthropologist working at the Salk Institute and carving out a niche that came to be known as science studies (which in turn is to be contrasted with the sociology of scientific knowledge [SSK] school in Edinburgh) – Latour has more recently become a central figure in a philosophical movement on the ascendancy – OOO. Graham Harman is quite forthright in his praise: Latour is for him ‘the closest figure [he] can think of to the ideal object-oriented hero.’ This is not to say that Latour’s thought fits seamlessly into object-oriented ontology. It doesn’t, as I alluded to in an earlier post. In fact, there are a handful of criticisms that Harman and others make of Latour that I want to address briefly here, and will probably tackle at greater length in another context. Much of the thrust of these criticisms is diminished if one understands the role Latour’s ontology plays in setting out a study of science that does not reduce it to a branch of social constructivism, as Latour believes the SSK school does, and if one unpacks his politics as a compositionalism (which I’ll have to detail in a later post to keep this relatively short). But first the criticisms. The first two are from Harman; the third one will find here and here.

  1. Latour reduces objects to their relations.
  2. Latour begins with a symmetrical relationship between humans and nonhumans but ends with an asymmetrical relationship that gives power to humans and not objects.
  3. In doing away with the distinction between might and right Latour ultimately reduces normative force to causal force.

As for the first, it does appear that Latour is guilty as charged. But why is this a problem? Primarily because it fails to give objects their proper autonomy. Only if an object is something more than its relations (of translation, causation, transformation, etc.) can we account for the becoming of that object. Harman gives the example of Heidegger’s Being and Time. If new interpreters of this text were to come on the scene, ‘what they would be interpreting,’ Harman argues, ‘is Being and Time itself, not the sum total of other interpretation’; that is, not the sum total of Being and Time’s relations. Harman thus concludes that ‘an actor is not identical with whatever it modifies, transforms, perturbs, or creates, but always remains underdetermined by those effects.’ (Prince of Networks, 186-7). There are a few points to make concerning this conclusion. First, I agree completely that an object is underdetermined by its effects, and so does Latour on my reading. It was precisely on this issue that Latour broke ranks with the SSK school, and with David Bloor in particular. On Bloor’s account a scientific theory is underdetermined by the observed empirical relations, and hence something else needs to be added to the mix in order to account for why one theory is accepted over another when the facts themselves don’t mandate one over the other. Bloor argues that it is the social, or social relations, that accounts for the eventual acceptance of one theory over another. Latour rejects this solution just as he rejects the naïve scientific realist who continues to believe that it is the object itself that guarantees the eventual acceptance or rejection of a theory (I am not equating OOO with naïve scientific realism). Nonetheless, Latour does accept the underdetermination of theories by facts, a claim one can find in Whitehead, and Whitehead himself claims that this is the indispensable legacy of Hume. As Whitehead puts it:

This conclusion that pure sense perception does not provide the data for its own interpretation was the greatest discovery embodied in Hume’s philosophy. This discovery is the reason why Hume’s Treatise will remain as the irrefutable basis for all subsequent philosophic thought. (Modes of Thought, p. 133).

So why is it not the object itself, not in the naïve realist sense, but in the OOO sense, the object as withdrawn, that accounts for the eventual interpretations and relations? I think for Latour the answer again stems from Hume, and in particular from Hume’s skepticism concerning causation. But really it is a skepticism concerning Descartes’ argument in the second Meditation. When Harman argues that ‘an actor is not identical with what it modifies,’ we’re back with Descartes’ claim that the cogito is not identical with its thoughts. Nietzsche found this argument unconvincing. As Nietzsche argues,

Modern philosophy, as epistemological skepticism, is secretly or openly Anti-Christian, although (for keener ears, be it said) by no means anti-religious. Formerly, in effect, one believed in “the soul” as one believed in grammar and the grammatical subject: one said, “I” is the condition, “think” is the predicate and its conditioned—to think is an activity for which one must suppose a subject as cause. Beyond Good and Evil

Then there is this quote (from Will to Power #484):

“Something is thought, therefore there is something that thinks”: this is what Descartes’ argument amounts to. But this is tantamount to considering our belief in the notion “substance” as an a priori truth: –that there must be something “that thinks” when we think is merely a formulation of a grammatical custom which sets an agent to every action. In short, a metaphysico-logical postulate is already put forward here and it is not merely an ascertainment of fact…On Descartes’ lines nothing absolutely certain is attained, but only the fact of a very powerful faith.

So how then do objects come to be autonomous while at the same time not being independent of their relations? To ask it yet another way: how do we account for the underdetermination of an actor (actant or object) by its effects and relations without reducing it to the social in Bloor’s sense or an object in Descartes’ sense. This is where I think Latour adopts Whitehead’s and Deleuze’s metaphysics of the event. Put briefly, and absurdly simplifying, Latour follows Hume in claiming that there are processes inseparable from determinate, known causation but which are nonetheless irreducible to it. First there are habits, then there is knowledge. Latour will thus call for autonomous objects, objects that are irreducible to their relations—since 1864, for example, there have always been microorganisms, microorganisms independent of Pasteur’s work—and yet this irreducibility, this something more or underdetermination is not itself a set of actualized, determinate relations. It is what Deleuze will call the virtual, or more precisely it is the metaphysics of the event one finds in Deleuze and Whitehead. Every actual entity, as process and event, exceeds every other actual entity, including God (See Whitehead, Process and Reality [260]: ‘…every actual entity also shares with God the characteristic of transcending all other actual entities, including God. The universe is thus a creative advance into novelty.’) It is this move, this metaphysics (which I admit needs further elaboration and which I’ve detailed elsewhere), that enables a Latourian account of objects that does reduce them to their actual, determinate relations, but in a way that is nonetheless underdetermined by these relations.

This brings me to the second criticism – Latour’s move from a symmetrical to an asymmetrical relationship between humans and nonhumans. Whereas Pasteur is accepted to have preceded the existence of microorganisms, the same is not true for the microorganisms themselves; and this move runs counter to a basic tenet of OOO, that there are no asymmetries or hierarchical differences between objects, whether human or not. First, it’s not true that for Latour microorganisms went from being nothing to something at the hands of a Pasteur who was something all along. What was crucial to Pasteur’s success was that he was able to translate his laboratory findings into the fields and farms, that he translated the relations and effects observed in the laboratory into the observations made by farmers working with cows, beer, etc. A similar process occurs with respect to subjects. Pasteur no doubt was transformed by the experience as he went from being an obscure scientist to a leading light in the French Academy. Ian Hacking has also shown how we can come to reconceptualize and rethink, or translate, ourselves and who we are as subjects: what was once a misbehaving child is now a child with a medical condition, ADHD. The symmetry between humans and nonhumans breaks, for Latour, when Pasteur is given credit by textbooks for bringing microorganisms into being from nonbeing. This results only at the end of the process of the stabilization of events (as discussed here), and hence it is to prioritize the discontinuities of ready-made science over the continuities of science in the making. Latour clearly finds the work of science to be science in the making, and ready-made science is the death of scientific work, much as an actual entity, for Whitehead, ‘has perished when it is complete.’ (PR, 99). This is not to say that there are no differences between humans and nonhumans.
Some things do come to be artifacts rather than facts – reflections of subjective bias rather than objective fact. For Latour this is a consequence of the strength and reach of networks rather than of an objective or subjective essence that comes to be revealed through empirical studies. And if Latour devotes more time to studying the human component of these relations, or if he does not, as some have criticized him for, address object-object relations, I believe this follows from his political concerns, as well as from his academic training as a sociologist, rather than from a flawed ontology – hence his admiration for Whitehead.

And this brings me to the final criticism—namely, that Latour, in eliding the distinction between might and right, ultimately reduces normative force to causal force. In other words, a position, theory, or argument ultimately wins the day not because it satisfies standards of rational coherence, consistency, and logical rigor, but rather because it makes the biggest splash and produces the most ripples throughout the networks. This is again where Latour’s neo-Humean streak comes to the fore. Latour certainly does not dismiss normative force (at least not on my reading). As Latour discovered and detailed in Laboratory Life, and even in his most recent book, The Making of Law, the power of rational (and not merely rhetorical) argument is one of the factors brought into play in attempting to establish a claim. An argument that adheres more rigorously to the normative standards of rationality is thus more likely to prevail, all other factors being equal, than one that does not. But following Hume, and Whitehead as we saw above, the facts underdetermine the rational arguments that can be made concerning them. To adopt a cliché, reasonable people can disagree even if they accept all the facts of the case. This is why other factors need to be brought into play, and they are numerous, as a reading of Laboratory Life in particular makes clear. One of the problems Latour has with neoliberalism, as he makes quite clear in his recent book, The Science of Passionate Interests, is that it reduces reality to an overly simplistic quantity—our individual desires and beliefs as rationally determined. It is not that rational-choice theory is wrong, but that it is far too limited to capture the complexities that constitute our social and political worlds. Latour is thus loath, and he admits to following Tarde on this point, to reduce our understanding of the social world either to a set of normative primitives or to a mere quantifiable summation of causal effects.
Things are much more complex than this, and forever in process. Latour thus does not reduce normative force to causal force but allows for normative forces, among others, to account for how underdetermined facts come to acquire their status as accepted, unquestioned facts and truths. Latour is therefore a true successor to Hume and Whitehead.

Becoming-dangerous: a philosophical agenda

In reading through Steven Nadler’s nice introduction to Spinoza’s Ethics I was reminded of how dangerous Spinoza’s thought was taken to be at the time. Jonathan Israel’s Radical Enlightenment also drives this point home. Spinoza himself was excommunicated from Amsterdam’s Jewish community before he had even published a word, an unprecedented and extreme measure even in Spinoza’s time. Spinoza also survived an attempt on his life, when the attacker’s knife tore through his cloak but missed him; and yet Spinoza’s case is not an isolated one in the history of philosophy. Descartes fled France; Locke fled England for a time; Giordano Bruno was burned at the stake; Marx fled Germany; Aristotle fled Athens so that he wouldn’t suffer the same fate as Socrates; and of course there was Socrates himself. But when was the last time a philosopher’s ideas were seen as truly dangerous, and where might one look today for the possibility of a dangerous philosophy? One will certainly not find any dangers among the analytic philosophers, and if one accepts John McCumber’s thesis that analytic philosophers turned away from any philosophy that smacked of Marxism during the red scare of the 1950s, then this absence of danger is no accident. Continental philosophers are no better, it seems, and the failure of May ’68 seems only to highlight the impotence of philosophical discourse. But if philosophy, as Foucault argued in his late work, is integrally involved with parrhesia – free, frank, and truthful speaking to power – then it will necessarily involve risk and inevitably become a danger to those in power. This risk and danger, in fact, is an essential component of parrhesia according to Foucault, but it is precisely what seems to be lacking in the contemporary scene. Can philosophy become dangerous again, and if so how might it do so in a way that is not already anticipated and hence neutralized by a media that thrives on marketing outrage and scandal (think Glenn Beck)?
If philosophy cannot regain its traditional status as a danger to established power then it will most likely become increasingly irrelevant.

Events and Objects (à la Latour)

In We Have Never Been Modern, Latour adopts a cartographic metaphor to explain the stabilization of the Nature-Society duality out of unstable events: the trajectory from A” to D” marks, as Latour puts it, ‘the gradient that registers variations in the stability of entities from event to essence.’ (p. 85). Adopting Spinozist language, Latour will later clarify his ontology; namely:

…the immanence of naturing-natures [Nature] and collectives [Society] corresponds to a single region: that of the instability of events, that of the work of mediation…If we add to the official, stable version of the Constitution [i.e., the Nature-Society divide or gap] its unofficial, ‘hot’ or unstable version, the middle is what fills up, on the contrary, and the extremities [the Nature and Society poles] are emptied. (ibid. 86-7).

A very similar point is made in Latour’s early work, Laboratory Life. Discussing the status of scientific objects, in particular TRF (thyrotrophin releasing factor), Latour argues that ‘the solidity of this object, which prevented it from becoming subjective or artefactual, was constituted by the steady accumulation of techniques.’ (127). A consistent theme for Latour, then, though he has not always used the same terms to detail it, is that events and objects are not to be confused with one another even though they are not fundamentally distinct, much less different in kind. Rather, objects are stabilized events; or, adopting Latour’s own metaphor, objects are ‘the cooled down continents of plate tectonics.’ (Never Been Modern, 86). Objects are thus inseparable from their unstable networks, and even though an object may be particularly stable and even lionized as a textbook fact, it may lose allies to another object that, through a ‘steady accumulation of techniques’ and alliances, displaces it. Latour gives the example of how, prior to Watson, chemists preferred, and textbooks stated as an established fact, that the four DNA bases were in the enol form, which made it all the more difficult for Watson to cast doubt upon this and make the case that they were in the keto form instead. (Lab Life, 243). A Latourian ontology is monistic, therefore, in that there is nothing but objects and events – the middle is filled and the poles are empty, as he put it – but this is not a stable monism of autonomous objects and lawful events; rather, it is an aberrant monism that continually moves between stable natures and collectives and unstable, aberrant events. What implications does this have for placing Latour within the OOO camp? It seems to place him on the margins, though I’m not committed to this.