Why normativity?

One of the main themes of my research is normativity. In this post, I want to provide something of a primer on normativity. Hopefully, this will go some way towards explaining why this is an issue we should take seriously and why normativity acts as a core concept for philosophers like myself. The immediate prompt for this post is the ongoing discussion between Levi and Pete, which keeps running aground whenever normativity arises. Pete has set out the pertinent issues a number of times, but I thought it would be useful to approach them from my own perspective too, since a bit of triangulation might help us get a clearer sense of what is being said.

In short, normative issues concern correctness. Obviously, there are many different senses of correctness and many different ways in which it arises as an issue. This helps to explain the tendency for normativity to become a monolithic topic, which can seem to suck all philosophical light towards it like some supermassive body. One crucial distinction here, though, is between first-order normative inquiry and metanormative inquiry. I’ll explicate this in relation to some philosophically familiar topics.

In ethics, we are often asking what we ought to do — what would be good in the way of practical action. Should I give up philosophy and train to be a psychiatrist? Was it too callous to have decided not to meet your friend because you felt too drained to listen to their problems? Is a society without socialised medicine thereby unjust? These are first-order ethical questions, and they are normative because they are oriented by the question of how it is correct to act.

But we can also ask (as metaethicists do) about the metaphysics, epistemology and semantics involved. What does it mean to say I should do something? What is it for an action to be good? Is it possible to know whether I did the right thing? How? What, if anything, separates ethical demands from those of social etiquette? These are metanormative questions, also oriented by the notion of correctness in action, but which try to uncover what this notion of correctness amounts to. At its limit, it might even conclude that there is no sense in which actions are appropriate or inappropriate — they just are.

Similar distinctions between the first-order normative and the metanormative arise in other areas too. Another familiar example is deliberation about empirical facts and epistemology. Again, there are first-order matters: What is that animal? Would it be true to believe it is a chaffinch? Are physicists justified in believing that the Higgs boson exists? Do these problematic observations imply that the standard model of particle physics must be revised? These are normative matters, but only in the thin sense that they, arguably like all inquiry, are oriented by the question of what is good in the way of belief. The usual (and right) answer is that we should believe the truth, but we can also assess correctness along a number of other dimensions: justification, entitlement, inference, probabilification, consistency, coherence, accuracy, and so on.

One of the functions of epistemological inquiry is to examine the status of these first-order matters. What does it mean to say that we are not entitled to believe that P, even though P might turn out to be true? What is it for our perceptual beliefs to be justified? What is problematic about inconsistency in belief? These questions have a metanormative dimension insofar as they abstract themselves from the immediate issue of what beliefs are good or bad along various dimensions in order to ask what any such assessments consist in.

The tentacles of normativity reach far and wide beyond these two examples too. We find them in the philosophy of language, where many have argued that meaning is normative insofar as terms have a correct and incorrect usage, and where the task is to flesh out this normative dimension to linguistic practice. The same goes for concepts and their conditions of application (alongside the murky notion of ‘mental content’ more generally) in the philosophy of mind. Aesthetics can be thought to be a discipline with a metanormative aspect too, especially when beauty and art are notions entangled with endorsement, which a theory of aesthetic judgement may need to confront. The theory of agency is another locus for normative issues, especially insofar as many people (myself included) think that there is a distinctive logical form to action-explanation which needs to be articulated in relation to reasons in addition to brute causality. The list goes on… I certainly do not mean to endorse all these projects — I think many of them are misguided ventures — but merely to point to some of the ways in which metanormative matters appear within contemporary philosophy.

Hopefully, it will now be clear that we can pitch normative inquiry at different levels. One worry that Levi has expressed is moralism: doesn’t an obsession with normativity lead us to fixate on judging people, weeding the unworthy from the worthy, and seeking to police people’s activity? With our distinction between first-order and metanormative inquiry in hand, we can respond by saying that focusing upon the normative does not necessarily betray a desire to be the judge of what is right and wrong. Instead, for philosophers interested in normativity, there is more often an attempt to understand the conditions under which assessments of correctness can successfully be made at all and what the upshot of such assessments is. One way of articulating this is to say that we want to understand ‘the force of the better reason.’ I take this to be quite some distance from the right-wing obsession with ‘values’ that seems to be incensing Levi.

Another worry of Levi’s may be more pressing though: turning to norms can seem to be a turning away from the world. We can think of this in terms of a kind of ‘cognitive ascent’ which proceeds in two steps. First, there is a distinction between talking about the world and talking about our orientation towards the world. On one view, there are plenty of philosophically interesting features of the world to talk about — the structure of objects, the nature of causality, the individuation of social actors — and we should just get on and do that. Talking about norms is not talking about the world in this way: either it is talking about how we should talk about the world, and not plainly talking about the world; or it is talking about how we should act in the world, which is both tediously anthropocentric and still not talking about the great outdoors. Furthermore, there is a second ascent here too, since there is not only first-order normative inquiry but metanormative inquiry. It is either talking about how we should talk about how we should talk about the world, or it is talking about how we should talk about how we should act in the world. The reassuring crunch of reality is thus even further from being underfoot.

In response, there are a few things to be said. Firstly, I do take discussions of normativity to be about the world directly — they are no mere escape from it. I will be brief here because in a way this does not meet Levi’s worries head on. Nevertheless, whilst it may be démodé in some quarters to be concerned with rational agency as a distinctive phenomenon, I think it is a pressing descriptive task for which marshalling the vocabulary of normativity is essential. (See Pippin’s excellent piece and the surrounding discussion for more on this.) Beyond this, normativity is itself part of the world and is threaded throughout countless human practices. When McDowell gives us an account of virtue alongside the capacities, social practices and formative processes needed to make sense of our responsiveness to it, or when Brandom excavates the structure of the practice of giving and asking for reasons, they are talking about real phenomena which need to be elucidated. It is perhaps worth stressing that there needs to be nothing a priori about these normative investigations, even if figures like Habermas (and Pete for that matter) sometimes try to derive minimal rational norms in a quasi-transcendental fashion.

It is the intersection of metaphysics and normativity that seems to be worrying Levi here though. I am less enamored of constructive metaphysics than Levi or Pete, but I think the latter does a brilliant job of demonstrating the methodological role that metanormative inquiry should have within any such metaphysics. I can’t do better than his own exploration of these issues in this essay.

Finally, I want to underscore the possibility of detaching normativity from deontology. The latter understands normativity in terms of necessity imposed through legislation. In Kant’s practical philosophy, this is bound up with a system of duties and rights which takes a rigorist form that many people find both implausible and distasteful. Indeed, Lukács tries to show how the Kantian subject, delineated through reference to an abstract set of duties, is the one presupposed by capitalism, being a reactionary ideological symptom of modern forms of exchange.

My own understanding of normativity, both in the practical and theoretical realm, has a more Aristotelian character to it. I do not take the reasons we have to stem from legislation but rather from the concrete situations we face as agents. Nor do I typically articulate rational requirements in terms of necessities imposed on us by reason. Instead, I help myself to the less austere vocabulary of the good as well as the right, and try to extend the concern with what one can do to who one should be as well. In addition to my own, there are innumerable other ways of approaching normativity too. It is important that this point be made in order to correct the assumption that recourse to normative themes is meant to bolster some quasi-Kantian project, when this is certainly not always the case.

I could continue but that’s more than enough for now.

Warwick Transcendental Realism Workshop

Apologies for it being so quiet around here recently: I’m in the process of finishing my thesis, starting my new job at the University of Essex, and getting ready to move house. I thought I’d post this announcement for a workshop that I’ll be speaking at next month alongside Pete Wolfendale (Deontologistics), Nick Srnicek (Speculative Heresy / The Accursed Share), Reid Kotlas (Planomenology) and James Trafford. Ray Brassier will be headlining, giving a longer paper to the Warwick Colloquium in European Philosophy. Hopefully, I’ll be able to develop some of the sketchy thoughts about realism and correlationism which I posted on here last year, especially in relation to Meillassoux. Drop Pete an e-mail if you’d like to come along.

Warwick Transcendental Realism Workshop

Time: Tuesday 11th of May, 12:00pm (registration) – 7:00pm

Location: University of Warwick, LIB2 and S0.11

Organised by Pli: The Warwick Journal of Philosophy, in conjunction with the Research Group in Post-Kantian European Philosophy

The purpose of the workshop is to examine the arguments underlying the increasing push towards realism in parts of modern continental philosophy, along with approaches that bridge the analytic/continental divide, and to assess the possibility of transcendental approaches to realism within this context. Particular themes that will be focused upon include:

- The arguments of Quentin Meillassoux, and the possibility of transcendental responses to the problems he raises.

- The relation between epistemology and ontology.

- The relation between philosophy and the natural sciences.

The event will be split into two parts. The first part will take place in LIB2 (in the university library building) from 12:30pm to 5:00pm, which will consist in five papers presented by graduate students on matters relevant to the topic, along with discussion. The second part will be the headline talk, given by Ray Brassier, which will take place in S0.11 (in the social studies building) from 5:30pm to 7:30pm, under the auspices of the department’s regular Colloquium in European Philosophy.


Ray Brassier (Philosophy, American University of Beirut) – ‘Kant and Sellars: Nominalism, Realism, Naturalism’

James Trafford (Philosophy, Unaffiliated) – ‘Follow the Evidence: Realism, Epistemology, Semantics’

Reid Kotlas (Philosophy Grad Student, Dundee) – ‘From Transcendental to Abstract Realism: Epistemology after Marx’

Nick Srnicek (International Relations PhD Student, LSE) – ‘Extending Cognition: Bridging the Gap between Actor-Network Theory and Scientific Realism’

Tom O’Shea (Philosophy PhD Student, Sheffield) – ‘On the Very Idea of Correlationism’

Pete Wolfendale (Philosophy PhD Student, Warwick) – ‘Objectivity, Reality, and the In-Itself: From Deflationary to Transcendental Realism’

The workshop is free to attend, but please email pete.wolfendale ‘at’ gmail.com to register in advance, or to request any further information.



Modernity is often associated with disenchantment. But what does this mean? Ancient thinkers had tended to ascribe teleological principles to the natural world: the stone strives for its home at the centre of the earth; the eclipse communicates divine displeasure. The monotheistic traditions which then gained ascendancy in Europe and the Near East retained something of this, finding God’s plan suffusing nature: God creates walnuts to resemble brains, signing to human reason that the former is good for the latter; gold and silver lie beneath the ground and the sun and stars shine in the heavens above, displaying a divinely ordained symmetry (both these latter examples are taken from Foucault’s The Order of Things). But with the rise of the mathematical sciences, natural teleology and divine order came to be treated with increasing derision. Aristotle was to be banished to the libraries of the Schoolmen, and if God were to have daubed nature with language, he would speak to us in mathematics and not dainty allegories. For philosophers such as Descartes, matter was extension, and must yield its secrets to a physics taking mathematisable form. This approach to the natural world was further buttressed in the minds of natural philosophers by the successes of the Newtonian revolution. In biology, by 1828 even the demand for a vital force — said to divide the organic from the inorganic — proved empty, Wöhler having shown that the organic could be synthesised from inorganic components.

Everywhere, meaning fell under the sword of mechanism, and myth and mysticism with it. But suspicion hung over this evacuated nature, for was it not also our home — perhaps even the very substance of our being? If so, what remained of freedom, providence, value, beauty or morality in all this? The very meaning of life appeared to be under threat, since there seemed to be no room for God, rational harmony or true righteousness amongst the icy torrents of indifferent particles. The height of the Enlightenment saw the most avid articulation of these worries, with Jacobi coining the term ‘nihilism’ to describe what he saw as Godless and fatalistic Critical philosophies, which in his eyes provided little more than a fig-leaf covering their destruction of a transcendent source of value.

In all this, there are both progressive and regressive currents. The rise of modern science has been a near-unparalleled breakthrough, on a par with the development of agriculture, city dwelling or the institution of constitutional legal codes. In so doing, it has rightly banished God-talk from natural philosophy and much else besides. So too, it has helped deaden the appeal of any view of freedom wherein it consists in some contra-causal power to intervene in the world (quantum mechanical gymnastics aside). But there is a risk of the burning light of science blinding us to the proper significance (or even existence) of certain equally natural phenomena. My own interests here settle on normativity — what we are committed to, entitled to, or prohibited from thinking and doing; how we are subject to the ‘force of the better reason’; why we not merely do but should follow certain rules and conventions — ethical, theoretical, aesthetic, affective — whilst rightly rejecting others. Often, attempts to understand normativity suffer from a scientism which extends far beyond a healthy respect for the natural sciences, and which commonly has its roots in a problematic conception of disenchanted nature.


In the face of the disenchantment of nature, we can easily succumb to that curious form of philosophical vertigo that Wittgenstein diagnoses so well. We then grasp about for a solid handhold. Confronting frigid nature, operating with lawful or law-like regularity, one response has been to cast aside concepts like freedom, obligation and representation as folk-psychological detritus which we can do without. For example, Stephen Stich has claimed:

intentional states and processes that are alluded to in our everyday descriptions and explanations of people’s mental lives and their actions are myths. Like the gods that Homer invoked to explain the outcome of battles, or the witches that inquisitors invoked to explain local catastrophes, they do not exist. [quoted in a recent article by Dwyer]

This is the eliminativist approach: the world is nothing like the fantasies of religion and art had led us to believe — it is the indurate ground of animal life but not our ‘home’. For the eliminativist, there is no need to sweeten the pill of the disenchantment brought on by the scientific mind-set. As Ray Brassier has recently written, “Philosophy should be more than a sop to the pathetic twinge of human self-esteem.”

Drawing back from eliminativism, another response has been to reconstruct those concepts suspected of anthropocentrism in a more respectable vocabulary for the naturalist. So, there is no need to ditch freedom, say, but let us just be clear what we mean by it, where this might legitimately be causation along certain biochemical pathways and not others, or action in light of knowledge of the conditions under which it was caused, or whatever natural-scientific form of description best approximates actual or ideal folk-psychological usage. The manifest image of humanity is not entirely wrongheaded, just naïve. Properly regimented, it captures something important about human patterns of understanding, behaviour and our place in the world. Let us call this view naturalistic revisionism.

Different again from eliminativism and revisionism is expressivism. The expressivist agrees that the world is a cold, dead place when contrasted with the animisms, platonisms and providentialisms of old. However, the human animal ‘stains’ and ‘gilds’ reality with its sentiments (to borrow Hume’s terms). For the expressivist, it is we who project value on the world, and this can give us the resources to explain ethics, freedom and aesthetics outside of the tight net of the scientific naturalist’s privileged nomenclature. There is nothing unnatural about our caring about (or disdaining) each other, our projects and our environments; but that need not force us to redescribe ourselves in natural-scientific terms alone — our passions have their own logic and significance that subsists upon but grows out of its natural base.

Yet another response to disenchantment has been to foreground not human emotion but reason and autonomy. For constructivists, the legacy of disenchantment has been to show us that we are alone in the world, with no divine firmament above or promontory below that would help us surveil a normative order. But unlike expressivists, we should look to our activity of trafficking with reasons stretching beyond our structures of passions. We forge obligations for ourselves through the exercise of autonomous legislative capacities, claiming ownership of our actions through drawing them into an unfolding plan which we grant authority over our desires, projects and identities as a whole. In doing so, we act with the dignity proper to creatures capable of self-determination, who are not merely buffeted around by events, beliefs or desires, but who manage to establish some sort of purchase and sovereignty over themselves and thereby lead their lives.


Now, you need not be a platonic boogeyman to be uneasy about this collection of options. My own thinking about these issues is heavily indebted to John McDowell. His suggestion that we need “a partial re-enchantment of nature,” as with many of McDowell’s trademark phrases, is a little unfortunate though. He stridently rejects the idea that ‘re-enchantment’ has a “crazily nostalgic” character which gives any ground towards a “regress into a pre-scientific superstition” which would encourage us to interpret the fall of a sparrow like we would a text. But nevertheless the associations surrounding ‘enchantment’ remain — something spooky gets evoked. Talk of ‘re-enchantment’ is misleading, and a better McDowellian phrase would be resistance to the “interiorization of the space of reasons.”

Disenchantment makes it seem like reasons are illusory or are at best absorbed into the activity of subjects. What we get is meaning, and the rational relations it makes intelligible, restricted to meaning-conferring subjects. At most, so understood, we project reasons into a world of rationally inert objects. The car-crash is then only a reason to phone an ambulance in light of human ethical practices; the ionisation trail in the cloud chamber only justifies belief in the presence of an alpha particle in light of the construction and testing of electromagnetic and particle theories. Now, there is something right and something wrong about all this. We cannot intelligibly think from a perspective of cosmic exile and must accept the finitude of our cognitive capacities (contra SR and OOO). All of our truck with value, reasons, justification must proceed from local and situated circumstances and continue to lean upon human forms of knowing and valuing. But that does not mean we should rest content with the idea that these are ‘merely human’ standards whose shadows fall upon an apathetic world. Our finitude, properly understood, ought not impugn normative realism, and we should not be carried away by the characterless world presented by natural science.

Nature is not exhausted by natural scientific description, and so it is misguided to require human interests for any more juice to be squeezed out of it. The predominantly nomothetic explanations offered by natural science are pearls without price, but they have no claim to speak for the totality of nature. Human life is obviously in some sense ontologically decomposable into organic compounds, atoms, quarks and electrons, and so on. But the explanatory matrix which most often befits it is normative and not immediately natural scientific (whatever the prospects of reductionism about normativity). Again, there is nothing unnatural about humans as they fall under normative descriptions, appraised in terms of their intentions, virtue, beauty or freedom. We come to employ these concepts in the course of our biological maturation, supplemented by a process of socialisation which is no less a part of the natural history of humanity.

The temptation towards the modernist division between meaning-conferring subjectivity and intrinsically meaningless nature arises when we think that we can only have meaning on human terms — the human forge of meaning being the correlate to the frozen world of mechanism. If the logical space of nature and the logical space of reasons are irreconcilable, then this would seem to follow (assuming naturalistic revisionisms are moribund, which I think is very plausible). But this is only so if nature is also exhausted by natural scientific description. And it is not: natural events can be legitimately characterised in normative terms without a regression to pre-scientific rationalism. This is the sort of re-enchantment McDowell seeks, and rightly too. The claim to be defended is thus: “the natural world is in the space of logos.” My optimism on this count is rarely shared though.

Might and Right: Against Latour (part 2)


In the previous post, I outlined some of the background to Latour’s denial that there is a distinction in kind between force and reason. In this post, I shall say why I think he is wrong to do so. A good place to begin is Latour’s explicit comments on reason and logic. He says:

A force establishes a pathway by making other forces passive. It can then move to places that do not belong to it and treat them as if they were its own. I am willing to talk about ‘logic,’ but only if it is seen as a branch of public works or civil engineering. (PF 171)

What a daring and suggestive analogy! This is the sort of stuff that makes Latour a great read. Later, he continues this thought: “We cannot distinguish between those moments when we have might and those when we are right.” (PF 183)

One obvious worry here is that no accommodation is made for a distinction between persuasion by means of inveigling, deceptiveness and threat, and by means of argument and sincere discussion which tries to make itself answerable to the facts and the well-being of the participants. Perhaps academics, politicians and scientists do not operate without the consuming cynicism of the advertiser or the predatoriness of the bully; but must this be the case — could there be no principled distinction between brainwashing and being convinced in light of evidence?

In the excellent discussions of Latour in Prince of Networks, Graham Harman defends him from this line of attack by reminding us that the relevant forces extend beyond the human to nonhuman actants:

A charlatan might convince a roomful of dupes that they can walk on hot coals without being harmed, but the coals remain unconvinced — leading the charlatan into lawsuits or beatings from his angry mob of victims.

This move tries to defuse the charge of having a crude cod-Machiavellian conception of the ubiquity of power in human affairs by extending the analysis to the human, the nonhuman, and hybrid networks inclusive of each. However we manage to convince humans, we need to ‘convince’ nonhuman objects too in order to be effective. There is no slide into relativism insofar as weak theories will collapse under the pressure of their inability to command actants. Or rather, we can hold onto them, but only at a cost:

We can say anything we please, and yet we cannot. As soon as we have spoken and rallied words, other alliances become easier or more difficult. (PF 182)

Is then rationality only one type of efficacy amongst others, fighting it out on a flat battlefield?


Latour implies that rationality must either take the form of some spectral a priori process, entity or interaction, or fall under the thrall of networked objects. For example, he says:

There has never been such a thing as deduction. One sentence follows another, and then a third affirms that the second was already implicitly or potentially already in the first. Those who talk of synthetic a priori judgments deride the faithful who bathe at Lourdes. However, it is no less bizarre to claim that a conclusion lies in its premises than to believe that there is holiness in the water. (PF 176)

If we want anything like a deduction, we are supposed to earn it: we must subsidise the labour of translation which allows us to glide seamlessly from antecedent to consequent, from P to Q. Behind any such translation will be a massive apparatus of networked actants which we ‘black box’ in practice (treating them as uncontroversial) but which must ultimately be accounted for. One of Latour’s concerns with approaches which make rationality stand apart from force seems to be that they ignore the process of the genesis and reproduction of rationality and rational behaviour.
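What Latour finds bizarre can be made concrete. In a formal setting, the conclusion is not mysteriously ‘contained in’ the premises; it is produced by an explicit, rule-governed act, and that act can be written down and checked. A minimal sketch in the Lean proof assistant (my illustration, not Latour’s or anything he discusses):

```lean
-- Modus ponens written out as an explicit step: given a proof hp of P
-- and a proof hpq of P → Q, the conclusion Q is obtained by applying
-- hpq to hp. The 'labour of translation' from premises to conclusion
-- is here a single, surveyable application of an inference rule.
theorem modus_ponens (P Q : Prop) (hp : P) (hpq : P → Q) : Q :=
  hpq hp
```

Notably, even here someone (or some machine) must perform the application; the question at issue is whether describing that performance as one force battling others exhausts what the inference is.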

I think that something goes awry here. In short, Latour has reified the rational in an attempt to save it from platonism. But the normative dimension of rational action does not primarily consist of any kind of object. We will find it neither by looking to heaven with the platonist nor to earth with Latour. Nevertheless, Latour’s approach is right insofar as it treats rationality as unintelligible apart from an understanding of how something is treated or mobilised as a reason, where this requires us to grasp features of our form of life and their inextricable embeddedness amongst nonhuman objects. But this does not mean that we should only talk about such mobilisations. The pressure to do so appears to stem from a sceptical bent: what else could we be talking about if it is neither objects in action nor mysterious rational Forms? Again, Latour is right to adhere to a flat ontology; there can be nothing above the one plane of the natural world crowded with interacting objects. But the vocabulary of rationality, as with normativity in general, can be deployed from a distinct standpoint within and upon this self-same reality.


We undertake normative talk from a practical perspective, which inflects the theoretical mode of explanation in important ways. Normative vocabularies do not seek to describe the efficacious dimension of objects alone but rather throw another kind of light upon them which illuminates their place in the space of reasons. This neither takes normatively inert events and projects human interests upon them nor domesticates normative phenomena by reconstructing them in terms of their power to exert leverage on humans and nonhumans alike. Rather, it exploits anthropocentric modes of responding to the world — informed by our history, preoccupations, social organisation, physiology, art, environment, language, and technology, inclusive of all the hybridity that involves — in order to reveal distinctive aspects of certain situations which are not themselves directly parasitic on elements of our points of view. For example, decrying an action as cruel may only make sense within certain forms of life, but that does not mean that cruelty is a second-rate property or a cruel action is so only in light of us taking to be so. Some degree of epistemological anthropocentrism does not preclude genuine objectivity.

We ought to act in certain ways in virtue of how the world stands, but without supposing our characterisation of the world must only encompass those features identifiable from within an explanatory matrix focused exclusively upon ‘wide cosmological role’ (i.e. those things which have effects upon things other than human attitudes). Seeing a badly injured friend, deciding whether to go to a protest, or it striking us that we have failed to balance an equation, can authoritatively demand things of us that overspill how we actually respond in these situations. Here we can use normative concepts, absent from the natural scientist’s official toolkit, to capture what is going on — being obliged to help, having reasons to go, being inaccurate in our calculations, and so on. But there should be no pressure to evacuate incipient normativity here from either our characterisation of the original situation (e.g. ‘injured’, ‘failed’) or its implications (e.g. ‘obligation’, ‘having a reason’). The absolute standpoint of the scientist, engineer or anthropologist, which sees only objects in action and their epiphenomena, although essential, has no claim to exhaust any legitimate account of reality.

Latour suspects hocus-pocus when we say that the premises are present in the conclusion. But this will only be the case on a crude reading of this claim which is shored up by the prejudice that if something cannot be slotted alongside all the other properties of objects in the same respect then it must be bogus or in need of reclamation in more causally respectable terms. Yes, rational agents will have to perform the translation from premises to conclusion themselves. Logical practice must be undertaken by someone (or something) somewhere, and such behaviour can be described. But to suppose that the only legitimate framework for such description is one of battling actants is false and will lead to a distorted picture of reasoning and its significance. No good basis for a revisionary account can be found. There is nothing spectral about the force of the better reason, what we ought to do, what virtue demands: distinctions of this kind pervade our language. All we need is the philosophically won confidence to take our own practices seriously.

Might and Right: Against Latour (part 1)


The founding gesture of Latour’s philosophical thought is his suspension of any distinction in kind between might and right. Among the things that he claims puzzled him were objections to science studies which claimed that its social explanations would always miss an important dimension of science. On the first page of Irreductions, Latour takes this objection to rest on a fateful assumption:

This is the assumption that force is different in kind to reason; right can never be reduced to might. All theories of knowledge are based on this postulate. So long as it is maintained, all social studies of science are thought to be reductionist and are held to ignore the most important features of science. (PF 153)

But Latour seeks to suspend this assumption:

Although, like the postulates about parallel lines in Euclidean geometry, it seemed absurd to deny this position, I decided to see how knowledge and power would look if no distinction were made between force and reason. Would the sky fall on our heads? Would we find ourselves unable to do justice to science? Would we be condoning immorality? Or would we be led towards an irreductionist picture of science and society? (PF 153-4)

For the most part, I have no objections to the methodology of science studies as pursued by Latour, Shapin and Schaffer, even in the face of the charge that they have overlooked some rational kernel to science that overspills careful description of scientists and the social institutions which undergird their work. But I think Latour is wrong to abandon the idea that there is a distinction in kind between force and reason. These posts will explain why, beginning with this one detailing the background to Latour’s claims.

The larger frame within which Latour advances these claims about force and reason is his rejection of the category of modernity. In this way he opposes himself to moderns, antimoderns and postmoderns. Moderns celebrate modernity; antimoderns take it to be catastrophic; and postmoderns see it as a catastrophe that nevertheless ought to be celebrated. But Latour rejects all these stances: he is a nonmodern. In his view, modernity has not even taken place — ‘We Have Never Been Modern,’ as his book announces. But what does he mean by this?


Firstly, Latour’s rejection of the category of modernity marks his scepticism about the idea that the rise of science, technology and liberal institutions in the West has had effects of an epoch-making significance upon either social organisation or the processes of the natural world. For Latour, these effects are overplayed: they are neither unparalleled in human history nor a radical break with what came previously. The modern period has not witnessed humanity’s despotic domination of nature, and there has been no hollowing out of the human spirit wherein it has been supplanted by rationalised agents and social relations. At most, what has taken place is a “network-lengthening process”:

To take the precise measure of our differences without reducing them as relativism used to do, and without exaggerating them as modernisers tend to do, let us say that the moderns have simply invented longer networks by enlisting a certain type of nonhumans. (WHNBM 117)

So, Latour thinks that we have not wrenched ourselves free from nature, as modernist rhetoric can seem to imply. Rather, humans have always deployed and been deployed by nonhumans, and there has simply been a widening of these human-nonhuman networks.

This brings us to the second important aspect of Latour’s take on the very idea of modernity. In the self-conception of moderns, the human and nonhuman must be treated as “two entirely distinct ontological zones.” (WHNBM 10) Yet, as we have just said, Latour thinks the modern period sees an extension of ‘hybrids’ or ‘quasi-objects’ which straddle this ‘Great Divide.’


These reflections upon nature and culture are put to work in an analysis of approaches to science. For Latour, the modern ‘Great Divide’ breeds problematic asymmetrical explanations. For example, we find explanations of the production of truth in scientific endeavours as something attributed to nature. Left to its own devices, science secures correspondence with nature — its revelation to us. But error in these endeavours is to be explained not by nature but by culture. It is ideology, power interests, myth, and so on (so the story goes) which are to account for error. By contrast, Latour thinks we must pursue symmetrical explanations: “If you analyze Pasteur’s successes, do the same terms allow you to account for his failures?” (WHNBM 93) This entails a shift of explanatory strategy, and one which must ultimately go beyond explaining success and failure in the self-same terms (e.g. it is even insufficient to explain both in terms of social factors rather than just failure). Instead, explanation must encompass hybrids: phenomena with dual composition of human and nonhuman. In other words, the ‘Great Divide’ must be dismantled in the imaginations of the moderns, antimoderns and postmoderns alike, just as it has always had to be in much of their actual practical activity.

Latour’s rejection of a distinction in kind between force and reason is part of this larger story. For him, reason cannot be quarantined from the unified field of hybrid networks. In other words, it cannot stand in nature alone, or in culture alone, and still less outside of both (as if a beneficiary of platonism). This claim has a kind of axiomatic status, much like Kant’s attempts to see where we get if we suppose that objects must conform to our knowledge, or like nineteenth-century organic chemists who began to dispense with the notion of a ‘vital force’. If the world comes crashing down around us on this hypothesis, so be it, but we may just find ourselves in strange but alluring new lands where we can settle with the new hypothesis intact. Any such venture will require some charity whilst we see whether any kinks in the new worldview can be straightened out, but I think Latour’s hypothesis is not ultimately one we should accept. In explaining why, the next post will begin by considering some of his more explicit comments on reason and logic.

On the Ontological Principle


In my previous post, I outlined Levi’s Principle of Translation, which states that “all transportation is translation.” This principle opposes the idea that objects are mere passive items which simply acquiesce to influences upon them. Instead, onticology is an ontology of resistant objects, which struggle with each other. The point of these dramatic metaphors is to insist that influences must be taken up by objects, where this involves a ‘fusion of differences’. For example, when oxygen and water cause iron to rust, then the iron itself is active here, entering into a network with the oxygen and water to produce the difference, rather than being a mere container for their effects.

One of the philosophical upshots of this principle is that objects are not simply vehicles of some set of differences. In other words, they are not inert items upon which a form can be imposed without that imposition redounding upon the process of formation. I think Levi takes this to be significant because it is incompatible with certain types of correlationism, where a correlate would determine objects without being determined and with the object playing no role in its determination. (Again, I will stress that I think the concept of correlationism is a red herring.) In this way, it helps to avoid Levi’s Hegemonic Fallacy, namely the reduction of difference to ‘one difference that makes all the difference’ or ‘the most important difference’.

I take the Hegemonic Fallacy to be Levi’s main target. This is significant because it not only sets him against correlationism but also against the speculative realisms of people like Ray Brassier. Brassier embraces eliminativist lines of thought and would doubtless not shrink from the charge of scientism. Here, materialism would seem to introduce matter as ‘one difference that makes all the difference.’ In contrast, Levi is keen not to debunk the human, and his ontology is open-ended and inquiry-led: if it is found to make a difference, then it is real — whether it be Oedipus, evil, Edith Piaf or an electron. This is captured in the Ontological Principle which results from the Ontic Principle: “Being is said in a single and same sense for all that is.” Indeed, this is all that Levi thinks can be said about being qua being; thus, ontology must be pursued on the ontic level, dealing with beings themselves.


The Ontological Principle demands a flat ontology. One contrast here would be with vertical ontologies, where one sort of being overdetermines the rest. Correlationism and platonism would fit the bill here. However, Levi refuses to equate the univocality of being with a univocality of translations. In other words, no one type of being dominates others, and they all sit alongside each other, but that does not mean that every object must act and be acted upon in the same way. This comes out in this Deleuze passage which he quotes:

Being is said in a single and same sense [...] of all its individuating differences or intrinsic modalities. Being is the same for all these modalities, but these modalities are not the same. — Difference and Repetition, p.36

So, we can make sense of existence at “different levels of scale” whereby each level is not reducible without remainder to another level. This idea — no reduction without remainder — Levi calls the Principle of Irreduction. One consequence of this principle is that “the relation between individuals is not one where one type of individual explains the rest without remainder, but where processes of translation must take place.” Levi’s example is DNA. It is a condition of my body existing and explains my anatomy, but cannot serve as an autonomous explanation since it must act upon resistant objects which take up that action according to their affections: “DNA, in unfolding, must nonetheless undergo translation as it transports itself [...] and the body formed in translation with DNA produces its own differences.”

It is at this point that I am interested in the explanatory consequences of onticology. This is because I am sympathetic to something like the Ontological Principle and also want to accommodate different explanatory modalities within it (note here that my concern is primarily explanatory rather than metaphysical, though I don’t think I am guilty of Levi’s Epistemic Fallacy). In my case, I want to hold onto a form of naturalism which does not degenerate into scientism. Thus, I reject supernatural entities, like divine beings, along with platonic Forms (sympathetic readings of Plato aside). But I also resist any hegemonic move on behalf of the natural sciences to act as final arbiter for acceptable forms of explanation. The main clash here comes with our understanding of rational agency, which I think neither requires nor can be given an exhaustive explanation in natural-scientific terms. This is because many of the locutions which we (legitimately) use in explanations of rational agency — such as ‘justified’, ‘perceptive’ and ‘immoral’ — are not employed as empirical descriptions of behaviour but as ascriptions of a standing in what Sellars calls the ‘space of reasons.’ A different mode of intelligibility is required to characterise the empirical properties of natural objects than to characterise rational proprieties like entitlement, permission or inaccuracy.

The claim that this sort of rational intelligibility is irreducible to empirical intelligibility can be expressed by saying that the space of reasons is sui generis. It is this claim which I think we need to maintain, and which Levi’s talk of the mind’s translations not being special seemed to threaten. He has now clarified his position: his talk of the mind’s lack of specialness is only meant to deny that it is included in every relation. So, it seems that on these grounds there may be no source of objection to my approach, though there may be other reasons to object to it which stem from onticology. Nevertheless, in the next post I will fulfil my promise to say more about how we should understand the distinctively spontaneous translations of the subject, and how this bears upon metaphysical issues.

On the Principle of Translation


Levi has been developing a version of object-oriented philosophy which he calls ‘onticology’. In doing so, he recommends understanding objects as ‘actors’ which produce ‘differences’ in each other. Significantly, these modes of production include but are not limited to causality, such that anything which produces differences counts as acting. I am not sure exactly what sorts of non-causal production Levi wants to allow here, but we might think of examples like individuation, such that something counts as information, say, because of its place in an informational network even if it does not have to be in causal relations with all the parts that make up the network. So, objects can act in both causal and non-causal ways. Levi thinks that we should understand this action in terms of translation:

The Principle of Translation states that there is no transportation without translation. What I mean by this is that when the difference of one object acts on another object it translates or transforms that difference in a way unique to the receiving object. Thus, for example, my pepper plant “translates” the difference of sunlight producing energy in the form of sugars that it uses to produce its fruit and leaves. The process of translation thus transforms the differences of other objects in a way particular to the object doing the translation.

A second way in which Levi expresses this idea is in terms of affect (in Spinoza’s sense). The affective aspects of objects are those through which they can act and be acted upon. If influences upon objects must be transmitted through their affections, then there is a sense in which the production of difference in an object must be particular to it. Levi’s example of this is the neutrino, whose small mass, high speed, and lack of charge leave it with a limited set of causal powers to act and be acted upon.

Thirdly, Levi frames his Principle of Translation in terms of an extreme radicalisation of the Kantian insight about the activity of the subject, which he claims “transforms data of the world such that it does not represent the world as it is ‘in-itself’”. The polemical suggestion is that Kant did not go far enough — why stop with subjects? So, Levi advocates a “generalized Kantianism of objects”. All objects are active because they transform what affects them, just as Kant rejects the Lockean idea that the mind is passive with respect to what affects it. This forms part of the call for a flat ontology, which develops a univocal analysis of objects which treats subjectivity and sociality as contiguous with everything else.


There is something to be said for a flat ontology. We ought to be wary of supposing that reality contains discrete levels, where the relations between them become hard to fathom. For instance, Cartesian dualism is reviled for good reason; it is understandable how it arose in response to the pressures of a mechanistic philosophy of nature, but it nonetheless invites mystification. So too, more recent appeals to sociality risk reprising its mistakes in another key. Latour has done much to expose the emptiness of those sorts of social explanation which do not pursue the composition of the social itself, something he sees as the primary task of sociology. Levi introduces a further worry, that we turn to a vertical ontology, where one ontological level dominates — subjectivity being present in all relations, for example, as certain ‘correlationists’ are meant to believe (though see my previous posts on Meillassoux for my reservations about the charge of correlationism). But I think this should be kept distinct from the epistemological problems which would be created by a discontinuous ontology, which appear to force on us the explanatory task of showing how these distinct levels of reality interact. This kind of gap-bridging task — which rarely fares well — is the main fallout of non-flat ontologies.

Even with these difficulties in mind, I think that some of the aspects of Levi’s attempt to construct a flat ontology ought to be resisted. There is something distinctive about subjects which makes some forms of flat ontology problematic. We can talk both about objects translating objects and about subjects translating objects. But the translations of the subject include those of a unique kind, which are not adequately addressed by simply increasing the complexity of a unitary flat ontology. So, there is no objection to saying that objects are active and possess affections which translate influences upon them in particularised ways. But there is a highly significant type of activity which subjects engage in, which the Kantian tradition characterises as spontaneous. It is in virtue of their spontaneity that subjects are responsible for the translations which they undergo: and this brings with it many of the traditional distinguishing traits which have been used to mark out subjects, namely freedom, normativity, rationality and intentionality. In the next post, I shall say more about how we should understand the spontaneity of subjects and how that impacts upon metaphysical issues.

Realism and Correlationism: Kant and the Short Argument

Meillassoux takes the correlationist to rely on the following argument:

thought cannot get outside itself in order to compare the world as it is ‘in itself’ to the world as it is ‘for us’, and thereby distinguish what is a function of our relation to the world from what belongs to the world alone. Such an enterprise is effectively self-contradictory, for at the moment when we think of a property as belonging to the world in itself, it is precisely the latter that we are thinking, and consequently this property is revealed to be essentially tied to our thinking about the world. (AF: 4)

This argument is a form of what Karl Ameriks calls the ‘short argument’ to idealism, which often gets attributed to Kant. However, Kant does not make this short argument. Ameriks traces this form of argument to Reinhold, and he notes that it does sometimes appear in the post-Kantian tradition. So, we find Reinhold claiming the following:

What is represented, as object, can come to consciousness and become represented only as modified through the form of representation, and not in a form independent of representation, as it is in itself. (Versuch: 240; quoted in Ameriks FoA: 129)

Reinhold takes it that a need to represent objects for them to be given to consciousness ensures that we cannot come into an epistemic relationship to those objects which could be disentangled from our representations:

The concept of a representation in general contradicts the representation of an object in its distinctive form independent of the form of representation, or the so-called thing in itself; that is, no thing in itself is representable. [...]

[T]he object distinguished from the representation [...] can only be represented under the form of representation and so in no way as a thing in itself. (Versuch: 244, 246)

So, for Reinhold, because we cannot get outside of our representations, then objects cannot be represented as they are in themselves.

If the correlationist — whatever ‘originary correlation’ they are meant to argue for, and whatever it means to say that they cannot consider its terms independently — has to rely upon this argument as it stands, they are in trouble. This is because the conclusion it argues for is trivial given the way key terms in the argument are understood. Reinhold is trying to prove that we cannot know things in themselves, where he takes knowledge to require that objects are represented to us. But if he tacitly understands ‘things in themselves’ just to be what is not representable, then the conclusion follows all too easily. Thus, on its own, this argument ought to convince no-one.

Meillassoux’s presentation of the argument proceeds in a similar fashion. It seeks to establish an (underspecified) ‘essential tie’ between thought and things in themselves. As with Reinhold, this is meant to undermine the possibility of an epistemic relation to the world as it is in itself, independently of thought (one that the realist requires to distinguish primary and secondary qualities). The way that it does this is by simply noting that we cannot think of features of the world in itself without the world in itself being the object of that thought. Thus, we must always factor in a correlation between thought and the world in itself when attempting to reflect on the latter. Again, the shallowness of this argument ought to be transparent. Knowledge of the world in itself, as required by the realist, is denied to us because thinking is always present when thinking about the world in itself. However, this is only because here we are to understand knowledge of the world in itself as knowledge where thought is not present. The opposition is simply defined out of existence. Nothing is demonstrated by this argument, and it is no more contentful than Reinhold’s efforts.

* * *

Even with Meillassoux’s distinction between weak and strong correlationism, and the specification of different possible correlates than simply thought and world, I am not yet clear in my own mind what the status of the correlationist’s claim that thought and world must be thought together is meant to be. So, I am hesitant to assert or deny that particular philosophers are correlationists. Besides, I am not sure how useful a discussion along the lines of ‘is x really a correlationist?’ would be. Still, insofar as transcendental idealism can be thought of as introducing some significant relation between thought and world, whether we understand this idealism as metaphysical, formal, methodological or whatever, then it may bear considering in this context.

However we understand the relation between objects and cognition in Kant, I have claimed that we do not find a ‘short argument’. Yet, Kant does claim that objects conform to the conditions of cognition. So, we can ask, how does Kant’s position differ from the ‘short arguments’ dismissed above? This ought not to be of mere historical interest insofar as it can furnish us with alternative arguments for either correlationism or a more plausible relative of it. Speculative realists have an interest in attending to other such strategies insofar as their own positions can develop in dialogue with a wider range of opposition than the colourless proponent of the short argument.

Transcendental idealism famously effects a Copernican turn. Instead of assuming that all our knowledge must conform to objects, Kant ventures a hypothesis: objects must conform to our knowledge. This claim has proven difficult to understand. It is clear that Kant is not asserting an empirical idealism, which holds that objects have a metaphysical dependence upon our epistemic activity or our ‘representations’. Kant denies this when distinguishing his position from what he calls Berkeley’s dogmatic idealism. In the Prolegomena, he calls his position formal idealism, and any dependence of objects upon our knowledge is restricted to the forms of our knowledge. In the Analytic of the first Critique, regarding the categories of the understanding, Kant denies he is engaged in a traditional metaphysical investigation of being qua being (A247=B303). However, it can appear that the Aesthetic claims that our forms of sensibility, namely space and time, are ontological conditions of objects (although Kantians such as Henry Allison and Graham Bird forcefully argue against such a reading). Whatever the right interpretative approach here, obviously some important connection between formal conditions of knowledge and objects is being asserted. But why? The answer provides some possible motivations for something like a correlationist position which are not simply versions of the short argument.

Kant makes his speculative Copernican hypothesis because he is dissatisfied with metaphysics. When compared with mathematics, say, which also seeks knowledge which is not directly empirical, it can hardly be said to be on the ‘sure path’ of science. For Kant, this was illustrated by the hollowness of metaphysical inquiry into the nature of the soul, God and world, reflected in the interminable debates in rational psychology, rational theology and rational cosmology which are diagnosed in the Transcendental Dialectic. The problem, he thinks, is that metaphysics has employed theoretical reason in illicit ways, beyond its proper bounds. Traditional metaphysicians have failed to take into account the anthropocentric forms of human cognition, and so constantly come to grief by asking of reason what it cannot deliver. However, this is merely a sketch of some of the territory. There is no swift move from registering the forms of human cognition to sealing us off from a non-human world. From the bare fact that it is our cognition, it does not follow that it cannot deliver things in themselves. To attribute such a short argument to Kant on this basis is to ignore the details of Kant’s examination of cognition and his lengthy inquiry into metaphysics.

If transcendental idealism does ultimately count as a form of correlationism, this will be on the basis of the determinate limits on knowledge explored in Kant’s inquiries. These include sensible conditions, intellectual conditions, cognitive conditions governing the relation of the sensible and intelligible (e.g. the discursivity thesis), and rational conditions pertaining to the proper use of practical and theoretical reason. Each is supported by argument and analysis, which vary in success. For example, the intellectual conditions on empirical knowledge include conformity to the categories of the understanding. These conditions on thought are backed by an examination of the forms of judgement, which many people have found problematic and dogmatic. This set of conditions will probably not be the most troubling for the speculative realist though (Kant allows that we can think the thing in itself — though whether that is just as a limiting concept is debatable). Rather, it will be the sensible conditions which will be most problematic. These sensible conditions enable objects to be given. Thus, they provide the main receptive framework for cognition, where the understanding provides the main spontaneous framework. Objects are given to sensibility according to its forms, namely space and time. This can seem an unassuming empiricist move: we know about things through spatio-temporal experience. But it goes beyond this insofar as Kant’s Copernican turn makes an a priori pure form of intuition logically prior to objects. Objects are given according to this pure intuition, such that they have formal properties in conformity with this pure form. This can be understood in more or less metaphysical terms. It is where realists will doubtless demur though, since it can seem to impugn the independence of objects from our cognitive apparatus.

Why does Kant embrace something like correlationism here? Some reasons are arguably idiosyncratic. For example, Kant thinks that we require pure forms of intuition to help apply the categories of the understanding (such as existence or plurality) to sensible objects — they bind the a priori and the empirical together ‘schematically’. Also, given his understanding of geometry and arithmetic, pure forms are meant to explain the synthetic a priori status of mathematical knowledge.

What may have a wider resonance though is the role of forms of intuition in grounding Kant’s revised metaphysics. Kant thinks that reason can be shown to fail when, like the rationalists, it strays from the path of possible experience. This was what led metaphysics into darkness. But if objects have to conform to the forms of intuition, then their formal properties can be grasped a priori. So, for any object which is given to us, we can justify limited metaphysical knowledge of it with reference to the pure forms, since nothing can be given that does not conform to these forms. Kant sums it up like this: “reason has insight only into that which it produces after a plan of its own.” Now, by my lights, Kant’s specific appeal to pure forms of intuition is not ultimately successful. But it does give a substantive argument for a correlationist-like understanding of the relation between objects and cognition. Furthermore, it outlines a strategy which I think can be made to work, albeit in a heavily revised form, with respect to the normative bases of cognition (and which, in time, I hope to outline).

* * *

A final thought on the question of metaphysics. The metaphysics which Kant seeks to cut down to size is an unbridled rationalism. But speculative realism has typically championed a kind of empirical metaphysics. It seeks to be porous with respect to scientific discovery: it is science which is to be the leading-edge of ontology. I have some limited sympathy with this approach with respect to certain theoretical endeavours, and agree that on the whole there is no need for a metaphysical grounding for science, provided by philosophy. However, I wonder quite how speculative realism will come to understand the status of its own metaphysical claims.

Alexei has raised the problem of normativity in this area: does a radical materialism have the resources to account for its own justification? We are all naturalists now — after a fashion, at least. But speculative realists have adopted a particularly strident form, which does not seem to be friendly to normativity. Just witness Ray Brassier’s Nihil Unbound. Can it understand, or sufficiently redescribe, the context in which it puts forward its own theory, such that it can allow that such a theory is meaningful, justifiable and truth-apt, whilst cleaving to a sparse materialist metaphysics which admits values, if at all, only in an anti-realist fashion? I will have more to say about this at a later date.

Realism and Correlationism: Some preliminaries

Over at Larval Subjects, Now-Times and Perverse Egalitarianism there has been a fractious debate regarding realism which has gone on for some time. This is in the wake of ‘speculative realism’ coming to increased prominence, under the influence of Quentin Meillassoux, Ray Brassier, Iain Hamilton Grant and Graham Harman. This realism has been contrasted with a correlationist position, which is taken to infect much contemporary philosophy.

Meillassoux introduced the term ‘correlationism’ to describe a non-realist position which claims that “we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other.” (AF: 5) As Meillassoux also puts it, the correlationist denies that it is possible to ‘consider’ the realms of subjectivity and objectivity independently of one another. Of course, this could mean any number of things. Whether correlationism proves to be a useful philosophical category depends upon how this claim is spelled out.

Kant is supposed to be the paradigm correlationist. This is because Kant was meant to disallow us knowledge of any object subsisting ‘in itself’. Instead, knowledge was to be restricted to objects as they are ‘for us’. Thus, Kant is said to have eroded the pre-critical distinction between primary and secondary qualities, since even central candidates for the status of primary qualities (such as an object’s mathematisable properties) must be “conceived as dependent upon the subject’s relation to the given — as a form of representation.” (AF: 4)

Do the new realists characterise Kant’s position fairly? A lot of acrimony has resulted from attempts to answer this question in discussions between Levi, Alexei and Mikhail. Both sides are now pretty entrenched, and that is when they are on speaking terms. I don’t want to reignite these ‘Kant wars’, but I will offer some comments on this issue in the next few posts.

Firstly, Levi has expressed some dismay that this question has become a focal point at all. It is, he thinks, another sign of a kind of hermeneuticism endemic in continental philosophy, which drives philosophers into endless debates over the meaning of texts at the expense of assessing their truth. Of course, detailed textual work is often extremely valuable, but — the concern is — many philosophers have stopped reading the works of Kant, Heidegger or Deleuze as tools in a larger quest to understand the world, taking this activity instead to be an end in itself. This is a genuine problem, and I am equally frustrated when scholars turn into scholastics. But I do not think the charge applies in this instance.

Levi claims that Kant is the ‘inventor’ of correlationism and is a central example of a correlationist (though by no means a unique one). Moreover, there is repeated reference to his position — and perhaps more importantly, his vocabulary — in contrasting correlationism and the new realism. If Kant’s position is in dispute, it is important at least to articulate the reading of him in play; otherwise, the exposition of correlationism itself risks being unclear. It was unclear to me, for one, until I got a handle on how terms such as ‘in itself/for us’ were being understood here. More importantly though, Kant gives us a detailed and nuanced treatment of the ways in which being might be taken to be related to thought. If that account were buried under a problematic reading of him, the substantive debate would be all the poorer as a result. These two considerations should carry some weight even amongst those for whom understanding Kant’s own thought is a secondary concern.

Secondly then, moving to the issue proper, I want to flag some of my concerns over the use made of Kant. In these matters, I am predominantly in agreement with Alexei, who I think has done a sterling job in this respect. I suspect this is because we are familiar with much of the same recent literature on Kant, which brings out just how complex and well-crafted a project transcendental idealism is. Here, I am thinking of Kant scholars such as Henry Allison, Karl Ameriks, Graham Bird, Fred Beiser, Allen Wood, Onora O’Neill and Paul Guyer. Though they are by no means united, their approaches to Kant are commendably sophisticated, and their sustained attention to detail has shown how Kant was aware of many of the standard charges against him (subjectivism, a priorism, emptiness, etc.) and either responded to them or developed the resources to do so. The point is not to be an apologist for Kant but to do justice to the power of his thought insofar as it promises to help us understand the world. I think that it still can, even if I am not (just as Alexei and Mikhail are not) a paid-up Kantian.

In the posts that follow, I will concentrate on three cases, with an eye towards why the readings of Kant matter. (I won’t address the recent hot topic concerning time and ancestrality, since I can’t devote the energy to it, especially as tempers are flaring once again.) Again, the aim will be to show why a focus on Kant is not a morbid fixation but a useful piece of the puzzle. I want to show how the cases I’ll look at bear upon substantive issues in metaphysics, epistemology and ethics, even when abstracted from the historical question of what Kant thought. Also, I shall try to counter the second-guessing of the motivations of critics of speculative realism, providing some symptomatological musings of my own. However, I also want to issue a plea for a bit of old-fashioned bourgeois civility, which would not go amiss on all sides. I’ve no interest in questioning other people’s intelligence or integrity. This said, the next post will be about what Ameriks calls the ‘short argument’ to idealism, which Meillassoux and Levi attribute to correlationists.

The Year in Books

Academic presses are still creaking under the weight of books published, so you would be forgiven if the occasional gem passed you by. It being the end of the year as well, I thought I would flag some notable philosophy books published this year, as well as point to some to look out for in the coming year. I’d be happy to hear of any of your own picks for this year’s best too.

My favourite book to appear this year is one I’m still reading — Robert Pippin’s masterful Hegel’s Practical Philosophy: Rational Agency as Ethical Life. As ever, Pippin manages to combine a wonderful lucidity of thought with a rich and suggestive prose style, which makes all his work a pleasure to read. This book develops the reading of Hegel which he shares with Terry Pinkard, which sees Hegel as engaged in the project of constructing a theory of normativity which would build upon, whilst radically revising, Kant’s talk of self-legislation. As long-time readers will be aware, I think this project is flawed both historically and philosophically. Nonetheless, Pippin has brilliantly buttressed his case here; and even where I think he goes astray, he is always insightful, especially when engaging with contemporary philosophical developments. If you have any interest in Hegel, metaethics or normativity, this comes highly recommended!

Another book in a similar vein, though this time arguing against a central role for autonomous agency, was Charles Larmore’s The Autonomy of Morality. Like Larmore’s other books, its mainstay is a collection of revised articles, loosely connected to the book’s theme. These are tied together by a central essay arguing against Kantian constructivism as a metanormative theory. Larmore thinks that in place of a morality of autonomy we need to reclaim an autonomous morality. To unpack that slogan a little, he thinks that treating autonomy as a foundation for normativity is incoherent: any norms based upon autonomous endorsement alone will be little more than products of what Donald Regan calls ‘arbitrary self-launching’. Any putative norms arising from a process of self-legislation, so understood, cannot have a rational claim upon us. Instead, he thinks we must suppose that morality itself (and presumably other normative domains) is autonomous — independent of our practices, insofar as its ultimate authority is concerned.

My main reservations about his position arise with his conception of this independent normative realm — something he takes to be a robust metaphysical space, akin to the space of physical or psychological inquiry. In one essay, ‘Attending to Reasons’, he argues against the more Wittgensteinian conception of philosophical inquiry which animates McDowell’s work on just this sort of issue. It seems to me that Larmore lacks any good argument against such a position though; he simply restates the demand for philosophical explanation — e.g. surely we need to know what reasons are — which is the very thing that the Wittgensteinian tries to get us to loosen our grip upon by directing us to more modest questions about what we do and what we treat as a reason. This is a debate which needs reformulating if either side is to find traction with the other — something I find myself tasked with doing at the moment.

Talking of Wittgenstein, Oskari Kuusela’s The Struggle against Dogmatism: Wittgenstein and the Concept of Philosophy came out in April. This is another which I have not got all the way through yet, but the parts I have read are promising. The book is an attempt to describe Wittgenstein’s methodology, especially as it blossoms in the later philosophy. I had occasion to speak to Oskari at an event this year, and I was struck by the intensity of his commitment to reading Wittgenstein with an anti-dogmatic tenor — one in which we have to radically rethink philosophy’s approach, as opposed to sliding into an equally formulaic characterisation of philosophy (e.g. the first thesis of Philosophy Club is that there are no theses in Philosophy Club…). What is particularly striking about Oskari’s approach is that it takes the question of methodology to be the beating heart of Wittgenstein’s work, whilst nevertheless letting us see how genuinely productive, progressive and insightful philosophy can still be done under its auspices.

I was rather less enamoured with Brandom’s Between Saying and Doing: Towards an Analytic Pragmatism, in which he attempts to reconcile pragmatism and more mainstream analytic philosophy. He claims that it is pragmatism in both the classical and Wittgensteinian senses which is to be one side of this reconciliation. However, Brandom’s Wittgenstein is the worst of caricatures — a sloganeer, reduced to spitting ‘meaning is use’ and other proto-systematic dictums. His is a decidedly non-Kuuselic reading. This bears upon his recent book insofar as it is animated by the worst of Brandom’s habits, and indeed the red thread which will unravel most of his work: reductionism. Brandom seeks to describe a set of reductive relations between different sets of vocabulary (logical, modal, normative, intentional, etc.). My thought here is that Brandom is doing little more than repeating the mistakes of traditional metaphysical inquiry in a semantic key. The lure of reductive accounts is great, and they are quite rightly indispensable in the natural-scientific enterprise. But philosophy is neither natural science nor composed of formal systems like logic, and the understanding which a massive programme of theoretical interdefinability promises is little more than a mirage. It is Wittgenstein himself who provides the greatest lesson here, in his development away from the false clarity of his early work’s thoroughgoing analysis of the logical structure of natural language. This is yet another reason why Brandom’s counting Wittgenstein as an ally, albeit a misguided one, is perverse.

On a happier note, the blogosphere’s very own Sinthome, of Larval Subjects, published Difference and Givenness: Deleuze’s Transcendental Empiricism and the Ontology of Immanence. The project is an exciting one: a rehabilitation of a Deleuzian metaphysics as the ground of rethinking the perennial philosophical questions surrounding the particular-universal, existence-essence and sensible-conceptual relationships. It is the last of these which takes centre-stage, with the guiding question being how we are to understand Deleuze’s ‘transcendental empiricism’, which seeks to unfold the productive conditions for experience. It is in virtue of this topic that those of you with a ‘post-Sellarsian’ temperament may find it particularly interesting, since it tackles questions surrounding the intelligible structure of experience, familiar in the neo-pragmatist literature, from an interesting angle. Unfortunately, it has proved a little too hard-going for a casual reader like myself with little exposure to Deleuze. I hope to have the stamina for another go in the future though.

McDowell-watchers will have noted John McDowell: Experience, Norm and Nature, edited by Jakob Lindgaard, which collects many of the recent essays on his work from the European Journal of Philosophy, including new replies by McDowell. The most notable addition is a new essay in which McDowell revises his long-held and controversial position on the propositional structure of experience, replacing it with the claim that experience is conceptual simply in virtue of its ability to be discursively articulated. This claim is ostensibly made in response to Charles Travis’ arguments about conceptual content, though I think it may come to be seen as being heavily influenced by the next book I’ll mention.

I’ve yet to read more than a handful of pages of it, but Michael Thompson’s book Life and Action: Elementary Structures of Practice and Practical Thought looks fascinating. In it, he undertakes an Aristotelian analysis of the concepts of life, action and practice, as the basis for a clear view of practical philosophy. As I say, I suspect that it is Thompson’s influence on McDowell which can account for some of the impetus for his revised position, as reflected in McDowell’s eagerness to make room for a distinct mode for the representation of life within experience. I am reliably informed that Thompson’s work is attracting a lot of attention on the Chicago-Pittsburgh circuit, and I would expect to see it discussed widely in the future. Were I to hazard a guess at which philosophy book in the broadly conceived post-Kantian tradition will end up being this year’s most influential, it would be this one.

Next year will see another promising book on metaphysics, namely, Robert Stern’s Hegelian Metaphysics. It’s going to be a collection of some of his essays, both new and old, on Hegel and metaphysical themes. In particular, there’ll be essays on themes from Hegelian metaphysics, such as concrete universality and the Hegelian conception of truth, alongside critical and comparative essays on historical movements influenced by Hegel, such as the classical pragmatists (especially Peirce) and the British idealists. Again, Deleuzian metaphysics comes up, with a defence of Hegel’s position against Deleuzian criticism.

Also next year, two McDowell collections appear, The Engaged Intellect: Philosophical Essays and Having the World in View: Essays on Kant, Hegel, and Sellars. The contents should be familiar to those already keeping up with McDowell’s recent work, though there is what appears to be a new essay on Hegel which I am keen to see. Korsgaard’s Locke Lectures, Self-Constitution: Agency, Identity, and Integrity, are also due out. From the lecture texts already online, this looks like it will be a good read, and will no doubt draw a lot of attention! (She also had a collection of essays out this year on similar themes, called The Constitution of Agency: Essays on Practical Reason and Moral Psychology.) A volume of essays on Making It Explicit is also forthcoming, called Reading Brandom: On Making It Explicit. The contributors are not quite as illustrious as those for the McDowell volume in the same series, but it looks interesting nonetheless.

As I say, I am happy to hear your own notable philosophy books of the year!