Why normativity?

One of the main themes of my research is normativity. In this post, I want to provide something of a primer on normativity. Hopefully, this will go some way towards explaining why this is an issue we should take seriously and why normativity acts as a core concept for philosophers like myself. The immediate prompt for this post is the ongoing discussion between Levi and Pete, which keeps running aground whenever normativity arises. Pete has set out the pertinent issues a number of times, but I thought it would be useful to approach them from my own perspective too, since a bit of triangulation might help us get a clearer sense of what is being said.

In short, normative issues concern correctness. Obviously, there are many different senses of correctness and many different ways in which it arises as an issue. This helps to explain the tendency for normativity to become a monolithic topic, which can seem to suck all philosophical light towards it like some supermassive body. One crucial distinction here, though, is between first-order normative inquiry and metanormative inquiry. I’ll explicate this in relation to some philosophically familiar topics.

In ethics, we are often asking what we ought to do — what would be good in the way of practical action. Should I give up philosophy and train to be a psychiatrist? Was it too callous to have decided not to meet your friend because you felt too drained to listen to their problems? Is a society without socialised medicine thereby unjust? These are first-order ethical questions, and they are normative because they are oriented by the question of how it is correct to act.

But we can also ask (as metaethicists do) about the metaphysics, epistemology and semantics involved. What does it mean to say I should do something? What is it for an action to be good? Is it possible to know whether I did the right thing? How? What, if anything, separates ethical demands from those of social etiquette? These are metanormative questions, also oriented by the notion of correctness in action, but which try to uncover what this notion of correctness amounts to. At its limit, it might even conclude that there is no sense in which actions are appropriate or inappropriate — they just are.

Similar distinctions between the first-order normative and the metanormative arise in other areas too. Another familiar example is deliberation about empirical facts and epistemology. Again, there are first-order matters: What is that animal? Would it be true to believe it is a chaffinch? Are physicists justified in believing that the Higgs boson exists? Do these problematic observations imply that the standard model of particle physics must be revised? These are normative matters, but only in the thin sense that they, arguably like all inquiry, are oriented by the question of what is good in the way of belief. The usual (and right) answer is that we should believe the truth, but we can also assess correctness along a number of other dimensions: justification, entitlement, inference, probabilification, consistency, coherence, accuracy, and so on.

One of the functions of epistemological inquiry is to examine the status of these first-order matters. What does it mean to say that we are not entitled to believe that P, even though P might turn out to be true? What is it for our perceptual beliefs to be justified? What is problematic about inconsistency in belief? These questions have a metanormative dimension insofar as they abstract from the immediate issue of what beliefs are good or bad along various dimensions in order to ask what any such assessments consist in.

The tentacles of normativity reach far and wide beyond these two examples too. We find them in the philosophy of language, where many have argued that meaning is normative insofar as terms have a correct and incorrect usage, and where the task is to flesh out this normative dimension to linguistic practice. The same goes for concepts and their conditions of application (alongside the murky notion of ‘mental content’ more generally) in the philosophy of mind. Aesthetics can be thought to be a discipline with a metanormative aspect too, especially when beauty and art are notions entangled with endorsement, which a theory of aesthetic judgement may need to confront. The theory of agency is another locus for normative issues, especially insofar as many people (myself included) think that there is a distinctive logical form to action-explanation which needs to be articulated in relation to reasons in addition to brute causality. The list goes on… I certainly do not mean to endorse all these projects — I think many of them are misguided ventures — but merely to point to some of the ways in which metanormative matters appear within contemporary philosophy.

Hopefully, it will now be clear that we can pitch normative inquiry at different levels. One worry that Levi has expressed is moralism: doesn’t an obsession with normativity lead us to fixate on judging people, weeding the unworthy from the worthy, and seeking to police people’s activity? With our distinction between first-order and metanormative inquiry in hand, we can respond by saying that focusing upon the normative does not necessarily betray a desire to be the judge of what is right and wrong. Instead, for philosophers interested in normativity, there is more often an attempt to understand the conditions under which assessments of correctness can successfully be made at all and what the upshot of such assessments is. One way of articulating this is to say that we want to understand ‘the force of the better reason.’ I take this to be quite some distance from the right-wing obsession with ‘values’ that seems to be incensing Levi.

Another worry of Levi’s may be more pressing though: turning to norms can seem to be a turning away from the world. We can think of this in terms of a kind of ‘cognitive ascent’ which proceeds in two steps. First, there is a distinction between talking about the world and talking about our orientation towards the world. On one view, there are plenty of philosophically interesting features of the world to talk about — the structure of objects, the nature of causality, the individuation of social actors — and we should just get on and do that. Talking about norms is not talking about the world in this way: either it is talking about how we should talk about the world, and not plainly talking about the world; or it is talking about how we should act in the world, which is both tediously anthropocentric and still not talking about the great outdoors. Furthermore, there is a second ascent here too, since there is not only first-order normative inquiry but metanormative inquiry. It is either talking about how we should talk about how we should talk about the world, or it is talking about how we should talk about how we should act in the world. The reassuring crunch of reality is thus even further from being underfoot.

In response, there are a few things to be said. Firstly, I do take discussions of normativity to be about the world directly — they are no mere escape from it. I will be brief here because in a way this does not meet Levi’s worries head on. Nevertheless, whilst it may be démodé in some quarters to be concerned with rational agency as a distinctive phenomenon, I think it is a pressing descriptive task for which marshalling the vocabulary of normativity is essential. (See Pippin’s excellent piece and the surrounding discussion for more on this.) Beyond this, normativity is itself part of the world and is threaded throughout countless human practices. When McDowell gives us an account of virtue alongside the capacities, social practices and formative processes needed to make sense of our responsiveness to it, or when Brandom excavates the structure of the practice of giving and asking for reasons, they are talking about real phenomena which need to be elucidated. It is perhaps worth stressing that there needs to be nothing a priori about these normative investigations, even if figures like Habermas (and Pete for that matter) sometimes try to derive minimal rational norms in a quasi-transcendental fashion.

It is the intersection of metaphysics and normativity that seems to be worrying Levi here though. I am less enamoured of constructive metaphysics than Levi or Pete, but I think the latter does a brilliant job of demonstrating the methodological role that metanormative inquiry should have within any such metaphysics. I can’t do better than his own exploration of these issues in this essay.

Finally, I want to underscore the possibility of detaching normativity from deontology. The latter understands normativity in terms of necessity imposed through legislation. In Kant’s practical philosophy, this is bound up with a system of duties and rights which takes a rigorist form that many people find both implausible and distasteful. Indeed, Lukács tries to show how the Kantian subject, delineated through reference to an abstract set of duties, is the one presupposed by capitalism, being a reactionary ideological symptom of modern forms of exchange.

My own understanding of normativity, in both the practical and the theoretical realm, has a more Aristotelian character to it. I do not take the reasons we have to stem from legislation but rather from the concrete situations we face as agents. Nor do I typically articulate rational requirements in terms of necessities imposed on us by reason. Instead, I help myself to the less austere vocabulary of the good as well as the right, and try to extend the concern with what one can do to who one should be as well. In addition to my own, there are innumerable other ways of approaching normativity too. It is important that this point be made in order to correct the assumption that recourse to normative themes is meant to bolster some quasi-Kantian project, when this is certainly not always the case.

I could continue but that’s more than enough for now.

Nature and Normativity

Here are two excellent essays, each taking opposing stances on how to answer questions centring on nature and normativity, alongside the roles of sciences and humanities in understanding reality. Both are admirably lucid and make a good case for their competing methodologies: firstly, an unashamed defence of ‘scientism’; secondly, the demand to take the standpoint of practical reasoning seriously.

An excerpt from each, beginning with Alex Rosenberg’s ‘The Disenchanted Naturalist’s Guide To Reality’:

What science has discovered about reality can’t be packaged into whodunit narratives about motives and actions. The human mind is the product of a long process of selection for being able to scope out other people’s motives. The way nature solved the problem of endowing us with that ability is by making us conspiracy theorists—we see motives everywhere in nature, and our curiosity is only satisfied when we learn the “meaning” of things—whose purposes they serve. The fundamental laws of nature are mostly timeless mathematical truths that work just as well backwards as forward, and in which purposes have no role. That’s why most people have a hard time wrapping their minds around physics or chemistry. It’s why science writers are always advised to get the science across to people by telling a story, and why it never really works. Science’s laws and theories just don’t come in stories with surprising starts, exciting middles and satisfying dénouements. That makes them hard to remember and hard to understand. Our demand for plotted narratives is the greatest obstacle to getting a grip on reality. It’s also what greases the skids down the slippery slope to religion’s “greatest story ever told.” Scientism helps us see how mistaken the demand for stories instead of theories really is.

From Robert Pippin’s ‘Normative and Natural’:

Normative questions, I mean, are irreducibly “first-personal” questions, and these questions are practically unavoidable and necessarily linked to the social practice of giving and demanding reasons for what we do, especially when something someone does affects, changes or limits what another would otherwise have been able to do. By irreducibly first-personal, I mean that whenever anyone faces a normative question (which is the stance from which normative issues are issues)  – what ought to be believed or what ought to be done – no third-personal fact about “why one as a matter of fact has come to prefer this or that” can be relevant to what I must decide, unless (for good practical reasons) I count it as a relevant practical reason in the justification of what I decide. Knowing something about evolutionary psychology might contribute something to understanding the revenge culture in which Orestes finds himself in Aeschylus’s Oresteia, and so why he feels pulled both to avenge his father’s murder by his mother Clytemnestra, and also feels horrified at the prospect of killing his mother in cold blood. But none of that can be, would be, at all helpful to Orestes or anyone in his position.  Knowing something about the evolutionary benefits of altruistic behavior might give us an interesting perspective on some particular altruistic act, but for the agent, first-personally, the question I must decide is whether I ought to act altruistically and if so why. I cannot simply stand by, as it were, and “wait” to see what my highly and complexly evolved neuro-biological system will do. “It” doesn’t decide anything; I do, and this for reasons I must find compelling, or at least ones that outweigh countervailing considerations. It is in this sense that the first-personal perspective is strictly unavoidable. I am not a passenger on a vessel pulled hither and yon by impulses and desires; I have to steer.

Bad Habits: The Philosopher as Concept-Monger

In the analytic tradition, one popular characterisation of philosophy has been that it is conceptual analysis. After the Quinean attack on the analytic-synthetic distinction, this view has become less popular, but it still has its adherents. The idea is that philosophers take a problem (e.g. free will) and then decompose it into a set of pertinent concepts (e.g. responsibility, determinism, freedom, agency), clarify what these concepts might mean and how they relate to each other, and thereby hope to remove the air of mystery which hangs over the unanalysed problem. Ordinarily, at most, such a philosopher might recommend we use a word in a different way (e.g. talking of ‘free agents’ but not ‘free actions’ or vice versa), or stop invoking certain concepts at all (e.g. final causes). But there is another less conservative model of philosophy which also takes its object to be concepts, most often associated with recent continental thought.

Deleuze thinks philosophy is the “continuous creation of concepts.” (WIP: 8.) In part, this is meant to align philosophy more with productive activities, which make and create, than those that test and observe — philosophy is to be more poesis than theoria. Deleuze brings his own inflections to the notion of the concept, and thereby to philosophy as concept-creation too. But the details of Deleuze are not my concern here. Rather than criticising Deleuze, my focus here is on the kinds of uses (or misuses) to which this idea of philosophy as concept-creation has been put.

The main ill-effect of the idea of philosophy as concept-creation which I want to point to here has been its reinforcement of one way of approaching philosophers. So, we get the philosopher-as-conceptual-toolsmith model. At its worst, we end up with synecdoche run amok, where one prominent idea comes to dominate everything else about a philosopher’s work — Wittgenstein = language games, Foucault = power-knowledge, Levinas = the Other, Badiou = the Event, etc. For example, Simon Critchley describes the post-Kantian landscape thus:

you get the Subject in Fichte, Spirit in Hegel, art in the early Schelling, and then in later nineteenth and early twentieth century German philosophy, Will to Power in Nietzsche, Praxis in Marx and Being in Heidegger. (New British philosophy: 187)

Similarly, Graham Harman claims that Heidegger only really had one idea which he endlessly repeats, namely the tool-analysis. But even without this extreme hermeneutic reductionism, there is a real coarsening which can go on when we chisel down a philosopher to a handful of headline concepts.

All of this is not to say that philosophers do not produce new concepts. Nor is it a plea for endless textual analysis and scholarly ensconcement such that we never put a philosopher’s ideas to work in a new context. And neither does it display a blindness to the realities of communicating philosophical ideas in circumstances where people do not have the time or inclination to master more than the headline ideas of many thinkers. Instead, all I want to do is make the observation that emphasising the concept-creation model of philosophy too much can promote some dubious tendencies in both historiography and contemporary critical debate.

Firstly, unsurprisingly, it often leads to trading in caricatures and straw men. Second, it tends to drive a mechanical style of philosophy, whereby the aim is to ‘apply’ the concepts of the master-philosopher to a given material rather than approach it afresh — ‘I will now give a Foucauldian/Wittgensteinian/SR analysis of x’. Third, it tends to occlude the historical dimension of much philosophy (responding to a certain set of material circumstances; intervening in a historically evolving tradition). Fourth, it can also shroud what is valuable in philosophical work, which sometimes is the purchase which a new concept provides, but is often dissolving a bogus problem, reframing a question to allow it to be answered, effecting a more diffuse change of perspective on an issue, instilling a sense of Entfremdung with respect to something we’ve taken for granted, and so on. All these dangers make me wary of overplaying the image of the philosopher as a forge for concepts.

Disenchantment

Modernity is often associated with disenchantment. But what does this mean? Ancient thinkers had tended to ascribe teleological principles to the natural world: the stone strives for its home at the centre of the earth; the eclipse communicates divine displeasure. The monotheistic traditions which then gained ascendancy in Europe and the Near East retained something of this, finding God’s plan suffusing nature: God creates walnuts to resemble brains, signing to human reason that the former is good for the latter; gold and silver lie beneath the ground and the sun and stars shine in the heavens above, displaying a divinely ordained symmetry (both these latter examples are taken from Foucault’s The Order of Things). But with the rise of the mathematical sciences, natural teleology and divine order came to be treated with increasing derision. Aristotle was to be banished to the libraries of the Schoolmen, and if God had daubed nature with language, he would speak to us in mathematics and not dainty allegories. For philosophers such as Descartes, matter was extension, and must yield its secrets to a physics taking mathematisable form. This approach to the natural world was further buttressed in the minds of natural philosophers by the successes of the Newtonian revolution. In biology, by 1828 even the demand for a vital force — said to divide the organic from the inorganic — had proved empty, Wöhler having shown that the organic could be synthesised from inorganic components.

Everywhere, meaning fell under the sword of mechanism, and myth and mysticism with it. But suspicion hung over this evacuated nature, for was it not also our home — perhaps even the very substance of our being? If so, what remained of freedom, providence, value, beauty or morality in all this? The very meaning of life appeared to be under threat, since there seemed to be no room for God, rational harmony or true righteousness amongst the icy torrents of indifferent particles. The height of the Enlightenment saw the most avid articulation of these worries, with Jacobi coining the term ‘nihilism’ to describe what he saw as Godless and fatalistic Critical philosophies, which in his eyes provided little more than a fig-leaf covering their destruction of a transcendent source of value.

In all this, there are both progressive and regressive currents. The rise of modern science has been a near-unparalleled breakthrough, on a par with the development of agriculture, city dwelling or the institution of constitutional legal codes. In the process, it has rightly banished God-talk from natural philosophy and much else besides. So too, it has helped deaden the appeal of any view of freedom wherein it consists in some contra-causal power to intervene in the world (quantum mechanical gymnastics aside). But there is a risk of the burning light of science blinding us to the proper significance (or even existence) of certain equally natural phenomena. My own interests here settle on normativity — what we are committed to, entitled to, or prohibited from thinking and doing; how we are subject to the ‘force of the better reason’; why we not merely do but should follow certain rules and conventions — ethical, theoretical, aesthetic, affective — whilst rightly rejecting others. Often, attempts to understand normativity suffer from a scientism which extends far beyond a healthy respect for the natural sciences, and which commonly has its roots in a problematic conception of disenchanted nature.

In the face of the disenchantment of nature, we can easily succumb to that curious form of philosophical vertigo that Wittgenstein diagnoses so well. We then grasp about for a solid handhold. Confronting frigid nature, operating with lawful or law-like regularity, one response has been to cast aside concepts like freedom, obligation and representation as folk-psychological detritus which we can do without. For example, Stephen Stich has claimed:

intentional states and processes that are alluded to in our everyday descriptions and explanations of people’s mental lives and their actions are myths. Like the gods that Homer invoked to explain the outcome of battles, or the witches that inquisitors invoked to explain local catastrophes, they do not exist. [quoted in a recent article by Dwyer]

This is the eliminativist approach: the world is nothing like what the fantasies of religion and art had led us to believe — it is the indurate ground of animal life but not our ‘home’. For the eliminativist, there is no need to sweeten the pill of the disenchantment brought on by the scientific mind-set. As Ray Brassier has recently written, “Philosophy should be more than a sop to the pathetic twinge of human self-esteem.”

Drawing back from eliminativism, another response has been to reconstruct those concepts suspected of anthropocentrism in a more respectable vocabulary for the naturalist. So, there is no need to ditch freedom, say, but let us just be clear what we mean by it, where this might legitimately be causation along certain biochemical pathways and not others, or action in light of knowledge of the conditions under which it was caused, or whatever natural-scientific form of description best approximates actual or ideal folk-psychological usage. The manifest image of humanity is not entirely wrongheaded, just naïve. Properly regimented, it captures something important about human patterns of understanding, behaviour and our place in the world. Let us call this view naturalistic revisionism.

Different again from eliminativism and revisionism is expressivism. The expressivist agrees that the world is a cold, dead place when contrasted with the animisms, platonisms and providentialisms of old. However, the human animal ‘stains’ and ‘gilds’ reality with its sentiments (to borrow Hume’s terms). For the expressivist, it is we who project value on the world, and this can give us the resources to explain ethics, freedom and aesthetics outside of the tight net of the scientific naturalist’s privileged nomenclature. There is nothing unnatural about our caring about (or disdaining) each other, our projects and our environments; but that need not force us to redescribe ourselves in natural-scientific terms alone — our passions have their own logic and significance, which subsist upon but grow out of their natural base.

Yet another response to disenchantment has been to foreground not human emotion but reason and autonomy. For constructivists, the legacy of disenchantment has been to show us that we are alone in the world, with no divine firmament above or promontory below that would help us surveil a normative order. But unlike the expressivist, the constructivist holds that we should look to our activity of trafficking in reasons, which stretches beyond our structures of passion. We forge obligations for ourselves through the exercise of autonomous legislative capacities, claiming ownership of our actions through drawing them into an unfolding plan which we grant authority over our desires, projects and identities as a whole. In doing so, we act with the dignity proper to creatures capable of self-determination, who are not merely buffeted around by events, beliefs or desires, but who manage to establish some sort of purchase and sovereignty over themselves and thereby lead their lives.

Now, you need not be a platonic boogeyman to be uneasy about this collection of options. My own thinking about these issues is heavily indebted to John McDowell. His suggestion that we need “a partial re-enchantment of nature,” as with many of McDowell’s trademark phrases, is a little unfortunate though. He stridently rejects the idea that ‘re-enchantment’ has a “crazily nostalgic” character which gives any ground to a “regress into a pre-scientific superstition” which would encourage us to interpret the fall of a sparrow like we would a text. But nevertheless the associations surrounding ‘enchantment’ remain — something spooky gets evoked. Talk of ‘re-enchantment’ is misleading, and a better McDowellian phrase would be resistance to the “interiorization of the space of reasons.”

Disenchantment makes it seem like reasons are illusory or are at best absorbed into the activity of subjects. What we get is meaning, and the rational relations it makes intelligible, restricted to meaning-conferring subjects. At most, so understood, we project reasons into a world of rationally inert objects. The car-crash is then only a reason to phone an ambulance in light of human ethical practices; the ionisation trail in the cloud chamber only justifies belief in the presence of an alpha particle in light of the construction and testing of electromagnetic and particle theories. Now, there is something right and something wrong about all this. We cannot intelligibly think from a perspective of cosmic exile and must accept the finitude of our cognitive capacities (contra SR and OOO). All of our truck with value, reasons, justification must proceed from local and situated circumstances and continue to lean upon human forms of knowing and valuing. But that does not mean we should rest content with the idea that these are ‘merely human’ standards whose shadows fall upon an apathetic world. Our finitude, properly understood, ought not to impugn normative realism, and we should not be carried away by the characterless world presented by natural science.

Nature is not exhausted by natural scientific description, and so it is misguided to require human interests for any more juice to be squeezed out of it. The predominantly nomothetic explanations offered by natural science are pearls without price, but they have no claim to speak for the totality of nature. Human life is obviously in some sense ontologically decomposable into organic compounds, atoms, quarks and electrons, and so on. But the explanatory matrix which most often befits it is normative and not immediately natural scientific (whatever the prospects of reductionism about normativity). Again, there is nothing unnatural about humans as they fall under normative descriptions, appraised in terms of their intentions, virtue, beauty or freedom. We come to employ these concepts in the course of our biological maturation, supplemented by a process of socialisation which is no less a part of the natural history of humanity.

The temptation towards the modernist division between meaning-conferring subjectivity and intrinsically meaningless nature arises when we think that we can only have meaning on human terms — the human forge of meaning being the correlate to the frozen world of mechanism. If the logical space of nature and the logical space of reasons are irreconcilable, then this would seem to follow (assuming naturalistic revisionisms are moribund, which I think is very plausible). But this is only so if nature is also exhausted by natural scientific description. And it is not: natural events can be legitimately characterised in normative terms without a regression to pre-scientific rationalism. This is the sort of re-enchantment McDowell seeks, and rightly too. The claim to be defended is thus: “the natural world is in the space of logos.” My optimism on this count is rarely shared though.

On the Ontological Principle

In my previous post, I outlined Levi’s Principle of Translation, which states that “all transportation is translation.” This principle opposes the idea that objects are mere passive items which simply acquiesce to influences upon them. Instead, onticology is an ontology of resistant objects, which struggle with each other. The point of these dramatic metaphors is to insist that influences must be taken up by objects, where this involves a ‘fusion of differences’. For example, when oxygen and water cause iron to rust, the iron itself is active here, entering into a network with the oxygen and water to produce the difference, rather than being a mere container for their effects.

One of the philosophical upshots of this principle is that objects are not simply vehicles for some set of differences. In other words, they are not inert items that can have a form imposed upon them without this redounding upon the process of formation. I take Levi to find this significant because it is incompatible with certain types of correlationism, where a correlate would determine objects without itself being determined, the object playing no role in its own determination. (Again, I will stress that I think the concept of correlationism is a red herring.) In this way, it helps to avoid Levi’s Hegemonic Fallacy: the reduction of difference to ‘one difference that makes all the difference’ or ‘the most important difference’.

I take the Hegemonic Fallacy to be Levi’s main target. This is significant because it not only sets him against correlationism but also against the speculative realisms of people like Ray Brassier. Brassier embraces eliminativist lines of thought and would doubtless not shrink from the charge of scientism. Here, materialism would seem to introduce matter as ‘one difference that makes all the difference.’ In contrast, Levi is keen not to debunk the human, and his ontology is meant to be open-ended and inquiry-led: if it is found to make a difference, then it is real — whether it be Oedipus, evil, Edith Piaf or an electron. This is captured in the Ontological Principle which results from the Ontic Principle: “Being is said in a single and same sense for all that is.” Indeed, this is all that Levi thinks can be said about being qua being; thus, ontology must be pursued on the ontic level, dealing with beings themselves.

The Ontological Principle demands a flat ontology. One contrast here would be with vertical ontologies, where one sort of being overdetermines the rest. Correlationism and platonism would fit the bill here. However, Levi refuses to equate the univocality of being with a univocality of translations. In other words, no one type of being dominates others, and they all sit alongside each other, but that does not mean that every object must act and be acted upon in the same way. This comes out in a Deleuze passage which he quotes:

Being is said in a single and same sense […] of all its individuating differences or intrinsic modalities. Being is the same for all these modalities, but these modalities are not the same. — Difference and Repetition, p. 36

So, we can make sense of existence at “different levels of scale” whereby each level is not reducible without remainder into another level. This idea — no reduction without remainder — Levi calls the Principle of Irreduction. One consequence of this principle is that “the relation between individuals is not one where one type of individual explains the rest without remainder, but where processes of translation must take place.” Levi’s example is DNA. It is a condition of my body existing and explains my anatomy, but cannot serve as an autonomous explanation since it must act upon resistant objects which take up that action according to their affections: “DNA, in unfolding, must nonetheless undergo translation as it transports itself […] and the body formed in translation with DNA produces its own differences.”

It is at this point that I am interested in the explanatory consequences of onticology. This is because I am sympathetic to something like the Ontological Principle and also want to accommodate different explanatory modalities within it (note here that my concern is primarily explanatory rather than metaphysical, though I don’t think I am guilty of Levi’s Epistemic Fallacy). In my case, I want to hold onto a form of naturalism which does not degenerate into scientism. Thus, I reject supernatural entities, like divine beings, along with platonic Forms (sympathetic readings of Plato aside). But I also resist any hegemonic move on behalf of the natural sciences to act as final arbiter for acceptable forms of explanation. The main clash here comes with our understanding of rational agency, which I think neither requires nor can be given an exhaustive explanation in natural-scientific terms. This is because many of the locutions which we (legitimately) use in explanations of rational agency — such as ‘justified’, ‘perceptive’ and ‘immoral’ — are not employed as empirical descriptions of behaviour but as ascriptions of a standing in what Sellars calls the ‘space of reasons.’ A different mode of intelligibility is required to characterise the empirical properties of natural objects than to characterise rational proprieties like entitlement, permission or inaccuracy.

The claim that this sort of rational intelligibility is irreducible to empirical intelligibility can be expressed by saying that the space of reasons is sui generis. It is this claim which I think we need to maintain, and which Levi’s talk of the mind’s translations not being special seemed to threaten. He has now clarified his position: his talk of the mind’s lack of specialness is only meant to stretch to its not being included in every relation. So, it seems that on these grounds there may be no source of objection to my approach, though there may be other reasons to object to it which stem from onticology. Nevertheless, in the next post I will fulfil my promise to say more about how we should understand the distinctively spontaneous translations of the subject, and how this bears upon metaphysical issues.

On the Principle of Translation

Levi has been developing a version of object-oriented philosophy which he calls ‘onticology’. In doing so, he recommends understanding objects as ‘actors’ which produce ‘differences’ in each other. Significantly, these modes of production include but are not limited to causality, such that anything which produces differences counts as acting. I am not sure exactly what sorts of non-causal production Levi wants to allow here, but we might think of examples like individuation, such that something counts as information, say, because of its place in an informational network even if it does not have to be in causal relations with all the parts that make up the network. So, objects can act in both causal and non-causal ways. Levi thinks that we should understand this action in terms of translation:

The Principle of Translation states that there is no transportation without translation. What I mean by this is that when the difference of one object acts on another object it translates or transforms that difference in a way unique to the receiving object. Thus, for example, my pepper plant “translates” the difference of sunlight producing energy in the form of sugars that it uses to produce its fruit and leaves. The process of translation thus transforms the differences of other objects in a way particular to the object doing the translation.

A second way in which Levi expresses this idea is in terms of affect (in Spinoza’s sense). The affective aspects of objects are those through which they can act and be acted upon. If influences upon objects must be transmitted through their affections, then there is a sense in which the production of difference in an object must be particular to it. Levi’s example of this is the neutrino, whose small mass, high speed, and lack of charge leave it with a limited set of causal powers to act and be acted upon.

Thirdly, Levi frames his Principle of Translation in terms of an extreme radicalisation of the Kantian insight about the activity of the subject, which he claims “transforms data of the world such that it does not represent the world as it is “in-itself””. The polemical suggestion is that Kant did not go far enough — why stop with subjects? So, Levi advocates a “generalized Kantianism of objects”. All objects are active because they transform what affects them, just as Kant rejects the Lockean idea that the mind is passive with respect to what affects it. This forms part of the call for a flat ontology, which develops a univocal analysis of objects which treats subjectivity and sociality as contiguous with everything else.

There is something to be said for a flat ontology. We ought to be wary of supposing that reality contains discrete levels, where the relations between them become hard to fathom. For instance, Cartesian dualism is reviled for good reason; it is understandable how it arose in response to the pressures of a mechanistic philosophy of nature, but it nonetheless invites mystification. So too, more recent appeals to sociality risk reprising its mistakes in another key. Latour has done much to expose the emptiness of those sorts of social explanation which do not inquire into the composition of the social itself, a task which he sees as the primary one for sociology. Levi introduces a further worry, that we turn to a vertical ontology, where one ontological level dominates — subjectivity being present in all relations, for example, as certain ‘correlationists’ are meant to believe (though see my previous posts on Meillassoux for my reservations about the charge of correlationism). But I think this should be kept distinct from the epistemological problems which would be created by a discontinuous ontology, which appear to force on us the explanatory task of showing how these distinct levels of reality interact. This kind of gap-bridging task — which rarely fares well — is the main fallout of non-flat ontologies.

Even with these difficulties in mind, I think that some of the aspects of Levi’s attempt to construct a flat ontology ought to be resisted. There is something distinctive about subjects which makes some forms of flat ontology problematic. We can talk both about objects translating objects and about subjects translating objects. But the translations of the subject include those of a unique kind, which are not adequately addressed by simply increasing the complexity of a unitary flat ontology. So, there is no objection to saying that objects are active and possess affections which translate influences upon them in particularised ways. But there is a highly significant type of activity which subjects engage in, which the Kantian tradition characterises as spontaneous. It is in virtue of their spontaneity that subjects are responsible for the translations which they undergo: and this brings with it many of the traditional distinguishing traits which have been used to mark out subjects, namely freedom, normativity, rationality and intentionality. In the next post, I shall say more about how we should understand the spontaneity of subjects and how that impacts upon metaphysical issues.

Realism and Correlationism: Kant and the Short Argument

Meillassoux takes the correlationist to rely on the following argument:

thought cannot get outside itself in order to compare the world as it is ‘in itself’ to the world as it is ‘for us’, and thereby distinguish what is a function of our relation to the world from what belongs to the world alone. Such an enterprise is effectively self-contradictory, for at the moment when we think of a property as belonging to the world in itself, it is precisely the latter that we are thinking, and consequently this property is revealed to be essentially tied to our thinking about the world. (AF: 4)

This argument is a form of what Karl Ameriks calls the ‘short argument’ to idealism, which often gets attributed to Kant. However, Kant does not make this short argument. Ameriks traces this form of argument to Reinhold, and he notes that it does sometimes appear in the post-Kantian tradition. So, we find Reinhold claiming the following:

What is represented, as object, can come to consciousness and become represented only as modified through the form of representation, and not in a form independent of representation, as it is in itself. (Versuch: 240; quoted in Ameriks FoA: 129)

Reinhold takes it that a need to represent objects for them to be given to consciousness ensures that we cannot come into an epistemic relationship to those objects which could be disentangled from our representations:

The concept of a representation in general contradicts the representation of an object in its distinctive form independent of the form of representation, or the so-called thing in itself; that is, no thing in itself is representable. […]

[T]he object distinguished from the representation […] can only be represented under the form of representation and so in no way as a thing in itself. (Versuch: 244, 246)

So, for Reinhold, because we cannot get outside of our representations, objects cannot be represented as they are in themselves.

If the correlationist — whatever ‘originary correlation’ they are meant to argue for, and whatever it means to say that they cannot consider its terms independently — has to rely upon this argument as it stands, they are in trouble. This is because the conclusion it argues for is trivial given the way key terms in the argument are understood. Reinhold is trying to prove that we cannot know things in themselves, where he takes knowledge to require that objects are represented to us. But if he tacitly understands ‘things in themselves’ just to be what is not representable, then the conclusion follows all too easily. Thus, on its own, this argument ought to convince no-one.

Meillassoux’s presentation of the argument proceeds in a similar fashion. It seeks to establish an (underspecified) ‘essential tie’ between thought and things in themselves. As with Reinhold, this is meant to undermine the possibility of an epistemic relation to the world as it is in itself independently of thought (one that the realist requires to distinguish primary and secondary qualities). The way that it does this is by simply noting that we cannot think of features of the world in itself without the world in itself being the object of that thought. Thus, we must always factor in a correlation between thought and the world in itself when attempting to reflect on the latter. Again, the shallowness of this argument ought to be transparent. Knowledge of the world in itself, as required by the realist, is denied to us because thinking is always present when thinking about the world in itself. However, this is only because here we are to understand knowledge of the world in itself as knowledge where thought is not present. The opposition is simply defined out of existence. Nothing is demonstrated by this argument, and it is no more contentful than Reinhold’s efforts.

* * *

Even with Meillassoux’s distinction between weak and strong correlationism, and the specification of different possible correlates than simply thought and world, I am not yet clear in my own mind what the status of the correlationist’s claim that thought and world must be thought together is meant to be. So, I am hesitant to assert or deny that particular philosophers are correlationists. Besides, I am not sure how useful a discussion along the lines of ‘is x really a correlationist?’ would be. Still, insofar as transcendental idealism can be thought of as introducing some significant relation between thought and world, whether we understand this idealism as metaphysical, formal, methodological or whatever, then it may bear considering in this context.

However we understand the relation between objects and cognition in Kant, I have claimed that we do not find a ‘short argument’. Yet, Kant does claim that objects conform to the conditions of cognition. So, we can ask, how does Kant’s position differ from the ‘short arguments’ dismissed above? This ought not to be of mere historical interest insofar as it can furnish us with alternative arguments for either correlationism or a more plausible relative of it. Speculative realists have an interest in attending to other such strategies insofar as their own positions can develop in dialogue with a wider range of opposition than the colourless proponent of the short argument.

Transcendental idealism famously effects a Copernican turn. Instead of assuming that all our knowledge must conform to objects, Kant ventures a hypothesis: objects must conform to our knowledge. This claim has proven difficult to understand. It is clear that Kant is not asserting an empirical idealism, which holds that objects have a metaphysical dependence upon our epistemic activity or our ‘representations’. Kant denies this when distinguishing his position from what he calls Berkeley’s dogmatic idealism. In the Prolegomena, he calls his position formal idealism, and any dependence of objects upon our knowledge is restricted to the forms of our knowledge. In the Analytic of the first Critique, regarding the categories of the understanding, Kant denies he is engaged in a traditional metaphysical investigation of being qua being (A247=B303). However, it can appear that the Aesthetic claims that our forms of sensibility, namely space and time, are ontological conditions of objects (although Kantians such as Henry Allison and Graham Bird forcefully argue against such a reading). Whatever the right interpretative approach here, obviously some important connection between formal conditions of knowledge and objects is being asserted. But why? The answer provides some possible motivations for something like a correlationist position which are not simply versions of the short argument.

Kant makes his speculative Copernican hypothesis because he is dissatisfied with metaphysics. When compared with mathematics, say, which also seeks knowledge which is not directly empirical, it can hardly be said to be on the ‘sure path’ of science. For Kant, this was illustrated by the hollowness of metaphysical inquiry into the nature of the soul, God and world, reflected in the interminable debates in rational psychology, rational theology and rational cosmology which are diagnosed in the Transcendental Dialectic. The problem, he thinks, is that metaphysics has employed theoretical reason in illicit ways, beyond its proper bounds. Traditional metaphysicians have failed to take into account the anthropocentric forms of human cognition, and so constantly come to grief by asking of reason what it cannot deliver. However, this is merely a sketch of some of the territory. There is no swift move from registering the forms of human cognition to sealing us off from a non-human world. From the bare fact that it is our cognition, it does not follow that it cannot deliver things in themselves. To attribute such a short argument to Kant on this basis is to ignore the details of Kant’s examination of cognition and his lengthy inquiry into metaphysics.

If transcendental idealism does ultimately count as a form of correlationism, this will be on the basis of the determinate limits on knowledge explored in Kant’s inquiries. These include sensible conditions, intellectual conditions, cognitive conditions governing the relation of the sensible and intelligible (e.g. the discursivity thesis), and rational conditions pertaining to the proper use of practical and theoretical reason. Each is supported by argument and analysis, which vary in success. For example, the intellectual conditions on empirical knowledge include conformity to the categories of the understanding. These conditions on thought are backed by an examination of the forms of judgement, which many people have found problematic and dogmatic. This set of conditions will probably not be the most troubling for the speculative realist though (Kant allows that we can think the thing in itself — though whether that is just as a limiting concept is debatable). Rather, it will be the sensible conditions which will be most problematic. These sensible conditions enable objects to be given. Thus, they provide the main receptive framework for cognition, where the understanding provides the main spontaneous framework. Objects are given to sensibility according to its forms, namely space and time. This can seem an unassuming empiricist move: we know about things through spatio-temporal experience. But it goes beyond this insofar as Kant’s Copernican turn makes an a priori pure form of intuition logically prior to objects. Objects are given according to this pure intuition, such that they have formal properties in conformity with this pure form. This can be understood in more or less metaphysical terms. This is where realists will doubtless demur though, since it can seem to impugn the independence of objects from our cognitive apparatus.

Why does Kant embrace something like correlationism here? Some reasons are arguably idiosyncratic. For example, Kant thinks that we require pure forms of intuition to help apply the categories of the understanding (such as existence or plurality) to sensible objects — they bind the a priori and the empirical together ‘schematically’. Also, given his understanding of geometry and arithmetic, pure forms are meant to explain the synthetic a priori status of mathematical knowledge.

What may have a wider resonance though is the role of forms of intuition in grounding Kant’s revised metaphysics. Kant thinks that reason can be shown to fail when, like the rationalists, it strays from the path of possible experience. This was what led metaphysics into darkness. But if objects have to conform to the forms of intuition, then their formal properties can be grasped a priori. So, for any object which is given to us, we can justify limited metaphysical knowledge of it with reference to the pure forms, since nothing can be given that does not conform to these forms. Kant sums it up like this: “reason has insight only into that which it produces after a plan of its own.” Now, by my lights, Kant’s specific appeal to pure forms of intuition is not ultimately successful. But it does give a substantive argument for a correlationist-like understanding of the relation between objects and cognition. Furthermore, it outlines a strategy which I think can be made to work, albeit in a heavily revised form, with respect to the normative bases of cognition (and which, in time, I hope to outline).

* * *

A final thought on the question of metaphysics. The metaphysics which Kant seeks to cut down to size is an unbridled rationalism. But speculative realism has typically championed a kind of empirical metaphysics. It seeks to be porous with respect to scientific discovery: it is science which is to be the leading-edge of ontology. I have some limited sympathy with this approach with respect to certain theoretical endeavours, and agree that on the whole there is no need for a metaphysical grounding for science, provided by philosophy. However, I wonder quite how speculative realism will come to understand the status of its own metaphysical claims.

Alexei has raised the problem of normativity in this area: does a radical materialism have the resources to account for its own justification? We are all naturalists now — after a fashion, at least. But speculative realists have adopted a particularly strident form, which does not seem to be friendly to normativity. Just witness Ray Brassier’s Nihil Unbound. Can it understand, or sufficiently redescribe, the context in which it puts forward its own theory, such that it can allow that such a theory is meaningful, justifiable and truth-apt, whilst cleaving to a sparse materialist metaphysics which admits values, if at all, only in an anti-realist fashion? I will have more to say about this at a later date.

Realism and Correlationism: Some preliminaries

Over at Larval Subjects, Now-Times and Perverse Egalitarianism there has been a fractious debate regarding realism which has gone on for some time. This is in the wake of ‘speculative realism’ coming to increased prominence, under the influence of Quentin Meillassoux, Ray Brassier, Iain Hamilton Grant and Graham Harman. This realism has been contrasted with a correlationist position, which is taken to infect much contemporary philosophy.

Meillassoux introduced the term ‘correlationism’ to describe a non-realist position which claims that “we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other.” (AF: 5) As Meillassoux also puts it, the correlationist denies that it is possible to ‘consider’ the realms of subjectivity and objectivity independently of one another. Of course, this could mean any number of things. Whether correlationism proves to be a useful philosophical category depends upon how this claim is spelled out.

Kant is supposed to be the paradigm correlationist. This is because Kant was meant to disallow us knowledge of any object subsisting ‘in itself’. Instead, knowledge was to be restricted to objects as they are ‘for us’. Thus, Kant is said to have eroded the pre-critical distinction between primary and secondary qualities, since even central candidates for the status of primary qualities (such as an object’s mathematisable ones) must be “conceived as dependent upon the subject’s relation to the given — as a form of representation.” (AF: 4)

Does Kant’s position get fairly characterised by the new realists? A lot of acrimony has resulted from the attempt to answer this question in discussions between Levi, Alexei and Mikhail. Both sides are now pretty entrenched, and that is when they are on speaking terms. I don’t want to reignite these ‘Kant wars’ but I will offer some comments on this issue in the next few posts.

Firstly, Levi has expressed some dismay that this question has become a focal point at all. It is, he thinks, another sign of a kind of hermeneuticism endemic in continental philosophy, which drives philosophers into endless debates over the meaning of texts at the expense of assessing their truth. Of course, detailed textual work is often extremely valuable, but — the concern is — many philosophers have stopped reading the work of Kant, Heidegger or Deleuze as tools in a larger quest to understand the world, but have taken this activity to be an end-in-itself. It is true that this is a problem, and I am equally frustrated when scholars turn into scholastics. But I do not think the charge applies in this instance.

Levi claims that Kant is the ‘inventor’ of correlationism and is a central example of a correlationist (though by no means a unique one). Moreover, there is repeated reference to his position — and perhaps more importantly, his vocabulary — in contrasting correlationism and the new realism. If there is a dispute over Kant’s position, and a risk of it being unclear, it is important to at least articulate this. Otherwise, the exposition of correlationism risks being unclear — as it has been to me, for one, until I got a handle on what reading of Kant is in play here (for example, regarding how ‘in itself/for us’ is being understood). More importantly though, Kant gives us a detailed and nuanced treatment of the ways in which being might be taken to be related to thought. If that account were buried under a problematic reading of him, then the substantive debate risks being all the poorer as a result. These two considerations should have some weight even amongst those for whom understanding Kant’s own thought is a secondary consideration.

Secondly then, moving to the issue proper, I want to flag some of my concerns over the use made of Kant. In these matters, I am predominantly in agreement with Alexei, who I think has done a sterling job in this respect. I suspect this is because we are familiar with much of the same recent literature on Kant, which brings out just how complex and well-crafted a project transcendental idealism is. Here, I am thinking of Kant scholars such as Henry Allison, Karl Ameriks, Graham Bird, Fred Beiser, Allen Wood, Onora O’Neill and Paul Guyer. Though they are by no means united, the sophistication of their approaches to Kant is commendable, and their sustained attention to detail has shown how Kant was aware of many of the standard charges against him (subjectivism, a priorism, emptiness, etc.) and either responded to them or developed the resources to do so. The point is not to be an apologist for Kant but to do justice to the power of his thought insofar as it promises to help us understand the world. I think that it still can, even if I am not (just as Alexei and Mikhail are not) a paid-up Kantian.

In the posts that follow, I will concentrate on three cases, with an eye towards why the readings of Kant matter. (I won’t address the recent hot topic concerning time and ancestrality, since I can’t devote the energy to it, especially as tempers are flaring once again.) Again, the aim will be to show why a focus on Kant is not a morbid fixation but a useful piece of the puzzle. I want to show how the cases I’ll look at bear upon substantive issues in metaphysics, epistemology and ethics, even when abstracted from the historical question of what Kant thought. I shall also try to counter the second-guessing of the motivations of critics of speculative realism, providing some symptomatological musings of my own. At the same time, I want to issue a plea for a bit of old-fashioned bourgeois civility, which would not go amiss on all sides. I’ve no interest in questioning other people’s intelligence or integrity. This said, the next post will be about what Ameriks calls the ‘short argument’ to idealism, which Meillassoux and Levi attribute to correlationists.

Ethical education and philosophy

A little while back, N.N. of Methods of Projection posted a link to some papers by P.M.S. Hacker. I tend not to find Hacker very illuminating, especially as a reader of Wittgenstein, since I think he underestimates the radical shift in philosophical methodology that Wittgenstein tries to effect. Hacker’s own conception of the tasks and methods of philosophy is outlined in his paper, ‘Philosophy: A contribution, not to human knowledge, but to human understanding.’ As a slogan, this is quite attractive, although the way Hacker spells it out, assigning philosophy the job of untangling conceptual confusions, strikes me as too rigid a view of what philosophy can and should achieve, even if I agree with its opposition to a conception of philosophy as being in the business of providing explanations with a scientific form. Here, I shall just point to one analogy in the paper that did strike me as very useful:

Precisely because philosophy is not a quest for knowledge but for understanding, what it achieves can no more be transmitted from generation to generation than virtue. Philosophical education can show the way to philosophical clarity, just as parents can endeavour to inculcate virtue in their children. But the temptations, both old and new, of illusion, mystification, arid scholasticism, scientism, and bogus precision fostered by logical technology may prove too great, and philosophical insight and overview may wane. Each generation has to achieve philosophical understanding for itself, and the insights and clarifications of previous generations have to be gained afresh.

The analogy with ethical education is very apt, and is especially helpful when we think about philosophy’s relation to its own history. It’s no accident that the best philosophy is always in dialogue with the wider tradition and that the insights of that tradition have to be reclaimed again and again, unlike, for example, those of mathematical knowledge, which can be transmitted from one generation to the next with relative ease. I think this observation sits well with my recent post on ‘Philosophy as Bildung’.

The Year in Books

Academic presses are still creaking under the weight of books published, so you would be forgiven if the occasional gem passed you by. It being the end of the year as well, I thought I would flag some notable philosophy books published this year, as well as point to some to look out for in the coming year. I’d be happy to hear of any of your own picks for this year’s best too.

My favourite book to appear this year is one I’m still reading — Robert Pippin’s masterful Hegel’s Practical Philosophy: Rational Agency as Ethical Life. As ever, Pippin manages to combine a wonderful lucidity of thought with a rich and suggestive prose style, which makes all his work a pleasure to read. This book develops the reading of Hegel which he shares with Terry Pinkard, one that sees Hegel as engaged in the project of constructing a theory of normativity which would build upon, whilst radically revising, Kant’s talk of self-legislation. As long-time readers will be aware, I think this project is flawed both historically and philosophically. Nonetheless, Pippin has brilliantly buttressed his case here; and even where I think he goes astray, he is always insightful, especially when engaging with contemporary philosophical developments. If you have any interest in Hegel, metaethics or normativity, this comes highly recommended!

Another book in a similar vein, though this time arguing against a central role for autonomous agency, was Charles Larmore’s The Autonomy of Morality. Like Larmore’s other books, its mainstay is a collection of revised articles, loosely connected to the central theme. These are tied together by a central essay, arguing against Kantian constructivism as a metanormative theory. Larmore thinks that in place of a morality of autonomy we need to reclaim an autonomous morality. To unpack that slogan a little, he thinks that treating autonomy as a foundation for normativity is incoherent: any norms based upon autonomous endorsement alone will be little more than products of what Donald Regan calls ‘arbitrary self-launching’. Any putative norms arising from a process of self-legislation, so understood, cannot have a rational claim upon us. Instead, he thinks we must suppose that morality itself (and presumably other normative domains) is autonomous — independent of our practices, insofar as its ultimate authority is concerned.

My main reservations about his position arise with his conception of this independent normative realm — something he takes to be a robust metaphysical space, akin to the space of physical or psychological inquiry. In one essay, ‘Attending to Reasons’, he argues against the more Wittgensteinian conception of philosophical inquiry which animates McDowell’s work on just this sort of issue. It seems to me that Larmore lacks any good argument against such a position, though; he simply restates the demand for philosophical explanation — e.g. that surely we need to know what reasons are — which is the very thing that the Wittgensteinian tries to get us to loosen our grip upon, directing us instead to more modest questions about what we do and what we treat as a reason. This is a debate which needs reformulating if either side is to find traction with the other — something I find myself tasked with doing at the moment.

Talking of Wittgenstein, Oskari Kuusela’s The Struggle against Dogmatism: Wittgenstein and the Concept of Philosophy came out in April. This is another which I have not got all the way through yet, but the parts I have read are promising. The book is an attempt to describe Wittgenstein’s methodology, especially as it blossoms in the later philosophy. I had occasion to speak to Oskari at an event earlier this year, and I was struck by the intensity of his commitment to reading Wittgenstein in an anti-dogmatic tenor — one in which we have to radically rethink philosophy’s approach, as opposed to sliding into an equally formulaic characterisation of philosophy (e.g. the first thesis of Philosophy Club is that there are no theses in Philosophy Club…). What is particularly striking about Oskari’s approach is that it takes the question of methodology to be the beating heart of Wittgenstein’s work, whilst nevertheless letting us see how genuinely productive, progressive and insightful philosophy can still be done under its auspices.

I was rather less enamoured with Brandom’s Between Saying and Doing: Towards an Analytic Pragmatism, in which he attempts to reconcile pragmatism and more mainstream analytic philosophy. He claims that it is pragmatism in both the classical and Wittgensteinian senses which is to form one side of this reconciliation. However, Brandom’s Wittgenstein is the worst of caricatures — a sloganeer, reduced to spitting ‘meaning is use’ and other proto-systematic dictums. His is a decidedly non-Kuuselic reading. This bears upon the book insofar as it is animated by the worst of Brandom’s habits, and indeed by the red thread which will unravel most of his work: reductionism. Brandom seeks to describe a set of reductive relations between different sets of vocabulary (logical, modal, normative, intentional, etc.). My thought here is that Brandom is doing little more than repeating the mistakes of traditional metaphysical inquiry in a semantic key. The lure of reductive accounts is great, and they are quite rightly indispensable in the natural-scientific enterprise. But philosophy is neither natural science nor composed of formal systems like logic, and the understanding which a massive program of theoretical interdefinability promises is little more than a mirage. It is Wittgenstein himself who provides the greatest lesson about this, in his development away from the false clarity of his early work’s thoroughgoing analysis of the logical structure of natural language. This is yet another reason why Brandom’s counting Wittgenstein as an ally, albeit a misguided one, is perverse.

On a happier note, the blogosphere’s very own Sinthome, of Larval Subjects, published Difference and Givenness: Deleuze’s Transcendental Empiricism and the Ontology of Immanence. The project is an exciting one: a rehabilitation of a Deleuzian metaphysics as the ground of rethinking the perennial philosophical questions surrounding the particular-universal, existence-essence and sensible-conceptual relationships. It is the last of these which takes centre-stage, with the guiding question being how we are to understand Deleuze’s ‘transcendental empiricism’, which seeks to unfold the productive conditions for experience. It is in virtue of this topic that those of you with a ‘post-Sellarsian’ temperament may find it particularly interesting, since it tackles questions surrounding the intelligible structure of experience, familiar in the neo-pragmatist literature, from an interesting angle. Unfortunately, it has proved a little too hard-going for a casual reader like myself with little exposure to Deleuze. I hope to have the stamina for another go in the future though.

McDowell-watchers will have noted John McDowell: Experience, Norm and Nature, edited by Jakob Lindgaard, which collects many of the recent essays on his work from the European Journal of Philosophy, including new replies by McDowell. The most notable addition is a new essay in which McDowell revises his long-held and controversial position on the propositional structure of experience, replacing it with the claim that experience is conceptual simply in virtue of its ability to be discursively articulated. This claim is ostensibly made in response to Charles Travis’ arguments about conceptual content, though I think it may come to be seen as heavily influenced by the next book I’ll mention.

I’ve yet to read more than a handful of pages of it, but Michael Thompson’s book Life and Action: Elementary Structures of Practice and Practical Thought looks fascinating. In it, he undertakes an Aristotelian analysis of the concepts of life, action and practice, as the basis for a clear view of practical philosophy. As I say, I suspect that it is Thompson’s influence on McDowell which can account for some of the impetus for his revised position, as reflected in McDowell’s eagerness to make room for a distinct mode for the representation of life within experience. I am reliably informed that Thompson’s work is attracting a lot of attention on the Chicago-Pittsburgh circuit, and I would expect to see it discussed widely in the future. Were I to hazard a guess as to which of this year’s philosophy books in the broadly conceived post-Kantian tradition will end up being most influential, it would be this one.

Next year will see another promising book on metaphysics, namely, Robert Stern’s Hegelian Metaphysics. It’s going to be a collection of some of his essays, both new and old, on Hegel and metaphysical themes. In particular, there’ll be essays on themes from Hegelian metaphysics, like concrete universality and the Hegelian conception of truth, alongside critical and comparative essays on historical movements influenced by Hegel, like the classical pragmatists (especially Peirce) and the British idealists. Again, Deleuzian metaphysics comes up, with a defence of Hegel’s position against Deleuzian criticism.

Also next year, two McDowell collections appear, The Engaged Intellect: Philosophical Essays and Having the World in View: Essays on Kant, Hegel, and Sellars. The contents should be familiar to those already keeping up with McDowell’s recent work, though there is what appears to be a new essay on Hegel which I am keen to see. Korsgaard’s Locke Lectures, Self-constitution: Agency, Identity, and Integrity, also come out. From the lecture texts already online, this looks like it will be a good read, and will no doubt draw a lot of attention! (She also had a collection of essays out this year on similar themes, called The Constitution of Agency: Essays on Practical Reason and Moral Psychology.) A volume of essays on Making It Explicit is also due out, called Reading Brandom: On Making It Explicit. The contributors are not quite as illustrious as those for the McDowell volume in the same series, but it looks interesting nonetheless.

As I say, I am happy to hear your own notable philosophy books of the year!