Some thoughts on program rankings and philosophical method

Philosophical method – not as a parochial habit of professors, but as a force that has given shape to much of the western world as we know it today – is underwritten by a quite radical spirit of charity: it takes the ownmost aim of its interlocutor and adopts it as its own, occasionally to the point of imploding that aim. For all their violence, this is essentially what happened in Leibniz’s critique of Descartes, Heidegger’s of Husserl, and even Kierkegaard’s of Hegel. (This is also the reason, I think, that non-mathematical approaches to solving liar-like paradoxes don’t get as much traction in the literature on the topic.) It is this spirit that so easily leads us to think of Socrates as the intellectual father of the whole discipline.

Now apply this point to the questions of whether, and if so how, the PGR (the Philosophical Gourmet Report) should continue, and of what it should be replaced or supplemented by.

For the above reason, the idea that rankings can be eliminated altogether must, for the present moment, fail: to eliminate them is to reject, rather than adopt, the aim of those who rely on them. Dissatisfaction with quantitative metrics is grounded in antipathy to the aim of these rankings; and thus, for all my sympathy with the reasons behind that stance, I do not think that antipathy can provide the ground for progress on the issue at present.

On the other hand, there is a very real shift that this antipathy is generating, one that seems not to have been fully appreciated. The move away from quantitative measures of quality, in the form it is currently taking, is simultaneously a further entrenchment of consumerism in philosophy, and of the corresponding tendency to see everything (in this case, a philosophical education) as a product: it is apt to become a move from unified measures developed by experts to measures customizable by the prospective graduate student turned consumer.

I am surprised that no one, to my knowledge, has made the connection between the rankings question and this post at DailyNous on teacher rankings, and the article it links to. Though the parallel is not exact, the observation that “Summary items such as ‘overall effectiveness’ seem most susceptible to extraneous factors” is likely true of the PGR as well. Hence it seems that if a reputation-based overall ranking is to be preserved, and to be reliable, it would be best to arrive at it in a more mediated fashion.

Here are two suggestions for how to do this. Neither is perfect, but each aims to be constructive in its own way.

  1. One way would be to cut out the human observer altogether, and simply set up an algorithm (perhaps several) for approximating overall quality: for example, citations (or weighted citations) per document, per person, per department, over a fixed period of time. Ideally, the method would be one that doesn’t encourage quantity of work over quality, and one that doesn’t automatically privilege larger departments. (A minimal sketch of one such metric follows this list.)
  2. Another way would be to do specialty rankings (since I think these tend to be much more reliable), and to develop an overall ranking on the assumption that, ceteris paribus, strength in more areas is better than strength in fewer. I wonder whether something useful for this aim could be developed along lines similar to those of an analysis Kieran Healy did a few years ago on the PGR here. (The second sketch below gestures at how such an aggregation might go.)
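
To make the first suggestion a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the FacultyRecord fields, the time window, and the normalization are my own assumptions, not a description of any existing ranking. The point is only to show how dividing by document counts and averaging across faculty addresses the two worries named above (rewarding sheer quantity, and privileging department size).

```python
# Hypothetical sketch of a citation-based department metric.
# All data and field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class FacultyRecord:
    name: str
    documents: int   # publications within the fixed time window
    citations: int   # citations to those publications in that window

def department_score(faculty: list[FacultyRecord]) -> float:
    """Mean citations-per-document across a department's faculty.

    Dividing by each person's document count avoids rewarding sheer
    quantity of output; averaging (rather than summing) across faculty
    avoids automatically privileging larger departments.
    """
    if not faculty:
        return 0.0
    per_person = [
        f.citations / f.documents if f.documents else 0.0
        for f in faculty
    ]
    return sum(per_person) / len(per_person)

# Example with invented numbers:
dept = [
    FacultyRecord("A", documents=10, citations=300),
    FacultyRecord("B", documents=4, citations=200),
]
print(department_score(dept))  # (30.0 + 50.0) / 2 = 40.0
```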
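
And here is an equally hypothetical sketch of the second suggestion. The specialty scores, the departments, and the threshold are all invented; counting every specialty at or above some threshold into a sum is just one simple way of encoding the assumption that, other things being equal, strength in more areas beats strength in fewer.

```python
# Hypothetical sketch: aggregating specialty scores into an overall
# ranking that rewards breadth. All names and numbers are invented.

def overall_score(specialty_scores: dict[str, float],
                  threshold: float = 3.0) -> float:
    """Sum the scores of every specialty at or above the threshold.

    Each qualifying area adds to the total, so a department strong in
    more areas outranks one equally strong in fewer: the 'breadth is
    better, ceteris paribus' assumption.
    """
    return sum(s for s in specialty_scores.values() if s >= threshold)

departments = {
    "Dept X": {"ethics": 4.0, "metaphysics": 3.5, "logic": 3.5},  # broad
    "Dept Y": {"ethics": 5.0, "metaphysics": 5.0, "logic": 1.0},  # narrow
}

ranking = sorted(departments,
                 key=lambda name: overall_score(departments[name]),
                 reverse=True)
print(ranking)  # ['Dept X', 'Dept Y']: three qualifying areas beat two
```

The threshold is doing real work here: without it, a pile of weak specialties could outweigh genuine strength, which is why only areas above a minimum level count toward breadth in this toy version.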