
Musing about Kant (2)

26 May, 11 | by Iain Brassington

It’s very easy, having encountered Kant for the first time, to think that his account of morality is much too cold and impersonal to be plausible – the sort of thing you might expect from a computer rather than a human.  And though this criticism is rather simplistic – I think that Kant does have a deep humanity to him: it’s just that he doesn’t think that that should inform morality – I wonder whether there’s something to it after all.  I wonder whether there’s a reading of Kant that could only make sense to intelligent computers, and – more importantly – computers in a network; and whether such an account of morality would come naturally to them.

The starting point for this little essai (and I make no claims that the thoughts expressed here are particularly well-developed: all I’m doing is taking the opportunity afforded by the blog to publicise some stuff that’s been knocking around my brain for a while) is fairly straightforward: Kant’s separation of the sensible and intelligible parts of human life.  As far as he’s concerned, morality has to do with the latter rather than the former (because sensibility implies determinism; morality implies freedom; freedom implies autonomy; autonomy implies the will; and the will is practical reason); he claims that

a rational being must regard himself qua intelligence as belonging not to the world of sense but to the world of understanding.  Therefore he has two standpoints from which he can regard himself and know laws of the use of his powers and hence of all his actions: first, insofar as he belongs to the world of sense subject to laws of nature (heteronomy); secondly, insofar as he belongs to the intelligible world subject to laws which, independent of nature, are not empirical but founded on reason.

As a rational being and hence belonging to the intelligible world, man can never think of the causality of his own will except under the idea of freedom.  […]  Now the idea of freedom is inseparably connected with the concept of autonomy, and this in turn with the universal principle of morality, which ideally is the ground of all actions of rational beings, just as natural law is the ground of all appearances. (4:452-3; emphasis mine)

and makes similar claims elsewhere.

Right: so the moral law is ideal rather than real; but, more importantly, Kant contrasts the “universal principle of morality” with the natural law that is the ground of all appearances – and so, implicitly, the universal principle of morality is to be distinguished from appearances.  I don’t think that any of this is particularly radical.  By which I mean, of course, that it is radical – but it’s standard undergrad philosophy stuff.

However, things get a bit weirder once you begin to prise that apart.  The thing is, if thinking of ourselves as agents requires thinking of ourselves as belonging to the intelligible rather than the sensible world – the world of reason rather than the world of sense – how we think about ourselves at all suddenly becomes a bit mysterious.  After all, when I use the personal pronoun, I am referring to myself qua object of my own experience.  More generally, the “I/thou” distinction seems to belong to sensibility; there’s no obvious reason to think that it obtains in the intelligible world.  (There’s a hint of this somewhere in the first Critique, but I don’t have it to hand.)  If morality belongs to the intelligible, super-sensible side of our being, personal pronouns seem to lose their currency.

This is an idea that owes a lot to a couple of conversations I’ve had over the past few years with Joss Walker, and I’m really looking forward to his book on Kant when it comes out: though I suspect that his stuff is rather better than mine.  Walker ventures the theory that, while agents may be distinct as human beings, they are not distinct – or may not be – as persons.  To paraphrase that other great German philosopher Blixa Bargeld, youme is meyou.  (Yes, that link is gratuitous.)

So far so good.  The striking thing about this picture, though, is that it’s just so weird.  And it’s not even obvious to me that it’s possible to conceptualise: truly to understand ourselves as moral beings would require that we be willing to think of ourselves without any of our experienced characteristics – age, sex, ethnicity, yadda yadda… but also duration through time, and location and extension in space.  In fact, we’d have to think of ourselves without any self at all, since that belongs to sensibility.  (If those things are parts of experience, they should be ditched for the sake of morality; if they’re the principles that organise experience, there’s no obvious need to invoke them in respect of an aspect of ourselves that is ex hypothesi non-experiential.)* If we can make sense of this, then it’d explain universalisation perfectly – but I don’t think we can.  It’s just too big a leap.  (Fortunately for Kantians, I also don’t think that you need to accept this metaphysic to make sense of universalisation; you can get to that by other means.)

But from the fact that we can’t make the leap, it doesn’t follow that other entities with a quite different form of life (or what’ll pass for life) mightn’t be able to.  Entities like intelligent computers connected by a network, for example.

The idea here is that artificially intelligent machines, working together via a network so that calculations and memory were not confined to one CPU – and so not to a single processor (for which read “brain”) – would be more like Kant’s transpersonal agents than entities such as we.  Perhaps the investigation of a truly Kantian moral philosophy will have to wait until we’ve made the right advances in computer science.  And, working the other way, we might wonder what’d happen if we instructed these networked machines to come up with a moral theory from first principles: would it look anything like Kant’s?
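To make the picture a tiny bit more concrete – and purely for illustration: this is a toy, not a serious model of anything, and every name in it (SharedMemory, Agent, conclude, the “maxim” strings) is invented for the purpose – here’s the sort of thing I have in mind, sketched in Python.  A handful of “agents” read from and write to a single shared store, so that no conclusion belongs to any one of them in particular:

    # A purely illustrative toy: several "agents" share one memory pool,
    # so no belief or inference is confined to any single node -- a crude
    # gesture at the transpersonal picture sketched above.
    from dataclasses import dataclass, field

    @dataclass
    class SharedMemory:
        """Memory that belongs to the network, not to any one agent."""
        facts: set = field(default_factory=set)

    class Agent:
        def __init__(self, name, memory):
            self.name = name      # a label for us; not an "I" for the agent
            self.memory = memory  # every agent reads and writes the same pool

        def conclude(self, premise, conclusion):
            # Any agent may extend the shared stock of conclusions; afterwards
            # there's no fact of the matter about "whose" thought it was.
            if premise in self.memory.facts:
                self.memory.facts.add(conclusion)

    memory = SharedMemory()
    memory.facts.add("maxim M is universalisable")
    a = Agent("a", memory)
    b = Agent("b", memory)
    a.conclude("maxim M is universalisable", "maxim M is permissible")
    print("maxim M is permissible" in memory.facts)  # True: b "knows" it too

The point of the toy is just that the question “whose thought was that?” gets no grip here: there is one stock of conclusions, and the “agents” are so many points of access to it.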

*As you can probably tell, I’m struggling a bit with this – and it might be where the idea breaks down.  Any thoughts?

  • Keith Tayler

    Sorry I have to make one or two comments about this.
     
    The philosophy of AI research is useful in the philosophy of mind, language, logic, maths, etc., and can throw some light on issues like ethics and law. There are already a number of Expert Systems and AI systems that are claimed to be “agents” in one form or another. Allen and Saxon claim that an ES using Legal Relations Language (Hohfeld’s eight legal concepts with ’improved’ deontic logic) can ’represent every possible legal rule as well as every possible legal argument.’ Karnow claims that his legal entity Electronic Persona (EPERS) is ‘quite similar to what are usually conceived as artificial agents in the multiagent systems community [corporations]… [EPERSs] enjoy and are subject to legal rights.’ Bettoni, Stuart and Dobbyn have proposed a Kantian architecture to achieve AI consciousness and life. Alexander claims to have done it with MAGNUS. Be it Locke or Leibniz, Heidegger or Wittgenstein, just about every philosopher of any note has been taken as being the solution for AI. Of course none of them are the solution, because the “problem” cannot be solved. We can meaningfully speak of computers being “intelligent” (the division of labour by machines and systems is intelligence), but, as Weizenbaum states, it is and always will be an ‘alien’ intelligence (both Smith and Marx realised this).

    The Kantian system you envisage could be built (it would not require separate CPUs or a network, because one computer can create a program/network environment). I do not think Kant would be too impressed by it, because it would encounter many of the problems Kant recognised and could not resolve. For example, the Kantian problem of ‘rule-following’ (later to become central to Wittgenstein and Kripke) always returns to haunt machine intelligence (“our intelligence” is the ghost in these machines).

    But let us assume we managed to get an AI system to appear to “process” like Kantian transpersonal agents. Would this enable us to investigate Kantian moral philosophy? I say ’appear’ because we would encounter the problems that undermine the Turing test. Indeed, appearing ‘alien’ might even convince us that we had created a transpersonal agent. But the first thing Kant would ask is “how do we know?” If the system came up with a moral theory from first principles which did not look like Kant’s, do we immediately reject it? If it produced something much the same as Kant’s, we are back to the problem of rule-following and what it is to be the same. Not a small problem – how do we ‘instruct’ the system? How would the system “understand” an instruction? Does it “process” in a language that gives it a recognisable form of life? Can we follow its “processing/reasoning” by analysing it? Or, as with so many computer mathematical proofs, are we unable to survey it, so that it becomes an empirical experiment – which takes us back to the ‘how do we know?’ empiric/analytic problem, etc.? And so it goes on and on. AI is useful because it raises these problems.

    AI researchers (Weizenbaum is a notable exception) rely heavily on Turing’s prediction that before we achieve advanced machine intelligence the meaning of words will have to change and educated opinion will have to be altered. In short, the myth will have to be normalised and we will stop, as Turing put it, ‘contradicting’ AI researchers. (Turing does not see it as a myth, but he does have difficulty with his imagination.) Following the principle that we do not understand something until we build it, AI is a useful tool. Of course we have to recognise our failures and not pass them off as minor obstacles to an assured prize. Nonetheless, attempts such as Bettoni’s at developing a Kantian architecture do throw a new perspective over his philosophy.

    Your ‘transpersonal agent’ should be viewed from the perspective of “how do we build it?”

  • Keith Tayler

    Again my long posting has not copied well. Sometimes I think computers do not like me (what am I saying?).

    Loved the cat link.

  • http://www.law.manchester.ac.uk/aboutus/staff/iain_brassington Iain Brassington

    Years of wasting time on the internet have taught me one thing: if in doubt, use cats.
    http://www.rathergood.com/cats
