An Alternative View

This is an article I read over the weekend by Amir Taheri, who is described as an author. Middle Eastern politics is not an area of expertise for me, but I thought it interesting to read an alternative view to the one put out by the BBC and others.

‘What do you do when you have no policy, but want to appear as if you do? In the case of Barack Obama, the answer is simple: you go around the world making speeches about your “personal journey”.

The latest example came last Thursday, when Mr Obama presented his address to the Muslim world to an invited audience of 2,500 officials at Cairo University. The exercise was a masterpiece of equivocation and naivety. The President said he was seeking “a new beginning between the US and Muslims around the world”. This implied that “Muslims around the world” represent a single monolithic bloc – precisely the claim made by people like Osama bin Laden and Mahmoud Ahmadinejad, who believe that all Muslims belong to a single community, the “ummah”, set apart from, and in conflict with, the rest of humanity.

Mr Obama ignored the fact that what he calls the “Muslim world” consists of 57 countries with Muslim majorities and a further 60 countries – including America and Europe – where Muslims represent substantial minorities. Trying to press a fifth of humanity into a single “ghetto” based on their religion is an exercise worthy of ideologues, not the leader of a major democracy.

Mr Obama’s mea culpa extended beyond the short span of US history. He appropriated the guilt for ancient wars between Islam and Christendom, Western colonialism and America’s support for despotic regimes during the Cold War. Then came the flattering narrative about Islam’s place in history: ignoring the role of Greece, China, India and pre-Islamic Persia, he credited Islam with having invented modern medicine, algebra, navigation and even the use of pens and printing. Believing that flattery will get you anywhere, he put the number of Muslim Americans at seven million, when the total is not even half that number, promoting Islam to America’s largest religion after Christianity.

The President promised to help change the US tax system to allow Muslims to pay zakat, the sharia tax, and threatened to prosecute those who do not allow Muslim women to cover their hair, despite the fact that this “hijab” is a political prop invented by radicals in the 1970s. As if he did not have enough on his plate, Mr Obama insisted that fighting “negative stereotypes of Islam” was “one of my duties as President of the United States”. However, there was no threat to prosecute those who force the hijab on Muslim women through intimidation, blackmail and physical violence, nor any mention of the abominable treatment of Muslim women, including such horrors as “honour-killing”. The best he could do was this platitude: “Our daughters can contribute just as much to society as our sons.”

Having abandoned President Bush’s support for democratic movements in the Middle East, Mr Obama said: “No system of government can or should be imposed on one nation by another.” He made no mention of the tens of thousands of political prisoners in Muslim countries, and offered no support to those fighting for gender equality, independent trade unions and ethnic and religious minorities.

Buried within the text, possibly in the hope that few would notice, was an effective acceptance of Iran’s nuclear ambitions: “No single nation should pick and choose which nations should hold nuclear weapons.” Mr Obama did warn that an Iranian bomb could trigger a nuclear arms race in the region. However, the Cairo speech did not include the threat of action against the Islamic Republic – not even sanctions. The message was clear: the US was distancing itself from the resolutions passed against Iran by the UN Security Council.

As if all that weren’t enough, Mr Obama dropped words such as “terror” and “terrorism” from his vocabulary. The killers of September 11 were “violent extremists”, not “Islamist terrorists”. In this respect, he is more politically correct than the Saudis and Egyptians, who have no qualms about describing those who kill in the name of Islam as terrorists.

Mr Obama may not know it, but his “Muslim world” is experiencing a civil war of ideas, in which movements for freedom and human rights are fighting despotic, fanatical and terrorist groups that use Islam as a fascist ideology. The President refused to acknowledge the existence of the two camps, let alone take sides. It was not surprising that the Muslim Brotherhood lauded him for “acknowledging the justice of our case” – nor that his speech was boycotted by the Egyptian democratic movement “Kifayah!” (“Enough!”), which said it could not endorse “a policy of support for despots in the name of fostering stability”.

In other words, the President may find that by trying to turn everyone into a friend, he has merely added to his list of enemies.’

A Medievalist’s View of Popper’s Open Society. Posted by ChooChoo

At the close of his Gifford Lectures in 1932, the great historian of medieval philosophy, Etienne Gilson, quoted the 12th-century thinker Bernard of Chartres:

“We are like dwarfs,” said Bernard, “seated on the shoulders of giants. We see more things than the Ancients and things more distant, but it is due neither to the sharpness of our sight nor the greatness of our stature, it is simply because they have lent us their own.”

Gilson lamented the loss of this “proud modesty”:

It is a sad old age that loses all its memories. If it were true, as some have said, that St. Thomas was a child and Descartes a man, we, for our part, must be very near decrepitude.

I don’t mention this to ask whether we tread (or even trample) on the shoulders of dwarfs. Both Bernard and Gilson demonstrate the cognitive importance of narrative. These differing modes of relation to the past – Bernard’s modest pride in development and Gilson’s gibing lament over loss – are not simply glosses on the real business of understanding the past but are inextricable constituents of such understandings.

The Open Society starts with an altogether different image. For Popper, ours is a civilisation “still in its infancy…which continues to grow in spite of the fact that it has been so often betrayed by so many of the intellectual leaders of mankind”, a civilisation still “in the shock of its birth” from, as one political philosopher has put it, “the cosy womb-like certainties of Gemeinschaft” [roughly, community in which individuals’ association is, in important ways, directed to the greater whole rather than to self-interest]. This does not preclude possibilities for a developmental narrative; our society is, after all, “still in its infancy”. But this fledgling development occurs within the open society. The transition from closed to open society, by contrast, is difficult to grasp as a form of development or evolution, given Popper’s broader scheme.

This scheme is well-known. History – in no pre-ordained way, of course – testifies to the conflict between the closed and open society: between, on the one hand, a tribalism in thrall to magical forces and a primitive desire to live in solidarity and, on the other, a critical rationalism, which unfetters man’s capacity to reason. Notice two features. First – and I don’t wish to sound like an inquisitor – there is something Manichaean about the constellation of ideas associated with the closed and open societies. A series of binary opposites distinguishes these ideal types: collectivism and individualism; passionate violence and rationality; taboo and law; ‘utopian’ and ‘piecemeal’ social engineering; slavishness and liberation; essentialism and nominalism; totalitarianism and democracy. We are invited to see something comprehensively coherent in these constellations.

Second, some attributes of these ideal types are far from unambiguous. Perhaps historicism – the idea that laws of development govern history, laws on which historical prophecy can be grounded – is the classic example. In his critical review of The Poverty of Historicism, the philosopher Charles Taylor began, “It is not easy to see what kind of doctrine Professor Popper is trying to pillory under the title ‘historicism’”. (Taylor concluded that what the various thinkers whom Popper impugns as historicists have in common is that they are frequently the “whipping boys” for a certain kind of liberal thought.) But even a more sympathetic reader of The Open Society, the medievalist-philosopher Peter Munz, a former pupil of Popper’s, was as baffled by the insistence on rooting the closed society in historicism as he was enthused by Popper’s sociological and ethical insights. My point here is to stress a curious tension in reading Popper. Despite his insistence on their coherence, at least some of the ideas within the constellation of associations have confused rather than clarified his general scheme. Readers like Munz suggest that we can remove some of these associations without dealing a death-blow to the broader scheme. (This raises a question: how far can we go on abandoning parts before we have to abandon the whole?)

Incidentally, a curious feature of discussion of historicism – even in critiques of Popper’s interpretation of Plato, Aristotle, Marx and Hegel as historicists – is the assumption that religious narratives yield far more obvious examples of historicism.  Popper himself suggests that the conception of a chosen people was a tribalist precursor of modern collectivism. Of course, any student of medieval history will be familiar with the multiple ways in which Edenic aboriginality, salvation history and eschatological speculation served as nodal points in medieval thought. They are central to Augustinian thought, a notable absence in The Open Society. (The absence is glaring given the overwhelming importance of Augustine not only to medieval thought but also to Reformation thought).  We may wish to question whether and precisely how these narratives might be understood as historicist in Popper’s terms.  

How do the Middle Ages function in Popper’s scheme? Largely, it must be said, as an intermittent spectre. Sporadic invocations of medieval totalitarianism, especially the Inquisition, serve as a kind of apotropaic charm upon which the rationalist humanitarian can call without embarrassment. But he also advances a substantial historical thesis: the constellation apparent in Aristotelian and, especially, Platonic thought found its stifling fruition, we are told, in medieval societies. We read, at the end of volume one, that Plato ultimately failed to arrest social change: “Only much later, in the dark ages, was it arrested by the magic-spell of Platonic-Aristotelian essentialism”.

Popper does not develop this in detail. But at the beginning of the second volume, he briefly speculates upon the broader picture between Aristotle and Hegel – between ancient and modern – albeit in a different way. This period might be interpreted, he suggests, not simply as the consummation of totalitarianism, but as one of conflict between the open and the closed society, in which the latter ultimately prevailed. I do not wish to subject this self-consciously speculative passage to uncharitably exacting scrutiny – Popper was not writing as a historian, and medieval history, especially early medieval history, has changed dramatically since he wrote – though, even with these concessions, there are some distinctly odd points of interpretation. The idea, for example, that late fourth-century church structures – and Popper seems to have such things as social action for the poor in mind – were “after the model of Julian the Apostate’s Neo-Platonic Anti-Church” gets the direction of influence almost certainly the wrong way round, as Julian himself makes clear in some of his correspondence.

In broad outline, Popper couches the conflict in the following way. The anti-idealising spirit of the “Great Generation of Greeks” was embodied in the Cynics and, later, the early Christians. Martyrs died, Popper writes, for the same cause as Socrates. But the fourth century saw a dramatic shift, hinging on Constantine, one which Popper casts as an “ingenious political move” to break the “tremendous moral influence of an equalitarian religion”. Hereafter, “the Church followed in the wake of Platonic-Aristotelian totalitarianism”, from Justinian’s sixth-century persecutions right through to the Inquisition.

Popper intensifies this outline in his rejection of voguish, Romantic eulogising of the Middle Ages (he quotes – and pours scorn on – Toynbee for mounting a sort of rehabilitating argument from the beauty of medieval cathedrals). For Popper, such eulogies are premised upon historicist notions of “the essential Character of Western Civilisation”. That is to say, a Platonising way of doing history desensitises the historian to the baleful influence of Platonism in history. The rationalist interpretation of history, by contrast, connects this pre-Constantinian thread to the Renaissance, through the Enlightenment, and into modern science. At this juncture, it is tempting to raise the question of the construction of the Middle Ages – the origins and repercussions of this inherently relational form of periodisation. But I’ll leave that for another day.

Instead, I’m interested in what happens to what this thread quickly passes over. At first glance, it is perfectly intelligible that medieval societies were, in Popper’s terms, closed societies. In fact, I don’t wish to argue otherwise. The question is: does his scheme offer sufficient conceptual resources for understanding them? (There are many things which could be said of medieval societies which are both far from false and far from illuminating). And does his scheme offer any conceptual resources for understanding how the closed society of medieval Christendom could have become the fledgling open society of modernity?

There are historical problems. To put it bluntly: we know Plato and Aristotle far more directly than a large number of the medieval purveyors of “Platonic-Aristotelian totalitarianism” did. Engagement with Aristotle was limited until the re-entry of texts in translation from the 12th century or so. As for Plato, in practical terms Western readership was mainly limited to the Timaeus until 15th-century translations of his corpus. This does not altogether dismantle Popper’s outline; there were other conduits for Platonic and Aristotelian thought. But set alongside even a student of early medieval history’s admittedly slender grasp of later medieval philosophy – not least its self-conscious transcendence of ancient roots (dwarfs and giants again) and the peculiar scholastic emphasis on observation (or, being wary of anachronism, its peculiar empiricism) – Popper’s narrative looks decidedly shaky.

Far more importantly, Popper’s understanding of the closed society is, if anything, excessively coherent. Not only persecution but even pestilence is taken to be a hallmark of medieval society; and, as hallmarks, both persecution and pestilence are rendered explicable by the confluence of ideas, by that constellation of associations inextricable from the closed society: magic and taboo – and theistic historicism (“Men believed God to rule the world”) – impeded responsibility; essentialism arrested the desire for social change; and so on. Terror, wretchedness, hardship, oppression, plagues and even dancing manias all flowed from this cohesive ideological fount, Popper declares.

Returning to my early medieval comfort zone, does Popper’s constellation of ideas help or hinder our reading of social change? Even to put the question in this way suggests a problem. Popper’s scheme appears to be largely uninterested in change within closed societies. The most notable exception is his hypothetical – and interesting – typology of changing configurations of natural and normative law, from tribal naïve monism (where nature and norms are not distinguished) to critical dualism (which separates nature and norms). But even these types are framed in terms of their relation to the critical rationalist’s demarcation of ‘is’ and ‘ought’, of facts and decisions (or values): they are progressive, even if not historically inevitable. But, otherwise, there is little sense of how closed societies can change or develop, or of how medieval society did.

It is beyond my aim here to give an account of trends in early medieval historiography, but a brief foray may explain my fraught encounter with The Open Society. Thanks to the illumination of late antique and early medieval cultural innovation – most astoundingly and famously in the work of Peter Brown – post-Roman societies can no longer be viewed solely through the prism of decline and stasis. One fundamental problem, of course, is that our material tempts us into thinking otherwise. The programme of reform attempted in a late eighth- and ninth-century Carolingian context – and the morass of material produced – idealised traditionalism and corporate uniformity. But its products – from forged papal documents to John Scotus Eriugena’s dizzying onto-theology – are best read in the context of a series of innovative social changes. There was no question of attempting to arrest social change and, if anything, it was the messy realities of enactment which arrested change.

Likewise, the broader dynamics of ‘christianisation’ and the interplay between ‘official’ and ‘popular’ religious practice problematise an overly static view of early medieval thought and social practice. Alongside the mutable constructions of ‘paganism’ as a strategy of distinction, with sometimes violent consequences, there were bursts of more self-conscious thought on the shifting boundaries of acculturation. That exceptionally influential early medieval pope, Gregory the Great, was not unusual in writing self-consciously about cultural adaptation and interchange in his widely read letter to Anglo-Saxon missionaries (which, incidentally, incorporated a discussion of prevailing taboos which was at once critical and sympathetic). For Gregory, some ‘pagan’ customs could be fruitfully absorbed or ‘baptised’ rather than eradicated.  Indeed, from Augustine to Alcuin, thinking about both missionary and pastoral activity presupposed a certain awareness of the modalities of cultural exchange. As Alexander Murray once put it, “the entire history of medieval religion is a commentary on Gregory’s letter”.

In the encounter not just with Charlemagne’s violent expansion into Saxony, but also with the multiple responses of Saxon converts and Frankish churchmen in subsequent generations, the constellation of ideas associated with closed societies – and their coherence – begins to disintegrate. This impression is consolidated when one considers later medieval developments – for example, the rebirth of the state and an increasing centralisation of power, the development of a specialised – and sophisticated – political discourse, and even the multi-faceted flourishing of scholasticism. The longue durée histories of hagiography, theology and even medicine or – to put it another way – the changing dynamics of utopianism, essentialism, tribalism, magicalism and so on across this vast span all militate against the security of this constellation.

The fundamental issue here is not primarily with Popper’s historical outline, nor am I undertaking medievalist apologetics. The key point is that the developments and energies of medieval societies, whether seen in subtly complex cultural processes or inescapably horrific atrocities, cannot be adequately grasped in the ideological terms of the closed society.

Why should this be so? The intellectual and political context in which Popper was writing may shed some light: perhaps his work is most fruitfully read as a historical document. But another reason may stem from Popper’s philosophy of science. (At various points, Popper seems to identify the open society as a society which lives out falsification across the board: whether this is a strangely un-Popperian form of utopian thinking – utopia, in the old pun, is also a no-where – is surely open to debate.) I can only scratch the surface here. Popper’s justification and description of scientific practice focus on the revolutions and ruptures wrought by the falsification of theories. The famous Kuhnian response is to try to focus instead upon how the epistemological crises which lead to such revolutions might come about, and upon the stretches of theoretical stability between such crises. In terms of Kuhn’s critique, Popper’s thesis is ill-equipped to grasp how the epistemological crises which prompt such revolutions are experienced, and the nature of the relationship between a novel theory and the context from which it emerged. One consequent blind spot is the nature of the context before theoretical revolution. And if I have understood Popper’s notion of the closed society correctly, it is perhaps for similar reasons that his historical perspectives and conceptual resources are ill-equipped to grasp medieval societies and their relationship to modernity. This is, in part, a problem of identity: it is unclear how a closed society can turn into an open one and retain any identity, given the characterisation of both. More concretely, the medieval can only relate to the modern through rupture, and yet how such a rupture might have come about is difficult to comprehend on Popper’s terms.

Was Gilson completely mistaken in his lament over “a sad old age that loses all its memories”? This brings to light a possible subtle omission in Popper’s explicit notion of the ‘open society’. After all, are modes of relation to the past – are narratives – not central to the self-understanding of ‘open societies’? Does the open society, as Popper understands it, necessitate a narrative of rupture, revolution and emancipation from the closed? Or, to adapt Gilson’s terms, does it require a kind of historical forgetting?

Review of The Wire

The Wire is about life in Baltimore. In particular, it is about the drugs trade in Baltimore and how that trade affects different aspects of the city’s life – law and order, the social and economic fabric, education, politics and journalism.

I bought a DVD of Season One of The Wire on the strength of a review in The Guardian in 2007. It claimed that The Wire was the best thing on television in the last twenty years. Is it? Yes – because, apart from anything else, this allows The World At War, Fawlty Towers and, most importantly, The Phil Silvers Show their rightful place in the TV Pantheon.

What about The Sopranos? First allow me to explain why The Sopranos is not quite as good as The Wire. The Sopranos is great television – moving, funny, shocking – but rarely meaningful. It’s derivative, which is not necessarily a bad thing, but it depends on a comprehension of references to The Godfather. The way in which Tony Soprano, the vicious crime boss, is sometimes depicted as Homer Simpson is, however, a touch of genius. But there’s no moment when you say to yourself, ‘That’s just like my life!’ Now I’m not ‘police’, and the fact that I taught in an urban comprehensive in South London does not really make my life like Prez’s school in season four – but the way in which public service jobs have been reduced to target-setting, so that the targets become ends in themselves, speaks to a very wide audience. All of the characters have their good and bad points; there’s moral ambiguity all around, which makes everything seem more realistic. Towards the end of Season Three, when a prominent public servant is spotted in a gay bar, his hypocrisy and duplicity are not dwelt upon and preached about – they’re just noted. All of this resonates with our everyday experience of people. As others have pointed out, The Wire is like a novel – you cannot skip chapters – and it demands effort, but it rewards the viewer not just with TV entertainment but with the same reward that great literature brings. The characters are so strong and the acting is simply phenomenal – especially that of the schoolchildren.
What about The Shield? Well, it too is great television, brilliantly acted and superbly written, and it too creates all sorts of moral dilemmas that test our consciences. But for me it does not transcend the genre of the cop programme – and it is very much told from a police perspective, which is fine in itself but lacks the depth of The Wire. The viewer is given less of the LA politicians’ perspective and little insight into the gang members, despite its covering much of the same subject matter. Compare what the viewer learns from The Wire about Baltimore drug dealing, teaching or municipal politics with what we learn from The Shield.

I have never felt so evangelical about anything as I do about The Wire. I have recommended it to various members of my family and numerous friends, and without exception they have either enjoyed it as much as I did or pretended that they did extremely convincingly. The five seasons are available for purchase or rent from Amazon, or can be watched online.

Our Second Lunch

This was a very enjoyable occasion – it was just as good as last year’s, with added youth and celebrity (provided by Finn and Ariane). Many thanks to all those who made the effort – I really am looking forward to the next one, and I’d like to think that feeling is shared. I hope that everyone (especially those who had travelled far) managed to get back without too much fuss. Please feel free to make suggestions about the timing and location of our next meeting. I’ll add the photos as soon as I find the lead for my camera!

Article by Steve Jones

Please find below an article by the eminent geneticist, Steve Jones.

‘It’s not done to kill the goose that lays the golden eggs, nor to bite the hand that feeds you – nor, in my own profession, to criticise the research programme of the Wellcome Trust, an enormously rich charity that paid much of the bill to read the message written in human DNA. Not done, perhaps, but a pack of renegade biologists has turned on that source of nutrition to claim that what it is doing is welcome, but plain wrong.

Science has done well in studying – and even helping to treat – rare inherited diseases such as haemophilia. After the famous sequencing of the double helix, the hunt started for the genes behind the illnesses that affect most of us – stroke, diabetes, cancer – as well as multiple sclerosis and a variety of brain disorders.

 

The hope was (and five years ago, it was a reasonable one) that such conditions could be blamed on a small set of common genetic variants. Track them down and we would begin to understand what had gone wrong, diagnose patients before symptoms appeared, and perhaps even come up with a few cures.

The logic was to search the double helix for about half a million variants that could be used to set up a grid of diversity, scattered across the whole genome. This could then be scanned using a magic “chip”, which could identify thousands of changes at once to see whether one, or a few, of the molecular milestones might predispose a given individual to a particular disease. If so, the actual gene responsible could be close to the telltale marker.

The latest version of this grid, produced by what is known as the Wellcome Case Control Consortium, involves 120,000 samples, taken both from invalids and those who are perfectly healthy. It is a huge – and expensive – operation. Just a couple of years ago there was real optimism that a new era of understanding was around the corner. That did not last long, for hubris has been replaced with concern: like Macavity the Mystery Cat, the evidence of genetic inheritance is clear, but the genes themselves are just not there.

Take height. A good way of predicting how big a baby will grow is to measure its mother and father. Tall parents have tall children, and height is highly heritable. The molecular mappers have now used their tape measures on around 30,000 people. They find 50 or so different genes associated with being tall or short – but altogether, they account for only one part in 20 of the variation needed to explain the similarity of children and parents. Macavity has struck, and does so again and again.

To give another example, today’s explosion of obesity means that tomorrow’s greatest killer may be adult-onset diabetes. The genome scans reveal 18 different bits of chromosome that light up as culprits – but together they explain less than one part in 20 of the overall inherited liability to diabetes. At that rate, as many as 800 different genes may be behind this illness; which means that their individual value as predictors of risk is tiny.

In other words, our chances of being born with a predisposition to a common illness such as diabetes or heart disease are not represented by the roll of a single die, but a gamble involving huge numbers of cards. Some people are dealt a poor mix and suffer as a result. Rather than drawing one fatal error, they lose life’s poker game in complicated and unpredictable ways. So many small cards can be shuffled that everyone fails in their own private fashion. Most individual genes say very little about the real risk of illness. As a result, the thousands of people who are paying for tests for susceptibility to particular diseases are wasting their money.

Not all the news is bad, however. Some genes, even those that have a small influence, hint at what may be going wrong in the case of a particular malady. Several of those behind a certain age-related blindness that runs in families are involved in the immune system – an unexpected finding that hints at what its cause might be, and where to start looking for a cure.

Even so, many geneticists now think that the constant pressure to sample thousands and thousands more people for a myriad of unknown genes that have a tiny effect may be misplaced. Instead, we would be better off abandoning the scattergun approach, and reading off the entire three thousand million DNA letters of a much smaller number of individuals, healthy and unhealthy, to see in detail what might have gone wrong.

Whatever the panjandrums of science decide to do with their Everest of cash, it is time to turn to one of the few genetical proverbs, for their mountain has laboured and brought forth not much more than a mouse. And what was that adage about throwing good money after bad?’

Steve Jones is professor of genetics at University College London.

Article by James le Fanu

This is an article by Dr James le Fanu in today’s DT. It is interesting to contrast how little we actually know with how much we think we know. A parallel to this article is the current mess in theoretical physics – we know very little more than we did 70 years ago, except to have enlarged upon the vastness of our ignorance.

‘”Wonders are there many,” observed the Greek dramatist Sophocles, “but none more wonderful than man.” And rightly so, for we, as far as we can tell, are the sole witnesses of the splendours of the universe – though consistently less impressed by this privileged position than would seem warranted.

The chief reason for that lack of astonishment has always been that the practicalities of our everyday lives are so simple and effortless as to seem unremarkable. We open our eyes on waking to be surrounded by the shapes and colours, sounds and smells of the world in the most exquisite detail. We feel hungry, and by some magical alchemy of which we know nothing, our bodies transform the food and drink before us into our flesh and blood. We open our mouth to speak and the words flow in a ceaseless bubbling brook of thoughts and ideas.

We reproduce, and play no part in the transformation of the fertilised egg into a fully formed embryo with its 4,000 functioning parts. We tend to our children’s needs, but effortlessly they grow to adulthood, replacing along the way virtually every cell in their bodies.

These practicalities are not in the least bit simple; they merely appear to be the simplest things we know – because they have to be so. If our senses did not accurately capture the world around us, were the growth from childhood not virtually automatic, then “we” would never have happened.

There is, from common experience, nothing more difficult than to make the complex appear simple – just as a concert pianist’s effortless playing is grounded in years of toil and practice – so that semblance of simplicity must reflect the complexity of the processes that underpin it. This should, by rights, be part of general knowledge, a central theme of the school curriculum, promoting that appropriate sense of wonder in young minds at the fact of their very existence.

But one could search a shelf’s worth of biology textbooks in vain for a hint of the extraordinary in their detailed exposition of those complexities of life. Rather, for the past 150 years, scientists have interpreted the world through the prism of supposing there is nothing in principle that cannot be accounted for – where the unknown is merely waiting to be known. At least till very recently, when the findings of two of the most ambitious scientific projects ever conceived have revealed quite unexpectedly – and without anyone really noticing – that we are after all “a wonder” to ourselves.

It started in the early 1980s with a series of technical innovations in genetics and neuroscience that promised to resolve the final obstacles to comprehensive understanding of ourselves. They were, first, the immensely impressive achievement of spelling out the entire sequence of genes strung out along the double helix – the genome – of worms, flies, mice, monkeys and humans, which should have identified those “instructions” that so readily distinguish one form of life from another.

And second, the development of those equally impressive scanning techniques that would permit neuroscientists for the first time to observe the brain “in action”: thinking, imagining, perceiving – all the seemingly effortless features of the human mind.

This was serious science of the best kind, filling learned journals and earning Nobel Prizes while holding out the exhilarating prospect that these most fundamental questions of genetic inheritance and the workings of the human brain might finally be resolved.

The completion of the human genome project, on the cusp of the new millennium, marked “one of the most significant days in history”, as one of its architects described it. “Just as Copernicus changed our understanding of the solar system… so knowledge of the human genome would change how we see ourselves.”

At the same time Professor Steven Pinker, of the Massachusetts Institute of Technology, after reviewing how neuroscientists with their new techniques had investigated everything “from mental imagery to moral sense”, confidently anticipated “cracking the mystery of the brain”.

Nearly a decade has passed since those heady days, and looking back, it is possible to see how the findings of both endeavours have enormously deepened our knowledge of life and the mind – but in a way quite contrary to that anticipated.

The genome projects were predicated on the reasonable assumption that spelling out the full complement of genes would clarify, to a greater or lesser extent, the source of that diversity of form that marks out the major categories of life. It was thus disconcerting to learn that virtually the reverse is the case, with a near equivalence of a (modest) 20,000 genes across the vast spectrum from a millimetre-long worm to ourselves.

It was similarly disconcerting to learn that the human genome is virtually interchangeable with that of our fellow vertebrates, such as the mouse and our primate cousins.

“We cannot see in this why we are so different from chimpanzees,” remarked the director of the chimp genome project. “The obvious differences cannot be explained by genetics alone.” This would seem fair comment but leaves unanswered the question of what does account for those distinctive features of standing upright and our prodigiously large brain.

More unexpected still, the same regulatory genes that cause a fly to be a fly, it emerged, cause humans to be humans with not a hint of why the fly should have six legs, a pair of wings and a brain the size of a full stop, and we should have two arms, two legs and a turbo-sized brain. These “instructions” must be there, of course, but we have moved in the wake of these projects from supposing we knew the principles of the genetic basis of the infinite variety of life, to recognising we have no conception of what they might be.

At the same time, neuroscientists observing the brain in action were increasingly perplexed at how it fragments the sights and sounds of every transient moment into a myriad of separate components, with no compensatory mechanism that would reintegrate them into that personal experience of being at the centre of a coherent, ever-changing world.

Meanwhile, the greatest conundrum remains – how the monotonous electrical activity of those billions of neurons in the brain “translates” into the limitless range and quality of subjective experiences of our lives, where every moment has its own unique, intangible feel.

The implications are clear enough: while theoretically it might be possible for neuroscientists to know everything about the physical structure of the brain, its “product”, the mind, with its thoughts and ideas, impressions and emotions, would still remain unaccounted for.

“We seem as far from understanding the brain as we were a century ago,” says the former editor of Nature, John Maddox. “Nobody understands how decisions are made or how imagination is set free.”

There is in all this the impression that triumphant science has stumbled on something of immense importance – a powerful parallel reality that might conjure the richness of the living world from the bare bones of the genes strung out along the double helix and the parallel richness of the mind from the electrochemistry of the brain.

Certainly, for the foreseeable future there will be no need to defer to those who would appropriate our sense of wonder at the glorious panoply of nature and ourselves by their claims to understand it. Rather, every aspect of the living world now seems once again infused with that deep sense of mystery of “How can these things be?”’

Review of ‘Moral Minds’ by Marc D. Hauser

Below are two reviews of ‘Moral Minds’ by Marc D. Hauser; the first by Richard Rorty from the New York Times and the second by Jonathan Derbyshire from the Guardian. They encapsulate my feelings precisely and they say it better than I could. The book is unsatisfying on many levels, not least the drudgery of wading through his rather dense prose.

Richard Rorty’s Review

‘Nazi parents found it easy to turn their children into conscientious little monsters. In some countries, young men are raised to believe that they have a moral obligation to kill their unchaste sisters. Gruesome examples like these suggest that morality is a matter of nurture rather than nature — that there are no biological constraints on what human beings can be persuaded to believe about right and wrong. Marc Hauser disagrees. He holds that “we are born with abstract rules or principles, with nurture entering the picture to set the parameters and guide us toward the acquisition of particular moral systems.” Empirical research will enable us to distinguish the principles from the parameters and thus to discover “what limitations exist on the range of possible or impossible moral systems.”

Hauser is professor of psychology, organismic and evolutionary biology, and biological anthropology at Harvard. He believes that “policy wonks and politicians should listen more closely to our intuitions and write policy that effectively takes into account the moral voice of our species.” Biologists, he thinks, are in a position to amplify this voice. For they have discovered evidence of the existence of what Hauser sometimes calls “a moral organ” and sometimes “a moral faculty.” This area of the brain is “a circuit, specialized for recognizing certain problems as morally relevant.” It incorporates “a universal moral grammar, a toolkit for building specific moral systems.” Now that we have learned that such a grammar exists, Hauser says, we can look forward to “a renaissance in our understanding of the moral domain.”

The exuberant triumphalism of the prologue to “Moral Minds” leads the reader to expect that Hauser will lay out criteria for distinguishing parochial moral codes from universal principles, and will offer at least a tentative list of those principles. These expectations are not fulfilled. The vast bulk of “Moral Minds” consists of reports of experimental results, but Hauser does very little to make clear how these results bear on his claim that there is a “moral voice of our species.”

Many of the experiments Hauser tells us about are intended to delimit stages in child development. Three-year-olds already know, for example, that “if an act causes harm, but the intention was good, then the act is judged less severely.” Hauser takes this fact to support the claim that “rather than a learned capacity … our ability to detect cheaters who violate social norms is one of nature’s gifts.” But do such facts as that children learn to use expressions like “didn’t mean to do it” at roughly the same time as they learn “shouldn’t have done it” help us draw a line between nature and nurture? Hauser does not spell out the relevance of data about child development to the question of whether internalizing a moral code requires a dedicated area of the brain.

To convince us that such an organ exists, Hauser would have to start by drawing a bright line separating what he calls “the moral domain” — one that nonhuman species cannot enter — from other domains. But he never does. The closest he comes is saying things like “a central difference between social conventions and moral rules is the seriousness of an infraction.” He takes this to suggest “that moral rules consist of two ingredients: a prescriptive theory or body of knowledge about what one ought to do, and an anchoring set of emotions.” Apparently both rules of etiquette and moral rules embody knowledge about what ought to be done. All that is distinctive about morality is added emotional freight. But, as Hauser tells us, many nonhuman species obey social conventions. (For example, “Do not start tearing at the carcass before the alpha male has eaten his fill.”) It is hard to see why evolution had to carve out a new, specialized organ just to generate the extra emotional intensity that differentiates guilt from chagrin.

Perhaps Hauser does not mean to say that greater seriousness is the only, or the most important, mark of the moral domain. But the reader is left guessing about how he proposes to distinguish morality not just from etiquette, but also from prudential calculation, mindless conformity to peer pressure and various other things. This makes it hard to figure out what exactly his moral module is supposed to do. It also makes it difficult to envisage experiments that would help us decide between his hypothesis and the view that all we need to internalize a moral code is general-purpose learning-from-experience circuitry — the same circuitry that lets us internalize, say, the rules of baseball.

Hauser thinks that Noam Chomsky has shown that in at least one area — learning how to produce grammatical sentences — the latter sort of circuitry will not do the job. We need, Hauser says, a “radical rethinking of our ideas on morality, which is based on the analogy to language.” But the analogy seems fragile. Chomsky has argued, powerfully if not conclusively, that simple trial-and-error imitation of adult speakers cannot explain the speed and confidence with which children learn to talk: some special, dedicated mechanism must be at work. But is a parallel argument available to Hauser? For one thing, moral codes are not assimilated with any special rapidity. For another, the grammaticality of a sentence is rarely a matter of doubt or controversy, whereas moral dilemmas pull us in opposite directions and leave us uncertain. (Is it O.K. to kill a perfectly healthy but morally despicable person if her harvested organs would save the lives of five admirable people who need transplants? Ten people? Dozens?)

Hauser hopes that his book will convince us that “morality is grounded in our biology.” Once we have grasped this fact, he thinks, “inquiry into our moral nature will no longer be the proprietary province of the humanities and social sciences, but a shared journey with the natural sciences.” But by “grounded in” he does not mean that facts about what is right and wrong can be inferred from facts about neurons. The “grounding” relation in question is not like that between axioms and theorems. It is more like the relation between your computer’s hardware and the programs you run on it. If your hardware were of the wrong sort, or if it got damaged, you could not run some of those programs.

Knowing more details about how the diodes in your computer are laid out may, in some cases, help you decide what software to buy. But now imagine that we are debating the merits of a proposed change in what we tell our kids about right and wrong. The neurobiologists intervene, explaining that the novel moral code will not compute. We have, they tell us, run up against hard-wired limits: our neural layout permits us to formulate and commend the proposed change, but makes it impossible for us to adopt it. Surely our reaction to such an intervention would be, “You might be right, but let’s try adopting it and see what happens; maybe our brains are a bit more flexible than you think.” It is hard to imagine our taking the biologists’ word as final on such matters, for that would amount to giving them a veto over utopian moral initiatives. The humanities and the social sciences have, over the centuries, done a great deal to encourage such initiatives. They have helped us better to distinguish right from wrong. Reading histories, novels, philosophical treatises and ethnographies has helped us to reprogram ourselves — to update our moral software. Maybe someday biology will do the same. But Hauser has given us little reason to believe that day is near at hand.’

Richard Rorty recently retired from teaching at Stanford. He is the author of “Philosophy and Social Hope.”

Jonathan Derbyshire’s Review

‘According to Marc Hauser, “morality is grounded in our biology”. We’ve heard this sort of thing before, of course – from evolutionary biologists, for instance, who claim that natural selection favours altruistic behaviour, since acting benevolently towards other people is a way of securing our genetic posterity. Some proponents of the evolutionary explanation go further, and infer from this that what seem to be our moral concerns aren’t our real concerns at all, and that what looks like altruism is in fact just a disguise for the operation of selfish genes.

Though Hauser himself believes that the moral machinery of human brains has been designed by the “blind hand” of Darwinian selection, he rejects such extreme interpretations. There’s no gene for altruism, he says, so we can’t derive specific rules for conduct from the structure of our DNA. And for that reason, we shouldn’t worry that our genetic inheritance leaves us trapped in an unchanging set of moral beliefs or judgments. On the contrary, our biology does not fix the range of possible moral systems, which is constrained only by history and culture. What that biology gives us is a set of very general principles on the basis of which we are able to develop one system of moral beliefs or another.

These general principles are at the heart of Hauser’s argument in Moral Minds. His contention, which he thinks amounts to nothing less than a “radical rethinking” of the nature of morality, is that human beings are creatures born with innate “moral instincts”. Because Homo sapiens is the only species to construct complex moral systems, morality has to be grounded in some distinctive property of the human brain – what Hauser calls a “moral organ” or “moral grammar”.

As the latter description suggests, Hauser’s inspiration here is the work done in theoretical linguistics by Noam Chomsky. Chomsky argues that the ability of children to learn to talk, which involves mastering highly complex rules of grammar, couldn’t simply be acquired by listening to competent adult speakers. There must be an innate “universal grammar” underlying different languages, deep structures that can be uncovered through painstaking comparative study.

Hauser builds on the “linguistic analogy” suggested by the philosopher John Rawls, who thought that a satisfactory account of our moral capacities would involve appealing to intuitive principles that we aren’t necessarily capable of articulating for ourselves. Just as we generate different, and mutually unintelligible, languages on the basis of universal grammatical principles, so, Hauser argues, there are deep moral “intuitions” that underlie cultural variations in social norms.

In order to uncover this “universal moral grammar”, Hauser devised a “moral sense test”. The test presented subjects with a number of so-called “trolley” problems, imaginary dilemmas dreamt up by philosophers and designed to tease out people’s moral intuitions. Imagine, for example, that you’re standing on a footbridge from which you can see a driverless tram hurtling in the direction of five people stranded on the track. The only way of stopping the tram and saving the lives of those people is to drop a heavy weight in its path. As it happens, a fat man is also standing on the bridge. Should you push the fat man to his death in order to stop the tram or leave him unmolested, in which case those on the track will die?

Hauser reports that only 10% of respondents said it was morally permissible to push the fat man from the bridge. From this and similar results, he deduces a universal “intention principle”, according to which intended harm is morally worse than harm that is foreseen but not directly intended. What is unclear, however, is why Hauser thinks data like these also license claims about the existence of a discrete moral faculty or “organ”. It is one thing to articulate principles that help to make sense of our intuitive responses to moral dilemmas, but quite another to conclude from this that such principles must belong to a particular region of the brain.

Moral Minds is full of fascinating reports on psychological experiments, few of which offer any obvious support for Hauser’s ambitious claims about moral grammar. This accounts, in part, for the book’s longueurs – that and the fact that Hauser’s rather colourless prose style is no match for that of scientific popularisers such as Steven Pinker or Richard Dawkins.

Hauser’s extravagant promise, in the prologue, to “explain how an unconscious and universal grammar underlies our judgments of right and wrong” is therefore not fulfilled. In fact, he comes close to acknowledging this in a somewhat deflating conclusion when he concedes that the “science of morality” is still in its infancy. And there is nothing here to suggest that this nascent discipline will conquer the “proprietary province of the humanities” any time soon.’

Jonathan Derbyshire is a philosopher and blogger