Thursday, November 29, 2007

Why 'Never Let Me Go' isn't really a 'science novel'

I have just finished reading Kazuo Ishiguro’s Never Let Me Go. What a strange book. First, there’s the tone – purposely amateurish writing (there can’t be any doubt, given his earlier books, that this is intentional), which creates an odd sense of flatness. As the Telegraph’s reviewer put it, “There is no aesthetic thrill to be had from the sentences – except that of a writer getting the desired dreary effect exactly right.” It’s a testament to Ishiguro that his control of this voice never slips, and that the story remains compelling in spite of the deliberately clumsy prose. That’s probably a far harder trick to pull off than it seems. Second, there are the trademark bits of childlike quasi-surrealism, where he develops an idea that seems utterly implausible yet is presented so deadpan that you start to think “Is he serious about this?” – for instance, Tommy’s theory about the ‘art gallery’. This sort of dreamlike riffing was put to wonderful effect in The Unconsoled, which was a dream world from start to finish. It jarred a little at the end of When We Were Orphans, because it didn’t quite fit with the rest of the book – but was still strangely compelling. Here it seems to be an expression of the enforced naivety of the characters, but is disorientating when it becomes so utterly a part of the world that Kathy H depicts.

But my biggest concern is that the plot just doesn’t seem at all plausible enough to create a strong critique of cloning and related biotechnologies. Is that even the intention? I’m still unsure, as were several reviewers. The situation of the donor children is so unethical and so deeply at odds with any current ethical perspectives on cloning and reproductive technologies that one can’t really imagine how a world could have got this way. After all, in other respects it seems to be a world just like ours. It is not even set in some dystopian future, but has a feeling of being more like the 1980s. The ‘normal’ humans aren’t cold-hearted dysfunctionals – they seem pretty much like ordinary people, except that they seem to accept this donor business largely without question – whereas nothing like this would be tolerated or even contemplated for an instant today. It feels as though Ishiguro just hasn’t worked hard enough to make an alternative reality that can support the terrible scenario he portrays. As a result, whatever broader point he is making loses its force. What we are left with is a well told tale of friendship and tragedy experienced by sympathetic characters put in a situation that couldn’t arise under the social conditions presented. I enjoyed the book, but I can’t see how it can add much to the cloning debate. Perhaps, as one reviewer suggested, this is all just an allegory about mortality – in which case it works rather well, but is somewhat perverse.

I’ve just taken a look at M John Harrison’s review in the Guardian, which puts these same points extremely well:
“Inevitably, it being set in an alternate Britain, in an alternate 1990s, this novel will be described as science fiction. But there's no science here. How are the clones kept alive once they've begun "donating"? Who can afford this kind of medicine, in a society the author depicts as no richer, indeed perhaps less rich, than ours?

Ishiguro's refusal to consider questions such as these forces his story into a pure rhetorical space. You read by pawing constantly at the text, turning it over in your hands, looking for some vital seam or row of rivets. Precisely how naturalistic is it supposed to be? Precisely how parabolic? Receiving no answer, you're thrown back on the obvious explanation: the novel is about its own moral position on cloning. But that position has been visited before (one thinks immediately of Michael Marshall Smith's savage 1996 offering, Spares). There's nothing new here; there's nothing all that startling; and there certainly isn't anything to argue with. Who on earth could be "for" the exploitation of human beings in this way?

Ishiguro's contribution to the cloning debate turns out to be sleight of hand, eye candy, cover for his pathological need to be subtle… This extraordinary and, in the end, rather frighteningly clever novel isn't about cloning, or being a clone, at all. It's about why we don't explode, why we don't just wake up one day and go sobbing and crying down the street, kicking everything to pieces out of the raw, infuriating, completely personal sense of our lives never having been what they could have been.”

Monday, November 26, 2007

Listen out

Let me now be rather less coy about media appearances. This Wednesday night at 9 pm I am presenting Frontiers on BBC Radio 4, looking at digital medicine. This meant that I got to strap a ‘digital plaster’ to my chest which relayed my heartbeat to a remote monitor through a wireless link. I am apparently alive and well.

Salt-free Paxo

No one can reasonably expect Jeremy Paxman to have a fluent knowledge of all the remarkably different subjects on which he has to ask questions on University Challenge. But if the topic is chemistry, you’d better give your answer word-perfect, because he allows no latitude for interpretation. Tonight’s round had a moment that went something like this:
Paxman: “Which hydrated ferrous salt was once known as green vitriol?”
Hapless student: “Iron sulphate.”
Paxman: “No, it’s just sulphate.”
I’ve seen precisely the same thing happen before. How come someone doesn’t pick Paxo up on it? The fact is, contestants are advised that they can press their button to challenge if they think their answer was unfairly dismissed. The offending portion of the filming then gets snipped out. But I suspect no one ever does this – it’s just too intimidating to say to Paxo “I think you’ve got that wrong.”

Friday, November 23, 2007

War is not an exact science
[This is my latest muse column for news@nature.com]

General theories of why we go to war are interesting. But they'll never tell the whole story.

Why are we always fighting wars? That’s the kind of question expected from naïve peaceniks, to which historians will wearily reply “Well, it’s complicated.”

But according to a new paper by an international, interdisciplinary team, it isn’t that complicated. Their answer is: climate change. David Zhang of the University of Hong Kong and his colleagues show that, in a variety of geographical regions – Europe, China and the arid zones of the Northern Hemisphere – the frequency of war has fluctuated in step with major shifts in climate, particularly the Little Ice Age from the mid-fifteenth until the mid-nineteenth century [1].

Cold spells like this, they say, significantly reduced agricultural production, and as a result food prices soared, food became scarce – and nations went to war, whether to seize more land or as a result of famine-induced mass migration.
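For anyone curious what ‘fluctuating in step’ means in practice, here is a toy sketch – my own illustration, with invented numbers rather than anything from the paper – of the sort of correlation one would compute between a temperature reconstruction and a tally of wars per decade:

```python
# Toy sketch of the kind of correlation behind the claim. The numbers are
# invented for illustration only; Zhang et al. use palaeoclimate
# reconstructions and historical war databases.
import numpy as np

# Hypothetical decadal series: temperature anomaly (deg C) and wars begun
temperature = np.array([0.1, 0.0, -0.2, -0.4, -0.5, -0.3, -0.1, 0.0, 0.2, 0.3])
wars        = np.array([  2,   3,    5,    7,    8,    6,    4,   3,   2,   2])

# A strongly negative Pearson correlation would mean that colder decades
# coincide with more wars, which is the paper's central claim.
r = np.corrcoef(temperature, wars)[0, 1]
print(f"correlation between temperature and war frequency: r = {r:.2f}")
```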

On the one hand, this claim might seem unexceptional, even trivial: food shortages heighten social tensions. On the other hand, it is outrageous: wars, it says, have little to do with ideology, political ambition or sheer greed, but are driven primarily by the weather.

Take, for example, the seventeenth century, when Europe was torn apart by strife. The Thirty Years War alone, between 1618 and 1648, killed around a third of the population in the German states. Look at the history books and you’ll find this to be either a religious conflict resulting from the Reformation of Martin Luther and Jean Calvin, or a political power struggle between the Habsburg dynasty and their rivals. Well, forget all that, Zhang and his colleagues seem to be saying: it’s all because we were suffering the frigid depths of the Little Ice Age.

I expect historians to respond to this sort of thing with lofty disdain. You can see their point. The analysis stops at 1900, and so says nothing about the two most lethal wars in history – which, as the researchers imply, took place in an age when economic, technological and institutional changes had reduced the impact of agricultural production on world affairs. Can you really claim to have anything like a ‘theory of war’ if it neglects the global conflicts of the twentieth century?

And historians will rightly say that grand synoptic theories of history are of little use to them. Clearly, not all wars are about food. Similarly, not all food shortages lead to war. There is, in historical terms, an equally compelling case to be made that famine leads to social unrest and potential civil war, not to the conflict of nation states. But more generally, the point of history (say most historians) is to explain why particular events happened, not why generic social forces sometimes lead to generic consequences. There is a warranted scepticism of the kind of thinking that draws casual parallels between, say, Napoleon’s imperialism and current US foreign policy.

Yet some of this resistance to grand historical theorizing may be merely a backlash. In particular, it stands in opposition to the Marxist position popular among historians around the middle of the last century, and which has now fallen out of fashion. And the Marxist vision of a ‘scientific’ socio-political theory was itself a product of nineteenth century mechanistic positivism, as prevalent among conservatives like Leo Tolstoy and liberals like John Stuart Mill as it was in the revolutionary socialism of Marx and Engels. It was Tolstoy who, in War and Peace, invoked Newtonian imagery in asking “What is the force that moves nations?”

Much of this can be traced to the famous proposal of Thomas Robert Malthus, outlined in his Essay on the Principle of Population (1826), that population growth cannot continue for ever on an exponential rise because it eventually falls foul of the necessarily slower rise in means of production – basically, the food runs out. That gloomy vision was also an inspiration to Charles Darwin, who saw that in the wild this competition for limited resources must lead to natural selection.
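Malthus’s contrast is easy to state in modern notation (my gloss, not anything from Zhang et al.): population multiplies geometrically while the means of subsistence grow at best arithmetically, so food per head must eventually fall below any subsistence threshold.

```latex
% Malthus's argument in modern notation (a gloss, not the paper's model):
\begin{align*}
  P(t) &= P_0\,e^{rt}  && \text{population, growing geometrically}\\
  F(t) &= F_0 + ct     && \text{means of subsistence, growing arithmetically}\\
  \frac{F(t)}{P(t)} &\longrightarrow 0 \quad \text{as } t \to \infty
\end{align*}
% For any subsistence level s > 0 there is therefore a finite time after
% which F(t)/P(t) < s, however generous the constants P_0, F_0, r and c.
```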

Zhang and colleagues state explicitly that their findings provide a partial vindication of Malthus. They point out that Malthus did not fully account for the economic pressures and sheer ingenuity that could boost agricultural production when population growth demanded it, but they say that such improvements have their limits, which were exceeded when climate cooling lowered crop yields in Europe and China.

For all their apparently impressive correlation indices, however, it is probably fair to say that responses to Zhang et al.’s thesis will be a matter of taste. In the end, an awful lot seems to hinge on the coincidence of a minimum in agricultural production (and a maximum in food prices), low average temperatures, and a peak in the number of wars (and fatalities) during the early to mid-seventeenth century in both Europe and China. The rest of the curves are suggestive, but don’t obviously create a compelling historical narrative. At best, they provoke a challenge: if one cannot now show a clear link between climate/agriculture and, say, the Napoleonic wars from the available historical records themselves, historians might be forgiven for questioning the value of this kind of statistical analysis.

Yet what if the study helps us to understand, even a little bit, what causes war? That itself is an age-old question – Zhang and colleagues identify it, for example, in Thucydides’ History of the Peloponnesian War in the 5th century BC. Neither are they by any means the first in modern times to look for an overarching theory of war. The issue motivated the physicist Lewis Fry Richardson between about 1920 and 1950 to plot size against frequency for many recent wars (including the two world wars), and thereby to identify the kind of power-law scaling that has led to the notion that wars are like landslides, where small disturbances can trigger events of any scale [2-4]. Other studies have focused on the cyclic nature of war and peace, as for example in ecologist Peter Turchin’s so-called cliodynamics, which attempts to develop a theory of the expansion and collapse of empires [5,6].
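Richardson’s ‘statistics of deadly quarrels’ boil down to a log-log plot: if the number of wars with at least a given death toll falls off as a power of that toll, the points lie on a straight line. A minimal sketch of that sort of fit – my own illustration, with made-up casualty figures rather than Richardson’s data – looks something like this:

```python
# Toy sketch of a Richardson-style power-law check on war sizes.
# The casualty figures below are invented for illustration; Richardson
# compiled real ones for conflicts from roughly the 1820s to the 1940s.
import numpy as np

fatalities = np.array([3e3, 8e3, 2e4, 5e4, 1e5, 4e5, 1e6, 8e6, 2e7])

# Fraction of wars at least as large as each one (complementary cumulative
# distribution). A power law P(X >= x) ~ x^(-alpha) is a straight line on a
# log-log plot, so a crude estimate of alpha is the slope of the logs.
x = np.sort(fatalities)
ccdf = 1.0 - np.arange(len(x)) / len(x)
slope, intercept = np.polyfit(np.log(x), np.log(ccdf), 1)
print(f"estimated power-law exponent alpha = {-slope:.2f}")
```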

Perhaps most prominent in this arena is an international project called the Correlates of War, which has since 1963 been attempting to understand and quantify the factors that create (and mitigate) international conflict and thus to further the “scientific knowledge about war”. Its data sets have been used, for example, in quantitative studies of how warring nations form alliances [7], and they argue rather forcefully against any notion of collapsing the causative factors onto a single axis such as climate.

What, finally, do Zhang and colleagues have to tell us about future conflict in an anthropogenically warmed world? At face value, the study might seem to say little about that, given that it correlates war with cooling events. There is some reason to think that strong warming could be as detrimental to agriculture as strong cooling, but it’s not clear exactly how that would play out, especially in the face of both a more vigorous hydrological cycle and the possibility of more regional droughts. We already know that water availability will become a serious issue for agricultural production, but also that there’s a lot that can still be done to ameliorate that, for instance by improvements in irrigation efficiency.

We’d be wise to greet the provocative conclusions of Zhang et al. with neither naïve acceptance nor cynical dismissal. They do not amount to a theory of history, or of war, and it seems most unlikely that any such things exist. But their paper is at least a warning against a kind of fatalistic solipsism which assumes that all human conflicts are purely the result of human failings.

References

1. Zhang, D. D. et al. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0703073104 (2007).
2. Richardson, L. F. Statistics of Deadly Quarrels, eds Q. Wright and C. C. Lienau (Boxwood Press, Pittsburgh, 1960).
3. Nicholson, M. Brit. J. Polit. Sci. 29, 541-563 (1999).
4. Buchanan, M. Ubiquity (Phoenix, London, 2001).
5. Turchin, P. Historical Dynamics (Princeton University Press, 2003).
6. Turchin, P. War and Peace and War (Pi Press, 2005).
7. Axelrod, R. & Bennett, D. S. Brit. J. Polit. Sci. 23, 211-233 (1993).

Thursday, November 22, 2007

Schrödinger’s cat is not dead yet

[This is an article I’ve written for news@nature. One of the things I found most interesting was that Schrödinger didn’t set up his ‘cat’ thought experiment with a gun, but with an elaborate poisoning scheme. Johannes Kofler says “He puts a cat into a steel chamber and calls it "hell machine" (German: Höllenmaschine). Then there is a radioactive substance in such a tiny dose that within one hour one atom might decay but with same likelihood nothing decays. If an atom decays, a Geiger counter reacts. In this case this then triggers a small hammer which breaks a tiny flask with hydrocyanic acid which poisons the cat. Schrödinger is really very detailed in describing the situation.” There’s a translation of Schrödinger’s original paper here, but as Johannes says, the wonderful “hell machine” is simply translated as “device”, which is a bit feeble.]

Theory shows how quantum weirdness may still be going on at the large scale.

Since the particles that make up the world obey the rules of quantum theory, allowing them to do counter-intuitive things such as being in several different places or states at once, why don’t we see this sort of bizarre behaviour in the world around us? The explanation commonly offered in physics textbooks is that quantum effects apply only at very small scales, and get smoothed away at the everyday scales we can perceive.

But that’s not so, say two physicists in Austria. They claim that we’d be experiencing quantum weirdness all the time – balls that don’t follow definite paths, say, or objects ‘tunnelling’ out of sealed containers – if only we had sharper powers of perception.

Johannes Kofler and Caslav Brukner of the University of Vienna and the Institute of Quantum Optics and Quantum Information, also in Vienna, say that the emergence of the ‘classical’ laws of physics, deduced by the likes of Galileo and Newton, from the quantum world is an issue not of size but of measurement [1]. If we could make every measurement with as much precision as we liked, there would be no classical world at all, they say.

Killing the cat

Austrian physicist Erwin Schrödinger famously illustrated the apparent conflict between the quantum and classical descriptions of the world. He imagined a situation where a cat was trapped in a box with a small flask of poison that would be broken if a quantum particle was in one state, and not broken if the particle was in another.

Quantum theory states that such a particle can exist in a superposition of both states until it is observed, at which point the quantum superposition ‘collapses’ into one state or the other. Schrödinger pointed out that this means that the cat is neither dead nor alive until someone opens the box to have a look – a seemingly absurd conclusion.

Physicists generally resolve this paradox through a process called decoherence, which happens when quantum particles interact with their environment. Decoherence destroys the delicately poised quantum state and leads to classical behaviour.

The more quantum particles there are in a system, the harder it is to prevent decoherence. So somewhere in the process of coupling a single quantum particle to a macroscopic object like a flask of poison, decoherence sets in and the superposition is destroyed. This means that Schrödinger’s cat is always unambiguously in a macroscopically ‘realistic’ state, either alive or dead, and not both at once.

But that’s not the whole story, say Kofler and Brukner. They think that although decoherence typically intervenes in practice, it need not do so in principle.

Bring back the cat

The fate of Schrödinger’s cat is an example of what in 1985 physicists Anthony Leggett and Anupam Garg called macrorealism [2]. In a macrorealistic world, they said, objects are always in a single state and we can make measurements on them without altering that state. Our everyday world seems to obey these rules. According to the macrorealistic view, “there are no Schrödinger cats allowed,” says Kofler.

But Kofler and Brukner have proved that a quantum state can get as ‘large’ as you like, without conforming to macrorealism.

The two researchers consider a system akin to a magnetic compass needle placed in a magnetic field. In our classical world, the needle rotates around the direction of the field in a process called precession. That movement can be described by classical physics. But in the quantum world, there would be no smooth rotation – the needle could be in a superposition of different alignments, and would just jump instantaneously into a particular alignment once we tried to measure it.

So why don’t we see quantum jumps like this? The researchers show that it depends on the precision of measurement. If the measurements are a bit fuzzy, so that we can’t distinguish one quantum state from several other, similar ones, this smooths out the quantum oddities into a classical picture. Kofler and Brukner show that, once a degree of fuzziness is introduced into measured values, the quantum equations describing the observed objects turn into classical ones. This happens regardless of whether there is any decoherence caused by interaction with the environment.
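As a cartoon of what this means – my own toy illustration, not Kofler and Brukner’s calculation – take a large spin tilted at some angle to the measurement axis. A perfectly sharp measurement of its component along that axis gives a spread of discrete outcomes; binning those outcomes into slots much wider than the spread leaves a single ‘classical’ value, just as a rotating compass needle would give.

```python
# Toy illustration of coarse-grained measurement on a large spin
# (a sketch of the general idea, not Kofler and Brukner's derivation).
import numpy as np

j = 500        # total spin quantum number: a 'large' quantum system
theta = 0.8    # angle between the spin direction and the measurement axis

# For a spin coherent state tilted at angle theta, a sharp measurement of the
# spin component along the axis gives outcomes m = -j..j, binomially
# distributed around the classical value j*cos(theta) with spread ~ sqrt(j).
rng = np.random.default_rng(1)
outcomes = rng.binomial(2 * j, np.cos(theta / 2) ** 2, size=10_000) - j
print("sharp measurement: mean =", round(outcomes.mean(), 1),
      " quantum spread =", round(outcomes.std(), 1))
print("classical value j*cos(theta) =", round(j * np.cos(theta), 1))

# Coarse-grained measurement: bin the outcomes into slots much wider than the
# quantum spread. Every run now falls into the single slot containing the
# classical value; the fuzziness has erased all trace of the quantum spread.
slot = 10 * int(np.sqrt(j))
binned = np.floor(outcomes / slot) * slot
values, counts = np.unique(binned, return_counts=True)
print("coarse-grained outcomes (slot lower edge: count):",
      dict(zip(values.tolist(), counts.tolist())))
```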

Having kittens

Kofler says that we should be able to see this transition between classical and quantum behaviour. The transition would be curious: classical behaviour would be punctuated by occasional quantum jumps, so that, say, the compass needle would mostly rotate smoothly, but sometimes jump instantaneously.

Seeing the transition for macroscopic objects like Schrödinger’s cat would require that we be able to distinguish an impractically large number of quantum states. For a ‘cat’ containing 10^20 quantum particles, say, we would need to be able to tell the difference between 10^10 states – just too many to be feasible.

But our experimental tools should already be good enough to look for this transition in much smaller ‘Schrödinger kittens’ consisting of large but not macroscopic numbers of particles, say Kofler and Brukner.

What, then, becomes of these kittens before the transition, while they are still in the quantum regime? Are they alive or dead? ‘We prefer to say that they are neither dead nor alive,’ say Kofler and Brukner, ‘but in a new state that has no counterpart in classical physics.’

References

1. Kofler, J. & Brukner, C. Phys. Rev. Lett. 99, 180403 (2007).
2. Leggett, A. & Garg, A. Phys. Rev. Lett. 54, 857 (1985).

Not natural?
[Here’s a book review I’ve written for Nature, which I put here because the discussion is not just about the book!]

The Artificial and the Natural: An Evolving Polarity
Ed. Bernadette Bensaude-Vincent and William R. Newman
MIT Press, Cambridge, MA, 2007

The topic of this book – how boundaries are drawn between natural and synthetic – has received too little serious attention, both in science and in society. Chemists are notoriously (and justifiably) touchy about descriptions of commercial products as ‘chemical-free’; but the usual response, which is to lament media or public ignorance, fails to recognize the complex history and sociology that lies behind preconceptions about chemical artifacts. Roald Hoffmann has written sensitively on this matter in The Same and Not the Same (Columbia University Press, 1995), and he contributes brief concluding remarks to this volume. But the issue is much broader, touching on areas ranging from stem-cell therapy and assisted conception to biomimetic engineering, synthetic biology, machine intelligence and ecosystem management.

It is not, in fact, an issue for the sciences alone. Arguably the distinction between nature and artifice is equally fraught in what we now call the fine arts – where again it tends to be swept under the carpet. While some modern artists, such as Richard Long and Andy Goldsworthy, address the matter head-on with their interventions in nature such as the production of artificial rainbows, much popular art criticism now imposes a contemporary view even on the Old Masters. Through this lens, Renaissance writer Giorgio Vasari’s astonishment that Leonardo’s painted dewdrops “looked more convincing than the real thing” appears a little childish, as though he has missed the point of art – for no one now believes that the artist’s job is to mimic nature as accurately as possible. Perhaps with good reason, but it is left to art historians to point out that there is nothing absolute about this view.

At the heart of the matter is the fact that ‘art’ has not always meant what it does today. Until the late Enlightenment, it simply referred to anything human-made, whether that be a sculpture or an engine. The panoply of mutated creatures described in Francis Bacon’s The New Atlantis (1627) were the products of ‘art’, and so were the metals generated in the alchemist’s laboratory. The equivalent word in ancient Greece was techne, the root of ‘technology’ of course, but in itself a term that embraced subtle shades of meaning, examined here in ancient medicine by Heinrich von Staden and in mechanics by Francis Wolff.

The critical issue was how this ‘art’ was related to ‘nature’, approximately identified with what Aristotle called physis. Can art produce things identical to those in nature, or only superficial imitations of them? (That latter belief left Plato rather dismissive of the visual arts.) Does art operate using the same principles as nature, or does it violate them? Alchemy was commonly deemed to operate simply by speeding up natural processes: metals ripened into gold sooner in the crucible than they did in the ground, while (al)chemical medicines accelerated natural healing. And while some considered ‘artificial’ things to be always inferior to their ‘natural’ equivalents, it was also widely held that art could exceed nature, bringing objects to a greater state of perfection, as Roger Bacon thought of alchemical gold.

The emphasis in The Artificial and the Natural is historical, ranging from Hippocrates to nylon. These motley essays are full of wonders and insights, but are ultimately frustrating too in their microcosmic way. There is no real synthesis on offer, no vision of how attitudes have evolved and fragmented. There are too many conspicuous absences for the book to represent an overview. One can hardly feel satisfied with such a survey in which Leonardo da Vinci is not even mentioned. It would have been nice to see some analysis of changing ideas about experimentation, the adoption of which was surely hindered by Aristotle’s doubts that ‘art’ (and thus laboratory manipulation) was capable of illuminating nature. Prejudices about experiments often went even further: even in the Renaissance one could feel free to disregard what they said if it conflicted with a priori ‘truths’ gleaned from nature, rather as Pythagoras advocated studying music “setting aside the judgement of the ears”. And it would have been fascinating to see how these issues were discussed in other cultures, particularly in technologically precocious China.

But most importantly, the discussion sorely lacks a contemporary perspective, except for Bernadette Bensaude-Vincent’s chapter on plastics and biomimetics. This debate is no historical curiosity, but urgently needs airing today. Legislation on trans-species embryology, reproductive technology, genome engineering and environmental protection is being drawn up based on what sometimes seems like little more than a handful of received wisdoms (some of them scriptural) moderated by conventional risk analysis. There is, with the possible exception of biodiversity discussions, almost no conceptual framework to act as a support and guide. All too often, what is considered ‘natural’ assumes an absurdly idealized view of nature that owes more to the delusions of Rousseau’s romanticism than to any historically informed perspective. By revealing how sophisticated, and yet how transitory, the distinctions have been in the past, this book is an appealingly erudite invitation to begin the conversation.

Sunday, November 18, 2007

Astronomy: the dim view

One Brian Robinson contributes to human understanding on the letters page of this Saturday’s Guardian with the following:
“Providing funding for astronomers does not in any way benefit the taxpayer. Astronomy may be interesting, but the only mouths that will get fed are the children of the astronomers. Astronomy is a hobby, and as such should not be subsidised by the Treasury any more than trainspotting.”
The invitation is to regard this as the sort of Thatcherite anti-intellectualism that is now ingrained in our political system. And indeed, the notion that anything state-funded must ‘benefit the taxpayer’ – specifically, by putting food in mouths – is depressing not only in its contempt for learning but also in its ignorance of how the much-vaunted ‘wealth creation’ in a technological society works.

But then you say, hang on a minute. Why astronomy, of all things? Why not theology, archaeology, philosophy, and all the arts other than the popular forms that are mass-marketable and exportable? And then you twig: ‘astronomy is a hobby’ – like trainspotting. This bloke thinks that professional astronomers are sitting round their telescopes saying ‘Look, I’ve just got a great view of Saturn’s rings!’ They are like the funny men in their sheds looking at Orion, only with much bigger telescopes (and sheds). In other words, Mr Robinson hasn’t the faintest notion of what astronomy is.

Now, I have some gripes with astronomers. It is not just my view, but seems to be objectively the case, that the field is sometimes narrowly incestuous and lacks the fecundity that comes from collaborating with people in other fields, with the result that its literature is often far more barren than it has any right to be, given what’s being studied here. And the astronomical definition of ‘metals’ is so scientifically illiterate that it should be banned without further ado, or else all other scientists should retaliate by calling anything in space that isn’t the Earth a ‘star’. But astronomy is not only one of the oldest and most profound of human intellectual endeavours; it also enriches our broader culture in countless ways.

The presence of Mr Robinson’s letter on the letters page, then, is not a piece of cheeky provocation, but an example of the nearly ubiquitous ignorance of science among letters-page editors. They simply didn’t see what he was driving at, and thus how laughable it is. It is truly amazing what idiocies can get into even the most august of places – the equivalent, often, of a reader writing in to say, oh I don’t know, that Winston Churchill was obviously a Kremlin spy or that Orwell wrote Cold Comfort Farm. Next we’ll be told that astronomers are obviously fakes because their horoscopes never come true.

Monday, November 12, 2007

Is this what writers’ studies really look like?

Here is another reason to love Russell Hoban (aside from his having written the totally wonderful Riddley Walker, and a lot of other great stuff too). It is a picture of his workplace, revealed in the Guardian’s series of ‘writers’ rooms’ this week. I love it. After endless shots of beautiful mahogany desks surrounded by elegant bookshelves and looking out onto greenery, like something from Home and Garden, here at last is a study that looks as though the writer works in it. It is the first one in the series that looks possibly even worse than mine.

The mystery is what all the other writers do. Sure, there may be little stacks of books being used for their latest project – but what about all the other ‘latest projects’? The papers printed out and unread for months? The bills unpaid (or paid and not filed)? The letters unanswered (or ditto)? The books that aren’t left out for any reason, other than that there is no other place to put them? The screwdrivers and sellotape and tissues and plastic bags and stuff I’d rather not even mention? What do these people do all day? These pictures seem to demand the image of a writer who, at the end of the day, stretches out his/her arms and says “Ah, now for a really good tidy-up”. That is where my powers of imagination fail me.

It all confirms that we simply do not deserve Russell Hoban.

Sunday, November 11, 2007

Minority report

Here’s an interesting factoid culled from the doubtless unimpeachable source of Wilson da Silva, editor of Australian science magazine Cosmos: the proportion of scientists who question that humans are responsible for global warming is about the same as the proportion who question that HIV is the cause of AIDS. Strange, then, that whenever AIDS is discussed on TV or radio, it is not considered obligatory to include an HIV sceptic for ‘balance’.

Of course, one reason for that is that people are not (yet) dying in their thousands from climate change (although even that, after the recent European heat waves, is debatable). This means it can remain fashionable, among over-educated media types with zero understanding of science, to be a climate sceptic. This, not the little band of scientific deniers, still less the so-called ignorant masses that some scientists lament, is the real problem. The intelligentsia still love to parade their ‘independent-mindedness’ on this score.

Here, for example, is Simon Hoggart a couple of weeks ago in the Guardian on ‘man-made global warming’: “I'm not going to plunge into this snakepit, except to say that there are more sceptics about than the Al Gores of this world acknowledge, and they are not all paid by carbon fuel lobbies. Also, if it's true, as Booker and North claim [in their book Scared to Death], that there is evidence of global warming on other planets, might it not be possible that the sun has at least as much effect on our climate as we do? I only ask.”

No, Simon, you do not only ask. If you genuinely wanted enlightenment on this matter, you could go to the wonderful Guardian science correspondents, who would put you straight in seconds. No, you want to show us all what a free thinker you are, and sow another little bit of confusion.

I’m not even going to bother to explain why ‘other planets’ (Venus, I assume) have experienced global warming in their past. It is too depressing. What is most depressing of all is the extent to which well-educated people can merrily display such utter absence of even the basics of scientific reasoning (such as comparing like with like). I’m generally optimistic about people’s ability to learn and reason, if they have the right facts in front of them. But I sometimes wonder if that ability declines the more you know, when that knowledge excludes anything to do with science.