Monday, May 21, 2018

What is a superposition really like?

Here’s a longer version of the news story I just published in Scientific American, which includes more context and background. The interpretation of the outcomes of this thought experiment within the two-state vector formalism of quantum mechanics is by no means the only one possible. But what the experiment does show is that quantum mechanics suggests that superpositions are not always simply a case of a particle seeming to be in two places or states at once. A superposition, like anything else in quantum mechanics, tells you about the possible outcomes of a measurement. All the rest is contingent interpretation. I’m reminded yet again today that it is going to take an awful lot to get media folks to accept this. I'm starting to see now that it was a mistake for me to assume that they didn't know any better; rather, I think there is an active, positive desire for the "two places at once" picture to be true.

I should say also that I consciously decided to turn a blind eye to the use of the word “spooky” in the title of this piece, because it does perfectly acceptable work as it is. It does not imply that “spooky action at a distance” is a thing. It is not a thing, unless it is a disproved thing. Quantum nonlocality is the alternative to that Einsteinian picture.

______________________________________________________________________

It’s the central question in quantum mechanics, and no one knows the answer: what goes on for a particle in a superposition? All of the head-scratching oddness that seems to pervade quantum theory comes from these peculiar circumstances in which particles seem to be in two places or states at once. What that really means has provoked endless debate and argument. Now a team of researchers in Israel and Japan has proposed an experiment (https://www.nature.com/articles/s41598-018-26018-y) that should let us say something for sure about the nature of that nebulous state [A. C. Elitzur, E. Cohen, R. Okamoto & S. Takeuchi, Sci. Rep. 8, 7730 (2018)].

Their experiment, which they say could be carried out within a few months using existing technologies, should let us sneak a glance at where a quantum object – in this case a particle of light, called a photon – actually is when it is placed in a superposition of positions. And what the researchers predict is even more shocking and strange than the usual picture of this counterintuitive quantum phenomenon.

The classic illustration of a superposition – indeed, the central experiment of quantum mechanics, according to legendary physicist Richard Feynman – involves firing particles like photons through two closely spaced slits in a wall. Because quantum particles can behave like waves, those passing through one slit can ‘interfere’ with those going through the other, their wavy ripples either boosting or cancelling one another. For photons the result is a pattern of light and dark interference bands when the particles are detected on a screen on the far side, corresponding to a high or low number of photons reaching the screen.

Once you accept the waviness of quantum particles, there’s nothing so odd about this interference pattern. You can see it for ordinary water waves passing through double slits too. What is odd, though, is that the interference remains even if the rate of firing particles at the slits is so low that only one passes through at a time. The only way to rationalize that is to say each particle somehow passes through both slits at once, and interferes with itself. That’s a superposition.
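
For readers who like to see the arithmetic, here is a minimal numerical sketch of that point – my own illustration, not anything from the paper, and the wavelength, slit spacing and screen positions are arbitrary values chosen for clarity. The probability of a single particle arriving at a point on the screen is the squared magnitude of the sum of the two slit amplitudes, not the sum of their squared magnitudes; the cross term between them is the interference pattern, and it is exactly what vanishes once which-slit information exists anywhere.

```python
import numpy as np

# Toy far-field double-slit model: slits separated by d, screen at distance L,
# wavelength lam. All numbers are arbitrary choices for illustration.
lam, d, L = 500e-9, 20e-6, 1.0
x = np.linspace(-0.1, 0.1, 9)             # a few positions on the screen (metres)

phase = 2 * np.pi * d * x / (lam * L)     # phase difference between the two paths
psi1 = np.ones_like(x) / np.sqrt(2)       # amplitude for the route via slit 1
psi2 = np.exp(1j * phase) / np.sqrt(2)    # amplitude for the route via slit 2

coherent = np.abs(psi1 + psi2) ** 2                  # superposition: fringes
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # "one slit or the other": flat

for xi, c, i in zip(x, coherent, incoherent):
    print(f"x = {xi:+.3f} m   coherent = {c:.2f}   incoherent = {i:.2f}")
```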

To put it another way: when we ask the seemingly reasonable question “Where is the particle in a superposition?”, we’re using a notion of “where” inherited from our classical world, to which the answer can simply be “there”. But quantum mechanics is known now to be ‘nonlocal’, which means we have to relinquish the whole notion of locality – of “whereness”, you might say.

But that’s a hard habit to give up, which is why the ‘two places at once’ picture is commonly invoked to talk about quantum superpositions. Yet quantum mechanics doesn’t say anything about what particles are like until we make measurements on them. For the Danish physicist Niels Bohr, asking where the particle was in the double-slit experiment before it was measured has no meaning within quantum theory itself.

Why don’t we just look? Well, we can. We could put a detector in or just behind one slit that could register the passing of a particle without absorbing it. And in that case, the detector will show that sometimes the particle goes through one slit, and sometimes it goes through the other. But here’s the catch: there’s then no longer an interference pattern, but just the result we’d expect for particles taking one route or the other. Observing which route the particle takes destroys its ‘quantumness’.

This isn’t about measurements disturbing the particle, since interference is absent even in instances where a detector at one slit doesn’t see the particle, so that it ‘must’ have gone through the other slit. Rather, the ‘collapse’ of a superposition seems to be caused by our mere knowledge of the path.

We can try to be smarter. What if we wait until the particle has definitely passed through the slits before we measure the path? How could that delayed measurement affect what happened earlier at the slits themselves? But it does. In the late 1970s the physicist John Wheeler proposed a way of doing this using an apparatus called a Mach-Zehnder interferometer, a modification of the double-slit experiment in which a partial mirror puts a photon into a superposition that seems to send it along two different paths, before the paths are brought back together to interfere (or not).

The result was that, just as Bohr had predicted, delaying the detection makes no difference: superposition and interference still vanish if we detect which path the photon took before it is finally measured. It is as if the particle ‘knows’ our intention to measure it later.
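
Because the Mach-Zehnder interferometer is the workhorse of everything that follows, here is a minimal sketch of its arithmetic – a toy model of my own, not the actual optics of any of these experiments. A balanced beam splitter acts as a 2×2 unitary on the two path modes; with two of them and equal path lengths, the amplitudes recombine so that the photon exits one port with certainty, whereas marking which path it took forces us to add probabilities rather than amplitudes, and the output becomes a 50/50 coin toss.

```python
import numpy as np

# A balanced (50/50) beam splitter, written as a 2x2 unitary on the two path modes.
BS = np.array([[1, 1],
               [1, -1]]) / np.sqrt(2)

photon_in = np.array([1, 0])              # photon enters through port 0

# Full interferometer: two beam splitters with equal path lengths in between.
amp_out = BS @ BS @ photon_in
print("with interference:   ", np.abs(amp_out) ** 2)   # [1, 0]: always port 0

# 'Which-path' case: the routes are distinguishable, so we add probabilities
# for each route instead of adding amplitudes.
p_paths = np.abs(BS @ photon_in) ** 2                   # [0.5, 0.5] after BS 1
p_out = (np.abs(BS) ** 2) @ p_paths
print("with path information:", p_out)                  # [0.5, 0.5]: coin toss
```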

Bohr’s argument that quantum mechanics is silent about ‘reality’ beyond what we can measure has long seemed deeply unsatisfactory to many researchers. “We know something fishy is going on in a superposition”, says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research in Zichron Ya’akov. “But you’re not allowed to measure it”, he says – because then the superposition collapses. “This is what makes quantum mechanics so diabolical.”

There have been many attempts to develop alternative points of view to Bohr’s that restore an underlying reality in quantum mechanics – some description of the world before we look. But none seems able to restore the kind of picture we have in classical physics of objects that always have definite positions and paths.

One particular approach that aims to deduce something about quantum particles before their measurement is called the two-state-vector formalism (TSVF) of quantum mechanics, developed by Elitzur’s former mentor, the Israeli physicist Yakir Aharonov, and his collaborators. This postulates that quantum events are in some sense determined by quantum states not just in the past but also in the future: it makes the assumption that quantum mechanics works the same way both forwards and backwards in time. In this view, causes can seem to propagate backwards in time: there is retrocausality.

You don’t have to take that strange notion literally. Rather, in the TSVF you can gain retrospective knowledge of what happened in a quantum system by selecting the outcome: not, say, simply measuring where a particle ends up, but instead choosing a particular location in which to look for it. This is called post-selection, and it supplies more information than any unconditional peek at outcomes ever could, because it means that the particle’s situation at any instant is being evaluated retrospectively in the light of its entire history, up to and including measurement.

“Normal quantum mechanics is about statistics”, says Elitzur’s collaborator Eliezer Cohen: what you see are average values, or what is generally called an expectation value of some variable you are measuring. But by looking at when a system produces some particular, chosen value, you can take a slice through the probabilistic theory and start to talk with certainty about what went on to cause that outcome. The odd thing is that it then looks as if your very choice of outcome was part of the cause.

“It’s generally accepted that the TSVF is mathematically equivalent to standard quantum mechanics,” says David Wallace of the University of Southern California, a philosopher who specializes in interpretations of quantum mechanics. “But it does lead to seeing certain things one wouldn’t otherwise have seen.”

Take, for instance, the version of the double-slit experiment devised using the TSVF by Aharonov and coworker Lev Vaidman in 2003. The pair described (but did not build) an optical system in which a single photon can act as a ‘shutter’ that closes a slit by perfectly reflecting another ‘probe’ photon that is doing the standard trick of interfering with itself as it passes through the slits. Aharonov and Vaidman showed that, by applying post-selection to the measurements of the probe photon, we should be able to see that a shutter photon in a superposition can close both (or indeed many) slits at once. So you could say with confidence that the shutter photon really was both ‘here’ and ‘there’ at once [Y. Aharonov & L. Vaidman, Phys. Rev. A 67, 1–3 (2003)] – a situation that seems paradoxical from our everyday experience but is one aspect of the so-called nonlocal properties of quantum particles, where the whole notion of a well-defined location in space dissolves.

In 2016, Ryo Okamoto and Shigeki Takeuchi of Kyoto University implemented Aharonov and Vaidman’s proposal experimentally using apparatus based on a Mach-Zehnder interferometer [R. Okamoto & S. Takeuchi, Sci. Rep. 6, 35161 (2016)]. The ability of a photon to act as a shutter was enabled by a photonic device called a quantum router, in which one photon can control the route taken by another. The crucial point is that this interaction is cleverly arranged to be completely one-sided: it affects only the probe photon. That way, the probe photon carries away no direct information about the shutter photon, and so doesn’t disturb its superposition – but nonetheless one can retrospectively deduce that the shutter photon was definitely in the position needed to reflect the probe.

The Japanese researchers found that the statistics of how the superposed shutter photon reflects the probe photon matched those that Aharonov and Vaidman predicted, and which could only be explained by some non-classical “two places at once” behaviour. “This was a pioneering experiment that allowed one to infer the simultaneous position of a particle in two places”, says Cohen.

Now Elitzur and Cohen have teamed up with Okamoto and Takeuchi to concoct an even more ingenious experiment, which allows one to say with certainty something about the position of a particle in a superposition at a series of different points in time before any measurement has been made. And it seems that this position is even more odd than the traditional “both here and there”.

Again the experiment involves a kind of Mach-Zehnder set-up in which a shutter photon interacts with some probe photon via quantum routers. This time, though, the probe photon’s route is split into three by partial mirrors. Along each of those paths it may interact with a shutter photon in a superposition. These interactions can be considered to take place within boxes labeled A, B and C along the probe photon’s route, and they provide an unambiguous indication that the shutter particle was definitely in a given box at a specific time.

Because nothing is inspected until the probe photon has completed the whole circuit and reached a detector, there should be no collapse of either its superposition or that of the shutter photon – so there’s still interference. But the experiment is carefully set up so that the probe photon can only show this interference pattern if it interacted with the shutter photon in a particular sequence of places and times: namely, if the shutter photon was in both boxes A and C at some time t1, then at a later time t2 only in C, and at a still later time t3 in both B and C. If you see interference in the probe photon, you can say for sure (retrospectively) that the shutter photon displayed this bizarre appearance and disappearance among the boxes at different times – an idea Elitzur, Cohen and Aharonov proposed as a possibility last year for a single particle superposed into three ‘boxes’ [Y. Aharonov, E. Cohen, A. Landau & A. C. Elitzur, Sci. Rep. 7, 531 (2017)].

Why those particular places and times, though? You could certainly look at other points on the route, says Elitzur, but those times and locations are ones where, in this configuration, the probability of finding the particle becomes 1 – in other words, a certainty.
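
To see how pre- and post-selection can turn “where is the particle?” into a certainty, here is the classic three-box example of Aharonov and Vaidman worked through with the Aharonov–Bergmann–Lebowitz (ABL) rule. This is the textbook ancestor of the new proposal, not a simulation of its optical set-up: a particle is prepared in an equal superposition of boxes A, B and C and later found in a chosen final superposition; conditioned on both, an intermediate look in box A is certain to find it there, and so, separately, is a look in box B.

```python
import numpy as np

A, B, C = np.eye(3)                        # basis states: particle in box A, B or C
pre = (A + B + C) / np.sqrt(3)             # pre-selected (initial) state
post = (A + B - C) / np.sqrt(3)            # post-selected (final) state

def abl_probability(projector):
    """ABL rule: probability that an intermediate projective measurement of
    'is the particle in this box?' gives 'yes', given pre- and post-selection."""
    yes = abs(post @ projector @ pre) ** 2
    no = abs(post @ (np.eye(3) - projector) @ pre) ** 2
    return yes / (yes + no)

for name, state in (("A", A), ("B", B), ("C", C)):
    proj = np.outer(state, state)
    print(f"P(found in box {name}) = {abl_probability(proj):.2f}")
# Prints 1.00 for box A, 1.00 for box B, and 0.20 for box C.
```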

So this thought experiment seems to lift part of the veil off a quantum superposition, and to let us say something definite beyond Bohr’s “Don’t ask” proscription. The TSVF opens up the story by considering both the initial and final states, which allows one to reconstruct what was not measured, namely what happens in between. “I like the way this paper frames questions about what is happening in terms of entire histories, rather than instantaneous states”, says physicist Ken Wharton of San Jose State University in California. “Talking about ‘states’ is an old pervasive bias, whereas full histories are generally far more rich and interesting.”

And the researchers’ interpretation of that intermediate history before measurement is extraordinary. The apparent vanishing of particles in one place at one time, and their reappearance in other times and places, suggests a new vision of what the underlying processes are that create quantum randomness and nonlocality. Within the TSVF, this flickering, ever-changing existence can be understood as a series of events in which a particle is somehow ‘cancelled’ by its own “counterparticle”, with negative energy and negative mass.

Elitzur compares this to the notion introduced by British physicist Paul Dirac in the 1920s that particles have antiparticles that can annihilate one another – a picture that seemed at first just a manner of speaking, but which soon led to the discovery that such antiparticles are real. The disappearance of quantum particles is not annihilation in this same sense, but it is somewhat analogous.

So while the traditional “two places at once” view of superpositions might seem odd enough, “it’s possible that a superposition is a collection of states that are even crazier”, says Elitzur. “Quantum mechanics just tells you about their average.” Post-selection then allows one to isolate and inspect just some of those states at greater resolution, he suggests. With just a hint of nervousness, he ventures to suggest that as a result, measurements on a quantum particle might be contingent on when you look even if the quantum state itself is unchanging in time. You might not find it here when you look – but had you looked a moment later, it might indeed have been there. Such an interpretation of quantum behaviour would be, Elitzur says, “revolutionary” – because it would entail a hitherto unguessed menagerie of real states underlying counter-intuitive quantum phenomena.

The researchers say that to do the actual experiment will require some refining of what quantum routers are capable of, but that they hope to have it ready to roll in three to five months. “The experiment is bound to work”, says Wharton – but he adds that it is also “bound to not convince anyone of anything, since the results are predicted by standard quantum mechanics.”

Elitzur agrees that this picture of a particle’s apparent appearance and disappearance at various points along the trajectory could have been noticed in quantum mechanics decades ago. But it never was. “Isn’t that a good indication of the soundness of the TSVF?” he asks. And if someone thinks they can formulate a different picture of “what is really going on” in this experiment using standard quantum mechanics, he says, “well, let them go ahead!”

Tuesday, April 24, 2018

More on the politics of genes and education

There was never any prospect that my article in New Statesman on genes, intelligence and education would wrap up everything so nicely that there was nothing left to be said. For one thing, aspects of the science are still controversial – I would have liked, among other things, to delve more deeply into the difficulties (impossibility, actually) of cleanly separating genetic from environmental influences on intelligence.

I was, I admit, somewhat hard on Toby Young, while wanting to absolve him from some of the kneejerk accusations that have come his way. He is not some swivel-eyed hard-right eugenicist, and indeed if I have given the impression that he is a crude social Darwinist, as Toby thinks I have, then I have given a wrong impression: his position is more nuanced than that. Toby has been rather gracious in his response in The Spectator.

OK, not entirely – but so it goes. I recognize the temptation to construct artificial narratives, and I fear Toby has done so in his discussion of my article in Prospect. I take his remark on my “bravery” in tackling this subject after writing that piece as a backhanded compliment that implies I was brave to return to a subject after I’d screwed up earlier. In fact, my Prospect piece was not primarily about genes and intelligence anyway. Yes, Stuart Ritchie had some criticisms about that particular aspect of it, but these centred on technical arguments about other studies in the field – in other words, on issues that the specialists themselves are arguing about. Other geneticists, including some who work on intelligence, saw and approved my article. To say that I had to “publish some ‘clarifications’” after “a lot of criticism” is misleading to a rather naughty degree. The reader is meant to infer that these are euphemistic ‘clarifications’, i.e. corrections made in response to errors pointed out. Actually I “published” nothing of the sort – what Toby is referring to are merely some comments I posted on my blog in response to the discussion.

As for the link to the criticisms made by Dominic Cummings: well, I recommend you read them. Not because they add anything of substance to the discussion, but because they are a reminder of what this man, who once wielded considerable behind-the-scenes political power and who has had an inordinate influence on the current predicament of the country, is really like. I still find it chilling.

What’s most striking about Toby’s piece, however, is how political it is. I don’t consider that a criticism, but rather, a vindication of one of the central points of my article in New Statesman: that while the science is fairly (if not entirely) clear, what one concludes from it is highly dependent on political leaning.

This includes a tendency to attribute ideas and views to your political opposites simply because of their persuasion. I must acknowledge the possibility that I did so with Toby. He returns the favour here:
“I suspect the popularity of the ‘personalised learning’ recommendation among the experts in this field – as well as Philip Ball – is partly because they don’t want to antagonise their left-wing colleagues.”

Actually I am sceptical about ‘personalized learning’ based on genetic intelligence measures, and said so in the article, since I see no evidence that they could be effective (although I’m open to the possibility that that might change). The aim of my article, Toby decided, was to reassure my fellow liberals that yes, genes do influence intelligence, but really it’ll be OK.

I find this bizarre – but not as bizarre as the view Toby attributes to Charles Murray, who seems to think that the “left” is either going to have a breakdown over genetic influences on traits or, worse, will decide to embrace genetic social engineering, using CRISPR no less, to eradicate innate differences in some sort of Brave New World scenario. If Murray really thinks that, his grasp of the science is as poor as some experts have said it is. And if in his alternative universe he finds a hard-left government trying to do such things anyway, he’ll find me alongside him opposing it.

You see, what we leftists are told we believe is that everyone is a blank slate, equal in all respects, until society kicks in with its prejudices and inequalities. And we denounce anything to the contrary as crypto-fascism. Steven Pinker, who has pushed the ‘blank slate’ as a myth of the left, weighed in on my article by commenting that even left-leaning magazines like New Statesman are now having to face up to the truth, as though my intention were to confess to past leftie sins of omission.

Now, I fully acknowledge that there have been hysterical reactions to ‘sociobiology’ and to suggestions that human traits may be partly genetically hardwired. And these have often come from the left – indeed, sometimes from the Marxian post-modern intellectuals who Pinker regards as the root of so many modern evils. But such denial is plain silly, and I’m not sure that many left-leaning moderates would disagree, or would be somehow too frightened to say so.

The caricatures Toby creates are grotesque. “It’s now just flat out wrong to think that varying levels of ability and success are solely determined by economic and historical forces”, he says. We agree – but does anyone seriously want to argue otherwise?

“That means it’s a dangerous fantasy”, he continues, “to think that, once you’ve eradicated socio-economic inequality, human nature will flatten out accordingly – that you can return to ‘year zero’, as the Khmer Rouge put it. On the contrary, biological differences between human beings will stubbornly refuse to wither away, which means that an egalitarian society can only be maintained by a brutally coercive state that is constantly intervening to ‘correct’ the inequities of nature.”

But most of us who would like to see an “egalitarian society” don’t mean by that a society in which absolute equality is imposed by the jackboot. We just want to see, for example, fewer people struggle against the inequalities they are born into, while others rise to power and influence on the back of their privileged background. We want to see less tolerance of, and even encouragement of, naked greed that exploits the powerless. We want to see more equality of opportunity. I think we accept that there can never be equality of outcome, at least without unjustified coercion. But we would also like to see reward more closely tied to contribution to society, not simply to what you can get away with. And in fact, while we will differ in degree and probably in methodology, I suspect that in these aspirations we liberal lefties are not so different from Toby Young.

In fact, evidently we do agree on this much:
“The findings of evolutionary psychologists, sociobiologists, cognitive neuroscientists, biosocial criminologists, and so on, [don’t] inevitably lead to Alan Ryan’s ‘apocalyptic conservatism’. On the contrary, I think they’re compatible with a wide range of political arrangements, including – at a pinch – Scandinavian social democracy.”

Which is why it’s baffling to me that Toby thinks we “progressive liberals” should be so disconcerted by the findings of genetics. Disconcerted by the discovery that traits, like height, are partly innate? Disconcerted that a society that tries to impose complete equality of ability on everyone will be a Stalinist dystopia? The implication here seems to be that science has disproved our leftwing delusions, and we’d better face up to that. But all it has ‘disproved’ is some wild, extreme fantasies and some straw men.

Such comments only reinforce my view that all this politicization of the debate gets in the way of actually moving it on. In my experience, the reason many educators and educationalists are not terribly enchanted with studies of the genetic basis of intelligence is not because they think it is some foul plot but because they don’t see it as terribly relevant. It doesn’t help them do their job any better. Now, if that leads them to actually deny the role of genes in intelligence, then they’re barking up the wrong tree. But I think many see it merely as a distraction from the business of trying to improve education. After all, so far genetics has offered next to no suggestions about how to do that – as I said in my article, pretty much all the sensible recommendations that Robert Plomin and Kathryn Asbury make in their book could have been made without the benefit of genetic studies.

Now, one way to read the implications of those studies is that there actually isn’t much that educationalists can do. Take the recent paper by Plomin and colleagues claiming that schools make virtually no additional contribution to outcomes beyond the innate cognitive abilities of their student intake. This is a very interesting finding, but there needs to be careful discussion about what it means. So we shouldn’t worry at all about Ofsted reports of “failing” schools? I doubt if anyone would conclude that, but then how is a school influencing outcomes? When a new head arrives and turns a school around, what has happened? Has the new head somehow just managed to alter the IQ distribution of the intake? I don’t know the answers to these things.

The authors of that paper are not so unwise as to conclude that (presumably beyond some minimal level of competence) “teaching makes no difference to outcomes”. But you can imagine others drawing that conclusion, and one should then understand if some teachers and educators express frustration with this sort of thing. For one thing, the differences teaching and teachers make are not always going to be registered in exam results. As things stood, I was always going to get A’s in my chemistry A levels – but it was the enthusiasm and advocacy of Dr McCarthy and Mr Heasman that inspired me to study the subject at university. I was probably always going to get an A in my English O level, but it was Ms Priske who encouraged me to read Mervyn Peake.

All too often, however, the position of right-leaning commentators on the matter can read like laissez-faire: tinker all you like but it’s not going to make much difference, because you well-meaning liberals are just going to have to accept that some pupils are smarter than others. (So why are Conservative education ministers so keen to keep buggering about with the curriculum?) And if you do manage to level the playing field, you’ll see that even more clearly. And then where will you be, eh, with all your Maoist visions?

I don’t think they really do think like this; at least I don’t think Toby does. I certainly hope not. But that’s why both sides have to stop any posturing about the facts, and get on with figuring out what to make of them. We already know not all kids will do equally well in exams, come what may. But how do we find those who could do better, given the right circumstances? How do we find ways of engaging those pupils with ability but not inclination? How do we find ways of helping those of lower academic ability feel fulfilled rather than discarded in the bottom set? How do we decide, for God’s sake, what is important in an education anyway? These are the kinds of hard questions that teachers and educators have to face every day, and it would be good to see if the knowledge we’re gaining about inherent cognitive abilities could be useful to them, rather than turning it into a political football.

Friday, April 13, 2018

The thousand-year song

In February I had the pleasure of meeting Jem Finer, the founder of the Longplayer project, to discuss the “music of the future” at this event in London. It seemed a perfect subject for my latest column for Sapere magazine on music cognition, where it will appear in Italian. Here it is in English.
______________________________________________________________

Most people will have experienced music that seemed to go on forever, and usually that’s not a good thing. But Longplayer, a composition by British musician Jem Finer, a founder member of the band The Pogues, really does. It’s a piece conceived on a geological timescale, lasting for a thousand years. So far, only 18 of them have been performed – but the performance is ongoing even as you read this. It began at the turn of the new millennium and will end on 31 December 2999. Longplayer can be heard online and at various listening posts around the world, the most evocative being a Victorian lighthouse in London’s docklands.

Longplayer is scored for a set of Tibetan singing bowls, each of which sounds in a repeating pattern determined by a mathematical algorithm that will not repeat any combination exactly until one thousand years have passed. The parts interweave in complex, constantly shifting ways, not unlike compositions such as Steve Reich’s Piano Phase in which repeating patterns move in and out of step. Right now Longplayer sounds rather serene and meditative, but Finer says that there are going to be pretty chaotic, discordant passages ahead, lasting for decades at a time – albeit not in his or my lifetime.
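
The non-repetition is easier to believe with a little arithmetic. The sketch below is not Finer’s actual algorithm – the loop lengths and the time unit are invented purely for illustration – but it shows the Reich-like principle at work: layer loops whose lengths share no common factor, and the combined texture does not repeat until their lowest common multiple, which can easily be pushed far past a thousand years.

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

# Six hypothetical loops, lengths in seconds, chosen to be pairwise coprime.
loop_lengths = [127, 131, 137, 139, 149, 151]

period = reduce(lcm, loop_lengths)          # first moment all the loops realign
years = period / (365.25 * 24 * 3600)
print(f"combined pattern first repeats after {period} s ≈ {years:,.0f} years")
```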


The visual score of Longplayer. (Image: Jem Finer/Longplayer Foundation)


An installation of Tibetan prayer bowls used for Longplayer at Trinity Buoy Wharf, London Docks. (Photo: James Whitaker)

One way to regard Longplayer is as a kind of conceptual artwork, taking with a pinch of salt the idea that it will be playing in a century’s time, let alone a millennium. Finer, though, has careful plans for how to sustain the piece into the indefinite future in the face of technological and social change. There’s no doubt that performance is a strong feature of the project: live events playing part of the piece have been rather beautiful, the instruments arrayed in concentric circles that reflect both the score itself and the sense of planetary orbits unfurling in slow, dignified synchrony.

But if this all seems ritualistic, so is a great deal of music. I do think Longplayer is a serious musical adventure, not least in how it both emphasizes and challenges the central cognitive process involved in listening: our perception of pattern and regularity. Those are the building blocks of this piece, and yet they take place mostly beyond the scope of an individual’s perception, forcing us – as perhaps the pointillistic dissonance of Pierre Boulez’s total serialism does – to find new ways of listening.

More than this, though, Longplayer connects to the persistence of music through the “deep time” of humanity, offering a message of determination and hope. Tectonic plates may shift, the climate may change, we might even reinvent ourselves – but we will do our best to ensure that this expression of ourselves will endure.


A live performance of part of Longplayer at the Yerba Buena Center, San Francisco, in 2010. (Photo: Stephen Hill)

Thursday, March 01, 2018

On the pros and cons of showing copy to sources - redux

Dana Smith has written a nice article for Undark about whether science journalists should or should not show drafts or quotes to their scientist sources before publication.

I’ve been thinking about this some more after writing the blog entry from which Dana quotes. One issue that I think comes out from Dana’s piece is that there is perhaps something of a generational divide here: I sense that younger writers are more likely to consider it ethically questionable ever to show drafts to sources, while old’uns like me, Gary Stix and John Rennie have less of a problem with it. And I wonder if this has something to do with the fact that the old’uns probably didn’t get much in the way of formal journalistic training (apologies to Gary and John if I’m wrong!), because science writers rarely did back then. I have the impression that “never show anything to sources” is a notion that has entered into science writing from other journalistic practice, and I do wonder if it has acquired something of the status of dogma in the process.

Erin Biba suggests that the onus is on the reporter to get the facts right. I fully agree that we have that responsibility. But frankly, we will often not get the facts right. Science is not uniquely hard, but it absolutely is hard. Even when we think we know a topic well and have done our best to tell it correctly, chances are that there are small, and sometimes big, ways in which we’ll miss what real experts will see. To suggest that asking the experts is “the easy way out” sounds massively hubristic to me.

(Incidentally, I’m not too fussed about the matter of checking out quotes. If I show drafts, it’s to check whether I have got any of the scientific details wrong. I often tend to leave in quotes just because there doesn’t seem much point in removing them – they are very rarely queried – but I might omit critical quotes from others to avoid arguments that might otherwise end up needing third-party peer review.)

Dana doesn’t so much go into the arguments for why it is so terrible (in the view of some) to show your copy to sources. She mentions that some say it’s a matter of “journalistic integrity”, or just that it’s a “hard rule” – which makes the practice sound terribly transgressive. But why? The argument often seems to be, “Well, the scientists will get you to change your story to suit them.” To which I say, “Why on earth would I let them do that?” In the face of such attempts (which I’ve hardly ever encountered), why do I not just say, “Sorry, no”? Oh, but you’ll not be able to resist, will you? You have no will and judgement. You’re just a journalist.

Some folks, it’s true, say instead “Oh, I know you’ll feel confident and assertive enough to resist undue pressure to change the message, but some younger reporters will be more vulnerable, so it’s safer to have a blanket policy.” I can see that point, and am not unsympathetic to it (although I do wonder whether journalistic training might focus less on conveying the evils of showing copy to sources and more on developing skills and resources for resisting such pressures). But so long as I’m able to work as a freelancer on my own terms, I’ll continue to do it this way: to use what is useful and discard what is not. I don’t believe it is so hard to tell the difference, and I don’t think it is very helpful to teach science journalists that the only way you can insulate yourself from bad advice is to cut yourself off from good advice too.

Here’s an example of why we science writers would be unwise to trust we can assess the correctness of our writing ourselves, and why experts can be helpful if used judiciously. I have just written a book on quantum mechanics. I have immersed myself in the field, talked to many experts, read masses of books and papers, and generally informed myself about the topic in far, far greater detail than any reporter could be expected to do in the course of writing a news story on the subject. That’s why, when a Chinese team reported last year that they had achieved quantum teleportation between a ground base and a satellite, I felt able to write a piece for Nature explaining what this really means, and pointing out some common misconceptions in the reporting of it.

And I feel – and hope – I managed to do that. But I got something wrong.

It was not a major thing, and didn’t alter the main point of the article, but it was a statement that was wrong.

I discovered this only when, in correspondence with a quantum physicist, he happened to mention in passing that one of his colleagues had criticized my article for this error in a blog. So I contacted the chap in question and had a fruitful exchange. He asserted that there were some other dubious statements in my piece too, but on that matter I replied that he had either misunderstood what I was saying or was presenting an unbalanced view of the diversity of opinion. The point was, it was very much a give-and-take interaction. But it was clear that on this one point he was right and I was wrong – so I got the correction made.

Now, had I sent my draft to a physicist working on quantum teleportation, I strongly suspect that my error would have been spotted right away. (And I do think it would have had to be a specialist in that particular field, not just a random quantum physicist, for the mistake to have been noticed.) I didn’t do so partly because I had no real sources in this case to bounce off, but also partly because I had a false sense of my own “mastery” of the topic. And this will happen all the time – it will happen not because we writers don’t feel confident in our knowledge of the topic, but precisely because we do feel (falsely) confident in it. I cannot for the life of me see why some imported norm from elsewhere in journalism makes it “unethical” to seek expert advice in a case like this – not advice before we write, but advice on what we have actually written.

Erin is right to say that most mistakes, like mine here, really aren’t a big deal. They’re not going to damage a scientist’s career or seriously mislead the public. And of course we should admit to and correct them when they happen. But why let them happen more often than they need to?

As it happens, having said earlier that I very rarely get responses from scientists to whom I’ve shown drafts beyond some technical clarifications, I recently wrote two pieces that were less straightforward. Both were on topics that I knew to be controversial. And in both cases I received some comments that made me suspect their authors were wanting to somewhat dictate the message, taking issue with some of the things the “other side” said.

But this was not a problem. I thought carefully about what they said, took on board some clearly factual remarks, considered whether the language I’d used captured the right nuance in some other places, and simply decided I would respectfully decline to make any modifications to my text in others. Everything was on a case-by-case basis. These scientists were in return very respectful of my position. They seemed to feel that I’d heard and considered their position, and that I had priorities and obligations different from theirs. I felt that my pieces were better as a result, without my independence being at all compromised, and they were happy with the outcome. Everyone, including the readers, was better served by the exchange. I’m quite baffled by how there could be deemed to be anything unethical in that.

And that’s one of the things that makes me particularly uneasy about how showing any copy to sources is sometimes presented not as an informed choice but as tantamount to breaking a professional code. I’ve got little time for the notion that it conflicts with the journalist’s mission to critique science and not merely act as its cheerleader. Getting your facts right and sticking to your guns are separate matters. Indeed, I have witnessed plenty of times the way in which a scientist who is being (or merely feels) criticized will happily seize on any small errors (or just misunderstandings of what you’ve written) as a way of undermining the validity of the whole piece. Why give them that opportunity after the fact? The more airtight a piece is factually, the more authoritative the critique will be seen to be.

I should add that I absolutely agree with Erin that the headlines our articles are sometimes given are bad, misleading and occasionally sensationalist. I’ve discussed this too with some of my colleagues recently, and I agree that we writers have to take some responsibility for this, challenging our editors when it happens. It’s not always a clear-cut issue: I’ve received occasional moans from scientists and others about headlines that didn’t quite get the right nuance but which I thought weren’t so bad, and so I’m not inclined to start badgering folks about those. (I wouldn’t have used the headline that Nature gave my quantum teleportation piece, but hey.) But I think magazines and other outlets have to be open to this sort of feedback – I was disheartened to find that one that I challenged recently was not. (I should say that others are – Prospect has always been particularly good at making changes if I feel the headlines for my online pieces convey the wrong message.) As Chris Chambers has rightly tweeted, we’re all responsible for this stuff: writers, editors, scientists. So we need to work together – which also means standing up to one another when necessary, rather than simply not talking.

Sunday, February 04, 2018

Should you send the scientist your draft article?

The Twitter discussion sparked by this poll was very illuminating. There’s a clear sense that scientists largely think they should be entitled to review quotes they make to a journalist (and perhaps to see the whole piece), while journalists say absolutely not, that’s not the way journalism works.

Of course (well, I say that but I’m not sure it’s obvious to everyone), the choices are not: (1) Journalist speaks to scientist, writes the piece, publishes; or (2) Journalist speaks to scientist, sends the scientist the piece so that the scientist can change it to their whim, publishes.

What more generally happens is that, after the draft is submitted to the editor, the article gets fact-checked by the publication before it appears. Typically this involves a fact-checker calling up the scientist and saying “Did you basically say X?” (usually with a light paraphrase). The fact-checker also typically asks the writer to send transcripts of interviews, to forward email exchanges and so on, as well as to provide links or references to back up factual statements in the piece. This is, of course, time-consuming, and the extent to which, and rigour with which, it is done depends on the resources of the publication. Some science publications, like Quanta, have a great fact-checking machinery. Some smaller or more specialized journals don’t really have much of it at all, and might rely on an alert subeditor to spot things that look questionable.

This means that a scientist has no way of knowing, when he or she gives an interview, how accurately they are going to be quoted – though in some cases the writer can reassure them that a fact-checker will get in touch to check quotes. But – and this is the point many of the comments on the poll don’t quite acknowledge – it is not all about quotes! Many scientists are equally concerned about whether their work will be described accurately. If they don’t get to see any of the draft and are just asked about quotes, there is no way to ensure this.

One might say that it’s the responsibility of the writer to get that right. Of course it is. And they’ll do their best, for sure. But I don’t think I’ll be underestimating the awesomeness of my colleagues to say that we will get it wrong. We will get it wrong often. Usually this will be in little ways. We slightly misunderstood the explanation of the technique, we didn’t appreciate nuances and so our paraphrasing wasn’t quite apt, or – this is not uncommon – what the scientist wrote, and which we confidently repeated in simpler words, was not exactly what they meant. Sometimes our oversights and errors will be bigger. And if the reporter who has read the papers and talked with the scientists still didn’t quite get it right, what chance is there that even the most diligent fact-checker (and boy are they diligent) will spot that?

OK, mistakes happen. But they don’t have to, or not so often, if the scientist gets to see the text.

Now, I completely understand the arguments for why it might not be a good idea to show a draft to the people whose work is being discussed. The scientists might interfere to try to bend the text in their favour. They might insist that their critics, quoted in the piece, are talking nonsense and must be omitted. They might want to take back something they said, having got cold feet. Clearly, a practice like that couldn’t work in political writing.

Here, though, is what I don’t understand. What is to stop the writer saying No, that stays as it is? Sure, the scientist will be pissed off. But the scientist would be no less pissed off if the piece appeared without them ever having seen it.

Folks at Nature have told me, Well sometimes it’s not just a matter of scientists trying to interfere. On some sensitive subjects, they might get legal. And I can see that there are some stories, for example looking at misconduct or dodgy dealings by a pharmaceutical company, where passing round a draft is asking for trouble. Nature says that if they have a blanket policy so that the writer can just say Sorry, we don’t do that, it makes things much more clear-cut for everyone. I get that, and I respect it.

But my own personal preference is for discretion, not blanket policies. If you’re writing about, say, topological phases and it is brain-busting stuff, trying to think up paraphrases that will accurately reflect what you have said (or what the writer has said) to the interviewee while fact-checking seems a bit crazy when you could just show the researcher the way you described a Dirac fermion and ask them if it’s right. (I should say that I think Nature would buy that too in this situation.)

What’s more, there’s no reason on earth why a writer could not show a researcher a draft minus the comments that others have made on their work, so as to focus just on getting the facts right.

The real reason I feel deeply uncomfortable about the way that showing interviewees a draft is increasingly frowned upon, and even considered “highly unethical”, is however empirical. In decades of having done this whenever I can, and whenever I thought it advisable, I struggle to think of a single instance where a scientist came back with anything obstructive or unhelpful. Almost without exception they are incredibly generous and understanding, and any comments they made have improved the piece: by pointing out errors, offering better explanations or expanding on nuances. The accuracy of my writing has undoubtedly been enhanced as a result.

Indeed, writers of Focus articles for the American Physical Society, which report on papers generally from the Phys Rev journals, are requested to send articles to the papers’ authors before publication, and sometimes to get the authors to respond to criticisms raised by advisers. And this is done explicitly with the readers in mind: to ensure that the stories are as accurate as possible, and that they get some sense of the to-and-fro of questions raised. Now, it’s a very particular style of journalism at Focus, and wouldn’t work for everyone; but I believe it is a very defensible policy.

The New York Times explained its "no show" policy in 2012, and it made a lot of sense: it seems some political spokespeople and organizations were demanding quote approval and abusing it to exert control over what was reported. Press aides wanted to vet everything. This was clearly compromising to open and balanced reporting.

But I have never encountered anything like that in many years of science reporting. That's not surprising, because it is (at least when we are reporting on scientific papers for the scientific press) a completely different ball game. Occasionally I have had people working at private companies needing to get their answers to my questions checked by the PR department before passing them on to me. That's tedious, but if it means that what results is something extremely anodyne, I just won't use it. I've also found some institutions - the NIH is particularly bad at this - reluctant to let their scientists speak at all, so that questions get fielded to a PR person who responds with such pathetic blandness and generality that it's a waste of everyone's time. It's a dereliction of duty for state-funded scientific research, but that's another issue.

As it happens, just recently while writing on a controversial topic in physical chemistry, I encountered the extremely rare situation where, having shown my interviewees a draft, one scientist told me that it was wrong for those in the other camp to be claiming X, because the scientific facts of the matter had been clearly established and they were not X. So I said fine, I can quote you as saying “The facts of the matter are not X” – but I will keep the others insisting that X is in fact the case. And I will retain the authorial voice implying that the matter is still being debated and is certainly not settled. And this guy was totally understanding and reasonable, and respected my position. This was no more or less than I had anticipated, given the way most scientists are.

In short, while I appreciate that an insistence that we writers not show drafts to the scientists is often made in an attempt to save us from being put in an awkward situation, in fact it can feel as though we are being treated as credulous dupes who cannot stand up to obstruction and bullying (if it should arise, which in my experience it hasn’t in this context), or resist manipulation, or make up our own minds about the right way to tell the story.

There’s another reason why I prefer to ask the scientists to review my texts, though – which is that I also write books. In non-fiction writing there simply is not this notion that you show no one except your editor the text before publication. To do so would be utter bloody madness. Because You Will Get Things Wrong – but with expert eyes seeing the draft, you will get much less wrong. I have always tried to get experts to read drafts of my books, or relevant parts of them, before publication, and I always thank God that I did and am deeply grateful that many scientists are generous enough to take on that onerous task (believe me, not all other disciplines have a tradition of being so forthcoming with help and advice). Always when I do this, I have no doubt that I am the author, and that I get the final say about what is said and how. But I have never had a single expert reader who has been anything but helpful, sympathetic and understanding. (Referees of books for academic publishers, however – now that’s another matter entirely. Don’t get me started.)

I seem to be in a minority here. And I may be misunderstanding something. Certainly, I fully understand why some science writers, writing some kinds of stories, would find it necessary to refuse to show copy to interviewees before publication. What's more, I will always respect editors’ requests not to show drafts of articles to interviewees. But I will continue to do so, when I think it is advisable, unless requested to do otherwise.

Friday, January 05, 2018

What to look out for in science in 2018

I wrote a piece for the Guardian on what we might expect in science, and what some of the big issues will be, in 2018. It was originally somewhat longer than the paper could accommodate, explaining some issues in more detail. Here’s that longer version.

_____________________________________________________

Quantum computers
This will be the year when we see a quantum computer solve some computational problem beyond the means of the conventional ‘classical’ computers we currently use. Quantum computers use the rules of quantum mechanics to manipulate binary data – streams of 1s and 0s – and this potentially makes them much more powerful than classical devices. At the start of 2017 the best quantum computers had only around 5 quantum bits (qubits), compared to the billions of transistor-based bits in a laptop. By the close of the year, companies like IBM and Google said that they were testing devices with ten times that number of qubits. It still doesn’t sound like much, but many researchers think that just 50 qubits could be enough to achieve “quantum supremacy” – the solution of a task that would take a classical computer so long as to be practically impossible. This doesn’t mean that quantum computers are about to take over the computer industry. For one thing, they can so far only carry out certain types of calculation, and dealing with random errors in the calculations is still extremely challenging. But 2018 will be the year that quantum computing changes from a specialized game for scientists to a genuine commercial proposition.
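
A rough way to see where the “around 50 qubits” figure comes from: simulating an n-qubit register classically by brute force means tracking 2^n complex amplitudes, and the memory cost explodes. This back-of-envelope sketch is mine, and cleverer simulation methods can do considerably better than brute force, which is partly why the exact crossover point is debated.

```python
# Memory needed to store a full n-qubit state vector classically,
# assuming 16 bytes per complex amplitude (double precision).
for n in (30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:2d} qubits: {amplitudes:>20,d} amplitudes ≈ {gigabytes:,.0f} GB")
```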

Quantum internet
Using quantum rules for processing information has more advantages than just a huge speed-up. These rules make possible some tricks that just aren’t imaginable using classical physics. Information encoded in qubits can be encrypted and transmitted from a sender to a receiver in a form that can’t be intercepted and read without that eavesdropping being detectable by the receiver, a method called quantum cryptography. And the information encoded in one particle can in effect be switched to another identical particle in a process dubbed quantum teleportation. In 2017 Chinese researchers demonstrated quantum teleportation in a light signal sent between a ground-based source and a space satellite. China has more “quantum-capable” satellites planned, as well as a network of ground-based fibre-optic cables, that will ultimately comprise an international “quantum internet”. This network could support cloud-based quantum computing, quantum cryptography and surely other functions not even thought of yet. Many experts put that at a decade or so off, but we can expect more trials – and inventions – of quantum network technologies this year.
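
The claim that eavesdropping is detectable can be illustrated with a classical toy model of the best-known quantum key distribution scheme, BB84. This sketch is mine, not a model of the Chinese satellite experiments, and the function name and parameters are invented for illustration: when an interceptor measures and re-sends the photons, she inevitably guesses the encoding basis wrong about half the time, which shows up as roughly a 25% error rate in the key positions that the sender and receiver later compare.

```python
import random

def bb84_error_rate(n_photons=100_000, eavesdrop=False):
    """Toy intercept-resend model of a BB84-style key exchange.
    Returns the error rate on the sifted key, i.e. the rounds in which
    Alice's and Bob's randomly chosen bases happen to agree."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)                  # Alice's raw key bit
        alice_basis = random.randint(0, 1)          # 0 = rectilinear, 1 = diagonal
        value, basis = bit, alice_basis             # the state in flight
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:                  # wrong basis: outcome is random
                value = random.randint(0, 1)
            basis = eve_basis                       # Eve re-sends in her own basis
        bob_basis = random.randint(0, 1)
        if bob_basis != basis:                      # Bob in the wrong basis: random
            value = random.randint(0, 1)
        if bob_basis == alice_basis:                # keep only matching-basis rounds
            sifted += 1
            errors += value != bit
    return errors / sifted

print("no eavesdropper: ", round(bb84_error_rate(eavesdrop=False), 3))  # ~0.0
print("intercept-resend:", round(bb84_error_rate(eavesdrop=True), 3))   # ~0.25
```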

RNA therapies
The announcement in December of a potential new treatment for Huntington’s disease, an inheritable neurodegenerative disease for which there is no known cure, has implications that go beyond this particularly nasty affliction. Like many dementia-associated neurodegenerative diseases such as Parkinson’s and Alzheimer’s, Huntington’s is caused by a protein molecule involved in regular brain function that can ‘misfold’ into a form that is toxic to brain cells. In Huntington’s, which currently affects around 8,500 people in the UK, the faulty protein is produced by a mutation of a single gene. The new treatment, developed by researchers at University College London, uses a short strand of DNA that, when injected into the spinal cord, attaches to an intermediary molecule involved in translating the mutated gene to the protein and stops that process from happening. The strategy was regarded by some researchers as unlikely to succeed. The fact that the current preliminary tests proved dramatically effective at lowering the levels of toxic protein in the brain suggests that the method might be a good option not just for arresting Huntington’s but other similar conditions, and we can expect to see many labs trying it out. The real potential of this new drug will become clearer when the Swiss pharmaceuticals company Roche begins large-scale clinical trials.

Gene-editing medicine
Diseases that have a well defined genetic cause, due perhaps to just one or a few genes, can potentially be cured by replacing the mutant genes with properly functioning, healthy ones. That’s the basis of gene therapies, which have been talked about for years but have so far failed to deliver on their promise. The discovery in 2012 of a set of molecular tools, called CRISPR-Cas9, for targeting and editing genes with great accuracy has revitalized interest in attacking such genetic diseases at their root. Some studies in the past year or two have shown that CRISPR-Cas9 can correct faulty genes in mice, responsible for example for liver disease or a mouse form of muscular dystrophy. But is the method safe enough for human use? Clinical trials kicked off in 2017, particularly in China but also in the US; some are aiming to suppress the AIDS virus HIV, others to tackle cancer-inducing genetic mutations. It should start to become clear in 2018 just how effective and safe these procedures are – but if the results are good, the approach might be nothing short of revolutionary.

High-speed X-ray movies
Developing drugs and curing disease often relies on an intimate knowledge of the underlying molecular processes, and in particular on the shape, structure and movements of protein molecules, which orchestrate most of the molecular choreography of our cells. The most powerful method of studying those details of form and function is crystallography, which involves bouncing beams of X-rays (or sometimes of particles such as electrons or neutrons) off crystals of the proteins and mathematically analysing the patterns in the scattered beams. This approach is tricky, or even impossible, for proteins that don’t form crystals, and it only gives ‘frozen’ structures that might not reflect the behaviour of floppy proteins inside real cells. A new generation of instruments called X-ray free-electron lasers, which use particle-accelerator technologies developed for physics to produce extremely bright X-ray beams, can give a sharper view. In principle they can produce snapshots from single protein molecules rather than crystals containing billions of them, as well as offering movies of proteins in motion at trillions of frames per second. A new European X-ray free-electron laser in Hamburg inaugurated in September is the fastest and brightest to date, while two others in Switzerland and South Korea are starting up too, and another at Stanford in California is getting an ambitious upgrade. As these instruments host their first experiments in 2018, researchers will acquire a new window into the molecular world.

100,000 genomes
By the end of 2018 the company Genomics England, set up and owned by the UK Department of Health, should have completed its goal of reading the genetic information in 100,000 genomes from around 75,000 voluntary participants. About a third of these people will be cancer patients, who will each have separate genomes read from their cancer cells and from healthy cells; the others will be people with rare genetic diseases and their close relatives. With such a huge volume of data, it should be possible to identify gene mutations linked to cancer and to some of the many thousands of known rare diseases. This information could help with the diagnosis of cancers and rare diseases, and perhaps also improve treatments. For example, a gene mutation that causes a rare disease (collectively, such diseases are likely to affect around one person in 17 at some point in their lives) supplies a possible target for new drugs. Genetic information for cancer patients can also help to tailor specific treatments, for example by identifying those not at risk of side effects from what can otherwise be effective anti-cancer drugs.

Gravitational-wave astronomy
The 2017 Nobel prize in physics was awarded to the chief movers behind LIGO, the US project to detect gravitational waves. These are ripples in spacetime caused by extreme astrophysical events such as the merging of two neutron stars or black holes, which have ultra-strong gravitational fields. The ripples produce tiny changes in the dimensions of space itself as they pass, which LIGO – comprising two instruments in Washington State and Louisiana – detects from changes in the distances travelled by laser beams sent along channels to mirrors a few kilometres away. The first gravitational wave was detected in late 2015 and announced in 2016. Last year saw the announcement of a few more detections, including one in August from the first known collision of two neutron stars. Gravitational-wave detectors now also exist or are being built in Europe and Japan, while others are planned that will use space satellites. The field is already maturing into a new form of astronomy that can ‘see’ some of the most cataclysmic events in the universe – and whose observations so far fully confirm Einstein’s theory of general relativity, which describes gravitation. We can expect to see more such events detected in 2018 as gravitational-wave astronomy becomes a regular tool in the astronomer’s toolkit.
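To give a sense of the precision this demands, here is a rough back-of-the-envelope estimate; the strain value is a representative order of magnitude for a strong event rather than a figure from any particular detection, and the 4-kilometre arm length of the LIGO instruments is used for illustration:

\[
\Delta L = h \times L \approx 10^{-21} \times 4\times10^{3}\,\mathrm{m} \approx 4\times10^{-18}\,\mathrm{m},
\]

a change in the length of each arm several hundred times smaller than the diameter of a proton – which is why the mirrors have to be shielded from every conceivable source of terrestrial vibration.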

Beyond the standard model
It’s a glorious time for fundamental physics – but not necessarily for the reasons physicists might hope. The so-called standard model of particle physics, which accounts for all the known particles and forces in nature, was completed with the discovery of the Higgs boson – announced in 2012 and confirmed the following year – using the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, at CERN in Switzerland. The trouble is, it can’t be the whole story. The two most profound theories of physics – general relativity (which describes gravity) and quantum mechanics – are incompatible; they can’t both be right as they stand. That problem has loomed for decades, but it’s starting to feel embarrassing. Physicists have so far failed to break out beyond the standard model and find ‘new physics’ that could show the way forward. String theory offers one possible route to a theory of quantum gravity, but there’s no experimental evidence for it. What’s needed is some clue from particle-smashing experiments about how to extend the standard model: some glimpse of particles, forces or effects outside the current paradigm. Researchers were hoping that the LHC might have supplied that already – in particular, many anticipated finding support for the theory called supersymmetry, which some see as the best candidate for the requisite new physics. But so far there’s been zilch. If another year goes by without any chink in the armour appearing, the head-scratching may turn into hair-pulling.

Crunch time for dark matter
That’s not the only embarrassment for physics. It’s been agreed for decades that the universe must contain large amounts of so-called dark matter – about five times as much, in terms of mass, as all the matter visible as stars, galaxies, and dust. This dark matter appears to exert a gravitational tug while not interacting significantly with ordinary matter or light (whence the ‘dark’) in other ways. But no one has any idea what this dark matter consists of. Experiments have been trying to detect it for years, primarily by looking for very rare collisions of putative dark-matter particles with ordinary particles in detectors buried deep underground (to avoid spurious detections caused by other particles such as cosmic rays) or in space. All have drawn a blank, including results from separate experiments in China, Italy and Canada reported in the late summer and early autumn. The situation is becoming grave enough for some researchers to take more seriously the suggestion that what looks like dark matter is in fact a consequence of something else – such as a new force that modifies the apparent effects of gravity. This year could prove to be crunch time for dark matter: how long do we persist in believing in something when there’s no direct evidence for it?

Return to the moon
In 2018, the moon is the spacefarer’s destination of choice. Among several planned missions, China’s ongoing unmanned lunar exploration programme called Chang’e (after a goddess who took up residence there) will enter its fourth phase in June with the launch of a relay satellite positioned beyond the moon’s ‘dark side’ (the face permanently turned away from the Earth, although it is not actually in perpetual darkness). That craft will then provide a communications link for the mission that should head out to this hidden face of the moon in 2019 atop a Long March 5 rocket. The rocket will carry a robotic lander and rover vehicle to gather information about the mineral composition of the moon, including the amount of water ice in the south polar basin. It’s all the prelude to a planned mission in the 2030s that will take Chinese astronauts to the lunar surface. Meanwhile, tech entrepreneur Elon Musk has claimed that his spaceflight business SpaceX will be ready to fly two paying tourists around the moon this year in the Falcon Heavy rocket and the Dragon capsule the company has developed. Since neither craft has yet had a test flight, you’d best not hold your breath (let alone try to buy a ticket) – but the rocket will at least get its trial launch this year.

Highway to hell
Exploration of the solar system won’t all be about the moon, however. The European Space Agency and the Japanese Aerospace Exploration Agency are collaborating on the BepiColombo mission, which will set off in October on a seven-year journey to Mercury, the smallest planet in the solar system and the closest to the Sun. Like the distant dwarf planet Pluto until the arrival of NASA’s New Horizons mission in 2015, Mercury has been a neglected little guy in our cosmic neighbourhood. That’s partly because of the extreme conditions it experiences: the sunny side of the planet reaches a hellish 430 °C or so, and the orbiting spacecraft will feel heat of up to 350 °C – although the permanently shadowed craters of Mercury’s polar regions stay cold enough to hold ice. BepiColombo (named after the renowned Italian astronomer Giuseppe Colombo) should provide information not just about the planet itself but about the formation of the entire solar system.

Planets everywhere
While there is still plenty to be learnt about our close planetary neighbours, their quirks and attractions have been put in cosmic perspective by the ever-growing catalogue of “exoplanets” orbiting other stars. Over the past two decades the list has grown to nearly 4,000, with many other candidates still being considered. The majority of these were detected by the Kepler space telescope, launched in 2009, which identifies planets from the very slight dimming of their parent star as the planet passes in front (a ‘transit’). But the search for other worlds will hot up in 2018 with the launch of NASA’s Transiting Exoplanet Survey Satellite, which will monitor the brightness of around 200,000 stars during its two-year mission. Astronomers are particularly interested in finding ‘Earth-like planets’, with a size, density and orbit comparable to those of Earth, which might therefore host liquid water – and life. Such candidates should then be studied in more detail by the James Webb Space Telescope, a US-European-Canadian collaboration widely regarded as the successor to the Hubble Space Telescope, due for launch in spring 2019. The Webb might be able to detect possible signatures of life within the chemical composition of exoplanet atmospheres, such as the presence of oxygen. With luck, within just a couple of years or so we may have good reason to suspect we are not alone in the universe.
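To see why transit surveys need such exquisitely precise photometry, consider the fractional dimming a transit causes, which is set by the ratio of the planet’s and star’s cross-sectional areas. Using standard round values for Earth and the Sun (purely for illustration, not figures from the article):

\[
\frac{\Delta F}{F} \approx \left(\frac{R_\mathrm{planet}}{R_\mathrm{star}}\right)^{2} \approx \left(\frac{6.4\times10^{3}\,\mathrm{km}}{7.0\times10^{5}\,\mathrm{km}}\right)^{2} \approx 8\times10^{-5},
\]

so an Earth-sized planet crossing a Sun-like star dims it by less than 0.01 per cent, while a Jupiter-sized planet (about 11 times Earth’s radius) produces a far more conspicuous dip of roughly 1 per cent.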

Mapping the brain
It’s sometimes said, with good reason, that understanding outer space is easier than understanding inner space. The human brain is arguably the most complex object in the known universe, and while no one seems to be expecting any major breakthrough in 2018 in our view of how it works, we can expect to reach next Christmas with a lot more information. Over the summer of 2017 the €1bn European Human Brain Project got a reboot to steer it away from what many saw as an over-ambitious plan to simulate a human brain on a computer and towards a more realistic goal of mapping out its structure down to the level of connections between the billions of individual neurons. This shift in emphasis was triggered by an independent review of the project after 800 neuroscientists threatened to boycott it in 2014 because of concerns about the way it was being managed. One vision now is to create a kind of Google Brain, comparable to Google Earth, in which the brain structures underpinning such cognitive functions as memory and emotion can be ‘zoomed’ from the large scale revealed by MRI scanning down to the level of individual neurons. Such information might guide efforts to simulate more specific ‘subroutines’ of the brain. But one of the big challenges is simply how to collect, record and organize the immense volume of data these studies will produce.

Making clean energy
Amidst the excitement and allure of brains, genes, planets and the cosmos, it’s easy for the humbler sciences, such as chemistry, to get overlooked. That should change in 2019, which UNESCO has just designated as the International Year of the Periodic Table, chemistry’s organizing scheme of elements. But there are good reasons to keep an eye on the chemical sciences this year too, not least because they may hold the key to some of our most pressing global challenges. Since nature has no reason to heed the ignorance of the current US president, we can expect the global warming trend to continue – and some climate researchers believe that the only way to limit future warming to within 2 °C (and thus to avoid some extremely alarming consequences) is to develop chemical technologies for capturing and storing the greenhouse gas carbon dioxide from the atmosphere. At the start of 2017 a group of researchers warned that lack of investment in research on such “carbon capture and storage” technologies was one of the biggest obstacles to achieving this target. By the end of this year we may have a clearer view of whether industry and governments will rise to the challenge. In the meantime, development of carbon-free energy-generating technologies needs boosting too. The invention last year at the Massachusetts Institute of Technology of a device that uses an ultra-absorbent black “carbon nanomaterial” to convert solar heat to light suggests one way to make solar power more efficient, capturing more of the energy in the sun’s rays than current solar cells can manage even in principle. We can hope for more such innovation, as well as efforts to turn the smart science into commercially viable technologies. Don’t expect any single big breakthrough in these areas, though; success is likely to come, if at all, from a portfolio of options for making and using energy in greener ways.

Wednesday, November 01, 2017

Science writing and the "human bit"

This article on Last Word On Nothing by Cassandra Willyard provoked a fascinating debate – at least if you’re a science writer or have an interest in that business. Some have criticized it as irredeemably philistine for a science writer – honestly, not to know that Hubble refers to a telescope! (Well, many things bear Hubble’s name, so I really don’t see that as so deplorable.) This is shallow criticism – what surely matters is how well a writer does the job she does, not what gaps might exist in her knowledge that never come to light unless she admits to them. “Know your limits” is the only corollary of that.

Indeed, it makes me think it would be fun to know what areas of science hold no charms for other science writers. No doubt everyone’s blind spots would horrify some others. I long struggled to work up any enthusiasm for human origins. What? How could I not be interested in where we came from? Well, it seemed to me we kind of know where we came from, and roughly when, give or take a million years. We evolved from more primitive hominids. The rest is just detail, right?

Oddly, it is only now that the detail has become so messy – what with Homo floresiensis and Homo naledi and so forth – that I’ve become engaged. Perhaps there’s nothing quite so appealing as blithe complacency undermined. I can’t say I yet care enough to fret about where each branch of the hominid tree should divide, but it’s fun to see all these revelations tumble out, not least because of the drama of some of the new discoveries.

My heartbeat doesn’t race for much of particle and fundamental physics either. I suspect this is for more partisan, and more dishonourable, reasons: particle physics has somehow managed to nab all the glamour and public attention, to the point that most people think this is what all of physics is, whereas my own former field of condensed matter physics, which has a larger community, never gets a look in. Meanwhile, particle physicists take the great ideas from CMP (like symmetry breaking) and then claim they invented them. You can see how bitter and twisted I am. So I was rather indifferent about the Higgs – an indifference I know some condensed matter physicists shared.

Some other fields I want to stick up for merely because they’re the underdogs – cell biology and biophysics, say, in the face of genetic hegemony.

So if a science writer admits to being unmoved by space science, it really doesn’t seem an occasion to get all affronted. I edited an awful lot of astronomy papers at Nature that made my eyes glaze over, often because they seemed to be (like some of those fossil and protein structure papers) a catalogue of arbitrary specifics. (Though don’t worry, I do love a good protein structure.)

Where I’m more unsure about Cassandra’s article is in the discussion of “the human element”. I suppose this is because it sends a chill down my spine. If the only way for science communication to connect with a broad public is by telling human stories, then I’m done for. I’m just not that interested in doing that (as you might have noticed).

That’s not to say that one shouldn’t make the most of a human element when it’s there. If there’s a way of telling a science story through personalities, it’s generally worth taking. “I might not be interested in gravitational waves, but I am interested in science as a process”, Cassandra writes. “Humanize the process, and you’ll hook me every time.”

Fair enough. But what if there is no human element to speak of? Every science writer will tell you that for every researcher who dreamed from childhood of cracking the problem they have finally conquered, there are ten or perhaps a hundred who came to a problem just because it was a natural extension of what they worked on for their PhD – or because it was a hot topic at the time. And for every colourful maverick or quirky underdog, there are lots of scientists who are perfectly lovely people but really have nothing that distinguishes them from the crowd. It’s always good to ask what drew a researcher to the topic, but often the answers aren’t terribly edifying. And there are only so many times you’re going to be able to tell the story of gravitational waves as a tale of the grit and persistence of a few visionaries in the face of scepticism about whether the method would work.

I quickly grew to hate that brand of science writing popular in the early 1990s in which “Jed Raven, a sandy-haired Texan with a charm that would melt glaciers, strode into the lab and boomed ‘Let’s go to work, people!’” Chances are, in retrospect, that Jed Raven was probably harassing his female postdocs. But honestly, I couldn’t give a toss about how Jed grew up collecting beetles or learning to herd steers or whatever they call them in Texas.

The idea that a science story can be told only if you find the human angle is deadly, but probably quite widespread. Unless you happen to strike lucky, it is likely to make whole areas of science hard to write about at all: health, field anthropology and astronomy will probably do well, inorganic chemistry not so much.

But Cassandra is right to imply that there is sometimes a presumption in science writing (including my own) that this stuff is inherently so interesting that you don’t need a narrative attached – you don’t even need to relate it beyond its own terms. It’s easy to be far too complacent about that. As Tim Radford wisely once said, above every hack’s desk should hang the sign: “No one has to read this crap.”

So what’s the alternative to “the human angle”? I’ll paraphrase Cassandra for the way I see it:
“I might not be interested in X, but I am interested in elegant, beautiful writing. Write well, and you’ll hook me every time.”